| content | meta |
|---|---|
Burlington, MA Trigonometry Tutor
Find a Burlington, MA Trigonometry Tutor
I am a motivated tutor who strives to make learning easy and fun for everyone. My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students
understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well.
16 Subjects: including trigonometry, French, elementary math, algebra 1
...I also coached. I have worked with everyone from 6-month-old infants to senior citizens, providing therapeutic lessons to those with disabilities. I have WSI and lifeguard certifications.
19 Subjects: including trigonometry, Spanish, chemistry, calculus
I am a Dartmouth College grad with 13 years of professional tutoring experience. I have particular expertise with standardized tests such as the SAT, ACT, SSAT and ISEE. I also tutor math and
writing for middle school and high school students.
26 Subjects: including trigonometry, English, linear algebra, algebra 1
...I have been tutoring undergraduate and graduate students in research labs on MATLAB programming. In addition, I took Algebra, Calculus, Geometry, Probability and Trigonometry courses in high
school, and this knowledge helped me to achieve my goals in research projects involving 4-dimensional ma...
16 Subjects: including trigonometry, calculus, geometry, algebra 1
...I embrace the use of technology to aid in instruction. I’ve used The Geometer’s Sketchpad for discovery of the area of trapezoids, the circumference of circles, arc lengths and arc angles, and relations of
parallel lines. I used Microsoft Excel for discovery of the general form of the area of pyramids and cones ...
13 Subjects: including trigonometry, physics, SAT math, algebra 1
Related Burlington, MA Tutors
Burlington, MA Accounting Tutors
Burlington, MA ACT Tutors
Burlington, MA Algebra Tutors
Burlington, MA Algebra 2 Tutors
Burlington, MA Calculus Tutors
Burlington, MA Geometry Tutors
Burlington, MA Math Tutors
Burlington, MA Prealgebra Tutors
Burlington, MA Precalculus Tutors
Burlington, MA SAT Tutors
Burlington, MA SAT Math Tutors
Burlington, MA Science Tutors
Burlington, MA Statistics Tutors
Burlington, MA Trigonometry Tutors
Nearby Cities with Trigonometry Tutors
Bedford, MA trigonometry Tutors
Belmont, MA trigonometry Tutors
Billerica trigonometry Tutors
Chelmsford trigonometry Tutors
Lexington, MA trigonometry Tutors
Melrose, MA trigonometry Tutors
Pinehurst, MA trigonometry Tutors
Reading, MA trigonometry Tutors
Saugus trigonometry Tutors
Stoneham, MA trigonometry Tutors
Tewksbury trigonometry Tutors
Wakefield, MA trigonometry Tutors
Wilmington, MA trigonometry Tutors
Winchester, MA trigonometry Tutors
Woburn trigonometry Tutors
|
{"url":"http://www.purplemath.com/burlington_ma_trigonometry_tutors.php","timestamp":"2014-04-21T11:05:12Z","content_type":null,"content_length":"24252","record_id":"<urn:uuid:9748661c-0248-4733-9d4f-5316ed814954>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
|
List Of Integrals Of Rational Functions
A selection of articles related to list of integrals of rational functions.
Original articles from our library related to the List Of Integrals Of Rational Functions. See Table of Contents for further available material (downloadable resources) on List Of Integrals Of
Rational Functions.
List Of Integrals Of Rational Functions is described in multiple online sources; in addition to our editors' articles, see the section below for printable documents, List Of Integrals Of Rational
Functions books, and related discussion.
Suggested Pdf Resources
∫ f(g(x)) g′(x) dx = ∫ f(u) du, where u = g(x).
a^x as a function of x. Given a positive real number a, the function a^x is defined for all rational numbers x. ..
Algebraic Factoring and Rational Function Integration by least-degree extension field in which the integral ..
We know we can solve this problem: Given any rational function f(x) = p(x)/q(x), where p and q are univariate polynomials over the rationals, compute its indefinite integral, using if .. Call the
Macsyma polynomial zero-finder; get a list of roots.
We were looking at how to integrate a rational function. After division produces the polynomial part of the integral, we are left with a rational function where the degree of the numerator is less than that of the denominator. That is the list
of all the different coefficients that we could have.
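The procedure these excerpts describe — split off the polynomial part by division, then integrate the proper remainder via partial fractions — can be sketched with SymPy. (This Python example is an editorial illustration, not material from any of the listed resources.)

```python
import sympy as sp

x = sp.symbols('x')
f = (x**3 + 2) / (x**2 - 1)            # a rational function p(x)/q(x)

# Polynomial division splits off the polynomial part...
quotient, remainder = sp.div(sp.numer(f), sp.denom(f), x)   # (x, x + 2)

# ...and partial fractions handle the proper remainder:
# equals x + 3/(2*(x - 1)) - 1/(2*(x + 1)).
decomposition = sp.apart(f, x)

# Each partial-fraction term integrates to a power or a logarithm.
F = sp.integrate(f, x)

# Differentiating the antiderivative recovers the integrand.
assert sp.simplify(sp.diff(F, x) - f) == 0
```

Running `sp.apart` before `sp.integrate` is not required — SymPy does the decomposition internally — but it makes the logarithmic terms in the answer easy to predict.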
Suggested Web Resources
The following is a list of integrals (antiderivative functions) of rational functions. For a more complete list of integrals, see lists of integrals.
more integrals: List of integrals of rational functions.
May 20, 1999. The following problems involve the integration of rational functions. These formulas lead immediately to the following indefinite integrals. Click HERE to return to the original
list of various types of calculus problems.
Indefinite integrals and definite integrals: Tables. Indefinite integrals with rational and irrational expressions - from S.O.
Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We
appreciate your suggestions and comments on further improvements of the site.
|
{"url":"http://www.realmagick.com/list-of-integrals-of-rational-functions/","timestamp":"2014-04-16T07:48:05Z","content_type":null,"content_length":"28511","record_id":"<urn:uuid:fae90694-0219-4b36-ae65-4e2ffe1bfd1c>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Stata Press books
Multilevel and Longitudinal Modeling Using Stata, Second Edition
Sophia Rabe-Hesketh and Anders Skrondal
Copyright 2008
ISBN-13: 978-1-59718-040-5
Pages: 562; paperback
Price $59.00
New edition now available
See a larger photo of the front cover
See the back cover
Table of contents
Preface (pdf)
Author index (pdf)
Subject index (pdf)
Download the datasets used in this book
Obtain answers to the exercises
Read reviews of the first edition
Review of the second edition from the Stata Journal
Read reviews of the second edition
Comment from the Stata technical group
Multilevel and Longitudinal Modeling Using Stata, Second Edition, by Sophia Rabe-Hesketh and Anders Skrondal, looks specifically at Stata’s treatment of generalized linear mixed models, also known as
multilevel or hierarchical models. These models are “mixed” because they allow fixed and random effects, and they are “generalized” because they are appropriate for continuous Gaussian responses as
well as binary, count, and other types of limited dependent variables.
The second edition has much to offer for readers of the first edition, reading more like a sequel than an update. The text has almost doubled in length from the original, coming in at 562 pages. This
second edition incorporates three new chapters: a chapter on standard linear regression, a chapter on discrete-time survival analysis, and a chapter on longitudinal and panel data containing an
expanded discussion of random-coefficient and growth-curve models. The authors have updated this edition for Stata 10, expanding on discussions in the original edition and adding new in-text examples
and end-of-chapter exercises. In particular, the authors have thoroughly covered the new Stata commands xtmelogit and xtmepoisson.
The first chapter provides a review of the methods of linear regression. Rabe-Hesketh and Skrondal then begin with the comparatively simple random-intercept linear model without covariates,
developing the mixed model from principles and thereby familiarizing the reader with terminology, summarizing and relating the widely used estimating strategies, and providing historical perspective.
Once the authors have established the mixed-model foundation, they smoothly generalize to random-intercept models with covariates and then to a discussion of the various estimators (between, within,
and random-effects). The authors then discuss models with random coefficients, followed by models for growth curves. The middle chapters of the book apply the concepts for Gaussian models to models
for binary responses (e.g., logit and probit), ordinal responses (e.g., ordered logit and ordered probit), and count responses (e.g., Poisson).
The text continues with a discussion of how to use multilevel methods in discrete-time survival analysis, for example, using complementary log-log regression to fit the proportional hazards model.
The authors then consider models with multiple levels of random variation and models with crossed (nonnested) random effects. In its examples and end-of-chapter exercises, the book contains real
datasets and data from the medical, social, and behavioral sciences literature.
The book has several applications of generalized mixed models performed in Stata. Rabe-Hesketh and Skrondal developed gllamm, a Stata program that can fit many latent-variable models, of which the
generalized linear mixed model is a special case. As of version 10, Stata contains the xtmixed, xtmelogit, and xtmepoisson commands for fitting multilevel models, in addition to other xt commands for
fitting standard random-intercept models. The type of models fit by these commands sometimes overlap; when this happens, the authors highlight the differences in syntax, data organization, and output
for the two (or more) commands that can be used to fit the same model. The authors also point out the relative strengths and weaknesses of each command when used to fit the same model, based on
considerations such as computational speed, accuracy, available predictions, and available postestimation statistics.
In reference to the first edition, a reviewer for American Statistician commends Rabe-Hesketh and Skrondal for promoting the appropriate use of multilevel and longitudinal modeling. The reviewer
writes in the August 2006 issue, “All too often computer manuals leave off ... important aspects of an analysis, but the authors have been careful to provide a well-rounded and complete approach to
model fitting and interpretation.”
In summary, this book is the most complete, up-to-date depiction of Stata's capacity for fitting generalized linear mixed models. The authors provide an ideal introduction for Stata users wishing to
learn about this powerful data-analysis tool.
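The variance-components model that opens Part II — y_ij = β + ζ_j + ε_ij, with an intraclass correlation measuring how much of the variance lies between clusters — can be illustrated outside Stata. The following Python sketch (an editorial illustration, not material from the book) simulates two-level data and recovers the variance components with one-way ANOVA (method-of-moments) estimators:

```python
import numpy as np

# Simulate two-level data y_ij = mu + u_j + e_ij with J clusters of size n.
rng = np.random.default_rng(42)
J, n = 200, 5
mu, sigma_u, sigma_e = 10.0, 2.0, 1.0
u = rng.normal(0.0, sigma_u, size=J)                   # cluster random intercepts
y = mu + u[:, None] + rng.normal(0.0, sigma_e, size=(J, n))

# One-way ANOVA mean squares...
cluster_means = y.mean(axis=1)
ms_within = ((y - cluster_means[:, None]) ** 2).sum() / (J * (n - 1))
ms_between = n * ((cluster_means - y.mean()) ** 2).sum() / (J - 1)

# ...give method-of-moments estimates of the variance components.
var_e_hat = ms_within                      # estimates sigma_e^2 = 1
var_u_hat = (ms_between - ms_within) / n   # estimates sigma_u^2 = 4
icc = var_u_hat / (var_u_hat + var_e_hat)  # true intraclass correlation = 0.8
print(var_u_hat, var_e_hat, icc)
```

The book's maximum likelihood and REML estimators (via xtreg, xtmixed, and gllamm) generalize this idea to unbalanced data, covariates, and non-Gaussian responses.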
Table of contents
List of Tables
List of Figures
I Preliminaries
1 Review of linear regression
1.1 Introduction
1.2 Is there gender discrimination in faculty salaries?
1.3 Independent-samples t test
1.4 One-way analysis of variance
1.5 Simple linear regression
1.6 Dummy variables
1.7 Multiple linear regression
1.8 Interactions
1.9 Dummies for more than two groups
1.10 Other types of interactions
1.10.1 Interaction between dummy variables
1.10.2 Interaction between continuous covariates
1.11 Nonlinear effects
1.12 Residual diagnostics
1.13 Summary and further reading
1.14 Exercises
II Two-level linear models
2 Variance-components models
2.1 Introduction
2.2 How reliable are peak-expiratory-flow measurements?
2.3 The variance-components model
2.3.1 Model specification and path diagram
2.3.2 Error components, variance components, and reliability
2.3.3 Intraclass correlation
2.4 Fixed versus random effects
2.5 Estimation using Stata
2.5.1 Data preparation
2.5.2 Using xtreg
2.5.3 Using xtmixed
2.5.4 Using gllamm
2.6 Hypothesis tests and confidence intervals
2.6.1 Hypothesis test and confidence interval for the population mean
2.6.2 Hypothesis test and confidence interval for the between-cluster variance
2.7 More on statistical inference
2.7.1 Different estimation models
2.7.2 Inference for β
Estimate and standard error: Balanced case
Estimate: Unbalanced case
2.8 Crossed versus nested effects
2.9 Assigning values to the random intercepts
2.9.1 Maximum likelihood estimation
Implementation via OLS regression
Implementation via the mean total residual
2.9.2 Empirical Bayes prediction
2.9.3 Empirical Bayes variances
2.10 Summary and further reading
2.11 Exercises
3 Random-intercept models with covariates
3.1 Introduction
3.2 Does smoking during pregnancy affect birthweight?
3.3 The linear random-intercept model with covariates
3.3.1 Model specification
3.3.2 Residual variance and intraclass correlation
3.4 Estimation using Stata
3.4.1 Using xtreg
3.4.2 Using xtmixed
3.4.3 Using gllamm
3.5 Coefficients of determination or variance explained
3.6 Hypothesis tests and confidence intervals
3.6.1 Hypothesis tests for regression coefficients
Hypothesis tests for individual regression coefficients
Joint hypothesis tests for several regression coefficients
3.6.2 Predicted means and confidence intervals
3.6.3 Hypothesis test for between-cluster variance
3.7 Between and within effects
3.7.1 Between-mother effects
3.7.2 Within-mother effects
3.7.3 Relations among estimators
3.7.4 Endogeneity and different within- and between-mother effects
3.7.5 Hausman endogeneity test
3.8 Fixed versus random effects revisited
3.9 Residual diagnostics
3.10 More on statistical inference for regression coefficients
3.10.1 Consequences of using ordinary regression for clustered data
3.10.2 Power and sample-size determination
3.11 Summary and further reading
3.12 Exercises
4 Random-coefficient models
4.1 Introduction
4.2 How effective are different schools?
4.3 Separate linear regressions for each school
4.4 Specification and interpretation of a random-coefficient model
4.4.1 Specification of random-coefficient model
4.4.2 Interpretation of the random-effects variances and covariances
4.5 Estimation using Stata
4.5.1 Using xtmixed
Random-intercept model
Random-coefficient model
4.5.2 Using gllamm
Random-intercept model
Random-coefficient model
4.6 Testing the slope variance
4.7 Interpretation of estimates
4.8 Assigning values to the random intercepts and slopes
4.8.1 Maximum likelihood estimation
4.8.2 Empirical Bayes prediction
4.8.3 Model visualization
4.8.4 Residual diagnostics
4.8.5 Inferences for individual schools
4.9 Two-stage model formulation
4.10 Some warnings about random-coefficient models
4.11 Summary and further reading
4.12 Exercises
5 Longitudinal, panel, and growth-curve models
5.1 Introduction
5.2 How and why do wages change over time?
5.3 Data structure
5.3.1 Missing data
5.3.2 Time-varying and time-constant variables
5.4 Time scales in longitudinal data
5.5 Random- and fixed-effects approaches
5.5.1 Correlated residuals
5.5.2 Fixed-intercept model
Using xtreg
Using anova
5.5.3 Random-intercept model
5.5.4 Random-coefficient model
5.5.5 Marginal mean and covariance structure induced by random effects
Marginal mean and covariance structure for random-intercept models
Marginal mean and covariance structure for random-coefficient models
5.6 Marginal modeling
5.6.1 Covariance structures
Compound symmetric or exchangeable structure
Random-coefficient structure
Autoregressive residual structure
Unstructured covariance matrix
5.6.2 Marginal modeling using Stata
5.7 Autoregressive- or lagged-response models
5.8 Hybrid approaches
5.8.1 Autoregressive response and random effects
5.8.2 Autoregressive responses and autoregressive residuals
5.8.3 Autoregressive residuals and random or fixed effects
5.9 Missing data
5.9.1 Maximum likelihood estimation under MAR: A simulation
5.10 How do children grow?
5.10.1 Observed growth trajectories
5.11 Growth-curve modeling
5.11.1 Random-intercept model
5.11.2 Random-coefficient model
5.11.3 Two-stage model formulation
5.12 Prediction of trajectories for individual children
5.13 Prediction of mean growth trajectory and 95% band
5.14 Complex level-1 variation or heteroskedasticity
5.15 Summary and further reading
5.16 Exercises
III Two-level generalized linear models
6 Dichotomous or binary responses
6.1 Introduction
6.2 Single-level models for dichotomous responses
6.2.1 Generalized linear model formulation
6.2.2 Latent-response formulation
Logistic regression
Probit regression
6.3 Which treatment is best for toenail infection?
6.4 Longitudinal data structure
6.5 Population-averaged or marginal probabilities
6.6 Random-intercept logistic regression
6.7 Estimation of logistic random-intercept models
6.7.1 Using xtlogit
6.7.2 Using xtmelogit
6.7.3 Using gllamm
6.8 Inference for logistic random-intercept models
6.9 Subject-specific vs. population-averaged relationships
6.10 Measures of dependence and heterogeneity
6.10.1 Conditional or residual intraclass correlation of the latent responses
6.10.2 Median odds ratio
6.11 Maximum likelihood estimation
6.11.1 Adaptive quadrature
6.11.2 Some speed considerations
Advice for speeding up gllamm
6.12 Assigning values to random effects
6.12.1 Maximum likelihood estimation
6.12.2 Empirical Bayes prediction
6.12.3 Empirical Bayes modal prediction
6.13 Different kinds of predicted probabilities
6.13.1 Predicted population-averaged probabilities
6.13.2 Predicted subject-specific probabilities
Predictions for hypothetical subjects
Predictions for the subjects in the sample
6.14 Other approaches to clustered dichotomous data
6.14.1 Conditional logistic regression
6.14.2 Generalized estimating equations (GEE)
6.15 Summary and further reading
6.16 Exercises
7 Ordinal responses
7.1 Introduction
7.2 Single-level cumulative models for ordinal responses
7.2.1 Generalized linear model formulation
7.2.2 Latent-response formulation
7.2.3 Proportional odds
7.2.4 Identification
7.3 Are antipsychotic drugs effective for patients with schizophrenia?
7.4 Longitudinal data structure and graphs
7.4.1 Longitudinal data structure
7.4.2 Plotting cumulative proportions
7.4.3 Plotting estimated cumulative logits and transforming the time scale
7.5 A single-level proportional odds model
7.5.1 Model specification
7.5.2 Estimation using Stata
7.6 A random-intercept proportional odds model
7.6.1 Model specification
7.6.2 Estimation using Stata
7.7 A random-intercept proportional odds model
7.7.1 Model specification
7.7.2 Estimation using gllamm
7.8 Different kinds of predicted probabilities
7.8.1 Predicted population-averaged probabilities
7.8.2 Predicted patient-specific probabilities
7.9 Do experts differ in the grading of student essays?
7.10 A random-intercept probit model with grader bias
7.10.1 Model specification
7.10.2 Estimation
7.11 Including grader-specific measurement error variances
7.11.1 Model specification
7.11.2 Estimation
7.12 Including grader-specific thresholds
7.12.1 Model specification
7.12.2 Estimation
7.13 Summary and further reading
7.14 Exercises
8 Discrete-time survival
8.1 Introduction
8.1.1 Censoring and truncation
8.1.2 Time-varying covariates and different time-scales
8.1.3 Discrete- versus continuous-time survival data
8.2 Single-level models for discrete-time survival data
8.2.1 Discrete-time hazard and discrete-time survival
8.2.2 Data expansion for discrete-time survival analysis
8.2.3 Estimation via regression models for dichotomous responses
8.2.4 Including covariates
Time-constant covariates
Time-varying covariates
8.2.5 Handling left-truncated data
8.3 How does birth history affect child mortality?
8.4 Data expansion
8.5 Proportional hazards and interval censoring
8.6 Complementary log-log models
8.7 A random-intercept complementary log-log model
8.7.1 Model specification
8.7.2 Estimation using Stata
8.8 Marginal and conditional survival probabilities
8.9 Summary and further reading
8.10 Exercises
9 Counts
9.1 Introduction
9.2 What are counts?
9.2.1 Counts versus proportions
9.2.2 Counts as aggregated event-history data
9.3 Single-level Poisson models for counts
9.4 Did the German health-care reform reduce the number of doctor visits?
9.5 Longitudinal data structure
9.6 Single-level Poisson regression
9.6.1 Model specification
9.6.2 Estimation using Stata
9.7 Random-intercept Poisson regression
9.7.1 Model specification
9.7.2 Estimation using Stata
Using xtpoisson
Using xtmepoisson
Using gllamm
9.8 Random-coefficient Poisson regression
9.8.1 Model specification
9.8.2 Estimation using Stata
Using xtmepoisson
Using gllamm
9.8.3 Interpretation of estimates
9.9 Overdispersion in single-level models
9.9.1 Normally distributed random intercept
9.9.2 Negative binomial models
Mean dispersion or NB2
Constant dispersion or NB1
9.9.3 Quasilikelihood or robust standard errors
9.10 Level-1 overdispersion in two-level models
9.11 Other approaches to two-level count data
9.11.1 Conditional Poisson regression
9.11.2 Conditional negative binomial regression
9.11.3 Generalized estimating equations
9.11.4 Marginal and conditional estimates when responses are MAR
9.12 How does birth history affect child mortality?
9.12.1 Simple piecewise exponential survival model
9.12.2 Piecewise exponential survival model with covariates and frailty
9.13 Which Scottish counties have a high risk of lip cancer?
9.14 Standardized mortality ratios
9.15 Random-intercept Poisson regression
9.15.1 Model specification
9.15.2 Estimation using gllamm
9.15.3 Prediction of standardized mortality ratios
9.16 Nonparametric maximum likelihood estimation
9.16.1 Specification
9.16.2 Estimation using gllamm
9.16.3 Prediction
9.17 Summary and further reading
9.18 Exercises
IV Models with nested and crossed random effects
10 Higher-level models with nested random effects
10.1 Introduction
10.2 Do peak-expiratory-flow measurements vary between methods?
10.3 Two-level variance-components models
10.3.1 Model specification
10.3.2 Estimation using xtmixed
10.4 Three-level variance-components models
10.4.1 Model specification
10.4.2 Different types of intraclass correlation
10.4.3 Three-stage formulation
10.4.4 Estimation using xtmixed
10.4.5 Empirical Bayes prediction using xtmixed
10.5 Did the Guatemalan immunization campaign work?
10.6 A three-level logistic random-intercept model
10.6.1 Model specification
10.6.2 Different types of intraclass correlations for the latent responses
10.6.3 Different kinds of median odds ratios
10.6.4 Three-stage formulation
10.7 Estimation of three-level logistic random-intercept models using Stata
10.7.1 Using gllamm
10.7.2 Using xtmelogit
10.8 A three-level logistic random-coefficient model
10.9 Estimation of three-level logistic random-coefficient models using Stata
10.9.1 Using gllamm
10.9.2 Using xtmelogit
10.10 Prediction of random effects
10.10.1 Empirical Bayes prediction
10.10.2 Empirical Bayes modal prediction
10.11 Different kinds of predicted probabilities
10.11.1 Predicted marginal probabilities
10.11.2 Predicted median or conditional probabilities
10.11.3 Predicted posterior mean probabilities
10.12 Summary and further reading
10.13 Exercises
11 Crossed random effects
11.1 Introduction
11.2 How does investment depend on expected profit and capital stock?
11.3 A two-way error-components model
11.3.1 Model specification
11.3.2 Residual intraclass correlations
11.3.3 Estimation
11.3.4 Prediction
11.4 How much do primary and secondary schools affect attainment at age 16?
11.5 An additive crossed random-effects model
11.5.1 Specification
11.5.2 Estimation using xtmixed
11.6 Including a random interaction
11.6.1 Model specification
11.6.2 Intraclass correlations
11.6.3 Estimation using xtmixed
11.6.4 Some diagnostics
11.7 A trick requiring fewer random effects
11.8 Do salamanders from different populations mate successfully?
11.9 Crossed random-effects logistic regression
11.10 Summary and further reading
11.11 Exercises
A Syntax for gllamm, eq, and gllapred: The bare essentials
B Syntax for gllamm
C Syntax for gllapred
D Syntax for gllasim
|
{"url":"http://www.stata-press.com/books/mlmus2.html","timestamp":"2014-04-18T20:43:45Z","content_type":null,"content_length":"30536","record_id":"<urn:uuid:089aa892-7bed-45d5-9ae2-1bc76a664019>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Posted by Jen on Friday, September 14, 2007 at 11:37pm.
I need to have some problems checked and I need help with one of them.
Determine which two functions are inverses of each other.
1. f(x)=5x, g(x)=x/5, h(x)=5/x
-I got that f(x) and g(x) are inverses
2. f(x)=x-2/2, g(x)=2x-2, h(x)=x+2/2
- I got that none are inverses.
5. f(x)=x^4 -3, g(x)=4th square root -3, h(x)=x^4 + 3
-I am not too sure how to find the inverse on this one.
• Algebra - drwls, Saturday, September 15, 2007 at 1:14am
1. Correct
2. The inverse of f(x) = (x-2)/2 is f^-1(x) = 2x + 2, which is not in the list. But the inverse of g(x) = 2x - 2 is (x+2)/2, which is h(x).
g and h are inverse functions
5. If f(x) = y,
x = (y+3)^(1/4)
f^-1(x) = (x+3)^(1/4)
That one has no inverse listed.
If h(x)= x^4 + 3
y = x^4 +3
(y-3)^(1/4) = x
h^-1(x) = (x-3)^(1/4)
This may be the same as g(x), but I think you typed g(x) incorrectly, leaving out an x
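The algebra in this thread can be double-checked symbolically. This SymPy snippet (added for illustration; it is not part of the original thread, and it reads the ambiguously typed functions as (x-2)/2, 2x-2, and (x+2)/2) tests whether two functions compose to the identity both ways:

```python
import sympy as sp

x = sp.symbols('x')

def are_inverses(f, g):
    """True iff f(g(x)) = x and g(f(x)) = x hold as symbolic identities."""
    return (sp.simplify(f.subs(x, g) - x) == 0 and
            sp.simplify(g.subs(x, f) - x) == 0)

# Problem 1: f(x) = 5x and g(x) = x/5 are inverses; h(x) = 5/x is not.
assert are_inverses(5*x, x/5)
assert not are_inverses(5*x, 5/x)

# Problem 2: f(x) = (x-2)/2 and g(x) = 2x - 2 are NOT inverses,
# but g(x) = 2x - 2 and h(x) = (x+2)/2 are.
assert not are_inverses((x - 2)/2, 2*x - 2)
assert are_inverses(2*x - 2, (x + 2)/2)

# Problem 5: on x >= 0, h(x) = x**4 + 3 has inverse (x - 3)**(1/4).
h_inv = (x - 3)**sp.Rational(1, 4)
assert sp.simplify((h_inv**4 + 3) - x) == 0
```

Checking the composition in both directions guards against one-sided coincidences.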
Related Questions
Algebra-Functions - Could you please check my answers and help me with two ...
Algebra problems - I have some problems that I would like checked please. Find ...
One to one functions - I still need help with finding the inverese of one to ...
composite functions - f(x)=2x-4, g(x)=x squared + 5x What is F o G and Domain ...
College Algebra--Still Confused - I have a few problems I need help with and ...
College Algebra - I have a few problems I need help with and also do have ...
Math, Still Need Help! - Label each statement TRUE or FALSE. a. The sum of two ...
Algebra - I have some problems that I need help with please. Thanxs! Find the ...
Math - I need help to set the equation up to solve a system to ensure that I get...
Algebra 2 - Are the functions f and g inverse functions if f(x)=5/3x+1 and g(x)=...
|
{"url":"http://www.jiskha.com/display.cgi?id=1189827438","timestamp":"2014-04-20T16:56:31Z","content_type":null,"content_length":"8889","record_id":"<urn:uuid:0d097975-5974-452c-912a-3f7bbcaff7ea>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
|
N Miami Beach, FL Science Tutor
Find a N Miami Beach, FL Science Tutor
...I also received my Master's in Biomedical Science and took additional Biochemistry courses. I received my undergraduate degree in Biochemistry and have taken upper-level Biology, Chemistry, and
Physical Science courses in order to complete my degree. In addition, I have a Master's of Science in B...
16 Subjects: including chemistry, microbiology, biology, psychology
...Currently I am certified to teach Math K-12. I have taught ninth grade and tenth grade math. I am familiar with the curriculum of Geometry and Algebra.
18 Subjects: including chemistry, biology, biochemistry, calculus
...I would like to conduct tutoring sessions with open communications. I am here to help you and to aid you with any questions, no matter how small. I am here to make sure that you are confident
with your subject so you can not only pass but excel.
9 Subjects: including chemistry, algebra 1, algebra 2, biology
...The 3 basic fundamentals of calculus, known as the "big three" are: limits, derivatives, and integration (the anti-derivative). Most of what is known as Calculus comes from the ideas of the big
three. I will teach you all the basic concepts, formulas, and techniques so you can solve more compl...
41 Subjects: including biochemistry, ESL/ESOL, genetics, GRE
...My career is diverse and rather varied: farmer, manager, learner, supervisor, accountant, leader, notary public, business owner, educator, teacher, benefits coordinator, insurance agent. As a
history-making student leader, I served and represented thousands of students from over 100 countries.
36 Subjects: including genetics, GRE, GMAT, algebra 1
|
{"url":"http://www.purplemath.com/N_Miami_Beach_FL_Science_tutors.php","timestamp":"2014-04-20T09:08:55Z","content_type":null,"content_length":"24110","record_id":"<urn:uuid:e1d6cf2c-51fb-43e8-8c4d-a83cb26f251e>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
|
regular 2-category
2-Category theory
Transfors between 2-categories
Morphisms in 2-categories
Structures in 2-categories
Limits in 2-categories
Structures on 2-categories
The notion of regular 2-category is the analog in 2-category theory of the notion of regular category in category theory.
A 2-category $K$ is called regular if
1. It is finitely complete (has all finite 2-limits);
2. esos are stable under 2-pullback;
3. Every 2-congruence which is a kernel can be completed to an exact 2-fork.
• Cat is regular.
• A 1-category is regular as a 2-category iff it is regular as a 1-category, since the esos in a 1-category are precisely the strong epis.
• Every finitely complete (0,1)-category (that is, every meet-semilattice) is regular.
In StreetCBS the last condition is replaced by
• Every morphism $f$ factors as $f\cong m e$ where $m$ is ff and $e$ is eso.
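As a concrete illustration (an editorial sketch, using the standard fact that in Cat the esos are the essentially surjective functors and the ffs are the fully faithful ones), the eso–ff factorization of a functor is its full-image factorization:

```latex
% Full-image factorization of a functor F : A --> C in Cat.
% The full image Im(F) has the objects of A, with hom-sets taken from C:
%   Im(F)(a, a') := C(F a, F a').
% F then factors as
\[
  F \;\cong\; \big( A \xrightarrow{\;e\;} \mathrm{Im}(F) \xrightarrow{\;m\;} C \big),
\]
% where e is the identity on objects (hence essentially surjective, i.e. eso)
% and m is fully faithful (ff) by construction.
```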
We now show that this follows from our definition. First we need:
(Street’s Lemma) Let $K$ be a finitely complete 2-category where esos are stable under pullback, let $e:A\to B$ be eso, and let $n:B\to C$ be a map.
1. If the induced morphism $ker(e) \to ker(n e)$ is ff, then $n$ is faithful.
2. If $ker(e) \to ker(n e)$ is an equivalence, then $n$ is ff.
First note that $ker(e)\to ker(n e)$ being ff means that if $a_1,a_2: Y \rightrightarrows A$ and $\delta_1,\delta_2 : e a_1 \;\rightrightarrows\; e a_2$ are such that $n \delta_1 = n \delta_2$, then
$\delta_1=\delta_2$. Likewise, $ker(e)\to ker(n e)$ being an equivalence means that given any $\alpha: n e a_1 \to n e a_2$, there exists a unique $\delta: e a_1 \to e a_2$ such that $n \delta = \alpha$.
We first show that $n$ is faithful under the first hypothesis. Suppose we have $b_1,b_2:X \rightrightarrows B$ and $\beta_1,\beta_2:b_1\to b_2$ with $n \beta_1 = n \beta_2$. Take the pullback
$\array{&Y & \overset{r}{\to} & X \\ (a_1,a_2) & \downarrow && \downarrow & (b_1,b_2)\\ & A\times A & \overset{e\times e}{\to} & B\times B}$
Then we have two 2-cells
$\beta_1 r, \beta_2 r: b_1 r \;\rightrightarrows\; b_2 r$
such that the composites
$n e a_1 \cong n b_1 r \overset{n \beta_1 r = n \beta_2 r}{\to} n b_2 r \cong n e a_2$
are equal. By the hypothesis, $n \beta_1 r = n \beta_2 r$ implies $\beta_1 r = \beta_2 r$. But $r$ is eso, since it is a pullback of the eso $e\times e$, so this implies $\beta_1=\beta_2$. Thus, $n$
is faithful.
Now suppose the (stronger) second hypothesis, and form the pair of pullbacks:
$\array{(n e / n e) & \overset{g}{\to} & n / n & \to & C^{\mathbf{2}}\\ \downarrow && \downarrow && \downarrow \\ A\times A & \overset{e\times e}{\to} & B\times B & \overset{n\times n}{\to} & C\times C}$
Then $g$, being a pullback of $e\times e$, is eso. We also have a commutative square
$\array{(e/e) & \to & (n e / n e)\\ \downarrow && \downarrow g \\ B^{\mathbf{2}} & \to & (n/n).}$
By assumption, $(e/e) \to (n e / n e)$ is an equivalence. Since we have shown that $n$ is faithful, the bottom map $B^{\mathbf{2}} \to (n/n)$ is ff, so since the eso $g$ factors through it, it must
be an equivalence as well. But this says precisely that $n$ is ff.
A 2-category is regular if and only if
1. it has finite limits,
2. esos are stable under pullback,
3. every morphism $f$ factors as $f\cong m e$ where $m$ is ff and $e$ is eso, and
4. every eso is the quotient of its kernel.
First suppose $K$ is regular; we must show the last two conditions. Let $f:A\to B$ be any morphism. By assumption, the kernel $ker(f)$ can be completed to an exact 2-fork $ker(f) \rightrightarrows A
\overset{e}{\to} C$. Since $e$ is the quotient of the 2-congruence $ker(f)$, it is eso, and since $f$ comes with an action by $ker(f)$, we have an induced map $m:C\to B$ with $f\cong m e$. But since
the 2-fork is exact, we also have $ker(f)\simeq ker(e)$, so by Street’s Lemma, $m$ is ff.
Now suppose that in the previous paragraph $f$ were already eso. Then since it factors through the ff $m$, $m$ must be an equivalence; thus $f$ is equivalent to $e$ and hence is a quotient of its kernel.
Now suppose $K$ satisfies the conditions in the lemma. Let $f:A\to B$ be any morphism; we must show that $ker(f)$ can be completed to an exact 2-fork. Factor $f = m e$ where $m$ is ff and $e$ is eso.
Since $m$ is ff, we have $ker(f)\simeq ker(e)$. But every eso is the quotient of its kernel, so the fork $ker(f) \rightrightarrows A \overset{e}{\to} C$ is exact.
In StreetCBS it is claimed that the final condition in Theorem 1 follows from the other three, but there is a flaw in the proof.
In a regular 2-category $K$, we call a ff $m:A\to X$ with codomain $X$ a subobject of $X$. We write $Sub(X)$ for the preorder of subobjects of $X$, as a full sub-2-category of the slice 2-category $K
/X$. Since $K$ is finitely complete and pullbacks preserve ffs, we have pullback functors $f^*:Sub(Y)\to Sub(X)$ for any $f:X\to Y$.
If $g \cong m e$ where $m$ is ff and $e$ is eso, we call $m$ the image of $g$. Taking images defines a left adjoint $\exists_f:Sub(X)\to Sub(Y)$ to $f^*$ in any regular 2-category, and the
Beck-Chevalley condition is satisfied for any pullback square, because esos are stable under pullback.
It is easy to check that if $K$ is regular, so are:
The slice 2-category $K/X$ does not, in general, inherit regularity, but we have:
Regular completion
See at 2-congruence the section Regularity.
The above definitions and observations are originally due to
Earliest Uses of Symbols for Matrices and Vectors
This page has been contributed by John Aldrich of the University of Southampton. Last revision: April 13, 2007.
For matrix and vector entries on the Words pages, see here for a list. For vector analysis words see here; vector analysis symbols are on the calculus page.
Most of the basic notation for matrices and vectors in use today was available by the early 20^th century. Its development is traced in volume 2 of Florian Cajori’s History of Mathematical Notations
published in 1929. Cajori made much use of Thomas Muir’s 4-volume The Theory of Determinants in the Historical Order of Development (1906-24), which covered the years up to 1900; a supplement (1930)
brought the story up to 1920.
The modern reader of Muir will be struck that he invested so much in a history of determinants but determinants seemed so much more central at the end of the 19^th century when Muir began work than
they do now. The modern reader of Cajori will be struck by how very differently the material was organised and the emphasis distributed in his time. Thus matrices appear in a sub-section of
“Determinant notations” and vectors in “Symbolism for imaginaries and vector analysis.” Muir’s investment and Cajori’s organisation reflect how the subject(s) developed. The emerging field of
abstract linear spaces is hardly noticed by Cajori.
The emphasis on matrices and the blending of matrix algebra and abstract linear spaces only became features of undergraduate mathematics after the Second World War. See the entry LINEAR ALGEBRA on the Words pages.
These developments can be followed in the general histories, e.g.,
• Morris Kline Mathematical Thought from Ancient to Modern Times. Oxford University Press, 1972.
• Victor J. Katz, A History of Mathematics: An Introduction. Harper Collins, 1993, or Addison Wesley, 1998
Two MacTutor essays are good for background:
Notation for Systems of Linear Equations
The modern notation for systems of linear equations did not become established until the early twentieth century, though by then such systems had been studied continuously in the West for perhaps 200
years. MacTutor: Matrices and determinants describes how GAUSSIAN ELIMINATION (see below) was known to Chinese mathematicians of the Han dynasty.
A few of the many notations presented by Muir and Cajori will be illustrated here.
Muir and Cajori quote a remarkable passage from a letter of 1693 from Leibniz to L’Hospital in which something like the modern scheme of literal coefficients with numerical subscripts (a[11], a[12],
etc.) is suggested. The letter appears in G. W. Leibniz Mathematische Schriften, Band 5 (ed. C. I. Gerhardt) p. 239.
However Leibniz’s economical notation remained an isolated curiosity and there was no standardisation on this or any other scheme. The schemes described by Muir and Cajori did not just use numbers
but exploited the ordering provided by the alphabet and the possibilities of adding primes.
Two important papers from 1772 by Vandermonde and Laplace illustrate some possibilities. Vandermonde treats the constants in a similar way to Leibniz but replaces the alphabetic ordering of the
variables by a numerical one. Laplace combines numerical ordering, alphabetical ordering and repeated primes applied to a single symbol representing an unknown.
Vandermonde’s paper in Histoire de l'Académie royale des sciences Ann. 1772 p. 523 presents a pair of equations in the unknowns ξ[1] and ξ[2]. The constants are on two levels with the upper level
representing the equation (1 or 2) and the lower level representing the number of the variable—with 3 representing the constant. (I apologise for the poor reproduction):
Laplace’s memoir in Histoire de l'Académie royale des sciences Ann. 1772 p. 294. shows how the alphabet, repeated primes and generous use of “&c.” (after the third instance) could be combined to good
effect—the use of the numerical prefix to indicate the number of the equation, as in ^1a, ^2a, ^3a did not catch on:
(This was the paper in which Laplace described what is now called the LAPLACE EXPANSION.) Elements of this notation remained in use for a long time: see Gauss’s notation from 1811 (below) and
Cayley’s matrix and determinant notation from the 1850s (below).
Forty years after these contributions from Vandermonde and Laplace, what is essentially the modern notation appears in Cauchy’s Memoire of 1815 (the paper that introduced the term DETERMINANT). See,
for instance, this set of equations (Oeuvres (2) i: p. 130). Cauchy used numerical subscripts for the variables and doubly indexed numerical subscripts, e.g. a[1,1], for the coefficients
Although Cauchy’s Memoire was recognised as a very important paper, its notation did not become established as the notation for almost another century: at the beginning of the 20^th century it
appears in the textbooks by M. Bôcher Introduction to Higher Algebra (1909) and G. Kowalewski Einführung in die Determinantentheorie (1909).
Gaussian Elimination & Least Squares. One of the most long-lived notations for expressing linear equations is not discussed by Muir or Cajori. It belongs to
the numerical analysis of linear equations and was devised by Gauss for the method now called GAUSSIAN ELIMINATION. Gauss used the method for solving the NORMAL EQUATIONS associated with the METHOD
OF LEAST SQUARES. Both in Gauss’s time and subsequently the least squares problem has had a major influence on the theory and formalism of linear algebra. See the review of different least squares
formalisms in J. Aldrich Doing Least Squares: Perspectives from Gauss and Yule, International Statistical Review, 66, (1998), 61-81.
Gauss describes the method and the associated notation in section 13 of the Disquisitio de Elementis Ellipticis Palladis (1811) in Werke vol 6 pp. 20-22. (Section 2 in P. M. Lee’s translation
Application of the Method of Least Squares to the Elements of the Planet Pallas.)
In the use of primes and alphabetical ordering Gauss’s practice resembles Laplace’s, but the abbreviations were an innovation.
(In the modern regression notation (below) n’, n’’, n’’’, etc. are the elements of the y vector and the a’s, b’s, etc. are elements of the X matrix.)
Using his abbreviations Gauss writes the normal equations (the equations to be solved) as
(where p, q, r, s etc. are the elements of the β vector. )
When Gauss reduces this system to triangular form he uses constructions of the form
for the new constant when p is eliminated and then for the new constant when q is eliminated
The notation survived well into the twentieth century when it was replaced by matrix formulations of least squares calculations. For a typical textbook presentation see Chauvenet Manual of Spherical
and Practical Astronomy (4^th edition, 1871, pp 530ff).
Symbols for determinants. See DETERMINANT on Words.
All the authors who produced notation for systems of equations produced notation for determinants. A great variety of notations are displayed by Muir and Cajori.
The use of a single vertical line on both sides of the entries seems to have been introduced by Cayley writing in 1841: see Muir vol. 1 p. and Cajori vol. 2, p. 92.
The notation appears in the Cambridge Mathematical Journal, Vol. II (1841), p. 267-271, reprinted in Papers vol. 1, p.1. On p. 1 Cayley writes
Apart from the use of commas to separate entries within rows, this is how determinants are written today.
By the early twentieth century the modern combination of bar lines and the a[ij] symbols for elements are found in textbooks: see Bôcher’s Introduction to Higher Algebra (1909) and Kowalewski’s
Einführung in die Determinantentheorie (1909).
Symbols for Matrices. See MATRIX on Words.
Determinants had been studied for well over a century and a half before the idea of a matrix appeared. Cayley wrote about matrices on several occasions without seeming to settle on a fixed notation.
In his “Mémoire sur les Hyperdéterminants” of 1846, Crelle, 50 p. 2 Cayley uses the double vertical line notation found in works from the 1930s.
In his most substantial work on matrices, “A Memoir on the Theory of Matrices” of 1858 (Papers, II, 475-96), Cayley does not use such nice notation. He writes (p. 475).
A set of quantities arranged in the form of a square, e.g.
is said to be a matrix.
Cayley explained the motivation for introducing matrices:
The notion of such a matrix arises naturally from an abbreviated notation for a set of linear equations, viz. the equations
However Cayley did not manipulate such matrix equations nor did he introduce a symbol for non-square matrices like (X, Y, Z): his matrix manipulations were confined to forming powers of square matrices.
Cayley’s main interest was in powers of square matrices (below) and matrix polynomials. The big result in the 1858 paper is the CAYLEY-HAMILTON THEOREM. Cayley uses upper-case letters for matrices
but there is nothing systematic about his notation.
Matrix algebra was rediscovered by Frobenius; see “Ueber lineare Substitutionen und bilineare Formen,” J. reine angew. Math. Vol. 84 (1878) pp.1-63. The notation in this and in his later works was
much more sophisticated than that used by Cayley.
The printing of Cayley’s matrices always looks improvised but more permanent forms followed. Cajori (vol. 2, p. 103) writes that round parentheses were used for matrices by many, including Maxime Bôcher in 1909 in Introduction to Higher Algebra and G. Kowalewski in 1909 in Determinantentheorie (although Kowalewski also used double vertical lines and a single brace).
In the 1930s books on matrices written in English started to appear. The two leading ones were C. C. MacDuffee Theory of Matrices, Springer (1933) and J. H. M. Wedderburn Lectures on Matrices (1934).
They both wrote matrices with double vertical lines as in
Associated symbols
Matrix inverse. See INVERSE on Words.
In the “Memoir” Cayley describes the formation of powers of matrices including the inverse of the matrix: see Papers, I, 480.
Identity matrix and zero matrix. See IDENTITY MATRIX and ZERO MATRIX on Words.
Cayley used 1 for the identity matrix and 0 for the zero matrix. Both Wedderburn Lectures on Matrices (1934, p. 8) and MacDuffee use I and O.
Transpose. See TRANSPOSE on Words.
MacDuffee Theory of Matrices (1933, p. 5) reports, “Many different notations for the transpose have been used, as A’, Ā, Ă, A*, A[1], [1]A.” His preference is for A^T: “The present notation is in
keeping with a systematic notation which, it is hoped, may find favour.” Wedderburn Lectures on Matrices (1934, p. 8) writes A’ for what he calls the transverse.
Kronecker product. See KRONECKER PRODUCT on Words.
F.D. Murnaghan Theory of Group Representations (1938, p. 68) writes A X B for the Kronecker product. The same symbol is used by Wedderburn Lectures on Matrices (1934, p. 74) although he uses the
term direct product as does MacDuffee The Theory of Matrices, (1933, p. 81). MacDuffee writes A ^. X B although the dot is not printed quite so high.
Generalized inverse. See GENERALIZED INVERSE on Words.
The generalized inverse of A is written as A^† by R. Penrose in “A Generalized Inverse for Matrices,” Proc. Cambridge Philos. Soc., 51, (1955), 406-413. However the symbol A^- quickly became popular;
see e.g. C. R. Rao (1962) Note on a Generalized Inverse of a Matrix with Applications to Problems in Mathematical Statistics, Journal of the Royal Statistical Society B, 24, 152-158.
Symbols for Vectors. See VECTOR on Words.
Vector notation and indeed the idea of a vector developed independently of matrix ideas; the vector development had more to do with generalising complex numbers. M. J. Crowe A History of Vector
Analysis (1967/87) describes the nineteenth century work of Hamilton, Grassmann and Gibbs.
Of these Gibbs seems to have had the greatest influence on the notation, but not through his own writing. His vector analysis became widely known through E. B. Wilson’s Vector Analysis: A Text Book
for the Use of Students of Mathematics and Physics Founded upon the Lectures of J. Willard Gibbs (1901).
Many symbols have come and gone: e.g. it is now standard to use the ordinary equals sign to represent equality between vectors but Cajori (pp. 133-4) shows earlier writers reluctant to do this and
using a variety of equals-like symbols.
Vectors and scalars
In his Elements of Vector Analysis (1881) Gibbs writes, “we shall use small Greek letters to denote vectors and the small English letters to denote scalars.” (Scientific Papers, 2, p. 17).
One of the innovations in Wilson’s book was very influential, the use of Clarendon (bold) type for vectors and ordinary type for scalars. E.g. “Thus if A be the vector and x the scalar, the product
is denoted by x A or A x.”
Wilson used the bold/ordinary pair to represent a vector and its length: if A is a vector, then A is the scalar denoting its length. Later writers used the bold/ordinary distinction to mark
other distinctions. In Hermann Weyl’s Raum, Zeit, Materie (Space, Time, Matter, 4^th edition 1921) the bold type x represents the vector but the ordinary type x[i] represents a component of the
vector. German writers also used old German (Gothic) characters to represent vectors and modern script for components: see Courant and Hilbert’s Methoden der Mathematischen Physik (1924).
Following Hamilton’s example, Wilson writes the “3 fundamental unit vectors” as i, j and k. Hamilton was extending the notation for complex numbers. The corresponding notation in Weyl’s Raum, Zeit,
Materie (for n-dimensional spaces) is e[1] ,.., e[n] .
Associated symbols
Scalar product and inner product. See INNER PRODUCT and DOT PRODUCT on Words.
In the Elements (1881) Gibbs wrote α.β for what he called the direct product of the vectors α and β.
In Vector Analysis Wilson wrote A∙B. The notation inspired the alternative term for the direct product, the dot product.
Other authors used symbols resembling Gauss’s symbol: see above. Cajori (p. 135) reports that (u,v) was used in Henrici & Turner’s Vectors and Rotors (1903). In his paper axiomatising HILBERT SPACE,
“Allgemeine Eigenwerttheorie Hermitescher Funktionaloperatoren,” Math. Ann. 102, (1929), 49-131, von Neumann used the symbol (f,g)
Vector product. See VECTOR PRODUCT on Words.
In the Elements (1881) Gibbs wrote α X β for what he called the skew product of the vectors α and β.
In Vector Analysis Wilson wrote the skew product as A X B. The notation inspired the term cross product. See CROSS PRODUCT.
Length or norm. See NORM
Wilson’s use of ordinary type has been mentioned. The absolute value symbol of Weierstrass (see Function Symbols), or some modification of it, has often been used. The symbols |X| and ║φ║ both appear
in Banach’s “Sur les opérations dans les ensembles abstraits et leur application aux équations integrales”, Fundamenta Mathematicae, 3, (1922) 133-181.
For the vector calculus symbols see the calculus page.
Matrix notation in least squares/regression.
For the past fifty years or so it has been common to write the REGRESSION model in matrix notation as

y = Xβ + ε

where y is the vector of values of the DEPENDENT VARIABLE, X is the DESIGN MATRIX and ε is the vector of ERRORS.
Such notation can be found in J. Durbin and G. S. Watson “Testing for Serial Correlation in Least Squares Regression: I,” Biometrika, 37, (1950) and O. Kempthorne's Design and Analysis of Experiments
(1952). Matrix notation was first used in the 1920s but the most noticed of the early contributions was a paper by Aitken, “On least squares and linear combinations of observations,” Proc. Royal Soc.
Edinburgh, 55, (1935), 42-48.
See also Regression and Matrix notation in Symbols in Probability & Statistics. The use of matrix theory in statistics is described by R. W. Farebrother “A. C. Aitken and the Consolidation of Matrix
Theory,” Linear Algebra and its Applications, 264, (1997), 3-12. For the earlier Gauss notation that was used in works on the combination of observations, or theory of errors, see above.
Homework Help
Posted by JJ on Wednesday, January 9, 2008 at 5:48pm.
related rates:
a ladder, 12 feet long, is leaning against a wall. if the foot of the ladder slides away from the wall along level ground,what is the rate of change of the top of the ladder with respect to the
distance of the foot of the ladder from the wall when the foot is 6 ft from the wall?
I don't understand what this question is asking for.
• Calculus - Damon, Wednesday, January 9, 2008 at 5:59pm
Draw the 12 foot ladder up against the wall with the foot at 6 feet from the foot of the wall. Then the top is sqrt (144 - 36) feet up on the wall.
Now it starts slipping out at rate dx/dt at the bottom of the wall.
The question is, what is dy/dt, the speed of the top of the ladder down the wall.
I drew the ladder foot to the right of the wall so that dx/dt is positive :)
Of course dy/dt will change as the ladder goes down the wall even if dx/dt is constant, but the problem only asks for the vertical speed at the moment that the ladder is sqrt (108) high on the wall.
• Calculus - drwls, Wednesday, January 9, 2008 at 6:20pm
The problem is poorly worded, and I can see why you are confused. I think they want the rate at which the elevation of the top of the ladder changes (dy/dt) in terms of how fast the bottom of the
ladder slides away from the wall (dx/dt), when x = 6 ft. Let x be the distance of the bottom of the ladder from the wall, and y be the elevation of the top of the ladder above the floor.
Start with
x^2 + y^2 = 12^2 = 144
Both x and y are functions of t. Differentiate both sides of the above equation with respect to t.
2x dx/dt = -2y dy/dt
dy/dt = - (x/y) dx/dt
When x = 6, y = sqrt (144-36)= 10.39, so
dy/dt = -0.577 dx/dt
Related Questions
Calculus - The top of a 24 foot ladder, leaning against a vertical wall, is ...
Calculus--Please Help! - A 13-foot ladder is leaning against a vertical wall. If...
College Calculus - A 13-foot ladder is leaning against a vertical wall. If the ...
Math- Calculus - A 20 foot ladder is sliding down a vertical wall at a constant ...
Math- Calculus - A 20 foot ladder is sliding down a vertical wall at a constant ...
Math- Calculus - A 20 foot ladder is sliding down a vertical wall at a constant ...
Math- Calculus - A 20 foot ladder is sliding down a vertical wall at a constant ...
Calculus - A ladder 41 feet long that was leaning against a vertical wall begins...
Calculus - A 17 foot ladder is leaning against a wall. The bottom of the ladder...
Related Rates - A ladder 10 ft long rests against a vertical wall. If the bottom...
Random Number Generation in VHDL
Re: Random Number Generation in VHDL
Random Number Generation in VHDL
Hello members,
I would like to know if VHDL already has functions defined to generate
Random Numbers.
If not, which would be the best algorithm for generating random
numbers for implementation on an FPGA.
Thank you
Good luck with the random numbers, but if you want pseudo-random:
Xilinx also has an excellent document on LFSRs with polynomials up to
about 1000 bits or so.
Re: Random Number Generation in VHDL
> I would like to know if VHDL already has functions defined to generate
> Random Numbers.
google is your friend :)
> If not, which would be the best algorithm for generating random
> numbers for implementation on an FPGA.
The answer to this question will depend on the FPGA architecture that
you're using, as well as your needs for cryptographic security. This
is because, when it comes to pseudo-random number generation, "best"
can be subjective. e.g., "best" speed? "best" area? "best" power
consumption? "best" random numbers (cryptographically secure)?
This is a well-studied area; I recommend that you do some reading to
see what suits you ...
Re: Random Number Generation in VHDL
On 24 Jan., 18:54, FPGA wrote:
> Hello members,
> I would like to know if VHDL already has functions defined to generate
> Random Numbers.
> If not, which would be the best algorithm for generating random
> numbers for implementation on an FPGA.
> Thank you
Hello FPGA,
Maybe it's off-topic for you, because you want to implement in FPGA
rather than simulating your design,
but have you seen this package :
http://www.janick.bergeron.com/wtb/packages/random1.vhd ?
Re: Random Number Generation in VHDL
FPGA wrote:
> I would like to know if VHDL already has functions defined to generate
> Random Numbers.
> If not, which would be the best algorithm for generating random
> numbers for implementation on an FPGA.
LFSR are pretty popular for random numbers, and very easy to
implement in an FPGA.
-- glen
Re: Random Number Generation in VHDL
On Jan 25, 1:01*am, glen herrmannsfeldt wrote:
> FPGA wrote:
> > I would like to know if VHDL already has functions defined to generate
> > Random Numbers.
> > If not, which would be the best algorithm for generating random
> > numbers for implementation on an FPGA.
> LFSR are pretty popular for random numbers, and very easy to
> implement in an FPGA.
> -- glen
I just found out that I need random number generator just for
simulation. I do not need to synthesize it. Some feedback on this
would be helpful. I am having a look at some of the links posted here.
Re: Random Number Generation in VHDL
On Fri, 25 Jan 2008 06:44:10 -0800 (PST),
Ann wrote:
>I just found out that I need random number generator just for
>simulation. I do not need to synthesize it. Some feedback on this
>would be helpful. I am having a look at some of the links posted here.
OK, that's easy. The math_real package contains an excellent
random number generator that you can adapt for your own purposes.
library ieee;
use ieee.math_real.all;
variable R: real;
variable S1, S2: positive := 42;
--- seed variables, change initialization to
--- get a different random number stream
uniform(S1, S2, R);
This modifies seed variables S1 and S2 ready for the
next call to uniform() - DON'T DO ANYTHING ELSE with
these two variables. And it also puts a random number
into R, uniformly distributed in the real range 0.0 to
0.99999...; you can then very easily scale this
number to get whatever you want. A couple of examples:
--- Get the integer value "5" with 20% probability,
--- and "7" with 80% probability
if R < 0.2 then
x := 5;
else
x := 7;
end if;
--- Get an integer in the range LO to HI (where LO, HI
--- are both integers and LO<=HI)
R := R * real(HI-LO) + real(LO);
x := integer(floor(R));
Jonathan Bromley, Consultant
DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services
Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
Re: Random Number Generation in VHDL
On Fri, 25 Jan 2008 15:13:14 +0000,
Jonathan Bromley wrote:
>OK, that's easy.
Not so easy, it seems: apologies for this
off-by-one error...
> --- Get an integer in the range LO to HI (where LO, HI
> --- are both integers and LO<=HI)
> R := R * real(HI-LO) + real(LO);
That should be
R := R * real(HI+1-LO) + real(LO);
Jonathan Bromley, Consultant
DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services
Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
Re: Random Number Generation in VHDL
I usually use a maximal LFSR to obtain pseudo-random numbers.
The following link will give you some good information.
I like Appendix B which lists the tap points up to 168 bits for a maximal-length LFSR.
The following would generate pseudo-random 64-bit numbers starting with seed value 1.
entity generator is
port (
clk : in bit;
a : out bit_vector(63 downto 0));
end generator;

architecture processflow of generator is
begin
CLKED: process
variable temp : bit_vector(63 downto 0) := (0 => '1', others => '0');  -- seed value 1
begin
temp := temp(62 downto 0) & (temp(63) xor temp(62));
a <= temp;
wait until (clk = '0');
end process;
end processflow;
"glen herrmannsfeldt" wrote in message
> FPGA wrote:
>> I would like to know if VHDL already has functions defined to generate
>> Random Numbers.
>> If not, which would be the best algorithm for generating random
>> numbers for implementation on an FPGA.
> LFSR are pretty popular for random numbers, and very easy to
> implement in an FPGA.
> -- glen
Re: Random Number Generation in VHDL
On Fri, 25 Jan 2008 10:40:11 -0800,
Dwayne Dilbeck wrote:
>I usually use a maximal LFSR to obtain psuedo random numbers.
>The following link will give you some good information.
>I like Appendix B wich lists the tap points up to 168bits for a maximal
>length LFSR.
>The following would generate psudeo random 64 bit numbers starting with seed
>value 1.
>entity generator is
> port (
> clk:in bit;
> a
>achitecture processflow of generator is
> CLKED
> variable temp:bit_vector(63 downto 0) :=
> begin
> temp := temp(63 downto 0 ) & (temp(63) xor temp(62) );
> a <= temp;
> wait until (clk = '0');
> end process
aaargh.... note that this gives you one pseudo-random BIT per
clock cycle.... but the 64-bit words are painfully strongly
correlated from one cycle to the next. You need to clock
your N-bit LFSR for at least N cycles before pulling the
next N-bit value from it.
Even then, the random numbers aren't brilliantly random
(or at least that's what I am led to understand - I don't
have a particularly good grip on the somewhat scary math)
but LFSRs are indeed a good source of quasi-random stuff
for non-critical applications. Just remember to clock
them enough times!
Jonathan Bromley, Consultant
DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services
Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
Physics Forums - View Single Post - algebra isomorphism
Consider the complex numbers C as an algebra over the reals R. The author of the book I have in front of me (Dirac operators in Riemannian Geometry, p.13) writes C ⊗_R C ≅ C ⊕ C
(as real algebras). Does anyone know what this canonical algebra isomorphism is??? Obviously, w⊗z --> (w,z) is not even linear.
Math Forum Discussions
Topic: to prove: rays bisecting 3 angles of a triangle meet at a single point
Replies: 6 Last Post: Feb 10, 1999 10:10 AM
Messages: [ Previous | Next ]
Re: to prove: rays bisecting 3 angles of a triangle meet at a single point
Posted: Dec 28, 1997 5:34 AM
Michael Keyton wrote:
> Do you know a property about how to tell if a point lies on the bisector
> of an angle, or not?
> If you know this theorem, apply it to two of the bisectors, in particular
> their intersection, and then argue that it is on the other bisector. And
> since an angle can only have one bisector, then the three meet at a point
> (the formal word for this phenomenon is "concurrent" and this point of
> concurrency is called the "incenter"). It is one of over 100 centers of a
> triangle.
This incenter has this name because it is the center of the circle
inscribed in the triangle, i.e. touching the sides.
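The bisector property behind Keyton's hint can be written out explicitly (a standard sketch, added here for completeness):

```latex
% A point P lies on the bisector of an angle iff P is
% equidistant from the two sides (rays) of that angle.
\begin{align*}
P \in \operatorname{bis}(\angle A) &\iff d(P, AB) = d(P, AC),\\
P \in \operatorname{bis}(\angle B) &\iff d(P, BA) = d(P, BC).
\end{align*}
% If P is the intersection of the bisectors at A and B, then
%   d(P, AC) = d(P, AB) = d(P, BC),
% so P is equidistant from CA and CB and hence lies on the
% bisector at C as well; the common distance is the radius
% of the circle inscribed at P.
```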
There are three very related points in a triangle, which can be found by
taking alternative bisectors for two of the angles. These alternative
bisectors can be found by lengthening the sides of the triangle,
resulting in a 'cross'. In this cross you see that there are in fact two
angles that can be bisected. The two alternative bisectors are
new bisector
\ | /
\ | /
\ | /
\ | /
-----X------ old bisector
/ | \
/ | \
/ | \
In this way you get three _excenters_, centers of three excircles. You
should try to find out what that means.
If you want to know more about the numerous triangle centers check out
the www-page of Clark Kimberling:
Floor van Lamoen
Date Subject Author
12/27/97 to prove: rays bisecting 3 angles of a triangle meet at a single point Eileen Stevenson
12/28/97 Re: to prove: rays bisecting 3 angles of a triangle meet at a single point Michael Keyton
12/28/97 Re: to prove: rays bisecting 3 angles of a triangle meet at a single point Floor van Lamoen
12/28/97 Re: to prove: rays bisecting 3 angles of a triangle meet at a single point Guy F. Brandenburg
5/16/98 center of gravity formula Imrich Miklosko
5/19/98 Re: center of gravity formula Guy F. Brandenburg
2/10/99 Re: to prove: rays bisecting 3 angles of a triangle meet at a single point AWatsonNY@aol.com
selfstudy: Remove all smallest integers
09-13-2006 #1
Registered User
Join Date
Sep 2006
selfstudy: Remove all smallest integers
#include <stdio.h>
#define MAX_ARRAY_SIZE 100
int RemoveAllSmallest(int A[], int n)
{
    int i;
    int id_smallest = 0;
    int min_value = A[0];
    //Find the (first) smallest element.
    for (i = 1; i < n; i++)
        if (A[i] < min_value)
        {
            min_value = A[i];
            id_smallest = i;
        }
    //Slide the rest left over it -- note this removes only ONE element.
    for (i = id_smallest; i < n - 1; i++)
        A[i] = A[i + 1];
    return n - 1;
}
void main()
{   int i;
    int A[MAX_ARRAY_SIZE];
    int n;
    //Ask and read the number of integers.
    printf("How many integers totally? (1-%d)", MAX_ARRAY_SIZE);
    scanf("%d", &n);
    printf("\nInput the integers.\n");
    printf("(Each integer should be in the range 0-99. ");
    printf("Separate them with spaces.)\n");
    //Read all integers
    for (i = 0; i < n; i++)
        scanf("%d", &A[i]);
    //Remove all smallest numbers
    n = RemoveAllSmallest(A, n);
    //Display result
    printf("\nAfter removing all smallest number(s):\r\n");
    for (i = 0; i < n; i++)
        printf("%d ", A[i]);
}
I have tried the above, but it cannot remove all the smallest integers, only one. Can anyone help me approach it?
Last edited by Salem; 09-13-2006 at 11:56 PM. Reason: code tagged - please learn to use them
Lay out ten M&Ms in front of you in a row. Start removing all the yellow ones and sliding the others over to fill in the empty spaces. Each time you remove another yellow one, notice how the
sliding you are doing changes. Do the same thing with your array.
Also, don't forget to return a value from your function. What value you return will also help you figure out when to stop the deletion loop.
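As an editor's sketch of the hint above (in Python rather than the thread's C, purely to illustrate the read/write two-index logic the M&M analogy suggests):

```python
def remove_all_smallest(a, n):
    """Remove every occurrence of the minimum from a[0:n], in place.

    Returns the new logical length, mirroring the C exercise's contract.
    """
    m = min(a[:n])
    j = 0                    # write index: where the next survivor lands
    for i in range(n):       # read index: scans every element once
        if a[i] != m:
            a[j] = a[i]      # slide survivor left over removed slots
            j += 1
    return j

a = [3, 1, 4, 1, 5]
n = remove_all_smallest(a, 5)
print(a[:n])  # -> [3, 4, 5]
```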
First: code tags are your friend.
Second: This is the C++ forum; your program is C. Either you should convert it to C++ (which in fact would make this far, far easier to solve -- using a std::vector in place of the array makes
the logic for removal trivial) or ask the mods to move the post to the correct forum.
You ever try a pink golf ball, Wally? Why, the wind shear on a pink ball alone can take the head clean off a 90 pound midget at 300 yards.
void main()
I thought many compilers would flag this as an error nowadays. Void main has long since gone.
I use
int main ( void )
. I think void main is non-standard, but I could be wrong.
Perhaps C compilers like Miracle C still accept it, but many standard C++ compilers would give at least a warning.
Miracle C isn't a C compiler.
It's a student assignment pretending to be a compiler, but since it's $$$ nagware, and there are many free (and fully featured) compilers available, it makes for a very poor choice.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.
|
{"url":"http://cboard.cprogramming.com/cplusplus-programming/82971-selfstudy-remove-all-smallest-integers.html","timestamp":"2014-04-16T11:58:28Z","content_type":null,"content_length":"56964","record_id":"<urn:uuid:cf7cfc35-c9e1-4295-a0dd-7b6e41320621>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Incremental Penetration Depth Estimation between Convex Polytopes Using Dual-Space Expansion
March/April 2004 (vol. 10 no. 2)
pp. 152-163
Abstract—We present a fast algorithm to estimate the penetration depth between convex polytopes in 3D. The algorithm incrementally seeks a "locally optimal solution" by walking on the surface of the
Minkowski sums. The surface of the Minkowski sums is computed implicitly by constructing a local dual mapping on the Gauss map. We also present three heuristic techniques that are used to estimate
the initial features used by the walking algorithm. We have implemented the algorithm and compared its performance with earlier approaches. In our experiments, the algorithm is able to estimate the
penetration depth in about a millisecond on a 1 GHz Pentium PC. Moreover, its performance is almost independent of model complexity in environments with high coherence between successive instances.
Index Terms:
Penetration depth, Minkowski sums, Gauss map, incremental algorithm, haptic rendering.
Young J. Kim, Ming C. Lin, Dinesh Manocha, "Incremental Penetration Depth Estimation between Convex Polytopes Using Dual-Space Expansion," IEEE Transactions on Visualization and Computer Graphics,
vol. 10, no. 2, pp. 152-163, March-April 2004, doi:10.1109/TVCG.2004.1260767
|
{"url":"http://www.computer.org/csdl/trans/tg/2004/02/v0152-abs.html","timestamp":"2014-04-17T11:29:45Z","content_type":null,"content_length":"58840","record_id":"<urn:uuid:9d037be3-266d-4bdb-8162-4f70a358917f>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
|
References for Liu Hui
1. H Peng-Yoke, Biography in Dictionary of Scientific Biography (New York 1970-1990).
2. J-C Martzloff, A history of Chinese mathematics (Berlin-Heidelberg, 1997).
3. J-C Martzloff, Histoire des mathématiques chinoises (Paris, 1987).
4. K Shen, J N Crossley and A W-C Lun, The nine chapters on the mathematical art : Companion and commentary (Beijing, 1999).
5. T S Ang and F J Swetz, A Chinese mathematical classic of the third century : the 'Sea island mathematical manual' of Liu Hui, Historia Math. 13 (2) (1986), 99-117.
6. S S Bai, A re-examination of a ring area problem in the 'Jiu zhang suanshu' (Chinese), Beijing Shifan Daxue Xuebao 30 (1) (1994), 139-142.
7. E I Berezkina (trs.), Two texts of Liu Hui on geometry (Russian), in Studies in the history of mathematics, No. 19 (Russian) (Moscow, 1974), 231-273.
8. K Chemla, Different concepts of equations in 'The nine chapters on mathematical procedures' and in the commentary on it by Liu Hui (3rd century), Historia Sci. (2) 4 (2) (1994), 113-137.
9. K Chemla, Relations between procedure and demonstration : Measuring the circle in the 'Nine chapters on mathematical procedures' and their commentary by Liu Hui (3rd century), in History of
mathematics and education: ideas and experiences (Essen, 1992) (1996), 69-112.
10. C Cullen, Learning from Liu Hui? A different way to do mathematics, Notices Amer. Math. Soc. 49 (7) (2002), 783-790.
11. J W Dauben, The 'Pythagorean theorem' and Chinese mathematics : Liu Hui's commentary on the gou-gu theorem in Chapter Nine of the 'Jiu zhang suan shu', in Amphora (Basel, 1992), 133-155.
12. Y Z Dong and Y Yao, The mathematical thought of Liu Hui (Chinese), Qufu Shifan Daxue Xuebao Ziran Kexue Ban 13 (4) (1987), 99-108.
13. L S Feng, 'Jiu zhang suanshu' and Hui Liu's theory of similar right triangles (Chinese), in Collected research papers on the history of mathematics, Vol. 1 (Chinese) (Hohhot, 1990), 37-45.
14. D W Fu, Why did Liu Hui fail to derive the volume of a sphere?, Historia Math. 18 (3) (1991), 212-238.
15. S C Guo, Liu Hui's great contributions to mathematics : celebrating the 1720th anniversary of his commentary on the 'Jiu zhang suanshu' (Chinese), Math. Practice Theory (3) (1983), 75-79.
16. X H Guo, On Liu Hui's 'qiyan method' (Chinese), J. Central China Normal Univ. Natur. Sci. 20 (3) (1986), 400-408.
17. W S Horng, How did Liu Hui perceive the concept of infinity : a revisit, Historia Sci. (2) 4 (3) (1995), 207-222.
18. D Liu, A comparison of Archimedes' and Liu Hui's studies of circles, in Chinese studies in the history and philosophy of science and technology 179 (Dordrecht, 1996), 279-287.
19. R Mei, Liu Hui's theories of mathematics, in Chinese studies in the history and philosophy of science and technology 179 (Dordrecht, 1996), 243-254.
20. S K Mo, 'Jiuzhang suanshu' ('Nine chapters on the mathematical art') and Liu Hui's commentary (Chinese), Stud. Hist. Nat. Sci. 19 (2) (2000), 97-113.
21. H H Shou, L G Liu and G J Wang, An offset approximation algorithm based on Liu Hui's circle subdivision method (Chinese), Appl. Math. J. Chinese Univ. Ser. A 17 (1) (2002), 105-112.
22. P D Straffin Jr, Liu Hui and the first golden age of Chinese mathematics, Math. Mag. 71 (3) (1998), 163-181.
23. A Volkov, Calculation of pi in ancient China : from Liu Hui to Zu Chongzhi, Historia Sci. (2) 4 (2) (1994), 139-157.
24. D B Wagner, A proof of the Pythagorean theorem by Liu Hui (third century AD), Historia Math. 12 (1) (1985), 71-73.
25. D B Wagner, An early Chinese derivation of the volume of a pyramid : Liu Hui, third century AD, Historia Math. 6 (2) (1979), 164-188.
26. X Q Wang, Elementary studies on pi in France in the 17th - 19th centuries and Liu Hui's cyclotomic rule (Chinese), J. Zhejiang Univ. Sci. Ed. 30 (1) (2003), 1-6.
27. Z W Xi and S L Zhang, On the characteristic of the dialectical thought of the 'Jiu zhang suanshu' and Hui Liu's commentary (Chinese), Qufu Shifan Daxue Xuebao Ziran Kexue Ban 19 (4) (1993),
28. S C Yang, 'Ratio' and 'power' in Hui Liu's commentary on the 'Jiu zhang suan shu' ('Arithmetic in nine chapters') (Chinese), Dongbei Shida Xuebao (4) (1990), 39-43.
December 2003
MacTutor History of Mathematics
|
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Printref/Liu_Hui.html","timestamp":"2014-04-20T18:53:57Z","content_type":null,"content_length":"5519","record_id":"<urn:uuid:c36ca96d-43e1-47b7-a818-5ac60aab33f1>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Numbers Divisible by 9 and 10
A number is divisible by 10 only if its last digit is 0. A number is divisible by 9 if the sum of its digits is evenly divisible by 9. For example, the sum of the digits of
the number 3627 is 18, which is evenly divisible by 9, so 3627 is evenly divisible by 9.
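These two rules translate directly into code; here is a short Python sketch (an editor's illustration):

```python
def divisible_by_10(n):
    # Divisible by 10 only if the last digit is 0.
    return n % 10 == 0

def divisible_by_9(n):
    # Divisible by 9 exactly when the sum of the decimal digits is.
    return sum(int(d) for d in str(abs(n))) % 9 == 0

print(divisible_by_9(3627))   # digit sum is 18 -> True
print(divisible_by_10(3627))  # last digit is 7 -> False
```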
|
{"url":"http://www.aaamath.com/div66_x9.htm","timestamp":"2014-04-21T02:01:24Z","content_type":null,"content_length":"5731","record_id":"<urn:uuid:627c165d-7e60-4d16-aefb-b134e2da7a78>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Portability: non-portable
Stability: experimental
Maintainer: Jan Snajder <jan.snajder@fer.hr>
Safe Haskell: None
Implementation of the GenProg.GenExpr interface for members of the Data typeclass. The implementation is based on SYB and SYZ generic programming frameworks (see http://hackage.haskell.org/package/
syb and http://hackage.haskell.org/package/syz for details).
NB: Subexpressions that are candidates for crossover points or mutation must be of the same type as the expression itself, and must be reachable from the root node by type-preserving traversal. See
below for an example.
This module re-exports the GenExpr typeclass.
class GenExpr e where
This typeclass defines an interface to expressions that can be genetically programmed. The operations that must be provided by instances of this class are used for the generation of random
individuals as well as crossover and mutation operations. (An instance for members of the Data typeclass is provided in GenProg.GenExpr.Data.)
Minimal complete definition: exchange, nodeMapM, nodeMapQ, and nodeIndices.
exchange :: e -> Int -> e -> Int -> (e, e)
Exchanges subtrees of two expressions: exchange e1 n1 e2 n2 replaces the subexpression of e1 rooted in node n1 with the subexpression of e2 rooted in n2, and vice versa.
nodeMapM :: Monad m => (e -> m e) -> e -> m e
Maps a monadic transformation function over the immediate children of the given node.
nodeMapQ :: (e -> a) -> e -> [a]
Maps a query function over the immediate children of the given node and returns a list of results.
nodeIndices :: e -> ([Int], [Int])
A list of indices of internal (functional) and external (terminal) nodes of an expression.
adjustM :: Monad m => (e -> m e) -> e -> Int -> m e
Adjusts a subexpression rooted at the given node by applying a monadic transformation function.
nodes :: e -> Int
Number of nodes an expression has.
depth :: e -> Int
The depth of an expression. Equals 1 for single-node expressions.
Suppose you have a datatype defined as
data E = A E E
| B String [E]
| C
deriving (Eq,Show,Typeable,Data)
and an expression defined as
e = A (A C C) (B "abc" [C,C])
The subexpressions of e are considered to be only the subvalues of e that are of the same type as e. Thus, the number of nodes of expression e is
>>> nodes e
5
because the subvalues of node B are of a different type than expression e and are therefore not considered subexpressions.
Consequently, during a genetic programming run, subexpressions that are of a different type than the expression itself, or subexpression that cannot be reached from the root node by a type-preserving
traversal, cannot be chosen as crossover points nor can they be mutated.
|
{"url":"http://hackage.haskell.org/package/genprog-0.1.0.2/docs/GenProg-GenExpr-Data.html","timestamp":"2014-04-17T16:08:42Z","content_type":null,"content_length":"9442","record_id":"<urn:uuid:466a98fc-f0aa-424e-893f-1d3999d442d6>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Help with evaluating sine
October 10th 2009, 05:26 PM #1
Junior Member
Apr 2009
I have the function
$f:[-\pi,\pi] \rightarrow R, f(x) = \sin 3(x+ \frac{\pi}{4})$
and I need to find the endpoints by subbing in $-\pi$ and $\pi$
$f(-\pi) = \sin 3(-\pi + \frac{\pi}{4})$
$= \sin 3(\pi+\frac{\pi}{4})$
$= \sin (3\pi+\frac{3\pi}{4})$
$= \sin (2\pi+\frac{7\pi}{4})$
$= \sin (\frac{7\pi}{4})$
Is this right so far? If so, what do I do next?
$\sin\left[3\left(\pi - \frac{\pi}{4}\right)\right]$
fyi, $\frac{9\pi}{4}$ is coterminal with $\frac{\pi}{4}$
you should find the sine value on your unit circle.
$\sin \frac{\pi}{4} = \frac{1}{\sqrt2}$
The answer I have in my book for $f(-\pi) = -\frac{1}{\sqrt2}$
That's wrong; the sine of $\frac{\pi}{4}$ (a 45 degree angle) is $\frac{\sqrt2}{2}$. All you have to do in that problem is figure out whether that answer is positive or negative. The way to do that is
to first evaluate the function at its input:
$f(-\pi) = \sin 3(-\pi + \frac{\pi}{4})$
$f(-\pi) = \sin (-3\pi + \frac{3\pi}{4})$
$f(-\pi) = \sin (-\frac{12\pi}{4} + \frac{3\pi}{4})$
$f(-\pi) = \sin (-\frac{9\pi}{4})$
Now, $-\frac{9\pi}{4}$ is coterminal with $-\frac{\pi}{4}$: subtracting one full revolution of $2\pi$ from $-\frac{\pi}{4}$ puts you at the same place on the
unit circle. Since sine is the $y$-coordinate on the unit circle and $-\frac{9\pi}{4}$ ends up being in the 4th quadrant of the graph, your answer will be negative. Therefore it should be $-\frac{\sqrt2}{2}$.
EDIT: Well now that I think about it: $\frac{1}{\sqrt2} = \frac{\sqrt2}{2}$ so I guess your book is right :P
$\frac{1}{\sqrt{2}} = \frac{\sqrt{2}}{2}$
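A quick numerical check of the thread's conclusion (an editor's addition, not part of the original thread):

```python
import math

# f(x) = sin(3(x + pi/4)) evaluated at the endpoint x = -pi
value = math.sin(3 * (-math.pi + math.pi / 4))

# -9*pi/4 is coterminal with -pi/4, so the sine should be -sqrt(2)/2
print(value)                 # approximately -0.7071
print(-math.sqrt(2) / 2)
```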
|
{"url":"http://mathhelpforum.com/trigonometry/107249-help-evaluating-sine.html","timestamp":"2014-04-21T10:04:04Z","content_type":null,"content_length":"55507","record_id":"<urn:uuid:580ac370-b1c3-4b61-aec6-51bca7474476>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fairchilds, TX Prealgebra Tutor
Find a Fairchilds, TX Prealgebra Tutor
...I understand that in addition to struggling with understanding a subject, test anxiety can be a difficult obstacle to overcome. Because I have personally dealt with test anxiety myself and
helped students conquer theirs as well, I am well-equipped to help students overcome their fears through wa...
47 Subjects: including prealgebra, reading, English, chemistry
...I have also tutored high school students in various locations. My reputation at the Air Force Academy was the top calculus instructor. I have taught precalculus during the past two years and
have enjoyed success with the accomplishments of my students.
11 Subjects: including prealgebra, calculus, geometry, statistics
...I am 100% confident that you will leave our tutoring session feeling LESS STRESSED OUT and EXCITED! Algebra is probably the most practical of the mathematics. Understanding graphs, proportions,
and how to solve simple equations is essential for every student regardless of their field of study. B...
5 Subjects: including prealgebra, chemistry, algebra 2, geometry
...I can help your student improve in English, Grammar, Reading, Elementary Math, Pre Algebra, and Algebra 1. For the past three years I have tutored Middle School Math and High School Algebra 1.
Here is an unsolicited quote from one of the students I tutored last school year. "What you did for me was soooooooooo helpful.
7 Subjects: including prealgebra, reading, algebra 1, SAT math
...My experience really shows in the way I tutor my students as I have seen and mastered almost all types of problems and scenarios. I believe that my role as an educator is not only to inform,
but also to train students to think both independently and collaboratively and take the knowledge they ga...
21 Subjects: including prealgebra, calculus, algebra 1, GRE
Related Fairchilds, TX Tutors
Fairchilds, TX Accounting Tutors
Fairchilds, TX ACT Tutors
Fairchilds, TX Algebra Tutors
Fairchilds, TX Algebra 2 Tutors
Fairchilds, TX Calculus Tutors
Fairchilds, TX Geometry Tutors
Fairchilds, TX Math Tutors
Fairchilds, TX Prealgebra Tutors
Fairchilds, TX Precalculus Tutors
Fairchilds, TX SAT Tutors
Fairchilds, TX SAT Math Tutors
Fairchilds, TX Science Tutors
Fairchilds, TX Statistics Tutors
Fairchilds, TX Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Fairchilds_TX_Prealgebra_tutors.php","timestamp":"2014-04-20T19:37:24Z","content_type":null,"content_length":"24160","record_id":"<urn:uuid:f2b9526f-6862-4b99-8b3a-0b99bdca595f>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rationalizing Denominators
Date: 07/12/97 at 15:11:49
From: Jim Summerlin
Subject: Rationalizing denominators
Dear Dr. Math,
HELP!! I just can't seen to do this. How can I rationalize a
denominator with odd power radicals? This one for example
1/[ (64^(1/5))+(36^(1/5))+(24^(1/5))+(16^(1/5))]
It looks better with radicals but I don't know how to enter symbols in
this text box. Can you rationalize that denominator? Thanks for your help.
Jim Summerlin
Date: 07/16/97 at 16:19:06
From: Doctor Rob
Subject: Re: Rationalizing denominators
Yes, you can, but it will end up with a ghastly numerator.
In order to do this, you need to drag in a primitive 5-th root of unity,
call it z. Then it satisfies (z^5 - 1)/(z - 1) = z^4 + z^3 + z^2 + z + 1 = 0.
You then multiply both numerator and denominator by the product of all
the conjugate sums obtained by attaching the powers z^a, z^b, z^c, z^d
to the four fifth roots in the denominator,
for all 4-tuples (a, b, c, d) with 0 <= a, b, c, d <= 4, except for
the 4-tuple (0,0,0,0), which is already there in the denominator.
This is a list of 255 additional factors. Now expand everything in
sight, and simplify using first z^5 = 1 and then the above irreducible
quartic polynomial equation satisfied by z.
It may be hard to believe, but all the z's will miraculously vanish
from the expression, and all the radicals will vanish from the
denominator, and you will have your answer. Of course not all the
radicals vanish from the numerator!
The denominator turns out to be about 5.831779 * 10^303 !
In this particular case, we didn't need b or d at all, because
(64^(1/5))^4 = 16*16^(1/5)
(24^(1/5))^2/(16^(1/5)) = 36^(1/5).
If we hadn't used the b and d, we would have gotten a denominator of
only 2551318400000. After reducing to lowest terms, the denominator
is only 510263680.
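Both simplifications above are easy to confirm numerically; a small Python check (an editor's addition):

```python
# 64^(4/5) = 16 * 16^(1/5): both sides equal 2^(24/5)
lhs = 64 ** (4 / 5)
rhs = 16 * 16 ** (1 / 5)
print(abs(lhs - rhs) < 1e-9)   # True

# 24^(2/5) / 16^(1/5) = 36^(1/5): both sides equal (576/16)^(1/5)
lhs2 = 24 ** (2 / 5) / 16 ** (1 / 5)
rhs2 = 36 ** (1 / 5)
print(abs(lhs2 - rhs2) < 1e-9)  # True
```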
Computing the numerator I leave as an exercise ( :-) !). Actually
it only consumed 7 lines of Mathematica output.
-Doctor Rob, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
|
{"url":"http://mathforum.org/library/drmath/view/53052.html","timestamp":"2014-04-16T16:47:48Z","content_type":null,"content_length":"6972","record_id":"<urn:uuid:31d4d634-55ad-4e78-9996-12a7dbf06c23>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Rest of the Metacharacters
• {m,n} is straightforward now
• It's like * but keeps track of the number of matches
• P{n} is the same as P{n,n}
• Because it keeps track of the number in a small integer, m and n are restricted to be between 0 and 32767.
• There's a non-greedy version {m,n}? which is rarely used
• Actually X* is implemented with {m,n} for nontrivial X.
• This means that ^(foo|bar)*$ wouldn't match "foo" x 35000.
• Sometime after 5.004_04 and at or before 5.005_02, this was fixed
□ n=32767 now has a special meaning; it is used internally to mean infinity
□ You are no longer allowed to specify 32767 explicitly
|
{"url":"http://perl.plover.com/yak/regex/samples/slide050.html","timestamp":"2014-04-20T11:49:29Z","content_type":null,"content_length":"2903","record_id":"<urn:uuid:bd4798b0-9293-47a8-a730-578a63b5d3a7>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The point of icicles
Contemplating some of nature's cool creations is always fun. Now a team of scientists from The University of Arizona in Tucson has figured out the physics of how drips of icy water can swell into the
skinny spikes known as icicles.
Deciphering patterns in nature is a specialty of UA researchers Martin B. Short, James C. Baygents and Raymond E. Goldstein. In 2005, the team figured out that stalactites, the formations that hang
from the ceilings of caves, have a unique underlying shape described by a strikingly simple mathematical equation.
However, stalactites aren't the only natural formations that look like elongated carrots. Once the researchers had found a mathematical representation of the stalactite's shape, they began to wonder
if the solution applied to other similarly shaped natural formations caused by dripping water.
So the team decided to investigate icicles. Although other scientists have studied how icicles grow, they had not found a formula to describe their shape.
Surprisingly, the team found that the same mathematical formula that describes the shape of stalactites also describes the shape of icicles.
"Everyone knows what an icicle is and what it looks like, so this research is very accessible. I think it is amazing that science and math can explain something like this so well. It really
highlights the beauty of nature," Short said.
The finding is surprising because the physical processes that form icicles are very different from those that form stalactites. Whereas heat diffusion and a rising air column are keys to an icicle's
growth, the diffusion of carbon dioxide gas fuels a stalactite's growth.
Short, a doctoral candidate in UA's physics department, Baygents, a UA associate professor of chemical and environmental engineering, and Goldstein, a UA professor of physics and the Schlumberger
Professor of Complex Physical Systems at the University of Cambridge in England, published their article, "A Free-Boundary Theory for the Shape of the Ideal Dripping Icicle," in the August 2006 issue
of Physics of Fluids. The National Science Foundation funded the research.
As residents of cold climates know, icicles form when melting snow begins dripping down from a surface such as the edge of a roof. For an icicle to grow, there must be a constant layer of water
flowing over it.
The growth of an icicle is caused by the diffusion of heat away from the icicle by a thin fluid layer of water and the resulting updraft of air traveling over the surface. The updraft of air occurs
because the icicle is generally warmer than its surrounding environment, and thus convective heating causes the air surrounding the icicle to rise. As the rising air removes heat from the liquid
layer, some of the water freezes, and the icicle grows thicker and elongates.
"At first, we focused only on the thin water layer covering the icicle, just like we did with stalactites," said Short. "It was only later that we examined the layer of rising air, which is
technically more correct. Strangely though, both methods lead to the same mathematical shape for icicles."
The resulting shape turns out to be described by the same mathematical equation that describes stalactites. One could call it the Platonic form.
The team wanted to compare the predicted shape to real icicles. Because icicles are scarce in Tucson, the scientists naturally turned to the Internet. They were able to compare pictures of actual
icicles with their predicted shape.
The team found that no matter how big or small the actual icicles were, they all fit the shape generated by the mathematical equation.
"Fundamentally, just like in the early stalactite work, it's a result that implies that the shape of an icicle, at least in its ideal, pristine form, ought to be described by this mathematical
equation. And we found, examining images of icicles, that it is a very good fit," senior author Goldstein said.
The team's next step will be to solve the problem of how ripples are formed on the surfaces of both stalactites and icicles.
Source: University of Arizona
Investing Money HELP
June 10th 2013, 11:10 PM
1. Last year, Helen bought 150 shares at $2.00 per share. They are now worth $2.50 per share. Helen receives a dividend of $0.10 per share. What is her dividend yield?
2. Frank has saved $30 000 to buy a new car. He decides to try to get another two years' use out of his old car and in the meantime invest the money he has saved.
a) (did this question) If Frank invested the $30 000 at 3.5% p.a. for 2 years with interest compounded annually, calculate the money that Frank has at the end of the investment.
A = 30 000 (1 + 0.035)^2
≈ 32 137
b) (need help with this one) Over the two years that Frank had invested his money, the inflation rate has averaged 4.2% p.a. Assuming the car was worth $30 000, calculate the new cost of the car at the end of the two years, to the nearest $100, if the price rose at the same rate as inflation.
3. Karen bought a lounge on terms (a loan) over 3 years and, with regular payments, paid a total of $3100. If she had had enough cash at the time, the cash price was $2500.
a) (did this question) How much more did she pay than if she had paid cash? - $600
b) (need help with) What rate of simple interest did she pay, per annum?
Please help, and provide working out so I understand how to do it. Thank you :)
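A quick sketch of the arithmetic for the three questions (my own working, not from the thread; I assume "dividend yield" here means the dividend per share divided by the current share price):

```python
# Q1: dividend yield = dividend per share / current share price
dividend_yield = 0.10 / 2.50                      # 0.04, i.e. 4%

# Q2a: compound interest, $30 000 at 3.5% p.a. for 2 years
future_value = 30_000 * (1 + 0.035) ** 2          # 32136.75

# Q2b: price inflated at 4.2% p.a. for 2 years, rounded to the nearest $100
inflated = 30_000 * (1 + 0.042) ** 2              # 32572.92
inflated_rounded = round(inflated / 100) * 100    # 32600

# Q3b: simple interest I = P * r * t, so r = I / (P * t)
rate = 600 / (2500 * 3)                           # 0.08, i.e. 8% p.a.

print(dividend_yield, round(future_value, 2), inflated_rounded, rate)
```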
Stephen Stigler has written a paper in the Journal of the Royal Statistical Society Series A on Francis Galton’s analysis of (his cousin) Charles Darwin’s Origin of Species, leading to nothing less than Bayesian analysis and accept-reject algorithms! “On September 10th, 1885, Francis Galton ushered in a new era of Statistical Enlightenment with an address
Riemann, Langevin & Hamilton [reply]
Here is a (prompt!) reply from Mark Girolami corresponding to the earlier post: In preparation for the Read Paper session next month at the RSS, our research group at CREST has collectively read the
Girolami and Calderhead paper on Riemann manifold Langevin and Hamiltonian Monte Carlo methods and I hope we will again produce a
Effective sample size
In the previous days I have received several emails asking for clarification of the effective sample size derivation in “Introducing Monte Carlo Methods with R” (Section 4.4, pp. 98-100). Formula
(4.3) gives the Monte Carlo estimate of the variance of a self-normalised importance sampling estimator (note the change from the original version in Introducing Monte
Monte Carlo Statistical Methods third edition
Last week, George Casella and I worked around the clock on starting the third edition of Monte Carlo Statistical Methods by detailing the changes to make and designing the new table of contents. The
new edition will not see a revolution in the presentation of the material but rather a more mature perspective on what
R tee-shirt
I gave my introduction to the R course in a crammed amphitheatre of about 200 students today. Had to wear my collectoR teeshirt from Revolution Analytics, even though it only made the kids pay
attention for about 30 seconds… The other few “lines” that worked were using the Proctor & Gamble “car 54″ poster and
Typo in Example 3.6
Edward Kao pointed out the following difficulty about Example 3.6 in Chapter 3 of “Introducing Monte Carlo Methods with R”: I have two questions that have puzzled me for a while. I hope you can shed
some lights. They are all about Example 3.6 of your book. 1. On page 74, there is a term
News on MCMSki III
Here is a message sent by the organisers of MCMSki III in Utah next early January. When registering, make sure to tick the free registration for Adap’skiii as well! The fourth joint international
meeting of the IMS (Institute of Mathematical Statistics) and ISBA (International Society for Bayesian Analysis), nicknamed “MCMSki III“, will be held at
10w2170, Banff [2]
Over the two days of the Hierarchical Bayesian Methods in Ecology workshop, we managed to cover normal models, testing, regression, Gibbs sampling, generalised linear models, Metropolis-Hastings
algorithms and of course a fair dose of hierarchical modelling. At the end of the Saturday marathon session, we spent one and half discussing some models studied by the
“simply start over and build something better”
The post on the shortcomings of R has attracted a huge number of readers and Ross Ihaka has now posted a detailed comment that is fairly pessimistic… Given the directions drafted in this comment from
the father of R (along with Robert Gentleman), I once again re-post this comment as a main entry to advertise
10w2170, Banff
Yesterday night, we started the Hierarchical Bayesian Methods in Ecology workshop by trading stories. Everyone involved in the programme discussed his/her favourite dataset and corresponding
expectations from the course. I found the exchange most interesting, like the one we had two years ago in Gran Paradiso, because of the diversity of approaches to Statistics reflected
Portability Non-portable (GHC extensions)
Stability Provisional
Maintainer Daniel Fischer <daniel.is.fischer@googlemail.com>
Various functions related to prime factorisation. Many of these functions use the prime factorisation of an Integer. If several of them are used on the same Integer, it would be inefficient to recalculate the factorisation, hence there are also functions working on the canonical factorisation; these require that the number be positive and, in the case of the Carmichael function, that the list of prime factors with their multiplicities is ascending.
Factorisation functions
Factorisation of Integers by the elliptic curve algorithm after Montgomery. The algorithm is explained at http://programmingpraxis.com/2010/04/23/modern-elliptic-curve-factorization-part-1/ and http:
The implementation is not very optimised, so it is not suitable for factorising numbers with several huge prime divisors. However, factors of 20-25 digits are normally found in acceptable time. The
time taken depends, however, strongly on how lucky the curve-picking is. With luck, even large factors can be found in seconds; on the other hand, finding small factors (about 12-15 digits) can take
minutes when the curve-picking is bad.
Given enough time, the algorithm should be able to factor numbers of 100-120 digits, but it is best suited for numbers of up to 50-60 digits.
factorise :: Integer -> [(Integer, Int)]Source
factorise n produces the prime factorisation of n, including a factor of (-1) if n < 0. factorise 0 is an error and the factorisation of 1 is empty. Uses a StdGen produced in an arbitrary manner from
the bit-pattern of n.
defaultStdGenFactorisation :: StdGen -> Integer -> [(Integer, Int)]Source
defaultStdGenFactorisation first strips off all small prime factors and then, if the factorisation is not complete, proceeds to curve factorisation. For negative numbers, a factor of -1 is included,
the factorisation of 1 is empty. Since 0 has no prime factorisation, a zero argument causes an error.
stepFactorisation :: Integer -> [(Integer, Int)]Source
stepFactorisation is like factorise', except that it doesn't use a pseudo random generator but steps through the curves in order. This strategy turns out to be surprisingly fast, on average it
doesn't seem to be slower than the StdGen based variant.
Factor sieves
factorSieve :: Integer -> FactorSieveSource
factorSieve n creates a store of smallest prime factors of the numbers not exceeding n. If you need to factorise many smallish numbers, this can give a big speedup since it avoids many superfluous
divisions. However, a too large sieve leads to a slowdown due to cache misses. To reduce space usage, only the smallest prime factors of numbers coprime to 30 are stored, encoded as Word16s. The
maximal admissible value for n is therefore 2^32 - 1. Since φ(30) = 8, the sieve uses only 16 bytes per 30 numbers.
sieveFactor :: FactorSieve -> Integer -> [(Integer, Int)]Source
sieveFactor fs n finds the prime factorisation of n using the FactorSieve fs. For negative n, a factor of -1 is included with multiplicity 1. After stripping any present factors 2, 3 or 5, the
remaining cofactor c (if larger than 1) is factorised with fs. This is most efficient of course if c does not exceed the bound with which fs was constructed. If it does, trial division is performed
until either the cofactor falls below the bound or the sieve is exhausted. In the latter case, the elliptic curve method is used to finish the factorisation.
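To illustrate the idea (in Python rather than Haskell, and without the coprime-to-30 compression or the library's elliptic-curve fallback), a smallest-prime-factor sieve and the corresponding factorisation lookup might look like:

```python
def factor_sieve(n):
    """Smallest-prime-factor table for 2..n (a plain-array analogue of
    factorSieve, without the space-saving encoding)."""
    spf = list(range(n + 1))              # spf[k] == k means no smaller factor found yet
    for p in range(2, int(n ** 0.5) + 1):
        if spf[p] == p:                   # p is prime
            for m in range(p * p, n + 1, p):
                if spf[m] == m:
                    spf[m] = p
    return spf

def sieve_factor(spf, n):
    """Prime factorisation of 1 < n <= len(spf) - 1 as (prime, exponent) pairs."""
    factors = []
    while n > 1:
        p, e = spf[n], 0
        while n % p == 0:
            n //= p
            e += 1
        factors.append((p, e))
    return factors

print(sieve_factor(factor_sieve(100), 84))   # [(2, 2), (3, 1), (7, 1)]
```

Each lookup then costs one table read per prime factor instead of repeated trial divisions, which is the speedup the documentation describes.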
Trial division
trialDivisionTo :: Integer -> Integer -> [(Integer, Int)]Source
trialDivisionTo bound n produces a factorisation of n using the primes <= bound. If n has prime divisors > bound, the last entry in the list is the product of all these. If n <= bound^2, this is a full factorisation, but very slow if n has large prime divisors.
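A minimal Python sketch of bounded trial division with a leftover cofactor (an illustration of the behaviour described above, not the library's implementation; for simplicity it trial-divides by all integers up to the bound, not just primes):

```python
def trial_division_to(bound, n):
    """Factor n by trial division with divisors up to `bound`; any cofactor
    whose prime divisors all exceed the bound is left as one final entry."""
    factors = []
    d = 2
    while d <= bound:
        if d * d > n:
            break                      # the remaining n (if > 1) must be prime
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            factors.append((d, e))
        d += 1
    if n > 1:
        factors.append((n, 1))         # prime, or a product of primes > bound
    return factors
```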
Partial factorisation
smallFactors :: Integer -> Integer -> ([(Integer, Int)], Maybe Integer)Source
smallFactors bound n finds all prime divisors of n > 1 up to bound by trial division and returns the list of these together with their multiplicities, and a possible remaining factor which may be composite.
:: Maybe Integer Lower bound for composite divisors
-> StdGen Standard PRNG
-> Maybe Int Estimated number of digits of smallest prime factor
-> Integer The number to factorise
-> [(Integer, Int)] List of prime factors and exponents
A wrapper around curveFactorisation providing a few default arguments. The primality test is bailliePSW, the prng function - naturally - randomR. This function also requires small prime factors to
have been stripped before.
:: Maybe Integer Lower bound for composite divisors
-> (Integer -> Bool) A primality test
-> (Integer -> g -> (Integer, g)) A PRNG
-> g Initial PRNG state
-> Maybe Int Estimated number of digits of the smallest prime factor
-> Integer The number to factorise
-> [(Integer, Int)] List of prime factors and exponents
curveFactorisation is the driver for the factorisation. Its performance (and success) can be influenced by passing appropriate arguments. If you know that n has no prime divisors below b, any divisor
found less than b*b must be prime, thus giving Just (b*b) as the first argument allows skipping the comparatively expensive primality test for those. If n is such that all prime divisors must have a
specific easy to test for structure, a custom primality test can improve the performance (normally, it will make very little difference, since n has not many divisors, and many curves have to be
tried to find one). More influence has the pseudo random generator (a function prng with 6 <= fst (prng k s) <= k-2 and an initial state for the PRNG) used to generate the curves to try. A lucky
choice here can make a huge difference. So, if the default takes too long, try another one; or you can improve your chances for a quick result by running several instances in parallel.
curveFactorisation requires that small prime factors have been stripped before. Also, it is unlikely to succeed if n has more than one (really) large prime factor.
Single curve worker
montgomeryFactorisation :: Integer -> Word -> Word -> Integer -> Maybe IntegerSource
montgomeryFactorisation n b1 b2 s tries to find a factor of n using the curve and point determined by the seed s (6 <= s < n-1), multiplying the point by the least common multiple of all numbers <=
b1 and all primes between b1 and b2. The idea is that there's a good chance that the order of the point in the curve over one prime factor divides the multiplier, but the order over another factor
doesn't, if b1 and b2 are appropriately chosen. If they are too small, none of the orders will probably divide the multiplier, if they are too large, all probably will, so they should be chosen to
fit the expected size of the smallest factor.
It is assumed that n has no small prime factors.
The result is maybe a nontrivial divisor of n.
totient :: Integer -> IntegerSource
Calculates the totient of a positive number n, i.e. the number of k with 1 <= k <= n and gcd n k == 1, in other words, the order of the group of units in ℤ/(n).
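For illustration (not the library's implementation), the totient can be computed from the factorisation using φ(n) = n · ∏ (1 - 1/p) over the distinct prime divisors p of n; a Python sketch:

```python
def totient(n):
    """Euler's totient: count of 1 <= k <= n with gcd(n, k) == 1."""
    result = n
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p     # multiply by (1 - 1/p) in exact integers
        p += 1
    if n > 1:                         # one leftover prime factor
        result -= result // n
    return result
```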
totientSieve :: Integer -> TotientSieveSource
totientSieve n creates a store of the totients of the numbers not exceeding n. Like a FactorSieve, a TotientSieve only stores values for numbers coprime to 30 to reduce space usage. However, totients
are stored as Words, thus the space usage is 2 or 4 times as high. The maximal admissible value for n is fromIntegral (maxBound :: Word).
sieveTotient :: TotientSieve -> Integer -> IntegerSource
sieveTotient ts n finds the totient φ(n), i.e. the number of integers k with 1 <= k <= n and gcd n k == 1, in other words, the order of the group of units in ℤ/(n), using the TotientSieve ts. The strategy is analogous to sieveFactor.
Carmichael function
carmichael :: Integer -> IntegerSource
Calculates the Carmichael function for a positive integer, that is, the (smallest) exponent of the group of units in ℤ/(n).
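A Python sketch of the standard way to compute λ(n) (again an illustration, not the library's code): factor n, apply λ to each prime power (λ(2) = 1, λ(4) = 2, λ(2^k) = 2^(k-2) for k >= 3, and λ(p^k) = p^(k-1)(p-1) for odd p), and take the least common multiple of the results:

```python
from math import gcd

def carmichael(n):
    """Carmichael function: exponent of the multiplicative group mod n."""
    def lam_prime_power(p, k):
        if p == 2 and k >= 3:
            return 2 ** (k - 2)
        return p ** (k - 1) * (p - 1)

    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            k = 0
            while n % p == 0:
                n //= p
                k += 1
            lpp = lam_prime_power(p, k)
            result = result * lpp // gcd(result, lpp)   # lcm
        p += 1
    if n > 1:                                           # leftover prime factor
        result = result * (n - 1) // gcd(result, n - 1)
    return result
```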
carmichaelSieve :: Integer -> CarmichaelSieveSource
carmichaelSieve n creates a store of values of the Carmichael function for numbers not exceeding n. Like a FactorSieve, a CarmichaelSieve only stores values for numbers coprime to 30 to reduce space
usage. However, values are stored as Words, thus the space usage is 2 or 4 times as high. The maximal admissible value for n is fromIntegral (maxBound :: Word).
sieveCarmichael :: CarmichaelSieve -> Integer -> IntegerSource
sieveCarmichael cs n finds the value of λ(n) (or ψ(n)), the smallest positive integer e such that for all a with gcd a n == 1 the congruence a^e ≡ 1 (mod n) holds, in other words, the (smallest)
exponent of the group of units in ℤ/(n). The strategy is analogous to sieveFactor.
sigma :: Int -> Integer -> IntegerSource
sigma k n is the sum of the k-th powers of the (positive) divisors of n. k must be non-negative and n positive. For k == 0, it is the divisor count (d^0 = 1).
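A direct Python sketch of the divisor-power sum (illustration only; it enumerates divisors in pairs up to the square root):

```python
def sigma(k, n):
    """Sum of the k-th powers of the positive divisors of n (k >= 0, n >= 1)."""
    total = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            total += d ** k
            if d != n // d:            # avoid counting a square root twice
                total += (n // d) ** k
        d += 1
    return total
```

With k == 0 this is the divisor count, matching the note above (d^0 = 1).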
Math Forum: Teacher2Teacher - Q&A #6322
From: Loyd <Loydlin@aol.com>
To: Teacher2Teacher Public Discussion
Date: 2012-01-05 12:50:22
Subject: Re: Finding the radius of a circle when you only have an arc
On 2004-03-08 20:02:12, Philip Wetherington wrote:
>I have a board that is 6 inches wide and 20 inches long. I want to cut
>an arc in the board that starts at the lower left bottom corner of the
>board and runs to the lower right corner of the board. In the middle
>of the board the height of the arc will be 4 inches from the bottom of
>the board (midpoint along the bottom of the board). If I know the
>formula for this, I can mark the point 4 inches up from the midpoint
>at the bottom of the board, run down the radius line the "proper
>distance" (this is the unknown I'm looking for), anchor the end of a
>string, run back up the distance of the radius, wrap the string around
>a pencil and draw my arc.
>This is not a hypothetical question. I am a woodworker and find myself
>having to construct such an arc quite often using a time-consuming
>"trial and error" method. A formula using the 22 inches and 4 inches
>to find the center of the circle, and thus the radius, would be a
>great help.
>Thank you for your help,
>Philip R. Wetherington
>Macon, GA
There is an answer or two already posted but they require the radius
to solve the problem. So, I found a theorem that should help get the
radius of the circle that contains the arc. "If two chords intersect
in a circle then the products of the segments of the chords are
equal." So, the diameter of the circle and the 22 inch chord are both
chords that intersect.
The diameter is 4 + S and the other chord is 22 inches, divided into
segments of 11 inches each. So we multiply:
4 × S = 11 × 11
S = 121/4 = 30.25, so the diameter of the circle is 4 + 30.25 = 34.25 inches. Half of that, 17.125 inches, is the radius.
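Packaged as a small formula (my addition, not part of the original reply): with chord length c and arc height h, the intersecting-chords relation h · s = (c/2)² gives the diameter h + s, and hence the radius:

```python
def arc_radius(chord, height):
    """Radius of the circle through an arc with the given chord length and rise.

    From the intersecting-chords theorem: height * s = (chord / 2) ** 2,
    where height + s is the diameter of the circle.
    """
    s = (chord / 2) ** 2 / height
    return (height + s) / 2

# The woodworker's measurements: 22-inch chord, 4-inch rise.
print(arc_radius(22, 4))    # 17.125
```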
I found a similar problem in a high school geometry book, in a
chapter called "special segments in a circle," where a clockmaker was
trying to repair a broken escape wheel in an antique clock. The book
reference is:
Lynnfield Statistics Tutor
Find a Lynnfield Statistics Tutor
...I strive to help students understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well. I am a second year graduate student at
MIT, and bilingual in French and English. I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point.
16 Subjects: including statistics, French, elementary math, algebra 1
...I have a math degree from MIT and taught math at Rutgers University for 10 years. I don't just know the material; I know the student as well. I performed well in my physics courses as an MIT
student. I have tutored students in classical mechanics, electricity, and magnetism.
24 Subjects: including statistics, chemistry, calculus, physics
...In order for a website to truly capture a client it must also be personalized. PHP is the server-side scripting language that, particularly when paired with MySQL, allows you to store,
retrieve, and make decisions based upon the knowledge you have acquired of your clients. I have extensive know...
46 Subjects: including statistics, calculus, geometry, algebra 1
...In addition to private tutoring, I have taught summer courses, provided tutoring in Pilot schools, assisted in classrooms, and run test preparation classes (MCAS and SAT). Students tell me I'm
awesome; parents tell me that I am easy to work with. My style is easy-going; my expectations are real...
8 Subjects: including statistics, geometry, algebra 1, SAT math
...Also the use of the TI-84 calculator In the past 6 years I have taught more than a dozen sections of Elementary Statistics at the Community College level. These courses included a chapter on
probability and counting. I have been an AP Statistics reader since 2011.
13 Subjects: including statistics, physics, ASVAB, algebra 1
Just another lambdabananacamel,
Greg confessed on #london.pm to having written a average function in Perl in FP style… recursively.
I asked him about this, as a perfectly good average function (using List::Util) would be:
sub avg { sum(@_) / @_ }
which is perfectly fine FP style too. As far as I could understand
it, he was reducing a tuple of sum and count. Though he said he wrote
it just-for-fun, it's not entirely unreasonable if you're using a
list-based language, where you'd otherwise have to traverse the list
twice (in Perl, arrays know how long they are).
Out of curiosity, I tried to check how Haskell defined avg, but it seems that it doesn't.
LoganCapaldo suggested the following definition:
> import Data.List
> avg xs = sum xs / genericLength xs
We use genericLength because the normal length
function insists on returning an integer, while genericLength returns a
generic Number, which can be divided appropriately. (We'd otherwise
have to write sum xs / (fromIntegral $ length xs))
The recursive function would work by summing a tuple, so that
avgs [1,3,5] => (9,3)
Once we have that we could divide the tuple using uncurry.
> avg' = uncurry (/) . avgs
As we're going to fold the number values into a tuple, we need a function like:
> sum_count s (t,c) = (t+s, c+1)
after which our definition is easily written as a fold
> avgs = foldr sum_count (0,0)
Which works just fine. But seeing the tuples, I thought of the comments from Joao and doserj, and thought of (&&&).
> import Control.Arrow
> sum_count s = (+s).fst &&& succ.snd
And byorgey commented that the .fst and .snd pair can be abstracted away using (***). So I tried
> sum_count s = (+s) *** succ
and was really rather surprised that it worked! Admittedly, the
definition is now something that scares me and whose intent is far less
clear than the first naive definition above:
> avg2 = uncurry (/) . foldr (\s -> (+s) *** succ) (0,0)
For extra bonus obfuscation points, making the lambda expression points-free is left as an exercise to the reader ;-)
Update: mauke pointed out that my Perl avg sub didn't work… added parens to sum (@_). Looks like naughty me didn't bother testing that code…
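For comparison, here is my own rendering of the same single-pass (sum, count) fold in Python (not from the original post):

```python
from functools import reduce

def avg(xs):
    """Single-pass average: fold each element into a (sum, count) pair."""
    total, count = reduce(lambda acc, x: (acc[0] + x, acc[1] + 1), xs, (0, 0))
    return total / count

print(avg([1, 3, 5]))   # 3.0
```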
On My New Theory of Computation Series
The fall 2012 semester is approaching. It’s not as fast as those winter waves in Waimea Bay, but close enough. (Yes, the above photo I took is of the same beach — click for a larger view.)
Here are all the courses I’m taking:
1. Applied Abstract Algebra
2. Computer Graphics
3. Probability
4. Theory of Computation
All are lectures, with Computer Graphics being the only course that includes a lab component. Classes 1 and 3 satisfy my math major requirements, while 2 and 4 are for computer science. For the first
semester ever, I won't have a single class that falls outside my two majors. So on the one hand, this means I'll maximize my dedication in these classes, and will probably get high
marks (famous last words).
But unfortunately, it means I won’t have as many options if I get a little burned out of computer science and math. I tend to spend long hours studying for exams and working on homework, so I’m going
to try and do something that will hopefully alleviate some of the workload. This is purely an experiment, and one that I plan to continue if it brings solid results this semester.
The Plan
I’m going to make a series of blog posts on my Theory of Computation class (henceforth, CS 361). For reference, here’s the course description from the Williams Course Catalog that delineates the fun
stuff coming up for me:
This course introduces a formal framework for investigating both the computability and complexity of problems. We study several models of computation including finite automata, regular languages,
context-free grammars, and Turing machines. These models provide a mathematical basis for the study of computability theory–the examination of what problems can be solved and what problems cannot
be solved–and the study of complexity theory–the examination of how efficiently problems can be solved. Topics include the halting problem and the P versus NP problem.
After every few classes, I hope to record on Seita’s Place what I learned and any relevant information going above and beyond the classroom discussion. By the time I take the midterm and final, I’ll
have a nice repository of information online to help do quick review. I will strive to start these entries as soon as possible in draft form, and will add information to them a few hours after each
CS 361 class.
There will be a consistent format for each of the posts. Each entry will be titled “CS Theory Part X: Y” where X is some natural number, and Y is a phrase that relates with the material I’ve learned
and will cover in the blog entry. I want this to be like a personal Wikipedia that makes heavy use of rigorous proofs and outside sources.
The Benefits
So why do I want to do this? The most important benefit will be that it deepens my knowledge of theoretical computer science in a way that avoids long study hours and memorization sessions. Again, since
I plan to update these entries soon after my classes end, I will minimize the amount of material I forget over time.
understand the material well, a prerequisite for explaining a subject in depth. (There’s a whole host of information online that backs up the previous claim.) Since I don’t want to write a book on
theory, I have to pick the right spots to focus on, which requires me to be able to effectively judge the importance of all the concepts hurled at me in the class. Also, using the Internet over paper
to express these posts makes it easier to link together concepts in a web, as explained by Scott Young’s holistic learning method.
But this raises the question: why this class, and not one of the other three?
My long-term goal is to pursue a Ph.D in computer science. As part of the process, I'll be taking the computer science GRE and Ph.D qualifying exams. As you may expect from the course description, the
material in CS 361 is more closely related to what's going to be on the test than the material in the other three classes. Taken from the Educational Testing Service Website, we see that 40 percent
of the material is theory!
III. THEORY AND MATHEMATICAL BACKGROUND — 40%
A. Algorithms and complexity
Exact and asymptotic analysis of specific algorithms
Algorithmic design techniques (e.g., greedy, dynamic programming, divide and conquer)
Upper and lower bounds on the complexity of specific problems
Computational complexity, including NP-completeness
B. Automata and language theory
Models of computation (finite automata, Turing machines)
Formal languages and grammars (regular and context-free)
C. Discrete structures
Mathematical logic
Elementary combinatorics and graph theory
Discrete probability, recurrence relations and number theory
I suspect the amount of theory material on Ph.D qualifying exams is similar. These vary among institutions, so there’s no standard.
Computer graphics, while no doubt an interesting subject, isn’t as important in terms of the subject test material.
IV. OTHER TOPICS — 5%
Example areas include numerical analysis, artificial intelligence, computer graphics, cryptography, security and social issues.
It would also be more difficult for me to post graphics-related concepts online, as I’m certain that would involve an excessive number of figures and photos. I do have a limit on how many images I
can upload here, and I’m not really keen on doing a whole lot of copying from my graphics class’ webpage; I prefer to have the images here be created by myself.
I also chose CS 361 over my two math classes. If I’m planning to pursue doctoral studies in computer science, it makes sense to focus on CS 361 more than the math classes. I was seriously considering
doing some Probability review here, but the possibly vast number of diagrams and figures I’d have to include (as in graphics) is a deterrent.
Finally, another benefit of writing the series will be to increase my attention to Seita’s Place. I hope to boost my writing churn rate and my research into deafness and deaf culture. Even though
it’s relatively minor, this blog has become more important to me over the past year, and I want to make sure it flourishes.
I’ll keep you posted.
3 Responses to On My New Theory of Computation Series
1. Cool! Remember the theoretical computer science presentation from the summer academy? Perhaps you can revisit it after you learn about “theory of computation” :D
□ Unfortunately, my algorithms class (last spring) covered everything you guys talked about. But I’ll still keep it in mind. :)
Willis, TX Statistics Tutor
Find a Willis, TX Statistics Tutor
...I look forward to tutoring linear algebra. I care about my students, their needs, and their concerns. I have extensive experience in all aspects of computers.
54 Subjects: including statistics, reading, chemistry, writing
...There, I successfully aided students in Algebra, Statistics, Calculus, Inorganic and Organic Chemistry, and Physics. I occasionally assisted students with their English and Biology subjects,
but my expertise in the physical sciences was more sought-after. I aim to explain the subject in a way that the student can understand.
41 Subjects: including statistics, chemistry, English, reading
...No two students are alike and neither is the method of instruction. What works for one doesn't necessarily work for others, and I have the willingness to differentiate the instructional
strategies. Give me a try!
39 Subjects: including statistics, reading, Spanish, English
I have taught math and science as a tutor since 1989. I am a retired state certified teacher in Texas both in composite high school science and mathematics. I offer a no-fail guarantee (contact
me via WyzAnt for details). I am available at any time of the day; I try to be as flexible as possible.
35 Subjects: including statistics, chemistry, physics, calculus
My name is Kevin, and I can tutor a variety of subjects. I am a research scientist, Yale-educated with a postgraduate degree. I have tutored in the past.
85 Subjects: including statistics, English, Spanish, reading
Related Willis, TX Tutors
Willis, TX Accounting Tutors
Willis, TX ACT Tutors
Willis, TX Algebra Tutors
Willis, TX Algebra 2 Tutors
Willis, TX Calculus Tutors
Willis, TX Geometry Tutors
Willis, TX Math Tutors
Willis, TX Prealgebra Tutors
Willis, TX Precalculus Tutors
Willis, TX SAT Tutors
Willis, TX SAT Math Tutors
Willis, TX Science Tutors
Willis, TX Statistics Tutors
Willis, TX Trigonometry Tutors
Nearby Cities With statistics Tutor
Coldspring statistics Tutors
Cut And Shoot, TX statistics Tutors
Dobbin statistics Tutors
Magnolia, TX statistics Tutors
New Caney statistics Tutors
New Waverly, TX statistics Tutors
North Cleveland, TX statistics Tutors
Oak Ridge N, TX statistics Tutors
Oak Ridge North, TX statistics Tutors
Panorama Village, TX statistics Tutors
Pinehurst, TX statistics Tutors
Plum Grove, TX statistics Tutors
Richards, TX statistics Tutors
Stagecoach, TX statistics Tutors
Todd Mission, TX statistics Tutors
|
{"url":"http://www.purplemath.com/Willis_TX_Statistics_tutors.php","timestamp":"2014-04-19T02:40:44Z","content_type":null,"content_length":"23844","record_id":"<urn:uuid:ef9efe68-7029-45de-8369-af0c8e42457e>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hyperbolic sine. Related to cosh; in fact, its graph looks very similar.
It is pronounced like "cinch" (think 'sin h', where sin is said like the religious term for wrongdoing, and h is said as a letter). A joke I thought was funny was to say "it's a sinh" and spell "it's
a" with "iota, tau, sigma, alpha." How the function earns its "hyperbolic" name becomes clear in calculus. Not quite as useful a function as sine, but still helpful in certain cases. sinh is the derivative
of cosh, and vice versa, unlike their cousins cos and sin, where cos is the derivative of sin but -sin is the derivative of cos.
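Concretely, sinh can be defined by exponentials, sinh x = (e^x - e^-x)/2. A quick numerical sketch (in Python; the helper names are mine) checks the derivative relationship described above:

```python
import math

def sinh(x):
    # Hyperbolic sine from its exponential definition: (e^x - e^-x) / 2
    return (math.exp(x) - math.exp(-x)) / 2

def cosh(x):
    # Hyperbolic cosine: (e^x + e^-x) / 2
    return (math.exp(x) + math.exp(-x)) / 2

# Numerically check that sinh is the derivative of cosh and vice versa
# (no sign flip, unlike sin/cos where d/dx cos = -sin).
h = 1e-6
for x in (0.0, 0.5, 2.0):
    d_cosh = (cosh(x + h) - cosh(x - h)) / (2 * h)   # central difference
    d_sinh = (sinh(x + h) - sinh(x - h)) / (2 * h)
    assert abs(d_cosh - sinh(x)) < 1e-5
    assert abs(d_sinh - cosh(x)) < 1e-5
```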
|
{"url":"http://everything2.com/title/sinh","timestamp":"2014-04-18T16:20:39Z","content_type":null,"content_length":"26866","record_id":"<urn:uuid:f6f1f3c7-7082-495f-a41d-15955b97efeb>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
|
n-plectic geometry
$n$-plectic geometry is a generalization of symplectic geometry to higher category theory.
It is closely related to multisymplectic geometry and n-symplectic manifolds.
For $n \in \mathbb{N}$, an n-plectic vector space is a vector space $V$ (over the real numbers) equipped with a skew-symmetric $(n+1)$-linear map
$\omega : \wedge^{n+1} V \to \mathbb{R}$
such that the induced map
$V \to \wedge^n V^*$
has trivial kernel.
Let $X$ be a smooth manifold, $\omega \in \Omega^{n+1}(X)$ a differential form.
We say $(X,\omega)$ is a $n$-plectic manifold if
• $\omega$ is closed: $d \omega = 0$;
• for all $x \in X$ the map
$\hat \omega : T_x X \to \Lambda^n T_x^* X$
given by contraction of vectors with forms
$v \mapsto \iota_v \omega$
is injective.
See also the definition at multisymplectic geometry.
1. For $X$ orientable, take $\omega$ the volume form. This is $(dim(X)-1)$-plectic.
2. $\wedge^n T^* X \to X$ with its canonical $n$-plectic form (the higher analogue of the canonical symplectic form on $T^* X$)
3. $G$ a compact simple Lie group,
let $u : (x,y,z) \mapsto \langle x, [y,z]\rangle$ be the canonical Lie algebra 3-cocycle and extend it left-invariantly to a 3-form $\omega_u$ on $G$. Then $(G,\omega_u)$ is 2-plectic.
Poisson $L_\infty$-algebras
To an ordinary symplectic manifold is associated its Poisson algebra. Underlying this is a Lie algebra, whose Lie bracket is the Poisson bracket.
We discuss here how to an $n$-plectic manifold for $n \gt 1$ there is correspondingly associated not a Lie algebra, but a Lie n-algebra: the Poisson bracket Lie n-algebra. It is natural to call this
a Poisson Lie $n$-algebra (see for instance at Poisson Lie 2-algebra).
(Not to be confused with the Lie algebra of a Poisson Lie group, which is a Lie group that itself is equipped with a compatible Poisson manifold structure. It is a bit unfortunate that there is no
better established term for the Lie algebra underlying a Poisson algebra apart from “Poisson bracket”.)
Consider the ordinary case in $n=1$ for how a Poisson algebra is obtained from a symplectic manifold $(X, \omega)$.
$\hat \omega : T_x X \to T^*_x X$
is an isomorphism.
Given $f \in C^\infty(X)$, $\exists ! v_f \in \Gamma(T X)$ such that $d f = - \omega(v_f, -)$
Define $\{f,g\} := \omega(v_f, v_g)$. Then $(C^\infty(X), \{-,-\})$ is a Poisson algebra.
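As a concrete illustration (a standard example, not taken from this entry): on $\mathbb{R}^2$ with coordinates $(q,p)$ the convention $d f = -\omega(v_f,-)$ reproduces the familiar Poisson bracket:

```latex
% On (X, \omega) = (\mathbb{R}^2, dq \wedge dp), the convention
% d f = -\iota_{v_f}\omega fixes the Hamiltonian vector field and
% recovers the coordinate formula for the Poisson bracket:
\omega = d q \wedge d p, \qquad
v_f = -\frac{\partial f}{\partial p}\,\partial_q
      + \frac{\partial f}{\partial q}\,\partial_p,
\qquad
\{f, g\} = \omega(v_f, v_g)
  = \frac{\partial f}{\partial q}\frac{\partial g}{\partial p}
  - \frac{\partial f}{\partial p}\frac{\partial g}{\partial q}.
```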
We can generalize this to $n$-plectic geometry.
Let $(X,\omega)$ be $n$-plectic for $n \gt 1$.
Observe that then $\hat \omega : T_x X \to \wedge^n T^*_x X$ is no longer an isomorphism in general.
$\alpha \in \Omega^{n-1}(X)$
is Hamiltonian precisely if
$\exists v_\alpha \in \Gamma(T X)$
such that
$d \alpha = - \omega(v_\alpha, -) \,.$
This makes $v_\alpha$ uniquely defined.
Denote the collection of Hamiltonian forms by $\Omega^{n-1}_{Hamilt}(X)$.
Define a bracket
$\{-,-\} : \Omega^{n-1}_{Hamilt}(X)^{\otimes_2} \to \Omega^{n-1}_{Hamilt}(X)$
$\{\alpha, \beta\} = - \omega(v_\alpha, v_\beta, -, \cdots, -) \,.$
This satisfies
1. $\{\alpha, \beta\}$ is again Hamiltonian, with $v_{\{\alpha,\beta\}} = [v_\alpha, v_\beta]$, i.e.
$d \{\alpha, \beta\} = - \omega([v_\alpha, v_\beta], -, \cdots, -) \,.$
2. $\{-,-\}$ is skew-symmetric;
3. $\{\alpha_1, \{\alpha_2, \alpha_3\}\}$ + cyclic permutations equals, up to sign,
$d \omega(v_{\alpha_1}, v_{\alpha_2}, v_{\alpha_3}, -, \cdots)$.
So the Jacobi identity fails only up to an exact term. This yields the structure of an L-infinity algebra.
Given an $n$-plectic manifold $(X,\omega)$ we get a Lie n-algebra structure on the complex
$C^\infty(X) \stackrel{d_{dR}}{\to} \Omega^1(X) \stackrel{d_{dR}}{\to} \cdots \stackrel{d_{dR}}{\to} \Omega^{n-1}_{Hamilt}(X)$
(where the rightmost term is taken to be in degree 0).
• the unary bracket is $d_{dR}$;
• the $k$-ary bracket is
$[\alpha_1, \cdots, \alpha_k] = \left\{ \array{ \pm \omega(v_{\alpha_1}, \cdots, v_{\alpha_k}) & if \forall i : \alpha_i \in \Omega^{n-1}_{Hamilt}(X) \\ 0 & otherwise } \right.$
This is the Poisson bracket Lie n-algebra.
This appears as (Rogers, theorem 3.14).
For $n = 1$ this recovers the definition of the Lie algebra underlying a Poisson algebra.
Review of the symplectic situation
Recall for $n=1$ the mechanism of geometric quantization of a symplectic manifold.
Given a 2-form $\omega$ with integral periods and the corresponding principal $U(1)$-bundle $P$, consider the Atiyah Lie algebroid sequence
$ad P \to T P/U(1) \to T X$
The smooth sections of $T P/U(1) \to X$ are the $U(1)$ invariant vector fields on the total space of $P$.
Using a connection $\nabla$ on $P$ we may write such a section as
$s(v) + f \partial_t$
for $v \in \Gamma(T X)$ a vector field downstairs, $s(v)$ a horizontal lift with respect to the given connection and $f \in C^\infty(X)$.
Locally on a suitable patch $U \subset X$ we have that $s(v)|_U = v|_U + \iota_v \theta_i|_U$.
We say that $\tilde v = s(v) + f \partial_t$ preserves the splitting iff $\forall u \in \Gamma(T X)$ we have
$[\tilde v, s(u)] = s([v,u]) \,.$
One finds that this is the case precisely if
$d f = - \iota_v \omega \,.$
This gives a homomorphism of Lie algebras
$C^\infty(X) \to \Gamma(T P / U(1))$
$f \mapsto s(v_f) + f \partial_t \,.$
2-plectic geometry and Courant algebroids
We consider now prequantization of 2-plectic manifolds.
Let $(X,\omega)$ be a 2-plectic manifold such that the de Rham cohomology class $[\omega]/2 \pi i$ is in the image of integral cohomology (i.e., $\omega$ has integral periods).
We can form a cocycle in Deligne cohomology from this, encoding a bundle gerbe with connection.
On a cover $\{U_i \to X\}$ of $X$ this is given in terms of Cech cohomology by data
• $(g_{i j k} : U_{i j k} \to U(1)) \in C^\infty(U_{i j k}, U(1))$
• $A_{i j} \in \Omega^1(U_{i j})$;
• $B_i \in \Omega^2(U_i)$
satisfying a cocycle condition.
Now recall that an exact Courant algebroid is given by the following data:
• a vector bundle $E \to X$;
• an anchor morphism $\rho : E \to T X$ to the tangent bundle;
• an inner product $\langle -,-\rangle$ on the fibers of $E$;
• a bracket $[-,-]$ on the sections of $E$.
Satisfying some conditions.
The fact that the Courant algebroid is exact means that
$0 \to T^* X \to E \to T X \to 0$
is an exact sequence.
The standard Courant algebroid is the example where
• $E = T X \oplus T^* X$;
• $\langle v_1 + \alpha_1, v_2 + \alpha_2\rangle = \alpha_2(v_1) + \alpha_1(v_2)$;
• the bracket is the skew-symmetrization of the Dorfman bracket
$[ [ v_1 + \alpha_1, v_2 + \alpha_2 ] ] = [v_1, v_2] + \mathcal{L}_{v_1}\alpha_2 - (d \alpha_1)(v_2,-)$
Now with respect to the above Deligne cocycle, build a Courant algebroid as follows:
• on each patch $U_i$ it is the standard Courant algebroid $E_i := T U_i \oplus T^* U_i$;
• glued together on double intersections using the $d A_{i j}$
This gives an exact Courant algebroid $E \to X$ as well as a splitting $s : T X \to E$ given by the $\{B_i\}$.
The bracket on this $E$ is given by the skew-symmetrization of
$[ [ s(v_1) + \alpha_1, s(v_2) + \alpha_2 ] ] = s([v_1, v_2]) + \mathcal{L}_{v_1} \alpha_2 - (d \alpha_1)(v_2, -) - \omega(v_1, v_2, \cdots) \,.$
Here a section $e = s(v) + ...$ preserves the splitting precisely if
for all $u \in \Gamma(T X)$ we have
$[ [ e, s(u)] ]_D = s([v,u])$
exactly if $\alpha$ is Hamiltonian and $v = v_\alpha$.
Recall that to every Courant algebroid $E$ is associated a Lie 2-algebra $L_\infty(E)$.
Then: we have an embedding of L-infinity algebras
$\phi : L_\infty(X,\omega) \to L_\infty(E)$
given by $\phi(\alpha) = s(v_\alpha) + \alpha$.
Central extensions under geometric quantization
higher and integrated Kostant-Souriau extensions
(∞-group extension of ∞-group of bisections of higher Atiyah groupoid for $\mathbb{G}$-principal ∞-connection)
$(\Omega \mathbb{G})\mathbf{FlatConn}(X) \to \mathbf{QuantMorph}(X,\nabla) \to \mathbf{HamSympl}(X,\nabla)$
(extensions are listed for sufficiently connected $X$)
• Chris Rogers, $L_\infty$ algebras from multisymplectic geometry , Letters in Mathematical Physics April 2012, Volume 100, Issue 1, pp 29-50 (arXiv:1005.2230, journal).
• Chris Rogers, 2-plectic geometry, Courant algebroids, and categorified prequantization , arXiv:1009.2975.
• Chris Rogers, Higher geometric quantization, talk at Higher Structures 2011 in Göttingen (pdf slides)
Discussion in the more general context of higher differential geometry/extended prequantum field theory is in
See also the references at multisymplectic geometry and n-symplectic manifold.
A higher differential geometry-generalization of contact geometry in line with multisymplectic geometry/$n$-plectic geometry is discussed in
• Luca Vitagliano, L-infinity Algebras From Multicontact Geometry (arXiv.1311.2751)
Some further references on applications, beyond those mentioned in the articles above.
A survey of some (potential) applications of 2-plectic geometry in string theory and M2-brane models is in section 2 of
and in
|
{"url":"http://www.ncatlab.org/nlab/show/n-plectic%20geometry","timestamp":"2014-04-18T05:32:00Z","content_type":null,"content_length":"100617","record_id":"<urn:uuid:2be4b5cb-b572-4059-9d05-8326497ba934>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Please I need help in these exercises:
tan (x+π/4)=2tanx + 2
Solve `tan(x+pi/4)=2tanx+2` :
Now `tan(A+B)=(tanA+tanB)/(1-tanAtanB)` so we can rewrite the left-hand side:
`(tanx+tan(pi/4))/(1-tanxtan(pi/4))=2tanx+2` and `tan(pi/4)=1` so we have:
`(tanx+1)/(1-tanx)=2tanx+2` Multiply both sides by (1-tanx)
`tanx+1=(2tanx+2)(1-tanx)=2(tanx+1)(1-tanx)` Move everything to one side and factor out `(tanx+1)`:
`(tanx+1)(1-2(1-tanx))=0 ==> (2tanx-1)(tanx+1)=0` By the zero product property
`tanx=1/2 "or" tanx=-1`
If tanx=-1 then `x=-pi/4+npi,n in ZZ` (n an integer)
If `tanx=1/2 ==> x=tan^(-1)1/2==>x~~.464+npi,n in ZZ`
The solutions are `x=-pi/4+npi,x~~.464+npi`
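The two solution families can be spot-checked numerically. A short sketch (in Python; the helper names are mine):

```python
import math

# Numerically verify that the solution families found above satisfy
# tan(x + pi/4) = 2*tan(x) + 2.
def lhs(x):
    return math.tan(x + math.pi / 4)

def rhs(x):
    return 2 * math.tan(x) + 2

solutions = [
    -math.pi / 4,                  # from tan x = -1  (n = 0)
    -math.pi / 4 + math.pi,        # from tan x = -1  (n = 1)
    math.atan(0.5),                # from tan x = 1/2 (n = 0)
    math.atan(0.5) + math.pi,      # from tan x = 1/2 (n = 1)
]

for x in solutions:
    assert abs(lhs(x) - rhs(x)) < 1e-9, x
```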
The graph of the left side in black, the right side in red:
|
{"url":"http://www.enotes.com/homework-help/please-need-help-these-exercises-tan-x-4-2tanx-2-442785","timestamp":"2014-04-20T04:35:56Z","content_type":null,"content_length":"25696","record_id":"<urn:uuid:48115a31-9a9d-4623-91b3-5d75bb813a62>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Trapezoids, Rhombi, and Kites
10.2: Trapezoids, Rhombi, and Kites
Created by: CK-12
Learning Objectives
• Derive and use the area formulas for trapezoids, rhombi, and kites.
Review Queue
Find the area of the shaded regions in the figures below.
2. $ABCD$
3. $ABCD$
Know What? The Brazilian flag is to the right. The flag has dimensions of $20 \times 14$.
Find the area of the rhombus (including the circle). Do not round your answer.
Area of a Trapezoid
Recall that a trapezoid is a quadrilateral with one pair of parallel sides. The lengths of the parallel sides are the bases and the perpendicular distance between the parallel sides is the height of
the trapezoid.
To find the area of the trapezoid, make a copy of the trapezoid and then rotate the copy $180^\circ$. The two trapezoids together form a parallelogram with height $h$ and base $b_1 + b_2$, so its area is $A=h(b_1+b_2)$.
Because this parallelogram consists of two congruent trapezoids, the area of one trapezoid is $A=\frac{1}{2} h(b_1+b_2)$.
Area of a Trapezoid: $A=\frac{1}{2} h(b_1+b_2)$
The height $h$ is always perpendicular to the bases.
You could also say the area of a trapezoid is the average of the bases times the height.
Example 1: Find the area of the trapezoids below.
a) $A = \frac{1}{2} (11)(14+8)\!\\A = \frac{1}{2} (11)(22)\!\\A = 121 \ units^2$
b) $A = \frac{1}{2} (9)(15+23)\!\\A = \frac{1}{2} (9)(38)\!\\A = 171 \ units^2$
Example 2: Find the perimeter and area of the trapezoid.
Solution: Even though we are not told the length of the second base, we can find it using special right triangles. Both triangles at the ends of this trapezoid are isosceles right triangles, so the
hypotenuses are $4 \sqrt{2}$ and both legs are 4. The height is therefore 4, and the longer base is $8+4+4=16$.
$P &= 8+4\sqrt{2}+16+4\sqrt{2} && A=\frac{1}{2}(4)(8+16)\\P &= 24+8\sqrt{2} \approx 35.3 \ units && A=48 \ units^2$
Area of a Rhombus and Kite
Recall that a rhombus is an equilateral quadrilateral and a kite has adjacent congruent sides.
Both of these quadrilaterals have perpendicular diagonals, which is how we are going to find their areas.
Notice that the diagonals divide each quadrilateral into 4 triangles. If we move the two triangles on the bottom of each quadrilateral so that they match up with the triangles above the horizontal
diagonal, we would have two rectangles.
So, the height of these rectangles is half of one of the diagonals and the base is the length of the other diagonal.
Area of a Rhombus: $A=\frac{1}{2} d_1 d_2$
The area is half the product of the diagonals.
Area of a Kite: $A=\frac{1}{2} d_1 d_2$
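The area formulas above can be collected in a short sketch (in Python; the function names are mine), checked against values from the worked examples in this section:

```python
def trapezoid_area(b1, b2, h):
    # Average of the bases times the height.
    return 0.5 * h * (b1 + b2)

def rhombus_or_kite_area(d1, d2):
    # Half the product of the perpendicular diagonals.
    return 0.5 * d1 * d2

# Check against Example 1a and Example 3a in this section.
assert trapezoid_area(14, 8, 11) == 121.0
assert rhombus_or_kite_area(16, 24) == 192.0
```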
Example 3: Find the perimeter and area of the rhombi below.
Solution: In a rhombus, all four triangles created by the diagonals are congruent.
a) To find the perimeter, you must find the length of each side, which would be the hypotenuse of one of the four triangles. Use the Pythagorean Theorem.
$12^2+8^2 &=side^2 && A=\frac{1}{2} \cdot 16 \cdot 24\\144+64 &= side^2 && A=192\\side &= \sqrt{208}=4 \sqrt{13}\\P &= 4 \left( 4\sqrt{13} \right)=16 \sqrt{13}$
b) Here, each triangle is a 30-60-90 triangle with a hypotenuse of 14. From the special right triangle ratios the short leg is 7 and the long leg is $7 \sqrt{3}$
$P &= 4 \cdot 14=56 && A=\frac{1}{2} \cdot 14 \cdot 14\sqrt{3}=98\sqrt{3}$
Example 4: Find the perimeter and area of the kites below.
Solution: In a kite, there are two pairs of congruent triangles. Use the Pythagorean Theorem in both problems to find the length of sides or diagonals.
a) $&\text{Shorter sides of kite} && \text{Longer sides of kite}\\6^2+5^2 &= s_1^2 && 12^2+5^2=s_2^2\\36+25 &= s_1^2 && 144+25=s_2^2\\s_1 &= \sqrt{61} && \qquad \quad s_2=\sqrt{169}=13$
$P = 2 \left( \sqrt{61} \right)+2(13)=2\sqrt{61}+26 \approx 41.6 && A=\frac{1}{2} (10)(18)=90$
b) $&\text{Smaller diagonal portion} && \text{Larger diagonal portion}\\20^2+d_s^2&=25^2 && 20^2+d_l^2=35^2\\d_s^2&=225 && \qquad \ \ d_l^2=825\\d_s&=15 && \qquad \quad d_l=5\sqrt{33}$
$A=\frac{1}{2} \left(15+5 \sqrt{33} \right)(40) \approx 874.5 && P=2(25)+2(35)=120$
Example 5: The vertices of a quadrilateral are $A(2, 8), B(7, 9), C(11, 2)$ and $D(3, 3)$. Determine what type of quadrilateral $ABCD$ is and find its area.
Solution: After plotting the points, it looks like a kite: $AB = AD$ and $BC = DC$. To confirm, check that the diagonals are perpendicular:
$m_{AC} &= \frac{2-8}{11-2}=-\frac{6}{9}=-\frac{2}{3}\\m_{BD} &= \frac{9-3}{7-3}=\frac{6}{4}=\frac{3}{2}$
The diagonals are perpendicular, so $ABCD$ is a kite. Now find the lengths of the diagonals:
$d_1 &= \sqrt{(2-11)^2+(8-2)^2} && d_2=\sqrt{(7-3)^2+(9-3)^2}\\&= \sqrt{(-9)^2+6^2} && \quad =\sqrt{4^2+6^2}\\&= \sqrt{81+36}=\sqrt{117}=3\sqrt{13} && \quad =\sqrt{16+36}=\sqrt{52}=2\sqrt{13}$
Plug these lengths into the area formula for a kite. $A=\frac{1}{2} \left(3 \sqrt{13} \right)\left( 2\sqrt{13} \right)=39 \ units^2$
Know What? Revisited To find the area of the rhombus, we need to find the length of the diagonals. One diagonal is $20-1.7-1.7=16.6$ and the other is $14-1.7-1.7=10.6$, so $A=\frac{1}{2} (16.6)(10.6)=87.98 \ units^2$.
Review Questions
• Question 1 uses the formula of the area of a kite and rhombus.
• Questions 2-16 are similar to Examples 1-4.
• Questions 17-23 are similar to Example 5.
• Questions 24-27 use the area formula for a kite and rhombus and factors.
• Questions 28-30 are similar to Example 4.
1. Do you think all rhombi and kites with the same diagonal lengths have the same area? Explain your answer.
Find the area of the following shapes. Round your answers to the nearest hundredth.
Find the area and perimeter of the following shapes. Round your answers to the nearest hundredth.
Quadrilateral $ABCD$ has vertices $A(-2, 0), B(0, 2), C(4, 2)$ and $D(0, -2)$.
17. Find the slopes of $\overline{AB}$ and $\overline{DC}$. Plotting the points will help you find the answer.
18. Find the slope of $\overline{AD}$ and compare it to the slopes of $\overline{AB}$ and $\overline{DC}$.
19. Find $AB$, $AD$, and $DC$.
20. Use #19 to find the area of the shape.
Quadrilateral $EFGH$ has vertices $E(2, -1), F(6, -4), G(2, -7)$ and $H(-2, -4)$.
21. Find the slopes of all the sides and diagonals. What type of quadrilateral is this? Plotting the points will help you find the answer.
22. Find $HF$ and $EG$.
23. Use #22 to find the area of the shape.
For Questions 24 and 25, the area of a rhombus is $32 \ units^2$.
24. What would the product of the diagonals have to be for the area to be $32 \ units^2$?
25. List two possibilities for the length of the diagonals, based on your answer from #24.
For Questions 26 and 27, the area of a kite is $54 \ units^2$.
26. What would the product of the diagonals have to be for the area to be $54 \ units^2$?
27. List two possibilities for the length of the diagonals, based on your answer from #26.
Sherry designed the logo for a new company, made up of 3 congruent kites.
28. What are the lengths of the diagonals for one kite?
29. Find the area of one kite.
30. Find the area of the entire logo.
Review Queue Answers
1. $A = 9(8)+ \left [ \frac{1}{2} (9)(8) \right ] = 72 + 36 = 108 \ units^2$
2. $A = \frac{1}{2} (6)(12) \cdot 2 = 72 \ units^2$
3. $A = 4 \left [ \frac{1}{2} (6)(3) \right ] = 36 \ units^2$
|
{"url":"http://www.ck12.org/book/Basic-Geometry/r1/section/10.2/","timestamp":"2014-04-17T23:27:37Z","content_type":null,"content_length":"133006","record_id":"<urn:uuid:29d5a3fa-f4a7-4f0d-a1cf-4a273eab01c8>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Interprétation Fonctionnelle et Elimination des Coupures dans l’Arithmétique d’Ordre Supérieur
Results 11 - 20 of 177
- IN ACM SIGPLAN INTERNATIONAL CONFERENCE ON FUNCTIONAL PROGRAMMING , 1997
"... Statically typed languages with Hindley-Milner polymorphism have long been compiled using inefficient and fully boxed data representations. Recently, several new compilation methods have been
proposed to support more efficient and unboxed multi-word representations. Unfortunately, none of these tech ..."
Cited by 65 (14 self)
Add to MetaCart
Statically typed languages with Hindley-Milner polymorphism have long been compiled using inefficient and fully boxed data representations. Recently, several new compilation methods have been
proposed to support more efficient and unboxed multi-word representations. Unfortunately, none of these techniques is fully satisfactory. For example, Leroy's coercion-based approach does not handle
recursive data types and mutable types well. The type-passing approach (proposed by Harper and Morrisett) handles all data objects, but it involves extensive runtime type analysis and code
manipulations. This paper presents a new flexible representation analysis technique that combines the best of both approaches. Our new scheme supports unboxed representations for recursive and
mutable types, yet it only requires little runtime type analysis. In fact, we show that there is a continuum of possibilities between the coercion-based approach and the type-passing approach. By
varying the amount of boxing an...
- IN SCHEME AND FUNCTIONAL PROGRAMMING WORKSHOP , 2006
"... Static and dynamic type systems have well-known strengths and weaknesses, and each is better suited for different programming tasks. There have been many efforts to integrate static and dynamic
typing and thereby combine the benefits of both typing disciplines in the same language. The flexibility o ..."
Cited by 64 (10 self)
Add to MetaCart
Static and dynamic type systems have well-known strengths and weaknesses, and each is better suited for different programming tasks. There have been many efforts to integrate static and dynamic
typing and thereby combine the benefits of both typing disciplines in the same language. The flexibility of static typing can be improved by adding a type Dynamic and a typecase form. The safety and
performance of dynamic typing can be improved by adding optional type annotations or by performing type inference (as in soft typing). However, there has been little formal work on type systems that
allow a programmer-controlled migration between dynamic and static typing. Thatte proposed Quasi-Static Typing, but it does not statically catch all type errors in completely annotated programs.
Anderson and Drossopoulou defined a nominal type system for an object-oriented language with optional type annotations. However, developing a sound, gradual type system for functional languages with
structural types is an open problem. In this paper
, 1998
"... Recent advances in compiler technology have demonstrated the benefits of using strongly typed intermediate languages to compile richly typed source languages (e.g., ML). A typepreserving
compiler can use types to guide advanced optimizations and to help generate provably secure mobile code. Types, u ..."
Cited by 61 (16 self)
Add to MetaCart
Recent advances in compiler technology have demonstrated the benefits of using strongly typed intermediate languages to compile richly typed source languages (e.g., ML). A typepreserving compiler can
use types to guide advanced optimizations and to help generate provably secure mobile code. Types, unfortunately, are very hard to represent and manipulate efficiently; a naive implementation can
easily add exponential overhead to the compilation and execution of a program. This paper describes our experience with implementing the FLINT typed intermediate language in the SML/NJ production
compiler. We observe that a type-preserving compiler will not scale to handle large types unless all of its type-preserving stages preserve the asymptotic time and space usage in representing and
manipulating types. We present a series of novel techniques for achieving this property and give empirical evidence of their effectiveness.
, 1997
"... The ease of understanding, maintaining, and developing a large program depends crucially on how it is divided up into modules. The possible ways a program can be divided are constrained by the
available modular programming facilities ("module system") of the programming language being used. Experien ..."
Cited by 58 (0 self)
Add to MetaCart
The ease of understanding, maintaining, and developing a large program depends crucially on how it is divided up into modules. The possible ways a program can be divided are constrained by the
available modular programming facilities ("module system") of the programming language being used. Experience with the Standard-ML module system has shown the usefulness of functions mapping modules
to modules and modules with module subcomponents. For example, functions over modules permit abstract data types (ADTs) to be parameterized by other ADTs, and submodules permit modules to be
organized hierarchically. Module systems with such facilities are called higher-order, by analogy with higher-order functions. Previous higher-order module systems can be classified as either opaque
or transparent. Opaque systems totally obscure information about the identity of type components of modules, often resulting in overly abstract types. This loss of type identities precludes most
interesting uses of hi...
- Annals of Pure and Applied Logic , 1998
"... Girard and Reynolds independently invented System F (a.k.a. the second-order polymorphically typed lambda calculus) to handle problems in logic and computer programming language design,
respectively. Viewing F in the Curry style, which associates types with untyped lambda terms, raises the questions ..."
Cited by 58 (4 self)
Add to MetaCart
Girard and Reynolds independently invented System F (a.k.a. the second-order polymorphically typed lambda calculus) to handle problems in logic and computer programming language design, respectively.
Viewing F in the Curry style, which associates types with untyped lambda terms, raises the questions of typability and type checking . Typability asks for a term whether there exists some type it can
be given. Type checking asks, for a particular term and type, whether the term can be given that type. The decidability of these problems has been settled for restrictions and extensions of F and
related systems and complexity lower-bounds have been determined for typability in F, but this report is the first to resolve whether these problems are decidable for System F. This report proves that
type checking in F is undecidable, by a reduction from semiunification, and that typability in F is undecidable, by a reduction from type checking. Because there is an easy reduction from typability
to typ...
, 2001
"... We present a variant of Proof-Carrying Code (PCC) in which the trusted inference rules are represented as a higherorder logic program, the proof checker is replaced by a nondeterministic
higher-order logic interpreter and the proof by an oracle implemented as a stream of bits that resolve the nondet ..."
Cited by 55 (3 self)
Add to MetaCart
We present a variant of Proof-Carrying Code (PCC) in which the trusted inference rules are represented as a higherorder logic program, the proof checker is replaced by a nondeterministic higher-order
logic interpreter and the proof by an oracle implemented as a stream of bits that resolve the nondeterministic interpretation choices. In this setting, Proof-Carrying Code allows the receiver of the
code the luxury of using nondeterminism in constructing a simple yet powerful checking procedure. This oracle-based variant of PCC is able to adapt quite naturally to situations when the property
being checked is simple or there is a fairly directed search procedure for it. As an example, we demonstrate that if PCC is used to verify type safety of assembly language programs compiled from Java
source programs, the oracles that are needed are on the average just 12% of the size of the code, which represents an improvement of a factor of 30 over previous syntactic representations of PCC
proofs. ...
, 1991
"... The concept of relations over sets is generalized to relations over an arbitrary category, and used to investigate the abstraction (or logical-relations) theorem, the identity extension lemma,
and parametric polymorphism, for Cartesian-closed-category models of the simply typed lambda calculus and P ..."
Cited by 53 (1 self)
Add to MetaCart
The concept of relations over sets is generalized to relations over an arbitrary category, and used to investigate the abstraction (or logical-relations) theorem, the identity extension lemma, and
parametric polymorphism, for Cartesian-closed-category models of the simply typed lambda calculus and PL-category models of the polymorphic typed lambda calculus. Treatments of Kripke relations and
of complete relations on domains are included.
"... We study the problem of certifying programs combining imperative and functional features within the general framework of type theory. Type theory constitutes a powerful specification language,
which is naturally suited for the proof of purely functional programs. To deal with imperative programs, we ..."
Cited by 52 (4 self)
Add to MetaCart
We study the problem of certifying programs combining imperative and functional features within the general framework of type theory. Type theory constitutes a powerful specification language, which
is naturally suited for the proof of purely functional programs. To deal with imperative programs, we propose a logical interpretation of an annotated program as a partial proof of its specification.
The construction of the corresponding partial proof term is based on a static analysis of the effects of the program, and on the use of monads. The usual notion of monads is refined in order to
account for the notion of effect. The missing subterms in the partial proof term are seen as proof obligations, whose actual proofs are left to the user. We show that the validity of those proof
obligations implies the total correctness of the program. We also establish a result of partial completeness. This work has been implemented in the Coq proof assistant. It appears as a tactic taking
an ann...
, 1991
"... We present a new approach to the polymorphic typing of data accepting in-place modification in ML-like languages. This approach is based on restrictions over type generalization, and a refined
typing of functions. The type system given here leads to a better integration of imperative programming sty ..."
Cited by 49 (1 self)
Add to MetaCart
We present a new approach to the polymorphic typing of data accepting in-place modification in ML-like languages. This approach is based on restrictions over type generalization, and a refined typing
of functions. The type system given here leads to a better integration of imperative programming style with the purely applicative kernel of ML. In particular, generic functions that allocate mutable
data can safely be given fully polymorphic types. We show the soundness of this type system, and give a type reconstruction algorithm.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=178448&sort=cite&start=10","timestamp":"2014-04-20T10:09:16Z","content_type":null,"content_length":"37399","record_id":"<urn:uuid:df40d884-e713-45f7-b309-357f72cb8f01>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Can someone help me with this: factorise 5550 fully in 1) the integers and 2) the Gaussian integers?
I assume you can do 1). (Wink) For 2), you need to know that if p is an integer prime of the form 4k+3 then it is also a Gaussian prime. If it is of the form 4k+1 then it is always possible to
express it as a sum of two squares, $p=a^2+b^2$. Then p factorises in the Gaussian integers as $p=(a+ib)(a-ib)$, and those factors are both Gaussian primes. Finally, 2 factorises in the same way, $2=
(1+i)(1-i)$. So for example 11 is a Gaussian prime, but $13 = 3^2+2^2 = (3+2i)(3-2i)$.
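Following those rules, 5550 = 2 * 3 * 5^2 * 37, where 3 stays prime (3 ≡ 3 mod 4) while 2, 5 = 2^2 + 1^2 and 37 = 6^2 + 1^2 split. A quick numerical check (a Python sketch; the grouping into factors is mine):

```python
# Verify the Gaussian-integer factorisation of 5550 = 2 * 3 * 5^2 * 37
# using Python's complex arithmetic (exact here, since all values are
# small integers).
factors = [
    complex(1, 1), complex(1, -1),    # 2 = (1+i)(1-i)
    complex(3, 0),                    # 3 is a Gaussian prime (form 4k+3)
    complex(2, 1), complex(2, -1),    # 5 = (2+i)(2-i)
    complex(2, 1), complex(2, -1),    # second copy, for 5^2
    complex(6, 1), complex(6, -1),    # 37 = (6+i)(6-i)
]

product = complex(1, 0)
for f in factors:
    product *= f

assert product == complex(5550, 0)
```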
Summary and Conclusions
The ArcView Number.MakeRandom request does not work correctly. It has three major flaws:
1. It does not produce uniformly distributed pseudo-random numbers (except in special situations).
2. It fails for values less than -2^31 or greater than 2^31-1 but provides no indication of error.
3. Depending on how it is used, it can fail many tests of randomness, even after compensating for flaws #1 and #2.
However, these limitations can be overcome. Avenue scripts are provided below to experiment with the MakeRandom request, to replace the MakeRandom request, and to mimic Excel's RAND() function. ESRI
(the manufacturer of ArcView) would be wise to fix these problems in future releases of ArcView and to verify that any other random number generation in its software is not subject to the same flaws.
Note added 9 July 1999: Excel 95's RAND() function is much worse than Number.MakeRandom (after compensating for the problems noted in #1 and #2 above).
25 October 2001: Dilbert has an apt comment about random number generators.
The ArcView GIS software produces random numbers in many ways:
• Colors in new unique-value and chart legends are selected randomly.
• Dot legends place dots randomly within polygonal features.
• Color ramps for new grids (using the Spatial Analyst extension) are selected randomly.
• The Grid.MakeRandom and related requests produce grids of random values in the Spatial Analyst extension.
• The Number.MakeRandom request produces "random numbers".
The last technique is useful for testing, simulation, and other analytical pursuits. Therefore it should produce reasonably random-looking numbers. It does not, but it can be modified to do a pretty
good job.
The syntax for the Number.MakeRandom request is
Number.MakeRandom(min, max)
All that the ArcView help says about this request is "Returns a random number between min and max inclusive. UNIX systems base MakeRandom on the system service rand() which is not a great random
number generator (see man pages for details)." This seems to imply that MakeRandom is a "great" random number generator on Windows systems. There is evidence, presented below, that it may be based on
a reasonably good random number generator, but its implementation (as exposed through the MakeRandom request) is awful.
A little experimentation on a Windows machine running ArcView 3.1 reveals the true behavior of MakeRandom.
1. MakeRandom returns only integer values.
Compile and repeatedly run this little script.
MsgBox.Info(Number.MakeRandom(0,1).SetFormatPrecision(6).AsString, "")
It displays a "random number" between 0 and 1 each time, to six decimal places. You will see that only "0.000000" and "1.000000" are ever returned. Play with the constants 0 and 1 in this script: even
when they are replaced by non-integral values (such as 0 and 3.1416), MakeRandom returns only integers.
2. MakeRandom does not return uniformly distributed values.
Since only integer values are returned, one would expect MakeRandom to return them with equal frequencies (in the long run). The next script generates a lot of random values between fixed endpoints
and displays the frequencies with which they were produced.
NMax = 10        ' Upper limit of range of random values
NMin = 0         ' Lower limit of range of random values
NIter = 10000    ' Number of iterations
K = (NMax + 1 - NMin).Ceiling    ' Number of random values possible
dctF = Dictionary.Make(K Min NIter)
for each i in 1..NIter
  a = Number.MakeRandom(NMin, NMax)
  count = dctF.Get(a)
  if (count = NIL) then count = 0 end
  dctF.Set(a, count + 1)
end
s = ""
lstKeys = dctF.ReturnKeys
for each i in lstKeys
  a = dctF.Get(i)              ' Count
  f = a/NIter                  ' Frequency
  sd = (f*(1-f)/NIter).Sqrt    ' Approximate std. dev.
  s = s + i.AsString + ": " +
    a.SetFormatPrecision(0).AsString +
    " (" +
    (100*f).SetFormatPrecision(1).AsString +
    " +-" + (200*sd).SetFormatPrecision(1).AsString +
    "%)" + NL
end
MsgBox.Report(s, "Frequencies for" ++ NIter.AsString ++ "Iterations")
' end of script
You will see that there are serious problems when NMin or NMax are negative or exceed 2^31 - 1. You will also discover that the frequencies with which MakeRandom(NMin, NMax) returns NMin, 0 (if NMin
<=0 and NMax >= 0), and NMax are strange.
After many runs with different values of NMin and NMax, including non-integral values and negative values, you should be able to verify that MakeRandom behaves as if its implementation were as follows:
1. Truncate the decimal parts of NMin and NMax so that they become integers.
2. Reduce NMin and NMax modulo 2^32 so that they are in the range -2^31 .. (2^31-1). If in so doing NMax < NMin, then reverse these values to assure that NMax >= NMin.
3. Generate a 32-bit pseudorandom value. Using floating point arithmetic, scale this value so it lies in the range from NMin + 0.5 to NMax + 0.5 inclusive.
4. Truncate the decimal part of the result so it becomes an integer. Return this value.
The two truncations, the reductions modulo 2^32, and the additions of 0.5 (in step 3) all contribute to the anomalous behavior.
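The inferred steps can be simulated directly, at least for nonnegative ranges. The following Python sketch models steps 3 and 4 for NMin = 0, NMax = 2 (with `random.random` standing in for the underlying 32-bit generator); the truncation pinches each endpoint to half the interior width, reproducing the 25% / 50% / 25% frequencies:

```python
import random
from collections import Counter

# Scale a random value onto [0.5, 2.5] (step 3), then truncate (step 4).
samples = 100_000
counts = Counter(int(0.5 + random.random() * 2) for _ in range(samples))
for v in (0, 1, 2):
    print(v, round(counts[v] / samples, 2))   # roughly 0.25, 0.50, 0.25
```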
The following table illustrates the long-run frequencies of MakeRandom using various starting values and compares them to the expected uniform frequencies.
│ Value│Frequency│ Expected Uniform Frequency│Difference│
│Min=0, Max=2 │ │ │ │
│ 0│ 25.00%│ 33.33%│ -8.33%│
│ 1│ 50.00%│ 33.33%│ 16.67%│
│ 2│ 25.00%│ 33.33%│ -8.33%│
│Min=-3, Max=0│ │ │ │
│ -3│ 0.00%│ 25%│ -25.00%│
│ -2│ 16.67%│ 25%│ -8.33%│
│ -1│ 33.33%│ 25%│ 8.33%│
│ 0│ 50.00%│ 25%│ 25.00%│
│Min=-1, Max=1│ │ │ │
│ -1│ 0.00%│ 33.33%│ -33.33%│
│ 0│ 25.00%│ 33.33%│ -8.33%│
│ 1│ 75.00%│ 33.33%│ 41.67%│
These non-uniform frequencies have to be considered extremely serious flaws in Number.MakeRandom. As described below, however, they can be overcome by appropriate programming, so they are not
critical flaws.
3. MakeRandom does not produce independent values.
We applied G. Marsaglia's Diehard battery of tests to the Number.MakeRandom output. This is a collection of tests of random number generators. As of 1996 there exist many simple, fast pseudo-random
number generators that pass all these tests. Passing the tests is not assurance of true "randomness" but failing one or more provides evidence of non-randomness. These tests were motivated by
simulation needs. Thus, simulation results produced using a random number generator (RNG) that fails Diehard should be viewed with caution.
There is a complication. Diehard requires about 80 million random bits. A 32-bit RNG can be iterated 2.5 million times to produce this many random bits. From the study in the previous section we
suspect the underlying procedure for Number.MakeRandom is a 32-bit generator, but it is not exactly clear how a 32-bit result is converted into a random value. There are many, many ways to produce
the required 80 million bits (10 million bytes) of random values using Number.MakeRandom. We tried three natural ones, after compensating for the non-uniform frequency problem.
• Method 1: Number.MakeRandom(0, 2^31 -1) was called 2.5 million times in succession. Each time the result was doubled and then output in hexadecimal to a file, as required by Diehard.
• Method 2: Number.MakeRandom(0, 2^16 - 1) was called 5 million times. Each time the result was output in hexadecimal to a file.
• Method 3: Number.MakeRandom(0,2^8) was called 10 million times. Any result of 2^8 (=256) was converted to 0. All results were output in hexadecimal to a file.
(The frequency anomalies noted above are inconsequential when using Methods 1 or 2. Method 3 compensates for the frequency anomalies; if it did not, it would fail most of the tests.)
Here is the Avenue script for Method 3.
strOut = "Random3.txt"
fnOut = strOut.AsFilename
lfOut = LineFile.Make(fnOut, #FILE_PERM_WRITE)
if (lfOut = NIL) then
  MsgBox.Error("Unable to open file for writing", "")
  return NIL
end
' Build a lookup table mapping each byte value to its two hex digits.
chars = "0123456789ABCDEF"
char2 = ""
for each i in 0..15
  for each j in 0..15
    char2 = char2 + chars.Middle(i,1) + chars.Middle(j,1)
  end
end
char2 = char2 + "00"   ' For the case y = 256
N = 2^8 - 1      ' AV can't handle anything above 2^31 - 1.
M = 2^18 * 1.1   ' Will need to be 2^18, approximately
for each i in 1..M
  doMore = av.SetStatus(100*(i-0.5)/M)
  if (doMore.Not) then break end
  s = ""
  for each ii in 1..40
    y = Number.MakeRandom(0, N+1)   ' Cases 0 and N+1 have half the expected frequency.
    s = char2.Middle(2*y, 2) + s    ' Case N+1 converts to "00".
  end
  lfOut.WriteElt(s)   ' Write one 80-character line of packed hex.
end
lfOut.Close
' end of script
(In this script the conversion to hexadecimal is computed explicitly rather than using ArcView's Number.AsHexString request, which does not produce the required kind of output.) The script will take
about a half hour to execute on a 400 MHz Pentium II class machine.
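In a language with printf-style formatting, Method 3 needs only a few lines and no lookup table. This Python sketch does not call ArcView; `raw_make_random` merely models the half-frequency endpoint behavior inferred above:

```python
import random

def raw_make_random(n):
    # Model of MakeRandom(0, n) as inferred earlier: draw on [0.5, n + 0.5)
    # and truncate, so the endpoints 0 and n get half the interior frequency.
    return int(0.5 + random.random() * n)

def method3_line(width=40):
    # One line of Method 3 output: `width` random bytes, packed as hex.
    s = ""
    for _ in range(width):
        y = raw_make_random(256)   # 0..256
        if y == 256:
            y = 0                  # fold 256 onto 0: uniform bytes 0..255
        s = "%02X" % y + s         # printf-style formatting replaces the table
    return s

print(len(method3_line()))   # 80
```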
The output of this script is a file, random3.txt, which is post-processed by Marsaglia's asc2bin.exe program to produce input for Diehard. Diehard consists of 15 tests. All 15 were run on all three
output files. For reference, they were also run using a file of 2,560,000+ values of RAND() produced by Excel 95. Here is a summary of the results:
│Test │Method 1│Method 2│Method 3│Excel 95's RAND()│
│Birthday spacings │ FAIL │ FAIL │ Pass │ Pass │
│OPERM5 │ FAIL │ Pass │ Pass │ Pass │
│Binary rank, 31 X 31 │ FAIL │ FAIL │ Pass │ Pass │
│Binary rank, 32 X 32 │ * │ FAIL │ Pass │ Pass │
│Binary rank, 6 X 8 │ Pass │ Pass │ Pass │ FAIL │
│Bitstream │ * │ FAIL │ Pass │ Pass │
│OPSO, OQSO, and DNA │ FAIL │ FAIL │ FAIL │ FAIL │
│Count the ones │ * │ Pass │ Pass │ FAIL │
│Parking lot │ FAIL │ Pass │ Pass │ FAIL │
│Minimum distance │ Pass │ Pass │ Pass │ Pass │
│3-D spheres │ Pass │ Pass │ Pass │ Pass │
│Squeeze │ Pass │ Pass │ Pass │ FAIL │
│Overlapping sums │ Pass │ Pass │ Pass │ Pass │
│Runs │ Pass │ Pass │ Pass │ Pass │
│Craps │ Pass │ Pass │ Pass │ FAIL │
* Method 1 is guaranteed to fail some of the tests simply because it produced 31-bit values rather than 32-bit values. Such a failure is not an adequate test of the generator in these cases.
The first two methods failed miserably, but the nature of their failures suggested the third method. The third method performed pretty well. The OPSO, OQSO, and DNA tests are stringent tests of
correlation among sequences of pseudorandom values. They have the power to explore correlations at individual bit positions. The DNA test indicates that bits 6 and 7 of each byte generated by Method
3 are the causes of the failure.
4. A note on Excel's RAND() function (9 July 1999)
The failure of Excel's RAND() function is particularly dramatic. To do the tests, a spreadsheet was filled in all 256 columns through row 5024 with the formula INT(2^32 * RAND()). This was calculated
and saved as tab-delimited ASCII. It was recalculated and saved again as tab-delimited ASCII to a separate file. These files were concatenated to produce a file with 256 * 5024 * 2 random values in
the range 0..2^32-1. The following AWK program converted this output into the packed hexadecimal format needed for Diehard to process:
# Input: lines of 256 random 32-bit integers in base 10
# Output: lines of 80 hex characters.
# NB: Might omit up to the last 9 values.
BEGIN {
    T31 = 2^31                             # Saves a little computation time
}
{
    for (i = 1; i <= NF; i++) {
        s = s sprintf("%-8.8x", $i - T31)  # Flipping the high bit prevents overflow during conversion to hex.
        if (++j >= 10) {
            print s
            s = ""
            j = 0
        }
    }
}
The nature of the test failure is particularly awful: the frequencies of "1"s in bits 0-2 and 29-31 of the output were significantly different than the frequencies of "0"s, so the output is
non-uniform! One possible fix is to use the fractional part of 8*RAND() instead of RAND() itself, which effectively rejects bits 0-2 and starts the values with bit 3. I have not tested this, in part
because the algorithm used for RAND() (which Microsoft has published) is poor, so there's really not much hope.
1. How to produce uniformly distributed values.
The flaws in Number.MakeRandom are readily overcome without too much compromise in performance (which is already poor, since Avenue is an interpreted language). Here are some rules to follow:
1. Use MakeRandom only to generate pseudorandom integers. If you want floating point values, generate integers within a wide range and then rescale.
2. Do not attempt to generate random values in a range wider than 2^31 -1. Use multiple MakeRandom requests if wider ranges are needed. For example, to generate random double-precision floating
point values (about 50 bits of precision), consider generating two independent values in the range 0..(2^25 - 1). Call these x and y. Then (x/(2^25) + y)/(2^25) should produce a uniform value
between 0 and 1 (not including 1).
3. Do not use MakeRandom directly to generate negative random values. Instead, generate a nonnegative uniform value and shift it into the desired range by adding a (possibly negative) offset.
4. Compensate for the low frequencies at the endpoint of MakeRandom's range.
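Rule 2's double-draw recipe can be sanity-checked with ordinary integers. In this Python sketch, `random.randrange` stands in for a corrected MakeRandom:

```python
import random

# Combine two independent 25-bit draws into one uniform
# double-precision value in [0, 1).
x = random.randrange(2**25)
y = random.randrange(2**25)
u = (x / 2**25 + y) / 2**25
assert 0.0 <= u < 1.0
# Even the largest possible pair of draws stays strictly below 1:
assert ((2**25 - 1) / 2**25 + (2**25 - 1)) / 2**25 < 1.0
```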
The following Avenue script generates uniform pseudo-random integers in the range provided by its two arguments.
' Name this script Number.MakeRandom.
' Example of use:
' av.Run("Number.MakeRandom", {0, 10})
NMin = (SELF.Get(0) min SELF.Get(1)).Truncate
NMax = (SELF.Get(0) max SELF.Get(1)).Truncate
N = NMax - NMin + 1
if (2^32 <= N) then return NIL end
x = Number.MakeRandom(0, N)
if (x = N) then x = 0 end
return x + NMin
' end of script
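For readers outside ArcView, here is a Python rendering of the same fold-the-endpoint logic. The inner draw models the inferred MakeRandom(0, n) behavior, whose endpoints 0 and n each appear at half the interior frequency; folding n onto 0 makes 0..n-1 uniform:

```python
import random

def make_random(a, b):
    # Uniform integer in [min(a,b), max(a,b)], following the Avenue script above.
    nmin, nmax = int(min(a, b)), int(max(a, b))
    n = nmax - nmin + 1
    if n >= 2**32:
        return None
    x = int(0.5 + random.random() * n)   # 0..n, endpoints at half frequency
    if x == n:
        x = 0                            # fold: now uniform on 0..n-1
    return x + nmin
```

For example, make_random(-3, 0) returns each of -3, -2, -1, 0 with long-run frequency 1/4, in contrast to the skewed frequencies tabulated earlier for the raw request.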
2. How to produce values with reasonably good randomness using ArcView.
Generate random bytes (values in the range 0..255) using the procedure just given above. This is essentially Method 3 (the byte-by-byte method), which passed most of the Diehard tests. Create larger
ranges of values by combining bytes. For example, to produce uniform values in the range 0..(2^16-1) use av.Run("Number.MakeRandom", {0,255}) * 256 + av.Run("Number.MakeRandom", {0,255}). To produce
values in ranges that are not powers of two, you will have to rescale and round the results. A good way to start is with a script that returns uniform random floating point values between 0 and 1
(not including 1). Here is one based on Method 3 that should give good results. Since it behaves like Excel's RAND() function, name it "Rand":
k = 0
for each i in 0..3
  a = Number.MakeRandom(0, 256)
  if (a = 256) then a = 0 end
  k = k/256 + a
end
return k/256
' end of script
Since the values are based on 32 random bits, you get about 10 decimal places of precision. For more precision yet (but slower execution), replace the range 0..3 in the script by 0..4, 0..5, or 0..6
(wider ranges will do no good).
This script is easily used to produce uniform integers within any desired range (within the precision available in ArcView). For example, (av.Run("Rand", NIL)*1001).Floor is an Avenue expression that
will return uniformly distributed integers between 0 and 1000 inclusive (each will be produced with a long-run frequency of 1/1001).
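The same byte-combining scheme is easy to express in other languages. This Python sketch mirrors the "Rand" script and the integer-scaling idiom above; `random.randrange(256)` stands in for the endpoint-folded Number.MakeRandom byte draw:

```python
import random

def rand():
    # Four byte draws combined into a uniform float in [0, 1),
    # exactly as in the Avenue "Rand" script above.
    k = 0.0
    for _ in range(4):
        k = k / 256 + random.randrange(256)
    return k / 256

# The integer-scaling idiom: uniform integers between 0 and 1000 inclusive.
n = int(rand() * 1001)
assert 0 <= n <= 1000
```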
Knuth, Donald E. The Art of Computer Programming. Volume 2, Seminumerical Algorithms, Third Edition. Addison-Wesley, 1998. Chapter 3.
(William A. Huber, 5 July 1999. Updated 9 July 1999.)
The n-Category Café
Definitions of Ultrafilter
Posted by Tom Leinster
One of these days I want to explain a precise sense in which the notion of ultrafilter is inescapable. But first I want to do a bit of historical digging.
If you’re subscribed to Bob Rosebrugh’s categories mailing list, you might have seen one of my historical questions. Here’s another: have you ever seen the following definition of ultrafilter?
Definition 1 An ultrafilter on a set $X$ is a set $U$ of subsets with the following property: for all partitions $X = X_1 \amalg \cdots \amalg X_n$ of $X$ into a finite number $n \geq 0$ of
subsets, there is precisely one $i$ such that $X_i \in U$.
This is equivalent to any of the usual definitions. It’s got to be in the literature somewhere, but I haven’t been able to find it. Can anyone help?
Just for fun, here’s a list of other equivalent definitions of ultrafilter. I wouldn’t be at all surprised if there’s some text where someone has compiled a similar list; but again, I haven’t found one.
Throughout, let $X$ be a set. I’ll write $P(X)$ for its power set.
Definition 2 An ultrafilter on $X$ is a set $U$ of subsets with the following property: for all partitions $X = X_1 \amalg X_2 \amalg X_3$ of $X$ into three subsets, there is precisely one $i \
in \{1, 2, 3\}$ such that $X_i \in U$.
This is the same as the first definition except that $n$ is constrained to be equal to $3$. You can do the same with $4, 5, \ldots$, but not $2$.
Another way of defining ultrafilter is in the spirit of my recent post on Hadwiger’s theorem. The idea is that an ultrafilter is a way of measuring the “size” of subsets of $X$.
Definition 3 An ultrafilter on $X$ is a function $\phi: P(X) \to \{0, 1\}$ such that (i) $\phi$ is a valuation: $\phi(\emptyset) = 0$ and $\phi(Y \cup Z) = \phi(Y) + \phi(Z) - \phi(Y \cap Z)$
for all $Y, Z \subseteq X$, and (ii) $\phi(X) = 1$.
So an ultrafilter is almost the same thing as a $\{0, 1\}$-valued valuation. The only difference is the extra condition that $\phi(X) = 1$, which could equivalently be replaced with “$\phi$ is not
identically zero”.
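Definition 3 can be checked exhaustively on a small set. The following Python sketch enumerates every function $\phi: P(X) \to \{0, 1\}$ for $|X| = 3$ and keeps the valuations with $\phi(X) = 1$; on a finite set every ultrafilter is principal, so exactly three survive, one per point:

```python
from itertools import combinations, product

X = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(4) for c in combinations(sorted(X), r)]

def is_ultrafilter(phi):
    # Definition 3: a {0,1}-valued valuation with phi(X) = 1.
    if phi[frozenset()] != 0 or phi[X] != 1:
        return False
    return all(phi[a | b] == phi[a] + phi[b] - phi[a & b]
               for a in subsets for b in subsets)

ultrafilters = [dict(zip(subsets, vals))
                for vals in product([0, 1], repeat=len(subsets))
                if is_ultrafilter(dict(zip(subsets, vals)))]

assert len(ultrafilters) == 3   # one principal ultrafilter per point of X
```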
A valuation is something like a measure. Measures are closely related to integrals. So, we can try to come up with a way of defining ultrafilters so that they look like integrals. I had a go at that
a year ago, but I think the following feels more authentically integral-esque.
Choose your favourite rig $k$. I’ll assume that your favourite rig has the property that there are no natural numbers $n \neq 1$ satisfying $n.1 = 1$ in $k$. For example, $k$ might be a field of
characteristic zero, or $\mathbb{N}$, or $\mathbb{Z}$.
Definition 4 An ultrafilter on $X$ is a $k$-linear function $\int: \{functions X \to k with finite image\} \to k$ such that $\int \lambda = \lambda$ for all $\lambda \in k$ (where
the integrand is a constant function) and $\int f \in image(f)$ for all $f$.
Let’s look now at the classic way of defining “ultrafilter”. In fact there are two classic ways, closely related to each other. Before stating either, we need a preliminary definition. A filter on
$X$ is a collection $F$ of subsets that is upwards closed ($Y \supseteq Z \in F$ implies $Y \in F$) and closed under finite intersections (or equivalently (i) $Y, Z \in F$ implies $Y \cap Z \in F$,
and (ii) $X \in F$).
Definition 5 An ultrafilter on $X$ is a filter $U$ such that the only filters containing $U$ are $P(X)$ and $U$ itself.
In other words, an ultrafilter is a maximal proper filter. There is an alternative way of framing the maximality, which gives the other classic definition:
Definition 6 An ultrafilter on $X$ is a filter $U$ such that for all $Y \subseteq X$, either $Y \in U$ or $X \setminus Y \in U$, but not both.
This is ripe for restating order-theoretically. We’ll use the inclusion ordering on $P(X)$, and we’ll use the two-element totally ordered set $2$. A filter on $X$ is nothing but a map $P(X) \to 2$ of
meet-semilattices—that is, a map preserving finite meets ($=$ infs $=$ greatest lower bounds). An ultrafilter is a filter that, viewed as a map, also preserves complements.
Definition 7 An ultrafilter on $X$ is a map $P(X) \to 2$ of Boolean algebras (or equivalently, of lattices).
We’re now getting into the realm of Stone duality (the equivalence between the category of Boolean algebras and the opposite of the category of totally disconnected compact Hausdorff spaces). So it’s
no surprise that accompanying the Boolean algebra definition, there’s a topological definition:
Definition 8 An ultrafilter on $X$ is a point of the Stone–Čech compactification of $X$.
Finally, here’s a definition I learned from the $n$Lab page on ultrafilters, but haven’t digested yet:
Definition 9 An ultrafilter on $X$ is a set $U$ of subsets such that for all $Y \subseteq X$, $Y \in U \Leftrightarrow \forall n \geq 0, \forall Z_1, \ldots, Z_n \in U, Y \cap
Z_1 \cap \cdots \cap Z_n \neq \emptyset.$
Anyway, the last eight of those nine were mostly for entertainment. What I’d most like is if someone can give me a reference for the first one. Thanks!
Posted at July 2, 2011 11:35 PM UTC
Re: Definitions of Ultrafilter
Can you give me an example of an ultrafilter $U$ on a set $X$ that isn’t of the form $\exists x \in X$ such that $S \in U \Leftrightarrow x \in S$?
Posted by: Jamie Vicary on July 3, 2011 12:47 PM | Permalink | Reply to this
Re: Definitions of Ultrafilter
No one can, really; the existence of such a nonprincipal ultrafilter comes down to a highly nonconstructive (weak) form of the axiom of choice.
A principal ultrafilter is one generated by a single element $x$ in the sense you wrote down. Being a nonprincipal ultrafilter is equivalent (in Zermelo set theory, say, without assuming choice) to
being an ultrafilter that contains the filter of cofinite sets i.e., complements of finite sets. In particular, a nonprincipal ultrafilter exists only on an infinite set.
The usual way to prove existence of nonprincipal ultrafilters is to prove a more general lemma, the ultrafilter lemma, which proves that any filter can be extended to an ultrafilter. The standard
proof uses the axiom of choice. While it is not quite as strong as the axiom of choice, there are certainly models of ZF where the ultrafilter lemma fails.
The ultrafilter lemma is quite a handy thing; it is at the bottom of, for example, the compactness theorem for propositional and first-order logic, the construction of ultrapowers and nonstandard
reals, and many other things.
Posted by: Todd Trimble on July 3, 2011 1:43 PM | Permalink | Reply to this
Re: Definitions of Ultrafilter
A strengthening of the ultrafilter lemma that I have used not too long ago is the following. For an (ADJECTIVES) cardinal $\kappa$, define a $\kappa$-filter on $X$ to be an upwards-closed collection
of “large subsets” of $X$ such that any intersection of fewer than $\kappa$ “large” subsets is again “large”. Then the axiom of choice continues to assure that $\kappa$-ultrafilters (maximal $\kappa$
-filters) exist.
With these, you can prove the following: Let $X$ be a set and $\mathbb{F}$ a field with $|\mathbb{F} |\lt |X |$. Then there are maximal ideals in the ring $\mathbb{F}^X$ with residue field $\mathbb
{F}$ but that do not correspond to evaluating at any point in $X$. (It is always true, regardless of cardinality, that the maximal ideals of $\mathbb{F}^X$ are in natural bijection with points in the
Stone-Cech completion of $X$; the question arose in a project of mine whether you could distinguish the “finite”=principal ones from the “infinite”=nonprincipal ones by looking at the residue field.
Of course, when I say “the residue field is $\mathbb{F}$”, what I mean is that I’m asking to understand the $\mathbb{F}$-algebra homomorphisms $\mathbb{F}^X \to \mathbb{F}$.)
Posted by: Theo JF on July 7, 2011 12:15 AM | Permalink | Reply to this
Re: Definitions of Ultrafilter
Incidentally, the fact about residue fields in $\mathbb{F}^X$ is not at all surprising if your first thought is to think of $\mathbb{F}$ as a finite field. But if you are me, then when you hear “let
$\mathbb{F}$ be a field” you think “let $\mathbb{F}$ denote $\mathbb{C}$ the field of complex numbers”. And when you think of a set $X$, if you are me you think of $X = \mathbb{N}$ a countable set,
or a second-countable topological space. So then it is very surprising that the non-principal residue fields in the Stone-Cech completion can be as small as $\mathbb{F}$. All such residue fields are
necessarily models of the first-order theory of $\mathbb{F}$ (for any value of $\mathbb{F}$ — you can let $\mathbb{F}$ denote “Set Theory” if you want), but generically they are nonstandard models
thereof. Whereas they can be “standard” when $X$ is very large; also bizarre is that the model becomes “nonstandard” upon sufficiently large base-change.
Of course, Todd knows more about all of this than I do; I mention it for others who might be reading.
Posted by: Theo JF on July 7, 2011 12:28 AM | Permalink | Reply to this
Re: Definitions of Ultrafilter
Then the axiom of choice continues to assure that κ-ultrafilters exist.
Really? I thought that got you into measurable-cardinal territory. Is there some subtlety that I missed in what you said which keeps you in ZFC?
Posted by: Mike Shulman on July 7, 2011 2:51 AM | Permalink | Reply to this
Re: Definitions of Ultrafilter
I’m guessing some of those subtleties are packed into “ADJECTIVES” — maybe to make sure there are “enough” filters? The Zorn’s Lemma choice mirror would seem then to give you maximal filters… unless
there are other subtleties?
Posted by: some guy on the street on July 8, 2011 5:44 AM | Permalink | Reply to this
Countably complete ultrafilters?
I worried about the same thing that Mike worried about, but at the same time it wasn’t clear where the argument breaks down exactly. So this might be a good opportunity to work through some details.
[Edit in hindsight: most of the following is fluff. If you want to cut to the chase, you can skip to the last three paragraphs.]
For example, take the set $X = \mathbb{R}$, and consider the filter of subsets whose complements are at most countable. This is closed under countable intersections. (I’m working in ZFC of course.)
But there’s some reason now why this can’t extend to an ultrafilter closed under countable intersections. I “know” this by the following reasoning: (1) any such ultrafilter must be non-principal
(since any singleton has empty intersection with some co-countable set); (2) the least uncountable measurable cardinal is the least cardinal that admits a nonprincipal ultrafilter closed under
countable intersections; (3) therefore, the cardinality of $\mathbb{R}$ is greater than the first measurable cardinal; (4) this can’t be because uncountable measurable cardinals are strongly inaccessible.
So is that saying that a simple Zorn’s lemma argument breaks down somewhere? There’s one little subtlety that I want to look into first. An ultrafilter is sometimes described as a maximal element in
the poset of all proper filters; this is equivalent to being a proper filter $F$ with the property that for any subset $Y \subseteq X$, either $Y$ or its complement $\neg Y$ belongs to $F$. Okay, the thing
I want to check is whether this complementation property still holds for maximal elements in the poset of all proper filters that are closed under countable intersections.
Actually, before I do that, let me review how the proof goes for maximal elements in the poset of all proper filters. Suppose that neither $Y$ nor $eg Y$ belong to a proper filter $F$. I claim that
the empty set can belong to at most one of the collections
$\{Y \cap A: A \in F\}, \qquad \{\neg Y \cap A: A \in F\}.$
[For if the empty set belonged to both, then there exist $A, B \in F$ such that $\emptyset = Y \cap A$ and $\emptyset = \neg Y \cap B$. It follows that
$\emptyset = Y \cap A \cap B, \qquad \emptyset = \neg Y \cap A \cap B$
so that by the distributive law, $\emptyset = (Y \cup \neg Y) \cap A \cap B = A \cap B$; this would contradict properness of the filter $F$.] Then, if say the empty set did not belong to the first collection, then setting
$F' = \{Y \cap A: A \in F\},$
this would be a proper filter that properly extends $F$. Therefore, $F$ is not a maximal proper filter. It follows that maximal proper filters have the complementation property.
It seems to me that the exact same argument would carry over to proper countably complete filters, because if $F$ is closed under countable intersections, then so is this $F'$ we created.
So okay, maximal elements in the poset of countably complete proper filters indeed have the complementation property, and therefore they are ultrafilters that are countably complete.
So although it might seem strange, by my reckoning there must be something wrong specifically with the Zorn’s lemma argument. Can it be that chains in the poset of countably complete proper filters
might not have upper bounds??
Oh! That’s obviously it. Or I think it’s “obvious”. Take a countable chain $C_1 \subset C_2 \subset \ldots$ of countably complete (proper) filters. Then the obvious thing to take for the upper bound,
which is the union, doesn’t work. In other words, how would we prove that the intersection of sets $A_i$ belongs to the union, if we take each $A_i$ to belong to the difference $C_{i+1} - C_i$?
In different, more high-falutin’ language: filtered colimits along chains commute with the finite limits we need to prove closure of the colimit under finitary operations, but once we need to deal
with infinitary operations like countable intersections, filtered colimits of general chains are out the window. Thus we should expect Zorn’s lemma arguments to generally fail where infinitary
operations are involved.
Posted by: Todd Trimble on July 8, 2011 5:00 PM | Permalink | Reply to this
Re: Countably complete ultrafilters?
I made a slight mistake in what I wrote down above: that $F'$ should instead be the upward-closed collection generated by $\{Y \cap A: A \in F\}$. Hopefully there aren’t any other significant errors.
Posted by: Todd Trimble on July 8, 2011 5:13 PM | Permalink | Reply to this
Re: Definitions of Ultrafilter
My understanding is that as soon as you have an ultrafilter that is closed under $\kappa$-ary intersections for any infinite cardinal $\kappa$, you have a measurable cardinal. In fact, if $X$ is the
smallest set on which such an ultrafilter exists, then its cardinality is measurable.
Posted by: Mike Shulman on July 8, 2011 4:10 PM | Permalink | Reply to this
Re: Definitions of Ultrafilter
There’s a paper that touches on aspects of this (i.e. ultrafilters, categories and measurable cardinals):
Reinhard Börger, Coproducts and ultrafilters. Journal of Pure and Applied Algebra 46 (1987), 35–47.
Posted by: Tom Leinster on July 8, 2011 5:10 PM | Permalink | Reply to this
Re: Definitions of Ultrafilter
Tom, the only lead I have for a reference is that post of Lawvere to the categories list that I mentioned to you in email. Lawvere was remarking that an ultrafilter on $X$ is essentially the same
thing as a map
$[n]^X \to [n]$
which preserves the canonical action of the monoid $\hom([n], [n])$ of endofunctions on a (finite) $n$-element set $[n]$, and that is a reformulation of condition (1) that you wrote down. Here
$n \geq 3$.
So I’m guessing that if you can track down that 1960 article of Isbell that Lawvere mentions (but what is it?), you might find what you are looking for. (As an aside, I’ll bet this condition (1) is
really ancient folklore.) I’m also guessing that if no one here comes up with a concrete reference, your best bet might be to write Lawvere directly.
Posted by: Todd Trimble on July 3, 2011 1:55 PM | Permalink | Reply to this
Re: Definitions of Ultrafilter
Thanks, Todd. I suspected that the conversation might roam into the territory of what I wanted to talk about in that later post, but it did so quicker than I’d anticipated :-) Never mind! That fact
of Lawvere’s is remarkable.
I’ll ask Lawvere about Definition 1 in a fortnight, at the CT meeting in Vancouver. I think the 1960 article of Isbell’s that he cites must be Adequate subcategories. (Lawvere mentions Ulam measures,
which appear on p.548-9 of Isbell.) A discussion of ultrafilters would fit right into Isbell’s paper, but there isn’t one.
I’d bet a large sum against Definition 1 being new. It would seem a bit funny to me if it had the status of folklore, though: it’s so simple that you’d think it would quickly have made its way into
the written literature. But stranger things have happened. (Perhaps filters are so important to logicians that they always want to view ultrafilters in that context.)
It occurred to me that I should probably ask this at MathOverflow. The trouble is that MO is too addictive… so I’ll hold off for a while.
Posted by: Tom Leinster on July 3, 2011 2:25 PM | Permalink | Reply to this
Re: Definitions of Ultrafilter
Incidentally, Definition 4 suggests that if we join Lawvere in thinking of an ultrafilter $U$ on $X$ as a map
$[n]^X \to [n]$
of $End([n])$-sets, then this map should be thought of as integration against $U$. (Here $n \geq 3$ is a fixed natural number.)
I’ll explain both this comment and how Lawvere’s characterization works. An ultrafilter $U$ on $X$ is usually construed as a set of subsets of $X$. We can think of the subsets that belong to $U$ as
“large”, or “measure 1”, and the subsets that don’t belong to $U$ as “small”, or “measure zero”. The “measure” idea is made precise in Definition 3, where the measure (really, valuation) is called $\phi$.
Now let’s try to build some kind of integral from this measure $\phi$. We choose a rig $k$ where the integrable functions are going to take their values, and where the integrals themselves will also
live. Our integral should certainly satisfy
$\int \chi_A \;d\phi = \phi(A)$
for all $A \subseteq X$, where $\chi_A$ is the characteristic function of $A$. And it turns out that, in the normal way of things, we can extend linearly to get an integral defined on a reasonably
large class of functions (Definition 5).
But what I didn’t say before is that there’s also a direct way of defining $\int -\;d\phi$, without any “extend by linearity” business. It’s simply this: given $f: X \to k$ with finite image, $\int f
\;d\phi$ is the unique element of $k$ such that
$f^{-1}\Bigl(\int f\;d\phi\Bigr) \in U.$
(Definition 1 guarantees that there is a unique element of $k$ with this property.) And this makes no reference to the rig structure of $k$!
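To make the direct definition concrete, here is a small Python sketch (my own illustration, not from the discussion): on a finite set every ultrafilter is principal, generated by some point $x_0$, and the “integral” of $f$ then comes out as $f(x_0)$, with no rig structure on $k$ used anywhere. All names below are hypothetical.

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of the finite set xs, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1))]

def principal_ultrafilter(X, x0):
    """The principal ultrafilter on X generated by x0: all subsets containing x0.
    (On a finite set, every ultrafilter has this form.)"""
    return {A for A in powerset(X) if x0 in A}

def integral(f, X, U):
    """Given f: X -> k (a dict) with finite image, return the unique value v
    such that f^{-1}(v) belongs to U."""
    candidates = [v for v in set(f.values())
                  if frozenset(x for x in X if f[x] == v) in U]
    assert len(candidates) == 1   # uniqueness is exactly Definition 1
    return candidates[0]

X = {0, 1, 2, 3}
U = principal_ultrafilter(X, 2)
f = {0: 'a', 1: 'b', 2: 'b', 3: 'a'}
print(integral(f, X, U))   # 'b', i.e. f evaluated at the generating point
```

The uniqueness assertion inside `integral` is the point: exactly one value $v$ has $f^{-1}(v) \in U$.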
The measure $\phi$ was derived from the ultrafilter $U$ in a very simple way: it’s just the characteristic function of $U$, in fact. So we could reasonably write $\int -\;d U$ instead of $\int -\;d\phi$.
Finally, let’s come back to Lawvere’s characterization of ultrafilters. We’ve fixed a natural number $n \geq 3$. Starting from an ultrafilter $U$ on $X$, we get a map
$\int - \;d U: [n]^X \to [n]$
uniquely determined by
$f^{-1}\Bigl(\int f\; d U\Bigr) \in U$
whenever $f \in [n]^X$. It’s $End([n])$-invariant, that is, $\int \theta\circ f\;d U = \theta(\int f\; d U)$ whenever $\theta: [n] \to [n]$. That’s how an ultrafilter gives rise to an $End([n])$-invariant map $[n]^X \to [n]$.
For the converse, you’re trying to turn an “integral” into a “measure”. As usual, the idea is to define the measure of a subset as the integral of its characteristic function. I’ll skip the details,
but here’s where you need the assumption $n\geq 3$: it’s to prove that the resulting “measure” really is an ultrafilter, which you can do by applying Definition 2.
Posted by: Tom Leinster on July 3, 2011 4:13 PM | Permalink | Reply to this
Re: Definitions of Ultrafilter
In my opinion, the best way of proving Stone duality is by looking at the functor $F^{op} \to Top^{op}$—here $F$ is the category of finite sets—which views a finite set as a discrete topological space, and applying to it the construction with which we associate the name of Kan.
This yields an adjunction between $[F, Set]$ and $Top^{op}$, even an idempotent adjunction, whose fixpoints on the left, and on the right, are, respectively, the finite limit preserving functors $F \to Set$ (i.e., the boolean algebras), and the Stone spaces; with the equivalence between these categories of fixpoints proving Stone duality, and at the same time exhibiting the category of Stone spaces as the free completion of $F$ under cofiltered limits.
The details of this hinge in part on your Definition 1 and so I would be minded to go hunting for the equivalence of this with the usual definition in the Stone duality literature; probably you have
already done so but there is my two penn’orth.
Posted by: Richard Garner on July 4, 2011 9:13 AM | Permalink | Reply to this
Re: Definitions of Ultrafilter
and applying to it the construction with which we associate the name of Kan.
Ah, Richard. Every sentence is golden.
I like your way of looking at Stone duality. But maybe it would be kind to expand on this part:
the finite limit preserving functors $F \to Set$ (i.e., the boolean algebras)
Let me try to guess what you had in mind when you wrote this. For any finitary algebraic theory $T$, there is an equivalence of categories
$T\text{-}Alg \simeq FinLim(T\text{-}Alg_{fp}^{op}, Set)$
where $T\text{-}Alg_{fp}$ is the full subcategory of $T\text{-}Alg$ consisting of just the finitely presentable $T$-algebras, and $FinLim(-, -)$ is the full subcategory of the functor category
consisting of just the finite limit preserving functors.
(In fact, this equivalence is also obtained by applying the construction with which we associate the name of Kan. Explicitly, given an arbitrary $T$-algebra $A$, you get a finite limit preserving functor
$Hom(-, A): T\text{-}Alg_{fp}^{op} \to Set.$
This process $A \mapsto Hom(-, A)$ turns out to define an equivalence between $T$-algebras and finite limit preserving presheaves on $T\text{-}Alg_{fp}$.)
That’s one half of what I assume you had in mind. The other half is that
$Bool_{fp} \simeq F^{op}$
where $Bool_{fp}$ is the category of finitely presentable Boolean algebras. It’s fairly easy to see that a Boolean algebra is finitely presentable if and only if it’s finite, so what this says is
that the category of finite Boolean algebras is dual to the category of finite sets: “baby Stone duality”.
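As a brute-force sanity check of baby Stone duality (my own illustration, not from the comment), one can count Boolean algebra homomorphisms $2^{[m]} \to 2^{[n]}$ directly and compare with the number of functions $[n] \to [m]$, which is $m^n$:

```python
from itertools import chain, combinations, product

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1))]

def boolean_homs(m, n):
    """Boolean algebra maps P({0..m-1}) -> P({0..n-1}), found by brute force:
    maps preserving top, complement, and binary meet (the rest follows)."""
    M, N = frozenset(range(m)), frozenset(range(n))
    Pm, Pn = powerset(M), powerset(N)
    homs = []
    for values in product(Pn, repeat=len(Pm)):
        h = dict(zip(Pm, values))
        if h[M] != N:
            continue
        if any(h[M - A] != N - h[A] for A in Pm):
            continue
        if any(h[A & B] != h[A] & h[B] for A in Pm for B in Pm):
            continue
        homs.append(h)
    return homs

# Baby Stone duality predicts |Bool(P(m), P(n))| = |Set(n, m)| = m^n:
for m, n in [(1, 1), (2, 1), (1, 2), (2, 2)]:
    assert len(boolean_homs(m, n)) == m ** n
print("hom-set sizes match m^n")
```

Every such homomorphism is of the form $g^{-1}$ for a unique function $g: [n] \to [m]$, which is why the counts agree.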
Putting the two halves together, we get equivalences
$Bool \simeq FinLim(Bool_{fp}^{op}, Set) \simeq FinLim(F, Set)$
where $Bool$ is the category of Boolean algebras.
Have I guessed your thinking right?
so I would be minded to go hunting for the equivalence of this with the usual definition in the Stone duality literature
Thanks. That’s a good idea.
Posted by: Tom Leinster on July 4, 2011 12:01 PM | Permalink | Reply to this
Re: Definitions of Ultrafilter
Here’s a thing. Richard used the fact that a Boolean algebra amounts to a finite limit preserving functor $F \to Set$, where $F$ is the category of finite sets. But Todd has been writing about the
fact that a Boolean algebra amounts to a finite product preserving functor $F_+ \to Set$, where $F_+$ is the category of nonempty finite sets.
So, a finite limit preserving functor $F \to Set$ is the same thing as a finite product preserving functor $F_+ \to Set$.
Without thinking about it, I lazily assume that the equivalence
$FinLim(F, Set) \simeq FinProd(F_+, Set)$
is restriction. If so, this tells us that a functor $F \to Set$ preserves finite limits just as long as its restriction to $F_+$ preserves finite products. Is that obvious from first principles?
Posted by: Tom Leinster on July 4, 2011 2:13 PM | Permalink | Reply to this
Re: Definitions of Ultrafilter
Let’s see: for a finitary algebraic theory $T$, general algebras can be identified with finite-limit-preserving functors
$Alg_{fp}^{op} \to Set$
and in the case where $T$ is the theory of Boolean algebras, $Alg_{fp}$ can be identified with the category of finite Boolean algebras (including the terminal one). The opposite of that category is
the category of all finite sets (including the empty one).
The mechanism I was using was to take the Cauchy completion of the category of finitely generated free Boolean algebras. It just so happens that the absolute colimits (coequalizers of pairs $(1_B, e)$ where $e: B \to B$ is an idempotent on a f.g. free Boolean algebra) give you all the objects of $Alg_{fp}$ but one: the terminal one. (It’s a little easier to see that in the dual picture, applying “baby” Stone duality.)
I think the first principle then is that restriction along the inclusion $i: Alg_{f.g.free} \to Alg_{fp}$ induces an equivalence
$FinLim(Alg_{fp}^{op}, Set) \to FinProd(Alg_{f.g.free}^{op}, Set)$
and the second is that that can be “improved” to
$FinLim(Alg_{fp}^{op}, Set) \to FinProd(\overline{Alg_{f.g.free}}^{op}, Set)$
where the bar overhead denotes Cauchy completion.
Posted by: Todd Trimble on July 4, 2011 4:36 PM | Permalink | Reply to this
Re: Definitions of Ultrafilter
I see. Thanks very much.
So we’re shifting between doctrines. $Alg_{f.g.free}^{op}$ is the free finite product category containing an algebra, a.k.a. the Lawvere theory. $Alg_{fp}^{op}$ is the free finite limit category
containing an algebra. Hence both the categories
$FinLim(Alg_{fp}^{op}, Set), \quad FinProd(Alg_{f.g.free}^{op}, Set)$
are equivalent to the category of algebras. In particular, they’re equivalent to each other.
I’m idly wondering how hard it would be to prove in a nuts and bolts way that, for functors $F \to Set$, if the restriction to $F_+$ preserves finite products then the original functor preserves
finite limits. It’s much nicer to have the conceptual explanation that you’ve supplied, though.
Posted by: Tom Leinster on July 5, 2011 11:54 AM | Permalink | Reply to this
Re: Definitions of Ultrafilter
According to Forssell, we can think of the theory/model duality as
$FinProd(T, Set) = FinProd(Alg^{op}_{f.g.free}, Set) = Alg$
$\mathcal{G}(Alg, Set) = Alg^{op}_{f.g.free} = T,$
where $\mathcal{G}$ is the category of categories with all limits and colimits and with functors which preserve limits, filtered colimits, and regular epimorphisms.
Is there then a category, $\mathcal{H}$, such that
$\mathcal{H}(Alg, Set) = Alg^{op}_{fp}?$
Posted by: David Corfield on July 5, 2011 1:14 PM | Permalink | Reply to this
Re: Definitions of Ultrafilter
Yes, I think we take $\mathcal{H}$ to be the 2-category of locally finitely presentable categories and functors which preserve limits and filtered colimits. Functors $Alg \to Set$ which preserve
limits are representable functors $Alg(A, -): Alg \to Set$, and representable functors which preserve filtered colimits here will be those where $A$ is a finitely presented algebra. This comes under
the heading of Gabriel-Ulmer duality.
Posted by: Todd Trimble on July 5, 2011 2:56 PM | Permalink | Reply to this
Re: Definitions of Ultrafilter
I should have remembered, thanks. You’re probably feeling doomed to have to repeat this to me over and over (see here and here). I’ll get it one of these days.
Posted by: David Corfield on July 5, 2011 3:12 PM | Permalink | Reply to this
Re: Definitions of Ultrafilter
I see now that A Duality Relative to a Limit Doctrine deals with the general theory of which this situation is a special case, i.e., dualities for doctrines and what happens when there’s an injection
from one to the other as with the finite product and finite limit doctrines (p. 9).
Posted by: David Corfield on July 5, 2011 4:35 PM | Permalink | Reply to this
Re: Definitions of Ultrafilter
Wait, are you sure that’s what it’s saying? I got the idea there could be lots of ways of extending a functor $F_+ \to Set$ (in particular, one that preserves finite products) to a functor $F \to Set$,
and the latter won’t necessarily preserve finite limits even if the former preserves finite products.
But let me try to do this more carefully. (When I say “carefully”, I mean I’m going to do a boring calculation, which can be cleaned up and made more elegant after the fact. You can skip to the end
to see my conclusion.)
To extend a functor $\phi: F_+ \to Set$ to a functor $\phi': F \to Set$, all you need is to define $\phi'(0)$ and maps $\phi'(!_n): \phi'(0) \to \phi(n)$ so that $\phi'(!_n) = \phi(g) \circ \phi'(!_m)$ for every $g: m \to n$ in $F_+$. (That’s enough because you only have to worry about maps going out of $0$, never maps going in to $0$, because $0$ is a strict initial object.) But that data is
the same as a set $\phi'(0)$ together with a function
$\phi'(0) \to \lim_m \phi(m)$
and so let’s compute that limit (in the case where $\phi$ preserves finite products). It will be of the form
$\lim_m Bool(2^m, B)$
where $B$ is some Boolean algebra. That’s the same as
$Bool(colim_m 2^m, B)$
where the colimit is computed in the category of Boolean algebras. Applying Stone duality, that colimit inside is the Boolean algebra attached to the Stone space limit $\lim_{m \in F_+} m$.
And that limit sure as heck looks like the empty space to me. Unwinding, that Boolean algebra colimit is the terminal Boolean algebra, which I’ll call $t$. Unwinding some more,
$Bool(t, B)$
is empty, unless $B$ is itself $t$.
Summarizing, we’re in the following funny situation:
• If $B$ is a non-terminal Boolean algebra, then by golly you were right Tom: there’s only one way to extend the corresponding product-preserving functor $\phi: F_+ \to Set$ to a functor $\phi': F
\to Set$. Here $\phi'(0)$ is forced to be the empty set.
• But if $B$ is a terminal Boolean algebra, then I was right: the functor $\phi: F_+ \to Set$ is the terminal functor, and there are lots of ways to extend to a functor $\phi': F \to Set$. The only
one of these that will be finite-limit-preserving will be the one where $\phi'(0)$ is terminal.
Posted by: Todd Trimble on July 5, 2011 1:18 PM | Permalink | Reply to this
Re: Definitions of Ultrafilter
Wait, are you sure that’s what it’s saying?
Oops: my mistake. Thanks for clearing it up.
If $B$ is a non-terminal Boolean algebra, then by golly you were right Tom
Pure fluke!
Posted by: Tom Leinster on July 8, 2011 12:07 AM | Permalink | Reply to this
Re: Definitions of Ultrafilter
After nearly two years, a helpful MathOverflow user has pointed out a place in the literature where Definitions 1 and 2 of ultrafilter appear (and are proved to be equivalent to the usual definition):
Fred Galvin, Alfred Horn, Operations preserving all equivalence relations. Proceedings of the American Mathematical Society 24 (1970), 521–523.
Posted by: Tom Leinster on May 8, 2013 9:22 PM | Permalink | Reply to this
A120428 - OEIS
%S 1,2,3,3,1,5,5,1,7,5,3,7,2,7,3,11,7,5,13,11,3,13,2,13,3,17,13,5,19,17,
%T 3,19,2,19,3,23,19,5,23,2,23,3,19,5,3,23,5,29
%N Triangle read by rows in which row n gives a representation of n as a sum of distinct numbers from {1, primes} with a minimal number of terms.
%C If there are several solutions with the minimal number of terms, choose the one with the greatest leading term, then the greatest second term, etc.
%C It can be shown that such a representation exists for all n.
%e 1=1
%e 2=2
%e 3=3
%e 4=3+1
%e 5=5
%e 6=5+1
%e 7=7
%e 8=5+3
%e 9=7+2
%e 10=7+3
%e 11=11
%e 12=7+5
%K nonn,tabf
%O 1,2
%A _N. J. A. Sloane_, based on a posting by Henry Baker to the math-fun list, Jul 22 2006
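A brute-force sketch (not part of the OEIS entry) that checks the existence claim for small n and reproduces the minimal number of terms in each row:

```python
from itertools import combinations

def primes_upto(n):
    """Primes <= n by a simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def min_terms(n):
    """Smallest k such that n is a sum of k distinct members of {1, primes}."""
    pool = [1] + primes_upto(n)
    for k in range(1, len(pool) + 1):
        if any(sum(c) == n for c in combinations(pool, k)):
            return k
    return None

# Existence for all small n, and agreement with the examples above:
assert all(min_terms(n) is not None for n in range(1, 61))
assert min_terms(4) == 2 and min_terms(7) == 1   # 4 = 3+1, 7 = 7
print("every n up to 60 has a representation")
```

This only verifies existence and the row lengths; the tie-breaking rule for choosing among representations of equal length is as stated in the entry.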
How much is 2ml in fl. oz?
0.067628045402 US fluid ounces
South Waltham, MA Precalculus Tutor
Find a South Waltham, MA Precalculus Tutor
...Over the past five years I have acquired hundreds of hours of experience tutoring a variety of different levels of calculus. As a tutor I am highly adaptable and can accommodate students with
busy schedules who need to absorb essential calculus concepts quickly, as well as those who want to take...
14 Subjects: including precalculus, calculus, geometry, algebra 1
I am available and eager to tutor anyone seeking additional assistance in the fields of physics (or mathematics), either at the high school or college level! I have been teaching physics as an
adjunct faculty at several universities for the last few years and very much look forward to the opportuni...
9 Subjects: including precalculus, calculus, physics, geometry
...I am a second year graduate student at MIT, and bilingual in French and English. I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science
from West Point. My academic strengths are in mathematics and French.
16 Subjects: including precalculus, French, elementary math, algebra 1
...I have many years of experience teaching the SAT, and the SSAT is very similar. I have tutored the SAT many times, and the ACT is very similar.
29 Subjects: including precalculus, reading, calculus, geometry
...I am comfortable with Standard, Honors, and AP curricula. In addition to private tutoring, I have taught summer courses, provided tutoring in Pilot schools, assisted in classrooms, and run
test preparation classes (MCAS and SAT). Students tell me I'm awesome; parents tell me that I am easy to work with. My style is easy-going; my expectations are realistic; my results are always
8 Subjects: including precalculus, geometry, statistics, algebra 2
Related South Waltham, MA Tutors
South Waltham, MA Accounting Tutors
South Waltham, MA ACT Tutors
South Waltham, MA Algebra Tutors
South Waltham, MA Algebra 2 Tutors
South Waltham, MA Calculus Tutors
South Waltham, MA Geometry Tutors
South Waltham, MA Math Tutors
South Waltham, MA Prealgebra Tutors
South Waltham, MA Precalculus Tutors
South Waltham, MA SAT Tutors
South Waltham, MA SAT Math Tutors
South Waltham, MA Science Tutors
South Waltham, MA Statistics Tutors
South Waltham, MA Trigonometry Tutors
Nearby Cities With precalculus Tutor
Auburndale, MA precalculus Tutors
Cherry Brook, MA precalculus Tutors
Cochituate, MA precalculus Tutors
East Somerville, MA precalculus Tutors
East Watertown, MA precalculus Tutors
Hastings, MA precalculus Tutors
Kendal Green, MA precalculus Tutors
Kenmore, MA precalculus Tutors
North Natick, MA precalculus Tutors
Reservoir, MS precalculus Tutors
Stony Brook, MA precalculus Tutors
Waltham, MA precalculus Tutors
West Newton, MA precalculus Tutors
West Somerville, MA precalculus Tutors
Winter Hill, MA precalculus Tutors
Time-dependent angular acceleration problem
1. The problem statement, all variables and given/known data
As a result of friction, the angular speed of a
wheel changes with time according to
dθ/dt=ω0*e^-σt ,
where ω0 and σ are constants. The angular
speed changes from an initial angular speed
of 3.96 rad/s to 3.46 rad/s in 3.92 s .
Determine the magnitude of the angular
acceleration after 2.44 s.
Answer in units of rad/s2
2. Relevant equations
dω/dt = [tex]\alpha[/tex]
3. The attempt at a solution
I've tried differentiating the given expression for omega in an attempt to get the angular acceleration, but that didn't work because [tex]\sigma[/tex] is undefined in the problem. I've also tried
taking the ln of both sides, but that didn't work either. I tried solving for [tex]\sigma[/tex] in terms of ω and ω0, but that didn't work. Finally, I tried just assuming that the acceleration is
just constant from t0 to t, but that also wasn't the right answer. So, I have no idea what to try next......
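For what it’s worth, here is a sketch of the step that seems to be missing from the attempt: σ is not given directly, but the two angular speeds determine it, since ω(t1)/ω0 = e^(−σ·t1) gives σ = ln(ω0/ω(t1))/t1. A quick numerical check (variable names are my own):

```python
import math

# Given data
omega0 = 3.96       # initial angular speed, rad/s
omega_t1 = 3.46     # angular speed after t1 seconds, rad/s
t1 = 3.92           # s

# omega(t) = omega0 * exp(-sigma * t)  =>  sigma = ln(omega0/omega_t1) / t1
sigma = math.log(omega0 / omega_t1) / t1

# alpha(t) = d(omega)/dt = -sigma * omega0 * exp(-sigma * t)
t = 2.44
alpha = -sigma * omega0 * math.exp(-sigma * t)
print(f"sigma = {sigma:.4f} 1/s, |alpha({t} s)| = {abs(alpha):.4f} rad/s^2")
```

This gives σ ≈ 0.0344 s⁻¹ and a magnitude of roughly 0.125 rad/s² at t = 2.44 s.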
Seminar: Interference Alignment in Single-Beam MIMO Networks: Algorithms and Large System Analysis
Wolfgang Utschick
Mobile Communications
Date: April 21, 2010
Location: Eurecom - Salle des Conseils
To achieve the full multiplexing gain of MIMO interference networks at high SNRs, the interference from different transmitters must be aligned in lower-dimensional subspaces at the receivers.
Recently a distributed max-SINR algorithm for precoder optimization has been proposed that achieves interference alignment for sufficiently high SNRs. We show that this algorithm can be interpreted as
a variation of an algorithm that minimizes the sum Mean Squared Error (MSE). To maximize sum utility, where the utility depends on rate or SINR, a weighted sum MSE objective is used to compute the
beams, where the weights are updated according to the sum utility objective. We specify a class of utility functions for which convergence of the sum utility to a local optimum is guaranteed with
asynchronous updates of beams, receiver filters, and utility weights. In the second part we consider a network of K interfering transmitter-receiver pairs, where each node has N antennas and at most
one beam is transmitted per user. We investigate the asymptotic performance of different strategies, as characterized by the slope and y-axis intercept (or offset) of the high signal-to-noise ratio
(SNR) sum rate asymptote. It is known that a slope (or multiplexing gain) of 2N − 1 is achievable with interference alignment. On the other hand, a strategy achieving a slope of only N might
allow for a significantly higher offset. With the assumption that only a discrete number of strategies is able to achieve a slope of 2N − 1 for a given channel realization, we approximate the
average offset when the best out of a large number L of these solutions is selected, by means of extreme statistics. Furthermore, we derive a simple large system approximation for a successive beam
allocation scheme achieving a slope of N. We show that both approximations provide good matches to numerically simulated results for moderate system dimensions and discuss how the approximated
asymptotes behave for larger systems depending on the relationship between L and N. (Joint work with David Schmidt from TUM and Michael L. Honig from Northwestern University, Illinois, USA)
Permalink: http://www.eurecom.fr/seminar/336
East Los Angeles, CA Math Tutor
Find an East Los Angeles, CA Math Tutor
...I worked at a tutoring center in Rancho Cucamonga from December 2011, and I loved tutoring students in the areas of calculus, algebra, geometry, basic maths, and physics. It was my passion for
tutoring these subjects that convinced me to pursue becoming a school teacher. I now teach at a high school in San Pedro.
11 Subjects: including calculus, physics, precalculus, trigonometry
...Precalculus is a great chance to brush up on the Algebra 2 skills you need to do well in calculus. The problems are a little harder, but most of the concepts are the same. Some students don't
need this review and can proceed straight to calculus.
14 Subjects: including calculus, geometry, prealgebra, psychology
...Received an A grade in Precalculus Honors. Long-time proficiency in all math. Long-time interest and proficiency in the sciences. I played three years of high school tennis.
19 Subjects: including algebra 1, algebra 2, biology, vocabulary
...I often use a different approach when tutoring chemistry than I would with math or physics, because I feel that chemistry is similar to learning a new language. Once you learn to speak the
language the problem solving becomes simpler and students can more comfortably take the next step. I will ...
10 Subjects: including algebra 1, algebra 2, calculus, chemistry
...I have taught preschool, K-12, and ESL(English as Second Language)in adult education over the past 22 years. I've kept in touch with many of my students. One of my first students in preschool
has graduated high school and begun college.
52 Subjects: including algebra 1, reading, Spanish, differential equations
Related East Los Angeles, CA Tutors
East Los Angeles, CA Accounting Tutors
East Los Angeles, CA ACT Tutors
East Los Angeles, CA Algebra Tutors
East Los Angeles, CA Algebra 2 Tutors
East Los Angeles, CA Calculus Tutors
East Los Angeles, CA Geometry Tutors
East Los Angeles, CA Math Tutors
East Los Angeles, CA Prealgebra Tutors
East Los Angeles, CA Precalculus Tutors
East Los Angeles, CA SAT Tutors
East Los Angeles, CA SAT Math Tutors
East Los Angeles, CA Science Tutors
East Los Angeles, CA Statistics Tutors
East Los Angeles, CA Trigonometry Tutors
Nearby Cities With Math Tutor
August F. Haw, CA Math Tutors
Boyle Heights, CA Math Tutors
City Industry, CA Math Tutors
City Of Industry Math Tutors
Commerce, CA Math Tutors
Firestone Park, CA Math Tutors
Glassell, CA Math Tutors
Hazard, CA Math Tutors
Los Nietos, CA Math Tutors
Montebello, CA Math Tutors
Monterey Park Math Tutors
Rancho Dominguez, CA Math Tutors
South, CA Math Tutors
Walnut Park, CA Math Tutors
Windsor Hills, CA Math Tutors
9.3: Inferences about Regression
Created by: CK-12
Learning Objectives
• Make inferences about the regression models including hypothesis testing for linear relationships.
• Make inferences about regression and predicted values including the construction of confidence intervals.
• Check regression assumptions.
In the previous section, we learned about the least-squares or the linear regression model. The linear regression model uses the concept of correlation to help us predict a variable based on our
knowledge of scores on another variable. As we learned in the previous section, this concept is used quite frequently in statistical analysis to predict variables such as IQ, test performance, etc.
In this section, we will investigate several inferences and assumptions that we can make about the linear regression model.
Hypothesis Testing for Linear Relationships
Let’s think for a minute about the relationship between correlation and the linear regression model. As we learned, if there is no correlation between two variables ($X$ and $Y$), then the correlation coefficient $(r)$ is equal to $0$, and our best prediction of $Y$ is simply its mean: when $(r=0)$, knowing the value of $X$ tells us nothing about $Y$.
Using this knowledge, we can determine that if there is no relationship between $Y$ and $X$, then the regression coefficient $(\beta)$ in the population is zero, and a regression line would be of no help in predicting $Y$.
In hypothesis testing of linear regression models, the null hypothesis to be tested is that the regression coefficient $(\beta)$ equals zero, against the alternative that it does not.
$H_0&:(\beta)=0\\H_a&:(\beta)\neq 0$
We perform this hypothesis test similarly to previously conducted hypothesis tests, and we next need to establish the critical values for the test. We use the $t$-distribution with $n-2$ degrees of freedom to set such values. The general formula used to calculate the test statistic for testing this null hypothesis is:
$t = \frac{\text{observed value} - \text{hypothesized or predicted value}} {\text{Standard Error of the statistic}} = \frac{b - \beta} {s_b}$
To calculate the test statistic for this regression coefficient, we also need to estimate the sampling distribution of the regression coefficient. The statistic from this distribution that we will use is the standard error of the regression coefficient $(s_b)$, which is calculated with the formula:
$S_b = \left (\frac{s_{y * x}} {\sqrt{SS_x}} \right)$
where $s_{y*x}$ is the standard error of the estimate and $SS_x$ is the sum of squares of the predictor variable $(X)$.
Let’s say that the football coach is using the results from a short physical fitness test to predict the results of a longer, more comprehensive one. He developed the regression equation $\hat{Y} = .635X + 1.22$, with a standard error of the estimate of $s_{Y*X} = .56$. The summary statistics are as follows:
$\mathbf{Summary statistics for two football fitness tests.} \\& n=24 && \sum XY=591.50\\& \sum X=118 & & \sum Y=104.3\\& \bar{X}=4.92 & & \bar{Y}=4.35\\& \sum X^2 = 704 & & \sum Y^2 =510.01\\& SS_x
=123.83 & & SS_y =56.74$
Using $\alpha = .05$, test the null hypothesis that the regression coefficient in the population is zero $(H_0: \beta = 0)$.
We use the $t$-distribution for this test statistic. With $22$ degrees of freedom $(n-2)$, the two-tailed critical value at $\alpha = .05$ is $2.074$. Calculating the test statistic:
$S_b & = \left (\frac{s_{y * x}} {\sqrt{SS_x}} \right) = \left (\frac{.56} {\sqrt{123.83}} \right) = 0.05\\t & = \frac{b - \beta} {s_b} = \frac{0.635 - 0} {0.05} = 12.70$
Since the observed value of the test statistic exceeds the critical value, the null hypothesis is rejected: if the null hypothesis were true, we would observe a regression coefficient of $0.635$ less than $5\%$ of the time by chance alone. We conclude that the regression coefficient differs significantly from zero.
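The computation above is easy to reproduce numerically; the following sketch uses the summary statistics from the example (the text rounds $s_b$ to $0.05$, which is why it reports $t = 12.70$ rather than the unrounded $t \approx 12.62$):

```python
import math

# Summary values from the worked example
s_yx = 0.56      # standard error of the estimate
SS_x = 123.83    # sum of squares of X
b = 0.635        # sample regression coefficient
n = 24

s_b = s_yx / math.sqrt(SS_x)     # standard error of the regression coefficient
t = (b - 0) / s_b                # test statistic for H0: beta = 0
print(f"s_b = {s_b:.4f}, t = {t:.2f}")
```

Either way, the test statistic far exceeds the critical value of $2.074$ for $df = 22$.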
Making Inferences about Predicted Scores
As we have mentioned, the regression line simply makes predictions about variables based on the relationship of the existing data. However, it is important to remember that the regression line simply infers or estimates what the value will be. These predictions are never accurate $100\%$ of the time. For each predicted value there is a distribution of actual $Y$ scores, known as the conditional distribution since it is conditional on the value of the predictor variable $(X)$.
If we assume that these distributions are normal, we are able to make inferences about each of the predicted scores. One example of making inferences about the predicted scores is identifying probability levels associated with predicted scores. Using this concept, we are able to ask questions such as “If the predictor variable ($X$) is equal to $4.0$, what is the probability that the value of $Y$ will be greater than $3$?”
The reason that we would ask questions like this depends on the scenario. Say, for example, that we want to know the percentage of students with a score of $4$ on the short fitness test who will score above a $5$ on the comprehensive test.
To find the percentage of students with scores above or below a certain point, we use the concept of standard scores and the standard normal distribution. Remember the general formula for calculating
the standard score:
$\text{Test Statistic} = \frac{\text{Observed Statistic} - \text{Population Mean}} {\text{Standard error}}$
Applying this formula to the regression distribution, we find that the corresponding formula would be:
$z = \frac{Y - \hat{Y}} {s_{XY}}$
Since we have a certain predicted value for every value of $X$, each of these conditional distributions of $Y$ scores is assumed to have the same standard error, $0.56$; that is, the spread of the $Y$ scores is the same at every value of $X$.
Using our example above, suppose a student scored a $5$ on the short test. What is the probability that he or she will score above a $5$ on the comprehensive test?
From the regression equation $\hat{Y} = .635X+1.22$, the predicted value for $X=5$ is $\hat{Y}=4.40$. Consider the distribution of $Y$ scores at $X=5$: it has a mean equal to the predicted value $(4.40)$ and a standard error of $0.56$.
Therefore, to find the percentage of $Y$ scores above $5$, we first calculate the standard score:
$z = \frac{Y - \hat{Y}} {s_{Y * X}} = \frac{5 - 4.40} {0.56} = 1.07$
Using the $z$-distribution table, we find that the area beyond a $z$ score of $1.07$ is $.1423$. Therefore, the probability that a student who scores a $5$ on the short test will score above a $5$ on the comprehensive test is $.1423$, or $14.23\%$.
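The same tail probability can be computed without tables using the complementary error function (the text rounds $\hat{Y}$ to $4.40$, giving $z = 1.07$ and $.1423$; the unrounded values differ slightly):

```python
import math

def normal_tail(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

s_yx = 0.56
y_hat = 0.635 * 5 + 1.22        # predicted comprehensive-test score for X = 5
z = (5 - y_hat) / s_yx
print(f"y_hat = {y_hat:.2f}, z = {z:.2f}, P = {normal_tail(z):.4f}")
```

Either way, roughly $14\%$ of the conditional distribution lies above a score of $5$.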
Confidence Intervals
Similar to hypothesis testing for samples and populations, we can also build a confidence interval around our regression results. This helps us ask questions like “If the predictor value is equal to a certain value of $X$, within what range are we likely to find the actual value of $Y$?”
We know that the standard error of the predicted score is smaller when the value of $X$ is close to its mean $\bar{X}$, and that it increases as $X$ departs from $\bar{X}$. This is reflected in the formula for the standard error of a predicted score:
$s_{\hat{Y}} = s_{Y * X} \sqrt{1 + \frac{1} {n} + \frac{(X - \bar{X})^2} {SS_x}}$
The general formula for the confidence interval for predicted scores is:
$CI = \hat{Y} \pm (t_{cv} s_Y)$
where $\hat{Y}$ is the predicted score, $t_{cv}$ is the critical value of $t$ with $df = n-2$, and $s_Y$ is the standard error of the predicted score.
Develop a $95\%$ confidence interval for the comprehensive test scores of students who score a $4$ on the short fitness test $(X=4)$.
We calculate the standard error of the predicted value using the formula:
$s_{\hat{Y}} = s_{Y * X} \sqrt{1 + \frac{1} {n} + \frac{(X - \bar{X})^2} {SS_x}} = 0.56 \sqrt{1 + \frac{1} {24} + \frac{(4 - 4.92)^2} {123.83}} = 0.57$
Using the general formula for the confidence interval, we find that
$\begin{align}CI & = \hat{Y} \pm (t_{cv}) (s_{\hat{Y}})\\CI_{95} & = 3.76 \pm (2.074) (0.57)\\CI_{95} & = 3.76 \pm 1.18\\CI_{95} & = (2.58, 4.94)\\2.58 & < Y < 4.94\end{align}$
Therefore, we can say that we are $95\%$ confident that, for a student with a score of $4$ on the predictor variable $(X=4)$, the actual outcome score will fall between $2.58$ and $4.94$.
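The same interval can be recomputed directly from the example's summary values. The sketch below is a check, not part of the lesson; $n$, $\bar{X}$, $SS_x$, and the critical $t$ are the worked example's numbers, and the endpoints differ from the text in the last decimal only because the text rounds $s_{\hat{Y}}$ to $0.57$ before multiplying.

```python
import math

# Values from the worked example: s_{Y.X} = 0.56, n = 24, X-bar = 4.92,
# SS_x = 123.83, regression equation Y-hat = 0.635*X + 1.22.
s_yx, n, x_bar, ss_x = 0.56, 24, 4.92, 123.83
t_cv = 2.074                      # critical t for df = n - 2 = 22 at 95%

x = 4
y_hat = 0.635 * x + 1.22          # predicted score, 3.76

# Standard error of the predicted score; it grows as x moves away from x_bar.
s_pred = s_yx * math.sqrt(1 + 1 / n + (x - x_bar) ** 2 / ss_x)

ci = (y_hat - t_cv * s_pred, y_hat + t_cv * s_pred)
print(s_pred, ci)
```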
Regression Assumptions
We make several assumptions under a linear regression model including:
1. At each value of $X$, there is a distribution of $Y$ scores, and these conditional distributions are normal.
2. The best regression model is a straight line. Using a regression model to predict scores only works if the regression line is a straight line. If this relationship is non-linear, we could either transform the data (e.g., with a logarithmic transformation) or try one of the other regression equations that are available with Excel or a graphing calculator.
3. Homoscedasticity. The standard deviations, or the variances, of the $Y$ distributions around each of the predicted values are equal.
4. Independence of observations. For each given value of $X$, the values of $Y$ are independent of each other.
Lesson Summary
1. When we estimate a linear regression model, we want to ensure that the regression coefficient in the population $(\beta)$ does not equal zero. To do this, we perform a hypothesis test where we set the regression coefficient equal to zero and test for significance.
2. For each predicted value, we have a normal distribution (also known as the conditional distribution, since it is conditional on the value of $X$). This allows us to determine the likelihood of obtaining scores above or below a certain point for a given value of the predictor variable $(X)$.
3. We can also build confidence intervals around the predicted values to give us a better idea about the ranges likely to contain a certain score.
4. We make several assumptions when dealing with a linear regression model including:
• At each value of $X$, there is a normal distribution of $Y$
• The regression model is a straight line
• Homoscedasticity
• Independence of observations
Review Questions
The college counselor is putting on a presentation about the financial benefits of further education and takes a random sample of $120$ parents, recording each parent's annual income (in thousands of dollars) and number of years of formal education. The summary statistics for these data are:
$n = 120, \quad r = 0.67, \quad \sum X = 1{,}782, \quad \sum Y = 1{,}854, \quad s_X = 3.6, \quad s_Y = 4.2, \quad SS_x = 1{,}542$
1. What is the predictor variable? What is your reasoning behind this decision?
2. Do you think that these two variables (income and level of formal education) are correlated? If so, please describe the nature of their relationship.
3. What would be the regression equation for predicting income $(Y)$ from years of formal education $(X)$?
4. Using this regression equation, predict the income of a person with $2\;\mathrm{years}$ of formal education and of a person with $13.5 \;\mathrm{years}$ of formal education.
5. Test the null hypothesis that in the population, the regression coefficient for this scenario is zero.
1. First develop the null and alternative hypotheses.
2. Set the critical values at $\alpha =.05$
3. Compute the test statistic.
4. Make a decision regarding the null hypothesis.
6. For those parents with $15\;\mathrm{years}$ of formal education, what percentage would you expect to earn more than $\$18,500$ per year?
7. For those parents with $12\;\mathrm{years}$ of formal education, what percentage would you expect to earn more than $\$18,500$ per year?
8. Develop a $95\%$ confidence interval for the predicted income of a parent with $16 \;\mathrm{years}$ of formal education.
9. If you were the college counselor, what would you say in the presentation to the parents and students about the relationship between further education and salary? Would you encourage students to
further their education based on these analyses? Why or why not?
Review Answers
1. The predictor variable is the number of years of formal education. The reasoning behind this decision is that we are trying to determine and predict the financial benefits of further education (as measured by annual salary) by using the number of years of formal education (the predictor, or $X$, variable).
2. Yes. With an $r$ value of $0.67$, these two variables are strongly positively correlated: in general, more years of formal education are associated with higher income.
3. $\hat{Y} = 0.782X + 3.842$
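The slope and intercept in this answer follow directly from the summary statistics, since the least-squares slope is $b = r \cdot s_Y / s_X$ and the intercept is $a = \bar{Y} - b\bar{X}$. A short Python check (not part of the original answer key):

```python
# Summary statistics from the problem: n = 120, r = 0.67,
# sum(X) = 1782, sum(Y) = 1854, s_X = 3.6, s_Y = 4.2.
n, r = 120, 0.67
sum_x, sum_y = 1782, 1854
s_x, s_y = 3.6, 4.2

x_bar, y_bar = sum_x / n, sum_y / n   # 14.85 and 15.45

# Slope and intercept of the least-squares regression line.
b = r * s_y / s_x                     # about 0.782
a = y_bar - b * x_bar                 # about 3.842

print(b, a)
```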
4. For $X=13.5$, $\hat{Y} = 14.39$, or about $\$14,390$; for $X=2$, $\hat{Y} = 5.41$, or about $\$5,406$.
5. (a) $H_0: \beta = 0,\ H_a: \beta \neq 0$. (b) The critical values are $t = \pm 1.98$. (c) $s_b = \left (\frac{s_{Y*X}} {\sqrt{SS_x}} \right) = \left (\frac{3.12} {\sqrt{1542}} \right) = .08$, so $t = \frac{b - \beta} {s_b} = \frac{0.782 - 0} {.08} = 9.8$. (d) Since the calculated test statistic of $9.8$ exceeds the critical value of $1.98$, we reject the null hypothesis and conclude that the regression coefficient of $0.782$ is significantly different from zero at the $5\%$ level.
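The test statistic in answer 5 is a one-line calculation once the standard error of the slope is in hand. A quick Python check (again a verification sketch, using the answer key's $s_{Y*X} = 3.12$):

```python
import math

# From the answer above: s_{Y.X} = 3.12, SS_x = 1542, slope b = 0.782.
s_yx, ss_x, b = 3.12, 1542, 0.782

s_b = s_yx / math.sqrt(ss_x)      # standard error of the slope, about 0.08
t = (b - 0) / s_b                 # test statistic under H0: beta = 0

# Compare against the two-tailed critical value for alpha = .05.
t_cv = 1.98
print(s_b, t, abs(t) > t_cv)
```

The test statistic comes out near $9.8$, far beyond the critical value, so the null hypothesis is rejected.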
6. For $X=15$, $\hat{Y}=15.57$. To find the percentage of parents earning more than $\$18,500$ per year $(Y = 18.5)$, we calculate the $z$-score: $z = \frac{Y - \hat {Y}} {s_{Y * X}} = \frac{18.5 - 15.57} {3.12} = 0.94$. The area beyond a $z$-score of $0.94$ is $p = .1736$. Therefore, we would expect about $17.36\%$ of parents with $15\;\mathrm{years}$ of formal education to earn more than $\$18,500$ per year.
7. For $X=12$, $\hat{Y}=13.23$. To find the percentage of parents earning more than $\$18,500$ per year $(Y = 18.5)$, we calculate the $z$-score: $z = \frac{Y - \hat{Y}} {s_{Y * X}} = \frac{18.5 - 13.23} {3.12} = 1.69$. The area beyond a $z$-score of $1.69$ is $p = .0455$. Therefore, we would expect about $4.55\%$ of parents with $12\;\mathrm{years}$ of formal education to earn more than $\$18,500$ per year.
8. $s_{\hat{Y}} = s_{Y*X} \sqrt{1 + \frac{1} {n} + \frac{(X - \bar{X})^2} {SS_x}} = 3.12 \sqrt{1 + \frac{1} {120} + \frac{(16 - 14.85)^2} {1542}} = 3.14$
Using the general formula for the confidence interval $(CI = \hat{Y} \pm (t_{cv}) (s_{\hat{Y}}))$, we find that
$\begin{align}CI_{95} & = 16.35 \pm (1.98) (3.14) = 16.35 \pm 6.22\\CI_{95} & = (10.13, 22.57)\end{align}$
9. The answer is at the discretion of the teacher.
|
{"url":"http://www.ck12.org/book/Probability-and-Statistics-%2528Advanced-Placement%2529/r1/section/9.3/","timestamp":"2014-04-17T13:28:23Z","content_type":null,"content_length":"159212","record_id":"<urn:uuid:dc2f02ff-7adb-4321-8651-bfc18f83c7df>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Publications by Joseph Conlon
Volume Modulus Inflation and the Gravitino Mass Problem
ArXiv (2008)
The Hubble constant during the last stages of inflation in a broad class of models based on the KKLT mechanism should be smaller than the gravitino mass, H <~ m_{3/2}. We point out that in the models
with large volume of compactification the corresponding constraint typically is even stronger, H <~ m_{3/2}^{3/2}, in Planck units. In order to address this problem, we propose a class of models with
large volume of compactification where inflation may occur exponentially far away from the present vacuum state. In these models, the Hubble constant during inflation can be many orders of magnitude
greater than the gravitino mass. We introduce a toy model describing this scenario, and discuss its strengths and weaknesses.
Show full publication list
|
{"url":"http://www2.physics.ox.ac.uk/contacts/people/conlonj/publications/186095","timestamp":"2014-04-19T17:02:08Z","content_type":null,"content_length":"11585","record_id":"<urn:uuid:4ff34617-1b54-4f6c-bb65-d75bc37a2e67>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computing the Antipenumbra of an Area Light Source
Seth J. Teller
EECS Department
University of California, Berkeley
Technical Report No. UCB/CSD-92-666
December 1991
We define the antiumbra and the antipenumbra of a convex areal light source shining through a sequence of convex areal holes in three dimensions. The antiumbra is the volume beyond the plane of the
final hole from which all points on the light source can be seen. The antipenumbra is the volume from which some, but not all, of the light source can be seen. We show that the antipenumbra is, in
general, a disconnected set bounded by portions of quadric surfaces, and describe an implemented O( n^2) time algorithm that computes this boundary.
The antipenumbra computation is motivated by visibility computations, and might prove useful in rendering shadowed objects. We also present an implemented extension of the algorithm that computes
planar and quadratic surfaces of discontinuous illumination useful for polygon meshing in global illumination computations.
BibTeX citation:
@techreport{Teller:CSD-92-666,
  Author = {Teller, Seth J.},
  Title = {Computing the Antipenumbra of an Area Light Source},
  Institution = {EECS Department, University of California, Berkeley},
  Year = {1991},
  Month = {Dec},
  URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/1991/6141.html},
  Number = {UCB/CSD-92-666},
  Abstract = {We define the antiumbra and the antipenumbra of a convex areal light source shining through a sequence of convex areal holes in three dimensions. The antiumbra is the volume beyond the plane of the final hole from which all points on the light source can be seen. The antipenumbra is the volume from which some, but not all, of the light source can be seen. We show that the antipenumbra is, in general, a disconnected set bounded by portions of quadric surfaces, and describe an implemented <i>O</i>(<i>n</i>^2) time algorithm that computes this boundary. <p>The antipenumbra computation is motivated by visibility computations, and might prove useful in rendering shadowed objects. We also present an implemented extension of the algorithm that computes planar and quadratic surfaces of discontinuous illumination useful for polygon meshing in global illumination computations.}
}
EndNote citation:
%0 Report
%A Teller, Seth J.
%T Computing the Antipenumbra of an Area Light Source
%I EECS Department, University of California, Berkeley
%D 1991
%@ UCB/CSD-92-666
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/1991/6141.html
%F Teller:CSD-92-666
|
{"url":"http://www.eecs.berkeley.edu/Pubs/TechRpts/1992/6141.html","timestamp":"2014-04-18T18:29:41Z","content_type":null,"content_length":"6315","record_id":"<urn:uuid:4ac77861-3efd-4d27-bbf3-5785ffd1a0ed>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
|
can someone tell me if im right. In the State of Florida, a redfish must be at least 18 inches long, but no more than 27 inches long for the fish to be considered a keeper. Choose the combined
inequality that describes this scenario. 18 x 27 x 18 and x 27 x 18 or x 27 none of the above I got A
Yeah, that's right.
ok thank you
just wanted to check
I can't see any inequality symbols
Just to confirm, is that a \(\le\)?
Must be just me
Okay then, you are right!
you do FLVS Parth?
@apple_pi I have seen many cases like these. @Solmyr Nopers.
I do FLVS:)
oh lol i see
ive got school starting in a week and im trying to finish as fast a i can
Okay then! You guys collaborate while I become a teacher. :P
in only at 56% complete
ok lol
Had it last year:) Openstudy helpes a lot...but you have to try 2:)
no im gonna stick with regular school it seems harder to me on the PC
btw did you complete segment 1 completely?
haha:) Parth makes a good teacher... Yeah it doesn't work for everyone:) I'm actually a full time student though...
oh :P cool
I finished the whole course during the last school year...Now I started Geometry:) I have to admit that at least math is a lot easier with someone explaining!
ya i agree there :]
Never think too hard about Algebra, guys. It just comes and goes ;)
Now i miss all the formulas;( I actually like Algebra though it WAS confusing sometimes.. plus I got kinda lazy last year(bad idea)
ya that how i am right now
hey do you think u could help me out with some of my algebra assessments??
Algebra is beautiful—I never understand why people worry about it.
Sure, Solmyr. I'd love to(as long as you understand). :)
im having allot of difficulty
I still have all my notes:) But Parth is much more up to date on this
ok but we can all try :]]
hold on
Everything they teach you in Algebra is isolation. lol
04.08 Scatter plots and lines of best fit project. Step One: Measurements 1. Measure your own height and arm span in inches. You will likely need some help from a parent, guardian, or sibling to
get accurate measurements. Record your measurements on the “Data Record” document. You will not need to submit the “Data Record” document as part of this project. Instead, you will use the record
to help you complete part two. 2. Measure five additional people you know and record their arm spans and heights in inches. 3. Share the names and measurements of the five people you measured, as
well as your own measurements, to your student partner in the class. You may request a method to getting a partner from your teacher. Together you and your partner will have 12 sets of data. Step
Two: Creating Your Scatter Plot Using GeoGebra or a graphing software of your choice, draw your scatter plot and the line you think represents the line of best fit. Note: Directions for
downloading and using GeoGebra can be found in the "Course Information area.” 1. Both you and your partner need to create a scatter plot of the 12 data points you both collected. 2. Draw the line
that you feel best fits the data. 3. Copy and paste your scatter plot into a word processing document. 4. Share and discuss your scatter plot and the line of best fit with your partner. If your
scatter plots or lines of best fit are different, discuss reasons why they might be different. Step Three: The Line of Best Fit Include your scatter plot and the answers to the following
questions in your word processing document and submit to your instructor. Be sure to review how to save your files before getting started. You and your partner are free to discuss this part of
the project. However, each of you will be responsible for submitting your own attachment of this file. 1. Which two points did you use to draw the line of best fit? 2. Write the equation of the
line passing through those two points using the point-slope formula y - y1 = m(x - x1). Show all of your work. Remember to find the slope of the line first. 3. What does the slope of the line
represent within the context of your graph? 4. Using the equation that you found in question 2, approximately how tall is a person whose arm span is 66 inches? 5. According to your line of best
fit, what is the arm span of a 74-inch-tall person?
this was my hardest one
oh and the honors version of this
I am not good at doing assignments, but you could ask questions one-at-a-time. For example: What is the point-slope form?
Isn't this a mini-project? Do this first....too many words!!: 1. Measure your own height and arm span in inches. You will likely need some help from a parent, guardian, or sibling to get accurate measurements. Record your measurements on the “Data Record” document. You will not need to submit the “Data Record” document as part of this project. Instead, you will use the record to help you complete part two. 2. Measure five additional people you know and record their arm spans and heights in inches. 3. Share the names and measurements of the five people you measured, as well as your own measurements, to your student partner in the class. You may request a method to getting a partner from your teacher. Together you and your partner will have 12 sets of data.
We are not allowed to do assignments, but I can explain such mathematical concepts. :|
^^that you can do without help...If you can't measure 5 ppl then make up some of the height vs armsapan)
lol, then you can do that too.
ok but how do i use the data record? thing watever it is?
would it be easier to save those kind of things last?
Use Microsoft Excel. Haha.
Name Relationship to Student Arm Span in Inches Height in Inches Pat friend 63 63
That was the data record i think^^
I can't see what it exactly is. Could a file be uploaded?
hold on i can get the data record thing
see if the link to it works
Let me see.
It works for me but I'm logged in...u see it parth?
Yes, I do.
ok good
didnt know if it would work for parth
You can use Microsoft Excel for that.
Or any other spreadsheet.
Or copy+paste into word and you can still use it.
omg that looks complicated lol
Nope. That wouldn't be perfect and convenient.
the microsoft excel thing looks complicatesd
Then just select it all and paste it into a word document
It's not. I'd make it for you.
But you have to fill the info.. I am just making that format.
Meanwhile y don't you start figuring out some heights/armspans...or yours
ok btw how many ppl do i have to have??
thx parth
6 total people.
Your friend and five others.
You and 5 others:)
Bob Shirley Robert Peter John MEEE:]]]
Your friend will measure you and five others. lol and you don't measure yourself.
True Parth sorry! Now what's their height/armspan?
Measure yours and just make numbers near to those.
Or make it up if they aren't handy right now:)
so bob shirley robert peter john alberto
Like if your height is 150 cm, then you make up numbers near that. 152, 156, 159 etc.
im just making random names off the top of my head
lol, nice.
I think it wants the information in inches, correct me if I'm wrong
ok sounds simple
let me cheeck
Okay, then inches is cool.
so im like 6 feet
so divide 12 and 6
Get short people in the list too, so that it gets legit.
srry 12*6
You want a nice, sorted out, and diverse graph:)
im 72 inches
Make it 71 or 73 so it gets even more real ;)
And get people with 60 and 65 too.
Here are some numbers from my thing 72 73 62 66 76 76 65 64 68 68 72 73
Cool stuff.
Such as myself...I'm 62 inches lol
looks legit :PP
Now you have to make a scatter plot with the information.
i hate mini-projects lol
Same here :)
when this is over what do i put it all on?
That Excel which I submitted.
The excel part is for your own benefit...you dont submit it... Steps 2 & 3 are the actual assignment
Step Two: Creating Your Scatter Plot Using GeoGebra or a graphing software of your choice, draw your scatter plot and the line you think represents the line of best fit. Note: Directions for
downloading and using GeoGebra can be found in the "Course Information area.” 1. Both you and your partner need to create a scatter plot of the 12 data points you both collected. 2. Draw the line
that you feel best fits the data. 3. Copy and paste your scatter plot into a word processing document. Step Three: The Line of Best Fit Include your scatter plot and the answers to the following
questions in your word processing document and submit to your instructor. 1. Which two points did you use to draw the line of best fit? 2. Write the equation of the line passing through those two
points using the point-slope formula y - y1 = m(x - x1). Show all of your work. Remember to find the slope of the line first. 3. What does the slope of the line represent within the context of
your graph? 4. Using the equation that you found in question 2, approximately how tall is a person whose arm span is 66 inches? 5. According to your line of best fit, what is the arm span of a
74-inch-tall person?
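For reference, the point-slope computation asked for in steps 2, 4, and 5 of the assignment above can be sketched in a few lines of Python. The two points (64, 64) and (72, 73) are made-up values standing in for points read off a drawn line of best fit; any two points on the line would work the same way.

```python
# Two made-up points read off a hypothetical line of best fit:
# (arm span, height) in inches.
x1, y1 = 64, 64
x2, y2 = 72, 73

m = (y2 - y1) / (x2 - x1)       # slope = 9/8 = 1.125

# Point-slope form y - y1 = m*(x - x1), rearranged to y = m*x + c.
c = y1 - m * x1                 # intercept

height_at_66 = m * 66 + c       # question 4: height for a 66-inch arm span
arm_span_at_74 = (74 - c) / m   # question 5: arm span of a 74-inch person

print(m, height_at_66, arm_span_at_74)
```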
so this part :P
Its not THAT bad...they put so many words that are exhausting!
its the line graph sorta right
Scatter plot doesn't have the line - just the points. Make the x-axis the height Make the y-axis the armspan
or vice versa
so what now
Did you do the scatter plot?
i got it but now the saving part
what do i save it as
jazy u there?
Sorry! The graph is really good:) Now you just have to make a line of best fit. Put a line straight through the space where most of the points are.
can i us paint
its the second graph
A line of best fit is a straight line going through the whole graph... It doesn't have to go through all of the point. It only represents the majority of the information collected on the graph.
can u give me an example of what u mean
Heres a picture from google:
ok give me a sec
srry it took me so long my internet malfuntioned
is this what u were talking about?
Yes. Now all you have left are the questions: 1. Which two points did you use to draw the line of best fit? 2. Write the equation of the line passing through those two points using the
point-slope formula y - y1 = m(x - x1). Show all of your work. Remember to find the slope of the line first. 3. What does the slope of the line represent within the context of your graph? 4.
Using the equation that you found in question 2, approximately how tall is a person whose arm span is 66 inches? 5. According to your line of best fit, what is the arm span of a 74-inch-tall person?
Really? My internet glitched too:/
what does 1. mean????
it might be because we have so much stuff
if you want i can reopen a question
Woops sorry! When you draw a line of best fit you are supposed to guide yourself with two points on the graph.
my bad
its cool
well would my points be 64 and 73
|
{"url":"http://openstudy.com/updates/502a4f0ee4b0fbb9a3a7034f","timestamp":"2014-04-21T04:50:34Z","content_type":null,"content_length":"394606","record_id":"<urn:uuid:a068dae1-09ac-42bf-80a5-9d82315c8877>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ordinary Differential Equations/Structure of Differential Equations
Differential equations are all made up of certain components, without which they would not be differential equations. In working with a differential equation, we usually have the objective of solving it. A solution in this context is a new function, with all the derivatives gone, that satisfies the equation. If this is impossible, we go for a numerical solution.
Differential Equations
The first and most basic example of a differential equation is the one we are already familiar with from calculus. That is
$\frac{dy}{dx}=f(x)$
In this case we know how to solve for y (eliminate the derivative) by integrating f. So we know that
$y(x)=\int_{a}^x f(t)\,dt+c.$
Recall from the fundamental theorem of calculus that $\int_{a}^x f(t)\,dt$ is an anti-derivative of f for any choice of a. Notice that there is an arbitrary constant c, and so we get a family of solutions, one for each choice of c. Often in this book we will encounter initial value problems. These are problems where we are asked to find a solution to an ordinary differential equation that passes through some initial point $(x_0, y_0)$, where $x_0$ is the independent and $y_0$ the dependent variable. To find which solution passes through this point, one simply substitutes $x_0$ for x and $y_0$ for $y(x_0)$. This allows us to make a specific choice for c, which normally would be arbitrary.
$\begin{align}y_0&=\int_a^{x_0} f(t)\,dt+c\\ c&=y_0-\int_a^{x_0}f(t)\,dt\end{align}$
If we substitute this choice for c into the expression for y, we find that:
$\begin{align}y(x)&=\int_a^x f(t)\,dt+y_0-\int_a^{x_0}f(t)\,dt\\&=y_0+\int_{x_0}^x f(t)\,dt\end{align}$
Notice this is really a statement of the fundamental theorem of calculus.
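When the integral has no closed form, the same formula can still be evaluated numerically. The sketch below (a minimal illustration, not part of the original text) approximates $y(x)=y_0+\int_{x_0}^x f(t)\,dt$ with the trapezoid rule, using $f(t)=\cos t$ with $y(0)=1$ so the result can be compared against the exact solution $y(x)=1+\sin x$:

```python
import math

def solve_ivp_quadrature(f, x0, y0, x, n=1000):
    """Approximate y(x) = y0 + integral of f from x0 to x (trapezoid rule)."""
    h = (x - x0) / n
    total = 0.5 * (f(x0) + f(x))      # endpoint terms
    for k in range(1, n):
        total += f(x0 + k * h)        # interior sample points
    return y0 + h * total

# dy/dx = cos(x), y(0) = 1, has the exact solution y = 1 + sin(x).
approx = solve_ivp_quadrature(math.cos, 0.0, 1.0, 2.0)
exact = 1.0 + math.sin(2.0)
print(approx, exact)
```

With 1000 subintervals the approximation agrees with the exact value to several decimal places.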
An n-th order ordinary differential equation is an equation of the form
$F(y^{(n)},y^{(n-1)}, \ldots, y, x)=0$
Where F is a function of n + 2 variables that is not constant with respect to its first variable.
Note that n can be interpreted as the order of differentiation, and that $y^{(n)}$, the first variable of F, is in turn itself a function of x. This definition can be a lot to swallow. It helps to take an example. Suppose $F(t_1, t_2, t_3)=t_1-\cos(t_3)t_2$. Then $F(y',y,x)=0$ becomes
$y'-\cos(x)y=0\quad\text{or}\quad y'=\cos(x)y$.
Thus, by our definition above, y′=cos(x)y is a first order ordinary differential equation.
In general we will run into problems if some restrictions are not placed on the function F. For example, if we didn't require F to depend on its first variable, then we could have taken a function like $F(t_1, t_2, t_3)=1-\cos(t_3)t_2$, which is independent of its first variable. In that case $F(y', y, x) = 0$ simply becomes $1 - \cos(x)y=0$, which involves no derivatives at all! It would be very odd indeed to call this a first order differential equation.
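The example equation $y'=\cos(x)y$ happens to be separable, with exact solution $y=y_0 e^{\sin x}$ for $y(0)=y_0$, which makes it a convenient test case for a first numerical method. A minimal Euler-method sketch (an illustration, not part of the original text):

```python
import math

def euler(f, x0, y0, x_end, n=100000):
    """March y' = f(x, y) from (x0, y0) to x_end with n Euler steps."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)    # follow the tangent direction for one step
        x += h
    return y

# y' = cos(x) * y with y(0) = 1; exact solution y = exp(sin(x)).
approx = euler(lambda x, y: math.cos(x) * y, 0.0, 1.0, 1.0)
exact = math.exp(math.sin(1.0))
print(approx, exact)
```

Euler's method is only first-order accurate, so a very small step size is used here; later chapters introduce far more efficient schemes.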
Specific examples of ordinary differential equations we are familiar with from calculus would be:
$\frac{dy}{dx}=x \,$
$\frac{dy}{dx}=2 \sin x^2. \,$
However, they can also involve the higher order derivatives of y with respect to x. For example:
is also an ordinary differential equation.
Characteristics of Differential EquationsEdit
The order of a differential equation is the order of the highest derivative involved in the equation. Thus:
is a second-order differential equation, as the highest derivative is the second: d²y/dx².
The degree of a polynomial differential equation is the power to which the highest derivative is raised.
Linear and Non-Linear Differential EquationsEdit
DEs fall into two major types: linear and non-linear.
Linear DEs are the simpler kind. A partial differential equation or an ordinary differential equation that has a degree of 1 and no higher degree is called linear. Thus,
is a linear DE.
Non-Linear DEs are much more complex, as they are any DEs that are not linear. For example, terms such as
$y^2, \quad \sqrt{y}, \quad \cos y$
make a DE non-linear, so
$\left( \frac{d^2y}{dx^2} \right)^2=-7y$
$\sqrt{ \frac{d^2y}{dx^2}}+y^2=x$
are non-linear DEs.
Only a tiny proportion of non-linear DEs are solvable exactly - most have to be approximated.
Homogeneous Differential EquationsEdit
A homogeneous DE is one in which only terms involving y (including the derivatives of y) are present in the equation. No terms involving only the independent variable may be present. Therefore:
is homogeneous. If something is left over, then the DE is non-homogeneous, like this one:
A non-zero constant on the right-hand side also implies a non-homogeneous DE; after all, a constant is still a function.
Generally, if a DE can be written as:
where $a_n(x)$, etc. are functions of x, it is homogeneous. However, if it can only be written as
where b(x) is a function of x, it is non-homogeneous.
Solutions of a Differential EquationEdit
A solution of a differential equation is any function y=f(x) which, when substituted into the equation, satisfies it.
An equation of the form
with $C_1,C_2,C_3,\ldots,C_n$ as arbitrary constants is called an integral solution of the differential equation if every function y=f(x) obtained from it by substituting particular values (possibly with restrictions) for $C_1,C_2,C_3,\ldots,C_n$ is a solution of the differential equation. James Bernoulli first used the term integral in 1689, and Euler used the term particular integral in 1768. The word solution seems to have first appeared around 1774 with Lagrange, and through Poincaré this term became established.
A third type of solution is called the parametric solution in the form
with arbitrary constants $C_1,C_2,C_3,...,C_n$ whenever all functions y=f(x) that make the second equation an identity are also solutions to the differential equation.
People have tried to define general solutions (formerly known as complete integrals or complete integral equations due to Euler; these two terms now mean something different) to be integral solutions with arbitrary constants, and singular solutions to be integral solutions which are not contained in the general solution. However, these definitions have turned out to be contradictory, since it may be possible that, given one general solution that excludes a singular solution, another general solution may be found that includes it. Thus, the idea of singular solutions is contradictory and there is no good way to work with these terms.
Instead, we define a general solution to be an integral solution that includes all solutions of the DE, and a particular solution to be any single solution or integral solution of the DE.
When solving a DE in the crude sense, we aim to find ways to solve equations in particular forms to solutions directly, or to reduce them to a more amenable form. Later, we will aim to solve a DE in
a more general sense.
An initial value problem is a differential equation together with the initial conditions that the solution $y=f(x)$ also satisfy the equations
at a specific $x_0$. If the $x_0$ are different, then it is called a boundary value problem with boundary conditions.
We first consider the simple case of the equation $y'=f(x)$. This is easily solvable with the following theorem that you probably have already proved in Calculus:
Relationship to other types of equationEdit
The following types of equation are not normally encountered in a first course in differential equations but are included here to illustrate the range of problems where differential equations play a role.
It is possible to formulate equations where the function being sought is part of the integrand. Such equations are known as integral equations. A theorem in differential equations states that virtually any differential equation can be reformulated as an integral equation. Integral equations are normally studied after differential equations have been mastered. In practice it is
sometimes the case that the corresponding integral equation may be easier to solve than the original differential equation.
It is also possible to encounter equations which include both derivatives and integrals. These equations may or may not be convertible to either purely differential or integral equations.
Another related area is that of difference equations. These equations involve difference quotients in which the denominator is not an infinitely small quantity but one of finite size. Their methods of solution parallel those of differential equations. One major difference in their solutions is that the role played by the exponential function in differential equations is often taken by another quantity, which may be complex.
Equations containing both difference and differential terms are not commonly encountered in practice. These may be difficult to solve in closed form.
Differential equations may be formulated for matrices as well as for real and complex numbers. Because matrix multiplication is not in general commutative, careful attention must be paid to the order of the factors when solving these equations.
Additionally, fractional differential equations, which may be either ordinary or partial differential equations, also present some peculiarities, and for this reason they too are studied only after a firm grounding in the more usual forms has been established.
Fractional differential equations are rarely mentioned in most textbooks, so a brief note is included here. Typical ordinary differential equations involve integer powers of derivatives, while fractional differential equations involve any power. This class of equation has been studied almost as long as the other types of differential equation, but other than the semiderivative equations, those involving powers of ±1/2, methods for solving them in closed form are not known. Many examples of the diffusion equation, a commonly occurring partial differential equation in physics and chemistry, can be reformulated in terms of a semiderivative equation and solved immediately.
One reason for the difficulties encountered with this type of differential equation is because the range of potential solutions is much larger than those encountered elsewhere. Integer valued
derivatives require a function to be differentiable: only functions of this type can be solutions to a typical differential equation. Fractional derivatives may be applied to completely discontinuous
functions and some generalized functions. Methods for identifying these less well studied functions as solutions to fractional differential equations have yet to be developed systematically.
Existence and Uniqueness theoremsEdit
As well as attempting to solve a new differential equation, it is frequently worthwhile to determine whether a solution to the equation actually exists and, if it does, whether it is unique. These questions will be addressed in the section on the existence and uniqueness theorems, which will be proved later.
Since most differential equations cannot be solved in closed form, numerical solutions are of great importance. While the existence theorems may seem to be rather esoteric to the beginner they are of
considerable importance when attempting a numerical solution: in practice it is very helpful to know that a solution really does exist before trying to compute it.
Understanding when solutions exist and are unique often provides qualitative information about solutions. For example, the basic uniqueness theorems state that for each initial condition there is a unique solution. This immediately implies that two solutions can never intersect: if they did, you could take the intersection point as your initial data, and the uniqueness theorem would imply the two solutions are the same function. We will discuss qualitative behavior further in the second part of the text.
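To illustrate with the explicit example from earlier in the chapter: the solutions of $y'=\cos(x)\,y$ are $y = C e^{\sin x}$, so two solutions with distinct constants differ by $(C_1 - C_2)e^{\sin x}$, which never vanishes, so their graphs never cross. A quick Python check on a sample grid (the chosen constants and grid are ours):

```python
import math

C1, C2 = 2.0, 1.0  # two different initial values at x = 0

def gap(x):
    # difference of the two solutions y_i(x) = C_i * exp(sin x)
    return (C1 - C2) * math.exp(math.sin(x))

# the gap stays bounded away from zero on a sample grid over [-10, 10];
# in fact |gap| >= |C1 - C2| * e^{-1} everywhere, since sin x >= -1
min_gap = min(abs(gap(x / 10)) for x in range(-100, 101))
print(min_gap > 0)
```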
Last modified on 8 December 2012, at 09:12
Welcome to Zoran's Homepage
Email: zskoda@irb.hr
Department of Mathematics Page
My nlab pages: bio page, my writings, nlab section
Zoran Škoda, Ph.D. University of Wisconsin, Madison, 2002
link to the conferences "Categories in geometry and in mathematical physics", September 24-28, 2007 , and summer school Black hole physics, Trpanj, Peljesac, Croatia, June 21-25, 2010; and the Rab
conferences on Hot Matter and Gauge Field Theories, 2006 and 2007
Web pages of graduate courses I teach at the University of Zagreb (in Croatian): Sheaves, bundles and cohomology (full acad. yr. 2008/9); and a proposed course for Winter 2009/2010: http://
My main research areas are noncommutative algebraic geometry and geometric aspects of mathematical physics. I have also spent some years on study and projects related to human linguistics (historical
and computational) and formal and programming language theory, design, compiler optimization and semantics.
My thesis was titled COSET SPACES FOR QUANTUM GROUPS (see abstract here ). I used Ore localizations and a new concept of localized coinvariants to construct (without any representation theory, or
Ansatze) noncommutative schemes which are in a strict sense coset spaces for matrix quantum groups of type A; the motivation and application was to find the geometric theory (and the appropriate
measure) for coherent states for quantum groups. Now I use actions of monoidal categories and various sorts of descent theory to further study similar setups. Recently I delved into more categorical
algebra and study of various phenomena related to nonabelian cocycles of various sorts. With C. Schweigert I coordinated a bilateral grant Croatia-Germany on NONABELIAN COHOMOLOGY for 2006-7 and now
I hold a small similar bilateral grant with Urs Schreiber also on homological algebra and applications. This grant and hospitality of MPIM Bonn enabled me to participate in the work in progress
"twisted nonabelian differential cohomology" , commented recently on category cafe. I am also recently active in the construction of nlab.
My PhD thesis advisor was mathematician J. W. Robbin, and A. Chubukov was my physics supervisor. My research host at Indiana University was Valery Lunts . My present boss at IRB (project
098-0000000-2865) is S. Meljanac (he is a theoretical physicist) and I participate in a project on homological and geometric methods in rep. theory lead by P. Pandzic at Zagreb university
(037-0372794-2807). My basic education is in theoretical physics but in recent years I am more mathematical in my work and interests. Some people with whom I particularly like to talk mathematics are
V. A. Lunts, M. Jibladze, U. Schreiber and R. Friedrich. I had less chance but it was very useful talking also to T. Maszczyk, A. L. Rosenberg, G. Sharygin, P. Bressler, B. Noohi, V. Guletskii, G.
Bohm, I. Mencattini, Y. Soibelman ... Recently, I was also influenced by my younger colleague in Zagreb, Igor Bakovic. His thesis is on gerbes and bigroupoid bundles.
I am myself planning a book on "Actions, sheaves and categories", but at this point there is only a preliminary list actionsbk.dvi of parts and chapters.
A partial list of my past conference talks and invited seminar talks may be found here.
You can also download online some of my papers (see also a graphically more attractive list at nlab)
Localizations for construction of quantum coset spaces math.QA/0301090, 34 pages. "Noncommutative geometry and Quantum groups", W.Pusz, P.M. Hajac, eds. Banach Center Publications vol.61, pp.
265--298, Warszawa 2003.
Coherent states for Hopf algebras, Letters in Mathematical Physics 81, N.1, pp. 1-17, July 2007. (earlier arXiv version: math.QA/0303357 )
Noncommutative localization in noncommutative geometry, London Math. Society Lecture Note Series 330, ed. A. Ranicki; pp. 220--313, math.QA/0403276.
Distributive laws for actions of monoidal categories math.CT/0406310
Cyclic structures for simplicial objects from comonads math.CT/0412001
Included-row exchange principle for quantum minors math.QA/0510512
A universal formula for representing Lie algebra generators as formal power series with coefficients in the Weyl algebra (with N. Durov, S. Meljanac, A. Samsarov), Journal of Algebra 309, Issue 1,
pp.318-359 (2007) (math.RT/0604096)
Every quantum minor generates an Ore set , (free-access link) International Math. Res. Notices 2008, rnn063-8; math.QA/0604610
Equivariant monads and equivariant lifts versus a 2-category of distributive laws (arXiv:0707.1609)
(with S. Meljanac) Leibniz rules for enveloping algebras (pdf)
-------- An older, obsolete version is at arXiv: (arXiv:0711.0149)
S. Meljanac, D. Svrtan, Z. Skoda, Exponential formulas and Lie algebra type star products, SIGMA 8 (2012), 013, 1-15, arxiv:1006.0478
H. Sati, U. Schreiber, Z. Skoda, D. Stevenson, Twisted nonabelian differential cohomology: Twisted (n-1)-brane n-bundles and their Chern-Simons (n+1)-bundles with characteristic (n+2)-classes,
preliminary version
A simple algorithm for extending the identities for quantum minors to the multiparametric case (arXiv:0801.4965)
Quantum heaps, cops and heapy categories , Mathematical Communications 12, No. 1, pp. 1-9 (2007); math.QA/0701749.
Bicategory of entwinings arXiv:0805.4611
Twisted exterior derivative for enveloping algebras arXiv:0806.0978
Compatibility of (co)actions and localization (pdf) arXiv:0902.1398 (preliminary version)
Some equivariant constructions in noncommutative algebraic geometry, Georgian Mathematical Journal 16 (2009), No. 1, 183--202, arXiv:0811.4770
Heisenberg double versus deformed derivatives, Int. J. Mod. Physics A 27 & 28 (2011) 4845--4854, arXiv:0909.3769.
(with V. Lunts) Hopf modules, {\rm Ext}-groups and descent (incomplete version)
Bi-actegories (May 2007, preliminary version)
(with Gabriella Böhm) Globalizing Hopf-Galois extensions (preliminary version)
Urs Schreiber, Zoran Skoda, Categorified symmetries, Proceedings of 5th Mathematical Physics Meeting, SFIN, XXII Series A: Conferences, No A1, (2009), 397-424 (Editors: Branko Dragovich, Zoran
Rakic). Extended 55 page arxiv version: arXiv/1004.2472 (Summer School and Conference on Modern Mathematical Physics, Institute of Physics, Belgrade, Serbia, July 6-17, 2008)
Domagoj Kovacevic, Stjepan Meljanac, Andjelo Samsarov, Zoran Skoda, Hermitian realizations of kappa-Minkowski spacetime, arxiv/1307.5772
Dimitri Gurevich, Vladimir Rubtsov, Pavel Saponov, Zoran Skoda, Generalizations of Poisson structures related to rational Gaudin model, arxiv/1312.7813
Prilozi za vrednovanje u teorijskim znanstvenim disciplinama I (34 pages, preliminary version, in Croatian)
With my student M. Basic, I work on a programme of finding integration objects for Leibniz algebras, Leibniz groups. Here is a manifesto: Search for Leibniz groups (pdf)
Recently I started working on a short note about functoriality of actegories induced from (co)module algebras, "Functoriality of actegories from comodule algebras". With Georgiy Sharygin we are working on a project on certain star products related to Lie algebroids. The first phase was to construct, in the presence of a connection, a correction to the symmetrization map for Lie algebroids which would make it compatible with the coproduct.
Hash Function question
I have been browsing through the book "Introduction to Algorithms", and I was reading the chapter on Hash Tables/Hash Functions. (Ch.12). There are three hash function algorithms described:
Division method, Multiplication method, and Universal Hashing.
Division method is simple, it is just: h(k) = k % m, where m is the number of slots in the table. It is assumed that the key k is an integer (and if it is a string, there are methods given for converting the string to an int). So far, so good: for an implementation that at its core uses something like "vector< list<T> > vec", we can always be sure that the value for the key will be within the bounds of the vector (because of k % vec.size()).
Now for the multiplication method, it gives the function: h(k) = floor( M * ( (k * A) - floor(k * A))), where 0 < A < 1. The book suggests A = (sqrt(5) - 1)/2 as a good choice. M is an integer, and the book says the choice of M is not critical. In its example it gives M as 10000.
Now my question is, this function will return some value for h(k), and that will be my index into my vector<list<T>>. But how do I know that it will be within bounds of the array? Am I supposed
to apply the division method after I have applied the multiplication method, to ensure that I have a value that is in the proper range? If so, this is not explained in the text.
The range will be [0..M), you would use size() for M to get your index. x - floor(x) is always in the range [0..1); here square bracket means including and paren means approaching. This is somewhat related to the modulus used in the "division" method. What it is basically doing is shifting part of K below the decimal point and "chopping off" the whole number part, yielding a number in the range [0..1), but without regard to how many buckets you have. Dividing a number by A is the same as multiplying by (A^-1) where ^ is power, not xor. When A is (0..1) this is the same as dividing by (1..inf). M then stretches this back out to the range you want. Note that you do have to be careful because while I can give a nice algebraic "this number approaches 1", your computer may very well decide at some point that it's close enough and return M, so you may want to use size()-1.
Thanks for the confirmation. Just a moment ago, I realized that M * K, when K is, say, .9999 and M is 10000, will yield 9999. So I guess the range should be 0 to M-1.
Anyway, thanks for the info/clarification.
shape theory
While the idea of a homotopy type is very suitable for the study of (locally) good topological spaces, the weak homotopy type fails to give useful information for ‘bad’ spaces, of which classical
examples include the Warsaw Circle, Sierpinski gasket, p-adic solenoid and so on. Even if our initial and principal interest is more often than not in good spaces, bad spaces arise naturally in their
study. For example, in the study of dynamical systems on manifolds, an important issue is the study of the attractors of such systems, which are typically fractal sets, and thus not ‘locally nice’ at
all! The intuitive idea of shape theory is to define invariants of quite general topological spaces by approximating them with ‘good’ spaces, either by embedding them into good spaces, and looking at
open or polyhedral neighborhoods of them, or by considering abstract inverse systems of good spaces. The two approaches are closely related.
If there are few maps from polyhedra (e.g. from spheres) into the space, then the weak homotopy type may tell too little about the space. Therefore one “expands” the space into a successive system of
spaces which are good recipients of maps from polyhedra (e.g. ANR-s, polyhedra) and one adapts the homotopy theory to such expansions. The analogue of (strong) homotopy type in this setting is the
shape of a space; the shape is an invariant of the strong homotopy type and agrees with it on the ANR-s for metric spaces and on the polyhedra. It is more crude for other spaces, but more suitable
than the weak homotopy type, or more exactly gives complementary information. Instead of embedding a space, one may abstractly expand or resolve the space or its homotopy class into a pro-object in a
category of nice spaces. Strong shape theory is a variant which is closer to the usual kind of homotopy, is more geometric and has more homotopy theoretic constructions available in its ‘toolkit’. It
differs by passing to the homotopy category at a later stage in the theory, so one gets homotopy coherent approximating systems rather than homotopy commutative ones.
Shape theory was first explicitly introduced by the Polish mathematician Karol Borsuk in the 1960s, although Christie, a student of Lefschetz, had done some initial development work on the same basic idea much earlier. One of the modern versions of shape theory is developed in terms of inverse systems of absolute neighbourhood retracts (ANRs) (which are pro-objects in the homotopy category of polyhedra). These were introduced in this setting by S. Mardešić and J. Segal (1971) and independently, in a slightly different form, by Tim Porter (thesis, 1971), using the more combinatorial framework of pro-objects in the category of simplicial sets. This latter approach also indicated the possible link with the étale homotopy theory of Artin and Mazur (Springer Lecture Notes 100).
Shape theory is a ‘Čech homotopy theory’, having a similar relationship to Čech homology as homotopy theory, based on the singular complex construction, has to singular homology. In fact, as mentioned above, the origins of both shape theory and strong shape theory go back further than Borsuk’s initial papers, to work by Lefschetz and his student, D. Christie (thesis plus article: D. E. Christie, Net homotopy for compacta, Trans. Amer. Math. Soc., 56 (1944) 275–308). Christie considered a 2-truncated form of strong shape theory; categorically this corresponds to a lax or op-lax 2-categorical version of shape theory. Although many of the initial ideas were developed by Christie, the paper went unnoticed until Borsuk developed his slightly different approach in the late 1960s.
For many applications one needs more refined invariants, which build up strong shape theory, while sometimes cruder versions may be useful, for example the recent theory of coarse shape.
Strong Shape Theory developed in the 1970s through the work of Edwards and Hastings (lecture notes, see below), Porter, Quigley, and others. It has, especially in the approach pioneered by Edwards
and Hastings, strong links to proper homotopy theory. The links are a form of duality related to some of the more geometric duality theorems of classical cohomology.
M. Batanin further elucidated strong shape theory from a categorical and 2-categorical point of view, but his approach is as yet not much used. His 1997 paper shows the connections between this theory and a homotopy theory of simplicial distributors linked to $A_{\infty}$-categories.
The structure of the strong shape theory of compact spaces is related to certain structure and constructions on the corresponding (commutative) $C^*$-algebras of functions. These are related to the
algebraic K-theory of such commutative $C^*$-algebras. Extensions to non-commutative $C^*$-algebras have been made; see (Blackadar) and (Dadarkat) below, for a start.
As shape theory is a Čech homotopy theory, its corresponding homology is Cech homology, but what is the corresponding construction for strong shape? The answer is Steenrod–Sitnikov homology. This is
discussed in Mardešić’s book, Strong Shape and Homology, (see below). Many of the themes of homotopy coherence and related ideas occur in this theory and this suggests an infinity categorical
approach (closely related to Batanin’s) may be important. This seems to be emerging with interpretations of work by Toen and Vezzosi, and by Lurie, and perhaps suggests a review of Batanin’s work
from that new viewpoint.
Borsuk’s shape theory (K. Borsuk, (1968))
This was the original form and applies to compact metric spaces. It uses the fact that any compact metric space can be embedded in the Hilbert Cube. For any such embedded compact metric spaces, $X$
and $Y$, one considers shape maps from the collection of open neighbourhoods of $X$ to those of $Y$. These shape maps are families of continuous maps satisfying a compatibility relationship ’ up to
homotopy’. These compose nicely and form the Borsuk shape category. Two spaces have the same shape if they are isomorphic in this category. Full details of the definition of such shape morphisms are
given in the separate entry, Borsuk shape theory.
A remarkable and beautiful theorem of Chapman (the Chapman complement theorem) shows that two compact metric spaces, $X$ and $Y$, embedded in the pseudo-interior of the Hilbert cube, $Q$, have the same shape if and only if their complements $Q\setminus X$ and $Q\setminus Y$ are homeomorphic.
ANR-systems approach (Mardešić and Segal (1970))
Abstract shape category
The idea of abstract shape theory is very simple. You have a category, $C$, of objects that you want to study. (In Borsuk’s classical topological case this was the (homotopy) category of compact
metric spaces.) You have a well behaved set of methods that work well for some subcategory, $D$, of those objects (polyhedra in Borsuk’s case, where the methods were those of homotopy theory). The
categorical idea that can be glimpsed behind the topological constructions of topological shape theory is that of replacing an object $X$ of $C$ with approximations to $X$ by objects of $D$, (so
‘approximating’ a compact metric space by polyhedra, for instance). Categorically this replaces the object $X$ by the comma category, $(X/D)$, which comes with a projection functor to $D$, which
‘records’ the approximating $D$-object for each approximation. You then use your invariants for objects in $D$ to define (and study) the more general objects in $C$. This does not come without
consequences as you obtain new types of maps, (shape maps) between the objects of $C$, namely functors between the comma categories that respect the projections. The objects of $C$ together with your
new shape maps form the shape category of your situation.
The shape category $Shape(C,D)$ is associated to a pair $(C,D)$ of a category $C$ and a dense subcategory $D$.
Here dense subcategory is used in the second sense of that term: for every object $X$ in $C$ there is its $D$-expansion, which is the object $\bar{X}$ in the category $pro D$ of pro-objects in $D$
that is universal (initial) with the property that it is equipped with a morphism $X\to\bar{X}$ in $pro D$.
The shape category $Shape(C,D)$ has the same objects as $C$, with morphisms from $X$ to $Y$ given by morphisms in $pro D$ between the corresponding expansions, $Shape(C,D)(X,Y) = pro D(\bar{X},\bar{Y})$.
A more categorical form of shape theory was studied by Deleanu and Hilton in a series of papers in the 1970s. They consider a more general setting of a functor $K : D \to C$, which in the classical
Borsuk case would be the inclusion of the homotopy category of compact polyhedra into that of all compact metric spaces.
This was developed further by Bourn and Cordier, and a strong shape version was then found by Batanin.
Pro-spaces in a shape context
The classical application of shape theoretic ideas is to the study of topological spaces that do not have the homotopy type of a CW-complex. This is the case obtained from the above general setup by taking $C$ to be the homotopy category of topological spaces and $D$ the subcategory of spaces with the homotopy type of a CW-complex.
More on this is in the section Shape theory for topological spaces below and in Cech homotopy.
Profinite groups
Consider the category $C =$ Grp of groups and its subcategory $D$ of finite groups. A shape map between two groups is a map between their profinite completions. This sort of behaviour is quite general
as this form of abstract shape theory is related to equational completions; see
• Gildenhuys and Kennison, Equational completions, model induced triples and pro-objects, J. Pure Applied Algebra, 4 (1971) 317-346.
This aspect is explored reasonably fully in the book by Cordier and Porter (see below).
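In symbols, the profinite-groups example above can be sketched as follows (standard notation; this spelling-out is ours, not the original entry's): the $D$-expansion of a group $G$ is its inverse system of finite quotients, whose limit is the profinite completion, and shape morphisms are maps of completions:

```latex
\hat{G} \;=\; \varprojlim_{\substack{N \trianglelefteq G \\ [G:N] < \infty}} G/N,
\qquad
Shape(Grp, FinGrp)(G, H) \;\cong\; Hom(\hat{G}, \hat{H}).
```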
A different terminology and slightly different emphasis is often used within the shape theoretic literature as it corresponds more to the geometric intuition needed there, deriving originally from
the important classical motivation of Borsuk, Mardešić, and Segal.
Shape theory for topological spaces
Strong shape in terms of $(\infty,1)$-sheaves on a space
There is a way to study the strong shape theory of a topological space $X$ in terms of ∞-stacks on $X$, i.e. in terms of the (∞,1)-category of (∞,1)-sheaves $Sh_{(\infty,1)}(X) := Sh_{(\infty,1)}(Op(X))$ on the category of open subsets of $X$. This is described in
• Bertrand Toen and Gabriele Vezzosi, Segal topoi and stacks over Segal categories in Proceedings of the Program Stacks, Intersection theory and Non-abelian Hodge Theory , MSRI, Berkeley,
January-May 2002 (arXiv:math/0212330)
and in section 7.1.6 of
For more details see shape of an (infinity,1)-topos.
This theory fits into the general picture above of a subcategory $D\subset C$, where now $C$ is the $(\infty,1)$-category of $(\infty,1)$-toposes, while $D$ is the category of $\infty$-groupoids,
regarded as their presheaf $(\infty,1)$-toposes. Thus, the “shape” of an $(\infty,1)$-topos $X$ is the functor $Hom(X,-)\colon \infty Gpd \to \infty Gpd$.
Alternately, since a geometric morphism from an $(\infty,1)$-topos $X$ into presheaves on an $\infty$-groupoid $K$ is the same as a global section of the constant ∞-stack $L Const(K)$ over $X$, we
can also describe this functor as the composite
$\infty Grpd \xrightarrow{LConst} Sh_{(\infty,1)}(X) \xrightarrow{\Gamma} \infty Grpd \,.$
Thus, we can equivalently describe the shape of $X$ by mapping out of it into topological spaces over $X$ that are at least fiberwise nice topological spaces: in other words, to look at $\infty$-
covering spaces over $X$.
Now, for a small (∞,1)-category $C$, a functor $C \to \infty Grpd$ that preserves finite limits may be thought of as a pro-object in $C$. Now $\infty Gpd$ is not small, but one may hope that the
functors $Shape(X)\colon \infty Gpd \to \infty Gpd$ arising in this way are determined by a small amount of data, and thus give honest pro-$\infty$-groupoids.
We can, if we wish, define for the nonce
$Pro(\infty Grpd) \subset Func(\infty Grpd, \infty Grpd)^{op}$
to be the full subcategory of (∞,1)-functors that preserve finite limits, although as discussed above this is not quite correct. We call the objects in $Pro(\infty Grpd)$ pro-spaces or shapes. Notice that by the homotopy hypothesis theorem, we can think here of $\infty Grpd \simeq Top_{cg,wH}$ as the category of nice topological spaces, considered up to homotopy equivalence.
The first description of shapes makes it obviously functorial in geometric morphisms of $(\infty,1)$-toposes. This can be seen from the second definition as well: given $(f^* \dashv f_*) \colon \
mathbf{H} \to \mathbf{K}$, the unit $Id_{\mathbf{K}} \to f_* \circ f^*$ induces a transformation
$\Gamma_{\mathbf{K}}\circ LConst_{\mathbf{K}} \to \Gamma_{\mathbf{K}} \circ f_* \circ f^* \circ LConst_{\mathbf{K}} \simeq \Gamma_{\mathbf{H}}\circ LConst_{\mathbf{H}}$
that may be regarded as a morphism of shapes
$Shape(f) : Shape(\mathbf{K}) \to Shape(\mathbf{H}) \,.$
We say the geometric morphism $f$ is a shape equivalence if $Shape(f)$ is an equivalence of pro-spaces.
For $f : X \to Y$ a continuous map of paracompact spaces, the induced geometric morphism $(f^* \dashv f_*) : Sh_{(\infty,1)}(X) \to Sh_{(\infty,1)}(Y)$ is a shape equivalence precisely if for each
CW-complex $K$ the map
$Top(Y,K) \to Top(X,K)$
is an equivalence.
Applications of Shape Theory
Geometric Topology
Dynamical Systems
In a dynamical system, the attractors are rarely polyhedra and their homotopy properties correspond more nearly to shape theoretic ones than to standard homotopy theoretic ones. This seems first to have been studied by Hastings (see references) and more recently has been explored in papers by José Sanjurjo and his coworkers, see below. Adapting this idea, Hernandez, Teresa Rivas and José García Calcines have been using ideas developed for proper homotopy theory and shape involving pro-spaces to describe limiting properties of dynamical systems.
See also $n$lab entries shape fibration, approximate fibration, … and references
The original references for the shape theory of metric compacta are:
• K. Borsuk, Concerning homotopy properties of compacta, Fund Math. 62 (1968) 223-254
• K. Borsuk, Theory of Shape, Monografie Matematyczne Tom 59,Warszawa 1975.
The ‘ANR-systems’ approach of Mardešić and Segal appeared in
• S. Mardešić and J. Segal, Shapes of compacta and ANR-systems, Fund. Math. 72 (1971) 41-59,
and is fully developed in
• S. Mardešić, J. Segal, (1982) Shape Theory, North Holland.
The more or less equivalent pro-object approach was independently developed by Porter in
• T. Porter, Čech homotopy I, Jour. London Math. Soc., 1, 6, 1973, pp. 429-436.
• T. Porter, Čech homotopy II, Jour. London Math. Soc., 2, 6, 1973, pp. 667-675.
References relating more to strong shape theory include:
These last three papers developed a version of the BrownAHT to pro-categories of simplicial sets and of chain complexes, so as to give strong shape theory a better foundation and toolbox of
homotopical methods. These methods were complementary to those of Edwards and Hastings, (listed above), who used a Quillen model category structure on the pro-category.
References to the categorical forms of shape theory include
• A. Deleanu, P.J. Hilton, On the categorical shape of a functor, Fund. Math. 97 (1977) 157 - 176.
• A. Deleanu, P.J. Hilton, Borsuk’s shape and Grothendieck categories of pro-objects, Math. Proc. Camb. Phil. Soc. 79 (1976) 473-482.
• D. Bourn, J.-M. Cordier, Distributeurs et théorie de la forme, Cahiers Topologie Géom. Différentielle Catég. 21,(1980), no. 2, 161–188, numdam.
• J.-M. Cordier and T. Porter, (1989), Shape Theory: Categorical Methods of Approximation, Mathematics and its Applications, Ellis Horwood. Reprinted Dover (2008),
which explores categorical methods in the area.
The relationship between invariants of $C^*$-algebras and the shape of their spectra was explored in
The links are with K-theory and Kasparov’s theory. This connection and a related one to ‘asymptotic morphisms’ is explored in some neat notes by Anderson and Grodal:
That connection with asymptotic morphisms is fully explored in the work of Dadarlat; see his papers,
• Marius Dādārlat, Terry A. Loring, Deformations of topological spaces predicted by E-theory, in Algebraic methods in operator theory, pages 316-327. Birkhäuser 1994.
For links with dynamical systems, see the early paper,
• H. M. Hastings?, Shape theory and dynamical systems in M.G.Markely and W.Perizzo: The structure of attractors in dynamical systems, Lect. Notes in Math. 688 (1978) 150-160. Springer-Verlag.
and more recently
• Joel W. Robbin, Dietmar A. Salamon, Dynamical systems, Shape Theory and the Conley index, Ergodic Theory Dynam. Systems 8 (1988) 375 - 393,
• A. Giraldo, M. A. Morón, F. R. Ruiz del Portal, J. M. R. Sanjurjo, Shape of global attractors in topological spaces, Nonlinear Analysis 60 (2005) 837 - 847
IOI: A Medium Problem
Here is another, medium-level problem from the IOI. (Parental advisory: this is not quite as easy as it may sound!)
I think of a number between 1 and N. You want to guess the secret number by asking repeatedly about values in 1 to N. After your second guess, I will always reply "hotter" or "colder", indicating
whether your recent guess was closer or farther from the secret compared to the previous one.
You have lg N + O(1) questions.
The solution to the first problem I mentioned can be found in the comments. Bill Hesse solved the open question that I had posed. He has a neat example showing that the space should be N³ bits, up to lower order terms. It is very nice to know the answer.
A very elegant solution to the second problem was posted in the comments by a user named Radu (do you want to disclose your last name?). Indeed, this is simpler than the one I had. However, the one I had worked even when the numbers in the array were arbitrary (i.e. you could not afford to sort them in linear time). I plan to post it soon if commenters don't find it.
22 comments:
Quite easy. First ask 1 and N, so you know which half the secret number lies in. Then repeat the above process, each time cutting the possible range into half.
x10000: that sounds like it takes 2 lg N, which is not allowed.
What is the response if the secret number is exactly half way between the previous two guesses? Do you say "equal" and I win in that case?
Only the first turn needs two questions. Every successive turn needs only one question.
Sorry, I'm wrong. I'm not able to delete my comments.
Suppose we know that the secret number lies in [1,N/2] (with one or two additional questions)
Because we need to find out the secret number in at most lg_2(N) questions, each question must cut the possible range in half on average.
So first assume the secret number is 1. The last question should guess 1, which cuts down [1,2] into half. The second last question should guess 2, which cuts down [1,4] into half. The third last
question should guess 3, and the 4th 6, the 5th 11, ...
The kth last question should guess f(k), where f(1)=1 and f(k)+f(k+1)-1=2^k. Each question can cut down the possible range into half. We can first guess f(k) for the first k such that f(k)>N/2.
When the secret number is not 1 (but less than N/2), we still choose the next guess value that can cut down the possible range into half. The value being asked will lie in the range [1,N].
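The recurrence described in this comment can be tabulated with a short sketch (Python, illustrative only; `guess_sequence` is a name invented here, not from the post):

```python
def guess_sequence(n):
    """Values f(1..n) from f(1) = 1 and f(k) + f(k+1) - 1 = 2**k,
    the recurrence proposed in the comment above."""
    vals = [1]
    for k in range(1, n):
        # rearrange f(k) + f(k+1) - 1 = 2**k  to  f(k+1) = 2**k + 1 - f(k)
        vals.append(2 ** k + 1 - vals[-1])
    return vals
```

The first few values come out as 1, 2, 3, 6, 11, matching the guesses listed in the comment above.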
Mihai, shouldn't it be hotter/colder/same? Otherwise I don't think you can do it in log (n) + O(1).
In this game, it seems as if getting the answer "colder" is bad since the next move will not bring any new information (it will necessarily be hotter...).
Therefore, I think the strategy should be designed to have a bias in favor of hotter (which is not the case for usual dichotomy techniques).
If we can ask with values outside 1...N then it is easy.
Yes, it is hotter-colder-same. When you get "same" you have guessed the answer.
No, you cannot ask for numbers outside 1..N
As I said, this is not as easy as it may sound. But I like the problem since it sounds very canonical :)
This is a cute problem and would make a good homework problem for an algorithms course (although it might require hints depending on the level of the students).
Is there a repository of past IOI problems, ideally tagged or sorted by topic and difficulty? It is a pain to come with fresh problems year after year, and a bit of reformulation is usually
sufficient to stop students from finding solutions online.
Also, it might be a fun resource with which students could test themselves.
Adam, the list of problems is in principle available here: http://ioinformatics.org/history.shtml
Too bad these are the official task descriptions, which contain a lot of fluff. I would love a list with short descriptions in mathematical notation.
Anonymous' comment that if you can guess numbers outside of 1...N then the problem is easy can be extended to a solution. In particular if you know the secret number is in the interval [l,u] then
it suffices to be able to ask numbers in the interval [l-(u-l), u+(u-l)]. With this possible range of guesses it is possible to find the secret number in only lg(u-l) + O(1) questions (starting
with guessing l and u and for further guesses, guessing the smallest number outside of the [l,u] interval in order to cut the remaining interval in half).
Reducing the original problem to this simpler form is fairly easy. Simply guess initially N and 0. Assume wlog that the secret number is in [0, N/2]. Set k_0 = N/2.
Given an interval [0, k_i] we guess k_i/2 if our previous guess was 0 and 0 if our previous guess was k_i. If we learn that the secret is in the interval [k_i/4, k_i] we then almost have an
instance of the simpler version of this problem which we can solve. Using O(1) guesses though we can easily turn this into an instance of the simpler problem.
Otherwise the solution is in [0,k_i/2] in which case we set k_{i+1}=k_i/2 and recurse.
Each step cuts the interval we have to consider in half and we use at most lg(n) + O(1) guesses
I don't think your solution can work when the secret number is 1. Your approach may need 2lgN questions.
Given an interval [0, k_i] we guess k_i/2 if our previous guess was 0 and 0 if our previous guess was k_i
Should you guess k_i/4 instead of k_i/2 to ensure the interval size goes down by a factor of 4 after every two guesses?
There was a typo in my previous comment. Since 0 is outside of 1,...,N replace each occurence of 0 in my previous comment with 1
I don't think your solution can work when the secret number is 1. Your approach may need 2lgN questions.
Consider the case where the secret number is 2 (2 instead of 1 as you mention because of my off by one error above). The first guess is N followed by 1. Since the secret number is 2, we learn
that it is in the lower interval [1, N/2]. The next guess we make is N/4. We then learn the secret number is in [1, N/4]. Then we guess 1 and learn that the secret number is in [1, N/8] and so
on. Each guess cuts the interval in half so only lg(n)+O(1) guess are required when the secret number is 2.
Given an interval [0, k_i] we guess k_i/2 if our previous guess was 0 and 0 if our previous guess was k_i
As long as the secret number is in the lower interval, we cut our possible choices in half. If we discover it is in the upper interval then we simply use O(1) guesses to reduce it to an appropriate size to get an instance of the "simpler problem". So each guess cuts the interval we have to consider in half, except for possibly one of the guesses we make.
This comment has been removed by the author.
@Travis and what happens when the secret number is N-1 ?
@Travis and what happens when the secret number is N-1 ?
The situation where the secret number is in [N/2, N] can be handled in the same manner as when the secret number is in [1, N/2]. Instead of considering intervals of the form [0, k_i] we have
intervals of the form [k_i, N].
@Travis for N=100 and X=2 the following guesses are made according to your algorithm:
[1] 100
[2] 1 Hotter
[3] 50 Colder
[4] 1 Hotter
[5] 25 Colder
[6] 1 Hotter
[7] 12 Colder
[8] 1 Hotter
[9] 6 Colder
[10] 1 Hotter
[11] 3 Same
For each Hotter there is a Colder so in this case you require 2lgN guesses. There may be a lot of "Colder" answers and their count can go up to lgN.
Every algorithm I tried was for N and 2 and this case is really awful.
Let's say the current interval [a,b] has been reduced after a Colder answer to guess c and that a<b<c. Then we don't guess Max(1, a+b-c) as @Travis does (i.e. lower bound or outside the interval)
but we choose a+(b-a)/4 (i.e. inside the interval while splitting in 4). This way we have more reductions over the other algorithm.
Splitting in 4 works better than in 2. Meaning that starting with N/4 instead of N,1,N/2 works better (one guess less).
These optimizations combined with an "aggressive" interval reduction gives better upper bounds for a given N than the other algorithm.
Here is my suggestion (not checked thoroughly):
Starting from k=0, then 1, 2... test value N/2^k.
Sto as soon as you get a "colder" at k=x.
Then there is enough room to simulate a normal binary search. Complexity estimate = lg(N/2^x)+x+O(1) = lg(N) + O(1).
Sto = Stop, and my tentative solution appears identical to Travis' one. I might add that if the first answer (at k=1) is "colder" then we should do a u-turn (going left) and continue as if the
answer was "hotter".
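For readers who want to experiment with the strategies traded in this thread, here is a minimal referee for the game (Python, illustrative only; the function name is invented for this sketch):

```python
def respond(secret, prev, cur):
    """Reply to the current guess by comparing distances to the secret.
    Per the discussion above, "same" (equal distances) pins the secret
    down exactly: it must be the midpoint (prev + cur) / 2.
    """
    d_prev, d_cur = abs(prev - secret), abs(cur - secret)
    if d_cur == d_prev:
        return "same"
    return "hotter" if d_cur < d_prev else "colder"
```

Replaying the N=100, X=2 guess sequence 100, 1, 50, 1, 25, 1, 12, 1, 6, 1, 3 from the trace earlier in the thread reproduces its Hotter/Colder pattern, ending in "same" at 3.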
Something New
Time for a geekfest. Nongeeks are warned to look elsewhere. Now.
Every undergraduate in science or engineering has learned to add the concept of vectors to his repertoire, expanding mathematics beyond the application of numbers only. Vectors are liberating; they form a tremendously powerful approach to physics, constituting the backbone of almost all science and engineering today. A vector is defined as a little arrow, i.e., a directed line segment. The addition of vectors is defined in a natural and physically intuitive way by moving one vector to the top of the other. Then draw the vector that starts at the bottom of the first and ends at the top of the second. That's the sum of the original two.
Multiplication of vectors, by contrast, is problematic. There are two ways of "multiplying" vectors, neither of them philosophically satisfactory. The dot product, v ⋅ w, and the cross product, v × w, are both problematic. The dot product takes you out of the realm of vectors into the realm of scalars, and the cross product only exists in dimension 3. Worse, it takes what is an essentially 2-dimensional situation, a relationship between two vectors which are sitting in the same plane, and pops you up to an apparently extraneous third dimension.
The multiplication of vectors is so unsatisfactory, in fact, that in most treatments of vectors in higher mathematics it is ignored entirely, vectors being thought of merely as objects which it is
possible to add (and stretch), but not as a general rule to multiply.
There is a new way to look at multiplication between vectors, however, a new foundation of vector relations which allows the laws of physics to be rewritten much more succinctly and intuitively. The name is "Geometric Algebra", and detailed references can be found online. The new formalism appears to be very powerful, allowing such disparate topics as Maxwell's equations, the Pauli spin matrices familiar from quantum mechanics, and relativity theory, for three examples, to be reformulated much more clearly than ever before. Maxwell's equations can now be reduced to a single line as shown below, for example.
∇ F = J
That single line contains all of Maxwell's equations in one fell swoop!
Like all revolutionary acts of brilliance, the fundamental ideas of geometric algebra are quite simple yet very powerful.
The first idea is to allow a few more objects than simply scalars and vectors into our theory. We now also include directed areas (parallelograms), called bivectors. "Directed" in this case means that we have chosen either a clockwise or a counterclockwise direction to traverse the parallelogram, and associated it to the area. If we start with the vectors v and w, then we define the bivector v ∧ w to be the parallelogram formed by moving v along w, with the orientation taken to be that given by first moving along v and then along w. This new product, called the wedge product, is an area normal to the cross product, having area equal to the length of the cross product. It is defined in every dimension and doesn't remove us from the 2-dimensional situation we started with. It is in fact the replacement for the cross product in the new formalism. The wedge product is antisymmetric, i.e.,

v ∧ w = - w ∧ v,

(because the bivector on the left is oriented clockwise while the one on the right is oriented counterclockwise), whereas the dot product is symmetric,

v ⋅ w = w ⋅ v.
This first new idea, bivectors, can be likewise generalized to trivectors, a trivector being an oriented parallelepiped, and similar oriented objects in higher dimensions.
The second great insight of the new scheme is that the multiplication of vectors should contain both a symmetric and an antisymmetric component at the same time. Therefore a new product, the geometric product of two vectors, is defined by the simple formula

vw = v ⋅ w + v ∧ w.

This definition still suffers from the drawback that a pair of 1-dimensional objects (vectors) has given rise through "multiplication" to a 0-dimensional object (a scalar) and a 2-dimensional object (a bivector). In a sense, this seems to balance out and to be more satisfying than either the dot product or the cross product alone. But the real solution is to change the scope of operations. Instead of considering vectors as our fundamental objects, we now consider multivectors to be the fundamental objects of mathematical physics, where multivectors are simply abstract sums of the form

a + bu + c(v ∧ w),

with a, b, and c being ordinary scalars, u an ordinary vector, and v ∧ w a bivector. As with the definition of the complex numbers, the disparate pieces of the multivectors are added only with like pieces, allowing us to stay in the arena of multivectors when adding, and the geometric product defined above allows us to remain in the same arena when multiplying.

In dimension 3, the proper objects are now considered to be the set of all multivectors of the form

a + bu + c(v ∧ w) + d(s ∧ t ∧ r),

with a, b, c, d real numbers. Similarly, in dimension 4 we throw in a quadvector term on the end.
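The geometric product can be made concrete with a small sketch (illustrative, not from the post): a basis blade of 3-dimensional space is encoded as a bitmask (bit i set means e_{i+1} occurs as a factor), a multivector as a dictionary from blades to coefficients, and the Euclidean rule e_i e_i = 1 is assumed.

```python
def reorder_sign(a, b):
    """Sign from moving the basis vectors of blade b past those of blade a
    into canonical order; each transposition contributes a factor of -1."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return 1 if swaps % 2 == 0 else -1

def geometric_product(A, B):
    """Geometric product of multivectors A, B (dicts: blade bitmask -> coeff),
    with Euclidean signature so repeated basis vectors square to +1."""
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            blade = ba ^ bb  # shared basis vectors cancel (square to 1)
            out[blade] = out.get(blade, 0) + reorder_sign(ba, bb) * ca * cb
    return {k: v for k, v in out.items() if v != 0}
```

For v = e1 + 2 e2 and w = 3 e1 + 4 e2, the product comes out as the scalar v ⋅ w = 11 plus the bivector part -2 e1∧e2, illustrating vw = v ⋅ w + v ∧ w in a single operation.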
Experience has painfully taught us that the best long-term progress in our theories comes from improvements in our fundamental notations themselves, in what mathematicians call the "machinery" which
we use to conceptualize reality. Geometric Algebra seems to hold the potential to be an earth-shattering paradigm shift in physics, engineering, and mathematics. Keep an eye on it.
Update: Graphics for wedge and dot products fixed as per Seneca's suggestion, using the code found on this useful page.
14 comments:
chuck said...
But geometric algebra requires a non-degenerate inner product so as to make use of the resulting identity between a vector space and its dual. Consequently, it is not quite so general as the
exterior algebra, nor, IMHO, are Clifford algebras as easy to use on a computer. However, there is a lot of fun in translating traditional formulas from mechanics to the geometric algebra version
and trying to understand the result. It's a great way to pass time. I will also admit that complex numbers and quaternions look quite spiffy dressed up in geometric algebraical clothing.
God--this is like the math courses I had at the academy--WHY are you doing this to me. Get help!!!
Didn't Gen. Bradley relax by doing integrals? Math does have its uses ;)
I doubt if I am alone when I say I don't have the least idea what you people are talking about.
Roger and Terrye, You were warned!
But you stopped just as it was getting interesting.
Well, hallelujah! I think I actually understand the concept! Not that I'm able to play around with it. Nor can I turn around and explain it to someone else!
(Maybe that means I don't understand at all...LOL)
Well, I got an A in differential calculus while simultaneously using both differential and integral calculus in my physics course.
But, you must understand, and you can probably see by my terminology, that that was over ::gasp:: forty years ago!
And I have dabbled very little since.
But, damn, you're right on when you say:
"Experience has painfully taught us that the best long-term progress in our theories comes from improvements in our fundamental notations themselves, in what mathematicians call the "machinery"
which we use to conceptualize reality."
Therein lies the beauty, the utility, and all those AHA moments.
Yeah but I just had to look.
It is kind of like when you are watching the horror movie and the silly girl hears the sound behind the door and you are saying "Don't open the door!!"
but she does.
I just had to look.
MHA: ⋅ and ×, thus v ⋅ w and v × w.
See Elizabeth Castro's handy table.
God--this is like the math courses I had at the academy--WHY are you doing this to me. Get help!!!
Some people like rap music.
I doubt if I am alone when I say I don't have the least idea what you people are talking about.
"In mathematics you don't understand things. You just get used to them." — John von Neumann
I think a simpler way to describe the algebra is to start with an orthogonal set of unit vectors. The multiplication rules are then:
u_i × u_i = 1,
u_i × u_j = − u_j × u_i , i ≠ j.
Another way to think of it is in terms of linear maps and wedge products. The product
u ∧ F,
where F is of degree i, can be regarded as a linear map from terms of degree i into the terms of degree i + 1 with kernel precisely those terms in F containing u. Remove u from those terms and
you have a mapping of the kernel into the terms of degree i - 1. The geometric algebra product puts these maps together, modulo some sign changes, to get a bijection F → ker ⊕ coker. It is not
hard to see why a second multiplication reverses this map up to a scalar factor.
Chuck, I think
u_i × u_i = 0.
Chuck, I think
u_i × u_i = 0.
I am using the × for the geometric product, not the cross product. The wedge product (∧) is the part that corresponds to the usual three dimensional cross product. One small correction would to
note that in some cases (special relativity) one could have u_i × u_i = -1 instead of u_i × u_i = 1.
Intriguing. Thank you, MHA. I should find time to play with this.
Copyright © University of Cambridge. All rights reserved.
Why do this problem?
This problem uses simple shapes in three colours for sorting data in more than one way. Teachers and children may well think of other useful purposes for the cards.
If the shapes are printed onto thin card and laminated they should last a long time.
Possible approach
You could start by showing all the cards to the children and asking what they can tell you about them. This will be a good opportunity to listen to their use of mathematical vocabulary and introduce
new terms where appropriate.
This problem is intended for children working in pairs, so that they are able to talk through their ideas with a partner. You could continue by giving each pair copies of these cards so that they can sort them in their own ways.
After they have had an opportunity to try sorting and to discuss with their partners how it could be done, it might be a good time to use, or introduce, a simple Venn diagram. This one shows how the
question asked in the first part of the problem could be answered:
At the end of the lesson you could try different ways of sorting suggested by the children, using a Venn diagram if you think it appropriate.
Key questions
What can you see?
Can you think of another way you could sort the cards?
What is the same/different about these?
Tell me about what you are collecting.
Why do you think those two go together?
Where could you put this/these?
Possible extension
Learners could try one of the harder sorting problems, for example, Butterfly Cards.
Possible support
Some children may need a suggestion from you as to how to sort the cards to get them started.
Matching Pennies Definition | Investopedia
Definition of 'Matching Pennies'
A basic game theory example that demonstrates how rational decision-makers seek to maximize their payoffs. “Matching Pennies” involves two players simultaneously placing a penny on the table, with
the payoff depending on whether the pennies match. If both pennies are heads or tails, the first player wins and keeps the other’s penny; if they do not match, the second player wins and keeps the
other’s penny. Matching Pennies is a zero-sum game in that one player’s gain is the other’s loss. Because any predictable choice can be exploited by the opponent, the game has no pure-strategy “Nash Equilibrium”; its only equilibrium is in mixed strategies, with each player choosing heads or tails at random with equal probability, so that neither player has an incentive to deviate.
Investopedia explains 'Matching Pennies'
Matching Pennies is conceptually similar to the popular “Rock, Paper, Scissors,” as well as the “odds and evens” game where two players concurrently show one or two fingers and the winner is
determined by whether the fingers match.
Consider the following example to demonstrate the Matching Pennies concept. Adam and Bob are the two players in this case, and the table below shows their payoff matrix. Of the four sets of numerals
shown in the cells marked (a) through (d), the first numeral represents Adam’s payoff, while the second entry represents Bob’s payoff. +1 means that the player wins a penny, while -1 means that the
player loses a penny.
If Adam and Bob both play “Heads,” the payoff is as shown in cell (a) – Adam gets Bob’s penny. If Adam plays “Heads” and Bob plays “Tails,” then the payoff is reversed; as shown in cell (b), it
would now be -1, +1, which means that Adam loses a penny and Bob gains a penny. Likewise, if Adam plays “Tails” and Bob plays “Heads,” the payoff as shown in cell (c) is -1, +1, and if both play
“Tails” the payoff as shown in cell (d) is +1, -1.
│ Adam / Bob │ Heads      │ Tails      │
│ Heads      │ (a) +1, -1 │ (b) -1, +1 │
│ Tails      │ (c) -1, +1 │ (d) +1, -1 │
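The mixed-strategy logic can be checked numerically. The sketch below (illustrative, not part of the original article; the function name is invented here) computes Adam's expected payoff against a Bob who plays Heads with probability p; at p = 0.5 Adam is exactly indifferent between his two moves, which is what sustains the mixed equilibrium.

```python
# Adam's payoffs from the table above: +1 when the pennies match, -1 otherwise.
PAYOFF = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}

def adams_expected_payoff(move, p_heads):
    """Expected payoff to Adam for `move` when Bob plays Heads w.p. p_heads."""
    return p_heads * PAYOFF[(move, "H")] + (1 - p_heads) * PAYOFF[(move, "T")]
```

When Bob mixes 50/50, Heads and Tails earn Adam the same expected payoff (zero), so he has no profitable deviation; any other mix gives Adam an exploitable edge.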
Mechanics thread
In my mind it makes them too strong, likely should be balanced with other stuff.
I think 20% plus a level 20 gem is something like 50% penetration, which is huge: if you had 75% capped resists before, you would be taking 3x the damage, with no way for the defender to stop it.
100 base Cold damage vs 75% resist = 25 damage taken.
100 base cold damage +50% Cold Penetration vs 75% resist = 75 damage taken.
So even if the defender had 150 Cold resist with 75% capped, he would still take 3x the damage.
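A minimal sketch of this arithmetic (assuming penetration simply subtracts from the capped resist before damage is applied, which is how the numbers above are computed):

```python
def damage_taken(base, resist, penetration=0, cap=75):
    """Damage after resistance, under the illustrative model that
    penetration subtracts from the capped resist value."""
    effective_resist = min(resist, cap) - penetration
    return base * (1 - effective_resist / 100)

# 100 cold vs 75% resist -> 25 damage; with 50% penetration -> 75 damage,
# even if the defender is overcapped at 150% resist.
```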
Unlike Elemental Weakness, which you can counter by stacking extra resistance to stay capped, penetration cannot be offset this way.
agreed, it blows every other support gem out of the water for any elemental skill
Students With Both Mathematics and Reading Difficulties
Students who have both reading and mathematics difficulties are obviously at a double disadvantage. However, even though the reading and mathematical processing areas of the brain are separate from
each other, these two cerebral regions interact whenever the learner must translate word problems into symbolic representations. Here are some strategies that are effective with these students.
Cue words in word problems. Help these students decode language into mathematical operations by alerting them to common phrases or cue words found in word problems that identify which operation to use.
Word problem maps. Give students with reading problems a story map to highlight certain important aspects of the story such as introduction, plot line, characters, time line, and story climax. Gagnon
and Maccini (2001) have developed a similar learning aid, called a word problem map, to help students with mathematics difficulties organize their thoughts as they tackle word problems. The map can
be completed by an individual student or by students working in groups of two or three.
The RIDD strategy. The RIDD strategy was developed by Jackson (2002) in 1997 for students with learning disabilities. In practice, it has shown to be particularly helpful to students who have
difficulties in both reading and mathematics. RIDD stands for Read, Imagine, Decide, and Do. The following is a description of these four steps.
Step 1: Read the problem. Read the passage from beginning to end. This helps students focus on the entire task rather than just one line at a time. Good readers often skip words within a text, or
they substitute another word and continue reading. In this step, students decide ahead of time what they will call a word that they do not recognize. In mathematics word problems, substitutions can
be made for long numbers rather than saying the entire number on the first reading. Teachers should model this substitution when they read the problem aloud to the class.
Step 2: Imagine the problem. In this step, the students create a mental picture of what they have read. Using imagery when learning new material activates more brain regions and transforms the
learning into meaningful visual, auditory, or kinesthetic images of information. This makes it easier for the new information to be stored in the students’ own knowledge base. Imagery helps students
focus on the concept being presented, and provides a way of monitoring their performance.
Step 3: Decide what to do. In order to generate a mental picture of the situation, this step encourages students to read the entire mathematics problem without stopping. They then decide what to do
and in what order to solve the problem. For example, in a word problem requiring addition and then subtraction, students would read the problem, create a mental picture, and then decide whether to
add or subtract first. For young students, teachers can guide them through this step with appropriate questioning so the students can decide what procedures to use. Note how this step combines
reading, visualization, and problem solving.
Step 4: Do the work. During this step the students actually complete the task. Often, students start reading a mathematics problem, stop part way through it, and begin writing numerical expressions.
This process can produce errors because the students do not have all the information. By making this a separate step, students realize that there are things to do between reading the problem and
writing it down. Jackson (2002) observed that when students used RIDD to solve mathematics problems, they liked this strategy because they perceived the last step as the only time they did work.
Apparently, the students did not realize that what they did in the first three steps was all part of the process for solving problems.
Computer assistance. Computer programs that address both reading and mathematics weaknesses are now available for elementary-level students. For example, Knowledge Adventure has several software
titles that focus on teaching basic mathematics and reading skills while adhering to national and state standards. Each program provides instruction at a student’s own pace and includes automatic
progress tracking for each student so teachers can provide additional instruction to those who need it.
Gagnon, J., & Maccini, P. (2001). Preparing students with disabilities for algebra. Teaching Exceptional Children, 34, 8–15.
Jackson, F. B. (2002, May). Crossing content: A strategy for students with learning disabilities. Intervention in School and Clinic, 37, 279–282.
ECO 3203
010 Chapter Making Capital Investment Decisions Multiple Choice Questions 1. The changes in a firm's future cash flows that are a direct consequence of accepting a project are called _____ cash
flows. A. incremental b. stand-alone c. after-tax d. net present value e. erosion SECTION: 10.1 TOPIC: INCREMENTAL CASH FLOWS TYPE: DEFINITIONS 2. The evaluation of a project based solely on its
incremental cash flows is the basis of the: a. future cash flow method. B. stand-alone principle. c. dividend growth model. d. salvage value model. e. equivalent cost principle. SECTION: 10.1 TOPIC:
STAND-ALONE PRINCIPLE TYPE: DEFINITIONS 3. A cost that has already been incurred and cannot be recouped is a(n): a. salvage value expense. b. net working capital expense. C. sunk cost. d. opportunity
cost. e. erosion cost. SECTION: 10.2 TOPIC: SUNK COSTS TYPE: DEFINITIONS 10-1 Chapter 010 Making Capital Investment Decisions 4. The most valuable investment given up if an alternative investment is
chosen is a(n): a. salvage value expense. b. net working capital expense. c. sunk cost. D. opportunity cost. e. erosion cost. SECTION: 10.2 TOPIC: OPPORTUNITY COSTS TYPE: DEFINITIONS 5. Erosion is
best described as: a. expenses that have already been incurred and cannot be reversed. b. net working capital expenses. C. the cash flows of a new project that come at the expense of a firm's
existing cash flows. d. the next alternative that is forfeited when a fixed asset is utilized for a project. e. the differences in a firm's cash flows with and without a particular project. SECTION:
10.2 TOPIC: EROSION TYPE: DEFINITIONS 6. A pro forma financial statement is one that: A. projects future years' operations. b. is expressed as a percentage of the total assets of the firm. c. is
expressed as a percentage of the total sales of the firm. d. is expressed relative to a chosen base year's financial statement. e. reflects the past and current operations of a firm. SECTION: 10.3
TOPIC: PRO FORMA FINANCIAL STATEMENTS TYPE: DEFINITIONS 10-2 Chapter 010 Making Capital Investment Decisions 7. The depreciation method currently allowed under U.S. tax law governing the accelerated
write-off of property under various lifetime classifications is called: a. FIFO. B. MACRS. c. straight-line depreciation. d. sum-of-years depreciation. e. erosion. SECTION: 10.4 TOPIC: MACRS
DEPRECIATION TYPE: DEFINITIONS 8. The tax savings generated as a result of a firm's depreciation expense is called the: a. aftertax depreciation savings. b. depreciable basis. C. depreciation tax
shield. d. operating cash flow. e. aftertax salvage value. SECTION: 10.5 TOPIC: DEPRECIATION TAX SHIELD TYPE: DEFINITIONS 9. The annual annuity stream of payments with the same present value as a
project's costs is called the project's _____ cost. a. incremental b. sunk c. opportunity d. erosion E. equivalent annual SECTION: 10.6 TOPIC: EQUIVALENT ANNUAL COST TYPE: DEFINITIONS 10-3 Chapter
010 Making Capital Investment Decisions 10. Lester's Dairy gathers and processes cow's milk for distribution to retail outlets. Lester's is currently considering processing goat's milk as well. Which
one of the following is most apt to be an incremental cash flow related to the goat milk project? a. processing the goat's milk in the same building as the cow's milk b. utilizing the same
pasteurizing equipment to process both kinds of milk C. purchasing additional milk jugs to handle the increased volume of milk d. researching the market to ascertain if goat milk sales might be
profitable before deciding to proceed e. reducing the projected interest expense by assuming the proceeds of the goat milk sales will reduce the outstanding debt SECTION: 10.1 AND 10.2 TOPIC:
INCREMENTAL CASH FLOW TYPE: CONCEPTS 11. Russell's of Westerfield is a furniture store which is considering offering carpet for sale. Which of the following should be considered incremental cash
flows of the project? I. utilizing the credit offered by a carpet supplier to build an initial inventory II. granting credit to a customer so she can purchase carpet and pay for it at a later date
III. borrowing money from a bank to fund the carpet project IV. purchasing carpet to hold in inventory a. I and II only b. III and IV only C. I, II, and IV only d. II, III, and IV only e. I, II, III,
and IV SECTION: 10.1 AND 10.2 TOPIC: INCREMENTAL CASH FLOW TYPE: CONCEPTS 10-4 Chapter 010 Making Capital Investment Decisions 12. The stand-alone principle advocates that project analysis should
focus on _____ costs. a. sunk b. total c. variable D. incremental e. fixed SECTION: 10.1 TOPIC: STAND-ALONE PRINCIPLE TYPE: CONCEPTS 13. Sunk costs include any cost that will: a. change if a project
is undertaken. b. be incurred if a project is accepted. C. not change as it was previously incurred and cannot be recouped. d. be paid to a third party and cannot be recouped. e. occur if a project
is accepted and once incurred, cannot be recouped. SECTION: 10.2 TOPIC: SUNK COST TYPE: CONCEPTS 14. You spent $600 last week repairing the brakes on your car. Now, the starter is acting up and you
are trying to decide whether to fix the starter or trade the car in for a newer model. In analyzing the starter situation, the $600 you spent fixing the brakes is a(n) _____ cost. a. opportunity b.
fixed c. incremental d. erosion E. sunk SECTION: 10.2 TOPIC: SUNK COST TYPE: CONCEPTS 10-5 Chapter 010 Making Capital Investment Decisions 15. Which one of the following best illustrates erosion as
it relates to project analysis? a. providing both ketchup and mustard for your customer's use b. repairing the roof of your hamburger stand because of water damage C. selling fewer hamburgers because
you also started selling hot dogs d. opting to sell french fries but not onion rings e. opting to increase your work force by hiring two part-time employees SECTION: 10.2 TOPIC: EROSION TYPE:
CONCEPTS 16. Which of the following are examples of erosion? I. the loss of current sales due to increased competition in the product market II. the loss of current sales because your chief
competitor just opened a store across the street from your store III. the loss of current sales due to a new product which you recently introduced IV. the loss of current sales due to a new product
recently introduced by your competitor A. III only b. III and IV only c. I, III, and IV only d. II and IV only e. I, II, III, and IV SECTION: 10.2 TOPIC: EROSION TYPE: CONCEPTS 17. You are
considering the purchase of new equipment. Your analysis includes the evaluation of two machines which have differing initial and ongoing costs and differing lives. Whichever machine is purchased
will be replaced at the end of its useful life. You should select the machine which has the: a. longest life. b. highest annual operating cost. c. lowest annual operating cost. d. highest equivalent
annual cost. E. lowest equivalent annual cost. SECTION: 10.6 TOPIC: EQUIVALENT ANNUAL COSTS TYPE: CONCEPTS 10-6 Chapter 010 Making Capital Investment Decisions 18. The bid price is: a. an aftertax
price. b. the aftertax contribution margin. c. the highest price you should charge if you want the project. d. the only price you can bid if the project is to be profitable. E. the minimum price you
should charge if you want to financially breakeven. SECTION: 10.6 TOPIC: BID PRICE TYPE: CONCEPTS 19. Which of the following should be included in the analysis of a project? I. sunk costs II.
opportunity costs III. erosion costs IV. noncash expenses a. I and II only b. III and IV only c. II and III only D. II, III, and IV only e. I, II, and IV only SECTION: 10.2 TOPIC: TYPES OF COSTS
TYPE: CONCEPTS 10-7 Chapter 010 Making Capital Investment Decisions 20. All of the following are related to a proposed project. Which should be included in the cash flow at time zero? I. initial
inventory increase of $2,500 II. loan of $125,000 to commence a project III. depreciation tax shield of $1,100 IV. initial purchase of $6,500 of fixed assets a. I and II only B. I and IV only c. II
and IV only d. I, II, and IV only e. I, II, III, and IV SECTION: 10.4 TOPIC: NET WORKING CAPITAL TYPE: CONCEPTS 21. Changes in the net working capital: A. can affect the cash flows of a project every
year of the project's life. b. only affect the initial cash flows of a project. c. are included in project analysis only if they represent cash outflows. d. are generally excluded from project
analysis due to their irrelevance to the total project. e. affect the initial and the final cash flows of a project but not the cash flows of the middle years. SECTION: 10.4 TOPIC: NET WORKING
CAPITAL TYPE: CONCEPTS 22. Which one of the following is a cash inflow? Ignore any tax effects. a. a decrease in accounts payable b. an increase in inventory C. a decrease in accounts receivable d.
depreciation expense e. an increase in fixed assets SECTION: 10.4 TOPIC: NET WORKING CAPITAL TYPE: CONCEPTS 10-8 Chapter 010 Making Capital Investment Decisions 23. Net working capital: a. can be
ignored in project analysis because any expenditure is normally recouped by the end of the project. b. requirements generally, but not always, create a cash inflow at the beginning of a project. c.
expenditures commonly occur at the end of a project. D. is frequently affected by the additional sales generated by a new project. e. is the only expenditure where at least a partial recovery can be
made at the end of a project. SECTION: 10.4 TOPIC: NET WORKING CAPITAL TYPE: CONCEPTS 24. The operating cash flows for a cost reduction project: a. cannot be computed since there is no incremental
sales revenue. b. will equal zero because there will be no incremental sales. c. can only be analyzed if all the sales and expenses of a firm are considered. D. must consider the depreciation tax
shield. e. will always be negative values. SECTION: 10.3 TOPIC: PRO FORMA INCOME STATEMENT TYPE: CONCEPTS 25. Pro forma statements for a proposed project should: I. be compiled on a stand-alone
basis. II. include all the incremental cash flows related to a project. III. generally exclude interest expense. IV. include all project-related fixed asset acquisitions and disposals. a. I and II
only b. II and III only c. I, II, and IV only d. II, III, and IV only E. I, II, III, and IV SECTION: 10.3 TOPIC: PRO FORMA STATEMENTS TYPE: CONCEPTS 10-9 Chapter 010 Making Capital Investment
Decisions 26. Which one of the following statements is correct? a. Project analysis should only include the cash flows which affect the income statement. B. A project can create a positive operating
cash flow without affecting sales. c. For the majority of projects that increase sales, there will be a cash outflow related to net working capital that occurs at the end of the project. d. Interest
expense should always be included as a cash outflow when analyzing a project. e. The opportunity cost of a company-owned building that is going to be used in a new project should be included as a
cash inflow to the project. SECTION: 10.3 TOPIC: PROJECT CASH FLOWS TYPE: CONCEPTS 27. A company which utilizes the MACRS system of depreciation: a. will have equal depreciation costs each year of an
asset's life. B. will have a greater tax shield in year two of a project than they would have if the firm had opted for straight-line depreciation. c. can depreciate the cost of land, if they so
desire. d. will expense less than the entire cost of an asset over the asset's class life. e. cannot expense any of the cost of a new asset during the first year of the asset's life. SECTION: 10.4
TOPIC: MACRS TYPE: CONCEPTS 10-10 Chapter 010 Making Capital Investment Decisions 28. Wiley Electric just purchased some MACRS 5-year property at a cost of $118,000. Which one of the following will
correctly give you the book value of this equipment at the end of year 3? a. $118,000 / (1 + .20 + .32 + .192) B. $118,000 × (1 − .20 − .32 − .192) c. $118,000 × (.20 + .32 + .192) d. {[$118,000 × (1 − .20)] × (1
− .32)} × (1 − .192) e. $118,000 / {[(1.20 / 1.32)] / 1.192} SECTION: 10.4 TOPIC: MACRS TYPE: CONCEPTS 29. Jenningston Manor just purchased some equipment at a cost of $58,000. What is the proper
methodology for computing the depreciation expense for year 2 if the equipment is classified as 5-year property for MACRS? a. $58,000 × (1 − .20) × .32 b. $58,000 / (1 − .20 − .32) c. $58,000 × 1.32 d. $58,000
× (1 − .32) E. $58,000 × .32 SECTION: 10.4 TOPIC: MACRS TYPE: CONCEPTS 10-11 Chapter 010 Making Capital Investment Decisions 30. The book value of a fixed asset must be used in the computation of which one
of the following? a. annual tax shield B. tax due on the sale of a fixed asset c. operating cash flow d. change in net working capital e. MACRS depreciation SECTION: 10.4 TOPIC: BOOK VALUE TYPE:
CONCEPTS 31. The book value of equipment will: a. remain constant over the life of the equipment. b. vary in response to changes in the market value. c. decrease at a constant rate when MACRS
depreciation is used. d. increase over the taxable life of an asset. E. decrease more slowly under straight-line depreciation than under MACRS. SECTION: 10.4 TOPIC: BOOK VALUE TYPE: CONCEPTS 32. The
aftertax salvage value = Sales price: a. + (Sales price − Book value) × T. b. + (Sales price − Book value) × (1 − T). C. − (Sales price − Book value) × T. d. − (Sales price − Book value) × (1 − T). e. × (1 − T). SECTION: 10.4
TOPIC: SALVAGE VALUE TYPE: CONCEPTS 10-12 Chapter 010 Making Capital Investment Decisions 33. The pre-tax salvage value of an asset is equal to the: a. book value if straight-line depreciation is
used. b. book value if MACRS depreciation is used. c. market value minus the book value. d. book value minus the market value. E. market value. SECTION: 10.4 TOPIC: SALVAGE VALUE TYPE: CONCEPTS 34. A
project's operating cash flow will increase when: a. the tax rate increases. b. sales decrease. c. interest expense decreases. D. depreciation expense increases. e. earnings before interest and taxes
decreases. SECTION: 10.5 TOPIC: PROJECT OCF TYPE: CONCEPTS 35. The cash flows of a project should: a. be computed on a pre-tax basis. b. include all sunk costs and opportunity costs. C. include the
effects of erosion. d. be included in the year when the related expense or income is recognized by GAAP. e. include all financing costs related to the project. SECTION: 10.2 TOPIC: PROJECT CASH FLOWS
TYPE: CONCEPTS 10-13 Chapter 010 Making Capital Investment Decisions 36. Which one of the following is the correct method for computing the operating cash flow of a project assuming that the interest
expense is equal to zero? a. EBIT + D b. EBIT − T C. NI + D d. (Sales − Costs) × (1 − D) × (1 − T) e. (Sales − Costs) × (1 − T) SECTION: 10.5 TOPIC: PROJECT OCF TYPE: CONCEPTS 37. The cash flows of a project should
exclude the incremental changes in which one of the following accounts? a. taxes b. accounts payable c. fixed assets D. long-term debt e. depreciation SECTION: 10.2 TOPIC: PROJECT CASH FLOWS TYPE:
CONCEPTS 38. The bottom-up approach to computing the operating cash flow applies only when: a. both the depreciation expense and the interest expense are equal to zero. B. the interest expense is
equal to zero. c. the project is a cost-cutting project. d. no fixed assets are required for a project. e. taxes are ignored and the interest expense is equal to zero. SECTION: 10.5 TOPIC: BOTTOM-UP
OCF TYPE: CONCEPTS 10-14 Chapter 010 Making Capital Investment Decisions 39. The top-down approach to computing the operating cash flow: A. ignores all noncash items. b. applies only if a project
increases sales. c. can only be used if the entire cash flows of a firm are analyzed. d. is equal to sales − costs − taxes + depreciation. e. includes the interest expense related to a project. SECTION:
10.5 TOPIC: TOP-DOWN OCF TYPE: CONCEPTS 40. Increasing which one of the following will increase the operating cash flow? a. erosion b. taxes c. fixed expenses d. salaries E. depreciation SECTION:
10.5 TOPIC: TAX SHIELD TYPE: CONCEPTS 41. Which one of the following creates a tax shield? a. dividend payment b. increase in accounts payable c. decrease in inventory D. noncash expense e. sunk cost
SECTION: 10.5 TOPIC: TAX SHIELD TYPE: CONCEPTS 10-15 Chapter 010 Making Capital Investment Decisions 42. A project which improves the operating efficiency of a firm but which generates no revenue is
referred to as a(n) _____ project. a. sunk cost b. opportunity C. cost-cutting d. erosion e. cashless SECTION: 10.6 TOPIC: COST-CUTTING TYPE: CONCEPTS 43. Which of the following statements are
correct regarding the analysis of a cost-cutting project that has an initial cash outflow for fixed assets? I. The costs shown on the pro forma income statement represent a cash inflow. II. The
depreciation expense related to the fixed assets creates a tax shield. III. The project operating cash flow can be computed as (−Costs − Taxes). IV. The earnings before interest and taxes are equal to
the costs. a. I and II only b. III and IV only c. I and III only d. II and IV only E. I, II, and III only SECTION: 10.6 TOPIC: COST-CUTTING TYPE: CONCEPTS 44. Which one of the following statements is
correct concerning bid prices? a. The competitor who wins the bid is the one who submits the highest bid price. B. The winning bid may be at a price that is below break-even especially if there is a
related aftermarket for the product. c. A bid price is computed based on 110 percent of a firm's normal required return. d. A bid price should be computed based solely on the operating cash flows of
the proposed project. e. A bid price should be computed based on a zero percent required rate of return. SECTION: 10.6 TOPIC: BID PRICE TYPE: CONCEPTS 10-16 Chapter 010 Making Capital Investment
Decisions 45. Frederick is comparing machines to determine which one to purchase. The machines sell for differing prices, have differing operating costs, differing machine lives, and will be replaced
when worn out. These machines should be compared using: a. their internal rates of return. b. both net present value and the internal rate of return. C. their effective annual costs. d. the
depreciation tax shield approach. e. the replacement parts approach. SECTION: 10.6 TOPIC: EQUIVALENT ANNUAL COST TYPE: CONCEPTS 46. The equivalent annual cost method is useful in determining: a.
which one of two machines to purchase if the machines are mutually exclusive, have differing lives, and are a one-time purchase. b. the tax shield benefits of depreciation given the purchase of new
assets for a project. c. operating cash flows for cost-cutting projects of unequal duration. d. which one of two investments to accept when the investments have different required rates of return. E.
which one of two machines to purchase when the machines are mutually exclusive, have different machine lives, and will be replaced once they are worn out. SECTION: 10.6 TOPIC: EQUIVALENT ANNUAL COST
TYPE: CONCEPTS 10-17 Chapter 010 Making Capital Investment Decisions 47. Justin's Manufacturing purchased a lot in Lake City ten years ago at a cost of $790,000. Today, that lot has a market value of
$1.2 million. At the time of the purchase, the company spent $100,000 to grade the lot and another $20,000 to build a small garage on the lot to house additional equipment. The company now wants to
build a new facility on the site. The building cost is estimated at $1.7 million. What amount should be used as the initial cash flow for this project? a. −$2,490,000 b. −$2,610,000 C. −$2,900,000 d.
−$3,020,000 e. −$3,690,000 CF0 = −$1,200,000 + (−$1,700,000) = −$2,900,000 AACSB TOPIC: ANALYTIC SECTION: 10.2 TOPIC: RELEVANT CASH FLOWS TYPE: PROBLEMS 10-18 Chapter 010 Making Capital Investment
Decisions 48. McLain, Inc. currently produces boat sails and is considering expanding its operations to include awnings for homes and travel trailers. The company owns land beside its current
manufacturing facility that could be used for the expansion. The company bought this land eight years ago at a cost of $500,000. At the time of purchase, the company paid $70,000 to level out the
land so it would be suitable for future use. Today, the land is valued at $750,000. The company currently has some unused equipment which it currently owns valued at $40,000. This equipment could be
used for producing awnings if $10,000 is spent for equipment modifications. Other equipment costing $400,000 will also be required. What is the amount of the initial cash flow for this expansion
project? a. −$870,000 b. −$1,020,000 C. −$1,200,000 d. −$1,620,000 e. −$2,020,000 CF0 = −$750,000 + (−$40,000) + (−$10,000) + (−$400,000) = −$1,200,000 AACSB TOPIC: ANALYTIC SECTION: 10.2 TOPIC: RELEVANT
CASH FLOWS TYPE: PROBLEMS 10-19 Chapter 010 Making Capital Investment Decisions 49. Keller Co. paid $50,000, in cash, for a piece of equipment four years ago. At the beginning of the year, the
company spent $5,000 to update the equipment with the latest technology. The company no longer uses this equipment in their current operations and has received an offer of $75,000 from a firm who
would like to purchase it. Keller Co. is debating whether to sell the equipment or to expand their operations such that the equipment can be used. When evaluating the expansion option, what value, if
any, should Keller Co. assign to this equipment as an initial cost of the project? a. $0 b. $5,000 c. $50,000 D. $75,000 e. $80,000 CF0 = $75,000 AACSB TOPIC: ANALYTIC SECTION: 10.2 TOPIC: RELEVANT
CASH FLOWS TYPE: PROBLEMS 50. Elite Design, Inc. sells customized handbags. Currently, they sell 30,000 handbags annually at an average price of $79 each. They are considering adding a lower-priced
line of handbags which sell for $45 each. Elite Design estimates they can sell 12,000 of the lower-priced handbags but will sell 4,000 fewer of the higher-priced handbags by doing so. What is the
amount of the sales that should be used when evaluating the addition of the lower-priced handbags? A. $224,000 b. $540,000 c. $856,000 d. $1,234,000 e. $1,514,000 Sales = (12,000 × $45) − (4,000 × $79) =
$224,000 AACSB TOPIC: ANALYTIC SECTION: 10.2 TOPIC: RELEVANT CASH FLOWS TYPE: PROBLEMS 10-20 Chapter 010 Making Capital Investment Decisions 51. Expansion, Inc. purchased a building for $485,000
seven years ago. Five years ago, repairs were made to the building which cost $80,000. The annual taxes on the property are $30,000. The building has a current market value of $424,000 and a current
book value of $399,000. The building is totally paid for and solely owned by the firm. If the company decides to assign this building to a new project, what value, if any, should be included in the
initial cash flow of the project for this building? a. $0 B. $424,000 c. $454,000 d. $485,000 e. $504,000 Opportunity cost = $424,000 AACSB TOPIC: ANALYTIC SECTION: 10.2 TOPIC: OPPORTUNITY COST TYPE:
PROBLEMS 52. You own a house that you rent for $1,600 a month. The maintenance expenses on the house average $300 a month. The house cost $110,000 when you purchased it six years ago. A recent
appraisal on the house valued it at $295,000. If you sell the house you will incur $15,000 in real estate fees. The annual property taxes are $25,000. You are deciding whether to sell the house or
convert it for your own use as a professional office. What value should you place on this house when analyzing the option of using it as a professional office? a. $150,000 b. $255,000 C. $280,000 d.
$293,100 e. $310,000 Opportunity cost = $295,000 − $15,000 = $280,000 AACSB TOPIC: ANALYTIC SECTION: 10.2 TOPIC: OPPORTUNITY COST TYPE: PROBLEMS 10-21 Chapter 010 Making Capital Investment Decisions
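The opportunity-cost problems above all apply the same rule: an asset the firm already owns enters the project analysis at its current after-fees market value, not at its historical or book cost. A minimal Python sketch of that rule (the function name and parameters are illustrative, not from the test bank):

```python
def opportunity_cost(market_value, selling_costs=0):
    """Value a project must be charged for an asset the firm already owns.

    Historical cost, past repairs, and book value are sunk and ignored;
    only the net proceeds the firm forgoes today are relevant.
    """
    return market_value - selling_costs

# Problem 52: house appraised at $295,000, less $15,000 in real estate fees.
print(opportunity_cost(295_000, 15_000))  # 280000

# A rejected-but-reasonable market bid with no selling costs is used as-is.
print(opportunity_cost(640_000))          # 640000
```

The same call reproduces problems 49 and 51, where the relevant value is simply the current market price or offer.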
53. Janson's Auto Parts owns a manufacturing facility that is currently sitting idle. The facility is located on a piece of land that originally cost $134,000. The facility itself cost $700,000 to
build. As of now, the book value of the land and the facility are $134,000 and $214,000, respectively. Janson's Auto Parts received a bid of $640,000 for the land and facility last week. They
rejected this bid even though they were told that it is a reasonable offer in today's market. If Janson's Auto Parts were to consider using this land and facility in a new project, what cost, if any,
should they include in the project analysis? a. $348,000 B. $640,000 c. $700,000 d. $774,000 e. $834,000 CF0 = $640,000 AACSB TOPIC: ANALYTIC SECTION: 10.2 TOPIC: OPPORTUNITY COST TYPE: PROBLEMS 54.
Jenna's Home Spa Sales currently sells 2,000 Class A spas, 5,000 Class C spas, and 1,000 deluxe model spas each year. Jenna is considering adding a mid-class spa and expects that if she does she can
sell 2,500 of them. However, if the new spa is added, Jenna expects that her Class A sales will decline to 1,700 units while the Class C sales decline to 4,500. The sales of the deluxe model will not
be affected. Class A spas sell for an average of $75,000 each. Class C spas are priced at $25,000 and the deluxe model sells for $100,000 each. The new midrange spa will sell for $50,000. What is the
erosion cost? A. −$35,000,000 b. −$90,000,000 c. −$125,000,000 d. −$205,000,000 e. −$240,000,000 Erosion cost = [(1,700 − 2,000) × $75,000] + [(4,500 − 5,000) × $25,000] = −$35,000,000 AACSB TOPIC: ANALYTIC
SECTION: 10.2 TOPIC: EROSION COST TYPE: PROBLEMS 10-22 Chapter 010 Making Capital Investment Decisions 55. Shelly's Boutique is evaluating a project which will increase annual sales by $70,000 and
annual costs by $40,000. The project will initially require $100,000 in fixed assets which will be depreciated straight-line to a zero book value over the 5-year life of the project. The applicable
tax rate is 34 percent. What is the operating cash flow for this project? a. $26,400 B. $26,600 c. $30,000 d. $46,400 e. $46,600 Tax = .34 × [$70,000 − $40,000 − ($100,000 / 5)] = $3,400;
OCF = $70,000 − $40,000 − $3,400 = $26,600 AACSB TOPIC: ANALYTIC SECTION: 10.3 AND 10.4 TOPIC: OCF TYPE: PROBLEMS 56. The Clothing Co. is looking at a project that will require $40,000 in net working capital and $100,000
in fixed assets. The project is expected to produce annual sales of $90,000 with associated costs of $60,000. The project has a 10-year life. The company uses straight-line depreciation to a zero
book value over the life of the project. The tax rate is 35 percent. What is the operating cash flow for this project? a. $17,000 b. $19,500 C. $23,000 d. $33,000 e. $90,000 Tax = .35 × [$90,000 − $60,000
− ($100,000 / 10)] = $7,000; OCF = $90,000 − $60,000 − $7,000 = $23,000 AACSB TOPIC: ANALYTIC SECTION: 10.3 AND 10.4 TOPIC: OCF TYPE: PROBLEMS 10-23 Chapter 010 Making Capital Investment Decisions 57.
John's Surf Shop has sales of $620,000 and a profit margin of 8 percent. The annual depreciation expense is $50,000. What is the amount of the operating cash flow if the company has no long-term
debt? a. $45,600 b. $49,600 c. $53,600 d. $95,600 E. $99,600 OCF = ($620,000 × .08) + $50,000 = $99,600 AACSB TOPIC: ANALYTIC SECTION: 10.5 TOPIC: BOTTOM-UP OCF TYPE: PROBLEMS 58. Ann's Custom Catering
has sales of $214,000, depreciation of $9,000, and net working capital of $16,000. The firm has a tax rate of 34 percent and a profit margin of 7 percent. The firm has no interest expense. What is
the amount of the operating cash flow? a. $7,980 B. $23,980 c. $30,350 d. $39,980 e. $53,700 OCF = ($214,000 × .07) + $9,000 = $23,980 AACSB TOPIC: ANALYTIC SECTION: 10.5 TOPIC: BOTTOM-UP OCF TYPE:
PROBLEMS 10-24 Chapter 010 Making Capital Investment Decisions 59. Al's Bistro is considering a project which will produce sales of $23,000 and increase cash expenses by $13,000. If the project is
implemented, taxes will increase from $25,000 to $27,500 and depreciation will increase from $5,000 to $8,000. What is the amount of the operating cash flow using the top-down approach? a. $4,500 B.
$7,500 c. $9,950 d. $10,000 e. $10,500 OCF = $23,000 − $13,000 − ($27,500 − $25,000) = $7,500 AACSB TOPIC: ANALYTIC SECTION: 10.5 TOPIC: TOP-DOWN OCF TYPE: PROBLEMS 60. Ben's Ice Cream Parlor is
considering a project which will produce sales of $8,000 and increase cash expenses by $3,500. If the project is implemented, taxes will increase by $1,700. The additional depreciation expense will
be $1,200. An initial cash outlay of $2,500 is required for net working capital. What is the amount of the operating cash flow using the top-down approach? a. $300 b. $1,600 c. $2,000 D. $2,800 e.
$3,300 OCF = $8,000 − $3,500 − $1,700 = $2,800 AACSB TOPIC: ANALYTIC SECTION: 10.5 TOPIC: TOP-DOWN OCF TYPE: PROBLEMS 10-25 Chapter 010 Making Capital Investment Decisions 61. A project will increase the
sales of Joe's Workshop by $50,000 and increase cash expenses by $36,000. The project will cost $30,000 and be depreciated using straight-line depreciation to a zero book value over the 3-year life
of the project. The company has a marginal tax rate of 35 percent. What is the operating cash flow of the project using the tax shield approach? a. $8,400 b. $9,100 C. $12,600 d. $15,600 e. $17,500
OCF = [($50,000 − $36,000) × (1 − .35)] + [($30,000 / 3) × .35] = $12,600 AACSB TOPIC: ANALYTIC SECTION: 10.5 TOPIC: TAX SHIELD OCF TYPE: PROBLEMS 62. A firm is considering a project that will increase sales
by $135,000 and cash expenses by $105,000. The project will cost $120,000 and be depreciated using the straight-line method to a zero book value over the 4-year life of the project. The company has a
marginal tax rate of 34 percent. What is the value of the depreciation tax shield? a. $6,000 B. $10,200 c. $13,200 d. $19,800 e. $20,000 Depreciation tax shield = ($120,000 / 4) × .34 = $10,200 AACSB
TOPIC: ANALYTIC SECTION: 10.5 TOPIC: DEPRECIATION TAX SHIELD TYPE: PROBLEMS 10-26 Chapter 010 Making Capital Investment Decisions 63. The Barber Shop just purchased some fixed assets classified as
5-year property for MACRS. The assets cost $26,000. How much depreciation has accumulated by the end of the third year? a. $4,992 b. $6,656 c. $13,520 d. $14,572 E. $18,512 Depreciation = $26,000 × (.20 + .32 + .192) = $18,512 AACSB TOPIC: ANALYTIC SECTION: 10.4 TOPIC: MACRS DEPRECIATION TYPE: PROBLEMS 64. You just purchased some equipment
that is classified as 5-year property for MACRS. The equipment cost $79,000. What will the book value of this equipment be at the end of two years should you decide to resell the equipment at that
point in time? a. $5,056 b. $22,752 C. $37,920 d. $41,080 e. $56,248 Book value at the end of year 2 = $79,000 - [$79,000 × (.20 + .32)] = $37,920 AACSB TOPIC: ANALYTIC SECTION: 10.4 TOPIC: MACRS
DEPRECIATION TYPE: PROBLEMS 65. Allied Partners just purchased some fixed assets that are classified as 3-year property for MACRS. The assets
cost $2,400. What is the amount of the depreciation expense in year 4? a. $0 B. $177.84 c. $355.68 d. $799.92 e. $1,066.56 Depreciation for year 4 = $2,400 × .0741 = $177.84 AACSB TOPIC: ANALYTIC
SECTION: 10.4 TOPIC: MACRS DEPRECIATION TYPE: PROBLEMS 66. Retailers, Inc. purchased some fixed assets four years ago at a cost of $21,200. They
no longer need these assets, so they are going to sell them today at a price of $4,400. The assets are classified as 5-year property for MACRS. What is the current book value of these assets? a. $1,221.12
B. $3,663.36 c. $4,240.00 d. $4,400.00 e. $5,300.00 Book value at the end of year 4 = $21,200 - [$21,200 × (.20 + .32 + .192 + .1152)] = $3,663.36 AACSB TOPIC: ANALYTIC SECTION: 10.4 TOPIC: MACRS
DEPRECIATION TYPE: PROBLEMS 67. You own some equipment which you purchased three years ago at a cost of $155,000. The equipment is 5-year
property for MACRS. You are considering selling the equipment today for $41,500. Which one of the following statements is correct if your tax rate is 34 percent? a. The tax due on the sale is
$2,072.40. b. The book value today is $74,400. c. The book value today is $60,600. d. The taxable amount on the sale is $44,640. E. You will receive a tax refund of $1,067.60 as a result of this
sale. Tax refund = [$41,500 - $155,000 × (1 - .2 - .32 - .192)] × .34 = -$1,067.60 (refund) AACSB TOPIC: ANALYTIC SECTION: 10.4 TOPIC: SALVAGE VALUE TYPE: PROBLEMS 68.
The Furniture Makers purchased some fixed assets three years ago for $52,000. The assets are classified as 5-year property for MACRS. The company is considering selling these assets now so they can
buy some newer fixed assets which utilize the latest in technology. The company has been offered $15,500 for these old assets. What is the net cash flow from the salvage value if the tax rate is 34
percent? a. $12,283.60 b. $14,976.00 C. $15,321.84 d. $15,500.00 e. $15,678.16 Book value at the end of year 3 = $52,000 × (1 - .2 - .32 - .192) = $14,976 Tax on sale = ($15,500 - $14,976) × .34 = $178.16 After-tax cash flow = $15,500 - $178.16 = $15,321.84 AACSB TOPIC: ANALYTIC SECTION: 10.4 TOPIC: SALVAGE VALUE TYPE: PROBLEMS 69. Winslow, Inc. is
considering the purchase of a $116,000 piece of equipment. The equipment is classified as 5-year MACRS property. The company expects to sell the equipment after two years at a price of $50,000. The
tax rate is 35 percent. What is the expected after-tax cash flow from the anticipated sale? a. $32,500 b. $35,020 c. $40,012 d. $44,193 E. $51,988 Book value at the end of year 2 = $116,000 × (1 - .2 - .32) = $55,680 Tax on sale = ($50,000 - $55,680) × .35 = -$1,988 (refund) After-tax cash flow = $50,000 + $1,988 = $51,988 AACSB TOPIC: ANALYTIC SECTION: 10.4 TOPIC: SALVAGE VALUE TYPE: PROBLEMS 70. A project is expected to create operating cash flows of $35,000 a year for four years. The initial cost of the fixed assets is $100,000. These
assets will be worthless at the end of the project. An additional $5,000 of net working capital will be required throughout the life of the project. What is the project's net present value if the
required rate of return is 11 percent? a. $1,879.25 b. $3,585.60 C. $6,879.25 d. $8,585.60 e. $11,879.25 AACSB TOPIC: ANALYTIC SECTION: 10.3 AND 10.4 TOPIC: PROJECT NPV TYPE: PROBLEMS 71. A project will produce operating cash flows of $60,000 a year for four years. During the life of the project, inventory will be lowered by $20,000 and
accounts receivable will increase by $25,000. Accounts payable will decrease by $10,000. The project requires the purchase of equipment at an initial cost of $200,000. The equipment will be
depreciated straight-line to a zero book value over the life of the project. The equipment will be salvaged at the end of the project creating a $30,000 after-tax cash flow. At the end of the
project, net working capital will return to its normal level. What is the net present value of this project given a required return of 12 percent? a. -$17,759.04 b. -$13,693.50 C. -$4,160.73 d. $2,194.46 e. $10,839.27 AACSB TOPIC: ANALYTIC SECTION: 10.3, 10.4 AND 10.5 TOPIC: PROJECT NPV TYPE: PROBLEMS 72. A project will produce an
operating cash flow of $10,100 a year for five years. The initial cash investment in the project will be $32,500. The net after-tax salvage value is estimated at $6,000 and will be received during
the last year of the project's life. What is the net present value of the project if the required rate of return is 10 percent? a. $3,613.72 b. $5,515.64 c. $5,786.95 D. $9,512.47 e. $11,786.95 AACSB
TOPIC: ANALYTIC SECTION: 10.3, 10.4 AND 10.5 TOPIC: PROJECT NPV TYPE: PROBLEMS 73. Stall Enterprises is considering the installation of a new
wireless computer network that will cut annual operating costs by $15,000. The system will cost $66,000 to purchase and install. This system is expected to have a 6-year life and will be depreciated
to zero using straight-line depreciation. What is the amount of the earnings before interest and taxes for this project? a. -$5,000 B. $4,000 c. $5,000 d. $6,000 e. $11,000 Earnings before interest and taxes = $15,000 - ($66,000 / 6) = $4,000 AACSB TOPIC: ANALYTIC SECTION: 10.6 TOPIC: COST-CUTTING TYPE: PROBLEMS 74. The Make-Up Artists is considering replacing the equipment it uses to produce
lipstick. The equipment would cost $1.8 million and lower manufacturing costs by an estimated $260,000 a year. The equipment will be depreciated using straight-line depreciation to a book value of
zero. The life of the equipment is 9 years. The required rate of return is 9 percent and the tax rate is 34 percent. What is the net income from this proposed project? a. $241,236 b. $180,000 c.
$20,400 D. $39,600 e. $60,000 Annual depreciation = $1,800,000 / 9 = $200,000 Net income = ($260,000 - $200,000) × (1 - .34) = $39,600 AACSB TOPIC: ANALYTIC SECTION: 10.6 TOPIC: COST-CUTTING TYPE: PROBLEMS
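Questions 59 through 62 and 74 all rest on equivalent definitions of operating cash flow. A minimal Python sketch (the function names are my own, not from the text) checking that the tax-shield and bottom-up forms agree on question 61's numbers:

```python
def ocf_bottom_up(net_income, depreciation):
    # Bottom-up: OCF = net income + depreciation
    return net_income + depreciation

def ocf_tax_shield(sales, cash_costs, depreciation, tax_rate):
    # Tax-shield: after-tax operating income plus the depreciation tax shield
    return (sales - cash_costs) * (1 - tax_rate) + depreciation * tax_rate

# Question 61: sales +$50,000, cash expenses +$36,000,
# $30,000 depreciated straight-line over 3 years, 35% tax rate
dep = 30_000 / 3
ni = (50_000 - 36_000 - dep) * (1 - 0.35)
print(round(ocf_tax_shield(50_000, 36_000, dep, 0.35), 2))  # 12600.0
print(round(ocf_bottom_up(ni, dep), 2))                     # 12600.0
```

Both routes land on the $12,600 answer keyed for question 61, which is the point of the chapter: the approaches differ only in where you start on the income statement.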
75. Superior Manufacturers is considering a 3-year project with an initial cost of $846,000. The project will not directly produce any sales but
will reduce operating costs by $295,000 a year. The equipment is depreciated straight-line to a zero book value over the life of the project. At the end of the project the equipment will be sold for
an estimated $30,000. The tax rate is 34 percent. The project will require $31,000 in extra inventory for spare parts and accessories. Should this project be implemented if Superior Manufacturers requires an 8 percent rate of return? Why or why not? a. No; The NPV is -$128,147.16. B. No; The NPV is -$87,820.48. c. No; The NPV is -$81,429.28. d. Yes; The NPV is $33,769.37. e. Yes; The NPV is $153,777.33. AACSB TOPIC: ANALYTIC SECTION: 10.3 AND 10.6 TOPIC: COST-CUTTING TYPE: PROBLEMS 76. You are working on a bid to build three
amusement parks a year for the next two years. This project requires the purchase of $52,000 of equipment which will be depreciated using straight-line depreciation to a zero book value over the two
years. The equipment can be sold at the end of the project for $34,000. You will also need $16,000 in net working capital over the life of the project. The fixed costs will be $10,000 a year and the
variable costs will be $70,000 per park. Your required rate of return is 10 percent for this project and your tax rate is 35 percent. What is the minimal amount, rounded to the nearest $500, you
should bid per amusement park? a. $20,000 b. $66,500 c. $68,000 d. $74,000 E. $79,500 NI = $21,038.10 - $26,000 = -$4,961.90; EBT = -$4,961.90 / (1 - .35) = -$7,633.69 Sales = -$7,633.69 + ($52,000 / 2) + $10,000 + ($70,000 × 3) = $238,366.31 Bid per amusement park = $238,366 / 3 = $79,455 When rounded to the nearest $500, the bid price is $79,500. AACSB TOPIC: ANALYTIC SECTION: 10.6 TOPIC: BID PRICE TYPE: PROBLEMS 77. You are working on a bid to build four small apartment buildings a year for the next three years for a local community. This
project requires the purchase of $900,000 of equipment which will be depreciated using straight-line depreciation to a zero book value over the three years. The equipment can be sold at the end of
the project for $400,000. You will also need $200,000 in net working capital over the life of the project. The fixed costs will be $475,000 a year and the variable costs will be $140,000 per
building. Your required rate of return is 12 percent for this project and your tax rate is 34 percent. What is the minimal amount, rounded to the nearest $500, that you should bid per building? a.
$292,500 b. $316,500 c. $330,500 D. $341,500 e. $365,000 NI = $320,477.95 - $300,000 = $20,477.95; EBT = $20,477.95 / (1 - .34) = $31,027.20 Sales = $31,027.20 + ($900,000 / 3) + $475,000 + ($140,000 × 4)
= $1,366,027.20 Bid per building = $1,366,027.20 / 4 = $341,506.80 When rounded to the nearest $500, the bid price is $341,500. AACSB TOPIC: ANALYTIC SECTION: 10.6 TOPIC: BID PRICE TYPE: PROBLEMS
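Questions 76 and 77 apply the chapter's bid-price procedure: find the OCF that makes NPV exactly zero, then work back up the income statement to the sales figure. A Python sketch of that procedure using question 77's inputs (the function and parameter names are mine, not from the text):

```python
def bid_price(equip, nwc, salvage, fixed, var_per_unit, units,
              life, r, tax):
    """Work backwards from NPV = 0 to the bid price per unit (sketch)."""
    dep = equip / life                       # straight-line to zero book value
    annuity = (1 - (1 + r) ** -life) / r     # PV factor for the level OCF stream
    terminal = salvage * (1 - tax) + nwc     # after-tax salvage + NWC recovery
    initial = equip + nwc                    # time-zero outlay
    # The OCF that makes NPV exactly zero
    ocf = (initial - terminal / (1 + r) ** life) / annuity
    # NI = OCF - depreciation; EBT = NI / (1 - T); Sales = EBT + dep + costs
    ebt = (ocf - dep) / (1 - tax)
    sales = ebt + dep + fixed + var_per_unit * units
    return sales / units

# Question 77: $900,000 equipment, $200,000 NWC, $400,000 salvage,
# $475,000 fixed costs, $140,000 per building, 4 buildings, 3 years, 12%, 34%
price = bid_price(900_000, 200_000, 400_000, 475_000, 140_000, 4,
                  3, 0.12, 0.34)
print(round(price, 2))  # about 341506.8, which rounds to the $341,500 bid
```

This reproduces the $341,506.80 per building shown in the solution to question 77 before rounding to the nearest $500.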
78. Office Furniture Makers, Inc. uses machines to produce high quality office chairs for other firms. The initial cost of one customized machine
is $750,000. This machine costs $12,000 a year to operate. Each machine has a life of 3 years before it is replaced. What is the equivalent annual cost of this machine if the required return is 10
percent? (Round your answer to whole dollars) a. $259,947 b. $285,942 c. $301,586 D. $313,586 e. $326,947 AACSB TOPIC: ANALYTIC SECTION: 10.6 TOPIC: EQUIVALENT ANNUAL COST TYPE: PROBLEMS 79. Glassparts, Inc. uses machines to manufacture windshields for automobiles. One machine costs $142,000 and lasts about 5 years before it needs to be replaced. The operating cost per machine is $7,000 a year. What is the equivalent annual cost of one machine if the required rate of return is 11 percent? (Round your answer to whole dollars) a.
$30,811 b. $33,574 c. $35,400 d. $37,267 E. $45,421 AACSB TOPIC: ANALYTIC SECTION: 10.6 TOPIC: EQUIVALENT ANNUAL COST TYPE: PROBLEMS 80. Great
Enterprises is analyzing two machines to determine which one they should purchase. The company requires a 13 percent rate of return and uses straight-line depreciation to a zero book value. Machine A
has a cost of $285,000, annual operating costs of $8,500, and a 3-year life. Machine B costs $210,000, has annual operating costs of $14,000, and has a 2-year life. Whichever machine is purchased will
be replaced at the end of its useful life. Great Enterprises should select machine _____ because it will save the company about _____ a year in costs. A. A; $10,688 b. A; $17,716 c. B; $5,500 d. B; $14,987 e. B; $16,204 Machine A lowers the annual cost of the equipment by about $10,688, the difference between machine B's EAC of $139,892 and machine A's EAC of $129,204. AACSB TOPIC: ANALYTIC SECTION: 10.6 TOPIC: EQUIVALENT ANNUAL COST TYPE: PROBLEMS 81. Dollar Diamond is considering a project which will require additional inventory of $134,000 and will also increase accounts payable
by $37,000 as suppliers are willing to finance part of these purchases. Accounts receivable are currently $100,000 and are expected to increase by 8 percent if this project is accepted. What is the
initial project cash flow related to net working capital? A. -$105,000 b. -$97,000 c. -$89,000 d. -$8,560 e. -$94,720 Initial cash flow for NWC = -$134,000 + $37,000 - ($100,000 × .08) = -$105,000 AACSB TOPIC:
ANALYTIC SECTION: 10.4 TOPIC: NET WORKING CAPITAL TYPE: PROBLEMS 82. Joel's Shop needs to maintain 15 percent of its sales in net working capital. Joel's is considering a 4-year project which will
increase sales from their current level of $130,000 to $150,000 the first year and to $165,000 a year for the following three years. What amount should be included in the project analysis for net
working capital in year four of the project? a. $19,500 b. $0 C. $5,250 d. $7,000 e. $24,750 NWC recovery = ($165,000 - $130,000) × .15 = $5,250 AACSB TOPIC: ANALYTIC SECTION: 10.4 TOPIC: NET WORKING CAPITAL TYPE: PROBLEMS 83. Bright Lighting is expanding its product offerings to reach a wider range of customers. The expansion project includes
increasing the floor inventory by $175,000 and increasing its debt to suppliers by 60 percent of that amount. The company will also spend $180,000 for a building contractor to expand the size of the
showroom. As part of the expansion plan, the company will be offering credit to its customers and thus expects accounts receivable to rise by $35,000. For the project analysis, what amount should be
used as the initial cash flow for net working capital? a. -$35,000 b. -$70,000 C. -$105,000 d. -$175,000 e. -$210,000 Initial NWC requirement = -$175,000 + (.60 × $175,000) - $35,000 = -$105,000 AACSB TOPIC:
ANALYTIC SECTION: 10.4 TOPIC: NET WORKING CAPITAL TYPE: PROBLEMS Johnson, Inc. is considering a new project. The project will require $350,000 for new fixed assets, $140,000 for additional inventory,
and $45,000 for additional accounts receivable. Short-term debt is expected to increase by $110,000 and long-term debt is expected to increase by $330,000. The project has a 7-year life. The fixed
assets will be depreciated straight-line to a zero book value over the life of the project. At the end of the project, the fixed assets can be sold for 30 percent of their original cost. The net
working capital returns to its original level at the end of the project. The project is expected to generate annual sales of $600,000 and costs of $400,000. The tax rate is 35 percent and the
required rate of return is 12 percent. 84. What is the project's cash flow at time zero? a. -$195,000 b. -$350,000 C. -$425,000 d. -$490,000 e. -$535,000 Initial cash flow = -$350,000 - $140,000 - $45,000 + $110,000 = -$425,000 AACSB TOPIC: ANALYTIC SECTION: 10.2 TOPIC: RELEVANT COSTS TYPE: PROBLEMS 85. What is the amount of the earnings before
interest and taxes for the first year of this project? a. $97,500 b. $130,000 C. $150,000 d. $200,000 e. $250,000 EBIT = $600,000 - $400,000 - ($350,000 / 7) = $150,000 AACSB TOPIC: ANALYTIC SECTION: 10.3 TOPIC: EBIT TYPE: PROBLEMS 86. What is the amount of the after-tax cash flow from the sale of the fixed assets at the end of this project?
a. $0 b. $32,500 c. $36,750 D. $68,250 e. $105,000 After-tax salvage value = .30 × $350,000 × (1 - .35) = $68,250 AACSB TOPIC: ANALYTIC SECTION: 10.4 TOPIC: AFTER-TAX SALVAGE VALUE TYPE: PROBLEMS 87. What is the cash flow recovery from net working capital at the end of this project? a. $30,000 B. $75,000 c. $90,000 d. $185,000 e. $205,000 Net working capital recovery = $140,000 + $45,000 - $110,000 =
$75,000 AACSB TOPIC: ANALYTIC SECTION: 10.4 TOPIC: RECOVERY OF NET WORKING CAPITAL TYPE: PROBLEMS Layla's Distribution Co. is considering a project which will require the purchase of $1.8 million in
new equipment. The equipment will be depreciated straight-line to a zero book value over the 5-year life of the project. Layla's expects to sell the equipment at the end of the project for 10 percent
of its original cost. Annual sales from this project are estimated at $1.3 million. Net working capital equal to 30 percent of sales will be required to support the project. All of the net working
capital will be recouped at the end of the project. The firm requires a minimum 15 percent rate of return on this project. The tax rate is 34 percent. 88. What is the value of the depreciation tax shield in year 3 of the project? A. $122,400 b. $237,600 c. $367,200 d. $612,000 e. $712,800 Depreciation tax shield = ($1,800,000 / 5) × .34 =
$122,400 AACSB TOPIC: ANALYTIC SECTION: 10.5 TOPIC: DEPRECIATION TAX SHIELD TYPE: PROBLEMS 89. What is the amount of the after-tax salvage value of the equipment? a. $0 b. $61,200 C. $118,800 d.
$180,000 e. $237,600 After-tax salvage value = $1,800,000 × .10 × (1 - .34) = $118,800 AACSB TOPIC: ANALYTIC SECTION: 10.4 TOPIC: AFTER-TAX SALVAGE VALUE TYPE: PROBLEMS 90. What is the recovery amount attributable to net working capital at the end of the project? a. $130,000 b. $260,000 c. $360,000 D. $390,000 e. $540,000 NWC recapture = .30 × $1,300,000 = $390,000 AACSB TOPIC: ANALYTIC SECTION: 10.4 TOPIC: CHANGE IN NET WORKING CAPITAL TYPE: PROBLEMS Essay Questions 91. Explain how a manager can determine which cash flows should be
included and which cash flows should be excluded from the analysis of a proposed project. Assume the analysis adheres to the stand-alone principle. Any changes in cash flows that will result from
accepting a new investment should be included in the analysis of that investment. AACSB TOPIC: REFLECTIVE THINKING SECTION: 10.1 TOPIC: STAND-ALONE PRINCIPLE 92. What is the formula for the tax-shield approach to OCF? Explain the two key points the formula illustrates. OCF = (Sales - Costs) × (1 - T) + Depreciation × T. The formula illustrates
that cash income and expenses affect OCF on an after-tax basis. The formula also illustrates that even though depreciation is a non-cash expense, it does affect OCF because of the tax savings realized
from the depreciation expense. AACSB TOPIC: REFLECTIVE THINKING SECTION: 10.5 TOPIC: DEPRECIATION TAX SHIELD 93. What is the primary purpose
behind computing the equivalent annual cost of two machines? What is the assumption that is being made about each machine? The primary purpose is to compute the annual cost of each machine on a
comparable basis so that the least expensive machine can be identified given that the machines have differing lives. The assumption is that whichever machine is employed, it will be replaced at the
end of its useful life. AACSB TOPIC: REFLECTIVE THINKING SECTION: 10.6 TOPIC: EQUIVALENT ANNUAL COST 94. Assume a firm sets its bid price for a project at the minimum level as computed using the
discounted cash flow analysis presented in chapter 10. Given this, what do you know about the net present value, the internal rate of return, and the payback period for this project? The discounted
cash flow approach to setting a bid price assumes the net present value of the project will be zero which means the internal rate of return will equal the required rate. The payback period must be
less than the life of the project. AACSB TOPIC: REFLECTIVE THINKING SECTION: 10.6 TOPIC: MINIMUM BID PRICE 95. Can the initial cash flow at time zero for a project ever be a positive value? If yes,
give an example. If no, explain why not. The initial cash flow can be a positive value. For example, if a project reduced net working capital by an amount which exceeded the initial cost for fixed
assets, the initial cash flow would be a positive amount. AACSB TOPIC: REFLECTIVE THINKING SECTION: 10.3 TOPIC: PROJECT INITIAL CASH FLOW 96.
Describe the procedure for setting a bid price and explain the manager's objective in setting this bid price. How is it that two different firms often arrive at different values for the bid price?
The bid process involves determining the price for which the NPV of the project is zero (or some alternative minimum NPV level acceptable to the firm). In setting a bid price, a manager typically
forecasts all relevant cash outflows and inflows exclusive of revenues. Then, the manager determines the level of OCF that will make the NPV just equal to zero. Finally, the manager works backwards
up through the income statement to determine the bid price that results in the desired level of OCF. The ultimate objective here is to determine the price at which the firm just reaches its financial
break-even point. Each bidding firm usually arrives at a different calculated bid price because they may use different assumptions in the evaluation process, such as the estimated time to complete
the project, costs and quality of the materials used, estimated labor costs, the required rate of return, the tax rate, and so on. AACSB TOPIC: REFLECTIVE THINKING SECTION: 10.6 TOPIC: SETTING A BID
PRICE
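The MACRS problems above (63 through 69) all reduce to the same two steps: accumulate the table percentages to get book value, then tax the gain or loss relative to book value on sale. A minimal Python sketch of that bookkeeping, checked against question 69 (the helper names are mine, not from the text):

```python
# 5-year MACRS percentages (half-year convention), as used in questions 63-69
MACRS_5YR = [0.20, 0.32, 0.192, 0.1152, 0.1152, 0.0576]

def book_value(cost, years_held, rates=MACRS_5YR):
    # Book value = cost less accumulated MACRS depreciation to date
    return cost * (1 - sum(rates[:years_held]))

def aftertax_salvage(price, cost, years_held, tax_rate, rates=MACRS_5YR):
    # Selling above book value triggers tax on the gain;
    # selling below book value produces a tax refund on the loss
    bv = book_value(cost, years_held, rates)
    return price - (price - bv) * tax_rate

# Question 69: $116,000 of 5-year property sold after 2 years for $50,000, 35% tax
print(round(book_value(116_000, 2), 2))                      # 55680.0
print(round(aftertax_salvage(50_000, 116_000, 2, 0.35), 2))  # 51988.0
```

Because the $50,000 sale price is below the $55,680 book value, the tax term is a refund, which is why the after-tax cash flow in question 69 exceeds the sale price.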
USF - PSY - 3204
Exam 1 Fall 2007DescriptivesExam1Mean95% ConfidenceInterval for Mean5% Trimmed MeanMedianVarianceStd. DeviationMinimumMaximumRangeInterquartile RangeSkewnessKurtosisLower BoundUpper
BoundStatistic78.720676.6587Std. Error1.0425980.78
USF - PSY - 3204
Exam 2 Fall 07DescriptivesExam2.00Mean35.268995% Confidence Interval for MeanLower BoundUpper Bound5% Trimmed MeanMedian 35.0000Variance 61.164Std. Deviation7.82076Minimum15.00Maximum49.00Range
34.00Interquartile RangeSkewness-.385Ku
USF - PSY - 3204
Exam 2 C08 KeyItemKey1B2C3A4B5D6B7B8A9A10C11C12C13A14B15B16B17B18C19A20D21C22D23A24D25C26B (was keyed c)27B28B29A30A31A32C33A34A35B36C37A38C39C40D41B42A43A44454647
USF - PSY - 3204
Exam 1 Spring 08Mean = 82
USF - PSY - 3204
Exam 1 Summer C 20081U number_Psych Stats Summer C 2008 Brannick Exam 1Instructions: Write your name, U number, and section number on the scantron; bubblethem in. Answer any 75 of 80 questions on the
exam by bubbling in the best of the fouralternati
USF - PSY - 3204
Exam 1 Fall 20071U number_Psych Stats Fall 2007 Brannick Exam 1Instructions: Write your name, U number, and section number on the scantron. Answerany 50 of 55 questions on the exam by bubbling in the
best of the four alternatives given.For those que
USF - PSY - 3204
Exam 1 Fall 2007 Psych Stats KeyItem123456789101112131415161718192021222324252627282930313233343536373839404142KeyCACABBBDDBDDCBABABDCCCDCDCDDCDABADDCBDCBBD43444546
USF - PSY - 3204
Exam 1 Spring 20071Name_Psych Stats Spring 2007 Brannick Exam 1Instructions: Write your name, U number, and section number on the scantron. Answerany 45 of 50 questions on the exam by bubbling in the
best of the four alternatives given.For those que
USF - PSY - 3204
Exam 1 Spring 20081U number_Psych Stats Spring 2008 Brannick Exam 1Instructions: Write your name, U number, and section number on the scantron; bubblethem in. Answer any 50 of 55 questions on the
exam by bubbling in the best of the fouralternatives
Most interesting mathematics mistake?
up vote 50 down vote favorite
Some mistakes in mathematics made by extremely smart and famous people can eventually lead to interesting developments and theorems, e.g. Poincaré's characterization of the 3-sphere, or the search to prove that Euclid's parallel axiom is really ~~necessary~~ unnecessary.
But I also think there are less famous mistakes worth hearing about. So, here's a question:
What's the most interesting mathematics mistake that you know of?
This question is community wiki, meaning neither the question nor the answers receive points (which are reserved for "hard" questions). So please post as much as you like (indeed, please post one answer per post so that others can upvote them more easily), vote a lot and vote freely.
(should there be a tag 'not-math-related' or similar?)
EDIT: There is a similar question which has been closed as a duplicate to this one, but which also garnered some new answers. It can be found here:
Failures that lead eventually to new mathematics
soft-question big-list
6 My mistake, not an interesting one unfortunately. – Ilya Nikokoshev Oct 18 '09 at 7:13
3 Closed: big-list questions don't need to keep cycling back to the front page, after some point. – Scott Morrison♦ Mar 7 '10 at 6:41
7 Doesn't "cycling back to the front page" also mean that it is still of interest? E.g. this one has just been edited and therefore got to the front page again. Therefore it gets closed??? I don't get the logic behind that... – vonjd Mar 12 '10 at 18:28
8 Well, cycling well-viewed topics back to the front comes at the cost of pushing newer questions out of immediate visibility faster, so I understand the motivation. On the other hand, as the site grows, we get new perspectives on old questions which, as vonjd points out, are apparently still of interest. We shouldn't close things just because the site old-timers are tired of seeing them. This discussion is probably on meta somewhere already.... – Cam McLeman Mar 12 '10 at 18:40
5 I agree with Cam - and in this case additionally: the big-list tag means it is a big list, and it can only become a big list because many people make it a big list - so to close big-lists because they became big lists is kind of absurd. Perhaps the underlying mechanism of bringing things to the front page should be changed in the software then. Just closing it is no solution – vonjd Mar 12 '10 at 18:46
protected by François G. Dorais♦ Nov 7 '13 at 13:17
36 Answers
C.N. Little listing the Perko pair as different knots in 1885 ($10_{161}$ and $10_{162}$). The mistake was found almost a century later, in 1974, by Ken Perko, a NY lawyer (!)
For almost a century, when everyone thought they were different knots, people tried their best to find knot invariants to distinguish them, but of course they failed. But the effort was a
major motivation to research covering linkage etc., and was surely tremendously fruitful for knot theory.
Update (2013):
This morning I received a letter from Ken Perko himself, revealing the true history of the Perko pair, which is so much more interesting! Perko writes:
The duplicate knot in tables compiled by Tait-Little [3], Conway [1], and Rolfsen-Bailey-Roth [4], is not just a bookkeeping error. It is a counterexample to an 1899 "Theorem" of C.N.
Little (Yale PhD, 1885), accepted as true by P.G. Tait [3], and incorporated by Dehn and Heegaard in their important survey article on "Analysis situs" in the German Encyclopedia of
Mathematics [2].
Little's `Theorem' was that any two reduced diagrams of the same knot possess the same writhe (number of overcrossings minus number of undercrossings). The Perko pair have different writhes,
and so Little's "Theorem", if true, would prove them to be distinct!
Perko continues:
Yet still, after 40 years, learned scholars do not speak of Little's false theorem, describing instead its decapitated remnants as a "Tait Conjecture" - and indeed, one subsequently proved correct by Kauffman, Murasugi, and Thistlethwaite.
up vote 82 down vote
I had no idea! Perko concludes (boldface is my own):
I think they are missing a valuable point. History instructs by reminding the reader not merely of past triumphs, but of terrible mistakes as well.
And the final nail in the coffin is that the image above isn't of the Perko pair!!! It's the `Weisstein pair' $10_{161}$ and mirror $10_{163}$, described by Perko as "those magenta colored,
almost matching non-twins that add beauty and confusion to the Perko Pair page of Wolfram Web’s Math World website. In a way, it’s an honor to have my name attached to such a well-crafted
likeness of a couple of Bhuddist prayer wheels, but it certainly must be treated with the caution that its color suggests by anyone seriously interested in mathematics."
The real Perko pair is this:
You can read more about this fascinating story at Richard Elwes's blog.
Well, I'll be jiggered! The most interesting mathematics mistake that I know turns out to be more interesting than I had ever imagined!
1. J.H. Conway, An enumeration of knots and links, and some of their algebraic properties, Proc. Conf. Oxford, 1967, p. 329-358 (Pergamon Press, 1970).
2. M. Dehn and P. Heegaard, Enzyk. der Math. Wiss. III AB 3 (1907), p. 212: "Die algebraische Zahl der Ueberkreuzungen ist fuer die reduzierte Form jedes Knotens bestimmt." ["The algebraic number of crossings is determined for the reduced form of every knot."]
3. C.N. Little, Non-alternating +/- knots, Trans. Roy. Soc. Edinburgh 39 (1900), page 774 and plate III. This paper describes itself at p. 771 as "Communicated by Prof. Tait."
4. D. Rolfsen, Knots and links (Publish or Perish, 1976).
2 That's a nice mistake. Do you know how it started -- presumably at some point the knots were separated by a flawed computation of some invariant? – Ryan Budney Dec 16 '09 at 3:06
4 Little (with Tait and Kirkman) compiled his tables combinatorially. He drew all possible 4-valent graphs with some number of vertices (in this case 10), and resolved 4-valent vertices into crossings in all possible ways. He ended up with $2^{10}$ knots. Then he worked BY HAND to eliminate doubles, by making physical models with string. He failed to bring these two knots to the same position, and concluded that they must be different. It took almost 100 years to find the ambient isotopy which shows that they are the same knot, but the quest to show they are different was fruitful. – Daniel Moskovich Dec 16 '09 at 7:22
2 Did Conway assume they were different as well, or did the mistake persist for other reasons, like an error in computing an invariant? – Ryan Budney Dec 16 '09 at 7:56
2 Yes- Conway assumed they were different in his table as well, but had no invariant to prove it. I don't know of any miscalculated invariant which "showed" they were different. – Daniel
Moskovich Dec 16 '09 at 11:01
2 Ken Perko attempted to make another edit, by adding the following to the citation of Conway's paper: CONWAY WAS NOT MISLED BY THIS FALSE THEOREM OF C.N. LITTLE. HE FOUND THREE COUNTEREXAMPLES AMONG HIS 11-CROSSING NON-ALTERNATING KNOTS AND CORRECTLY WEEDED OUT THE DUPLICATE KNOT TYPES. Cf. Hoste-Thistlethwaite-Weeks, The first 1,701,936 knots, Math. Intelligencer 20 (1998), footnote 8, and Jablan-Radovic-Sazdanovic, Adequacy of link families, Publications de l'Institut Mathématique, Nouvelle Série, Tome 88(102) (2010), 21-52. – S. Carnahan♦ Nov 23 '13 at 5:07
All of the (in retrospect) misguided attempts to prove Euclid's Parallel Postulate, which eventually lead Gauss to develop hyperbolic geometry.
up vote 52 down vote
8 (and/or Lobachevsky, and/or Bolyai) This gets my vote as one of the most fruitful mistakes, and one of the longest perpetuated. – Aaron Mazel-Gee Oct 17 '09 at 18:46
An error of Lebesgue, 1905 or so. Take a Borel set in the plane and project it onto a line; the result is a Borel set. Obvious: the projection of an open set is open, and the Borel sets in the plane are the least family containing the open sets and closed under countable unions and countable intersections.
up vote 46 down vote
But wrong. Projection doesn't commute with countable intersection.
Studying this error led Suslin to begin the line of study now called "descriptive set theory", 1917 or so.
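A minimal illustration of the failure (my own example, not from the original answer), with $\pi$ the projection onto the $x$-axis:

```latex
% Take the (Borel) sets A_n = { (x,y) in R^2 : y >= n }. Each projects onto
% all of R, but their intersection is empty, so:
\pi\Bigl(\bigcap_{n\ge 1} A_n\Bigr) = \pi(\varnothing) = \varnothing,
\qquad\text{while}\qquad
\bigcap_{n\ge 1} \pi(A_n) = \bigcap_{n\ge 1} \mathbb{R} = \mathbb{R}.
```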
Kempe's "proof" of the four-color theorem, which didn't prove the four-color theorem, but did:
1. Prove the five-color theorem
2. Somehow manage to go unnoticed for a dozen years
up vote 37 down vote
3. Lay the foundations for major tools in structural graph theory and, despite being fundamentally flawed, serve as the starting point for the eventual successful proof(s) of the four-color theorem
A story I heard in grad school:
Once upon a time, a set theorist was writing a paper on inner models, and in it he wrote, "... and we will call such models nice." When he got his manuscript back from the typist (this was back in the pre-LaTeX days of technical typists), he saw that his handwriting had been misread, and the line came out as: "... and we will call such models mice." The name stuck, and to this day if you browse almost any recent volume of the Journal of Symbolic Logic, you will find set theory articles on "mice."
up vote 33 down vote
3 I've heard a version of this story too, but I've also heard that Jensen denied that this was the origin of "mice". I never asked Jensen himself about it, so I don't know what to believe. – Andreas Blass Oct 9 '11 at 23:55
5 You know what will be a great paper title? "Of mice and men" – Aleks Vlasev Oct 10 '11 at 6:50
Maybe it's not true, but there's the story of the "Grothendieck prime":
One striking characteristic of Grothendieck's mode of thinking is that it seemed to rely so little on examples. This can be seen in the legend of the so-called "Grothendieck prime". In
a mathematical conversation, someone suggested to Grothendieck that they should consider a particular prime number. "You mean an actual number?" Grothendieck asked. The other person
replies, yes, an actual prime number. Grothendieck suggested, "All right, take 57."
up vote 28 down vote
But Grothendieck must have known that 57 is not prime, right? Absolutely not, said David Mumford of Brown University. "He doesn't think concretely."
from here: http://www.ams.org/notices/200410/fea-grothendieck-part2.pdf
4 But does this qualify as an interesting mistake? – Todd Trimble♦ Nov 7 '13 at 14:17
An insignificant mistake, but amusing nonetheless: in Cayley's famous 1854 paper where he defines the concept of an abstract group, as an illustration he proves that there are three groups of order 6 (up to isomorphism). This is because he does not realize that the groups $Z_2\times Z_3$ and $Z_6$ are isomorphic. (See my comment for the correct Cayley reference.)
up vote 28 down vote
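A quick sanity check of the isomorphism Cayley missed (my own sketch, not from the thread): if some element of $Z_2\times Z_3$ has order 6, the group is cyclic and hence isomorphic to $Z_6$.

```python
from itertools import product

# Order of (a, b) in Z2 x Z3 under componentwise addition mod (2, 3).
def order(a, b):
    x, y, k = a, b, 1
    while (x, y) != (0, 0):
        x, y = (x + a) % 2, (y + b) % 3
        k += 1
    return k

orders = sorted(order(a, b) for a, b in product(range(2), range(3)))
print(orders)        # [1, 2, 3, 3, 6, 6] -- (1, 1) has order 6
assert 6 in orders   # so Z2 x Z3 is cyclic of order 6, i.e. isomorphic to Z6
```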
4 It took me a while to track down the correct reference. It is page 51 of A. Cayley, Desiderata and suggestions: No. 1. The theory of groups, American J. Math. 1 (1878), 50-52. An interesting related paper is G. A. Miller, Contradictions in the literature of group theory, American Math. Monthly 29 (1922), 319-328. – Richard Stanley Mar 1 '10 at 15:51
I believe Kummer's failed attempt at a proof of Fermat's last theorem led to the discovery of ideals.
up vote 27
down vote
1 I'm told that Kummer actually didn't care about Fermat's last theorem; it just happened that the techniques he developed were applicable. – Qiaochu Yuan Oct 17 '09 at 20:54
3 It was actually Lame who came up with that bad proof. – Ben Webster♦ Oct 18 '09 at 1:13
1 Oh, ok my mistake. – Grétar Amazeen Oct 18 '09 at 14:27
1 Harold Edwards wrote a wonderful account of this history in his paper "The background of Kummer's proof of Fermat's last theorem for regular primes". It doesn't seem to be available
online, but the mathsci net review is: ams.org/mathscinet-getitem?mr=57:12066a – Ben Linowitz Jan 6 '10 at 2:19
1 I don't know whether it is appropriate to say "discovery" of ideals. Maybe "recognition of the importance/relevance of ideals"? – Kevin H. Lin Apr 5 '10 at 6:14
Pontryagin made a famous mistake while computing the stable homotopy groups of spheres (specifically, $\pi_2$) which led to the discovery of the Kervaire invariant. I won't spoil what the mistake was: watch this video of Mike Hopkins' talk (second video on the page), starting about 7 minutes in.
up vote 22 down vote
It was "proved" in 1961 that the first right derived functor $\lim^1_{\leftarrow}$ of the inverse limit functor is zero on Mittag-Leffler systems.
up vote 21 down vote
However, recently a counterexample was found by Deligne and Neeman: http://www.springerlink.com/content/aeem2yx884nnufxn/
Frege's proposed axioms in Die Grundgesetze der Arithmetik.
up vote 19 down vote
Frege was trying to derive the concept of "number" from more basic concepts, and he tried to axiomatize higher-order logic (essentially, a kind of set theory), but his intuitive-seeming axioms were logically inconsistent. Russell first found the inconsistency, which we now call Russell's Paradox.
From Wikipedia (http://en.wikipedia.org/wiki/Uniform_convergence), about uniform convergence:
"Augustin Louis Cauchy in 1821 published a faulty proof of the false statement that the pointwise limit of a sequence of continuous functions is always continuous. Joseph Fourier and Niels Henrik Abel found counterexamples in the context of Fourier series. Dirichlet then analyzed Cauchy's proof and found the mistake: the notion of pointwise convergence had to be replaced by uniform convergence."
up vote 18 down vote
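The textbook counterexample (my own sketch; not the Fourier-series examples Abel and Fourier actually used): each $f_n(x) = x^n$ is continuous on $[0,1]$, but the pointwise limit is $0$ for $x < 1$ and $1$ at $x = 1$, hence discontinuous.

```python
# f_n(x) = x**n is continuous for every n; its pointwise limit is not.
def f(n, x):
    return x ** n

# approximate the pointwise limit by taking n large
print(f(10_000, 0.9))   # underflows to 0.0: the limit is 0 for x < 1
print(f(10_000, 1.0))   # 1.0: the limit jumps at x = 1
```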
6 I have always loved that way Abel wrote this (in a footnote): «it appears to me that this theorem suffers exceptions»... – Mariano Suárez-Alvarez♦ Dec 15 '09 at 23:49
15 Some (e.g. A. Robinson) say that this is a mis-interpretation of the situation. When Cauchy says the sequence converges at all points, this includes infinitesimals and such things not recognized as real numbers nowadays. Abel's counterexample $\sum (1/n) \sin(nx)$ in fact does not converge at certain points $x$ infinitely close to $0$. We can hardly fault Cauchy if he did not use the notion of real number from Dedekind and Cantor, since that would not come until 50 years later. – Gerald Edgar Dec 16 '09 at 16:40
Poincaré defined the fundamental group and the homology groups and proved that $H_1$ was $\pi_1$ abelianized. So the question came up whether there were other groups $\pi_n$ whose abelianization would give the $H_n$. Čech defined the higher $\pi_n$ as a proposed answer and submitted a paper on this. But Alexandroff and Hopf got the paper, proved that the higher $\pi_n$ were abelian and thus not the solution, and they persuaded Čech to withdraw the paper. Nevertheless a short note appeared, and the $\pi_n$ started to be studied anyway...
up vote 17 down vote
Taken from http://www.intlpress.com/hha/v1/n1/a1/ , page 17
Not just a great mistake, but also a great documentation of a mistake: Stallings's How not to prove the Poincare Conjecture. (I think this paper is my answer to every community-wiki question.)
up vote 17 down vote
1 Whitehead's similar mistake is very interesting, too, as it lead him to the construction of contractible 3-manifolds that aren't balls. – Ryan Budney Dec 15 '09 at 21:22
2 There is also a paper by Cartier with a similar title: "Comment l'hypothèse de Riemann ne fut pas prouvée." – Joël Nov 7 '13 at 14:50
Hilbert's program, which was based on assumptions shattered by Gödel.
up vote 14 down vote
Supposedly Stefan Bergman attended a course on orthogonal functions while an undergraduate, and misunderstood what he was hearing, believing that the functions were supposed to be analytic. This led him to the Bergman kernel and Hilbert spaces of analytic functions, which has developed into a whole field of study at the junction of complex analysis and operator theory, and also with important ramifications in the more geometric parts of SCV. If the story is true, this was certainly an extremely fruitful mistake!
up vote 14 down vote
Steiner's count of 7776 for the number of plane conics tangent to 5 general plane conics certainly deserves a mention here. He gave this answer in 1848, and it wasn't fixed until 1864, when Chasles pointed out the error and came up with the correct value of 3264. You can regard this as the first recognition of needing appropriate compactifications in order to do valid calculations in enumerative geometry.
up vote 12 down vote
Goodrick's "story from grad school" is incorrect. According to Ronald Jensen, the set theorist in question, he felt that the concept was important enough that it deserved a name which had not already been used elsewhere in mathematics. And 'mice' was it. (Also, note that 'mice' is a noun, and 'nice' is an adjective --- it would not make sense.)
up vote 9 down vote
7 But the urban legend is so funny... – Ilya Nikokoshev Oct 19 '09 at 20:02
5 I have heard 3 versions of the origin of the name. They all originated with Jensen, and were told at a rate of one per decade. Last I checked, he actually does not seem to remember the
reason for the name. – Andres Caicedo Oct 26 '10 at 4:52
Samuel I. Krieger made many attempts at significant contributions to the field of mathematics; unfortunately, some of his efforts did not pan out.
In 1934, he claimed that the 72-digit composite number 231,584,178,474,632,390,847,141,970,017,375,815,706,593,969,331,281,128,078,915,826,259,279,871 was the largest known prime number.
He also attempted to show that the number $2^{256}(2^{257}-1)$ was perfect, implying that $2^{257}-1$ is a prime number. $2^{257}-1$ is actually a composite number: its smallest prime factor is 535,006,138,814,359.
up vote 7 down vote
Finally, he claimed to have a counterexample to Fermat's Last Theorem $x^n + y^n = z^n$ using the numbers x = 1324, y = 731 and z = 1961 with an undisclosed n. A reporter supposedly called Krieger to ask how the left and the right hand sides could be equal, when the left hand side could only end in a 4 or a 6, plus 1, and the right hand side could only end in 1.
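Both claims are easy to check by machine today (my own sketch, not from the thread): the cited factor really does divide $2^{257}-1$, and the reporter's last-digit argument rules out every exponent.

```python
# 2**257 - 1 is composite: the cited smallest prime factor divides it.
p = 535_006_138_814_359
assert (2 ** 257 - 1) % p == 0

# The reporter's objection: 1324**n ends in 4 or 6, so 1324**n + 731**n
# ends in 5 or 7, while 1961**n always ends in 1 -- no exponent n works.
for n in range(1, 1000):
    assert (pow(1324, n, 10) + pow(731, n, 10)) % 10 != pow(1961, n, 10)
print("no exponent up to 1000 works")
```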
25 A reporter who knew about modulo 10 arithmetic, now that's quite a story! – Ilya Nikokoshev Oct 22 '09 at 7:43
I find this one (it is not in the same vein as the ones that have been posted here so far; this is not a pure math mistake) to be interesting and instructive to students: the Patriot missile failure due to poor understanding of binary decimals.
up vote 6 down vote
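A sketch of the arithmetic behind that failure (my own, assuming, as the commonly cited analyses of the incident report, that 1/10 of a second was stored in a fixed-point register keeping 23 fractional bits, chopped rather than rounded):

```python
# 1/10 has an infinite binary expansion, so chopping it to a fixed number
# of bits leaves a small error that accumulates with every clock tick.
FRAC_BITS = 23
stored_tenth = (2 ** FRAC_BITS // 10) / 2 ** FRAC_BITS   # truncated 0.1
error_per_tick = 0.1 - stored_tenth                      # ~9.54e-8 s per 0.1 s tick
ticks = 100 * 3600 * 10                                  # 100 hours of 0.1 s ticks
drift = error_per_tick * ticks
print(drift)   # ~0.34 s of clock drift after 100 hours of uptime
```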
Perhaps not under this heading, but I enjoy reading in Marshall Hall's Group Theory book:
up vote 6 down vote
"Let p be any old prime."
In chapter 3 of What Is Mathematics, Really? (pages 43-45), Prof. Hersh writes:
How is it possible that mistakes occur in mathematics?
René Descartes's Method was so clear, he said, a mistake could only happen by inadvertence. Yet, ... his Géométrie contains conceptual mistakes about three-dimensional space.
Henri Poincaré said it was strange that mistakes happen in mathematics, since mathematics is just sound reasoning, such as anyone in his right mind follows. His explanation was memory
lapse—there are only so many things we can keep in mind at once.
Wittgenstein said that mathematics could be characterized as the subject where it's possible to make mistakes. (Actually, it's not just possible, it's inevitable.) The very notion of a
mistake presupposes that there is right and wrong independent of what we think, which is what makes mathematics mathematics. We mathematicians make mistakes, even important ones, even in
famous papers that have been around for years.
Philip J. Davis displays an imposing collection of errors, with some famous names. His article shows that mistakes aren't uncommon. It shows that mathematical knowledge is fallible, like
other knowledge.
Some mistakes come from keeping old assumptions in a new context.
Infinite-dimensional space is just like finite-dimensional space—except for one or two properties, which are entirely different.
Riemann stated and used what he called "Dirichlet's principle" incorrectly [when trying to prove his mapping theorem].
Julius König and David Hilbert each thought he had proven the continuum hypothesis. (Decades later, it was proved undecidable by Kurt Gödel and Paul Cohen.)
Sometimes mathematicians try to give a complete classification of an object of interest. It's a mistake to claim a complete classification while leaving out several cases. That's what
happened, first to Descartes, then to Newton, in their attempts to classify cubic curves (Boyer). [cf. this annotation by Peter Shor.]
Is a gap in a proof a mistake? Newton found the speed of a falling stone by dividing 0/0. Berkeley called him to account for bad algebra, but admitted Newton had the right answer...
Mistake or not?
up vote 6 down vote
...
"The mistakes of a great mathematician are worth more than the correctness of a mediocrity." I've heard those words more than once. Explicating this thought would tell something about
the nature of mathematics. For most academic philosopher of mathematics, this remark has nothing to do with mathematics or the philosophy of mathematics. Mathematics for them is
indubitable—rigorous deductions from premises. If you made a mistake, your deduction wasn't rigorous, By definition, then, it wasn't mathematics!
So the brilliant, fruitful mistakes of Newton, Euler, and Riemann, weren't mathematics, and needn't be considered by the philosopher of mathematics.
Riemann's incorrect statement of Dirichlet's principle was corrected, implemented, and flowered into the calculus of variations. On the other hand, thousands of correct theorems are
published every week. Most lead nowhere.
A famous oversight of Euclid and his students (don't call it a mistake) was neglecting the relation of "between-ness" of points on a line. This relation was used implicitly by Euclid in
300 B.C. It was recognized explicitly by Moritz Pasch over 2,000 years later, in 1882. For two millennia, mathematicians and philosophers accepted reasoning that they later rejected.
Can we be sure that we, unlike our predecessors, are not overlooking big gaps? We can't. Our mathematics can't be certain.
The reference to the said article by Philip J. Davis is:
Fidelity in mathematical discourse: Is one and one really two? Amer. Math. Monthly 79 (1972), 252–263.
From that article, I find particularly amusing the following couple of paragraphs from page 262:
There is a book entitled Erreurs de Mathématiciens, published by Maurice Lecat in 1935 in Brussels. This book contains more than 130 pages of errors committed by mathematicians of the first and second rank from antiquity to about 1900. There are parallel columns listing the mathematician, the place where his error occurs, the man who discovers the error, and the place where the error is discussed. For example, J. J. Sylvester committed an error in "On the Relation between the Minor Determinant of Linearly Equivalent Quadratic Factors", Philos. Mag., (1851) pp. 295-305. This error was corrected by H. F. Baker in the Collected Papers of Sylvester, Vol. I, pp. 647-650.
A mathematical error of international significance may occur every twenty years or so. By this I mean the conjunction of a mathematician of great reputation and a problem of great notoriety. Such a conjunction occurred around 1945 when H. Rademacher thought he had solved the Riemann Hypothesis. There was a report in Time magazine.
I don't know if this is really a mistake: Fermat's "missing proof" for Fermat's last theorem.
up vote 5 down vote
Petrovsky and Landis's solution to the second part of Hilbert's 16th problem. They "proved" the existence of a bound for the number of limit cycles of planar polynomial vector fields of fixed degree. Ilyashenko pointed out the mistake.
up vote 5 down vote
The problem remains wide open, but the basic idea of Petrovsky-Landis (complexify) led to the study of holomorphic foliations.
Euler conjectured that there were no pairs of orthogonal Latin squares for orders $n \equiv 2 \pmod 4$. Nearly two hundred years later, this was proved false for every $n \equiv 2 \pmod 4$ except $2$ and $6$. Here's the link to Euler's paper. Regardless, Euler's work certainly helped spur research into Latin squares.
up vote 5 down vote
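For concreteness, the orthogonality condition can be checked mechanically; here is my own sketch for the easy order-3 case (Euler's conjecture concerned orders $n \equiv 2 \pmod 4$, and the famous counterexamples have order 10, too big to inline):

```python
# Two Latin squares are orthogonal if superimposing them yields every
# ordered pair exactly once (a Graeco-Latin square).
A = [[0, 1, 2],
     [1, 2, 0],
     [2, 0, 1]]
B = [[0, 1, 2],
     [2, 0, 1],
     [1, 2, 0]]

for M in (A, B):   # sanity check: each is a Latin square
    assert all(sorted(row) == [0, 1, 2] for row in M)
    assert all(sorted(col) == [0, 1, 2] for col in zip(*M))

pairs = {(A[i][j], B[i][j]) for i in range(3) for j in range(3)}
assert len(pairs) == 9   # all 9 pairs distinct -> A and B are orthogonal
```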
Then there's always the Mars Climate Orbiter newtons-vs-pounds of thrust embarrassment.
up vote 4 down vote
Karl Pearson's contributions in the development of statistics are so ubiquitous that most users take his assumptions for granted. One key contribution and mistake of his was to claim that all distributions are parametric. Such models are still predominantly used in social and behavioral sciences, but his insistence led to a lot of interesting and very useful developments in mathematical statistics and its applications by people who published refutations of his work (like R. A. Fisher).
up vote 3 down vote
As a non-math mistake, Karl Pearson avidly advocated eugenics towards racial purity. Big mistake.
If Hilbert's program was a "mistake", then surely so was Russell-Whitehead's Principia Mathematica.
up vote 2 down vote
William Shanks (1812-1882), who calculated pi to the 707th place, by hand, but it was only correct for the first 527 places.
up vote 2 down vote
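Shanks worked from Machin's formula, $\pi/4 = 4\arctan(1/5) - \arctan(1/239)$; a sketch of the same computation in exact integer arithmetic (my own code, with an arbitrarily chosen guard-digit count) is:

```python
# Machin's formula in scaled integers: no floating point, so no rounding
# surprises beyond the final truncation.
def scaled_arctan_inv(x, scale):
    # floor(arctan(1/x) * scale), via the alternating Taylor series
    term = scale // x
    total, n, sign = term, 1, 1
    while term:
        term //= x * x
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total

def pi_digits(digits, guard=10):
    scale = 10 ** (digits + guard)
    scaled_pi = 4 * (4 * scaled_arctan_inv(5, scale) - scaled_arctan_inv(239, scale))
    return scaled_pi // 10 ** guard   # the integer floor(pi * 10**digits)

print(pi_digits(30))   # 3141592653589793238462643383279
```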
For surfaces of constant mean curvature, it is alleged that Hopf thought that all compact CMC surfaces in $\mathbb{R}^3$ were round spheres. CMC surfaces are what you get if you have a soap film bounding a fixed volume, so after a childhood full of blowing bubbles this is a pretty reasonable thing to think. And it even happens to be mostly true: Hopf proved that immersed CMC spheres are round, and Alexandrov proved with a nice reflection argument that embedded CMC surfaces of any genus must actually be round spheres.
up vote 2 down vote
But a bit later, Wente discovered a collection of CMC tori. Ivan Sterling has some nice pictures of these on his website, as does MSRI. There are many very pretty connections between these surfaces and algebraic geometry, so to me they sort of mark the start of the modern "integrable systems" era of CMC research.
I should probably add that nobody actually seems sure if Hopf believed that compact CMC surfaces are spheres, but it makes a good creation story for the subfield!
Gainesville, GA Trigonometry Tutor
Find a Gainesville, GA Trigonometry Tutor
Hello! My name is Hannah. I received my bachelor's and master's degrees at Piedmont College in Early Childhood Education. I am currently at Walden University working on my doctorate degree in Teacher
26 Subjects: including trigonometry, English, reading, geometry
...I can teach a variety of subjects, including math, science, ACT/SAT prep, and history, from elementary school through high school. I am a mentor at my high school, am involved in many honor societies, and have volunteered to teach at a homework club in an elementary school. If my students do not understand the way I am teaching, I will adjust my approach to be more suitable for my
14 Subjects: including trigonometry, chemistry, biology, algebra 2
...I have been doing private math tutoring since I was a sophomore in high school. I believe in guiding students to the answers through prompt questions. This makes sure that when the student
leaves he or she is equipped to answer the problems on their own for tests and quizzes.
9 Subjects: including trigonometry, geometry, algebra 1, algebra 2
...During high school, I was affiliated with the mentoring program and had the privilege of helping a 3rd grader at Sharon Elementary master his objectives in math. Also, through Habitat for
Humanity, I conducted educational activities with the fourth grade student body at Sharon Elementary School....
14 Subjects: including trigonometry, Spanish, physics, algebra 1
...I lived in Brazil for 30 years and ran a school teaching English to executives at multinational companies. We developed an easy system to show how English works. English is very different from
Spanish and Portuguese.
32 Subjects: including trigonometry, reading, calculus, physics
Related Gainesville, GA Tutors
Gainesville, GA Accounting Tutors
Gainesville, GA ACT Tutors
Gainesville, GA Algebra Tutors
Gainesville, GA Algebra 2 Tutors
Gainesville, GA Calculus Tutors
Gainesville, GA Geometry Tutors
Gainesville, GA Math Tutors
Gainesville, GA Prealgebra Tutors
Gainesville, GA Precalculus Tutors
Gainesville, GA SAT Tutors
Gainesville, GA SAT Math Tutors
Gainesville, GA Science Tutors
Gainesville, GA Statistics Tutors
Gainesville, GA Trigonometry Tutors
Nearby Cities With trigonometry Tutor
Alpharetta trigonometry Tutors
Athens, GA trigonometry Tutors
Buford, GA trigonometry Tutors
Duluth, GA trigonometry Tutors
Dunwoody, GA trigonometry Tutors
Johns Creek, GA trigonometry Tutors
Lawrenceville, GA trigonometry Tutors
Oakwood, GA trigonometry Tutors
Roswell, GA trigonometry Tutors
Sandy Springs, GA trigonometry Tutors
Smyrna, GA trigonometry Tutors
Snellville trigonometry Tutors
Suwanee trigonometry Tutors
Westside, GA trigonometry Tutors
Woodstock, GA trigonometry Tutors
|
{"url":"http://www.purplemath.com/Gainesville_GA_Trigonometry_tutors.php","timestamp":"2014-04-19T05:05:39Z","content_type":null,"content_length":"24376","record_id":"<urn:uuid:87f38263-be50-4f67-876b-dff80663a0bf>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Centripetal acceleration of Earth around Sun
The answer looks right. An alternative way of solving this question (to check your answer) would be to just ask what the centripetal acceleration of the Earth around the Sun is, given the Sun's
gravitational force at our distance.
G = gravitational constant = 6.67E-11 m^3 kg^-1 s^-2
m1 = mass of the Sun ≈ 1.99E30 kg
r = distance to the Sun = 1.5E11 m
i.e. acceleration = G·m1/r^2 = (6.67E-11 m^3 kg^-1 s^-2)(1.99E30 kg) / [(1.5E11 m) × (1.5E11 m)]
Answer = 5.8987E-03 m s^-2
My mass and distance values were approximations, but the answer is very close indeed.
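The same check in a few lines of Python, using standard handbook values for G, the solar mass, and the Earth-Sun distance:

```python
# Centripetal acceleration of Earth = Sun's gravitational field at 1 AU.
G = 6.674e-11      # m^3 kg^-1 s^-2, gravitational constant
M_sun = 1.989e30   # kg, mass of the Sun
r = 1.496e11       # m, mean Earth-Sun distance (1 AU)

a = G * M_sun / r**2
print(a)           # ~5.93e-3 m/s^2
```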
|
{"url":"http://www.physicsforums.com/showthread.php?t=212860","timestamp":"2014-04-17T07:30:56Z","content_type":null,"content_length":"29073","record_id":"<urn:uuid:17cf01f4-84d8-4c83-9b59-728e2a46eb2a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Brazilian Journal of Chemical Engineering
Print version ISSN 0104-6632
Braz. J. Chem. Eng. vol.17 no.4-7 São Paulo Dec. 2000
MATHEMATICAL MODELING OF DISPERSION POLYMERIZATIONS STUDY OF THE STYRENE POLYMERIZATION IN ETHANOL
P.H.H.Araújo^1 and J.C.Pinto^2
Programa de Engenharia Química / COPPE, Universidade Federal do Rio de Janeiro,
Cidade Universitária, CP: 68502, 21945-970, Fax: (0xx21) 590-7135,
Phone: (0xx21) 590-2241, Rio de Janeiro - RJ, Brazil
Email: pinto@peq.coppe.ufrj.br
(Received: October 19, 1999 ; Accepted: April 4, 2000)
Abstract - A mathematical model for prediction of monomer conversion, of particle number and of the evolution of the particle size distribution (PSD) in dispersion polymerization is developed.
Despite being completed very early during the polymerization process (monomer conversion <1%), nucleation of new particles is the most important factor affecting the PSD. In order to describe the
particle nucleation phenomena, the mechanism of homogeneous coagulative nucleation is considered. According to this mechanism, polymer chain aggregates can either coagulate and grow, to give
birth to new polymer particles (particle nucleation), or be captured by existing polymer particles. Two sets of population balance equations are used: one for the aggregates, and a second one for
the stable polymer particles. It is shown that the model is able to describe the dispersion polymerization of styrene in ethanol and the formation of micron-size monodisperse polymer particles.
Keywords: Dispersion polymerization, particle size distribution, population balance.
Dispersion polymerization in polar organic media is a unique way to produce micron-size monodisperse polymer particles in a single process step. Monodisperse polymer particles are required for a
number of practical applications, including the production of toners, preparation of instrument calibration standards, development of column packing materials for chromatography, and performing of
biomedical and biochemical analysis (Sudol, 1997). Batch dispersion polymerization is usually started with the preparation of a homogeneous mixture containing monomer, organic solvent, soluble
polymeric stabilizer, and initiator. The solvent selection depends on its miscibility with the constituents of the reaction environment: monomer, stabilizer and initiator are expected to dissolve
completely in the solvent, while polymer is expected to precipitate during polymerization. Polymerization is initiated in solution by an initiator that is soluble in both the solvent and monomer.
After precipitation, polymer particles are nucleated and form a distinct phase, which must be stabilized by the soluble polymeric stabilizer to avoid complete coagulation of the polymer material. The
formation of new particles can be understood in terms of the homogeneous coagulative nucleation, where polymer chain aggregates formed during homogeneous nucleation either coagulate or are captured
by existing polymer particles. New polymer particles are formed through coagulation and growth of aggregates.
The production of micron-size monodisperse polymer particles requires the tight control of the nucleation conditions. The nucleation period must be short and neither nucleation nor coagulation must
occur after a specified time window. The nucleation of new polymer particles must be completed for monomer conversion levels below 1%. In spite of that, particle nucleation is the most important
factor affecting the PSD. Different models have been presented in the open literature to describe the kinetics of dispersion polymerization (Ahmed and Poehlein, 1997; Sáenz and Asua, 1999), but none
of them analyze the nucleation step. Therefore, just particle growth is taken into consideration to allow the calculation of the PSD evolution, which means that particle nucleation rates are assumed
to be known. Using a geometrical progression of population bins, Paine (1990) presented a simple mechanistic model to predict particle size based on the coagulative nucleation mechanism. According to
Kawaguchi et al. (1995), though, experimental results cannot yet be explained quantitatively in terms of Paine's model. In spite of that, Paine's model may be effective for some practical
applications, given its underlying principles and deeper qualitative understanding of the particle formation process.
In the present work, a mathematical model is developed for prediction of conversion, of particle number and of the evolution of the particle size distribution (PSD) in dispersion polymerizations. The
model is based on the homogeneous coagulative nucleation and comprises two sets of population balance equations: one for polymer chain aggregates and a second one for the stable polymer particles.
Simulations are carried out for the styrene dispersion polymerization in ethanol.
MATHEMATICAL MODEL
The model is implemented for the batch dispersion polymerization of styrene in ethanol, using PVPK-30 (polyvinylpyrrolidone) as steric stabilizer, and AIBN (2,2-azobis isobutyronitrile) as initiator.
Initiator concentration is equal to 1.0 wt % with respect to monomer. Initial monomer concentration is equal to 12.6 wt %. Polymerization temperature is equal to 70 °C. This system is commonly used to
produce monodisperse polymer particles (Sáenz and Asua, 1995).
According to Sáenz and Asua (1998), experimental results suggest that radicals are produced in both the polymer and the continuous phases through the usual thermal decomposition of the initiator.
Radicals dissolved in the continuous phase grow through the incorporation of dissolved monomer molecules and form oligoradicals. When the size of oligoradicals reaches a limiting value, oligoradicals
precipitate and form a polymer chain aggregate. The fraction of radicals that precipitate is limited by the rate of termination with other radicals, by the entry rate into polymer particles and
aggregates, and by chain transfer reactions. Aggregates can coagulate among themselves or be captured by existing polymer particles. New polymer particles are formed through coagulation and growth of
aggregates. Inside the polymer particles, reaction is initiated by radicals formed through thermal decomposition of the initiator dissolved in the polymer mass and by radicals that are captured from
the continuous phase. The most important assumptions used to write down the model equations are:
i) Monomer and initiator concentrations in the different phases are in thermodynamic equilibrium;
ii) Polymerization occurs in both continuous phase and polymer particles;
iii) The quasi-steady state assumption is valid for radicals;
iv) Radical desorption from polymer particles is negligible;
v) Volume contraction is negligible.
The first assumption is used because mass transfer rates are much higher than polymerization reaction rates. The second assumption is based on experimental data, while the third one is a rather
standard assumption used to describe free-radical polymerizations. The fourth assumption is due to the quite small surface area / volume ratio of polymer particles produced through dispersion
polymerizations, when compared to emulsion polymerizations. The fifth assumption is a fair approximation of the actual experimental conditions, as the initial monomer holdup fraction is small and
equal to 12.6 wt %.
Mass Balances
The mass balance for the initiator is:

d[I]/dt = - k[d] [I]

where [I] is the mass concentration of initiator in the reactor vessel and k[d] is the rate constant for initiator decomposition.
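Assuming the standard first-order decomposition law d[I]/dt = -k[d][I], the balance integrates analytically to [I](t) = [I]0·exp(-k[d]·t). A quick numerical sanity check (the k[d] value below is an illustrative AIBN-like figure, not a value taken from the paper):

```python
import math

# First-order initiator decay: d[I]/dt = -kd * [I].
kd = 4.0e-5            # 1/s, illustrative AIBN-like decomposition constant
I0, t_end, dt = 1.0, 3600.0, 1.0

# Forward-Euler integration over one hour
I = I0
for _ in range(int(t_end / dt)):
    I -= kd * I * dt

# Analytic solution for comparison
exact = I0 * math.exp(-kd * t_end)
print(I, exact)        # both ~0.866
```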
The mass balance for the monomer may be written as:
where V[R] is the reactor volume; V^II is the volume occupied by the continuous phase; [M[A]], [M[A]]^II, [M[A]]^III are the mass concentrations of monomer in the reactor, in the continuous phase and
in the polymer phase respectively; ñ is the average number of radicals per particle; N[p] is the total number of polymer particles and aggregates in the reactor; N[A] is Avogadro's number; [R[TOT]] is the concentration of radicals in the continuous phase.
Equations for Thermodynamic Equilibrium
Monomer concentrations in the continuous and polymer phases are calculated with the iterative algorithm proposed originally by Omi et al. (1985). Equilibrium equations are written in terms of the
partition coefficient:
where V^III is the volume of the polymer phase.
For solvent, equations become:
where [S], [S]^II and [S]^III are the mass concentrations of solvent (ethanol) in the reactor, in the continuous phase and in the polymer phase respectively;
where [A]^II is the molar monomer concentration in the continuous phase. The volume of the continuous phase (monomer + solvent) and of the polymer phase (monomer + solvent + polymer) are computed
through an iterative procedure using the following recursive equations:
where [P]^III is the mass concentration of polymer in the particle phase; ρ[A], ρ[S] and ρ[P] are the densities of monomer, solvent and polymer respectively.
In order to calculate the monomer concentration, Equations (3-10) must be solved by the following iterative procedure:
1. Assume initial values for V^II and V^III;
2. [M[A]]^III and [S]^III are calculated with Equations (5) and (7);
3. [M[A]]^II and [S]^II are calculated with Equations (3) and (6);
4. V^II and V^III are calculated with Equations (9) and (10);
5. Return to 2 until convergence is reached.
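The flavor of this fixed-point procedure can be sketched as follows. Since Equations (3)-(10) are not reproduced in this text, the partition relations, partition coefficients K_M and K_S, densities, and feed masses below are all illustrative assumptions, not the paper's actual equations or values:

```python
# Illustrative fixed-point phase-split solver in the spirit of steps 1-5.
# K_M, K_S are assumed mass-based partition coefficients [X]^II / [X]^III;
# all numbers are placeholders, not taken from the paper.

def phase_split(M_tot, S_tot, P_tot, K_M=0.5, K_S=5.0,
                rho_M=0.90, rho_S=0.79, rho_P=1.05, tol=1e-10):
    V2, V3 = 100.0, 10.0                  # initial guesses, cm^3 (step 1)
    for _ in range(500):
        # Steps 2-3: concentrations from overall mass balances + partitioning
        cM3 = M_tot / (K_M * V2 + V3)     # [M_A]^III, g/cm^3
        cS3 = S_tot / (K_S * V2 + V3)     # [S]^III
        cM2, cS2 = K_M * cM3, K_S * cS3   # [M_A]^II, [S]^II
        # Step 4: recompute phase volumes from additive specific volumes
        V3_new = (cM3 * V3) / rho_M + (cS3 * V3) / rho_S + P_tot / rho_P
        V2_new = (cM2 * V2) / rho_M + (cS2 * V2) / rho_S
        # Step 5: repeat until converged
        if abs(V2_new - V2) < tol and abs(V3_new - V3) < tol:
            return V2_new, V3_new, cM2, cM3
        V2, V3 = V2_new, V3_new
    raise RuntimeError("phase split did not converge")

# 100 g basis: 12.6 g monomer, 5 g polymer formed, rest solvent (hypothetical)
V2, V3, cM2, cM3 = phase_split(12.6, 82.4, 5.0)
```

At convergence the monomer distributes consistently over the two phases, so total monomer mass is conserved by construction.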
The equilibrium equation for the initiator can be written in terms of a partition coefficient, as written for monomer and solvent. However, as this value is not available in the literature, it is
assumed here that all initiator is dissolved in the continuous phase. Therefore, polymerization inside the polymer particles is assumed to be caused mainly by capture of growing radicals.
Radical Balance in the Continuous Phase
The quasi-steady state assumption is applied for radicals in the continuous phase. The concentration of all radicals in the continuous phase [R[TOT]] is given by:
where [R[h]] is an oligoradical with h mers.
For radicals containing a single mer, it is possible to write:
where [I]^II is the mass concentration of initiator in the continuous phase; PM[I] is the molecular weight of initiator; k[e] is the entry rate coefficient. In Equation (12), the numerator accounts
for the formation of radicals by initiator decomposition. The first and second terms in the denominator of Equation (12) account for the loss of R[1] by propagation and termination, respectively. The
last term in the denominator of Equation (12) accounts for the loss by entry into polymer particles and aggregates. Entry is assumed to occur through diffusion, so that the entry rate coefficient may
be given by:
where R[pinch] is the radius of a particle swollen by monomer; D[c] is the diffusion coefficient of monomer in the continuous phase and f[i] is a factor used to take into account the fact that the
diffusion coefficient of an oligoradical with i mers is lower than the diffusion coefficient of the monomer. Those two variables are assumed to be D[c] = 1.5×10^-5 cm^2/s and f[i] = 4.0×10^-2.
For radicals with 2 or more mers:
The concentration of all radicals in the continuous phase [R[TOT]] is given by:
Equations (12-16) are solved iteratively, to allow the computation of [R[1]] and [R[TOT]].
Average Radical Number per Volume of Polymer Particle
Dispersion polymerization is used to produce micron size polymer particles. These particles are very large (when compared to emulsions, for instance) and contain a high number of radicals per
particle (ñ >> 0.5). Therefore, the "pseudo-bulk" kinetics may be assumed. However, according to Sáenz and Asua (1999), the high internal viscosity of the polymer particles promotes the cage effect.
This means that radicals generated within the polymer particles are likely to terminate before any significant growth can be attained, as diffusion of radicals through the polymer mass is seriously
impaired. This also means that radicals present within the polymer particles come mainly from the continuous phase. As the entering chain is relatively long, it may be difficult for radicals to
diffuse towards the core of the particle. This may cause the development of radical concentration distributions along the particle radius, as believed to occur in conventional emulsion polymerization
by anchoring of the hydrophilic part of the entering radical to the particle surface. Under such circumstances, the radical number (ñ) per particle diameter (d[p] in cm) can be given by:
where K is assumed to be a constant equal to 3.0×10^11 cm^-2.
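The closed form of Equation (17) is not reproduced in this text, but the cm^-2 units of K suggest a surface-area-type dependence, ñ = K·d[p]^2, which would be consistent with the radicals anchoring near the particle surface. Under that assumed form, an order-of-magnitude check:

```python
# Assumed form n_bar = K * d_p**2, inferred from the cm^-2 units of K;
# the paper's exact Eq. (17) is not reproduced here.
K = 3.0e11        # cm^-2, constant quoted in the text
d_p = 2.6e-4      # cm, i.e. a 2.6 um final particle diameter

n_bar = K * d_p**2
print(n_bar)      # ~2.0e4 radicals per particle, far above 0.5
```

A value of order 10^4 radicals per particle is indeed well into the pseudo-bulk regime (ñ >> 0.5).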
Population Balance of Aggregates
Polymer chain aggregates are nucleated by precipitation of oligoradicals of length equal to or longer than a critical length j[crit]. Aggregates can coagulate among themselves or be captured by
existing polymer particles. The population balance for polymer chain aggregates can be written as:
where the first and second terms of the right hand side of Equation (18) account for aggregates of mass l that disappear after coagulation with aggregates and with stable polymer particles
respectively; while the fifth term of the right hand side of Equation (18) accounts for aggregates of mass l that are formed after coagulation of two aggregates.
The lower and upper limits for aggregate polymer mass (l) are amn and bmx respectively. The lower and upper limits for polymer mass of stable particles (m) are bmn and cmx respectively. N[Tp] is the
total number of aggregates.
The rate of growth of polymer chain aggregates due to propagation is:
where h(l,t) is the density of the aggregate size distribution, given by:
where n(l,t) is the number of aggregates of mass l at time t.
Total Number of Aggregates
If Equation (18) is integrated to include all particle aggregates of all sizes, Equation (22) can be obtained for the total number of aggregates:
where the first and second terms of the right hand side of Equation (22) account for aggregates that disappear after coagulation with other aggregates and stable particles respectively; the third,
for generation of aggregates due to homogeneous nucleation; the fourth and fifth, for aggregates that disappear after the formation of new stable polymer particles through coagulation and propagation.
Population Balance of Stable Polymer Particles
Polymer particles are formed through coagulation and growth of aggregates with mass bmx. Stable polymer particles also grow due to coagulation with aggregates and propagation of oligoradicals inside
the particle. The population balance for the stable polymer particles is
obs.: a - if m = bmn the integral equals zero; if amn + bmn < m < 2·bmn, the integration limits are amn and m - bmn.
where the third term of the right hand side of Equation (23) accounts for growth through coagulation with aggregates of a stable polymer particle of mass m.
The nucleation of new stable polymer particles is caused by coagulation of the aggregates and by propagation of an aggregate of mass bmx as:
obs.: b - the integral is valid only for particle sizes between bmn and 2·bmn.
The growth rate due to propagation is given as:
Finally, defining the density distribution of stable polymer particles, f(m,t):
where n(m,t) is the number of polymer particles of mass m at time t.
Total Number of Polymer Particles
As performed previously, Equation (23) can be integrated over the size domain to allow the computation of the total number of stable polymer particles as:
where the first and second terms of the right hand side of Equation (27) account for the new stable polymer particles that are formed after coagulation of aggregates and propagation of an aggregate
of mass bmx respectively.
Model Parameters
In order to calculate the coagulation rate coefficients, one may use an extension of the standard DLVO model developed by Gilbert (1995). According to the DLVO Theory, the movement of small particles
suspended in a fluid may be described by a potential field that is composed of the electrical interactions among the charges distributed over the particle surfaces and of the attractive van der Waals
particle interactions. The rate of coagulation may be described as the rate of diffusion across the maximum in this potential field. A possible disadvantage for using this approach is the relatively
large number of parameters required by the DLVO model. From a practical point of view, most of them cannot be estimated based solely on the measurement of kinetic and PSD data.
The parameters used for model computations are listed in Table 1. Kinetic constants for propagation and termination are assumed to be the same in both phases. Based on the DLVO Theory, the difference
between the coagulation rate constants for aggregates (K[cP]) and for stable particles (K[cE]) was estimated, where r[i] and r[j] are the radii of the swollen particles subject to coagulation. As the diameters of two aggregates that coagulate are much more
uniform than the diameters of a large stable particle and a small aggregate, the coagulation rate constant for aggregate coagulation is also much lower. The size range of aggregates (amn, bmx) was
defined in order to obtain spherical particles with diameter between 4 nm (j[crit] meric units) and 80 nm. The size range of stable polymer particles (bmn, cmx) was defined in order to obtain
spherical particles with diameter between 80 nm and 5 µm.
A finite difference discretization scheme was used to allow the numerical solution of the population balance equations, as described by Araújo (1999). The resulting set of algebraic-differential
equations was solved with the help of the integrator DASSL (Petzold, 1982).
MODEL RESULTS
One of the objectives of dispersion polymerization of styrene in ethanol is the preparation of monodisperse polymer particles with diameters in the range of 1-10 µm for applications that require high
quality latex. The uniformity of a particle size distribution may be characterized through the polydispersity index (PDI). A particle population may be regarded as monodisperse when the PDI is lower
than 1.02 (Sáenz and Asua, 1995). The PDI may be calculated as follows:
where d[n] and d[w] are number and weight average diameters; d[p](x) is the diameter of a particle with mass x; and n(x) is the number of particles with mass x.
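Under the common conventions d[n] = Σ n_i·d_i / Σ n_i and d[w] = Σ n_i·d_i^4 / Σ n_i·d_i^3 (assumed here, since the paper's displayed formulas are not reproduced in this text), the PDI = d[w]/d[n] of a discrete PSD can be computed as:

```python
# PDI of a discrete particle size distribution; the averaging formulas
# are standard latex-PSD conventions, assumed to match the paper's usage.
def pdi(diameters, counts):
    dn = sum(n * d for d, n in zip(diameters, counts)) / sum(counts)
    dw = (sum(n * d**4 for d, n in zip(diameters, counts))
          / sum(n * d**3 for d, n in zip(diameters, counts)))
    return dw / dn

# A narrow hypothetical distribution centered at 2.6 um (diameters in um):
d = [2.4, 2.5, 2.6, 2.7, 2.8]
n = [5, 20, 50, 20, 5]
print(pdi(d, n))   # ~1.004, below the 1.02 monodispersity threshold
```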
In Figure 1.a it is possible to observe the evolution of the average particle diameter and of the PDI during polymerization. At the beginning of reaction, the PDI is large, as the particles are small
and the relative difference between the largest and smallest particles is also high. The particles grow at similar rates, so that the absolute difference between the sizes of these particles does not
change significantly. Therefore, the relative difference between particle sizes and the PDI decrease. The final PDI of 1.014 indicates that the final distribution is monodisperse. The final average
particle diameter (dp) obtained through simulation is 2.6 µm, which is in accordance with experimental data (Run A4) obtained by Sáenz and Asua (1995) at similar conditions. At 88% monomer conversion,
Run A4 led to a PDI of 1.02 (monodisperse) and to a d[p] of 2.4 µm. At the same monomer conversion level, the model led to a PDI of 1.018 and to a d[p] of 2.5 µm.
Nucleation occurs during a very short time period and a large number of aggregates are formed. These aggregates grow through propagation and coagulation up to the critical size, when aggregates become
stable polymer particles. In Figure 1.b it is possible to observe that the total particle number (for both aggregates and stable polymer particles) decreases slowly down to an almost constant value
during the polymerization, after a short period of fast increase of the number of particles. This occurs because a large number of stable polymer particles are formed since the very beginning of the
polymerization, which then absorb through coagulation most of the aggregates formed and avoid the nucleation of new stable polymer particles. After that, the model shows that the number of aggregates
decreases. In spite of that, the rate of homogeneous nucleation increases, because the total particle area decreases with the aggregate coagulation, which increases the probability for an
oligoradical to reach the critical value (j[crit]) and nucleate new aggregates before being absorbed by the particles. The increase of the nucleation rate does not affect the total particle number
because the new aggregates coagulate almost immediately after nucleation. The reaction is completed after 1400 minutes.
Figure 2 presents the evolution of the particle size distribution (PSD) of stable polymer particles as a function of conversion. The graph is plotted in number fraction and the distributions are
normalized. Stable polymer particles start to be formed after 8.2 minutes of reaction through coagulation of aggregates. Stable particles grow mainly through coagulation with aggregates at the
beginning, and afterwards growth is caused mostly by propagation. Particle growth leads to shifting of the PSD towards higher size values.
Figure 3 presents experimental PSD data (Run A4) at 88 % of conversion obtained by Sáenz and Asua (1995) and the PSD obtained with the model at 88.9 % of conversion. It is possible to observe the
very good agreement between experimental data and the model.
Although this work presents a useful model to allow the understanding of the behavior of PSDs during dispersion polymerizations, which is able to describe the production of monodisperse polymer
particles through homogeneous coagulative nucleation, it is certain that more experimental data are needed for the calculation of model parameters. For instance, the detailed development of the
coagulation model is a key factor for proper description of the final shape of the PSD obtained.
A mathematical model that predicts conversion, particle number and particle size distribution in dispersion polymerization was developed. The model is based on first principles and presents the same
behavior observed experimentally for the dispersion polymerization of styrene in ethanol used for the production of monodisperse particles (Sáenz and Asua, 1995). This work illustrates the importance
of using mathematical modeling to quantitatively analyze kinetic data as a unique way to elucidate the mechanisms involved in complex processes, although more experimental data are still necessary in
order to calculate the parameters for the coagulation model.
The authors thank CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) and CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) for providing scholarships.
amn Lower mass limit for aggregates
[A]^II Molar concentration of monomer in phase II gmol/cm^3
[A]^III Molar concentration of monomer in phase III gmol/cm^3
[Ap] Interfacial area of a polymer particle cm^2
bmx Upper mass limit for aggregates
bmn Lower mass limit for stable polymer particles
cmx Upper mass limit for stable polymer particles
[P]^III Mass concentration of polymer in phase III g/cm^3
Dp Particle diameter (unswollen) cm
D[w] Diffusion coefficient of a radical in phase II cm^2/s
f(m) Population density of stable particles of mass m
h(l) Population density of aggregates of mass l
[I]^II Mass concentration of initiator in phase II g/cm^3
j[crit] Number of meric units of an oligoradical when it becomes insoluble in continuous phase
Partition coefficient of monomer between phases II and III
Partition coefficient of solvent between phases II and III
K[cE] Coagulation rate constant for stable polymer particles
K[cP] Coagulation rate constant for aggregates
k[e] Entry rate coefficient of radicals into polymer particles cm^3/gmol.s
k[d] Initiator thermal decomposition rate constant 1/s
k[p]^III Propagation rate constant in phase III cm^3/gmol.s
k[p]^II Propagation rate constant in phase II cm^3/gmol.s
k[t]^III Termination rate constant in phase III cm^3/gmol.s
k[t]^II Termination rate constant in phase II cm^3/gmol.s
l Polymer mass of an aggregate
m Polymer mass of a stable polymer particle
[MA] Mass concentration of monomer in the reactor g/cm^3
[MA]^ II Mass concentration of monomer in phase II g/cm^3
[MA]^III Mass concentration of monomer in phase III g/cm^3
ñ Average radical number per particle
N[A] Avogadro Number
Np Total number of polymer particles (aggregates and stable particles) in the reactor
Number of stable particles per cm^3 of latex 1/cm^3
Number of aggregates per cm^3 of latex 1/cm^3
PM[h] Molecular weight of component h g/gmol
Rpinch Radius of a swollen particle cm
[R[h]] Molar concentration of radicals with h meric units in phase II gmol/cm^3
[R[TOT]] Total molar concentration of radicals in phase II gmol/cm^3
t Time s
V^II Volume of phase II cm^3
V^III Volume of phase III cm^3
VR Volume of the reactor cm^3
[S]^II Mass concentration of solvent in phase II g/cm^3
[S]^III Mass concentration of solvent in phase III g/cm^3
A Styrene (monomer)
P Polymer
I Initiator
S Ethanol (solvent for the monomer)
II Phase II (continuous)
III Phase III (polymer)
Greek Letters
Volume fraction of phase II in the reactor
ρ[A] Monomer density g/cm^3
ρ[P] Polymer density g/cm^3
ρ[S] Solvent density g/cm^3
Ahmed, S.F., Poehlein, G.W., "Kinetics of Dispersion Polymerization of Styrene in Ethanol. 1. Model Development", Ind. Eng. Chem. Res., 36, 2597 (1997).
Araújo, P.H.H., "Distribuição de Tamanhos de Partícula em Sistemas Heterogêneos de Polimerização", D.Sc. Thesis, PEQ/COPPE/UFRJ, Rio de Janeiro (1999).
Brandrup, J., Immergut, E.H., "Polymer Handbook", J. Wiley, 3rd ed., New York (1989).
Cutter, L.A., Drexler, T.D., "Simulation of the Kinetics of Styrene Polymerization", in Computer Applications in Polymer Science, 13 (1982).
Gilbert, R.G., "Emulsion Polymerization - A Mechanistic Approach", Academic Press, London (1995).
Hui, A.W., Hamielec, A.E., "Thermal Polymerization of Styrene at High Conversions and Temperatures", J. Appl. Polym. Sci., 16, 749 (1972).
Kawaguchi, S., Winnik, M.A., Ito, K., "Dispersion Copolymerization of n-Butyl Methacrylate with Poly(ethylene oxide) Macromonomers in Methanol-Water. Comparison of Experiment with Theory", Macromolecules, 28, 1159 (1995).
Lenz, R.W., "Organic Chemistry of Synthetic High Polymers", Interscience Publishers, New York (1967).
Omi, S., Kushibiki, K., Negishi, M., Iso, M., "Generalized Computer Modeling of Semi-Batch, n-Component Emulsion Copolymerization Systems and its Application", Zairyo Gijutsu, 3, 426 (1985).
Paine, A.J., "Dispersion Polymerization of Styrene in Polar Solvents. 7. A Simple Mechanistic Model to Predict Particle Size", Macromolecules, 23, 3109 (1990).
Petzold, L.R., "A Description of DASSL: A Differential/Algebraic System Solver", Sandia National Laboratories, Report SAND82-8637 (1982).
Pinto, U.B., "Estudo do Sistema Ternário Poliestireno/Estireno/Etanol Aplicado à Polimerização por Precipitação de Estireno em Etanol", M.Sc. Thesis, PEQ/COPPE/UFRJ, Rio de Janeiro (1990).
Reid, R.C., Prausnitz, J.M., Poling, B.E., "The Properties of Gases and Liquids", McGraw-Hill, New York (1987).
Sáenz, J.M., Asua, J.M., "Dispersion Polymerization in Polar Solvents", J. Polym. Sci. Chem., 33, 1511 (1995).
Sáenz, J.M., Asua, J.M., "Kinetics of the Dispersion Copolymerization of Styrene and Butyl Acrylate", Macromolecules, 31, 5215 (1998).
Sáenz, J.M., Asua, J.M., "Mathematical Modeling of Dispersion Copolymerization", Coll. Surf. Phys. Engng. Asp., 153, 61 (1999).
Stevens, C.J., "Mathematical Modeling of Bulk and Solution Free Radical Polymerization in Tubular Reactors", Ph.D. Thesis, University of Wisconsin, Madison (1988).
Sudol, E.D., in: J.M. Asua (Ed.), "Polymeric Dispersions: Principles and Applications", Kluwer Academic, Dordrecht (1997).
|
{"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0104-66322000000400003&lng=pt&nrm=iso","timestamp":"2014-04-19T22:32:16Z","content_type":null,"content_length":"69613","record_id":"<urn:uuid:b2ce10b2-e62c-45e6-8e79-6f72d0f2e4aa>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
|
3 probably simple questions
October 6th 2009, 08:27 PM #1
Junior Member
Oct 2009
I'm doing a section on arithmetic and ordering in the natural numbers and I've got 3 questions I can't figure out how to get started on.
1. 0 does not equal 1 (duh?)
2. For a, b elements of the natural numbers, if ab = 1 then a = 1 and b = 1.
Can we use a·1 = a somehow?
3. a >= 0
Thanks in advance
October 6th 2009, 09:41 PM #2
How you approach this problem depends completely on what approach was taken in defining the natural numbers.
How were they defined in your source work? From the Peano postulates, as a naturally ordered semigroup, or from the Zermelo-Fraenkel axioms?
I expect you already have that $\mathbb{N}$ is a well-ordered set such that 0 is the "smallest" member, which gives you the answer to 3 straight away.
October 7th 2009, 06:00 PM #3
Junior Member
Oct 2009
First I'll give you what we've got to work with. Our teacher wasn't very specific, but a list of properties is at the top of the page, so I figure that's what we get to use. We have that the natural numbers are closed under multiplication and addition, a = b implies a + c = b + c, a + 1 does not equal 0, and if a is not 0 then a = d + 1; then we also have associativity, commutativity, the distributive law, identities for both addition and multiplication, and additive cancellation.
I was working with someone today and I think we came up with a solution, but I don't know if it is correct.
Assume 0 = 1. Then for any b, 0 + b = 1 + b, and by cancellation 0 = 1, a contradiction.
I don't imagine the solution to this is very hard, but for some reason that doesn't look legal to me.
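For the record, a possible route using only the quoted property list (a sketch; the intended definition of the ordering may differ):

```latex
% Q1: from the axiom a + 1 \neq 0, with a = 0:
1 = 0 + 1 \neq 0.
% Q2: first 0 \cdot b = (0 + 0)b = 0 \cdot b + 0 \cdot b, so 0 \cdot b = 0 by cancellation.
% Hence ab = 1 forces a \neq 0 and b \neq 0, so a = d + 1, b = e + 1, and
(d+1)(e+1) = de + d + e + 1 = 1 = 0 + 1 \;\Rightarrow\; de + d + e = 0.
% A sum x + y = 0 forces y = 0 (else y = c + 1 and x + c + 1 \neq 0),
% so d = e = 0 and a = b = 1.
% Q3: if b \geq a is taken to mean b = a + c for some c, then a = 0 + a gives a \geq 0.
```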
1. Introduction
With the rapid progress of the information age, information itself, as well as physical objects, has become a major source of economic value, and the current popularity of information technology (IT) intensifies its importance. For example, in the manufacturing industries, products are mostly represented in digital form at the design stage, and this digital product information becomes an essential element in fabrication.
In the past, such 3-D model description had been presented with a fairly restrictive format such as 2-D drawings, i.e. blueprints. Today, these 3-D objects are typically described with
computer-aided design (CAD) systems using digital data. Here, the richest shape variety can be described using free-form boundary surfaces that are typically defined by Non-Uniform Rational B-spline
(NURBS) surface patches.
Digital data models describing objects are often accessible to various parties. This occurs, for example, when the detailed design specifications are shown to prospective buyers of a physical
copy of the object. Furthermore, the digital model data could be known to another party, e.g. a subcontractor or by industrial espionage, e.g., when data are communicated via the Internet. It is also
possible that a party could approximate digital model data by reconstructing the geometric model by reverse engineering a single physical prototype. In other instances, there could be a need for
several engineers working together to simultaneously access a design database and to make changes in the database that are instantly accessible by others on the team. At the same time, there is a
need for conveying these data to designers at remote locations while providing a high level of security on proprietary designs. Especially, when proprietary digital content is exposed to the
Internet, it can become an easy target for malicious parties who wish to reproduce unauthorized copies. For these reasons, protection of digital intellectual property has become an active research
topic and many people have begun to research digital watermarking methods.
Several methods have been reported on digital watermarking for 3-D polygonal models that are widely used for virtual reality and computer graphics. Most of the methods are designed for triangle
meshes and embed watermark information by perturbing geometry or changing topological connectivity [13,21,1], or by embedding the watermark in the frequency domain of the 3-D model.
Watermarking on Constructive Solid Geometry (CSG) models was developed by Fornaro and Sanna [4]. An extractable watermark is built using a hash function, and a public key algorithm is used for
encryption and decryption of the watermark. Two places are considered for storing the watermark: solid and comment nodes. To store the watermark in the solid, a new watermark node is created and
linked to the original CSG tree. In the comment nodes, the watermark information can be added without change of the model.
Despite the popularity of NURBS curves and surfaces, watermarking for NURBS representations is relatively new to engineering CAD. Ohbuchi et al. [14] proposed a new data embedding algorithm for
NURBS curves and surfaces, which are reparameterized using rational linear function whose coefficients are manipulated for encoding data. The exact geometric shape is preserved. The watermark
information, however, can be easily destroyed by reparameterization or reapproximation of the surface.
In this project, we developed efficient computational algorithms and criteria for comparing two objects represented in NURBS form or 3-D B-rep solids so that one can determine if one is a
duplicate of the other. Surface intrinsic properties are used to make the algorithms independent of representation methods, scaling, rotation and transformation and use of generic umbilical points
for comparison enhances robustness of the algorithms with respect to small perturbations.
This paper is structured as follows. In section 2, algorithms are introduced with detailed explanation of computational methods. Section 3 demonstrates the algorithms with a few examples and
Section 4 concludes the paper.
2. Overview of Shape Intrinsic Watermarking
Two 3-D objects are provided for comparison. One of them is an original model denoted as A and the other is a suspect denoted as B.
(1) Representation form
a. A and B are solids
i. They are represented via a boundary representation (B-rep) scheme and the surfaces of the two B-rep models are homeomorphic (they have the same genus G or number of handles).
ii. The surfaces which bound the solids are of NURBS form.
b. A and B are surfaces
i. They are represented in NURBS form.
(2) After applying uniform scaling, rotation and translation to one object, the difference of the two objects as point sets is small.
(3) All kinds of geometrical operations including translation, rotation, uniform scaling, perturbation of control points, reparameterization, etc. have been imposed on B. The operations such as
shearing and non-uniform scaling, which degrade the appearance or functionality of the object, however, are not included.
2.1. Preliminary Step
Before checking the similarity of two objects, a sequence of operations needs to be performed. Four operations are proposed in Figure 1, and explained in subsequent sections of this paper. After
this step, the object A and localized (translated, rotated and scaled) solid B^’ are available for input to the comparison algorithms.
Figure 1: The preliminary step
2.1.1. Detection and Classification of Umbilics
An umbilical point is a point on a surface where all the normal curvatures are equal in all directions. All the points in planar and spherical surface regions are umbilics. There are two categories
of isolated umbilics: one is generic and the other is non-generic. Generic umbilics are stable with respect to small perturbations of the function representing the surface, while non-generic umbilics
are unstable [2,18,20,12,15].
Generic umbilics have three different types called star, (le) monstar and lemon based on the pattern of the lines of curvature in the vicinity of the umbilics as shown in Figure 2. They are
stable with respect to small perturbations of the surface. An input NURBS surface is decomposed into rational Bézier patches using the knot insertion technique and each rational Bézier patch is
supplied to set up governing equations. The governing equations for locating umbilics consist of three polynomial equations with two unknowns when input surfaces are in integral/rational Bézier
forms. The system of nonlinear polynomial equations can be solved robustly and accurately by the Interval Projected Polyhedron (IPP) algorithm [19,15]. One line of curvature passes through a lemon type umbilical point, whereas three pass through an umbilical point of monstar or star type. The criterion distinguishing monstar from star is that all three directions of lines of curvature through a monstar umbilic are contained within a right angle, whereas at a star type umbilical point they are not [12]. Given two umbilical points of the same type, e.g. star type, we can compute the distributions of the angles formed by consecutive lines of curvature at each point; using these distributions, two umbilical points of the same type can be distinguished. For more details, see [12,15].
Figure 2: Three generic umbilics adopted from [15]
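As a small illustration of the detection step (a sketch only: the paper solves exact polynomial systems with the IPP algorithm, whereas this merely flags grid candidates on a hypothetical Monge patch z = x² + y²):

```python
import numpy as np

def monge_curvatures(fx, fy, fxx, fxy, fyy):
    # Principal curvatures of a Monge patch z = f(x, y) from its derivatives.
    E, F, G = 1 + fx**2, fx * fy, 1 + fy**2
    W = np.sqrt(1 + fx**2 + fy**2)
    L, M, N = fxx / W, fxy / W, fyy / W
    K = (L * N - M**2) / (E * G - F**2)                      # Gaussian curvature
    H = (E * N - 2 * F * M + G * L) / (2 * (E * G - F**2))   # mean curvature
    disc = np.sqrt(np.maximum(H**2 - K, 0.0))
    return H + disc, H - disc                                # k_max, k_min

# Hypothetical test surface: z = x^2 + y^2 has a single umbilic at its vertex.
xs = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(xs, xs, indexing="ij")
k1, k2 = monge_curvatures(2*X, 2*Y, 2 + 0*X, 0*X, 2 + 0*X)
i, j = np.unravel_index(np.argmin(k1 - k2), X.shape)
print(X[i, j], Y[i, j])  # umbilic candidate found at the origin
```

At an umbilic the two principal curvatures coincide, so the grid point minimizing k1 − k2 is the candidate; here both curvatures equal 2 at the vertex.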
2.1.2. Shape Intrinsic Wireframing
Computation of Orthogonal Net of Lines of Curvature
A curve on a surface whose tangent at each point is in a principal curvature direction is called a line of curvature [12]. At each point there are two principal directions (maximum and minimum) that
are orthogonal so that the corresponding lines of curvature form an orthogonal net of lines. Lines of curvature are intrinsic to the surface and do not depend on coordinate transformations and
parameterization of the surface. The same orthogonal net of lines is always obtained as long as the surface shape remains unchanged.
Lines of curvature on a parametric surface can be computed from a set of differential equations with respect to the arc length. An integration method such as fourth order Runge-Kutta method or
Adam’s method with variable step size may be used to find the solution [3,12].
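A minimal sketch of tracing one line of curvature with classical RK4, on a hypothetical Monge patch z = x² + 2y² (the paper integrates with respect to arc length on the surface; here the stepping is done in the parameter plane for brevity):

```python
import numpy as np

def shape_operator(x, y):
    # Weingarten map S = I^{-1} II in the (x, y) basis, for z = x^2 + 2 y^2.
    fx, fy = 2 * x, 4 * y
    fxx, fxy, fyy = 2.0, 0.0, 4.0
    W = np.sqrt(1 + fx * fx + fy * fy)
    I = np.array([[1 + fx * fx, fx * fy], [fx * fy, 1 + fy * fy]])
    II = np.array([[fxx, fxy], [fxy, fyy]]) / W
    return np.linalg.solve(I, II)

def principal_dir(x, y, prev):
    # One principal family (here the minimum-curvature direction).
    w, V = np.linalg.eig(shape_operator(x, y))
    d = V[:, np.argmin(w)]
    d = d / np.linalg.norm(d)
    return -d if np.dot(d, prev) < 0 else d   # keep a consistent orientation

def trace_line_of_curvature(p0, d0, h=0.01, steps=100):
    # Classical RK4 on dp/ds = principal direction.
    p, d, pts = np.array(p0, float), np.array(d0, float), []
    for _ in range(steps):
        k1 = principal_dir(*p, d)
        k2 = principal_dir(*(p + 0.5 * h * k1), d)
        k3 = principal_dir(*(p + 0.5 * h * k2), d)
        k4 = principal_dir(*(p + h * k3), d)
        d = (k1 + 2 * k2 + 2 * k3 + k4) / 6
        p = p + h * d
        pts.append(p.copy())
    return np.array(pts)

curve = trace_line_of_curvature((0.2, 0.0), (1.0, 0.0))
```

By symmetry the x-axis is itself a line of curvature of this surface, so a trace started on it should stay on it, which gives an easy sanity check.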
Computation of Geodesic Curves
Some areas do not allow us to construct an orthogonal net of lines of curvature such as near boundary and umbilical points. In such areas, another surface intrinsic property, geodesic curves, may be
employed for completion of the wireframe. A geodesic curve is defined as a curve whose geodesic curvature is zero. In general, it arises as the boundary value problem (BVP) so that when two points
are provided, a geodesic curve is calculated which connects them. For a stable solution, the relaxation method is adopted for the computation of the geodesic curve. For details, see [15,10].
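The relaxation idea can be illustrated on the unit sphere, where the geodesic between two points is a great-circle arc (a toy sketch, not the paper's BVP formulation):

```python
import numpy as np

def relax_geodesic(a, b, n=21, iters=2000):
    # Toy relaxation: repeatedly average interior points and project back
    # onto the unit sphere; the polyline settles onto the great-circle arc.
    t = np.linspace(0, 1, n)[:, None]
    path = (1 - t) * a + t * b
    path += 0.3 * np.sin(np.pi * t) * np.array([0.0, 1.0, 0.0])  # off-plane start
    path /= np.linalg.norm(path, axis=1, keepdims=True)
    for _ in range(iters):
        path[1:-1] = 0.5 * (path[:-2] + path[2:])
        path /= np.linalg.norm(path, axis=1, keepdims=True)
    return path

a, b = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
path = relax_geodesic(a, b)
# The true geodesic lies in the y = 0 plane; the relaxed path should too.
```

The initial polyline is deliberately bent out of the geodesic plane; the iteration damps that deviation while the fixed endpoints pin the arc.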
An algorithm for the construction of intrinsic wireframing is shown in Figure 3.
Figure 3: A diagram of the algorithm
Steps 10 and 12 : Input Surfaces and Umbilics
The algorithm needs to be provided with the exact locations of the umbilical points on the input surface, since lines of curvature cannot be uniquely determined at umbilical points and the tracing method of Step 18 must account for them; the umbilics can be computed and classified using the methods of Section 2.1.1.
Step 14 : Starting Points
Either a star type umbilical point or a non-umbilical point can be chosen as a starting point for wireframing, because the maximum and minimum lines of curvature radiate from such a point in an alternating pattern, so a simple algorithm is sufficient and the resulting mesh is better proportioned. The other two types of umbilics, i.e. lemon and monstar, are not appropriate for this
purpose. At a star type umbilical point, there are three lines of curvature each of which changes its attribute from the maximum to the minimum or vice versa. Therefore, six lines of curvature are
considered to radiate from the umbilical point so that we can use up to six initial directions for tracing lines of curvature. When there is no umbilical point on the surface, a non-umbilical point
is chosen as a starting point where four initial directions for tracing are obtained.
Step 18 : Wireframing with Lines of Curvature
Surface intrinsic wireframe representation is constructed with lines of curvature which are calculated by using any popular numerical methods such as the fourth order Runge-Kutta method from starting
points determined in Step 14. In most cases, maximum and minimum lines of curvature form quadrilateral meshes. Therefore, the important step in creating a wireframe with lines of curvature is to
locate intersection points between maximum and minimum lines of curvature. These points can be accurately computed using Newton’s method.
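Newton's method for such an intersection can be sketched for two hypothetical parametric curves in the uv plane (standing in for a maximum and a minimum line of curvature):

```python
import numpy as np

# Hypothetical pair of parametric plane curves.
c1 = lambda t: np.array([t, t * t])          # a parabola
d1 = lambda t: np.array([1.0, 2 * t])        # its derivative
c2 = lambda s: np.array([s, 2.0 - s])        # a line
d2 = lambda s: np.array([1.0, -1.0])

def newton_intersect(t, s, tol=1e-12, max_iter=25):
    # 2x2 Newton iteration on F(t, s) = c1(t) - c2(s) = 0.
    for _ in range(max_iter):
        F = c1(t) - c2(s)
        if np.linalg.norm(F) < tol:
            break
        J = np.column_stack([d1(t), -d2(s)])  # Jacobian of F w.r.t. (t, s)
        dt, ds = np.linalg.solve(J, -F)
        t, s = t + dt, s + ds
    return t, s

t, s = newton_intersect(0.5, 0.5)
print(c1(t))  # ≈ (1, 1), the intersection point
```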
Step 20 : Geodesic Wireframing
In a region where the algorithm using lines of curvature fails, i.e. in the neighborhood of an umbilical point (except the umbilical point used as a starting point) and near boundary or in an
umbilical region, a geodesic curve can be used to complete wireframing. Two points are selected and a geodesic line is calculated which connects them. As an initial approximation, a straight line
connecting two boundary points is used, from which an accurate solution is obtained iteratively by using the relaxation method, see [10, 15].
In Figure 4, an example of shape intrinsic wireframing is presented. The construction of the wireframe starts from the star type umbilical point in the center using lines of curvature. Geodesic
curves are used in the regions that the lines of curvature do not cover. Since it only depends on the shape of the input surface, the same wireframe can be obtained irrespective of representation.
Figure 4: An example of shape intrinsic wireframing
2.1.3. Matching
Matching is a process determining a rigid body motion (translation and rotation) which makes two objects match as closely as possible. Three methods for matching are used: the moment method, the KH
method and the umbilical point method. The moment method is a global method and used for solid matching. The other two approaches are to use intrinsic properties on the surface of objects. The KH
method uses curvature information and the umbilical point method is based on generic umbilical points on the surface of objects. These two methods can be applied not only to global matching but also
to partial matching.
Integral Property
Matching via integral properties is used for solids. The integral properties of solids A and B, i.e. centroids (centers of volume) and moments of inertia, are calculated using Gauss's (divergence) theorem, which reduces volume integrals to surface integrals. The inertia tensors of solids A and B are then constructed. The inertia tensor is a 3×3 symmetric matrix whose terms
on the main diagonal I[xx], I[yy] and I[zz] are called the moments of inertia and the remaining terms (I[xy] , I[yx] , I[xz] , I[zx] , I[yz] and I[zy]) are called the products of inertia. Because of
symmetry, it is always true that I[xy] = I[yx], I[xz] = I[zx] and I[yz] = I[zy]. Principal moments of inertia and their directions are obtained by solving an eigenvalue problem. Once the centroids
and principal directions of both solids are calculated, solid A and B are translated and rotated so that their centroids and principal axes of inertia coincide. Solid B is uniformly scaled based on
the volume ratio of the two solids.
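The core computation of the moment method can be sketched with a unit-mass point cloud as a hypothetical stand-in for the surface-integral evaluation:

```python
import numpy as np

def inertia_tensor(pts):
    # Inertia tensor of a unit-mass point cloud about its centroid.
    q = pts - pts.mean(axis=0)
    x, y, z = q[:, 0], q[:, 1], q[:, 2]
    Ixx, Iyy, Izz = np.sum(y*y + z*z), np.sum(x*x + z*z), np.sum(x*x + y*y)
    Ixy, Ixz, Iyz = -np.sum(x*y), -np.sum(x*z), -np.sum(y*z)
    return np.array([[Ixx, Ixy, Ixz], [Ixy, Iyy, Iyz], [Ixz, Iyz, Izz]])

def principal_axes(pts):
    # Principal moments and directions: eigen-decomposition of the
    # symmetric inertia tensor, as in the moment method.
    w, V = np.linalg.eigh(inertia_tensor(pts))
    return w, V

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 3))
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
s = 2.5
B = s * A @ R.T + np.array([1.0, -2.0, 0.5])  # scaled, rotated, translated copy

wA, _ = principal_axes(A)
wB, _ = principal_axes(B)
# Principal moments are rotation/translation invariant and scale as s^2,
# which is what lets the moment method recover the uniform scaling factor.
```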
KH Method
Two surface intrinsic properties, the Gaussian and mean curvatures, are used for localization purposes. Three points are selected on the surface of object B and the three pairs of Gaussian and mean curvatures are calculated there. The corresponding points on object A, i.e. those with the same Gaussian and mean curvatures, are then located by solving a system of equations formulated for object A using the Interval Projected Polyhedron algorithm. Once the correspondence has been established, a rigid body motion is calculated which aligns the two objects as closely as possible. This method works well when no initial correspondence information is available and only partial surfaces are provided. For details, see Ko et al. [7].
Umbilical Point Method
This paper employed ensembles of isolated umbilical points that have stable patterns useful to recognize that a surface patch had been obtained by a small perturbation of another surface patch having
an equivalent collection of finitely many (isolated) stable umbilical points. Our proposal involving use of the "pattern of stable umbilical points" in order to support the recognition of surface
patches may raise some questions concerning its usefulness. The reason for those doubts is that if we consider surfaces that are infinitely often continuously differentiable then the structure of
their respective sets of umbilical points may be quite complicated. Within the family of general infinitely often differentiable surfaces it is possible to construct very small perturbations that
will change surfaces having only finitely many (stable) umbilics into new surfaces having infinitely many isolated stable umbilical points that may occur in quite complex geometric arrangements.
(Here a simple construction of surface with infinitely many isolated stable umbilics may e.g. be achieved by attaching a sequence of infinitely many very small shrinking bumps to a cylindrical
surface patch.) The latter construction and complex arrangements of umbilical points are possible due to the fact that the differentiability class C^∞ still allows e.g. all kinds of infinitely many
arbitrarily small local perhaps oscillatory surface deformations. However, if we restrict our attention to Bézier or B-spline surface patches of limited degree then the number of isolated umbilical
points is limited as well. Here the maximal number of isolated umbilical points existing on the Bézier surface patch can be estimated from above using the degree information of the Bézier surface
patch or the information on the number of control points. A similar estimation is possible for B-spline surface patches as well. Hence working with Bézier surface patches yields a finiteness
condition for the number of umbilics. In case we perturb e.g. the Bézier surface patch within the family of Bézier surface patches of the same degree, then we shall obtain new patches with a finite
maximal number of isolated umbilics as well. The latter maximal number has the same bound as the one being valid for the unperturbed Bézier surface patch. Such a surface perturbation can be realized
by deforming the original surface patch via perturbing all control points of the original Bézier surface patch, see [12]. Such perturbations are very specialized and restrictive in comparison to the
general case where we may consider general surfaces that have regular parametrizations being infinitely often differentiable. However, from a practical point of view the restriction to the family of
Bézier surface patches even with degree restrictions is justified in an engineering context. Hence, here it is practically correct to consider only the restricted metamorphosis possibilities of an
ensemble of finitely many stable umbilics located on the original Bézier surface patch. This definitely excludes inconvenient pathologies that would otherwise be possible. Moreover the preceding
considerations should help to illuminate a posteriori why the diagnostic tool employing the pattern of (finitely many) isolated stable umbilics is relevant in supporting the diagnosis that a Bézier
surface patch is obtained via a fairly small deformation (within the Bézier family) from the given original Bézier surface patch.
When isolated generic umbilical points exist on the surfaces of two objects, they can be used to establish correspondence between the objects. Conceptually, a matching algorithm using generic
isolated umbilical points is straightforward. The basic idea is to find a pair of umbilical points which have the same type. Each type can be distinguished by the methods described in Section 2.1.1.
If we have more than three matching pairs of umbilical points, then we can easily find a rigid body transformation in the least squares sense. When there are less than two pairs, we match the normal
vectors and align the radiating patterns of lines of curvature at the corresponding umbilical points. In most cases, the number of generic isolated umbilical points is small. Therefore, a brute force
search scheme to find matching pairs of umbilical points can be employed without loss of performance. See Ko et al. [8] for details.
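The least-squares rigid body transformation from three or more matched pairs is the standard Kabsch/Procrustes construction; a sketch with hypothetical corresponding umbilic locations:

```python
import numpy as np

def rigid_fit(P, Q):
    # Least-squares rotation R and translation t with R @ P_i + t ≈ Q_i
    # (the Kabsch/Procrustes solution via SVD).
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

# Hypothetical corresponding umbilic locations on the two objects.
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1.0]])
Q = P @ Rz.T + np.array([0.3, -0.2, 1.0])        # rotated, translated copy
R, t = rigid_fit(P, Q)
```

With exact correspondences the recovered rotation and translation reproduce Q from P to machine precision.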
Matching with Scaling Effects
Two methods are proposed to resolve uniform scaling in global and partial matching. One is based on the umbilical point method and the other is to formulate a matching problem as an optimization
problem using the KH method. In case of global matching, a scaling value may be recovered by calculating the ratio of areas between two surfaces or volumes between two objects. However, for partial
matching, comparison of any kinds of quantitative measures does not make sense. Only qualitative feature matching or searching a good match from a set of candidate solutions can be considered. See Ko
et al. [8] for details.
Searching for a pair of corresponding umbilical points can be performed irrespective of scaling effects, since only the qualitative aspects of the umbilical points are used to find the correspondence. Once a pair of umbilical points is found, the normal curvature values at those points can be used to estimate a scaling factor.
Matching with scaling effects is formulated as an optimization problem. The KH method can be treated as an objective function of the scaling factor. Namely, under a certain scaling value, we can
apply the KH method to the objects under consideration. Then the KH method computes a value which represents the sum of the Euclidean distances between two objects. This process manifests itself as
an optimization problem and we can employ any optimization solution method to find a scaling factor as well as a rigid body transformation with which the KH method produces the minimum value.
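This optimization framing can be sketched with a golden-section search over the scaling factor, using the rigid-alignment residual as a KH-style objective (synthetic data; the true scale is 1.7 by construction):

```python
import numpy as np

def residual(s, P, Q):
    # Alignment error of s*P against Q after the best least-squares
    # rotation/translation (Kabsch); used as a 1-D objective in s.
    Ps = s * P
    cP, cQ = Ps.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((Ps - cP).T @ (Q - cQ))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return np.linalg.norm(Ps @ R.T + (cQ - R @ cP) - Q)

def golden_min(f, a, b, tol=1e-8):
    # Golden-section search for a unimodal objective.
    g = (np.sqrt(5) - 1) / 2
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return 0.5 * (a + b)

rng = np.random.default_rng(1)
P = rng.normal(size=(50, 3))
Q = 1.7 * P + np.array([0.2, 0.4, -0.1])       # true scale 1.7, no rotation
s_hat = golden_min(lambda s: residual(s, P, Q), 0.1, 5.0)
```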
2.2. Matching Conditions
Computational methods and criteria are proposed that allow us to check the similarity of two objects. Let us denote the localized object as B^’ after the preliminary step in Figure 1.
1. e-Offset Test (Weak Test)
- How close B^’ is to A in terms of Euclidean distance is the main interest of this test. The squared distances between A and B^’ are calculated and checked to determine whether all of them are within the e-distance bound, i.e. whether one object is an e-offset of the other. In [23,15], the stationary points of the squared distance function between two variable points located on two different geometric entities are
investigated. Based on this technique, the maximum distance is calculated between the boundary surfaces of A and those of B.
2. Principal Curvature Test (Intermediate Test)
- Intrinsic properties are investigated for use in similarity check. They are independent of parametrization and only depend on the shape of the geometry. Among them, principal curvatures and their
directions are chosen. Criteria for comparison and decision algorithms using them are proposed. The principal directions are calculated at grid points of the mesh of lines of curvature for the
surfaces of object A. The grid points are orthogonally projected onto the surfaces of object B^’ [23] and principal directions are estimated at the projected grid points. The principal directions
calculated for A are then compared with the estimated principal directions of B^’. Alternatively, the lines of curvature are orthogonally projected onto B^’ directly by extending the method by Pegna
and Wolter [16]. The minimum distance projection [16] is used where the orthogonal projection is not possible.
3. Umbilic Test (Strong Test)
- The availability of this test depends on the existence of generic umbilics. This test is based on the fact that generic umbilical points and the patterns of lines of curvature around them are
stable to perturbations so that they preserve their qualitative properties. Comparing umbilics includes determining whether the locations and patterns of the umbilics for object A match those for object B^’.
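The first of these, the e-offset (weak) test, can be approximated on sampled surfaces; a sketch using brute-force nearest-sample distances in place of the exact squared-distance-function analysis of [23,15]:

```python
import numpy as np

def max_deviation(SA, SB):
    # One-sided sampled Hausdorff distance: for each sample of B', the
    # distance to the nearest sample of A (a stand-in for the exact
    # squared-distance-function analysis cited in the paper).
    D = np.linalg.norm(SB[:, None, :] - SA[None, :, :], axis=2)
    return D.min(axis=1).max()

u = np.linspace(0, 1, 25)
U, V = np.meshgrid(u, u)
SA = np.stack([U, V, U * V], axis=-1).reshape(-1, 3)          # surface A
SB = np.stack([U, V, U * V + 0.001], axis=-1).reshape(-1, 3)  # 1e-3 offset copy
eps = 0.005
print(max_deviation(SA, SB) <= eps)  # → True: B' passes the weak test
```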
2.3. Similarity Assessment
Let us denote k node points of the surface intrinsic wireframe from surface B as P[i] (i = 1, 2,…, k). Since the node points are obtained from the surface intrinsic wireframe, they are independent of
parametrization and any rigid body transformation. Next, find the footpoints Q[i] on surface A of P[i] which give the minimum distance from P[i] to surface A. The IPP algorithm can be used to find
these minimum distance footpoints robustly as in [23]. After finding the footpoints Q[i] on A, calculate the following quantities between P[i] and Q[i] (i = 1, 2,…, k).
· Euclidean distance of |P[i] - Q[i]| : e[0][i]
· The second derivative properties
- Difference of principal curvatures : e[1][i], e^’[1][i]
- Difference of principal directions : e[2][i]
Maximum values, average values and standard deviations can be calculated to provide quantitative statistical measures of how similar the two surfaces are in a global manner. Local similarity can also be assessed with e[ji] after alignment. Each e[ji] (j = 0, 1, 2) is normalized with respect to the maximum value max[j](e[ji]). Tolerances d[j] (j = 0, 1, 2) corresponding to e[ji] are used to extract the regions of interest; namely, the regions in which e[ji] > d[j] are those where the two surfaces differ. As an extension of this idea, the similarity between two surfaces can be expressed as a percentage value. First, the difference values e[ji] are located over the uv plane. Then the uv plane is subdivided into a set of square grids of size (d[s] × d[s]), where d[s] is a user defined value. The total number of square meshes is denoted as D[T]. Given a tolerance d[j], the number of squares D[e] which contain at least one point satisfying e[ji] > d[j] is counted. Then the equation (1 - D[e] / D[T]) × 100 gives a percentage value of similarity. The squares which do not contain points satisfying the condition indicate the regions where the two surfaces are equivalent under a given test with tolerance d[j].
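The percentage measure above can be sketched directly (synthetic uv samples; the cell size d[s] and tolerance are illustrative):

```python
import numpy as np

def similarity_percentage(uv, e, delta, ds):
    # Subdivide the uv plane into ds x ds cells, flag any cell holding a
    # point whose normalized difference measure exceeds delta, and report
    # (1 - D_e / D_T) * 100, following Section 2.3.
    cells = np.floor(uv / ds).astype(int)
    n = int(np.ceil(1.0 / ds))
    flagged = set(map(tuple, cells[e > delta]))   # cells counted in D_e
    total = n * n                                 # D_T
    return (1.0 - len(flagged) / total) * 100.0

rng = np.random.default_rng(2)
uv = rng.uniform(0, 1, size=(2000, 2))
e = np.zeros(2000)
e[uv[:, 0] > 0.9] = 1.0        # differences confined to a 10% strip
print(similarity_percentage(uv, e, delta=0.5, ds=0.1))  # high similarity, ~90%
```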
2.4. Similarity Decision Algorithms
Two similarity decision algorithms are summarized in this section. They consist of three proposed tests and provide quantitative results with which one can determine whether one object is a copy of
another object or not. Algorithm 1 uses maximum deviation values at each test for a decision, while Algorithm 2 employs statistical methods for a decision. Each algorithm produces hierarchical
results for similarity between two objects.
2.4.1. Algorithm 1
The preliminary operations are followed by a weak test (e-offset test) as shown in Figure 5. Then a decision is made that object A and B^’ are within or out of tolerance e[d] based on Euclidean
distance. If the maximum distance between corresponding points on the surfaces of object A and B^’ is within e[d], then object B^’ is considered to have passed the weak test and determined to be a
copy of object A under the weak test. On the other hand, if the distance is greater than tolerance e[d], the test fails. In such case, there are two possible courses of action. If e[d] is not large
with respect to the size of objects, the user may decide to increase it and retry the weak test. If e[d] is large, then the user may decide to stop the process and decide that B^’ (and B) is not
derived from A.
If the weak test is passed, then an intermediate test (principal curvature test) may be performed. The procedure is similar to that of the weak test. If the test succeeds, object B^’ is considered to be a copy of object A under the intermediate test. If it fails, a decision is made whether to retry the intermediate test with a new tolerance e[a], selected based on the relative magnitude of e[a] in the previous test. If e[a] is not sufficiently large, the user may decide to increase it and continue the intermediate test. Otherwise, the user may decide to stop the process and conclude that B^’ (and B) is derived from A with respect only to the weak test.
The availability of the strong test depends on the existence of isolated umbilical points. If no umbilical point exists, the process stops and it is concluded that B^’ (and B) is derived from A with respect to the intermediate test. If umbilics exist, the strong test (umbilic test) may be performed. If the test succeeds, B^’ (and B) is concluded to be derived from A with respect to the strong test. Otherwise, B^’ (and B) is decided to be a copy of A with respect to the intermediate test.
Figure 5: Algorithm 1
2.4.2. Algorithm 2
The overall procedure is the same as algorithm 1 except that no iteration is involved as shown in Figure 6. In this algorithm, a decision is made based on statistical information constructed in each
step. From the weak test, statistics of the distance function are computed and evaluated by the user or a computer program. If the statistics pass a set of threshold tests, then B^’ (and B) is
concluded to be derived from A under the weak test and the intermediate test begins. Otherwise, B^’ (and B) is concluded not to be derived from A.
The intermediate test constructs statistics of intrinsic properties, i.e. angle differences of the principal directions. They are computed and evaluated by the user or a computer program. A
determination is made as to whether the statistics pass a set of threshold tests. If the tests are negative, it is concluded that B^’ (and B) is derived from A under the weak test. If the tests are
positive, it is concluded that B^’ (and B) is derived from A with respect to the intermediate test.
Similarly, depending on the existence of umbilical points, the strong test may be performed. Statistics of position differences of the locations between corresponding umbilics are computed and
evaluated by the user or a computer program. A determination is made as to whether the statistics pass a set of threshold tests. If the tests are negative, B^’ (and B) is concluded to be derived from
A with respect to the intermediate test. If the tests are positive, it is decided that B^’ (and B) is derived from A under the strong test.
Figure 6: Algorithm 2
3. Examples
In this section, the proposed algorithms are demonstrated with various examples. Two subsections are presented: one subsection deals with examples of the proposed matching methods and the other
contains an example of application to copyright protection.
3.1. Matching
3.1.1. Moment Method
Solids A and B, bounded by cubic integral B-spline surfaces, are used for simplicity. Solid A is enclosed in a rectangular box of 25.0mm × 23.48mm × 11.0mm; its height is 25.0mm.
Figure 7: Matching via integral properties
Figure 7 shows a sequence of operations for matching two surfaces using the principal moments of inertia of the input solids. In Figure 7-(a), the boundary surfaces of the two input solids are shown together with their control points; for better visualization, only part of the boundary surfaces of each solid is displayed. The smaller solid has been translated, rotated, uniformly scaled and reparameterized. The centroids of the two solids are matched by translating the small solid by the difference between the centroids, as demonstrated in Figure 7-(b). After matching the directions of the principal moments of inertia, the two solids are aligned, as shown in Figure 7-(c). Figure 7-(d) shows that the two solids match after the scaling value, obtained from the ratio between the masses of the solids, is applied to the small solid.
3.1.2. KH Method
A partial matching example using the KH method is presented in Figure 8. It shows half of a car hood which is represented by a bicubic B-spline surface patch. The hood has 64 (8 × 8) control points
(enclosed in a rectangular box of 13mm × 12mm × 6mm). The smaller surface in Figure 8-(B) is a B-spline surface patch with 16 × 16 control points. The relative error after matching, i.e. the maximum
distance divided by the square root of the surface area, is 0.00484.
Figure 8: A partial matching example
3.1.3. Umbilical Point Method
Suppose we have a set of data points r[A] and a surface r[B] as shown in Figures 9 and 10. The surface r[B] shown in Figure 9 is a bicubic B-spline surface patch with 64 (8 × 8) control points
enclosed in a box of 25mm×23.48mm×11mm. It has three star type umbilical points as shown in Figure 9.
Figure 9: Surface r[B] and umbilical points
The point set r[A] shown in Figure 10 is approximated with a bicubic B-spline surface patch of 256 (16×16) control points. It has one umbilical point of star type as shown in Figure 10.
Figure 10: Approximated surface r[A] and an umbilical point
Angle distributions at the umbilical points are computed, and it is found that the star type umbilical point on r[A] matches the center umbilical point on r[B]. Then, surface r[A] is translated and rotated, using the positions of the corresponding umbilical points and the normal vectors at the umbilical points, to match surface r[B] as shown in Figure 11.
Figure 11: Matching surfaces r[A] and r[B]
3.2. Copyright Protection
In this section, the two proposed similarity decision algorithms are demonstrated with the bottle example used for the moment method in Section 3.1.1. After aligning two solids A and B shown in
Figure 7, we are ready to assess the similarity between them. Here, part of the bounding surfaces is used for similarity checking. The surfaces are represented as bicubic B-splines and one surface
has 64 (8 × 8) and the other 144 (12 × 12) control points, with different parametrizations. Both surfaces are enclosed in a rectangular box of 25mm × 23.48mm × 11mm. 413 node points from the wireframe given in Figure 4 are used. All umbilical points for the two surfaces are located as shown in Figure 12.
│ │Umbilic 1│Umbilic 2│Umbilic 3│
│Distance (mm) │ 0.08099 │ 0.02954 │ 0.08115 │
Table 1: Distances between two corresponding umbilics
Figure 12: Comparison of lines of curvatures and umbilical points
In order to use Algorithm 1, we have to specify tolerances for e[0], e[1], e^’[1] and e[2]. Depending on each tolerance, we can determine which test has passed or failed. The statistical information given in Table 2 is obtained for Algorithm 2. Suppose we take 0.01 as the tolerance for the weak test and subdivide the uv region into 400 square sub-regions (each of size 0.05 × 0.05). The total number of sub-regions containing footpoints P[i] satisfying e[i] > 0.01 is 31. Therefore, we can conclude that the two surfaces are 92.25% similar under the weak test with tolerance 0.01 and sub-regions of size 0.05 × 0.05. This is visualized in Figure 13-(A), where the boxes indicate the regions that contain at least one point with deviation larger than the tolerance 0.01. The results of the intermediate test using the maximum principal curvature are visualized in Figure 13-(B). Under the intermediate test for the maximum principal curvature with a tolerance of 0.03, the similarity value between the two surfaces is 91.25%. The strong test can also be performed based on the umbilical points of both surfaces, as shown in Figure 12. Three star type umbilical points are identified on each surface, and the Euclidean distances between the corresponding umbilical points are given in Table 1. The types of the corresponding umbilical points match, and the position differences between the corresponding umbilics are small compared to the size of the objects. Therefore, we may conclude that the strong test has passed.
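The weak-test bookkeeping described above can be sketched as follows; the grid size, tolerance, and data layout are assumptions for illustration, not the paper's implementation.

```python
def weak_test_similarity(uv, dev, tol=0.01, nbins=20):
    """Grid-based weak test (a sketch): uv are footpoint parameters in
    [0,1]^2, dev their distance deviations.  A sub-region 'fails' if it
    contains any footpoint with deviation above tol; similarity is the
    fraction of passing sub-regions."""
    failing = set()
    for (u, v), e in zip(uv, dev):
        if e > tol:
            failing.add((min(int(u * nbins), nbins - 1),
                         min(int(v * nbins), nbins - 1)))
    return 1.0 - len(failing) / nbins ** 2

# 31 failing cells out of 400 reproduces the 92.25% reported above
uv = [((i % 20 + 0.5) / 20, (i // 20 + 0.5) / 20) for i in range(400)]
dev = [0.02 if i < 31 else 0.001 for i in range(400)]
print(weak_test_similarity(uv, dev))  # 0.9225
```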
Figure 13: (A) Weak test and (B) intermediate test (maximum principal curvature)
based on Algorithm 2
│ Values             │ Distance (mm) │ Max. Principal Curvature (mm^-1) │ Min. Principal Curvature (mm^-1) │
│ Maximum            │ 0.03163       │ 0.07875                          │ 0.10577                          │
│ Mean               │ 0.00852       │ 0.01573                          │ 0.01411                          │
│ Variance           │ 0.00004       │ 0.00023                          │ 0.00047                          │
│ Standard Deviation │ 0.00650       │ 0.01533                          │ 0.02165                          │
Table 2: Comparison of various quantities
4. Conclusions
We have addressed the problem of matching NURBS surface patches and solids bounded by NURBS surface patches, and introduced algorithms for a similarity decision. As auxiliary steps, methods for the detection of umbilical points and the construction of an intrinsic wireframe have also been proposed. Quantitative assessment of matching is another issue discussed in this paper. Three hierarchical
tests are proposed, and two decision algorithms are developed which provide systematic and statistical measures for a user to determine the similarity between two geometric objects. The proposed
matching and similarity checking techniques can be used for copyright protection of NURBS surfaces. A user can compare a suspicious surface with a surface registered with an independent repository to
check if the suspect surface is a copy of the copyrighted one. The partial matching technique may provide a method to determine whether or not part of the copyrighted surface has been stolen.
Similarity decision depends on user-defined tolerances. Therefore, these tolerances need to be defined by an independent party.
Extension of this work to the problem of matching and similarity evaluation for geometric objects expressed in different representation forms, such as polyhedra and range data, is a subject recommended for future study.
This work was funded by the National Science Foundation (NSF), grant number DMI-0010127.
Math Help
Use a compass to construct the perpendicular to line b from point A which is outside line b.
First you should pick any point on line b; call it B. Construct your first circle with center A and radius AB (circle 1, black). Name the second intersection between this circle and the given line C. Draw a new circle with center C through A (circle 2, red). Add circle 3 (green), with center B, through A. Mark the second intersection of the red and green circles D. Join A and D, and you have your perpendicular to line b from A.
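A quick numeric sanity check of why this works (the coordinates are hypothetical; line b is taken to be the x-axis): B and C are both equidistant from A and D, so b is the perpendicular bisector of segment AD.

```python
import math

def circle_intersections(p, rp, q, rq):
    """Intersection points of the circles centred at p and q (radii rp, rq)."""
    d = math.dist(p, q)
    a = (rp ** 2 - rq ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(rp ** 2 - a ** 2)
    mx = p[0] + a * (q[0] - p[0]) / d
    my = p[1] + a * (q[1] - p[1]) / d
    ux, uy = (q[1] - p[1]) / d, -(q[0] - p[0]) / d  # unit normal to pq
    return (mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)

A = (2.0, 3.0)        # hypothetical point off line b (the x-axis)
B = (0.0, 0.0)        # arbitrary point chosen on b
r = math.dist(A, B)   # radius AB
dx = math.sqrt(r * r - A[1] ** 2)
C = (A[0] + dx, 0.0)  # second meeting of circle(A, AB) with line b
P1, P2 = circle_intersections(C, math.dist(C, A), B, math.dist(B, A))
D = P1 if math.dist(P1, A) > 1e-9 else P2  # the intersection other than A
print(abs(D[0] - A[0]) < 1e-9)  # AD is vertical, i.e. perpendicular to b: True
```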
An array of N bits, split in two parts with same amout of bits set to one
You are given an array A of N bits. I tell you that K << N are set to one, and the remaining are set to 0. You don't know what bits are set to one and what are not. You just know the amount.
What is the optimal time and space complexity to split the array into two parts such that the number of bits set to one is equal in both parts? The two parts may or may not be contiguous. In the latter case, you can provide two sets of indexes into A as the answer.
3 comments:
1. select k random bits...
2. Since you don't know where the one-bits are, you have to scan linearly. You need one variable to count the one-bits you've found and another to index your position in the bit array. Scan the array until the count equals K/2 (or until the zero-bit count equals (N-K)/2). At that point your index marks the joint splitting the array into parts with equal numbers of one-bits.
Best case: K/2 lookups
Worst case: N - K/2 lookups
Average: N/2 lookups (or N/2 - K/4 if the worst case is N - K)
Space required: N bits + 1 counter + 1 index.
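The linear scan in comment 2 can be sketched as follows (assuming K is even, so an exact split of the one-bits exists):

```python
def split_equal_ones(A, K):
    """Contiguous split of bit array A (containing K ones, K even):
    scan until half the ones have been seen; return the split index."""
    assert K % 2 == 0, "with odd K no exact equal split of the ones exists"
    seen = 0
    for i, bit in enumerate(A):
        seen += bit
        if seen == K // 2:
            return i + 1  # A[:i+1] and A[i+1:] each hold K/2 ones
    raise ValueError("array holds fewer than K ones")

A = [0, 1, 0, 0, 1, 1, 0, 1]
i = split_equal_ones(A, 4)
print(sum(A[:i]), sum(A[i:]))  # 2 2
```

This is O(N) time worst case and O(1) extra space beyond the array, matching the analysis above.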
3. Create a subset S1 of A, by selecting K random bits. Define S2 = A - S1
By construction in S1 you have:
+ K - t (0<=t<=K) bits set to 1
+ t (0<=t<K) bits set to 0
By construction in S2 you have:
+ K-t (0<=t<=K) bits set to 0
+ t (0<=t<K) bits set to 1
what is the next step?
my assignment is at photobucket saso
the questions are not clear enough... :/
ill take another pair of photos brb
im uploading to photobucket
okay :)
ok, done, plz refresh the tab in which you've opened my library, thanks
they'll appear. should be clearer images
yeah, so what are you supposed to do?
I'm supposed to complete the tables, then graph the points
thank you :)
what are the equations for 3 and 6?
which photo? Photo 1 or Photo 2?
the first one
let me make the charts
google docs brb
ohhh, you have the table filled for that one and you have to find the equation...?
ok, are you asking about the picture on the left or on the right?
the one on the left
Ok, do you see the one with y=3x+5 as problem number one? Let us make that Page A. The one with y=1/3x-4 as number one will be page two.
What questions were you asking about? from what page?
page A
lost connection ok brb
what problems? 1 and 3?
do you understand what you're supposed to do though? as in how to go about solving it?
Well, I know how to plot points, yes. And solve for x/y. the graphing, though...
what about the graphing?
I'm not so sure how to do it. I am told that you need only two points to make the line?
you need AT LEAST two points to make a straight line, you could have more points :)
ive a question
go ahead :)
thanks, just wondering, is that you in the pic? also i went to algebra.com, and it generated the line.. you still get a medal though :)
yeah, that's me in the pic lol :)
thanks for looking at my q
ok then, im rachel btw
no worries :) anytime x
nice to meet u :))
you too :) if i'm on and you need help, tag me in ur question, if i can help, i'll be sure to do so :) if i can't, i'll ask someone who's available and may be able to help you do so :)
ive a question though
oh right nvm
Well thanks again!
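For reference, a small sketch of completing such a table for the two equations mentioned in the conversation (the x-values are illustrative, not the worksheet's):

```python
# Complete a table of (x, y) points for each equation; any two of the
# resulting points determine the straight line, and extra points serve
# as a consistency check when graphing by hand.
for label, f in [("y = 3x + 5", lambda x: 3 * x + 5),
                 ("y = (1/3)x - 4", lambda x: x / 3 - 4)]:
    table = [(x, f(x)) for x in (-3, 0, 3)]
    print(label, table)
```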
Topic: Interesting trivia - anybody has a different answer than I keep getting - zero
Replies: 6 Last Post: Nov 29, 2012 10:39 AM
Posted: Nov 27, 2012 12:31 PM
My daughter and I were solving a math trivia question, and I could not come up with any answer other than zero. It would be interesting to see if somebody has a different opinion. The problem follows:
You are at the start of a 1000 mile road with 3000 gummybears and a
donkey. At the end of the road is a supermarket. You want to find
the greatest number of gummy bears you can sell. Unfortunately, your
donkey has a disease and can only carry 1000 gummybears at 1 time.
Also, the donkey must eat 1 gummybear per mile.
- You can drop off gummybears anywhere on the road
- You can't carry gummybears while walking
- No loopholes
Again, this was a math trivia question and I could not ask anybody for clarification about what some of the caveats meant or what "no loopholes" meant; therefore I got zero.
(If I were to guess about the "no loopholes", I would think they meant
no "carrying the donkey" like I suggested to my daughter :) )
Thanks for your time.
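For what it's worth, under the standard "jeep problem" reading of this puzzle (the donkey may shuttle back and forth, and gummy bears may be cached at drop points, which "you can drop off gummybears anywhere on the road" seems to allow), a greedy phase computation gives a nonzero answer. This sketch assumes that reading; the strict caveats in the original wording may well rule it out.

```python
from fractions import Fraction
from math import ceil

def max_delivered(total, capacity, distance):
    """Classic transport puzzle: while more than one full load remains,
    each mile costs (2*trips - 1) bears, where trips = ceil(stock/capacity)
    (forward trips plus the return walks between them)."""
    stock = Fraction(total)
    remaining = Fraction(distance)
    while remaining > 0 and stock > capacity:
        trips = ceil(stock / capacity)       # forward trips needed
        cost = 2 * trips - 1                 # bears eaten per mile in this phase
        threshold = (trips - 1) * capacity   # stock level where one trip fewer suffices
        d = min(remaining, (stock - threshold) / cost)
        stock -= d * cost
        remaining -= d
    if remaining > 0:
        stock -= remaining                   # final leg: one trip, 1 bear per mile
    return max(stock, Fraction(0))

print(max_delivered(3000, 1000, 1000))  # 1600/3, i.e. about 533 gummy bears
```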
Geometry and the imagination
Last Friday, Henry Wilton gave a talk at Caltech about his recent joint work with Sang-hyun Kim on polygonal words in free groups. Their work is motivated by the following well-known question of Gromov.
Question (Gromov): Let $G$ be a one-ended word-hyperbolic group. Does $G$ contain a subgroup isomorphic to the fundamental group of a closed hyperbolic surface?
Let me briefly say what “one-ended” and “word-hyperbolic” mean.
A group is said to be word-hyperbolic if it acts properly and cocompactly by isometries on a proper $\delta$-hyperbolic path metric space — i.e. a path metric space in which there is a constant $\delta$ so that geodesic triangles in the metric space have the property that each side of the triangle is contained in the $\delta$-neighborhood of the union of the other two sides (colloquially, triangles are thin). This condition distills the essence of negative curvature in the large, and was shown by Gromov to be equivalent to several other conditions (e.g. that the group satisfies a linear isoperimetric inequality; that every ultralimit of the group is an $\mathbb{R}$-tree). Free groups are hyperbolic; fundamental groups of closed manifolds with negative sectional curvature (e.g. surfaces with negative Euler characteristic) are word-hyperbolic; "random" groups are hyperbolic — and so on. In fact, it is an open question whether a group $G$ that admits a finite $K(G,1)$ is word-hyperbolic if and only if it does not contain a copy of a Baumslag-Solitar group $BS(m,n):=\langle x,y \; | \; x^{-1}y^{m}x = y^n \rangle$ for $m,n \neq 0$ (note that the group $\mathbb{Z}\oplus \mathbb{Z}$ is the special case $m=n=1$); in any case, this is a very good heuristic for identifying the word-hyperbolic groups one typically meets in examples.
If $G$ is a finitely generated group, the ends of $G$ really means the ends (as defined by Freudenthal) of the Cayley graph of $G$ with respect to some finite generating set. Given a proper
topological space $X$, the set of compact subsets of $X$ gives rise to an inverse system of inclusions, where $X-K'$ includes into $X-K$ whenever $K$ is a subset of $K'$. This inverse system defines
an inverse system of maps of discrete spaces $\pi_0(X-K') \to \pi_0(X-K)$, and the inverse limit of this system is a compact, totally disconnected space $\mathcal{E}(X)$, called the space of ends of
$X$. A proper topological space is canonically compactified by its set of ends; in fact, the compactification $X \cup \mathcal{E}(X)$ is the “biggest” compactification of $X$ by a totally
disconnected space, in the sense that for any other compactification $X \subset Y$ where $Y-X$ is zero dimensional, there is a continuous map $X \cup \mathcal{E}(X) \to Y$ which is the identity on $X$.
For a word-hyperbolic group $G$, the Cayley graph can be compactified by adding the ideal boundary $\partial_\infty G$, but this is typically not totally disconnected. In this case, the ends of $G$
can be recovered as the components of $\partial_\infty G$.
A group $G$ acts on its own ends $\mathcal{E}(G)$. An elementary argument shows that the cardinality of $\mathcal{E}(G)$ is one of $0,1,2,\infty$ (if a compact set $V$ disconnects $e_1,e_2,e_3$ then
infinitely many translates of $V$ converging to $e_1$ separate $e_3$ from infinitely many other ends accumulating on $e_1$). A group has no ends if and only if it is finite. Stallings famously showed
that a (finitely generated) group has at least $2$ ends if and only if it admits a nontrivial description as an HNN extension or amalgamated free product over a finite group. One version of the
argument proceeds more or less as follows, at least when $G$ is finitely presented. Let $M$ be an $n$-dimensional Riemannian manifold with fundamental group $G$, and let $\tilde{M}$ denote the
universal cover. We can identify the ends of $G$ with the ends of $\tilde{M}$. Let $V$ be a least ($n-1$-dimensional) area hypersurface in $\tilde{M}$ amongst all hypersurfaces that separate some end
from some other (here the hypothesis that $G$ has at least two ends is used). Then every translate of $V$ by an element of $G$ is either equal to $V$ or disjoint from it, or else one could use the
Meeks-Yau "roundoff trick" to find a new $V'$ with strictly lower area than $V$. The translates of $V$ decompose $\tilde{M}$ into pieces, and one can build a tree $T$ whose vertices correspond to components of $\tilde{M} - G\cdot V$, and whose edges correspond to the translates $G\cdot V$. The group $G$ acts on this tree, with finite edge stabilizers (by the compactness of $V$), exhibiting
$G$ either as an HNN extension or an amalgamated product over the edge stabilizers. Note that the special case $|\mathcal{E}(G)|=2$ occurs if and only if $G$ has a finite index subgroup which is
isomorphic to $\mathbb{Z}$.
Free groups and virtually free groups do not contain closed surface subgroups; Gromov’s question more or less asks whether these are the only examples of word-hyperbolic groups with this property.
Kim and Wilton study Gromov’s question in a very, very concrete case, namely that case that $G$ is the double of a free group $F$ along a word $w$; i.e. $G = F *_{\langle w \rangle } F$ (hereafter
denoted $D(w)$). Such groups are known to be one-ended if and only if $w$ is not contained in a proper free factor of $F$ (it is clear that this condition is necessary), and to be hyperbolic if and
only if $w$ is not a proper power, by a result of Bestvina-Feighn. To see that this condition is necessary, observe that the double $\mathbb{Z} *_{p\mathbb{Z}} \mathbb{Z}$ is isomorphic to the
fundamental group of a Seifert fiber space, with base space a disk with two orbifold points of order $p$; such a group contains a $\mathbb{Z}\oplus \mathbb{Z}$. One might think that such groups are
too simple to give an insight into Gromov’s question. However, these groups (or perhaps the slightly larger class of graphs of free groups with cyclic edge groups) are a critical case for at least
two reasons:
1. The “smaller” a group is, the less room there is inside it for a surface group; thus the “simplest” groups should have the best chance of being a counterexample to Gromov’s question.
2. If $G$ is word-hyperbolic and one-ended, one can try to find a surface subgroup by first looking for a graph of free groups $H$ in $G$, and then looking for a surface group in $H$. Since a closed
surface group is itself a graph of free groups, one cannot “miss” any surface groups this way.
Not too long ago, I found an interesting construction of surface groups in certain graphs of free groups with cyclic edge groups. In fact, I showed that every nontrivial element of $H_2(G;\mathbb{Q})$ in such a group is virtually represented by a sum of surface subgroups. Such surface subgroups are obtained by finding maps of surface groups into $G$ which minimize the Gromov norm in their (projective) homology class. I think it is useful to extend Gromov's question by making the following
Conjecture: Let $G$ be a word-hyperbolic group, and let $\alpha \in H_2(G;\mathbb{Q})$ be nonzero. Then some multiple of $\alpha$ is represented by a norm-minimizing surface (which is necessarily $\pi_1$-injective).
Note that this conjecture does not generalize to wider classes of groups. There are even examples of $\text{CAT}(0)$ groups $G$ with nonzero homology classes $\alpha \in H_2(G;\mathbb{Q})$ with
positive, rational Gromov norm, for which there are no $\pi_1$-injective surfaces representing a multiple of $\alpha$ at all.
It is time to define polygonal words in free groups.
Definition: Let $F$ be free. Let $X$ be a wedge of circles whose edges are free generators for $F$. A cyclically reduced word $w$ in these generators is polygonal if there exists a van-Kampen graph $\Gamma$ on a surface $S$ such that:
1. every complementary region is a disk whose boundary is a nontrivial (possibly negative) power of $w$;
2. the (labelled) graph $\Gamma$ immerses in $X$ in a label preserving way;
3. the Euler characteristic of $S$ is strictly less than the number of disks.
The last condition rules out trivial examples; for example, the double of a single disk whose boundary is labeled by $w^n$. Notice that it is very important to allow both positive and negative powers
of $w$ as boundaries of complementary regions. In fact, if $w$ is not in the commutator subgroup, then the sum of the powers over all complementary regions is necessarily zero (and if $w$ is in the
commutator subgroup, then $D(w)$ has nontrivial $H_2$, so one already knows that there is a surface subgroup).
Condition 2. means that at each vertex of $\Gamma$, there is at most one oriented label corresponding to each generator of $F$ or its inverse. This is really the crucial geometric property. If $\Gamma,S$ is a van-Kampen graph as above, then a theorem of Marshall Hall implies that there is a finite cover of $X$ into which $\Gamma$ embeds (in fact, this observation underlies Stallings's work on foldings of graphs). If we build a $2$-complex $Y$ with $\pi_1(Y)=D(w)$ by attaching two ends of a cylinder to suitable loops in two copies of $X$, then a tubular neighborhood of $\Gamma$ in $S$ (i.e. what is sometimes called a "fatgraph") embeds in a finite cover $\tilde{Y}$ of $Y$, and its double — a surface of strictly negative Euler characteristic — embeds as a closed surface in $\tilde{Y}$, and is therefore $\pi_1$-injective. Hence if $w$ is polygonal, $D(w)$ contains a surface subgroup.
Not every word is polygonal. Kim-Wilton discuss some interesting examples in their paper, including:
1. suppose $w$ is a cyclically reduced product of proper powers of the generators or their inverses (e.g. a word like $a^3b^7a^{-2}c^{13}$ but not a word like $a^3bc^{-1}$); then $w$ is polygonal;
2. a word of the form $\prod_i a^{p_{2i-1}}(a^{p_{2i}})^b$ is polygonal if $|p_i|>1$ for each $i$;
3. the word $abab^2ab^3$ is not polygonal.
To see 3, suppose there were a van-Kampen diagram with more disks than Euler characteristic. Then there must be some vertex of valence at least $3$. Since $w$ is positive, the complementary regions
must have boundaries which alternate between positive and negative powers of $w$, so the degree of the vertex must be even. On the other hand, since $\Gamma$ must immerse in a wedge of two circles,
the degree of every vertex must be at most $4$, so there is consequently some vertex of degree exactly $4$. Since each $a$ is isolated, at least $2$ edges must be labelled $b$; hence exactly two.
Hence exactly two edges are labelled $a$. But one of these must be incoming and one outgoing, and therefore these are adjacent, contrary to the fact that $w$ does not contain an $a^{\pm 2}$.
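Criterion 1 above is purely syntactic and easy to check mechanically. A sketch (an illustration, not Kim-Wilton's code): words are assumed to be encoded as strings over the generators, with uppercase letters denoting inverses, already cyclically reduced and with distinct first and last letters (otherwise the first and last runs would merge cyclically).

```python
import itertools

def syllables(word):
    """Split a reduced word into maximal runs of one letter, returning
    (generator, exponent) pairs; uppercase encodes the inverse."""
    return [(ch.lower(), n if ch.islower() else -n)
            for ch, grp in itertools.groupby(word)
            for n in [len(list(grp))]]

def all_proper_powers(word):
    """True when every syllable exponent p has |p| > 1 -- the sufficient
    condition for polygonality in example 1 above."""
    return all(abs(p) > 1 for _, p in syllables(word))

# "aaa" + "b"*7 + "AA" + "c"*13 encodes a^3 b^7 a^{-2} c^{13}
print(all_proper_powers("aaa" + "b" * 7 + "AA" + "c" * 13))  # True
print(all_proper_powers("ababbabbb"))  # False: abab^2ab^3 has unit exponents
```

Note that a False here only means criterion 1 does not apply; it is the separate counting argument above that shows $abab^2ab^3$ is actually not polygonal.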
1 above is quite striking to me. When $w$ is in the commutator subgroup, one can consider van-Kampen diagrams as above without the injectivity property, but with the property that every power of $w$
on the boundary of a disk is positive; call such a van-Kampen diagram monotone. It turns out that monotone van-Kampen diagrams always exist when $w \in [F,F]$, and in fact that norm-minimizing
surfaces representing powers of the generator of $H_2(D(w))$ are associated to certain monotone diagrams. The construction of such surfaces is an important step in the argument that stable commutator
length (a kind of relative Gromov norm) is rational in free groups. In my paper scl, sails and surgery I showed that monomorphisms of free groups that send every generator to a power of that
generator induce isometries of the $\text{scl}$ norm; in other words, there is a natural correspondence between certain equivalence classes of monotone surfaces for an arbitrary word in $[F,F]$ and
for a word of the kind that Kim-Wilton show is polygonal (Note: Henry Wilton tells me that Brady, Forester and Martinez-Pedroza have independently shown that $D(w)$ contains a surface group for such
$w$, but I have not seen their preprint (though I would be very grateful to get a copy!)).
In any case, if not every word is polygonal, all is not lost. To show that $D(w)$ contains a surface subgroup it suffices to show that $D(w')$ contains a surface subgroup, where $w$ and $w'$ differ
by an automorphism of $F$. Kim-Wilton conjecture that one can always find an automorphism $\phi$ so that $\phi(w)$ is polygonal. In fact, they make the following:
Conjecture (Kim-Wilton; tiling conjecture): A word $w$, not contained in a proper free factor, of shortest length (in a given generating set) in its orbit under $\text{Aut}(F)$, is polygonal.
If true, this would give a positive answer to Gromov’s question for groups of the form $D(w)$.
Quantum Casino - Less Than Zero Chance
Human thought has led to a variety of remarkable and profound insights. Many of these insights are well established and have been embraced by a significant portion of the global population.
The earth being round, the atomistic nature of matter, our unremarkable place in the universe, and our being a product of evolution are all examples of such insights. Other insights, although unanimously embraced by experts, have a long way to go before a larger population accepts them. More than for any other subject, this holds for quantum physics. No other product of human thought is as
profoundly mysterious as quantum theory.
Unfortunately, the subject is not easy to digest and, to make things worse, it is often misrepresented in the pop-science literature. Those eager to understand the quantum are often fed with
confusing slogans and misleading analogies.
A year ago I dedicated two blogposts to the mysteries of the quantum. Judging from the reactions, these posts must have been useful to at least a few of the readers. The simple thought experiment presented (dealing with Albert's
socks) allows the reader to explore the weird world of quantum physics, an experience that likely will challenge the reader's view on reality. A view that - for all of us - is heavily biased by
sensory perceptions limited to classical (non-quantum) physics.
Yet, I feel one question remained insufficiently discussed in my earlier posts. And that is the question how physicists deal with quantum reality. Let's see if I can fill this gap by further
exploring Albert's quantum socks.
Initially, out of habit, each morning Albert opens a horizontal row of drawers. He does this without giving much thought to which of the three rows to open. After a few days, he starts noticing a
pattern. Whichever row he opens, either all three drawers are empty, or two drawers each contain a sock leaving one empty drawer. In other words, a row of drawers always contains a total even number
of socks. Many days pass by, and Albert never encounters a row not containing an even number. That makes sense: each row containing an even number of socks tells Albert that each morning the chest
must be filled with an even number of socks. An observation comfortably compatible with the notion that socks come in pairs.
One morning, Albert decides to deviate from his fixed ritual, and he opens a vertical column of drawers: the three leftmost drawers. Interestingly, this time he observes an odd number of socks
distributed over the three drawers opened. The next day, he opens the same leftmost column of drawers, and again observes an odd number of socks.
Albert gets curious about the other columns. What number of socks will they reveal? The next morning he opens the middle column. Again an odd number of socks. The next day he once more checks the
middle column. Once again he observes an odd number of socks.
Albert is a clever guy, and he now realizes he can predict with certainty that the rightmost column of drawers must behave differently from the two columns already inspected. The rightmost drawers
for sure must contain a total number of socks that is even. This is obvious as an odd number of socks in the rightmost drawers, added to the number of socks in the middle column (observed to be odd)
and the leftmost column (also observed to be odd) would result in an odd total number of socks in the chest. A contradiction, as he had already observed, based on opening horizontal rows, that the
total number of socks in the chest is always even.
The next morning he eagerly opens the three rightmost drawers. To his astonishment he observes an odd number of socks.
The next few mornings he randomly checks the various columns. Each attempt reveals an odd number of socks. Albert realizes something must have gone wrong. Maybe his housekeeper coincidentally changed from filling the chest with an even number of socks to filling it with an odd number of socks, just at the time he switched over from opening horizontal rows to vertical columns?

The next morning Albert again opens a horizontal row. An even number of socks stares him in the face. He starts randomly switching between horizontal rows and vertical columns. Horizontal rows always deliver even
numbers, vertical columns odd numbers.
This drives Albert crazy. The results he is obtaining are logically impossible. "On any given morning, if I were to open three rows", Albert reasons, "I would end up with an even number of socks. However, were I to open three columns, I would end up with an odd number of socks. Yet in both cases I would have opened the same nine drawers. How can this be?"
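Albert's contradiction can also be checked by brute force. The sketch below (an illustration added here, not part of Albert's story) enumerates every possible parity pattern for the nine drawers and confirms that no classical filling makes all rows even and all columns odd:

```python
from itertools import product

# Enumerate all 2^9 parity patterns for a 3x3 chest of drawers
# (0 = even number of socks in a drawer, 1 = odd).
solutions = []
for cells in product((0, 1), repeat=9):
    grid = [cells[0:3], cells[3:6], cells[6:9]]
    rows_even = all(sum(row) % 2 == 0 for row in grid)
    cols_odd = all(sum(grid[r][c] for r in range(3)) % 2 == 1
                   for c in range(3))
    if rows_even and cols_odd:
        solutions.append(grid)

# The row parities and the column parities both sum to the parity of the
# total sock count, so they cannot disagree -- no pattern survives.
print(len(solutions))  # -> 0
```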
"Nice story" you might react, "but nothing more than a fantasy. Obviously chests of drawers like these logically can't exist in our world".
Well, as a matter of fact, they do. I can put that even more strongly: all evidence points at our world consisting solely of devices like Albert's drawers. It just happens to be that these drawers tend to be very, very small, and that we generally observe the aggregate behavior of many drawers without being able to distinguish critical details such as the distinction between even and odd sock counts.
Many physicists deal with quantum reality day in, day out. How have they come to terms with a physical reality that presents us with devices like Albert's chest of drawers? Physicists don't seem to
lose any sleep over the many counter-intuitive notions and apparent paradoxes surrounding quantum theory. Are they oblivious to the utterly strange world view that the quantum represents?
Not at all. The point is that the vast majority of physicists simply have stopped worrying and have embraced a practical approach described by the catch-phrase "shut up and calculate". This phrase achieved the status of a popular instrumentalist approach to quantum physics, equated with eschewing all interpretation.
Let me try to offer you the "shut up and calculate" approach as a choice for coming to terms with the quantum. To do that, I need to explain how to calculate sock counts for Albert's drawers.
The mathematical machinery behind quantum devices like Albert's chest is compelling and carries a great beauty. It is based on a generalized probability calculus that adds likelihoods not as a linear sum,
but rather as a Pythagorean sum. I would love to present this quantum math to you. But alas, the math is too involved to explain in a blog like this.
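To give at least a flavor of it: quantum theory adds complex amplitudes linearly and obtains probabilities as squared magnitudes, so that alternatives can interfere. A minimal two-path sketch (the amplitude values are invented purely for illustration):

```python
# Two alternative paths with (real-valued, for simplicity) amplitudes.
# The values are invented for illustration only.
a1, a2 = 0.6, -0.6  # equal magnitude, opposite phase

# Classical reasoning adds probabilities linearly:
p_classical = a1**2 + a2**2   # 0.72 (up to rounding)

# Quantum reasoning adds amplitudes first and only then squares, so
# coherent alternatives can interfere:
p_quantum = (a1 + a2)**2      # 0.0 -- complete destructive interference

print(p_classical, p_quantum)
```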
Fortunately, the complex quantum math built on Pythagorean sums can be mapped on the much simpler classical math based on straightforward linear sums. Physicists apply such mappings all the time when
studying what is known as "the classical limit" of quantum systems: the behavior of large quantum systems that can be accurately represented with classical (non-quantum) laws of physics.
Eugene Wigner, one of the quantum pioneers, discovered in the early 1930's that something weird happens when approximating quantum physics with classical probabilities. He discovered that one can
describe quantum systems with probabilities that add linearly, but these probabilities are no longer guaranteed to be non-negative.
Paul Dirac, Wigner's brother-in-law, later wrote a paper that discusses the use of concepts like negative energies and negative probabilities in quantum physics:
"Negative energies and probabilities should not be considered as nonsense. They are well-defined concepts mathematically, like a negative of money."
Years later the idea of negative probabilities received increased attention when Richard Feynman started to popularize the idea:
“Trying to think of negative probabilities gave me a cultural shock at first, but when I finally got easy with the concept I wrote myself a note so I wouldn’t forget my thoughts. . ."
In describing Albert's chest of drawers, how far do we get with negative probabilities? Before you read my negative probabilities description for Albert's chest, I urge you to try it yourself first.
It's a straightforward and instructive exercise.
Ok, here is a probabilistic description of Albert's drawers. A total of 10 configurations are relevant:
1) Nine configurations consist of two drawers in a single row being filled, and all other drawers being empty. These nine configurations each carry a 1/6 probability.
2) One additional configuration features an empty chest of drawers. This configuration carries a minus 1/2 probability.
It is easy to see that all probabilities add up to unity, as they should. The negative probability for the 'all empty' state is a strange beast, but a beast that doesn't show up in the end result
when calculating observable probabilities.
For instance, let's calculate the probabilities for observing the various numbers of socks in the bottom row of Albert's chest. In the above table you can read off that we have seven realizations
with an empty bottom row, six with +1/6 probability, and one with -1/2 probability. The total probability for an empty bottom row is therefore +1/2. Furthermore, we have three realizations with a
bottom row filled with two socks, each with a probability of +1/6. The total probability for finding two socks in the bottom row is therefore +1/2. So we have two equal likelihoods of finding zero or
two socks, each with a positive (+1/2) value, and we are guaranteed to retrieve an even number of socks from the bottom row.
Repeating the same calculation for a column leads to the result that a column renders one sock with unit probability, hence a guarantee for an outcome of an odd number of socks.
What happened in both these cases is that the negative probability gets cancelled by positive probabilities describing the same outcome for the row or column under consideration. As a result, rows lead to even numbers and columns to odd numbers, and never does a negative probability rear its ugly head.
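This bookkeeping is mechanical enough to hand to a computer. The sketch below (my own encoding of the ten configurations described above) reproduces both results exactly:

```python
from fractions import Fraction
from itertools import combinations

# Ten configurations: nine "a pair of socks in one row" states at +1/6,
# plus the all-empty chest at -1/2.
configs = []
for row in range(3):
    for pair in combinations(range(3), 2):
        grid = [[0] * 3 for _ in range(3)]
        for col in pair:
            grid[row][col] = 1
        configs.append((grid, Fraction(1, 6)))
configs.append(([[0] * 3 for _ in range(3)], Fraction(-1, 2)))

assert sum(p for _, p in configs) == 1  # probabilities add up to unity

def dist(count):
    """Distribution of an observed sock count; zero entries dropped."""
    d = {}
    for grid, p in configs:
        k = count(grid)
        d[k] = d.get(k, Fraction(0)) + p
    return {k: v for k, v in d.items() if v != 0}

bottom_row = dist(lambda g: sum(g[2]))
left_column = dist(lambda g: sum(g[r][0] for r in range(3)))

# Rows: 0 or 2 socks, each with probability +1/2 -- always even.
assert bottom_row == {0: Fraction(1, 2), 2: Fraction(1, 2)}
# Columns: exactly 1 sock with certainty -- always odd.
assert left_column == {1: Fraction(1)}
```

Note how the negative weight of the empty chest cancels against positive weights in every observable distribution, exactly as described in the text.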
This obviously doesn't hold when determining the number of socks in the entire chest of drawers. A negative probability does show up prominently, and it does not get cancelled as there are no
positive probabilities describing the same configuration. However, we don't need to worry about this, as the configuration with negative probability does not correspond to a possible observation.
Only three drawers (one row or one column) can be opened, not the whole chest.
What this means is that one can extract only a limited amount of information out of Albert's chest. Getting more information out of it (opening more drawers) is impossible, as it would lead to negative probabilities becoming evident. This is a manifestation of Heisenberg's uncertainty principle for Albert's chest.
Heisenberg uncertainty, destructive interference, spooky action at a distance, violation of Bell's inequalities: all of quantum physics can be mapped onto negative probability models. So why don't
physicists use such negative probability models? The answer is simple: negative probability models are rather clumsy representations of what is better described using "Pythagorean probabilities".
But as a model to get some understanding of the fundamentals of quantum physics, negative probability models do carry a value largely unexplored by pop-science writers.
All of this places some of Einstein's quotes in a different light.
God does play dice. And malicious he is. Or how would you characterize anyone who throws dice loaded to render negative chances, and who manages to keep this so well-hidden from us?
A bit off topic, but it occurred to me that an actual set of drawers that has this observable behavior is quite easy to prepare if the many worlds interpretation holds. It can be done as a straightforward application of quantum immortality. Although I don't think that route has any interesting probability math involved, hence off topic.
Tolomea (not verified) | 10/24/12 | 13:07 PM
Thanks for this remark. Reminds me that I was planning to touch upon the Many Worlds Interpretation. Simply forgot about it when writing the blogpost (in hindsight not too bad, as the article was
already growing into a too large post).
I think you are arguing that MWI can be seen as compatible with a negative probability interpretation for Albert's drawers (with the absolute values of all relevant probabilities adding up to 2)?
Johannes Koelman | 10/24/12 | 13:23 PM
No, I've no idea what the math does, my point was far cruder.
You take a regular set of drawers, hook up some sort of interlock so you can only open one row or column, and toss some socks in it. Some of the rows and columns will violate the rules; you hook those
up to the trigger of a small thermonuclear device.
Regardless of what the rest of the math does, the probability of seeing a row with an odd number of socks or a column with an even number is now zero.
Quantum immortality then provides for you to always live to observe the desired result.
Tolomea (not verified) | 10/24/12 | 13:34 PM
I see. You can somewhat refine this demolition model as you can always fill the chest of drawers such that no more than one drawer can trigger a wrong even/odd count. Only the opening of this
specific drawer needs to cause a nuclear explosion. (A big one, I assume, as you need to annihilate the whole universe, right? :) )
Johannes Koelman | 10/24/12 | 14:00 PM
No, you just need to make sure Einstein's chance of survival is effectively 0.
Your description talks only of the behavior Einstein observes; with this setup (and many worlds) he can go on opening drawers morning after morning indefinitely, always observing the puzzling behavior.
Of course it looks rather different to the rest of us, we see his entire neighborhood reduced to a smoldering crater within a couple of days of starting this exercise, with an apparent 1 in 6 chance
of it happening on any particular day.
What his neighbors observe under this setup is kinda interesting. For them the experiment rolls on indefinitely... until they go on holiday. Every morning they are outside the 100% zone there is an
apparent 1 in 6 chance of the bomb going off.
Tolomea (not verified) | 10/24/12 | 14:14 PM
You say that Pythagorean sums are too difficult for a blog, but they are actually straightforward to understand as how shadows (projections) obviously behave (I described this on a lay person (blog)
level here, you are welcome to contribute/criticise).
Instead, you support tossing out the positivity of probabilities while on the other hand keeping normalization, which I find immediately and obviously less helpful from all perspectives, be it
arbitrariness of what is let go or interpretational issues, but especially on a blog.
The underlying reason for refusing intuitive models (which you usually like and are quite a master at coming up with) seems to be your support of "genuine stochasticity" (God plays dice). I encourage
you to step back and look at that any theory that is not already fully internally "parametrized" (i.e. still needs external "time of time" (flow of time), "random stochasticity") is not a satisfying
model but a regress error (premature termination of regress at least). In a satisfying model of randomness, randomness is by definition not any longer something external to the model (God).
I think you should focus your skills in coming up with intuitive models that advance, for example, EPR MWI models like I propose, because in those models, there is no longer external randomness/time
that allows time to flow so that we drift to branching points where God then throws a coin. EPR-MW models need examples for how the number of microstates is consistent with the quantum probabilities
that the models already allow. I think this is very fitting to your skill set, judging from all the models you came up with (they are usually microstate counting heavy, which is exactly what I need).
Sascha Vongehr | 10/24/12 | 23:02 PM
Thanks for the compliments. Yes, we are in a strong and solid agreement on the virtues of micro state counting models. Apart from that, however, there seems to be little we agree upon.
I disagree that an amplitude-based ('Pythagorean sum') description of the model discussed here is straightforward and easy to explain. To explain Albert's chest using 'Pythagorean probabilities' requires a Mermin-Peres magic square (a 3x3 table containing tensor products of Pauli spin operators). All very elegant, but no matter how you choose to present that stuff, it will not be easy for an average reader to work out that such a contraption of operators indeed describes Albert's chest. In contrast, any 11 year old can sum probabilities (negative or not).
Secondly, my short answer towards your proposal for "advancing MWI models" is: "shut up and calculate!".
My somewhat longer (and more satisfactory?) answer is that my objective here is far more modest than "advancing EPR/MWI models". It is nothing more than an attempt to help the reader building some
intuition and appreciation for quantum physics (in relation to EPR/Bell and Kochen-Specker).
Finally, you are mistaken in thinking that I support "genuine stochasticity". In fact, I expect that some future generation of physicists will classify the stochastics introduced by 'the collapse of
the wave function' as the ether of the 20th (21st?) century. But that does not take anything away from the fact that this stochastic description is our (unreasonably effective!) state-of-the-art.
Although we all like to digress once in a while into the realm of our own pet ideas and theories, I consider 'rendering accessible the state of the art in the field' the key task of any science blogger.
I disagree that an amplitude-based ('Pythagorean sum') description of the model discussed here is straightforward and easy to explain.
Depends on how you do it. I heard the same about EPR. I tried it anyway, and I succeeded. Don't give up just because others tell you it is impossible.
Secondly, my short answer towards your proposal for "advancing MWI models" is: "shut up and calculate!". ... attempt to help the reader building some intuition and appreciation
Your way of building intuition is "shut up and calculate"? Well, since I do not only know that this is completely upside down, but also that you actually do not really mean this (why else would you
come up with all the intuitive toy models), ...
mistaken in thinking that I support "genuine stochasticity". In fact, I expect that some future generation of physicists will classify the stochastics introduced by 'the collapse of the wave
function' as the ether of the 20th (21st?) century. But that does not take anything away from the fact that this stochastic description is our (unreasonably effective!) state-of-the-art.
Johannes - what about getting up to snuff? You are waiting for something that already happened. Why do you think that my models already have randomness fully parametrized? You think it is my private revolutionary idea? Thank you, but no, I also stand on the shoulders of a tower of other small humans. What you write here (God plays dice, collapse of WF, etc) is precisely support of genuine stochasticity.
'rendering accessible the state of the art in the field' the key task of any science blogger.
Sorry, but if you want that, you need to first learn the state of the art, and that state is certainly no longer god throwing dice(!!!) but properly parametrized models. Are you actually aware of
that relational QM has already resolved EPR? My model is not, as you may presume, a pet idea, but plainly a model that makes their resolution more intuitive! This is the state of the art Johannes.
Sascha Vongehr | 10/25/12 | 21:09 PM
Sorry to say Sascha, but you are throwing out just a bunch of trivialities.
"You are waiting for something that already happened"
Don't get carried away by your own philosophies and interpretations like MWI. When I say "shut up and calculate" what I mean is that no single "quantum interpretation" invented in the last 80 years
has resulted in any observable consequence nor in any hint towards a 'deeper theory'. None of these philosophies and fantasies passes Occam's razor, and like it or not (I personally don't like it),
but a conservative strictly operational approach to QM is our current state-of-the-art.
"Depends on how you do it. I heard the same about EPR. I tried it anyway, and I succeeded. Don't give up just because others tell you it is impossible."
No others tell me such a thing. This is common knowledge for anyone in the field. Suggest you read up on Mermin-Peres and then reconsider your comment.
"Are you actually aware of that relational QM has already resolved EPR?"
There is nothing "to resolve" in EPR.
"My model is [..] the state of the art Johannes."
Let me know when you receive your Nobel.
Johannes Koelman | 10/26/12 | 09:22 AM
Sascha Vongehr | 10/26/12 | 10:33 AM
"I also stand on the shoulders of a tower of other small humans."
Could you supply the addresses of a few of the floors of this "tower of other small humans"? Or even some names would be enough to help me see where you stand up there.
asog (not verified) | 10/26/12 | 21:45 PM
Starting from Sascha, it is crackpots all the way down.
Crackpot police (not verified) | 10/26/12 | 22:03 PM
Names, please.
asog (not verified) | 10/27/12 | 00:37 AM
I quite enjoyed this entry. The use of negative probabilities is intuitive and easy-to-handle. (In fact surprisingly so. Makes me wonder whether negative probability mathematics had been explored
prior to the advent of quantum theory, and moreover why the odds behave so well under such a strange imposition.)
I actually took to this iteration of your "Quantum Chest" series of thought experiments easier than the prior two. One issue with this specific setup though is that it doesn't give any notion of why
you can't open all three rows or columns at once, whereas in real-life the restriction is of fundamental significance.
eloheim (not verified) | 10/25/12 | 00:24 AM
"One issue with this specific setup though is that it doesn't give any notion of why you can't open all three rows or columns at once, whereas in real-life the restriction is of fundamental significance."

Right. It would be nice if in this story Albert could in principle open all drawers, but only at the expense of spoiling the measurements. For instance: whenever Albert opens a pattern of drawers
that introduces negative probabilities, the chest would blow up in Albert's face. But somehow I feel this would be an unsatisfactory representation of the physics. Another idea I toyed with is to
have the opening of a certain drawer to cause all drawers in different rows and different columns to shrink into oblivion. Perhaps a better representation of the physics, but somehow this strikes me
as unsatisfactory and too artificial as part of the storyline.
Happy to hear alternative ideas that would improve the analogy.
Johannes Koelman | 10/25/12 | 10:49 AM
it doesn't give any notion of why you can't open all three rows or columns at once
Yes, that is why a projection model is much better. If you throw a shadow onto the x-direction (ground), the information about the y-direction (height) is lost (and vice versa). You cannot throw
shadows onto both directions at the same time either with only one sun. This is pretty much precisely what happens in QM, namely projection of states onto measurement directions.
Sascha Vongehr | 10/25/12 | 21:07 PM
Sadly (for this particular demonstration of the intuitiveness of certain simple macro-scale models of quantum behavior) the "rope and a fork" wikipedia reference has been updated since you first
linked to it in that blog post of yours Sascha. It now clarifies via a reference to a journal article describing an actual physical experiment that "Note that the polarization direction is
perpendicular to the wires; the notion that waves "slip through" the gaps between the wires is incorrect."
Nonetheless the mental picture is still helpful, as long as one takes care not to conclude that simple mental analogies imply the hasty conclusion that the analogous quantum behavior is equally
simple, or that our macro world intuitions are always trustworthy and can reveal some sort of one-to-one correspondence with the quantum world if only we view them in a certain way.
melior (not verified) | 10/25/12 | 23:29 PM
Sascha Vongehr | 10/25/12 | 23:48 PM
The problem I have with classical analogies like "shadows cast" is that these don't introduce any real quantum effects (such as violations of Bell's inequalities). Apparently, non-classical behaviors can be enforced in classical models by allowing negative probabilities. Had heard about this before, but never in any detail. Didn't know the idea came from Dirac. All of this raises the question: is a hidden variables description of quantum mechanics possible if one allows for negative probabilities?
Bra Ket (not verified) | 10/26/12 | 10:08 AM
Forget negative probabilities - it is nonsense. The Bell violation does never come from projections, as such can always be classical. It comes from a sort of "extra branching" when different worlds
"match up" - however, that is cutting edge so you won't find it here where Johannes still thinks gods throwing dice is a fundamental explanation.
Sascha Vongehr | 10/26/12 | 10:38 AM
~~ Bell violation does never come from projections, as such can always be classical.~~
My point exactly. I was responding to your reaction above where you state that shadow casting models are better than negative probability models. However, negative probability models are capable of describing quantum effects (Bell violations), and shadow casting analogies aren't.
~~ a sort of "extra branching" when different worlds "match up." ~~
Can you make this more precise? As stated, it is way too vague for me. I would be interested to see you derive Bell violations from a branching universe model as simple as Dirac's negative
probability model. Would be very convincing to me. In the meantime, hope you don't mind if I stick to Einstein's God casting a die, but ready to jump over to your God branching off new universes.
Bra Ket (not verified) | 10/26/12 | 13:38 PM
I would be interested to see you derive Bell violations from a branching universe model as simple as Dirac's negative probability model.
The negative probabilities cannot derive Bell violations; such would be revolutionary. They are a way to do some math more conveniently, but at a high cost: all physical intuition is lost and probability no longer has anything to do with expectation.
That the branching allows for Bell violation at the very point where classicality is destroyed can be seen in a completely physical toy model.
hope you don't mind if I stick to Einstein's God casting a die, but ready to jump over to your God branching off new universes.
That is precisely the difference! There is no longer any role for anything external (time, randomness, god) in MW models. That is the very reason people already unconsciously reject them. This is not
about physics (since MW is completely consistent with QM), but about the human mind being evolved as a reproduction machine in a social environment (agency, ascribing of intentionality, ...).
Sascha Vongehr | 10/26/12 | 23:45 PM
~~ The negative probabilities cannot derive Bell violations; such would be revolutionary. ~~
Sometimes revolution stares us in the face without us noticing it. Sascha, you have missed the point. Johannes did just that: deriving Bell violations using nothing more than negative probability
distributions. He is using the Mermin-Peres variant of the Bell inequalities: http://users.wpi.edu/~paravind/Publications/MSQUARE5.pdf
Crackpot police (not verified) | 10/27/12 | 02:57 AM
Johannes Koelman | 10/27/12 | 08:16 AM
Sorry Johannes, but you are making a big mistake here! For example the paper that you suggest: it derives that every LHV model leads to negative probabilities if Bell is violated, which basically is
just another way of saying that LHV are impossible! It does also give the negative values that belong to certain Bell violations (OF COURSE, if I make some subtracted m negative, since it is
subtracted, all of a sudden the sum is bigger and violates an inequality - that is trivial!). That the paper writes almost as if this means that QM behavior can be derived from "negative
probabilities" is crackpottery. Johannes: The archive has lots of crackpottery. Please be more careful. Look at how and what the paper quotes, too! It misrepresents what the big names like Feynman
supposedly said! Look also at the paper your other commenter suggested. Look at what that paper writes are "elements of reality", and then compare to what you claim to be similar. The paper the
commenter suggests is fine; the paper you suggest here is beyond the edge "fringe science".
My models are totally consistent with the Mermin stuff, but your negative probabilities are crackpottery straight from the local hidden variables camp. Not quite as bad as Joy Christian's stuff, as far as I can see; there seems to be no conscious cheating here, however, there is certainly a large dose of misrepresenting at least.
Sascha Vongehr | 10/28/12 | 02:31 AM
"X cannot derive Y" is not the same as "Y cannot be derived from X"! I never said that negative parameters as input are not possible, in fact, I already said that they are useful at times. However,
there is nothing physical (no phenomenon) from which these negative parameters thus derive the Bell violation!
BTW: Note that the paper you suggest nowhere talks about negative probabilities and counts all the possible outcomes ("worlds") as "elements of reality".
Sascha Vongehr | 10/28/12 | 02:10 AM
Sascha: "X cannot derive Y" is not the same as "Y cannot be derived from X"!
In other words: "Sunshine cannot result in a wet pavement" is not the same as "A wet pavement cannot result from sunshine" ??
It seems you have completely shut yourself off from anything reasonable people are saying here. Bell inequalities are derived (from local realism). Violations of Bell inequalities result (the use of the word 'derived' indicates you don't understand the context) from QM as well as from negative probability theories. These latter theories violate local realism, as only non-negative probabilities (for instance the averaged variables yielding the semi-classical limit) can correspond to elements of reality.
I am probably wasting me time and energy here. Will no longer follow this thread.
Crackpot police (not verified) | 10/28/12 | 11:33 AM
Sorry, but you need to read more carefully what people actually write, and not "read" what you think they write because you have already pigeonholed them.
You can input something like the totality of many worlds, thus come to certain mathematical expressions, and those then help to derive violations of inequalities.
But you cannot just have violation, then find that the mathematical tools, like probabilities, become nonsensical (here negative), and instead of concluding that one of your assumptions (e.g. hidden
variables) is wrong, simply go ahead and present the issue as if the violation is derived from starting out with nonsensical math. You can derive anything from starting with nonsense math. That is
not how proper physics is done, that is how you do crackpottery.
Sascha Vongehr | 10/29/12 | 22:34 PM
Ha, ha, vongehr is revealing Dirac and Feynman to be crackpots. Hilarious.
On a more serious note: this new article on the physics of negative probabilities ( http://arxiv.org/abs/1210.6870 ), by Imperial College and Cambridge physicists, Halliwel and Yearsley, I found
quite intriguing.
Anonymous (not verified) | 11/02/12 | 11:32 AM
A beautifully written post. Still, the actual state of the chest could be useful to write down explicitly. ;-)
Wouldn't you agree to repost it as a guest blog on my website?
Luboš Motl | 11/02/12 | 16:05 PM
Johannes Koelman | 11/03/12 | 07:39 AM
Thanks a lot. Please check here:
I know that very many non-crackpot readers visit my website but most of them stay silent.
BTW do you have an explicit formula for the state vector of the 9 qubits that leads to the observations?
Update: I probably know how to do that. They're not 9 independent qubits, of course, because then they would behave classically. But one may create the 9 operators as products of some Pauli matrices/
spins so that the even/odd number of the factors anticommute with those of their friends in the rows/columns.
Luboš Motl | 11/03/12 | 11:06 AM
"But one may create the 9 operators as products of some Pauli matrices/spins so that the even/odd number of the factors anticommute with those of their friends in the rows/columns."
The 3x3 table of operators you are looking for consists of direct products of Pauli matrices with operators in rows mutually commuting, and similarly for those in columns. It's called the
Mermin-Peres magic square.
Johannes Koelman | 11/03/12 | 11:31 AM
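[Aside for readers who want to check this concretely: a small numerical sketch, assuming NumPy. The particular operator assignments below are the standard textbook Mermin-Peres choice, not quoted from this thread. It verifies both the commutation pattern and the row/column product signs mentioned above.]

```python
import numpy as np

# Pauli matrices and the 2x2 identity
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

kron = np.kron  # tensor product of single-qubit operators

# One standard choice of the Mermin-Peres magic square:
# each entry is a two-qubit observable built from Pauli matrices.
square = [
    [kron(X, I2), kron(I2, X), kron(X, X)],
    [kron(I2, Z), kron(Z, I2), kron(Z, Z)],
    [kron(X, Z),  kron(Z, X),  kron(Y, Y)],
]

I4 = np.eye(4, dtype=complex)

def commute(A, B):
    return np.allclose(A @ B, B @ A)

# Operators within each row commute, and within each column commute.
for i in range(3):
    for a in range(3):
        for b in range(3):
            assert commute(square[i][a], square[i][b])   # same row
            assert commute(square[a][i], square[b][i])   # same column

# Every row multiplies to +I ...
row_signs = [bool(np.allclose(square[i][0] @ square[i][1] @ square[i][2], I4))
             for i in range(3)]
# ... and every column to +I except the last, which gives -I.
col_prods = [square[0][j] @ square[1][j] @ square[2][j] for j in range(3)]
print(row_signs)                                     # [True, True, True]
print([bool(np.allclose(P, I4)) for P in col_prods]) # [True, True, False]
print(bool(np.allclose(col_prods[2], -I4)))          # True
```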
“Shut up and calculate” works fine until one gets tripped up calculating. I followed your link, Motl, and noticed your recent excellent post on Feynman sums and diagrams. The mathematics and physics quickly get very hairy. A graduate-level lecture on Feynman sums from a department of mathematics would be unrecognizable next to one in a physics department, where prickly issues of divergence and convergence and what to count (and the underlying spaces and fibres) are glossed over.
Maybe we have this whole outreach program upside-down. Maybe it's the classical world that's the weirdo that's hard to fathom ~ not the quantum world.
The quantum fundamentals are not so difficult and students learn them with basic atomic chemistry. There are only a few dozen frequently encountered elements from the periodic table. There are even
fewer common elements at the level of quarks. Each of these types of elements has no individuality. Unlike students in a classroom, they can swap places with no observable effects. Their
characteristics are also completely independent of the place and time in which they are observed.
Contrast this quantum simplicity with the classical mess in which everywhere and every when there are petty individual differences burdened with history, cliques and unfairness in the way things are
counted and tallied. It's the classical world that is the hydra-headed monster, not the quantum world.
The problem with ball-and-stick models of molecules is that students tend to think of the balls as being like little classical balls, with each one a distinct “particle”. That may be OK for
understanding how pressure works in a balloon, yet it is useless for understanding solid state physics or phase changes .... or entanglement.
Instead of trying to make quantum physics seem classical … and instead of pretending that its calculations are easy, how about a different approach that doesn't slip in assumptions about an
underlying space and time that isn't there?
blue- green | 11/03/12 | 09:59 AM
"Contrast this quantum simplicity with the classical mess"
It is relevant to note that this "classical mess" is the emergent behavior we are presented with following some significant coarse-graining. Unfortunately, this emergent behavior heavily colors our
idea of reality. Deep down reality is much more austere than the "classical mess" suggests.
Johannes Koelman | 11/03/12 | 21:24 PM
Although “coarse-graining” has its uses, it may well be that, 140 years after its use in Boltzmann's H-theorem, it has been mined for all it is worth.
And yes, if one squints hard enough, quantum mechanics may appear to be “austere”.
However, when one considers many-body systems, the dynamical space is an extremely high-dimensional cross-product of all of the individual spaces available to each “body” …. whereas …. in the blurred-over and “austere” classical budget-vision, one simply adds more particles to a single box … with no window into biology or consciousness, even if you do get an emergent space and time.
blue- green | 11/04/12 | 08:03 AM
BrainBashers : Brain Teasers and Puzzles
Puzzle 1
Three people check into a hotel. They pay £30 to the manager and go to their room.
The manager suddenly remembers that the room rate is £25 and gives £5 to the bellboy to return to the people.
On the way to the room the bellboy reasons that £5 would be difficult to share among three people so he pockets £2 and gives £1 to each person.
Now each person paid £10 and got back £1.
So they paid £9 each, totalling £27. The bellboy has £2, totalling £29.
Where is the missing £1?
[Ref: ZVTU]
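(Spoiler warning: if you want to audit the cash flows mechanically before looking up the answer, here is a minimal Python ledger. The variable names are ours, not part of the original puzzle.)

```python
# Each person pays £10 and gets £1 back, so each is out of pocket £9.
net_paid = 3 * (10 - 1)        # £27 in total
manager_keeps = 25
bellboy_keeps = 2

# The guests' £27 already INCLUDES the bellboy's £2; adding the £2 to
# the £27 (to get £29) counts it twice. The consistent ledger balances:
assert net_paid == manager_keeps + bellboy_keeps
print(net_paid)                # 27: nothing is missing
```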
Puzzle 2
Would you believe that this amazing sentence contains ninety two letters, one comma and a single question mark?
[Ref: ZQML] © Kevin Stone
Puzzle 3
You can imagine an arrow in flight, toward a target. For the arrow to reach the target, the arrow must first travel half of the overall distance from the starting point to the target. Next, the
arrow must travel half of the remaining distance.
For example, if the starting distance was 10m, the arrow first travels 5m, then 2.5m.
If you extend this concept further, you can imagine the resulting distances getting smaller and smaller. Will the arrow ever reach the target?
[Ref: ZJXZ]
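(One way to explore the question numerically, using the 10m example above; this Python sketch is ours, not part of the puzzle.)

```python
# The arrow's successive half-distances form a geometric series:
# 5 + 2.5 + 1.25 + ...  (starting distance 10m).
start = 10.0
distances = [start / 2 ** k for k in range(1, 31)]
partial = 0.0
for d in distances:
    partial += d
print(partial)            # creeps ever closer to 10 ...
print(partial < start)    # True: no finite number of half-steps reaches 10
```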
Puzzle 4
Imagine a prisoner who has been sentenced to death and told that he will be killed on one day of the following week. He has been assured that the day will be a surprise to him, so he will not be anticipating the hangman on any particular day, keeping his stress levels in check.
The prisoner starts to think to himself, if I am still alive on Thursday, then clearly I shall be hanged on Friday, this would mean that I then know the day of my death, therefore I cannot be hanged
on Friday. Now then, if I am still alive on Wednesday, then clearly I shall be hanged on Thursday, since I have already ruled out Friday. The prisoner works back with this logic, finally concluding
that he cannot after all be hanged, without already knowing which day it was.
While he was casually resting on his laurels in his prison cell on Tuesday, the warden arrived to take him to be hanged. The prisoner was obviously surprised!
Ponder this...
[Ref: ZAOX]
Puzzle 5
Consider an arrow in flight towards a target.
At any given moment of time, a snapshot could be taken of this arrow. In this snapshot, the arrow would not be moving. Let us now take another snapshot, leaving a very small gap of time between the two. Again, the arrow is stationary. We can keep taking snapshots for each moment of time, each of which shows the arrow to be stationary. Therefore the overall effect is that the arrow never moves; however, it still hits the target!
Where lies the flaw in the logic?
[Ref: ZNPA]
Kiddie Algebra
Published Online: February 10, 2009
Published in Print: February 11, 2009, as Kiddie Algebra
Rather than wait till students reach middle school, teachers are introducing children to algebra concepts in the early grades by tapping into their intuitive math skills.
Melissa Romano grew up attending school in classrooms that were quiet and orderly. And she liked it that way.
Today, as a 2nd grade teacher, Ms. Romano has learned to tolerate and even encourage more spirited discussion among her pupils, in the hope of cultivating their mathematical skills and, specifically,
their algebraic thinking.
As educators and policymakers search for ways to prepare students for the rigors of algebra, Ms. Romano, at Broadwater Elementary School, and other teachers in the Helena, Mont., school system are
starting early. They are among the teachers in a number of schools who are attempting to nurture students’ algebraic-reasoning ability, as well as their basic number skills, in early elementary
school, rather than waiting until middle or early high school.
—Larry Beckner for Education Week
To accomplish that aim, Ms. Romano says, it’s not enough that she simply present pupils with a problem, collect their answers, correct their mistakes, and move on.
She takes relatively simple problems, then expands them, integrating algebraic thinking along the way. She changes the conditions and calls for class discussions. And she asks individual children to
explain aloud: How do you know that?
As that give and take unfolds in class, “they’re talking,” Ms. Romano said, and “it’s loud.”
That process was “a huge, huge change for me,” said the educator, who began using the algebraic reasoning and number-skill model in 2005. “The first two years were a big learning curve on my part.”
When she was in elementary school herself, teachers gave her a math problem “and let me work on it,” she said. “And I didn’t understand half the math.”
Young, But Skilled
Ms. Romano and her colleagues in the 8,000-student Helena school district began to rethink math instruction, and the connection between basic arithmetic and algebra, four years ago, when they
attended an institute offered by the Northwest Regional Educational Laboratory, or NWREL, a nonprofit research and evaluation organization in Portland, Ore.
Those sessions focused on improving math instruction in the early grades. They placed a heavy emphasis on the principles of “cognitively guided instruction,” an influential approach to math teaching
and professional development first developed in the 1980s by Thomas P. Carpenter and other researchers at the University of Wisconsin-Madison.
A core idea of cognitively guided instruction is that young children arrive at school with a surprisingly strong set of intuitive math skills in areas such as understanding numbers and
problem-solving. Teachers who understand those skills, and how students’ math knowledge develops, can greatly improve their instruction, proponents of the methodology say.
Rather than emphasizing drill and procedure, cognitively guided instruction encourages teachers to nurture students’ broader understanding of the relationships between numbers, patterns, and the fact
that symbols can be used to represent numbers—skills that prove essential in algebra.
Long-Term Payoff
Education policymakers at all levels are grappling with the question of why students have difficulty in algebra and what can be done to help them. They reason that students who overcome challenges
and complete introductory algebra, or Algebra 1, relatively early in school have a jump on advanced math and, presumably, a broader array of skills they will need in college and in the job market.
That rationale has led two states, California and Minnesota, to phase in requirements that students take Algebra 1 in 8th, rather than 9th grade, though the California measure has been held up in
court. Many districts, meanwhile, are scrambling to improve teacher training in algebra and create intervention programs for students who cannot keep up.
Such struggles can be traced in part to schools’ narrow conception of algebra, Mr. Carpenter said. Many teachers present arithmetic as a tool for “getting answers” and, separately, algebra as a more
complicated study of relationships between numbers. A better approach, he said, is to zero in on the “big ideas” in arithmetic that will help students conquer algebra down the road.
“If you learn these big ideas early, there’s a lot less to learn” later in algebra courses, Mr. Carpenter said. “Learning with understanding pays off in the short run, and it pays off in the long run.”
If students, by contrast, approach math only by memorizing steps and procedures, he argues, their instruction leads to misconceptions that haunt them when they reach a full-fledged algebra course.
One common misconception Mr. Carpenter often cites involves the “equals” sign. Many students are mistakenly taught to regard the equals sign as signifying “the answer comes next,” as in 8 + 4 = __.
But that’s a mistaken belief, which is compounded when students reach algebra, where variables such as x and y can appear on both sides of the equation. The equals sign, in fact, connotes a
relationship between numbers—that both sides of the equation are of equal value, or in balance. Yet for many students, Mr. Carpenter said, “the misconception continues all the way through algebra.”
Linda Griffin, the director of the mathematics education unit at the Center for Classroom Teaching and Learning at NWREL, has led institutes in which she introduced teachers to principles of
cognitively guided instruction, or cgi. Her organization has trained about 700 teachers in algebraic reasoning and number sense since 2004, typically over four or five days. NWREL also encourages
districts to arrange for mentors and coaches to provide on-site support in those methods.
The idea of building algebraic reasoning in the elementary grades is a major departure for many teachers, Ms. Griffin said. Many were taught through their own experiences in school, and their
professional coursework, to emphasize procedural knowledge, as opposed to “making sense of mathematics,” she said.
Ms. Griffin and others who promote cgi strategies emphasize that they are not attempting to “teach algebra” to elementary school students in the strictest sense, so much as to “algebra-fy” early-grades math in ways that carry forward.
Schools tend to “act like algebra’s a whole new world,” Ms. Griffin said. “Done well, it shouldn’t be that way.”
Tutoring Parents, Too
Ms. Romano tries to make the shift from arithmetic to algebra in subtle ways.
During one recent class, she gave her 2nd graders a problem about geese flying in a V formation. She used flocks of different sizes—three geese, five geese, seven geese. If the geese fly in a perfect
V, she asked them, how many end up flying on each side?
Over the course of the class period, the pupils learned that with odd-numbered flocks, they could solve the problem by subtracting one for the lead goose and dividing the remainder by two. Ms. Romano moved on to ever-larger numbers: 49, 103. Eventually, she and the children worked out a formula they could use to solve the problem:
(X − 1) ÷ 2
Some students ended up making calculations into the hundreds. When others struggled, Ms. Romano encouraged them to draw pictures to illustrate their thinking. All told, the teacher spent 60 minutes
working on variations of that single problem.
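The class's rule can be sketched in a couple of lines (Python here; the function name is our own, used only for illustration):

```python
# Rule worked out in class: remove the lead goose, split the rest in two.
def geese_per_side(x):
    assert x % 2 == 1, "the lesson used odd-numbered flocks"
    return (x - 1) // 2

for flock in (3, 5, 7, 49, 103):
    print(flock, geese_per_side(flock))   # 3→1, 5→2, 7→3, 49→24, 103→51
```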
As Ms. Romano has been forced to change the way she thinks about math, she’s asked parents at her school to adjust their way of thinking, too.
Some mothers and fathers, when they see their children’s math homework, are eager to jump in and provide the answers for them. Ms. Romano urges them to hold back, and let children sort through
problems and provide explanations on their own.
Just because pupils can spit out an answer, that doesn’t mean they understand what they’re doing, the teacher tells parents. When parents understand the process, she said, “they are amazed at what
their kids can do.”
Talking It Over
The principles used in cognitively guided instruction have played a significant role in shaping curriculum and the overall thinking about children’s math skills in the early grades, said Douglas H.
Clements, a professor of learning and instruction at the State University of New York at Buffalo.
Mr. Clements has developed an early-grades curriculum that builds math skills through games and other means—work that is based on research about how young children learn math tasks. The scholar, who
says his own work has been shaped by principles of cgi, served on the National Mathematics Advisory Panel, a White House-commissioned group that last year issued recommendations on how to prepare
students for introductory algebra.
While Mr. Clements supports having students think and talk through math problems, he also says teachers face a challenge in knowing when they have to step in and correct children when they make clear
mistakes, so that students are not led astray mathematically.
“You can get so caught up in the talk, talk, talking,” Mr. Clements said. “Sometimes, a teacher has to say, ‘That’s wrong.’ ” Teachers’ time in class is limited, he noted. “You’ve got to use it for
students’ advantage.”
Marla Ernst, an elementary teacher in Oregon’s Lebanon Community School District who uses the algebraic-reasoning strategies discussed by NWREL, tries to correct students’ errors in ways that
encourage further discussion. She will ask them to explain a correct answer and diagnose where a wrong answer went awry.
“We don’t leave a lesson until it’s clear,” Ms. Ernst said. “Math is about finding a right answer, but it’s also about a process. You don’t want to lose either part.”
Vol. 28, Issue 21, Pages 21-23
Hovhannes Grigoryan
Thomas Jefferson National Accelerator Facility
Holographic dual model of QCD as an alternative tool for studying the strong interactions at low energies
At low energies, QCD is a strongly interacting theory, and nonperturbative methods such as the DSE formalism and lattice QCD have been developed to study it. Here, we discuss an alternative approach to performing nonperturbative calculations, inspired by the AdS/CFT correspondence, which relates a strongly interacting theory in 4D to a weakly interacting theory in 5D. This correspondence allows one to find the n-point functions of the strongly coupled 4D theory from perturbative calculations in 5D. Working in the framework of the AdS/QCD model, we develop a formalism to calculate various form factors of vector and pseudoscalar mesons. From the form factors of the rho meson we extract the values of observables such as the electric charge radius and the magnetic and quadrupole moments, and compare these with results from the DSE formalism and lattice QCD. For the pion, we express the electric charge radius and the decay constant in terms of the two parameters of the theory and compare these with experiment, ChPT, and the NJL model. Finally, we show how to incorporate the anomalous pion decay into the holographic model and calculate the pi → gamma-gamma form factor.
Back to the theory seminar page.
P.J. Besl, R.C. Jain, "Segmentation through Variable-Order Surface Fitting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 2, pp. 167-192, March, 1988.
BibTeX:
@article{ 10.1109/34.3881,
author = {P.J. Besl and R.C. Jain},
title = {Segmentation through Variable-Order Surface Fitting},
journal ={IEEE Transactions on Pattern Analysis and Machine Intelligence},
volume = {10},
number = {2},
issn = {0162-8828},
year = {1988},
pages = {167-192},
doi = {http://doi.ieeecomputersociety.org/10.1109/34.3881},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
}
RIS (RefWorks/ProCite/RefMan/EndNote):
TY - JOUR
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
TI - Segmentation through Variable-Order Surface Fitting
IS - 2
SN - 0162-8828
SP - 167
EP - 192
A1 - P.J. Besl,
A1 - R.C. Jain,
PY - 1988
KW - surface curvature sign labelling; computerised picture processing; variable-order surface fitting; image structure; piecewise-smooth surface model; surface coherence; bivariate functions;
noiseless image reconstruction; image segmentation; iterative region-growing method; computerised picture processing; iterative methods
VL - 10
JA - IEEE Transactions on Pattern Analysis and Machine Intelligence
ER -
The solution of the segmentation problem requires a mechanism for partitioning the image array into low-level entities based on a model of the underlying image structure. A piecewise-smooth surface
model for image data that possesses surface coherence properties is used to develop an algorithm that simultaneously segments a large class of images into regions of arbitrary shape and approximates
image data with bivariate functions so that it is possible to compute a complete, noiseless image reconstruction based on the extracted functions and regions. Surface curvature sign labeling provides
an initial coarse image segmentation, which is refined by an iterative region-growing method based on variable-order surface fitting. Experimental results show the algorithm's performance on six
range images and three intensity images.
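The core loop described in the abstract can be caricatured in a few dozen lines. The sketch below (assuming NumPy; a toy run on synthetic noiseless data, not the authors' algorithm, which also seeds regions from curvature-sign labels and uses more careful statistics) shows the idea: fit a bivariate polynomial to the current region, raise the order when the fit is poor, and admit neighbouring pixels that the fitted surface predicts well.

```python
import numpy as np

def design(ys, xs, order):
    # Bivariate polynomial basis of total degree <= order.
    return np.stack([xs ** i * ys ** j
                     for i in range(order + 1)
                     for j in range(order + 1 - i)], axis=1).astype(float)

def grow(img, seed, max_order=4, fit_tol=0.02, grow_tol=0.1):
    """Grow a seed region while a variable-order surface explains it."""
    region = seed.copy()
    order = 1
    while True:
        ys, xs = np.nonzero(region)
        A = design(ys, xs, order)
        c, *_ = np.linalg.lstsq(A, img[ys, xs], rcond=None)
        rms = float(np.sqrt(np.mean((A @ c - img[ys, xs]) ** 2)))
        if rms > fit_tol and order < max_order:
            order += 1                      # variable-order refinement step
            continue
        # 4-connected frontier of the current region
        cand = np.zeros_like(region)
        cand[1:, :] |= region[:-1, :];  cand[:-1, :] |= region[1:, :]
        cand[:, 1:] |= region[:, :-1];  cand[:, :-1] |= region[:, 1:]
        cand &= ~region
        cy, cx = np.nonzero(cand)
        ok = np.abs(design(cy, cx, order) @ c - img[cy, cx]) < grow_tol
        if not ok.any():
            return region, order
        region[cy[ok], cx[ok]] = True       # admit well-predicted pixels

# Synthetic noiseless range image: a quadratic valley plus a gentle tilt.
yy, xx = np.mgrid[0:32, 0:32]
img = 0.05 * (xx - 16) ** 2 + 0.02 * yy

seed = np.zeros((32, 32), dtype=bool)
seed[14:18, 14:18] = True
region, order = grow(img, seed)
print(region.sum(), order)    # 1024 2: a plane fails, a quadric covers the patch
```

The order-escalation test (`rms > fit_tol`) is the "variable-order" part: the plane cannot explain the quadratic valley, so the fit is promoted to second order, after which the surface extrapolates well enough to absorb the whole patch.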
[1] G. J. Agin and T. O. Binford, "Computer description of curved objects," inProc. 3rd Int. Joint Conf. Artificial Intelligence, Stanford, CA, Aug. 20-23, 1973, pp. 629-640.
[2] R. L. Anderson and E. E. Houseman,Tables of Orthogonal Polynomial Values Extended to N = 104, Iowa State College Agricultural and Mechanic Arts, Ames, IA, Res. Bull. 297, Apr. 1942.
[3] D. H. Ballard and C. M. Brown,Computer Vision. Englewood Cliffs, NJ: Prentice-Hall, 1982.
[4] S. Barnard, "A stochastic approach to stereo vision," inProc. 5th Nat. Conf. Artificial Intelligence, AAAI, Philadelphia, PA, August 11-15, 1986, pp. 676-680.
[5] R. H. Bartels and J. J. Jezioranski, "Least-squares fitting using orthogonal multinomials,"ACM Trans. Math. Software, vol. 11, no. 3, pp. 201-217, Sept. 1985.
[6] P. R. Beaudet, "Rotationally invariant image operators," inProc. 4th Int. Conf. Pattern Recognition, Kyoto, Japan, Nov. 7-10, 1978, pp. 579-583.
[7] G. Beheim and K. Fritsch, "Range finding using frequency-modulated laser diode,"Appl. Opt., vol. 25, no. 9, pp. 1439-1442, May 1986.
[8] P. J. Besl, "Surfaces in early range image understanding," Ph.D. thesis, EECS Dept., Univ. of Michigan, Ann Arbor, May 1986.
[9] P. J. Besl and R. C. Jain, "Three-dimensional object recognition,"ACM Comput. Surveys, vol. 17, no. 1, pp. 75-145, Mar. 1985.
[10] P. Besl and R. Jain, "Invariant surface characteristics for 3-D object recognition in range images,"Comput. Vision Graphics Image Processing, 1986, pp. 33-80, vol. 33.
[11] P. J. Besl, E. J. Delp, and R. C. Jain, "Automatic visual solder joint inspection,"IEEE J. Robotics Automation, vol. RA-1, pp. 42-56, Mar. 1985.
[12] B. Bhanu, "Representation and shape matching of 3-D objects,"IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-6, no. 3, pp. 340-350, May 1984.
[13] B. Bhanu, S. Lee, C. C. Ho, and T. Henderson, "Range data processing: Representation of surfaces by edges," inProc. Int. Pattern Recognition Conf., IAPR-IEEE, Oct. 1986, pp. 236-238.
[14] R. M. Bolle and D. B. Cooper, "Bayesian recognition of local 3-D shape by approximating image intensity functions with quadric polynomials,"IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-
6, no. 4, pp. 418-429, July 1984.
[15] R. C. Bolles and M. A. Fischler, "A RANSAC-based approach to model fitting and its application to finding cylinders in range data," inProc. 7th Int. Joint Conf. Artificial Intelligence,
Vancouver, B.C., Canada, Aug. 24-28, 1981, pp. 637-643.
[16] R. C. Bolles and P. Horaud, "3DPO: A three dimensional part orientation svstem,"Int. J. Robotics Res., vol. 5, no. 3, Fall 1986, pp. 3-26.
[17] B. A. Boyter, "Three-dimensional matching using range data," inProc. 1st Conf. Artificial Intelligence Applications, IEEE Comput. Soc., 1984, pp. 211-216.
[18] M. Brady, "Computational approaches to image understanding,"Comput. Surveys, vol. 14, pp. 3-71, Mar. 1982.
[19] M. Brady, J. Ponce, A. Yuille, and H. Asada, "Describing surfaces,"Comput. Vision, Graphics, Image Processing, vol. 32, pp. 1-28, 1985.
[20] C. Brice and C. Fennema, "Scene analysis using regions,"Artificial Intell., vol. 1, pp. 205-226, 1970.
[21] B. Carrihill and R. Hummel, "Experiments with the intensity ratio depth sensor,"Comput. Vision, Graphics, Image Processing, vol. 32, pp. 337-358, 1985.
[22] D. Chen, "A regression updating approach for detecting multiple curves," inProc. 2nd World Conf. Robotics Research, Scottsdale, AZ, Aug. 18-21, 1986, Paper RI/SME, MS86-764; alsoIEEE Trans.
Pattern Anal. Machine Intell., to be published.
[23] S. S. Chern, "A proof of the uniqueness of Minkowski's problem for convex surfaces,"Amer. J. Math., vol. 79, pp. 949-950, 1957.
[24] F. S. Cohen and D. B. Cooper, "Simple parallel hierarchical and relaxation algorithms for segmenting noncausal Markovian fields,"IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-9, pp.
195-219, March 1987.
[25] E. N. Coleman and R. Jain, "Obtaining shape of textured and specular surfaces using four-source photometry,"Comput. Graphics Image Processing, vol. 18, no. 4, pp. 309-328, Apr. 1982.
[26] G. R. Cross and A. K. Jain, "Markov random field texture models,"IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-5, pp. 25- 39, 1983.
[27] C. Dane, "An object-centered three-dimensional model builder," Ph.D. dissertation, Dep. Comput. Inform. Sci., Moore School Elec. Eng., Univ. Pennsylvania, Philadelphia, 1982.
[28] W. W. Daniel,Applied Nonparametric Statistics. Boston, MA: Houghton-Mifflin, 1978.
[29] L. S. Davis, "A survey of edge detection techniques,"Comput. Graphics Image Processing, vol. 4, pp. 248-270, 1975.
[30] H. Derin and H. Elliott, "Modeling and segmentation of noisy and textured images using Gibbs random fields,"IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-9, pp. 39-55, Jan. 1987.
[31] S. Dizenzo, "Advances in image segmentation,"Image and Vision Comput., vol. 1, no. 4, pp. 196-210, Nov. 1983.
[32] T. G. Fan, G. Medioni, and R. Nevatia, "Description of surfaces from range data using curvature properties," inProc. Computer Vision and Pattern Recognition Conf., IEEE Comput. Soc., Miami, FL,
June 22-26, 1986, pp. 86-91.
[33] O. D. Faugeras and M. Hebert, "The representation, recognition, and locating of 3-D objects,"Int. J. Robotics Res., vol. 5, no. 3, Fall 1986, pp. 27-52.
[34] O. D. Faugeras, M. Hebert, and E. Pauchon, "Segmentation of range data into planar and quadric patches," inProc. 3rd Computer Vision and Pattern Recognition Conf., Arlington, VA, 1983, pp. 8-
[35] I. Faux and M. Pratt,Computational Geometry for Design and Manufacture. Ellis Horwood, 1979.
[36] F. P. Ferrie and M. D. Levine, "Piecing together 3D shape of moving objects: An overview," inProc. Computer Vision and Pattern Recognition Conf., IEEE Comput. Soc., San Francisco, CA, June 9-13,
1985, pp. 574-584.
[37] K. S. Fu and J. K. Mui, "A survey on image segmentation,"Pattern Recognition, vol. 13, pp. 3-16, 1981.
[38] S. Geman and D. Geman, "Stochastic relaxation, gibbs distributions, and Bayesian restoration of images,"IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-6, no. 6, pp. 721-741, Nov. 1984.
[39] B. Gil, A. Mitiche, and J. K. Aggarwal, "Experiments in combining intensity and range edge maps,"Comput. Vision, Graphics, Image Processing, vol. 21, pp. 395-411, Mar. 1983.
[40] D. Gilbarg and N. Trudinger,Elliptic Partial Differential Equations of Second Order. Berlin: Springer-Verlag, 1983.
Index Terms:
surface curvature sign labelling; computerised picture processing; variable-order surface fitting; image structure; piecewise-smooth surface model; surface coherence; bivariate functions; noiseless
image reconstruction; image segmentation; iterative region-growing method; computerised picture processing; iterative methods
P.J. Besl, R.C. Jain, "Segmentation through Variable-Order Surface Fitting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 2, pp. 167-192, March 1988, doi:10.1109/
The first part of this Blog is about the Triangular numbers, related to the Number 3, the Holy Trinity.
The second part shows that Pascal’s Triangle (called Meru’s Mountain in Mystics), the Binomial Expansion, contains every Possible Mystical Number Pattern (including the Triangular Numbers) you can imagine.
Pascal Triangle also shows that our Universe is a combinatorial miracle. It explores every possibility, is always in balance, expands and moves back to the beginning which is and was the Void, the
Empty Set, the merge of Every Paradox, that is Possible.
About Mystical Number-Patterns
The Sēpher Yəṣîrâh (Book of Formation or Book of Creation, ספר יצירה) is the oldest book on Jewish Mysticism. The Sefer Yetzirah describes how the universe was created by the “God of Israel” through
32 Wondrous Ways of Wisdom.
The Number 32 is the Sum of the 10 Sephirot and the 22 Letters of the Hebrew Alphabet.
The Sephirot is related to the Tetraktys of Pythagoras. The Tetraktys embodies the Four main Greek Cyclical (Platonic) Musical Harmonies: the Fourth (4:3), the Fifth (3:2), the Octave (2:1) and the Double Octave (1:4).
1+2+3+4 = 10. 10 is the 4th Triangular Number. The Nth Triangular Number is the Sum of the numbers 1 to N. This Sum is equal to N(N+1)/2.
Between the 10 Sephirot run 22 Channels or Paths which connect them.
The Sephirot are the Points of the Tetraktys. The Hebrew Letters are the Lines between the Points. The Lines of the Sephirot and the Tetraktys create a Cube (6) at the Top and a Tetrahedron (4) at
the Bottom.
The Letters of the Hebrew Alphabet are divided in the 3 Mother Letters (אמש, the Trinity), the Seven Doubles (The Planets) and the Twelve Simples (the Zodiac).
When you analyse the Sepher Yeshirah the Cube of Space (the Kaaba) appears out of the Hebrew Alphabet. The Kaaba is related to the Seventh Planet, Saturn.
The 3 Axis of the Cube of Space are the Trinity, the 6 (2×3) Faces of the Cube stand for the Planets with the 7th Saturn, the Son of the Central Sun (3+1 (Center)+3) in the Center and the 12 (4×3)
Boundary Lines of the Cube represent the 12 Signs of the Zodiac.
As you can see the Number Three, the Triangle, plays an important role. It is the First Structure that is Closed in Itself and is therefore Topological related to the Circle. The Circle (and the
Triangle) is able to rotate With and Against the Clock. The property is called Spin in Physics.
It is very important to realize that Everything Rotates in our Universe around a Central Object that rotates around another Central Object. The Central Object Gives Time, determines the Rhythm, or Harmonics, of the Rotation Structure.
The Trinity rotates around the Void. The 7 Chakra’s of the Human rotate around the 4th Chakra, the Heart Chakra, The Planets rotate around the Sun and the Sun rotates around the Central Black Hole.
The arrow of Sagittarius points to this Black Hole.
On a Six Sided Dice the Sum of all the Numbers is Seven (1+6,2+5,3+4). The Sum of the Six Numbers is 3 X 7 = 21. If we add the Center (Saturn) the Number 22 appears.
22/7 is a good approximation of the number π. π relates the Square (and the Cube) to the Circle.
The Cube of Space symbolizes the Playing Board of the Game of Life. On the Playing Board we have a Free Choice to move into the many Paths that are available. Every Path has its own Probability and
this Probability can be calculated. If we don’t know what to do we could throw a Dice.
The Cube of Space contains the same six lines that exist in the I Ching. Four of the lines are of equal length, the other two, the diagonals, are longer. For this reason symmetry cannot be statically
produced and the Dance (of Shiva) results.
The Circle represents the Cycles of Time of the Matrix of the Demiurg. Behind all the Probabilities of all the Possible Paths lies a Hidden Order.
A Hexagram, represented by the Star of David, is a Two-Dimensional (Orthographic) projection of a Cube. A Symmetric Projection of the Cube creates a Cross.
One of the many meanings of the first word in the Bible, "Bereshit", is "They (Elohim) created Six", which means that in Six Stages of the Time Cycle the Cube of Space (or the Hexagram) was populated. On the Seventh Day the Center was filled.
The book of Genesis does not describe the creation of the Trinity (They, Elohim, 1+2+3, 1x2x3) itself. This stage was later covered in the Zohar.
In my blog “About the Sum of Things” it is shown that Six Stages are part of an Expansion Pattern governed by the Powers of Two. After 2**6 (64) Expansions (or Compressions) the Same Fractal Pattern
repeats itself on a higher level.
64 is the Number of the I Tjing and the Game of Chess. The number 32 of the Sepher Yeshirah is 64/2 and is a Contraction of the I Tjing.
The I Tjing is a contraction of the oldest Divination System in the Word called FA. FA is still used all over the world by the followers of the oldest wisdom-system created by the YOrubA in Africa.
The Yoruba lived at the place where the ancient Paradise was situated.
About the Triangular Numbers
The Tetraktys contains the Numbers 1, 3, 6 and 10. These numbers are called Triangular Numbers.
The number 21 is also a Triangular Number because it is the Sum of the Sixth Level of the Tetraktys, the Numbers 1 to 6.
The Fifth Level of the Tetractys is related to the Number 15 (1+2+3+4+5). This number connects the Tetractys and the Sephirot to the 3×3 Lo Shu Magic Square also called the Seal of Saturn.
The nth Triangle number T(n) is the number of dots in a triangle with n dots on a side; it is the sum of the n natural numbers from 1 to n. T(n)=n(n+1)/2.
The Triangular Numbers contain the Perfect Numbers. A perfect number is a positive integer that is equal to the sum of its proper positive divisors, that is, the sum of its positive divisors
excluding the number itself. Six (1+2+3=1x2x3) is the first Perfect Number and 28 (1+2+4+7+14) is the next.
The Sum of two adjacent Triangular Numbers T(n) + T(n+1) is a Square Number because Two Triangels can be combined in a Square. 1+3=2**2 and 3+6=3**2.
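These identities are easy to check in a few lines of Python (a quick sketch; the helper names are ours):

```python
# Triangular numbers: T(n) = n(n + 1)/2, the sum of 1..n.
def triangular(n):
    return n * (n + 1) // 2

# A perfect number equals the sum of its proper divisors.
def is_perfect(n):
    return n == sum(d for d in range(1, n) if n % d == 0)

# 6 and 28 are perfect, and both are triangular (T(3) and T(7)).
perfect = [n for n in range(2, 30) if is_perfect(n)]

# Two adjacent triangles tile a square: T(n) + T(n+1) = (n+1)**2.
squares_ok = all(
    triangular(n) + triangular(n + 1) == (n + 1) ** 2
    for n in range(1, 100)
)
```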
There are many relationships between the Triangular Numbers. These relationships were the focus of the research of the Mystical Group of the Mathematikoi of Pythagoras.
6 (Bereshit, the Cube, the Hexagram) + the 22 Letters of the Hebrew Alphabet = 28, the Next Perfect Number (1+2+3+4+5+6+7).
28 is, like the numbers 6 and 15, also a Hexagonal Number. As you can see in the picture below, 28 is the fourth Hexagonal Number. As we have seen before a Hexagon is a Projection of a Cube, so 28 represents a Cube in a Cube in a Cube. A Cube in a Cube is called a Tesseract or a HyperCube.
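A quick check of the hexagonal claims (sketch code; the helper names are ours). Every hexagonal number is also triangular, since H(n) = n(2n-1) = T(2n-1):

```python
# Hexagonal numbers: H(n) = n(2n - 1) -> 1, 6, 15, 28, ...
def hexagonal(n):
    return n * (2 * n - 1)

# Triangular numbers: T(n) = n(n + 1)/2.
def triangular(n):
    return n * (n + 1) // 2

first_four = [hexagonal(n) for n in range(1, 5)]  # [1, 6, 15, 28]

# H(n) = T(2n - 1): every hexagonal number is triangular.
all_hexagonals_triangular = all(
    hexagonal(n) == triangular(2 * n - 1) for n in range(1, 50)
)
```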
The first sentence in Genesis ("In the beginning Elohim created Heaven and Earth") contains 7 words and 28 letters. This indicates that the Creation Process was already in the 7th stage of the Tetraktys and in its 2nd Fractal Expansion, the Birth of the Material Universe.
The sum of the entire verse is the 73rd Triangular Number. The prime Numbers 37 and 73 are geometrically related. They form the third and the fourth term in the sequence of Star Numbers (1, 13, 37, 73, 121).
Hexagon/Star pairs are closely related to Triangular numbers. Their product is always a Triangle, and they can be symmetrically generated from a Pair of Triangles.
About Pascal’s Triangle
When a number represents a Geometric Structure it is called a Figurative Number.
Every possible figurative number is generated by the Triangle of Pascal.
The Fractal Sierpinsky Triangle is the Triangle of Pascal Modulo 2.
The Triangle of Pascal was known long before Pascal (re)discovered it.
It was known in Ancient India as the Meru Prastara and in China as the Yang Hui. Meru Prastara relates the triangle to a Mystical Mountain called Mount Meru. Mount Meru is also implemented in the Sri Yantra.
The Triangle shows the Coefficients of the Function F(X,Y) = (X+Y)**n. If n=0 F(X,Y)=1 and if n=1 F(X,Y)=X+Y so the Coefficients are (1,1).
Pascal’s Triangle is a 2-Dimensional System based on the Polynomial (X+Y)**N. It is always possible to generalize this structure to Higher Dimensional Levels. 3 Variables ((X+Y+Z)**N) generate The Pascal Pyramid and n Variables ((X+Y+Z+…)**N) generate The Pascal Simplex.
The rows of Pascal’s Triangle add up to the Powers of 2: the sum of row 0 is 2**0 = 1 and the sum of row 1 is 2**1 = 2.
The Sum of the rows of the higher n-dimensional versions of the Triangle is n**N where n is the Amount of Variables and N the level of expansion. So the Sum of Pascal’s Pyramid (3 variables X,Y,Z)
is 3**N.
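The row-sum claim is easy to verify (a minimal sketch; the function name is ours):

```python
# Build row n of Pascal's Triangle by repeatedly adding a row
# to a shifted copy of itself.
def pascal_row(n):
    row = [1]
    for _ in range(n):
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return row

# Each row sums to 2**n, as the text states.
row_sums = [sum(pascal_row(n)) for n in range(8)]
```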
The most interesting property of the Triangle is visible in the Diagonals.
The First Diagonal contains only 1's. The Ones represent Unique Objects. They are the Points in the Tetraktys.
The Second Diagonal contains the natural numbers. These Numbers are used to Count Objects that are The Same. The Natural Numbers are the Lines that connect the Points. The Natural Numbers are the Sum
of the previous Ones.
The Third Diagonal contains the triangular numbers. The Triangular Numbers are the Sum of the previous Natural Numbers.
This pattern repeats itself all the time.
The Fourth Diagonal contains the tetrahedral numbers (Pyramid Numbers) and the Fifth Diagonal, the pentatope numbers.
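The diagonals can be read off directly from binomial coefficients, since the k-th diagonal is C(n+k, k) (a sketch using Python's `math.comb`):

```python
from math import comb

# The k-th diagonal of Pascal's Triangle is C(n + k, k):
# k=0 -> all 1s, k=1 -> natural numbers, k=2 -> triangular numbers,
# k=3 -> tetrahedral numbers, k=4 -> pentatope numbers.
def diagonal(k, length):
    return [comb(n + k, k) for n in range(length)]
```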
Fermat stated that Every Positive Integer is a Sum of at most three Triangular numbers, four Square numbers, five Pentagonal numbers, and n n-polygonal numbers.
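The triangular case of Fermat's claim (later proved by Gauss) can be verified by brute force for small numbers (a sketch; the function names are ours, and T(0) = 0 is allowed as padding):

```python
# All triangular numbers up to a limit, including T(0) = 0.
def triangular_upto(limit):
    ts, n = [], 0
    while n * (n + 1) // 2 <= limit:
        ts.append(n * (n + 1) // 2)
        n += 1
    return ts

# Is m a sum of at most three triangular numbers?
def is_sum_of_three_triangulars(m):
    ts = triangular_upto(m)
    tset = set(ts)
    return any(m - a - b in tset for a in ts for b in ts)

checked = all(is_sum_of_three_triangulars(m) for m in range(1, 500))
```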
The Tetrahedron with basic length 4 (summing up to 20) can be looked at as the 3-Dimensional analogue of the Tetraktys.
The Diagonals of the Triangle of Pascal contain every Possible 2-Dimensional Figurative Number (and Structure).
These Numbers are Projections of Higher Dimensional Numbers and Higher Dimensional Structures.
The Higher Dimensional Versions of the Triangle (the Pascal Pyramid, The Pascal Simplex) contain these structures.
The Rows of the Triangle Sum to the Powers of Two (2 Dimensions). These Powers control the Levels of Expansion.
Every 7th step the Fractal Pattern of the Triangle repeats itself on a higher Level.
The Figurative Numbers are the Geometric Shapes that are created by the Lines of the Natural Numbers that are connecting the Points of the One.
Pascal’s Triangle also contains the numbers of the Fibonacci Sequence (“The Golden Spiral“).
When we take the Modulo 9 (the Digital Root of Pythagoras) of the Numbers of Fibonacci, a repeating pattern of 24 steps shows itself that can be represented by a Star Tetrahedron or Stella Octangula. The Star Tetrahedron is a Three-Dimensional Star of David.
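The 24-step repetition is easy to confirm (a sketch; note that plain modulo 9 writes 0 where the digital root would write 9):

```python
# First `count` Fibonacci numbers reduced modulo 9.
def fib_mod9(count):
    seq, a, b = [], 1, 1
    for _ in range(count):
        seq.append(a % 9)
        a, b = b, a + b
    return seq

first_48 = fib_mod9(48)
# The sequence repeats with period 24 (the Pisano period of 9),
# and 24 is the shortest such period (12 does not work).
period_is_24 = first_48[:24] == first_48[24:48]
```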
Every Figurative Number N is the Sum of the Figurative Numbers N-1. Every Geometric Shape is a combination of all the Previous Geometric Shapes.
This means that Every Geometric Shape is in the end The Sum of the Sum of the Sum of …. Triangles, Trinities (Elohim) or Triangular Numbers and therefore an Extension of the Tetraktys of Pythagoras.
The Expansion of the Whole is a (Fractal) Combination of Combinations.
The Triangle of Pascal is related to the so called Binomial Theorem which is used in Combinatorics and Probability Theory to describe the Amount of Combinations of a Set of Objects.
The rows of the Triangle of Pascal also shows the Bell Shaped Pattern of the Normal Distribution.
The Probability Distribution of the Triangle of Pascal converges to the Normal Distribution because of the Central Limit Theorem. Every Row has a Mean of N/2 and a Standard Deviation of (N**1/2)/2 (a Variance of N/4), which means that with every new Row the Mean and the Spread become Bigger and Bigger.
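As a numeric check, row N divided by its sum 2**N is the Binomial(N, 1/2) distribution, with mean N/2 and variance N/4 (a sketch; the function name is ours):

```python
from math import comb, sqrt

# Mean and variance of row n of Pascal's Triangle, read as the
# Binomial(n, 1/2) probability distribution.
def row_mean_var(n):
    total = 2 ** n
    mean = sum(k * comb(n, k) for k in range(n + 1)) / total
    var = sum((k - mean) ** 2 * comb(n, k)
              for k in range(n + 1)) / total
    return mean, var
```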
The Triangle of Pascal and therefore the Figurative Numbers describe Everything that is Possible but every Expansion of the Triangle is less Likely to Occur.
Because of the Fractal Expansion/Contraction Pattern The Cube of Space, related to the Element Earth, explains Everything there is to Know on Our Level of Existence, Mother Earth.
The interesting part of the Figurative Numbers is that they represent Visual Patterns with which we can Reason.
We don’t need complex formulas because we can See what is Possible.
The interesting part of the Triangle of Pascal is that we can See that the Complex Figurative Structures are created out of a very Simple Structure, the Triangle.
If we want to understand our Reality we have to begin with looking at the Beginning and not start somewhere in the Middle.
If we look at the Fractal Expansion Pattern of the Triangle we See that Every new Stage is an Expansion Out of the Middle.
The Expansion of the Human, the Next Step in our Evolution, is therefore an Expansion Out of the Heart, the Balance of Father Sky and Mother Earth.
Life is not only about Me and the Other.
Life is also about the Relationship between Me and the Other.
If we don’t Collaborate the Next stage in our Evolution will never happen.
The Content of the Sepher Yesirah
About the Cube of Space and Psychology
About the Sepher Yesirah and the I Tjing
A correspondence table of the Cube of Space
About the Sri Yantra and Plato
About the Lo Shu and the I Tjing
All kind of strange relationships between Triangular Numbers
A website about Mystical Number Theory
About Combining the Combinations
About the Golden Spiral and Plato
About Pascal’s Triangle and the Normal Distribution
A complete course in elementary Number Theory
About the Psychology of the Cube of Space
About the Tetraktys and the Zodiac
About the Process Theory of Paul Young
About the Theory of Dewey B. Larsson
Mysteries of the Equilateral Triangle
About Visual Patterns in Number Theory
[SciPy-user] shape problem after flipud
Robert Kern robert.kern@gmail....
Thu Jun 14 16:04:26 CDT 2007
Dominik Szczerba wrote:
> Hi,
> The following trivial codelet does not work as expected:
> -------------------------------
> from scipy import *
> import copy
> shape = (256,256)
> data = zeros(256*256)
> data.shape = shape
> print 'old shape', data.shape
> print data
> data=flipud(data)
> data.shape=(256*256,)
> print 'new shape', data.shape
> -------------------------------
> exiting with an uncomprehensive error:
> data.shape=(256*256,)
> AttributeError: incompatible shape for a non-contiguous array
> If 'flipud' is ommited, it works as expected. I tried via a deepcopy,
> the problem persists. Why should flipud invalidate 'reshapeability'?
Assigning to .shape only adjusts the strides. It does not change any of the
memory. It will only let you do that when the memory layout is consistent with
the desired shape. flipud() just gets a view on the original memory by using
different strides; the result is non-contiguous. The memory layout is no longer
consistent with the flattened view that you are requesting. Here is an example:
In [25]: data = arange(4)
This is the layout in memory for 'data' and (later) 'd2':
In [26]: data
Out[26]: array([0, 1, 2, 3])
In [29]: data.shape = (2, 2)
In [30]: data
Out[30]:
array([[0, 1],
       [2, 3]])
In [31]: d2 = flipud(data)
In [32]: d2
Out[32]:
array([[2, 3],
       [0, 1]])
Calling .ravel() will copy the array if it is non-contiguous and will show you
the memory layout that 'd2' is mimicking with its strides.
In [33]: d2.ravel()
Out[33]: array([2, 3, 0, 1])
Assigning to .shape will only let you do that if the memory layout is consistent
with the view that the array is trying to do.
In [52]: import copy
In [53]: d3 = copy.deepcopy(d2)
In [54]: d3
Out[54]:
array([[2, 3],
       [0, 1]])
In [55]: d3.shape = (4,)
In [56]: d3
Out[56]: array([2, 3, 0, 1])
copy.deepcopy() should have worked. I don't know why it didn't for you. However:
> What am I doing wrong?
You will want to use numpy.reshape() if you want the most foolproof and
idiomatic way to get a reshaped array. It will copy the array if necessary.
In [57]: reshape(d2, (4,))
Out[57]: array([2, 3, 0, 1])
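The stride argument above can be illustrated without numpy at all (a sketch; the helper names below are ours, not numpy's):

```python
# A strided "view" reads buf[offset + i*strides[0] + j*strides[1]]
# without copying memory.
def gather2d(buf, offset, shape, strides):
    return [buf[offset + i * strides[0] + j * strides[1]]
            for i in range(shape[0]) for j in range(shape[1])]

buf = [0, 1, 2, 3]                        # 2x2 array, row-major
orig = gather2d(buf, 0, (2, 2), (2, 1))   # original: [0, 1, 2, 3]
flip = gather2d(buf, 2, (2, 2), (-2, 1))  # flipud view: [2, 3, 0, 1]

# A flat (4,) view needs one start offset and one constant stride.
# Enumerate every such view over the same 4-element buffer:
flat_views = []
for start in range(4):
    for step in (-3, -2, -1, 1, 2, 3):
        idx = [start + k * step for k in range(4)]
        if all(0 <= i < 4 for i in idx):
            flat_views.append([buf[i] for i in idx])

# [2, 3, 0, 1] is not among them, which is why assigning .shape
# fails on the flipped view and a copy (reshape/ravel) is needed.
```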
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
More information about the SciPy-user mailing list
Mass is energy
The USS Enterprise in 1964 (pre Zefram Cochrane era), during Operation Sea Orbit when it sailed around the world in 65 days without refuelling - demonstrating the capability of nuclear-powered ships.
Credit: US Navy.
Some say that the reason you can't travel faster than light is that your mass will increase as your speed approaches light speed so, regardless of how much energy your star drive can generate, you reach a point where no amount of energy can further accelerate your spacecraft because its mass is approaching infinity.
This line of thinking is at best an incomplete description of what's really going on and is not a particularly effective way of explaining why you can't move faster than light (even though you can't). However, the story does offer some useful insight into why mass is equivalent to energy, in accordance with the relationship e=mc^2.
Firstly, here's why the story isn't complete. Although someone back on Earth might see your spacecraft's mass increase as you move near light speed, you certainly aren't going to notice your spacecraft's, or your own, mass change at all. Within your spacecraft, you would still be able to climb stairs, jump rope and, if you had a set of bathroom scales along for the ride, you would still weigh just the same as you did back on Earth (assuming your ship is equipped with the latest in artificial gravity technology that mimics conditions on Earth's surface).
The change perceived by an Earth observer is just relativistic mass. If you hit the brakes and returned to a more conventional velocity, all the relativistic mass would go away and an Earth observer would just see you retaining the same proper (or rest) mass that the spacecraft and you had before you left Earth.
The Earth observer would be more correct to consider your situation in terms of momentum energy, which is a product of your mass and your speed. So as you pump more energy into your star drive system, someone on Earth really sees your momentum increase but interprets it as a mass increase, since your speed doesn't seem to increase much at all once it is up around 99% of the speed of light. Then when you slow down again, although you might seem to be losing mass you are really offloading energy, perhaps by converting your kinetic energy of motion into heat (assuming your spacecraft is equipped with the latest in relativistic braking technology).
As the ratio of your velocity to light speed approaches 1, the ratio of your relativistic mass to your rest mass grows without bound - i.e. it approaches infinity.
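The asymptotic growth is easy to tabulate (a sketch with c normalized to 1; the function name is ours):

```python
from math import sqrt

# Lorentz factor with c = 1: gamma(v) = 1 / sqrt(1 - v**2).
# An Earth observer reads gamma(v) * rest_mass as "relativistic mass".
def gamma(v):
    return 1.0 / sqrt(1.0 - v * v)

# gamma grows without bound as v approaches 1.
ratios = {v: gamma(v) for v in (0.5, 0.9, 0.99, 0.999)}
```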
From the perspective of the Earth-based observer, you can formulate that the relativistic mass gain observed when travelling near light speed is the sum of the spacecraft's rest mass/energy plus the kinetic energy of its motion, all divided by c^2. From that you can (stepping around some moderately complex math) derive that e=mc^2. This is a useful finding, but it has little to do with why the spacecraft's speed cannot exceed light speed.
The phenomenon of relativistic mass follows a similar, though inverse, asymptotic relationship to your speed. So as you approach light speed, your relativistic time approaches zero (clocks slow),
your relativistic spatial dimensions approach zero (lengths contract) and your relativistic mass grows towards infinite.
But as we've covered already, on the spacecraft you do not experience your spacecraft gaining mass (nor does it seem to shrink or have its clocks slow down). So you must interpret your increase in momentum energy as a genuine speed increase, at least with respect to a new understanding you have developed about speed.
When you approach light speed and still keep pumping more energy into your drive system, what you find is that you keep reaching your destination faster, not so much because you are moving faster, but because the time you estimated it would take you to cross the distance from point A to point B becomes perceivably much less; indeed the distance between point A and point B also becomes perceivably much less. So you never break light speed, because the distance-over-time parameters of your speed keep changing in a way that ensures that you can't.
In any case, consideration of relativistic mass is probably the best way to derive the relationship e=mc^2, since the relativistic mass is a direct result of the kinetic energy of motion. The relationship does not easily fall out of consideration of (say) a nuclear explosion, since much of the energy of the blast derives from the release of the binding energy which holds a heavy atom together. A nuclear blast is more about energy transformation than about matter converting to energy, although at a system level it still represents genuine mass-to-energy conversion.
Similarly, you might consider that your cup of coffee is more massive when it's hot and gets measurably less massive when it cools down. Matter, in terms of protons, neutrons, electrons and coffee, is largely conserved throughout this process. But, for a while, the heat energy really does add to the mass of the system, although since it's a mass of m=e/c^2, it is a very tiny amount of mass.
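An order-of-magnitude sketch of the coffee-cup claim via m = E/c^2; the cup size, temperature change and water properties below are illustrative assumptions, not figures from the article:

```python
# How much mass does the heat in a hot cup of coffee carry?
c = 2.998e8                  # speed of light, m/s
specific_heat = 4186.0       # water, J/(kg*K)
cup_kg, delta_T = 0.3, 60.0  # a 300 g cup cooling by 60 K

energy_J = cup_kg * specific_heat * delta_T  # heat released, ~75 kJ
delta_mass_kg = energy_J / c ** 2            # ~8e-13 kg: tiny indeed
```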
3.6 / 5 (11) Nov 21, 2011
Not sure why this is on physorg. No new finding, invention, paper, technology or otherwise novel thing is covered here. This is just basic Relativity stuff.
Which is pretty much summed up by the author description in the original article:
Steve Nerlich is a very amateur Australian astronomer, publisher of the Cheap Astronomy website and the weekly Cheap Astronomy Podcasts and one of the team of volunteer explainers at Canberra
Deep Space Communications Complex - part of NASA's Deep Space Network.
4.4 / 5 (8) Nov 21, 2011
@antialias : believe me, if you really understand every word of this article and with understanding i mean that you really know what you can do with it, and that it is technically totally possible to do some of the stuff, then you don't want to see any novel looking discovery on this page ever again, which just fills up your mind and just prevents you from understanding.
I think clearing the stuff up is much more important than just showing another new looking discovery which has actually been discovered many years before but maybe in a slightly different version.
And i think physorg now totally recognized this.
5 / 5 (1) Nov 21, 2011
Ok, but I actually thought like you when I read the title the first time. But then I saw the date in the title ("The USS Enterprise in 1964..."), which is actually a slight hint, I think. I guess we are about 50 to 100 years away from really understanding.
2 / 5 (2) Nov 21, 2011
Good article. But I have a small question: Let's say I'm traveling to a planet that's 10ly away. And I manage to accelerate to 20% of the speed of light. When the time comes to decelerate close to my
destination, won't all that savings in time-dilation be "taken back" during my slowing down? Or as a passenger on my ship, will the trip actually appear to take a shorter time than a
straight-forward, non-time-dilated calculation would predict?
I love this stuff.
1.5 / 5 (4) Nov 21, 2011
Relativistic effects are mind-blowing. Question: if two ships accelerate away from each other, would they only have to reach 1/2 the speed of light each to achieve a relative light speed between them?
Could they exceed the speed of light relative to each other, since each would only need to achieve slightly more than 50% of the speed of light?
5 / 5 (2) Nov 21, 2011
Yes to the above: as seen by you, they are moving away from each other at greater than c, but in the frame of reference of one ship relative to the other, no, they are moving at less than c.
Of course, I'm just repeating what I learned watching Susskind's lectures on youtube, to the best of my recollection. I think it's lecture 6 of the cosmology series where he explains this.
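The two-ships exchange above is exactly the relativistic velocity-addition formula, w = (u + v)/(1 + uv/c^2). A quick check, with velocities in units of c:

```python
# Relativistic velocity addition: w = (u + v) / (1 + u*v/c^2).
# Working in units of c, so c = 1.
def add_velocities(u, v):
    return (u + v) / (1.0 + u * v)

# Two ships each doing 0.5c away from a middle observer:
print(add_velocities(0.5, 0.5))   # 0.8 -- each ship sees the other at 0.8c, not c
# Even at 0.9c each, the result stays below c:
print(add_velocities(0.9, 0.9))   # ~0.9945
```

From the middle observer's frame the gap between the ships does grow faster than c, but neither ship ever measures the other moving at or above c.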
4 / 5 (1) Nov 21, 2011
No Mayday, it would increase the total time compared to flying right past the planet at full speed. But overall it wouldn't be "taken back" relative to only being able to
accelerate to anything less. Remember acceleration is velocity increasing over time, so even if you could keep increasing speed for 5 of those light years, your velocity would inevitably be
increasing that whole time.
not rated yet Nov 21, 2011
Given what he says about a nuclear explosion, about how much mass is converted to energy in, say, a 1 megaton blast, vs atomic bond energy?
2 / 5 (1) Nov 21, 2011
Some people spoil great articles. As nature proves, things can move faster, like the redshift seen from distant galaxies, or particles that enter our atmosphere at such high speeds that
we can still detect them at ground level (the particle's clock slows down). To travel faster than light one needs to bend empty space, or create an object on which the vacuum pressure is not
equal on all sides. In short, to travel faster one needs to abandon rocket-based ideas and hack into the framework of our dimensions, wormholes etc. It's not easy, but impossible? Well, not for now... but never say no,
I think.
5 / 5 (1) Nov 21, 2011
No, you don't pay it back. Even when slowing down you are still 'saving time' because you are still going faster than when at rest.
A megaton is a million tonnes (metric tons) of TNT, where the energy of a gram of TNT is defined as 4184 Joules (one kCal). A tonne is a million grams, so that is 4.2x10^15 Joules.
c is 3x10^8 metres/sec, so c^2 is 9x10^16, which is ~20 times bigger. So a 1 megaton H-bomb converts about 47 grams of matter to energy.
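The same arithmetic can be run directly (note that c^2 is about 9x10^16 m^2/s^2, which puts the mass equivalent of one megaton near 47 grams):

```python
# Mass equivalent of a 1-megaton blast via E = m c^2.
C = 299_792_458.0                  # speed of light, m/s
E_PER_GRAM_TNT = 4184.0            # J per gram of TNT, by definition
megaton_J = E_PER_GRAM_TNT * 1e12  # 1 Mt = 1e6 tonnes = 1e12 grams of TNT

mass_kg = megaton_J / C**2         # m = E / c^2
print(f"1 Mt = {megaton_J:.3e} J, mass equivalent = {mass_kg * 1000:.1f} g")
```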
5 / 5 (1) Nov 21, 2011
Alfredh H, 5:45 minutes into the 4th lecture
1 / 5 (2) Nov 21, 2011
baaaaaaaaaad article
worst ever
3.5 / 5 (4) Nov 22, 2011
Reading this article did not clear up any relativistic problems anybody may have; in fact it would just add to the confusion.
The problem a lay person has boils down to not appreciating a speed limit.
The best explanation would avoid any reference to a speed limit and fix the perceived problem from an alternate viewpoint.
Look at velocity from the point of view of the traveler. Question: How long would it take me to fly to the nearest star which is 4 light years away?
Answer: depends how fast you go. Given a very powerful engine you could be there in a year, a month, a day, an hour or whatever.
Question: But what about the speed limit?
Answer: There is no speed limit; you just have to speed up the universe to get there sooner, and that uses a lot of fuel.
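The "as fast as you like from the traveler's point of view" idea can be made concrete: the trip time on the ship's clock is d/(v*gamma), which shrinks without limit even though v stays below c. A sketch for a 4-light-year trip (constant cruise speed assumed; the acceleration and deceleration phases are ignored):

```python
import math

# Earth-frame and traveler (proper) time for a trip of d light-years
# at a constant cruise speed v (in units of c).
def trip_times(d_ly, v):
    earth_years = d_ly / v
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return earth_years, earth_years / gamma   # (Earth clock, ship clock)

for v in (0.5, 0.9, 0.99, 0.999):
    t, tau = trip_times(4.0, v)
    print(f"v = {v}c: Earth sees {t:5.2f} yr, traveler ages {tau:5.2f} yr")
```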
3 / 5 (2) Nov 22, 2011
Go into a little more detail along those lines: if you could reach the speed of light you could be anywhere in the universe in no time at all.
Maybe all acceleration amounts to you pushing against the universe; the faster you want to travel, the harder you push.
5 / 5 (1) Nov 22, 2011
I don't know why people still keep mixing frames of reference in the spacecraft-approaching-the-speed-of-light issue, since from the lab's frame, if the spacecraft is gaining mass, so is its fuel.
not rated yet Nov 23, 2011
Thank you for your response, but I have an additional question. You said:
"A megaton ...convert about 1/2 gram of matter to energy."
But the author said: "A nuclear blast is more about energy transformation than about matter converting to energy"
Is there a calculation for how much energy release each source contributes to the whole?
5 / 5 (1) Nov 23, 2011
richgibula - The author is referring to a nuclear blast not converting neutrons, protons (or electrons) to energy (the way a matter/antimatter drive would), but rearranging these particles into a
lower energy state.
In a chemical explosion, it is the atoms' electrons that change configuration to release energy.
In a nuclear explosion, it is the protons and neutrons that are rearranged to release energy.
In either case the energy released is reflected by the reaction products (once cooled and at rest) having less mass than the starting material had. The change is very small - even in a hydrogen bomb
the energy released is less than 1% of the mass of the nuclei involved.
1 / 5 (2) Nov 23, 2011
The E=mc^2 equation can be derived from classical mechanics easily; it's not a relativistic effect.
We can understand it easily using the water-surface model of space-time. The energy introduced makes it undulate and stretch like a deformed carpet. This deformation slows down other
particles of matter which are traveling across the undulating place, so it behaves like an area of more dense matter. E=mc^2 is therefore the relation between the deformation of space-time and the energy
introduced into this deformation, and it's valid only for 4D space-time. In 3D space-time it would correspond to E=mc, and so on. The introduction of extra dimensions therefore changes the E=mc^2 formula,
and general relativity should account for it when describing hyperdimensional effects, like dark matter.
3 / 5 (2) Nov 24, 2011
Is there a calculation for how much energy release each source contributes to the whole?
The energy released in a nuclear blast (if I remember that lecture by Feynman correctly) is a rearrangement (transformation) of neutrons/protons from larger nuclei into smaller ones. The energy
released is electrostatic energy. The number of protons and neutrons (and electrons) after a nuclear blast is the same as before. The energy content of those products is, however, less, since they
are now rearranged into more stable configurations (with some neutrons flying off on their own).
So really we're not dealing with a 'nuclear' (force) bomb but with an 'electrostatic' bomb. The nuclear forces actually get a bit larger; they are a negative contribution to the energy output.
5 / 5 (1) Nov 24, 2011
AP - Since the question said 1 megaton, that would be a hydrogen bomb. This has a fission bomb at its core (and often as an outer shell as well), which does indeed split huge nuclei into merely large nuclei.
But most of the energy of a hydrogen bomb comes from fusion of hydrogen isotopes, typically deuterium and tritium.
Thus most of the energy of a 1-megaton bomb is from very small nuclei fusing into fairly small nuclei. In this case the energy released comes from the strong nuclear force, with the electrostatic
potential increasing.
5 / 5 (2) Nov 24, 2011
Yes. It seems I was mistaken in thinking that fission bombs could be in the megaton range. Thanks for the heads-up.
not rated yet Nov 26, 2011
I wonder about space ships at relativistic speeds.
If your mass increases with speed, does the energy contained in that mass also increase? And if so, does the energy content of the propulsion fuel you carry with you also increase, meaning that the
energy you carry to propel your ship is always in proportion to the energy needed to propel the mass of the ship? Does this mean that accelerating to light speed is just a question of accelerating long
enough, without it becoming more difficult to accelerate at all? Does it become easier (e.g. do you need less fuel to accelerate even more, the quicker you go)?
Just asking.
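One way to see why it never gets easier, whatever bookkeeping is used for the fuel: the kinetic energy the engines must supply per unit of rest mass is (gamma - 1)*c^2, which diverges as v approaches c. A quick sketch in units where c = 1:

```python
import math

# Relativistic kinetic energy per unit rest mass, (gamma - 1), with c = 1.
# This is what the propulsion system must supply; it grows without bound
# as v -> c, so acceleration never becomes "free" near light speed.
def kinetic_energy_factor(v):   # v in units of c
    return 1.0 / math.sqrt(1.0 - v * v) - 1.0

for v in (0.5, 0.9, 0.99, 0.999, 0.9999):
    print(f"v = {v}c: KE/mc^2 = {kinetic_energy_factor(v):.2f}")
```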
1 / 5 (3) Nov 26, 2011
In the dense aether model the mass of speedy objects increases in the same way as the mass of a duck swimming on the surface of a pond, i.e. by nothing. But the duck deforms the water surface into ripples
which are perpendicular to the direction of its motion, and this deformation expands the surface area and slows down the speed of surface ripples in such a way that their speed with respect to the duck remains
constant. In vacuum, the area of more dense vacuum foam shaken by the motion of objects is called the de Broglie wave.
It means the mass of objects in motion increases with the mass of the surrounding vacuum, which is dragged and shaken by their motion, not by the mass of the objects themselves. The area of dense vacuum slows down
the spreading of light with respect to the observer, so it contributes to the relativistic length contraction too.
1 / 5 (3) Nov 26, 2011
Does this mean that accelerating to light speed is just a question of accelerating long enough, without it becoming more difficult to accelerate at all
The speed of light with respect to what? You're already moving at nearly the speed of light with respect to some lone proton flying through free space. Does it prohibit you from further acceleration? I would
say not.
In the dense aether model your relativistic speed is accompanied by an area of dense vacuum, which increases your mass with respect to other observers. But when some observer is moving with relativistic
speed in the same direction as you, then you're both surrounded by the same area of dense vacuum and you cannot detect any gain in mass after that, because you're both surrounded by vacuum of
the same relative density.
not rated yet Nov 26, 2011
antialias, they have the article tagged as general physics, maybe that's why? I thought it was really interesting.
Other than that, not sure why it is here.
About the length contraction thing, I wonder if, as your relativistic mass goes up, you actually do something to the fabric of space-time that makes it harder for you to keep accelerating closer and
closer to c.
jsa09, please don't do us any more "favors" in trying to make it easier to understand.
not rated yet Nov 27, 2011
Why do the lengths of objects contract when their velocity approaches the speed of light?
1 / 5 (3) Nov 27, 2011
Why do the lengths of objects contract when their velocity approaches the speed of light?
Basically because of the geometry of energy waves spreading with limited speed. As usual in dense aether theory, the water-surface analogy explains it well: http://www.aether...rnik.gif
The length contraction is a necessary consequence of the constant-speed-of-light postulate of special relativity.
1 / 5 (3) Nov 27, 2011
the duck deforms the water surface into ripples, which are perpendicular to the direction of its motion, and this deformation expands the surface area and slows down the speed of surface ripples in
such a way that their speed with respect to the duck remains constant. In vacuum, the area of more dense vacuum foam shaken by the motion of objects is called the de Broglie wave.
It means the mass of objects in motion increases with the mass of the surrounding vacuum, which is dragged and shaken by their motion, not by the mass of the objects themselves. The area of dense vacuum slows
down the spreading of light with respect to the observer, so it contributes to the relativistic length contraction too.
What you are referring to is the state of displacement of the aether.
Aether has mass. Aether is physically displaced by matter.
The faster an object moves through the aether, the more aether the object displaces, and the greater the relativistic mass of the object.
1 / 5 (3) Nov 28, 2011
The vacuum behaves like an elastic material with some density of inertia. Why? Simply because it's able to serve as an environment for light waves of finite energy density. If it were a
material of zero inertia, then every wave in it would undulate with infinite frequency, like an infinitesimally lightweight string.
Therefore the inertial wave equation formalism can be applied to the vacuum as well. For example, when we make an elastic material of finite mass density undulate, this material will increase its density for
all waves which are passing through the undulating place accordingly.
1 / 5 (4) Nov 28, 2011
The vacuum is behaving like the elastic material
The elastic material is the aether. Aether has mass.
1 / 5 (3) Nov 28, 2011
The vacuum is behaving like the elastic material
The elastic material is the aether. Aether has mass.
'From Analogue Models to Gravitating Vacuum'
"The aether of the 21-st century is the quantum vacuum, which is a new form of matter. This is the real substance"
"Einstein's 'First Paper'"
"The velocity of a wave is proportional to the square root of the elastic forces which cause [its] propagation, and inversely proportional to the mass of the aether moved by these forces."
1 / 5 (3) Nov 28, 2011
The similarity goes even deeper. The deformation of the vacuum in the case of a light wave is close to the deformation of a piece of elastic foam or jelly: http://www.aether...def0.gif http://www.aether...oton.gif It
explains the presence of mutually perpendicular vectors of electric and magnetic fields.
1 / 5 (3) Nov 28, 2011
The property aether has which corrects all of the nonsense in physics today is mass. Aether has mass. Aether physically occupies three dimensional space. Aether is physically displaced by matter.
Aether is not at rest when displaced.
Pressure exerted by displaced aether toward matter is gravity.
A moving particle has an associated aether displacement wave. In a double slit experiment the particle enters and exits a single slit, and it is the associated aether displacement wave which enters
and exits both. As the wave exits the slits it creates wave interference. As the particle exits a single slit, the direction it travels is altered by the wave interference. This is the wave piloting
the particle of pilot-wave theory. Detecting the particle turns the associated aether displacement wave into chop; there is no wave interference, and the direction the particle travels is not altered.
Curved spacetime is displaced aether.
1 / 5 (3) Nov 28, 2011
A moving particle has an associated aether displacement wave.
"Displacement wave" sounds too fuzzy for me. Draw it or illustrate it with some analogy. Why it should have associated such a wave? Don't pile up tautologies, use if-then predicates.
1 / 5 (4) Nov 28, 2011
A moving particle has an associated aether displacement wave.
"Displacement wave" sounds too fuzzy for me. Draw it or illustrate it with some analogy. Why it should have associated such a wave? Don't pile up tautologies, use if-then predicates.
A boat's bow wave is its water displacement wave. A moving boat physically displaces the water into the form of a wave.
A moving particle physically displaces the aether into the form of a wave.
5 / 5 (3) Nov 29, 2011
I see Callipo has a new sockpuppet
1 / 5 (3) Nov 29, 2011
A boat's bow wave is its water displacement wave. A moving boat physically displaces the water into the form of a wave.
In other words, it's a wake wave. http://aetherwave...ics.html
I see Callipo has a new sockpuppet
I've no connection to wpdwpd. It just illustrates, the aether model leads to the reproducible conclusions.
1 / 5 (2) Nov 29, 2011
In other words, it's a wake wave.
It's aether displacement wave. In a double slit experiment the particle enters and exits a single slit and it is the associated aether displacement wave which enters and exits both.
1 / 5 (2) Nov 29, 2011
I tend not to introduce new terms when the mechanical analogy already has a well-established denomination. Here you can find the analogy of this experiment with the water surface: http://
1 / 5 (1) Nov 29, 2011
I tend not to introduce new terms when the mechanical analogy already has a well-established denomination. Here you can find the analogy of this experiment with the water surface: http://
"The walkers wave is similar to the surface wave of a raindrop falling on a puddle"
A raindrop falling on a puddle creates a ripple because the raindrop displaces the water in the puddle.
The physical wave of de Broglie wave mechanics is an aether displacement wave.
Math Forum Discussions
Topic: Plotting besselfunction for circular plate
Replies: 1 Last Post: Jun 24, 2013 4:35 AM
Re: Plotting besselfunction for circular plate
Posted: Jun 24, 2013 4:35 AM
On 6/24/2013 2:52 AM, Nana wrote:
> I wish to plot an equation involving Bessel function in polar co-ordinates:
> u(r,theta)=besselj(p,l)*r*cos(l*theta)+(r^l)*cos(theta).
> where r, theta refer to the radius of the circle and angle.
You have a function of 2 variables, hence you need to look
at the 3D plots in Matlab, not 2D plots.
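The same point can be sketched in code: build a grid of u(r, theta) values and hand it to a 3-D surface plotter. This is a Python sketch rather than MATLAB, using a stdlib-only Bessel J; the orders p = 1, l = 2 are placeholder values, since the post doesn't fix them:

```python
import math

def besselj(n, x, steps=2000):
    """Bessel J_n(x) for integer n via the integral representation
    J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt (trapezoidal rule)."""
    h = math.pi / steps
    f = lambda t: math.cos(n * t - x * math.sin(t))
    total = 0.5 * (f(0.0) + f(math.pi)) + sum(f(k * h) for k in range(1, steps))
    return total * h / math.pi

# u(r, theta) from the post; note besselj(p, l) is just a constant factor here.
p, l = 1, 2
coeff = besselj(p, l)

def u(r, theta):
    return coeff * r * math.cos(l * theta) + r**l * math.cos(theta)

# Evaluate on an (r, theta) grid over the unit disk; this 2-D array is what a
# surface plotter (surf in MATLAB, plot_surface in matplotlib) would take.
grid = [[u(i / 10, j * math.pi / 18) for j in range(36)] for i in range(11)]
print(f"J_{p}({l}) = {coeff:.4f}, grid is {len(grid)}x{len(grid[0])}")
```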
Date Subject Author
6/24/13 Plotting besselfunction for circular plate Nana
6/24/13 Re: Plotting besselfunction for circular plate Nasser Abbasi
Convert miles to angstrom - Conversion of Measurement Units
›› Convert mile to angstrom
›› More information from the unit converter
How many miles in 1 angstrom? The answer is 6.21371192237E-14.
We assume you are converting between mile and angstrom.
You can view more details on each measurement unit:
miles or angstrom
The SI base unit for length is the metre.
1 metre is equal to 0.000621371192237 miles, or 10000000000 angstrom.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between miles and angstroms.
Type in your own numbers in the form to convert the units!
›› Definition: Mile
A mile is any of several units of distance, or, in physics terminology, of length. Today, one mile is mainly equal to about 1609 m on land and 1852 m at sea and in the air, but see below for the
details. The abbreviation for mile is 'mi'. There are more specific definitions of 'mile' such as the metric mile, statute mile, nautical mile, and survey mile. On this site, we assume that if you
only specify 'mile' you want the statute mile.
›› Definition: Angstrom
An angstrom or ångström (Å) is a non-SI unit of length equal to 10^-10 metres, 0.1 nanometres or 100 picometres.
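The factors quoted above follow directly from the definitions (1 statute mile = 1609.344 m exactly; 1 Å = 10^-10 m). A quick check:

```python
# Recomputing the mile <-> angstrom conversion factors from the definitions.
METRE_PER_MILE = 1609.344      # statute mile, exact by definition
METRE_PER_ANGSTROM = 1e-10     # angstrom, exact by definition

miles_per_angstrom = METRE_PER_ANGSTROM / METRE_PER_MILE
angstrom_per_mile = METRE_PER_MILE / METRE_PER_ANGSTROM

print(f"1 angstrom = {miles_per_angstrom:.11e} miles")   # ~6.21371e-14
print(f"1 mile     = {angstrom_per_mile:.6e} angstrom")  # ~1.609344e13
```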
CodingBat code practice
Python > Logic-2 > make_bricks
We want to make a row of bricks that is goal inches long. We have a number of small bricks (1 inch each) and big bricks (5 inches each). Return True if it is possible to make the goal by choosing
from the given bricks. This is a little harder than it looks and can be done without any loops. See also: Introduction to MakeBricks
make_bricks(3, 1, 8) → True
make_bricks(3, 1, 9) → False
make_bricks(3, 2, 10) → True
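One possible loop-free approach (a sketch, not the site's reference solution): use as many big bricks as fit under the goal, then check that the small bricks can cover whatever length is left.

```python
# Loop-free make_bricks: big bricks are worth 5 inches each, small bricks 1 inch.
def make_bricks(small, big, goal):
    used_big = min(big, goal // 5)       # big bricks worth laying down
    return small >= goal - used_big * 5  # smalls must cover the remainder

print(make_bricks(3, 1, 8))    # True
print(make_bricks(3, 1, 9))    # False
print(make_bricks(3, 2, 10))   # True
```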
Copyright Nick Parlante 2006-11 - privacy
Amphibole group: programme for classifying microprobe or wet chemical analysis
Uwe Kolitsch Registered: 8 years ago
Amphibole group: programme for classifying microprobe or wet chemical analysis Posts: 11,820
October 15, 2011 06:21PM
Esawi, E.K. (2011): Calculations of amphibole chemical parameters and implementation of the 2004 recommendations of the IMA classification and nomenclature of amphiboles. Journal of Mineralogical and
Petrological Sciences 106, 123-129.
The amphibole group is a complex, compositionally diverse group among silicates, and exists in a large variety of rock types and P-T ranges, making it a very useful P-T and petrogenetic indicator. In
2004 the International Mineralogical Association (IMA) revised its 1997 nomenclature scheme for amphiboles to accommodate all known amphibole species including several species discovered after 1997.
The main difference between the 1997 and 2004 schemes is that amphiboles were divided into five groups in the 2004 scheme instead of four groups in the 1997 scheme. ClassAmp is an MS Excel-VBA
program that implements the 2004 IMA revisions of and additions to the 1997 nomenclature scheme of amphiboles. In addition to implementing the new nomenclature scheme of amphiboles, ClassAmp is
flexible where it can be used to classify microprobe or wet chemical analysis; obtain any combination of amphibole's cation names, cation values, structural formula names, structural formula values,
groups, names, prefixes, and modifiers; properly format chemical symbols, associated with amphibole nomenclature so that the output can be directly imported to final drafts/publications; and
determine pressure-temperature conditions involving amphiboles, as well as make other petrogenetic determinations.
Frank K. Mazdab Registered: 4 years ago
Re: Amphibole group: programme for classifying microprobe or wet chemical analysis Posts: 24
October 15, 2011 08:27PM
I was a reviewer for an earlier version of this manuscript several years ago and couldn't get the program to work on my computer, despite repeated consultation with the author. It was ultimately not
published then. I notice in the current version of the paper the author specifically notes the program is unlikely to work on Macs, which may have been the problem I had in the past. I personally
still believe that with the Leake classification in hand, and with only a minimal understanding of writing formulas in Excel, a program such as this is not really necessary. It is a fairly
straightforward exercise to input formulas into Excel to classify and name amphiboles (or any other mineral groups) oneself, and it is a great learning experience for students. The hand-written set
of formulas I use, which took only a few hours to set up, even checks for the optional chemical modifiers (e.g. "potassian", "chlorian", etc.) and includes them with the name, where appropriate.
Olav Revheim Registered: 8 years ago
Re: Amphibole group: programme for classifying microprobe or wet chemical analysis Posts: 785
October 16, 2011 08:51AM
I have downloaded the excel spreadsheet, and I think it is absolutely great for my use. When downloading, just remember to activate macros. I hope that someone will take the time and effort to
develop something similar for the micas, tourmalines and other complex groups.
Thank you for posting Uwe.
Edited 1 time(s). Last edit at 10/16/2011 08:54AM by Olav Revheim.
EK Esawi
Re: Amphibole group: programme for classifying microprobe or wet chemical analysis
April 19, 2012 07:56PM
I must state at the outset that I am the author of the program. Let me say that I respectfully, sincerely, and strongly disagreed with your assessment then, and still do now. I remember our correspondence; I
am sure I stated that the program may not work on Macs simply because Microsoft stated that it may not provide VBA support for the Mac. You are not the only Mac user who did not like my program;
another Mac user did not like it for the exact same reasons when it was submitted to another journal. Yet none found problems using the program or mentioned any problems with it, despite the fact that it's
been used a lot.
In my view the problem is not with the program but with Mac users, for many reasons that we all know. More importantly, I will challenge anyone to write efficient formulas, etc. that can be
used to classify any given amphibole unless one uses the whole sheet for one analysis; try that for 800 analyses. I have seen many such formulas, macros, etc. but they are very inefficient and very
time-consuming. My program works well and is very well tested.
Frank K. Mazdab Registered: 4 years ago
Re: Amphibole group: programme for classifying microprobe or wet chemical analysis Posts: 24
May 04, 2012 04:08AM
While I appreciate the effort you put into developing your program, and I applaud the rationale behind it, I think you're giving yourself way too much credit for your accomplishment. The fact of the
matter is that an amphibole nomenclature routine (or for that matter, a nomenclature routine for any mineral group) can be easily put together in Excel without the need for VBA programming, simply by
using the formulas already available in Excel. Indeed, one of the assignments I've given to my mineralogy students (both graduate and undergraduate) is to derive an Excel-based spreadsheet which
normalizes EPMA amphibole analyses, and I give further "brownie points" if they include a nomenclature routine.
Amphibole nomenclature is based on simple chemical rules, outlined in Leake et al. 1997 and Leake et al. 2004. For example, if an amphibole has Ca >= 1.5 apfu in the M4 site, Na+K >= 0.5 apfu in the
A site, Ti < 0.5 apfu, Si > 6.5 apfu in the T site, and Mg/(Mg+Fe2) > 0.5, then the mineral is edenite. In this example, 5 chemical parameters define edenite, and these 5 parameters can be easily
incorporated into a set of nested IF statements in Excel. In older versions of Excel (which I still use), one is limited to no more than 7 nested IF statements. So, to name all the amphiboles takes
me 25 columns of nested IF formulas, and that number of columns seems somewhat high because most of my columns contain considerably less than 7 nested IF statements, just to limit the complexity of
any individual formula and keep things organized and attractive. Newer versions of Excel dispense with the nesting limitation, and thus the formulas could be reduced to very few columns (albeit
individually more complex).
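The edenite example above translates directly into a single boolean test in any language, which is the point about nested IFs. A hedged Python sketch (the dict keys are made-up field names, not a standard format; the thresholds are the ones quoted from Leake et al.):

```python
# Sketch of the edenite test described above, encoded as one boolean
# expression instead of nested IF cells. The apfu thresholds are those
# quoted in the post from Leake et al.; key names are illustrative only.
def is_edenite(a):
    return (a["Ca_M4"] >= 1.5             # Ca in M4 site
            and a["NaK_A"] >= 0.5         # Na + K in A site
            and a["Ti"] < 0.5
            and a["Si_T"] > 6.5           # Si in T site
            and a["Mg"] / (a["Mg"] + a["Fe2"]) > 0.5)

sample = {"Ca_M4": 1.8, "NaK_A": 0.7, "Ti": 0.1, "Si_T": 7.2,
          "Mg": 3.5, "Fe2": 1.0}
print(is_edenite(sample))   # True
```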
Not only do I name the basic amphiboles, but I also include extra columns which test for the presence of minor elements in sufficient quantity to be included as obligatory modifiers (i.e. if K > 0.5
in the previous edenite example, the name would be potassic-edenite). Similarly, I include columns for the optional chemical modifiers, so for example I might end up with a name such as chlorian edenite.
Contrary to your assertion that such formulas would require a whole sheet per analysis, mine all takes place on one line. So it is easy to input an amphibole chemical analysis on the left-hand side of the
the spreadsheet, copy the formulas down from the preceding line, and instantaneously name dozens or hundreds of entries. My spreadsheet not only allows for an assigned (or floating) Fe3/∑Fe ratio,
but also for an assigned (or floating) Mn3/∑Mn ratio, to permit normalization and naming of amphiboles such as kornite and ungarettiite (with an optional input for Li estimate for the former). The
amphibole normalization section allows for 6 different "normal" cation-based routines, 2 different routines optimized for Mn3-bearing amphiboles, and the traditional 23 O routine. If one routine
doesn't work well, simply changing a few cells makes another routine available.
In addition to amphibole, I've set up comparable sheets for pyroxenes, the eudialyte group, micas, tourmalines (the most difficult one), the osumilite group, the epidote group, and several other
complex mineral groups. Each spreadsheet took me about a day to put together, with the original amphibole one taking the longest and the other ones loosely modeled after it.
All in all, it's a great exercise to have complete control over how minerals are normalized and named. There are no "black box" formulas someone else wrote to rely on. And because it's written using
the standard Excel formulas, it works in all versions of the program, across all computer platforms, Mac & PC included. To be clear, it's not my intention here to offer a nomenclature program to
compete with yours. My point is simply that while a program such as yours certainly has its value, I would personally recommend to anyone interested in normalizing chemical data and naming minerals
to simply sit down with Excel, with the appropriate issue of Canadian Mineralogist or European Journal of Mineralogy (these two journals seem to have the most collected nomenclature papers), and a
free afternoon, and just build a spreadsheet to suit your own needs.
Stuart Mills Registered: 8 years ago
Re: Amphibole group: programme for classifying microprobe or wet chemical analysis Posts: 1,191
May 04, 2012 08:18AM
Based on the new amphibole nomenclature which has recently been approved, Roberta Oberti will be offering some spreadsheets via her institution's website.
Re: Amphibole group: programme for classifying microprobe or wet chemical analysis Registered: 8 years ago
May 04, 2012 09:42AM Posts: 9,212
I hope that those producing spreadsheets in Excel will also check they work in OpenOffice as well - they should if the formula code is simple enough.
Not everyone has Excel these days!
Ralph Bottrill Registered: 8 years ago
Re: Amphibole group: programme for classifying microprobe or wet chemical analysis Posts: 2,487
May 04, 2012 10:16AM
I find the most complicated part is estimating the Fe^2+/Fe^3+ ratio, which is vital to calculate the formulae.
Uwe Kolitsch Registered: 8 years ago
Re: Amphibole group: programme for classifying microprobe or wet chemical analysis Posts: 11,820
May 04, 2012 05:36PM
Coincidentally, there is a new paper on this issue:
Determination of Fe3+/Fe using the electron microprobe: A calibration for amphiboles
William M. Lamb, Renald Guillemette, Robert K. Popp, Steven J. Fritz, and Gregory J. Chmiel
American Mineralogist 2012;97 951-961
Ralph Bottrill Registered: 8 years ago
Re: Amphibole group: programme for classifying microprobe or wet chemical analysis Posts: 2,487
May 06, 2012 04:41AM
Thanks Uwe, I look forward to that paper
EK Esawi
Re: Amphibole group: programme for classifying microprobe or wet chemical analysis
July 12, 2012 02:17PM
First of all, let me say that when I replied to this forum, and in particular to Frank's comment, I neither intended nor did it even cross my mind that I was giving myself any credit. I simply stated some facts that are valid now and were valid when I wrote and submitted my paper; chief among them is that no other user or reviewer has ever found any problems with the program, as far as I know.
I still think that it's nearly impossible to use Excel formulas alone in an elegant and comprehensive way for the nomenclature of most minerals, let alone amphibole, mica, etc. You stated you used 25 columns of nested IF statements, and that's where the problem lies in my view, because: (1) nested IF statements, with or without arrays, are clumsy, inefficient, and difficult to edit and modify; (2) using nested statements, you need the whole sheet just to classify amphiboles, and you have to copy the results somewhere else; (3) I wonder whether such a big set of nested statements can be extensively tested, and whether it can be used for any type of amphibole analysis; and (4) I think it takes far more than an afternoon (at least it took me far more) to do anything that complicated.
In any case, I leave it to users to judge for themselves. My program has been used a lot and just yesterday, someone from Germany requested a copy.
PS. Does anyone know when the newest amphibole classification comes out? I have been told many times that it would appear by such-and-such a date, and it never has.
EK Esawi Registered: 1 year ago
Re: Amphibole group: programme for classifying microprobe or wet chemical analysis Posts: 3
February 22, 2013 12:48AM
Hi All--Has anyone tried the 2012 amphibole program? It can be downloaded from the IMA site and was listed in the original article. I tried to download it to 2 computers, but it did not run on either, and there are no documentation files at all. EK Esawi
Olav Revheim Registered: 8 years ago
Re: Amphibole group: programme for classifying microprobe or wet chemical analysis Posts: 785
February 22, 2013 06:47AM
It works fine for me. I had some error messages when downloading it also. If I remember correctly, it still should work OK. My biggest issue was figuring out how to change the comma separator from , to
Roberta Oberti Registered: 5 years ago
Re: Amphibole group: programme for classifying microprobe or wet chemical analysis Posts: 2
February 22, 2013 10:48AM
Dear all,
should you have problems in downloading the program based on the 2012 scheme for amphibole classification and nomenclature, please contact me (
[email protected]
). I will send it as a .zip file. Unfortunately, it is bigger than 1MB, and I cannot attach it to this message.
Hence, I am attaching only the paper published on the Periodico di Mineralogia.
- Oberti et al how to name PM2012.pdf (289.9 KB)
EK Esawi Registered: 1 year ago
Re: Amphibole group: programme for classifying microprobe or wet chemical analysis Posts: 3
March 04, 2013 03:23AM
Hi Olav,
I have a copy of the program too and it works for the example with the program. However I have problems figuring out how it works with data files, the format of such files, etc.
I am planning on updating my previous program to accommodate IMA12. This is basically because I believe Excel is overall better suited for such calculations, and because one still has to calculate the structural formula and readjust the ferric/ferrous ratio, a simple task but very clumsy to do via formulas in Excel, convert to text files, etc.
Thanks, EK Esawi
Roberta Oberti Registered: 5 years ago
AMPH2012 - a new and improved release Posts: 2
April 08, 2013 10:51AM
For those who are interested in amphiboles.
A new and improved version of the program AMPH 2012, which gives the correct name to amphibole compositions after the IMA report (Hawthorne et al., American Mineralogist, 97, 2031-2048, 2012), can now be downloaded at [
This 2.0 version also covers orthorhombic amphiboles and acknowledges the recent recognition of the new rootnames ghoseite (for rootname 11) and suenoite (for the orthorhombic counterpart of rootname
Mistakes in the use of the magnesio- prefix have been corrected.
Multiple entries from a file are now allowed.
Should you need any help or further explanation, or should you find any mistake in the procedure, please do not hesitate to contact me. Your feedback has proven to be very useful.
All the best
Roberta Oberti
Copyright © Jolyon Ralph and Ida Chau 1993-2014. Site Map. Locality, mineral & photograph data are the copyright of the individuals who submitted them. Site hosted & developed by Jolyon Ralph.
|
{"url":"http://www.mindat.org/forum.php?read,9,238359,259780","timestamp":"2014-04-19T14:30:38Z","content_type":null,"content_length":"65692","record_id":"<urn:uuid:9c02f5ac-f983-452e-920b-14df0f3239f9>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[racket] I don't understand structural recursion.
From: Damien Lee (charlicon at gmail.com)
Date: Sat Sep 17 18:19:59 EDT 2011
On 17/09/11 23:04, Nadeem Abdul Hamid wrote:
> Do you have some examples? Concrete examples of possible input values
> and what you expect the corresponding output to look like? And if the
> function expects integers as input, then what is the difference in the
> output between two successive input values like 2 and 3? 4 and 5? 0
> and 1?
My apologies.
I find it quite difficult to draw trees using ASCII characters, but for
the case when N = 1, there is only one such tree,

    .

When N=2, there are two structurally distinct trees,

      .       .
     /         \
    .           .

I'm told to output the results in "dotted parenthesised notation" which
for the tree examples above looks like this,

((.).) and (.(.))
For N=3 the dotted parenthesised notation goes like this,
I have a function that takes a tree and emits this notation, but I thought it best to abstract the data into a
structure for more interesting output formats later.
I hope that helps, and thank you for looking,
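For what it's worth, the structural recursion being asked about can be sketched directly (in Python here rather than Racket, and assuming the one-node tree renders as "(.)", which the post doesn't actually show): enumerate every split of the remaining nodes between the two subtrees, then render each node as a dot wrapped in parentheses with its subtrees.

```python
def trees(n):
    """All structurally distinct binary trees with n nodes,
    as (left, right) pairs; None is the empty tree."""
    if n == 0:
        return [None]
    result = []
    for k in range(n):                     # k nodes go to the left subtree
        for left in trees(k):
            for right in trees(n - 1 - k):
                result.append((left, right))
    return result

def dotted(tree):
    """Dotted parenthesised notation: a node is a dot wrapped in parens
    together with its (possibly empty) left and right subtrees."""
    if tree is None:
        return ""
    left, right = tree
    return "(" + dotted(left) + "." + dotted(right) + ")"

for n in (1, 2, 3):
    print(n, [dotted(t) for t in trees(n)])
```

With this rendering, N=2 gives exactly the two strings quoted in the post, (.(.)) and ((.).), and the number of trees follows the Catalan numbers (1, 2, 5, ...).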
Posted on the users mailing list.
|
{"url":"http://lists.racket-lang.org/users/archive/2011-September/048103.html","timestamp":"2014-04-16T07:14:26Z","content_type":null,"content_length":"6352","record_id":"<urn:uuid:b60e6b98-eb5a-47de-bed9-e6c2e3508f05>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Schrödinger equation
If all direct observables have some uncertainty, won't this mess up our intensity distribution even more than the Fourier components already do?
OK, I think I see where you were going with your original question...
In the time-independent Schrodinger equation [itex]H\Psi=E\Psi[/itex] the Hamiltonian is written as if all of its inputs were exactly known. For example, if we're dealing with two charged particles, there will be a [itex]\frac{1}{|r_1-r_2|}[/itex] term somewhere in it, where r_1 and r_2 are the positions of the two particles. You should read that as saying not that the two particles are at those exact positions, but rather that if they were in those positions, that would be the exact distance between them. The uncertainty principle doesn't stop us from talking about how things would be if we knew exactly where a particle was; it just forbids us from knowing exactly where it is.
Once I have the Hamiltonian written down, I solve Schrodinger's equation; and as tom.stoer said in #2, the uncertainty principle is inherent in the ψ that comes out.
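For concreteness, the two-charged-particle Hamiltonian being alluded to has the standard textbook form below (written here as an illustration, not quoted from the thread); the Coulomb term contains the exact inter-particle distance even though neither position is exactly known:

```latex
H = -\frac{\hbar^2}{2m_1}\nabla_1^2
    - \frac{\hbar^2}{2m_2}\nabla_2^2
    + \frac{q_1 q_2}{4\pi\varepsilon_0 \,\lvert \mathbf{r}_1 - \mathbf{r}_2 \rvert}
```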
|
{"url":"http://www.physicsforums.com/showthread.php?p=4196823","timestamp":"2014-04-17T03:54:25Z","content_type":null,"content_length":"51485","record_id":"<urn:uuid:07c5468d-c067-4b57-b765-a35cac55a6a1>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: October 2004 [00349]
[Date Index] [Thread Index] [Author Index]
Re: Re: Re: normal distribution random number generation
• To: mathgroup at smc.vnet.net
• Subject: [mg51332] Re: [mg51283] Re: [mg51265] Re: normal distribution random number generation
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Thu, 14 Oct 2004 06:37:26 -0400 (EDT)
• References: <20041012184801.53136.qmail@web60509.mail.yahoo.com> <4AC453EE-1C99-11D9-BF82-000A95B4967A@mimuw.edu.pl>
• Sender: owner-wri-mathgroup at wolfram.com
Thanks to Sean Kim I now realize that creating this fix is a lot harder than I ever imagined. The main culprit is something otherwise very useful: Mathematica's packed array technology. The essential point, I think, is this:
When Mathematica "sees" that it is asked to generate a "sufficiently large" array, where sufficiently large means large enough for the PackedArray technology to be used, it "ignores" any custom rules for Random[] and reverts to using the built-in Random[] generator.
Actually, I suspect this is not exactly what happens; rather than "reverting", Mathematica probably uses special code for generating packed arrays containing random numbers, but since this code uses the same random number generator as Random[], the effect is the same as if the user-defined rules were being ignored. What makes the problem harder to overcome is that the same principle applies for nested lists, but it is harder to work out precisely when the PackedArray technology is going to be used.
What follows is a rather long explanation of what is going on, including the history of my recent attempts to solve this problem. At the end there is the current best "solution". It results in a substantial loss of speed compared with simple usage of the unmodified Random[] function. While it seems clear to me that no satisfactory solution can be found until WRI provides a built-in one, I hope someone will think of some improvements to this temporary fix, as I think the matter is of some importance.
First let us choose a particular random seed that we shall use throughout to compare "random" outcomes with different definitions of Random[].
Next we shall redefine Random[], according to the original idea of Daniel Lichtblau, in the form proposed by Bobby Treat and including a suggestion of Ray Koopman.
With this definition we now get:
At this point almost all of us believed we had a fix, until Mark Fisher
pointed out that;
we got the standardValue again. The explanation seems to lie here:
As I already stated above whenever Mathematica generates a packed array
it reverts to using the built in Random[] no matter what rules we
define. Now for lists the point at which the packed array technology
enters seems to be at length 250. So I thought I could solve the
problem by adding a rule for random:
Random /: Table[Random[], {n_}] /; n >= 250 :=
  Developer`ToPackedArray[
    Flatten[{Table[Random[], {i, 1, Quotient[n, 249]}, {j, 1, 249}],
             Table[Random[], {i, 1, Mod[n, 249]}]}]]
I thought I had it solved:
No problem. Then Sean Kim sent me a message pointing out that things were still not working. At first I thought he must be using the old package (which in any case was full of bugs), but eventually I came to realize that the problem was still there:
The point is that I had thought that I could avoid the problem by
constructing long lists as lists of lists of 249 elements (plus a
shorter list), but Mathematica anticipates this and at some point the
PackedArray technology again kicks in and the problem returns. To see
it more clearly let's look at the following way of generating a table:
We shall first clear Random and redefine it from the beginning:
Now compare:
So we clearly see that for lists of lists the problem again occurs,
although for much larger lists: of size around 10000.
What about the solution? Well, at the moment the only thing that comes
to my mind is to generate tables using something like this:
{1.57 Second,0.147012}
if we clear Random we get a different answer, a lot faster:
{0.28 Second,0.160058}
Note that this is no longer what we called standardValue, since this time the first element of the list is not the first random number that was generated. But at least for now it seems to me that I have got a rather slow fix. I am no longer convinced that this is necessarily the best way to approach the problem: there should be a faster way to "randomize" the uniform random number generator than this. But we obviously need WRI to do something about this matter soon.
Andrzej Kozlowski
Andrzej Kozlowski
Chiba, Japan
On 13 Oct 2004, at 06:54, Andrzej Kozlowski wrote:
> This is exactly what it is meant to do. The problem was due to the
> usage of the SWB algorithm by the built-in Random[] for generating
> random reals. RandomReplace makes Mathematica use the Wolfram CA
> random number generator, which Mathematica uses for generating random
> integers, instead of the SWB. This "cures" the problem reported in
> that message. Have you got any reason to think it does not?
> Andrzej
> On 13 Oct 2004, at 03:48, sean kim wrote:
>> Hi Andrzej
>> thank you so much for making your package available.
>> I think I may have found another problem... ("may have
>> found" being operative phrase, or non-operative for
>> that matter)
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2004/Oct/msg00349.html","timestamp":"2014-04-20T01:02:01Z","content_type":null,"content_length":"41310","record_id":"<urn:uuid:0a63bfb6-bf98-42fe-8351-9002753a7b23>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Relativized circuit complexity
Results 1 - 10 of 45
, 1992
Cited by 170 (34 self)
. We investigate the distribution of nonuniform complexities in uniform complexity classes. We prove that almost every problem decidable in exponential space has essentially maximum circuit-size and
space-bounded Kolmogorov complexity almost everywhere. (The circuit-size lower bound actually exceeds, and thereby strengthens, the Shannon 2^n/n lower bound for almost every problem, with no
computability constraint.) In exponential time complexity classes, we prove that the strongest relativizable lower bounds hold almost everywhere for almost all problems. Finally, we show that
infinite pseudorandom sequences have high nonuniform complexity almost everywhere. The results are unified by a new, more powerful formulation of the underlying measure theory, based on uniform
systems of density functions, and by the introduction of a new nonuniform complexity measure, the selective Kolmogorov complexity. This research was supported in part by NSF Grants CCR-8809238 and
CCR-9157382 and in ...
- In Proceedings of the 5th Structure in Complexity Theory Conference , 1990
"... This paper is dedicated to the memory of Ronald V. Book, 1937-1997. ..."
, 1995
Cited by 57 (8 self)
. We show that if a self-reducible set has polynomial-size circuits, then it is low for the probabilistic class ZPP(NP). As a consequence we get a deeper collapse of the polynomial-time hierarchy PH
to ZPP(NP) under the assumption that NP has polynomial-size circuits. This improves on the well-known result of Karp, Lipton, and Sipser (1980) stating a collapse of PH to its second level Σ^P_2
under the same assumption. As a further consequence, we derive new collapse consequences under the assumption that complexity classes like UP, FewP, and C=P have polynomial-size circuits. Finally, we
investigate the circuit-size complexity of several language classes. In particular, we show that for every fixed polynomial s, there is a set in ZPP(NP) which does not have O(s(n))-size circuits. Key
words. polynomial-size circuits, advice classes, lowness, randomized computation AMS subject classifications. 03D10, 03D15, 68Q10, 68Q15 1. Introduction. The question of whether intractable sets
- SIAM Journal on Computing , 1994
Cited by 43 (13 self)
The main theorem of this paper is that, for every real number α < 1 (e.g., α = 0.99), only a measure 0 subset of the languages decidable in exponential time are P_{n^α-tt}-reducible to languages that are not exponentially dense. Thus every P_{n^α-tt}-hard language for E is exponentially dense. This strengthens Watanabe's 1987 result, that every P_{O(log n)-tt}-hard language for E is exponentially dense. The combinatorial technique used here, the sequentially most frequent query selection, also gives a new, simpler proof of Watanabe's result. The main theorem also has implications for the structure of NP under strong hypotheses. Ogiwara and Watanabe (1991) have shown that the hypothesis P ≠ NP implies that every P_btt-hard language for NP is non-sparse (i.e., not polynomially sparse). Their technique does not appear to allow significant relaxation of either the query bound or the sparseness criterion. It is shown here that a stronger hypothesis---
- Theoretical Computer Science , 1997
Cited by 42 (3 self)
In this paper we extend a key result of Nisan and Wigderson [17] to the nondeterministic setting: for all α > 0 we show that if there is a language in E = DTIME(2^{O(n)}) that is hard to approximate by nondeterministic circuits of size 2^{αn}, then there is a pseudorandom generator that can be used to derandomize BP·NP (in symbols, BP·NP = NP). By applying this extension we are able to answer some open questions in [14] regarding the derandomization of the classes BP·Σ^P_k and BP·Θ^P_k under plausible measure theoretic assumptions. As a consequence, if Θ^P_2 does not have p-measure 0, then AM ∩ coAM is low for Θ^P_2. Thus, in this case, the graph isomorphism problem is low for Θ^P_2. By using the Nisan-Wigderson design of a pseudorandom generator we unconditionally show the inclusion MA ⊆ ZPP^NP and that MA ∩ coMA is low for ZPP^NP. 1 Introduction In recent years, following the development of resource-bounded meas...
- Bulletin of the European Association for Theoretical Computer Science , 1994
Cited by 40 (9 self)
Several recent nonrelativizing results in the area of interactive proofs have caused many people to review the importance of relativization. In this paper we take a look at how complexity theorists
use and misuse oracle results. We pay special attention to the new interactive proof systems and program checking results and try to understand why they do not relativize. We give some new results
that may help us to understand these questions better.
, 1997
Cited by 30 (3 self)
The 1980's saw rapid and exciting development of techniques for proving lower bounds in circuit complexity. This pace has slowed recently, and there has even been work indicating that quite different
proof techniques must be employed to advance beyond the current frontier of circuit lower bounds. Although this has engendered pessimism in some quarters, there have in fact been many positive
developments in the past few years showing that significant progress is possible on many fronts. This paper is a (necessarily incomplete) survey of the state of circuit complexity as we await the
dawn of the new millennium.
- MIT Theory of Computing Colloquium , 2007
Cited by 30 (2 self)
Any proof of P ≠ NP will have to overcome two barriers: relativization and natural proofs. Yet over the last decade, we have seen circuit lower bounds (for example, that PP does not have linear-size circuits) that overcome both barriers simultaneously. So the question arises of whether there is a third barrier to progress on the central questions in complexity theory. In this paper we present such a barrier, which we call algebraic relativization or algebrization. The idea is that, when we relativize some complexity class inclusion, we should give the simulating machine access not only to an oracle A, but also to a low-degree extension of A over a finite field or ring. We systematically go through basic results and open problems in complexity theory to delineate the power of the new algebrization barrier. First, we show that all known non-relativizing results based on arithmetization (both inclusions such as IP = PSPACE and MIP = NEXP, and separations such as MA_EXP ⊄ P/poly) do indeed algebrize. Second, we show that almost all of the major open problems (including P versus NP, P versus RP, and NEXP versus P/poly) will require non-algebrizing techniques. In some cases algebrization seems to explain exactly why progress stopped where it did: for example, why we have superlinear circuit lower bounds for PromiseMA but not for NP. Our second set of results follows from lower bounds in a new model of algebraic query complexity, which we introduce in this paper and which is interesting in its own right. Some of our lower bounds use direct combinatorial and algebraic arguments, while others stem from a surprising connection between our model and communication complexity. Using this connection, we are also able to give an MA-protocol for the Inner Product function with O(√n log n) communication (essentially matching a lower bound of Klauck), as well as a communication complexity conjecture whose truth would imply NL ≠ NP. 1
- In ACM Symposium on Theory of Computing (STOC , 1999
Cited by 26 (1 self)
We study the complexity of the circuit minimization problem: given the truth table of a Boolean function f and a parameter s, decide whether f can be realized by a Boolean circuit of size at most s.
We argue why this problem is unlikely to be in P (or even in P/poly) by giving a number of surprising consequences of such an assumption. We also argue that proving this problem to be NP-complete (if it is indeed true) would imply proving strong circuit lower bounds for the class E, which appears beyond the currently known techniques. Keywords: hard Boolean functions, derandomization, natural properties, NP-completeness. 1 Introduction An n-variable Boolean function f_n : {0,1}^n → {0,1} can be given by either its truth table of size 2^n, or a Boolean circuit whose size may be significantly smaller than 2^n. It is well known that most Boolean functions on n variables have circuit complexity at least 2^n/n [Sha49], but so far no family of sufficiently hard functions has
, 2000
Cited by 23 (7 self)
We clarify the computational complexity of planarity testing, by showing that planarity testing is hard for L, and lies in SL. This nearly settles the question, since it is widely conjectured that L
= SL [25]. The upper bound of SL matches the lower bound of L in the context of (nonuniform) circuit complexity, since L/poly is equal to SL/poly. Similarly, we show that a planar embedding, when one
exists, can be found in FL^SL. Previously, these problems were known to reside in the complexity class AC^1, via an O(log n) time CRCW PRAM algorithm [22], although planarity checking for
degree-three graphs had been shown to be in SL [23, 20].
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1050181","timestamp":"2014-04-18T21:32:46Z","content_type":null,"content_length":"38792","record_id":"<urn:uuid:3ed2b438-e015-4e02-8f45-fd6cb7548d61>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Semifluxons in long Josephson junctions with phase shifts
Ahmad, Saeed (2012) Semifluxons in long Josephson junctions with phase shifts. PhD thesis, University of Nottingham.
A Josephson junction is formed by sandwiching a non-superconducting material between two superconductors. If the phase difference across the superconductors is zero, the junction is called a conventional junction; otherwise it is an unconventional junction. Unconventional Josephson junctions are widely used in information processing and storage.
First we investigate long Josephson junctions having two π-discontinuity points characterized by a shift of π in phase, that is, a 0-π-0 long Josephson junction, on both infinite and finite domains. The system is described by a modified sine-Gordon equation with an additional shift θ(x) in the nonlinearity. Using a perturbation technique, we investigate an instability region where semifluxons are spontaneously generated. We study the dependence of semifluxons on the facet length and the applied bias current.
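For reference, the 0-π-0 junction model described in this paragraph is commonly written as a perturbed sine-Gordon equation of the following form (standard in this literature; the specific symbols here are assumed rather than quoted from the thesis):

```latex
\phi_{xx} - \phi_{tt} = \sin\bigl(\phi + \theta(x)\bigr) - \gamma,
\qquad
\theta(x) =
\begin{cases}
\pi, & |x| < a,\\
0,   & |x| > a,
\end{cases}
```

where 2a is the facet length and γ is the applied bias current.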
We then consider a disk-shaped two-dimensional Josephson junction with concentric regions of 0- and π-phase shifts and investigate the ground state of the system in both finite and infinite domains. This system is described by a (2+1)-dimensional sine-Gordon equation, which becomes effectively one-dimensional in polar coordinates when one considers radially symmetric static solutions. We show that there is a parameter region in which the ground state corresponds to a spontaneously created ring-shaped semifluxon. We use a Hamiltonian energy characterization to describe analytically the dependence of the semifluxon-like ground state on the length of the junction and the applied bias current. The existence and stability of excited states bifurcating from a uniform case have been discussed as well.
Finally, we consider 0-κ infinitely long Josephson junctions, i.e., junctions having a periodic κ-jump in the Josephson phase. We discuss the existence and stability of ground states about the periodic solutions and investigate band-gap structures in the plasma band and their dependence on an applied bias current. We derive an equation governing gap breathers bifurcating from the edge of the transitional curves.
Item Type: Thesis (PhD)
Supervisors: Susanto, H.
Wattis, J.A.D.
Faculties/Schools: UK Campuses > Faculty of Science > School of Mathematical Sciences
ID Code: 2729
Deposited By: Dr Saeed Ahmad
Deposited On: 20 Sep 2012 12:25
Last Modified: 20 Sep 2012 12:25
Archive Staff Only: item control page
|
{"url":"http://etheses.nottingham.ac.uk/2729/","timestamp":"2014-04-17T07:17:02Z","content_type":null,"content_length":"19060","record_id":"<urn:uuid:1643497b-f31a-4bc7-8249-124a51493963>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Decibel Flavors Part 1 – L Values
Over the next few articles I will explain some of the different meanings of the word "decibel" as it relates to sound. "Decibel," by itself, is not a unit of measurement of loudness. Decibels are a way of counting large numbers, very similar to the Richter scale. For people to be talking apples to apples about noise, they need to be clear about what flavor of decibels they're using.
In this article I will talk about the different kinds of L values. In the next article I will talk about weighted values and sound spectra. You need both of these pieces to determine what "decibel" means in a given context.
what it’s measuring right at that moment. While this is useful for getting a general idea of what sound levels you’re measuring, it’s not very useful for making numerical comparisons. The type of
sound level meter used by acoustical engineers makes a complete measurement with a beginning and an end; basically a short recording.
This chart shows a basic sound level measurement. The curve is the Sound Pressure Level (SPL), which is the sound level at a specific moment. SPL fluctuates with time. If you were making this
measurement with an inexpensive meter, you would see the value on the meter move up and down in time in a way that resembled the curve below.
The problem is that a sound that lasts over time never has a single SPL value. Since it fluctuates over time, you have to find a way of describing the entire curve with a single number. It turns out there are several ways of doing just that.
Very frequently, the metrics Leq, Lmax, and Lmin are used. These refer to the Equivalent Level, the Maximum Level, and the Minimum Level. Lmax and Lmin are the easiest to understand, they’re simply
the highest and lowest values the sound level meter saw during the measurement:
Leq is trickier to understand. Technically the following charts aren’t a correct representation, because Leq depends on the actual numerical value for sound pressure level, not the decibel value.
But understanding that is not necessary for understanding the idea behind Leq.
Leq is the Time Weighted Equivalent Level. That’s kind of a mouthful but it’s not too complicated. Weighting is a way of averaging. Here’s how it works out:
The below chart shows everything below the curve highlighted in red. The size of the red area is, essentially, the SPL multiplied by the amount of time of the measurement. In order to multiply
something by a curvy line, you have to use calculus, which is actually what an “integrating sound level meter” is doing.
If you take the entire red area and rearrange it so that it makes a nice square shape, you’ll have something like this:
This red square has the same area as the red area under the curve in the previous figure.
The Equivalent Level, Leq, is the sound level that would result in a square with the same area as the curve.
Leq and Lmax are used quite frequently. Lmin is not used as often, but is usually recorded alongside Leq and Lmax anyway.
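The point above that Leq depends on the actual pressure values, not the decibel numbers, is easy to make concrete. Here is a minimal Python sketch (assuming equal-duration SPL samples, which is a simplification of what an integrating meter really does):

```python
from math import log10

def leq(levels_db):
    """Equivalent level: average the underlying energy-like quantities
    (10^(L/10)), not the decibel values themselves, then convert back."""
    mean_energy = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * log10(mean_energy)

print(leq([60, 60, 60]))  # a steady 60 dB signal has Leq = 60.0
print(leq([50, 70]))      # ~67 dB, not 60: the loud half dominates
```

Averaging the decibel values directly would give 60 dB in the second case; the energy average comes out around 67 dB because decibels are logarithmic.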
Another method of deriving a single number from a sound level measurement is with a Statistical Analysis, or “Ln values.” “Ln” by itself describes the methodology and isn’t an actual level. The actual levels are written with a number in place of the letter n. Typical Ln metrics are L10, L50, and L90. You can have any Ln value you want: L37.5, for example.
What the number refers to is the “percentile” of the value. Specifically, the amount of time the sound level was above the Ln value. L10 is the level, in decibels, that the sound level exceeded for
10% of the time. L50 is the value that the sound level was above for 50% of the time and can be considered the median value. L90 is the value that the sound level was above 90% of the time.
Graphically speaking, Ln values are calculated by adjusting a line up and down until exactly the correct percentage of the line is below the curve. The L10 line is adjusted until 10% of it is below
the curve, the L50 line is adjusted until exactly half of it is below the curve, and the L90 line is adjusted until 90% of it is below the curve. Once those lines are adjusted to the proper height,
you read the value from the left axis of the graph to determine your Ln values.
So on the following chart, if our Lmax was 80 dB and our Lmin was 40 dB, L10 would be about 78 dB, L50 would be about 65 dB, and L90 would be about 45 dB.
The chart below shows the lines without showing the SPL curve. The sections in red are what would appear below the curve. For L10, the section below the curve (red) is 10% of the total length of
the line. For L50, the red portions make up half of the length of the line. For L90, the red portions make up 90% of the length of the line. The length of the lines left to right is equal to the
amount of time of the measurement.
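If you have the individual SPL samples rather than charts, Ln values can be read off as percentiles: Ln is exceeded n% of the time, so it is the (100 - n)th percentile of the samples. A small sketch (this uses one common rounding convention; real meters may differ in detail):

```python
def ln_value(spl_samples, n):
    """L_n: the level exceeded n percent of the measurement time,
    i.e. the (100 - n)th percentile of the samples."""
    ordered = sorted(spl_samples)
    idx = round((len(ordered) - 1) * (100 - n) / 100)
    return ordered[idx]

samples = list(range(1, 101))   # toy data: one sample at each of 1..100 dB
print(ln_value(samples, 10))    # 90: exceeded by the top 10% of samples
print(ln_value(samples, 90))    # 11: nearly everything is above this
```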
Ln values are very useful for long term noise measurements, such as what you would use for an environmental noise study. Such studies often have measurements that last for several days. L90 is
commonly used to determine the ambient, background level. If your L90 value is 45 dB, that is the same as saying “the sound level was 45 dB or higher 90% of the time.”
In the next article I will write about the different ways of combining all the different frequencies of a given sound into a single number. This includes A-weighted decibels, or dBA, which are the
most commonly used single-number flavor of decibels, and what the great majority of noise ordinances refer to.
Is something still unclear? Did I make a mistake? If so, please ask any questions or share any comments in the comments section of this article. I will attempt to clarify anything I didn’t explain
well or correct any mistakes I may have made.
11 Comments
1. Joshua, I am interested in getting in touch with folks who are concerned about the loudness of Nutty Brown concerts. Do you have any contact info for those groups or individuals (such as yourself) that are trying to make a difference?
2. Could you explain how L10 and L90 are actually calculated from sound level measurement data?
That is, given a one hour sound level measurement at 10 second intervals, how would you calculate the L10, practically speaking? Is there a way to do it in the Excel spreadsheet program?
□ Hi David. Very good question!
It’s really a matter of counting more than a matter of calculating. Ln values are called statistical measurements because they’re a result of analyzing a group of samples, rather than
performing a calculation. During a measurement the sound level meter continuously adds the current SPL to a histogram. The interval between samples is constant and internal to the meter, and
certainly less than a second. L10 then becomes the value that is lower than 10% of the measurements and higher than 90% of the measurements.
Here’s a very simplified example. Suppose we turn on our SLM long enough for it to collect 20 samples and we ask it to tell us the L20 of the measurement. The samples it collects, in
chronological order, are:
58, 59, 58, 57, 56, 55, 55, 56, 56, 57, 58, 59, 60, 61, 62, 62, 61, 62, 63, 64
To determine L20, we look for the value that is below 20%, or 4 out of 20, of the samples. The easiest way to do this is to sort the samples in descending order:
64, 63, 62, 62, 62, 61, 61, 60, 59, 59, 58, 58, 58, 57, 57, 56, 56, 56, 55, 55
L20 will be the value below the top 4 values and above the bottom 16. The 4th and 5th samples (in descending order) are 62 and 62, so L20 is 62. In other words, 20% of the time, the measured
sound level was above 62 dB.
We can also determine the L50 (or any other Ln from our data) by using a similar analysis. Instead of the top 20% of samples, we would instead determine the top 50%, or 10 out of 20. The 10th
and 11th samples, in descending order, are 59 and 58. So the L50 is somewhere between 58 and 59 dB.
The only way you could really do this calculation in Excel is if you had access to frequent periodic samples of the measured sound level. Generally the only way you can collect that
type of data is with an advanced sound level meter, and such a meter will almost certainly have Ln functionality built in, so it’s sort of pointless to do it yourself. In a real Ln
measurement the SLM will collect far more than 20 samples, so doing it by hand in a spreadsheet could be quite an undertaking.
If you had a quick pencil, you could approximate Ln values by watching a simple SLM and recording the value at regular intervals; every 10 seconds for an hour, to use your example, would work
well, giving you 360 samples. You would watch a clock with a second hand and every 10 seconds write whatever value was on the SLM at that moment. You would then enter all of your samples into
a spreadsheet and sort them in descending order. You could then determine your approximate L10 by seeing what value had 10% (36 out of 360) of the samples above it.
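The counting described above can also be scripted directly. A quick sketch using the same 20 samples, with the indexing mirroring the sort-descending-and-count procedure:

```python
samples = [58, 59, 58, 57, 56, 55, 55, 56, 56, 57, 58, 59,
           60, 61, 62, 62, 61, 62, 63, 64]

def manual_ln(samples, n):
    """Sort descending, skip the top n percent of samples, and read
    off the next value, as in the worked example above."""
    ordered = sorted(samples, reverse=True)
    top = round(len(ordered) * n / 100)   # 20% of 20 samples = 4
    return ordered[top]

print(manual_ln(samples, 20))   # 62, matching the hand calculation
```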
3. Hello,
Thanks a lot for the detailed explanation. You really helped me figure out L10, L90 and Ln manually.
For noise level measurements I used a sound level meter, and the value appears digitally.
But working manually, if we can’t read the Leq, can we use L10, L90 and L50 to calculate Leq? Is that right, Joshua?
Good job!
4. Joshua:
Thanks, I discovered a function / formula in Excel that calculates L10 and L90 automagically from a data set
It is the “percentile” function.
□ Determining Ln values from a data set isn’t too difficult, though it’s tedious, so the percentile function would certainly make that task simpler (“percentile” is another term used to
describe statistical values). The trick is actually collecting the data. Generally the types of sound level meters that can collect the type of data appropriate for determining Ln already
have Ln functionality built in.
5. Thanks for the useful explanation Joshua. Would the L90 figures have to be calculated from the actual numerical value for sound pressure level, not the decibel value, in the same way as in the
LEQ calculations?
□ Hi Ralph. There’s no point in converting decibels to sound pressure, since you’re not actually “calculating” L90 in a strict sense of the word. It’s really just a statistical examination of
the data, and each individual SPL sample the meter takes is not added/subtracted/multiplied or otherwise mathematically combined with any of the other samples. 60 dBA will always be higher
than 59 dBA, regardless of whether you’re talking in decibels or Pascals of sound pressure, so it works no matter what you do. Better to just stay in decibels.
Another perspective that might help you understand the process is to consider that finding L50 is the same process as finding the median. The median value of any group of data falls below 50%
and above 50% of the group. The L90 value falls below 90% and above 10% of the samples.
6. “60 dBA will always be higher than 59 dBA, regardless of whether you’re talking in decibels or Pascals of sound pressure”
Thanks Joshua, I realise that was a kind of dumb question now! (Like asking if I need to change a box of jelly babies into chorus girls before counting them. Actually, there would be some point
in doing that….)
I’ve got the percentile formula working in Excel now. Just as a point of interest for those trying it, the syntax is =PERCENTILE(data_range, xValue).
Because of the way the formula works, for L90 the xValue is 0.1, and for L10 it is 0.9. The only Ln value that is the same as its xValue, obviously, is L50.
7. Hi, I have a dilemma. During monitoring noise at a certain location, overall 15-minute L90 values (linear) were recorded. However, I am after the A-weighted 15-minute L90 value. There is no
spectrum for the L90 values and only have spectrum data for Leq,1min. Is there any way the A-weighted L90 value,15minute could be derived from the available data? Any help would be appreciated.
□ There’s no way to do a direct conversion, since Leq and L90 are calculated in fundamentally different ways.
I can think of two ways you can make an approximation. Neither of them are great, but I can’t think of any better options.
The first thing you can do is make the assumption that the Leq spectrum is similar in shape to the L90 spectrum. This is not a reliable assumption to make, since the L90 spectrum can be quite different from Leq, depending on what sounds you were measuring. Subtract the difference between your unweighted (overall) Leq and A-weighted Leq from your unweighted L90 and that will approximate the A-weighted L90.
L90 (dBA, approx) = L90 (unweighted) - [Leq (unweighted) - Leq (dBA)]
If you don’t have the unweighted Leq already, you can calculate it by adding up the 1/3-octave or octave bands using decibel addition.
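The decibel addition mentioned here works just like the Leq averaging: convert each band level to an energy-like quantity, sum, and convert back. A sketch (the band levels are hypothetical):

```python
from math import log10

def decibel_sum(band_levels):
    """Combine octave- or third-octave-band levels into a single
    overall level by summing energies, not decibel numbers."""
    return 10 * log10(sum(10 ** (l / 10) for l in band_levels))

print(decibel_sum([60, 60]))          # ~63: doubling the energy adds ~3 dB
print(decibel_sum([70, 60, 50, 40]))  # ~70.5: the loudest band dominates
```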
The other approach you can take is to do some research and dig up some Ln spectra for a similar measurement. L90 captures a good representation of background, ambient noise, so if you can
find a measurement done in a similar location chances are decent the background spectrum will be similar to what it was for your measurement. Again, approximate the A-weighted L90 for your
data by subtracting the difference between the unweighted and A-weighted numbers for your reference measurement.
Hopefully you weren’t in an unusual environment. If you were somewhere typical, like an outdoor rural area, then comparing your data to a measurement in a similar environment should yield
decent results.
|
{"url":"http://austinnoise.org/2010/02/03/decibel-flavors-part-1-l-values/","timestamp":"2014-04-21T04:31:16Z","content_type":null,"content_length":"48226","record_id":"<urn:uuid:bb5a1c66-e072-405d-8019-64b3398b16b9>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Literature Choices: You decide to take a literature course. A requirement for the course is that you must read one classic book, one nonfiction book, and one science fiction book from the list below.
Classic: A Farewell to Arms (F), The Grapes of Wrath (G), Tom Sawyer (T), Moby-Dick (M). Nonfiction: John Adams (J), Band of Brothers (B). Science Fiction: War of the Worlds (W), Dune (D). a) Determine the
number of points in the sample space. b) Construct a tree diagram and determine the sample space. d) Determine the probability that either A Farewell to Arms or Moby-Dick is selected.
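The counting principle behind parts a) and d) can be sketched in a few lines of Python (the single-letter labels follow the question):

```python
from itertools import product

classics = ["F", "G", "T", "M"]   # 4 choices
nonfiction = ["J", "B"]           # 2 choices
scifi = ["W", "D"]                # 2 choices

# every reading list is one (classic, nonfiction, sci-fi) triple
sample_space = list(product(classics, nonfiction, scifi))
print(len(sample_space))          # 4 * 2 * 2 = 16 points

# P(classic is either F or M): count favourable outcomes
favourable = [s for s in sample_space if s[0] in ("F", "M")]
print(len(favourable) / len(sample_space))   # 8/16 = 0.5
```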
Box 1: Big-bang Basics
The expansion of the Universe is described by the cosmic-scale factor R(t). The expansion rate is the Hubble parameter, H(t) = (dR/dt)/R(t).
As the Universe expands, photons have their wavelengths stretched (``redshifted'') proportional to R(t). The measured redshift z of a photon of known wavelength at emission (e.g., a spectral line)
reveals the size of the Universe when it was emitted: 1 + z = R(today)/R(emission).
As the Universe expands, it cools adiabatically with temperature falling as 1/R(t). At a temperature of around 3000 K (energy equivalent of about 0.3 eV), electrons and nuclei combined into neutral atoms and the Universe became transparent to radiation.
When the Universe was about a factor of 6000 times smaller than present (redshift z of about 6000), the energy density in radiation was comparable to that in matter.
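These scalings are easy to check numerically. A small sketch (taking the present radiation temperature to be about 2.7 K, a value not stated in the box above):

```python
def scale_factor_ratio(z):
    """R(emission) / R(today) for light observed at redshift z:
    wavelengths stretch proportional to R, so 1 + z = R_now / R_then."""
    return 1.0 / (1.0 + z)

def temperature_at(z, t0=2.725):
    """Radiation temperature at redshift z, scaling as 1/R (t0 in kelvin)."""
    return t0 * (1.0 + z)

print(scale_factor_ratio(1))    # 0.5: the Universe was half its present size
print(temperature_at(1100))     # ~3000 K, the temperature quoted above
```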
Martin White
Sun Nov 2 13:44:30 CST 1997
Fisher's Exact Test
When you open the file (see hyperlink at the bottom of this page), the following Excel file will emerge.
To run the program, place your data frequencies in the cells. For instance, if you had the following data
If any cell contains a zero, you are done. If not get a piece of paper and keep track of the little red number at the bottom right corner of the spreadsheet. So far, this number indicates the
probability of finding this arrangement of frequencies at random (p = .36, rounded).
To find the probability of such an occurrence of frequencies or ones even more extreme (provided the column and row totals are the same), we make a change in the cell frequencies.
Look at the diagonal frequencies and find the diagonal pair that is smallest. In our example, the smallest diagonal frequencies are indicated by the circled frequencies.
Subtract one from each of these smallest diagonal frequencies and add one to the other diagonal frequencies.
When this step is taken, the spreadsheet becomes:
Add the probability value (p = .13 rounded) to the previous probability score.
Since no cell contains a zero, the process is repeated. Subtract 1 from each of the diagonal frequencies that are smallest. Add 1 to each of the diagonal frequencies that are largest.
Since a cell contains a zero, the process is finished. In this case, the probability of .01 rounded is added to the previous probability values of .36 and .13, which equals .50. Because this value is
above a probability of .05 (the typical decision rule set), most researchers would not reject the null hypothesis and, thus no claim of a statistically significant relationship would be made.
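The spreadsheet procedure described above can also be written out from scratch. Below is a Python sketch that follows the same steps: compute the hypergeometric probability of the observed table, then repeatedly shift the smaller diagonal by one and accumulate probabilities until a cell reaches zero. The example table is hypothetical; for real work a library routine such as SciPy's Fisher's exact test is the usual choice.

```python
from math import comb

def table_prob(a, b, c, d):
    """Probability of one 2x2 table with the row/column totals fixed."""
    n = a + b + c + d
    return comb(a + b, a) * comb(c + d, c) / comb(n, a + c)

def fisher_one_tailed(a, b, c, d):
    """Sum the probability of the observed table and of each more
    extreme table, stepping the smaller diagonal toward zero."""
    p = table_prob(a, b, c, d)
    while min(a, b, c, d) > 0:
        if min(a, d) <= min(b, c):      # a,d is the smaller diagonal
            a, d, b, c = a - 1, d - 1, b + 1, c + 1
        else:                           # b,c is the smaller diagonal
            b, c, a, d = b - 1, c - 1, a + 1, d + 1
        p += table_prob(a, b, c, d)
    return p

print(fisher_one_tailed(3, 1, 1, 3))   # 17/70, about 0.243
```

As in the walkthrough, if the observed table already contains a zero, the answer is just the probability of that one table.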
NOTE: You must have Excel installed on your computer to use this spreadsheet.
This program was prepared with Microsoft Excel 2003.*
Download Excel file to compute a Fisher's Exact Test for a 2 x 2 table
* Microsoft Excel is a registered trademark of the Microsoft Corporation
Safe Haskell Safe-Inferred
data FTree a
The mother structure holds the functions used to extract a value to be summed and to compare elements. Below it there is a tree of FNodes.
query :: a -> FTree a -> Val
Finds a cumulative sum up to a given node of a Fenwick tree. Note: if the node is not found, a sum at the point corresponding to this node is still returned. (Convenient for finding the CDF value at a given point.)
invQuery :: Val -> FTree a -> Maybe a
Finds the node corresponding to a given cumulative sum, convenient for sampling the quantile function of a distribution. NOTE: returns an answer only up to the cumulative sum of the whole tree.
toList :: FTree a -> [a]
Extract a sorted list of inserted values from the tree.
fromList :: (a -> a -> Ordering) -> (a -> Val) -> [a] -> FTree a
Creates a tree from a list and helper functions: compare, and value.
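To illustrate the query/invQuery semantics, here is a conceptual Python sketch of a Fenwick (binary indexed) tree over fixed 1-based indices. This is an analogue, not a port: the Haskell package stores arbitrary values with user-supplied compare/value functions.

```python
class Fenwick:
    def __init__(self, n):
        self.tree = [0.0] * (n + 1)

    def add(self, i, v):
        """Add weight v at 1-based index i."""
        while i < len(self.tree):
            self.tree[i] += v
            i += i & -i

    def query(self, i):
        """Cumulative sum of weights at indices 1..i (cf. query)."""
        s = 0.0
        while i > 0:
            s += self.tree[i]
            i -= i & -i
        return s

    def inv_query(self, target):
        """Smallest index whose cumulative sum reaches target
        (cf. invQuery, useful for sampling a quantile function)."""
        i, step = 0, 1
        while step * 2 < len(self.tree):
            step *= 2
        while step:
            if i + step < len(self.tree) and self.tree[i + step] < target:
                i += step
                target -= self.tree[i]
            step //= 2
        return i + 1

f = Fenwick(4)
for i, w in enumerate([1, 2, 3, 4], start=1):
    f.add(i, w)
print(f.query(3))       # 6.0 = 1 + 2 + 3
print(f.inv_query(7))   # 4: first index where the running sum reaches 7
```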
Analysis of variance
Analysis of variance (ANOVA)
Menu location: Analysis_Analysis of Variance.
ANOVA is a set of statistical methods used mainly to compare the means of two or more samples. Estimates of variance are the key intermediate statistics calculated, hence the reference to variance in
the title ANOVA. The different types of ANOVA reflect the different experimental designs and situations for which they have been developed.
Excellent accounts of ANOVA are given by Armitage & Berry (1994) and Kleinbaum et al. (1998). Nonparametric alternatives to ANOVA are discussed by Conover (1999) and Hollander and Wolfe (1999).
ANOVA and regression
ANOVA can be treated as a special case of general linear regression where the independent/predictor variables are nominal categories, or factors. Each value that can be taken by a factor is referred to as a level. k different levels (e.g. three different types of diet in a study of the effect of diet on weight gain) are coded not as a single column (e.g. diet 1 to 3) but as k-1 dummy variables. The dependent/outcome variable in the regression consists of the study observations.
General linear regression can be used in this way to build more complex ANOVA models than those described in this section; this is best done under expert statistical guidance.
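As a concrete (if simplified) illustration of the key intermediate statistics, here is a sketch of a one-way fixed-factor ANOVA, computing between- and within-group variance estimates and the F ratio; the data are made up:

```python
def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # between-groups and within-groups sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_b, df_w = k - 1, n - k
    f = (ss_between / df_b) / (ss_within / df_w)   # ratio of variance estimates
    return f, df_b, df_w

f, df_b, df_w = one_way_anova([[1, 2, 3], [2, 3, 4], [6, 7, 8]])
print(f, df_b, df_w)   # 21.0 2 6
```

The F statistic is then compared against an F distribution with (df_b, df_w) degrees of freedom.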
Fixed vs. random effects
A fixed factor has only the levels used in the analysis (e.g. sex, age, blood group). A random factor has many possible levels and some are used in the analysis (e.g. time periods, subjects,
observers). Some factors that are usually treated as fixed may also be treated as random if the study is looking at them as part of a larger group (e.g. treatments, locations, tests).
Most general statistical texts arrange data for ANOVA into tables where columns represent fixed factors and the one and two way analyses described are fixed factor methods.
Multiple comparisons
ANOVA gives an overall test for the difference between the means of k groups. StatsDirect enables you to compare all k(k-1)/2 possible pairs of means using methods that are designed to avoid the type
I error that would be seen if you used two sample methods such as t test for these comparisons. The multiple comparison/contrast methods offered by StatsDirect are Tukey(-Kramer), Scheffé,
Newman-Keuls, Dunnett and Bonferroni (Armitage and Berry, 1994; Wallenstein, 1980; Liddell, 1983; Miller, 1981; Hsu, 1996; Kleinbaum et al., 1998). See multiple comparisons for more information.
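The simplest of these adjustments, Bonferroni, just divides the significance level by the number of comparisons; a two-function sketch:

```python
def pairwise_comparisons(k):
    """All possible pairs among k group means: k(k-1)/2."""
    return k * (k - 1) // 2

def bonferroni_alpha(alpha, k):
    """Per-comparison significance level after Bonferroni adjustment."""
    return alpha / pairwise_comparisons(k)

print(pairwise_comparisons(4))     # 6 pairwise tests for 4 groups
print(bonferroni_alpha(0.05, 4))   # each tested at 0.05 / 6, about 0.0083
```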
Further methods
There are many possible ANOVA designs. StatsDirect covers the common designs in its ANOVA section and provides general tools (see general linear regression and dummy variables) for building more
complex designs.
Other software such as SAS and Genstat provide further specific ANOVA designs. For example, balanced incomplete block design:
- with complete missing blocks you should consider a balanced incomplete block design provided the number of missing blocks does not exceed the number of treatments.
A x x x
B x x x
C x x x
D x x x
Complex ANOVA should not be attempted without expert statistical guidance. Beware of situations where overly complex analysis is used to compensate for poor experimental design. There is no substitute for good experimental design.
F07 -
Chapter Introduction
F07ADFP (PDGETRF) LU factorization of real general matrix
F07AEFP (PDGETRS) Solution of real linear system, matrix already factorized by F07ADFP (PDGETRF)
F07ARFP (PZGETRF) LU factorization of complex general matrix
F07ASFP (PZGETRS) Solution of complex linear system, matrix already factorized by F07ARFP (PZGETRF)
F07FDFP (PDPOTRF) Cholesky factorization of real symmetric positive-definite matrix
F07FEFP (PDPOTRS) Solution of real symmetric positive-definite linear system, matrix already factorized by F07FDFP (PDPOTRF)
F07FRFP (PZPOTRF) Cholesky factorization of complex Hermitian positive-definite matrix
F07FSFP (PZPOTRS) Solution of complex Hermitian positive-definite linear system, matrix already factorized by F07FRFP (PZPOTRF)
* F07HDFP (PDPBTRF) Cholesky factorization of real symmetric banded matrix with no pivoting
* F07HEFP (PDPBTRS) Solution of real symmetric banded linear system, matrix already factorized by F07HDFP (PDPBTRF)
* F07HRFP (PZPBTRF) Cholesky factorization of complex Hermitian banded matrix with no-pivoting
* F07HSFP (PZPBTRS) Solution of complex Hermitian banded linear system, matrix already factorized by F07HRFP (PZPBTRF) (complex version of F07HEFP (PDPBTRF))
* F07JDFP (PDPTTRF) Cholesky factorization of real symmetric tridiagonal matrix with no-pivoting
* F07JEFP (PDPTTRS) Solution of real symmetric tridiagonal linear system, matrix already factorized by F07JDFP (PDPTTRF)
* F07JRFP (PZPTTRF) Factorization of complex Hermitian tridiagonal matrix with no-pivoting
* F07JSFP (PZPTTRS) Solution of real symmetric tridiagonal linear system, matrix already factorized by F07JRFP (PZPTTRF) (complex version of F07JEFP (PDPTTRS))
F07TGFP (PDTRCON) Estimates condition number of real triangular matrix
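The routines above come in factorize/solve pairs (e.g. PDGETRF followed by PDGETRS) so that one factorization can serve many right-hand sides. That pattern can be sketched in miniature for a small dense serial system; this illustrates the idea only, and has nothing of the distributed, blocked structure of the actual ScaLAPACK-based routines:

```python
def lu_factor(A):
    """LU factorization with partial pivoting (the serial idea behind
    routines like PDGETRF; A is a list of row lists)."""
    n = len(A)
    A = [row[:] for row in A]          # work on a copy
    piv = list(range(n))               # row permutation record
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        piv[k], piv[p] = piv[p], piv[k]
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]         # multiplier stored in place of L
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]
    return A, piv

def lu_solve(lu, piv, b):
    """Solve A x = b from an existing factorization (cf. PDGETRS);
    the same factorization is reused for each new right-hand side."""
    n = len(b)
    x = [b[p] for p in piv]            # apply the row permutation
    for i in range(n):                 # forward substitution (unit L)
        for j in range(i):
            x[i] -= lu[i][j] * x[j]
    for i in reversed(range(n)):       # back substitution (U)
        for j in range(i + 1, n):
            x[i] -= lu[i][j] * x[j]
        x[i] /= lu[i][i]
    return x

lu, piv = lu_factor([[2.0, 1.0], [1.0, 3.0]])
print(lu_solve(lu, piv, [3.0, 5.0]))   # about [0.8, 1.4]
```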
© The Numerical Algorithms Group Ltd, Oxford UK. 2000
Mass is energy
The USS Enterprise in 1964 (pre Zefram Cochrane era), during Operation Sea Orbit when it sailed around the world in 65 days without refuelling - demonstrating the capability of nuclear-powered ships.
Credit: US Navy.
Some say that the reason you can't travel faster than light is that your mass will increase as your speed approaches light speed so, regardless of how much energy your star drive can generate, you reach a point where no amount of energy can further accelerate your spacecraft because its mass is approaching infinity.
This line of thinking is at best an incomplete description of what's really going on and is not a particularly effective way of explaining why you can't move faster than light (even though you can't). However, the story does offer some useful insight into why mass is equivalent to energy, in accordance with the relationship e=mc^2.
Firstly, here's why the story isn't complete. Although someone back on Earth might see your spacecraft's mass increase as you move near light speed, you certainly aren't going to notice your spacecraft's, or your own, mass change at all. Within your spacecraft, you would still be able to climb stairs, jump rope and, if you had a set of bathroom scales along for the ride, you would still weigh just the same as you did back on Earth (assuming your ship is equipped with the latest in artificial gravity technology that mimics conditions on Earth's surface).
The change perceived by an Earth observer is just relativistic mass. If you hit the brakes and returned to a more conventional velocity, all the relativistic mass would go away and an Earth observer would see you retaining the same proper (or rest) mass that the spacecraft and you had before you left Earth.
The Earth observer would be more correct to consider your situation in terms of momentum energy, which is a product of your mass and your speed. So as you pump more energy into your star drive system, someone on Earth really sees your momentum increase but interprets it as a mass increase, since your speed doesn't seem to increase much at all once it is up around 99% of the speed of light. Then when you slow down again, although you might seem to be losing mass, you are really offloading energy, perhaps by converting your kinetic energy of motion into heat (assuming your spacecraft is equipped with the latest in relativistic braking technology).
As the ratio of your velocity to light speed approaches 1, the ratio of your relativistic mass to your rest mass grows asymptotically - i.e. it approaches infinity.
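That asymptotic growth is just the Lorentz gamma factor, which you can tabulate yourself; a sketch:

```python
from math import sqrt

def mass_ratio(v_over_c):
    """Relativistic mass divided by rest mass (gamma) at speed v."""
    return 1.0 / sqrt(1.0 - v_over_c ** 2)

for v in (0.5, 0.9, 0.99, 0.999):
    print(v, round(mass_ratio(v), 2))
# 0.5 -> 1.15, 0.9 -> 2.29, 0.99 -> 7.09, 0.999 -> 22.37
```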
From the perspective of the Earth-based observer, you can formulate that the relativistic mass gain observed when travelling near light speed is the sum of the spacecraft's rest mass/energy plus the kinetic energy of its motion, all divided by c^2. From that you can (stepping around some moderately complex math) derive that e=mc^2. This is a useful finding, but it has little to do with why the spacecraft's speed cannot exceed light speed.
The other relativistic phenomena follow similar, though inverse, asymptotic relationships to your speed. So as you approach light speed, your relativistic time approaches zero (clocks slow), your relativistic spatial dimensions approach zero (lengths contract) and your relativistic mass grows towards infinity.
But as we've covered already, on the spacecraft you do not experience your spacecraft gaining mass (nor does it seem to shrink or have its clocks slow down). So you must interpret your increase in momentum energy as a genuine speed increase, at least with respect to a new understanding you have developed about speed.
When you approach light speed and still keep pumping more energy into your drive system, what you find is that you keep reaching your destination faster, not so much because you are moving faster, but because the time you estimated it would take you to cross the distance from point A to point B becomes perceivably much less; indeed the distance between point A and point B also becomes perceivably much less. So you never break light speed, because the distance-over-time parameters of your speed keep changing in a way that ensures that you can't.
In any case, consideration of relativistic mass is probably the best way to derive the relationship e=mc^2, since the relativistic mass is a direct result of the kinetic energy of motion. The relationship does not easily fall out of consideration of (say) a nuclear explosion, since much of the energy of the blast derives from the release of the binding energy which holds a heavy atom together. A nuclear blast is more about energy transformation than about matter converting to energy, although at a system level it still represents genuine mass-to-energy conversion.
Similarly, you might consider that your cup of coffee is more massive when it's hot and gets measurably less massive when it cools down. Matter, in terms of protons, neutrons, electrons and coffee, is largely conserved throughout this process. But, for a while, the heat energy really does add to the mass of the system, although since it's a mass of m=e/c^2, it is a very tiny amount of mass.
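The coffee example is easy to put numbers on. A sketch, assuming a 250 g cup cooled by 60 K and water's specific heat:

```python
C = 3.0e8   # speed of light, m/s

def mass_of_heat(mass_kg, delta_t, specific_heat=4186.0):
    """Mass equivalent of the heat energy lost as an object cools:
    E = m * c_p * dT, then m = E / c^2."""
    energy_j = mass_kg * specific_heat * delta_t
    return energy_j / C ** 2

print(mass_of_heat(0.25, 60))   # ~7e-13 kg: utterly unmeasurable in practice
```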
3.6 / 5 (11) Nov 21, 2011
Not sure why this is on physorg. No new finding, invention, paper, technology or otherwise novel thing is covered here. This is just basic Relativity stuff.
Which is pretty much summed up by the author description in the original article:
Steve Nerlich is a very amateur Australian astronomer, publisher of the Cheap Astronomy website and the weekly Cheap Astronomy Podcasts and one of the team of volunteer explainers at Canberra
Deep Space Communications Complex - part of NASA's Deep Space Network.
4.4 / 5 (8) Nov 21, 2011
@antialias: Believe me, if you really understand every word of this article (and by understanding I mean that you really know what you can do with it, and that it is technically possible to do some of this stuff), then you won't want to see any novel-looking discovery on this page ever again; that just fills up your mind and prevents you from understanding.
I think clearing the stuff up is much more important than just showing another new-looking discovery which was actually discovered many years before, maybe in a slightly different version.
And I think physorg has now totally recognized this.
5 / 5 (1) Nov 21, 2011
Ok, but I actually thought like you when I read the title the first time. But then I saw the date in the caption (The USS Enterprise in 1964...), which is actually a slight hint, I think. I guess we are about 50 to 100 years away from really understanding.
2 / 5 (2) Nov 21, 2011
Good article. But I have a small question: Let's say I'm traveling to a planet that's 10ly away. And I manage to accelerate to 20% of the speed of light. When the time comes to decelerate close to my
destination, won't all that savings in time-dilation be "taken back" during my slowing down? Or as a passenger on my ship, will the trip actually appear to take a shorter time than a
straight-forward, non-time-dilated calculation would predict?
I love this stuff.
1.5 / 5 (4) Nov 21, 2011
Relativistic effects are mind blowing. Question: if two ships accelerate away from each other, would they only have to reach 1/2 the speed of light to achieve a relative light speed between them? Could they exceed the speed of light relative to each other, as each would only need to achieve slightly more than 50% of the speed of light?
5 / 5 (2) Nov 21, 2011
Yes to the above: as seen by you, they are moving away from each other at greater than c, but in the frame of reference of one ship relative to the other, no, they are moving at less than c.
Of course, I'm just repeating what I learned watching Susskind's lectures on YouTube, to the best of my recollection. I think it's lecture 6 of the cosmology series where he explains this.
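For concreteness, the two-ship question can be made quantitative with the relativistic velocity-addition formula w = (u + v)/(1 + uv/c^2); a short sketch with speeds written as fractions of c:

```python
def relativistic_sum(u, v):
    """Combine two parallel speeds, each given as a fraction of c."""
    return (u + v) / (1.0 + u * v)

# Two ships each doing 0.5c away from a middle observer: the middle
# observer sees the gap grow at 1.0c, but each ship measures the other
# receding at only 0.8c.
ship_to_ship = relativistic_sum(0.5, 0.5)

# Even at 0.9c each, the ship-to-ship speed stays below c (~0.9945c).
extreme = relativistic_sum(0.9, 0.9)
```

No matter how close u and v get to 1, the combined speed stays below 1, so neither ship ever measures the other moving faster than light.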
4 / 5 (1) Nov 21, 2011
No Mayday, it would increase the total time compared to how fast you could fly right past the planet at full speed. But overall it wouldn't be "taken back" relative to only being able to accelerate to anything less. Remember, acceleration is velocity increasing over time, so if you could keep increasing speed for 5 of those light years, your velocity would inevitably be increasing that whole time.
not rated yet Nov 21, 2011
Given what he says about a nuclear explosion: about how much mass is converted to energy in, say, a 1 megaton blast, versus atomic bond energy?
2 / 5 (1) Nov 21, 2011
Some people spoil great articles. As nature proves, things can move faster, like the redshift seen from distant universes, or particles that enter our atmosphere at such high speeds that we can still detect them at ground level (the particle's clock slows down). To travel faster than light one needs to bend empty space, or create an object on which the vacuum pressure is not equal on all sides. In short, to travel faster one needs to abandon rocket-based ideas and hack into the framework of our dimensions, wormholes, etc. It's not easy, but impossible? Well, not for now... but never say no, I think.
5 / 5 (1) Nov 21, 2011
No, you don't pay it back. Even when slowing down you are still 'saving time' because you are still going faster than when at rest.
A megaton is a million tonnes (metric tons) of TNT, where the energy of a gram of TNT is defined as 4184 Joules (one kcal). A tonne is a million grams, so that is 4.2x10^15 Joules.
c is 3x10^8 meters/sec, so c^2 is 9x10^16, which is ~21 times bigger. So a 1 megaton H-bomb converts about 46 grams of matter to energy.
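As a cross-check (using the standard definition 1 gram of TNT = 4184 J, so 1 megaton = 4.184x10^15 J, and c^2 = 9x10^16 m^2/s^2), the converted mass works out to roughly 46 grams:

```python
# Mass-energy equivalent of a 1-megaton blast via m = E / c^2.
TNT_JOULES_PER_GRAM = 4184.0   # definition: 1 g TNT = 1 kcal
MEGATON_GRAMS = 1e12           # 1 Mt = 10^6 tonnes = 10^12 g
C = 3.0e8                      # speed of light, m/s

energy_joules = TNT_JOULES_PER_GRAM * MEGATON_GRAMS  # ~4.2e15 J
mass_grams = energy_joules / C ** 2 * 1000           # kg -> g, ~46 g
```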
5 / 5 (1) Nov 21, 2011
Alfredh H, 5:45 minutes into the 4th lecture
1 / 5 (2) Nov 21, 2011
baaaaaaaaaad article
worst ever
3.5 / 5 (4) Nov 22, 2011
Reading this article did not clear up any relativistic problems anybody may have. In fact it would just add to the confusion.
The problem a lay person has boils down to not appreciating a speed limit.
The best explanation would avoid any reference to a speed limit and fix the perceived problem from an alternate view point.
Look at velocity from the point of view of the traveler. Question: How long would it take me to fly to the nearest star which is 4 light years away?
Answer: depends how fast you go. Given a very powerful engine you could be there in a year, a month, a day, an hour or whatever.
Question: But what about the speed limit?
Answer: There is no speed limit; you just have to speed up the universe to get there sooner, and that uses a lot of fuel.
3 / 5 (2) Nov 22, 2011
Go into a little more detail along those lines. If you could reach the speed of light you could be anywhere in the universe in no time at all.
Maybe all acceleration applies to yourself pushing the universe. The faster you want to travel, the harder you push.
5 / 5 (1) Nov 22, 2011
I don't know why people still keep mixing frames of reference in the spacecraft-approaching-the-speed-of-light issue, since from the lab's frame, if the spacecraft is gaining mass, so is its fuel.
not rated yet Nov 23, 2011
Thank you for your response, but I have an additional question. You said:
"A megaton ...converts about 46 grams of matter to energy."
But the author said: "A nuclear blast is more about energy transformation than about matter converting to energy"
Is there a calculation for how much of the energy release each source contributes to the whole?
5 / 5 (1) Nov 23, 2011
richgibula - The author is referring to a nuclear blast not converting neutrons, protons (or electrons) to energy (the way a matter/antimatter drive would), but rearranging these particles into a lower energy state.
In a chemical explosion, it is the atoms' electrons that change configuration to release energy.
In a nuclear explosion, it is the protons and neutrons that are rearranged to release energy.
In either case the energy released is reflected by the reaction products (once cooled and at rest) having less mass than the starting material had. The change is very small - even in a hydrogen bomb
the energy released is less than 1% of the mass of the nuclei involved.
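The "less than 1%" figure can be spot-checked for the textbook deuterium-tritium fusion reaction, using the standard Q-value of 17.6 MeV and isotope masses in atomic mass units (values quoted from memory, so treat them as approximate):

```python
# Fraction of rest mass released in D + T -> He-4 + n.
U_TO_MEV = 931.494       # energy equivalent of one atomic mass unit, MeV
M_DEUTERIUM = 2.014102   # mass of deuterium, u
M_TRITIUM = 3.016049     # mass of tritium, u
Q_VALUE_MEV = 17.6       # energy released per reaction, MeV

rest_mass_mev = (M_DEUTERIUM + M_TRITIUM) * U_TO_MEV  # ~4686 MeV
fraction = Q_VALUE_MEV / rest_mass_mev                # ~0.0038, i.e. ~0.4%
```

So even for the most energetic common fusion reaction, well under 1% of the rest mass of the nuclei involved is released as energy.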
1 / 5 (2) Nov 23, 2011
The E=mc^2 equation can be derived from classical mechanics easily; it's not a relativistic effect.
We can understand it easily using the water-surface model of space-time. The energy introduced makes it undulate and elongate like a deformed carpet. This deformation slows down other particles of matter which are traveling across the undulating place, so it behaves like an area of more dense matter. E=mc^2 is therefore the relation between the deformation of space-time and the energy introduced into this deformation, and it's valid only for 4D space-time. In 3D space-time it would correspond to E=mc, and so on. The introduction of extra dimensions therefore changes the E=mc^2 formula, and general relativity should take this into account when describing hyperdimensional effects, like dark matter.
3 / 5 (2) Nov 24, 2011
Is there a calculation for how much of the energy release each source contributes to the whole?
The energy released in a nuclear blast (if I remember that lecture by Feynman correctly) is a rearrangement (transformation) of neutrons/protons from larger nuclei to smaller ones. The energy released is electrostatic energy. The number of protons and neutrons (and electrons) after a nuclear blast is the same as before. The energy content of those products is, however, less, since they are now rearranged into more stable configurations (with some neutrons flying off on their own).
So really we're not dealing with a 'nuclear' (force) bomb but with an 'electrostatic' bomb. The nuclear forces actually get a bit larger; they are a negative contribution to the energy output.
5 / 5 (1) Nov 24, 2011
AP - Since the question said 1 megaton, that would be a hydrogen bomb. This has a fission bomb at its core (and often as an outer shell as well), which indeed turns huge nuclei into merely large nuclei. But most of the energy of a hydrogen bomb comes from fusion of hydrogen isotopes, typically deuterium and tritium.
Thus most of the energy of a 1-megaton bomb comes from very small nuclei fusing into fairly small nuclei. In this case the energy released comes from the strong nuclear force, with the electrostatic potential increasing.
5 / 5 (2) Nov 24, 2011
Yes. It seems I was mistaken in thinking that fission bombs could be in the megaton range. Thanks for the heads-up.
not rated yet Nov 26, 2011
I wonder about space ships at relativistic speeds.
If your mass increases with speed, does the energy contained in that mass also increase? And if so, does the energy content of the propulsion fuel you carry with you also increase? Meaning that the energy you carry to propel your ship is always in proportion to the energy needed to propel the mass of the ship? Does this mean that accelerating to light speeds is just a question of accelerating long enough without it becoming more difficult to accelerate at all, or does it even become easier (e.g. you need less fuel to accelerate even more, the quicker you go)?
Just asking.
1 / 5 (3) Nov 26, 2011
In the dense aether model the mass of speedy objects increases in the same way as the mass of a duck swimming at the surface of a pond, i.e. by nothing. But the duck deforms the water surface into ripples, which are perpendicular to the direction of its motion, and this deformation expands the surface area and slows down the speed of surface ripples in such a way that their speed with respect to the duck remains constant. In the vacuum, the area of more dense vacuum foam shaken by the motion of objects is called the de Broglie wave.
It means the mass of objects in motion increases with the mass of the surrounding vacuum, which is dragged and shaken by their motion, not with the mass of the objects themselves. The area of dense vacuum slows down the light spreading with respect to the observer, so it contributes to the relativistic length contraction too.
1 / 5 (3) Nov 26, 2011
Does this mean that accelerating to light speeds is just a question of accelerating long enough without it becoming more difficult to accelerate at all
The speed of light with respect to what? You're already moving at the speed of light with respect to some lone proton flying through free space. Does it prohibit you from further acceleration? I would say not.
In the dense aether model your relativistic speed is accompanied by an area of dense vacuum, which increases your mass with respect to other observers. But when some observer is moving with relativistic speed in the same direction as you, then you're both surrounded by the same area of dense vacuum and you cannot detect any gain in mass after that, because you're both surrounded by vacuum of the same relative density.
not rated yet Nov 26, 2011
antialias, they have the article tagged as general physics, maybe that's why? I thought it was really interesting.
Other than that, not sure why it is here.
About the length contraction thing, I wonder if, as your relativistic mass goes up, you actually do something to the fabric of space-time that makes it harder for you to keep accelerating closer and closer to c.
jsa09, please don't do us any more "favors" in trying to make it easier to understand.
not rated yet Nov 27, 2011
Why do the lengths of objects contract when their velocity approaches the speed of light?
1 / 5 (3) Nov 27, 2011
Why do the lengths of objects contract when their velocity approaches the speed of light?
Basically because of the geometry of energy waves spreading with limited speed. As usual in dense aether theory, the water surface analogy explains it well: http://www.aether...rnik.gif
The length contraction is a necessary consequence of the constant-speed-of-light postulate of special relativity.
1 / 5 (3) Nov 27, 2011
the duck deforms the water surface into ripples, which are perpendicular to the direction of its motion, and this deformation expands the surface area and slows down the speed of surface ripples in such a way that their speed with respect to the duck remains constant. In the vacuum, the area of more dense vacuum foam shaken by the motion of objects is called the de Broglie wave.
It means the mass of objects in motion increases with the mass of the surrounding vacuum, which is dragged and shaken by their motion, not with the mass of the objects themselves. The area of dense vacuum slows down the light spreading with respect to the observer, so it contributes to the relativistic length contraction too.
What you are referring to is the state of displacement of the aether.
Aether has mass. Aether is physically displaced by matter.
The faster an object moves through the aether, the more aether the object displaces, and the greater the relativistic mass of the object.
1 / 5 (3) Nov 28, 2011
The vacuum is behaving like the elastic material with some density of inertia. Why? Simply because it's able to serve as an environment for light waves of finite energy density. If it were a material of zero inertia, then every wave in it would undulate with infinite frequency, like an infinitesimally lightweight string.
Therefore the inertial wave equation formalism can be applied to the vacuum as well. For example, when we undulate an elastic material of finite mass density, this material will increase its density for all waves which are passing through the undulating place accordingly.
1 / 5 (4) Nov 28, 2011
The vacuum is behaving like the elastic material
The elastic material is the aether. Aether has mass.
1 / 5 (3) Nov 28, 2011
The vacuum is behaving like the elastic material
The elastic material is the aether. Aether has mass.
'From Analogue Models to Gravitating Vacuum'
"The aether of the 21-st century is the quantum vacuum, which is a new form of matter. This is the real substance"
"Einstein's 'First Paper'"
"The velocity of a wave is proportional to the square root of the elastic forces which cause [its] propagation, and inversely proportional to the mass of the aether moved by these forces."
1 / 5 (3) Nov 28, 2011
The similarity goes even deeper. The deformation of the vacuum in the case of a light wave is close to the deformation of a piece of elastic foam or jelly: http://www.aether...def0.gif http://www.aether...oton.gif It explains the presence of mutually perpendicular vectors of electric and magnetic fields.
1 / 5 (3) Nov 28, 2011
The property aether has which corrects all of the nonsense in physics today is mass. Aether has mass. Aether physically occupies three dimensional space. Aether is physically displaced by matter.
Aether is not at rest when displaced.
Pressure exerted by displaced aether toward matter is gravity.
A moving particle has an associated aether displacement wave. In a double slit experiment the particle enters and exits a single slit and it is the associated aether displacement wave which enters
and exits both. As the wave exits the slits it creates wave interference. As the particle exits a single slit the direction it travels is altered by the wave interference. This is the wave piloting
the particle of pilot-wave theory. Detecting the particle turns the associated aether displacement wave into chop, there is no wave interference, and the direction the particle travels is not altered.
Curved spacetime is displaced aether.
1 / 5 (3) Nov 28, 2011
A moving particle has an associated aether displacement wave.
"Displacement wave" sounds too fuzzy for me. Draw it or illustrate it with some analogy. Why should it have such a wave associated with it? Don't pile up tautologies; use if-then predicates.
1 / 5 (4) Nov 28, 2011
A moving particle has an associated aether displacement wave.
"Displacement wave" sounds too fuzzy for me. Draw it or illustrate it with some analogy. Why should it have such a wave associated with it? Don't pile up tautologies; use if-then predicates.
A boat's bow wave is its water displacement wave. A moving boat physically displaces the water into the form of a wave.
A moving particle physically displaces the aether into the form of a wave.
5 / 5 (3) Nov 29, 2011
I see Callipo has a new sockpuppet
1 / 5 (3) Nov 29, 2011
A boat's bow wave is its water displacement wave. A moving boat physically displaces the water into the form of a wave.
In other words, it's a wake wave. http://aetherwave...ics.html
I see Callipo has a new sockpuppet
I've no connection to wpdwpd. It just illustrates that the aether model leads to reproducible conclusions.
1 / 5 (2) Nov 29, 2011
In other words, it's a wake wave.
It's aether displacement wave. In a double slit experiment the particle enters and exits a single slit and it is the associated aether displacement wave which enters and exits both.
1 / 5 (2) Nov 29, 2011
I tend not to introduce new terms when the mechanical analogy already has its well-established denomination. Here you can find the analogy of this experiment with the water surface: http://
1 / 5 (1) Nov 29, 2011
I tend not to introduce new terms when the mechanical analogy already has its well-established denomination. Here you can find the analogy of this experiment with the water surface: http://
"The walkers wave is similar to the surface wave of a raindrop falling on a puddle"
A raindrop falling on a puddle creates a ripple because the raindrop displaces the water in the puddle.
The physical wave of de Broglie wave mechanics is an aether displacement wave.
Meromorphic functions of lower order less than one [MatSciRep:64]
Title: Meromorphic functions of lower order less than one [MatSciRep:64]
Author: Fuchs, W.H.J.
Main Mathematics
Institution: Institute of Mathematical Sciences
Year: 1967
Pages: 83p.
Abstract: The two properties of polynomials, viz., 1) a polynomial takes on every complex value the same number of times; 2) on large circles |z| = r, the absolute value of a polynomial p(z) is large and lim_{r -> Infinity} |p(r e^(i alpha))| / |p(r e^(i beta))| = 1, uniformly in alpha and beta. The example of the exponential function shows that neither of these two properties subsists for entire functions. These lectures discuss the problem of finding analogues of properties 1 and 2 for entire and meromorphic functions of lower order. Some auxiliary results are given in sections 1 and 2; analogues of property 2 are discussed in sections 3 to 5 of these lectures, while analogues of property 1 are discussed in sections 6 to 8. A knowledge of the fundamentals of Nevanlinna theory is assumed, such as can be found in W.K. Hayman's Meromorphic Functions, chapters 1 and 2.
URI: http://hdl.handle.net/123456789/231
Files in this item: MR64.pdf (3.727 MB, PDF)
Intrinsic Bursters Increase the Robustness of Rhythm Generation in an Excitatory Network
Geoffrey Nunns, Bioengineering Institute, University of Auckland

Curation status: This model is known to run in both PCEnv and COR. The published results cannot be replicated at this time, and further curation is needed. In addition, the paper describes a multi-cell network and uses this as the basis for its figures; this model reproduces a single cell of the pacemaker type, and FieldML will be needed to replicate the model data.

Abstract: The pre-Botzinger complex (pBC) is a vital subcircuit of the respiratory central pattern generator. Although the existence of neurons with pacemaker-like bursting properties in this network is not questioned, their role in network rhythmogenesis is unresolved. Modeling is ideally suited to address this debate because of the ease with which biophysical parameters of individual cells and network architecture can be manipulated. We modeled the parameter variability of experimental data from pBC bursting pacemaker and nonpacemaker neurons using a modified version of our previously developed pBC neuron and network models. To investigate the role of pacemakers in networkwide rhythmogenesis, we simulated networks of these neurons and varied the fraction of the population made up of pacemakers. For each number of pacemaker neurons, we varied the amount of tonic drive to the network and measured the frequency of synchronous networkwide bursting produced. Both excitatory networks with all-to-all coupling and sparsely connected networks were explored for several levels of synaptic coupling strength. Networks containing only nonpacemakers were able to produce networkwide bursting, but with a low probability of bursting and low input and output ranges. The results indicate that inclusion of pacemakers in an excitatory network increases robustness of the network by more than tripling the input and output ranges compared with networks containing no pacemakers. The largest increase in dynamic range occurs when the number of pacemakers in the network is greater than 20% of the population. Experimental tests of the model predictions are proposed.

Figure caption: Schematic diagram depicting the relationships of the active contraction framework proposed by Hunter et al. (11). The model is driven by SL and sarcomere velocity, and intracellular [Ca2+]i. Inputs are in bold, algebraic length dependencies are in italics, and processes described by differential equations are in standard font.

The complete original paper reference is cited below: Intrinsic Bursters Increase the Robustness of Rhythm Generation in an Excitatory Network, L.K. Purvis, J.C. Smith, H. Koizumi, R.J. Butera, 2007, Journal of Neurophysiology, 97, 1515-1526. PubMed ID: 17167061
Changing the subject in an equation
September 10th 2011, 11:46 AM #1
Changing the subject in an equation
I know how to change the subject of the formula in an equation like u=v+2t, but I get confused when the equation is for example p=5xyz+5ab^2 making x the subject.
This is the answer I got:
p - 5ab^2= 5xyz
p - 5ab^2/5yz= x
x = p - 5ab^2/5yz
Was wondering if i could get some advice on whether the answer is correct and my steps as to how I got it are correct because I have a feeling I have made a mistake.
Thank you
Re: Changing the subject in an equation
I know how to change the subject of the formula in an equation like u=v+2t, but I get confused when the equation is for example p=5xyz+5ab^2 making x the subject.
This is the answer I got:
p - 5ab^2= 5xyz
p - 5ab^2/5yz= x
x = p - 5ab^2/5yz
Was wondering if i could get some advice on whether the answer is correct and my steps as to how I got it are correct because I have a feeling I have made a mistake.
You need parentheses around your numerator and denominator ...
x = (p - 5ab^2)/(5yz)
... now you're done.
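A quick numeric round-trip of the rearrangement, with arbitrarily chosen sample values (any nonzero y and z work):

```python
# Round-trip check of x = (p - 5*a*b**2) / (5*y*z) with sample values.
a, b, x, y, z = 2.0, 3.0, 1.5, 4.0, 5.0

p = 5 * x * y * z + 5 * a * b ** 2                # p = 5xyz + 5ab^2
x_recovered = (p - 5 * a * b ** 2) / (5 * y * z)  # solve back for x
```

Recovering the same x that was used to build p confirms the rearranged formula.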
MathGroup Archive: July 2001 [00203]
Re: Naming pieces of patterns
• To: mathgroup at smc.vnet.net
• Subject: [mg29845] Re: Naming pieces of patterns
• From: "Alan Mason" <amason2 at austin.rr.com>
• Date: Fri, 13 Jul 2001 04:19:25 -0400 (EDT)
• References: <9ijhj3$rhj$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Dear Cyril,
To see why it doesn't work, use FullForm. For instance, in your first example, FullForm[-(I/(2 a))] is Times[Complex[0,Rational[-1,2]],Power[a,-1]]. This does *not* match your pattern (I/(2 a)), for which FullForm[(I/(2 a))] gives Times[Complex[0,Rational[1,2]],Power[a,-1]]; it's the Rational[__] parts that don't match.
This is one of several examples of Mathematica pattern matching "quirks", in
the sense that you must know how Mathematica represents expressions
internally (as obtained by using FullForm). There are only two arithmetic
heads, Plus and Times. Minus is represented as Times[-1, ...], and Division
by Power[..., -1]. Also, negative integers like -3 are atoms; the minus
sign won't match in patterns. These design decisions were presumably made
for the sake of efficiency, but they are not user-friendly. A common
problem is trying to simplify a square root everywhere in an expression; the
obvious rule doesn't work if you are dividing by that square root (it won't
be replaced, because a b/(a+b)^(1/2)
is represented as Times[a,b,Power[Plus[a,b],Rational[-1,2]]]; it's that
pesky Rational[-1, 2], with the -1 that doesn't match the pattern
(u_+v_)^(1/2) ).
One could write a whole book on the pattern matcher and the design decisions
underlying its behavior. Perhaps I will do this, from the viewpoint of an
outsider looking out.
"Cyril Fischer" <fischerc at itam.cas.cz> wrote in message
news:9ijhj3$rhj$1 at smc.vnet.net...
> How can I as simply as possible use "substitutions"
> 1.
> -(I/(2 a)) /. I/(2 a) -> A
> does not work, while
> (I/(2 a)) /. I/(2 a) -> A
> works well
> 2.
> {(a + b), -(a + b)}/. a + b -> e
> gives
> {e, -a - b}
> instead of {e,-e}
> 3.
> {-Sqrt[a + b], 1/Sqrt[a + b]} /. Sqrt[a + b] -> e
> gives
> {-e,1/Sqrt[a + b]}
> 4.
> {I, 2 I, -I} /. I -> J
> gives
> {J, 2 \[ImaginaryI], -\[ImaginaryI]}
> I know _why_ these cases do not work, but I would like to know, if there
> is a possibilty to use a common pattern rule to substitute all
> occurences of an expression.
> Thank you,
> Cyril Fischer
Quick Sinusoidal Problem Help
May 22nd 2010, 01:19 PM #1
Quick Sinusoidal Problem Help
Hey guys can you help me with this one question please and maybe explain the process.
The daylight time (time between sunrise and sunset) varies sinusoidally throughout a year. The longest day of the year is the 170th day, with a daylight time of 15 hours. The shortest daylight time occurs on the 353rd day of the year, with a daylight time of 6 hours.
It's a two part question:
What is the period and amplitude: I figured A=4.5 and P=365
And then it asks to create a graph of this model and generate a specific equation with time t=0 being the first day of the year.
So could you guys explain how to graph this equation? She also said there is a vertical and horizontal shift and that C/B=170.
I was sick the day my teacher was explaining problems like these and am totally lost.
Where T is total length of the day and D is day of the year you have
$T= A\sin b\left(D-c \right) +d$
You need to find $a,b,c,d$
From your information
$T= 4.5\sin \frac{2\pi}{365}\left(D-c \right) +10.5$
Now to find $c$ use the fact that $T = 15$ when $D = 170$.
What do you get?
Last edited by pickslides; May 22nd 2010 at 02:39 PM.
$y = 4.5 \sin\left[\frac{2\pi}{365}(t - c)\right] + 10.5$
$y_{max} = 15$ occurs at $t = 170$, and when $\sin\left[\frac{2\pi}{365}(170 - c)\right] = 1$
$\frac{2\pi}{365}(170 - c) = \frac{\pi}{2}$
$170 - c = \frac{365}{4}$
$c = 170 - \frac{365}{4} = \frac{315}{4}$
$y = 4.5 \sin\left[\frac{2\pi}{365}\left(t - \frac{315}{4}\right)\right] + 10.5$
graph attached
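The fitted equation can be spot-checked against the two data points from the problem (day numbers and hours as given):

```python
import math

def daylight_hours(day):
    """Daylight model: amplitude 4.5 h, period 365 days, peak at day 170."""
    return 4.5 * math.sin(2 * math.pi / 365 * (day - 315 / 4)) + 10.5

longest = daylight_hours(170)   # 15 hours, as given
# The model's minimum falls at day 352.5 (half a period after the peak),
# so day 353 comes out at very nearly 6 hours.
shortest = daylight_hours(353)
```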
Analysis of Coding Tools in HEVC Test Model (HM 1.0) – Intra Prediction
2010-12-01 H.265/HEVC
The current intra prediction in HM unified two directional intra prediction methods, Arbitrary Direction Intra (ADI) introduced in JCTVC-A124 and Angular Intra Prediction introduced in JCTVC-A119,
with simplification for parallel processing possibility, leading to a simplified unified intra prediction (JCTVC-B100, JCTVC-C042).
In current HM, unified intra prediction provides up to 34 directional prediction modes for different PUs. With the PU size of 4×4, 8×8, 16×16, 32×32, 64×64, there are 17, 34, 34, 34, and 5 prediction
modes available respectively. The prediction directions in the unified intra prediction have the angles of +/- [0, 2, 5, 9, 13, 17, 21, 26, 32]/32. The angle is given by displacement of the bottom
row of the PU and the reference row above the PU in case of vertical prediction, or displacement of the rightmost column of the PU and the reference column left from the PU in case of horizontal
prediction. Figure 1 shows an example of prediction directions for the 32×32 block size. Instead of different accuracies for different sizes, the reconstruction of a pixel uses linear interpolation of the reference top or left samples at 1/32 pixel accuracy for all block sizes.
Two arrays of reference samples are used in the unified intra prediction, corresponding to the row of samples lying above the current PU to be predicted, and the column of samples lying to the left
of the same PU. Given a dominant prediction direction (horizontal or vertical), one of the reference arrays is defined to be the main array and the other array the side array. In the case of vertical
prediction, the reference row above the PU is called the main array and the reference column to the left of the same PU is called the side array. In the case of horizontal prediction, the reference
column to the left of the PU is called the main array and the reference row above the PU is called the side array.
When the intra prediction angle is positive, blue lines in Figure 1, only the samples from the main array are used for prediction. When the intra prediction angle is negative, red lines in Figure 1,
a per-sample test should be performed to determine whether samples from the main or the side array should be used for prediction, as shown in Figure 2 (a). Additionally when the side array is used,
the computation of the index into the side array requires a division operation. In order to remove the division operation, the lookup-table (LUT) technique is used for the negative angular
prediction process in the calculation of the y-intercept in the case of vertical prediction or the x-intercept in the case of horizontal prediction. Normally, the integer and fractional parts of the
intercept between are calculated using the following equations respectively:
deltaIntSide = (256*32*(l+1)/absAng)>>8
deltaFractSide = (256*32*(l+1)/absAng)%256
where absAng is the absolute value of the intra prediction angle (= [2, 5, 9, 13, 17, 21, 26, 32]) and l is the x/y pixel location for vertical/horizontal prediction. With the LUT technique, the above equations can be replaced with the following equations:
deltaIntSide = (invAbsAngTable[absAngIndex]*(l+1))>>8
deltaFractSide = (invAbsAngTable[absAngIndex]*(l+1))%256
where invAbsAngTable = [4096, 1638, 910, 630, 482, 390, 315, 256]. By using LUT, there is no division operation during the computation of the index into the side array. However, there still exists
the per-sample test to determine the main or the side array for prediction. To simplify this process, the main array is extended by projecting samples from the side array onto it according to the
prediction direction, as shown in Figure 2 (b). During the projection, the fractional part of the intercept is omitted and the intercept is rounded to the nearest integer:
deltaInteger = (invAbsAngTable[absAngIndex]*(l+1)+128)>>8
Finally, the prediction process only uses the extended main array and the same simple linear interpolation formula to predict all samples in the PU. Figure 2 shows an example of simplified intra
prediction with direction angle VER-8.
Figure 2. An example of simplified intra prediction with direction angle VER-8
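The LUT construction described above can be illustrated with a short sketch (this is an illustration, not the HM reference code; the table values are the invAbsAngTable given in the text):

```python
# Angles used by the unified intra prediction (absolute values) and the
# inverse-angle lookup table quoted in the text.
ABS_ANG = [2, 5, 9, 13, 17, 21, 26, 32]
INV_ABS_ANG_TABLE = [4096, 1638, 910, 630, 482, 390, 315, 256]

# Each table entry is just round(256*32 / absAng) = round(8192 / absAng),
# which is what lets a multiply-and-shift stand in for the division.
derived = [round(8192 / a) for a in ABS_ANG]

def delta_int_side(abs_ang_index, l):
    """Integer part of the side-array intercept, computed division-free."""
    return (INV_ABS_ANG_TABLE[abs_ang_index] * (l + 1)) >> 8
```

For absAng = 32 (index 7) the intercept reduces to l + 1, matching the division form (256*32*(l+1)/32) >> 8.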
When DC mode is used in the intra prediction, the mean value of samples from both top row and left column is used for the DC prediction.
Table 1 summarizes the physical and logical mode indexes used in the bitstream and prediction functions respectively. Depending on the PU size, different numbers of prediction modes are available
respectively, as listed in Table 2.
 Table 1. Physical and logical mode indexes used in the bitstream and prediction functions respectively
│Index │0 │1 │2 │3 │4 │5 │6 │7 │8 │9 │10 │11 │
│Physical │VER│HOR │DC │VER-8│VER-4│VER+4│VER+8│HOR-4│HOR+4│HOR+8│VER-6│VER-2│
│Logical │DC │VER-8│VER-7│VER-6│VER-5│VER-4│VER-3│VER-2│VER-1│VER │VER+1│VER+2│
│Index │12 │13 │14 │15 │16 │17 │18 │19 │20 │21 │22 │23 │
│Physical │VER+2│VER+6│HOR-6│HOR-2│HOR+2│HOR+6│VER-7│VER-5│VER-3│VER-1│VER+1│VER+3│
│Logical │VER+3│VER+4│VER+5│VER+6│VER+7│VER+8│HOR-7│HOR-6│HOR-5│HOR-4│HOR-3│HOR-2│
│Index │24 │25 │26 │27 │28 │29 │30 │31 │32 │33 │
│Physical │VER+5│VER+7│HOR-7│HOR-5│HOR-3│HOR-1│HOR+1│HOR+3│HOR+5│HOR+7│
│Logical │HOR-1│HOR │HOR+1│HOR+2│HOR+3│HOR+4│HOR+5│HOR+6│HOR+7│HOR+8│
Table 2. Prediction modes used for each PU size
│PU size│Number of prediction modes│Prediction modes used (physical index) │
│4×4 │17 │0-16 │
│8×8 │34 │0-33 │
│16×16 │34 │0-33 │
│32×32 │34 │0-33 │
│64×64 │5 │0-4 │
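Table 2 maps directly to a small helper; this is a sketch with an illustrative function name:

```c
/* Number of available intra prediction modes for a PU size (Table 2). */
int numIntraModes(int puSize) {
    switch (puSize) {
    case 4:  return 17;  /* physical modes 0-16 */
    case 8:
    case 16:
    case 32: return 34;  /* physical modes 0-33 */
    case 64: return 5;   /* physical modes 0-4  */
    default: return -1;  /* unsupported PU size */
    }
}
```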
For chroma intra prediction, at most five modes are available for all PU sizes: four basic modes (VER, HOR, DC, and VER-8) and one optional mode derived from the best luma prediction mode, which is added only when it is not already among the basic modes.
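A sketch of the chroma candidate-list construction, using the physical indexes from Table 1 (0 = VER, 1 = HOR, 2 = DC, 3 = VER-8); the list layout is an assumption based on the description above:

```c
/* Build the chroma candidate list: the four basic modes, plus the best
 * luma mode when it is not already one of them. Returns the list length. */
int chromaModeList(int bestLumaMode, int out[5]) {
    int n = 0;
    for (int m = 0; m < 4; m++)
        out[n++] = m;      /* VER, HOR, DC, VER-8 */
    if (bestLumaMode > 3)  /* not among the basic modes */
        out[n++] = bestLumaMode;
    return n;
}
```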
In addition, an encoder modification for the intra prediction search (JCTVC-C207) is also adopted in HM, but this is purely an encoder optimization. Other coding tools from previous TMuC versions, such as adaptive intra smoothing (AIS), combined intra prediction (CIP), planar prediction, and edge-based prediction, are still under investigation and are not included in the current HM.
[1]. JCTVC-A119, Video coding technology proposal by Tandberg, Nokia, and Ericsson
[2]. JCTVC-A124, Video coding technology proposal by Samsung (and BBC)
[3]. JCTVC-B093, Simplified angular intra prediction
[4]. JCTVC-B100, Unification of the Directional Intra Prediction Methods in TMuC
[5]. JCTVC-B118, Angular intra prediction and ADI simplification
[6]. JCTVC-C042, TE5: Results for Simplification of Unified Intra Prediction
[7]. JCTVC-C207, Encoder improvement of unified intra prediction
Permanent Link: Analysis of Coding Tools in HEVC Test Model (HM 1.0) – Intra Prediction