content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Uxbridge Math Tutor
Find an Uxbridge Math Tutor
...I have extensive experience working with students with ADD/ADHD. I am an expert writing instructor and I am currently certified as an English Language Arts teacher by the Commonwealth of
Massachusetts. I am also available for test preparation tutoring, including SAT, PSAT, ACT, SSAT and ISEE.
31 Subjects: including calculus, European history, special needs, dyslexia
...I worked in Law, Corporate Affairs and business/finance (mergers and acquisitions, corporate finance), as well as in consulting, but I love tutoring, and my students look forward to our
tutoring sessions. Elementary education tutoring in math and English is also available (reading comprehension,...
67 Subjects: including precalculus, marketing, logic, geography
Hi! My name is Dan, and I love helping students to improve in Math and Science. I attended U.C.
27 Subjects: including logic, grammar, ACT Math, GED
...My primary teaching philosophy with my clients is to first focus on managing their studying methods, then work on helping them understand the subject matter being reviewed!! I am very flexible
and design different methods based on each student's individual learning curve. Once we find a techn...
22 Subjects: including geometry, SAT math, physics, Portuguese
...Louis, MO, in 1984 with a Masters degree in History. After five years of teaching at private high schools I practiced law. After 20 years as attorney I am going back to teaching.
21 Subjects: including algebra 1, algebra 2, SAT math, prealgebra | {"url":"http://www.purplemath.com/uxbridge_math_tutors.php","timestamp":"2014-04-16T21:52:07Z","content_type":null,"content_length":"23389","record_id":"<urn:uuid:7fcacc0b-76c0-4e0c-824d-f9996dd03167>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving constraint satisfaction problems using neural-networks, in
Results 11 - 20 of 30
- In Proceedings of the Workshop on Constraint Programming for Decision and Control (CPDC99 , 1999
"... : Constraint programming is an emergent software technology for declarative description and effective solving of large, particularly combinatorial, problems especially in areas of planning and
scheduling. Not only it is based on a strong theoretical foundation but it is attracting widespread commerc ..."
Cited by 8 (0 self)
Add to MetaCart
: Constraint programming is an emergent software technology for declarative description and effective solving of large, particularly combinatorial, problems especially in areas of planning and
scheduling. Not only it is based on a strong theoretical foundation but it is attracting widespread commercial interest as well, in particular, in areas of modelling heterogeneous optimisation and
satisfaction problems. In the paper we give a survey of technology behind constraint programming (CP) with particular emphasis on constraint satisfaction problems. We place the constraint programming
in history context and highlight the interdisciplinary character of CP. In the main part of the paper, we give an overview of basic constraint satisfaction and optimization algorithms and methods of
solving over-constrained problems. We also list some main application areas of constraint programming. Keywords: constraint satisfaction, search, consistency techniques, constraint propagation,
optimization 1...
"... Constraint satisfaction and optimisation is NP-complete by nature. The combinatorial explosion problem prevents complete constraint programming methods from solving many real-life constraint
problems. In many situations, stochastic search methods, many of which sacrifice completeness for efficiency, ..."
Cited by 5 (3 self)
Add to MetaCart
Constraint satisfaction and optimisation is NP-complete by nature. The combinatorial explosion problem prevents complete constraint programming methods from solving many real-life constraint
problems. In many situations, stochastic search methods, many of which sacrifice completeness for efficiency, are needed. This paper reports a family of stochastic algorithms for constraint
satisfaction and optimisation. Developed with hardware implementation in mind, GENET is a class of computation models for constraint satisfaction. Genet is a connectionist approach. A problem is
represented by a network with inhibitory connections. The network is designed to converge, in a fashion that resembles the min-conflict repair method. Reinforcement learning is used to bring GENET
out of local optima. Building upon GENET as well as ideas from operations research, Guided Local Search (GLS) and Fast Local Search are novel meta-heuristic search methods for constraint
optimisation. GLS sits on top of other l...
- Journal of Functional and Logic Programming , 1998
"... This paper proposes a number of models for integrating stochastic constraint solvers into constraint logic programming systems in order to solve constraint satisfaction problems efficiently.
Stochastic solvers can solve hard constraint satisfaction problems very efficiently, and constraint logic ..."
Cited by 5 (1 self)
Add to MetaCart
This paper proposes a number of models for integrating stochastic constraint solvers into constraint logic programming systems in order to solve constraint satisfaction problems efficiently.
Stochastic solvers can solve hard constraint satisfaction problems very efficiently, and constraint logic programming allows heuristics and problem breakdown to be encoded in the same language as the
constraints. Hence their combination is attractive. Unfortunately there is a mismatch in the kind of information a stochastic solver provides, and that which a constraint logic programming system
requires. We study the semantic properties of the various models of constraint logic programming systems that make use of stochastic solvers, and give soundness and completeness results for their
use. We describe an example system we have implemented using a modified neural network simulator, GENET, as a constraint solver. We briefly compare the efficiency of these models against the
propagation base...
- Proceedings of Sixth International Conference on Tools with Artificial Intelligence , 1994
"... Many real-life problems belong to the class of constraint satisfaction problems (CSP's), which are NP-complete, and some NP-hard, in general. When the problem size grows, it becomes difficult to
program solutions and to execute the solution in a timely manner. In this paper, we present a genera ..."
Cited by 3 (2 self)
Add to MetaCart
Many real-life problems belong to the class of constraint satisfaction problems (CSP's), which are NP-complete, and some NP-hard, in general. When the problem size grows, it becomes difficult to
program solutions and to execute the solution in a timely manner. In this paper, we present a general framework for integrating artificial neural networks into constraint logic programming languages
to provide an efficient and yet easy-to-program environment for solving CSP's. To realize this framework, we propose a novel programming language PROCLANN. The syntax of PROCLANN is similar to that
of Flat GHC. PROCLANN uses the standard goal reduction strategy as frontend to generate constraints and an efficient backend constraint-solver based on artificial neural network . PROCLANN retains
the simple and elegant declarative semantics of constraint logic programming. Its operational semantics is probabilistic in nature but it possesses the soundness and probabilistic completeness res...
- in Proc. CP2003 Workshop on Soft Constraints (Soft-2003 , 2003
"... Solving semiring-based constraint satisfaction problem (SCSP) is a task of finding the best solution, which can be viewed as an optimization problem. Current research of SCSP solution methods
focus on tree search algorithms, which is computationally intensive. In this paper, we present an e#cien ..."
Cited by 3 (0 self)
Add to MetaCart
Solving semiring-based constraint satisfaction problem (SCSP) is a task of finding the best solution, which can be viewed as an optimization problem. Current research of SCSP solution methods focus
on tree search algorithms, which is computationally intensive. In this paper, we present an e#cient local search framework for SCSPs, which adopts problem transformation and soft constraint
consistency techniques, and E-GENET local search model as a foundation. Our framework is parameterized by the semiring structure S, resulting in a family of algorithms for various kinds of soft
constraint problems. We build a prototype solver that is based on the proposed framework, and test it on both structured and non-structured problems. The benchmarking results show that it is feasible
to tackle SCSPs in an e#cient manner.
- IN DIMACS SERIES IN DISCRETE MATHEMATICS AND THEORETICAL COMPUTER SCIENCE VOLUME 57 , 2001
"... Developed from constraint satisfaction as well as operations research ideas, Guided Local Search (GLS) and Fast Local Search are novel meta-heuristic search methods for constraint satisfaction
and optimisation. GLS sits on top of other local-search algorithms. The basic principle of GLS is to penali ..."
Cited by 3 (0 self)
Add to MetaCart
Developed from constraint satisfaction as well as operations research ideas, Guided Local Search (GLS) and Fast Local Search are novel meta-heuristic search methods for constraint satisfaction and
optimisation. GLS sits on top of other local-search algorithms. The basic principle of GLS is to penalise features exhibited by the candidate solution when a localsearch algorithm settles in a local
optimum. Using penalties is an idea used in operations research before. The novelty in GLS is in the way that features are selected and penalised. FLS is a way of reducing the size of the
neighbourhood. GLS and FLS together have been applied to a non-trivial number of satisfiability and optimisation problems and achieved remarkable result. One of their most outstanding achievements is
in the well-studied travelling salesman problem, in which they obtained results as good as, if not better than the state-of-the-art algorithms. In this paper, we shall outline these algorithms and
describe some of their discrete optimisation applications.
, 2006
"... Local search is one of the fundamental paradigms for solving computationally hard combinatorial problems, including the constraint satisfaction problem (CSP). It provides the basis for some of
the most successful and versatile methods for solving the large and difficult problem instances encountered ..."
Cited by 3 (0 self)
Add to MetaCart
Local search is one of the fundamental paradigms for solving computationally hard combinatorial problems, including the constraint satisfaction problem (CSP). It provides the basis for some of the
most successful and versatile methods for solving the large and difficult problem instances encountered in many real-life applications. Despite impressive advances in systematic, complete search
algorithms, local search methods in many cases represent the only feasible way for solving these large and complex instances. Local search algorithms are also naturally suited for dealing with the
optimisation criteria arising in many practical applications. The basic idea underlying local search is to start with a randomly or heuristically generated candidate solution of a given problem
instance, which may be infeasible, sub-optimal or incomplete, and to iteratively improve this candidate solution by means of typically minor modifications. Different local search methods vary in the
way in which improvements are achieved, and in particular, in the way in which situations are handled in which no direct improvement is possible. Most local search methods use randomisation to ensure
that the search process does not
, 1993
"... Constraint satisfaction has received great attention in recent years and a large number of algorithms have been developed. Unfortunately, from the problem solvers' point of view, it is very
difficult to see when and how to use these algorithms. This paper points out the need to map constraint satisf ..."
Cited by 3 (0 self)
Add to MetaCart
Constraint satisfaction has received great attention in recent years and a large number of algorithms have been developed. Unfortunately, from the problem solvers' point of view, it is very difficult
to see when and how to use these algorithms. This paper points out the need to map constraint satisfaction problems to constraint satisfaction algorithms and heuristics, and proposes that more
research should be done on how to retrieve the most efficient and effective algorithms and heuristics for a given problem. We claim that such algorithms/heuristics retrieval systems should also be
valuable to guide future research. 1 Introduction Constraint satisfaction is a general problem which appears in many places, notably scheduling. Because of its generality and importance, constraint
satisfaction has received a great deal of attention in recent years. A (finite) constraint satisfaction problem (CSP) is a problem which consists of a set of variables, each of which has a finite
domain from wh...
- In Proc. CP-AI-OR'00 , 2002
"... The constraint satisfaction problem and its derivate, the propositional satisfiability problem (SAT), are fundamental problems in computing theory and mathematical logic. SAT was the first
proved NP-complete problem, and although complete algorithms have been dominating the constraint satisfaction f ..."
Cited by 2 (1 self)
Add to MetaCart
The constraint satisfaction problem and its derivate, the propositional satisfiability problem (SAT), are fundamental problems in computing theory and mathematical logic. SAT was the first proved
NP-complete problem, and although complete algorithms have been dominating the constraint satisfaction field, incomplete approaches based on local search has been successful the last ten years. In
this report we give a general framework for constraint satisfaction using local search as well as an different techniques to improve this basic local search framework. We also give an overview of
algorithms for problems of constraint satisfaction and optimization using heuristics, and discuss hybrid methods that combine complete methods for constraint satisfaction with local search
, 1998
"... and vertex z 1 cannot both be assigned the color 0. Once a CSP has been transformed into a network, the steps outlined below are performed to find one of its solutions. First, each connection in
the network is 1 In the context of constraint programming, a solution for a graph-coloring problem is ..."
Cited by 1 (0 self)
Add to MetaCart
and vertex z 1 cannot both be assigned the color 0. Once a CSP has been transformed into a network, the steps outlined below are performed to find one of its solutions. First, each connection in the
network is 1 In the context of constraint programming, a solution for a graph-coloring problem is any consistent color assignment. Whether the number of colors used is minimal or not is of no
importance. Cluster Domain 0 1 2 0 z z 1 z 2 z 3 z 4 Figure 1: The GENET network (with an initial assignment) corresponding to a graph coloring problem with 5 vertices (z 0 to z 4 ) and 3 colors (f0,
1, 2g) assigned an initial weight of 1 and exactly one random node within each cluster is turned ON. Then, each node x in the network computes its input | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=102710&sort=cite&start=10","timestamp":"2014-04-25T03:40:43Z","content_type":null,"content_length":"41163","record_id":"<urn:uuid:a986b056-63a1-43f0-a4dd-7833a10751d4>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00158-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Matrix help in MatLab, Not the simple matrix!! I have searched, found nothing!
Please help me with this.
I need to code into matlab a function that results in a matrix answer.
It is this:
Here is my code so far:
for i=1:4;
if theta(i)<0;
See I need a 4X4 matrix for phi.
I only have four etas and four zetas. both i and j = 4. With some work I can get a 4x4 matrix but it isn't coded right. It gives me the same answers in rows. It should have a diagonal line of zero's
from top left to bottom right because that is where i=j
And I need to be able to change N to 250 so it needs to be easy to write for large N's
Thank you for your help! | {"url":"http://www.physicsforums.com/showpost.php?p=3779464&postcount=1","timestamp":"2014-04-21T12:14:52Z","content_type":null,"content_length":"9952","record_id":"<urn:uuid:f6efa82c-2fe7-4dcc-b489-1d27ecfa6c8c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00439-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about Kemp on Climate Sanity
It has been a while since I wrote about ”Climate related sea-level variations over the past two millennia” (Andrew C. Kemp, Benjamin P. Horton, Jeffrey P. Donnelly, Michael E. Mann, Martin Vermeer,
and Stefan Rahmstorf, PNAS, 2011), which I will refer to as KMVR2011.
Please see this index of my posts concerning KMVR2011.
I want to sew up one loose end here. Last time around I showed that this latest incarnation of the Rahmstorf model relating sea level to temperature was just as bogus at the previous versions. But I
did not talk about one of their interesting (but ultimately irrelevant) new twists. Another layer of complexity was added by the application of Bayesian analysis, or in KMVR2011 nomenclature:
“Bayesian multiple change-point regression.”
Bayesian analysis is a useful, but often counter intuitive, statistical method to tease out an underlying distribution from an observed distribution. That being said, the KMVR2011 application of
Bayesian analysis starts out with a bogus model, which has been demonstrated ad nauseam. (See here and here.) This added layer of complexity simply obfuscates the failures of the starting model,
rather that addressing those failures.
My next series of posts will move on to another recent outing by Rahmstorf and company – Testing the robustness of semi-empirical sea level projections (Rahmstrof, et. al., Climate Dynamics,
November 2011) | {"url":"http://climatesanity.wordpress.com/tag/kemp/","timestamp":"2014-04-19T15:04:44Z","content_type":null,"content_length":"36685","record_id":"<urn:uuid:0b725049-3189-43d2-9985-7c1b98572b0c>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00374-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fixed point iteration
February 2nd 2010, 07:30 AM #1
Fixed point iteration
Hey guys,
I'm struggling a bit with this problem. Just confused I guess.
The equation $f(x) = e^x - 3x^2 = 0$ has 3 roots. To determine the roots by fixed point iteration we can rearrange the equation to obtain the iteration formulas: $x = g(x) = \pm \sqrt{\frac{e^x}
Determine analytically the largest interval $[a,b]$, so that for any $p_0 \ \text{element of} \ [a,b]$ the iteration formula with the minus sign will converge to a root close to -0.5. You must
therefore find the largest interval on which the conditions of the Fixed Point theorem are satisfied.
I've drawn the graphs (see attached), and $g(x)$ approaches 0 as x approaches negative infinity. But it doesn't make sense to me that the interval can be [-inf, b] where b is the rightmost bound
of the interval.
I'm just very confused I guess. Can I get a push in the right direction?
Last edited by janvdl; February 2nd 2010 at 07:49 AM.
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/advanced-math-topics/126783-fixed-point-iteration.html","timestamp":"2014-04-20T02:49:26Z","content_type":null,"content_length":"32782","record_id":"<urn:uuid:0fbafd79-3d21-41f8-bc12-1f891e5b3eba>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00022-ip-10-147-4-33.ec2.internal.warc.gz"} |
A dilute aqueous solution (0.250 L) of an organic compund soluble in water is formed by dissolving 22.3 g of the... - Homework Help - eNotes.com
A dilute aqueous solution (0.250 L) of an organic compund soluble in water is formed by dissolving 22.3 g of the organic compound in water. The solution formed has an osmotic pressure of 2.12 atm at
25 degrees celcius. Assuming that the organic compund is a non-electrolyte, what is its molar mass?
Osmotic pressure obeys the following formula:
`Pi V = nRT`
(sometimes the formula `Pi = iMRT` is also used where i is referred to as the van't Hoff factor, a dimensionless constant assumed to be equal to 1 for non-electrolytes, and M is the molarity or the
ratio of moles solute per liter of solution.)
We know that number of moles is equal to the ratio of the mass and the molar mass: `n = m/(MW).`
`Pi V = m/(MW) RT rArr (MW) = (mRT)/(Pi V)`
`MW = (22.3*0.08206*(25 + 273))/(2.12*0.250) = 1028.909.`
Hence, the organic compound has molecular weight 1028.909 grams/mol.
Note: We had to convert temperature to the absolute scale (Kelvin), otherwise, we wouldn't get the correct answer.
Join to answer this question
Join a community of thousands of dedicated teachers and students.
Join eNotes | {"url":"http://www.enotes.com/homework-help/dilute-aqueous-solution-0-250-l-an-organic-434919","timestamp":"2014-04-18T18:59:22Z","content_type":null,"content_length":"26533","record_id":"<urn:uuid:63a6649c-e352-451c-81ae-f4b8cb59c529>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00073-ip-10-147-4-33.ec2.internal.warc.gz"} |
Distilling Ideas: An Introduction to Mathematical Thinking
The authors of this book, as they explain in their first chapter, are taking readers/students on a journey — one that will “add new, powerful inquiry skills” to a student’s repertoire: abstraction,
exploration, conjecture, justification, application, and extension. The book is clearly intended for inquiry-based mathematics courses. It is replete with explorations to undertake, definitions to
explore, and theorems to prove, but there are no sample proofs or answers.
The book has five chapters, with the middle three chapters on graphs, groups, and \(\varepsilon\)-\(\delta\) calculus constituting the bulk of the book. To me, it is clear that any one of these
middle chapters by itself has the potential to be used for an entire one-semester course. There is really enough material in the \(\varepsilon\)-\(\delta\) calculus chapter for an introductory junior
level real analysis course. Similarly, the group theory chapter gets to Sylow’s Theorem and has more in it than most students could cover in one-semester when doing all the work themselves. Perhaps
the same is true of the graph theory chapter, but I cannot personally vouch for that. While a teacher can “choose to select only some of the exercises and theorems in a given unit [chapter]”, the
book would be slow going for students who are expected to come up with all their own examples, solutions, and proofs. However, as has been said of such courses, in the end, “less is more.” The
students gain self-confidence, persistence, and skills they rarely learn in lecture-based courses.
The back of the book contains an annotated index (a.k.a., glossary) that “rather than containing precise definitions … gives reminders of the terms and links to [page numbers for] their precise
definitions”. For example, one such glossary entry states, “An automorphism is an isomorphism of an object with itself, possibly in a non-trivial fashion.” There is also a four-page list of symbols,
and their meanings, that includes: for sets, symbols like those for union, intersection, and ordered pairs; for groups, symbols like those for coset, symmetric group, and stabilizer; and for
calculus, symbols like those for sup, inf, derivative, and integral.
The chapters on graphs and \(\varepsilon\)-\(\delta\) calculus are introduced using fictionalized characters and scenes. For example, the graph theory chapter begins, “One day, Königsberg resident
Friedrich ran into his friend Otto at the local Sternbuck’s coffee shop. Otto bet Friedrich a Venti Raspberry Mocha Cappucino that Friedrich could not leave the café, walk over all seven bridges
without crossing over the same bridge twice (without swimming or flying) and return to the café. Friedrich set out, but he never returned.” The characters in the \(\varepsilon\)-\(\delta\) calculus
chapter are Zeno, Isaac (Newton) and Gottfried (Leibniz) with Zeno being an archer at the Summer Olympics and Isaac and Gottfried being two referees charged with deciding whether Zeno’s arrow
actually struck the bullseye. In contrast, the chapter on groups has a more no-nonsense tone. As a result, one gets the impression that the chapters on graphs and the \(\varepsilon\)-\(\delta\)
calculus were written by one of the two authors, whereas the chapter on groups was written by the other.
Who might this book be for? Although I often teach inquiry-based courses, I prefer to write my own notes for them, and I think many others who already teach such courses would similarly prefer to
write their own notes. However, some teachers may not have the time to, or may not feel equipped to, write their own notes, and perhaps that is where this book could play a role. And although the
authors claim the book could be used for individual study, I find it hard to believe that any student would come to write proofs in the style of mathematicians, without a single sample proof
included. I think a lot of the effectiveness of such a textbook/course depends on its implementation by a perceptive, knowledgeable teacher who can steer classroom discussions in helpful, and
correct, mathematical directions. In my opinion, this is definitely not a self-help book for individual students.
This small paperback comes with a high price tag — $45 for just 171 pages, at the MAA member price, and $54 for everyone else. It is in the MAA Textbook Series, and according to the Preface, is part
of a “collection of resources” called, Through Inquiry, which is “intended to provide instructors with the flexibility to create textbooks supporting a whole range of different courses to fit a
variety of instructional needs. To date, the series has four e-units [chapters] treating graphs, groups, \(\varepsilon\)-\(\delta\) calculus, and number theory”. While these four units [chapters] are
briefly foreshadowed in the Preface, the unit [chapter] on number theory is nowhere to be found, nor is it listed in the Table of Contents. The Preface also states that “Sample, specific threads are
described in the Instructor’s Resource”; however, that instructor’s resource is also nowhere in the book. If it exists, there should be an indication of where. Such anomalies could be fixed in a
second printing.
Annie Selden is Adjunct Professor of Mathematics at New Mexico State University and Professor Emerita of Mathematics from Tennessee Technological University. She regularly teaches graduate courses in
mathematics and mathematics education. In 2002, she was recipient of the Association for Women in Mathematics 12th Annual Louise Hay Award for Contributions to Mathematics Education. In 2003, she was
elected a Fellow of the American Association for the Advancement of Science. She remains active in mathematics education research and curriculum development. | {"url":"http://www.maa.org/publications/maa-reviews/distilling-ideas-an-introduction-to-mathematical-thinking?device=mobile","timestamp":"2014-04-17T11:03:10Z","content_type":null,"content_length":"29902","record_id":"<urn:uuid:e277c247-71dd-406a-9795-1c642c61d964>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00055-ip-10-147-4-33.ec2.internal.warc.gz"} |
Phase transition for parking blocks, Brownian excursion and coalescence
Results 1 - 10 of 24
"... This paper provides tight bounds for the moments of the width of rooted labeled trees with n nodes, answering an open question of Odlyzko and Wilf (1987). To this aim, we use one of the many
one-to-one correspondences between trees and parking functions, and also a precise coupling between parking f ..."
Cited by 20 (5 self)
Add to MetaCart
This paper provides tight bounds for the moments of the width of rooted labeled trees with n nodes, answering an open question of Odlyzko and Wilf (1987). To this aim, we use one of the many
one-to-one correspondences between trees and parking functions, and also a precise coupling between parking functions and the empirical processes of mathematical statistics. Our result turns out to
be a consequence of the strong convergence of empirical processes to the Brownian bridge (Komlos, Major and Tusnady, 1975).
- PROBAB. TH. REL. FIELDS , 1998
"... Regard an element of the set of ranked discrete distributions \Delta := f(x 1 ; x 2 ; : : :) : x 1 x 2 : : : 0; P i x i = 1g as a fragmentation of unit mass into clusters of masses x i . The
additive coalescent is the \Delta-valued Markov process in which pairs of clusters of masses fx i ; ..."
Cited by 18 (13 self)
Add to MetaCart
Regard an element of the set of ranked discrete distributions \Delta := f(x 1 ; x 2 ; : : :) : x 1 x 2 : : : 0; P i x i = 1g as a fragmentation of unit mass into clusters of masses x i . The additive
coalescent is the \Delta-valued Markov process in which pairs of clusters of masses fx i ; x j g merge into a cluster of mass x i + x j at rate x i + x j . Aldous and Pitman (1998) showed that a
version of this process starting from time \Gamma1 with infinitesimally small clusters can be constructed from the Brownian continuum random tree of Aldous (1991,1993) by Poisson splitting along the
skeleton of the tree. In this paper it is shown that the general such process may be constructed analogously from a new family of inhomogeneous continuum random trees.
"... Abstract. We study moments and asymptotic distributions of the construction cost, measured as the total displacement, for hash tables using linear probing. Four different methods are employed
for different ranges of the parameters; together they yield a complete description. This extends earlier res ..."
Cited by 11 (3 self)
Add to MetaCart
Abstract. We study moments and asymptotic distributions of the construction cost, measured as the total displacement, for hash tables using linear probing. Four different methods are employed for
different ranges of the parameters; together they yield a complete description. This extends earlier results by Flajolet, Poblete and Viola. The average cost of unsuccessful searches is considered
too. 1.
- of Lecture Notes-Monograph Series , 2002
"... This paper presents some general formulas for random partitions of a finite set derived by Kingman's model of random sampling from an interval partition generated by subintervals whose lengths
are the points of a Poisson point process. These lengths can be also interpreted as the jumps of a subordin ..."
Cited by 11 (3 self)
Add to MetaCart
This paper presents some general formulas for random partitions of a finite set derived by Kingman's model of random sampling from an interval partition generated by subintervals whose lengths are
the points of a Poisson point process. These lengths can be also interpreted as the jumps of a subordinator, that is an increasing process with stationary independent increments. Examples include the
two-parameter family of Poisson-Dirichlet models derived from the Poisson process of jumps of a stable subordinator. Applications are made to the random partition generated by the lengths of
excursions of a Brownian motion or Brownian bridge conditioned on its local time at zero.
, 1999
"... We describe a Vervaat-like path transformation for the reflected Brownian bridge conditioned on its local time at 0: up to random shifts, this process equals the two processes constructed from a
Brownian bridge and a Brownian excursion by adding a drift and then taking the excursions over the cur ..."
Cited by 10 (1 self)
Add to MetaCart
We describe a Vervaat-like path transformation for the reflected Brownian bridge conditioned on its local time at 0: up to random shifts, this process equals the two processes constructed from a
Brownian bridge and a Brownian excursion by adding a drift and then taking the excursions over the current minimum. As a consequence, these three processes have the same occupation measure, which is
easily found. The three processes arise as limits, in three different ways, of profiles associated to hashing with linear probing, or, equivalently, to parking functions. 1 Introduction We regard the
Brownian bridge b(t) and the normalized (positive) Brownian excursion e(t) as defined on the circle R=Z, or, equivalently, as defined on the whole real line, being periodic with period 1. We define,
for a 0, the operator \Psi a on the set of bounded functions on the line by \Psi a f(t) = f(t) \Gamma at \Gamma inf \Gamma1!st (f(s) \Gamma as) = sup st (f(t) \Gamma f(s) \Gamma a(t \Gamma s))...
, 2000
"... . Consider a sum P N 1 Y i of random variables conditioned on a given value of the sum P N 1 X i of some other variables, where X i and Y i are dependent but the pairs (X i ; Y i ) form an
i.i.d. sequence. We prove, for a triangular array (X ni ; Y ni ) of such pairs satisfying certain condi ..."
Cited by 5 (4 self)
Add to MetaCart
. Consider a sum P N 1 Y i of random variables conditioned on a given value of the sum P N 1 X i of some other variables, where X i and Y i are dependent but the pairs (X i ; Y i ) form an i.i.d.
sequence. We prove, for a triangular array (X ni ; Y ni ) of such pairs satisfying certain conditions, both convergence of the distribution of the conditioned sum (after suitable normalization) to a
normal distribution, and convergence of its moments. The results are motivated by an application to hashing with linear probing; we give also some other applications to occupancy problems, random
forests, and branching processes. 1. Introduction Many random variables arising in different areas of probability theory, combinatorics and statistics turn out to have the same distribution as a sum
of independent random variables conditioned on a specific value of another such sum. More precisely, we are concerned with variables with the distribution of P N 1 Y i conditioned on P N 1 X...
- ACM Transactions on Algorithms , 2005
"... Abstract. We study the distribution of the individual displacements in hashing with linear probing for three different versions: First Come, Last Come and Robin Hood. Asymptotic distributions
and their moments are found when the the size of the hash table tends to infinity with the proportion of occ ..."
Cited by 4 (1 self)
Add to MetaCart
Abstract. We study the distribution of the individual displacements in hashing with linear probing for three different versions: First Come, Last Come and Robin Hood. Asymptotic distributions and
their moments are found when the the size of the hash table tends to infinity with the proportion of occupied cells converging to some α, 0 < α < 1. (In the case of Last Come, the results are more
complicated and less complete than in the other cases.) We also show, using the diagonal Poisson transform studied by Poblete, Viola and Munro, that exact expressions for finite m and n can be
obtained from the limits as m, n → ∞. We end with some results, conjectures and questions about the shape of the limit distributions. These have some relevance for computer applications. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.26.1571","timestamp":"2014-04-18T15:01:16Z","content_type":null,"content_length":"34286","record_id":"<urn:uuid:18feddef-fa11-4cd4-bcbb-6ad4b430186b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fayville Algebra Tutors
...Before I began a family I was in the actuarial field. I also worked at Framingham State University, in their CASA department, which provides walk-in tutoring for FSU students. I did this from
25 Subjects: including algebra 2, algebra 1, English, reading
...I was also a Summit private tutor for SAT, both Math and English. I was an SAT instructor for Princeton Review and Kaplan. I was also a Summit private tutor for SAT, both Math and English.
67 Subjects: including algebra 1, algebra 2, English, calculus
...Things need to make sense to me, I work hard to be sure to understand the big picture of everything and because of that I'm able to show others or explain things better to others about
different mathematical and science concepts. I currently have a Masters Degree in Microbiology from Loyola University in Chicago. I also got my Bachelors degree in Biochemistry from Mount Holyoke
22 Subjects: including algebra 1, algebra 2, English, reading
I have taught physics (from the college/honors levels to the AP levels (Mechanics/Electricity & Magnetism Level "C" with calculus and Level "B" without calculus) and mathematics (algebra 2;
pre-calculus; calculus: differential/integral and multivariable). I have had forty-two years of teaching expe...
6 Subjects: including algebra 2, physics, calculus, trigonometry
...Regardless of a student's current situation, there is a basic plan that I always follow to ensure success. If you have any questions, please don't hesitate to contact me. Most students can
achieve results in one or two sessions a week for 1-2 hours each week which can be usually be reduced as progress is made.
13 Subjects: including algebra 2, algebra 1, calculus, geometry | {"url":"http://www.algebrahelp.com/Fayville_algebra_tutors.jsp","timestamp":"2014-04-19T22:14:48Z","content_type":null,"content_length":"24766","record_id":"<urn:uuid:ae827032-5f32-48d1-9d3f-9fd73cbcf30d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00421-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sultan Math Tutor
Find a Sultan Math Tutor
...I've tutored all subjects on the ACT since 2003. ACT Science is more about reading graphs and understanding how experiments are put together than it is about actual science knowledge. In my
tutoring sessions, I teach students techniques for getting information reliably and quickly from the figures.
32 Subjects: including prealgebra, LSAT, algebra 1, algebra 2
...The student needs to have a previously unmarked copy of "The Official Sat Study Guide 3rd Edition” containing 10 practice tests, bringing it to each session s/he has with me. The student will
be given strategies with which to attack each of the 5+ types of question found in the multiple choice writing section. Essay writing will be covered and practice essays critiqued.
22 Subjects: including trigonometry, algebra 1, algebra 2, geometry
...My approach is to teach the student how to identify the nature of the problem and to recognize the appropriate way to solve it. We work through the process several times together, identifying
simple, logical steps for solving each type of problem. And then, I give the student sample problems to solve independently and coach them further as needed.
26 Subjects: including algebra 1, ACT Math, probability, SAT math
...I earned my Bachelor of Science degree in Mechanical and Aerospace Engineering from Rutgers University (New Brunswick, NJ) in 2012. Math is my all time favorite course. I have tutored Math at
Bergen Community College since 2008 for 3 years to a wide diversity of students.
11 Subjects: including trigonometry, precalculus, algebra 1, algebra 2
...In addition, we work on ear training to ensure pitch is right where it should be. I have been a soccer coach with Covington Community sports since 2008. When I was young, I played select
soccer for 6 years before I had a knee injury which took me out of the game.
46 Subjects: including ACT Math, trigonometry, SAT math, algebra 1 | {"url":"http://www.purplemath.com/Sultan_Math_tutors.php","timestamp":"2014-04-19T11:59:40Z","content_type":null,"content_length":"23622","record_id":"<urn:uuid:183bd7c3-2492-409d-bedf-d43c0a9a4aac>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00110-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent US8184627 - Point-to-multipoint (P2MP) network resource management
Publication number US8184627 B2
Publication type Grant
Application number US 12/396,039
Publication date May 22, 2012
Filing date Mar 2, 2009
Priority date Mar 2, 2009
Also published as US20100220722
Publication number 12396039, 396039, US 8184627 B2, US 8184627B2, US-B2-8184627, US8184627 B2, US8184627B2
Inventors Nirwan Ansari, Si Yin
Original Assignee New Jersey Institute Of Technology
Export Citation BiBTeX, EndNote, RefMan
Patent Citations (11), Non-Patent Citations (30), Classifications (6), Legal Events (2)
External Links: USPTO, USPTO Assignment, Espacenet
Point-to-multipoint (P2MP) network resource management
US 8184627 B2
Techniques for managing resources in a point-to-multipoint (P2MP) network are disclosed. In some examples, a root station is adapted to transmit and receive network packets and leaf stations are
adapted to transmit and receive the network packets from the root station. An electrical control system can be adapted to reduce an amount of time for the electrical control system to produce a
steady state output and to define a maximum boundary for the output. The electrical control system may include feedback to control the root station based, at least in part, on the output of the
electrical control system.
1. A method for an electrical control system to manage resources in a point-to-multipoint (P2MP) network that includes at least one root station and at least one leaf station each configured to
transmit and receive one or more network packets over the P2MP network, the method for the electrical control system comprising:
controlling the at least one root station based, at least in part, on output of the electrical control system, wherein the electrical control system is configured to reduce an amount of time for the
electrical control system to produce an output comprising a steady state and to define a maximum boundary for the output, the electrical control system comprising a feedback control loop;
transmitting one or more network packets to the at least one leaf station, wherein the at least one leaf station is capable of transmitting and receiving the one or more network packets from the at
least one root station, and wherein the at least one leaf station is capable of communication with the at least one root station; and
repeating the controlling and the transmitting for at least one of the one or more network packet; and
wherein the electrical control system is based at least in part on the equation x[i](n+1)=(A[i]−B[i]K[i])x[i](n), where x[i](n) is a state vector that indicates a bandwidth requirement and queue
length of the at least one leaf station, where A[i ]is a state vector matrix, B[i ]is an input vector matrix, where K[i ]is a constant matrix, and where n is a given time.
2. The method of claim 1, wherein the at least one leaf station is prohibited from communicating with at least a portion of the other leaf stations.
3. The method of
claim 1
, wherein the controlling further comprises:
analyzing the output to determine if the output is equal to a desired output;
if the output is not equal to the desired output, adjusting the output by a controller gain to produce a new output; and
repeating the analyzing and the adjusting until the new output is equal to the desired output.
4. The method of claim 3, wherein the controller gain is K[i], a constant matrix.
5. The method of claim 4, wherein K[i ]is defined by the equation
${ f 1 ( k 11 , k 12 , k 21 , k 22 , α i ) = - 2 ⅇ - 4 T i cos ( π log r log M i ) f 2 ( k 11 , k 12 , k 21 , k 22 , α i ) = ⅇ - 2 T i ,$
where k[11],k[12],k[21 ]and k[22 ]are vectors of K[i], α[i ]is an estimate index, r is a reference input, T[i ]is the amount of time for the electrical control system to produce an output comprising
a steady state, and M[i ]is the maximum boundary for the output.
6. The method of claim 4, wherein K[i ]is defined by the equation
${ f 1 ( k 1 , k 2 ) = - 2 ⅇ - 4 T i cos ( π log r log M i ) f 2 ( k 1 , k 2 ) = ⅇ - 2 T i ,$
where k[1 ]and k[2 ]are vectors of K[i], r is a reference input, T[i ]is the amount of time for the electrical control system to produce an output comprising a steady state, and M[i ]is the maximum
boundary for the output.
7. The method of claim 1, wherein the electrical control system comprises a controller based at least in part on the equation U[i](n)=−KX[i](n), where K=K[i]|[i=1˜4], and where X[i](n) is a state
vector that indicates a bandwidth requirement and a queue length of the at least one leaf station.
8. A system for an electrical control system to manage resources in a point-to-multipoint (P2MP) network that includes at least one root station and at least one leaf station that are configured to
transmit and receive one or more network packets over the P2MP network, the system for the electrical control system comprising:
a compensator operably coupled to a reference input, the compensator configured to offset a control error, the control error being a difference between the reference input and an output signal;
a comparator operably coupled to the compensator, the comparator configured to calculate the control error;
a controller configured to output the output signal and further configured to manipulate the output signal if the control error is determined to be a non-zero value; and
a controller gain operably coupled to the controller and the comparator, the controller gain being multiplied by the output signal;
wherein the controller may adjust the controller gain to manipulate the output signal such that specific characteristics of the output signal may be attained; and
wherein the electrical control system is based at least in part on the equation x[i](n+1)=(A[i]−B[i]K[i])x[i](n), where x[i](n) is a state vector that indicates a bandwidth requirement and queue
length of the at least one leaf station, where A[i ]is a state vector matrix, B[i ]is an input vector matrix, where K[i ]is a constant matrix, and where n is a given time.
9. The system of claim 8, wherein the at least one leaf station is prohibited from communicating with the other leaf stations.
10. The system of claim 8, wherein the controller gain is K[i], a constant matrix.
11. The system of claim 10, wherein K[i ]is defined by the equation
${ f 1 ( k 11 , k 12 , k 21 , k 22 , α i ) = - 2 ⅇ - 4 T i cos ( π log r log M i ) f 2 ( k 11 , k 12 , k 21 , k 22 , α i ) = ⅇ - 2 T i ,$
where k[11],k[12],k[21 ]and k[22 ]are vectors of K[i], α[i ]is an estimate index, r is the reference input, T[i ]is an amount of time for the resource management component to produce the output
signal comprising a steady state, and M[i ]is a maximum boundary for the output signal.
12. The system of claim 10, wherein K[i ]is defined by the equation
${ f 1 ( k 1 , k 2 ) = - 2 ⅇ - 4 T i cos ( π log r log M i ) f 2 ( k 1 , k 2 ) = ⅇ - 2 T i ,$
where k[1 ]and k[2 ]are vectors of K[i], r is the reference input, T[i ]is an amount of time for the resource management component to produce the output signal comprising a steady state, and M[i ]is
a maximum boundary for the output signal.
13. The system of claim 8, wherein the controller is based at least in part on the equation U[i](n)=−KX[i](n), where K=K[i]|[i=1˜4], and where X[i](n) is a state vector that indicates a bandwidth
requirement and a queue length of the at least one leaf station.
14. A method for an electrical control system to manage resources in a point-to-multipoint (P2MP) network that includes at least one root station and at least one leaf station that are configured to
transmit and receive one or more network packets over the P2MP network, the method for the electrical control system comprising:
configuring a control for the electrical control system based at least in part on the equation U[i](n)=−KX[i](n), where K=K[i]|[i=1˜4], and where X[i](n) is a state vector that indicates a bandwidth
requirement and a queue length of the at least one leaf station, wherein the electrical control system includes a variable gain for an output, wherein the output has a steady state and a defined
maximum boundary;
repeatedly analyzing the output of the electrical control system to determine if the output is equal to a desired output;
dynamically adjusting the output of the electrical control system to provide a new output by changing a variable gain until the new output achieves the desired output; and
transmitting one or more network packets to one or more of the at least one leaf station and at least one root station based at least in part on the output of the electrical control system;
wherein the variable gain is K[i], a constant matrix and is defined by the equation
${ f 1 ( k 11 , k 12 , k 21 , k 22 , α i ) = - 2 ⅇ - 4 T i cos ( π log r log M i ) f 2 ( k 11 , k 12 , k 21 , k 22 , α i ) = ⅇ - 2 T i ,$
where k[11],k[12],k[21 ]and k[22 ]are vectors of K[i], α[i ]is an estimate index, r is a reference input, T[i ]is the amount of time for the electrical control system to produce an output comprising
a steady state, and M[i ]is the maximum boundary for the output.
15. The method of claim 14, wherein the electrical control system implements a prediction-based resource allocation (PRA) scheme.
16. A method for an electrical control system to manage resources in a point-to-multipoint (P2MP) network that includes at least one root station and at least one leaf station that are configured to
transmit and receive one or more network packets over the P2MP network, the method for the electrical control system comprising:
configuring a control for the electrical control system based at least in part on the equation U[i](n)=−KX[i](n), where K=K[i]|[i=1˜4], and where X[i](n) is a state vector that indicates a bandwidth
requirement and a queue length of the at least one leaf station, wherein the electrical control system includes a variable gain for an output, wherein the output has a steady state and a defined
maximum boundary;
repeatedly analyzing the output of the electrical control system to determine if the output is equal to a desired output;
dynamically adjusting the output of the electrical control system to provide a new output by changing a variable gain until the new output achieves the desired output; and
transmitting one or more network packets to one or more of the at least one leaf station and at least one root station based at least in part on the output of the electrical control system;
wherein the variable gain is K[i], a constant matrix and is defined by the equation
${ f 1 ( k 1 , k 2 ) = - 2 ⅇ - 4 T i cos ( π log r log M i ) f 2 ( k 1 , k 2 ) = ⅇ - 2 T i ,$
where k[1 ]and k[2 ]are vectors of K[i], r is a reference input, T[i ]is the amount of time for the electrical control system to produce an output comprising a steady state, and M[i ]is the maximum
boundary for the output.
17. The method of claim 16, wherein the electrical control system implements a request-based resource allocation (RRA) scheme.
18. A method for an electrical control system to manage resources in a point-to-multipoint (P2MP) network that includes at least one root station and at least one leaf station configured to transmit
and receive one or more network packets over the P2MP network, the method for the electrical control system comprising:
controlling the at least one root station based, at least in part, on output of the electrical control system, wherein the electrical control system is configured to reduce an amount of time for the
electrical control system to produce an output comprising a steady state and to define a maximum boundary for the output, the electrical control system comprising a feedback control loop;
transmitting one or more network packets to the at least one leaf station, wherein the at least one leaf station is capable of transmitting and receiving the one or more network packets from the at
least one root station, and wherein the at least one leaf station is capable of communication with the at least one root station; and
repeating the controlling and the transmitting for at least one of the one or more network packet; and
wherein the electrical control system comprises a controller based at least in part on the equation U[i](n)=−KX[i](n), where K=K[i]|[i=1˜4], and where X[i](n) is a state vector that indicates a
bandwidth requirement and a queue length of the at least one leaf station.
19. The method of claim 18, wherein the at least one leaf station is prohibited from communicating with at least a portion of the other leaf stations.
20. The method of
claim 18
, wherein the controlling further comprises:
analyzing the output to determine if the output is equal to a desired output;
if the output is not equal to the desired output, adjusting the output by a controller gain to produce a new output; and
repeating the analyzing and the adjusting until the new output is equal to the desired output.
Point to multipoint (P2MP) topology is one of the most commonly used topologies in an access network. In general, P2MP may include a root station (RS) and a number of leaf stations (LSs). In P2MP,
any media having a RS that broadcasts packets through a single trunk (such as a frequency, wavelength, or wireless channel) to LSs typically may be referred to as downstream. Similarly, LSs
unicasting packets through branches and the trunk to the RS may be referred to as upstream. In addition, the LSs may not communicate with each other in a peer-to-peer manner.
Many wired broadband access networks such as the Time Division Multiplex (TDM) Passive Optical Network (PON) (which includes Ethernet passive optical networks (EPONs), Gigabit passive optical
networks (GPONs), and Broadband passive optical networks (BPONs)) can be generalized into a P2MP architecture. The P2MP architecture of PONs may reduce the dominant deployment and maintenance cost,
and facilitates the central management by utilizing the RS as the central office.
In the recent past, there have been attempts to address upstream resource management and allocation mechanism issues in P2MP networks, especially in P2MP EPON networks. These schemes may be
categorized into three categories: fixed resource allocation (FRA), request-based resource allocation (RRA), and prediction-based resource allocation (PRA). Although most of the schemes address the
resource management in EPONs, they can be generalized to other P2MP networks by employing appropriate MAC control cells and fields in the frames.
Although attempts have been made to address the upstream resource management issue in P2MP networks, few attempts have addressed the above different resource management schemes such that these
schemes can be evaluated, compared and further improved.
Furthermore, current upstream resource allocation schemes in P2MP networks may have difficulties reaching transient performance objectives such as minimum settling time and maximum overshoot, due to
the complexity of mapping the objectives into the corresponding scheduling algorithm and resource management schemes.
The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings.
Understanding that these drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope, the disclosure will be described
with additional specificity and detail through use of the accompanying drawings.
FIG. 1 is a diagram of a general P2MP architecture;
FIG. 2 is a flowchart showing the operation of an example embodiment;
FIG. 3 is a flowchart showing the operation of another example embodiment;
FIG. 4 is a flowchart showing the operation of yet another example embodiment;
FIG. 5 is a diagram depicting service cycles over time in an example embodiment of a P2MP architecture;
FIG. 6 is a schematic diagram of an example embodiment of an electrical control system;
FIG. 7 is a schematic diagram of another example embodiment of an electrical control system; and
FIG. 8 is a schematic diagram of an example computing system, all arranged in accordance with the present disclosure.
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context
dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be
made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and
illustrated in the Figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and make part of this
This disclosure is drawn, inter alia, to methods and systems related to systems and methods for efficient resource management in P2MP networks.
FIG. 1 is a diagram of a general point-to-multipoint (P2MP) architecture arranged in accordance with the present disclosure. As shown in FIG. 1, a P2MP network resource management system may include
at least one root station 10 (identified in FIG. 1 as RS-1, RS-2 and RS-n, where n may be any numeral) adapted to transmit and receive network packets, at least one leaf station 12 (identified in
FIG. 1 as LS-1, LS-2 and LS-n, where n may be any numeral) may be adapted to transmit and receive network packets from the at least one root station 10, and a resource management component 14 may be
adapted to manage network packet transmission. Each leaf station 12 is in communication with the root station(s) 10.
The resource management component 14 may include a reference input signal 16, a compensator 18, a comparator 20, a controller 22, and a feedback loop 24. The compensator 18 may be operably coupled to
the reference input signal 16. The compensator 18 may be adapted to offset a control error (which may be the difference between the reference input signal 16 and an output signal 26, for example).
The comparator 20 may be adapted to calculate the control error and may be operably connected to the compensator 18. The comparator 20 may be an apparatus and/or circuitry capable of comparing two
values (e.g., reference input signal 16 and output signal 26) and providing an output (to the controller 22, for example) depending on the comparison. The controller 22 may be adapted to output the
output signal 26. Further, the controller 22 may be adapted to manipulate the output signal 26 if the control error is determined to be a non-zero value. The feedback loop 24 may be operably coupled
to the controller 22 and the comparator 20. Further, the feedback loop 24 may include a controller gain (which may be multiplied by or otherwise combined with the output signal 26).
In some example embodiments, the controller 22 may be configured to adjust the controller gain to manipulate the output signal 26 such that specific characteristics of the output signal 26 may be
attained. Design objectives for resource allocation in a P2MP system may include system robustness, accuracy, and target transient performance. These objectives (and others) may be achieved through
proper design of the controller 22, as discussed below.
Some additional example embodiments may include a method for an electrical control system to manage resources in a point-to-multipoint (P2MP) network that includes at least one root station and at
least one leaf station each adapted to transmit and receive one or more network packets over the P2MP network, which may operate as depicted in FIG. 2. The illustrated example includes processing
operations 28, 30 and 32. Operation 28 includes controlling the at least one root station based, at least in part, on output of the electrical control system, where the electrical control system is
configured to reduce an amount of time for the electrical control system to produce an output comprising a steady state and to define a maximum boundary for the output. The electrical control system
comprises a feedback control loop. Operation 30 includes transmitting one or more network packets to the at least one leaf station, where the at least one leaf station is capable of transmitting and
receiving the one or more network packets from the at least one root station, and where the at least one leaf station is capable of communication with the at least one root station. Operation 32
includes repeating the controlling and the transmitting for at least one of the one or more network packets.
In some example embodiments, a point-to-multipoint (P2MP) network architecture may be configured to implement the method of FIG. 2. In one example embodiment, the P2MP network architecture may be the
RRA scheme.
Additional example embodiments include a method for an electrical control system to manage resources in a point-to-multipoint (P2MP) network that includes at least one root station and at least one
leaf station that are adapted to transmit and receive one or more network packets over the P2MP network, and operates as depicted in FIG. 3. The illustrated example may include processing operations
34, 36, 38, 40 and 42. Operation 34 includes adapting a control for the electrical control system based at least in part on the equation U[i](n)=−KX[i](n), where K=K[i]|[i=1˜4], and where X[i](n) is
a state vector that indicates a bandwidth requirement and a queue length of the at least one leaf station, where the electrical control system includes a variable gain for an output, where the output
has a steady state and a defined maximum boundary. Operation 36 includes repeatedly analyzing the output of the electrical control system to determine if the output is equal to a desired output.
Operation 38 includes dynamically adjusting the output of the electrical control system to provide a new output by changing a variable gain until the new output achieves the desired output. Operation
40 includes transmitting one or more network packets to one or more of the at least one leaf station and at least one root station based at least in part on the output of the electrical control
system. As shown in 42, the variable gain is K[i], a constant matrix and is defined by the equation
${ f 1 ( k 11 , k 12 , k 21 , k 22 , α i ) = - 2 ⅇ - 4 T i cos ( π log r log M i ) , f 2 ( k 11 , k 12 , k 21 , k 22 , α i ) = ⅇ - 2 T i$
where k[11], k[12], k[21 ]and k[22 ]are vectors of K[i], α[i ]is an estimate index, r is a reference input, T[i ]is the amount of time for the electrical control system to produce an output
comprising a steady state, and M[i ]is the maximum boundary for the output.
In some example embodiments, a point-to-multipoint network (P2MP) architecture may be configured to implement the method of FIG. 3. In one example, such a P2MP network architecture may implement the
PRA scheme.
Additional example embodiments include a method for an electrical control system to manage resources in a point-to-multipoint (P2MP) network that includes at least one root station and at least one
leaf station that are adapted to transmit and receive one or more network packets over the P2MP network, and operates as depicted in FIG. 4. The illustrated example may include processing operations
44, 46, 48, 50 and 52. Operation 34 includes adapting a control for the electrical control system based at least in part on the equation U[i](n)=−KX[i](n), where K=K[i]|[i=1˜4], and where X[i](n) is
a state vector that indicates a bandwidth requirement and a queue length of the at least one leaf station, where the electrical control system includes a variable gain for an output, where the output
has a steady state and a defined maximum boundary. Operation 36 includes repeatedly analyzing the output of the electrical control system to determine if the output is equal to a desired output.
Operation 38 includes dynamically adjusting the output of the electrical control system to provide a new output by changing a variable gain until the new output achieves the desired output. Operation
40 includes transmitting one or more network packets to one or more of the at least one leaf station and at least one root station based at least in part on the output of the electrical control
system. As shown at 52, the variable gain is K[i], a constant matrix and is defined by the equation
${ f 1 ( k 1 , k 2 ) = - 2 ⅇ - 4 T i cos ( π log r log M i ) , f 2 ( k 1 , k 2 ) = ⅇ - 2 T i$
where k[1 ]and k[2 ]are vectors of K[i], r is a reference input, T[i ]is the amount of time for the electrical control system to produce an output having a steady state, and M[i ]is the maximum
boundary for the output.
In some example embodiments, a point-to-multipoint (P2MP) network architecture may be configured to implement the method of FIG. 4. In one example, such a P2MP network architecture may implement the
RRA scheme.
The present disclosure now considers a P2MP system with one RS and y LSs, as depicted in FIG. 5. The RS serves each LS once in a service cycle 66, 68, 70. As previously discussed, the present
disclosure contemplates that an issue with P2MP networks may include the upstream resource management and allocation mechanism. The upstream resource could be bandwidth (TDM-PON), wavelength
(wavelength-division multiplexing or WDM-PON), or frequency (orthogonal frequency-division multiplexing or OFDM). Under the P2MP architecture, multiple LSs may share an upstream trunk, and each LS
may have no knowledge of the transmission condition of the other LSs. To avoid data collision, a request/grant arbitrary mechanism, such as the multipoint control protocol (MPCP) in an EPON,
typically may be deployed for upstream resource sharing. The request/grant mechanism may be implemented in continuous service cycles 66, 68, 70. In each service cycle 66, 68, 70, LSs send requests 72
to RS for a resource grant 74 before any transmission may occur. Thereafter, RS may determine an appropriate transmission window in the next service cycle 68, 70 to each LS, by considering the
requests 72 as well as the available resources, and may send out grants 74 to LSs. Finally, after receiving the grants 74, LSs may begin to transmit their packets until their granted window has
passed. In this manner, a dynamic resource allocation may be achieved.
As used herein, the following notations shall be adopted.
□ Q[i](n): the reported residual queued length from LS[i](1≦l≦y) at the end of service cycle n 66;
□ R[i](n): the resource request 72 of LS[i ]for service cycle n 66 (R[i](n) may or may not be the same as Q[i](n), depending on the particular resource allocation scheme as described below);
□ λ[i](n): the actual arrived data of LS[i ]at service cycle n+1 68;
□ {circumflex over (λ)}[i](n): the predicted arrival data at LS[i ]in service cycle n+1 68;
□ d[i](n): the departed data from LS[i ]at service cycle n 66;
□ G[i](n): the allocated timeslot to LS[i ]at service cycle n 66;
□ G[i] ^max: the maximum timeslot length prescribed by the service level agreement (SLA).
Since no queue status report may be conducted in the FRA scheme, and the reported queue length may equal zero, e.g.,
Q [i](n+1)=0(Eq. 1a)
for FRA.
In some RRA schemes, the reported queue length of transmission cycle (n+1) 68 may be determined by the difference of the injected data, which may include the transmission residual of cycle n 66
(e.g., Q[i](n)) as well as the incoming data arrived in the waiting time at ONU[i ]in transmission cycle n 66 (e.g., λ[i](n)), and the delivered data (e.g., d[i](n)), e.g.,
Q [i](n+1)=Q [i](n)+λ[i](n)−d [i](n+1)(Eq. 1b)
In the PRA scheme, “over-grant” may occur. This “over-grant” may be adjusted by reporting the difference between the injected data (e.g., Q[i](n)+λ[i](n)) and the grant G[i](n) 74, e.g.,
Q [i](n+1)=Q [i](n)+λ[i](n)−G [i](n+1)(Eq. 1c)
Eqs. 1a-1c may be summarized as
$Q i ( n + 1 ) = { 0 , for FRA Q i ( n ) + λ i ( n ) - d i ( n + 1 ) , for RRA Q i ( n ) + λ i ( n ) - G i ( n + 1 ) , for PRA ( Eq . 1 )$
On the other hand, the resource request R[i](n) 72 of LS[i ]for service cycle n 66 may be determined by perspective resource allocation schemes. For FRA, the resource request of LS[i ]in service
cycle (n+1) 68 (e.g., R[i](n+1)) is the fixed value R[fix], e.g.,
R [i](n+1)=R [fix](Eq. 2a)
In RRA, R[i](n+1) may be determined by the reported queue length, e.g.,
R [i](n+1)=Q [i](n),(Eq. 2b)
When a traffic predictor is employed, as in PRA, R[i](n+1) may be determined by the sum of the reported queue length and the predicted arrival data, e.g.,
R [i](n+1)=Q [i](n)+{circumflex over (λ)}[i](n)(Eq. 2c)
where {circumflex over (λ)}[i](n) is the predicted arrival data at LS[i ]in service cycle (n+1) 68. Eq. 2a-2c may be summarized as
$R i ( n + 1 ) = { R fix , for FRA Q i ( n ) , for RRA Q i ( n ) + λ ^ i ( n ) , for PRA ( Eq . 2 )$
After processing the request 72, the RS allocates time windows G[i](n+1) to LS[i]. In FRA, the assigned resource to LS[i ]in transmission cycle n+1 68 (e.g., G[i](n+1)) may be the fixed value R[fix].
In both RRA and PRA, G[i](n+1) may be the smaller value of the bandwidth request (e.g., R[i](n+1)) and the SLA parameter (e.g., G[i] ^max), e.g.,
$G i ( n + 1 ) = { R fix , for FRA min [ R i ( n + 1 ) , G i max ] , for RRA . min [ R i ( n + 1 ) , G i max ] , for PRA ( Eq . 3 )$
After receiving the bandwidth allocation decision, LS[i ]may schedule its upstream transmission indicated by G[i](n+1), and the delivered data d[i](n+1) may be described as
d [i](n+1)=min{G [i](n+1),Q [i](n)+λ[i](n)}(Eq. 4).
The present disclosure considers that a unified state space model may be constructed for FRA, RRA, and PRA based at least in part on Eqs. 1 and 2, as follows
X [i](n+1)=AX [i](n)+BU [i](n),(Eq. 5)
where X[i](n)=[R[i](n) Q[i](n)]^T may be the state vector, indicating the bandwidth requirement and the queue length of LS[i], and U[i](n) may be the input vector, representing the arrived data
during the waiting time and the SLA parameter. A and B may be the matrices for the state vector and input vector, respectively, that may determine intrinsic characteristics of each scheme at the
system level.
Therefore, a unified model for upstream resource allocation over a P2MP system may be established through the state space equation (Eq. 5), with Eqs. 3 and 4 being performance constraints. The model
essentially exhibits the relationship between the input (e.g., on-line network traffic load), output (e.g., bandwidth allocation decision), and state variables (e.g., queue length and resource
requirement). The state space representation may provide a convenient and compact way to model and analyze various resource allocation schemes for the P2MP system from the control theory point of
view. In this way, a specific resource allocation scheme may essentially define its particular coefficient matrices A and B to assign the upstream resource in a different way.
The present disclosure contemplates that, traditionally, the above control objectives may have been difficult to solve in P2MP systems because of the complexity of mapping the objectives into the
corresponding scheduling algorithm and resource management schemes. However, the example state space model described herein may give a simple and straightforward framework to achieve such objectives
by using state space feedback control techniques.
In some example embodiments, the settling time T[i ]and the maximum overshoot M[i ]may be two central parameters for the transient performance. The settling time T[i ]may be defined as the time for
the P2MP system to reach the steady state. Short settling times may be utilized to achieve the performance objective, especially when the incoming traffics of LSs have large volatility. In such case,
short settling time may ensure the system converges to the stable state before the traffic load changes. On the other side, the maximum overshoot M[i ]may be defined as the difference between the
maximum system output y[max ]and steady-state system output y[ss ]divided by the steady-state system output y[ss], e.g.,
$M i = y max - y ss y ss ( Eq . 6 )$
The maximum overshoot may give the upper bound for the output oscillations of a P2MP system. For example, the specifications of a P2MP system may call for the system to reach a stable state within 10
seconds, and the overshoot should be less than 5%.
In some embodiments, for resource allocation schemes that may be based at least in part on the state space model (e.g., Eq. (5)), there may exist a controller, u[i](n) 76, such that
u [i](n)=−K [i] x [i](n),(Eq. 7)
to drive the system into a close-loop form, as long as the system is controllable; this is known as pole placement. Substituting Eq. 7 into Eq. 5 yields,
x [i](n+1)=(A [i] −B [i] K [i])x [i](n).(Eq. 8)
which is the close-loop form for Eq. 5, where K[i ] 78 may be a constant matrix. The controller design in this embodiment is illustrated in FIG. 6.
The present disclosure contemplates that, from the control point of view, the settling time and maximum overshoot may be determined by the closed loop poles. Further, the controller gain K[i ] 78 in
Eq. 8 may essentially determine the poles in the closed loop characteristic polynomial det[zI−(A[i]−B[i]K[i])]. Thus, the target transient performance T[i ]and M[i ]may be achieved by properly tuning
the controller gain K[i ] 78.
Now, the present disclosure considers that the poles of a second order P2MP system are a pair of complex conjugates re^±jθ. According to control theory, the relationship between the pole parameters r
and θ, and the settling time T[i ]and the maximum overshoot M[i ]may be stated as
r≈e ^−4/T ^ i (Eq. 9a)
$θ ≈ π log r log M i ( Eq . 9 b )$
The eigenvalues of the closed-form characteristic polynomial det[zI−(A[i]−B[i]K[i])] may be re^±jθ, or simply, det[zI−(A[i]−B[i]K[i])]=(z−re^jθ)(z−re^−jθ), e.g.,
det[zI−(A [i] −B [i] K [i])]=z ^2−2r cos θz+r ^2(Eq. 10)
On the other hand, the present disclosure considers that the second order closed-form characteristic polynomial for the PRA scheme may be based at least in part on,
det[zI−(A [i] −B [i] K [i])]=z ^2 +f [1](k [11] ,k [12] ,k [21],k[22],α[i])z+f [2](k [11],k[12],k[21],k[22],α[i])(Eq. 11)
where k[11],k[12],k[21], and k[22 ]are vectors of K[i], and α[i ]is the estimate index.
As Eqs. 10 and 11 represent the same closed-form characteristic polynomial for PRA, they have the same coefficients for each order of z. Thus,
${ f 1 ( k 11 , k 12 , k 21 , k 22 , α i ) = - 2 r cos θ f 2 ( k 11 , k 12 , k 21 , k 22 , α i ) = r 2 . ( Eq . 12 )$
By substituting Eqs. 9a and 9b into Eq. 12, the result is
${ f 1 ( k 11 , k 12 , k 21 , k 22 , α i ) = - 2 ⅇ - 4 T i cos ( π log r log M i ) f 2 ( k 11 , k 12 , k 21 , k 22 , α i ) = ⅇ - 2 T i ( Eq . 13 )$
Eq. 13 provides the range of each vector of the controller gain K[i ] 78 to reach the target settling time and maximum overshoot. The solutions of Eq. 13 also show relationships between each vector
of the control gain matrix 78 and the estimate index. Although the exact value of each vector and estimate index is not given, Eq. 13 may essentially provide a guideline to design a suitable
controller gain K[i ] 78 such that the target settling time T[i ]and maximum overshoot M[i ]in the PRA scheme may be met. It is also noted that the estimate index a, may have an impact on achieving
the called for transient performance.
Similarly, the characteristic polynomial for the RRA scheme may be
det[zI−(A [i] −B [i] K [i])]=z ^2 +f [1](k [1] ,k [2])z+f [2](k [1] ,k [2])(Eq. 14)
where k[1 ]and k[2 ]are vectors of K[i ] 78.
Since Eqs. 10 and 14 represent the same or similar closed-form characteristic polynomial for RRA, they may have the same or similar coefficients for each order of z. Comparing the coefficients of
Eqs. 10 and 14 yields
${ f 1 ( k 1 , k 2 ) = - 2 r cos θ f 2 ( k 1 , k 2 ) = r 2 ( Eq . 15 )$
Substituting Eqs. 9a and 9b into Eq. 15 yields
${ f 1 ( k 1 , k 2 ) = - 2 ⅇ - 4 T i cos ( π log r log M i ) f 2 ( k 1 , k 2 ) = ⅇ - 2 T i ( Eq . 16 )$
Therefore, the solutions of Eq. 16 may essentially provide guidelines to design a suitable controller gain K[i ] 78 such that the target settling time T[i ]and maximum overshoot M[i ]in RRA scheme
may be met.
As shown in FIG. 6, the target system 80 may be achieved by feeding back proportional state variables 84 to the control input 82. The state variables 84 may represent the on-line traffic dynamics,
which may imply changes of the queue length and bandwidth requirement of an LS. The controller 76 may essentially feedback the traffic dynamics information, after multiplying by the controller gain
78, to the input 82 of the system. By doing so, the eigenvalues of an open plant system, which is usually outside of the unit circle, may be driven back into the inside of the unit circle after
implementing proper controller gains. In this manner, the system is driven into the stable state. An example controller 76 may be facilitated through the proper buffering and intra-LS scheduling
schemes at the RS, or the appropriate inter-LS scheduling scheme among LSs. Thus, the RS may work as a central controller to tune LSs accordingly, which may ensure that the upstream resource of a
P2MP system is fairly shared among multiple LSs. The controller gains K[i]|[i=1,2,3,4,5,6 ] 78 describe the controller 76 characteristics in different scenarios.
For PRA, the RS may manipulate the upstream transmission from multiple LSs by using a controller U[i](n)=−KX[i](n) 76, where K=K[i]|[i=1˜4 ] 78. The estimation index α[i ]may affect the system
stability when designing a controller 76 for a P2MP system with PRA. In both RRA and PRA, the above equations may provide guidelines for the controller 76 design to achieve a P2MP system's stability.
Based at least in part on the models discussed above, example controller designs may now be determined. An example of one such controller design is depicted in FIG. 7. The present disclosure
considers that design objectives typically may be an initial action in controller design. In this case, the design objectives for resource allocation in a P2MP system may be system robustness,
accuracy, and target transient performance.
P2MP resource allocation schemes may achieve robustness performance regarding the system dynamics. Robustness implies that the system should be able to handle various conditions, even when online
traffic changes dramatically.
An electrical control system is said to be “accurate” if the measured output converges or becomes sufficiently close to the reference input. In a P2MP system, the reference input r 82 may be chosen
from various SLA parameters, or other pre-defined parameters. The measured output Y[i](n) 86 is thus desired to converge to r 82 in order to ensure that control objectives are met. For example, the
desired queue length Q[i] ^d of LS[i ]may be the reference input 82, e.g., r=Q[i] ^d. The desired queue length Q[i] ^d is defined as the efficient queue length to achieve high network resource
utilization. Theoretically, each LS may maintain a desired queue length Q[i] ^d to avoid overflow or emptiness. If the queue length becomes too large, data loss and retransmission may occur due to
limited available buffers. On the other hand, if the queued length becomes empty, it indicates that the allocated resource for this LS may be more than it actually needs. In that case, the network
resource may be wasted with low utilization. Both of these extremes may be avoided by maintaining a desired queue length Q[i] ^d.
As discussed above, in a electrical control system, the settling time T[i ]and the maximum overshoot M[i ]may be two main parameters to prescribe the system's target transient performance. The
settling time T[i ]may be defined as the time for the P2MP system to reach the steady state. Short settling times may be utilized to achieve the performance objective, especially when incoming
traffic of LSs have large volatility. On the other side, the maximum overshoot M[i ]may be defined as the difference between the maximum system output y[max ]and steady-state system output y[ss ]
divided by the steady-state system output y[ss], e.g.,
$M i = y max - y ss y ss .$
The maximum overshoot may provide the upper bound for the output oscillations of a P2MP system.
In designing one embodiment, consider the measured system output Y[i](n)=CX[i](n) 86, and define matrix C=[0 1]. The system output essentially may be the measurement to the report queue length Q[i]
(n). The state space system may then be described as
X [i](n+1)=AX [i](n)+BU [i](n)
Y [i](n)=CX [i](n)90(Eq. 17)
It may be useful to design a controller 92 to achieve the design objectives of robustness, accuracy and target transient performances. FIG. 7 illustrates such a controller 92 by implementing
U [i](n)=−K [i] X [i](n)+F [i] r92(Eq. 18)
to achieve the prescribed objectives. The reference input r 82 is the desired queue length, and thus e(n)=Y[i](n)−r is the control error. The matrix F[i ] 88 is a compensator to offset the control
error, so that the system output 86 can eventually converge to the input reference 82 (i.e., e(n)=0). Therefore, a suitable controller gain K[i ] 94 and compensator F[i ] 88 may be determined.
As discussed above, the objective of system accuracy relates to output Y[i](n) 86, which is the output measurement of report queue length, converging to the system input r 82, the desired queue
length. To reach this objective, a compensator F[i ] 88 may be implemented after the reference 82, and may be added up to the feedback from the state variable 84, to form the controller U[i](n) 92,
which is illustrated in FIG. 7. Therefore, a compensator F[i ] 88 may be designed in such way to offset the control error, e.g., e(n)=0.
For a particular P2MP system i, the compensator F[i ] 88 may be determined by the state matrix A[i], the input matrix B[i], the output matrix C[i], and the controller gain K[i ] 94. The present
disclosure contemplates that the compensator F[i ] 88 that drives the control error e(n)=Y[i](n)−r to zero may be given by
$F i = [ K i 1 ] [ A i - I B i C i 0 ] - 1 [ 0 1 ] ( Eq . 19 )$
$[ A i - I B i C i 0 ]$
is a non-singular matrix.
Consequently, by implementing the compensator F[i ] 88 of Eq. 19, the controller 92 may be able to force the system output Y[i](n) 86 to track the reference input r 82, implying that the queue length
can be eventually driven into the desired queue length Q[i] ^d.
Regarding the transient performance, recall that controller gain K[i ] 94 in Eq. 18 may determine the poles in the closed loop characteristic polynomial det[zI−(A[i]−B[i]K[i])], and thus the target
transient performance T[i ]and M[i ]may be achieved by properly tuning the controller gain K[i ] 94. From the discussion above, in the RRA scheme, it can be shown that the equation
${ f 1 ( k 1 , k 2 ) = - 2 ⅇ - 4 T i cos ( π log r log M i ) f 2 ( k 1 , k 2 ) = ⅇ - 2 T i$
provides a guideline to design a suitable controller gain K[i ] 94 such that the target settling time T[i ]and maximum overshoot M[i ]may be met.
With reference to FIG. 8, depicted is a block diagram illustrating an example computing device 800 that is arranged for point-to-multipoint (P2MP) network resource management in accordance with the
present disclosure. In a very basic configuration 801, computing device 800 typically includes one or more processors 810 and system memory 820. A memory bus 830 can be used for communicating between
the processor 810 and the system memory 820.
Depending on the desired configuration, processor 810 can be of any type including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any
combination thereof. Processor 810 can include one more levels of caching, such as a level one cache 811 and a level two cache 812, a processor core 813, and registers 814. The processor core 813 can
include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP Core), or any combination thereof. A memory controller 815 can also be used with the
processor 810, or in some implementations the memory controller 815 can be an internal part of the processor 810.
Depending on the desired configuration, the system memory 820 can be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.) or
any combination thereof. System memory 820 typically includes an operating system 821, one or more applications 822, and program data 824. Application 822 includes a point-to-multipoint network
resource management algorithm 823 that is arranged to efficiently manage network resources in a point-to-multipoint network. Program Data 824 includes point-to-multipoint (P2MP) network resource
management data 825. In some embodiments, application 822 can be arranged to operate with program data 824 on an operating system 821 to effectuate the efficient management of network resources. This
described basic configuration is illustrated in FIG. 8 by those components within dashed line 801.
Computing device 800 can have additional features or functionality, and additional interfaces to facilitate communications between the basic configuration 801 and any required devices and interfaces.
For example, a bus/interface controller 840 can be used to facilitate communications between the basic configuration 801 and one or more data storage devices 850 via a storage interface bus 841. The
data storage devices 850 can be removable storage devices 851, non-removable storage devices 852, or a combination thereof. Examples of removable storage and non-removable storage devices include
magnetic disk devices such as flexible disk drives and hard-disk drives (HDD), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSD),
and tape drives to name a few. Example computer storage media can include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of
information, such as computer readable instructions, data structures, program modules, or other data.
System memory 820, removable storage 851 and non-removable storage 852 are all examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash
memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any
other medium which can be used to store the desired information and which can be accessed by computing device 800. Any such computer storage media can be part of device 800.
Computing device 800 can also include an interface bus 842 for facilitating communication from various interface devices (e.g., output interfaces, peripheral interfaces, and communication interfaces)
to the basic configuration 801 via the bus/interface controller 840. Example output devices 860 include a graphics processing unit 861 and an audio processing unit 862, which can be configured to
communicate to various external devices such as a display or speakers via one or more A/V ports 863. Example peripheral interfaces 870 include a serial interface controller 871 or a parallel
interface controller 872, which can be configured to communicate with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device, etc.) or other
peripheral devices (e.g., printer, scanner, etc.) via one or more I/O ports 873. An example communication device 880 includes a network controller 881, which can be arranged to facilitate
communications with one or more other computing devices 890 over a network communication via one or more communication ports 882. The communication connection is one example of a communication media.
Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport
mechanism, and includes any information delivery media. A “modulated data signal” can be a signal that has one or more of its characteristics set or changed in such a manner as to encode information
in the signal. By way of example, and not limitation, communication media can include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio
frequency (RF), infrared (IR) and other wireless media. The term computer readable media as used herein can include both storage media and communication media.
Computing device 800 can be implemented as a portion of a small-form factor portable (or mobile) electronic device such as a cell phone, a personal data assistant (PDA), a personal media player
device, a wireless web-watch device, a personal headset device, an application specific device, or a hybrid device that include any of the above functions. Computing device 800 can also be
implemented as a personal computer including both laptop computer and non-laptop computer configurations.
According to one embodiment, computing device 800 is coupled to a networking environment such that the processor 810, application 822 and/or program data 824 can perform with or as a
point-to-multipoint (P2MP) network resource management system in accordance with embodiments herein. For example, the diagrams shown in FIGS. 2-4 may be implemented in such environment.
It should also be understood that, while a stated objective of various example embodiments disclosed herein may be to “minimize” the settling time or other parameters or characteristics, it is not
necessary to literally minimize any parameters or other characteristic to fall within the scope of any claim unless such specific objective is expressly claimed. Likewise, it should be understood
that it is not necessary to literally “optimize” the settling time or other parameters or characteristics to fall within the scope of any claim unless such specific objective is expressly claimed.
The herein described subject matter sometimes illustrates different components contained within, or coupled with, different other components. It is to be understood that such depicted architectures
are merely examples, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same
functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated
with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being
“operably connected”, or “operably coupled”, to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably
couplable”, to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and
/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as
is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms
(e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as
“includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be
explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the
introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by
the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the
introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the
same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in
the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically
means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a
construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that
have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or
C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include
but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those
within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the
possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are
for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
Cited Patent Filing Publication Applicant Title
date date
US5784358 * Mar 8, Jul 21, 1998 British Telecommunications Public Limited Broadband switching network with automatic bandwidth allocation in response to data cell detection
1995 Company
US6633579 * Oct 21, Oct 14, 2003 Marconi Communications, Inc. Efficient method for storing multicast trees
US6937678 * Oct 12, Aug 30, 2005 Honeywell International, Inc. Rate and acceleration limiting filter and method for processing digital signals
US7483374 * Oct 24, Jan 27, 2009 Scalent Systems, Inc. Method and apparatus for achieving dynamic capacity and high availability in multi-stage data networks using
2003 adaptive flow-based routing
US7724636 * Jul 14, May 25, 2010 Industrial Technology Research Institute Asymmetry compensator for partial response maximum likelihood (PRML) decoder
US7808913 Apr 17, Oct 5, 2010 New Jersey Institute Of Technology Dynamic bandwidth allocation and service differentiation for broadband passive optical networks
US7948881 Apr 13, May 24, 2011 New Jersey Institute Of Technology Distributed bandwidth allocation for resilient packet ring networks
US7969881 Nov 28, Jun 28, 2011 New Jersey Institute Of Technology Providing proportionally fair bandwidth allocation in communication systems
US20060250986 Apr 13, Nov 9, 2006 New Jersey Institute Of Technology Distributed bandwidth allocation for resilient packet ring networks
US20070268828 May 18, Nov 22, 2007 Via Technologies, Inc. Closed loop control system and method of dynamically changing the loop bandwidth
* 2006
US20090143872 Nov 7, Jun 4, 2009 Fisher-Rosemount Systems, Inc. On-Line Adaptive Model Predictive Control in a Process Control System
* 2008
1 Assi, et al., Dynamic Bandwidth Allocation for Quality-of-Service Over Ethernet PONs, IEEE Journal on Selected Areas in Communications, 2003, pp. 1467-1477, vol. 21, No. 9, NYC, NY, USA, Nov.
2 Banerjee, et al., Fair Sharing Dual Service-Level Agreements to Achieve Open Access in a Passive Optical Network, IEEE Journal on Selected Areas in Communications, 2006, pp. 32-44, vol. 24, No.
8, NYC, NY, USA, Aug. 2006.
3 Banerjee, et al., Wavelength-division-multiplexed passive optical network (WDM-POM) technologies for broadband access: a review [Invited], Journal of Optical Networking, 2005, pp. 737-758, vol.
4, No. 11, Optical Society of America, USA, Nov. 2005.
4 Bubnicki, Modem Control Theory, 2005, pp. 55-60, Spring Berlin Heidelberg, New York, USA. [Month of publication is unknown. The year of publication is sufficiently earlier than the effective U.S.
filing date and any foreign priority date so the particular month of publication is not in issue.].
5 Byun, et al, Dynamic bandwidth allocation algorithm in ethernet passive optical networks, IEEE Electronic Letters, 2003, pp. 1001-1002, vol. 39., No. 13, IEEE Xplore, NYC, NY, USA, Jun. 2003.
6 Chen, DT Space-Space Equations and Realizations, Signals and Systems, 2004, pp. 380.389, Oxford University Press, New York, USA. [Month of publication is unknown. The year of publication is
sufficiently earlier than the effective U.S. filing date and any foreign priority date so the particular month of publication is not in issue.].
7 Hellerstein, et al., Feedback Control of Computing Systems, 2004, ch. 10, pp. 337-373, Wiley-Interscience, Hoboken, NJ, USA. [Month of publication is unknown. The year of publication is
sufficiently earlier than the effective U.S. filing date and any foreign priority date so the particular month of publication is not in issue.].
8 Hollot, et al., A Control of Theoretic Analysis of RED, Proc. of IEEE INFOCOM 2001, 2001, pp. 1510-1519, vol. 3, Anchorage, AK, USA. [Month of publication is unknown. The year of publication is
sufficiently earlier than the effective U.S. filing date and any foreign priority date so the particular month of publication is not in issue.].
9 IEEE Computer Society, Part 3: Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications, IEEE-SA Standards Board, 2004, IEEE Std. 802.3ah,
Amendment to IEEE Std. 802.2 TM-2002, as amended by IEEE Stds 802.3ae TM,-2002, 802.3aj TM-2003 and 802.3ak-2004, Sep. 2004.
10 International Telecommunication Union, Broadband optical access systems based on Passive Optical Networks (PON), Series G: Transmission Systems and Media, Digital Systems and Networks 2005 ITU-T
Recommendation G.983.1, Jan. 2005.
11 International Telecommunication Union, Gigabite-capable passive optical networks (PON), Series G: Transmission Systems and Media, Digital Systems and Networks, 2003, ITU-T Recommendation G.984.1,
Mar. 2008.
12 Jury, Inners and Stability of Dynamic Systems, 2nd Ed., 1982, pp. 264-270, Krieger, Malabar, FA. [Month of publication is unknown. The year of publication is sufficiently earlier than the
effective U.S. filing date and any foreign priority date so the particular month of publication is not in issue.].
13 Kim, et al., Optimal rate based flow control for ABR services in ATM Networks, 1999 IEEE TENCON, 1999, pp. 773-776, vol. 1, Korea, Sep. 1999.
14 Kramer, et al, Ethernet PON (ePON): Design and Analysis of an Optical Access Network, Photonic Network Communications, 2001, pp. 307-319, 3:3, Kluwer Academic Publishers, The Netherlands, Dec.
15 Kramer, et al., IPACT: A Dynamic Protocol for an Ethernet PON (EPON), IEEE Communications Magazine, 2002, pp. 74-80, vol. 40, No. 2, www.ieee.org, NYC, NY, USA, Feb. 2002.
16 Luo, et al., Bandwidth allocation for multiservice access on EPONs, IEEE Optical Communications, 2005, pp. S16-S21, vol. 43, No. 2, IEEE, NYC, NY, USA, Feb. 2005.
17 Luo, et al., Limited Sharing with traffic prediction for dynamic bandwidth allocation and QoS provisioning over ethernet passive optical networks, Journal of Optical Networking, 2005, pp.
561-572, vol. 4, No. 9, Optical Society of America, USA, Jul. 2005.
18 Luo, et al., Resource Management for Broadband Access over Time-Division Multiplexed Passive Optical Networks, IEEE Network Magazine, 2007, pp. 20-27, vol. 21, Issue 5, IEEE, NYC, NY, USA, Sep.
19 Ma, et al., A Bandwidth Guaranteed Polling MAC Protocol for Ethernet Passive Optical Networks, Proc. IEEE INFOCOM, 2003, pp. 22-31, San Francisco, CA, USA. [Month of publication is unknown. The
year of publication is sufficiently earlier than the effective U.S. filing date and any foreign priority date so the particular month of publication is not in issue.].
20 MPCP-State of the Art-PDF Presentation, (2002), http:wwwieee802.org/3/efm/public/jan02/maislos-1-0102.pdf, Jan. 2002.
21 MPCP—State of the Art—PDF Presentation, (2002), http:wwwieee802.org/3/efm/public/jan02/maislos—1—0102.pdf, Jan. 2002.
N. Ansari and Y. Luo, "Passive Optical Networks for Broadband Access," The Handbook of Computer Networks Volume 1: Data Transmission and Digital and Optical Networks (Hossein Ridgoli, ed.), pp.
22 948-957, John Wiley & Son, ISBN 978-0-471-78458-6, 2008, USA. [Month of publication is unknown. The year of publication is sufficiently earlier than the effective U.S. filing date and any foreign
priority date so the particular month of publication is not in issue.].
23 Naser, et al., A Joint-ONU Interval-Based Dynamic Scheduling Algorithm for Ethernet Passive Optical Networks, IEEE/ACM Transactions on Networking, 2006, pp. 889-899, vol. 14, No. 4, IEEE Xplore,
NYC, NY, USA, Aug. 2006.
24 Office action issued Feb. 16, 2011 in U.S. Appl. No. 12/418,681, filed Apr. 6, 2009.
25 Shami, et al., Jitter Performance in Ethernet Passive Optical Networks, Journal of Lightwave Technology, 2005, pp. 1745-1753, vol. 23, No. 4, IEEE Xplore, NYC, NY, USA, Apr. 2005.
26 Sherif, et al., A Novel Decentralized Ethernet-Based PON Access Architecture for Provisioning Differentiated QoS, Journal of Lightwave Technology, 2004, pp. 2483-2497, vol. 22, No. 11, IEEE
XPlore, NYC, NY, USA, Nov. 2004.
27 Willinger, et al., Self-Similarity Through High-Variability: Statistical Analysis of Ethernet LAN Traffic at the Source Level, IEEE/ACM Transactions on Networking, 1997, pp. 71-86, vol. 5, No. 1,
IEEE, NYC, NY, USA, Feb. 1997.
28 Yin, et al., Bandwidth Allocation over EPONs: A Controllability Perspective, Proc. IEEE GLOBCOM 2006, 2006, San Francisco, CA, USA, Nov. 2006.
29 Yin, et al., Stability of Predictor Based Dynamic Bandwidth Allocation over EPONs, IEEE Communications Letters, 2007, pp. 549-551, vol. 11, No. 6, IEEE, NYC, NY, USA, Jun. 2007.
30 Zhang, et al., Dual DEB-GPS scheduler for delay-constraint applications in Ethernet passive optical networks, IEICE Transactions on Communications, May 2003, pp. 1575-1584, vol. E86-B. No. 5,
Joint Special Issue on Recent Progress in Optoelectronics and Communications, USA.
Date Code Event Description
Aug 20, 2013 CC Certificate of correction
Effective date: 20090226
Owner name: NEW JERSEY INSTITUTE OF TECHNOLOGY, NEW JERSEY
Jun 17, 2011 AS Assignment Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ANSARI, NIRWAN;REEL/FRAME:026454/0035
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:YIN, SI;REEL/FRAME:026477/0834
Effective date: 20090301
Original Image | {"url":"http://www.google.com/patents/US8184627?dq=5311516","timestamp":"2014-04-16T23:34:44Z","content_type":null,"content_length":"158659","record_id":"<urn:uuid:ce0000ec-8c1f-45a6-acf3-8f89db20c7ae>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
Frattini argument, a special variant.
April 17th 2012, 09:38 AM #1
Feb 2009
Frattini argument, a special variant.
Hi: In the following, G is a group that acts on the set $\Omega$. For $\alpha \in \Omega, G_\alpha := \{x \in G | \alpha x = \alpha\}$.
Proposition: Suppose that G contains a normal subgroup N, which acts transitively on $\Omega$. Then $G = G_\alpha N$ for every $\alpha \in \Omega$. In particular, $G_\alpha$ is a complement of N
in G if $N_\alpha= 1$.
This proposition is in a book, Kurzweil - Stellmacher, The Theory of Finite Groups, An Introduction, Springer, 2004. What I do not understand is why N has to be normal. In the proof given by the
author, no use is made of that fact. The proof runs like this:
Let $\alpha \in \Omega$ and $y \in G$. The transitivity of $N$ on $\Omega$ gives an element $x \in N$ such that $\alpha y = \alpha x$. Hence $\alpha yx^{-1} = \alpha$ and thus $yx^{-1} \in G_\
alpha$. This shows that $y \in G_\alpha x \subseteq G_\alpha N$.
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/advanced-algebra/197446-frattini-argument-special-variant.html","timestamp":"2014-04-20T08:41:46Z","content_type":null,"content_length":"31843","record_id":"<urn:uuid:d7fe8efd-8aa3-4352-b206-84f54b808e35>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00531-ip-10-147-4-33.ec2.internal.warc.gz"} |
Estimating Missing Data
March 6th 2012, 11:43 PM
Estimating Missing Data
Not sure if this is the right forum, or site for that matter, but hopefully someone can help me out.
I have three sets of data which includes three subsets. Lets say x, y and z. z is made up entirely of data from both x and y, however I do not know how much of z is made up of x and how much is
made up of y. Sets x, y and z, contain a subset each. I'll call them x1, y1, z1. I know the percentage of x that is x1, y that is y1 and z that is z1 and z1 is also made up entirely of x1 and y1.
So to break it down.
x = 1000 units
y = 500 units
z = 40 units
x1 = 20 (2% of x)
y1 = 75 (15% of y)
z1 = 4 (10% of z)
So my question is, is it possible to estimate what value of z is made up of x and what value of z is made up of y knowing the percentages of x1, y1 and z1.
I think it can be done by working out what amount x1 is of z1 and y1 is of z1 then applying that to x and y. | {"url":"http://mathhelpforum.com/statistics/195679-estimating-missing-data-print.html","timestamp":"2014-04-23T18:31:13Z","content_type":null,"content_length":"3973","record_id":"<urn:uuid:0b30f353-3708-4b4e-9392-ca12f51d951b>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/asaad123/medals","timestamp":"2014-04-19T10:13:27Z","content_type":null,"content_length":"97724","record_id":"<urn:uuid:39518472-d01e-4e39-b9cb-5d89b3b74517>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00654-ip-10-147-4-33.ec2.internal.warc.gz"} |
Positively curved manifold with a codimension 1 totally geodesic submanifold.
up vote 3 down vote favorite
Fact : Consider the inclusion $V^{n-1} \rightarrow M^n$ where $M$ is a closed orientable simply connected positively curved manifold.
Then connectivity lemma implies that the inclusion is $(n-1)$-connected so that $M$ is homeomorphic to a sphere.
Situation : As you know ther exists a $S^3$-action on $M={\bf CP}^2$ which is cohomogeneity one, i.e., $M/S^2=[0,1]$.
Hence $M$ is the union of two disk bundles over two singular orbits $S^3\cdot x_1$, $S^3\cdot x_2$ : $S^3\cdot x_1$ is diffeomorphic to $S^2$ and $ S^3\cdot x_2 = \{ x_2\}$.
Here think about two conditions :
(1) A geodesic sphere of suitable radius around $x_2$ is totally geodesic.
(2) $M$ is positively curved.
By the above fact (1) and (2) cannot be compatible.
Here I have a question : Is it true that cannonical $S^5(1)/S^1={\bf CP}^2$ does not have a codimensional 1 totally geodesic submanifold.
Thank you in advance.
riemannian-geometry dg.differential-geometry projective-geometry
add comment
2 Answers
active oldest votes
$\mathbb{CP}^n$ (with $n>1$) does indeed not have any codimension $1$ totally geodesic manifold; neither does $\mathbb{CH}^n$. You can probably find a proof in Goldman's book on complex
hyperbolic geometry.
(Added later: this is true even locally: there are no open codimension $1$ totally geodesic manifold in $\mathbb{CP}^n$ nor in $\mathbb{CH}^n$.)
Note that this is an important geometrical fact, as (as far as I know) all proofs of the isoperimetric inequality that work in the real hyperbolic space use reflexions with respect to a
up vote 6 totally geodesic codimension one manifold. This explains why we still don't know if balls are optimal for the isoperimetric problem in $\mathbb{CH}^n$ and $\mathbb{CP}^n$ (small balls
down vote in the latter case, as for large volumes balls are known not to be optimal).
Also note that it is a source of great difficulty in the study of subgroups of isometries of $\mathbb{CH}^n$: for many groups $\Gamma$ acting isometrically on $\mathbb{CH}^n$, we do not
know whether they are discrete; one cannot construct a fundamental domain with geodesic faces that could be used to prove discreteness, as it is done in real hyperbolic geometry. We are
therefore mainly left with arithmetic methods, and to find non-arithmetic lattices of $\mathrm{SU}(1,n)$ is an important problem, see notably the work of Martin Deraux.
@Benoit, do you know any references or survey articles regarding the isoperemetric problem in $CP^n$ or $CH^n$? – Ralph May 1 '13 at 21:02
@Ralph: no. As far as I know, very little has been written on this and the best lower bounds we have are the optimal linear bound (good for large domains) in complex dimension $2$ the
Euclidean inequality (both for $\mathbb{CH}^n$) and optimal asymptotic bounds for small domains in all dimension (in a non-explicit sense). All are consequences of more general result
(Yau's linear isoperimetric inequality, Croke's inequality for $4$-manifold with non-positive curvature, and Druet's inequality for manifolds with scalar curvature bounded above). –
Benoît Kloeckner May 2 '13 at 13:39
(...) I just remember that the problem is posed in a survey by Choe, Ritoré or both of them, but I could not get my hand on it. – Benoît Kloeckner May 2 '13 at 13:41
add comment
Compact positively curved manifold with totally geodesic hypersurface has to be sphere, or its double cover has to be a sphere.
up vote 10 To prove this cut along the hypersurface and apply the soul theorem to each part.
down vote
Thank you first. The above fact is about open problem : Compact positively curved manifold with totally geodesic hypersurface is diffeomorphic to be sphere, or its double cover has to
be a sphere. As far as I know soul theorem is that non-compact complete nonnegatively curved manifold is diffeomorphic to normal bundle over a soul. But if we cut closed manifold along
hypersurface then it is not complete. Hence how can we apply the soul theorem ? – Hee Kwon Lee May 1 '13 at 2:38
2 @Hee Kwon Lee, there is a version of soul theorem for manifolds with boundary (the proof is exactly the same). Alternatively, you may take the boundary$\times\mathbb{R}_\ge$ and glue it
in; this way you get complete manifold without boundary. – Anton Petrunin May 1 '13 at 4:06
add comment
Not the answer you're looking for? Browse other questions tagged riemannian-geometry dg.differential-geometry projective-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/129227/positively-curved-manifold-with-a-codimension-1-totally-geodesic-submanifold/129236","timestamp":"2014-04-19T20:15:49Z","content_type":null,"content_length":"63446","record_id":"<urn:uuid:12d6ac4f-2179-4ff3-83fe-c6d568c364d1>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00339-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multiplying Fractions.
April 24th 2012, 05:02 AM
Multiplying Fractions.
Just curious as to why you can multiply straight across and it works. Why does multiplying the two parts, and two wholes works. I can see that it does work but am not sure what exactly is going
on that does make it work.
My assumption is the following:
When multiplying 4/2 • 6/3
You are really saying 4 ÷ 2 • 6 ÷ 3
that can then be flipped to say 4 • 1/2 • 6 • 1/3
Multiplication can be done in any order. Multiplying by 1/2 and 1/3 is the same as multiplying by 1/6 or in other words dividing by 6 (this would be multiplying the denominators).
Also because multiplying can be done in any order you can multiply the 4 and 6 (numerators).
So you would then have the product of the numerators (24) times the product of the denominators (1/6) which si the same as 4/6 or 4÷6
I was just hoping there was an easier way to explain this.
April 24th 2012, 05:23 AM
Prove It
Re: Multiplying Fractions.
It helps if we think of multiplication as finding the area in square units of a rectangle - i.e. multiplying the number of squares in one row (the length) by the number of rows (width).
Then we need to define a unit square to be a square with length and width each = 1 unit in length.
Say we wanted to evaluate \displaystyle \begin{align*} \frac{1}{2} \times \frac{1}{2} \end{align*}, we need to evaluate the area of a square that has a length \displaystyle \begin{align*} \frac
{1}{2} \end{align*} unit in length, and width \displaystyle \begin{align*} \frac{1}{2} \end{align*} unit in length.
How much of the unit square \displaystyle \begin{align*} \frac{1}{2} \times \frac{1}{2} \end{align*}? Why, \displaystyle \begin{align*} \frac{1}{4} \end{align*} of it. Therefore \displaystyle \
begin{align*} \frac{1}{2} \times \frac{1}{2} = \frac{1}{4} \end{align*}.
Now what if we tried something like \displaystyle \begin{align*} \frac{3}{4} \times \frac{2}{3} \end{align*}? We would need to evaluate the area of a rectangle that has a length of \displaystyle
\begin{align*} \frac{3}{4} \end{align*} of a unit, and a width of \displaystyle \begin{align*} \frac{2}{3} \end{align*} of a unit.
How much of the unit square is covered by this \displaystyle \begin{align*} \frac{3}{4} \times \frac{2}{3} \end{align*} rectangle? Why, \displaystyle \begin{align*} \frac{6}{12} \end{align*}.
Therefore \displaystyle \begin{align*} \frac{3}{4} \times \frac{2}{3} = \frac{6}{12} \end{align*}.
I'm sure you're seeing that it looks like you can multiply tops and multiply bottoms. Why is this?
It's because in order to multiply the fractions, you need to split the "length" of your unit square into as many pieces as defined by the denominator of the first fraction, and you need to split
the "width" of your unit square into as many pieces as defined by the denominator of the second fraction. That means the unit square is divided into as many pieces as the product of the
denominators, and so this becomes your new denominator.
Then you need to count as many of these pieces along the "length" of your unit square as in the numerator of the first fraction, and you need to count as many of these pieces along the "width" of
your unit square as in the numerator of the second fraction. So the number of pieces you will be counting in total is the same as the product of the numerators, and so this becomes your new
Therefore, to multiply fractions, you can multiply the numerators and multiply the denominators.
April 24th 2012, 05:33 AM
Re: Multiplying Fractions.
Another way to think about this- if you have a rectangle with one side of length "3 meters" and another of length "5 meters", then the area is 15 square meters. That is, we multiply the lengths
and we "multiply" the units.
In a very real sense, the denominator is a "unit". The fraction "3/5" says that we have 3 things, each the size of a "1/5".
April 24th 2012, 06:03 AM
Re: Multiplying Fractions.
Awesome, that is perfect and makes sense. THanks so much, you guys rock as always. | {"url":"http://mathhelpforum.com/algebra/197824-multiplying-fractions-print.html","timestamp":"2014-04-23T21:01:06Z","content_type":null,"content_length":"11045","record_id":"<urn:uuid:750ba690-4b35-40ac-ac36-1e344411f77f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00189-ip-10-147-4-33.ec2.internal.warc.gz"} |
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/lolly/answered/1","timestamp":"2014-04-18T16:36:40Z","content_type":null,"content_length":"120167","record_id":"<urn:uuid:e9126b3d-f376-435c-bfb1-965087bcdf6c>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00453-ip-10-147-4-33.ec2.internal.warc.gz"} |
That curious termination argument
Quick, does the SML function
terminate on all tree-valued inputs?
datatype tree
= Nil
| Node of tree * int * tree
fun norm Nil = Nil
| norm (Node (Nil, x, t)) =
Node (Nil, x, norm t)
| norm (Node (Node (t1, x, t2), y, t3)) =
norm (Node (t1, x, (Node (t2, y, t3))))
And if so, why?
This came from a post on the Theoretical Computer Science StackExchange asking about
the theorem that, if two trees agree on their in-order traversals, then they are equivalent up to rotations
. The "simple and intuitive" result mentioned in that post is a constructive proof which relies on the same termination ordering as the
function. And it's also not terribly difficult to see why the function should terminate - it's a lexicographic termination argument: either the tree loses a node (the second case) or the tree keeps
the same number of nodes and the depth of the left-most leaf decreases (the third case).
However, this sort of problem comes up not-terribly-infrequently in settings (like Agda) where termination must be established by
structural induction
, and the preceding argument is not structurally inductive on trees. Whenever I encounter this it always drives me crazy, so this is something of a note to myself.
The solution that works great for this sort of function in a dependently typed language is to define a judgment over terms that captures precisely the termination ordering that we will want to
data TM {A : Set} : Tree A -> Set where
Done : TM Nil
Recurse : {x : A}{t : Tree A}
-> TM t
-> TM (Node Nil x t)
Rotate : {x y : A}{t1 t2 t3 : Tree A}
-> TM (Node t1 x (Node t2 y t3))
-> TM (Node (Node t1 x t2) y t3)
Then (and this is really the tricky part) we have to write the proof that for every tree
there is a derivaiton of
TM t
. The key, and the reason that I always have to bend my brain whenever I encounter a termination argument like this one, is the
helper function.
metric : {A : Set} (t : Tree A) -> TM t
metric Nil = Done
metric (Node t1 x t2) = helper t1 (metric t2)
append : {A : Set}{t1 t2 : Tree A} -> TM t1 -> TM t2 -> TM (t1 ++> t2)
append Done t2 = t2
append (Recurse t1) t2 = Recurse (append t1 t2)
append (Rotate t1) t2 = Rotate (append t1 t2)
helper : {A : Set} {x : A}{t2 : Tree A}
(t1 : Tree A)
-> TM t2
-> TM (Node t1 x t2)
helper Nil tt = Recurse tt
helper (Node t1 x t2) tt = Rotate (helper t1 (append (helper t2 Done) tt))
Now it's trivial to write a version of the
function that Agda will treat as terminating, because I just pass in an extra proof of
TM t
, and the proof proceeds by trivial structural induction on
norm : {A : Set} -> Tree A -> Tree A
norm t = helper t (metric t)
helper : {A : Set} -> (t : Tree A) -> TM t -> Tree A
helper Nil Done = Nil
helper (Node Nil x t) (Recurse tm) =
Node Nil x (helper t tm)
helper (Node (Node t1 x t2) y t3) (Rotate tm) =
helper (Node t1 x (Node t2 y t3)) tm
The Agda code for the above is
. Are there other ways of expressing this termination argument in Agda that make as much or more sense? One approach I fiddled with was presenting a tree indexed by (1) the total number of nodes in
it and (2) the depth of the left-most leaf:
data ITree (A : Set) : NatT → NatT → Set where
Nil : ITree A Z Z
Node : ∀{total1 total2 left1 left2}
-> ITree A left1 total1
-> A
-> ITree A left2 total2
-> ITree A (1 +n left1) (1 +n (total1 +n total2))
However, due to the complexity of the dependent equality reasoning, I couldn't get Agda to believe the intuitive termination argument I presented at the beginning.
Trees have normal forms under rotation
Once the above argument works, it's not difficult to prove the theorem mentioned on TCS StackExchange;
here's the Agda proof
[Update Nov 15, 2010]
Over at
, Conor McBride says
"When someone asks "how do I show this non-structural function terminates?", I always wonder what structurally recursive function I'd write instead."
and then proceeds to
answer that question
by giving the appropriate structurally inductive functions
. Nice! His proof also introduces me to a new Agda built-in,
, whose existence I was previously unaware of. Oh
Agda release announcements
, when will I learn to read you?
3 comments:
1. BTW, what you're describing---defining a predicate that gives an inductive definition of the domain/recursive calls of the function, and separately proving that everything satisfies that
predicate---is often called the "Bove-and-Capretta" method
2. Thanks - is the right citation for that "Modelling general recursion in type theory," MSCS 2005?
3. Simple General Recursion in Type Theory (Bove, 2000, Nordic Journal of Computing) is the earliest cite I can find | {"url":"http://requestforlogic.blogspot.co.il/2010/10/that-curious-termination-argument.html","timestamp":"2014-04-21T04:32:23Z","content_type":null,"content_length":"84736","record_id":"<urn:uuid:e7300c7e-b279-4d00-a328-8d584b29fc85>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00627-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pseudo-Boolean constraints and extended resolution
Next: Summary Up: Proof Complexity Previous: Modularity constraints and pseudo-Boolean
Finally, let us clarify a point that we made earlier. Given that there is an encoding of
The answer is as follows. While the fact that
and wish to conclude from this that
There is no single pseudo-Boolean axiom that is equivalent to (40).
Matt Ginsberg 2004-02-19 | {"url":"http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume21/dixon04a-html/node24.html","timestamp":"2014-04-18T14:12:24Z","content_type":null,"content_length":"4402","record_id":"<urn:uuid:ac3f1945-67ea-4642-998e-22898ce9f23c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00580-ip-10-147-4-33.ec2.internal.warc.gz"} |
setup of mathematical work
Re: setup of mathematical work
Hi pari_alf;
Have you tried Geogebra?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://mathisfunforum.com/viewtopic.php?id=20255","timestamp":"2014-04-21T12:09:28Z","content_type":null,"content_length":"10802","record_id":"<urn:uuid:082084e8-cb2b-432e-898c-8a86087a92b7>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00596-ip-10-147-4-33.ec2.internal.warc.gz"} |
Construction of Affine Invariant Functions in Spatial Domain
Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 690262, 11 pages
Research Article
Construction of Affine Invariant Functions in Spatial Domain
^1School of Mathematics and Statistics, Nanjing University of Information Science and Technology, Nanjing 210044, China
^2Department of Mathematics G. Castelnuovo, University of Rome la Sapienza, Piazzale, Aldo Moro 2, 00185 Rome, Italy
Received 17 January 2012; Accepted 9 March 2012
Academic Editor: Bin Fang
Copyright © 2012 Jianwei Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Affine invariant functions are constructed in spatial domain. Unlike the previous affine representation functions in transform domain, these functions are constructed directly on the object contour
without any transformation. To eliminate the effect of the choice of points on the contour, an affine invariant function using seven points on the contour is constructed. For objects with several
separable components, a closed curve is derived to construct the affine invariant functions. Several experiments have been conducted to evaluate the performance of the proposed method. Experimental
results show that the constructed affine invariant functions can be used for object classification.
1. Introduction
Recognizing objects that are subjected to certain viewing transformation is important in the field of computer vision [1]. Affine transformation may be used as an approximation to viewpoint-related
changes of objects [2–4]. Typical geometric transformation such as rotation, translation, scaling, and skewing are included in the affine transformation.
The extraction of affine invariant features plays a very important role in object recognition and has been found application in many fields such as shape recognition and retrieval [5, 6],
watermarking [7], identification of aircrafts [1, 8], texture classification [9], image registration [10], and contour matching [11].
Many algorithms have been developed for affine invariant features extraction. Based on whether the features are extracted from the contour only or from the whole-shape region, the approaches can be
classified into two main categories: region-based methods and contour-based methods [12]. For good overviews of the various techniques refer to [12–15]. Contour-based methods provide better data
reduction, and the contour usually offers more shape information than interior content [12]. A number of contour-based methods have been introduced in recent years. Affine invariant function (AIF) in
these papers is usually constructed in transform domain (see [1, 8, 16–20], etc.).
Due to the spatial and frequency localization property of wavelets, many wavelet-based algorithms have been developed for the extraction of affine invariant features. It is reported that these
wavelet-based methods outperform Fourier descriptors [1, 8, 19]. In these methods, the object boundary is firstly analyzed by wavelet transform at different scales. The obtained approximation and
detail signals are then used for the construction of AIF. The choice of the signals, the number of decomposition levels, and the wavelet functions used have all resulted in a number of different
approaches. Many promising results have been reported; Alferez and Wang [21] proposed geometric and illumination invariants for object recognition depending on the details coefficients of dyadic
wavelet decomposition. Tieng and Boles [19] have developed an approximation-detail AIF using one dyadic level only. Another AIF, the detail-detail representation function, was derived by Khalil and
Bayoumi using a dyadic wavelet transform [1, 8]. The invariant function is computed by utilizing two, three, or four dyadic scale levels. Recently, AIF from the approximation coefficients has been
developed by applying two different wavelet transforms with different wavelet basis [18]. The synthesized AIF is proposed by Lin and Fang [17] with the synthesized feature signals of the shape.
However, in all these methods, AIFs are constructed in transform domain. That is to say, the shape contour is firstly transformed by a linear operator (e.g., wavelet transform, Fourier transform,
etc.). Then AIFs are constructed from the transformed contour. In this paper, we construct AIF directly by the shape contour without any transformation. Equidistant Points on the object contour are
used to construct AIFs. To eliminate the effect of the choice of points on the contour, an AIF using seven points on the contour is constructed. In addition, the shape contour is not available [12]
in many cases. For example, the image of Chinese character “Yang’’ as shown in Figure 3 consists of several components. AIFs can not be constructed from these objects. To address this problem, we
derive a closed curve, which is called general contour (GC), from the object. GC is obtained by performing projections along lines with different polar angles. The GC derived from the affine
transformed object is the same affine transformed version as that of the original object. AIFs can be constructed in spatial domain from the derived GC. Several experiments have been conducted to
evaluate the performance of the proposed method. Experimental results show that the constructed affine invariant functions can be used for object classification.
The rest of the paper is organized as follows: in Section 2, some basic concepts about affine transform are introduced. AIFs are constructed in Section 3. The performance of the proposed method is
evaluated experimentally in Section 4. Finally, some conclusion remarks are provided in Section 5.
2. Preliminaries
2.1. Affine Transformation
Consider a parametric point with parameter on the object contour. The affine transformation consists of a linear transformation and translation as follows: The above equations can be written with the
following form: where the nonsingular matrix represents the scaling, rotation, skewing transformations, and the vector corresponds to the translation.
If is an affine invariant function and is the same invariant function calculated using the points under the affine transformations, then the relation between them can be formulated as where is the
determination of the matrix . The exponent of the power is called the weight of the invariance. If , the function is called as absolute invariant. If , the function is called a relative invariant.
2.2. Affine Invariant Parameters
To establish one-to-one relation between two contour, the object contour should be parameterized. The arc length parameter transforms linearly under any liner transformation up to the similarity
transform including translation, rotation, and scaling. But, it is not a suitable parameter for the constructing affine invariant function.
There are two parameters which are liner under an affine transformation: the affine arc length [22] and the enclosed area [16]. The affine arc length is defined as follows: where , , and , are the
first and the second derivatives of , with respect to the arc length parameter . Arbter et al. [16] defined the enclosed area parameter as follows: These two parameters can be made completely
invariant by simply normalizing them with respect to either the total affine arc length or the enclosed area of the contour. In the discrete case, the derivatives of and can be calculated using
finite difference equations. To establish one-to-one relation between two parameter, the contour should be normalized and resampled as in [19]. In the experiments of this paper, we use the enclosed
area as the parameter. In the discrete case, the parameterization should be normalized and resampled. The curve normalization approach used in this paper mainly composes of the following steps [23].
(i)For the discrete object contour , compute the total area of the object contour by the following formula where denotes the centroid of the object. Let the number of points on the contour after the
parameterization be too. Denote that .(ii)Select the starting point on object contour as the starting point of the normalized curve. From on object contour, search a point along the contour, such
that the area of each closed zone; namely, the polygon equals to .(iii)Using the same method, from point , calculate all the points , along the object contour. , along object contour.
In the experiments of this paper, the object contour or GC is normalized and resampled such that .
3. Affine Invariant Object Representation
In this part, we will derive invariant function from the normalized object contours. Correlation coefficient is used to measure the similarity of two AIFs. To construct AIFs from objects with several
separable components, we convert the object into a closed curve by performing projections along lines with different polar angles.
3.1. AIFs Construct in Spatial Domain
Let , and be the parametric equations of two contours that differ only by an affine transformation. For simplicity, in this subsection, we assume that the starting points on both contours are
identical. After normalizing and resampling, there is a one-to-one relation between and . We use the object centroid as the origin, then translation factor is eliminated. Equation (2.2) can be
written in matrix form as .
Let be an arbitrary positive constant, then is a shift version of . We define the following function: where denotes determination of a matrix. As a result of normalizing and resampling, , , and ,
satisfy the following equation: It follows that In other words, given in (3.1) is a relative invariance function. To eliminate the factor in (3.3), needs to be normalized. We normalize as follows:
where EAN denotes enclosed area of the object contour as defined in (2.6). It follows from (3.3) that given in (3.4) is an AIF. In [1, 8, 16–20], the shape contour is firstly transformed by a linear
operator (such as wavelet transform, Fourier transform, etc.). Then AIFs are constructed from the transformed contour. In our method, the AIF given in (3.4) is directly constructed from the shape
contour without any transformation.
Figure 1(a) shows a plane object, and Figure 1(b) shows its boundary. Figure 1(c) shows the AIF defined in (3.3) associated with Figure 1(b). Figure 2(a) shows an affine transformation version of
plane in Figure 1(a), and Figure 2(b) shows its boundary. Figure 2(c) shows the AIF derived from Figure 2(b). In Figures 1(c) and 2(c), is set to 32. Note that after affine transformation, the
starting points of AIFs are different. We observe that Figure 2(c) is nearly a translated version of Figure 1(c).
Experimental results show that the choice of may affect the accuracy of the object classification based on . Some choice of may result in lower accuracy while other choice of may result in higher
accuracy. To eliminate the effect of the choice of , we construct AIFs that involve more points on the object contour. In experiments of this paper, we use seven equidistant partition points of the
object contour: , to construct AIF as follows: Indeed, it can be shown that, for arbitrary constants: , homogeneous polynomials in terms of are also AIFs.
3.2. Measurement of the Similarity between Two AIFs
We have seen from Figures 1(c) and 2(c) that affine transformation may result in a translated version of AIF. To eliminate the effect of starting point, one-dimensional Fourier transform can be
applied to the obtained AIF. The invariance can be achieved by ignoring the phase in the coefficients and only keeping the magnitudes of the coefficients. This way has a lower computational
complexity since that FFT is faster than shift matching [24].
In this paper, we construct AIFs in spatial domain. Therefore, to eliminate the effect of starting point, we use correlation coefficient as in [18] to measure the similarity between two AIFs. For two
sequences and , the normalized cross-correlation is defined as follows: One of sequences, or is rendered periodically, then the maximum value of correlation is selected. Such an arrangement reduces
the effect of the boundary starting point variation [18]. Consequently, translation invariant is achieved. Based on [25–27], some other approaches can be derived to eliminate the effect of starting
3.3. AIFs for Objects with Several Separable Components
AIFs given in (3.4) and (3.5) can be used to object contour. But, in real-life, many objects consist of several separable components (such as Chinese character “Yang’’ in Figure 3(a)). Object
contours are not available for these objects. Consequently, AIFs given in Section 3.1 cannot be used to these objects. To address this problem, we convert the object into a closed curve by performing
projection along lines with different polar angles (which is called central projection transformation in [28]). The obtained closed curve is called general contour (GC) in [29]. It can be proved that
the GC extracted from the affine transformed object is also an affine transformed version of GC extracted from the original object. Consequently, AIFs given in Section 3.1 can be constructed based on
the GC of the object. For example, Figure 3(b) shows the GC of Figure 3(a). Figure 3(c) shows the AIF derived from GC of Figure 3(a).
4. Experiment
In this section, we evaluate the discriminate ability of the proposed method. In the first experiment, we examine the proposed method by using some airplane images. Object contours can be derived
from these images. In the second experiment, we evaluate the discriminate ability of the proposed method by using some Chinese characters. These characters have several separable components, and
contours are not available for these objects.
In the following experiments, the classification accuracy is defined as where denotes the number of correctly classified images, and denotes the total number of images applied in the test. Affine
transformations are generated by the following matrix [1]: where denote the scaling, rotation transformation, respectively, and denote the skewing transformation. To each object, the affine
transformations are generated by setting the parameters in (4.2) as follows: , , and, . Therefore, each image is transformed 168 times.
4.1. Air Plane Image Classification
The first experiment is conducted to classify the airplane images. Seven airplane images shown in Figure 4 are used as models in this experiment. Some of these models represent different objects but
with similar contours, such as model 6 and model 7. They can be easily misclassified due to their similarity. We test the effect of the choice of the constant . The contour is normalized and
resampled such that . We set . To each airplane image, the affine transformations are generated by setting the parameters in (4.2) as aforementioned. Therefore, each image is transformed 168 times.
That is to say, the test is repeated 1176 times. Table 1 shows the classification accuracy of different constants and that AIF is given in (3.5). It can be observed that different accuracies may be
achieved with different . For example, the accuracy rates are very low for and . To eliminate the effect of the choice , AIFs involved more points that can be used for object classification. In the
rest of this paper, we use AIF given in (3.5) to extract affine invariant features.
4.2. The Classification of Objects with Several Separable Components
In this experiments, we extract affine invariant features from objects with several separable components. 10 Chinese characters shown in Figure 4 are used as the database. These characters are with
regular script font. The size of these characters is . Each of these characters consists of several separable components. Some characters have the same structures, but the number of strokes or the
shape of specific stokes may be a little different. As aforementioned, each character image is transformed 168 times. That is to say, the test is repeated 1680 times. Experiments on Chinese
characters in Figure 5 and their affine transformations show that 96.25% accurate classification can be achieved by using AIF given in (3.5).
5. Conclusions
In this paper, we construct AIFs in spatial domain. Unlike the previous affine representation functions in transform domain, these AIFs are constructed directly on the object contour without any
transformation. This technique is based upon object contours, parameterized by an affine invariant parameter, and shifting of the contour. To eliminate the effect of the choice of points on the
contour, an AIF using seven points on the contour is constructed. For objects with several separable components, a closed curve is derived to construct the AIFs. Several experiments have been
conducted to evaluate the performance of the proposed method.
This work was supported in part by the National Science Foundation under Grant 60973157, 61003209, in part by the Natural Science Foundation of Jiangsu Province Education Department under Grant
1. M. I. Khalil and M. M. Bayoumi, “A dyadic wavelet affine invariant function for 2D shape recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 10, pp.
1152–1164, 2001. View at Publisher · View at Google Scholar · View at Scopus
2. M. K. Hu, “Visual pattern recognition by moment invariants,” IRE Transactions on Information Theory, vol. 8, no. 2, pp. 179–187, 1962.
3. J. Flusser and T. Suk, “Pattern recognition by affine moment invariants,” Pattern Recognition, vol. 26, no. 1, pp. 167–174, 1993. View at Publisher · View at Google Scholar
4. T. Suk and J. Flusser, “Affine moment invariants generated by graph method,” Pattern Recognition, vol. 44, no. 9, pp. 2047–2056, 2011. View at Publisher · View at Google Scholar · View at Scopus
5. M. R. Daliri and V. Torre, “Robust symbolic representation for shape recognition and retrieval,” Pattern Recognition, vol. 41, no. 5, pp. 1799–1815, 2008. View at Publisher · View at Google
Scholar · View at Scopus
6. P. L. E. Ekombo, N. Ennahnahi, M. Oumsis, and M. Meknassi, “Application of affine invariant fourier descriptor to shape based image retrieval,” International Journal of Computer Science and
Network Security, vol. 9, no. 7, pp. 240–247, 2009.
7. X. B. Gao, C. Deng, X. Li, and D. Tao, “Geometric distortion insensitive image watermarking in affine covariant regions,” IEEE Transactions on Systems, Man and Cybernetics C, vol. 40, no. 3,
Article ID 5378648, pp. 278–286, 2010. View at Publisher · View at Google Scholar · View at Scopus
8. M. I. Khalil and M. M. Bayoumi, “Affine invariants for object recognition using the wavelet transform,” Pattern Recognition Letters, vol. 23, no. 1–3, pp. 57–72, 2002. View at Publisher · View at
Google Scholar · View at Scopus
9. G. Liu, Z. Lin, and Y. Yu, “Radon representation-based feature descriptor for texture classification,” IEEE Transactions on Image Processing, vol. 18, no. 5, pp. 921–928, 2009. View at Publisher
· View at Google Scholar · View at Scopus
10. R. Matungka, Y. F. Zheng, and R. L. Ewing, “Image registration using adaptive polar transform,” IEEE Transactions on Image Processing, vol. 18, no. 10, pp. 2340–2354, 2009. View at Publisher ·
View at Google Scholar
11. Y. Wang and E. K. Teoh, “2D affine-invariant contour matching using B-Spline model,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, pp. 1853–1858, 2007. View at
Publisher · View at Google Scholar · View at Scopus
12. D. Zhang and G. Lu, “Review of shape representation and description techniques,” Pattern Recognition, vol. 37, no. 1, pp. 1–19, 2004. View at Publisher · View at Google Scholar · View at Scopus
13. E. Rahtu, A multiscale framework for affine invariant pattern recognition and registration, Ph.D. thesis, University of OULU, Oulu, Finland, 2007.
14. R. Veltkamp and M. Hagedoorn, “State-of the art in shape matching,” Tech. Rep. UU-CS-1999, 1999.
15. I. Weiss, “Geometric invariants and object recognition,” International Journal of Computer Vision, vol. 10, no. 3, pp. 207–231, 1993. View at Publisher · View at Google Scholar · View at Scopus
16. K. Arbter, W. E. Snyder, H. Burkhardt, and G. Hirzinger, “Application of affine-invariant Fourier descriptors to recognition of 3-D objects,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 12, no. 7, pp. 640–647, 1990. View at Publisher · View at Google Scholar · View at Scopus
17. W. S. Lin and C. H. Fang, “Synthesized affine invariant function for 2D shape recognition,” Pattern Recognition, vol. 40, no. 7, pp. 1921–1928, 2007. View at Publisher · View at Google Scholar ·
View at Scopus
18. I. El Rube, M. Ahmed, and M. Kamel, “Wavelet approximation-based affine invariant shape representation functions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 2,
pp. 323–327, 2006. View at Publisher · View at Google Scholar · View at Scopus
19. Q. M. Tieng and W. W. Boles, “Wavelet-based affine invariant representation: a tool for recognizing planar objects in 3D space,” IEEE Transactions on Pattern Analysis and Machine Intelligence,
vol. 19, no. 8, pp. 846–857, 1997. View at Scopus
20. G. Tzimiropoulos, N. Mitianoudis, and T. Stathaki, “Robust recognition of planar shapes under affine transforms using principal component analysis,” IEEE Signal Processing Letters, vol. 14, no.
10, pp. 723–726, 2007. View at Publisher · View at Google Scholar · View at Scopus
21. R. Alferez and Y. F. Wang, “Geometric and illumination invariants for object recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 6, pp. 505–536, 1999. View
at Publisher · View at Google Scholar · View at Scopus
22. D. Cyganski and R. F. Vaz, “A linear signal decomposition approach to affine invariant contour identification,” SPIE: Intelligent Robots and Computer Vision X, vol. 1607, pp. 98–109, 1991.
23. M. Yang, K. Kpalma, and J. Ronsin, “Affine invariance contour desciptor based on the equal area normalization,” IAENG International Journal of Applied Mathematics, vol. 36, no. 2, 2007. View at
Zentralblatt MATH
24. Y. W. Chen and C. L. Xu, “Rolling penetrate descriptor for shape-based image retrieval and object recognition,” Pattern Recognition Letters, vol. 30, no. 9, pp. 799–804, 2009. View at Publisher ·
View at Google Scholar · View at Scopus
25. M. Li, C. Cattani, and S. Y. Chen, “Viewing sea level by a one-dimensional random function with long memory,” Mathematical Problems in Engineering, vol. 2011, Article ID 654284, 13 pages, 2011.
View at Publisher · View at Google Scholar · View at Scopus
26. M. Li and W. Zhao, “Visiting power laws in cyber-physical networking systems,” Mathematical Problems in Engineering, vol. 2012, Article ID 302786, 13 pages, 2012. View at Publisher · View at
Google Scholar
27. W. S. Chen, P. C. Yuen, and X. Xie, “Kernel machine-based rank-lifting regularized discriminant analysis method for face recognition,” Neurocomputing, vol. 74, no. 17, pp. 2953–2960, 2011. View
at Publisher · View at Google Scholar · View at Scopus
28. Y. Y. Tang, Y. Tao, and E. C. M. Lam, “New method for feature extraction based on fractal behavior,” Pattern Recognition, vol. 35, no. 5, pp. 1071–1081, 2002. View at Publisher · View at Google
Scholar · View at Scopus
29. J. Yang, Z. Chen, W. S. Chen, and Y. Chen, “Robust affine invariant descriptors,” Mathematical Problems in Engineering, vol. 2011, Article ID 185303, 15 pages, 2011. View at Publisher · View at
Google Scholar · View at Scopus | {"url":"http://www.hindawi.com/journals/mpe/2012/690262/","timestamp":"2014-04-16T12:08:31Z","content_type":null,"content_length":"221136","record_id":"<urn:uuid:368483f0-fab5-4e97-9c12-8dc19ad4bf4c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00118-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematical Reasoning for Elementary Teachers presents the mathematical knowledge needed for teaching, with an emphasis on why future teachers are learning the content as well as when and how they
will use it in the classroom.
The Sixth Edition has been streamlined throughout to make it easier to focus on the important concepts. The authors continue to make the course relevant for future teachers by adding new features
such as questions connected to School Book Pages; enhancing hallmark features such as Responding to Students exercises; and making the text a better study tool through the redesigned Chapter
To see available supplements that will enliven your course with activities, classroom videos, and professional development for future teachers, visit www.pearsonhighered.com/teachingmath
CourseSmart textbooks do not include any media or print supplements that come packaged with the bound book.
Table of Contents
1. Thinking Critically
1.1 An Introduction to Problem Solving
1.2 Pólya's Problem-Solving Principles
1.3 More Problem-Solving Strategies
1.4 Algebra as a Problem-Solving Strategy
1.5 Additional Problem-Solving Strategies
1.6 Reasoning Mathematically
2. Sets and Whole Numbers
2.1 Sets and Operations on Sets
2.2 Sets, Counting, and the Whole Numbers
2.3 Addition and Subtraction of Whole Numbers
2.4 Multiplication and Division of Whole Numbers
3. Numeration and Computation
3.1 Numeration Systems Past and Present
3.2 Nondecimal Positional Systems
3.3 Algorithms for Adding and Subtracting Whole Numbers
3.4 Algorithms for Multiplication and Division of Whole Numbers
3.5 Mental Arithmetic and Estimation
4. Number Theory
4.1 Divisibility of Natural Numbers
4.2 Tests for Divisibility
4.3 Greatest Common Divisors and Least Common Multiples
5. Integers
5.1 Representations of Integers
5.2 Addition and Subtraction of Integers
5.3 Multiplication and Division of Integers
6. Fractions and Rational Numbers
6.1 The Basic Concepts of Fractions and Rational Numbers
6.2 Addition and Subtraction of Fractions
6.3 Multiplication and Division of Fractions
6.4 The Rational Number System
7. Decimals, Real Numbers, and Proportional Reasoning
7.1 Decimals and Real Numbers
7.2 Computations with Decimals
7.3 Proportional Reasoning
7.4 Percent
8. Algebraic Reasoning and Connections with Geometry
8.1 Algebraic Expressions, Functions, and Equations
8.2 Graphing Points, Lines, and Elementary Functions
8.3 Connections Between Algebra and Geometry
9. Geometric Figures
9.1 Figures in the Plane
9.2 Curves and Polygons in the Plane
9.3 Figures in Space
9.4 Networks
10. Measurement: Length, Area, and Volume
10.1 The Measurement Process
10.2 Area and Perimeter
10.3 The Pythagorean Theorem
10.4 Surface Area and Volume
11. Transformations, Symmetries, and Tilings
11.1 Rigid Motions and Similarity Transformations
11.2 Patterns and Symmetries
11.3 Tilings and Escher-like Designs
12. Congruence, Constructions, and Similarity
12.1 Congruent Triangles
12.2 Constructing Geometric Figures
12.3 Similar Triangles
13. Statistics: The Interpretation of Data
13.1 Organizing and Representing Data
13.2 Measuring the Center and Variation of Data
13.3 Statistical Inference
14. Probability
14.1 Experimental Probability
14.2 Principles of Counting
14.3 Permutations and Combinations
14.4 Theoretical Probability
A. Manipulatives in the Mathematics Classroom
B. Getting the Most out of Your Calculator
C. A Brief Guide to the Geometer's Sketchpad
D. Resources
Purchase Info ?
With CourseSmart eTextbooks and eResources, you save up to 60% off the price of new print textbooks, and can switch between studying online or offline to suit your needs.
Once you have purchased your eTextbooks and added them to your CourseSmart bookshelf, you can access them anytime, anywhere.
Buy Access
Mathematical Reasoning for Elementary School Teachers, CourseSmart eTextbook, 6th Edition
Format: Safari Book
$73.99 | ISBN-13: 978-0-321-71719-1 | {"url":"http://www.mypearsonstore.com/bookstore/mathematical-reasoning-for-elementary-school-teachers-0321717198","timestamp":"2014-04-21T14:50:45Z","content_type":null,"content_length":"17807","record_id":"<urn:uuid:a4a8276d-424b-4494-8d74-36c3ced5e9fe>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00275-ip-10-147-4-33.ec2.internal.warc.gz"} |
birational geometry
birational geometry
Let $X$ be an algebraic variety (in the modern approach an irreducible reduced scheme of finite type over a field, but we will work here with the usual maximal spectra to simplify the exposition). To
a variety one can associate a function field $k(X)$ whose elements are rational functions on $X$ (they are regular functions on the big-Zariski open subvarieties of $X$). A partially defined map from
a variety to another variety is rational if it is defined and regular on a Zariski open set. One can check easily that in fact rational maps compose. Rational maps which have an inverse on a Zariski
open subset are called birational maps or birational isomorphisms (see there for a more precise definition).
Birational geometry considers properties of varieties which depend only on the birational class, i.e., they are equivalent when they have isomorphic function fields. In fact, one can define the
appropriate category by starting with the category of varieties over a fixed field and then localizing at all birational equivalences.
Related articles: Mori program?
• Janos Kollár, Shigefumi Mori, Birational geometry of algebraic varieties, Cambridge Tracts in Mathematics 134
• wikipedia: birational geometry, rational variety
• Caucher Birkhar, Birationali geometry, online notes short version 35 pages, pdf; Lectures on birational geometry, 85 pages, arxiv/1210.2670
A recent new approach is in
• S. Cantat, S. Lamy, Normal subgroups of the Cremona group, Acta Mathematica 210 (2013), 31–94
Revised on August 1, 2013 16:42:18 by
Zoran Škoda | {"url":"http://ncatlab.org/nlab/show/birational+geometry","timestamp":"2014-04-20T15:53:46Z","content_type":null,"content_length":"14993","record_id":"<urn:uuid:2917b700-d261-4488-9fc3-92746d805ee8>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: September 2005 [00464]
[Date Index] [Thread Index] [Author Index]
Re: probability with simulation
• To: mathgroup at smc.vnet.net
• Subject: [mg60520] Re: [mg60496] probability with simulation
• From: "W. Craig Carter" <ccarter at mit.edu>
• Date: Mon, 19 Sep 2005 04:45:39 -0400 (EDT)
• References: <200509180515.BAA02275@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Without doing the entire set of problems, here is something that is
straightforward, not terribly efficient, but can probably be
modified to help you move forward:
<< DiscreteMath`Combinatorica` (*we want the RandomPermutation[]*)
(*The following function is a single deal, you get the first
five cards and they are sorted (i.e., Sort[Take[deal,5]]). The we
check if the cards make as sequence, returning a zero as soon as we
determine that your hand is not a sequence.
If you look through all the cards and it is a sequence, return a 1*)
ATrial[numcards_] :=
{deal = RandomPermutation[numcards], hand, check = 1},
hand = Sort[Take[deal, 5]];
While[check < 5 , If[deal[[check]] != deal[[++check]] - 1,
(*So we count the number of successes to get a probability
ProbEst[NCards_, NTrials_] := Sum[ATrial[NCards], {NTrials}]/NTrials
(*and then run a simulation over a large number of trials*)
ProbEst[10, 10000]
(*which returned a 1/5000 for me*)
Note, for rare events, convergence to the actuall probablility will
be slow and therefore you will want to get a sense of the
On Sun, 18 Sep 2005, Sara wrote:
> Date: Sun, 18 Sep 2005 01:15:51 -0400 (EDT)
> From: Sara <ma_sara177 at hotmail.com>
To: mathgroup at smc.vnet.net
> Subject: [mg60520] [mg60496] probability with simulation
> I have a problem with probability. I have to solve a problem av probability with simulation. i have done it with the combinatorial method but I dont know how to use Mathematica to solve the problem with simulation. can any one Help me please. If you want my result with combinatorial method or my try with simulation I can email you. the problems are:
> 1)Find the probability of getting exactly two 1:s when throwing 5 dices.
> Solve this problem with the combinatorial method and by simulation.
> We will play a very simple variant of poker. In the deck we have N cards which are
> called 1,2, ?, N. Each player is dealt 5 cards randomly. In this game there are 3 types
> of hands:
> ? Straight: The 5 cards form a sequence, e.g. 3,4,5,6,7.
> ? Parity: The 5 cars are all odd or all even.
> ? Standard: A hand that is not a Straight or a Parity.
> A Straight beats a Parity and a Parity beats a Standard. If we have two hands of the
> same type the hand with the highest top-card will win.
> Problem 1
> Assume that there are 3 players including you. What is the probability that you get a
> Straight? Solve this problem first with.
> The combinatorial method? (ihave don that)
> Simulation?
> Do it for N=15. Make a simulation for N=20.
> Problem 2
> Let us assume N= 30, that there are 4 players including you, that the cards are dealt
> and you have a Parity with 20 as top-card. What is the probability that you have a
> winning hand?
W. Craig Carter
Lord Foundation Professor of Materials Science and Engineering
MIT, Dept. of Materials Science and Engineering 13-5018 77 Massachusetts Ave, Cambridge, MA 02139-4307 USA
617-253-6048 ccarter at mit.edu http://pruffle.mit.edu/~ccarter http://pruffle.mit.edu/~ccarter/FAQS/ http://pruffle.mit.edu/~ccarter/I_do_not_use_microsoft.html
• References: | {"url":"http://forums.wolfram.com/mathgroup/archive/2005/Sep/msg00464.html","timestamp":"2014-04-18T18:52:28Z","content_type":null,"content_length":"37901","record_id":"<urn:uuid:8a7fb424-9563-4f29-986e-165d7ec80108>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Interpolation FP1 Formula
Re: Linear Interpolation FP1 Formula
It should be easy... but it isn't.
Re: Linear Interpolation FP1 Formula
Like I said before although she is brutal she at least dropped you quickly and cleanly. Some of them like to torture.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
Historically I have had one big crush every three years. One when I was 8, I was obviously very immature back then. Another immature one when I was 11. A slightly more serious one at 14, and then F
at 17. I don't know if this pattern is a coincidence but it would explain why I haven't felt strongly about anyone this year and have been 'playing the field' a bit more, so to speak.
Re: Linear Interpolation FP1 Formula
Crushes are disastrous. Save that for the girl that is worth that. One you know at least right now is on your side.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
I think I have just closed myself off to protect myself from being hurt again; I think that might be partially why I'm so hesitant to ask girls out, to be honest. I remember thinking "time to be a
man, ask F out, what's the worst that could happen?"... and regretted doing that for the whole year.
Re: Linear Interpolation FP1 Formula
You have to learn how to be tougher than that. Dating a girl and breaking up is a different thing.
Asking them out and getting a no is easy. Happens all the time. You will soon be immune to rejection, even laughing about it with your friends and to yourself.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
I think that will take a lot of trial and error. What hurt about F was that she didn't actually say 'no'. She just walked away from me and smiled which made me confused... I should have been smarter
than that and quickly forgot about her. But you only learn through experience I suppose.
Re: Linear Interpolation FP1 Formula
Listen, I have asked lots of girls and only 2 of them did not give me a straight no and wanted me to ask again. Just two out of many dozens. The rest were clear in their answer of yes. Any answer
other than "I would love to," is insufficient and a sure sign of danger.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
In the next couple of weeks I will be asking someone out. Maybe I should just ask them all out, kill 5 birds with one stone.
Re: Linear Interpolation FP1 Formula
Probably not the best idea, what if they all say yes?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
Then I will have to get my little black book out and find 5 suitable times and dates.
Re: Linear Interpolation FP1 Formula
Spread them out maybe a week apart is the best way.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
Re: Linear Interpolation FP1 Formula
That way it only takes a couple of weeks to find out where you stand and gives you enough time between girls to date one or more should she say yes.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
Okay. But I still cannot ask both Hannah and adriana out. The other will know very quickly. I must choose.
Re: Linear Interpolation FP1 Formula
Yes, there you choose. But it is not unheard of to ask both out at different times. If you do date one, you may find the other one suddenly is more interested in you. Girls love stealing boyfriends.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
I could wait between asking Hannah and adriana out. I think I'm more likely to get a yes from adriana than Hannah though. Although I think Hannah is slightly hotter, both are equally nice.
Re: Linear Interpolation FP1 Formula
Listen, nice is part of the attraction. A girl with just okay looks can suddenly be very attractive because of her personality. I learned that the hard way.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
Fingers crossed I experience that too.
Re: Linear Interpolation FP1 Formula
Sorry, got a phone call turned out they wanted me for some sort of poll.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
We get those sometimes too.
Re: Linear Interpolation FP1 Formula
Sometimes I wish I did not have a phone. This one was particularly pushy and she is asking me about topics I do not like discussing over the phone.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
I find that tends to happen less often in person. Although in some ways I feel the phone is more personal and there is more emphasis on what you say, since there aren't any other factors involved or
ways in which you can communicate.
Re: Linear Interpolation FP1 Formula
I was just venting off some steam from that imbecile. They spent zillions of dollars trying to defeat a candidate that they can not, now thay want more money to replenish their "War Chest" as she put
Any emails?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
I don't think I've got that type of phone call before...
Yes, one from PJ:
"Don't worry about the chemistry. It's only one question and even if you didn't get the right answer you'll probably get some method marks and at any rate there's nothing anyone can do about it till
after results day.
My last exam is geography on Mon PM. How about you, or have you finished now?" | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=248891","timestamp":"2014-04-16T04:13:40Z","content_type":null,"content_length":"35978","record_id":"<urn:uuid:67505cde-6411-4545-8a4f-6fed5ef1def4>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00431-ip-10-147-4-33.ec2.internal.warc.gz"} |
plz help!!! (physics)
August 2nd 2007, 10:26 AM #1
Jun 2007
plz help!!! (physics)
A 3.0-g bullet traveling at 350 m/s hits a tree and slow down uniformly to stop while penetrating a distance of 12 cm into the tree's trunk. what was the force exerted on the bullet in bringing
it to rest?
August 2nd 2007, 10:34 AM #2
Grand Panjandrum
Nov 2005 | {"url":"http://mathhelpforum.com/advanced-applied-math/17449-plz-help-physics.html","timestamp":"2014-04-18T04:25:04Z","content_type":null,"content_length":"33380","record_id":"<urn:uuid:3815a0b9-f1ca-4d43-b81f-6baffafe649f>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00492-ip-10-147-4-33.ec2.internal.warc.gz"} |
Elementary statistics in social research /
• Levin, Jack, 1941-
• Fox, James Alan
• Forde, David R. 1959-
• Social sciences -- Statistical methods.
• Statistics
• 0205636926 (pbk.) :
• 9780205636921 (pbk.)
• Previous ed.: Boston, Mass.: Pearson Allyn and Bacon, 2006.
• Includes index.
• IN THIS SECTION: 1.) BRIEF 2.) COMPREHENSIVE BRIEF TABLE OF CONTENTS: Chapter 1 Why the Social Researcher Uses Statistics PART ONE: Description Chapter 2 Organizing the Data Chapter 3 Measures of
Central Tendency Chapter 4 Measures of Variability PART TWO: From Description to Decision Making Chapter 5 Probability and the Normal Curve Chapter 6 Samples and Populations PART THREE: Decision
Making Chapter 7 Testing Differences between Means Chapter 8 Analysis of Variance Chapter 9 Nonparametric Tests of Significance PART FOUR: From Decision Making to Association Chapter 10
Correlation Chapter 11 Regression Analysis Chapter 12 Nonparametric Measures of Correlation PART FIVE: Applying Statistics Chapter 13 Choosing Statistical Procedures for Research Problems
APPENDICES Appendix A Introduction to SPSS Appendix B A Review of Some Fundamentals of Mathematics Appendix C Tables Appendix D List of Formulas Glossary Answers to Problems Index COMPREHENSIVE
TABLE OF CONTENTS: (All chapters end with Summary, Terms to Remember, and Questions & Problems) Chapter 1 Why the Social Researcher Uses Statistics - The Nature of Social Research - Why Test
Hypotheses? - The Stages of Social Research - Using Series of Numbers to Do Social Research - The Functions of Statistics PART ONE: Description Chapter 2 Organizing the Data - Frequency
Distributions of Nominal Data - Comparing Distributions - Proportions and Percentages - Ratios and Rates - Simple Frequency Distributions of Ordinal and Interval Data - Grouped Frequency
Distributions of Interval Data - Cumulative Distributions - Percentile Ranks - Dealing with Decimal Data - More on Class Limits - Flexible Class Intervals - Cross-Tabulations - Graphic
Presentations Chapter 3 Measures of Central Tendency - The Mode - The Median - The Mean - Taking One Step at a Time - Obtaining the Mode, Median, and Mean from a Simple Frequency Distribution -
Comparing the Mode, Median, and Mean Chapter 4 Measures of Variability - The Range - The Inter-Quartile Range - The Variance and Standard Deviation - The Raw-Score Formula for Variance and
Standard Deviation - Obtaining the Variance and Standard Deviation from a Simple Frequency Distribution - The Meaning of the Standard Deviation - Comparing Measures of Variability - Visualizing
Distributions PART TWO: From Description to Decision Making Chapter 5 Probability and the Normal Curve - Rules of Probability - Probability Distributions - The Normal Curve as a Probability
Distribution - Characteristics of the Normal Curve - The Model and the Reality of the Normal Curve - The Area under the Normal Curve - Standard Scores and the Normal Curve - Finding Probability
under the Normal Curve Chapter 6 Samples and Populations - Sampling Methods - Sampling Error - Sampling Distribution of Means - Standard Error of the Mean - Practical and Statistical: Weigh It
Again, Sam - Confidence Intervals - The t Distribution - Estimating Proportions PART THREE: Decision Making Chapter 7 Testing Differences between Means - The Null Hypothesis: No Difference
between Means - The Research Hypothesis: A Difference between Means - Sampling Distribution of Differences between Means - Testing Hypotheses with the Distribution of Differences between Means -
Levels of Significance - Standard Error of the Difference between Means - Testing the Difference between Means - Comparing Related Samples - Two Sample Test of Proportions - One-Tailed Tests -
Requirements for Testing the Difference between Means Chapter 8 Analysis of Variance - The Logic of Analysis of Variance - The Sum of Squares - Mean Square - The F Ratio - A Multiple Comparison
of Means - Two-Way Analysis of Variance - Requirements for Using the F Ratio Chapter 9 Nonparametric Tests of Significance - One-Way Chi-Square Test - Two-Way Chi-Square Test -
people who borrowed this, also borrowed:University of Huddersfield Library Catalogue | {"url":"http://library.hud.ac.uk/catlink/bib/648219","timestamp":"2014-04-18T03:45:42Z","content_type":null,"content_length":"4894","record_id":"<urn:uuid:d571bda7-ae2d-469f-91de-dfc294f195b8>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00467-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Second Bibliography on the Teaching of Probability and Statistics
Hardeo Sahai
University of Puerto Rico
Anwer Khurshid
University of Exeter, U.K.
and University of Karachi, Pakistan
Satish Chandra Misra
U.S. Food and Drug Administration
and The American University, Washington, DC
Journal of Statistics Education v.4, n.3 (1996)
Copyright (c) 1996 by Hardeo Sahai, Anwer Khurshid, and Satish Chandra Misra, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium
without express written consent from the authors and advance notification of the editor.
Bibliography for years up to and including 1987
L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z
Labovitz, S. (1968). "Criteria for Selecting a Significance Level: A Note on the Sacredness of .05," AmSoc, 3, 220-222.
Lafleur, M. S., Hinrichsen, P. R., Landry, P. C. and Moore, R. B. (1972). "The Poisson Distribution: An Experimental Approach to Teaching Statistics," PhyTchr, 10, 314-321.
Lamaison, H. J. (1986). "The Use of Micro-Computer Software Package in Teaching Economics and Statistics," IJMaEdST, 16, 67-69.
Lancaster, H. O. (1987). "A Bibliography of Statistical Bibliographies: A Nineteenth List," IntStRvw, 55, 221-227.
Landauer, E. G. (1981). "Random Numbers: Finding a Good, Low Cost Generator," IJMaEdST, 12, 1-8.
Landis, J. R. and Feinstein, A. R. (1973). "An Empirical Comparison of Random Numbers Acquired by Computer Generation and from the Random Tables," CompBioRes, 6, 322-326.
Langwitz, D. and Rodewald, B. (1987). "A Simple Characterization of the Gamma Function," AmMaMnth, 81, 534-536.
Lappan, G., Phillips, E., Winter, M. J. and Fitzgerald, W. M. (1987). "Area Models for Probability," MathTchr, 80, 217-220.
Latour, S. A. (1981) "Variance Explained: It Measures Neither Important nor Effect Size," DecisnSc, 12, 150.
Latzko, W. (1987). "Quality and Productivity Courses Under Development at the Fordham University, II," RptStBus2, pp. 150-155.
Lauckner, F. B. (1984). "The Status of Biometry in Agriculture Research in the CARICOM Region," Biomtrcs, 40, 841-848.
Layne, B. H. and Schuyler, W. H. (1981). "The Usefulness of Computational Examples in Statistics Courses," JGePsy, 104, 283-285.
Ledolter, J. (1987 a, b). "Quality and Productivity Courses Under Development at the University of Iowa; Statistical Methods for Quality and Productivity Improvement," RptStBus2; pp. 146-148,
Lee, M. P. and Soper, J. B. (1987). "Using Spreadsheets to Teach Statistics in Geography," JHEG, 11, 27-33.
Lehman, S., Starr, B. J. and Young, K. C. (1975). "Computer Aids in Teaching Statistics and Methodology," BehResMetIns, 7, 93-102.
Lennes, G. (1981). "The Influence of Statistical Packages Upon Teaching and Consulting in an Applied Statistics Department," BioPrax, 21, 29-41. (In French).
Lenz, H. J. (1978). "NPSTAT -- A Software Package for Teaching Nonparametric Statistics," PCompStat3, pp. 414-441.
Leonard, C. A. (1973). "Those Intriguing Binomial Coefficients Again," MathTchr, 66, 665-666.
Leuine, M. and Rolwing, R. H. (1983). "A Case for Mathematical Modeling," UMAPJ, 4, 415-420.
Levine, D. M. (1981). "Integrating Computer Packages with the Teaching of Applied Statistics," AmInstDecScP, 1, 99-100.
Lewis, K. (1979). "Seven, and Wiser, Virgins," TeachgSt, 1, 91-92.
Lewis, P. A. and Charny, M. (1987). "The Cardiff Health Survey: Teaching Survey Methodology by Participation," StatMed, 6, 869-874.
Liddell, F. D. K. (1958). "Lies, Damn Lies,...," IncStstcian, 8, 167-176.
Lilienfeld, A. M. (1979). "More Statistics in Medical Education (Letter)," NEngJMd, 300, 204-205.
Lilienfeld, A. M. (1982). "Bradford Hill's Influence on Epidemiology," StatMed, 1, 325-328.
Linder, F. E. (1952). "Important Elements in an Effective International Training Programme in Statistics," Estd, 36, 387-395.
Lindley, D. V. (1975). "Probability and Medical Diagnosis," JRCP, 9, 197-204.
Litwiller, B. H. (1987). "Keno Probabilities," SchSciMa, 87, 33-39.
Litwiller, B. H. and Duncan, D. R. (1977). "Poker Probabilities: A New Setting," MathTchr, 70, 766-771.
Litwiller, B. H. and Duncan, D. R. (1982). "Probabilities in Yahtzee," MathTchr, 75, 751-754.
Litwiller, B. H. and Duncan, D. R. (1987). "Maalox Lottery: A Novel Probability Problem," MathTchr, 80, 455-456.
Loase, J. F. (1976). "Extrasensory Probability," MathTchr, 69, 116-118.
Lobley, D. H. (1987). "Industry Quality and Statistics," Ststcian, 36, 479-485.
Lock, P. F. and Lock R. H. (1987). "In-Class Data Collection Experiments to Use in Teaching Statistics," PConfTeachSt, pp. 83-86.
Lock, R. H. (1986 a). "SAMPLER: A Computer Simulation in Statistics," CollMcmp, 4, 1.
Lock, R. H. (1986 b). "Computer Generation of Statistical Data," PICTeachSt2, pp. 212-216.
Lock, R. H. (1987 a,b). "The Role of Computers in Statistics Instruction; Writing Effective Simulation Programs," PConfTeachSt, pp. 87-95, 109-115.
Loosin, F., Lisen, M. and Lacante, M. (1985). "The Standard Deviation -- Some Drawbacks of an Intuitive Approach," TeachgSt, 7, 2-5.
Lopez, R. L. (1985). "Refining the Use of Statistics in Employment Studies," JColUniPerAss, 36, 29-34.
Lorenzo, F. O. (1987). "Teaching About Influence in Simple Regression," TeachgSoc, 15, 173-177.
Lowe, C. R. (1963). "On the Teaching of Statistics to Medical Students," Lancet, 1, 985-987.
Loyer, M. W. (1986). "Not-so Surprising Results," ASAProStEd, pp. 143-147.
Loyer, M. W. (1987). "Using Classroom Data to Illustrate Statistical Concepts," PConfTeachSt, pp. 43-72.
Loynes, R. M. (1986). "Creating and Strengthening a Statistics Teaching Group," PICTeachSt2, pp. 254-257.
Lucas, W. F. (1983). "What is Operations Research," UMAPJ, 4, 489-496.
Ludeman, R. (1982). "Strategies for Teaching Nursing Research: Experimental Learning in Data Analysis," WestJNurRes, 4, 124-126.
Lunn, A. D. (1982). "The Use of Animation, Visual and Audio-Visual Techniques in the Teaching of Statistical Principles," ASAProStEd, pp. 29-34.
Lunn, A. D. and Saunders, D. J. (1986). "The Making of Statistical Films," PICTeachSt2, pp. 461-464.
Luoma, M. (1986). "Targets and Tools for Teaching Statistics to Students of Business and Marketing," PICTeachSt2, pp. 431-434.
Lykken, D. T. (1968). "Statistical Significance in Psychological Research," PsychBu, 70, 151-159.
MacDonald-Ross, M. (1977). "How Numbers are Shown: A Review of Research on the Presentation of Quantitative Data in Texts," AudVisCommRvw, 25, 359-407.
Madai, L. (1978). "100 Years Since the Introduction of the Teaching of Statistics into the Curriculum of Medical Schools in Hungary," OrvHetil, 119, 1493-1494.
Madsen, R. (1986). "Provision and Utilization of Outside Sources of Assistance for Undergraduates," PICTeachSt2, pp. 446-451.
Maghsoodla, S. and Hool J. N. (1976). "On Response Surface Methodology and Its Computer-Assisted Teaching," AmerStat, 30, 140-144.
Mahalanobis, P. C. (1965). "Professor Ronald Ayilmer Fisher," Sankhya, 4, 265-272.
Mahalanobis, P. C. (1965). "Statistics as a Key Technology," AmerStat, 19, 43-46.
Mainland, D. (1950). "Statistics in Clinical Research: Some General Principles," AnlsNYAcaSc, 52, 922-930.
Mainland, D. (1982). "Medical Statistics -- Thinking Versus Arithmetic," JChroDis, 35, 413-417.
Mandel, F. S. (1987). "Statistical Computing as an Aide in Teaching Introductory Business Statistics," PConfTeachSt2, pp. 101-107.
Mangles, T. H. (1984). "Application of Micros and the Use of Computer Graphics in the Teaching of Statistical Principles," ProSt, 3, 24-27.
Maniatopoulous, K., Pappas, I. A., Protosigelos, K. and Vakalopsulos, S. A. (1982). "A Tool for Teaching Monte Carlo Simulation Without Really Meaning It," EurJOR, 11, 217-221.
Mantel, N. and Greenhouse, S. W. (1968). "What is the Continuity Correction?," AmerStat, 22, 27-30.
Marchi, E. and Miguel, O. (1974). "On the Structure of the Teaching---Learning Interactive Process," IntJourGameT, 3, 83-99.
Marinoni, A. (1984). "Italian Experience in Specialization and Postgraduate Training in Medical Statistics," PEuSympSt, pp. 37-43.
Market, W. D. (1985). "Statistical Significance: A Misunderstood Concept," SchSciMa, 85, 361-366.
Marks, R. G. (1986). "Teaching Statistical Students to Communicate as Consultants -- An Example in Biostatistics," PICTeachSt2, pp. 325-328.
Martin, M., Sanz, F. and Andrey, D. (1982). "Effect of the Introduction of Biostatistics into the Curriculum of the Study of Medicine. Analysis of a Decade in the Periodical 'Medicina Clinica
(Barcelona)'," MedClin, 79, 273-276.
Mather, K. (1951). "R. A. Fisher's Statistical Methods for Research Workers: An Appreciation," JASA, 46, 51-54.
Maxfield, M. W. (1981). "Sixteen Left Feet," TeachgSt, 3, 25-26.
McCallon, E. L. and Brown, J. D. (1971). "A Semantic Differential Instrument for Measuring Attitude Toward Mathematics," JExpEd, 39, 69-79.
McGill, R., Tukey, J. W., and Larsen, W. A. (1978). "Variations of Box Plots," AmerStat, 32, 12-16.
McKay, D. A. and Jensen, E. W. (1980). "Beyond Biology: A Curriculum in Methods of Analysis for Clinicians," JMedEd, 55, 521-528.
McKeage, R. (1976). "The Statistical Package (STATPAK)," PConfTeachSt1, pp. 38-43.
McKenzie, J. D., Jr. (1980). "Selecting a Statistical Package for Teaching Statistics," ASAProStEd, pp. 49-51.
McKenzie, J. D., Jr. (1987). "The Past, Present and Future of Textbooks in an Introductory Statistics Course," PConfTeachSt2, pp. 15-25.
McKenzie, J. D., Jr. and Kopcso, D. P. (1986). "A Study of the Interaction Between the Use of Statistical Software and the Data Analysis Process," PICTeachSt2, pp. 190-193.
McNown, R. F. (1984). "Econometric Laboratory," JEcEd, 15, 71-76.
McNulty, S. (1985). "A Numerical Analysis Approach to the Teaching of Statistical Computing," SAS.SUGI17, pp. 137-140.
Meiborg, A. (1979). "Introduction to the Calculus of Probability in Mathematics Teaching: A Contribution to Teaching at Secondary Level," ErzBer, 27, No. 1, 83-92.
Meierhoefer, B. (1980). "Calculation of Means with Proportions, Percentages and Per Thousands," EhrHau, 5, No. 4, 47-50.
Melnyk, M. and Myers, B. (1984). "Student Ratings of Instructors in Undergraduate Business Statistics," ASAProStEd, pp. 90-95.
Menger, K. (1952). "The Formative Years of Abraham Wald and His Work in Geometry," AnlsMathStat, 23, 13-20.
Menton, R. (1987). "Monte Carlo Studies for Understanding Efficiency," PConfTeachSt2, pp. 117-130.
Mevarech, Z. R. (1983). "A Deep Structure Model of Students' Statistical Misconceptions," EdStuMath, 14, 415-429.
Meyers, C. H. (1977). "Use of a Computerized Forecasting Model in the Teaching of Basic Business Statistics," CompUnCur, 8, 25-31.
Meyerson, M. D. (1983). "Random Numbers," UMAPJ, 4, 453-487.
Michels, E. (1970). "Use of the t-test," PhyThep, 50, 581-585.
Miller, D. A. (1966). "Significant and Highly Significant," Nature, 210, 1190-1192.
Miller, R. B. (1986). "Panel Discussion on Teaching Business Statistics: Overview of the Session; Teaching Models and Case Studies: Overview of the Session," PICTeachSt2, pp. 407-408.
Millikan, R. C. (1983). "The Magic of the Monte Carlo Method," Byte, 371-373.
Milton, J. S. and Corbet, J. J. (1978). "Teaching Probability: Who Killed the Cook," MathTchr, 71, 263-266.
Milton, J. S. and Corbet, J. J. (1982). "Conditional Probability and Medical Tests: An Exercise in Conditional Probability," UMAPJ, 3, 157-162.
Milton, J. S. and Corbet, J. J. (1983). "Odds, Wagers, and the Horses," UMAPJ, 4, 127-134.
Minassian, D. P. (1987). "The Arithmetic-Geometric Mean Inequality Revisited: Elementary Calculus and Negative Numbers," AmMaMnth, 94, 977-978.
Mizsei, I., Eller, J., Huhn, E. and Gyori, I. (1984). "On the Relationship of Biomathematics Teaching and Medical Research," PEuSympSt2, p. 314.
Montroll, E. W. (1984). "On the Vienna School of Statistical Thought," RanWaPhyBioSc, pp. 1-10.
Moore, C. N. (1973). "Computer Assigned Laboratory Problems for Teaching Business and Economic Statistics," CompUnCur, 4, 358-364.
Moore, C. N. (1974). "Computer Assisted Laboratory Experiments for Teaching Business and Economic Statistics," IJMaEdST, 5, 713-716.
Moore, D. S. (1987). "Should Mathematicians Teach Statistics," CollMaJ, 19, 3-7.
Moore, P.C. (1976). "The Teaching of Statistics at a Business School," Ststcian, 25 , 147-154.
Moortgat, L. R. (1986). "Improved and Expanded Teaching of Statistics in the Philippines," PICTeachSt2, pp. 227-235.
Morgan, L. A. (1987). "Correlation from Throws of the Dice," TeachgSt, 9, 56-59.
Moroney, M. J. (1958). "The Design of Experiments," IncStstcian, 8, 177-188.
Morris, C. (1977). "The Most Important Point in Tennis," OptStrSpo, pp. 131-140.
Morrison, S. J. (1981). "A Note on the Teaching of AOV," Ststcian, 30, 271-274.
Mortensen, P. S. (1986). "On A New Approach to an Introductory Course in Statistics for Business and Economics Including Some Experiences," PICTeachSt2, pp. 415-424.
Morton, J. E. (1952). "Standards of Statistical Conduct in Business and Government," AmerStat, 6, 6-7.
Moser, C. A. (1973). "Staffing in the Government Statistical Service," JRSS-A, 136, 531-554.
Moser, S. C. (1980). "Statistics and Public Policy," JRSS-A, 143, 1-28.
Moses, L. E. (1986). "Statistical Concepts Fundamental to Investigations," MedUseStat, pp. 3-26.
Moses, L. E. and Louis, T. A. (1986). "Statistical Consultation in Clinical Research: A Two-Way Street," MedUseStat, pp. 338-345.
Mosteller, F. (1962). "Understanding the Birthday Problem," MathTchr, 55, 322-325.
Mosteller, F. (1971). "Report of the Evaluation Committee on the University of Chicago Department of Statistics," AmerStat, 25, 17-24.
Mosteller, F. (1981). "Innovation and Evaluation," Science, 211, 881-886.
Mosteller, F. (1982). "The Role of Statistics in Medical Research," StatMedRsr, pp. 3-20.
Mosteller, F. (1986). "Writing about Numbers," MedUseStat, pp. 305-321.
Munick, D. H. and Allison, J. (1973). "On Uses and Misuses of Computer Programs in Statistics," CompUnCur, 4, 334-342.
Murakami, M. and Uchida, Y. (1985). "A Bibliography on the Teaching of Probability and Statistics Since 1970," JapJAppStat, 14, 143-150.
Murdock, M. (1983). "Math Anxiety and Statistical Readiness in Health Science Students," ASAProStEd, pp. 124-125.
Murphy, J. R. (1980 a). "Evaluation of a Biometrics Course Designed to Help Freshmen Medical Students Deal with Quantitative Problems in Clinical Laboratory Data," ASAProStEd, pp. 63-65.
Murphy, J. R. (1980 b). "Biometrics in Medical School Curriculum: Making the Necessary Relevant," JMedEd, 55, 27-33.
Murty, V. N. (1982). "On a Overlooked Property of the Median," MaCompEd, 16, No. 3, 196-197.
Murty, V. N. (1984). "On the Mean and Standard Deviation of a Random Sample," CollMaJ, 15, 60-63.
Mustonen, S. (1978). "Editorial Approach in Statistical Computing," PCompStat3, pp. 205-224.
Nagel, E. H. (1983). "General Data Analysis Routine," CollMcmp, 1, 177-178.
Neave, H. R. (1987). "Deming's 14 Points for Management," Ststcian, 36, 561-570.
Neffendorf, H. (1983). "Statistical Packages for Microcomputers: A Listing," AmerStat, 37, 83-86.
Nelson, L. S. (1975). "Use of the Range to Estimate Variability," JQualTek, 7, 41-48.
Nelson, L. S. (1976). "Nomograph of Sample Size for Estimating Standard Deviation," JQualTek, 8, 179-180.
Neter, J. and Bryant, E. C. (1987). "Quality and Production Improvement," RptStBus2, pp. 169-172.
Neville, S. (1977). "Celebrating the Birthday Problems," MathTchr, 70, 348-353.
Neyman, J. (1955). "Statistics -- Servants of All Sciences," Science, 122, 401-406.
Neyman, J. (1960). "Indeterminism in Science and New Demands on Statisticians," JASA, 55, 625-639.
Nix, A. B. J., Rowlands, R. J., Kemp, K. W., Wilson, D. W and Griffiths, K. (1987). "Internal Quality Control in Clinical Chemistry: A Teaching Review," StatMed, 6, 425-440.
Noble, B. (1976). "The Perennial Problem of Teaching Applied Mathematics," MathScntst, 1, 77-88.
Noelting, G. (1980 a). "The Development of Proportional Reasoning and the Ratio Concept, Part 1-- Differentiation of Stages," EdStuMa, 11, 217-253.
Noelting,G. (1980 b). "The Development of Proportional Reasoning and the Ratio Concept, Part 2 -- Problem Structure at Successive Stages," EdStuMa, 11, 331-363.
Noether, G. E. (1974). "The Nonparametric Approach in Elementary Statistics," MathTchr, 67, 123-126.
Noether, G. E. (1987). "Mental Random Numbers: Perceived and Real Randomness," TeachgSt, 9, 68-70.
Nouri, E. (1976). "The Development of a Statistics Program," PConfTeachSt1, pp. 125-130.
Nouri, E. (1986). "The Realities of Statistical Practice," ASAProStEd, pp. 188-190.
Nyambala, P. M. (1986). "The Use of Microcomputers in Commonwealth Africa," PICTeachSt2, pp. 197-203.
O'Brien, P. C. and Shampo, M. A. (1981). "Statistics for Clinicians: Introduction; 1. Descriptive Statistics; 2. Graphic Displays -- Histograms, Frequency Polygons and Cumulative Frequency Polygons;
3. Graphic Displays -- Scatter Diagrams; 4. Estimation from Samples; 5. One Sample of Paired Observations (Paired t Test); 6. Comparing Two Samples (The Two-Sample t Test); 7. Regression; 8.
Comparing Two Proportions. The Relative Deviate Test and Chi-Square Equivalent; 9. Evaluating a New Diagnostic Procedure; 10. Normal Value; 11. Survivorship Studies; 12. Sequential Methods;
Statistics for Clinicians, Epilogue," MayoClProc, 56; 45-46, 47-49, 126-128, 196-197, 274-276, 324-326, 393-394, 452-452, 513-515, 573-575, 639-640, 709-711, 753-754, 754-756.
O'Brien, P. C., Shampo, M. A. and Anderson, C. F. (1986). "Statistics in Nutrition: Part 1. Introduction, Descriptive Statistics and Graphic Displays; Part 2. Estimation from Samples and t-Tests;
Part 3. A: Comparing Two Proportions (The Relative Deviate Test and the Chi-Square Equivalent). B: Common Types of Studies; Part 4. Regression," NutInt, 2; 59-64, 119-124, 188-190, 331-333.
O'Brien, P. C., Shampo, M. A. and Anderson, C. F. (1987). "Statistics in Nutrition, Part 5. Survivorship Studies; Part 6. Normal Values, Evaluating a New Procedure, Sequential Methods and
Conclusion," NutInt, 3; 61-64, 130-135.
O'Brien, P. C., Shampo, M. A. and Bachman, J. W. (1984). "Statistics for Family Physicians," Family Practice, Third Edition, Chapter 25. (Ed. R. E. Rakel). W. B. Saunders Co., Philadelphia.
O'Brien, P. C., Shampo, M. A. and Robertson, J. S. (1983). "Statistics for Nuclear Medicine: Part 1. Introduction, Descriptive Statistics and Graphic Displays; Part 2. Estimation from Samples and
t-Tests; Part 3. A: Comparing Two Populations (The Relative Deviate Test and Chi-Square Equivalent). B: Counting Data; Part 4. Regression; Part 5. Survivorship Studies; Part 6. Normal Values,
Evaluating a New Diagnostic Procedure, Sequential Methods and Conclusion," JNucMed, 24; 83-88, 165-167, 269-272, 363,365, 444-446, 536-541.
O'Brien, R. G. (1986 a, b). "Using the SAS System to Perform Power Analyses for Log-Liner Models; Power Analysis for Linear Models," SAS.SUGI11, pp. 778-784, 915-922.
O'Brien, R. G. (1986 c). "Teaching Power Analysis Using Regular Statistical Software," PICTeachSt2, pp. 204-211.
O'Fallon, J. R., Dubey, S. D., Salsburg, D. S., Edmonson, J. H., Soffer, A., and Colton, T. (1978). "Should There Be Statistical Guidelines for Medical Research Papers?" Biomtrcs, 34, 687-695.
O'Muircheartaigh, I. G. (1986). "Teaching Data Analysis Using Interactive APL-Based Graphics Packages," PICTeachSt2, pp. 168-174.
Oberg, R. J. (1979). "An Empirical Approach to Teaching General Statistics by the Computer," ASAProStEd, pp. 54-56.
Oberg, R. J. and Preskenis, K. (1979). "Teaching Statistics and Using Computers," PNECC1979, pp. 297-300.
Okuno, T. (1986). "Statistical Quality Control in Japanese Industry and Education Programs for Engineers," PICTeachSt2, pp. 385-394.
Olds, E. G. and Knowler, L. A. (1949). "Teaching Statistical Quality Control for Town and Gown," JASA, 44, 213.
Olsen, C. (1986). "The Nightingale Programs," PICTeachSt2, pp. 180-183.
Onions, C. R. (1987). "The Longest Run," MathSch, 16, 12-13.
Ottaviani, M. G. (1987). "Some Notes on a History of Teaching Statistics in Italy," Stata, 47, 619-648. (In Italian).
Ovedovitz, A. C. (1986). "The Use of Real-Life Data to Teach Small and Large Sample Techniques," ASAProStEd, pp. 133-134.
Paksy, A. and Vargha, P. (1984). "The Role of Biometry in Postgraduate Training of Physicians in Hungary," PEuSympSt, pp. 84-88.
Panzer, D., Schneider, W., and Fichtner, N. (1984). "The System of Medical Statistics in the GDR and the Tasks of Postgraduate Training of Physicians and other Personnel," PEuSympSt, pp. 211-219.
Parisan, A., Kirmani, S. N. and Alvandi, S. M. (1986). "On Estimation of the S. D. When the Mean is Known," PICTeachSt2, pp. 180-183.
Parker, J. B. (1986). "A Story About Statistics," MathScntst, 11, 73.
Parkhurst, A. M. (1982). "STATAN, A Query Language to Help Students Practice Statistics," PNECC1982, pp. 302-305.
Pathak, S. (1978). "Difference Between Best Critical Region of Size Alpha And BCR of Significance Level Alpha from a Teaching Standpoint," GujStRvw, 5, No. 2, 57-62.
Pavlick, F. M. (1975). "The Attitudinal Effect of Using the Computer in an Elementary Statistics Course," IJMaEdST, 6, 353-360.
Pearson, E. S. (1936). "Karl Pearson, An Appreciation of Some Aspects of His Life and Work, Part I: 1857-1906," Bioma, 28, 193-257.
Pearson, E. S. (1938). "Karl Pearson, An Appreciation of Some Aspects of His Life and Work, Part II: 1906-1936," Bioma, 29, 161-248.
Pearson, E. S., Allen, R. G. D., Brookes, B. C., Campion, H. et. al. (1952). "The Teaching of Statistics in Schools (Report of the Council)," JRSS-A, 115, 126-137.
Pehl, K. (1982 a). "Significance Test with Composite Alternatives," LerUnt, No. 1, 23-28. (In German).
Pehl, K. (1982 b). "Aspect of a Problem-Oriented Introduction to Statistics in Adult Evening Classes," LerUnt, No. 2, 1-10. (In German).
Pendlebury, C. (1987). "Cricket Table Positions," TeachgSt, 9, 93-93.
Pereira-Mendoza, L. (1977). "Graphing and Prediction in the Elementary Schools," ArithTchr, 24, 112-113.
Pereira-Mendoza, L. (1986 a, b). "Chairman's Report: Teaching Statistics to Children Aged 6-11; A Comparison of the Statistics Curriculum for Children aged 5-11 in Britain, Canada and the U.S.A.,"
PICTeachSt2; pp. 39-39, 40-45.
Perriman, W. S. (1986). "Putting Statistical Ideas Across to Industry," PICTeachSt2, pp. 395-400.
Pesarin, F. (1983). "On the First International Conference on Statistics Teaching," InsMaScInt, 6, No. 1, 90-97.
Peskun, P. H. (1987). "Constructing Symmetric Tests," TeachgSt, 9, 19-22.
Phillippe, P. (1974). "Biometry or Statistics in Medical Education?," CanJPubH, 65, 185-187.
Piazza, T. (1986). "Teaching Statistics Through Data Analysis," PICTeachSt2, pp. 275-279.
Piele, D. T. (1981). "How to Solve It -- With the Computer," CreaComp, 7, 142-151.
Pike, D. J. (1976). "Statistical Games as Teaching Aids," Ststcian, 25, 109-115.
Pirie, W. R. (1986). "The Changing State of Undergraduate Statistics Education: Guidelines for Majors," ASAProStEd, pp. 43-45.
Pitman, E. J. G. (1957). "Statistics and Science," JASA, 52, 322-330.
Pocock, S. J., Hughes, M. D. and Lee, R. J. (1987). "Statistical Problems in the Reporting of Clinical Trials: A Survey of Three Medical Journals," NEngJMd, 317, 426-432.
Poliakov, I. V. and Sokolova, N. S. (1967). "Some Problems of Teaching the Elements of Mathematical Statistics in the Medical School," ZdrfRF, 11, No. 11, 30-33.
Poljakow, L. E. (1984). "State and Perspectives of Improvement of General Training and Specialization of Physicians in the Field of Mathematical and Medical Statistics," PEuSympSt, pp. 154.
Pollak, H. O. (1986). "Pure and Applied Mathematics: From an Industrial Perspective," UMAPJ, 7, 263-272.
Pollard, G. H. (1983). "An Analysis of Classical and Tie-Breaker Tennis," AstrlJSt, 25, 496-505.
Pomeranz, J. B. (1983). "Exploring Data from the 1980 New York City Marathon," UMAPJ, 4, 187-233.
Poole, C. (1987). "Beyond the Confidence Interval," AmJPubHealth, 77, 195-199.
Postelnicu, T. (1984). "Some Aspects Concerning Graduate Degree Programme Requirements in Biostatistics," PEuSympSt, p. 316.
Posten, H. O. (1981). "Discussion of 'Audio-Visual and Computer assisted Teaching of Statistics' by Shvyrkov and Strout," ASAProStEd, pp. 140-142.
Posten, H. O. (1986). "The Use of a Transparency Master Book for Teaching Statistics," PICTeachSt2, pp. 329-331.
Preece, D. A. (1982). "T is for Trouble (and Textbooks): A Critique of Some Examples of the Paired Samples t-Test," Ststcian, 31, 169-195.
Preece, D. A. (1984). "Biometry in the Third World: Science not Ritual," Biomtrcs, 40, 519-523.
Preece, D. A. (1986). "Illustrative Examples: Illustrative of What?," Ststcian, 35, 33-44.
Preece, D. A. (1987). "Good Statistical Practice," Ststcian, 36, 327-408.
Preece, P. F. W. (1977). "A Note in Defense of Inferential Statistics," EduRes, 20, 54-55.
Prescott, P. (1987). "Chi-square Tests, Critical Ellipses and Computer Graphics," TeachgSt, 9, 36-41.
Price, J. (1984). "Random Numbers and Buffons' Needle," Parab, 20, 2-9.
Priddis, M. J. (1978). "An Upper Bound for the Standard Deviation," TeachgSt, 9, 78-79.
Pridmore, W. A. (1985). "The Market," WhiStatEd, pp. 2-11.
Pukkila, T. (1978). "The Utilization of the Computer in the Teaching of Statistics at Elementary Level," PCompStat3, pp. 422-430.
Pukkila, T. and Putanen, S. (1980). "Computer Usage on First Statistics Courses at the University of Tampere," PCompStat5, pp. 145-151.
Pukkila, T. and Putanen, S. (1981). "The Computer as an Aid in Teaching Basic Statistics Courses at University Level," CompEd, pp. 149-155.
Pukkila, T. and Putanen, S. (1986). "The Role of the Computer in the Teaching of Statistics," PICTeachSt2, pp. 163-167.
Pulley, L. B. and Dolbear, F. T. (1982). "Computer Simulation Exercises for an Economic Statistics Course," ASAProStEd, pp. 142-146.
Puranen, J. (1984). "DISTEP-Dynamic Interactive Statistical Teaching Package," PCompStat6, pp. 105-110.
Puritz, C. W. (1982). "Deriving Regression Lines Without Calculus," StochSchule, 2, No. 2, 29-31.
Räde, L. (1975). "The Teaching of Probability and Statistics at the School Level -- An International Survey," StatSchLevel, pp. 153-166.
Räde, L. (1985). "Statistics," StuMathEd, pp. 97-107.
Raffetto, A. M. and Williams, C. R. (1974). "The Computer and a Self-paced Statistics Course," CompUnCur, 5, 315-318.
Ramalhoto, M. F. (1985). "Probability and Statistics for Non-mathematicians in Portugal," ApplProbNews, Spring, 9, 1.
Ramalhoto, M. F. (1986). "The Teaching of Statistics in Portugal --Problems and Some Suggestions Towards Its Solution," PICTeachSt2, pp. 441-445.
Ramsey, P. (1987). "Q and P Courses Under Development at the Fordham University, I," RptStBus2, pp. 148-150.
Randall, J. H. (1976). "Application of Recent Adavances in Pocket Calculator Technology in Statistics," CommStA, 5, 969-975.
Ranft, U., Gille, P., Schiemann, M., and Torok, M. (1984). "A 5x2 Hours Introductory Intensive Course for Computer Aided Statistical Data Evaluation -- Purpose and Experience," PEuSympSt, pp.
Rao, C. R. (1974). "Teaching of Statistics at the Secondary Level: An Interdisciplinary Approach," StatSchLevel, pp. 121-140,
Rao, C. R. (1984). "Prasanta Chandra Mahalanobis (1893-1972)," BullMAI, 16, 6-19.
Rao, N. S. and Marwah, S. M. (1971). "Biostatistics Section in a Medical College," JIndMedA, 57, 308-310.
Rappaport, K. and Stafford, T. (1978). "The Computer in an Introductory Course," CompUnCur, 9, 314-323.
Rasch, D. (1984). "The Use of Simulation in Teaching Biometry," PEuSympSt, pp. 140.
Reed, N. J. (1983) "One Musician's Use of Combinations and Permutations," UMAPJ, 4, 389-401.
Reep, C. and Benjamin, B. (1968). "Skill and Chance in Association Football," JRSS-A, 131, 581-585.
Reep, C., Pollard, R and Benjamin, B. (1971). "Skill in Chance is Ball Games," JRSS-A, 134, 623-629.
Reid, R. D. (1950). "Statistics in Clinical Research," AnNYAcaSc, 52, 931-934.
Reiland, T. (1986). "Statistics for Computer Science Undergraduates," ASAProStEd, pp. 46-52.
Reynolds, P. (1979). "Statistics at County Hall," TeachgSt, 1, 50-51.
Ridenhour, J. R. and Woodward, E. (1984). "The Probability of Winning in McDonald's Spur Raiders Contest," MathTchr, 77, 124-128.
Riedwyl, H. and Klaey, M. (1979). "STATEX: A Program to Generate Exercises in Applied Mathematics and Statistics," EDVMedBio, 10, 27-30.
Riegelman, R. K. (1986). "Effects of Teaching First-Year Medical Students Skills to Read Medical Literature," JMedEd, 61, 454-460.
Ringer, L. J. and Tubb, G. W. (1978). "Current Use of Computers in the Teaching of Statistics," PCompScSt10, pp. 437-441.
Roaf, D. (1982). "Coupon Collecting by Computer," MaSpctrm, 15, 82-85.
Robbins, H. (1984). "Some Breakthroughs in Statistical Methodology," CollMaJ, 15, 25-29.
Roberts, H. V. (1986). "Data Analysis for Managers," PICTeachSt2, pp. 410-414.
Roberts, H. V. (1987 a). "Data Analysis for Managers," AmerStat, 41, 270-278.
Roberts, H. V. (1987 b, c, d). "Statistical Methods for Quality and Productivity Improvement: Designing a Course: A Report; What Business School Students Should Know About Quality and Productivity;
Quality and Productivity Courses Under Development at the University of Chicago," RptStBus2; pp. 122-127, 142-146, 166-168.
Roberts, N. (1981). "Introducing Computer Simulation into the High School," MathTchr, 74, 647-652.
Roberts, N. (1983). "Testing the World with Simulations," ClaComNws, 3, 28-31.
Robertson, J. B. (1985). "Independence and Fair Coin Tossing," MathScntst, 10, 109-117.
Robinson, H., Burke, R., and Stahl, S. M. (1976). "Self-Instructional Teaching of Biostatistics for Medical Students," JCommHealth, 1, 249-255.
Rolfe, T. J. (1982). "Least Squares Fitting of Polynomials and Exponential, with Programming Examples," MaCompEd, 16, 122-132.
Romeu, J. L. (1986). "Teaching Engineering Statistics with Simulation: A Classroom Experience," Ststcian, 35, 441-447.
Rosen, M. and Hoffman, B. (1978). "Editorial: Statistics, Biomedical Scientists and Circulation Research," CirRes, 42, 739.
Rosenfeld, I. B. (1987). "Statistical Applications in Marketing," RptStBus2, pp. 31-32.
Rowe, K. E. (1975). "SIPS as a Part of Statistical Computing in Teaching and Data Analysis," PCompScSt8, pp. 56-60.
Royal Statistical Society (1974). "Teaching of Statistics in Colleges," JRSS-A, 137, 412-427.
Royal Statistical Society (1975). "Are Statistical Journals Becoming Too Theoretical? -- Summary Report of a Discussion," JRSS-B, 138, 499-503.
Royal Statistical Society (1986). "Report of a Joint Working Party of the Royal Statistical Society and the Institute of Statisticians: Supply and Demand," JRSS-A, 149, 122-14.
Rudd, E. (1987). "The Educational Qualifications and Social Class of the Parents of Undergraduates Entering British Universities in 1984," JRSS-A, 150, 346-372.
Rundfeldt, H., Aukes, G. and Janke-Grimm (1984). "Experiences with Computer Generated Exercises in Teaching Biometrics," PEuSympSt, pp. 111-116.
Runnenburg, J. T. (1978). "Mean, Median, Mode: I," StatNeer, 32, 73-79.
Ryan, T. A., Jr., Joiner, B. L. and Ryan, B. F. (1975). "Teaching Statistics with Minitab II," CompUnCur, 6, 195-204.
Sacco, W., Sloyer, C., Crouse, R. and Copes, W. (1984). "Application of Mathematics in Medical Science for Motivated Secondary School Students," SchSciMa, 84, 27-32.
Sadler, C. (1986). "A Primer for the Novice: Basic Computer Concepts," OrthClAm, 17, 515-517.
Salop, S. C. (1987). "Evaluating Uncertain Evidence with Sir Thomas Bayes: A Note for Teacher," JEcoPers, 1, 155-160.
Sanchez-Crespo, J. L. (1986). "Statistics in Government and Implications for Teaching," PICTeachSt2, pp. 337-338.
Sandusky, A. (1971). "Instruction in Statistics: A Report on the Computer Laboratory for Analysis of Data in Psychology," CompUnCur, 2, 429-435.
Sargent, T. J. (1977). "Observations on Improper Methods of Simulating and Teaching Friedman's Time Series Consumption Model," IntEcoRvw, 18, 445-462.
Saunders, D. J. (1986). "Computer Graphics and Animations for Teaching Probability and Statistics," IJMaEdST, 17, 561-568.
Saville, D. J. and Wood, G. R. (1986). "A Method for Teaching Statistics Using N-dimensional Geometry," AmerStat, 40, 204-214; PacStCong, pp. 480-481.
Saville, D. J. and Wood, G. R. (1987). Reply to "Comment on 'A Method for Teaching Statistics Using N-dimensional Geometry (Vol. 40, pp. 205-214)'; and Correction," AmerStat, 41; 242-243, 248.
Scalzo, F. and Hughes, R. (1977). "Integrating Prepackaged Computer Programs into an Undergraduate Introductory Statistics Course," CompUnCur, 8, 331-338.
Schach, S. (1987). "Effect of the EDP -- Developments on Research and Teaching in Statistics in the Federal Republic of Germany," StatSoftNews, 13, 62-65.
Scheaffer, R. L. and Burrill, G. (1986). "Statistics and Probability in the School Mathematics Curriculum: A Review of the ASA-NCTM Quantitative Literacy Project," PICTeachSt2, pp. 141-144.
Schechtman, K. B. and Spitznagel, E. L. (1987). "Teaching Biostatistics With An Emphasis on Reading the Medical Literature," ASAProStEd, pp. 111-115.
Schell, E. D. (1960). "Samuel Pepys, Isaac Newton, and Probability," AmerStat, 14, 27-30.
Scherkenbach, W. W. (1987). "Statistical Applications in Management," RptStBus2, pp. 36-37.
Schmid, C. F. (1956). "What for Pictorial Charts," Estd, 14, 12-25.
Schneider, B. (1981). "We Calculate the Mean Value," EhrGrun, 8, No. 1, 25-26.
Schneider, H. and Stein, G. (1980). "The Chi-square Test in Ordinary-Level Courses of Stochastics in the Upper Secondary," DidakMath, 8, No. 3, 200-212. (In German).
Schoenwald, H. G. (1982). "Geometric Interpretation of Correlation Coefficients," PraxMath, 24, No. 7, 202-203.
Schoenwald, H. G. (1983). "Basic Ideas In Statistics and Montessori Method," Prim, 11, No. 6, 205-211. (In German).
Schoffa, G. (1984). "Necessary Changes in Statistics Courses for Biologists Due to the Latest Advances in Computer Technology and Electronic Data Processing," PEuSympSt, pp. 53-57.
Schor, S. S. (1967). "Statistical Reviewing Program for Medical Manuscripts," AmerStat, 21, 28-31.
Schor, S. S. (1969). "How to Evaluate Medical Research Reports," HospPhy, 5, 95-109.
Schrage, G. (1987). "Limitations of the Microcomputers in the Classroom," SchSciMa, 87, 683-691.
Schroeder, T. L. (1986). "Elementary Schools Students' Use of Strategy in Playing Microcomputer Probability Games," PICTeachSt2, pp. 51-56.
Schucany, W. R. (1972). "Some Remarks on Educating Problem Solvers," PCompScSt, pp. 33-36.
Schupp, H. (1986). "Appropriate Teaching and Learning of Stochastics in the Middle Grades," PICTeachSt2, pp. 265-269.
Schwarze, J. (1981). "Applying Mean Values Properly," PraxMath, 23, No. 10, 296-307.
Selkirk, K. (1973). "Random Models in the Classroon; 1: An Example," MathSch, 2, 5-6.
Selkirk, K. (1974 a, b). "Random Models in the Classroon; 2: Random Numbers; 3: Some Ideas to Try," MathSch, 3, 5-7; 15-17.
Selkirk, K. (1983 a, b, c, d). "Simulation Exercises for the Classroom; 1: The Bus Company Game; 2: Leaving the Motorway; 3: Arriving at a Camp-Site; 4: The Potato Beetle," MathSch, 12, 2-4; 2-4;
20-22; 10-13.
Selvin, H. C. (1957). "A Critique of Tests of Significance in Survey Research," AmSocRvw, 22, 519-527.
Service, J. (1971). "Inclusion of Explanatory Material in Computer Programs for Statistical Analysis," CompUnCur, 2, 444-445.
Shahani, A. K., Parsons, P. S. and Meacok, S. E. (1979). "Animals in a Pond -- Biology and Statistics in Mutual Support," StochSchule, 1, No. 2, 23-30.
Shanken, J. (1987). "Some Statistical Problems in Testing Asset Pricing Models," RptStBus2, pp. 50-52.
Shaughnessy, J. M. (1977). "Misconceptions of Probability: An Experiment with a Small-Group, Activity-Based Model Building Approach to Introductory Probability," EdStuMath, 8, 295-316.
Sheese, R. L. (1976). "A Multilevel Approach to Teaching Statistics," PWshpTeachSt, pp. 125-130.
Sher, L. A. (1986). "Topics in Teaching Business Statistics: Overview of the Session," PICTeachSt2, pp. 409-409.
Sheynin, O. B. (1985). "On the History of the Statistical Method in Physics," ArchHistExSc, 33, 351-382.
Shiffler, R. E. (1987). "Bounds for the Maximum Z-Score," TeachgSt, 9, 80-81.
Shigan, E. (1984). "Principles and Methods of Active Postgraduate Training in the Field of Medical Statistics," PEuSympSt, pp. 44.
Shoemaker, H. H., Jr., Bryson, K. R., Brown, P. and Solomon, L. (1986). "Academic Versus Applied Training in Statistics: Approaches Used by the U.S. Bureau of the Census," PICTeachSt2, pp. 249-253.
Shore, S. D. (1986). "Industrial Training in Quality Improvement -- Part III: A First Course, A Group Approach to Problem Solving," PICTeachSt2, pp. 365-369.
Shulte, A. P. (1977). "A Case for Statistics," ArithTchr, 26, 24.
Shulte, A. P. (1981). "The Birth and Development of a Yearbook on the Teaching of Statistics and Probability," TeachgSt, 3, 14-16.
Simon, H. A. (1980). "The Behavioral and Social Sciences," Science, 209, 72-78.
Singh, C. (1976). "Teaching Statistics: Traditional Versus the Keller Plan," PWshpTeachSt, pp. 104-110.
Skellam, G. J. (1964). "Models, Inference and Strategy," Biomtrcs, 25, 457-475.
Skipper, J. K., Guenther, A. L. and Nass, G. (1967). "The Sacredness of .05: A Note Concerning the Uses of Significance in Social Science," AmSoc, 2, 16-18.
Smith, J. T. (1986). "Teaching Statistical Consulting Using Videotape," ASAProStEd, pp. 159-162.
Smith, M. R. and Larsen, A. M. (1983). "Blood Pressure and Introductory Statistics Instruction," ASAProStEd, p. 172.
Snedecor, G. W. (1950). "The Statistical Part of the Scientific Method," AnlsNYAcaSc, 52, 792-799.
Snee, R. D. (1974). "Computation and Use of Expected Mean Squares in Analysis of Variance," JQualTek, 6, 128-137.
Sobol, M. G. (1981). "Workshops on How to Teach Cases in a Statistics Course: A Demonstration and Discussion," AmInstDecScP, 1, 106.
Southward, G. M., Urquhart, N. S. and Ortiz, M. (1983). "Computer Enriched Instruction of Intermediate Level Statistical Methods," ASAProStEd, pp. 6-9.
Sowey, E. R. (1986). "More About Less or Less About More? -- Depth Versus Breadth in the Statistical Education of Business and Economics Students," PICTeachSt2, pp. 435-440.
Speed, T. (1986). "Questions, Answers and Statistics," PICTeachSt2, pp. 18-28.
Spencer, N. (1977). "Celebrating the Birthday Problem," MathTchr, 70, 348-353.
Spitznagel, E. L., Jr. (1971). "The Uses of Computing in a Modernized Probability and Statistics Course," CompUnCur, 2, 217-220.
Spitznagel, E. L., Jr. (1973). "Use of a Questionnaire-Oriented Research Project in Teaching Undergraduate Statistics," CompUnCur, 4, 352-357.
Spurrier, J. D. (1984). "Training of Statistical Consultants at the University of South Carolina," ASAProStEd, pp. 26-28.
Stahl, S. M. and Hennes, J. D. (1973). "Biostatistics: An Experiment with Self-Learning in the Health Sciences," JMedEd, 48, 271-275.
Stahl, S. M., Hennes, J. D. and Fleischli, G. (1975). "Progress on Self-learning in Biostatistics," JMedEd, 50, 294-296.
Steinbring, H. (1986). "The Interaction Between Teaching Practice and Theoretical Conceptions -- A Cooperative Model of In-Service Training in Statistics for Mathematics Teachers (Grades 5-10),"
PICTeachSt2, pp. 150-155.
Stephan, F. F. (1967). "The Quality of Statistical Information and Statistical Inference in a Rapidly Changing World," JASA, 62, 1-9.
Stephenson, H. (1985). "Rank 'Em," TeachgSt, 7, 87-88.
Stergion, A. P. (1982). "Industry's Expectations of BS Statisticians," ASAProStEd, pp. 67-69.
Sterling, T. D. (1973). "The Statistician Vis-a-vis Issues of Public Health," AmerStat, 27, 212-217.
Sterling, W. D. (1987). "The Designs of a Microcomputer Program for Teaching Introductory Statistics," NZlndStat, 22, 46-54.
Sterrett, A. and Karian, Z. A. (1978). "Interactive Computing in Teaching Statistical Concepts," CompUnCur, 9, 324-333.
Stockburger, D. W. (1982). "Evaluation of Three Simulation Exercises in an Introductory Statistics Course," ContEdPsy, 7, 365-370.
Stoodley, K. D. C. (1980). "Statistical Inference in the Social Sciences," EduRes, 23, 51-56.
Stoyanov, J. (1986). "The Use of Counter Examples in Learning Probability and Statistics," PICTeachSt2, pp. 280-286.
Strik, H. K. (1978). "Mathematical Statistics: Proposal for a One-Semester Fundamental Course," LerUnt, No. 4, 28-37. (In German).
Strohmaier, L. (1980). "Should the Students of Secondary Level Learn that Chemical Processes That are of a Statistical Nature," Chemieunterr, 11, No. 2, 5-21. (In German).
Suich, R. (1983). "Areas Under Regression Curves," StochSchule, 3, No. 2, 38-45.
Sundquist, C. and Enkvist, K. (1987). "The Use of Lotus 1-2-3 in Statistics," CompBioMed, 17, 395-399.
Swanson, J. M. and Riederer, S. (1973). "Using OMNITAB to Teach Statistics," CompUnCur, 4, 128-134.
Swanson, J. M., Riederer, S. A., Reynolds, E. and Harris, G. S. (1975). "IMP and SHRIMP: Small, Interactive Mimics of OMNITAB Designed for Teaching Applications," PCompScSt8, p. 84.
Sweeney, K. J. (1985). "Advances in Statistical Software for Micros," CollMcmp, 3, 55-58.
Swift, J. (1983). "The Vitality of Statistics," PIConfMaEd4, p. 198.
Sykes, A. W. (1982). "An Alternative Approach to the Mean," StochSchule, 2, No. 2, 22-28.
Szalajka, W. S. (1980). "Statistics for Computer Scientists," SIGCSEBull, 12, No. 4, 27-32.
Szekely, G. J. (1986). "Teaching Statistics Through Paradoxes," PICTeachSt2, pp. 322-324.
Taffe, J. (1986). "Panel Session: Teaching Statistics -- Mathematical or Practical Model," PICTeachSt2, pp. 332-336.
Tanis, E. A. (1972). "Theory of Probability and Statistics Illustrated by the Computer," CompUnCur, 3, 513-520.
Tanis, E. A. (1973 a). "A Computer-based Laboratory for Mathematical Probability and Statistics, CompUnCur, 4, 416-426.
Tanis, E. A. (1973 b). "A Statistical Hypothesis Test for the Classroom," MathTchr, 66, 657-658.
Tanis, E. A. (1977). "A Computer-based Laboratory for Mathematical Statistics and Probability," CompUnCur, 8, 339-346.
Tarter, M. and Berger, B. (1972). "On the Training and Practice of Computer Science and Statistical Consultants," PCompScSt6, pp. 16-23.
Taube, A. (1986). "Teaching Statistics in Developing Countries: Some Experiences As a Statistics Teacher in Africa," PICTeachSt2, pp. 236-240.
Tavig, G. R. and Gibbons, J. D. (1977). "Comparing the Mean and the Median as Measures of Centrality," IntStRvw, 45, 63-70.
Tedford, J. R. (1977). "Use of a Deterministic Macroeconomic Computer Model as a Teaching Aid in Economic Statistics," CompUnCur, 8, 63-70.
Teichmann, W. (1984). "Biostatistical Methods in Gastroenterology," PEuSympSt, pp. 78-83.
Terasvirta, T. (1987). "How We Got the Data," SITConfSt2, pp. 1-8.
Teuscher, F., Tuchscherer, A. and Rasch, D. (1984). "Postgraduate Training in Biometry," PEuSympSt, pp. 164-169.
Thisted, R. A. (1979). "Teaching Statistical Computing Using Computer Packages," AmerStat, 33, 27-35.
Thisted, R. A. (1986). "Computing Environments for Data Analysis," StatSc, 2, 259-275.
Thomas, E., Dorfel, H. and Batz, G. (1984). "Biostatistics in Specialization and Postgraduate Training in the Field of Agriculture," PEuSympSt, pp. 170-174.
Thomas, L. (1977). (Editorial). "Biostatistics in Medicine," Science, 198, 675.
Thomas, P. A. J. (1987). "Practically Binomial," TeachgSt, 9, 90-91.
Tintner, G. (1952). "Abraham Wald's Contributions to Econometrics," AnlsMathStat, 23, 21-28.
Toulouse, J. H. (1955). "A Study of Industrial Use of Probability and Statistics in the Physical Sciences," JASA, 52, 322-330.
Travers, K. J. (1977). "On Generating A Bivariate Distribution Having Any Desired Correlation," MathTchr, 70, 265-279.
Travers, K. J. (1978). "A Computer Activity for Building a Linear Model of Data," CreaComp, 4, No. 5, 104-105.
Traxler, C. L. (1984). "Consulting Internship in Statistics: An Intern's Experience," ASAProStEd, pp. 53-56.
Troccolo, J. A. (1977). "Randomness in Physics and Mathematics," MathTchr, 70, 772-774.
Tuan, C. (1982). "An Evaluation of Regression Analysis by Business School Graduates," ASAProStEd, pp. 152-156.
Tubb, G. W. and Ringer, L. J. (1978). "Current Use of Computers in the Teaching of Statistics," PCompScSt10, pp. 437-441.
Tuerke, W. (1979). "A Geometric Presentation of Inequalities with Mean Values," Alpha, 12, No. 3, 52-53. (In German).
Tukey, J. W. (1954). "Unsolved Problems of Experimental Statistics," JASA, 49, 706-731.
Tukey, J. W. (1972). "Data Analysis, Computation, and Mathematics," QApplMa, 30, 51-65.
Tulya-Muhika, S. (1982). "Statistical Education in Schools in Uganda and Other East African States," TchngStSch, pp. 199-209.
Tung-Po, L. (1974). "The Power Mean and the Logarithmic Mean," AmMaMnth, 81, 879-883.
Turner, D. E. (1981). "Statistics Across the Curriculum," StochSchule, 1, No. 3, 39-40.
Urquhart, N. S. (1972). "On Consultation and Education Near the Interface," PCompScSt6, pp. 28-32.
Urquhart, N. S. (1985). "Some Current and Projected Uses of Computer Technology in the Teaching of Statistics," PTeachPot, pp. 422-440.
Urquhart, N. S. (1986). "Use of Computers in Teaching Statistics: Some Current and Projected Uses of Computer Technology in the Teaching of Statistics, II," PICTeachSt2, pp. 184-189.
Utter, M. and Wilkinson, J. W. (1973). "Some Classroom Experiences in the Teaching of Empirical Model Building and Regression Analysis," CompUnCur, 4, 427-429.
Van Osdol, D. H. (1986). "Industrial Training in Quality Improvement -- Part II: The Role of Human Relations," PICTeachSt2, pp. 361-364.
Van Valey, T. L. (1974). "The 'Piggyback' Course as a Device for Teaching Statistics in Sociology," CompUnCur, 5, 123-128.
Vanderhoot, I. (1987). "Applications of Statistical Methods in the Insurance Industry," RptStBus2, pp. 26-28.
Vanderlinden, A. (1980). "An Application of Statistics: Control of Products," MathPed, 6, No. 28, 121-132. (In French).
Vanhamme, W. (1981). "What Can We Do with a Microcomputer," MathPed, 7, No. 31, 29-38. (In French).
Vannman, K. (1986). "Statistics in Industry and Implications for Teaching," PICTeachSt2, pp. 355-356.
VanZwet, W. R. (1979). "Mean, Median, Mode: II," StatNeer, 33, 1-5.
Vasilopoulos, A. (1985). "Computer-Generated Density Functions for Sums of Independent Random Variables," CollMcmp, 3, 97-110.
Vayo, H. W. (1983). "A One Year Course in Mathematical Model Building," UMAPJ, 4, 285-290.
Velleman, P. E. (1980). "Do Statistical Packages Help or Hinder Data Analysis?," ASAProStCp, pp. 21-26.
Vilaplana, J. P. (1973). "The Current Trends in the Teaching of Applied Mathematics and Its Curriculum," ActBolMath, I, pp. 240-247.
Vilaplana, J. P. (1978). "Toward a New Approach in Teaching of Applied Mathematics," ActBolMath, II, 923-939.
Vilaplana, J. P. (1981). "The Teaching of Statistics," ActBolMath, III, 35-41.
Vilaplana, J. P. (1983). "The Teaching and Application of Statistics in Developing Countries," BullIntSt, 4, 455-459.
Vilaplana, J. P. (1986). "Teaching Statistics in Government: A Case Study in the Basque Country," PICTeachSt2, pp. 347-350.
Vollmann, R. (1981). "Let's Calculate the Mean Value," EhrHau, 6, No. 4, 31-38.
Wainer, H. (1984). "How to Display Data Badly," AmerStat, 38, 137-147.
Wakin, S. (1983). "Expected Value at Jai Alai and Pari-Mutuel Gambling," UMAPJ, 4, 475-487.
Walker, A. M. (1986). "Significance Tests Represent Consensus and Standard Practice," AmJPubHealth, 76, 1033; and 76, 1087.
Walker, H. M. (1934). "Bicentenary of the Normal Curve," JASA, 29, 72-75.
Walker, H. M. (1958). "The Contributions of Karl Pearson," JASA, 53, 11-27.
Wallace, D. P. (1985). "The Use of Statistical Methods in Library and Information Science," JAmSocInfSci, 36, 402-410.
Wallenstein, S., Zucker, C. and Fleiss, J. (1980). "Some Statistical Methods Useful in Circulation Research," CirRes, 47, 1-9.
Walsh, A. (1987). "Teaching Understanding and Interpretation of Logit Regression," TeachgSoc, 15, 178-183.
Walton, K. D. (1985). "Teaching Probability: The Computer is the Handmaiden of Mathematics," JCompMaScT, 5, 19-23.
Ward, E. F. (1984). "Statistics Mastery: A Novel Approach," TeachgPsy, 11, 223-225.
Ware, J. H., Mosteller, F. and Ingelfinger, A. (1986). "P Values," MedUseStat, pp. 149-169.
Watson, J. M. (1978). "A Current Event for the Mathematics Classroom," MathTchr, 71, 658-663.
Weiss, D. G., Williford, W. O., Collins, J. F. and Bingham, S. F. (1983). "Planning Multicenter Clinical Trials: A Biostatistician's Perspective," ContCliTr, 4, 53-64.
Weiss, G. B. and Bunce, H. (1979). "Are We Ready for Statistical Guidelines for Medical Research Data?" Biomtrcs, 35, 91.
Weiss, M. C. (1972). "PSYSTAT -- A Teaching Aid for Introductory Statistics," CompUnCur, 3, 521-524.
Weiss, S. T. and Samet, J. M. (1980). "An Assessment of Physician Knowledge of Epidemiology and Biostatistics," JMedEd, 55, 692-697.
Welch, B. L. (1970). "Statistics -- A Vocational or a Cultural Study? (with discussion)," JRSS-A, 133, 531-554.
Wernecke, K. D. (1984). "On the Conveyance of Bio-Statistical Knowledge by Dealing with Applied Medical Problems," PEuSympSt, pp. 102-106.
Wetherill, G. B., Coulson, N., Daniels, H. E., Durran. J. H., Gartside, S., Gill, E. E., Maxwell, A. E., Smith, A. H., Chatfield, C. and Basset, E. E. (1968). "Interim Report of the RSS Committee on
the Teaching of Statistics in Schools," JRSS-A, 131, 478-499.
Wiebe, J. J. (1987). "Snails, Statistics and Computer," SchSciMa, 87, 665-671.
Wiedling, H. (1981). "Statistical Quality Control," PraxMath, 23, No. 1, 11-18.
Wikoff, R. L. (1970). "Using the Computer in Basic Statistics Courses," CompUnCur, 1, 2.18-2.24.
Wilcox, R. R. (1983). "Approximating the Probability of Identifying the Most Effective Treatment for the Case of Normal Distributions Having Unknown and Unequal Variances," EdPsyMeas, 43, 43-51.
Wilcox, R. R. and Charlin, V. L. (1986). "Comparing Medians: A Monte Carlo Study," JEdStat, 11, 263-276.
Wilcox, W. F. (1935). "Definitions of Statistics," IntStRvw, 3, 388-399.
Wilk, D. (1981). "Rapid Calculation of Standard Deviation of Small Samples," TeachgSt, 3,79-82.
Wilkinson, R. K. (1979). "Statistics in Sixth Form Economics," StochSchule, 1, 35-37.
Willer, H. (1984). "Biometrics for Students of Veterinary Medicine and for Postgraduate Veterinarians in the GDR," PEuSympSt, pp. 117-120.
Wilson, B. (1979). "The First Shall Be Last," StochSchule, 1, No. 2, 36-41.
Wilson, J. (1973). "Three Myths in Educational Research," EduRes, 16, 17-19.
Winch, R. F. and Campbell, D. T. (1969). "Proof? No Evidence? Yes! The Significance of Tests of Significance," AmSoc, 4, 140-143.
Winner, A. (1982). "Elementary Problem Solving with the Microcomputer," CompTchr, 9, 11-14.
Wirokowski, J. J. (1972). "A Curious Aspect of Knockout Tournaments of Size 2," AmerStat, 26, 28-30.
Wishart, J. (1939). "Some Aspects of the Teaching of Statistics (with Discussion by Greenwood, M., Elderton, W., Fisher, R. A., Allen, R. G. D., Irwin, J. O., Selwyn, V. and Bowley, A. L.)," JRSS-A,
102, 532-564.
Wishart, J. (1948). "The Teaching of Statistics (with discussion)," JRSS-A, 111, 212-229.
Wolfowitz, J. (1952). "Abraham Wald: 1902-1950," AnlsMathStat, 23, 1-13.
Wolfowitz, J. (1967). "Remarks on the Theory of Testing Hypotheses," NYStstcian, 18, 1-3.
Wonnacott, T. (1986). "Bayesian and Classical Hypothesis Testing," JAppStat, 13, 149-157.
Wonnacott, T. (1987). "Confidence Intervals or Hypothesis Tests?," JAppStat, 14, 195-201.
Woodward, E. and Ridenhour, J. R. (1982). "An Interesting Probability Problem," MathTchr, 75, 765-768.
Wu, S. C. (1972). "An Alternative Approach in Teaching Statistical Methods," CompUnCur, 3, 529-541.
Wulff, H. R., Anderson, B., Brandenhoff, P. and Guttler, F. (1987). "What Do Doctors Know About Statistics?" StatMed, 6, 3-10.
Wyllys, R. E. (1978). "Instructional Use of Statistical Program Packages: BND, IMPOMNITAB II and SPSS," PCompScSt10, pp. 265-270.
Yates, F. (1951). "The Influence of 'Statistical Methods for Research Workers' on the Development of the Science of Statistics," JASA, 46, 19-34.
Yates, F. (1966). "Computers: the Second Revolution in Statistics," Biomtrcs, 22, 233-251.
Yates, F. (1968). "Theory and Practice in Statistics," JRSS-A, 131, 463-477.
Yates, J. F. (1984). "Evaluating and Analyzing Probability Forecasts," UMAPJ, 5, 75-118.
Yeh, C-N. (1978). "Computer Aided Instruction in Business Statistics," CompUnCur, 9, 309-313.
Yeo, G. K. (1984). "A Note of Caution on Using Statistical Software," Ststcian, 33, 181-184.
Youden, W. J. (1951). "The Fisherian Revolution in the Methods of Experimentation," JASA, 46, 47-50.
Young, F. S. (1980). "A SAS Coloring Book as a Teaching Aid," SAS.SUGI5, pp. 330-335.
Youngman, M. B. (1977). "Necessary Inferences: A Reply to T. Derrick," EduRes, 20, 55-57.
Youngs, G. A., Jr. (1987). "Using Matrix Structures to Integrate Theory and Statistics into a Research Methods Course," TeachgSoc, 15, 157-163.
Yule, G. U. (1905). "The Introduction of the Words 'Statistics' and 'Statistician' into the English Language," JRSS-A, 88, 391-396.
Zahn, D. A. and Boroto, D. R. (1984). "Resolving Breakdowns in Statistical Consultations," ASAProStEd, pp. 51-52.
Zatzkis, H. (1973). "Another View of the Optimal Length of Play of a Binomial Game," MathTchr, 66, 667-669.
Zidek, J. V. (1986). "Statistician: The Quest for a Curriculum," PICTeachSt2, pp. 1-17.
Zuhrt, E. (1984). "Methodical Aspects of Teaching Biostatistics in Medicine and Stomatology," PEuSympSt, pp. 107-110.
Zuliani, A. and Sanna, F. (1986). "An Exploratory Survey of Teachers of Mathematics in the State Upper Secondary Schools in Italy: Some Results," PICTeachSt2, pp. 120-126.
Zwiers, F. W. and Kelly, I. W. (1986). "Probability and the Short Run Illusion: Perceptions and Misperceptions," SchSciMa, 86, 149-155.
Hardeo Sahai
Department of Biostatistics and Epidemiology
Medical Sciences Campus
University of Puerto Rico
San Juan, PR 00936
Anwer Khurshid
Department of Mathematical Statistics and Operational Research
University of Exeter
EX4 4QE
Department of Statistics
University of Karachi
PC 75270
Satish Chandra Misra
Division of Biostatistics and Epidemiology
U.S. Food and Drug Administration
8800 Rockville Pike
Bethesda, Maryland 20892
The American University
Washington, DC
Return to Sahai, Khurshid, and Misra Paper | Return to Table of Contents | Return to the JSE Home Page | {"url":"http://www.amstat.org/publications/jse/v4n3/bib1b.html","timestamp":"2014-04-21T14:49:01Z","content_type":null,"content_length":"63545","record_id":"<urn:uuid:df8e8094-8950-46ce-be28-6653b0b506a8>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00399-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: April 2004 [00583]
[Date Index] [Thread Index] [Author Index]
RE: i don't understand mapping function over a long list
• To: mathgroup at smc.vnet.net
• Subject: [mg47840] RE: [mg47802] i don't understand mapping function over a long list
• From: "tgarza01 at prodigy.net.mx" <tgarza01 at prodigy.net.mx>
• Date: Wed, 28 Apr 2004 06:56:33 -0400 (EDT)
• Reply-to: tgarza01 at prodigy.net.mx
• Sender: owner-wri-mathgroup at wolfram.com
For one thing, xlist is not defined (your list is called l).
And you can't get
{{x$299 + 2, x$300 + 2, x$301 +2},
{x$302 + 2, x$303 + 2, x$304 + 2}...
{x$326 + 2, x$327 + 2, x$328 + 2}}}
as the output of any evaluation. Mathematica will always give
{{x$299 + 2, x$300 + 2, x$301 +2},
{x$302 + 2, x$303 + 2, x$304 + 2}...
{x$326 + 2, x$327 + 2, x$328 + 2}}}
since adding any constant to a list will result in adding that constant to
each element of the list.
Tomas Garza
Mexico City
Original Message:
From: sean_incali at yahoo.com (sean kim)
To: mathgroup at smc.vnet.net
Subject: [mg47840] [mg47802] i don't understand mapping function over a long list
hello group.
i just don't get this.
I'm not sure what's the problem is.
please consider the following list of lists.
l = Partition[Flatten[{Table[Unique[x], {n, 1, 30}]}], 3 ]
above generates a list of 10 lists with 3 elements in each as in...
{x$299, x$300, x$301}, {x$302, x$303, x$304},
{x$305, x$306, x$307}, {x$308, x$309, x$310},
{x$311, x$312, x$313}, {x$314, x$315, x$316},
{x$317, x$318, x$319}, {x$320, x$321, x$322},
{x$323, x$324, x$325}, {x$326, x$327, x$328}
Now suppose I want to use each of the list( all 10 of them) as a part
of a function. . I want to "Apply" the function to every list(so, 10
times in total)
for a simple examplel let's add 2 to the lists
In[21]:= Apply[Plus@@xlist, 2]
Out[21]= 2
that didn't work. i wanted to get was
{{x$299, x$300, x$301} + 2,
{x$302, x$303, x$304} + 2...
{x$326, x$327, x$328} + 2}}
then i want to give each of the results unique names and use the
renamed list of lists as an argument in another function.
uniquexname1 = {x$299, x$300, x$301} + 2,
uniquexname2 = {x$302, x$303, x$304} + 2...
uniquexname10 = {x$326, x$327, x$328} + 2}
Map[Plus, xlist, 2]
just bring back the list itself.
This problem recurs for me. and I think i have problems with it
because I just don't understand how Mathematica language works.
Reading the book an dhelp manual doesn't help me much in understanding
what lies underneath. Can you guys shed soem light on this with some
simple examples that use numerical operations?
Maybe I'm asking a lot, but any and all insights are thoroughly
thanks in advance.
mail2web - Check your email from the web at
http://mail2web.com/ . | {"url":"http://forums.wolfram.com/mathgroup/archive/2004/Apr/msg00583.html","timestamp":"2014-04-20T13:25:39Z","content_type":null,"content_length":"36734","record_id":"<urn:uuid:1d469717-5d7f-4bca-8fe9-5449d3cd8134>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00409-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
SUPER G.S QUESTION : ___________________ An infinite G.s the sum of the 1st n terms =b ,and the sum of the first 2n terms = c , and the sum of the first 3n terms = d , PROVE THAT (b,c-d,d-c,.....) is
an infinite G.s and the sum of any number of its terms cant exceed the sum of the sum of the original sequence up to infinity ......
• one year ago
• one year ago
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Geometric sequence :D
Best Response
You've already chosen the best response.
how that c-d is geometric mean of preceding and suceeding term
Best Response
You've already chosen the best response.
show that (c-d)^2 = b * (d-c)
Best Response
You've already chosen the best response.
@experimentX : he want to prove that these sequence is an infinite G.s ,not a Geometric sequence and since your working on the Geometric Mean that will lead you to the Geometric sequence ...
Best Response
You've already chosen the best response.
these sequences are *
Best Response
You've already chosen the best response.
do this first ... then use induction!!
Best Response
You've already chosen the best response.
ok ,what about the next require ? i dont even understand what he want !_!
Best Response
You've already chosen the best response.
\[ S_n = \frac{n(2a + (n-1)d)}{2}\] \[ S_{2n} = \frac{2n(2a + (2n-1)d)}{2}\] \[ S_{3n} = \frac{3n(2a + (3n-1)d)}{2}\] Find the differences, \[ S_{2n} - S_{n} = \frac{2n(2a + (2n-1)d)}{2} -\frac{n
(2a + (n-1)d)}{2} = \frac{n}{2}(2a + (3n - 1)d) \] \[ S_{3n} - S_{2n} = \frac{3n(2a + (3n-1)d)}{2} -\frac{2n(2a + (2n-1)d)}{2} = \frac{n}{2}(2a + (5n - 1)d) \]
Best Response
You've already chosen the best response.
looks like something went wrong ... verify that these are in Geometric progression first
Best Response
You've already chosen the best response.
experimentX: the question says "geometric sequence" not "arithmetic sequence", so the formula for the sum should be:\[S_n=\frac{a(1-r^n)}{1-r}\]
Best Response
You've already chosen the best response.
if you use this, then you should be able to do the question.
Best Response
You've already chosen the best response.
yea thats right @asnaseer , anyways tysmm of your effort @experimentX
Best Response
You've already chosen the best response.
Also, Eyad, I think you may have a mistake in the question. I think it should say: "PROVE THAT (b, c-b, d-c, .....) is an ..." and not: "PROVE THAT (b, c-d, d-c, .....) is an ..."
Best Response
You've already chosen the best response.
note the 2nd term should be "c-b" not "c-d"
Best Response
You've already chosen the best response.
otherwise 3rd term = -(2nd term)
Best Response
You've already chosen the best response.
which doesn't seem to make sense
Best Response
You've already chosen the best response.
do you agree Eyad?
Best Response
You've already chosen the best response.
I copied it exactly as its written from the book ,though i swear its doesn't make sense ,i totally agreeee
Best Response
You've already chosen the best response.
must be a misprint
Best Response
You've already chosen the best response.
I think i'am gonna leave it and ask da prof. ,and if i got an answer i will surely share it to you guys ... Tysm for your efforts
Best Response
You've already chosen the best response.
you should be able to use the formula I gave you to prove this fairly easily
Best Response
You've already chosen the best response.
maybe ....
Best Response
You've already chosen the best response.
I can do the first step for you if you want?
Best Response
You've already chosen the best response.
I dont want to bother you asnaseer i think iam gonna make sure first if the question is written correct then we shall begin again ,what do u think ?
Best Response
You've already chosen the best response.
I am 100% sure its a misprint - and it is no bother at all to help you - always a pleasure :)
Best Response
You've already chosen the best response.
tysm @asnaseer ,ty too @experimentX ^_^ You guys rocks
Best Response
You've already chosen the best response.
ok, we know:\[S_n=\frac{a(1-r^n)}{1-r}=b\]therefore the sum of the first 2n terms must be:\[S_{2n}=\frac{a(1-r^{2n})}{1-r}=c\]similarly, the sum of the first 3n terms must be:\[S_{3n}=\frac{a(1-r
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
yes ,that right i have just proved it ..continue Please
Best Response
You've already chosen the best response.
sorry that should be - similarly, you should be able to show:\[d-c=br^{2n}\]
Best Response
You've already chosen the best response.
so you will be left with the sequence:\[b,c-b,d-c,...\]which equals:\[b,br^n,br^{2n},...\]
Best Response
You've already chosen the best response.
which is a geometric sequence
Best Response
You've already chosen the best response.
aha ,now i will be able to prove it can be added to infinite G.s
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
ty ,ima be sure of the 2nd require and then call you back :DDD
Best Response
You've already chosen the best response.
ok - good luck! :)
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4fbaa0f8e4b05565342f9717","timestamp":"2014-04-21T10:26:50Z","content_type":null,"content_length":"122522","record_id":"<urn:uuid:e3a6fee5-0217-40d3-bd26-4dbeb65d091f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00082-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Parabola Modelling Fuel Consumption
September 17th 2011, 08:45 PM #1
Junior Member
Sep 2011
A Parabola Modelling Fuel Consumption
A car consumes gas according to the equation f(v) = (v − 10)2 + 9900 where v is its speed in km/hr and f(v) is the fuel consumption rate in ml/hr. Find the speed that gives best fuel economy,
i.e., the best distance-to-fuel ratio. Hint: The idea is to minimize fuel per unit distance, not per unit time.
Hey, I was just wondering how I would solve this problem without using Calculus. I approached it by graphing the f(v) and determining the minimum via the vertex form f(x) = (x-a)^2 + h. Are there
any other methods?
Re: A Parabola Modelling Fuel Consumption
A car consumes gas according to the equation f(v) = (v − 10)2 + 9900 where v is its speed in km/hr and f(v) is the fuel consumption rate in ml/hr. Find the speed that gives best fuel economy,
i.e., the best distance-to-fuel ratio. Hint: The idea is to minimize fuel per unit distance, not per unit time.
Hey, I was just wondering how I would solve this problem without using Calculus. I approached it by graphing the f(v) and determining the minimum via the vertex form f(x) = (x-a)^2 + h. Are there
any other methods?
Since $(v-10)^2$ is non-negative the minimum fuel consumption occurs when $(v-10)=0$
September 17th 2011, 09:30 PM #2
Grand Panjandrum
Nov 2005 | {"url":"http://mathhelpforum.com/algebra/188238-parabola-modelling-fuel-consumption.html","timestamp":"2014-04-19T04:23:32Z","content_type":null,"content_length":"34333","record_id":"<urn:uuid:107e9f8b-c2ff-4beb-8227-c16d8812bbbb>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00119-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hammond, IN Calculus Tutor
Find a Hammond, IN Calculus Tutor
...I then went on to earn a Ph.D. in Biochemistry and Structural Biology from Cornell University's Medical College. As an undergraduate, I spent a semester studying Archeology and History in
Greece. Thus I bring first hand knowledge to your history studies.
41 Subjects: including calculus, chemistry, physics, English
...I am currently a student at Purdue University Calumet, and I am majoring in math education. One day I plan on being a high school math teacher. I specifically I want to tutor in math for
elementary, middle, or high school students.
9 Subjects: including calculus, algebra 2, precalculus, elementary (k-6th)
...I am licensed in both math and physics at the high school level. I have taught a wide variety of courses in my career: prealgebra, math problem solving, algebra 1, algebra 2, precalculus,
advanced placement calculus, integrated chemistry/physics, and physics. I also have experience teaching physics at the college level and have taught an SAT math preparation course.
12 Subjects: including calculus, physics, geometry, algebra 1
...These methods are nothing unusual, but on the part of the teacher, they require a great deal of self-discipline, and willingness to "push" the students. I completed a Discrete math course
(included formal logic, graph theory, etc.) in college, and computer science courses that handled automata t...
21 Subjects: including calculus, chemistry, statistics, geometry
...As a theoretical chemist, I have a strong foundation in mathematics* and some areas of physics as well. While obtaining my masters degree in chemistry at the University of Chicago, I worked as
a teaching assistant for the general chemistry course. This involved presenting a discussion section a...
18 Subjects: including calculus, chemistry, reading, algebra 2 | {"url":"http://www.purplemath.com/hammond_in_calculus_tutors.php","timestamp":"2014-04-20T14:02:21Z","content_type":null,"content_length":"23998","record_id":"<urn:uuid:373ad6f8-03d0-47f4-b9db-318ebd3239be>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00653-ip-10-147-4-33.ec2.internal.warc.gz"} |
Private Tutoring Math
1 = - (-w + 6)
The Distributive Property is an algebra property which is used to multiply a single term and two or more terms inside a set of parentheses. Take a look at the problem below
1 = - (w + 6)
The minus sign on the outside of the parenthesis means you have to change the sign of everything INSIDE the parenthesis. So, you get this...
1 = w - 6
Because you want the w on one side by itself, you have to get rid of the -6... To make -6 a zero, you have to ADD 6 to it. And what you do to one side of the equal sign, you have to do to the
other... So, add 6 to the 1 also. You get this...
7 = w
Here are some great online tutoring sites to help you further with your math problems. | {"url":"http://privatetutoringmath.blogspot.com/","timestamp":"2014-04-20T20:56:06Z","content_type":null,"content_length":"38283","record_id":"<urn:uuid:7c09c270-c8c0-4fe4-81f8-d0133550aafe>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00256-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is an abelian variety with a Galois invariant, rank one submodule of its Tate module, CM?
up vote 9 down vote favorite
Let $A$ be an absolutely simple abelian variety over a number field $K$. Assume that, for some prime $p$, the Tate module $T_p A$ has a submodule of rank one, invariant under the absolute Galois
group of $K$. Does it follow that $A$ is has CM?
For elliptic curves, I guess this follows from Serre's open image theorem. That's all I know. I would be surprised if there was a counterexample as it would be a way of constructing abelian
extensions of $K$ using non-CM abelian varieties, which would be surprising.
nt.number-theory abelian-varieties
This seems very true. Let $L_{\mathfrak p}$ be a finite extension of $\mathbb Q_{p}$ containing all the traces of the Frobenius morphisms acting on $T_{p}A$. Assume that $A$ has no CM. Then there
are no quadratic character $\eta$ such that $V=T_{p}A\otimes L_{\mathfrak p}$ is isomorphic to $V\otimes\eta$. This implies that $\operatorname{End}_{L_{\mathfrak p}[G_{K}]}$ is equal to $L_{\
mathfrak p}$ by Frobenius reciprocity and this in turn implies that $V$ is irreducible. Does that sound good to you or am I missing something? Didn't Bogomolov proved the open image theorem you
want anyway? – Olivier Feb 16 '11 at 22:11
1 @Olivier: What's the result of Bogomolov you've alluded to? I'd be interested to see. – Felipe Voloch Feb 17 '11 at 0:24
Dear Felipe, I was referring to Sur l'algébricité des représentations l-adiques (C.R.A.S 290 F.Bogomolov). There are several results of Serre from the 80s, mostly found in letters to other people,
which also cover these kind of results. – Olivier Feb 17 '11 at 9:42
I thought Bogomolov proved something about homotheties and not a full open image theorem. I'll have a look, thanks. – Felipe Voloch Feb 18 '11 at 1:37
add comment
1 Answer
active oldest votes
Yes. This follows from the main result of the following paper of Zarhin.
MR0885780 (88h:14046) Zarhin, Yu. G. Endomorphisms and torsion of abelian varieties. Duke Math. J. 54 (1987), no. 1, 131–145.
His result, specialized to the $K$-simple case, is the following (fantastic) theorem.
up vote 12 down vote Let $A$ be a $K$-simple abelian variety defined over a number field $K$. The following are equivalent:
accepted (i) $A(K^{\operatorname{ab}})[\operatorname{tors}]$ is infinite.
(ii) $A$ is of CM-type over $K$.
Your hypotheses imply that there is infinite torsion over the abelian extension cut out by the action of Galois on the one-dimensional subspace (the Galois group is contained
in $\mathbb{Z}_{p}^{\times}$), so Zarhin's theorem applies.
This is very pretty. In (i) did you mean 'infinite'? – Keerthi Madapusi Pera Feb 16 '11 at 22:33
@Keerthi: thanks. Yes, I either meant "infinite" in (i) or was missing a "not" in (ii) (and not both!). I fixed it as you suggested. – Pete L. Clark Feb 16 '11 at 22:36
add comment
Not the answer you're looking for? Browse other questions tagged nt.number-theory abelian-varieties or ask your own question. | {"url":"https://mathoverflow.net/questions/55666/is-an-abelian-variety-with-a-galois-invariant-rank-one-submodule-of-its-tate-mo","timestamp":"2014-04-17T07:42:21Z","content_type":null,"content_length":"58988","record_id":"<urn:uuid:e57881df-ea23-4cb7-89c0-a4ea96c26ec8>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00421-ip-10-147-4-33.ec2.internal.warc.gz"} |
Some 4x4 transformation matrices, using a right handed coordinate system. These matrices are used by multiplying vectors from the right.
The projection matrices will produce vectors in a left handed coordinate system, i.e. where z goes into the screen.
A 4x4 rotation matrix for a rotation around the X axis
A 4x4 rotation matrix for a rotation around the Y axis
A 4x4 rotation matrix for a rotation around the Z axis
:: Floating a
=> Vec3 a The normalized vector around which the rotation goes
-> a The angle in radians
-> Mat44 a
A 4x4 rotation matrix for a rotation around an arbitrary normalized vector
rotationEuler :: Floating a => Vec3 a -> Mat44 aSource
A 4x4 rotation matrix from the euler angles yaw pitch and roll. Could be useful in e.g. first person shooter games,
:: Num a
=> Vec4 a The quaternion with the real part (w) last
-> Mat44 a
A 4x4 rotation matrix from a normalized quaternion. Useful for most free flying rotations, such as airplanes.
:: Floating a
=> Vec3 a The up direction, not necessary unit length or perpendicular to the view vector
-> Vec3 a The viewers position
-> Vec3 a The point to look at
-> Mat44 a
A 4x4 rotation matrix for turning toward a point. Useful for targeting a camera to a specific point.
:: Floating a
=> a Near plane clipping distance (always positive)
-> a Far plane clipping distance (always positive)
-> a Field of view of the y axis, in radians
-> a Aspect ratio, i.e. screen's width/height
-> Mat44 a
A perspective projection matrix for a right handed coordinate system looking down negative z. This will project far plane to z = +1 and near plane to z = -1, i.e. into a left handed system.
:: Fractional a
=> a Near plane clipping distance
-> a Far plane clipping distance
-> Vec2 a The size of the view (center aligned around origo)
-> Mat44 a
An orthogonal projection matrix for a right handed coordinate system looking down negative z. This will project far plane to z = +1 and near plane to z = -1, i.e. into a left handed system. | {"url":"http://hackage.haskell.org/package/Vec-Transform-1.0.3/docs/Data-Vec-LinAlg-Transform3D.html","timestamp":"2014-04-24T20:38:49Z","content_type":null,"content_length":"14931","record_id":"<urn:uuid:e99f69a1-cbac-4ab6-bf5a-d9b368a94bb4>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00318-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: October 1999 [00071]
[Date Index] [Thread Index] [Author Index]
RE: Re: NonlinearRegress and numerical functions...
• To: mathgroup at smc.vnet.net
• Subject: [mg20190] RE: [mg20180] Re: [mg20132] NonlinearRegress and numerical functions...
• From: "Ersek, Ted R" <ErsekTR at navair.navy.mil>
• Date: Tue, 5 Oct 1999 04:04:20 -0400
• Sender: owner-wri-mathgroup at wolfram.com
Larske Ragnarsson presented a problem about NonLinearRegression. In his
demonstration of the problem he used the following line.
The line above stores the following definition for (g1).
g1[t1_]:=y[t]/.DSolve[{y'[t]==-3 y[t],y[o]==6},y[t],t][[1]]/.t->t1
Bob Hanlon noted that the line above will solve the diff-eq each time
g1[_] is evaluated. Bob recommended the following line which will only
solve the diff-eq when the function is defined.
I point out that the following line does the same thing and requires a
little less typing.
However, the previous two solutions don't store the intended solution if
(t1) happens to have a global value. It may be that the global value was
assigned early in a Mathematica session, and you have long since forgot
about it. Consider the examples below.
6/E^21 (* Wrong Answer! *)
6/E^21 (* Wrong Answer! *)
You can solve the diff-eq only once, and ensure the intended definition is
stored by clearing global values from the symbol (t1) before the definition
is made. Of course (t1) better not have any values that you care about.
That is what I do below.
For another approach you can use a function in my (LocalPatterns.m) package.
With this package you can be sure the right definition is stored, and you
don't have to clear any definitions. You can download the package and a
notebook where it's demonstrated from
Ted Ersek
For Mathematica tips, tricks see | {"url":"http://forums.wolfram.com/mathgroup/archive/1999/Oct/msg00071.html","timestamp":"2014-04-18T11:05:36Z","content_type":null,"content_length":"37038","record_id":"<urn:uuid:7c68b6a5-67ed-44bf-9c43-22ccb6db6ba1>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00298-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating volume.. easy question
it should be (x,f(x)) even though its in the 2nd quad?
is this general for all quad that x and f(x) be positive when labeling an arbitrary coordinate ?
You are assuming that x is always positive and -x is always negative - no, this isn't true. You can't tell the sign of a variable by whether it has a + or - in front of it. For example, -b could be
positive or negative, depending on the value of b. Similarly, +c could be positive or negative, depending on the value of c. Note that we don't normally write +c, but I'm just trying to make a point.
Think about the x-axis. If x is a number to the left of zero, it's negative. We DO NOT write this as -x. | {"url":"http://www.physicsforums.com/showthread.php?s=6d80cb3f5caf01dd0b1c96705fafb29d&p=3780168","timestamp":"2014-04-21T04:44:02Z","content_type":null,"content_length":"38158","record_id":"<urn:uuid:7836b0c3-c4f4-47d6-b7ec-3142e5fb87e5>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00472-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - is there a logical way of understanding how randomness could agree with causality
the probability of wave-functions approach 0.[/QUOTE]
so wave functions are not 100 percent probabliistic?
im ok with 99.999999999999999999999999999999999999999 percent likelyhood that the wavefunction is random. that could provide accurate qm predictions. maybe its asymptotic. 100 percent has serious | {"url":"http://www.physicsforums.com/showpost.php?p=3776752&postcount=10","timestamp":"2014-04-17T07:35:48Z","content_type":null,"content_length":"7101","record_id":"<urn:uuid:dac94bc6-d196-436b-9538-afd50de7cfd0>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00612-ip-10-147-4-33.ec2.internal.warc.gz"} |
Perpendicular Distance from a Point to a Line
Later, on this page...
(BTW - we don't really need to say 'perpendicular' because the distance from a point to a line always means the shortest distance.)
This is a great problem because it uses all these things that we have learned so far:
Need Graph Paper?
The distance from a point (m, n) to the line Ax + By + C = 0 is given by:
There are some examples using this formula following the proof.
Proof of the Perpendicular Distance Formula
Let's start with the line Ax + By + C = 0 and label it DE. It has slope `-A/B`.
We have a point P with coordinates (m, n). We wish to find the perpendicular distance from the point P to the line (that is, distance `PQ`).
We now do a trick to make things easier for ourselves (the algebra is really horrible otherwise). We construct a line parallel to DE through (m, n). This line will also have slope `-A/B`, since it is
parallel to DE. We will call this line FG.
Now we construct another line parallel to PQ passing through the origin.
This line will have slope `B/A`, because it is perpendicular to DE.
Let's call it line RS. We extend it to the origin `(0, 0)`.
We will find the distance RS, which I hope you agree is equal to the distance PQ that we wanted at the start.
Since FG passes through (m, n) and has slope `-A/B`, its equation is `y-n=-A/B(x-m)` or `y=(-Ax+Am+Bn)/B`.
Line RS has equation `y=B/Ax.`
Line FG intersects with line RS when
Solving this gives us
So after substituting this back into `y=B/Ax`, we find that point R is
Point S is the intersection of the lines `y=B/Ax` and Ax + By + C = 0, which can be written `y=-(Ax+C)/B`.
This occurs when (that is, we are solving them simultaneously)
Solving for x gives
Finding y by substituting back into
So S is the point
The distance RS, using the distance formula, `d=sqrt((x_2-x_1)^2+(y_2-y_1)^2` is
`=sqrt( ((A^2+B^2)(Am+Bn+C)^2)/(A^2+B^2)^2)`
`=sqrt( ((Am+Bn+C)^2)/(A^2+B^2))`
The absolute value sign is necessary since distance must be a positive value, and certain combinations of A, m , B, n and C can produce a negative number in the numerator.
So the distance from the point (m, n) to the line Ax + By + C = 0 is given by:
Example 1
Find the perpendicular distance from the point (5, 6) to the line −2x + 3y + 4 = 0, using the formula we just found.
Example 2
Find the distance from the point `(-3, 7)` to the line
Didn't find what you are looking for on this page? Try search:
Online Algebra Solver
This algebra solver can solve a wide range of math problems. (Please be patient while it loads.)
Go to: Online algebra solver
Ready for a break?
Play a math game.
(Well, not really a math game, but each game was made using math...)
The IntMath Newsletter
Sign up for the free IntMath Newsletter. Get math study tips, information, news and updates each fortnight. Join thousands of satisfied students, teachers and parents!
Share IntMath!
Short URL for this Page
Save typing! You can use this URL to reach this page:
Math Lessons on DVD
Easy to understand math lessons on DVD. See samples before you commit.
More info: Math videos | {"url":"http://www.intmath.com/plane-analytic-geometry/perpendicular-distance-point-line.php","timestamp":"2014-04-17T21:34:24Z","content_type":null,"content_length":"27250","record_id":"<urn:uuid:ed184148-9a9a-4a63-9071-91680b368676>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00613-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math 7th Geometery
Posted by Max on Wednesday, April 24, 2013 at 10:27pm.
I have a complex figure like a U shape.
I understand the formula for this is l x w = a
Can you show me the steps?
31m is parael to 31m this would be the height (sides of the U)?? or is this the width?
Length 69m
Width is on the right top of U 23m.
Inside the middle of the U23m
Inside the middle side 12m
So confused can anyone help?
• Math 7th Geometery - Reiny, Thursday, April 25, 2013 at 7:35am
I think I got your figure.
I assume that the height of the inside of the U is 12 m
continue those height to the base, you now have 3 rectangles, the two outside ones are equal and are
31 by 23
The smaller one at the base is 23 by (31-12)
or 23 by 19
total area = 2(31)(23) + (23(19)
= 1426 + 437 = 1863
• Math 7th Geometery - Max, Friday, April 26, 2013 at 11:50am
yes hieght is 12m
Yes 3 rectangles.
yes two outside ones equal 31 by 23
Yes smaller one at the base is 23m
Yes all right. your awesome! This was a tough one for me. But you helped me see where I was confused.
Thank you so much.
Related Questions
Algebra 2/Trig. - I forgot some stuff from my 7th grade algebra class about ...
Math - What is another way to figure out the area of a complex figure?
geometery - Formula to find the area of a composite figure of a square and a ...
LAL! - are these complex sentence. if it is complex, and are the subordinate ...
Math, Geometery - Hey Guys I have a problem like on the kids on here. What is ...
english/check please - Find the independent clauses and the dependent clauses....
7th grade math - suppose you used the rule (3x+1,3y-4) to transform the original...
Geometry - What is another way to figure out the area of a complex figure?
matsh - The picture below shows a figure in the complex plane, consisting of two... | {"url":"http://www.jiskha.com/display.cgi?id=1366856868","timestamp":"2014-04-17T20:48:24Z","content_type":null,"content_length":"9334","record_id":"<urn:uuid:2f391663-4bdd-45fc-a7f8-33e401b37774>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quick Update
I will be doing the Mirror Symmetry thing. I had someone “like” the post which I’ll count as a vote for it and someone explicitly vote for it. That’s two whole people! For all I know that is half of
my regular readers, so by majority rule I have to do it.
I haven’t gotten around to a post because I’ve been incredibly busy. I’ve been doing my usual research and I’m taking two classes this quarter. In addition to all that I’ve been teaching a
“mini-course” in our AG club. So for the next several weeks when I have down time I’ll probably think about what I’m going to say there and/or I’ll be typing up notes for that. I should point out
that mirror symmetry has two parts, the derived category side and the Fukaya category side. The derived side is something that is really close to what I do, so it is without a doubt worthwhile to
blog about that half, and I plan to soon. I may be a bit sketchier on the side that isn’t as important to me.
I’ll just leave you with a quick taste of what some people mean by “Mirror Symmetry” which is a bit different than Kontsevitch Mirror Symmetry which is what I’ll be explaining at some point. Suppose
you have a Calabi-Yau threefold. By this I’ll mean a smooth, projective variety of dimension three over $\mathbb{C}$ with $\omega_X\simeq \mathcal{O}_X$ and $H^1(X,\mathcal{O}_X)=H^2(X,\mathcal{O}_X)
=0$. By general Hodge theory there is a symmetry in the Hodge numbers $h^{pq}(X)=\dim_{\mathbb{C}}H^q(X, \Omega^p)$, namely that $h^{pq}(X)=h^{qp}(X)$. Also, you can use the fact that it is
Calabi-Yau to check that all Hodge numbers are completely determined (independently of $X$) as either 0 or 1 except $h^{11}$ and $h^{12}$. Fun exercise in Serre duality!
Suppose $X$ is a Calabi-Yau threefold. There is some specified $h^{11}$ and $h^{12}$ (the only two unknown Hodge numbers). A mirror pair for $X$ is another Calabi-Yau threefold with $h^{11}$ and $h^
{12}$ swapped. In my brief encounter with Kontsevich Mirror Symmetry (which says something about an equivalence of categories) this will follow as a special case.
Since I’m in the mood I may as well say some things that immediately pop into mind when seeing this as someone that has recently been thinking in the arithmetic world. If we are over an algebraically
closed field of characteristic 0, then there is a result that says $h^{11}>0$. In particular, there cannot be a rigid CY 3-fold if mirror symmetry is true, since $h^{12}$ gives the space of
deformations of $X$. But in positive characteristic there are tons of rigid CY 3-folds! Interesting.
I’ll leave you with that little taste of what is to come. | {"url":"http://hilbertthm90.wordpress.com/2011/10/10/quick-update-2/","timestamp":"2014-04-20T10:46:22Z","content_type":null,"content_length":"79512","record_id":"<urn:uuid:fc87d06e-f011-4503-afed-51f1edbaec5b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00432-ip-10-147-4-33.ec2.internal.warc.gz"} |
Meeting Details
For more information about this meeting, contact Robert Vaughan.
Title: Etale pi_1 obstructions to rational points
Seminar: Algebra and Number Theory Seminar
Speaker: Kirsten Wickelgren, Harvard University
Grothendieck's anabelian conjectures say that hyperbolic algebraic curves over number fields should be K(pi,1)'s in algebraic geometry. It follows that conjecturally the rational points on such a
curve are the sections of etale pi_1 of the structure map. We will use these sections to approach the problem of distinguishing the rational points of a curve from the rational points of its
Jacobian, where the curve is viewed as embedded inside its Jacobian using some fixed rational point. More specifically, we will use cohomological obstructions of Jordan Ellenberg coming from the
etale fundamental group to obstruct a rational point of the Jacobian from lying on the curve. We will relate Ellenberg's obstructions to Massey products, and explicitly compute versions of the first
and second for P1- {0,1, infty}. Over R, we show the first obstruction alone determines the connected components of real points of the curve from those of the Jacobian, giving a strengthening of the
section conjecture over R.
Room Reservation Information
Room Number: MB106
Date: 04 / 29 / 2010
Time: 11:15am - 12:05pm | {"url":"http://www.math.psu.edu/calendars/meeting.php?id=6587","timestamp":"2014-04-20T04:04:13Z","content_type":null,"content_length":"4039","record_id":"<urn:uuid:4430f967-c95e-41f8-a4c5-a39801019647>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00049-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
A cubic polynomial system with seven limit cycles at infinity.
(English) Zbl 1096.65130
By the change of transformations $\xi =x{\left({x}^{2}+{y}^{2}\right)}^{-2/3}$, $\eta =y{\left({x}^{2}+{y}^{2}\right)}^{-2/3}$, $t={\left({x}^{2}+{y}^{2}\right)}^{-1}\tau$ applied to the real planar
cubic polynomial system without singular point at infinity the problem of limit cycles bifurcation at infinity is transferred into that at the origin. The computation of singular point values for
transformated system allows to derive the conditions of the origin (respectively of the infinity for the original system) to be a center and the highest degree fine focus. In conclusion the system is
constructed allowing the appearance of seven limit cycles in the neighbourhood of infinity.
Reviewer’s remarks: All computations have been done with the computer algebra system Mathematica.
65P30 Bifurcation problems (numerical analysis)
37M20 Computational methods for bifurcation problems
37G15 Bifurcations of limit cycles and periodic orbits
37G10 Bifurcations of singular points | {"url":"http://zbmath.org/?q=an:1096.65130","timestamp":"2014-04-19T04:38:04Z","content_type":null,"content_length":"22429","record_id":"<urn:uuid:19d59259-59f6-450f-aa6e-7139eddb8a49>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00215-ip-10-147-4-33.ec2.internal.warc.gz"} |
Curvilinear motion
Remember that the velocity vector is tangent to the curve at all points. So, you should find at what x-value is the particle at 4m (do you remember the integral definition of distance for a curve?),
then find the derivative at that point...
Do you see where to go after that, given the magnitude of the velocity vector? | {"url":"http://www.physicsforums.com/showthread.php?t=467641","timestamp":"2014-04-24T15:09:03Z","content_type":null,"content_length":"22304","record_id":"<urn:uuid:dfe50ea6-fd1c-4de8-8e2f-0d15b83d3b89>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00522-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 23
- In: Computer Graphics (SIGGRAPH 91 Proceedings , 1991
"... The number of polygons comprising interesting architectural models is many more than can be rendered at interactive frame rates. However, due to occlusion by opaque surfaces (e.g., walls), only
a small fraction of atypical model is visible from most viewpoints. We describe a method of visibility pre ..."
Cited by 281 (15 self)
Add to MetaCart
The number of polygons comprising interesting architectural models is many more than can be rendered at interactive frame rates. However, due to occlusion by opaque surfaces (e.g., walls), only a
small fraction of atypical model is visible from most viewpoints. We describe a method of visibility preprocessing that is efficient andeffective foraxis-aligned oril.ria / architectural m[}dels, A
model is subdivided into rectangular cc//.$whose boundaries coincide with major opaque surfaces, Non-opaque p(~rtc~/.rare identified rm cell boundaries. and used to form ana~ju{~’n~y,q)f~/>//con
nectingthe cells nfthesubdivisicm. Next. theccl/-r/~-cc/ / visibility is computed for each cell of the subdivisirrn, by linking pairs of cells between which unobstructed.si,q/~t/inr. ~exist. During
an interactive ww/krhrm/,q/~phase, an observer with a known ~~sition and\it)M~~~)~t>mov esthrc>ughthe model. At each frame, the cell containingthe observer is identified, and the contents {]fp{>
tentially visible cells areretrieved from storage. The set of potentially visible cells is further reduced by culling it against theobserver’s view cone, producing the ~)yt>-r~]-t(>// \ i,$ibi/ify,
The contents of the remaining visible cells arc then sent to a graphics pipeline for hidden-surface removal and rendering, Tests onmoderatelyc mnplex 2-D and 3-D axial models reveal substantially
reduced rendering loads,
, 1995
"... The subject of this paper is the analysis of a randomized preprocessing scheme that has been used for query processing in robot motion planning. The attractiveness of the scheme stems from its
general applicability to virtually any motion-planning problem, and its empirically observed success. In th ..."
Cited by 30 (10 self)
Add to MetaCart
The subject of this paper is the analysis of a randomized preprocessing scheme that has been used for query processing in robot motion planning. The attractiveness of the scheme stems from its
general applicability to virtually any motion-planning problem, and its empirically observed success. In this paper we initiate a theoretical basis for explaining this empirical success. Under a
simple assumption about the configuration space, we show that it is possible to perform a preprocessing step following which queries can be answered quickly. En route, we pose and give solutions to
related problems on graph connectivity in the evasiveness model, and art-gallery theorems. Robotics Laboratory, Department of Computer Science, Stanford University, Stanford, CA 94305-2140. Partially
supported by ARPA grant N00014-92-J-1809 and ONR grant N00014-94-1-0721. y Robotics Laboratory, Department of Computer Science, Stanford University, Stanford, CA 94305-2140. Partially supported by
ARPA grant N0...
- IN EUROPEAN CONFERENCE ON COMPUTER VISION , 2004
"... We analyze visibility from static sensors in a dynamic scene with moving obstacles (people). Such analysis is considered in a probabilistic sense in the context of multiple sensors, so that
visibility from even one sensor might be sufficient. Additionally, we analyze worst-case scenarios for high ..."
Cited by 25 (3 self)
Add to MetaCart
We analyze visibility from static sensors in a dynamic scene with moving obstacles (people). Such analysis is considered in a probabilistic sense in the context of multiple sensors, so that
visibility from even one sensor might be sufficient. Additionally, we analyze worst-case scenarios for high-security areas where targets are non-cooperative. Such visibility analysis provides
important performance characterization of multi-camera systems. Furthermore, maximization of visibility in a given region of interest yields the optimum number and placement of cameras in the scene.
Our analysis has applications in surveillance - manual or automated - and can be utilized for sensor planning in places like museums, shopping malls, subway stations and parking lots. We present
several example scenes - simulated and real - for which interesting camera configurations were obtained using the formal analysis developed in the paper.
- International Journal of Computational Geometry and Applications , 1993
"... yz ..."
- International Journal of Robotics Research , 1999
"... This paper presents a dynamic sensor planning system, capable of planning the locations and settings of vision sensors for use in an environment containing objects moving in known ways. The key
component of this research is the computation of the camera position, orientation, and optical settings to ..."
Cited by 18 (2 self)
Add to MetaCart
This paper presents a dynamic sensor planning system, capable of planning the locations and settings of vision sensors for use in an environment containing objects moving in known ways. The key
component of this research is the computation of the camera position, orientation, and optical settings to be used over a time interval. A new algorithm is presented for viewpoint computation which
ensures that the feature detectability constraints of focus, resolution, field-of-view, and visibility are satisfied. A five degree-of-freedom Cartesian robot carrying a CCD camera in a hand/eye
configuration and surrounding the work-cell of a Puma 560 robot was constructed for performing sensor planning experiments. The results of these experiments, demonstrating the use of this system in a
robot work-cell, are presented. The research described in this paper was performed while this author was at the Columbia University Department of Computer Science. y This work was supported in part
by DARPA con...
, 1995
"... Layout and packing are NP-hard geometric optimization problems of practical importance for which finding a globally optimal solution is intractable if P!=NP. Such problems appear in industries
such as aerospace, ship building, apparel and shoe manufacturing, furniture production, and steel construct ..."
Cited by 13 (6 self)
Add to MetaCart
Layout and packing are NP-hard geometric optimization problems of practical importance for which finding a globally optimal solution is intractable if P!=NP. Such problems appear in industries such
as aerospace, ship building, apparel and shoe manufacturing, furniture production, and steel construction. At their core, layout and packing problems have the common geometric feasibility problem of
containment: find a way of placing a set of items into a container. In this thesis, we focus on containment and its applications to layout and packing problems. We demonstrate that, although
containment is NP-hard, it is fruitful to: 1) develop algorithms for containment, as opposed to heuristics, 2) design containment algorithms so that they say "no" almost as fast as they say "yes", 3)
use geometric techniques, not just mathematical programming techniques, and 4) maximize the number of items for which the algorithms are practical. Our approach to containment is based on a new
, 2007
"... This paper studies non-crossing geometric perfect matchings. Two such perfect matchings are compatible if they have the same vertex set and their union is also non-crossing. Our first result
states that for any two perfect matchings M and M ′ of the same set of n points, for some k ∈ O(log n), there ..."
Cited by 8 (6 self)
Add to MetaCart
This paper studies non-crossing geometric perfect matchings. Two such perfect matchings are compatible if they have the same vertex set and their union is also non-crossing. Our first result states
that for any two perfect matchings M and M ′ of the same set of n points, for some k ∈ O(log n), there is a sequence of perfect matchings M = M0, M1,..., Mk = M ′ , such that each Mi is compatible
with Mi+1. This improves the previous best bound of k ≤ n − 2. We then study the conjecture: every perfect matching with an even number of edges has an edge-disjoint compatible perfect matching. We
introduce a sequence of stronger conjectures that imply this conjecture, and prove the strongest of these conjectures in the case of perfect matchings that consist of vertical and horizontal
segments. Finally, we prove that every perfect matching with n edges has an edge-disjoint compatible matching with approximately
- DISCRETE COMPUT. GEOM , 2000
"... This paper addresses three questions related to minimal triangulations of a three-dimensional convex polytope P . . Can the minimal number of tetrahedra in a triangulation be decreased if one
allows the use of interior points of P as vertices? . Can a dissection of P use fewer tetrahedra than ..."
Cited by 7 (3 self)
Add to MetaCart
This paper addresses three questions related to minimal triangulations of a three-dimensional convex polytope P . . Can the minimal number of tetrahedra in a triangulation be decreased if one allows
the use of interior points of P as vertices? . Can a dissection of P use fewer tetrahedra than a triangulation? . Does the size of a minimal triangulation depend on the geometric realization of P?
The main result of this paper is that all these questions have an affirmative answer. Even stronger, the gaps of size produced by allowing interior vertices or by using dissections may be linear in
the number of points.
, 2000
"... The problem of finding a triangulation of a convex three-dimensional polytope with few tetrahedra is NP-hard. We discuss other related complexity results. ..."
, 2003
"... We present an optimal \Theta (n)-time algorithm for the selection of a subset of the vertices of an n-vertex plane graph G so that each of the faces of G is covered by (i.e. incident with) one
or more of the selected vertices. At most bn=2c vertices are selected, matching the worst-case requiremen ..."
Cited by 4 (0 self)
Add to MetaCart
We present an optimal \Theta (n)-time algorithm for the selection of a subset of the vertices of an n-vertex plane graph G so that each of the faces of G is covered by (i.e. incident with) one or
more of the selected vertices. At most bn=2c vertices are selected, matching the worst-case requirement. Analogous results for edge-covers are developed for two different notions of "coverage&
quot;. In particular,our linear-time algorithm selects at most n \Gamma 2 edges to strongly cover G, at most bn=3c diagonals to cover G, and in the case where G has no quadrilateral faces, at most bn
=3c edges to cover G. All these bounds are optimal in the worst-case. Most of our results flow from the study of a relaxation of thefamiliar notion of a 2-coloring of a plane graph which we call a
face-respecting 2-coloring that permits | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=522602","timestamp":"2014-04-20T01:20:24Z","content_type":null,"content_length":"38230","record_id":"<urn:uuid:ba003888-2a8b-4880-93c3-3ba857df23fd>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
CS4 Talk: Dr Rachel Armstrong
On Wednesday 30th January, Dr Rachel Armstrong from the the University of Greenwich gave the CS4 talk “A Hitchiker’s Guide to Complexity”
One thought on “CS4 Talk: Dr Rachel Armstrong”
1. Complexity repeatedly yields this question: why numerous static spatial manifestations of the golden ratio (in both nature & culture)? They evidently are the static condensates of dynamical
energy, matter and information flows that are best described (in response to Ilya Progogine’s belief that fractals were coinicidental to his dissipative far-from equilibrium systems), by
Professor Adrian Bejan’s Constructal law’s behaviours.
The answer (therefore one of the key codes of Complexity), is that Constructal law describes these flows as optimal and analogical, as tree-shaped.
The archetypal and fundamental, underlying dynamical geometrical signature of these temporal flows are in turn described by the dynamical symmetries of the Asynsis principle, which are actually
golden ratio signatures in time.
Such a temporal, dynamic synthesis is unprecedented as the golden ratio has hitherto only been described in static, spatial, non-temporal, non-irreversible terms.
Constructal law describes nature’s behaviours – the Asynsis principle describes the archetypal geometrical signatures of those behaviours, which are indeed, golden ratio-based.
The reason? The golden ratio is optimisation and analogy exemplified. Evolved currents flow most easily when following a golden ratio, Asynsis principle path in space – and also in time.
PhiStatics are derived from PhiDynamics. (Nigel Reading RIBA, AD Magazine, Architecture & Film issue, 1995)
PhiStatics occurs thanks to a new design law of Nature, Consciousness and Culture in the Asynsis principle-Constructal law.
For how evolutionary design emerges analogically, optimally from entropy – for how Form follows Flow, please refer to: | {"url":"http://cs4southampton.wordpress.com/2013/02/08/cs4-talk-dr-rachel-armstrong/","timestamp":"2014-04-16T10:10:11Z","content_type":null,"content_length":"50283","record_id":"<urn:uuid:495d690f-6c7c-4155-8d9a-7bd882f89bd6>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is the real Jacquet module of a Harish-Chandra module still a Harish-Chandra module?
up vote 1 down vote favorite
Casselman defined the real Jacquet module for a Harish-Chandra module, if we view the Jacquet module as a module corresponding to the Levi subgroup, the question is is it still a Harish-Chandra
module? In particular is it still admissible?
add comment
1 Answer
active oldest votes
As I understand it, the Jacquet module for $(\mathfrak g, K)$-modules is defined so as to again be a $\mathfrak g$-module, and in fact it is a Harish-Chandra module, not for $({\mathfrak
g},K)$, but rather for $(\mathfrak g,N)$ (where $N$ is the unipotent radical of the parabolic with respect to which we compute the Jacquet module). (I am probably assuming that the original
$(\mathfrak g, K)$-module has an infinitesimal character here.)
I am using the definitions of this paper, in particular the discussion of section 2. This in turn refers to Ch. 4 of Wallach's book. So probably this latter reference will cover things in
Added: I may have misunderstood the question (due in part to a confusion on my part about definitions; see the comments below), but perhaps the following remark is helpful:
up vote 1
down vote If one takes the Jacquet module (say in the sense of the above referenced paper, which is also the sense of Wallach), say for a Borel, then it is a category {\mathcal O}-like object: it is
a direct sum of weight spaces for a maximal Cartan in ${\mathfrak g},$ and any given weight appears only finitely many times. (See e.g. Lemma 2.3 and Prop. 2.4 in the above referenced
paper; no doubt this is also in Wallach in some form; actually these results are for the geometric Jacquet functor of that paper rather than for Wallach's Jacquet module, but I think they
should apply just as well to Wallach's.
Maybe they also apply with Casselman's definition; if so, doesn't this give the desired admissibility?
Thank, Emerton. The definition in Wallach's book is different from Casselman's, in some sense it's the dual of Casselman's. In Casselman's definition, it becomes a (g,P)-module, and in
particular a module for the corresponding Levi. – user1832 Feb 10 '10 at 18:17
Sorry, I didn't know that. Maybe ignore the above, then! – Emerton Feb 10 '10 at 18:25
add comment
Not the answer you're looking for? Browse other questions tagged rt.representation-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/14921/is-the-real-jacquet-module-of-a-harish-chandra-module-still-a-harish-chandra-mod?sort=votes","timestamp":"2014-04-18T06:13:02Z","content_type":null,"content_length":"54280","record_id":"<urn:uuid:e166e513-8712-4626-bd84-40edb17d5205>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00533-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gainesville, GA SAT Math Tutor
Find a Gainesville, GA SAT Math Tutor
...As well as tutoring, I have volunteered in my local elementary school to help student with their homework for their homework club. Also I mentor students from middle school to high school on
behavior, studies, and other topics. I have attended many lectures on best study skills, tutored other c...
14 Subjects: including SAT math, chemistry, geometry, biology
...I have worked as a tutor and teacher, but more importantly, I know how to make learning fun and easy. I have a Master's degree in Business Administration(MBA), and I took several astronomy
courses as electives receiving A's in them. I am also a former member of MENSA.
29 Subjects: including SAT math, reading, GED, English
...As a recent summa cum laude graduate I realize it takes dedication to overcome obstacles. Higher SAT scores can improve college acceptance, and techniques learned can improve study habits and
scholastic achievement. Significant improvement can be achieved through guidance and student participation.
13 Subjects: including SAT math, writing, geometry, algebra 1
I have been tutoring since high school. In graduate school, I was a TA for 2 classes each semester for 3 years. I have BA's in Architectural/Art History and Psychology from UCSB and a Master's
Degree in Architecture from U. of Miami.
48 Subjects: including SAT math, English, reading, calculus
...Since not everyone is a auditory learner, I use visual and hands-on learning to help the lesson stick. I make sure that you understand what the problem is asking you to find and then help you
find the information needed to find that answer. This is my 7th year teaching.
21 Subjects: including SAT math, calculus, elementary (k-6th), ACT Math
Related Gainesville, GA Tutors
Gainesville, GA Accounting Tutors
Gainesville, GA ACT Tutors
Gainesville, GA Algebra Tutors
Gainesville, GA Algebra 2 Tutors
Gainesville, GA Calculus Tutors
Gainesville, GA Geometry Tutors
Gainesville, GA Math Tutors
Gainesville, GA Prealgebra Tutors
Gainesville, GA Precalculus Tutors
Gainesville, GA SAT Tutors
Gainesville, GA SAT Math Tutors
Gainesville, GA Science Tutors
Gainesville, GA Statistics Tutors
Gainesville, GA Trigonometry Tutors
Nearby Cities With SAT math Tutor
Alpharetta SAT math Tutors
Athens, GA SAT math Tutors
Buford, GA SAT math Tutors
Duluth, GA SAT math Tutors
Dunwoody, GA SAT math Tutors
Johns Creek, GA SAT math Tutors
Lawrenceville, GA SAT math Tutors
Oakwood, GA SAT math Tutors
Roswell, GA SAT math Tutors
Sandy Springs, GA SAT math Tutors
Smyrna, GA SAT math Tutors
Snellville SAT math Tutors
Suwanee SAT math Tutors
Westside, GA SAT math Tutors
Woodstock, GA SAT math Tutors | {"url":"http://www.purplemath.com/Gainesville_GA_SAT_Math_tutors.php","timestamp":"2014-04-16T16:09:08Z","content_type":null,"content_length":"24063","record_id":"<urn:uuid:19ac8d0a-8aad-41db-97d3-24d69ff3346a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00290-ip-10-147-4-33.ec2.internal.warc.gz"} |
Willowbrook, Houston, TX
Spring, TX 77389
Young, effective tutor - improved performance in any subject
...I'm an experienced tutor and will effectively teach all subjects in a way that is easily understood. I specialize in tutoring math (elementary math,
, prealgebra, algebra 1 & 2, trigonometry, precalculus, etc.), Microsoft Word, Excel, PowerPoint, and VBA...
Offering 10+ subjects including geometry | {"url":"http://www.wyzant.com/Willowbrook_Houston_TX_geometry_tutors.aspx","timestamp":"2014-04-20T08:43:16Z","content_type":null,"content_length":"61801","record_id":"<urn:uuid:345174ea-6cff-4e8e-9f83-e880ad12c7ec>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00269-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Model fitting
Replies: 3 Last Post: Nov 25, 2012 11:27 PM
Messages: [ Previous | Next ]
Re: Model fitting
Posted: Nov 25, 2012 11:27 PM
On Nov 25, 2:03 am, Dmitry Zinoviev <dzinov...@gmail.com> wrote:
> On Saturday, November 24, 2012 2:30:51 AM UTC-5, Ray Koopman wrote:
>> On Nov 23, 12:31 am, dzinov...@gmail.com wrote:
>>> I have an array of 3D data in the form {xi,yi,0/1} (that is, the z coordinate is either 0 or 1). The points are not on a rectangular grid. The 0 and 1 areas are more or less contiguous, though
the boundary between them can be somewhat fuzzy. The boundary is expected to be described by the equation y=a x^b. How can I adapt NonlinearModelFit or any other standard function to find the best
fit values for a and b? Thanks!
>> y = a x^b is linear in log-log coordinates, so use LogitModelFit
>> with Log@x and Log@y as the predictors; i.e., the probability of
>> observing z == 1 is 1/(1 + Exp[-(b0 + b1*Log@x + b2*Log@y)]).
> Than you! I assume that b=b2/b1. How do I calculate a?
The boundary is the curve for which prob[z == 1] == 1/2:
Solve[1/(1 + Exp[-(b0 + b1*Log@x + b2*Log@y)]) == 1/2, y]
y -> E^(-b0/b2) x^(-b1/b2) | {"url":"http://mathforum.org/kb/message.jspa?messageID=7928070","timestamp":"2014-04-18T23:25:21Z","content_type":null,"content_length":"20486","record_id":"<urn:uuid:a9980f0c-7fb6-4d40-ba10-e1a1c662c2db>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00047-ip-10-147-4-33.ec2.internal.warc.gz"} |
Properties of Subtraction
Subtraction is one of the mathematical operations performed on the numbers. By subtraction, we mean reducing. We say that subtraction means reducing smaller number from the bigger number. The
properties of subtraction are shown below:
There are different properties of subtraction. They are:
1. If ‘a’ and ‘b’ are any whole numbers and subtraction is performed on ‘a’ and ‘b’, then if a = b, or a > b, then the result is a whole number, else the result is not a whole number.
2. Commutative property does not hold true for the subtraction of whole numbers. If there exists numbers ‘a’ and ‘b’, then we say that commutative property does not hold true for subtraction. Thus we
conclude that a – b = b – a.
3. For every whole number ‘a’, there exist a number 0, such that if we subtract a number 0 from ‘a’, we get the same number. It can be written as:
a – 0 = a.
4. If we have a, b and c as the whole numbers, such that we have a – b = c, then the property of subtraction can be expressed in form of addition as follows:
a = c + b.
5. Associative property for whole numbers does not hold true. It means that a, b and c are any of three whole numbers, then we say that
(a – b) – c is not equal to a – (b – c).
6. If we subtract number 1 from a given whole number, then we get the predecessor of the given number. So we say that if ‘a’ is any number, then a – 1 is the predecessor of the given number. | {"url":"http://maths.edurite.com/properties-of-subtraction-s2hzi.html","timestamp":"2014-04-19T11:57:12Z","content_type":null,"content_length":"125809","record_id":"<urn:uuid:1beebad7-c573-4fc1-ba8e-c8cf649df5c3>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Cylinders through Five Points
Given five points there are, in general, at most six cylinders containing them. There are six solutions to the set of equations that describe the radius and axis parameter values (counting
multiplicity), but some of these solutions may be complex valued. In the special case where the points are the vertices of two regular tetrahedra glued together along a common face, there are in
fact six real valued solutions. The tetrahedra are oriented so that the glued face is in the horizontal plane, and the top vertex is allowed to move up or down. At a certain value, which is 5/4
times its initial value, the six cylinders will coalesce into three pairs (that is, the solutions each have multiplicity two). Beyond that value the cylinders "disappear;" that is, all solutions
become complex. | {"url":"http://demonstrations.wolfram.com/CylindersThroughFivePoints/","timestamp":"2014-04-21T09:40:12Z","content_type":null,"content_length":"41754","record_id":"<urn:uuid:92b9a8be-9ec4-40bb-928d-7496df91851c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00369-ip-10-147-4-33.ec2.internal.warc.gz"} |
Asymptotics for a phase of accelerated expansion
6.3 Asymptotics for a phase of accelerated expansion
Fuchsian techniques cannot only be used to construct singular spacetimes; they can also be used to construct spacetimes which are future geodesically complete and which exhibit accelerated expansion
at late times. A solution of the Einstein equations with a foliation of spacelike hypersurfaces whose mean curvature
In [304 ] Fuchsian techniques were used to construct solutions of the Einstein vacuum equations with positive cosmological constant in any dimension which have accelerated expansion at late times and
are not assumed to have any symmetry. Detailed asymptotic expansions are obtained for the late-time behaviour of these solutions. In the case of three spacetime dimensions these expansions were first
written down by Starobinsky [326 ]. These spacetimes are closely related to those discussed in Section 5.1. In even spacetime dimensions they have asymptotic expansions in powers of | {"url":"http://relativity.livingreviews.org/Articles/lrr-2005-6/articlesu22.html","timestamp":"2014-04-17T09:58:16Z","content_type":null,"content_length":"8496","record_id":"<urn:uuid:ad5215ae-6ecc-4c23-8023-d0745c7ced33>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00592-ip-10-147-4-33.ec2.internal.warc.gz"} |
<complexity> (NPC, Nondeterministic Polynomial time complete) A set or property of computational decision problems which is a subset of NP (i.e. can be solved by a nondeterministic Turing Machine in
polynomial time), with the additional property that it is also NP-hard. Thus a solution for one NP-complete problem would solve all problems in NP. Many (but not all) naturally arising problems in
class NP are in fact NP-complete.
There is always a polynomial-time algorithm for transforming an instance of any NP-complete problem into an instance of any other NP-complete problem. So if you could solve one you could solve any
other by transforming it to the solved one.
The first problem ever shown to be NP-complete was the satisfiability problem. Another example is Hamilton's problem.
See also computational complexity, halting problem, Co-NP, NP-hard.
[Other examples?]
Last updated: 1995-04-10
Try this search on Wikipedia, OneLook, Google
Nearby terms: NP « np « NPC « NP-complete » NP-hard » NPL » NPPL
Copyright Denis Howe 1985 | {"url":"http://foldoc.org/NP-complete","timestamp":"2014-04-18T21:16:56Z","content_type":null,"content_length":"5856","record_id":"<urn:uuid:7342afaf-dcbe-4a15-b78a-79b524913d68>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00401-ip-10-147-4-33.ec2.internal.warc.gz"} |
? on foreign tax credit
I know a number of bogleheads hold Total International Stock Fund in taxable accounts, with one advantage being claiming the foreign tax credit. There is a from 1116 which is fairly complex and also
has to be replicated for the Alternative Minimum Tax if the foreign tax credit claimed is over $300 for a single taxpayer. Am I doing this calculation correctly to see how much TISM dividends are
actually required to exceed the $300 threshold?
Using rough numbers, from the Vanguard site that gives foreign tax credit information (I think it is at the advisors site for some reason), it says that to calculate the foreign tax, it is the $
amount of the foreign dividends x 0.065 (6.5%). So I am assuming that at exactly $300 of foreign tax, this would represent $ 300 divided by 0.065 = $4600 of foreign dividends. Then hypothetically
assuming a 2% dividend rate, this $4600 of foreign dividends would be generated from an investment in TISM of $4600 divided by 0.02, which is $230,000. So if the investment in TISM exceeds $230 K one
would need Form 1116. Does this make sense?
Re: ? on foreign tax credit
A cursory read of your post suggests what you wrote makes sense.
However, I do not fear Form 1116. Tax software does a good job with it nowadays. We have filed 1116 for many years now. I think it's no longer a big deal since the software folks have finally caught
up with understanding how to program it ... at least for mutual funds.
It's all about short-term opportunistic rebalancing due to a short-term change in one's asset allocation, uh, I mean opportunistic rebalancing, uh I mean rebalancing, uh I mean market timing.
Re: ? on foreign tax credit
livesoft wrote:A cursory read of your post suggests what you wrote makes sense.
However, I do not fear Form 1116. Tax software does a good job with it nowadays. We have filed 1116 for many years now. I think it's no longer a big deal since the software folks have finally
caught up with understanding how to program it ... at least for mutual funds.
deleted. This was meant to be a PM.
Last edited by Calm Man on Fri Dec 21, 2012 9:01 pm, edited 1 time in total.
Re: ? on foreign tax credit
It's the opposite of diehard.
It's all about short-term opportunistic rebalancing due to a short-term change in one's asset allocation, uh, I mean opportunistic rebalancing, uh I mean rebalancing, uh I mean market timing.
Re: ? on foreign tax credit
Calm Man,
You have the right idea, but it should be noted that you are using numbers from 2011, and your dividend rate is way off.
The key unknown variable at this point is the amount of foreign tax as a percentage of the dividend distribution. Last year's number was 6.5%, although I think the historical average is more like 7%.
In 2010 it was 8.8%.
I believe this number will not be released until January.
Assuming you held exactly 1000 (Admiral) shares of Total International between September and now, then you would have been paid $746 in dividends. At 7%, that would be $52.22 in foreign taxes. (And
your 1099 would report a total of $798.22 = $746 + $52.22 in dividends.)
So to hit $300 in foreign taxes under the 7% and constant number of shares assumption, you'd need 5744.925 shares of VTIAX. Using the price at today's close, that's $142,876.28, give or take a few
Re: ? on foreign tax credit
What software deals with AMT FTC?
Re: ? on foreign tax credit
TT asks questions about FTC when going through the AMT stuff, so that's a "deals with" in my book. I am not subject to AMT.
It's all about short-term opportunistic rebalancing due to a short-term change in one's asset allocation, uh, I mean opportunistic rebalancing, uh I mean rebalancing, uh I mean market timing. | {"url":"http://www.bogleheads.org/forum/viewtopic.php?f=2&t=107400&newpost=1560601","timestamp":"2014-04-19T17:04:23Z","content_type":null,"content_length":"25542","record_id":"<urn:uuid:de09ad46-fefb-41a5-b1f4-1832ecdb075f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exact Bayesian inference and model selection for some infection models
While much progress in the analysis of infectious disease data depends upon MCMC methodology, the simpler and more exact method of rejection sampling can sometimes be very useful. Using examples of
influenza data from a population divided into households, this talk will illustrate the use of rejection sampling in model fitting; use of an initial sample to improve the efficiency of the
algorithm; selection between competing models of differing dimensionality. | {"url":"http://www.newton.ac.uk/programmes/SCB/abstract3/clancy.html","timestamp":"2014-04-20T14:13:19Z","content_type":null,"content_length":"2597","record_id":"<urn:uuid:18fd421f-f933-43d5-af88-202027a846f3>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00394-ip-10-147-4-33.ec2.internal.warc.gz"} |
When we think of a room, its walls and floors, a news paper, a note book, the computer screen, the surfaces of a rectangular box etc, we see flat surfaces which are rectangular. The rectangular
surface is the one which has four sided and the sides are straight. So we can see that the rectangular surfaces are those which are bounded by four sides. All four sided figures are not rectangles.
The four sided figures called quadrilaterals are classified into different types according to the sides and the angles. In this section let us see the definition of quadrilaterals and the other types
of quadrilaterals and their properties. | {"url":"http://www.mathcaptain.com/geometry/quadrilateral.html","timestamp":"2014-04-16T07:16:50Z","content_type":null,"content_length":"64609","record_id":"<urn:uuid:ff7fad67-0576-48ef-be97-29b8867e2337>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00461-ip-10-147-4-33.ec2.internal.warc.gz"} |
How much would it cost to go 800 miles in a car that has a 15.5 gallon tank and gets 26 mpg if gas is 2.45 a gallon?
Green vehicles
Energy conservation refers to reducing energy through using less of an energy service. Energy conservation differs from efficient energy use, which refers to using less energy for a constant
service. For example, driving less is an example of energy conservation. Driving the same amount with a higher mileage vehicle is an example of energy efficiency. Energy conservation and
efficiency are both energy reduction techniques.
Even though energy conservation reduces energy services, it can result in increased financial capital, environmental quality, national security, and personal financial security. It is at the top
of the sustainable energy hierarchy.]citation needed[
Transport economics is a branch of economics founded in 1959 by American economist John R. Meyer that deals with the allocation of resources within the transport sector. It has strong links to
civil engineering. Transport economics differs from some other branches of economics in that the assumption of a spaceless, instantaneous economy does not hold. People and goods flow over
networks at certain speeds. Demands peak. Advance ticket purchase is often induced by lower fares. The networks themselves may or may not be competitive. A single trip (the final good, in the
consumer's eyes) may require the bundling of services provided by several firms, agencies and modes.
Although transport systems follow the same supply and demand theory as other industries, the complications of network effects and choices between dissimilar goods (e.g. car and bus travel) make
estimating the demand for transportation facilities difficult. The development of models to estimate the likely choices between the such goods involved in transport decisions (discrete choice
models) led to the development of an important branch of econometrics, as well as a Nobel Prize for Daniel McFadden.
The fuel economy of an automobile is the fuel efficiency relationship between the distance traveled and the amount of fuel consumed by the vehicle. Consumption can be expressed in terms of volume
of fuel to travel a distance, or the distance travelled per unit volume of fuel consumed. Since fuel consumption of vehicles is a great factor in air pollution, and since importation of motor
fuel can be a large part of a nation's foreign trade, many countries impose requirements for fuel economy. Different measurement cycles are used to approximate the actual performance of the
vehicle. The energy in fuel is required to overcome various losses (wind resistance, tire drag, and others) in propelling the vehicle, and in providing power to vehicle systems such as ignition
or air conditioning. Various measures can be taken to reduce losses at each of the conversions between chemical energy in fuel and kinetic energy of the vehicle. Driver behavior can affect fuel
economy; sudden acceleration and heavy braking wastes energy.
Miles per gallon gasoline equivalent (MPGe or MPG[ge]) is a measure of the average distance traveled per unit of energy consumed. MPGe is used by the U.S. Environmental Protection Agency (EPA) to
compare energy consumption of alternative fuel vehicles, plug-in electric vehicles and other advanced technology vehicles with the fuel economy of conventional internal combustion vehicles
expressed as miles per US gallon.
The MPGe metric was introduced in November 2010 by EPA in the Monroney label of the Nissan Leaf electric car and the Chevrolet Volt plug-in hybrid. The ratings are based on EPA's formula, in
which 33.7 kilowatt hours of electricity is equivalent to one gallon of gasoline, and the energy consumption of each vehicle during EPA's five standard drive cycle tests simulating varying
driving conditions. All new cars and light-duty trucks sold in the U.S. are required to have this label showing the EPA's estimate of fuel economy of the vehicle.
In journalism, a human interest story is a feature story that discusses a person or people in an emotional way. It presents people and their problems, concerns, or achievements in a way that brings
about interest, sympathy or motivation in the reader or viewer.
Human interest stories may be "the story behind the story" about an event, organization, or otherwise faceless historical happening, such as about the life of an individual soldier during wartime, an
interview with a survivor of a natural disaster, a random act of kindness or profile of someone known for a career achievement.
Finance is the practice]citation needed[ of funds management, or the allocation of assets and liabilities over time under conditions of certainty and uncertainty. A key point in finance is the time
value of money, which states that a unit of currency today is worth more than the same unit of currency tomorrow. Finance aims to price assets based on their risk level, and expected rate of return.
Finance can be broken into three different sub categories: public finance, corporate finance and personal finance.
Note: Varies by jurisdiction
Note: Varies by jurisdiction
Related Websites: | {"url":"http://answerparty.com/question/answer/how-much-would-it-cost-to-go-800-miles-in-a-car-that-has-a-15-5-gallon-tank-and-gets-26-mpg-if-gas-is-2-45-a-gallon","timestamp":"2014-04-16T16:00:12Z","content_type":null,"content_length":"33397","record_id":"<urn:uuid:c39a9ea6-63d8-4ce2-982b-20d6e7205b0f>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
Angle Of Deviation - An Optics Problem, help needed in optics
how can i find out the angle of deviation for plano-concave lenses.
Could you please provide us some background on plano-concave lenses, and how rays are traced through them? What do you think the relevant equations are for the "angle of deviation?"
Is this homework or coursework? | {"url":"http://www.physicsforums.com/showthread.php?t=305031","timestamp":"2014-04-20T11:18:17Z","content_type":null,"content_length":"36854","record_id":"<urn:uuid:de47b3ab-8026-4ab6-8180-623e4061f135>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00042-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cesa-Bianchi, N. (S. Ben-David, N. Cesa-Bianchi and P. M. Long) -- Characterizations of learnability for classes of { 0,dots ,n }-valued functions - 1992
Cesa-Bianchi, N. (N. Cesa-Bianchi, P. M. Long and M. K. Warmuth) -- Worst-case quadratic loss bounds for on-line prediction of linear functions by gradient descent - 1996
Cesa-Bianchi, N. (N. Cesa-Bianchi) -- Learning the Distribution in the Extended PAC Model - 1990
Cesa-Bianchi, N. (N. Cesa-Bianchi, A. Krogh and M. K. Warmuth) -- Bounds on approximate steepest descent for likelihood maximization in exponential families - July 1994
Cesa-Bianchi, N. (N. Cesa-Bianchi, Y. Freund, D. P. Helmbold, D. Haussler, R. E. Schapire and M. K. Warmuth) -- How to use expert advice - 1993
Cesa-Bianchi, N. (N. Cesa-Bianchi, P. Long and M. Warmuth) -- Worst-case quadratic loss bounds for a generalization of the Widrow-Hoff rule - 1993 | {"url":"http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0cltbibZz-e--00-1----0-10-0---0---0direct-10---4-------0-0l--11-en-50---20-help---01-3-1-00-0--4----0-0-11-10-0utfZz-8-00&a=d&c=cltbib-e&cl=CL2.3.40","timestamp":"2014-04-17T18:44:37Z","content_type":null,"content_length":"18290","record_id":"<urn:uuid:4de625b5-c45d-4c00-bcb4-e76d83797ea7>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
Topic outline
• General
Instructor: Ceylan YOZGATLIGİL
Added: 15 March 2010
• Topic 1
Probability Axioms, Combinatorics
• Topic 2
Conditional Probability and Bayes Theorem
• Topic 3
Random Variables, Expectation, Transformation
(3 weeks)
Random variables, probability mass function, probability density function, cumulative distribution function and their properties. Expectations of random variables, Transformations of variables,
Parameter, Statistics, Measure of location, measure of variability, Box-Plot graphs, Covariance and Correlation.
• Topic 6
Mgf and Distributions
(2 weeks)
Moment generating functions, Statistical Distributions: Discrete distributions and their properties, Continuous distributions and their properties.
• Topic 8
Limiting Distributions & Sampling Distributions
• Topic 9
Point Estimation
(3 weeks)
Maximum likelihood estimation, Method of moments. Unbiased estimators, Consistent estimators, Mean-square error, Sufficiency; Completeness, Rao-Blackwell Theorem, Complete sufficient statistics,
Lehmann-Scheffe theorem, Minimum variance unbiased estimators, Exponential families, Fisher information, Rao-Cramer inequality, Efficient estimators, Asymptotic efficiency.
• Topic 12
Estimation II
(2 weeks)
Testing hypotheses: concepts of hypothesis testing, Neyman-Pearson lemma, Likelihood ratio test, Confidence intervals | {"url":"http://ocw.metu.edu.tr/course/view.php?id=83","timestamp":"2014-04-18T15:59:12Z","content_type":null,"content_length":"35979","record_id":"<urn:uuid:5bc7b648-0e2d-4400-8fcd-8cab6dd2f2c1>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
Art and Craft of Mathematical Problem Solving
University of San FranciscoPh.D., University of California, Berkeley
Video or Audio?
While this course works well in both formats, the video version features graphics to enhance your learning experience, including illustrations, images of
people and events, and on-screen text.
Video Exclusive
This course is richly illustrated with animations and hundreds of images to enhance your comprehension of the material.
Audio Exclusive
This course is available exclusively for audio.
Which Format Should I Choose? Video Download Audio Download DVD CD CD Soundtrack
Soundtrack Download
Watch or listen immediately with FREE streaming
Stream using apps on your iPad, iPhone, Android, or Kindle Fire
Watch Stream to your internet connected PC or laptop
Preview Download files for offline viewing or listening
Receive DVDs or CDs for your library
Play as many times as you want
What are CD Soundtracks?
CD Soundtracks are the entire audio portion of this video course. They contain some references to visual images, animations, graphics and content designed for the video experience.
What are Digital Soundtracks?
Digital Soundtracks are the entire audio portion of this video course. They contain some references to visual images, animations, graphics and content designed for the video experience.
72% off for a limited time
Art and Craft of Mathematical Problem Solving
This course includes FREE digital streaming
Enjoy instantly on your computer, laptop, tablet or smartphone.
One of life's most exhilarating experiences is the "aha!" moment that comes from pondering a mathematical problem and then seeing the way to an elegant solution. And many problems can be solved
relatively quickly with the right strategy. For example, how fast can you find
Show More
24 Lectures
Solving a math problem is like taking a hike, or even climbing a mountain. It's exciting, challenging, and unpredictable. Get started with three entertaining problems that plunge you into
thinking like a problem solver and illustrate two useful strategies: "wishful thinking" and "get your hands dirty."
Learn the difference between strategies, tactics, and tools when applied to problem solving. Try to decipher a puzzling reply to a census question, and determine whether three jumping frogs will
ever land on a given point.
Delve deeper into the psychological aspects of problem solving—especially concentration, creativity, and confidence—and ways to enhance them. Learn to avoid overreliance on very narrowly focused
mathematical tricks, and investigate a number of "think outside the box" problems, including the original that gave the name to this strategy.
Brainstorm an array of problems with the goal of building your receptiveness to discovery. See how far you can go by just letting yourself look for interesting patterns, experiencing both
conjectures that work as well as cautionary examples of those that don't. The core of the lecture is an investigation into trapezoidal numbers and a search for patterns in Pascal's triangle.
Learn how to "close the deal" on some of the outstanding conjectures from the previous lecture by using airtight arguments, or proofs. These include deductive proof, proof by contradiction, and
algorithmic proof—along with the narrow (and often overestimated) power of specific tools or "tricks," such as the "massage" tool, to make a mathematical expression simpler.
Explore three strategies for achieving a problem-solving breakthrough: draw a picture, change your point of view, and recast the problem. Try these strategies on a selection of intriguing word
problems that almost magically yield an answer, once you find a creative way of analyzing the situation.
Applying the problem-solving tactic of parity, test your wits against an evil wizard, an open-and-shut row of lockers, and other colorful conundrums. Then see how parity leads naturally into
graph theory, a playground for investigation that has nothing to do with conventional graphs.
Having used symmetrical principles to tackle problems in earlier lectures, take a closer look at this powerful tactic. Discover that when symmetry isn't evident, impose it! This approach lets you
compute the shortest distance to grandma's when you first have to detour to a river to fetch water.
Devise winning strategies for several fun but baffling combinatorial games. One is the "puppies and kittens" exercise, a series of moves and countermoves that can be taught to children but that
is amazingly hard to play well; that is, until you uncover its secrets with symmetry and a few other ideas.
Take your problem-solving skills to extremes on a variety of mathematical puzzles by learning how to contemplate the minimal or maximal values in a problem. This "extreme" principle is a simple
idea, but it has the nearly magical ability to solve hard problems almost instantly.
Detour into the hidden world of problem solvers—young people and their mentors who live and breathe nontraditional, nontextbook mathematics such as what you have been studying in this course. The
movement is especially strong in Russia and eastern Europe but is catching on in the United States.
Delve deeply into the famous "chicken nuggets" problem. In brief, what's the largest number of nuggets that you can't order by combining boxes of 7 and 10 nuggets? There are many roads to a
solution, but you focus on a visual approach by counting points in a geometric plane.
Apply the powerful strategies of recasting and rule-breaking to two classical theorems in number theory: Fermat's "little" theorem and Euler's proof of the infinitude of primes.
According to the pigeonhole principle, if you try to put n + 1 pigeons into n pigeonholes, at least one hole will contain at least two pigeons. See how this simple idea can solve an amazing
variety of problems. Also, delve into Ramsey theory, a systematic way of finding patterns in seemingly random structures.
To Professor Zeitz, the single most important word in all of mathematics is "invariants." Discover how this granddaddy of all problem-solving tactics—which involves quantities and qualities that
stay unchanged—can be used almost anywhere and encompasses such ideas as symmetry and parity.
What is the largest number that is the product of positive integers whose sum is 1,976? Tackle this question from the 1976 International Mathematical Olympiad with the method of algorithmic
proof, in which you devise a sequence of steps—an algorithm—that is guaranteed to solve the problem.
Draw on your skills developed so far to solve a tricky problem about marbles colliding on a circular track. Martin Gardner's airplane problem and a question about how many times a laser beam
reflects between two intersecting mirrors help you warm up to a solution.
Focusing on geometry, consider some baffling problems that become almost trivial once you know how to apply rotations, reflections, and other geometric transformations of your normal point of
view. This clever tactic was pioneered by the 19th-century mathematician Felix Klein.
Sometimes a problem demands a different type of proof from the ones you learned in Lecture 5. Study cases in which proof by mathematical induction is the only feasible approach. These typically
occur in recursive situations, where a complicated structure emerges from a simpler one.
Continuing your use of inductive proof, calculate the probability that a randomly chosen number in Pascal's triangle is even. This problem is surprisingly easy to investigate, but it requires
sophistication to resolve. But by now you have a good grasp of the methods you need.
Is it possible to find weird dice that "play fairly"? These are two dice that are numbered differently from standard dice but that have the same probability of rolling 2, 3, 4, and so on through
12. Learn that, amazingly, the answer is yes.
In a lecture that Professor Zeitz compares to walking along a mathematical cliff edge, use the pigeonhole principle to find patterns within apparently random and mind-bogglingly large structures.
You'll discover there is no limit to what the intrepid problem solver can do.
No course on problem solving is complete without a look at the checkers problem, formulated by contemporary mathematician and puzzle-master John Conway. Also learn about two other icons in the
field: Paul Erdos, who died in 1996, and Évariste Galois, who lived in the early 1800s.
Professor Zeitz reviews problem-solving tactics and introduces one final topic, complex numbers, before recommending a mission to last a lifetime: the quest for why a solution to any given
problem is true, not just how it was obtained. He closes by sharing some of his favorite examples of this elusive intellectual quest.
Dr. Paul Zeitz is Professor of Mathematics at the University of San Francisco. He majored in history at Harvard and received a Ph.D. in Mathematics from the University of California, Berkeley, in
1992, specializing in ergodic theory.
One of his greatest interests is mathematical problem solving. He won the USA Mathematical Olympiad (USAMO) and was a member of the first American team to participate in the International
Mathematical Olympiad (IMO) in 1974. Since 1985, he has composed and edited problems for several national math contests, including the USAMO. He has helped train several American IMO teams, most
notably the 1994 "Dream Team," which, for the first time in history achieved a perfect score. He founded the San Francisco Bay Area Math Meet in 1994 and cofounded the Bay Area Mathematical Olympiad
in 1999. These and other experiences led him to write The Art and Craft of Problem Solving (1999; second edition, 2007).
He was honored in March 2002 with the Award for Distinguished College or University Teaching of Mathematics by the Northern California Section of the Mathematical Association of America (MAA), and in
January 2003, he received the MAA's national teaching award, the Deborah and Franklin Tepper Haimo Award.
This course features hundreds of visual elements to aid in your understanding, including animations, graphics, and on-screen text. | {"url":"http://www.thegreatcourses.com/tgc/courses/course_detail.aspx?cid=1483","timestamp":"2014-04-16T11:32:12Z","content_type":null,"content_length":"219517","record_id":"<urn:uuid:74bf66a1-3f78-45db-bc57-c7a6cc8d781b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00107-ip-10-147-4-33.ec2.internal.warc.gz"} |
J. Rákosník (ed.),
Function spaces, differential operators and nonlinear analysis.
Proceedings of the conference held in Paseky na Jizerou, September 3-9, 1995.
Mathematical Institute, Czech Academy of Sciences, and Prometheus Publishing House, Praha 1996
p. 245 - 250
A semilinear elliptic problem in an unbounded domain
Klaus Pflüger
Freie Universität Berlin, Institut für Mathematik I, Arnimallee 2-6, D 14195 Berlin, Germany pflueger@math.fu-berlin.de
Abstract: We study a semilinear elliptic boundary value problem in an unbounded domain of $R^{n}$ $(n \geq 3)$ which arises for example in electromagnetic wave propagation in fibres. We consider
nonlinear boundary conditions of the form $\partial u / \partial n = Q(x) |u|^{q-1} u$ $(q>1)$. A Mountain Pass Lemma approach and a comparison argument are used to construct a nontrivial solution of
this problem.
Full text of the article:
[Previous Article] [Next Article] [Table of Contents] | {"url":"http://www.emis.de/proceedings/Paseky95/23.html","timestamp":"2014-04-17T09:41:50Z","content_type":null,"content_length":"1979","record_id":"<urn:uuid:5326bed6-74cb-4afe-8fae-9843e56a50c9>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |
EPA-Expo-Box (A Toolbox for Exposure Assessors)
Deterministic and Probabilistic Assessments
Deterministic exposure assessments use a combination of point values selected to be either health-protective (i.e., high-end values) or to represent a “typical” exposure (i.e., central tendency
values). They produce an exposure estimate that is also a point estimate that falls somewhere within the full distribution of possible exposures (U.S. EPA, 2004).
Deterministic assessments use point values to produce a point estimate of individual or population exposure.
Multiple iterations of an assessment can be conducted using the deterministic approach. For example, default point estimates can be used for a screening-level assessment to create a basic picture of
high-end or typical exposures. If the results of the initial assessment are not sufficient for use in decision-making, a refined deterministic assessment can be completed using more site-specific
data, if available, to create a more precise picture of expected exposures.
• Represent the average or typical individual in a population, usually near the median or 50^th percentile of the population distribution
Central Tendency Estimates • Arithmetic mean uses average values for all factors
• Median exposure/dose corresponds to 50^th percentile exposure/dose; useful when data are in a lognormal distribution
• Highest possible exposure
Bounding Estimates • Useful for rapid screening estimate
• Uses highest intake rates; highest exposure frequency and distribution; and average body weights for estimate
• At or above 90^th percentile of population distribution (e.g., reasonable maximum, reasonable worst-case, and maximum exposure)
High-End Estimates • Combination of high and central tendency inputs
• More realistic than upper bound
• Used in Superfund remedy decisions as recommended in RAGS
• Reasonable maximum exposure • Represents the highest exposure that is reasonably likely to occur
• Often represents the 90^th–99.9^th percentile of the exposure distribution estimated from a probabilistic risk assessment (U.S. EPA, 2001)
• Reasonable worst-case exposure • Lower part of the high-end exposure range (U.S. EPA, 1992)
• 90^th–98^th percentile (U.S. EPA, 1992)
• Maximum exposure • Uppermost portion of the exposure range (U.S. EPA, 1992)
• Above the 98^th percentile (U.S. EPA, 1992)
Probabilistic exposure assessments give the assessor flexibility in generating exposure estimates for the spectrum of high-end percentiles (e.g., from the 90^th to 99.9^th percentiles) from which the
assessor can select the most appropriate upper-bound level (U.S. EPA, 2004). Many of the same algorithms and data distributions used to derive point estimates in deterministic assessments can also be
used in probabilistic assessments.
To be health-protective, risk management decisions are often based on estimates of the high-end exposure to an individual. As the exposure estimate moves higher within the percentile range, the level
of uncertainty increases. Using a probabilistic approach allows for better characterization of variability and/or uncertainty in exposure estimates (U.S. EPA, 2004). This is accomplished by using a
set of “random variables” in the exposure equation. Random variables are those variables like body weight, exposure frequency, and ingestion rate that are assumed to be independent and not correlated
with one another (e.g., body weight is not correlated with exposure frequency) and are expressed as probability distributions, which account for variability within the population. Any known
correlations between variables are taken into account (e.g., food intake may be correlated with body weight).
Random variables allow for a unique estimate of exposure to be calculated by sampling each set of probability distributions and calculating a result. Each iteration of the calculation represents a
plausible combination of input values and therefore a plausible estimate of exposure. However, the "individuals" represented in each iteration are not meant to represent a single person; rather, the
total distribution of exposure values is meant to demonstrate the likelihood or probability of different exposure levels within a population with characteristics and behaviors that vary.
Below are the steps an assessor might take to conduct a probabilistic approach.
Identify Variables to Evaluate Probabilistically
• Prior to carrying out a probabilistic assessment, the assessor decides which of the input variables are going to be evaluated probabilistically. Ideally, the model will use probability
distributions for input variables that are uncertain or variable as identified by the sensitivity analyses. More often, the choices are limited by available data (U.S. EPA, 2004)
Select and Fit Distributions
• The assessor selects and fits the best distributions for the variables that will be input as probability distributions (see Input Data for more information and resources on selecting and fitting
Sample the Probability Distributions
• The most popular (but not the only) approach to estimating exposure with probability distributions is the Monte Carlo simulation. A Monte Carlo simulation is "a technique for characterizing the
uncertainty and variability in exposure estimates by repeatedly sampling the probability distributions of the exposure equation inputs and using these inputs to calculate a range of exposure
values" (U.S. EPA, 2001).
Monte Carlo simulations can vary in complexity:
□ One-dimensional Monte Carlo Analysis (1-D MCA) "combine[s] point estimates and probability distributions to yield a probability distribution that characterizes variability or uncertainty in
risks within a population" (U.S. EPA, 2001).
□ Two-dimensional Monte Carlo Analysis (2-D MCA) “simultaneously characterize[s] variability and uncertainty in multiple variables and parameter estimates” and is typically employed in more
refined assessments (U.S. EPA, 2001).
Many user-friendly programs are available for conducting Monte Carlo simulations, but if the model is not appropriately parameterized or the input distributions are not appropriately defined, the
results of a Monte Carlo simulation will not be useful. Some knowledge of probabilistic analysis and critical evaluation of the input distributions is therefore required to generate high-quality
results using a Monte Carlo simulation tool.
Presenting the Results of Probabilistic Assessments
• Presenting the results of a probabilistic assessment can be challenging due to the complexity of the approach while the results of a deterministic assessment are often simple to understand and a
decision point for taking action is often clear. For example, if the point estimate of risk is above a certain level, take action. If not, another action or no action might be advised (U.S. EPA,
The results of a probabilistic assessment are not as intuitive to interpret and the distribution of exposures or risks should be characterized as representing variability among the population
based on differences in exposure (U.S. EPA, 2004).
U.S. EPA recommends early and continuous involvement with stakeholders, including a communication plan, and developing effective graphics to ensure the results are understood by affected parties
(U.S. EPA, 2004). Further, information might be presented in multiple ways (e.g., using probability density functions and cumulative density functions) to communicate the results effectively.
Chapter 31 – Probabilistic Risk Assessment (12 pp, 345KB, About PDF) in ATRA Volume I (U.S. EPA, 2004)and Chapter 6 – Exposure Assessment (20 pp, 1.92MB, About PDF) in RAGS Volume 1 (U.S. EPA,
1989 ) include discussions of factors to consider when presenting the results of a probabilistic assessment. Hypothetical results showing probability density and cumulative density functions are
also included in this chapter.
Tools for Conducting Probabilistic Assessments
The tools in this table are all models that can be used to conduct probabilistic assessments. Tools for deterministic approaches are included in other EPA-Expo-Box tool sets. | {"url":"http://epa.gov/ncea/risk/expobox/tiertype/de-met.htm","timestamp":"2014-04-20T08:22:59Z","content_type":null,"content_length":"24569","record_id":"<urn:uuid:a239a542-3872-4be6-bc2a-04ddaf9a1638>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00244-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dressed to the Nines
RC Lewis , 13 February 2012 · 48 views
We have a wonderful base-10 number system. It makes a lot of things easy, and would make things even easier if the U.S. would get with it and switch to the metric system. Think about the poor Romans.
Have you seen those years noted at the end of movies made in the twentieth century? Yuck.
The nature of the base-10 system makes for some interesting things with the number that's just one shy of ten—nine. While learning your times tables, you may have noticed these properties of nine.
Up to 9 × 10, the digits of the products add to nine.
Again up to 10, there's a cool bookend-reversing thing going on, as the first digits go up and the second digits go down:
A side-effect of this, along with the fact we have ten fingers, is a little trick I use with kids who still struggle to remember multiplication facts with nine. (I thought everyone knew this, but
have found several adults who've never seen it, so I figured I'd share it here.)
Hold your hands in front of you, fingers spread. Whatever number you want to multiply nine by, count that many fingers from the left and put down the finger you land on. (So if you're doing 3 × 9,
count three fingers from the left, and put down your left middle finger.)
How many fingers are up to the left of the lowered finger? (In the example, two.) How many fingers are up to the right? (Seven.) Put those together, and you have the answer. (Two and seven ... 3 × 9
= 27.)
And now, I have to go do some research on the title of this post, because I've always kind of wondered about that phrase. | {"url":"http://agentqueryconnect.com/index.php?/blog/330/entry-670-dressed-to-the-nines/","timestamp":"2014-04-19T06:22:01Z","content_type":null,"content_length":"43435","record_id":"<urn:uuid:8be005c4-971f-4c03-ac7b-28489a443a0c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00486-ip-10-147-4-33.ec2.internal.warc.gz"} |
When does the relative differential $df=0$ imply that $f$ comes from the base?
up vote 16 down vote favorite
Let $A \to B$ be a map of commutative rings, and $d : B \to I/I^2$ be defined by $df = f\otimes 1 - 1\otimes f$, where $I$ is the kernel of $B \otimes_A B \to B$, as in [Hartshorne II.8].
If $df=0$, I would like to infer that $f \in A$, i.e. "if the derivative is zero, the function is constant".
This is certainly $\bf false$, e.g. $A = {\mathbb F}_p$, $B = A[x]$, and $f=x^p$. There $B\otimes_A B\to B$ is $A[x_1,x_2] \mapsto A[x]$, with $I = \langle x_1 -x_2\rangle \ni x_1^p - x_2^p = f\
otimes 1 - 1\otimes f$. (That is, not only is $f\otimes 1 - 1\otimes f$ in $I^2$, it's in $I^p$.)
What is the right condition, then, on $A\to B\ $? My primary interest is in $A = {\mathbb Z}[1/d]$.
characteristic-p ag.algebraic-geometry differentials ac.commutative-algebra
1 After teaching 3 hrs, my brain is shot. Here some thoughts which may only be quasicoherent. If $A$ is a field of char $p>0$, there is not much hope. If $A$ is a field of char $0$, and $B$ has a a
domain of finite type, then you should be OK by the algebraic de Rham theorem which implies $H^0_{DR}(B)=A$. Perhaps in your situation, you can pass to the fraction field? – Donu Arapura Sep 13
'11 at 18:59
@Donu: I like (and use) your idea of passing to Frac($A$). But over a field you just have to be carefull with finite extensions $B/A$ for which $H^0_{DR}(B)=B$. – Qing Liu Sep 14 '11 at 0:20
Shouldn't that be: "if the derivative is zero, the function is locally constant"? – user2035 Sep 14 '11 at 7:45
Qing Liu, thanks. Nice answer, by the way. A-fortiori, certainly, although I don't think he meant it literally. – Donu Arapura Sep 14 '11 at 8:29
2 a-fortiori, I do point out that it's false! Basically your worry is that the fiber of Spec B -> Spec A may not be connected. In Qing Liu's very nice sufficient condition, the generic fiber is
geometrically integral, which is how he addresses your issue. – Allen Knutson Sep 14 '11 at 11:11
show 1 more comment
1 Answer
active oldest votes
I will rather regard $I/I^2$ as $\Omega_{B/A}$, the module of differential forms.
First some necessary conditions. If $D$ is a sub-$A$-algebra of $B$ such that $\Omega_{D/A}=0$ (e.g. $D$ is a localization of $A$ or étale over $A$), the canonical map $\Omega_{D/A}\
otimes_D B\to \Omega_{B/A}$ shows that $df=0$ for all $f\in D$. So if $A$ is a field (of characteristic $0$), you want it to be algebraically closed in $B$. If $A$ is not a field, to
avoid $B$ to contain a localization of $A$, you almost want to suppose $\mathrm{Spec} B\to \mathrm{Spec}A$ be surjective.
These being said, now a sufficient condition.
Suppose $A$ is noetherian, integrally closed of characteristic $0$, $B$ is an integral domain, $\mathrm{Spec} B\to \mathrm{Spec}A$ is surjective, and its generic fiber is [S:smooth
:S] of finite type and geometrically integral. Then the kernel of $B\to \Omega_{B/A}$ is equal to $A$.
(1) Let $C$ be the kernel of $d : B\to \Omega_{B/A}$. This is a sub-$A$-algebra of $B$. The canonical exact sequence $$ \Omega_{C/A}\otimes_C B\to \Omega_{B/A}\to \Omega_{B/C}\to 0$$
implies that $\Omega_{B/A}\to \Omega_{B/C}$ is an isomorphism because the first map is identically zero ($dc\otimes b\mapsto bdc=0$ if $c\in C$).
(2) Let $K=\mathrm{Frac}(A)$, $L=\mathrm{Frac}(B)$. Let us first show that $E:=\mathrm{ker}(L\to\Omega_{L/K})$ equals to $K$. Note that $E$ is a field. Applying (1) to the situation $A=
K$ and $B=L$, we see that $\Omega_{L/K}\to \Omega_{L/E}$ is an isomorphism. This map is $L$-linear, and $\dim_L\Omega_{L/K}$ is the transcendental degree of $L$ over $K$ (here we use the
up vote 23 characteristic zero hypothesis). Similarly for $L/E$. Hence $E$ is algebraic over $K$. But $B_K$ is geometrically integral over $K$, this forces $E=K$ (otherwise $L\otimes_K E$ is not
down vote integral).
(3) Let us show $C_K:=C\otimes_A K$ equals to $K$. Tensoring the exact sequence of $A$-modules $$ 0\to C\to B\to \Omega_{B/A}$$ by $K$, we get the exact sequence $$ 0\to C_K \to B_K \to
\Omega_{B_K/K}. $$ First suppose $B_K$ is smooth over $K$. As $\Omega_{L/K}$ is a localization of $\Omega_{B_K/K}$ and the latter is a free (hence torsion free) $B_K$-module, the map $\
Omega_{B_K/K}\to \Omega_{L/K}$ is injective. By (2) we then get $C_K=K$. In the general case, some dense open subset $U$ of $\mathrm{Spec}(B_K)$ is smooth. If $f\in B_K$ satisfies $df=0$
in $\Omega_{B_K/K}$, then $d(f_{|U})=0$ in $\Omega_{U/K}$. So $f{|_U}\in K$. As $B_K\to O(U)$ is injective, $f\in K$.
(4) Let us show $C=A$. We have $C\subseteq B\cap K$ (in fact the equality holds). So we have to show $B\cap K=A$. Let $g\in B\cap K$ viewed as a rational function on $\mathrm{Spec}(A)$.
Then $A\subseteq A[g]\subseteq B$. For any prime ideal $Q$ of $B$, $g$ is regular at the point $Q\cap A$. So the image of $\pi : \mathrm{Spec}(B)\to \mathrm{Spec}(A)$ is contained in the
complementary of the pole divisor of $g$. So the latter is empty by the surjectivity hypothesis on $\pi$. Therefore $g$ is a regular function (here we use the normality of $A$) and $C=
Add Suppose $A, B$ are integral domains of characteristic $0$, $B_K$ is finitely generated over $K=\mathrm{Frac}(A)$[S:, and $\Omega_{B_K/K}$ is torsion free over $B_K$:S]. Then we
proved above that the kernel of $d: B\to\Omega_{B/A}$ is contained in $B\cap K^{alg}$.
Add 2 One can remove free or torsion-free condition on $\Omega_{B_K/K}$. The proof is a little modified in the step 3. We just notice that any integral variety in characteristic $0$ has
a dense open subset which is smooth !
Better and better! – Allen Knutson Sep 15 '11 at 0:58
add comment
Not the answer you're looking for? Browse other questions tagged characteristic-p ag.algebraic-geometry differentials ac.commutative-algebra or ask your own question. | {"url":"https://mathoverflow.net/questions/75329/when-does-the-relative-differential-df-0-imply-that-f-comes-from-the-base/75365","timestamp":"2014-04-16T07:49:34Z","content_type":null,"content_length":"63355","record_id":"<urn:uuid:d7b3ce74-4952-482c-8409-e48c6cdf38de>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00525-ip-10-147-4-33.ec2.internal.warc.gz"} |
Geometry proof help
Re: Geometry proof help
Just put the reason right next to it? Do you know the reasons?
1) Angle FEG = angle DEG : because EG bisects angle DEF.
2) Angle EDG = angle EFG : given
3) ΔGED is congruent to ΔGEF: AAS
4) DG = GF : CPCTE
5)ΔDGF is isosceles: Two sides are equal. Definition of an isosceles triangle.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=19912","timestamp":"2014-04-16T16:17:21Z","content_type":null,"content_length":"16761","record_id":"<urn:uuid:77e8f2a2-b465-45da-afc4-4dc1fc881a67>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00555-ip-10-147-4-33.ec2.internal.warc.gz"} |
Plastic to oil process?
Hi there,
There are many videos that show people converting plastic into oil, then refining the oil into diesel, kerosine, and gasoline. I have a few questions about this particular video,
1) When the plastics turn into gas, what is the process called? Is it pyrolysis or gasification, or something else? Is the gas syngas? A chemical formula would be helpful.
2) Why do they liquify it and then refine it? Can't they just put the gas directly into a distillation column where diesel and gasoline will already be separated?
3) When the gas is liquified, is the water cooling it, or is it instead reacting with it?
4) Is the Fischer-Tropsch synthesis involved in any of this?
5) Is any Oxygen involved in this process?
Would really appreciate thorough answers with chemical formulas. | {"url":"http://www.physicsforums.com/showthread.php?t=592309","timestamp":"2014-04-19T04:37:48Z","content_type":null,"content_length":"19958","record_id":"<urn:uuid:dffe2a7b-7280-44bf-80cb-2cba7b213593>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00442-ip-10-147-4-33.ec2.internal.warc.gz"} |
Present Discounted Value of Payment
February 16th 2014, 05:47 PM #1
Junior Member
Sep 2013
Present Discounted Value of Payment
I have a question about the present discounted value of some amount of money, and how that info is used to make decisions.
Consider getting $1000 for 3 years when the annual interest rate is 10%. You get $1000 today, a year from now, and two years from now. The present discounted value of that is $1000 + 909.09 +
826.45. The total is clearly less than getting $3000 today. Why would someone value it at less than $3000 based on the interest rate today? I get that if you put 909.09 in the bank and let it sit
for a year you would have $1000. If you put 826.45 in the bank today and let it sit for two years you would end up with $1000. Why would you value $3000 at less than $3000? Why are you valuing
money as if you had it and would put it into a bank account to see how much interest you would make on it?
I have the same question about business profits. Let's say that you get $1 million per year from your business (ignore costs, risk, inflation etc.). The value of those gains can be discounted
when you consider an interest rate of 10%. Using the formula for perpetuities you get PV = 10,000,000. So from now until forever you value the profits at 10 million bucks today. This makes ZERO
sense to me: 1 million dollars forever equals 10 million? What's the most confusing is that people use this to make decisions.
What person in their right mind says "oh, I'm going to get $1 million forever from a business, and the interest rate is 10%, so I'll get 10 million". To me, if you get 1 million every year, you
get more than $10 million because you put that money in the bank and get interest on it. So as soon as you get that first million you put it in the bank and get the interest on it. Could someone
please explain it to me intuitively? I get that someone might not value money tomorrow as much as money today because of uncertainty and impatience that goes along with waiting. But this interest
rate stuff is so confusing! Why do you use the interest rate to see how much you value something in the present?
Re: Present Discounted Value of Payment
sorry, I don't know why the text is kind of weird ..
Re: Present Discounted Value of Payment
I have a question about the present discounted value of some amount of money, and how that info is used to make decisions.
Consider getting $1000 for 3 years when the annual interest rate is 10%. You get $1000 today, a year from now, and two years from now. The present discounted value of that is $1000 + 909.09 +
826.45. The total is clearly less than getting $3000 today. Why would someone value it at less than $3000 based on the interest rate today? I get that if you put 909.09 in the bank and let it sit
for a year you would have $1000. If you put 826.45 in the bank today and let it sit for two years you would end up with $1000. Why would you value $3000 at less than $3000? Why are you valuing
money as if you had it and would put it into a bank account to see how much interest you would make on it?
I have the same question about business profits. Let's say that you get $1 million per year from your business (ignore costs, risk, inflation etc.). The value of those gains can be discounted
when you consider an interest rate of 10%. Using the formula for perpetuities you get PV = 10,000,000. So from now until forever you value the profits at 10 million bucks today. This makes ZERO
sense to me: 1 million dollars forever equals 10 million? What's the most confusing is that people use this to make decisions.
What person in their right mind says "oh, I'm going to get $1 million forever from a business, and the interest rate is 10%, so I'll get 10 million". To me, if you get 1 million every year, you
get more than $10 million because you put that money in the bank and get interest on it. So as soon as you get that first million you put it in the bank and get the interest on it. Could someone
please explain it to me intuitively? I get that someone might not value money tomorrow as much as money today because of uncertainty and impatience that goes along with waiting. But this interest
rate stuff is so confusing! Why do you use the interest rate to see how much you value something in the present?
The second question is much easier to explain than the first. You have a business that generates 1 million a year in perpetuity without any risk etc. If you can buy a guaranteed perpetuity at
10%, how much would you have to invest to get exactly the same annual income as you get from the business. 10 million, right? So things that generate the same annual income with the same degree
of risk have equal market value. Let's see what would happen if people could buy a risk-free perpetuity generating 1 million a year forever for 9 million and you wanted to sell for 10 million a
business that will similarly have profits of 1 million a year. No one would buy at your asking price because they could get the same income for a smaller expenditure. With me so far? Similarly,
if the market price of such a perpetuity was 11 million, people would be willing to buy the business for 11 million.
Now you do NOT need to utilize interest rates for valuing receipts or disbursements in the present. (Actually if you use the formula, (1 + r)^0 = 1 so you CAN use the formula; it's just a waste
of time to do so.) Let's take an example. Suppose I ask you if you have change for a fifty. You do, and we swap my fifty for your two twenties and a ten. Interest rates have nothing to do with
it. Now suppose we change the example. I ask if you have change for a fifty. You do. I say "OK, great. Give me your two twenties and a ten, and I'll give you a fifty ten years from today." Most
people will not do such a transaction: they do not value money paid today as equal to money paid sometime in the future even if the future payment is absolutely guaranteed. Does this make sense
to you? People simply do not value future money as highly as they do money right now. A bird in the hand is worth two in the bush.
OK. Now we get to the hard question. Why is the (risk-free) market interest rate relevant? Person A might be willing to give up two twenties and a ten right now for a guarantee of a hundred ten
years from now, and Person B might be willing to give up two twenties and a ten right now only for a guarantee of two hundred dollar bills ten years from now. The "subjective time preference" of
A and B differ. Nothing new here: some people like chocolate ice cream and some like vanilla. The market rate of interest is like any other price: it balances out the buyers and sellers of future
payments of money. The current market rate is used in present value and future value formulas because it represents what it is practical to achieve in the current state of the market. You go to
the grocery store to buy a rib roast. When you see the price, you say that is too much and you don't buy. That is your prerogative, but then you do not get the rib roast. If you want the rib
roast, you have to pay the going market price. It is the same with financial markets. You can participate in them at the current structure of market rates, but you will do so only if your
personal subjective time preference is consistent with those market rates. Does this help?
Re: Present Discounted Value of Payment
Thanks so much!
It makes intuitive sense!
February 16th 2014, 05:50 PM #2
Junior Member
Sep 2013
February 21st 2014, 10:04 AM #3
Feb 2014
United States
March 3rd 2014, 06:43 PM #4
Junior Member
Sep 2013 | {"url":"http://mathhelpforum.com/business-math/226111-present-discounted-value-payment.html","timestamp":"2014-04-17T12:46:41Z","content_type":null,"content_length":"45657","record_id":"<urn:uuid:a9f72d03-646b-485a-9506-b010a6f4ea6b>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00319-ip-10-147-4-33.ec2.internal.warc.gz"} |
Relief mapping artifact
08-21-2012, 09:09 AM #1
Junior Member Newbie
Join Date
Jul 2012
Relief mapping artifact
I'm trying to implement relief mapping based on http://www.inf.ufrgs.br/~oliveira/pu...M_I3D_2005.pdf this paper, and the sample shader code in the appendix.
What I have works fine except for this strange artifact I'm getting:
It's as if the ray's slope is greater than it should be and that that it's intersecting parts of the image that it shouldn't be able to reach. However, changing the slope doesn't seem to do all
that much. This artifact occurs wherever the angle between two faces is greater then 180.
Is anyone here experienced with this technique?
Here's my ray cast function:
Code :
vec2 castRay(in sampler2D rm, in vec2 tc, in vec2 delta) {
const int nLinearSteps = 50;
const int nBinarySteps = 15;
float rayDepth = 0.0;
float stepSize = 1.0/float(nLinearSteps);
float texelDepth = texture(rm, tc).a;
float intersect = 0.0;
//linear test
for(int i = 0; i < nLinearSteps; i++) {
intersect = 1.0;
if(texelDepth > rayDepth) {
rayDepth += stepSize;
texelDepth = texture(rm, tc + (delta * rayDepth)).a;
intersect = 0.0;
if(intersect < 0.9)
//"Rewind" to the point before the intersection, but only if there is an intersection
rayDepth -= (stepSize * intersect);
//binary search
for(int i = 0; i < nBinarySteps; i++) {
stepSize *= 0.5;
rayDepth += stepSize;
texelDepth = texture(rm, tc + (delta * rayDepth)).a;
if(texelDepth <= rayDepth) {
stepSize *= 2.0;
rayDepth -= stepSize;
stepSize *= 0.5;
return (tc + (delta * rayDepth * intersect));
In the main function, I have
Code :
vec2 delta = fReliefScale * -normTanView.xy/normTanView.z;
“normTanView” is the normalized tangent-space view vector and “fReliefScale” is what modifies the slope of the ray.
I'm using mikktspace to get the tangent and bitangent vectors.
Try disabling mipmapping for the texture(s) you are ray casting against. If the problem remains, that will at least rule out texcoord gradient calculation as a problem.
I remember when the author of that paper debuted the technique more or less in this forum. There was a long discussion where I probably said an embarrassing thing or two.
Does anyone know if this has come along... been used effectively in any bigwig games? I was trying to recall just the other day if the technique suffered at shear angles like standard bump
mapping or not. It looks pretty good in the paper but the silhouettes still look flat; but a few of the plates seem to pop out. It's not super clear that a discard is being done along the
silhouette or not.
I was working on some stuff at the time, and I thought this would be a great technique for filling in the spaces that were just a few pixels wide in a regularly tessellated (in screen space)
procedural mesh.
God have mercy on the soul that wanted hard decimal points and pure ctor conversion in GLSL.
It was used in Crysis, and probably a couple others. There have been some papers released with variations on the original technique that use acceleration structures to speed up the ray cast
(distance fields, more or less) at the cost of preprocessing time and/or more texture memory, but the original technique is still pretty effective. Performance became pretty good starting with
DX10 hardware.
For shear angles... it's a good idea to clamp the maximum angle that you'll raycast at for performance reasons, to avoid skipping around texture memory too much while searching. Silhouettes can
be accomplished by combining discard with either geometry fins that stick out from the edges (allows the silhouette to stick out from the surface) or no fins (the silhouette can only be inset
into the surface). I doubt many games would actually implement this, though, because the discard would hurt performance by messing with fast depth culling. The typical game use case is making
rocks and other bumpy things stick out from the ground, where you wouldn't be able to notice the silhouette.
So just out of curiosity. How does it stack up to just tessellating the surface with a geometry shader nowadays? I don't know much about geo shaders. I just found out the other day that
apparently they run after the vertex shader, which is not how I had imagined it. Anyway. Is a geometry shader good for a lot in screen space? I guess the down side would be generating a bunch of
self-defeating pixels along a silhouette??
Last edited by michagl; 09-23-2012 at 07:12 PM.
God have mercy on the soul that wanted hard decimal points and pure ctor conversion in GLSL.
The geometry shader isn't designed to produce enough output triangles quickly enough to be used for that kind of detail tessellation. It has an upper limit on how many output triangles per input
triangle it can produce, and performance also degrades the more you output from a geometry shader. Tessellation shaders from GL 4.x (control and evaluation shaders) can do a good job of replacing
relief mapping with actual geometry with the same or better image quality.
Thanks, I will have to look into the tessellating shaders. I kind of assumed that was the main thing people were using geometry shaders for, and tessellating was for parametric surfaces (patches)
only. Good to know.
So what about relief mapping. Is is still the win for per pixel? I imagine anyway that rendering pixel sized triangles is never a good idea. Anyone?
God have mercy on the soul that wanted hard decimal points and pure ctor conversion in GLSL.
I kind of assumed that was the main thing people were using geometry shaders for
Geometry shaders are primarily used for:
1. Specialized point-to-quad conversion operations.
2. Layered rendering. Writing different primitives to different layers of a layered framebuffer.
3. Feeding transform feedback operations with specialized data, including multi-stream output. This probably won't be used as often now that we have Compute Shaders, but for pre-compute hardware
(3.x), some of this could still be useful.
08-23-2012, 12:33 PM #2
Junior Member Regular Contributor
Join Date
Mar 2004
Austin, TX, USA
09-16-2012, 07:40 PM #3
Member Regular Contributor
Join Date
Jan 2005
09-18-2012, 08:09 PM #4
Junior Member Regular Contributor
Join Date
Mar 2004
Austin, TX, USA
09-23-2012, 07:02 PM #5
Member Regular Contributor
Join Date
Jan 2005
09-24-2012, 01:46 PM #6
Junior Member Regular Contributor
Join Date
Mar 2004
Austin, TX, USA
09-25-2012, 07:24 PM #7
Member Regular Contributor
Join Date
Jan 2005
09-25-2012, 08:41 PM #8
Senior Member OpenGL Guru
Join Date
May 2009 | {"url":"http://www.opengl.org/discussion_boards/showthread.php/178830-Relief-mapping-artifact?p=1242809&viewfull=1","timestamp":"2014-04-21T10:13:40Z","content_type":null,"content_length":"64793","record_id":"<urn:uuid:b12ba844-703f-4c0d-add0-d8b92228885e>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00308-ip-10-147-4-33.ec2.internal.warc.gz"} |
If you've used SAS or SPSS and want a jump-start into the basics of the popular R language, next week's webinar, Introduction to R for SAS and SPSS Users will be of interest to you. While R, SAS and
SPSS are all three software systems for data analysis and graphics, the underlying concepts in R are quite different to...
Visualizing Sampling Distributions
Teacher: “How variable is your estimate of the mean?” Student: “Uhhh, it’s not. I took a sample and calculated the sample mean. I only have one number.” Teacher: “Yes, but what is the standard
deviation of sample means?” Student: “What do you mean means, I only have the one friggin number.” Statisticians have a habit
Are new SEC rules enough to prevent another Flash Crash?
At 2:42PM on March 10 2010, without warning, the Dow Jones Industrial Index plunged more than 1000 points in just 5 minutes. It remains the biggest one-day decline in this stock market index in
history. On an intra-day basis, anyway: by the end of the day, the market had regained 600 points of the drop. At the time, the...
Recession forecasting III: A Better Naive Forecast
In Recession Forecasting Part II, I compared the accuracy of Hussman's recession forecasts to the accuracy of a naive forecast that assumed the current state of the recession variable would continue
next month.An anonymous comment...
rgdal + raster + RCurl = My next package
This package has been a long time in the making. In the end it’s more of a data package than a functional package, but pulling all the pieces together required me to learn some really cool packages:
raster ( which I already knew ) rgdal and RCurl. I’ll provide a littel bit of an overview
Implementation of the CDC Growth Charts in R
I implemented in R a function to re-create the CDC Growth Chart, according to the data provided by the CDC.In order to use this function, you need to download the .rar file available at this
megaupload link.Mirror: mediafire link.Then unrar the file, a...
Backtesting Part 2: Splits, Dividends, Trading Costs and Log Plots
Note: This post is NOT financial advice! This is just a fun way to explore some of the capabilities R has for importing and manipulating data. In my last post, I demonstrated how to backtest a simple
momentum-based stock trading strategy ...
Soil Series Query for SoilWeb
A map depicting the spatial distribution of a given soil series can be very useful when working on a new soil survey, updating an old one, or searching for specific soil characteristics. We have
recently added a soil series query facility to SoilWeb, w...
Simulation studies in R – Using all cores and other tips
After working more seriously with simulations I noticed some updates were necessary to my previous setup. Most notably are the following three: It is very handy to explicitly call the different
scenarios instead of using nested loops Storing intermediate results in single files obliviates the need to rerun an almost finished but crashed analysis and
Correlations among US Stocks: Is it really time to fire your adviser?
Note: This post is NOT financial advice! This is just a fun way to explore some of the capabilities R has for importing and manipulating data.The Financial Times says it's time to "Fire your Adviser"
because correlations among US stocks ar... | {"url":"http://www.r-bloggers.com/search/gis/page/182/","timestamp":"2014-04-16T13:37:55Z","content_type":null,"content_length":"38443","record_id":"<urn:uuid:3869f283-49db-48b9-bea4-adc69a20849e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00631-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solution of Equations using MATLAB
W.R. Wilcox, Clarkson University
Last revised May 13, 2010
SOLUTION OF EQUATIONS USING MATLAB
See also:
· Tutorial on symbolic mathematics using MATLAB
· Relationships between mathematical, MATLAB and Excel expressions
· MATLAB tutorials on analysis and plotting of data
· Common conversion factors for units
· Introductory MATLAB tutorials: See MATLAB help, MATLAB, getting started. The following sites also have useful tutorials: Mathworks, Florida, Louisiana, New Hampshire, Carnegie Mellon
The present tutorial deals exclusively with numerical methods of solving algebraic and trigonometric equations, both single and several simultaneously, both linear and non-linear. Analytical
solutions can be obtained using the methods described in the symbolic tutorial. When both symbolic and numerical methods work, sometimes one is simpler and sometimes the other.
In order to benefit from the following material one must copy and paste the indicated commands into MATLAB and execute them. These commands are shown in the format >> help plot, for example. The
user should look up each command in MATLAB's help in order to understand the command and the different possible syntaxes that can be used with it.
Graphical solution of one non-linear equation or two non-linear equations in explicit form
When a single equation or a pair of equations is to be solved, it is good practice to first make a plot. In this way, it can be seen immediately what the approximate solution is.
In order to plot an equation using the "plot" command it must be in the explicit form, i.e. y = f(x). If the equation can be written only in implicit form, i.e. f(x,y) = 0, then the "ezplot" command
must be used as described in the symbolic tutorial.
We consider first a single equation, i.e. f(x) = 0. As an example we will use sin^2(x) e^-x/2 – 1 = 0. Do the following steps:
1. While optional, it's a good idea to clear the Workspace memory in order to avoid the chance of MATLAB using a previous value for a variable:
>> clear
2. Guess the range of x values over which you expect a solution and generate values for the x vector over this range. Select an increment small enough to yield a smooth curve. For example:
>> x = -10:0.01:10;
gives x from -10 to +10 in increments of 0.01.
3. Generate the y = f(x) vector corresponding to these x values:
>> y = sin(x).^2.*exp(-x/2) -1;
Notice the dots before ^ and *. These dots are essential. Because x is a vector, we must use array operations rather than scalar operations (without the dot).
4. Also create a line for y = 0. The intersection of this line with the curve above gives the solution (or solutions, as in this example). To do this we create a vector with the same number of
values as x, with each value being 0.
>> y0 = zeros(1,length(x));
5. Make the plot:
>> plot(x,y,x,y0);
6. Locate the solution(s). If the two lines do not intersect, repeat with another range for x. In our example, the lines intersect more than once, and we surmise, in fact, that there are
infinitely many solutions for negative x. Let us select the solution at about x = -1. Enlarge this area of the intersection in the Figure window by Tools / Zoom In, and then clicking near the
intersection once or more. Then go to Tools / Data Cursor. Click as close as you can to the intersection. Write the result down to compare with that obtained by using the "fzero" command
Next let us consider a pair of explicit equations, y = sin^2(x) e^-x/2 – 1 and y = 10sin(x)/x. Proceeding as above:
>> clear; x = -10:0.01:10; y1 = 10*sin(x)./x; y2 = sin(x).^2.*exp(-x/2) - 1; plot(x,y1,x,y2);
Again we see that there are many solutions (intersections), both positive and negative. (Note that when MATLAB calculated sin(0)/0 it gave a warning, but nonetheless completed the other calculations
and the plot.) As above, find the approximate solution at x near 3.5. (I get x = 3.49, y = 0.9796.) Write down your result to compare with those to be found later. Alternately, we can equate these
two equations (since both = y), move everything to the left-hand side, and find the values of x when this is 0. For the value of x found, the corresponding value for y is determined by substituting
this back into either one of the original equations.
>> clear; x = -10:0.01:10; yeq = 10*sin(x)./x - sin(x).^2.*exp(-x/2)+1;
>> yeq0 = zeros(1,length(x)); plot(x,yeq,x,yeq0);
Using the same method as above, I again got x = 3.49 and yeq = 0. Note that yeq is not y. To find the y we substitute x back into both of the simultaneous equations, which gives us two results.
(Use your own value of x rather than my value of 3.49.)
>> clear; x = 3.49; y1 = 10*sin(x)/x, y2 = sin(x)^2*exp(-x/2) - 1
The closer these are y1 and y2, the better our solution.
Numerical solution of one or two non-linear equations in explicit form (y = f(x))
We use the "fzero" command, which finds the value of x for f(x) = 0. (See >> help fzero.) We will use the same examples as above. To display the result to 14 significant figures we first change
the format to long. For the single equation, sin^2(x) e^-x/2 – 1 = 0:
>> clear; format long; x = fzero('sin(x)^2*exp(-x/2)-1', -1)
(Notice that no dot is required now before ^ or *, although using the dot causes no problem.) How does this result compare to the approximate result you obtained by the graphical method above?
Now we again consider the pair of equations, y = sin^2(x) e^-x/2 – 1 and y = 10sin(x)/x:
>> clear; x = fzero('10*sin(x)/x - sin(x)^2*exp(-x/2) + 1', 3.5)
>> y1 = 10*sin(x)/x, y2 = sin(x)^2*exp(-x/2) - 1
As before, we see that the numerical method is more accurate. So why bother with the graphical method at all? The reason is, to see about where you need to have "fzero" seek a solution!
Finding a particular solution when there are infinitely many
There are equations with an infinite number of solutions, for example sin(x) = 1/2.
It is helpful to see some of these solutions by plotting y = sin(x) - 1/2 and then observing where y has values of zero:
>> clear, syms x; eq='y - sin(x) + 1/2'; ezplot(eq,[-6,6,-2,1])
>> hold on; eq0='0'; ezplot(eq0); hold off
The "hold on" command tells MATLAB to write the subsequent plots on top of the first one, rather than replacing it. Plotting 0 puts a horizontal line on the graph. Intersections of the sine wave
with this line represent solutions of the first equation. Approximate values can be obtained in the Figure window by clicking on Tools / Zoom In, clicking on a desired intersection to enlarge that
area of the graph, clicking on Tools / Data Cursor, and then clicking on that intersection again to give the approximate values of x and y there. Try this on the first two intersections to the right
of the origin (x > 0). Write down your results to compare with those found below.
Here are the steps to find a solution numerically:
1. Convert the equation to a function consisting of everything moved to the left-hand side, e.g. func2(x) = sin(x)-1/2.
2. Create this function in the MATLAB editor, save to the Current Directory, and make certain the current directory is in the path (File / Set Path). For example:
function out = func2(x)
out = sin(x) - 1/2;
3. Plot this function over the range of interest to see where the solutions are, e.g.:
>> clear, x = - 4:0.01:4; plot(x,func2(x))
4. Use one of the following formats to find the solution:
>> fzero('func2',3) or >> fzero(@func2,3) or >> fzero('func2', [2,3])
MATLAB finds the value of x nearest 3 that gives func2 = 0. Notice (“whos” or Workspace) that the result is double, so no conversions are necessary for subsequent numeric calculations.
Numerical solution of two or more equations in implicit forms (e.g., f1(x,y)=0, f2(x,y)=0)
We use the "fminsearch" command. Consider as an example the following two implicit equations (already in MATLAB format):
0.5 = (200+3*x+4*y)^2 / ((20+2*x+3*y)^2 * x)
10 = (20+2*x+3*y)*y / x
Use the following steps:
1. Move everything to the left-hand side in all equations.
2. Using MATLAB's edit window, create a function that calculates the deviations from 0 when arbitrary values are used for the variables. The final output of the function is the sum of the squares
of these deviations. Following is the code for our example, and would be saved as meth_reac.m in MATLAB's Current Directory, which must also be in the Path (File/Set Path).
function out=meth_reac(guesses)
% function to calculate the values of x and y
% satisfying the two equibiliubrium relations
% for a methanol reactor, CO + 2H2 = CH30H
% CO2 + 3H2 = CH3OH + H2O (coal to chems case study)
% x is the molar flow rate of CO, y of CO2 (kmol/h)
% After saving: >> fminsearch('meth_reac',[25,3]); x=ans(1), y=ans(2)
% Any positive values work as initial guesses in this example
x=guesses(1); y=guesses(2);
eq1 = 0.5 -(200+3*x+4*y)^2/(20+2*x+3*y)^2/x;
eq2 = 10 -(20+2*x+3*y)*y/x;
3. In the Command window, type "fminsearch('function name',[guessed values]). For our example,
>> fminsearch('meth_reac',[25,3]); x=ans(1), y=ans(2)
You can check your result by using the symbolic "solve" command, as follows for our example:
>> clear, syms x y
>> eq1='0.5=(200+3*x+4*y)^2/(20+2*x+3*y)^2/x'
>> eq2='10=(20+2*x+3*y)*y/x'
>> [x y]=solve(eq1,eq2,x,y)
Note that several solutions are produced, from which you have to select the physically reasonable one. If this is the first solution, then to obtain double-precision numerical variables we use:
>> x=double(x(1)), y=double(y(1))
(The equations in this example are based on the equilibrium relationships for a chemical reactor making methanol from CO, CO[2] and H[2] as part of an existing plant that produces a variety chemicals
starting with the gasification of coal, i.e. coal reacting with water at high temperature.)
Numerical solution of simultaneous linear equations using the matrix backslash (\) method
MATLAB uses Gaussian elimination to solve systems of linear equations via the backslash matrix command (\). As an example, consider the following three equations:
The procedure is as follows:
1. Each equation must first have each variable in the same order on the left-hand side, with the constants on the right-hand side. This should be done using pencil and paper. Students commonly
assume they can do this while entering the information in MATLAB, and frequently make mistakes doing so. For our example, we write:
2x - 3y + 4z = 5
x + y + 4z = 10
3x + 4y - 2z = 0
2. In MATLAB, create matrices of coefficients for the variables and constants on the right-hand sides. Thus for our example:
>> clear, C = [2,-3,4; 1,1,4; 3,4,-2], B = [5; 10; 0]
3. Solve for x, y and z using:
>> A = C\B, x = A(1), y = A(2), z = A(3)
Check your results using: >> C*A - B
(Note that these are matrix operations; you MUST NOT use a dot (.) before * or \)
Compare this result with that obtained using MATLAB symbolic mathematics for the same system of equations.
Another application of the backslash matrix operator is to fit data to a non-linear equation. We use an example with the following y versus data:
>> clear; t = [0, .3, .8, 1.1, 1.6, 2.3]'; y = [0.5, 0.82, 1.14, 1.25, 1.35, 1.40]'; plot(t,y,'o')
Notice that this does not give a straight line plot. Instead we will try a quadratic fit (without using the Basic Fitting tool on the graph window or the "polyfit" command). That is, we want to
find the values of the coefficients (a[0], a[1], a[2]) that[ ]best fit these data to the form y = a[0] + a[1]t + a[2]t^2. We have 6 values each of t and y, from which we generate 6 equations in the
format a[0] + ta[1] + t^2a[2] = y matching that required for solution of simultaneous linear equations:
a[0] = 0.5
a[0] + 0.3a[1 ]+0.3^2a[2] = 0.82
a[0] + 0.8a[1 ]+0.8^2a[2] = 1.14
a[0 ]+ 1.1a[1 ]+1.1^2a[2 ]= 1.25
a[0 ]+ 1.6a[1 ]+1.6^2a[2 ]= 1.35
a[0] + 2.3a[1 ]+2.3^2a[2] = 1.40
Thus we have six simultaneous equations, but only 3 unknowns. We cannot expect to get values for a[0], a[1] and a[2] that would exactly satisfy all 6 equations. The backslash operator gives the
least squares values instead. We must put these 6 equations in the form C*A=B, as follows (assuming t and y have already been entered, as above):
>> C(:,1) = ones(length(t),1)
>> C(:,2)=t
>> C(:,3)=t.^2
>> B = y
>> A = C\B
>> a0 = A(1), a1 = A(2), a2 = A(3)
>> tfit=0:0.1:2.3; yfit = a0 + a1*tfit + a2*tfit.^2;
>> plot(t,y,'o',tfit,yfit);
This is a general method and not confined to fitting data to polynomials. See fitting data to non-polynomial equations. | {"url":"http://people.clarkson.edu/~wwilcox/ES100/eqsolve.htm","timestamp":"2014-04-18T11:06:21Z","content_type":null,"content_length":"79863","record_id":"<urn:uuid:642f67c5-0e07-41fd-b9f6-d0e9d5199418>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00078-ip-10-147-4-33.ec2.internal.warc.gz"} |
WWW site of Hillman) that covers the plane with fat and thin rhombi according to a set of "matching rules" that force a particular aperiodic structure. Diffraction from such structures is nicely
described in a web site maintained by Lifshitz. Widom has shown that randomly tiling the plane with the same set of shapes spontaneously generates structures of appropriate symmetry and
quasiperiodicity. The configuational entropy of tiling the plane at random may help explain thermodynamic stability of real quasicrystalline alloys.
D.E. Knuth. Our random tiling theory implies an upper bound of log(2) for the tiling entropy per vertex, consistent with a conjecture by Knuth. Click here for a preprint on this research. The figure
above displays one particular rhombus tiling from an ensemble of 18-fold rotational symmetry. The tiling (left) corresponds (examine color coding of tiles and crossings of lines) to the bubble sort
on 9 element lists (right).
Papers and posters describing this work are available on the web. | {"url":"http://euler.phys.cmu.edu/widom/research/qc/quasi.html","timestamp":"2014-04-20T11:27:21Z","content_type":null,"content_length":"4490","record_id":"<urn:uuid:abe6a884-bcd0-485d-ad4c-779bc106dc2c>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00592-ip-10-147-4-33.ec2.internal.warc.gz"} |
Section: LAPACK routine (version 1.5) (l) Updated: 12 May 1997 Local index Up
PCPTTRF - compute a Cholesky factorization of an N-by-N complex tridiagonal symmetric positive definite distributed matrix A(1:N, JA:JA+N-1)
N, D, E, JA, DESCA, AF, LAF, WORK, LWORK, INFO )
INTEGER INFO, JA, LAF, LWORK, N
INTEGER DESCA( * )
COMPLEX AF( * ), E( * ), WORK( * )
REAL D( * )
PCPTTRF computes a Cholesky factorization of an N-by-N complex tridiagonal symmetric positive definite distributed matrix A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the
factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in
subsequent calls to PCPTTRS to solve linear systems.
The factorization has the form
P A(1:N, JA:JA+N-1) P^T = U' D U or
P A(1:N, JA:JA+N-1) P^T = L D L',
where U is a tridiagonal upper triangular matrix and L is tridiagonal lower triangular, and P is a permutation matrix.
This document was created by man2html, using the manual pages.
Time: 21:52:09 GMT, April 16, 2011 | {"url":"http://www.makelinux.net/man/3/P/pcpttrf","timestamp":"2014-04-17T16:12:16Z","content_type":null,"content_length":"9067","record_id":"<urn:uuid:897d060e-f30c-4737-8de5-6eca82a505dd>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00487-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Solve the following system. x - y = 10 x + y = 8 Answer (9, -1) (9, 1) (-1, 9) (-9, 1)
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4f5fe39be4b038e0718066e3","timestamp":"2014-04-21T10:28:19Z","content_type":null,"content_length":"53949","record_id":"<urn:uuid:1756972a-427c-4776-8cd5-1847bfd9614b>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00034-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Ordinary and Extraordinary Refractive Indices in a 2D Anisotropic Crystal
This Demonstration shows how the refractive index of an e-ray and an o-ray change with the angle with respect to the optic axis in a 2D anisotropic crystal. It shows the shapes of the refractive
indices of both negative and positive crystals and the plots show how the values of the refractive indices change in different directions inside a crystal.
The figure on the left shows the shapes of the refractive indices for both ordinary and extraordinary rays in an anisotropic crystal (o-ray in blue and e-ray in red). The rays have the same
refractive index on the vertical optic axis. The green line indicates the direction inside of the crystal along which the refractive indices are considered. By varying (defined as the angle between
the optic axis and the green line), we could change our direction inside the crystal. For different angles, the refractive indices for the two rays are different, which is why we get two different
shapes. The o-ray refractive index is the same for all angles, but for the e-ray it has the shape of an ellipse. The difference between the values of the refractive indices of the two rays is the
maximum at 90 degrees.
The second figure shows qualitatively how the values of the refractive indices vary for different directions in a crystal (which is described by the angle ). If we assume that light is emitted from
the origin in all directions, then (in the 2D case) the variation of the refractive index with the direction inside the crystal will give different velocities in different directions. The e-ray and
o-ray move at different velocities. If we know the values of the refractive indices, we can determine the velocity of the ray in that particular direction.
"ne(90)" represents the value of the e-ray refractive index for and "" shows that of the o-ray. Varying shows the calculated value. The "ne" value for any angle is shown below the plots. We also
could know the shapes of the refractive index for positive and negative crystals. For positive crystals we should change the values of "ne(90)" and "" such that "ne(90)" > "". If the values are
otherwise, we are dealing with negative crystals.
This Demonstration also shows the regions inside a anisotropic crystal where the velocities of the rays are the minimum and maximum. From these plots it is obvious that for positive crystals, the
minimum velocity of light is at 90 degrees and the maximum is along the optic axis. And for negative crystals, we get the minimum velocity of light along the optic axis and the maximum at 90
degrees, since the velocity is inversely proportional to the refractive index. | {"url":"http://www.demonstrations.wolfram.com/OrdinaryAndExtraordinaryRefractiveIndicesInA2DAnisotropicCry/","timestamp":"2014-04-19T14:30:37Z","content_type":null,"content_length":"44660","record_id":"<urn:uuid:4f3a2837-8ecb-41de-b937-acf59f2d5c33>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00619-ip-10-147-4-33.ec2.internal.warc.gz"} |
Interesting Numbers
The gematria value for the Hebrew, four letter name of God is 26. The value for the Hebrew word, Adam, is 45. The difference between 45 and 26 is 19 which is the Hebrew value for Eve. 26 squared
plus 45 squared equals 2701; the gematria value of Genesis 1:1. Of course, these word-number relationship are well-known and are often taught to demonstrate the excellence of the Scriptures and
as a help in memorization.
However, there are additional, interesting values that can be derived from these same three word-numbers; 45, 26 and 19, that are less well known .. For example:
45 divided by 19 equals 2.36842105263158. The first four digits are 2368, the gematria of Jesus Christ in Greek.
45 divided by 26 equals 1.73076923076924. The first four digits are 1730, the Strong's number(H) for Love or Beloved.
19 divided by 26 equals .73076923076924. The first four digits are 7303, the Strong's number(H) for Spirit.
19 divided by 45 equals .42222222222222
The difference between these last two quotients is .30854700854702. The first four digits are 3085, the Strong's number(G) for Redemption.
The difference between the middle two quotients is 1. If we add 1 to 45 we get 46.
46 is the number of human chromosomes and 46 is also the Greek gematria value for Adam.
There is much more here but I hope that these few examples help show the wisdom, glory and precision of the Word.
Thank you for maintaining such a wonderful site. I hope I have not made any errors. | {"url":"http://www.fivedoves.com/letters/jan2011/janet15.htm","timestamp":"2014-04-20T00:44:47Z","content_type":null,"content_length":"4618","record_id":"<urn:uuid:841f7c02-8640-4d7d-9389-ea81300f41b2>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00300-ip-10-147-4-33.ec2.internal.warc.gz"} |
Density-based indexing for approximate nearest-neighbor queries
Density-based indexing for approximate nearest-neighbor queries (1999)
Download Links
Other Repositories/Bibliography
by Kristin P. Bennett , Usama Fayyad , Dan Geiger
Venue: In Proc. KDD
Citations: 33 - 2 self
author = {Kristin P. Bennett and Usama Fayyad and Dan Geiger},
title = {Density-based indexing for approximate nearest-neighbor queries},
booktitle = {In Proc. KDD},
year = {1999}
We consider the problem of performing Nearest-neighbor queries efficiently over large high-dimensional databases. To avoid a full database scan, we target constructing a multidimensional index
structure. It is well-accepted that traditional database indexing algorithms fail for high-dimensional data (say d> 10 or 20 depending on the scheme). Some arguments have advocated that
nearest-neighbor queries do not even make sense for high-dimensional data. We show that these arguments are based on over-restrictive assumptions, and that in the general case it is meaningful and
possible to build an index for such queries. Our approach, called DBIN, scales to high-dimensional databases by exploiting statistical properties of the data. The approach is based on statistically
modeling the density of the content of the data table. DBIN uses the density model to derive a single index over the data table and requires physically rewriting data in a new table sorted by the
newly created index (i.e. create a clustered-index). The indexing scheme produces a mapping between a query point (a data record) and an ordering on the clustered index values. Data is then scanned
according to the index. We present theoretical and empirical justification for DBIN. The scheme supports a family of distance functions which includes the traditional Euclidean distance measure. 1
8049 Maximum likelihood from incomplete data via the EM algorithm - Dempster, Laird, et al. - 1977
4785 Neural Networks for Pattern Recognition - Bishop - 1995
3911 Pattern Classification and Scene Analysis - Duda, Hart - 1973
2137 Density estimation for statistics and data analysis - Silverman - 1986
713 Approximate Nearest Neighbor: Towards Removing the Curse of Dimensionality - Indyk, Motwani - 1998
505 The X-tree: an index structure for high-dimensional data - Berchtold, Keim, et al. - 1996
409 Fastmap: a fast algorithm for indexing, data mining and visualization of traditional and multimedia datasets - Faloutsos, Lin - 1995
380 The K-D-B tree: A search structure for large multidimensional dynamic indexes - Robinson - 1981
375 The SR-tree: an index structure for high-dimensional nearest neighbor queries - Katayama, Satoh - 1997
364 Multi-dimensional density estimation - Scott, Sain - 2004
298 R.: Similarity indexing with the ss-tree - White, Jain - 1996
292 When is “nearest neighbor” meaningful - Beyer, Goldstein, et al. - 1999
270 Pattern Classi cation and Scene Analysis - Duda, Hart - 1973
267 Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques - Dasarathy - 1991
244 Scaling clustering algorithms to large databases - Bradley, Fayyad, et al. - 1998
193 C.: The tv-tree: an index structure for high-dimensional data - Lin, Jagadish, et al. - 1994
184 H.P.: A cost model for nearest neighbor search in high-dimensional data space. In: PODS - Berchtold, Böhm, et al. - 1997
177 The pyramid-technique: towards breaking the curse of dimensionality - Berchtold, Bhm, et al. - 1998
166 Optimal multi-step k-nearest neighbor search - Seidl, Kriegel - 1998
46 Fast nearest neighbor search in high-dimensional space - Berchtold, Ertl, et al. - 1998
33 R-trees: A Dynamic Index Structure for Spatial Searching - Gutman - 1984
33 Quadratic Forms in Random Variables - Mathai, Provost - 2003
32 High-dimensional similarity joins - Shim, Srikant, et al.
21 Scaling EM (expectation maximization) clustering to large databases - Bradley, Fayyad, et al. - 1998
16 High-dimensional index structures, database support for next decade’s applications (tutorial - Berchtold, Keim - 1998
3 Re ning initial points for k-means clustering - Bradley, Fayyad - 1998
3 Cluster analysis of multivariate data: e ciency versus interpretability of classi cations, Biometrics 21 - Forgy - 1965
2 Algorithm as 204: The distribution of a positive linear combination of chi-square random variables - Farebrother - 1983
2 Computationally e cient methods for selecting among mixtures of graphical models, with discussion - Thiesson, Meek, et al. - 1999
1 The R*-tree: An e cient and robust access method for points and rectangles - Katayama, Satoh - 1998
1 Fast nearest neighbor search in high-dimensional space - PI - 1998
1 Density based indexing for nearest-neighbor queries - Bennett, Fayyad, et al. - 1998 | {"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.295.2485","timestamp":"2014-04-18T12:25:43Z","content_type":null,"content_length":"33489","record_id":"<urn:uuid:8085400a-f525-4004-af86-f09cc7598f8c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00286-ip-10-147-4-33.ec2.internal.warc.gz"} |
Einstein Explains Why Gravity Is Universal
The ancient Greeks believed that heavier objects fall faster than lighter ones. They had good reason to do so; a heavy stone falls quickly, while a light piece of paper flutters gently to the ground.
But a thought experiment by Galileo pointed out a flaw. Imagine taking the piece of paper and tying it to the stone. Together, the new system is heavier than either of its components, and should fall
faster. But in reality, the piece of paper slows down the descent of the stone.
Galileo argued that the rate at which objects fall would actually be a universal quantity, independent of their mass or their composition, if it weren't for the interference of air resistance. Apollo
15 astronaut Dave Scott once illustrated this point by dropping a feather and a hammer while standing in vacuum on the surface of the Moon; as Galileo predicted, they fell at the same rate.
Subsequently, many scientists wondered why this should be the case. In contrast to gravity, particles in an electric field can respond very differently; positive charges are pushed one way, negative
charges the other, and neutral particles not at all. But gravity is universal; everything responds to it in the same way.
Thinking about this problem led Albert Einstein to what he called "the happiest thought of my life." Imagine an astronaut in a spaceship with no windows, and no other way to peer at the outside
world. If the ship were far away from any stars or planets, everything inside would be in free fall, there would be no gravitational field to push them around. But put the ship in orbit around a
massive object, where gravity is considerable. Everything inside will still be in free fall: because all objects are affected by gravity in the same way, no one object is pushed toward or away from
any other one. Sticking just to what is observed inside the spaceship, there's no way we could detect the existence of gravity.
Einstein, in his genius, realized the profound implication of this situation: if gravity affects everything equally, it's not right to think of gravity as a "force" at all. Rather, gravity is a
feature of spacetime itself, through which all objects move. In particular, gravity is the curvature of spacetime. The space and time through which we move are not fixed and absolute, as Newton would
have had it; they bend and stretch due to the influence of matter and energy. In response, objects are pushed in different directions by spacetime's curvature, a phenomenon we call "gravity." Using a
combination of intimidating mathematics and unparalleled physical intuition, Einstein was able to explain a puzzle that had been unsolved since Galileo's time. | {"url":"http://edge.org/print/response-detail/10098","timestamp":"2014-04-17T06:58:23Z","content_type":null,"content_length":"16082","record_id":"<urn:uuid:477b40aa-7d59-4b90-aaab-f8996337e362>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00265-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fixing non positive definite correlation matrices using R
October 14, 2012
By a modeler's tribulations, gopi goteti's web log
When a correlation or covariance matrix is not positive definite (i.e., in instances when some or all eigenvalues are negative), a cholesky decomposition cannot be performed. Sometimes, these
eigenvalues are very small negative numbers and occur due to rounding or due to noise in the data. In simulation studies a known/given correlation has to be imposed on an input dataset. In such cases
one has to deal with the issue of making a correlation matrix positive definite. Following are papers in the field of stochastic precipitation where such matrices are used.
• FP Brissette, M Khalili, R Leconte, Journal of Hydrology, 2007, “Efficient stochastic generation of multi-site synthetic precipitation data”
• GA Baigorria, JW Jones, Journal of Climate, 2010, “GiST: A stochastic model for generating spatially and temporally correlated daily rainfall data”
• M Mhanna and W Bauwens, International Journal of Climatology, 2012, “A stochastic space-time model for the generation of daily rainfall in the Gaza Strip”
A Solution
The paper by Rebonato and Jackel, “The most general methodology for creating a valid correlation matrix for risk management and option pricing purposes”, Journal of Risk, Vol 2, No 2, 2000, presents
a methodology to create a positive definite matrix out of a non-positive definite matrix.
Below is my attempt to reproduce the example from Rebonato and Jackel (2000). The correlation matrix below is from the example.
origMat <- array(c(1, 0.9, 0.7, 0.9, 1, 0.3, 0.7, 0.3, 1), dim = c(3,
origEig <- eigen(origMat)
## [1] 2.296728 0.710625 -0.007352
As you can see, the third eigenvalue is negative. Trying a cholesky decomposition on this matrix fails, as expected.
cholStatus <- try(u <- chol(origMat), silent = FALSE)
cholError <- ifelse(class(cholStatus) == "try-error", TRUE, FALSE)
Here, I use the method of Rebonato and Jackel (2000), as elaborated by Brissette et al. (2007), to fix the correlation matrix. As per the method, replace the negative eigenvalues with 0 (or a small
positive number as Brissette et al. 2007 suggest), then normalize the new vector.
# fix the correl matrix
newMat <- origMat
iter <- 0
while (cholError) {
iter <- iter + 1
cat("iteration ", iter, "\n")
# replace -ve eigen values with small +ve number
newEig <- eigen(newMat)
newEig2 <- ifelse(newEig$values < 0, 0, newEig$values)
# create modified matrix eqn 5 from Brissette et al 2007, inv = transp for
# eig vectors
newMat <- newEig$vectors %*% diag(newEig2) %*% t(newEig$vectors)
# normalize modified matrix eqn 6 from Brissette et al 2007
newMat <- newMat/sqrt(diag(newMat) %*% t(diag(newMat)))
# try chol again
cholStatus <- try(u <- chol(newMat), silent = TRUE)
cholError <- ifelse(class(cholStatus) == "try-error", TRUE, FALSE)
## iteration 1
## iteration 2
# final check
## [1] 2.290e+00 7.096e-01 -1.332e-15
Unresolved Issue(?)
However, as you can see, the third eigenvalue is still negative (but very close to zero). The “chol” function in R is not giving an error probably because this negative eigenvalue is within the
“tolerance limits”. I would like to know what these “tolerance limits” are.
daily e-mail updates
news and
on topics such as: visualization (
), programming (
Web Scraping
) statistics (
time series
) and more...
If you got this far, why not
subscribe for updates
from the site? Choose your flavor:
, or | {"url":"http://www.r-bloggers.com/fixing-non-positive-definite-correlation-matrices-using-r-2/","timestamp":"2014-04-18T16:00:32Z","content_type":null,"content_length":"40888","record_id":"<urn:uuid:557eae48-0d11-4530-a0e3-129a5b9ea368>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00552-ip-10-147-4-33.ec2.internal.warc.gz"} |
Click here to see the number of accesses to this library.
lib meschach
by David E. Stewart des@thrain.anu.edu.au
for numerical linear algebra, dense and sparse, with permutations,
, error handling, input/output,
, dynamic allocation, de-allocation, re-sizing and copying of objects,
, dense complex matrices and vectors as well as real matrices
, and vectors,
, input and output routines for these objects, and MATLAB
, save/load format,
, error/exception handling,
, basic numerical linear algebra -- linear combinations, inner
, products, matrix-vector and matrix-matrix products,
, including transposed and adjoint forms,
, vector min, max, sorting, componentwise products, quotients,
, dense matrix factorise and solve -- LU, Cholesky, LDL^T, QR,
, QR with column pivoting, symmetric indefinite (BKP),
, dense matrix factorisation update routines -- LDL^T, QR
, (real matrix updates only),
, eigenvector/eigenvalue routines -- symmetric, real Schur
, decomposition, SVD, extract eigenvector,
, sparse matrix "utility" routines,
, sparse matrix factorise and solve -- Cholesky, LU and BKP
, (Bunch-Kaufman-Parlett symmetric indefinite factorisation),
, sparse incomplete factorisations -- Cholesky and LU,
, iterative techniques -- pre-conditioned conjugate gradients,
, CGNE, LSQR, CGS, GMRES, MGCR, Lanczos, Arnoldi,
, allowance for "procedurally defined" matrices in the iterative
, techniques,
, various "torture" routines for checking aspects of Meschach,
, memory tracking for locating memory leaks
lang C
prec double
# true master copy: ftp thrain.anu.edu.au:pub/meschach/
file dcg.shar
for Preconditioned Conjugate Gradient
by Mark Seager, LLNL.
file sge.shar
for LINPACK functions SGECO/SGEFA/SGESL
, (dense matrix conditon number estimate, factorization and solution
, routines) and some of the BLAS in C. There is a driver which shows
, how to set up column oriented matrices in C for these routines.
by Mark K. Seager (seager@lll-crg.llnl.gov) 4/8/88.
gams d2a1
file frac
for finds rational approximation to floating point value
by Robert Craig, AT&T Bell Labs - Naperville
ref Jerome Spanier and Keith B. Oldham, "An Atlas of Functions," Springer-Verlag, 1987, pp. 665-7.
gams a2, a6c
file brent.shar
for Brent's univariate minimizer and zero finder.
by Oleg Keselyov <oleg@ponder.csci.unt.edu, oleg@unt.edu> May 23, 1991
ref G.Forsythe, M.Malcolm, C.Moler, Computer methods for mathematical computations.
# Contains the source code for the program fminbr.c and
# zeroin.c, test drivers for both, and verifivation protocols.
see serv.shar
gams f1b, g1a2
file serv.shar
for numerical programming in C
# These files are needed to compile the programs in packages
# brent.shar *vector.shar shar.shar task_env.shar
# Verification programs and data are included as well.
by Oleg Keselyov <oleg@ponder.csci.unt.edu, oleg@unt.edu> May 23, 1991
gams l6a14, n1, r1
file vector.shar
for Low and Intermediate Level functions to manage vectors in C.
# In fact, the vector declaration as a special structure and
# a wide set of procedures to handle it define a class (in
# the sence of C++ or SMALLTALK). It is still a common C,
# however.
# Features: high reliability and fool-proof checking, the
# user can operate on single elements of the vector in the
# customary C manner, or he may wish to handle the vector as
# a whole (as an atomary object) with high-effective functions
# (that can clear the vector, assign vectors or obtain their
# scalar product, find the vector norm(s), etc.).
by Oleg Keselyov <oleg@ponder.csci.unt.edu, oleg@unt.edu> May 23, 1991
gams d1a
file hl_vector.shar
for high level vector operations. Involves the
, Aitken-Lagrange interpolation over the table of uniform or
, arbitrary mesh, Hook-Jeevse local multidimensional minimizer.
by Oleg Keselyov <oleg@ponder.csci.unt.edu, oleg@unt.edu> May 23, 1991
gams e2a, g1b2
file task_env.shar
for Resource facility, or managing global "private" parameters
, that specify various program "options". It help keep
, reasonable number of arguments in function calls.
by Oleg Keselyov <oleg@ponder.csci.unt.edu, oleg@unt.edu> May 23, 1991
gams z
file numcomp-free-c
for index of free source code for numerical computation written in C or C++
by Ajay Shah
gams z
file sgefa.shar
# see also ode/cvode.tar.Z
file bcc.tar.gz
for bounds checking patches for gcc
by Richard W.M. Jones <rwmj@doc.ic.ac.uk>
alg tracks heap, stack, and static objects and checks all references
size 692 kB
ref ftp://dse.doc.ic.ac.uk/pub/misc/bcc/bounds-checking-*-*.tgz
# requires source for gcc | {"url":"http://netlib.org/c/","timestamp":"2014-04-16T04:55:20Z","content_type":null,"content_length":"5322","record_id":"<urn:uuid:88394174-42a4-4ba8-b889-1e965600ef34>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00060-ip-10-147-4-33.ec2.internal.warc.gz"} |
Droid Tesla
About Droid Tesla Software
DroidTesla is a simple and powerful SPICE engine. SPICE is an acronym for Simulation Program with Integrated Circuit Emphasis and was inspired by the need to accurately model devices used in
integrated circuit design.
DroidTesla simulator solves basic resistive circuits using Kirchoff’s Current Law (KCL) in much the same way a student in a circuits class would,the simulator systematically forms a matrix in
accordance with KCL and then proceeds to solve for the unknown quantities using various algebraic techniques such as Gaussian elimination and sparse matrix techniques.
For non-linear components, such as the diode and BJT ,DroidTesla engine searching for the approximate solution by making an initial guess at an answer and then improving the solution with successive
calculations built upon this guess. This is called an iterative process.DroidTesla simulation uses the Newton-Raphson iterative algorithm to solve circuits with non-linear I/V relationships.
For reactive elements(capacitors and inductors),the DroidTesla uses numeric integration methods to approximate the state of the reactive elements as a function of time. DroidTesla offers the
Trapezoidal(I'll add a GEAR method later) integration methods to approximate the state of the reactive elements. Although for most circuits, both methods will provide almost identical results, it is
generally regarded that the Gear method is more stable, but trapezoidal method is faster and more accurate.
Droid Tesla For Now Can Simulate
• -Resistor
• -Capacitor
• -Inductor
• -Potentiometer
• -Light Bulb
• -Ideal operational amplifier
• -Bipolar junction transistor (NPN PNP)
• -MOSFET N-channel depletion
• -MOSFET N-channel enhancement
• -MOSFET P-channel depletion
• -MOSFET P-channel enhancement
• -PN Diode
• -PN Led diode
• -PN Zener diode
• -AC current source
• -DC current source
• -AC voltage source
• -DC voltage(battery) source
• -CCVS - current controlled voltage source
• -CCCS - current controlled current source
• -VCVS - voltage controlled voltage source
• -VCCS - voltage controlled current source
• -Square wave voltage source
• -Triangle wave voltage source
• -AC ampermeter
• -DC ampermeter
• -AC voltmeter
• -DC voltmeter
• -Two channe oscilloscope
• -SPST Switch
• -SPDT Switch
• -Voltage controlled switch
• -Current controlled switch
• -AND
• -NAND
• -OR
• -NOR
• -NOT
• -XOR
• -XNOR
• -JK flip-flop
• -7 Segment Display
• -D flip-flop
• -Relay
• -IC 555
• -Transformer
• -Graetz Circuit | {"url":"http://droidtesla.com/","timestamp":"2014-04-20T18:23:33Z","content_type":null,"content_length":"13392","record_id":"<urn:uuid:a6efa54f-bf2a-4103-932e-49e8160448c7>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00064-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: a cubic fitting curve in a set of (x, y, z) data ?
Replies: 3 Last Post: Mar 21, 2013 5:41 PM
Messages: [ Previous | Next ]
Re: a cubic fitting curve in a set of (x, y, z) data ?
Posted: Mar 21, 2013 5:41 PM
"Kuo-Hsien" wrote in message <kifjqt$qq6$1@newscl01ah.mathworks.com>...
> "Torsten" wrote in message <kied7n$oom$1@newscl01ah.mathworks.com>...
> > "Kuo-Hsien" wrote in message <kidpp3$440$1@newscl01ah.mathworks.com>...
> > > Dear all,
> > >
> > > I have a set of scatter data (x, y, z).
> > >
> > > How to add a cubic fitting curve in this 3d plot?
> > >
> > > Thanks for the hint.
> > >
> > > Michael
> >
> > A curve for 3d data ?
> >
> > Best wishes
> > Torsten.
> Hi Torsten,
> Yes. the 3d dataset looks like a narrow galaxy band (scatters) across the xyz plot. I'd like to just add a cubic fitting line to stand for this narrow band scatters.
> Any ideas?
Essentially, it sounds like you wish to fit a "cubic" model to data
with noise in all three variables. So a cubic errors in variables
model. Not at all trivial.
Worse, a "cubic" curve makes essentially little sense anyway
in three dimensions. What model are you posing? What is a
function of what? When you say cubic, this has to mean
something in terms of mathematics.
It sounds like what you really want is some general curve that
follows through the center of your curvilinear cloud, in itself
not a trivial task either.
I'd suggest grouping your points by averaging into a relatively
few points, spaced apart. Now use a tool that can interpolate
a curve through those points in three dimensions. cscvn is
such a tool, or my own interparc from the file exchange.
Date Subject Author
3/20/13 a cubic fitting curve in a set of (x, y, z) data ? Kuo-Hsien
3/21/13 Re: a cubic fitting curve in a set of (x, y, z) data ? Torsten
3/21/13 Re: a cubic fitting curve in a set of (x, y, z) data ? Kuo-Hsien
3/21/13 Re: a cubic fitting curve in a set of (x, y, z) data ? John D'Errico | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2442564&messageID=8715988","timestamp":"2014-04-21T00:25:18Z","content_type":null,"content_length":"21266","record_id":"<urn:uuid:a58fea0f-5bb9-4bfc-b2c4-c945495d73bb>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00056-ip-10-147-4-33.ec2.internal.warc.gz"} |
The height of a tree - Math Central
T is the top of the tree, B is the bottom of the tree and E is the end of the shadow. Triangle TBE is a right triangle so what trig function relates the measure of the angle BET (30^o), the length of
BE (25 feet) and the length of TB? Solve for the length of TB, the height of the tree. | {"url":"http://mathcentral.uregina.ca/QQ/database/QQ.09.07/h/danice1.html","timestamp":"2014-04-16T13:03:09Z","content_type":null,"content_length":"6993","record_id":"<urn:uuid:8725383b-6362-4eaa-85f4-af7b80f82f0f>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00295-ip-10-147-4-33.ec2.internal.warc.gz"} |
How can I prove that the derived couple of the homotopy exact couple is an invariant?
up vote 2 down vote favorite
I'm working on (yet) an(other) exercise from Mosher & Tangora's "Cohomology Operations and Applications to Homotopy Theory". This one is about the homotopy exact couple, which is defined for a
complex $K$ by $D_{p,q}=\pi_{p+q}(K^p)$ and $E_{p,q}=\pi_{p+q}(K^p,K^{p-1})$. So that we have relative Hurewicz, we assume K to be simply connected. As stated in the title, the object of the exercise
is to show that this is not a homotopy invariant but that its derived couple is.
The motivating example I've got in my head (let me know if you've got a better one) is $S^2$ realized either with 1 vertex, 1 edge, and 2 faces, or with 1 edge and 1 face. This already easily proves
that the homotopy exact couple itself is not an invariant. For the harder part, I've drawn the (presumably standard) grid with rows like $\cdots \rightarrow D_{p,q} \rightarrow E_{p,q} \rightarrow D_
{p-1,q} \rightarrow \cdots$ connected by vertical inclusion maps $D_{p,q} \rightarrow D_{p+1,q-1}$, and I can see how these both give the same derived couple, but I'm having trouble figuring out
exactly how to make this into a general argument. I begin with a homotopy equivalence $f:K \rightarrow L$, $g:L \rightarrow K$, and I can assume these maps are cellular so I get induced maps between
all corresponding groups of the homotopy exact couples associated to $K$ and $L$. But what can I say about these maps? Clearly from my motivating example the restrictions to skeleta need not be
homotopy equivalences, or even anything close. I'm pretty sure they commute with the intra-couple maps, but I haven't had any success pushing through the commutative algebra with that fact alone. It
smells like obstruction theory should be involved here since in general you'll need to move $K^p$ through $K^{p+1}$ to realize the homotopy $gf\simeq 1_K$ (consider it as a map $K\times I \rightarrow
K$, which can be assumed to be cellular), but I don't think I understand it well enough to see how (or if that's even true, I guess). Am I headed in the right direction?
P.S. I'm camping right now so I typed all of this on my phone. Might this be a first for MO? Or have people been asking math questions from their phones since before I was born...
at.algebraic-topology homotopy-theory
People used to write letters before you were born. – supercooldave Jul 5 '10 at 8:58
4 Obligatory xkcd link: xkcd.com/378 – Mariano Suárez-Alvarez♦ Jul 5 '10 at 8:59
add comment
1 Answer
active oldest votes
Let me start by making a definition: an $n$-skeleton of a space $X$ is an $n$-equivalence $X_n \to X$, where $X_n$ is an $n$-dimensional (at most) CW complex ($X$ itself need not be a CW
complex). Obviously, $n$-skeleta are not unique, but any two $n$-skeleta for the same space factor through one another: there are compositions $X_n' \to X_n \to X$ and $X_n \to X_n' \to
Let's concentrate on the $D$s. By definition, $D^2_{p,q} = \mathrm{im}( \pi_{p+q} (K_q) \to \pi_{p+q}( K_{q+1}))$. Any two $q$- and $(q+1)$-skeleta $K_q\to K_{q+1}\to K$ and ${K_q}'\to
up vote 3 K_{q+1}' \to K$ factor through one another, so
down vote $\mathrm{im}( \pi_{p+q} (K_q) \to \pi_{p+q}( K_{q+1})) \cong \mathrm{im}( \pi_{p+q} ({K_q}') \to \pi_{p+q}( K_{q+1}'))$.
This shows that $D^2_{p,q}$ is independent of the choice of CW decomposition. The isomorphism of the $E$-groups follows by the Five lemma.
EDIT: Of course the last bit of the second paragraph was ridiculous, and unnecessary; fixed now.
Thanks! So I assume that when you say that $n$-skeleta factor through each other, you mean up to homotopy -- this is basically the same as saying that the hom-eq maps can be chosen to
be cellular, right? Also, I think at the end of your second paragraph you need $p\leq 0$ otherwise that's not an iso. But in any case, since as I said you only need the $(p+1)
$-skeleton to get the $p$-skeleton in the right place, then we can get a commutative-up-to-homotopy square with $K^p \leftrightarrow L^p$ included into $K^{p+1}\leftrightarrow L^{p+1}
$, which (I think) should prove it... – Aaron Mazel-Gee Jul 5 '10 at 18:39
add comment
Not the answer you're looking for? Browse other questions tagged at.algebraic-topology homotopy-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/30597/how-can-i-prove-that-the-derived-couple-of-the-homotopy-exact-couple-is-an-invar","timestamp":"2014-04-20T18:45:42Z","content_type":null,"content_length":"58060","record_id":"<urn:uuid:b61708d2-d490-40af-99cf-6da4a3dc2b8a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00510-ip-10-147-4-33.ec2.internal.warc.gz"} |
Monday Musing: General Relativity, Very Plainly
Monday, September 19, 2005
Monday Musing: General Relativity, Very Plainly
[NOTE: Since I wrote and published this essay last night, I have received a private email from Sean Carroll, who is the author of an excellent book on general relativity, as well as a comment on this
post from Daryl McCullough, both pointing out the same error I made: I had said, as do many physics textbooks, that special relativity applies only to unaccelerated inertial frames, while general
relativity applies to accelerated frames as well. This is not really true, and I am very grateful to both of them for pointing this out. With his permission, I have added Sean's email to me as a
comment to this post, and I have corrected the error by removing the offending sentences.]
In June of this year, to commemorate the 100th anniversary of the publication of Einstein's original paper on special relativity, I wrote a Monday Musing column in which I attempted to explain some
of the more salient aspects of that theory. In a comment on that post, Andrew wrote: "I loved the explanation. I hope you don't wait until the anniversary of general relativity to write a short essay
that will plainly explain that theory." Thanks, Andrew. The rest of you must now pay the price for Andrew's flattery: I will attempt a brief, intuitive explanation of some of the well-known results
of general relativity today. Before I do that, however, a caveat: the mathematics of general relativity is very advanced and well beyond my own rather basic knowledge. Indeed, Einstein himself needed
help from professional mathematicians in formulating some of it, and well after general relativity was published (in 1915) some of the greatest mathematicians of the twentieth century (such as Kurt
Gödel) continued to work on its mathematics, clarifying and providing stronger foundations for it. What this means is, my explication here will essentially not be mathematical, which it was in the
case of special relativity. Instead, I want to use some of the concepts I introduced in explaining special relativity, and extend some of the intuitions gathered there, just as Einstein himself did
in coming up with the general theory. Though my aims are more modest this time, I strongly urge you to read and understand the column on special relativity before you read the rest of this column.
The SR column can be found here.
Before anything else, I would like to just make clear some basics like what acceleration is: it is a change in velocity. What is velocity? Velocity is a vector, which means that it is a quantity that
has a direction associated with it. The other thing (besides direction) that specifies a velocity is speed. I hope we all know what speed is. So, there are two ways that the velocity of an object can
change: 1) change in the object's speed, and 2) change in the object's direction of motion. These are the two ways that an object can accelerate. (In math, deceleration is just negative
acceleration.) This means that an object whose speed is increasing or decreasing is said to be accelerating, but so is an object traveling in a circle with constant speed, for example, because its
direction (the other aspect of velocity) is changing at any given instant.
Get ready because I'm just going to give it to you straight: the fundamental insight of GR is that acceleration is indistinguishable from gravity. (Technically, this is only true locally, as
physicists would say, but we won't get into that here.) Out of this amazing notion come various aspects of GR that most of us have probably heard about: that gravity bends light; that the stronger
gravity is, the more time slows down; that space is curved. The rest of this essay will give somewhat simplified explanations of how this is so.
Just as in special relativity no experiment that we could possibly perform inside a uniformly moving spaceship (with no windows) could possibly tell us whether we were moving or at rest, in general
relativity, no experiment we can possibly perform inside the spaceship can ever tell us whether we are 1) accelerating, or 2) in a gravitational field. In other words, the effects of gravity in a
spaceship sitting still on the surface of the Earth are exactly the same as those of being in an accelerating spaceship far from any gravitational forces. Yet another, more technical, way of saying
this would be that observations made in an accelerating reference frame are indistinguishable from observations made in a classical Newtonian gravitational field. This is the principle of
equivalence, and it is the heart of general relativity. While this may seem unintuitive at first, it is not so hard to imagine and get a grip on. Look at the spaceship shown in Fig. 1 (in the next
section, below) and imagine that you are standing on its floor while it is standing upright on the surface of Earth, on the launch pad at Cape Kennedy, say. You would be pressed against the floor by
gravity, just as you are when standing anywhere else, like on the street. If you stood on a weighing scale, it would register your weight. Now imagine that you are in deep space in the same ship, far
from any planets, stars, or other masses, so that there is no gravity acting on you or the spaceship. If the spaceship were accelerating forward (the direction which is up in Fig. 1), you would be
pressed against the floor, just as when an airplane accelerates quite fast down the runway on its takeoff roll, you are pressed in the opposite direction against your the back of your seat. If the
acceleration were exactly fast enough, you would be pressed against the floor of the spaceship with the same force as your weight, and at this rate of acceleration, if you stood on a weighing scale,
it would again register your weight. You would be unable to tell whether you were accelerating in deep space or standing still on the surface of Earth. (You could perform all of Galileo's experiments
inside the spaceship, dropping objects, rolling them down inclined planes, etc., and they would give the same results as here on Earth.) Are you with me? What I am saying is, for a gravitational
field of a given strength in a given direction (like that at Earth's surface toward its center), there is a corresponding rate of acceleration in the opposite direction which is indistinguishable
from it.
I am afraid of losing some people here, so let me pump your intuition with a few examples. Have you ever been on one of those rides in an amusement park (the one I went to was called the Devil's
Hole) where you stand in a circular room against the wall, then after the room starts spinning quite rapidly, you are pressed strongly against the wall and then the floor drops away? It can be scary,
but is safe because you are accelerating (moving in a circle) and this presses you to the wall just as gravity would if you turned the whole circular room on its side (like a Ferris wheel) and lay on
the side of it which is touching the ground. Most gravity defying stunts, like motorcyclists riding inside a wire cage in the shape of a sphere, rely on the effects of acceleration to cancel gravity.
You've probably seen astronauts on TV training for weightless environments inside aircraft where they are floating about. This also exploits the principle of equivalence: if the plane accelerates
downwards at the same rate as a freely falling object would, this will produce what could be described as an upward gravitational force inside the plane, and this cancels gravity. Of course, from an
outside perspective, looking through the plane windows, it just seems that the plane and the people in it are both falling at the same rate, which is why they seem to be floating inside it. But
inside the plane, if you have no windows, there is no way to tell whether you are far away from any gravitational field, or simply accelerating in its direction. All this really should become quite
clear if you think about it for a bit. Reread the last couple of paragraphs if you have to.
Consider the leftmost three drawings of the spaceship in Fig. 1. They show the spaceship accelerating upward. Remember, this does not mean that it is moving upward with a steady speed. It means that
it is getting faster and faster each instant. In other words, its speed is increasing. Now we have an object, say a ball, which is moving at a steady (fixed) speed across the path of the spaceship
from left to right in a straight-line path perpendicular to the direction the spaceship is moving and accelerating in (up). Suppose, further, that there is a little hole in the spaceship just where
the ball would strike the exterior left wall of the spaceship, which allows the ball to enter the spaceship without ever touching any part of it. Imagine that the spaceship is made of glass and is
transparent, so you can see what happens inside. If you are standing outside the spaceship, what you will see is what is shown in the leftmost three drawings of the spaceship in Fig. 1, i.e., the
ball will continue in a straight line on its previous path (shown in the figure as a dotted line), while the spaceship accelerates up around it (while the ball is inside the ship). Here's the weird
part: now imagine yourself standing still on the floor of the spaceship as it accelerates upward. You experience gravity which presses you to the floor, as described above. Now, you see the ball
enter from the window in the left wall, and what you see is that it follows a parabolic arc down and hits the opposite wall much lower than the height at which it entered (shown in the rightmost
drawing of Fig. 1) just as it would because of gravity if the spaceship were standing on the launchpad at Cape Kennedy and someone threw a ball in horizontally through the window. Do you see? One
man's acceleration is another man's gravity!
You can probably guess what's coming next: now imagine the the ball is replaced with a ray of light. Exactly the same thing will happen to it. The light will follow a parabolic arc downward and hit
the opposite wall below the height at which it entered the spaceship, when seen by the person inside. The reason that you normally don't see light bending in any spaceships is that light travels so
fast. In the billionths of a second that light takes to get from one wall to the other, the spaceship doesn't move up much (maybe billionths of an inch) because it is moving much slower than light
moves. This small a deflection is impossible to measure. (This is just as you don't see a bullet fired horizontally bending down much over a short distance, even though it is following a downward
parabolic path to the ground. And light is a lot faster than bullets.) This bending of light must be true as long as we assume the principle of equivalence to be true, because if it weren't, we could
then perform optical experiments on the ship to decide whether we are in an accelerating frame or a gravitational field. This is forbidden by the principle of equivalence. And since we now see that
light will bend in an accelerating spaceship (seen by someone in the ship) and since we also know that the person in the ship by definition has no way of knowing whether she is accelerating or in a
gravitational field, light must also bend in a gravitational field. (Otherwise the person would know she is accelerating.) It's really that simple!
The most famous experiment which confirmed the correctness of GR and made Einstein world famous overnight, was the observation of the bending of starlight by the Sun's gravity in 1919, which I
mentioned briefly in my June SR column. Also, in case you are wondering why light is bent by gravity even though photons have no mass at rest, it is because light is a form of energy, and as we know
from special relativity, energy is equivalent to inertial mass according to E = mc^2. All energy gravitates.
This time, let's consider what happens with a rotational motion. Look at Fig. 2. It shows a huge disk. Imagine that we put two clocks on the disk: one at the center at point A, and one at the edge
at point B. Also put a clock at point C, which is on still ground some distance from the disk. Now imagine that the disk starts rotating very fast as shown by the arrow. Now we know that the clocks
at points A and C are not moving with respect to each other, so they will read the same time. But we also know that clock at B is moving with respect to the ground, and by the principles of special
relativity must be running slower than C. And since C must be running at the same rate as A (they are not in motion relative to one another), B must also be slower than A. This will be true for an
observer at C on the ground as well as at A on the disk, but their interpretations of why the clock on the edge at B is slower will be different: for the ground observer at C, the clock at B is in
motion, which is what slows it down. For the observer at A, however, there is no motion, only a centripetal acceleration toward the center of the disc, and it is this acceleration which accounts for
the slowing down of the clock. The further A moves toward the edge, the stronger the centrifugal force (and the centripetal acceleration), and the slower the clock he has runs. Since acceleration is
indistinguishable from gravity (A has no idea if he tends to experience a force toward the outside of the disk because the disk is rotating, or whether the disk is still and he is in a gravitational
field), clocks must also slow down in gravitational fields. This slowing down of time by gravity has been confirmed by experiments to a very high precision.
We just looked at time. Let's see what happens with space. Take the same disk from Fig. 2 and replace clock B with a ruler. Place the ruler at B so that it is tangent to the disk. For the same
special relativistic reasons that the clock at B will run slower, the ruler at B will be contracted in length. Use another ruler to measure the radius from A to B. since the rotational motion on the
disk is always perpendicular to the radius, this will be unaffected by motion. Since only the ruler at B is affected, if that ruler is used to measure the circumference of the disk, the ratio of that
measured circumference and the measured diameter will not be Pi (3.1415926...), but a smaller number, depending on the rate of rotation of the disk. This is a property and an indication of a curved
surface. But the rotation (accelerated motion) is equivalent to a gravitational field as we have already seen, so we can say that gravity causes space to become curved.
There is much, much, much, more to this grand theory, and I have but drawn a crude cartoon of it here in the small hope that I might impart a bit of its flavor, and indicate the direction in which
Einstein moved after publishing special relativity in 1905. Andrew, this is the best I can do.
Thanks to Margit Oberrauch for doing the illustrations.
Have a good week!
My other recent Monday Musings:
Regarding Regret
Three Dreams, Three Athletes
Rocket Man
Francis Crick's Beautiful Mistake
The Man With Qualities
Special Relativity Turns 100
Vladimir Nabokov, Lepidopterist
Stevinus, Galileo, and Thought Experiments
Cake Theory and Sri Lanka's President
Posted by S. Abbas Raza at 12:00 AM | Permalink | {"url":"http://3quarksdaily.blogs.com/3quarksdaily/2005/09/general_relativ.html","timestamp":"2014-04-23T10:44:58Z","content_type":null,"content_length":"44516","record_id":"<urn:uuid:696817a3-3806-4c8b-92b7-41dd9640f609>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computing Network Reliability Coefficients
Abstract: When a network is modeled by a graph and edges of the graph remain reliable with a given probability p, the probability of the graph remaining connected is called the reliability of the
network. One form of the reliability polynomial has as coefficients the number of connected spanning subgraphs of each size in the graph. Since the problem of exact computation is P-hard, we turn to
approximation methods. We have developed two methods for computing these coefficients: one based on sequential importance sampling (SIS) and the other based on Monte Carlo Markov chain (MCMC). MCMC
uses a random walk through the sample space while SIS draws a sample directly. There is not much theory available on the SIS method; however, this method is fast. In contrast, MCMC has a great deal
of theory associated with it and is thus a more widely used and trusted method. In order to properly use MCMC, two quantities are need, the mixing time, the parameter which governs how long the
algorithm must be run to get independent samples, and the fugacity, the parameter which governs the acceptance rates of proposed steps in the random walk. Despite the theory available on MCMC, both
of these quantities are very difficult to calculate. As such, it is common practice to simply guess at values for these parameters. This work focuses on the effectiveness of SIS in calculating these
MCMC parameters in a given instance of the problem. Thus, we use SIS to speed up MCMC. | {"url":"http://www.nist.gov/manuscript-publication-search.cfm?pub_id=908418","timestamp":"2014-04-18T08:20:19Z","content_type":null,"content_length":"23559","record_id":"<urn:uuid:929a4671-b129-4ed6-92d4-9319033f9860>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00508-ip-10-147-4-33.ec2.internal.warc.gz"} |
Two layered discs rotating at relativistic angular velocities
No - if the object is static in classical mechanics, it'll be static in special relativity too. And being static means it's velocity is zero.
For a formal demonstration, if you use the SR velcoity addition formula (for velocities in the same direction) which is:
v_add = (v1 + v2) / (1 - v1 * v2)
if v1 = +v and v2 = -v, the added velocity, v_add, is still zero. And given that the disks are rotating at equal speeds in opposite directions, v1 and v2 will be equal and opposite and in the same
Formally again, v1 = r x w, v2 = - r x w, where x is the vector cross product. | {"url":"http://www.physicsforums.com/showthread.php?t=583584","timestamp":"2014-04-18T03:04:59Z","content_type":null,"content_length":"36886","record_id":"<urn:uuid:0369bbb3-2e84-4472-a933-78163ac2401d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00629-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fermi Estimates - Less Wrong
Comments (105)
Sort By: Best
It's probably worth figuring out what went wrong in Approach 1 to Example 1, which I think is this part:
[300 cities of 10,000 or more people per county] × [2500 counties in the USA]
Note that this gives 750,000 cities of 10,000 or more people in the US, for a total of at least 7.5 billion people in the US. So it's already clearly wrong here. I'd say 300 cities of 10,000 people
or more per county is way too high; I'd put it at more like 1 (Edit: note that this gives at least 250 million people in the US and that's about right). This brings down the final estimate from this
approach by a factor of 300, or down to 3 million, which is much closer.
(Verification: I just picked a random US state and a random county in it from Wikipedia and got Bartow County, Georgia, which has a population of 100,000. That means it has at most 10 cities with
10,000 or more people, and going through the list of cities it actually looks like it only has one such city.)
This gives about 2,500 cities in the US total with population 10,000 or more. I can't verify this number, but according to Wikipedia there are about 300 cities in the US with population 100,000 or
more. Assuming the populations of cities are power-law distributed with exponent 1, this means that the nth-ranked city has population about 30,000,000/n, so this gives about 3,000 cities in the US
with population 10,000 or more.
And in fact we didn't even need to use Wikipedia! Just assuming that the population of cities is power-law distributed with exponent 1, we see that the distribution is determined by the population of
the most populous city. Let's take this to be 20 million people (the number you used for New York City). Then the nth-ranked city in the US has population about 20,000,000/n, so there are about 2,000
cities with population 10,000 or more.
Edit: Found the actual number. According to the U.S. Census Bureau, as of 2008, the actual number is about 2,900 cities.
Incidentally, this shows another use of Fermi estimates: if you get one that's obviously wrong, you've discovered an opportunity to fix some aspect of your model of the world.
(2 Fermis per day)×(90 days) = 200 Fermi estimates.
I've run meetups on this topic twice now. Every time I do, it's difficult to convince people it's a useful skill. More words about when estimation is useful would be nice.
In most exercises that you can find on Fermi calculations, you can also actually find the right answer, written down somewhere online. And, well, being able to quickly find information is probably a
more useful skill to practice than estimation; because it works for non-quantified information too. I understand why this is; you want to be able to show that these estimates aren't very far off, and
for that you need to be able to find the actual numbers somehow. But that means that your examples don't actually motivate the effort of practicing, they only demonstrate how.
I suspect the following kinds of situations are fruitful for estimation:
• Deciding in unfamiliar situations, because you don't know how things will turn out for you. If you're in a really novel situation, you can't even find out how the same decision has worked for
other people before, and so you have to guess at expected value using the best information that you can find.
• Value of information calculations, like here and here, where you cannot possibly know the expected value of things, because you're trying to decide if you should pay for information about their
• Deciding when you're not online, because this makes accessing information more expensive than computation.
• Decisions where you have unusual information for a particular situation -- the internet might have excellent base-rate information about your general situation, but it's unlikely to give you the
precise odds so that you can incorporate the extra information that you have in this specific situation.
• Looking smart. It's nice to look smart sometimes.
Others? Does anyone have examples of when Fermi calculations helped them make a decision?
Fermi's seem essential for business to me. Others agree; they're taught in standard MBA programs. For example:
• Can our business (or our non-profit) afford to hire an extra person right now? E.g., if they require the same training time before usefulness that others required, will they bring in more revenue
in time to make up for the loss of runway?
• If it turns out that product X is a success, how much money might it make -- is it enough to justify investigating the market?
• Is it cheaper (given the cost of time) to use disposable dishes or to wash the dishes?
• Is it better to process payments via paypal or checks, given the fees involved in paypal vs. the delays, hassles, and associated risks of non-payment involved in checks?
And on and on. I use them several times a day for CFAR and they seem essential there.
They're useful also for one's own practical life: commute time vs. rent tradeoffs; visualizing "do I want to have a kid? how would the time and dollar cost actually impact me?", realizing that
macademia nuts are actually a cheap food and not an expensive food (once I think "per calorie" and not "per apparent size of the container"), and so on and so on.
Oh, right! I actually did the comute time vs. rent computation when I moved four months ago! And wound up with a surprising enough number that I thought about it very closely, and decided that number
was about right, and changed how I was looking for apartments. How did I forget that?
realizing that macademia nuts are actually a cheap food and not an expensive food (once I think "per calorie" and not "per apparent size of the container"),
But calories aren't the only thing you care about -- the ability to satiate you also matters. (Seed oil is even cheaper per calorie.)
The main use I put Fermi estimates to is fact-checking: when I see a statistic quoted, I would like to know if it is reasonable (especially if I suspect that it has been misquoted somehow).
Qiaochu adds:
If you get [an estimate] that's obviously wrong, you've discovered an opportunity to fix some aspect of your model of the world.
I also think Fermi calculations are just fun. It makes me feel totally awesome to be able to conjure approximate answers to questions out of thin air.
There's a free book on this sort of thing, under a Creative Commons license, called Street-Fighting Mathematics: The Art of Educated Guessing and Opportunistic Problem Solving. Among the fun things
in it:
Chapter 1: Using dimensional analysis to quickly pull correct-ish equations out of thin air!
Chapter 2: Focusing on easy cases. It's amazing how many problems become simpler when you set some variables equal to 1, 0, or ∞.
Chapter 3: An awful lot of things look like rectangles if you squint at them hard enough. Rectangles are nice.
Chapter 4: Drawing pictures can help. Humans are good at looking at shapes.
Chapter 5: Approximate arithmetic in which all numbers are either 1, a power of 10, or "a few" -- roughly 3, which is close to the geometric mean of 1 and 10. A few times a few is ten, for small
values of "is". Multiply and divide large numbers on your fingers!
... And there's some more stuff, too, and some more chapters, but that'll do for an approximate summary.
XKCD's What If? has some examples of Fermi calculations, for instance at the start of working out the effects of "a mole of moles" (similar to a mole of choc donuts, which is what reminded me).
Thanks, Luke, this was helpful!
There is a sub-technique that could have helped you get a better answer for the first approach to example 1: perform a sanity check not only on the final value, but on any intermediate value you can
think of.
In this example, when you estimated that there are 2500 counties, and that the average county has 300 towns with population greater than 10,000, that implies a lower bound for the total population of
the US: assuming that all towns have exactly 10,000 people, that gets you a US population of 2,500x300x10,000=7,500,000,000! That's 7.5 billion people. Of course, in real life, some people live in
smaller towns, and some towns have more then 10,000 people, which makes the true implied estimate even larger.
At this point you know that either your estimate for number of counties, or your estimate for number of towns with population above 10,000 per county, or both, must decrease to get an implied
population of about 300 million. This would have brought your overall estimate down to within a factor of 10.
I had the pleasure the other day of trying my hand on a slightly unusual use of Fermi estimates: trying to guess whether something unlikely has ever happened. In particular, the question was "Has
anyone ever been killed by a falling piano as in the cartoon trope?" Others nearby at the time objected, "but you don't know anything about this!" which I found amusing because of course I know quite
a lot about pianos, things falling, how people can be killed by things falling, etc. so how could I possibly not know anything about pianos falling and killing people? Unfortunately, our estimate
gave it at around 1-10 deaths by piano-falling so we weren't able to make a strong conclusion either way over whether this happened. I would be interested to hear if anyone got a significantly
different result. (We only considered falling grands or baby grands to count as upright pianos, keyboards, etc. just aren't humorous enough for the cartoon trope.)
I'll try. Let's see, grands and baby grands date back to something like the 1700s; I'm sure I've heard of Mozart or Beethoven using pianos, so that gives me a time-window of 300 years for falling
pianos to kill people in Europe or America.
What were their total population? Well, Europe+America right now is, I think, something like 700m people; I'd guess back in the 1700s, it was more like... 50m feels like a decent guess. How many
people in total? A decent approximation to exponential population growth is to simply use the average of 700m and 50m, which is 325, times 300 years, 112500m person-years, and a lifespan of 70 years,
so 1607m persons over those 300 years.
How many people have pianos? Visiting families, I rarely see pianos; maybe 1 in 10 had a piano at any point. If families average a size of 4 and 1 in 10 families has a piano, then we convert our
total population number to, (1607m / 4) / 10, 40m pianos over that entire period.
But wait, this is for falling pianos, not all pianos; presumably a falling piano must be at least on a second story. If it simply crushes a mover's foot while on the porch, that's not very comedic at
all. We want genuine verticality, real free fall. So our piano must be on a second or higher story. Why would anyone put a piano, baby or grand, that high? Unless they had to, that is - because they
live in a city where they can't afford a ground-level apartment or house.
So we'll ask instead for urban families with pianos, on a second or higher story. The current urban percentage of the population is hitting majority (50%) in some countries, but in the 1700s it
would've been close to 0%. Average again: 50+0/2=25%, so we cut 40m by 75% to 30m. Every building has a ground floor, but not every building has more than 1 floor, so some urban families will be able
to live on the ground floor and put their piano there and not fear a humorously musical death from above. I'd guess (and here I have no good figures to justify it) that the average urban building
over time has closer to an average of 2 floors than more or less, since structural steel is so recent, so we'll cut 30m to 15m.
So, there were 15m families in urban areas on non-ground-floors with pianos. And how would pianos get to non-ground-floors...? By lifting, of course, on cranes and things. (Yes, even in the 1700s.
One aspect of Amsterdam that struck me when I was visiting in 2005 was that each of the narrow building fronts had big hooks at their peaks; I was told this was for hoisting things up. Like pianos, I
shouldn't wonder.) Each piano has to be lifted up, and, sad to say, taken down at some point. Even pianos don't live forever. So that's 30m hoistings and lowerings, each of which could be hilariously
fatal, an average of 0.1m a year.
How do we go from 30m crane operations to how many times a piano falls and then also kills someone? A piano is seriously heavy, so one would expect the failure rate to be nontrivial, but at the same
time, the crews ought to know this and be experienced at moving heavy stuff; offhand, I've never heard of falling pianos.
At this point I cheated and look at the OSHA workplace fatalities data: 4609 for 2011. At a guess, half the USA population is gainfully employed, so 4700 out of 150m died. Let's assume that 'piano
moving' is not nearly as risky as it sounds and merely has the average American risk of dying on the job.
We have 100000 piano hoistings a year, per previous. If a team of 3 can do lifts or hoisting of pianos a day, then we need 136 teams or 410 people. How many of these 410 will die each year, times
300? (410 * (4700/150000000))*300 = 3.9
So irritatingly, I'm not that sure that I can show that anyone has died by falling piano, even though I really expect that people have. Time to check in Google.
Searching for killed by falling piano, I see:
But no actual cases of pianos falling a story onto someone. So, the calculation may be right - 0 is within an order of magnitude of 3.9, after all.
0 is within an order of magnitude of 3.9, after all.
No it's not! Actually it's infinitely many orders of magnitude away!
Nitpick alert: I believe pianos used to be a lot more common. There was a time when they were a major source of at-home music. On the other hand, the population was much smaller then, so maybe the
effects cancel out.
I wonder. Pianos are still really expensive. They're very bulky, need skilled maintenance and tuning, use special high-tension wires, and so on. Even if technological progress, outsourcing
manufacture to China etc haven't reduced the real price of pianos, the world is also much wealthier now and more able to afford buying pianos. Another issue is the growth of the piano as the standard
Prestigious Instrument for the college arms races (vastly more of the population goes to college now than in 1900) or signaling high culture or modernity (in the case of East Asia); how many pianos
do you suppose there are scattered now across the USA compared to 1800? Or in Japan and China and South Korea compared to 1900?
And on the other side, people used to make music at home, yes - but for that there are many cheaper, more portable, more durable alternatives, such as cut-down versions of pianos.
Pianos are still really expensive.
Concert grands, yes, but who has room for one of those? Try selling an old upright piano when clearing a deceased relative's estate. In the UK, you're more likely to have to pay someone to take it
away, and it will just go to a scrapheap. Of course, that's present day, and one reason no-one wants an old piano is that you can get a better electronic one new for a few hundred pounds.
But back in Victorian times, as Nancy says elsethread, a piano was a standard feature of a Victorian parlor, and that went further down the social scale that you are imagining, and lasted at least
through the first half of the twentieth century. Even better-off working people might have one, though not the factory drudges living in slums. It may have been different in the US though.
Concert grands, yes, but who has room for one of those? Try selling an old upright piano when clearing a deceased relative's estate.
Certainly: http://www.nytimes.com/2012/07/30/arts/music/for-more-pianos-last-note-is-thud-in-the-dump.html?_r=2&ref=arts But like diamonds (I have been told that you cannot resell a diamond for
anywhere near what you paid for it), and perhaps for similar reasons, I don't think that matters to the production and sale of new ones. That article supports some of my claims about the glut of
modern pianos and falls in price, and hence the claim that there may be unusually many pianos around now than in earlier centuries:
With thousands of moving parts, pianos are expensive to repair, requiring long hours of labor by skilled technicians whose numbers are diminishing. Excellent digital pianos and portable keyboards
can cost as little as several hundred dollars. Low-end imported pianos have improved remarkably in quality and can be had for under $3,000. “Instead of spending hundreds or thousands to repair an
old piano, you can buy a new one made in China that’s just as good, or you can buy a digital one that doesn’t need tuning and has all kinds of bells and whistles,” said Larry Fine, the editor and
publisher of Acoustic & Digital Piano Buyer, the industry bible.
At least, if we're comparing against the 1700s/1800s, since the article then goes on to give sales figures:
So from 1900 to 1930, the golden age of piano making, American factories churned out millions of them. Nearly 365,000 were sold at the peak, in 1910, according to the National Piano Manufacturers
Association. (In 2011, 41,000 were sold, along with 120,000 digital pianos and 1.1 million keyboards, according to Music Trades magazine.)
(Queen Victoria died in 1901, so if this golden age 1900-1930 also populated parlors, it would be more accurate to call it an 'Edwardian parlor'.)
We got ~$75 for one we picked up out of somebody garbage in a garage sale, and given the high interest we had in it, probably could have gotten twice that. (Had an exchange student living with us who
loved playing the piano, and when we saw it, we had to get it - it actually played pretty well, too, only three of the chords needed replacement. It was an experience loading that thing into a pickup
truck without any equipment. Used a trash length of garden hose as rope and a -lot- of brute strength.)
I was basing my notion on having heard that a piano was a standard feature of a Victorian parlor. The original statement of the problem just specifies a piano, though I grant that the cartoon version
requires a grand or baby grand. An upright piano just wouldn't be as funny.
These days, there isn't any musical instrument which is a standard feature in the same way. Instead, being able to play recorded music is the standard.
Thanks for the link about the lack of new musical instruments. I've been thinking for a while that stability of the classical orchestra meant there was something wrong, but it hadn't occurred to me
that we've got the same stability in pop music.
I was basing my notion on having heard that a piano was a standard feature of a Victorian parlor.
Sure, but think how small a fraction of the population that was. Most of Victorian England was, well, poor; coal miners or factory workers working 16 hour days, that sort of thing. Not wealthy
bourgeoisie with parlors hosting the sort of high society ladies who were raised learning how to play piano, sketch, and faint in the arms of suitors.
An upright piano just wouldn't be as funny.
Unless it's set in a saloon! But given the low population density of the Old West, this is a relatively small error.
That article treats all forms of synthesis as one instrument. This is IMO not an accurate model. The explosion of electronic pop in the '80s was because the technology was on the upward slope of the
logistic curve, and new stuff was becoming available on a regular basis for artists to gleefully seize upon. But even now, there's stuff you can do in 2013 that was largely out of reach, if not
unknown, in 2000.
But even now, there's stuff you can do in 2013 that was largely out of reach, if not unknown, in 2000.
Have any handy examples? I find that a bit surprising (although it's a dead cert that you know more about pop music than I do, so you're probably right).
I'm talking mostly about new things you can do musically due to technology. The particular example I was thinking of was autotune, but that was actually invented in the late 1990s (whoops).
But digital signal processing in general has benefited hugely in Moore's Law, and the ease afforded by being able to apply tens or hundreds of filters in real time. The phase change moment was when a
musician could do this in faster than 1x time on a home PC. The past decade has been mostly on the top of the S-curve, though.
Nevertheless, treating all synthesis as one thing is simply an incorrect model.
Funny coincidence. About a week ago I was telling someone that people sometimes give autotune as an example of a qualitatively new musical/aural device, even though Godley & Creme basically did it
30+ years ago. (Which doesn't contradict what you're saying; just because it was possible to mimic autotune in 1979 doesn't mean it was trivial, accessible, or doable in real time. Although autotune
isn't new, being able to autotune on an industrial scale presumably is, 'cause of Moore's law.)
Granular synthesis is pretty fun.
Agreed, although I don't know how impractical or unknown it was in 2000 — I remember playing with GranuLab on my home PC around 2001.
Average again: 50+0/2=25%, so we cut 40m by 75% to 30m.
To 10m, surely?
The workplace fatalities really gone down recently, with all the safe jobs of sitting in front of the computer. You should look for workplace fatalities in construction, preferably historical (before
safety guidelines). Accounting for that would raise the estimate.
A much bigger issue is that one has to actually stand under the piano as it is being lifted/lowered. The rate of such happening can be much (orders of magnitude) below that of fatal workplace
accidents in general, and accounting for this would lower the estimate.
You should look for workplace fatalities in construction, preferably historical (before safety guidelines).
I don't know where I would find them, and I'd guess that any reliable figures would be very recent: OSHA wasn't even founded until the 1970s, by which point there's already been huge shifts towards
safer jobs.
A much bigger issue is that one has to actually stand under the piano as it is being lifted/lowered. The rate of such happening can be much (orders of magnitude) below that of fatal workplace
accidents in general, and accounting for this would lower the estimate.
That was the point of going for lifetime risks, to avoid having to directly estimate per-lifting fatality rates - I thought about it for a while, but I couldn't see any remotely reasonable way to
estimate how many pianos would fall and how often people would be near enough to be hit by it (which I could then estimate against number of pianos ever lifted to pull out a fatality rate, so instead
I reversed the procedure and went with an overall fatality rate across all jobs).
You also need to account for the fact that some proportion of piano-hoister work-related fatalities will be to other factors like heatstroke or heart attack or wrapping the rope around their arm.
To a very good first approximation, the distribution of falling piano deaths is Poisson. So if the expected number of deaths is in the range [0.39, 39], then the probability that no one has died of a
falling piano is in the range [1e-17, 0.677] which would lead us to believe that with a probability of at least 1/3 such a death has occurred. (If 3.9 were the true average, then there's only a 2%
chance of no such deaths.)
I disagree that the lower bound is 0; the right range is [-39,39]. Because after all, a falling piano can kill negative people: if a piano had fallen on Adolf Hitler in 1929, then it would have
killed -5,999,999 people!
Sorry. The probability is in the range [1e-17, 1e17].
That is a large probability.
It's for when you need to be a thousand million billion percent sure of something.
A decent approximation to exponential population growth is to simply use the average of 700m and 50m
That approximation looks like this
It'll overestimate by a lot if you do it over longer time periods. e.g. it overestimates this average by about 50% (your estimate actually gives 375, not 325), but if you went from 1m to 700m it
would overestimate by a factor of about 3.
A pretty-easy way to estimate total population under exponential growth is just current population * 1/e lifetime. From your numbers, the population multiplies by e^2.5 in 300 years, so 120 years to
multiply by e. That's two lifetimes, so the total number of lives is 700m*2. For a smidgen more work you can get the "real" answer by doing 700m * 2 - 50m * 2.
Cecil Adams tackled this one. Although he could find no documented cases of people being killed by a falling piano (or a falling safe), he did find one case of a guy being killed by a RISING piano
while having sex with his girlfriend on it. What would you have estimated for the probability of that?
From the webpage:
The exception was the case of strip-club bouncer Jimmy Ferrozzo. In 1983 Jimmy and his dancer girlfriend were having sex on top of a piano that was rigged so it could be raised or lowered for
performances. Apparently in the heat of passion the couple accidentally hit the up switch, whereupon the piano rose and crushed Jimmy to death against the ceiling. The girlfriend was pinned
underneath him for hours but survived. I acknowledge this isn’t a scenario you want depicted in detail on the Saturday morning cartoons; my point is that death due to vertical piano movement has
a basis in fact.
Reality really is stranger than fiction, is it.
I feel compelled to repeat this old physics classic:
How Fermi could estimate things!
Like the well-known Olympic ten rings,
And the one-hundred states,
And weeks with ten dates,
And birds that all fly with one... wings.
That is beautiful.
You did much better in Example #2 than you thought; the conclusion should read
60 fatalities per crash × 100 crashes with fatalities over the past 20 years = 6000 passenger fatalities from passenger-jet crashes in the past 20 years
which looks like a Fermi victory (albeit an arithmetic fail).
Lol! Fixed.
Thanks for writing this! This is definitely an important skill and it doesn't seem like there was such a post on LW already.
Some mild theoretical justification: one reason to expect this procedure to be reliable, especially if you break up an estimate into many pieces and multiply them, is that you expect the errors in
your pieces to be more or less independent. That means they'll often more or less cancel out once you multiply them (e.g. one piece might be 4 times too large but another might be 5 times too small).
More precisely, you can compute the variance of the logarithm of the final estimate and, as the number of pieces gets large, it will shrink compared to the expected value of the logarithm (and even
more precisely, you can use something like Hoeffding's inequality).
Another mild justification is the notion of entangled truths. A lot of truths are entangled with the truth that there are about 300 million Americans and so on, so as long as you know a few relevant
true facts about the world your estimates can't be too far off (unless the model you put those facts into is bad).
More precisely, you can compute the variance of the logarithm of the final estimate and, as the number of pieces gets large, it will shrink compared to the expected value of the logarithm (and
even more precisely, you can use something like Hoeffding's inequality).
If success of a fermi estimate is defined to be "within a factor of 10 of the correct answer", then that's a constant bound on the allowed error of the logarithm. No "compared to the expected value
of the logarithm" involved. Besides, I wouldn't expect the value of the logarithm to grow with number of pieces either: the log of an individual piece can be negative, and the true answer doesn't get
bigger just because you split the problem into more pieces.
So, assuming independent errors and using either Hoeffding's inequality or the central limit theorem to estimate the error of the result, says that you're better off using as few inputs as possible.
The reason fermi estimates even involve more than 1 step, is that you can make the per-step error smaller by choosing pieces that you're somewhat confident of.
Oops, you're absolutely right. Thanks for the correction!
There are 3141 counties in the US. This is easy to remember because it's just the first four digits of pi (which you already have memorised, right?).
This reminds me of the surprisingly accurate approximation of pi x 10^7 seconds in a year.
This chart has been extremely helpful to me in school and is full of weird approximation like the two above.
I will note that I went through the mental exercise of cars in a much simpler (and I would say better) way: I took the number of cars in the US (300 million was my guess for this, which is actually
fairly close to the actual figure of 254 million claimed by the same article that you referenced) and guessed about how long cars typically ended up lasting before they went away (my estimate range
was 10-30 years on average). To have 300 million cars, that would suggest that we would have to purchase new cars at a sufficiently high rate to maintain that number of vehicles given that lifespan.
So that gave me a range of 10-30 million cars purchased per year.
The number of 5 million cars per year absolutely floored me, because that actually would fail my sanity check - to get 300 million cars, that would mean that cars would have to last an average of 60
years before being replaced (and in actuality would indicate a replacement rate of 250M/5M = 50 years, ish).
The actual cause of this is that car sales have PLUMMETED in recent times. In 1990, the median age of a vehicle was 6.5 years; in 2007, it was 9.4 years, and in 2011, it was 10.8 years - meaning that
in between 2007 and 2011, the median car had increased in age by 1.4 years in a mere 4 years.
I will note that this sort of calculation was taught to me all the way back in elementary school as a sort of "mathemagic" - using math to get good results with very little knowledge.
But it strikes me that you are perhaps trying too hard in some of your calculations. Oftentimes it pays to be lazy in such things, because you can easily overcompensate.
Tip: frame your estimates in terms of intervals with confidence levels, i.e. "90% probability that the answer is within <low end> and <high end>". Try to work out both a 90% and a 50% interval.
I've found interval estimates to be much more useful than point estimates, and they combine very well with Fermi techniques if you keep track of how much rounding you've introduced overall.
In addition, you can compute a Brier score when/if you find out the correct answer, which gives you a target for improvement.
Douglas W. Hubbard has a book titled How to Measure Anything where he states that half a day of exercising confidence interval calibration makes most people nearly perfectly calibrated. As you noted
and as is said here, that method fits nicely with Fermi estimates.
This combination seems to have a great ratio between training time and usefulness.
Alternatively, you might allow yourself to look up particular pieces of the problem — e.g. the number of Sikhs in the world, the formula for escape velocity, or the gross world product — but not
the final quantity you're trying to estimate.
Would it bankrupt the global economy to orbit all the world's Sikhs?
Fermi estimates can help you become more efficient in your day-to-day life, and give you increased confidence in the decisions you face. If you want to become proficient in making Fermi
estimates, I recommend practicing them 30 minutes per day for three months. In that time, you should be able to make about (2 Fermis per day)×(90 days) = 180 Fermi estimates.
I'm not sure about this claim about day-to-day life. Maybe there are some lines of work where this skill could be useful, but in general it's quite rare in day-to-day life where you have to come up
with quick estimates on the spot to make a sound descision. Many things can be looked up on the internet for a rather marginal time-cost nowadays. Often enough probably even less time, than it would
actually take someone to calculate the guesstimate.
If a descision or statistic is important you, should take the time to actually just look it up, or if the information you are trying to guess is impossible to find online, you can at least look up
some statistics that you can and should use to make your guess better. As you read above, just getting one estimate in a long line of reasoning wrong (especially where big numbers are concerned) can
throw off your guess by a factor of 100 or 1000 and make it useless or even harmful.
If your guess is important to an argument you're constructing on-the-fly, I think you could also take the time to just look it up. (If it's not an interview or some conversation which dictates, that
using a smartphone would be unacceptable).
And if a descision or argument is not important enough to invest some time in a quick online search, then why bother in the first place? Sure, it's a cool skill to show off and it requires some
rationality, but that doesn't mean it's truly useful. On the other hand maybe I'm just particularly unimaginative today and can't think of ways, how Fermi estimates could possibly improve my
day-to-day life by a margin that would warrant the effort to get better at it.
Write down your own Fermi estimation attempts here. One Fermi estimate per comment, please!
One famous Fermi estimate is the Drake equation.
A running list of my own: http://www.gwern.net/Notes#fermi-calculations (And there's a number of them floating around predictionbook.com; Fermi-style loose reasoning is great for constraining
Just tried one today: how safe are planes?
Last time I was at an airport, the screen had five flights, three-hour period. It was peak time, so multiplied only by fifteen, so 25 flights from Chennai airport per day.
~200 countries in the world, so guessed 500 adjusted airports (effective no. of airports of size of Chennai airport), giving 12500 flights a day and 3*10^6 flights a year.
One crash a year from my news memories, gives possibility of plane crash as 1/310^-6 ~ 310^-7.
Probability of dying in a plane crash is 3*10^-7 (source). At hundred dead passengers a flight, fatal crashes are ~ 10^-5. Off by two orders of magnitude.
Probability of dying in a plane crash is 3*10^-7 (source). At hundred dead passengers a flight, fatal crashes are ~ 10^-5. Off by two orders of magnitude.
If there are 3*10^6 flights in a year, and one randomly selected plane crashes per year on average, with all aboard being killed, then the chances of dying in an airplane crash are $\left ( \frac{1}
{3} \right ) *10^{-6}$, surely?
Yes, there's a hundred dead passengers on the flight that went down, but there's also a hundred living passengers on every flight that didn't go down. The hundreds cancel out.
The hundreds cancel out.
Wow, that was stupid of me. Of course they do! And thanks.
Anna Salamon's Singularity Summit talk from a few years ago explains one Fermi estimate regarding the value of gathering more information about AI impacts: How Much it Matters to Know What Matters: A
Back of the Envelope Calculation.
Here's one I did with Marcello awhile ago: about how many high schools are there in the US?
My attempt: there are 50 states. Each state has maybe 20 school districts. Each district has maybe 10 high schools. So 50 * 20 * 10 = 10,000 high schools.
Marcello's attempt (IIRC): there are 300 million Americans. Of these, maybe 50 million are in high school. There are maybe 1,000 students in a high school. So 50,000,000 / 1,000 = 50,000 high
Actual answer:
Numbers vary, I think depending on what is being counted as a high school, but it looks like the actual number is between 18,000 and 24,000. As it turns out, the first approach underestimated the
total number of school districts in the US (it's more like 14,000) but overestimated the number of high schools per district. The second approach overestimated the number of high school students
(it's more like 14 million) but also overestimated the average number of students per high school. And the geometric mean of the two approaches is 22,000, which is quite close!
I tried the second approach with better success: it helps to break up the "how many Americans are in high school" calculation. If the average American lives for 80 years, and goes to high school for
4, then 1/20 of all Americans are in high school, which is 15 million.
there are 300 million Americans. Of these, maybe 50 million are in high school
...you guessed 1 out of 6 Americans is in highschool?
With an average lifespan of 70+ years and a highschool duration of 3 years (edit: oh, it's 4 years in the US?), shouldn't it be somewhere between 1 in 20 and 1 in 25?
This conversation happened something like a month ago, and it was Marcello using this approach, not me, so my memory of what Marcello did is fuzzy, but IIRC he used a big number.
The distribution of population shouldn't be exactly uniform with respect to age, although it's probably more uniform now than it used to be.
How old is the SECOND oldest person in the world compared to the oldest? Same for the united states?
I bogged down long before I got the answer. Below is the gibberish I generated towards bogging down.
So OK, I don't even know offhand how old is the oldest, but I would bet it is in the 114 years old (yo) to 120 yo range.
Then figure in some hand-wavey way that people die at ages normally distributed with a mean of 75 yo. We can estimate how many sigma (standard deviations) away from that is the oldest person.
Figure there are 6 billion people now, but I know this number has grown a lot in my lifetime, it was less than 4 billion when I was born 55.95 years ago. So say the 75 yo's come from a population
consistent with 3 billion people. 1/2 die younger than 75, 1/2 die older, so the oldest person in the world is 1 in 1.5 billion on the distribution.
OK what do I know about normal distributions? Normal distribution goes as exp ( -((mean-x)/(2sigma))^2 ). So at what x is exp( -(x/2sigma)^2 ) = 1e-9? (x / 2sigma) ^ 2 = -ln ( 1e-9). How to estimate
natural log of a billionth? e = 2.7 is close enough for government work to the sqrt(10). So ln(z) = 2log10(z). Then -ln(1e-9) = -2*log10(1e-9) = 2*9 = 18. So (x/2sigma)^2 = 18, sqrt(18) = 4 so
So I got 1 in a billion is 4 sigma. I didn't trust that so I looked that up, Maybe I should have trusted it, in fact 1 in a billion is (slightly more than ) 6 sigma.
mean of 75 yo, x=115 yo, x-mean = 40 years. 6 sigma is 40 years. 1 sigma=6 years.
So do I have ANYTHING yet? I am looking for dx where exp(-((x+dx)/(2sigma))^2) - exp( -(x/2sigma)^2)
So, this isn't quite appropriate for Fermi calculations, because the math involved is a bit intense to do in your head. But here's how you'd actually do it:
Age-related mortality follows a Gompertz curve, which has much, much shorter tails than a normal distribution.
I'd start with order statistics. If you have a population of 5 billion people, then the expected percentile of the top person is 1-(1/10e9), and the expected percentile of the second best person is
1-(3/10e9). (Why is it a 3, instead of a 2? Because each of these expectations is in the middle of a range that's 1/5e9, or 2/10e9, wide.)
So, the expected age* of death for the oldest person is 114.46, using the numbers from that post (and committing the sin of reporting several more significant figures), and the expected age of death
for the second oldest person is 113.97. That suggests a gap of about six months between the oldest and second oldest.
* I should be clear that this is the age corresponding to the expected percentile, not the expected age, which is a more involved calculation. They should be pretty close, especially given our huge
population size.
But terminal age and current age are different- it could actually be that the person with the higher terminal age is currently younger! So we would need to look at permutations and a bunch of other
stuff. Let's ignore this and assume they'll die on the same day.
So what does it look like in reality?
The longest lived well-recorded human was 122, but note that she died less than 20 years ago. The total population whose births were well-recorded is significantly smaller than the current
population, and the numbers are even more pessimistic than the 3 billion figure you get at; instead of looking at people alive in the 1870s, we need to look at the number born in the 1870s. Our model
estimates she's a 1 in 2*10^22 occurrence, which suggests our model isn't tuned correctly. (If we replace the 10 with a 10.84, a relatively small change, her age is now the expectation for the oldest
terminal age in 5 billion- but, again, she's not out of a sample of 5 billion.)
The real gaps are here; about a year, another year, then months. (A decrease in gap size is to be expected, but it's clear that our model is a bit off, which isn't surprising, given that all of the
coefficients were reported at 1 significant figure.)
Upvoted (among other things) for a way of determining the distribution of order statistics from an arbitrary distribution knowing those of a uniform distribution which sounds obvious in retrospect
but to be honest I would never have come up with on my own.
I am looking for dx where exp(-((x+dx)/(2sigma))^2) - exp( -(x/2sigma)^2)
Assuming dx << x, this is approximated by a differential, (-xdx/sigma^2) * exp( -(x/2sigma)^2, or the relative drop of dx/sigma^2. You want it to be 1/2 (lost one person out of two), your x = 4
sigma, so dx=1/8 sigma, which is under a year. Of course, it's rather optimistic to apply the normal distribution to this problem, to begin with.
I estimated how much the population of Helsinki (capital of Finland) grew in 2012. I knew from the news that the growth rate is considered to be steep.
I knew there are currently about 500 000 habitants in Helsinki. I set the upper bound to 3 % growth rate or 15 000 residents for now. With that rate the city would grow twentyfold in 100 years which
is too much. But the rate might be steeper now. For lower bound i chose 1000 new residents. I felt that anything less couldnt really produce any news. AGM is 3750.
My second method was to go through the number of new apartments. Here I just checked that in recent years about 3000 apartments have been built yearly. Guessing that the household size could be 2
persons I got 6000 new residents.
It turned out that the population grew by 8300 residents which is highest in 17 years. Otherwise it has recently been around 6000. So both methods worked well. Both have the benefit that one doesnt
need to care whether the growth comes from births/deaths or people flow. They also didn't require considering how many people move out and how many come in.
Obviously i was much more confident on the second method. Which makes me think that applying confidence intervals to fermi estimates would be useful.
For the "Only Shallow" one, I couldn't think of a good way to break it down, and so began by approximating the total number of listens at 2 million. My final estimate was off by a factor of one.
Matt Mahoney's estimate of the cost of AI is a sort-of Fermi estimate.
Out of the price of a new car, how much goes to buying raw materials? How much to capital owners? How much to labor?
How many Wall-Marts in the USA.
That sounds like the kind of thing you could just Google.
But I'll bite. Wal-Marts have the advantage of being pretty evenly distributed geographically; there's rarely more than one within easy driving distance. I recall there being about 15,000 towns in
the US, but they aren't uniformly distributed; they tend to cluster, and even among those that aren't clustered a good number are going to be too small to support a Wal-Mart. So let's assume there's
one Wal-Mart per five towns on average, taking into account clustering effects and towns too small or isolated to support one. That gives us a figure of 3,000 Wal-Marts.
When I Google it, that turns out to be irel pybfr gb gur ahzore bs Jny-Zneg Fhcrepragref, gur ynetr syntfuvc fgberf gung gur cuenfr "Jny-Zneg" oevatf gb zvaq. Ubjrire, Jny-Zneg nyfb bcrengrf n
fznyyre ahzore bs "qvfpbhag fgber", "arvtuobeubbq znexrg", naq "rkcerff" ybpngvbaf gung funer gur fnzr oenaqvat. Vs jr vapyhqr "qvfpbhag" naq "arvtuobeubbq" ybpngvbaf, gur gbgny vf nobhg guerr
gubhfnaq rvtug uhaqerq. V pna'g svaq gur ahzore bs "rkcerff" fgberf, ohg gur sbezng jnf perngrq va 2011 fb gurer cebonoyl nera'g gbb znal.
Different method. Assume all 300 million us citizens are served by a Wal Mart. Any population that doesn't live near a Wal-Mart has to be small enough to ignore. Each Wal-mart probably has between
10,000 and 1 million potential customers. Both fringes seem unlikely, so we can be within a factor of 10 by guessing 100000 people per Wal-Mart. This also leads to 3000 Wal-Marts in the US.
The guys at last.fm are usually very willing to help out with interesting research (or at least were when I worked there a couple of years ago), so if you particularly care about that information
it's worth trying to contact them.
I'm happy to see that the Greatest Band of All Time is the only rock band I can recall ever mentioned in a top-level LessWrong post. I thought rationalists just sort of listened only to Great Works
like Bach or Mozart, but I guess I was wrong. Clearly lukeprog used his skills as a rationalist to rationally deduce the band with the greatest talent, creativity, and artistic impact of the last
thirty years and then decided to put a reference to them in this post :)
If you check out Media posts, you'll see that LWers like a range of music. It wouldn't surprise me too much if they tend to like contemporary classical better than classical classical.
I like a specific subset of classical classical, but I suspect not at all typical.
I thought rationalists just sort of listened only to Great Works like Bach or Mozart
Because of the images of different musical genres in our culture. There is an association of classical music and being academic or upper class. In popular media, liking classical music is a cheap
signal for these character types. This naturally triggers confirmation biases, as we view the rationalist listening to Bach as typical, and the rationalist listening to The Rolling Stones as
atypical. People also use musical preference to signal what type of person they are. If someone wants to be seen as a rationalist, they often mention their love of Bach and don't mention genres with
a different image, except to disparage them.
I think you're conflating "rationalist" and "intellectual." I agree that there is a stereotype that intellectuals only listen to Great Works like Bach or Mozart, but I'm curious where the OP picked
up that this stereotype also ought to apply to LW-style rationalists. I mean, Eliezer takes pains in the Sequences to make anime references specifically to avoid this kind of thing.
I mean, Eliezer takes pains in the Sequences to make anime references specifically to avoid this kind of thing.
Well, he also likes tends to like anime, and anime has a tendency to deal with some future-ish issues.
From On Things That Are Awesome:
Whenever someone compliments "Eliezer Yudkowsky", they are really complimenting "Eliezer Yudkowsky's writing" or "Eliezer Yudkowsky's best writing that stands out most in my mind". People who met
me in person were often shocked at how much my in-person impression departed from the picture they had in their minds. I think this mostly had to do with imagining me as being the sort of actor
who would be chosen to play me in the movie version of my life—they imagined way too much dignity. That forms a large part of the reason why I occasionally toss in the deliberate anime reference,
which does seem to have fixed the divergence a bit.
I'm just pointing out the way such a bias comes into being. I know I don't listen to classical, and although I'd expect a slightly higher proportion here than in the general population, I wouldn't
guess it wold be a majority or significant plurality.
If I had to guess, I'd guess on varied musical tastes, probably trending towards more niche genres than broad spectrum pop than the general population.
Well, Eliezer mentions Bach a bunch in the sequences as an example of a great work of art. I used stereotypes to extrapolate. :p
(the statement was tongue-in-cheek, if that didn't come across. I am genuinely a little surprised to see MBV mentioned here though.)
One of my favorite numbers to remember to aid in estimations is this: 1 year = pi * 10^7 seconds. Its really pretty accurate.
Of course for Fermi estimation just remember 1 Gs (gigasecond) = 30 years.
10^7.5 is even more accurate.
I spend probably a pretty unusual amount of time estimating things for fun, and have come to use more or less this exact process on my own over time from doing it.
One thing I've observed, but haven't truly tested, is my geometric means seem to be much more effective when I'm willing to put a more tight guess on them. I started off bounding them with what I
thought the answer conceivably could be, which seemed objective and often felt easier to estimate. The problem was that often either the lower or upper bound was too arbitrary relative to it's weight
on my final estimate. Say, average times an average 15 year old sends an mms photo in a week. My upper bound may be 100ish but my lower bound could be 2 almost as easily as it could be 5 which ranges
my final estimate quite a bit, between 14 and 22.
I just wanted to say, after reading the Fermi estimate of cars in the US, I literally clapped - out loud. Well done. And I highly appreciate the honest poor first attempt - so that I don't feel like
such an idiot next time I completely fail.
Potentially useful: http://instacalc.com/
I recently ran across an article describing how to find a rough estimate of the standard deviation of a population, given a number of samples, which seems that it would be suitable for Fermi
estimates of probability distributions.
First of all, you need a large enough population that the central limit theorem applies, and the distribution can therefore be assumed to be normal. In a normal distribution, 99.73% of the samples
will be within three standard deviations of the mean (either above or below; a total range of six standard deviations). Therefore, one can roughly estimate the standard deviation by taking the
largest value, subtracting the smallest value, and dividing the result by 6.
This is useful, because in a normal distribution, around 7 in 10 of the samples will be within one standard deviation of the mean, and around 19 in every 20 will be within two standard deviations of
the mean.
If you want a range containing 70% of the samples, why wouldn't you just take the range between the 15th and 85th percentile values?
That would also work, and probably be more accurate, but I suspect that it would take longer to find the 15th and 85th percentile values than it would to find the ends of the range.
How long can the International Space Station stay up without a boost? I can think of a couple of ways to estimate that.
You can try estimating, or keep listing the reasons why it's hard.
Out of the price of a new car, how much goes to buying raw materials? How much to capital owners? How much to labor?
I recommend trying to take the harmonic mean of a physical and an economic estimate when appropriate.
I recommend doing everything when appropriate.
Is there a particular reason why the harmonic mean would be a particularly suitable tool for combining physical and economic estimates? I've spent only a few seconds trying to think of one, failed,
and had trouble motivating myself to look harder because on the face of it it seems like for most problems for which you might want to do this you're about equally likely to be finding any given
quantity as its reciprocal, which suggests that a general preference for the harmonic mean is unlikely to be a good strategy -- what am I missing?
Can I have a representative example of a problem where this is appropriate?
So, what you're saying is that the larger number is less likely to be accurate the further it is from the smaller number? Why is that? | {"url":"http://lesswrong.com/lw/h5e/fermi_estimates/","timestamp":"2014-04-18T23:16:38Z","content_type":null,"content_length":"409667","record_id":"<urn:uuid:4a830954-2bee-4464-8d9b-c82ed0085831>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00086-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
The value of sin(x) can be approximated by using this infinite series formula: x^3 x^5 x^7 x^9 sin(x) = x - ___ + ___ - ___ + ___ ... 3! 5! 7! 9! Write a function approxSin() that accepts two inputs
in this order, a value for x, and a positive interger for the number of terms to computer, then return a result. The result should be in 10 floating-point precision Note: You should use exact same
function name. Create a program that prompts two inputs, a value for x and a value for the number of terms. The program should then display the resulting of the
Best Response
You've already chosen the best response.
question continued: The program should then display the resulting of the computing series. **Hints:** If you are having trouble with this problem try the following steps. 1. Try writing a help
function calFac() that accepts a number and computes its factorial first so that: - 3! = 3\*2\*1 = 6 - 4! = 4\*3\*2\*1 = 24 - ... 2. Be careful with the terms count. The example above shows the
first 5 terms. 3. Use `double` type variables for your computations. `float` will give you slightly different results. **Constraints** - The input **x** is allowed to be any double precision
number (no checks are needed) - Make sure the program checks that **terms** is a positive integer, 1 or greater - If a non-negative **terms** is entered print out: **Input Error** - Make sure the
output is on a single line, no spaces between digits, and no additional text. - You may use the `pow()` function in \<cmath\> only. - You can check wether your calculation is correct, check the
output of the `sin()` function in \<cmath\> Example 1: Enter a value for x and terms seperated by spaces (eg: 2 10): 4.5 -3 Input Error Example 2: Enter a value for x and terms seperated by
spaces (eg: 2 10): 4.5 10 approximated Sine: -0.9775310994 cmath Sine: -0.9775301177
Best Response
You've already chosen the best response.
Things to note before moving on into the code: You're declaring V as a global variable, then returning it in the function approxSin() (While this is not -wrong- it's not considered good practice,
you can either change the function to void and return nothing, or declare V as a local variable to the function and return it as you're doing now Should work well either way.) What seems to be
the problem with this code?
Best Response
You've already chosen the best response.
The code failed on Test Failed on input "2.5 8" and invalid inputs (which means if there was a value inputted that was supposed be false, I didn't fix it for the user input to be true)
Best Response
You've already chosen the best response.
Sometimes I get the wrong sin value compared to the sine value I get from cmath
Best Response
You've already chosen the best response.
V = ( pow(-1, n-1) //exp is another thing, -1^(n-1) should do the trick ( *pow(x, 2*n+1) / calFrac(2*n+1))); I believe the error was in the exp part, everything else seems fine. (Also, you can
change variable t to int, since it only acts as a counter, it has no need to be double)
Best Response
You've already chosen the best response.
I think exp should be fine though. There has to be other errors though :/ It's not working.
Best Response
You've already chosen the best response.
Also, you seem to be equating V to the last value calculating, instead of acumulating all the values evaluated.
Best Response
You've already chosen the best response.
How do I fix that?
Best Response
You've already chosen the best response.
There it goes, the correct function is: V += (Math.pow(-1, n) * (Math.pow(x, 2 * n + 1) / calFrac(2*n + 1))); or V = V + (Math.pow(-1, n) * (Math.pow(x, 2 * n + 1) / calFrac(2*n + 1))); Doing: x
+= 10 is the same as doing x = x + 10 (At least in Java, should work in C# too). Another thing to note, in order to get the correct answer, I initialized V = x to account to the first value (x^1)
/1 which you are not considering in your function
Best Response
You've already chosen the best response.
where did you initialize V=x? Just as a declaration of variables?
Best Response
You've already chosen the best response.
Though if you initialize your for in n = 0 instead of n = 1, the V will take the initial value of x^1/1, then move to x^3/3!
Best Response
You've already chosen the best response.
which is what I want right? How do I make sure the signs are in the right place though?
Best Response
You've already chosen the best response.
What will the new code look like? Thanks!
Best Response
You've already chosen the best response.
Your code should look like... double approxSin(double x, int t) { for (int n=0; n<=t; n++) { V += (pow(-1, n) * (pow(x, 2 * n + 1) / calFrac(2*n + 1))); } return V; }
Best Response
You've already chosen the best response.
Also I think there might be something wrong with my if else statement in the end
Best Response
You've already chosen the best response.
Also, this is C++ I should I specified sooner :P Thanks!
Best Response
You've already chosen the best response.
C#, C++, same thing for me =p. I'm midly experienced at C, but variants escape me. if-else seems fine, note that I changed the approxSin to an int t, maybe that's causing some trouble, you could
change all t's to ints, or change that one back to a double.
Best Response
You've already chosen the best response.
Idk, it still doesn't work completely. I'll take a look at it more later I guess.
Best Response
You've already chosen the best response.
Strangest thing is there's a zero after my cout statements which say, "Enter a number: " ... so it looks like Enter a value for x: 0 _(space for input)
Best Response
You've already chosen the best response.
looks good
Best Response
You've already chosen the best response.
Not at all :(
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4e9cea180b8b3194bed11827","timestamp":"2014-04-18T00:40:51Z","content_type":null,"content_length":"78562","record_id":"<urn:uuid:8665484f-f749-423d-953f-72969ca845f9>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00362-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lacunary Statistical Limit Points in Random 2-Normed Spaces
ISRN Mathematical Analysis
Volume 2013 (2013), Article ID 189721, 5 pages
Research Article
Lacunary Statistical Limit Points in Random 2-Normed Spaces
Department of Mathematics, Suleyman Demirel University, 32260 Isparta, Turkey
Received 29 April 2013; Accepted 13 June 2013
Academic Editors: F. Colombini and J.-L. Wu
Copyright © 2013 A. Güncan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
We introduce the notion -cluster points, investigate the relation between -cluster points and limit points of sequences in the topology induced by random 2-normed spaces, and prove some important
1. Introduction and Background
An interesting and important generalization of the notion of metric space was introduced by Menger [1] under the name of statistical metric space, which is now called probabilistic metric space. In
this theory, the notion of distance has a probabilistic nature. Namely, the distance between two points and is represented by a distribution function ; and for , the value is interpreted as the
probability that the distance from to is less than . In fact the probabilistic theory has become an area of active research for the last forty years. An important family of probabilistic metric
spaces are probabilistic normed spaces. The notion of probabilistic normed spaces was introduced in [2] and further it was extended to random/probabilistic 2-normed spaces by Goleţ [3] using the
concept of 2-norm of Gähler [4]. Applications of this concept have been investigated by various authors, for example, [5–7].
The concept of statistical convergence for sequences of real number was introduced by Fast in [8] and Steinhaus in [9] independently in the same year 1951. A lot of developments have been made in
this area after the works of Salat [10] and Fridy [11]. Recently, Mohiuddine and Aiyub [12] studied lacunary statistical convergence as generalization of the statistical convergence and introduced
the concept -statistical convergence in random 2-normed space. In [13], Mursaleen and Mohiuddine extended the idea of lacunary statistical convergence with respect to the intuitionistic fuzzy normed
space. Also lacunary statistically convergent double sequences in probabilistic normed space was studied by Mohiuddine and Savaş in [14].
The aim of this work is to introduce and investigate the relation between -statistical cluster points, -statistical limit points, and ordinary limit points of sequence in random 2-normed spaces.
First, we recall some of the basic concepts that will be used in this paper. All the concepts listed below are studied in depth in the fundamental book by Schweizer and Sklar [2].
Let denote the set of real numbers and . A mapping is called a distribution function if it is nondecreasing and left continuous with and .
We denote the set of all distribution functions by such that . If , then , where It is obvious that for all .
A triangular norm (-norm) is a continuous mapping such that is an abelian monoid with unit one and if and for all . A triangle function is a binary operation on which is commutative, associative and
for every .
The concept of 2-normed spaces was first introduced by Gähler [4, 15].
Let be a real vector space of dimension , where . A 2-norm on is a function which satisfies (i) if and only if and are linearly dependent; (ii) ; (iii) , ; (iv) . The pair is then called a 2-normed
As an example of a 2-normed space we may take being equipped with the 2-norm the area of the parallelogram spanned by the vectors and , which may be given explicitly by the formula
In 2006, Goleţ [3] introduced the notion of random 2-normed space.
Let be a linear space of dimension greater than one, a triangle, and . Then is called a probabilistic 2-norm and a probabilistic -normed space if the following conditions are satisfied:(i) if and
are linearly dependent, where denotes the value of at ,(ii) if and are linearly independent,(iii) for all ,(iv) for every , and ,(v) whenever .
If is replaced by(v)′ for all and , then is called a random -normed space (for short, RTN space).
Remark 1. Note that every 2-normed space can be made a random 2-normed space in a natural way, by setting for every , and , .
Let be a RTN space. Since is a continuous -norm, the system of -neighborhoods of (the null vector in ) where determines a first countable Hausdorff topology on , called the -topology. Thus, the
-topology can be completely specified by means of -convergence of sequences. It is clear that means and vice versa.
A sequence in is said to be -convergence to if for every , and for each nonzero there exists a positive integer such that or equivalently, In this case we write , .
2. The Main Results
It is known (see [16]) that statistical cluster and statistical limit points set of a given sequence are not altered by changing the values of a subsequence, the index set of which has density zero.
Moreover, there is a strong connection between -statistical cluster points and ordinary limit points of a given sequence. We will prove that these facts are satisfied for -statistical cluster points
and -statistical limit point sets of a given sequence in the topology induced by random 2-normed spaces.
The notion of statistical convergence depends on the density of subsets of , the set of natural numbers.
Definition 2 (see [8, 11]). Let be a subset of . Then the asymptotic density of denoted by , where the vertical bars denote the cardinality of the enclosed set. A number sequence is said to be
statistically convergent to if for every ,. If is statistically convergent to , we write -.
By a lacunary sequence we mean an increasing integer sequence such that and as . Throughout this paper the intervals determined by will be denoted by , and the ratio will be abbreviated by . Let .
The number is said to be the -density of , provided the limit exists (see [17]).
Definition 3 (see [17]). Let be a lacunary sequence. Then a sequence is said to be -convergent to the number if for every the set has -density zero, where In this case we write or .
Definition 4 (see [12]). Let be a RTN space and let be a lacunary sequence. A sequence in a random 2-normed spaces is said to be -statistically convergent or -convergent to with respect to if for
every , and nonzero such that or equivalently In this case we write -, or .
Now we define some concepts in RTN-space.
Definition 5. Let be a RTN space and let be a lacunary sequence. Let be a subsequence of and then one denotes by . If then is called a -thin sequence. On the other hand, is a -nonthin subsequence of
provided that
Definition 6. Let be a RTN space and be a lacunary sequence. is called a -statistical limit point of a sequence provided that there is a -nonthin subsequence of that converges to . Let denotes the
set of all -limit points.
Definition 7. Let be a RTN space and let be a lacunary sequence. is called a -statistical cluster point of a sequence provided that, for every , and nonzero Let denote the set of -statistical cluster
point of the sequence .
Theorem 8. Let be a RTN space and let be a lacunary sequence. If and are sequences in such that then and .
Proof. Assume that and ; say is a -nonthin sequence of that converges to . Since it follows that Therefore, the latter set yields a -nonthin subsequence of that converges to . Hence and . By symmetry
we see that ; hence . Now let and let . Since , we can write for every , , and nonzero . Since for almost all , for every , , and nonzero . Hence, and . By symmetry we see that ; hence .
Theorem 9. Let be a RTN space and let be a lacunary sequence. For any sequence , one has .
Proof. Suppose ; then there is a -nonthin subsequence of that converges to , that is, Since for every , , and nonzero , we have Since converges to , the set is finite for any , , and nonzero .
Therefore, Hence, which means that .
Theorem 10. Let be a RTN space and let be a lacunary sequence. Let be the set of ordinary limit points of and for any sequence , .
Proof. Assume that ; then for every , , and nonzero . We set a -nonthin subsequence of such that for every , , and nonzero , and . Since there are infinitely many elements in , .
The converse of the theorem does not hold.
Theorem 11. Let be a RTN space and let be a lacunary sequence. If for sequence , -, then .
Proof. First, we show that . Fix , , and nonzero . Assume that such that . In this case, there exist and -nonthin subsequences of that converge to and , respectively. Since converges to , we have
which is a finite set. Consider that implies Hence, Since , Therefore, we can write For every , Hence, Therefore, This contradicts (31). Hence, .
Now we assume that such that for some , , and nonzero . Then Since for every , Therefore From (37), the right side of (40) is greater than zero and from (32), the left side of (40) equals to zero.
This is a contradiction. Hence, .
1. K. Menger, “Statistical metrics,” Proceedings of the National Academy of Sciences of the United States of America, vol. 28, pp. 535–537, 1942. View at Publisher · View at Google Scholar · View at
Zentralblatt MATH · View at MathSciNet
2. B. Schweizer and A. Sklar, Probabilistic Metric Spaces, North-Holland, New York, NY, USA, 1983. View at MathSciNet
3. I. Goleţ, “On probabilistic 2-normed spaces,” Novi Sad Journal of Mathematics, vol. 35, no. 1, pp. 95–102, 2005. View at Zentralblatt MATH · View at MathSciNet
4. S. Gähler, “2-metrische Räume und ihre topologische Struktur,” Mathematische Nachrichten, vol. 26, pp. 115–148, 1963. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View
at MathSciNet
5. M. Mursaleen and S. A. Mohiuddine, “On ideal convergence of double sequences in probabilistic normed spaces,” Mathematical Reports, vol. 12, no. 4, pp. 359–371, 2010. View at Zentralblatt MATH ·
View at MathSciNet
6. M. Mursaleen, “On statistical convergence in random 2-normed spaces,” Acta Scientiarum Mathematicarum, vol. 76, no. 1-2, pp. 101–109, 2010. View at MathSciNet
7. M. Mursaleen and S. A. Mohiuddine, “On ideal convergence in probabilistic normed spaces,” Mathematica Slovaca, vol. 62, no. 1, pp. 49–62, 2012. View at Publisher · View at Google Scholar · View
at MathSciNet
8. H. Fast, “Sur la convergence statistique,” Colloquium Mathematicae, vol. 2, pp. 241–244, 1951. View at Zentralblatt MATH · View at MathSciNet
9. H. Steinhaus, “Sur la convergence ordinaire et la convergence asymptotique,” Colloquium Mathematicum, vol. 2, pp. 73–74, 1951.
10. T. Salat, “On statistical convergence of real numbers,” Mathematica Slovaca, vol. 50, pp. 111–115, 2000.
11. J. A. Fridy, “On statistical convergence,” Analysis, vol. 5, no. 4, pp. 301–313, 1985. View at Zentralblatt MATH · View at MathSciNet
12. S. A. Mohiuddine and M. Aiyub, “Lacunary statistical convergence in random 2-normed spaces,” Applied Mathematics & Information Sciences, vol. 6, no. 3, pp. 581–585, 2012. View at MathSciNet
13. M. Mursaleen and S. A. Mohiuddine, “On lacunary statistical convergence with respect to the intuitionistic fuzzy normed space,” Journal of Computational and Applied Mathematics, vol. 233, no. 2,
pp. 142–149, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
14. S. A. Mohiuddine and E. Savaş, “Lacunary statistically convergent double sequences in probabilistic normed spaces,” Annali dell'Universitá di Ferrara, vol. 58, no. 2, pp. 331–339, 2012. View at
Publisher · View at Google Scholar · View at MathSciNet
15. S. Gähler, “Lineare 2-normierte Räume,” Mathematische Nachrichten, vol. 28, pp. 1–43, 1964. View at Publisher · View at Google Scholar · View at MathSciNet
16. J. A. Fridy, “Statistical limit points,” Proceedings of the American Mathematical Society, vol. 118, no. 4, pp. 1187–1192, 1993. View at Publisher · View at Google Scholar · View at Zentralblatt
MATH · View at MathSciNet
17. J. A. Fridy and C. Orhan, “Lacunary statistical convergence,” Pacific Journal of Mathematics, vol. 160, no. 1, pp. 43–51, 1993. View at Publisher · View at Google Scholar · View at Zentralblatt
MATH · View at MathSciNet | {"url":"http://www.hindawi.com/journals/isrn.mathematical.analysis/2013/189721/","timestamp":"2014-04-20T04:06:33Z","content_type":null,"content_length":"469420","record_id":"<urn:uuid:dde34ffa-d14c-4f3b-92e5-d3b5e6c6319a>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00046-ip-10-147-4-33.ec2.internal.warc.gz"} |
{-# LANGUAGE CPP #-}
Copyright (C) 2010 Dr. Alistair Ward
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
{- |
[@AUTHOR@] Dr. Alistair Ward
* Provides a polymorphic algorithm, to /unfold/ a list into a tree, to which an /associative binary operator/ is then applied to re-/fold/ the tree to a /scalar/.
* Implementations of this strategy have been provided for /addition/ and /multiplication/,
though other associative binary operators, like 'gcd' or 'lcm' could also be used.
* Where the contents of the list are consecutive, a more efficient implementation is available in /Factory.Data.Bounds/.
module Factory.Math.DivideAndConquer(
-- * Types
-- ** Type-synonyms
-- * Functions
) where
import Control.Arrow((***))
import qualified Data.Monoid
import qualified Data.Ratio
#if MIN_VERSION_parallel(3,0,0)
import qualified Control.Parallel.Strategies
{- |
* The ratio of the original list-length at which to bisect.
* CAVEAT: the value can overflow.
type BisectionRatio = Data.Ratio.Ratio Int
-- | The list-length beneath which to terminate bisection.
type MinLength = Int
{- |
* Reduces a list to a single scalar encapsulated in a 'Data.Monoid.Monoid',
using a /divide-and-conquer/ strategy,
bisecting the list and recursively evaluating each part; <http://en.wikipedia.org/wiki/Divide_and_conquer_algorithm>.
* By choosing a 'bisectionRatio' other than @(1 % 2)@, the bisection can be made asymmetrical.
The specified ratio represents the length of the left-hand portion, over the original list-length;
eg. @(1 % 3)@ results in the first part, half the length of the second.
* This process of recursive bisection, is terminated beneath the specified minimum list-length,
after which the /monoid/'s binary operator is directly /folded/ over the list.
* One can view this as a <http://en.wikipedia.org/wiki/Hylomorphism_%28computer_science%29>,
in which the list is exploded into a binary tree-structure
(each leaf of which contains a list of up to 'minLength' integers, and each node of which contains an associative binary operator),
and then collapsed to a scalar, by application of the operators.
divideAndConquer :: Data.Monoid.Monoid monoid
=> BisectionRatio -- ^ The ratio of the original list-length at which to bisect.
-> MinLength -- ^ For efficiency, the list will not be bisected, when it's length has been reduced to this value.
-> [monoid] -- ^ The list on which to operate.
-> monoid -- ^ The resulting scalar.
divideAndConquer bisectionRatio minLength l
| any ($ apportion minLength) [
(< 1), --The left-hand list may be null.
(> pred minLength) --The right-hand list may be null.
] = error $ "Factory.Math.DivideAndConquer.divideAndConquer:\tbisectionRatio='" ++ show bisectionRatio ++ "' is incompatible with minLength=" ++ show minLength ++ "."
| otherwise = slave (length l) l
apportion :: Int -> Int
apportion list = (list * Data.Ratio.numerator bisectionRatio) `div` Data.Ratio.denominator bisectionRatio
slave len list
| len <= minLength = Data.Monoid.mconcat list --Fold the monoid's binary operator over the list.
| otherwise = uncurry Data.Monoid.mappend .
#if MIN_VERSION_parallel(3,0,0)
Control.Parallel.Strategies.withStrategy (
Control.Parallel.Strategies.parTuple2 Control.Parallel.Strategies.rseq Control.Parallel.Strategies.rseq
) .
(slave cut *** slave (len - cut)) $ splitAt cut list where --Apply the monoid's binary operator to the two operands resulting from bisection.
cut = apportion len
{- |
* Multiplies the specified list of numbers.
* Since the result can be large, 'divideAndConquer' is used in an attempt to form operands of a similar order of magnitude,
which creates scope for the use of more efficient multiplication-algorithms.
product' :: Num n
=> BisectionRatio -- ^ The ratio of the original list-length at which to bisect.
-> MinLength -- ^ For efficiency, the list will not be bisected, when it's length has been reduced to this value.
-> [n] -- ^ The numbers whose product is required.
-> n -- ^ The resulting product.
product' bisectionRatio minLength = Data.Monoid.getProduct . divideAndConquer bisectionRatio minLength . map Data.Monoid.Product
{- |
* Sums the specified list of numbers.
* Since the result can be large, 'divideAndConquer' is used in an attempt to form operands of a similar order of magnitude,
which creates scope for the use of more efficient multiplication-algorithms.
/Multiplication/ is required for the /addition/ of 'Data.Ratio.Rational' numbers by cross-multiplication;
this function is unlikely to be useful for other numbers.
sum' :: Num n
=> BisectionRatio -- ^ The ratio of the original list-length at which to bisect.
-> MinLength -- ^ For efficiency, the list will not be bisected, when it's length has been reduced to this value.
-> [n] -- ^ The numbers whose sum is required.
-> n -- ^ The resulting sum.
sum' bisectionRatio minLength = Data.Monoid.getSum . divideAndConquer bisectionRatio minLength . map Data.Monoid.Sum | {"url":"http://hackage.haskell.org/package/factory-0.0.0.2/docs/src/Factory-Math-DivideAndConquer.html","timestamp":"2014-04-16T14:45:58Z","content_type":null,"content_length":"19501","record_id":"<urn:uuid:2010cec8-96ce-4a37-9046-273cc08a845c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00584-ip-10-147-4-33.ec2.internal.warc.gz"} |
413 search hits
Integer point sets minimizing average pairwise L1 distance: What is the optimal shape of a town? (2010)
Erik D. Demaine Sándor P. Fekete Günter Rote Nils Schweer Daria Schymura Mariano Zelke
An n-town, n[is an element of]N , is a group of n buildings, each occupying a distinct position on a 2-dimensional integer grid. If we measure the distance between two buildings along the
axis-parallel street grid, then an n-town has optimal shape if the sum of all pairwise Manhattan distances is minimized. This problem has been studied for cities, i.e., the limiting case of very
large n. For cities, it is known that the optimal shape can be described by a differential equation, for which no closed-form solution is known. We show that optimal n-towns can be computed in O
(n[superscript 7.5]) time. This is also practically useful, as it allows us to compute optimal solutions up to n=80.
Analyst behaviour: the geography of social interaction (2012)
Frederik König
An analyst who works in Germany is more likely to publish a high (low) price target regarding a DAX30 stock if other Germany based analysts are also optimistic (pessimistic) about the same stock.
This finding is not biased by the fact that DAX30 companies are headquartered in Germany. In times of bull markets, price targets of analysts who regularly exchange their opinion are higher
correlated compared to other analysts. This effect vanishes in a bearish market environment. This suggests that communication among analysts indeed plays an important role. However, analysts’
incentives induce them not to deviate too much from the overall average during an economic downturn.
Does social interaction destablise financial markets? (2012)
Frederik König
With this paper, I propose a simple asset pricing model that accounts for the influence from social interaction. Investors are assumed to make up their mind about an asset’s price based on a
forecasting strategy and its past profitability as well as on the contemporaneous expectations of other market participants. Empirically analysing stocks of the DAX30 index, I provide evidence
that social interaction rather destabilises financial markets. Based on my results, I state that at least, it does not have a stabilising effect.
Fluctuations of social influence: evidence from the behaviour of mutual fund managers during the economic crisis 2008/09 (2012)
Frederik König
In this paper, I analyse the reciprocal social influence on investment decisions within an international group of roughly 2000 mutual fund managers that invested in companies of the DAX30. Using
a robust estimation procedure, I provide empirical evidence that in the average a fund manager puts 0.69% more portfolio weight on a particular stock, if other fund managers increase the
corresponding position by 1%. The dynamics of this influence on portfolio weights suggest that fund managers adjust their behaviour according to the prevailing market situation and are more
strongly influenced by others in times of an economic downturn. Analysing the working locations of the fund managers, I conclude that more than 90% of the magnitude of influence is due to pure
observation. While this form of influence varies much in time, the magnitude of influence resulting from the exchange of opinion is more or less constant.
Evidence regarding clinical use of microvolt T-wave alternans (2011)
Stefan H. Hohnloser Takanori Ikeda Richard J. Cohen
Background: Microvolt T-wave alternans (MTWA) testing in many studies has proven to be a highly accurate predictor of ventricular tachyarrhythmic events (VTEs) in patients with risk factors for
sudden cardiac death (SCD) but without a prior history of sustained VTEs (primary prevention patients). In some recent studies involving primary prevention patients with prophylactically
implanted cardioverter-defibrillators (ICDs), MTWA has not performed as well. Objective: This study examined the hypothesis that MTWA is an accurate predictor of VTEs in primary prevention
patients without implanted ICDs, but not of appropriate ICD therapy in such patients with implanted ICDs. Methods: This study identified prospective clinical trials evaluating MTWA measured using
the spectral analytic method in primary prevention populations and analyzed studies in which: (1) few patients had implanted ICDs and as a result none or a small fraction (≤15%) of the
reported end point VTEs were appropriate ICD therapies (low ICD group), or (2) many of the patients had implanted ICDs and the majority of the reported end point VTEs were appropriate ICD
therapies (high ICD group). Results: In the low ICD group comprising 3,682 patients, the hazard ratio associated with a nonnegative versus negative MTWA test was 13.6 (95% confidence interval
[CI] 8.5 to 30.4) and the annual event rate among the MTWA-negative patients was 0.3% (95% CI: 0.1% to 0.5%). In contrast, in the high ICD group comprising 2,234 patients, the hazard ratio was
only 1.6 (95% CI: 1.2 to 2.1) and the annual event rate among the MTWA-negative patients was elevated to 5.4% (95% CI: 4.1% to 6.7%). In support of these findings, we analyzed published data from
the Multicenter Automatic Defibrillator Trial II (MADIT II) and Sudden Cardiac Death in Heart Failure Trial (SCD-HeFT) trials and determined that in those trials only 32% of patients who received
appropriate ICD therapy averted an SCD. Conclusion: This study found that MTWA testing using the spectral analytic method provides an accurate means of predicting VTEs in primary prevention
patients without implanted ICDs; in particular, the event rate is very low among such patients with a negative MTWA test. In prospective trials of ICD therapy, the number of patients receiving
appropriate ICD therapy greatly exceeds the number of patients who avert SCD as a result of ICD therapy. In trials involving patients with implanted ICDs, these excess appropriate ICD therapies
seem to distribute randomly between MTWA-negative and MTWA-nonnegative patients, obscuring the predictive accuracy of MTWA for SCD. Appropriate ICD therapy is an unreliable surrogate end point
for SCD. Key words: Arrhythmia; Sudden cardiac death; Cardiac arrest; ICD; T-wave alternans; Surrogate endpoint; Ventricular tachyarrhythmic event; Primary prevention
'Kiezdeutsch goes School' : a multiethnic variety of German from an educational perspective (2008)
Kerstin Paul Ulrike Freywald Eva Wittenberg
This article presents linguistic features of and educational approaches to a new variety of German that has emerged in multi-ethnic urban areas in Germany: Kiezdeutsch (‘Hood German’). From a
linguistic point of view, Kiezdeutsch is very interesting, as it is a multi-ethnolect that combines features of a youth language with those of a contact language. We will present examples that
illustrate the grammatical productivity and innovative potential of this variety. From an educational perspective, Kiezdeutsch has also a high potential in many respects: school projects can help
enrich intercultural communication and weaken derogatory attitudes. In grammar lessons, Kiezdeutsch can be a means to enhance linguistic competence by having the adolescents analyse their own
language. Keywords: German, Kiezdeutsch, multi-ethnolect, migrants’ language, language change, educational proposals
The performance of approximating ordinary differential equations by neural nets (2008)
Josef Fojdl Rüdiger W. Brause
The dynamics of many systems are described by ordinary differential equations (ODE). Solving ODEs with standard methods (i.e. numerical integration) needs a high amount of computing time but only
a small amount of storage memory. For some applications, e.g. short time weather forecast or real time robot control, long computation times are prohibitive. Is there a method which uses less
computing time (but has drawbacks in other aspects, e.g. memory), so that the computation of ODEs gets faster? We will try to discuss this question for the assumption that the alternative
computation method is a neural network which was trained on ODE dynamics and compare both methods using the same approximation error. This comparison is done with two different errors. First, we
use the standard error that measures the difference between the approximation and the solution of the ODE which is hard to characterize. But in many cases, as for physics engines used in computer
games, the shape of the approximation curve is important and not the exact values of the approximation. Therefore, we introduce a subjective error based on the Total Least Square Error (TLSE)
which gives more consistent results. For the final performance comparison, we calculate the optimal resource usage for the neural network and evaluate it depending on the resolution of the
interpolation points and the inter-point distance. Our conclusion gives a method to evaluate where neural nets are advantageous over numerical ODE integration and where this is not the case.
Index Terms—ODE, neural nets, Euler method, approximation complexity, storage optimization.
Handwriting analysis for diagnosis and prognosis of Parkinson’s disease (2006)
Atilla Ünlü Rüdiger W. Brause Karsten Krakow
At present, there are no quantitative, objective methods for diagnosing the Parkinson disease. Existing methods of quantitative analysis by myograms suffer by inaccuracy and patient strain;
electronic tablet analysis is limited to the visible drawing, not including the writing forces and hand movements. In our paper we show how handwriting analysis can be obtained by a new
electronic pen and new features of the recorded signals. This gives good results for diagnostics. Keywords: Parkinson diagnosis, electronic pen, automatic handwriting analysis
On the scope of the referential hierarchy in the typology of grammatical relations (2008)
Balthasar Bickel
In the late seventies, Bernard Comrie was one of the first linguists to explore the effects of the referential hierarchy (RH) on the distribution of grammatical relations (GRs). The referential
hierarchy is also known in the literature as the animacy, empathy or indexibability hierarchy and ranks speech act participants (i.e. first and second person) above third persons, animates above
inanimates, or more topical referents above less topical referents. Depending on the language, the hierarchy is sometimes extended by analogy to rankings of possessors above possessees, singulars
above plurals, or other notions. In his 1981 textbook, Comrie analyzed RH effects as explaining (a) differential case (or adposition) marking of transitive subject ("A") noun phrases in low RH
positions (e.g. inanimate or third person) and of object ("P") noun phrases in high RH positions (e.g. animate or first or second person), and (b) hierarchical verb agreement coupled with a
direct vs. inverse distinction, as in Algonquian (Comrie 1981: Chapter 6).
Prosodic tautomorphemicity in Sino-Tibetan (2003)
Balthasar Bickel
Sino-Tibetan is a prime example of how strongly a language family can typologically diversify under the pressure of areal spread features (Matisoff 1991, 1999). One of the manifestation of this
is the average length of prosodic words. In Southeast Asia, prosodic words tend to average on one or one-and-a-half syllables. In the Himalayas, by contrast, it is not uncommon to encounter
prosodic words containing five to ten syllables. The following pair of examples illustrates this. | {"url":"http://publikationen.stub.uni-frankfurt.de/solrsearch/index/search/searchtype/all/start/0/rows/10/languagefq/eng/doctypefq/preprint","timestamp":"2014-04-18T09:29:38Z","content_type":null,"content_length":"43688","record_id":"<urn:uuid:0c264ab3-0dba-40de-aefc-ba76ee3a28a3>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00005-ip-10-147-4-33.ec2.internal.warc.gz"} |
Parallelogram- Length of diagonal with just lengths of sides?
July 20th 2012, 12:24 PM
Parallelogram- Length of diagonal with just lengths of sides?
I was working on a worksheet that was drilling me on the Law of Cosines, when I came to a question which I think must be misprinted, or incomplete, but I want to make sure that it isnt actually
possible to figure out. I need to find the length of the diagonal in a parallelogram, given only the length of the sides. No angles were supplied. Obviously this cuts the Law of Cosines
completely out, but I was wondering if maybe it could be solved another way, or maybe there is some sort of law for getting the angles out of it. The length of the two sides are 6 and 12. I
thought that maybe since one side is twice as long, maybe there is some sort of rule for figuring it out, although I doubt it. I also thought that maybe it was as simple as a special triangle,
but with legs of 6 and 12, I cant think of any hypotenuse that would match up for the special triangles.
Thanks for you help!
July 20th 2012, 12:30 PM
a tutor
Re: Parallelogram- Length of diagonal with just lengths of sides?
If the question only gave two sides then the parallelogram is not unique.
July 20th 2012, 12:33 PM
Re: Parallelogram- Length of diagonal with just lengths of sides?
Ok, thanks. That is what I thought. I wracked my brain for a good 15 mins trying to think of absolutely anything. | {"url":"http://mathhelpforum.com/trigonometry/201194-parallelogram-length-diagonal-just-lengths-sides-print.html","timestamp":"2014-04-21T11:59:47Z","content_type":null,"content_length":"4844","record_id":"<urn:uuid:81348d7e-38fc-490b-b05e-87535aab4c79>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00270-ip-10-147-4-33.ec2.internal.warc.gz"} |
Group whose order is a power of a prime contains an element of order prime
February 16th 2011, 09:52 PM #1
Group whose order is a power of a prime contains an element of order prime
I don't know how to go about this. So the group has $p^{n}$ elements and I have to somehow use cosets or Langrage's theorem, but I really don't understand much of this. And I can't use Cauchy's
Theorem, it uses concepts I haven't covered yet. Any help would be appreciated.
Edit: So I know by Lagrange's theorem that the order of a subgroup of $G$ must divide the order of $G$, so given $g\in G$, the cyclic subgroup $<g>$ has order that divides $p^{n}$. At this part,
I'm stuck.
So since $p$ is prime, the order of $g$ is either $p$ or some power of $p$, say $p^m$. If $|g|=p$, then we're done, so assume that $|g|eq p$. So an element $g$ has order $p^m$ if $p^m$ is the
smallest positive integer with the property that $g^{p^m}=1$.
Now $g$ generates a cyclic subgroup of $G$ that consists of $p^m$ elements, each of which is a power of $g$.
So let $g^k$ be an element in $\langle g\rangle$ where $k$ is a positive integer and $k\leq p^m$. Now $g^k$ generates a subgroup of $\langle g\rangle$, and the order of $g^k$ must divide $p^m$.
Now since $k\leq p^m$, is there some value of $k$ such that $k^p=p^m$?
Check out the section on cyclic subgroups in Artin's Algebra. There is a proposition that talks about the GCD of the order of a group and the order of an element in the group. makes the rest of
the proof a snap.
I followed you until you brought up $k^{p} = p^{m}$, but I think I got a solution:
Assume $ge 1_{G}$ If the order of $<g>$ is $p^{n}$, then $G = <g>$ and so the element $g^{p^{n-1})$ has order $p$ since $(g^{p^{n-1}})^{p} = g^{p^{n}} = 1$, and since $p$ is prime, the claim
follows. Now if the order of $<g>$ is not $p^{n}$, then it's order must divide $p^{n-1}$, since the order of $<g>$ must be an element of $P = \{ p, ..., p^{n-1}\}$. So let $p^{m} = ord(<g>)$.
Then $p^{m}= \frac{p^{n}}{p^{n-m}}$. Then $g^{\frac{p^{n-1}}{p^{n-m}}$ has order $p$. Q.E.D.
Last edited by Pinkk; February 16th 2011 at 10:49 PM.
In fact there are at least p-1 such elements. One of my favorite proofs in algebra is McKay's proof of Cauchy's theorem, which is outlined in Dummit and Foote:
Consider the set S of p-tuples of elements in G whose product is the identity: $S=\{(g_1,\dots,g_p)\mid \prod g_i=1\}$. Note there are $|G|^{p-1}$ of these, since we can choose the first p-1
elements however we like, then choose the last to be the inverse of their product.
Next show that the cyclic group $\mathbb{Z}/p\mathbb{Z}$ acts on the set of p-tuples by cyclic permutation, i.e. if the action of 1 on a p-tuple is to shift every entry one place to the right,
and move the last entry into the first place, then the product of the elements remains the identity.
Now by the orbit-stabilizer theorem, the size of each orbit under this action divides p, so every orbit has size 1 or p, and the union of all orbits is the set S. Since p divides |S|, the number
of size-1 orbits is divisible by p. A size-1 orbit is one that is unchanged by cyclic permutations, i.e. a p-tuple consisting of just one element: (g, g, ..., g). If there's a tuple like that, it
means $g^p=1$, and so if g is not the identity, then |g|=p as desired. But there has to be a nonidentity element like that, since (1, 1, ..., 1) is in S, and so there are at least (p-1) more with
orbit size 1.
February 16th 2011, 10:14 PM #2
Junior Member
Jun 2010
February 16th 2011, 10:35 PM #3
February 17th 2011, 07:51 AM #4 | {"url":"http://mathhelpforum.com/advanced-algebra/171579-group-whose-order-power-prime-contains-element-order-prime.html","timestamp":"2014-04-18T05:22:27Z","content_type":null,"content_length":"48875","record_id":"<urn:uuid:c3932c89-4926-4d79-a158-95261fb86657>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00388-ip-10-147-4-33.ec2.internal.warc.gz"} |
'Self-correcting' gates advance quantum computing
Post-doctoral Fellow Kaveh Khodjasteh and Associate Professor of Physics and Astronomy Lorenza Viola (photo by Joseph Mehling '69)
(PhysOrg.com) -- Two Dartmouth researchers have found a way to develop more robust “quantum gates,” which are the elementary building blocks of quantum circuits. Quantum circuits, someday, will be
used to operate quantum computers, super powerful computers that have the potential to perform extremely complex algorithms quickly and efficiently.
Associate Professor of Physics and Astronomy Lorenza Viola and Post-doctoral Fellow Kaveh Khodjasteh report their findings in the Feb 27, 2009 issue of Physical Review Letters. Their study is titled
“Dynamically Error-Corrected Gates for Universal Quantum Computing.”
The futuristic realm of quantum computing considers units of information called quantum bits, or qubits, which can be carried by quantum-mechanical objects such as electrons or atoms. Unlike today’s
computers, which use binary strings of 0s and 1s, a quantum computer uses qubits that can each be in a superposition of 0 and 1. As a result, quantum computers could efficiently solve computational
problems beyond the reach of today’s computers.
“An outstanding challenge stems from the fact that quantum bits are incredibly more prone to errors than their traditional-sized counterparts,” says Viola, who is the director of Dartmouth’s Quantum
Information Science Initiative. “All quantum gates, the building blocks for implementing complex quantum-mechanical circuits, are plagued by errors originating from both the interaction with the
surrounding quantum environment or operational imperfections.”
Viola’s and Khodjasteh’s study showed how to construct new quantum gates that can be “dynamically corrected” out of sequences from the available faulty gates. In this manner, the researchers say, the
net total error is approximately canceled.
“The key idea is to carefully exploit known relationships between unknown errors,” says Viola. “Dynamically corrected gates allow for substantially higher fidelity to be reaching quantum circuits,
and can thus bring the implementation of reliable quantum-computing devices closer to reality.”
Provided by Dartmouth College
1 / 5 (4) Mar 12, 2009
the incredible part; some believe that quantum teleportation is already causing telepathic phenomena where large amounts of coherent information are being transferred between people. There is some
type of natural error correcting mechanism or quantum pheonomenon that like superconductivity causes long range cohenrency.
5 / 5 (1) Mar 12, 2009
the incredible part; some believe that quantum teleportation is already causing telepathic phenomena where... [etc.]
That's not particularly incredible. "Some" is a group of people with a long track record of believing all sorts of thing.
1 / 5 (1) Mar 13, 2009
NFarbstein what a wonderful world you live in, any time you have difficulty with reality just invent a whole new swath of "special" knowledge. Then you are the only one who knows and it takes as much
effort as a children's fairytale.
1 / 5 (1) Mar 14, 2009
the incredible part; some believe that quantum teleportation is already causing telepathic phenomena where large amounts of coherent information are being transferred between people. There is
some type of natural error correcting mechanism or quantum pheonomenon that like superconductivity causes long range cohenrency.
Long range coherency is found within a laser beam: It is wrong to claim that it also manifests within a traditional superconductor. In a traditional superconductor current is transported by
correlated movement of adjacent charge-carriers: i.e. a local mechanism. Coherent superconduction is only possible when it occurs without having localised charge-carriers conveying the current.
The charge is then teleported across the phase. I have already generated this phase 8 years ago by extracting electrons with an anode from n-type diamond: When doing this and the electron density
becomes high enough the electrons between the diamond and anode condense to form a single macro-wave. This wave is coherent and does therefore superconduct by teleportation (no local correlated
movement of charge-carriers involved). This macro- electron wave is on a smaller scale like a covalent bond (2 condensed electrons) or a double bond (4 condensed electrons) or a triple bond (6
condensed electrons) except that consists of milliosn of electrons.
When such a wave exist of neutral matter, one obtains "dark matter". The latter should also be able to teleport: This could explain telepathy.
It is highly likely that a similar condensation of electrons form along a DNA string. | {"url":"http://phys.org/news156101597.html","timestamp":"2014-04-20T18:15:10Z","content_type":null,"content_length":"73113","record_id":"<urn:uuid:d87687f0-8b17-498d-a9ae-53a9114d89f7>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00399-ip-10-147-4-33.ec2.internal.warc.gz"} |
Basic model
Next: Part of speech tagging Up: Assigning Phrase Breaks from Previous: Background
We formally define the problem as follows. Between every pair of words is a juncture, which can take one of a number of juncture types. In the simplest case the set of juncture types consists of
break and non-break (the only case discussed here), but in principle any number is possible. The task of the algorithm is to decide the best sequence of juncture types for each sentence.
Our algorithm uses a Markov model where each state represents one of the juncture types and emits probabilities of part-of-speech (POS) sequences occurring. We usually take two words before and one
after the juncture in questions to represent the POS sequence:
where j is the juncture in question and
In the simplest case there are only two states in the model, one for break and the other for non-break. The transition probabilities are determined by the prior probability of each juncture type
occurring. Bayes' rule is used to combine the emission probability of each state (P(C|j)) with the transition probability (P(j)), to find the probability we are interested in, P(j|C). This case
represents local information only, i.e. the basic probability of each juncture type at each point. To take more context into account we use an n-gram of juncture sequences. The ngram gives the
probability of a juncture type given the previous N junctures, i.e.
For an ngram of size N, a Markov model is built with
Training the network is straightforward: the emission probabilities are calculated by collecting all the examples of breaks in the training data. For all the possible unique POS sequences, C, counts
are made from the data of how many times each occurs. These are converted to the probability P(C|break) by dividing by the total number of breaks. P(C|non-break) is calculated the same way. The
ngrams are calculated by counting the number of times each unique sequence of length N occurs in the training data.
At run time, this network is efficiently searched using the Viterbi algorithm to find the most likely sequence of junctures given the input POS tags for a sentence.
Next: Part of speech tagging Up: Assigning Phrase Breaks from Previous: Background Alan W Black
Tue Jul 1 17:09:00 BST 1997 | {"url":"http://www.cs.cmu.edu/~awb/papers/ES97pos/node2.html","timestamp":"2014-04-17T10:54:07Z","content_type":null,"content_length":"5025","record_id":"<urn:uuid:0cb05b92-94fd-45d8-a350-f8c1a3d2e512>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00425-ip-10-147-4-33.ec2.internal.warc.gz"} |
14. When you go to the market, what can you use as a guide to learn the number of calories in foods?
A. The corporate dietitian
B. A government listing of all foods and their contents
C. The packaging label
D. Frozen foods always have fewer calories than canned foods, so purchase frozen whenever possible
14. When you go to the market, what can you use as a guide to learn the number of calories in foods? A. The corporate dietitian B. A government listing of all foods and their contents C. The
packaging label D. Frozen foods always have fewer calories than canned foods, so purchase frozen whenever possible
C. The packaging label
Not a good answer? Get an answer now. (FREE)
There are no new answers. | {"url":"http://www.weegy.com/home.aspx?ConversationId=12EDAA16","timestamp":"2014-04-16T21:57:03Z","content_type":null,"content_length":"41166","record_id":"<urn:uuid:9c41ffbf-573d-47b3-9bd2-4dfaf6ce6d47>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: arXiv:0910.3772v1[hep-lat]20Oct2009
Can complex Langevin dynamics evade the sign
Gert Aarts
Physics Department, Swansea University, Swansea, United Kingdom
E-mail: g.aarts@swan.ac.uk
I answer the question in the title for the relativistic Bose gas at finite chemical potential using
numerical lattice simulations, complemented with analytical understanding.
The XXVII International Symposium on Lattice Field Theory
July 26-31, 2009
Peking University, Beijing, China
c Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike Licence. http://pos.sissa.it/
Can complex Langevin dynamics evade the sign problem? Gert Aarts
1. Introduction
As is well known, at nonzero baryon chemical potential the complexity of the fermion deter-
minant prohibits the use of importance sampling in lattice QCD. This makes the determination of
the QCD phase diagram an outstanding open problem (a summary of what is possible despite this
obstruction can be found in Ref. [1]). Stochastic quantization [2] does not rely on importance sam-
pling: the configurations that dominate in the partition function are found by integrating complex | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/388/2037659.html","timestamp":"2014-04-20T23:29:41Z","content_type":null,"content_length":"8284","record_id":"<urn:uuid:6946b399-ebb3-414a-8200-e52a4a6f04c6>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00436-ip-10-147-4-33.ec2.internal.warc.gz"} |
Average Rate of Change
March 2nd 2010, 09:58 AM #1
Apr 2009
Average Rate of Change
I need some help with this problem:
A company introduces a new product for which the number of units sold S is given by the equation below, where t is the time in months.
I need to:
a) Find the average rate of change of S(t) during the first year (rounded to 1 decimal place).
b) During what month does S'(t) equal the average rate of change during the first year?
I'm not too sure what to do for the first one. Some people have told me to find the derivative of the problem then plug 12 in for t, but that's not right. I could probably find B after I find A.
The average rate of change on an closed interval [a,b] is given by
So in your case $a=0, b=12$. And the average rate of change is given by
For the second part, just use the mean value theorem
I need some help with this problem:
A company introduces a new product for which the number of units sold S is given by the equation below, where t is the time in months.
I need to:
a) Find the average rate of change of S(t) during the first year (rounded to 1 decimal place).
b) During what month does S'(t) equal the average rate of change during the first year?
I'm not too sure what to do for the first one. Some people have told me to find the derivative of the problem then plug 12 in for t, but that's not right. I could probably find B after I find A.
The average rate of change is given by the slope of the line drawn through 2 points on a graph. So, the points in question here are, $(0,S(0))$, and $(12,S(12))$. Can you find the slope?
Hmm... So I got 15.2 for A, which is incorrect. I plugged 12 in for t and got 182.8, then dividing that by 12 gave me 15.2. Plugging 0 in for [t] doesn't seem possible, since 9/0 is undefined...
Actually I did that wrong. So I got 189.4736 for s(12), then 56.25 for s(0). I subtracted S(12) and S(0), and got 133.223684211. I then divided that by 12 and got 11.1, which is incorrect.
Ok, so I got 182.8125 for S(12) and 56.25 for S(0). Subtracting the two and dividing by 12 gives me 10.6 (rounding to one decimal).
So I took the derivative of the original problem and set it equal to 10.6. I got two answers; -11.9799 and 3.97993.
This is the last submission I have for this problem, so I need to make sure this is right ahead of time. Is part A correct? For part B, I just have to choose the month from a drop-down menu.
Part A)
$S(12) = 182.8125, S(0) = 56.25$
$S_{average} = \frac{S(12)-S(0)}{12-0} = \frac{126.5625}{12} = 10.546875$
Part B
$\frac{dS(t)}{dt} = \frac{675}{(4+t)^2}$
By MVT:
$\frac{675}{(4+t)^2} = 10.546875$
Solve for t
$t = \{-12,4\}$
Since there is no such thing a negative month, we use $t=4$
I figured out part A, that was sorta simple (after seeing what others have posted). I guess I got the answer wrong, because I put 10.6 (rounding up...), but I guess they were looking for 10.5.
So anyway, I sort of understood part B. Although the correct month was May, I didn't understand how they got that.
March 2nd 2010, 10:13 AM #2
March 2nd 2010, 10:14 AM #3
March 2nd 2010, 10:31 AM #4
Apr 2009
March 2nd 2010, 11:08 AM #5
Apr 2009
March 2nd 2010, 05:13 PM #6
Apr 2009
March 2nd 2010, 07:00 PM #7
March 2nd 2010, 07:08 PM #8
Apr 2009 | {"url":"http://mathhelpforum.com/calculus/131637-average-rate-change.html","timestamp":"2014-04-17T10:14:04Z","content_type":null,"content_length":"52171","record_id":"<urn:uuid:aaa39919-07a0-4d16-84b8-cf388af71a1f>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00270-ip-10-147-4-33.ec2.internal.warc.gz"} |