Topic: Solomon Garfunkel on Common Core Standards
Replies: 9   Last Post: Jul 30, 2012 8:36 PM

Re: Solomon Garfunkel on Common Core Standards
Posted: Jul 28, 2012 7:31 PM

Dom, can you share more detail on why you think CCSSM is rubbish?

I think the elementary standards are on the right track, but they are only a topic list. The teaching of mathematics involves more than just a good list of topics; it involves a textbook, a curriculum, and a teacher. Just creating a good list of topics is not enough to guarantee those other elements. Case in point: virtually every current rattrap of a curriculum has already (self-)qualified itself as being "aligned" with these standards, even though they are not. But that is marketing, something we are used to from Pepsi or Philip Morris and should be just as used to from the educational industry. They are businesses, with executives, bills to pay, and mouths to feed like any other.

I think the elementary topics are at least in the right order of magnitude. The high school standards are destined to fail though. Pages 4 and 5:

"One of the hallmarks of the Common Core State Standards for Mathematics is the specification of content that all students must study in order to be college and career ready. This 'college and career ready line' is a minimum for all students."

"Furthermore, research shows that allowing low-achieving students to take low-level courses is not a recipe for academic success (Kifer, 1993). The research strongly suggests that the goal for districts should not be to stretch the high school mathematics standards over all four years. Rather, the goal should be to provide support so that all students can reach the college and career ready line by the end of the eleventh grade, ending their high school career with one of several high-quality mathematical courses that allows students the opportunity to deepen their understanding of the college- and career-ready standards."

First off, why did they need research to show that students who fail math are unsuccessful? Secondly, are they unaware that less than 15% of students succeed through algebra? Are they aware that if you asked 100 randomly chosen adults to solve a quadratic equation, maybe 3 would be able to do so? This "Algebra for All" philosophy is the death knell of this initiative.

Bob Hansen
Incremental SVD Package

The Problem

Many applications require only a subset of the singular values/vectors of a given matrix. Three distinct approaches exist for this problem:

• Compute the full SVD and truncate the parts that are not needed. This is wasteful and, often, prohibitively expensive.
• Transform the singular value problem into a symmetric eigenvalue problem, apply an iterative eigensolver to compute the relevant part of the spectrum, and back-transform the eigenvalue solutions into the desired singular value solutions. This is the most popular approach for large SVD problems, especially where the data matrix is sparse.
• Attack the singular value problem directly via some specialized solver:
  □ The JD-SVD method of Hochstenbach. This is analogous to the Jacobi-Davidson eigensolver. It operates by using a Newton method to compute an update to the current approximation that attempts to set the SVD problem residuals to zero. This is wrapped by a Davidson-type two-sided subspace acceleration strategy, which serves to improve the rate of convergence and globalize the Newton method. JD-SVD is best applied to finding either the largest or smallest singular values.
  □ The computation of the dominant singular vectors can be characterized as the maximization of a particular function over a Riemannian manifold. This allows the application of a number of Riemannian optimization techniques. See the GenRTR page for more information. There is currently no analogous method for computing the smallest singular values.
  □ There exist a number of efforts which compute the dominant singular vectors via a neural network.
  □ A family of so-called low-rank incremental SVD methods allows the approximation of the dominant or dominated singular subspaces in a pass-efficient manner. These methods are the focus of this webpage.
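The core idea behind these low-rank incremental methods can be sketched in a few lines of Python/NumPy. This is a simplified illustration of the general update (stream in a block of columns, re-factor, truncate back to rank k), not the IncPACK implementation; the function and variable names here are my own.

```python
import numpy as np

def inc_svd(A, k, block=10):
    """Stream the columns of A in blocks, keeping only a rank-k
    left factorization U * diag(S) after each update."""
    m, n = A.shape
    U = np.zeros((m, 0))
    S = np.zeros(0)
    for j in range(0, n, block):
        C = A[:, j:j + block]                     # next block of columns
        # Combine the current rank-k factors with the new block, re-factor
        Q, R = np.linalg.qr(np.hstack([U * S, C]))
        Uh, Sh, _ = np.linalg.svd(R, full_matrices=False)
        U, S = (Q @ Uh)[:, :k], Sh[:k]            # truncate back to rank k
    return U, S

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 200))  # exact rank 5
_, S = inc_svd(A, k=5)
print(np.allclose(S, np.linalg.svd(A, compute_uv=False)[:5]))  # True
```

When the matrix has (numerical) rank at most k, the truncation loses nothing and the streamed singular values match the batch SVD; for general matrices the result is an approximation of the dominant singular subspace, and the multipass strategies mentioned below exist to refine it.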
Incremental SVD Publications

The most recent talk of Baker provides a good introduction to the family of incremental/streaming SVD methods:

• Incremental Methods for Computing Extreme Singular Subspaces, presented at the 2012 SIAM Conference on Applied Linear Algebra, Valencia, Spain, June 2012.

The family of low-rank incremental SVD methods was originally described in the following papers:

• B. S. Manjunath, S. Chandrasekaran, and Y. F. Wang. "An eigenspace update algorithm for image analysis". In IEEE Symposium on Computer Vision, page 10B Object Recognition III, 1995.
• A. Levy and M. Lindenbaum. "Sequential Karhunen-Loeve basis extraction and its application to images". IEEE Transactions on Image Processing, 9(8):1371-1374, August 2000.
• Y. Chahlaoui, K. Gallivan, and P. Van Dooren. "An incremental method for computing dominant singular spaces". In Computational Information Retrieval, pages 53-62. SIAM, 2001.
• M. Brand. "Incremental singular value decomposition of uncertain data with missing values". In Proceedings of the 2002 European Conference on Computer Vision, 2002.
• Y. Chahlaoui, K. Gallivan, and P. Van Dooren. "Recursive calculation of dominant singular subspaces". SIAM J. Matrix Anal. Appl., 25(2):445-463, 2003.
• M. Brand. "Fast low-rank modifications of the thin singular value decomposition". Linear Algebra and its Applications, 415(1):20-30, May 2006.

Baker's thesis described a generalization of these methods, with an emphasis on efficient implementations:

• C. G. Baker. "A block incremental algorithm for computing dominant singular subspaces". 2004.

The following papers present further analysis of these methods, including descriptions of the multipass methods:

• C. G. Baker, K. Gallivan, and P. Van Dooren. "Low-Rank Incremental Methods for Computing Dominant Singular Subspaces". Linear Algebra and its Applications, 436(8):2866-2888, April 2012.
• C. G. Baker, K. Gallivan, and P. Van Dooren. "Low-Rank Incremental Methods for Computing Dominant Singular Subspaces". 10th Copper Mountain Conference on Iterative Methods, 2008.

IncPACK Software

IncPACK contains implementations of the low-rank incremental SVD methods for approximating either the largest or smallest singular values of a matrix, along with the associated singular vectors. Currently, the package provides a single solver: an implementation of the Sequential Karhunen-Loeve method (Levy and Lindenbaum, 2000). This solver, as described by the authors, computes approximations to the left singular subspace. It has been modified to also compute the right singular subspace, and to improve these approximations via multiple passes through the data matrix.

IncPACK currently provides implementations in MATLAB, released under an open-source modified BSD license. An MPI-based implementation in C++ is available in Trilinos in the RBGen package (development branch only). A beta implementation for NVIDIA GPUs using CUDA is available below.

MATLAB IncPACK releases:

Version  Date         .zip               .tgz               Notes
0.1.2    18 Dec 2012  IncPACK-0.1.2.zip  IncPACK-0.1.2.tgz  Updated to include modified BSD software license.
0.1.1    11 Apr 2008  IncPACK-0.1.1.zip  IncPACK-0.1.1.tgz  Minor changes. Thresholding logic for 'S' case was not correct. Improved documentation.
0.1.0    8 Apr 2008   IncPACK-0.1.zip    IncPACK-0.1.tgz    Initial public release of MATLAB IncSVD package. Provides implementation of sequential Karhunen-Loeve, with the following multipass strategies: simple restarting; steepest descent (variant A); steepest descent (variant B).

GPU (CUDA) G-IncSVD releases:

Version  Date         .zip              .tgz              Notes
1.0      31 Dec 2012  G-IncSVD-1.0.zip  G-IncSVD-1.0.tgz  Initial release of GPU IncSVD (G-IncSVD), under modified BSD software license.

The authors of the codes are:

• Chris Baker, Oak Ridge National Laboratory
• Kyle Gallivan, Florida State University
• Paul Van Dooren, Université catholique de Louvain

Funding for this work came in part from:

• National Science Foundation Award 032944: "Collaborative Research: Model Reduction of Dynamical Systems for Real Time Control"
• National Science Foundation Award 9912415: "Efficient Algorithms for Large Scale Dynamical Systems"

Related software

• The JD Gateway contains information on the JD-SVD method, which can be used to compute the dominant or dominated SVD.
• The GenRTR package provides a solver for computing the dominant SVD.
• The RTRESGEV package provides eigensolvers, which can be used to compute the dominant or dominated SVD.
Interested in Logic But need HELP
February 17th 2010, 10:49 AM #1

This is an old assignment my roommate had from school. I was interested in logic and started to read his notes and books, and came across this assignment. I tried to figure it out but I failed, according to him. So if anyone here could help me out it would be appreciated, thanks!

Construct a truth-table that demonstrates whether (1) is truth-functionally true, false, or indeterminate. Explain why your truth-table demonstrates the intended conclusion.

Construct a truth-table that demonstrates whether (2) is true or false. The symbol |= stands for truth-functional entailment. Explain why your truth-table demonstrates the intended conclusion.

{(B > C), B} |= C

Note: > is material implication.

Consider the following argument. Jones or Sally has the highest exam score. Jones scores higher than Sally only if he did not play soccer over the weekend. Since he played soccer, Sally has the highest score.
a) Construct an abbreviation scheme, and symbolize the argument in standard form.
b) Construct a truth-table that demonstrates whether the argument is valid or invalid. Explain why your truth-table demonstrates the intended conclusion.

Consider the following set of sentences: {Neither Smith nor Jones is happy; Brown is unhappy provided that Jones is unhappy; Smith is happy provided that Brown is unhappy}.
a) Construct an abbreviation scheme, and symbolize each sentence in the set.
b) Construct a truth-table that demonstrates whether the set is truth-functionally consistent or inconsistent. Explain why your truth-table demonstrates the intended conclusion.
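The entailment in (2) can be checked mechanically. This is a rough Python sketch (my own illustration, not part of the assignment): enumerate every truth assignment and verify that no row makes all premises true while the conclusion is false.

```python
from itertools import product

# Premises of {(B > C), B} and the conclusion C,
# with ">" read as material implication
premises = [lambda B, C: (not B) or C,   # B > C
            lambda B, C: B]              # B
conclusion = lambda B, C: C

rows = list(product([True, False], repeat=2))
entails = all(conclusion(B, C) for B, C in rows
              if all(p(B, C) for p in premises))
print(entails)  # True: this is just modus ponens
```

The same loop, extended to three variables, works for the soccer argument and the consistency question, so a truth table is really just an exhaustive case check.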
Dark Buzz

Research papers in the soft sciences usually use a type of counterfactual called a null hypothesis. The object of such a paper is often to disprove a hypothesis by collecting data and applying statistical analysis. A pharmaceutical company might do a controlled clinical study of a new drug. Half of the test subjects get the new drug, and half get a placebo. The null hypothesis is that the drug and the placebo are equally effective. The company hopes to prove that the null hypothesis is false, and that the drug is better than the placebo. The proof consists of a trial with more subjects doing better with the drug, and a statistical argument that the difference was unlikely to be pure luck.

To see how a statistical disproof of a null hypothesis would work, consider a trial consisting of 100 coin tosses. The null hypothesis is that heads and tails are equally likely. That means that we would get about 50 heads in a trial, on average, with the variation being a number called "sigma". The next step in the analysis is to figure out what sigma is. In this case, for a fair coin, sigma is 5. That means that the number of heads in a typical trial run will differ from 50 by about 5. Two thirds of the trials will be within one sigma, or between 45 and 55. 95% will be within two sigmas, or between 40 and 60. Over 99% will be within three sigmas, or between 35 and 65.

Thus you can prove that a coin is biased by tossing it 100 times. If you get more than 65 heads, then either you were very unlucky or the chance of heads was more than 50%. A company can show that its drug is effective by giving it to 100 people, and showing that it is better than the placebo 65 times. Then the company can publish a study saying that the probability that the data matches the (counterfactual) null hypothesis is 0.01 or less. That probability is called the p-value. A p-value of 0.01 means that the company can claim that the drug is effective, with 99% confidence.
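These numbers are easy to check. A small Python sketch (illustration only) computes sigma for 100 fair-coin tosses and the exact binomial probability of seeing 65 or more heads under the null hypothesis:

```python
from math import comb, sqrt

n, p = 100, 0.5
# Standard deviation of a Binomial(n, p) count: sqrt(n * p * (1 - p))
sigma = sqrt(n * p * (1 - p))
print(sigma)  # 5.0

# Exact tail probability of 65 or more heads for a fair coin
tail = sum(comb(n, k) for k in range(65, n + 1)) / 2**n
print(tail < 0.01)  # True: a sub-1% event, matching the 3-sigma claim
```

So 65 heads sits three sigmas above the mean, and the exact tail probability is comfortably under the 0.01 threshold quoted in the text.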
The p-value is the leading statistic for getting papers published and drugs approved, but it does not really confirm a hypothesis. It just shows an inconsistency between a dataset and a counterfactual hypothesis. As a practical matter, the p-value is just a statistic that allows journal editors an easier decision on whether to publish a paper. A p-value under 0.05 is considered statistically significant, and not otherwise. It does not mean that the paper's conclusions are probably true.

A Nature mag editor writes:

scientific experiments don't end with a holy grail so much as an estimate of probability. For example, one might be able to accord a value to one's conclusion not of "yes" or "no" but "P<0.05", which means that the result has a less than one in 20 chance of being a fluke. That doesn't mean it's "right". One thing that never gets emphasised enough in science, or in schools, or anywhere else, is that no matter how fancy-schmancy your statistical technique, the output is always a probability level (a P-value), the "significance" of which is left for you to judge – based on nothing more concrete or substantive than a feeling, based on the imponderables of personal or shared experience. Statistics, and therefore science, can only advise on probability – they cannot determine The Truth. And Truth, with a capital T, is forever just beyond one's grasp.

This explanation is essentially correct, but some scientists who should know better argue that it is wrong and anti-science. A fluke is an accidental (and unlikely) outcome under the (counterfactual) null hypothesis. The scientific paper says that either the experiment was a fluke or the null hypothesis was wrong. The frequentist philosophy that underlies the computation does not allow giving a probability on a hypothesis. So the reader is left to deduce that the null hypothesis was wrong, assuming the experiment was not a fluke. The core of the confusion is over the counterfactual.
Some people would rather ignore the counterfactual, and instead think about a subjective probability for accepting a given hypothesis. Those people are called Bayesians, and they argue that their methods are better because they more completely use the available info. But most science papers use the logic of p-values to reject counterfactuals, because assuming the counterfactual requires you to believe that the experiment was a fluke.

Hypotheses are often formulated by combing datasets and looking for correlations. For example, if a medical database shows that some of the same people suffer from obesity and heart disease, one might hypothesize that obesity causes heart disease. Or maybe that heart disease causes obesity. Or that overeating causes both obesity and heart disease, but they otherwise don't have much to do with each other. The major caution to this approach is that correlation does not imply causation. A correlation can tell you that two measures are related, but the relation is symmetrical and cannot say what causes what.

To establish causality requires some counterfactual analysis. The simplest way in a drug study is to randomly give some patients a placebo instead of the drug in question. That way, the intended treatment can be compared to counterfactuals.

A counterfactual theory of causation has been worked out by Judea Pearl and others. His 2000 book, Causality, recounts:

Neither logic, nor any branch of mathematics had developed adequate tools for managing problems, such as the smallpox inoculations, involving cause-effect relationships. Most of my colleagues even considered causal vocabulary to be dangerous, avoidable, ill-defined, and nonscientific. "Causality is endless controversy," one of them warned. The accepted style in scientific papers was to write "A implies B" even if one really meant "A causes B," or to state "A is related to B" if one was thinking "A affects B."

His theory is not particularly complex, and could have been worked out a century earlier.
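The randomized placebo comparison described above can be sketched as a toy simulation (invented recovery rates, purely illustrative). Randomization is what makes the placebo arm a stand-in for the counterfactual, so the difference in observed recovery rates estimates the causal effect:

```python
import random

random.seed(1)
true_rate = {"drug": 0.70, "placebo": 0.50}  # assumed; unobservable in a real trial

# Randomly assign each patient to an arm, then simulate recovery
recovered = {"drug": 0, "placebo": 0}
count = {"drug": 0, "placebo": 0}
for _ in range(10000):
    arm = random.choice(["drug", "placebo"])
    count[arm] += 1
    recovered[arm] += random.random() < true_rate[arm]

effect = recovered["drug"] / count["drug"] - recovered["placebo"] / count["placebo"]
print(round(effect, 2))  # close to the true causal effect of 0.20
```

Note what the randomization buys: if instead patients chose their own arm (say, sicker patients preferred the drug), the difference in rates would mix the drug's effect with the selection effect, which is exactly the correlation-versus-causation trap.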
Apparently there was resistance to analyzing counterfactuals. Even the great philosopher Bertrand Russell hated causal analysis.

Many theories seem like plausible explanations for observations, but ultimately fail because they offer no counterfactual analysis. For example, the famous theories of Sigmund Freud tell us how to interpret dreams, but do not tell us how to recognize a false interpretation. A theory is not worth much without counterfactuals.

The Dilbert cartoonist posts whimsical ideas all the time, but only gets hate mail if he says something skeptical about biological evolution or global warming. Those are sacred cows of today's intellectual left. He now writes:

Let's get this out of the way first... In the realm of science, a theory is an idea that is so strongly supported by data and prediction that it might as well be called a fact. But in common conversation among non-scientists, "theory" means almost the opposite. To the non-scientist, calling something a theory means you don't have enough data to confirm it. I'll be talking about the scientific definition of a theory in this post. And I have one question that I have seen asked many times (unsuccessfully) on the Internet: How often are scientific theories overturned in favor of new and better theories? ... Note to the Bearded Taint's Worshippers: Evolution is a scientific fact. Climate change is a scientific fact. When you quote me out of context - and you will - this is the paragraph you want to leave out to justify your confused outrage.

He has taken his definition of theory from his evolutionist critics, like PZ Myers, but I do not see the term used that way. Physicists use terms like "string theory" even tho it is not supported by any facts or data at all. I also don't see non-scientists using the word to mean the opposite. Not often, anyway. The first I saw was when Sean M. Carroll was on PBS TV explaining BICEP2 and cosmic inflation.
As you can see in the video, the dopey PBS news host asks:

Those predictions have always been theories. How do you then go about proving a theory not to be a theory, and is that what we have actually done here? Has it been proven? [at 2:50]

With exceptions like this, my experience is that scientists and non-scientists use the term "theory" in the same way. E.g., global warming is a theory whether you accept the IPCC report or not.

A comment points out the Wikipedia article: Superseded scientific theories. To answer the question, you first have to agree on what an overturned theory is. Did Copernicus overturn Ptolemy? Did general relativity overturn Newtonian gravity? I would say that these theories were embellished, but not overturned. The old theories continued to work just as well for nearly all situations. You might say that the Bohr atom has been overturned, but it was never more than a heuristic model, and it is still a good heuristic model. Not as good as quantum mechanics, but still a useful way of thinking about atoms.

The London Guardian reports:

This is the most famous equation in the history of equations. It has been printed on countless T-shirts and posters, starred in films and, even if you've never appreciated the beauty or utility of equations, you'll know this one. And you probably also know who came up with it – physicist and Nobel laureate Albert Einstein. ... It would be nice to think that Einstein's equation became famous simply because of its fundamental importance in making us understand how different the world really is to how we perceived it a century ago. But its fame is mostly because of its association with one of the most devastating weapons produced by humans – the atomic bomb. The equation appeared in the report, prepared for the US government by physicist Henry DeWolf Smyth in 1945, on the Allied efforts to make an atomic bomb during the Manhattan project.
The result of that project led to the death of hundreds of thousands of Japanese citizens in Hiroshima and Nagasaki.

Is this saying that the equation was first connected to the bomb in 1945? The first bomb had already been built by then. I am not sure the equation had much to do with the atomic bomb. The energy released by uranium fission was explained by the electrostatic potential energy. That is, protons repel each other due to their like electric charge, and so a lot of energy must have been needed to bind them together in a nucleus. Splitting the nucleus is like releasing a compressed metal spring. Understanding the energy of the H-bomb requires considering the strong nuclear force. One of the first applications of quantum mechanics was George Gamow figuring out in 1928 that protons could tunnel thru the electrostatic repulsion to explain fusion in stars.

The relation between mass and energy was first given by Lorentz in 1899. He gave formulas for how the mass of an object increases as energy is used to accelerate it. This was considered the most striking and testable aspect of relativity theory, and it was confirmed in experiments in 1902-1904. Einstein wrote a paper in 1906 crediting Poincare with E=mc^2 in a 1900 paper.

Computer scientist Scott Aaronson previously urged telling the truth when selling quantum computing to the public, and he has posted an attempt on PBS:

A quantum computer is a device that could exploit the weirdness of the quantum world to solve certain specific problems much faster than we know how to solve them using a conventional computer. Alas, although scientists have been working toward the goal for 20 years, we don't yet have useful quantum computers. While the theory is now well-developed, and there's also been spectacular progress on the experimental side, we don't have any computers that uncontroversially use quantum mechanics to solve a problem faster than we know how to solve the same problem using a conventional computer.
It is funny how he can claim "spectacular progress" and yet no speedup whatsoever. It is as if the Wright brothers had claimed spectacular progress in heavier-than-air flight, but had never left the ground. Or progress in perpetual motion machines.

But is there anything that could support such a hope? Well, quantum gravity might force us to reckon with breakdowns of causality itself, if closed timelike curves (i.e., time machines to the past) are possible. A time machine is definitely the sort of thing that might let us tackle problems too hard even for a quantum computer, as David Deutsch, John Watrous and I have pointed out. To see why, consider the “Shakespeare paradox,” in which you go back in time and dictate Shakespeare’s plays to him, to save Shakespeare the trouble of writing them. Unlike with the better-known “grandfather paradox,” in which you go back in time and kill your grandfather, here there’s no logical contradiction. The only “paradox,” if you like, is one of “computational effort”: somehow Shakespeare’s plays pop into existence without anyone going to the trouble to write them!

Now this is science fiction. But cooling takes energy. So, is there some fundamental limit here? It turns out that there is. Suppose you wanted to cool your computer so completely that it could perform about 10^43 operations per second — that is, about one operation per Planck time (where a Planck time, ~10^-43 seconds, is the smallest measurable unit of time in quantum gravity). To run your computer that fast, you’d need so much energy concentrated in so small a space that, according to general relativity, your computer would collapse into a black hole!

Okay, if my computer ever runs that fast, I'll worry about being sucked into a black hole. He also claims that if they can ever make true qubits, then they could simulate some dumbed-down models of quantum mechanics. And maybe the qubits could help with quantum gravity, if anyone can figure out what that is.
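The Planck time figure in the quote above can be checked directly from the standard physical constants. A quick back-of-the-envelope check of my own (constant values are the usual CODATA ones, not anything from the post):

```python
import math

# Standard physical constants (SI units)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# Planck time: t_P = sqrt(hbar * G / c^5)
t_planck = math.sqrt(hbar * G / c**5)
print(t_planck)  # roughly 5.4e-44 seconds

# One operation per Planck time is about 1/t_P operations per second,
# consistent with the ~10^43 figure quoted above.
ops_per_sec = 1 / t_planck
print(ops_per_sec)
```

This only confirms the arithmetic in the quote; whether a Planck time is really "the smallest measurable unit of time" is a separate, speculative claim.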
I guess this is why quantum computing is usually hyped with dubious claims about breaking internet security systems. NPR radio reports on a trailer for a new documentary: It has the look and feel of a fast-paced and riveting science documentary. The trailer opens with actress Kate Mulgrew (who starred as Capt. Janeway in Star Trek: Voyager) intoning, "Everything we think we know about our universe is wrong." That's followed by heavyweight clips of physicists Michio Kaku and Lawrence Krauss. Kaku tells us, "There is a crisis in cosmology," and Krauss says, "All of these things are rather strange, and we don't know why they are occurring right now." And then, about 1:17 into the trailer, comes the bombshell: The film's maker, Robert Sungenis, tells us, "You can go on some websites of NASA and see that they've started to take down stuff that might hint to a geocentric [Earth-centered] universe." The film, which the trailer promises will be out sometime this spring, is called The Principle. Besides promoting the filmmaker's geocentric point of view, it seems to be aimed at making a broader point about man's special place in a divinely created universe. Max Tegmark is also in the trailer. Kaku, Krauss, Tegmark, and mainstream physics documentaries say kooky stuff all the time. If this movie implies that Kaku and Krauss have some sympathies for geocentrism, it should not be any more embarrassing than many other interviews. A central premise of relativity is that motion is relative, and that the covariant equations of cosmology can be written in any frame. So a geocentric frame is a valid frame to use. This movie apparently goes farther and says that the geocentric frame is superior, but I don't see how that is any wackier than many-worlds or some of the theories coming out of physics today. Krauss denies responsibility: It is, after all, impossible in the modern world to shield everyone from nonsense and stupidity. 
What we can do is provide the tools, through our educational system, for people to be able to tell sense from nonsense. These tools include the scientific method, skeptical questioning, empirical evidence, verifying sources, etc. So, for those of you who are scandalized that a film narrated by a well-known TV celebrity with some well-known scientists promotes geocentrism, here is my suggestion: Let’s all stop talking about it from today on.

That celebrity says: I understand there has been some controversy about my participation in a documentary called THE PRINCIPLE. Let me assure everyone that I completely agree with the eminent physicist Lawrence Krauss, who was himself misrepresented in the film, and who has written a succinct rebuttal in SLATE. I am not a geocentrist, nor am I in any way a proponent of geocentrism. ... I was a voice for hire, and a misinformed one, ...

Lumo says they deserve some criticism: I think that their hype about the coming revolutions in cosmology is untrue, easily to be misinterpreted so that it is dangerously untrue, and this hype ultimately does a disservice to science although any hype is probably good enough for those who want to remain visible as "popularizers of science".

This reminds me of gripes about the 2004 movie What the Bleep Do We Know!?. Some scientists grumbled about it exaggerating the mysteriousness of quantum mechanics.

Once you agree that the past is definite and the future is uncertain, then probability theory is the natural way to discuss the likelihood of alternatives. That is, if you believe in counterfactuals, then different things could happen, and quantifying those leads to probability.

Probability might seem like a simple concept, but there are different probability interpretations. The frequentist school believes that chance is objective, and the Bayesians say that probability is just a measure of someone's subjective belief. The frequentists say that they are more scientific because they are more objective.
The Bayesians say that they are more scientific because they more fully use the available info. Mathematically, the whole idea of chance is a useful fiction. It is just a way of using formulas for thinking about uncertainty. There is no genuine uncertainty in math. A random variable is just a function on some sample space, and the formulas are equally valid for any interpretation. Coin tosses are considered random for the purpose of doing controlled experiments. It does not matter to the experiment if some theoretical analysis of Newtonian forces on the coin is able to predict the coin being heads or tails. The causal factors on the coin will be statistically independent of whatever is being done in the experiment. There is no practical difference between the coin being random and being statistically independent from whatever else is being measured. It is sometimes argued that radioactive decay is truly random, but there is really no physical evidence that it is any more random than coin tosses. We can measure the half-life of potassium, but not predict individual decays. According to our best theories, a potassium-40 nucleus consists of 120 quarks bouncing around a confined region. Maybe if we understood the strong interaction better and had precise data for the wave function, we could predict the decay. The half-life of potassium-40 is about a billion years, so any precise prediction seems extremely unlikely. But we do not know that it is any different from putting dice in a box and shaking it for a billion years. All fields of science seek to quantify counterfactuals, and so they use probabilities. They may use frequentist or Bayesian statistics, and may debate which is the better methodology. Only quantum physicists try to raise the issue to one of fundamental reality, and argue whether the probability is psi-ontic or psi-epistemic. The terms come from philosophy, where ontology is about what is real, and epistemology is about knowledge. 
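The frequentist/Bayesian contrast described above can be made concrete with a coin whose bias we want to estimate. A toy sketch of my own (the true bias, sample size, and uniform prior are all arbitrary choices for illustration):

```python
import random

random.seed(1)

# Simulate tosses of a coin with an unknown bias p (here secretly 0.7).
true_p = 0.7
tosses = [random.random() < true_p for _ in range(100)]
heads = sum(tosses)
tails = len(tosses) - heads

# Frequentist estimate: the observed relative frequency of heads.
freq_estimate = heads / len(tosses)

# Bayesian estimate: start from a uniform Beta(1,1) prior over p;
# the posterior after the data is Beta(heads+1, tails+1), whose mean
# is (heads+1)/(n+2), Laplace's rule of succession.
bayes_estimate = (heads + 1) / (len(tosses) + 2)

print(freq_estimate, bayes_estimate)
```

With this much data the two numbers nearly coincide, which illustrates the point that the schools mostly disagree about what the probability *means*, not about the arithmetic.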
So the issue is whether the wave function psi is used to calculate probabilities that are real, or that are about our knowledge of the system. It seems like a stupid philosophical point, but the issue causes endless confusion about Schroedinger's cat and other paradoxes. Physicist N. David Mermin argues that these paradoxes disappear if you take a Bayesian/psi-epistemic view, as was common among the founders of quantum mechanics 80 years ago. He previously argued that quantum probability was objective, like what Karl Popper called "propensity". That is the idea that probability is something physical, but nobody has been able to demonstrate that there is any such thing. Max Tegmark in the March 12, 2014 episode of Through the Wormhole uses multiple universes to deny randomness: Luck and randomness aren't real. Some things feel random, but that's just how it subjectively feels whenever you get cloned. And you get cloned all the time. ... There is no luck, just cloning. There are more and more physicists who say this nonsense, but there is not a shred of evidence that anyone ever gets cloned. There is just a silly metaphysical argument that probabilities do not exist because all possible counterfactuals are real in some other universe. These universes do not interact with each other, so there can be no way to confirm it. Scott Aaronson argues that the essence of quantum mechanics is that probabilities can be negative. But the probabilities are not really negative. The wave function values can be positive, negative, complex, spinor, or vector, and they can be used to calculate probabilities, but those probabilities are never negative. There is no experiment to tell us whether the probabilities are real. It is not a scientific question. Even tho the Bayesian view solves a lot of problems, as Mermin says, most physicists today insist that phenomena like radioactive decay and spin quantization prove that the probabilities are real. 
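The point above, that wave function values can be negative or complex while the probabilities computed from them never are, can be seen in a two-state toy example. This is my own illustration of the Born rule, not anything from Aaronson's argument:

```python
import math

# A qubit-like state written in some basis: the amplitudes may be
# negative or imaginary...
amplitudes = [complex(-1, 0) / math.sqrt(2), complex(0, 1) / math.sqrt(2)]

# ...but the Born rule gives probabilities |psi|^2, which are always
# non-negative real numbers summing to 1.
probs = [abs(a) ** 2 for a in amplitudes]
print(probs)  # both approximately 0.5
```

However exotic the amplitudes get (complex, spinor, vector), the squared modulus at the end is what is compared to experiment, and it is never negative.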
Quantum mechanics supposedly makes essential use of probabilities. But that is only Born's interpretation. Probabilities are no more essential to quantum mechanics than to any other branch of science, as I explained here.

Time for the annual FQXi essay contest: Contest closes to entries on April 18, 2014. ... The theme for this Essay Contest is: "How Should Humanity Steer the Future?". Dystopic visions of the future are common in literature and film, while optimistic ones are more rare. This contest encourages us to avoid potentially self-fulfilling prophecies of gloom and doom and to think hard about how to make the world better while avoiding potential catastrophes. Our ever-deepening understanding of physics has enabled technologies and ways of thinking about our place in the world that have dramatically transformed humanity over the past several hundred years. Many of these changes have been difficult to predict or control—but not all. In this contest we ask how humanity should attempt to steer its own course in light of the radically different modes of thought and fundamentally new technologies that are becoming relevant in the coming decades. Possible topics or sub-questions include, but are not limited to:

* What is the best state that humanity can realistically achieve?

* What is your plan for getting us there? Who implements this plan?

* What technology (construed broadly to include practices and techniques) does your plan rely on? What are the risks of those technologies? How can those risks be mitigated?

(Note: While this topic is broad, successful essays will not use this breadth as an excuse to shoehorn in the author's pet topic, but will rather keep as their central focus the theme of how humanity should steer the future.)
Additionally, to be consonant with FQXi's scope and goals, essays should be sure to touch on issues in physics and cosmology, or closely related fields, such as astrophysics, biophysics, mathematics, complexity and emergence, and the philosophy of physics.

I am drafting a submission, based on recent postings to this blog. No, I am not going to shoehorn anything about quantum computing, as that subject may have gotten me blackballed in the past.

Roger Penrose is writing a book on "Fashion, Faith, and Fantasy", and gives this interview: Sir Roger Penrose calls string theory a "fashion," quantum mechanics "faith," and cosmic inflation a "fantasy." Coming from an armchair theorist, these declarations might be dismissed. But Penrose is a well-respected physicist who co-authored a seminal paper on black holes with Stephen Hawking. What's wrong with modern physics—and could alternative theories explain our observations of the universe?

He has his own speculative theories, such as the BICEP2 data not being evidence of inflation or gravity waves, but of magnetic fields in a previous universe before the big bang. A lot of people are skeptical about string theory and inflation models. He thinks that quantum mechanics is incomplete because we do not understand wave function collapse. He is one of the leading mathematical physicists alive today, and his ideas should be taken seriously.

The concept of counterfactuals requires not just a reasonable theory of time but also a reasonable theory of causality. Causality has confounded philosophers for centuries. Leibniz believed in the Principle of Sufficient Reason that everything must have a reason or cause. Bertrand Russell denied the law of causality, and argued that science should not seek causes. Of course causality is central to science, and to how we personally make sense out of the world.

It is now commonplace for scientists to deny free will, particularly among popular exponents of atheism, evolution, and leftist politics.
Philosopher Massimo Pigliucci rebuts Jerry Coyne and others, and John Horgan rebuts Francis Crick. The leading experiments against free will are those by Benjamin Libet and John-Dylan Haynes. They show that certain brain processes take more time than is consciously realized, but they do not refute free will. See also contrary experiments.

The other main argument against free will is that a scientific worldview requires determinism. Eg, Jerry Coyne argues against contra-causal free will, and for biological determinism of behavior. Einstein hated quantum mechanics because it allowed for the possibility of free will.

A common belief is that the world must be either deterministic or random, but the word "random" is widely misunderstood. Mathematically, a random process is defined by the Kolmogorov axioms, and a random variable is a function on a measure-1 state space. That is, it is just a way of parameterizing outcomes based on some measurable set of samples. Whether or not this matches your intuition about random variables depends on your choice of probability interpretation.

Wikipedia has difficulty defining what is random: Randomness means different things in various fields. Commonly, it means lack of pattern or predictability in events. The Oxford English Dictionary defines "random" as "Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method or conscious choice; haphazard." This concept of randomness suggests a non-order or non-coherence in a sequence of symbols or steps, such that there is no intelligible pattern or combination.

In mathematics, the digits of Pi (π) can be said to be random or not random, depending on the context. Likewise scientific observations may or may not be called random, depending on whether there is a good explanation. Leading evolutionists Richard Dawkins and S.J. Gould had big disputes over whether evolution was random.
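The measure-theoretic definition mentioned above, that a random variable is just a function on a sample space of total measure 1, involves no notion of unpredictability at all. A minimal sketch of my own, using two coin tosses:

```python
from fractions import Fraction

# Sample space for two fair coin tosses; each outcome carries measure 1/4,
# and the measures sum to 1. Nothing here is "unpredictable".
space = {"HH": Fraction(1, 4), "HT": Fraction(1, 4),
         "TH": Fraction(1, 4), "TT": Fraction(1, 4)}

# A random variable is just a function on that space:
# here, X = the number of heads.
def X(outcome):
    return outcome.count("H")

# The expectation is a weighted sum over the sample space.
expectation = sum(X(w) * m for w, m in space.items())
print(expectation)  # 1
```

The same formulas are valid whether you read the measure as a long-run frequency or as a degree of belief, which is the point made above about interpretations.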
There is no scientific test for whether the world is deterministic or random or something else. You can drop a ball repeatedly and watch it fall the same way, so that makes the experiment appear deterministic. You will also see small variations that appear random. You can also put a Geiger detector on some uranium, and hear intermittent clicks at seemingly random intervals. But the uranium nucleus may be a deterministic chaotic system of quarks. We can never know, as any attempt to observe those quarks will disturb them. Likewise there can be no scientific test for free will. You would have to clone a man, replicate his memories and mental state, and see if he makes the same decisions. Such an experiment could never be done, and would not convince anyone even if it could be, as it is not clear how free will would be distinguished from randomness. Free will is a metaphysical issue, not a scientific one. Even if you believe in determinism, it is still possible to believe in free will. A debate between determinists Dan Dennett and Sam Harris was over statements like: If determinism is true, the future is set — and this includes all our future states of mind and our subsequent behavior. And to the extent that the law of cause and effect is subject to indeterminism — quantum or otherwise — we can take no credit for what happens. There is no combination of these truths that seems compatible with the popular notion of free will. But that is exactly what quantum mechanics is -- a combination of those facts that is compatible with the popular notion of free will. In biology, this dichotomy between determinism and randomness has been called the Causalist-Statisticalist Debate. At the core of their confusion is a simple counterfactual: Consider the case where I miss a very short putt and kick myself because I could have holed it. It is not that I should have holed it if I had tried: I did try, and missed. 
It is not that I should have holed it if conditions had been different: that might of course be so, but I am talking about conditions as they precisely were, and asserting that I could have holed it. There is the rub. [Austin’s example]

The problem here is that they think that determinism is a philosophical necessity, and so they fail to grasp the meaning of a counterfactual. In public surveys, people overwhelmingly reject this deterministic view: Imagine a universe (Universe A) in which everything that happens is completely caused by whatever happened before it. This is true from the very beginning of the universe, so what happened in the beginning of the universe caused what happened next, and so on right up until the present. For example one day John decided to have French Fries at lunch. Like everything else, this decision was completely caused by what happened before it. So, if everything in this universe was exactly the same up until John made his decision, then it had to happen that John would decide to have French Fries.

And so an atheist biologist writes: To me, the data show that the most important task for scientists and philosophers is to teach people that we live in Universe A.

That is a tough sell, as Universe A is contrary to common sense, experience, and our best scientific theories. Steven Weinberg has argued that the laws of physics are causally complete, but also that we are blindly searching for the final theory that will solve the mysteries of the universe. A final theory would explain quark masses and cancel gravity infinities.

Einstein had an almost religious belief in causal determinism, and many others seem to believe that a scientific outlook requires such a view. On the other hand, a majority of physicists today assert (incorrectly) that quantum mechanics has somehow proved that nature is intrinsically random. Quantum mechanics is peculiar in that it leaves the possibility of free will.
It is the counterexample to the notion that a scientific theory must be causal and deterministic, or otherwise contrary to free will. If you tried to concoct a fundamental physical theory that could accommodate free will, it is hard to imagine one better suited than quantum mechanics. Some interpretations of quantum mechanics are deterministic and some are not, and so, as Scott Aaronson explains, determinism is not a very meaningful concept in the context of quantum mechanics.

If you reject free will and the flow of time, and believe that everything is determined by God or the Big Bang, then counterfactuals make no sense. Most of time travel in fiction makes no sense either. The concept of counterfactuals depends on the possibility of alternate events, and on time moving forward into an uncertain future. Regardless of brain research and the scientific underpinnings of free will, counterfactuals are essential to how human beings understand the progress of time and the causality of events.

MIT computer scientist Scott Aaronson has confessed he has physics envy: I confess that my overwhelming emotion on watching Particle Fever was one of regret — regret that my own field, quantum computing, has never managed to make the case for itself the way particle physics and cosmology have, in terms of the human urge to explore the unknown. See, from my perspective, there’s a lot to envy about the high-energy physicists. Most importantly, they don’t perceive any need to justify what they do in terms of practical applications. Sure, they happily point to “spinoffs,” like the fact that the Web was invented at CERN. But any time they try to justify what they do, the unstated message is that if you don’t see the inherent value of understanding the universe, then the problem lies with you. ... Now contrast that with quantum computing. To hear the media tell it, a quantum computer would be a powerful new gizmo, sort of like existing computers except faster. (Why would it be faster?
Something to do with trying both 0 and 1 at the same time.)

He blames the media?! No, every single scientist in this field tells glowing stories about the inevitable breakthrus in quantum cryptography and computing. Including Aaronson. Lots of scientists over-hype their work, but the high energy physicists and astronomers have scientific results to show. Others are complete washouts after decades of work and millions in funding. String theorists have never been able to show any relationship between the real world and their 10-dimensional models. Quantum cryptography has never found any practical application to information security. Quantum computing has never found even one scalable qubit or any quantum speedup. Multiverse theories have no testable implications and are mathematically incoherent.

Of course a conspiracy of lies brings in the grant money: Foolishly, shortsightedly, many academics in quantum computing have played along with this stunted vision of their field — because saying this sort of thing is the easiest way to get funding, because everyone else says the same stuff, and because after you’ve repeated something on enough grant applications you start to believe it yourself. All in all, then, it’s just easier to go along with the “gizmo vision” of quantum computing than to ask pointed questions like: What happens when it turns out that some of the most-hyped applications of quantum computers (e.g., optimization, machine learning, and Big Data) were based on wildly inflated hopes — that there simply isn’t much quantum speedup to be had for typical problems of that kind, that yes, quantum algorithms exist, but they aren’t much faster than the best classical randomized algorithms? ... I’ll tell you: when this happens, the spigots of funding that once flowed freely will dry up, and the techno-journalists and pointy-haired bosses who once sang our praises will turn to the next craze.
And they’re unlikely to be impressed when we protest, “no, look, the reasons we told you before for why you should support quantum computing were never the real reasons! and the real reasons remain as valid as ever!” In my view, we as a community have failed to make the honest case for quantum computing — the case based on basic science — because we’ve underestimated the public. We’ve falsely believed that people would never support us if we told them the truth: that while the potential applications are wonderful cherries on the sundae, they’re not and have never been the main reason to build a quantum computer. The main reason is that we want to make absolutely manifest what quantum mechanics says about the nature of reality. We want to lift the enormity of Hilbert space out of the textbooks, and rub its full, linear, unmodified truth in the face of anyone who denies it. Or if it isn’t the truth, then we want to discover what is the truth. If the quantum computer scientists were honest, they would admit that they are just confirming an 80-year-old quantum theory. Update: Scott adds: Quantum key distribution is already practical (at least short distances). The trouble is, it only solves one of the many problems in computer security (point-to-point encryption), you can’t store the quantum encrypted messages, and the problem solved by QKD is already solved extremely well by classical crypto. Oh, and QKD assumes an authenticated classical channel to rule out man-in-the-middle attacks. ... I like to say that QKD would’ve been a killer app for quantum information, in a hypothetical world where public-key crypto had never existed. That's right, and quantum cryptography is commercially worthless for those reasons. Those who claim some security advantage are selling snake oil. Update: Scott adds: Well, it’s not just the people who flat-out deny QM. 
It’s also the people like Gil Kalai, Michel Dyakonov, Robert Alicki, and possibly even yourself (in previous threads), who say they accept QM, but then hypothesize some other principle on top of QM that would “censor” quantum computing, or make the effort of building a QC grow exponentially with the number of qubits, or something like that, and thereby uphold the classical Extended Church-Turing Thesis. As I’ve said before, I don’t think they’re right, but I think the possibility that they’re right is sufficiently sane to make it worth doing the experiment.

I would not phrase it that way. Scott's bias is that he is a theoretical computer scientist, and he just wants some mathematical principles so he can prove theorems. I accept quantum mechanics to the extent that it has been confirmed, but not the fanciful extrapolations like many-worlds and quantum computing. I am skeptical about those because they seem unjustified by known physics, contrary to intuition, and most of all, because attempts to confirm them have failed.

I am also skeptical about supersymmetry (SUSY). I do not know any principle that would censor SUSY. The main reason to be skeptical is that SUSY is a fanciful and wildly speculative hypothesis that is contradicted by the known experimental evidence. Likewise I am skeptical about quantum computing.

Update: Scott prefers to compare QC to the Higgs boson rather than SUSY, presumably because the Higgs has been found, and adds: My own view is close to that of Greg Kuperberg in comment #73: yes, it’s conceivable that the skeptics will turn out to be right, but if so, their current explanations for how they could be right are grossly inadequate. ... If, hypothetically, QC were practical but only on the surface of Titan, then I’d count that as a practical SUCCESS! The world’s QC center could simply be installed on Titan by robotic spacecraft, and the world’s researchers could divvy up time to dial in to it, much like with the Hubble telescope.
Spoken like a theorist. He does not want his theorems to be vacuous.

Craig Feinstein asks: Leonid Levin said, "Exponential summations used in QC require hundreds if not millions of decimal places accuracy. I wonder who would expect any physical theory to make sense in this realm."

Peter Shor replies: If you believe the fault-tolerant threshold theorem for quantum computers, you do not require hundreds of digits of accuracy. Levin does not believe this theorem. More precisely, he believes that the hypotheses required for the theorem to work do not apply to the actual universe. I believe his mental model of quantum mechanics resembles the idea that the physics of the universe is being simulated on a classical machine which has floating point errors. I don't believe this is true. ... The real question is whether the rules of the universe are exact unitary evolution or something else. If they're exact unitary evolution and you have locality of action (quantum field theories, including QED, satisfy these) then the fault-tolerant threshold theorem holds. If the universe has extra levels of weirdness under the quantum field theory, then it's not clear the hypotheses are satisfied.

I am not sure who is right here. Quantum mechanics is a linear theory and has been verified to high precision in some contexts. But a linear theory is nearly always an approximation to a nonlinear theory, and I don't think that the quantum computer folks have shown that they are operating within a valid approximation. Shor assumes "unitary", but there are interpretations of quantum mechanics that are not unitary, and no one has proved them wrong. So how do we know nature is really unitary? If being unitary is some physically observed law, like conservation of momentum, then we should have error bars that show us just how close to unitary the world is, and with what confidence, in different situations.
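As a toy illustration of what "error bars on unitarity" might mean: one can quantify how far a given evolution matrix is from unitary by the deviation of U†U from the identity. This is a sketch of my own, with made-up matrices, not anything from Shor's or Levin's argument:

```python
import numpy as np

def unitarity_defect(U):
    """Largest entry-wise deviation of U^dagger @ U from the identity."""
    n = U.shape[0]
    return np.max(np.abs(U.conj().T @ U - np.eye(n)))

# An exactly unitary 2x2 matrix: a rotation with a phase on one row.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta) * 1j, np.cos(theta) * 1j]])
print(unitarity_defect(U))  # ~0, up to floating-point error

# The same matrix with a small perturbation: the defect is now of the
# same order as the perturbation, so the quantity acts like an error bar.
V = U + 1e-6 * np.eye(2)
print(unitarity_defect(V))
```

A hypothetical small nonlinearity in nature would show up as a tiny but nonzero defect of this kind, which is why the question of *how close* to unitary, rather than unitary yes-or-no, is the empirically meaningful one.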
If being unitary is a metaphysical necessary truth, derived from the conservation of probability, then how have so many textbooks managed to get by with the Copenhagen interpretation? I say that quantum computing is a vast extrapolation of known physics, and extrapolations are unreliable.

In other news: An international team of researchers has created an entanglement of 103 dimensions with only two photons, beating the previous record of 11 dimensions. The discovery could represent an advance toward better encryption of information and quantum computers with much higher processing speeds, according to a statement by the researchers. Until now, to increase the “computing” capacity of these particle systems, scientists have mainly turned to increasing the number of qubits (entangled particles), up to 14 particles. ... “The most immediate practical use is expected to be in secure communication,” Huber explained to KurzweilAI in an email interview.

I haven't read the paper but I am pretty sure that there is no practical application to secure communication. I expected them to claim that all those dimensions could be used for quantum computing.

Steve Flammia writes: Gil Kalai has just posted on his blog a series of videos of his lectures entitled “why quantum computers cannot work.” For those of us that have followed Gil’s position on this issue over the years, the content of the videos is not surprising. The surprising part is the superior production value relative to your typical videotaped lecture (at least for the first overview video). I think the high gloss on these videos has the potential to sway low-information bystanders into thinking that there really is a debate about whether quantum computing is possible in principle. So let me be clear. There is no debate! The expert consensus on the evidence is that large-scale quantum computation is possible in principle. ...
For now, though, the reality is that quantum computation continues to make exciting progress every year, both on theoretical and experimental levels, and we have every reason to believe that this steady progress will continue. ... And most importantly, we are open to being wrong. No, there is no significant progress. No one has made scalable qubits, and no one has demonstrated a quantum speedup. He sure doesn't sound like someone who is open to being wrong. Papers on this subject by physicists subscribing to this consensus never admit that the whole field is based on speculative premises. I am a skeptic. Modern physics teaches certain singularities in general relativity (black holes and big bang) and quantum field theory (renormalization). I have expressed skepticism about whether there is truly a singularity in the black hole and at the big bang. Max Tegmark has also expressed skepticism about actual infinities in nature. Now that the BICEP2 has given us evidence close to the alleged big bang singularity, Matt Strassler and Lubos Motl have reopened the debate about whether there really is a singularity. Those are sensible mainstream views. Others will push back harder, and speculate about before the big bang and into the multiverse. I have to agree with Strassler that the evidence points to energies high enough that our physical theories break down, so we cannot go further. I also agree with Tegmark that we never observe true singularities in nature. I am a positivist, and I believe in what has been demonstrated. Infinities and singularities are wonderful mathematical tools, but math is not the same as physics. Depending on how the inflation evidence plays out, I am not sure the big bang has anything to do with general relativity or a spacetime singularity. The physics was not dominated by gravity or the standard model, as we know them. Something mysterious called an inflaton field was releasing huge amounts of energy. 
I am not even sure about the reports that BICEP2 saw gravity waves. Maybe they saw inflaton waves. Some physicists have said that this proves gravity is quantized. I don't know how they can say that, when no one knows what the inflaton is or how it relates to gravity. I expect the meaning of BICEP2 to be settled in the next year or so, but unwarranted speculation about time and multiverses to go on for the foreseeable future.

Update: Strassler argues: Who is still telling the media and the public that the universe really started with a singularity, or that the modern Big Bang Theory says that it does? I’ve never heard an expert physicist say that. And with good reason: when singularities and other infinities have turned up in our equations in the past, those singularities disappeared when our equations, or our understanding of how to use our equations, improved. Moreover, there’s a point of logic here. How could we possibly know what happened at the very beginning of the universe? No experiment can yet probe such an early time, and none of the available equations are powerful enough or usable enough to allow us to come to clear and unique conclusions.

Lumo responds: But by endorsing the idea that the Big Bang singularity exists, we don't claim that the classical general relativity is exactly accurate and all of its conclusions about quantities' being infinite at the singularity are strictly right. We never mean such things.

I posted before on Mermin taking Bohr seriously, SciAm pushes Quantum Bayesianism, and Counterfactuals: Time on the metaphysics of time. Now Cornell Physicist N. David Mermin has an essay in the current Nature journal: Schrödinger wrote in a little-known 1931 letter2 to German physicist Arnold Sommerfeld that quantum mechanics “deals only with the object–subject relation”.
Another founder of quantum mechanics, Danish physicist Niels Bohr, insisted in a 1929 essay3 that the purpose of science was not to reveal “the real essence of the phenomena” but only to find “relations between the manifold aspects of our experience”. ... People who believe wavefunctions to be as real as stones have invested much effort in searching for objective physical mechanisms responsible for such changes in the wavefunction: ... Another celebrated part of the muddle produced by the exclusion of the perceiving subject is 'quantum non-locality', the belief of some quantum physicists and many mystics, parapsychologists and journalists that an action in one region of space can instantly alter the real state of affairs in a faraway region. Thousands of papers have been written about this mysterious action at a distance over the past 50 years. A clue that the only change is in the expectations of the perceiving subject7 is that to learn anything about such alterations one must consult somebody in the region where the action took place. ... The issue for Einstein was not the famous revelation of relativity that whether or not two events in two different places happen at the same time can depend on your frame of reference. It was simply that physics seems to offer no way to identify the Now even at a single event in a single place, although a local present moment — Now — is evident to each and every one of us as undeniably real. How can there be no place in physics for something as obvious as that? ... When I recently mentioned to an eminent theoretical physicist that I was writing an essay explaining how the QBist view of science solves the strictly classical problem of the Now, he said: “Ah, you're going to explain why we all have that illusion.” And a distinguished philosopher of science recently derided the attitude that there ought to be a Now on my world-line as “chauvinism of the present moment”9. 
My only quarrel with Mermin is that he acts as if he is saying something new. He is just reciting the view of Bohr and everyone else not infected with Einstein's disease. There are physicists and philosophers today who (1) believe wavefunctions to be as real as stones; (2) assert quantum non-locality; and (3) deny the Now as just chauvinism of the present moment. They have bizarre and foolish philosophies that lead to unresolvable paradoxes. Mermin's common-sense explanations from a century ago are perfectly adequate.

Counterfactual reasoning is used all the time in the hard sciences. When you learn the formulas for gravity, the first thing you do is to answer questions like, “If you drop a rock off a 100-foot cliff, how long will it take to hit the ground?” Dropping a rock from a cliff could also be called hypothetical reasoning, because one can easily imagine conducting the experiment. However, physics also has all sorts of thought experiments that have no hope of ever being carried out. For example, explanations of relativity frequently involve spaceships taking people near the speed of light or being swallowed up in a black hole.

Counterfactuals are essential to the scientific method. Science is all about doing experiments that favor some hypothesis over some counterfactual. The ability to make a precise prediction from a counterfactual is what distinguishes the hard sciences from the soft. A famous example is Hendrik Lorentz's discovery of space and time transformations (now called Lorentz transformations) to explain the Michelson-Morley experiment. The counterfactual was aether motion, as Lorentz interpreted experiments to show that no such motion was detectable. He then used his formulas to predict relativistic mass, which was then confirmed by experiment. (Einstein later published similar theories, but the consensus of historians is that he paid no attention to the experiments.)
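The rock-drop question above is answered by one line of kinematics, h = ½gt². A minimal sketch (the foot-to-meter conversion and g ≈ 9.8 m/s² are my additions, not from the post):

```python
import math

def fall_time(height_m, g=9.8):
    # Free fall from rest, ignoring air resistance:
    # h = (1/2) * g * t**2  =>  t = sqrt(2h / g)
    return math.sqrt(2 * height_m / g)

height_m = 100 * 0.3048  # 100 feet converted to meters
print(f"{fall_time(height_m):.2f} s")  # about 2.49 s
```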
An example that failed to disprove the counterfactual was the 1543 Copernicus heliocentric model of the solar system. The established theory was Ptolemy's, but both theories predicted the sky with about the same accuracy. Experiments and models with much greater accuracy were achieved by Tycho Brahe and Johannes Kepler around 1600. The 20th-century theory of relativity taught that motion is relative, and heliocentricity could never be proven. Kepler's theory was superior not only for its accurate predictions, but for its counterfactual predictions. He had a complete theory of what kinds of orbits were possible in the solar system, so he could have made predictions about any new planet or asteroid that might be discovered. But he did not have a causal mechanism.

Causality is closely connected with counterfactual analysis. If an event A is followed by an event B, we only say that A caused B if counterfactuals for A would have been followed by something other than B. If rain follows my rain dance, I only argue that the dance caused the rain if I have a convincing argument that it would not have rained if I had not danced. A truly causal argument would provide a connected chain of events from the dance to the rain, with every link in the chain causing the next link.

Isaac Newton found a more powerful theory of mechanics by positing a gravitational force between any two massive objects, and saying that the force causes the orbital motion. Laplace argued in 1814 that all of nature is predictable with causal mechanics, given sufficient data. This Newtonian causality was not true causality, because it required action-at-a-distance. One planet could exert a force on another planet over millions of miles, without any intermediate effects. A truly causal theory required the invention of the concept of a field, such as an electric or gravitational field, that can propagate through empty space from one object to another.
James Clerk Maxwell worked out such a theory for electric and magnetic fields in 1865, and that was the first relativistic theory. A field is a physical way of describing certain counterfactuals. Saying that there is an electric field, at a particular point in space and time, is another way of saying what would happen if an electric charge were put at that point. The field is one of the most important concepts in all of physics, because it allows reducing the universe to the mechanics of locally defined objects. Thus physics is rooted in counterfactuals at every level. You could say that reductionism works in physics because of clever schemes for distributing counterfactual info over space and time.

A trendy topic in theoretical astrophysics is the multiverse. This involves a loose collection of unrelated ideas, but they all involve hypothetical universes outside of our observational abilities. It is a giant counterfactual exercise, with no experiment to decide who is right. A particular fascination is the possibility of intelligent life in other universes. It appears that our universe is finely tuned for life. That is, it is hard to imagine the development of life in most of the counterfactual universes.

The most bizarre approach to counterfactuals is the many-worlds interpretation (MWI) of quantum mechanics. It simply posits that every possible counterfactual has an objective reality in an alternate universe. The extra universes do not really explain anything because they do not communicate with each other. There can be no experimental evidence for the other universes. There is no theoretical reason either, except that some physicists are unhappy with counterfactuals being just counterfactuals. The many-worlds interpretation seems like an endorsement of counterfactual thinking, but it corrupts such thinking by declaring the counterfactuals real.
A counterfactualist might argue, "if a new ice age were beginning, then we would probably notice cooler temperatures, but we don't, so we are not in a new ice age." But in many-worlds, all exceptionally improbable events take place in different universes, and we could be in one of them. Thus many-worlds leaves no good rationale for rejecting counterfactuals.
class Sampleable d m t

A typeclass allowing Distributions and RVars to be sampled. Both may also be sampled via runRVar or runRVarT, but I find it psychologically pleasing to be able to sample both using this function, as they are two separate abstractions for one base concept: a random variable.

sampleFrom :: RandomSource m s => s -> d t -> m t

Directly sample from a distribution or random variable, using the given source of entropy.

Instances:
Distribution d t => Sampleable d m t
Lift m n => Sampleable (RVarT m) n t

sample :: (Sampleable d m t, MonadRandom m) => d t -> m t

Sample a random variable using the default source of entropy for the monad in which the sampling occurs.

sampleState :: (Sampleable d (State s) t, MonadRandom (State s)) => d t -> s -> (t, s)

Sample a random variable in a "functional" style. Typical instantiations of s are System.Random.StdGen or System.Random.Mersenne.Pure64.PureMT.

sampleStateT :: (Sampleable d (StateT s m) t, MonadRandom (StateT s m)) => d t -> s -> m (t, s)

Sample a random variable in a "semi-functional" style. Typical instantiations of s are System.Random.StdGen or System.Random.Mersenne.Pure64.PureMT.
Trying to Understand Bell's reasoning

That's because your version of H is too vague and doesn't actually specify whether the red card was the one that was picked to send to Alice and the white card was the one that was picked to send to Bob, or vice versa. If you completely specified the hidden properties of the envelope that was sent to Bob--namely "the person who picked the cards from the box put the red card in Bob's envelope, and the envelope continued to have that hidden card on its journey to Bob"--then in that case it would be true that P(B|H)=P(B|AH).

I am not sure I agree with this. In probability theory, when we write P(A|H), we are assuming that we know H but not A. If we knew A already (a certainty), there is no point calculating a probability, is there?

I think I explained clearly in the text above that I was calculating the probability of B, not A, given either just H or given both H and A. If you want to calculate the probability of A rather than B, then you can easily modify the paragraph above: That's because your version of H is too vague and doesn't actually specify whether the red card was the one that was picked to send to Alice and the white card was the one that was picked to send to Bob, or vice versa. If you completely specified the hidden properties of the envelope that was sent to Alice--namely "the person who picked the cards from the box put the red card in Alice's envelope, and the envelope continued to have that hidden card on its journey to Alice"--then in that case it would be true that P(A|H)=P(A|BH).

In this case when Bell obtains cos(theta), his equation will only be valid for two angles, when cos(theta) = 0 or 1! How then can this equation apply to other angles? Therefore I don't think that is the reasoning here.

Huh?
The argument is about what probabilities would be calculated by an ideal observer who had access to the hidden variables H (which are assumed to have well-defined values at all times in a local realist theory), not just what probabilities are calculated by normal observers who don't know the values of the hidden variables. How could it be otherwise, when H explicitly appears in the conditional probability equations?

Furthermore, the variables are hidden from the perspective of the wise men; it is not God trying to calculate the probabilities but the wise men, because they do not have all the information. We are only looking from God's perspective to verify that the equation the wise men choose to use corresponds to the factual situation, and in the example I gave it does not appear to.

No, you're completely confused, the argument is about taking a God's-eye-view and saying that no matter how we imagine God would see the hidden variables, in a local realist theory God would necessarily end up making predictions about the statistics of different correlations that are different from what we humans actually observe in QM.

But in my example, with the information known by God, this is the case, every box only contains two cards, one red and one white, and in each iteration of the experiment one of the cards is sent to Bob and the other to Alice. The equation P(AB|H) = P(B|H)P(A|BH) always works, but P(AB|H) = P(A|H)P(B|H) works only in the very limited case in which H is no longer hidden and calculating probabilities is pointless.

It's not "pointless" if you can use this hypothetical God's-eye-perspective (where nothing is hidden) to show that if the hidden variables are such that Alice and Bob always get the same result when they perform the same measurement, that implies certain things about the statistics they see when they perform different measurements--and that these statistical predictions are falsified in real quantum mechanics!
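The chain-rule identity at issue here, P(AB|H) = P(B|H)P(A|BH), can be checked numerically against the factorized form P(A|H)P(B|H). The joint distribution below is invented purely for illustration (binary A, B, H with arbitrary weights), not taken from the thread:

```python
from itertools import product

weights = [3, 1, 2, 2, 1, 4, 2, 1]  # arbitrary positive weights
total = sum(weights)
# joint[(a, b, h)] = P(A=a, B=b, H=h)
joint = {abh: w / total for abh, w in zip(product([0, 1], repeat=3), weights)}

def prob(pred):
    # Probability of the event defined by the predicate pred(a, b, h).
    return sum(v for (a, b, h), v in joint.items() if pred(a, b, h))

for h0 in (0, 1):
    p_h    = prob(lambda a, b, h: h == h0)
    p_ab_h = prob(lambda a, b, h: a == 1 and b == 1 and h == h0) / p_h
    p_a_h  = prob(lambda a, b, h: a == 1 and h == h0) / p_h
    p_b_h  = prob(lambda a, b, h: b == 1 and h == h0) / p_h
    p_a_bh = prob(lambda a, b, h: a == 1 and b == 1 and h == h0) / \
             prob(lambda a, b, h: b == 1 and h == h0)
    # The chain rule holds exactly for any distribution:
    assert abs(p_ab_h - p_b_h * p_a_bh) < 1e-12
    # The factorized form generally does not, unless A and B are
    # conditionally independent given H:
    print(h0, round(p_ab_h, 4), round(p_a_h * p_b_h, 4))
```

For these weights the two columns disagree, illustrating the point that the factorization is an extra assumption, not an identity.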
This is a reductio ad absurdum argument showing that the original assumption that QM can be explained using a local realist theory must have been false. Perhaps you could take a look at the scratch lotto analogy I came up with a while ago and see if it makes sense to you (note that it's explicitly based on considering how the 'hidden fruits' might be distributed if they were known by a hypothetical observer for whom they aren't 'hidden'):

Suppose we have a machine that generates pairs of scratch lotto cards, each of which has three boxes that, when scratched, can reveal either a cherry or a lemon. We give one card to Alice and one to Bob, and each scratches only one of the three boxes. When we repeat this many times, we find that whenever they both pick the same box to scratch, they always get the same result--if Bob scratches box A and finds a cherry, and Alice scratches box A on her card, she's guaranteed to find a cherry too.

Classically, we might explain this by supposing that there is definitely either a cherry or a lemon in each box, even though we don't reveal it until we scratch it, and that the machine prints pairs of cards in such a way that the "hidden" fruit in a given box of one card always matches the hidden fruit in the same box of the other card. If we represent cherries as + and lemons as -, so that a B+ card would represent one where box B's hidden fruit is a cherry, then the classical assumption is that each card's +'s and -'s are the same as the other--if the first card was created with hidden fruits A+,B+,C-, then the other card must also have been created with the hidden fruits A+,B+,C-. The problem is that if this were true, it would force you to the conclusion that on those trials where Alice and Bob picked different boxes to scratch, they should find the same fruit on at least 1/3 of the trials.
For example, if we imagine Bob and Alice's cards each have the hidden fruits A+,B-,C+, then we can look at each possible way that Alice and Bob can randomly choose different boxes to scratch, and what the results would be:

Bob picks A, Alice picks B: results (Bob gets a cherry, Alice gets a lemon)
Bob picks A, Alice picks C: results (Bob gets a cherry, Alice gets a cherry)
Bob picks B, Alice picks A: results (Bob gets a lemon, Alice gets a cherry)
Bob picks B, Alice picks C: results (Bob gets a lemon, Alice gets a cherry)
Bob picks C, Alice picks A: results (Bob gets a cherry, Alice gets a cherry)
Bob picks C, Alice picks B: results (Bob gets a cherry, Alice gets a lemon)

In this case, you can see that in 1/3 of trials where they pick different boxes, they should get the same results. You'd get the same answer if you assumed any other preexisting state where there are two fruits of one type and one of the other, like A+,B+,C- or A+,B-,C-. On the other hand, if you assume a state where each card has the same fruit behind all three boxes, so either they're both getting A+,B+,C+ or they're both getting A-,B-,C-, then of course even if Alice and Bob pick different boxes to scratch they're guaranteed to get the same fruits with probability 1. So if you imagine that when multiple pairs of cards are generated by the machine, some fraction of pairs are created in inhomogeneous preexisting states like A+,B-,C- while other pairs are created in homogeneous preexisting states like A+,B+,C+, then the probability of getting the same fruits when you scratch different boxes should be somewhere between 1/3 and 1. 1/3 is the lower bound, though--even if 100% of all the pairs were created in inhomogeneous preexisting states, it wouldn't make sense for you to get the same answers in less than 1/3 of trials where you scratch different boxes, provided you assume that each card has such a preexisting state with "hidden fruits" in each box.
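The case analysis above is easy to automate. The sketch below enumerates all eight possible shared hidden states and, for each, the fraction of different-box scratches that yield matching fruits; it reproduces the 1/3 lower bound:

```python
from itertools import product

boxes = "ABC"
fractions = []
for state in product("+-", repeat=3):
    hidden = dict(zip(boxes, state))  # same hidden fruits on both cards
    # All ordered pairs of *different* boxes Bob and Alice might scratch.
    pairs = [(b, a) for b in boxes for a in boxes if b != a]
    same = sum(hidden[b] == hidden[a] for b, a in pairs)
    fractions.append(same / len(pairs))
    label = "".join(box + s for box, s in zip(boxes, state))
    print(label, f"{same}/{len(pairs)}")

# Mixed states give 2/6 = 1/3; homogeneous states (+++ or ---) give 1.
print("lower bound:", min(fractions))
```

Every mixed state gives exactly 2/6, so no classical assignment of hidden fruits can push the match rate below 1/3, which is why an observed rate of 1/4 violates the inequality.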
But now suppose Alice and Bob look at all the trials where they picked different boxes, and found that they only got the same fruits 1/4 of the time! That would be the violation of Bell's inequality, and something equivalent actually can happen when you measure the spin of entangled photons along one of three different possible axes. So in this example, it seems we can't resolve the mystery by just assuming the machine creates two cards with definite "hidden fruits" behind each box, such that the two cards always have the same fruits in a given box. And you can modify this example to show some different Bell inequalities, see post #8 of this thread for one example.

But that is not quite true. The justification for reducing P(A|BH) to P(A|H) is not based on whether B gives [i]additional[/i] information but on whether B gives any information. (see Conditional Independence in Statistical Theory, J. R. Statist. Soc. B, 1979, 41, No. 1, pp. 1-31)

I don't have access to that reference (could you quote it?) but I'm confident it doesn't say what you think it does. In a situation where the probability of A is completely determined by H and the probability of B is also completely determined by H, then it would naturally be true that P(A|BH) would be equal to P(A|H), even if P(A) was not equal to P(A|B) (i.e. if you don't know H, knowing B does give some information about the probability of A). Do you claim the reference somehow contradicts this?

For example, suppose we have two identical-looking flashlights X and Y that have been altered with internal mechanisms that make it a probabilistic matter whether they will turn on when the switch is pressed. The mechanism in flashlight X makes it so that there is a 70% chance it'll turn on when the switch is pressed; the mechanism in flashlight Y makes it so there's a 40% chance when the switch is pressed.
The mechanism's random decisions aren't affected by anything outside the flashlight, so whether or not flashlight X turns on doesn't change the probability that flashlight Y turns on. Now suppose we do an experiment where Alice is sent one flashlight and Bob is sent the other, by a sender who has a 50% chance of sending X to Alice and Y to Bob, and a 50% chance of sending Y to Alice and X to Bob. Let H1 and H2 represent these two possible sets of "hidden" facts (hidden to Alice and Bob since the flashlights look identical from the outside): H1 represents the event "X to Alice, Y to Bob" and H2 represents the event "Y to Alice, X to Bob". Let A represent the event that Alice's flashlight turns on when she presses the switch, and B the event that Bob's flashlight turns on when he presses the switch.

Here, P(A) = P(A|H1)*P(H1) + P(A|H2)*P(H2) = (0.7)*(0.5) + (0.4)*(0.5) = 0.55 and P(B) = P(B|H1)*P(H1) + P(B|H2)*P(H2) = (0.4)*(0.5) + (0.7)*(0.5) = 0.55. Since P(A|B) = P(A and B)/P(B), we must have P(A|B) = (0.7)*(0.4)/(0.55) = 0.5090909... So you see that P(A|B) is slightly lower than P(A), which makes sense since if Bob's flashlight lights up, that makes it more likely Bob got flashlight X, which had a higher probability of lighting, and more likely Alice got flashlight Y, with a lower probability of lighting. But despite the fact that B does give some information about the probability of A, it is still true that P(A|B and H1) = P(A|H1) = 0.7, since H1 tells us that Alice got flashlight X, and that alone completely determines the probability that Alice's flashlight lights up when she presses the switch; the fact that Bob's flashlight lit up won't alter our estimate of the probability that Alice's lights up. Likewise, P(A|B and H2) = P(A|H2) = 0.4. I'm sure that whatever the reference you gave says, it doesn't imply that this reasoning is incorrect.

You're saying pretty much the same thing there, that B gives no additional information beyond H.
But that is not the meaning of conditional independence. Conditional independence means that A gives us no information whatsoever about B.

I gave a pretty detailed argument in posts #61 and 62 on that thread, starting with the paragraph towards the end of post #61 that says "Let me try a different tack". If you aren't convinced by my comments so far in this post, perhaps you could identify the specific point in my argument on the other thread where you think I say something incorrect? For example, do you disagree with this part?

I'd like to define the term "past light cone cross-section" (PLCCS for short), which stands for the idea of taking a spacelike cross-section through the past light cone of some point in spacetime M where a measurement is made; in SR this spacelike cross-section could just be the intersection of the past light cone with a surface of constant t in some inertial reference frame (which would be a 3D sphere containing all the events at that instant which can have a causal influence on M at a later time). Now, let [tex]\lambda[/tex] stand for the complete set of values of all local physical variables, hidden or non-hidden, which lie within some particular PLCCS of M. Would you agree that in a local realist universe, if we want to know whether the measurement M yielded result A, and B represents some event at a spacelike separation from M, then although knowing B occurred may change our evaluation of the probability A occurred so that P(A|B) is not equal to P(A), if we know the full set of physical facts [tex]\lambda[/tex] about a PLCCS of M, then knowing B can tell us nothing additional about the probability A occurred at M, so that P(A|[tex]\lambda[/tex]) = P(A|[tex]\lambda[/tex] B)?
In case we are dealing with a local realist universe that is not deterministic, I think I should add here that the PLCCS of M is chosen at the last moment of intersection between the past light cones of M and B, so that no events that happen after the PLCCS can have any causal influence on B.

Continuing the quote: If so, consider two measurements of entangled particles which occur at spacelike-separated points M1 and M2 in spacetime. For each of these points, pick a PLCCS from a time which is prior to the measurements, and which is also prior to the moment that the experimenter chose (randomly) which of the three detector settings under his control to use (as before, this does not imply the experimenter has complete control over all physical variables associated with the detector). Assume also that we have picked the two PLCCS's in such a way that every event in the PLCCS of M1 lies at a spacelike separation from every event in the PLCCS of M2. Use the symbol [tex]\lambda_1[/tex] to label the complete set of physical variables in the PLCCS of M1, and the symbol [tex]\lambda_2[/tex] to label the complete set of physical variables in the PLCCS of M2. In this case, if we find that whenever the experimenters chose the same setting they always got the same results at M1 and M2, I'd assert that in a local realist universe this must mean the results each of them got on any such trial were already predetermined by [tex]\lambda_1[/tex] and [tex]\lambda_2[/tex]; would you agree? The reasoning here is just that if there were any random factors between the PLCCS and the time of the measurement which were capable of affecting the outcome, then it could no longer be true that the two measurements would be guaranteed to give identical results on every trial.
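The flashlight numbers in the exchange above can be re-derived mechanically from the stated setup (70%/40% lighting probabilities, 50/50 routing); this is just a check of the quoted arithmetic, not new content:

```python
p_on = {"X": 0.7, "Y": 0.4}  # P(flashlight lights up | which flashlight it is)
routing = {"H1": ("X", "Y"), "H2": ("Y", "X")}  # (Alice's, Bob's) flashlight
p_h = {"H1": 0.5, "H2": 0.5}

p_a  = sum(p_h[h] * p_on[routing[h][0]] for h in p_h)  # P(A)
p_b  = sum(p_h[h] * p_on[routing[h][1]] for h in p_h)  # P(B)
# Given H, the two lights are independent, so P(A and B | h) factorizes:
p_ab = sum(p_h[h] * p_on[routing[h][0]] * p_on[routing[h][1]] for h in p_h)

print(round(p_a, 4), round(p_b, 4), round(p_ab / p_b, 6))
# P(A|B) comes out slightly below P(A), yet conditioning on H1 screens B off:
p_a_given_h1 = p_on[routing["H1"][0]]  # P(A|H1) = P(A|B and H1) = 0.7
```

Running it reproduces P(A) = P(B) = 0.55 and P(A|B) ≈ 0.509091, matching the quote.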
Matches for: History of Mathematics

2014; approx. 216 pp; hardcover
Volume: 40
ISBN-10: 1-4704-1076-1
ISBN-13: 978-1-4704-1076-6
List Price: US$39
Member Price: US$31.20
Order Code: HMATH/40

Not yet published. Expected publication date is May 26, 2014.

The fame of the Polish school at Lvov rests with the diverse and fundamental contributions of Polish mathematicians working there during the interwar years. In particular, despite material hardship and without a notable mathematical tradition, the school made major contributions to what is now called functional analysis. The results and names of Banach, Kac, Kuratowski, Mazur, Nikodym, Orlicz, Schauder, Sierpiński, Steinhaus, and Ulam, among others, now appear in all the standard textbooks. The vibrant joie de vivre and singular ambience of Lvov's once scintillating social scene are evocatively recaptured in personal recollections. The heyday of the famous Scottish Café--unquestionably the most mathematically productive cafeteria of all time--and its precious Scottish Book of highly influential problems are described in detail, revealing the special synergy of scholarship and camaraderie that permanently elevated Polish mathematics from utter obscurity to global prominence. This chronicle of the Lvov school--its legacy and the tumultuous historical events which defined its lifespan--will appeal equally to mathematicians, historians, or general readers seeking a cultural and institutional overview of key aspects of twentieth-century Polish mathematics not described anywhere else in the extant English-language literature.

Readership: Undergraduate, graduate, and research mathematicians interested in the history of mathematics and the Polish history of sciences.
• The University and the Polytechnic in Lvov
• Polish mathematics at the turn of the twentieth century
• Sierpiński's stay at the University of Lvov (1908-1914)
• The University in Warsaw and Janiszewski's program (1915-1920)
• World mathematics (active fields in Poland) around 1920

The golden age: Individuals and community
• The mathematical community in Lvov after World War I
• Mathematical studies and students
• Journals, monographs, and congresses
• The popularization of mathematics
• Social life (the Scottish Café, the Scottish Book)
• The Polish Mathematical Society
• Collaboration with other centers
• In the eyes of others

The golden age: Achievements
• Stefan Banach's doctoral thesis and priority claims
• Probability theory
• Measure theory
• Game theory: A revelation without follow-up
• Operator theory in the 1920s
• Methodological audacity
• Banach's monograph: Polishing the pearls
• Operator theory in the 1930s: The dazzle of pearls
• New perspectives for which time did not allow
• On the periphery
• Ukrainization the Soviet way (1939-1941)
• The German occupation (1941-1944)
• The expulsion of Poles (1945-1946)

Historical significance
• Chronological overview
• Chronology of events as perceived elsewhere
• Influence on mathematics of the Lvov school
• A tentative summary
• Mathematics in Lvov after 1945

List of Lvov mathematicians
• Mathematicians associated with Lvov
• Bibliographies
• List of illustrations
• Index of names
A Confidence Region for Zero-Gradient Solutions for Robust Parameter Design Experiments

International Journal of Quality, Statistics, and Reliability, Volume 2011 (2011), Article ID 537543, 11 pages

Research Article

^1Department of Statistics, Temple University, 1810 North 13th Street, Philadelphia, PA 19122, USA
^2Quantitative Sciences, GlaxoSmithKline Pharmaceuticals, 1250 South Collegeville Road, Collegeville, PA 19426, USA

Received 11 March 2011; Accepted 28 June 2011

Academic Editor: Myong (MK) Jeong

Copyright © 2011 Aili Cheng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

One of the key issues in robust parameter design is to configure the controllable factors to minimize the variance due to noise variables. However, it can sometimes happen that the number of control variables is greater than the number of noise variables. When this occurs, two important situations arise. One is that the variance due to noise variables can be brought down to zero. The second is that multiple optimal control variable settings become available to the experimenter. A simultaneous confidence region for such a locus of points not only provides a region of uncertainty about such a solution, but also provides a statistical test of whether or not such points lie within the region of experimentation or a feasible region of operation. However, this situation requires a confidence region for the multiple-solution factor levels that provides proper simultaneous coverage. This requirement has not been previously recognized in the literature. In the case where the number of control variables is greater than the number of noise variables, we show how to construct critical values needed to maintain the simultaneous coverage rate.
Two examples are provided as a demonstration of the practical need to adjust the critical values for simultaneous coverage.

1. Introduction

Robust Parameter Design (RPD) is also called Robust Design or Parameter Design in the literature [1, 2]. The concept of RPD was introduced in the United States by Genichi Taguchi in the early 1980s. It is a methodology that takes both the mean and the variance into consideration for product or process optimization. Taguchi [3] divided the predictor variables into two categories: control variables and noise variables. Control variables are easy to control, while noise variables are either difficult to control or uncontrollable at large scale. In practice, we would like to find a range of control variables such that (1) the variance caused by changes in the noise variables is minimized and (2) the mean response is close to target. Multiple optimization design and analysis methods have been developed to achieve these two goals simultaneously, ranging from the traditional Taguchi methods to the more sophisticated response surface alternatives. (See [2, 4, 5] for detailed reviews of these methods.) Under certain conditions, Myers et al. [6] proposed a way to construct a confidence region of the control variables where the variability transmitted by the noise variables is minimized to zero. Although they focused only on the variance part, there are many situations in which focus is placed entirely on the process variance (see p. 506 in [5]). For instance, if the process mean can be optimized using certain control factors that do not interact with noise variables or impact the noise variance (i.e., "tuning factors"), then one can seek other control factors which can drive the noise variance to zero.
Even if the process mean cannot be driven to the target by tuning factors alone, it is nonetheless illuminating to consider both the confidence region for the minimum process variance and the confidence region for the optimal process mean [5, 6]. If the number of noise variables is not greater than the number of control factors (for examples, see [7–9] and pages 491–492 and 499 in [5]), then the conditions for a minimum process variance and a zero-gradient solution will coincide. Furthermore, a zero-gradient solution is quite useful in that, even for very large noise variation, the transmission of that noise to the output of the process can be made negligible (or very small) by utilization of zero-gradient (or near-zero-gradient) operating conditions. Therefore, it is desirable to have a way to statistically test for the existence of such a solution within the experimental region or a region of feasible operation (see p. 506 in [5]). A simultaneous confidence region for the locus of points forming a zero-gradient solution forms such a test and also provides a graphical measure of uncertainty about the zero-gradient solution. The existence of a zero-gradient solution is especially interesting for mixture experiments with control factor ingredients and noisy process variables. If a zero-gradient solution for the ingredient mixture can be found that is within the experimental region, and the confidence region for the zero-gradient solution intersects one or more of the mixture simplex boundaries, then this implies that it may be possible to remove one or more mixture ingredients and still maintain very low noise variance. Removal of one or more mixture ingredients may help to reduce production cost [10]. Myers et al.
[6] proposed a confidence region in control variables based upon the standard response surface model shown in (1) for incorporating noise variables (see, for example, [4, 11, 12]):

y(x, z) = β0 + x′β + z′γ + x′Bx + x′Δz + ε, (1)

where x is a k × 1 vector of control variables, z is a q × 1 vector of noise factors, β0 is the intercept, β is a k × 1 vector of coefficients for the main effects of the control variables, γ is a q × 1 vector of coefficients for the main effects of the noise variables, B is a k × k matrix whose diagonals are the coefficients for the quadratic terms of the control variables and whose off-diagonals are one-half of the control variable interaction effects, and Δ is a k × q matrix of control-by-noise variable interaction effects. ε is the random error term; it is assumed here that ε ~ N(0, σ²). Assuming that the noise variables have mean zero and variance-covariance matrix Σz, the variance of the response in model (1) is

Varz[y(x, z)] = (γ + Δ′x)′ Σz (γ + Δ′x) + σ². (2)

Here the variance of the response is divided into two parts: the variance transmitted by the noise variables (represented by the first term) and the constant variance σ² due to modeling error and other factors not considered in the model. In other words, it is the noise variables that lead to the variance heterogeneity in the response. Since changes in the noise variables are inevitable in practice, Myers et al. [6] proposed that the "minimum process variance" can be reached by setting the slope with respect to the noise variables, γ + Δ′x, equal to zero and, therefore, eliminating the noise variance part from the response variance. A confidence region for such control variable values can be constructed by inverting a hypothesis test of the form

H0: γ + Δ′x = 0 (3)

for each x-point. To simplify the notation, let δ be a q(k + 1) × 1 vector that contains all the elements of the noise variables' main effect vector γ and the interaction matrix Δ, that is, δ = (γ1, Δ11, …, Δk1, …, γq, Δ1q, …, Δkq)′, where γi is the ith element of the vector γ and Δij is the element of the matrix Δ in the ith row and jth column. Let L(x) = Iq ⊗ (1, x′), where Iq is an identity matrix of dimension q, (1, x′) is a row vector containing the corresponding control variables that interact with the noise variables, and ⊗ denotes the Kronecker product.
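To make the variance decomposition in (2) concrete, the following Python sketch evaluates the response variance at a control setting and verifies that a zero-gradient setting reduces it to the residual variance alone. All coefficient values here are hypothetical, not taken from the paper's examples:

```python
import numpy as np

def response_variance(x, g, D, Sigma_z, sigma2):
    """Var of the response at control setting x: noise-transmitted part + error."""
    slope = g + D.T @ x            # gradient of y with respect to the noise z
    return float(slope @ Sigma_z @ slope + sigma2)

# Hypothetical coefficients (q = 2 noise variables, k = 2 control variables)
g = np.array([1.0, -0.5])                  # noise main effects (gamma)
D = np.array([[0.8, 0.2],                  # control-by-noise interactions (Delta)
              [-0.4, 0.6]])
Sigma_z = np.eye(2)                        # noise variance-covariance
sigma2 = 1.0                               # residual variance

# A zero-gradient setting solves g + D'x = 0; there Var(y) = sigma2 only.
x_star = np.linalg.solve(D.T, -g)
print(response_variance(x_star, g, D, Sigma_z, sigma2))  # -> 1.0 (= sigma2)
```

At any other setting the noise-transmitted term is positive, so the variance exceeds sigma2.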
The null hypothesis in (3) can then be written as

H0: L(x)δ = 0, (4)

where L(x) is a q × q(k + 1) matrix. Now the confidence region in control variables for zero variance due to noise variables can be defined as the set of x-points for which the hypothesis in (4) is not rejected at level α. (5)

Furthermore, let T(x) denote the test statistic for H0, which is

T(x) = [L(x)δ̂]′ [L(x) V̂(δ̂) L(x)′]⁻¹ [L(x)δ̂], (6)

where δ̂ is the estimate of δ and V̂(δ̂) is the usual unbiased estimate of the variance of δ̂, based on the mean squared error s². Myers et al. [6] have shown that, for each fixed x, T(x) is distributed as qF(q, ν) under H0, where δ̂ is a vector of random variables (i.e., δ̂ is an estimator of δ, whereas its realized value is an estimate from the actual data), ν is the residual degrees of freedom (df), and F(q, ν) is the F distribution with numerator df equal to q and denominator df equal to ν. They then conclude that the 100(1 − α) percent confidence region in (5) is

CR = {x : T(x) ≤ c}, (7)

where the critical value is c = qFα(q, ν), with Fα(q, ν) the upper 100αth percentile of the F(q, ν) distribution. Note that the confidence region in (7) (called the "MKG confidence region" from now on) and the critical value were derived based on two critical assumptions: (1) the minimum of the variance due to noise variables is zero; (2) the solution to the zero-gradient equation in (4) is unique. There are situations where the first assumption cannot be met due to the fact that the solution to (4) is either outside the experimental region or does not exist. (However, the approach proposed in this article may provide the determination of such existence in a statistically significant way.) Assuming that the first assumption is met (as in the two examples in Section 3), the second assumption is true only when k ≤ q. Notice that (4) represents a series of q equations with k unknown control variables. As recognized by Myers et al. (see page 506 in [5]), the equation will result in a single point solution when k = q, a line or hyperplane when k > q, and a single point solution or no solution when k < q. In other words, the MKG confidence region provides the correct critical value for the zero-gradient solution (if it exists) only when there are at least as many noise variables as control variables.
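The pointwise test behind the MKG region in (7) can be sketched as follows. The estimates, their covariance, the stacking order of δ̂, and the residual df are all hypothetical stand-ins chosen for illustration:

```python
import numpy as np
from scipy.stats import f

def in_mkg_region(x, gamma_hat, Delta_hat, cov_delta, nu, alpha=0.05):
    """Pointwise zero-gradient test: keep x when the Wald statistic for
    H0: gamma + Delta'x = 0 is at most q * F_{q, nu; 1-alpha}."""
    q = len(gamma_hat)
    # L(x) = I_q kron (1, x') maps the stacked coefficients (assumed ordered
    # (gamma_i, Delta_1i, ..., Delta_ki) per noise variable) to the slope at x.
    L = np.kron(np.eye(q), np.concatenate(([1.0], x)))
    slope = gamma_hat + Delta_hat.T @ x        # estimated noise gradient at x
    W = L @ cov_delta @ L.T                    # estimated Var of the slope
    stat = float(slope @ np.linalg.solve(W, slope))
    return stat <= q * f.ppf(1 - alpha, q, nu)

# Hypothetical estimates: q = 1 noise variable, k = 2 control variables
gamma_hat = np.array([1.0])
Delta_hat = np.array([[0.5], [-0.5]])
cov_delta = 0.01 * np.eye(3)                   # Var(delta_hat), made up
print(in_mkg_region(np.array([-2.0, 0.0]), gamma_hat, Delta_hat, cov_delta, nu=9))
print(in_mkg_region(np.array([2.0, 0.0]), gamma_hat, Delta_hat, cov_delta, nu=9))
```

The first point lies on the estimated zero-gradient line (the statistic is zero), so it is retained; the second gives a large statistic and is excluded.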
However, when the number of noise variables is less than the number of control variables, multiple solutions can exist to the zero-gradient equation in (4). In such situations, use of the MKG region will provide below-nominal simultaneous coverage. As such, a confidence region which covers all the solutions simultaneously needs to be developed. In practice, statistical inference for multiple-solution problems is important, as this gives the experimenter more options with regard to finding the zero-gradient factor settings. The objective of this paper is to generalize the MKG confidence region such that it provides adequate coverage for both the single-solution case (where k ≤ q) and the multiple-solution case (where k > q). The rest of this paper is organized as follows. Section 2 focuses on the derivation of such a generalized confidence region and the corresponding critical value required for inverting the associated null hypothesis. In Section 3, we give two examples to demonstrate the difference in simultaneous coverage between the MKG confidence region and our proposed confidence region when the number of control factors exceeds the number of noise variables. Section 4 provides a summary of the results.

2. A Generalized Confidence Region Approach

2.1. The Multiple Zero-Gradient Solution Problem

To address the multiple-solution situation, the hypothesis in (4) is generalized to H0: x ∈ S, where S is the linear subspace representing either a unique single solution (i.e., a point) or multiple solutions (i.e., a line or a hyperplane). In other words, the confidence region could be a collection of either points or linear subspaces (of dimension ≥ 1), depending on whether the solution to the equation is unique or not. Therefore, we propose to generalize the MKG confidence region to the region in (8), where S represents the linear subspace of the x-space whose elements are solutions to L(x)δ = 0. Here x is a k × 1 vector, and S has dimension d, where d is defined as k − q when k > q, and 0 otherwise.
Therefore, when the solution to L(x)δ = 0 is a single point, d = 0; otherwise, d = k − q. In this section, we derive values for the critical value in (8). When d > 0 and S is, therefore, not a point, computation of the confidence region in (8) may appear difficult due to the replacement of x-points by subspaces. However, in this section, we will also show that the confidence region in (8) is equivalent to one based on pointwise gridding (of the type done for the MKG confidence region computations). We call the confidence region given in (8) the generalized zero-gradient (GZG) confidence region. As indicated by the definition, the MKG confidence region is a special case of the GZG confidence region where S is a point and d = 0. The MKG confidence region is correct, with critical value qFα(q, ν), for k ≤ q, if a solution exists. (If k ≤ q and the MKG region is the null set, then there is statistically significant evidence that a solution does not exist.) The next question is what value the critical value should take when k > q. For k > q, note that a GZG confidence region should contain the zero-gradient solution set with probability 1 − α before the experiment is performed. It is worth pointing out that when k > q, the GZG confidence region in (8) is a simultaneous confidence region problem in that a line or hyperplane will be included in the confidence region only if all the points on the line or hyperplane satisfy the criterion in (8). Therefore, the GZG confidence region in (8) can also be expressed in the supremum form (9). To find the critical value for k > q, we need first to investigate the distribution of the test statistic when H0 is true. Based on Miller's theorem (see pp. 65 and 113 in [13]), the critical value should be (q + d)Fα(q + d, ν), where d is the dimension of the solution set for the equation L(x)δ = 0. Here, ν is the degrees of freedom of the residuals. Note that when k ≤ q, d = 0 means that the solution is a point, which is a linear space of dimension zero. The critical value then becomes the MKG critical value because (q + 0)Fα(q + 0, ν) = qFα(q, ν). For the k > q case, the distribution of the test statistic is more complex.
Section 2.2 addresses the full model in (1) where the experimental design is completely orthogonal or partially orthogonal, so that Var(δ̂) = cσ²I for some positive constant c, residual variance σ², and an identity matrix I of dimension q(k + 1). Here, an exact simultaneous confidence region is derived. For the general case, Section 2.3 proposes a simulation method based upon the multivariate t-distribution to find approximate critical values with which to construct the confidence region.

2.2. Full Model, Orthogonal Case

Here, we assume that the data are generated from an orthogonal or partially orthogonal design such that Var(δ̂) = cσ²I. Furthermore, it is assumed that we have a full noise-control variable interaction model, meaning that each noise variable interacts with the same set of control variables, that is, each element of the interaction matrix Δ is nonzero. Under these conditions, for k > q, the supremum of the test statistic has the same distribution as a function of a chi-square random variable and a random Wishart matrix (which are stochastically independent), as shown in (10), where the chi-square variable has ν degrees of freedom (the residual df), W follows a Wishart distribution whose dimension involves d as defined in Section 2.1, and λmax is the maximum eigenvalue of the Wishart matrix. The proof of this result is provided in Appendix A. Using (10), the critical value can then be computed as the 100(1 − α)th percentile of the distribution of the random variable in (10), which can be obtained by simple Monte Carlo simulation from the chi-square and Wishart distributions. Some limited tables of critical values are given in Appendix B based on (10), although computation of the critical value for any specific case using the random variable in (10) is easily accomplished. Note that the critical value determined by (10) becomes the MKG critical value, qFα(q, ν), when d = 0, that is, when k ≤ q. This is because when d = 0, the Wishart matrix reduces to a chi-square random variable.
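The Monte Carlo computation of a critical value from independent chi-square and Wishart draws can be sketched generically. The statistic used below (largest Wishart eigenvalue, studentized by the residual chi-square) is a hypothetical stand-in for the exact expression in (10), which is not reproduced here; the df values are also illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_critical_value(stat_fn, nu, wishart_df, dim, alpha=0.05, n_sim=20000):
    """100(1-alpha)th Monte Carlo percentile of stat_fn(chi2, W), where
    chi2 ~ chi-square(nu) and W ~ Wishart_dim(wishart_df, I), independent."""
    draws = np.empty(n_sim)
    for i in range(n_sim):
        chi2 = rng.chisquare(nu)
        Z = rng.standard_normal((wishart_df, dim))
        W = Z.T @ Z                   # Wishart(wishart_df, I) as a Gram matrix
        draws[i] = stat_fn(chi2, W)
    return float(np.quantile(draws, 1 - alpha))

# Hypothetical stand-in statistic for the paper's (10)
nu = 9
crit = mc_critical_value(
    lambda c, W: np.linalg.eigvalsh(W).max() * nu / c,
    nu=nu, wishart_df=5, dim=2)
print(round(crit, 2))
```

As a sanity check, when dim = 1 the Wishart draw collapses to a chi-square, and this statistic reduces to a scaled F random variable, mirroring the reduction to the MKG critical value noted in the text.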
In other words, W is no longer a matrix but a scalar random variable, and the supremum statistic reduces to the MKG form. Furthermore, the same result in (10) holds when the coefficient estimate has a matrix normal distribution. (See pp. 90-91 in [14].) (For details, see [15].)

2.3. The General Case

In some cases, the experimental design may be such that Var(δ̂) does not have the orthogonal form, or we may wish to use a model with some "control × noise variable" interaction terms deleted, that is, Δ has some zero elements. In such situations, when k > q, the distribution of the test statistic does not have a simple form and may depend upon nuisance parameters even under H0. Nonetheless, in such situations, it is still possible to obtain approximately conservative simultaneous confidence regions for control variables associated with zero-gradient solutions. We provide such a construction as follows. Recall that the supremum in (9) is a function of δ̂, and consider the quantity in (12). Note that this yields an approximate upper confidence bound for the scalar-valued quantity being bounded. (See Clarke [16] for a discussion of confidence bounds on nonlinear functions of model parameters constructed from confidence regions.) Let c* denote the 100(1 − α)th percentile of its distribution under H0, and consider the confidence region defined by comparing the statistic to c*. This confidence region should provide (at least approximately) a conservative simultaneous confidence region for the zero-gradient solutions. However, computation of c* using (12) and of the associated confidence region is numerically difficult due to the complex constraints associated with the definition of the solution subspace. Fortunately, it can be shown that the simpler expression in (13) holds. A proof is given in Appendix C. The expression for c* in (13) allows for much easier computation of the critical value. The actual construction of the GZG confidence region from the relevant critical value will be outlined in Section 2.4.

2.3.1. The Critical Value Computation for the General Case

Note that under H0 we can express the statistic as in (14), where s² is the mean squared error and M is a known matrix computed from the design matrix.
Here, t follows the multivariate t distribution with location parameter equal to zero, scale matrix M, and degrees of freedom ν. Using (14), we can then compute the critical value c* by Monte Carlo simulation as follows.

Step 1. Compute the scale matrix M from the design matrix.

Step 2. Simulate a multivariate t random vector (rv) with scale matrix M and ν df. (This can be done by simulating a multivariate normal rv with mean vector 0 and variance-covariance matrix M and a chi-square random variable with ν df. See [17] for details.)

Step 3. Compute the supremum statistic using the expressions in (13) and (14). (For practical reasons, this computation can be done by maximization over a prespecified, bounded region R instead. This will calibrate the coverage to be simultaneous only over the portion of the true zero-gradient subspace lying in R.)

Step 4. Do Steps 2-3 a large number of times to estimate the 100(1 − α)th percentile of the Monte Carlo distribution of the supremum statistic. This 100(1 − α)th percentile is then a Monte Carlo estimate of c*.

2.3.2. The Coverage Rate of the Critical Value

In order to check the accuracy of c* as a critical value, we have done some Monte Carlo simulations of the above four-step procedure using three different noise variable models in conjunction with both orthogonal and nonorthogonal designs. The statistical models used are summarized in Table 1. These models are constructed so that the zero-gradient solution exists in the experimental region. Three partially orthogonal, face-centered central composite experimental designs were assessed, with associated statistical models 1, 2, and 3, respectively. These designs employed a coded factor space with factor levels equal to ±1 (except for the center points). The axial points in the noise variables are deleted to maintain partial orthogonality. The factorial part of the designs is either full factorial (models 1 and 2) or half factorial (model 3).
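The four-step procedure above can be sketched for a single noise variable. The scale matrix, the grid standing in for the bounded region R, and the maximized statistic are illustrative assumptions rather than the paper's exact quantities:

```python
import numpy as np

rng = np.random.default_rng(1)

def gzg_critical_value(M, nu, grid, alpha=0.05, n_sim=2000):
    """Steps 1-4 sketch (q = 1, hypothetical setup): simulate multivariate-t
    vectors with scale matrix M and nu df, maximize the squared studentized
    noise gradient over a grid of x-points, and return the 100(1-alpha)th
    Monte Carlo percentile of the supremum."""
    p = M.shape[0]
    chol = np.linalg.cholesky(M)
    sups = np.empty(n_sim)
    for i in range(n_sim):
        # Step 2: multivariate t = normal(0, M) / sqrt(chi2_nu / nu)
        t = (chol @ rng.standard_normal(p)) / np.sqrt(rng.chisquare(nu) / nu)
        # Step 3: maximize the pointwise statistic over the bounded grid
        sup = 0.0
        for x in grid:
            u = np.concatenate(([1.0], x))      # slope basis (1, x')
            sup = max(sup, (u @ t) ** 2 / (u @ M @ u))
        sups[i] = sup
    # Step 4: Monte Carlo percentile of the supremum
    return float(np.quantile(sups, 1 - alpha))

M = 0.1 * np.eye(3)        # Step 1: stand-in for the design-based scale matrix
grid = [np.array([a, b]) for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)]
crit = gzg_critical_value(M, nu=9, grid=grid)
print(round(crit, 2))
```

Because the maximum is taken over many x-points, the resulting critical value exceeds the pointwise F-based value, which is exactly the simultaneous-coverage adjustment the section describes.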
The nonorthogonal designs are constructed by changing the (one) factorial point (comprised of all −1s) from () to (). The resultant sample size () and the residual df () of each composite design are both listed in Table 2. For demonstration purposes, we simply chose model parameters to be either 1 or −1, with residual error variance equal to 1. The results of these coverage rate simulations are summarized in Table 2. The coverage rates were computed as follows. The models in Table 1 were used to compute the true space, . For each of the three models, the region, , was a hypercube constructed from the Cartesian product of intervals of the form . For each simulation, a dataset was generated based on the model and the corresponding central composite design, a critical value was computed based on the simulated dataset, then a check was done to see if the event occurred, where is the convex hull formed by the factor levels. For each simulated dataset, the critical value of was computed using 1000 Monte Carlo simulations. 5000 simulations were done to assess the simultaneous coverage of the GZG confidence region for the set . In an attempt to reduce the conservatism of the above approach for computing , we also considered the approximate approach obtained by maximizing over , where . We denote this approximate critical value by and use it in place of to reduce conservatism. Remark 1. Because is a function of the data, the relatively large region, , was chosen for these simulations so that would be extremely unlikely to be empty for any simulated dataset. In addition, we did not want to rule out situations where the confidence region was outside the experimental region. While, in practice, such extrapolated inferences must be treated with caution, nonetheless it may be desired to compute such a confidence region. 
Such a confidence region outside the experimental region suggests that it may not be possible to obtain a “zero-gradient” solution for noise transmission, at least within the current experimental region. However, such a confidence region just outside the experimental region may offer hope that resetting process control conditions may allow for a more robust process. Of course, additional experiments outside the current experimental region would be needed to confirm this. Remark 2. Maximization of over , to compute the Monte Carlo critical value, , was accomplished by using the SAS/IML Nelder-Mead simplex algorithm, nlpnms. This was done to make the Monte Carlo simulations of this Monte Carlo procedure tractable. Some limited simulations were also done whereby the maximization of over was computed by gridding instead. This was done to make sure that the Nelder-Mead algorithm did not stop its maximization prematurely. In all cases, each critical value, , computed using nlpnms, was slightly larger than that obtained using gridding. (Random number seeds were aligned to avoid the Monte Carlo differences in the comparisons between gridding and the use of the Nelder-Mead simplex algorithm.) For the approximate approach, maximization over was done by gridding as this was easier to accomplish with finer gridding. Table 2 below displays the percent of times the event in (15) occurred for each of the three models with and without an orthogonal design. If the event in (15) occurs, then that portion of the true linear subspace, (within ), is entirely covered by the GZG confidence region; otherwise it is not. Table 2 indicates that the simultaneous coverage rate of the GZG confidence region using the conservative critical value, , produces reasonably conservative results, while the approximate approach (that maximizes over , instead) achieves closer to nominal (yet slightly conservative) coverage rates. 
It is interesting to note that for each approach the coverage rate appears to be insensitive to the minor departure from orthogonality that was induced by changing the (one) factorial point (comprised of all −1s) from () to (). Such a departure from design orthogonality could happen due to a design execution error or a process restriction.

2.3.3. The Full Model Nonorthogonal Case

Because computation of c* and its approximate version requires maximization within a Monte Carlo calculation, it would be useful to assess whether this can be eliminated when a full model is employed. We, therefore, conduct another simulation study to see if the critical value based upon the random variable in (10) can be used as an approximate critical value for mild departures from orthogonality. We use the same nonorthogonal designs as used in Table 2. The corresponding full-interaction models are listed in Table 3. This time the critical value from (10) was used with these nonorthogonal designs to assess the simultaneous coverage rate. The results are shown in Table 4. In order to assess the coverage rate, gridding had to be done over a subset of the zero-gradient subspace. As a fairer comparison with the theoretical critical value, gridding was done over a wider region than the hypercube used before. This is because the critical value associated with the random variable in (10) is computed by maximization over the whole linear subspace. Table 4 indicates that this minor departure from orthogonality has virtually no effect on the coverage rate of the GZG confidence region when the more convenient critical value is used. For more radical departures from orthogonality, it may be safer to use the conservative critical value. But further robustness studies are needed to ascertain how well the more convenient critical value works under departures from its assumptions.

2.4. Computation of the Simultaneous Confidence Region

For the k > q case, once we have computed the critical value, the confidence region can be computed by searching for linear subspaces S that satisfy the condition defined in (9). However, searching over various lines or hyperplanes that span an experimental region is more computationally difficult than searching the same experimental region in a pointwise fashion. Fortunately, it can be shown that, for any given critical value, the GZG confidence region can be computed by pointwise gridding. This is because the regions in (7) and (8) coincide when the same critical value is used. A proof is provided in Appendix D. This equivalency shows that one can construct the GZG confidence region by simply gridding over the experimental region in a pointwise fashion.

3. Examples

3.1. One Noise Variable

This example is from Myers et al. [6]. It was originally taken from Montgomery [18] (2009, page 231). The data were generated from a 2^4 factorial experiment with a total of 16 observations from a pilot plant to explore the factors that could affect the filtration rate of a chemical bonding substance. The goal is to maximize the filtration rate, y. As in Myers et al. [6], one of the four factors, temperature, is assumed difficult to control at large scale and is, therefore, treated as a noise variable z. The rest of the factors are control variables: x1: pressure, x2: concentration, x3: stirring rate. The fitted model is given in (16), with mean squared error equal to 21.12 and residual df equal to 9, and the estimated slope of the noise variable is given in (17). Here q = 1, and the solution to the null hypothesis is a line. Then the general critical value based on Miller's theorem [13] should be used to calculate the GZG confidence region (as shown in Figure 1). The GZG confidence region in Figure 1 is clearly wider than the MKG confidence region in Myers et al. (1997, Figure 2 in [6]), where qFα(q, ν) is used as the critical value.
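The pointwise-gridding construction can be illustrated for a one-noise-variable setting like this example. The slope estimate, its covariance block, the residual variance, and the wider Miller-type critical value are all assumed stand-ins, since the fitted equations are not reproduced here:

```python
import numpy as np
from scipy.stats import f

def zero_gradient_region(grid, gamma_hat, delta_hat, cov, s2, nu, crit):
    """Keep the grid points where the studentized squared noise slope
    falls below the supplied critical value (q = 1 case)."""
    kept = []
    for x in grid:
        u = np.concatenate(([1.0], x))            # basis for the slope at x
        slope = gamma_hat + delta_hat @ x         # estimated d y / d z at x
        stat = slope ** 2 / (s2 * (u @ cov @ u))
        if stat <= crit:
            kept.append(x)
    return kept

# Hypothetical stand-ins for the fitted quantities
gamma_hat = 1.0
delta_hat = np.array([0.6, -0.4, 0.5])
cov = np.eye(4) / 16.0              # made-up (X'X)^{-1} block for a 2^4 design
s2, nu = 1.0, 9
axis = np.linspace(-1, 1, 21)
grid = [np.array(p) for p in
        np.stack(np.meshgrid(axis, axis, axis), axis=-1).reshape(-1, 3)]

region_mkg = zero_gradient_region(grid, gamma_hat, delta_hat, cov, s2, nu,
                                  f.ppf(0.95, 1, nu))      # pointwise F value
region_gzg = zero_gradient_region(grid, gamma_hat, delta_hat, cov, s2, nu,
                                  3 * f.ppf(0.95, 3, nu))  # assumed wider Miller-type value
print(len(region_mkg), len(region_gzg))
```

With the larger critical value, strictly more grid points are retained, mirroring the wider GZG region seen in Figure 1.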
It is clear from Figure 1 that we are at least 95% confident that the zero-gradient locus of points passes through the experimental region. Next, we do some simulations to compare the coverage rates of the GZG and MKG confidence regions. Since the true optimum is not known in practice, we calculate the coverage rate for the zero-gradient solution using a simulation model equal to the fitted model in (16). Note that the true solution in this example is a line of infinite length, but the simulation is done only for the portion of the line within the experimental region, that is, within the coded ranges of the control variables. Using 100,000 Monte Carlo simulations, the simultaneous coverage rate of the GZG confidence region for all of the zero-gradient solutions in the experimental region is 97%, while the MKG confidence region has only a 92% coverage rate. The MKG confidence region has a lower coverage rate because it was designed to contain the true optimum only when the optimum is a point. Although the GZG confidence region is designed to contain all the true solutions (which could be a point, a line, or a hyperplane), the simulated coverage rate tends to exceed the nominal coverage rate because the simulation is done within a finite range of the control variables while the line or hyperplane has an infinite range in theory.

3.2. Two Noise Variables

This example comes from a face-centered central composite design with the factorial part being a half-fractional factorial design (see details in [19]). The objective of this study is to find the optimized condition that maximizes the yield of diacylglycerol oil, which is a natural component of various edible oils and has shown some beneficial effects as compared to the traditional triacylglycerol oil. Five factors were studied in this experiment: reaction time (RTIME), enzyme load (ENZL), reaction temperature (RTEMP), water content (WATC), and substrate molar ratio (SUBR).
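With two noise variables and three control variables, as analyzed in this example, the zero-gradient equation γ + Δ′x = 0 comprises two equations in three unknowns, so its solution set is a line that can be parametrized directly. The coefficients below are hypothetical stand-ins for the fitted values:

```python
import numpy as np

# Zero-gradient locus for q = 2 noise variables, k = 3 control variables.
gamma = np.array([1.0, -0.5])              # noise main effects (hypothetical)
Delta = np.array([[0.6, 0.2],              # 3 x 2 control-by-noise interactions
                  [-0.4, 0.7],
                  [0.5, -0.3]])

A = Delta.T                                 # the 2 x 3 system A x = -gamma
x0 = np.linalg.lstsq(A, -gamma, rcond=None)[0]   # one particular solution
direction = np.linalg.svd(A)[2][-1]         # right null-space direction of A

# Every point x0 + t * direction transmits zero noise variance:
for t in (-1.0, 0.0, 2.0):
    assert np.allclose(A @ (x0 + t * direction), -gamma)
print(x0, direction)
```

The confidence region for this line is then a tube around it in the 3-dimensional control space, as Figure 2 depicts for the actual data.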
Water content (WATC) is difficult to control at large scale [19] and is, therefore, treated as a noise variable. For illustration purposes, substrate molar ratio is also treated as a noise variable, and the axial points corresponding to the noise variables are excluded from the analysis to obtain partial design orthogonality with respect to the noise variables (i.e., to satisfy the orthogonality condition of Section 2.2). The final model in coded factor values is given in (18), where x1: RTIME, x2: ENZL, x3: RTEMP, z1: WATC, z2: SUBR. Here, the residual mean squared error is equal to 2.56, with 25 observations and residual df equal to 9. Since q = 2 and k = 3, the solution to the null hypothesis is a line in the 3-dimensional space determined by the control variables x1, x2, x3. Therefore, the confidence region for this line is a tube in this 3-dimensional space. A 95% GZG confidence region is shown in Figure 2. Based on (10), the GZG critical value is obtained via chi-square and Wishart simulation. From Figure 2, we can see that while this confidence region does not provide statistically significant evidence that the zero-gradient locus of points passes through the experimental region, it does appear that a good portion of the confidence region is within the experimental region, and hence attainment of near-zero-gradient conditions should be feasible for this process. As with the previous example, we compare the coverage rates for the GZG and MKG confidence regions using the fitted model as the true population model. Using 100,000 Monte Carlo simulations (based upon the fitted model in (18)), the simultaneous coverage rate is 96%, while it is only 90% for the nominal 95% MKG confidence region. (Here, gridding was done over the cube formed by the Cartesian product of the coded ranges of the control variables.)

4. Summary

This paper shows that when the number of control variables does not exceed the number of noise variables, the MKG approach provides a confidence region for control variables associated with a zero gradient for noise transmission.
Otherwise, the MKG approach results in a confidence region that is too small for simultaneous coverage of the linear subspace of zero-gradient solutions. It is important to know that the true optimal condition represented by the control variables is either a line or a hyperplane instead of a single point when k > q. In this situation, constructing a simultaneous confidence region about the linear subspace solution is desirable in that a subspace of solutions provides the investigator with many options for setting the zero-gradient control levels. Of course, a confidence region also provides the experimenter with a measure of uncertainty for the optimal solution. If the confidence region is too large, further experimental runs may be needed to make more accurate inferences. If the current manufacturing set point is outside of the confidence region, this provides statistically significant evidence that reconfiguration of the set point may help improve process variability by lowering the transmission of noise through the system. The GZG confidence region for the zero-gradient conditions is proposed and is shown to provide nominal or reasonably conservative coverage rates for many noise variable experiments that occur in practice. In the situation where there are many noise variables, it may be either costly or difficult to study all the noise effects. One way to deal with this problem is to combine the multiple noise factors into one compound noise factor with two extreme conditions as its two levels (provided certain assumptions can be satisfied). See [1, 20, 21] for discussion. If the noise factors can be combined into one compound noise factor, then we could have a situation where the number of control variables is greater than the number of noise variables. The GZG approach is directly applicable to this situation. In some cases, however, one may desire to create a compound noise factor with more than two levels [22], or two or more compound noise factors.
In either case, as long as the predictive model is in the form of (1), the GZG approach is applicable. The GZG confidence region provides inferences about the optimal control point or points that yield a zero gradient for the transmission of variability from the noise variables. However, there are situations where the control points corresponding to zero noise variance are either outside the experimental region or simply do not exist. In this case, it would still be useful to find the constrained optimal control point for minimum noise variance over the experimental region. A method to further generalize the confidence region for the constrained optimal point for minimum noise variance is needed and is currently under development.

A. Proof of (10)

We will prove the result for the no-intercept case of the hypothesis. It then follows that the result can be generalized to the intercept case.

Part 1. For the no-intercept case: When k > q, the solution x to the zero-gradient equation can be expressed as a linear combination of the basis vectors for the linear subspace, that is, x = Lb, where b is a coefficient vector and L is a matrix whose rows consist of the basis vectors of the linear subspace. (Here, only L is a function of x.) Since the transformed coefficient vector is normally distributed, the associated matrix follows a Wishart distribution by definition (see p. 92 in [23]). Let s² be the sample estimate of the residual variance σ²; substituting into the test statistic gives (A.2), and replacing the coefficient vector by its standardized version z in (A.2) gives (A.3), in which the remaining factor is a scalar. (For a proof of (A.3), see the result in Problem 22.1 in Rao [24, p. 74].) By the definition of an eigenvalue, (A.4) follows. Note that the matrix in question is positive definite and symmetric, and, therefore, so is its inverse; hence both symmetric square-root factors exist. Multiplying both sides of (A.4) by the square-root factor from the left and from the right gives (A.5), from which the supremum statistic follows the distribution claimed in (10), with the chi-square factor having the stated degrees of freedom. Next, we show that A ~ Wishart.
Because ~Wishart and is a matrix with rank , by the properties of the Wishart distribution (see the theorem in Rencher [23, p. 56]), ~Wishart . Note that rank rank rank. Since is symmetric and positive definite, is also symmetric and positive definite, and its rank is . Hence, ~Wishart .

Part 2. For the intercept case: The intercept case can be proved using the same arguments, as indicated by Miller [13, p. 113]. Note that, in the intercept case, the rank of is . Hence, .

B. Critical Values Based on (10)

See Table 5.

C. Proof of (13)

Note that Adapting the proof of Theorem 2.1 in Peterson et al. [25], it follows that for any critical value, , Since is not a function of , it follows directly that So (13) follows directly from (C.1) and (C.3).

D. The Proof That

Recall that . By definition, implies that . Next, we show that if , then . This can be proved by contradiction. Suppose that there exist some s such that , but . Then there exists at least one point, say , in such that there is no linear subspace that satisfies the two conditions: (1) it contains , and (2) for every point in this subspace . Define and consider the set . It follows, using a proof analogous to that of Theorem 2.1 in Peterson et al. [25], that . Therefore, . This then implies that there exists a such that . If , then there exists a line or hyperplane such that all the s that satisfy are in . Hence, . Again using the proof of Theorem 2.1 in Peterson et al. [25], it follows that for any given point , if and only if for some . Therefore, for all the in . In other words, satisfies the above two conditions, which is a contradiction.

References

1. V. N. Nair, “Taguchi's parameter design: a panel discussion,” Technometrics, vol. 34, no. 2, pp. 127–161, 1992.
2. T. Robinson, C. Borror, and R. H. Myers, “Robust parameter design: a review,” Quality and Reliability Engineering International, vol. 20, no. 1, pp. 81–101, 2004.
3. G. Taguchi, Introduction to Quality Engineering, UNIPUB/Kraus International, White Plains, NY, USA, 1986.
4. R. H. Myers, A. I. Khuri, and G. G. Vining, “Response surface alternatives to the Taguchi robust parameter design approach,” The American Statistician, vol. 46, no. 2, pp. 131–139, 1992.
5. R. H. Myers, D. C. Montgomery, and C. M. Anderson-Cook, Response Surface Methodology: Process and Product Optimization Using Designed Experiments, John Wiley & Sons, New York, NY, USA, 3rd edition, 2009.
6. R. H. Myers, Y. Kim, and K. L. Griffiths, “Response surface methods and the use of noise variables,” Journal of Quality Technology, vol. 29, no. 4, pp. 429–440, 1997.
7. J. Kunert, C. Auer, M. Erdbrügge, and R. Ewers, “An experiment to compare Taguchi's product array and the combined array,” Journal of Quality Technology, vol. 39, no. 1, pp. 17–34, 2007.
8. T. J. Robinson, S. S. Wulff, D. C. Montgomery, and A. I. Khuri, “Robust parameter design using generalized linear mixed models,” Journal of Quality Technology, vol. 38, no. 1, pp. 65–75, 2006.
9. H. B. Goldfarb, C. M. Borror, D. C. Montgomery, and C. M. Anderson-Cook, “Using genetic algorithms to generate mixture-process experimental designs involving control and noise variables,” Journal of Quality Technology, vol. 37, no. 1, pp. 60–74, 2005.
10. E. Del Castillo and S. Cahya, “A tool for computing confidence regions on the stationary point of a response surface,” The American Statistician, vol. 55, no. 4, pp. 358–365, 2001.
11. G. E. P. Box and S. Jones, “Design products that are robust to environment,” Tech. Rep. 56, Center for Quality and Productivity Improvement, University of Wisconsin, Madison, WI, USA, 1990.
12. J. M. Lucas, “How to achieve a robust process using response surface methodology,” Journal of Quality Technology, vol. 26, no. 4, pp. 248–260, 1994.
13. R. G. Miller, Simultaneous Statistical Inference, Springer, New York, NY, USA, 2nd edition, 1981.
14. N. H. Timm, Applied Multivariate Analysis, Springer, New York, NY, USA, 2002.
15. A. Cheng, “Confidence Regions for Optimal Control Variables for Robust Parameter Design Experiments,” Ph.D. dissertation, Department of Statistics, Temple University, 2011.
16. G. P. Y. Clarke, “Approximate confidence limits for a parameter function in nonlinear regression,” Journal of the American Statistical Association, vol. 82, no. 397, pp. 221–230, 1987.
17. S. Kotz and N. L. Johnson, “Multivariate t-distribution,” in Encyclopedia of Statistical Sciences, vol. 6, pp. 129–130, John Wiley & Sons, Hoboken, NJ, USA, 1982.
18. D. C. Montgomery, Design and Analysis of Experiments, John Wiley & Sons, New York, NY, USA, 7th edition, 2009.
19. J. B. Kristensen, X. Xu, and H. Mu, “Process optimization using response surface design and pilot plant production of dietary diacylglycerols by lipase-catalyzed glycerolysis,” Journal of Agricultural and Food Chemistry, vol. 53, no. 18, pp. 7059–7066, 2005.
20. C. F. J. Wu and M. Hamada, Experiments: Planning, Analysis, and Parameter Design Optimization, John Wiley & Sons, New York, NY, USA, 2000.
21. J. Singh, D. D. Frey, N. Soderborg, and R. Jugulum, “Compound noise: evaluation as a robust design method,” Quality and Reliability Engineering International, vol. 23, no. 3, pp. 387–398, 2007.
22. X. S. Hou, “On the use of compound noise factor in parameter design experiments,” Applied Stochastic Models in Business and Industry, vol. 18, no. 3, pp. 225–243, 2002.
23. A. C. Rencher, Multivariate Statistical Inference and Applications, John Wiley & Sons, New York, NY, USA, 1998.
24. C. R. Rao, Linear Statistical Inference and Its Applications, John Wiley & Sons, New York, NY, USA, 2nd edition, 1973.
25. J. J. Peterson, S. Cahya, and E. Del Castillo, “A general approach to confidence regions for optimal factor levels of response surfaces,” Biometrics, vol. 58, no. 2, pp. 422–431, 2002.
Dissipation and enstrophy statistics in turbulence: are the simulations and mathematics converging?

Seminar Room 1, Newton Institute

This presentation will be based upon the Focus on Fluids article with this title to appear in JFM 700 (2012). The Focus will be on: Yeung, Donzis, & Sreenivasan, 2012, Dissipation, enstrophy and pressure statistics in turbulence simulations at high Reynolds numbers, J. Fluid Mech. 700. The two themes of the FoF are that Yeung et al. resolves a remaining question about the convergence of higher-order statistics, and that this result is related to new mathematics on temporal intermittency in turbulence in Gibbon, J. D. 2009, Estimating intermittency in three-dimensional Navier-Stokes turbulence, J. Fluid Mech. 625.

What Yeung et al. finds is that even if the fluctuations of the higher-order vorticity and strain statistics are so large that they do not converge individually, their ratios do converge. Gibbon (2009) shows that this type of behaviour is expected, and Gibbon (TODW01) will present specific predictions for the ordering of these statistics at any given time and the type of maximum growth during the most intermittent periods. However, Yeung et al. does not give time variations, so a direct comparison is not possible.

My new results are from simulations of the reconnection of anti-parallel vortex tubes, an example of the events assumed by Gibbon (2009), where this time-dependent analysis has been done. This simulation develops, after just two reconnection steps, most of the properties associated with fully-developed turbulence, including a -5/3 spectrum with the proper coefficient and the expected enstrophy production skewness, and the intermittency ratios are consistent with Yeung et al. Turbulence develops after reconnections by forming orthogonal vortices, which wrap up as in the Lundgren spiral vortex model.
The temporal ordering and growth of the higher-order vorticity statistics obey the bounds of the new mathematical predictions exactly. Thus the connection between the latest high Reynolds number calculations and the latest mathematics is demonstrated.
William P.

I have been a college math instructor for more than 15 years (twice named Teacher of the Year by the student government association). During this time, I have also been a professional tutor in my school's Learning Center. While I enjoy the challenge of classroom teaching, I have found that students often benefit most from the personalized, interactive nature of tutoring sessions. My educational background includes Master's degrees in mathematics and physics, as well as Ph.D. coursework in math. In my role as a Learning Center tutor, I have acquired extensive experience tutoring almost all levels of college math and many physics courses. I have also tutored high school math (algebra and above) and HS physics. I hope to have the opportunity to help you achieve
Algebraic and Geometric Topology 5 (2005), paper no. 23, pages 537-562. Yang-Baxter deformations of quandles and racks Michael Eisermann Abstract. Given a rack Q and a ring A, one can construct a Yang-Baxter operator c_Q: V tensor V --> V tensor V on the free A-module V = AQ by setting c_Q(x tensor y) = y tensor x^y for all x,y in Q. In answer to a question initiated by D.N. Yetter and P.J. Freyd, this article classifies formal deformations of c_Q in the space of Yang-Baxter operators. For the trivial rack, where x^y = x for all x,y, one has, of course, the classical setting of r-matrices and quantum groups. In the general case we introduce and calculate the cohomology theory that classifies infinitesimal deformations of c_Q. In many cases this allows us to conclude that c_Q is rigid. In the remaining cases, where infinitesimal deformations are possible, we show that higher-order obstructions are the same as in the quantum case. Keywords. Yang-Baxter operator, r-matrix, braid group representation, deformation theory, infinitesimal deformation, Yang-Baxter cohomology AMS subject classification. Primary: 17B37. Secondary: 18D10,20F36,20G42,57M25. E-print: arXiv:math.QA/0409202 DOI: 10.2140/agt.2005.5.537 Submitted: 16 September 2004. (Revised: 18 May 2005.) Accepted: 3 June 2005. Published: 19 June 2005. Notes on file formats Michael Eisermann Institut Fourier, Universite Grenoble I, 38402 St Martin d'Heres, France Email: Michael.Eisermann@ujf-grenoble.fr URL: www-fourier.ujf-grenoble.fr/~eiserm AGT home page Archival Version These pages are not updated anymore. They reflect the state of . For the current production of this journal, please refer to http://msp.warwick.ac.uk/.
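The formula $c_Q(x \otimes y) = y \otimes x^y$ in the abstract restricts, on basis elements, to a set-theoretic braiding, and the braid relation holds precisely because of rack self-distributivity, $(x^y)^z = (x^z)^{(y^z)}$. As a quick independent sanity check (my own sketch, not code from the paper), the dihedral quandle on Z/3, where x^y = 2y - x (mod 3), can be verified exhaustively:

```python
from itertools import product

n = 3

def op(x, y):
    # dihedral quandle on Z/n: x^y = 2y - x (mod n)
    return (2 * y - x) % n

def c(x, y):
    # set-theoretic Yang-Baxter map on basis elements: x (x) y -> y (x) x^y
    return (y, op(x, y))

def c12(t):
    # c tensored with the identity, acting on the first two factors
    x, y, z = t
    a, b = c(x, y)
    return (a, b, z)

def c23(t):
    # identity tensored with c, acting on the last two factors
    x, y, z = t
    a, b = c(y, z)
    return (x, a, b)

# braid relation (c12 . c23 . c12) == (c23 . c12 . c23) on all basis triples
for t in product(range(n), repeat=3):
    assert c12(c23(c12(t))) == c23(c12(c23(t)))
```

Unwinding both sides shows the relation reduces exactly to self-distributivity of the rack operation, which for the dihedral quandle holds identically.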
11. Most House business is conducted within the Committee of the Whole because this arrangement
A. guarantees a quorum.
B. prevents filibusters.
C. doesn't require a quorum.
D. ensures cloture.

The answer is C: doesn't require a quorum. Most House business is conducted within the Committee of the Whole because this arrangement doesn't require a quorum.
Subject: error in code
From: Konst
Date: 12 Oct, 2012 15:18:11
Message: 1 of 5

Can someone tell me what is the error in this?

y=randn(500,1);
N=length(y);
f=1;
for t=10:30
    n=f*t;
    kapa=N/n;
    kapa=round(kapa);
    i=0;
    clear alpha
    clear alphadetrend
    clear Y
    while (i+n)<N
        for j=1:kapa
            alpha(1:n,j)=y(i+1:i+n,1);
            alphadetrend(1:n,j)=detrend(alpha(1:n,j));
            i=i+n;
        end
    end
    i=1;
    while i<=N
        for j=1:kapa
            for k=1:n
                Y(i,1)=alphadetrend(k,j);
                i=i+1;
            end
        end
    end
end

The first loop runs OK. But then when t becomes 11 it says, in the line alpha(1:n,j)=y(i+1:i+n,1);, that the index exceeds matrix dimensions. I tried to put an if to check the index i+1 too but it still gives the same error. Any ideas what is wrong? Thanks in advance!

Subject: error in code
From: dpb
Date: 12 Oct, 2012 16:44:09
Message: 2 of 5

On 10/12/2012 10:18 AM, Konst wrote:
> Can someone tell me what is the error in this?
> ...
> while (i+n)<N
> for j=1:kapa
> alpha(1:n,j)=y(i+1:i+n,1);
> alphadetrend(1:n,j)=detrend(alpha(1:n,j));
> i=i+n;
> end
> end
> ...
> The first loop runs OK. But then when t becomes 11 it says that in line
> alpha(1:n,j)=y(i+1:i+n,1); index exceeds matrix dimensions.

>> t
t =
    11
>> j
j =
     5
>> size(alpha)
ans =
    11     4

--

Subject: error in code
From: Roger Stafford
Date: 12 Oct, 2012 20:20:27
Message: 3 of 5

"Konst " <konstance1@hotmail.com> wrote in message <k59cbj$2le$1@newscl01ah.mathworks.com>...
> The first loop runs OK. But then when t becomes 11 it says that in line alpha(1:n,j)=y(i+1:i+n,1); index exceeds matrix dimensions. .....
- - - - - - - - - -
I don't see any error at t = 11, but at t = 12 you will have kapa = 42 and, because 12*42 = 504, at the last trip through the for-loop you will call on y(493:504), which is out of range. You had better change from round(kapa) to floor(kapa).

Roger Stafford

Subject: error in code
From: Konst
Date: 13 Oct, 2012 14:56:07
Message: 4 of 5

"Roger Stafford" wrote in message <k59u2b$e4c$1@newscl01ah.mathworks.com>...
> I don't see any error at t = 11, but at t = 12 you will have kapa = 42 and, because 12*42 = 504, at the last trip through the for-loop you will call on y(493:504), which is out of range. You had better change from round(kapa) to floor(kapa).

But isn't the while supposed to control that I am below 500? I changed to floor but it still gives me the same mistake at t=11. Then I did this:

for j=1:kapa
    if (i+n)<N
        alpha(1:n,j)=y(i+1:i+n,1);
        alphadetrend(1:n,j)=detrend(alpha(1:n,j));
        i=i+n;
    end
end
i=1;
for j=1:kapa
    for k=1:n
        if i<=N
            Y(i,1)=alphadetrend(k,j);
            i=i+1;
        end
    end
end

which I think does exactly the same but I was aiming to have better control over the indexes. Still the same error though.

Subject: error in code
From: Roger Stafford
Date: 13 Oct, 2012 22:00:08
Message: 5 of 5

"Konst " <konstance1@hotmail.com> wrote in message <k5bve6$fte$1@newscl01ah.mathworks.com>...
> I changed to floor but it still gives me the same mistake at t=11. .....
- - - - - - - -
My advice to you would be to insert an appropriate 'fprintf' line immediately prior to your error-producing line to see what is going on at the time of the error, perhaps something like this:

...
for j=1:kapa
    fprintf('i = %d, n = %d, size(y,1) = %d\n',i,n,size(y,1)) % <--
    alpha(1:n,j)=y(i+1:i+n,1);
...

You will get a lot of print-out lines but it will be the last one before the error that you will be interested in. It may lead you to the source of the trouble.

Roger Stafford
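The round-versus-floor issue Roger Stafford identifies above is easy to reproduce outside MATLAB. Here is a small Python analogue (my own sketch, not code from the thread) of chopping a length-N vector into complete blocks of length n, one block per column:

```python
import numpy as np

def block_columns(y, n):
    """Split a length-N vector into kapa complete blocks of length n,
    one block per column, discarding the leftover tail."""
    N = len(y)
    kapa = N // n                      # floor: kapa * n never exceeds N
    return y[:kapa * n].reshape(kapa, n).T

y = np.arange(500.0)

# t = 12 is the failing case: round(500/12) = 42, but 42 * 12 = 504 > 500,
# so reading 42 blocks of 12 would run off the end of y; floor gives 41 blocks.
assert round(500 / 12) == 42 and 42 * 12 > 500
assert block_columns(y, 12).shape == (12, 41)
```

The point is the same as in the thread: rounding N/n up even slightly makes the last block index past the end of the data, while integer (floor) division is always safe.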
Size, Yoneda, and Limits of Algebras

Posted by Mike Shulman

Let $T$ be a monad on a category $C$, and $C^T$ its category of Eilenberg-Moore algebras with forgetful functor $u\colon C^T\to C$. Consider the following two (true) statements.

1. For all $C$ and $T$, if $d\colon I\to C^T$ is any diagram such that $u\circ d$ has a limit in $C$, then $d$ has a limit in $C^T$ which is preserved by $u$.

2. For all $C$ and $T$, if $C$ is complete, then so is $C^T$, and $u$ is continuous.

Obviously the first implies the second. Interestingly, the second also implies the first, by a clever Yoneda-lemma argument. However, the argument applies a priori only to small categories, so it provides one convenient testing-ground to compare different ways of dealing with size.

First let’s prove that if (2) is true for all $C$, then (1) is true when $C$ and $I$ are small. Let $T$ be a monad on a small category $C$, let $\hat{C} = Set^{C^{op}}$ be the category of presheaves on $C$, and let $y\colon C\to \hat{C}$ be the Yoneda embedding. Since $\hat{C}$ is the free cocompletion of $C$, the monad $T$ extends uniquely to a cocontinuous monad $\hat{T}$ on $\hat{C}$. Saying that $\hat{T}$ “extends” $T$ means that $\hat{T}\circ y \cong y\circ T$, coherently with the monad structure. It follows that $T$-algebras can be identified with those $\hat{T}$-algebras whose underlying presheaves are representable, and we have the following diagram: $\array{C^T & \overset{y^T}{\to} & \hat{C}^{\hat{T}}\\ ^u\downarrow && \downarrow^{\hat{u}}\\ C& \underset{y}{\to} & \hat{C}}$ in which the horizontal functors are both fully faithful.

Now let $d\colon I\to C^T$ be a diagram, with $I$ small, such that $u d$ has a limit $\ell$ in $C$. Since $y$ preserves limits, $y(\ell)$ is also a limit of $y u d \cong \hat{u} y^T d$ in $\hat{C}$. But $\hat{C}$ is complete, so by assumption, $\hat{C}^{\hat{T}}$ is complete and $\hat{u}$ is continuous.
Therefore, since $I$ is small, $y^T d$ has a limit $\hat{\ell}$ in $\hat{C}^{\hat{T}}$ which is preserved by $\hat{u}$. This means that $\hat{u}(\hat{\ell})$ is a limit of $\hat{u} y^T d$, and hence isomorphic to $y(\ell)$. But then $\hat{\ell}$ is a $\hat{T}$-algebra whose underlying presheaf is representable (it is represented by $\ell$), and thus it “is” a $T$-algebra $k$ with $u(k)\cong \ell$. Finally, since fully faithful functors reflect limits, $k$ is a limit of $d$ which is preserved by $u$. I actually find this argument kind of striking. It implies that once we’ve proven that (say) products and equalizers lift to the category of algebras for any monad, it follows automatically that all limits lift similarly—even when the base category doesn’t have products and equalizers out of which those other limits can be constructed! Of course, philosophers can debate what it means for two true statements to be equivalent, but in practice this sort of argument can simplify your life. Now what about when $C$ is large? Let me first point out some things that don’t work. In general, without any universes, the category $Set^{C^{op}}$ will not exist. If $C$ is locally small, then the category of small presheaves on $C$ does always exist and is the free cocompletion of $C$, but it is not complete without further assumptions on $C$ (such as that $C$ is small, or itself complete). So it seems hard to get away without some universe-like hypotheses. Of course, if we assume Grothendieck’s axiom of universes, then we can reason as follows: pick a universe $U$ such that $C$ and $I$ are $U$-small, and let $Set$ denote the $U$-large category of $U$ -small sets; then the above argument goes through. 
Note that this requires the statement of (2) to be changed to “for any universe $U$ and any $C$ and $T$, if $C$ is $U$-small-complete, then so is $C ^T$, and $u$ is $U$-small-continuous.” However, presumably whatever argument we originally used to prove it would still prove this more general statement. On the other hand, in strong Feferman set theory ZMC/S, we have a specified universe $U$ which satisfies the reflection schema, and “small,” “large,” “complete,” and “continuous” have a fixed meaning referring to this $U$. We can now argue as follows: the above proof shows that (1) is true for any small categories $C$ and $I$. But this is just the “relativization” to $U$ of the statement (1) itself; thus by the reflection schema, (1) itself is true. Personally, I find this version cleaner, although (taking into account how one proves the consistency of Feferman set theory) they contain more or less the same content. Posted at December 2, 2009 5:09 PM UTC Re: Size, Yoneda, and Limits of Algebras That’s a very nice argument! As far as I can see, though, it isn’t really using many of the specifics of $\hat{C}$? In particular, we only seemed to need the “free co-completion” for the sake of extending $T$ to $\hat{T}$ — so we were just using something like: there’s a 2-functor (^): $\mathrm{Cat} \rightarrow \mathrm{CAT}$, such that 1. $\hat{C}$ is always complete, and 2. there’s a natural map $y:C \rightarrow \hat{C}$, full and faithful and preserving all limits that exist in $C$. So… is there some other such 2-functor that works for possibly-large categories as well? If so, then the argument would extend to large categories without needing to worry about issues of size. 
If I remember right, the “free completion” construction works fine for larger categories, but there, the unit $C \rightarrow F_{\mathrm{cplt}}(C)$ doesn’t preserve limits that already existed, so isn’t any good to us…

Posted by: Peter LeFanu Lumsdaine on December 2, 2009 9:53 PM

Re: Size, Yoneda, and Limits of Algebras

is there some other such 2-functor that works for possibly-large categories as well?

Good question; I don’t know. The “free completion” does exist for arbitrary locally small categories using small presheaves, but as you say, that’s not helpful. Even if you find such a 2-functor, though, I don’t think it will quite prove the whole claim: only the case when the domain category $I$ is also small. The arguments using universes and with Feferman set theory apply even when $I$ is large. Admittedly, limits over large domain categories are not usually of great importance in practice, but the philosophical point is that the statement “limits lift to categories of algebras” is not size-dependent: all limits lift.

Posted by: Mike Shulman on December 2, 2009 10:51 PM

Re: Size, Yoneda, and Limits of Algebras

I should probably say that I didn’t invent this argument (at least, the small part of it). I don’t know who noticed it first; I learned it from Steve Lack.

Posted by: Mike Shulman on December 2, 2009 10:43 PM
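As a concrete toy instance of statement (1): for the free-monoid monad on $Set$, algebras are monoids, and the binary product of two monoids is computed on underlying sets, with the forgetful functor preserving the limit. A small check of this (my own illustration, not from the post or its comments):

```python
from itertools import product

# Two tiny algebras for the free-monoid monad on Set, i.e. monoids:
Z2 = (list(range(2)), lambda a, b: (a + b) % 2, 0)   # Z/2 under addition
Z3 = (list(range(3)), lambda a, b: (a + b) % 3, 0)   # Z/3 under addition

def product_monoid(M, N):
    """Binary product of monoids: the carrier is the cartesian product of the
    underlying sets, with componentwise operation and unit -- the limit is
    computed on underlying sets, so the forgetful functor preserves it."""
    (S, opS, eS), (T, opT, eT) = M, N
    carrier = list(product(S, T))
    op = lambda p, q: (opS(p[0], q[0]), opT(p[1], q[1]))
    return carrier, op, (eS, eT)

carrier, op, e = product_monoid(Z2, Z3)
assert len(carrier) == 2 * 3           # |Z2 x Z3| = |Z2| * |Z3|
for p, q, r in product(carrier, repeat=3):
    assert op(op(p, q), r) == op(p, op(q, r))   # associativity
for p in carrier:
    assert op(e, p) == p == op(p, e)            # unit laws
```

This is of course only the easiest limit over the easiest monad; the content of the post is that the same lifting works for every limit and every monad.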
E.N.D. for Statistics

Exploring Numerical Data (E.N.D.) is a software resource for instructors and students in a university/college level social sciences introductory statistics course; it consists of text material, review questions, and calculation utilities.

Text Material. The text sections are focused expositions of the substantive content of the topics. Instructors, and printed matter such as a standard text, may convey a fuller treatment of some topics; E.N.D. is intended to supplement these presentations without getting in their way. That said, most of my students use E.N.D. as their main resource; instructors who have developed their own package of hand-out materials will find E.N.D. an excellent and economical accessory. The presentation of topics in E.N.D. is couched in orthodox language and notation and is therefore compatible with virtually any text and fits well into typical introductory statistics courses. My personal bias is to emphasize the conceptual aspects of statistics rather than devote excessive time to mechanical computation.

Review Questions. Review exercises enable students to test their mastery of concepts and computation routines. The review questions are styled in a multiple-choice (seven alternatives) format and require, in turn, understanding of relevant concepts, computational skills from partially developed data, and understanding of the rules for decision making in science. The review exercises are accompanied by computational solutions and/or text feedback for incorrect alternatives. The current version of E.N.D. (2.2) contains 365 review exercises which constitute a resource for thorough preparation for tests and examinations. My own students regard the review questions as most valuable.

Calculation Utilities. Many of the topic units contain a calculation utility.
The utilities can serve as convenient tools for homework assignments, but their principal value as instructional devices is to act as vehicles by which instructors may demonstrate, and students may explore, various descriptive and analytic procedures. Functions are provided to enable the creation and storage of permanent data files.

TABLE OF CONTENTS OF E.N.D. 3.0

Introducing E.N.D. - 13 pages/screens
  Using E.N.D.
  Screen Settings
  Hard Drive Installation
  Example Review Question
  Handling Data
  Copyright & Limitations
  Bug List & Feedback

Unit 1: Learning the Language - 57 pages/screens
  Terms and Definitions
  Review Questions I
  Types of Measurement
  Review Questions II
  Summation Notation
  Review Questions III
  Key Terms Summary

Unit 2: Organizing Data - 43 pages/screens
  Frequency Distributions
  Exact Limits
  Stem & Leaf Displays
  Review Questions
  Key Terms Summary
  Frequency Distribution Calculation Utility
  Stem & Leaf Calculation Utility
  Histogram / Frequency Polygon Calculation Utility

Unit 3: Describing Data - 62 pages/screens
  Review Questions I
  Review Questions II
  Key Terms Summary
  Descriptive Statistics Calculation Utility

Unit 4: Normal Distribution - 43 pages/screens
  Normal Distribution
  Review Questions
  Key Terms Summary
  Normal Distribution Calculation Utility

Unit 5: Probability - 41 pages/screens
  This or That
  This and That
  Review Questions
  Key Terms Summary

Unit 6: Statistical Distributions - 43 pages/screens
  Random Samples
  Sampling Distributions
  t Distributions
  Review Questions
  Key Terms Summary

Unit 7: Confidence Intervals - 30 pages/screens
  Confidence Intervals
  Review Questions
  Key Terms Summary
  Confidence Interval Calculation Utility

Unit 8: Hypothesis Testing - 45 pages/screens
  Hypotheses in Science
  Type I and Type II Error
  Review Questions
  Key Terms Summary

Unit 9: One Sample t-Test - 39 pages/screens
  One Sample t-Test
  Underlying Assumptions
  Confidence Interval Approach
  Review Questions
  Key Terms Summary
  One Sample t-Test Calculation Utility

Unit 10: Two Sample t-Tests - 67 pages/screens
  Independent Samples t-Test
  Related Samples t-Test
  Underlying Assumptions
  Confidence Interval for the Difference Between Means
  Confidence Interval for the Mean Difference
  Review Questions
  Key Terms Summary
  Independent Samples t-Test Calculation Utility
  Related Samples t-Test Calculation Utility

Unit 11: Analysis of Variance - 39 pages/screens
  F Distributions
  One Factor ANOVA
  Underlying Assumptions
  Après ANOVA (multiple comparisons and strength of effect)
  Review Questions
  Key Terms Summary
  One Factor ANOVA Calculation Utility

Unit 12: Correlation - 54 pages/screens
  Describing Correlation
  Calculation / Interpretation
  Variance Interpretation
  Concluding Remarks
  Review Questions
  Key Terms Summary
  Correlation Calculation and Plotting Utility

Unit 13: Linear Regression - 24 pages/screens
  Predicting Y from X
  Confidence Intervals about Predicted Y
  Review Questions
  Key Terms Summary
  Regression Calculation and Plotting Utility

Unit 14: Chi-Square Tests - 51 pages/screens
  Goodness of Fit Test
  Review Questions I
  Test of Independence
  Review Questions II
  Key Terms Summary
  Goodness of Fit Calculation Utility
  Test of Independence Calculation Utility

Data Files Management - 4 pages/screens
  Data Files Directory
  Single Variable Files
  Paired Variables Files

Endex - 11 pages/screens

One of the new aspects of version 3.0 is called "Endex". It is a master glossary of 200+ terms/concepts and a navigating index providing mouse-click access to relevant discussions in the E.N.D. units. Students struggling to learn the terminology and concepts will find Endex most useful.

Exploring Numerical Data is written in ToolBook II (Asymetrix Corp.) and is available on a CD; the CD is self-contained and runs on computers using the Windows 95 and later operating systems (sorry, there is no Macintosh version).

This is a partial screen capture of the opening page of Unit 9: One-Sample t-Test.

This is a partial screen capture of the calculation utility in Unit 9.
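The Unit 9 calculation utility computes the familiar one-sample t statistic, t = (xbar - mu0) / (s / sqrt(n)), where s is the sample standard deviation. A minimal sketch of that computation (my own Python illustration with made-up data, not E.N.D. code):

```python
import math
import statistics

def one_sample_t(data, mu0):
    """One-sample t statistic: (sample mean - mu0) / (s / sqrt(n)),
    where s is the sample standard deviation (n - 1 denominator)."""
    n = len(data)
    xbar = statistics.mean(data)
    s = statistics.stdev(data)          # n - 1 in the denominator
    return (xbar - mu0) / (s / math.sqrt(n))

scores = [5.1, 4.9, 5.6, 4.8, 5.3]      # made-up sample, n = 5
t = one_sample_t(scores, 5.0)
assert 0.97 < t < 0.98                  # mean 5.14, s about 0.321
```

The resulting t would then be compared against a t distribution with n - 1 degrees of freedom, which is what Unit 6's material on t distributions covers.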
This is a partial screen capture of a review exercise in Unit 9.

In my thirty-plus years as a statistics instructor I devoted much time and effort to making this subject comprehensible and non-threatening to students. These efforts included a text (now out of print); E.N.D. is another attempt at this goal. A few words and images cannot adequately describe Exploring Numerical Data; there is nothing like it currently available.

Earlier versions of Exploring Numerical Data were distributed by ITP Nelson (Canada) Publishers and were available bundled with several texts or stand alone. If you would like to obtain a copy, please contact your ITP Nelson sales representative, or me at horvatht@sympatico.ca

T. Horvath, Ph.D.
Windsor, ON Canada

REVISED 2003 FEB 28
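As a rough illustration of the kind of computation the Unit 9 calculation utility performs (this sketch is mine, not E.N.D.'s code, and the data are made up), a one-sample t statistic needs only a few lines of standard Python:

```python
import math
import statistics

def one_sample_t(data, mu0):
    """t = (sample mean - hypothesized mean) / (s / sqrt(n))."""
    n = len(data)
    mean = statistics.mean(data)
    s = statistics.stdev(data)  # sample standard deviation (divides by n - 1)
    return (mean - mu0) / (s / math.sqrt(n))

# Hypothetical measurements tested against a hypothesized mean of 4.8
print(one_sample_t([5.1, 4.9, 5.0, 5.2, 4.8], 4.8))  # about 2.828
```

The resulting t value would then be compared against a t distribution with n - 1 degrees of freedom, as covered in Unit 6.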
[Numpy-svn] r4037 - trunk/numpy/core

numpy-svn@scip... numpy-svn@scip...
Sun Sep 16 00:42:03 CDT 2007

Author: charris
Date: 2007-09-16 00:42:01 -0500 (Sun, 16 Sep 2007)
New Revision: 4037

More documentation.

Modified: trunk/numpy/core/fromnumeric.py

--- trunk/numpy/core/fromnumeric.py	2007-09-16 05:40:34 UTC (rev 4036)
+++ trunk/numpy/core/fromnumeric.py	2007-09-16 05:42:01 UTC (rev 4037)

@@ -52,18 +52,14 @@
     a : array
         The source array
     indices : int array
         The indices of the values to extract.
     axis : {None, int}, optional
         The axis over which to select values. None signifies that the operation should be performed over the flattened array.
     out : {None, array}, optional
         If provided, the result will be inserted into this array. It should be of the appropriate shape and dtype.
     mode : {'raise', 'wrap', 'clip'}, optional
         Specifies how out-of-bounds indices will behave.
         'raise' -- raise an error

@@ -95,11 +91,9 @@
     a : array
         Array to be reshaped.
     newshape : shape tuple or int
         The new shape should be compatible with the original shape. If an integer, then the result will be a 1D array of that length.
     order : {'C', 'FORTRAN'}, optional
         Determines whether the array data should be viewed as in C (row-major) order or FORTRAN (column-major) order.

@@ -135,15 +129,12 @@
     a : int array
         This array must contain integers in [0, n-1], where n is the number of choices.
     choices : sequence of arrays
         Each of the choice arrays should have the same shape as the index
     out : array, optional
         If provided, the result will be inserted into this array. It should be of the appropriate shape and dtype
     mode : {'raise', 'wrap', 'clip'}, optional
         Specifies how out-of-bounds indices will behave.
         'raise' : raise an error

@@ -182,14 +173,13 @@
-    a : array
-    repeats : int or int array
+    a : {array_like}
+        Blah.
+    repeats : {integer, integer_array}
         The number of repetitions for each element. If a plain integer, then it is applied to all elements. If an array, it needs to be of the same length as the chosen axis.
-    axis : {None, int}, optional
+    axis : {None, integer}, optional
         The axis along which to repeat values. If None, then this function will operated on the flattened array `a` and return a similarly flat

@@ -271,14 +261,11 @@
     a : array
         Array to be sorted.
     axis : {None, int} optional
         Axis along which to sort. None indicates that the flattened array should be used.
     kind : {'quicksort', 'mergesort', 'heapsort'}, optional
         Sorting algorithm to use.
     order : {None, list type}, optional
         When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. Not all fields need be

@@ -340,14 +327,11 @@
     a : array
         Array to be sorted.
     axis : {None, int} optional
         Axis along which to sort. None indicates that the flattened array should be used.
     kind : {'quicksort', 'mergesort', 'heapsort'}, optional
         Sorting algorithm to use.
     order : {None, list type}, optional
         When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. Not all fields need be

@@ -355,7 +339,7 @@
-    indices : integer array
+    index_array : {integer_array}
         Array of indices that sort 'a' along the specified axis.

     *See Also*:

@@ -400,16 +384,15 @@
-    a : array
+    a : {array_like}
         Array to look in.
     axis : {None, integer}
         If None, the index is into the flattened array, otherwise along the specified axis
-        Array of indices
+    index_array : {integer_array}

@@ -434,16 +417,15 @@
-    a : array
+    a : {array_like}
         Array to look in.
     axis : {None, integer}
         If None, the index is into the flattened array, otherwise along the specified axis
-        Array of indices
+    index_array : {integer_array}

@@ -477,10 +459,8 @@
     a : 1-d array
         Array must be sorted in ascending order.
     v : array or list type
         Array of keys to be searched for in a.
     side : {'left', 'right'}, optional
         If 'left', the index of the first location where the key could be inserted is found, if 'right', the index of the last such element is

@@ -528,21 +508,20 @@
-    a : array_like
+    a : {array_like}
         Array to be reshaped.
-    new_shape : tuple
-        Shape of the new array.
+    new_shape : {tuple}
+        Shape of reshaped array.
-    new_array : array
+    reshaped_array : {array}
         The new array is formed from the data in the old array, repeated if necessary to fill out the required number of elements, with the new

     if isinstance(new_shape, (int, nt.integer)):
         new_shape = (new_shape,)
     a = ravel(a)

@@ -601,17 +580,14 @@
-    a : array_like
+    a : {array_like}
         Array from whis the diagonals are taken.
     offset : {0, integer}, optional
         Offset of the diagonal from the main diagonal. Can be both positive and negative. Defaults to main diagonal.
     axis1 : {0, integer}, optional
         Axis to be used as the first axis of the 2-d subarrays from which the diagonals should be taken. Defaults to first axis.
     axis2 : {1, integer}, optional
         Axis to be used as the second axis of the 2-d subarrays from which the diagonals should be taken. Defaults to second axis.

@@ -669,28 +645,23 @@
-    a : array_like
+    a : {array_like}
         Array from whis the diagonals are taken.
     offset : {0, integer}, optional
         Offset of the diagonal from the main diagonal. Can be both positive and negative. Defaults to main diagonal.
     axis1 : {0, integer}, optional
         Axis to be used as the first axis of the 2-d subarrays from which the diagonals should be taken. Defaults to first axis.
     axis2 : {1, integer}, optional
         Axis to be used as the second axis of the 2-d subarrays from which the diagonals should be taken. Defaults to second axis.
     dtype : {None, dtype}, optional
         Determines the type of the returned array and of the accumulator where the elements are summed. If dtype has the value None and a is of integer type of precision less than the default integer precision, then the default integer precision is used. Otherwise, the precision is the same as that of a.
     out : {None, array}, optional
         Array into which the sum can be placed. It's type is preserved and it must be of the right shape to hold the output.

@@ -722,7 +693,7 @@
-    a : array_like
+    a : {array_like}
     order : {'C','F'}, optional
         If order is 'C' the elements are taken in row major order. If order

@@ -730,7 +701,7 @@
-        1d array of the elements of a.
+    1d_array : {array}

     *See Also*:

@@ -756,11 +727,11 @@
-    a : array_like
+    a : {array_like}
-        Tuple of arrays of indices.
+    tuple_of_arrays : {tuple}

@@ -786,11 +757,13 @@
-    a : array type
+    a : {array_like}
+        Array whose shape is desired. If a is not an array, a conversion is
+        attempted.
-    tuple of integers :
+    tuple_of_integers :
         The elements of the tuple are the length of the corresponding array

@@ -841,29 +814,28 @@
-    a : array_type
+    a : {array_type}
+        Array containing elements whose sum is desired. If a is not an array, a
+        conversion is attempted.
     axis : {None, integer}
         Axis over which the sum is taken. If None is used, then the sum is over all the array elements.
     dtype : {None, dtype}, optional
         Determines the type of the returned array and of the accumulator
-        where the elements are summed. If dtype has the value None and a is
-        of integer type of precision less than the default platform integer
-        precision, then the default integer precision is used. Otherwise,
-        the precision is the same as that of a.
+        where the elements are summed. If dtype has the value None and the
+        type of a is an integer type of precision less than the default
+        platform integer, then the default platform integer precision is
+        used. Otherwise, the dtype is the same as that of a.
     out : {None, array}, optional
         Array into which the sum can be placed. It's type is preserved and it must be of the right shape to hold the output.
-    Sum along specified axis : {array, scalar}, type as above.
-        If the sum is along an axis, then an array is returned whose shape
-        is the same as a with the specified axis removed. For 1d arrays or
-        dtype=None, the result is a 0d array.
+    sum_along_axis : {array, scalar}, see dtype parameter above.
+        Returns an array whose shape is the same as a with the specified
+        axis removed. Returns a 0d array when a is 1d or dtype=None.
+        Returns a reference to the specified output array if specified.

     *See Also*:

@@ -899,29 +871,29 @@
-    a : array_type
+    a : {array_like}
+        Array containing elements whose product is desired. If a is not an array, a
+        conversion is attempted.
     axis : {None, integer}
         Axis over which the product is taken. If None is used, then the product is over all the array elements.
     dtype : {None, dtype}, optional
         Determines the type of the returned array and of the accumulator
-        where the elements are multiplied. If dtype has the value None and a
-        is of integer type of precision less than the default platform
-        integer precision, then the default integer precision is used.
-        Otherwise, the precision is the same as that of a.
+        where the elements are multiplied. If dtype has the value None and
+        the type of a is an integer type of precision less than the default
+        platform integer, then the default platform integer precision is
+        used. Otherwise, the dtype is the same as that of a.
     out : {None, array}, optional
-        Array into which the product can be placed. It's type is preserved and
-        it must be of the right shape to hold the output.
+        Alternative output array in which to place the result. It must have
+        the same shape as the expected output but the type will be cast if
+        necessary.
-    product along specified axis : {array, scalar}, type as above.
-        If the product is along an axis, then an array is returned whose shape
-        is the same as a with the specified axis removed. For 1d arrays or
-        dtype=None, the result is a 0d array.
+    product_along_axis : {array, scalar}, see dtype parameter above.
+        Returns an array whose shape is the same as a with the specified
+        axis removed. Returns a 0d array when a is 1d or dtype=None.
+        Returns a reference to the specified output array if specified.

     *See Also*:

@@ -1123,13 +1095,14 @@
-    a : array_like
-        Array_like object whose dimensions are counted.
+    a : {array_like}
+        Array whose number of dimensions are desired. If a is not an array, a
+        conversion is attempted.
-    number of dimensions : int
-        Just so.
+    number_of_dimensions : {integer}
+        Returns the number of dimensions.

     *See Also*:

@@ -1166,13 +1139,14 @@
-    a : array_like
-        Array_like object whose dimensions are counted.
+    a : {array_like}
+        Array whose number of dimensions is desired. If a is not an array, a
+        conversion is attempted.
-    number of dimensions : int
-        Just so.
+    number_of_dimensions : {integer}
+        Returns the number of dimensions.

     *See Also*:

@@ -1205,15 +1179,17 @@
-    a : array_like
-        If a is not an array, a conversion is attempted.
-    axis : {None, int}, optional
-        Axis along which elements are counted. None means all elements.
+    a : {array_like}
+        Array whose axis size is desired. If a is not an array, a conversion
+        is attempted.
+    axis : {None, integer}, optional
+        Axis along which the elements are counted. None means all elements
+        in the array.
-    element count : int
-        Count of elements.
+    element_count : {integer}
+        Count of elements along specified axis.

     *See Also*:

@@ -1249,27 +1225,30 @@
 def around(a, decimals=0, out=None):
     """Round a to the given number of decimals.
-    The real and imaginary parts of complex numbers are rounded separately.
-    Nothing is done if the input is an integer array with decimals >= 0.
+    The real and imaginary parts of complex numbers are rounded separately. The
+    result of rounding a float is a float so the type must be cast if integers
+    are desired. Nothing is done if the input is an integer array and the
+    decimals parameter has a value >= 0.
-    a : array_like
+    a : {array_like}
+        Array containing numbers whose rounded values are desired. If a is
+        not an array, a conversion is attempted.
     decimals : {0, int}, optional
         Number of decimal places to round to. When decimals is negative it specifies the number of positions to the left of the decimal point.
     out : {None, array}, optional
-        Existing array to use for output (by default, make a copy of a).
+        Alternative output array in which to place the result. It must have
+        the same shape as the expected output but the type will be cast if
+        necessary. Numpy rounds floats to floats by default.
-    *Returns*:
+    *Returns*:
-    out : array
-        May be used to specify a different array to hold the result rather
-        than the default a. If the type of the array specified by 'out'
-        differs from that of a, the result is cast to the new type,
-        otherwise the original type is kept. Floats round to floats by
-        default.
+    rounded_array : {array}
+        If out=None, returns a new array of the same type as a containing
+        the rounded values, otherwise a reference to the output array is
+        returned.

@@ -1282,9 +1261,17 @@
     Numpy rounds to even. Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due to the inexact representation of decimal fractions in IEEE floating point and the
-    errors introduced in scaling the numbers when decimals is something
-    other than 0.
+    errors introduced when scaling by powers of ten.
+    *Examples*
+    >>> around([.5, 1.5, 2.5, 3.5, 4.5])
+    array([ 0., 2., 2., 4., 4.])
+    >>> around([1,2,3,11], decimals=1)
+    array([ 1, 2, 3, 11])
+    >>> around([1,2,3,11], decimals=-1)
+    array([ 0, 0, 0, 10])

     round = a.round

@@ -1296,27 +1283,30 @@
 def round_(a, decimals=0, out=None):
     """Round a to the given number of decimals.
-    The real and imaginary parts of complex numbers are rounded separately.
-    Nothing is done if the input is an integer array with decimals >= 0.
+    The real and imaginary parts of complex numbers are rounded separately. The
+    result of rounding a float is a float so the type must be cast if integers
+    are desired. Nothing is done if the input is an integer array and the
+    decimals parameter has a value >= 0.
-    a : array_like
-    decimals : {0, int}, optional
+    a : {array_like}
+        Array containing numbers whose rounded values are desired. If a is
+        not an array, a conversion is attempted.
+    decimals : {0, integer}, optional
         Number of decimal places to round to. When decimals is negative it specifies the number of positions to the left of the decimal point.
     out : {None, array}, optional
-        Existing array to use for output (by default, make a copy of a).
+        Alternative output array in which to place the result. It must have
+        the same shape as the expected output but the type will be cast if
+        necessary.
-    out : array
-        May be used to specify a different array to hold the result rather
-        than the default a. If the type of the array specified by 'out'
-        differs from that of a, the result is cast to the new type,
-        otherwise the original type is kept. Floats round to floats by
-        default.
+    rounded_array : {array}
+        If out=None, returns a new array of the same type as a containing
+        the rounded values, otherwise a reference to the output array is
+        returned.

     *See Also*:

@@ -1329,9 +1319,17 @@
     Numpy rounds to even. Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due to the inexact representation of decimal fractions in IEEE floating point and the
-    errors introduced in scaling the numbers when decimals is something
-    other than 0.
+    errors introduced when scaling by powers of ten.
+    *Examples*
+    >>> round_([.5, 1.5, 2.5, 3.5, 4.5])
+    array([ 0., 2., 2., 4., 4.])
+    >>> round_([1,2,3,11], decimals=1)
+    array([ 1, 2, 3, 11])
+    >>> round_([1,2,3,11], decimals=-1)
+    array([ 0, 0, 0, 10])

     round = a.round

@@ -1345,27 +1343,30 @@
     Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified
-    axis.
+    axis. The dtype returned for integer type arrays is float
-    axis : integer
+    a : {array_like}
+        Array containing numbers whose mean is desired. If a is not an
+        array, a conversion is attempted.
+    axis : {None, integer}, optional
         Axis along which the means are computed. The default is to compute the standard deviation of the flattened array.
-    dtype : type
+    dtype : {None, dtype}, optional
         Type to use in computing the means. For arrays of integer type the default is float32, for arrays of float types it is the same as the array type.
-    out : ndarray
+    out : {None, array}, optional
         Alternative output array in which to place the result. It must have the same shape as the expected output but the type will be cast if
-    mean : array (see dtype parameter above)
-        A new array holding the result is returned unless out is specified,
-        in which case a reference to out is returned.
+    mean : {array, scalar}, see dtype parameter above
+        If out=None, returns a new array containing the mean values,
+        otherwise a reference to the output array is returned.

     *See Also*:

@@ -1378,6 +1379,16 @@
     The mean is the sum of the elements along the axis divided by the number of elements.
+    *Examples*
+    >>> a = array([[1,2],[3,4]])
+    >>> mean(a)
+    2.5
+    >>> mean(a,0)
+    array([ 2., 3.])
+    >>> mean(a,1)
+    array([ 1.5, 3.5])

     mean = a.mean

@@ -1395,23 +1406,26 @@
-    axis : integer
+    a : {array_like}
+        Array containing numbers whose standard deviation is desired. If a
+        is not an array, a conversion is attempted.
+    axis : {None, integer}, optional
         Axis along which the standard deviation is computed. The default is to compute the standard deviation of the flattened array.
-    dtype : type
+    dtype : {None, dtype}, optional
         Type to use in computing the standard deviation. For arrays of integer type the default is float32, for arrays of float types it is the same as the array type.
-    out : ndarray
+    out : {None, array}, optional
         Alternative output array in which to place the result. It must have the same shape as the expected output but the type will be cast if
-    standard_deviation : The return type varies, see above.
-        A new array holding the result is returned unless out is specified,
-        in which case a reference to out is returned.
+    standard_deviation : {array, scalar}, see dtype parameter above.
+        If out=None, returns a new array containing the standard deviation,
+        otherwise a reference to the output array is returned.

     *See Also*:

@@ -1419,13 +1433,23 @@
     `mean` : Average
-    *Notes*:
+    *Notes*
     The standard deviation is the square root of the average of the squared deviations from the mean, i.e. var = sqrt(mean((x - x.mean())**2)). The computed standard deviation is biased, i.e., the mean is computed by dividing by the number of elements, N, rather than by N-1.
+    *Examples*
+    >>> a = array([[1,2],[3,4]])
+    >>> std(a)
+    1.1180339887498949
+    >>> std(a,0)
+    array([ 1., 1.])
+    >>> std(a,1)
+    array([ 0.5, 0.5])

     std = a.std

@@ -1438,28 +1462,31 @@
     """Compute the variance along the specified axis.
     Returns the variance of the array elements, a measure of the spread of a
-    distribution. The variance is computed for the flattened array by default,
+    distribution. The variance is computed for the flattened array by default, otherwise over the specified axis.
-    axis : integer
+    a : {array_like}
+        Array containing numbers whose variance is desired. If a is not an
+        array, a conversion is attempted.
+    axis : {None, integer}, optional
        Axis along which the variance is computed. The default is to compute the variance of the flattened array.
-    dtype : type
+    dtype : {None, dtype}, optional
        Type to use in computing the variance. For arrays of integer type the default is float32, for arrays of float types it is the same as the array type.
-    out : ndarray
+    out : {None, array}, optional
        Alternative output array in which to place the result. It must have the same shape as the expected output but the type will be cast if
-    variance : array (see dtype parameter above)
-        A new array holding the result is returned unless out is specified,
-        in which case a reference to out is returned.
+    variance : {array, scalar}, see dtype parameter above
+        If out=None, returns a new array containing the variance, otherwise
+        a reference to the output array is returned.

     *See Also*:

@@ -1474,6 +1501,16 @@
     i.e., the mean is computed by dividing by the number of elements, N, rather than by N-1.
+    *Examples*
+    >>> a = array([[1,2],[3,4]])
+    >>> var(a)
+    1.25
+    >>> var(a,0)
+    array([ 1., 1.])
+    >>> var(a,1)
+    array([ 0.25, 0.25])

     var = a.var

More information about the Numpy-svn mailing list
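The doctest examples added in this revision can be sanity-checked without numpy (this cross-check is mine, not part of the commit): Python's `statistics` module implements the same population (biased, divide-by-N) definitions that the revised `mean`/`std`/`var` docstrings describe, and the built-in `round` uses the same round-half-to-even rule documented for `around`:

```python
import statistics

data = [1, 2, 3, 4]  # the flattened [[1, 2], [3, 4]] used in the new doctests

mean = statistics.mean(data)       # sum / N
var = statistics.pvariance(data)   # biased: divides by N, not N - 1
std = statistics.pstdev(data)      # square root of the biased variance
print(mean, var, std)  # 2.5 1.25 1.118033988749895

# "Numpy rounds to even": the built-in round behaves the same way.
print([round(x) for x in [0.5, 1.5, 2.5, 3.5, 4.5]])  # [0, 2, 2, 4, 4]
```

The printed values match the doctest outputs in the diff (`mean(a)` → 2.5, `var(a)` → 1.25, `std(a)` → 1.1180339887498949, `around([.5, 1.5, 2.5, 3.5, 4.5])` → 0, 2, 2, 4, 4).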
The more science tells us about the world, the stranger it looks. Ever since physics first penetrated the atom, early in this century, what it found there has stood as a radical and unanswered challenge to many of our most cherished conceptions of nature. It has literally been called into question since then whether or not there are always objective matters of fact about the whereabouts of subatomic particles, or about the locations of tables and chairs, or even about the very contents of our thoughts. A new kind of uncertainty has become a principle of science. This book is an original and provocative investigation of that challenge, as well as a novel attempt at writing about science in a style that is simultaneously elementary and deep. It is a lucid and self-contained introduction to the foundations of quantum mechanics, accessible to anyone with a high school mathematics education, and at the same time a rigorous discussion of the most important recent advances in our understanding of that subject, some of which are due to the author himself.

User reviews

Jacob J (Goodreads): An excellent explanation of exactly what makes quantum mechanics unfathomable, written for the non-physicist. This book takes an approach I have rarely seen of giving just enough math to give the reader ...

lucas (Goodreads): the best book to read to get a sense of the current state of play in the foundations of quantum mechanics.

Contents

Superposition 1
The Mathematical Formalism and the Standard Way of Thinking about It 17
Nonlocality 61
The Measurement Problem 73
The Collapse of the Wave Function 80
The Dynamics by Itself 112
Bohm's Theory 134
Self-Measurement 180
The Kochen-Healy-Dieks Interpretations 191
Bibliography 199
Index 203
Resolving a vector into a sum of two vectors

April 6th 2011, 05:25 AM  #1
Junior Member, joined Mar 2011

"Resolve the vector u = 5i + j + 6k into a sum of two vectors, one of which is parallel and the other perpendicular to v = 3i - 6j + 2k."

What I want to know with this question is how you know what the two vectors in the sum should be. I know that you find the vector projection of u in the direction of v, and the component of u orthogonal to v, and that these two add up to u. I know how to compute the answers from this, so I basically just want to know why these two in particular are the right pieces.

April 6th 2011, 06:03 AM  #2

Maybe the picture will help. I can only draw in 2D.

First notice the right triangle in red. We need to find the adjacent and opposite sides of it. Using trigonometry, the adjacent side of a right triangle satisfies

$\displaystyle \cos(\theta)=\frac{\text{adjacent}}{\text{hypotenuse}}=\frac{A}{H} \iff A=H\cos(\theta)$

The hypotenuse is the magnitude of the vector $\mathbf{u}$. This gives

$\displaystyle A=||\mathbf{u}||\cos(\theta)$

The object above is a scalar, but it has the magnitude of the adjacent side; now we just need to point it in the right direction without changing its length. So we need a vector pointing in the direction of $\mathbf{v}$ with length $1$. So let's normalize $\mathbf{v}$:

$\displaystyle \mathbf{n}=\frac{\mathbf{v}}{||\mathbf{v}||}$

Now let's combine these two to get

$\displaystyle A\mathbf{n}=(||\mathbf{u}||\cos(\theta))\left(\frac{\mathbf{v}}{||\mathbf{v}||}\right)=\frac{||\mathbf{u}||\,||\mathbf{v}||\cos(\theta)}{||\mathbf{v}||^2}\mathbf{v}=\frac{\mathbf{u}\cdot\mathbf{v}}{||\mathbf{v}||^2}\mathbf{v}$

The last step comes from the geometric definition of the dot product. To get the perpendicular side $\mathbf{p}$, we use the geometric fact that vectors in Euclidean space add via the parallelogram rule. This gives $A\mathbf{n}+\mathbf{p}=\mathbf{u}$; now we just solve for $\mathbf{p}$, giving $\mathbf{p}=\mathbf{u}-A\mathbf{n}$.
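The construction is easy to check numerically on the original u and v. A small Python sketch (mine, not from the thread) computes the parallel part as $\frac{\mathbf{u}\cdot\mathbf{v}}{||\mathbf{v}||^2}\mathbf{v}$ and the perpendicular part as $\mathbf{u}$ minus the parallel part:

```python
def resolve(u, v):
    """Split u into a part parallel to v and a part perpendicular to v."""
    dot_uv = sum(a * b for a, b in zip(u, v))         # u . v
    v_len_sq = sum(b * b for b in v)                  # ||v||^2
    scale = dot_uv / v_len_sq                         # (u . v) / ||v||^2
    parallel = tuple(scale * b for b in v)            # projection of u onto v
    perp = tuple(a - p for a, p in zip(u, parallel))  # u minus the projection
    return parallel, perp

u = (5, 1, 6)   # u = 5i + j + 6k
v = (3, -6, 2)  # v = 3i - 6j + 2k
parallel, perp = resolve(u, v)
# Here u.v = 21 and ||v||^2 = 49, so the scale factor is 3/7:
# parallel = (9/7, -18/7, 6/7) and perp = (26/7, 25/7, 36/7).
# Sanity check: perp should be (numerically) orthogonal to v.
assert abs(sum(a * b for a, b in zip(perp, v))) < 1e-9
```

The two returned vectors sum back to u by construction, and the final assertion verifies the perpendicularity claim.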
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{geometry}
\setcounter{MaxMatrixCols}{10}
%TCIDATA{OutputFilter=LATEX.DLL}
%TCIDATA{Version=5.00.0.2552}
%TCIDATA{}
%TCIDATA{Created=Tuesday, September 18, 2007 17:30:18}
%TCIDATA{LastRevised=Saturday, March 22, 2008 18:01:21}
%TCIDATA{}
%TCIDATA{}
%TCIDATA{CSTFile=40 LaTeX article.cst}
\newtheorem{theorem}{Theorem}
\newtheorem{acknowledgement}[theorem]{Acknowledgement}
\newtheorem{algorithm}[theorem]{Algorithm}
\newtheorem{axiom}[theorem]{Axiom}
\newtheorem{case}[theorem]{Case}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{conclusion}[theorem]{Conclusion}
\newtheorem{condition}[theorem]{Condition}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{criterion}[theorem]{Criterion}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{exercise}[theorem]{Exercise}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{solution}[theorem]{Solution}
\newtheorem{summary}[theorem]{Summary}
\newenvironment{proof}[1][Proof]{\noindent\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\geometry{left=1in,right=1in,top=1in,bottom=1in}
\input{tcilatex}

\begin{document}

\renewcommand{\baselinestretch}{1.5}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}

\begin{center}
\begin{tabular}{c}
ECONOMIC ANALYSIS GROUP \\
DISCUSSION PAPER \\
\end{tabular}
\end{center}

\addvspace{0.8in}

\begin{flushright}
\begin{tabular}{c}
Dynamic Contract Breach \\
by \\
Fan Zhang\footnotemark \\
\begin{tabular}{cc}
EAG 08-3 & March 2008 \\
&
\end{tabular}%
\end{tabular}
\end{flushright}

\addvspace{0.8in}

EAG Discussion Papers are the primary vehicle used to disseminate research from economists in the Economic Analysis Group (EAG) of the Antitrust Division.
These papers are intended to inform interested individuals and institutions of EAG's research program and to stimulate comment and criticism on economic issues related to antitrust policy and regulation. The analysis and conclusions expressed herein are solely those of the authors and do not represent the views of the United States Department of Justice.

Information on the EAG research program and discussion paper series may be obtained from Russell Pittman, Director of Economic Research, Economic Analysis Group, Antitrust Division, U.S. Department of Justice, BICN 10-000, Washington, DC 20530, or by e-mail at russell.pittman@usdoj.gov. Comments on specific papers may be addressed directly to the authors at the same mailing address or at their e-mail address.

Recent EAG Discussion Paper titles are listed at the end of this paper. To obtain a complete list of titles or to request single copies of individual papers, please write to Janet Ficco at the above mailing address or at janet.ficco@usdoj.gov. In addition, recent papers are now available on the Department of Justice website at http://www.usdoj.gov/atr/public/eag/discussion\_papers.htm. Beginning with papers issued in 1999, copies of individual papers are also available from the Social Science Research Network at www.ssrn.com.

\footnotetext{%
I am grateful to Michael Whinston, Kathryn Spier, and William Rogerson for their guidance and insightful discussions. I also thank Dan Liu, Viswanath Pingali, and seminar participants at MLEA 2007 and the U.S. Department of Justice for helpful comments. Financial support from the Center for the Study of Industrial Organization is gratefully acknowledged. The views expressed herein are my own and do not purport to represent the views of the U.S. Department of Justice. All errors are mine.
Comments are welcome at Fan.Zhang@usdoj.gov.}

\renewcommand{\thefootnote}{\arabic{footnote}}

\thispagestyle{empty}\newpage

\abstract{This paper studies the design of optimal, privately stipulated damages when breach of contract is possible at more than one point in time. It offers an intuitive explanation for why cancellation fees for some services (e.g., hotel reservations) increase as the time for performance approaches. If the seller makes investments over time to improve her value from trade, she will protect the value of her investments by demanding higher compensation when the buyer breaches their contract at a time closer to when contract performance is due. Furthermore, it is shown that if the seller may be able to find an alternate buyer when breach occurs early but not when breach occurs late, the amount by which the damages for late breach exceed the damages for early breach is increasing in the probability of finding an alternate buyer. (This result may explain why some hotels impose larger penalties for last-minute cancellations during the high season than during the low season.) When the probability of finding an alternate buyer is endogenized, the seller's private incentive to mitigate breach damages is shown to be socially insufficient whenever she does not have complete bargaining power with the alternate buyer. Finally, if renegotiation is possible after the arrival of each perfectly competitive entrant, the efficient breach and investment decisions are shown to be implementable with the same efficient expectation damages that implement the efficient outcomes absent renegotiation.}

\thispagestyle{empty}\newpage

\thispagestyle{empty}
\setcounter{page}{1}

\section{Introduction}

Contracts for the provision of services frequently include cancellation fees that penalize the party who backs out before the contract expires or before the date of performance of the contract.
\ For example, vacation resorts often set two separate fees for cancellation of lodging reservations:\ an early cancellation fee if the reservation is cancelled with sufficient advance notice, and a late cancellation fee, which is usually larger, if the reservation is cancelled \textquotedblleft at the last minute.\textquotedblright\ \ Furthermore, the difference between the fees for late cancellation and early cancellation is often larger during the high season, when demand is higher. \ What causes such variations in breach damages with respect to when a contract is signed and when it is breached? \ This paper proposes a possible explanation by allowing for contract breach and investment at multiple points in time.

Suppose that when the contract is signed, the buyer is uncertain about the value of his outside option at various future points in time and may therefore breach the contract before his performance (payment) is due. \ If the seller has multiple opportunities over time to make non-contractible, cost-reducing investments that improve her value from trade, she will want to protect the value of those investments by demanding a higher compensation for contract breach that occurs later, or closer in time to when contract performance is due. \ Therefore, the buyer's decision of whether to breach early or late involves a trade-off between the option value of not breaching early (and waiting for a potentially cheaper supplier to arrive later) and the higher penalty associated with breaching late.

The law and economics literature on contract breach began by considering the efficiency of standard court-imposed damage measures in a setting where the buyer faces an alternate source of supply that is competitively priced. \ In particular, Shavell (1980) and Rogerson (1984) considered, respectively, the situations where the incumbent seller and buyer cannot and can renegotiate their initial contract.
\ The common finding in both cases is that standard court-imposed damages generally induce socially excessive investment.

The efficiency of privately stipulated, or liquidated, damages for breach of contract has also been previously addressed, notably by Aghion and Bolton (1987) (assuming no investment or renegotiation), Chung (1992) (allowing for investments but not renegotiation), and Spier and Whinston (1995) (assuming both investments and renegotiation). \ The common focus of these papers is on the strategic stipulation of socially excessive breach damages when the entrant seller has market power, i.e., when the incumbent seller and buyer's original contract imposes externalities on third parties.\footnote{Most of the literature on contract damages, including this paper and those cited above, assumes investments are selfish in that they only directly affect the investing party's payoffs. \ Che and Chung (1999), however, assume cooperative investments, which directly affect the payoffs of the non-investing party. \ They show that the relative social desirability of expectation damages, liquidated damages, and reliance damages is different when investments are cooperative instead of selfish.}

In contrast, I assume that third parties have no bargaining power with the incumbent seller and buyer. \ Instead, the key innovation of this paper is the existence of multiple opportunities for breach of contract, which is due to the sequential arrival of two potential entrants. \ Section \ref{Sec_Model} introduces the rest of the model in detail, and Section \ref{Sec_Efficient} characterizes the ex-ante efficient breach and investment decisions. \ In the event of breach, \emph{expectation damages} compensate the breached-against party (in this case, the seller) for the profit that she would have made had breach not occurred, given her \emph{actual} investment decision.
\ By comparison, \emph{efficient} expectation damages compensate the breached-against party for the profit she would have made absent breach -- had she chosen the \emph{efficient} investment level. \ First, absent externalities and assuming renegotiation is impossible, I demonstrate in Section \ref{Sec_NoReneg} that the incumbent parties can implement the efficient breach and investment decisions in both periods by stipulating the efficient expectation damages in their contract. \ This result can be viewed as an extension to multiple periods of the well-known result that efficient expectation damages are socially efficient when renegotiation is not possible.\footnote{See, for example, Chung (1992) and the references therein.} \ Furthermore, I show that efficient expectation damages for late breach exceed those for early breach.

In a related paper, Chan and Chung (2005) also consider a two-period model of contract breach with sequential investment opportunities. \ They focus on standard court-imposed breach remedies and do not allow for renegotiation. \ In contrast, the main motivations of this paper are to provide explanations for why privately stipulated damages might increase over time as the date of performance approaches, and to examine the robustness of this result to the possibility of renegotiation. \ Another related paper is Triantis and Triantis (1998), which studies a continuous-time model of contract breach and \textit{assumes} that breach damages are increasing over time. \ The present paper can be viewed as providing a framework that justifies such an assumption when damages are privately stipulated.

Another novel feature of this model is the possibility that the seller may find an alternate buyer when the incumbent buyer breaches early but not when he breaches late.\footnote{For example, there may be insufficient time to find an alternate buyer if breach occurs late.
\ The qualitative results would continue to hold if the probability of finding an alternate buyer upon late breach is positive, so long as it is less than the analogous probability given early breach.} \ In this case, contract law requires the seller to take reasonable measures to reduce, or mitigate, the damages that are owed to her for early breach. \ Since these damages are decreasing in the probability of trading with an alternate buyer, mitigation in this setting entails efforts to increase that probability. \ Section \ref{Sec_Mitigation} endogenizes this probability of trading with an alternate buyer and compares the private and social incentives for mitigation of damages. \ It is shown that unless the incumbent seller has complete bargaining power vis-a-vis the alternate buyer, her private incentives for mitigation are socially insufficient, leading to suboptimal mitigation efforts. \ However, this result crucially depends upon the implicit assumption that breach is defined only as a function of whether the incumbent buyer refuses trade, or delivery of the good (as opposed to being also a function of whether the incumbent seller is able to trade with an alternate buyer).

Next, I assume in Section \ref{Sec_Reneg} that the incumbent buyer and seller are able to renegotiate their original contract after the arrival of each perfectly competitive entrant. \ It is shown that if the incumbent seller has complete bargaining power with the alternate buyer (so that externalities are absent), socially efficient breach and investment decisions can still be implemented with the same contract that induces efficient decisions when renegotiation is not possible. \ Thus, this paper contributes to the literature on contract breach by demonstrating that, absent externalities, efficient expectation damages are socially optimal even if breach and renegotiation are possible at multiple points in time.
Finally, Section \ref{Sec_Application} considers an application of the no-renegotiation version of the model to the lodging industry, and in particular, vacation resorts' policies regarding cancellation of lodging reservations. \ The model predicts that a resort's opportunity cost of honoring a reservation beyond the early cancellation opportunity is increasing in the likelihood of finding an alternate guest in case early cancellation occurs. \ Therefore, we should expect the amount by which the late cancellation fee exceeds the early cancellation fee to be larger during periods of high demand than during periods of low demand. Section \ref{Sec_Conclusion} briefly concludes.

\section{A Model with Multiple Breach Opportunities \label{Sec_Model}}

Consider a contract between a buyer and a seller to exchange one unit of an indivisible good or service. \ The buyer's value for the good, $v,$ is commonly known to both parties.\footnote{Stole (1992) argues that when the parties are asymmetrically informed, liquidated damages not only provide incentives for efficient breach, but also serve to efficiently screen among different types of buyers and sellers.} \ The seller can make sequential cost-reducing investments of $r_{1}$ and $r_{2}$ to improve her value from trade with the buyer. \ After the original seller makes each investment $r_{i}$, another seller observes her own production cost $c_{\func{Ei}}$ and announces a price $p_{\func{Ei}}$ that she will charge the buyer if the buyer breaches his contract with the incumbent seller and buys from her, the entrant seller, instead.\footnote{Fixed costs of entry for the entrants are not explicitly modeled.
\ Each of them simply observes her production cost and then costlessly shows up to announce a price.} I study the case where the buyer has all the bargaining power when dealing with the entrants, so that each entrant sets her price equal to her cost, $p_{\func{Ei}}=c_{\func{Ei}},$ and behaves as if she were perfectly competitive.\footnote{If an entrant has some bargaining power with respect to the buyer, the damage for breach that the buyer must incur if he were to buy from the entrant would still constrain the entrant's price choice. \ Since the entrant would make positive profits if she sells to the buyer in this case, the incumbent seller can use (socially excessive) stipulated breach damages to extract surplus from the entrant. \ See Spier and Whinston (1995).}

The buyer has two opportunities to breach his contract with the incumbent seller:\ once after each entrant seller arrives and announces $p_{\func{Ei}}$. \ The entrant's price $p_{\func{Ei}}$ and the incumbent's investments $r_{i}$ are observable by all parties but not verifiable. \ For now, assume the incumbent seller and buyer cannot renegotiate their contract after each entrant's announcement of $p_{\func{Ei}}$ (I examine the case where renegotiation is possible in Section \ref{Sec_Reneg}). \ So the model is essentially the stage game of Spier and Whinston (1995) repeated twice, with perfectly competitive entrants and with the following additional modification. \ I assume that if the original buyer breaches early, i.e., immediately after the first entrant sets her price, then with probability $\theta $ the seller is able to find an alternate buyer who has the same value $v$ for the good and is charged a price $p^{\prime }$ by the seller. \ (Except for the discussion on mitigation of damages in Section \ref{Sec_Mitigation}, I will assume throughout the rest of this paper that $p^{\prime }=v,$ so that the alternate buyer has no bargaining power with respect to the incumbent seller.)
\ If the original buyer breaches late, i.e., after the second entrant announces her price, the seller cannot find an alternate buyer. \ For example, it may be the case that the incumbent seller requires sufficient time to have a chance of finding an alternate buyer.

Because the buyer will have two opportunities to breach, the seller specifies in the contract two liquidated damages, $x_{1}$ and $x_{2},$ where the buyer must pay $x_{i}$ to the seller if he cancels the contract after the seller has made her investment $r_{i}.$ \ If the buyer never breaches the contract and buys from the incumbent seller, the only payment that he makes to the seller is a price $p,$ which is paid when the contract is performed in the last period (when the buyer accepts delivery of the good from the seller). \ In this case, the seller's investment costs are $r_{1}+r_{2}$ and her production cost is $c(r_{1},r_{2}),$ where $c(\cdot ,\cdot )$ is strictly decreasing and strictly convex in $r_{1}$ and $r_{2}$ for all $(r_{1},r_{2})\gg 0.$\footnote{While no functional form assumptions are made with respect to how the seller's production costs depend on her investments, it is assumed that these investments are selfish in the sense that they do not directly affect the \emph{buyer's} payoff.} \ I will refer to $r_{1}$ as the early investment and $r_{2}$ as the late investment. \ In the event that early breach occurs, $r_{2}=0.$

To summarize, the sequence of events, shown in Figure 1 for the case when renegotiation is impossible, is as follows.

\begin{enumerate}
\item[t=0] Seller S offers a contract $(p,x_{1},x_{2})$ to Buyer B. \ If B rejects, both parties receive a payoff of zero and the game ends. \ If B accepts, the game continues.

\item[t=1.1] S makes a non-contractible \emph{early investment} $r_{1}\geq 0$ to reduce her production costs.
\item[t=1.2] Nature draws Entrant seller E1's cost $c_{E1}$ from a distribution $F(\cdot )$ with support $[0,v],$ and E1 chooses her price $p_{E1}.$

\item[t=1.3] B decides whether to \emph{breach early} and buy from E1. \ The cost of the first investment, $r_{1},$ is a sunk cost for S at this point, but if B breaches early, S incurs production costs $c(r_{1},0)$ only if she finds an alternate buyer (which occurs with probability $\theta $). \ Therefore, payoffs for the incumbent buyer, incumbent seller, the first entrant, and the alternate buyer in the case of early breach are, respectively,%
\begin{equation*}
\begin{tabular}{cccc}
$u_{B}=v-p_{E1}-x_{1},$ & $u_{S}=x_{1}-r_{1}+\theta \left[ p^{\prime }-c(r_{1},0)\right] ,$ & $u_{E1}=p_{E1}-c_{E1},$ & $u_{AB}=\theta \lbrack v-p^{\prime }].$%
\end{tabular}%
\end{equation*}%
The game ends after an early breach. \ If B does not breach early, $u_{E1}=u_{AB}=0$ and the game continues.

\item[t=2.1] S makes a non-contractible, relationship-specific \emph{late investment} $r_{2}\geq 0$ to further reduce her production costs.\footnote{The seller's late investment $r_{2}$ is relationship-specific because it does not improve her payoff at all if the incumbent buyer breaches late. \ In contrast, S's early investment $r_{1}$ is not completely relationship-specific because it reduces her cost of selling to the alternative buyer, if one is found.}

\item[t=2.2] Nature draws Entrant seller E2's cost $c_{E2}$ from $F(\cdot ),$ independent of $c_{E1},$ and E2 chooses her price $p_{E2}.$\footnote{The analysis would clearly be the same if we assumed that there is only one entrant who takes another independent draw of her cost if the buyer does not buy from her at time t=1.3.}

\item[t=2.3] B decides whether to \emph{breach late} and buy from E2.
\ Because I assume that S is unable to find an alternate buyer if breach occurs late, payoffs for the buyer, incumbent seller, and second entrant in the case of B breaching late are, respectively,%
\begin{equation*}
\begin{tabular}{ccc}
$u_{B}=v-p_{E2}-x_{2},$ & $u_{S}=x_{2}-r_{1}-r_{2},$ & $u_{E2}=p_{E2}-c_{E2}.$%
\end{tabular}%
\end{equation*}%
If B does not breach, payoffs are%
\begin{equation*}
\begin{tabular}{ccc}
$u_{B}=v-p,$ & $u_{S}=p-c(r_{1},r_{2})-r_{1}-r_{2},$ & $u_{E2}=0.$%
\end{tabular}%
\end{equation*}
\end{enumerate}

% Figure 1 (Timeline.emf): Timeline and payoffs when renegotiation is not possible.
% This figure shows a timeline of the two-period model and the players' payoffs
% for the case when renegotiation is not possible. In each period,
% the seller invests, the entrant observes her cost and chooses her price,
% and the buyer decides whether to breach. The seller may find an alternative
% buyer if breach occurs early but not if breach occurs late.

\section{Efficient Investment and Breach \label{Sec_Efficient}}

As a benchmark, I identify the investment and breach decisions that maximize expected social surplus, or the sum of payoffs for all parties. \ Let $r_{1}^{\ast }$ and $r_{2}^{\ast }(r_{1}^{\ast })$ denote the (ex-ante) efficient investments for the seller.

Proceeding in reverse chronological order, I first characterize the buyer's efficient late breach decision. \ Assuming no early breach and investments $r_{1}$ and $r_{2},$ the social surplus (i.e., the sum of payoffs for B, S, and E2) is $v-c_{E2}-r_{1}-r_{2}$ if B breaches and $v-c(r_{1},r_{2})-r_{1}-r_{2}$ if B does not breach. \ Thus, given investment levels $r_{1}$ and $r_{2}$ and no early breach, social surplus is maximized when B breaches late if and only if potential entrant E2 can produce the good at a lower cost than the incumbent seller:%
$$c_{E2}\leq c(r_{1},r_{2}).
\label{B2}$$%
In particular, because all investment costs are sunk, they do not have any direct effect on the efficient late breach decision. \ However, investments indirectly affect the late breach decision through their effects on the seller's production costs.

Next, consider the seller's efficient late investment, $r_{2}^{\ast }(r_{1}),$ which by definition maximizes expected social surplus given early investment $r_{1},$ no early breach, and late breach occurring if and only if $c_{E2}\leq c(r_{1},r_{2})$. \ In other words, $r_{2}^{\ast }(r_{1})$ is the solution to the problem%
\begin{eqnarray*}
&&\max_{r_{2}\geq 0}S(r_{2}|r_{1}),\text{ where} \\
S(r_{2}|r_{1}) &\equiv &\left\{
\begin{array}{c}
\int_{0}^{c(r_{1},r_{2})}[v-c_{E2}-r_{1}-r_{2}]f(c_{E2})dc_{E2} \\
+\int_{c(r_{1},r_{2})}^{v}[v-c(r_{1},r_{2})-r_{1}-r_{2}]f(c_{E2})dc_{E2}.%
\end{array}%
\right.
\end{eqnarray*}%
The seller's efficient late investment $r_{2}^{\ast }(r_{1}),$ assuming it is positive, is characterized by the first order condition%
$$1=-c_{2}(r_{1},r_{2}^{\ast }(r_{1}))(1-F[c(r_{1},r_{2}^{\ast }(r_{1}))]). \label{R2}$$%
This condition requires that, at its efficient level, the marginal cost of increasing $r_{2}$ equal the expected marginal benefit of increasing $r_{2},$ which is the cost reduction from increasing $r_{2}$ multiplied by the probability that the cost reduction will be realized (i.e., the probability of late breach not occurring, conditional on early breach not occurring).

Now consider the efficient early breach decision. \ Social surplus from early breach is $v-c_{E1}-r_{1}+\theta \lbrack v-c(r_{1},0)]$.
\ Given that the late breach decision is efficient (follows (\ref{B2})) and late investment is efficient (as characterized by (\ref{R2})), expected social surplus from not breaching early is%
\begin{eqnarray*}
&&S(r_{2}^{\ast }(r_{1})|r_{1}) \\
&=&v-F[c(r_{1},r_{2}^{\ast }(r_{1}))]E[c_{E2}|c_{E2}\leq c(r_{1},r_{2}^{\ast }(r_{1}))] \\
&&-(1-F[c(r_{1},r_{2}^{\ast }(r_{1}))])[c(r_{1},r_{2}^{\ast }(r_{1}))]-r_{1}-r_{2}^{\ast }(r_{1}).
\end{eqnarray*}%
Thus, it is efficient for B to breach early if and only if $v-c_{E1}-r_{1}+\theta \lbrack v-c(r_{1},0)]\geq S(r_{2}^{\ast }(r_{1})|r_{1}),$ or%
$$c_{E1}\leq c^{\ast }(r_{1})+r_{2}^{\ast }(r_{1})+\theta \lbrack v-c(r_{1},0)], \label{B1}$$%
where
\begin{eqnarray}
c^{\ast }(r_{1}) &\equiv &F[c(r_{1},r_{2}^{\ast }(r_{1}))]E[c_{E2}|c_{E2}\leq c(r_{1},r_{2}^{\ast }(r_{1}))] \label{c*(r1)} \\
&&+(1-F[c(r_{1},r_{2}^{\ast }(r_{1}))])c(r_{1},r_{2}^{\ast }(r_{1})) \notag
\end{eqnarray}%
is the expected continuation production cost given $r_{1},$ efficient late investment, and efficient late breach. \ So breaching early is efficient if and only if the first entrant's cost, $c_{E1},$ is lower than the expected \emph{social} cost of continuing with the incumbent seller, given efficient investments and efficient late breach. \ In other words, in order for the buyer's early breach decision to be efficient, his total expected continuation cost must include not only his private expected continuation cost $c^{\ast }(r_{1}),$ but also internalize the additional investment cost $r_{2}^{\ast }(r_{1})$ that the seller will incur once early breach is foregone, as well as the lost expected surplus $\theta \lbrack v-c(r_{1},0)]$ that would have been realized had the seller been given the opportunity to find an alternate buyer.
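To fix ideas, the following example solves the late breach condition, the late investment condition, and the early breach condition in closed form under one concrete specification. \ The specification is chosen purely for illustration and is not used elsewhere in the analysis, which imposes no functional forms.

\begin{example}
Let $v=1$ and let each entrant's cost be uniformly distributed on $[0,1]$, so that $F(x)=x$ and $f(x)=1$. \ Fix the early investment at a level $r_{1}$ for which $c(r_{1},r_{2})=\tfrac{1}{2}-\tfrac{1}{2}\sqrt{r_{2}}$ (so $c(r_{1},0)=\tfrac{1}{2}$), a function that is strictly decreasing and strictly convex in $r_{2}$. \ The first order condition (\ref{R2}) becomes $1=\tfrac{1}{4\sqrt{r_{2}}}\left( \tfrac{1}{2}+\tfrac{1}{2}\sqrt{r_{2}}\right) ,$ whose solution is $\sqrt{r_{2}^{\ast }}=\tfrac{1}{7}$, i.e., $r_{2}^{\ast }=\tfrac{1}{49}$ and $c(r_{1},r_{2}^{\ast })=\tfrac{3}{7}$. \ By (\ref{B2}), late breach is then efficient if and only if $c_{E2}\leq \tfrac{3}{7}$, which occurs with probability $\tfrac{3}{7}$. \ Under the uniform distribution, $E[c_{E2}|c_{E2}\leq c]=\tfrac{c}{2}$, so (\ref{c*(r1)}) reduces to $c^{\ast }(r_{1})=c-\tfrac{c^{2}}{2}$ evaluated at $c=\tfrac{3}{7}$, i.e., $c^{\ast }(r_{1})=\tfrac{33}{98}\approx 0.34$. \ Finally, by (\ref{B1}), early breach is efficient if and only if $c_{E1}\leq \tfrac{33}{98}+\tfrac{1}{49}+\tfrac{\theta }{2}=\tfrac{5}{14}+\tfrac{\theta }{2}$; for $\theta =\tfrac{1}{2}$ the threshold is $\tfrac{17}{28}\approx 0.61$. \ Note that the early breach threshold exceeds the late breach threshold $\tfrac{3}{7}\approx 0.43$ and is increasing in $\theta $: the easier it is to resell the good, the lower the social cost of letting the buyer go early.
\end{example}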
Finally, given the seller's efficient late investment and the buyer's efficient breach decisions as described above, the seller's efficient early investment, $r_{1}^{\ast },$ should maximize the ex-ante expected social surplus:%
\begin{eqnarray}
&&\max_{r_{1}\geq 0}S(r_{1}) \label{S(r1)} \\
&=&\max_{r_{1}\geq 0}\left\{
\begin{array}{c}
\int_{0}^{c^{\ast }(r_{1})+r_{2}^{\ast }(r_{1})+\theta \lbrack v-c(r_{1},0)]}\{v-c_{E1}-r_{1}+\theta \lbrack v-c(r_{1},0)]\}f(c_{E1})dc_{E1} \\
+\int_{c^{\ast }(r_{1})+r_{2}^{\ast }(r_{1})+\theta \lbrack v-c(r_{1},0)]}^{v}\{v-c^{\ast }(r_{1})-r_{2}^{\ast }(r_{1})-r_{1}\}f(c_{E1})dc_{E1}%
\end{array}%
\right\} \notag \\
&=&\max_{r_{1}\geq 0}\left\{
\begin{array}{c}
v-r_{1}+\int_{0}^{c^{\ast }(r_{1})+r_{2}^{\ast }(r_{1})+\theta \lbrack v-c(r_{1},0)]}\{-c_{E1}+\theta \lbrack v-c(r_{1},0)]\}f(c_{E1})dc_{E1} \\
+\int_{c^{\ast }(r_{1})+r_{2}^{\ast }(r_{1})+\theta \lbrack v-c(r_{1},0)]}^{v}\{-c^{\ast }(r_{1})-r_{2}^{\ast }(r_{1})\}f(c_{E1})dc_{E1}%
\end{array}%
\right\} \notag
\end{eqnarray}%
In the first version of this problem, the two integrals represent the expected social surpluses when early breach is efficient and when not breaching early is efficient, respectively. \ The seller's efficient early investment $r_{1}^{\ast },$ assuming it is positive, can be characterized by the first order condition
\begin{eqnarray}
1 &=&-c_{1}(r_{1}^{\ast },0)\theta F[c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast })+\theta (v-c(r_{1}^{\ast },0))] \label{R1} \\
&&-\frac{d}{dr_{1}}[c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast })]\{1-F[c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast })+\theta (v-c(r_{1}^{\ast },0))]\}.
\notag
\end{eqnarray}%
Because (\ref{R2}) implies $\frac{d}{dr_{1}}[c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast })]=c_{1}(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))\left\{ 1-F[c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))]\right\} ,$ (\ref{R1}) can be rewritten as%
\begin{eqnarray}
1 &=&-c_{1}(r_{1}^{\ast },0)\cdot \theta F[c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast })+\theta (v-c(r_{1}^{\ast },0))] \label{R1'} \\
&&-c_{1}(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))\cdot \left\{ 1-F\left[
\begin{array}{c}
c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast }) \\
+\theta (v-c(r_{1}^{\ast },0))%
\end{array}%
\right] \right\} \{1-F[c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))]\} \notag
\end{eqnarray}%
Equation (\ref{R1'}) states that in order for early investment $r_{1}^{\ast }$ to be efficient, its marginal cost must equal its expected marginal benefit. \ When the buyer (efficiently) breaches early and an alternate buyer is found, an event which occurs with probability $\theta F[c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast })+\theta (v-c(r_{1}^{\ast },0))],$ the marginal benefit of early investment $r_{1}^{\ast }$ is a reduction of the seller's production cost by the amount $-c_{1}(r_{1}^{\ast },0)$. \ When the buyer (efficiently) never breaches and buys from the incumbent seller, which occurs with probability $(1-F[c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast })+\theta (v-c(r_{1}^{\ast },0))])\{1-F[c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))]\},$ the marginal benefit of early investment $r_{1}^{\ast }$ is a reduction of the production cost by the amount $-c_{1}(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))$.
\ Note that when $\theta =0,$ so that there is no possibility of finding an alternate buyer even if early breach occurs, (\ref{R1'}) reduces to%
\begin{equation*}
1=-c_{1}(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))(1-F[c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast })])\left\{ 1-F[c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))]\right\} ,
\end{equation*}%
where the right hand side is the reduction in production cost that results from investment $r_{1}^{\ast },$ multiplied by the probability that this benefit will actually be realized, i.e., the probability that breach never occurs.

\begin{proposition}
\label{SO}The incumbent seller's efficient investments, $r_{1}^{\ast }$ and $r_{2}^{\ast }(r_{1}^{\ast }),$ are characterized by (\ref{R1}) and (\ref{R2}), respectively. \ The buyer's efficient breach decision is to breach early if and only if (\ref{B1}) is satisfied and (conditional on not breaching early) to breach late if and only if (\ref{B2}) is satisfied.
\end{proposition}

\section{Private Contracts Induce Efficient Decisions \label{Sec_NoReneg}}

In this section, I show that if the incumbent parties' original contract imposes no externalities on third parties,\footnote{That is, assume both entrant sellers are perfectly competitive, i.e., constrained to set price equal to cost, and that the incumbent seller has complete bargaining power with respect to the alternate buyer.} and if renegotiation is not possible, then the incumbent seller and buyer can implement the efficient investment and breach decisions in both periods by stipulating efficient expectation damages. \ This result has been demonstrated previously for the case of a single breach opportunity,\footnote{See paragraph 4 on p.\ 186 of Spier and Whinston (1995) for references.} but not for multiple breach opportunities.
Suppose the buyer and seller agreed to a contract $(p,x_{1},x_{2})$ where%
\begin{eqnarray}
x_{1} &=&p-c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))-r_{2}^{\ast }(r_{1}^{\ast })-\theta \lbrack v-c(r_{1}^{\ast },0)] \label{x1CE} \\
x_{2} &=&p-c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast })) \label{x2CE}
\end{eqnarray}%
Furthermore, assume each entrant Ei sets price equal to cost, $p_{\func{Ei}}=c_{\func{Ei}}$ for $i=1,2,$ and that the incumbent seller can charge the alternate buyer his value for the good, i.e., $p^{\prime }=v.$ \ The following proposition states that this contract will induce the seller to invest efficiently and the buyer to make the efficient breach decision in each period. \ Note that if a contract satisfies (\ref{x1CE}) and (\ref{x2CE}), then whenever the buyer breaches, the damages that he pays make the seller as well off as if the contract had been performed, \emph{assuming the seller invested efficiently}. \ Hence these damages are the \emph{efficient expectation damages}.

\begin{proposition}
\label{xCE}Assume that entrants are perfectly competitive, the alternate buyer has no bargaining power, and renegotiation is not possible. \ Then any contract $(p,x_{1},x_{2})$ satisfying (\ref{x1CE}) and (\ref{x2CE}) induces the seller to always invest efficiently and the buyer to always breach efficiently.
\end{proposition}

\begin{proof}
Using backwards induction to solve for the subgame perfect Nash equilibrium of the game, consider first B's private incentives for late breach. \ Given a contract $(p,x_{1},x_{2})$ that satisfies (\ref{x1CE}) and (\ref{x2CE}), suppose early breach did not occur. \ B's equilibrium incentive is to breach late if and only if $v-c_{E2}-x_{2}\geq v-p,$ or $c_{E2}\leq p-x_{2}=c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))$.
\ Thus, (\ref{B2}) implies that B's late breach decision is efficient if S's \emph{equilibrium} investments $r_{1}^{e}$ and $r_{2}^{e}$ are efficient, i.e., if they equal $r_{1}^{\ast }$ and $r_{2}^{\ast }(r_{1}^{\ast }),$ respectively.

Given this late breach decision by B, an early investment of $r_{1}^{e}$ by S, and no early breach, (\ref{x2CE}) can be used to write S's late investment problem as choosing $r_{2}$ to maximize her expected continuation payoff:%
\begin{eqnarray}
r_{2}^{e}(r_{1}) &=&\max_{r_{2}\geq 0}\left\{
\begin{array}{c}
\int_{0}^{c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))}[x_{2}-r_{1}^{e}-r_{2}]f(c_{E2})dc_{E2} \\
+\int_{c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))}^{v}[p-c(r_{1}^{e},r_{2})-r_{1}^{e}-r_{2}]f(c_{E2})dc_{E2}%
\end{array}%
\right\} \label{Ust=5} \\
&=&\max_{r_{2}\geq 0}\left\{ -r_{2}-\int_{c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))}^{v}c(r_{1}^{e},r_{2})f(c_{E2})dc_{E2}\right\} . \notag
\end{eqnarray}%
Then S's equilibrium choice of $r_{2}^{e}$ is characterized by the first order condition
$$1=-c_{2}(r_{1}^{e},r_{2}^{e}(r_{1}^{e}))(1-F[c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))]). \label{R2e}$$%
Since $c_{22}(\cdot )>0,$ equations (\ref{R2}) and (\ref{R2e}) imply that $r_{2}^{e}(r_{1}^{e})=r_{2}^{\ast }(r_{1}^{\ast })$ if $r_{1}^{e}=r_{1}^{\ast }.$ Hence, S's late investment is indeed efficient if her early investment is efficient.

Anticipating the late investment and breach decisions characterized above, B's equilibrium incentive is to breach early if and only if
\begin{equation*}
v-c_{E1}-x_{1}\geq \int_{0}^{c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))}[v-c_{E2}-x_{2}]f(c_{E2})dc_{E2}+\int_{c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))}^{v}[v-p]f(c_{E2})dc_{E2}.
\end{equation*}%
By using (\ref{x1CE})-(\ref{x2CE}) and rearranging, this inequality can be shown to be equivalent to $c_{E1}\leq c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast })+\theta \lbrack v-c(r_{1}^{\ast },0)],$ which is the same as (\ref{B1}). \ Therefore, if $r_{1}^{e}=r_{1}^{\ast }$ so that S's early investment is efficient, B's early breach decision will also be efficient (as will be the late investment and late breach decisions).

So it remains to show that S's equilibrium early investment is efficient, i.e., $r_{1}^{e}=r_{1}^{\ast },$ when breach damages are specified by (\ref{x1CE}) and (\ref{x2CE}). Given that B breaches early if and only if $c_{E1}\leq c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast })+\theta \lbrack v-c(r_{1}^{\ast },0)],$ the probability of early breach only depends on the efficient early investment $r_{1}^{\ast }$ and not S's equilibrium choice of $r_{1}.$ \ Therefore, S chooses her early investment $r_{1}\geq 0$ to maximize%
\begin{eqnarray*}
&&F[c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast })+\theta (v-c(r_{1}^{\ast },0))](x_{1}-r_{1}+\theta \lbrack p^{\prime }-c(r_{1},0)]) \\
&&+\{1-F[c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast })+\theta (v-c(r_{1}^{\ast },0))]\}\Gamma (r_{1}),
\end{eqnarray*}%
where $x_{1}-r_{1}+\theta \lbrack p^{\prime }-c(r_{1},0)]$ is S's expected payoff conditional on early breach, and
\begin{eqnarray*}
\Gamma (r_{1}) &\equiv &\int_{0}^{c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))}[\underset{x_{2}}{\underbrace{p-c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))}}-r_{1}-r_{2}^{e}(r_{1})]f(c_{E2})dc_{E2} \\
&&+\int_{c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))}^{v}[p-c(r_{1},r_{2}^{e}(r_{1}))-r_{1}-r_{2}^{e}(r_{1})]f(c_{E2})dc_{E2}
\end{eqnarray*}%
is the maximized value of the first problem in (\ref{Ust=5}) when $x_{1}$ and $x_{2}$ are given by (\ref{x1CE}) and (\ref{x2CE}).
\ That is, $\Gamma (r_{1})$ is the continuation payoff for S from choosing early investment $r_{1}$ when B does not breach early, S chooses her late investment according to $r_{2}^{e}(\cdot ),$ and B breaches late if and only if $c_{E2}\leq c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast })).$ \ Note that $\Gamma (r_{1})$ can be rewritten as
\begin{eqnarray*}
\Gamma (r_{1}) &=&p-c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))F[c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))] \\
&&-c(r_{1},r_{2}^{e}(r_{1}))\left\{ 1-F[c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))]\right\} -r_{1}-r_{2}^{e}(r_{1}).
\end{eqnarray*}
The first order condition for S's equilibrium early investment $r_{1}^{e}$ can be written as%
\begin{eqnarray}
0 &=&-F[c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast })+\theta (v-c(r_{1}^{\ast },0))](1+\theta c_{1}(r_{1}^{e},0))  \label{R1FOC} \\
&&+\{1-F[c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast })+\theta (v-c(r_{1}^{\ast },0))]\}\Gamma ^{\prime }(r_{1}^{e}),  \notag
\end{eqnarray}%
where (\ref{R2e}) implies that%
$$\Gamma ^{\prime }(r_{1}^{e})=-1-c_{1}(r_{1}^{e},r_{2}^{e}(r_{1}^{e}))(1-F[c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))]).  \label{Gamma'}$$%
Substituting (\ref{Gamma'}) into (\ref{R1FOC}) and rearranging, (\ref{R1FOC}) can be written as%
\begin{eqnarray*}
1 &=&-c_{1}(r_{1}^{e},0)\cdot \theta F[c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast })+\theta (v-c(r_{1}^{\ast },0))] \\
&&-c_{1}(r_{1}^{e},r_{2}^{e}(r_{1}^{e}))\cdot \left\{ 1-F\left[
\begin{array}{c}
c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast }) \\
+\theta (v-c(r_{1}^{\ast },0))%
\end{array}%
\right] \right\} \{1-F[c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))]\}.
\end{eqnarray*}%
This equation, when compared with (\ref{R1'}), implies that S's equilibrium early investment is indeed efficient: $r_{1}^{e}=r_{1}^{\ast }$ (recall $c_{11}(\cdot )>0$).
Therefore, by the calculations above, S's equilibrium late investment is also efficient ($r_{2}^{e}(r_{1}^{e})=r_{2}^{\ast }(r_{1}^{\ast })$), and both of B's breach decisions are efficient.
\end{proof}

By Proposition \ref{xCE}, a contract satisfying (\ref{x1CE}) and (\ref{x2CE}) maximizes the joint expected payoffs of the seller and buyer. \ Therefore, such a contract must also maximize the seller's ex-ante expected payoff given that the buyer accepts the contract. \ Since the seller's original contract proposal is a take-it-or-leave-it offer, she will find it in her interest to offer a contract satisfying (\ref{x1CE}) and (\ref{x2CE}) and to choose the price $p$ so that the buyer is just indifferent between accepting and rejecting the contract offer.

Because the alternate buyer and each competitive entrant seller always earn a payoff of zero, a contract satisfying (\ref{x1CE}) and (\ref{x2CE}) also maximizes social surplus. \ Therefore, given the assumptions of the model, standard court-imposed breach remedies cannot improve welfare. \ Note that this result crucially depends on the absence of externalities. \ When an entrant has market power (and the buyer and seller are able to renegotiate after entry), Spier and Whinston (1995) show in a one-period model that \textquotedblleft privately stipulated damages are set at a socially excessive level to facilitate the extraction of the entrant's surplus.\textquotedblright\ \ Presumably, this inefficiency result would continue to hold if entrants have market power and renegotiation is introduced into the above two-period framework.

Note that the intuition behind Proposition \ref{xCE} can also be seen without resorting to first order conditions. \ Because the original contract imposes no externalities, the incumbent seller's investments are always efficient \emph{given} the incumbent buyer's breach decisions.
\ Therefore, since efficient expectation damages induce the buyer to make breach decisions that are efficient assuming the seller's investments are ex-ante efficient,\footnote{%
To see why this is so with sequential breach decisions, first note that with efficient second period investment, the efficient expectation damage for late breach will induce the buyer to make his late breach decision efficiently. \ Thus, given efficient first period investment, the efficient expectation damage for early breach will also cause the buyer to make his early breach decision efficiently (since his continuation payoff from not breaching early is based on efficient second period breach and investment decisions). \ This reasoning should also apply to the case in which there are $N>2$ periods in which breach may occur.} such damages will also induce the seller to make (ex-ante) efficient investment decisions.

Subtracting equation (\ref{x1CE}) from (\ref{x2CE}), the following observations are evident.

\begin{corollary}
\label{Corollary_x2-x1>0}When the entrants are perfectly competitive, the breach damages are higher \emph{after} the second investment has been made than \emph{before} the second investment has been made:%
\begin{equation*}
x_{2}-x_{1}=r_{2}^{\ast }(r_{1}^{\ast })+\theta \lbrack v-c(r_{1}^{\ast },0)]>0.\footnote{%
This assumes that trade with the alternate buyer is efficient, conditional on efficient early investment.}
\end{equation*}%
Furthermore, this difference is increasing in the probability of finding an alternate buyer (if breach occurs early):%
\begin{equation*}
\frac{d}{d\theta }\left( x_{2}-x_{1}\right) =v-c(r_{1}^{\ast },0)>0.
\end{equation*}
\end{corollary}

The first part of this corollary says that the fee for cancelling the contract increases over time. \ The relationship $x_{2}=x_{1}+r_{2}^{\ast }(r_{1}^{\ast })+\theta \lbrack v-c(r_{1}^{\ast },0)]$ between the damages for late and early breach illustrates the intuition.
\ If the buyer does not breach at his first opportunity to do so, the seller will make the investment $r_{2}^{\ast }(r_{1}^{\ast })$ and forgo an expected surplus of $\theta \lbrack v-c(r_{1}^{\ast },0)]$ from possible trade with an alternate buyer. \ Therefore, the penalty for late breach must include the additional cost of the seller's second investment, as well as the lost expected surplus from potential trade with an alternate buyer, in order to induce the buyer to internalize these social opportunity costs of continuing with the contract when making his second breach decision. Because the opportunity cost $\theta \lbrack v-c(r_{1}^{\ast },0)]$ of continuing with the contract is increasing in the probability of finding an alternate buyer in case of early breach, the second part of the corollary simply points out that the difference between the penalties for late breach and early breach must also be increasing in this probability.

\section{Mitigation of Damages \label{Sec_Mitigation}}

Corollary \ref{Corollary_x2-x1>0} shows that the amount by which the damages for late breach exceed the damages for early breach is increasing in $\theta ,$ the probability of finding an alternate buyer. \ While so far it has been assumed that this probability is exogenous, in reality the incumbent seller frequently has some influence over the likelihood of recouping some of her initial investment, and therefore over the damages owed her by the incumbent buyer. \ When this is the case, contract law stipulates that the seller (i.e., the breached-against party, or promisee) has the responsibility of undertaking (a reasonable amount of) effort to reduce, or mitigate, those damages.\footnote{%
According to Restatement (Second) of Contracts, \S 350 (p.
127), \textquotedblleft As a general rule, a party cannot recover damages for loss that he could have avoided by reasonable efforts.\textquotedblright\ \ Goetz and Scott (1983) provide a detailed discussion of the general theory of mitigation. \ Miceli et al. (2001) consider a specific application to property leases with court-imposed damages. \ They show that whether it is optimal for there to be a duty for the landlord to mitigate damages from tenant breach of contract depends on whether leases fall under the domain of contract law or property law.}

Mitigation usually involves effort costs or other opportunity costs, so I modify the previous model by introducing a cost of mitigation for the seller. \ I demonstrate that the seller's incentive to engage in such mitigation efforts is socially efficient only when she has complete bargaining power vis-a-vis the alternate buyer; otherwise, her mitigation effort is socially insufficient.

\subsection{Binary Mitigation Decision}

First I consider the case where the seller simply makes a binary decision (immediately after early breach occurs) regarding whether or not to mitigate the damages owed to her by the incumbent buyer. \ Choosing to mitigate implies, as before, encountering an alternate buyer with (fixed) probability $\theta ,$ and not mitigating implies being unable to find an alternate buyer with certainty. \ Assume mitigation involves a disutility of $\gamma >0$ for the incumbent seller.

Suppose that the incumbent seller's early investment is $r_{1}$ and that early breach has occurred. \ The seller's payoff from not mitigating is $x_{1}-r_{1},$ and her payoff from mitigating is $x_{1}-r_{1}+\theta \lbrack p^{\prime }-c(r_{1},0)]-\gamma ,$ where recall $p^{\prime }$ is the price paid by the alternate buyer. \ Therefore, if there are no legal requirements on the seller's mitigation decision, she will choose to mitigate if and only if%
\begin{equation*}
\theta \lbrack p^{\prime }-c(r_{1},0)]>\gamma .
\end{equation*}%
That is, effort is expended to search for an alternate buyer when the probability of, or the gains from, trade with such a buyer are high, or when the search effort associated with mitigation is not too costly.

How does this compare with the socially efficient mitigation decision? \ The payoffs of the incumbent buyer and the first entrant seller are independent of whether the incumbent seller mitigates, so they do not influence the socially efficient mitigation decision. \ Summing the payoffs of the incumbent seller and the alternate buyer, it is straightforward to see that social surplus is maximized with the incumbent seller mitigating if and only if%
\begin{equation*}
\theta \lbrack v-c(r_{1},0)]>\gamma .
\end{equation*}%
By comparing the above two inequalities, it can be readily observed that the incumbent seller's private incentives for mitigation of damages are socially insufficient unless $p^{\prime }=v,$ in which case she has complete bargaining power when dealing with the alternate buyer.\footnote{\label{ftnt_p'=v}When $p^{\prime }=v,$ the social efficiency of the incumbent seller's mitigation decision follows immediately from the observation that her decision to mitigate can be viewed as an example of a selfish investment.}

\subsection{Continuous Mitigation Decision}

Now consider the more general case where the seller's mitigation effort choice is continuous. \ Without loss of generality, suppose that the seller directly chooses the probability of finding an alternate buyer, $\theta \in \lbrack 0,1]$.
\ In doing so, she incurs an effort cost of $\gamma (\theta ),$ where $\gamma (\cdot )$ is strictly increasing and strictly convex in $\theta ,$ with $\gamma (0)=0.$

Given early investment $r_{1}$ by the incumbent seller, and early breach by the incumbent buyer, the seller chooses her mitigation effort level $\theta $ to maximize her expected payoff:%
\begin{equation*}
\max_{\theta \in \lbrack 0,1]}\{x_{1}-r_{1}+\theta \lbrack p^{\prime }-c(r_{1},0)]-\gamma (\theta )\}.
\end{equation*}%
Assuming $p^{\prime }-c(r_{1},0)>\gamma ^{\prime }(0),$ the first order condition characterizing the interior solution is%
\begin{equation*}
p^{\prime }-c(r_{1},0)=\gamma ^{\prime }(\theta ^{e}(r_{1})),
\end{equation*}%
where $\theta ^{e}(r_{1})$ represents the incumbent seller's \emph{equilibrium} choice of mitigation effort. \ This expression simply states that the privately optimal mitigation effort level equates the marginal private benefit of increasing such effort with the marginal cost.

In contrast, the socially efficient mitigation effort level $\theta ^{\ast }(r_{1})$ satisfies%
\begin{equation*}
v-c(r_{1},0)=\gamma ^{\prime }(\theta ^{\ast }(r_{1}))
\end{equation*}%
because the marginal social benefit from increasing the probability of trade with an alternate buyer is the total surplus from such trade, or $v-c(r_{1},0)$.
\ Since this marginal social benefit exceeds the marginal private benefit whenever $v>p^{\prime },$ or whenever the alternate buyer has some bargaining power, the incumbent seller will tend to choose a socially insufficient mitigation effort level (due to the convexity of her effort costs $\gamma (\cdot )$): $\theta ^{e}(r_{1})\leq \theta ^{\ast }(r_{1})$ for all $r_{1},$ with equality if and only if $v=p^{\prime }.$\footnote{%
The same intuition as in footnote \ref{ftnt_p'=v} above applies here as well.}

\subsection{Contractibility of the Mitigation Decision}

Regardless of whether the mitigation choice involves a binary or continuous decision variable, the incumbent seller usually exerts a socially insufficient amount of effort to mitigate breach damages, and her mitigation decision is socially efficient if and only if she is able to capture all of the gains from trade with the alternate buyer. \ The intuition for this inefficiency result is analogous to the intuition for inefficient (under-)investment in property rights models with separate ownership: here, unless the seller is able to charge the alternate buyer a price equal to the latter's willingness to pay for the good or service, she (the seller) does not appropriate all of the surplus from trade and therefore has inefficiently weak incentives for mitigation. \ (Recall that the seller always bears all of the mitigation costs.)

Notice that the above analysis assumes the damages for early breach, $x_{1},$ are fixed and unaffected by the mitigation choice. \ This requires an implicit assumption that while the incumbent seller is able to commit to her choices of damages, she is unable to commit to her mitigation decision when the contract is first signed.
\ This assumption is reasonable to the extent that mitigation effort cannot be contracted upon at the start of the game, and it seems justified at least in the model where the mitigation decision is continuous and assumed to be equivalent to the \emph{probability} of finding an alternate buyer. \ In such an environment, it is difficult to conceive how the contracting parties could verify to a court the actual mitigation effort level, since it is \emph{possible} that an alternate buyer is found ex-post even though the incumbent seller may have chosen a very small, but positive, mitigation effort level ex-ante. \ This case would be relevant, for example, when the mitigation effort decision is not publicly observable.\footnote{%
If the mitigation effort decision is publicly observable, the question then becomes whether mitigation should be viewed as the mere exertion of effort to search for an alternate buyer, or the actual discovery of such an opportunity \emph{and} the consummation of trade with the alternate buyer.}

On the other hand, if the mitigation decision is binary, and there really is no chance of finding an alternate buyer upon late breach, it is conceivable that the mitigation decision might be verifiable ex-post and hence contractible ex-ante.\footnote{%
It would be interesting to analyze whether the incumbent seller has private incentives to write a contract that induces socially efficient mitigation effort when this decision is verifiable and included as a part of the original contract. If the incumbent seller has complete bargaining power with respect to both the incumbent and alternate buyers, it may be reasonable to expect that private mitigation efforts will be socially efficient.} The reason is that if, upon early breach, an alternate buyer is indeed found and trade occurs, then the incumbent seller necessarily chose to mitigate damages. \ However, this logic depends on the assumption that trade with the alternate buyer is verifiable.
\ Were this not the case, the incumbent seller would have an incentive to fabricate evidence of trade with an alternate buyer. \ Nevertheless, this issue is not problematic to the extent that (i) trade with the incumbent buyer is verifiable, so that the original contract is enforceable; and (ii) verifiability of trade for the incumbent seller is correlated among buyers.

If the parties truly cannot contract upon the mitigation decision ex-ante, the incumbent seller would no longer have any contractual obligations towards the incumbent buyer once breach has occurred. \ She would then be free, in the event of early breach, to choose her mitigation decision in any manner she sees fit. \ In light of this consideration, the legal requirement that breached-against parties take reasonable efforts to mitigate their damages in the event of breach can be viewed as an attempt to ameliorate the social insufficiency of private mitigation incentives when contracts are incomplete.\footnote{%
See Goetz and Scott (1983).}

\subsection{The Nature of the Breach Outcome}

There is one final observation to make regarding the efficiency of the incumbent seller's mitigation effort. \ Assuming that she has full bargaining power vis-a-vis the alternate buyer, the preceding analysis shows that the incumbent seller has socially efficient incentives for mitigation. \ This result relies on the implicit assumption that whether the contract is breached depends directly upon only the incumbent buyer's action and not the action of the incumbent seller. \ If whether breach occurs is a function of both parties' actions (as is the case in some tort models), the following analysis will show that the incumbent seller's action (mitigation decision) may be socially inefficient, even if she has full bargaining power with respect to the alternate buyer.
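The private-versus-social mitigation comparisons above can be illustrated numerically. The sketch below uses hypothetical parameter values (not taken from the text) for $v,$ $p^{\prime },$ $c(r_{1},0),$ $\theta ,$ and $\gamma ,$ together with an assumed quadratic effort cost for the continuous case; it is only meant to show the wedge that arises when $p^{\prime }<v$.

```python
# Hypothetical parameters (for illustration only, not from the model's text).
v = 10.0      # buyer's valuation
c = 4.0       # seller's cost of serving the alternate buyer, c(r1, 0)
p_alt = 7.0   # price p' paid by the alternate buyer (p' < v: buyer keeps some surplus)
theta = 0.5   # probability of finding an alternate buyer if the seller mitigates
gamma = 2.0   # seller's disutility of (binary) mitigation effort

# Binary decision: mitigate iff the expected gain exceeds the effort cost.
private_mitigates = theta * (p_alt - c) > gamma   # seller's criterion
social_mitigates = theta * (v - c) > gamma        # planner's criterion
assert not private_mitigates and social_mitigates  # seller under-mitigates here

# Continuous decision with assumed quadratic cost gamma(t) = k*t^2/2, so
# gamma'(t) = k*t and both first order conditions solve in closed form.
k = 8.0
theta_e = min((p_alt - c) / k, 1.0)   # equilibrium effort: p' - c = gamma'(theta)
theta_star = min((v - c) / k, 1.0)    # efficient effort:   v  - c = gamma'(theta)
assert theta_e <= theta_star          # under-mitigation whenever p' < v
```

With these numbers the binary criteria give $\theta \lbrack p^{\prime }-c]=1.5<\gamma <3=\theta \lbrack v-c],$ and the continuous solutions are $\theta ^{e}=0.375<\theta ^{\ast }=0.75,$ matching the insufficiency result.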
The duty to mitigate damages usually arises in situations where breach damages are imposed ex-post by the court, as opposed to being privately stipulated ex-ante. Therefore, to see the importance of the way in which breach is defined, consider the following example, where I assume court-imposed expectation damages. Suppose there is just one period, with no investment, buyer value $v,$ seller cost $c,$ and a binary mitigation decision for the incumbent seller. \ Assume the entrant's cost $c_{E}$ is either $c_{E}^{L}$ or $c_{E}^{H},$ with $c_{E}^{L}<c_{E}^{H}\leq v+\theta \lbrack v-c]-\gamma .$ \ First suppose that breach of contract is said to occur (and hence breach damages $x$ are due) if and only if the incumbent buyer refuses trade. \ In this case, the seller's mitigation decision does not affect the damages she receives, and she chooses to mitigate if and only if $\theta \lbrack v-c]-\gamma >0$, as required by efficiency.

Now suppose breach of contract is said to occur (and hence breach damages $x$ due) if and only if the incumbent buyer refuses trade \emph{and} the incumbent seller cannot find an alternate buyer.\footnote{%
Because the seller's mitigation decision affects her probability of finding an alternate buyer, it also affects the probability that breach is said to occur.}\ \ Conditional on the incumbent buyer's refusal of trade, efficiency requires that the seller mitigates, i.e., exerts effort to find an alternate buyer, if and only if $v-c_{E}+\theta \lbrack v-c]-\gamma \geq 0\iff c_{E}\leq v+\theta \lbrack v-c]-\gamma .$\footnote{%
If S does not mitigate after B refuses trade, no surplus is realized because S would not be able to trade with either B or the alternate buyer.} \ Since $c_{E}\leq c_{E}^{H}\leq v+\theta \lbrack v-c]-\gamma $ by assumption, the efficient mitigation decision is to always mitigate (conditional on the incumbent buyer's refusal of trade). \ However, the seller will never exert mitigation effort. \ To see this, note that if she does not mitigate, then with probability 1 she does not find an alternate buyer to trade with, and hence by definition breach occurs. \ So the seller's payoff from not mitigating, given expectation damages, is $x=p^{\prime }-c=v-c.$\footnote{%
The expectation damage equates the seller's payoff from breach, $x,$ to her payoff from no breach.
\ Conditional on the incumbent buyer's refusal to trade, no breach corresponds to the case in which the seller is able to find an alternate buyer with whom to trade. \ In this case, the seller receives a payoff of $p^{\prime }-c=v-c.$} \ The seller's payoff from mitigation is $\theta \lbrack v-c]+(1-\theta )x-\gamma =v-c-\gamma ,$ which is less than her payoff of $v-c$ from not mitigating.\footnote{%
With probability $\theta ,$ the seller finds and trades with an alternate buyer. \ In this case, there is no breach and the seller receives $v-c$ from trade with the alternate buyer. \ With probability $1-\theta ,$ the seller is unable to find an alternate buyer, and so by definition breach occurs. \ The seller receives the breach damage $x$ in this case. \ Regardless of whether an alternate buyer is found, the seller incurs the effort cost $\gamma $ if she mitigates.}$^{,}$\footnote{\label{ft_nt_mit}Note that if the expectation damages were to compensate the seller for her disutility of mitigation effort, then $x=v-c+\gamma .$ \ In this case, the seller's payoff from mitigation is $\theta \lbrack v-c]+(1-\theta )x-\gamma =v-c+(1-\theta )\gamma -\gamma =v-c-\theta \gamma ,$ which is still less than her payoff of $v-c$ from not mitigating. \ Therefore, as long as the court-imposed expectation damage does not grossly over-estimate the seller's disutility of mitigation, she will still prefer to not mitigate.} \ Thus, the seller will never choose to mitigate even though it is efficient for her to do so after the buyer's refusal to trade. \ The intuition for this result is straightforward. \ When breach is equivalent to the incumbent buyer's refusal to trade, the seller's mitigation decision does not affect the incumbent buyer's payoff conditional on his refusal to trade. \ Instead, the mitigation decision only affects the seller's own payoff (recall the alternate buyer always earns zero by assumption), and so her mitigation decision will be efficient.
\ In contrast, if the definition of breach requires not only the buyer's refusal to trade but also the seller's inability to find an alternate buyer, then the seller will not mitigate even when it is efficient to do so. \ To see this, note that expectation damages ensure that regardless of whether the seller mitigates, she will receive the same gross payoff (excluding any mitigation effort costs) of $v-c$ after the incumbent buyer refuses to trade. \ Therefore, because mitigation effort is costly, the seller will choose to not mitigate.\footnote{%
Alternatively, the intuition for the inefficiency result follows from the observation that when breach depends on both parties' actions, the incumbent seller's mitigation decision has an externality on the incumbent buyer (even though $p^{\prime }=v$ implies no externality on the alternate buyer) and therefore will be inefficient.} \ (This inefficiency result still obtains even if the seller is accurately compensated for her disutility of mitigation effort when no alternate buyer is found. \ The reason is that while the cost of mitigation is certain, finding an alternate buyer is not. \ See footnote \ref{ft_nt_mit}.)

\section{Renegotiation \label{Sec_Reneg}}

I now examine the situation where the incumbent seller S and buyer B are able to renegotiate their original contract after each entrant seller announces its price $p_{Ei}$ and prior to each breach opportunity. \ Once again, assume each entrant is perfectly competitive and sets price equal to cost, $p_{Ei}=c_{Ei},$ and suppose that S has complete bargaining power vis-a-vis the alternate buyer. \ Then S and B's contract imposes no externalities on other parties, and so they have joint incentives to induce efficient breach and investment decisions.
\ As Proposition \ref{Prop_reneg} below demonstrates, the efficient breach and investment decisions can in fact be implemented with the same efficient expectation damages as before, when renegotiation was impossible. \ The logic underlying this argument depends crucially on analyzing the parties' payoffs off the equilibrium path.

Assume Nash bargaining during each renegotiation period, so that the renegotiation outcome maximizes the seller and buyer's joint payoffs. \ The renegotiation surplus, which is split between S and B in the proportions $\alpha $ and $1-\alpha ,$ is defined as the difference in the sum of payoffs for S and B with and without renegotiation: $s_{reneg}\equiv (u_{S}+u_{B})|_{w/reneg}-(u_{S}+u_{B})|_{w/o\text{ }reneg}.$ \ Hence, the payoffs after each stage of renegotiation are $u_{S}|_{w/o\text{ }reneg}+\alpha \cdot s_{reneg}$ for the seller and $u_{B}|_{w/o\text{ }reneg}+(1-\alpha )\cdot s_{reneg}$ for the buyer. \ If B is indifferent between buying from an entrant or S, assume B buys from the entrant, regardless of whether the indifference arises before or after renegotiation.

Suppose that early and late investment are complementary, i.e.,%
$$c_{12}(r_{1},r_{2})\leq 0\text{ for all }(r_{1},r_{2}).  \label{c12<=0}$$%
Then S's privately optimal, or equilibrium, late investment $r_{2}^{e}(r_{1})$ is increasing in her early investment $r_{1}.$ \ Finally, assume
$$1-\max \{F[c(r_{1},r_{2}^{e}(r_{1}))],F[c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))]\}\geq \theta \text{ for all }r_{1},  \label{1-max>T}$$%
which can be shown to imply that: (i) when $r_{1}$ is less than $r_{1}^{\ast },$ the private value of early investment for S exceeds its social value assuming early breach occurs; and (ii) when $r_{1}$ is greater than $r_{1}^{\ast },$ the private value of early investment for S is less than its social value assuming early breach does not occur.
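The Nash-bargaining payoff rule just defined can be sketched as follows. The numerical values are hypothetical and illustrate a case, analyzed in detail below, in which B would breach late absent renegotiation ($c(r_{1},r_{2})<c_{E2}\leq p-x_{2}$) but trade with S is preserved through renegotiation.

```python
def renegotiated_payoffs(uS_no, uB_no, uS_with, uB_with, alpha):
    """Each party receives its disagreement (no-renegotiation) payoff plus a
    share (alpha for S, 1 - alpha for B) of the renegotiation surplus."""
    s_reneg = (uS_with + uB_with) - (uS_no + uB_no)
    return uS_no + alpha * s_reneg, uB_no + (1 - alpha) * s_reneg

# Hypothetical numbers with c(r1, r2) < cE2 <= p - x2:
p, x2, v, cE2, c, r1, r2, alpha = 10.0, 4.0, 12.0, 5.0, 3.0, 1.0, 1.0, 0.5

# Disagreement outcome: B breaches late (cE2 <= p - x2), so S collects x2.
uS_no, uB_no = x2 - r1 - r2, v - cE2 - x2
# Renegotiated outcome: B keeps buying from S, whose cost is below cE2.
uS_with, uB_with = p - c - r1 - r2, v - p

uS, uB = renegotiated_payoffs(uS_no, uB_no, uS_with, uB_with, alpha)
# Matches the closed forms uS = x2 - r1 - r2 + alpha*(cE2 - c) and
# uB = v - cE2 - x2 + (1 - alpha)*(cE2 - c) used in the text.
assert uS == x2 - r1 - r2 + alpha * (cE2 - c)
assert uB == v - cE2 - x2 + (1 - alpha) * (cE2 - c)
```

The renegotiation surplus here is $c_{E2}-c(r_{1},r_{2})=2,$ split equally when $\alpha =0.5$.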
\begin{proposition}
\label{Prop_reneg}Suppose S and B can renegotiate after each competitive entrant arrives and that (\ref{c12<=0}) and (\ref{1-max>T}) are satisfied. \ Then the ex-ante efficient breach and investment decisions (as characterized in Section 3) can be implemented by the same contract that implements the efficient outcome when renegotiation is not possible, i.e., any contract $(p,x_{1},x_{2})$ where $x_{1}$ and $x_{2}$ are the efficient expectation damages and satisfy (\ref{x1CE}) and (\ref{x2CE}).
\end{proposition}

The intuition for this result is as follows. \ When $r_{1}<r_{1}^{\ast },$ early renegotiation causes early breach to occur (even though it would not occur absent early renegotiation) for intermediate realizations of the early entrant's cost. \ (In this case, assumption (\ref{1-max>T}) implies that S's private marginal benefit from increasing early investment continues to exceed the social marginal benefit, given that early breach occurs.) \ Similarly, when $r_{1}>r_{1}^{\ast },$ early renegotiation causes early breach to not occur (but it does occur absent early renegotiation) for intermediate realizations of the early entrant's cost. \ Here, assumption (\ref{1-max>T}) implies that S has a smaller private incentive to increase $r_{1}$ relative to the social marginal benefit. \ Together, these two observations will induce S to choose the efficient early investment $r_{1}^{\ast }.$ \ Given that S chooses the efficient early investment $r_{1}^{\ast },$ early renegotiation implies that B's early breach decision will be (ex-ante) efficient as well. \ It can also be shown that S's privately optimal late investment, $r_{2}^{e}(r_{1}),$ coincides with the efficient late investment $r_{2}^{\ast }(r_{1})$ when $r_{1}=r_{1}^{\ast }.$ \ In other words, given that S's early investment is efficient, so is her late investment (see Lemma \ref{r2e=r2*} below). \ Late renegotiation then leads to the efficient late breach decision. \ (These observations also imply that no renegotiation occurs on the equilibrium path.)
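As a numerical preview of the backwards-induction argument, the late breach thresholds with and without renegotiation can be sketched. The cost function and investment levels below are hypothetical stand-ins: absent renegotiation B breaches late iff $c_{E2}\leq p-x_{2},$ which the efficient expectation damages set equal to $c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast })),$ while renegotiation moves the decision to the ex-post threshold $c(r_{1},r_{2})$; the two coincide exactly when S invests efficiently.

```python
import math

def c(r1, r2):
    # Hypothetical cost technology: decreasing and convex in total investment.
    return 5.0 * math.exp(-(r1 + r2))

r1_star, r2_star = 0.8, 0.6   # efficient investments (assumed values)
p = 10.0
x2 = p - c(r1_star, r2_star)  # efficient expectation damages for late breach

# On the equilibrium path the two breach thresholds coincide:
assert abs((p - x2) - c(r1_star, r2_star)) < 1e-12

# Off the equilibrium path, renegotiation uses the ex-post threshold instead:
r1, r2 = 0.5, 0.3
# Under-investment raises S's cost, so the ex-post efficient breach region widens.
assert c(r1, r2) > p - x2
```

This is the sense in which renegotiation makes B's breach decision ex-post efficient given any $(r_{1},r_{2}),$ and ex-ante efficient only when S's investments are themselves efficient.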
The rest of this section details the proof of this proposition.\footnote{%
Readers who are either uninterested in the technical details underlying Proposition \ref{Prop_reneg} or more interested in a concrete application of this model may wish to skip ahead to Section \ref{Sec_Application}.} \ Using backwards induction, I first look at B's late breach decision, then S's late investment decision, then B's early breach decision, and finally S's early investment decision.

\subsection{\textbf{Late Breach Decision}}

First consider B's late breach decision. \ Given there is no early breach and that $x_{2}$ satisfies (\ref{x2CE}), B has a private incentive to breach late absent renegotiation if and only if $v-c_{E2}-x_{2}\geq v-p,$ i.e.,%
\begin{equation*}
c_{E2}\leq p-x_{2}=c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast })).
\end{equation*}%
On the other hand, conditional on S having actually chosen investment levels $r_{1}$ and $r_{2},$ renegotiation after the second entrant arrives (what I will sometimes refer to as \textquotedblleft late renegotiation\textquotedblright ) leads to late breach if and only if $v-c_{E2}\geq v-c(r_{1},r_{2}),$ i.e.,
\begin{equation*}
c_{E2}\leq c(r_{1},r_{2}).
\end{equation*}%
Given $(r_{1},r_{2}),$ this is the ex-post efficient breach decision.
\ Since ex-ante efficiency requires late breach to occur exactly when $c_{E2}\leq c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast })),$ late renegotiation implies that B's late breach decision is ex-ante efficient \emph{if} S's early and late investments are ex-ante efficient, i.e., if $(r_{1},r_{2})=(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast })).$

\subsection{\textbf{Renegotiation Payoffs in the Second Period}}

Before examining S's late investment decision, we must first consider the (renegotiation-induced) payoffs of S (and B) for all possible realizations of the second entrant's price/cost $c_{E2},$ as well as for all possible early and late investments $(r_{1},r_{2})$ that S might make (including those off the equilibrium path).

When $c_{E2}\leq \min \{p-x_{2},c(r_{1},r_{2})\},$ B breaches late regardless of whether late renegotiation is possible, and so payoffs are $u_{S}=x_{2}-r_{1}-r_{2}$ and $u_{B}=v-c_{E2}-x_{2}.$ \ On the other hand, when $c_{E2}>\max \{p-x_{2},c(r_{1},r_{2})\},$ B does not breach late regardless of whether late renegotiation is possible, and so payoffs are $u_{S}=p-c(r_{1},r_{2})-r_{1}-r_{2}$ and $u_{B}=v-p.$ If $p-x_{2}<c(r_{1},r_{2})$ and $p-x_{2}<c_{E2}\leq c(r_{1},r_{2}),$ B does not breach late absent renegotiation even though late breach is ex-post efficient; here disagreement payoffs are those associated with the no-breach outcome, i.e., $p-c(r_{1},r_{2})-r_{1}-r_{2}$ for S and $v-p$ for B, the renegotiation surplus is $c(r_{1},r_{2})-c_{E2}>0,$ and so the renegotiation payoffs are $u_{S}=p-c(r_{1},r_{2})-r_{1}-r_{2}+\alpha \lbrack c(r_{1},r_{2})-c_{E2}]$ and $u_{B}=v-p+(1-\alpha )[c(r_{1},r_{2})-c_{E2}].$ \ Conversely, if $p-x_{2}>c(r_{1},r_{2})$ and $c(r_{1},r_{2})<c_{E2}\leq p-x_{2},$ B breaches late absent renegotiation even though continued trade with S is ex-post efficient; the renegotiation surplus is $c_{E2}-c(r_{1},r_{2})>0$ (in this case, S has the lower cost).
\ Disagreement payoffs are therefore those associated with the breach outcome, i.e., $x_{2}-r_{1}-r_{2}$ for S and $v-c_{E2}-x_{2}$ for B, and so the renegotiation payoffs are $u_{S}=x_{2}-r_{1}-r_{2}+\alpha \lbrack c_{E2}-c(r_{1},r_{2})]$ and $u_{B}=v-c_{E2}-x_{2}+(1-\alpha )[c_{E2}-c(r_{1},r_{2})].$ To summarize:

\begin{lemma}
\label{Lem_LateReneg}If early breach does not occur and S's investments are $(r_{1},r_{2}),$ payoffs after late renegotiation (excluding investment costs) for the incumbent seller S and buyer B, respectively, are given by:%
\begin{equation*}
\begin{tabular}{ll}
$\{x_{2},v-c_{E2}-x_{2}\}$ & if $c_{E2}\leq \min \{p-x_{2},c(r_{1},r_{2})\};$ \\
$\{p-c(r_{1},r_{2}),v-p\}$ & if $c_{E2}>\max \{p-x_{2},c(r_{1},r_{2})\};$ \\
$\left\{
\begin{array}{c}
p-c(r_{1},r_{2})+\alpha \lbrack c(r_{1},r_{2})-c_{E2}], \\
v-p+(1-\alpha )[c(r_{1},r_{2})-c_{E2}]%
\end{array}%
\right\} $ & if $p-x_{2}<c_{E2}\leq c(r_{1},r_{2});$ \\
$\left\{
\begin{array}{c}
x_{2}+\alpha \lbrack c_{E2}-c(r_{1},r_{2})], \\
v-c_{E2}-x_{2}+(1-\alpha )[c_{E2}-c(r_{1},r_{2})]%
\end{array}%
\right\} $ & if $c(r_{1},r_{2})<c_{E2}\leq p-x_{2}.$%
\end{tabular}%
\end{equation*}
\end{lemma}

\begin{figure}[tbh]
\caption{The seller's ex-post payoffs after late renegotiation, for various values of the second entrant's cost $c_{E2},$ assuming $p-x_{2}\leq c(r_{1},r_{2}).$ \ Late breach always occurs for low values of $c_{E2}$ and never occurs for high values of $c_{E2}.$ \ For intermediate values (when $p-x_{2}<c_{E2}\leq c(r_{1},r_{2})$), late breach does not occur absent late renegotiation but does occur with late renegotiation.}
\label{Fig_CE2values-1}
\end{figure}

\begin{figure}[tbh]
\caption{The seller's ex-post payoffs after late renegotiation, for various values of the second entrant's cost $c_{E2},$ assuming $p-x_{2}\geq c(r_{1},r_{2}).$ \ Late breach always occurs for low values of $c_{E2}$ and never occurs for high values of $c_{E2}.$ \ For intermediate values (when $c(r_{1},r_{2})\leq c_{E2}\leq p-x_{2}$), late breach occurs absent late renegotiation but does not occur with late renegotiation.}
\label{Fig_CE2values-2}
\end{figure}

\subsection{\textbf{Late Investment Decision}}

Now consider S's late investment decision given that she chose $r_{1}$ in period 1.
\ First, suppose S chooses $r_{2}$ such that $p-x_{2}\leq c(r_{1},r_{2}).$ \ Conditional on early breach not occurring, Figure \ref{Fig_CE2values-1} summarizes S's ex-post payoff after late renegotiation (from Lemma \ref{Lem_LateReneg}) as a function of the second entrant's price offer $p_{E2}=c_{E2}.$ \ In this case, S's expected payoff (exclusive of her early investment cost) is%
\begin{eqnarray*}
\pi _{L}(r_{1},r_{2}) &=&F[p-x_{2}]x_{2}+\int_{p-x_{2}}^{c(r_{1},r_{2})}\left\{ p-c(r_{1},r_{2})+\alpha \lbrack c(r_{1},r_{2})-c_{E2}]\right\} f(c_{E2})dc_{E2} \\
&&+(1-F[c(r_{1},r_{2})])[p-c(r_{1},r_{2})]-r_{2}.
\end{eqnarray*}

On the other hand, if S chooses $r_{2}$ such that $p-x_{2}\geq c(r_{1},r_{2}),$ Figure \ref{Fig_CE2values-2} depicts her ex-post payoff after late renegotiation as a function of the second entrant's price. \ For these values of $r_{1}$ and $r_{2},$ S's expected payoff (exclusive of early investment cost) is%
\begin{eqnarray*}
\pi _{H}(r_{1},r_{2}) &=&F[c(r_{1},r_{2})]x_{2}+\int_{c(r_{1},r_{2})}^{p-x_{2}}\left\{ x_{2}+\alpha \lbrack c_{E2}-c(r_{1},r_{2})]\right\} f(c_{E2})dc_{E2} \\
&&+(1-F[p-x_{2}])[p-c(r_{1},r_{2})]-r_{2}.
\end{eqnarray*}%
Note that $\pi _{H}(r_{1},r_{2})$ can be rewritten as%
\begin{eqnarray*}
\pi _{H}(r_{1},r_{2}) &=&F[p-x_{2}]x_{2}+\int_{c(r_{1},r_{2})}^{p-x_{2}}\alpha \lbrack c_{E2}-c(r_{1},r_{2})]f(c_{E2})dc_{E2} \\
&&+(1-F[p-x_{2}])[p-c(r_{1},r_{2})]-r_{2} \\
&=&\pi _{L}(r_{1},r_{2}),
\end{eqnarray*}%
where the second equality follows from (i) switching the bounds of integration in the second term and multiplying the integrand by $-1;$ and (ii) writing $1-F[p-x_{2}]$ in the third term as $(1-F[c(r_{1},r_{2})])+(F[c(r_{1},r_{2})]-F[p-x_{2}])$ and then rearranging. \ Thus, given $r_{1},$ simply denote S's expected payoff from choosing $r_{2}$ (exclusive of early investment cost) by%
\begin{equation*}
\pi (r_{1},r_{2})\equiv \pi _{L}(r_{1},r_{2})=\pi _{H}(r_{1},r_{2})\text{ for all }(r_{1},r_{2}).
\end{equation*}%
Let $r_{2}^{e}(r_{1})$ denote S's privately optimal, or equilibrium, late investment choice, given that her early investment is $r_{1}.$ \ It is characterized by the first order condition%
\begin{eqnarray}
0 &=&\pi _{2}(r_{1},r_{2}^{e}(r_{1}))\text{ for all }r_{1} \label{pi2=0} \\
&=&-c_{2}(r_{1},r_{2}^{e}(r_{1}))\{1-\alpha F[c(r_{1},r_{2}^{e}(r_{1}))]-(1-\alpha )F[p-x_{2}]\}-1. \notag
\end{eqnarray}

\begin{lemma}
\label{r2e=r2*}If S's early investment is efficient, her late investment is efficient as well:%
\begin{equation*}
r_{2}^{e}(r_{1}^{\ast })=r_{2}^{\ast }(r_{1}^{\ast }).
\end{equation*}
\end{lemma}

\begin{proof}
To see this, observe that since $r_{2}^{\ast }(r_{1}^{\ast })$ maximizes social surplus given $r_{1}^{\ast },$ $S^{\prime }(r_{2}|r_{1}^{\ast })\gtreqqless 0$ for all $r_{2}\lesseqqgtr r_{2}^{\ast }(r_{1}^{\ast }).$ \ Thus, if $p-x_{2}=c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))\leq c(r_{1}^{\ast },r_{2}),$ then $r_{2}\leq r_{2}^{\ast }(r_{1}^{\ast })$ (as $c_{2}<0$). \ In this case, (\ref{x2CE}) implies $\pi _{2}(r_{1}^{\ast },r_{2})\geq -c_{2}(r_{1}^{\ast },r_{2})\{1-F[c(r_{1}^{\ast },r_{2})]\}-1=S^{\prime }(r_{2}|r_{1}^{\ast })\geq 0.$ \ Similarly, $p-x_{2}=c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))\geq c(r_{1}^{\ast },r_{2})$ implies $r_{2}\geq r_{2}^{\ast }(r_{1}^{\ast })$ and hence $\pi _{2}(r_{1}^{\ast },r_{2})\leq -c_{2}(r_{1}^{\ast },r_{2})\{1-F[c(r_{1}^{\ast },r_{2})]\}-1=S^{\prime }(r_{2}|r_{1}^{\ast })\leq 0.$
\end{proof}

This result is analogous to Proposition 1 in Spier and Whinston (1995), where efficient expectation damages lead the seller to invest efficiently. \ (As in their Proposition 1, I\ also assume renegotiation and a perfectly competitive (late)\ entrant.) \ The intuition is the same as well.
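Because the equality $\pi _{L}=\pi _{H}$ is purely algebraic, it can be checked numerically. The sketch below is my own illustrative parameterization (not from the paper): it takes $c_{E2}\sim U[0,1]$, so that $F(y)=y$ and $f(y)=1$, evaluates both payoff formulas in closed form, and confirms that they agree whether $c(r_{1},r_{2})$ lies below, at, or above $p-x_{2}$.

```python
# Numerical check (illustrative parameters, not from the paper) that the
# seller's two expected-payoff expressions pi_L and pi_H coincide, using
# cE2 ~ Uniform[0,1] so that F(y) = y and f(y) = 1 on [0,1].

def lin_integral(A, B, a, b):
    """Exact (signed) integral of the linear function A + B*t over [a, b]."""
    return A * (b - a) + B * (b * b - a * a) / 2.0

def pi_L(p, x2, alpha, c, r2):
    # F[p-x2]*x2 + int_{p-x2}^{c} {p - c + alpha*(c - t)} dt + (1 - F[c])*(p - c) - r2
    return ((p - x2) * x2
            + lin_integral(p - c + alpha * c, -alpha, p - x2, c)
            + (1.0 - c) * (p - c) - r2)

def pi_H(p, x2, alpha, c, r2):
    # F[c]*x2 + int_{c}^{p-x2} {x2 + alpha*(t - c)} dt + (1 - F[p-x2])*(p - c) - r2
    return (c * x2
            + lin_integral(x2 - alpha * c, alpha, c, p - x2)
            + (1.0 - (p - x2)) * (p - c) - r2)

p, x2, alpha, r2 = 0.9, 0.3, 0.4, 0.05      # so p - x2 = 0.6
for c in (0.2, 0.6, 0.8):                    # c below, at, and above p - x2
    assert abs(pi_L(p, x2, alpha, c, r2) - pi_H(p, x2, alpha, c, r2)) < 1e-12
```

The signed integrals make each formula valid on both sides of $p-x_{2}=c(r_{1},r_{2})$, which is exactly why the two expressions define a single function $\pi (r_{1},r_{2})$.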
\ When the seller's late\ investment is less than efficient (given $r_{1}^{\ast }$% ), late renegotiation allows her to capture a share of the return on her cost reduction for realizations of $c_{E2}$ that ultimately lead to late breach (see the middle interval in Figure \ref{Fig_CE2values-1}). \ Since a social planner only values late investment when S actually produces the good, the seller's incentive to increase her late investment exceeds that of a social planner when $r_{2}$ is less than efficient (given $r_{1}^{\ast }$% ). \ Similarly, when $r_{2}$ is more than efficient (given $r_{1}^{\ast }$), the seller's incentive to increase her late\ investment is less than that of a social planner. \ Hence, the seller chooses the efficient late investment (given early investment $r_{1}^{\ast }$). Finally, assuming the second order condition is satisfied, (\ref{c12<=0}) implies that $r_{2}^{e}(r_{1})$ is increasing in $r_{1}.$\footnote{% A sufficient condition for the second order condition to be satisfied is that $\pi _{22}=-c_{22}(r_{1},r_{2})\{1-\alpha F[c(r_{1},r_{2})]-(1-\alpha )F[c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))]\}+c_{2}(r_{1},r_{2})^{2}f[c(r_{1},r_{2})]<0$ at $r_{2}=r_{2}^{e}(r_{1})$ for all $r_{1}.$ \ Given (\ref{c12<=0}), $\pi _{21}=-c_{21}\{1-\alpha F[c]-(1-\alpha )F[p-x_{2}]\}+\alpha c_{1}c_{2}f(c)>0,$ and so $% r_{2}^{e\prime }(r_{1})=-\pi _{21}/\pi _{22}>0$ at $% (r_{1},r_ {2}^{e}(r_{1})). 
$} \ Hence, because $c_{1}<0,c_{2}<0,$ we have $\frac{d}{dr_{1}}c(r_{1},r_{2}^{e}(r_{1}))<0.$ \ Therefore Lemma \ref{r2e=r2*}\ implies that%
$$r_{1}\lesseqgtr r_{1}^{\ast }\iff p-x_{2}=c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))\lesseqgtr c(r_{1},r_{2}^{e}(r_{1})) \label{r1<>r1*==c*<>c}$$%
with equality if and only if $r_{1}=r_{1}^{\ast }.$

\subsection{\textbf{Early Breach Decision}}

\textbf{Absent Early Renegotiation.}

Absent early renegotiation, the incumbent buyer B obtains a payoff of $v-c_{E1}-x_{1}$ if he breaches early to buy from the first entrant. \ Now consider B's expected payoff from not breaching early, with late renegotiation still possible.

Given S's early investment $r_{1},$ B will anticipate S's late investment choice of $r_{2}^{e}(r_{1}).$ \ First, suppose $r_{1}\leq r_{1}^{\ast },$ which is equivalent to $p-x_{2}\leq c(r_{1},r_{2}^{e}(r_{1}))$ by (\ref{r1<>r1*==c*<>c}). \ Lemma \ref{Lem_LateReneg}\ and (\ref{x2CE}) imply that B's expected payoff from not breaching early is%
\begin{eqnarray*}
&&\int_{0}^{p-x_{2}}(v-c_{E2}-x_{2})f(c_{E2})dc_{E2}+(1-F[c(r_{1},r_{2}^{e}(r_{1}))])(v-p) \\
&&+\int_{p-x_{2}}^{c(r_{1},r_{2}^{e}(r_{1}))}(v-p+(1-\alpha )[c(r_{1},r_{2}^{e}(r_{1}))-c_{E2}])f(c_{E2})dc_{E2} \\
&=&v-p-\int_{0}^{p-x_{2}}(c_{E2}-c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast })))f(c_{E2})dc_{E2} \\
&&+\int_{p-x_{2}}^{c(r_{1},r_{2}^{e}(r_{1}))}(1-\alpha )[c(r_{1},r_{2}^{e}(r_{1}))-c_{E2}]f(c_{E2})dc_{E2}.
\end{eqnarray*}%
Since (\ref{x2CE}) implies $c^{\ast }(r_{1}^{\ast })=\int_{0}^{p-x_{2}}c_{E2}f(c_{E2})dc_{E2}+\int_{p-x_{2}}^{v}c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))f(c_{E2})dc_{E2}$ (recall (\ref{c*(r1)}), the definition of $c^{\ast }(r_{1})$), B's expected payoff from not breaching early can be further rewritten as $v-\psi (r_{1})-x_{2},$ where%
\begin{equation*}
\psi (r_{1})\equiv c^{\ast }(r_{1}^{\ast })-\int_{c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))}^{c(r_{1},r_{2}^{e}(r_{1}))}(1-\alpha )[c(r_{1},r_{2}^{e}(r_{1}))-c_{E2}]f(c_{E2})dc_{E2}.
\end{equation*}%
If $r_{1}\geq r_{1}^{\ast }$ instead, i.e., $p-x_{2}\geq c(r_{1},r_{2}^{e}(r_{1})),$ B's expected payoff from not breaching early is%
\begin{eqnarray*}
&&\int_{0}^{c(r_{1},r_{2}^{e}(r_{1}))}(v-c_{E2}-x_{2})f(c_{E2})dc_{E2}+(1-F[p-x_{2}])(v-p) \\
&&+\int_{c(r_{1},r_{2}^{e}(r_{1}))}^{p-x_{2}}(v-c_{E2}-x_{2}+(1-\alpha )[c_{E2}-c(r_{1},r_{2}^{e}(r_{1}))])f(c_{E2})dc_{E2}.
\end{eqnarray*}%
It turns out that this expression can also be written as $v-\psi (r_{1})-x_{2}.$

So for any $r_{1},$ B breaches early absent early renegotiation if and only if $v-c_{E1}-x_{1}\geq v-\psi (r_{1})-x_{2},$ or equivalently,%
$$c_{E1}\leq \psi (r_{1})+x_{2}-x_{1}. \label{NoRenegEarlyBreach}$$%
Since $\psi ^{\prime }(r_{1})=(1-\alpha )\frac{dc(r_{1},r_{2}^{e}(r_{1}))}{dr_{1}}(F[c(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))]-F[c(r_{1},r_{2}^{e}(r_{1}))]),$ (\ref{r1<>r1*==c*<>c}) implies that%
$$\psi ^{\prime }(r_{1})\gtreqqless 0\text{ for all }r_{1}\lesseqqgtr r_{1}^{\ast }, \label{psi'><0}$$%
with equality only at $r_{1}^{\ast }.$

Finally, $\psi (r_{1}^{\ast })=c^{\ast }(r_{1}^{\ast })$ follows from Lemma \ref{r2e=r2*}.
\ So \emph{if} S's early investment is efficient, (\ref{x1CE}) and (\ref{x2CE}) imply that B will breach early absent early renegotiation if and only if $c_{E1}\leq \psi (r_{1}^{\ast })+x_{2}-x_{1}=c^{\ast }(r_{1}^{\ast })+r_{2}^{\ast }(r_{1}^{\ast })+\theta \lbrack v-c(r_{1}^{\ast },0)],$ which is the efficient early breach decision.

\textbf{With Early Renegotiation.}

With early renegotiation, B will breach early to buy from the first entrant if and only if expected social surplus is higher from his breaching early. \ Absent early breach, surplus is $u_{S}+u_{B}=v-\psi (r_{1})-x_{2}+\pi (r_{1},r_{2}^{e}(r_{1}))-r_{1}.$ \ With early breach, $u_{S}+u_{B}=v-c_{E1}+\theta \lbrack v-c(r_{1},0)]-r_{1}.$ \ Thus, early renegotiation leads to early breach if and only if%
\begin{eqnarray}
c_{E1} &\leq &\phi (r_{1})+\theta \lbrack v-c(r_{1},0)], \label{EffEarlyBreach} \\
\text{where }\phi (r_{1}) &\equiv &\psi (r_{1})+x_{2}-\pi (r_{1},r_{2}^{e}(r_{1})), \notag
\end{eqnarray}%
(which is the efficient breach decision given $r_{1}$). \ Recall from Section 4 that when renegotiation is never possible, early breach is efficient given $r_{1}$ if and only if
$$c_{E1}\leq c^{\ast }(r_{1})+r_{2}^{\ast }(r_{1})+\theta \lbrack v-c(r_{1},0)] \label{B1''}$$%
(compare with (\ref{B1}) for the case $r_{1}=r_{1}^{\ast }$). \ It can be verified that $c^{\ast }(r_{1})+r_{2}^{\ast }(r_{1})$ and $\phi (r_{1}),$ and hence the right hand sides of (\ref{EffEarlyBreach}) and (\ref{B1''}), are not equal unless $r_{1}=r_{1}^{\ast }.$ \ Therefore, the efficient early breach decisions when renegotiation is and is not possible do not coincide with each other unless S's early investment is efficient. \ \emph{In other words, the possibility of renegotiation does not alter the efficient early breach decision on the equilibrium path but does affect it off the equilibrium path.
\ } Since B's early breach decision (with early renegotiation) is ex-ante efficient given $r_{1}^{\ast },$ it remains to show that S's early investment is indeed efficient.

\subsection{\textbf{Renegotiation Payoffs in the First Period}}

Before analyzing S's early investment decision, we first derive the payoffs of S (and B) after early renegotiation for all possible realizations of the first entrant's price/cost $c_{E1}$ and all levels of S's early investment $r_{1}.$ \ Recall that absent early renegotiation, B breaches early if and only if (\ref{NoRenegEarlyBreach}) holds, while with early renegotiation early breach occurs if and only if (\ref{EffEarlyBreach}) is satisfied.

When $c_{E1}\leq \min \{\psi (r_{1})+x_{2}-x_{1},\phi (r_{1})+\theta \lbrack v-c(r_{1},0)]\},$ B breaches early regardless of whether early renegotiation is possible, and so payoffs are $u_{S}=x_{1}+\theta \lbrack v-c(r_{1},0)]-r_{1}$ and $u_{B}=v-c_{E1}-x_{1}.$ \ On the other hand, when $c_{E1}>\max \{\psi (r_{1})+x_{2}-x_{1},\phi (r_{1})+\theta \lbrack v-c(r_{1},0)]\},$ B does not breach early regardless of whether early renegotiation is possible, and so payoffs are $u_{S}=\pi (r_{1},r_{2}^{e}(r_{1}))-r_{1}$ and $u_{B}=v-\psi (r_{1})-x_{2}.$ \ For intermediate realizations of $c_{E1},$ early renegotiation reverses the early breach decision and the parties split the resulting surplus gain. \ To summarize, payoffs after early renegotiation (excluding S's early investment cost $r_{1}$) are given by:%
\begin{equation*}
\begin{tabular}{ll}
$\{x_{1}+\theta \lbrack v-c(r_{1},0)],v-c_{E1}-x_{1}\}$ & if $c_{E1}\leq \min \left\{
\begin{array}{c}
\psi (r_{1})+x_{2}-x_{1}, \\
\phi (r_{1})+\theta \lbrack v-c(r_{1},0)]
\end{array}
\right\} ;$ \\
$\{\pi (r_{1},r_{2}^{e}(r_{1})),v-\psi (r_{1})-x_{2}\}$ & if $c_{E1}>\max \left\{
\begin{array}{c}
\psi (r_{1})+x_{2}-x_{1}, \\
\phi (r_{1})+\theta \lbrack v-c(r_{1},0)]
\end{array}
\right\} ;$ \\
$\left\{
\begin{array}{c}
\pi (r_{1},r_{2}^{e}(r_{1}))+\alpha \cdot s_{reneg}^{L}, \\
v-\psi (r_{1})-x_{2}+(1-\alpha )\cdot s_{reneg}^{L}
\end{array}
\right\} $ & if $\psi (r_{1})+x_{2}-x_{1}<c_{E1}\leq \phi (r_{1})+\theta \lbrack v-c(r_{1},0)];$ \\
$\left\{
\begin{array}{c}
x_{1}+\theta \lbrack v-c(r_{1},0)]+\alpha \cdot s_{reneg}^{H}, \\
v-c_{E1}-x_{1}+(1-\alpha )\cdot s_{reneg}^{H}
\end{array}
\right\} $ & if $\phi (r_{1})+\theta \lbrack v-c(r_{1},0)]<c_{E1}\leq \psi (r_{1})+x_{2}-x_{1},$
\end{tabular}
\end{equation*}
where $s_{reneg}^{L}\equiv \phi (r_{1})+\theta \lbrack v-c(r_{1},0)]-c_{E1}$ and $s_{reneg}^{H}\equiv c_{E1}-\phi (r_{1})-\theta \lbrack v-c(r_{1},0)]$ denote the surplus gains from early renegotiation in the two intermediate cases.

By assumption (\ref{1-max>T}),%
\begin{equation*}
r_{1}\lesseqqgtr r_{1}^{\ast }\iff \psi (r_{1})+x_{2}-x_{1}\lesseqqgtr \phi (r_{1})+\theta \lbrack v-c(r_{1},0)],
\end{equation*}%
with equality only at $r_{1}=r_{1}^{\ast }.$ \ To see this, note that $\psi (r_{1})+x_{2}-x_{1}\lesseqqgtr \phi (r_{1})+\theta \lbrack v-c(r_{1},0)]$ is equivalent to
\begin{equation*}
0\lesseqqgtr \theta \lbrack v-c(r_{1},0)]+x_{1}-\pi
(r_{1},r_{2}^{e}(r_{1})),
\end{equation*}%
which is satisfied for all $r_{1}\lesseqqgtr r_{1}^{\ast }$ because the right hand side of this expression is zero at $r_{1}^{\ast }$ (by (\ref{x1CE}) and (\ref{x2CE})) and strictly decreasing in $r_{1}$ for all $r_{1}$ (by assumption (\ref{1-max>T})).\footnote{\label{ftnt_pi1+Tc1}$\frac{d}{dr_{1}}\left\{ \theta \lbrack v-c(r_{1},0)]+x_{1}-\pi (r_{1},r_{2}^{e}(r_{1}))\right\} =-\theta c_{1}(r_{1},0)-\pi _{1}(r_{1},r_{2}^{e}(r_{1})),$ which is negative for all $r_{1}$ by assumption (\ref{1-max>T}).}$^{,}$\footnote{Recall that excluding early investment cost and absent early renegotiation, S earns a payoff of $x_{1}+\theta \lbrack v-c(r_{1},0)]$ if early breach occurs and $\pi (r_{1},r_{2}^{e}(r_{1}))$ if it does not. \ Therefore, $\pi (r_{1},r_{2}^{e}(r_{1}))\lesseqqgtr x_{1}+\theta \lbrack v-c(r_{1},0)]$ for all $r_{1}\lesseqqgtr r_{1}^{\ast }$ implies that (i) when $r_{1}$ is less than $r_{1}^{\ast },$ early investment is more valuable to S if early breach occurs; and (ii) when $r_{1}$ is greater than $r_{1}^{\ast },$ early investment is more valuable to her if early breach does not occur.}

\textbf{Case (A).}\ \ Suppose $r_{1}\leq r_{1}^{\ast },$ which implies $\psi (r_{1})+x_{2}-x_{1}\leq \phi (r_{1})+\theta \lbrack v-c(r_{1},0)].$ \ There are three subcases to consider for different realizations of $c_{E1},$ and Figure \ref{Fig_CE1values-1} shows S's payoffs in each subcase.

% Figure 4: Seller's payoffs after early renegotiation in Case (A), where r1<=r1*.
% This figure shows the seller's payoffs after early renegotiation, assuming she
% underinvests (r1<=r1*), for various values of the first entrant's cost cE1.
% Early breach always occurs for low values of cE1 and never occurs for high values of cE1.
% For intermediate values of cE1, early breach does not occur absent early renegotiation
% but does occur with early renegotiation.
(i) If $c_{E1}\leq \psi (r_{1})+x_{2}-x_{1},$ early breach always occurs. \ Social surplus is $v-c_{E1}+\theta \lbrack v-c(r_{1},0)]-r_{1}$ for these realizations of $c_{E1},$ so the marginal net social return from increasing $r_{1}$ slightly is $-\theta c_{1}(r_{1},0)-1.$ \ Since S's private payoff is $x_{1}+\theta \lbrack v-c(r_{1},0)]-r_{1}$ in this range, her marginal net private return from increasing $r_{1}$ equals the net social return.

(ii)\ If $\psi (r_{1})+x_{2}-x_{1}<c_{E1}\leq \phi (r_{1})+\theta \lbrack v-c(r_{1},0)],$ early breach occurs with early renegotiation but not without it. \ Social surplus is again $v-c_{E1}+\theta \lbrack v-c(r_{1},0)]-r_{1},$ while S's private payoff is $\pi (r_{1},r_{2}^{e}(r_{1}))+\alpha \lbrack \phi (r_{1})+\theta \lbrack v-c(r_{1},0)]-c_{E1}]-r_{1}$ in this range. \ Her marginal net private return from increasing $r_{1}$ slightly therefore exceeds the marginal net social return $-\theta c_{1}(r_{1},0)-1$ by $(1-\alpha )[\pi _{1}+\theta c_{1}]+\alpha \psi ^{\prime }(r_{1}),$ which is weakly positive: $r_{1}\leq r_{1}^{\ast }$ and (\ref{psi'><0}) imply $\psi ^{\prime }(r_{1})\geq 0$ while $r_{1}\leq r_{1}^{\ast }$ and footnote \ref{ftnt_pi1+Tc1}\ imply $\pi _{1}+\theta c_{1}\geq 0.$

(iii)\ If $\phi (r_{1})+\theta \lbrack v-c(r_{1},0)]<c_{E1},$ early breach never occurs. \ Social surplus is $v-\psi (r_{1})-x_{2}+\pi (r_{1},r_{2}^{e}(r_{1}))-r_{1}$ for these realizations of $c_{E1},$ so the marginal net social return from increasing $r_{1}$ slightly is $\pi _{1}-\psi ^{\prime }(r_{1})-1.$ \ Since S's private payoff is $\pi (r_{1},r_{2}^{e}(r_{1}))-r_{1}$ in this range, her marginal net private return is $\pi _{1}-1,$ which weakly exceeds the net social return (since $r_{1}\leq r_{1}^{\ast },$ (\ref{psi'><0}) implies $\psi ^{\prime }(r_{1})\geq 0$).

So to summarize case (A), when $r_{1}\leq r_{1}^{\ast },$ S's marginal net private return from increasing $r_{1}$ slightly is weakly greater than the marginal net social return for all realizations of $c_{E1}.$ \ Hence $\Pi ^{\prime }(r_{1})\geq S^{\prime }(r_{1})\geq 0$ when $r_{1}\leq r_{1}^{\ast }.$

\textbf{Case (B).} \ If $r_{1}\geq r_{1}^{\ast },$ then $\psi (r_{1})+x_{2}-x_{1}\geq \phi (r_{1})+\theta \lbrack v-c(r_{1},0)].$ \ S's payoffs for all possible realizations of $c_{E1}$ are depicted in Figure \ref{Fig_CE1values-2}. \ Similarly to the previous case, it can be shown that S's marginal net private return to increasing $r_{1}$ slightly is weakly less than the marginal net social return for all $c_{E1}.$ \ Therefore $\Pi ^{\prime }(r_{1})\leq S^{\prime }(r_{1})\leq 0$ for all $r_{1}\geq r_{1}^{\ast }.$

Hence, given any contract $(p,x_{1},x_{2})$ where $x_{1}$ and $x_{2}$ are the efficient expectation damages (satisfying (\ref{x1CE}) and (\ref{x2CE})), S's privately optimal early investment (the value of $r_{1}$ that maximizes $\Pi (r_{1})$) is indeed the efficient one, $r_{1}^{\ast }.$ This concludes the proof of Proposition \ref{Prop_reneg}.
% Figure 5: Seller's payoffs after early renegotiation in Case (B), where r1>=r1*.
% This figure shows the seller's payoffs after early renegotiation, assuming she
% overinvests (r1>=r1*), for various values of the first entrant's cost cE1.
% Early breach always occurs for low values of cE1 and never occurs for high values of cE1.
% For intermediate values of cE1, early breach occurs absent early renegotiation
% but does not occur with early renegotiation.

\bigskip

To summarize, any contract $(p,x_{1},x_{2})$ where $x_{1}$ and $x_{2}$ are the efficient expectation damages specified in (\ref{x1CE}) and (\ref{x2CE}) will induce S to choose the ex-ante efficient early investment $r_{1}^{\ast }$. \ The work above shows that early renegotiation then leads to B making the efficient early breach decision, S making the efficient late investment, and B making the efficient late breach decision.

Proposition \ref{Prop_reneg} says that the same contract that implements the efficient outcome when renegotiation is not possible also implements the efficient outcome when renegotiation is possible. \ Therefore, renegotiation will not occur on the equilibrium path. \ Nevertheless, it is crucial in establishing Proposition \ref{Prop_reneg}\ to consider the payoffs of the parties from choices made off the equilibrium path.

\section{An Application\label{Sec_Application}}

Consider once again the model without renegotiation or mitigation effort (so that the probability of finding an alternate buyer is exogenous). \ One application of this model is to study the way in which hotels structure their fees for cancellation of a reservation. \ There are usually different cancellation policies for reservations during the high season versus the low season. \ For example, the following is a summary of the deposit and cancellation policies of The Lodge at Vail, a ski resort in Vail, Colorado.\footnote{See http://lodgeatvail.rockresorts.com.
For the cancellation policy, see http://lodgeatvail.rockresorts.com/info/rr.fees.asp.}

\begin{quotation}
\textit{Deposit Policies}: In the winter season, a 50\% deposit is due at the time of booking. \ The remaining balance is then due 45 days prior to the arrival. \ In spring, summer, and fall seasons, no deposit is required.

\textit{Cancellation Policies}: In the winter season, a full refund, less the first night's room and tax, will be given if reservations are cancelled more than 45 days prior to arrival. \ However, there will be a full forfeiture of the entire reservation value if cancelling within 45 days of arrival. \ In spring, summer, and fall seasons, one night's deposit will be forfeited if cancellation occurs within 24 hours of arrival.\footnote{Even though no deposit is required at the time a reservation is made in the spring, summer, or fall season, the price of one night's stay is still charged to the guest if cancellation occurs within 24 hours of arrival.}
\end{quotation}

In the case of The Lodge at Vail, the penalties for breach of contract (cancelling the reservation) are increasing as one approaches the date of performance (start of the reserved stay), regardless of the time of the year. \ Furthermore, presumably because of higher demand in the winter season for ski resorts, the difference between the penalties for cancelling late and cancelling early is larger during the winter than during other times of the year (ignoring the seasonal difference in the definitions of what constitutes a late breach). \ This choice of breach damages is consistent with the assumption that it is impossible (or in general, more difficult) to find an alternate buyer if breach occurs late, and the fact that it is easier (by definition) to find an alternate buyer in case of early breach during the high season than the low season.
In order to precisely apply the model to this lodging industry example, the parameter $\theta $ should, strictly speaking, be interpreted as the probability of finding an alternate buyer/guest (upon early breach) to fill the \emph{same} room that was vacated by the incumbent buyer/guest who breached the original contract. \ (For example, the seller/hotel may be booked to capacity at the time that the original contract is breached.) \ Otherwise, without a binding capacity constraint, the seller may be able to accommodate another buyer\ even if early breach does not occur. \ Note that the seller/hotel is less likely to be booked to capacity during the low season than during the high season, which is consistent with $\theta $ being lower during the low season. \ Furthermore, whether breach is considered late or early in the low season depends on whether it occurs within 24 hours prior to arrival; whereas during the high season breach is considered late if it occurs within 45 days prior to arrival. \ The shorter prior notice requirement for early breach during the low season is also consistent with $\theta $ being lower during the low season.

To formalize the connection between the Lodge at Vail example and the model, suppose that the price of the entire reserved stay can be written as $np^{s},$ where $p^{s}$ is the price per night, with $s\in \{H,L\}$ denoting the season, and $n$ is the number of nights. \ Assume that the price is higher during the high season than during the low season, or $p^{H}>p^{L}$ (presumably, short-run supply in the lodging industry is fixed), and that the stay is for $n>\frac{p^{L}}{p^{H}}+1$ nights. \ Then the Lodge at Vail's policy is such that during the high (winter) season, $x_{2}^{H}-x_{1}^{H}=np^{H}-p^{H}=(n-1)p^{H},$ which exceeds the analogous difference $x_{2}^{L}-x_{1}^{L}=p^{L}-0=p^{L}$ during the low season. \ Thus this example is consistent with the second inequality in Corollary \ref{Corollary_x2-x1>0}.
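The arithmetic of the seasonal comparison can be sketched in a few lines. The per-night prices below are hypothetical, not the Lodge at Vail's actual rates; the point is only that the stated condition $n>p^{L}/p^{H}+1$ makes the high-season gap $(n-1)p^{H}$ exceed the low-season gap $p^{L}$.

```python
# Hypothetical per-night prices; the seasonal comparison of x2 - x1 from the text.

def gap_high(n, pH):
    # x2^H - x1^H = n*pH - pH = (n - 1)*pH: forfeit the whole stay vs one night
    return (n - 1) * pH

def gap_low(pL):
    # x2^L - x1^L = pL - 0 = pL: forfeit one night vs nothing
    return pL

pH, pL = 300.0, 150.0        # high-season price exceeds low-season price
n = 3                        # n > pL/pH + 1 = 1.5 nights
assert gap_high(n, pH) > gap_low(pL)   # 600.0 > 150.0
```

The inequality $(n-1)p^{H}>p^{L}$ is exactly $n>p^{L}/p^{H}+1$ after dividing by $p^{H}$, so any stay longer than that cutoff satisfies the comparison.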
\ Note that the Lodge at Vail's policy also satisfies $x_{2}^{H}=np^{H}>p^{L}=x_{2}^{L},$ i.e., the penalty for cancelling a reservation at the last minute is larger in the high season than in the low season. \ If the model formally accounts for seasonal variations in the contract price, then this observation would again be consistent with the model's predicted efficient expectation damages for late breach. \ (This claim follows from replacing $p$ with $p^{H}$ and $p^{L}$ in (\ref{x2CE}) and noting that $(r_{1}^{\ast },r_{2}^{\ast }(r_{1}^{\ast }))$ do not depend on $p$.)

Finally, observe that both results in Corollary \ref{Corollary_x2-x1>0} could have been obtained even if the seller does not make any investments, or if she only invests before the first breach decision. \ If the seller only invests before the first breach decision, efficient investment and breach decisions can be induced by $x_{2}=p-c(r_{1}^{\ast })$ and $x_{1}=p-c(r_{1}^{\ast })-\theta \lbrack v-c(r_{1}^{\ast })]$ so that $x_{2}-x_{1}=\theta \lbrack v-c(r_{1}^{\ast })]>0$. \ Similarly, if the seller does not make any investments ($r_{1}^{\ast }\equiv 0$), efficient breach decisions can still be induced with $(x_{1},x_{2})$ satisfying $x_{2}-x_{1}=\theta \lbrack v-c(0)]>0$. \ Therefore, an empirical investigation is necessary to determine whether, and how, a seller's investments affect the difference in her chosen penalties for late breach versus early breach in reality. \ However, regardless of whether, and when, the seller makes investments, the models predict that the difference in the penalties for late breach versus early breach, $x_{2}-x_{1},$ is increasing in $\theta ,$ the likelihood of finding an alternate buyer if breach occurs early.

\section{Conclusion \label{Sec_Conclusion}}

This paper studies optimal liquidated damages when breach of contract is possible at multiple points in time.
\ It suggests that when the potentially breached-against party makes sequential investment decisions, efficient breach damages should increase over time so as to make the potentially breaching party internalize those increasing opportunity costs. \ This provides an intuitive explanation for why fees for cancelling some service contracts, such as hotel reservations, tend to increase as the time for performance approaches.

Furthermore, when the investing party may be able to find an alternate trading partner when breach occurs early but not when breach occurs late, it is shown that the amount by which the damages for late breach exceed the damages for early breach is increasing in the probability of finding an alternate trading partner. \ This provides one possible explanation for why hotels tend to charge larger penalties for late cancellation of high-season reservations than late cancellation of low-season reservations.

When an incumbent seller, as the potentially breached-against party, can affect the probability of finding an alternate buyer, her private incentives to mitigate breach damages are shown to be socially insufficient whenever she does not have full bargaining power vis-a-vis the alternate buyer. \ This is because while mitigation costs are always borne entirely by the incumbent seller, the benefits of mitigation are shared whenever the alternate buyer has some bargaining power. \ However, if breach is defined as not only a function of whether the incumbent buyer refuses trade, but also a function of whether the incumbent seller is able to trade with an alternate buyer, then the incumbent seller's mitigation incentives may be insufficient even if she has full bargaining power with the alternate buyer.
Finally, it is shown that when the incumbent buyer and seller are able to renegotiate their original contract after the arrival of each perfectly competitive entrant, the socially efficient breach and investment decisions can still be implemented with the same efficient expectation damages that implement the first best outcome absent renegotiation.

\begin{thebibliography}{99}

\bibitem{} Aghion, Philippe and Patrick Bolton. \ \textquotedblleft Contracts as a Barrier to Entry.\textquotedblright\ \ \textit{American Economic Review}, Vol. 77 (1987), pp. 388--401.

\bibitem{} Chan, Alan and Tai-Yeong Chung. \ \textquotedblleft Contract Damages and Investment Dynamics.\textquotedblright\ \ Working paper (2005).

\bibitem{} Che, Yeon-Koo and Tai-Yeong Chung. \ \textquotedblleft Contract Damages and Cooperative Investments.\textquotedblright\ \ \textit{RAND Journal of Economics}, Vol. 30 (1999), pp. 84--105.

\bibitem{} Chung, Tai-Yeong. \ \textquotedblleft On the Social Optimality of Liquidated Damage Clauses: An Economic Analysis.\textquotedblright\ \ \textit{Journal of Law, Economics, \&\ Organization}, Vol. 8 (1992), pp. 280--305.

\bibitem{} Goetz, Charles J. and Robert E. Scott. \ \textquotedblleft The Mitigation Principle: Toward a General Theory of Contractual Obligation.\textquotedblright\ \ \textit{Virginia Law Review}, Vol. 69 (1983), pp. 967--1024.

\bibitem{} Miceli, Thomas J., C.F. Sirmans, and Geoffrey Turnbull. \ \textquotedblleft The Duty to Mitigate Damages in Leases: Out with the Old Rule and in with the New.\textquotedblright\ \ Working paper (2001).

\bibitem{} Rogerson, William P. \ \textquotedblleft Efficient Reliance and Damage Measures for Breach of Contract.\textquotedblright\ \ \textit{RAND Journal of Economics}, Vol. 15 (1984), pp. 39--53.

\bibitem{} Shavell, Steven. \ \textquotedblleft Damage Measures for Breach of Contract.\textquotedblright\ \ \textit{Bell Journal of Economics}, Vol. 11 (1980), pp. 466--490.

\bibitem{} Spier, Kathryn E. and Michael D. Whinston.
\ \textquotedblleft On the Efficiency of Privately Stipulated Damages for Breach of Contract: Entry Barriers, Reliance, and Renegotiation.\textquotedblright\ \ \textit{RAND\ Journal of Economics}, Vol. 26 (1995), pp. 180--202.

\bibitem{} Stole, Lars A. \ \textquotedblleft The Economics of Liquidated Damage Clauses in Contractual Environments with Private Information.\textquotedblright\ \ \textit{Journal of Law, Economics, \&\ Organization}, Vol. 8 (1992), pp. 582--606.

\bibitem{} Triantis, Alexander J. and George G. Triantis. \ \textquotedblleft Timing Problems in Contract Breach Decisions.\textquotedblright\ \ \textit{Journal of Law and Economics}, Vol. 41 (1998), pp. 163--207.

\end{thebibliography}

\end{document}
Posts about batch means on Xi'an's Og

Richard Everitt tweeted yesterday about a recent publication in JCGS by Rajib Paul, Steve MacEachern and Mark Berliner on convergence assessment via stratification. (The paper is free-access.) Since this is another clear interest of mine, I had a look at the paper in the train to Besançon. (And wrote this post as a result.)

The idea therein is to compare the common empirical average with a weighted average relying on a partition of the parameter space: restricted means are computed for each element of the partition and then weighted by the probability of the element. Of course, those probabilities are generally unknown and need to be estimated simultaneously. If applied as is, this idea reproduces the original empirical average! So the authors use instead batches of simulations and corresponding estimates, weighted by the overall estimates of the probabilities, in which case the estimator differs from the original one. The convergence assessment is then to check that both estimates are comparable, using for instance Galin Jones's batch method, since they have the same limiting variance. (I thought we mentioned this damning feature in Monte Carlo Statistical Methods, but cannot find a trace of it except in my lecture slides…)

The difference between both estimates is the addition of weights p_[in]/q_[ijn], made of the ratio of the estimates of the probability of the ith element of the partition. This addition thus introduces an extra element of randomness in the estimate and this is the crux of the convergence assessment. I was slightly worried though by the fact that the weight is in essence an harmonic mean, i.e. 1/q_[ijn]/Σ q_[imn]… Could it be that this estimate has no finite variance for a finite sample size? (The proofs in the paper all consider the asymptotic variance using the delta method.) However, having the weights adding up to K alleviates my concerns.
Of course, as with other convergence assessments, the method is not fool-proof in that tiny, isolated, and unsuspected spikes not (yet) visited by the Markov chain cannot be detected via this comparison of averages.
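The scheme is easy to mock up. The sketch below is my own rough reading of the idea, not the authors' code: it partitions the space into two strata, estimates the stratum probabilities from the whole run, forms batch-wise weighted restricted means, and compares the result with the plain ergodic average (here on i.i.d. draws standing in for a well-mixed chain).

```python
# Rough illustration (my own reading, not the authors' implementation) of
# comparing the plain ergodic average with a stratified batch-based average.
import random

random.seed(0)
chain = [random.gauss(0.0, 1.0) for _ in range(10000)]  # stand-in for MCMC draws

def plain_average(xs):
    return sum(xs) / len(xs)

def stratified_batch_average(xs, n_batches=50, cut=0.0):
    # Two strata split at `cut`; overall stratum probabilities estimated once,
    # restricted means estimated batch by batch, then averaged over batches.
    m = len(xs) // n_batches
    p_lo = sum(1 for x in xs if x < cut) / len(xs)
    p_hi = 1.0 - p_lo
    totals = []
    for j in range(n_batches):
        batch = xs[j * m:(j + 1) * m]
        lo = [x for x in batch if x < cut]
        hi = [x for x in batch if x >= cut]
        if not lo or not hi:
            continue  # this batch never visited one stratum; skip it
        totals.append(p_lo * sum(lo) / len(lo) + p_hi * sum(hi) / len(hi))
    return sum(totals) / len(totals)

# Diagnostic: the two estimates should be close for a well-mixed chain.
gap = abs(plain_average(chain) - stratified_batch_average(chain))
assert gap < 0.1
```

A chain stuck in one stratum would make the two estimates diverge (or leave batches that never visit a stratum), which is exactly the kind of behaviour the comparison is meant to flag — subject, as noted above, to spikes the chain has not visited at all.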
Boston ACT Tutor

Find a Boston ACT Tutor

...I am currently studying computer science at MIT, but my academic experience extends to subjects offered in liberal arts as well as technically focused high schools. To complement the technical knowledge that I have accumulated and continue to gain, I have a strong background in academic communic...
38 Subjects: including ACT Math, chemistry, English, reading

...I tutor in the sciences, reading, writing, English, algebra, and test prep (SAT/ACT) for middle/high school students and all subjects for elementary students. I have extensive experience working with children of all ages through various volunteer programs. As a strong, results driven motivator,...
29 Subjects: including ACT Math, chemistry, English, study skills

...I also took organic chemistry. I scored As in Geometry. Received A's in high school and college and a 5 in AP Physics C. Precalculus: A, and 790 in Math on the SAT. I received an A in accounting when in school for my MBA.
42 Subjects: including ACT Math, chemistry, calculus, English

...I received my Bachelor Degree in Economics in 2012. I have been tutoring since my first undergraduate year and have five years tutoring experience now. I am very good at teaching math and Chinese, as well as TOEFL sections for reading and vocabulary. I am now studying in the Economics PhD program, and have been tutoring Macroeconomics and Microeconomics in the past year.
23 Subjects: including ACT Math, calculus, statistics, GRE

...Santa Barbara, ranked 33rd in the World's Top 200 Universities (2013), and graduated with a degree in Communication. I've been tutoring for the past 9 years and have worked with students at many different levels. My unique approach to tutoring involves working with each student according to their individual learning style.
27 Subjects: including ACT Math, reading, English, writing
Sparse ramsey theory

It is known that for any graph H and all $k∈N$, there exists a graph $G$ such that any $k$-coloring of the edges of $G$ yields a monochromatic copy of H and ω(G)=ω(H) (the two graphs have the same clique numbers).

My question is: Given any graph $H$ with finite girth, is there a $G$ with the same girth as $H$ such that any $2$-coloring of the edges of $G$ yields a monochromatic copy of $H$?

I think this is an open problem but if someone can confirm that and give some references concerning this I would be most obliged.
Meeting Details For more information about this meeting, contact Jan Reimann or Stephen Simpson. Title: Cone Avoidance for Ramsey's Theorem for Pairs Seminar: Logic Seminar Speaker: Keita Yokoyama, Tokyo Institute of Technology and Pennsylvania State University Determining the strength of Ramsey's theorem for pairs (RT^2_2) is a long-standing problem in Computability Theory and Reverse Mathematics. The following two theorems are significant contributions to this problem. In the 1990s, Seetapun proved the cone avoidance theorem for Ramsey's theorem for pairs, which implies that RT^2_2 is strictly weaker than ACA_0. In 2001, Cholak, Jockusch and Slaman constructed a low_2 omega-model of RT^2_2 by using Mathias forcing, and showed that the first-order part of RT^2_2 is weaker than the system of Sigma_2-induction. In this talk, I will introduce another proof of Seetapun's theorem given by Dzhafarov and Jockusch, which is based on the Mathias forcing used for the latter theorem. Room Reservation Information Room Number: MB315 Date: 11 / 29 / 2011 Time: 02:30pm - 03:45pm
Mathematical Sciences Graduate Course Descriptions MTSC-500. FOUNDATIONS OF MATHEMATICS 3:3:0 This course is specifically designed to bridge undergraduate and graduate study in mathematics. It is an introduction to abstract ideas, proofs, set theory, relations, and number systems and their connections. Prerequisites: MTSC-252. Credit, three hours. MTSC-503. MATHEMATICS TEACHING METHODS I 3:3:0 This course is the first of a two (2) part sequence designed to provide weighty consideration of some of the major topics in middle and secondary school mathematics education. Emphasis will be on epistemological, pedagogical, social, psychological, effective teaching, classroom management, and cultural concerns as well as the teaching profession. This course is also a study of methods and materials used in teaching mathematics and will expose students to current educational theory and reform organizations. Through research, practice, and presentations, students will take an active role in the instruction and development of materials for this course. Prerequisites: MTSC-252, MTSC-313, MTSC-341, MTSC-241 and MTSC-203. Credit, three hours. MTSC-504. MODERN GEOMETRY 3:3:0 The course covers Menelaus and Ceva’s Theorem, Cross Ratio, Elementary Transformations, Euclidean Constructions, and Non-Euclidean Geometry. The course illustrates to the students the strength of deductive reasoning in proofs involving Euclidean axioms and transformation theory. The student will also become familiar with Non-Euclidean Geometry. Prerequisites: MTSC-303 with minimum grade of "C". Credit, three hours. MTSC-505. MATHEMATICAL LOGIC 3:3:0 The course is designed to examine the logical foundations of mathematics. Formal systems are shown to model real-life relationships, and these formal systems are studied and analyzed using mathematical methods and rigor. The results of the study show both the inherent limitation of reasoning and at the same time the richness of what can be expressed and proven.
Prerequisites: MTSC-251, MTSC-313. Credit, three hours. MTSC-511. INTRODUCTION TO ABSTRACT ALGEBRA 3:3:0 The course is concerned with the basic theory of some of the important algebraic systems such as groups, rings and fields, with emphasis on homomorphism, isomorphism, integral domains, extension fields, and Galois groups. Credit, three hours. MTSC-521. GENERAL TOPOLOGY 3:3:0 The purpose of the course is to give the students the basic concepts of topology and lead them to algebraic topology. The course is also presented as a discipline related to the proper understanding of various branches of analysis and geometry. The students should become familiar with topological spaces, point-set topology and homotopy theory. Prerequisites: MTSC-451, MTSC-452. Credit, three hours. MTSC-531. NUMBER THEORY 3:3:0 The course, Number Theory, is an introduction to the study of basic properties of integers which allows one to demonstrate how various areas of mathematics play a role in the study of properties of natural numbers. The course is flexible and fundamental enough to be taken by Math and Math Ed majors. Credit, three hours. MTSC-541. ADVANCED PROBABILITY THEORY 3:3:0 The course covers the mathematical structure of probability theory with applications of the theory from a wide variety of experimental situations. Prerequisites: MTSC-253 with a minimum grade of "C". Credit, three hours. MTSC-551. ORDINARY DIFFERENTIAL EQUATIONS 3:3:0 The purpose of the course is to present techniques of solving ordinary differential equations. The students should become familiar with Boundary Value Problems, Systems of Ordinary Differential Equations, and Phase Diagrams. Prerequisites: MTSC-351. Credit, three hours. MTSC-561. REAL ANALYSIS I 3:3:0 The purpose of the course is to cover the basic material that every graduate should know in the classical theory of functions of a real variable and in measure and integration theory.
To provide the students with the background in those parts of modern mathematics which have their roots in the classical theory of functions of a real variable. These include the classical theory of functions of a real variable itself, measure and integration, point-set topology, and the theory of normed linear spaces. Prerequisites: MTSC-402 with a minimum grade of "C", or its equivalent. Credit, three hours. MTSC-562. REAL ANALYSIS II 3:3:0 This course is the extension of Real Analysis I. The purpose of the course is to further provide students the background of modern mathematics. The course covers the theories of (improper) Riemann integrals and a brief introduction to Lebesgue integrals, the theories of pointwise and uniform convergence of sequences of functions, and the theories of infinite series of functions. Prerequisites: MTSC-561 with minimum grade of "C", or its equivalent. Credit, three hours. MTSC-571. COMPLEX ANALYSIS 3:3:0 This is a first-semester course at the graduate level in the field of Functions of one (1) Complex Variable. The rigorous approach adopted herein will set a firm foundation for leading the students to the next level of Complex Analysis, prepare the student for further studies in the field of Complex Analysis, and provide the students with sufficient background for various applications of Complex Analysis in physical and engineering disciplines. Prerequisites: MTSC-471. Credit, three hours. MTSC-621. FUNCTIONAL ANALYSIS 3:3:0 The course gives students an introduction to Metric Spaces, Hilbert Spaces, and Banach Spaces with emphasis on Hilbert Spaces. Prerequisites: MTSC-561. Credit, three hours. MTSC-631. OPERATIONS RESEARCH 3:3:0 The course is designed to expose students in computer science to linear, nonlinear, and integer programming, the simplex method, the duality theorem, transport and other application problems, and different optimization methods and techniques.
The topics to be covered include: Optimization problems; the subject of Operations Research; Linear programming; Simplex method and duality theorem; Integer programming; Nonlinear programming; Optimization techniques; Applications; and MATLAB Optimization Toolbox. Credit, three hours. MTSC-641. COMBINATORICS 3:3:0 The student will be introduced to the theory involved in combinatorial reasoning. The two (2) combinatorial theories of enumeration and graph theory will be developed. Students will apply combinatorial reasoning to problems in the analysis of computer systems, in discrete operations research, and in finite probability. Credit, three hours. MTSC-643. STATISTICS 3:3:0 The course provides students with the fundamental theory of statistics. The students will become familiar with descriptive and inferential statistical methods, theory, and applications. Prerequisites: MTSC-541 with minimum grade of "C". Credit, three hours. MTSC-651. PARTIAL DIFFERENTIAL EQUATIONS 3:3:0 The course is designed to acquaint students with Classifications of Partial Differential Equations and Methods of Solution for the Wave Equation, Laplace’s Equation, and the Heat Equation. Prerequisites: A second course in Ordinary Differential Equations. Credit, three hours. MTSC-661. NUMERICAL ANALYSIS 3:3:0 The student should become familiar with advanced techniques for solving numerically large problems in Linear Algebra. In particular, students should become familiar with the effects of ill conditioning, and with ways in which special information about matrices, such as sparsity, can be used. An important part of all of this is the consideration of error from various sources and ways of controlling its accumulation. Prerequisites: MTSC-313. Credit, three hours. MTSC-699. THESIS OR DIRECTED PROJECT 3:3:0 A student may register for three (3) or six (6) hours of thesis with the approval of his/her thesis advisor. Credit, three to six hours. (c) Copyright 2010 DSU CMNST, Dover, Delaware 19901.
All rights reserved.
Homework Help Posted by Sam on Sunday, October 28, 2012 at 12:54pm. The differential gear of a car axle allows the wheel on the left side of a car to rotate at a different angular speed than the wheel on the right side. A car is driving at a constant speed around a circular track on level ground, completing each lap in 19.9 s. The distance between the tires on the left and right sides of the car is 1.63 m, and the radius of each wheel is 0.340 m. What is the difference between the angular speeds of the wheels on the left and right sides of the car?

• Physics - bobpursley, Sunday, October 28, 2012 at 1:46pm
linear velocity = w*radius
inner velocity = w*r
outer velocity = w*(r + 1.63)
So the difference in linear velocities at the track is outer - inner = w*(r + 1.63 - r) = 1.63w.
Now consider the tires: they have to match those linear velocities at the pavement, so the difference in tire speeds is also 1.63w.
Inner tire: v_inner = wit*radtire; outer tire: v_outer = wot*radtire.
So the difference in the angular speeds of the two tires is:
difference = wot - wit = 1.63w/radtire = 2*Pi*1.63/(19.9*0.340)

• Physics - Jennifer, Sunday, October 28, 2012 at 1:53pm
w = 2*pi*f, f = 1/19.9
where f is the frequency of the car going around the track, w is the angular speed around the track, pi is 3.14
w = 2*pi/19.9 = 0.315
The speed traveled by one tire is w*r, where r is the radius of that tire's circular path.
The speed traveled by the other tire is w*(r + 1.63).
The difference in speeds of the wheels is
w*(r + 1.63) - w*r = 1.63w = 0.513 = w(tire)*r(tire)
where w(tire) is the difference in the angular speeds of the tires, and r(tire) is the radius of the tire
w(tire)*0.340 = 0.513
Solve for w(tire)
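As a quick numerical check of the answers above (a sketch only; the variable names are mine, not from the thread):

```python
import math

T = 19.9            # lap time, s
track_width = 1.63  # distance between left and right tires, m
r_wheel = 0.340     # wheel radius, m

# Angular speed of the car around the track
w_car = 2 * math.pi / T

# The outer side travels a circle 1.63 m larger in radius, so its
# linear speed exceeds the inner side's by w_car * track_width.
dv = w_car * track_width

# Each wheel's angular speed is its linear speed over the wheel radius,
# so the difference in wheel angular speeds is dv / r_wheel.
dw_wheels = dv / r_wheel

print(round(w_car, 3), round(dv, 3), round(dw_wheels, 2))  # 0.316 0.515 1.51
```

This agrees with the intermediate value 1.63w ≈ 0.513 m/s (using the rounded w above) and gives about 1.51 rad/s for the difference in wheel angular speeds.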
I am trying to parametrize and find a family of solutions to some equations. (I am using the solve([eqns],vars) function.) Unfortunately, the equations are just complicated enough that rather than parametrize, Sage gives up and outputs the equations themselves. Here is a (partial) particular example that I had in mind. My real equation is actually a bit more complicated, but this is a point where it went from solving to simply spitting out the original equations. Here is the output: 10 equations, and 12 unknowns.

{s0 + s2 + s4: 1}
{s5: 0}
{s0 + s1 + s2 + s3 + s4 + s5: 2}
{w0: s5}
{w1: -s4}
{w2: s3}
{(s0*w0 + s1*w1)*(s0*w0 + s1*w1 + s2*w2): s2*w0 + s3*w1}
{w3: -s2}
{w4: s1}
{w5: …}

Is there anything I can do? I've read the solve and x.solve pages, but I don't see a clear method I should try...

Edit: I'm looking for the simplest representation of this system of equations. I would like Sage to parametrize a few of the variables and solve for the others. For instance, for a simple linear system the output would be: a=1-r1, b=1, c=r1. Sage can do this using the solve function for small simple situations, even if they're non-linear equations. I want it to classify the family of all possible (real) solutions which satisfy all of the equations simultaneously, but it stops when I have a complicated system of equations. In essence I want it to "solve" the system of equations. Clearly this requires a few parameters, since there are more variables than equations... and since it's not a linear system I can't just ask a linear algebra student to do it. ;-)

There is no closed form for the roots of a polynomial in general. So we'll have to assume that you can somehow compute roots. One nice trick to get a parametrization is to compute a Groebner basis in lexicographic order. You need to put your parameters last in the order of variables.
Say,

sage: R.<x,y,z> = PolynomialRing(QQ, order='lex')
sage: I = R.ideal([5*x - 1/3*y^2 - 2*y*z + y - 5/4*z^2, -3/2*x - 1/13*y*z + 4])

These are two (random) polynomial equations in three variables, so we expect the solution to be a curve. Indeed, it is:

sage: I.dimension()
1

It is also an irreducible curve, that is, not two curves that are disjoint or intersect in a point:

sage: I.primary_decomposition()
[Ideal (52*y^2 + 352*y*z - 156*y + 195*z^2 - 2080, 39*x + 2*y*z - 104) of Multivariate Polynomial Ring in x, y, z over Rational Field]

The Groebner basis in the given lexicographic order is:

sage: I.groebner_basis()
[x + 2/39*y*z - 8/3, y^2 + 88/13*y*z - 3*y + 15/4*z^2 - 40]

The last variable (in the lexicographic order) is z, which we take to be the parameter. The second equation in the Groebner basis depends only on y and z, so if you plug in a value for z then it is a polynomial equation in a single variable. Solving this univariate polynomial equation yields multiple (in this case, two) solutions for y. Then plug y, z into the first equation of the Groebner basis. You get one polynomial equation in x, which you have to solve again. Hence, you have determined y(z) and x(y(z),z), that is, parametrized your curve by z.

posted Mar 05 '11 Volker Braun

Thank you very much Volker. This is very clear, and is exactly what I was looking for. David Ferrone (Mar 05 '11)

If you have an underdetermined polynomial system of equations, you should probably think of the solution as an algebraic variety.

sage: AA.<x,y> = AffineSpace(QQ, 2)
sage: X = AA.subscheme(x^2+y^2-1)
sage: X
Closed subscheme of Affine Space of dimension 2 over Rational Field defined by:
x^2 + y^2 - 1
sage: X.dimension()
1

It's not clear to me what you mean by "solve"... You should rephrase your question as either a geometric property of the variety or an algebraic property of the defining ideal.

posted Feb 24 '11 Volker Braun
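The Groebner-basis trick above is not Sage-specific; here is a sketch of the same calculation in SymPy (assuming SymPy is available; the ideal and the lex ordering with z last are taken from the answer):

```python
from sympy import symbols, groebner, Rational

x, y, z = symbols('x y z')
eqs = [
    5*x - Rational(1, 3)*y**2 - 2*y*z + y - Rational(5, 4)*z**2,
    -Rational(3, 2)*x - Rational(1, 13)*y*z + 4,
]

# Lexicographic order with the intended parameter (z) last.
G = groebner(eqs, x, y, z, order='lex')
for g in G.exprs:
    print(g)
# The reduced basis matches Sage's output:
#   x + 2*y*z/39 - 8/3
#   y**2 + 88*y*z/13 - 3*y + 15*z**2/4 - 40
```

Since the reduced Groebner basis for a fixed monomial order is unique, any computer algebra system should reproduce the basis shown in the answer.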
[R] Re: Seasonal ARMA model Ajay Shah ajayshah at mayin.org Sun Jul 4 10:45:37 CEST 2004 > It might clarify your thinking to note that a seasonal ARIMA model > is just an ``ordinary'' ARIMA model with some coefficients > constrained to be 0 in an efficient way. E.g. a seasonal AR(1) s = > 4 model is the same as an ordinary (nonseasonal) AR(4) model with > coefficients theta_1, theta_2, and theta_3 constrained to be 0. You > can get the same answer as from a seasonal model by using the > ``fixed'' argument to arima. E.g.: x <- arima.sim(list(ar=c(0,0,0,0.5)),300) f1 = arima(x,seasonal=list(order=c(1,0,0),period=4)) f2 = arima(x,order=c(4,0,0),fixed=c(0,0,0,NA,NA),transform.pars=FALSE) Is there a convenient URL which shows the mathematics of the seasonal ARMA model, as implemented by R? I understand f2 fine. I understand that you are saying that f1 is just an AR(4) with the lags 1,2,3 constrained to 0. But I'm unable to generalise this. What would be the meaning of mixing up both order and seasonal? E.g. what would it mean to do something like: Ajay Shah Consultant ajayshah at mayin.org Department of Economic Affairs http://www.mayin.org/ajayshah Ministry of Finance, New Delhi More information about the R-help mailing list
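For what it's worth (this is my summary of the usual convention, not from the thread; see help(arima) in R for the exact signs it uses), the multiplicative seasonal ARIMA(p,d,q)(P,D,Q)_s model is

    \phi(B)\,\Phi(B^s)\,(1 - B)^d\,(1 - B^s)^D x_t = \theta(B)\,\Theta(B^s)\,\varepsilon_t

where B is the backshift operator and \phi, \theta, \Phi, \Theta are polynomials of degree p, q, P, Q respectively. Mixing order and seasonal therefore multiplies the lag polynomials. For example, order=c(1,0,0) together with seasonal=list(order=c(1,0,0), period=4) gives

    (1 - \phi_1 B)(1 - \Phi_1 B^4) x_t = \varepsilon_t,

that is, an AR(5) whose lag coefficients are \phi_1, 0, 0, \Phi_1, -\phi_1 \Phi_1: five lags but only two free parameters, with the lag-5 coefficient constrained to be minus the product of the other two.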
Should numpy.sqrt(-1) return 1j rather than nan? Greg Willden gregwillden at gmail.com Thu Oct 12 10:19:05 CDT 2006 May I suggest the following change to generate_umath.py? <completely untested string change> 'sqrt' : Ufunc(1, 1, None, 'square-root elementwise. For real x, the domain is restricted to For complex results for x<0 see numpy.scimath.sqrt', TD(inexact, f='sqrt'), TD(M, f='sqrt'), When sqrt throws a ValueError would it be possible/appropriate for the error message to mention numpy.scimath.sqrt? I'm just trying to think of other ways to help make the transition as smooth as possible for new users. More information about the Numpy-discussion mailing list
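The split being proposed here, a real-domain sqrt that errors on negatives versus a complex-aware one, mirrors the math/cmath split in the Python standard library, which makes a handy illustration (stdlib only, no numpy required):

```python
import math
import cmath

# Real-domain square root: negative input is an error.
try:
    math.sqrt(-1)
except ValueError as exc:
    print("math.sqrt(-1) raises:", exc)  # math domain error

# Complex-domain square root: negative input is fine.
print(cmath.sqrt(-1))  # 1j
```

The thread's suggestion is the analogous split: keep numpy.sqrt real-valued and loud about negative inputs, and point users at the complex-aware numpy.scimath.sqrt.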
Here is a list of all of the skills students learn in kindergarten! These skills are organized into categories, and you can move your mouse over any skill name to view a sample question. To start practicing, just click on any link. IXL will track your score, and the questions will automatically increase in difficulty as you improve!
Wesleyan University Mathematics and Computer Science Department 265 Church Street Middletown, CT 06459 USA Office: 609 Exley Science Center Tel: (860) 685-2167 E-mail: dconstantine[at]wesleyan[dot]edu My research is in the areas of geometry and dynamical systems. In particular, I am interested in how dynamics can be used to solve geometric problems. My interests include: differential geometry, geometric flows, symmetric and homogeneous spaces, homogeneous dynamics, rigidity theory, Lie groups and representation varieties.
Fort Gillem, GA Algebra 2 Tutor Find a Fort Gillem, GA Algebra 2 Tutor ...Students often wonder just why they are required to take something as (apparently) frivolous as a philosophy class. For some students, it’s hard to see the point of studying something like philosophy, because they don’t see anything that they can do with it. I like to begin my answer to those s... 9 Subjects: including algebra 2, English, reading, writing ...Analytic geometry is just a fancy name for graphing. You probably did plenty of it in algebra. It is a handy way to deal with equations. 6 Subjects: including algebra 2, geometry, algebra 1, trigonometry I earned a double major in Physics and Mathematics from Berea College in Berea, Kentucky. My plan is to earn a Ph.D in either Physics or Electrical Engineering in the future. Berea has a work-study program which requires all students to work a number of hours throughout the week. 13 Subjects: including algebra 2, calculus, geometry, algebra 1 ...I moved to Georgia and started teaching in 1986 and haven't looked back, earning my M.Ed. in science from Georgia State. While I enjoy being a classroom teacher, I find the most rewarding part of my work to be one-on-one instruction. I have achieved a lot of success with it. 7 Subjects: including algebra 2, chemistry, biology, algebra 1 ...Areas to be covered may include: use and choice of verb tense, generic and specific nouns, modal verbs, conditionality, word formation, prepositions, comparison, and sentence structure. Geometry focuses on the comparisons between geometric figures such as triangles, polygons, circles, and parall... 
9 Subjects: including algebra 2, geometry, algebra 1, GED
Fractal Geometry in Medical Science: Beginning of a New Era? Fractal Geometry in Medical Science: Beginning of a New Era? In present days, many scientists strongly hold the opinion that fractal geometry is a revolutionary area of The Technology of Nature ® FRACTUS, S.A., Parque Emp. Sant Joan. Sant Cugat del Vallès. 08190 Barcelona. Spain Tel. +34 935 442 690 Fax +34 935 442 691, E-mail FRACTAL d.o.o. Split, Croatia Company FRACTAL d.o.o. Split was founded in 1994, with power system engineering software development as a primary business area. Today, there are four main areas Fractal Geometry and its Applications Fractals • Fractals were discovered by Benoit Mandelbrot to measure roughness • Fractals can be described as – Broken – Fragmented – Irregular. FRACTAL GEOMETRY Introduction to Fractal Geometry FRACTAL GEOMETRY Introduction to Fractal Geometry Fractal geometry is based on the idea of self-similar forms. To be self-similar, a shape must be able to be divided Origin of Fractal Branching Complexity in the Lung Origin of Fractal Branching Complexity in the Lung STEPHEN H. BENNETT1, MARLOWE W. ELDRIDGE2, CARLOS E. PUENTE3, RUDOLF H. RIEDI4, THOMAS R. NELSON5, BOYD W The Fractal Planning Solution – Jim Stone, PhD. Clear Mind, Effective Action. How to get more done with less stress in a world of ever-increasing complexity using the organizing power of fractals Sarah Atchison Dr. Michael Pilant Sarah Atchison Dr. Michael Pilant MATH 614.700 13 May 2009 The Role of Fractal Geometry in the Biological Sciences A Healthy Heart Is a Fractal Heart from SIAM News, Volume 36, Number 7, September 2003 A Healthy Heart Is a Fractal Heart By Barry A.
Cipra I have ideas and reasons, Know theories in all their parts, FRACTAL MODELS IN ARCHITECTURE: A CASE OF STUDY NICOLETTA SALA Academy of Architecture of Mendrisio, University of Italian Switzerland Largo Bernasconi CH - 6850 Fractals: A Brief Overview This presentation provides a broad and basic introduction to the subject of fractal geometry. My thanks to Michael Frame at Yale University for the use of Generating Fractals Based on Spatial Organizations generated fractal, as well as, the proportions between the generator and the initiator. Secondly, the meanings attached to the line segments that constitute the fractal Fractals in Poetry Activity Fractal Lines,” Fulton addresses the poetics of a cascade which “contains nesting of design within design” (189). In “Fractal Lines” she writes, Fractal Art: Closer to Heaven? Fractal Art: Closer to Heaven? Modern Mathematics, the art of Nature, and the nature of Art Charalampos Saitis Music & Media Technologies Dept. of Electronic The Guide to Fractal Finance Fractal Finance 4 Preface Thank you for purchasing Fractal Finance. As the world leader in fractal analysis software, Tetrahex is sure that Fractal Finance will be a Fractal Dimension for Data Mining - Krishna Kumaraswamy skkumar@cs Fractal Dimension for Data Mining Krishna Kumaraswamy skkumar@cs.cmu.edu Center for Automated Learning and Discovery School of Computer Science Fractal Geometry, Graph and Tree Constructions - Tommy Löfstedt Fractal Geometry, Graph and Tree Constructions Tommy Löfstedt tommy@cs.umu.se February 8, 2008 Master’s Thesis in Mathematics, 30 credits Supervisor at Math-UmU Locally Invariant Fractal Features for Statistical Texture Locally Invariant Fractal Features for Statistical Texture Classification Manik Varma Microsoft Research India manik@microsoft.com Rahul Garg Indian Institute of Submitted By: Chen Ting (Matric No. U017596H) Huang Liming (Matric Contents 1.
Introduction to Fractal 1.1 Definition of Fractal 1.2 Properties of Fractal Fractal Tetrahedrons Fractals are SMART: Science, Math & Art! www.FractalFoundation.org Copyright 2010 Fractal Foundation, all rights reserved. 1 Fractal Tetrahedrons The fractal geometry of ancient Maya settlement The fractal geometry of ancient Maya settlement Clifford T. Browna*, Walter R.T. Witscheyb aMiddle American Research Institute, Tulane University, 6224 Rose Hill Trading with Time Fractals that are orderly and self-similar in scale”, (1) or fractal. To get a rough mental picture of fractals, draw a triangle with three equal sides. Fractal Explorer Tutorial Fractal Explorer Tutorial How to use Fractal Explorer This tutorial was made using several versions of FE, areas were updated as needed. Last update Fractal Analysis of Image Structures Fractals O. Zmeskal et al. / HarFA - Harmonic and Fractal Image Analysis (2001), pp. 3 - 5 HarFA e-journal http://www.fch.vutbr.cz/lectures/imagesci/harfa.htm Estimating fractal dimension Vol. 7, No. 6/June 1990/J. Opt. Soc. Am. A 1055 Estimating fractal dimension James Theiler Lincoln Laboratory, Massachusetts Institute of Technology, Lexington Fractals, Chaos Theory, Quantum Spirituality, and The Shack Twelve Fractals, Chaos Theory, Quantum Spirituality, and The Shack A fractal . . . something considered simple and orderly that is actually composed of repeated Chaos, Solitons and Fractals A short history of fractal-Cantorian space-time L. Marek-Crnjac Institute of Mathematics, Physics and Mechanics, SI-1001 Ljubljana, Slovenia article info Fractals, Complexity, and Connectivity in Africa and triangles, etc. – because we don’t have built fractal structures to contrast them with. As we will see, fractal geometry not only illuminates the underlying FRACTAL IMAGE COMPRESSION: A RESOLUTION INDEPENDENT REPRESENTATION FOR IMAGERY Alan D. Sloan 5550 Peachtree Parkway Iterated Systems, Inc. Norcross, Georgia 30092 1.
1 FRACTAL ANALYSIS AND ITS APPLICATION FOR INVESTIGATING TIME SERIES. In this presentation I’d like to give a brief overview of fractal analysis, one of the new Constructing Fractals in Geometer’s SketchPad fractal gives a wonderful geometric figure reminiscent of a Pysanka, or Ukrainian Easter egg, as shown in Figure 6. Figure 6: Four-Circle Fractal The fractal nature of everyday space The fractal nature of everyday space Andrew Crompton School of Environment and Development University of Manchester a.crompton@man.ac.uk February 2004
Sample Size Calculator in Excel

Sample Size Calculator in Excel (Rev 10/13)

What's Cool About the QI Macros Sample Size Calculator?

The QI Macros Sample Size Calculator works with both variable (measured) and attribute (counted) data. Just click on the QI Macros menu, then Statistical Tools and then Sample Size Calculator. You should see the following:

To calculate a sample size you need to know:
1. The confidence level required (90%, 95%, 99%): α = 0.1, 0.05, 0.01 (Type I error)
2. The power required (80%, 85%, 90%): β = 0.2, 0.15, 0.1 (Type II error)
3. The desired width of the confidence interval: δ, the maximum allowable error of the estimate = 1/2 * tolerance
4. σ, the estimated standard deviation (default 0.167 = 1/6)

The defaults are set to standard parameters, but can be changed.

Confidence Level

In sampling, you want to know how well a sample reflects the total population. The α = 0.05 (95%) confidence level means you can be 95% certain that the sample reflects the population within the confidence interval.

Step 1 - Choose alpha: α = 0.05 for a 95% confidence level.
Step 2 - Choose beta: β = 0.1 for 90% power.

Confidence Interval

The confidence interval represents the range of values which includes the true value of the population parameter being measured.

Step 3 - Set the confidence interval to half the tolerance, i.e., the maximum allowable error of the estimate (e.g., ±0.05, ±2, etc.).

Step 4 - Attribute data (pass/fail, etc.): set percent defective to 0.5. If 95 out of 100 are good and only 5 are bad, you wouldn't need a very large sample to estimate the population; if 50 are bad and 50 are good, you'd need a much larger sample to achieve the desired confidence level. Since you don't know beforehand how many are good or bad, set the attribute field to 50% (0.5).

Step 5 - Variable data: enter the standard deviation. If you know the standard deviation of your data (from past studies), use it. If you know the specification tolerance, you can use (maximum value - minimum value)/6 as your standard deviation. (The default is 1/6 = 0.167.)

Step 6 - Enter the total population (if known).

Using the default values (95%, ±0.05, stdev = 0.167):

Step 7 - Read the sample size. Use the sample size calculated for your type of data: attribute or variable.

Variable sample size: using α alone the sample size would be 43; using both α and β it would be 118.

Attribute Example

Attribute sample size: What if you were using attribute data (e.g., counting the number of defective coins in a vat at the Denver Mint) but didn't know how many coins were in the vat? You'd need 384 coins to be 95% confident that the estimate falls within the 5% interval. What if you knew there were 1000 coins in the vat (population known)? You only need 278 to be confident. What if you changed the confidence interval to ±0.1? You only need 88 to be 95% confident.

Variable Example

A sample must be selected to estimate the mean length of a part in a population. Almost all production falls between 2.009 and 2.027 inches, so the estimated standard deviation = (2.027 - 2.009)/6 = 0.003. You want to be 95% confident that the sample is within ±0.001 of the true mean. Enter the data as shown below: you need 35 samples using α alone and 95 using α and β together.

Learn More

Hypothesis Testing Quick Reference Card
To create a Sample Size Calculator in Excel using the QI Macros...
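The numbers above come from the standard normal-approximation sample-size formulas with a finite-population correction when the population is known. A minimal sketch in Python (function names are mine, not from QI Macros; results are rounded up to the next whole unit, so a given tool may occasionally differ by one):

```python
from math import ceil
from statistics import NormalDist

def variable_n(alpha, sigma, delta, beta=None, N=None):
    """Sample size for estimating a mean to within +/- delta.
    Uses z_(alpha/2), optionally adds z_beta for a power requirement,
    and applies the finite-population correction when N is given."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    if beta is not None:
        z += NormalDist().inv_cdf(1 - beta)
    n = (z * sigma / delta) ** 2
    if N is not None:
        n = n / (1 + (n - 1) / N)
    return ceil(n)

def attribute_n(alpha, p, delta, N=None):
    """Sample size for estimating a proportion (worst case p = 0.5)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n = z ** 2 * p * (1 - p) / delta ** 2
    if N is not None:
        n = n / (1 + (n - 1) / N)
    return ceil(n)

variable_n(0.05, 0.167, 0.05)            # -> 43 (alpha alone)
variable_n(0.05, 0.167, 0.05, beta=0.1)  # -> 118 (alpha and beta)
```

With these formulas the worked examples reproduce: σ = 0.003 and δ = 0.001 give 35 (α alone) and 95 (α and β), and the attribute case with N = 1000 gives 278 at δ = 0.05 and 88 at δ = 0.1.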
Lynwood Statistics Tutor

Find a Lynwood Statistics Tutor

Hello! Thanks for taking the time to view my profile. I received my B.S. in Psychology and am currently an MBA & M.S. in Finance student at the University of Southern California (USC). I have more than 7 years of experience and am confident that I can help you achieve your goals!
25 Subjects: including statistics, reading, English, SAT math

...I have scored in the 99th percentile on the SAT, ACT, and LSAT. I am well versed in teaching standardized tests and helping applicants piece together the puzzle of college and graduate school applications. During my time at USC, where I was an economics major with an emphasis upon mathematics, ...
16 Subjects: including statistics, economics, SAT math, LSAT

...It may seem obvious, but it is a big confidence booster. Other test-taking skills include intelligent guessing and thinking-like-the-test-writers. We'll go over all of these.
24 Subjects: including statistics, Spanish, physics, writing

...I can help you on the unit circle, solving triangles, identities, and word problems. You will need these skills in real life problems, engineering, and calculus. They are essential for higher math.
11 Subjects: including statistics, calculus, geometry, algebra 1

...I also tutor German; I am a native German speaker. I have a BS in Business Administration from USC. I have an MBA from UC Irvine (graduated in 2011 with a 4.0 GPA). I have helped several of my MBA classmates understand difficult concepts in Finance and Statistics, among other subjects, that they ...
24 Subjects: including statistics, geometry, finance, economics
A simple algorithm and proof for type inference. Fundamenta Informaticae 10

- OOPSLA'91, 1991
"... We present a new approach to inferring types in untyped object-oriented programs with inheritance, assignments, and late binding. It guarantees that all messages are understood, annotates the program with type information, allows polymorphic methods, and can be used as the basis of an optimizing co ..."
Cited by 222 (18 self)
We present a new approach to inferring types in untyped object-oriented programs with inheritance, assignments, and late binding. It guarantees that all messages are understood, annotates the program with type information, allows polymorphic methods, and can be used as the basis of an optimizing compiler. Types are finite sets of classes and subtyping is set inclusion. Using a trace graph, our algorithm constructs a set of conditional type constraints and computes the least solution by least fixed-point derivation.

- In Proc. 18th ACM Symposium on the Principles of Programming Languages, 1991
"... We analyse the computational complexity of type inference for untyped λ-terms in the second-order polymorphic typed λ-calculus (F2) invented by Girard and Reynolds, as well as higher-order extensions F3, F4, ..., Fω proposed by Girard. We prove that recognising the F2-typable terms requires exponential ..."
Cited by 28 (11 self)
We analyse the computational complexity of type inference for untyped λ-terms in the second-order polymorphic typed λ-calculus (F2) invented by Girard and Reynolds, as well as higher-order extensions F3, F4, ..., Fω proposed by Girard. We prove that recognising the F2-typable terms requires exponential time, and for Fω the problem is non-elementary. We show as well a sequence of lower bounds on recognising the Fk-typable terms, where the bound for Fk+1 is exponentially larger than that for Fk.
The lower bounds are based on generic simulation of Turing Machines, where computation is simulated at the expression and type level simultaneously. Non-accepting computations are mapped to non-normalising reduction sequences, and hence non-typable terms. The accepting computations are mapped to typable terms, where higher-order types encode reduction sequences, and first-order types encode the entire computation as a circuit, based on a unification simulation of Boolean logic. A primary technical tool in this reduction is the composition of polymorphic functions having different domains and ranges. These results are the first nontrivial lower bounds on type inference for the ...

- In Proc. 21st Int'l Coll. Automata, Languages, and Programming, 1994
"... Core-ML is a basic subset of most functional programming languages. It consists of the simply typed (or monomorphic) λ-calculus, simply typed equality over atomic constants, and let as the only polymorphic construct. We present a synthesis of recent results which characterize this "toy" language's ..."
Cited by 5 (3 self)
Core-ML is a basic subset of most functional programming languages. It consists of the simply typed (or monomorphic) λ-calculus, simply typed equality over atomic constants, and let as the only polymorphic construct. We present a synthesis of recent results which characterize this "toy" language's expressive power as well as its type reconstruction (or type inference) problem. More specifically: (1) Core-ML can express exactly the ELEMENTARY queries, where a program input is a database encoded as a λ-term and a query program is a λ-term whose application to the input normalizes to the output database. In addition, it is possible to express all the PTIME queries so that this normalization process is polynomial in the input size. (2) The polymorphism of let can be explained using a simple algorithmic reduction to monomorphism, and provides flexibility, without affecting expressibility.
Algorithms for type reconstruction offer the additional convenience of static typing without type declarations. Given polymorphism, the price of this convenience is an increase in complexity from linear-time in the size of the program typed (without let) to completeness in exponential-time (with let). - In ECOOP '93 Conference Proceedings , 1995 "... . We have designed and implemented a type inference algorithm for the Self language. The algorithm can guarantee the safety and disambiguity of message sends, and provide useful information for browsers and optimizing compilers. Self features objects with dynamic inheritance. This construct has unt ..." Cited by 2 (0 self) Add to MetaCart . We have designed and implemented a type inference algorithm for the Self language. The algorithm can guarantee the safety and disambiguity of message sends, and provide useful information for browsers and optimizing compilers. Self features objects with dynamic inheritance. This construct has until now been considered incompatible with type inference because it allows the inheritance graph to change dynamically. Our algorithm handles this by deriving and solving type constraints that simultaneously define supersets of both the possible values of expressions and of the possible inheritance graphs. The apparent circularity is resolved by computing a global fixed-point, in polynomial time. The algorithm has been implemented and can successfully handle the Self benchmark programs, which exist in the "standard Self world" of more than 40,000 lines of code. Keywords: Languages and their implementation, tools and environments. 1 Introduction The choice between static and dynamic typing ...
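The type-reconstruction algorithms discussed in these papers all bottom out in first-order unification of type terms. A minimal sketch (the representation and names are mine, not from any of the cited papers): type variables are lowercase strings, type constants like 'Int' are capitalized strings, and function types are tuples ('->', argument, result).

```python
def unify(t1, t2, subst=None):
    """Unify two type terms, extending and returning the substitution."""
    if subst is None:
        subst = {}
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str) and t1.islower():      # t1 is a type variable
        if occurs(t1, t2, subst):                 # e.g. a ~ (a -> Int) fails
            raise TypeError("occurs check failed")
        subst[t1] = t2
        return subst
    if isinstance(t2, str) and t2.islower():      # symmetric case
        return unify(t2, t1, subst)
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):                  # unify componentwise
            subst = unify(a, b, subst)
        return subst
    raise TypeError(f"cannot unify {t1} and {t2}")

def resolve(t, subst):
    """Follow variable bindings to the representative term."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Does variable v occur inside term t under the substitution?"""
    t = resolve(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t)
```

For example, unifying ('->', 'a', 'b') with ('->', 'Int', ('->', 'Int', 'Int')) binds a to Int and b to Int -> Int; the exponential behavior discussed above comes not from unification itself but from how let-polymorphism duplicates such constraints.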
The Nodding Sphere and the Bird's Beak: D'Alembert's Dispute with Euler - Precession of the Equinoxes D'Alembert's strongest case was the one against Euler's paper on the precession of the equinoxes and the nutation of the earth's axis [11]. An English translation of this article by Steven Jones is available by clicking here for the html version and here for the pdf version. The precession of the equinoxes is a phenomenon that has been known since classical times. The earth's axis is not in fact stationary and instead traces out a large circle with respect to the fixed stars, rather like a top spinning on an oblique axis. The period of the precession is about 26,000 years and it will significantly alter the location of the north celestial pole in the millennia to come. In 1748, the British Astronomer Royal James Bradley announced his discovery of another disturbance in the earth's axis of rotation, a nodding motion or "nutation" with an 18 year cycle. D'Alembert had set himself the task of explaining both phenomena in strictly mechanical terms, as a consequence of Newton's inverse-square law of gravitation. He eventually cracked the problem, and published his book-length solution [14] in the middle of 1749. Euler had also been working on the problem of precession and nutation, but had not been able to solve it. He received a copy of d'Alembert's book late in the summer of 1749. In his "Observations" essay, sent to the Academy in June 1752, d'Alembert reported receiving a letter from Euler, dated January 3, 1750, in which Euler acknowledged receiving the book [6, p. 338]. Euler also said that he had not really been able to follow d'Alembert's argument, but that after he had read it, he saw the big picture and was able to give his own solution to the problem. It was this solution that Euler published in the Berlin Academy's 1749 volume. Euler's solution is certainly shorter (36 pages as opposed to d'Alembert's 184) and more comprehensible. 
Indeed, d'Alembert's mathematical writings were notorious for poor organization and impenetrability. More importantly, Euler's solution was far more general, and led him to an important paper the following year on general principles governing the motion of rigid bodies [15], a problem he had been working on since 1734 at least. So although Euler owed much to d'Alembert in his solution of the precession and nutation problem, there is also much in his paper [11] that is novel; all of this is explained in careful detail in a recent paper by Curtis Wilson [16]. Nevertheless, Euler ought to have acknowledged at the outset of the paper that he was only presenting an alternate solution to a problem that had already been solved by d'Alembert. It was a serious lapse of academic etiquette to have neglected this. In addition, the records show that Euler didn't actually present his results on the problem to the Berlin Academy until March 5, 1750, so it was ethically questionable for him to have inserted the article in the Academy's volume for 1749. In any case, Euler recognized the validity of d'Alembert's priority claim and inserted a brief notice [17] to this effect in the next volume of the Academy's journal, published in translation here. In this notice, Euler acknowledges that he had written his paper only after he had read d'Alembert's book, and that he "makes no pretense to the glory that is due to he that first resolved this important question." The majority of d'Alembert's "Observations" essay is occupied with his priority claim on the problem of precession and nutation. However, he also demanded that his priority be recognized for two other papers. Euler capitulated in one case but not in the other.
The Intensity Pattern From The Two Slits For A ...

The intensity pattern from the two slits for a single wavelength looks like the one shown on the left side of the figure. If another slit, separated from one of the original slits by some distance, is added, how will the intensity at the original peaks change? By examining the phasors for light from the two slits, you can determine how the new slit affects the intensity. Phasors are vectors that correspond to the light from one slit. The length of a phasor is proportional to the magnitude of the electric field from that slit, and the angle between a phasor and the previous slit's phasor corresponds to the phase difference between the light from the two slits. Recall that at points of constructive interference, light from the original two slits has a phase difference of 2π, which corresponds to a complete revolution of one phasor relative to the other. Notice that, as shown in the figure, undergoing a complete revolution leaves the phasor pointing in the same direction as the phasor from the other slit. Think about the phase difference between the new slit and the closer of the two old slits and what this implies about the direction of the phasor for the new slit.

The peak intensity increases.

Part D
Are there any points between the maxima of the original two slits where light from all three slits interferes constructively? If so, what are they?
Yes, halfway between the original maxima
Yes, at intervals one-third of the distance between the original maxima
Yes, halfway between every second pair of maxima
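The phasor argument is easy to check numerically: intensity is the squared magnitude of the phasor sum. A quick sketch (my own, not from the problem set), assuming, as the hint suggests, that the new slit's phasor also points along the others at an original peak:

```python
import cmath
from math import pi

def intensity(phases):
    """Intensity of superposed unit-amplitude phasors, normalized so a
    single slit alone gives intensity 1."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) ** 2

# At an original two-slit maximum the phase difference is a multiple of
# 2*pi, so the two phasors point the same way: amplitude 2, intensity 4.
two_slits = intensity([0.0, 2 * pi])
# If the third slit's phasor points the same way there, amplitude 3
# gives intensity 9, so the peak intensity increases.
three_slits = intensity([0.0, 2 * pi, 4 * pi])
```

The same function shows destructive interference: two phasors a half-revolution apart, intensity([0.0, pi]), give essentially zero intensity.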
Next: Index of Auxiliary Routines Up: Index of ScaLAPACK Routines Previous: Index of Driver and 1. This index lists related pairs of real and complex routines together, for example, PSGETRF and PCGETRF. 2. Driver routines are listed in bold type, for example, PSGESV and PCGESV. 3. Routines are listed in alphanumeric order of the real (single precision) routine name (which always begins with PS-). (See section 3.1.3 for details of the ScaLAPACK naming scheme.) 4. Double precision routines are not listed here; they have names beginning with PD- instead of PS-, or PZ- instead of PC-. 5. This index gives only a brief description of the purpose of each routine. For a precise description, consult the Specifications in Part ii, where the routines appear in the same order as here. 6. The text of the descriptions applies to both real and complex routines, except where alternative words or phrases are indicated, for example, ``symmetric/Hermitian'', ``orthogonal/unitary'', or ``quasi-triangular/triangular''. For the real routines ii.) 7. A few routines for real matrices have no complex equivalent (for example, PSSTEBZ). Susan Blackford Tue May 13 09:21:01 EDT 1997
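The precision rule in point 4 is mechanical enough to express in code. A hypothetical helper (my own illustration, not part of ScaLAPACK) that maps a single-precision routine name to its double-precision counterpart:

```python
# Double-precision ScaLAPACK names use the PD-/PZ- prefixes where the
# single-precision names use PS-/PC- (real and complex, respectively).
def to_double_precision(name):
    if name.startswith("PS"):
        return "PD" + name[2:]
    if name.startswith("PC"):
        return "PZ" + name[2:]
    raise ValueError("not a single-precision ScaLAPACK routine name")

to_double_precision("PSGETRF")  # -> "PDGETRF"
to_double_precision("PCGETRF")  # -> "PZGETRF"
```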
Topic: a problem...
Replies: 2   Last Post: Jun 17, 2008 12:09 PM

a problem...
Posted: Jun 16, 2008 11:52 AM

the problem: construct a square (ABCD) given one of its points (A) and two other points, P and Q, on the BC and CD segments... i know that if i make a triangle out of those and pull out circles from the midpoints, i'll get a set of points where one would belong to the square... but i've got some sounds in my head... can't think at all! Help!!!
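One analytic way to see that the data determine the square (my own derivation, not from the thread): put A at the origin and write the direction of side AB as (cos φ, sin φ). Requiring P to lie on BC and Q on CD forces both points to project to the full side length s along the appropriate directions, which pins down φ and s.

```python
from math import atan2, cos, sin

def square_from_A_P_Q(P, Q):
    """With A at the origin, u = (cos phi, sin phi), v = (-sin phi, cos phi),
    the square is A, B = s*u, C = s*u + s*v, D = s*v.  P on BC means
    P = s*u + t*v, and Q on CD means Q = w*u + s*v, so P.u = Q.v = s.
    Eliminating s gives tan(phi) = (Q_y - P_x) / (P_y + Q_x)."""
    p1, p2 = P
    q1, q2 = Q
    phi = atan2(q2 - p1, p2 + q1)
    s = p1 * cos(phi) + p2 * sin(phi)   # s = P.u
    return phi, s
```

The compass-and-straightedge version of the same fact is the classical construction, but this at least confirms the problem has a well-defined answer for generic P and Q.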
Dice probability

From Wikimedia Commons, the free media repository

This page is about the probability of "winning" with dice; this depends, of course, on the rules of the game.

• Comparison of the probabilities of obtaining a given total, between the sum of three dice and the sum of the best three amongst four dice — this is used in some roleplaying games
• Probability of having at least one die with a value equal to or exceeding a given threshold amongst n dice thrown — this is used in some roleplaying games
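The second bullet has a simple closed form: with n fair dice, the chance that at least one die reaches a threshold t is the complement of every die landing below it, 1 − ((t − 1)/6)^n. A quick sketch (my own, not from the Commons page):

```python
def p_at_least_one(threshold, n, sides=6):
    """Probability that at least one of n fair dice shows a value equal
    to or exceeding `threshold`."""
    return 1 - ((threshold - 1) / sides) ** n

p_at_least_one(5, 2)  # 1 - (4/6)**2 = 5/9, about 0.556
```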
Floppy Cube

This puzzle was invented by Katsuhiko Okamoto, and made by Gentosha Toys in Japan. It is like a 1×3×3 Rubik's Cube, so it consists of 9 cubes arranged in a square, and each side of 3 cubes can be given a half turn. Halfway through a move the centre cubie deforms slightly to allow the corner cubies to remain connected, and that is where its name comes from.

The number of positions:
There are 4 edge pieces which do not move but can be flipped, and 4 corner pieces that do move but cannot be flipped (their orientation is determined by their location). This would give a total of 2^4·4! = 384 positions. This is not reached however due to a parity constraint: the parity of the corner permutation is the same as the parity of the number of flipped edges. There are therefore 2^4·4!/2 = 192 positions.

I performed a computer search for this puzzle; the table of the number of positions at each number of moves from the solved position is omitted here, but the total is 192.

The letters B, F, L and R will denote a move twisting the Back, Front, Left, and Right side of the puzzle.
a. Find the corner piece that belongs at the back left location. It will have the same side colours as the back and left edge pieces.
b. Do any moves you need to put it in its back-left location.
c. If the back edge needs to be flipped over, do the move sequence LBL.
d. If the left edge needs to be flipped over, do the move sequence BLB.
e. Find the corner piece that belongs at the back right location.
f. If it is not yet at the back-right location, then do R or FR to put it in place.
g. If the right edge needs to be flipped over, do the move sequence RFRFR.
h. If the front edge needs to be flipped over, do the move F.
The front corners should automatically be correct as well.

Below is a graph that can be used to solve the puzzle in the minimum number of moves.
By reorienting the puzzle, possibly turning it upside down, any mixed position will match one of the positions shown in the graph. Devil's Algorithm: Let P be the move sequence: L FRFRFRFRFRF L FRFRFRFR L RFRFRFRFRFR L RFRFRFRFRFR L R. The move sequence PFPB PFPB is 192 moves long, and visits all possible positions exactly once, returning it to how it was at the beginning. This kind of sequence is sometimes called a Devil's Algorithm, and is a Hamiltonian cycle on the Cayley Graph. If you apply it to any mixed position, it will solve it somewhere along the way. A blindfolded person could always solve the floppy cube by doing this sequence as long as there is someone there to say 'stop' when it is solved.
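The 192-position count can be checked by brute force: a state is just the permutation of the four corners plus one flip bit per edge, and each of B, R, F, L swaps two adjacent corners and flips one edge. A breadth-first search over this model (the state encoding is mine) enumerates the whole group:

```python
from collections import deque

# Corners at positions BL=0, BR=1, FR=2, FL=3; edges (B, R, F, L) as flip bits.
MOVES = {
    'B': ((0, 1), 0),  # swap the two back corners, flip the back edge
    'R': ((1, 2), 1),
    'F': ((2, 3), 2),
    'L': ((3, 0), 3),
}

def apply(state, move):
    corners, edges = state
    (i, j), e = MOVES[move]
    c = list(corners)
    c[i], c[j] = c[j], c[i]
    return tuple(c), edges ^ (1 << e)

def bfs():
    """Distance from the solved state to every reachable state."""
    start = ((0, 1, 2, 3), 0)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for m in MOVES:
            t = apply(s, m)
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return dist

dist = bfs()
print(len(dist))  # 192 reachable positions, matching the parity argument
```

Only half of the 384 raw states are reached, because each move changes the corner parity and the edge-flip parity together.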
April 26, 2013, Christopher D. Carroll DecentralizingRCK

Decentralizing the Ramsey/Cass-Koopmans Model

This handout shows that under certain very special conditions the behavior of an economy composed of distinct individual households will replicate the social planner's solution to the Ramsey/Cass-Koopmans model.

1 The Consumer's Problem

Consider first the problem of an individual infinitely lived consumer, indexed by i, who has some predetermined set of expectations for how the aggregate net interest rate r_t and the wage rate w_t will evolve. At date t household i owns some capital k_{i,t}, and can in principle also borrow; designate the net debt of household i in period t as d_{i,t}. Since we will be examining a perfect foresight solution with perfect capital markets, the interest rate on debt must match the rate of return on assets. This means that all that really matters is the household's total net asset position, b_{i,t} = k_{i,t} - d_{i,t}.

Each household is endowed with one unit of labor, which it supplies exogenously, earning a wage rate w_t. Each household solves

   max ∫_0^∞ e^{-ϑt} u(c_{i,t}) dt,   with CRRA utility u(c) = c^{1-ρ}/(1-ρ),

subject to the budget constraint

   ḃ_{i,t} = r_t b_{i,t} + w_t - c_{i,t}.

Integrating the household's dynamic budget constraint and assuming a no-Ponzi-game transversality condition yields the intertemporal budget constraint, which says that the present discounted value of consumption must match the PDV of labor income plus the current stock of net wealth. The formulas for these PDV's are a bit awkward because they must take account of the fact that interest rates are varying over time. To make the formulas a bit simpler, define the compound interest factor

   R_t = exp( ∫_0^t r_s ds ),

which is simply the compound interest term needed to convert a value at date t to its PDV as of time 0. With this definition in hand we can write the IBC as

   ∫_0^∞ (c_{i,t}/R_t) dt = b_{i,0} + h_{i,0},

where h_{i,0} is human wealth,

   h_{i,0} = ∫_0^∞ (w_t/R_t) dt.

Each household solves the standard optimization problem taking the future paths of wages and interest rates as given. Thus the Hamiltonian is

   H = e^{-ϑt} u(c_{i,t}) + λ_t (r_t b_{i,t} + w_t - c_{i,t}),

which implies that the first optimality condition is the usual e^{-ϑt} u′(c_{i,t}) = λ_t.
The second optimality condition is

   λ̇_t = -r_t λ_t,

leading eventually to the usual first order condition for consumption:

   ċ_{i,t}/c_{i,t} = (r_t - ϑ)/ρ.

Note (for future use) that the RHS of this equation does not contain any components that are idiosyncratic: the consumption growth rate will be identical for every household. The same is true of the expression for human wealth above.

2 The Firm's Problem

Now we assume that there are many perfectly competitive small firms indexed by j in this economy, each of which has a production function identical to the aggregate Cobb-Douglas production function. Perfect competition implies that individual firms take the interest rate and wage rate to be exogenous. Hence firms solve

   max_{k_j, l_j}  k_j^α l_j^{1-α} - z_t k_j - w_t l_j,

where z_t and w_t are the rental rates for a unit of capital and a unit of labor for one period. Note that, dividing by l_j, this is equivalent to

   max  l_j (κ_j^α - z_t κ_j - w_t),   where κ_j = k_j/l_j.

The first order condition for this problem implies that

   z_t = α κ_j^{α-1}.

Under perfect competition firms must make zero profits in equilibrium, which means, by fact [EulersTheorem], that:

   w_t = (1-α) κ_j^α,   so that z_t κ_j + w_t = κ_j^α.

3 Equilibrium At a Point in Time

Thus far, we have solved the consumer's and the firm's problems from the standpoint of atomistic individuals. It is now time to consider the behavior of an aggregate economy composed of consumers and firms like these. We assume that the population of households and firms is distributed along the unit interval and the population masses sum to one, as per Aggregation. Thus, aggregate assets at time t can be defined as the sum of the assets of all the individuals in the economy at time t,

   B_t = ∫_0^1 b_{i,t} di,

while per capita assets are aggregate assets divided by aggregate population, b_t = B_t (since population is one). Similarly, normalizing the population of firms to one yields

   K_t = ∫_0^1 k_{j,t} dj.

Up to this point, we have allowed for the possibility that different households might have different amounts of net worth. We now impose the assumption that every household is identical to every other household.
This assumption rules out the presence of any debt in equilibrium (if all households are identical, they cannot all be in debt - who would they owe the money to?). Indeed, in this case, the aggregate capital stock per capita will equal the aggregate level of net worth, k_t = b_t. Thus, households' expectations about r_t and w_t determine their saving decisions, which in turn determine the aggregate path of k_t.

There is one important subtlety here, however. In writing the consumer's budget constraint, we designated r_t as the net amount of income that would be generated by owning one more unit of net worth (e.g. capital). But if we have depreciation of the capital stock, the net return to capital will be equal to the marginal product minus depreciation. The discussion of the firm's optimization problem did not consider depreciation because the firms do not own any capital; instead, they make a payment to the households for the privilege of using the households' capital. Thus the net increment to a household's wealth if the household holds one more unit of capital will be

   r_t = α k_t^{α-1} - δ.

There is no depreciation of labor, so the labor market equilibrium will be

   w_t = (1-α) k_t^α.

4 The Perfect Foresight Equilibrium

We assume that every household knows the aggregate production function, and understands the behavior of all the other households and firms in the economy. Understanding all of this, suppose that households have some set of beliefs about the future path of the aggregate capital stock per capita k_t. This belief will imply beliefs about wages and interest rates as well. The final assumption is that the equilibrium that comes about in this economy is the "perfect foresight equilibrium." That is, consumers have the sets of beliefs such that, if they have those beliefs and act upon them, the actual outcome turns out to match the beliefs.
Note now that using the fact that in the perfect foresight equilibrium b_t = k_t, r_t = α k_t^{α-1} - δ, and w_t = (1-α) k_t^α, we can rewrite the household's budget constraint as

   k̇_t = k_t^α - δ k_t - c_t.

Reproducing the consumption growth equation with the equilibrium interest rate substituted in,

   ċ_t/c_t = (α k_t^{α-1} - δ - ϑ)/ρ.

Now compare these to the equations derived for the social planner's problem (with population growth and productivity growth zero) in a previous handout:

   k̇_t = k_t^α - δ k_t - c_t,
   ċ_t/c_t = (α k_t^{α-1} - δ - ϑ)/ρ.

Since the equilibrium value of r_t is α k_t^{α-1} - δ, the decentralized Euler equation matches the planner's, and the decentralized capital-accumulation equation is identical to the planner's. Thus, aggregate behavior of this economy is identical to the behavior of the social planner's economy!

This is a very convenient result, because it means that if we are careful about the exact assumptions we make we can often solve a social planner's problem and then assume that the solution also represents the results that would obtain in a decentralized economy. The social planner's solution and the decentralized solution are the same because they are maximizing the same utility function with respect to the same factor prices r_t and w_t.

When will the decentralized solution not match the social planner's solution? One important case is when there are externalities in the behavior of individual households; another possible case is where there is idiosyncratic risk but no aggregate risk; basically, whenever the household's budget constraint or utility function differs in the right ways from the aggregate budget constraint or the social planner's preferences, there can be a divergence between the two solutions.
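As a concrete check of the equilibrium conditions, the steady state of the decentralized economy can be computed directly: consumption growth stops when r = ϑ, and firms pay r = αk^(α−1) − δ, so the steady-state capital stock solves f′(k*) = ϑ + δ. A sketch with illustrative parameter values (the numbers are mine, not from the handout):

```python
# Steady state of the decentralized economy with Cobb-Douglas
# production f(k) = k**alpha (illustrative parameter values).
alpha, delta, theta = 0.36, 0.08, 0.04   # capital share, depreciation, time preference

# alpha * k**(alpha - 1) - delta = theta  =>  k* below
k_star = (alpha / (delta + theta)) ** (1 / (1 - alpha))
r_star = alpha * k_star ** (alpha - 1) - delta   # net return to capital
w_star = (1 - alpha) * k_star ** alpha           # wage = labor's marginal product
c_star = k_star ** alpha - delta * k_star        # consumption when k is constant
```

At these values r_star equals the time-preference rate, and factor payments exhaust output (w* + (r* + δ)k* = k*^α), which is the zero-profit condition the firm section relies on.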
The importance of being median

Peter Freed wants you to know that Jonah Lehrer is Not a Neuroscientist. Lehrer doesn't claim to be, of course. He's a journalist and science writer who covers developments in neuroscience, and a good one at that. Freed is concerned about how Lehrer handled a recent study on "the wisdom of crowds" in an op-ed in the Wall Street Journal. The wisdom of crowds is a long-standing and often-successful idea that you'll get a better prediction by aggregating the responses from a bunch of people posed with the same question than you'd get by simply asking a given expert, or even aggregating the opinions of lots of experts. The research Lehrer was pointing out showed that the wisdom of the crowd declines when people have a chance to share information about what other people were going to predict. The more people learn about what other people think on the issue, the less diversity of opinion remains, and the more confident people become in their increasingly inaccurate predictions. As the researchers observe, this sounds kinda familiar to anyone reading the financial pages, or political news, or any of a range of fanatical subpopulations (birthers, truthers, creationists, to name a few).

Freed's essay is not so much about Lehrer, actually, as about how scientists read. First, how and why a particular phrase in Lehrer's essay stopped him, but then how one reads a scientific paper:
This is exactly right. It’s not something you’re taught in a lecture in your science classes. It’s one of many quirky bits of scientific culture that’s passed along by oral tradition, the sort of thing students learn on their own or that falls through the cracks in science education. It’s a bizarre way to read any document, but it’s what scientists do. We pound the structure of scientific papers into our students when we teach them to write lab reports, and then years later we let them in on a secret: that they should read the paper out of order.

About halfway through Freed’s essay, he realizes he’d assumed Lehrer was talking about a median rather than a mean, and thus was misreading Lehrer and not properly matching Lehrer’s essay with the paper he was citing. To be clear, Lehrer only referred to the median, so the fault lies with the reader here, and Freed doesn’t blame Lehrer for it. But then, alas, he just keeps on not understanding why Lehrer used the median, and along the way misrepresents what a median is, what a mean is, and why they are different. For instance, Freed writes:

It had never occurred to me – as it never occurred to the study’s authors – that Lehrer would be talking about an actual group median…. maybe he didn’t know what median meant. Because median guesses are not guesses by a crowd, as Lehrer states. They are guesses by a single person. … Which is to say, they have nothing to do with the point of the Journal‘s article [the wisdom of crowds]! …the median person is defined as the person with the middle value in a group – half the group is above them, and half below. Which means that the median person is one guy. Not a crowd! A single person! Which is why the numbers are so pretty – they were chosen by individuals. …the authors probably just included it [the median] for kicks, and not as a measure of crowd wisdom – which made sense, given it wasn’t a crowd answer.
… And then I read the methods section, which confirmed my reading in spades. There I found this sentence: “this confirms that the geometric mean…. is an accurate measure of the wisdom of crowds for our data.” … The authors explicitly say that it is the geometric mean that should be used to judge crowd wisdom. Not the median!

All of this is wrong. Well, almost all. It’s true that a median is the value which has as many values above it as below. It’s wrong, though, to jump from the median answer to the median person. And to suggest that a median value doesn’t represent the crowd is absurd. Its definition requires us to consider the value of every other response, not just one response. Nor is the median an inappropriate measure for this purpose. As the authors make clear in the second sentence of the paper – in the abstract, no less – “Already Galton (1907) found evidence that the median estimate of a group can be more accurate than estimates of experts.” There are all sorts of good reasons to prefer the median as a measure. Indeed, that italicized quotation above is from a section of the paper justifying their use of the geometric mean instead of the more widely used median. As they explain in the Methods section:

a high wisdom-of-crowd indicator implies that the truth is close to the median. Thus, it implicitly defines the median as the appropriate measure of aggregation. In our empirical case this is not in conflict with the choice of the geometric mean as can be seen by the similarity of the geometric mean and the median in Table 1 [the Table Freed is focused on]. A theoretical reason is that the geometric mean and the median coincide for a log-normal distribution.

In other words, the median is the appropriate measure of the crowd’s choice, but for various computational reasons specific to this study, it was better to use a geometric mean.
Freed doubles down on this mishandling of statistics when he writes:

“arithmetric mean,” which is just a fancy-pants way of saying “the number that the students actually guessed.” That is, this is the number that real people literally wrote down when guessing the answer to the question.

No. No no no no no. No one in the study actually wrote down that there were 26,773 immigrants to Zurich. Someone did write down that 10,000 people immigrated to Zurich, and if you recall, that was why Freed didn’t want to use the median. Indeed, exactly as many people wrote down a number larger than 10,000 as wrote a smaller number, while far more wrote a number smaller than 26,773 than wrote down a larger number. The suggestion that the larger number is more representative of what people wrote down boggles the mind and defies basic mathematics.

Freed here is trying to say that the arithmetic mean is a better measure of the collective judgment than the geometric mean, which in turn is better than the median, and he’s criticizing Lehrer for taking the opposite view. This, it must be emphasized, despite the fact that the authors of the paper under discussion agree with Lehrer and disagree entirely with Freed. And they should disagree with Freed.

The arithmetic mean is not the relevant statistic here. To understand why, let’s quickly review what these terms mean. The arithmetic mean is what you calculate when someone tells you to calculate the average: add the numbers up and divide by the number of things you added. For various mathematical reasons, it is often the case for the sorts of numbers you and I deal with that the arithmetic mean is a good summary of the location of a given collection of numbers. It works wonderfully, for instance, with respect to people’s heights.
It works there because height is distributed fairly symmetrically around its central point, as can be seen in the photo here, which shows the heights of genetics students at the University of Connecticut in 1996. One has to squint a bit to see past the variability in the data, but larger samples bear out the observation that height is distributed in the shape of a standard bell curve, known as the normal distribution.

It doesn’t work, for instance, for wealth distribution. If Bill Gates and a homeless person walk into a bar, the arithmetic mean of the wealth of each person in the bar will skyrocket, because $56 billion divided by any number of bar patrons is still enormous. But that doesn’t tell us as much as we want to know about the actual income distribution. Similarly, simply summing the household income of every US household and dividing by the number of households gave you $60,528 in 2006 (according to Wikipedia). But in the chart shown here, you can see that most Americans make a lot less than that. Indeed, about 63% of households earned less than that.

To get a better measure of the actual dynamics, you can do one of two things. First, you can compute a different sort of average – the geometric mean – by multiplying all the numbers together and then taking the nth root of the result (where n is the number of quantities being averaged). Doing this tends to de-emphasize the effect of large numbers, and the result is thus going to be smaller than (or the same size as) the arithmetic mean. It’s also more complicated to compute on the fly, so it is less commonly used. The median is easier to compute, because it corresponds to a value greater than half and less than half of all the other numbers. The median household income is $44,389, over $17,000 less than the arithmetic mean (and quite close to the geometric mean).
For many purposes, this is a far preferable metric, especially when you’re more interested in where people stand than in what the numbers are (the arithmetic mean is what you’d get if you took away everyone’s money and divided it into equal piles; the median corresponds to what a typical person actually earns). Because income distribution is skewed (as seen in the figure above), an arithmetic mean is less appropriate. We want a method, like a geometric mean or a median, that better balances the impact of very large numbers.

There is no perfect measure to use, and to the extent Freed is treating the arithmetic mean as “the number that the students actually guessed,” he’s misleading readers. More than half of the students guessed a smaller number than the arithmetic mean, and treating that number as the gold standard devalues those students in preference for the ones who made insanely large guesses. The median has always been the preferred measure for these sorts of “wisdom of the crowds” studies, and because the geometric mean best matched the median, that was the statistic the researchers chose to work with. And because the median is the number normally used in these sorts of studies, that’s what Lehrer reported.

Freed dismisses all of this by saying, essentially, that math is scary. The geometric mean is simply “something confusing involving logarithms,” so ignore math and theory and stick with an inappropriate statistic. (“Involving logarithms” because the logarithms transform the computationally obnoxious multiplications and nth roots described above into simpler additions and divisions followed by an exponentiation.) So, Freed spends 5500 words to convince us “we need not to get spun. Which is what surely happened to Lehrer.” And he wants us to believe that Lehrer got spun into using the median because the authors used the geometric mean rather than the arithmetic mean.
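The contrast is easy to reproduce numerically. Here is a quick sketch (hypothetical log-normal data, not the study’s numbers) showing that the geometric mean tracks the median while the arithmetic mean is dragged upward by the long right tail:

```python
import math
import random
import statistics

random.seed(42)
# Hypothetical right-skewed values (log-normal), the shape typical of incomes
# and of crowd estimates of large quantities.
data = [random.lognormvariate(10, 1) for _ in range(100_000)]

arith = statistics.mean(data)
med = statistics.median(data)
# Geometric mean via logarithms: exp(mean(log x)) -- the "involving logarithms"
# trick that turns products and nth roots into sums and a division.
geom = math.exp(statistics.mean(math.log(x) for x in data))

print(f"arithmetic mean: {arith:,.0f}")
print(f"geometric mean:  {geom:,.0f}")
print(f"median:          {med:,.0f}")
```

For a log-normal distribution the geometric mean and the median estimate the same quantity, which is exactly the theoretical reason the paper’s authors give for using one in place of the other.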
Even though Freed cannot seem to understand what a geometric mean is, let alone why it really is preferable here, he thinks it’s Lehrer who must’ve gotten it wrong. Freed spun himself.

Freed’s broader points – about the importance of reading papers and news reports critically, of thinking carefully about the underlying assumptions, of checking numbers to make sure they jibe with logic, and of ensuring that examples aren’t being cherry-picked – those are all great. But they’d be that much stronger if the substantive issue he’s raising weren’t a misreading of Lehrer, the original paper, and basic statistics.

1. #1 razib May 31, 2011
i’m pretty speechless. i actually read that post. WOW.

2. #2 Leo Martins June 1, 2011
“Which means that the median person is one guy” amounts to saying that the “mean” is not even one guy, it is nobody.

3. #3 Anna June 1, 2011
Thank you for pointing this out. Statistics are so often misinterpreted in the media, so I’m glad Lehrer as a journalist takes the trouble to understand them, and I’m surprised Freed made that mistake.

4. #4 Snarkyxanf June 7, 2011
Wow, I’m “impressed” by the OP. The median is almost always the correct estimate to use in a data set with significant skew. I also “enjoyed” his later post suggesting that people who were upset at his misunderstanding of statistics were suffering from fear of schoolyard bullies (no, seriously: http://neuroself.com/2011/06/07/chasing-their-tails-when-central-tendency-junkies-attack-part-1/ ). He does have an illuminating story of sorts about how statistics can be misused. Outliers are often important. But of course, the whole point of using the median for crowd estimates is that outliers tend to be really bad estimates. So I think he missed the point outright.

5. #5 Patrick September 15, 2011
This is a fantastic, fantastic post.
I forwarded it to a friend who I’ve been helping with a basic statistics course, because this is an EXCELLENT demonstration of the utility of different measures of central tendency. I really want to thank you for writing this, well done. I don’t think of myself as a “stats person” and my eyes usually glaze over quickly. I’m delighted to say that they didn’t here at all. I award you +1 internet!!

6. #6 owevr September 24, 2011
Well, I read the intro and discussion, since I suck at writing them. I always look for good ways to write a cool intro and discussion. It is important to fluently explain what the problem is and how the particular methodology can solve the problem, especially when the work is groundbreaking and unprecedented. By the way, the median is not that easy to compute: if you take the naive strategy, you have to sort your data, and the complexity becomes O(n log n). Of course, there are algorithms of O(n) complexity, but they are not particularly “easy” either.
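On the last commenter’s point about complexity: the expected-linear-time selection they allude to is usually randomized quickselect. A minimal sketch (illustrative, not optimized):

```python
import random

def quickselect(values, k):
    """Return the k-th smallest element (0-indexed) in expected O(n) time."""
    values = list(values)
    while True:
        pivot = random.choice(values)
        lows = [v for v in values if v < pivot]
        pivots = [v for v in values if v == pivot]
        highs = [v for v in values if v > pivot]
        if k < len(lows):
            values = lows            # answer lies strictly below the pivot
        elif k < len(lows) + len(pivots):
            return pivot             # answer is the pivot itself
        else:
            k -= len(lows) + len(pivots)
            values = highs           # answer lies strictly above the pivot

def median(values):
    """Median without a full sort; averages the two middle values for even n."""
    n = len(values)
    if n % 2:
        return quickselect(values, n // 2)
    return (quickselect(values, n // 2 - 1) + quickselect(values, n // 2)) / 2
```

Random pivots give expected O(n) work; deterministic O(n) variants (median-of-medians) exist but are, as the commenter says, not particularly “easy” either.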
{"url":"http://scienceblogs.com/tfk/2011/05/31/the-importance-of-being-median/","timestamp":"2014-04-16T04:30:22Z","content_type":null,"content_length":"63686","record_id":"<urn:uuid:208eaf87-5e86-4356-95bf-8b2dc73acefa>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
Help for programming

Hey guys, I'm very new to programming and I have this program which I do not understand at all. I would appreciate it if someone could help me with it. I have the skeleton code but I do not know what to do. It's regarding a minimum spanning tree.

class Weighted_graph {
	private:
		static const double INF; // your choice

	public:
		Weighted_graph( int = 50 );

		// Returns the degree of the vertex n. Throw an illegal argument exception
		// if the argument does not correspond to an existing vertex.
		int degree( int ) const;

		// Returns the number of edges in the graph.
		int edge_count() const;

		// Returns the weight of the edge connecting vertices m and n. If the
		// vertices are equal, return 0. If the vertices are not adjacent, return
		// infinity. Throw an illegal argument exception if the arguments do not
		// correspond to existing vertices.
		double adjacent( int, int ) const;

		// Return the size of the minimum spanning tree of those nodes which are
		// connected to vertex m. Throw an illegal argument exception if the
		// arguments do not correspond to existing vertices.
		double minimum_spanning_tree( int ) const;

		// Determine if the graph is connected.
		bool is_connected() const;

		// If the weight w < 0 or w = ∞, throw an illegal argument exception. If the
		// weight w is 0, remove any edge between m and n (if any). Otherwise, add an
		// edge between vertices m and n with weight w. If an edge already exists,
		// replace the weight of the edge with the new weight. If the vertices do not
		// exist or are equal, throw an illegal argument exception.
		void insert( int, int, double );
		// Friends
		friend std::ostream &operator << ( std::ostream &, Weighted_graph const & );
};

const double Weighted_graph::INF = std::numeric_limits<double>::infinity();

// Enter definitions for all public functions here

std::ostream &operator << ( std::ostream &out, Weighted_graph const &graph ) {
	// Your implementation
	return out;
}

It's a really long message, I know, but I would really appreciate it if someone could help with the coding.

University of Waterloo Electrical and Computer Engineering Department. Course: ECE 250. Instructor: Ladan Tahvildari. Programming Assignment 4. Due date: December 3rd, 2012, 11:00 PM.

As a teacher, this work ethic saddens me.
Last edited on
Topic archived. No new replies allowed.
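For readers landing here wondering what `minimum_spanning_tree` is supposed to compute: below is a language-neutral sketch of the usual approach, Prim's algorithm restricted to the component containing the start vertex. It is written in Python for clarity, not the assignment's C++, and the adjacency-map representation is my own choice, not the course's:

```python
import heapq

def mst_weight(adj, start):
    """Total weight of a minimum spanning tree of the component containing
    `start`, via Prim's algorithm. `adj` maps vertex -> {neighbour: weight}."""
    visited = {start}
    # Heap of (weight, vertex) edges leaving the visited set.
    frontier = [(w, v) for v, w in adj[start].items()]
    heapq.heapify(frontier)
    total = 0.0
    while frontier:
        w, v = heapq.heappop(frontier)
        if v in visited:
            continue          # stale edge into an already-claimed vertex
        visited.add(v)
        total += w
        for u, wu in adj[v].items():
            if u not in visited:
                heapq.heappush(frontier, (wu, u))
    return total
```

For the triangle with edges 0-1 (weight 1), 1-2 (weight 2) and 0-2 (weight 3), the tree keeps the two cheapest edges, so `mst_weight` returns 3.0 from any start vertex.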
{"url":"http://www.cplusplus.com/forum/general/86802/","timestamp":"2014-04-18T21:05:20Z","content_type":null,"content_length":"8917","record_id":"<urn:uuid:4d423536-f8d3-4525-b95e-8248c88e63a2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
GOVE EFFINGER - Brief Resume

EDUCATION:
• Ph.D. Mathematics, University of Massachusetts, May 1981
• M.A. Mathematics, University of Oregon, June 1969
• B.A. Magna Cum Laude with Highest Honors in Mathematics, Williams College, June 1967

MAJOR AREAS OF STUDY:
• Number theory (especially additive problems and prime numbers), complex analysis, polynomials over finite fields.

BOOKS:
• Additive Number Theory of Polynomials over a Finite Field (with David R. Hayes), Oxford University Press, England, 1991.
• Common-Sense BASIC: Structured Programming with Microsoft QuickBASIC (with Alice M. Dean), Harcourt Brace Jovanovich, Inc., San Diego, 1991.

SELECTED PAPERS:
• "Toward a Complete Twin Primes Theorem for Polynomials over Finite Fields", in Proceedings of the 8th International Conference on Finite Fields and their Applications, 2008, to appear.
• "Integers and Polynomials: Comparing the Close Cousins Z and Fq[x]", coauthored with Gary Mullen and Ken Hicks, The Mathematical Intelligencer, Volume 27, Number 2, Spring 2005, pages 26-34.
• "Twin Irreducible Polynomials over Finite Fields", coauthored with Gary Mullen and Ken Hicks, in Finite Fields with Applications to Coding Theory, Cryptography and Related Fields, Springer, 2002.
• "Some Numerical Implications of the Hardy and Littlewood Analysis of the 3-Primes Problem", The Ramanujan Journal, Vol. 3 (1999), pp. 239-280.
• "A Complete Vinogradov 3-Primes Theorem Under the Riemann Hypothesis" (with J-M. Deshouillers, H. te Riele, and D. Zinoviev), Electronic Research Announcements of the American Mathematical Society, Vol. 3 (1997), pp. 99-104.
• "The Polynomial 3-Primes Conjecture", Computer Assisted Analysis and Modeling on the IBM 3090, Baldwin Press, Athens, Georgia, 1992.
• "A Complete Solution to the Polynomial 3-Primes Problem" (with David R. Hayes), Bulletin of the American Mathematical Society, Vol. 24 (1991), pp. 363-369.
• "A Goldbach 3-Primes Theorem for Polynomials of Low Degree over Finite Fields of Characteristic 2", Journal of Number Theory, Vol. 29 (1988), pp. 345-363.
• "A Goldbach Theorem for Polynomials of Low Degree over Odd Finite Fields", Acta Arithmetica, Vol. 42 (1983), pp. 329-365.
{"url":"http://www.skidmore.edu/~effinger/webvita.htm","timestamp":"2014-04-20T23:28:24Z","content_type":null,"content_length":"3697","record_id":"<urn:uuid:eff8e417-e460-466d-ac3b-49f5fb7301de>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
Equation of Circle Tangent to Y-Axis, X-Axis and Line
June 8th 2010, 02:07 PM #1

Please help me with the process of finding the equation of a circle in the first quadrant that is tangent to the x-axis, the y-axis, and a line of the form $ax+by+c=0$. Thanks.

A circle $(x-a)^2+(y-b)^2=r^2$ is tangent to the x-axis iff $|b|=r$, is tangent to the y-axis iff $|a|=r$, and since the circle's center is in the first quadrant this means that the circle's equation is $(x-r)^2+(y-r)^2=r^2$, with center at $(r,r)$. Finally, the distance from a given point $(x_1,y_1)$ to a given line $Ax+By+C=0$ is given by $\frac{|Ax_1+By_1+C|}{\sqrt{A^2+B^2}}$. Do some maths now.

Thanks. So is there any way to find the center $(r,r)$ if the only information given is the tangent line $3x+4y-12=0$?

Just find where the line crosses the x and y axes.
$y=0\ \Rightarrow\ 3x-12=0\ \Rightarrow\ x=4$
$x=0\ \Rightarrow\ 4y-12=0\ \Rightarrow\ y=3$
Then the triangle area is $\frac{1}{2}(4)(3)=6$ and the hypotenuse from Pythagoras' theorem is 5. Then the radius is $r=\frac{2A}{sum\ of\ the\ side\ lengths}=\frac{2(6)}{5+4+3}=1$. Hence, the centre is (1,1) and the radius is 1.

Do you mean a circle tangent to the two axes and also to that line? If so, input the values in the equation for the distance of the point $(r,r)$ to the line $3x+4y-12=0$:
$\frac{|3r+4r-12|}{\sqrt{3^2+4^2}}=r\iff 49r^2-168r+144=25r^2\iff r^2-7r+6=0\Longrightarrow r=1,\,6$,
so there are two possibilities.
Ps. There's a mistake above: $r=6$ is impossible (why?)
Last edited by tonio; June 19th 2010 at 04:37 AM. Reason: Mistake: r=6 is impossible.

If the circle is only in the first quadrant, here is another way to do it. See the attachment... the triangle contains pairs of identical right-angled triangles, so $a-r+b-r=c$, where c is the length of the hypotenuse, the third side of the triangle. Then it is straightforward to find r. Tonio's method finds both solutions.
The circle of $r=6$ is also tangent to all 3 lines.
Last edited by Archie Meade; June 9th 2010 at 09:47 AM.

Re: Equation of Circle Tangent to Y-Axis, X-Axis and Line
What about the circle to the right of the line, that is also in the first quadrant and tangent to the line and both axes? How do we find its radius?

Re: Equation of Circle Tangent to Y-Axis, X-Axis and Line
In addition to the previous posts I've done a geometrical construction: You'll find the two centers as points of intersection of two of the three angle bisectors. The colour of each bisector corresponds to the colour of the circle. All three angle bisectors form a right triangle.
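The algebra in the thread is easy to check mechanically. Here is a small sketch (my own illustration, not from the thread) that solves $|ar+br+c|=r\sqrt{a^2+b^2}$ for circles centred at $(r,r)$:

```python
import math

def tangent_radii(a, b, c):
    """Radii of circles centred at (r, r) tangent to both axes and to the
    line a*x + b*y + c = 0 (distance from centre to line must equal r)."""
    norm = math.hypot(a, b)
    # |(a + b) r + c| = norm * r  =>  (a + b) r + c = +/- norm * r
    roots = []
    for sign in (+1.0, -1.0):
        denom = sign * norm - (a + b)
        if denom != 0:
            r = c / denom
            if r > 0:          # centre (r, r) must sit in the first quadrant
                roots.append(r)
    return sorted(roots)

print(tangent_radii(3, 4, -12))  # the thread's line 3x + 4y - 12 = 0
```

For the line $3x+4y-12=0$ this recovers both solutions discussed above, $r=1$ and $r=6$; the absolute value is why two circles can exist, one on each side of the line.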
{"url":"http://mathhelpforum.com/pre-calculus/148298-equation-circle-tangent-y-axis-x-axis-line.html","timestamp":"2014-04-16T21:04:15Z","content_type":null,"content_length":"64185","record_id":"<urn:uuid:de11ca1d-7a83-46ff-a939-3e76369cadf8>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
Formally Describable Functions

Next: Weak Decidability and Convergence Up: Preliminaries Previous: Infinite Computations, Convergence, Formal

Much of the traditional theory of computable functions focuses on halting programs that map subsets of

For instance, Definition 2.10 (Describable Functions) Let . A function , with universal element if it is If the ) then

Compare functions in the arithmetic hierarchy [34] and the concept of [30, p. 46-47].

Juergen Schmidhuber 2003-02-13
{"url":"http://www.idsia.ch/~juergen/ijfcs2002/node6.html","timestamp":"2014-04-19T15:00:44Z","content_type":null,"content_length":"8182","record_id":"<urn:uuid:bbb14d7e-73c0-4c53-9d46-a1b1f93746af>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help
November 25th 2009, 03:03 PM #1

power series

I mostly know how to do these, but I don't know what to do for a specific problem. I did the ratio test and took the limit as n goes to infinity, and ended up with x+5. To find the radius of convergence, everything I've seen is about x-a, nothing about what to do with x+a. Could I just change it to x-(-5)? Or is there something special that needs to be done?

$|x+5| < 1$
$-1 < x+5 < 1$
$-6 < x < -4$
Check the endpoints in the nth term for possible inclusion in the interval of convergence.
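For the record, the poster's instinct is exactly right: the series is already in the standard form $|x-a|<R$ once the sign is rewritten,

```latex
\[
  |x+5| < 1 \;\Longleftrightarrow\; |x-(-5)| < 1 ,
\]
```

so the centre is $a=-5$ and the radius of convergence is $R=1$. The interval $(-6,-4)$ is symmetric about $-5$, and only the endpoints $x=-6$ and $x=-4$ remain to be tested, as the reply says.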
{"url":"http://mathhelpforum.com/calculus/116745-power-series.html","timestamp":"2014-04-18T13:58:20Z","content_type":null,"content_length":"34151","record_id":"<urn:uuid:c9e1c4c0-31a8-42ce-8533-4791ff9735a4>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
Is there another proof for Dirichlet's theorem?

Possible Duplicate: Is a "non-analytic" proof of Dirichlet's theorem on primes known or possible?

Dirichlet's theorem on primes in arithmetic progression states that there are infinitely many primes of the form $kn+h$, given that $k$ and $h$ are coprime. Is there a short proof for this?

nt.number-theory analytic-number-theory arithmetic-progression prime-numbers

15 short answer: no. – KConrad Jun 14 '10 at 20:30
3 "It is rash to assert that a particular theorem cannot be proved in a particular way." Thought you were an endorser of that viewpoint, Professor K. C. – J. H. S. Jun 14 '10 at 20:40
5 Every proof I've ever seen takes the same form up until the final steps. (1) Introduce the character group of the unit group of $Z/N$. (2) Consider the sum $\sum \chi(k)/k$, where $\chi$ is a character of $Z/N$. Notice that this sum is much larger for $\chi$ trivial than for $\chi$ nontrivial. (3) Use the multiplicativity of $\chi$, and step 2, to deduce that $\sum 1/p$ grows much faster than $\sum \chi(p)/p$. (4) THE HARD STEP: Deduce somehow that $\sum \chi(k)/k \neq 0$, so $- \sum \chi(p)/p$ is also small. (5) Deduce the theorem. – David Speyer Jun 14 '10 at 20:52
5 J.H.S.: I didn't mean there can't be a short proof, but rather that (right now) there isn't one. That's what it seemed he was asking about, not some meta-mathematical query on the possible existence of a short proof. – KConrad Jun 14 '10 at 21:04
I have voted to close, but not for the reason that others have (they said "exact duplicate"). Rather, I think that the question "does there exist a short proof of Theorem X?" is inherently vague and subjective and could well lead to arguments of the form "Proof X which assumes Y and takes Z pages is / is not short." Please clarify what you actually want to know.
There are proofs of Dirichlet's theorem which avoid complex or even real analysis, but I am not aware of a proof which could be given in an undergraduate course in less than a week of lectures. – Pete L. Clark Jun 15 '10 at 3:12

marked as duplicate by Harry Gindi, François G. Dorais♦, Harald Hanche-Olsen, Pete L. Clark, Andy Putman Jun 15 '10 at 4:41
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.

1 Answer

Well, there are short proofs of particular instances of the result. For example, emulating the Euclidean assault on the infinitude of primes, one can establish, almost effortlessly, that there are infinitely many primes of the form 4k+3. Nevertheless, you have to be warned that there is no way to strengthen this technique in order to get the result for every arithmetic progression. You may want to take a look at [1]. In that note, Professor Murty mentions that it was I. Schur who first derived a sufficient condition for the existence of an "Euclidean" proof of the infinitude of primes in the arithmetic progression {$mk+a$}$_{k \in \mathbb{N}}$.

Edit: As David Speyer mentioned above, one of the main ingredients in the proof is a certain non-vanishing result for L-series. Hence, a way in which one might shorten the proof is by spotting the shortest demonstration for the corresponding non-vanishing theorem. I highly recommend that you take a look at the thread in [2] if you wish to learn more about this particular question.

References
[1] M. R. Murty, Primes in certain arithmetic progressions, J. Madras Univ. (1988), 161-169.
[2] Shortest/Most elegant proof for the non-vanishing of $L(1, \chi)$: Shortest/Most elegant proof for $L(1,\chi)\neq 0$

Oh, that's a different matter!
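For completeness, the Euclid-style argument for primes of the form 4k+3 alluded to in the answer runs as follows (a standard sketch, not quoted from the thread):

```latex
\textbf{Claim.} There are infinitely many primes $p \equiv 3 \pmod 4$.

\textbf{Sketch.} Suppose $p_1, \dots, p_n$ were all of them, and set
\[
  N = 4\,p_1 p_2 \cdots p_n - 1 .
\]
Then $N \equiv 3 \pmod 4$ and $N$ is odd. A product of primes all congruent
to $1 \bmod 4$ is itself $\equiv 1 \pmod 4$, so some prime factor $q$ of $N$
satisfies $q \equiv 3 \pmod 4$. But no $p_i$ divides $N$ (each $p_i$ divides
$N+1$), so $q$ is a new prime $\equiv 3 \pmod 4$, a contradiction. $\square$
```

The step that fails for a general progression $mk+a$ is the middle one: membership of a prime factor in the right residue class is forced here by the multiplicative structure of residues mod 4, and no analogous trick exists in general, which is Murty's point in [1].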
The person who asked the question should definitely clarify if he was asking for a short proof of the general theorem or if he'd be content with a short proof of some special cases (which usually don't give a flavor of the general proof). – KConrad Jun 14 '10 at 21:01
Point taken, Sire. Nonetheless, I decided to enter the reply because I thought it'd help to disseminate the fact that there are specific arithmetic progressions for which the related proofs are as short as possible. – J. H. S. Jun 14 '10 at 21:44
1 More information here: math.uconn.edu/~kconrad/blurbs/gradnumthy/dirichleteuclid.pdf – danseetea Jun 14 '10 at 21:50
1 There's also a short proof for $4k+1$. Assume there were a finite number of such primes, take their product, square it, and add one. Then there is a prime $p$ that divides this result. This forces $-1$ to be a square in $F_p$, hence $p$ is congruent to 1 modulo 4, contradiction. – David Carchedi Jun 14 '10 at 22:09
3 For more on Murty's result that the usual elementary approach is doomed to failure, look at: math.uiuc.edu/%7Epppollac/euc4.pdf There Paul Pollack shows that a commonly believed conjecture implies a generalization of Murty's result to a broader type of elementary proof. – Noah Snyder Jun 14 '10 at 22:45
{"url":"https://mathoverflow.net/questions/28160/is-there-another-proof-for-dirichlets-theorem/28164","timestamp":"2014-04-19T15:15:32Z","content_type":null,"content_length":"61011","record_id":"<urn:uuid:ac95bc79-358c-4cf8-8d89-f7cac517ac47>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculation of tank height based on flow rate

Hi physics forum, I am having an issue related to tank size based on the flow rate and the diameter of the orifice. The constraints are: the total height of the setup should be 1000 mm, the flow rate should be 525 +/- 25 ml/min, and the internal diameter of the orifice is 4.3 mm. What would be the probable dimensions of the tank for maintaining the water level?

I'm not sure I entirely understand your question. What are you trying to achieve? If the flow rate out of the tank is 525 mL/min, then the flow rate into the tank must be 525 mL/min in order to keep the water level constant. The dimensions and flow rate are already set by you in your problem statement (i.e. 1000-mm height). The height of the fluid column (in part) is what determines the flow rate out of a tank that is open to the atmosphere. The other dimensions of the tank generally do not matter unless the tank has a really small surface area compared to the orifice diameter.
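To make the last point concrete, here is a back-of-the-envelope sketch (my own illustration, not from the thread) using Torricelli's law with an assumed discharge coefficient of 0.6 for a sharp-edged orifice; it estimates the head needed to drive 525 mL/min through a 4.3 mm opening:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def required_head(flow_m3_s, orifice_d_m, cd=0.6):
    """Head h (m) such that Q = cd * A * sqrt(2 g h).

    cd = 0.6 is a typical discharge coefficient for a sharp-edged orifice;
    the real value depends on geometry, so treat this as an order-of-magnitude
    check, not a design number.
    """
    area = math.pi * (orifice_d_m / 2) ** 2
    v = flow_m3_s / (cd * area)       # mean jet velocity
    return v ** 2 / (2 * G)

q = 525e-6 / 60                       # 525 mL/min in m^3/s
h = required_head(q, 4.3e-3)
print(f"required head ≈ {h * 1000:.0f} mm")  # roughly 50 mm of water
```

Under these assumptions only about five centimetres of head is needed, which supports the reply: the 1000 mm overall height constraint does not by itself fix the water level, and the tank's footprint barely matters so long as it dwarfs the orifice.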
{"url":"http://www.physicsforums.com/showthread.php?t=425974","timestamp":"2014-04-17T12:34:56Z","content_type":null,"content_length":"34749","record_id":"<urn:uuid:96873bde-1e90-494f-b74d-5637825ba758>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about particle MCMC on Xi'an's Og

Last evening, I read a nice paper with the above title by Drovandi, Pettitt and McCutchan, from QUT, Brisbane. Low count refers to observations with a small number of integer values. The idea is to mix ABC with the unbiased estimators of the likelihood proposed by Andrieu and Roberts (2009) and with particle MCMC… and even with an RJMCMC version. The special feature that makes the proposal work is that the low count setting allows for a simulation of pseudo-observations (and auxiliary variables) that may sometimes authorise an exact constraint (that the simulated observation equals the true observation). And which otherwise borrows from Jasra et al. (2013) “alive particle” trick that turns a negative binomial draw into an unbiased estimation of the ABC target… The current paper helped me realise how powerful this trick is. (The original paper was arXived at a time I was off, so I completely missed it…)

The examples studied in the paper may sound a wee bit formal, but they could lead to a better understanding of the method since alternatives could be available (?). Note that all those examples are not ABC per se in that the tolerance is always equal to zero. The paper also includes reversible jump implementations. While it is interesting to see that ABC (in the authors’ sense) can be mixed with RJMCMC, it is delicate to get a feeling about the precision of the results without a benchmark to compare to. I am also wondering about less costly alternatives like empirical likelihood and other ABC alternatives. Since Chris is visiting Warwick at the moment, I am sure we can discuss this issue next week there.
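For readers unfamiliar with the trick: if one keeps simulating pseudo-data until a fixed number n of exact matches (“alive particles”) has occurred, and T is the total number of simulations used, then (n−1)/(T−1) is an unbiased estimator of the per-draw matching probability. This is a classical negative-binomial identity; the toy setup below is my own illustration, not the paper's:

```python
import random

random.seed(1)

def alive_particle_estimate(simulate_hit, n):
    """Unbiased estimate of p = P(hit): simulate until n hits, return (n-1)/(T-1).

    `simulate_hit` draws one pseudo-dataset and reports whether it matches the
    observation exactly. Unbiasedness requires n >= 2.
    """
    hits = trials = 0
    while hits < n:
        trials += 1
        if simulate_hit():
            hits += 1
    return (n - 1) / (trials - 1)

# Toy check: a "model" whose pseudo-data matches the observation with prob 0.3.
p_true = 0.3
estimates = [alive_particle_estimate(lambda: random.random() < p_true, 5)
             for _ in range(20_000)]
print(sum(estimates) / len(estimates))  # averages out close to 0.3
```

The naive alternative, hits/trials with a fixed trial budget, is only unbiased if the budget is fixed in advance; the point of the alive-particle scheme is that it stays unbiased even though the stopping time is random, which is what a pseudo-marginal MCMC chain needs to target the right distribution.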
{"url":"http://xianblog.wordpress.com/tag/particle-mcmc/","timestamp":"2014-04-20T00:37:45Z","content_type":null,"content_length":"80671","record_id":"<urn:uuid:9c26c375-6c5f-4a16-b380-700f6d74823a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
Rt-Plot 2.8.10.83
Software ID: 13082
Rt-Plot is a tool to generate Cartesian X/Y-plots from scientific data. You can enter and calculate tabular data. View the changing graphs, including linear and non-linear regression, interpolation, differentiation and integration, during entering. Rt-Plot enables you to create plots fast and easily. The line calculations give full access to calculation and display ranges and can use statistical weights. The options can be changed interactively. A powerful reporting module generates ready-to-publish documents. The result on the screen is the same as on the report printout (what you see is what you get). Although the program looks simple, the graph can be altered in any item you can imagine. All distances in axes, scaling, numberings, captions, colors, and line and point styles can be altered. Thus a plot can be generated fitting the requirements of any journal you want to publish in and, of course, your personal taste.

Features: unlimited number of data points, live calculated data points in data table, unlimited number of graphs, unlimited number of series in graph, unlimited number of calculated lines, linear regression, polynomial, non-linear regression, interpolation, smoothing, differential, integral, calculations can use statistical weights, error indicators at data points, function interpreter for calculating columns and non-linear functions, graph fully customizable, secondary axis at top and right, twisted, log, exponential axis scaling, all distances, colors and styles can be varied, report fully customizable, built-in word processor, including graph and results.

Mathematics Worksheet Factory Deluxe
Easily create professional worksheets to provide students with the math practice they need in: whole number, decimal, and fraction operations in addition, subtraction, multiplication, and division; number patterns; measurement; factors and... Download Now!
FindGraph
FindGraph is a graphing, curve-fitting, and digitizing tool for engineers, scientists and business. Discover the model that best describes your data. Download Now!
Prime Number Spiral
The Prime Number Spiral (a.k.a. the Ulam Spiral) is formed by marking the prime numbers in a spiral arrangement of the natural numbers. This software is for exploring the Prime Number Spiral. Download Now!
School Calendar
School Calendar will help you with assignment organization, project due dates, and scheduling. It can even remind you when your scheduled event is about to happen. Included are two viewing modes, search, auto-backup, and iCalendar data exchange. Download Now!
Breaktru Fractions n Decimals
Add, subtract, divide and multiply fractions. Convert a decimal to a fraction or a fraction to a decimal. Quick and easy interface. No confusing menus. Great for school or work. Handy for stock quotes. Download Now!
Archim
Archim is a program for drawing the graphs of all kinds of functions. You can define a graph explicitly and parametrically, in polar and spherical coordinates, on a plane and in space (surface). Archim will be useful for teachers and students. Download Now!
Inverse Matrices
The program provides a detailed, step-by-step solution in a tutorial-like format to the following problem: given a 2x2, 3x3, 4x4, or 5x5 matrix, find its inverse by using the Gauss-Jordan elimination method. The... Download Now!
Math Quiz Creator
Math Quiz Creator lets a parent or teacher create quizzes with any type of arithmetic problem, customized to their students' needs. Download Now!
Math Calculator
Math Calculator, also a derivative calculator, integral calculator, calculus calculator, expression calculator and equation solver, can be used to calculate expressions, derivatives, roots, extrema and integrals. Download Now!
746.2 KB | Freeware | Category: Mathematics
Study linear regression with this tool.
Linear Regression for Chemical Analysis helps you calculate the regression parameters and the limit of detection for two linear scenarios. The user enters the calibration values into the table at the left and... Software Terms: Data Analysis, Datenanalyse, Instrumental Analysis, Instrumentelle Analytik, Statistical Software, Electronic Books, Microelektronics, Teaching Materials
956.0 KB | Shareware | US$40 | Category: Mathematics
The most powerful Sci/Eng calculator for Windows. Expression evaluation, 18 digits of precision, variables, >100 functions, unit conversion, polynomial roots, interpolation, polynomial regression, linear algebra, numerical integration and differentiation, systems of linear, non-linear and differential equations, multi-argument function optimization and fitting, curve, point and histogram graphs, statistical operations, computer math (bin/oct/hex). OS: Windows Software Terms: Calculator, Math, Science, Engineering, Numerical Methods, Statistics, Differential Equations, Linear Algebra, Nonlinear Equations, Interpolation
1.7 MB | Shareware | Category: Mathematics
Performs linear regression using the Least Squares method. Determines the regression coefficients, the generalized correlation coefficient and the standard error of estimate. Does multivariate regression. Displays 2D and 3D plots. OS: Windows Software Terms: Numerical Mathematics, Free, Freeware, Downloads, Programs, Mathematics, Math, Maths, Mathematical, Software
4.6 MB | Shareware | US$99 | Category: Mathematics
A compilation of components for the Visual Studio WinForms framework for generating Cartesian X/Y-plots from scientific and financial data.
It supports a large variety of axis scaling types, series, calculated line legends, movable markers, etc., and... OS: Windows Software Terms: Adsorption, pores, Surface Area, Bet, b e t, Plot, Delphi, Components, Cartesian Plot
1024.0 KB | Freeware | Category: Mathematics
Regression Analysis Calculator calculates regression from given data in different models and tests, a quick and easy means of computing regression based on several models. How to use the regression analysis calculator:
1. Open 'Regression Analysis Calculator' on your desktop.
2. Insert dependent and independent variable values in the first and second columns accordingly.
3. Different methods of regression analysis will be displayed without any...
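For comparison, the ordinary least-squares fit these packages advertise takes only a few lines of plain Python. This is an illustrative sketch; `linear_fit` is a made-up name, not any vendor's API:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b; also returns Pearson's r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    a = sxy / sxx                 # slope
    b = my - a * mx               # intercept
    r = sxy / (sxx * syy) ** 0.5  # correlation coefficient
    return a, b, r

print(linear_fit([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]))
```

Statistical weights, polynomial models, and the other features listed above all build on this same normal-equations idea.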
14/3 strange usage
02-21-2012, 01:51 PM
Because when you measure the voltage at that outlet, you are measuring from the neutral to the hot leg at that outlet. If that 2-pole breaker had a 1000 watt load on each pole, the 2 loads would wind up being in series with each other, fed by 240 volts, and the neutral would be handling little, if any, current. If one pole of that breaker had a 1000 watt load on it, and the other pole had a 250 watt load on it, then the neutral would be carrying the current for 750 watts at 120 volts: 6.25 amps. If the 250 watt load was disconnected, then the neutral would be carrying the full 1000 watt current at 120 volts: 8.33 amps.
02-21-2012, 01:54 PM Hackney plumbing
Actually the water heater does not have a neutral. Just two 120v hots and a ground.
02-21-2012, 02:07 PM
That is because the WH is a 240v device. If it also needed 120v for something, it would also need a neutral to create a 120v source.
02-21-2012, 02:13 PM Hackney plumbing
02-21-2012, 02:18 PM
Correct. The timer and such usually take 120v. This is true of many 240v circuits (such as a range). Pure 240v circuits (like a WH) only need the 2 hots and a ground.
02-21-2012, 02:20 PM
What made you bring up a water heater?
That is a 240 volt device, where it draws current from the two hot legs, and nothing from the neutral unless it somehow had a fancy 115v control unit on it. Never mind, this has already been said :eek:
02-21-2012, 02:21 PM Hackney plumbing
So if you have an old house wired for a dryer with two hots and a ground, would the new dryers not work at all, or would they work but not be safe?
02-21-2012, 02:27 PM
The older circuits had 2 hots and a neutral for the dryer power, most likely using 10/3 Romex without ground. The new dryers (and ranges) are supposed to be wired with nn/3 (nn = properly sized conductors) plus ground Romex and a 4-prong plug and receptacle. The ground on the old stuff was added to the frame of the dryer by the installer, run to a "safe" (not really) water pipe nearby.
02-21-2012, 02:32 PM Hackney plumbing
I think I'm ready to wire YOUR house now.....LOL Just kidding, thanks for the tips. My house was built in 2000. I'll be removing my panel cover and making sure any multiwire branch circuit is done with a double pole breaker.
02-22-2012, 10:05 AM
Maybe this will help...
02-22-2012, 10:08 AM Hackney plumbing
02-22-2012, 11:00 AM
America is in the dark ages with electricity. Europe, even Africa, wires everything for 240V, and the timers and controls [DUH!] in appliances are built for 240V. Then you can wire outlets with 16 and 18 gauge wire and don't need a neutral. When copper was cheap and the US made it all, the copper lobby won. We all lost. Notice that at least the Americans are smart enough to sell electric water heater timers that have a [MAGIC!] clock that runs on 240V, so you need not pull in another totally unneeded wire. Imagine the audacity of making a 240V lightbulb!
03-29-2012, 09:37 AM
The one difference being that most of the electricity we see in our houses is 110v, and that is a lot easier to let go of if it gets you than 220v is. I have a vague memory from the late '60s when I lived on an air force base in Britain. I was very young.
I seem to remember a TV show explaining the workings of a GFI outlet and saying that they would become standard. But it could have been a few years later in the States.
03-29-2012, 06:20 PM
In a perfectly balanced split feed, the neutral is not carrying any current. It is "alternating current" and the two lines are 180 degrees out of phase, so when one's black wire is "+" the other's is "-", so they cancel each other out. The neutral takes care of any imbalance.
03-29-2012, 06:24 PM
I have heard, but do not know if it is fact, that homes in Australia only have a single 240 volt wire coming into the house. The return to the generator is done with a "ground connection" into the earth.
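The imbalance rule explained in the thread (the neutral of a split-phase 120/240 V service carries only the difference between the two legs) can be sketched numerically. This assumes purely resistive loads; `neutral_current` is a made-up helper, not a real tool:

```python
def neutral_current(load_a_watts, load_b_watts, volts=120.0):
    """Neutral current when two resistive loads sit on opposite hot legs.

    The legs are 180 degrees out of phase, so their neutral currents
    cancel and only the imbalance flows on the neutral.
    """
    return abs(load_a_watts / volts - load_b_watts / volts)

print(neutral_current(1000, 1000))  # balanced load: 0 A on the neutral
print(neutral_current(1000, 250))   # about 6.25 A, as computed in the thread
print(neutral_current(1000, 0))     # about 8.33 A, one leg fully loaded
```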
2nd order non-linear ODE, no 'x' term
August 25th 2010, 03:05 PM #1 Junior Member Apr 2009
I'm trying to solve the ODE $yy'' = 4(y')^2$. I'm "close" to the answer provided, but I can't see where I'm going wrong. This is what I've done: let $z=y'$, so $y'' = \frac{dz}{dx} = \frac{dy}{dx}\frac{dz}{dy}=z\frac{dz}{dy}$. Then the ODE becomes $yz\frac{dz}{dy}=4z^2$, which is separable and gives me $z = Cy^4$. Now, integrating wrt x to recover y, $y = Cxy^4+K$. However... the answer provided is $y = (Cx+K)^{-1/3}$, and I don't think manipulating what I have will get me there. So what have I done wrong?
August 25th 2010, 04:05 PM #2
Divide both sides by $yy'$ and integrate.
August 26th 2010, 04:50 AM #3
If $z = cy^4$ then $\dfrac{dy}{dx} = cy^4$. Now separate $\dfrac{dy}{y^4} = c\,dx$, integrate and solve for y.
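The provided answer $y = (Cx+K)^{-1/3}$ can be sanity-checked numerically with central finite differences. This is a quick sketch; the choices C = 2, K = 1, x0 = 0.7 and the step h are arbitrary:

```python
# Check that y = (C*x + K)**(-1/3) satisfies the ODE y*y'' = 4*(y')**2.
def y(x, C=2.0, K=1.0):
    return (C * x + K) ** (-1.0 / 3.0)

h, x0 = 1e-5, 0.7
yp = (y(x0 + h) - y(x0 - h)) / (2 * h)               # first derivative
ypp = (y(x0 + h) - 2 * y(x0) + y(x0 - h)) / h ** 2   # second derivative
print(y(x0) * ypp, 4 * yp ** 2)  # the two sides agree to discretization error
```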
Metric Space, Closed and Open
September 28th 2009, 06:00 PM
Metric Space, Closed and Open
We have a, b ∈ R \ Q, a < b, and A = {x ∈ Q: a < x < b}. Show that A is clopen (open and closed) in Q. I have been trying to solve this problem for the last 2 hours; I need some hints or examples. Thank you in advance.
September 28th 2009, 06:22 PM
Assuming you give $\mathbb{Q}$ the topology inherited from $\mathbb{R}$, then open sets in $\mathbb{Q}$ are those of the form $A \cap \mathbb{Q}$ where $A$ is open in $\mathbb{R}$. Is $A_1=\{ x \in \mathbb{R} : a<x<b \}$ open? $A_1$ fails to be closed in $\mathbb{R}$ because it lacks $a$ and $b$, but is that a problem in $\mathbb{Q}$?
September 28th 2009, 07:18 PM
It is a problem in Q. I don't know how to start my proof. Thank you for your reply.
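One way to finish along the lines of the hint (a sketch, not the original poster's write-up): since $a$ and $b$ are irrational, the open and the closed interval cut out the same trace on $\mathbb{Q}$.

```latex
A \;=\; (a,b)\cap\mathbb{Q} \;=\; [a,b]\cap\mathbb{Q},
\qquad\text{because } a,b \notin \mathbb{Q}.
% (a,b) is open in R,  so A is open in the subspace topology on Q;
% [a,b] is closed in R, so A is also closed in Q.  Hence A is clopen.
```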
how to calculate the cement sand quantity for brickwork • kg of Sand / Cement In Mortar – Cannot Find Definite Answer … I have calculated the following using a cement density of 1.506 t/m3 and … Try 1kg sand per brick, 2 kg per block and a straight volume (not… Get Prices • CONSUMPTION OF MATERIALS 3.1 Consumptions for Plastering … Cement. Sand. 388 nos. 152.9 kg. 0.320 m3. 8. Brickwork in cement mortar (1 : 4) …. For open graded mixes, having no specific job mix formula, the quantities as… Get Prices • how we calculate of Sand, cement and aggregate of M20 ratio or … Volume of broken stone Require = (3/5.5) x 1.57 = 0.856 m3 … To calculate proportion of cement, sand, coarse agrregate and water, It required to do mix design & for this initially it is requird … M20 (1 cement :1.5 sand :3 stone/brick aggregate). Get Prices • How to Estimate Ingredients for 3" Brick Masonry Wall? – A Civil … 26 Aug 2013 … As we use 1:4 cement-sand ratio for 3" brick wall, so cement will be – … How to estimate materials for different quantity of 3" thick brick wall? Get Prices • department of architecture yangon technologial university – most 1. The given one storied staff house timber building is under construction. Calculate the required amount of Brick, cement and sand for the following item of work. Get Prices • Brickwork – Inland Waterways Association Cement, is the most commonly used cement for brickwork. …… Calculating the amount of sand, lime and cement for a cubic metre of mortar can only be roughly. Get Prices • Paving Expert – Calculators calculator. A complete listing of the pop-up calculators available… Get Prices • Building Sand and Cement Calculator – Bricks-Brick Warehouse in … calculator man Brick calculator sand calculator plaster calculator screed calculator. Building Sand and Cement Calculator. 
How much sand and cement do I… Get Prices • Building materials calculator – iimprove A useful utility for calculating the quantities of building materials… Get Prices • QUIKRETE® – Quantity Calculator You can use this quantity calculator to help you determine the number of bags of … Mortar Mix – to lay either 8" x 2" x 4" bricks or 8" x 8" x 16" blocks at a 3/8" joint … QUIKWALL® Surface Bonding Cement …. The calculator will indicate the approximate number of Polymeric Jointing Sand bags you will need for your project. Get Prices • Brick, block and mortar calculator. – Source4me Building Materials This Calculator/Estimator will provide the quantities of bricks, blocks and mortar ( sand & cement) required for a given area for metric bricks (single & double… Get Prices • TheCivilEngineer : Message: Re: [TheCivilEngineer] Re: How to … 27 Mar 2012 … Re: How to calculate cement and sand quantities in cement mortar As … How to Calculate Quantity of Cement Mortar in Brick work and Plaster? Get Prices • QUANTITY OF CEMENT & SAND CALCULATION IN MORTAR 19 Dec 2012 … QUANTITY OF CEMENT & SAND CALCULATION IN MORTAR Quantity of cement mortar is required for rate analysis of brickwork and plaster… Get Prices • How to calculate the amount of Brick,cement & sand required for 1 cum How to calculate the amount of Brick,cement & sand required for 1 cum of 230mm thick brick work with ratio 1:5 please give me an answer with detailed step… Get Prices • CALCULATING BUILDING MATERIAL QUANTITIES – Wickes included into the calculations, making the measurement of the brick – ….. cement/ cement and sand/gravel. Wickes bags. RATIOS BY of all-in. VOLUME ballast. Get Prices • the on-line calculators: Brick Estimator and Mortar Estimator – Boral Step2: Enter the quantity of bricks being laid (in thousands). 000. Step3: Calculate the estimated cement, lime, sand and damp sand required. No. of 40kg . Get Prices • Mortar calculator – Cashbuild Mortar calculator. 
… This calculator estimates the quantities of sand and Ordinary Portland Cement for Class II (Non-Structural) mortar per square metre of… Get Prices • Brick Mortar Calculator & Calculation – Ncalculators Brick Mortar Calculator is an online tool for area and volume calculation programmed to estimate the area and how much sand and cement is required for the… Get Prices • Calculating Quantities of Cement, Lime & Sand | ClayBrick.org 26 Jul 2012 … A beginners guide to mixing mortar. Useful information for all Clay Brick DIY projects. Get Prices • Online building materials, quantity, volume, area and cost … Useful online building material metric and imperial calculators – quantity, volume, area and cost for plaster, render, stucco, stud, cement, sand, bricks, mortar etc. … Sub Base Calculator – this estimator will provide the area in square metres… Get Prices • How much cement and sand need in 1 cubic meter brick work How much sand used for 1 cubic meter of concrete? measure in the ratio 4:2:1 crushed aggregate:washed sand: cement. The sand and cement simply fill up… Get Prices • Technical Notes 10 – Estimating Brick Masonry : Chicago, Illinois … In the estimating procedure, determine the net quantities of all material before … Table 5 contains the quantities of portland cement, hydrated lime and sand… Get Prices • calculator to estimate quantities – Bulk Builders Merchants calculate the amount of bricks/aggregates you will need to fill an… Get Prices • DIY Project – Easy Mix Packaged Sand, Cement, Concrete & Mortar … From this volume you can calculate the number of EASY MIX bags you will need. Volume of slab … Spread mortar on the end of the next brick/block and position. Get Prices • estimate for construction Cement = 7 bags. sand = 27 cft. 11/2" jally = 35 cft. R.C.C FOR COLUMN UP TO 5 FEET … Cement = 58 bags. Sand = 478 cft. Brick = 18,000 Nos. 
INNER WALL… Get Prices • Building contractors pocket handbook – CIDB CALCULATING QUANTITIES … Stack bricks on firm hard ground as close to the structure as possible. …. 5½ bags cement + 0.75m³ sand + 0.75m³ stone. Get Prices • cement and sand mortar for bricklaying and building | houses, flats … United Kingdom. Also bricks, concrete and timber. … MIXING SAND AND CEMENT FOR MASONRY WORK … Tip or shovel in the required amount of cement. Get Prices • Required Cement quantity – GeekInterview.com 9 Mar 2012 … Civil Engineering – Required Cement quantity what is the method of calculation for require Cement qty. in 1 Cum brick work with CSM 1:6.. 2 Answers are available for this … cement quantity= 1/(1+6)*0.3 and sand= 6/(1+6)*0.3. Get Prices • Easy Estimator – Cockburn Cement Please note that our guide is based on using standard bricks, for all non standard bricks … For your concrete quantity requirements please use our calculator. … Key: C=Cement, L=Lime (Hy-Lime), S =Sand (brickies, plasters or concrete). Get Prices • Cement sand plaster – Ideas and Infinity Architects How to apply cement sand plaster. … sand mortar which is applied over walls made of brickwork, cement block work, … The cement – sand mortar mix can be in the range of 1:3 to 1:6. … Thumb rule for interior designers to calculate quantity: Get Prices
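The rule of thumb quoted in the answers above (wet mortar is roughly 0.3 m³ per m³ of brickwork, split by the cement-sand mix ratio, e.g. cement = 1/(1+6)·0.3) can be packaged as a small calculator. This is a rough sketch only: `brickwork_materials` is a made-up name, and the 30% mortar fraction and 0.0347 m³ per 50 kg cement bag are common assumptions, not standards.

```python
def brickwork_materials(brickwork_m3, mortar_fraction=0.30, mix=(1, 6),
                        cement_bag_m3=0.0347):
    """Estimate cement and sand for brickwork laid in cement-sand mortar.

    mortar_fraction: wet mortar volume per unit brickwork volume
    mix: (cement, sand) ratio by volume, e.g. (1, 6) for 1:6 mortar
    """
    mortar = brickwork_m3 * mortar_fraction
    parts = sum(mix)
    cement_m3 = mortar * mix[0] / parts
    sand_m3 = mortar * mix[1] / parts
    bags = cement_m3 / cement_bag_m3   # 50 kg bags of cement
    return cement_m3, sand_m3, bags

cement, sand, bags = brickwork_materials(1.0)  # 1 m^3 of wall, 1:6 mortar
```

For 1 m³ of brickwork this gives about 0.043 m³ of cement (roughly 1.2 bags) and about 0.26 m³ of sand, matching the 1/(1+6)·0.3 and 6/(1+6)·0.3 figures quoted above.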
Volume of pyramid
1. The problem statement, all variables and given/known data
The volume of a pyramid can be evaluated by using the equation V = (1/3)*A*h, where h is the height of the pyramid and A is the area of the base of the pyramid. The problem is to design such a triangular pyramid where the volume is V = 72 cm3 and the height is h = 12 cm. The base of the pyramid is a right isosceles triangle.
2. Relevant equations
a) Determine the area of the base of the pyramid. Pay attention to the unit of your answer.
b) Draw a picture of the base triangle of the pyramid in a proper scale.
3. The attempt at a solution
b) A = 18 = b*y so: 18 = 2*b => b = 9 cm
Your picture didn't come through, but I'm guessing your formula A = 18 = b*y is wrong. Remember the area of a triangle is (1/2)*base*height. And even if it were correct, b = y wouldn't give you 18 = 2b, it would be 18 = b^2. Two things to fix.
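Carrying the correction through (a quick numeric sketch; the variable names are arbitrary):

```python
import math

V, h = 72.0, 12.0      # given volume (cm^3) and height (cm)
A = 3 * V / h          # from V = (1/3)*A*h: base area = 18 cm^2
b = math.sqrt(2 * A)   # right isosceles base, A = b*b/2: legs b = 6 cm
hyp = b * math.sqrt(2) # hypotenuse, about 8.49 cm, for the scale drawing
print(A, b, hyp)
```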
FOM: Hersh's pointless attack on logicism and formalism
Stephen G Simpson simpson at math.psu.edu
Mon Sep 28 10:11:50 EDT 1998
Hersh Thu, 24 Sep 1998 14:34:06 writes:
 > I offered to teach a course ... on Foundations of Math. ...
 > I had expected naively that this would be like teaching
 > number theory or differential geometry. ....
That expectation was indeed very naive, if not downright arrogant. Evidently Hersh found out that teaching a course in f.o.m. is *not* like teaching a course in number theory, differential geometry, etc. Many people, f.o.m. professionals and laymen alike, were already aware of this. I would interpret Hersh's discovery as more evidence for something that I have been saying all along: f.o.m. is not just another branch of mathematics. Its goals and subject matter are very different. I would formulate the difference as follows: F.o.m. focuses on the most basic concepts of mathematics, with an eye to the unity of human knowledge. Thus f.o.m. necessarily involves a lot of philosophical overtones and motivation which are not essential in core mathematical research.
Not everyone would agree with the views expressed in the previous paragraph. For example, Shoenfield has said that he regards f.o.m. as just another branch of mathematics. I would imagine that this view is reflected to some extent in Shoenfield's lecture style: very dry, very precise, very little in the way of philosophical motivation. Perhaps Shoenfield does not even regard philosophy of mathematics as a legitimate subject.
Returning to Hersh, I still don't know what Hersh means by "foundationalism" and why he is so hostile to it. Hersh says that foundationalism is the pursuit of indubitability, but as Martin Davis pointed out, indubitability is a straw man. Hersh identifies Frege, Russell, Hilbert, ... as the enemy, but he admits that their research was of great scientific value. I now suspect that Hersh's views in opposition to "foundationalism" are completely incoherent.
We may never get to the bottom of Hersh's real intent.
 > If my words against logicism, formalism and intuitionism have been
 > hurtful, I'm sorry.
Hersh's words are not hurtful, only pointless and incoherent. He displays a lot of hostility toward "foundationalism", but he doesn't present what I would regard as a serious scientific or philosophical argument against the work of Frege, Russell, Hilbert, G"odel, et al. Nor does he present any serious philosophical or scientific ideas of his own. Let's look at Hersh's views on two specific f.o.m. programs, logicism and formalism.
1. Logicism
Hersh formulates the goal of logicism as: to make mathematics indubitable by reducing it to logic. Hersh then attacks logicism on the grounds that this supposed goal was allegedly not achieved. Such a loaded approach is obviously not a good way to evaluate anything. A better approach would be to formulate the goal of logicism somewhat differently: to investigate the extent to which mathematics is reducible to logic, and along the way to develop technical tools which can be used to investigate other related philosophical or foundational questions concerning the nature and basic concepts of mathematics. This is *much* more fruitful. With this formulation, one can talk about the achievements which flowed from the logicist program: identification of the predicate calculus as an explication of mathematical and non-mathematical reasoning; identification of the axioms of set theory; set-theoretic foundations; a far-reaching explication of mathematical rigor in terms of provability in ZFC; the independence phenomenon; etc etc. Some of Hersh's pronouncements make me wonder whether Hersh is even aware of these developments, let alone whether he has thought seriously about them.
2. Formalism
Hersh states the goal of formalism as: to make mathematics indubitable by reducing it to the manipulation of meaningless formal symbols. This caricature of Hilbert's program is worse than useless.
In his posting of 25 Sep 1998 23:00:09, Hersh explicitly dismisses Hilbert's actual views, on the grounds that "Hilbert is no longer around". This makes me question whether Hersh is really interested in this subject, as he claims to be. A much more accurate and fruitful formulation of Hilbert's program is in terms of finitistic reductionism: to investigate the extent to which mathematics can dispense with actual infinities; to investigate the extent to which mathematics is reducible to finitism; to develop technical tools which can be used to investigate these and related philosophical or foundational questions concerning the nature and basic concepts of mathematics. With this formulation of Hilbert's program, one can discuss the achievements which flowed from it: conservation results, consistency proofs, primitive recursive functions, G"odel's first and second incompleteness theorems, Gentzen-style ordinal analysis, etc etc. Concerning the specific issue of finitistic reductionism, considerable progress has been made, as detailed in my paper "Partial Realizations of Hilbert's Program", JSL 53, 349-363, http://www.math.psu.edu/simpson/papers/hilbert/. That paper contains the following estimate: at least 85 percent of existing mathematics can be formalized within WKL_0 or WKL_0^+ or stronger systems which are conservative over PRA with respect to Pi^0_2 sentences. Since PRA (= primitive recursive arithmetic) is finitistic, this estimate would if true represent a significant vindication of Hilbert's program of finitistic reductionism. To Hersh, all of this is meaningless. I raised these issues pointedly in my posting of 17 Sep 1998 18:16:23 entitled "Hersh on the axiom of infinity", but I got no response at all from Hersh. Hersh 12 Sep 1998 18:06:45 loves to use the axiom of infinity to attack "foundationalism", but he is adamantly unwilling to examine any and all evidence concerning the actual role of the axiom of infinity in mathematics. 
Specifically, Hersh is uninterested in knowing the outcome of Hilbert's program of finitistic reductionism. In sum, I am beginning to wonder whether the views of Hersh and his "humanist" followers concerning f.o.m. deserve to be taken seriously.
-- Steve
commutative diagram
February 12th 2009, 08:38 AM #1 Aug 2008
R is a ring. Consider a commutative diagram of R-modules with exact rows. I want to show that $\gamma$ is an isomorphism if and only if the sequence $0 \longrightarrow A_1 \longrightarrow A_2 \oplus B_1 \longrightarrow B_2 \longrightarrow 0$ is exact. Here $\oplus$ is the direct sum, the first homomorphism sends $a$ to $(\alpha(a),k_1(a))$, and the second homomorphism sends $(a_2,b)$ to $k_2(a_2)-\beta(b)$, where $k_1$ is the homomorphism from $A_1$ to $B_1$ and $k_2$ is the homomorphism from $A_2$ to $B_2$.
ok, like other diagram chasing problems the solution is fairly straightforward but quite lengthy and annoying! i'll give the solution this time and that would also be the last time! so the rows in your commutative diagram are $0 \longrightarrow A_j \overset{k_j}{\longrightarrow} B_j \overset{u_j}\longrightarrow C_j\longrightarrow 0, \ \ j=1,2.$ we also have the sequence $0 \longrightarrow A_1 \overset{f}{\longrightarrow} A_2 \oplus B_1 \overset{g}\longrightarrow B_2 \longrightarrow 0$ defined by: $f(a_1)=(\alpha(a_1),k_1(a_1)), \ \ g(a_2,b_1)=k_2(a_2)-\beta(b_1).$ so suppose first that $\gamma$ in your diagram is an isomorphism. see that since $k_1$ is injective, $f$ is injective too. also by the definition: $gf =k_2 \alpha -\beta k_1.$ but since the diagram is commutative, we have $k_2 \alpha = \beta k_1.$ thus $gf=0,$ i.e. $\text{im}f \subseteq \ker g.$ next we'll show that $\ker g \subseteq \text{im}f$: let $z=(a_2,b_1) \in \ker g,$ i.e. $k_2(a_2)=\beta(b_1).$ hence: $0=u_2k_2(a_2)=u_2 \beta (b_1)=\gamma u_1(b_1),$ which gives us $u_1(b_1)=0,$ because $\gamma$ is an isomorphism.
so $b_1 \in \ker u_1 = \text{im}\ k_1.$ thus $b_1=k_1(a_1),$ for some $a_1 \in A_1.$ now since $k_2(a_2)=\beta k_1(a_1)=k_2 \alpha (a_1),$ we'll have $a_2=\alpha(a_1)$ because $k_2$ is injective. hence: $z=(\alpha(a_1), k_1(a_1))=f(a_1) \in \text{im} f.$ so we've proved that $\text{im} f = \ker g.$ finally we need to show that $g$ is surjective: let $b_2 \in B_2.$ then $u_2(b_2) \in C_2=\gamma(C_1),$ because $\gamma$ is an isomorphism. so $u_2(b_2)=\gamma(c_1),$ for some $c_1 \in C_1.$ now since $u_1$ is surjective, $c_1=u_1(b_1),$ for some $b_1 \in B_1.$ thus: $u_2(b_2)=\gamma u_1(b_1)=u_2 \beta (b_1),$ which gives us: $b_2 - \beta(b_1) \in \ker u_2 = \text{im}\ k_2.$ so there exists $a_2 \in A_2$ such that $b_2 - \beta(b_1) = k_2(a_2).$ hence $b_2=k_2(a_2)+\beta(b_1)=g(a_2, -b_1) \in \text{im}\ g.$ this completes the proof of the first half of the problem. conversely, suppose that the sequence is exact. we want to show that $\gamma$ is an isomorphism. first we show that $\gamma$ is injective: so suppose $\gamma(c_1)=0,$ for some $c_1 \in C_1.$ since $u_1$ is surjective, we have $c_1=u_1(b_1),$ for some $b_1 \in B_1.$ then: $0=\gamma u_1(b_1)=u_2 \beta(b_1)$ and hence $\beta(b_1) \in \ker u_2 = \text{im}\ k_2.$ so $\beta(b_1)=k_2(a_2),$ for some $a_2 \in A_2,$ which gives us: $g(a_2,b_1)=0.$ thus $(a_2,b_1) \in \ker g = \text{im} f.$ so there exists $a_1 \in A_1$ such that $(a_2,b_1)=f(a_1)=(\alpha(a_1), k_1(a_1)).$ hence $b_1=k_1(a_1)$ and thus $c_1=u_1(b_1)=u_1k_1(a_1)=0,$ so $\gamma$ is injective. finally we need to show that $\gamma$ is surjective: let $c_2 \in C_2.$ then $\exists \ b_2 \in B_2: \ u_2(b_2)=c_2,$ because $u_2$ is surjective. now since $g$ is surjective, there exists $(a_2,b_1) \in A_2 \oplus B_1$ such that: $k_2(a_2)-\beta(b_1)=g(a_2,b_1)=b_2.$ thus: $c_2=u_2(b_2)=-u_2 \beta(b_1)=-\gamma u_1(b_1)=\gamma(-u_1(b_1)) \in \text{im}\ \gamma. \ \Box$
Why is it true that if $gf=0,$ then $\text{im}f \subseteq \ker g$?
How can I use this exercise to show the following: there are given two short exact sequences $0 \longrightarrow K \overset{u}{\longrightarrow} P \overset{v}\longrightarrow M\longrightarrow 0$ and $0 \longrightarrow K' \overset{u'}{\longrightarrow} P' \overset{v'}\longrightarrow M \longrightarrow 0,$ where $P$ and $P'$ are projective R-modules. I want to show that $P \oplus K'$ is isomorphic to $P' \oplus K.$
let $\gamma: M \longrightarrow M$ be the identity map, which is obviously an isomorphism. now since $P$ is projective and $v'$ is surjective, there exists a map $\beta: P \longrightarrow P'$ such that $v'\beta=\gamma v=v.$ next we will define a map $\alpha: K \longrightarrow K'$ such that $u' \alpha = \beta u.$ here's how to define $\alpha$: let $x \in K.$ then $v' \beta u(x)=vu(x)=0.$ thus $\beta u(x) \in \ker v' = \text{im}\ u'.$ so there exists $y \in K'$ such that $\beta u(x)=u'(y).$ define $\alpha(x)=y.$ then $\alpha$ is well-defined because if $z \in K'$ is another element with $\beta u(x)=u'(z),$ then $u'(y)=u'(z)$ and so $y=z,$ because $u'$ is injective. see that $\alpha$ is a homomorphism. also from the definition it's clear that $u' \alpha = \beta u.$ now we have all the conditions needed in your exercise. hence, since $\gamma$ is an isomorphism, the sequence $0 \longrightarrow K \longrightarrow P \oplus K' \longrightarrow P' \longrightarrow 0$ is exact and hence it splits, because $P'$ is projective.
thus $P' \oplus K \simeq P \oplus K',$ and you're happily done!

Thanks a lot for your help. :-)
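For readers coming to this thread later: the proofs above never display the diagram they refer to (it was lost in the post), so here is a reconstruction of the assumed setup, together with a one-line answer to the side question about $gf=0.$ The layout below is my own; the maps and identities are exactly the ones used in the proof.

```latex
% Reconstructed setup: two short exact sequences connected by vertical maps,
% with commuting squares  \beta k_1 = k_2 \alpha  and  \gamma u_1 = u_2 \beta,
% and connecting maps
%   f : A_1 \to A_2 \oplus B_1,  f(a_1) = (\alpha(a_1),\, k_1(a_1)),
%   g : A_2 \oplus B_1 \to B_2,  g(a_2, b_1) = k_2(a_2) - \beta(b_1).
\[
\begin{array}{ccccccccc}
0 & \longrightarrow & A_1 & \overset{k_1}{\longrightarrow} & B_1 & \overset{u_1}{\longrightarrow} & C_1 & \longrightarrow & 0 \\
  & & \big\downarrow{\scriptstyle\alpha} & & \big\downarrow{\scriptstyle\beta} & & \big\downarrow{\scriptstyle\gamma} & & \\
0 & \longrightarrow & A_2 & \overset{k_2}{\longrightarrow} & B_2 & \overset{u_2}{\longrightarrow} & C_2 & \longrightarrow & 0
\end{array}
\]
% Side question: if gf = 0, then for every a_1 \in A_1 we have
%   g(f(a_1)) = (gf)(a_1) = 0,
% so f(a_1) \in \ker g; hence  \mathrm{im}\, f \subseteq \ker g.
\]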
help with fibonacci

hello everyone i need big help.... i don't know how to use 'fibonacci' :( our teacher asked us to make a program that will output how many possible siblings a certain animal can have, for example if the user inputs 5 kangaroos, we're asked to make a program using fibonacci to output how many possible siblings 5 kangaroos could have.... she gave us an example but i could hardly understand that fibonacci :(

Java Code:
    import java.io.*;

    public class FibonacciNum {
        public static void main(String[] args) throws IOException {
            BufferedReader dataIN = new BufferedReader(new InputStreamReader(System.in));
            System.out.println("Enter a number:");
            int num = Integer.parseInt(dataIN.readLine());
            fibonacci(num - 2) + fabonacci(num - 1);
        }
    }

i really don't know what to do, i can't even explain these codes i wrote :( :( can someone help me please? thank you very much. by the way she gave us this formula: fibonacci(1) = 1, fibonacci(0) = 0.

Last edited by likemine; 01-06-2010 at 02:40 AM.

Reply (quoting the post above):
thank you very much. by the way she gave us this formula: fibonacci(1) = 1, fibonacci(0) = 0.

take a look at this Java Code:

    import java.io.*;

    public class FibonacciNum {
        public static void main(String[] args) throws IOException {
            // BufferedReader for the class
            BufferedReader dataIN = new BufferedReader(new InputStreamReader(System.in));
            // Asks for a number
            System.out.println("Enter a number:");
            // The user input arrives as a String, so we need to parse it into an int.
            int num = Integer.parseInt(dataIN.readLine());
            // hmm, there doesn't seem to be sense here. what's fibonacci(int)?
            // Usually you can do fibonacci through recursion.
            fibonacci(num - 2) + fabonacci(num - 1);
        }
    }

take a look at the recursion and fibonacci section.

ill share you mine.. an iterative way.. i had a hard time too thinking of how to make a fibonacci program. i hope you understand this one...

Java Code:
    public class FibonacciIterative2 {
        public static void main(String[] args) {
            int f1 = 1, f2 = 1, temp;
            // fibonacci iterating sequence starting at F(N) where N = 3
            for (int x = 3; x <= 10; x++) {
                temp = f1;
                f1 = f2;
                f2 = temp + f2;
            }
            System.out.println(f2); // prints F(10)
        }
    }

i got this program in wikipedia. but i cant find the exact page where this program occurs, so ill share the code to you...

Last edited by bigj; 01-06-2010 at 07:05 AM.

this is also another way to do it. But this way of doing it is very inefficient for large numbers. So, recursion is the commonly used way for this type of problem.

oh sir.. tnx for the quick response to my code... any way.. at least now i know fibonacci is more efficient in recursive way...
tnx sir!

Au contraire: recursive Fibonacci implementations might be short but they are extremely inefficient because you calculate the same numbers over and over again. Use an iterative implementation instead.

kind regards,

My apologies. Just cross checked. It's the other way around: iterative is much more efficient than recursive. Here's evidence for that: Recursive vs iterative. Again, sorry for the wrong conclusion.

the output for this one is [...] so is this the possible siblings 3 kangaroos can have? where did you get this one?
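The efficiency point the thread settles on (iteration beats naive recursion, which recomputes the same values over and over) can be demonstrated in a few lines. A sketch in Python rather than the thread's Java, purely for brevity; the function names and the call counter are mine:

```python
# Compare the iterative loop from the thread with naive recursion.
# Convention from the thread: F(1) = F(2) = 1.

def fib_iterative(n):
    """Single loop, two running variables; each value is computed once."""
    f1, f2 = 1, 1
    for _ in range(3, n + 1):
        f1, f2 = f2, f1 + f2
    return f2

calls = 0

def fib_recursive(n):
    """Naive recursion straight from the definition; counts its own calls."""
    global calls
    calls += 1
    if n <= 2:
        return 1
    return fib_recursive(n - 2) + fib_recursive(n - 1)

if __name__ == "__main__":
    print(fib_iterative(10))   # 55
    print(fib_recursive(10))   # 55, but at the cost of many repeated calls
    print(calls)               # 109 calls just for n = 10
```

Memoizing the recursive version would also bring the call count down to O(n), which is effectively what the iterative loop achieves with two variables.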
The AstroStat Slog

This is an unusual Quote-of-the-week, in that I point you to [ABSTRACT] and a [VIDEO] of the recent talk at the Institute for Innovative Computing. See what you think!

This is a long comment on the October 3, 2007 Quote of the Week, by Andrew Gelman. His “folk theorem” ascribes computational difficulties to problems with one’s model. My thoughts:

Model, for statisticians, has two meanings. A physicist or astronomer would automatically read this as pertaining to a model of the source, or physics, or sky. It has taken me a long time to be able to see it a little more from a statistics perspective, where it pertains to the full statistical model. For example, in low-count high-energy physics, there had been a great deal of heated discussion over how to handle “negative confidence intervals”. (See for example PhyStat2003.) That is, when using the statistical tools traditional to that community, one had such a large number of trials and such a low expected count rate that a significant number of “confidence intervals” for source intensity were wholly below zero. Further, there were more of these than expected (based on the assumptions in those traditional statistical tools). Statisticians such as David van Dyk pointed out that this was a sign of “model mis-match”. But (in my view) this was not understood at first — it was taken as a description of physics model mismatch. Of course what he (and others) meant was statistical model mismatch. That is, somewhere along the data-processing path, some Gauss-Normal assumptions had been made that were inaccurate for (essentially) low-count Poisson. If one took that into account, the whole “negative confidence interval” problem went away. In recent history, there has been a great deal of coordinated work to correct this and do all intervals properly. This brings me to my second point.
I want to raise a provocative corollary to Gelman’s folk theorem: When the “error bars” or “uncertainties” are very hard to calculate, it is usually because of a problem with the model, statistical or otherwise. One can see this (I claim) in any method that allows one to get a nice “best estimate” or a nice “visualization”, but for which there is no clear procedure (or only an UNUSUALLY long one based on some kind of semi-parametric bootstrapping) for uncertainty estimates. This can be (not always!) a particular pitfall of “ad-hoc” methods, which may at first appear very speedy and/or visually compelling, but then may not have a statistics/probability structure through which to synthesize the significance of the results in an efficient way.

From the ever-quotable Andrew Gelman comes this gem, which he calls a Folk Theorem: When things are hard to compute, often the model doesn’t fit the data. Difficulties in computation are therefore often model problems… [When the computation isn't working] we have the duty and freedom to think about models.

Once again, the middle of a recent (Aug 30-31, 2007) argument within CHASC, on why physicists and astronomers view “3 sigma” results with suspicion and expect (roughly) > 5 sigma, while statisticians and biologists typically assume 95% is OK:

David van Dyk (representing statistics culture): Can’t you look at it again? Collect more data?

Vinay Kashyap (representing astronomy and physics culture): …I can confidently answer this question: no, alas, we usually cannot look at it again!!

Ah. Hmm. To rephrase [the question]: if you have a “7.5 sigma” feature, with a day-long [imaging Markov Chain Monte Carlo] run you can only show that it is “>3 sigma”, but is it possible, even with that day-long run, to tell that the feature is really at 7.5 sigma — is that the question?

Well that would be nice, but I don’t understand how observing again will help? No one believes any realistic test is properly calibrated that far into the tail.
Using 5-sigma is really just a high bar, but the precise calibration will never be done. (This is a reason not to sweat the computation TOO much.) Most other scientific areas set the bar lower (2 or 3 sigma) BUT don’t really believe the results unless they are replicated. My assertion is that I find replicated results more convincing than extreme p-values. And the controversial part: Astronomers should aim for replication rather than worry about 5-sigma.

These are from two lively CHASC discussions on classification, or cluster analysis. The first was on Feb 7, 2006; the continuation on Dec 12, 2006, at the Harvard Statistics Department, as part of Stat 310.

Don’t demand too much of the classes. You’re not going to say that all events can be well-classified…. It’s more descriptive. It gives you places to look. Then you look at your classes. Then you’re saying the cluster analysis is more like - It’s really like you have a proposal for classes. You then investigate the physical processes more thoroughly. You may have classes that divide it [up].

But it can make a difference, where you see the clusters, depending on your [parameter] transformation. You can squish the white spaces, and stretch out the crowded spaces; so it can change where you think the clusters are.

But that is interesting.

Yes, that is very interesting.

These are particularly in honor of Hyunsook Lee‘s recent posting of Chattopadhyay et al.’s new work about possible intrinsic classes of gamma-ray bursts. Are they really physical classes — or do they only appear to be distinct clusters because we view them through the “squished” lens (parameter spaces) of our imperfect instruments?

Some of the lively discussion at the end of the first “Statistical Challenges in Modern Astronomy” conference, at Penn State in 1991, was captured in the proceedings (“General Discussion: Working on the Interface Between Statistics and Astronomy”, Terry Speed (Moderator), in SCMA I, editors Eric D. Feigelson and G.
Jogesh Babu, 1992, Springer-Verlag, New York, p 505).

Joseph Horowitz (Statistician): …there should be serious collaboration between astronomers and statisticians. Statisticians should be involved from the beginning as real collaborators, not mere number crunchers. When I collaborate with anybody, astronomer or otherwise, I expect to be a full scientific equal and to get something out of it of value to statistics or mathematics, in addition to making a contribution to the collaborator’s field…

Jasper Wall (Astrophysicist): …I feel strongly that the knowledge of statistics needs to come very early in the process. It is no good downstream when the paper is written. It is not even much good when you have built the instrument, because we should disabuse statisticians of any impression that the data coming from astronomical instruments are nice, pure, and clean. Each instrument has its very own particular filter, each person using that instrument puts another filter on it and each method of data acquisition does something else yet again. I get more and more concerned particularly at the present time [1991] of data explosion (the observatory I work with is getting 700 MBy per night!). There is discussion of data compression, cleaning on-line, and other treatments even before the observing astronomer gets the data. The knowledge of statistics and the knowledge of what happens to the data need to come extremely early in the process.

“Bayesian” methods have, I think, rightly gained favor in astronomy as they have in other fields of statistical application. I put “Bayesian” in quotation marks because I do not believe this marks a revival in the sciences in the belief in personal probability. To me it rather means that all information on hand should be used in model construction, coupled with the view of Box [1979 etc], who considers himself a Bayesian: Models, of course, are never true but fortunately it is only necessary that they be useful.
The Bayesian paradigm permits one to construct models and hence statistical methods which reflect such information in an, at least in principle, marvellously simple way. A frequentist such as myself feels as at home with these uses of Bayes principle as any Bayesian.

From Bickel, P. J., “An Overview of SCMA II”, in Statistical Challenges in Modern Astronomy II, editors G. Jogesh Babu and Eric D. Feigelson, 1997, Springer-Verlag, New York, p 360.

[Box 1979] Box, G. E. P., 1979, “Some problems of statistics and everyday life”. J. Amer. Statist. Assoc., 74, 1-4.

Peter Bickel had so many interesting perspectives in his comments at these SCMA conferences that it was hard to choose just one set.

Ten years ago, Astrophysicist John Nousek had this answer to Hyunsook Lee’s question “What is so special about chi square in astronomy?”: The astronomer must also confront the problem that results need to be published and defended. If a statistical technique has not been widely applied in astronomy before, then there are additional burdens of convincing the journal referees and the community at large that the statistical methods are valid. Certain techniques which are widespread in astronomy and seem to be accepted without any special justification are: linear and non-linear regression (Chi-Square analysis in general), Kolmogorov-Smirnov tests, and bootstraps. It also appears that if you find it in Numerical Recipes (Press et al. 1992) that it will be more likely to be accepted without comment. …Note an insidious effect of this bias: astronomers will often choose to utilize a widely accepted statistical tool, even into regimes where the tool is known to be invalid, just to avoid the problem of developing or researching appropriate tools. From pg 205, in “Discussion by John Nousek” (of Edward J. Wegman et al., “Statistical Software, Siftware, and Astronomy”), in Statistical Challenges in Modern Astronomy II, editors G. Jogesh Babu and Eric D.
Feigelson, 1997, Springer-Verlag, New York.

Ingrid Daubechies interview by Dorian Devins, www.nasonline.org/interviews_daubechies, National Academy of Sciences, U.S.A., 2004. It is from part 6, where Ingrid Daubechies speaks of her early mathematics paper on wavelets. She tries to put the impact into context:

I really explained in the paper where things came from. Because, well, the mathematicians wouldn’t have known. I mean, to them this would have been a question that really came out of nowhere. So, I had to explain it … I was very happy with [the paper]; I had no inkling that it would take off like that… [Of course] the wavelets themselves are used. I mean, more than even that. I explained in the paper how I came to that. I explained both [a] mathematician's way of looking at it and then to some extent the applications way of looking at it. And I think engineers who read that had been emphasizing a lot the use of Fourier transforms. And I had been looking at the spatial domain. It generated a different way of considering this type of construction. I think, that was the major impact. Because then other constructions were made as well. But I looked at it differently. A change of paradigm. Well, paradigm, I never know what that means. A change of … a way of seeing it. A way of paying

Jeff Scargle (in person [top] and in wavelet transform [bottom], left) weighs in on our continuing discussion on how well “automated fitting”/“Machine Learning” can really work (private communication, June 28, 2007):

It is clearly wrong to say that automated fitting of models to data is impossible. Such a view ignores progress made in the area of machine learning and data mining.
Of course there can be problems, I believe mostly connected with two related issues:

* Models that are too fragile (that is, easily broken by unusual data)
* Unusual data (that is, data that lie in some sense outside the arena that one expects)

The antidotes are: (1) careful study of model sensitivity; (2) if the context warrants, preprocessing to remove “bad” points; (3) lots and lots of trial and error experiments, with both data sets that are as realistic as possible and ones that have extremes (outliers, large errors, errors with unusual properties, etc.). Trial … error … fix error … retry … You can quote me on that.

This illustration is from Jeff Scargle’s First GLAST Symposium (June 2007) talk, pg 14, demonstrating the use of inverse area of Voronoi tessellations, weighted by the PSF density, as an automated measure of the density of Poisson Gamma-Ray counts on the sky.

I want to use this short quote by Andrew Gelman to highlight many interesting topics at the recent Third Workshop on Monte Carlo Methods. This is part of Andrew Gelman’s emphasis on the fundamental importance of thinking through priors. He argues that “non-informative” priors (explicit, as in Bayes, or implicit, as in some other methods) can in fact be highly constraining, and that weakly informative priors are more honest. At his talk on Monday, May 14, 2007, Andrew Gelman explained: You want to supply enough structure to let the data speak, but that’s a tricky thing.

These quotes are in the opposite spirit of the last two Bayesian quotes. They are from the excellent “R”-based Tutorial on Non-Parametrics given by Chad Schafer and Larry Wasserman at the 2006 SAMSI Special Semester on AstroStatistics. Chad and Larry were explaining trees: For more sophisticated tree-searches, you might try Robert Nowak [and his former student, Becca Willett --- especially her "software" pages]. There is even Bayesian CART — Classification And Regression Trees.
These can take 8 or 9 hours to “do it right”, via MCMC. BUT [these results] tend to be very close to [less rigorous] methods that take only minutes.

Trees are used primarily by doctors, for patients: it is much easier to follow a tree than a kernel estimator, in person. Trees are much more ad-hoc than other methods we talked about, BUT they are very user friendly, very flexible. In machine learning, which is only statistics done by computer scientists, they love trees.

This is the second in a series of quotes by Xiao Li Meng, from an introduction to Markov Chain Monte Carlo (MCMC), given to a room full of astronomers, as part of the April 25, 2006 joint meeting of Harvard’s “Stat 310″ and the California-Harvard Astrostatistics Collaboration. This one has a long summary as the lead-in, but hang in there!

Summary first (from earlier in Xiao Li Meng’s presentation): Let us tackle a harder problem, with the Metropolis-Hastings Algorithm. An example: a tougher distribution, not Normal in [at least one of the dimensions], and multi-modal…

FIRST I propose a draw, from an approximate distribution. THEN I compare it to the true distribution, using the ratio of proposal to target distribution. The next draw: tells whether to accept the new draw or stay with the old draw.

Our intuition:
1/ For the original Metropolis algorithm, it looks “geometric”. (In the example, we are sampling “x,z”; if the point falls under our xz curve, accept it.)
2/ The speed of the algorithm depends on how close you are with the approximation. There is a trade-off with “stickiness”.

Practical questions: How large should, say, N be? This is NOT AN EASY PROBLEM! The KEY difficulty: multiple modes in unknown area. We want to know all (major) modes first, as well as estimates of the surrounding areas… [To handle this,] don’t run a single chain; run multiple chains. Look at between-chain variance; and within-chain variance. BUT there is no “foolproof” here… The starting point should be as broad as possible.
Go somewhere crazy. Then combine, either simply as these are independent; or [in a more complicated way as in Meng and Gelman].

And here’s the Actual Quote of the Week:

[Astrophysicist] Aneta Siemiginowska: How do you make these proposals?

[Statistician] Xiao Li Meng: Call a professional statistician like me. But seriously – it can be hard. But really you don’t need something perfect. You just need something decent.

This is one in a series of quotes by Xiao Li Meng, from an introduction to Markov Chain Monte Carlo (MCMC), given to a room full of astronomers, as part of the April 25, 2006 joint meeting of Harvard’s “Stat 310″ and the California-Harvard Astrostatistics Collaboration:

These MCMC [Markov Chain Monte Carlo] methods are very general. BUT anytime it is incredibly general, there is something to worry about. The same is true for bootstrap – it is very general; and easy to misuse.

Marty Weinberg, January 26, 2006, at the opening day of the Source and Feature Detection Working Group of the SAMSI 2006 Special Semester on Astrostatistics:

You can’t think about source detection and feature detection without thinking of what you are going to use them for. The ultimate inference problem and source/feature detection need to go together.
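The Metropolis-Hastings recipe summarized in Meng's talk above (propose a draw from an approximate distribution, compare it to the target via a ratio, then accept the new draw or stay with the old one) can be sketched compactly. A minimal illustration, assuming a standard normal target and a Gaussian random-walk proposal; all names and parameter values here are mine, not from the talk:

```python
import math
import random

def metropolis(target_density, x0, n_samples, step=1.0, rng=None):
    """Random-walk Metropolis: propose x' = x + step*N(0,1); accept with
    probability min(1, p(x')/p(x)); otherwise stay with the old draw."""
    rng = rng or random.Random(0)   # fixed seed for reproducibility
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + step * rng.gauss(0.0, 1.0)
        ratio = target_density(proposal) / target_density(x)
        if rng.random() < ratio:    # ratio > 1 always accepts
            x = proposal            # accept the new draw
        samples.append(x)           # either way, record the current state
    return samples

def std_normal(x):
    """Unnormalized standard normal; the normalizing constant cancels in the ratio."""
    return math.exp(-0.5 * x * x)

chain = metropolis(std_normal, x0=5.0, n_samples=5000)
```

Because only the density ratio enters the accept/stay decision, the target never needs to be normalized, which is what makes the method practical for the multi-modal, hard-to-normalize distributions discussed above. Running several such chains from scattered starting points, as the talk recommends, is just a matter of calling `metropolis` with different `x0` values and comparing between-chain and within-chain variance.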
Browse by Keyword: "geometry"

- atlaspack: Pack rectangles (or images) into a rectangle (or canvas texture atlas).
- binpacking: binary tree based bin packing algorithm
- box-frustum: Checks if an axis aligned bounding box intersects a camera frustum.
- clipsy: a node.js port of jsclipper [JS]... which is a port of clipper [C++, C#, Delphi]
- coati: Transform GeoJSON data to PostgreSQL/PostGIS data
- constellation: A grid geometry toolkit for controlling 2D sprite motion.
- d3-geom: The functions from src/geom in the d3 repo
- differential: Homological exterior derivative with integer coefficients
- dot-obj: A generic parser for the .obj 3D geometry format
- face-normals: Given an array of triangles' vertices, return a `Float32Array` of their normal vectors.
- fieldkit: Basic building blocks for computational design projects. Written in CoffeeScript for browser and server environments.
- geographiclib: GeographicLib (http://geographiclib.sourceforge.net/) node.js port
- geom: A collection of terse, efficient geometry tools.
- geom-mat: A collection of terse, efficient affine matrix tools.
- geom-poly: A collection of terse, efficient polygon tools.
- geom-vec: A collection of terse, efficient vector tools.
- geomath: AMD Geometry and Matrix modules using reuse pattern for better performance
- geometry: JavaScript library for working with objects in a two-dimensional coordinate system.
- gj2pg: geoJSON to PostgreSQL/PostGIS
- heya-dom: DOM: the venerable Dojo DOM utilities.
- hypotrochoid: Returns points across one or more hypotrochoids
- is-triangle: return whether an array of points (in any dimension) describe a triangle
- js-2dmath: Fast 2d geometry math: Vector2, Rectangle, Circle, Matrix2x3 (2D transformation), BoundingBox, Line2, Segment2, Intersections, Distances, Transitions (animation/tween), Random numbers, Noise
- jsts: A JavaScript library of spatial predicates and functions for processing geometry
- line2: perform operations on infinite lines in 2 dimensions
- mesh-geodesic: Approximate geodesic distance for triangulated meshes
- mesh-normals: Given a list of vertices and faces, generate the normals for a triangle mesh.
- mixdown-geotools: Geography tools for working with maps, or geometry in javascript. Can be used as library or in mixdown.
- mokolo: Collection of machine learning algorithms: Non-Negative Matrix Factorization for JavaScript
- multipoint: Multipoint Translation and Operations Library
- ndthree: The unholy union of three.js and ndarrays
- node-occ: OpenCascade OCE Wrapper for Node js
- normals: Estimates normals for meshes
- paper: The Swiss Army Knife of Vector Graphics Scripting
- point-displace: offset points by angles, displacements, and other points
- point-in-region: Quickly and robustly determines which region contains a given query point
- poly2tri: A 2D constrained Delaunay triangulation library
- polygon: utility for working with polygons (arrays of vec2s)
- ray: Minimal Ray geometric primitive
- rle-core: Core tools for working with narrow band level sets in JavaScript
- rle-sample: Methods for sampling narrowband level sets
- shape: Simple 2D shape generators
- simplify-js: A high-performance JavaScript 2D/3D polyline simplification library
- three: JavaScript 3D library
- toxiclibsjs: toxiclibsjs is an open-source library for computational design tasks with JavaScript.
- turf: a node.js library for performing geospatial operations with geojson
- vec2: manipulate vectors in 2d
- voronoi-map: JavaScript port of Amit Patel's mapgen2 (https://github.com/amitp/mapgen2). Map generator for games. Generates island maps with a focus on mountains, rivers, coastlines.
Magnetic Flux through a Coil

1. The problem statement, all variables and given/known data

You hold a wire coil perpendicular to a magnetic field B. If the magnitude of B increases while its direction remains unchanged, how will the magnetic flux through the coil change? Check all that apply:

- The flux is unchanged because the position of the coil with respect to B is unchanged.
- The flux increases because the magnitude of B increases.
- The flux decreases because the magnitude of B increases.
- The flux is unchanged because the surface area of the coil is unchanged.

2. Relevant equations

[tex]A_{eff} = A\cos\vartheta[/tex]

3. The attempt at a solution

According to the formula [tex]A_{eff} = A\cos\vartheta[/tex], the magnetic flux is determined by the area. I believe the answer is "flux is unchanged because the surface area of the coil is unchanged" since in the problem, only B is changing. Am I right?
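One way to check the candidate answers numerically is to plug arbitrary values into the definition of flux through a flat coil, Phi = B * A * cos(theta): since Phi is linear in B, increasing B at fixed area and orientation increases the flux. A quick sketch (the numbers are made up; only the scaling matters):

```python
import math

def magnetic_flux(B, area, theta_rad):
    """Flux through a flat coil: Phi = B * A * cos(theta)."""
    return B * area * math.cos(theta_rad)

A = 0.01      # coil area in m^2 (arbitrary value)
theta = 0.0   # field perpendicular to the coil's plane -> theta = 0

phi1 = magnetic_flux(0.5, A, theta)   # B = 0.5 T
phi2 = magnetic_flux(1.0, A, theta)   # B doubled, same A and theta

print(phi2 / phi1)   # 2.0: doubling B doubles the flux
```

Note the distinction the question turns on: the effective area A * cos(theta) is indeed unchanged, but the flux is the product of that area with B, so it grows when B does.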
Results 1 - 10 of 119

, 1999
"... We describe a method for reducing the complexity of temporal logic model checking in systems composed of many parallel processes. The goal is to check properties of the components of a system and then deduce global properties from these local properties. The main difficulty with this type of approach is that local properties are often not preserved at the global level. We present a general framework for using additional interface processes to model the environment for a component. These interface processes are typically much simpler than the full environment of the component. By composing a component with its interface processes and then checking properties of this composition, we can guarantee that these properties will be preserved at the global level. We give two example compositional systems based on the logic CTL*."
Cited by 2407 (62 self)

- Journal of Computer and System Sciences, 1982
"... This paper is an attempt at laying the foundations for the classification of queries on relational data bases according to their structure and their computational complexity. Using the operations of composition and fixpoints, a Σ-Π hierarchy of height ω², called the fixpoint query hierarchy, is defined, and its properties investigated. The hierarchy includes most of the queries considered in the literature, including those of Codd and Aho and Ullman."
Cited by 243 (3 self)

- Journal of Applied Logic, 2007
"... The practice of first-order logic is replete with meta-level concepts. Most notably there are meta-variables ranging over formulae, variables, and terms, and properties of syntax such as alpha-equivalence, capture-avoiding substitution and assumptions about freshness of variables with respect to metavariables. We present one-and-a-halfth-order logic, in which these concepts are made explicit. We exhibit both sequent and algebraic specifications of one-and-a-halfth-order logic derivability, show them equivalent, show that the derivations satisfy cut-elimination, and prove correctness of an interpretation of first-order logic within it. We discuss the technicalities in a wider context as a case-study for nominal algebra, as a logic in its own right, as an algebraisation of logic, as an example of how other systems might be treated, and also as a theoretical foundation ..."
Cited by 183 (21 self)

- , 1988
"... Nowadays computer science is surpassing mathematics as the primary field of logic applications, but logic is not tuned properly to the new role. In particular, classical logic is preoccupied mostly with infinite static structures whereas many objects of interest in computer science are dynamic objects with bounded resources. This chapter consists of two independent parts. The first part is devoted to finite model theory; it is mostly a survey of logics tailored for computational complexity. The second part is devoted to dynamic structures with bounded resources. In particular, we use dynamic structures with bounded resources to model Pascal."
Cited by 153 (16 self)
In particular, classical logic is preoccupied mostly with infinite static structures whereas many objects of interest in computer science are dynamic objects with bounded resources. This chapter consists of two independent parts. The first part is devoted to finite model theory; it is mostly a survey of logics tailored for computational complexity. The second part is devoted to dynamic structures with bounded resources. In particular, we use dynamic structures with bounded resources to model Pascal. - Combinatorica , 1992 "... In this paper we show that Ω(n) variables are needed for first-order logic with counting to identify graphs on n vertices. The k-variable language with counting is equivalent to the (k − 1) -dimensional Weisfeiler-Lehman method. We thus settle a long-standing open problem. Previously it was an open q ..." Cited by 135 (9 self) Add to MetaCart In this paper we show that Ω(n) variables are needed for first-order logic with counting to identify graphs on n vertices. The k-variable language with counting is equivalent to the (k − 1) -dimensional Weisfeiler-Lehman method. We thus settle a long-standing open problem. Previously it was an open question whether or not 4 variables suffice. Our lower bound remains true over a set of graphs of color class size 4. This contrasts sharply with the fact that 3 variables suffice to identify all graphs of color class size 3, and 2 variables suffice to identify almost all graphs. Our lower bound is optimal up to multiplication by a constant because n variables obviously suffice to identify graphs on n vertices. 1 - Graph Theoretic Concepts in Computer Science, 24th International Workshop, WG ’98, Lecture Notes in Computer Science , 1998 "... Abstract. Hierarchical decompositions of graphs are interesting for algorithmic purposes. There are several types of hierarchical decompositions. Tree decompositions are the best known ones. 
On graphs of tree-width at most k, i.e., that have tree decompositions of width at most k, where k is fixed, ..." Cited by 113 (20 self) Add to MetaCart Abstract. Hierarchical decompositions of graphs are interesting for algorithmic purposes. There are several types of hierarchical decompositions. Tree decompositions are the best known ones. On graphs of tree-width at most k, i.e., that have tree decompositions of width at most k, where k is fixed, every decision or optimization problem expressible in monadic second-order logic has a linear algorithm. We prove that this is also the case for graphs of clique-width at most k, where this complexity measure is associated with hierarchical decompositions of another type, and where logical formulas are no longer allowed to use edge set quantifications. We develop applications to several classes of graphs that include cographs and are, like cographs, defined by forbidding subgraphs with “too many ” induced paths with four vertices. 1. - Information and Computation , 1989 "... A theory satisfies the k-variable property if every first-order formula is equivalent to a formula with at most k bound variables (possibly reused). Gabbay has shown that a model of temporal logic satisfies the k-variable property for some k if and only if there exists a finite basis for the tempora ..." Cited by 76 (5 self) Add to MetaCart A theory satisfies the k-variable property if every first-order formula is equivalent to a formula with at most k bound variables (possibly reused). Gabbay has shown that a model of temporal logic satisfies the k-variable property for some k if and only if there exists a finite basis for the temporal connectives over that model. We give a model-theoretic method for establishing the k-variable property, involving a restricted Ehrenfeucht-Fraisse game in which each player has only k pebbles. We use the method to unify and simplify results in the literature for linear orders. 
We also establish new k-variable properties for various theories of bounded-degree trees, and in each case obtain tight upper and lower bounds on k. This gives the first finite basis theorems for branching-time models of temporal logic. 1 Introduction A first-order theory \Sigma satisfies the k-variable property if every first-order formula is equivalent under \Sigma to a formula with at most k bound variables (pos... - COMPUTATION AND PROOF THEORY , 1984 "... Whereas first-order logic was developed to confront the infinite it is often used in computer science in such a way that infinite models are meaningless. We discuss the first-order theory of finite structures and alternatives to first-order logic, especially polynomial time logic. ..." Cited by 75 (6 self) Add to MetaCart Whereas first-order logic was developed to confront the infinite it is often used in computer science in such a way that infinite models are meaningless. We discuss the first-order theory of finite structures and alternatives to first-order logic, especially polynomial time logic. - Annals of Discrete Mathematics , 1985 "... We develop a Logic in which the basic objects of concern are games, or equivalently, monotone predicate transforms. We give completeness and decision results and extend to certain kinds of many-person games. Applications to a cake cutting algorithm and to a protocol for exchanging secrets, are given ..." Cited by 63 (5 self) Add to MetaCart We develop a Logic in which the basic objects of concern are games, or equivalently, monotone predicate transforms. We give completeness and decision results and extend to certain kinds of many-person games. Applications to a cake cutting algorithm and to a protocol for exchanging secrets, are given. 1
Arch 499: Landscape Modeling - Project 1

Cut and Fill Calc's on form Z

Cut and fill calculations can be a very long and tedious task in any profession, so I have taken the liberty of discovering an easier way to accomplish this daunting task. This page demonstrates my work with form Z and how it makes cut and fill calcs a snap.

The Process

This project came about after I had been working with form Z terrain models in one class (Arch481) while also doing cut and fill calcs in a landscape construction class. In our construction class we learned how to calculate cut and fill using a large grid format along with a formula to calculate the change in grade per grid cell. This method was very long, boring, and filled with numerous mistakes. So I set out to find an easier way to calculate cut and fill. This page is a look at how form Z cut and fill calculations compare with those done by hand. Feel free to download the DXF contour files and try this yourself. I'd appreciate it if you'd e-mail me with your results.

Beginning with a Site Plan

The first step in calculating cut and fill is finding a project site that has existing and proposed contours on it. Here is the site I worked with for this project. The dashed lines are the existing contours and the solid lines are the proposed ones. (Site plan of project)

Calculating Cut and Fill

I asked around school and at work to see how most landscape architects calculate cut and fill. There were two main answers: Section Calculations and Area by Volume Calculations. Section Calculations are done by cutting multiple sections through a site to find volume, then multiplying each section by the linear distance between sections to get an overall calculation for each section. This is a very rough estimate at best.
The Area by Volume Method, which also seemed to be the most popular method, is done by calculating the area of each contour and then multiplying that area by the depth of the contour to get an overall cubic footage for each contour. Once you have the cubic footage for each contour, you just add them up to get a total cubic footage for the contours. You do this for both the proposed and existing contour maps, then subtract the existing value from the proposed value to get an overall value. A total positive value means the site requires that much fill, and a total negative number means that the site requires that much cut. As you can see, this is a very time-consuming task. Now I'll demonstrate how this task can be accomplished in form Z in a fraction of the total time! Check out my hand-calculated numbers.

The first step in calculating cut and fill in form Z is to take the site contours and scan them into the program. Below are the existing and proposed contours from the site shown above. (Existing contours traced in form Z / Proposed contours traced in form Z; download existing contours / download proposed contours)

Creating a Terrain Model

Once you have the contours traced into form Z, you can then create a stepped model of each group of contour lines. This is done by using the Terrain modeling tool in the program. You select the type of model (I used a step model), adjust the contour intervals and starting height, then select the contour lines in ascending order, and you have a 3-D step model of your site! Below are the stepped models for the existing and proposed contours of my project site. (Existing contours terrain model / Proposed contour terrain model)
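The hand method described above is easy to script. A minimal sketch in Python, with made-up contour areas (the real areas would be measured off each contour map):

```python
def contour_volume(areas_sqft, interval_ft):
    """Area by Volume: sum each contour's area times the contour interval (depth)."""
    return sum(a * interval_ft for a in areas_sqft)

# Hypothetical contour areas (sq ft) read off each map, at a 1 ft contour interval.
existing_areas = [12000.0, 9500.0, 7000.0, 4200.0]
proposed_areas = [11000.0, 9000.0, 6500.0, 4000.0]

existing_cuft = contour_volume(existing_areas, 1.0)
proposed_cuft = contour_volume(proposed_areas, 1.0)
diff = proposed_cuft - existing_cuft   # negative -> cut, positive -> fill
print(diff)                            # -2200.0, i.e. 2,200 cu ft of cut
```

The per-contour arithmetic is trivial; the tedium (and the mistakes) in the hand version come from measuring the areas and keeping the bookkeeping straight.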
(Query on existing terrain model / Query on proposed terrain model)

Final Cut and Fill Calculation

Once you have these numbers, you subtract the existing number from the proposed number to get a total change in cubic footage. This number is used to calculate cut and fill just like the hand-done calculations were, meaning that a negative number means overall cut and a positive number means overall fill. Here are my numbers:

Existing terrain model = 257,503.389 cubic feet
Proposed terrain model = 245,786.024 cubic feet
Difference = -11,717.365 cubic feet

Final Conclusions

Well, there you have form Z's final cut and fill calculation, but how does it compare to the one done by hand? Let's look at what I came up with:

Hand calculated using the "Area by Volume" method = about 12,500 cubic feet of cut
Calculated by form Z = 11,717.365 cubic feet of cut

As you can see, the numbers are very close. But the real key here is the amount of time it took to do each set of calculations. Let's compare those:

Hand calculations = approx. 1.5 hours
form Z calculations = approx. 0.5 hours

The bottom line here is that form Z can calculate precise cut and fill values in about a third of the time it takes to do the same calculations by hand (if you never make simple mathematical errors, mind you). Therefore I can honestly say that form Z is a quicker, more accurate way of calculating cut and fill, and should be used more readily to save time and money on this annoying task. But cut and fill is not all form Z is good for; check out my other pages to see what other aspects of landscape architecture form Z makes more efficient.

Chad Whichers, 1998
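The final arithmetic can be reproduced in a couple of lines. The totals are the form Z query results quoted above; the difference is taken as proposed minus existing so that a negative result means cut, matching the numbers on this page:

```python
# Query totals from form Z (cubic feet), as quoted above.
existing = 257_503.389
proposed = 245_786.024

diff = proposed - existing          # negative -> cut, positive -> fill
label = "cut" if diff < 0 else "fill"
print(f"{abs(diff):,.3f} cubic feet of {label}")  # 11,717.365 cubic feet of cut
```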
Here's the question you clicked on:

Use the Intermediate Value Theorem and synthetic division to determine whether or not the following polynomial has a real zero between the numbers given. P(x) = x^3 - 3x^2 + 2x - 5; is there a real zero between 2 and 3?
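The IVT part can be checked numerically. Horner's scheme below computes the same remainder that synthetic division produces; this sketch is mine, not from the thread:

```python
def horner(coeffs, x):
    """Evaluate a polynomial (coefficients listed highest degree first) at x.
    The result equals the final remainder of synthetic division by (x - c)."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

p = [1, -3, 2, -5]                 # P(x) = x^3 - 3x^2 + 2x - 5
print(horner(p, 2), horner(p, 3))  # -5 1
```

P(2) = -5 and P(3) = 1 have opposite signs, so by the Intermediate Value Theorem a real zero lies in (2, 3).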
Topic: RE: algebra II trig paper flashcards Replies: 9 Last Post: Feb 5, 2013 1:57 PM

RE: RE: algebra II trig paper flashcards
Posted: Jul 3, 2012 1:59 PM

I would also like to be included. Please send to mgavioli@ccsd.edu THANK YOU!

Dr. Mary Ann Gavioli
Mathematics Department Chair
Mu Alpha Theta Advisor
Mad About Mu Advisor
Clarkstown South HS
Fax: 845-623-5470

From: owner-nyshsmath@mathforum.org [owner-nyshsmath@mathforum.org] on behalf of Kathy Swan [kswan@ogdensburgk12.org]
Sent: Tuesday, July 03, 2012 12:17 PM
To: nyshsmath@mathforum.org
Subject: Re: RE: algebra II trig paper flashcards

I would also love to be included. Thank you soooo much for sharing. Please send to kswan@ogdensburgk12.org

* To unsubscribe from this mailing list, email the message
* "unsubscribe nyshsmath" to majordomo@mathforum.org
* Read prior posts and download attachments from the web archives at
* http://mathforum.org/kb/forum.jspa?forumID=671

Date Subject Author
7/2/12 RE: algebra II trig paper flashcards Eileen Lane
7/3/12 Re: RE: algebra II trig paper flashcards Kathy Swan
7/3/12 RE: RE: algebra II trig paper flashcards Gavioli, Mary, Ph.D.
7/4/12 RE: RE: algebra II trig paper flashcards SRothwell@e1b.org
7/4/12 Re: RE: algebra II trig paper flashcards Glenn Mary
7/4/12 Re: algebra II trig paper flashcards Roni316@aol.com
7/4/12 Re: algebra II trig paper flashcards Bauer Kathy
7/6/12 Re: RE: algebra II trig paper flashcards grozz
7/25/12 Re: RE: algebra II trig paper flashcards Steven Fenton
2/5/13 Re: RE: algebra II trig paper flashcards mathgirl
Summary: Introduction to Parigot's λμ-Calculus
Andreas Abel
March 30, 2001

The first-order fragment of Parigot's λμ-calculus is a formalization of the implicational fragment of classical logic. The proof term syntax, proof reductions and normal forms are gently introduced in this article.

1 Proof Terms and Proof Rules

In his seminal 1992 paper Parigot [Par92] introduced the λμ-calculus. We will restrict ourselves to the first-order fragment, which assigns proof terms to classical natural deductions in propositional logic. Consider the →-fragment plus the rule for absurdity elimination together with a rule how to obtain absurdity:

    [A]^x
      ⋮
      B
    ------- →I, x
    A → B

    A → B    A
    ----------- →E
         B
POW 1UP
March 27, 2010 5:19 AM

Is there a closed-form method for finding the unique subsets from the power set of prime factors of an integer?

Let's say I start with the integer 28, with prime factors {2, 2, 7}. If I construct a power set from this set of factors, I get: {{nil}, {2}, {2}, {7}, {2,2}, {2,7}, {2,7}, {2,2,7}}

I can reduce this to a set of divisors by multiplying the elements inside the non-empty subsets, to get {2, 7, 4, 14, 28} (adding {1} as a default divisor). However, two of the subsets are duplicates: {2} and {2,7}, so I am doing unnecessary calculations by simply iterating through all subsets. I could loop through the subsets and keep a hash table of multiplied elements-as-keys or some similar brute-force approach. But as a power set grows quickly, there could be a very large number of duplicates.

Is there a smarter, more general way to do this, if I know what the labels (prime factors) are, something like a unique permutation of the factors, where some factors might be repeated? (Note: This isn't homework.)
posted by Blazecock Pileon to Science & Nature (9 answers total)

Let's say your number has the factorization p_1^(e_1) p_2^(e_2) ... p_m^(e_m), where the p_i are distinct primes. For example, 28 = 2^2 7^1. Then you want p_1^(f_1) ... p_m^(f_m) for each m-tuple (f_1, ..., f_m) where f_k is between 0 and e_k (inclusive).
posted by madcaptenor at 5:25 AM on March 27, 2010 [3 favorites]

"two of the subsets are duplicates: {2} and {2,7}, so I am doing unnecessary calculations by simply iterating through all subsets."

If you are actually representing these using some kind of set data structure, duplicates should be removed for you automatically. By definition sets don't contain duplicates (e.g. {{0}, {0}} is the same set as {{0}}).
If you are rolling your own data structure, maybe the thing to do would be to see how a standard implementation of a set would do this. (Or just use one.)
posted by advil at 6:20 AM on March 27, 2010 [2 favorites]

Exactly what madcaptenor said: write out all the unique prime factors and how often each one occurs. Then, every element of the factor power set can be written in terms of taking f_1 of the first factor, f_2 of the 2nd factor, etc. On that note, if there are factors p_1, p_2, ..., p_m with counts e_1, e_2, ..., e_m, the number has (1+e_1) * (1+e_2) * ... * (1+e_m) factors, including 1 and itself.
posted by bsdfish at 8:07 AM on March 27, 2010

"(Note: This isn't homework.)"

I'm pretty sure it is, though. Anyway, I agree with the above: keep track of unique primes and the powers thereof.
posted by DU at 8:43 AM on March 27, 2010

"If you are actually representing these using some kind of set data structure, duplicates should be removed for you automatically."

That would be language or implementation dependent, and a kind of a cheat. My hope was to find a method for generating a power set that automatically removes duplicates, without relying on the vagaries of a particular language. But instead of using a power set, it seems better to just use an ordered tuple approach, once I have the factorization.
posted by Blazecock Pileon at 4:48 PM on March 27, 2010

"Then you want p_1^(f_1) ... p_m^(f_m) for each m-tuple (f_1, ..., f_m) where f_k is between 0 and e_k (inclusive)."

Isn't that also just the list of factors?
posted by albrecht at 7:38 PM on March 27, 2010

albrecht, I thought the list of factors is what Blazecock was asking for.
posted by madcaptenor at 6:53 AM on March 28, 2010

Oh, sorry; I misunderstood the question. In that case, your answer is definitely the way to go.
posted by albrecht at 7:35 AM on March 28, 2010

No, I have the factors. I was trying to build a count of divisors. Using a power set will introduce duplicates.
So I just need to generate permutations of factors a different way.
posted by Blazecock Pileon at 3:25 AM on March 30, 2010
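For what it's worth, madcaptenor's exponent-tuple answer translates directly to code. This sketch (names are mine) generates each divisor exactly once, with no duplicate work:

```python
from itertools import product

def divisors(factorization):
    """Given {prime: exponent}, generate every divisor exactly once."""
    primes = list(factorization)
    # One tuple (f_1, ..., f_m) per divisor, with 0 <= f_k <= e_k.
    for exponents in product(*(range(e + 1) for e in factorization.values())):
        d = 1
        for prime, f in zip(primes, exponents):
            d *= prime ** f
        yield d

print(sorted(divisors({2: 2, 7: 1})))  # [1, 2, 4, 7, 14, 28] -- no duplicates
```

Because distinct primes raised to distinct exponent tuples give distinct products, no deduplication step (hash table or set) is ever needed.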
Here's the question you clicked on:

WHAT IS THE SOLUTION to 2x^2+x+2=0?
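For reference, a quick discriminant check (not part of the original thread) shows why the solutions are complex:

```python
import cmath

a, b, c = 2, 1, 2
disc = b * b - 4 * a * c                  # 1 - 16 = -15: negative, so no real roots
r1 = (-b + cmath.sqrt(disc)) / (2 * a)
r2 = (-b - cmath.sqrt(disc)) / (2 * a)
print(disc, r1, r2)                       # roots are (-1 +/- i*sqrt(15)) / 4
```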
Capacitors and ESR

From Transwiki

Capacitor ESR Ratings

ESR: Equivalent Series Resistance

The ESR rating of a capacitor is a rating of quality. A theoretically perfect capacitor would be lossless and have an ESR of zero; it would have no in-phase AC resistance. We live in the real world, and all capacitors have some amount of ESR. To understand why, let us review what a capacitor is, what capacitors are made of, and how we rate them.

Measuring ESR

You can measure ESR with an analog meter called the Capacitor Wizard from Transtronics, Inc., which is this site's sponsor.

What is a Capacitor?

A capacitor consists of two conductive metal plates separated by an insulating dielectric. The dielectric can be made of glass, ceramic, tantalum oxide, or plastics such as polyethylene or polycarbonate. Even air can be used as the dielectric. When the capacitor holds some energy in the form of extra electrons on one plate and electron holes on the other, we say that the capacitor is charged.

Farads

Capacitance (C) is the amount of charge per volt of potential that a capacitor holds: C = Q / V,
• where Q = charge (measured in coulombs) and
• V = potential difference (measured in volts).

Capacitance is measured in Farads, but most often in a small fraction of a Farad, thus:
• micro-Farads (uF), millionths (10^-6) of a Farad
• pico-Farads (pF), trillionths (10^-12) of a Farad (sometimes called "puffs" in engineering slang)

The energy stored in a capacitor is E = CV^2/2 (E is in joules). Thus, the average power in watts is P[av] = CV^2 / 2t, where t is the time in seconds.

• The maximum voltage rating and its capacitance determine the amount of energy a capacitor holds.
• The voltage rating increases with increasing dielectric strength and thickness of the dielectric.
• The capacitance increases with the area of the plates and decreases with the thickness of the dielectric.

Thus, the capacitance of a capacitor (C) is related to
• the plate area (A),
• the plate separation distance (d), and
• the permittivity (ε) of the dielectric
by the following equation:

$C = \varepsilon A/d$

Here A and d are in meters and ε is in Farads per meter (or Coulombs squared per newton-meter squared). Notice the force unit involved: it explains capacitor microphonics (remember the good old condenser microphone?) and a mechanical failure mode of capacitors.

Dielectric Constants

The dielectric constant (k) gets its value by comparison with the charge-holding ability of a vacuum, where k = 1. Thus, k is the ratio of the capacitance with a volume of dielectric to that with a vacuum dielectric:

$k = \varepsilon_{\rm d}/\varepsilon_0$

• where ε[d] is the permittivity of the dielectric, and
• ε[0] is the permittivity of free space.

□ Air has nearly the same dielectric value as a vacuum, with k = 1.0001.
□ Teflon, a very good insulator, has a value of k = 2, while the plastics range from the low 2s to the low 3s.
□ Mica gets us k = 6.
□ Aluminum oxide is 7.
□ Tantalum's k is 27.
□ Ceramics range from 35 to over 6,000.

Dielectric constants vary with temperature, voltage, and frequency, making capacitors messy devices to characterize. Whole books have been written about choosing the correct dielectric for an application, balancing the desires of temperature range, temperature stability, size, cost, reliability, dielectric absorption, voltage coefficients, and current handling capacity (ESR). (Ivan Sinclair wrote a nice book on passives; unfortunately, it is out of print. This points to the fact that our universities are no longer teaching this material.)
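The plate formula and the stored-energy formula can be exercised together; the component dimensions below are hypothetical:

```python
EPS0 = 8.85e-12  # permittivity of free space, F/m (as given above)

def capacitance(k, area_m2, thickness_m):
    """Parallel-plate capacitance C = eps0 * k * A / d, in Farads."""
    return EPS0 * k * area_m2 / thickness_m

def stored_energy(c_farads, volts):
    """E = C * V^2 / 2, in joules."""
    return c_farads * volts ** 2 / 2

# Hypothetical film capacitor: k = 3, 0.5 m^2 of plate area, 10 um dielectric.
c = capacitance(3, 0.5, 10e-6)
print(c, stored_energy(c, 50))   # about 1.33 uF, about 1.66 mJ at 50 V
```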
See the Secrets of capacitor codes page for more information.

Dielectric Strength

Dielectric strength is a property of the dielectric that is usually expressed in volts per mil (V/.001") or volts per centimeter (V/cm). It is the maximum potential difference across a unit thickness of the dielectric before it breaks down and allows a spark. If we exceed the dielectric strength, an electric arc will "flash over" and often weld the plates of a capacitor together.

Q or Quality Factor

The Q of a capacitor is important in tuned circuits because they become more damped and have a broader tuning point as the Q goes down.

$Q = \frac{X_{\rm C}}{R}$

where X[C] is the capacitive reactance

$X_{\rm C} =\frac{1}{\omega C}=\frac{1}{2\pi f C}$

and R is the soon-to-be-defined term ESR. Q is proportional to the inverse of the amount of energy dissipated in the capacitor. Thus, the ESR rating of a capacitor is inversely related to its quality.

Dissipation Factor

The inverse of Q is the dissipation factor (tan(δ)). Thus, tan(δ) = ESR/X[C], and the higher the ESR, the more losses in the capacitor and the more power we dissipate. If too much energy is dissipated in the capacitor, it heats up to the point that values change (causing drift in operation) or the capacitor fails.

Ripple Current Rating

The ripple current of a capacitor is sometimes rated in RMS current. Remembering that P = I^2*R, where R in this case is ESR, it is plain to see that this is a power dissipation rating.

Dielectric Absorption

This is the phenomenon where, after a capacitor has been charged for some time and then discharged, some stored charge will migrate out of the dielectric over time, changing the voltage on the capacitor. This is extremely important in sample-and-hold circuit applications.
The typical method of observing dielectric absorption is to charge a cap to some known DC voltage for a given time, then discharge the capacitor through a 2 ohm resistor for one second, then watch the voltage on a high-input-impedance voltmeter. The ratio of recovered voltage to the original charging voltage (expressed in percent) is the usual measure of dielectric absorption. The charge absorption effect is caused by a trapped space charge in the dielectric and is dependent on the geometry and leakage of the dielectric material.

ESL

ESL (Equivalent Series Inductance) is pretty much caused by the inductance of the electrodes and leads. The ESL of a capacitor sets the limiting factor of how well (or how fast) a capacitor can de-couple noise off a power bus. The ESL of a capacitor also sets the resonant point of the capacitor: because the inductance appears in series with the capacitance, they form a tank circuit.

ESR Defined

ESR is the sum of in-phase AC resistance. It includes resistance of the dielectric, plate material, electrolytic solution, and terminal leads at a particular frequency. ESR acts like a resistor in series with a capacitor (thus the name Equivalent Series Resistance). This resistance is often the cause of failures in capacitor circuits. These circuits look just fine on paper, but the hidden resistance causes failure due to heat buildup.

To charge the dielectric material, current needs to flow down the leads, through the lead-plate junction, through the plates themselves, and even through the dielectric material. The dielectric losses can be thought of as friction from aligning dipoles, and thus appear as an increase of measured ESR as frequency increases (or a reduction of its rate of decrease; this is what makes the resistance vs. frequency line go flat). As the dielectric thickness increases, so does the ESR. As the plate area increases, the ESR will go down if the plate thickness remains the same.
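A short numeric sketch tying together the Q, tan(δ), ripple-heating, and tank-circuit relations discussed above; all component values are hypothetical:

```python
import math

# Hypothetical electrolytic: 100 uF, ESR = 0.05 ohm, ESL = 15 nH,
# measured at 100 kHz with 2 A RMS of ripple current.
c, esr, esl, f, i_ripple = 100e-6, 0.05, 15e-9, 100e3, 2.0

xc = 1 / (2 * math.pi * f * c)                  # capacitive reactance, ohms
q = xc / esr                                    # quality factor Q = X_C / ESR
df = esr / xc                                   # dissipation factor tan(delta) = 1/Q
heat = i_ripple ** 2 * esr                      # P = I^2 * ESR, watts of self-heating
f_res = 1 / (2 * math.pi * math.sqrt(esl * c))  # tank-circuit self-resonance, Hz
print(q, df, heat, f_res)
```

With these numbers the part dissipates 0.2 W of ripple heat and self-resonates near 130 kHz; above that frequency the ESL dominates and the part no longer behaves as a capacitor.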
To test a capacitor's ESR requires something other than a standard capacitor meter. While a capacitor value meter is a handy device, it will not detect capacitor failure modes that raise the ESR. As the years go by, more and more designs rely on low-ESR capacitors to function properly. ESR-failed caps can present circuit symptoms that are difficult to diagnose.

Formulas at a Glance

Dissipation Factor

• δ is the angle between the capacitor's impedance vector and the negative reactive axis.
• DF is the Dissipation Factor (sometimes expressed as a percentage).
• ESR stands for Equivalent Series Resistance.
• X[C] is the capacitive reactance.

$DF = \tan \delta = \frac{ESR}{|X_{\rm C}|}$

Capacitive Reactance

Reactance is used to compute amplitude and phase changes of sinusoidal current. It is denoted by the symbol $\scriptstyle{X}$ and can be used in place of resistance in many calculations; it can be thought of as the effective AC resistance at some frequency.

$X_{\rm C} = \frac{-1}{ \omega C} = \frac{-1}{2 \pi fC}$

The -1 above is because the reactance is negative in the vector math that follows. Both reactance $\scriptstyle{X}$ and resistance $\scriptstyle{R}$ are required to calculate impedance $\scriptstyle{Z}$. In some circuits one of these may dominate, but an approximate knowledge of the minor component is useful to determine if it may be neglected.

$Z = R + jX\,$ where j^2 = − 1

The magnitude $\scriptstyle{|Z|}$ and phase $\scriptstyle{\theta}$ of the impedance depend on the combined action of the resistance and the reactance. $\scriptstyle{|Z|}$ is the ratio of the voltage and current amplitudes, while $\scriptstyle{\theta}$ is the voltage–current phase difference.
$|Z| = \sqrt{ZZ^*} = \sqrt{R^2 + X^2}$, where $Z^*$ is the complex conjugate of $Z$

$\theta = \arctan{\frac{X}{R}}$

• If $X > 0$, the reactance is inductive.
• If $X = 0$, the impedance is purely resistive.
• If $X < 0$, the reactance is capacitive.

The reciprocal of reactance ($1/X$) is the susceptance - not a term you are likely to meet up with.

Inductive Reactance

$X_{\rm L} = \omega L = 2\pi fL$

Relationship between angular frequency ω and frequency f: $\omega = 2\pi f$

$I = \frac{E}{\sqrt{R^2 + \left(\omega L -\frac {1}{\omega C}\right)^2}}$

Resonance Frequency

• F_r = frequency of resonance

$F_{\rm r} = \frac{1}{2\pi \sqrt{LC}}$

Dielectric Constant to Capacitance

• k = dielectric constant ($\varepsilon_d/\varepsilon_0$; dimensionless)
• A = area (square meters)
• t = thickness of the dielectric (meters)
• Q = charge (coulombs)
• V = potential difference (volts)
• $\varepsilon_d$ is the permittivity of the dielectric
• $\varepsilon_0$ is the permittivity of free space ($8.85 \cdot 10^{-12}\ {\rm F}\;{\rm m}^{-1}$)

$C = \frac{\varepsilon_0 kA}{t}$    $C = \frac{Q}{V}$    $C = \frac{\varepsilon_d A}{t}$    $k = \frac{\varepsilon_d}{\varepsilon_0}$

Stored Energy

The energy E (in joules) stored in a capacitor is given by

$E = \frac{CV^2}{2}$

Average Power

The average power in watts, where t = time in seconds:

$P_{\rm av} = \frac{CV^2}{2t}$

Capacitor Impedance

$Z_{\rm C} = \sqrt{ESR^2 + X_{\rm C}^2}$

Time Domain Reflectometry (TDR)

Characteristic impedance of cable formulas.

Discontinuity in characteristic impedance

• Z_a = characteristic impedance through which the incident wave travels first
• Z_b = characteristic impedance through which the incident wave travels next
• V_r = the reflected wave amplitude
• V_i = the incident wave amplitude
• V_t = the transmitted wave amplitude
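The geometry, energy, and resonance formulas above chain together naturally. The following sketch (all dimensions hypothetical) goes from plate geometry to capacitance, then to stored energy and resonant frequency:

```python
import math

EPS0 = 8.85e-12  # permittivity of free space, F/m

def parallel_plate_capacitance(k, area_m2, thickness_m):
    """C = eps0 * k * A / t."""
    return EPS0 * k * area_m2 / thickness_m

def stored_energy(c_farads, volts):
    """E = C * V^2 / 2, in joules."""
    return 0.5 * c_farads * volts**2

def resonance_frequency(l_henries, c_farads):
    """F_r = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

# Hypothetical: k = 1000 ceramic, 1 cm^2 plates, 10 um thick dielectric
c = parallel_plate_capacitance(1000, 1e-4, 10e-6)   # 88.5 nF
e = stored_energy(c, 50.0)                          # energy at 50 V
f = resonance_frequency(2e-9, c)                    # with 2 nH series L
print(c, e, f)
```

Doubling the plate area doubles C, which halves the resonant frequency's L·C term's inverse square root by a factor of √2 - the same trade-offs the formulas state symbolically.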
$\frac{V_{\rm r}}{V_{\rm i}} = \frac{Z_{\rm b} -Z_{\rm a}}{Z_{\rm b} + Z_{\rm a}}$    $\frac{V_{\rm t}}{V_{\rm i}} = \frac{2 Z_{\rm b}}{Z_{\rm b} + Z_{\rm a}}$

$Z_{\rm b} = Z_{\rm a}\left( \frac{V_{\rm i} + V_{\rm r}}{V_{\rm i} - V_{\rm r}}\right)$

• Z_0 is the characteristic impedance

$\tau = RC = \frac{Z_0C}{2}$
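The TDR relations above can be inverted in code. A minimal sketch (the impedance values are hypothetical) that computes the reflection coefficient at a discontinuity and then recovers Z_b from it:

```python
def reflection_coefficient(z_a, z_b):
    """Vr/Vi = (Zb - Za) / (Zb + Za) at an impedance discontinuity."""
    return (z_b - z_a) / (z_b + z_a)

def impedance_from_reflection(z_a, rho):
    """Invert the relation: Zb = Za * (Vi + Vr) / (Vi - Vr),
    i.e. Zb = Za * (1 + rho) / (1 - rho)."""
    return z_a * (1.0 + rho) / (1.0 - rho)

# Hypothetical: a 50 ohm cable feeding a 75 ohm section
rho = reflection_coefficient(50.0, 75.0)        # 0.2
print(impedance_from_reflection(50.0, rho))     # recovers 75.0
```

This inversion is exactly what a TDR instrument does when it converts a measured step reflection into an impedance profile along the cable.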
Syllabus Entrance
MA 120 Basic Concepts of Statistics
Willcox, John

Mission Statement: The mission of Park University, an entrepreneurial institution of learning, is to provide access to academic excellence, which will prepare learners to think critically, communicate effectively and engage in lifelong learning while serving a global community.

Vision Statement: Park University will be a renowned international leader in providing innovative educational opportunities for learners within the global society.

Course: MA 120 Basic Concepts of Statistics
Semester: F1T 2007 DLH
Faculty: Willcox, John
Title: Senior Instructor
Degrees/Certificates: BS Mathematics; MA Industrial Management; MA Logistics Management
Office Location: Nine Mile Falls, WA
Office Hours: Respond within 48 hours
Daytime Phone: 509 276 9466 (please leave name and phone number, slowly)
FAX: 509 276 9466 (if the answering machine comes on, continue with the FAX; it will work)
E-Mail: John.Willcox@pirate.park.edu
Semester Dates: Aug 20 - Oct 14, 2007
Class Days: Monday thru Sunday
Class Time: Will try to respond to students within 24 hours, but no later than 48 hours
Prerequisites: None, but students should have a basic understanding of math concepts; if not, College Mathematics or Algebra is recommended.
Credit Hours: 3

Required Text: Elementary Statistics - 4th Ed.
Author: Allan B. Bluman
ISBN: 0-07-334714-0
Order Texts at: http://direct.mbsbooks.com/park.htm

The purchase of a new textbook ensures the inclusion of the Student Access Kit to MathZone. You will also need a scientific calculator. Check with your instructor for specific requirements and whether MathZone will be required. Links in the course Student Instruction Guide are provided for downloading required FREE software for the multimedia presentations of the course.
Textbooks can be purchased through the MBS bookstore or through the Parkville Bookstore.

Additional Resources: Students should have a basic scientific calculator, such as a Texas Instruments or Casio, and know how to use it. Functions the student should know how to use include the square, square root, power, permutation, and combination functions. These calculators sell for $12-$20.

McAfee Memorial Library - Online information, links, electronic databases and the Online catalog. Contact the library for further assistance via email or at 800-270-4347.

Career Counseling - The Career Development Center (CDC) provides services for all stages of career development. The mission of the CDC is to provide the career planning tools to ensure a lifetime of career success.

Park Helpdesk - If you have forgotten your OPEN ID or Password, or need assistance with your PirateMail account, please email helpdesk@park.edu or call 800-927-3024.

Resources for Current Students - A great place to look for all kinds of information: http://www.park.edu/Current/.

Advising - Park University would like to assist you in achieving your educational goals. Please contact your Campus Center for advising or enrollment adjustment information.

Online Classroom Technical Support - For technical assistance with the Online classroom, email helpdesk@parkonline.org or call the helpdesk at 866-301-PARK (7275). To see the technical requirements for Online courses, please visit the http://parkonline.org website, click on the "Technical Requirements" link, and then click on "BROWSER Test" to see if your system is ready.

FAQ's for Online Students - You might find the answer to your questions here.

Course Description: A development of certain basic concepts in probability and statistics that are pertinent to most disciplines. Topics include: probability models, parameters, statistics and sampling procedures, hypothesis testing, correlation, and regression.
3:0:3

Educational Philosophy: Each student is responsible for:

• Completing weekly reading assignments.
• Participating in weekly discussions/study groups/projects.
• Completing weekly homework on time.
• Completing the midterm examination.
• Completing a proctored examination during Week 8.

Learning Outcomes:

Core Learning Outcomes
1. Compute descriptive statistics for raw data as well as grouped data
2. Determine appropriate features of a frequency distribution
3. Apply Chebyshev's Theorem
4. Distinguish between and provide relevant descriptions of a sample and a population
5. Apply the rules of combinatorics
6. Differentiate between classical and frequency approaches to probability
7. Apply set-theoretic ideas to events
8. Apply basic rules of probability
9. Apply the concepts of specific discrete random variables and probability distributions
10. Compute probabilities of a normal distribution

Core Assessment: Description of MA 120 Core Assessment

One problem with multiple parts for each numbered item, except for item #3, which contains four separate problems.

1. Compute the mean, median, mode, and standard deviation for a sample of 8 to 12 data.
2. Compute the mean and standard deviation of a grouped frequency distribution with 4 classes.
3. Compute the probability of four problems from among these kinds or combinations thereof:
a. the probability of an event based upon a two-dimensional table;
b. the probability of an event that involves using the addition rule;
c. the probability of an event that involves conditional probability;
d. the probability of an event that involves the use of independence of events;
e. the probability of an event based upon permutations and/or combinations;
f. the probability of an event using the multiplication rule; or
g. the probability of an event found by finding the probability of the complementary event.
4. Compute probabilities associated with a binomial random variable associated with a practical situation.
5. Compute probabilities associated with a non-standard normal probability distribution.
6. Compute and interpret a confidence interval for a mean and/or for a proportion.

Link to Class Rubric

Class Assessment: The class will be assessed primarily by participation, homework, the midterm examination, and the Final Examination; no graded quizzes will be required. The final is part of the core assessment. It is a departmental exam provided by the department of mathematics; it is two hours long and will be proctored. This will assure that the student understands the basic concepts of solving and understanding statistical data gathering in order to accomplish follow-on University classes.

The following events will take place each week of class, and the student will be responsible for accomplishing each item during that week for full credit. If the student delays, 50% will be deducted for items submitted on Monday and 100% for items submitted on Tuesday of the following class week. The weeks (units) are as follows:

Unit 1: Chapter 1, The Nature of Probability and Statistics - Discussion, Homework Problems Due
Unit 2: Chapter 2, Frequency Distributions and Graphs - Discussion, Homework Problems Due
Unit 3: Chapter 3, Data Statistics - Discussion, Homework Problems Due
Unit 4: Chapter 4, Probability and Counting Rules - Discussion, Homework Problems Due
Unit 5: Chapter 5, Discrete Probability Statistics - Discussion, Homework Problems Due, Midterm Examination
Unit 6: Chapter 6, The Normal Distribution - Discussion, Homework Problems Due, Deadline for Proctor Form (if this is not accomplished, 10 points will be deducted from your final)
Unit 7: Chapter 7, Confidence Intervals and Sample Size - Discussion, Homework Problems Due
Unit 8: Chapters 8 and 10, Hypothesis Testing and Correlation and Regression - Discussion, Homework Problems Due, Final Examination

Weekly participation/discussion will be required, totaling 10-15 points.
This will consist primarily of the study guide and discussion threads. The discussion thread is the staple communication tool for this course; it covers problems and issues that relate to the homework, midterm, and final. It is important that students visit these threads on a regular basis and not wait until Sunday evening. Many issues that relate to working problems are discussed there, and missed discussions can affect the student's knowledge. Those who only join the thread on Sunday evening may have points deducted. Students who participate far beyond what is expected will get the maximum points (10 points), those who excel above average will get 8-9 points, and those who just do the assignment will get 7 points. If a student misses a week, a 0 grade will be given.

Homework, which needs to be handed in on time, is worth a total of 5 points. Homework will be spot graded, and points will be taken off for missed questions or incomplete work. Additionally, some students spend a great amount of time preparing charts, explanations, and attention to detail, while others just provide the answers. For just doing what is expected you could get 3 points, more than what is required may receive 4 points, and exceptional work and content could get the full 5 points.

The midterm and final will be open book, calculator, and notes; each is worth 125 points. The midterm will cover chapters 1-5 and the final will cover chapters 1-7. The instructor will provide discussion questions that relate to the midterm and final throughout the class. If the student does not understand any of these questions, make sure this is identified in the appropriate threads. The student is required to obtain a proctor; if the proctor is not obtained during the required time frame, 20 points may be deducted from the student's final.
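The midterm and final exercise computations like Core Assessment item 1: the mean, median, mode, and standard deviation of a small sample. The course itself expects these on a scientific calculator; purely as an illustrative cross-check (the data values below are made up), Python's standard library computes the same quantities:

```python
import statistics

sample = [12, 15, 15, 18, 20, 22, 22, 22, 25, 30]  # hypothetical sample of 10 data

print("mean   =", statistics.mean(sample))                  # 20.1
print("median =", statistics.median(sample))                # 21.0
print("mode   =", statistics.mode(sample))                  # 22
print("sample std dev =", round(statistics.stdev(sample), 2))  # 5.32
```

Note that `stdev` uses the n−1 (sample) denominator, matching the convention taught for sample data in the text.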
Summary of Assigned Grade Points

Item      | Participation | Homework | Mid Term | Final | Total
Points    | 85            | 45       | 125      | 125   | 380
Percent   | 0.22          | 0.12     | 0.33     | 0.33  | 1.00

Grading scale: A = 90-100%, B = 80-<90%, C = 70-<80%, D = 60-<70%, F = 0-<60%.

Late Submission of Course Materials: All work should be completed by the end of each course week. After that, a 50% reduction will be levied for each day late. If the student does not complete the work by 12 PM CST the following Tuesday, a 0 grade will be administered. The student is required to obtain a proctor; if the proctor is not obtained during the required time frame, 20 points may be deducted from the student's final.

Classroom Rules of Conduct:

Online Course Policies:

Policy #1: Submission of Work:
• A class week is defined as the period of time between Monday 12:01 AM MST and Sunday at 11:59 PM MST. The first week begins the first day of the term/semester. Assignments scheduled for completion during a class week should be completed and successfully submitted by the posted due date.
• Create a backup file of every piece of work you submit for grading. This will ensure that a computer glitch or a glitch in cyberspace won't erase your efforts.
• When files are sent attached to an email, the files should be in Microsoft Word, RTF, ASCII, txt, or PDF file formats.

Policy #2: Ground Rules for Online Communication & Participation
• General email: Students should use email for private messages to the instructor and other students. When sending email other than assignments, you must identify yourself fully by name and class in all email sent to your instructor and/or other members of our class.
• Online threaded discussions are public messages, and all writings in this area will be viewable by the entire class or assigned group members.
• Online Instructor Response Policy: Online instructors will check email frequently and will respond to course-related questions within 24-48 hours.
• Observation of "Netiquette": All your Online communications need to be composed with fairness, honesty and tact. Spelling and grammar are very important in an Online course. What you put into an Online course reflects on your level of professionalism. Here are a couple of Online references that discuss writing Online (http://goto.intwg.com/) and netiquette (http://www.albion.com/netiquette/corerules.html).
• Please check the Announcements area before you ask general course "housekeeping" questions (i.e., how do I submit assignment 3?). If you don't see your question there, then please contact your instructor.

Policy #3: What to do if you experience technical problems or have questions about the Online classroom.
• If you experience computer difficulties (you need help downloading a browser or plug-in, you need help logging into the course, or you experience any errors or problems while in your Online course), click on the Help button in your Online Classroom, then click on the helpdesk menu item, and then fill out the form or call the helpdesk for assistance.
• If the issue is preventing you from submitting or completing any coursework, contact your instructor immediately.

Course Topic/Dates/Assignments:

You have heard Benjamin Disraeli's assertion: "There are three types of lies: lies, damned lies, and statistics." By learning the concepts expressed in this course, you should be able to sort out how the information in collections of data is interpreted (as well as misinterpreted). Each week we'll focus on different aspects of the general properties of data sets, methods of collecting data, ways of analyzing and expressing ideas about data sets, and problem-solving methods based on information contained in our text. Welcome to Basic Concepts of Statistics.
This course provides an introduction to the world of statistical analysis. Each week we'll focus on different aspects of the general topic.

In Unit 1 we'll learn what the topic of statistics entails. We'll discuss some ways to collect the needed data for a statistical study. By the end of the unit we'll have a view of how the two distinct divisions of statistics, descriptive and inferential, are related.

In Unit 2 we'll discover how to convert pure data into "corrupted" data - that is, ungrouped data into grouped data. Then we will examine some of the many ways data can be visually displayed. We will finish with a method of pairing and graphing two sets of data to analyze the possibility of a relationship; we will return to this graph in Unit 8 when we discuss correlation and regression.

In Unit 3 we will examine ways to describe data by looking at its central tendency, its variation from its center, and how to determine the location of an element within a data set. A method of finding the proportions of variation a data set possesses will also be covered.

In Unit 4 we'll explore the basic concepts of probability, the branch of mathematics that allows us to take a sample and make predictions about the population from which it was derived. We'll strive to gain a fundamental understanding of probability through its addition, multiplication and counting rules.

In Unit 5 we combine the probability concepts and the statistical concepts we previously learned to construct discrete probability distributions. Then we'll learn how to find statistics of the distribution. The unit ends with a discussion of a specific discrete probability distribution called the binomial distribution.

In Unit 6 the discussion changes from discrete distributions to continuous random variable distributions. We begin by looking at the Normal distribution and then quickly move on to the Standard Normal distribution.
We conclude the unit by learning how the Central Limit Theorem can be applied to sample data sets.

In Unit 7 we move into inferential statistics. We learn how to use a sample mean to estimate the population mean, and how we can confidently report its value within a specific interval.

In Unit 8 we will examine the basics of hypothesis testing by using one-sample procedures for the hypothesis test of the population mean. In addition, we will conclude our examination of topics in statistics by discussing the purpose of regression and correlation analysis. First, we'll examine some introductory terms, then focus on simple linear regression analysis and simple linear correlation analysis. During this final week of the course you will also complete the proctored Final Exam and the Course Evaluation.

Academic Honesty: Academic integrity is the foundation of the academic community. Because each student has the primary responsibility for being academically honest, students are advised to read and understand all sections of this policy relating to standards of conduct and academic life. Park University 2007-2008 Undergraduate Catalog, pages 85-86.

Plagiarism involves the use of quotations without quotation marks, the use of quotations without indication of the source, the use of another's idea without acknowledging the source, the submission of a paper, laboratory report, project, or class assignment (any portion of such) prepared by another person, or incorrect paraphrasing. Park University 2007-2008 Undergraduate Catalog, page 85.

Attendance Policy: Instructors are required to maintain attendance records and to report absences via the online attendance reporting system.
1. The instructor may excuse absences for valid reasons, but missed work must be made up within the semester/term of enrollment.
2. Work missed through unexcused absences must also be made up within the semester/term of enrollment.
3. Work missed through unexcused absences must also be made up within the semester/term of enrollment, but unexcused absences may carry further penalties.
4. In the event of two consecutive weeks of unexcused absences in a semester/term of enrollment, the student will be administratively withdrawn, resulting in a grade of "F".
5. A "Contract for Incomplete" will not be issued to a student who has unexcused or excessive absences recorded for a course.
6. Students receiving Military Tuition Assistance or Veterans Administration educational benefits must not exceed three unexcused absences in the semester/term of enrollment. Excessive absences will be reported to the appropriate agency and may result in a monetary penalty to the student.
7. A report of an "F" grade (attendance or academic) resulting from excessive absence, for those students who are receiving financial assistance from agencies not mentioned in item 6 above, will be made to the appropriate agency.

ONLINE NOTE: An attendance report of "P" (present) will be recorded for students who have logged in to the Online classroom at least once during each week of the term. Recording of attendance is not equivalent to participation. Participation grades will be assigned by each instructor according to the criteria in the Grading Policy section of the syllabus. Park University 2007-2008 Undergraduate Catalog, pages 87-88.

Disability Guidelines: Park University is committed to meeting the needs of all students who meet the criteria for special assistance. These guidelines are designed to supply directions to students concerning the information necessary to accomplish this goal. It is Park University's policy to comply fully with federal and state law, including Section 504 of the Rehabilitation Act of 1973 and the Americans with Disabilities Act of 1990, regarding students with disabilities. In the case of any inconsistency between these guidelines and federal and/or state law, the provisions of the law will apply.
Additional information concerning Park University's policies and procedures related to disability can be found on the Park University web page: http://www.park.edu/disability.

Additional Information: Students will post all items which relate to the class in the appropriate links, such as Study Guide, Discussion, or Office. If the student has a personal issue, such as missing class, then email communication will be used. If emails are sent to the instructor asking something about the class, such as how to accomplish a homework problem or what the final consists of, the email will be returned with instructions to post in the class links.

Class Rubric (each competency is scored Exceeds Expectation (3), Meets Expectation (2), Does Not Meet Expectation (1), or No Evidence (0)):

Evaluation (Outcome 10): Can perform and interpret a hypothesis test with 100% accuracy / with at least 80% accuracy / with less than 80% accuracy / Makes no attempt to perform a test of hypothesis.

Synthesis (Outcome 10): Can compute and interpret a confidence interval for a sample mean for small and large samples, and for a proportion, with 100% accuracy / with at least 80% accuracy / with less than 80% accuracy / Makes no attempt to compute or interpret a confidence interval.

Analysis (Outcome 10): Can apply the normal distribution, Central Limit Theorem, and binomial distribution to practical problems with 100% accuracy / with at least 80% accuracy / with less than 80% accuracy / Makes no attempt to apply the normal distribution, Central Limit Theorem, or binomial distribution.

Terminology (Outcomes 4, 5, 7): Can explain event, simple event, mutually exclusive events, independent events, discrete random variable, continuous random variable, sample, and population with 100% accuracy / with at least 80% accuracy / with less than 80% accuracy / Makes no attempt to explain any of the terms listed.

Concepts (Outcomes 1, 6): Can explain mean, median, mode, standard deviation, simple probability, and measures of location with 100% accuracy / with at least 80% accuracy / with less than 80% accuracy / Makes no attempt to define any concept.

Application (Outcomes 1, 2, 3, 8, 9): Can compute probabilities using addition, multiplication, and complement rules and conditional probabilities; compute statistical quantities for raw and grouped data; and compute probabilities using combinatorics, discrete random variables, and continuous random variables - all with 100% accuracy / all with at least 80% accuracy / with less than 80% accuracy / Makes no attempt to compute any of the probabilities or statistics listed.

Whole Artifact (Outcomes 7, 8): Can apply the concepts of probability and statistics to real-world problems in other disciplines with 100% accuracy / with at least 80% accuracy / with less than 80% accuracy / Makes no attempt to apply the concepts to real-world problems.

Components (Outcome 1): Can use a calculator or other computing device to compute statistics with 100% accuracy / with at least 80% accuracy / with less than 80% accuracy / Makes no attempt to use any computing device to compute statistics.

This material is copyright and can not be reused without author permission.

Last Updated: 7/19/2007 8:55:34 AM
Using Information-Theoretic Statistics in MATLAB to Understand How Ecosystems Affect Regional Climates It is common knowledge that weather and climate influence the plants, animals, and microorganisms that live upon the landscape. New research is investigating the possibility that the opposite is also true: that due to feedback between plants and the atmosphere, vegetation and the landscape influence regional climate. University of Illinois at Urbana-Champaign researchers have developed statistical methods to detect connections between environmental variables, such as evaporation from plant leaves; isolate variables that drive changes in other variables; and identify feedback loops. The project is funded by NASA and the Metropolitan Water Reclamation District of Greater Chicago. To understand the relationship between key variables in a self-organizing system, such as the Earth’s land-surface ecosystem and climate, we must look beyond traditional linear methods of analysis. In a linear system, changes in subsystem “X” cause proportional changes in subsystem “Y”. In a nonlinear or self-organizing system featuring circular feedback, this notion of causality breaks down, as components “X” and “Y” become self-causing. Furthermore, self-organizing feedback loops may be nested inside each other so that the system’s dynamics behave like a Russian Doll, where each physical process is a small part of a larger feedback loop. This type of self-organizing system is best understood as a process network^1. Process networks describe complex systems as networks of nested feedback loops and their associated timescales. Using a new class of advanced statistics based on the Theory of Statistical Information, process networks can be derived for any system that can be observed and measured. 
Using MATLAB® and Parallel Computing Toolbox™, we apply these computationally intensive statistical methods to time series data, including observed meteorological, hydrological, and environmental variables. The results are helping to explain not only how changes in climate, including drought, affect the ecosystem, but also how human changes to landscape and vegetation affect the regional climate.

Tackling a Computationally Intensive Problem

The observed data is derived from FLUXNET, a global network of more than 400 towers, each equipped with a suite of sensors (Figure 1). These sensors record air temperature (Θa), soil temperature (Θs), soil water content (θ), radiation from the sun (Rg), vapor pressure density (VPD, a measure of humidity), precipitation (P), cloud cover (CF), the net flow of carbon dioxide in or out of the ecosystem (NEE), and the amount of heat radiated from the ground as sensible heat flux (γH) and as latent heat flux (γLE, evaporated water) (Figure 2). The measurements are averaged to a time resolution of 30-minute intervals. The Bondville tower used to study the structure of drought is located near Champaign, Illinois; it has been measuring the climate since 1996.

For each combination of two variables measured by the FLUXNET tower, a joint probability distribution is estimated from the time series data. The information-theoretic statistic transfer entropy, which establishes statistically causal links between variables, requires estimation of a 3D joint probability density function. This computation must be repeated for all possible combinations of variables and time lags for each month of data studied. We can then examine how the process network of connections between variables changes with the seasons and understand the effects of drought on the structure of the system. The computationally intensive nature of this approach was one of the main reasons we chose MATLAB.
MATLAB is well suited to the matrix manipulation required for the analysis, and Parallel Computing Toolbox enabled us to accelerate the computations by running them on a computing cluster. In addition, MATLAB visualization capabilities allowed us to rapidly analyze a large volume of statistical results.

The first step was to make sure that the data we received from the FLUXNET towers was complete and correctly formatted. Using MATLAB and Statistics Toolbox™, we wrote scripts to extract the subset of data that we needed, scan it for errors and omissions, fill in missing data when possible, and format the data for use in the statistical algorithms. Statistics Toolbox was used to summarize the input data set by month, season, and year to allow the plotting of results.

Estimating the transfer entropy statistic depends on the accurate estimation of probability densities from data. To calculate densities, we developed MATLAB algorithms for a fixed-interval partition (or bin-counting) classification scheme to estimate joint probabilities.

We obtained several interesting statistical results by applying transfer entropy to study the system's process network, including the monthly mean net information production of each variable. Information production measures the predictive value of each variable on the process network; a variable with a sufficiently large positive net information production causally drives other variables on the network more than it is driven by those variables (Figure 3). Due to feedback on the process network, all variables control the behavior of the network as a whole, but the variables marked red in Figure 3 have the greatest controlling influence.

Parallelizing the Application

During initial prototyping, the analysis focused on data from just two months at a single site. MATLAB algorithms were run on a dedicated workstation overnight because they took several hours to complete.
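The fixed-interval bin-counting scheme described above can be sketched in a few lines. The authors' implementation is in MATLAB and is not reproduced in the article; the following is a simplified pure-Python analogue (not their code) that estimates a 2-D joint probability from paired time series and computes mutual information, a simpler, lag-free relative of transfer entropy:

```python
import math

def bin_index(x, lo, hi, nbins):
    """Map a value into a fixed-interval bin over [lo, hi]."""
    i = int((x - lo) / (hi - lo) * nbins)
    return min(max(i, 0), nbins - 1)

def joint_probability(xs, ys, nbins=4):
    """Estimate p(x, y) by counting pairs in a fixed-interval 2-D partition."""
    xlo, xhi = min(xs), max(xs)
    ylo, yhi = min(ys), max(ys)
    counts = [[0] * nbins for _ in range(nbins)]
    for x, y in zip(xs, ys):
        counts[bin_index(x, xlo, xhi, nbins)][bin_index(y, ylo, yhi, nbins)] += 1
    n = len(xs)
    return [[c / n for c in row] for row in counts]

def mutual_information(pxy):
    """I(X;Y) = sum over bins of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )."""
    px = [sum(row) for row in pxy]
    py = [sum(col) for col in zip(*pxy)]
    mi = 0.0
    for i, row in enumerate(pxy):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[i] * py[j]))
    return mi

# Sanity check on a toy signal: a series shares maximal information
# with itself, so I(X;X) equals the entropy of the binned signal.
xs = [math.sin(0.1 * t) for t in range(1000)]
pxy = joint_probability(xs, xs, nbins=4)
print(round(mutual_information(pxy), 3))
```

Transfer entropy extends this idea to a 3-D joint density over a variable, its own past, and another variable's lagged past, which is why the authors' full computation over all variable pairs and lags is so expensive.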
When we began analyzing ten years' worth of data across multiple sites, we realized that the complete calculations would take about a month. This is too long to wait for results, especially when debugging and code alterations necessitate repeating the calculations. Clearly, we would need to accelerate the analysis by parallelizing the algorithms and running them on a computer cluster.

Fortunately, we can analyze the data set of each month and each tower site separately, making the data analysis relatively easy to parallelize. However, there are always challenges to working in a cluster environment. When parallelizing a Fortran application, for example, developers may need to tailor it to account for cache and memory limitations, write initialization and staging scripts, and adapt the code to handle the unique properties of the cluster machines.

With MATLAB and Parallel Computing Toolbox, we parallelized our algorithm by changing a single line of code. In fact, the most difficult part of the parallelization procedure was convincing ourselves that one code modification (changing a for loop to a parfor, or parallel for, loop) was all that was needed. The original code was not explicitly designed for parallelization, yet it took us less than an hour to convert the code to run in parallel on a computing cluster. Results computed by each "worker" were collected in a single six-dimensional array, which was then diced and visualized to display the results.

The analysis was run on a 32-core cluster comprising four dual-CPU, quad-core systems. We saw a linear improvement in computation speed, completing in one day what would have required a month on a single workstation. A calculation with millions of iterations that took 176 hours on one core required just 5.46 hours using 32 cores.

Applying the Methods to Other Disciplines

Our research confirms that changes in the landscape and the ecosystem can affect regional climate via regional feedback loops in the process network.
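For readers without MATLAB, the same embarrassingly parallel pattern (independent per-site, per-month jobs, with a serial loop swapped for a parallel map, much as a for loop becomes a parfor) can be sketched in Python. The `analyze` function and the job list here are placeholders, not the authors' code.

```python
from multiprocessing import Pool

def analyze(job):
    """Placeholder for one (site, month) analysis; in the article this
    is where densities and transfer entropies would be estimated."""
    site, month = job
    return site, month, 0.0            # dummy result

# One independent job per site/month combination, as in the article.
jobs = [(site, month) for site in range(4) for month in range(12)]

if __name__ == "__main__":
    # Serial version:   results = list(map(analyze, jobs))
    # Parallel version: a one-line change, as with for -> parfor.
    with Pool() as pool:               # one worker per core by default
        results = pool.map(analyze, jobs)
    print(len(results))
```

Because each job touches only its own month of data, no coordination between workers is needed, which is what makes the one-line change sufficient.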
The implication of this finding is that, for example, land-use decisions can influence the severity and duration of droughts in the Midwestern U.S. Using this information, it may be possible to design land-use policies for agriculture, forestry, and urban development that minimize adverse effects on regional climate.

We are collaborating with researchers who will apply these statistical methods to other time-varying complex systems in which feedback between component parts leads to self-organization. In one study, scientists are analyzing time series of chemical concentrations in a closed system of microorganisms and nutrients to better understand the biological cycles involved. In another, researchers are analyzing satellite data to investigate the interaction of different parts of the landscape. Financial market analysis is another ideal application of this statistical approach.

Whatever the discipline, the algorithms that we use employ cutting-edge statistical methods and are exceptionally computationally intensive. MATLAB, Statistics Toolbox, and Parallel Computing Toolbox provide an advantage, both in the development of the algorithms and in the ability to use parallel computing to obtain and visualize results rapidly.
Some integral equations for the Szego and the Bergman kernels

Murid, Ali H.M. and Razali, Mohd. R.M. (1995) Some integral equations for the Szego and the Bergman kernels. Technical Report. Jabatan Matematik, Universiti Teknologi Malaysia. Full text not available from this repository.

The Szego and the Bergman kernel functions are known to satisfy a certain integral equation of the second kind. A generalized integral equation is formulated such that both the integral equation for the Szego kernel and that for the Bergman kernel can be derived from it. This general formulation also yields singular integral equations of the first and the third kinds for the Szego and the Bergman kernels.

Item Type: Monograph (Technical Report)
Uncontrolled Keywords: Integral equation, Szego kernel, Bergman kernel
Subjects: Q Science > QA Mathematics
Divisions: Science
ID Code: 3865
Deposited By: Assoc. Prof. Dr. Ali Hassan Mohamed Murid
Deposited On: 29 Jun 2007 02:36
Last Modified: 29 Jun 2007 02:36
Variance of the sum of N independent variables

January 31st 2009, 08:14 AM

Variance of the sum of N independent variables

Let X1, X2, ..., XN be independent identically distributed random variables, where N is a non-negative integer valued random variable. Let Z = X1 + X2 + ... + XN (assuming that Z = 0 if N = 0). Find E(Z) and show that var(Z) = var(N)E(X1)^2 + E(N)var(X1).

Not a clue on this one! I know the expectation of Z is NE(X1) but how do I show this result? Many many thanks

January 31st 2009, 12:52 PM
mr fantastic

Quote: Let X1, X2, ..., XN be independent identically distributed random variables, where N is a non-negative integer valued random variable. Let Z = X1 + X2 + ... + XN (assuming that Z = 0 if N = 0). Find E(Z) and show that var(Z) = var(N)E(X1)^2 + E(N)var(X1).

You're expected to know the theorem that for independent random variables $Var\left[\sum_{i=1}^n (a_i X_i) \right]= \sum_{i=1}^n a_i^2 Var(X_i)$. Apply this theorem to your problem.

January 31st 2009, 09:59 PM

This is a problem of a random sum of random variables, which is very useful in actuarial science.

First, I see you already know E(sum Xi | N) = N*E(X1) (A). Keep this result; it will be useful later.

Second, remember the law of total variance: var(sum Xi) = E(var(sum Xi | N)) + var(E(sum Xi | N)) (B).

We will work out the second part on the right side of equation (B). Substituting result (A), and since the Xi's and N are independent,

var(E(sum Xi | N)) = var(N*E(X1)) = var(N)*(E(X1))^2.

Here comes the first part on the right side of equation (B). Since the Xi's are independent of each other and of N,

E(var(sum Xi | N)) = E(N*var(X1)) = E(N)*var(X1).

Combine the two parts and you get the result you expected. You need to show a bit more detail when you prove the first part in equation (B); I skipped some steps. If you want to be an actuary, we should talk more!
(Wink)

February 1st 2009, 01:53 AM

Many thanks guys! I am still unsure how to get (B) though. I can't think of any useful identities I know to give me this result - we've only really just begun this section of the course, so I'm still very hazy!

February 24th 2009, 04:28 PM

Quote: Let X1, X2, ..., XN be independent identically distributed random variables, where N is a non-negative integer valued random variable. Let Z = X1 + X2 + ... + XN (assuming that Z = 0 if N = 0). Find E(Z) and show that var(Z) = var(N)E(X1)^2 + E(N)var(X1).

N is a random variable, known as a stopping time. So $E(Z)=NE(X_1)$ does not make sense: the left side is constant while the right is random. I don't know if we need independence between N and the $X_i$'s in the mean case, but we may need it in obtaining the variance. Most likely $E(Z)=E(N)E(X_1)$. I just looked it up in my advisor's advisor's book. This is correct; it's known as Wald's Equation. The proof is in... I also found the proof of the variance formula on page 139 of that book.
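The identity proved in this thread is easy to check by simulation. Here is a sketch with arbitrarily chosen distributions — N ~ Poisson(4) and Xi ~ Exponential with mean 1/2 — so that E(N) = var(N) = 4, E(X1) = 1/2, and var(X1) = 1/4.

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 200_000
n = rng.poisson(4.0, size=trials)                 # E(N) = var(N) = 4 for Poisson
z = np.array([rng.exponential(0.5, size=k).sum() for k in n])  # Z = 0 when N = 0

mean_pred = 4.0 * 0.5                             # Wald: E(Z) = E(N) E(X1) = 2.0
var_pred = 4.0 * 0.5**2 + 4.0 * 0.25              # var(N)E(X1)^2 + E(N)var(X1) = 2.0
print(z.mean(), mean_pred)
print(z.var(), var_pred)
```

Both the sample mean and the sample variance of Z land close to the predicted value of 2.0, as the law of total variance argument says they should.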
Somerset, NJ Algebra Tutor Find a Somerset, NJ Algebra Tutor I am an experienced math high school teacher and I work as an adjunct math professor part time. I have a bachelors degree in mathematics and a masters degree in math education. I have tutored for many years, tutoring students as young as 5! 9 Subjects: including algebra 1, algebra 2, Spanish, SAT math ...I have the experience, the patience, and the knowledge to effectively tutor this subject. I hold a BA in Mathematics from Rutgers University and have been a successful one-on-one tutor for many years. One of my areas of specialization is preparing students to take standardized tests. 17 Subjects: including algebra 2, algebra 1, geometry, ASVAB ...Whatever the needs of the students in Algebra-1 I can provide them appropriately and teach satisfactorily. Since I have experience of working in public schools I am fully enriched with the content knowledge, skills, applications in projects. Also I am equipped with the Algebra resources like on... 10 Subjects: including algebra 1, algebra 2, calculus, geometry Hello, my name is Sarah. I have been tutoring for about nine years now. I have worked with children from all levels (elementary, middle school, high school, and college). I graduated from Kean University in 2011 with a 4.0 GPA, and am currently in the process of obtaining my Master's degree. 8 Subjects: including algebra 2, algebra 1, geometry, SAT math ...I passed the rigorous 8 hour long Fundamentals of Engineering Exam(FE) which tested every math and science topic I offer and some. I have volunteered at St.Joseph-St. Thomas and Gonzaga schools helping students in math and science. 
9 Subjects: including algebra 2, chemistry, physics, calculus
Criteria for connectedness

November 5th 2013, 07:34 PM #1
Junior Member
Sep 2012

Criteria for connectedness

I have to prove that if X is not the union of two disjoint non-empty closed subsets of itself, then either X is empty or the only continuous functions from X to the discrete space {0,1} are the two constant functions.

Attempt at the proof: Assume that the hypothesis holds, and let f: X -> {0,1} be continuous. Then the inverse image of zero and the inverse image of one are both disjoint and closed in X. How do I proceed further?

Re: Criteria for connectedness

I don't understand what you mean by "Assume that the first one is true". You should assume that $X$ cannot be represented as the union of two disjoint non-empty closed subsets of itself. Then, also assume $X$ is nonempty (so, either $X$ is empty, in which case the proposition is true, or $X$ is nonempty, and you are in this case) and prove that the only continuous functions from $X$ to the discrete space $\{0,1\}$ are the two constant functions.

As you said, define $f:X \to \{0,1\}$ as a continuous function. You learned that a function is continuous if the preimage of every closed set is closed. Since $\{0\}^c = \{1\}$ and $\{1\}^c = \{0\}$, both singleton sets are clopen (both open and closed) subsets of $\{0,1\}$. Since $f$ is continuous, $f^{-1}(\{0\})$ and $f^{-1}(\{1\})$ must both be closed.

Suppose $x \in f^{-1}(\{0\}) \cap f^{-1}(\{1\})$. Then $f(x) = 0$ and $f(x) = 1$, implying $f$ is not even a function. So, the preimages must be disjoint. For any $x \in X$, either $f(x) = 0$ or $f(x) = 1$, so the union of the preimages must be the entire set.

Suppose $f^{-1}(\{0\}) \neq \emptyset$. Then by the initial assumption ($X$ cannot be represented as the union of two disjoint, nonempty, closed subsets of itself), it must be that $f^{-1}(\{1\}) = \emptyset$. The other case: $f^{-1}(\{0\}) = \emptyset$. By the second assumption, $X$ is nonempty, so it must be that $f^{-1}(\{1\}) \neq \emptyset$.
Hence, either $f(X) = \{0\}$ or $f(X) = \{1\}$, so $f$ is one of the two constant functions.

November 5th 2013, 07:51 PM #2
MHF Contributor
Nov 2010
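The criterion can also be checked by brute force on a tiny finite example. The three-point space and its topology below are my own illustration, not from the thread: the space is connected (no open set other than the whole space has an open complement), so only the two constant maps to discrete {0,1} should survive the continuity test.

```python
from itertools import product

# A connected finite space: X = {0, 1, 2} with the nested topology
# tau = {{}, {0}, {0,1}, {0,1,2}} (a made-up example).
X = {0, 1, 2}
tau = [set(), {0}, {0, 1}, {0, 1, 2}]

def is_continuous(f):
    # f: X -> {0,1} with the discrete topology; continuity means the
    # preimage of every open set ({}, {0}, {1}, {0,1}) is open in X.
    for V in [set(), {0}, {1}, {0, 1}]:
        pre = {x for x in X if f[x] in V}
        if pre not in tau:
            return False
    return True

continuous_maps = [dict(zip(sorted(X), vals))
                   for vals in product([0, 1], repeat=len(X))
                   if is_continuous(dict(zip(sorted(X), vals)))]
print(continuous_maps)   # only the two constant maps
```

Any non-constant map would split X into a pair of complementary nonempty clopen sets, which this topology does not admit — exactly the argument in the reply above.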
Topic: Reducing Incomparability in Cardinal comparisons
Replies: 5   Last Post: Apr 7, 2013 12:42 AM

Re: Reducing Incomparability in Cardinal comparisons
Posted: Mar 12, 2013 3:46 PM

On Mar 12, 2:49 pm, Zuhair <zaljo...@gmail.com> wrote:
> On Mar 11, 11:17 pm, Zuhair <zaljo...@gmail.com> wrote:
> > Let x-inj->y stand for "there exists an injection from x to y and there
> > does not exist a bijection between them", while x<-bij->y means "there
> > exists a bijection between x and y".
> > Define: |x|=|y| iff x<-bij->y
> > Define: |x| < |y| iff x-inj->y Or Rank(|x|) -inj-> Rank(|y|)
> > Define: |x| > |y| iff |y| < |x|
> > Define: |x| incomparable to |y| iff ~|x|=|y| & ~|x|<|y| & ~|x|>|y|
> > where |x| is defined after Scott's.
> > Now those are definitions of what I call "complex size comparisons";
> > they are MORE discriminatory than the ordinary notions of cardinal
> > comparisons. Actually it is provable in ZF that for each set x there
> > exists a *set* of all cardinals that are INCOMPARABLE to |x|. This of
> > course reduces incomparability between cardinals from being of
> > proper class size in some models of ZF to only set-sized classes in
> > ALL models of ZF.
> > However the relation is not that natural at all.
> > Zuhair
> One can also use this relation to define cardinals in ZF.
> |x| = {y | for all z in TC({y}), z <* x}
> Of course <* can be defined as:
> x <* y iff [x -inj-> y Or
> Exist x*. x* <-bij-> x & for all y*. y* <-bij-> y -> rank(x*) in rank(y*)].
> Zuhair

All the above I'm sure of, but the following I'm not really sure of: perhaps we can vanquish incomparability altogether if we prove that for all x there exists H(x), defined as the set of all sets hereditarily not strictly supernumerous to x.
Where strict supernumerousity is the converse of the relation <* defined above.

Then perhaps we can define a new equinumerousity relation as:

x Equinumerous to y iff H(x) bijective to H(y)

Also a new subnumerousity relation may be defined as:

x Subnumerous* to y iff H(x) injective to H(y)

This might resolve all incomparability issues (though I very highly doubt it). Then the cardinality of a set would be defined as the set of all sets equinumerous to it of the least possible rank. A Scott-like definition, yet not Scott's.

Date / Subject / Author
3/11/13 Reducing Incomparability in Cardinal comparisons — Zaljohar@gmail.com
3/12/13 Re: Reducing Incomparability in Cardinal comparisons — Zaljohar@gmail.com
3/12/13 Re: Reducing Incomparability in Cardinal comparisons — Zaljohar@gmail.com
3/13/13 Re: Reducing Incomparability in Cardinal comparisons — Zaljohar@gmail.com
3/13/13 Re: Reducing Incomparability in Cardinal comparisons — Zaljohar@gmail.com
4/7/13 Re: Reducing Incomparability in Cardinal comparisons — Charlie-Boo
Measurement and Uncertainty Lab -- Fall 2006

MEASUREMENT AND UNCERTAINTY

In this lab you will study, experimentally, how the uncertainty in more than one measured quantity leads to an uncertainty in all other quantities calculated from them. You will measure the dimensions of a number of metal cylinders, calculate their volume, and then use Archimedes' Principle to measure their actual volume.

Notice that you will be called upon to write your own Introduction to this lab in your lab notebook. Like the Introduction above, it should describe what you are about to do and explain why you are doing it. You should not simply restate what I've just written -- even if you paraphrase it. And you should certainly never, ever use the words of the lab instructions without quoting them, nor the ideas of the lab instructions without properly citing them. In future labs you will be asked to do some library research before lab and to write your Introduction by lab time, usually with some allusion to what you found in your library research.

1. Study Chapters 1 and 2 of Baird's Instrumentation.
2. Procure a lab notebook with graph-paper pages. Leave the first page blank, to serve as a Table of Contents for the entire notebook.

Consider a simple metal block or cylinder with tiny holes for being suspended by a fine wire. Measure its dimensions in centimeters, using Vernier calipers. Your instructor can help show you how to use them. Notice that there are at least two potential sources of uncertainty or error here for each dimension:

1. First, you can never measure any dimension of this object exactly. If your measuring instrument measures to the nearest 0.1 mm, it is unlikely that the object is exactly some multiple of 0.1 mm, and so your measurement is at best an estimate to the nearest 0.1 mm.
You would expect that if you kept measuring the same dimension of the object you would always get the same measurement, but you would never obtain a more precise idea of its length however long you continued to measure. In this example, you can use 0.1 mm for your uncertainty.

2. Second, the actual physical object may not be a perfect block or cylinder: its diameter or width, for example, may vary along the length, just as the Earth's circumference about its poles is different from its circumference around the Equator. So each time you measure your object's diameter or width, you might get a different value. It would make sense to measure its value several times at different places along the object, and then calculate the mean and standard deviation of your measurements. In this case, the uncertainty is the standard deviation of your measurements, as long as it is greater than the resolution of your measuring device (0.1 mm in the example of the last paragraph). For a fine enough measuring stick, increasing the precision of the measuring stick should not decrease this kind of uncertainty/error. Right?

All right, then. Measure the dimensions of your object, taking enough measurements to allow you to calculate the uncertainty. Have your lab partner repeat the measurements. Copy your raw data into each other's notebooks. Once we know the dimensions, you can calculate the volume. Do this. Repeat this for all the metal pieces the instructor asks you to measure. This should include at least one "normal" cylinder and one irregular one.

What do we do with these individual uncertainties? Well, we can calculate the uncertainty of the volume. We will do this three ways:

1. The theoretical approach: Use the formulas in the text to calculate what the uncertainty of the volume is, given the uncertainties in each of the variables.
2. The "empirical" approach: What if the length of your object is closer to the high end of the range defined by your uncertainty?
Calculate the volume of your object using every possible combination of extreme values for each dimension of the object (d ± δd, h ± δh). From your results, record the maximum and minimum values for the volume.

3. The experimental approach: When you are done, use Archimedes' method to measure the volume, directly measuring the buoyant "force" on the object to the nearest 0.1 g. DO NOT USE GLASS BEAKERS! (Consider how this might be done with the equipment available in the laboratory.) Notice that a buoyant "force" of 3.0 g means that 3.0 g of water, equal to 3.0 cc, was displaced, for a volume of 3.0 cm^3. Compare to your calculations and uncertainty. Does the measured value fit within the "acceptance interval"?

Show all your measurements and calculations in your lab notebook. Report your volume and its uncertainty, using the appropriate number of significant figures for each. Add some explanation as you go along -- in full sentences -- of what you did and what you conclude. This is not a formal report, but I will ask you to turn in your notebooks at the end of class so I can see what you did. Write your explanations for a hypothetical classmate who had to miss class this week and wants to know what went on.

The "Conclusion" part of your report should answer the following three things in paragraph form (i.e. no numbers, no tables, just full sentences): (1) What did you do? (2) What were your results? (3) What does this tell you? It should be no longer than half a page, and can be as short as three sentences, if you manage to adequately do all three things.

For next time, your instructor will give you an assignment from Baird. Also, read next week's lab assignment, do some library research and write an Introduction for next week's lab.
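The first two approaches can be illustrated with made-up numbers: a hypothetical cylinder of diameter 1.27 ± 0.02 cm and height 5.03 ± 0.02 cm. The quadrature rule used here is the standard formula for independent uncertainties; check your text for the exact form your course expects.

```python
import math
from itertools import product

d, dd = 1.27, 0.02        # diameter and its uncertainty, cm (made up)
h, dh = 5.03, 0.02        # height and its uncertainty, cm (made up)

def volume(d_, h_):
    return math.pi * (d_ / 2) ** 2 * h_

V = volume(d, h)

# 1. Theoretical: for V = (pi/4) d^2 h the relative uncertainties add in
#    quadrature, weighted by each exponent: dV/V = sqrt((2 dd/d)^2 + (dh/h)^2)
dV_quad = V * math.sqrt((2 * dd / d) ** 2 + (dh / h) ** 2)

# 2. "Empirical": evaluate V at every extreme combination d +/- dd, h +/- dh
#    and take half the spread between the largest and smallest results.
extremes = [volume(d_, h_)
            for d_, h_ in product((d - dd, d + dd), (h - dh, h + dh))]
dV_extreme = (max(extremes) - min(extremes)) / 2

print(f"V = {V:.2f} cm^3; dV = {dV_quad:.2f} (quadrature) "
      f"vs {dV_extreme:.2f} (extremes)")
```

The extreme-value estimate comes out a little larger than the quadrature estimate, as expected: it treats the worst cases of both dimensions as occurring together.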
Like Laplace, only more so

The Laplace distribution is pointy in the middle and fat in the tails relative to the normal distribution. This post is about a probability distribution that is more pointy in the middle and fatter in the tails.

Here are pictures of the normal and Laplace (a.k.a. double exponential) distributions. The normal density is proportional to exp(-x^2/2) and the Laplace density is proportional to exp(-|x|). Near the origin, the normal density looks like 1 - x^2/2 and the Laplace density looks like 1 - |x|. And as x gets large, the normal density goes to zero much faster than the Laplace.

Now let's look at the distribution with density

f(x) = log(1 + x^-2) / (2π).

I don't know a name for this. I asked on Cross Validated whether there was a name for this distribution and no one knew of one. The density is related to the bounds on a density presented in this paper.

Here's a plot. The density is unbounded near the origin, blowing up like -2 log(|x|) as x approaches 0, and so is more pointed than the Laplace density. As x becomes large, log(1 + x^-2) is asymptotically x^-2, so the distribution has the same tail behavior as a Cauchy distribution, much heavier tailed than the Laplace density. Here's a plot of this new density and the Laplace density together to make the contrast more clear.

As William Huber pointed out in his answer on Cross Validated, this density has a closed-form CDF:

F(x) = 1/2 + (arctan(x) - x log(sin(arctan(x))))/π

The paper mentioned above used a similar density as a Bayesian prior distribution in situations where many observations were expected to be small, though large values were expected as well.

Related posts:
Probability distribution relationship chart
Robust prior illustration

What is the significance of a vertical asymptote in a probability distribution? What does it mean that the probability of x being 0 is infinitely large? I guess this can only apply to random variables that cannot take on the value 0?
The CDF must require Lebesgue integration -- this isn't Riemann integrable, is it?

Dan: The probability isn't infinite at zero; the probability density is infinite at zero. The density has to be integrated over a set to obtain a probability. It does mean that some small intervals can have high probability relative to their length, but this is a proper density: the total probability over the real line integrates to 1. Riemann integration is adequate. You can integrate from -∞ to a and from b to ∞ and take the limits as a goes up to zero and b goes down to zero.

Brilliant paper, FWIW; a follow-up showing relationships between shrinkage priors and Levy processes is at http://faculty.chicagobooth.edu/nicholas.polson/research/papers/Bayes1.pdf and well worth reading.

Some distribution families are infinite at zero. The cumulative F(x) = Tanh[x/p]^n has a zero asymptote when n < 1:

f(x) = (n/p)(1 - Tanh[x/p]^n)(x/p)^(n-1)

and it is very useful in granulometric distributions.

The authors have a whole host of papers on arxiv along these lines, which I'd recommend for anyone interested in shrinkage in regression. Another class of shrinkage priors, which are more analytically friendly than the horseshoe, are given in http://ftp.stat.duke.edu/WorkingPapers/10-08.html. The Laplace is a couple of exponential pdfs back to back; in that paper, the authors do the same with the Pareto.

Thanks for the interesting distribution. I have never seen a distribution function that had a vertical asymptote. I see how this integrates to one, which is essential. Have you ever heard of a distribution function with a slant asymptote? I don't think this could be possible, because if the slant asymptote were not purely horizontal, it would fall asymptotically toward infinity and negative infinity as x approached infinity and negative infinity.
The beta distribution has a vertical asymptote at 0 if its first parameter is less than 1 and a vertical asymptote at 1 if its second parameter is less than 1. This distribution looks very similar as a special case of the Generalized Asymmetric Laplace (GAL) as described by Kotz, Kozubowski and Podgorski (http://wolfweb.unr.edu/homepage/tkozubow/0_alm.pdf). Below you can find a link to a plot of the symmetric GAL with only one dimension: Just like the distribution John describes in his post, this distribution has a vertical asymptote at 0 (it’s location parameter). When one plots this special case of the GAL on the same plot as the distribution described by John, the differences become clear: That is: the distribution described by John is much more peaked an has fatter tails than the special case of the GAL. I’ve been using the Laplace in the context of quantile regression or as shrinkage prior on regression coefficients. The distribution described by John could be used for the latter purpose too (as stated in the OP), but the Laplace has the advantage that it can be expressed as a scale mixture of normals (and makes efficient mcmc possible -> Gibbs sampling). Thanks for your post! [...] This post was mentioned on Twitter by FMFG, Probability Fact. Probability Fact said: A probability density like the Laplace (double exponential) but more so http://bit.ly/e3WYrK [...] Tagged with: Bayesian, Probability and Statistics Posted in Math, Statistics
Coppell Algebra 2 Tutor ...I am currently a senior mathematics major at UT Dallas with a cumulative GPA of 3.78. I completed advanced math courses in high school, culminating with AP Calculus and AP Statistics. I achieved a 5 on the AP Calculus BC exam and a 4 on the AP Statistics exam. 7 Subjects: including algebra 2, algebra 1, SAT math, ACT Math ...I have always loved math and took a variety of math classes throughout high school and college. I taught statistics classes at BYU for over 2 years as a TA and also tutored on the side. I really enjoy tutoring because with every student there is a challenge of figuring out the best way to explain concepts so that they will understand. 7 Subjects: including algebra 2, statistics, geometry, SAT math ...I’ve tutored over 100 hours of ACT and SAT prep, including over 30 hours of ACT English and ACT/SAT Writing. I also edit content for a website, so I correct grammar errors and check writing for organization and flow on a daily basis. In my tutoring sessions, I have each student review and pract... 15 Subjects: including algebra 2, reading, writing, geometry ...I also know many memory tricks, and tips for doing well on exams. I believe that enjoyment and encouragement can be fundamental motivators for success. These will necessarily be qualities I commit to in any tutoring relationship. 17 Subjects: including algebra 2, reading, chemistry, geometry ...I know I can help you reach a solid understanding of the concepts and methods involved! With practice and step by step discussions, you can lay a foundation that will help you far beyond a single Geometry class! Pre-Algebra can be fun! 23 Subjects: including algebra 2, English, writing, calculus
Q-Series With Applications to Combinatorics, Number Theory, and Physics

A Conference on Q-Series With Applications to Combinatorics, Number Theory, and Physics, October 26-28, 2000, University of Illinois

Publisher: American Mathematical Society

Summary: The subject of $q$-series can be said to begin with Euler and his pentagonal number theorem. In fact, $q$-series are sometimes called Eulerian series. Contributions were made by Gauss, Jacobi, and Cauchy, but the first attempt at a systematic development, especially from the point of view of studying series with the products in the summands, was made by E. Heine in 1847. In the latter part of the nineteenth and in the early part of the twentieth centuries, two English mathematicians, L. J. Rogers and F. H. Jackson, made fundamental contributions. In 1940, G. H. Hardy described what we now call Ramanujan's famous $_1\psi_1$ summation theorem as "a remarkable formula with many parameters." This is now one of the fundamental theorems of the subject. Despite humble beginnings, the subject of $q$-series has flourished in the past three decades, particularly with its applications to combinatorics, number theory, and physics.

During the year 2000, the University of Illinois embraced The Millennial Year in Number Theory. One of the events that year was the conference $q$-Series with Applications to Combinatorics, Number Theory, and Physics. This event gathered mathematicians from the world over to lecture and discuss their research. This volume presents nineteen of the papers presented at the conference. The excellent lectures that are included chart pathways into the future and survey the numerous applications of $q$-series to combinatorics, number theory, and physics.

Berndt, Bruce C.
is the author of Q-Series With Applications to Combinatorics, Number Theory, and Physics A Conference on Q-Series With Applications to Combinatorics, Number Theory, and Physics, October 26-28, 2000, University of Illinois, published under ISBN 9780821827468 and 0821827464. One hundred Q-Series With Applications to Combinatorics, Number Theory, and Physics A Conference on Q-Series With Applications to Combinatorics, Number Theory, and Physics, October 26-28, 2000, University of Illinois textbooks are available for sale on ValoreBooks.com, or buy new starting at $84.10.

Item Details
Condition: New
Ships From: Boonsboro, MD
Shipping: Standard
Comments: Brand new. We distribute directly for the publisher.
Product Details
ISBN-13: 9780821827468
ISBN: 0821827464
Publisher: American Mathematical Society

ValoreBooks.com is the #1 site for cheap Q-Series With Applications to Combinatorics, Number Theory, and Physics A Conference on Q-Series With Applications to Combinatorics, Number Theory, and Physics, October 26-28, 2000, University of Illinois rentals, or new and used copies for sale.
{"url":"http://www.valorebooks.com/textbooks/q-series-with-applications-to-combinatorics-number-theory-and-physics-a-conference-on-q-series-with-applications-to-combinatorics-number-theory-and-physics-october-26-28-2000-university-of-illi/9780821827468","timestamp":"2014-04-17T14:07:10Z","content_type":null,"content_length":"70295","record_id":"<urn:uuid:45ea39b0-3cd1-4b98-9745-212b985167e1>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
Fastest available algorithm for distance transform

I am looking for the fastest available algorithm for distance transform. According to this site http://homepages.inf.ed.ac.uk/rbf/HIPR2/distance.htm, it describes: "The distance transform can be calculated much more efficiently using clever algorithms in only two passes (e.g. Rosenfeld and Pfaltz 1968)." Searching around, I found: "Rosenfeld, A and Pfaltz, J L. 1968. Distance Functions on Digital Pictures. Pattern Recognition, 1, 33-61." But I believe we should have a better and faster algorithm than the one from 1968 by now? In fact, I could not find the source from 1968, so any help is highly appreciated.

Tags: algorithm, image-processing, transform, distance

3 Answers

There's tons of newer work on computing distance functions.
• Fast marching algorithms that originally came from Tsitsiklis (not Sethian like Wikipedia says). Tons of implementations are available for this.
• Fast sweeping algorithms from Zhao
• O(n) (approximate) fast marching by Yatziv
By the way, you'd really want to use these instead of the work by Rosenfeld, specifically when you want to compute distances in the presence of obstacles.

The OpenCV library's approximate cv::distanceTransform function uses an algorithm that passes the image from top left to bottom right and back. The algorithm is described in the paper "Distance transformations in digital images" by Gunilla Borgefors (Comput. Vision Graph. Image Process. 34 3, pp 344–371, 1986). The algorithm calculates the distance through a combination of some basic jumps (horizontal, vertical, diagonal and the knight move). Each jump incurs costs. The following table shows the costs for the different jumps.
| 2.8 |2.1969| 2 |2.1969| 2.8 |
|2.1969| 1.4 | 1 | 1.4 |2.1969|
| 2 | 1 | 0 | 1 | 2 |
|2.1969| 1.4 | 1 | 1.4 |2.1969|
| 2.8 |2.1969| 2 |2.1969| 2.8 |

The distance from one pixel to another is the sum of the costs of the jumps necessary. The following image shows the distance from the 0-cells to each other cell. The arrows are showing the way to some cells. The colored numbers reflect the exact (euclidean) distance. The algorithm works like this: the following mask

| 0 | 1 | 2 |
| 1 | 1.4 |2.1969|
| 2 |2.1969| 2.8 |

is moved from the top left of the image to the bottom right. During this pass, cells lying inside the boundaries of the mask either keep their value (if it is known and smaller) or they get the value calculated by summing the mask value and the (known) value of the cell below the mask's 0-cell. After that, a second pass from bottom right to top left (with a vertically and horizontally flipped mask) is performed. After the second pass the distances are calculated.

This method is considerably slower than modern techniques (the most notable being the one from A. Meijster). – Vinnie Falco Dec 24 '12 at 20:32

This paper reviews the known exact distance transform algorithms: "2D Euclidean distance transform algorithms: A comparative survey". The fastest exact distance transform is from Meijster: "A General Algorithm for Computing Distance Transforms in Linear Time." http://fab.cba.mit.edu/classes/S62.12/docs/Meijster_distance.pdf The design of the algorithm is particularly well suited for parallel calculation. This is implemented in my open source library which tries to emulate Photoshop's "Layer Style".
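The two-pass sweep described in this answer is short enough to sketch directly. The version below is a hedged illustration using a plain 3x3 chamfer mask with weights 1 (axial) and 1.4 (diagonal). It is not OpenCV's exact implementation, and it omits the explicit knight move, so the cells the 5x5 table prices at 2.1969 come out as 1 + 1.4 = 2.4 here:

```python
def chamfer_distance(grid, a=1.0, b=1.4):
    """Two-pass 3x3 chamfer distance transform.

    grid: 2D list; cells equal to 0 are the feature cells (distance 0),
    every other cell receives its chamfer distance to the nearest one.
    """
    INF = float("inf")
    h, w = len(grid), len(grid[0])
    d = [[0.0 if grid[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    # Forward pass (top-left to bottom-right): look W, N, NW, NE.
    for y in range(h):
        for x in range(w):
            for dy, dx, cost in ((0, -1, a), (-1, 0, a), (-1, -1, b), (-1, 1, b)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and d[ny][nx] + cost < d[y][x]:
                    d[y][x] = d[ny][nx] + cost
    # Backward pass (bottom-right to top-left): flipped mask, look E, S, SE, SW.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            for dy, dx, cost in ((0, 1, a), (1, 0, a), (1, 1, b), (1, -1, b)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and d[ny][nx] + cost < d[y][x]:
                    d[y][x] = d[ny][nx] + cost
    return d
```

With a single 0-cell in the middle of a 5x5 grid, the corners come out to 2.8 and the edge midpoints to 2.0, matching the corresponding entries of the cost table above.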
{"url":"http://stackoverflow.com/questions/7426136/fastest-available-algorithm-for-distance-transform/7427209","timestamp":"2014-04-18T00:54:47Z","content_type":null,"content_length":"74440","record_id":"<urn:uuid:a69820b1-a96b-4897-85b1-66417660ef3d>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of Regulated rewriting

Regulated rewriting is a specific area of formal language theory studying grammatical systems which are able to take some kind of control over the production applied in a derivation step. For this reason, the grammatical systems studied in regulated rewriting theory are also called "grammars with controlled derivations". Among such grammars are:

Matrix Grammars

Basic concepts

A matrix grammar is a four-tuple $G = (N, T, M, S)$ where

1. $N$ is an alphabet of non-terminal symbols.
2. $T$ is an alphabet of terminal symbols disjoint with $N$.
3. $M = \{m_1, m_2, \ldots, m_n\}$ is a finite set of matrices, which are non-empty sequences $m_i = [p_{i_1}, \ldots, p_{i_{k(i)}}]$, with $k(i) \geq 1$ and $1 \leq i \leq n$, where each $p_{i_j}$, $1 \leq j \leq k(i)$, is an ordered pair $p_{i_j} = (L, R)$ with $L \in (N \cup T)^* N (N \cup T)^*$ and $R \in (N \cup T)^*$. These pairs are called "productions" and are denoted $L \rightarrow R$. Under these conditions the matrices can be written as $m_i = [L_{i_1} \rightarrow R_{i_1}, \ldots, L_{i_{k(i)}} \rightarrow R_{i_{k(i)}}]$.
4. $S$ is the start symbol.

Definition

Let $MG = (N, T, M, S)$ be a matrix grammar and let $P$ be the collection of all productions appearing in the matrices of $MG$. We say that $MG$ is of type $i$ according to Chomsky's hierarchy, with $i = 0, 1, 2, 3$, or "increasing length", or "linear", or "without $\lambda$-productions", if and only if the grammar $G = (N, T, P, S)$ has the corresponding property.
The classical example (taken from [5] with a change of non-terminal names)

The context-sensitive language $L(G) = \{ a^n b^n c^n : n \geq 1 \}$ is generated by the CFMG $G = (N, T, M, S)$, where $N = \{S, A, B, C\}$ is the non-terminal set, $T = \{a, b, c\}$ is the terminal set, and the set of matrices $M$ is defined as:

$[S \rightarrow abc]$
$[S \rightarrow aAbBcC]$
$[A \rightarrow aA, B \rightarrow bB, C \rightarrow cC]$
$[A \rightarrow a, B \rightarrow b, C \rightarrow c]$

Time Variant Grammars

Basic concepts

Definition: A time variant grammar is a pair $(G, v)$ where $G = (N, T, P, S)$ is a grammar and $v: \mathbb{N} \rightarrow 2^P$ is a function from the set of natural numbers to the class of subsets of the set of productions.

Programmed Grammars

Basic concepts

A programmed grammar is a pair $(G, s)$ where $G = (N, T, P, S)$ is a grammar and $s, f: P \rightarrow 2^P$ are functions from the set of productions to the class of subsets of the set of productions.

Grammars with regular control language

Basic concepts

A grammar with regular control language (CFGWRCL) is a pair $(G, e)$ where $G = (N, T, P, S)$ is a grammar and $e$ is a regular expression over the alphabet of the set of productions.

A naive example

Consider the CFG $G = (N, T, P, S)$ where $N = \{S, A, B, C\}$ is the non-terminal set, $T = \{a, b, c\}$ is the terminal set, and the productions set is $P = \{p_0, p_1, p_2, p_3, p_4, p_5, p_6\}$ with $p_0 = S \rightarrow ABC$, $p_1 = A \rightarrow aA$, $p_2 = B \rightarrow bB$, $p_3 = C \rightarrow cC$, $p_4 = A \rightarrow a$, $p_5 = B \rightarrow b$, and $p_6 = C \rightarrow c$. Clearly, $L(G) = \{ a^* b^* c^* \}$.
Now, considering the productions set as an alphabet (since it is a finite set), define the regular expression $e = p_0 (p_1 p_2 p_3)^* (p_4 p_5 p_6)$ over it. Combining the CFG grammar $G$ and the regular expression $e$, we obtain the CFGWRCL $(G, e) = (G, p_0 (p_1 p_2 p_3)^* (p_4 p_5 p_6))$, which generates the language $L(G) = \{ a^n b^n c^n : n \geq 1 \}$.

Besides these, there are other grammars with regulated rewriting; the four cited above are good examples of how to extend context-free grammars with some kind of control mechanism to obtain a grammatical device as powerful as a Turing machine.

References

[1] Arto Salomaa. Formal Languages. Academic Press, 1973 (ACM Monograph Series).
[2] G. Rozenberg, A. Salomaa (eds.). Handbook of Formal Languages. Berlin; New York: Springer, 1997. ISBN 3540614869 (set) (3540604200: v. 1; 3540606483: v. 2; 3540606491: v. 3).
[3] Jurgen Dassow, G. Paun. Regulated Rewriting in Formal Language Theory. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1990. 308 pp., hardcover. ISBN 0387514147.
[4] Jurgen Dassow. Grammars with Regulated Rewriting. Otto-von-Guericke University, Magdeburg.
[5] S. Abraham. Some questions of language theory. In Proceedings of the 1965 International Conference on Computational Linguistics, pp. 1-11, Bonn, Germany, 1965.
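As a mechanical sanity check of the matrix-grammar example above: each matrix is a sequence of productions that must all be applied, in order, within a single derivation step. The sketch below derives $a^n b^n c^n$; the string encoding, the leftmost-rewrite convention, and the helper names are my own choices, not taken from the cited references:

```python
def apply_production(s, lhs, rhs):
    """Rewrite the leftmost occurrence of lhs; return None if absent."""
    i = s.find(lhs)
    return None if i < 0 else s[:i] + rhs + s[i + len(lhs):]

def apply_matrix(s, matrix):
    """Apply every production of the matrix in order, or fail entirely."""
    for lhs, rhs in matrix:
        s = apply_production(s, lhs, rhs)
        if s is None:
            return None
    return s

# The matrices for L = { a^n b^n c^n : n >= 1 } from the example above.
M2 = [("S", "aAbBcC")]
M3 = [("A", "aA"), ("B", "bB"), ("C", "cC")]
M4 = [("A", "a"), ("B", "b"), ("C", "c")]

def derive(n):
    """Derive a^n b^n c^n for n >= 2: M2 once, M3 (n-2) times, then M4."""
    s = apply_matrix("S", M2)
    for _ in range(n - 2):
        s = apply_matrix(s, M3)
    return apply_matrix(s, M4)
```

The n = 1 case uses the first matrix alone: `apply_matrix("S", [("S", "abc")])` yields `"abc"`. Note how M3 forces the counts of a, b and c to grow in lockstep, which is exactly the control a plain context-free grammar cannot express.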
{"url":"http://www.reference.com/browse/Regulated+rewriting","timestamp":"2014-04-20T03:28:17Z","content_type":null,"content_length":"82742","record_id":"<urn:uuid:d8f71126-c0a7-4114-99cc-e38d3921771e>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistics : The Art and Science of Learning from Data Why Rent from Knetbooks? Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option for you. Simply select a rental period, enter your information and your book will be on its way! Top 5 reasons to order all your textbooks from Knetbooks: • We have the lowest prices on thousands of popular textbooks • Free shipping both ways on ALL orders • Most orders ship within 48 hours • Need your book longer than expected? Extending your rental is simple • Our customer support team is always here to help
{"url":"http://www.knetbooks.com/statistics-art-science-learning-3rd/bk/9780321755940","timestamp":"2014-04-20T13:24:21Z","content_type":null,"content_length":"40917","record_id":"<urn:uuid:0fe56942-ecb3-41dc-88e8-573ee4ddf14a>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2007/405

Secure PRNGs from Specialized Polynomial Maps over Any $F_q$

Michael Feng-Hao Liu and Chi-Jen Lu and Bo-Yin Yang and Jintai Ding

Abstract: We prove that a random map drawn from any class ${\frak C}$ of polynomial maps from $F_q^n$ to $F_q^{n+r}$ that is (i) totally random in the affine terms, and (ii) has a negligible chance of being not strongly one-way, provides a secure PRNG (hence a secure stream cipher) for any $q$. Plausible choices for ${\frak C}$ are semi-sparse (i.e., the affine terms are truly random) systems and other systems that are easy to evaluate from a small (compared to a generic map) number of parameters. To our knowledge, there are no other positive results for provable security of specialized polynomial systems, in particular sparse ones (which are natural candidates to investigate for speed). We can build a family of provably secure stream ciphers, a rough implementation of which at the same security level can be more than twice as fast as an optimized QUAD (and any other provably secure stream cipher proposed so far), and uses much less storage. This may also help build faster provably secure hashes. We also examine the effects of recent results on specialization on security, e.g., Aumasson-Meier (ICISC 2007), which precludes Merkle-Damgård compression using polynomial systems uniformly very sparse in every degree from being universally collision-free. We conclude that our ideas are consistent with and complement these new results. We think that we can build secure primitives based on specialized (versus generic) polynomial maps which are more efficient.
Category / Keywords: secret-key cryptography / multivariate polynomial, stream cipher, special polynomial, provably secure
Date: received 22 Oct 2007
Contact author: ding at math uc edu
Available format(s): PDF | BibTeX Citation
Version: 20071022:192841 (All versions of this report)
Discussion forum: Show discussion | Start new discussion
[ Cryptology ePrint archive ]
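The construction this line of work builds on (iterate a public polynomial map $F_q^n \to F_q^{n+r}$, feed part of the output back as the state, emit the rest) can be illustrated with a toy over GF(2). Everything below is my own illustrative choice: the names, the dense random system, and the tiny parameters are far below any real security level, and this is the generic QUAD-style data flow, not the specialized/sparse systems the paper actually analyzes:

```python
import random

def random_poly(n, rng):
    """A random quadratic polynomial over GF(2) in n variables."""
    quad = [(i, j) for i in range(n) for j in range(i, n) if rng.random() < 0.5]
    lin = [i for i in range(n) if rng.random() < 0.5]
    return quad, lin, rng.randrange(2)

def eval_poly(poly, x):
    quad, lin, const = poly
    v = const
    for i, j in quad:
        v ^= x[i] & x[j]          # products and sums are taken mod 2
    for i in lin:
        v ^= x[i]
    return v

def keystream(state_bits, n=16, r=16, steps=8, sys_seed=0):
    """Iterate a public map F_2^n -> F_2^{n+r}: the first n outputs
    become the next state, the remaining r bits are emitted."""
    rng = random.Random(sys_seed)                  # fixes the public system
    system = [random_poly(n, rng) for _ in range(n + r)]
    x, out = list(state_bits), []
    for _ in range(steps):
        y = [eval_poly(p, x) for p in system]
        x, out = y[:n], out + y[n:]
    return out
```

A real instantiation would pick parameters and specialized (e.g. semi-sparse) systems according to the paper's security argument; this sketch only shows the state/output split.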
{"url":"http://eprint.iacr.org/2007/405/20071022:192841","timestamp":"2014-04-18T10:39:57Z","content_type":null,"content_length":"3328","record_id":"<urn:uuid:a3225121-e950-41d1-bb61-cbb41911e738>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
Log Change of Base Solution Please
March 15th 2013, 04:28 PM #1
Mar 2013

Log Change of Base Solution Please

My daughter has the following question on her homework. Although I'm pretty good on all the calculus, for the life of me I can not figure this out and am obviously doing something wrong. If you can help that would be great. The question is:

Using the log change of base rule solve for x: Log [5]^[6x+4] = Log [2x^49]

Should be simple...right?

Re: Log Change of Base Solution Please

Hey tommly. The change of base rule says that log_a(x) = log_b(x)/log_b(a) for bases a and b. Hint: Try changing the LHS to log_2(x) given that it is log_5(x) by using this formula.

Re: Log Change of Base Solution Please

My daughter has the following question on her homework. Although I'm pretty good on all the calculus, for the life of me I can not figure this out and am obviously doing something wrong. If you can help that would be great. The question is:

Using the log change of base rule solve for x: Log [5]^[6x+4] = Log [2x^49]

Should be simple...right?

Is this the equation? It's a bit hard to read.

$\displaystyle \log_5{(6x+4)} = \log_{2x}{(49)}$

Re: Log Change of Base Solution Please

Doing the left hand side I then get Log [2x] (6x+4)/Log [2x] 5 = Log [2x]49. That does not really help me. I've tried changing the RHS to Log 5 and that does not help me either. I end up with a log multiplying a log, both with the variable x in them...... obviously missing something

Re: Log Change of Base Solution Please

guess no one knows .... i'll continue to try

Re: Log Change of Base Solution Please

Hello, tommiy! Your use of [SUB] and [SUP] is confusing.

$\begin{array}{ccc}\text{It could mean: }& \log_5(6x+4) \:=\:\log_{2x}49 & [1] \\\\[-3mm] \text{Or maybe: }& \log(5^{6x+4}) \:=\:\log(2x^{49}) & [2] \end{array}$

If it is [1], we have: $\log_5(6x+4) \:=\:\frac{\log_5(49)}{\log_5(2x)}$, which becomes:
$\log_5(2x)\cdot\log_5(6x+4) \:=\:\log_5(49)$

And I see no way to solve for $x.$

If it is [2], we have: $(6x+4)\log5 \:=\:\log(2x^{49})$

This is a transcendental equation. It cannot be solved for $x.$ From what course did this problem arise? And who would assign such a frustrating problem?

Re: Log Change of Base Solution Please

It's number 1 and it's from their standard textbook for Year 12 in Australia. It actually has the answer of 3.5, which is correct, but I can not see how you manage to achieve this.

[Remaining post headers from the thread listing: #2 March 15th 2013, 06:01 PM (MHF Contributor, Sep 2012); #3 06:05 PM; #4 07:11 PM (Mar 2013); #5 March 16th 2013, 01:42 PM (Mar 2013); #6 02:29 PM (Super Member, May 2006, Lexington, MA, USA); #7 10:56 PM (Mar 2013)]
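Reading the equation as interpretation [1], the book's answer of 3.5 can at least be checked numerically with the change-of-base rule (a verification, not a derivation):

```python
import math

def log_base(b, x):
    """Change of base: log_b(x) = ln(x) / ln(b)."""
    return math.log(x) / math.log(b)

x = 3.5
lhs = log_base(5, 6 * x + 4)   # log_5(25)
rhs = log_base(2 * x, 49)      # log_7(49)
assert abs(lhs - rhs) < 1e-12  # both sides are 2
```

Both sides equal 2 exactly, since 6(3.5) + 4 = 25 = 5^2 and 2(3.5) = 7 with 49 = 7^2. That suggests the textbook intended the equation to be cracked by spotting the perfect powers (set each side equal to the same integer) rather than by the symbolic manipulation attempted above.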
{"url":"http://mathhelpforum.com/pre-calculus/214845-log-change-base-solution-please.html","timestamp":"2014-04-16T17:03:18Z","content_type":null,"content_length":"50721","record_id":"<urn:uuid:437e7d2b-baee-4d9c-a9af-2987ede6567e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
162-0? [Archive] - White Sox Interactive Forums 02-24-2006, 08:19 AM I was kidding around in the Thome homeruns thread and mentioned the Sox going 162-0. WSox8404 mentioned a discussion he had about it even being possible. I wonder- is it even possible in the MLB? I mean- if you played a million seasons, would there even be the slightest chance of it happening? I once wrote a rudimentary random number generator and fed it a "team" that was expected to be 120-42, and even then it never quite happened... I know its completely silly... but there have only ever been a few thousand (?) team seasons in MLB history, and most of them of course have been "average" teams.
{"url":"http://www.whitesoxinteractive.com/vbulletin/archive/index.php/t-66905.html","timestamp":"2014-04-21T05:08:46Z","content_type":null,"content_length":"23446","record_id":"<urn:uuid:545d71dd-1317-41a6-a32f-ea6aaa63bd07>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
L2.T8.1. Individual, incremental and diversified value at risk (VaR)

David Harper CFA FRM CIPM

AIM: Define and distinguish between individual VaR, incremental VaR and diversified portfolio VaR. Discuss the role correlation has on portfolio risk. Compute diversified VaR, individual VaR, and undiversified VaR of a portfolio.

1.1. In a portfolio where all returns are normally distributed, the diversified portfolio value at risk (VaR) of a two-asset portfolio is $76 million. The first asset has an individual VaR of $70 million. The assets have zero correlation. What is the individual VaR of the second asset?
a. $6.0 million
b. $18.0 million
c. $19.4 million
d. $29.6 million

1.2. A two-asset portfolio with a value of $20 million contains two equally-weighted assets (each asset has a value of $10 million). The volatility of the first asset is 10% and the volatility of the second asset is 20% (asset returns are normally distributed). What is, respectively, the 95% diversified portfolio value at risk (VaR) if (i) the assets are uncorrelated, (ii) the assets have a correlation (rho) of 0.5, and (iii) the assets are perfectly correlated?
a. $3.0, 3.9 and 4.5 million
b. $3.7, 4.4 and 4.9 million
c. $3.9, 5.2 and 5.8 million
d. $4.1, 5.6 and 6.3 million

1.3. A two-asset portfolio with a value of $40 million contains two equally-weighted assets (each asset has a value of $20 million). The volatility of both assets is 30%. The assets are uncorrelated (i.e., their correlation is zero). What is the incremental value at risk (VaR), assuming 95% confident VaR, if we subtract one asset from the portfolio, leaving only the remaining asset in the portfolio?
a. $4.09 million
b. $6.73 million
c. $9.87 million
d. $13.96 million

1.4. If computed for a portfolio where correlations are imperfect, which of the following value at risk (VaRs) will be greatest?
a. Undiversified VaR
b. Diversified VaR
c. Individual VaR
d. Incremental VaR

1.5.
Which approach is most likely to find a local-valuation (delta-normal valuation) method insufficient? a. Undiversified VaR b. Diversified VaR c. Individual VaR d. Incremental VaR 1.6. Which is equal to the sum of component VaRs? a. Undiversified VaR b. Diversified VaR c. Sum of individual VaRs d. Sum of incremental VaRs 1.7. Which is equal to the sum of individual VaRs? a. Undiversified VaR b. Diversified VaR c. Sum of component VaRs d. Sum of incremental VaRs
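The two-asset arithmetic these questions exercise is compact enough to script. A sketch assuming normally distributed returns and a one-tailed 95% deviate of about 1.645 (the questions don't pin down the z-value, so rounding may differ slightly):

```python
import math

Z95 = 1.645  # one-tailed 95% normal deviate (assumed; some texts round to 1.65)

def individual_var(value, vol, z=Z95):
    """Individual VaR of one position: z * volatility * position value."""
    return z * vol * value

def diversified_var(var1, var2, rho):
    """Two-asset diversified VaR: sqrt(VaR1^2 + VaR2^2 + 2*rho*VaR1*VaR2)."""
    return math.sqrt(var1 ** 2 + var2 ** 2 + 2 * rho * var1 * var2)

# The setup of question 1.2: $10M per asset, volatilities 10% and 20%.
v1 = individual_var(10, 0.10)
v2 = individual_var(10, 0.20)
for rho in (0.0, 0.5, 1.0):
    print(rho, round(diversified_var(v1, v2, rho), 2))
```

The three printed values round to roughly 3.7, 4.4 and 4.9 million. Note that undiversified VaR is the rho = 1 case, i.e. the simple sum of the individual VaRs, which is why it is the largest whenever correlations are imperfect.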
{"url":"https://www.bionicturtle.com/forum/threads/l2-t8-1-individual-incremental-and-diversified-value-at-risk-var.4778/","timestamp":"2014-04-18T03:04:49Z","content_type":null,"content_length":"30692","record_id":"<urn:uuid:96b2d4f0-2add-44c6-a1db-c7df08722ea7>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
Wind, Waves, and Sailors - SailNet Community

Sailboats move on the water's surface, in the dynamic area where wind and water interact, exchanging energy and building waves. Winds fill a boat's sails, but it is waves that affect a boat's movement and stability. So waves, more than wind, are really our first interest. So how do waves form? When wind blows along a consistent direction and with a constant force for a measured period of time, waves form. We speak of waves in measured terms, calculating their height and period. Height is the distance from the trough to the crest, and period is the time it takes for successive wave crests, or troughs, to pass a fixed point. Now, here is the key to understanding waves. When you know the wave period you are able to calculate everything else you need to know about waves. Knowing this variable, you can quickly compute these other important wave features:

1. A wave length is five times the wave period squared.
2. A single wave speed is three times the wave period.
3. A group of waves move at a constant speed, which is 1.5 times the wave period.
4. A wave feels the bottom and begins to slow (while also growing in height) when water depth is half the wave's length (see item No.1 for determining length).
5. A wave breaks when the water depth is five thirds of the wave's height. If you know the water depth, you can work this formula in reverse and calculate what size wave will break in a given water depth. For example, in five feet of water a three-foot wave will break.

So how do you determine the wave period? Well, you can time the passage of wave crests or troughs yourself while observing the ocean around you, and in addition, you can use the Marine Prediction Center's (MPC) wave period charts, which are produced and disseminated several times each day via Weatherfax and the Internet (www.mpc.ncep.noaa.gov). MPC wave period charts are highly accurate and incorporate global wave energy in determining wave period.
A wave model known as the Wave Watch III is now in use by the MPC and has proven itself to be the world’s most accurate and reliable wave model. We are truly fortunate to have this data available to us. Why do you need to know these relationships concerning waves? Here is why: Let’s say a gale (a gale is defined as winds of 34 to 40 knots, which is Force 8 on the Beaufort Scale) is 500 miles away from your position and stationary. These gale-force winds are blowing across 200 miles of open ocean and in doing so they produce waves with a height of 23 feet and a period of approximately nine seconds (data from Table 3302 in American Practical Navigator-Bowditch). When will these gale-produced waves reach your position? Before we compute an answer let’s first consider why we need to know. Knowing when these seas will arrive will allow you to make an appropriate course or anchorage change, keeping your sailboat in comfortable and safe conditions. So when will these waves arrive? Let’s calculate wave speed for the group of waves being produced. The group wave speed is 1.5 times the wave period. The wave period is nine seconds, so the group wave speed is 13.5 knots (1.5 x nine seconds = 13.5 knots). Traveling at 13.5 knots it will take these waves 37 hours, or 1.5 days (500 miles divided by 13.5 knots = 37 hours) to reach your position. Forerunners of these main waves travel at a faster speed, which is three times the wave period, so in 18 hours the first swells from this gale will reach your position. These first swells are the warning sign of the approaching herd of waves, and should prompt you to take note. In the days before weather satellites, the approach of hurricanes was monitored by watching ocean swells. Hurricane-force winds produce large waves with long periods, up to 18 seconds, and so move at 27 knots (18 seconds x 1.5 = 27 knots). 
Since hurricanes move slower than 27 knots when first developing, the arrival of large swells warns of an approaching tropical cyclone. Better yet, by timing the swell period you can work this math backwards and determine an approximate distance to the hurricane! Now let's say for this example that you are anchored in what appears to be a safe cove, where the water depth averages 30 feet. You know that large swells will be arriving within the next 18 to 37 hours and you want to know in what depth these waves will break. Wave height will degrade to some extent during their 500-mile transit to your position, and for now let's assume the wave height upon arrival at your anchorage is 20 feet. At what depth will these waves begin feeling the bottom and hence slow down? At what depth will these waves break? First, we need to calculate wavelength, which is five times the wave period squared. So 5 x (9 x 9) = 405 feet for wavelength. The waves feel the bottom and begin slowing when the depth is 1/2 of wavelength; so these waves will begin slowing down and growing in height when the water depth is 405/2 ≈ 202 feet. Well, you are OK so far since you are not anchored in 202 feet of water! Wait though! When will these waves actually break? That occurs when the water depth is 5/3 of the wave height (remember, a three-foot wave will break in five feet of water). Do a little cross multiplication, and you have depth = 5/3 wave height. Thus, we now know these waves will break in 33 feet of water (5/3 x 20 ≈ 33 feet). Oh no! The water depth in your safe cove is 30 feet! I imagine you had better weigh anchor and move to a different location, but no need to scream, shout, and run about because you have calculated the wave speed and you have 18 hours before the first swells arrive. But don't procrastinate! Hopefully you now see how waves are a critical factor in sailboat stability and motion, so observe the waves around you, and determine the wave period.
Use the MPC wave period chart and always calculate depths where waves will break when choosing an open anchorage. Forewarned is forearmed.
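The five rules of thumb above, plus the worked example, condense into a few lines. This sketch just encodes the article's own formulas (period T in seconds, speeds in knots, heights and depths in feet, distances in nautical miles); the function names are mine:

```python
def wavelength_ft(T):
    return 5 * T ** 2            # rule 1: length = 5 * period^2

def wave_speed_kt(T):
    return 3 * T                 # rule 2: single-wave (forerunner) speed

def group_speed_kt(T):
    return 1.5 * T               # rule 3: speed of the wave group

def feel_bottom_depth_ft(T):
    return wavelength_ft(T) / 2  # rule 4: waves slow at half a wavelength

def breaking_depth_ft(H):
    return 5 * H / 3             # rule 5: a wave of height H breaks here

# Worked example from the article: 9-second seas from a gale 500 nm away.
T, distance_nm = 9, 500
hours_main = distance_nm / group_speed_kt(T)   # ~37 h for the main wave group
hours_first = distance_nm / wave_speed_kt(T)   # ~18.5 h for the first swells
depth_20ft_breaks = breaking_depth_ft(20)      # ~33 ft: deeper than a 30-ft cove
```

For the article's numbers this reproduces the 405-ft wavelength, the 13.5-knot group speed, the 18-to-37-hour arrival window, and the roughly 33-ft breaking depth that makes the 30-ft cove unsafe.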
{"url":"http://www.sailnet.com/forums/seamanship-articles/19279-wind-waves-sailors.html","timestamp":"2014-04-23T14:56:35Z","content_type":null,"content_length":"98884","record_id":"<urn:uuid:a1cb9924-827d-4b2d-980f-a53bca55b6bf>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
S&TR | May 2005: Commentary

Einstein's Legacy Alive at Livermore

In March 1905, a little-known patent examiner in the Swiss patent office published a paper about the properties of light in the German physics journal, Annalen der Physik. The examiner was 26-year-old Albert Einstein, and his paper, "Concerning an Heuristic Point of View toward the Emission and Transmission of Light," marked the beginning of a surge of human creativity nearly unprecedented in science. In seven months, Einstein published four papers and a thesis that together had far-reaching effects on physics, technology, and our understanding of the universe. Science and Technology Review (S&TR) is publishing a four-part series, examining how Einstein's discoveries form the basis for much of the Laboratory's physics research. "Applying Einstein's Theories of Relativity," the first article in this series, discusses Einstein's most famous 1905 paper, "On the Electrodynamics of Moving Bodies," which introduced the revolutionary concept of relativity. Special and general relativity play an important role in Livermore physics research, especially our research into the physics of astronomical events such as gamma-ray bursts, black holes, and supernovae. In like manner, our supercomputer codes that model these events must account for relativity. Many of our astrophysics codes were adapted from versions developed originally for nuclear weapons research, which are based on concepts outlined in another Einstein paper. Finally, our work with machines that accelerate ions close to the speed of light would not be possible without incorporating the tenets of relativity. S&TR will discuss Einstein's first paper of 1905, "Concerning an Heuristic Point of View toward the Emission and Transmission of Light." In this paper, he explained some puzzling properties of light as a consequence of its particulate nature. Einstein called light's discrete packets quanta; we now call them photons.
This paper is the foundation for the Laboratory’s research in quantum physics, ionizing radiation, lasers, and advanced optical-imaging techniques. S&TR will examine Einstein’s third 1905 paper, “Investigations on the Theory of the Brownian Movement.” In this paper, he explained the random motion of microscopic particles suspended in a liquid, and he used a branch of physics known as statistical mechanics to estimate the size of molecules. This work helped confirm the atomic theory of matter and is the basis for much of the Laboratory’s work in molecular dynamics, Monte Carlo statistical techniques, and physical chemistry. S&TR will finish the series with a discussion of Einstein’s fourth 1905 paper, “Does the Inertia of a Body Depend upon Its Energy-Content?” In this paper, which appeared in the September 1905 edition of Annalen der Physik, Einstein reported that, as a consequence of special relativity, matter and energy are interchangeable. This work led to the famous equation E = mc^2. Thirty years later, physicists discovered fission and fusion, which demonstrate the conversion of mass into large amounts of energy. This work made possible the nuclear weapons research done at Lawrence Livermore and Los Alamos national laboratories. It also paved the way for Livermore research efforts in peaceful nuclear power, such as magnetic fusion energy and inertial confinement fusion, as well as the Laboratory’s nuclear and particle physics experiments, in which matter and energy are interchanged.
{"url":"https://www.llnl.gov/str/May05/ComMay05.html","timestamp":"2014-04-17T13:00:58Z","content_type":null,"content_length":"11434","record_id":"<urn:uuid:81053e14-17ad-46f1-8916-834192d1ef83>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Intuitionists and excluded-middle
Neil Tennant neilt at mercutio.cohums.ohio-state.edu
Thu Oct 20 14:25:25 EDT 2005

On Wed, 19 Oct 2005, Jesse Alama wrote:

> 2. This proof might not go through for all representations of real
> numbers, especially an important one in this connection, namely
> representation by choice sequences. You conclude from the assumption
> that a and b are irrational numbers between 0 and 1 that we can
> represent them as
>
>   a = (0.)x_1,...,x_k,y,y_1,...,y_n,z,z_1,z_2,...
>   b = (0.)x_1,...,x_k,u,u_1,u_2,...
>
> where the sequence of y_i's is the first (and longest) block of 9's in
> the decimal expansion of a after which a and b "disagree" in their
> decimal expansions. Why should we assume that every irrational number
> between 0 and 1 (or any irrational number for that matter), regarded
> as a choice sequence, has such a block of 9's?

First, a and b differ in their expansions from the (k+1)-th place (since y<u). Secondly, Lew Gordeew explicitly allowed the possibility that n might be 0 (hence that there be no 9s).
{"url":"http://www.cs.nyu.edu/pipermail/fom/2005-October/009205.html","timestamp":"2014-04-19T14:52:41Z","content_type":null,"content_length":"3545","record_id":"<urn:uuid:54543032-6c0b-437e-839f-2a558f01ffd6>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
Bounding 2nd Eigenvalue of a Pseudo-Rotation-ish Matrix

Let $p,q$ be arbitrary primes and let $N = pq$. Let $I$ be the $N \times N$ identity matrix. Let $R$ be the $N \times N$ matrix defined as follows: $R[x_0 p + y_0,\ x_1 p + y_1] = 1$ if and only if $x_0+1 \equiv x_1 \pmod{q}$ and $y_0 + 1 \equiv y_1 \pmod{p}$. Let $A = \begin{pmatrix} \frac12 I & \frac12 R \\ \frac12 R & \frac12 I \end{pmatrix}$. The largest eigenvalue of $A$ is $1$, associated with the all-ones vector.

Question: how can I show that the second largest eigenvalue (in absolute value) is $< 1$? I'm not particularly concerned with the quality of the bound; for example, $\lambda < 1 - 2^{-pq}$ is perfectly fine. I just need to show that it's $< 1$.

Context: derandomization.

computational-complexity linear-algebra

It seems to me that $R$ can be written as a tensor product of $S \in M_q$ and $T \in M_p$, given by $$ S[x_0,x_1] = 1 \Leftrightarrow x_0 + 1 \equiv x_1 \pmod{p} $$ and $$ T[y_0,y_1] = 1 \Leftrightarrow y_0 + 1 \equiv y_1 \pmod{p}. $$ I suspect that there is a typo, and the condition on $S$ is supposed to be $y_0+1 \equiv y_1 \pmod{q}$. In any case, this tensor product formulation may be helpful. – Aaron Tikuisis Mar 17 '12 at 14:54

2 Answers

(Accepted) With reference to the tensor product formulation that I gave in my comment, we notice that $S$ is unitarily equivalent to $$ \mathrm{diag}(\alpha, \alpha^2, \dots, \alpha^q) $$ where $\alpha = \exp(2\pi i/q)$, and likewise $T$ is unitarily equivalent to $$ \mathrm{diag}(\beta, \dots, \beta^p) $$ where $\beta = \exp(2\pi i/p)$. Therefore, $A$ is unitarily equivalent to $$ \frac12 \begin{pmatrix} I_{pq} & \mathrm{diag}(\gamma,\dots,\gamma^{pq}) \\ \mathrm{diag}(\gamma,\dots,\gamma^{pq}) & I_{pq} \end{pmatrix}, $$ where $\gamma = \alpha\beta$. Subtracting $\frac12 I_{2pq}$ from this gives $$ \frac12 \begin{pmatrix} 0_{pq} & \mathrm{diag}(\gamma,\dots,\gamma^{pq}) \\ \mathrm{diag}(\gamma,\dots,\gamma^{pq}) & 0_{pq} \end{pmatrix} = \frac12 \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \otimes \mathrm{diag}(\gamma,\dots,\gamma^{pq}). $$ The eigenvalues of $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ are $\pm 1$, while the eigenvalues of $\mathrm{diag}(\gamma,\dots,\gamma^{pq})$ are $\gamma,\dots,\gamma^{pq}$, so the eigenvalues of $$ \frac12 \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \otimes \mathrm{diag}(\gamma,\dots,\gamma^{pq}) $$ are $\pm\gamma/2,\dots,\pm\gamma^{pq}/2$. The eigenvalues of $A$ are therefore $(1\pm\gamma)/2,\dots,(1\pm\gamma^{pq})/2$. Your desired bound follows.

Wait. How do you get from "$X$ is unitarily equivalent to $Y$" to "$\frac12 \begin{pmatrix} I & X \\ X & I \end{pmatrix}$ is unitarily equivalent to $\frac12 \begin{pmatrix} I & Y \\ Y & I \end{pmatrix}$"? – user22209 Mar 17 '12 at 19:10

This makes sense to me now. Thanks! – user22209 Mar 17 '12 at 20:01

While I appreciate that you picked my answer, I think that you may find it worth your while even to look at en.wikipedia.org/wiki/Perron–Frobenius_theorem . The Perron–Frobenius theorem may come in handy for similar problems. (Of course, recognizing when matrices have tensor decompositions can also come in handy.) – Aaron Tikuisis Mar 17 '12 at 20:58

Your matrix is non-negative. Thus $1$ is its Perron eigenvalue. You only have to verify that it is irreducible and not cyclic. Then apply the Perron–Frobenius theorem (Section 8.3 of my book Matrices (Springer-Verlag GTM 216, 2nd edition), together with Section 8.4).

Well that's much simpler than what I did. To show that $A$ is nonnegative, must we verify that $\|R\| = \|S\|\|T\| = 1$, or is there a quicker way? – Aaron Tikuisis Mar 17 '12 at 16:55

$A$ is non-negative. In the Perron–Frobenius theory, this means that every entry $a_{ij}$ is non-negative. Irreducible means that it is not `block-triangular'. Read Chapter 8 of my book. This is a very classical topic that more or less every experienced mathematician learns one day or another. – Denis Serre Mar 17 '12 at 18:28

Sorry, I currently do not have a copy of your book and thus am unable to appreciate the beauty of this proof. – user22209 Mar 17 '12 at 20:02
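The accepted answer's spectral computation is easy to sanity-check numerically. Below is a quick sketch (not from the thread; the choice p = 3, q = 5 is ours) that builds R from the definition in the question and confirms that the largest eigenvalue of A is 1 while the second largest in absolute value stays strictly below 1:

```python
import numpy as np

# small odd primes chosen for illustration
p, q = 3, 5
N = p * q

# R[x0*p + y0, x1*p + y1] = 1  iff  x0+1 = x1 (mod q) and y0+1 = y1 (mod p)
R = np.zeros((N, N))
for x0 in range(q):
    for y0 in range(p):
        R[x0 * p + y0, ((x0 + 1) % q) * p + ((y0 + 1) % p)] = 1.0

I = np.eye(N)
A = 0.5 * np.block([[I, R], [R, I]])

# eigenvalue magnitudes in decreasing order
mags = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
print(mags[0], mags[1])   # 1.0, then something strictly below 1
```

Consistent with the answer, the eigenvalues come out as (1 ± γ^k)/2 for γ a primitive pq-th root of unity, so the second largest magnitude is attained near γ^k ≈ −1, about 0.9945 for pq = 15.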
{"url":"http://mathoverflow.net/questions/91457/bounding-2nd-eigenvalue-of-a-pseudo-rotation-ish-matrix","timestamp":"2014-04-18T00:28:42Z","content_type":null,"content_length":"63776","record_id":"<urn:uuid:41f921cb-0e16-4531-9900-0e09da6adc58>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
On the Determinization of Weighted Finite Automata

- In Automata, Languages and Programming – 32nd International Colloquium, ICALP 2005, 2005. Cited by 39 (7 self).
Abstract. Weighted automata are used to describe quantitative properties in various areas such as probabilistic systems, image compression, and speech-to-text processing. The behaviour of such an automaton is a mapping, called a formal power series, assigning to each word a weight in some semiring. We generalize Büchi's and Elgot's fundamental theorems to this quantitative setting. We introduce a weighted version of MSO logic and prove that, for commutative semirings, the behaviours of weighted automata are precisely the formal power series definable with our weighted logic. We also consider weighted first-order logic and show that aperiodic series coincide with the first-order definable ones, if the semiring is locally finite, commutative and has some aperiodicity property.

- Theoret. Comput. Sci., 2004. Cited by 5 (2 self).
Finite automata with weights in the max-plus semiring are considered. The main result is: it is decidable whether a series that is recognized by a finitely ambiguous max-plus automaton is unambiguous, or is sequential. Furthermore, the proof is constructive. A collection of examples is given to illustrate the hierarchy of max-plus series with respect to ambiguity.

- Proceedings of the 8th Scandinavian Workshop on Algorithm Theory (SWAT 2002), 2368 of LNCS:348–357, 2002. Cited by 5 (0 self).
Abstract. A popular and much studied class of filters for approximate string matching is based on finding common q-grams, substrings of length q, between the pattern and the text. A variation of the basic idea uses gapped q-grams and has been recently shown to provide significant improvements in practice. A major difficulty with gapped q-gram filters is the computation of the so-called threshold which defines the filter criterium. We describe the first general method for computing the threshold for q-gram filters. The method is based on a carefully chosen precise statement of the problem, which is then transformed into a constrained shortest path problem. In its generic form the method leaves certain parts open but is applicable to a large variety of q-gram filters and may be extensible even to other classes of filters. We also give a full algorithm for a specific subclass. For this subclass, the algorithm has been implemented and used successfully in an experimental comparison.

- In Proceedings of the workshop Weighted Automata: Theory and Applications (WATA), 2002. Cited by 2 (2 self).
Finite automata are classical computational devices used in a variety of large-scale applications [1]. Finite-state transducers are automata whose transitions are labeled with both an input and an output label. Some applications in text, speech and image processing require more general devices, weighted automata, to account for the variability of the input data and to rank various output hypotheses [7, 9, 8]. A weighted automaton is a finite automaton in which each transition is labeled with some weight in addition to the usual symbol. Weighted automata and transducers provide a common representation for each component of a complex system used in these applications and admit general algorithms such as composition which can be used to combine these components. The time efficiency of these systems is substantially increased when deterministic or subsequential machines are used [9], and the size of these machines can be further reduced using general minimization algorithms [9, 10]. A weighted automaton or transducer is deterministic or subsequential if it has a unique initial state and if no two transitions leaving the same state share the same input label. A general determinization algorithm for weighted automata and transducers was given by [9]. The algorithm outputs a deterministic machine equivalent to the input weighted automaton or transducer and is an extension of the classical subset construction used for unweighted finite automata. But, unlike the case of unweighted automata, not all finite-state transducers or weighted automata and transducers can be determinized using this algorithm. In fact, some machines do not admit any equivalent deterministic one. Thus, it is important to design an algorithm for testing the determinizability of finite-state transducers and weighted automata.

- Cited by 2 (1 self).
Abstract—A nondeterministic weighted finite automaton (WFA) maps an input word to a numerical value. Applications of weighted automata include formal verification of quantitative properties, as well as text, speech, and image processing. Many of these applications require the WFAs to be deterministic, or work substantially better when the WFAs are deterministic. Unlike NFAs, which can always be determinized, not all WFAs have an equivalent deterministic weighted automaton (DWFA). In [1], Mohri describes a determinization construction for a subclass of WFA. He also describes a property of WFAs (the twins property), such that all WFAs that satisfy the twins property are determinizable and the algorithm terminates on them. Unfortunately, many natural WFAs cannot be determinized. In this paper we study approximated determinization of WFAs. We describe an algorithm that, given a WFA A and an approximation factor t ≥ 1, constructs a DWFA A′ that t-determinizes A. Formally, for all words w ∈ Σ∗, the value of w in A′ is at least its value in A and at most t times its value in A. Our construction involves two new ideas: attributing states in the subset construction by both upper and lower residues, and collapsing attributed subsets whose residues can be tightened. The larger the approximation factor is, the more attributed subsets we can collapse. Thus, t-determinization is helpful not only for WFAs that cannot be determinized, but also in cases where determinization is possible but results in automata that are too big to handle. In addition, t-determinization is useful for reasoning about the competitive ratio of online algorithms. We also describe a property (the t-twins property) and use it in order to characterize t-determinizable WFAs. Finally, we describe a polynomial algorithm for deciding whether a given WFA has the t-twins property. Index Terms—Weighted automata; Determinization.

- 2009.
The automata arising from the well-known conversion of regular expressions to nondeterministic automata have rather particular transition graphs. We refer to them as the Glushkov graphs, to honour his nice expression-to-automaton algorithmic short cut [10]. The Glushkov graphs have been characterized [6] in terms of simple graph-theoretical properties and certain reduction rules. We show how to carry, under certain restrictions, this characterization over to the weighted Glushkov graphs. With the weights in a semiring K, they are defined as the transition Glushkov K-graphs of the Weighted Finite Automata (WFA) obtained by the generalized Glushkov construction [4] from the K-expressions. It works provided that the semiring K is factorial and the K-expressions are in the so-called star normal form (SNF) of Brüggemann-Klein [2]. The restriction to a factorial semiring ensures that we obtain algorithms. The restriction to the SNF would not be necessary if every K-expression were equivalent to some with the same literal length, as is the case for the boolean semiring B, but this remains an open question for a general K.

- Although they can be topologically different, two distinct transducers may actually recognize the same rational relation. Being able to test the equivalence of transducers allows one to implement such operations as incremental minimization and iterative composition. This paper presents an algorithm for testing the equivalence of deterministic weighted finite-state transducers, and outlines an implementation of its applications in a prototype weighted finite-state calculus tool.

- We propose a novel approach for the maxstring problem in acyclic nondeterministic weighted FSAs, which is based on a convexity-related notion of domination among intermediary results, and which can be seen as a generalization of the usual dynamic programming technique for finding the max-path (a.k.a. Viterbi approximation) in such automata.
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.31.4054","timestamp":"2014-04-20T21:52:56Z","content_type":null,"content_length":"35018","record_id":"<urn:uuid:cfb90b9c-4bdd-496a-beb0-b59f03fa7891>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
Cresskill Geometry Tutor Find a Cresskill Geometry Tutor ...However, Euclidean geometry is only a rough approximation to the geometry of physical space. This profound observation was developed by Riemann, Einstein, and other innovative scientists. The geometry currently taught in US high schools is comprised mostly of aspects of the ancient Euclidean geometry utilizing a local, restricted, set of coordinates. 19 Subjects: including geometry, reading, writing, calculus ...It does not have to be. Once broken down, my students understand that improving your score is a very achievable goal. As soon as a student understands that tangible results are possible, I find that they become not only willing but enthusiastic about the work required to get the score they want. 16 Subjects: including geometry, algebra 1, GED, finance ...I love to talk with students of all ages about these subjects, and I would like to help you to appreciate their fundamental simplicity and beauty while getting you to your academic goals. I hold a PhD in physics from Columbia University, and I have been part of the condensed matter research community for more than a decade. I prefer to tutor near where I live in Park Slope, Brooklyn, 25 Subjects: including geometry, chemistry, physics, calculus ...It's the explanation for everyday things, like why laundry detergent works better in hot water or how baking soda works or why not all pain relievers work equally well on a headache. If you know some chemistry, you can make educated choices about everyday products that you use.You could use chem... 17 Subjects: including geometry, chemistry, physics, biology ...I am willing to use many techniques to improve English skills. I will help students understand the functions of the government with key to the history and development of the U.S. I also help extensively with the writing process. 52 Subjects: including geometry, English, chemistry, physics
{"url":"http://www.purplemath.com/Cresskill_geometry_tutors.php","timestamp":"2014-04-19T12:39:38Z","content_type":null,"content_length":"24023","record_id":"<urn:uuid:16a7ecdc-0c31-4415-b70f-4fe6c4b1a4fd>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
North Richland Hills Calculus Tutor Find a North Richland Hills Calculus Tutor Mathematics can be intimidating to many people. I would like to put you, the student, at ease when teaching, so that you can more easily understand the subject without added stress. I have a special technique that I stumbled upon when in High School, for learning abstract mathematical concepts. 20 Subjects: including calculus, statistics, geometry, GRE ...I graduated from New Tribes Bible Institute with the equivalent of an associate's degree in Biblical studies. My greatest joy is to share what I have learned from the Bible with others. I still study the Bible on a daily basis. 40 Subjects: including calculus, reading, chemistry, physics ...I now study biochemistry and nutrition as one of my hobbies. In general, the topics covered in chemistry include, but are not limited to, the following: the history of chemistry, measurement and the metric system, the mole concept, chemical reactions and stoichiometry, energy and chemical react... 82 Subjects: including calculus, English, chemistry, algebra 1 I am a recently retired (2013) high school math teacher with 30 years of classroom experience. I have taught all maths from 7th grade through AP Calculus. I like to focus on a constructivist style of teaching/learning which gets the student to a conceptual understanding of mathematical topics. 12 Subjects: including calculus, statistics, geometry, algebra 2 ...I have a Chemical Engineering degree from The University of Texas at Austin and I am Texas certified in Mathematics grades 4-8 and 8-12. I have experience teaching grades 5, 6, 7, 8, Algebra I, Algebra II, Geometry, Pre-Calculus and Calculus. I am able to identify gaps in student understanding and come up with a plan to fill those gaps and lead the student to success. 9 Subjects: including calculus, chemistry, geometry, algebra 1
{"url":"http://www.purplemath.com/North_Richland_Hills_Calculus_tutors.php","timestamp":"2014-04-18T11:25:20Z","content_type":null,"content_length":"24531","record_id":"<urn:uuid:133d0d74-e680-433a-9057-00accf0502e7>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
Sage wants a chance to be a starter. - Page 4 - Houston Texans Message Board & Forum - TexansTalk.com

Originally Posted by The Pencil Neck
I think you're letting a couple of bad plays in the red zone influence your opinion WAY too much. Early in the season, Matt had a couple of questionable throws. After he tried to force one against the Panthers, he got a little reticent, and that hesitation looked like it caused a turnover against the Colts (after Jacoby's great punt return.) But overall, he could have been better and I expect him to be better, but I think he was far from "abysmal."

And then in the Chargers game, Schaub threw back-to-back INTs (or it was at least on back-to-back drives) to the same guy because he was forcing the throw. The throws, IIRC, were nowhere near being completions, with the second INT a very easy one which had no chance from the time the ball was snapped. This was after Schaub, in the Panthers game, told Kubiak he "wouldn't do that again." After that sort of emphatic promise, you'd think he would have been less likely to throw one up for grabs (let alone two in a row).

People on this board are saying "Schaub's the man," and some are saying "Well, hold on a second..." and some are saying "Boy, I am sure glad we have Sage as a quality backup," and some are saying "He's a great stand-in, but he's had his chances," and so the jury is tilted toward Schaub thus far. I haven't changed much on where I stand: "Who the HELL is our QB of tomorrow? Because I see two backups trying to be the starter."

The stats are about the same (eerily close, actually) except for Sage being better in a category or two but Schaub being better in a different category or two than Sage (I am not going to research this out again; it's not worth the time). So... all we have left is that Schaub has "a better ceiling" and Sage has had his chances. I just want somebody to solidify the role. And be a franchise QB.

You don't see this issue on the Colts, or the Steelers, or the Chargers, or the Patriots, nor on several other teams. Those are teams where you KNOW who "the guy" is and that he's not in danger of being uprooted anytime soon. Conversely, we're better off than the Bears. They seem to not even have ONE decent QB who can lead a team in even a somewhat decent fashion.

So, all I got is that I rambled on this topic and pretty much didn't get anywhere new with it. So, when does training camp start?... because I am bored like everyone else. Obviously.
{"url":"http://www.texanstalk.com/forums/showthread.php?p=937769","timestamp":"2014-04-19T07:44:54Z","content_type":null,"content_length":"182121","record_id":"<urn:uuid:ddb48b3c-d457-4023-b27c-0c09d622a963>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Applications of Combinatorial Matrix Theory to Laplacian Matrices of Graphs

This book is part of the series “Discrete Mathematics and its Applications.” It continues the recent line of books that exploit the connections between the two seemingly disparate subjects of graph theory and matrix theory. While some of these books are more along the lines of graduate-level research monographs (such as An Introduction to the Theory of Graph Spectra by Cvetković, Rowlinson, and Simić), or an undergraduate textbook (Graphs and Matrices by Bapat), this book works well as a reference textbook for undergraduates. Indeed, it is a distillation of a number of key results involving, specifically, the Laplacian matrix associated with a graph (which is sometimes called the “nodal admittance matrix” by electrical engineers). Two other texts, one by Brualdi and Ryser from 1991 (Combinatorial Matrix Theory) and one by Brualdi and Cvetković from 2009 (A Combinatorial Approach to Matrix Theory and Its Applications), have similar titles, but are at a higher level. In the former, such topics as permanents and Latin Squares are given treatment, while the latter discusses canonical forms and applications to electrical engineering, chemistry and physics.

After two chapters covering the preliminaries in Matrix Theory and Graph Theory necessary for the sequel, Molitierno presents an Introduction to Laplacian Matrices, with a proof of the Kirchhoff Matrix-Tree Theorem via Cauchy-Binet. He discusses Laplacians of weighted graphs as well as unweighted ones, and bounds on the eigenvalue spectra of certain classes of graphs. In particular, Molitierno focuses on the second smallest eigenvalue of a graph’s Laplacian matrix, called the algebraic connectivity of the graph. The important work of Grone and Merris is given a decent treatment, as is Fiedler’s.
In fact, it is Fiedler’s theorem on eigenvectors that leads to a particular type of matrix that dominates the last two chapters of the book, the so-called “bottleneck matrices.” These matrices are used to determine such graph properties as algebraic connectivity. Chapter 6 covers the bottleneck matrices for trees, while some general classes of non-tree graphs are covered in chapter 7. Molitierno’s book represents a well-written source of background on this growing field. The sources are some of the seminal ones in the field, and the book is accessible to undergraduates. John T. Saccoman is Professor of Mathematics at Seton Hall University in South Orange, NJ.
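Both of the notions the review highlights, the algebraic connectivity and the Kirchhoff Matrix-Tree Theorem, are easy to experiment with numerically. A sketch (the choice of the complete graph K4 and the helper names are ours, not from the book):

```python
import numpy as np
from itertools import combinations

def laplacian(n, edges):
    """Laplacian L = D - A of a simple undirected graph on n vertices."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    return L

n = 4
K4 = list(combinations(range(n), 2))      # complete graph on 4 vertices
L = laplacian(n, K4)

eigs = np.sort(np.linalg.eigvalsh(L))     # 0 is always the smallest eigenvalue
alg_connectivity = eigs[1]                # second smallest: the Fiedler value

# Kirchhoff's Matrix-Tree Theorem: any cofactor of L counts spanning trees
spanning_trees = round(np.linalg.det(L[1:, 1:]))
print(alg_connectivity, spanning_trees)   # 4.0 and 16 (= 4^2, Cayley's formula)
```

For K4 the Laplacian spectrum is {0, 4, 4, 4}, so the algebraic connectivity is 4, and the cofactor gives the 16 spanning trees predicted by Cayley's formula n^(n-2).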
{"url":"http://www.maa.org/publications/maa-reviews/applications-of-combinatorial-matrix-theory-to-laplacian-matrices-of-graphs?device=mobile","timestamp":"2014-04-17T00:11:56Z","content_type":null,"content_length":"29749","record_id":"<urn:uuid:aa05f897-e56f-492a-afda-6f3fa431dadd>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
Instructor's notes.

The model used in the activity "Using mass balance model to understand carbon dioxide" is based on the simple mass balance relation

    dC/dt = S - C/τ      (eqn. 1)

where C is the global atmospheric concentration of CO[2] (in ppmv), S is the emission source strength (ppm/yr), and τ is the atmospheric lifetime (yrs). The general solution to equation 1 for any arbitrary time-dependent emission source S(t) is:

    C(t) = C0·e^(-t/τ) + ∫ from 0 to t of S(t')·e^(-(t-t')/τ) dt'      (eqn. 2)

We have solved equation 2 assuming an emission source strength of the form

    S(t) = S0·e^(R·t)      (eqn. 3)

[see solution]

An interactive online program has been written in which students can modify the input values (C0, τ, S0, and R) and then generate a graph of C vs. t. A table of values for C and S as functions of time is also generated. Students first "calibrate" the model to fit recent observations and then use the model to explore future emission scenarios. Although I have used this assignment after a brief in-class discussion of the model basics and the online modeling environment, the two activities below provide a solid background on the mass balance concept and its application to global trace gases.

For classes with limited mathematics ability, I describe the mathematics of the model using only the finite difference form of equation 1. Although it is tempting to also discuss Euler's number e, I purposely avoid this for classes with weak math skills, as I believe that it adds little (if anything) to their understanding of the mass balance physical processes.

To give students a better feel for the model you may want to use some or all of the introductory water bucket model activity at: http://cs.clark.edu/~mac/physlets/GlobalPollution/WaterBucket.htm. I often use a 2-liter pop bottle, with flow from a sink into the top and a hole in the bottom, as a physical model during an in-class discussion of mass balance. The lifetime for this water bucket model is then related to the hole size in the bottle and the viscosity of the water.

The next suggested activity, useful in bridging the gap between the water bucket model and its application to the atmosphere, is located at: http://cs.clark.edu/~mac/physlets/GlobalPollution/TraceGasTheory.htm. This site also provides links to basic definitions of terms and basic theory in the context of atmospheric science.

• Another activity related to recent carbon dioxide trends and its seasonal cycle is CO2_trend&SeasonalCycle.
• A similar modeling activity has also been written to explore chlorofluorocarbon (CFC-12) build-up in the atmosphere using the same model structure given by equations 1-3.

http://cs.clark.edu/~mac
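The finite difference form of the mass balance is straightforward to put in code. The sketch below assumes the model takes the standard form dC/dt = S(t) - C/τ with an exponentially growing source S(t) = S0·e^(R·t) (our reading of equations 1 and 3; the parameter values are invented for illustration), and compares a forward-Euler integration against the closed-form solution:

```python
import math

def euler_co2(C0, tau, S0, R, t_end, dt=0.01):
    """Forward-Euler integration of dC/dt = S(t) - C/tau, S(t) = S0*exp(R*t)."""
    C = C0
    steps = round(t_end / dt)
    for i in range(steps):
        S = S0 * math.exp(R * (i * dt))
        C += dt * (S - C / tau)
    return C

def analytic_co2(C0, tau, S0, R, t):
    """Closed-form solution for the exponential source (requires R != -1/tau)."""
    a = S0 / (R + 1.0 / tau)              # amplitude of the particular solution
    return (C0 - a) * math.exp(-t / tau) + a * math.exp(R * t)

# illustrative values: C0 = 280 ppmv, tau = 100 yr, S0 = 3 ppm/yr, R = 1%/yr
num = euler_co2(280.0, 100.0, 3.0, 0.01, 50.0)
exact = analytic_co2(280.0, 100.0, 3.0, 0.01, 50.0)
print(num, exact)    # the two agree to within the O(dt) Euler error
```

Shrinking dt tightens the agreement, which is a useful in-class demonstration of why the finite difference form converges to the continuous solution.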
{"url":"http://www.atmosedu.com/physlets/GlobalPollution/instructorNotes.htm","timestamp":"2014-04-21T02:09:43Z","content_type":null,"content_length":"5050","record_id":"<urn:uuid:e5367267-bbc3-428b-b07c-660b76f0d8c1>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00214-ip-10-147-4-33.ec2.internal.warc.gz"}