Trigonometry Basics - Circles and Triangles

Sudeep Joshi – Thu, 10 Jun 2010: It's tough but an interesting topic, and its usage is also vast.

mukherjipooja – Mon, 03 May 2010: I have a question; post the explanation if anybody solves it. Q1. A regular hexagon is circumscribed by a rectangle such that all six corners of the hexagon lie on the rectangle. What is the ratio of the area of the largest possible equilateral triangle that can be cut from the rectangle to the area of the hexagon?

lihosp – Thu, 18 Mar 2010: Hehe, these questions are so simple. If these types of questions are going to come in the SAT paper, I can score full marks. Sir, can you say whether these types of questions will come in the SAT paper?

Rubina Singh – Thu, 25 Mar 2010: The questions mentioned here are just examples, and the actual questions will follow the same pattern. Basically, this exam tests the speed and accuracy of the candidate, so you need to be fast and must be accurate; work very carefully. Here are some books you can refer to for its preparation: The Official SAT Study Guide: 2nd Edition by College Board (Editor); Answers and Explanations by Peter Tanguay; The Ultimate SAT Supplement by Erik Klass; Up Your Score 2009-2010: The Underground Guide to the SAT; The Full Potential SAT Audio Program by Bara Sapir; The Ultimate SAT Tutorial: The Easiest and Most Effective Way to Raise Your Score by Erik Klass; SAT Practice: The New Verbal Section by K. Titchenell. Try to solve more and more sample papers for good practice. Don't forget that if you can score full marks in this, you can apply for various scholarships and can also apply to top class. Start preparing for it and do well in the exams. Good luck!

crsvaidehi – Wed, 19 Aug 2009: Example 3 would have been better explained with a rough diagram.

Suresh – Thu, 20 Aug 2009: I thought it is easy to understand the solution with a diagram. Anyway, I will try to be more lucid in the coming lessons.

crsvaidehi – Wed, 19 Aug 2009: For practice problem 2, on solving the given 2 conditions I got 3a = b or a = 3b. What next? I couldn't get it. Please post the answer for it.

Suresh – Thu, 20 Aug 2009: Hi, apply the Cosine Rule and you will get the answer quite comfortably.

crsvaidehi – Wed, 19 Aug 2009: Solution to problem 1: let a, b, c be the sides of the triangle, with c the hypotenuse. As R is the circumradius, the circumdiameter, i.e. c, is 30, so a = 18, b = 24, which is the only combination satisfying (1). So a + b = 42.

user dce – Sat, 30 May 2009: Area of triangle = 1/2 (base × altitude), i.e. altitude D = 2 (Area / Base).

Suresh – Fri, 15 May 2009: Folks, it seems there is some problem with the format of this lesson. I will try to fix this soon and will also try to come up with answers for the practice problems.

amahapatra – Mon, 27 Jul 2009: Please add answers to the practice problems also.

suman sourabh – Mon, 27 Apr 2009: Fantastic lesson.

Nick Watson – Sat, 18 Apr 2009: I didn't understand the last question. I mean, where's the point H? That's why I am not able to form the trapezium.

gargi_l – Thu, 09 Apr 2009: Where do I get the solutions for the practice problems?

pokharna – Wed, 01 Apr 2009: I was considering my level in quant sufficient, but now I must rethink: it took me a few minutes to solve these with my weak application skills. Great work!

deepak agarwal – Thu, 19 Mar 2009: The formula you used for the altitude calculation, D = 2 (Area / Base), is new to me.

deepak agarwal – Thu, 19 Mar 2009: The basics are good. Thanks.

viditsa handoo – Wed, 18 Mar 2009: Better one.

asureshwaran – Wed, 04 Feb 2009: Good lesson.

wwwwwwwwwwwww – Wed, 19 Nov 2008: Dude, you said that you studied maths in grade school and also in engineering, but it seems you didn't study it well. The medians of a triangle make 3 quadrilaterals of equal area; this concept is in the high school course of the CBSE board. And the formula Altitude = 2 (Area / Base) is new to you? You are kidding, right? Area of a triangle = 1/2 (Base × Altitude), so (Base × Altitude) = 2 × Area. Understood? We can also say Altitude = 2 × (Area / Base). Clear? You are a funny guy; you don't seem to be a maths scholar.

Sanchit kshirsagar – Sun, 16 Nov 2008: I have studied maths in grade school and in engineering as well, but to be really honest, no one ever mentioned that the medians of a triangle make 3 quadrilaterals of equal area. Also, the formula you used for the altitude calculation, D = 2 (Area / Base), is new to me. Is there any reference book you would suggest that talks about such clues and relationships between geometrical figures? Seriously, this was an interesting topic. I would recommend GMAT test takers go through this session.
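The formula Altitude = 2 (Area / Base) that this thread keeps returning to is just Area = 1/2 (Base × Altitude) rearranged. A quick numeric sanity check, using a hypothetical 3-4-5 right triangle rather than a problem from the lesson:

```python
# Altitude = 2 * (Area / Base), rearranged from Area = 1/2 * Base * Altitude.
# For a 3-4-5 right triangle, the legs give Area = 1/2 * 3 * 4 = 6;
# the altitude onto the hypotenuse (base = 5) is then 2 * 6 / 5 = 2.4.

def altitude(area, base):
    return 2 * area / base

area = 0.5 * 3 * 4          # right triangle with legs 3 and 4
print(altitude(area, 5))    # altitude onto the hypotenuse -> 2.4
```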
Source: http://gmat.learnhub.com/lesson/5727-trignometry-basics-circles-and-triangles (retrieved 2014-04-17)
The complexity of theorem-proving procedures
Results 11 - 20 of 621

1995. Cited by 161 (0 self).
... quickly across a wide range of hard SAT problems than any other SAT tester in the literature on comparable platforms. On a Sun SPARCStation 10 running SunOS 4.1.3 U1, POSIT can solve hard random 400-variable 3-SAT problems in about 2 hours on average. In general, it can solve hard n-variable random 3-SAT problems with search trees of size O(2^(n/18.7)). In addition to justifying these claims, this dissertation describes the most significant achievements of other researchers in this area, and discusses all of the widely known general techniques for speeding up SAT search algorithms. It should be useful to anyone interested in NP-complete problems or combinatorial optimization in general, and it should be particularly useful to researchers in either Artificial Intelligence or Operations Research.

Journal of the ACM, 1996. Cited by 157 (5 self).
Computational efficiency is a central concern in the design of knowledge representation systems. In order to obtain efficient systems, it has been suggested that one should limit the form of the statements in the knowledge base or use an incomplete inference mechanism. The former approach is often too restrictive for practical applications, whereas the latter leads to uncertainty about exactly what can and cannot be inferred from the knowledge base. We present a third alternative, in which knowledge given in a general representation language is translated (compiled) into a tractable form — allowing for efficient subsequent query answering. We show how propositional logical theories can be compiled into Horn theories that approximate the original information. The approximations bound the original theory from below and above in terms of logical strength. The procedures are extended to other tractable languages (for example, binary clauses) and to the first-order case. Finally, we demonstrate the generality of our approach by compiling concept descriptions in a general frame-based language into a tractable form.

1991. Cited by 138 (16 self).
Precise and efficient dependence tests are essential to the effectiveness of a parallelizing compiler. This paper proposes a dependence testing scheme based on classifying pairs of subscripted variable references. Exact yet fast dependence tests are presented for certain classes of array references, as well as empirical results showing that these references dominate scientific Fortran codes. These dependence tests are being implemented at Rice University in both PFC, a parallelizing compiler, and ParaScope, a parallel programming environment.

Artificial Intelligence, 1997. Cited by 108 (22 self).
The computational properties of qualitative spatial reasoning have been investigated to some degree. However, the question of the boundary between polynomial and NP-hard reasoning problems has not been addressed yet. In this paper we explore this boundary in the "Region Connection Calculus" RCC-8. We extend Bennett's encoding of RCC-8 in modal logic. Based on this encoding, we prove that reasoning is NP-complete in general and identify a maximal tractable subset of the relations in RCC-8 that contains all base relations. Further, we show that for this subset path-consistency is sufficient for deciding consistency. 1 Introduction: When describing a spatial configuration or when reasoning about such a configuration, often it is not possible or desirable to obtain precise, quantitative data. In these cases, qualitative reasoning about spatial configurations may be used. One particular approach in this context has been developed by Randell, Cui, and Cohn [20], the so-called Region Connection Calculus ...

In Proceedings of AAAI-91, 1991. Cited by 106 (9 self).
We present a new approach to developing fast and efficient knowledge representation systems. Previous approaches to the problem of tractable inference have used restricted languages or incomplete inference mechanisms — problems include lack of expressive power, lack of inferential power, and/or lack of a formal characterization of what can and cannot be inferred. To overcome these disadvantages, we introduce a knowledge compilation method. We allow the user to enter statements in a general, unrestricted representation language, which the system compiles into a restricted language that allows for efficient inference. Since an exact translation into a tractable form is often impossible, the system searches for the best approximation of the original information. We will describe how the approximation can be used to speed up inference without giving up correctness or completeness. We illustrate our method by studying the approximation of logical theories by Horn theories. Following the ...

Bulletin of Symbolic Logic, 1995. Cited by 105 (2 self).
This paper of Tseitin is a landmark as the first to give non-trivial lower bounds for propositional proofs; although it pre-dates the first papers on ...

1996.
This chapter is a self-contained survey of recent results about the hardness of approximating NP-hard optimization problems.

1992. Cited by 100 (5 self).
We deal with directed hypergraphs as a tool to model and solve some classes of problems arising in Operations Research and in Computer Science. Concepts such as connectivity, paths and cuts are defined. An extension of the main duality results to a special class of hypergraphs is presented. Algorithms to perform visits of hypergraphs and to find optimal paths are studied in detail. Some applications arising in propositional logic, And-Or graphs, relational databases and transportation analysis are presented. (January 1990; revised October 1992. Dipartimento di Informatica, Università di Pisa, Italy, and Département d'Informatique et de Recherche Opérationnelle, Université de Montréal, Canada.)

Artificial Intelligence, 1996. Cited by 98 (2 self).
We report results from large-scale experiments in satisfiability testing. As has been observed by others, testing the satisfiability of random formulas often appears surprisingly easy. Here we show that by using the right distribution of instances, and appropriate parameter values, it is possible to generate random formulas that are hard, that is, for which satisfiability testing is quite difficult. Our results provide a benchmark for the evaluation of satisfiability-testing procedures. In Artificial Intelligence, 81 (1996) 17-29.

Artificial Intelligence, 1995. Cited by 92 (0 self).
Problems in logic are well-known to be hard to solve in the worst case. Two different strategies for dealing with this aspect are known from the literature: language restriction and theory approximation. In this paper we are concerned with the second strategy. Our main goal is to define a semantically well-founded logic for approximate reasoning, which is justifiable from the intuitive point of view, and to provide fast algorithms for dealing with it even when using expressive languages. We also want our logic to be useful to perform approximate reasoning in different contexts. We define a method for the approximation of decision reasoning problems based on multivalued logics. Our work expands and generalizes in several directions ideas presented by other researchers. The major features of our technique are: 1) approximate answers give semantically clear information about the problem at hand; 2) approximate answers are easier to compute than answers to the original problem; 3) ...
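Several of the entries above compile knowledge into Horn theories precisely because Horn clauses can be tested for satisfiability in polynomial time by unit propagation. The sketch below is an illustrative minimal implementation of that standard procedure, not code from any of the cited papers:

```python
def horn_sat(clauses):
    """Satisfiability of Horn clauses by unit propagation.

    Each clause is (body, head): body is a set of atoms, head is an atom,
    or None when the clause has no positive literal (a goal clause).
    A clause fires once every atom in its body is known to be true.
    """
    true_atoms = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= true_atoms:
                if head is None:
                    return False       # a goal clause fires: unsatisfiable
                if head not in true_atoms:
                    true_atoms.add(head)
                    changed = True
    return True                        # setting exactly true_atoms true satisfies everything

# p; p -> q; q -> r  is satisfiable, but adding "p and r -> false" is not
theory = [(set(), "p"), ({"p"}, "q"), ({"q"}, "r")]
print(horn_sat(theory))                           # -> True
print(horn_sat(theory + [({"p", "r"}, None)]))    # -> False
```

The fixpoint of the propagation is the minimal model of the definite clauses, which is why the procedure is complete for Horn theories.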
Source: http://citeseerx.ist.psu.edu/showciting?cid=10778&sort=cite&start=10 (retrieved 2014-04-19)
wall crossing

This entry is about discontinuities in the parameter dependence of (often asymptotic) solutions of differential equations, and similar phenomena with stability parameters (and their stability slopes) in algebraic geometry, which are often interpreted in physics as crossing the walls of marginal stability. For the notion of the same name in Morse theory see at Cerf wall crossing, and for the (Weyl chamber wall) crossing functors in representation theory see wall crossing functor.

In the study of solitons, one can try a WKB-style approximation to a nonlinear wave equation (cf. eikonal equation, Maslov index, etc.). Stokes observed that when trying to connect the local solutions, one has discontinuities along certain lines, now called Stokes lines. This is called the Stokes phenomenon. Similar issues appear in the study of isomonodromic deformations of nonlinear ODEs in the complex plane, which is also relevant to soliton theory, integrable systems, and special functions like the Painlevé transcendents. This has been studied especially by the Kyoto school (Jimbo, Miwa, Sato, Kashiwara, etc.), including via D-modules and microlocal analysis. The Kyoto school found a connection of isomonodromic theory to the so-called holonomic quantum fields.

The solutions of meromorphic differential equations can be expressed in terms of meromorphic connections. The slopes related to the solutions can then be viewed as features of particular objects in a category of $D$-modules. More generally, slope filtrations are structures which appear in many other additive categories, e.g. in Hodge theory, the theory of Dieudonné modules, and so on. Many of these are related to the stability of the objects, which is important in the construction of moduli spaces. In algebraic geometry, Grothendieck showed how to correctly define and construct some fundamental moduli spaces, like Hilbert schemes and Quot schemes for coherent sheaves.
The work has been continued by David Mumford, who geometrized classical invariant theory into geometric invariant theory. To keep moduli under control, one needs to impose stability conditions on objects and also look at classes with some fixed data: these involve slopes or, equivalently, phase factors. This is thus similar to the phases of the eikonal in the case of the Stokes phenomenon. Cf. also Harder-Narasimhan filtration, Castelnuovo-Mumford regularity (cf. Wikipedia), etc.

In supersymmetric field theory

In super Yang-Mills theory the number of BPS states is locally constant as a function of the parameters of the theory, but it may jump at certain "walls" in the moduli spaces of parameters. The precise behaviour of the BPS states as one crosses these walls is studied as "wall crossing phenomena". Another example is the moduli spaces of Higgs bundles, studied by Carlos Simpson and others, which have special cases with interpretations both in geometry and in gauge theory (instantons). It appears that these can sometimes be linked to the geometric picture. The Riemann-Hilbert correspondence, the spectral transform and similar correspondences again play a major role. One often works at the derived level. An adaptation of the notion of stability to the setup of triangulated categories has been introduced by Bridgeland. Bridgeland stability conditions for the derived categories of (boundary conditions of) D-branes (B-model) are relevant for string theory.

References and links

Conferences and seminars

Introductions and lectures

• Sergio Cecotti, Trieste lectures on wall-crossing invariants (2010) (pdf)

• Greg Moore, PiTP Lectures on BPS states and wall-crossing in $d = 4$, $\mathcal{N} = 2$ theories (pdf)
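As a footnote to the recurring notion of slope above, its simplest classical instance (a standard fact, not specific to this entry) is the slope of a vector bundle $E$ on a smooth projective curve:

```latex
\mu(E) = \frac{\deg(E)}{\operatorname{rank}(E)},
\qquad
E \ \text{semistable} \iff \mu(F) \le \mu(E)
\ \text{for every subbundle}\ 0 \neq F \subsetneq E .
```

Every bundle then admits a unique Harder-Narasimhan filtration whose successive semistable quotients have strictly decreasing slopes; this is the prototype of the slope filtrations and stability conditions mentioned above.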
Source: http://www.ncatlab.org/nlab/show/wall+crossing (retrieved 2014-04-20)
3D, some theory and basic mesh tweaking

this is a series of samples I did for a workshop I gave in Paris on January 12th, 2012 (see the previous article). everything went fine, mostly due to the people that came. thank you all, it was a very pleasant time :)

the backbone of the workshop was geometry, dynamic mesh generation and manipulation. the first part reviewed the basics; starting from the theory – creating and manipulating the Vector3D & Matrix3D – we saw how to build geometry from scratch, how to distort geometry and how to plug it into a soundspectrum. then we saw some "advanced techniques" to compute meshes: the 2D section, linear extrusions, the lathe and finally the loft objects (or path sweeping). I'll address the advanced techniques later; for now we'll focus on the basics.

I tried to remain as API agnostic as possible, yet to have something to show I needed a 3D engine. I chose Away3D, but most of what will be shown here doesn't use any specific feature of Away3D, so consider using ANY opensource engine. here's a list for your convenience:

also, I don't use the dev branch of the engine. it would be a wise choice though, for a king in the north is fiercely fighting to bring the engine to a beta version. it would allow us to use the latest features and cleaner, more stable code. in the zip below, I've embedded a snapshot of the version I used; I'll leave it as an exercise to adapt the sources to the next releases of the engine.

before we start, here's a zip file of the sources. it's a FlashDevelop project but the attendees managed to compile from various platforms / OSes. another thing I did for the workshop is a 3D cheat sheet: a PDF file that graphically highlights some principles and handy methods.

so, hereunder you'll find a series of snapshots; click on them to see a live example. the name underneath indicates the class' location in the src folder, and the source link lets you access the source code directly.
I massively commented the sources in English, mostly because 3 attendees out of 11 were not French (bless them once more). it might not look exactly like the picture, but it's the very same.

the template

all the examples extend a template called BaseScene in the triga package. the original was done by Darcey Lloyd (Darcey) https://github.com/DarceyLloyd his website: http://www.allforthecode.co.uk/ and the original template can be found here: https://github.com/away3d/away3d-core-fp11/issues/152

this is a simple instantiation, nothing spectacular (click the class name).

_template.Template :: source

this is a sample scene with dirtily created meshes, that uses the default lights and materials and a couple of hidden variables to help quickly build a 3D scene. by default there's a HoverController camera (or ArcBall camera); just click & drag to rotate around the model.

_template.TemplateParams :: source

reviewing the basics never hurts and I learnt a lot while doing these examples. here we notice 2 important things: the Vector3D has a 4th parameter, W, and most importantly, it helps us visualize that a Vector is a position AND a direction; an axis. while preparing my workshop, I came across this article, Architecture of Coral vs Vector3D and Matrix3D, where the difference between a position and a direction is made very clear. I also used some of his snippets to perform manual Matrix transforms, very interesting blog.

geometry._theory.A_VectorProperties :: source

so, as we have Vectors, we can do some basic operations.
not many actually, everything boils down to something like this:

add( a:Vector3D ):void { x += a.x; y += a.y; z += a.z; }
subtract( a:Vector3D ):void { x -= a.x; y -= a.y; z -= a.z; }
negate():void { x = -x; y = -y; z = -z; }
scale( s:Number ):void { x *= s; y *= s; z *= s; }
dot( a:Vector3D ):Number { return x*a.x + y*a.y + z*a.z; }
crossProduct( a:Vector3D, b:Vector3D ):Vector3D
{
    return new Vector3D( a.y*b.z - a.z*b.y,
                         a.z*b.x - a.x*b.z,
                         a.x*b.y - a.y*b.x );
}
length():Number { return Math.sqrt( x*x + y*y + z*z ); }
normalize():void { scale( 1 / length() ); }

they're all implemented in the native Vector3D with some variations in the signatures and, apart from negate() (that inverts the direction of the vector), they're all used in the example below.

geometry._theory.B_VectorOperations :: source

now this becomes more interesting: the following samples address the Matrix3D object and how to use it. basically, a Matrix3D is a linear array of values. to "apply" a Matrix, we multiply it with another Matrix3D or with a Vector3D. this is a rather tricky operation to do by hand (see the 3D cheat sheet to understand better).
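Side note: the vector operations above transliterate directly to plain Python, which is handy for sanity checks outside any engine (the function names here are mine, not the Vector3D API):

```python
import math

# plain tuples stand in for Vector3D; these mirror add/subtract/negate/scale,
# dot, crossProduct, length and normalize from the snippet above
def add(a, b):       return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def subtract(a, b):  return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def negate(a):       return (-a[0], -a[1], -a[2])
def scale(a, s):     return (a[0] * s, a[1] * s, a[2] * s)
def dot(a, b):       return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def length(a):     return math.sqrt(dot(a, a))
def normalize(a):  return scale(a, 1 / length(a))

print(cross((1, 0, 0), (0, 1, 0)))   # X axis cross Y axis -> the Z axis (0, 0, 1)
print(length(normalize((3, 4, 0))))  # a normalized vector has length 1.0
```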
a class called C_MatrixMultiply in _geometry/_theory develops the Matrix – Matrix multiplication and it looks like this:

private function multiply( a:Matrix3D, b:Matrix3D ):Matrix3D
{
    var a0:Number = a.rawData[ 0 ];   var a1:Number = a.rawData[ 1 ];
    var a2:Number = a.rawData[ 2 ];   var a3:Number = a.rawData[ 3 ];
    var a4:Number = a.rawData[ 4 ];   var a5:Number = a.rawData[ 5 ];
    var a6:Number = a.rawData[ 6 ];   var a7:Number = a.rawData[ 7 ];
    var a8:Number = a.rawData[ 8 ];   var a9:Number = a.rawData[ 9 ];
    var a10:Number = a.rawData[ 10 ]; var a11:Number = a.rawData[ 11 ];
    var a12:Number = a.rawData[ 12 ]; var a13:Number = a.rawData[ 13 ];
    var a14:Number = a.rawData[ 14 ]; var a15:Number = a.rawData[ 15 ];

    var b0:Number = b.rawData[ 0 ];   var b1:Number = b.rawData[ 1 ];
    var b2:Number = b.rawData[ 2 ];   var b3:Number = b.rawData[ 3 ];
    var b4:Number = b.rawData[ 4 ];   var b5:Number = b.rawData[ 5 ];
    var b6:Number = b.rawData[ 6 ];   var b7:Number = b.rawData[ 7 ];
    var b8:Number = b.rawData[ 8 ];   var b9:Number = b.rawData[ 9 ];
    var b10:Number = b.rawData[ 10 ]; var b11:Number = b.rawData[ 11 ];
    var b12:Number = b.rawData[ 12 ]; var b13:Number = b.rawData[ 13 ];
    var b14:Number = b.rawData[ 14 ]; var b15:Number = b.rawData[ 15 ];

    var c:Matrix3D = new Matrix3D();
    c.rawData = Vector.<Number>([
        a0*b0 + a1*b4 + a2*b8  + a3*b12,   a0*b1 + a1*b5 + a2*b9  + a3*b13,
        a0*b2 + a1*b6 + a2*b10 + a3*b14,   a0*b3 + a1*b7 + a2*b11 + a3*b15,
        a4*b0 + a5*b4 + a6*b8  + a7*b12,   a4*b1 + a5*b5 + a6*b9  + a7*b13,
        a4*b2 + a5*b6 + a6*b10 + a7*b14,   a4*b3 + a5*b7 + a6*b11 + a7*b15,
        a8*b0 + a9*b4 + a10*b8  + a11*b12, a8*b1 + a9*b5 + a10*b9  + a11*b13,
        a8*b2 + a9*b6 + a10*b10 + a11*b14, a8*b3 + a9*b7 + a10*b11 + a11*b15,
        a12*b0 + a13*b4 + a14*b8  + a15*b12, a12*b1 + a13*b5 + a14*b9  + a15*b13,
        a12*b2 + a13*b6 + a14*b10 + a15*b14, a12*b3 + a13*b7 + a14*b11 + a15*b15
    ]);
    return c;
}

indeed not something you'd really go for manually :) most of the time we need to multiply a Vector3D by a Matrix, which boils down to:

// this will be our output position
var v:Vector3D = new Vector3D();
// we grab the position of the 3d object
var p:Vector3D = manualTransform.position;
// then we grab the raw data of the matrix3D
var raw:Vector.<Number> = mat.rawData;
// and perform a Matrix multiplication between the P vector and the transform matrix
v.x = raw[0]*p.x + raw[4]*p.y + raw[8]*p.z  + raw[12]*p.w;
v.y = raw[1]*p.x + raw[5]*p.y + raw[9]*p.z  + raw[13]*p.w;
v.z = raw[2]*p.x + raw[6]*p.y + raw[10]*p.z + raw[14]*p.w;
v.w = raw[3]*p.x + raw[7]*p.y + raw[11]*p.z + raw[15]*p.w;
// TADA! V is the result of our matrix transform.

this method is wrapped natively in the Matrix3D.transformVector() method. here's an example:

geometry._theory.D_MatrixTransform :: source

as Matrix multiplication is not commutative, this one emphasizes the importance of the order in which we perform the operations. it briefly illustrates the append / prepend difference.

geometry._theory.E_AppendPrepend :: source

this shows how to perform a translation and a scaling by hand, by entering the ratios directly into the Matrix.

geometry._theory.F_PositionScale :: source

the matrix rotation is one of the most useful and complex operations (as compared to scaling and translation). this explains the logic behind it and shows a rotation done by hand and by using the Matrix3D built-in methods.

geometry._theory.G_MatrixRotation :: source

this shows how to rotate an object on its local axes to combine rotations. the object rotates on the white disc, targets (pointAt) the blue object and also rotates locally on one of its axes. it could help build some Forward or Inverse Kinematics systems.

geometry._theory.H_LocalRotation :: source

this is a simple comparison of a Quaternion and a Matrix3D; it helps understand that they're very close to each other.
geometry._theory.I_QuaternionTest :: source

this is a Spherical interpolation (SLERP for short) done both with a Quaternion and a Matrix3D. note that the matrix transform (blue sphere) is not handling the angles the same way as the Quaternion (red tick) does.

geometry._theory.J_QuaternionSlerp :: source

this is a mouse unprojection: it's a technique that allows us to create a 3D line from the "eye" (camera) to the "finger" (mouse position). even if there must be a better way, it's very useful for picking or shooting. this example places a mesh at the mouse location when you hit a key.

geometry._theory.K_UnprojectMouse :: source

in this snippet I use the "eye-finger" 3D line and I test it against a sphere. if the line hits the sphere, I use the ends of the segment to reset the Quaternions and the Matrices and make a SLERP between 2 locations.

geometry._theory.L_Quaternion2Vectors :: source

the goal of the workshop was indeed to generate and manipulate geometry, so let's dive into it :) by default all the engines offer some primitive models; here's a list of the ones Away3D offers.

geometry.basics.A_Primitives :: source

now the nifty part: our first custom geometry built from scratch \o/ again, check the 3D cheat sheet if you're not at ease with the concept of VertexBuffer / IndexBuffer. it's a single sided plane in 3D.

geometry.basics.B_CustomGeometry :: source

now that we know how to build a plane and how to unproject a 2D point, we can combine them to create the view plane in the 3D space. hit any key to create the view plane (what the camera sees) in the 3D space, then move around.

geometry.basics.C_UnprojectPlane :: source

this simply shows how to build a cube in a more compact way.

geometry.basics.D_CustomGeometry :: source

an interesting possibility when we can access the vertex coordinates is that we can alter them and then update the mesh. in this case, the longer you press the mouse, the more offset the vertices will get.
when you release, they go back to their origin.

geometry.basics.E_Deformation :: source

this simply shows how to load a mesh.

geometry.basics.F_LoadMesh :: source

as long as we have access to the vertex data, we can deform any mesh; in this case, the mesh we've loaded previously.

geometry.basics.G_Deformation :: source

and finally, a popular way of distorting meshes is the SoundSpectrum, so here's one. I can't find where the sound comes from, so if anyone has a clue and I'm breaking some copyrights, please let me know. I'm extremely proud of this one; it has to be the most expressive Spectrum I've done in my life. or is it just the music that makes it so cool? I don't know, it almost seems alive :D oh, and it uses a convex hull to close the mesh.

geometry.basics.H_SoundSPectrum :: source

ok, so this is a bit off topic: one of the first things I did when I started playing with 3D is Curves. it's both useful and beautiful. off topic, because it uses a simplified version of the Loft class playfully called Spline, which we'll study in the next article. here's a simple example where a curve is built from the vertices of a mesh then smoothed with a Cubic method.

geometry.curves.A_SimpleSpline :: source

and this is a comparison between the 3 types I've implemented: Cubic, CatmullRom and Cardinal. I had already addressed the topic in 2D here. a good thing with these curves is that they're N dimensional; understand that if you find out how to render them, you could build 18th dimensional curves.

geometry.curves.B_SplineTypes :: source

finally, this is a combination of mesh generation and curves: I use a smoothing curve to create some vertices inside each face of a mesh then rebuild the indices. a detailed how-to is located at the bottom right of the 3D cheat sheet.

geometry.curves.C_subdivisions :: source

and here's a series of tests on materials, more info here.
( comments in French ) materials.A_Colormaterial :: source materials.B_DefaultTextures :: source materials.C_Bitmapmaterial :: source materials.D_GradientMaterials :: source materials.E_PatineTextures :: source materials.F_Skybox :: source materials.G_ViewHelperTest :: source ok, so that was it for the first part. I hope you’ll find it useful. enjoy :)
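The quaternion SLERP behind the demos above fits in a few lines. The original snippets are ActionScript/Away3D; this is a minimal Python transcription (function and variable names are mine, not from the workshop source):

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                      # flip one input to take the shorter arc
        q1 = [-c for c in q1]
        dot = -dot
    if dot > 0.9995:                   # nearly parallel: plain lerp + renormalize
        out = [a + t * (b - a) for a, b in zip(q0, q1)]
        norm = math.sqrt(sum(c * c for c in out))
        return [c / norm for c in out]
    theta = math.acos(dot)             # angle between the two quaternions
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(q0, q1)]

# halfway between the identity and a 90° rotation about x is a 45° rotation
identity = [1.0, 0.0, 0.0, 0.0]
quarter = [math.cos(math.pi / 4), math.sin(math.pi / 4), 0.0, 0.0]
halfway = slerp(identity, quarter, 0.5)
```

Interpolating the corresponding rotation matrices component-wise would not trace the same arc, which is the sort of discrepancy between the matrix transform (blue sphere) and the Quaternion (red tick) mentioned above.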
linear combination.. help please August 27th 2008, 06:22 AM #1 Aug 2008 Is vector u = [3 3 2] a linear combination of vector v1 = [1 0 1] and vector v2 = [1 1 1]? could you please show me how to solve this problem? thank you very much. This question can be answered using the determinant of the three vectors: $u$ is a linear combination of $v_1$ and $v_2$ iff the determinant $D=\left|\begin{smallmatrix} 3&1&1\\3&0&1\\2&1&1\end{smallmatrix}\right|$ equals 0. Do we have $D=0$? Another way of answering this question is solving $u=av_1+bv_2$ for $a,b\in\mathbb{R}$. If this equation has at least one solution then $u$ is a linear combination of $v_1$ and $v_2$. If it has no solution then... In order for vector u = [3 3 2] to be a linear combination of vector v1 = [1 0 1] and vector v2 = [1 1 1], it would be necessary that u = 3[1,1,1] + t[1,0,1]. That is the only way to have 3 in the second position of u. But that means that 3+t=3 and 3+t=2. But that is impossible. August 27th 2008, 06:45 AM #2 August 27th 2008, 07:16 AM #3
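Both arguments in the thread can be checked numerically. A plain-Python sketch (the helper `det3` is mine, just cofactor expansion along the first row):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Columns are u, v1, v2: u is a combination of v1 and v2 iff D == 0.
D = det3([[3, 1, 1],
          [3, 0, 1],
          [2, 1, 1]])
print(D)  # -1, nonzero, so u is NOT a linear combination of v1 and v2

# Direct argument: u = a*v1 + b*v2 forces b = 3 (second component,
# where v1 contributes nothing); the first component then gives a + b = 3,
# so a = 0, but the third component requires a + b = 2 -- a contradiction.
b = 3
a = 3 - b
print(a + b)  # 3, not 2, so the system has no solution
```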
Math Forum Discussions - College Algebra standards Date: Mar 15, 1995 8:28 AM Author: Jack Rotman Subject: College Algebra standards > Butch Sloan said: > >I am aware of the NCTM standards for K-12 mathematics, but I have had a > >request for what the reform movement is suggesting for the standard > >college algebra curriculum at the junior college or university level. > >Can anyone point me towards a source for what's being advocated in this area? > > > >If this is not of interest to the list, please respond to me directly. > > > >Thanks in advance, > >+========================================================================+ > >|| Butch Sloan <bs@tenet.edu> || (i * Pi) || > >|| Mathematics (6-12) Coordinator || e + 1 = 0 || > >|| School Improvement Department || "The five most important || > >|| Garland ISD (Texas - USA) || numbers in mathematics" || > >+========================================================================+ > > Actually, there is a complete "Standards" document for the introductory levels of math at colleges/university. The American Mathematical Association of Two-Year Colleges (AMATYC) is the professional group representing math instructors at community colleges and the entry level at universities. Through a long-term project, we have been writing a Standards document for mathematics at this level. The "Standards for Introductory College Mathematics" (SICM) is in its final form at the present time. The document is being considered for statements of support from many organizations, and will be adopted officially by AMATYC at our fall conference this November in Little For more information, I'd suggest you contact your nearest community college. Many colleges have had faculty involved in the process. If they don't have the information, they can get in touch with their state AMATYC affiliate. (Such as TexMATYC in Texas.) For high school teachers who can make a conference trip, you would probably find the AMATYC conference interesting. 
(We in AMATYC often go to NCTM meetings for parallel reasons.) If you want more information on the conference, contact the AMATYC office: The 1995 conference revolves around implementing the Standards, so you'd see quite a few presentations at the college algebra level and below. I hope this helps! <<<<<<<<<<<<<<<<<<<< from >>>>>>>>>>>>>>>>>>>> Jack Rotman phone (517)483-1079 Math Professor Lansing Community College Lansing, MI internet: ROTMAN@ALPHA.LANSING.CC.MI.US "Like all art & science, mathematics surrounds us." <<<<<<<<<<<<<<<< Math Success ! >>>>>>>>>>>>>>>>>
Mathematical Techniques for Computer Science Applications
Monday 5:00-7:00. Warren Weaver Hall room 201. Professor Ernest Davis

Reaching Me
• Email:
• phone: (212) 998-3123
• office: 329 Warren Weaver Hall
Office hours: Tue 10:00-12:00, Wed 3:00-4:00

Optional problem session
A problem/review session will meet Thursdays 5-6, WWH 517. The first meeting of the problem session will be Thursday, September 13; it will not meet on September 6.

Ernest Davis, Linear Algebra and Probability for Computer Science Applications, CRC Press, 2012.
Amos Gilat, MATLAB: An Introduction with Applications, Wiley, 2008. Any edition is OK. Inexpensive used copies are available online.
Online documentation for MATLAB: Getting Started with MATLAB
Matlab clones: GNU Octave. Either of these will suffice for this course.

Class email list
You should be automatically subscribed to the class email list. If not, go to this link, and subscribe manually.

The grader for the course will be Chaitanya Rudra, cr1512@nyu.edu. All programming assignments and all exercises submitted electronically should be emailed to him.

This course gives an introduction to theory, computational techniques, and applications of linear algebra, probability and statistics. These three areas of continuous mathematics are critical in many parts of computer science, including machine learning, scientific computing, computer vision, computational biology, computational finance, natural language processing, and computer graphics. The course will teach a specialized language for mathematical computation, such as MATLAB, and will discuss how the language can be used for computation and for graphical output. No prior knowledge of linear algebra, probability, or statistics is assumed.

Programming assignments (50% of the grade). Biweekly exercises (10% of the grade). Final exam, Monday December 17 (40% of the grade).

Exercises 1: not to hand in.
Programming Assignment 1: due Sept. 24.
Problem Set 2: due Oct. 8.
Programming Assignment 2: due Oct. 8.
Problem Set 3: due Oct. 29.
Programming Assignment 3: due Oct. 29. Sample Output for Programming Assignment 3.
Problem Set 4: due Nov. 19.
Programming Assignment 4: due Nov. 19. More Results for Programming Assignment 4.
Problem Set 5: due Dec. 3.
Programming Assignment 5: due Dec. 3.

Final Exam
The final exam will be held Monday Dec. 17 during the regular class hour. It will be closed book and closed notes. Here is a list of topics. A sample exam is on the course Blackboard site.

Part I. Introduction
Week 1.A: Introduction to MATLAB. Basic programming language features. Davis, Chap. 1.
Part II. Linear Algebra
Week 1.B: Vectors. Basic operations. Dot product. Vectors in MATLAB. Plotting in MATLAB. Davis, Chap. 2.
Week 2: Matrices. Definition, fundamental properties, basic operations. Linear transformations. Davis, Chap. 3.
Week 3: Abstract linear algebra: linear independence, basis, rank, orthogonality, subspaces, null space. Davis, Section 4.1.
Week 4: Solving linear equations using Gaussian elimination. Davis, Chap. 5.
Weeks 5+6: Geometric applications. Davis, Chap. 6.
Week 7: Change of basis and singular value decomposition. Davis, Chap. 7.
Part III. Probability
Week 8: Introduction. Independence. Bayes's Law. Discrete random variables. Davis, Chap. 8.
Weeks 9+10: Numerical random variables. Expected value and variance. Discrete and continuous distributions. Davis, Chap. 9.
Week 11: Markov models. Davis, Chap. 10.
Week 12: Information theory and entropy. Davis, Chap. 13.
Week 13: Confidence intervals. Monte Carlo methods. Davis, Chaps. 11+12.

You may discuss any of the assignments with your classmates (or anyone else) but all work for all assignments must be entirely your own. Any sharing or copying of assignments will be considered cheating. By the rules of the Graduate School of Arts and Science, I am required to report any incidents of cheating to the department.
My policy is that any incident of cheating will result in the student getting a grade of F for the course. The second incident, by GSAS rules, will result in expulsion from the University.
Bellerose Village, NY Math Tutor Find a Bellerose Village, NY Math Tutor ...I request 24 hours' notice for cancellation. Also, it's worth noting that my teaching approaches vary according to the needs of the student. For the student suffering from test-taking anxiety, I will be particularly patient and encouraging. 17 Subjects: including calculus, GRE, SAT writing, Regents ...Do you have a student who has difficulties with writing, such as generating or getting ideas onto paper, organizing writing, and grammatical problems? Do you have a student whose life you would like to enrich with piano lessons? If you answered yes to any of these questions, I can help. 30 Subjects: including prealgebra, geometry, reading, statistics ...I graduated from a NYC specialized high school and I am currently studying at New York University, majoring in Mathematics Secondary Education. I have been a volunteer math tutor for the last 5 years, and have grown to work quickly and effectively on any mathematics subject. I am exceptionally patient and understanding of all students' needs. 19 Subjects: including trigonometry, algebra 1, algebra 2, biology ...While the other teachers regularly could not control this young man, I found him quite pleasant, and was able to motivate him to do work by using creative approaches (teaching math via computer games, teaching reading comprehension through video game magazine articles). My patience, affability, a... 50 Subjects: including algebra 2, elementary (k-6th), music history, religion ...It is my responsibility to make sure that the student improves not only in their grades, but in their understanding of mathematics as well. For this reason, I do not mind traveling to a location that the students would feel the most comfortable at so they can have a clear mind to learn. I look fo...
10 Subjects: including algebra 1, algebra 2, calculus, geometry
What is Homotopy Type Theory Good For?

Posted by Urs Schreiber

The current situation of homotopy type theory reminds me a bit of the dot-com bubble at the turn of the millennium. Back then a technology had appeared which was as powerful as it was new: while everybody had a sure feeling that the technology would have dramatically valuable impact, because it was so new nobody had an actual idea of what that would be. As opposed to other bubbles, that one did not burst because overly optimistic hopes had been unjustified as such, but because it took a while to understand just how these hopes would be materialized in detail (for instance that today I would send a message as this here from a café via my webbook from my dropbox account).

With homotopy type theory the situation currently seems to be similar to me. On the one hand it is clear that some dramatic breakthrough right at the heart of mathematics has occurred. One hears the sound of something big happening. But: what is the impact? It feels like after 1995 – when it was clear that the internet is going to be something big – but still before, say, 2003, when we started getting a good idea of how it changes our lives. How will homotopy type theory change our lives?

Currently most research in homotopy type theory revolves around the fine-tuning of the formulation itself and completing the understanding of its relation to traditional homotopy theory. That’s necessary and good. (It’s great, I am enthusiastic about it!) But if the excitement about HoTT is not to be an illusion, then something will follow after that. The traditional homotopy theorist currently may complain (and some do) that much of what is happening is that facts already known are being re-formulated in a new language, not always yet to an effect a homotopy theorist would find compelling. So I am wondering: how will the traditional homotopy theorist eventually benefit from homotopy type theory? How will the researcher who uses homotopy theory for something else benefit?
I am asking for personal reasons, too. Since, somewhat inadvertently, I have been investing some of my time into learning about it, I am naturally wondering: how will that time investment pay off for me? What does homotopy type theory do for my research? I am not sure yet. But I have some first ideas. One of these I want to share here.

An example

My research, you may have noticed, is motivated from understanding basic structural phenomena in theoretical physics as incarnations of natural mathematical structures. What I will try to indicate in the following is a certain kind of problem that poses itself in the context of string theory, which – I think it is fair to say – was generally regarded to be among the more subtle problems in a field rich in subtle mathematical effects, and how it finds an elegant and simple solution once you regard it from the perspective of homotopy type theory.

What I say in the following I have said in different words before, together with my coauthors Domenico Fiorenza and Hisham Sati: in section 2 of an article titled The E8-moduli 3-stack of the C-field in M-theory. There we point out that the solution which we propose and study in the article, to some problem in string theory, can naturally be understood simply by reformulating a well-known equation – known as the flux quantization condition – first as a fiber product of sets of certain field configurations and then refining that to a homotopy fiber product of moduli ∞-stacks of certain field configurations. Here I will just observe that if you come to this from homotopy type theory, then the solution looks even more elegant than this: one arrives there simply by taking verbatim the symbols denoting the solution set to the equation, but now interpreting these not in the ordinary logic of sets, but in the homotopy logic of homotopy types. It is then homotopy type theory which automatically produces the correct answer, the “$E_8$-moduli 3-stack of the supergravity C-field in M-theory”.
A solution that looks subtle to the eye of classical logic becomes self-evident from the point of view of homotopy logic / homotopy type theory. From these remarks everybody with just basic training in category theory and homotopy theory can already deduce what I will say below. And what I say next is not hard to see, once you see it. It is one of those cases where a simple change of perspective leads with great ease to a solution of what seemed to be a difficult technical problem. Nevertheless, or because of this, I thought I’d say this explicitly.

Formulation in ordinary logic

The situation studied in that article concerns a hypothetical physical system in which on spacetime $X$ three different species of fields propagate:

1. the field of gravity,
2. a gauge field for the gauge group E8, and
3. a higher gauge field called (part of) the supergravity C-field.

It is not important for the following what exactly these fields are and why. Important are the following two aspects only.

1. Since all of them are gauge fields, there is no naive notion of equality between different field configurations. Instead, there is a sensible notion of equivalence of field configurations. (In physics this is called gauge equivalence.) Moreover, since these are higher gauge fields, there is no naive notion of equality even between the gauge equivalences themselves. Instead there are higher order equivalences between them. (Physicists describe this state of affairs by saying that there are higher order ghosts.)

2. In the problem under consideration in the above article, there is a constraint equation that is required to be satisfied by the gauge equivalence classes of the three fields, the “flux quantization condition”.

The question is then: what is the right collection of field configurations that satisfy the constraint equation? What are the gauge equivalences between these? What is the mathematical model for the supergravity $C$-field? Let’s formulate this a bit more in symbols.
Naively, we would say that there is a set of configurations of the field of gravity. I’ll write that set “$[X, \mathbf{B}Spin_{conn}]$”, but this is just notation which you need not care about for the following, if you don’t want to. (If you do, see the above article for details!) Similarly there is a set $[X, \mathbf{B}E_8]$ of configurations of the gauge field. And a set, to be denoted $[X, \mathbf{B}^3 U(1)_{conn}]$, of configurations of (part of) the $C$-field. So we’d write

$\begin{aligned} \phi_{gr} & \in [X, \mathbf{B}Spin_{conn}] \\ \phi_{ga} & \in [X, \mathbf{B}E_8] \\ \phi_{hg} & \in [X, \mathbf{B}^3 U(1)_{conn}] \end{aligned}$

to denote elements of these sets, representing configurations of each of these fields. Now, each of these fields induces yet another field, much like the field of, say, electrons induces a magnetic field. We have functions that send the above field configurations to configurations of that induced field. These functions go by the following names (but again, these are just names, here we only need that there are three such functions):

$\begin{aligned} \frac{1}{2}p_1 & : [X, \mathbf{B}Spin_{conn}] \to [X, \mathbf{B}^3 U(1)] \\ 2 a & : [X, \mathbf{B}E_8] \to [X, \mathbf{B}^3 U(1)] \\ 2 G_4 & : [X, \mathbf{B}^3 U(1)_{conn}] \to [X, \mathbf{B}^3 U(1)] \end{aligned} \,.$

In terms of this notation, that constraint equation to be satisfied by the three types of fields which I mentioned, the “flux quantization condition”, says that

$\frac{1}{2}p_1(\phi_{gr}) + 2 a(\phi_{ga}) = 2 G_4(\phi_{hg}) \,.$

This is just some equation in the set $[X, \mathbf{B}^3 U(1)]$ (which happens to come equipped with the structure of an abelian group, with respect to which the addition in the above equation is formed); for the present discussion it is not important what this means, as long as you can imagine that it may happen that three gauge fields are related by some such equation.
Given all this now, one might naïvely think that the collection of fields that satisfy the flux quantization condition is the set that in traditional ZFC logic one denotes by the right hand side of

$CField(X) := \left\{ \phi_{gr}, \phi_{ga}, \phi_{hg} | \frac{1}{2}p_1(\phi_{gr}) + 2 a(\phi_{ga}) = 2 G_4(\phi_{hg}) \right\} \,.$

However, this answer turns out to be physically wrong. There are some evident deficiencies: this answer does not resolve the gauge transformations between fields and is hence unsuited for describing the actual quantum physics of the problem. But it is worse than that. In the correct answer there is yet one more field on $X$.

Formulation in $\infty$-logic / homotopy type theory

Where is that extra field supposed to come from if we are imposing a constraint equation, thus seemingly reducing the degrees of freedom? The answer is of course in the notion of homotopy or gauge transformation, which ordinary logic ignores. But this is precisely what homotopy type theory corrects. The same symbolic logical expressions are interpreted by homotopy type theory in a way that makes them correct for higher gauge theory. Automatically.

In that article we discuss how those “sets” of field configurations, $[X, \mathbf{B}Spin_{conn}]$ etc., are in fact objects not of $Set$, but of some (∞,1)-topos: they are smooth moduli ∞-stacks. In the language of homotopy type theory the statement that there is a field configuration of, say, gravity $\phi_{gr} \in [X, \mathbf{B}Spin_{conn}]$ becomes the statement that $\phi_{gr}$ is a term of type $[X, \mathbf{B}Spin_{conn}]$. Of course that’s just terminology. Now comes the key point: we form now the solution set to the flux quantization condition as we did before. But now we do so in homotopy type theory.
This means we use precisely the same logical symbols as before, which I repeat for emphasis on the right of

$\mathbf{CField}(X) \coloneqq \left\{ \phi_{gr} , \phi_{ga} , \phi_{hg} | \frac{1}{2}p_1(\phi_{gr}) + 2 a(\phi_{ga}) = 2 G_4(\phi_{hg}) \right\} \,,$

where now the boldface on the left is to indicate that we take this expression no longer to evaluate to a set (a subset of $[X, \mathbf{B}Spin_{conn}] \times [X, \mathbf{B}E_8] \times [X, \mathbf{B}^3 U(1)_{conn}]$), but now to a homotopy type. In fact, thus, to a smooth ∞-stack. Taken apart, the above notation in homotopy type theory means

1. the dependent sum $\{ \cdot \; | \; \cdots \}$
2. over the product type $[X, \mathbf{B}Spin_{conn}] \times [X, \mathbf{B}E_8] \times [X, \mathbf{B}^3 U(1)_{conn}]$
3. of (and that’s the key) the dependent identity type $Id_{[X, \mathbf{B}^3 U(1)]}$.

Here is where homotopy type theory automatically does some work for us that is quite non-trivial from the classical point of view: this type $\mathbf{CField}(X)$ automagically comes out as the homotopy pullback of $(\frac{1}{2} p_1 + 2 a)$ along $2 G_4$, the homotopy-universal way of completing the following diagram:

$\array{ \mathbf{CField}(X) &\stackrel{}{\to}& [X, \mathbf{B}^3 U(1)_{conn}] \\ \downarrow &\swArrow_{\simeq}& \downarrow^{\mathrlap{2 \mathbf{G}_4}} \\ [X, \mathbf{B}Spin_{conn} \times \mathbf{B}E_8] &\underset{\frac{1}{2}\mathbf{p}_1 + 2 \mathbf{a}}{\to}& [X, \mathbf{B}^3 U(1)] }$

of smooth $\infty$-stacks, up to that gauge transformation indicated as the double arrow filling the diagram. That this is so, hence that in type theory with identity types dependent sums over identity types express in fact homotopy pullbacks, is not entirely obvious, a priori, and is one of the hallmarks of what makes homotopy type theory interesting. (For technical details see here.)
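The pattern "dependent sum over an identity type" can already be tried out at the set level in a proof assistant. Here is a minimal Lean 4 sketch (the names are mine; at the set level, where the equality is a mere proposition, this yields the ordinary fiber product, and the HoTT reading of the very same expression is what upgrades it to the homotopy pullback):

```lean
-- The Σ-type over an equality: pairs (a, b) together with a proof
-- (in HoTT: a path) that f a = g b.
def Pullback {A B C : Type} (f : A → C) (g : B → C) : Type :=
  { p : A × B // f p.1 = g p.2 }

-- The two projections making the pullback square commute:
def pr₁ {A B C : Type} {f : A → C} {g : B → C} (x : Pullback f g) : A :=
  x.val.1

def pr₂ {A B C : Type} {f : A → C} {g : B → C} (x : Pullback f g) : B :=
  x.val.2
```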
In our example, the claim is that this homotopy type $\mathbf{CField}(X)$ is the correct “type of $C$-field configurations”, being the answer to a subtle question in string theory. So in conclusion, amplifying the argument of our section 2 just a bit more with an emphasis on the homotopy-logic we find: formulating a higher gauge theoretic problem as one would naively, but then reading the result in the logic of homotopy type theory, automatically takes care of otherwise subtle phenomena.

What do we learn from the existence of homotopy type theory?

There is the explicit lesson, which drives the whole interest, it says: with just a tiny little adjustment (allowing for identity types), traditional logic / type theory is a language that captures homotopy theory. But inside this there might be another lesson of potentially more interest in common practice. That says: not only is there some formal language to capture homotopy theory. No, moreover: it’s a natural language, potentially more natural than the language you have been using so far, and speaking this language may help to make more transparent phenomena in homotopy theory that are less transparent otherwise. Or so it feels.

I would like to see this materialize in more detail. Therefore I close with a question: Can you come up with more examples where thinking about situations in homotopy theory / in higher topos theory concretely profits from reformulating them from a homotopy-logical perspective? Some construction of natural interest involving various homotopies, homotopy pullbacks, homotopy pushouts, etc where you can step back and say: look, with homotopy type theory we can equivalently think of this as expressing the following logical formula, and this is much simpler than the original formulation? Many of the basic definitions of Voevodsky’s in the foundation files of homotopy type theory are of the form that I am after here.
For instance that the geometric procedure of giving a contracting homotopy of an $\infty$-stack $X$ is equivalent to the homotopy-logical procedure of proving $\exists_{x \in X} \forall_{y \in X} (x = y)$ (see at contractible type for details) might be taken as an example.

Or consider the notion of free loop space objects (“derived loop spaces”) $\mathcal{L} X$ of an $\infty$-stack $X$. Often these are heuristically motivated as formalizing the idea that a point in them is given by “making two points in $X$ equal in two different ways” (such as to yield a loop). In terms of homotopy type theory, this heuristics becomes a theorem, which reads:

$\mathcal{L}X = \left\{ x,y : X \; | \; (x = y) \, and \, (x = y) \right\} \,.$

These first simple examples seem to suggest that there is a whole universe of similar, but more interesting (even more interesting, if you wish), homotopy-logical reformulations of familiar phenomena in homotopy theory… and hopefully eventually of unfamiliar and previously unknown phenomena. Eventually I want to collect such examples at a page like HoTT methods for homotopy theorists.

Posted at May 10, 2012 12:33 AM UTC

Re: What is homotopy type theory good for?

This is very nice! It makes lots of sense why the ZFC-answer is wrong; although I don’t know exactly what it means to “resolve” gauge transformations, I can guess that it has something to do with invariance under gauge transformations, which that set is manifestly not. But can you say anything, at this level of precision, about why the homotopy-motivated answer is physically “correct” (or what that even means)?

I think this sort of thing also supports my argument that identity/path types in homotopy type theory should be denoted simply by an equals sign “$(x=y)$” as you are doing here, rather than something fancier like $Id(x,y)$ or $Paths(x,y)$ or $(x\rightsquigarrow y)$.
It may take a little bit of getting used to, but any time you wanted to write “$=$” when talking about non-0-truncated types you almost certainly really meant to talk about paths/equivalences anyway, so why proliferate notations unnecessarily?

Posted by: Mike Shulman on May 10, 2012 4:12 AM

Re: What is homotopy type theory good for?

I don’t know exactly what it means to “resolve” gauge transformations,

I should probably try to come up with a better way to phrase this. It is just supposed to mean that forming the intersection at the level of sets clearly cannot tell you about the higher homotopy type.

Let’s see, maybe you can help me with an idea for how to say this at a chatty level such that it is still useful: I simply want to highlight at that point in a non-technical way that when you have a cospan of higher groupoids and look at the pullback of just the underlying sets of 0-cells, then qualitatively there are two things that go wrong: not only is the resulting set not invariantly meaningful without looking at the higher cells, too, but it is also “too small” in that its elements are equalities between pairs of points in the tip of the cospan, instead of equivalences there.

But can you say anything, at this level of precision, about why the homotopy-motivated answer is physically “correct” (or what that even means)?

The way this works is a kind of reverse-engineering from available data: one is looking for a sensible mathematical structure such that several truncations or limits of it match certain insights that had been obtained before.

To briefly start with a historical analogy: how do we know that our mathematical model of the electromagnetic field is physically correct? We collect the required properties in various areas where it appears. First, in the classical limit it exerts forces whose force vectors arrange into a closed differential 2-form. So the electromagnetic field mathematically must be something involving a closed 2-form.
Next, 70 years later it is discovered that it also must be a rule that consistently assigns values in the circle group to curves in spacetime. After a little fiddling one finds that there is a mathematical structure naturally unifying these two aspects which is the thing that in the above notation has $\mathbf{B}U(1)_{conn}$ as its moduli stack. So this is a “physically correct” mathematical structure. Finally, one sees that it is also “more physically correct” than the parts that it unifies, because it turns out to deal with further relevant physical phenomena (flux quantization etc.).

For the supergravity C-field there were similarly partial constraints known: first, it was known that it involves a closed differential 3-form. Then that it must be a rule to assign values in $U(1)$ consistently to 3-volumes in space. By comparison with the previous case that seemed to indicate that it is the thing classified by $\mathbf{B}^3 U(1)_{conn}$. But then it turned out that there were more datapoints: a certain piece of the “quantum corrected action functional” for the C-field only made sense as a function on just gauge equivalence classes if that “flux quantization condition” holds. And finally it was known that when restricted to a boundary of spacetime the whole structure had to naturally transmute into a certain other structure.

It’s a bit like with the proverbial Elephant that Johnstone appeals to, only that this Elephant is not topos theory but, hm, M-theory or the like: there are facets that are known and one checks how the next puzzle piece would fit in. So concerning the C-field the task was: can you write down a natural mathematical structure that looks essentially like $\mathbf{B}^3 U(1)_{conn}$, but with a little twist built in such that some constraint equation holds and such that relative cohomology with coefficients in this beast (meaning: restriction to boundaries) has certain properties.
Earlier there has been the observation that one can manufacture by hand a certain discrete 1-groupoid approximation to what I am calling $\mathbf{CField}(X)$ that has most of these desired properties, except that, being a discrete 1-groupoid, it still must have been some truncation and restriction of the full moduli $\infty$-stack.

So then the construction that we consider, the one that homotopy type theory produces from feeding the available datapoints into it, does produce something that is a smooth moduli $\infty$-stack, such that restricted to all these known special cases and limits it reproduces what it has to reproduce, and in this sense is “physically correct”. And moreover it is “more physically correct” than the previous constructions, which did not give full moduli 3-stacks, because of yet one more datapoint: for quantization of gauge theories we do need these higher stacks, or at least their infinitesimal approximation (known as “BRST complexes”).

Posted by: Urs Schreiber on May 10, 2012 10:15 AM

Re: What is homotopy type theory good for?

So a pretty simple way to illuminate

when you have a cospan of higher groupoids and look at the pullback of just the underlying sets of 0-cells, then qualitatively there are two things that go wrong,

would be by looking at the pullback of one mapping of a point into a circle along another. Done wrongly you find either a point or the empty set. Done properly you’ll have something equivalent to $B\mathbb{Z}$.

Posted by: David Corfield on May 10, 2012 5:12 PM

Re: What is homotopy type theory good for?

So a pretty simple way to illuminate

Yes, that’s about the simplest non-trivial example.

Done properly you’ll have something equivalent to $\mathbf{B}\mathbb{Z}$.

You mean the circle is $\mathbf{B}\mathbb{Z}$, the homotopy pullback $\ast \times_{\mathbf{B}\mathbb{Z}} \ast$ is $\Omega \mathbf{B} \mathbb{Z} \simeq \mathbb{Z}$!
Posted by: Urs Schreiber on May 10, 2012 5:18 PM | Permalink | Reply to this Re: What is homotopy type theory good for? That is, we replace both maps pt -> S^1 by surjective submersions R -exp-> S^1, and then the fiber product is R x Z. I think I’m getting it! Posted by: Allen Knutson on May 10, 2012 6:28 PM | Permalink | Reply to this Re: What is homotopy type theory good for? Just a remark on this here: we replace both maps $pt \to S^1$ by surjective submersions This does come out right in this case, but it is a bit misleading to attribute it to the submersiveness of the map, isn’t it? There is really no smooth structure of relevance here, we are talking about the circle as an object of $\infty Grpd \simeq Top$. (I think. If we are talking about the smooth circle as $S^1 \in SmthMfd \stackrel{Yoneda}{\to} Smooth \infty Grpd$ then its categorical loop space is the point!) What matters is that we replace one of the two point inclusions by a fibration in the relevant model category structure, so either by a Serre fibration if we choose to present $\mathbf{B}\mathbb{Z}$ by the topological circle $S^1$, or by a Kan fibration if we choose to represent it by the nerve of the groupoid with a single object and $\mathbb{Z}$ as its morphisms. Details on how and why this computation works are at nLab: homotopy pullback - Concrete constructions - General. Lots of examples are also spelled out at nLab:homotopy limit. Posted by: Urs Schreiber on May 10, 2012 7:00 PM | Permalink | Reply to this Re: What is homotopy type theory good for? Posted by: David Corfield on May 11, 2012 9:43 AM | Permalink | Reply to this Re: What is homotopy type theory good for? It is just supposed to mean that forming the intersection at the level of sets clearly cannot tell you about the higher homotopy type. Hmm, but you want to say that somehow without the words “higher homotopy type”? Seems tricky… you could just say “contains too few data.” Thanks for the explanation of the physical background! 
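To make the computation in these comments concrete, here is the standard path-space model spelled out (an editorial sketch in the notation of the thread, not part of the original discussion):

```latex
% Standard path-space model for the homotopy pullback of f : X -> B
% and g : Y -> B (this is the fibrant replacement discussed above):
X \times^h_B Y \;\simeq\; X \times_B B^I \times_B Y
% i.e. triples (x, \gamma, y) with \gamma a path from f(x) to g(y) in B.
% For the two point inclusions into the circle, X = Y = \ast and B = S^1,
% such a triple is just a loop at the basepoint:
\ast \times^h_{S^1} \ast \;\simeq\; \Omega S^1 \;\simeq\; \mathbb{Z}
% recovering the statement above: with S^1 \simeq \mathbf{B}\mathbb{Z},
% the homotopy pullback is \Omega \mathbf{B}\mathbb{Z} \simeq \mathbb{Z}.
```

Taking instead the strict pullback of the two point inclusions gives the empty set (distinct basepoints) or the point (equal basepoints), which is the failure mode described above.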
Posted by: Mike Shulman on May 10, 2012 5:21 PM | Permalink | Reply to this Re: What is homotopy type theory good for? I think this sort of thing also supports my argument that identity/path types in homotopy type theory should be denoted simply by an equals sign “(x = y)” as you are doing here, rather than something fancier like $Id(x,y)$ or $Paths(x,y)$ or (x⇝y). Yes, I was thinking about that, too, when this idea here was ripening. It certainly has its appeals. It is closely related to whether or not one uses the “$\infty$”-prefix in higher category theory. While emphasis may demand it, eventually it is more natural to just drop it and instead invent a prefix for when we explicitly do not mean the higher case. But with HoTT being an internal language it seems more natural, even, not to change the symbols here. Posted by: Urs Schreiber on May 10, 2012 10:44 AM | Permalink | Reply to this Re: What is homotopy type theory good for? Here is another, maybe amusing, example of the use of homotopy-logic, and another reason, maybe, for the “notational conservatism” discussed above. For some $n \in \mathbb{N}$, consider the type $\mathbf{B}^n U(1) :$Smooth∞Grpd (where I am adopting the habit of naming the object classifier $Type$ after the ambient $\infty$-topos) which classifies smooth circle $n$-bundles / bundle $(n-1)$-gerbes. Then the type $\mathbf{B}^n U(1)_{conn}$ which classifies circle $n$-bundles with connection, aka ordinary differential cocycles of degree $(n+1)$, has (details are here) the following homotopy-logical characterization: $\mathbf{B}^n U(1)_{conn} \simeq \left\{ P \in \mathbf{B}^n U(1), F \in \Omega^{n+1}_{cl} \;|\; curv(P) = F \right\} \,.$ In homotopy-words: a circle $n$-bundle with connection is a circle $n$-bundle and an identification of its curvature with a closed $(n+1)$-form. And that’s now an accurate theorem characterizing the moduli $n$-stack for differential cohomology. This has a certain charm to it.
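For $n = 1$ the characterization above unwinds to familiar cocycle data; an editorial sketch, using standard Čech–Deligne conventions over a good open cover:

```latex
% A circle 1-bundle with connection relative to a good cover {U_i} of X:
% underlying bundle P given by transition functions
g_{i j} \colon U_{i j} \to U(1), \qquad g_{i j}\, g_{j k} = g_{i k} \;\text{ on }\; U_{i j k}
% connection given by local 1-forms glued by those transition functions
A_i \in \Omega^1(U_i), \qquad A_j - A_i = d \log g_{i j} \;\text{ on }\; U_{i j}
% The identification curv(P) = F then picks out the globally defined
% closed 2-form with integral periods
F \vert_{U_i} = d A_i \in \Omega^2_{cl}(X) \,.
```

The data $(g_{ij}, A_i)$ is the bundle-with-connection part, and the last line is the explicit identification of its curvature with the prescribed closed form.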
We can also massage this to read in a maybe even more enjoyable form. For that, first a preliminary: we previously had here (somewhere, I forget in which thread) a discussion where I said I enjoy strictly regarding the notations • $\{ x : X | P(x) \}$ • $\exists_{x : X} . P(x)$ as synonymous in homotopy type theory, instead of reading the latter as being the bracket type/(-1)-truncation of the former (links for eventual bystanders). I understand that this is not what Coq or some other system does, but I will happily ignore this and take the standpoint that this is Coq’s problem and not mine. Because I find it a very nice unification that, constructively and with higher homotopy types around, I may perfectly well identify these two expressions. This seems to be a nice feature that I don’t feel like “correcting” by talking about support, bracket types and (-1)-truncation. I can talk about all that still when I really really need to form a (-1)-truncation. But when would that be? Okay, so if I allow myself this, then I can write the moduli $n$-stack for ordinary differential cohomology in degree $(n+1)$ equivalently as $\mathbf{B}^n U(1)_{conn} \simeq \left\{ P \in \mathbf{B}^n U(1) \;|\; \exists_{F \in \Omega^{n+1}_{cl}} . curv(P) = F \right\} \,.$ This homotopy-logical theorem now reads in words: a circle $n$-bundle with connection is a circle $n$-bundle such that its curvature is a closed $(n+1)$-form. Not sure if this is very useful, but it is kind of fun! One can play this game with each item in the “table of twisted cohomologies”. For instance the evident formal expression for the simple sentence For $\alpha \in H^3(X, \mathbb{Z})$, an $\alpha$-twisted $K$-cocycle is a projective unitary bundle and an identification of its Dixmier-Douady class with $\alpha$. is now (or compiles to, under homotopy type theory) a rigorous characterization of the stack whose connected components are twisted $K^0$.
Posted by: Urs Schreiber on May 10, 2012 4:01 PM | Permalink | Reply to this Re: What is homotopy type theory good for? Hmm, I see that now you want to also take the propositions-as-types point of view on the existential quantifier. Many type theorists of course do this all the time, as does Agda. (Coq’s “Prop” is sort of like a bracket type, but not entirely consistently so.) I’m not yet convinced by the argument to use PAT terminology. Using $=$ to mean isomorphism is not a problem because there is no other internally meaningful notion of “equality” between inhabitants of higher types. And many mathematicians have long recognized this by “sloppily” using the symbol $=$ to denote isomorphism informally already. But outside of the small circle of constructive type theorists who subscribe to PAT, mathematicians universally distinguish between “the collection of $x$ such that $P(x)$” and “the collection of $x$ together with $p\in P(x)$”, and both are meaningful and useful things to consider in homotopy type theory. If we use “such that” to mean “together with”, then we are left without a good phrase which means “such that”; we have to resort to some clumsy thing like putting brackets around our sentence. “Such that” types are everywhere in mathematics, so if homotopy type theory is to be a foundation for all of mathematics (not just a few narrow fields), then it needs a good language for them. But as an example of where the ability to actually say “such that” may be useful for the specific sorts of things you are interested in, consider for a type $X$ the type $\{ A : Type \; | \; A \;\text{ is equivalent to }\; X \}$ which you could write more formally as $\sum_{A: Type} [A\simeq X]$ The “together with” version of this type is (assuming univalence) contractible. But the “such that” version is the classifying space of bundles with fiber $X$. Posted by: Mike Shulman on May 10, 2012 5:20 PM | Permalink | Reply to this Re: What is homotopy type theory good for?
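The contractibility of the “together with” version is just the contractibility of a based path space, i.e. path induction. A minimal editorial sketch in Lean 4 (plain Lean with its ambient equality type, not a univalent/HoTT library, so this only illustrates the path-induction step):

```lean
universe u

-- The "together with" type Σ A, (A = X), with its canonical center ⟨X, rfl⟩.
def center (X : Type u) : (A : Type u) × (A = X) := ⟨X, rfl⟩

-- Every inhabitant equals the center: eliminating the equality h : A = X
-- ("path induction") contracts a pair ⟨A, h⟩ onto ⟨X, rfl⟩.
theorem contractible (X : Type u) (p : (A : Type u) × (A = X)) :
    p = center X := by
  cases p with
  | mk A h => subst h; rfl
```

With univalence one would further identify $(A = X)$ with $Equiv(A,X)$, which is the step the surrounding discussion appeals to; applying the bracket/(-1)-truncation fiberwise instead yields $\mathbf{B}Aut(X)$.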
I see, good. Okay, you have convinced me! But one question: in your last sentence, aren’t you mixing up the “such that”-version and the “together with”-version? We have that $\mathbf{B} Aut(X)$ is the “collection of $A$s together with an equivalence to $X$”. Posted by: Urs Schreiber on May 10, 2012 5:32 PM | Permalink | Reply to this Re: What is homotopy type theory good for? We have that $\mathbf{B}Aut(X)$ is the “collection of $A$s together with an equivalence to $X$.” No, that’s the contractible one. The type $\sum_{A:Type} Equiv(A,X)$ is equivalent, by univalence, to $\sum_{A:Type} (A=X)$ which is contractible; it’s the mapping path space of $X : 1\to Type$. It’s only when you add the bracket type that it becomes $\mathbf{B}Aut(X)$. Posted by: Mike Shulman on May 10, 2012 8:58 PM | Permalink | Reply to this Re: What is homotopy type theory good for? it’s the mapping path space Ah! Thanks, now I see where I went wrong. So in the categorical semantics, we have the pullback $\array{ \sum_{A} (A = X) &\to& \ast \\ \downarrow && \downarrow & \searrow^{\mathrlap{X}} \\ (\mathbf{B}Aut(X))^I &\to& \mathbf{B}Aut(X) &\to & Type \\ \downarrow \\ \mathbf{B}Aut(X) }$ And then the bracket type version is the (-1)-truncation of the total left vertical composite! Right? Hence $\mathbf{B}Aut(X)$. Posted by: Urs Schreiber on May 10, 2012 10:59 PM | Permalink | Reply to this Re: What is homotopy type theory good for? Great stuff! I’ll write something more substantial later, but as a quick sanity check: $Id_{[C, B^3 U(1)]}$ in part 3 of the analysis of $CField$ should be $Id_{[X, B^3 U(1)]}$? And where you write In the correct answer there is yet one more field on $X$, you mean that rather than think of $CField(X)$ as just a subset of the product of three fields, we should see it as a new field in itself? Posted by: David Corfield on May 10, 2012 9:55 AM | Permalink | Reply to this Re: What is homotopy type theory good for? but as a quick sanity check Yes, that was a typo.
I have fixed it now. Thanks for spotting it! we should see it as a new field in itself? Yes. I realize that this is a point that maybe the above text should spend a tad more time on. So the thing is that the extra homotopy in the homotopy pullback of fields is visible itself as a field! Here are a few more details. I’ll do this for the case that the thing called $G_4$ vanishes, which is the relevant boundary case (and this and more is all explained in the article). So we have the field of gravity, $\phi_{gr}$, and it induces another field called $\frac{1}{2}p_1(\phi_{gr})$. This has a field strength, which is a closed 4-form, to be denoted $\langle F_\omega \wedge F_\omega\rangle$. And then there is an $E_8$ gauge field, which induces another field called $a(\phi_{ga})$, also with its field strength 4-form, now called $\langle F_A \wedge F_A\rangle$. The homotopy pullback that we are talking about makes these two 4-forms become equivalent as 4-forms, by an explicit equivalence. An equivalence of closed 4-forms is a 3-form, call it $H$, such that its de Rham differential $d H$ is the difference of the two 4-forms $d H = \langle F_\omega \wedge F_\omega \rangle - \langle F_A \wedge F_A\rangle \,.$ So $H$ here is the equivalence between the two “pre-existing” fields, but this is now interpreted itself as a field: the field strength of the (twisted) “B-field”. Posted by: Urs Schreiber on May 10, 2012 10:30 AM | Permalink | Reply to this Re: What is homotopy type theory good for? So the B-field is in $[X, \mathbf{B}^2 U(1)]$? Has this been induced from somewhere like $\frac{1}{2}p_1(\phi_{gr})$ was induced from $\phi_{gr}$? Posted by: David Corfield on May 11, 2012 12:35 PM | Permalink | Reply to this Re: What is homotopy type theory good for? So the B-field is in $[X, \mathbf{B}^2 U(1)]$? That’s going in the right direction.
More precisely it’s like this: Like the C-field, the B-field experiences various “twists” in general, depending on which type of string theory we look at (type II or heterotic) and depending on which other fields are present, because they all influence each other. But, yes, in the plain vanilla version the B-field is a connection on a circle 2-bundle = bundle gerbe, and as such is a term $\phi_B \in [X, \mathbf{B}^2 U(1)_{conn}] \,.$ (I should decide whether I want to write $(X \to A)$ or $[X,A]$ for function types here. While I am still undecided, keep in mind that both notations stand for precisely the same thing.) Here the subscript “${}_{conn}$” indicates the differential refinement, the connection. There is a forgetful morphism $u : \mathbf{B}^2 U(1)_{conn} \to \mathbf{B}^2 U(1)$ which forgets the connection and just remembers the underlying 2-bundle / bundle gerbe, or physically: roughly, the underlying “instanton sector” $u(\phi_B) \in [X, \mathbf{B}^2 U(1)] \,.$ You ask: Has this been induced from somewhere like $\frac{1}{2}p_1(\phi_{gr})$ was induced from $\phi_{gr}$? Something like this does happen in backgrounds for the type II superstring, yes. Whereas for the heterotic string the twist that governs the anomaly cancellation (as it were) is the second Chern class $\mathbf{c}_2 : \mathbf{B} SU \to \mathbf{B}^3 U(1)$, or rather its differential refinement $\hat{\mathbf{c}}_2 : \mathbf{B} SU_{conn} \to \mathbf{B}^3 U(1)_{conn}$, for the type II string the corresponding twist is given by the Dixmier-Douady class $\mathbf{dd} : \mathbf{B} PU \to \mathbf{B}^2 U(1) \,.$ In words and physically this means: a gauge field for the stable projective unitary group (namely on a D-brane) “induces” a B-field. Yes. And in this case there is also a similar “constraint equation” which says that this induced field must equal the difference of two other induced fields.
Posted by: Urs Schreiber on May 11, 2012 4:03 PM | Permalink | Reply to this Re: What is homotopy type theory good for? Here’s a problem that doesn’t directly have anything to do with gauge theory, but does involve some (possibly not so) exact complexes, and in my vague understanding, $n$-stuff shows up whenever complexes appear. Would this formalism have an application here? Consider the following commutative square of linear differential operators (1)$\begin{matrix} \Gamma_{SC}(F) & \overset{c}{\longrightarrow} & \Gamma_{SC}(G) \\ \downarrow\mathrlap{\scriptsize{f}} & & \downarrow\mathrlap{\scriptsize{g}} \\ \Gamma_{SC}(\tilde{F}^*) & \underset{d}{\longrightarrow} & \Gamma_{SC}(\tilde{G}^*) \end{matrix}$ The notation is as follows. $F\to M$ and $G\to M$ are vector bundles, $\dim M = n$. $\tilde{F}^* \cong F^*\otimes_M \Omega^n M$ are densitized dual bundles, so that point-wise contraction and integration gives a natural pairing between sections of $F$ and $\tilde{F}^*$. Similarly for $G$ and $\tilde{G}^*$. $\Gamma_{SC}(F)$ and $\Gamma_{SC}(G)$ denote the spaces of smooth sections with spatially compact support (contained in the domain of influence of a compact set). The vertical maps are hyperbolic (one can define two sided inverses with advanced and retarded supports, Green functions $\mathcal{F}_\pm$ and $\mathcal{G}_\pm$). The horizontal maps are “elliptic” in the sense that they could be extended to “elliptic” complexes (2)$\cdots \overset{c_{-1}}{\longrightarrow} \Gamma_{SC}(F) \overset{c}{\longrightarrow} \Gamma_{SC}(G) \overset{c_1}{\longrightarrow} \cdots$ and similarly for $d$, with finite dimensional cohomology.
When the constraints $c$ are ignored and $f$ is globally hyperbolic on $M$, the kernel, image and cokernel of $f$ are nicely characterized by the following exact sequence: (3)$0 \to \Gamma_0(F) \overset{f}{\to} \Gamma_0(\tilde{F}^*) \overset{\mathcal{F}_+ - \mathcal{F}_-}{\longrightarrow} \Gamma_{SC}(F) \overset{f}{\to} \Gamma_{SC}(\tilde{F}^*) \to 0 ,$ where $\Gamma_0(F)$ consists of sections with compact support. Now, taking $c$ into account, the challenge is to characterize the spaces $\ker f\cap \ker c$, $\mathrm{im}(f\oplus c)$ and $\mathrm{coker}(f\oplus c)$. Is there some neat higher categorical/homotopy-type way of doing that using the elliptic complexes that extend $c$ and $d$? Posted by: Igor Khavkine on May 11, 2012 2:32 AM | Permalink | Reply to this Re: What is homotopy type theory good for? Hi Igor, I see what you are getting at. Something at least closely related can be said: Using homotopy type theory gives a very concise and neat way of speaking about the BV-BRST formalism in a homotopy-true way. (Note to bystanders: this and other links I provide for the sake of other readers, not for Igor. Igor is a co-author of that page.) More precisely, homotopy type theory gives a natural automatic way to speak about derived critical loci. So consider some configuration space of fields, which for definiteness and for comparison with the other discussion here I take to be of the form $Conf = (X \to A) \,,$ i.e. to be a “function type” (in HoTT language) of maps from spacetime $X$ to some moduli type of field configurations. For instance if $A$ is a Riemannian manifold, then this might be the space of field configurations of a standard sigma-model. Or if $A = \mathbf{B}G_{conn}$ is a moduli stack of connections, then this would be the configuration stack of a gauge theory. So then an action functional $S : (X \to A) \to \mathbb{A}^1$ is now a term of function type $(X \to A) \to \mathbb{A}^1$.
If we assume a suitable differential geometric context, then this induces a term of function type $d S : (X \to A) \to T^* (X \to A) \,,$ its differential. The corresponding covariant phase space $P_{BV}$ in its incarnation as a derived critical locus / BV-complex is the homotopy fiber of this $d S$ over 0, assuming that we interpret all of this in a “derived” $\infty$-topos. Hence formulating the discussion at derived critical locus in the language of homotopy type theory yields the following evident expression as a theorem: $P_{BV} = \left\{ \phi : X \to A \;|\; (d S_\phi = 0) \right\} \,.$ Or, as Mike insists, equivalently $P_{BV} = \sum_{\phi : X \to A} (d S_{\phi} = 0) \,.$ This is the kind of trick that homotopy type theory achieves for us: this expression looks precisely like the naive definition of the ordinary covariant phase space, which is simply the “subset of points” of configuration space on which the Euler-Lagrange equation $d S = 0$ holds. But homotopy type theory automatically “compiles” this to the homotopy-correct statement and makes it be the derived covariant phase space / the derived critical locus, hence the full BV-complex. This is actually a very good example for the theme of the above post. Thanks for bringing this up. Posted by: Urs Schreiber on May 11, 2012 3:31 PM | Permalink | Reply to this Re: What is homotopy type theory good for? I wrote: Or, as Mike insists, equivalently $P_{BV}= \sum_{\phi : X \to A} (d S_\phi = 0)$ In fact this notation is maybe noteworthy in this context: it manifestly exhibits the BV-complex / derived covariant phase space as a homotopy-theoretic path integral. Posted by: Urs Schreiber on May 11, 2012 3:37 PM | Permalink | Reply to this Re: What is homotopy type theory good for? as Mike insists The main thing I have a problem with is using “$\exists$” to mean $\sum$. The notation $\{ ... | ... \}$ I don’t have as strong feelings about.
Traditionally in mathematics it’s used only when the thing on the right is a proposition, but if we put something that isn’t a proposition there I don’t think there is any ambiguity. That is, I don’t think the traditional notation includes an implicit bracket around the thing on the right; rather, in ordinary set theory the notation would be considered ill-formed if the thing on the right were not a proposition. Coq uses $\{ ... \& ... \}$ when the thing on the right is not in $Prop$; we could adopt that too. Either of these notations does have the advantage over $\sum$ that often the thing on the left is as important as, or more important than, the thing on the right, and in a $\sum$ notation it gets stuck into a subscript. Posted by: Mike Shulman on May 11, 2012 5:12 PM | Permalink | Reply to this Re: What is homotopy type theory good for? Actually, I was deliberately obscure about the origin of the above construction to see if it would be independently recognized as a kind of pattern that you guys might be familiar with. A concrete example might clarify what I’m talking about. Consider a 1-form $A_\mu$ on a Lorentzian manifold that satisfies the Euler-Lagrange equations of the Proca action functional. An equivalent way of writing these equations is the pair $\square A_\mu - m^2 A_\mu + (\mathrm{curvature term}) = 0$, $\nabla^\mu A_\mu = 0$. The first is a massive wave equation (the $f$ operator in the abstract setting above), while the second is a constraint (the $c$ operator) (it would be called a “second class constraint” in the standard Hamiltonian classification). The other two operators, $g$ and $d$, can be found by “commuting” $f$ and $c$. Their existence ensures that the constraint $\nabla^\mu A_\mu = 0$ is consistent with the massive wave equation. This system has no gauge invariance, so the ghost part of the BV formalism is trivial here. You can even forget about the fact that these equations came from an action functional.
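The consistency of the constraint with the wave equation can be seen in one line in the flat-space case; an editorial sketch (signs depend on metric conventions):

```latex
% Flat-space Proca equations, F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu:
\partial^\mu F_{\mu\nu} + m^2 A_\nu = 0
% Apply \partial^\nu; the F-term drops out by antisymmetry of F_{\mu\nu}:
\partial^\nu \partial^\mu F_{\mu\nu} = 0
\quad\Longrightarrow\quad
m^2 \, \partial^\nu A_\nu = 0
% So for m \neq 0 the constraint \partial^\nu A_\nu = 0 follows, and the
% system is equivalent to the massive wave equation plus this constraint,
% matching the (f, c) pair in the abstract square above.
```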
The challenge is just to characterize the space of solutions of the massive wave equation that also satisfy the constraints. In this case it’s not particularly difficult, but neither is solving the Maxwell equations modulo gauge transformations. In these situations, the constraints and the gauge transformations are rather simple. The complications appear for gauge systems when the gauge transformations become more complicated and have to be described by a non-trivial complex of differential operators, which gives rise to the whole hierarchy of ghosts of ghosts. Similarly, when the constraints $c$ are not all independent, their degeneracies would be described by the elliptic complexes that extend the operators $c$ and $d$ in my previous post. I see a certain parallel here between gauge transformations and constraints. Both may be complicated enough to have to be resolved by a complex of differential operators. Though for gauge transformations, the complex extends to the left, while for constraints the complex extends to the right. I wonder if this fancy machinery that already applies to the gauge setting can apply to the setting of constraints as well. Posted by: Igor Khavkine on May 11, 2012 6:08 PM | Permalink | Reply to this Re: What is homotopy type theory good for? I wonder if this fancy machinery that already applies to the gauge setting can apply to the setting of constraints as well. To the extent that you consider resolving your constraint surface by BV methods: yes. What I said above applies already to the case that there are no gauge symmetries. Maybe to be more explicit: I don’t see how appealing to homotopy type theory can help with actually solving some equations. But if you do appeal to BV at all (gauge symmetries or not), there is the fact that the kind of up-to-homotopy construction that BV secretly is has a nice native formulation in homotopy type theory. This is probably not what you are hoping for here, but it’s still kind of noteworthy.
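For bystanders: in the finite-dimensional toy case the derived critical locus that this BV discussion refers to can be written down explicitly; an editorial sketch of the standard Koszul presentation (for $S$ a function on a finite-dimensional smooth manifold, no gauge symmetries):

```latex
% Derived critical locus of S : X -> R, X finite-dimensional with
% coordinates x^i: functions on the (-1)-shifted cotangent bundle
\mathcal{O}\big(T^*[-1]X\big) \;=\; \Big( C^\infty(X) \otimes \Lambda^\bullet \Gamma(T X), \; Q \Big)
% with antifields x^+_i in degree -1 and Koszul differential
Q \;=\; \iota_{d S} \;=\; \frac{\partial S}{\partial x^i}\, \frac{\partial}{\partial x^+_i}
% In degree 0 this recovers functions on the ordinary critical locus,
H^0(Q) \;=\; C^\infty(X) \,/\, \big(\partial_1 S, \ldots, \partial_n S\big)
% while the cohomology in negative degrees records the failure of d S
% to meet the zero section transversally, i.e. the homotopy-theoretic
% information beyond the naive "subset of points".
```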
Posted by: Urs Schreiber on May 11, 2012 6:49 PM | Permalink | Reply to this Re: What is homotopy type theory good for? Right. I see what you are saying. But it is worth noting that the standard BV formalism (or its generalization to non-variational systems à la Lyakhovich et al) does not pay special attention to such “second class” constraints. That is a little unsatisfying. So, the lesson seems to be that if there is any application of higher categorical or homotopical methods here, it is yet to be found. Posted by: Igor Khavkine on May 11, 2012 10:22 PM | Permalink | Reply to this Re: What is homotopy type theory good for? Urs wrote: How will homotopy type theory change our lives? To me the most exciting thing was always the idea that topology is ‘logic turned into space’, as perhaps the name ‘topology’ was hinting all along. And this sort of realization seems bound to be important in physics, where we have these weird things called ‘space’ and ‘spacetime’ and ‘phase space’ that we’re trying to understand more deeply than we do now. But a lot of mathematicians and physicists want to see concrete answers to questions of the form ‘how does X help me do Y?’ So it’s good to compile a list of examples. Posted by: John Baez on May 13, 2012 3:22 AM | Permalink | Reply to this Re: What is homotopy type theory good for? topology is ‘logic turned into space’ That is crucially different from logic turned into homotopy types, though! Posted by: Urs Schreiber on May 14, 2012 10:50 AM | Permalink | Reply to this Re: What is homotopy type theory good for? So is the idea of this post that many interesting constructions in specific $(\infty, 1)$-toposes can be thought of as general homotopy logic operations? Might there be cases where an interesting construction relies too specifically on the particularity of its $(\infty, 1)$-topos setting, so one wouldn’t count it as logical? 
Won’t the examples you ask for just come from looking at what higher homotopy means in different $(\infty, 1)$-toposes? I guess your example looks especially striking as it’s not just any old pullback in $Smooth \infty Grpd$, but has a physical interpretation, where the higher homotopy is physically meaningful. Posted by: David Corfield on May 13, 2012 3:44 PM | Permalink | Reply to this Re: What is homotopy type theory good for? So is the idea of this post that many interesting constructions in specific (∞,1)-toposes can be thought of as general homotopy logic operations? No, this is the basic idea that already drives the existing interest: that homotopy type theory is presumably the internal language of elementary $(\infty,1)$-toposes. What I am after here is seeing examples of how this is useful beyond just interest in establishing this relation itself. Examples where constructions in homotopy theory, which we are interested in as homotopy theory, are illuminated by re-phrasing them in homotopy logic. You may imagine me talking to students of homotopy theory who don’t care about symbolic logic, be it homotopy type theory or not, who see no reason to be interested in learning about homotopy type theory, maybe whose advisor advises them not to waste time with this fad, as there is nothing to be learned for homotopy theorists. The question is: what can one tell them apart from “right, nothing to be seen here, move on”? The example I tried to give in this thread is supposedly one where a subtle problem in an application of homotopy theory has a crystal clear solution once formulated systematically in homotopy type theory. It’s not quite an example where you can make progress only with homotopy type theory, after all we found the solution just from homotopy theoretic reasoning, too, but I notice that from the point of view of homotopy type theory it becomes even more elegant. So that’s satisfactory, for me at least.
But I am quite suspecting that this is only the tip of the iceberg. Posted by: Urs Schreiber on May 14, 2012 11:05 AM | Permalink | Reply to this Re: What is homotopy type theory good for? Maybe your example is difficult to judge for a student of homotopy theory in that they won’t know enough about higher differential geometry to see whether you could have arrived at your construction most easily through: physical considerations; differential geometric/topological considerations; quasi-categorical considerations; or homotopy logical considerations. Perhaps an example in $Top$ would help. But I wonder whether this might be pushing against the flow of ideas. Couldn’t we see the dramatic changes as more to do with a flood of ideas coming out of homotopy theory to the rest of maths, by isolating concepts such as $(\infty, 1)$-toposes which surprisingly apply to areas such as differential and algebraic geometry? Or is it that something has been added in the development of the internal language of homotopy type theory which will allow the debt to be somewhat repaid? Posted by: David Corfield on May 14, 2012 11:57 AM | Permalink | Reply to this Re: What is homotopy type theory good for? Maybe your example is difficult to judge for a student of homotopy theory Yes, that example about the C-field is to motivate me :-) For those students I hope to eventually prepare the page HoTT methods for homotopy theorists, which however is still at best in a nascent state. (But right now I need to be looking into something completely different, so this page has to wait a moment.) There are hopefully two layers to this: one concerning lots of little tricks (hah!) as on that page (eventually), which are useful in everyday computation in homotopy theory and which haven’t found attention before the point of view of HoTT introduced them, the other concerning deeper research-level questions.
Couldn’t we see the dramatic changes as more to do with a flood of ideas coming out of homotopy theory to the rest of maths, by isolating concepts such as (∞,1)-toposes which surprisingly apply to areas such as differential and algebraic geometry? I guess so, but what role is homotopy type theory to play in this? For instance, Mike is proposing (hopefully I am stating it correctly) that homotopy type theory indicates that in elementary $(\infty,1)$-topos theory it is not the usual notion of $(\infty,1)$-colimits that is fundamental, as one would have expected, but that of $\infty$-initial $\infty$-algebras over presentable $\infty$-monads (as in our parallel discussion here). Both concepts will of course reflect on each other, but some things will be more natural with respect to the initial algebras, apart from this having a direct formalization in homotopy type theory. So one way to answer my question here is: what would be interesting items in a list of examples for applications in traditional homotopy theory that are more elegantly expressed in terms of initial algebras of presentable $\infty$-monads, as opposed to in terms of just plain $\infty$-colimits? Posted by: Urs Schreiber on May 14, 2012 12:55 PM | Permalink | Reply to this Re: What is homotopy type theory good for? I don’t think I would try to use this example to convince a homotopy theorist that homotopy type theory is useful. The direct homotopy-theoretic solution — take the homotopy pullback rather than the strict pullback — seems pretty obvious and doesn’t need any type-theoretic machinery. in elementary (∞,1)-topos theory it is not the usual notion of (∞,1)-colimits that is fundamental, as one would have expected, but that of ∞-initial ∞-algebras over presentable ∞-monads Yes, I think that’s roughly accurate. I think it sounds a little better (because seemingly more general) to talk about free algebras rather than initial algebras, though they are essentially the same.
It’s not entirely clear to what extent the notion of “presentable $\infty$-monad” can be defined without already having usual $(\infty,1)$-colimits, though. (One could of course regard HITs themselves as a “definition” of presentable $\infty$-monads.) what would be interesting items in a list of examples for applications in traditional homotopy theory that are more elegantly expressed in terms of initial algebras of presentable ∞-monads, as opposed to in terms of just plain ∞-colimits? I’ve said this before, but I think that localization is near the top of that list. Classical algebraic topologists are constantly localizing $\infty Gpd$ with respect to various things, like prime numbers, but the construction of such localizations is frequently very messy. In nice cases one can induct up the Postnikov tower, but in general one usually resorts to a small object argument. The fact that localization has a very clean description as a higher inductive type (even if that description “compiles out” in classical models to a small object argument) seems to me very suggestive. However, I don’t yet know of actual applications of this idea to something that a classical algebraic topologist would care about. Posted by: Mike Shulman on May 14, 2012 6:59 PM | Permalink | Reply to this Re: What is homotopy type theory good for? I think that localization is near the top of that list. Right, I have looked at that post every now and then, but still need to find some way to connect to it. What happens when I read that post is that I do follow how the code there formalizes localization, and I do appreciate the remarkable claim that you have given a Coq-checked proof that this gives indeed a reflection, but then I am maybe left wondering what I learn from this now, as long as I don’t try to do some serious Coq programming myself (which seems unlikely anytime soon). Let’s see.
Maybe this here would help me: what is the induction principle and what is the recursion principle induced by this higher inductive type? I suppose the recursion principle is the universal property of precomposition with the unit, hence with to_local? What’s the induction principle? Is that something that is usefully made explicit?

So we’d have first of all an $X$-dependent type with a map to a $(localize X)$-dependent local type. And then compatible dependent terms of both. Hm, is that good for something? (I should admit that I don’t really have the leisure to think about this right now, so I am just rambling…)

Posted by: Urs Schreiber on May 14, 2012 11:58 PM | Permalink | Reply to this

Re: What is homotopy type theory good for?

I wrote:

So we’d have first of all an X-dependent type

Ah, no, that’s wrong. The endofunctor in degree 0 is sort of constant on $X$. Anyway, I have to go offline now. Would be happy to continue discussing this later, though.

Posted by: Urs Schreiber on May 15, 2012 12:26 AM | Permalink | Reply to this

Re: What is homotopy type theory good for?

That’s a good question. Unfortunately, the only general induction principle I know is not much more useful than the recursion principle. It basically says that if you have a type $P(z)$ dependent on $z : localize(X)$, such that $\sum_{z: localize(X)} P(z)$ is local, then you have a section. I mentioned this sort of induction principle in the more general context of reflective subfibrations here.

A more useful induction principle would change the hypothesis to ask only that each type $P(z)$ is local. Unfortunately this one isn’t valid for all localizations, only those that give rise to a stable factorization system. That does include all nullifications, i.e. localizations at maps of the form $S\to 1$. For instance, $n$-truncation is nullification at the $(n+1)$-sphere.

Posted by: Mike Shulman on May 15, 2012 7:42 AM | Permalink | Reply to this

Re: What is homotopy type theory good for?
In the comparison to the dot-com bubble, Urs writes

Back then a technology had appeared which was as powerful as it was new: while everybody had a sure feeling that the technology would have dramatically valuable impact, because it was so new nobody had an actual idea of what that would be… With homotopy type theory the situation currently seems to be similar to me. On the one hand it is clear that some dramatic breakthrough right at the heart of mathematics has occurred. One hears the sound of something big happening.

Obviously, one ought not push the comparison too far, or else we’d be looking to match the current moment to a point on the Nasdaq graph, presumably before that big spike at 2000. But I was wondering, where can one hear speculations about the possible future impact of homotopy type theory other than at our places (nLab, Café), the Homotopy Type Theory site, and the Univalent Foundations site? Presumably, there is also what is written in the informal parts of papers on $(\infty, 1)$-toposes by e.g., Lurie, Toën, Joyal, etc. Are you hearing a lot more than is written anywhere? Anything as daring as some of these 1999/2000 predictions for 2010?:

A fourth-grader on his way home from school will be able to punch a button on his bicycle that will “activate the microwave oven to heat his snack so it’ll be ready when he walks in the kitchen.” As he rides he’ll have one eye on a computer monitor on which he’s getting a head start on homework. – Charlotte News and Observer

Kids’ dolls, trucks and other toys will use artificial intelligence to talk and “evolve” with your child as he grows, a process you’ll be able to track dramatically with holographic photos.

“Land line phones will be a thing of the past” and “there will probably be a single international currency.” – The Melbourne Herald Sun

Posted by: David Corfield on May 14, 2012 10:48 AM | Permalink | Reply to this

Re: What is homotopy type theory good for?
I remember chatting to an algebraic geometer at Kent a couple of years ago and trying to find out what level of category theory fed into his work. He knew of toposes but they didn’t feature in his research. He also had some idea of what derived algebraic geometry sought to achieve, but again with no felt need to find out more. If we were to liken the situation today to some point in the foundational changes of the 1880–1930 period, is it that

Then: by, say, 1900, some of the avant garde had the idea that much (or all) of mathematics can be thought to be about sets or classes, and some also that there’s an (internal) formal language in which all mathematical reasoning can be cast.

Now: some of the avant garde have the idea that much (or all) of mathematics can be thought to be about $(\infty, 1)$-toposes, and some also that there’s an (internal) formal language.

E.g., an avant garde expression of the future role of homotopy theory from David Ben-Zvi:

It is not that homotopy theory was to be a part of group theory, but as Quillen started to show the converse is true – currently a big swath of algebraic geometry can be seen as a special easy case of homotopy theory, but with great insight to be gained from the broader perspective. In fact it is apparent that algebraic geometers largely ignored homotopy theory only at great cost and that is beginning to change. Homotopy theory is an incredibly rich beautiful subject with many overarching themes, large scale patterns, deep phenomena related to the most beautiful parts of number theory and geometry etc. It certainly has suffered from very bad salesmanship for a long time but Hopkins, Madsen, Lurie and others are changing that hopefully.

Even if this vision wins out, it doesn’t mean homotopy type theory will feature explicitly. Most people got by in the past with an informal grasp of mathematics as being about structured sets and a passing knowledge of predicate logic.
These ideas weren’t mentioned to me in four years as an undergraduate at Cambridge in the 1980s, so presumably it was supposed that one could work perfectly effectively with no training in set theory or logic. Of course, all this doesn’t stop them being wonderful conceptual discoveries.

Posted by: David Corfield on May 15, 2012 9:44 AM | Permalink | Reply to this

Re: What is homotopy type theory good for?

I think the big difference to traditional (material) set theory is that this was never meant to be actually used explicitly in practice, but only served as a proof of principle. This is already different for structural set theory like ETCS, which is actually useful in practice. And it already goes a long way towards type theory, which to some extent takes the same basic idea and just formalizes it further, doesn’t it?

Posted by: Urs Schreiber on May 15, 2012 10:38 AM | Permalink | Reply to this

Re: What is homotopy type theory good for?

Like Molière’s Monsieur Jourdain, who exclaimed

Good Heavens! For more than forty years I have been speaking prose without knowing it,

it may be that people could be said to have been reasoning unknowingly in a structural set theoretic way for many years. But I’m wary of bringing that debate up again after the FOM experience. Clearly a major change occurred between 1880 and 1930, for one thing in terms of a kind of homogenization of mathematics, so that it became much less surprising that different branches could speak to one another.

It is as if you took a man out of a milieu in which he lived not because it fitted him but from ingrained habits and prejudices, and then allowed him, after thus setting him free, to form associations in better accordance with his true inner nature. (Weyl)

I hope the homotopization of mathematics is as eventful, and that homotopy type theory plays an important role.

Posted by: David Corfield on May 15, 2012 12:51 PM | Permalink | Reply to this

Re: What is homotopy type theory good for?
Hi David, while I agree with what you say or indicate, I feel that somehow the issue that motivated me here is not entirely reflected in these comments of yours. Maybe I am wrong, but for the sake of clarity, let me try to emphasize a point which I feel you don’t quite address yet. For instance you end with:

I hope the homotopization of mathematics is as eventful, […]

But I would say: the homotopization of mathematics has already been happily and eventfully proceeding all along. That’s what the “$n$” in the title of this blog here is meaning to pay tribute to, for instance. What is new is that this homotopization has reached also the very foundations.

The question that I meant to be highlighting in this discussion here is: how does the homotopization of various areas that we are all already involved in profit from the fact that also the foundations are being homotopical now? Do you see what I mean?

Another way might be to put it like this: we (for some value of “we”) have long agreed that the language that the world is written in is higher category theory. What is new now is that suddenly we realize that this higher category theory has an equivalent reformulation which, while equivalent, looks more fundamental, even, to some extent. The natural question (for me) is: what do we gain in practice by adopting that new viewpoint, that equivalent reformulation?

Posted by: Urs Schreiber on May 15, 2012 5:44 PM | Permalink | Reply to this

Re: What is homotopy type theory good for?

The natural question (for me) is: what do we gain in practice by adopting that new viewpoint, that equivalent reformulation?

It’s a very interesting question. You’ll have a much better sense of a good answer than I. What I’m naturally led to think about is what it was imagined formal mathematical languages of the past could achieve, and what they actually achieved. Let’s sketch a list.

Security: the idea of a gapless proof, where no appeal to intuition need be given.
E.g., no more discoveries of cases such as where Euclid presumes without stating that certain circles will intersect.

Proof theory: the idea that proofs themselves could be subjected to mathematical analysis.

Mechanization: certain inference steps would become more or less automatic. Machines could be designed to do maths.

Carving concepts correctly: Frege imagined his Begriffsschrift could do this. He said that the earlier forms of logic just allowed you to join or intersect properties, but his allowed the formation of fruitful new concepts.

I haven’t heard anything much about the first two with respect to categorical type theory. I know there are category theoretic treatments of proof theory, e.g., M. E. Szabo, Algebra of Proofs, Studies in Logic and the Foundations of Mathematics, Vol. 88, North-Holland, 1978, but that’s category theory as proof theoretic tool, not proof theory about category theoretic proofs. Concerning mechanization, I believe Voevodsky himself has suggested that mathematics could be automated with the adoption of homotopy type theory. I think I remain to be convinced.

It has always been the fourth, carving concepts correctly, that has intrigued me about category theory, though this is sometimes given the air of being fairly automatic. E.g., you hear of “turning the crank” to work out what a stabilizer of a sub-2-group must be. How to characterize such crank-turning? Can it be seen as operating a type theory?

I must admit to being a little disappointed over the years that general category theory hasn’t made greater inroads into mainstream maths by crank-turning in concrete realizations. At the Café when it was found that optimal transport was all about profunctors in an enriched category, I thought we would quickly reveal whether the optimal transport people had chosen the optimal concepts. There must be a heap of examples where X is working on a concept ignorant that Y has shown it to be a concrete realization of a category theoretic construction.
Y doesn’t know what’s needed to make something of the observation, and X never finds out. Choosing randomly a piece of recent general category theory, Mike’s work with Steve Lack on Enhanced 2-categories and limits for lax morphisms, is that dripping with possibilities of surprising concrete realizations?

Posted by: David Corfield on May 16, 2012 9:40 AM | Permalink | Reply to this

Re: What is homotopy type theory good for?

Concerning mechanization, I really cannot judge this. But I can point out that the mechanization of mathematics using type theory is proceeding independently of its homotopization. There is the project ForMath: Formalization of Mathematics led by Coquand. I have not much feel for this, but you might find it interesting to look around a bit on their list of publications, especially concerning the Slides-section towards the bottom of the page.

I recently went around and asked people who might know what would happen to these kinds of programs once the Coq code used there does or will make the homotopization explicit. After all, the point is that not too much is necessary to do so. Once it is shown that univalence is computationally good, it can be done in principle. What will happen then? What is the missing red entry on Mike’s slide number 4?

How to characterize such crank-turning? Can it be seen as operating a type theory?

Yes, that “crank turning” referred precisely to internalization. By the way, meanwhile Mike has proven that our two definitions of stabilizer infinity-group are equivalent, and he did so in homotopy type theory.

There must be a heap of examples where X is working on a concept ignorant that Y has shown it to be a concrete realization of a category theoretic construction.

Yeah, I like to collect such examples in my area of interest.

Posted by: Urs Schreiber on May 16, 2012 12:58 PM | Permalink | Reply to this

Re: What is homotopy type theory good for?
Security: the idea of a gapless proof, where no appeal to intuition need be given.

Mechanization: certain inference steps would become more or less automatic. Machines could be designed to do maths.

Type theory is actually even better at security than it is at mechanization, through the use of machines. It turns out to be easier to program a computer to verify that a proof is correct, than it is to program a computer to produce a proof automatically. Coq and Agda and other proof assistants which we use in type theory are doing exactly that, so we do have security guaranteed. I feel like people talk about this quite a lot, although I guess maybe I haven’t emphasized it very much around here. But non-homotopical type theory is already being used in this way: for instance, the four-color theorem has been completely computer-verified, and work is in progress for various other complicated mathematical theorems whose correctness people are worried about. Voevodsky isn’t innovating here at all; it’s a well-established fact.

Proof assistants do also have a certain degree of automation, which can make “simple” steps be proven automatically. A computer’s definition of “simple” is not necessarily that of a human, though, so sometimes you end up having to spell out manually things that seem obvious, but other times the computer can just do something like magic and you aren’t entirely sure how it happened. However, this sort of automation is only going to get better. I wouldn’t go so far as to say that all of mathematics can be automated (and I haven’t personally heard Voevodsky claim that, though I wouldn’t be surprised if he has), but I think in 100 years, the process of doing mathematics will look radically different than it does today.

I would also have said that type theory is very closely related to proof theory.
I’m not a proof theorist, but the terms that appear in type theory are exactly a “representation of proofs” which are used in proving proof-theoretic theorems like cut-elimination.

Posted by: Mike Shulman on May 16, 2012 4:48 PM | Permalink | Reply to this

Re: What is homotopy type theory good for?

I remember someone, was it John Power, telling me that it was no surprise that European computer science had gone down a path of assuring security through type-checking in typed languages, hence open to category theory, while the American approach primarily sought speed (or something like that). Maybe there’ll be a proof theory for homotopy type theory, with relative consistency proofs, etc.

As for Voevodsky’s beliefs, he talks here of a new era which will be characterized by the widespread use of automated tools for proof construction and verification. I guess that’s not too radical. Call me old-fashioned, but I find what’s happening more interesting in terms of the production of new conceptual understandings.

Posted by: David Corfield on May 16, 2012 5:16 PM | Permalink | Reply to this

Re: What is homotopy type theory good for?

Maybe there’ll be a proof theory for homotopy type theory

Since homotopy type theory is just intensional type theory with maybe a few extra axioms and rules, I would say that it already has a proof theory.

Posted by: Mike Shulman on May 17, 2012 1:09 AM | Permalink | Reply to this

Re: What is homotopy type theory good for?

But I would say: the homotopization of mathematics has already been happily and eventfully proceeding all along. That’s what the “$n$” in the title of this blog here is meaning to pay tribute to, for instance.

But that’s only partial, right? Beyond the great city of homotopization, there still lies the distant dream of categorification.

Posted by: David Corfield on May 16, 2012 9:48 AM | Permalink | Reply to this

Re: What is homotopy type theory good for?
Beyond the great city of homotopization, there still lies the distant dream of categorification.

I don’t find this distinction helpful. These are just two words for aspects of the same thing. Homotopization is $(\infty,1)$-categorification, and categorification is directed homotopization. Instead of prolonging the schism, I’d rather see the mistake of making the distinction in the first place buried and forgotten.

Posted by: Urs Schreiber on May 16, 2012 12:25 PM | Permalink | Reply to this

Re: What is homotopy type theory good for?

I’m curious what “bad salesmanship” of homotopy theory David has in mind. I’ve certainly heard that charge leveled against category theory (with some justification, as far as I can tell), but never before against homotopy theory.

Posted by: Mike Shulman on May 15, 2012 6:31 PM | Permalink | Reply to this
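For reference, the recursion and induction principles for the localization higher inductive type discussed earlier in this thread can be typeset as follows. This is only a summary of what is said in the comments above (the universal property of the reflection, and the "total space is local" induction principle), not a quotation from any paper:

```latex
% Recursion principle: precomposition with the unit X -> localize(X)
% is an equivalence whenever the target Y is local:
\[
  (\mathrm{localize}(X) \to Y) \;\xrightarrow{\;\simeq\;}\; (X \to Y),
  \qquad Y \text{ local.}
\]
% General induction principle: a dependent type P over localize(X)
% admits a section as soon as its total space is local:
\[
  \Big(\textstyle\sum_{z : \mathrm{localize}(X)} P(z)\ \text{local}\Big)
  \quad\Longrightarrow\quad
  \text{there is a section}\ \textstyle\prod_{z : \mathrm{localize}(X)} P(z).
\]
```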
A technique for me is a task for you

Originally in the context of Braque and now in the context of FUSE, I've thought a bit about understanding the role of techniques and tasks in scientific papers (admittedly, mostly NLP and ML, which I realize are odd and biased). I worked with Sandeep Pokkunuri, an MS student at Utah, looking at the following problem: given a paper (title, abstract, fulltext), determine what task is being solved and what technique is being used to solve it. For instance, for a paper like "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data" the task would be "segmenting and labeling sequence data" and the technique would be "conditional random fields."

You can actually go a long way just looking for simple patterns in paper titles, like "TECH for TASK" or "TASK by TECH" and a few things like that (after doing some NP chunking and clean-up). From there you can get a good list of seed tasks and techniques, and could conceivably bootstrap your way from there. We never got a solid result out of these, and sadly I moved and Sandeep graduated and it never went anywhere. What we wanted to do was automatically generate tables of "for this TASK, here are all the TECHs that have been applied (and maybe here are some results), oh and by the way maybe applying these other TECHs would make sense." Or vice versa: this TECH has been applied to blah blah blah tasks. You might even be able to tell what TECHs are better for what types of tasks, but that's quite a bit more challenging.

At any rate, a sort of "obvious in retrospect" thing that we noticed was that what I might consider a technique, you might consider a task. And you can construct a chain, typically all the way back to math. For instance, I might consider movie recommendations a task. To solve recommendations, I apply the technique of sparse matrix factorization.
But then to you, sparse matrix factorization is a task and to solve it, you apply the technique of compressive sensing. But to Scott Tanner, compressive sensing is a task, and he applies the technique of smoothed analysis (okay this is now false, but you get the idea). But to Daniel Spielman, smoothed analysis is the task, and he applies the technique of some other sort of crazy math. And then eventually you get to set theory (or some might claim you get to category theory, but they're weirdos :P).

(Note: I suspect the same thing happens in other fields, like bio, chem, physics, etc., but I cannot offer such an example because I don't know those areas. Although not so obvious, I do think it holds in math: I use the proof technique of Shelah35 to prove blah -- there, both theorems and proof techniques are objects.)

At first, this was an annoying observation. It meant that our ontology of the world into tasks and techniques was broken. But it did imply something of a richer structure than this simple ontology. For instance, one might posit as a theory of science and technology studies (STS, a subfield of social science concerned with related things) that the most basic thing that matters is that you have objects (things of study) and an appliedTo relationship. So recommender systems, matrix factorization, compressive sensing, smoothed analysis, set theory, etc., are all objects, and they are linked by appliedTos. You can then start thinking about what sort of properties appliedTo might have. It's certainly not a function (many things can be applied to any X, and any Y can be applied to many things). I'm pretty sure it should be antireflexive (you cannot apply X to solve X). It should probably also be antisymmetric (if X is applied to Y, probably Y cannot be applied to X).
Transitivity is not so obvious, but I think you could argue that it might hold: if I apply gradient descent to an optimization problem, and my particular implementation of gradient descent uses line search, then I kind of am applying line search to my problem, though perhaps not directly. (I'd certainly be interested to hear of counter-examples if any come to mind!) If this is true, then what we're really talking about is something like a directed acyclic graph, which at least at a first cut seems like a reasonable model for this world. Probably you can find exceptions to almost everything I've said, but that's why you need statistical models or other things that can deal with "noise" (aka model misspecification).

Actually something more like a directed acyclic hypergraph might make sense, since often you simultaneously apply several techniques in tandem to solve a problem. For instance, I apply subgradient descent and L1 regularization to my binary classification problem -- the fact that these two are being applied together rather than separately seems important somehow. Not that we've gone anywhere with modeling the world like this, but I definitely think there are some interesting questions buried in this problem.

2 comments:

rrenaud said...
Here is a nice problem where X is used to solve Y, and Y is used to solve X. Here is a case where LCA is used to solve RMQ, and then restricted RMQ is used to solve LCA. RMQ -> LCA -> Restricted RMQ

Mark said...
Hi Hal, Nice post. I've also been looking into how discussion of tasks might be separated from the techniques used to solve them. It hadn't occurred to me that task vs. technique was viewpoint-sensitive. By the way, I am co-organising a workshop on relating machine learning problems at NIPS this year: I hope you can make it to the workshop as the discussion session could really benefit from your input given you've been working along these lines.
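As a concrete illustration of the model the post proposes, here is a minimal sketch of objects linked by an appliedTo relation, with the irreflexivity and acyclicity (hence antisymmetry) checks enforced at insertion time. All object names are made up for illustration; the RMQ/LCA example in the comments shows real data would sometimes need to tolerate violations rather than reject them.

```python
# Sketch of the appliedTo relation as a DAG: edges point from a
# technique to the task it is applied to, and inserting an edge that
# would create a cycle (or a self-loop) is rejected.
class AppliedTo:
    def __init__(self):
        self.edges = set()  # set of (technique, task) pairs

    def _reaches(self, src, dst):
        """Depth-first search: is dst reachable from src along edges?"""
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(y for (x, y) in self.edges if x == node)
        return False

    def apply(self, tech, task):
        if tech == task:
            raise ValueError("appliedTo is irreflexive: X cannot solve X")
        if self._reaches(task, tech):
            raise ValueError("edge would create a cycle (violates antisymmetry)")
        self.edges.add((tech, task))

g = AppliedTo()
g.apply("matrix factorization", "movie recommendations")
g.apply("compressive sensing", "matrix factorization")
# transitively, compressive sensing now "reaches" movie recommendations
```

Note the cycle check makes transitivity observable for free via `_reaches`; a hypergraph variant would replace `tech` with a frozenset of techniques applied in tandem.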
Basic Computer Parts Quizzes: Sample Questions

- An electronic tool that allows information to be input, processed, and output
- A worldwide network of computers
- The brain of the computer: this part does the calculation, moving, and processing of information

This quiz will evaluate how well elementary students know the basic computer parts and definitions.
Prospect Heights Math Tutor

Find a Prospect Heights Math Tutor

- ...I have worked with many students in Algebra 2 courses. I have tutored all the topics dealing with factoring, FOILing, solving for variables using quadratic equations, and graphing quadratic equations as well. I have also taught solving systems of equations using graphing, elimination (combination), and substitution methods.
  11 Subjects: including algebra 1, algebra 2, calculus, ACT Math

- ...I am a junior-level high school math teacher and have been for 7 years. I prepare all of my classes each year to take the ACT and SAT with ACT- and SAT-style questions as well as helpful hints for doing well on the test. I have prep books to go through in order to help raise scores.
  8 Subjects: including algebra 1, algebra 2, geometry, prealgebra

- ...I have applied these tools and techniques when teaching English classes and when I tutored a few (non-ESL) students in reading comprehension skills. I have been teaching college composition (English 101 and 102) for more than six years, including time spent as a graduate teaching assistant while...
  17 Subjects: including algebra 1, algebra 2, grammar, geometry

- ...I was valedictorian in elementary school, achieved AP scores of 5/5 in French Language, French Literature, AB Calculus, and Biology in high school, and graduated from college as J.N. Honors Scholar Cum Laude with a BS in Elementary Education. I am certified to teach grades K-8 in Michigan and grades 1-6 in NY.
  19 Subjects: including algebra 1, geometry, precalculus, ACT Math

- ...My tutoring radius is 20 miles. If a student is at the end of the radius, I would prefer to find a meeting place between both locations, such as a public library, etc. If this is not an option, I will drive the full distance with an increase in the hourly tutoring rate.
  15 Subjects: including algebra 1, algebra 2, statistics, trigonometry
The iterated Carmichael λ-function and the number of cycles of the power generator, Acta Arith.

"... We consider two standard pseudorandom number generators from number theory: the linear congruential generator and the power generator. For the former, we are given integers e, b, n (with e, n > 1) and a seed u0, and we compute the sequence ..." Cited by 7 (2 self)

MATH. COMP, 2010
"... In this paper we study a class of dynamical systems generated by iterations of multivariate polynomials and estimate the degree growth of these iterations. We use these estimates to bound exponential sums along the orbits of these dynamical systems and show that they admit much stronger estimates than in the general case and thus can be of use for pseudorandom number generation. ..." Cited by 2 (2 self)

Discr. and Cont. Dynam. Syst., Ser. A
"... We use character sums to confirm several recent conjectures of V. I. Arnold on the uniformity of distribution properties of a certain dynamical system in a finite field. On the other hand, we show that some conjectures are wrong. We also analyze several other conjectures of V. I. Arnold related to the orbit length of similar dynamical systems in residue rings and outline possible ways to prove them. We also show that some of them require further tuning. ..." Cited by 2 (2 self)

"... ABSTRACT. We study the distribution of prime chains, which are sequences p_1, ..., p_k of primes for which p_{j+1} ≡ 1 (mod p_j) for each j. We first give conditional upper bounds on the length of Cunningham chains, chains with p_{j+1} = 2p_j + 1 for each j. We give estimates for P(x), the number of chains with p_k ≤ x (k variable), and P(x; p), the number of chains with p_1 = p and p_k ≤ px. The majority of the paper concerns the distribution of H(p), the length of the longest chain with p_k = p, which is also the height of the Pratt tree for p. We show H(p) ≥ c log log p and H(p) ≤ (log p)^{1−c′} for almost all p, with c, c′ explicit positive constants. We can take, for any ε > 0, c = e − ε assuming the Elliott-Halberstam conjecture. A stochastic model of the Pratt tree is introduced and analyzed. The model suggests that for most p ≤ x, H(p) stays very close to e log log x. ..." Cited by 1 (0 self)

"... Abstract. Let ϕ and λ be the Euler and Carmichael functions, respectively. In this paper, we establish lower and upper bounds for the number of positive integers n ≤ x such that ϕ(λ(n)) = λ(ϕ(n)). We also study the normal order of the function ϕ(λ(n))/λ(ϕ(n)). ..."
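Since the abstracts above are excerpts, the generators themselves are not spelled out in full; the standard definition of the power generator, iterating u ↦ u^e (mod n) from a seed u0, is assumed here. The "number of cycles" in the first title refers to the eventually periodic structure of such orbits, which a small sketch can measure directly:

```python
def orbit_shape(u0, e, n):
    """Iterate the power generator u -> u**e (mod n) from seed u0.

    Returns (tail, cycle): the length of the pre-periodic tail and the
    length of the cycle the orbit eventually falls into.
    """
    seen = {}              # value -> index at which it first appeared
    u, i = u0 % n, 0
    while u not in seen:
        seen[u] = i
        u = pow(u, e, n)   # built-in three-argument pow: modular exponentiation
        i += 1
    return seen[u], i - seen[u]

# e.g. the squaring generator mod 11 from seed 2 runs 2, 4, 5, 3, 9, 4, ...
tail, cycle = orbit_shape(2, 2, 11)   # -> tail 1, cycle 4
```

Enumerating cycle lengths over all seeds for a given modulus is what connects this to the iterated Carmichael λ-function studied in the paper.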
An Open Architecture for Improving VLSI Circuit Performance

Fred W. Obermeier
EECS Department, University of California, Berkeley
Technical Report No. UCB/CSD-89-522
August 1989

Electrical performance and area improvement are important parts of the overall integrated circuit design task. However, few design tools allow easy exploration of the design space (area, delay, and power) or offer designers different performance alternatives. Given designer-specified constraints on area, delay, and power, EPOXY sizes a circuit's transistors and attempts small circuit changes to help meet the constraints. The system provides an open, flexible framework for developing and evaluating the effects of different area and electrical models, optimization algorithms, and circuit modifications.

EPOXY takes a physical and electrical description of the circuit and produces a series of symbolic equations that model its performance. This yields circuit-performance evaluation 5 times faster than Crystal, and 56 times faster when these equations are subsequently compiled. EPOXY employs a virtual-grid area model, since the sum of transistor areas is a better measure of dynamic power than cell area. Optimization of a CMOS eight-stage inverter chain illustrates this difference: a typical minimum-power implementation is 32% larger than the one for minimum area.

Next, EPOXY attempts to find a parameter assignment for the input variables of these equations, the transistor widths, that meets the constraints while minimizing a user-defined objective function. Previous transistor-sizing systems are limited to fixed electrical models and consider only time and power tradeoffs. After evaluating two non-linear optimization techniques, a TILOS-style heuristic and an augmented-Lagrangian algorithm, a combination of the two was found to produce quality results rapidly.

If the performance constraints cannot be met by transistor sizing, EPOXY considers inserting buffer stages, rearranging transistors within a pull-down or pull-up tree, and splitting large transistors so that cell height and width can be traded off. This level handles the discrete decisions of proposing circuit alternatives, while the two lower levels determine the best possible implementation for each alternative. From an implementation viewpoint, EPOXY's underlying equation abstraction of circuit performance automatically provides critical-path information and allows rapid modification of the circuit structure. A typical speed improvement of 23% over transistor sizing alone was achieved for several CMOS circuits while satisfying difficult height (pitch) constraints.

Advisor: Randy H. Katz

BibTeX citation:

@techreport{Obermeier:CSD-89-522,
  Author = {Obermeier, Fred W.},
  Title  = {An Open Architecture for Improving VLSI Circuit Performance},
  School = {EECS Department, University of California, Berkeley},
  Year   = {1989},
  Month  = {Aug},
  Number = {UCB/CSD-89-522},
  URL    = {http://www.eecs.berkeley.edu/Pubs/TechRpts/1989/5914.html}
}

EndNote citation:

%0 Thesis
%A Obermeier, Fred W.
%T An Open Architecture for Improving VLSI Circuit Performance
%I EECS Department, University of California, Berkeley
%D 1989
%@ UCB/CSD-89-522
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/1989/5914.html
%F Obermeier:CSD-89-522
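The TILOS-style heuristic mentioned in the abstract can be sketched on a toy model. The following is not EPOXY's implementation: it is only an illustration of greedy sensitivity-based sizing on an inverter chain with a crude Elmore-style delay model, and all constants, names, and parameters are hypothetical.

```python
def chain_delay(widths, r=1.0, c_in=1.0, c_load=8.0):
    """Elmore-style delay of an inverter chain: stage i has resistance r/w_i
    and drives stage i+1's gate capacitance (or a fixed output load)."""
    d = 0.0
    for i, w in enumerate(widths):
        load = c_in * widths[i + 1] if i + 1 < len(widths) else c_load
        d += (r / w) * load
    return d

def tilos_size(widths, target, step=1.1, max_iters=1000):
    """Greedy TILOS-style sizing: repeatedly widen the transistor whose
    enlargement buys the most delay reduction per unit of added width."""
    widths = list(widths)
    for _ in range(max_iters):
        delay = chain_delay(widths)
        if delay <= target:
            break
        best_i, best_gain = None, 0.0
        for i in range(len(widths)):
            trial = list(widths)
            trial[i] *= step
            gain = (delay - chain_delay(trial)) / (trial[i] - widths[i])
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:   # no single move reduces delay: give up
            break
        widths[best_i] *= step
    return widths

sized = tilos_size([1.0, 1.0, 1.0], target=8.0)
```

The greedy step is the "sensitivity" idea: each iteration spends area where it buys the most delay, which is why such heuristics pair well with an augmented-Lagrangian refinement as described above.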
From HaskellWiki: Diagrams/Dev/Transformations
Latest revision as of 02:54, 1 December 2013

[edit] 1 Linear and affine transformations

A linear transformation on a vector space V is a function $f : V \to V$ satisfying

• f(kv) = kf(v)
• f(u + v) = f(u) + f(v)

where k is an arbitrary scalar and u, v are arbitrary vectors. Linear transformations always preserve the origin, send lines to lines, and preserve the relative distances of points along any given line. The image of parallel lines under a linear transformation is again parallel lines. However, linear transformations do not necessarily preserve angles between lines. Examples of linear transformations include

• rotation
• reflection
• scaling
• shear

Linear transformations are closed under composition and form a monoid with the identity function as the neutral element.

An affine transformation is a function of the form a(v) = f(v) + u where f is a linear transformation and u is some vector. Affine transformations preserve all the same things as linear transformations, except that they do not necessarily send the origin to itself. Affine transformations include all the above examples, along with translations. Affine transformations are also closed under composition, and are used as the basis for all transformations in diagrams.

Note that using homogeneous coordinates, affine transformations can be seen as a certain subset of linear transformations in a vector space one dimension higher. (diagrams used to actually use this approach for representing affine transformations, but no longer does.)
[edit] 2 Matrices, inverse and transpose transformations

Any linear transformation in an n-dimensional space can be represented as an $n \times n$ matrix: in particular, the matrix whose i-th column is the result of applying the transformation to the unit vector with a 1 as its i-th entry and 0s everywhere else. For example,

$\begin{bmatrix} 4 & 5 \\ 6 & 2 \end{bmatrix}$

is the linear transformation that sends (1,0) to (4,6) and sends (0,1) to (5,2). It's easy to see that multiplying a matrix by such a unit vector picks out a column of the matrix, as I claimed. It's also not too hard to see that the matrix's action on any other vector is completely determined by linearity. Finally, of course, one can verify that matrix multiplication is, in fact, linear.

By the inverse of a linear map (note I use "linear transformation" and "linear map" interchangeably), we mean its inverse under composition: a transformation T^{-1} such that $T \circ T^{-1} = T^{-1} \circ T = id$. We can also think about this in terms of matrices: the inverse of a matrix M is a matrix M^{-1} such that MM^{-1} = M^{-1}M = I, the identity matrix (with ones along the diagonal and zeros everywhere else). One can check that (1) the identity matrix corresponds to the identity linear map, and (2) matrix multiplication corresponds to composition of linear maps. So these are really two different views of the same thing.

The transpose of a matrix (denoted $M^\top$) is the matrix you get when you reflect all the entries about the main diagonal. For example,

$\begin{bmatrix} 4 & 5 \\ 6 & 2 \end{bmatrix}^\top = \begin{bmatrix} 4 & 6 \\ 5 & 2 \end{bmatrix}$

This is (usually) not the same as the inverse, as can be easily checked. Now, how can we interpret the transpose operation on matrices as an operation on linear maps? Of course, we can just say "the transpose of a linear map is defined to be the linear map corresponding to the transpose of its matrix representation".
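The columns-as-images fact is easy to check on the example matrix above; a minimal Python sketch (hand-rolled 2×2 helper, no libraries):

```python
def apply_mat(m, v):
    """Apply a 2x2 matrix (row-major nested lists) to a vector."""
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

M = [[4, 5],
     [6, 2]]

# The columns are exactly the images of the standard basis vectors:
assert apply_mat(M, (1, 0)) == (4, 6)
assert apply_mat(M, (0, 1)) == (5, 2)
# ... and linearity determines the action on every other vector:
assert apply_mat(M, (2, 3)) == (2 * 4 + 3 * 5, 2 * 6 + 3 * 2)
```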
But I actually don't have any intuitive sense for what the transpose of a linear map IS in any more fundamental sense. What we really want to think about is the inverse transpose, that is, the inverse of the transpose of a matrix, denoted $M^{-\top}$. Suppose we have a vector v, and another vector p which is perpendicular to v. Now suppose we apply a linear map (represented by a matrix M) to the vector v, giving us a new vector v' = Mv. In general, Mp may not be perpendicular to Mv, since linear transformations do not preserve angles. Some linear maps such as rotations and uniform scales do preserve perpendicularity (if that's a word), but not all linear maps do: for example, the 2D linear map that scales all x-coordinates by 2 but leaves y-coordinates unchanged. But what if we want to obtain another vector perpendicular to v'? In fact, the thing to do is to multiply p not by M, but by the inverse transpose of M. That is: if v and p are perpendicular vectors, then so are Mv and $(M^{-\top})p$. This can be proved by noting that the dot product of two vectors $v_1 \bullet v_2$ can equivalently be written $v_1^\top v_2$ (treating vectors as $n \times 1$ column matrices), and that two vectors are perpendicular iff their dot product is zero. Hence $(Mv) \bullet (M^{-\top}p) = (Mv)^\top(M^{-\top}p) = v^\top M^\top M^{-\top}p = v^\top p = 0$ Now, if one has a matrix M it is easy to compute its transpose, and not too difficult to compute its inverse. However, in diagrams, given a transformation we don't have its matrix representation. Linear maps are represented more directly (and efficiently) as functions. We could generate a matrix by applying the transformation to basis vectors, but this is ugly, and converting a matrix back to a functional linear map is even uglier, and probably loses a lot of efficiency as well. What to do? We need to carry around enough extra information with each transformation so that we can extract the inverse transpose. 
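The perpendicularity-preservation claim above is easy to check numerically. A small sketch using a 2D shear, which does not preserve angles (hand-rolled 2×2 helpers, no libraries):

```python
def matvec(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1], m[1][0] * v[0] + m[1][1] * v[1])

def inverse(m):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

def transpose(m):
    return [[m[0][0], m[1][0]], [m[0][1], m[1][1]]]

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

M = [[1.0, 2.0],      # a shear: not angle-preserving
     [0.0, 1.0]]
v, p = (3.0, 4.0), (-4.0, 3.0)        # a perpendicular pair
assert dot(v, p) == 0.0

Minv_T = transpose(inverse(M))
# Mapping v by M but p by the inverse transpose keeps them perpendicular:
assert abs(dot(matvec(M, v), matvec(Minv_T, p))) < 1e-12
# ... whereas mapping both by M does not:
assert abs(dot(matvec(M, v), matvec(M, p))) > 1e-6
```

This is exactly the $v^\top M^\top M^{-\top} p = v^\top p$ cancellation from the proof, observed on concrete numbers.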
We just have to make sure that this extra information is compositional, i.e. we know how to compose two transformations while preserving the extra information. It's easy to see that $(M^{-1})^{-1} = M = (M^\top)^\top$ (i.e. the inverse and transpose operations are involutions). It's not quite as obvious, but not too hard to show, that the inverse of the transpose is the same as the transpose of the inverse. Combining these facts, we see that for a given matrix M there are really only four matrices of interest: M, M^{-1}, $M^\top$, and $M^{-\top}$. We can arrange them conceptually in a square, thus:

$\begin{matrix} M & M^{-1} \\ M^\top & M^{-\top} \end{matrix}$

Then taking the inverse corresponds to a L-R reflection, and taking the transpose corresponds to a U-D reflection. Given two such sets of four linear maps, it's not hard to work out how to compose them: just compose the elements pairwise (i.e. compose the inverse with the inverse, the transpose with the transpose, etc.), keeping in mind that the inverse and transpose compose in the reverse order whereas the regular transformation and the inverse transpose compose in the normal order.

So each transformation is actually a set of four linear maps. To take the inverse or transpose of a transformation just involves shuffling the linear maps around. Composition works as described above. We just have to specify these four linear maps by hand for each primitive transformation the library provides, which can be done by appealing to the matrix representation of the primitive transformation in question. Now, so far we have only represented linear transformations: affine transformations can be recovered by storing just a single extra vector along with the four linear transformations.
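The pairwise composition rule, including the reversed order for the inverse and transpose, can be verified on concrete matrices. A sketch (M and N are arbitrary invertible 2×2 examples; helpers hand-rolled):

```python
def mul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(m):
    d = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

def tr(m):
    return [[m[0][0], m[1][0]], [m[0][1], m[1][1]]]

def close(a, b, eps=1e-12):
    return all(abs(a[i][j] - b[i][j]) < eps for i in range(2) for j in range(2))

M = [[1.0, 2.0], [0.0, 1.0]]
N = [[3.0, 0.0], [1.0, 2.0]]

# The inverse and transpose of M.N compose in the reverse order ...
assert close(inv(mul(M, N)), mul(inv(N), inv(M)))
assert close(tr(mul(M, N)), mul(tr(N), tr(M)))
# ... while the inverse transpose composes in the normal order:
assert close(inv(tr(mul(M, N))), mul(inv(tr(M)), inv(tr(N))))
```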
[edit] 3 Representation

Transformations are defined in Graphics.Rendering.Diagrams.Transform (from the diagrams-core package). There are additional comments there which may be helpful.

First, we define invertible linear maps:

data (:-:) u v = (u :-* v) :-: (v :-* u)

That is, a linear transformation paired with its inverse. We always carry around an inverse transformation along with every transformation. All the functions for constructing primitive transformations also construct an inverse, and when two transformations are composed their inverses are also composed (in the reverse order). Note that the :-* type constructor is from the vector-space package. Don't worry about how it is implemented (actually I don't even know).

Then an (affine) Transformation is defined as follows:

data Transformation v = Transformation (v :-: v) (v :-: v) v

There are three parts here. The first (v :-: v) is the normal transformation (the linear part of it, i.e. without translation) paired with its inverse. The second (v :-: v) is the transpose and inverse transpose. The final v is the translational component of the transformation.

For examples of creating Transformations, see the remainder of the Graphics.Rendering.Diagrams.Transform module, and also the Diagrams.TwoD.Transform module from the diagrams-lib package.

[edit] 4 Specifying transformations as matrices

To turn an arbitrary matrix into a Transformation would require the following:

• Assuming the matrix is represented using homogeneous coordinates, it must be checked that the matrix actually represents an affine transformation (as opposed to a general projective transformation). This corresponds to the bottom row of the matrix consisting of all zeros and a final 1.
• Extract the translational component from the final column of the matrix (except for the 1 in the bottom right corner).
• Extract the linear transformation component as the submatrix obtained by deleting the last row and column.
• Turn the matrix into a function by inlining the definition of matrix-vector multiplication. E.g. the matrix $\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ corresponds to the function $\lambda (x,y) \to (x + 2y, 3x + 4y)$.
• Compute the transpose, inverse, and inverse transpose of the matrix and turn the results into functions as well.

[edit] 5 Angle-preserving transformations

An angle-preserving transformation is one which preserves angles between all pairs of lines. It is possible to prove the following:

• A linear transformation L is angle-preserving iff there exists a constant λ such that $\|L(v)\| = \lambda \|v\|$ for all vectors v. (See http://www.cs.bsu.edu/homepages/fischer/math445/angles.pdf.)
• Intuitively, the previous result means that any angle-preserving linear transformation (other than the constantly zero transformation) can be uniquely decomposed as a uniform scale followed by a rotation.
• An affine transformation is angle-preserving iff it can be expressed as an angle-preserving linear transformation followed by a translation.
• Angle-preserving affine transformations are closed under composition.

The last two points in particular imply that we can uniquely represent angle-preserving affine transformations by a triple A = (θ, s, w) where A(v) = R_θ(sv) + w (R_θ indicates a rotation by θ), and given this representation we may compute

$(\theta_1, s_1, w_1) \circ (\theta_2, s_2, w_2) = (\theta_1 + \theta_2, s_1 s_2, R_{\theta_1}(s_1 w_2) + w_1)$

It is also easy to compute the inverse

$(\theta, s, w)^{-1} = (-\theta, 1/s, -R_{-\theta}(w/s))$

and the transpose of the linear part:

$(\theta, s)^\top = (-\theta, s)$

(The above calculations are simplified by the observation that rotations and uniform scales commute, though of course transformations do not commute in general.)

As a concrete proposal, we can change T2 from a synonym for Transformation R2 to a sum type

data T2
  = Ortho Angle Double R2
  | GenT2 (Transformation R2)

and change things like rotate to construct an Ortho instead of a Transformation R2. There is a bit of trickiness to be worked out with the function scaling, which currently has type Scalar v -> Transformation v, i.e. it is supposed to work over any vector space, but we want it to do something special for R2. I am not sure what the right solution is.

In any case, some benefits of such a scheme would be:

• Angle-preserving transformations are precisely those which preserve circular arcs. If we add a new primitive type of arc segment, we can look at the final transformation applied to the segment to determine whether we can simply issue a primitive arc command (if the transformation is an Ortho) or if we must issue an arc command wrapped in a transform (or approximate the arc with Beziers).
• For typical applications, Ortho transformations make up the vast majority of transformations actually used. Intuitively, it seems this should lead to a performance boost, since Ortho values consist of simple first-order data, making it easier for GHC to optimize e.g. chains of repeated transformations. General Transformation values, on the other hand, are functions, which can be tricky for GHC to optimize e.g. when deeply nested.
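The triple representation (θ, s, w) and the composition and inverse formulas above can be checked numerically. A small sketch (the particular values of A, B, and v are arbitrary):

```python
import math

def apply(t, v):
    """Apply A = (theta, s, w): v  ->  R_theta(s v) + w."""
    theta, s, w = t
    x, y = s * v[0], s * v[1]
    c, si = math.cos(theta), math.sin(theta)
    return (c * x - si * y + w[0], si * x + c * y + w[1])

def compose(t1, t2):
    """(theta1, s1, w1) . (theta2, s2, w2), per the formula above."""
    th1, s1, w1 = t1
    th2, s2, w2 = t2
    rw = apply((th1, s1, (0.0, 0.0)), w2)   # R_theta1(s1 w2)
    return (th1 + th2, s1 * s2, (rw[0] + w1[0], rw[1] + w1[1]))

def invert(t):
    """(theta, s, w)^{-1} = (-theta, 1/s, -R_{-theta}(w/s))."""
    theta, s, w = t
    rw = apply((-theta, 1.0 / s, (0.0, 0.0)), w)
    return (-theta, 1.0 / s, (-rw[0], -rw[1]))

A = (0.7, 2.0, (1.0, -3.0))
B = (-1.2, 0.5, (4.0, 2.0))
v = (2.5, -1.5)

# Composition formula agrees with applying B then A:
assert all(abs(a - b) < 1e-9
           for a, b in zip(apply(compose(A, B), v), apply(A, apply(B, v))))
# Inverse formula really inverts:
assert all(abs(a - b) < 1e-9
           for a, b in zip(apply(invert(A), apply(A, v)), v))
```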
Estimating the size of the solution space of metabolic networks

BMC Bioinformatics. 2008; 9: 240.

Cellular metabolism is one of the most investigated systems of biological interactions. While the topological nature of individual reactions and pathways in the network is quite well understood, there is still a lack of comprehension regarding the global functional behavior of the system. In the last few years flux-balance analysis (FBA) has been the most successful and widely used technique for studying metabolism at the system level. This method strongly relies on the hypothesis that the organism maximizes an objective function. However, only under very specific biological conditions (e.g. maximization of biomass for E. coli in a rich nutrient medium) does the cell seem to obey such an optimization law. A more refined analysis not assuming extremization remains an elusive task for large metabolic systems due to algorithmic limitations.

In this work we propose a novel algorithmic strategy that provides an efficient characterization of the whole set of stable fluxes compatible with the metabolic constraints. Using a technique derived from the fields of statistical physics and information theory, we designed a message-passing algorithm to estimate the size of the affine space containing all possible steady-state flux distributions of metabolic networks. The algorithm, based on the well-known Bethe approximation, can be used to approximately compute the volume of a non-full-dimensional convex polytope in high dimensions. We first compare the accuracy of the predictions with an exact algorithm on small random metabolic networks. We also verify that the predictions of the algorithm match closely those of Monte Carlo based methods in the case of the Red Blood Cell metabolic network.
Then we test the effect of gene knock-outs on the size of the solution space in the case of E. coli central metabolism. Finally, we analyze the statistical properties of the average fluxes of the reactions in the E. coli metabolic network.

We propose a novel, efficient, distributed algorithmic strategy to estimate the size and shape of the affine space of a non-full-dimensional convex polytope in high dimensions. The method is shown to obtain results quantitatively and qualitatively compatible with those of standard algorithms (where this comparison is possible) while remaining efficient on the analysis of large biological systems, where exact deterministic methods experience an explosion in algorithmic time. The algorithm we propose can be considered as an alternative to Monte Carlo sampling methods.

Cellular metabolism is a complex biological problem. It can be viewed as a chemical engine that transforms available raw materials into energy or into the building blocks needed for the biological function of the cells. In more specific terms, a metabolic network is indeed a processing system transforming input metabolites (nutrients) into output metabolites (amino acids, lipids, sugars, etc.) according to very strict molecular proportions, often referred to as the stoichiometric coefficients of the reactions. Although the general topological properties of these networks are well characterized [1-3], and non-trivial pathways are well known for many species [4], the cooperative role of these pathways is hard to comprehend. In fact, the large size of these networks, usually containing hundreds of metabolites and even more reactions, makes the comprehension of the principles that govern their global function a challenging task. Therefore, a necessary step toward this goal is the use of mathematical models and the development of novel statistical techniques to characterize and simulate these systems.

It is well known that under evolutionary pressure, prokaryotic cells like E. coli behave by optimizing their growth performance [5]. Flux Balance Analysis (FBA) provides a powerful tool to predict the optimal growth and production fluxes, but is not reliable regarding the phenotypic state the cell will acquire. This is mainly due to the fact that among the infinitely many potential network states compatible with the stoichiometric constraints, FBA chooses a single one whose biological meaning is at least questionable under generic external conditions. FBA maximizes a linear function (usually the growth rate of the cell) subject to biochemical and thermodynamic constraints [6]. On the other hand, cells with genetically engineered knockouts, or bacterial strains that were not exposed to evolutionary pressures, need not optimize their growth. In fact, the method of minimization of metabolic adjustment [7] has shown that knockout metabolic fluxes undergo a minimal redistribution with respect to the flux configuration of the wild type. Yet, in more general situations the results are unpredictable; therefore, a tool to characterize the shape and volume of the whole space of possible phenotypic solutions would be welcome. So far, apart from exact algorithms evaluating the volume of the space of possible solutions, which are unsuitable for analyzing metabolic networks larger than a few dozen metabolites [8,9], the best technique allowing for such a characterization is based on Monte Carlo sampling (MCS) of the steady-state flux space [10-14]. This method is known to perform very well on intermediate-size metabolic networks (up to a hundred metabolites) [10,11], where different strategies of MCS have been implemented giving comparable results. Some improved variants of MCS seem to perform well also on organism-wide networks, where the number of variables is of the order of a thousand [13,14].
Although MCS has in general some intrinsically associated problems, mainly due to the fact that the convergence (or mixing) time is hard to assess and is often exponential, in the case of polytope volume estimation it turns out that sampling strategies such as hit-and-run have mixing times that scale only polynomially with system size [15]. However, in many concrete cases a practical problem is to give a precise condition for convergence; therefore we believe that an alternative, independent technique would be more than welcome even in the cases where MCS is applicable. As a concrete step toward an efficient characterization of the set of fluxes compatible with the stoichiometric constraints, we propose a novel message-passing technique derived from the fields of statistical physics and information theory.

Mathematical Model

As already mentioned, a metabolic network is an engine that converts metabolites into other metabolites through a series of intra-cellular intermediate steps. The fundamental equation characterizing all functional states of a reconstructed biochemical reaction network is a mass conservation law that imposes simple linear constraints between the incoming and outgoing fluxes at any chemical node:

dρ/dt = Ŝ ν + i − o     (1)

where ρ is the vector of the M metabolite concentrations in the network, i (o) is the input (output) vector of fluxes, and ν is the vector of reaction fluxes governed by the M × N stoichiometric linear operator Ŝ (usually named the stoichiometric matrix) encoding the coefficients of the M intra-cellular relations among the N fluxes. As long as just steady-state cellular properties are concerned, one can assume that variations in the concentrations of metabolites in a cell can be ignored, the concentrations being considered constant. Therefore, in the case of fixed external conditions, one can assume (quasi) stationarity of the metabolites, and consequently the lhs of 1 can be set to zero.
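The stationarity condition just described (the time derivative of every metabolite concentration vanishes) can be illustrated on a hypothetical toy network; a minimal sketch checking that a candidate flux vector balances every metabolite (the matrix and fluxes are invented for illustration):

```python
# Toy stoichiometry: 2 metabolites (A, B), 3 reactions:
# reaction 1 produces A, reaction 2 converts A -> B, reaction 3 consumes B.
S = [[ 1, -1,  0],   # mass balance for A
     [ 0,  1, -1]]   # mass balance for B

def is_steady(S, nu, b, eps=1e-12):
    """Check the flux-balance condition S . nu = b row by row."""
    for row, bi in zip(S, b):
        if abs(sum(s * n for s, n in zip(row, nu)) - bi) > eps:
            return False
    return True

b = [0.0, 0.0]                               # no net uptake
assert is_steady(S, [2.0, 2.0, 2.0], b)      # any uniform flux is steady
assert not is_steady(S, [2.0, 1.0, 1.0], b)  # metabolite A would accumulate

# With N = 3 reactions and M = 2 independent balances, the steady-state set
# is (N - M) = 1-dimensional; flux bounds then cut it down to a segment,
# i.e. a (very small) instance of the polytope studied in this paper.
```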
Under these hypotheses, the problem of finding the metabolic fluxes compatible with flux balance is mathematically described by the linear system of equations

Ŝ ν = b     (2)

where b is the net metabolite uptake by the cell. Without loss of generality we can assume that the stoichiometric matrix Ŝ has full row rank, i.e. that rank(Ŝ) = M, since linearly dependent equations can be easily identified and removed. Given that the number of metabolites M is lower than the number of fluxes N, the subspace of solutions is an (N − M)-dimensional manifold embedded in the N-dimensional space of fluxes. In addition, the positivity of the fluxes, together with the experimentally accessible values for the maximal fluxes, limits further the space of feasible solutions. This fact may be expressed by the following inequalities:

0 ≤ ν_i ≤ ν_i^max,  i = 1, ..., N     (3)

in such a way that, together, 2 and 3 define the convex set of all the allowed time-independent phenotypic states of a given metabolic network.

Sub-dimensional volumes

Mathematically speaking, the space of feasible solutions consistent with Equations 2 is an affine space V ⊂ ℝ^N of dimension N − M. The set of inequalities 3 then defines a convex polytope Π ⊂ V that, from the metabolic point of view, may be considered as the allowed configuration space for the cell states. The main goal of this work is the computation of the volume of this space of solutions and of certain subspaces of it. Although conceptually simple, the notion of a sub-dimensional volume like that of Π requires some new definitions. Consider any linear parameterization ϕ : ℝ^{N−M} → V ⊂ ℝ^N (see the explicative scheme in Figure 1). A popular choice for ϕ is, for instance, the inverse of the so-called lexicographical projection, i.e. the projection over the first N − M coordinates, chosen such that its restriction to V has an inverse. Being ϕ linear, the N × (N − M) Jacobian matrix λ̂ is constant and coincides with the matrix of ϕ in the canonical bases.
Denoting λ = det(λ̂†λ̂)^(1/2), the Euclidean metric in ℝ^N induces a measure on V (which does not depend on ϕ):

dσ(ϕ(x)) = λ d^(N-M)x,     (4)

[Figure 1 caption: Embedding. Sketch of the parameterization ϕ : ℝ^(N-M) → V ⊂ ℝ^N in a simple case where N = 3 and M = 1. The mass-balance equations defined in Equation 2 define the hyperplane V, and the set of inequalities defined ...]

allowing us to compute the volume of our polytope:

vol(Π) = λ ∫ 1_Π(ϕ(x)) d^(N-M)x,     (5)

where 1_Π(·) is the indicator function of the set Π. It is worth pointing out that, given the linear structure of the metabolic equations, the determinant of the mapping is a (scalar) constant. Although the coefficient λ could be explicitly calculated, it turns out that as far as only relative volumes are concerned, as in the case of the in silico flux knock-outs introduced below, this term factors out; we will therefore drop it from the rest of the computation.

Probabilistic framework

The problem of describing the polytope Π can be cast into a probabilistic framework. We define the probability density P as:

P(ν) = 1_Π(ν) / vol(Π).

Marginal flux probabilities over a given set of fluxes are obtained by integrating out all remaining degrees of freedom. In particular, we can define the single-flux marginal probability densities as integrals over the affine subspace W = V ∩ {ν_i = ν̄}:

P_i(ν̄) = vol_W(Π ∩ W) / vol(Π),

where vol_W(Π ∩ W) is the (sub-dimensional) volume of the intersection between the polytope Π and the hyperplane {ν_i = ν̄}, as displayed schematically in Figure 2: the marginal probability at the point ν_i = ν̄ is proportional to the length of the blue segment, the intersection between the polytope Π and the plane ν_i = ν̄. [Figure 2 caption: Marginal probability. Geometrical interpretation of the marginal probability distributions ...]
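Marginals of exactly this kind are what MCS estimates empirically: sample uniform points inside the polytope and histogram one coordinate. A generic hit-and-run step on a polytope written as {x : Ax ≤ b} can be sketched as follows; this is a bare illustration of the sampling strategy cited in the text, not the implementation used in the cited studies:

```python
import numpy as np

def hit_and_run(A, b, x0, n_samples, rng=None):
    """Hit-and-run sampler on the polytope {x : A @ x <= b}.
    x0 must be a strictly interior point."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        # Pick a uniformly random direction on the unit sphere.
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)
        # The chord {x + t*d} stays feasible while, for every constraint,
        # t * (a.d) <= b - a.x; each constraint bounds t on one side.
        Ad = A @ d
        slack = b - A @ x
        t_hi = np.min(slack[Ad > 0] / Ad[Ad > 0]) if np.any(Ad > 0) else np.inf
        t_lo = np.max(slack[Ad < 0] / Ad[Ad < 0]) if np.any(Ad < 0) else -np.inf
        # Jump to a uniform point on the chord.
        x = x + rng.uniform(t_lo, t_hi) * d
        samples.append(x.copy())
    return np.array(samples)

# Demo: the unit square [0,1]^2 written as four half-plane constraints.
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.array([1., 0., 1., 0.])
samples = hit_and_run(A, b, [0.5, 0.5], 2000, rng=0)
```

Histogramming one column of `samples` then approximates the corresponding single-flux marginal P_i.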
Approximate volume computation

From a computational point of view, the exact computation of the volume of a polytope with current methods requires the enumeration of all its vertices. The vertex enumeration problem is #P-hard [16,17], but even computing the volume given the set of all vertices is a big computational challenge. Various algorithms exist for calculating the exact volume of a polytope from its vertices (for a review see [18]), and many software packages are available on the Internet. Computational limitations, however, restrict exact algorithmic strategies to polytopes in relatively few dimensions (e.g. N - M around 10 or so). To overcome these severe limitations we introduce a very efficient approximate computational strategy that allows us to compute the volume and the shape of the space of solutions for real-world metabolic networks. We adopt the following three-step strategy:

• We discretize the problem à la Riemann, considering an N-dimensional square lattice whose elementary cell is of size ε^N. The approximate volume is then proportional to the number of cells intersecting the polytope Π. Of course, the smaller ε, the better the approximation.

• We consider an integer constraint satisfaction problem in which each of the mass-balance equations sets a hard constraint over the involved discretized fluxes.

• We solve the constraint satisfaction problem using a message-passing algorithm called Belief Propagation.

Consider the regular orthogonal grid Λ_ε of side ε partitioning ℝ^N, as in the simple sketch of Figure 3. This grid maps via ϕ^(-1) into a partition Γ_ε of ϕ^(-1)(Π). The number of cells N_ε of Λ_ε intersecting Π is proportional to the number of cells of Γ_ε intersecting ϕ^(-1)(Π). Finally, the volume in Eq. 5 is proportional to lim_(ε → 0) ε^(N-M) N_ε.
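The counting step above can be illustrated on a full-dimensional toy example. One simplification in the sketch below: the text counts the cells intersecting Π, while here we count the cells whose center lies in Π, which converges to the same ε → 0 limit:

```python
import numpy as np

def grid_volume(inside, lo, hi, eps):
    """Riemann-style estimate: count grid cells of side eps whose center
    satisfies the vectorized membership test `inside`, then multiply by
    the cell volume.  A toy version of the counting step in the text."""
    axes = [np.arange(l + eps / 2, h, eps) for l, h in zip(lo, hi)]
    grids = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([g.ravel() for g in grids], axis=1)
    return inside(pts).sum() * eps ** len(lo)

# 2-D simplex {x >= 0, y >= 0, x + y <= 1}: exact area 1/2.
est = grid_volume(lambda p: p.sum(axis=1) <= 1.0, [0, 0], [1, 1], 0.01)
# est approaches 0.5 as eps shrinks
```

The brute-force grid is of course exponential in the dimension; the message-passing machinery described next is precisely what replaces this enumeration for large N.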
For any given ε we have thus defined discrete variables ν_i ∈ {0, ..., q_i^max}, with q_i^max equal to the integer part of q^max × ν_i^max, where the integer q^max sets the granularity of the approximation. [Figure 3 caption: Tiling. Discretization of the volume: in the left part of the figure we display the regular orthogonal grid Λ_ε of side ε partitioning ℝ^N. The counter-image of Λ_ε via ϕ is given by the grid ...]

The constraint satisfaction problem

When dealing with integer coefficients s_(i,a), as the ones appearing in normal stoichiometric relations, the discrete version of Eqs. 2 closes in the set of positive integers, defining a constraint satisfaction problem. The ε-approximation of the volume is then the number of elementary cells that are solutions of the discretized mass-balance equations. It is interesting to note that in the case of fractional stoichiometric coefficients one can multiply all terms by the least common multiple of the denominators, obtaining an equivalent mass-balance equation with integer coefficients.

Belief Propagation

The integer constraint satisfaction problem is solved using Belief Propagation (BP), a local iterative algorithm that allows for the computation of marginal probability distributions. BP is exact on trees, and performs reasonably well on locally tree-like structures [19-22]. This approximation scheme allows for the computation of the logarithm of the number of solutions via the entropy, which can be expressed in terms of the flux marginals. We give a detailed derivation of the equations in the Methods section.

Results and Discussion

Performance on low-dimensional systems

In this section we analyze the performance of our algorithm against an exact algorithm on low-dimensional polytopes. Among the different packages available on the Internet, we have chosen LRS [8], a program based on the reverse search algorithm presented by Avis and Fukuda in [9] that can compute the volume of non-full-dimensional polytopes.
Actually, it computes the volume of the lexicographically smallest representation of the polytope, which, for the benchmark used below, coincides with the conventional volume estimated by our algorithm. We have devised a specific benchmark generating random diluted stoichiometric matrices at a given ratio α = M/N and with a fixed number K of non-zero terms in each of the reactions. All fluxes were constrained inside the hypercube 0 ≤ ν_i ≤ 1. As a general strategy we generated several random instances of the problem and measured the volume (entropy) of the polytope using both the LRS and the BP algorithm. In particular, we first generated 1000 realizations of random stoichiometric matrices with N = 12, M = 4. Note that N = 12 is around the maximum that allows simulations with LRS in reasonable time (around one hour per instance). For each polytope we then computed the two entropies S_LRS and S_BP with the two algorithms, fixing the same maximum value of the discretization, q^max = 1024, for all fluxes. In Figure 4 we show how the quality of the BP measure is affected by the discretization, by displaying the histogram of the relative differences δS = (S_BP - S_LRS)/S_LRS for an increasing number of bins per variable, q^max = 16, 64, 256, 1024. One can see how a finer binning of the messages improves the quality of the approximation, seemingly converging to a single distribution of errors. It is expected that for larger N the histograms would peak around the true value: upon increasing the number of fluxes, loops become larger and the overall topology of the graph becomes more locally tree-like, validating the hypothesis behind the Bethe approximation. Unfortunately, the huge increase of computer time experienced in the calculation of the volumes using LRS made it impossible to test systems large enough for any reasonable scaling analysis. Discrepancy histogram.
Histograms of δS = (S_BP - S_LRS)/S_LRS, where S_LRS and S_BP are the estimates of the logarithm of the volume computed using, respectively, the program LRS [8] and our message-passing algorithm. The measurement of δS ...

Finally, we address the issue of the computational complexity of the algorithm, which is a crucial one if one is interested in approaching real-world metabolic networks, whose size is typically at least 50 times that of the largest network that can be analyzed with exact algorithms. In Figure 5 we display the running time of both LRS and BP as a function of the number of fluxes N. Interestingly, LRS outperforms BP up to sizes N ~ 12, where the running time of LRS explodes exponentially while BP maintains a modest, almost-linear behavior. [Figure 5 caption: Running time. Logarithm of the running time τ expressed in seconds vs. N for the LRS algorithm and the BP algorithm. Averages are taken over 5000 realizations of random stoichiometric matrices at each value of N, except in the N = 12 case where we analyzed ...]

Distribution of fluxes in the Red Blood Cell

We used our BP algorithm and MCS [11] to obtain the marginal flux distributions for each of the reactions in the Red Blood Cell (RBC) metabolism, taking the same stoichiometric matrix presented in [10,11] (see Additional file 1). The network contains 46 reactions and 34 metabolites. The first comparison of MCS with our BP algorithm is done setting all ν_i^max to 1 (the resulting marginal probability distributions for the fluxes are displayed in Figure 6). The second comparison is done by setting all ν_i^max to the values used in [10] (results are displayed in Figure 7). In both settings we compared BP with a set of 5000 feasible solutions generated by MCS, while for the BP algorithm we used q^max = 2048. As can be seen, in both cases the predictions of the two methods compare rather well. Assuming the accuracy of MCS, the differences are probably due to small loop structures in the graph.
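One simple way to make such comparisons quantitative is a per-reaction total-variation distance between the two estimated marginals binned on a common grid. This is our own small sketch with invented numbers, not a statistic used in the text:

```python
import numpy as np

def tv_distance(p, q):
    """Total-variation distance between two discrete marginals
    (normalized histograms on the same bins): 0 = identical, 1 = disjoint."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    return 0.5 * np.abs(p / p.sum() - q / q.sum()).sum()

# Hypothetical BP marginal vs. a histogram of MC samples for one flux:
bp_marginal = np.array([0.1, 0.4, 0.3, 0.2])
mc_counts = np.array([12, 38, 31, 19])       # counts over 100 sampled solutions
d = tv_distance(bp_marginal, mc_counts)
```

A value of d close to zero for every reaction would quantify the visual agreement seen in the figures.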
We leave for future work the issue of a more detailed comparison of the two methods. We believe, however, that the emerging scenario of RBC metabolism captured by BP is analogous to that obtained by MCS, both quantitatively and qualitatively. [Figures 6 and 7 captions: Distribution of fluxes in Blood Cells. Marginal probability distributions of the flux values for each of the 46 reactions in the red blood cell network computed using our message-passing algorithm (filled area) and the MC method (lines). Panels are arranged ...]

The MCS method appears to be quite efficient (the authors of [11] reported, on a similar network, less than 30 seconds of computation on a Dell Dimension 8200 to obtain their distributions), while our algorithm converged to the same result in less than 3 seconds on a similar machine (Intel CPU 6600, 2.40 GHz). Of course, no stringent statement can be made at this level about the comparative performance of the two algorithms. This positive result encourages us to face the problem of metabolism at the organism-wide scale.

Analysis of gene knock-outs in E. coli central metabolism

In this section we study the influence of partial flux knock-outs on the volume of the solution space. We concentrate on the E. coli central metabolism [23]. The network has 62 metabolites and 104 reactions (75 internal reactions and 29 exchange fluxes). All reactions were considered irreversible following their nominal directions, and maximum flux rates were set to 1. In this numerical experiment we first run BP on the unperturbed system, measuring the volume of the space of solutions S_0. Then the maximum flux values are kept constant and equal to 1 for all the reactions but one (say reaction ν_i).
The partial knock-out effect on flux i is then obtained by repeatedly reducing ν_i^max and computing the volume S_KO(ν_i^max) again at each step, until ν_i^max = 0. In principle, at each reduction step of ν_i^max one should re-converge the BP equations. In practice, for most of the fluxes convergence is very fast, since at each reduction of ν_i^max the new stationary point is in general close enough to the old one. However, we have noted experimentally that the larger the impact of a given flux on the volume of the space of solutions, the longer the convergence time at each reduction step. Let us point out that in doing so we are tacitly assuming that each knock-out is independent of the others, although it is known that some reactions may be associated with the same enzyme. There is, however, no computational restriction on analyzing multiple knock-outs. An analogous technique was presented in [11], where maximal fluxes were reduced to 75% of their original maximal allowed values to mimic enzymopathies. In Figure 8 we display the whole set of S_0 - S_KO(ν_i^max) vs. knock-out percentage curves. We can observe how heterogeneous the impact of the different fluxes on the volume is. Moreover, one can observe how different curves may cross depending on the knock-out percentage, displaying an intriguing scenario of differential flux-reduction impacts. Let us now concentrate on the 20 fluxes having the largest impact on the space of solutions: in Figure 9 we display the complete knock-out values S_0 - S_KO(ν_i^max = 0). Focusing only on internal fluxes (the first two fluxes are indeed exchange fluxes of water and protons), one can observe that the first half of them basically compose the backbone of glycolysis, showing little pathway redundancy in the network, while reactions like FUM, ACONT, SUC and SUCCD1i appear in the Krebs cycle and again show little pathway redundancy in the network (see Additional file 2). Flux knock-out curves.
Flux knock-out impact on the volume of the space of solutions in the E. coli central metabolism. On the x-axis we display the percentage of reduction of any given flux, and on the y-axis the relative volume difference with respect to ... [Figure 9 caption: Top-impact flux knock-outs. Impact of the 20 fluxes which have the largest impact on the volume of the space of solutions. Here S_0 - S_KO(ν_i^max = 0) measures the difference between the volume of the unperturbed system and that of the modified one ...]

Finally, in Figure 10 we display the correlations between the changes in entropy for the different reaction knock-outs and the average flux ⟨ν_i⟩ = ∫ P_i(ν) ν dν in the unperturbed network. At 75% knock-out, two kinds of regimes are divided by a clear threshold at ⟨ν⟩ ~ 0.6: (i) for ⟨ν_i⟩ below the threshold, S_0 - S_KO(ν_i^max) has a small positive correlation with ⟨ν_i⟩ ... [Figure 10 caption: Correlation of impact and value. Change in entropy S_0 - S_KO after reaction knock-outs as a function of the average flux value ⟨ν⟩ ...]

E. coli metabolism at the organism-wide scale

Finally, we analyze the distribution of average fluxes in the metabolic network of E. coli. The network used contains, in its original format, 1035 reactions and 626 metabolites [23]. The network has been preprocessed using a Gaussian elimination procedure: when a trivial mass-balance equation of the type ν_i = ν_j is present, we eliminate variable ν_i using elementary row operations. In doing so, new trivial mass-balance equations appear, and we iterate the elimination until all trivial mass-balance equations disappear. During the Gaussian elimination, some metabolites become effective dead-ends (i.e. metabolites that are only consumed or only produced). Fluxes attached to dead-end metabolites are set to zero at the outset and removed from the system. The new set of equations is equivalent to the original one but with a lower number of fluxes. Reversible reactions have been treated using bidirectional signed fluxes.
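The elimination of trivial ν_i = ν_j equations can be sketched as follows. This is a simplified version of the preprocessing just described: it assumes zero net uptake for the trivial rows and ignores the merging of the flux upper bounds ν^max:

```python
import numpy as np

def eliminate_trivial(S):
    """Repeatedly remove mass-balance rows of the form nu_i = nu_j (exactly
    two non-zero entries of equal magnitude and opposite sign, zero uptake
    assumed) by merging column j into column i.  Returns the reduced matrix
    and the original indices of the surviving fluxes."""
    S = S.astype(float).copy()
    keep_cols = list(range(S.shape[1]))
    changed = True
    while changed:
        changed = False
        for r in range(S.shape[0]):
            nz = np.flatnonzero(S[r])
            if len(nz) == 2 and np.isclose(S[r, nz[0]], -S[r, nz[1]]):
                i, j = nz                 # row says nu_i = nu_j
                S[:, i] += S[:, j]        # substitute nu_j -> nu_i everywhere
                S = np.delete(S, j, axis=1)
                S = np.delete(S, r, axis=0)
                del keep_cols[j]
                changed = True
                break
    return S, keep_cols
```

For example, a chain ν_0 = ν_1 = ν_2 collapses to a single free flux with no remaining constraints, exactly as the iterated elimination in the text would do.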
Using BP we are able to compute the marginal flux distributions in around 40 minutes, using q^max = 64. We ran our algorithm on this network and computed the average flux of each reaction. We have performed a statistical analysis of these averages in the spirit of [14,24,25]. The probability distribution function (pdf) of the average values is displayed in Figure 11. As can be clearly seen, the distribution is broad and may be fitted with a power law P(ν) ∝ (ν + ν_0)^(-γ). The fit gives an exponent γ around 1.5, a result that compares very well with previous findings: (i) in [14], where FBA optimal fluxes were averaged over many different external conditions, (ii) in [24], where a maximal local-output strategy is used, and (iii) in [25], where a Gaussian approximation for the marginal flux distributions is used. It is claimed in [14] that the robustness of this value of γ is a signature of universality in the cell's metabolism. [Figure 11 caption: Distribution of average fluxes. Distribution of the average fluxes ⟨ν⟩ in the E. coli metabolism (points) obtained from the marginal probability distributions computed using our message-passing algorithm. We also ...]

A more careful analysis of the data reveals, however, that the distribution of average fluxes has a richer structure. In Figure 12 we present the cumulative distribution function P_<(ν) ≡ ∫ P(y) dy (integrated up to ν) of the average fluxes of the reactions: a clear jump appears at ⟨ν⟩ ≈ 0.5, and smaller ones at ⟨ν⟩ ≈ 0.4 and ⟨ν⟩ ≈ 0.6. At present we do not know whether these jumps are just due to statistical fluctuations (and are correctly smeared out in the usual binning process done to plot the pdf in double logarithmic scale) or whether they reflect relevant biological or structural information about the network. [Figure 12 caption: Integrated distribution of average fluxes. Integrated distribution P_< of the average fluxes ⟨ν⟩ in the E. coli metabolism obtained from the marginal probability distributions computed using our message-passing algorithm. ...]
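A fit of the shifted power law P(ν) ∝ (ν + ν_0)^(-γ) can be sketched on synthetic data. The draws below are ours (inverse-transform sampling from the shifted Pareto), not the E. coli averages of the text, and the estimator is the standard maximum-likelihood one for this family rather than the fitting procedure used for Figure 11:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma_true, nu0 = 1.5, 0.1

# Inverse-transform sampling from P(nu) ∝ (nu + nu0)^(-gamma) on [0, inf):
# the CDF is F(nu) = 1 - ((nu + nu0)/nu0)^(1 - gamma).
u = rng.uniform(size=20000)
nu = nu0 * (u ** (1.0 / (1.0 - gamma_true)) - 1.0)

# Maximum-likelihood estimate of the exponent for the shifted Pareto:
# gamma_hat = 1 + n / sum(log((nu_i + nu0)/nu0)).
gamma_hat = 1.0 + nu.size / np.log1p(nu / nu0).sum()
```

With 20000 samples the estimate lands very close to the true exponent 1.5, illustrating how little data a log-likelihood fit of this family needs.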
Conclusions

We proposed a novel algorithm to estimate the size and shape of the affine space of a non-full-dimensional convex polytope in high dimensions. The algorithm was tested on a specific benchmark, i.e. random diluted stoichiometric matrices at a given ratio α = M/N and a fixed number K of non-zero terms in each of the reactions, with results that compare very well with those of exact algorithms. Moreover, we showed that while the running time of exact algorithms explodes already at moderate sizes, our algorithm is polynomial. The program was run on the Red Blood Cell metabolism, producing, in a shorter computational time, results that are in good quantitative and qualitative agreement with those obtained by the MCS presented in [10,11]. Our program was then used to study the E. coli central metabolism and, as expected, reactions with little redundancy turned out to be the ones with the largest impact on the size of the space of metabolic solutions. Specifically, most of the reactions associated with the transformation of glucose into pyruvate belong to this set, as well as some reactions in the citric acid cycle. In addition, we showed strong correlations between the characteristics of the flux distributions of the wild-type network and the changes in the size of the space of solutions after reaction knock-outs. Finally, we calculated the distribution of the average values of the fluxes in the metabolism of E. coli and presented results that are consistent with those in the literature. In the present approach we have mainly followed a discretization strategy that, although polynomial, becomes computationally rather heavy for mass-balance equations containing a large number of fluxes. We are currently investigating other representation schemes for the messages. The final hope is to obtain an algorithm that allows for on-line analysis of organism-wide metabolic networks.
Let us conclude by noting that, in principle, the presented approach can be extended to deal with constraints whose functional form is more general than linear, provided that the number of variables involved in each of the constraints remains small, as in the case of inequalities enforcing the second law of thermodynamics for the considered reactions [26]. Work is in progress also in this direction.

Methods

Belief Propagation

In the previous sections we have seen how the metabolic problem can be cast into a constraint satisfaction framework where each of the M mass-balance equations imposes a constraint on a subset of the metabolic fluxes. Let A be the set of equations and I the set of fluxes. Consider the a-th row of Ŝ, and let {i ∈ a} ≡ {i_1, ..., i_(n_a)} ⊆ I be the labels of the fluxes involved in that equation with stoichiometric coefficients different from zero. Let also {a ∈ i} ≡ {a_1, ..., a_(n_i)} ⊆ A be the labels of the equations in which flux i participates. The emerging structure is a bipartite graph with two types of nodes: variable nodes representing the fluxes of the reactions, and factor nodes imposing mass conservation. In this discrete setting the marginals become probability distributions over q_i^max + 1 values which, for large q_i^max, approximate better and better the continuous probability densities. Under the hypothesis that the factor graph is a tree, it can be shown [19,27] that the probability of a flux vector ν satisfying all flux-balance constraints can be expressed as a product of flux and reaction marginals [19-21]:

P(ν) = ∏_(a∈A) P_a({ν_i}_(i∈a)) ∏_(i∈I) P_i(ν_i)^(1-d_i),

where d_i is the number of equations in which flux ν_i participates (i.e. the degree of site i). The marginal probabilities are defined as the sums of P(ν) over all fluxes outside equation a (for P_a) and over all fluxes other than i (for P_i). One can then define an entropy in terms of the marginal probability distributions, which amounts to the logarithm of the number of solutions of the associated constraint satisfaction problem.
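Under the tree factorization above, the entropy takes the standard Bethe form S = Σ_a S_a + Σ_i (1 - d_i) S_i, where S_a and S_i are the Shannon entropies of the factor and flux marginals. A small sketch, assuming the marginals are already available as normalized arrays:

```python
import numpy as np

def shannon(p):
    """Shannon entropy (in nats) of a normalized discrete distribution."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def bethe_entropy(factor_marginals, flux_marginals, degrees):
    """Bethe entropy S = sum_a S_a + sum_i (1 - d_i) S_i: the logarithm of
    the number of solutions under the tree (Bethe) approximation.
    factor_marginals[a] is the joint marginal over the fluxes of equation a
    (flattened), flux_marginals[i] the single-flux marginal, degrees[i] = d_i."""
    S = sum(shannon(p) for p in factor_marginals)
    S += sum((1 - d) * shannon(p) for p, d in zip(flux_marginals, degrees))
    return S
```

As a sanity check, one unconstrained factor over two fluxes with 4 values each gives S = log 16, i.e. it counts all 16 discrete cells, matching the geometric interpretation that follows.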
From a more geometrical point of view, this entropy counts the logarithm of the number of elementary ε-cells intersecting the polytope Π (see Figure 3). One may wonder how such an approach could be useful in a real-world situation where the graph is not a tree. One can hope that typical loop lengths are large enough to ensure the weak statistical dependence of neighboring sites which lies at the heart of the Bethe approximation [28,29]. It is interesting to note that the Bethe approximation is successfully used in many different problems with loopy graph topologies. This is, for instance, the case for LDPC error-correcting codes in information theory [30], used in wireless Internet transmission technologies such as WiMAX, and for many inference problems such as graphical models [31], binary perceptron learning [32], and constraint satisfaction problems such as Satisfiability or Coloring [33,34]. In all these cases, although there is no mathematically rigorous proof of the quality of the solution, whenever the algorithm converges the result generally provides a good estimate of the marginal probability distributions. The algorithm is based on two types of messages, exchanged from variable nodes to factor nodes and vice versa:

• μ_(i→a)(ν): the probability that flux i takes value ν in the absence of reaction a.

• m_(a→i)(ν): the non-normalized probability that the balance in reaction a is fulfilled given that flux i takes value ν.

The two quantities satisfy the following set of functional equations:

m_(a→i)(ν) = Σ_({ν_l}, l∈a\i) δ(Σ_(l∈a\i) s_(l,a) ν_l + s_(i,a) ν ; b_a) ∏_(l∈a\i) μ_(l→a)(ν_l),
μ_(i→a)(ν) = C_(i→a) ∏_(b∈i\a) m_(b→i)(ν),     (11)

where Σ_({ν_l}, l∈a\i) means the sum over all values of the fluxes around metabolite a except i, b ∈ i\a runs over the metabolites participating in reaction i other than a, C_(i→a) is a constant enforcing the normalization of the probability μ_(i→a)(ν), and δ(·;·) is the Kronecker delta (δ(a;b) is 1 if a = b and 0 otherwise).
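A brute-force transcription of the two updates in Eq. 11 can be written directly; the toy coefficients and domain sizes below are ours, and, as the text remarks next, this direct summation is only viable for small constraints:

```python
import itertools
import numpy as np

def m_update(s, b, mus, i, q):
    """Factor-to-variable message m_{a->i}(nu): the (non-normalized)
    probability that sum_l s[l] * nu_l = b holds given nu_i = nu, with the
    other fluxes drawn from their cavity distributions mus[l].
    Brute-force sum over all configurations; fine for small constraints."""
    others = [l for l in range(len(s)) if l != i]
    m = np.zeros(q + 1)
    for nu in range(q + 1):
        for vals in itertools.product(range(q + 1), repeat=len(others)):
            total = s[i] * nu + sum(s[l] * v for l, v in zip(others, vals))
            if total == b:
                m[nu] += np.prod([mus[l][v] for l, v in zip(others, vals)])
    return m

def mu_update(ms, q):
    """Variable-to-factor message mu_{i->a}: normalized product of the
    incoming factor messages m_{b->i} from all equations b except a."""
    mu = np.ones(q + 1)
    for m in ms:
        mu *= m
    return mu / mu.sum()
```

For instance, for a constraint ν_0 + ν_1 = 2 with q = 2, the message to flux 0 is just the cavity distribution of flux 1 read in reverse, as the delta function dictates.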
The set of Equations 11 can be solved iteratively, and upon convergence of the algorithm one can compute the marginal flux distributions as:

P_i(ν) ∝ ∏_(a∈i) m_(a→i)(ν).

A brute-force evaluation of the discrete set of equations would be much too inefficient for analyzing large networks, due to the multi-dimensional sum over {0, ..., q_l^max}_(l∈a\i) in the previous equation. The first of Equations 11 amounts to the computation of the convolution of all the μ_(j→a) messages except one, an expression of the form C_(a→i) = ⊗_(j∈a\i) μ_(j→a), which requires n_a - 1 convolutions for each outgoing message, i.e. n_a(n_a - 1) convolutions in total for constraint a. In the case of the complete E. coli network we have mass-balance equations with n_a as large as 500, so reducing the computational complexity of the iteration has significant practical implications for the performance of the proposed algorithm. There is a way to reduce this quadratic load to just O(n_a) convolutions. The method we propose is not confined to convolutions and works generally for any associative operation. Note that for operations that are efficiently invertible and commutative, one could simply combine all n_a terms (n_a initial operations) and then, for each outgoing message, operate with the inverse of the undesired element (just one more operation per message, totaling just 2n_a operations for constraint a, plus the inversions), i.e. C_(a→i) = (⊗_(j∈a) μ_(j→a)) ⊗ μ_(i→a)^(-1). When the elements are not invertible, not commutative, or simply difficult to invert, the following more general scheme can be applied. Let us concentrate on metabolite a and assume that n_a is a power of two, say 2^n (this assumption can easily be relaxed). We iteratively build an n × n_a auxiliary table h_i^d, setting as initial condition h_i^0 = μ_(i→a) and then building the sequence h_i^d = h_(2i)^(d-1) ⊗ h_(2i+1)^(d-1) for d = 1, ..., n - 1. This is equivalent to operating on consecutive pairs of fluxes over metabolite a, then on consecutive quadruples, and so on.
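The pairing scheme just described can be sketched generically for any associative, commutative operation (for non-commutative operations the terms would additionally have to be combined in positional order):

```python
from functools import reduce

def cavity_products(xs, op):
    """All leave-one-out ("cavity") products of xs under an associative,
    commutative operation op, using the binary-tree table of the text:
    roughly 2*L operations to build the table plus log2(L) per output,
    instead of L*(L-1) for the naive scheme.  len(xs) must be a power of
    two (the text notes this is easy to relax)."""
    levels = [list(xs)]                       # level 0: the messages themselves
    while len(levels[-1]) > 2:
        prev = levels[-1]
        levels.append([op(prev[2 * k], prev[2 * k + 1])
                       for k in range(len(prev) // 2)])
    out = []
    for i in range(len(xs)):
        # Combine the sibling of i, the sibling of i's pair, of its
        # quadruple, and so on up the levels: index (i >> d) xor 1.
        terms = [levels[d][(i >> d) ^ 1] for d in range(len(levels))]
        out.append(reduce(op, terms))
    return out
```

With `op = np.convolve` and the μ messages as arrays, this yields all the C_{a→i} of one constraint; with ordinary multiplication it computes, e.g., all leave-one-out products of a list of numbers.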
One needs just 2n_a operations to compute the full table at this stage. To compute C_(a→i) one needs additionally n = log_2 n_a operations: we just have to combine the complement of i in its pair with the complement of this pair in its quartet, and so on over all levels, i.e. in compact form ⊗_(d=0)^(n-1) h^d_(⌊i/2^d⌋ xor 1) (see Figure 13). The total number of operations for all outgoing messages then becomes 2n_a + n_a log_2 n_a. Moreover, if all messages are computed sequentially this number can be further reduced to O(n_a). [Figure 13 caption: Linear cavity summation. An example showing the summation strategy for a metabolite with 8 fluxes. Once all elements of the table are computed (6 convolutions are needed), one needs only 3 more convolutions to compute each one of the 8 cavity messages. ...]

Authors' contributions

The authors contributed equally to this work. All authors read and approved the final manuscript.

Supplementary Material

Additional file 1: Human red blood cell metabolism supplementary files. The archive contains 4 text files: rbc.sto (stoichiometric matrix where each column is a reaction and each line is a metabolite), rbc.flu (one column, containing the sign of the exchange fluxes: negative are incoming, positive outgoing, zero free), rbc.names (the names of both reactions and metabolites), and rbc.max (the maximum flux values of the reactions).

Additional file 2: E. coli central metabolism supplementary files. The archive contains 3 text files: central.sto (stoichiometric matrix where each column is a reaction and each line is a metabolite), central.flu (one column, containing the sign of the exchange fluxes: negative are incoming, positive outgoing, zero free), central.names (the names of both reactions and metabolites).

Acknowledgements

AB was supported by a Microsoft TCI grant.
RM thanks the International Center for Theoretical Physics in Trieste and the Center for Molecular Immunology of La Habana for their cordial hospitality during the completion of this work. We are also very grateful to Ginestra Bianconi, Michele Leone, Martin Weigt, and Riccardo Zecchina for very interesting discussions, and in particular to Carlotta Martelli for sharing with us a human-readable E. coli data set. We are really grateful to Jan Schellenberger for providing us the MCS data used in Figures 6 and 7.

References

• Jeong H, Tombor B, Albert R, Oltvai ZN, Barabasi AL. The large-scale organization of metabolic networks. Nature. 2000;407:651–654. doi: 10.1038/35036627.
• Fell DA, Wagner A. The small world of metabolism. Nature Biotechnology. 2000;18:1121–1122. doi: 10.1038/81025.
• Zhu D, Qin ZS. Structural comparison of metabolic networks in selected single cell organisms. BMC Bioinformatics. 2005;6.
• Kanehisa M, Goto S, Hattori M, Aoki-Kinoshita K, Itoh M, Kawashima S, Katayama T, Araki M, Hirakawa M. From genomics to chemical genomics: new developments in KEGG. Nucleic Acids Res. 2006;34:D354–7. doi: 10.1093/nar/gkj102.
• Ibarra RU, Edwards JS, Palsson BO. Escherichia coli K-12 undergoes adaptive evolution to achieve in silico predicted optimal growth. Nature. 2002;420:186–189. doi: 10.1038/nature01149.
• Varma A, Palsson BO. Metabolic capabilities of Escherichia coli: I. Synthesis of biosynthetic precursors and cofactors. J Theor Biol. 1993;165:477–502. doi: 10.1006/jtbi.1993.1202.
• Segrè D, Vitkup D, Church GM. Analysis of optimality in natural and perturbed metabolic networks. PNAS. 2002;99:15112–15117. doi: 10.1073/pnas.232349399.
• LRS package http://cgm.cs.mcgill.ca/~avis/C/lrs.html
• Avis D, Fukuda K.
A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra. Discrete Comput Geom. 1992;8:295–313. doi: 10.1007/BF02293050.
• Wiback SJ, Famili I, Greenberg HJ, Palsson BO. Monte Carlo sampling can be used to determine the size and shape of the steady-state flux space. J Theor Biol. 2004;228:437–447. doi: 10.1016/j.jtbi.2004.02.006.
• Price ND, Schellenberger J, Palsson BO. Uniform sampling of steady-state flux spaces: means to design experiments and to interpret enzymopathies. Biophysical Journal. 2004;87:2172–2186. doi: 10.1529/biophysj.104.043000.
• Thiele I, Price ND, Vo TD, Palsson BO. Impact of diabetes, ischemia, and diet. J Biol Chem. 2005;280:11683–11695. doi: 10.1074/jbc.M409072200.
• Price ND, Thiele I, Palsson BO. Candidate states of Helicobacter pylori's genome-scale metabolic network upon application of "loop law" thermodynamic constraints. Biophysical Journal. 2006;90:3919–3928. doi: 10.1529/biophysj.105.072645.
• Almaas E, Kovacs B, Vicsek T, Oltvai ZN, Barabasi AL. Global organization of metabolic fluxes in the bacterium Escherichia coli. Nature. 2004;427:839–843. doi: 10.1038/nature02289.
• Simonovits M. How to compute the volume in high dimension? Math Progr. 2003;97:337–374.
• Dyer M, Frieze A. On the complexity of computing the volume of a polyhedron. SIAM J Comput. 1988;17:967–97. doi: 10.1137/0217060.
• Khachiyan L. Complexity of volume computation. In: Pach J, editor. New trends in discrete and computational geometry. Springer-Verlag; 1993. pp. 91–101.
• Büeler B, Enge A, Fukuda K. Exact volume computation for convex polytopes: a practical study. In: Ziegler GM, Kalai G, editors. Polytopes: Combinatorics and Computation. Birkhäuser; 2000.
• Yedidia JS, Freeman WT, Weiss Y. Generalized belief propagation. In:
Advances in Neural Information Processing Systems (NIPS) 13, Denver, CO. 2001. pp. 772–778.
• Kschischang FR, Frey BJ, Loeliger HA. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory. 2001;47:498–519. doi: 10.1109/18.910572.
• Braunstein A, Mézard M, Zecchina R. Survey propagation: an algorithm for satisfiability. Random Struct Algorithms. 2005;27:201–226. doi: 10.1002/rsa.20057.
• MacKay DJC. Information Theory, Inference, and Learning Algorithms. Cambridge University Press; 2003.
• Reed JL, Vo TD, Schilling CH, Palsson BO. An expanded genome-scale model of Escherichia coli K-12 (iJR904 GSM/GPR). Genome Biology. 2003;4:R54. doi: 10.1186/gb-2003-4-9-r54.
• De Martino A, Martelli RMC, Castillo IP. Von Neumann expanding model on random graphs. Journal of Statistical Mechanics: Theory and Experiment. 2007;2007:P05012. doi: 10.1088/1742-5468/2007/05/P05012.
• Bianconi G, Zecchina R. Viable flux distribution in metabolic networks. Technical report. 2007. ArXiv:0705.2816.
• Beard DA, Babson E, Curtis E, Qian H. Thermodynamic constraints for biochemical networks. J Theor Biol. 2004;228:327–333. doi: 10.1016/j.jtbi.2004.01.008.
• Baxter RJ. Exactly Solved Models in Statistical Mechanics. London: Academic Press; 1989.
• Mézard M, Parisi G. The Bethe lattice spin glass revisited. The European Physical Journal B. 2001;20:217. doi: 10.1007/PL00011099.
• Mézard M, Parisi G. The cavity method at zero temperature. J Stat Phys. 2003;111:1. doi: 10.1023/A:1022221005097.
• Richardson T, Urbanke R. The capacity of low-density parity-check codes under message-passing decoding. IEEE Trans Info Theory. 2001;47:599–618. doi: 10.1109/18.910577.
• Weiss Y, Freeman WT. Correctness of belief propagation in Gaussian graphical models of arbitrary topology. Neural Comp.
2001;13:2173–2200. doi: 10.1162/089976601750541769. [PubMed] [Cross Ref] • Baldassi C, Braunstein A, Brunel N, Zecchina R. Efficient supervised learning in networks with binary synapses. PNAS. 2007;104:11079–11084. doi: 10.1073/pnas.0700324104. http://aps.arxiv.org/pdf/ 0707.1295 [PMC free article] [PubMed] [Cross Ref] • Mezard M, Parisi G, Zecchina R. Analytic and Algorithmic Solution of Random Satisfiability Problems. Science. 2002;297:812. doi: 10.1126/science.1073287. http://aps.arxiv.org/pdf/0707.1295 [ PubMed] [Cross Ref] • Mulet R, Pagnani A, Weigt M, Zecchina R. Coloring random graphs. Phys Rev Lett. 2002;89:268701. doi: 10.1103/PhysRevLett.89.268701. http://arxiv.org/abs/cond-mat/0208460 [PubMed] [Cross Ref] Articles from BMC Bioinformatics are provided here courtesy of BioMed Central Your browsing activity is empty. Activity recording is turned off. See more...
Re: Expectation
Correct a Mundo! One of the cornerstones of experimental mathematics.
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.

Re: Expectation
It seems that the denominator is always a power of the number of bins, as expected.
The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment

Re: Expectation
Now you only have to work on the numerators. And they are integers.

Re: Expectation
But, some of the numerators might have to be multiplied by a factor.

Re: Expectation
Why do you say that?

Re: Expectation
Because it might be lost when the denominator and the numerator are canceled.

Re: Expectation
Very good, you are correct.

Re: Expectation
But, it is hard to figure out what the factor is for each of those fractions... What do we do now?

Re: Expectation
If you know that the denominators were powers of 3, you should be able to reconstruct the numerators.

Re: Expectation
But, the trouble is that for 3 bins and 5 balls needed, the denominator is greater than 3^4...

Re: Expectation
It seems that cancellation is not the problem. 9^3, 9^7 ... is gone!

Re: Expectation
I think the denominators have to form the series 3^(3*(k-1)).

Re: Expectation
Not every sequence has a general term that is expressible in terms of elementary functions.

Re: Expectation
I am sure the denominators have to be of the form 3^(3*(k-1)).

Re: Expectation
That produces this table: the denominators look like this: You can see what was cancelled from each one and put it back.

Re: Expectation
Yes. So, the factors we need to multiply the numerators by are {1,3,9,3,9}.

Re: Expectation
You could try that. Try to read as much of the help as you can for the next couple of days. I am going offline, I have to pay bills.

Re: Expectation
Okay, see you later!

Re: Expectation
Had to pay some bills and the rent.

Re: Expectation
Okay. OEIS has nothing on our numerator sequence...

Re: Expectation
How about going across instead of down?

Re: Expectation
I haven't tried. I will tomorrow, because I am watching Doctor Who now.

Re: Expectation
Okay that is fine, I am going to get some sleep.

Re: Expectation
Okay, see you later!

Re: Expectation
Okay. OEIS has nothing on our numerator sequence...
Even after you adjusted the numerators for the missing 3's?
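The thread never restates the underlying problem, but the observed pattern (denominators that are powers of the number of bins) is reproduced by at least one natural reading: the expected number of uniform throws into 3 bins until some bin first holds k balls. Under that assumption — the reading itself and the function name below are mine, not the posters' — the expectation can be computed exactly with rational arithmetic:

```python
from fractions import Fraction
from functools import lru_cache

def expected_throws(k, bins=3):
    """Expected number of uniform throws into `bins` bins until
    some bin first contains k balls, as an exact Fraction.
    (Assumed reading of the thread's problem, not stated there.)"""
    @lru_cache(maxsize=None)
    def e(state):
        if max(state) == k:              # a bin reached k: no more throws needed
            return Fraction(0)
        total = Fraction(1)              # count the throw we are about to make
        for i in range(bins):
            nxt = list(state)
            nxt[i] += 1
            # sorting the state exploits the symmetry between bins
            total += Fraction(1, bins) * e(tuple(sorted(nxt)))
        return total
    return e((0,) * bins)

print(expected_throws(2))  # 26/9
```

Every transition probability is a multiple of 1/3, so each value is a rational whose reduced denominator divides a power of 3, matching the observation quoted at the start of the thread.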
bilinear form

November 19th 2006, 11:40 AM

Can anyone help with the following question:

Let $V_n$ be the vector space of polynomials $h \in \mathbb{R}[x]$ with $\deg(h) \le n$. For $h, k \in V_n$, define

$f(h,k) = \int_0^{\infty} h(x)k(x)e^{-x}\,dx$

a) Show that $f$ is a symmetric bilinear form.
b) Let $B$ be the basis $\{1, x, \ldots, x^n\}$ of $V_n$. Find $[f]$ with respect to $B$.

August 24th 2007, 06:14 AM

You can easily verify that for f:
(1) $f(\cdot,k)$ is linear.
(2) $f(h,\cdot)$ is linear.
(3) $f(h,k)=f(k,h)$ and
(4) $f(th,tk)=t^2f(h,k), \ t\in \mathbb{R}.$
So f is a symmetric bilinear form. Its matrix with respect to the basis B will have entries $[f]_{ij}=[f(x^i,x^j)]=\left[\int_0^{\infty}x^{i+j}{\rm e}^{-x}dx\right]$, which you can find by repeated integration by parts.

Last edited by Rebesques; August 26th 2007 at 08:43 AM. Reason: myopia
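The repeated integration by parts mentioned above gives the closed form $\int_0^\infty x^{i+j}e^{-x}\,dx = (i+j)!$, so the Gram matrix is a matrix of factorials. A short sketch in plain Python (helper names are mine), with a crude Riemann-sum sanity check of the moment formula:

```python
from math import exp, factorial

def gram_matrix(n):
    """Matrix [f]_{ij} = ∫_0^∞ x^{i+j} e^{-x} dx = (i+j)!
    of the bilinear form f on V_n in the basis {1, x, ..., x^n}."""
    return [[factorial(i + j) for j in range(n + 1)] for i in range(n + 1)]

print(gram_matrix(2))  # [[1, 1, 2], [1, 2, 6], [2, 6, 24]]

# numerical sanity check of the moment formula for k = 3 (3! = 6)
h = 1e-3
approx = sum((m * h) ** 3 * exp(-m * h) * h for m in range(1, 60000))
print(abs(approx - factorial(3)) < 0.05)  # True
```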
index of a closed subgroup of a profinite group

In the book "Profinite Groups, Arithmetic, and Geometry" of Shatz, the index $(G:H)$ of a closed subgroup $H$ of a profinite group $G$ is defined to be the supernatural number $\mathrm{lcm}\big((G/U):(H/(H\cap U))\big)$ where $U$ runs over the open normal subgroups of $G$. There is an exercise following this definition saying that "$(G:H)=\mathrm{lcm}(G:U)$ where $U$ runs over those open normal subgroups of $G$ containing $H$."

If $G$ is a finite group with the discrete topology, then the index given is nothing but the number of elements in the coset space $G/H$. However, if we take $G$ to be a finite simple group having a non-trivial proper subgroup $H$, e.g. $Alt_n$ for a suitable $n$, the only normal subgroup containing $H$ is $G$ itself and $\big((G/G):(H/(H\cap G))\big)=1$.

I am not sure if the claim in the exercise is true for infinite profinite groups, as they are necessarily non-simple, which means they don't admit trivial counterexamples. But at least the exercise seemed wrong to me in the finite case. Am I missing something, or is this a well-known misprint which I don't know?

1 Answer

(The first time around I had read your question too quickly and not properly appreciated it. Sorry about that.)

You are right: the exercise on p. 12 of Shatz's book is false, because of the example you suggest. You asked if there were also counterexamples among infinite profinite groups. Certainly: let $n \geq 5$, let $p$ be a prime number greater than $n$, and consider $G = \mathbb{Z}_p \times A_n$. Then the problem persists: take $H = \mathbb{Z}_p \times H'$, where $H'$ is a proper nontrivial subgroup of $A_n$. (Use Goursat's Lemma.)

I checked that this exercise does not appear in Serre's Galois Cohomology. Have you found that it is used at any point of Shatz's book?
It seems plausible to me that you could recover a statement like this by working prime-by-prime with the Sylow subgroups of the groups in question -- certainly there are enough normal subgroups of $p$-groups to detect indices -- but I haven't thought carefully about that.

Thanks for your answer. You are right, the product of any infinite profinite group with the example yields counterexamples in the infinite case. When the normality condition is dropped the exercise becomes correct. So far I have not seen any usage of the exercise. – safak Nov 19 '10 at 20:31

In fact it is exactly what is written in Serre's: "it is also the lcm of the indices (G:V) for open V containing H". – safak Nov 19 '10 at 21:57
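For concreteness, the finite counterexample can be written out numerically; the specific choice $G = A_5$, $H = A_4$ below is just one convenient instance of the pattern described in the question:

```latex
\text{Take } G = A_5,\ H = A_4 \text{ (a point stabilizer), so } |G| = 60,\ |H| = 12.\\
\text{Definition: with } U = \{e\} \text{ (open and normal), }
\big((G/U) : (H/(H \cap U))\big) = (G:H) = 5, \text{ so the lcm is } 5.\\
\text{Exercise's formula: since } A_5 \text{ is simple, the only open normal }
U \supseteq H \text{ is } G \text{ itself, giving } \operatorname{lcm}(G:U) = (G:G) = 1 \neq 5.
```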
Nuclear Dynamics with Subnucleonic Degrees of Freedom

The objectives of this research program are: to investigate the role of quark-gluon degrees of freedom in hadron structure and interactions, and in nuclear dynamics; to explore the properties and phase structure of hot, dense Quantum Chromodynamics (QCD) and its possible consequences for the structure of compact astrophysical objects; to develop theoretical methods and tools to place reliable constraints on the variation of Nature's fundamental parameters and physics beyond the Standard Model; to develop and apply reaction theories for exploring hadron structure using the data from meson and nucleon-resonance production experiments at modern experimental facilities; and to investigate relations of Poincaré covariant dynamics specified by mass operators to complementary dynamics specified by Green functions.

At the level of quark-gluon degrees of freedom, the Dyson-Schwinger equations (DSEs) provide a Poincaré covariant, nonperturbative method for studying QCD in the continuum. A hallmark of present-day DSE applications in hadron physics is the existence of a symmetry-preserving truncation that enables the simultaneous exploration of phenomena such as confinement, dynamical chiral symmetry breaking, and bound-state structure and interactions. In addition, the DSEs provide a generating tool for perturbation theory and hence yield model-independent results for the ultraviolet behavior of strong-interaction observables. This means that model studies facilitate the use of physical processes to constrain the long-range behavior of the interaction between light quarks in QCD, which is poorly understood and whose elucidation is a key goal of modern experimental programs. The last year saw numerous noteworthy applications and successes.
For example, we presented arguments which support a view that chiral perturbation theory is inapplicable for pion-like meson masses greater than m(0^-) ≈ 0.45 GeV; that a unique signal for the restoration of chiral symmetry via excitation of mesons is equality between the pole residues for 0^-+(nS) and 0^++(nS) states when n, the radial quantum number, is large; that chiral symmetry and its dynamical breakdown in QCD even place constraints on properties of mesons composed of two heavy quarks; and we also provided a prediction for the ratio of neutron electric and magnetic form factors.

At the level of meson and baryon degrees of freedom, we continue our effort to develop a dynamical coupled-channels model for use in analyzing the very extensive set of data for electromagnetic meson production reactions. A primary objective is the development of an interpretation for this data in terms of the quark-gluon substructure of nucleon resonances (N*). We aim to draw the connection between this data and the predictions made by QCD-based hadron models and numerical simulations of lattice-regularized QCD. In the past year we completed a major stage of this project by determining the hadronic interactions in the model by fitting pion-nucleon scattering data up to 2 GeV. We predicted the effect of a meson cloud on the form factors describing the transition to all known low-lying nucleon resonances. Methods for determining the resonance parameters from the partial-wave amplitudes were developed. We also made progress on projects focused on η and ω photoproduction, which aim at discovering highly excited resonances with masses close to 2 GeV. In addition, we continued to play a leading role in directing operations of the Excited Baryon Analysis Center (EBAC) at Jefferson Laboratory.
Relativistic quantum dynamics requires a unitary representation of space-time symmetries (Poincaré group) and localization of states, such that states localized in relatively space-like regions are causally independent. We have recently focused on the application and elucidation of complementary mathematical representations of hadron phenomena, and on a consistent treatment of medium energy electromagnetic few-body processes.
Another relatively prime problem

September 18th 2006, 01:58 PM

a) Show that if a, b, and c are integers with (a,b) = (a,c) = 1, then (a,bc) = 1.

b) Use mathematical induction to show that if a_1, a_2, ..., a_n are integers, and b is another integer such that (a_1,b) = (a_2,b) = ... = (a_n,b) = 1, then (a_1 a_2 ... a_n, b) = 1.

Thanks for your help. :)

September 18th 2006, 04:07 PM

a) (a,b) = 1, so the (unique) prime factorization of b contains NO primes in the prime factorization of a. Similarly, (a,c) = 1 implies that the prime factorization of c contains no primes in the prime factorization of a. This means that the prime factorization of bc also contains no primes in the prime factorization of a. Thus (a,bc) = 1.

b) You really want to "turn this around" and start with: (a_1,b) = 1 and (a_2,b) = 1 implies (a_1*a_2,b) = 1. (After all, the symbol is symmetric.) Now, what about (a_1,b) = 1, (a_2,b) = 1, and (a_3,b) = 1? Let a_1*a_2 = d. Thus we need to prove (a_1*a_2*a_3,b) = (d*a_3,b) = 1. We know that (d,b) = (a_1*a_2,b) = 1 from a), and (a_3,b) = 1. Thus by a) we know that (d*a_3,b) = 1. Can you generalize this procedure?
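Since gcd is cheap to compute, both parts can be spot-checked by brute force. A small sketch (variable names are mine) using the standard-library gcd:

```python
from math import gcd
from functools import reduce
from itertools import product

# part (a): gcd(a, b) = gcd(a, c) = 1 implies gcd(a, b*c) = 1
for a, b, c in product(range(1, 30), repeat=3):
    if gcd(a, b) == 1 and gcd(a, c) == 1:
        assert gcd(a, b * c) == 1

# part (b): a product of several factors, each coprime to b, is coprime to b
for b in range(2, 20):
    factors = [x for x in range(1, 20) if gcd(x, b) == 1][:4]
    prod_ = reduce(lambda u, v: u * v, factors, 1)
    assert gcd(prod_, b) == 1

print("verified for all small cases")
```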
Taylorsville, GA Math Tutor

Find a Taylorsville, GA Math Tutor

...I'm certified to teach grades 4-8 mathematics. For the past 2 years I've had over 90% of my students meet or exceed on the CRCT. I base my teaching method on building a strong foundation on the fundamentals of math and work from there.
3 Subjects: including algebra 1, prealgebra, elementary math

...I have taught all levels of math at grades 6-12, plus have tutored college level for more than 20 years. I teach test-taking skills as well as content for standardized tests. I am a highly qualified math teacher in Georgia with over 11 years of experience in teaching math and working with students at a college prep independent school.
31 Subjects: including calculus, physics, precalculus, SAT math

Hello, my name is Scarlet D. I am a junior at Jacksonville State University, majoring in Exercise Science. I have an associate's degree in biology from Georgia Highlands College.
4 Subjects: including algebra 1, prealgebra, precalculus, differential equations

...She then went on to graduate from Georgia Tech with a degree in Applied Mathematics and a minor in Economics. She went through college at an accelerated pace of 3 years instead of 4, while maintaining her HOPE scholarship. She even studied abroad in Ireland during those three years!
22 Subjects: including algebra 2, reading, differential equations, ACT Math

I am a Georgia Tech Biomedical Engineering graduate and I have been tutoring high school students in the subjects of math and science for the last three years. I love helping students reach their full potential! I have found that most of the time all a student needs is someone encouraging them and letting them know that they are SMART and that they CAN do it!
15 Subjects: including algebra 1, algebra 2, biology, chemistry
Measurement beyond the standard quantum limit

5/20/11
Written by: Henri Dupuis
Translated by: Eriks Uskalis
Based on the work of: John Martin

Physics needs measurements which are more and more precise, but any measurement in physics is marred by noise: imperfect preparation of the system to be measured, thermal disturbance, etc. At the most extreme scales this occurs at the level of particles such as photons, and this noise is thus quantum by nature. This is what is known as the Heisenberg limit, a limit which cannot be overcome and which represents the fact that no measurement can be of an absolute and infinite precision. But there are easier ways than others to get closer to this limit. Daniel Braun, of the University of Toulouse, and John Martin, of the University of Liège, have just conceived of a theoretical system which will doubtless aid experimenters and favour applications in numerous scientific disciplines. Their work has just been the subject of a publication in Nature Communications (1).

The research by Daniel Braun, Professor at the theoretical physics laboratory of the University of Toulouse III, and John Martin, lecturer and head of the quantum optics group at the ULg, enters the context of precision measurements whose sensitivity can be improved by using quantum specificities: the properties of microscopic objects such as particles or light, properties which have no equivalent in classical physics.

The article they have just published presents a general formalism which they have applied to the particular case of measuring the length of a cavity. By a cavity is here meant two mirrors between which light can be trapped. The idea is to be able to measure variations in the length of the cavity over the course of time with the greatest precision possible. The length can in effect vary due to different factors: external perturbations, temperature variations, etc.

(1) Braun, D. and Martin, J. Heisenberg-limited sensitivity with decoherence-enhanced measurements. Nat. Commun. 2:223 doi: 10.1038/ncomms1220 (2011).
Need help on proofs

July 12th 2010, 03:16 PM

Suppose all you know about a function f is that f(ab) = f(a) + f(b) for all a, b > 0. Find f(1). Here's a picture of it. It's part c. Thanks.

July 12th 2010, 04:11 PM

$f(1) = f(1 \cdot 1) = f(1) + f(1) = 2f(1)$

so ... if $f(1) = 2f(1)$, what must $f(1)$ equal?

July 12th 2010, 06:52 PM

Since 1 = 1*1, it follows that f(1) = 0.

f(x^2) = f(x*x) = f(x) + f(x) = 2f(x)

Let y = sqrt(x); then f(x) = f(y*y) = f(y) + f(y), therefore f(y) = f(x)/2.
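The natural logarithm is a concrete function with this property, so it makes a handy sanity check for these identities (choosing f = log is my illustration, not part of the problem):

```python
from math import log, sqrt, isclose

f = log  # log satisfies f(ab) = f(a) + f(b) for all a, b > 0

assert isclose(f(2 * 5), f(2) + f(5))  # the defining identity
assert isclose(f(1), 0.0)              # f(1) = f(1*1) = 2 f(1), so f(1) = 0
assert isclose(f(sqrt(7)), f(7) / 2)   # f(sqrt(x)) = f(x)/2
print("all identities hold")
```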
Recent developments

After these important papers, in which the basic scenario was outlined and the general assumptions justified, it was necessary to develop more detailed models, mainly numerical and N-body simulations, due to the high complexity of the various physical processes involved. A schematic list of models and reviews is now given, which should be completed with the references therein: Cole (1991), White and Frenk (1991), Navarro and White (1993), Kauffmann, Guidernoni and White (1994), Cole et al. (1994), Navarro, Frenk and White (1996, 1997), Lacey and Cole (1993), Avila-Reese, Firmani and Hernandez (1998), Avila-Reese et al. (1999), Sensui, Funato and Makino (1999), Salvador-Sole, Solanes and Manrique (1998), Baugh et al. (1999), Subramanian, Cen and Ostriker (1999), Steinmetz (1999), van den Bosch (1999) and a large series of papers, reflecting the importance of the topic. Models can be classified as "semi-analytical" (in which some processes are given a simplified treatment assuming simple recipes, based either on previous numerical calculation or on theoretical ideas), numerical simulations (e.g. hydrodynamical simulations, collisionless simulations), N-body simulations (the most widely used) and even analytical. Some hybrid models are difficult to classify in this scheme. It is first necessary to adopt a cosmological model, the most popular one being the "standard" CDM model (with h = 0.5, for instance) or a power spectrum P(k) ∝ k^n, with n ranging from 0 to -1.5 (for example), where k is the wave number. Another important parameter used by most models is the normalization of the power spectrum. Other parameters characterize the calculation methods: for instance the initial redshift, the number of particles in N-body simulations and the box (in Mpc^3) in which the calculations are performed. The so-called "Virgo consortium" (Jenkins et al. 1997) is able to handle 256^3 particles and a large volume of the order of 60 Mpc.
Parameters controlling the resolution of the simulation and the efficiency with which gas cools have a higher influence on the results (Kay et al. 1999). These models not only deal with the formation of halos, but also with the ability of gas to form stars, with matter and energy outputs, mainly due to supernova explosions, the evolution of the baryonic component, the explanation of the Hubble Sequence, how spirals merge to produce spirals and so on. From our point of view, the rotation of galaxies strongly depends on the structure of the halos, which is determined in the first stage of the computations. The late evolution of visible galaxies is, paradoxically, the most difficult to understand and to model. For instance, the Initial Mass Function (IMF) is largely unknown and yet is decisive in galactic evolution. The hierarchical process of merging, and the formation and internal structure of dark matter halos, is said to be the best known process. This could be due, in part, to the relative simplicity of the process, but also to the evident fact that it is easier to make predictions about the unobservable. In general, even if some observable facts remain insufficiently explained, these families of theoretical models provide a very satisfactory basis to interpret any evolutionary and morphological problem.
Lebesgue measure theory on real line
January 28th 2009, 10:30 PM

Can someone help me get started on these problems? These are tough questions, and I believe some of them are propositions in some books. A reference to more info would also be nice. [L denotes the Lebesgue measure function, out denotes outer measure, R denotes the set of real numbers.]

Show that for any subset E of R, there is a G_delta set A such that E is a subset of A and L(A) = out(E).

Show that a subset E of R is Lebesgue measurable if and only if there is a G_delta set A such that E is a subset of A and out(A \ E) = 0.

(c) Show that if E is a subset of B, where B is a Lebesgue measurable set with L(B) < +infinity, then E is Lebesgue measurable if and only if L(B) = out(E) + out(B \ E).

[How would I set up the solutions? Which should be done by contradiction?]
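For the first problem, a standard opening move is to approximate E from outside by open sets; the following sketch (writing λ for L and λ* for out) uses only the usual definition of outer measure, so check it against your text's conventions:

```latex
% If \lambda^*(E) = \infty, take A = \mathbb{R}. Otherwise, for each n
% choose an open set O_n \supseteq E with
% \lambda(O_n) \le \lambda^*(E) + 1/n, and let
\[
A \;=\; \bigcap_{n=1}^{\infty} O_n ,
\]
% a G_\delta set containing E. Since A is Borel, hence measurable,
% monotonicity gives, for every n,
\[
\lambda^*(E) \;\le\; \lambda(A) \;\le\; \lambda(O_n)
\;\le\; \lambda^*(E) + \tfrac{1}{n},
\]
% so \lambda(A) = \lambda^*(E). The other two problems build on this set A.
```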
How many ways
July 9th 2006, 01:40 AM
How many ways can first and second awards and three honorable mention awards be given to nine contestants? This one is hard, but I came up with 6,048 ways. Is this right? Thanks for any help given.
July 9th 2006, 03:50 AM
Hello, kwtolley! This one is similar to the committee-selection problem.
How many ways can first and second awards and three honorable mention awards be given to nine contestants? I came up with 6,048. Is this right? . Sorry, no.
The first award can be given to any of the $9$ contestants. The second award can be given to any of the $8$ other contestants. The honorable mentions are given to three of the other seven contestants . . in $C(7,3) \,= \,\frac{7!}{3!4!}\,=\,35$ ways. Therefore: . $9 \times 8 \times 35\:=\:2520$ ways.
July 10th 2006, 08:05 AM
Thank you for checking my answer.
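The counting argument above is easy to check directly; this short Python sketch also shows one plausible source of the 6,048 guess (an assumption on my part: drawing the honorable mentions from all nine contestants instead of the remaining seven):

```python
from math import comb

# First and second awards are distinct, so order matters: 9 choices, then 8.
ordered_awards = 9 * 8

# The three honorable mentions are an unordered choice from the 7 who remain.
honorable = comb(7, 3)  # 7! / (3! * 4!) = 35

total = ordered_awards * honorable
print(total)  # 2520

# A plausible source of the 6,048 guess: choosing the mentions from all
# nine contestants, winners included.
guess = 9 * 8 * comb(9, 3)
print(guess)  # 6048
```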
Radar Equation

Radar Equation Theory

The point target radar range equation estimates the power at the input to the receiver for a target of a given radar cross section at a specified range. In this equation, the signal model is assumed to be deterministic. The equation for the power at the input to the receiver is:

P[r] = (P[t] G[t] G[r] λ² σ) / ((4π)³ L R[t]² R[r]²)

where the terms in the equation are:
● P[r] — Received power in watts.
● P[t] — Peak transmit power in watts.
● G[t] — Transmitter gain in decibels.
● G[r] — Receiver gain in decibels.
● λ — Radar operating frequency wavelength in meters.
● σ — Target's nonfluctuating radar cross section in square meters.
● L — General loss factor in decibels that accounts for both system and propagation loss.
● R[t] — Range from the transmitter to the target.
● R[r] — Range from the receiver to the target.

If the radar is monostatic, the transmitter and receiver ranges are identical. The equation for the power at the input to the receiver represents the signal term in the signal-to-noise ratio (SNR). To model the noise term, assume the thermal noise in the receiver has a white noise power spectral density (PSD) given by:

P(f) = kT

where k is the Boltzmann constant and T is the effective noise temperature. The receiver acts as a filter to shape the white noise PSD. Assume that the magnitude squared receiver frequency response approximates a rectangular filter with bandwidth equal to the reciprocal of the pulse duration, 1/τ. The total noise power at the output of the receiver is:

N = (k T F[n]) / τ

where F[n] is the receiver noise figure. The product of the effective noise temperature and the receiver noise factor is referred to as the system temperature and is denoted by T[s], so that T[s] = TF[n]. Using the equation for the received signal power and the output noise power, the receiver output SNR is:

SNR = (P[t] τ G[t] G[r] λ² σ) / ((4π)³ k T[s] L R[t]² R[r]²)

Solving for the required peak transmit power:

P[t] = ((4π)³ k T[s] (SNR) L R[t]² R[r]²) / (G[t] G[r] τ λ² σ)

The preceding equations are implemented in the Phased Array System Toolbox™ by the functions radareqpow, radareqrng, and radareqsnr.
These functions and the equations on which they are based are valuable tools in radar system design and analysis.

Link Budget Calculation Using the Radar Equation

This example shows how to compute the required peak transmit power using the radar equation. You implement a noncoherent detector with a monostatic radar operating at 5 GHz. Based on the noncoherent integration of ten one-microsecond pulses, you want to achieve a detection probability of 0.9 with a maximum false-alarm probability of 10^–6 for a target with a nonfluctuating radar cross section (RCS) of 1 m^2 at 30 km. The transmitter gain is 30 dB. Determine the required SNR at the receiver and use the radar equation to calculate the required peak transmit power.

Use Albersheim's equation to determine the required SNR for the specified detection and false-alarm probabilities.

Pd = 0.9;
Pfa = 1e-6;
NumPulses = 10;
SNR = albersheim(Pd,Pfa,NumPulses)

The required SNR is approximately 5 dB. Use the function radareqpow to determine the required peak transmit power in watts.

tgtrng = 30e3;    % target range in meters
lambda = 3e8/5e9; % wavelength of the operating frequency
RCS = 1;          % target RCS
pulsedur = 1e-6;  % pulse duration
G = 30;           % transmitter and receiver gain (monostatic radar)
Pt = radareqpow(lambda,tgtrng,SNR,pulsedur,'rcs',RCS,'gain',G)

The required peak power is approximately 5.6 kW.

Maximum Detectable Range for a Monostatic Radar

Assume that the minimum detectable SNR at the receiver of a monostatic radar operating at 1 GHz is 13 dB. Use the radar equation to determine the maximum detectable range for a target with a nonfluctuating RCS of 0.5 m^2 if the radar has a peak transmit power of 1 MW. Assume the transmitter gain is 40 dB and the radar transmits a pulse that is 0.5 μs in duration.
tau = 0.5e-6;     % pulse duration
G = 40;           % transmitter and receiver gain (monostatic radar)
RCS = 0.5;        % target RCS
Pt = 1e6;         % peak transmit power in watts
lambda = 3e8/1e9;
SNR = 13;         % required SNR in dB
maxrng = radareqrng(lambda,SNR,Pt,tau,'rcs',RCS,'gain',G)

The maximum detectable range is approximately 345 km.

Output SNR at the Receiver in a Bistatic Radar

Estimate the output SNR for a target with an RCS of 1 m^2. The radar is bistatic. The target is located 50 km from the transmitter and 75 km from the receiver. The radar operating frequency is 10 GHz. The transmitter has a peak transmit power of 1 MW with a gain of 40 dB. The pulse width is 1 μs. The receiver gain is 20 dB.

lambda = physconst('LightSpeed')/10e9;
tau = 1e-6;
Pt = 1e6;
TxRvRng = [50e3 75e3];
Gain = [40 20];
snr = radareqsnr(lambda,TxRvRng,Pt,tau,'Gain',Gain);

The estimated SNR is approximately 9 dB.
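The three worked examples can be cross-checked by implementing the radar equation directly. The sketch below is not the toolbox code; it assumes a 290 K system temperature and 0 dB loss (stated assumptions, matching common defaults), and plugs in the example numbers:

```python
import math

K_BOLTZ = 1.380649e-23  # Boltzmann constant, J/K
T_SYS = 290.0           # assumed system temperature, K

def db2lin(db):
    """Convert decibels to a linear power ratio."""
    return 10.0 ** (db / 10.0)

def required_peak_power(lam, rng, snr_db, tau, rcs, gain_db):
    """Monostatic radar equation solved for peak transmit power (W)."""
    g = db2lin(gain_db)
    num = (4 * math.pi) ** 3 * K_BOLTZ * T_SYS * db2lin(snr_db) * rng ** 4
    return num / (g * g * lam ** 2 * rcs * tau)

def max_range(lam, snr_db, pt, tau, rcs, gain_db):
    """Monostatic radar equation solved for maximum detectable range (m)."""
    g = db2lin(gain_db)
    r4 = pt * g * g * lam ** 2 * rcs * tau / (
        (4 * math.pi) ** 3 * K_BOLTZ * T_SYS * db2lin(snr_db))
    return r4 ** 0.25

def bistatic_snr_db(lam, rt, rr, pt, tau, rcs, gt_db, gr_db):
    """Bistatic radar equation solved for output SNR (dB)."""
    snr = pt * db2lin(gt_db) * db2lin(gr_db) * lam ** 2 * rcs * tau / (
        (4 * math.pi) ** 3 * K_BOLTZ * T_SYS * rt ** 2 * rr ** 2)
    return 10 * math.log10(snr)

# Example 1: 5 GHz, 30 km, ~5 dB required SNR -> peak power near 5.6 kW.
pt = required_peak_power(3e8 / 5e9, 30e3, 4.99, 1e-6, 1.0, 30.0)

# Example 2: 1 GHz, 1 MW, 13 dB required SNR -> range near 345 km.
rng = max_range(3e8 / 1e9, 13.0, 1e6, 0.5e-6, 0.5, 40.0)

# Example 3: bistatic, 10 GHz, 50/75 km -> SNR near 9 dB.
snr = bistatic_snr_db(3e8 / 1e10, 50e3, 75e3, 1e6, 1e-6, 1.0, 40.0, 20.0)
```

With these assumptions the script reproduces the three quoted answers to within rounding of the required SNR.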
Find an Ellenwood Math Tutor

...Assignments can be anything from speed drills to sentence diagramming, but they are guaranteed to be effective and engaging. I am an engineer, so I use ACT math every day for my job. I teach systematic methods to make sure that every student does the best they can on the ACT math.
17 Subjects: including trigonometry, algebra 1, algebra 2, ACT Math

...Later I was awarded an acting scholarship to three different universities. Attended BYU, where I worked as a statistics teaching assistant while studying economics. Became a mentor/tutor to 160 nontraditional students completing a statistics course online.
28 Subjects: including trigonometry, government & politics, biostatistics, linear algebra

...I have also taught at Lithia Springs High School with a PBTA (performance-based teaching certificate), where I taught the advanced and accelerated classes in literature and composition. As a graduate student, I tutored in the GSU Writing Center and evaluated Regents' exams. At Mercer University, I taught SAT prep in addition to my English classes.
15 Subjects: including SAT math, reading, English, writing

I have been privately tutoring students for 5 years. I am completing my degree in Information, Science, and Technology at Pennsylvania State University. During my time in high school and college, I did well in my Math (Calculus I-II), Chemistry, and Physics courses and have tutored in all of these subjects.
13 Subjects: including calculus, chemistry, physics, precalculus

...My tutoring methods vary with the needs of the student. I cater my approach to the overall goal of the parent for their child. If you are looking for quick preparation for a standardized test, I teach testing methods and techniques to help prepare you for the test.
10 Subjects: including algebra 1, grammar, Microsoft Word, Microsoft PowerPoint
Euclid's Elements, Book V: Theory of abstract proportions.

Definition 1
A magnitude is a part of a magnitude, the less of the greater, when it measures the greater.

Definition 2
The greater is a multiple of the less when it is measured by the less.

Definition 3
A ratio is a sort of relation in respect of size between two magnitudes of the same kind.

Definition 4
Magnitudes are said to have a ratio to one another which can, when multiplied, exceed one another.

Definition 5
Magnitudes are said to be in the same ratio, the first to the second and the third to the fourth, when, if any equimultiples whatever are taken of the first and third, and any equimultiples whatever of the second and fourth, the former equimultiples alike exceed, are alike equal to, or alike fall short of, the latter equimultiples respectively taken in corresponding order.

Definition 6
Let magnitudes which have the same ratio be called proportional.

Definition 7
When, of the equimultiples, the multiple of the first magnitude exceeds the multiple of the second, but the multiple of the third does not exceed the multiple of the fourth, then the first is said to have a greater ratio to the second than the third has to the fourth.

Definition 8
A proportion in three terms is the least possible.

Definition 9
When three magnitudes are proportional, the first is said to have to the third the duplicate ratio of that which it has to the second.

Definition 10
When four magnitudes are continuously proportional, the first is said to have to the fourth the triplicate ratio of that which it has to the second, and so on continually, whatever be the proportion.

Definition 11
Antecedents are said to correspond to antecedents, and consequents to consequents.

Definition 12
Alternate ratio means taking the antecedent in relation to the antecedent and the consequent in relation to the consequent.
Definition 13
Inverse ratio means taking the consequent as antecedent in relation to the antecedent as consequent.

Definition 14
A ratio taken jointly means taking the antecedent together with the consequent as one in relation to the consequent by itself.

Definition 15
A ratio taken separately means taking the excess by which the antecedent exceeds the consequent in relation to the consequent by itself.

Definition 16
Conversion of a ratio means taking the antecedent in relation to the excess by which the antecedent exceeds the consequent.

Definition 17
A ratio ex aequali arises when, there being several magnitudes and another set equal to them in multitude which taken two and two are in the same proportion, the first is to the last among the first magnitudes as the first is to the last among the second magnitudes. Or, in other words, it means taking the extreme terms by virtue of the removal of the intermediate terms.

Definition 18
A perturbed proportion arises when, there being three magnitudes and another set equal to them in multitude, antecedent is to consequent among the first magnitudes as antecedent is to consequent among the second magnitudes, while the consequent is to a third among the first magnitudes as a third is to the antecedent among the second magnitudes.

Proposition 1
If any number of magnitudes are each the same multiple of the same number of other magnitudes, then the sum is that multiple of the sum.

Proposition 2
If a first magnitude is the same multiple of a second that a third is of a fourth, and a fifth also is the same multiple of the second that a sixth is of the fourth, then the sum of the first and fifth also is the same multiple of the second that the sum of the third and sixth is of the fourth.

Proposition 3
If a first magnitude is the same multiple of a second that a third is of a fourth, and if equimultiples are taken of the first and third, then the magnitudes taken also are equimultiples respectively, the one of the second and the other of the fourth.
Proposition 4
If a first magnitude has to a second the same ratio as a third to a fourth, then any equimultiples whatever of the first and third also have the same ratio to any equimultiples whatever of the second and fourth respectively, taken in corresponding order.

Proposition 5
If a magnitude is the same multiple of a magnitude that a subtracted part is of a subtracted part, then the remainder also is the same multiple of the remainder that the whole is of the whole.

Proposition 6
If two magnitudes are equimultiples of two magnitudes, and any magnitudes subtracted from them are equimultiples of the same, then the remainders either equal the same or are equimultiples of them.

Proposition 7
Equal magnitudes have to the same the same ratio; and the same has to equal magnitudes the same ratio.

Corollary
If any magnitudes are proportional, then they are also proportional inversely.

Proposition 8
Of unequal magnitudes, the greater has to the same a greater ratio than the less has; and the same has to the less a greater ratio than it has to the greater.

Proposition 9
Magnitudes which have the same ratio to the same equal one another; and magnitudes to which the same has the same ratio are equal.

Proposition 10
Of magnitudes which have a ratio to the same, that which has a greater ratio is greater; and that to which the same has a greater ratio is less.

Proposition 11
Ratios which are the same with the same ratio are also the same with one another.

Proposition 12
If any number of magnitudes are proportional, then one of the antecedents is to one of the consequents as the sum of the antecedents is to the sum of the consequents.

Proposition 13
If a first magnitude has to a second the same ratio as a third to a fourth, and the third has to the fourth a greater ratio than a fifth has to a sixth, then the first also has to the second a greater ratio than the fifth to the sixth.

Proposition 14
If a first magnitude has to a second the same ratio as a third has to a fourth, and the first is greater than the third, then the second is also greater than the fourth; if equal, equal; and if less, less.

Proposition 15
Parts have the same ratio as their equimultiples.
Proposition 16
If four magnitudes are proportional, then they are also proportional alternately.

Proposition 17
If magnitudes are proportional taken jointly, then they are also proportional taken separately.

Proposition 18
If magnitudes are proportional taken separately, then they are also proportional taken jointly.

Proposition 19
If a whole is to a whole as a part subtracted is to a part subtracted, then the remainder is also to the remainder as the whole is to the whole.

Corollary
If magnitudes are proportional taken jointly, then they are also proportional in conversion.

Proposition 20
If there are three magnitudes, and others equal to them in multitude, which taken two and two are in the same ratio, and if ex aequali the first is greater than the third, then the fourth is also greater than the sixth; if equal, equal; and if less, less.

Proposition 21
If there are three magnitudes, and others equal to them in multitude, which taken two and two together are in the same ratio, and the proportion of them is perturbed, then, if ex aequali the first magnitude is greater than the third, then the fourth is also greater than the sixth; if equal, equal; and if less, less.

Proposition 22
If there are any number of magnitudes whatever, and others equal to them in multitude, which taken two and two together are in the same ratio, then they are also in the same ratio ex aequali.

Proposition 23
If there are three magnitudes, and others equal to them in multitude, which taken two and two together are in the same ratio, and the proportion of them be perturbed, then they are also in the same ratio ex aequali.

Proposition 24
If a first magnitude has to a second the same ratio as a third has to a fourth, and also a fifth has to the second the same ratio as a sixth to the fourth, then the sum of the first and fifth has to the second the same ratio as the sum of the third and sixth has to the fourth.

Proposition 25
If four magnitudes are proportional, then the sum of the greatest and the least is greater than the sum of the remaining two.
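Definition 5 (sameness of ratio) and Definition 7 (greater ratio) are easier to parse in modern notation. The following restatement is a standard modern gloss, not part of Euclid's text:

```latex
% Definition 5, the Eudoxan criterion for equality of ratios:
\[
a : b = c : d
\quad\Longleftrightarrow\quad
\forall\, m, n \in \mathbb{N}:
\begin{cases}
ma > nb \;\Rightarrow\; mc > nd,\\
ma = nb \;\Rightarrow\; mc = nd,\\
ma < nb \;\Rightarrow\; mc < nd.
\end{cases}
\]
% Definition 7, the criterion for one ratio exceeding another:
\[
a : b > c : d
\quad\Longleftrightarrow\quad
\exists\, m, n \in \mathbb{N}:\;
ma > nb \ \text{ and } \ mc \le nd.
\]
```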
The Petri box calculus: a new causal algebra with multi-label communication

Results 1 - 10 of 89

1. (1993) Cited by 44 (7 self)
We study the complexity of several standard problems for 1-safe Petri nets and some of its subclasses. We prove that reachability, liveness, and deadlock are all PSPACE-complete for 1-safe nets. We also prove that deadlock is NP-complete for free-choice nets and for 1-safe free-choice nets. Finally, we prove that for arbitrary Petri nets, deadlock is equivalent to reachability and liveness. This paper is to be presented at FST&TCS 13, Foundations of Software Technology & Theoretical Computer Science, to be held 15-17 December 1993, in Bombay, India. A version of the paper with most proofs omitted is to appear in the proceedings. [From the introduction:] Petri nets are one of the oldest and most studied formalisms for the investigation of concurrency [33]. Shortly after the birth of complexity theory, Jones, Landweber, and Lien studied in their classical paper [24] the complexity of several fundamental problems for Place/Transition nets (called in [24] just Petri nets). Some years later, Howell, ...

2. Cited by 40 (20 self)
The main feature of zero-safe nets is a primitive notion of transition synchronization. To this aim, besides ordinary places, called stable places, zero-safe nets are equipped with zero places, which in an observable marking cannot contain any token. This yields the notion of transaction: a basic atomic computation, which may use zero tokens as triggers, but defines an evolution between observable markings only. The abstract counterpart of a generic zero-safe net B consists of an ordinary P/T net whose places are the stable places of B, and whose transitions represent the transactions of B. The two nets offer both the refined and the abstract model of the same system, where the former can be much smaller than the latter, because of the transition synchronization mechanism. Depending on the chosen approach (collective vs. individual token philosophy) two notions of transaction may be defined, each leading to different operational and abstract models. Their comparison is fully dis...

3. (1997) Cited by 35 (2 self)
The PEP tool is a Programming Environment based on Petri Nets. Comprehensive modelling, compilation, simulation and verification components are embedded in a user-friendly graphical interface. The basic idea is that the modelling component allows the user to design parallel systems by parallel finite automata, parallel programs, process algebra terms, high-level or low-level Petri nets, and that the PEP system then automatically generates Petri nets from such models in order to use Petri net theory for simulation and verification purposes. This paper describes the typical usage of the PEP tool by considering the design of the well-known `alternating-bit' protocol. Among others, the usefulness of new concepts for the handling of hierarchies and synchronous communication is explained. PEP has been implemented on Solaris 2.x, SunOS 4.1.x and Linux. Ftp-able versions are available via http://www.informatik.uni-hildesheim.de/~pep. Keywords: `Alternating bit' protocol, B(PN)2, Hierarc...

4. (1992) Cited by 34 (1 self)
Action structures are proposed as a variety of algebra to underlie concrete models of concurrency and interaction. An action structure is equipped with composition and product of actions, together with two other ingredients: an indexed family of abstractors to allow parametrisation of actions, and a reaction relation to represent activity. The eight axioms of an action structure make it an enriched strict monoidal category; however, the work is presented algebraically rather than in category theory. The notion of action structure is developed mathematically, and examples are studied ranging from the evaluation of expressions to the statics and dynamics of Petri nets. For algebraic process calculi in particular, it is shown how they may be defined by a uniform superposition of process structure upon an action structure specific to each calculus. This allows a common treatment of bisimulation congruence. The theory of action structures emphasizes the notion of effect; that ...

5. (In Proc. 6th International Workshop on Petri Nets and Performance Models, 1995) Cited by 28 (7 self)
Generalized Stochastic Petri Nets (GSPN) and Performance Evaluation Process Algebra (PEPA) can both be used to study qualitative and quantitative behaviour of systems in a single environment. This paper presents a comparison of the two formalisms in terms of the facilities that they provide to the modeller, considering both the definition and the analysis of the performance model. Our goal is to provide a better understanding of both formalisms, and to prepare a fertile ground for exchanging ideas and techniques between the two. To illustrate similarities and differences, we make the different issues more concrete by means of an example modelling resource contention. [From the introduction:] In this paper we present a comparison of two formalisms which may be used to develop performance models as continuous time Markov chains (CTMC). Generalized stochastic Petri nets (GSPN) is a well-established high level modelling paradigm which has been widely applied in performance analysis. In contrast, Per...

6. (FMPA 2000, 2001)

7. (Journal of the ACM, 1996) Cited by 26 (8 self)
A multiparty interaction is a set of I/O actions executed jointly by a number of processes, each of which must be ready to execute its own action for any of the actions in the set to occur. An attempt to participate in an interaction delays a process until all other participants are available. Although a relatively new concept, the multiparty interaction has found its way into a number of distributed programming languages and algebraic models of concurrency. In this paper, we present a taxonomy of languages for multiparty interaction that covers all proposals of which we are aware. Based on this taxonomy, we then present a comprehensive analysis of the computational complexity of the multiparty interaction scheduling problem, the problem of scheduling multiparty interactions in a given execution environment.

8. (1996) Cited by 24 (2 self)
The PEP system (Programming Environment based on Petri Nets) supports the most important tasks of a good net tool, including HL and LL net editing and comfortable simulation facilities. In addition, these features are embedded in sophisticated programming and verification components. The programming component allows the user to design concurrent algorithms in an easy-to-use imperative language, and the PEP system then generates Petri nets from such programs. The PEP tool's comprehensive verification components allow a large range of properties of parallel systems to be checked efficiently on either programs or their corresponding nets. This includes user-defined properties specified by temporal logic formulae as well as specific properties for which dedicated algorithms are available. PEP has been implemented on Solaris 2.4, SunOS 4.1.3 and Linux. Ftp-able versions are available.

9. (1995) Cited by 23 (11 self)
In this paper a high-level Petri net model called M-nets (for multilabeled nets) is developed. A distinctive feature of this model is that it allows not only vertical unfolding, as do most other high-level net models, but also horizontal composition (in particular, synchronisation) in a manner similar to process algebras such as CCS. This turns the set of M-nets into a domain whose composition operations satisfy various algebraic properties. The operations are shown to be consistent with unfolding in the sense that the unfolding of a composite high-level net is the composition of the unfoldings of its components. A companion paper shows how this algebra can be used to define the semantics of a concurrent programming language compositionally. [From the introduction:] In traditional high-level Petri net models, as described for instance in [1, 10, 11, 16, 17, 20, 21], there are place/transition annotations which determine the transition rule of the model. Such annotations also dr...

10. (Theoretical Computer Science, 1993) Cited by 21 (1 self)
In this paper we address the following question: What type of event structures are suitable for representing the behaviour of general Petri nets? As a partial answer to this question we define a new class of event structures called local event structures and identify a subclass called UL-event structures. We propose that UL-event structures are appropriate for capturing the behaviour of general Petri nets. Our answer is a partial one in that in the proposed event structure semantics, auto-concurrency is filtered out from the behaviour of Petri nets. It turns out that this limited event structure semantics for Petri nets is nevertheless a non-trivial and conservative extension of the (prime) event structure semantics of 1-safe Petri nets provided in [NPW]. We also show that the strong relationship between prime event structures and 1-safe Petri nets established in a categorical framework in [W3] can be extended to the present setting, provided we restrict our attention to the subclass ...
f(x) = 2x ln x — solve for x

There is nothing to solve. If the equation is rewritten as \[y=2x \ln x\] then we could solve for x in terms of y.

Make x the subject of the formula.

Hello all! f(x) is the dependent variable here, which is equal to 2x ln x, where x is independent. @Stacey wrote "y = 2x ln x". She defined a new variable, which is again equal to 2x ln x. Hence, we can write f(x) = y = 2x ln x. Now answering the main question by @adarakch: "f(x) = 2x ln x, solve for x". First of all, the question is incomplete, @adarakch. This is only an equation with 1 dependent variable "f(x)" and 1 independent variable "x". The intended task would probably be: differentiate f(x), or integrate f(x), or solve for a given condition such as f'(x) = 0 or something!
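As the replies point out, one can only recover x from a given value of y = 2x ln x. On x ≥ 1 the right-hand side is strictly increasing, so a simple bisection works; analytically the inverse is x = e^(W(y/2)), with W the Lambert W function. A minimal sketch (the function name and bracket bounds are my own choices):

```python
import math

def solve_for_x(y, lo=1.0, hi=1e6):
    """Invert y = 2*x*ln(x) for x >= 1, where the left side is strictly increasing."""
    f = lambda x: 2.0 * x * math.log(x) - y
    # bisection: keep halving the bracket [lo, hi] until it is tiny
    while hi - lo > 1e-12:
        mid = (lo + hi) / 2.0
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0
```

For example, since 2·2·ln 2 = 4 ln 2, feeding y = 4 ln 2 back in recovers x = 2.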
Databases - Storing locations and finding nearby locations (Tech Off | Channel 9)

When you search on certain websites you can specify "Find [type of company/person] within [x] miles/km from [insert city]". What kind of information do you store to calculate distances between cities? Is it geographical data like degrees, minutes, seconds, then calculating the difference between the two coordinates and then translating them to km/miles? Or am I overthinking it?

You're on the right track.

Ok, I got the file with all the longitudes and latitudes of all cities in Belgium. Next, I had to find some information about converting latitudes and longitudes to kilometers. This is what Wikipedia says:

"As opposed to a degree of latitude, which is always around sixty nautical miles or about 111 km, a degree of longitude varies from 0 to 111 km: it is 111 km times the cosine of the latitude, when the distance is laid out on a circle of constant latitude. More precisely, one degree of longitude = (111.320 + 0.373sin²φ)cosφ km, where φ is latitude."

That is sick. I wonder how fast my pc is going to be when it has to calculate and find this sort of data.

In some cases you can skip the great-circle formula. If you are only dealing with local locations and you do not have to be super accurate, then you can approximate the variable part and use the range of values to help. For example, what are the smallest lat and long for your area? What are the highest values? If you need all locations within x km of a given point, then find the values for two corners that describe a box. Any values in the range are inside the box and are "in range". This is not a perfect fit but may be a good approximation, and it reduces the amount of processing to mostly a SQL select:

WHERE lat > x1 AND lat < x2 AND long > y1 AND long < y2

or some variation of that...

It works. Luckily, I didn't have to calculate any longitudes or latitudes. After a couple of days of thinking (math is not my strongest point) this was my resulting query:

@mycity nvarchar(50), @distance tinyint -- distance in km
DECLARE @mycitylat decimal(9,6), @mycitylong decimal(9,6);
SET ROWCOUNT 1
SET @mycitylat = (SELECT latitude FROM cities WHERE city = @mycity)
SET @mycitylong = (SELECT longitude FROM cities WHERE city = @mycity)
SET ROWCOUNT 0
SELECT city FROM cities
WHERE SQRT(POWER(latitude - @mycitylat, 2) + POWER(longitude - @mycitylong, 2))
BETWEEN 0 AND @distance / 111.320

The table looks like:
- cityid
- city nvarchar(50)
- latitude decimal(9,6)
- longitude decimal(9,6)

The result comes pretty fast too, within a second. The table is about 12000 rows big, but there are still too many non-cities in it, like regions and streets, waterways, mountains.
If you want to know how I came up with the WHERE clause, it's from Pythagoras.

Instead of using SET ROWCOUNT 1, you could also just use the TOP keyword in your query, and merge your 2 queries into 1, like this:

SELECT TOP 1 @mycitylat = latitude, @mycitylong = longitude
FROM cities
WHERE city = @mycity

I tried that first, but the graphical SQL designer in Visual Studio didn't like that. Check out the next version of SQL with native spatial support:
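For reference, the two ideas in this thread - a cheap bounding-box prefilter plus an exact distance check - can be sketched outside SQL as well. Here is one version in Python (my own function names; the haversine formula replaces the flat-earth Pythagoras approximation used above):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points given in decimal degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def bounding_box(lat, lon, d_km):
    """Corners of a box that contains the d_km circle (prefilter only; breaks at the poles)."""
    dlat = d_km / 111.320                                  # ~km per degree of latitude
    dlon = d_km / (111.320 * math.cos(math.radians(lat)))  # degrees of longitude shrink away from the equator
    return lat - dlat, lat + dlat, lon - dlon, lon + dlon
```

The idea is the same as the thread's: let the database throw out everything outside the box with cheap comparisons, then run the exact (and more expensive) distance formula only on the survivors.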
This came up on another discussion group. Here I will show how easy this is to do with Mathematica. There are 5 different red balls, 5 different green balls, 5 different blue balls and 5 different black balls. In how many ways can they be arranged so that no two balls of the same color are adjacent? The whole problem condenses down to a single expression. (I want to thank Robert Israel for showing me this idea.) This produces an extremely large polynomial in 4 variables. The coefficient of w^5 x^5 y^5 z^5 is the answer. We get it with the extremely powerful command:

Coefficient[ans, w^5 x^5 y^5 z^5]

The answer is 134631576.
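The same coefficient can be cross-checked without Mathematica by a direct dynamic program over the color sequence (a sketch; the function name is my own). Note it counts color patterns with no two equal colors adjacent, which is exactly what the generating-function coefficient above extracts:

```python
from functools import lru_cache

def no_adjacent_same(counts):
    """Number of color sequences using counts[i] balls of color i,
    with no two adjacent balls of the same color."""
    @lru_cache(maxsize=None)
    def go(remaining, last):
        if sum(remaining) == 0:
            return 1
        total = 0
        for color, left in enumerate(remaining):
            if left > 0 and color != last:
                nxt = remaining[:color] + (left - 1,) + remaining[color + 1:]
                total += go(nxt, color)
        return total
    return go(tuple(counts), -1)
```

With counts (5, 5, 5, 5) this reproduces the value quoted in the post. The state space is tiny (remaining counts times last color), so memoisation makes it instant.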
Portability: GHC, Hugs (MPTC and FD)
Stability: stable
Maintainer: robdockins AT fastmail DOT fm
Safe Haskell: Safe-Inferred

This module introduces a number of infix symbols which are aliases for some of the operations in the sequence and set abstractions. For several, the argument orders are reversed to more closely match usual symbolic usage. The symbols are intended to evoke the operations they represent. Unfortunately, ASCII is pretty limited, and Haskell 98 only allocates a few symbols to the operator lexical class. Thus, some of the operators are less evocative than one would like. A future version of Edison may introduce unicode operators, which will allow a wider range of operations to be represented symbolically. Unlike most of the modules in Edison, this module is intended to be imported unqualified. However, the definition of (++) will conflict with the Prelude definition. Either this definition or the Prelude definition will need to be imported hiding ( (++) ). This definition subsumes the Prelude definition, and can be safely used in place of it.

(<|) :: Sequence seq => a -> seq a -> seq a
Left (front) cons on a sequence. The new element appears on the left. Identical to lcons.

(|>) :: Sequence seq => seq a -> a -> seq a
Right (rear) cons on a sequence. The new element appears on the right. Identical to rcons with reversed arguments.

(++) :: Sequence seq => seq a -> seq a -> seq a
Append two sequences. Identical to append. Subsumes the Prelude definition.
geodesic calculation.

May 30th 2011, 11:41 AM #1
Joined: May 2011

I have a formula I will post below that I have posted to numerous GIS forums and received no response to my question. The lack of response in those forums has prompted me to post my question here. The question is: in what units is 'd'? What would I enter if I wanted 'd' to represent 0.25 miles? I am also assuming that bearing 'tc' is in decimal degrees between zero and 360. I would like to verify that as well.

Lat/lon given radial and distance

A point {lat, lon} is a distance d out on the tc radial from point 1 if:

lat = asin(sin(lat1)*cos(d) + cos(lat1)*sin(d)*cos(tc))
IF (cos(lat) = 0) lon = lon1 // endpoint a pole
ELSE lon = mod(lon1 - asin(sin(tc)*sin(d)/cos(lat)) + pi, 2*pi) - pi

This algorithm is limited to distances such that dlon < pi/2, i.e. those that extend around less than one quarter of the circumference of the earth in longitude. A completely general, but more complicated, algorithm is necessary if greater distances are allowed:

lat = asin(sin(lat1)*cos(d) + cos(lat1)*sin(d)*cos(tc))
dlon = atan2(sin(tc)*sin(d)*cos(lat1), cos(d) - sin(lat1)*sin(lat))
lon = mod(lon1 - dlon + pi, 2*pi) - pi

May 31st 2011, 01:49 AM #2
MHF Contributor, joined Apr 2005

Assuming that tc is measured in degrees, then so is d. The actual "distance" would then be given by $d\left(\frac{\pi}{180}\right)R$ where R is the radius of the earth and is in whatever units R is measured in.
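As these formulas are usually stated, d is an angular distance: a ground distance divided by the earth's radius, so 0.25 miles becomes roughly 0.25/3958.8 radians. A sketch of the "destination point" computation in Python (my own function name; note it uses east-positive longitudes and adds dlon, whereas some GIS references state the formula with west-positive longitudes and subtract it):

```python
import math

EARTH_RADIUS_MI = 3958.8  # assumed mean earth radius, in miles

def destination(lat1_deg, lon1_deg, bearing_deg, dist_mi):
    """Point reached from (lat1, lon1) travelling dist_mi on bearing tc (clockwise from north)."""
    lat1 = math.radians(lat1_deg)
    lon1 = math.radians(lon1_deg)
    tc = math.radians(bearing_deg)
    d = dist_mi / EARTH_RADIUS_MI          # ground distance -> angular distance in radians
    lat = math.asin(math.sin(lat1) * math.cos(d) +
                    math.cos(lat1) * math.sin(d) * math.cos(tc))
    dlon = math.atan2(math.sin(tc) * math.sin(d) * math.cos(lat1),
                      math.cos(d) - math.sin(lat1) * math.sin(lat))
    lon = (lon1 + dlon + math.pi) % (2 * math.pi) - math.pi   # normalise to (-pi, pi]
    return math.degrees(lat), math.degrees(lon)
```

So for 0.25 miles on a given bearing you would call destination(lat, lon, tc, 0.25); the division by the radius happens inside.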
Lives for gear
Joined: Aug 2009
Location: Los Angeles, CA
Posts: 1,159
Thread Starter

linear phase versus minimum phase (EQ's)

What exactly is the difference between "linear phase" and "minimum phase" when talking about EQs? Is there a preference for one over the other when dealing with specific tasks? Sorry for such a noob question... but for a general mastering EQ, I hear so many different EQ styles/brands recommended, and "linear phase" this... "linear phase" that... comes up a lot when asking about a mastering EQ. Yet plugins such as Flux Epure (which is NOT a linear phase EQ) get such rave reviews as a mastering EQ. So what's the deal? What's the difference between "linear phase" and "minimum phase"?
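Brands aside, the defining property is easy to state: a linear-phase EQ shifts the phase of every frequency in proportion to that frequency, which means every component is delayed by the same amount (constant group delay) - and any symmetric FIR impulse response gives you exactly that, at the cost of latency and possible pre-ringing. A minimum-phase EQ has the least delay possible for its magnitude curve, but that delay varies with frequency. A tiny numeric illustration of the linear-phase half (generic DSP, not any plugin's actual code):

```python
import cmath

# a symmetric FIR impulse response -> exactly linear phase
h = [0.25, 0.5, 0.25]

def freq_response(taps, w):
    """Frequency response H(e^{jw}) of an FIR filter at normalised frequency w."""
    return sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(taps))

# For these taps H(w) = e^{-jw} * (0.5 + 0.5*cos(w)): the magnitude varies
# with frequency (it is a gentle low-pass), but the phase is exactly -w,
# i.e. every frequency component is delayed by the same one sample.
for w in (0.1, 0.5, 1.0, 2.0):
    assert abs(cmath.phase(freq_response(h, w)) + w) < 1e-12
```

Flip any tap so the response is no longer symmetric and the phase stops being a straight line - that is the whole distinction the marketing terms are pointing at.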
Vargotamma planet also Retrograde......effects???

Re: Vargotamma planet also Retrograde......effects???

Dear Basab, Reading your SECOND mail of 13.08.07 (about vargottam planets), I think I have goofed-up the "vargottam of debilitation" issue! I will come back on this point. Let me re-check before confirming. However, as regards the contents in your FIRST mail of 13.08.07, honestly, I am not convinced, "on the face of it", with what you say. Let me go back to the 40-odd examples on the basis of which I had tried to devise this "logical formulation". I may take a few days, but I will certainly come back. Anyway, thanks for your views/observations. Meanwhile, I also request you to actually apply what you have written to about 15-20 charts and have a re-look till such time I do my home-work. The idea is only that we try to learn and follow the correct thing! Satish Kingi

Re: retrograde

astrokinghyd wrote: The following supersedes all the above logic: 1. All vargottams are good (even debilitated vargottams) 2. Any planet debilitated in navamsa is finally bad

Don't you think the points are contradictory to each other? In the first point you have mentioned that a debilitated vargottama planet is good, but in the second point you have mentioned that a planet debilitated in navamsha is bad. A debilitated planet, when vargottama, means the planet is debilitated in navamsha, so how can it be good and bad at the same time?

Re: retrograde

astrokinghyd wrote: Example 1: Planet Jupiter - Natural Benefic - plus; Sign - Pisces - Own Sign - plus; House - 11th House - favourable - plus; Retrograde - yes - minus. Conclusion: plus x plus x plus x minus = minus. Therefore, Retrograde Jupiter in Pisces in 11th house is bad. Example 2: Planet Saturn - Natural Malefic - minus; Sign - Libra - Exalted - plus; House - 5th House - unfavourable - minus. Conclusion: minus x plus x minus = plus. Therefore, a non-retrograde exalted Saturn in 5th house is good.

In Satishji's examples the answers are absolutely correct, but the calculation is not. It's true that Jupiter won't be favourable and Saturn will be favourable, but in the first example Jupiter has got three positive points and just one negative point and the result is bad, though logically the result should be good and not bad; and in the second example Saturn has got 2 negative points and one positive point and the result is good, though logically the result should be bad as it has got more negative points than positive points. So I think it's better if the positive and negative points are added instead of multiplied; that, I believe, will give a more accurate picture. Taking the same examples, I am adding the positives and negatives instead of multiplying them as Satishji has done.

Planet Jupiter - Natural Benefic - plus; Sign - Pisces - Own Sign - plus; House - 11th House - favourable - plus; Retrograde - yes - minus. Conclusion: plus + plus + plus + minus = plus. Therefore, Retrograde Jupiter in Pisces in 11th house is good.

Planet Saturn - Natural Malefic - minus; Sign - Libra - Exalted - plus; House - 5th House - unfavourable - minus. Conclusion: minus + plus + minus = minus. Therefore, a non-retrograde exalted Saturn in 5th house is bad.

This gives a logical result, as in the first example the positives are more so the result is good, and in the second case the negatives are more so the result is bad. I just gave the aforesaid examples to show that mathematically the calculation should be done this way, but I believe even this doesn't give a correct picture. I believe the strength of a planet cannot be measured in this way. It's because Saturn, though being shown as bad, is not at all bad and is good, because it is exalted in the 5th house being the lord of the 9th house. It is a functional benefic for Gemini lagna, so not at all negative, and Jupiter would not be that favourable, as Jupiter is a functional malefic and retrograde as well.
But Jupiter will still give some good results definitely; how much, though, cannot be assessed if the calculation is done on the basis of positive and negative points. Percentages should be fixed for various positive and negative points. I believe the functional status of a planet holds a lot of importance, and not just natural beneficence/maleficence. Then it should be checked whether a planet is under benefic or malefic influence. So these additional parameters should also be taken into consideration, and percentages should be given to assess the strength of the planet, because an exalted planet gets a better score than a planet in its own house; but if just positive and negative are given, then the extent of good or bad cannot be understood. One should apply all the parameters, give them certain percentages as per their importance, and assess the strength of a planet.

Dear Umaabirami, Yes, this is the way I do. I am happy that it is working with your experience. I had already said, "THIS IS NOT A RULE WRITTEN IN ANY OF THE ASTROLOGICAL TEXTS. IT IS JUST THAT I HAVE EXTENDED MY IMAGINATION BASED ON MY EXPERIENCE. YOU MAY AGREE WITH IT OR MAY NOT". Let us see if more friends report about their agreement/disagreement about this humble attempt from me. Satish Kingi

Vargotamma planet also Retrograde......effects???

Thanks for the reply. The tables were helpful in finding whether good or bad. I calculated the following as per your tables; correct me if I am wrong. Here is the situation for Taurus Ascendant, and Moon is in Libra. 7th and 12th lord Mars is in 5th in Virgo and it is Vargottama. It is not in Retro. That means (a) is minus as it is a natural malefic, (b) is minus as it is in an enemy's house (Mercury), (c) is minus as it is a natural malefic in a trine. The final result is minus, and so deep-rooted failure in love/marriage/children? 8th and 11th lord Jupiter is in 10th in Aquarius and it is Vargottama. It is in Retro.
That means (a) is plus as it is a natural benefic, (b) is plus as it is in a friend's home, (c) is plus as it is a natural benefic in a Kendra, (d) is minus as it is retro. The final result is minus, and so profession will be affected badly?

4th lord Sun is debilitated in 6th in Libra and it is Vargottama. That means (a) is minus as it is a natural malefic, (b) is minus as it is debilitated, (c) is plus as it is a natural malefic in a dusta house. Final result is plus, so good even though debilitated Vargottama?

Also the Ascendant is in Vargottama. Does that mean the Ascendant is more powerful than Moon Rashi?

When I say "a house is conducive OR not to a planet", I mean that we should consider the following two factors: (a) Keeping everything aside, as a fundamental we know that Angles/Trines are good for natural benefics and other houses good for natural malefics. Accordingly, decide whether "plus" or "minus". (b) Ignore the result in (a) above and take it as "plus" if the planet occupying a particular house is himself the KARAKA OF THAT HOUSE. Satish Kingi

Dear Satish, You wrote: "There is a touch of algebra to this. 1. Keep in mind 4 factors: (a) Who is the planet concerned, a natural benefic or malefic (b) What is the sign it occupies, enemy planet's sign/debilitated or friendly/own/mooltrikona/exalted (c) Is the house conducive to the planet (d) Is it retrograde or not." Q) Regarding (c), what are the criteria to decide whether a house is conducive OR not to a planet?

Vargotamma planet also Retrograde......effects???

In point 2 - I wanted to know whether Jupiter in Aquarius (Saturn's home) is considered as minus or plus?

Dear readers, I would like to share a way of analyzing retrograde, vargottam planets that I USE. In my experience, most of the times it works. There ARE rarest occasions when this way does not work. THIS IS NOT A RULE WRITTEN IN ANY OF THE ASTROLOGICAL TEXTS. IT IS JUST THAT I HAVE EXTENDED MY IMAGINATION BASED ON MY EXPERIENCE. YOU MAY AGREE WITH IT OR MAY NOT. Either way, I would like to hear your response. Any improvisations suggested will be well taken. Here it goes:

There is a touch of algebra to this. 1. Keep in mind 4 factors: (a) Who is the planet concerned, a natural benefic or malefic (b) What is the sign it occupies, enemy planet's sign/debilitated or friendly/own/mooltrikona/exalted (c) Is the house conducive to the planet (d) Is it retrograde or not. 2. Now, for factors (a) to (c), if the answers are negative, take the result as "minus" for each factor. 3. If retrograde, always take "minus"; otherwise don't consider the factor at all. 4. Multiply these four (or three, if not retrograde) resultant signs. If you finally get "plus", then diagnose that the planet will act favourably; if "minus", it is unfavourable.

Example 1: Planet Jupiter - Natural Benefic - plus; Sign - Pisces - Own Sign - plus; House - 11th House - favourable - plus; Retrograde - yes - minus. Conclusion: plus x plus x plus x minus = minus. Therefore, Retrograde Jupiter in Pisces in 11th house is bad.

Example 2: Planet Saturn - Natural Malefic - minus; Sign - Libra - Exalted - plus; House - 5th House - unfavourable - minus. Conclusion: minus x plus x minus = plus. Therefore, a non-retrograde exalted Saturn in 5th house is good.

The following supersedes all the above logic: 1. All vargottams are good (even debilitated vargottams) 2. Any planet debilitated in navamsa is finally bad. Readers may try this out. Satish Kingi, Astrologer, Hyderabad

Does this mean that a retrograde trik house lord placed in its mooltrikona house in the lagna chart, and vargottama, will become a stronger malefic? For example, in Taurus lagna, retrograde Mars in the 12th house is in mooltrikona, and if it's vargottama, does it make it a stronger malefic or a stronger benefic?

Yes, you are right.. it will become stronger. But at the same time find whether it becomes strong as a malefic or a benefic.
Benefic: for any lagna, a subhasthana adhipathi placed in a subhasthana and retrograde...........and vargottama in navamsa.........becomes stronger as a benefic, and vice versa. I think it becomes very strong.

"You will understand the Gita better with your biceps, your muscles, a little stronger." -- Swami Vivekananda

Vargotamma planet also Retrograde......effects???

Hi all, what happens when a Retrograde planet is also vargottama??? Looking forward to hearing from you.

Dear Satishji, I think this can be understood very logically and doesn't need to be applied to charts to be proven. You are multiplying the positives and negatives. If just one factor is negative, then even if an infinite no. of positive factors be there, the result will always be negative, because minus x plus x plus x plus x plus x plus = minus. So the negative will turn into a positive only if there are 2 negative (minus) points, otherwise not, because minus x minus = plus, which I believe can never give a clear picture. As per my method, minus + plus + plus + plus + plus + plus = 4(plus), which shows the strength of the planet quite clearly. Looking at it from another context, minus x minus x minus = minus, so 3 minus points are giving one minus point, which is not at all logical, because it is not showing the extent of negative it can do, as it is showing 1 minus point as the result when it has scored 3 minus points. But if you add up the 3 minus points, minus + minus + minus = 3(minus), then it's giving 3 minus points, which shows the extent of negative the planet can do. Then there is another disadvantage: if there are 2 minus and 1 plus point, then minus x minus x plus = plus, but if you look at the result, which is plus, it doesn't show the correct picture, because 2 minus points means more negative, and so the result should be minus and not plus. But if you use the addition method, then minus + minus + plus = 1(minus).
So it is giving a better picture, as it is giving the result as 1 minus point, since 1 minus cancels against 1 plus and so 1 minus is the result. Then additional parameters, like the functional status of the planets, should also be considered, which is one of the most important points. Now let me give an example and apply your method and my method. I think it will prove my point.

Example 1 (your method): Planet Saturn - Natural Malefic - minus; Sign - Libra - Exaltation Sign - plus; House - 10th house - favourable - plus; Retrograde - no - so not considering it as a factor, as you have said. Conclusion: minus x plus x plus = minus. Therefore, Saturn in Libra in the 10th house is bad.

Example 1 (my method): Planet Saturn - Natural Malefic - minus; Sign - Libra - Exaltation Sign - plus; House - 10th house - favourable - plus; Retrograde - no - plus (I will give "not retrograde" a plus point because it will affect the result in my case). Conclusion: minus + plus + plus + plus = 2(plus). Therefore, Saturn in Libra in the 10th house is good, and it gets 2 positive points.

Now do you think Saturn, the lagna lord, getting exalted in the 10th house can be bad, as your method is showing it to be? My method is not only showing it is good but has also given it 2 plus points.

Example 2 (your method): Planet Venus - Natural Benefic - plus; Sign - Taurus - Own Sign - plus; House - 5th house - plus; Retrograde - yes - minus. Conclusion: plus x plus x plus x minus = minus. Therefore, Venus in the 5th house in Taurus is bad.

Example 2 (my method): Planet Venus - Natural Benefic - plus; Sign - Taurus - Own Sign - plus; House - 5th house - plus; Retrograde - yes - minus. Conclusion: plus + plus + plus + minus = 2(plus). Therefore, Venus in the 5th house in Taurus is good, and it gets 2 plus points.

Now do you think Venus, the raja yoga karaka planet for the Capricorn lagna, placed in the 5th house will do bad just because it is retrograde, as your result is showing? My result is not only showing it is good but giving it 2 plus points.
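Set the astrology aside for a moment: the disagreement running through this thread is purely arithmetic - whether one "minus" factor should veto everything (multiplying the signs) or merely offset one "plus" (adding them). A neutral sketch of both scoring rules, using the thread's own Jupiter example (this illustrates the arithmetic only, it does not endorse either method):

```python
from math import prod

# factor convention from the thread: +1 for a favourable factor, -1 for unfavourable

def product_score(factors):
    """Satish's rule: multiply the signs, so a single minus flips the whole verdict."""
    return prod(factors)

def sum_score(factors):
    """basab's rule: add the signs, so pluses and minuses offset one another."""
    return sum(factors)

# Retrograde Jupiter in Pisces in the 11th house, as scored in the thread:
# natural benefic (+), own sign (+), favourable house (+), retrograde (-)
jupiter = [+1, +1, +1, -1]
assert product_score(jupiter) == -1   # multiplication: unfavourable
assert sum_score(jupiter) == +2       # addition: net favourable
```

The product is always just plus or minus one, while the sum ranges over the whole count of factors, which is exactly the "extent of good or bad" point being argued above.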
But this method I have used here is also not absolutely correct, though the method of calculation should be this. More factors should be added, like the functional status of the planets, etc., and there should be gradation of plus/minus points; in this case an exalted planet and a planet in its own house are getting the same points. It should be like this: an exalted planet will get 30 points and a planet in its own house will get 20 points. There should be a maximum score, and the total score will show the strength of the planet in context of that. So these should be used, but the main thing is that the points should be added and not multiplied.

umaabirami wrote: 4th lord Sun is debilitated in 6th in Libra and it is Vargottama. That means (a) is minus as it is a natural malefic, (b) is minus as it is debilitated, (c) is plus as it is a natural malefic in a dusta house. Final result is plus, so good even though debilitated.

Your method has been used in the above-mentioned chart. Now do you think the 4th lord Sun, debilitated in the 6th house, will give good results? Sun in the 6th house is not that bad, so maybe the negative will be slightly less, but it is still very negative. As per my method it will get minus + minus + plus = 1(minus), which I guess gives a better picture of the planetary strength. Maybe you have a strong justification for all this, so I would wait for your response. I am sure you have derived this method after applying it to many charts, so surely there must be some basis to it which right now I am overlooking. I will try to understand it and will definitely use it in the charts I have got in my astrodata bank.

Last edited on Tue Aug 14, 2007 4:02 am, edited 5 times in total.

Vargotamma planet also Retrograde......effects???

My observation across charts on Vargottama is: depending upon the planet and the house it resides in, the person becomes determined (strong) for that house, and depending upon the plus/minus they need to work hard for that. For example: when Moon is Vargottama, generally people have clear thinking (not influenceable), but if it is debilitated it takes time for them to settle their thinking. In Vargottama they cannot be influenced to change their thinking, as Vargottama makes one determined for that house. When there are too many pluses, their determination is accepted and achieved easily; else they struggle. But in general, Vargottama people stick to the plans they have made/committed to. When the Ascendant is Vargottama, an overall determined life is what they have.

Dear Satishji, you are not getting my point. Yes, I agree that I said the conclusions you had reached in the first 2 examples definitely were correct, but when I applied them to some other planetary positions it didn't work out as I had thought. There are some examples I have given using the method you have used and using the method I have used, so do you think the result your method is showing is correct in those cases?

basab wrote: "now let me give an example and apply your method and my method. i think it will prove my point. […] my result is not only showing it is good but giving it 2 plus points."
my method is not only showing it is good but it has also given it 2 plus example 2(your method) Planet Venus - Natural Benefic - plus Sign - Taurus - Own Sign - plus House- 5th house - plus retrograde - yes - minus Conclusion: plus x plus x plus x minus = minus Therefore Venus in the 5th house in Taurus is bad example 2(my method) Planet Venus - Natural Benefic - plus Sign - Taurus - Own Sign - plus House- 5th house - plus retrograde - yes - minus Conclusion: plus + plus + plus + minus = 2(plus) Therefore Venus in the 5th house in Taurus is good and it get's 2 plus points. now do you think venus the raja yoga karak planet for the capricon lagna placed in the 5th house will do bad just because it is retrograde as your result is showing? my result is not only showing it is good but giving it 2 plus points. basab wrote:if just one factor is negative then even if an infinite no. of positive factors be there the result will always be negative because minus x plus x plus x plus x plus x plus = minus i would like to draw your attention to this line, how do you justify this? [quote="astrokinghyd"]Example: When I say Saturn in Libra in 5th house, it is understood that the lagna is Gemini – it is also understood that it is lord of 9th and 10th house – and hence it is understood that it is functional benefic. Thus the functional beneficience is taken care – the “who is he and how is he†Hi All, Went throgh the posts..and iam pretty confused... Pls CLARIFY the presnt case: 5th House : Jupiter(Retro) in Swathi star. The dispositor Venus in in Aries in 11th house aspecting its own house Libra. Jupiter: Natural Benefic: PLUS Enimy House:MINUS Retro: MINUS As Jupiter attains "kendradhipatya dosha" for Gemini Lagna,Does Functional Malefic status come is also considered? Is it a Plus x Minus x Minus= PLUS [or] 1PLUS and 2 MINUS so MINUS??? How will Jupiter Mahadasa be??? Please clarify?! 
Many thanks, the best analysis for this has been provided by Visti Larsen on his website srigaruda dot com Ghrishneswar wrote:the best analysis for this has been provided by Visti Larsen on his website srigaruda dot com Can you please provide the link? Last edited by on Mon Feb 01, 2010 4:19 pm, edited 1 time in total. the link is srigaruda dot com slash visti slash index dot php slash publications slash articles slash 89-retr Ghrishneswar wrote:the link is srigaruda dot com slash visti slash index dot php slash publications slash articles slash 89-retr Grateful that you so kindly remembered. Much appreciated!!
Proposition 28 To find medial straight lines commensurable in square only which contain a medial rectangle.

Set out the rational straight lines A, B, and C commensurable in square only. Take a mean proportional D between A and B. Let it be contrived that B is to C as D is to E.

Since A and B are rational straight lines commensurable in square only, therefore the rectangle A by B, that is, the square on D, is medial. Therefore D is medial. And since B and C are commensurable in square only, and B is to C as D is to E, therefore D and E are also commensurable in square only. But D is medial, therefore E is also medial. Therefore D and E are medial straight lines commensurable in square only.

I say next that they also contain a medial rectangle. Since B is to C as D is to E, therefore, alternately, B is to D as C is to E. But B is to D as D is to A, therefore D is to A as C is to E. Therefore the rectangle A by C equals the rectangle D by E. But the rectangle A by C is medial, therefore the rectangle D by E is also medial.

Therefore medial straight lines commensurable in square only have been found which contain a medial rectangle.

There has been no construction given in Book X for the required lines A, B, and C. One example would be where their lengths are 1, √2, and √3. Set A to have length 1, B length √b, an irrational number where b is rational, and C length √c, an irrational number where c is rational, with √(b/c) an irrational number (so that B and C are commensurable in square only). Then D has length b^1/4 while E has length (√c)/b^1/4. Then D and E are medial and commensurable in square only, and the b^1/4 by (√c)/b^1/4 rectangle they contain has area √c, an irrational number. This proposition is used in X.75.
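As a quick check of the numerical example with b = 2 and c = 3 (my own verification, not part of the guide's text):

```latex
% A = 1, \quad B = \sqrt{2}, \quad C = \sqrt{3}:
\begin{align*}
D &= \sqrt{AB} = 2^{1/4}, \qquad E = \frac{DC}{B} = \frac{\sqrt{3}}{2^{1/4}},\\
\frac{D^2}{E^2} &= \frac{\sqrt{2}}{\,3/\sqrt{2}\,} = \frac{2}{3}
  \quad \text{(rational, so $D$ and $E$ are commensurable in square)},\\
\frac{D}{E} &= \frac{\sqrt{2}}{\sqrt{3}}
  \quad \text{(irrational, so not commensurable in length)},\\
DE &= \sqrt{3}
  \quad \text{(irrational, so the rectangle $D$ by $E$ is medial)}.
\end{align*}
```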
A088870 - OEIS %S 13677,14647,21291,29567,43941,69031,88701,105991,126507,317973, %T 156304482823,468913448469,21729950852487,2212933498428421, %U 6638800495285263,12049739358792173,36149218076376519,11316117499289108644863 %N Numbers n which are divisors of the number produced by concatenating (n-5), (n-4), ... (n-1) in that order. %e a(1)=13677 because 13677 is a factor of 1367213673136741367513676. %Y Cf. A069860, A088797, A088798, A088799, A088800, A088868, A088869, A088871, A088872. %K base,nonn %O 1,1 %A Chuck Seggelin (barkeep(AT)plastereddragon.com), Oct 20 2003 %E More terms from _David Wasserman_, Aug 26 2005
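The definition and the %e example line can be checked directly; here is a short sketch (mine, not part of the OEIS entry) that reproduces the example for a(1) = 13677:

```python
n = 13677
# Concatenate (n-5), (n-4), ..., (n-1) in that order, as in the definition.
concat = int("".join(str(n - k) for k in range(5, 0, -1)))
print(concat)           # 1367213673136741367513676, matching the %e line
print(concat % n == 0)  # True: 13677 divides the concatenation
```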
free shortcut method for solving maths problem

dammeom - Posted: Thursday 28th of Dec 20:46
Hi Fellows! I have a severe problem regarding math and I was wondering if someone might be able to help me out somehow. I have an algebra test in a few weeks and even though I have been taking math seriously, there are still a couple of parts that cause a lot of problems, such as free shortcut method for solving maths problem and slope especially. Last week I had a meeting with a math teacher, but many things still remain unclear to me. Can you propose a good way of studying or a good tutor that you know already?
Registered: 19.11.2005

AllejHat - Posted: Friday 29th of Dec 14:51
It always feels nice when I hear that beginners are willing to put that extra effort into their education. Free shortcut method for solving maths problem is not a very difficult topic and you can easily do some initial preparation yourself. As a useful tool, I would suggest that you get a copy of Algebrator. This program is quite handy when doing math yourself.
Registered: 16.07.2003 From: Odense, Denmark

DoniilT - Posted: Friday 29th of Dec 19:24
I tried out each one of them myself and that was when I came across Algebrator. I found it really apt for evaluating formulas, difference of squares and greatest common factor. It was actually also kid's play to run this. Once you feed in the problem, the program carries you all the way to the solution, elucidating each step on its way. That's what makes it terrific. By the time you arrive at the result, you already know how to crack the problems. It helped me with Algebra 2, College Algebra and Intermediate Algebra. I am also positive that you too will love this program just as I did. Wouldn't you want to test this out?
Registered: 27.08.2002

Dolknankey - Posted: Saturday 30th of Dec 08:17
An extraordinary piece of algebra software is Algebrator.
Even I faced similar difficulties while working on solving inequalities, side-side-side similarity and rational expressions. Just by typing in a problem from my workbook and clicking on Solve, a step-by-step solution to my algebra homework would be ready. I have used it through several math classes - Remedial Algebra and Basic Math. I highly recommend the program.
Registered: 24.10.2003 From: Where the trout streams flow and the air is nice
Math and trigonometry functions (reference)

Click one of the links in the following list to see detailed help about the function.

Function - Description
ABS - Returns the absolute value of a number
ACOS - Returns the arccosine of a number
ACOSH - Returns the inverse hyperbolic cosine of a number
ASIN - Returns the arcsine of a number
ASINH - Returns the inverse hyperbolic sine of a number
ATAN - Returns the arctangent of a number
ATAN2 - Returns the arctangent from x- and y-coordinates
ATANH - Returns the inverse hyperbolic tangent of a number
CEILING - Rounds a number to the nearest integer or to the nearest multiple of significance
COMBIN - Returns the number of combinations for a given number of objects
COS - Returns the cosine of a number
COSH - Returns the hyperbolic cosine of a number
DEGREES - Converts radians to degrees
EVEN - Rounds a number up to the nearest even integer
EXP - Returns e raised to the power of a given number
FACT - Returns the factorial of a number
FACTDOUBLE - Returns the double factorial of a number
FLOOR - Rounds a number down, toward zero
GCD - Returns the greatest common divisor
INT - Rounds a number down to the nearest integer
LCM - Returns the least common multiple
LN - Returns the natural logarithm of a number
LOG - Returns the logarithm of a number to a specified base
LOG10 - Returns the base-10 logarithm of a number
MDETERM - Returns the matrix determinant of an array
MINVERSE - Returns the matrix inverse of an array
MMULT - Returns the matrix product of two arrays
MOD - Returns the remainder from division
MROUND - Returns a number rounded to the desired multiple
MULTINOMIAL - Returns the multinomial of a set of numbers
ODD - Rounds a number up to the nearest odd integer
PI - Returns the value of pi
POWER - Returns the result of a number raised to a power
PRODUCT - Multiplies its arguments
QUOTIENT - Returns the integer portion of a division
RADIANS - Converts degrees to radians
RAND - Returns a random number between 0 and 1
RANDBETWEEN - Returns a random number between the numbers you specify
ROMAN - Converts an arabic numeral to roman, as text
ROUND - Rounds a number to a specified number of digits
ROUNDDOWN - Rounds a number down, toward zero
ROUNDUP - Rounds a number up, away from zero
SERIESSUM - Returns the sum of a power series based on the formula
SIGN - Returns the sign of a number
SIN - Returns the sine of the given angle
SINH - Returns the hyperbolic sine of a number
SQRT - Returns a positive square root
SQRTPI - Returns the square root of (number * pi)
SUBTOTAL - Returns a subtotal in a list or database
SUM - Adds its arguments
SUMIF - Adds the cells specified by a given criteria
SUMIFS - Adds the cells in a range that meet multiple criteria
SUMPRODUCT - Returns the sum of the products of corresponding array components
SUMSQ - Returns the sum of the squares of the arguments
SUMX2MY2 - Returns the sum of the difference of squares of corresponding values in two arrays
SUMX2PY2 - Returns the sum of the sum of squares of corresponding values in two arrays
SUMXMY2 - Returns the sum of squares of differences of corresponding values in two arrays
TAN - Returns the tangent of a number
TANH - Returns the hyperbolic tangent of a number
TRUNC - Truncates a number to an integer
Bellefonte, DE Algebra 2 Tutor Find a Bellefonte, DE Algebra 2 Tutor ...As a teacher, I believe in a balanced based approach between the "new math" and traditional teaching methods. I believe students understand math better when they see the real-life application of it in the real world. But, I also whole-heartedly believe the basics are essential. 12 Subjects: including algebra 2, geometry, algebra 1, trigonometry ...If you are interested in having me as your tutor, I look forward to hearing from you! On the other hand, if you have chosen one of the other talented tutors, keep on learning! "An expert problem solver must be endowed with two incomparable qualities: a restless imagination and a patient pertina... 9 Subjects: including algebra 2, chemistry, geometry, algebra 1 ...He earned a full semester's worth of credits and is now full-time in GCC's dual enrollment with Rowan University. My youngest is a high school junior and is also taking classes at GCC. Both have become increasingly independent with their studies - which allows me the time to help other students. 23 Subjects: including algebra 2, reading, writing, geometry ...ANITA G. is a dynamic, natural-born teacher, performer and educator. She has offered and mastered classes in local school districts including creating a jazz flute workshop for middle school and high school students. Her education includes, a B.S. from Rutgers University, graduate education cou... 51 Subjects: including algebra 2, English, reading, algebra 1 I am currently employed as a secondary mathematics teacher. Over the past eight years I have taught high school courses including Algebra I, Algebra II, Algebra III, Geometry, Trigonometry, and Pre-calculus. I also have experience teaching undergraduate students at Florida State University and Immaculata University. 
9 Subjects: including algebra 2, geometry, GRE, algebra 1
Harmans Math Tutor

Find a Harmans Math Tutor

...It is important that students keep up with the work level and keep up with practicing. I have found that when students don't keep up with the work load they tend to fall behind quickly, and it becomes increasingly difficult to keep up with the course. Trigonometry is all about triangles, that's why I love it!
24 Subjects: including calculus, elementary (k-6th), grammar, ACT Math

I majored in Physics in my undergraduate (BS) and graduate study (MS). However, my Ph.D. was obtained in Biophysics. Therefore, I have a very strong background to teach math and science (Biology/Physics). I have much experience teaching math and science, from middle school students to graduate school students. I am very patient and know how to teach all different levels of students.
18 Subjects: including algebra 1, SAT math, differential equations, linear algebra

...I have tutored at all levels for the past 15 years, and can help with any type of mathematics or physics. My significant body of research in mathematics education, which includes grants from the National Science Foundation and Department of Education, makes me an expert on student learning and t...
17 Subjects: including geometry, physics, precalculus, trigonometry

...I look forward to working with you and helping you achieve your goals. Warm Regards, Nancy. I am an attorney licensed to practice law in the State of Maryland. I have worked in the court system for over 4 years.
34 Subjects: including probability, reading, algebra 1, geometry

...My B.S. was earned in Biology and Chemistry at Case Western Reserve University, with a minor in economics. The areas of my tutoring include: the biological, chemical, and health sciences, mathematics, economics, history, social science, and both general and technical writing. Professionally I h...
36 Subjects: including algebra 1, algebra 2, organic chemistry, biology
Difference Between Domain and Range

Domain vs Range

A mathematical function is a relationship between two sets of variables: one independent, called the domain, and one dependent, called the range. In other words, in a two-dimensional Cartesian (XY) coordinate system, the variable along the x-axis is the domain and the variable along the y-axis is the range.

Mathematically, consider a simple relation such as {(2, 3), (1, 3), (4, 3)}. In this example, the domain is {2, 1, 4}, while the range is {3}.

The domain is the set of all possible input values in a relation. The output value of a function depends on each member of the domain. Domain values vary between mathematical problems and depend on the function being solved. For cosine, the domain is the set of all real numbers: any value above 0, below 0, or 0 itself. For the square root, the domain cannot include values less than 0; it must be 0 or above. In other words, the domain of the square root function is always 0 or a positive value. For complex and real equations, the domain is a subset of a complex or real vector space. If we want to solve a partial differential equation, the answer should lie within the three-dimensional space of Euclidean geometry.

For example, if y = 1/(1 - x), the function is undefined where 1 - x = 0, that is, at x = 1. Hence its domain is the set of all real numbers except 1.

The range is the set of all possible output values of a function. Range values are also called dependent values, because they can only be calculated by putting a domain value into the function. In simple words, if the domain value of a function y = f(x) is x, then its range value is y.

For example, if y = 1/(1 - x), the range is the set of all real numbers except 0: every nonzero value of y is produced by some x, but y = 0 is never attained.
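To make the y = 1/(1 - x) example concrete, here is a small check (my sketch, not part of the article): solving y = 1/(1 - x) for x gives x = 1 - 1/y, which exists for every y except 0, so each nonzero output really is attained.

```python
def f(x):
    # y = 1/(1 - x): undefined at x = 1, so the domain is all reals except 1.
    return 1 / (1 - x)

def preimage(y):
    # Solving y = 1/(1 - x) for x gives x = 1 - 1/y, defined for every y != 0,
    # so the range is all reals except 0 (y = 0 is never attained).
    return 1 - 1 / y

print(f(preimage(2.0)))   # 2.0: the output 2.0 really is attained
print(f(preimage(-0.5)))  # -0.5
```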
• The domain value is an independent variable, while the range value depends upon the domain value, so it is a dependent variable.
• The domain is the set of all input values. The range, on the other hand, is the set of output values that the function produces from the domain values.
• Here is a theoretical example to illustrate the difference between domain and range. Consider the hours of sunlight during a whole day. The domain is the span of hours between sunrise and sunset, while the range runs from 0 to the maximum elevation of the sun. In this example, keep in mind that the hours of daylight vary with the season (winter or summer) and with latitude, so the domain and range should be calculated for a specific latitude.

No doubt, both domain and range are mathematical variables that correlate with each other, as the value of the range depends upon the value of the domain. However, the two have different properties and individual identities within any one mathematical function.
Math Forum Discussions

Topic: Uncountability of the Real Numbers Without Decimals
Replies: 4   Last Post: Dec 7, 2013 12:54 PM

Re: Uncountability of the Real Numbers Without Decimals
Posted: Dec 6, 2013 8:11 PM

WM <wolfgang.mueckenheim@hs-augsburg.de> writes:
> On Friday, 6 December 2013 15:39:24 UTC+1, Ben Bacarisse wrote:
>> Does a bijection not define
>> an enumeration? It does for the rest of us. [typo and->an corrected]
> It does.

Then I have nothing more to comment on. You accept the existence of bijections between N and Q; you accept that a bijection defines an enumeration. I was only commenting on your claim that there was no enumeration of the rationals, and you clearly agree that there is.

Date - Subject - Author
12/6/13 - Re: Uncountability of the Real Numbers Without Decimals - Ben Bacarisse
12/7/13 - Re: Uncountability of the Real Numbers Without Decimals - wolfgang.mueckenheim@hs-augsburg.de
12/7/13 - Re: Uncountability of the Real Numbers Without Decimals - Virgil
12/7/13 - Re: Uncountability of the Real Numbers Without Decimals - ross.finlayson@gmail.com
Fortran Wiki: norm2

Calculates the Euclidean vector norm ($L_2$ norm) of array along dimension dim.

Standard: Fortran 2008 and later
Class: Transformational function

Syntax
result = norm2(array[, dim])

Arguments
• array - Shall be an array of type real.
• dim - (Optional) shall be a scalar of type integer with a value in the range from 1 to n, where n equals the rank of array.

Return value
The result is of the same type as array. If dim is absent, a scalar with the square root of the sum of squares of the elements of array is returned. Otherwise, an array of rank $n-1$, where $n$ equals the rank of array, and a shape similar to that of array with dimension dim dropped is returned.

Example
program test_norm2
  real :: x(5) = [ real :: 1, 2, 3, 4, 5 ]
  print *, norm2(x) ! = sqrt(55.) ~ 7.416
end program

See also
product, sum, hypot
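For readers without a Fortran compiler at hand, the rank-1, dim-absent behaviour can be mimicked in a few lines of Python (an illustration of the semantics, not part of the wiki page):

```python
import math

def norm2(values):
    # Python analogue of the Fortran intrinsic for a rank-1 array with dim
    # absent: the square root of the sum of squares of the elements.
    return math.sqrt(sum(v * v for v in values))

print(norm2([1, 2, 3, 4, 5]))  # sqrt(55.) ~ 7.416, as in the example above
```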
Fox Island Algebra Tutor Find a Fox Island Algebra Tutor ...I have a Masters Degree in Chemistry from the University of Washington, Seattle, and have been employed as a chemist and educator for over twenty years. I thoroughly enjoy teaching, and tutored all through my college years. My goal as an instructor is to ensure that the student is comfortable w... 12 Subjects: including algebra 1, algebra 2, chemistry, geometry ...I have been speaking publicly since literally kindergarten age: in two languages. I was a member of my high school's speech team and have spoken to groups as large as several hundred. I have given professional presentations to generals, Fortune 500 VIPs, foreign dignitaries and congressional and senatorial staffers. 48 Subjects: including algebra 1, Spanish, reading, English ...I feel that tutoring will prepare me for my future career and to earn some money to pay for my classes. I will work hard to assure that the student achieves their highest potential. I believe that learning should be a fun thing that sparks interest in the student. 6 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...I hope to further discuss my skills and experience in detail with you soon!My accumulated education experience has given me the skills necessary to be an effective K-6th tutor. Currently I help with two 3rd and 4th grade home school students in reading and reading comprehension. I have been working with them for a year. 9 Subjects: including algebra 1, reading, writing, grammar I have earned a Bachelor degree in Mathematics, and I have four years experienced in tutoring for middle-high school and college level. I am currently tutoring at a community college. I can be a one-to-one or small group tutor. 
7 Subjects: including algebra 1, algebra 2, calculus, statistics
Commutativity of pullback and pushforward

Suppose I have a Cartesian square of morphisms of algebraic varieties over a field $K$ (apologies for the grotty diagram):

$$\begin{array}{ccc} A & \stackrel{\alpha}{\to} & B \\ \downarrow^{\beta} & & \downarrow^{\gamma} \\ C & \stackrel{\delta}{\to} & D \end{array}$$

so $A$ is the fibre product of $B$ and $C$ over $D$. Suppose also that all four of $A,B,C,D$ are proper curves, and all the morphisms are finite and flat. Is it then true that the two maps $K(C)^\times \to K(B)^\times$ given by $\alpha_* \beta^*$ and $\gamma^* \delta_*$ coincide?

(I am sure this must be standard, but I'm not a geometer and I don't really know where to look in the literature for this sort of thing.)

Does $K(X)$ denote the rational function field of $X$? If so, what is the pushforward? I feel like this might be a special case of Theorem 6.2 in Fulton's "Intersection Theory", but I can't see how. – Mark Grant Jun 9 '12 at 13:38

Yes, $K(X)$ is just the rational function field of $X$ (or rather, rational function ring, since $A$ may not be irreducible even if $B,C,D$ are.) So $K(A)$ is a finite $K(B)$-algebra, via $\alpha^*$, and hence there is a norm map $K(A)^\times \to K(B)^\times$ which sends $x$ to the determinant of multiplication by $x$ regarded as a $K(B)$-linear map from $K(A)$ to itself. – David Loeffler Jun 9 '12 at 13:49
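To make the "determinant of multiplication by x" definition of the norm concrete, here is a toy numerical analogue for the quadratic extension Q(√2)/Q (my illustration, with hypothetical helper names; the question itself concerns function fields of curves):

```python
def mult_matrix(a, b):
    """Matrix of 'multiply by a + b*sqrt(2)' on Q(sqrt(2)), in the basis (1, sqrt(2)):
    (a + b*sqrt(2)) * 1       = a*1 + b*sqrt(2)
    (a + b*sqrt(2)) * sqrt(2) = 2b*1 + a*sqrt(2)
    """
    return [[a, 2 * b], [b, a]]

def norm(a, b):
    # Norm of a + b*sqrt(2): the determinant of the multiplication matrix,
    # which works out to a^2 - 2*b^2.
    m = mult_matrix(a, b)
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

print(norm(3, 1))  # 3^2 - 2*1^2 = 7
```

The norm is multiplicative, e.g. (3 + √2)(1 + √2) = 5 + 4√2 and indeed norm(3, 1) * norm(1, 1) equals norm(5, 4).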
Given a vector field, how does one compute the flow? October 13th 2010, 08:28 AM Given a vector field, how does one compute the flow? I need to be able to know how to compute the flow in R^3 and R^2 given some relatively simple vector fields. My issue is that whenever I read any literature on flows it is very abstract and generalized to the point that I'm not sure what to make of it. October 13th 2010, 09:48 AM Basic Idea Here is the basic idea. let $\vec{r}(t)=x(t)\vec{i}+y(t)\vec{j}+z(t)\vec{k}$ be the position of a particle at time t. Then its velocity is $\vec{v}(t)=\frac{d}{dt}\vec{r}(t)=\frac{dx}{dt}\ve c{i}+\frac{dy}{dt}\vec{j}+\frac{dz}{dt}\vec{k}$ If your vector field $\vec{F}(x,y,z)$ is a velocity field then $\vec{F}(\vec{r}(t))=\frac{dx}{dt}\vec{i}+\frac{dy} {dt}\vec{j}+\frac{dz}{dt}\vec{k}$ by equating the components of the vectors you will get a system of ODE's that are the flow lines of the vectorfield
{"url":"http://mathhelpforum.com/differential-geometry/159463-given-vector-field-how-does-one-compute-flow-print.html","timestamp":"2014-04-20T20:40:28Z","content_type":null,"content_length":"5093","record_id":"<urn:uuid:75b450a3-6542-4e20-9565-682965e38564>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Anti-Fibonacci Sequences and Rings of Saturn Anti-Fibonacci Sequences and Rings of Saturn Assume a process which repeatedly marks all of positive integers which belong to a Fibonacci sequence which start with the lowest unmarked x and x+c. For example, if c is one, the first Fibonacci sequence marks 1, 2, 3, 5, 8, 13, 21, etc. The process would start the second Fibonacci sequence at 6 and 7, marking thereafter 13, 20, 33, ..., the third one would mark 9, 10, 19, 29, etc. The unmarked numbers 4, 18, 22, 28, 32, 47, 54, 72, etc. are then what I call the anti-Fibonacci numbers for the given c = 1. For c = 2 the anti-Fibonacci numbers would be 5, 8, 9, 24, 34, 38, 45, 50, etc. One might assume that anti-Fibonacci number generation is a sufficiently complicated process to be essentially pseudo-random. My back-of-the-envelope calculation went as follows: Let p(i) be the probability that i is anti-Fibonacci. A single existing Fibonacci sequence grows sparser with a rate of phi/i where phi is the base of the golden ratio (sqrt(5) + 1) / 2 with which the Fibonacci values grow, but with a probability of p(i)^2 the i and i+c starts a new Fibonacci sequence and p(i) gets a nudge of 2/i upwards. Putting these in equilibrium and solving for p gives sqrt(sqrt(5) + 1) / 2, or roughly 0.899. I knew I had made some lofty approximations in the derivation above, especially in assuming that the density of multiple Fibonacci sequences would decrease as rapidly as that of one sequence, so I wrote a little program that computes some 100 million first anti-Fibonacci numbers. I was somewhat surprized to see that p is only approximately 0.856 for c = 1. For c = 2 the program gave 0.872, which as again surprizing because I didn't expect c to affect p. But when my computer spat out what looks like interference patterns my jaw really dropped. 
Having seen the non-randomness of the average of p over all i and observed how it depends on c, a change in coordinates lead me to wonder if conversely the average of p over c's depends on i. The answer is yes. Below I have a sample of p(i) estimated by c up to ten thousand. The moral of the story? Whenever playing with number theory, staple your jaw so it won't keep falling off constantly. 2 Comments: cessu said... I noticed this post has been discussed elsewhere, and in good internet fashion those discussions haven't gone "upstream". I started the Fibonacci sequences (for any c) from zero, not from two ones as usually. This seems to have little effect on the images, but for c>1 the actual anti-Fibonacci numbers are a little different. For example for c=2 the anti-Fibonacci sequence starting from one would be 2, 5, 9, 16, 20, 42, 45, 53, 54, 60, etc. Another note is that I wasn't aware that the term anti-Fibonacci sequence is already used for the extension of the conventional Fibonacci sequence to negative numbers ..., -8, 5, -3, 2, -1, 1, 0, 1, 1, 2, 3, 5, 8, ... George said... Have you read Richard Merrick's work on Interference theory? You will find it quite interesting.
{"url":"http://cessu.blogspot.com/2006/11/anti-fibonacci-sequences-and-rings-of.html","timestamp":"2014-04-16T04:12:19Z","content_type":null,"content_length":"20297","record_id":"<urn:uuid:ced4c8e1-2716-407d-b3e9-287820e860a6>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
Sherborn Algebra Tutor Find a Sherborn Algebra Tutor ...I also work with students on timing strategies so they neither run out of time nor rush during a section. I do assign weekly homework, as it is critical for students to practice the strategies they learn and feel comfortable with them so they will use them during the test. Of the hundreds of st... 26 Subjects: including algebra 2, algebra 1, English, linear algebra ...I then graduated from Law school at Boston University School of Law, and, while there, I was selected on merit to attend an international law program semester at Oxford University. I then attended Business school at Georgetown University, where I received my MBA in Finance and International Busi... 67 Subjects: including algebra 1, algebra 2, English, calculus ...Things need to make sense to me, I work hard to be sure to understand the big picture of everything and because of that I'm able to show others or explain things better to others about different mathematical and science concepts. I currently have a Masters Degree in Microbiology from Loyola University in Chicago. I also got my Bachelors degree in Biochemistry from Mount Holyoke 22 Subjects: including algebra 1, algebra 2, English, reading ...Since elementary school, I was concert mistress and principal violinist of my elementary, middle, and high school orchestras. I was nominated to play in the NYSSMA Festival (New York State School Music Association), the All-County Orchestra, and the Long Island String Festival. I was nominated and accepted based on my performance on state-wide playing tests and teacher 11 Subjects: including algebra 1, algebra 2, Spanish, ESL/ESOL ...I graduated from Tufts University with an undergraduate degree in Biopsychology and from Stony Brook University with a Master's degree in Physiology and Biophysics. I have extensive experience in tutoring high school math (algebra, trigonometry, pre-calculus, calculus) and science (biology, chem... 
10 Subjects: including algebra 1, algebra 2, geometry, chemistry
{"url":"http://www.purplemath.com/Sherborn_Algebra_tutors.php","timestamp":"2014-04-17T15:56:56Z","content_type":null,"content_length":"24006","record_id":"<urn:uuid:38883380-5be0-4cef-97e2-80326b1a6b49>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: How do I Prove or disprove; For all x, y € R, if x is rational and y is irrational, then xy is irrational Best Response You've already chosen the best response. \(x\) is rational, so \(x\) can be expressed as \(\frac{p}{q}\) Let us assume that \(x\times y\) is rational, then \(xy=\frac{m}{n}\implies y=\frac{mq}{np}\) . This shows that y is rational, which is a contradiction. So \(xy\) is irrational. Best Response You've already chosen the best response. Did you understand it? Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
This entry is about the notion of “limit” in category theory. For the notion of the same name in analysis and topology see at limit of a sequence. Category theory Universal constructions Limits and colimits In category theory a limit of a diagram $F : D \to C$ in a category $C$ is an object $lim F$ of $C$ equipped with morphisms to the objects $F(d)$ for all $d \in D$, such that everything in sight commutes. Moreover, the limit $lim F$ is the universal object with this property, i.e. the “most optimized solution” to the problem of finding such an object. The limit construction has a wealth of applications throughout category theory and mathematics in general. In practice, it is possibly best thought of in the context of representable functors as a classifying space for maps into a diagram. So in some sense the limit object $lim F$ “subsumes” the entire diagram $F(D)$ into a single object, as far as morphisms into it are concerned. The corresponding universal object for morphisms out of the diagram is the colimit. An intuitive general idea is that a limit of a diagram is the locus or solution set of a bunch of equations, where each of the coordinates is parametrized by one of the objects of the diagram, and where the equations are prescribed by the morphisms of the diagram. This idea is explained more formally here. Often, the general theory of limits (but not colimits!) works better if the source of $F$ is taken to be the opposite category $D^op$ (or equivalently, if $F$ is taken to be a contravariant functor). This is what we do below. In any given situation, of course, you use whatever categories and functors you're interested in. In some cases the category-theoretic notion of limit does reproduce notions of limit as known from analysis. See the examples below. 
Global versus local In correspondence to the local definition of adjoint functors (as discussed there), there is a local definition of limits (in terms of cones), that defines a limit (if it exists) for each individual diagram, and there is a global definition, which defines the limit for all diagrams (in terms of an adjoint). If all limits over the given shape of diagrams exist in a category, then both definitions are equivalent. See also the analogous discussion at homotopy limit. Terminology and notation A limit is taken over a functor $F : D^{op} \to C$ and since the functor comes equipped with the information about what its domain is, one can just write $\lim F$ for its limit. But often it is helpful to indicate how the functor is evaluated on objects, in which case the limit is written $\lim_{d \in D} F(d)$; this is used particularly when $F$ is given by a formula (as with other notation with bound variables). In some schools of mathematics, limits are called projective limits, while colimits are called inductive limits. Also seen are (respectively) inverse limits and direct limits. Both these systems of terminology are alternatives to using ‘co-’ when distinguishing limits and colimits. The first system also appears in pro-object and ind-object. Correspondingly, the symbols $\underset{\leftarrow}{\lim}$ and $\underset{\rightarrow}{\lim}$ are used instead of $\lim$ and $colim$. Confusingly, many authors restrict the meanings of these alternative terms to (co)limits whose sources are directed sets; see directed limit. In fact, this is the original meaning; projective and inductive limits in this sense were studied in algebra before the general category-theoretic notion of (co)limit. Local definition in terms of representable functors There is a general abstract definition of limits in terms of representable functors, which we describe now.
This reproduces the more concrete and maybe more familiar description in terms of universal cones, which is described further below. Let in the following $D$ be a small category and Set the category of sets (possibly realized as the category $U Set$ of $U$-small sets with respect to a given Grothendieck universe). Limit of a Set-valued functor The limit of a Set-valued functor $F : D^{op} \to Set$ is the hom-set $lim F := Hom_{[D^{op}, Set]}(pt, F) \in Set$ in the functor category $[D^{op}, Set]$ (the presheaf category), where $pt : D^{op} \to Set$, $pt : d \mapsto \{*\}$ is the functor constant on the point, i.e. the terminal diagram. The set $lim F$ is equivalently called the projective limit or inverse limit of $F$. The set $lim F$ can be equivalently expressed as an equalizer of a product, explicitly: $lim F \simeq \left\lbrace (x_d)_{d \in D} \in \prod_{d \in D} F(d) | \forall (d_i \stackrel{\alpha}{\to} d_j) \in D : F(\alpha)(x_{d_j}) = x_{d_i} \right\rbrace$ In particular, the limit of a set-valued functor always exists. Notice the important triviality that the covariant hom-functor commutes with set-valued limits: for every set $S$ we have a bijection of sets $Hom_{Set}(S, lim F) \simeq \lim Hom_{Set}(S, F(-)) \,,$ where $Hom(S, F(-)) : D^{op} \to Set$. Limit of a functor with values in an arbitrary category The above formula generalizes straightforwardly to a notion of limit for functors $F : D^{op} \to C$ for $C$ an arbitrary category if we construct a certain presheaf on $C$ which we will call $\hat{\lim} F$. The actual limit $lim F$ is then, if it exists, the object of $C$ representing this presheaf. More precisely, using the Yoneda embedding $Y : C \to [C^{op}, Set]$ define for $F : D^{op} \to C$ the presheaf $\hat{\lim} F \in [C^{op}, Set]$ by the analog of the above formula $(\hat{\lim} F)(c) \simeq Hom_{[C^{op}, Set]}(Y(c), \hat{\lim} F) := \lim Hom_C(c, F(-))$ for all $c \in C$. Here the $\lim$ on the right is again that of Set-valued functors defined before.
By the above this can also be written as $(\hat{\lim} F)(c) = Hom_{[D^{op}, Set]}(pt , Hom_C(c,F(-)))$ or, suppressing the subscripts for readability: $(\hat{\lim} F)(c) = Hom(pt , Hom(c,F(-))) \,.$ So also the presheaf-valued limit always exists. If this presheaf is representable by an object $\lim F$ of $C$, then this is the limit of $F$: $Hom(c, \lim F) \simeq Hom(pt, Hom(c,F(-))) \,.$ Generalization to weighted limits In the above formulation, there is an evident generalization to weighted limits: replace in the above the constant terminal functor $pt : D^{op} \to Set$ with any functor $W : D^{op} \to Set$ – then called the weight –; the $W$-weighted limit of $F$, often written $\lim_W F$, is, if it exists, the object representing the presheaf $c \mapsto Hom_{[D^{op}, Set]}(W , Hom_C(c,F(-))) \,,$ i.e. such that $Hom(c, \lim_W F) \simeq Hom(W, Hom(c,F(-))) \,$ naturally in $c \in C$. Relation to continuous functors The very definition of limit as above asserts that the covariant hom-functor $Hom(c,-) : C \to Set$ commutes with forming limits. Indeed, the definition is equivalent to saying that the hom-functor is a continuous functor. Definition in terms of universal cones Unwrapping the above abstract definition of limits yields the following more hands-on description in terms of universal cones. Let $F : D^{op} \to C$ be a functor. Notice that for every object $c \in C$ an element $* \to Hom(pt, Hom(c, F(-)))$ is to be identified with a collection of morphisms $c \to F(d)$ for all $d \in D$, such that all triangles $\array{ && c \\ & \swarrow && \searrow \\ F(d_i) && \stackrel{F(f)}{\to} && F(d_j) }$ commute. Such a collection of morphisms is called a cone over $F$, for the obvious reason.
If the limit $\lim F \in C$ of $F$ exists, then it singles out a special cone given by the composite morphism $* \stackrel{* \mapsto Id_{\lim F}}{\to} Hom_C(\lim F, \lim F) \stackrel{\simeq}{\to} Hom(pt, Hom(\lim F, F(-))) \,,$ where the first morphism picks the identity morphism on $\lim F$ and the second one is the defining bijection of a limit as above. The cone $\array{ && \lim F \\ & \swarrow && \searrow \\ F(d_i) && \stackrel{F(f)}{\to} && F(d_j) }$ is called the universal cone over $F$, because, again by the defining property of limit as above, every other cone $\{c \to F(d)\}_{d \in D}$ as above is bijectively related to a morphism $c \to \lim F$: $* \stackrel{\{c \to F(d)\}_{d \in D}}{\to} Hom(pt, Hom(c, F(-))) \stackrel{\simeq}{\to} Hom(c, \lim F) \,.$ By inspection one finds that, indeed, the morphism $c \to \lim F$ is the morphism which exhibits the factorization of the cone $\{c \to F(d)\}_{d \in D}$ through the universal limit cone $\array{ && c \\ & \swarrow && \searrow \\ F(d_i) && \stackrel{F(f)}{\to} && F(d_j) } = \array{ && c \\ && \downarrow \\ && \lim F \\ & \swarrow && \searrow \\ F(d_i) && \stackrel{F(f)}{\to} && F(d_j) } \,.$ An illustrative example is the following: a limit of the identity functor $Id_C : C\to C$ is, if it exists, an initial object of $C$. Global Definition in terms of adjoint of the constant diagram functor Given categories $D$ and $C$, limits over functors $D^{op} \to C$ may exist for some functors, but not for all. If it does exist for all functors, then the above local definition of limits is equivalent to the following global definition. For $D$ a small category and $C$ any category, the functor category $[D^{op},C]$ is the category of $D$-diagrams in $C$. Pullback along the functor $D^{op} \to pt$ to the terminal category $pt = \{\bullet\}$ induces a functor $const : C \to [D^{op},C]$ which sends every object of $C$ to the diagram functor constant on this object.
The left adjoint $colim_D : [D^{op},C] \to C$ of this functor is, if it exists, the functor which sends every diagram to its colimit and the right adjoint is, if it exists, the functor $lim_D : [D^{op},C] \to C$ which sends every diagram to its limit. The Hom-isomorphisms of these adjunctions state precisely the universal property of limit and colimit given above. Concretely this means that for all $c \in C$ we have a bijection $Hom_C(c, \lim F) \simeq Hom_{[D^{op},C]}(const_c, F) \,.$ From this perspective, a limit is a special case of a Kan extension, as described there, namely a Kan extension to the point. The notion of limit, being fundamental to category theory, generalizes to many other situations. Examples include the following. The central point about examples of limits is: Categorical limits are ubiquitous. To a fair extent, category theory is all about limits and the other universal constructions: Kan extensions, adjoint functors, representable functors, which are all special cases of limits – and limits are special cases of these. Listing examples of limits in category theory is much like listing examples of integrals in analysis: one can and does fill books with these. (In fact, that analogy has more to it than meets the casual eye: see coend for more). Keeping that in mind, we do list some special cases and special classes of examples that are useful to know. But any list is necessarily wildly incomplete. Here are some important examples of limits, classified by the shape of the diagram: Existence: construction from products and equalizers Frequently some limits can be computed in terms of other limits. This makes things easier since we only have to assume that categories have, or functors preserve, some easier-to-verify class of limits in order to obtain results about a larger one. The most common example of this is the computation of limits in terms of products and equalizers.
Specifically, if the limit of $F : D^{op} \to C$ and the products $\prod_{d\in Obj(D)} F(d)$ and $\prod_{f\in Mor(D)} F(s(f))$ all exist, then $lim F$ is a subobject of $\prod_{d\in Obj(D)} F(d)$, namely the equalizer of $\prod_{d \in Obj(D)} F(d) \stackrel{\prod_{f \in Mor(D)} (F(f) \circ p_{t(f)}) }{\to} \prod_{f \in Mor(D)} F(s(f))$ and $\prod_{d \in Obj(D)} F(d) \stackrel{\prod_{f \in Mor(D)} (p_{s(f)}) }{\to} \prod_{f \in Mor(D)} F(s(f)) \,.$ Conversely, if both of these products exist and so does the equalizer of this pair of maps, then that equalizer is a limit of $F$. In particular, therefore, a category has all limits as soon as it has all products and equalizers, and a functor defined on such a category preserves all limits as soon as it preserves products and equalizers. Another example is that all finite limits can be computed in terms of pullbacks and a terminal object. Interaction with $Hom$-functor Covariant Hom commutes with limits For $C$ a locally small category, for $F : D^{op} \to C$ a functor and writing $C(c, F(-)) : D^{op} \to Set$, we have $C(c, lim F) \simeq lim C(c, F(-)) \,.$ Depending on how one introduces limits this holds by definition or is an easy consequence. In $Set$ Limits in Set are hom-sets For $F : D^{op} \to Set$ any functor and $const_{*} : D^{op} \to Set$ the functor constant on the point, the limit of $F$ is the hom-set $lim F \simeq [D^{op}, Set](const_{*}, F)$ in the functor category, i.e. the set of natural transformations from the constant functor into $F$. In functor categories Proposition – limits in functor categories are computed pointwise Let $D$ be a small category and let $D'$ be any category. Let $C$ be a category which admits limits of shape $D$. Write $[D',C]$ for the functor category. Then • $[D',C]$ admits $D$-shaped limits; • these limits are computed objectwise (“pointwise”) in $C$: for $F : D^{op} \to [D',C]$ a functor we have for all $d' \in D'$ that $(lim F)(d') \simeq lim (F(-)(d'))$.
Here the limit on the right is in $C$. Compatibility with adjoints Proposition – right adjoints commute with limits Let $R : C \to C'$ be a functor that is right adjoint to some functor $L : C' \to C$. Let $D$ be a small category such that $C$ admits limits of shape $D$. Then $R$ commutes with $D$-shaped limits in $C$ in that for $F : D^{op} \to C$ some diagram, we have $R(lim F) \simeq lim (R \circ F) \,.$ Using the adjunction isomorphism and the above fact that Hom commutes with limits, one obtains for every $c' \in C'$ $\begin{aligned} C'(c', R (lim F)) & \simeq C(L(c'), lim F) \\ & \simeq lim C(L(c'), F) \\ & \simeq lim C'(c', R\circ F) \\ & \simeq C'(c', lim (R \circ F)) \end{aligned} \,.$ Since this holds naturally for every $c'$, the Yoneda lemma, corollary II on uniqueness of representing objects, implies that $R (lim F) \simeq lim (R \circ F)$. Commutativity with limits and colimits Proposition – small limits commute with small limits Let $D$ and $D'$ be small categories and let $C$ be a category which admits limits of shape $D$ as well as limits of shape $D'$. Then these limits commute with each other, in that for $F : D^{op} \times {D'}^{op} \to C$ a functor, with corresponding induced functors $F_D : {D'}^{op} \to [D^{op},C]$ and $F_{D'} : {D}^{op} \to [{D'}^{op},C]$, we have $lim F \simeq lim_{D} (lim_{D'} F_D ) \simeq lim_{D'} (lim_{D} F_{D'} ) \,.$ This follows from the above proposition and the characterization of the limit as right adjoint to the functor $const$ defined above in the section on adjoints. See limits and colimits by example for what this formula says for instance for the special case $C =$ Set. In general limits do not commute with colimits. But under a number of special conditions of interest they do. More on that at commutativity of limits and colimits.
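As a concrete aside on the equalizer description given earlier: for a finite shape category and finite sets, the limit of a $Set$-valued functor can be computed by brute force, enumerating the compatible families $(x_d)_{d \in D}$ with $F(\alpha)(x_{d_j}) = x_{d_i}$. A small sketch (the encoding of the diagram and all names are my own, purely illustrative):

```python
from itertools import product

def set_limit(objs, F_obj, F_mor):
    """Limit of a finite Set-valued functor F : D^op -> Set.

    objs:  the objects of the shape category D
    F_obj: dict object -> finite set (as a list), the values F(d)
    F_mor: list of (d_i, d_j, F_alpha) for each alpha : d_i -> d_j in D,
           where F_alpha is a dict encoding F(alpha) : F(d_j) -> F(d_i).
    Returns the families (x_d) with F(alpha)(x_{d_j}) = x_{d_i}."""
    lim = []
    for tup in product(*(F_obj[d] for d in objs)):
        x = dict(zip(objs, tup))
        if all(F_alpha[x[dj]] == x[di] for (di, dj, F_alpha) in F_mor):
            lim.append(x)
    return lim
```

For the shape with two parallel arrows $a \rightrightarrows b$ this computes an equalizer; the condition in the inner loop is exactly the compatibility constraint from the equalizer-of-products description.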
Prove some identities! (Round two) February 6th 2011, 07:00 AM #91 Simply put, since the sums are finite: $\displaystyle\sum_{k=2}^{\infty}{\lfloor \log_k n\rfloor} = \sum_{k=2}^{\infty}{\left(\sum_{k^r \leq n ;\ r\geq 1} 1 \right) } = \sum_{r\geq 1}\left(\sum_{k^r\leq n;\ k\geq 2}1\right)$ And we are done since $\displaystyle\sum_{k^r\leq n;\ k\geq 2}1 = \lfloor n^{1/r}-1\rfloor$ Now a couple of problems: 1. Show that: $\displaystyle\sum_{k=1}^n{\frac{d(k)}{k}} = \tfrac{1}{2}\cdot \log^2(n)+ O\left(\log(n)\right)$ $(*)$ 2. Show that: $\displaystyle\sum_{k=0}^{n}{\binom{n}{k}\cdot (n-k)^{n-2}\cdot (-1)^k} = 0$ for all $n\in \{2,3,4,...\}$ $(**)$ $(*)$ Here $d(k)$ is the number of positive integers dividing $k$. $(**)$ For the case $n=2$ to work, we write $0^0 = 1$. I'll assume $n\ge 3$; for $n=2$ it can be checked directly. $\displaystyle f(x)=\sum_{k=0}^n \binom{n}{k}x^k = (1+x)^n$ $\displaystyle xf'(x)=\sum_{k=0}^n \binom{n}{k}kx^k$ Differentiating more, it's easily seen that $\displaystyle x(x(xf')'\cdots)' = \sum_{k=0}^n\binom{n}{k}P_m(k)x^k$ where on the left side we have $m$ differentiations and on the right side the $P_m$ are polynomials of degree $m$. As such, $\{P_0,P_1,P_2,\cdots,P_{n-2}\}$ form a basis of the vector space of polynomials of degree at most $n-2$. Therefore $\displaystyle\sum_{k=0}^n \binom{n}{k}k^{n-2}x^k$ is a linear combination of $(1)\;\;\;f,\; xf',\; x(xf')',\;\cdots,\; x(x(xf')'\cdots)'$ with up to $n-2$ differentiations. But since the zero of $f$ at $-1$ is of order $n$, all the functions in (1) vanish at $-1$, hence so does $\displaystyle\sum_{k=0}^n \binom{n}{k}k^{n-2}(-1)^k$. Substituting $k \mapsto n-k$ now gives $\displaystyle\sum_{k=0}^n \binom{n}{k}(n-k)^{n-2}(-1)^k = (-1)^n\sum_{k=0}^n \binom{n}{k}k^{n-2}(-1)^k = 0$, as desired.
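The finite-sum identity in post #91 is easy to sanity-check numerically. A quick brute-force verification (helper names are mine; exact integer arithmetic is used deliberately, since floating-point logs and roots can misfloor near perfect powers):

```python
def floor_log(k, n):
    """Largest r >= 0 with k**r <= n, computed with exact integers."""
    r = 0
    while k ** (r + 1) <= n:
        r += 1
    return r

def iroot(n, r):
    """floor(n ** (1/r)), corrected to exact integer arithmetic."""
    x = int(round(n ** (1.0 / r)))
    while x ** r > n:
        x -= 1
    while (x + 1) ** r <= n:
        x += 1
    return x

def lhs(n):  # sum over k >= 2 of floor(log_k n); terms vanish for k > n
    return sum(floor_log(k, n) for k in range(2, n + 1))

def rhs(n):  # sum over r >= 1 of (floor(n^(1/r)) - 1); terms vanish once 2^r > n
    total, r = 0, 1
    while 2 ** r <= n:
        total += iroot(n, r) - 1
        r += 1
    return total
```

For example, for $n = 10$ both sides come out to $3 + 2 + 7 = 12$.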
$\displaystyle \sum_{n\leq x} d(n) = \sum_{n\leq x} \left\lfloor\frac xn\right\rfloor = \sum_{n\leq x} \frac xn -\sum_{n\leq x}\left\{\frac xn\right\}$ $\displaystyle = x\left(\log x +\gamma +O\left(\tfrac1x\right)\right) + O(x) = x\log x + O(x)$. Now use Abel's summation to get $\displaystyle \sum_{n\leq x} \frac{d(n)}n = \frac{x\log x + O(x)}x + \int_1^x \frac{\log t}t dt + O\left(\int_1^x \frac1t dt\right)$ $\displaystyle = \frac12\log^2x + O\left(\log x\right)$. Now I'll do you one better. It can be shown via the Dirichlet hyperbola method that $\displaystyle \sum_{n\leq x} d(n) = x\log x + (2\gamma-1)x + O\left(\sqrt{x}\right)$. Now use Abel's summation to get $\displaystyle \sum_{n\leq x} \frac{d(n)}n = \frac{x\log x + (2\gamma-1)x + O\left(\sqrt{x}\right)}x + \int_1^x \frac{\log t}t dt + \int_1^x \frac{2\gamma-1}t dt + O\left(\int_1^x t^{-3/2}\, dt\right)$ $\displaystyle = \frac12\log^2x + 2\gamma\log x + O(1)$. Last edited by chiph588@; February 6th 2011 at 11:29 AM. My solutions: First proof: Consider the set of functions from a set $A\to B$. If we fix a subset $S\subseteq B$ we have that the number of functions from $A\to B$ such that $f(A)\subseteq B - S$ is $\left( |B| - |S|\right)^{|A|}$, so in fact what we are counting (in the original equation) by inclusion-exclusion is the number of functions such that $f(A)\supseteq B$; there are no such functions, since in this case $|f(A)|\leq |A| = n-2<|B| = n$, hence the sum is $0$. Remark: If $|A|=|B|=n$ we get $n! = \displaystyle\sum_{k=0}^n{\binom{n}{k}\cdot (n-k)^n\cdot (-1)^k}$; setting $n = p-1$ and using Fermat's Little Theorem + the Binomial Theorem we get a proof of Wilson's Theorem. Second Proof: The number of labeled trees with $n$ vertices such that a given set (you choose it) of $k$ labels correspond to leaves is $(n-k)^{n-2}$, so in fact what $\displaystyle\sum_{k=0}^n{\binom{n}{k}\cdot (n-k)^{n-2}\cdot (-1)^k}$ is counting is the number of trees with no leaves; of course, since every tree with at least two vertices has a leaf, this number is 0. 2.
We write: $\displaystyle\sum_{k=1}^n{\frac{d(k)}{k}}= \sum_{k=1}^n{\tfrac{1}{k}\cdot \sum_{d|k}{1}} = \sum_{d=1}^n{\sum_{d|k,\ k\leq n}{\tfrac{1}{k}}} = \sum_{d=1}^n{\tfrac{1}{d}\cdot H_{\left\lfloor \tfrac{n}{d} \right\rfloor}}$ where $H_n = 1 + \tfrac{1}{2}+...+\tfrac{1}{n}$; estimating this, the rest follows.
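Both the identity of problem 2 and the $n!$ remark above can be checked numerically for small $n$ (Python evaluates `0**0` as `1`, matching the convention $0^0 = 1$ adopted for the case $n = 2$):

```python
from math import comb, factorial

def alternating_sum(n, power):
    # 0**0 == 1 in Python, matching the convention 0^0 = 1 used in the thread
    return sum((-1) ** k * comb(n, k) * (n - k) ** power for k in range(n + 1))
```

With `power = n - 2` this is problem 2; with `power = n` it is the $n!$ identity from the remark.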
Secaucus Math Tutor Find a Secaucus Math Tutor ...My approach as a tutor is to first establish my tutee's needs and abilities, and to then fashion an individualized program for him/her. I take the student through math examples in a step-by-step manner, making sure the student grasps each point. We then do several additional, similar examples, ... 8 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...I am also able to advise techniques for drawing diagrams on the logic games section, and well as general logical rules to help understand the arguments and constraints in each problem. I have tutored the GRE at least 20 times. I am able to tutor students of all skill levels, from the most eleme... 32 Subjects: including logic, linear algebra, algebra 1, algebra 2 ...For the last year, I have tutored college students in Calculus I and Calculus 2. I feel very confident tutoring this subject. I have been tutoring students grades K-5 for the last 5 years, in addition to middle and high school students. 19 Subjects: including trigonometry, algebra 1, algebra 2, biology ...We can do practices, exercises, and/or homework together, or we can study a topic from the beginning and do exercises to reinforce our understanding. I need cancellation notifications 24-hours before. As work place, I prefer comfortable and suitable public places generally, like a coffee shop with wide tables, or a library. 25 Subjects: including algebra 1, SAT math, statistics, logic ...As an experienced teacher of high school and college level physics courses, I know what your teachers are looking for and I bring all the tools you'll need to succeed! Of course, a big part of physics is math, and I am experienced and well qualified to tutor math from elementary school up throug... 18 Subjects: including algebra 1, algebra 2, calculus, Microsoft Excel
Generating Corner points of Rectangular Box using center point and forward vector - Printable Version Generating Corner points of Rectangular Box using center point and forward vector - 3DPat - Nov 9, 2006 01:01 AM Hi all. Please help me with this! I have a rectangular box, knowing its length, width and height, at the origin with some forward vector. This box is translated and oriented in space. Knowing the forward vector and the center of the box, how can I calculate the 8 corner points? I tried this: calculating the other 2 perpendicular vectors from the forward vector, then calculating the diagonal vector, but I am unable to proceed any further to locate the points from this. Eagerly awaiting some useful responses. Generating Corner points of Rectangular Box using center point and forward vector - Fenris - Nov 9, 2006 04:18 AM You need to know either the box orientation in angles, or the up vector. If you have the up vector, then you can grab the right vector too, as you said. From these, you can find the midpoints of the box's faces. (Just walk in the direction of all three vectors for the appropriate lengths, in both directions.) From a face's center point, you can find its corners. For instance, if you use the "up" vector to find the top side, then you can use the front and right vectors to find one corner: (center point + (front vector * length/2) + (right vector * width/2)). Then you can use different combinations of signs (±front vector, ±right vector) to find the other corners of that side. Then you can just give it a quick thought and see which corners are already calculated and remove any duplicates.
Generating Corner points of Rectangular Box using center point and forward vector - TomorrowPlusX - Nov 9, 2006 09:00 AM Another approach -- which I take -- is if you know the transformation matrix that the box has in world space; you can create your eight corner points in untransformed space (e.g., where the box would be with an identity transform) and then transform those eight points by the box's actual transform. Generating Corner points of Rectangular Box using center point and forward vector - 3DPat - Nov 10, 2006 04:26 AM Thank you Fenris and TomorrowPlusX for the immediate responses. But I still have some problems. TomorrowPlusX, your suggestion would work perfectly in my case, but the problem I'm facing is that I need to perform these calculations on mobile devices; there I get the transformation matrix, but I can't apply the orientation directly to it. So the problem of orientation still exists. And Fenris, similarly for your suggestion, the problem I face is calculating the angles, because I want to avoid using inverse sine/cosine calculations. So I'm still stuck. Both your solutions were just about perfect, but somehow they don't meet the constraints I have. I wonder, can it be done somehow just by manipulating the vectors that I have and using the length, width and height? Please do give it a thought. Thanks again. Generating Corner points of Rectangular Box using center point and forward vector - OneSadCookie - Nov 10, 2006 04:42 AM A center and a forward vector do not uniquely specify the orientation of your box. There must be another constraint. Once you decide what that constraint is, either of Fenris' or TomorrowPlusX's solutions will work just fine. Until you decide what that constraint is, you cannot solve this problem. Generating Corner points of Rectangular Box using center point and forward vector - Fenris - Nov 10, 2006 05:47 AM Also, just need to point out that my solution does not require any trigonometry.
Generating Corner points of Rectangular Box using center point and forward vector - 3DPat - Nov 13, 2006 09:40 PM Hi Fenris, thanks for your replies. Using your technique in the method suggested by TomorrowPlusX helped me solve my problem; it was possible only after calculating the 2nd vector along with the forward vector. Thank you all.
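For later readers of this thread, the approach the posters converged on fits in a few lines: supply an up hint to fix the missing constraint OneSadCookie mentioned, build an orthonormal basis with two cross products (no inverse trig, as Fenris noted), and step half the extents along each axis. This is a sketch with made-up names and a made-up sample box, not code from the thread:

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def box_corners(center, forward, up_hint, length, width, height):
    """8 corners of an oriented box; forward spans the length axis.

    up_hint supplies the extra constraint (a forward vector alone does
    not fix the orientation); only cross products, no inverse trig."""
    f = normalize(forward)
    right = normalize(cross(f, up_hint))
    up = cross(right, f)  # already unit length, since right is perpendicular to f
    corners = []
    for sf in (1, -1):
        for sr in (1, -1):
            for su in (1, -1):
                corners.append(tuple(
                    center[i]
                    + sf * f[i] * length / 2
                    + sr * right[i] * width / 2
                    + su * up[i] * height / 2
                    for i in range(3)))
    return corners
```

For an axis-aligned box this reduces to center ± half the extents along each coordinate, which makes it easy to verify by hand.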
DOCUMENTA MATHEMATICA, Vol. Extra Volume: John H. Coates' Sixtieth Birthday (2006), 711-727 Joseph H. Silverman Divisibility Sequences and Powers of Algebraic Integers Let $\a$ be an algebraic integer and define a sequence of rational integers $d_n(\a)$ by the condition \[ d_n(\a) = \max\{d\in\ZZ : \a^n \equiv 1 \MOD{d} \}. \] We show that $d_n(\a)$ is a strong divisibility sequence and that it satisfies $\log d_n(\a)=o(n)$ provided that no power of $\a$ is in $\ZZ$ and no power of $\a$ is a unit in a quadratic field. We completely analyze some of the exceptional cases by showing that $d_n(\a)$ splits into subsequences satisfying second order linear recurrences. Finally, we provide numerical evidence for the conjecture that aside from the exceptional cases, $d_n(\a)=d_1(\a)$ for infinitely many $n$, and we ask whether the set of such $n$ has positive (lower) density. 2000 Mathematics Subject Classification: Primary: 11R04; Secondary: 11A05, 11D61 Keywords and Phrases: divisibility sequence, multiplicative group Full text: dvi.gz 30 k, dvi 74 k, ps.gz 650 k, pdf 169 k.
Short algorithm, long-range consequences In the last decade, theoretical computer science has seen remarkable progress on the problem of solving graph Laplacians -- the esoteric name for a calculation with hordes of familiar applications in scheduling, image processing, online product recommendation, network analysis, and scientific computing, to name just a few. Only in 2004 did researchers first propose an algorithm that solved graph Laplacians in "nearly linear time," meaning that the algorithm's running time grew only slightly faster than linearly with the size of the problem. At this year's ACM Symposium on the Theory of Computing, MIT researchers will present a new algorithm for solving graph Laplacians that is not only faster than its predecessors, but also drastically simpler. "The 2004 paper required fundamental innovations in multiple branches of mathematics and computer science, but it ended up being split into three papers that I think were 130 pages in aggregate," says Jonathan Kelner, an associate professor of applied mathematics at MIT who led the new research. "We were able to replace it with something that would fit on a blackboard." The MIT researchers -- Kelner; Lorenzo Orecchia, an instructor in applied mathematics; and Kelner's students Aaron Sidford and Zeyuan Zhu -- believe that the simplicity of their algorithm should make it both faster and easier to implement in software than its predecessors. But just as important is the simplicity of their conceptual analysis, which, they argue, should make their result much easier to generalize to other contexts. Overcoming resistance A graph Laplacian is a matrix -- a big grid of numbers -- that describes a graph, a mathematical abstraction common in computer science. A graph is any collection of nodes, usually depicted as circles, and edges, depicted as lines that connect the nodes.
In a logistics problem, the nodes might represent tasks to be performed, while in an online recommendation engine, they might represent titles of movies. In many graphs, the edges are "weighted," meaning that they have different numbers associated with them. Those numbers could represent the cost -- in time, money or energy -- of moving from one step to another in a complex logistical operation, or they could represent the strength of the correlations between the movie preferences of customers of an online video service. The Laplacian of a graph describes the weights of all the edges, but it can also be interpreted as a series of linear equations. Solving those equations is crucial to many techniques for analyzing graphs. One intuitive way to think about graph Laplacians is to imagine the graph as a big electrical circuit and the edges as resistors. The weights of the edges describe the resistance of the resistors; solving the Laplacian tells you how much current would flow between any two points in the graph. Earlier approaches to solving graph Laplacians considered a series of ever-simpler approximations of the graph of interest. Solving the simplest provided a good approximation of the next simplest, which provided a good approximation of the next simplest, and so on. But the rules for constructing the sequence of graphs could get very complex, and proving that the solution of the simplest was a good approximation of the most complex required considerable mathematical ingenuity.
Efficient algorithms for constructing spanning trees are well established. With the spanning tree in hand, the MIT algorithm then adds back just one of the missing edges, creating a loop. A loop means that two nodes are connected by two different paths; in the circuit analogy, the voltage would have to be the same across both paths. So the algorithm sticks in values for current flow that balance the loop. Then it adds back another missing edge and rebalances. In even a simple graph, values that balance one loop could imbalance another one. But the MIT researchers showed that, remarkably, this simple, repetitive process of adding edges and rebalancing will converge on the solution of the graph Laplacian. Nor did the demonstration of that convergence require sophisticated mathematics: "Once you find the right way of thinking about the problem, everything just falls into place," Kelner explains.

Paradigm shift

Daniel Spielman, a professor of applied mathematics and computer science at Yale University, was Kelner's thesis advisor and one of two co-authors of the 2004 paper. According to Spielman, his algorithm solved Laplacians in nearly linear time "on problems of astronomical size that you will never ever encounter unless it's a much bigger universe than we know. Jon and colleagues' algorithm is actually a practical one." Spielman points out that in 2010, researchers at Carnegie Mellon University also presented a practical algorithm for solving Laplacians. Theoretical analysis shows that the MIT algorithm should be somewhat faster, but "the strange reality of all these things is, you do a lot of analysis to make sure that everything works, but you sometimes get unusually lucky, or unusually unlucky, when you implement them. So we'll have to wait to see which really is the case." The real value of the MIT paper, Spielman says, is in its innovative theoretical approach.
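The add-an-edge-and-rebalance process described above can be made concrete on a toy graph. The sketch below is my own illustration, not the MIT implementation: one ampere flows across a triangle of unit resistors, the current is first routed along a spanning tree, and then the single off-tree edge's cycle is rebalanced until Kirchhoff's voltage law holds around it:

```python
# Toy cycle-rebalancing: 1 A flows from node 0 to node 2 through a
# triangle of unit resistors. Spanning tree edges: (0,1), (1,2);
# the off-tree edge (chord) is (0,2).
flow = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 0.0}  # initial tree routing
resistance = {e: 1.0 for e in flow}

# Fundamental cycle: the chord (0,2) traversed forward, then the tree
# path 2 -> 1 -> 0 traversed against the edge directions (-1 signs).
cycle = [((0, 2), +1), ((1, 2), -1), ((0, 1), -1)]

for _ in range(10):  # rebalance repeatedly (one pass suffices here)
    # Net voltage drop around the cycle; zero means the loop is balanced.
    drop = sum(s * resistance[e] * flow[e] for e, s in cycle)
    delta = -drop / sum(resistance[e] for e, _ in cycle)
    for e, s in cycle:
        flow[e] += s * delta

print(flow)  # 1/3 A on each tree edge, 2/3 A on the chord
```

The chord ends up carrying 2/3 A and the two-edge path 1/3 A, exactly the split the resistances dictate (1 ohm and 2 ohms in parallel). With more off-tree edges there are more cycles, and fixing one can unbalance another, which is why the real algorithm must loop until the whole system converges.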
"My work and the work of the folks at Carnegie Mellon, we're solving a problem in numeric linear algebra using techniques from the field of numerical linear algebra," he says. "Jon's paper is completely ignoring all of those techniques and really solving this problem using ideas from data structures and algorithm design. It's substituting one whole set of ideas for another set of ideas, and I think that's going to be a bit of a game-changer for the field. Because people will see there's this set of ideas out there that might have application no one had ever imagined."

Story Source: The above story is based on materials provided by Massachusetts Institute of Technology. The original article was written by Larry Hardesty.

Journal Reference:
1. Jonathan A. Kelner, Lorenzo Orecchia, Aaron Sidford, Zeyuan Allen Zhu. A Simple, Combinatorial Algorithm for Solving SDD Systems in Nearly-Linear Time. arXiv, 2013.

Cite This Page: Massachusetts Institute of Technology. "Short algorithm, long-range consequences." ScienceDaily, 2 March 2013. <www.sciencedaily.com/releases/2013/03/130302125400.htm>.
StatPlus 2006.3.9.0

StatPlus 2006 is a powerful and flexible software solution that processes data to perform statistical analysis. Big Christmas discounts are available.

License: Shareware (Free to Try)
Date Added: 01/07/2007
Price: USD $120.00
Category: Education / Mathematics
Filesize: 6.4 MB
Author: AnalystSoft

With StatPlus 2006, one gets a robust suite of statistics tools and graphical analysis methods that are easily accessed through a simple and straightforward interface. The range of possible applications of StatPlus 2006 is virtually unlimited: sociology, financial analysis, biostatistics, economics, the insurance industry, healthcare and clinical research, probability calculations for lotteries and gambling operations, to name just a few fields where the program is already being used extensively. While StatPlus 2006 is a "heavy-duty" professional statistical analysis tool, the interface is so simple that even people who have no knowledge of statistics are capable of processing data, provided they know how to use a PC and are given clear instructions. This frees up intellectual resources for analyzing the results, rather than agonizing over who processed the data and how, and whether any mistakes were made in the process.

Statistical features:
Basic statistics: descriptive statistics, normality tests, t-test, Fisher F-test, correlation coefficients (Pearson, Fechner), covariation
ANOVA (MANOVA, GLM, Latin and Greco-Latin squares analysis)
Nonparametric statistics: tables analysis, chi-square test, rank correlations (Kendall tau, Spearman R, etc.), comparing independent samples (Mann-Whitney U, Kolmogorov-Smirnov, Rosenbaum, Wald-Wolfowitz runs tests, Kruskal-Wallis ANOVA, ...), comparing dependent samples (Wilcoxon test, sign test, Friedman ANOVA, Kendall's concordance), Cochran Q test
Regression: linear, polynomial, logistic, stepwise and Cox regression
Survival analysis: probit analysis, Cox regression
Time series analysis: moving average, autocorrelation and partial AC, etc.
Powerful data processor (from sampling up to special transformations)
Spreadsheet features: reads Microsoft Excel / StatSoft Statistica / SPSS files
Charts: histograms, bars, areas, point-graphs, pies...
OLE objects

Big Christmas discounts are available.

Platform: Windows 95, Windows 98, Windows Me, Windows NT, Windows 2000, Windows XP, Windows 2003
System Requirements: There are no specific requirements.
Analogical Reasoning with Relational Bayesian Sets
Ricardo Silva, Katherine Heller and Zoubin Ghahramani
In: AISTATS 2007, Puerto Rico (2007).

Analogical reasoning depends fundamentally on the ability to learn and generalize about relations between objects. There are many ways in which objects can be related, making automated analogical reasoning very challenging. Here we develop an approach which, given a set of pairs of related objects S = {A1:B1, A2:B2, ..., AN:BN}, measures how well other pairs A:B fit in with the set S. This addresses the question: is the relation between objects A and B analogous to those relations found in S? We recast this classical problem as a problem of Bayesian analysis of relational data. This problem is non-trivial because direct similarity between objects is not a good way of measuring analogies. For instance, the analogy between an electron around the nucleus of an atom and a planet around the Sun is hardly justified by isolated, non-relational, comparisons of an electron to a planet, and a nucleus to the Sun. We develop a generative model for predicting the existence of relationships and extend the framework of Ghahramani and Heller (2005) to provide a Bayesian measure for how analogous a relation is to other relations. This sheds new light on an old problem, which we motivate and illustrate through practical applications in exploratory data analysis.
ADVCOMP 2009
The Third International Conference on Advanced Engineering Computing and Applications in Sciences
October 11-16, 2009 - Sliema, Malta
Technical Co-Sponsors and Logistics Supporters

Special Tutorial: Tools and Services for Data Intensive Research, by Roger S. Barga, PhD, Microsoft Research. Note: Roger will provide the attendees with both software and supporting documentation of Microsoft Dryad on a USB drive that we will hand

Important dates
Submission (full paper): June 5, 2009 (extended from May 20, 2009)
Notification: July 1, 2009 (extended from June 30, 2009)
Registration: July 15, 2009
Camera ready: July 20, 2009
Publisher: IEEE Computer Society Conference Publishing Services
Posted: IEEE Digital Library; indexing process
Authors of selected papers will be invited to submit extended versions to an IARIA Journal.

All tracks/topics are open to both research and industry contributions.

Advances on computing theories
Finite-state machines; Petri nets (stochastic/colored/probabilistic/etc.); Genetic algorithms; Machine learning theory; Prediction theory; Bayesian theory (statistics/filtering/estimation/reasoning/rating/etc.); Markov chains/processes/models/etc.; Graph theories

Advances in computation methods
Hybrid computational methods; Advanced numerical algorithms; Differential calculus; Matrix perturbation theory; Rare matrices; Fractals & super-fractal algorithms; Random graph dynamics; Multi-dimensional harmonic estimation

Computational logics
Knowledge-based systems and automated reasoning; Logical issues in knowledge representation/non-monotonic reasoning/belief; Specification and verification of programs and systems; Applications of logic in hardware and VLSI; Natural language, concurrent computation, planning; Deduction and reasoning; Logic of computation; Dempster-Shafer theory; Fuzzy theory/computation/logic/etc.

Advances on computing mechanisms
Clustering large and high dimensional data; Data fusion and aggregation; Biological sequence analysis; Biomechatronics mechanisms; Biologically inspired mechanisms; System theory and control mechanisms; Multi-objective evolutionary algorithms; Constraint-based algorithms; Ontology-based reasoning; Topology and structure patterns; Geometrical pattern similarity; Strong and weak symmetry; Distortion in coordination mechanisms

Computing techniques
Distributed computing; Parallel computing; Grid computing; Autonomic computing; Cloud computing; Development of numerical and scientific software-based systems; Pattern-based computing; Finite-element method computation; Elastic models; Optimization techniques; Simulation techniques; Stream-based computing

Computational geometry
Theoretical computational geometry; Applied computational geometry; Design and analysis of geometric algorithms and data structures; Discrete and combinatorial geometry and topology; Data structures (Voronoi diagrams, Delaunay triangulations, etc.); Experimental evaluation of geometric algorithms and heuristics; Numerical performance of geometric algorithms; Geometric computations in parallel and distributed environments; Geometric data structures for mesh generation; Geometric methods in computer graphics; Solid modeling; Space partitioning; Special applications (animation of geometric algorithms, manufacturing, computer graphics and image processing, computer-aided geometry design, solid geometry)

Interdisciplinary computing
Computational physics/chemistry/biology algorithms; Graph-based modeling and algorithms; Computational methods for crystal/protein structure prediction; Computation for multi-material structure; Modeling and simulation of large deformations and strong shock waves; Computation in solid mechanics; Remote geo-sensing; Interdisciplinary computing in music and arts

Cloud computing
Hardware-as-a-service; Software-as-a-service (SaaS applications); Platform-as-a-service; On-demand computing models; Cloud computing programming and application development; Scalability, discovery of services and data in Cloud computing infrastructures; Privacy, security, ownership and reliability issues; Performance and QoS; Dynamic resource provisioning; Power-efficiency and Cloud computing; Load balancing; Application streaming; Cloud SLAs, business models and pricing policies; Custom platforms; Large-scale compute infrastructures; Managing applications in the clouds; Data centers; Process in the clouds; Content and service distribution in Cloud computing infrastructures; Multiple applications can run on one computer (virtualization a la VMWare); Grid computing (multiple computers can be used to run one application); Cloud-computing vendor governance and regulatory compliance

Grid Networks, Services and Applications
GRID theory, frameworks, methodologies, architecture, ontology; GRID infrastructure and technologies; GRID middleware; GRID protocols and networking; GRID computing, utility computing, autonomic computing, metacomputing; Programmable GRID; Data GRID; Context ontology and management in GRIDs; Distributed decisions in GRID networks; GRID services and applications; Virtualization, modeling, and metadata in GRID; Resource management, scheduling, and scalability in GRID; GRID monitoring, control, and management; Traffic and load balancing in GRID; User profiles and priorities in GRID; Performance and security in GRID systems; Fault tolerance, resilience, survivability, robustness in GRID; QoS/SLA in GRID networks; GRID fora, standards, development, evolution; GRID case studies, validation testbeds, prototypes, and lessons learned

Computing in Virtualization-based environments
Principles of virtualization; Virtualization platforms; Thick and thin clients; Data centers and nano-centers; Open virtualization format; Orchestration of virtualization across data centers; Dynamic federation of compute capacity; Dynamic geo-balancing; Instant workload migration; Virtualization-aware storage; Virtualization-aware networking; Virtualization embedded-software-based smart mobile phones; Trusted platforms and embedded supervisors for security; Virtualization management operations (discovery, configuration, provisioning, performance, etc.); Energy optimization and saving for green datacenters; Virtualization supporting cloud computing; Applications as pre-packaged virtual machines; Licensing and support policies

Development of computing support
Computing platforms; Advanced scientific computing; Support for scientific problem-solving; Support for distributed decisions; Agent-assisted workflow support; Middleware computation support; High performance computing; Problem solving environments; Computational science and education; Neuronal networks

Computing applications in science
Advanced computing in civil engineering; Advanced computing in physics science; Advanced computing in chemistry science; Advanced computing in mathematics; Advanced computing in operation research; Advanced computing in economics; Advanced computing in electronics and electrical science; Advanced computing on Earth science, geosciences and meteorology

Complex computing in application domains
Computational genomics; Management of scientific data and knowledge; Advanced computing in bioinformatics and biophysics; Advanced computing in molecular systems and biological systems; Application of engineering methods to genetics; Medical computation and graphics; Advanced computing in simulation systems; Advanced computing for statistics and optimization; Advanced computing in mechanics and quantum mechanics; Advanced computing for geosciences and meteorology; Maps and geo-images building; Curve and surface reconstruction; Financial computing and forecasting; Advanced computing in robotics and manufacturing; Advanced computing in power systems; Environmental advanced computing
EQUATIONS USED IN CALCULATION OF FLOWRATE

Large Diameter Orifice Flowmeter Calculation for Liquid Flow
Small Diameter Orifice Flowmeter Calculation for Liquid Flow
Large Diameter Orifice Flowmeter Calculation for Gas Flow
Small Bore Orifice Flowmeter Calculation for Gas Flow

A fluid passing through an orifice constriction will experience a drop in pressure across the orifice. This change can be used to measure the flowrate of the fluid.

As long as the fluid speed is sufficiently subsonic (V < Mach 0.3), the incompressible Bernoulli's equation describes the flow reasonably well. Applying this equation to a streamline traveling down the axis of the horizontal tube gives

P1 + (1/2) rho V1^2 = P2 + (1/2) rho V2^2

where location 1 is upstream of the orifice, and location 2 is slightly behind the orifice. It is recommended that location 1 be positioned one pipe diameter upstream of the orifice, and location 2 be positioned one-half pipe diameter downstream of the orifice. Since the pressure at 1 will be higher than the pressure at 2 (for flow moving from 1 to 2), the pressure difference as defined will be a positive quantity. From continuity, the velocities can be replaced by cross-sectional areas of the flow and the volumetric flowrate Q:

Q = A1 V1 = A2 V2

Solving for the volumetric flowrate Q gives

Q = A2 sqrt( 2 dP / ( rho (1 - (A2/A1)^2) ) )

The above equation applies only to perfectly laminar, inviscid flows. For real flows (such as water or air), viscosity and turbulence are present and act to convert kinetic flow energy into heat.
To account for this effect, a discharge coefficient Cd is introduced into the above equation to marginally reduce the flowrate Q:

Q = Cd A2 sqrt( 2 dP / ( rho (1 - (A2/A1)^2) ) )

Since the actual flow profile at location 2 downstream of the orifice is quite complex, thereby making the effective value of A2 uncertain, the following substitution introducing a flow coefficient Cf is made:

Cf = Cd / sqrt( 1 - (Ao/A1)^2 )

where Ao is the area of the orifice. As a result, the volumetric flowrate Q for real flows is given by the equation

Q = Cf Ao sqrt( 2 dP / rho )

The flow coefficient Cf is found from experiments and is tabulated in reference books; it ranges from 0.6 to 0.9 for most orifices. Since it depends on the orifice and pipe diameters (as well as the Reynolds number), one will often find Cf tabulated versus the ratio of orifice diameter to inlet diameter, sometimes defined as beta:

beta = Do / D1

The mass flowrate can be found by multiplying Q with the fluid density:

mdot = rho Q

The orifice plate is commonly used in clean liquid, gas, and steam service. It is available for all pipe sizes, and if the pressure drop it requires is free, it is very cost-effective for measuring flows in larger pipes (over 6" diameter). The orifice plate is also approved by many standards organizations for the custody transfer of liquids and gases. The orifice flow equations used today still differ from one another, although the various standards organizations are working to adopt a single, universally accepted orifice flow equation. Orifice sizing programs usually allow the user to select the flow equation desired from among several. The orifice plate can be made of any material, although stainless steel is the most common. The thickness of the plate used ( 1/8-1/2") is a function of the line size, the process temperature, the pressure, and the differential pressure. The traditional orifice is a thin circular plate (with a tab for handling and for data), inserted into the pipeline between the two flanges of an orifice union.
This method of installation is cost-effective, but it calls for a process shutdown whenever the plate is removed for maintenance or inspection. In contrast, an orifice fitting allows the orifice to be removed from the process without depressurizing the line and shutting down flow. In such fittings, the universal orifice plate, a circular plate with no tab, is used. The concentric orifice plate (Figure A) has a sharp (square-edged) concentric bore that provides an almost pure line contact between the plate and the fluid, with negligible friction drag at the boundary. The beta (or diameter) ratios of concentric orifice plates range from 0.25 to 0.75. The maximum velocity and minimum static pressure occur at some 0.35 to 0.85 pipe diameters downstream from the orifice plate. That point is called the vena contracta. Measuring the differential pressure at a location close to the orifice plate minimizes the effect of pipe roughness, since friction has an effect on the fluid and the pipe wall. Flange taps are predominantly used in the United States and are located 1 inch from the orifice plate's surfaces. They are not recommended for use on pipelines under 2 inches in diameter. Corner taps are predominant in Europe for all sizes of pipe, and are used in the United States for pipes under 2 inches. With corner taps, the relatively small clearances represent a potential maintenance problem. Vena contracta taps (which are close to the radius taps) are located one pipe diameter upstream from the plate, and downstream at the point of vena contracta. This location varies (with beta ratio and Reynolds number) from 0.35D to 0.8D. The vena contracta taps provide the maximum pressure differential, but also the most noise. Additionally, if the plate is changed, it may require a change in the tap location. Also, in small pipes, the vena contracta might lie under a flange. Therefore, vena contracta taps normally are used only in pipe sizes exceeding six inches.
Radius taps are similar to vena contracta taps, except the downstream tap is fixed at 0.5D from the orifice plate. Pipe taps are located 2.5 pipe diameters upstream and 8 diameters downstream from the orifice. They detect the smallest pressure difference and, because of the tap distance from the orifice, the effects of pipe roughness, dimensional inconsistencies, and, therefore, measurement errors are the greatest. The concentric orifice plate is recommended for clean liquids, gases, and steam flows when Reynolds numbers range from 20,000 to 10^7 in pipes under six inches. Because the basic orifice flow equations assume that flow velocities are well below sonic, a different theoretical and computational approach is required if sonic velocities are expected. The minimum recommended Reynolds number for flow through an orifice varies with the beta ratio of the orifice and with the pipe size. In larger size pipes, the minimum Reynolds number also rises. Because of this minimum Reynolds number consideration, square-edged orifices are seldom used on viscous fluids. Quadrant-edged and conical orifice plates are recommended when the Reynolds number is under 10,000. Flange taps, corner, and radius taps can all be used with quadrant-edged orifices, but only corner taps should be used with a conical orifice. Concentric orifice plates can be provided with drain holes to prevent buildup of entrained liquids in gas streams, or with vent holes for venting entrained gases from liquids (Figure A above). The unmeasured flow passing through the vent or drain hole is usually less than 1% of the total flow if the hole diameter is less than 10% of the orifice bore. The effectiveness of vent/drain holes is limited, however, because they often plug up. Concentric orifice plates are not recommended for multi-phase fluids in horizontal lines because the secondary phase can build up around the upstream edge of the plate.
In extreme cases, this can clog the opening, or it can change the flow pattern, creating measurement error. Eccentric and segmental orifice plates are better suited for such applications. Concentric orifices are still preferred for multi-phase flows in vertical lines because accumulation of material is less likely and the sizing data for these plates is more reliable. The eccentric orifice (Figure B above) is similar to the concentric except that the opening is offset from the pipe's centerline. The opening of the segmental orifice (Figure C above) is a segment of a circle. If the secondary phase is a gas, the opening of an eccentric orifice will be located towards the top of the pipe. If the secondary phase is a liquid in a gas or a slurry in a liquid stream, the opening should be at the bottom of the pipe. The drainage area of the segmental orifice is greater than that of the eccentric orifice, and, therefore, it is preferred in applications with high proportions of the secondary phase. These plates are usually used in pipe sizes exceeding four inches in diameter, and must be carefully installed to make sure that no portion of the flange or gasket interferes with the opening. Flange taps are used with both types of plates, and are located in the quadrant opposite the opening for the eccentric orifice, in line with the maximum dam height for the segmental orifice. For the measurement of low flow rates, a d/p cell with an integral orifice may be the best choice. In this design, the total process flow passes through the d/p cell, eliminating the need for lead lines. These are proprietary devices with little published data on their performance; their flow coefficients are based on actual laboratory calibrations. They are recommended for clean, single-phase fluids only because even small amounts of build-up will create significant measurement errors or will clog the unit. 
Restriction orifices are installed to remove excess pressure and usually operate at sonic velocities with very small beta ratios. The pressure drop across a single restriction orifice should not exceed 500 psid because of plugging or galling. In multi-element restriction orifice installations, the plates are placed approximately one pipe diameter from one another in order to prevent pressure recovery between the plates. Although it is a simple device, the orifice plate is, in principle, a precision instrument. Under ideal conditions, the inaccuracy of an orifice plate can be in the range of 0.75-1.5% AR. Orifice plates are, however, quite sensitive to a variety of error-inducing conditions. Precision in the bore calculations, the quality of the installation, and the condition of the plate itself determine total performance. Installation factors include tap location and condition, condition of the process pipe, adequacy of straight pipe runs, gasket interference, misalignment of pipe and orifice bores, and lead line design. Other adverse conditions include the dulling of the sharp edge or nicks caused by corrosion or erosion, warpage of the plate due to waterhammer and dirt, and grease or secondary phase deposits on either orifice surface. Any of the above conditions can change the orifice discharge coefficient by as much as 10%. In combination, these problems can be even more worrisome and the net effect unpredictable. Therefore, under average operating conditions, a typical orifice installation can be expected to have an overall inaccuracy in the range of 2 to 5% AR. The typical custody-transfer grade orifice meter is more accurate because it can be calibrated in a testing laboratory and is provided with honed pipe sections, flow straighteners, senior orifice fittings, and temperature controlled enclosures. The orifice plates are simple, cheap and can be delivered for almost any application in any material. The turndown rate for orifice plates is less than 5:1.
Their accuracy is poor at low flow rates. High accuracy depends on an orifice plate in good shape, with a sharp edge on the upstream side. Wear reduces the accuracy.

Large Diameter Orifice Flowmeter Calculation for Liquid Flow
For pipe diameter > 5 cm. Compute flowrate, orifice diameter, or differential pressure. Equations: ISO 5167

Small Diameter Orifice Flowmeter Calculation for Liquid Flow
For pipe diameter < 5 cm. Compute flowrate, orifice diameter, or differential pressure. Equations: ASME MFC-14M-2001

Large Diameter Orifice Flowmeter Calculation for Gas Flow
For pipe diameter > 5 cm. Compute flowrate, orifice diameter, or differential pressure. Equations: ISO 5167

Small Bore Orifice Flowmeter Calculation for Gas Flow
For pipe diameter < 5 cm. Compute flowrate, bore diameter, or differential pressure. Equations: ASME MFC-14M-2001
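The working equation of the section above, Q = Cf * Ao * sqrt(2 * dP / rho) in the usual notation, is straightforward to evaluate. A minimal sketch follows; the numerical values (water, a 50 mm bore, a fixed Cf of 0.62) are illustrative only and are not taken from any of the standards cited:

```python
import math

# Orifice flowmeter working equation: Q = Cf * Ao * sqrt(2*dP/rho).
# Illustrative values: water through a 50 mm bore orifice.
Cf = 0.62          # flow coefficient (tabulated; typically 0.6-0.9)
d_orifice = 0.05   # orifice bore diameter, m
rho = 1000.0       # fluid density, kg/m^3 (water)
dP = 10000.0       # differential pressure across the taps, Pa

Ao = math.pi * d_orifice ** 2 / 4       # orifice area, m^2
Q = Cf * Ao * math.sqrt(2 * dP / rho)   # volumetric flowrate, m^3/s
m_dot = rho * Q                         # mass flowrate, kg/s

print(f"Q = {Q:.5f} m^3/s, mass flow = {m_dot:.3f} kg/s")
```

For a serious design calculation, Cf must come from the tabulations in ISO 5167 or ASME MFC-14M as a function of the beta ratio and the Reynolds number; the fixed 0.62 here is only a placeholder.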
Brooklyn, NY Algebra 2 Tutor
Find a Brooklyn, NY Algebra 2 Tutor

I am currently an undergraduate student at NYU Polytechnic School of Engineering and pursuing a Bachelor's in Computer Engineering. Although I have yet to acquire extensive experience in tutoring, I have always excelled in mathematics and helped my peers, friends, and family understand the material...
4 Subjects: including algebra 2, algebra 1, elementary math, prealgebra

...To define myself, all I say is that I have built my personality on a quote stated by some unknown scholar: "Nothing is impossible in this world, because impossible itself says I'm possible." I believe that a person can accomplish anything if he has will and confidence in himself. This is the way that I teach kids, so they can rely for help on oneself rather than on someone else.
8 Subjects: including algebra 2, reading, algebra 1, trigonometry

I am at present pursuing Bachelor's degrees in Mathematics and will graduate from Medgar Evers College next May. I am mainly interested in utilizing my skills and knowledge in a school atmosphere. As a student teacher and tutor, my main goal is to emphasize conceptual understanding of the math top...
6 Subjects: including algebra 2, calculus, algebra 1, prealgebra

...My name is Teresa and I come to you with over five years of tutoring experience. While an undergraduate at Rice University, I worked as a tutor for students ages 9 to 17, both independently and for Varsity Tutors. I have tutored everything from elementary reading and spelling to SAT Math, study skills, and paper writing.
37 Subjects: including algebra 2, reading, English, writing

...So, if you want someone who is passionate not only about the knowledge, but passing that knowledge on to you, you've found him. I have taken immersion French classes for 15 years and am more than capable of teaching difficult concepts that can be so difficult to master. As a student of Cognitive
17 Subjects: including algebra 2, reading, writing, English
I just noticed the last line: "If you must order at least one of each type of table," because without that line I was not getting any of your choices. Can you try to sketch the relations? Let x be the number of type A and y be the number of type B. You can order no more than 190 tables this month; that means the sum of x and y must be less than or equal to 190: x + y ≤ 190. Put this into slope-intercept form: y ≤ -x + 190. Plot the line y = -x + 190 and shade the area under this line (this is where y is less than the line). Can you do that?

I'll try :) one sec

It only shows a bit of the graph. By hand: |dw:1358269570252:dw|

OK so what does this tell me now? sorry I'm dumb :(

We don't go below x = 0 (we can't order less than 0), same for y. That triangle represents all (x, y) pairs of orders where x + y ≤ 190 (and x ≥ 0, y ≥ 0: we can't order less than 0). If you go outside the triangle you will order too many or a negative number. Neither is allowed.

You need to make at least $4,610 profit on them. It would be good if you could write down the relation (mathematical expression) for this. Hero posted it, but you should try. The profit will be the number of type A's times the profit per type A, plus the number of B's times the profit of a B; this sum must be ≥ 4610. Can you write down the expression?

29a + 19b ≥ 4610
ok, but use x and y, because we have to plot this...

ok :) 29x + 19y ≥ 4610

can you put it into slope-intercept form?

IDK sorry

Slope-intercept form is y ≥ mx + b; in other words, y is by itself on the left. First step is to add -29x to both sides. Just write -29x on both sides. What do you get?

y=x+19>_4610 IDK

Don't make things up. Start with 29x + 19y ≥ 4610 and write -29x on both sides: -29x + 29x + 19y ≥ -29x + 4610. On the left side you have 29x and -29x (think: 29 x's take away 29 x's, I get no x's), so 19y ≥ -29x + 4610. Now divide both sides by 19; that means divide each term on both sides by 19. What do you get?

y ≥ -29/12x + 4610/19

I assume that 12 is a typo?

ohh I mean 19

Now, let's change into decimals (only a few digits). Roughly we get y ≥ -1.53x + 242.6. Now plot this line. y ≥ means shade all the y above the line. Can you plot this line?

I have no idea :(

For y ≥ -1.53x + 242.6 we first plot y = -1.53x + 242.6. If we set x to 0, what is y?

y ≥ 242.6

Ok, x = 0, y = 242.6, so (0, 242.6) is one point on the line. Can you plot this point?

idk how
I picked x = 100 because it is easy to multiply by 100. We could pick any x, but x near or inside our triangle makes sense.

y= 395.6

-1.53*100 is a negative number. Try again.

That means (100, 89.6) is on the line. Can you plot this point? Go to x = 100 and move up about 90.

Now connect the dots (or x's in this case) |dw:1358272145701:dw|

d.) 100 of type A; 90 of type B

I shaded the area above the line, because that is where y is ≥ the line. The black shaded area is everywhere you get the needed profit (remember profit must be ≥ 4610). We also must be below the red line. To meet both requirements, we must be in the lower triangle.

ahhhh ok I'm confused now...what is the answer?

Once you get the "feasible" region, you check the "corners" for the (x, y) pair that matches your requirement. In this case, the bottom two spots have y = 0, and the statement "If you must order at least one of each type of table" means we can rule out those points. We have to find the intersection of the two lines, but we see by eyeball it is 100, 90.

so its A or D :D

A is x in our graph, and B is y. Here is how to find where the lines cross: you start with y = -x + 190 and y = -1.53x + 242.6. When they meet they have the same y values.
We set -x + 190 = -1.53x + 242.6 and add 1.53x to both sides (you should learn how to do this, btw): 1.53x - 1x + 190 = -1.53x + 1.53x + 242.6. On the left we get 0.53x + 190 and on the right 242.6: 0.53x + 190 = 242.6. Add -190 to both sides: 0.53x + 190 - 190 = 242.6 - 190, so 0.53x = 52.6. Divide both sides by 0.53, and x = 99.2. Looking at our graph, we want x bigger than this (we have to buy a whole table): x = 100. And x + y = 190; replace x with 100: 100 + y = 190. Add -100 to both sides: 100 - 100 + y = 190 - 100, so y = 90. In a harder problem, we would use these x, y numbers in the cost equation, and find the (x, y) pair (of the three corners) that gives the lowest cost. Here, we just need to find the top "corner" because the bottom 2 are ruled out by the question (we must buy some of both types, so 0 is not allowed).

So the answer is A :) thank you so much for spending the time on teaching me :p I really appreciate it

I think you can do this stuff, but you have skimped so much in learning all the details that it is difficult to do a problem like this where you have to remember how to do lots of things: plot, solve equations, etc.

Personally, I would pick just one area (example: putting equations into slope-intercept form), and watch a video at Khan.

ok I'll try that thanx again :)
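A quick check of the intersection worked out above, in Python (my illustration, not part of the thread): the 99.2 is an artifact of rounding 29/19 to 1.53. With exact fractions the two boundary lines cross at exactly (100, 90), which is also the corner point in the answer.

```python
from fractions import Fraction as F

# boundary lines: y = -x + 190 (order limit) and y = -(29/19)x + 4610/19 (profit)
x = (F(4610, 19) - 190) / (F(29, 19) - 1)   # set the two right-hand sides equal
y = -x + 190

print(x, y)  # prints: 100 90

# the corner point satisfies both original constraints
assert x + y <= 190 and 29 * x + 19 * y >= 4610
```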
Recurrence Relation?

January 7th 2008, 09:34 PM

can u plz help me to solve this recurrence relation?

$a_n = a_{n-1} + f(n)$ for $n \ge 1$, by substitution.

&ge; is giving a latex error.

January 7th 2008, 09:38 PM

January 8th 2008, 12:25 AM

There seems to be a quirk in the implementation of HTML used by this forum. You can get the character ≥ in the LaTeX environment by typing \ge, \geq or (my preference) \geqslant. But if you want it in HTML, typing &ge; will not work. However, if you use the equivalent numerical character entity reference &#8805; you will get the symbol ≥. The same applies to any other HTML character. You can get it by specifying the numerical character entity reference, but not by using the (abbreviated) name of the character. So for example &infin; won't work, but &#8734; comes out as ∞.

January 9th 2008, 03:42 AM

Thanks for the help on using the editor. Anyone please help on solving the question?

January 9th 2008, 04:14 AM

$a_1 = a_0+f(1)$,
$a_2 = a_1+f(2) = a_0+f(1)+f(2)$,
$a_3 = a_2+f(3) = a_0+f(1)+f(2)+f(3)$,
$a_n = a_0+\sum_{k=1}^nf(k)$.

That's all you can say, unless you have further information about the function f.
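The substitution argument in the last reply can be checked numerically; a short Python sketch (illustrative, with arbitrary choices of f and a_0):

```python
def a(n, a0, f):
    """Compute a_n for the recurrence a_n = a_{n-1} + f(n), with a_0 given."""
    v = a0
    for k in range(1, n + 1):
        v = v + f(k)          # one substitution step: a_k = a_{k-1} + f(k)
    return v

# agrees with the closed form a_n = a_0 + sum_{k=1}^n f(k)
f = lambda k: k * k
print(a(5, 3, f))  # 3 + 1 + 4 + 9 + 16 + 25 = 58
```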
AKS Primality Prover, Part 1
October 2, 2012

Finding the multiplicative order of n modulo r is easy; we just start with k = 2 and keep incrementing k until we find the answer:

(define (compute-r n)
  (let ((target (* 4 (square (log2 n)))))
    (if (not (td-prime? n)) #f
        (do ((r 3 (+ r 1)))
            ((and (not (= r n)) (< target (ord r n))) r)))))

We represent a polynomial as a list of its coefficients, most significant first, and use several helper functions to compute the modular multiplication of two polynomials. Plus adds two polynomials, assuming that the leading coefficients have the same degree; we can't use map because they might have different lengths. Mod-poly performs the modulo operation on two polynomials, with the second of form x^r−1, by splitting the list of coefficients at r and adding the two pieces. Times and mod-n perform normal integer operations. The coefficients of the first polynomial are reversed, so multiplication proceeds right-to-left, and a leading zero is added to each polynomial to make the degrees of the two addends the same:

(define (poly-mult-mod xs ys r n)
  (define (times x) (lambda (y) (* x y)))
  (define (plus xs ys)
    (let loop ((xs xs) (ys ys) (zs (list)))
      (cond ((null? xs) (reverse (append (reverse ys) zs)))
            ((null? ys) (reverse (append (reverse xs) zs)))
            (else (loop (cdr xs) (cdr ys)
                        (cons (+ (car xs) (car ys)) zs))))))
  (define (mod-poly xs)
    (let-values (((hs ts) (split r (reverse xs))))
      (reverse (plus hs ts))))
  (define (mod-n x) (modulo x n))
  (let ((xs (reverse xs)))
    (let loop ((xs (cdr xs)) (zs (map (times (car xs)) ys)))
      (if (null? xs)
          (map mod-n (mod-poly zs))
          (loop (cdr xs) (plus (cons 0 zs) (map (times (car xs)) ys)))))))

Then modular exponentiation of a polynomial is done by the usual square-and-multiply method:

(define (poly-power-mod bs e r n)
  (let loop ((bs bs) (e e) (rs (list 1)))
    (if (zero? e) rs
        (loop (poly-mult-mod bs bs r n)
              (quotient e 2)
              (if (even? e) rs (poly-mult-mod rs bs r n))))))

Here are two examples:

> (ord 191 89)
> (poly-power-mod '(1 4 12 3) 2 5 17)
(6 0 15 5 0)

We used split and expm from the Standard Prelude. You can run the program at http://programmingpraxis.codepad.org/xUSr2dTr.
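For readers more comfortable with Python than Scheme, here is a straightforward translation of the modular polynomial arithmetic (my sketch, keeping the blog's most-significant-first coefficient convention); it reproduces the second example above:

```python
def poly_mult_mod(xs, ys, r, n):
    """Multiply two polynomials modulo (x^r - 1, n).
    Coefficients are listed most significant first, as in the Scheme code."""
    a, b = xs[::-1], ys[::-1]            # work least significant first
    z = [0] * r
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            # x^r is congruent to 1, so x^(i+j) folds onto x^((i+j) mod r)
            z[(i + j) % r] = (z[(i + j) % r] + ai * bj) % n
    return z[::-1]

def poly_power_mod(bs, e, r, n):
    """Square-and-multiply exponentiation of a polynomial mod (x^r - 1, n)."""
    rs = [1]
    while e > 0:
        if e % 2 == 1:
            rs = poly_mult_mod(rs, bs, r, n)
        bs = poly_mult_mod(bs, bs, r, n)
        e //= 2
    return rs

print(poly_power_mod([1, 4, 12, 3], 2, 5, 17))  # [6, 0, 15, 5, 0]
```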
The Probability Of Winning The Lottery Is One In ... | Chegg.com The probability of winning the lottery is one in a million. What is the probability that it takes exactly 1000 tickets to win the lottery if each attempt is independent? What is the probability it takes at least 1000 tickets to win? Other Math
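This is the geometric distribution; a hedged sketch of the standard calculation (assuming "takes exactly 1000 tickets" means the first win occurs on ticket 1000), not an answer posted on the original page:

```python
p = 1 / 1_000_000   # chance of winning on any one ticket

# first 999 tickets lose, then ticket 1000 wins
exactly_1000 = (1 - p) ** 999 * p

# "at least 1000 tickets" just means the first 999 tickets all lose
at_least_1000 = (1 - p) ** 999

print(exactly_1000, at_least_1000)  # roughly 9.99e-07 and 0.999
```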
The Golden Ratio by Mario Livio Numberless are the world's wonders. --Sophocles (495-405 b.c.) The famous British physicist Lord Kelvin (William Thomson; 1824-1907), after whom the degrees in the absolute temperature scale are named, once said in a lecture: "When you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind." Kelvin was referring, of course, to the knowledge required for the advancement of science. But numbers and mathematics have the curious propensity of contributing even to the understanding of things that are, or at least appear to be, extremely remote from science. In Edgar Allan Poe's The Mystery of Marie Roget, the famous detective Auguste Dupin says: "We make chance a matter of absolute calculation. We subject the unlooked for and unimagined, to the mathematical formulae of the schools." At an even simpler level, consider the following problem you may have encountered when preparing for a party: You have a chocolate bar composed of twelve pieces; how many snaps will be required to separate all the pieces? The answer is actually much simpler than you might have thought, and it does not require almost any calculation. Every time you make a snap, you have one more piece than you had before. Therefore, if you need to end up with twelve pieces, you will have to snap eleven times. (Check it for yourself.) More generally, irrespective of the number of pieces the chocolate bar is composed of, the number of snaps is always one less than the number of pieces you need. Even if you are not a chocolate lover yourself, you realize that this example demonstrates a simple mathematical rule that can be applied to many other circumstances. But in addition to mathematical properties, formulae, and rules (many of which we forget anyhow), there also exist a few special numbers that are so ubiquitous that they never cease to amaze us. 
The most famous of these is the number pi (π), which is the ratio of the circumference of any circle to its diameter. The value of pi, 3.14159 . . . , has fascinated many generations of mathematicians. Even though it was defined originally in geometry, pi appears very frequently and unexpectedly in the calculation of probabilities. A famous example is known as Buffon's Needle, after the French mathematician Georges-Louis Leclerc, Comte de Buffon (1707-1788), who posed and solved this probability problem in 1777. Leclerc asked: Suppose you have a large sheet of paper on the floor, ruled with parallel straight lines spaced by a fixed distance. A needle of length equal precisely to the spacing between the lines is thrown completely at random onto the paper. What is the probability that the needle will land in such a way that it will intersect one of the lines (e.g., as in Figure 1)? Surprisingly, the answer turns out to be the number 2/π. Therefore, in principle, you could even evaluate π by repeating this experiment many times and observing in what fraction of the total number of throws you obtain an intersection. (There exist, however, less tedious ways to find the value of pi.) Pi has by now become such a household word that film director Darren Aronofsky was even inspired to make a 1998 intellectual thriller with that title. Less known than pi is another number, phi (φ), which is in many respects even more fascinating. Suppose I ask you, for example: What do the delightful petal arrangement in a red rose, Salvador Dalí's famous painting "Sacrament of the Last Supper," the magnificent spiral shells of mollusks, and the breeding of rabbits all have in common? Hard to believe, but these very disparate examples do have in common a certain number or geometrical proportion known since antiquity, a number that in the nineteenth century was given the honorifics "Golden Number," "Golden Ratio," and "Golden Section."
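Buffon's Needle experiment described above lends itself to a quick Monte Carlo check (an illustrative simulation, not from the book): dropping many random needles whose length equals the line spacing, the crossing fraction comes out near 2/π ≈ 0.6366.

```python
import math
import random

def buffon_crossing_fraction(trials=200_000, seed=1):
    """Fraction of random needle drops that cross a line, for a needle
    whose length equals the line spacing (both taken as 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        d = rng.random() / 2                  # centre-to-nearest-line distance, in [0, 1/2]
        theta = rng.random() * math.pi / 2    # acute angle between needle and lines
        if math.sin(theta) / 2 >= d:          # half the needle reaches the line
            hits += 1
    return hits / trials

print(buffon_crossing_fraction(), 2 / math.pi)
```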
A book published in Italy at the beginning of the sixteenth century went so far as to call this ratio the "Divine Proportion." In everyday life, we use the word "proportion" either for the comparative relation between parts of things with respect to size or quantity or when we want to describe a harmonious relationship between different parts. In mathematics, the term "proportion" is used to describe an equality of the type: nine is to three as six is to two. As we shall see, the Golden Ratio provides us with an intriguing mingling of the two definitions in that, while defined mathematically, it is claimed to have pleasingly harmonious qualities. The first clear definition of what has later become known as the Golden Ratio was given around 300 b.c. by the founder of geometry as a formalized deductive system, Euclid of Alexandria. We shall return to Euclid and his fantastic accomplishments in Chapter 4, but at the moment let me note only that so great is the admiration that Euclid commands that, in 1923, the poet Edna St. Vincent Millay wrote a poem entitled "Euclid Alone Has Looked on Beauty Bare." Actually, even Millay's annotated notebook from her course in Euclidean geometry has been preserved. Euclid defined a proportion derived from a simple division of a line into what he called its "extreme and mean ratio." In Euclid's words: A straight line is said to have been cut in extreme and mean ratio when, as the whole line is to the greater segment, so is the greater to the lesser. In other words, if we look at Figure 2, line AB is certainly longer than the segment AC; at the same time, the segment AC is longer than CB. If the ratio of the length of AC to that of CB is the same as the ratio of AB to AC, then the line has been cut in extreme and mean ratio, or in a Golden Ratio. 
Who could have guessed that this innocent-looking line division, which Euclid defined for some purely geometrical purposes, would have consequences in topics ranging from leaf arrangements in botany to the structure of galaxies containing billions of stars, and from mathematics to the arts? The Golden Ratio therefore provides us with a wonderful example of that feeling of utter amazement that the famous physicist Albert Einstein (1879-1955) valued so much. In Einstein's own words: "The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and science. He who knows it not and can no longer wonder, no longer feel amazement, is as good as dead, a snuffed-out candle." As we shall see calculated in this book, the precise value of the Golden Ratio (the ratio of AC to CB in Figure 2) is the never-ending, never-repeating number 1.6180339887 . . . , and such never-ending numbers have intrigued humans since antiquity. One story has it that when the Greek mathematician Hippasus of Metapontum discovered, in the fifth century b.c., that the Golden Ratio is a number that is neither a whole number (like the familiar 1, 2, 3, . . .) nor even a ratio of two whole numbers (like the fractions 1/2, 2/3, 3/4, . . . ; known collectively as rational numbers), this absolutely shocked the other followers of the famous mathematician Pythagoras (the Pythagoreans). The Pythagorean worldview (which will be described in detail in Chapter 2) was based on an extreme admiration for the arithmos--the intrinsic properties of whole numbers or their ratios--and their presumed role in the cosmos. The realization that there exist numbers, like the Golden Ratio, that go on forever without displaying any repetition or pattern caused a true philosophical crisis. 
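Euclid's condition translates directly into the equation x² = x + 1 for the ratio AB/AC (since AB = AC + CB, dividing through by AC gives 1 + CB/AC = AB/AC), whose positive root is the value just quoted. A few lines of Python (my illustration) verify this on a unit segment:

```python
phi = (1 + 5 ** 0.5) / 2       # positive root of x**2 = x + 1

AB = 1.0                       # whole line
AC = AB / phi                  # greater segment
CB = AB - AC                   # lesser segment

# "as the whole line is to the greater segment, so is the greater to the lesser"
print(AB / AC, AC / CB, phi)   # all three agree: 1.6180339887...
assert abs(AB / AC - AC / CB) < 1e-12
```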
Legend even claims that, overwhelmed with this stupendous discovery, the Pythagoreans sacrificed a hundred oxen in awe, although this appears highly unlikely, given the fact that the Pythagoreans were strict vegetarians. I should emphasize at this point that many of these stories are based on poorly documented historical material. The precise date for the discovery of numbers that are neither whole nor fractions, known as irrational numbers, is not known with any certainty. Nevertheless, some researchers do place the discovery in the fifth century b.c., which is at least consistent with the dating of the stories just described. What is clear is that the Pythagoreans basically believed that the existence of such numbers was so horrific that it must represent some sort of cosmic error, one that should be suppressed and kept secret. The fact that the Golden Ratio cannot be expressed as a fraction (as a rational number) means simply that the ratio of the two lengths AC and CB in Figure 2 cannot be expressed as a fraction. In other words, no matter how hard we search, we cannot find some common measure that is contained, let's say, 31 times in AC and 19 times in CB. Two such lengths that have no common measure are called incommensurable. The discovery that the Golden Ratio is an irrational number was therefore, at the same time, a discovery of incommensurability. In On the Pythagorean Life (ca. a.d. 300), the philosopher and historian Iamblichus, a descendant of a noble Syrian family, describes the violent reaction to this discovery: They say that the first [human] to disclose the nature of commensurability and incommensurability to those unworthy to share in the theory was so hated that not only was he banned from [the Pythagoreans'] common association and way of life, but even his tomb was built, as if [their] former colleague was departed from life among humankind. 
In the professional mathematical literature, the common symbol for the Golden Ratio is the Greek letter tau (from the Greek τομή, to-mi, which means "the cut" or "the section"). However, at the beginning of the twentieth century, the American mathematician Mark Barr gave the ratio the name of phi, the first Greek letter in the name of Phidias, the great Greek sculptor who lived around 490 to 430 b.c. Phidias' greatest achievements were the "Athena Parthenos" in Athens and the "Zeus" in the temple of Olympia. He is traditionally also credited with having been in charge of other Parthenon sculptures, although it is quite probable that many were actually made by his students and assistants. Barr decided to honor the sculptor because a number of art historians maintained that Phidias had made frequent and meticulous use of the Golden Ratio in his sculpture. (We shall examine similar claims very scrupulously in this book.) I will use the names Golden Ratio, Golden Section, Golden Number, phi, and also the symbol φ interchangeably throughout, because these are the names most frequently encountered in the recreational mathematics literature. Some of the greatest mathematical minds of all ages, from Pythagoras and Euclid in ancient Greece, through the medieval Italian mathematician Leonardo of Pisa and the Renaissance astronomer Johannes Kepler, to present-day scientific figures such as Oxford physicist Roger Penrose, have spent endless hours over this simple ratio and its properties. But the fascination with the Golden Ratio is not confined just to mathematicians. Biologists, artists, musicians, historians, architects, psychologists, and even mystics have pondered and debated the basis of its ubiquity and appeal. In fact, it is probably fair to say that the Golden Ratio has inspired thinkers of all disciplines like no other number in the history of mathematics.
An immense amount of research, in particular by the Canadian mathematician and author Roger Herz-Fischler (described in his excellent book A Mathematical History of the Golden Number), has been devoted even just to the simple question of the origin of the name "Golden Section." Given the enthusiasm that this ratio has generated since antiquity, we might have thought that the name also has ancient origins. Indeed, some authoritative books on the history of mathematics, like François Lasserre's The Birth of Mathematics in the Age of Plato, and Carl B. Boyer's A History of Mathematics, place the origin of this name in the fifteenth and sixteenth centuries, respectively. This, however, appears not to be the case. As far as I can tell from reviewing much of the historical fact-finding effort, this term was first used by the German mathematician Martin Ohm (brother of the famous physicist Georg Simon Ohm, after whom Ohm's law in electromagnetism is named), in the 1835 second edition of his book Die Reine Elementar-Mathematik (The pure elementary mathematics). Ohm writes in a footnote: "One also customarily calls this division of an arbitrary line in two such parts the golden section." Ohm's language clearly leaves us with the impression that he did not invent the term himself but rather used a commonly accepted name. Yet the fact that he did not use it in the first edition of his book (published in 1826) suggests at least that the name "Golden Section" (or, in German, "Goldener Schnitt") gained its popularity only around the 1830s. The name might have been used orally prior to that, perhaps in nonmathematical circles. There is no question, however, that following Ohm's book, the term "Golden Section" started to appear frequently and repeatedly in the German mathematical and art history literature. It may have made its debut in English in an article by James Sully on aesthetics, which appeared in the ninth edition of the Encyclopaedia Britannica in 1875.
Sully refers to the "interesting experimental enquiry . . . instituted by [Gustav Theodor] Fechner [a physicist and pioneering German psychologist in the nineteenth century] into the alleged superiority of 'the golden section' as a visible proportion." (I discuss Fechner's experiments in Chapter 7.) The earliest English uses in a mathematical context appear to have been in an article entitled "The Golden Section" (by E. Ackermann) that appeared in 1895 in the American Mathematical Monthly and, around the same time, in the 1898 book Introduction to Algebra by the well-known teacher and author G. Chrystal (1851-1911). Just as a curiosity, let me note that the only definition of a "Golden Number" that appears in the 1900 edition of the French encyclopedia Nouveau Larousse Illustré is: "A number used to indicate each of the years of the lunar cycle." This refers to the position of a calendar year within the nineteen-year cycle after which the phases of the Moon recur on the same dates. Clearly the phrase took a longer time to enter the French mathematical nomenclature. But what is all the fuss about? What is it that makes this number, or geometrical proportion, so exciting as to deserve all of this attention? The Golden Ratio's attractiveness stems first and foremost from the fact that it has an almost uncanny way of popping up where it is least expected. Take, for example, an ordinary apple, the fruit often associated (probably mistakenly) with the tree of knowledge that figures so prominently in the biblical account of humankind's fall from grace, and cut it through its girth. You will find that the apple's seeds are arranged in a five-pointed star pattern, or pentagram (Figure 3). Each of the five isosceles triangles that make the corners of a pentagram has the property that the ratio of the length of its longer side to the shorter one (the implied base) is equal to the Golden Ratio, 1.618. . . . But, you may think, maybe this is not so surprising.
After all, since the Golden Ratio has been defined as a geometrical proportion, perhaps we should not be too astonished to discover that this proportion is found in some geometrical From the Hardcover edition. Excerpted from The Golden Ratio by Mario Livio. Copyright © 2002 by Mario Livio. Excerpted by permission of Broadway Books, a division of Random House LLC. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
die Theorie der algebraische Formen

Cited by 5 (0 self)
Constructive Mathematics might be regarded as a fragment of classical mathematics in which any proof of an existence theorem is equipped with a computable function giving the solution of the theorem. Limit Computable Mathematics (LCM) considered in this note is a fragment of classical mathematics in which any proof of an existence theorem is equipped with a function computing the solution of the theorem in the limit.

Urzyczyn ed., TLCA 2005, LNCS 3461, 2005
Cited by 4 (0 self)
Proof animation is a way of executing proofs to find errors in the formalization of proofs. It is intended to be "testing in proof engineering". Although the realizability interpretation as well as the functional interpretation based on limit-computations were introduced as means for proof animation, they were unrealistic as an architectural basis for actual proof animation tools. We have found game theoretical semantics corresponding to these interpretations, which is likely to be the right architectural basis for proof animation.

Cited by 3 (0 self)
The notion of Limit-Computable Mathematics (LCM) will be introduced. LCM is a fragment of classical mathematics in which the law of excluded middle is restricted to Δ⁰₂-formulas. We can give an accountable computational interpretation to the proofs of LCM.
The computational content of LCM-p ..." Cited by 3 (0 self) Add to MetaCart The notion of Limit-Computable Mathematics (LCM) will be introduced. LCM is a fragment of classical mathematics in which the law of excluded middle is restricted to 1 0 2 -formulas. We can give an accountable computational interpretation to the proofs of LCM. The computational content of LCM-proofs is given by Gold's limiting recursive functions, which is the fundamental notion of learning theory. LCM is expected to be a right means for "Proof Animation," which was introduced by the first author [10]. LCM is related not only to learning theory and recursion theory, but also to many areas in mathematics and computer science such as computational algebra, computability theories in analysis, reverse mathematics, and many others.
Mplus Discussion >> Second-order CFA among multiple groups

rebecca tang posted on Thursday, October 14, 2010 - 9:32 am
I am working on a project comparing second-order CFA models between two groups. Suppose there are four constructs x, y, z, m. x has several measurement scales, x1, x2, x3, ...; y has several measurement scales, y1, y2, y3, ...; z has several measurement scales, z1, z2, z3, .... I also link m (no measurement scale) -> x, m -> y, m -> z. Because it is a second-order CFA, among the three paths m->x, m->y, m->z, one of them has to be restricted to 1. When I compare the second-order CFA, how can I compare the path that is already restricted to 1? Suppose the path m->x is restricted to 1. After I compare the paths m->y and m->z, can I free the path m->z but restrict m->y to 1, and then compare the path m->x? Is it possible? Thank you very much for your help. It is really helpful for me!

Linda K. Muthen posted on Friday, October 15, 2010 - 9:10 am
Free all of the factor loadings for the second-order factor and set its metric by fixing the factor variance to one.

songthip ounpraseuth posted on Tuesday, February 01, 2011 - 12:14 pm
Hi Dr. Muthen – I wanted to examine a second-order CFA component prior to testing my larger SEM model. One of my constructs is given below:
USEVARIABLES ARE X1-X12;
CATEGORICAL ARE X1-X12;
MISSING ARE ALL (-99);
MODEL:
F1 BY X1 X2;
F2 BY X3 X4;
F3 BY X5-X8;
F4 BY X9-X12;
F5 BY F1 F2 F3 F4;
OUTPUT: Standardized; modindices;
It's failing to provide estimates: the number of iterations was exceeded and no convergence was achieved. Is the problem caused by the fact that F1 and F2 have only two first-order factor indicators? A colleague ran the same model in AMOS, which provided various estimates and fit statistics. Although the output appeared normal, I was concerned that the estimator used was MLE in the presence of ordinal indicators. I am currently running Mplus version 6.0.
Any insight or suggestions would be much appreciated.

Linda K. Muthen posted on Tuesday, February 01, 2011 - 12:53 pm
Please send the full output and your license number to support@statmodel.com so I can see why the model did not converge.
Supercomputers help solve a 50-year homework assignment Kids everywhere grumble about homework. But their complaints will hold no water with a group of theoretical physicists who've spent almost 50 years solving one homework problem -- a calculation of one type of subatomic particle decay aimed at helping to answer the question of why the early universe ended up with an excess of matter. Without that excess, the matter and antimatter created in equal amounts in the Big Bang would have completely annihilated one another. Our universe would contain nothing but light -- no homework, no schools…but also no people, or planets, or stars! Physicists long ago figured out something must have happened to explain the imbalance -- and our very existence. "The fact that we have a universe made of matter strongly suggests that there is some violation of symmetry," said Taku Izubuchi, a theoretical physicist at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory. The physicists call it charge conjugation-parity (CP) violation. Instead of everything in the universe behaving perfectly symmetrically, certain subatomic interactions happen differently if viewed in a mirror (violating parity) or when particles and their oppositely charged antiparticles swap each other (violating charge conjugation symmetry). Scientists at Brookhaven -- James Cronin and Val Fitch -- were the first to find evidence of such a symmetry "switch-up" in experiments conducted in 1964 at the Alternating Gradient Synchrotron, with additional evidence coming from experiments at CERN, the European Laboratory for Nuclear Research. Cronin and Fitch received the 1980 Nobel Prize in physics for this work. What was observed was the decay of a subatomic particle known as a kaon into two other particles called pions. Kaons and pions (and many other particles as well) are composed of quarks. 
Understanding kaon decay in terms of its quark composition has posed a difficult problem for theoretical physicists. "That was the homework assignment handed to theoretical physicists, to develop a theory to explain this kaon decay process -- a mathematical description we could use to calculate how frequently it happens and whether or how much it could account for the matter-antimatter imbalance in the universe. Our results will serve as a tough test for our current understanding of particle physics," Izubuchi said. Sophisticated computational tools The mathematical equations of Quantum Chromodynamics, or QCD -- the theory that describes how quarks and gluons interact -- have a multitude of variables and possible values for those variables. So the scientists needed to wait for supercomputing capabilities to evolve before they could actually solve them. The physicists invented the complex algorithms and wrote nifty software packages that some of the world's most powerful supercomputers used to describe the quarks' behavior and solve the problem. In the physicists' software, the particles are "placed" on an imaginary four-dimensional space-time lattice consisting of three spatial dimensions plus time. At one end of the time dimension lies the kaon, made of two kinds of quarks -- a "strange" quark and an "anti-down" quark -- held together by gluons. At the opposite end, they place the end products, the four quarks that make up the two pions. Then the supercomputer computes how the kaon transforms into two pions as it flies through space and time. Conducting these computations on the lattice greatly simplifies the problem. "We use the supercomputers to look at how each quark is flying -- its velocity, direction -- in other words, the dynamics of the strong QCD interaction," Izubuchi said. 
Somewhere in the middle of this complicated space-time grid, with some degree of probability, the strange quark of the kaon -- which the strong force keeps strongly bound with its anti-down quark partner -- suddenly starts to change into a down quark by the so-called electroweak interaction. Since a kaon is heavier than two pions, the energy released creates a new quark/anti-quark pair -- an "up" and an "anti-up" quark -- from the vacuum. These quarks then combine with the new down quark and the leftover anti-down quark to make the two pions. "The experiments showed how frequently these 'K→ππ' processes happen, but the part that violates CP symmetry is the strange quark converting into a down quark through the weak interaction," Izubuchi said. "That's the part we really wanted to know more about to understand the strength of this CP violation. That information will give us a hint of why the universe is matter-rich, and/or confirm the correctness of our current understanding of particle physics." The supercomputers crunched tens of billions of numbers into the equation that describes this part of the process to find the result that should reproduce the decaying particle patterns and frequencies observed by the experiments. "The result of the calculation tells us how frequently this CP-violating weak interaction occurs and the strength of the CP violation at the quark level," Izubuchi said. "It's a kind of reverse-engineering what experimenters have seen in kaon decays to solve the problem." New algorithm, higher precision After publishing their initial results in 2012, the physicists further improved their calculation to more closely simulate what happens with these particles in Nature. These new calculations allow them to directly compare their numbers with the experimental results more accurately, but they also increase the computational "cost" considerably -- requiring more computing power/time. 
Even with the newest supercomputers, the homework would have taken many years if not for a new efficient algorithm developed by the Brookhaven group in late 2012. "This new algorithm, called all-mode averaging (AMA), divides the whole calculation into a 'difficult' but small piece and an 'easier' large piece, and devotes more computation time to the latter part to save the total computation required," Izubuchi said. "It accelerates the speed of the computations by a factor of ten or more. This very simple idea of dividing the calculation into two pieces actually helped to reduce the statistical error of the computation by a lot."

Do the numbers add up?

Is the calculated strength of the weak interaction strong enough to account for the matter-antimatter asymmetry in the early universe? "That's the million-dollar question," said Izubuchi. "So far people think this is not the full answer. We cannot explain why the universe is matter-rich based solely on the amount of CP violation that this kaon decay accounts for. So there may be other sources of CP violation other than the weak interaction that would be revealed if a discrepancy were found between our calculation and the experimental results." Then Izubuchi confessed that the theorists have only solved half of their homework problem. "When we say we theoretically understood this process, it is only half true. There are two different ways the two end-result pions can combine with each other (called isospin states), and we've only solved the problem for one combination, the isospin 2 channel." The experiments have measurements for both isospin states, so the theorists are working on calculating the second process as well. "The other, isospin 0, is more challenging, and we are getting there by employing the faster supercomputers and new theoretical ideas and computation algorithms. But, for now, we have finished half of 50 years' homework."
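The splitting idea behind all-mode averaging can be sketched in a few lines (a toy illustration only, not the actual lattice-QCD implementation; the functions are made up): estimate an expensive average by evaluating a cheap approximation on many points and correcting it with the exact-minus-approximate difference on a few points.

```python
import math
import random

random.seed(0)

# Toy stand-ins: the "difficult" piece is an exact function, the
# "easier" piece is a cheap, slightly biased approximation of it.
def exact(x):
    return math.sin(x)            # pretend this is expensive to evaluate

def approx(x):
    return x - x**3 / 6           # cheap truncated series

points = [random.uniform(0.0, 1.0) for _ in range(10_000)]

# Large, easy piece: average the cheap approximation over ALL points.
cheap = sum(approx(x) for x in points) / len(points)

# Small, difficult piece: exact-minus-approximate correction on a FEW points,
# which removes the approximation's bias on average.
subset = random.sample(points, 200)
correction = sum(exact(x) - approx(x) for x in subset) / len(subset)

estimate = cheap + correction

truth = sum(exact(x) for x in points) / len(points)
print(f"estimate={estimate:.5f} truth={truth:.5f}")
```

Because the correction term is small and smooth, a couple hundred exact evaluations suffice, which is where the factor-of-ten speedup in the real calculation comes from.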
This research is part of DOE's Scientific Discovery through Advanced Computing (SciDAC-3) program "Searching for Physics Beyond the Standard Model: Strongly-Coupled Field Theories at the Intensity and Energy Frontiers," supported by the DOE Office of Science. Story Source: The above story is based on materials provided by Brookhaven National Laboratory. Note: Materials may be edited for content and length. Cite This Page: Brookhaven National Laboratory. "Supercomputers help solve a 50-year homework assignment." ScienceDaily. ScienceDaily, 1 October 2013. <www.sciencedaily.com/releases/2013/10/131001151042.htm>. Brookhaven National Laboratory. (2013, October 1). Supercomputers help solve a 50-year homework assignment. ScienceDaily. Retrieved April 16, 2014 from www.sciencedaily.com/releases/2013/10/ Brookhaven National Laboratory. "Supercomputers help solve a 50-year homework assignment." ScienceDaily. www.sciencedaily.com/releases/2013/10/131001151042.htm (accessed April 16, 2014).
The Importance of a Large N

The letter "N" is used to indicate the number of subjects contributing data to an experiment or opinion poll. N=500 means 500 subjects were used, or 500 people were polled. This number has a powerful influence on the reliability of the results.

What is the possible effect of taking a small sample? If you take a small sample, even a small random sample, you can get very unusual and misleading results. For example, if you drew the names of three students at random from a college registration list, you might come up with three students who were over six feet tall, purely by coincidence. But if you used a random sample of 100 students, the average height of students in the sample would resemble almost exactly the average height of the entire student body.

Why is it possible to say, with confidence, that a sample of 100 will have an average height within an inch or two of the entire student body? It is because of the so-called "Law of Large Numbers." The larger the N, the more closely a random sample will approximate its parent population. This law is really an outgrowth of basic laws of probability. Random variations go every which way. The larger the N, the more likely it is that all the variations will cancel each other out and leave you with an accurate average value.

Students sometimes ask how large an N is required before the results of a poll become trustworthy. The answer depends on the level of precision desired and the amount of variation in the characteristic being measured. A national opinion poll using a random sample of 500 people should produce results that are very accurate, provided the sample is truly unbiased. A smaller sample of 25 or so may give representative results if there is not a great deal of underlying variation. Polls based on 5 or 10 people are seldom reliable for anything except expressing the opinions of those 5 or 10 people.
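The Law of Large Numbers is easy to demonstrate with a short simulation (an illustrative sketch; the population parameters are made up): draw samples of different sizes from a simulated student body and compare each sample mean to the population mean.

```python
import random

random.seed(1)

# Hypothetical population: 10,000 student heights in inches,
# normally distributed around 68 with a standard deviation of 4.
population = [random.gauss(68, 4) for _ in range(10_000)]
pop_mean = sum(population) / len(population)

def sample_mean(n):
    """Mean of a simple random sample of size n from the population."""
    return sum(random.sample(population, n)) / n

for n in (3, 25, 100, 500):
    m = sample_mean(n)
    print(f"N={n:>3}: sample mean {m:.2f}, error {abs(m - pop_mean):.2f}")
```

Run it a few times with different seeds: the N=3 error bounces around, while the N=500 error stays within a fraction of an inch of zero, just as the text describes.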
Copyright © 2007 Russ Dewey
Rotational Equilibrium Rotational Equilibrium: When an object is in equilibrium, it is not moving or rotating. The object's linear and angular accelerations are both zero. * The sum of all the torques acting on a system must be equal to zero. * The sum of the clockwise torques must be equal to the sum of the counterclockwise torques. * The pivot axis can be chosen to be any point inside or outside the object. * It is useful to observe that choosing the pivot point on the line of action of a force eliminates that force from the equilibrium torque equations. This often simplifies things. Linear Static Equilibrium of the Center of Mass: * The vector sum of all the forces acting on the object must be equal to zero. * The sum of the horizontal components of all the forces must be equal to zero, and the sum of the vertical components of all the forces must be equal to zero. * The sum of the components of the forces to the right must be equal to the sum of the components of the forces to the left, and the sum of the components of the forces up must be equal to the sum of the components of the forces down. Torque Due to the Weight of an Object: * The torque on a solid body (about any axis) produced by the object's own weight can be calculated as if all the object's mass were located at the center of mass of the object. Here the lever arm is the distance between the pivot point and the center of mass. * Since freely rotating systems revolve about their center of mass, the weight of an object cannot create any torque on the object when it is rotating freely.
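These conditions can be applied to a concrete example (the numbers are made up): a uniform plank resting on supports at both ends with a person standing on it. Choosing the pivot at one support eliminates that support's force from the torque equation, and the force balance then gives the other support force.

```python
# A uniform 4 m plank of weight 200 N rests on supports at its ends.
# A 600 N person stands 1 m from the left support. Solve for the
# support forces F_left and F_right using static equilibrium.

L = 4.0           # plank length (m)
W_plank = 200.0   # plank weight (N), acting at the center of mass (L/2)
W_person = 600.0  # person's weight (N)
x_person = 1.0    # person's distance from the left support (m)

# Sum of torques about the LEFT support is zero (counterclockwise positive);
# putting the pivot there eliminates F_left from this equation.
#   F_right * L - W_plank * (L/2) - W_person * x_person = 0
F_right = (W_plank * (L / 2) + W_person * x_person) / L

# Sum of vertical forces is zero: up forces equal down forces.
F_left = W_plank + W_person - F_right

print(F_left, F_right)  # 550.0 250.0
```

As a check, taking torques about the right support instead gives F_left * L = W_plank * (L/2) + W_person * (L - x_person), which yields the same 550 N.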
Topic: shapes of atoms are the stacking of toruses #1728 Atom Totality 5th ed
Replies: 0
Posted: Aug 24, 2013 2:59 AM

Now looking in The Elements Beyond Uranium, Seaborg & Loveland, 1990, pages 76-77, they discuss what atoms look like from the relativistic Dirac Equation. And here this book states in various passages that the torus is a usual shape (a doughnut is a torus):
(a) "... a doughnut (p3/2, m = 3/2)"
(b) "... the highest m value for a given j always has a doughnut shaped distribution"
(c) "... States of intermediate m are multi-lobed toroids."
So it is looking better all the time for the idea that the torus is the building block geometry of atoms, where we stack toruses onto more toruses. In this manner a sphere shape is just the stacking of 3 or more individual toruses where the middle torus is larger than the other two.

I posted in sci.physics a few minutes ago:
deriving both Schrodinger and Dirac Equations from Maxwell Equations #1727 Atom Totality 5th ed
I thought it best to repost how the Maxwell Equations derive both Schrodinger and Dirac Equations.

Deriving Schrodinger Eq from Maxwell Eq

Alright, the Schrodinger Eq. is easily derived from the Maxwell Equations. In the Dirac Equation we need more than two of the Maxwell Equations because it is a 4x4 matrix equation, and so the full 4 Maxwell Equations are needed to cover the Dirac Equation, although the Dirac Equation ends up being a minor subset of the 4 Maxwell Equations, because the Dirac Equation does not allow the photon to be a double transverse wave while the Summation of the Maxwell Equations demands the photon be a double transverse wave.
But the Schrodinger Equation:

ih d(f(w)) = H f(w), where f(w) is the wave function

The Schrodinger Equation is easily derived from the mere Gauss's laws combined and without magnetic monopoles. These are the 4 symmetrical Maxwell Equations with magnetic monopoles:

div*E = r_E
div*B = r_B
- curlxE = dB + J_B
curlxB = dE + J_E

Now the two Gauss's laws of the Maxwell Equations standing alone are nonrelativistic, and so is the Schrodinger Equation.

div*E = r_E
div*B = 0

div*E + div*B = r_E

and this is reduced to k(d(f(x))) = H(f(x))

Now Schrodinger derived his equation out of thin air, using Fick's law of diffusion. So Schrodinger never really used the Maxwell Equations. The Maxwell Equations were foreign to Schrodinger and to all the physicists of the 20th century when it came time to find the wave function. But how easy it would have been for Schrodinger if he had instead reasoned that the Maxwell Equations derive all of physics, and that he should only focus on the Maxwell Equations. Because if he had reasoned that the Maxwell Equations were the axiom set of all of physics and then derived the Schrodinger Equation from the two Gauss laws, he could have further reasoned that if you take the Summation of all 4 Maxwell Equations, he would then have derived the relativistic wave equation and thus have found the Dirac Equation long before Dirac ever had the idea of finding a relativistic wave equation.

Deriving Dirac Eq from Maxwell Eq

Alright, these are the 4 symmetrical Maxwell Equations with magnetic monopoles:

div*E = r_E
div*B = r_B
- curlxE = dB + J_B
curlxB = dE + J_E

Now to derive the Dirac Equation from the Maxwell Equations we add the lot together:

div*E + div*B + (-1)curlxE + curlxB = r_E + r_B + dB + dE + J_E + J_B

Now Wikipedia has a good description of how Dirac derived his famous equation, which gives this:

(Ad_x + Bd_y + Cd_z + (i/c)Dd_t - mc/h) p = 0

So how is the above summation of Maxwell Equations that of a generalized Dirac Equation? Well, the four terms of div and curl are the A, B, C, D terms. And the right side of the equation can all be conglomerated into one term, and the negative sign in the Faraday law can turn that right side into the negative sign.

Now why is the derivation of the Dirac Equation by the AP-Equation important? Well, it is important for many reasons, not only that all of Quantum Mechanics is packed inside of the Maxwell Equations. But the important implication is that all the forces of physics are just the one force of EM, commonly called the Coulomb force. That means the Strong Nuclear force is a Coulomb force of a chemical bonding in the nuclei of atoms, and the Weak Nuclear force is EM also, and the gravity force is EM-gravity of magnetic monopoles which can only attract and never repel.
Programming Praxis - Binary Search Tree Programming Praxis – Binary Search Tree In today’s Programming Praxis exercise we have to implement a Binary Search Tree. Let’s get started, shall we? We need two imports: import Control.Monad import System.Random The data structure is your run-of-the-mill binary tree. data BTree k v = Node k v (BTree k v) (BTree k v) | Empty Finding an element is pretty straightforward. Just keep taking the correct branch until we exhaust the tree or find what we want. find :: (k -> k -> Ordering) -> k -> BTree k v -> Maybe v find _ _ Empty = Nothing find cmp k (Node k' v' l r) = case cmp k k' of EQ -> Just v' LT -> find cmp k l GT -> find cmp k r Inserting works the same way as find: move to the correct position and insert or replace the new value. insert :: (k -> k -> Ordering) -> k -> v -> BTree k v -> BTree k v insert _ k v Empty = Node k v Empty Empty insert cmp k v (Node k' v' l r) = case cmp k k' of EQ -> Node k v l r LT -> Node k' v' (insert cmp k v l) r GT -> Node k' v' l (insert cmp k v r) Since the deletion algorithm calls for a random number, delete is an IO action. You can consider using unsafePerformIO to hide this (I did in my first draft), but I decided to stick with the honest, safer (though less convenient) version. Alternatively you could accept the occasional imbalance and just always start on the left. delete :: (k -> k -> Ordering) -> k -> BTree k v -> IO (BTree k v) delete _ _ Empty = return Empty delete cmp k t@(Node k' v' l r) = case cmp k k' of EQ -> fmap (flip deroot t . (== 0)) $ randomRIO (0,1 :: Int) LT -> fmap (flip (Node k' v') r) $ delete cmp k l GT -> fmap ( Node k' v' l) $ delete cmp k r For the deroot function we use a slightly different approach than the Scheme version. I’m not sure how that version deals with the case of one of the two branches being empty, but here they are explicitly included in the patterns. The rot-left and rot-right functions are rewritten as patterns. 
deroot :: Bool -> BTree k v -> BTree k v deroot _ Empty = Empty deroot _ (Node _ _ l Empty) = l deroot _ (Node _ _ Empty r) = r deroot True (Node k v l (Node rk rv rl rr)) = Node rk rv (deroot False $ Node k v l rl) rr deroot _ (Node k v (Node lk lv ll lr) r) = Node lk lv ll (deroot True $ Node k v lr r) Converting the search tree to a list is trivial. toList :: BTree k v -> [(k, v)] toList Empty = [] toList (Node k v l r) = toList l ++ (k, v) : toList r And, as always, a test to see if everything is working correctly: main :: IO () main = do let t = foldr (uncurry $ insert compare) Empty $ [(n, n) | n <- [4,1,3,5,2]] print $ toList t print $ find compare 3 t print $ find compare 9 t print . toList =<< foldM (flip $ delete compare) t [4,2,3,5,1] Tags: binary, bonsai, code, data, Haskell, kata, praxis, programming, search, structure, tree programmingpraxis Says: March 5, 2010 at 1:45 pm | Reply In deroot, the Scheme version uses the test leaf-or-nil. It doesn’t matter which it is, since it gets chopped off anyway.
Jonathan H.

Hello to all interested in learning and understanding not only how to excel in math classes, but also how to understand why the math works. First of all, math is my life. It's my one true love and gives me meaning in this world. I have so much passion for math and I hope it rubs off on all that are around me. Growing up, I have always been the student in the class that everyone asks for help on math problems. For this reason I have always tutored my friends and loved ones in math whenever they need help. It gives me great pleasure when I know I have helped someone in their math education. My goal is to have everyone know math well, because math is part of our everyday lives and serves as the logical reasoning we use to solve every problem we have. Throughout all of my education, math classes have been the easiest for me to earn A's in. Math comes naturally to me, but on top of that I love to study math and make sure I have a thorough understanding of all the rules and the reasons why math works. Math is not just memorizing formulas and plugging in information; it is knowing the rules behind all formulas and understanding the logic. I have always been able to communicate math very well to students that need help in math. My credentials so far include math classes all the way up to Differential Equations. I have passed Calculus I, II and III with A's and have a thorough understanding of all the material and what is needed to do well in each of the classes. Since math builds on all previous concepts, I can easily tutor students in lower math classes as well, whether it be elementary math or high school math. I believe the major traits needed to tutor math are patience and determination. Math takes a lot of time and studying to learn, but anyone can do it as long as they give it their best effort and devotion.
I have learned many things from my mother, who has been teaching 5th and 6th grade for several years and has taught me important methods for teaching math and relating the material to students so the information sticks with them. Lastly, I want everyone to know I do not give up, and I will give students my devotion to make sure they understand the material they need to understand and stay on the correct path of education. Education is the most important thing in succeeding in today's harsh economy.
Definition: A shape formed by two lines or rays diverging from a common point (the vertex).

Vertex
The vertex is the common point at which the two lines or rays are joined. Point B in the figure above is the vertex of the angle ∠ABC.

Legs
The legs (sides) of an angle are the two lines that make it up. In the figure above, the lines are the legs of the angle ∠ABC.

Interior
The interior of an angle is the space in the 'jaws' of the angle extending out to infinity. See Interior of an Angle.

Exterior
All the space on the plane that is not the interior. See Interior of an Angle.

Identifying an angle
An angle can be identified in two ways.
1. Like this: ∠ABC. The angle symbol, followed by three points that define the angle, with the middle letter being the vertex and the other two on the legs. So in the figure above the angle would be ∠ABC or ∠CBA. So long as the vertex is the middle letter, the order is not important. As a shorthand we can use the 'angle' symbol. For example '∠ABC' would be read as 'the angle ABC'.
2. Or like this: ∠B. Just by the vertex, so long as it is not ambiguous. So in the figure above the angle could also be called simply '∠B'.

Measure of an angle
The size of an angle is measured in degrees (see Angle Measures). When we say 'the angle ABC' we mean the actual angle object. If we want to talk about the size, or measure, of the angle in degrees, we should say 'the measure of the angle ABC' - often written m∠ABC. However, many times we will see '∠ABC=34°'. Strictly speaking this is an error. It should say 'm∠ABC=34°'.

Types of angle
Altogether, there are six types of angle, as listed below.
Acute angle: less than 90°
Right angle: exactly 90°
Obtuse angle: between 90° and 180°
Straight angle: exactly 180°
Reflex angle: between 180° and 360°
Full angle: exactly 360°

In Trigonometry
When used in trigonometry, angles have some extra properties: they can have a measure greater than 360°, can be positive and negative, and are positioned on a coordinate grid with x and y axes. They are usually measured in radians instead of degrees. For more on this see Angle definition and properties (trigonometry).

Angle construction
In the Constructions chapter, there are animated demonstrations of various constructions of angles using only a compass and straightedge.
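The six categories above can be captured in a short classifier. This is an illustrative sketch; the function name and the decision to reject measures outside 0 to 360° are my own choices, not part of the original page:

```python
def classify_angle(degrees):
    """Classify an angle by its measure in degrees, per the table above."""
    if 0 < degrees < 90:
        return "acute"
    if degrees == 90:
        return "right"
    if 90 < degrees < 180:
        return "obtuse"
    if degrees == 180:
        return "straight"
    if 180 < degrees < 360:
        return "reflex"
    if degrees == 360:
        return "full"
    raise ValueError("measure outside the 0 to 360 degree range")

print(classify_angle(34))   # acute
print(classify_angle(270))  # reflex
```

Note that the trigonometric convention mentioned below (measures beyond 360°, negative angles) would need the extra wrap-around handling this sketch deliberately omits.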
Matlab problem - solving a simple differential equation

March 4th 2010, 11:18 AM #1
Mar 2010

I have been stuck with Matlab all day trying to solve a differential equation. I have found several tutorials on this subject but still didn't manage to solve my equation. Please have a look at the attached pictures. I wrote down the diff equation and also made a scan of a print from Matlab of what I have written. The text on either side of the line is in separate files. My problem: I want to get a solution for y, but for some reason Matlab only gives the initial value of y (which is zero). It seems like it doesn't solve the equation. I wonder what I could have done wrong? Especially the region I marked with red. I would appreciate any feedback that could help me solve this, so I can go on with my work... :P

Your derivative function does not return the derivative vector.
You have a differential equation of the form:

$\ddot{y}=f(t, \dot{y},y)$

You turn this into a first order system by introducing the state vector:

$\bold{y}=\left[ \begin{array}{c} y \\ \dot{y} \end{array}\right]$

$\dot{\bold{y}}=\left[ \begin{array}{c} \dot{y} \\ f(t, \dot{y},y) \end{array}\right]=\left[ \begin{array}{c} \bold{y}_2 \\ f(t, \bold{y}_2,\bold{y}_1) \end{array}\right]$

It is this $\dot{\bold{y}}$ that your derivative function should be returning.

Thanks for the help! I forgot to reply...

March 5th 2010, 03:41 AM #2
Grand Panjandrum
Nov 2005
March 15th 2010, 03:33 AM #3
Mar 2010
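The same first-order reduction can be sketched outside MATLAB. Below is a hypothetical Python version; since the poster's actual equation only appears in the attached pictures, it uses the toy equation y'' = -y (my choice), with a hand-rolled RK4 integrator standing in for an ODE solver. The key point is the same: the derivative function must return both components of the state vector's derivative.

```python
import math

# Second-order ODE y'' = f(t, y', y); as a toy example take f = -y, so that
# with y(0) = 0, y'(0) = 1 the exact solution is y = sin(t).
# State vector: Y = [y, y']. The derivative function must return BOTH
# components of dY/dt, i.e. [y', f(t, y', y)].
def deriv(t, Y):
    y, ydot = Y
    return [ydot, -y]

def rk4_step(f, t, Y, h):
    """One classical Runge-Kutta step for a first-order system."""
    add = lambda a, b, s: [ai + s * bi for ai, bi in zip(a, b)]
    k1 = f(t, Y)
    k2 = f(t + h / 2, add(Y, k1, h / 2))
    k3 = f(t + h / 2, add(Y, k2, h / 2))
    k4 = f(t + h, add(Y, k3, h))
    return [Y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(len(Y))]

def integrate(f, t0, t1, Y0, n=1000):
    """Integrate Y' = f(t, Y) from t0 to t1 in n fixed steps."""
    h = (t1 - t0) / n
    t, Y = t0, list(Y0)
    for _ in range(n):
        Y = rk4_step(f, t, Y, h)
        t += h
    return Y

y_final, ydot_final = integrate(deriv, 0.0, math.pi / 2, [0.0, 1.0])
print(y_final)   # ≈ 1.0 (= sin(pi/2))
```

Returning only a scalar from `deriv` (the bug diagnosed above) would leave the second state component frozen at its initial value, which matches the symptom the poster describes.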
Longitudinal elongation Welcome to the PF, misty777. Keep in mind that homework and coursework problems are to be posted in the Homework Help forums area, and not here in the general forums. And when you post homework problems, you should use the Homework Help Template that is provided when you start a thread in those forums. I'll leave this question here for now, but it may get moved to one of the Homework Help forums at some point (and your other thread here with a similar question may be moved as well). With homework and coursework questions, you are required to show us your work and the relevant equations, in order for us to help you better. So, please define stress and strain for us. What modulus relates stress and strain? How can you use those concepts to solve this problem?
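For reference on the concepts the moderator is hinting at: longitudinal elongation follows from stress = F/A, strain = δ/L, and Young's modulus E = stress/strain, so δ = FL/(AE). The numbers in this sketch are made-up illustration values, not taken from the thread:

```python
# Elongation of a uniform bar under axial load:
# stress = F / A, strain = stress / E, delta = strain * L.
F = 10_000.0   # axial force, N (made-up)
L = 2.0        # original length, m (made-up)
A = 1.0e-4     # cross-sectional area, m^2 (made-up)
E = 200.0e9    # Young's modulus, Pa (roughly steel)

stress = F / A         # Pa
strain = stress / E    # dimensionless
delta = strain * L     # m
print(delta)           # ≈ 0.001 m, i.e. about 1 mm of stretch
```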
st: RE: Re: matrix of significance stars

From: "Adam Seth Litwin" <aslitwin@MIT.EDU>
To: <statalist@hsphsun2.harvard.edu>
Subject: st: RE: Re: matrix of significance stars
Date: Sat, 29 Jul 2006 12:41:42 -0400

Against everybody's better judgment, including my own, I am still trying to do this by brute force. I'm SO close though. I now have a k x m matrix, where k is the number of independent variables and m is the number of models I have estimated. Each entry in the matrix is a 0, 1, 2, or 3 based on how many stars I would like the estimate to carry. (I built that matrix from a matrix of p-values.) Now, I would like to read each of these matrix elements into a macro, and then convert that macro to the appropriate string macro. So, I have...

local indep1M1stars = stars[1,1]
if `indep1M1stars' == 3 local `indep1M1stars' "***"
if `indep1M1stars' == 2 local `indep1M1stars' "**"
if `indep1M1stars' == 1 local `indep1M1stars' "*"
if `indep1M1stars' == 0 local `indep1M1stars'

That works fine. If I enter:

.di "`indep1M1stars'"

I get:

Fantastic. BUT...

.file open faketable using filename, write replace
.file write faketable `indep1M1stars'

does NOT write:

It still writes:

Do I need to reference indep1M1stars differently in the write command than in the display command? Thanks for suggestions. This is really vexing me.

adam

----- Original Message -----
From: "Kit Baum" <baum@bc.edu>
To: <statalist@hsphsun2.harvard.edu>
Sent: Saturday, July 29, 2006 6:31 AM
Subject: st: Re: matrix of significance stars

> findit estout
> Kit Baum, Boston College Economics
> http://ideas.repec.org/e/pba1.html
> On Jul 29, 2006, at 2:33 AM, Adam wrote:
>> I am trying to figure out the easiest way to build a matrix of
>> significance stars (or other symbols) after either an estimation
>> command or, better yet, after building a table of estimates.
>> I figured there would be something like e(stars), especially since the
>> estimates table can display them so readily. But, short of -parmest-,
>> which makes me build a whole new dataset, I'm not sure how to proceed.

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
my teacher ordered me to give him the answers of an assignment >> what is the output of any circuit???

do you mean the wattage or voltage?

any circuit means?
Arlington Heights, IL Trigonometry Tutor Find an Arlington Heights, IL Trigonometry Tutor ...I have tutored students of varying levels and ages for more than six years. While I specialize in high school and college level mathematics, I have had success tutoring elementary and middle school students as well. I have experience working with ACT College Readiness Standards and have been successful improving the ACT scores of students. 19 Subjects: including trigonometry, calculus, statistics, algebra 1 ...It was a comprehensive two year curriculum. My approach is to try different examples to explain the material. We will go over any subject as many times as necessary for the student to 18 Subjects: including trigonometry, physics, calculus, algebra 1 ...When teaching lessons, I put the material into a context that the student can understand. My goal is to help all of my students obtain a solid conceptual understanding of the subject they are studying, which provides a foundation to build upon. I consistently monitor progress and adjust lessons to meet the specific needs of each individual student. 12 Subjects: including trigonometry, calculus, algebra 2, geometry I have a PhD in microbial genetics and have worked in academic research as a university professor and for commercial companies in the biotechnology manufacturing sector. I have a broad background in science and math, a love of written and oral communication and a strong desire to share the knowledg... 35 Subjects: including trigonometry, chemistry, English, reading ...Properties of Logarithms. Exponential and Logarithmic Equations. Exponential and Logarithmic Models. 
17 Subjects: including trigonometry, reading, calculus, geometry
Related Rates

1. In the following assume that x and y are both functions of t. Given , and determine for the following equation.

2. In the following assume that x, y and z are all functions of t. Given , , , and determine for the following equation.

3. For a certain rectangle the length of one side is always three times the length of the other side. (a) If the shorter side is decreasing at a rate of 2 inches/minute at what rate is the longer side decreasing? (b) At what rate is the enclosed area decreasing when the shorter side is 6 inches long and is decreasing at a rate of 2 inches/minute?

4. A thin sheet of ice is in the form of a circle. If the ice is melting in such a way that the area of the sheet is decreasing at a rate of 0.5 m^2/sec, at what rate is the radius decreasing when the area of the sheet is 12 m^2?

5. A person is standing 350 feet away from a model rocket that is fired straight up into the air at a rate of 15 ft/sec. At what rate is the distance between the person and the rocket increasing (a) 20 seconds after liftoff? (b) 1 minute after liftoff?

6. A plane is 750 meters in the air flying parallel to the ground at a speed of 100 m/s and is initially 2.5 kilometers away from a radar station. At what rate is the distance between the plane and the radar station changing (a) initially and (b) 30 seconds after it passes over the radar station? See the (probably bad) sketch below to help visualize the problem.

7. Two people are at an elevator. At the same time one person starts to walk away from the elevator at a rate of 2 ft/sec and the other person starts going up in the elevator at a rate of 7 ft/sec. What rate is the distance between the two people changing 15 seconds later?

8. Two people on bikes are at the same place. One of the bikers starts riding directly north at a rate of 8 m/sec. Five seconds after the first biker started riding north the second starts to ride directly east at a rate of 5 m/sec.
At what rate is the distance between the two riders increasing 20 seconds after the second person started riding?

9. A light is mounted on a wall 5 meters above the ground. A 2 meter tall person is initially 10 meters from the wall and is moving towards the wall at a rate of 0.5 m/sec. After 4 seconds of moving is the tip of the shadow moving (a) towards or away from the person and (b) towards or away from the wall?

10. A tank of water in the shape of a cone is being filled with water at a rate of 12 m^3/sec. The base radius of the tank is 26 meters and the height of the tank is 8 meters. At what rate is the depth of the water in the tank changing when the radius of the top of the water is 10 meters?

11. The angle of elevation is the angle formed by a horizontal line and a line joining the observer's eye to an object above the horizontal line. A person is 500 feet away from the launch point of a hot air balloon. The hot air balloon is starting to come back down at a rate of 15 ft/sec. At what rate is the angle of elevation changing when the hot air balloon is 200 feet above the ground? See the (probably bad) sketch below to help visualize the angle of elevation if you are having trouble seeing it.
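As a spot-check of the kind of chain-rule computation these problems call for, here is problem 4 worked numerically. This is a sketch of one standard solution path; the only inputs are the numbers given in the problem:

```python
import math

# Problem 4: A = pi * r**2, so dA/dt = 2 * pi * r * dr/dt, which gives
# dr/dt = (dA/dt) / (2 * pi * r). When A = 12 m^2, r = sqrt(12 / pi).
dA_dt = -0.5                     # m^2/sec, the area is decreasing
r = math.sqrt(12 / math.pi)      # radius (m) when the area is 12 m^2
dr_dt = dA_dt / (2 * math.pi * r)
print(dr_dt)   # ≈ -0.0407 m/sec, i.e. the radius shrinks at about 0.041 m/sec
```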
Two Coils Are Wound Around The Same Cylindrical Form | Chegg.com

Two coils are wound around the same cylindrical form. When the current in the first coil is decreasing at a rate of -0.249, the induced emf in the second coil is 1.69×10^−3. What is the mutual inductance of the pair of coils, as well as the flux through the second coil, when the second coil has 25.0 turns and the current in the first coil equals 1.17?
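Assuming the units stripped from the listing are SI (A/s for the current rate and V for the induced emf; that assumption is mine, not the listing's), the two requested quantities follow from M = emf / |dI/dt| and flux = M * I / N:

```python
# Mutual inductance from the induced emf: M = emf / |dI/dt|, and the
# flux through one turn of the second coil: flux = M * I1 / N2.
# Units are assumed SI; they were lost from the listing.
dI_dt = 0.249    # magnitude of the rate of change of current (assumed A/s)
emf = 1.69e-3    # induced emf in the second coil (assumed V)
N2 = 25.0        # turns in the second coil
I1 = 1.17        # current in the first coil (assumed A)

M = emf / dI_dt               # mutual inductance, H
flux_per_turn = M * I1 / N2   # flux through one turn, Wb
print(M)              # ≈ 6.79e-3 H
print(flux_per_turn)  # ≈ 3.18e-4 Wb
```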
Carlsbad, CA Math Tutor Find a Carlsbad, CA Math Tutor ...I have used this program for over two years while working at my day job. I utilize Microsoft Outlook to send and receive emails, set reminders, send out meeting invitations, and to organize my daily, weekly, and monthly calendars. I find this system very useful and it has allowed me to become more efficient at my job while prioritizing my upcoming tasks and goals. 37 Subjects: including prealgebra, reading, algebra 2, algebra 1 ...I have tutored through Wyzant in this subject. I am a prolific writer, have taught writing in the classroom (7th grade, 12th grade, college freshmen and sophomores), and have tutored through Wyzant in this subject with strong results. I received my B.A. in psychology and would be glad to help you understand the history of psychology, research methods, etc. 30 Subjects: including geometry, SAT math, trigonometry, algebra 1 ...Welcome to my WyzAnt profile, and thank you for your interest. Here I have the opportunity to give you a brief overview of my academic credentials, my experience tutoring and teaching, and my overall approach as a tutor. I graduated in 1982 from the University of the City of Manila, Philippines with a Bachelor of Science in Chemical Engineering. 4 Subjects: including algebra 1, algebra 2, prealgebra, trigonometry ...However, I found that with a little patience, creativity, and humor, even the students that deemed themselves "prealgebra hopeless" can be turned into the ones that enjoy the challenge. I think of precalculus as advanced algebra with little snippets of baby-calculus tucked here and there plus tr... 11 Subjects: including algebra 1, algebra 2, calculus, geometry I have had a variety of work experience in government and private industry, primarily in computers(software)/mathematics. My college degrees are B.A. (math) and M.S. (math); early in my career, I obtained a substitute-teaching credential for California community (two-year) colleges. 
I have tutored students in high school math and at the two-year college level. 5 Subjects: including algebra 1, algebra 2, geometry, precalculus
Topology, Geometry and Gauge Fields: Foundations, 2nd edition, by Gregory L. Naber | 9781441972538 | Chegg.com

Topology, Geometry and Gauge Fields: Like any book on a subject as vast as this, this book has to have a point-of-view to guide the selection of topics. Naber takes the view that the rekindled interest that mathematics and physics have shown in each other of late should be fostered, and that this is best accomplished by allowing them to cohabit. The book weaves together rudimentary notions from the classical gauge theory of physics with the topological and geometrical concepts that became the mathematical models of these notions. The reader is asked to join the author on some vague notion of what an electromagnetic field might be, to be willing to accept a few of the more elementary pronouncements of quantum mechanics, and to have a solid background in real analysis and linear algebra and some of the vocabulary of modern algebra. In return, the book offers an excursion that begins with the definition of a topological space and finds its way eventually to the moduli space of anti-self-dual SU(2) connections on S4 with instanton number -1. Published by Springer.
Transwiki:Least count
Definition from Wiktionary, the free dictionary

Least count is the highest degree of accuracy of measurement that can be achieved. For example, the least count of a voltmeter is the minimum change that can be discerned. No measuring instrument is free of error when readings are taken; the least count, uncertainty, or maximum possible error characterizes such errors. Instruments' errors can be compared by calculating the percentage uncertainty of their readings; since all measurements are concerned with accuracy, the instrument with the least uncertainty is chosen to measure an object.

The percentage uncertainty is calculated with the following formula:

(Maximum possible error / Measurement of the object in question) × 100

The smaller the measurement, the larger the percentage uncertainty. The least count of an instrument is inversely proportional to the precision of the instrument.

A Vernier scale is constructed by taking 49 main scale divisions and dividing them into 50 divisions, i.e. 49 mm divided into 50 parts:

1 V.S.D. = 49/50 mm = 0.98 mm
1 M.S.D. = 1 mm

Substituting in the formula L.C. = 1 M.S.D. − 1 V.S.D. gives a least count of 1 mm − 0.98 mm = 0.02 mm.

Least count error

The smallest value that can be measured by the measuring instrument is called its least count. Measured values are good only up to this value. The least count error is the error associated with the resolution of the instrument. For example, a vernier caliper's least count is 0.02 mm, while a spherometer may have a least count of 0.001 mm. Least count error belongs to the category of random errors but within a limited scale; it occurs with both systematic and random errors. If we use a metre scale for measurement of length, it may have graduations at 1 mm division scale spacing or interval. Instruments of higher precision, improving experimental techniques, etc., can reduce the least count error. By repeating the observations and taking the arithmetic mean of the results, the mean value will be very close to the true value of the measured quantity.
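The percentage-uncertainty formula above is easy to sanity-check. In this sketch the sample least counts and measured lengths are arbitrary illustration values:

```python
def percent_uncertainty(least_count, measurement):
    """(Maximum possible error / measurement) * 100."""
    return least_count / measurement * 100.0

# A metre scale with a 1 mm least count: the smaller the measurement,
# the larger the relative error.
print(percent_uncertainty(1.0, 100.0))   # 1.0  -> 1% for a 100 mm length
print(percent_uncertainty(1.0, 10.0))    # 10.0 -> 10% for a 10 mm length
# A vernier caliper (least count 0.02 mm) on the same 10 mm length:
print(percent_uncertainty(0.02, 10.0))   # 0.2%
```

The last two lines illustrate why the instrument with the smaller least count is preferred for small objects.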
I need some help at least starting this out.

February 12th 2008, 12:28 PM #1
Jan 2008

A baseball hit at an angle of "X" to the horizontal with initial velocity "V" has horizontal range, R, given by $R = ((V^2)/g)\sin(2x)$. Here "g" is the acceleration due to gravity. Sketch "R" as a function of "x" for $0 \le x \le \pi/2$. What angle "Xmax" gives the maximum range? What is the maximum range "Rmax"?

The graph of sin(2x) is exactly the same as sin(x), except now the period is pi as opposed to 2pi. The constant multiple in front changes the amplitude, so the graph will go up to a max of (V^2)/g and down to a min of -(V^2)/g.

Ok, wow, I thought way too much into the question and totally didn't see any of that. So Xmax ended up being "pi/4" and I got the question right. Thanks, I appreciate it as always.
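The range formula can also be checked numerically. A quick Python sketch (the speed V = 30 m/s and g = 9.81 m/s^2 are my own illustration values):

```python
import math

def horizontal_range(V, x, g=9.81):
    """R = (V**2 / g) * sin(2x), with the launch angle x in radians."""
    return V**2 / g * math.sin(2 * x)

# sin(2x) is largest when 2x = pi/2, so Xmax = pi/4 and Rmax = V**2 / g.
V = 30.0                                   # made-up initial speed, m/s
print(horizontal_range(V, math.pi / 4))    # ≈ 91.74 m (= V**2 / g)
print(horizontal_range(V, math.pi / 6))    # shorter: a 30 degree launch angle
```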
{"url":"http://mathhelpforum.com/calculus/28108-i-need-some-help-least-starting-out.html","timestamp":"2014-04-18T07:18:28Z","content_type":null,"content_length":"41302","record_id":"<urn:uuid:ffd7791a-3a6b-4722-9152-370c48aed941>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
Car Talk Woes May 2001 Car Talk Woes A few weeks ago on NPR's popular afternoon program All Things Considered, Tom Magliozzi, who together with his brother Ray hosts the even more popular NPR program Car Talk, suggested that the teaching of algebra, geometry, and calculus in schools was a waste of time, and that we'd all be better off if pupils were taught more useful things. [You can hear the commentary by clicking here.] Now, along with millions of other listeners who make Car Talk by far the most popular show on NPR, I am an avid fan of the Magliozzi brothers. If tomorrow morning my 1995 Buick Park Avenue with 75,000 miles on the clock developed a strange BRRR--CLICK--BRRR-CLICK--BRRR-CLICK sound, Tom and Ray, who go by the nom-de-radio Click and Clack, would be the first people I would turn to for help. When it comes to cars, they know their stuff. But when it comes to mathematics, well, Tom, I've gotta tell you, you are so off base, it's scary. So scary, in fact, that when I heard your comments, I said to myself: "This is a smart guy -- heavens, he was a professor at MIT for many years. How come he thinks teaching mathematics serves no useful purpose?" It didn't take me long to find out. Thanks to the World Wide Web -- an entirely mathematical invention by the way -- I was able to replay Tom's words again, and it was clear what was going on. The real culprit isn't Tom, it's the way mathematics so often gets taught in schools. The event that prompted Tom's remarks was a back-to-school night at his son's high school. On the board in the math classroom Tom read the following statement: Calculus is the set of techniques that allow us to determine the slope at any point on a curve and the area under that curve." AGGGHHH. If I didn't know any better, that would have made me react the same way as Tom, although I'm not sure that Tom's phrase "Who gives a rat's patootie?" works as well in an English accent as it does in a Car Talk voice. 
Having been a mathematician -- not a math teacher I should add -- for thirty years, I can think of dozens of things I might have written on the board to describe calculus. For example: Calculus is a set of techniques that scientists and engineers use to describe accurately the way things move -- planets, space shuttles, ballistic missiles, electricity, radio waves, stock market prices, blood, the heart, the muscles of the body, and so on. Or, and this one is designed specially for Tom: calculus is the set of techniques that enable automobile designers and manufacturers to design and build a modern automobile, with all its moving Or: calculus was the major intellectual discovery of the seventeenth century that made possible the scientific revolution and all of modern science, technology, and medicine. Or, calculus is an absolutely indispensable tool for designing computers, radios, televisions, telephones, VCR machines, CD players, airplanes, artificial heart valves, CAT scan machines, the GPS positioning system, I could go on for ever. Or: calculus is the language physicists use to understand the universe and the world we live in. Or even simply, calculus is one of the greatest intellectual achievement of humankind. But if you take one of our culture's most impressive and useful inventions, that quite literally changed the world, and reduce it to a trivial statement about finding slopes of curves, as Magliozzi's son's teacher did, then there's no wonder Tom reacted the way he did. So we shouldn't blame Tom. He's simply the product of the education he received -- although I wonder who he mixed with during all those years on the faculty at MIT. And we shouldn't blame the hapless teacher who started this whole thing. It may well be that he or she is doing their best, given their education. Much of the blame lies in the way universities train future mathematics teachers. Mathematics only exists because it is so useful. 
No part of the subject should ever be taught at school level without explaining why it was invented and what some of its uses are -- uses that directly affect everyone in the classroom. According to Magliozzi, the purpose of education is, and I quote, to "help us to understand the world we live in." That, Tom, is precisely why some of our ancestors developed mathematics. Without mathematics, your weekly show would have to be called Horse Talk and you and your brother would have to be called Whinnie and Neigh -- except that without mathematics there wouldn't be any radio to do it on either, so you'd have to do it by standing in Harvard Square and yelling as loud as you could. By the way, I still think Car Talk is one of the best programs on radio. Devlin's Angle is updated at the beginning of each month. Keith Devlin (devlin@stmarys-ca.edu) is Dean of Science at Saint Mary's College of California, in Moraga, California, a Senior Researcher at Stanford University, and "The Math Guy" on NPR's Weekend Edition. His latest book is The Math Gene: How Mathematical Thinking Evolved and Why Numbers Are Like Gossip, published by Basic Books.
Section 14.1: Radial Wave Functions for Hydrogenic Atoms In Chapter 13, we studied the Coulomb potential as applied to the idealized hydrogen atom. Here we extend this analysis to so-called hydrogenic atoms. Hydrogenic atoms are atoms with only one electron and are therefore highly ionized. We can easily extend the discussion of the radial energy eigenfunctions from Section 13.8 by considering the Coulomb potential, $V = -Ze^2/r$, where $Z$ represents the number of protons. We can therefore make the replacement $e^2 \to Ze^2$ in the expressions for the Bohr radius and the energy levels, which gives:

$$a(Z) = \frac{\hbar^2}{\mu_e Z e^2} = \frac{a_0}{Z}, \qquad (14.1)$$

$$E_n(Z) = -\frac{\mu_e Z^2 e^4}{2 n^2 \hbar^2} = Z^2 E_n, \qquad (14.2)$$

where $a_0$ and $E_n$ are the Bohr radius and the energy levels for the idealized hydrogen (Coulomb) problem. Notice, therefore, that the size of hydrogenic atoms decreases as $1/Z$, and the binding energy increases quadratically as $Z$ increases. The radial energy eigenfunctions described in Eq. (13.29) change as well, such that

$$R_{Znl}(r) = \left(\frac{2Z}{na_0}\right)^{3/2} \left[\frac{(n-l-1)!}{2n\,[(n+l)!]^3}\right]^{1/2} e^{-Zr/na_0} \left(\frac{Zr}{na_0}\right)^{l+1} \frac{1}{r}\; v_{nl}\!\left(\frac{Zr}{na_0}\right). \qquad (14.3)$$

In the animation, these radial energy eigenfunctions are shown for Z ≤ 16 and n = 1, 2, 3, and 4 with the appropriate (allowed) l values. The quantum numbers are given in spectroscopic notation, such that 4f corresponds to n = 4 and l = 3. For the radial energy eigenfunction, notice how the number of crossings is related to the quantum numbers n and l. You should see that the number of crossings is n − l − 1. Also note how the spatial extent of the radial energy eigenfunctions decreases as Z increases according to Eq. (14.1).
In addition to $R_{Znl}(r)$, for the same range of Z values and the same quantum numbers, $R^2_{Znl}(r)$ and the probability density, $P_{Znl}(r) = R^2_{Znl}(r)\,r^2$, are also shown. You can change the start and end of the integral for $R^2_{Znl}(r)$ and $R^2_{Znl}(r)\,r^2$, as well as change the range plotted in the graph, by changing values and clicking the button associated with the state in which you are interested. You should quickly convince yourself that

$$\int_0^\infty R^2_{Znl}(r)\, r^2\, dr = \int_0^\infty u^2_{Znl}(r)\, dr = 1, \qquad (14.4)$$

and that indeed $P_{Znl}(r) = R^2_{Znl}(r)\,r^2$.
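Equations (14.3)–(14.4) and the node-counting rule can be cross-checked numerically. The sketch below is stdlib-only and uses atomic units with $a_0 = 1$; note that it follows the modern Laguerre-polynomial convention, in which the normalization carries $(n+l)!$ once rather than the $[(n+l)!]^3$ of the older convention assumed in Eq. (14.3), so the wave functions agree even though the prefactors are written differently.

```python
import math

def genlaguerre(k, alpha, x):
    """Generalized Laguerre polynomial L_k^alpha(x) via the three-term recurrence."""
    if k == 0:
        return 1.0
    lm, l = 1.0, 1.0 + alpha - x
    for i in range(1, k):
        lm, l = l, ((2 * i + 1 + alpha - x) * l - (i + alpha) * lm) / (i + 1)
    return l

def R_nl(r, n, l, Z=1.0, a0=1.0):
    """Hydrogenic radial eigenfunction (modern Laguerre convention, a0 = 1)."""
    rho = 2.0 * Z * r / (n * a0)
    norm = math.sqrt((2.0 * Z / (n * a0)) ** 3
                     * math.factorial(n - l - 1)
                     / (2.0 * n * math.factorial(n + l)))
    return norm * math.exp(-rho / 2.0) * rho ** l * genlaguerre(n - l - 1, 2 * l + 1, rho)

# Eq. (14.4): the probability density R^2 r^2 integrates to 1 (Riemann sum).
h, N = 0.01, 5000
val = sum(R_nl(i * h, 3, 1, Z=4.0) ** 2 * (i * h) ** 2 * h for i in range(1, N))

# Node-counting rule: R_{nl} crosses zero n - l - 1 times (here n=4, l=1 -> 2).
vals = [R_nl(0.05 + 0.02 * i, 4, 1) for i in range(4000)]
crossings = sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)
```

Running this gives a normalization integral within numerical error of 1 and exactly n − l − 1 = 2 sign changes, matching the statements in the text.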
Consecutive Sudoku Yet another Sudoku puzzle variation – it's somewhat similar to "greater/less than" puzzles, but completely new solving methods are required. It is called Consecutive Sudoku, although I've seen it under the name "Disallowed Number Place". You start with very few givens (in fact, I have created these puzzles with only a single starting clue), but you also have marks between cells that contain consecutive numbers; these are marked with a thick pipe symbol | between the cells. Don't think these are too easy. They can be made extremely difficult to solve! Apart from the obvious methods for solving these (if you have solved a cell with number 1, and there is a | symbol, you know that the adjacent cell must be 2), here are a few hints to help you out: 1. Use pencil marks and apply "pipes" to them. If a cell, for one reason or another, can contain only 5 or 8, and there is a pipe, you know that the cell next to it can contain only 4, 6, 7 or 9. 2. Where there is no pipe – that's also a clue! Don't forget to use it! If you solved a cell with number 4 and there is no pipe, you know that the cell next to it can be neither 3 nor 5! The second hint is very important. In fact, it is possible to make puzzles with no consecutive numbers in them at all – so the puzzle looks like a regular sudoku, but there are only, for example, 8 clues! If you didn't know that the puzzle was "consecutive" you would think it is impossible to solve. More about that when I construct and post one such puzzle. Okey dokey, here is the puzzle: Consecutive Sudoku for Monday, April 24. 3 Comments 1. I like it! I look forward to seeing some that are more difficult though. 2.
This sudoku has TWO solutions. 3. Oops, the second solution contains consecutive numbers in a cell pair where there is no mark. So, the solution is unique.
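The two hints above boil down to a single candidate-pruning rule between a pair of adjacent cells. A minimal sketch (the function and the set representation are mine, not from the post):

```python
def prune(cell_cands, neigh_cands, pipe):
    """Prune a neighbor's pencil marks given this cell's candidates.

    pipe=True:  the neighbor must differ by exactly 1 from some candidate (hint 1).
    pipe=False: once this cell is solved, the neighbor may NOT differ by 1 (hint 2).
    """
    if pipe:
        allowed = {v + d for v in cell_cands for d in (-1, 1)}
        return neigh_cands & allowed
    if len(cell_cands) == 1:          # hint 2 only bites once the cell is solved
        v = next(iter(cell_cands))
        return neigh_cands - {v - 1, v + 1}
    return neigh_cands

print(sorted(prune({5, 8}, set(range(1, 10)), True)))   # → [4, 6, 7, 9]
print(sorted(prune({4}, set(range(1, 10)), False)))     # → [1, 2, 4, 6, 7, 8, 9]
```

In the second call, 4 itself survives only because the pipe rule doesn't remove it; the ordinary row/column/box rules of sudoku eliminate it separately.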
Here's the question: If (x,y) is a point on the graph of a function, and all points near (x,y) have a greater y-value than (x,y), then (x,y) must be A. A relative maximum B. The absolute minimum on (−∞,∞) C. A relative minimum D. The absolute maximum on (−∞,∞)
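For the record, the stated condition is exactly the definition of a relative (local) minimum, i.e. answer C, since nothing in it constrains the function far from (x, y). A quick numerical illustration (helper name is mine):

```python
def is_relative_min(f, x, eps=1e-3, n=100):
    """Sample points near x (excluding x itself) and check that every one
    has a strictly greater f-value, i.e. the condition stated in the question."""
    pts = [x - eps + 2 * eps * k / n for k in range(n + 1)]
    return all(f(p) > f(x) for p in pts if abs(p - x) > 1e-12)

print(is_relative_min(lambda t: t * t, 0.0))     # → True
print(is_relative_min(lambda t: -t * t, 0.0))    # → False (relative maximum)
```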
Items tagged with optimal Further tags: implicit form, Matlab codegen for GPOPS purposes Hi there, I would like to introduce myself and ask you for advice. I have a four-linked robot that I would really like to control by means of optimal control theory. The arm has 4 actuators (giving torques U) at its 4 rotational joints. P parameters describe the system. I obtained the dynamic system with the Lagrangian method, and now I have 4 2nd order...
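Solvers like GPOPS expect the dynamics in explicit first-order form, so a common first step is to rewrite the second-order Lagrangian equations M(q)q̈ + C(q,q̇)q̇ + G(q) = u as ẋ = f(x, u) with state x = [q, q̇]. A sketch with placeholder callables (the names are mine, not the poster's):

```python
import numpy as np

def state_dot(x, u, M, C, G):
    """First-order form of M(q) qdd + C(q, qd) qd + G(q) = u for a 4-link arm.

    x: 8-vector [q, qdot]; u: 4 joint torques.
    M(q): 4x4 inertia matrix, C(q, qd): 4x4 Coriolis matrix, G(q): gravity vector.
    """
    q, qd = x[:4], x[4:]
    qdd = np.linalg.solve(M(q), u - C(q, qd) @ qd - G(q))
    return np.concatenate([qd, qdd])

# sanity check with trivial dynamics: unit inertia, no Coriolis/gravity -> qdd = u
xdot = state_dot(np.zeros(8), np.ones(4),
                 lambda q: np.eye(4),
                 lambda q, qd: np.zeros((4, 4)),
                 lambda q: np.zeros(4))
```

The resulting `state_dot` is the right-hand side one would hand to an ODE integrator or transcribe into a GPOPS dynamics function.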
Watertight Trimmed NURBS Thomas W. Sederberg, G. Thomas Finnigan (Brigham Young University); Xin Li (University of Science and Technology of China); Hongwei Lin (Zhejiang University); Heather Ipson (Brigham Young University) This paper addresses the long-standing problem of the unavoidable gaps that arise when expressing the intersection of two NURBS surfaces using conventional trimmed-NURBS representation. The solution converts each trimmed NURBS into an untrimmed T-Spline, and then merges the untrimmed T-Splines into a single, watertight model. The solution enables watertight fillets of NURBS models, as well as arbitrary feature curves that do not have to follow isoparameter curves. The resulting T-Spline representation can be exported without error as a collection of NURBS surfaces. CR Categories: I.3.5 [Computer Graphics]: Computational geometry and object modeling—Curve, surface, solid, and object representations Keywords: Surface intersection, Booleans, NURBS, T-Splines 1 Introduction The trimmed-NURBS modeling paradigm suffers from a serious fundamental flaw: parametric trimming curves are mathematically incapable of fulfilling their primary role, which is to represent the curve of intersection between two NURBS surfaces. Consequently, trimmed-NURBS models are not mathematically watertight, as illustrated in Figure 1, in which trimming curves are used to express the intersection between the body and spout of the Utah teapot model. We use the term "watertight" to connote no unwanted gaps or holes. That is, the surface is a gap-free 2-manifold in the neighborhood of intersection curves. This paper presents a two-step algorithm for representing the Boolean combination of two NURBS objects as a single watertight T-Spline. In the first step, each trimmed NURBS is converted into a T-Spline without trimming curves, as illustrated in Figure 2.a.
In the second step, each pair of untrimmed T-Splines is merged along their intersection curve into a gap-free T-Spline model, as illustrated in Figure 2.b. The resulting model, shown in Figure 2.c, is C2, except at the C0 crease along the intersection curve. This T-Spline model facilitates the creation of watertight fillets. The model in Figure 2.d contains a C2 gap-free fillet between the body and spout. Section 2 reviews the history and significance of the problem this paper addresses, and reviews prior literature. Section 3 presents an algorithm for converting a trimmed NURBS into an untrimmed T-Spline. Section 4 explains how to merge two NURBS or T-Spline surfaces with mis-matched parametrizations. Section 5 details how the algorithms presented in Sections 3 and 4 work together to create watertight trimmed-NURBS models, and examines the approximation error. This section also discusses the creation of gap-free, C2 fillets and the placement of feature lines on a T-Spline surface that are not aligned with iso-parameter curves. Section 6 summarizes. (Author e-mail: tom@byu.edu, tomfinnigan@gmail.com, xinliustc@gmail.com, hwlin@cad.zju.edu.cn.) (Figure 1: Trimmed-NURBS Representation of the Utah Teapot: (a) Spout translated away from body; intersection curve in white; (b) Body and spout trimmed using trimming curves; (c) Trimmed body and spout translated back into original orientation; (d) Blowup of green rectangle in (c), showing gap. Figure 2: Gap-free Teapot: (a) Body and spout converted to untrimmed T-Splines; (b) Body and spout merged into a single gap-free T-Spline; (c) Gap-free T-Spline model; (d) Gap-free C2 fillet.) 2 Background The fact that gaps are unavoidable in conventional trimmed-NURBS mathematical models can be shown as follows. A trimming curve is typically a degree-three NURBS curve defined in the parameter domain of a NURBS surface.
The image of such a trimming curve on a bicubic patch (i.e., the curve on the bicubic patch in R3 that the trimming curve maps to) is degree 18 and algebraic genus zero. However, a generic intersection curve of two bicubic surfaces is degree 324 in R3 [Sederberg et al. 1984] and algebraic genus 433 [Katz and Sederberg 1988]. Hence, intersection curves can only be approximated by parametric trimming curves. The existence of these gaps in trimmed NURBS models seems innocuous and easy to address, but in fact it is one of the most serious impediments to interoperability between CAD, CAM and CAE systems [Kasik et al. 2005]. Software for analyzing physical properties such as volume, stress and strain, heat transfer, or lift-to-drag ratio will not work properly if the model contains unresolved gaps. Since 3D modeling, manufacturing and analysis software does not tolerate gaps, humans often need to intervene to close the gaps. This painstaking process has been reported to require several days for a large 3D model such as an airplane [Farouki 1999] and was once estimated to cost the US automotive industry over $600 million annually in lost productivity [NIS 1999]. At a workshop of academic researchers and CAD industry leaders [Farouki 1999], the existence of gaps in trimmed-NURBS models was singled out as the single most pressing unresolved problem in the field of CAD. Prior Art Several solutions to the gap problem have been put forward, but none address the problem adequately. The best solution from a theoretical standpoint is to use the precise representation for trimming curves, which is an implicit (not parametric) equation of the form f(s, t) = 0. In the case of two intersecting bicubic patches, f(s, t) is a polynomial of bi-degree 54 × 54. [Krishnan and Manocha 1996] presents a solution to the surface intersection problem based on such a representation, and [Krishnan et al. 2001] describes a solid modeling system based on this approach, using exact arithmetic.
Unfortunately, exact arithmetic can be very expensive and the method has not been adopted by the CAD industry. In applications for which a tessellation of the surfaces suffices, gaps can easily be filled with a triangle strip or avoided altogether by careful coordination while tessellating adjoining trimmed surfaces [Kumar 1996; Moreton 2001]. However, once a NURBS model has been reduced to a C0 tessellation, it loses its character as a smooth surface and operations such as offsetting become impossible. [Song et al. 2004] and [Farouki et al. 2004] describe methods for creating a non-tessellated, watertight approximation of two or more intersecting NURBS surfaces. Each method produces a set of piecewise C0 (but approximately C1) Bézier patches, although if patches adjacent to an intersection curve are edited, the surfaces become discontinuous. Our technique produces a watertight C2 surface defined using a single T-Spline control grid, so the surface remains C2 if the control points are moved. [Song et al. 2004] requires the solution of a system of linear equations that under some conditions can produce huge approximation errors. Our method is more amenable to creating fillets than [Song et al. 2004] and [Farouki et al. 2004]. [Kristjansson et al. 2001] takes as input Loop subdivision surfaces, although extension to other types of subdivision surfaces is possible. [Kristjansson et al. 2001] produces a G2 watertight subdivision surface defined by a multi-resolution control grid; the surface remains G2 if the control grid is edited. The goal of [Kristjansson et al. 2001] is an efficient algorithm suitable for animation, but not necessarily for CAD. A pertinent prior art to Section 3 is [Litke et al. 2001], which describes a process of converting a trimmed subdivision surface into an untrimmed subdivision surface such that each trimming curve on the trimmed surface becomes a boundary curve on the untrimmed surface. [Litke et al.
2001] uses an enhanced Loop surface (triangle based), whereas our algorithm is based on tensor-product NURBS surfaces. Both algorithms must perturb the surface in the neighborhood of each trimming curve, but are capable of confining the perturbation to an arbitrarily small magnitude and narrow neighborhood. [Litke et al. 2001] uses trimming curves that lie in world space, thus permitting the true intersection curve to serve as the trimming curve, whereas we use conventional parametric trimming curves, which can only approximate true intersection curves. A key disadvantage of subdivision surfaces for use in CAD is their incompatibility with NURBS. Billions of dollars have been invested in NURBS software and models, and there is tremendous economic pressure against abandoning the NURBS paradigm. Furthermore, NURBS do have some advantages over subdivision surfaces. For example, numerous versions of subdivision surfaces exist, with no current industry standard, and many capabilities of subdivision surfaces involve special refinement rules. Also, subdivision surfaces are limit surfaces, involving infinite sequences of patches. Although there are efficient ways to evaluate subdivision surfaces [Stam 1998], the infinite number of patches is more difficult to deal with than a finite number of NURBS patches, especially when doing data file exchange. One approach used in commercial CAD software to manage the NURBS gap problem is to use a procedural definition of intersection curves, which keeps track of which surfaces intersect. Intersections can then be approximated, on demand, to any desired tolerance. This approach complicates subsequent tasks such as offsetting or filleting that require an explicit representation of the intersection curve. Furthermore, if a procedural definition of the same intersection is used by two different programs, it is possible to arrive at different results. 
This has resulted in incompatibilities between NURBS representations by different CAD, CAM, and CAE software applications, and the growth of an entire software industry around translating, fixing and healing 3D models and surfaces. The surface intersection problem has been very thoroughly researched. A sampling of the vast literature can be found in [Patrikalakis and Maekawa 2002; Song et al. 2004]. The algorithms described in this paper assume the existence of a robust surface intersection algorithm that can represent an intersection curve using trimming curves to within a prescribed tolerance, such as in [Krishnan and Manocha 1997]. Such capability is now standard in most commercial geometric modeling programs. The topic of fillets has likewise been widely researched. See [Song and Wang 2007] for a list of references, and a solution to the fillet problem that provides Gn continuity. Most commercial software approximates fillets of free-form surfaces as NURBS surfaces that lie on the base surfaces, with approximate G1 continuity. The advantage of the fillet solution presented in this paper is that it is part of a unified geometric framework. An entire geometric model, including fillets, can be represented as a single watertight T-Spline. 3 Trimmed-NURBS to Untrimmed T-Splines This section presents a method for converting a bicubic NURBS surface with trimming curve into an approximately equivalent TSpline with no trimming curve. The approximation error can be made arbitrarily small, and the perturbation can be confined to an arbitrarily narrow neighborhood of the trimming curve. We describe the algorithm using the example in Figure 3. Figure 4.a diagrams the trimming curve C in the parameter domain of the NURBS surface. The grid lines are knot lines for the NURBS surface. Points in the domain that correspond to NURBS control points are highlighted in red. Trimmed-NURBS to Untrimmed T-Splines Conversion Step 1. 
Form an axis-aligned polygon A (i.e., a polygon whose edges are parallel to one of the two parameter directions) that encloses the trimming curve, as illustrated in Figure 4.b. (Figure 3: Trimmed-NURBS to Untrimmed T-Splines Conversion: (a) Trimmed NURBS; (b) Untrimmed T-Spline.) Step 2. At each vertex of A that does not lie on a red point, perform a T-Spline control point insertion as described in [Sederberg et al. 2004]. In this example, the control point insertions will occur at the five red points lying on the black line in Figure 4.c. (The insertion operation adds two additional control points to the left of the black corner, and two beneath it, as shown in Figure 4.d.) Step 3. Remove the portion of the control mesh that lies on the interior of A and replace it with the mesh topology illustrated in Figure 4.d. Note that each convex corner in A introduces a valence-three control point in the modified control grid, and each concave corner in A creates a valence-five control point. Assign a knot interval of zero to all edges of the control grid that connect a blue control point to a green control point, as shown in Figure 5.a. These zeros create a Bézier end condition for this boundary curve. Assign a small knot interval ε to all edges connecting the outer layer of green control points to the inner layer of green control points. A good choice for ε is the average of all parameter distances between the vertices on A and the trimming curve. Step 4. Leave all red control points in Figure 4.d in their initial location. The blue control points in Figure 4.d define a NURBS curve that approximates the image of C. The positions of those control points are chosen to minimize the orthogonal distance between the NURBS curve and the image of C, using an algorithm such as in [Wang et al. 2006]. Likewise, the positions of the green control points are chosen to minimize the orthogonal distance between the T-Spline surface and the interior of the trimmed NURBS surface.
The resulting T-Spline and its control grid are shown in Figure 3.b. This procedure introduces some perturbation error, the magnitude of which in this example is 0.001 times the width of the model. The domain of the perturbed region lies within the support of the green and blue control points. Figure 5.a illustrates the perturbation region in yellow. For a fixed axis-aligned polygon, we can make the perturbation region on the exterior of the polygon arbitrarily narrow by performing a local T-Spline refinement, as illustrated in Figure 5.b. Likewise, we can make the distance between A and C arbitrarily small by finding an axis-aligned polygon that approximates C to within a tolerance ε. Clearly, there are countless such axis-aligned polygons, and numerous possible algorithms for finding such polygons. We now present one such algorithm. 3.1 Finding an Axis-Aligned Polygon The algorithm, illustrated by the example in Figure 6, has the flavor of a curve rasterization in which the pixels are cells of a quadtree. Related applications of quadtrees are reported in [Hunter and Steiglitz 1979; Samet 1984]. (Figure 4: Algorithm for Converting a Trimmed NURBS into an Untrimmed T-Spline: (a) Trimming curve; (b) Axis-aligned polygon A (blue); (c) Control points inserted at vertices of A; (d) Topology modification.) We begin by defining a color-based classification system for a rectangular domain R with respect to a trimming curve C and a tolerance ε as follows: White: C does not intersect R. Blue: C does intersect R, and the width or height of R is > ε. Red: C does intersect R, the width and height of R are < ε, but the one-neighborhood of cells adjacent to R intersects C in more than one connected component. Gray: C does intersect R, the width and height of R are < ε, and the one-neighborhood of cells adjacent to R intersects C in exactly one connected component. Begin by assigning a color to each rectangle bounded by knot lines in the parameter domain, using the above classification.
Split each red or blue rectangle into four axis-aligned rectangles and reclassify each of those four new rectangles. Repeat these splitting and reclassification operations until all cells are either white or gray. At this point, each component of C will be covered by a contiguous set of gray cells, which we might call a rasterization of the component. For each component, the perimeter of its rasterization will serve as an acceptable axis-aligned polygon. In Figure 6.f, the axis-aligned polygon is highlighted in blue. (Figure 5: Limiting the Perturbation Domain: (a) Perturbation domain (yellow); (b) Decreasing the perturbation domain through T-Spline local refinement, with the new knot intervals chosen smaller than b and c. Figure 6: Algorithm for Finding Bounding Polygons: (a) Initial classification; (b)–(f) First through fifth iterations; A is highlighted in blue.) Extraordinary Points [Sederberg et al. 2003] suggests dealing with extraordinary points in T-Spline surfaces using the method presented in [Sederberg et al. 1998], which is a generalization of Catmull-Clark refinement that takes into account knot intervals. For our purposes, we can modify Step 2 in Algorithm 1 to include doing T-Spline refinements to force all knot intervals to be identical in the 2-neighborhood of each extraordinary point created in Step 3. This converts the extraordinary points into conventional Catmull-Clark style with uniform knots, making possible the use of methods such as [Peters 2000] for patching valence-n extraordinary points using G1 bicubic patches, with one patch per face of the control grid. Alternatively, the extraordinary region can be filled using G2 patches using a method such as in [Loop 2004]. An important property of this procedure is that the resulting T-Spline is fully editable, meaning that its control points can be adjusted and all the properties of a C2 spline are honored.
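The classify-and-split loop of Section 3.1 can be prototyped compactly. The sketch below is a simplification (the names are mine): the trimming curve is represented by sample points, and the red/gray one-neighborhood distinction is omitted, so it simply returns every sufficiently small cell touched by the curve.

```python
def gray_cells(samples, rect, eps):
    """Recursively subdivide rect until every cell touched by the (sampled)
    trimming curve has width and height < eps; return those 'gray' cells.
    Cells the curve misses are 'white' and are discarded."""
    x0, y0, x1, y1 = rect
    pts = [(x, y) for (x, y) in samples if x0 <= x <= x1 and y0 <= y <= y1]
    if not pts:
        return []                                   # white cell
    if x1 - x0 < eps and y1 - y0 < eps:
        return [rect]                               # gray cell: part of the rasterization
    xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0       # 'blue' cell: split in four
    cells = []
    for quad in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                 (x0, ym, xm, y1), (xm, ym, x1, y1)]:
        cells += gray_cells(pts, quad, eps)
    return cells

# toy trimming curve: the diagonal of the unit parameter square
samples = [(i / 100.0, i / 100.0) for i in range(101)]
cells = gray_cells(samples, (0.0, 0.0, 1.0, 1.0), 0.3)
```

The perimeter of the union of the returned cells would then play the role of the axis-aligned polygon A.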
The algorithm offers a tradeoff between accuracy and number of control points. If the goal is to trim away some holes but then to continue to modify the resulting T-Spline, an artist or designer can opt for fewer control points. The resulting larger approximation error should be acceptable since the surface will undergo additional modification. Figure 7 shows an example involving two loops and a sharp corner. In this case, the perturbation error is 0.00025 relative to the width of the patch. An improved algorithm for computing the green control points is a problem calling for future research. To create our example figures, we chose a traditional method that appeared simplest to implement. We begin by obtaining a set of sample points on the region of the trimmed surface that will be perturbed and then specify an initial position for the green control points. The following process is then repeated: Assign each sample point a parameter pair on the untrimmed T-Spline, then solve for the green control points that minimize the least squares error based on these parameter assignments. (Figure 7: Trimmed NURBS to Untrimmed T-Spline Conversion: (a) Trimmed NURBS; (b), (c) Untrimmed T-Spline.) It is known that this algorithm converges only linearly [Bjorck 1996], and indeed it can take several tens of seconds to obtain good results if the initial positions of the green control points are not chosen wisely. The algorithm has also been observed to converge to a local minimum. One possible solution is to extend to surfaces the curve-fitting algorithm in [Wang et al. 2006], something the authors of that paper are working on. Another line of research is to study whether the quasi-interpolation methods used in [Litke et al. 2001] can be adapted to this setting. 4 Merging using NU-NURBS After two intersecting surfaces are converted into untrimmed T-Splines using the method in Section 3, the final step is to merge those two T-Splines into a single, gap-free T-Spline.
A basic algorithm for merging two T-Splines is presented in [Sederberg et al. 2003]. However, that algorithm gives poor results if the two surfaces to be merged do not have consistent parametrizations, as illustrated in Figure 8. The first step in the merge algorithm in [Sederberg et al. 2003] is to insert knots such that the two surfaces have the same set of knot intervals, as shown in Figure 8.b. (This might also require that all knot intervals on one surface be scaled so that their sum matches the sum of the knot intervals on the other surface.) The final step is to connect the two control grids, as shown in Figure 8.c. However, if the adjoining boundary curves are not parametrized similarly, the iso-parameter curves will experience an abrupt bend, imparting a kink in the resulting surface. For example, Figure 9 shows a hand and arm modeled as separate NURBS, whose parametrizations do not align. Figure 9.b shows the result of merging them using the algorithm described in [Sederberg et al. 2003]. Unfortunately, most pairs of untrimmed T-Splines generated using the method in Section 3 have this problem. The problem is related to the fact that the refinements shown in Figure 8.b must honor a restriction that is placed on the knot intervals in a T-Spline: the sum of knot intervals on one edge of a face of the control grid must equal the sum of knot intervals on the opposing edge of the face. Better results could be obtained if, instead of refining each surface as in Figure 8.b, we refine the two surfaces so that their knot lines align, as shown in Figure 8.d. However, the resulting knot interval configuration violates the definition of a T-Spline. Previous methods for dealing with such knot intervals [Sederberg et al. 1998; Müller et al. 2006] devise variations on Catmull-Clark refinement in which faces of the control grid map to an infinite sequence of bicubic patches which, like extraordinary points in a Catmull-Clark surface, are G1.
The infinite sequence of patches violates a key objective of this paper, which is to be exportable using a finite number of tensor-product patches. (Figure 8: Merging Two NURBS Surfaces: (a) Initial knot intervals; (b) After refinement; (c) Control grids with mismatched parametrizations; (d) Refinement to align isoparameter curves.) To address this problem, we introduce a generalization of tensor-product B-Spline surfaces that supports the knot interval configuration in Figure 10.b, that is C2, and that yields one tensor-product patch per face of the control grid. Since the knot intervals change and hence are not "uniform," we will refer to this surface as a non-uniform NURBS surface, or NU-NURBS (spoken "new NURBS"). (Figure 9: Merging NURBS Hand and Arm Models: (a) Arm and hand showing mismatched knot intervals; (b) Merge using the algorithm in [Sederberg et al. 2003]; (c) Merge using NU-NURBS. Model courtesy of Zygote Media Group.) The idea is based on the fact that a tensor-product B-Spline surface can be viewed as a family of iso-parameter curves:

$$P(s, t) = \sum_i P_i(t)\, B_i(s), \quad \text{where} \quad P_i(t) = \sum_j P_{ij}\, B_j(t) \qquad (1)$$

(Figure 10: Knot Interval Configurations: (a) Knot intervals in a bicubic NURBS control grid; (b) Knot intervals in a merge region.) The $P_i(t)$ can be viewed as "moving control points" that slide along B-Spline curves, as illustrated in Figure 11.a. (Figure 11: Constructing a Family of Isoparameter Curves on a NURBS Surface: (a) "Moving control points" $P_i(t)$; (b) Iso-parameter curves, $P(s)$.) For a fixed value of $t = \tau$,

$$P(s) = \sum_i P_i(\tau)\, B_i(s) \qquad (2)$$

defines a cubic B-Spline curve that lies on the bicubic B-Spline surface, and is the iso-parameter curve for $t = \tau$.
If we let \bar{t} vary, the resulting family of iso-parameter curves sweeps out the B-Spline surface. Figure 11.b shows four such iso-parameter curves for various values of \bar{t}. Since most readers will be more familiar with B-Splines defined using knot vectors rather than with knot intervals, we note that it is straightforward to convert between the two representations. Define \tilde{e}_{-2} = 0 and \tilde{e}_{i+1} = \tilde{e}_i + e_i, i = -2, ..., 5 (e_{-1}, e_0, e_5, and e_6 are not shown in the figure). Then the knot vector for the B-Spline curves P_i(t) is \{\tilde{e}_{-2}, \tilde{e}_{-1}, ..., \tilde{e}_6\}. Likewise, the knot vector for each of the iso-parameter curves in Figure 11.b is \{\tilde{d}_{-2}, \tilde{d}_{-1}, ..., \tilde{d}_7\}, where \tilde{d}_{-2} = 0 and \tilde{d}_{i+1} = \tilde{d}_i + d_i, i = -2, ..., 6.

We now modify that description of a B-Spline surface to permit a knot interval arrangement as in Figure 12.a, in which the d_{ij} can be any non-negative number. The basic idea is to treat the knot intervals themselves as cubic spline functions, which in turn control the basis functions B_i(s) in (2).

[Figure 12: NU-NURBS. (a) Knot intervals for NU-NURBS. (b) Iso-parameter curve, P(s).]

Figure 12.b shows an iso-parameter curve on a NU-NURBS surface. The "moving control points" P_i(t) in this figure are identical to those used in the description of NURBS surfaces in Figure 11.b. The only difference is that in the NURBS case in Figure 11.b the knot intervals d_i are constants, whereas for NU-NURBS the d_i(t) are spline functions. The coefficients of the spline function d_i(t) are d_{i0}, d_{i1}, d_{i2}, ..., d_{i5}, d_{i6}, and the knot vector for the spline function is \{\tilde{e}_{-2}, \tilde{e}_{-1}, ..., \tilde{e}_6\}. The NU-NURBS surface is thus defined as a family of iso-parameter curves. This NU-NURBS formulation has the following properties:

1. It specializes to NURBS in the case where the knot interval configuration is identical to that in Figure 10.a.

2.
This NU-NURBS is C2 in s, since each iso-parameter curve P(s) is a cubic spline curve whose knot intervals are constant for a fixed value of t. Since the knot interval functions are C2 splines in t, the NU-NURBS is also C2 in t.

3. The cost of evaluating this NU-NURBS is comparable to the cost of evaluating a bicubic NURBS surface; the only difference lies in evaluating the knot interval spline functions d_i(t).

4. Although this surface formulation is new and hence not directly supported in existing commercial software, it can be exactly represented, and exported, as a set of rational Bézier patches, with one patch per face of the control grid. Unfortunately, the degree of those patches can be rather high. We have devised other, lower-degree versions of NU-NURBS, all of which produce one patch per face, including a version that is C2 in t and G1 in s with patches of degree 36, and one that is C2 in t and G2 in s with patches of degree 49. We have also devised a version of NU-NURBS that permits arbitrary non-negative knot intervals in both parameter directions. These variations are presented in a separate paper [Sederberg et al.].

5 Examples

This section examines the behavior of the algorithms presented in Sections 3 and 4 when they combine to represent two intersecting trimmed NURBS surfaces as a single watertight T-Spline. It also shows how fillets are supported in this representation, along with arbitrary feature lines. As reviewed in Section 2, the problem of computing intersection curves is very well studied, and algorithms for computing the intersection of two NURBS surfaces P1 and P2 are standard in most geometric modeling programs. These algorithms can compute trimming curves C1 and C2 in the parameter domains of P1 and P2, along with an approximation C of the intersection curve in R3, to within a prescribed tolerance.
This paper assumes that C1, C2, and C have been computed using an existing algorithm, and that the geometric and topological accuracy of these curves is deemed acceptable.

Perturbation Error

Figure 13 shows the trimmed teapot body being converted into untrimmed T-Splines with different degrees of precision. The NURBS model of the teapot used throughout this paper is actually a C2 NURBS model based on the original C1 Bézier model. In our NURBS model, the body is defined using 90 control points. Figure 14 shows the error distribution in both the body and the spout. This figure shows how the perturbation domain shrinks as the tolerance decreases. By adjusting the tolerance, the perturbation magnitude and extent can be held below a specified bound.

[Figure 14: Error Plots. Dark Blue Denotes Zero Error.]

The perturbation errors reported in the captions in Figure 13 are relative to a teapot that is one unit wide. Hence, the error in Figure 13.a is about one tenth of one percent of the width of the teapot.

[Figure 15: Trimless T-Spline Cylinders. Relative Error = 0.0009]

Figure 16.a shows the knot intervals adjacent to the intersection curve. Immediately after the merge is completed, k = 0. This creates a triple knot at the intersection curve and forces the T-Spline to be C0 along the intersection curve. If the value of k is changed to a small positive value, the C0 crease along the intersection curve is changed into a C2 fillet whose radius increases with k.

[Figure 16: Fillet. (a) Knot intervals following merge; initially k = 0. (b) Setting k = 0.1 to create a small fillet.]

[Figure 17: Filleted CSG Object as an Untrimmed T-Spline. (a) Untrimmed T-Spline. (b) With control grid.]

Feature Lines

Feature lines in NURBS models must follow iso-parameter curves. T-Splines allow for sharp features along portions of iso-parameter curves [Sederberg et al. 2003].
Subdivision surfaces don’t have a strong notion of iso-parameter curves, and hence procedures have been devised for placing a fillet or feature curve in an arbitrary direction on a subdivision surface [DeRose et al. 1998]. A general-purpose tool for placing arbitrary feature lines on any geometric model, based on a deformation, is described in [Singh and Fiume 1998]. To create an arbitrary feature curve on a NURBS surface using the algorithms presented in Sections 3 and 4, the feature curve is drawn as a parametric curve in the parameter space of the surface, much like a trimming curve except that the curve need not be closed. The curve is then processed as discussed in Section 3: an axis-aligned bounding polygon is found, and topological and fitting operations are performed. The resulting control points can then be moved to create the desired feature.

[Figure 18: NURBS Car Hood. (a) Control grid. (b) Surface.]

We illustrate the procedure using the model of a NURBS car hood in Figure 18. Figure 19.a shows the control points that result from the process in Section 3, after they have been adjusted to create the desired feature lines, and Figure 19.b shows the resulting surface.

[Figure 19: T-Spline Car Hood with Detail. (a) Control grid. (b) Surface.]

6 Discussion

The modeling tools presented in this paper extend the capabilities of T-Splines to express Booleans, fillets, and arbitrary feature curves in a single unified framework. The geometric models thus created are watertight, C2 (or C1 or C0 if multiple knots are specified), and can be exported without translation error as a finite collection of NURBS patches, making them compatible with CAD industry standards. The models are editable in that control points can be adjusted and the surface will remain C2. These results can help to streamline the CAD modeling–analysis pipeline.
In addition, these tools introduce new design workflows into the styling and CAD industries, allowing NURBS modelers to continue to style their models even after Booleans have been performed. The paper also presents an enhanced merging capability involving an augmentation of the definition of T-Splines to support different knot intervals on opposite sides of the same control polygon face. This enables the merging of two NURBS or T-Spline surfaces whose mating curves are parametrized differently, a case not handled well in [Sederberg et al. 2003]. Our process for merging two trimmed NURBS surfaces into an untrimmed T-Spline involves a perturbation of the original surfaces. The perturbations can be limited to an arbitrarily narrow strip. Tighter tolerances demand more control points, making the resulting T-Spline more difficult to edit, although if the designer's intent is to ultimately edit the resulting T-Spline, a tight initial tolerance may not be as crucial. This paper invites future research on several fronts. Section 3 discusses possible lines of research for finding an efficient algorithm for computing the green control points, the problem we view as most pressing. Also called for is a rigorous analysis of approximation error: what is the relationship between the tolerance, positional error, and normal-vector error? What is the convergence rate? The paper focuses on two intersecting surfaces. Details of how to handle three or more intersecting surfaces are not presented and invite further study. The existing algorithms should extend readily to handle most cases where all intersections are to be computed simultaneously. The case where an untrimmed T-Spline that represents two intersecting surfaces is later intersected by a third NURBS or T-Spline is more challenging, because it can involve the intersection of NU-NURBS or of faces next to extraordinary points.
It is also not clear how the error might propagate upon repeated such operations. The paper only addresses non-singular intersection curves. Full treatment of intersection curves that self-intersect is another topic for further study.

7 Acknowledgements

Nicholas North and Adam Helps provided invaluable assistance with implementation and figures.

References

BJÖRCK, Å. 1996. Numerical Methods for Least Squares Problems. SIAM.

DEROSE, T. D., KASS, M., AND TRUONG, T. 1998. Subdivision surfaces in character animation. In Proceedings of SIGGRAPH 1998, Computer Graphics Proceedings, Annual Conference Series.

FAROUKI, R. T., HAN, C. Y., HASS, J., AND SEDERBERG, T. W. 2004. Topologically consistent trimmed surface approximations based on triangular patches. Computer Aided Geometric Design 21, 5, 459–478.

FAROUKI, R. T. 1999. Closing the gap between CAD model and downstream application (report on the SIAM Workshop on Integration of CAD and CFD, UC Davis, April 12–13, 1999). SIAM News 32, 5, 1–3.

HUNTER, G. M., AND STEIGLITZ, K. 1979. Operations on images using quad trees. IEEE Transactions on Pattern Analysis and Machine Intelligence 1, 2 (April), 145–153.

KASIK, D. J., BUXTON, W., AND FERGUSON, D. R. 2005. Ten CAD model challenges. IEEE Computer Graphics and Applications 25, 2, 81–92.

KATZ, S., AND SEDERBERG, T. W. 1988. Genus of the intersection curve of two rational surface patches. Computer Aided Geometric Design 5, 253–258.

KRISHNAN, S., AND MANOCHA, D. 1996. Efficient representations and techniques for computing B-reps of CSG models with NURBS primitives. In Proc. of CSG'96, 101–122.

KRISHNAN, S., AND MANOCHA, D. 1997. An efficient surface intersection algorithm based on lower-dimensional formulation. ACM Transactions on Graphics 16, 1 (Jan.), 74–106.

KRISHNAN, S., MANOCHA, D., GOPI, M., AND KEYSER, J. 2001. Boole: A boundary evaluation system for Boolean combinations of sculptured solids. International Journal on Computational Geometry and Applications 11, 1, 105–144.
KRISTJANSSON, D., BIERMANN, H., AND ZORIN, D. 2001. Approximate Boolean operations on free-form solids. In Proceedings of ACM SIGGRAPH 2001, E. Fiume, Ed., Computer Graphics Proceedings, Annual Conference Series, 185–194.

KUMAR, S. 1996. Interactive rendering of parametric spline surfaces. PhD thesis, The University of North Carolina at Chapel Hill.

LITKE, N., LEVIN, A., AND SCHRÖDER, P. 2001. Trimming for subdivision surfaces. Computer Aided Geometric Design 18, 5 (June), 463–481.

LOOP, C. 2004. Second order smoothness over extraordinary vertices. In Eurographics / ACM SIGGRAPH Symposium on Geometry Processing, 165–174.

MORETON, H. 2001. Watertight tessellation using forward differencing. In HWWS '01: Proceedings of the ACM SIGGRAPH/EUROGRAPHICS Workshop on Graphics Hardware, ACM, New York, NY, USA, 25–32.

MÜLLER, K., REUSCHE, L., AND FELLNER, D. 2006. Extended subdivision surfaces: Building a bridge between NURBS and Catmull-Clark surfaces. ACM Transactions on Graphics 25, 2 (Apr.), 268–292.

NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY. 1999. Planning Report: Interoperability Cost Analysis of the US Automotive Supply Chain.

PATRIKALAKIS, N. M., AND MAEKAWA, T. 2002. Intersection problems. In Handbook of Computer Aided Geometric Design, North-Holland, G. Farin, J. Hoschek, and M.-S. Kim, Eds., 623–

PETERS, J. 2000. Patching Catmull-Clark meshes. In Proceedings of ACM SIGGRAPH 2000, Computer Graphics Proceedings, Annual Conference Series, 255–258.

SAMET, H. 1984. The quadtree and related hierarchical data structures. ACM Computing Surveys 16, 2, 187–260.

SEDERBERG, T. W., LI, X., LIN, H., AND FINNIGAN, G. T. Nonuniform NURBS. In preparation.

SEDERBERG, T., ANDERSON, D., AND GOLDMAN, R. 1984. Implicit representation of parametric curves and surfaces. Computer Vision, Graphics and Image Processing 28, 72–84.

SEDERBERG, T. W., ZHENG, J., SEWELL, D., AND SABIN, M. A. 1998. Non-uniform recursive subdivision surfaces.
In Proceedings of SIGGRAPH 1998, Computer Graphics Proceedings, Annual Conference Series, 387–394.

SEDERBERG, T. W., ZHENG, J., BAKENOV, A., AND NASRI, A. 2003. T-Splines and T-NURCCs. ACM Transactions on Graphics 22, 3 (July), 477–484.

SEDERBERG, T. W., CARDON, D. L., FINNIGAN, G. T., NORTH, N. S., ZHENG, J., AND LYCHE, T. 2004. T-spline simplification and local refinement. ACM Transactions on Graphics 23, 3.

SINGH, K., AND FIUME, E. L. 1998. Wires: A geometric deformation technique. In Proceedings of SIGGRAPH 1998, Computer Graphics Proceedings, Annual Conference Series, 405–

SONG, Q., AND WANG, J. 2007. Generating Gn parametric blending surfaces based on partial reparameterization of base surfaces. Computer-Aided Design 39, 11, 953–963.

SONG, X., SEDERBERG, T. W., ZHENG, J., FAROUKI, R. T., AND HASS, J. 2004. Linear perturbation methods for topologically consistent representations of free-form surface intersections. Computer Aided Geometric Design 21, 3, 303–319.

STAM, J. 1998. Exact evaluation of Catmull-Clark subdivision surfaces at arbitrary parameter values. In Proceedings of SIGGRAPH 1998, Computer Graphics Proceedings, Annual Conference Series, 395–404.

WANG, W., POTTMANN, H., AND LIU, Y. 2006. Fitting B-spline curves to point clouds by curvature-based squared distance minimization. ACM Transactions on Graphics 25, 2 (Apr.), 214–238.
N Queens - SICStus Prolog

10.35.12.2 N Queens

The problem is to place N queens on an NxN chess board so that no queen is threatened by another queen. The variables of this problem are the N queens. Each queen has a designated row, and the problem is to select a column for it.

The main constraint of the problem is that no queen threatens another. This is encoded by the no_threat/3 constraint, which holds between all pairs (X,Y) of queens. It could be defined as:

     no_threat(X, Y, I) :-
             X #\= Y,
             X+I #\= Y,
             X-I #\= Y.

However, this formulation introduces new temporary domain variables and creates twelve fine-grained indexicals. Worse, the disequalities only maintain bound-consistency and so may miss some opportunities for pruning elements in the middle of domains. A better idea is to formulate no_threat/3 as an FD predicate with two indexicals, as shown in the program below. This constraint will not fire until one of the queens has been assigned (the corresponding indexical does not become monotone until then). Hence, the constraint is still not as strong as it could be. For example, if the domain of one queen is 2..3, it will threaten any queen placed in column 2 or 3 on an adjacent row, no matter which of the two open positions is chosen for the first queen. The commented-out formulation of the constraint captures this reasoning, and illustrates the use of the unionof/3 operator. This stronger version of the constraint indeed gives less backtracking, but is computationally more expensive and does not pay off in terms of execution time, except possibly for very large chess boards. It is clear that no_threat/3 cannot detect any incompatible values for a queen with a domain of size greater than three. This observation is exploited in the third version of the constraint. The first-fail principle is appropriate in the enumeration part of this problem.

     :- use_module(library(clpfd)).
     
     queens(N, L, LabelingType) :-
             length(L, N),
             domain(L, 1, N),
             constrain_all(L),
             labeling(LabelingType, L).
     constrain_all([]).
     constrain_all([X|Xs]) :-
             constrain_between(X, Xs, 1),
             constrain_all(Xs).
     
     constrain_between(_X, [], _N).
     constrain_between(X, [Y|Ys], N) :-
             no_threat(X, Y, N),
             N1 is N+1,
             constrain_between(X, Ys, N1).
     
     % version 1: weak but efficient
     no_threat(X, Y, I) +:
             X in \({Y} \/ {Y+I} \/ {Y-I}),
             Y in \({X} \/ {X+I} \/ {X-I}).
     
     % version 2: strong but very inefficient
     no_threat(X, Y, I) +:
             X in unionof(B,dom(Y),\({B} \/ {B+I} \/ {B-I})),
             Y in unionof(B,dom(X),\({B} \/ {B+I} \/ {B-I})).
     
     % version 3: strong but somewhat inefficient
     no_threat(X, Y, I) +:
             X in (4..card(Y)) ? (inf..sup) \/
                  unionof(B,dom(Y),\({B} \/ {B+I} \/ {B-I})),
             Y in (4..card(X)) ? (inf..sup) \/
                  unionof(B,dom(X),\({B} \/ {B+I} \/ {B-I})).

     | ?- queens(8, L, [ff]).
     L = [1,5,8,6,3,7,2,4]
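Outside of CLP(FD), the no-threat condition (distinct columns, and distinct diagonals for queens I rows apart) can be checked directly. A small Python sketch, ours rather than part of the SICStus distribution, that verifies the solution printed above:

```python
from itertools import combinations

def no_threat(x, y, i):
    """Queens in columns x and y, i rows apart, do not attack each other."""
    return x != y and x + i != y and x - i != y

def is_solution(cols):
    """cols[r] is the column (1-based) of the queen on row r."""
    return all(no_threat(cols[r1], cols[r2], r2 - r1)
               for r1, r2 in combinations(range(len(cols)), 2))

print(is_solution([1, 5, 8, 6, 3, 7, 2, 4]))  # True
print(is_solution([1, 2, 8, 6, 3, 7, 5, 4]))  # False: rows 1 and 2 share a diagonal
```

This mirrors the first clause of no_threat/3: the three disequalities rule out the shared column and the two shared diagonals.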
Dynamics of the western United States

The vertically averaged deviatoric stress tensor field within the western United States was determined using topographic data, geoid data, recent GPS observations, and strain rate magnitudes and styles from Quaternary faults. Gravitational potential energy (GPE) differences control the large fault-normal compression on the California coast. Deformation in the Basin and Range is driven in part by gravitational potential energy differences, but extension directions there are modified by plate interaction stresses. The California shear zone has a relatively low vertically averaged viscosity of about 10^21 Pascal seconds, while the Basin and Range has a higher vertically averaged viscosity of 10^22 Pascal seconds.

[Figure: A self-consistent, continuous velocity field solution (black arrows) determined by Shen-Tu et al. [1999] using GPS and VLBI data (red arrows) [Bennett et al., 1999; Ma and Ryan, 1998; Thatcher et al., 1999], Quaternary fault data [Jennings, 1994; Peterson and Wesnousky, 1994], and imposed NUVEL-1A plate motion [DeMets et al., 1994]. Ellipses represent a 95% confidence limit. Blue dots represent seismicity recorded from 1850-1998. (GV = Great Valley, B&R = Basin and Range, CP = Colorado Plateau, SAF = San Andreas Fault, SN = Sierra Nevada, GB = Great Basin.) Longitude and latitude are given in degrees west and north.]

[Figure: The minimal root-mean-squared deviatoric stress field determined from GPE variations, calculated assuming Airy compensation of topography. Colors represent delta-GPE values, sigma_zz, relative to a column of lithosphere at sea level. Tensional stresses are shown as open white principal axes and compressional stresses as black principal axes.]

[Figure: Stress field boundary conditions. The analog motion (open arrows) associated with these boundary conditions has a PA-NA pole approximately 10 degrees west of the NUVEL-1A [DeMets et al., 1994] PA-NA pole.]
Tensional stresses are shown as open white principal axes and compressional stresses as black principal axes.

[Figure: The total vertically averaged (over L = 100 km) deviatoric stress field: the sum of the stresses due to potential energy variations and plate interaction shown above. Tensional stresses are shown as open white principal axes and compressional stresses as black principal axes.]

[Figure: The self-consistent flow field determined from strain rates calculated by scaling the total stress tensor field (above) by the inverse of viscosity (below) for all areas east of the San Andreas Fault.]

[Figure: The total deviatoric stress field determined from the sum of stresses due to GPE variations estimated using the filtered GEOID96 and the corresponding best-fit stress field boundary conditions associated with PA-NA plate interaction. Tensional stresses are shown as open white principal axes and compressional stresses as black principal axes.]

[Figure: The vertically averaged effective viscosity (over L = 100 km) for the western US, determined by dividing the magnitude of the total deviatoric stress (above) by the magnitude of the strain rate for each grid area determined from the self-consistent kinematic model of Shen-Tu et al. [1999].]

last edited 02/21/00 by L. Flesch
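The effective-viscosity estimate in the last caption is a ratio of magnitudes. A tiny Python sketch with representative, made-up values (not the study's actual grid data) shows how figures on the order of 10^21 and 10^22 Pascal seconds arise:

```python
def effective_viscosity(stress_pa, strain_rate_per_s):
    """Effective viscosity (Pa s) as the magnitude of vertically averaged
    deviatoric stress divided by the magnitude of strain rate, as in the
    caption above. Input values here are illustrative only."""
    return stress_pa / strain_rate_per_s

# ~10 MPa deviatoric stress over a fast-straining shear zone (1e-14 /s)
print(effective_viscosity(1e7, 1e-14))   # ~1e21 Pa s, shear-zone scale
# Same stress over a slower-straining region (1e-15 /s)
print(effective_viscosity(1e7, 1e-15))   # ~1e22 Pa s, Basin and Range scale
```

The point of the sketch is simply that, for comparable stress levels, an order-of-magnitude difference in strain rate maps directly to an order-of-magnitude difference in effective viscosity.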
Excel 2007

How to use Solver in Excel 2007: First check whether Solver is installed on your computer by clicking on 'Data' in the menu bar. You will see 'Solver' under Data Analysis on the right. If it is not installed, click on the 'Office button', then 'Excel Options' at the bottom of the window, next click on 'Add-ins' and follow the instructions to add 'Solver'.

Solver in Excel is a what-if analysis tool that finds the optimal value of a target cell by changing values in the cells used to calculate the target cell. Let's see how we can use the Solver tool to solve a practical business problem like running a cybercafe. We take a loan from the bank and purchase computers, furniture and Uninterruptible Power Supply (UPS) equipment. Using the PMT function we can calculate the amount of the monthly instalments we have to repay to our bank. Our regular monthly expenses include salaries, electricity, maintenance, internet charges to our ISP, telephone costs, advertisement costs and rent. Our rented accommodation can easily accommodate 24 computers but we start with 12. Based on our costs we can calculate the minimum amount we should charge the customer to break even; that is, the point at which our business makes neither a profit nor a loss. Now, using Solver and a recce of what other cybercafes are charging per hour, we can do a what-if analysis.

• Click on 'Data'.
• Select 'Solver'.
• A new window opens. Now define the target cell (D15). Set its value equal to a certain amount like 100000 or 70000. You can also set it to minimum or maximum; in this example it doesn't make sense to do that. In cases where you wish to minimize costs, such as delivery costs, you could use this option. You could also set the value equal to 0. This would give you the break-even point, where the monthly expenses equal the monthly earnings. In certain what-if analyses you could set the value to maximum if you wanted to maximize, let's say, your profit.
But as indicated, this depends on the problem.
• Next define the cells that will affect the target cell (B15, B6).
• Then define the constraints, like maximum and minimum hours of work and maximum and minimum number of computers. Also ensure that the number of computers is an integer, because we cannot have 1.32 computers! We also define the minimum and maximum amount that we can charge per hour for cybercafe use, based on a recce.
• Then we click on Solve. If Solver finds a solution, we can finally click on OK to keep the solution, or Cancel to find another solution based on different inputs.

More on Solver:
• Interesting books on Solver for reference
• How to use Solver to optimize your interest earnings
• Using Solver for financial planning
• Sensitivity analysis using the scenario manager in Solver
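The PMT calculation and the break-even rate mentioned above can also be reproduced outside Excel. A rough Python sketch, with illustrative figures only (the amounts and the cell values are made up, not taken from a real worksheet):

```python
def pmt(rate, nper, pv):
    """Monthly instalment for a loan of pv repaid over nper periods at a
    per-period interest rate, following the standard annuity formula that
    Excel's PMT uses (returned here as a positive payment amount)."""
    if rate == 0:
        return pv / nper
    return pv * rate / (1 - (1 + rate) ** -nper)

def break_even_rate(monthly_costs, computers, hours_per_day, days=30):
    """Hourly charge at which monthly earnings equal monthly expenses."""
    return monthly_costs / (computers * hours_per_day * days)

loan_payment = pmt(0.12 / 12, 36, 300000)   # 12% yearly rate, 3-year loan
fixed_costs = 45000                         # salaries, rent, ISP, power...
total = loan_payment + fixed_costs
print(round(loan_payment, 2))               # monthly instalment
print(round(break_even_rate(total, 12, 10), 2))  # charge per computer-hour
```

Solver's "set target cell to 0" scenario corresponds to finding the hourly rate at which earnings minus `total` is zero, which `break_even_rate` computes in closed form here.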