PageRank with Apache Hama

Some days ago I read in the Hama-dev mailing list that the Nutch project wants a PageRank implementation:

Hi all, Anyone interested in PageRank and collaborating w/ Nutch project? :-)

So I thought that I could do this. I have already implemented PageRank with MapReduce. Why don't we go for a BSP version? :D This is basically what this blog post is about. Let's make a few assumptions:
• We have an adjacency list (web graphs are sparse) of webpages and their unweighted edges
• A working partitioning like here. (You don't have to implement it, but you should know how it works)
• We have read the Pregel paper (or at least the summary)
• Familiarity with PageRank

Web Graph Layout

This is the adjacency list. On the leftmost side is the vertexID of the webpage, followed by its outlinks, separated by tabs. Pretty printed, this is a graph like this: I have colored the vertices by their incoming links; the vertex with the most in-links is the brightest, a vertex with few or no in-links is darker. We can already guess that vertex 2 should get a high PageRank, that 4, 5 and 6 should get a more or less equal rank, and so on.

Short summary of the algorithm

I am now referring to the Google Pregel paper. At first we need a modelling class that represents a vertex and holds its own tentative PageRank. In my case we store the tentative PageRank along with the id of a vertex in a HashMap. In the first superstep we set the tentative PageRank to 1 / n, where n is the number of vertices in the whole graph. In each of the following steps we send, for every vertex, its PageRank divided by the number of its outgoing edges to all adjacent vertices in the graph. So from the second step on we receive messages carrying the tentative PageRank of adjacent vertices. Now we sum up these messages for each vertex "i" and use this formula:

P(i) = 0.15/NumVertices() + 0.85 * sum

This is the new tentative PageRank for a vertex "i".
I'm not sure whether NumVertices() returns the number of all vertices or just the number of adjacent vertices. I'll assume that it is the count of all vertices in the graph; this would then be a constant. So the new rank is this constant term plus the damping factor multiplied by the sum of the received tentative ranks of the adjacent vertices. We loop these steps until convergence to a given error is achieved. This error is just the sum of the absolute differences between the old tentative PageRank and the new one of each vertex. Alternatively, we can break once we reach an iteration count that is high enough. We store the old PageRank as a copy of the current PageRank (a simple HashMap). The error will thus be a local variable that we sync with a master task, which averages the local errors and broadcasts the result back to all the slaves. Let's look at the fields we need:

private static int MAX_ITERATIONS = 30;
// alpha is 0.15/NumVertices()
private static double ALPHA;
private static int numOfVertices;
private static double DAMPING_FACTOR = 0.85;
// this is the error we want to achieve
private static double EPSILON = 0.001;

HashMap<Integer, List<Integer>> adjacencyList = new HashMap<Integer, List<Integer>>();
// normally this is stored by a vertex, but I don't want to create a new
// model for it
HashMap<Integer, Double> tentativePagerank = new HashMap<Integer, Double>();
// backup of the last pagerank to determine the error
HashMap<Integer, Double> lastTentativePagerank = new HashMap<Integer, Double>();

Keep in mind that every task just has a subgraph of the whole graph, so these structures hold just a chunk of the PageRank.
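The update loop described above can be sketched as follows (the post's actual code is Java on Hama; this Python sketch with an illustrative three-vertex graph just mirrors the formula and the convergence test):

```python
# Minimal sketch of the PageRank loop described above (illustrative, not
# the post's Hama/Java code). adjacency maps a vertex to its outlinks.
def pagerank(adjacency, damping=0.85, epsilon=0.001):
    n = len(adjacency)
    alpha = (1.0 - damping) / n               # 0.15 / NumVertices()
    ranks = {v: 1.0 / n for v in adjacency}   # first superstep: 1/n each
    error = 1.0
    while error >= epsilon:
        sums = {v: 0.0 for v in adjacency}
        # each vertex sends rank / out-degree to its adjacent vertices
        for v, outlinks in adjacency.items():
            share = ranks[v] / len(outlinks)
            for w in outlinks:
                sums[w] += share
        # P(i) = alpha + damping * sum of incoming messages
        new_ranks = {v: alpha + damping * sums[v] for v in adjacency}
        # error = sum of absolute differences to the previous ranks
        error = sum(abs(new_ranks[v] - ranks[v]) for v in adjacency)
        ranks = new_ranks
    return ranks
```

On a graph without dangling vertices the ranks keep summing to 1, which matches the results reported further below.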
Let's get into the init phase of the BSP:

public void bsp(BSPPeerProtocol peer) throws IOException, KeeperException,
    InterruptedException {
  fs = FileSystem.get(getConf());
  String master = conf.get(MASTER_TASK);
  // setup the datasets
  adjacencyList = mapAdjacencyList(getConf(), peer);
  // init the pageranks to 1/n where n is the number of all vertices
  for (int vertexId : adjacencyList.keySet())
    tentativePagerank.put(vertexId, Double.valueOf(1.0 / numOfVertices));

Like we said, we read the data chunk from HDFS and set the tentative pagerank to 1/n.

Main Loop

// while the error does not converge against epsilon, do the pagerank stuff
double error = 1.0;
int iteration = 0;
// if MAX_ITERATIONS is set to 0, ignore the iterations and just go
// with the error
while ((MAX_ITERATIONS > 0 && iteration < MAX_ITERATIONS) || error >= EPSILON) {
  if (iteration >= 1) {
    // copy the old pagerank to the backup
    lastTentativePagerank = new HashMap<Integer, Double>(tentativePagerank);
    // sum up all incoming messages for a vertex
    HashMap<Integer, Double> sumMap = new HashMap<Integer, Double>();
    IntegerDoubleMessage msg = null;
    while ((msg = (IntegerDoubleMessage) peer.getCurrentMessage()) != null) {
      if (!sumMap.containsKey(msg.getTag())) {
        sumMap.put(msg.getTag(), msg.getData());
      } else {
        sumMap.put(msg.getTag(), msg.getData() + sumMap.get(msg.getTag()));
      }
    }
    // pregel formula:
    // ALPHA = 0.15 / NumVertices()
    // P(i) = ALPHA + 0.85 * sum
    for (Entry<Integer, Double> entry : sumMap.entrySet()) {
      tentativePagerank.put(entry.getKey(),
          ALPHA + (entry.getValue() * DAMPING_FACTOR));
    }
    // determine the error and send this to the master
    double err = determineError();
    error = broadcastError(peer, master, err);
  }
  // in every step send the tentative pagerank of a vertex to its
  // adjacent vertices
  for (int vertexId : adjacencyList.keySet())
    sendMessageToNeighbors(peer, vertexId);
  iteration++;
}

I guess this is self-explaining. The function broadcastError() sends the determined error to a master task, which averages all incoming errors and broadcasts the result back to the slaves (similar to aggregators in the Pregel paper).
Let's take a quick look at the determineError() function:

private double determineError() {
  double error = 0.0;
  for (Entry<Integer, Double> entry : tentativePagerank.entrySet()) {
    error += Math.abs(lastTentativePagerank.get(entry.getKey())
        - entry.getValue());
  }
  return error;
}

Like I described in the summary, we just sum up the errors, i.e. the absolute differences between the old and the new rank of each vertex. Finally we are able to run this and receive a fraction between 0 and 1 for each site that represents its PageRank. I ran this with a convergence error of 0.000001 and a damping factor of 0.85. It took about 17 iterations.

------------------- RESULTS --------------------
2 | 0.33983048615390526
4 | 0.21342628110369394
6 | 0.20495452025114747
5 | 0.1268811487940641
3 | 0.0425036157080356
1 | 0.0425036157080356
7 | 0.02990033228111791

The sum of all ranks is about 1.0, which is correct. Note that the output of the job is not guaranteed to be sorted; I sorted it here to give you a better view. We see that our initial guess of the PageRanks was quite good. I think this is all. If you are interested in testing or running this, feel free to do so. The class and test data are located in my Summer of Code repository under: The class's name is Just execute the main method, it will run a local multithreaded BSP on your machine. Star this project and vote for my GSoC task. :) Thank you.
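As a quick sanity check on the claim that the ranks sum to about 1.0, the printed values from the results table can simply be added up:

```python
# Ranks copied verbatim from the RESULTS table above.
results = {
    2: 0.33983048615390526,
    4: 0.21342628110369394,
    6: 0.20495452025114747,
    5: 0.1268811487940641,
    3: 0.0425036157080356,
    1: 0.0425036157080356,
    7: 0.02990033228111791,
}
total = sum(results.values())  # should be (numerically) 1.0
```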
Scalar (dot) product

Multiplication of a vector, A, by a scalar quantity, a, results in another vector, B. The magnitude of the resulting vector is equal to the product of the magnitude of the vector and the scalar quantity. The direction of the resulting vector, however, is the same as that of the original vector (see the figure below).

Figure 2: Multiplication with scalar

We have already made use of this type of multiplication intuitively in expressing a vector in component form:

A = A_x i + A_y j + A_z k

In this vector representation, each component vector is obtained by multiplying the scalar component with the unit vector. As the unit vector has a magnitude of 1 with a specific direction, the resulting component vector retains the magnitude of the scalar component, but acquires the direction of the unit vector:

A_x = A_x i
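Component-wise, scalar multiplication can be sketched as follows (illustrative code, not from the source; the vector is represented by its scalar components along i, j, k):

```python
# Multiply a vector A = (A_x, A_y, A_z) by a scalar a, component by
# component. The direction (the unit vectors i, j, k) is unchanged; only
# each scalar component is scaled.
def scale(a, A):
    return tuple(a * comp for comp in A)
```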
Beth is a photographer taking pictures outdoors during the evening. She knows that the later it is when she takes a picture, the longer the shutter has to stay open to get enough light for the picture.
A. The length of time the shutter needs to stay open is a function of how strong her flash is.
B. How late it is in the evening is a function of the length of time the shutter needs to stay open.
C. The length of time the shutter needs to stay open is a function of how late in the evening it is.
Distributing black and white marbles...
January 4th 2009, 07:40 AM

There are six identical white marbles and eight identical black marbles. In how many ways can we give these marbles to 3 boys, if every boy must get at least one marble, and we don't necessarily have to give them all of the marbles? This problem should be solved using the inclusion-exclusion principle.

I tried to solve this by defining three sets $A_1, A_2, A_3$ where each set $A_i$ contains all the possible marble combinations (or sets of marbles) the $i$-th boy can get, and then counting the number of ordered triples $(x,y,z)$ where $x \in A_1, y \in A_2, z \in A_3$. However, this approach is wrong, because with the sets thus defined it's possible to have a triplet $(x,y,z)$ representing the first boy getting e.g. 8 black marbles, the second boy getting 7 black marbles and the third boy 6 black marbles, which is not possible as there are only 8 black marbles... So I'd be immensely grateful for any help/hints/insights you might have :D

January 4th 2009, 10:46 AM

There are six identical white marbles and eight identical black marbles. In how many ways can we give these marbles to 3 boys, if every boy must get at least one marble, and we don't necessarily have to give them all of the marbles? This problem should be solved using the inclusion-exclusion principle.

"Inclusion-exclusion" means counting too many ways of giving the marbles, and then subtracting those that are irrelevant (and possibly iterating this process to find the number of irrelevant ways). The idea in "counting too many ways" is to reduce to simpler combinatorics. In your problem, you could first find in how many ways at most six white marbles can be given to 3 boys, if the boys may get any number of marbles (possibly 0). Then the same for 8 black ones. At that point, you should know in how many ways at most 6 white and 8 black marbles can be given to 3 boys, if the boys may get any number of marbles.
By doing this, we count a few cases too many: we need each boy to get at least one marble, so we should subtract the cases where (at least) one boy gets no marble. You should compute the number of such cases. I'll let you try that. Don't hesitate to ask for help if you're stuck.

January 9th 2009, 06:52 AM

OK, so I'll try: In how many ways can we give at most six indistinguishable white marbles to three boys? If we give them 0 marbles, there is only 1 way to do that. If we are to give them 1 marble, there are 3 ways to give it to any of the three boys. If we want to give 2 marbles to 3 boys, the number of ways to do that is equivalent to the number of ways in which we can distribute 2 indistinguishable balls among 3 distinguishable boxes, which is $\binom{3-1+2}{2}$. And, because the number of ways in which we can distribute k (indistinguishable) marbles among n (different) boys is $\binom{n-1+k}{k}$, it finally follows that we can distribute at most 6 white marbles among 3 boys in $\sum_{k=0}^{6}\binom{3-1+k}{k}=\sum_{k=0}^{6}\binom{2+k}{k}=84$ ways (if I'm not mistaken). Similarly, we can distribute at most 8 black marbles among the three boys in $\sum_{k=0}^{8}\binom{2+k}{k}=165$ ways.

Would it be correct to say that the total number of ways we can distribute, first, at most six white marbles, and then, at most eight black marbles, is 84*165=13860?

If it is correct, then we have the following problem: we must ultimately find the number of ways of distributing 6 white and 8 black marbles to the 3 boys in such a way that every boy gets at least 1 marble (either black or white), but there is also the condition that we do not have to distribute all of the marbles we got. Therefore, we must give away at least 3 marbles (either black or white), so that every boy gets one, and at most all 14 marbles. I know we should subtract some quantities from the total number of distributions (13860), but am not really sure how precisely to go about that... Many thanks for any help!
January 11th 2009, 05:29 AM

OK, so I'll try: In how many ways can we give at most six indistinguishable white marbles to three boys? it finally follows that we can distribute at most 6 white marbles among 3 boys in $\sum_{k=0}^{6}\binom{3-1+k}{k}=\sum_{k=0}^{6}\binom{2+k}{k}=84$ ways (if I'm not mistaken). Similarly, we can distribute at most 8 black marbles among the three boys in $\sum_{k=0}^{8}\binom{2+k}{k}=165$ ways.

This is correct. This can also be obtained more quickly: Suppose we want the number of possibilities to give at most $n$ indistinguishable marbles to $p$ boys. This is easily seen to be equivalent to adding $p$ "sticks" to the sequence $1,\ldots,n$: the sticks split the sequence into $p+1$ groups, where the first $p$ groups give the numbers of marbles received by the boys and the last group is left undistributed. For instance, if p=3 and n=4, "1|23||4" means giving 1 marble to the first boy, 2 to the second, and no marble to the third. "|||1234" means giving no marble to any boy. "1|2|3|4" means giving 1 to each of them. "1234|||" means giving four marbles to the first, and none to the others. Etc. Now, we can compute directly the number of possibilities: this is the number of locations of the $p$ sticks among the $n+p$ positions ($n$ for the numbers, $p$ for the sticks). Finally, the number of ways to give at most $n$ marbles to $p$ boys is ${n+p\choose p}$. You can check it gives the same answers as yours.

Would it be correct to say that the total number of ways we can distribute, first, at most six white marbles, and then, at most eight black marbles, is 84*165=13860?

Yes, it is. That was the point of this first computation: there is no correlation between the black and white marbles for the moment, so we just have to multiply the numbers of possibilities.
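The stick argument can be checked numerically: summing the per-$k$ counts from the earlier post must equal the single binomial coefficient ${n+p\choose p}$. A small illustrative check:

```python
from math import comb

# Number of ways to give at most n indistinguishable marbles to p boys:
# either sum over k = 0..n of C(p-1+k, k) (the earlier post), or the
# single coefficient C(n+p, p) (the sticks argument).
def ways_at_most(n, p):
    return sum(comb(p - 1 + k, k) for k in range(n + 1))
```

For the thread's values this gives 84 (n=6, p=3) and 165 (n=8, p=3), matching both posts.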
If it is correct, then we have the following problem: we must ultimately find the number of ways of distributing 6 white and 8 black marbles to the 3 boys in such a way that every boy gets at least 1 marble (either black or white), but there is also the condition that we do not have to distribute all of the marbles we got. Therefore, we must give away at least 3 marbles (either black or white), so that every boy gets one, and at most all 14 marbles. I know we should subtract some quantities from the total number of distributions (13860), but am not really sure how precisely to go about that...

Like I wrote in my first post, 13860 is the number of possibilities where each boy gets any number of marbles, so you must subtract the number of possibilities where (at least) one boy gets no marble. Then the remainder will be the number of possibilities where every boy gets at least one marble. And this is what we need. So we must find how to give at most 6 white and 8 black marbles to 3 boys, in such a way that at least one of the boys gets no marble (of any colour). This can be done by inclusion-exclusion again: as a summary, we want to distribute the marbles with the restriction $(\geq 1,\geq 1,\geq 1)$. We can write schematically (the order of the indices is not important): $(\geq 1,\geq 1,\geq 1)=(\ast,\ast,\ast)-(0,\ast,\ast)+(0,0,\ast)-(0,0,0)$ where $\ast$ means " $\geq 0$". This can be described in words, like counting too many cases and subtracting the superfluous terms, but this becomes somewhat painful to write... In a more rigorous way, if $A_i$ is the set of the possibilities where the boy $i$ ( $\in\{1,2,3\}$) gets at least one marble, then we want the cardinality of $A=A_1\cap A_2\cap A_3$.
To that aim, we write, using inclusion-exclusion principle, and denoting by $\Omega$ the set of the possibilities without restriction (so that $|\Omega|=13860$) and by $A^c$ the complement of a set $A$ in $\Omega$: $|A| = |\Omega|-|A^c|=|\Omega|-|A_1^c\cup A_2^c\cup A_3^c|$ $=|\Omega| - (|A_1^c|+|A_2^c|+|A_3^c|)+(|A_1^c\cap A_2^c|+|A_1^c\cap A_3^c|+|A_2^c\cap A_3^c|)-|A_1^c\cap A_2^c\cap A_3^c|$ Due to the symmetry, the formula becomes: $|A|=|\Omega| - 3 |A_1^c| + 3 |A_1^c\cap A_2^c | - |A_1^c\cap A_2^c\cap A_3^c|$. This is exactly what I wrote above with $\ast$ and $0$: $A_1^c=(0,\ast,\ast)$ (the contrary to $A_1$ is: the first boy gets no marble), and so on. So you are reduced to computing the cardinality of: - first $A_1^c=(0,\ast,\ast)$. This is just like $|\Omega|$ but with only 2 boys getting the marbles, so this is ${6+2\choose 2}{8+2\choose 2}$. - then $A_1^c\cap A_2^c=(0,0,\ast)$. This is just giving marbles to only one boy, so this is ${6+1\choose 1}{8+1\choose 1}=7\times 9$. - finally $(A_1^c\cap A_2^c\cap A_3^c)=(0,0,0)$. This is just giving no marble to the boys, so this is 1. I assumed you knew the inclusion-exclusion formula; otherwise there must be good references on the internet. This is a generalization of $|A\cup B|=|A|+|B|-|A\cap B|$ to three (or more) subsets. January 11th 2009, 07:51 AM Thanks a million, Laurent! Your post has been most helpful and illuminating.
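Putting the whole thread together, the inclusion-exclusion value can be cross-checked against a brute-force enumeration (a sketch; all the numbers follow the posts above):

```python
from math import comb

# 6 white and 8 black marbles, 3 boys, each boy gets at least one marble,
# and not all marbles need to be given away.
def by_inclusion_exclusion():
    omega = comb(9, 3) * comb(11, 3)       # |Omega| = 84 * 165 = 13860
    one_empty = comb(8, 2) * comb(10, 2)   # (0,*,*): 28 * 45 = 1260
    two_empty = comb(7, 1) * comb(9, 1)    # (0,0,*): 7 * 9 = 63
    return omega - 3 * one_empty + 3 * two_empty - 1  # minus (0,0,0)

def by_brute_force():
    count = 0
    for w1 in range(7):                    # white marbles per boy
        for w2 in range(7 - w1):
            for w3 in range(7 - w1 - w2):
                for b1 in range(9):        # black marbles per boy
                    for b2 in range(9 - b1):
                        for b3 in range(9 - b1 - b2):
                            if w1 + b1 >= 1 and w2 + b2 >= 1 and w3 + b3 >= 1:
                                count += 1
    return count
```

Both give 10268, i.e. 13860 - 3*1260 + 3*63 - 1.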
shortest path

, 1992 | Cited by 116 (1 self)
The grammar problem, a generalization of the single-source shortest-path problem introduced by Knuth, is to compute the minimum-cost derivation of a terminal string from each non-terminal of a given context-free grammar, with the cost of a derivation being suitably defined. This problem also subsumes the problem of finding optimal hyperpaths in directed hypergraphs (under varying optimization criteria) that has received attention recently. In this paper we present an incremental algorithm for a version of the grammar problem. As a special case of this algorithm we obtain an efficient incremental algorithm for the single-source shortest-path problem with positive edge lengths. The aspect of our work that distinguishes it from other work on the dynamic shortest-path problem is its ability to handle "multiple heterogeneous modifications": between updates, the input graph is allowed to be restructured by an arbitrary mixture of edge insertions, edge deletions, and edge-length changes.

- THEORETICAL COMPUTER SCIENCE, 1996 "... ..."

, 2005 | Cited by 28 (3 self)
Heuristic search methods promise to find shortest paths for path-planning problems faster than uninformed search methods.
Incremental search methods, on the other hand, promise to find shortest paths for series of similar path-planning problems faster than is possible by solving each path-planning problem from scratch. In this article, we develop Lifelong Planning A* (LPA*), an incremental version of A* that combines ideas from the artificial intelligence and the algorithms literature. It repeatedly finds shortest paths from a given start vertex to a given goal vertex while the edge costs of a graph change or vertices are added or deleted. Its first search is the same as that of a version of A* that breaks ties in favor of vertices with smaller g-values, but many of the subsequent searches are potentially faster because it reuses those parts of the previous search tree that are identical to the new one. We present analytical results that demonstrate its similarity to A* and experimental results that demonstrate its potential advantage in two different domains if the path-planning problems change only slightly and the changes are close to the goal.

- Transactions on Robotics | Cited by 21 (7 self)
Abstract—Mobile robots often operate in domains that are only incompletely known, for example, when they have to move from given start coordinates to given goal coordinates in unknown terrain. In this case, they need to be able to replan quickly as their knowledge of the terrain changes. Stentz' Focussed Dynamic A* (D*) is a heuristic search method that repeatedly determines a shortest path from the current robot coordinates to the goal coordinates while the robot moves along the path.
It is able to replan faster than planning from scratch since it modifies its previous search results locally. Consequently, it has been extensively used in mobile robotics. In this article, we introduce an alternative to D* that determines the same paths and thus moves the robot in the same way but is algorithmically different. D* Lite is simple, can be rigorously analyzed, is extendible in multiple ways, and is at least as efficient as D*. We believe that our results will make D*-like replanning methods even more popular and enable robotics researchers to adapt them to additional applications. Index Terms—A*, D* (Dynamic A*), navigation in unknown terrain, planning with the freespace assumption, replanning, search, sensor-based path planning.

Cited by 17 (5 self)
Agents operating in the real world often have limited time available for planning their next actions. Producing optimal plans is infeasible in these scenarios. Instead, agents must be satisfied with the best plans they can generate within the time available. One class of planners well-suited to this task are anytime planners, which quickly find an initial, highly suboptimal plan, and then improve this plan until time runs out. A second challenge associated with planning in the real world is that models are usually imperfect and environments are often dynamic. Thus, agents need to update their models and consequently plans over time. Incremental planners, which make use of the results of previous planning efforts to generate a new plan, can substantially speed up each planning episode in such cases.
In this paper, we present an A*-based anytime search algorithm that produces significantly better solutions than current approaches, while also providing suboptimality bounds on the quality of the solution at any point in time. We also present an extension of this algorithm that is both anytime and incremental. This extension improves its current solution while deliberation time allows and is able to incrementally repair its solution when changes to the world model occur. We provide a number of theoretical and experimental results and demonstrate the effectiveness of the approaches in a robot navigation domain involving two physical systems. We believe that the simplicity, theoretical properties, and generality of the presented methods make them well suited to a range of search problems involving large, dynamic graphs.

- AT&T Labs Research Technical Report, TD-5RJ8B, Florham Park, NJ, 2003 "... doi 10.1287/ijoc.1070.0231 ..."

, 2001 | Cited by 11 (0 self)
We propose an algorithm which reoptimizes shortest paths in a very general situation, that is, when any subset of arcs of the input graph is affected by a change of the arc costs, which can be either lower or higher than the old ones. This situation is more general than the ones addressed in the literature so far.

, 1992
Cited by 3 (0 self)
An incremental algorithm (also called a dynamic update algorithm) updates the answer to some problem after an incremental change is made in the input. We examine methods for bounding the performance of such algorithms. First, quite general but relatively weak bounds are considered, along with a careful examination of the conditions under which they hold. Next, a more powerful proof method, the Incremental Relative Lower Bound is presented, along with its application to a number of important problems. We then examine an alternative approach, delta-analysis, which had been proposed previously, apply it to several new problems and show how it can be extended. For the specific problem of updating the transitive closure of an acyclic digraph, we present the first known incremental algorithm that is efficient in the delta-analysis sense. Finally, we criti...
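For concreteness, the static single-source shortest-path problem that these incremental methods build on can be sketched with a plain Dijkstra implementation (illustrative Python, positive edge lengths assumed):

```python
import heapq

# Plain (non-incremental) Dijkstra: dist maps each reachable vertex to
# its shortest distance from source. graph maps a vertex to a list of
# (neighbour, positive edge length) pairs.
def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already settled with a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

The incremental algorithms surveyed above avoid rerunning this from scratch after each edge insertion, deletion, or length change.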
Page:Popular Science Monthly Volume 70.djvu/83

THE VALUE OF SCIENCE

THUS I know how to recognize the identity of two points, the point occupied by A at the instant α and the point occupied by B at the instant β, but only on one condition, namely, that I have not budged between the instants α and β. That does not suffice for our object. Suppose, therefore, that I have moved in any manner in the interval between these two instants, how shall I know whether the point occupied by A at the instant α is identical with the point occupied by B at the instant β? I suppose that at the instant α, the object A was in contact with my first finger and that in the same way, at the instant β, the object B touches this first finger; but at the same time, my muscular sense has told me that in the interval my body has moved. I have considered above two series of muscular sensations S and S', and I have said it sometimes happens that we are led to consider two such series S and S' as inverse one of the other, because we have often observed that when these two series succeed one another our primitive impressions are reestablished. If then my muscular sense tells me that I have moved between the two instants α and β, but so as to feel successively the two series of muscular sensations S and S' that I consider inverses, I shall still conclude, just as if I had not budged, that the points occupied by A at the instant α and by B at the instant β are identical, if I ascertain that my first finger touches A at the instant α and B at the instant β. This solution is not yet completely satisfactory, as one will see. Let us see, in fact, how many dimensions it would make us attribute to space.
I wish to compare the two points occupied by A and B at the instants α and β, or (what amounts to the same thing since I suppose that my finger touches A at the instant α and B at the instant β) I wish to compare the two points occupied by my finger at the two instants α and β. The sole means I use for this comparison is the series Σ of muscular sensations which have accompanied the movements of my body between these two instants. The different imaginable series Σ form evidently a physical continuum of which the number of dimensions is very great. Let us agree, as I have done, not to consider as distinct the two series Σ and Σ + s + s', when s and s' are inverses one of the other in the sense above given to this word; in spite of this
Sine Word Problems

Practice law of sines word problems. Are you looking for sine word problems? TuLyn is the right place. We have tens of word problems on sine and hundreds on other math topics. Below is the list of all word problems we have on sine.

Sine Word Problems
• A 25 m tower standing vertically (#847) A 25 m tower standing vertically on a hillside casts a shadow down the hillside that is 27 m long. The angle at the tip of the shadow S, subtended by the tower, is 26...
• A ferris wheel is built such that (#3414) A ferris wheel is built such that the height (h) in feet above the ground of a seat on the wheel at time (t) in sec can be modeled by: H(t)=53+50sin(π/10t-π/2) Find the angular speed in radians and the linear speed in feet?
• A pig at Papas Barn just had a litter of piglets (#2809) A pig at Papas Barn just had a litter of piglets. The whole barn is about 12 miles long. In the middle of the night one piglet ran out of the barn and traveled down to the Bray Ranch, which is about 13.5 miles away. When viewed on the town's map the two barns make a 115 degree ...
• A satellite, travelling in a circular orbit (#253) A satellite, travelling in a circular orbit 700 km above Earth, will pass directly over a tracking station at noon. The satellite takes 95 mins to complete an orbit. a) If the antenna for a tracking station is aimed at an angle of elevation of 30 degrees, at what time will the satellite pass through the beam of the ...
• A ship at sea, the Admiral, spots two other ships (#3031) A ship at sea, the Admiral, spots two other ships, the Barstow and the Cauldrew, and measures the angle between them to be 65°. They radio the Barstow and by comparing known landmarks, the distance between the Admiral and the Barstow is found to be 236 meters. The Barstow reports an angle of 36° between the Admiral and the Cauldrew. To the nearest ...
• A straight road makes an angle of 15 degrees (#2414) A straight road makes an angle of 15 degrees with the horizontal. When the angle of elevation of the sun is 57 degrees, a vertical pole at the side of the road casts a shadow 75 feet long directly down the ... • A tree on a hillside casts a shadow (#1959) A tree on a hillside casts a shadow 215 ft down the hill. If the angle of inclination of the hillside is 22 degrees to the horizontal and the angle of elevation of the sun is 52 ... • An airplane is sighted at the same time (#54) An airplane is sighted at the same time by two ground observers who are 4 miles apart and both directly west of the airplane. They report the angles of elevation as 11° and 22°... • Coast Guard Station Able is located (#2381) Coast Guard Station Able is located 150 miles due south of Station Baker. A ship at sea sends an SOS call that is received by each station. The call to Station Able indicates that the ship is located N55°E; the call to Station Baker indicates that the ship is located S60°E. (a) How far is each station from the ship? (b) If a helicopter capable of flying 200 miles per hour is dispatched from the nearest station to the ... • Due to wind, a tree grew (#1110) Due to wind, a tree grew so that it leaned 8 degrees from the vertical. From a point on the ground 28 meters from the base of the tree, the angle of elevation to the top of the tree is 24.6 ... • Island A is 230 miles from island B (#3030) Island A is 230 miles from island B. A ship captain travels 270 miles from island A and then finds that he is off course and 180 miles from island B. What angle, in degrees, must he turn through to head straight for island ... • Law Of Sines (#6240) a pilot is flying over a straight highway. The angles of depression are 5 miles apart and are 32 degees and 48 ... • Law Of Sines (#6872) AB is a line 652 feet long on one bank of a stream, and C is a point on the opposite bank. A = 58 degrees, and B = 48 degrees ... 
• Law Of Sines (#8003) A rocket tracking station has two telescopes A and B placed 1.1 miles apart. The telescopes lock onto a rocket and transmit their angles of elevation to a computer after a rocket ... • Law Of Sines (#8022) A pilot is flying over a straight highway. He determines the angles of depression to two mileposts, 9 mi apart, to be 32° and 48°, as shown in the figure. Round your answers to the nearest tenth (a) Find the distance of the plane from point ... • Law Of Sines (#8162) A boat is sailing due east parallel to the shoreline at a speed of 10 miles per hour. At a given time the bearing to the light house is S 70 degrees W and 15 minutes later the bearing is S 63 ... • Law Of Sines (#8590) The pentagon has 5 side each of them are 921 feet ... • Law Of Sines (#9481) to find the distance from the house at A to the house at B, a surveyoor measures the anges BAC to be 40 degrees and then walks off a distance of 100 feet to C and measures the angle ACB to be 50 • Law Of Sines (#9857) A tree growing on a hillside casts a 157ft shadow straight down the hill. Find the vertical height of the tree if relative to the horizontal, the hill slopes 11... • Law Of Sines (#10520) A 25 m tower standing vertically on a hillside casts a shadow down the hillside that is 27 m long. The angle at the tip of the shadow S, subtended by the tower is 26. What is the angle of elevation of the sun at S • Law Of Sines (#10808) To find the length of the span of a proposed ski lift from A to B, a survey or measures the angle DAB to be 25 degrees and then walks off a distance of 1000ft to C and measures the angles ACB to 15 ... • Sine (#5747) bob and donna leave their home .they travel in opposite drections. bob travels 65 mph and donna travels 75 mph ... • Sine (#9299) a ship is sailing toward a small island 800 miles away. 
if the ship is 2 degrees of course, by how many miles will it miss he island • The course for a bike race follows (#3069) The course for a bike race follows a triangular shaped trail. The first leg is a straight 6 kilometre ride. After that, the biker must turn 283 degrees to the left and follow the train to the next turn. Turning 269 degrees again to the left will take the biker to the finish line, which is exactly where he ... • Two observers,who are 2 miles apart (#1172) Two observers,who are 2 miles apart on a horizontal plane,observe a balloon the same vertical plane with themselves. The angles of elevation are 50' and 65' respectively. Find the height of the balloon ... • Two people leave from the same point (#2378) Two people leave from the same point. Their paths diverge by 95 degrees. If one walks 5 miles and the other walks 3.5 ... • You need to determine the length (#697) You need to determine the length of a tunnel drilled through a lunar peak at the center of a crater. From a point on the crater floor, you run ropes to the ends of the tunnel. The first rope's distance covered is 24.5 meters and the second rope's distance is 21.2 meters. The two ropes meet at an angle of 68 degrees. a) What is the length of the ... Find word problems and solutions on trigonometric functions.
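As a worked illustration of the kind of law-of-sines setup these problems call for, here is a sketch of problem #54 (two observers 4 miles apart, both west of the airplane, angles of elevation 11° and 22°). The triangle decomposition in the comments is my own reading of the problem, not a provided solution:

```python
import math

def airplane_height(d, alpha_deg, beta_deg):
    """Height of an object sighted from two ground points d apart,
    at elevation angles alpha_deg (far observer) and beta_deg (near observer).

    In the triangle formed by the two observers and the airplane, the
    interior angles are alpha, 180 - beta, and beta - alpha, so by the
    law of sines the slant distance from the near observer to the plane
    is d * sin(alpha) / sin(beta - alpha)."""
    alpha = math.radians(alpha_deg)
    beta = math.radians(beta_deg)
    slant = d * math.sin(alpha) / math.sin(beta - alpha)
    # Height is the vertical component of the near observer's line of sight.
    return slant * math.sin(beta)

# Observers 4 miles apart, angles of elevation 11 and 22 degrees:
print(round(airplane_height(4, 11, 22), 2))  # -> 1.5 (miles)
```

The same result drops out of the right-triangle equations h = x·tan(22°) and h = (x + 4)·tan(11°), which is a useful cross-check.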
Behavioral Equivalence: a Unifying Concept for Initial and Final Specifications, 1987
Cited by 66 (17 self)
The properties of a simple and natural notion of observational equivalence of algebras and the corresponding specification-building operation are studied. We begin with a definition of observational equivalence which is adequate to handle reachable algebras only, and show how to extend it to cope with unreachable algebras and also how it may be generalised to make sense under an arbitrary institution. Behavioural equivalence is treated as an important special case of observational equivalence, and its central role in program development is shown by means of an example.

- Iowa State University, 2002
Cited by 5 (1 self)
Abstract. Using equational logic as a specification language, we investigate the proof theory of behavioral subtyping for object-oriented abstract data types with immutable objects and deterministic methods that can use multiple dispatch. In particular, we investigate a proof technique for correct behavioral subtyping in which each subtype's specification includes terms that can be used to coerce its objects to objects of each of its supertypes. We show that this technique is sound, using our previous work on the model theory of such abstract data types.
We also give an example to show that the technique is not complete, even if the methods do not use multiple dispatch, and even if the types specified are term-generated. In preparation for the results on equational subtyping we develop the proof theory of a richer form of equational logic that is suitable for dealing with subtyping and behavioral equivalence. This gives some insight into the question of when our proof techniques can be made effectively computable, but in general behavioral consequence is not effectively computable.

, 1995
We shall demonstrate that proving the behavioral equivalence of two algebraic specifications is equivalent to proving a set of theorems in a given initial algebra. Thus, it is possible to prove automatically this behavioral equivalence by use of automatic deduction techniques.
DMOZ - Science: Math: Research: Open Problems

• Millennium Prize Problems - The seven problems proposed by the Clay Mathematics Institute: P versus NP; Hodge Conjecture; Poincaré Conjecture; Riemann Hypothesis; Yang-Mills Existence and Mass Gap; Navier-Stokes Existence and Smoothness; Birch and Swinnerton-Dyer Conjecture. Resources include articles on each problem by leading researchers.
• Mathematical Problems - In various subjects, compiled by Torsten Sillke.
• Most Wanted List - Elementary unsolved problems in mathematics, listed at the MathPages archive.
• Open Problem Garden - A collection of unsolved math problems anyone can contribute to.
• Unsolved Problem of the Week Archive - A list of unsolved problems published by MathPro Press during 1995.
• Unsolved Problems - A list compiled by Eric Weisstein in MathWorld.
Say I have n vectors that are sorted. I can't think of a better algorithm to merge them than searching the front of all n vectors to find the minimum, popping that, and continuing. I think this would be too slow, but I can't think of anything more efficient.

>I think this would be too slow
Why? Have you tried it, profiled the execution, and determined it to be too slow? Or are you just thinking about it and going "golly, this has a lot of steps"? There are two problems with that: 1) Programmers are notoriously bad at guessing where bottlenecks lie. 2) Algorithms are notoriously good at looking slow but being zippy.

You could just do a sort by taking each element of every vector at a time in no particular order. This is probably what I'd do if the vectors weren't sorted. But since they are, your idea sounds pretty good.

When you have more than 2 sorted vectors to merge, an excellent way to handle the merging is to reference each vector in a heap. Whether that means that your heap stores pointers to the vectors, or indexes to those vectors within the array of those vectors, is up to you. You always extract an item from the vector that is at the head of the heap. Then you downheap (or is it upheap, I get mixed up), and repeat. That's the way to do external mergesort.

I like the heap idea. I get upheap and downheap confused too, don't worry lol. But I'm not sure if I shouldn't just take Prelude's advice. Eh, I'll code both. Thanks for the advice. I greatly appreciate it!
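The heap-based k-way merge described in the thread can be sketched in a few lines. Python is shown here for brevity; the same structure carries over to C++ with a priority queue. This is a sketch of the idea, not any poster's actual code:

```python
import heapq

def merge_sorted(vectors):
    """Merge n sorted lists by keeping one (value, list-index, position)
    entry per list in a min-heap. Each output element costs O(log n)
    instead of an O(n) scan over all the list fronts."""
    heap = [(v[0], i, 0) for i, v in enumerate(vectors) if v]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, pos = heapq.heappop(heap)
        out.append(val)
        # Refill the heap from the list we just consumed from.
        if pos + 1 < len(vectors[i]):
            heapq.heappush(heap, (vectors[i][pos + 1], i, pos + 1))
    return out

print(merge_sorted([[1, 4, 7], [2, 5], [0, 3, 9]]))  # -> [0, 1, 2, 3, 4, 5, 7, 9]
```

Note that Python's standard library already provides this as heapq.merge; the hand-rolled version above is just to show the mechanism.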
Genoa City Math Tutor
Find a Genoa City Math Tutor

...I use multi-sensory methods of teaching and each student has an individualized education plan to ensure their success. I am a certified elementary and special education teacher. I worked 21 years as a Resource teacher with students with many different types of needs.
15 Subjects: including algebra 1, special needs, grammar, geometry

...I have worked with learners from 1st and 2nd grade to their mid-late 20's. Elementary Math, Reading, Pre-Algebra, Algebra, Geometry, College Basic Math, GED preparation, SAT and ACT Math, Algebra 2, all are areas where I can help you or your child gain confidence and develop content mastery. "If...
34 Subjects: including prealgebra, geometry, algebra 1, ACT Math

...I hope that I can be of service to anyone who requires aid with these disciplines. They offer life-long skills by offering problem-solving techniques and numerical tools. I have background in peer-tutoring when I was in school, helping in both Physics and Math. This was my major in college.
16 Subjects: including linear algebra, algebra 1, algebra 2, calculus

...My approach to developing vocabulary is through word study, to help students better identify common suffixes, prefixes and root words. As a teacher I can attest that the instruction of grammar has truly fallen by the wayside in education, from elementary school all the way through high school. ...
20 Subjects: including algebra 1, algebra 2, vocabulary, grammar

...My experience studying foreign languages gives me awareness of struggles my students face. In addition, I have completed an Introduction to the Wilson Reading System, a reading education program that has shown good results with ESL students. I am a Northwestern Graduate with Advanced Math Cour...
22 Subjects: including algebra 1, prealgebra, reading, English
Posts about physics on SuperFly Physics Category Archives: physics Earlier this week I got an email from a friend who has been working with his students modeling how momentum works in a situation where two carts start connected and then explode apart. I’m not sure, but I think that … Continue reading About a month ago, I had an extraordinary experience: It was Bill Nye standing on me while I laid on a nail bed. Lots of fun, for sure, and I pointed out to the audience that it was the one … Continue reading This past week in my optics class I think I made a mistake. We were talking about how light interacts with a system with multiple parallel interfaces, and we started with analyzing a single interface that didn’t happen to be … Continue reading My colleague asked me to help him out with this image: He needs to know the grain size distribution, and they’ve been having trouble automating this. He knew I’d been doing some work with Mathematica’s image analysis capabilities so he … Continue reading Rhett Allain’s post about a human running around a loop has really got me (and him!) thinking (click through to see the video). I wondered if there was a more sophisticated way to do the calculation for the minimum speed … Continue reading This semester I’m trying to flip my flipped approach. Here’s a quick description. Today was the best day so far doing it this way. It was the fourth day of class, and the others had been ok but not great, … Continue reading I’ve been playing with ImageFeatureTrack in Mathematica over the last few days. My interest is in helping me and my students track the beads on a swinging beaded chain (something we worked on quite a bit last summer). I just … Continue reading When I was in undergrad, I dutifully did all my linear algebra homework, not really understanding why. 
I figured, “if they want me to find a vector or two for a given matrix that satisfies M.v=lambda v , fine, I’ll do it.” … Continue reading Yesterday I went on twitter to try to get some help on teaching Fourier analysis for my sound and music class: teaching fourier theory to non-sci-Ss. Goal: that it's possible (to find freqs). not-a-goal: teach slickest way to do the … Continue reading I’m teaching “the physics of sound and music” in the fall, along with the labs. I really want to do that using my Standards-Based Grading with Voice approach, but I recognize that having 40 students really changes the grading calculus. … Continue reading
A mathematical model of the inner medullary collecting duct of the rat: acid/base transport (Renal H+/K+ ATPase Variant)

Model Status
This CellML model variant describes the renal H+/K+ ATPase.

Model Structure
ABSTRACT: A mathematical model of the inner medullary collecting duct (IMCD) of the rat has been developed that is suitable for simulating luminal buffer titration and ammonia secretion by this nephron segment. Luminal proton secretion has been assigned to an H-K-ATPase, which has been represented by adapting the kinetic model of the gastric enzyme by Brzezinski et al. (P. Brzezinski, B. G. Malmstrom, P. Lorentzon, and B. Wallmark. Biochim. Biophys. Acta 942: 215-219, 1988). In shifting to a 2 H+:1 ATP stoichiometry, the model enzyme can acidify the tubule lumen approximately 3 pH units below that of the cytosol, when luminal K+ is in abundance. Peritubular base exit is a combination of ammonia recycling and HCO3- flux (either via Cl-/HCO3- exchange or via a Cl- channel). Ammonia recycling involves NH4(+) uptake on the Na-K-ATPase followed by diffusive NH3 exit [S. M. Wall. Am. J. Physiol. 270 (Renal Physiol. 39): F432-F439, 1996]; model calculations suggest that this is the principal mode of base exit. By virtue of this mechanism, the model also suggests that realistic elevations in peritubular K+ concentration will compromise IMCD acid secretion. Although ammonia recycling is insensitive to carbonic anhydrase (CA) inhibition, the base exit linked to HCO3- flux provides a CA-sensitive component to acid secretion. In model simulations, it is observed that increased luminal NaCl entry increases ammonia cycling but decreases peritubular Cl-/HCO3- exchange (due to increased cell Cl-). This parallel system of peritubular base exit stabilizes acid secretion in the face of variable Na+ reabsorption.
The complete original paper reference is cited below:

A mathematical model of the inner medullary collecting duct of the rat: acid/base transport, Alan M. Weinstein, 1998, American Journal of Physiology, 274(5 Pt 2), F856-67. PubMed ID: 6496750

Conventional rendering of the renal H-K-ATPase adapted from the gastric H-K-ATPase model. This model has a stoichiometry of two H^+ and two K^+ per ATP. E[1] and E[2] are the cytosolic- and luminal-facing enzymes, respectively.
Numerical Python Basics

Installing Numerical Python
To use Numerical Python, you'll need to install it first. For the details, see our handy guide with instructions and download locations here.

On its own, Python is a potent tool for mathematics. When extended with the addition of two modules, Numerical Python and the DISLIN data visualization tool, Python becomes a powerhouse for numeric computing and problem solving. Numerical Python (NumPy) extends the Python language to handle matrices and supports the associated mathematics of linear algebra. Like Python, NumPy is a collaborative effort. The principal code writer was Jim Hugunin, a student at MIT. Paul Dubois at Lawrence Livermore National Labs and a few other major supporters (Konrad Hinsen and Travis Oliphant) eventually took on the project.

While NumPy and Python make a good combination, the ability of Python to serve the scientific community is not complete without a data visualization package. An excellent choice for data visualization is the DISLIN package, freely provided to the Python community by Helmut Michels of the Max-Planck Institute. DISLIN is an extensive cross-platform (Win32/Linux/Unix) library that can be accessed in many languages, including Python. With its support for 2D and 3D plotting and basic drawing commands, you can create informative and eye-catching graphics.

Over the next five months I will show you how to use both NumPy and DISLIN in real applications, starting this week with a review of the useful and interesting world of matrices and linear algebra. You may feel like you are having a flashback to high-school math, but stick with me. It gets interesting.

Introduction to Matrices and Matrix Mathematics
A matrix is a homogenous collection of numbers with a particular shape, like a table of numbers of the same type. The most common matrices are one- or two-dimensional. One-dimensional matrices are also called vectors.
A, b, and C are examples of matrices; a and b could also be called vectors. Although a and b both contain the same elements, they are different in shape. The shape of a matrix is described by the number of rows and columns. The a matrix can be described as either a column vector or a 3-by-1 matrix; b is a row vector or a 1-by-3 matrix. The C matrix is an example of a 3-by-3 matrix.

Addition and Subtraction
You add and subtract matrices, as you might expect, element by element - with only a slight twist: the shape of the matrices must be the same. Thus adding two matrices of the same shape is a valid operation, while adding matrices of different shapes is not.

Matrix Multiplication
Matrix multiplication is a more complicated operation. For addition the matrices have to be the same size, but in multiplication only the inner dimensions have to be the same: the columns of the first matrix must match the rows of the second matrix. The resulting matrix will be the size of the remaining outer dimensions. As an example, multiplication of a 1-by-3 matrix by a 3-by-1 matrix results in a 1-by-1 matrix (3 being the 'inner' dimension). If a 5-by-2 matrix is multiplied by a 2-by-5 matrix, a 5-by-5 matrix results.

The Formula
The governing formula is:

c(i,j) = sum over k = 1 to n of a(i,k) * b(k,j)

This formula describes how the rows of matrix a and the columns of matrix b can be multiplied to generate matrix c. The indices in c(i,j) are used to specify the element located at the i-th row and j-th column of the new c matrix. (The same concept holds for the a and b matrices.) The Greek letter Sigma stands for summation. Think of it as a kind of for loop from k=1 to n, where n is the size of the inner dimension; add the resulting value of each trip through the loop together.

The following shows the difference between matrices with the same values but different shapes. The first example is a 2-by-1 matrix multiplied by a 1-by-2 matrix, resulting in a 2-by-2 matrix. The second example is a 1-by-2 matrix multiplied by a 2-by-1 matrix, resulting in a 1-by-1 matrix (also known as a scalar).
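The shape rules above are easy to check interactively. Note that this sketch uses the modern NumPy interface (np.array, np.dot), which differs in spelling from the early Numeric module this article was written against:

```python
import numpy as np

a = np.array([[1],
              [2]])          # a 2-by-1 matrix (column vector)
b = np.array([[3, 4]])       # a 1-by-2 matrix (row vector)

# Only the outer dimensions survive:
# (2x1)(1x2) -> 2x2, while (1x2)(2x1) -> 1x1 (a scalar).
print(np.dot(a, b))          # [[3 4]
                             #  [6 8]]
print(np.dot(b, a))          # [[11]]
```

The 1-by-1 result is 1*3 + 2*4 = 11, exactly the summation loop over the inner dimension described in the formula above.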
What about division? Matrix division is undefined; the concept is replaced by the matrix inverse. The inverse of a matrix A is defined by the following equation:

A * A^-1 = I

When a matrix and its inverse are multiplied together, the result is the identity matrix, or I. The identity matrix is a square matrix where all elements are 0 except along the main diagonal (going from upper left to lower right), where they are 1. A 3-by-3 identity matrix is shown below:

1 0 0
0 1 0
0 0 1

While the matrix inverse is a bit different from traditional division, the concept follows directly from scalar mathematics, where:

a * a^-1 = 1

There are a couple of limitations: the matrix inverse is only defined for square matrices (same number of rows as columns), and it may not exist for certain sets of elements. This is similar to how dividing a scalar by zero is undefined. You just can't work out an inverse for every set of numbers. Where the inverse does exist, it is very useful for solving difficult equations, as we will see later in this tutorial.
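A quick sketch of the inverse at work, again using the modern NumPy interface (np.linalg.inv) rather than the original Numeric spelling:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# The inverse of this 2x2 matrix exists because its determinant (1.0)
# is nonzero; a singular matrix would raise numpy.linalg.LinAlgError.
A_inv = np.linalg.inv(A)

# Multiplying a matrix by its inverse recovers the identity,
# up to floating-point rounding.
print(np.allclose(np.dot(A, A_inv), np.eye(2)))
```

For this particular A, the inverse works out by hand to [[1, -1], [-1, 2]], which is a useful sanity check on the library call.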
The Purplemath Forums

Explain how the definition of the meaning of rational exponents follows from extending the properties of integer exponents to those values, allowing for a notation for radicals in terms of rational exponents. For example, we define 5^(1/3) to be the cube root of 5 because we want (5^(1/3))^3 = 5^((1/3)*3) to hold...
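The power rule the question refers to, (x^a)^b = x^(a*b), is easy to verify numerically; a minimal sketch:

```python
# Raising the cube root of 5 to the 3rd power should recover 5,
# since (5^(1/3))^3 = 5^((1/3)*3) = 5^1. Floating-point arithmetic
# makes the round trip approximate rather than exact.
cube_root = 5 ** (1 / 3)
print(cube_root ** 3)  # approximately 5.0
```

This is exactly why 5^(1/3) is the only sensible definition of the cube root: it is the value that the integer-exponent rules force once exponents are extended to fractions.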
Farmers Branch, TX Statistics Tutor Find a Farmers Branch, TX Statistics Tutor ...The biggest problem I've noticed with students who come to me for Physics is a lack of firm grasp on its very basic and fundamental concepts. Physics is not like Calculus: you can't just get by with knowing a few techniques of integration and differentiation; you really have to be well grounded ... 41 Subjects: including statistics, chemistry, ASVAB, logic ...Help me help you! Having put myself through school, I understand the need for comprehension in class, homework help, and maintaining good grades. I'd like to help you master your subject and keep or improve your current grades. 4 Subjects: including statistics, chemistry, economics, vocabulary ...My laboratory computer was an HP running HP-UX. After the SSC was terminated, I bought my first home computer. I learned DOS and early versions of Windows inside and out due to the necessity of managing memory for my children's computer games. 25 Subjects: including statistics, chemistry, calculus, physics Hello, I have a PhD in Computational Mathematics from SMU and a Master's in Pure Mathematics from UNT. I tutor all Mathematics and Statistics courses as well as for professional exams such as GRE, GMAT and Test Prep ACT, SAT etc. I have taught Mathematics, Statistics and Computer courses at Texas A&M, Eastfield College, UNT & SMU. 23 Subjects: including statistics, calculus, geometry, GRE ...I can help with query strategies and with syntax of most query statements. I am a certified tutor in all math topics covered by the ASVAB. 
In my career I have used and taught many mathematical ...
15 Subjects: including statistics, chemistry, physics, calculus
Brookhaven, PA SAT Math Tutor Find a Brookhaven, PA SAT Math Tutor ...I also teach Solfege (Do, Re Mi..) and intervals to help with correct pitch. I use IPA to teach correct pronunciation of foreign language lyrics (particularly Latin). I have a portable keyboard to bring to lessons and music if the student doesn't have their own selections that they wish to learn... 58 Subjects: including SAT math, chemistry, reading, biology ...Also, I have tutored students in ODE's for over ten years. I worked for close to three years as a pension actuary and have passed the first three exams given by the Society of Actuaries, which rigorously cover such topics as calculus, probability, interest theory, modeling, and financial derivat... 19 Subjects: including SAT math, calculus, econometrics, logic ...I have been trained to teach Trigonometry according to the Common Core Standards. I have planned and executed numerous lessons for classes of high school students, as well as tutored many independently. I have a bachelor's degree in secondary math education. 11 Subjects: including SAT math, calculus, algebra 2, geometry ...I am well-versed in IB (as well as AP) Biology, Theory of Knowledge, English Literature and Composition, Writing craft, and 20th Century History. I have also obtained 'A' grades in Spanish language studies at the 300 level in college and have studied various epic literature, moral philosophy, an... 18 Subjects: including SAT math, reading, Spanish, English ...I studied civil engineering in college. During my coursework, I took Calc I, Calc II, Calc III and Differential equations. My knowledge of calculus was also applied in my structural engineering coursework during senior year. 21 Subjects: including SAT math, reading, calculus, physics
Readers' response: Euler's greatest hits

Posted by: Dave Richeson | March 8, 2010

My friend Gene Chase is teaching a history of mathematics class at Messiah College this semester. He asked me if I was interested in giving a visiting lecture in his class in a few weeks. The topic: Leonhard Euler. He said that I could talk about whatever I wanted. Wow, the possibilities! So I was thinking about giving a biography of Euler followed by "Euler's greatest hits." This is where you come in. You probably know my favorite theorem of Euler's. What are your favorites? Please leave them in the comments. List as many as you want. The more, the better.

My vote might be for the Euler-Maclaurin summation formula. But I don't know that much about which theorems came from Euler.
By: John on March 8, 2010 at 6:32 pm

I don't know if it has a fancy name, but I'm partial to the infinite sum: the reciprocals of the squares of the integers = pi^2/6. What a weird result, and what a genius way he got there.
By: Kate Nowak on March 8, 2010 at 7:01 pm

And by integers, I mean integers >= 1, of course. Sorry.
By: Kate Nowak on March 8, 2010 at 7:02 pm

I'm still reading Euler's Gem. The coolest thing I learned in it is that no one ever talked about edges until Euler, even though the Greeks talked a lot about polyhedra. That kind of blew me away.
By: Sue VanHattum on March 8, 2010 at 7:03 pm

The Euler product.
By: Jason Dyer on March 8, 2010 at 8:17 pm

Can't forget e^(i*pi) + 1 = 0 ! [or more generally e^(i*theta) = cos(theta) + i*sin(theta)]
By: nyates314 on March 8, 2010 at 8:25 pm

• I vote for this one too! My favorite Euler formula (even though Euler didn't write it in this form). It is also an example of "Le plus court chemin entre deux vérités dans le domaine réel passe par le domaine complexe." (The shortest path between two truths in the real domain passes through the complex domain) – Jacques Hadamard. It combines i, pi, cos, and sin, and intertwines the complex and real domains in a simple, elegant formula.
By: Xenic on March 9, 2010 at 6:14 am

Thanks, everyone. You've picked some good ones. Keep them coming! Here's one from Twitter: @piggymurph said his favorite is the Euler-Lagrange Equation: http://twitter.com/piggymurph/status/10190591083. Kate, that's also one of my favorites. It is known as the "Basel problem." It was the theorem that made everyone take notice of this young mathematician named Euler. Sue, I hope you enjoy the book. That is a wild fact, isn't it?
By: Dave Richeson on March 8, 2010 at 10:23 pm

One of my personal favorites: x^phi(n) is congruent to 1 mod n for any x coprime to n, with x, n in Z+.
By: Matt on March 9, 2010 at 7:46 pm

I've heard it said that Euler was so prodigious that people started naming theorems after the first person *after* Euler to discover them. :) Be that as it may, I'm an applied guy, so I'll go with the basic necessary condition on an extreme value of a functional:

$\displaystyle \frac{\partial}{\partial {q}} L - \frac{d}{d{t}} \left( \frac{\partial}{\partial {\dot{q}}} L \right) = 0$
By: sherifffruitfly on March 14, 2010 at 2:35 am

Posted in Math, Teaching | Tags: history of mathematics, Leonhard Euler, Math
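Kate's pick, the Basel problem, is easy to check numerically. Here is a small Python illustration (not part of the original thread) showing the partial sums approaching pi^2/6:

```python
import math

def basel_partial_sum(n):
    """Sum of 1/k^2 for k = 1..n; Euler showed the limit is pi^2 / 6."""
    return sum(1.0 / (k * k) for k in range(1, n + 1))

target = math.pi ** 2 / 6  # ~1.6449340668...
for n in (10, 1000, 100000):
    print(n, basel_partial_sum(n), target - basel_partial_sum(n))
```

The error of the partial sum shrinks roughly like 1/n, so the last line's gap is on the order of 1e-5.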
tricky trig equations

September 12th 2009, 04:58 PM #1
Junior Member, Sep 2008

Express 3cosx - 4sinx in the form Rcos(x+a) for some R>0 and 0<a<90, giving the value of a correct to the nearest minute. Hence solve the equation 3cosx - 4sinx = 5 for x in the domain [0, 360].

Hi requal
I'm not sure what you mean by "nearest minute". Maybe it means nearest degree?
Let: p cos x - q sin x = R cos (x+a)
Then, expand the RHS and compare it with the LHS to find cos(a) and sin(a). Hence, find a.

Hi mr fantastic
Oh I see. Find a in degrees first, then convert it to minutes.
Last edited by songoku; September 14th 2009 at 03:02 AM.

Sorry to bump, but I'm still confused. Why does R equal the square root of p^2 + q^2?

Hi requal
p cos x - q sin x = R cos (x+a)
p cos x - q sin x = R cos(x) cos(a) - R sin(x) sin(a)

Comparing coefficients:
p cos x = R cos(x) cos(a), so p = R cos(a) .........(1)
q sin x = R sin(x) sin(a), so q = R sin(a) .........(2)

Square (1) and (2) and add them:
p^2 = R^2 [cos(a)]^2
q^2 = R^2 [sin(a)]^2
--------------------------- +
p^2 + q^2 = R^2
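To make the thread's answer concrete: with p = 3 and q = 4 we get R = 5 and a = arctan(4/3), roughly 53 degrees 8 minutes, so 3cos x - 4sin x = 5 forces cos(x + a) = 1 and hence x is about 306 degrees 52 minutes in [0, 360]. A quick numeric check (Python; not part of the thread):

```python
import math

p, q = 3.0, 4.0                      # 3cos(x) - 4sin(x)
R = math.hypot(p, q)                 # R = sqrt(p^2 + q^2) = 5
a = math.degrees(math.atan2(q, p))   # tan(a) = q/p, a ~ 53.13 degrees

# R cos(x + a) = 5 means cos(x + a) = 1, so x + a = 360 within [0, 360]
x = 360.0 - a
print(R, a, x)  # 5.0, ~53.13, ~306.87

# sanity check: plug x back into the left-hand side
lhs = p * math.cos(math.radians(x)) - q * math.sin(math.radians(x))
print(round(lhs, 9))  # 5.0
```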
Please help me understand Leibniz' notation

February 17th 2013, 08:39 PM

For some inexplicable reason, I cannot seem to understand Leibniz' notation. I get it at its most simple form such as: $\frac{d}{dx}x^2$

I do see that we are taking the difference (or delta) of y, which is $\left((x+\Delta x)(x+\Delta x)-x^2\right)$, and then I take the difference (delta) of x, which is $x+\Delta x-x$, and then of course, take the limit as $\Delta x \rightarrow 0$.

However when we start playing around with Leibniz' notation, I am at a loss. For instance, now I am trying to understand this:

Let $w=\ln{x}$

$\frac{d}{dx}e^w=\frac{d}{dx}x=1\Rightarrow \left(\frac{d}{dw}e^w\right)\left(\frac{dw}{dx}\right)=1\Rightarrow e^w\cdot \frac{dw}{dx}=1\Rightarrow\frac{dw}{dx}=\frac{1}{e^w}=\frac{1}{x}$

(if it helps, here are my notes for the lecture: http://i.imgur.com/1fjBycY.jpg. The problem is on the right side.)

But I am having severe issues understanding what is going on! I do understand the calculations if presented to me without the Leibniz' notation, I am sure. But this notation seems to clog me up every time! Please help MHF!

February 17th 2013, 10:21 PM
Re: Please help me understand Leibniz' notation

Hey Paze. Basically this is just the chain rule, where w = f(x) and g(w) = e^w and you are differentiating d/dx g(f(x)).

February 18th 2013, 06:15 AM
Re: Please help me understand Leibniz' notation

Thank you, I did suspect that but my problem is understanding what is going on with the notation. Why does $\frac{d}{dx}$ become $\frac{d}{dw}$?

February 18th 2013, 10:23 AM
Re: Please help me understand Leibniz' notation

Don't let Leibniz's notation scare you. Whether you write $\frac{d}{dx}f(x)$ or f'(x), it's just the derivative of f. Your professor introduced a new variable (or a new function, depending on how you look at it) $w=\ln{x}$. So $e^w=x$, and you take the derivative of both sides. The right side is easy.
On the left, since w is a function of x, you have a composite function ( $e^w$ is $e^{\ln{x}}$), so you need to use the chain rule. The derivative of the outside function is $\frac{d}{dw}e^w=e^w$, and the derivative of the inside function is $\frac{dw}{dx}$. So you have $e^w\frac{dw}{dx}=1$, so $\frac{dw}{dx}=\frac{1}{e^w}=\frac{1}{x}$, and since $w=\ln{x}$, $\frac{d}{dx}\ln{x}=\frac{1}{x}$, which is what you wanted to show.

If you can't figure out the derivation, you should at least memorize the result $\frac{d}{dx}\ln{x}=\frac{1}{x}$. Or in the other notation, if $f(x)=\ln{x}$, $f'(x)=\frac{1}{x}$. Hope that helps.

- Hollywood

February 18th 2013, 05:01 PM
Re: Please help me understand Leibniz' notation

I get it! God I love when the solution just appears to you after scouring for minutes, hours or even days. What you wrote was Hebrew to me for the first 30 minutes or so but after that it just suddenly clicked. That feeling is just #1. Thank you very much.
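As a quick sanity check of the result d/dx ln x = 1/x discussed above, a small stdlib-only Python sketch (not from the thread) comparing a finite-difference derivative against 1/x:

```python
import math

def numeric_derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# d/dx ln(x) should equal 1/x; check at a few points
for x in (0.5, 1.0, 2.0, 10.0):
    approx = numeric_derivative(math.log, x)
    print(x, approx, 1.0 / x)
```

The two columns agree to many decimal places, which is exactly what the chain-rule derivation predicts.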
Integer Factorization

07-22-2003 #1
Registered User, Join Date Jul 2003

Hi everybody, I need a little help and advice with this code snippet. I want the function to take an integer, return a pointer to an array of ints on the heap. This array will contain all of the factors. Please point out where I could optimize or beautify my code. Anyway, here it is:

const int MaxFactors = 30;

int* Factorize(int Number)
{
    int* Factors = new int[MaxFactors];
    int* pFactor;
    int possibleFactor = 3;
    int counter = 0;
    int startNumber = Number;

    if(Number == 1)
    {
        pFactor = new int(1);
        Factors[counter] = *pFactor;
        delete pFactor;
        pFactor = 0;
        counter++;
    }

    for( ; counter < MaxFactors; )
    {
        if(Number % 2 != 0)
            break;
        pFactor = new int(2);
        Factors[counter] = *pFactor;
        delete pFactor;
        pFactor = 0;
        Number /= 2;
        counter++;
    }

    for( ; counter < MaxFactors && possibleFactor <= Number; )
    {
        if(Number % possibleFactor == 0)
        {
            while(Number % possibleFactor == 0)
            {
                pFactor = new int(possibleFactor);
                Factors[counter] = *pFactor;
                delete pFactor;
                pFactor = 0;
                Number /= possibleFactor;
                counter++;
            }
        }
        possibleFactor += 2;
    }

    if(Factors[0] == startNumber)   // prime number, no factors
    {
        if(Factors[0] == 2)
        {
            pFactor = new int(2);
            Factors[0] = *pFactor;
            delete pFactor;
            pFactor = 0;
        }
        else if(Factors[0] == 1)
        {
            pFactor = new int(1);
            Factors[0] = *pFactor;
            delete pFactor;
            pFactor = 0;
        }
        return Factors;
    } // end if(Factors[0] == startNumber)

    return Factors;
} // end Factorize(int Number)

-- Placid

Thanks a lot! I sorta typed that almost out of the book I'm studying, and I should've checked it more thoroughly I guess... But besides that, is there anything that could be better with the algorithm? Could the ugly MaxFactors constant be avoided via a linked list? If so, would you have done it?

-- Placid

A const isn't ugly. A #define is ugly. A global variable is (usually) ugly.
The const is fine, and no, it couldn't have been avoided with a linked list, although a linked list could have a place in an application like this if you needed the factors for something other than printing to the screen.

The ugly MaxFactors constant, along with the problem of there being such a thing as a maximum number of factors, the problem of telling the calling program how many factors you actually found, and, and this is a big one, the requirement of callers to clean up the memory allocated by factors, can all be solved with vectors.

The other trick to note is that if n has no factors <= sqrt(n), then either sqrt(n) is a factor (n is the square of a prime) or n is a prime. sqrt is mildly expensive so I jumped through some hoops to minimise the number of times it's called.

#include <iostream>
#include <cmath>
#include <cstdlib>
#include <vector>
#include <algorithm>
#include <iterator>

typedef std::vector<unsigned> factors_t;

factors_t factors(unsigned n) {
    if(n < 2) return factors_t(1, n);
    factors_t v;
    while(n % 2 == 0) { v.push_back(2); n /= 2; }
    unsigned max = static_cast<unsigned>(ceil(sqrt(static_cast<double>(n))));
    for(unsigned fact = 3; fact <= max; fact += 2) {
        if(n % fact == 0) {
            do {
                v.push_back(fact);
                n /= fact;
            } while(n % fact == 0);
            max = static_cast<unsigned>(ceil(sqrt(static_cast<double>(n))));
        }
    }
    if(n > 1) v.push_back(n);
    return v;
}

int main(int argc, char *argv[]) {
    unsigned n = (argc > 1) ? atoi(argv[1]) : 2*2*3*5*7*13*17;
    factors_t v = factors(n);
    std::cout << n << " has " << v.size() << " factors" << std::endl;
    std::copy(v.begin(), v.end(), std::ostream_iterator<unsigned>(std::cout, " "));
    std::cout << std::endl;
    std::cout << v.back() << std::endl;
    return 0;
}

Thanks for the input! Well I'm actually using this piece of code for something else: I'm reducing a rational number with the use of this function. In the Rational class the memory is released via the destructor, so that's not the biggest problem IMHO. I agree though that the function that allocates should be the one which frees memory, but sometimes that's impossible. (Perhaps a sign of bad design? I hope not...)
Vectors seem to have a multitude of uses; I'll probably look into them in the near future.

Then the first thing I would look at is the binary Euclid's algorithm. You don't need any additional storage for this, just your two ints. Boost has an interesting rational class for the simple case of a pair of ints. You will almost certainly feel the pull of arbitrary precision calling you eventually; it might be good to get to know the fast, portable, yet GPL-only gmp.
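The binary Euclid's algorithm mentioned in the last reply can be sketched as follows (Python is used here for illustration; the thread itself is C++), together with the rational-reduction use case the poster described:

```python
from math import gcd  # standard library gcd, used only to cross-check

def binary_gcd(a, b):
    """Binary (Stein's) GCD for nonnegative ints: only shifts and subtraction."""
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:      # both even: factor out the shared 2s
        a >>= 1; b >>= 1; shift += 1
    while a & 1 == 0:            # make a odd
        a >>= 1
    while b:
        while b & 1 == 0:        # strip factors of 2 from b (they can't be shared)
            b >>= 1
        if a > b:
            a, b = b, a
        b -= a                   # gcd(a, b) = gcd(a, b - a)
    return a << shift

def reduce_rational(num, den):
    """Reduce num/den to lowest terms, as in the poster's Rational class."""
    g = binary_gcd(num, den)
    return num // g, den // g

print(reduce_rational(84, 36))        # (7, 3)
print(binary_gcd(84, 36), gcd(84, 36))  # 12 12
```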
Reference for: $C_c^\infty(0,T;H)$ dense in $L^2(0,T;H)$

Q: If it is true, where may I find a reference/proof for: $C_c^\infty(0,T;H)$ dense in $L^2(0,T;H)$, where $H$ is a Hilbert space?

Tags: ap.analysis-of-pdes, fa.functional-analysis

A: $$C^\infty_c(0,T;H) = C^\infty_c(0,T)\otimes^{\ell^1} H = C^\infty_c(0,T)\otimes^{\ell^\infty} H = C^\infty_c(0,T)\otimes^{\ell^2} H$$ where I write (for simplicity's sake) $\otimes^{\ell^1}$ for the completed projective tensor product, also denoted $\pi$-tensor product, and $\otimes^{\ell^\infty}$ for the completed injective tensor product, also called $\epsilon$-tensor product. The second equality holds, since $C^\infty_c(0,T)$ is a nuclear (LF) space. Thus we have equality also for the $\otimes^{\ell^2}$ tensor product.

On the other hand we have $L^2(0,T;H) = L^2(0,T)\otimes^{\ell^2} H$, corresponding to Hilbert-Schmidt operators.

Now, since $C^\infty_c(0,T)$ is dense in $L^2(0,T)$, the assertion is obvious.
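For readers who prefer an elementary route over the tensor-product argument, the claim also follows from a standard approximation scheme; the following sketch is illustrative and not part of the original page:

```latex
% Elementary sketch: density of C_c^\infty(0,T;H) in L^2(0,T;H)
\begin{enumerate}
  \item $H$-valued simple functions $\sum_{k=1}^{n}\chi_{E_k}\,h_k$
        (with $E_k\subset(0,T)$ measurable and $h_k\in H$) are dense in
        $L^2(0,T;H)$, by the definition of the Bochner space.
  \item Each indicator $\chi_{E_k}$ can be approximated in $L^2(0,T)$ by
        functions $\phi_k\in C_c^\infty(0,T)$: approximate $E_k$ by a finite
        union of intervals, then mollify and cut off near $0$ and $T$.
  \item Finally,
        $\bigl\|\textstyle\sum_k(\chi_{E_k}-\phi_k)\,h_k\bigr\|_{L^2(0,T;H)}
         \le \sum_k \|\chi_{E_k}-\phi_k\|_{L^2(0,T)}\,\|h_k\|_H,$
        which can be made arbitrarily small.
\end{enumerate}
```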
How to include math operation?

• Date: Fri, 03 Feb 2006 10:20:29 +1100
• From: newsgroups at smartcontroller.com.au (Angelo Fraietta)
• Subject: How to include math operation?

Matteo wrote:
> Ok I try to use paranois example but I have the same error - undefined
> reference to all the math function!!!

Does the paranoia example compile when you are first making RTEMS? Have you built the toolset correctly? Do any of the RTEMS examples work for you?

> I would like to use the function pow of the math library.
> I'm using rtems-4.6.99.2 with gcc-4.0.2 and newlib-1.13.0 with system ubuntu
> for i386 with psb pc386
> Can you help me?

BTW, make sure you post to the newsgroup also - there are many in the group that could give you far more informed answers than myself.

Angelo Fraietta
PO Box 859
Hamilton NSW 2303

Home Page

There are those who seek knowledge for the sake of knowledge - that is CURIOSITY
There are those who seek knowledge to be known by others - that is VANITY
There are those who seek knowledge in order to serve - that is LOVE
Bernard of Clairvaux (1090 - 1153)
Hamtramck ACT Tutor

Find a Hamtramck ACT Tutor

...Does the verb tense match the subject? What about indefinite pronouns - do you know that they take a singular verb and pronoun antecedent? While this may sound like a lot of overwhelming grammar jargon, more and more colleges are using your writing score (in addition to your math and reading scores) to decide whether you are ready for college.
11 Subjects: including ACT Math, English, writing, grammar

...I myself received a 33 on the Reading ACT portion and have aided students in the past in increasing their score by several points. I also have available several copies of the ACT exam from prior years to practice! I have over 4 years of experience providing test prep tutoring for the ACT.
46 Subjects: including ACT Math, reading, German, GED

...Using guided repetition, I will help you and/or your loved ones to develop an understanding of prealgebraic concepts. In no time at all, you and/or your loved ones will feel at ease with the curriculum and will find the success you deserve! Precalculus, otherwise known as Algebra 3, covers topi...
30 Subjects: including ACT Math, English, reading, physics

...I have worked as a teaching assistant for archaeology courses, and have tutored students in this capacity. In addition, I have taken coursework in archaeological fieldwork and theory at a graduate level, and have attended an archaeological field school. I majored in Political Science, focusing ...
25 Subjects: including ACT Math, reading, English, geometry

...Whether you are struggling with graphs, equations, factoring or any other aspect of the course, I can help build your confidence and get you back on track. I have taught all of the topics covered on Geometry. I have spent the last year working in a middle school in Michigan.
10 Subjects: including ACT Math, geometry, algebra 1, elementary math
Video Library

Since 2002 Perimeter Institute has been recording seminars, conference talks, and public outreach events using video cameras installed in our lecture theatres. Perimeter now has 7 formal presentation spaces for its many scientific conferences, seminars, workshops and educational outreach activities, all with advanced audio-visual technical capabilities. Recordings of events in these areas are all available On-Demand from this Video Library and on the Perimeter Institute Recorded Seminar Archive (PIRSA). PIRSA is a permanent, free, searchable, and citable archive of recorded seminars from relevant bodies in physics. This resource has been partially modelled after Cornell University's arXiv.org.

Currently, most physicists believe that only a small fraction of all the matter and energy in the universe is visible and can be seen through our most powerful telescopes. The remaining majority of the universe is thought to consist of elusive dark matter and dark energy, two substances about which we know very little. This presentation will explore the evidence supporting the dark matter and dark energy theories and discuss some of their implications for how our universe will evolve.

In order to predict the future state of a quantum system, we generally do not need to know the past state of the entire universe, but only the state of a finite neighborhood of the system. This locality is best expressed as a restriction on how information "flows" between systems. In this talk I will describe some recent work, inspired by quantum cellular automata, about the information structure of local quantum dynamics.
All Interactive Whiteboard Resources

Teach Fractions: 12 Fractions Resources for IWB AND tablets.
Teach Shapes and Measures: 10 Shapes and Measures Resources for IWB AND tablets.
Numeracy Basics: The 6 resources that appear in the Numeracy Basics app, available for use here on the IWB.

Categories: New Resources; Shape, Space and Measure; General Maths; Investigations/Problem Solving; Number and Algebra; Data Handling; Multi-Purpose IWB Resources
Colpitts Oscillator

The Colpitts oscillator was invented by American scientist Edwin Colpitts in 1918. It is another type of sinusoidal LC oscillator which has a lot of applications. The Colpitts oscillator can be realized using valves, transistors, FETs or op-amps. It is much similar to the Hartley oscillator except for the tank circuit. In the Colpitts oscillator the tank circuit consists of two capacitors in series and an inductor connected in parallel to the serial combination. The frequency of the oscillations is determined by the value of the capacitors and inductor in the tank circuit. The Colpitts oscillator is generally used in RF applications and the typical operating range is 20KHz to 300MHz.

In the Colpitts oscillator, the capacitive voltage divider setup in the tank circuit works as the feedback source, and this arrangement gives better frequency stability when compared to the Hartley oscillator, which uses an inductive voltage divider setup for feedback. The circuit diagram of a typical Colpitts oscillator using a transistor is shown in the figure below.

In the circuit diagram resistors R1 and R2 give a voltage divider biasing to the transistor. Resistor R4 limits the collector current of the transistor. Cin is the input DC decoupling capacitor while Cout is the output decoupling capacitor. Re is the emitter resistor and it's meant for thermal stability. Ce is the emitter by-pass capacitor. The job of the emitter by-pass capacitor is to stop the amplified AC signals from dropping across Re. If the emitter by-pass capacitor is not there, the amplified AC signal will drop across Re, altering the DC biasing conditions of the transistor, and the result will be reduced gain. Capacitors C1, C2 and inductor L1 form the tank circuit. Feedback to the base of the transistor is taken from the junction of capacitor C2 and inductor L1 in the tank circuit. When the power supply is switched ON, capacitors C1 and C2 start charging.
When they are fully charged they start discharging through the inductor L1. When the capacitors are fully discharged, the electrostatic energy stored in the capacitors gets transferred to the inductor as magnetic flux. Then the inductor starts discharging and the capacitors get charged again. This transfer of energy back and forth between the capacitors and the inductor is the basis of oscillation.

The voltage across C2 is phase opposite to the voltage across C1, and it is the voltage across C2 that is fed back to the transistor. The feedback signal at the base of the transistor appears in amplified form across the collector and emitter of the transistor. The energy lost in the tank circuit is compensated by the transistor and the oscillations are sustained. The tank circuit produces a 180° phase shift and the transistor itself produces another 180° phase shift. That means the input and output are in phase, which is a necessary condition of positive feedback for maintaining sustained oscillations.

The frequency of oscillations of the Colpitts oscillator can be determined using the equation below:

f = 1 / (2π√(LC))

where L is the inductance of the inductor in the tank circuit and C is the effective capacitance of the capacitors in the tank circuit. If C1 and C2 are the individual capacitances, then the effective capacitance of the serial combination is C = (C1C2)/(C1+C2). By using ganged variable capacitors in place of C1 and C2, the Colpitts oscillator can be made variable.

Advantages of Colpitts oscillator.

The main advantage of the Colpitts oscillator over the Hartley oscillator is the improved performance in the high frequency region. This is because the capacitors provide a low reactance path for the high frequency signals, and thus the output signals in the high frequency domain will be more sinusoidal. Due to the excellent performance in the high frequency region, the Colpitts oscillator can even be used in microwave applications.

Colpitts oscillator using opamp.
The circuit diagram of a Colpitts oscillator using an opamp is shown in the figure above. The opamp is arranged in the inverting mode, where R1 is the input resistor and Rf is the feedback resistor. The gain of the opamp based oscillator can be individually set using the components Rf and R1, and that is a great advantage. The gain of an inverting op-amp amplifier is given by the equation A = -Rf/R1. Other components such as the tank circuit elements, coupling capacitors etc. have no significant effect on the gain of an opamp based Colpitts oscillator. In transistor based versions, the gain is dependent on almost all components (especially the tank circuit) and it is difficult to make a prediction. The working principle and theory of operation of the opamp based Colpitts oscillator is similar to that of the transistorized version. The equation for frequency is also the same.

Clapp oscillator.

The Clapp oscillator is just a modification of the Colpitts oscillator. The only difference is that there is one additional capacitor connected in series with the inductor in the tank circuit. The circuit diagram of a typical Clapp oscillator is shown in the figure below.

The main purpose of adding this additional capacitor C3 is to improve the frequency stability. The insertion of this additional capacitor C3 prevents the stray capacitances and other parameters of the transistor from affecting C1 and C2. In variable frequency applications using the Clapp oscillator, the common practice is to keep C1 and C2 fixed while C3 is made variable. While deriving the frequency equation, the additional capacitor must also be taken into consideration, and the equation is:

f = 1 / (2π√(LC)), where 1/C = 1/C1 + 1/C2 + 1/C3

The value of C3 is usually selected to be much smaller than C1 and C2, and so the values of C1 and C2 have less effect on the net effective capacitance. As a result the equation for frequency can be simplified as:

f ≈ 1 / (2π√(LC3))

2 Responses to "Colpitts Oscillator"

• I need the value of the components to build this circuit, please give us the values
• Really helpful
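Returning to the frequency relations above, they are easy to evaluate numerically. Below is a small Python sketch; the component values are invented purely for illustration:

```python
import math

def colpitts_frequency(L, C1, C2):
    """Colpitts: f = 1 / (2*pi*sqrt(L*C)) with C = C1*C2 / (C1 + C2)."""
    C = (C1 * C2) / (C1 + C2)
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def clapp_frequency(L, C1, C2, C3):
    """Clapp: series combination of all three tank capacitors."""
    C = 1.0 / (1.0 / C1 + 1.0 / C2 + 1.0 / C3)
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Example: L = 100 uH, C1 = C2 = 1 nF -> effective C = 0.5 nF
f = colpitts_frequency(100e-6, 1e-9, 1e-9)
print(round(f))  # ~712 kHz

# Adding a small series C3 = 100 pF makes C3 dominate the effective capacitance,
# so the result is close to the simplified 1 / (2*pi*sqrt(L*C3)) ~ 1.59 MHz
f_clapp = clapp_frequency(100e-6, 1e-9, 1e-9, 100e-12)
print(round(f_clapp))  # ~1.74 MHz
```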
May 2012

Markov chains are a classic probability model. They represent systems that evolve between states over time, following a random but stable process which is memoryless. The memorylessness is the defining characteristic of Markov processes, and is known as the Markov property. Roughly speaking, the idea is that if you know the state of the process at time T, you know all there is to know about it – knowing where it was at time T-1 would not give you additional information on where it may be at time T+1.

While Markov models come in multiple flavors, Markov chains with finite discrete states in discrete time are particularly interesting. They describe a system which changes between discrete states at fixed time intervals, following a transition pattern described by a transition matrix.

Let's illustrate with a simplistic example. Imagine that you are running an Airline, AcmeAir, operating one plane. The plane goes from city to city, refueling and doing some maintenance (or whatever planes need) every time. Each time the plane lands somewhere, it can be in three states: early, on-time, or delayed. It's not unreasonable to think that if our plane landed late somewhere, it may be difficult to catch up with the required operations, and as a result, the likelihood of the plane landing late at its next stop is higher. We could represent this in the following transition matrix (numbers totally made up):

│ Current \ Next │ Early │ On-time │ Delayed │
│ Early          │ 10%   │ 85%     │ 5%      │
│ On-Time        │ 10%   │ 75%     │ 15%     │
│ Delayed        │ 5%    │ 60%     │ 35%     │

Each row of the matrix represents the current state, and each column the next state. The first row tells us that if the plane landed Early, there is a 10% chance we'll land early at our next stop, an 85% chance we'll be on-time, and a 5% chance we'll arrive late. Note that each row sums up to 100%: given the current state, we have to end up in one of the next states.

How could we simulate this system? Given the state at time T, we simply need to "roll" a random number generator for a percentage between 0% and 100%, and depending on the result, pick our next state – and repeat. Using F#, we could model the transition matrix as an array (one element per state) of arrays (the probabilities to land in each state), which is pretty easy to define using Array comprehensions:

let P =
    [|
        [| 0.10; 0.85; 0.05 |];
        [| 0.10; 0.75; 0.15 |];
        [| 0.05; 0.60; 0.35 |]
    |]
Given the state at time T, we simply need to “roll” a random number generator for a percentage between 0% and 100%, and depending on the result, pick our next state – and repeat. Using F#, we could model the transition matrix as an array (one element per state) of arrays (the probabilities to land in each state), which is pretty easy to define using Array comprehensions: let P = [| 0.10; 0.85; 0.05 |]; [| 0.10; 0.75; 0.15 |]; [| 0.05; 0.60; 0.35 |] (Note: the entire code sample is also posted on fsSnip.net/ch) To simulate the behavior of the system, we need a function that given a state and a transition matrix, produces the next state according to the transition probabilities: // given a roll between 0 and 1 // and a distribution D of // probabilities to end up in each state // returns the index of the state let state (D: float[]) roll = let rec index cumul current = let cumul = cumul + D.[current] match (roll <= cumul) with | true -> current | false -> index cumul (current + 1) index 0.0 0 // given the transition matrix P // the index of the current state // and a random generator, // simulates what the next state is let nextState (P: float[][]) current (rng: Random) = let dist = P.[current] let roll = rng.NextDouble() state dist roll // given a transition matrix P // the index i of the initial state // and a random generator // produces a sequence of states visited let simulate (P: float[][]) i (rng: Random) = Seq.unfold (fun s -> Some(s, nextState P s rng)) i The state function is a simple helper; given an array D which is assumed to contain probabilities to transition to each of the states, and a “roll” between 0.0 and 1.0, returns the corresponding state. nextState uses that function, by first retrieving the transition probabilities for the current state i, “rolling” the dice, and using state to compute the simulated next state. simulate uses nextState to create an infinite sequence of states, starting from an initial state i. 
We need to open System to use the System.Random class – and we can now use this in the F# interactive window:

> let flights = simulate P 1 (new Random());;
val flights : seq<int>

> Seq.take 50 flights |> Seq.toList;;
val it : int list =
  [1; 0; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 0; 1; 1; 2; 2; 2; 1; 1; 1; 1; 1; 1;
   1; 1; 2; 1; 1; 1; 1; 1; 1; 1; 1; 2; 1; 1; 1; 1; 1; 2; 1; 1; 1; 1; 1; 1; 1]

Our small sample shows us what we expect: mostly on-time (Fly AcmeAir!), with some occasional delayed or early flights. How many delays would we observe on a 1000-flights simulation? Let’s try:

> Seq.take 1000 flights |> Seq.filter (fun i -> i = 2) |> Seq.length;;
val it : int = 174

We observe about 17% of delayed flights. This is relevant information, but a single simulation is just that – an isolated case. Fortunately, Markov chains have an interesting property: if it is possible to go from any state to any state, then the system will have a stationary distribution, which corresponds to its long term equilibrium. Essentially, regardless of the starting point, over long enough periods, each state will be observed with a stable frequency.

One way to understand better what is going on is to expand our frame. Instead of considering the exact state of the system, we can look at it in terms of probability: at any point in time, the system has a certain probability to be in each of its states. For instance, imagine that given current information, we know that our plane will land at its next stop either early or on time, with a 50% chance of each. In that case, we can determine the probability that its next stop will be delayed by combining the transition probabilities:

p(delayed in T+1) =
   p(delayed in T) x P(delayed in T+1 | delayed in T) +
   p(on-time in T) x P(delayed in T+1 | on-time in T) +
   p(early in T)   x P(delayed in T+1 | early in T)

p(delayed in T+1) = 0.0 x 0.35 + 0.5 x 0.15 + 0.5 x 0.05 = 0.1

This can be expressed much more concisely using Vector notation.
We can represent the state as a vector S, where each component of the vector is the probability to be in each state, in our case

S(T) = [ 0.50; 0.50; 0.0 ]

In that case, the state at time T+1 will be:

S(T+1) = S(T) x P

Let’s make that work with some F#. The product of a vector by a matrix is the dot-product of the vector with each column vector of the matrix:

// Vector dot product
let dot (V1: float[]) (V2: float[]) =
   Array.zip V1 V2
   |> Array.map (fun (v1, v2) -> v1 * v2)
   |> Array.sum

// Extracts the jth column vector of matrix M
let column (M: float[][]) (j: int) =
   M |> Array.map (fun v -> v.[j])

// Given a row-vector S describing the probability
// of each state and a transition matrix P, compute
// the next state distribution
let nextDist S P =
   S
   |> Array.mapi (fun j v -> column P j)
   |> Array.map (fun v -> dot v S)

We can now handle our previous example, creating a state s with a 50/50 chance of being in state 0 or 1:

> let s = [| 0.5; 0.5; 0.0 |];;
val s : float [] = [|0.5; 0.5; 0.0|]

> let s' = nextDist s P;;
val s' : float [] = [|0.1; 0.8; 0.1|]

We can also easily check what the state of the system should be after, say, 100 flights:

> let s100 = Seq.unfold (fun s -> Some(s, nextDist s P)) s |> Seq.nth 100;;
val s100 : float [] = [|0.09119496855; 0.7327044025; 0.1761006289|]

After 100 flights, starting from either early or on-time, we have about 17% of chance of being delayed. Note that this is consistent with what we observed in our initial simulation. Given that our Markov chain has a stationary distribution, this is to be expected: unless our simulation was pathologically unlikely, we should observe the same frequency of delayed flights in the long run, no matter what the initial starting state is.

Can we compute that stationary distribution? The typical way to achieve this is to bust out some algebra and solve V = V x P, where V is the stationary distribution vector and P the transition matrix. Here we’ll go for a numeric approximation approach.
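As a quick cross-check of the vector-matrix update, here is a minimal Python sketch of the same computation (an approximation of nextDist above, not part of the original sample):

```python
def next_dist(s, P):
    """Distribution at T+1: dot product of s with each column of P."""
    return [sum(s[i] * P[i][j] for i in range(len(s)))
            for j in range(len(P[0]))]

P = [[0.10, 0.85, 0.05],
     [0.10, 0.75, 0.15],
     [0.05, 0.60, 0.35]]

s = [0.5, 0.5, 0.0]   # 50/50 chance of early or on-time
s1 = next_dist(s, P)
print(s1)             # close to [0.1, 0.8, 0.1], as in the F# session
```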
Rather than solving the system of equations, we will start from a uniform distribution over the states, and apply the transition matrix until the distance between two consecutive states is under a threshold Epsilon:

// Squared Euclidean distance between 2 vectors
let dist (V1: float[]) V2 =
   Array.zip V1 V2
   |> Array.map (fun (v1, v2) -> (v1 - v2) * (v1 - v2))
   |> Array.sum

// Evaluate stationary distribution
// by searching for a fixed point
// under tolerance epsilon
let stationary (P: float[][]) epsilon =
   let states = P.[0] |> Array.length
   [| for s in 1 .. states -> 1.0 / (float)states |] // initial distribution
   |> Seq.unfold (fun s -> Some((s, (nextDist s P)), (nextDist s P)))
   |> Seq.map (fun (s, s') -> (s', dist s s'))
   |> Seq.find (fun (s, d) -> d < epsilon)

Running this on our example results in the following stationary distribution estimation:

> stationary P 0.0000001;;
val it : float [] * float =
  ([|0.09118958333; 0.7326858333; 0.1761245833|], 1.1590625e-08)

In short, in the long run, we should expect our plane to be early 9.1% of the time, on-time 73.2%, and delayed 17.6%.

Note: the fixed point approach above should work if a unique stationary distribution exists. If this is not the case, the function may never converge, or may converge to a fixed point that depends on the initial conditions. Use with caution!

Armed with this model, we could now ask interesting questions. Suppose for instance that we could improve the operations of AcmeAir, and reduce the chance that our next arrival is delayed given our current state. What should we focus on – should we reduce the probability to remain delayed after a delay (strategy 1), or should we prevent the risk of being delayed after an on-time landing (strategy 2)? One way to look at this is to consider the impact of each strategy on the long-term distribution. Let’s compare the impact of a 1-point reduction of delays in each case, which we’ll assume gets transferred to on-time.
We can then create the matrices for each strategy, and compare their respective stationary distributions:

> let strat1 = [|[|0.1; 0.85; 0.05|]; [|0.1; 0.75; 0.15|]; [|0.05; 0.61; 0.34|]|]
  let strat2 = [|[|0.1; 0.85; 0.05|]; [|0.1; 0.76; 0.14|]; [|0.05; 0.60; 0.35|]|];;
val strat1 : float [] [] =
  [|[|0.1; 0.85; 0.05|]; [|0.1; 0.75; 0.15|]; [|0.05; 0.61; 0.34|]|]
val strat2 : float [] [] =
  [|[|0.1; 0.85; 0.05|]; [|0.1; 0.76; 0.14|]; [|0.05; 0.6; 0.35|]|]

> stationary strat1 0.0001;;
val it : float [] * float =
  ([|0.091; 0.7331333333; 0.1758666667|], 8.834666667e-05)

> stationary strat2 0.0001;;
val it : float [] * float =
  ([|0.091485; 0.740942; 0.167573|], 1.2698318e-05)

The numbers tell the following story: strategy 2 (improve reduction of delays after on-time arrivals) is better: it results in 16.8% delays, instead of 17.6% for strategy 1. Intuitively, this makes sense, because most of our flights are on-time, so an improvement in this area will have a much larger impact in the overall results than a comparable improvement on delayed flights.

There is (much) more to Markov chains than this, and there are many ways the code presented could be improved upon – but I’ll leave it at that for today, hopefully you will have picked up something of interest along the path of this small exploration! I also posted the complete code sample on fsSnip.net/ch.
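For readers without F# at hand, the fixed-point search and the strategy comparison can be reproduced with a short Python sketch (a rough translation of the F# above, not the author's code; the stopping rule is the same squared-distance test):

```python
def next_dist(s, P):
    # Row-vector times matrix: the next state distribution.
    return [sum(s[i] * P[i][j] for i in range(len(s)))
            for j in range(len(P[0]))]

def stationary(P, eps=1e-12):
    # Iterate s -> s.P from a uniform start until the squared distance
    # between two consecutive distributions falls below eps.
    n = len(P)
    s = [1.0 / n] * n
    while True:
        s2 = next_dist(s, P)
        if sum((a - b) ** 2 for a, b in zip(s, s2)) < eps:
            return s2
        s = s2

P      = [[0.10, 0.85, 0.05], [0.10, 0.75, 0.15], [0.05, 0.60, 0.35]]
strat1 = [[0.10, 0.85, 0.05], [0.10, 0.75, 0.15], [0.05, 0.61, 0.34]]
strat2 = [[0.10, 0.85, 0.05], [0.10, 0.76, 0.14], [0.05, 0.60, 0.35]]

pi = stationary(P)
d1 = stationary(strat1)[2]
d2 = stationary(strat2)[2]
print([round(x, 4) for x in pi])   # close to the F# result above
print(round(d1, 4), round(d2, 4))  # strategy 2 gives the lower delay rate
```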
some restrictions on the use of evaluators in meta-level rules

Major Section: META

Note: This topic, which explains some subtleties for evaluators, can probably be skipped by most readers.

Rules of class :meta and of class :clause-processor are stated using so-called ``evaluator'' functions. Here we explain some restrictions related to evaluators. Below we refer primarily to :meta rules, but the discussion applies equally to :clause-processor rules. In a nutshell, we require that a rule's evaluator does not support other functions in the rule, and we require that the evaluator not be introduced under a non-trivial encapsulate. We also require that no function has an attachment (see defattach) that is both ancestral in the evaluator and also ancestral in the meta or clause-processor functions. We explain these restrictions in detail below.

An argument given elsewhere (see meta, in particular ``Aside for the logic-minded'') explains that the correctness argument for applying metatheoretic simplifiers requires that one be able to ``grow'' an evaluator (see defevaluator) to handle all functions in the current ACL2 world. Then we may, in essence, functionally instantiate the original evaluator to the new (``grown'') evaluator, provided that the new evaluator satisfies all of the axioms of the original. We therefore require that the evaluator function does not support the formula of any defaxiom event. This notion of ``support'' (sometimes denoted ``is an ancestor of'') is defined recursively as follows: a function symbol supports a formula if either it occurs in that formula, or else it supports the definition or constraint for some function symbol that occurs in that formula. Moreover, we require that neither the evaluator function nor its list version support the definition or constraint for any other function symbol occurring in the proposed :meta theorem.
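The ``support'' relation defined above is essentially a transitive closure over a dependency graph. The following Python sketch is only a toy model of that idea; the symbols and the dependency map are made up for illustration and have nothing to do with ACL2's actual implementation:

```python
# deps maps each function symbol to the symbols occurring in its
# definition or constraint (hypothetical example data).
deps = {
    "my-cancel": {"case-match"},
    "f": set(),
    "evl": {"f"},
    "rule": {"evl", "my-cancel"},
}

def ancestors(symbol, deps):
    """All symbols that the given symbol's definition (transitively)
    depends on -- i.e., the symbols that support `symbol`."""
    seen = set()
    stack = [symbol]
    while stack:
        current = stack.pop()
        for d in deps.get(current, set()):
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

# The evaluator `evl` is ancestral in `rule`, as is `f` (through `evl`).
print(sorted(ancestors("rule", deps)))
```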
We also require that the evaluator does not support the formula of a :meta rule's metafunction (nor, if there is one, hypothesis metafunction) or of a :clause-processor rule's clause-processor function. This requirement, along with the analogous requirement for defaxiom events stated above, is necessary in order to carry out the functional instantiation argument alluded to above, as follows (where the reader may find it useful to have some familiarity with the paper ``Structured Theory Development for a Mechanized Logic'' (Journal of Automated Reasoning 26, no. 2 (2001), pages 161-203)). By the usual conservativity argument, we know that the rule follows logically from the axiomatic events for its supporters. This remains true if we functionally instantiate the evaluator with one corresponding to all the function symbols of the current session, since none of the definitions of supporters of defaxioms or metafunctions are hit by that functional substitution. Notice though that the argument above depends on knowing that the rule is not itself an axiom about the evaluator!

Therefore, we also restrict evaluators so that they are not defined in the scope of a superior encapsulate event with non-empty signature, in order to avoid an even more subtle problem. The aforementioned correctness argument depends on knowing that the rule is provable from the axioms on the evaluator and metafunction (and hypothesis metafunction, if any). The additional restriction avoids unsoundness! The following events, if allowed, produce a proof that (f x) equals t even though, as shown below, that does not follow logically from the axioms introduced.

; Introduce our metafunction.
(defun my-cancel (term)
  (case-match term
    (('f ('g)) ''t)
    (& term)))

; Introduce our evaluator and prove our meta rule, but in the same
; encapsulate!
(encapsulate
 ((f (x) t))
 (local (defun f (x) (declare (ignore x)) t))
 (defevaluator evl evl-list ((f x)))
 (defthm correctness-of-my-cancel
   (equal (evl x a) (evl (my-cancel x) a))
   :rule-classes ((:meta :trigger-fns (f)))))

; Prove that (f x) = t.
(encapsulate
 ()
 (local (defstub c () t))
 (local (encapsulate
         ()
         (local (defun g () (c)))
         (local (in-theory (disable g (g))))
         (local (defthm f-g
                  (equal (f (g)) t)
                  :rule-classes nil))
         (defthm f-c
           (equal (f (c)) t)
           :hints (("Goal" :use f-g
                    :in-theory (e/d (g) (correctness-of-my-cancel))))
           :rule-classes nil)))
 (defthm f-t
   (equal (f x) t)
   :hints (("Goal" :by (:functional-instance f-c
                                             (c (lambda () x)))))
   :rule-classes nil))

To see that the term (equal (f x) t) does not follow logically from the axiomatic events above, consider following the above definition of my-cancel with the following events instead.

; (defun my-cancel (term) ...) as before, then:

(defun f (x) (not x))
(defun g () nil)
(defevaluator evl evl-list ((f x) (g)))

These events imply the axiomatic events above, because we still have the definition of my-cancel, we have a stronger defevaluator event, and we can now prove correctness-of-my-cancel exactly as it is stated above. So, the rule f-t is a logical consequence of the chronology of the current session. However, in the current session we can also prove the following rule, which contradicts f-t.

(defthm f-not-t
  (equal (f t) nil)
  :rule-classes nil)

It follows that the current session logically yields a contradiction! Erik Reeber has taken the above example and modified it to prove nil in ACL2 Version_3.1, as follows.
(in-package "ACL2")

(defun my-cancel (term)
  (case-match term
    (('f ('g)) ''t)
    (('f2 ('g2)) ''t)
    (& term)))

(defun f2 (x) (not x))
(defun g2 () nil)

(encapsulate
 ((f (x) t))
 (local (defun f (x) (declare (ignore x)) t))
 (defevaluator evl evl-list ((f x) (f2 x)))
 (defthm correctness-of-my-cancel
   (equal (evl x a) (evl (my-cancel x) a))
   :rule-classes ((:meta :trigger-fns (f)))))

(encapsulate
 ()
 (local (defstub c () t))
 (local (encapsulate
         ()
         (local (defun g () (c)))
         (local (in-theory (disable g (g))))
         (local (defthm f-g
                  (equal (f (g)) t)
                  :rule-classes nil))
         (defthm f-c
           (equal (f (c)) t)
           :hints (("Goal" :use f-g
                    :in-theory (e/d (g) (correctness-of-my-cancel))))
           :rule-classes nil)))
 (defthm f-t
   (equal (f x) t)
   :hints (("Goal" :by (:functional-instance f-c
                                             (c (lambda () x)))))
   :rule-classes nil))

(defun g () nil)

; Below is the expansion of the following defevaluator, changed slightly as
; indicated by comments.
; (defevaluator evl2 evl2-list ((f x) (f2 x) (g) (g2)))

(encapsulate
 (((EVL2 * *) => *)
  ((EVL2-LIST * *) => *))
 (SET-INHIBIT-WARNINGS "theory")
 (LOCAL (IN-THEORY *DEFEVALUATOR-FORM-BASE-THEORY*))
 (MUTUAL-RECURSION
  (DEFUN EVL2 (X A)
    (DECLARE (XARGS :VERIFY-GUARDS NIL
                    :MEASURE (ACL2-COUNT X)
                    :WELL-FOUNDED-RELATION O<
                    :MODE :LOGIC))
    (COND ((SYMBOLP X) (CDR (ASSOC-EQ X A)))
          ((ATOM X) NIL)
          ((EQ (CAR X) 'QUOTE) (CAR (CDR X)))
          ((CONSP (CAR X))
           (EVL2 (CAR (CDR (CDR (CAR X))))
                 (PAIRLIS$ (CAR (CDR (CAR X)))
                           (EVL2-LIST (CDR X) A))))
          ((EQUAL (CAR X) 'F)
           ; changed f to f2 just below
           (F2 (EVL2 (CAR (CDR X)) A)))
          ((EQUAL (CAR X) 'F2)
           (F2 (EVL2 (CAR (CDR X)) A)))
          ((EQUAL (CAR X) 'G) (G))
          ((EQUAL (CAR X) 'G2) (G2))
          (T NIL)))
  (DEFUN EVL2-LIST (X-LST A)
    (DECLARE (XARGS :MEASURE (ACL2-COUNT X-LST)
                    :WELL-FOUNDED-RELATION O<))
    (COND ((ENDP X-LST) NIL)
          (T (CONS (EVL2 (CAR X-LST) A)
                   (EVL2-LIST (CDR X-LST) A))))))
 (DEFTHM EVL2-CONSTRAINT-1
   (IMPLIES (SYMBOLP X)
            (EQUAL (EVL2 X A) (CDR (ASSOC-EQ X A)))))
 (DEFTHM EVL2-CONSTRAINT-2
   (IMPLIES (AND (CONSP X) (EQUAL (CAR X) 'QUOTE))
            (EQUAL (EVL2 X A) (CADR X))))
 (DEFTHM EVL2-CONSTRAINT-3
   (IMPLIES (AND (CONSP X) (CONSP
(CAR X)))
            (EQUAL (EVL2 X A)
                   (EVL2 (CADDAR X)
                         (PAIRLIS$ (CADAR X)
                                   (EVL2-LIST (CDR X) A))))))
 (DEFTHM EVL2-CONSTRAINT-4
   (IMPLIES (NOT (CONSP X-LST))
            (EQUAL (EVL2-LIST X-LST A) NIL)))
 (DEFTHM EVL2-CONSTRAINT-5
   (IMPLIES (CONSP X-LST)
            (EQUAL (EVL2-LIST X-LST A)
                   (CONS (EVL2 (CAR X-LST) A)
                         (EVL2-LIST (CDR X-LST) A)))))
 (DEFTHM EVL2-CONSTRAINT-6
   (IMPLIES (AND (CONSP X) (EQUAL (CAR X) 'F))
            (EQUAL (EVL2 X A)
                   ; changed f to f2 just below
                   (F2 (EVL2 (CADR X) A)))))
 (DEFTHM EVL2-CONSTRAINT-7
   (IMPLIES (AND (CONSP X) (EQUAL (CAR X) 'F2))
            (EQUAL (EVL2 X A) (F2 (EVL2 (CADR X) A)))))
 (DEFTHM EVL2-CONSTRAINT-8
   (IMPLIES (AND (CONSP X) (EQUAL (CAR X) 'G))
            (EQUAL (EVL2 X A) (G))))
 (DEFTHM EVL2-CONSTRAINT-9
   (IMPLIES (AND (CONSP X) (EQUAL (CAR X) 'G2))
            (EQUAL (EVL2 X A) (G2)))))

(defthm f2-t
  (equal (f2 x) t)
  :hints (("Goal" :by (:functional-instance f-t
                                            (f f2)
                                            (evl evl2)
                                            (evl-list evl2-list)))))

(defthm bug-implies-nil
  nil
  :hints (("Goal" :use ((:instance f2-t (x t)))))
  :rule-classes nil)

Finally, we also require that no function has an attachment (see defattach) that is both ancestral in the evaluator and also ancestral in the meta or clause-processor functions. (If you don't use defattach then you can ignore this condition.) Without this restriction, the following events prove nil.

(in-package "ACL2")

(defstub f () t)

(defevaluator evl evl-list ((f)))

(defun my-meta-fn (x)
  (if (equal x '(f))
      (list 'quote (f))
    x))

(defthm my-meta-fn-correct
  (equal (evl x a) (evl (my-meta-fn x) a))
  :rule-classes ((:meta :trigger-fns (f))))

(defun constant-nil ()
  (declare (xargs :guard t))
  nil)

(defattach f constant-nil)

(defthm f-is-nil ; proved using my-meta-fn-correct
  (equal (f) nil)
  :rule-classes nil)

(defthm contradiction
  nil
  :hints (("Goal" :use ((:functional-instance f-is-nil
                                              (f (lambda () t))))))
  :rule-classes nil)

To see why this restriction is sufficient, see a comment in the ACL2 source code entitled ``; Essay on Correctness of Meta Reasoning.''
(2) MA HK12x2 Custom box Design help... [Archive] - Car Audio Forum - CarAudio.com

03-21-2008, 03:11 AM

Hey, I have everything that I need as far as on paper... I was just wondering if ANYONE ( Knowledgeable ) could give me anymore advice, pointers, or point out something that might go wrong with this ... I REALLY WOULD LOVE FOR SOMEONE TO SKETCH ME A MODEL... I REALLY WOULD APPRECIATE IT!!!!! HERE WE GO!

MA HK12x2 Spec'd enclosure ( Ported ):
Cuft: 3cuft
Hz: 38hz
Sq Port: 4x4
Sq Port Length: 14.5"
Number of Ports: 3

Max Dimensions ( 99 Nissan Sentra ):
External Dimensions: 42"L x 22"D x 14"H
Internal Dimensions: 40.5"L x 20.5"D x 12.5"H
(2) 11.5" for the sub holes...

BOX CUTS!
Top: 14 x 42 x 22
Bottom: 14 x 42 x 22
Front Wall: 42 x 12.5
Back Wall: 42 x 12.5
Left Side Wall: 20.5 x 12.5
Right Side Wall: 20.5 x 12.5

* NOTE * ... Front wall will have the three ports centered on the bottom, in the middle. 2 Ports will be on the bottom in the middle, and the third will be centered in the middle stacked on top. ( Think of a PYRAMID, and you'll get the picture )

^^^^^^^ LIKE THAT.... Exactly!

* Each port external dimensions is 4" x 14.5"
Top: 4 x 14.5
Bottom: 4 x 14.5
Left Side Wall: 2.5 x 14.5
Right Side Wall: 2.5 x 14.5

* NOTE * Remember! There are 3 ports. All are IDENTICAL ( Triplets )

To get DIRECTLY in the middle, it'll be:
42" MINUS 8" = 34"... 34" DIVIDED BY 2 = 17" on each side of the ports

To get DIRECTLY in the middle for the single stacked port, it'll be:
8" MINUS 4" = 4"... 2"s will be on each side of the stacked port, making it centered. A PERFECT PYRAMID!!

Here is a quick lil' sketch, to give you an IDEA of what I'm picturing in my head.
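A quick Python sanity check of the numbers in the post (the 0.75" wall thickness is inferred from the difference between the external and internal dimensions; it isn't stated in the post):

```python
# External dimensions of the box, in inches.
ext = (42.0, 22.0, 14.0)

# Internal dimensions imply 3/4" (0.75") walls: 42 - 2*0.75 = 40.5, etc.
internal = tuple(d - 2 * 0.75 for d in ext)
assert internal == (40.5, 20.5, 12.5)

# Gross internal volume, before subtracting sub and port displacement.
gross_cuft = (40.5 * 20.5 * 12.5) / 1728  # cubic inches -> cubic feet
print(round(gross_cuft, 2))  # about 6 cu ft gross for the pair of 3 cu ft subs

# Port centering: two 4"-wide ports side by side span 8" of the 42" width.
side_margin = (42 - 8) / 2
assert side_margin == 17  # 17" on each side, as in the post

# The stacked third port (4" wide, centered on the 8" pair) leaves 2" per side.
assert (8 - 4) / 2 == 2
```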
SAS Program

A SAS Program for the 2000 CDC Growth Charts (ages 0 to <20 y)

The purpose of this SAS program is to calculate the percentiles and z-scores (standard deviations) for a child's sex and age for BMI, weight, height, and head circumference based on the CDC growth charts. Weight-for-height percentiles and z-scores are also calculated. Observations that contain extreme values are flagged as being biologically implausible. These extreme values, however, are not necessarily incorrect. Although the SAS program can be used to calculate z-scores and percentiles for children up to 20 years of age, the World Health Organization (WHO) growth charts are recommended for children <24 months of age. There are several computer programs available on the WHO and CDC sites that use the WHO growth charts; the SAS program for the WHO growth charts follows the same steps as does this SAS program for the CDC growth charts.

The SAS program, cdc-source-code.sas (files are below, in step #1), calculates these z-scores and percentiles for children in your data based on reference data in CDCref_d.sas7bdat. If you're not using SAS, you can download CDCref_d.csv, and create a program based on CDC-source-code.sas to do the calculations.

Instructions for SAS users

Step 1: Download the SAS program (cdc-source-code.sas) and the reference data file (CDCref_d.sas7bdat). Do not alter these files, but move them to a folder (directory) that SAS can access. If you are using Chrome or Firefox, right click to save the cdc-source-code.sas file. For the following example, the files have been saved in c:\sas\growth charts\cdc\data.

Step 2: Create a libname statement in your SAS program to point at the folder location of 'CDCref_d.sas7bdat'. An example would be:

libname refdir 'c:\sas\growth charts\cdc\data';

Note the SAS code expects this name to be refdir; do not change this name.
Step 3: Set your existing dataset containing height, weight, sex, age and other variables into a temporary dataset, named mydata. Variables in your dataset should be renamed and coded as follows:

Table 1

agemos: Child's age in months; must be present. The program assumes you know the number of months to the nearest day based on the dates of birth and examination. For example, if a child was born on Oct 1, 2007 and was examined on Nov 15, 2011, the child's age would be 1506 days or 49.48 months. In everyday usage, this age would be stated as 4 years or as 49 months. However, if 49 months were used as the age of all children who were between 49.0 and <50 months in your data, the estimated z-scores would be slightly too high because, on average, these children would be taller, weigh more, and have a higher BMI than children who are exactly 49.0 months of age. This bias would be greater if only completed years of age were known, and the age of all children between 4 and <5 years was represented as 48 months. If age is known only as the completed number of months (as is data from NHANES 1988-1994 and 1999-2010), consider adding 0.5 so that the maximum error would be 15 days. If age is given as the completed number of years, multiply by 12 and consider adding 6.

sex: Coded as 1 for boys and 2 for girls.

height: Height in cm. This is either standing height (for children who are ≥ 24 months of age) or recumbent length (for children < 24 months of age); both are input as height. If standing height was measured for some children less than 24 months of age, you should add 0.8 cm to these values (see page 8 of http://www.cdc.gov/nchs/data/series/sr_11/sr11_246.pdf). If recumbent length was measured for some children who are ≥ 24 months of age, subtract 0.8 cm.

weight: Weight (kg)

bmi: BMI (Weight (kg) / Height (m)^2). If your data doesn't contain BMI, the program calculates it. If BMI is present in your data, the program will not overwrite it.
headcir: Head circumference (cm)

Z-scores and percentiles for variables that are not in mydata will be coded as missing (.) in the output dataset (named _cdcdata). Sex (coded as 1 for boys and 2 for girls) and agemos must be in mydata. It's unlikely that the SAS code will overwrite other variables in your dataset, but you should avoid having variable names that begin with an underscore, such as _bmi.

Step 4: Copy and paste the following line into your SAS program after the line (or lines) in step #3.

%include 'c:\sas\growth charts\cdc\data\CDC-source-code.sas'; run;

If necessary, change this statement to point at the folder containing the downloaded 'CDC-source-code.sas' file. This tells your SAS program to run the statements in 'CDC-source-code.sas'.

Step 5: Submit the %include statement. This will create a dataset, named _cdcdata, which contains all of your original variables along with z-scores, percentiles, and flags for extreme values. The names and descriptions of these new variables in _cdcdata are in Table 2. Additional information on the extreme z-scores is given in a separate section that follows the "Example SAS Code".

Table 2: Z-scores, percentiles, and extreme (biologically implausible, BIV) values in output dataset, _cdcdata

For each measure, Table 2 lists the percentile and z-score variables, the modified z-score used to identify extreme values, the BIV flag variable, and the cutoffs at which the flag is coded -1 (extremely low z-score) or +1 (extremely high z-score):

Weight-for-age (0 to 239 months, inclusive):          wapct,    waz,    _Fwaz,    _bivwt,  low < -5, high > 5
Height-for-age (0 to 239 months, inclusive):          hapct,    haz,    _Fhaz,    _bivht,  low < -5, high > 3
Weight-for-height (heights 45 to 121 cm; this
height range approximately covers ages 0 to 6 y):     whpct,    whz,    _Fwhz,    _bivwh,  low < -4, high > 5
BMI-for-age (24 to 239 months):                       bmipct,   bmiz,   _Fbmiz,   _bivbmi, low < -4, high > 5
Head circumference-for-age (0 to 35 months,
inclusive):                                           headcpct, headcz, _Fheadcz, _bivhc,  low < -5, high > 5

Step 6: Examine the new dataset, _cdcdata, with PROC MEANS or some other procedure to verify that the z-scores and other variables have been created. If a variable in Table 1 was not in your original dataset (e.g., head circumference), the output dataset will indicate that all values for the percentiles and z-scores of this variable are missing. If values for other variables are unexpectedly missing, make sure that you've renamed and recoded variables as indicated in Table 1 and that your SAS dataset is named mydata. The program should not modify your original data, but will add new variables to your original dataset.

Example SAS code corresponding to steps 2 to 6. You can simply cut and paste these lines into a SAS program, but you'll need to change the libname and %include statements to point at the folders containing the downloaded files.

libname refdir 'c:\sas\growth charts\cdc\data';

data mydata;
  set whatever-your-original-dataset-is-named;

%include 'c:\sas\growth charts\cdc\data\CDC-source-code.sas';

proc means data=_cdcdata;
run;

Additional Information

Z-scores are calculated as Z = [((value / M)**L) - 1] / (S * L), in which 'value' is the child's BMI, weight, height, etc. The L, M, and S values are in CDCref_d.sas7bdat and vary according to the child's sex and age or according to the child's sex and height. Percentiles are then calculated from the z-scores (for example, a z-score of 1.96 would be equal to the 97.5 percentile).
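The z-score formula and the z-to-percentile conversion can be illustrated with a short Python sketch. The L, M, and S values below are made-up placeholders for illustration only; real values come from CDCref_d.sas7bdat and depend on sex and age:

```python
import math

def lms_zscore(value, L, M, S):
    """LMS z-score as described above: z = ((value/M)**L - 1) / (S*L)."""
    return ((value / M) ** L - 1.0) / (S * L)

def z_to_percentile(z):
    """Convert a z-score to a percentile via the standard normal CDF."""
    return 50.0 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical L, M, S values -- illustration only, not from the CDC reference.
L, M, S = -1.6, 16.0, 0.08

# A measurement equal to the median M always gives a z-score of 0.
print(lms_zscore(16.0, L, M, S))

# A z-score of 1.96 corresponds to the 97.5th percentile, as in the text.
print(round(z_to_percentile(1.96), 1))
```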
For more information on the LMS method, see http://

Extreme or Biologically Implausible Values

The SAS code also flags extreme values (biologically implausible values, or BIVs). As explained in the BIV cutoffs documentation, these BIVs are based on modified z-scores that were calculated using a different method. These BIV flag variables are coded as -1 (modified z-score is extremely low), +1 (modified z-score is extremely high), or 0 (modified z-score is between these 2 cut-points). These BIV flags, along with other variables that are in the output dataset, _cdcdata, are shown in Table 2. The modified z-scores (3rd column of Table 2) can be used to construct other cut-points for extreme (or biologically implausible) values. For example, if the distribution of BMI is strongly skewed to the right, you might use _Fbmiz > 8 (rather than 5) as the definition of an extremely high BMI-for-age. This could be recoded as:

if -5 <= _Fbmiz <= 8 then _bivbmi=0;      *plausible;
else if _Fbmiz > 8 then _bivbmi=1;        *high BIV;
else if . < _Fbmiz < -5 then _bivbmi= -1; *low BIV;

There are also 2 overall indicators of extreme values in the output dataset: _bivlow and _bivhigh. These 2 variables indicate whether any measurement is extremely high (_bivhigh=1) or extremely low (_bivlow=1). If a child does not have an extreme value for any measurement, both variables are coded as 0. A biologically implausible value is not necessarily incorrect, but the value should be studied further, possibly in conjunction with other characteristics of the child. For example, if a child's weight is implausibly high, is the child also very tall, and are there other children who weigh nearly as much?

Defining Extreme Obesity (the 99th percentile of BMI-for-age)

The use of the LMS parameters of the CDC growth charts has been shown to result in inaccurate estimates of the empirical percentiles at very high BMI values (e.g., the 99th percentile); see http://www.ajcn.org/content/90/5/1314.full.pdf.
Therefore, rather than using the BMI-for-age percentiles (and z-scores) to identify and track children who are extremely obese, it is recommended that these high BMI values be expressed as a percentage of the 95th percentile. A BMI value that is 20% greater than the 95th percentile (relative to the CDC reference population) is approximately equal to the 99th percentile of the reference population. The SAS code creates a variable, bmipct95, to simplify the use of this definition. This variable expresses a child's BMI as a percentage of the 95th percentile for that child's sex and age. Bmipct95 can range from <50 (for very thin children) to >220 (for very heavy children). A child with a bmipct95 of 100 is at the 95th percentile of BMI-for-age. A value of 120 would indicate that the child's BMI is 20% greater than the 95th percentile.
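A minimal sketch of the bmipct95 calculation, assuming a hypothetical 95th-percentile BMI value (real cutoffs depend on the child's sex and age and come from the reference data):

```python
def bmipct95(bmi, p95):
    """Express a BMI as a percentage of the sex/age-specific 95th percentile."""
    return 100.0 * bmi / p95

# Hypothetical 95th-percentile BMI for illustration only.
p95 = 25.0

print(bmipct95(25.0, p95))  # 100.0 -> exactly at the 95th percentile
print(bmipct95(30.0, p95))  # 120.0 -> 20% above it, roughly the 99th percentile
```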
1911 Encyclopædia Britannica, Volume 1: Algebra

ALGEBRA (from the Arab. al-jebr wa'l-muqābala, transposition and removal [of terms of an equation], the name of a treatise by Mahommed ben Musa al-Khwarizmi), a branch of mathematics which may be defined as the generalization and extension of arithmetic. The subject-matter of algebra will be treated in the following article under three divisions:—A. Principles of ordinary algebra; B. Special kinds of algebra; C. History. Special phases of the subject are treated under their own headings, e.g. Algebraic Forms; Binomial; Combinatorial Analysis; Determinants; Equation; Continued Fraction; Function; Groups, Theory of; Logarithm; Number; Probability;

A. Principles of Ordinary Algebra

1. The above definition gives only a partial view of the scope of algebra. It may be regarded as based on arithmetic, or as dealing in the first instance with formal results of the laws of arithmetical number; and in this sense Sir Isaac Newton gave the title Universal Arithmetic to a work on algebra. Any definition, however, must have reference to the state of development of the subject at the time when the definition is given.

2. The earliest algebra consists in the solution of equations. The distinction between algebraical and arithmetical reasoning then lies mainly in the fact that the former is in a more condensed form than the latter; an unknown quantity being represented by a special symbol, and other symbols being used as a kind of shorthand for verbal expressions. This form of algebra was extensively studied in ancient Egypt; but, in accordance with the practical tendency of the Egyptian mind, the study consisted largely in the treatment of particular cases, very few general rules being obtained.

3.
For many centuries algebra was confined almost entirely to the solution of equations; one of the most important steps being the enunciation by Diophantus of Alexandria of the laws governing the use of the minus sign. The knowledge of these laws, however, does not imply the existence of a conception of negative quantities. The development of symbolic algebra by the use of general symbols to denote numbers is due to Franciscus Vieta (François Viète, 1540-1603). This led to the idea of algebra as generalized arithmetic. 4. The principal step in the modern development of algebra was the recognition of the meaning of negative quantities. This appears to have been due in the first instance to Albert Girard (1595-1632), who extended Vieta's results in various branches of mathematics. His work, however, was little known at the time, and later was overshadowed by the greater work of Descartes (1596-1650). 5. The main work of Descartes, so far as algebra was concerned, was the establishment of a relation between arithmetical and geometrical measurement. This involved not only the geometrical interpretation of negative quantities, but also the idea of continuity; this latter, which is the basis of modern analysis, leading to two separate but allied developments, viz. the theory of the function and the theory of limits. 6. The great development of all branches of mathematics in the two centuries following Descartes has led to the term algebra being used to cover a great variety of subjects, many of which are really only ramifications of arithmetic, dealt with by algebraical methods, while others, such as the theory of numbers and the general theory of series, are outgrowths of the application of algebra to arithmetic, which involve such special ideas that they must properly be regarded as distinct subjects. 
Some writers have attempted unification by treating algebra as concerned with functions, and Comte accordingly defined algebra as the calculus of functions, arithmetic being regarded as the calculus of values. 7. These attempts at the unification of algebra, and its separation from other branches of mathematics, have usually been accompanied by an attempt to base it, as a deductive science, on certain fundamental laws or general rules; and this has tended to increase its difficulty. In reality, the variety of algebra corresponds to the variety of phenomena. Neither mathematics itself, nor any branch or set of branches of mathematics, can be regarded as an isolated science. While, therefore, the logical development of algebraic reasoning must depend on certain fundamental relations, it is important that in the early study of the subject these relations should be introduced gradually, and not until there is some empirical acquaintance with the phenomena with which they are concerned. 8. The extension of the range of subjects to which mathematical methods can be applied, accompanied as it is by an extension of the range of study which is useful to the ordinary worker, has led in the latter part of the 19th century to an important reaction against the specialization mentioned in the preceding paragraph. This reaction has taken the form of a return to the alliance between algebra and geometry (§5), on which modern analytical geometry is based; the alliance, however, being concerned with the application of graphical methods to particular cases rather than to general expressions. These applications are sometimes treated under arithmetic, sometimes under algebra; but it is more convenient to regard graphics as a separate subject, closely allied to arithmetic, algebra, mensuration and analytical geometry. 9. 
The association of algebra with arithmetic on the one hand, and with geometry on the other, presents difficulties, in that geometrical measurement is based essentially on the idea of continuity, while arithmetical measurement is based essentially on the idea of discontinuity; both ideas being equally matters of intuition. The difficulty first arises in elementary mensuration, where it is partly met by associating arithmetical and geometrical measurement with the cardinal and the ordinal aspects of number respectively (see Arithmetic). Later, the difficulty recurs in an acute form in reference to the continuous variation of a function. Reference to a geometrical interpretation seems at first sight to throw light on the meaning of a differential coefficient; but closer analysis reveals new difficulties, due to the geometrical interpretation itself. One of the most recent developments of algebra is the algebraic theory of number, which is devised with the view of removing these difficulties. The harmony between arithmetical and geometrical measurement, which was disturbed by the Greek geometers on the discovery of irrational numbers, is restored by an unlimited supply of the causes of disturbance. 10. Two other developments of algebra are of special importance. The theory of sequences and series is sometimes treated as a part of elementary algebra; but it is more convenient to regard the simpler cases as isolated examples, leading up to the general theory. The treatment of equations of the second and higher degrees introduces imaginary and complex numbers, the theory of which is a special subject. 11. One of the most difficult questions for the teacher of algebra is the stage at which, and the extent to which, the ideas of a negative number and of continuity may be introduced. On the one hand, the modern developments of algebra began with these ideas, and particularly with the idea of a negative number. 
On the other hand, the lateness of occurrence of any particular mathematical idea is usually closely correlated with its intrinsic difficulty. Moreover, the ideas which are usually formed on these points at an early stage are incomplete; and, if the incompleteness of an idea is not realized, operations in which it is implied are apt to be purely formal and mechanical. What are called negative numbers in arithmetic, for instance, are not really negative numbers but negative quantities (§ 27 (i.)); and the difficulties incident to the ideas of continuity have already been pointed out. 12. In the present article, therefore, the main portions of elementary algebra are treated in one section, without reference to these ideas, which are considered generally in two separate sections. These three sections may therefore be regarded as to a certain extent concurrent. They are preceded by two sections dealing with the introduction to algebra from the arithmetical and the graphical sides, and are followed by a section dealing briefly with the developments mentioned in §§ 9 and 10 above. I. Arithmetical Introduction to Algebra 13. Order of Arithmetical Operations.—It is important, before beginning the study of algebra, to have a clear idea as to the meanings of the symbols used to denote arithmetical operations. (i.) Additions and subtractions are performed from left to right. Thus 3 lb + 5 lb - 7 lb + 2 lb means that 5 lb is to be added to 3 lb, 7 lb subtracted from the result, and 2 lb added to the new result. (ii.) The above operation is performed with 1 lb as the unit of counting, and the process would be the same with any other unit; e.g. we should perform the same process to find 3s. + 5s. - 7s. + 2s. Hence we can separate the numbers from the common unit, and replace 3 lb + 5 lb - 7 lb + 2 lb by (3 + 5 - 7 + 2) lb, the additions and subtractions being then performed by means of an addition-and-subtraction table. (iii.) Multiplications, represented by ×, are performed from right to left.
Thus 5×3×7×1 lb means 5 times 3 times 7 times 1 lb; i.e. it means that 1 lb is to be multiplied by 7, the result by 3, and the new result by 5. We may regard this as meaning the same as 5×3×7 lb, since 7 lb itself means 7×1 lb, and the lb is the unit in each case. But it does not mean the same as 5×21 lb, though the two are equal, i.e. give the same result (see § 23). This rule as to the meaning of × is important. If it is intended that the first number is to be multiplied by the second, a special sign should be used. (iv.) The sign ÷ means that the quantity or number preceding it is to be divided by the quantity or number following it. (v.) The use of the solidus / separating two numbers is for convenience of printing fractions or fractional numbers. Thus 16/4 does not mean 16 ÷ 4, but $\textstyle{16 \over 4}$. (vi.) Any compound operation not coming under the above descriptions is to have its meaning made clear by brackets; the use of a pair of brackets indicating that the expression between them is to be treated as a whole. Thus we should not write 8×7+6, but (8×7)+6, or 8×(7+6). The sign × coming immediately before, or immediately after, a bracket may be omitted; e.g. 8×(7+6) may be written 8(7+6). This rule as to using brackets is not always observed, the convention sometimes adopted being that multiplications or divisions are to be performed before additions or subtractions. The convention is even pushed to such an extent as to make "4½+3⅔ of 7+5" mean "4½+(3⅔ of 7)+5"; though it is not clear what "Find the value of 4½+3⅔ times 7+5" would then mean. There are grave objections to an arbitrary rule of this kind, the chief being the useless waste of mental energy in remembering it. (vii.)
The only exception that may be made to the above rule is that an expression involving multiplication-dots only, or a simple fraction written with the solidus, may have the brackets omitted for additions or subtractions, provided the figures are so spaced as to prevent misunderstanding. Thus 8+(7×6)+3 may be written 8+7•6+3, and 8+$\textstyle{7 \over 6}$+3 may be written 8+7/6+3. But $\textstyle{3 \cdot 5 \over 2 \cdot 4}$ should be written (3•5)/(2•4), not 3•5/2•4. 14. Latent Equations.—The equation exists, without being shown as an equation, in all those elementary arithmetical processes which come under the head of inverse operations; i.e. processes which consist in obtaining an answer to the question "Upon what has a given operation to be performed in order to produce a given result?" or to the question "What operation of a given kind has to be performed on a given quantity or number in order to produce a given result?" (i.) In the case of subtraction the second of these two questions is perhaps the simpler. Suppose, for instance, that we wish to know how much will be left out of 10s. after spending 3s., or how much has been spent out of 10s. if 3s. is left. In either case we may put the question in two ways:—(a) What must be added to 3s. in order to produce 10s., or (b) To what must 3s. be added in order to produce 10s. If the answer to the question is X, we have either (a) 10s.=3s.+X,∴X=10s.-3s. or (b) 10s.=X+3s.,∴X=10s.-3s. (ii.) In the above case the two different kinds of statement lead to arithmetical formulae of the same kind. In the case of division we get two kinds of arithmetical formula, which, however, may be regarded as requiring a single kind of numerical process in order to determine the final result. (a) If 24d. is divided into 4 equal portions, how much will each portion be? Let the answer be X; then $\textstyle {\rm 24d.=4 \times X, \ \therefore X= {1 \over 4} \ of \ 24d.}$ (b) Into how many equal portions of 6d. each may 24d. be divided?
Let the answer be x; then $\rm 24d.=\it x \times \rm 6d., \ \therefore \it x=\rm24d. \div 6d.$ (iii.) Where the direct operation is evolution, for which there is no commutative law, the two inverse operations are different in kind. (a) What would be the dimensions of a cubical vessel which would exactly hold 125 litres; a litre being a cubic decimetre? Let the answer be X; then $\rm 125 \ c.dm.=X^3,\ \therefore X = \sqrt[3]{125 \ \rm c.dm.} = \sqrt[3]{125} \ dm$ (b) To what power must 5 be raised to produce 125? Let the answer be x; then $125=5^x, \ \therefore \ x= \log_5 125.$ 15. With regard to the above, the following points should be noted. (1) When what we require to know is a quantity, it is simplest to deal with this quantity as a whole. In (i.), for instance, we want to find the amount by which 10s. exceeds 3s., not the number of shillings in this amount. It is true that we obtain this result by subtracting 3 from 10 by means of a subtraction-table (concrete or ideal); but this table merely gives the generalized results of a number of operations of addition or subtraction performed with concrete units. We must count with something; and the successive somethings obtained by the addition of successive units are in fact numerical quantities, not numbers. Whether this principle may legitimately be extended to the notation adopted in (iii.) (a) of § 14 is a moot point. But the present tendency is to regard the early association of arithmetic with linear measurement as important; and it seems to follow that we may properly (at any rate at an early stage of the subject) multiply a length by a length, and the product again by another length, the practice being dropped when it becomes necessary to give a strict definition of multiplication. (2) The results may be stated briefly as follows, the more usual form being adopted under (iii.)(a):— $\quad \rm (i.) 
\ If \ A=B+X, \ or=X+B, \ then \ X=A-B.$ $(\rm ii.)(\it a \rm) \ If \ A=\it m \ \rm times \ X, \ then \ X = \textstyle {{1 \over \it m}} \ of \ A.$ $(\it b \rm) \ If \ A= \it x \ \rm times \ M, \ then \ \it x =\rm A \div M.$ $(\rm iii.)(\it a \rm) \ If \ \it n=x^p \rm , \ then \ \it x=\sqrt[p]{n}.$ $(\it b \rm) \ If \it \ n=a^x, \ \rm then \ \it x= \log_a n.$ The important thing to notice is that where, in any of these five cases, one statement is followed by another, the second is not to be regarded as obtained from the first by logical reasoning involving such general axioms as that "if equals are taken from equals the remainders are equal"; the fact being that the two statements are merely different ways of expressing the same relation. To say, for instance, that X is equal to A - B, is the same thing as to say that X is a quantity such that X and B, when added, make up A; and the above five statements of necessary connexion between two statements of equality are in fact nothing more than definitions of the symbols $-, \ \textstyle {{1 \over \it m}} \ \rm of, \ \div, \ \sqrt[p]{}$, and $\ \it \log_a$. An apparent difficulty is that we use a single symbol - to denote the result of the two different statements in (i.) (a) and (i.) (b) of § 14. This is due to the fact that there are really two kinds of subtraction, respectively involving counting forwards (complementary addition) and counting backwards (ordinary subtraction); and it suggests that it may be wise not to use the one symbol - to represent the result of both operations until the commutative law for addition has been fully grasped. 16. In the same way, a statement as to the result of an inverse operation is really, by the definition of the operation, a statement as to the result of a direct operation. If, for instance, we state that A = X-B, this is really a statement that X = A+B.
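The five definitional relations of § 15 (2) lend themselves to numerical verification with particular values. The following sketch, in modern notation (Python) and no part of the original article, checks each of the five with arbitrarily chosen numbers:

```python
import math

# (i.) If A = B + X, then X = A - B.
A, B = 10, 3
X = A - B
assert B + X == A

# (ii.)(a) If A = m times X, then X = (1/m) of A.
m, X2 = 4, 6
A2 = m * X2
assert A2 / m == X2

# (ii.)(b) If A = x times M, then x = A / M.
M = 6
x = A2 / M
assert x * M == A2

# (iii.)(a) If n = x^p, then x is the p-th root of n.
p, n = 3, 125
root = n ** (1 / p)            # approximately 5
assert abs(root ** p - n) < 1e-9

# (iii.)(b) If n = a^x, then x = log_a n.
a = 5
xlog = math.log(n, a)          # approximately 3
assert abs(a ** xlog - n) < 1e-6
```

Each assertion restates the corresponding direct operation, in accordance with the remark that the paired statements are merely different ways of expressing the same relation.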
Thus, corresponding to the results under § 15 (2), we have the following:— (1) Where the inverse operation is performed on the unknown quantity or number:— $\quad \rm (i.) \ If \ A=X-B, \ then \ X=A+B.$ $(\rm ii.)(\it a \rm) \ If \ M = \textstyle {{1 \over \it m}} \ of \ X, \ then \ X=\it m \ \rm times \ M.$ $(\it b \rm) \ If \ \it m \rm =X \div M,\ then \ X= \it m \ \rm times \ M.$ $(\rm iii.)(\it a \rm) \ If \it \ a=\sqrt[p]{x} \rm ,\ then \ \it x=a^p.$ $(\it b \rm) \ If \it \ p= \log_a x \rm, \ then \it \ x=a^p.$ (2) Where the inverse operation is performed with the unknown quantity or number:— $\quad \rm (i.) \ If \ B=A-X, \ then \ A=B+X.$ $(\rm ii.)(\it a \rm) \ If \ \it m \rm =A \div X,\ then \ A= \it m \ \rm times \ X.$ $(\it b \rm) \ If \ M = \textstyle {{1 \over \it x}} \ of \ A, \ then \ A=\it x \ \rm times \ M.$ $(\rm iii.)(\it a \rm) \ If \it \ p= \log_x n \rm, \ then \it \ n=x^p.$ $(\it b \rm) \ If \it \ a=\sqrt[x]{n} \rm ,\ then \ \it n=a^x.$ In each of these cases, however, the reasoning which enables us to replace one statement by another is of a different kind from the reasoning in the corresponding cases of § 15. There we proceeded from the direct to the inverse operations; i.e. so far as the nature of arithmetical operations is concerned, we launched out on the unknown. In the present section, however, we return from the inverse operation to the direct; i.e. we rearrange our statement in its simplest form. The statement, for instance, that 32-x = 25, is really a statement that 32 is the sum of x and 25. 17. The five equalities which stand first in the five pairs of equalities in §15(2) may therefore be taken as the main types of a simple statement of equality. When we are familiar with the treatment of quantities by equations, we may ignore the units and deal solely with numbers; and (ii.)(a) and (ii.)(b) may then, by the commutative law for multiplication, be regarded as identical.
The five processes of deduction then reduce to four, which may be described as (i.) subtraction, (ii.) division, (iii.) (a) taking a root, (iii.) (b) taking logarithms. It will be found that these (and particularly the first three) cover practically all the processes legitimately adopted in the elementary theory of the solution of equations; other processes being sometimes liable to introduce roots which do not satisfy the original equation. 18. It should be noticed that we are still dealing with the elementary processes of arithmetic, and that all the numbers contemplated in §§ 14-17 are supposed to be positive integers. If, for instance, we are told that 15 = ¾ of (x-2), what is meant is that (1) there is a number u such that u = x-2, (2) there is a number v such that u = 4 times v, and (3) 15 = 3 times v. From these statements, working backwards, we find successively that v = 5, u = 20, x = 22. The deductions follow directly from the definitions, and such mechanical processes as "clearing of fractions" find no place (§ 21 (ii.)). The extension of the methods to fractional numbers is part of the establishment of the laws governing these numbers (§ 27 (ii.)). 19. Expressed Equations.—The simplest forms of arithmetical equation arise out of abbreviated solutions of particular problems. In accordance with § 15, it is desirable that our statements should be statements of equality of quantities rather than of numbers; and it is convenient in the early stages to have a distinctive notation, e.g. to represent the former by capital letters and the latter by small letters. As an example, take the following. I buy 2 lb of tea, and have 6s. 8d. left out of 10s.; how much per lb did tea cost? (1) In ordinary language we should say: Since 6s. 8d. was left, the amount spent was 10s. - 6s. 8d., i.e. was 3s. 4d. Therefore 1 lb of tea cost 1s. 8d. (2) The first step towards arithmetical reasoning in such a case is the introduction of the sign of equality. Thus we say:— $\rm Cost \ of \ 2 \ lb \ tea+6s.
\ 8d.=10s.$ $\therefore \rm Cost \ of \ 2 \ lb \ tea=10s.-6s. \ 8d.=3s. \ 4d.$ $\therefore \rm Cost \ of \ 1 \ lb \ tea=1s. \ 8d.$ (3) The next step is to show more distinctly the unit we are dealing with (in addition to the money unit), viz. the cost of 1 lb tea. We write:— $(2 \times \rm cost \ of \ 1 \ lb \ tea)+6s. \ 8d.=10s.$ $\therefore 2 \times \rm cost \ of \ 1 \ lb \ tea=10s.-6s. \ 8d.=3s. \ 4d.$ $\therefore \rm Cost \ of \ 1 \ lb \ tea=1s. \ 8d.$ (4) The stage which is introductory to algebra consists merely in replacing the unit "cost of 1 lb tea" by a symbol, which may be a letter or a mark such as the mark of interrogation, the asterisk, &c. If we denote this unit by X, we have $(2 \times \rm X)+6s. \ 8d.=10s.$ $\therefore 2 \times \rm X=10s.-6s. \ 8d.=3s. \ 4d.$ $\therefore \rm X=1s. \ 8d.$ 20. Notation of Multiples.—The above is arithmetic. The only thing which it is necessary to import from algebra is the notation by which we write 2X instead of 2 × X or 2•X. This is rendered possible by the fact that we can use a single letter to represent a single number or numerical quantity, however many digits are contained in the number. It must be remembered that, if a is a number, 3a means 3 times a, not a times 3; the latter must be represented by a × 3 or a • 3. The number by which an algebraical expression is to be multiplied is called its coefficient. Thus in 3a the coefficient of a is 3. But in 3 • 4a the coefficient of 4a is 3, while the coefficient of a is 3 • 4. 21. Equations with Fractional Coefficients.—As an example of a special form of equation we may take $\textstyle {{1 \over 2}x + {1 \over 3}x = 10.}$ (i.) There are two ways of proceeding. (a) The statement is that (1) there is a number u such that x = 2u, (2) there is a number v such that x = 3v, and (3) u+v = 10. We may therefore conveniently take as our unit, in place of x, a number y such that x = 6y. We then have 3y+2y = 10, 5y = 10, y = 2, x = 6y = 12. (b) We can collect coefficients, i.e.
combine the separate quantities or numbers expressed in terms of x as unit into a single quantity or number so expressed, obtaining $\textstyle {{5 \over 6}}x=10$. By successive stages we obtain (§ 18) $\textstyle {{1 \over 6}}$x = 2, x = 12; or we may write at once x = $\textstyle {{1 \over 5/6}}$ of 10 = $\textstyle {{6 \over 5}}$ of 10 = 12. The latter is the more advanced process, implying some knowledge of the laws of fractional numbers, as well as an application of the associative law (§ 26 (i.)). (ii.) Perhaps the worst thing we can do, from the point of view of intelligibility, is to "clear of fractions" by multiplying both sides by 6. It is no doubt true that, if $\textstyle {{1 \over 2}}$x+$\textstyle {{1 \over 3}}$x = 10, then 3x+2x = 60 (and similarly, if $\textstyle {{1 \over 2}}$x+$\textstyle {{1 \over 3}}$x+$\textstyle {{1 \over 6}}$x = 10, then 3x+2x+x = 60); but the fact, however interesting it may be, is of no importance for our present purpose. In the method (a) above there is indeed a multiplication by 6; but it is a multiplication arising out of subdivision, not out of repetition (see Arithmetic), so that the total (viz. 10) is unaltered. 22. Arithmetical and Algebraical Treatment of Equations.—The following will illustrate the passage from arithmetical to algebraical reasoning. "Coal costs 3s. a ton more this year than last year. If 4 tons last year cost 104s., how much does a ton cost this year?" If we write X for the cost per ton this year, we have $4(X-3s.)=104s. \ $ From this we can deduce successively X - 3s. = 26s., X = 29s. But, if we transform the equation into $4X-12s.=104s., \ $ we make an essential alteration. The original statement was with regard to X-3s. as the unit; and from this, by the application of the distributive law (§ 26 (i.)), we have passed to a statement with regard to X as the unit. This is an algebraical process.
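The coal example of § 22 can be followed numerically. The sketch below (Python, not part of the original article; shillings are the implicit unit throughout) works the problem both with X-3 as the unit and, after the distributive transformation, with X as the unit:

```python
# Arithmetical form: 4(X - 3) = 104, treating X - 3 as the unit.
u = 104 / 4           # X - 3 = 26
X_arith = u + 3       # X = 29

# Algebraical form, after the distributive law: 4X - 12 = 104.
X_alg = (104 + 12) / 4

# Both routes give the cost per ton this year, 29s.
assert X_arith == X_alg == 29
```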
In the same way, the transition from (x²+4x+4)-4 = 21 to x²+4x+4 = 25, or from (x+2)² = 25 to x+2 = √25, is arithmetical; but the transition from x²+4x+4 = 25 to (x+2)² = 25 is algebraical, since it involves a change of the number we are thinking about. Generally, we may say that algebraic reasoning in reference to equations consists in the alteration of the form of a statement rather than in the deduction of a new statement; i.e. it cannot be said that "If A = B, then E = F" is arithmetic, while "If C = D, then E = F" is algebra. Algebraic treatment consists in replacing either of the terms A or B by an expression which we know from the laws of arithmetic to be equivalent to it. The subsequent reasoning is arithmetical. 23. Sign of Equality.—The various meanings of the sign of equality (=) must be distinguished. $\rm(i.) \ 4 \times 3 \ lb=12 \ lb.$ This states that the result of the operation of multiplying 3 lb by 4 is 12 lb. $\rm(ii.) \ 4 \times 3 \ lb=3 \times 4 \ lb.$ This states that the two operations give the same result; i.e. that they are equivalent. $\rm(iii.) \ A \texttt{'} s \ share = 5s., \ or$ $\rm 3 \ times \ A \texttt{'} s \ share = 15s.$ Either of these is a statement of fact with regard to a particular quantity; it is usually called an equation, but sometimes a conditional equation, the term "equation" being then extended to cover (i.) and (ii.). $\rm (iv.) \ \it x\rm^3=\it x \times x \times x.$ This is a definition of x³; the sign = is in such cases usually replaced by ≡. $\rm (v.) \ 24d.=2s.$ This is usually regarded as being, like (ii.), a statement of equivalence. It is, however, only true if 1s. is equivalent to 12d., and the correct statement is then $\rm \textstyle {{1s. \over 12d.}} \times 24d. =2s.$ If the operator $\rm \textstyle {{1s. \over 12d.}} \times$ is omitted, the statement is really an equation, giving 1s. in terms of 1d. or vice versa. 
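The operator $\rm \textstyle {{1s. \over 12d.}} \times$ of (v.) may be imitated by an explicit conversion factor. A minimal sketch follows (Python, not part of the original article; the constant and function names are illustrative only):

```python
# The statement "24d. = 2s." is exact only under the operator (1s./12d.)x.
SHILLINGS_PER_PENNY = 1 / 12    # the conversion operator

def pence_to_shillings(d):
    """Apply the operator (1s./12d.)x to a quantity of d pence."""
    return d * SHILLINGS_PER_PENNY

assert pence_to_shillings(24) == 2
```

Omitting the operator, as the text observes, leaves merely an equation giving 1s. in terms of 1d.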
The following statements should be compared:— X = A's share = $\textstyle {{3 \over 2}}$ of £10 = 3×£5 = £15. X = A's share = $\textstyle {{3 \over 2}}$ of £10 = $\textstyle {{1 \over 2}}$ of £30 = £15. In each case, the first sign of equality comes under (iv.) above, the second under (iii.), and the fourth under (i.); but the third sign comes under (i.) in the first case (the statement being that ½ of £10 = £5) and under (ii.) in the second. It will be seen from § 22 that the application of algebra to equations consists in the interchange of equivalent expressions, and therefore comes under (i.) and (ii.). We replace 4(x-3), for instance, by 4x-4•3, because we know that, whatever the value of x may be, the result of subtracting 3 from it and multiplying the remainder by 4 is the same as the result of finding 4x and 4 • 3 separately and subtracting the latter from the former. A statement such as (i.) or (ii.) is sometimes called an identity. The two expressions whose equality is stated by an equation or an identity are its members. 24. Use of Letters in General Reasoning.—It may be assumed that the use of letters to denote quantities or numbers will first arise in dealing with equations, so that the letter used will in each case represent a definite quantity or number; such general statements as those of §§ 15 and 16 being deferred to a later stage. In addition to these, there are cases in which letters can usefully be employed for general arithmetical reasoning. (i.) There are statements, such as A+B = B+A, which are particular cases of the laws of arithmetic, but need not be expressed as such. For multiplication, for instance, we have the statement that, if P and Q are two quantities, containing respectively p and q of a particular unit, then p×Q = q×P; or the more abstract statement that p×q = q×p. (ii.) The general theory of ratio and proportion requires the use of general symbols. (iii.)
The general statement of the laws of operation of fractions is perhaps best deferred until we come to fractional numbers, when letters can be used to express the laws of multiplication and division of such numbers. (iv.) Variation is generally included in text-books on algebra, but apparently only because the reasoning is general. It is part of the general theory of quantitative relation, and in its elementary stages is a suitable subject for graphical treatment (§ 31). 25. Preparation for Algebra.—The calculation of the values of simple algebraical expressions for particular values of letters involved is a useful exercise, but its tediousness is apt to make the subject repulsive. What is more important is to verify particular examples of general formulae. These formulae are of two kinds:—(a) the general properties, such as m(a+b) = ma+mb, on which algebra is based, and (b) particular formulae such as (x-a)(x+a) = x²-a². Such verifications are of value for two reasons. In the first place, they lead to an understanding of what is meant by the use of brackets and by such a statement as 3(7+2) = 3•7+3•2. This does not mean (cf. § 23) that the algebraic result of performing the operation 3(7+2) is 3•7+3•2; it means that if we convert 7+2 into the single number 9 and then multiply by 3 we get the same result as if we converted 3•7 and 3•2 into 21 and 6 respectively and added the results. In the second place, particular cases lay the foundation for the general formulae. Exercises in the collection of coefficients of various letters occurring in a complicated expression are usually performed mechanically, and are probably of very little value. 26. General Arithmetical Theorems. (i.) The fundamental laws of arithmetic (q.v.) should be constantly borne in mind, though not necessarily stated. The following are some special points.
(a) The commutative law and the associative law are closely related, and it is best to establish each law for the case of two numbers before proceeding to the general case. In the case of addition, for instance, suppose that we are satisfied that in a+b+c+d+e we may take any two, as b and c, together (association) and interchange them (commutation). Then we have a+b+c+d+e = a+c+b+d+e. Thus any pair of adjoining numbers can be interchanged, so that the numbers can be arranged in any order. (b) The important form of the distributive law is m(A+B) = mA+mB. The form (m+n)A = mA+nA follows at once from the fact that A is the unit with which we are dealing. (c) The fundamental properties of subtraction and of division are that A-B+B = A and m×$\textstyle {{1 \over \it m}}$ of A = A, since in each case the second operation restores the original quantity with which we started. (ii.) The elements of the theory of numbers belong to arithmetic. In particular, the theorem that if n is a factor of a and of b it is also a factor of pa±qb, where p and q are any integers, is important in reference to the determination of greatest common divisor and to the elementary treatment of continued fractions. Graphic methods are useful here (§ 34 (iv.)). The law of relation of successive convergents to a continued fraction involves more advanced methods (see § 42 (iii.) and Continued Fraction). (iii.) There are important theorems as to the relative value of fractions; e.g. (a) If $\textstyle {{\it a \over b}}$ = $\textstyle {{\it c \over d}}$ then each = $\textstyle {{\it pa \pm qc \over pb \pm qd}}$. (b) $\textstyle {{\it a+n \over b+n}}$ is nearer to 1 than $\textstyle {{\it a \over b}}$ is; and, generally, if $\textstyle {{\it a \over b}}$ ≠ $\textstyle {{\it c \over d}}$, then $\textstyle {{\it pa+qc \over pb+qd}}$ lies between the two. (All the numbers are, of course, supposed to be positive.) 27. Negative Quantities and Fractional Numbers.—(i.)
What are usually called "negative numbers" in arithmetic are in reality not negative numbers but negative quantities. If a person has to receive 7s. and pay 5s., with a net result of +2s., the order of the operations is immaterial. If he pays first, he then has -5s. This is sometimes treated as a debt of 5s.; an alternative method is to recognize that our zero is really arbitrary, and that in fact we shift it with every operation of addition or subtraction. But when we say "-5s." we mean "-(5s.)," not "(-5)s."; the idea of (-5) as a number with which we can perform such operations as multiplication comes later (§ 49). (ii.) On the other hand, the conception of a fractional number follows directly from the use of fractions, involving the subdivision of a unit. We find that fractions follow certain laws corresponding exactly with those of integral multipliers, and we are therefore able to deal with the fractional numbers as if they were integers. 28. Miscellaneous Developments in Arithmetic.—The following are matters which really belong to arithmetic; they are usually placed under algebra, since the general formulae involve the use of letters. (i.) Arithmetical Progressions such as 2, 5, 8, . . .—The formula for the rth term is easily obtained. The problem of finding the sum of r terms is aided by graphic representation, which shows that the terms may be taken in pairs, working from the outside to the middle; the two cases of an odd number of terms and an even number of terms may be treated separately at first, and then combined by the ordinary method, viz. writing the series backwards. In this, as in almost all other cases, particular examples should be worked before obtaining a general formula. (ii.) The law of indices (positive integral indices only) follows at once from the definition of a^2, a^3, a^4, . . . as abbreviations of a•a, a•a•a, a•a•a•a, ..., or (by analogy with the definitions of 2, 3, 4, . . . themselves) of a•a, a•a^2, a•a^3, . . . successively.
The treatment of roots and of logarithms (all being positive integers) belongs to this subject; a = $\it \sqrt[p]{n}$ and p = $\it \log_a n$ being the inverses of n = a^p (cf. §§ 15, 16). The theory may be extended to the cases of p = 1 and p = 0; so that a^3 means a•a•a•1, a^2 means a•a•1, a^1 means a•1, and a^0 means 1 (there being then none of the multipliers a). The terminology is sometimes confused. In n = a^p, a is the root or base, p is the index or logarithm, and n is the power or antilogarithm. Thus a, a^2, a^3, . . . are the first, second, third, . . . powers of a. But a^p is sometimes incorrectly described as "a to the power p"; the power being thus confused with the index or logarithm. (iii.) Scales of Notation lead, by considering, e.g., how to express in the scale of 10 a number whose expression in the scale of 8 is 2222222, to (iv.) Geometrical Progressions.—It should be observed that the radix of the scale is exactly the same thing as the root mentioned under (ii.) above; and it is better to use the term "root" throughout. Denoting the root by a, and the number 2222222 in this scale by N, we have N = 2222222. aN = 22222220. Thus by adding 2 to aN we can subtract N from aN+2, obtaining 20000000, which is 2 • a^7; and from this we easily pass to the general formula for the sum of a geometrical progression having a given number of terms. (v.) Permutations and Combinations may be regarded as arithmetical recreations; they become important algebraically in reference to the binomial theorem (§§ 41, 44). (vi.) Surds and Approximate Logarithms.—From the arithmetical point of view, surds present a greater difficulty than negative quantities and fractional numbers. We cannot solve the equation 7s.+X = 4s.; but we are accustomed to transactions of lending and borrowing, and we can therefore invent a negative quantity -3s. such that -3s.+3s. = 0.
We cannot solve the equation 7X = 4s.; but we are accustomed to subdivision of units, and we can therefore give a meaning to X by inventing a unit $\frac{1}{7}$s. such that 7 × $\frac{1}{7}$s. = 1s., and can thence pass to the idea of fractional numbers. When, however, we come to the equation x² = 5, where we are dealing with numbers, not with quantities, we have no concrete facts to assist us. We can, however, find a number whose square shall be as nearly equal to 5 as we please, and it is this number that we treat arithmetically as √5. We may take it to (say) 4 places of decimals; or we may suppose it to be taken to 1000 places. In actual practice, surds mainly arise out of mensuration; and we can then give an exact definition by graphical methods. When, by practice with logarithms, we become familiar with the correspondence between additions of length on the logarithmic scale (on a slide-rule) and multiplication of numbers in the natural scale (including fractional numbers), √5 acquires a definite meaning as the number corresponding to the extremity of a length x, on the logarithmic scale, such that 5 corresponds to the extremity of 2x. Thus the concrete fact required to enable us to pass arithmetically from the conception of a fractional number to the conception of a surd is the fact of performing calculations by means of logarithms. In the same way we regard log[10]2, not as a new kind of number, but as an approximation.

(vii.) The use of fractional indices follows directly from this parallelism. We find that the product a^m × a^m × a^m is equal to a^3m; and, by definition, the product $\sqrt[3]{a}$ × $\sqrt[3]{a}$ × $\sqrt[3]{a}$ is equal to a, which is a^1. This suggests that we should write $\sqrt[3]{a}$ as a^1/3; and we find that the use of fractional indices in this way satisfies the laws of integral indices.
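The two ideas just described, finding a number whose square is as nearly equal to 5 as we please, and the fractional index a^1/3 obeying the integral laws, can both be illustrated numerically. A sketch under my own naming, using simple bisection (the article does not prescribe a method):

```python
# Sketch (assumed details, not the article's procedure): approximating the
# surd sqrt(5) to a stated number of decimal places by bisection, and a
# numerical check that three equal factors a^(1/3) restore a.

def approx_sqrt(n, places):
    """Approximate sqrt(n) to the given number of decimal places."""
    lo, hi = 0.0, float(n) if n > 1 else 1.0
    for _ in range(200):              # the interval halves at every step
        mid = (lo + hi) / 2
        if mid * mid < n:
            lo = mid
        else:
            hi = mid
    return round((lo + hi) / 2, places)

root5 = approx_sqrt(5, 4)
assert abs(root5 * root5 - 5) < 0.001     # square is nearly 5

# fractional index: a^(1/3) * a^(1/3) * a^(1/3) is (numerically) a^1
a = 7.0
assert abs(a ** (1 / 3) * a ** (1 / 3) * a ** (1 / 3) - a) < 1e-9
```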
It should be observed that, by analogy with the definition of a fraction, a^p/q means (a^1/q)^p, not (a^p)^1/q.

II. Graphical Introduction to Algebra

29. The science of graphics is closely related to that of mensuration. While mensuration is concerned with the representation of geometrical magnitudes by numbers, graphics is concerned with the representation of numerical quantities by geometrical figures, and particularly by lengths. An important development, covering such diverse matters as the equilibrium of forces and the algebraic theory of complex numbers (§ 66), has relation to cases where the numerical quantity has direction as well as magnitude. There are also cases in which graphics and mensuration are used jointly; a variable numerical quantity is represented by a graph, and the principles of mensuration are then applied to determine related numerical quantities. General aspects of the subject are considered under Mensuration; Vector Analysis; Infinitesimal Calculus.

30. The elementary use of graphic methods is qualitative rather than quantitative; i.e. it is for purposes of illustration and suggestion rather than for purposes of deduction and exact calculation. We start with related facts, and adopt a particular method of visualizing the relation. One of the relations most commonly illustrated in this way is the time-relation; the passage of time being associated with the passage of a point along a straight line, so that equal intervals of time are represented by equal lengths.

31. It is important to begin the study of graphics with concrete cases rather than with tracing values of an algebraic function. Simple examples of the time-relation are—the number of scholars present in a class, the height of the barometer, and the reading of the thermometer, on successive days. Another useful set of graphs comprises those which give the relation between the expressions of a length, volume, &c., on different systems of measurement.
Mechanical, commercial, economic and statistical facts (the latter usually involving the time-relation) afford numerous examples.

32. The ordinary method of representation is as follows. Let X and Y be the related quantities, their expressions in terms of selected units A and B being x and y, so that X = x•A, Y = y•B. For graphical representation we select units of length L and M, not necessarily identical. We take a fixed line OX, usually drawn horizontally; for each value of X we measure a length or abscissa ON equal to x•L, and draw an ordinate NP at right angles to OX and equal to the corresponding value of y•M. The assemblage of ordinates NP is then the graph of Y. The series of values of X will in general be discontinuous, and the graph will then be made up of a succession of parallel and (usually) equidistant ordinates. When the series is theoretically continuous, the theoretical graph will be a continuous figure of which the lines actually drawn are ordinates. The upper boundary of this figure will be a line of some sort; it is this line, rather than the figure, that is sometimes called the "graph." It is better, however, to treat this as a secondary meaning. In particular, the equality or inequality of values of two functions is more readily grasped by comparison of the lengths of the ordinates of the graphs than by inspection of the relative positions of their bounding lines.

33. The importance of the bounding line of the graph lies in the fact that we can keep it unaltered while we alter the graph as a whole by moving OX up or down. We might, for instance, read temperature from 60° instead of from 0°. Thus we form the conception, not only of a zero, but also of the arbitrariness of position of this zero (cf. § 27 (i.)); and we are assisted to the conception of negative quantities. On the other hand, the alteration in the direction of the bounding line, due to alteration in the unit of measurement of Y, is useful in relation to geometrical projection.
This, however, applies mainly to the representation of values of Y. Y is represented by the length of the ordinate NP, so that the representation is cardinal; but this ordinate really corresponds to the point N, so that the representation of X is ordinal. It is therefore only in certain special cases, such as those of simple time-relations (e.g. "J is aged 40, and K is aged 26; when will J be twice as old as K?"), that the graphic method leads without arithmetical reasoning to the properties of negative values. In other cases the continuation of the graph may constitute a dangerous extrapolation.

34. Graphic representation thus rests on the principle that equal numerical quantities may be represented by equal lengths, and that a quantity mA may be represented by a length mL, where A and L are the respective units; and the science of graphics rests on the converse property that the quantity represented by pL is pA, i.e. that pA is determined by finding the number of times that L is contained in pL. The graphic method may therefore be used in arithmetic for comparing two particular magnitudes of the same kind by comparing the corresponding lengths P and Q measured along a single line OX from the same point O.

(i.) To divide P by Q, we cut off from P successive portions each equal to Q, till we have a piece R left which is less than Q. Thus P = kQ + R, where k is an integer.

(ii.) To continue the division we may take as our new unit a submultiple of Q, such as Q/r, where r is an integer, and repeat the process. We thus get P = kQ + m•Q/r + S = (k + m/r)Q + S, where S is less than Q/r. Proceeding in this way, we may be able to express P÷Q as the sum of a finite number of terms k + m/r + n/r² + . . .; or, if r is not suitably chosen, we may not. If, e.g. r = 10, we get the ordinary expression of P/Q as an integer and a decimal; but, if P/Q were equal to 1/3, we could not express it as a decimal with a finite number of figures.

(iii.) In the above method the choice of r is arbitrary.
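The process of (i.) and (ii.) above, cutting off portions equal to Q and then to Q/r, Q/r², &c., can be sketched in code. The function name and integer setting are mine; the article states the process for general magnitudes:

```python
# Sketch (hypothetical helper, not from the original): expressing P/Q as
# k + m/r + n/r^2 + ..., as in § 34 (ii.), by repeated division with
# remainder, taking Q/r as the new unit at each stage.

def expand_ratio(p, q, r, digits):
    """Return [k, m, n, ...]: the integer part and base-r 'digits' of p/q."""
    k, rem = divmod(p, q)        # P = kQ + R, with R < Q
    out = [k]
    for _ in range(digits):
        rem *= r                 # pass to the submultiple Q/r as unit
        d, rem = divmod(rem, q)
        out.append(d)
    return out

# with r = 10 this is the ordinary decimal expansion: 7/4 = 1.750
assert expand_ratio(7, 4, 10, 3) == [1, 7, 5, 0]
# 1/3 cannot be expressed with a finite number of figures in the scale of 10
assert expand_ratio(1, 3, 10, 4) == [0, 3, 3, 3, 3]
```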
We can avoid this arbitrariness by a different procedure. Having obtained R, which is less than Q, we now repeat with Q and R the process that we adopted with P and Q; i.e. we cut off from Q successive portions each equal to R. Suppose we find Q = sR + T; then we repeat the process with R and T; and so on. We thus express P÷Q in the form of a continued fraction, $k + \cfrac{1}{s + \cfrac{1}{t + \cdots}}$, which is usually written, for conciseness, $k + \dfrac{1}{s+}\,\dfrac{1}{t+}$ &c., the + sign placed after each denominator indicating that the next fraction is to be added to it.

(iv.) If P and Q can be expressed in the forms pL and qL, where p and q are integers, R will be equal to (p - kq)L, which is both less than pL and less than qL. Hence the successive remainders are successively smaller multiples of L, but still integral multiples, so that the series of quotients k, s, t, . . . will ultimately come to an end. Moreover, if the last divisor is uL, then it follows from the theory of numbers (§ 26 (ii.)) that (a) u is a factor of p and of q, and (b) any number which is a factor of p and q is also a factor of u. Hence u is the greatest common measure of p and q.

35. In relation to algebra, the graphic method is mainly useful in connexion with the theory of limits (§§ 58, 61) and the functional treatment of equations (§ 60). As regards the latter, there are two classes of cases. In the first class come equations in a single unknown; here the function which is equated to zero is the Y whose values for different values of X are traced, and the solution of the equation is the determination of the points where the ordinates of the graph are zero. The second class of cases comprises equations involving two unknowns; here we have to deal with two graphs, and the solution of the equation is the determination of their common ordinates.
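The procedure of § 34 (iii.)-(iv.), repeating the cutting-off process with Q and R, then with R and T, until the last divisor is reached, can be sketched directly; the function name is mine:

```python
# Sketch (assumed naming): the continued-fraction process of § 34 (iii.).
# Successive divisions give the quotients k, s, t, ..., and the last
# divisor is the greatest common measure of p and q (§ 34 (iv.)).

def continued_fraction(p, q):
    """Return ([k, s, t, ...], last divisor) for the ratio p/q."""
    quotients = []
    while q:
        k, r = divmod(p, q)      # p = kq + r, with r < q
        quotients.append(k)
        p, q = q, r              # repeat the process with q and r
    return quotients, p          # p is now the greatest common measure

quots, gcm = continued_fraction(67, 29)
assert quots == [2, 3, 4, 2]     # 67/29 = 2 + 1/(3 + 1/(4 + 1/2))
assert gcm == 1                  # 67 and 29 have no common measure but 1
```

The series of quotients necessarily terminates, exactly as argued in (iv.): each remainder is a smaller integral multiple of the unit than the last.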
Graphic methods also enter into the consideration of irrational numbers (§ 65).

III. Elementary Algebra of Positive Numbers

36. Monomials.—(i.) An expression such as a·2·a·a·b·c·3·a·a·c, denoting that a series of multiplications is to be performed, is called a monomial; the numbers (arithmetical or algebraical) which are multiplied together being its factors. An expression denoting that two or more monomials are to be added or subtracted is a multinomial or polynomial, each of the monomials being a term of it. A multinomial consisting of two or of three terms is a binomial or a trinomial.

(ii.) By means of the commutative law we can collect like terms of a monomial, numbers being regarded as like terms. Thus the above expression is equal to 6a^5bc^2, which is, of course, equal to other expressions, such as 6ba^5c^2. The numerical factor 6 is called the coefficient of a^5bc^2 (§ 20); and, generally, the coefficient of any factor or of the product of any factors is the product of the remaining factors.

(iii.) The multiplication and division of monomials is effected by means of the law of indices. Thus 6a^5bc^2 ÷ 5a^2bc = (6/5)a^3c, since b^0 = 1. It must, of course, be remembered (§ 23) that this is a statement of arithmetical equality; we call the statement an "identity," but we do not mean that the expressions are the same, but that, whatever the numerical values of a, b and c may be, the expressions give the same numerical result. In order that a monomial containing a^m as a factor may be divisible by a monomial containing a^p as a factor, it is necessary that p should be not greater than m.

(iv.) In algebra we have a theory of highest common factor and lowest common multiple, but it is different from the arithmetical theory of greatest common divisor and least common multiple. We disregard numerical coefficients, so that by the H.C.F. or L.C.M. of 6a^5bc^2 and 12a^4b^2cd we mean the H.C.F. or L.C.M. of a^5bc^2 and a^4b^2cd. The H.C.F.
is then an expression of the form a^pb^qc^rd^s, where p, q, r, s have the greatest possible values consistent with the condition that each of the given expressions shall be divisible by a^pb^qc^rd^s. Similarly the L.C.M. is of the form a^pb^qc^rd^s, where p, q, r, s have the least possible values consistent with the condition that a^pb^qc^rd^s shall be divisible by each of the given expressions. In the particular case it is clear that the H.C.F. is a^4bc and the L.C.M. is a^5b^2c^2d. The extension to multinomials forms part of the theory of factors (§ 51).

37. Products of Multinomials.—(i.) Special arithmetical results may often be used to lead up to algebraical formulae. Thus a comparison of numbers occurring in a table of squares

1^2 = 1    11^2 = 121
2^2 = 4    12^2 = 144
3^2 = 9    13^2 = 169
 ⋮           ⋮

suggests the formula (A + a)^2 = A^2 + 2Aa + a^2. Similarly the equalities

99 × 101 = 9999 = 10000 - 1
98 × 102 = 9996 = 10000 - 4
97 × 103 = 9991 = 10000 - 9
. . . . . . . . .

lead up to (A - a)(A + a) = A^2 - a^2. These, with (A - a)^2 = A^2 - 2Aa + a^2, are the most important in elementary work.

(ii.) These algebraical formulae involve not only the distributive law and the law of signs, but also the commutative law. Thus (A + a)² = (A + a)(A + a) = A(A + a) + a(A + a) = AA + Aa + aA + aa; and the grouping of the second and third terms as 2Aa involves treating Aa and aA as identical. This is important when we come to the binomial theorem (§ 41, and cf. § 54 (i.)).

(iii.) By writing (A + a)^2 = A^2 + 2Aa + a^2 in the form (A + a)^2 = A^2 + (2A + a)a, we obtain the rule for extracting the square root in arithmetic.

(iv.) When the terms of a multinomial contain various powers of x, and we are specially concerned with x, the terms are usually arranged in descending (or ascending) order of the indices; terms which contain the same power being grouped so as to give a single coefficient.
Thus 2bx - 4x^2 + 6ab + 3ax would be written -4x^2 + (3a + 2b)x + 6ab. It is not necessary to regard -4 here as a negative number; all that is meant is that 4x^2 has to be subtracted.

(v.) When we have to multiply two multinomials arranged according to powers of x, the method of detached coefficients enables us to omit the powers of x during the multiplication. If any power is absent, we treat it as present, but with coefficient 0. Thus, to multiply x^3 - 2x + 1 by 2x^2 + 4, we write the process

    +1 +0 -2 +1
          +2 +0 +4
    ------------------
 +2 +0 -4 +2
    +0 +0 -0 +0
       +4 +0 -8 +4
 ---------------------
 +2 +0 +0 +2 -8 +4

giving 2x^5 + 2x^2 - 8x + 4 as the result.

38. Construction and Transformation of Equations.—(i.) The statement of problems in equational form should precede the solution of equations.

(ii.) The solution of equations is effected by transformation, which may be either arithmetical or algebraical. The principles of arithmetical transformation follow from those stated in §§ 15-18 by replacing X, A, B, m, M, x, n, a and p by any expressions involving or not involving the unknown quantity or number and representing positive numbers or (in the case of X, A, B and M) positive quantities. The principle of algebraic transformation has been stated in § 22; it is that, if A = B is an equation (i.e. if either or both of the expressions A and B involves x, and A is arithmetically equal to B for the particular value of x which we require), and if B = C is an identity (i.e. if B and C are expressions involving x which are different in form but are arithmetically equal for all values of x), then the statement A = C is an equation which is true for the same value of x for which A = B is true.

(iii.) A special rule of transformation is that any expression may be transposed from one side of an equation to the other, provided its sign is changed. This is the rule of transposition. Suppose, for instance, that P + Q - R + S = T.
This may be written (P + Q - R) + S = T; and this statement, by definition of the sign -, is the same as the statement that (P + Q - R) = T - S. Similarly the statements P + Q - R - S = T and P + Q - R = T + S are the same. These transpositions are purely arithmetical. To transpose a term which is not the last term on either side we must first use the commutative law, which involves an algebraical transformation. Thus from the equation P + Q - R + S = T and the identity P + Q - R + S = P - R + S + Q we have the equation P - R + S + Q = T, which is the same statement as P - R + S = T - Q.

(iv.) The procedure is sometimes stated differently, the transposition being regarded as a corollary from a general theorem that the roots of an equation are not altered if the same expression is added to or subtracted from both members of the equation. The objection to this (cf. § 21 (ii.)) is that we do not need the general theorem, and that it is unwise to cultivate the habit of laying down a general law as a justification for an isolated action.

(v.) An alternative method of obtaining the rule of transposition is to change the zero from which we measure. Thus from P + Q - R + S = T we deduce P + (Q - R + S) = P + (T - P). If instead of measuring from zero we measure from P, we find Q - R + S = T - P. The difference between this and (iii.) is that we transpose the first term instead of the last; the two methods corresponding to the two cases under (i.) of § 15 (2).

(vi.) In the same way, we do not lay down a general rule that an equation is not altered by multiplying both members by the same number. Suppose, for instance, that (2/5)(x+1) = (4/3)(x-2). Here each member is a number, and the equation may, by the commutative law for multiplication, be written 2(x+1)/5 = 4(x-2)/3. This means that, whatever unit A we take, {2(x+1)/5}A and {4(x-2)/3}A are equal. We therefore take A to be 15, and find that 6(x+1) = 20(x-2).
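The worked example of (vi.) above, in which taking 15 as the unit turns 2(x+1)/5 = 4(x-2)/3 into the cleared form 6(x+1) = 20(x-2), can be checked numerically. A sketch using exact rational arithmetic; the solving step is mine, not the article's:

```python
# Sketch (not from the original text): solving the cleared equation
# 6(x+1) = 20(x-2) and checking that the solution also satisfies the
# original fractional form 2(x+1)/5 = 4(x-2)/3.

from fractions import Fraction

# 6(x+1) = 20(x-2)  =>  6x + 6 = 20x - 40  =>  14x = 46
x = Fraction(46, 14)

assert 6 * (x + 1) == 20 * (x - 2)                            # cleared form
assert Fraction(2, 5) * (x + 1) == Fraction(4, 3) * (x - 2)   # original form
```

Exact fractions are used so that the check is a genuine equality, not a floating-point approximation.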
Thus, if we have an equation P = Q, where P and Q are numbers involving fractions, we can clear of fractions, not by multiplying P and Q by a number m, but by applying the equal multiples P and Q to a number m as unit. If the P and Q of our equation were quantities expressed in terms of a unit A, we should restate the equation in terms of a unit A/m, as explained in §§ 18 and 21 (i.) (a).

(vii.) One result of the rule of transposition is that we can transpose all the terms in x to one side of the equation, and all the terms not containing x to the other. An equation of the form ax = b, where a and b do not contain x, is the standard form of simple equation.

(viii.) The quadratic equation is the equation of two expressions, monomial or multinomial, none of the terms involving any power of x except x and x². The standard form is usually taken to be

ax^2 + bx + c = 0,

from which we find, by transformation,

(2ax + b)^2 = b^2 - 4ac,

and thence

x = {√(b^2 - 4ac) - b} / 2a.

This only gives one root. As to the other root, see § 47 (iii.).

39. Fractional Expressions.—An equation may involve a fraction of the form P/Q, where Q involves x.

(i.) If P and Q can (algebraically) be written in the forms RA and SA respectively, where A may or may not involve x, then P/Q = RA/SA = R/S, provided A is not = 0.

(ii.) In an equation of the form P/Q = U/V, the expressions P, Q, U, V are usually numerical. We then have (P/Q)·QV = (U/V)·QV, or PV = UQ, as in § 38 (vi.). This is the rule of cross-multiplication.

(iii.) The restriction in (i.) is important. Thus (x² - 1)/(x² + x - 2) = {(x - 1)(x + 1)}/{(x - 1)(x + 2)} is equal to (x + 1)/(x + 2), except when x = 1. For this latter value it becomes 0/0, which has no direct meaning, and requires interpretation (§ 61).

40. Powers of a Binomial.—We know that (A + a)² = A² + 2Aa + a².
Continuing to develop the successive powers of A + a into multinomials, we find that (A + a)³ = A³ + 3A²a + 3Aa² + a³, &c.; each power containing one more term than the preceding power, and the coefficients, when the terms are arranged in descending powers of A, being given by the following table:—

1
1  1
1  2  1
1  3  3  1
1  4  6  4  1
. . . . . . . .

where the first line stands for (A + a)^0 = 1·A^0a^0, and the successive numbers in the (n+1)th line are the coefficients of A^na^0, A^(n-1)a^1, . . . A^0a^n in the n+1 terms of the multinomial equivalent to (A + a)^n. In the same way we have (A - a)^2 = A^2 - 2Aa + a^2, (A - a)^3 = A^3 - 3A^2a + 3Aa^2 - a^3, . . ., so that the multinomial equivalent to (A - a)^n has the same coefficients as the multinomial equivalent to (A + a)^n, but with signs alternately + and -. The multinomial which is equivalent to (A ± a)^n, and has its terms arranged in ascending powers of a, is called the expansion of (A ± a)^n.

41. The binomial theorem gives a formula for writing down the coefficient of any stated term in the expansion of any stated power of a given binomial.

(i.) For the general formula, we need only consider (A + a)^n. It is clear that, since the numerical coefficients of A and of a are each 1, the coefficients in the expansions arise from the grouping and addition of like terms (§ 37 (ii.)). We therefore determine the coefficients by counting the grouped terms individually, instead of adding them. To individualize the terms, we replace (A + a)(A + a)(A + a) … by (A + a)(B + b)(C + c) …, so that no two terms are the same; the "like"-ness which determines the placing of two terms in one group being the fact that they become equal (by the commutative law) when B, C, … and b, c, … are each replaced by A and a respectively. Suppose, for instance, that n = 5, so that we take five factors (A + a)(B + b)(C + c)(D + d)(E + e) and find their product.
The coefficient of A^2a^3 in the expansion of (A + a)^5 is then the number of terms such as ABcde, AbcDe, AbCde, . . ., in each of which there are two large and three small letters. The first term is ABCDE, in which all the letters are large; and the coefficient of A^2a^3 is therefore the number of terms which can be obtained from ABCDE by changing three, and three only, of the large letters into small ones. We can begin with any one of the 5 letters, so that the first change can be made in 5 ways. There are then 4 letters left, and we can change any one of these. Then 3 letters are left, and we can change any one of these. Hence the change can be made in 3·4·5 ways. If, however, the 3·4·5 results of making changes like this are written down, it will be seen that any one term in the required product is written down several times. Consider, for instance, the term AbcDe, in which the small letters are bce. Any one of these 3 might have appeared first, any one of the remaining 2 second, and the remaining 1 last. The term therefore occurs 1·2·3 times. This applies to each of the terms in which there are two large and three small letters. The total number of such terms in the multinomial equivalent to (A + a)(B + b)(C + c)(D + d)(E + e) is therefore (3·4·5) ÷ (1·2·3); and this is therefore the coefficient of A^2a^3 in the expansion of (A + a)^5.

The reasoning is quite general; and, in the same way, the coefficient of A^(n-r)a^r in the expansion of (A + a)^n is {(n-r+1)(n-r+2) . . . (n-1)n} ÷ {1·2·3 . . . r}. It is usual to write this as a fraction, inverting the order of the factors in the numerator. Then, if we denote it by n[(r)], so that

n[(r)] ≡ {n(n-1) . . . (n-r+1)} / {1·2·3 . . . r}   (1)

we have

(A + a)^n = n[(0)]A^n + n[(1)]A^(n-1)a + . . . + n[(r)]A^(n-r)a^r + . . . + n[(n)]a^n   (2)

where n[(0)], introduced for consistency of notation, is defined by n[(0)] = 1. This is the binomial theorem for a positive integral index.

(ii.)
To verify this, let us denote the true coefficient of A^(n-r)a^r by (n, r), so that we have to prove that (n, r) = n[(r)], where n[(r)] is defined by (1); and let us inspect the actual process of multiplying the expansion of (A + a)^(n-1) by A + a in order to obtain that of (A + a)^n. Using detached coefficients (§ 37 (v.)), the multiplication is represented by the following:—

1 + (n-1, 1) + (n-1, 2) + . . . + (n-1, r) + . . . + 1
    1 + (n-1, 1) + . . . + (n-1, r-1) + . . . + (n-1, n-2) + 1
--------------------------------------------------------------
1 + (n, 1) + (n, 2) + . . . + (n, r) + . . . + (n, n-1) + 1,

so that (n, r) = (n-1, r) + (n-1, r-1). Now suppose that the formula (2) has been established for every power of A + a up to the (n-1)th inclusive, so that (n-1, r) = (n-1)[(r)], (n-1, r-1) = (n-1)[(r-1)]. Then (n, r), the coefficient of A^(n-r)a^r in the expansion of (A + a)^n, is equal to (n-1)[(r)] + (n-1)[(r-1)]. But it may be shown that (r being > 0)

n[(r)] = (n-1)[(r)] + (n-1)[(r-1)]   (4)

and therefore (n, r) = n[(r)]. Hence the formula (2) is also true for the nth power of A + a. But it is true for the 1st and the 2nd powers; therefore it is true for the 3rd; therefore for the 4th; and so on. Hence it is true for all positive integral values of n.

(iii.) The product 1·2·3 . . . r is denoted by |r or r!, and is called factorial r. The form r! is better for printing, but the form |r is more convenient for ordinary use. If we denote n(n-1) . . . (n-r+1) (r factors) by n^(r), then n[(r)] ≡ n^(r)/r!.

(iv.) We can write n[(r)] in the more symmetrical form

n[(r)] = n! / {(n-r)! r!}   (5)

which shows that n[(r)] = n[(n-r)]. We should have arrived at this form in (i.) by considering the selection of terms in which there are to be two large and three small letters, the large letters being written down first. The terms can be built up in 5! ways; but each will appear 2! 3! times.

(v.) Since n[(r)] is an integer, n^(r) is divisible by r!; i.e. the product of any r consecutive integers is divisible by r! (see § 42 (ii.)).

(vi.) The product r! arose in (i.)
by the successive multiplication of r, r-1, r-2, . . . 1. In practice the successive factorials 1!, 2!, 3! . . . are supposed to be obtained successively by introduction of new factors, so that

r! = r · (r-1)!   (7)

Thus in defining r! as 1·2·3 . . . r we regard the multiplications as taking place from left to right; and similarly in n^(r). A product in which multiplications are taken in this order is called a continued product.

(vii.) In order to make the formula (5) hold for the extreme values n[(0)] and n[(n)] we must adopt the convention that

0! = 1.

This is consistent with (7), which gives 1! = 1·0!. It should be observed that, for r = 0, (4) is replaced by

n[(0)] = (n-1)[(0)]   (9)

and similarly, for the final terms, we should note that

p[(q)] = 0 if q > p   (10)

(viii.) If u[r] denotes the term involving a^r in the expansion of (A + a)^n, then u[r]/u[r-1] = {(n-r+1)/r}·a/A. This decreases as r increases; its value ranging from na/A to a/(nA). If na < A, the terms will decrease from the beginning; if nA < a, the terms will increase up to the end; if na > A and nA > a, the terms will first increase up to a greatest term (or two consecutive equal greatest terms) and then decrease.

(ix.) The position of the greatest term will depend on the relative values of A and a; if a/A is small, it will be near the beginning. Advantage can be taken of this, when n is large, to make approximate calculations, by omitting terms that are negligible.

(a) Let S[r] denote the sum u[0] + u[1] + . . . + u[r], this sum being taken so as to include the greatest term (or terms); and let u[r+1]/u[r] = θ, so that θ < 1. Then the sum of the remaining terms u[r+1] + u[r+2] + . . . + u[n] is less than (1 + θ + θ² + . . . + θ^(n-r-1))u[r+1], which is less than u[r+1]/(1-θ); and therefore (A + a)^n lies between S[r] and S[r] + u[r+1]/(1-θ). We can therefore stop as soon as u[r+1]/(1-θ) becomes negligible.

(b) In the same way, for the expansion of (A - a)^n, let σ[r] denote u[0] - u[1] + . . . ± u[r].
Then, provided σ[r] includes the greatest term, it will be found that (A - a)^n lies between σ[r] and σ[r+1].

For actual calculation it is most convenient to write the theorem in the form

(A ± a)^n = A^n(1 ± x)^n = A^n ± (n/1)x·A^n + {(n-1)/2}x·(n/1)x·A^n ± . . .

where x ≡ a/A; thus the successive terms are obtained by successive multiplication. To apply the method to the calculation of N^n, it is necessary that we should be able to express N in the form A + a or A - a, where a is small in comparison with A, A^n is easy to calculate and a/A is convenient as a multiplier.

42. The reasoning adopted in § 41 (ii.) illustrates two general methods of procedure. We know that (A + a)^n is equal to a multinomial of n+1 terms with unknown coefficients, and we require to find these coefficients. We therefore represent them by separate symbols, in the same way that we represent the unknown quantity in an equation by a symbol. This is the method of undetermined coefficients. We then obtain a set of equations, and by means of these equations we establish the required result by a process known as mathematical induction. This process consists in proving that a property involving p is true when p is any positive integer by proving (1) that it is true when p = 1, and (2) that if it is true when p = n, where n is any positive integer, then it is true when p = n+1. The following are some further examples of mathematical induction.

(i.) By adding successively 1, 3, 5, . . . we obtain 1, 4, 9, . . . This suggests that, if u[n] is the sum of the first n odd numbers, then u[n] = n^2. Assume this true for u[1], u[2], . . ., u[n]. Then u[n+1] = u[n] + (2n+1) = n^2 + (2n+1) = (n+1)^2, so that it is true for u[n+1]. But it is true for u[1]. Therefore it is true generally.

(ii.) We can prove the theorem of § 41 (v.) by a double application of the method.

(a) It is clear that every integer is divisible by 1!.
(b) Let us assume that the product of every set of p consecutive integers is divisible by p!, and let us try to prove that the product of every set of p+1 consecutive integers is divisible by (p+1)!. Denote the product n(n+1) . . . (n+r-1) by n^[r]. Then the assumption is that, whatever positive integral value n may have, n^[p] is divisible by p!.

(1) n^[p+1] - (n-1)^[p+1] = (p+1)·n(n+1) . . . (n+p-1) = (p+1)·n^[p]. But, by hypothesis, n^[p] is divisible by p!; therefore n^[p+1] - (n-1)^[p+1] is divisible by (p+1)!. Therefore if (n-1)^[p+1] is divisible by (p+1)!, n^[p+1] is divisible by (p+1)!.

(2) But 1^[p+1] = (p+1)!, which is divisible by (p+1)!.

(3) Therefore n^[p+1] is divisible by (p+1)!, whatever positive integral value n may have.

(c) Thus, if the theorem of § 41 (v.) is true for r = p, it is true for r = p+1. But it is true for r = 1. Therefore it is true generally.

(iii.) Another application of the method is to proving the law of formation of consecutive convergents to a continued fraction (see Continued Fractions).

43. Binomial Coefficients.—The numbers denoted by n[(r)] in § 41 are the binomial coefficients shown in the table in § 40; n[(r)] being the (r+1)th number in the (n+1)th row. They have arisen as the coefficients in the expansion of (A + a)^n; but they may be considered independently as a system of numbers defined by (1) of § 41. The individual numbers are connected by various relations, some of which are considered in this section.

(i.) From (4) of § 41 we have

n[(r)] - (n-1)[(r)] = (n-1)[(r-1)].

Changing n into n-1, n-2, . . ., and adding the results,

n[(r)] - (n-s)[(r)] = (n-1)[(r-1)] + (n-2)[(r-1)] + . . . + (n-s)[(r-1)].

In particular,

n[(r)] = (n-1)[(r-1)] + (n-2)[(r-1)] + . . . + (r-1)[(r-1)].

Similarly, by writing (4) in the form

n[(r)] - (n-1)[(r-1)] = (n-1)[(r)],

changing n and r into n-1 and r-1, repeating the process, and adding, we find, taking account of (9),

n[(r)] = (n-1)[(r)] + (n-2)[(r-1)] + . . . + (n-r-1)[(0)]   (15)

(ii.)
It is therefore more convenient to rearrange the table of § 40 as shown below, on the left; the table on the right giving the key to the arrangement.

1                    0[(0)]
1                    1[(1)]
1  1                 1[(0)]  2[(2)]
2  1                 2[(1)]  3[(3)]
1  3  1              2[(0)]  3[(2)]  4[(4)]
3  4  1              3[(1)]  4[(3)]  5[(5)]
1  6  5  1           3[(0)]  4[(2)]  5[(4)]  6[(6)]
4  10 6  1           4[(1)]  5[(3)]  6[(5)]  7[(7)]
1  10 15 7  1        4[(0)]  5[(2)]  6[(4)]  7[(6)]  8[(8)]
&c.                  &c.

Here we have introduced a number O given by

O[(0)] = 1   (16)

which is consistent with the relations in (i.). In this table any number is equal to the sum of the numbers which lie horizontally above it in the preceding column, and the difference of any two numbers in a column is equal to the sum of the numbers horizontally between them in the preceding column. The coefficients in the expansion of (A + a)^n for any particular value of n are obtained by reading diagonally upwards from left to right from the (n+1)th number in the first column.

(iii.) The table might be regarded as constructed by successive applications of (9) and (4); the initial data being (16) and (10). Alternatively, we might consider that we start with the first diagonal row (downwards from the left) and construct the remaining diagonal rows by successive applications of (15). Constructed in this way, the successive diagonal rows, commencing with the first, give the figurate numbers of the first, second, third, . . . order. The (r+1)th figurate number of the nth order, i.e. the (r+1)th number in the nth diagonal row, is n(n+1) . . . (n+r-1)/r! = n^[r]/r!; this may, by analogy with the notation of § 41, be denoted by n[[r]]. We then have

(n+1)[[r]] = (r+1)[[n]] = (n+r)!/(n! r!) = (n+r)[(r)] = (n+r)[(n)]   (17)

(iv.) By means of (17) the relations between the binomial coefficients in the form p[(m)] may be replaced by others with the coefficients expressed in the form p[[m]]. The table in (ii.)
may be written

1[[0]]
1[[1]]
2[[0]]  1[[2]]
2[[1]]  1[[3]]
3[[0]]  2[[2]]  1[[4]]
3[[1]]  2[[3]]  1[[5]]
4[[0]]  3[[2]]  2[[4]]  1[[6]]
4[[1]]  3[[3]]  2[[5]]  1[[7]]
5[[0]]  4[[2]]  3[[4]]  2[[6]]  1[[8]]
&c.

The most important relations are n[[r]] = n[[r-1]]+(n-1)[[r]] (18); 0[[r]] = 0 (19); n[[r]]-(n-s)[[r]] = n[[r-1]]+(n-1)[[r-1]]+ . . . +(n-s+1)[[r-1]] (20); n[[r]] = n[[r-1]]+(n-1)[[r-1]]+ . . . +1[[r-1]] (21). (v.) It should be mentioned that the notation of the binomial coefficients, and of the continued products such as n(n-1) . . . (n-r+1), is not settled. Some writers, for instance, use a single symbol in place, in some cases, of n[(r)], and, in other cases, of n^(r). It is convenient to retain x[r] to denote x^r/r!, so that we have the consistent notation x[r] = x^r/r!, n[(r)] = n^(r)/r!, n[[r]] = n^[r]/r!. The binomial theorem for positive integral index may then be written (x+y)[n] = x[n]y[0]+x[n-1]y[1]+ . . . +x[n-r]y[r]+ . . . +x[0]y[n]. This must not be confused with the use of suffixes to denote particular terms of a series or a progression (as in § 41 (viii.) and (ix.)). 44. Permutations and Combinations.—The discussion, in § 41 (i.), of the number of terms of a particular kind in a particular product, forms part of the theory of combinatorial analysis (q.v.), which deals with the grouping and arrangement of individuals taken from a defined stock. The following are some particular cases; the proof usually follows the lines already indicated. Certain of the individuals may be distinguishable from the remainder of the stock, but not from each other; these may be called a type. (i.) A permutation is a linear arrangement, read in a definite direction of the line. The number ([n]P[r]) of permutations of r individuals out of a stock of n, all being distinguishable, is n^(r). In particular, the number of permutations of the whole stock is n!. If a of the stock are of one type, b of another, c of another, . . . the number of distinguishable permutations of the whole stock is n!÷(a!b!c! . . .). (ii.)
A combination is a group of individuals without regard to arrangement. The number ([n]C[r]) of combinations of r individuals out of a stock of n has in effect been proved in § 41 (i.) to be n[(r)]. This property enables us to establish, by simple reasoning, certain relations between binomial coefficients. Thus (4) of § 41 (ii.) follows from the fact that, if A is any one of the n individuals, the [n]C[r] groups of r consist of [n-1]C[r-1] which contain A and [n-1]C[r] which do not contain A. Similarly, considering the various ways in which a group of r may be obtained from two stocks, one containing m and the other containing n, we find that [m+n]C[r] = [m]C[r]·[n]C[0]+[m]C[r-1]·[n]C[1]+ . . . +[m]C[0]·[n]C[r], which gives (m+n)[(r)] = m[(r)]·n[(0)]+m[(r-1)]·n[(1)]+ . . . +m[(0)]·n[(r)] (22). This may also be written (m+n)^(r) = m^(r)·n^(0)+r[(1)]·m^(r-1)·n^(1)+ . . . +r[(r)]·m^(0)·n^(r) (23). If r is greater than m or n (though of course not greater than m+n), some of the terms in (22) and (23) will be zero. (iii.) If there are n types, the number of individuals in each type being unlimited (or at any rate not less than r), the number ([n]H[r]) of distinguishable groups of r individuals out of the total stock is n[[r]]. This is sometimes called the number of homogeneous products of r dimensions formed out of n letters; i.e. the number of products such as x^r, x^(r-3)y^3, x^(r-2)z^2, . . . that can be formed with positive integral indices out of n letters x, y, z, . . ., the sum of the indices in each product being r. (iv.) Other developments of the theory deal with distributions, partitions, &c. (see Combinatorial Analysis). (v.) The theory of probability (q.v.) also comes under this head. Suppose that there are a number of arrangements of r terms or elements, the first of which a is always either A or not-A, the second b is B or not-B, the third c is C or not-C, and so on.
If, out of every N cases, where N may be a very large number, a is A in pN cases and not-A in (1-p)N cases, where p is a fraction such that pN is an integer, then p is the probability or frequency of occurrence of A. We may consider that we are dealing always with a single arrangement abc . . ., and that the number of times that a is made A bears to the number of times that a is made not-A the ratio of p to 1-p; or we may consider that there are N individuals, for pN of which the attribute a is A, while for (1-p)N it is not-A. If, in this latter case, the proportion of cases in which b is B to cases in which b is not-B is the same for the group of pN individuals in which a is A as for the group of (1-p)N in which a is not-A, then the frequencies of A and of B are said to be independent; if this is not the case they are said to be correlated. The possibilities of a, instead of being A and not-A, may be A[1], A[2], . . ., each of these having its own frequency; and similarly for b, c, . . . If the frequency of each A is independent of the frequency of each B, then the attributes a and b are independent; otherwise they are correlated.

45. Application of Binomial Theorem to Rational Integral Functions.—An expression of the form c[0]x^n+c[1]x^(n-1)+ . . . +c[n], where c[0], c[1], . . . do not involve x, and the indices of the powers of x are all positive integers, is called a rational integral function of x of degree n. If we represent this expression by f(x), the expression obtained by changing x into x+h is f(x+h); and each term of this may be expanded by the binomial theorem. Thus we have

f(x+h) = c[0]x^n+nc[0]x^(n-1)h/1!+n(n-1)c[0]x^(n-2)h²/2!+ . . . +&c.
       = {c[0]x^n+c[1]x^(n-1)+c[2]x^(n-2)+ . . .}
       + {nc[0]x^(n-1)+(n-1)c[1]x^(n-2)+(n-2)c[2]x^(n-3)+ . . .}h/1!
       + {n(n-1)c[0]x^(n-2)+(n-1)(n-2)c[1]x^(n-3)+ . . .}h²/2!
       + &c.

It will be seen that the expression in curled brackets in each line after the first is obtained from the corresponding expression in the preceding line by a definite process; viz. x^r is replaced by r·x^(r-1), except for r = 0, when x^0 is replaced by 0. The expressions obtained in this way are called the first, second, . . . derived functions of f(x). If we denote these by f[1](x), f[2](x), . . ., so that f[r](x) is obtained from f[r-1](x) by the above process, we have f(x+h) = f(x)+f[1](x)·h+f[2](x)h²/2!+ . . . +f[r](x)h^r/r!+ . . . This is a particular case of Taylor's theorem (see Infinitesimal Calculus). 46. Relation of Binomial Coefficients to Summation of Series.—(i.) The sum of the first n terms of an ordinary arithmetical progression (a+b), (a+2b), . . . (a+nb) is (§ 28 (i.)) ½n{(a+b)+(a+nb)} = na+½n(n+1)b = n[[1]]·a+n[[2]]·b. Comparing this with the table in § 43 (iv.), and with formula (21), we see that the series expressing the sum may be regarded as consisting of two, viz. a+a+ . . . and b+2b+3b+ . . .; for the first series we multiply the table (i.e. each number in the table) by a, and for the second series we multiply it by b, and the terms and their successive sums are given for the first series by the first and the second columns, and for the second series by the second and the third columns. (ii.) In the same way, if we multiply the table by c, the sum of the first n numbers in any column is equal to the nth number in the next following column. Thus we get a formula for the sum of n terms of a series such as 2·4·6+4·6·8+ . . ., or 6·8·10·12+8·10·12·14+ . . . (iii.) Suppose we have such a series as 2·5+5·8+8·11+ . . . This cannot be summed directly by the above method. But the nth term is (3n-1)(3n+2) = 18n[[2]]-6n[[1]]-2. The sum of n terms is therefore (§ 43 (iv.)) 18n[[3]]-6n[[2]]-2n[[1]] = 3n³+6n²+n. (iv.) Generally, let N be any rational integral function of n of degree r.
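The worked series in (iii.) can be checked numerically. In the sketch below, `fig` is our own name (not the text's) for the figurate numbers n[[r]] of § 43 (iv.), computed as the binomial coefficient (n+r-1) choose r:

```python
from math import comb

def fig(n, r):
    """n[[r]] = n^[r]/r! = C(n+r-1, r), the (r+1)th figurate number of the nth order."""
    return comb(n + r - 1, r)

for n in range(1, 30):
    # nth term: (3n-1)(3n+2) = 18 n[[2]] - 6 n[[1]] - 2
    assert (3*n - 1) * (3*n + 2) == 18*fig(n, 2) - 6*fig(n, 1) - 2
    # sum of n terms: 18 n[[3]] - 6 n[[2]] - 2 n[[1]] = 3n^3 + 6n^2 + n
    s = sum((3*k - 1) * (3*k + 2) for k in range(1, n + 1))
    assert s == 18*fig(n, 3) - 6*fig(n, 2) - 2*fig(n, 1) == 3*n**3 + 6*n**2 + n
```

The summation works because the sum of the first n figurate numbers of one order is the nth figurate number of the next order, as noted in (ii.).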
Then, since n[[r]] is also a rational integral function of n of degree r, we can find a coefficient c[r], not containing n, and such as to make N-c[r]·n[[r]] contain no power of n higher than n^(r-1). Proceeding in this way, we can express N in the form c[r]·n[[r]]+c[r-1]·n[[r-1]]+ . . ., where c[r], c[r-1], c[r-2], . . . do not contain n; and thence we can obtain the sum of the numbers found by putting n = 1, 2, 3, . . . n successively in N. These numbers constitute an arithmetical progression of the rth order. (v.) A particular case is that of the sum 1^r+2^r+3^r+ . . . +n^r, where r is a positive integer. It can be shown by the above reasoning that this can be expressed as a series of terms containing descending powers of n, the first term being n^(r+1)/(r+1). The most important cases are

1+2+3+ . . . +n = ½n(n+1),
1²+2²+3²+ . . . +n² = ⅙n(n+1)(2n+1),
1³+2³+3³+ . . . +n³ = ¼n²(n+1)² = (1+2+ . . . +n)².

The general formula (which is established by more advanced methods) is

1^r+2^r+ . . . +(n-1)^r+½n^r = 1/(r+1)·{n^(r+1)+B[1](r+1)[(2)]n^(r-1)-B[2](r+1)[(4)]n^(r-3)+ . . .},

where B[1], B[2], . . . are certain numbers known as Bernoulli's numbers, and the terms within the bracket, after the first, have signs alternately + and -. The values of the first ten of Bernoulli's numbers are B[1] = 1/6, B[2] = 1/30, B[3] = 1/42, B[4] = 1/30, B[5] = 5/66, B[6] = 691/2730, B[7] = 7/6, B[8] = 3617/510, B[9] = 43867/798, B[10] = 174611/330.

IV. Negative Numbers and Formal Algebra.

47. Negative quantities will have arisen in various ways, e.g. (i.) The logical result of the commutative law, applied to a succession of additions and subtractions, is to produce a negative quantity -3s. such that -3s.+3s. = 0 (§ 28 (vi.)). (ii.) Simple equations, especially equations in which the unknown quantity is an interval of time, can often only be satisfied by a negative solution (§ 33). (iii.)
In solving a quadratic equation by the method of § 38 (viii.) we may be led to a result which is apparently absurd. If, for instance, we inquire as to the time taken to reach a given height by a body thrown upwards with a given velocity, we find that the time increases as the height decreases. Graphical representation shows that there are two solutions, and that an equation X² = 9a² may be taken to be satisfied not only by X = 3a but also by X = -3a. 48. The occurrence of negative quantities does not, however, involve the conception of negative numbers. In (iii.) of § 47, for instance, "-3a" does not mean that a is to be taken (-3) times, but that a is to be taken 3 times, and the result treated as subtractive; i.e. -3a means -(3a), not (-3)a (cf. § 27 (i.)). In the graphic method of representation the sign - may be taken as denoting a reversal of direction, so that, if +3 represents a length of 3 units measured in one direction, -3 represents a length of 3 units measured in the other direction. But even so there are two distinct operations concerned in the -3, viz. the multiplication by 3 and the reversal of direction. The graphic method, therefore, does not give any direct assistance towards the conception of negative numbers as operators, though it is useful for interpreting negative quantities as results. 49. In algebraical transformations, however, such as (x-a)² = x²-2ax+a², the arithmetical rule of signs enables us to combine the sign - with a number and to treat the result as a whole, subject to its own laws of operation. We see first that any operation with 4a-3b can be regarded as an operation with (+)4a+(-)3b, subject to the conditions (1) that the signs (+) and (-) obey the laws (+)(+) = (+), (+)(-) = (-)(+) = (-), (-)(-) = (+), and (2) that, when processes of multiplication are completed, a quantity is to be added or subtracted according as it has the sign (+) or (-) prefixed.
We are then able to combine any number with the + or the - sign inside the bracket, and to deal with this constructed symbol according to special laws; i.e. we can replace pr or -pr by (+p)r or (-p)r, subject to the conditions that (+p)(+q) = (-p)(-q) = (+pq), (+p)(-q) = (-p)(+q) = (-pq), and that +(-s) means that s is to be subtracted. These constructed symbols may be called positive and negative coefficients; or a symbol such as (-p) may be called a negative number, in the same way that we call 2/3 a fractional number. This increases the extent of the numbers with which we have to deal; but it enables us to reduce the number of formulae. The binomial theorem may, for instance, be stated for (x+a)^n alone; the formula for (x-a)^n being obtained by writing it as {x+(-)a}^n or {x+(-a)}^n, so that (x-a)^n = x^n-n[(1)]x^(n-1)a+ . . . +(-)^r·n[(r)]x^(n-r)a^r+ . . ., where (-)^r means - or + according as r is odd or even. The result of the extension is that the number or quantity represented by any symbol, such as P, may be either positive or negative. The numerical value is then represented by |P|; thus "|x|<1" means that x is between -1 and +1. 50. The use of negative coefficients leads to a difference between arithmetical division and algebraical division (by a multinomial), in that the latter may give rise to a quotient containing subtractive terms. The most important case is division by a binomial, illustrated in the original by two worked examples, numbered (1) and (2), of division by 2·10+1 and by 2·10-1. In (1) the division is both arithmetical and algebraical, while in (2) it is algebraical, the quotient for arithmetical division being 2·10+9. It may be necessary to introduce terms with zero coefficients. Thus, to divide 1 by 1+x algebraically, we may write it in the form 1+0·x+0·x²+0·x³+0·x⁴, and we then obtain 1/(1+x) = (1+0·x+0·x²+0·x³+0·x⁴)/(1+x) = 1-x+x²-x³+x⁴/(1+x), where the successive terms of the quotient are obtained by a process which is purely formal. 51.
If we divide the sum of x² and a² by the sum of x and a, we get a quotient x-a and remainder 2a², or a quotient a-x and remainder 2x², according to the order in which we work. Algebraical division therefore has no definite meaning unless dividend and divisor are rational integral functions of some expression such as x which we regard as the root of the scale of notation (§ 28 (iv.)), and are arranged in descending or ascending powers of x. If P and M are rational integral functions of x, arranged in descending powers of x, the division of P by M is complete when we obtain a remainder R whose degree (§ 45) is less than that of M. If R = 0, then M is said to be a factor of P. The highest common factor (or common factor of highest degree) of two rational integral functions of x is therefore found in the same way as the G.C.M. in arithmetic; numerical coefficients of the factor as a whole being ignored (cf. § 36 (iv.)). 52. Relation between Roots and Factors.—(i.) If we divide the multinomial P ≡ p[0]x^n+p[1]x^(n-1)+ . . . +p[n] by x-a, according to algebraical division, the remainder is R ≡ p[0]a^n+p[1]a^(n-1)+ . . . +p[n]. This is the remainder-theorem; it may be proved by induction. (ii.) If x = a satisfies the equation P = 0, then p[0]a^n+p[1]a^(n-1)+ . . . +p[n] = 0; and therefore the remainder when P is divided by x-a is 0, i.e. x-a is a factor of P. (iii.) Conversely, if x-a is a factor of P, then p[0]a^n+p[1]a^(n-1)+ . . . +p[n] = 0; i.e. x = a satisfies the equation P = 0. (iv.) Thus the problems of determining the roots of an equation P = 0 and of finding the factors of P, when P is a rational integral function of x, are the same. (v.) In particular, the equation P = 0, where P has the value in (i.), cannot have more than n different roots. The consideration of cases where two roots are equal belongs to the theory of equations (see Equation). (vi.)
It follows that, if two multinomials of the nth degree in x have equal values for more than n values of x, the corresponding coefficients are equal, so that the multinomials are equal for all values of x. 53. Negative Indices and Logarithms.—(i.) Applying the general principles of §§ 47-49 to indices, we find that we can interpret X^(-m) as being such that X^m·X^(-m) = X^0 = 1; i.e. X^(-m) = 1/X^m. In the same way we interpret X^(-p/q) as meaning 1/X^(p/q). (ii.) This leads to negative logarithms (see Logarithm). 54. Laws of Algebraic Form.—(i.) The results of the addition, subtraction and multiplication of multinomials (including monomials as a particular case) are subject to certain laws which correspond with the laws of arithmetic (§ 26 (i.)) but differ from them in relating, not to arithmetical value, but to algebraic form. The commutative law in arithmetic, for instance, states that a+b and b+a, or ab and ba, are equal. The corresponding law of form regards a+b and b+a, or ab and ba, as being not only equal but identical (cf. § 37 (ii.)), and then says that A+B and B+A, or AB and BA, are identical, where A and B are any multinomials. Thus a(b+c) and (b+c)a give the same result, though it may be written in various ways, such as ab+ac, ca+ab, &c. In the same way the associative law is that A(BC) and (AB)C give the same formal result. These laws can be established either by tracing the individual terms in a sum or a product or by means of the general theorem in § 52 (vi.). (ii.) One result of these laws is that, when we have obtained any formula involving a letter a, we can replace a by a multinomial. For instance, having found that (x+a)² = x²+2ax+a², we can deduce that (x+b+c)² = {x+(b+c)}² = x²+2(b+c)x+(b+c)². (iii.) Another result is that we can equate coefficients of like powers of x in two multinomials obtained from the same expression by different methods of expansion. For instance, by equating coefficients of x^r in the expansions of (1+x)^(m+n) and of (1+x)^m·(1+x)^n we obtain (22) of § 44 (ii.). (iv.) On the other hand, the method of equating coefficients often applies without the assumption of these laws. In § 41 (ii.), for instance, the coefficient of A^(n-r)a^r in the expansion of (A+a)(A+a)^(n-1) has been called n[(r)]; and it has then been shown that n[(r)] = (n-1)[(r)]+(n-1)[(r-1)]. This does not involve any assumption of the identity of results obtained in different ways; for the expansions of (A+a)², (A+a)³, . . . are there supposed to be obtained in one way only, viz. by successive multiplications by A+a. 55. Algebraical Division.—In order to extend these laws so as to include division, we need a definition of algebraical division. The divisions in §§ 50-52 have been supposed to be performed by a process similar to the process of arithmetical division, viz. by a series of subtractions. This latter process, however, is itself based on a definition of division in terms of multiplication (§§ 15, 16). If, moreover, we examine the process of algebraical division as illustrated in § 50, we shall find that, just as arithmetical division is really the solution of an equation (§ 14), and involves the tacit use of a symbol to denote an unknown quantity or number, so algebraical division by a multinomial really implies the use of undetermined coefficients (§ 42). When, for instance, we find that the quotient, when 6+5x+7x²+13x³+5x⁴ is divided by 2+3x+x², is made up of three terms +3, -2x, and +5x², we are really obtaining successively the values of c[0], c[1], and c[2] which satisfy the identity 6+5x+7x²+13x³+5x⁴ = (c[0]+c[1]x+c[2]x²)(2+3x+x²); and we could equally obtain the result by expanding the right-hand side of this identity and equating coefficients in the first three terms, the coefficients in the remaining terms being then compared to see that there is no remainder.
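The determination of the undetermined coefficients just described can be carried out mechanically. The following sketch (the variable names are ours, not the text's) finds c[0], c[1], c[2] for the example of § 55 by equating coefficients of successive powers of x:

```python
# 6 + 5x + 7x^2 + 13x^3 + 5x^4 = (c0 + c1 x + c2 x^2)(2 + 3x + x^2)
dividend = [6, 5, 7, 13, 5]          # coefficients in ascending powers of x
divisor = [2, 3, 1]                  # 2 + 3x + x^2

c = []                               # the undetermined coefficients, found in turn
work = dividend[:]
for i in range(len(dividend) - len(divisor) + 1):
    coeff = work[i] // divisor[0]    # coefficient of x^i in the quotient
    c.append(coeff)
    for j, d in enumerate(divisor):  # subtract coeff * x^i * divisor
        work[i + j] -= coeff * d

assert c == [3, -2, 5]               # the quotient 3 - 2x + 5x^2
assert all(w == 0 for w in work)     # the remaining coefficients compare to zero
```

Each pass fixes one coefficient by looking at one power of x, exactly as in the text's account; the final check that `work` is all zeros is the comparison showing there is no remainder.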
We therefore define algebraical division by means of algebraical multiplication, and say that, if P and M are multinomials, the statement "P/M = Q" means that Q is a multinomial such that MQ (or QM) and P are identical. In this sense, the laws mentioned in § 54 apply also to algebraical division.

56. Extensions of the Binomial Theorem.—It has been mentioned in § 41 (ix.) that the binomial theorem can be used for obtaining an approximate value for a power of a number; the most important terms only being taken into account. There are extensions of the binomial theorem, by means of which approximate calculations can be made of fractions, surds, and powers of fractions and of surds; the main difference being that the number of terms which can be taken into account is unlimited, so that, although we may approach nearer and nearer to the true value, we never attain it exactly. The argument involves the theorem that, if θ is a positive quantity less than 1, θ^t can be made as small as we please by taking t large enough; this follows from the fact that t log θ can be made as large (numerically) as we please. (i.) By algebraical division,

1/(1+x) = (1+0·x+0·x²+ . . . +0·x^(r+1))/(1+x) = 1-x+x²- . . . +(-)^r·x^r+(-)^(r+1)·x^(r+1)/(1+x)   (24).

If, therefore, we take 1/(1+x) as equal to 1-x+x²- . . . +(-)^r·x^r, there is an error whose numerical magnitude is |x^(r+1)/(1+x)|; and, if |x|<1, this can be made as small as we please. This is the foundation of the use of recurring decimals; thus we can replace 4/11 {= 36/99 = 36/[10²(1-1/10²)]} by ·363636 (= 36/10²+36/10⁴+36/10⁶), with an error (in defect) of only 36/(10⁶·99). (ii.) Repeated divisions of (24) by 1+x, r being replaced by r+1 before each division, will give

(1+x)^(-2) = 1-2x+3x²-4x³+ . . . +(-)^r(r+1)x^r+(-)^(r+1)x^(r+1){(r+1)(1+x)^(-1)+(1+x)^(-2)},
(1+x)^(-3) = 1-3x+6x²-10x³+ . . . +(-)^r·½(r+1)(r+2)x^r+(-)^(r+1)x^(r+1){½(r+1)(r+2)(1+x)^(-1)+(r+1)(1+x)^(-2)+(1+x)^(-3)}, &c.

Comparison with the table of binomial coefficients in § 43 suggests that, if m is any positive integer,

(1+x)^(-m) = S[r]+R[r]   (25),

where

S[r] ≡ 1-m[[1]]x+m[[2]]x²- . . . +(-)^r·m[[r]]x^r   (26),
R[r] ≡ (-)^(r+1)x^(r+1){m[[r]](1+x)^(-1)+(m-1)[[r]](1+x)^(-2)+ . . . +1[[r]](1+x)^(-m)}   (27).

This can be verified by induction. The same result would (§ 55) be obtained if we divided 1+0·x+0·x²+ . . . at once by the expansion of (1+x)^m. (iii.) From (21) of § 43 (iv.) we see that |R[r]| is less than m[[r+1]]x^(r+1) if x is positive, or than |m[[r+1]]x^(r+1)(1+x)^(-m)| if x is negative; and it can hence be shown that, if |x|<1, |R[r]| can be made as small as we please by taking r large enough. . . .

. . . continuous, so that continuity can only be achieved by an artificial development. The development is based on the necessity of being able to represent geometrical magnitude by arithmetical magnitude; and it may be regarded as consisting of three stages. Taking any number n to be represented by a point on a line at distance nL from a fixed point O, where L is a unit of length, we start with a series of points representing the integers 1, 2, 3, . . . This series is of course discontinuous. The next step is to suppose that fractional numbers are represented in the same way. This extension produces a change of character in the series of numbers. In the original integral series each number had a definite number next to it, on each side, except 1, which began the series. But in the new series there is no first number, and no number can be said to be next to any other number, since, whatever two numbers we take, others can be inserted between them. On the other hand, this new series is not continuous; for we know that there are some points on the line which represent surds and other irrational numbers, and these numbers are not contained in our series.
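The two properties of the series of fractional numbers just stated can be illustrated with exact rational arithmetic (the use of Python's `Fraction` type is our own device for the sketch):

```python
from fractions import Fraction

# Between any two distinct fractional numbers another can be inserted,
# so no fraction has a definite number "next to it".
a, b = Fraction(1, 3), Fraction(1, 2)
between = (a + b) / 2
assert a < between < b

# Yet the series is not continuous: no fraction with a small denominator
# squares to 2, so the point representing the surd sqrt(2) is missed.
# (A finite search is only an illustration, not a proof.)
for q in range(1, 100):
    for p in range(1, 150):
        assert Fraction(p, q) ** 2 != 2
```

The midpoint construction can be repeated indefinitely, which is exactly the sense in which "others can be inserted between them" whatever two numbers we take.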
We therefore take a third step, and obtain theoretical continuity by considering that every point on the line, if it does not represent a rational number, represents something which may be called an irrational number. This insertion of irrational numbers (with corresponding negative numbers) requires for its exact treatment certain special methods, which form part of the algebraic theory of number, and are dealt with under Number. 66. The development of the theory of equations leads to the amplification of real numbers, rational and irrational, positive and negative, by imaginary and complex numbers. The quadratic equation x²+b² = 0, for instance, has no real root; but we may treat the roots as being +b√-1 and -b√-1, if √-1 is treated as something which obeys the laws of arithmetic and emerges into reality under the condition √-1·√-1 = -1. Expressions of the form b√-1 and a+b√-1, where a and b are real numbers, are then described as imaginary and complex numbers respectively; the former being a particular case of the latter. Complex numbers are conveniently treated in connexion not only with the theory of equations but also with analytical trigonometry, which suggests the graphic representation of a+b√-1 by a line of length (a²+b²)^(1/2) drawn in a direction different from that of the line along which real numbers are represented. References.—W. K. Clifford, The Common Sense of the Exact Sciences (1885), chapters i. and iii., forms a good introduction to algebra. As to the teaching of algebra, see references under Arithmetic to works on the teaching of elementary mathematics. Among school-books may be mentioned those of W. M. Baker and A. A. Bourne, W. G. Borchardt, W. D. Eggar, F. Gorse, H. S. Hall and S. R. Knight, A. E. F. Layng, R. B. Morgan. G. Chrystal, Introduction to Algebra (1898); H. B. Fine, A College Algebra (1905); C. Smith, A Treatise on Algebra (1st ed. 1888, 3rd ed.
1892), are more suitable for revision purposes; the second of these deals rather fully with irrational numbers. For the algebraic theory of number, and the convergence of sequences and of series, see T. J. I'A. Bromwich, Introduction to the Theory of Infinite Series (1908); H. S. Carslaw, Introduction to the Theory of Fourier's Series (1906); H. B. Fine, The Number-System of Algebra (1891); H. P. Manning, Irrational Numbers (1906); J. Pierpont, Lectures on the Theory of Functions of Real Variables (1905). For general reference, G. Chrystal, Text-Book of Algebra (pt. i. 5th ed. 1904. pt. ii. 2nd ed. 1900) is indispensable; unfortunately, like many of the works here mentioned, it lacks a proper index. Reference may also be made to the special articles mentioned at the commencement of the present article, as well as to the articles on Differences, Calculus of; Infinitesimal Calculus; Interpolation; Vector Analysis. The following may also be consulted:—E. Borel and J. Drach, Introduction à l'étude de la théorie des nombres et de l'algèbre supérieure (1895); C. de Comberousse, Cours de mathématiques, vols. i. and iii. (1884-1887); H. Laurent, Traité d'analyse, vol. i. (1885); E. Netto, Vorlesungen über Algebra (vol. i. 1896, vol. ii. 1900); S. Pincherle, Algebra complementare (1893); G. Salmon, Lessons introductory to the Modern Higher Algebra (4th ed., 1885); J. A. Serret, Cours d'algèbre supérieure (4th ed., 2 vols., 1877); O. Stolz and J. A. Gmeiner, Theoretische Arithmetik (pt. i. 1900, pt. ii. 1902) and Einleitung in die Funktionen-theorie (pt. i. 1904, pt. ii. 1905)—these being developments from O. Stolz, Vorlesungen über allgemeine Arithmetic (pt. i. 1885, pt. ii. 1886); J. Tannery, Introduction à la théorie des fonctions d'une variable (1st ed. 1886, 2nd ed. 1904); H. Weber, Lehrbuch der Algebra, 2 vols. (1st ed. 1895-1896, 2nd ed. 1898-1899; vol. i. of 2nd ed. transl. by Griess as Traité d'algèbre supérieure, 1898). 
For a fuller bibliography, see Encyclopädie der math. Wissenschaften (vol. i., 1898). A list of early works on algebra is given in Encyclopedia Britannica, 9th ed., vol. i. p. 518. (W. F. SH.)

B. Special Kinds of Algebra

1. A special algebra is one which differs from ordinary algebra in the laws of equivalence which its symbols obey. Theoretically, no limit can be assigned to the number of possible algebras; the varieties actually known use, for the most part, the same signs of operation, and differ among themselves principally by their rules of multiplication. 2. Ordinary algebra developed very gradually as a kind of shorthand, devised to abbreviate the discussion of arithmetical problems and the statement of arithmetical facts. Although the distinction is one which cannot be ultimately maintained, it is convenient to classify the signs of algebra into symbols of quantity (usually figures or letters), symbols of operation, such as +, √, and symbols of distinction, such as brackets. Even when the formal evolution of the science was fairly complete, it was taken for granted that its symbols of quantity invariably stood for numbers, and that its symbols of operation were restricted to their ordinary arithmetical meanings. It could not escape notice that one and the same symbol, such as √(a-b), or even (a-b), sometimes did and sometimes did not admit of arithmetical interpretation, according to the values attributed to the letters involved. This led to a prolonged controversy on the nature of negative and imaginary quantities, which was ultimately settled in a very curious way. The progress of analytical geometry led to a geometrical interpretation both of negative and also of imaginary quantities; and when a "meaning" or, more properly, an interpretation, had thus been found for the symbols in question, a reconsideration of the old algebraic problem became inevitable, and the true solution, now so obvious, was eventually obtained.
It was at last realized that the laws of algebra do not depend for their validity upon any particular interpretation, whether arithmetical, geometrical or other; the only question is whether these laws do or do not involve any logical contradiction. When this fundamental truth had been fully grasped, mathematicians began to inquire whether algebras might not be discovered which obeyed laws different from those obtained by the generalization of arithmetic. The answer to this question has been so manifold as to be almost embarrassing. All that can be done here is to give a sketch of the more important and independent special algebras at present known to exist. 3. Although the results of ordinary algebra will be taken for granted, it is convenient to give the principal rules upon which it is based. They are

(a+b)+c = a+(b+c); a+b = b+a; (ab)c = a(bc); ab = ba; a(b+c) = ab+ac; (a-b)+b = a; (a÷b)×b = a.

These formulae express the associative and commutative laws of the operations + and ×, the distributive law of ×, and the definitions of the inverse symbols - and ÷, which are assumed to be unambiguous. The special symbols 0 and 1 are used to denote a-a and a÷a. They behave exactly like the corresponding symbols in arithmetic; and it follows from this that whatever "meaning" is attached to the symbols of quantity, ordinary algebra includes arithmetic, or at least an image of it. Every ordinary algebraic quantity may be regarded as of the form α+β√-1, where α, β are "real"; that is to say, every algebraic equivalence remains valid when its symbols of quantity are interpreted as complex numbers of the type α+β√-1 (cf. Number). But the symbols of ordinary algebra do not necessarily denote numbers; they may, for instance, be interpreted as coplanar points or vectors. Evolution and involution are usually regarded as operations of ordinary algebra; this leads to a notation for powers and roots, and a theory of irrational algebraic quantities analogous to that of irrational numbers. 4.
The only known type of algebra which does not contain arithmetical elements is substantially due to George Boole. Although originally suggested by formal logic, it is most simply interpreted as an algebra of regions in space. Let i denote a definite region of space; and let a, b, &c., stand for definite parts of i. Let a+b denote the region made up of a and b together (the common part, if any, being reckoned only once), and let a × b or ab mean the region common to a and b. Then a+a = aa = a; hence numerical coefficients and indices are not required. The inverse symbols -, ÷ are ambiguous, and in fact are rarely used. Each symbol a is associated with its supplement ā which satisfies the equivalences a+ā = i, aā = 0, the latter of which means that a and ā have no region in common. Finally, there is a law of absorption expressed by a+ab = a. From every proposition in this algebra a reciprocal one may be deduced by interchanging + and ×, and also the symbols 0 and i. For instance, x+y = x+x̄y and xy = x(x̄+y) are reciprocal. The operations + and × obey all the ordinary laws a, c, d (§ 3). 5. A point A in space may be associated with a (real, positive, or negative) numerical quantity α, called its weight, and denoted by the symbol αA. The sum of two weighted points αA, βB is, by definition, the point (α+β)G, where G divides AB so that AG : GB = β : α. It can be proved by geometry that αA+βB+γC = (α+β+γ)P, where P is in fact the centroid of masses α, β, γ placed at A, B, C respectively. So, in general, if we put αA+βB+ . . . +λL = (α+β+ . . . +λ)X, X is, in general, a determinate point, the barycentre of αA, βB, &c. (or of A, B, &c. for the weights α, β, &c.). If (α+β+ . . . +λ) happens to be zero, X lies at infinity in a determinate direction; unless -αA is the barycentre of βB, γC, . . . λL, in which case αA+βB+ . . . +λL vanishes identically, and X is indeterminate.
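The barycentric sum just defined can be sketched numerically. In the fragment below the representation of points as coordinate triples, and the function name, are our own assumptions for the illustration:

```python
def weighted_sum(*terms):
    """terms are (weight, point) pairs; returns (total weight, barycentre),
    or None when the total weight vanishes (X at infinity or indeterminate)."""
    total = sum(w for w, _ in terms)
    if total == 0:
        return None
    bary = tuple(sum(w * p[i] for w, p in terms) / total for i in range(3))
    return total, bary

A, B, C = (0.0, 0.0, 0.0), (6.0, 0.0, 0.0), (0.0, 3.0, 0.0)

# aA + bB = (a+b)G, where G divides AB so that AG : GB = b : a
total, G = weighted_sum((1, A), (2, B))
assert total == 3 and G == (4.0, 0.0, 0.0)   # AG : GB = 4 : 2 = 2 : 1

# three equally weighted points give the ordinary centroid of the triangle
_, P = weighted_sum((1, A), (1, B), (1, C))
assert P == (2.0, 1.0, 0.0)
```

Note that the result of a sum of weighted points is again a weighted point, which is what allows the calculus to proceed by the ordinary rules of + and -.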
If ABCD is a tetrahedron of reference, any point P in space is determined by an equation of the form (α+β+γ+δ)P = αA + βB + γC + δD: α, β, γ, δ are, in fact, equivalent to a set of homogeneous coordinates of P. For constructions in a fixed plane three points of reference are sufficient. It is remarkable that Möbius employs the symbols AB, ABC, ABCD in their ordinary geometrical sense as lengths, areas and volumes, except that he distinguishes their sign; thus AB=-BA, ABC=-ACB, and so on. If he had happened to think of them as "products," he might have anticipated Grassmann's discovery of the extensive calculus. From a merely formal point of view, we have in the barycentric calculus a set of "special symbols of quantity" or "extraordinaries" A, B, C, &c., which combine with each other by means of operations + and - which obey the ordinary rules, and with ordinary algebraic quantities by operations × and ÷, also according to the ordinary rules, except that division by an extraordinary is not used. 6. A quaternion is best defined as a symbol of the type

q = Σ αs es = α0e0 + α1e1 + α2e2 + α3e3,

where e0, ..., e3 are independent extraordinaries and α0, ..., α3 ordinary algebraic quantities, which may be called the co-ordinates of q. The sum and product of two quaternions are defined by the formulae

Σ αs es + Σ βs es = Σ (αs+βs) es,  Σ αr er × Σ βs es = Σ αr βs er es,

where the products er es are further reduced according to the following multiplication table, in which, for example, the second line is to be read e1e0=e1, e1²=-e0, e1e2=e3, e1e3=-e2.

      e0    e1    e2    e3
e0    e0    e1    e2    e3
e1    e1   -e0    e3   -e2
e2    e2   -e3   -e0    e1
e3    e3    e2   -e1   -e0

The effect of these definitions is that the sum and the product of two quaternions are also quaternions; that addition is associative and commutative; and that multiplication is associative and distributive, but not commutative. Thus e1e2 = -e2e1, and if q, q' are any two quaternions, qq' is generally different from q'q.
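The working of the multiplication table can be verified mechanically. A minimal Python sketch (the dictionary encoding and names are illustrative assumptions, not the article's notation) multiplies two quaternions coordinate by coordinate and confirms that e1e2 = -e2e1:

```python
import itertools

# The table for the units e0..e3, encoded as (sign, index):
# e.g. e1*e2 = +e3 is stored as TABLE[(1, 2)] = (1, 3).
TABLE = {
    (0, 0): (1, 0), (0, 1): (1, 1), (0, 2): (1, 2), (0, 3): (1, 3),
    (1, 0): (1, 1), (1, 1): (-1, 0), (1, 2): (1, 3), (1, 3): (-1, 2),
    (2, 0): (1, 2), (2, 1): (-1, 3), (2, 2): (-1, 0), (2, 3): (1, 1),
    (3, 0): (1, 3), (3, 1): (1, 2), (3, 2): (-1, 1), (3, 3): (-1, 0),
}

def qmul(p, q):
    """Product of quaternions given as 4-tuples of co-ordinates (a0..a3)."""
    out = [0.0] * 4
    for r, s in itertools.product(range(4), repeat=2):
        sign, idx = TABLE[(r, s)]
        out[idx] += sign * p[r] * q[s]
    return tuple(out)

e1, e2 = (0, 1, 0, 0), (0, 0, 1, 0)
assert qmul(e1, e2) == (0, 0, 0, 1)    # e1e2 = e3
assert qmul(e2, e1) == (0, 0, 0, -1)   # e2e1 = -e3: multiplication is not commutative
```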
The symbol e0 behaves exactly like 1 in ordinary algebra; Hamilton writes 1, i, j, k instead of e0, e1, e2, e3, and in this notation all the special rules of operation may be summed up by the equalities

i² = j² = k² = ijk = -1.

Putting q = α + βi + γj + δk, Hamilton calls α the scalar part of q, and denotes it by Sq; he also writes Vq for βi + γj + δk, which is called the vector part of q. Thus every quaternion may be written in the form q = Sq + Vq, where either Sq or Vq may separately vanish; so that ordinary algebraic quantities (or scalars, as we shall call them) and pure vectors may each be regarded as special cases of quaternions. The equations q'+x=q and y+q'=q are satisfied by the same quaternion, which is denoted by q-q'. On the other hand, the equations q'x=q and yq'=q have, in general, different solutions. It is the value of y which is generally denoted by q÷q'; a special symbol for x is desirable, but has not been established. If we put q̄' = Sq' - Vq', then q̄' is called the conjugate of q', and the scalar q'q̄' = q̄'q' is called the norm of q' and written Nq'. With this notation the values of x and y may be expressed in the forms

x = q̄'q/Nq',  y = qq̄'/Nq',

which are free from ambiguity, since scalars are commutative with quaternions. The values of x and y are different, unless V(qq̄')=0. In the applications of the calculus the co-ordinates of a quaternion are usually assumed to be numerical; when they are complex, the quaternion is further distinguished by Hamilton as a biquaternion. Clifford's biquaternions are quantities ξq + ηr, where q, r are quaternions, and ξ, η are symbols (commutative with quaternions) obeying the laws ξ² = ξ, η² = η, ξη = ηξ = 0 (cf. 7. In the extensive calculus of the nth category, we have, first of all, n independent "units," e1, e2, ... en. From these are derived symbols of the type

A1 = α1e1 + α2e2 + ... + αn en = Σ αi ei,

which we shall call extensive quantities of the first species (and, when necessary, of the nth category). The co-ordinates α1, . . .
αn are scalars, and in particular applications may be restricted to real or complex numerical values. If B1 = Σ βi ei, there is a law of addition expressed by

A1 + B1 = Σ (αi+βi) ei = B1 + A1;

this law of addition is associative as well as commutative. The inverse operation is free from ambiguity, and, in fact, A1 - B1 = Σ (αi-βi) ei. To multiply A1 by a scalar, we apply the rule ξA1 = A1ξ = Σ (ξαi) ei, and similarly for division by a scalar. All this is analogous to the corresponding formulae in the barycentric calculus and in quaternions; it remains to consider the multiplication of two or more extensive quantities. The binary products of the units ei are taken to satisfy the equalities ei² = 0, ei ej = -ej ei; this reduces them to ½n(n-1) distinct values, exclusive of zero. These values are assumed to be independent, so we have ½n(n-1) derived units of the second species or order. Associated with these new units there is a system of extensive quantities of the second species, represented by symbols of the type A2 = Σ αi Ei^(2) [i = 1, 2, ... ½n(n-1)], where E1^(2), E2^(2), &c., are the derived units of the second species. If A1 = Σ αi ei, B1 = Σ βi ei, the distributive law of multiplication is preserved by assuming A1B1 = Σ (αi βj) ei ej; it follows that A1B1 = -B1A1, and that A1² = 0. By assuming the truth of the associative law of multiplication, and taking account of the reducing formulae for binary products, we may construct derived units of the third, fourth . . . nth species. Every unit of the rth species which does not vanish is the product of r different units of the first species; two such units are independent unless they are permutations of the same set of primary units ei, in which case they are equal or opposite according to the usual rule employed in determinants. Thus, for instance, e2e1e3 = -e1e2e3 = e1e3e2; and, in general, the number of distinct units of the rth species in the nth category (r ≤ n) is C(n, r). Finally, it is assumed that (in the nth category) e1e2e3 . .
. en = 1, the suffixes being in their natural order. Let Ar = Σ αE^(r) and Bs = Σ βE^(s) be two extensive quantities of species r and s; then if r+s ≤ n, they may be multiplied by the rule

ArBs = Σ (αβ) E^(r)E^(s),

where the products E^(r)E^(s) may be expressed as derived units of species (r+s). The product BsAr is equal or opposite to ArBs, according as rs is even or odd. This process may be extended to the product of three or more factors such as ArBsCt . . . provided that r+s+t+ . . . does not exceed n. The law is associative; thus, for instance, (AB)C = A(BC). But the commutative law does not always hold; thus, indicating species, as before, by suffixes, ArBsCt = (-1)^(rs+st+tr) CtBsAr, with analogous rules for other cases. If r+s > n, a product such as E^(r)E^(s), worked out by the previous rules, comes out to be zero. A characteristic feature of the calculus is that a meaning can be attached to a symbol of this kind by adopting a new rule, called that of regressive multiplication, as distinguished from the foregoing, which is progressive. The new rule requires some preliminary explanation. If E is any extensive unit, there is one other unit E', and only one, such that the (progressive) product EE'=1. This unit is called the supplement of E, and denoted by |E. For example, when n = 4, |e1 = e2e3e4, |(e1e2) = e3e4, and so on. Now when r+s > n, the product E^(r)E^(s) is defined to be that unit of which the supplement is the progressive product |E^(r)|E^(s). For instance, if n=4, E^(r) = e1e3, E^(s) = e2e3e4, we have

|E^(r)|E^(s) = (-e2e4)(-e1) = e1e2e4 = |e3;

consequently, by the rule of regressive multiplication, E^(r)E^(s) = e3. Applying the distributive law, we obtain, when r+s > n,

ArBs = Σ (αβ) E^(r)E^(s),

where the regressive products E^(r)E^(s) are to be reduced to units of species (r+s-n) by the foregoing rule. If A = Σ αE, then, by definition, |A = Σ α|E, and hence

|(A + B) = |A + |B.

Now this is formally analogous to the distributive law of multiplication; and in fact we may look upon A|B as a particular way of multiplying A and B (not A and |B).
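Both the determinant sign rule for permuted units and the supplement rule for n = 4 can be checked mechanically. A short Python sketch (function names are illustrative assumptions, not the article's notation):

```python
N = 4  # work in the 4th category, where e1e2e3e4 = 1

def reduce_unit(indices):
    """Reduce a product of primary units to (sign, sorted tuple of indices).

    A repeated factor gives 0 (since ei^2 = 0); each transposition of
    neighbouring factors flips the sign (since ei ej = -ej ei)."""
    idx = list(indices)
    if len(set(idx)) < len(idx):
        return 0, ()
    sign = 1
    for i in range(len(idx)):               # bubble sort, counting swaps
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def supplement(unit):
    """The supplement |E: the signed unit whose progressive product with E is +1."""
    rest = tuple(i for i in range(1, N + 1) if i not in unit)
    sign, _ = reduce_unit(unit + rest)      # sign of E * rest against e1e2e3e4
    return sign, rest

# Determinant sign rule: e2e1e3 = -e1e2e3; a repeated unit vanishes.
assert reduce_unit((2, 1, 3)) == (-1, (1, 2, 3))
assert reduce_unit((1, 1)) == (0, ())
# Supplements when n = 4: |e1 = e2e3e4, |(e1e2) = e3e4, |(e1e3) = -e2e4, |(e2e3e4) = -e1.
assert supplement((1,)) == (1, (2, 3, 4))
assert supplement((1, 2)) == (1, (3, 4))
assert supplement((1, 3)) == (-1, (2, 4))
assert supplement((2, 3, 4)) == (-1, (1,))
```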
The symbol A|B, from this point of view, is called the inner product of A and B, as distinguished from the outer product AB. An inner product may be either progressive or regressive. In the course of reducing such expressions as (AB)C, (AB){C(DE)} and the like, where a chain of multiplications has to be performed in a certain order, the multiplications may be all progressive, or all regressive, or partly one, partly the other. In the first two cases the product is said to be pure, in the third case mixed. A pure product is associative; a mixed product, speaking generally, is not. The outer and inner products of two extensive quantities A, B, are in many ways analogous to the quaternion symbols Vab and Sab respectively. As in quaternions, so in the extensive calculus, there are numerous formulae of transformation which enable us to deal with extensive quantities without expressing them in terms of the primary units. Only a few illustrations can be given here. Let a, b, c, d, e, f be quantities of the first species in the fourth category; A, B, C . . . quantities of the third species in the same category. Then

(de)(abc) = (abde)c + (cade)b + (bcde)a = (abce)d - (abcd)e,
ab|c = (a|c)b - (b|c)a,
(ab|cd) = (a|c)(b|d) - (a|d)(b|c).

These may be compared and contrasted with such quaternion formulae as

S(VabVcd) = SadSbc - SacSbd,
dSabc = aSbcd - bScda + cSdab,

where a, b, c, d denote arbitrary vectors. 8. An n-tuple linear algebra (also called a complex number system) deals with quantities of the type A = Σ αi ei derived from n special units e1, e2 ... en. The sum and product of two quantities are defined in the first instance by the formulae

Σ αi ei + Σ βi ei = Σ (αi+βi) ei,  Σ αi ei × Σ βj ej = Σ (αi βj) ei ej,

so that the laws A, C, D of § 3 are satisfied. The binary products ei ej, however, are expressible as linear functions of the units ei by means of a "multiplication table" which defines the special characteristics of the algebra in question.
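A linear algebra in this sense is completely specified by its multiplication table. As an illustration (the encoding is an assumption, not from the article), the following Python sketch stores a table for a duplex algebra with e1² = -e0, i.e. the ordinary complex numbers, and multiplies two quantities by the distributive rule:

```python
# Multiplication table ei*ej = sum_k c_k e_k, stored as {k: c_k}.
# This table — e0e0=e0, e0e1=e1e0=e1, e1e1=-e0 — gives the complex numbers.
TABLE = {
    (0, 0): {0: 1}, (0, 1): {1: 1},
    (1, 0): {1: 1}, (1, 1): {0: -1},
}
N = 2

def mul(a, b):
    """Product of two quantities given as coordinate tuples (a0, ..., a_{N-1})."""
    out = [0] * N
    for i in range(N):
        for j in range(N):
            for k, c in TABLE[(i, j)].items():
                out[k] += c * a[i] * b[j]
    return tuple(out)

# (1 + 2e1)(3 + e1) = 3 + e1 + 6e1 + 2e1^2 = 1 + 7e1
assert mul((1, 2), (3, 1)) == (1, 7)
# e1^2 = -e0, the analogue of i^2 = -1
assert mul((0, 1), (0, 1)) == (-1, 0)
```

Swapping in a different table (for instance the quaternion table of § 6) changes the algebra without changing the multiplication routine, which is exactly the sense in which the table "defines the special characteristics of the algebra."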
Multiplication may or may not be commutative, and in the same way it may or may not be associative. The types of linear associative algebras, not assumed to be commutative, have been enumerated (with some omissions) up to sextuple algebras inclusive by B. Peirce. Quaternions afford an example of a quadruple algebra of this kind; ordinary algebra is a special case of a duplex linear algebra. If, in the extensive calculus of the nth category, all the units (including 1 and the derived units E) are taken to be homologous instead of being distributed into species, we may regard it as a (2^n - 1)-tuple linear algebra, which, however, is not wholly associative. It should be observed that while the use of special units, or extraordinaries, in a linear algebra is convenient, especially in applications, it is not indispensable. Any linear quantity may be denoted by a symbol (α1, α2, . . . αn) in which only its scalar coefficients occur; in fact, the special units only serve, in the algebra proper, as umbrae or regulators of certain operations on scalars (see Number). This idea finds fuller expression in the algebra of matrices, as to which it must suffice to say that a matrix is a symbol consisting of a rectangular array of scalars, and that matrices may be combined by a rule of addition which obeys the usual laws, and a rule of multiplication which is distributive and associative, but not, in general, commutative. Various special algebras (for example, quaternions) may be expressed in the notation of the algebra of matrices. 9. In ordinary algebra we have the disjunctive law that if ab = 0, then either a = 0 or b = 0. This applies also to quaternions, but not to extensive quantities, nor is it true for linear algebras in general. One of the most important questions in investigating a linear algebra is to decide the necessary relations between a and b in order that this product may be zero. 10.
The algebras discussed up to this point may be considered as independent in the sense that each of them deals with a class of symbols of quantity more or less homogeneous, and a set of operations applying to them all. But when an algebra is used with a particular interpretation, or even in the course of its formal development, it frequently happens that new symbols of operation are, so to speak, superposed upon the algebra, and are found to obey certain formal laws of combination of their own. For instance, there are the symbols Δ, D, E used in the calculus of finite differences; Aronhold's symbolical method in the calculus of invariants; and the like. In most cases these subsidiary algebras, as they may be called, are inseparable from the applications in which they are used; but in any attempt at a natural classification of algebra (at present a hopeless task), they would have to be taken into account. Even in ordinary algebra the notation for powers and roots disturbs the symmetry of the rational theory; and when a schoolboy illegitimately extends the distributive law by writing √(a+b)=√a+√b, he is unconsciously emphasizing this want of complete harmony. Authorities.—A. de Morgan, "On the Foundation of Algebra," Trans. Camb. P.S. (vii., viii., 1839-1844); G. Peacock, Symbolical Algebra (Cambridge, 1845); G. Boole, Laws of Thought (London, 1854); E. Schröder, Lehrbuch der Arithmetik u. Algebra (Leipzig, 1873), Vorlesungen über die Algebra der Logik (ibid., 1890-1895); A. F. Möbius, Der barycentrische Calcul (Leipzig, 1827) (reprinted in his collected works, vol. i., Leipzig, 1885); W. R. Hamilton, Lectures on Quaternions (Dublin, 1853), Elements of Quaternions (ibid., 1866); H. Grassmann, Die lineale Ausdehnungslehre (Leipzig, 1844), Die Ausdehnungslehre (Berlin, 1862) (these are reprinted with valuable emendations and notes in his Gesammelte math. u. phys. Werke, vol. i., Leipzig (2 parts), 1894, 1896), and papers in Grunert's Arch., xlix.-lxxxiv., Math. Ann. vii.,
xii.; B. and C. S. Peirce, "Linear Associative Algebra," Amer. Journ. Math. iv. (privately circulated, 1871); A. Cayley, on Matrices, Phil. Trans. cxlviii., on Multiple Algebra, Quart. M. Journ. xxii.; J. J. Sylvester, on Universal Algebra, Amer. Journ. Math. vi.; H. J. S. Smith, on Linear Indeterminate Equations, Phil. Trans. cli.; R. S. Ball, Theory of Screws (Dublin, 1876); and papers in Phil. Trans. clxiv., and Trans. R. Ir. Ac. xxv.; W. K. Clifford, on Biquaternions, Proc. L. M. S. iv.; A. Buchheim, on Extensive Calculus and its Applications, Proc. L. M. S. xv.-xvii.; H. Taber, on Matrices, Amer. J. M. xii.; K. Weierstrass, "Zur Theorie der aus Haupteinheiten gebildeten complexen Grössen," Götting. Nachr. (1884); G. Frobenius, on Bilinear Forms, Crelle, lxxxiv., and Berl. Ber. (1896); L. Kronecker, on Complex Numbers and Modular Systems, Berl. Ber. (1888); G. Scheffers, "Complexe Zahlensysteme," Math. Ann. xxxix. (this contains a bibliography up to 1890); S. Lie, Vorlesungen über continuirliche Gruppen (Leipzig, 1893), ch. xxi.; A. M'Aulay, "Algebra after Hamilton, or Multenions," Proc. R. S. E., 1908, 28, p. 503. For a more complete account see H. Hankel, Theorie der complexen Zahlensysteme (Leipzig, 1867); O. Stolz, Vorlesungen über allgemeine Arithmetik (ibid., 1883); A. N. Whitehead, A Treatise on Universal Algebra, with Applications (vol. i., Cambridge, 1898) (a very comprehensive work, to which the writer of this article is in many ways indebted); and the Encyclopädie d. math. Wissenschaften (vol. i., Leipzig, 1898), &c., §§ A 1 (H. Schubert), A 4 (E. Study), and B 1 (G. Landsberg). For the history of the development of ordinary algebra M. Cantor's Vorlesungen über Geschichte der Mathematik is the standard authority. C. History. Various derivations of the word "algebra," which is of Arabian origin, have been given by different writers.
The first mention of the word is to be found in the title of a work by Mahommed ben Musa al-Khwarizmi (Hovarezmi), who flourished about the beginning of the 9th century. The full title is ilm al-jebr wa'l-muqābala, which contains the ideas of restitution and comparison, or opposition and comparison, or resolution and equation, jebr being derived from the verb jabara, to reunite, and muqābala, from gabala, to make equal. (The root jabara is also met with in the word algebrista, which means a "bone-setter," and is still in common use in Spain.) The same derivation is given by Lucas Paciolus (Luca Pacioli), who reproduces the phrase in the transliterated form alghebra e almucabala, and ascribes the invention of the art to the Arabians. Other writers have derived the word from the Arabic particle al (the definite article), and geber, meaning “ man.” Since, however, Geber happened to be the name of a celebrated Moorish philosopher who flourished in about the 11th or 12th century, it has been supposed that he was the founder of algebra, which has since perpetuated his name. The evidence of Peter Ramus (1515-1572) on this point is interesting, but he gives no authority for his singular statements. In the preface to his Arithmeticae libri duo et totidem Algebrae (1560) he says: “The name Algebra is Syriac, signifying the art or doctrine of an excellent man. For Geber, in Syriac, is a name applied to men, and is sometimes a term of honour, as master or doctor among us. There was a certain learned mathematician who sent his algebra, written in the Syriac language, to Alexander the Great, and he named it almucabala, that is, the book of dark or mysterious things, which others would rather call the doctrine of algebra. 
To this day the same book is in great estimation among the learned in the oriental nations, and by the Indians, who cultivate this art, it is called aljabra and alboret; though the name of the author himself is not known.” The uncertain authority of these statements, and the plausibility of the preceding explanation, have caused philologists to accept the derivation from al and jabara. Robert Recorde in his Whetstone of Witte (1557) uses the variant algeber, while John Dee (1527-1608) affirms that algiebar, and not algebra, is the correct form, and appeals to the authority of the Arabian Avicenna. Although the term “algebra” is now in universal use, various other appellations were used by the Italian mathematicians during the Renaissance. Thus we find Paciolus calling it l'Arte Magiore; ditta dal vulgo la Regula de la Cosa over Alghebra e Almucabala. The name l'arte magiore, the greater art, is designed to distinguish it from l'arte minore, the lesser art, a term which he applied to the modern arithmetic. His second variant, la regula de la cosa, the rule of the thing or unknown quantity, appears to have been in common use in Italy, and the word cosa was preserved for several centuries in the forms coss or algebra, cossic or algebraic, cossist or algebraist, &c. Other Italian writers termed it the Regula rei et census, the rule of the thing and the product, or the root and the square. The principle underlying this expression is probably to be found in the fact that it measured the limits of their attainments in algebra, for they were unable to solve equations of a higher degree than the quadratic or square. Franciscus Vieta (François Viète) named it Specious Arithmetic, on account of the species of the quantities involved, which he represented symbolically by the various letters of the alphabet. Sir Isaac Newton introduced the term Universal Arithmetic, since it is concerned with the doctrine of operations, not affected on numbers, but on general symbols. 
Notwithstanding these and other idiosyncratic appellations, European mathematicians have adhered to the older name, by which the subject is now universally known. It is difficult to assign the invention of any art or science definitely to any particular age or race. The few fragmentary records, which have come down to us from past civilizations, must not be regarded as representing the totality of their knowledge, and the omission of a science or art does not necessarily imply that the science or art was unknown. It was formerly the custom to assign the invention of algebra to the Greeks, but since the decipherment of the Rhind papyrus by Eisenlohr this view has changed, for in this work there are distinct signs of an algebraic analysis. The particular problem—a heap (hau) and its seventh makes 19—is solved as we should now solve a simple equation; but Ahmes varies his methods in other similar problems. This discovery carries the invention of algebra back to about 1700 B.C., if not earlier. It is probable that the algebra of the Egyptians was of a most rudimentary nature, for otherwise we should expect to find traces of it in the works of the Greek geometers, of whom Thales of Miletus (640-546 B.C.) was the first. Notwithstanding the prolixity of writers and the number of the writings, all attempts at extracting an algebraic analysis from their geometrical theorems and problems have been fruitless, and it is generally conceded that their analysis was geometrical and had little or no affinity to algebra. The first extant work which approaches to a treatise on algebra is by Diophantus (q.v.), an Alexandrian mathematician, who flourished about A.D. 350. The original, which consisted of a preface and thirteen books, is now lost, but we have a Latin translation of the first six books and a fragment of another on polygonal numbers by Xylander of Augsburg (1575), and Latin and Greek translations by Gaspar Bachet de Méziriac (1621-1670).
Other editions have been published, of which we may mention Pierre Fermat's (1670), T. L. Heath's (1885) and P. Tannery's (1893-1895). In the preface to this work, which is dedicated to one Dionysius, Diophantus explains his notation, naming the square, cube and fourth powers, dynamis, cubus, dynamodinimus, and so on, according to the sum in the indices. The unknown he terms arithmos, the number, and in solutions he marks it by the final ς; he explains the generation of powers, the rules for multiplication and division of simple quantities, but he does not treat of the addition, subtraction, multiplication and division of compound quantities. He then proceeds to discuss various artifices for the simplification of equations, giving methods which are still in common use. In the body of the work he displays considerable ingenuity in reducing his problems to simple equations, which admit either of direct solution, or fall into the class known as indeterminate equations. This latter class he discussed so assiduously that they are often known as Diophantine problems, and the methods of resolving them as the Diophantine analysis (see Equation, Indeterminate). It is difficult to believe that this work of Diophantus arose spontaneously in a period of general stagnation. It is more than likely that he was indebted to earlier writers, whom he omits to mention, and whose works are now lost; nevertheless, but for this work, we should be led to assume that algebra was almost, if not entirely, unknown to the Greeks. The Romans, who succeeded the Greeks as the chief civilized power in Europe, failed to set store on their literary and scientific treasures; mathematics was all but neglected; and beyond a few improvements in arithmetical computations, there are no material advances to be recorded. In the chronological development of our subject we have now to turn to the Orient. Investigation of the writings of Indian
mathematicians has exhibited a fundamental distinction between the Greek and Indian mind, the former being pre-eminently geometrical and speculative, the latter arithmetical and mainly practical. We find that geometry was neglected except in so far as it was of service to astronomy; trigonometry was advanced, and algebra improved far beyond the attainments of Diophantus. The earliest Indian mathematician of whom we have certain knowledge is Aryabhatta, who flourished about the beginning of the 6th century of our era. The fame of this astronomer and mathematician rests on his work, the Aryabhattiyam, the third chapter of which is devoted to mathematics. Ganessa, an eminent astronomer, mathematician and scholiast of Bhaskara, quotes this work and makes separate mention of the cuttaca (“pulveriser”), a device for effecting the solution of indeterminate equations. Henry Thomas Colebrooke, one of the earliest modern investigators of Hindu science, presumes that the treatise of Aryabhatta extended to determinate quadratic equations, indeterminate equations of the first degree, and probably of the second. An astronomical work, called the Surya-siddhanta (“knowledge of the Sun”), of uncertain authorship and probably belonging to the 4th or 5th century, was considered of great merit by the Hindus, who ranked it only second to the work of Brahmagupta, who flourished about a century later. It is of great interest to the historical student, for it exhibits the influence of Greek science upon Indian mathematics at a period prior to Aryabhatta. After an interval of about a century, during which mathematics attained its highest level, there flourished Brahmagupta (b. A.D. 598), whose work entitled Brahma-sphuta-siddhanta (“The revised system of Brahma”) contains several chapters devoted to mathematics. Of other Indian writers mention may be made of Cridhara, the author of a Ganita-sara (“Quintessence of Calculation”), and Padmanabha, the author of an algebra. 
A period of mathematical stagnation then appears to have possessed the Indian mind for an interval of several centuries, for the works of the next author of any moment stand but little in advance of Brahmagupta. We refer to Bhaskara Acarya, whose work the Siddhanta-ciromani ("Diadem of an Astronomical System"), written in 1150, contains two important chapters, the Lilavati ("the beautiful [science or art]") and Viga-ganita ("root-extraction"), which are given up to arithmetic and algebra. English translations of the mathematical chapters of the Brahma-siddhanta and Siddhanta-ciromani by H. T. Colebrooke (1817), and of the Surya-siddhanta by E. Burgess, with annotations by W. D. Whitney (1860), may be consulted for details. The question as to whether the Greeks borrowed their algebra from the Hindus or vice versa has been the subject of much discussion. There is no doubt that there was a constant traffic between Greece and India, and it is more than probable that an exchange of produce would be accompanied by a transference of ideas. Moritz Cantor suspects the influence of Diophantine methods, more particularly in the Hindu solutions of indeterminate equations, where certain technical terms are, in all probability, of Greek origin. However this may be, it is certain that the Hindu algebraists were far in advance of Diophantus. The deficiencies of the Greek symbolism were partially remedied; subtraction was denoted by placing a dot over the subtrahend; multiplication, by placing bha (an abbreviation of bhavita, "the product") after the factors; division, by placing the divisor under the dividend; and square root, by inserting ka (an abbreviation of karana, irrational) before the quantity. The unknown was called yāvattāvat, and if there were several, the first took this appellation, and the others were designated by the names of colours; for instance, x was denoted by yā and y by kā (from kālaka, black).
A notable improvement on the ideas of Diophantus is to be found in the fact that the Hindus recognized the existence of two roots of a quadratic equation, but the negative roots were considered to be inadequate, since no interpretation could be found for them. It is also supposed that they anticipated discoveries of the solutions of higher equations. Great advances were made in the study of indeterminate equations, a branch of analysis in which Diophantus excelled. But whereas Diophantus aimed at obtaining a single solution, the Hindus strove for a general method by which any indeterminate problem could be resolved. In this they were completely successful, for they obtained general solutions for the equations ax±by=c, xy=ax+by+c (since rediscovered by Leonhard Euler) and cy²=ax²+b. A particular case of the last equation, namely, y²=ax²+1, sorely taxed the resources of modern algebraists. It was proposed by Pierre de Fermat to Bernhard Frenicle de Bessy, and in 1657 to all mathematicians. John Wallis and Lord Brouncker jointly obtained a tedious solution which was published in 1658, and afterwards in 1668 by John Pell in his Algebra. A solution was also given by Fermat in his Relation. Although Pell had nothing to do with the solution, posterity has termed the equation Pell's Equation, or Problem, when more rightly it should be the Hindu Problem, in recognition of the mathematical attainments of the Brahmans. Hermann Hankel has pointed out the readiness with which the Hindus passed from number to magnitude and vice versa. Although this transition from the discontinuous to continuous is not truly scientific, yet it materially augmented the development of algebra, and Hankel affirms that if we define algebra as the application of arithmetical operations to both rational and irrational numbers or magnitudes, then the Brahmans are the real inventors of algebra.
The integration of the scattered tribes of Arabia in the 7th century by the stirring religious propaganda of Mahomet was accompanied by a meteoric rise in the intellectual powers of a hitherto obscure race. The Arabs became the custodians of Indian and Greek science, whilst Europe was rent by internal dissensions. Under the rule of the Abbasids, Bagdad became the centre of scientific thought; physicians and astronomers from India and Syria flocked to their court; Greek and Indian manuscripts were translated (a work commenced by the Caliph Mamun (813-833) and ably continued by his successors); and in about a century the Arabs were placed in possession of the vast stores of Greek and Indian learning. Euclid's Elements were first translated in the reign of Harun-al-Rashid (786-809), and revised by the order of Mamun. But these translations were regarded as imperfect, and it remained for Tobit ben Korra (836-901) to produce a satisfactory edition. Ptolemy's Almagest, the works of Apollonius, Archimedes, Diophantus and portions of the Brahmasiddhanta, were also translated. The first notable Arabian mathematician was Mahommed ben Musa al-Khwarizmi, who flourished in the reign of Mamun. His treatise on algebra and arithmetic (the latter part of which is only extant in the form of a Latin translation, discovered in 1857) contains nothing that was unknown to the Greeks and Hindus; it exhibits methods allied to those of both races, with the Greek element predominating. The part devoted to algebra has the title al-jebr wa'l-muqābala, and the arithmetic begins with "Spoken has Algoritmi," the name Khwarizmi or Hovarezmi having passed into the word Algoritmi, which has been further transformed into the more modern words algorism and algorithm, signifying a method of computing.
Tobit ben Korra (836-901), born at Harran in Mesopotamia, an accomplished linguist, mathematician and astronomer, rendered conspicuous service by his translations of various Greek authors. His investigation of the properties of amicable numbers (q.v.) and of the problem of trisecting an angle, are of importance. The Arabians more closely resembled the Hindus than the Greeks in the choice of studies; their philosophers blended speculative dissertations with the more progressive study of medicine; their mathematicians neglected the subtleties of the conic sections and Diophantine analysis, and applied themselves more particularly to perfect the system of numerals (see Numeral), arithmetic and astronomy (q.v.). It thus came about that while some progress was made in algebra, the talents of the race were bestowed on astronomy and trigonometry (q.v.). Fahri des al Karhi, who flourished about the beginning of the 11th century, is the author of the most important Arabian work on algebra. He follows the methods of Diophantus; his work on indeterminate equations has no resemblance to the Indian methods, and contains nothing that cannot be gathered from Diophantus. He solved quadratic equations both geometrically and algebraically, and also equations of the form x^2n+ax^n+b=0; he also proved certain relations between the sum of the first n natural numbers, and the sums of their squares and cubes. Cubic equations were solved geometrically by determining the intersections of conic sections. Archimedes' problem of dividing a sphere by a plane into two segments having a prescribed ratio, was first expressed as a cubic equation by Al Mahani, and the first solution was given by Abu Gafar al Hazin. The determination of the side of a regular heptagon which can be inscribed or circumscribed to a given circle was reduced to a more complicated equation which was first successfully resolved by Abul Gud. 
The method of solving equations geometrically was considerably developed by Omar Khayyam of Khorassan, who flourished in the 11th century. This author questioned the possibility of solving cubics by pure algebra, and biquadratics by geometry. His first contention was not disproved until the 15th century, but his second was disposed of by Abul Wefa (940-998), who succeeded in solving the forms x^4=a and x^4+ax^3=b. Although the foundations of the geometrical resolution of cubic equations are to be ascribed to the Greeks (for Eutocius assigns to Menaechmus two methods of solving the equation x^3=a and x^3=2a^3), yet the subsequent development by the Arabs must be regarded as one of their most important achievements. The Greeks had succeeded in solving an isolated example; the Arabs accomplished the general solution of numerical equations. Considerable attention has been directed to the different styles in which the Arabian authors have treated their subject. Moritz Cantor has suggested that at one time there existed two schools, one in sympathy with the Greeks, the other with the Hindus; and that, although the writings of the latter were first studied, they were rapidly discarded for the more perspicuous Grecian methods, so that, among the later Arabian writers, the Indian methods were practically forgotten and their mathematics became essentially Greek in character. Turning to the Arabs in the West we find the same enlightened spirit; Cordova, the capital of the Moorish empire in Spain, was as much a centre of learning as Bagdad. The earliest known Spanish mathematician is Al Madshritti (d. 1007), whose fame rests on a dissertation on amicable numbers, and on the schools which were founded by his pupils at Cordova, Dania and Granada. Gabir ben Aflah of Sevilla, commonly called Geber, was a celebrated astronomer and apparently skilled in algebra, for it has been supposed that the word “algebra” is compounded from his name. 
When the Moorish empire began to wane the brilliant intellectual gifts which they had so abundantly nourished during three or four centuries became enfeebled, and after that period they failed to produce an author comparable with those of the 7th to the 11th centuries. In Europe the decline of Rome was succeeded by a period, lasting several centuries, during which the sciences and arts were all but neglected. Political and ecclesiastical dissensions occupied the greatest intellects, and the only progress to be recorded is in the art of computing or arithmetic, and the translation of Arabic manuscripts. The first successful attempt to revive the study of algebra in Christendom was due to Leonardo of Pisa, an Italian merchant trading in the Mediterranean. His travels and mercantile experience had led him to conclude that the Hindu methods of computing were in advance of those then in general use, and in 1202 he published his Liber Abaci, which treats of both algebra and arithmetic. In this work, which is of great historical interest, since it was published about two centuries before the art of printing was discovered, he adopts the Arabic notation for numbers, and solves many problems, both arithmetical and algebraical. But it contains little that is original, and although the work created a great sensation when it was first published, the effect soon passed away, and the book was practically forgotten. Mathematics was more or less ousted from the academic curricula by the philosophical inquiries of the schoolmen, and it was only after an interval of nearly three centuries that a worthy successor to Leonardo appeared. This was Lucas Paciolus (Lucas de Burgo), a Minorite friar, who, having previously written works on algebra, arithmetic and geometry, published, in 1494, his principal work, entitled Summa de Arithmetica, Geometria, Proportioni et Proportianalita.
In it he mentions many earlier writers from whom he had learnt the science, and although it contains very little that cannot be found in Leonardo's work, yet it is especially noteworthy for the systematic employment of symbols, and the manner in which it reflects the state of mathematics in Europe during this period. These works are the earliest printed books on mathematics. The renaissance of mathematics was thus effected in Italy, and it is to that country that the leading developments of the following century were due. The first difficulty to be overcome was the algebraical solution of cubic equations, the pons asinorum of the earlier mathematicians. The first step in this direction was made by Scipio Ferro (d. 1526), who solved the equation x^3+ax=b. Of his discovery we know nothing except that he declared it to his pupil Antonio Marie Floridas. An imperfect solution of the equation x^3+px^2=q was discovered by Nicholas Tartalea (Tartaglia) in 1530, and his pride in this achievement led him into conflict with Floridas, who proclaimed his own knowledge of the form resolved by Ferro. Mutual recriminations led to a public discussion in 1535, when Tartalea completely vindicated the general applicability of his methods and exhibited the inefficiencies of that of Floridas. This contest over, Tartalea redoubled his attempts to generalize his methods, and by 1541 he possessed the means for solving any form of cubic equation. His discoveries had made him famous all over Italy, and he was earnestly solicited to publish his methods; but he abstained from doing so, saying that he intended to embody them in a treatise on algebra which he was preparing. At last he succumbed to the repeated requests of Girolamo or Geronimo Cardano, who swore that he would regard them as an inviolable secret.
Cardan or Cardano, who was at that time writing his great work, the Ars Magna, could not restrain the temptation of crowning his treatise with such important discoveries, and in 1545 he broke his oath and gave to the world Tartalea's rules for solving cubic equations. Tartalea, thus robbed of his most cherished possession, was in despair. Recriminations ensued until his death in 1557, and although he sustained his claim for priority, posterity has not conceded to him the honour of his discovery, for his solution is now known as Cardan's Rule. Cubic equations having been solved, biquadratics soon followed suit. As early as 1539 Cardan had solved certain particular cases, but it remained for his pupil, Lewis (Ludovici) Ferrari, to devise a general method. His solution, which is sometimes erroneously ascribed to Rafael Bombelli, was published in the Ars Magna. In this work, which is one of the most valuable contributions to the literature of algebra, Cardan shows that he was familiar with both real positive and negative roots of equations whether rational or irrational, but of imaginary roots he was quite ignorant, and he admits his inability to resolve the so-called “irreducible case” (see Equation). Fundamental theorems in the theory of equations are to be found in the same work. Clearer ideas of imaginary quantities and the “irreducible case” were subsequently published by Bombelli, in a work of which the dedication is dated 1572, though the book was not published until some years afterwards. Contemporaneously with the remarkable discoveries of the Italian mathematicians, algebra was increasing in popularity in Germany, France and England. Michael Stifel and Johann Scheubelius (Scheybl) (1494-1570) flourished in Germany, and although unacquainted with the work of Cardan and Tartalea, their writings are noteworthy for their perspicuity and the introduction of a more complete symbolism for quantities and operations.
Stifel introduced the sign (+) for addition or a positive quantity, which was previously denoted by plus, piū, or the letter p. Subtraction, previously written as minus, mene or the letter m, was symbolized by the sign (-) which is still in use. The square root he denoted by (√), whereas Paciolus, Cardan and others used the letter R. The first treatise on algebra written in English was by Robert Recorde, who published his arithmetic in 1552, and his algebra entitled The Whetstone of Witte, which is the second part of Arithmetik, in 1557. This work, which is written in the form of a dialogue, closely resembles the works of Stifel and Scheubelius, the latter of whom he often quotes. It includes the properties of numbers; extraction of roots of arithmetical and algebraical quantities, solutions of simple and quadratic equations, and a fairly complete account of surds. He introduced the sign (=) for equality, and the terms binomial and residual. Of other writers who published works about the end of the 16th century, we may mention Jacques Peletier, or Jacobus Peletarius (De occulta parte Numerorum, quam Algebram vocant, 1558); Petrus Ramus (Arithmeticae Libri duo et totidem Algebrae, 1560), and Christoph Clavius, who wrote on algebra in 1580, though it was not published until 1608. At this time also flourished Simon Stevinus (Stevin) of Bruges, who published an arithmetic in 1585 and an algebra shortly afterwards. These works possess considerable originality, and contain many new improvements in algebraic notation; the unknown (res) is denoted by a small circle, in which he places an integer corresponding to the power. He introduced the terms multinomial, trinomial, quadrinomial, &c., and considerably simplified the notation for decimals. About the beginning of the 17th century various mathematical works by Franciscus Vieta were published, which were afterwards collected by Franz van Schooten and republished in 1646 at Leiden. 
These works exhibit great originality and mark an important epoch in the history of algebra. Vieta, who does not avail himself of the discoveries of his predecessors—the negative roots of Cardan, the revised notation of Stifel and Stevin, &c.—introduced or popularized many new terms and symbols, some of which are still in use. He denotes quantities by the letters of the alphabet, retaining the vowels for the unknown and the consonants for the knowns; he introduced the vinculum and among others the terms coefficient, affirmative, negative, pure and adfected equations. He improved the methods for solving equations, and devised geometrical constructions with the aid of the conic sections. His method for determining approximate values of the roots of equations is far in advance of the Hindu method as applied by Cardan, and is identical in principle with the methods of Sir Isaac Newton and W. G. Horner. We have next to consider the works of Albert Girard, a Flemish mathematician. This writer, after having published an edition of Stevin's works in 1625, published in 1629 at Amsterdam a small tract on algebra which shows a considerable advance on the work of Vieta. Girard is inconsistent in his notation, sometimes following Vieta, sometimes Stevin; he introduced the new symbols ff for greater than and § for less than; he follows Vieta in using the plus (+) for addition, he denotes subtraction by Recorde's symbol for equality (=), and he had no sign for equality but wrote the word out. He possessed clear ideas of indices and the generation of powers, of the negative roots of equations and their geometrical interpretation, and was the first to use the term imaginary roots. He also discovered how to sum the powers of the roots of an equation. Passing over the invention of logarithms (q.v.)
by John Napier, and their development by Henry Briggs and others, the next author of moment was an Englishman, Thomas Harriot, whose algebra (Artis analyticae praxis) was published posthumously by Walter Warner in 1631. Its great merit consists in the complete notation and symbolism, which avoided the cumbersome expressions of the earlier algebraists, and reduced the art to a form closely resembling that of to-day. He follows Vieta in assigning the vowels to the unknown quantities and the consonants to the knowns, but instead of using capitals, as with Vieta, he employed the small letters; equality he denoted by Recorde's symbol, and he introduced the signs > and < for greater than and less than. His principal discovery is concerned with equations, which he showed to be derived from the continued multiplication of as many simple factors as the highest power of the unknown, and he was thus enabled to deduce relations between the coefficients and various functions of the roots. Mention may also be made of his chapter on inequalities, in which he proves that the arithmetic mean is always greater than the geometric mean. William Oughtred, a contemporary of Harriot, published an algebra, Clavis mathematicae, simultaneously with Harriot's treatise. His notation is based on that of Vieta, but he introduced the sign ✕ for multiplication, ᆢᆢ for continued proportion, :: for proportion, and denoted ratio by one dot. This last character has since been entirely restricted to multiplication, and ratio is now denoted by two dots (:). His symbols for greater than and less than (⫎ and ┐) have been completely superseded by Harriot's signs. So far the development of algebra and geometry had been mutually independent, except for a few isolated applications of geometrical constructions to the solution of algebraical problems.
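The inequality from Harriot's chapter, in modern notation (strictly, the arithmetic mean is at least the geometric mean, with equality when the two quantities coincide), follows in one line for positive a and b:

```latex
\frac{a+b}{2}-\sqrt{ab}
  \;=\;\frac{\left(\sqrt{a}-\sqrt{b}\right)^{2}}{2}\;\ge\;0
  \quad\Longrightarrow\quad
  \frac{a+b}{2}\;\ge\;\sqrt{ab}.
```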
Certain minds had long suspected the advantages which would accrue from the unrestricted application of algebra to geometry, but it was not until the advent of the philosopher René Descartes that the co-ordination was effected. In his famous Geometria (1637), which is really a treatise on the algebraic representation of geometric theorems, he founded the modern theory of analytical geometry (see Geometry), and at the same time he rendered signal service to algebra, more especially in the theory of equations. His notation is based primarily on that of Harriot; but he differs from that writer in retaining the first letters of the alphabet for the known quantities and the final letters for the unknowns. The 17th century is a famous epoch in the progress of science, and the mathematics in no way lagged behind. The discoveries of Johann Kepler and Bonaventure Cavalieri were the foundation upon which Sir Isaac Newton and Gottfried Wilhelm Leibnitz erected that wonderful edifice, the Infinitesimal Calculus (q.v.). Many new fields were opened up, but there was still continual progress in pure algebra. Continued fractions, one of the earliest examples of which is Lord Brouncker's expression for the ratio of the circumference to the diameter of a circle (see Circle), were elaborately discussed by John Wallis and Leonhard Euler; the convergence of series treated by Newton, Euler and the Bernoullis; the binomial theorem, due originally to Newton and subsequently expanded by Euler and others, was used by Joseph Louis Lagrange as the basis of his Calcul des Fonctions. Diophantine problems were revived by Gaspar Bachet, Pierre Fermat and Euler; the modern theory of numbers was founded by Fermat and developed by Euler, Lagrange and others; and the theory of probability was attacked by Blaise Pascal and Fermat, their work being subsequently expanded by James Bernoulli, Abraham de Moivre, Pierre Simon Laplace and others.
The germs of the theory of determinants are to be found in the works of Leibnitz; Étienne Bézout utilized them in 1764 for expressing the result obtained by the process of elimination known by his name, and since restated by Arthur Cayley. In recent times many mathematicians have formulated other kinds of algebras, in which the operators do not obey the laws of ordinary algebra. This study was inaugurated by George Peacock, who was one of the earliest mathematicians to recognize the symbolic character of the fundamental principles of algebra. About the same time, D. F. Gregory published a paper “on the real nature of symbolical algebra.” In Germany the work of Martin Ohm (System der Mathematik, 1822) marks a step forward. Notable service was also rendered by Augustus de Morgan, who applied logical analysis to the laws of mathematics. The geometrical interpretation of imaginary quantities had a far-reaching influence on the development of symbolic algebras. The attempts to elucidate this question by H. Kühn (1750-1751) and Jean Robert Argand (1806) were completed by Karl Friedrich Gauss, and the formulation of various systems of vector analysis by Sir William Rowan Hamilton, Hermann Grassmann and others, followed. These algebras were essentially geometrical, and it remained, more or less, for the American mathematician Benjamin Peirce to devise systems of pure symbolic algebras; in this work he was ably seconded by his son Charles S. Peirce. In England, multiple algebra was developed by James Joseph Sylvester, who, in company with Arthur Cayley, expanded the theory of matrices, the germs of which are to be found in the writings of Hamilton (see above, under (B); and Quaternions). The preceding summary shows the specialized nature which algebra has assumed since the 17th century. To attempt a history of the development of the various topics in this article is inappropriate, and we refer the reader to the separate articles.
The Mathematica Journal: Volume 9, Issue 2: In and Out

Q: Given a set of n-dimensional vectors, is there a simple way to check if they are coplanar?

A: Eckhard Hennig (aidev@kaninkolo.de) answers: To establish coplanarity, compute the dimension of the null space (the nullity) of the linear mapping defined by the vectors. Define three linearly independent vectors, then generate a set of 10 coplanar vectors from them, using SeedRandom to obtain a repeatable sequence of pseudorandom numbers, and examine the null space of the mapping they define. In Version 5, you can use MatrixRank instead to calculate the dimension of the subspace spanned by the vectors. NullSpace may be unreliable for large and ill-conditioned systems. A more reliable, but less efficient, alternative is to determine the number of singular values of the mapping. Since there are two singular values, the set of vectors spans a two-dimensional space, that is, the vectors are coplanar. Now crosscheck for the noncoplanar case: generate 10 generic vectors. The null space is empty, which implies that the vectors are not coplanar. Alternatively, the number of singular values equals the dimension of the space.
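The inline Mathematica expressions of this column did not survive extraction, so here is a rough plain-Python analogue of the same rank-based test (the vectors below are made-up stand-ins; in Mathematica one would call MatrixRank, NullSpace or SingularValues as described):

```python
def matrix_rank(rows, tol=1e-9):
    """Rank by Gaussian elimination -- a stand-in for Mathematica's MatrixRank."""
    m = [list(r) for r in rows]
    ncols = len(m[0])
    rank = 0
    for col in range(ncols):
        # find a pivot row for this column among the not-yet-used rows
        pivot = next((r for r in range(rank, len(m)) if abs(m[r][col]) > tol), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank:
                f = m[r][col] / m[rank][col]
                for c in range(col, ncols):
                    m[r][c] -= f * m[rank][c]
        rank += 1
    return rank

# Two independent direction vectors in R^5 (hypothetical stand-ins)
u = [1, 0, 2, 0, 1]
v = [0, 1, 1, 3, 0]

# Ten coplanar vectors: fixed linear combinations a*u + b*v
combos = [(1, 0), (0, 1), (2, 3), (1, -1), (0.5, 2),
          (3, 3), (-1, 4), (2, 2), (5, -2), (1, 1)]
coplanar = [[a * ui + b * vi for ui, vi in zip(u, v)] for a, b in combos]
print(matrix_rank(coplanar))     # 2 -> the set spans a plane (coplanar)

# Crosscheck: add a third independent direction and the rank exceeds 2
noncoplanar = coplanar + [[0, 0, 1, 0, 0]]
print(matrix_rank(noncoplanar))  # 3 -> not coplanar
```

As the column notes for NullSpace, naive elimination with a fixed tolerance can also misbehave on ill-conditioned data; a singular-value count (SVD) is the numerically robust variant of the same idea.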
Conroe SAT Math Tutor Find a Conroe SAT Math Tutor ...During the summers I am an AP Reader for the Calculus AP test. I have taught AP Calculus AB & BC and AP Statistics, also Dual Credit Trigonometry, Pre-Calculus, College Algebra, Calculus I & II. I have been an adjunct professor at Lone Star College for 10 years teaching Finite Math, College Algebra, Trigonometry, and Pre-Calculus. 6 Subjects: including SAT math, calculus, algebra 2, trigonometry ...My sessions are personalized to meet the needs of each student and are structured to guide the student toward a better understanding of the material. Students should expect to work hard, have fun, and come away with confident mastery of the subject matter. Every student has the potential to be a successful learner. 73 Subjects: including SAT math, reading, English, writing ...The mastery of reading depends on the understanding and practice of these vital reading concepts. I am trained in creating systems for organizing, processing, and comprehending what students read, hear, or prepare in class; planning homework and long-term assignments; studying for tests; and det... 15 Subjects: including SAT math, reading, English, grammar ...During the time I was working on my first Master's in Education, I developed an Intervention program for at-risk students focused on Learning Styles & Study Skills. The program was a success and became the model for the school district. Since than, I have continued to work with students by creating individual study-skill plans. 33 Subjects: including SAT math, English, reading, calculus ...I helped him get up to a B and a 4 out of 5 on the AP Calculus exam. I taught him basic test taking skills. He and his mother were very happy with the results. 24 Subjects: including SAT math, chemistry, calculus, physics
MathGroup Archive: October 1997

Re: Re: Another Bug in Mathematica 3.0.0 definite integration

• To: mathgroup at smc.vnet.net
• Subject: [mg9311] Re: [mg9254] Re: Another Bug in Mathematica 3.0.0 definite integration
• From: David Withoff <withoff>
• Date: Mon, 27 Oct 1997 02:47:24 -0500
• Sender: owner-wri-mathgroup at wolfram.com

> >> a=Integrate[1/Sqrt[Sin[x]+Cos[x]], {x,0,Pi/2}]
> This is a pattern I have observed in many integrals involving
> trigonometric functions that lead to hypergeometric and/or elliptic
> functions. The time comparison of 2.2 vs 3.0 is interesting.
> Results from Mathematica 2.2.1 on above test function:
> (Pi^(1/2)*Gamma[1/4])/(2*2^(1/4)*Gamma[3/4]) -
> 2*Hypergeometric2F1[1/2, 1, 5/4, -1] + Hypergeometric2F1[3/4, 1, 3/2, -1]
> 1.397395299268851
> Time: 0.95 seconds on Mac 8500/120.
> Results from Mathematica 3.0.1, same machine:
> -2*2^(3/4)*HypergeometricPFQ[{1/4, 3/4}, {5/4}, -1]
> -3.012362296717479
> Time: 10.4 seconds, a factor of 10.
> Both 3.0 and 3.0.1 give the warning about convergence.

I have not noticed or seen other reports of a pattern of behavior like this. A slowdown of a factor of 10 is sufficiently atypical that it should be reported to Wolfram Research so that it can be investigated. The only pattern that I have noticed is that Version 3.0 can do more integrals than Version 2.2, and that the integration packages load much more quickly. Just to check that I wasn't making this up, I tried a few integrals just now in Version 2.2 and in Version 3.0 for Windows 95 on my 100 MHz Pentium computer. The first elliptic integral that I tried finished about 18 times faster in Version 3.0 than in Version 2.2, and in Version 3.0 I didn't need to load any separate packages. The first time through, Version 2.2 gave up on my integral after 27.736 seconds:

In[1]:= Integrate[1/Sqrt[2 + Cos[x] + Sin[x]], {x, 0, Pi}] //Timing

General::intinit: Loading integration packages -- please wait.
Out[1]= {27.736 Second, Integrate[1/Sqrt[2 + Cos[x] + Sin[x]], {x, 0, Pi}]}

The second time through it gave up after 1.982 seconds, which suggests that it took about 25 seconds to load the integration packages. After loading the separate Calculus`EllipticIntegrate` package (which took 28.028 seconds) Version 2.2 gave me a correct result in terms of elliptic integrals in 29.507 seconds. In Version 3.0 I got a correct result in 3.9 seconds the first time through, and 1.59 seconds the second time, which suggests that it took about 2.3 seconds (instead of 25+28 seconds) to load all of the necessary packages automatically, and that the integral itself takes 1.59 seconds in Version 3.0, about 18 times faster than the 29.507 seconds that I found in Version 2.2. I also tried some simpler integrals -- just the first trigonometric integrals that popped into my head and that I guessed that Version 2.2 would be able to do. In Version 2.2 these integrals finished in 12.77 seconds:

In[5]:= Timing[{
  Integrate[1/(1 + Sin[b x]), x],
  Integrate[Sin[3 a x] + Cos[b x], {x, x1, x2}],
  Integrate[Sin[x]^3, {x, 0, Pi}],
  Integrate[1/Sin[c x], {x, a, b}],
  Integrate[x^3 Sin[x]^2 Cos[x], {x, 1, 2}],
  Integrate[Sqrt[x] Sin[x], x]}][[1]]

Out[5]= 12.77 Second

The same integrals took 6.92 seconds in Version 3.0, almost twice as fast as in Version 2.2. A speedup of a factor of 18, as in my example, is unusual, but a slowdown of a factor of 10 is also unusual. Any significant slowdown like this should be reported to Wolfram Research.

Dave Withoff
Wolfram Research
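As a sanity check on which of the two quoted closed forms is right, the disputed integral can be evaluated numerically outside Mathematica; a minimal sketch using composite Simpson's rule (valid here because Sin[x] + Cos[x] >= 1 on [0, Pi/2], so the integrand is smooth):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with an even number n of subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

# The disputed integral: Integrate[1/Sqrt[Sin[x] + Cos[x]], {x, 0, Pi/2}]
val = simpson(lambda x: 1.0 / math.sqrt(math.sin(x) + math.cos(x)), 0.0, math.pi / 2)
print(val)  # ~1.3973953, agreeing with the Version 2.2 closed form
```

The numeric value matches the quoted 2.2.1 result 1.397395299268851, which supports the original poster's implication that the Version 3.0 answer of -3.012362296717479 was wrong (a negative value is impossible for a positive integrand in any case).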
Warren, RI Math Tutor Find a Warren, RI Math Tutor ...The lessons we teach ourselves are the ones we remember best. Once I understand what concept a student needs to be taught or clarified, I devise a series of problems or logic steps that the student can solve in succession. Ultimately this will allow the student to start from a place of confiden... 12 Subjects: including algebra 1, algebra 2, calculus, chemistry ...Some topics I work with include solving equations, absolute value, inequalities, proportions and factoring. The study of anatomy involves a good deal of learning terminology. I try to help students by teaching them the meaning of common anatomical prefixes, suffixes, and root words. 15 Subjects: including geometry, algebra 1, algebra 2, biology ...I have coached baseball for a combined six years from T-Ball age through high school. I played baseball for 15 years. I was an assistant to training little league umpires. 22 Subjects: including algebra 1, prealgebra, physics, writing ...I presently teach in a public school. I teach direct study skills to five students every day in the classroom I teach. These skills and strategies are either derived from IEP's, district policies and the common core standards. 33 Subjects: including algebra 1, geometry, prealgebra, reading ...I received a score of 100% on the WyzAnt test for this subject. I taught Pre-Algebra a few times. I received a score of 100% on the WyzAnt test for this subject. 
7 Subjects: including trigonometry, algebra 1, algebra 2, geometry
Q: Consider a normal population with µ = 25 and σ = 8.0. (A) Calculate the standard score for a value x of 22. (B) Calculate the standard score for a randomly selected sample of 45 with x̄ = 22. (C) Explain why the standard scores of 22 are different between A and B above. I figure the solution to A is 25 - 22 / 7 = 0.428, but I don't have any idea how to work out the rest. Help?

Best Response: (A) \[z=\frac{X-\mu}{\sigma}=\frac{22-25}{8}=you\ can\ calculate\]
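A sketch of the full computation the answer points toward, using the values from the question (part B replaces σ with the standard error σ/√n; note the asker's attempt divided by 7 instead of 8 and dropped the sign):

```python
import math

mu, sigma = 25.0, 8.0
x, n = 22.0, 45

# (A) standard score of a single value: z = (x - mu) / sigma
z_single = (x - mu) / sigma
print(z_single)  # -0.375

# (B) standard score of a sample mean: divide by the standard error sigma/sqrt(n)
z_mean = (x - mu) / (sigma / math.sqrt(n))
print(round(z_mean, 3))  # -2.516

# (C) the deviation from mu is -3 in both cases, but the sampling distribution
# of the mean is much narrower (spread sigma/sqrt(45) instead of sigma), so the
# same value 22 lies many more standard units below the mean in case B.
```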
DbSpatialServices Class

Represents a provider-independent service API for geospatial (Geometry/Geography) type support.

Namespace: System.Data.Spatial
Assembly: System.Data.Entity (in System.Data.Entity.dll)

The DbSpatialServices type exposes the following members.

Constructors

DbSpatialServices: Initializes a new instance of the DbSpatialServices class.

Properties

Default: Gets the default services for the DbSpatialServices.

Methods

AsBinary(DbGeography): Gets the well-known binary representation of the given DbGeography value.
AsBinary(DbGeometry): Gets the well-known binary representation of the given DbGeometry value.
AsGml(DbGeography): Generates the Geography Markup Language (GML) representation of this DbGeography value.
AsGml(DbGeometry): Generates the Geography Markup Language (GML) representation of this DbGeometry value.
AsText(DbGeography): Gets the well-known text representation of the given DbGeography value. This value should include only the Longitude and Latitude of points.
AsText(DbGeometry): Gets the well-known text representation of the given DbGeometry value, including only X and Y coordinates for points.
AsTextIncludingElevationAndMeasure: Returns a text representation of DbSpatialServices with elevation and measure.
AsTextIncludingElevationAndMeasure: Returns a text representation of DbSpatialServices with elevation and measure.
Buffer(DbGeography, Double): Creates a geography value representing all points less than or equal to distance from the given DbGeography value.
Buffer(DbGeometry, Double): Creates a geometry value representing all points less than or equal to distance from the given DbGeometry value.
Contains: Determines whether one DbGeometry value spatially contains the other.
CreateGeography: This method is intended for use by derived implementations of GeographyFromProviderValue after suitable validation of the specified provider value to ensure it is suitable for use with the derived implementation.
CreateGeometry: This method is intended for use by derived implementations of GeometryFromProviderValue after suitable validation of the specified provider value to ensure it is suitable for use with the derived implementation.
CreateProviderValue: Creates a provider-specific value compatible with this spatial services implementation based on the specified well-known DbGeography representation.
CreateProviderValue: Creates a provider-specific value compatible with this spatial services implementation based on the specified well-known DbGeometry representation.
CreateWellKnownValue(DbGeography): Creates an instance of DbGeographyWellKnownValue that represents the specified DbGeography value using one or both of the standard well-known spatial formats.
CreateWellKnownValue(DbGeometry): Creates an instance of DbGeometryWellKnownValue that represents the specified DbGeometry value using one or both of the standard well-known spatial formats.
Crosses: Determines whether the two given DbGeometry values spatially cross.
Difference(DbGeography, DbGeography): Computes the difference of two DbGeography values.
Difference(DbGeometry, DbGeometry): Computes the difference between two DbGeometry values.
Disjoint(DbGeography, DbGeography): Determines whether the two given DbGeography values are spatially disjoint.
Disjoint(DbGeometry, DbGeometry): Determines whether the two given DbGeometry values are spatially disjoint.
Distance(DbGeography, DbGeography): Computes the distance between the closest points in two DbGeography values.
Distance(DbGeometry, DbGeometry): Computes the distance between the closest points in two DbGeometry values.
ElementAt(DbGeography, Int32): Returns an element of the given DbGeography value, if it represents a geography collection.
ElementAt(DbGeometry, Int32): Returns an element of the given DbGeometry value, if it represents a geometry collection.
Equals(Object): Determines whether the specified object is equal to the current object. (Inherited from Object.)
Finalize: Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)
GeographyCollectionFromBinary: Creates a new DbGeography collection value based on the specified well-known binary value and coordinate system identifier (SRID).
GeographyCollectionFromText: Creates a new DbGeography collection value based on the specified well-known text value and coordinate system identifier (SRID).
GeographyFromBinary(Byte[]): Creates a new DbGeography value based on the specified well-known binary value.
GeographyFromBinary(Byte[], Int32): Creates a new DbGeography value based on the specified well-known binary value and coordinate system identifier (SRID).
GeographyFromGml(String): Creates a new DbGeography value based on the specified Geography Markup Language (GML) value.
GeographyFromGml(String, Int32): Creates a new DbGeography value based on the specified Geography Markup Language (GML) value and coordinate system identifier (SRID).
GeographyFromProviderValue: Creates a new DbGeography value based on a provider-specific value that is compatible with this spatial services implementation.
GeographyFromText(String): Creates a new DbGeography value based on the specified well-known text value.
GeographyFromText(String, Int32): Creates a new DbGeography value based on the specified well-known text value and coordinate system identifier (SRID).
GeographyLineFromBinary: Creates a new DbGeography line value based on the specified well-known binary value and coordinate system identifier (SRID).
GeographyLineFromText: Creates a new DbGeography line value based on the specified well-known text value and coordinate system identifier (SRID).
GeographyMultiLineFromBinary: Creates a new DbGeography multiline value based on the specified well-known binary value and coordinate system identifier.
GeographyMultiLineFromText: Creates a new DbGeography multiline value based on the specified well-known text value and coordinate system identifier.
GeographyMultiPointFromBinary: Creates a new DbGeography multipoint value based on the specified well-known binary value and coordinate system identifier.
GeographyMultiPointFromText: Creates a new DbGeography multipoint value based on the specified well-known text value and coordinate system identifier.
GeographyMultiPolygonFromBinary: Creates a new DbGeography multi polygon value based on the specified well-known binary value and coordinate system identifier.
GeographyMultiPolygonFromText: Creates a new DbGeography multi polygon value based on the specified well-known text value and coordinate system identifier.
GeographyPointFromBinary: Creates a new DbGeography point value based on the specified well-known binary value and coordinate system identifier (SRID).
GeographyPointFromText: Creates a new DbGeography point value based on the specified well-known text value and coordinate system identifier (SRID).
GeographyPolygonFromBinary: Creates a new DbGeography polygon value based on the specified well-known binary value and coordinate system identifier (SRID).
GeographyPolygonFromText: Creates a new DbGeography polygon value based on the specified well-known text value and coordinate system identifier (SRID).
GeometryCollectionFromBinary: Creates a new DbGeometry collection value based on the specified well-known binary value and coordinate system identifier (SRID).
GeometryCollectionFromText: Creates a new DbGeometry collection value based on the specified well-known text value and coordinate system identifier (SRID).
GeometryFromBinary(Byte[]): Creates a new DbGeometry value based on the specified well-known binary value.
GeometryFromBinary(Byte[], Int32): Creates a new DbGeometry value based on the specified well-known binary value and coordinate system identifier (SRID).
GeometryFromGml(String) Creates a new DbGeometry value based on the specified Geography Markup Language (GML) value. GeometryFromGml(String, Int32) Creates a new DbGeometry value based on the specified Geography Markup Language (GML) value and coordinate system identifier (SRID). GeometryFromProviderValue Creates a new DbGeometry value based on a provider-specific value that is compatible with this spatial services implementation. GeometryFromText(String) Creates a new DbGeometry value based on the specified well-known text value. GeometryFromText(String, Int32) Creates a new DbGeometry value based on the specified well-known text value and coordinate system identifier (SRID). GeometryLineFromBinary Creates a new DbGeometry line value based on the specified well-known binary value and coordinate system identifier (SRID). GeometryLineFromText Creates a new DbGeometry line value based on the specified well-known text value and coordinate system identifier (SRID). GeometryMultiLineFromBinary Creates a new DbGeometry multiline value based on the specified well-known binary value and coordinate system identifier. GeometryMultiLineFromText Creates a new DbGeometry multiline value based on the specified well-known text value and coordinate system identifier. GeometryMultiPointFromBinary Creates a new DbGeometry multipoint value based on the specified well-known binary value and coordinate system identifier. GeometryMultiPointFromText Creates a new DbGeometry multipoint value based on the specified well-known text value and coordinate system identifier. GeometryMultiPolygonFromBinary Creates a new DbGeometry multi polygon value based on the specified well-known binary value and coordinate system identifier. GeometryMultiPolygonFromText Creates a new DbGeometry multi polygon value based on the specified well-known text value and coordinate system identifier. 
GeometryPointFromBinary Creates a new DbGeometry point value based on the specified well-known binary value and coordinate system identifier (SRID). GeometryPointFromText Creates a new DbGeometry point value based on the specified well-known text value and coordinate system identifier (SRID). GeometryPolygonFromBinary Creates a new DbGeometry polygon value based on the specified well-known binary value and coordinate system identifier (SRID). GeometryPolygonFromText Creates a new DbGeometry polygon value based on the specified well-known text value and coordinate system identifier (SRID). GetArea(DbGeography) Returns a nullable double value that indicates the area of the given DbGeography value, which may be null if the value does not represent a surface. GetArea(DbGeometry) Returns a nullable double value that indicates the area of the given DbGeometry value, which may be null if the value does not represent a surface. GetBoundary Returns a DbGeometry value that represents the boundary of the given DbGeometry value. GetCentroid Returns a DbGeometry value that represents the centroid of the given DbGeometry value, which may be null if the value does not represent a surface. GetConvexHull Returns a DbGeometry value that represents the convex hull of the given DbGeometry value. GetCoordinateSystemId(DbGeography) Returns the coordinate system identifier of the given DbGeography value. GetCoordinateSystemId(DbGeometry) Returns the coordinate system identifier of the given DbGeometry value. GetDimension(DbGeography) Gets the dimension of the given DbGeography value or, if the value is a collection, the largest element dimension. GetDimension(DbGeometry) Gets the dimension of the given DbGeometry value or, if the value is a collection, the largest element dimension. GetElementCount(DbGeography) Returns the number of elements in the given DbGeography value, if it represents a geography collection.
GetElementCount(DbGeometry) Returns the number of elements in the given DbGeometry value, if it represents a geometry collection. GetElevation(DbGeography) Returns the elevation (Z coordinate) of the given DbGeography value, if it represents a point. GetElevation(DbGeometry) Returns the elevation (Z) of the given DbGeometry value, if it represents a point. GetEndPoint(DbGeography) Returns a DbGeography value that represents the end point of the given DbGeography value, which may be null if the value does not represent a curve. GetEndPoint(DbGeometry) Returns a DbGeometry value that represents the end point of the given DbGeometry value, which may be null if the value does not represent a curve. GetEnvelope Gets the envelope (minimum bounding box) of the given DbGeometry value, as a geometry value. GetExteriorRing Returns a DbGeometry value that represents the exterior ring of the given DbGeometry value, which may be null if the value does not represent a polygon. GetHashCode Serves as the default hash function. (Inherited from Object.) GetInteriorRingCount Returns the number of interior rings in the given DbGeometry value, if it represents a polygon. GetIsClosed(DbGeography) Returns a nullable Boolean value that indicates whether the given DbGeography value is closed, which may be null if the value does not represent a curve. GetIsClosed(DbGeometry) Returns a nullable Boolean value that indicates whether the given DbGeometry value is closed, which may be null if the value does not represent a curve. GetIsEmpty(DbGeography) Returns a nullable Boolean value that indicates whether the given DbGeography value is empty. GetIsEmpty(DbGeometry) Returns a nullable Boolean value that indicates whether the given DbGeometry value is empty. GetIsRing Returns a nullable Boolean value that indicates whether the given DbGeometry value is a ring, which may be null if the value does not represent a curve. GetIsSimple Returns a nullable Boolean value that indicates whether the given DbGeometry value is simple.
GetIsValid Returns a nullable Boolean value that indicates whether the given DbGeometry value is valid. GetLatitude Returns the Latitude coordinate of the given DbGeography value, if it represents a point. GetLength(DbGeography) Returns a nullable double value that indicates the length of the given DbGeography value, which may be null if the value does not represent a curve. GetLength(DbGeometry) Returns a nullable double value that indicates the length of the given DbGeometry value, which may be null if the value does not represent a curve. GetLongitude Returns the Longitude coordinate of the given DbGeography value, if it represents a point. GetMeasure(DbGeography) Returns the M (Measure) coordinate of the given DbGeography value, if it represents a point. GetMeasure(DbGeometry) Returns the M (Measure) coordinate of the given DbGeometry value, if it represents a point. GetPointCount(DbGeography) Returns the number of points in the given DbGeography value, if it represents a linestring or linear ring. GetPointCount(DbGeometry) Returns the number of points in the given DbGeometry value, if it represents a linestring or linear ring. GetPointOnSurface Returns a DbGeometry value that represents a point on the surface of the given DbGeometry value, which may be null if the value does not represent a surface. GetSpatialTypeName(DbGeography) Returns a value that indicates the spatial type name of the given DbGeography value. GetSpatialTypeName(DbGeometry) Returns a value that indicates the spatial type name of the given DbGeometry value. GetStartPoint(DbGeography) Returns a DbGeography value that represents the start point of the given DbGeography value, which may be null if the value does not represent a curve. GetStartPoint(DbGeometry) Returns a DbGeometry value that represents the start point of the given DbGeometry value, which may be null if the value does not represent a curve. GetType Gets the Type of the current instance. (Inherited from Object.)
GetXCoordinate Returns the X coordinate of the given DbGeometry value, if it represents a point. GetYCoordinate Returns the Y coordinate of the given DbGeometry value, if it represents a point. InteriorRingAt Returns an interior ring from the given DbGeometry value, if it represents a polygon. Intersection(DbGeography, DbGeography) Computes the intersection of two DbGeography values. Intersection(DbGeometry, DbGeometry) Computes the intersection of two DbGeometry values. Intersects(DbGeography, DbGeography) Determines whether the two given DbGeography values spatially intersect. Intersects(DbGeometry, DbGeometry) Determines whether the two given DbGeometry values spatially intersect. MemberwiseClone Creates a shallow copy of the current Object. (Inherited from Object.) Overlaps Determines whether the two given DbGeometry values spatially overlap. PointAt(DbGeography, Int32) Returns a point element of the given DbGeography value, if it represents a linestring or linear ring. PointAt(DbGeometry, Int32) Returns a point element of the given DbGeometry value, if it represents a linestring or linear ring. Relate Determines whether the two given DbGeometry values are spatially related according to the given Dimensionally Extended Nine-Intersection Model (DE-9IM) intersection pattern. SpatialEquals(DbGeography, DbGeography) Determines whether the two given DbGeography values are spatially equal. SpatialEquals(DbGeometry, DbGeometry) Determines whether the two given DbGeometry values are spatially equal. SymmetricDifference(DbGeography, DbGeography) Computes the symmetric difference of two DbGeography values. SymmetricDifference(DbGeometry, DbGeometry) Computes the symmetric difference between two DbGeometry values. ToString Returns a string that represents the current object. (Inherited from Object.) Touches Determines whether the two given DbGeometry values spatially touch. Union(DbGeography, DbGeography) Computes the union of two DbGeography values. Union(DbGeometry, DbGeometry) Computes the union of two DbGeometry values.
Within Determines whether one DbGeometry value is spatially within the other. Windows Phone 8.1, Windows Phone 8, Windows 8.1, Windows Server 2012 R2, Windows 8, Windows Server 2012, Windows 7, Windows Vista SP2, Windows Server 2008 (Server Core Role not supported), Windows Server 2008 R2 (Server Core Role supported with SP1 or later; Itanium not supported) The .NET Framework does not support all versions of every platform. For a list of the supported versions, see .NET Framework System Requirements.
Quadratic help!!

June 19th 2010, 10:11 PM #1 (May 2010)
Marnie can walk 1 kph faster than Jon. She completes a 20 km hike 1 hour before him. Write an equation and solve it to find their walking speeds. I cannot seem to solve this, ahh.

June 19th 2010, 10:50 PM #2
Hello LaylaSam
Suppose that Jon walks at $x$ kph. Then Marnie walks at $x+1$ kph. Using the formula $\text{time} = \dfrac{\text{distance}}{\text{speed}}$, Marnie takes $\frac{20}{x+1}$ hours, and Jon takes $\frac{20}{x}$ hours. Since Jon takes $1$ hour more than Marnie, we get:
$\dfrac{20}{x}=\dfrac{20}{x+1}+1$
Now multiply both sides by $x(x+1)$ to get rid of fractions:
$\Rightarrow 20x+20=20x+x^2+x$
$\Rightarrow x^2+x-20=0$
$\Rightarrow (x+5)(x-4)=0$
$\Rightarrow x = 4$, since $x=-5$ is impossible.
So Jon walks at 4 kph and Marnie at 5 kph.

June 19th 2010, 10:52 PM #3
Omg ur a genious!!!
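The algebra in the thread can be sanity-checked numerically. The sketch below (the function name and structure are mine, not from the forum) solves the same quadratic $x^2+x-20=0$ with the standard quadratic formula and confirms the one-hour gap:

```python
# Hypothetical check of the forum solution: Jon walks x kph, Marnie x+1 kph,
# and Marnie finishes the 20 km hike one hour sooner, so 20/x - 20/(x+1) = 1,
# which rearranges to x^2 + x - 20 = 0.
import math

def walking_speeds(distance=20.0):
    # d/x - d/(x+1) = 1  rearranges to  x^2 + x - d = 0  (a=1, b=1, c=-d)
    a, b, c = 1.0, 1.0, -distance
    x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # keep the positive root
    return x, x + 1  # (Jon's speed, Marnie's speed)

jon, marnie = walking_speeds()
assert abs(20 / jon - 20 / marnie - 1) < 1e-9  # hike times differ by one hour
print(jon, marnie)  # 4.0 5.0
```

The negative root $x=-5$ is discarded automatically by keeping only the $+\sqrt{\cdot}$ branch, mirroring the "since $x=-5$ is impossible" step.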
Devon Geometry Tutor

...Grammar, spelling, important. Please, no grade-school vocabulary: good, happy, big, etc.! I expect the student to write at least 2 essays per week for correction. Hard work pays off!
35 Subjects: including geometry, English, chemistry, physics

...I start where the student is. Some students need help with their current assignments. On the other hand, I have found many Algebra 2 students need a few concepts they missed from Algebra 1.
10 Subjects: including geometry, calculus, physics, algebra 1

...I have experience in both derivatives and integration. I have taken several courses in geometry and have experience with shapes and angles. I have tutored many students in pre algebra and have experience dealing with different types of equations and variables. I have spent the past two years at Jacksonville University tutoring math.
13 Subjects: including geometry, calculus, GRE, algebra 1

...I was also the Director of Education for the Women of Excellence for the same faith based facility for 5 years. I have written and taught biblical studies for women, children and the general population over the past 13 years and am currently a member of the Bishop's Council of the United Fellows...
51 Subjects: including geometry, English, reading, statistics

...My greatest skill is the ability to take complex concepts and break them into manageable and understandable parts. I have a degree in mathematics and a masters in education, so I have the technical and instructional skills to help any student. I have been teaching math at a top rated high school for the last 10 years and my students are always among the top performers in the...
15 Subjects: including geometry, calculus, algebra 1, GRE
PAC-learnability of Probabilistic Deterministic Finite State Automata in terms of Variation Distance Nick Palmer and Paul Goldberg In: 16th Algorithmic Learning Theory Conference, 8-11 Oct 2005, Singapore. We consider the problem of PAC-learning distributions over strings, represented by probabilistic deterministic finite automata (PDFAs). PDFAs are a probabilistic model for the generation of strings of symbols, that have been used in the context of speech and handwriting recognition, and bioinformatics. Recent work on learning PDFAs from random examples has used the KL-divergence as the error measure; here we use the variation distance. We build on recent work by Clark and Thollard, and show that the use of the variation distance allows simplifications to be made to the algorithms, and also a strengthening of the results; in particular that using the variation distance, we obtain polynomial sample size bounds that are independent of the expected length of strings. PDF - Requires Adobe Acrobat Reader or other PDF viewer. Postscript - Requires a viewer, such as GhostView EPrint Type: Conference or Workshop Item (Paper) Project Keyword: Project Keyword UNSPECIFIED Subjects: Theory & Algorithms ID Code: 1242 Deposited By: Paul Goldberg Deposited On: 28 November 2005
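As a concrete illustration of the error measure the abstract contrasts with KL-divergence (the example is entirely mine, not from the paper), the variation distance between two distributions over strings is half the L1 distance between their probability functions:

```python
# Hypothetical sketch: total variation distance between two distributions over
# strings, d(P, Q) = (1/2) * sum_s |P(s) - Q(s)|, taken over the joint support.
def variation_distance(p, q):
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in support)

# Two toy string distributions (illustrative only).
p = {"a": 0.5, "ab": 0.5}
q = {"a": 0.25, "ab": 0.25, "abb": 0.5}
print(variation_distance(p, q))  # 0.5
```

Unlike KL-divergence, this quantity is always finite and bounded by 1, even when one distribution assigns zero probability to a string the other can generate — one intuition for why it behaves differently as a learning error measure.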
Graph Drawing

Graph drawing can be thought of as a form of data visualization, but unlike most other types of visualization the information to be visualized is purely combinatorial, consisting of edges connecting a set of vertices. Applications of graph drawing include genealogy, cartography (subway maps form one of the standard examples of a graph drawing), sociology, software engineering (visualization of connections between program modules), VLSI design, and visualization of hypertext links. Typical concerns of graph drawing algorithms are the area needed to draw a graph, the types of edges (straight lines or bent), the number of edge crossings for nonplanar graphs, separation of vertices and edges so they can be distinguished visually, and preservation of properties such as symmetry and distance. The area has a large literature, concentrated in the annual Graph Drawing symposia, and I won't try to link here to all available research projects and papers on the subject, only those with some particular historical or application interest.

Part of Geometry in Action, a collection of applications of computational geometry. David Eppstein, Theory Group, ICS, UC Irvine. Semi-automatically filtered from a common source file.
Math Patterns Geometric Shapes Lesson Plans & Worksheets

Geometry – AAA Math: Geometry – Table of Contents; Geometry – Topics; Geometry Facts and Calculations; Area; Perimeter and Circumference.

Math Playground: Build patterns, create and solve critical thinking problems, and model various math concepts with pattern blocks. Subject(s): Place Value, Number Sense and Operations, Math, geometric patterns. Grade(s): Middle School (5-8). License: CC Attribution 3.0. Created: August 9th, 2010.

Geometry and Fractions: Patterns in Geometry (Gr. 1) Printable (1st Grade)

Geometric Patterns in Math Lesson Plans & Worksheets: Find math patterns and geometric shapes lesson plans and teaching resources that inspire student learning.

Geometric Patterns: Check out our pictures of geometric patterns, tessellations and designs. Find a range of pictures and images featuring interesting shapes and unique designs.

IXL – Geometric growth patterns (4th and 5th grade math practice): Fun math practice! Improve your skills with free problems in 'Geometric growth patterns' and thousands of other practice lessons. Related strand: place values and number sense (place values; convert between place values; compare numbers up to billions).

Geometric Patterns Worksheet: Use the provided squares to determine the next squares in each geometric pattern.

Patterns: Patterns are a good introduction to algebra and geometry in math. In this movie, you'll learn how to build a pattern by repeating shapes, colors, and sounds. This video sets up a math lesson on geometric patterns; you can lead a class discussion comparing Robin's pattern with geometric patterns and what makes them geometric.

The subject of geometry may seem dry and uninteresting, but in reality, geometric principles can be used to create all manner of amazing patterns and shapes.

geometric patterns | plus.maths.org: Many people find no beauty and pleasure in maths – but, as Lewis Dartnell explains, our brains have evolved to take pleasure in rhythm, structure and pattern.

Multi-Dimensional or Hyper-Dimensional Geometry: The geometry of symmetrical shapes familiar from 3-D, viewed in their 4th dimension. Contributed by CiCi Naifeh.

Video: What Is a Geometric Pattern in Math? | eHow: In order for a pattern to be considered geometric it has to meet a few key characteristics. Find out about a geometric pattern in math with help from a longtime math tutor.

Math.com Homework Help Geometry: Free math lessons and math homework help from basic math to algebra, geometry and beyond. Students, teachers, parents, and everyone.

Geometry Lesson Plans: Back to Geometry Classroom Materials. Math by Subject, K12 Topics: algebra, arithmetic, calculus, discrete math, geometry, pre-calculus.

Geometric Patterns Worksheet (Education Worksheets > Pre Algebra Worksheets > Pattern Worksheets > Math Patterns Worksheets): Practice problems.

NCTM Illuminations lesson plans: Internet-based lesson plans are examples of how lessons can be built around looking for patterns; the students find products in the i-math investigations.

Patterns | Algebra Second 2nd Grade Math Standards, Grade Level Help: Given rules, complete tables to reveal both arithmetic and geometric patterns. 0206.3.1

Geometry: Patterns in Geometry (Gr. 3) Printable (3rd Grade)

A geometric pattern in maths is a sequence that is normally made by multiplying by a certain factor each time.

Geometric sequence: In math, a sequence is a set of numbers that follow a pattern; we call each number in the sequence a term.

Simple Geometric Patterns Worksheets

Math Manipulative – Pattern Blocks – Welcome to Math Playground: Use six common geometric shapes to build patterns and solve problems.

Geometric shapes can form patterns and designs. Example 1: Find the missing pattern.

Lesson 6.7: Frieze patterns with flips, turns and slides practice. Using the codes for frieze patterns, classify each of the following.

Download high quality free maths resources produced by Chris Olley, Director of the Maths PGCE at King's College London, and the Maths Zone team.

Awesome Library – Materials_Search: The Awesome Library organizes 22,000 carefully reviewed K12 education resources, the top 5 percent for teachers and students.

Geometric Shapes Patterns | Make Math Easy: This article is about geometric shapes patterns. Geometry is the representation of two dimensional and three dimensional shapes.

Print, download, or use this free kindergarten pattern worksheet online. The geometry patterns worksheet is great for kids, teachers, and parents.

FRACTAL CRAFTS & ART PROJECTS: Fimo Clay Sierpinski Fractal; more fun with fractal paper art; Fractal Origami & Paper-folding; Fractal Paper Cut-Outs; tessellation to the limit of a circle.

Regular patterns from geometric shapes tend to indicate an organised and efficient mind.

The project subscription is still just $36 for 12 months: 36 projects.
Homework Help

Posted by Leah on Monday, December 14, 2009 at 10:14pm.

My daughter needs help finding a mystery number. 1) When the mystery number is divided by 3, there is a remainder of 1. The same holds again for division by 4 with remainder 2, and by 5 with remainder 4. And one more thing: what is the smallest number the mystery number could be?

• math - Marth, Monday, December 14, 2009 at 10:27pm

"When the mystery number is divided by 3, there is a remainder of 1." So we know that the number must be 1, 4, 7, 10, etc. Each of those numbers n has the property that n modulus 3 = 1. (The modulus is the remainder when a natural number is divided by a natural number.)

"n modulus 4 = 2." This limits the numbers to 2, 6, 10, 14, etc. Note that this also means that the number is even.

"n modulus 5 = 4." Similarly, this limits the numbers to 4, 9, 14, 19, etc. The number must be a multiple of 5 with 4 added to it.

Let us start with the last condition (as it has the greatest increase) and limit it to even numbers, then find the first number that also satisfies the first condition: 4, 14, 24, 34. 34 satisfies all the above conditions.

• math - ALEXXUS CAREY, Monday, December 14, 2009 at 11:31pm

I DO NOT GET WHAT U GUYS ARE SAY BUT I KNOW THE ANSWER
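The search described in the answer above can also be done by brute force over the three remainder conditions. A minimal sketch (the function name is my own):

```python
# Brute-force version of the search: scan candidates in increasing order and
# return the first one satisfying all three remainder conditions.
def mystery_number(limit=1000):
    for n in range(1, limit):
        if n % 3 == 1 and n % 4 == 2 and n % 5 == 4:
            return n
    return None  # no solution below the limit

print(mystery_number())  # 34, matching the answer in the thread
```

Scanning in increasing order guarantees the result is the smallest such number, which is exactly what the original question asks for.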
Gene Golub SIAM Summer School 2010

International Summer School on Numerical Linear Algebra
Hotel Sierra Silvana, Selva di Fasano, Brindisi (Italy), June 7-18, 2010

The Gene Golub SIAM Summer School 2010 is the second in a series of periodic International Summer Schools on Numerical Linear Algebra (ISSNLA) organized by the SIAM Activity Group on Linear Algebra (SIAG/LA). The first ISSNLA took place in 2008. The 2010 ISSNLA will be supported in part by a generous contribution from the Gene Golub Summer School fund. SIAM administers the Golub Summer School program from a bequest from the estate of Gene Golub.

The main goal of these International Summer Schools is to offer accessible courses on current developments in Numerical Linear Algebra and on the applications of Numerical Linear Algebra to other disciplines. These courses will be focused on recent research topics that have reached a significant maturity and whose impact is widely recognized by the community, but that are not usually included in text books or in basic courses at the doctoral level. The SIAG/LA-ISSNLA is mainly intended for doctoral students in any field where methods and algorithms of Numerical Linear Algebra are used.

Numerical Linear Algebra is at present a well established discipline, and research on Numerical Linear Algebra covers applications, fundamentals of matrix computations, and software development.

• Applications are quickly expanding from the classical ones (numerical solution of partial differential equations, statistics, optimization, control...) to new areas such as data mining, pattern recognition, image processing, web search engines, particle physics...

• Fundamental problems are still a subject of intense research.
For instance: structured algorithms for different classes of matrices and the corresponding structured perturbation theory, distance problems in matrix computations, high relative accuracy algorithms and the corresponding roundoff error analysis, fast matrix multiplication and its application to develop fast and stable algorithms in Numerical Linear Algebra, convergence of iterative methods, stability analysis of iterative methods, combinatorial algorithms, algorithms for matrix functions...

• Development of reliable and efficient software is the final goal of Numerical Linear Algebra. In this context, there exist well known packages that are frequently revised and improved by adding new routines and functions, and new packages are constantly appearing.

The series of SIAG/LA International Summer Schools on Numerical Linear Algebra will offer courses on applications, fundamentals, and software developments. Attendance will be limited to 50 graduate students.
Animal See-Saw is a game for two players. Each player is given 12 animals. Players take it in turns to place an animal on their side of the see-saw, making sure that their side of the see-saw is never more than 5 kg heavier than the other side. The winner is the player who survives the game without ever being more than 5 kg heavier than his/her opponent. If a player's side does become more than 5 kg heavier than their opponent's, the see-saw falls, they lose and the game is over. If both players make it to the end of the game, the winner is the one with the most weight on his/her side.
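The losing condition above is simple enough to state as code. The sketch below is my own illustration (the function name and sample weights are not part of the game), reading the rule as "strictly more than 5 kg heavier loses":

```python
def seesaw_survives(left_kg, right_kg, limit=5):
    """True if neither side is more than `limit` kg heavier than the other."""
    return abs(left_kg - right_kg) <= limit

print(seesaw_survives(10, 6))   # difference of 4 kg -> True, play continues
print(seesaw_survives(12, 6))   # difference of 6 kg -> False, the see-saw falls
```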
Splitting Orbits

In the case when $[G]X$ is a finite action, we can apply the sign map $\varepsilon$ to $\bar G$, the permutation group induced by $G$ on $X$. Its kernel $\bar G^+ := \{\bar g \in \bar G \mid \varepsilon(\bar g) = 1\}$ is either $\bar G$ itself or a subgroup of index 2, as is easy to see. Denoting its inverse image by $G^+ := \{g \in G \mid \varepsilon(\bar g) = 1\}$, we obtain a useful interpretation of the alternating sum of fixed point numbers:

Lemma: For any finite action $[G]X$ such that $G \ne G^+$, the number of orbits of $G$ on $X$ which split over $G^+$ (i.e. which decompose into more than one -- and hence into two -- $G^+$-orbits) is equal to
$$\frac{1}{|G|} \sum_{g \in G} \varepsilon(\bar g)\,|X_g| = \frac{1}{|\bar G|} \sum_{\bar g \in \bar G} \varepsilon(\bar g)\,|X_{\bar g}|.$$

Proof: As $G \ne G^+$, and hence $|G^+| = |G|/2$, we have
$$\frac{1}{|G|} \sum_{g \in G} \varepsilon(\bar g)\,|X_g| = \frac{2}{|G|} \sum_{g \in G^+} |X_g| - \frac{1}{|G|} \sum_{g \in G} |X_g| = \frac{1}{|G^+|} \sum_{g \in G^+} |X_g| - \frac{1}{|G|} \sum_{g \in G} |X_g| = |G^+ \backslash\backslash X| - |G \backslash\backslash X|.$$
Each orbit of $G$ on $X$ is either a $G^+$-orbit or it splits into two orbits of $G^+$, since $|G^+| = |G|/2$. Hence $|G^+ \backslash\backslash X| - |G \backslash\backslash X|$ is the number of orbits which split over $G^+$. Finally the stated identity is obtained by an application of the homomorphism theorem.

Corollary: In the case when $G \ne G^+$, the number of $G$-orbits on $X$ which do not split over $G^+$ is equal to
$$\frac{1}{|G|} \sum_{g \in G} \bigl(1 - \varepsilon(\bar g)\bigr)\,|X_g|.$$

Note what this means. If $G$ acts on a finite set $X$ in such a way that $G \ne G^+$, then we can group the orbits of $G$ on $X$ into a set of orbits which are also $G^+$-orbits. In the figure we denote these orbits by the symbol $\otimes$. The other $G$-orbits split into two $G^+$-orbits; we indicate one of them by $\oplus$, the other one by $\ominus$, and call the pair $\{\oplus, \ominus\}$ an enantiomeric pair of $G$-orbits. Hence the lemma above gives us the number of enantiomeric pairs of orbits, while the corollary yields the number of selfenantiomeric orbits of $G$ on $X$.

The elements $x \in X$ belonging to selfenantiomeric orbits are called achiral objects, while the others are called chiral. These notions of enantiomerism and chirality are taken from chemistry, where $G$ is usually the symmetry group of the molecule while $G^+$ is its subgroup consisting of the proper rotations. We call $[G]X$ a chiral action if and only if $G \ne G^+$.

Enantiomeric pairs and selfenantiomeric orbits

Using this notation we can now rephrase the above lemma and corollary in the following way:

Corollary: If $[G]X$ is a finite chiral action, then the number of selfenantiomeric orbits of $G$ on $X$ is equal to
$$\frac{1}{|G|} \sum_{g \in G} \bigl(1 - \varepsilon(\bar g)\bigr)\,|X_g| = 2\,|G \backslash\backslash X| - |G^+ \backslash\backslash X|,$$
while the number of enantiomeric pairs of orbits is
$$\frac{1}{|G|} \sum_{g \in G} \varepsilon(\bar g)\,|X_g| = |G^+ \backslash\backslash X| - |G \backslash\backslash X|.$$

The sign of a cyclic permutation is easy to obtain from the equation and the homomorphism property of the sign, described in the corollary: $(i_1 \ldots i_r) \in A_n$ iff $r$ is odd. But in fact we need not check the lengths of the cyclic factors of $\pi$, since an easy calculation shows (exercise) that, in terms of the number $c(\pi)$ of cyclic factors of $\pi$, we have
$$\varepsilon(\pi) = (-1)^{\,n - c(\pi)}, \quad \text{if } \pi \in S_n.$$

harald.fripertinger "at" uni-graz.at http://www-ang.kfunigraz.ac.at/~fripert/
UNI-Graz, Institut für Mathematik; UNI-Bayreuth, Lehrstuhl II für Mathematik
last changed: January 19, 2005
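The counting formulas above are easy to check by brute force on a small example. The sketch below (my own, not part of the page) takes $G = S_3$ acting on ordered pairs of distinct points, so that $G^+$ corresponds to $A_3$: the single $S_3$-orbit splits into one enantiomeric pair of $A_3$-orbits, there are no selfenantiomeric orbits, and both formulas agree with a direct orbit count via the Cauchy-Frobenius lemma.

```python
from itertools import permutations

# G = S_3 acting on X = ordered pairs (i, j) with i != j.
G = list(permutations(range(3)))
X = [(i, j) for i in range(3) for j in range(3) if i != j]

def sign(g):
    # epsilon(g) = (-1)^(n - c(g)), with c(g) the number of cyclic factors of g.
    n, seen, cycles = len(g), set(), 0
    for start in range(n):
        if start not in seen:
            cycles += 1
            k = start
            while k not in seen:
                seen.add(k)
                k = g[k]
    return (-1) ** (n - cycles)

def fixed(g):
    # |X_g|: points of X fixed by g acting coordinate-wise.
    return sum(1 for (i, j) in X if (g[i], g[j]) == (i, j))

def orbit_count(group):
    # Cauchy-Frobenius: number of orbits = (1/|group|) * sum of |X_g|.
    return sum(fixed(g) for g in group) // len(group)

Gplus = [g for g in G if sign(g) == 1]                   # corresponds to A_3
split = sum(sign(g) * fixed(g) for g in G) // len(G)     # enantiomeric pairs
self_enant = sum((1 - sign(g)) * fixed(g) for g in G) // len(G)

print(split, self_enant)                   # 1 enantiomeric pair, 0 selfenantiomeric
print(orbit_count(Gplus) - orbit_count(G)) # direct count of splitting orbits: 1
```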
Part IIA, Paper 1: Consumer and Producer Theory
Lecture 2: Direct and Indirect Utility Functions
Flavio Toxvaerd

Today's Outline: indifference curves; marginal rates of substitution; Marshallian demand functions; types of goods; the indirect utility function; consumer surplus and welfare; some mathematical results (Envelope Theorem); Roy's Identity.

Utility Function. Recall from last lecture: given the Axioms of Choice, continuity and local non-satiation, a consumer's preference ordering can be represented by a utility function. Simplifying assumption: the domain consists of only two types of commodities, types 1 and 2. A specific consumption bundle $x$ will be represented by a vector $x = (x_1, x_2)$ and we can write the utility function as $u(x_1, x_2)$.

Indifference Curves. Indifference curves show combinations of commodities for which utility is constant. Mathematically, we require that
$$du(x_1, x_2) = \frac{\partial u}{\partial x_1}\,dx_1 + \frac{\partial u}{\partial x_2}\,dx_2 = 0, \qquad \text{so} \qquad \frac{dx_2}{dx_1} = -\frac{\partial u/\partial x_1}{\partial u/\partial x_2}.$$
The slope of the indifference curve is (minus) the Marginal Rate of Substitution (MRS).

Utility Maximisation. $\max_{x_1, x_2} u(x_1, x_2)$ subject to $p_1 x_1 + p_2 x_2 = m$. The Lagrangian is $L = u(x_1, x_2) + \lambda[m - p_1 x_1 - p_2 x_2]$, with first-order conditions
$$\frac{\partial u}{\partial x_1} = \lambda p_1, \qquad \frac{\partial u}{\partial x_2} = \lambda p_2, \qquad p_1 x_1 + p_2 x_2 = m.$$
Eliminating the Lagrange multiplier $\lambda$ gives MRS = ratio of prices (slope of indifference curve = slope of budget line), together with the budget line $p_1 x_1 + p_2 x_2 = m$. Note, with more than two commodities, the FOCs read $MU_{x_i}/p_i = \lambda$ (the marginal utility of income) for each $i$. To solve graphically: move along the budget line until the point of tangency with an indifference curve.

Demand Function. When the indifference curves are strictly convex the solution is unique, say $x^*$, where $x_1^* = x_1(p_1, p_2, m)$, $x_2^* = x_2(p_1, p_2, m)$, giving demand as a function of prices and income (the Marshallian demand functions). Types of goods:
- 'Ordinary' good: $\partial x_1/\partial p_1 < 0$; Giffen good: $\partial x_1/\partial p_1 > 0$ (price expansion path; demand curve).
- Complements: $\partial x_2/\partial p_1 < 0$; substitutes: $\partial x_2/\partial p_1 > 0$.
- Inferior good: $\partial x_1/\partial m < 0$; normal good: $\partial x_1/\partial m > 0$; superior good: income elasticity $> 1$.
- Income elasticity of demand: $(dx_1/dm)(m/x_1)$ (Engel curve).

Problem 1: Show that the Marshallian demand function is homogeneous of degree zero. (So consumers never suffer from money illusion.)
Problem 2: Show that consumers' purchase decisions are unaffected by any monotonic transformation of the utility function. (Hint: a monotonic transformation can be represented by a strictly increasing function $f(\cdot)$. Use the chain rule to show that the MRS remains unchanged.)

Convex Indifference Curves. Convex indifference curves mean that the (absolute value of the) slope of the indifference curve is decreasing as $x_1$ increases -- that is, diminishing MRS.
Problem 3: Show that diminishing marginal utilities is neither a necessary nor a sufficient condition for convex indifference curves.

Example: Cobb-Douglas Utility. $u(x_1, x_2) = x_1^{\alpha} x_2^{1-\alpha}$. Lagrangian: $L = x_1^{\alpha} x_2^{1-\alpha} + \lambda(m - p_1 x_1 - p_2 x_2)$. The first-order conditions, after eliminating $\lambda$, give $(1-\alpha) p_1 x_1 = \alpha p_2 x_2$. Substitution into the budget constraint gives the solution
$$x_1^* = x_1(p_1, p_2, m) = \frac{\alpha m}{p_1}, \qquad x_2^* = x_2(p_1, p_2, m) = \frac{(1-\alpha) m}{p_2}.$$
With Cobb-Douglas utility the consumer spends a fixed proportion of income on each commodity.

Indirect Utility Function. It is often useful to consider the utility obtained by a consumer indirectly, as a function of prices and income rather than of the quantities actually consumed. Let
$$v(p_1, p_2, m) = u(x_1^*, x_2^*) = u(x_1(p_1, p_2, m),\, x_2(p_1, p_2, m)) = \max u(x_1, x_2) \text{ such that } p_1 x_1 + p_2 x_2 = m.$$
$v(p_1, p_2, m)$ is called the Indirect Utility Function.

Properties of the Indirect Utility Function.
Property 1: $v(p, m)$ is non-increasing in prices ($p$) and non-decreasing in income ($m$). Proof: diagrammatically, any increase in prices or decrease in income contracts the 'affordable' set of commodities; as nothing new is available to the consumer, utility cannot increase.
Property 2: $v(p, m)$ is homogeneous of degree zero in $(p, m)$. Proof: no change in the affordable set, or in utility.
These two are general properties and NOT reliant on additional restrictions such as convexity of indifference curves, "more is better", etc.

Direct and Indirect Utility. We will see that direct and indirect utility functions are closely related -- any preference ordering that can be represented by a utility function can also be represented by an indirect utility function. This means we are free to use whichever specification we please. For example, if the price of commodity 1 changes from, say, $p_1 = a$ to $p_1 = b$, we may want to use the indirect utility function to measure the change in consumer welfare: $\Delta(\text{utility}) = v(b, p_2, m) - v(a, p_2, m)$.

Mathematical Digression -- the Envelope Theorem: If
$$M(a) = \max_{x_1, x_2} g(x_1, x_2, a) \ \text{ s.t. } \ h(x_1, x_2, a) = 0, \qquad \text{so } M(a) = g(x_1^*, x_2^*, a),$$
for which the Lagrangian can be written $L = g(x_1, x_2, a) - \lambda h(x_1, x_2, a)$, then $dM/da = \partial L/\partial a$ evaluated at the maximising values $(x_1^*, x_2^*)$. Proof: see Varian, Microeconomic Analysis, p. 502.

Application 1 -- Marginal utility of income. As $v(p_1, p_2, m) = \max_{x_1, x_2} u(x_1, x_2)$ s.t. $p_1 x_1 + p_2 x_2 - m = 0$, with Lagrangian $L = u(x_1, x_2) - \lambda(p_1 x_1 + p_2 x_2 - m)$, then, by the Envelope Theorem, $\partial v/\partial m = \partial L/\partial m = \lambda$ at $(x_1^*, x_2^*)$. The marginal utility of income is given by the Lagrange multiplier.

Application 2 -- Roy's Identity. Similarly, by the Envelope Theorem, $\partial v/\partial p_1 = \partial L/\partial p_1 = -\lambda\, x_1(p_1, p_2, m)$, and so
$$x_1(p_1, p_2, m) = -\frac{\partial v/\partial p_1}{\partial v/\partial m} \qquad \text{(Roy's Identity).}$$

Consumer Surplus and Welfare. We saw earlier that, following a price change,
$$\Delta(\text{utility}) = v(b, p_2, m) - v(a, p_2, m) = \int_a^b \frac{\partial v}{\partial p_1}\,dp_1 = -\lambda \int_a^b x_1(p_1, p_2, m)\,dp_1, \ \text{ if } \lambda \text{ is constant.}$$
Since $-\int_a^b x_1(p_1, p_2, m)\,dp_1 = \Delta(\text{consumer surplus})$ under the Marshallian demand curve, we get $\Delta(\text{utility}) = \lambda \cdot \Delta(\text{consumer surplus})$.

Summary: indifference curves; marginal rates of substitution; Marshallian demand functions; types of goods; indirect utility function; consumer surplus and welfare; Roy's Identity.
Reading: Varian, Intermediate Microeconomics (7th ed.), chapters 4, 5, 6, 14; Varian, Microeconomic Analysis, chapter 7.
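The Cobb-Douglas demands and Roy's Identity can be verified symbolically. The sketch below (Python with SymPy; my own check, not part of the lecture) uses the log transform of the Cobb-Douglas utility, which is legitimate by Problem 2 since monotonic transformations leave demand unchanged, solves the first-order conditions, and confirms both the demand functions and Roy's Identity.

```python
import sympy as sp

a, m, p1, p2, lam = sp.symbols('alpha m p1 p2 lam', positive=True)
x1, x2 = sp.symbols('x1 x2', positive=True)

# Monotonic (log) transform of u = x1^alpha * x2^(1-alpha).
u = a * sp.log(x1) + (1 - a) * sp.log(x2)
L = u + lam * (m - p1 * x1 - p2 * x2)

# First-order conditions of the Lagrangian, solved for demands and lambda.
sol = sp.solve([sp.diff(L, x1), sp.diff(L, x2), sp.diff(L, lam)],
               [x1, x2, lam], dict=True)[0]
d1, d2 = sp.simplify(sol[x1]), sp.simplify(sol[x2])   # alpha*m/p1, (1-alpha)*m/p2

# Indirect utility, and Roy's Identity: x1 = -(dv/dp1)/(dv/dm).
v = u.subs({x1: d1, x2: d2})
roy = sp.simplify(-sp.diff(v, p1) / sp.diff(v, m))
print(d1, d2, roy)
```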
Two-grid Discontinuous Galerkin Method for Quasi-Linear Elliptic Problems Abstract. In this paper, we consider the symmetric interior penalty discontinuous Galerkin (SIPG) method with piecewise polynomials of degree $r\ge1$ for a class of quasi-linear elliptic problems in $\Omega \subset \mathbb{R}^2$. We propose a two-grid approximation for the SIPG method which can be thought of as a type of linearization of the nonlinear system using a solution from a coarse finite element space. With this technique, solving a quasi-linear elliptic problem on the fine finite element space is reduced into solving a linear problem on the fine finite element space and solving the quasi-linear elliptic problem on a coarse space. Convergence estimates in a broken $H^1$-norm are derived to justify the efficiency of the proposed two-grid algorithm. Numerical experiments are provided to confirm our theoretical findings. As a byproduct of the technique used in the analysis, we derive the optimal pointwise error estimates of the SIPG method for the quasi-linear elliptic problems in $\mathbb{R}^d$, $d=2,3$ and use it to establish the convergence of the two-grid method for problems in $\Omega \subset \mathbb{R}^3$.
Help: beginner at its finest

What would be the necessary code to answer this question? I have tried multiple programs, all failing miserably.

From recent data, students with an overall GPA of 3.0 may get a job that pays an average starting salary of about $80,000, while students who have a GPA at the time of graduation of 4.0 earn up to $120,000. You may assume a linear relationship between the overall GPA and the starting salary. Develop a computer program to calculate the present value of obtaining an "A" compared to a "B" in this class. The answer may surprise you. The inputs required for the code are:
a) Yearly rate of increase in salary for a student who gets an A compared to a B in PNG430. For example, you should enter 0.05 for a 5% rate of increase in salary. Assume a rate of 5% for your base case calculations here.
b) Yearly rate of increase in salary for a student who gets a B in PNG430. One could assume that the salary rates for these students are similar, but in general a "B" student will receive smaller salary increases over their lifetime than a typical "A" student. Let's be nice, however, and assume a rate of 5% for your calculations, which is equal to the "A" student increase.
c) The yearly inflation (or discount) rate. Assume a rate of 3% for your calculations to account for the time value of money.
d) Total number of classes for your degree, where for simplicity we assume that all classes have the same number of credits. Please use 43 classes for your calculations, which is equivalent to 129 credits.
e) Your total number of working years, which we will take as 40 years for purposes of this calculation.

Ok, hello nraker, nice to see a new member! Anyway, make some integers, and fill their values with the variables above. Then manipulate them with the formulas you are supposed to use. Also, perhaps you could post one of the programs you thought was pretty close so we can explain why it isn't working.

Topic archived.
No new replies allowed.
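The thread is about C++, but the arithmetic is language-independent; the Python sketch below shows one way to set the calculation up, and ports directly to C++. The interpretation is my own, not the assignment's: the starting salary interpolates linearly in GPA between $80,000 at 3.0 and $120,000 at 4.0, one A instead of one B shifts the overall GPA by 1/43 of a grade point, both salaries grow at their yearly rates, and each year's difference is discounted by inflation.

```python
def present_value_of_A(grade_a_raise=0.05, grade_b_raise=0.05,
                       inflation=0.03, n_classes=43, years=40):
    """Present value of one A instead of one B (interpretation is mine)."""
    slope = (120_000 - 80_000) / (4.0 - 3.0)       # dollars per GPA point
    start_b = 80_000                               # baseline student at GPA 3.0
    start_a = start_b + slope * (1.0 / n_classes)  # GPA bumped by one A
    pv = 0.0
    for t in range(years):
        salary_a = start_a * (1 + grade_a_raise) ** t
        salary_b = start_b * (1 + grade_b_raise) ** t
        pv += (salary_a - salary_b) / (1 + inflation) ** t
    return pv

print(round(present_value_of_A()))  # about $55,000 with the base-case inputs
```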
Devon Geometry Tutor

...Grammar, spelling, important. Please, no grade-school vocabulary: good, happy, big, etc.! I expect the student to write at least 2 essays per week for correction. Hard work pays off!
35 Subjects: including geometry, English, chemistry, physics

...I start where the student is. Some students need help with their current assignments. On the other hand, I have found many Algebra 2 students need a few concepts they missed from Algebra 1.
10 Subjects: including geometry, calculus, physics, algebra 1

...I have experience in both derivatives and integration. I have taken several courses in geometry and have experience with shapes and angles. I have tutored many students in pre-algebra and have experience dealing with different types of equations and variables. I have spent the past two years at Jacksonville University tutoring math.
13 Subjects: including geometry, calculus, GRE, algebra 1

...I was also the Director of Education for the Women of Excellence for the same faith-based facility for 5 years. I have written and taught biblical studies for women, children and the general population over the past 13 years and am currently a member of the Bishop's Council of the United Fellows...
51 Subjects: including geometry, English, reading, statistics

...My greatest skill is the ability to take complex concepts and break them into manageable and understandable parts. I have a degree in mathematics and a masters in education, so I have the technical and instructional skills to help any student. I have been teaching math at a top-rated high school for the last 10 years and my students are always among the top performers.
15 Subjects: including geometry, calculus, algebra 1, GRE
Bowie, MD Trigonometry Tutor
Find a Bowie, MD Trigonometry Tutor

...Additionally, I received a 90% or above on the following content tests: Pre-algebra, Algebra, Geometry, Algebra II, and Trigonometry. I would love to help you build your confidence in mathematics whether you are strong in math or if you may loathe it. Check out how I would help you by viewing actual math questions that I have answered for other students in the "Resources" tab.
5 Subjects: including trigonometry, geometry, algebra 2, prealgebra

...I have graduated from the Smith School of Business at the University of Maryland College Park with my Master's in Business Administration. I work for the U.S. Department of Agriculture.
34 Subjects: including trigonometry, English, calculus, accounting

...I enjoy teaching students at every skill level. I believe in teaching beyond the short cuts and introducing students to the satisfaction of finding solutions using problem-solving skills. I teach basic through advanced mathematics and sciences.
14 Subjects: including trigonometry, chemistry, Visual Basic, algebra 1

...Thanks to this combination I have an extensive background in science, math, Spanish, and writing. Although I am not a native speaker, I have lived in Spain for 4 months and traveled to Costa Rica as well. As an undergraduate, I tutored peers in Spanish including grammar, writing, and speaking skills.
17 Subjects: including trigonometry, Spanish, writing, physics

...The subject explores the following areas: Algebra II; Linear Equations and Inequalities; Functions (including inverse functions and composite functions); Linear Systems and Matrices; Polynomial and Rational Functions; Sequences and Series; Radical Expressions and Equations; Exponential and Logarithmic...
20 Subjects: including trigonometry, chemistry, calculus, geometry
Average value of a function over a given interval (please explain each step in detail because I am lost): f(x) = 2x+1 on [0,4]

For a function f(x) defined on a closed interval (a,b), its average value is defined by `1/(b-a)int_a^bf(x)dx`

For our question:
`f(x) = 2x+1`
`a = 0`
`b = 4`

Average value `= 1/(4-0)int_0^4(2x+1)dx`
`= 1/4[x^2+x]_0^4`
`= 1/4(16+4-0-0)`
`= 5`

So the average value of the function is 5.
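The arithmetic above is quick to confirm numerically; this small Python/SymPy snippet (mine, not part of the answer) evaluates the same definite integral and scaling.

```python
import sympy as sp

x = sp.symbols('x')
# Average value of f(x) = 2x + 1 on [0, 4]: (1/(b-a)) * integral of f from a to b.
avg = sp.Rational(1, 4 - 0) * sp.integrate(2*x + 1, (x, 0, 4))
print(avg)  # 5
```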
Help with this calculus question?
October 11th 2009, 04:14 PM

A ball is thrown straight up into the air from a bridge, and its height above the ground t seconds after it is thrown is f(t) = -16t^2 + 108t + 48 feet.
a. Determine the velocity of the ball at the moment it hits the ground.
b. Determine all points in time after the ball is tossed at which it has a speed of exactly 40 feet per second (speed is the magnitude of velocity).

October 11th 2009, 04:19 PM

For a), first find t by setting f(t) = 0, then find f'(t), which is the same as v(t); into f'(t) you plug the t you found first. For b), set |v(t)| = 40 and solve for t.
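Following the hint in the reply, both parts can be worked out with SymPy. This snippet is my own check, not part of the thread: it finds the positive root of f(t) = 0, evaluates v = f' there, and solves v(t) = ±40 for the two times at which the speed is 40 ft/s.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
f = -16*t**2 + 108*t + 48
v = sp.diff(f, t)                          # v(t) = -32t + 108

t_ground = max(sp.solve(sp.Eq(f, 0), t))   # positive root: (27 + sqrt(921))/8
print(sp.simplify(v.subs(t, t_ground)))    # -4*sqrt(921): negative, moving downward
print(float(v.subs(t, t_ground)))          # about -121.4 ft/s

# Speed 40 means |v(t)| = 40, i.e. v(t) = 40 or v(t) = -40.
speed_times = sorted(sp.solve(sp.Eq(v, 40), t) + sp.solve(sp.Eq(v, -40), t))
print(speed_times)                         # [17/8, 37/8] -> t = 2.125 s and 4.625 s
```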
Integrable solutions to an elliptic PDE on divergence form have a definite sign

Let $f\colon\mathbb{R}^n\to\mathbb{R}^n$ be a smooth, bounded vector field. Further, let $u\colon\mathbb{R}^n\to\mathbb{R}$ satisfy $$-\Delta u=\operatorname{div}(fu).$$ If $u\in L^1(\mathbb{R}^n)$, then $u$ has one sign, i.e., either $u>0$ everywhere, or $u<0$ everywhere, or $u=0$ everywhere.

I have a direct proof of this for $n=1$. For $n>1$, I have a proof using the theory of parabolic equations (see below). My question: Is there a direct proof using only the theory of elliptic PDEs? (Edited to assume $f$ bounded and fix the case $n=1$ below.)

For $n=1$, my proof goes as follows. The equation is $-u''=(fu)'$, which integrates to $u'+fu=A$ for some constant $A$. If $u$ changes sign then we may without loss of generality take $u(0)=0$. Thus $$u(x)=A\int_0^x e^{F(t)-F(x)}\,dt$$ where $F'=f$. If $|f|\le c$ then $F(t)-F(x)\ge c(t-x)$ for $t<x$, so $u(x)\ge Ac^{-1}(1-e^{-cx})$ when $x>0$, and hence $u\notin L^1$ (unless $A=0$).

For $n>1$, my only proof is much more involved. Here is a brief outline. Assume the conclusion is wrong, so we can write $u=u_+-u_-$ with $u_\pm\ge0$ everywhere and neither identically zero, and $u_+u_-=0$ everywhere. Now let $v_\pm$ solve $$\frac{\partial v_\pm}{\partial t}=\Delta v_\pm+\operatorname{div}(fv_\pm)$$ for $t>0$, with initial conditions $v_\pm(0,x)=u_\pm(x)$. By uniqueness for this equation (with suitable growth conditions at infinity), $v_+(t,x)-v_-(t,x)=u(x)$ for $t>0$ and $x\in\mathbb{R}^n$. Also, for $t>0$ we find $v_\pm>0$ everywhere, and also $$\int_{\mathbb{R}^n} v_\pm(t,x)\,dx=\int_{\mathbb{R}^n} u_\pm(x)\,dx$$ since the equation is on divergence form. We conclude $$\int_{\mathbb{R}^n}|u(x)|\,dx=\int_{\mathbb{R}^n}(u_+(x)+u_-(x))\,dx=\int_{\mathbb{R}^n}(v_+(t,x)+v_-(t,x))\,dx>\int_{\mathbb{R}^n}|v_+(t,x)-v_-(t,x)|\,dx,$$ which is a contradiction.
I might add a bit of intuition for why this is true: The laplacian makes stuff diffuse, while the divergence term transports stuff along the vector field while preserving the total amount of stuff. If a solution has two signs, the diffusion will mix the positive and negative parts together, making them shrink. This intuition is of course formalized in the parabolic proof. – Harald Hanche-Olsen Feb 9 '10 at 0:52

Regarding my edit: It may be that it is sufficient to assume $f$ grows at most linearly. Some such restriction is necessary, though: I can make a counterexample for $n=1$ with $f$ odd, $f(x)=ax^{a-1}$ for $x>0$ where $a>1$. The odd solution $u$ with $A=1$ in the notation of the question will belong to $L^1$. We find $\int_0^\infty u=\int_0^\infty \int_0^x e^{y^a-x^a}\,dy\,dx$. The part where $y<1$ is no problem, and the other part is handled by noting $y^a-x^a<ay^{a-1}(y-x)$ when $y<x$ and integrating. – Harald Hanche-Olsen Feb 9 '10 at 3:31

For a purely radial vector field, there is a purely radial solution, easily found, and with $g=0$ in your notation. For a purely rotational field, I don't know of any solution. You clearly can't get away with $g=0$ in that case. – Harald Hanche-Olsen Feb 9 '10 at 21:15

1 Answer

I love this problem and have spent half the evening thinking about it. Here is a rough sketch of an idea that could possibly work. Making it rigorous might be a bit of a chore due to the unboundedness of $\mathbb{R}^n$, etc, but my guess is that it is doable. Let $L$ denote the elliptic operator \[ Lu := -\Delta u - \mathrm{div}(fu). \] Suppose the operator $L$ has a principal eigenvalue $\lambda_1$, which is the smallest number $\lambda$ for which there exists a nonzero solution of the equation $Lu = \lambda u$ in $\mathbb{R}^n$. Then $\lambda_1$ should be simple (!) and have a principal eigenfunction which does not change sign.
Let $\varphi \in L^1(\mathbb{R}^n)$ denote the principal eigenfunction, which we normalize to be positive. Assume for now that $\varphi$ and its derivatives tend to zero at infinity, and $f$ and its derivatives stay bounded. Then we may simply integrate the equation $L\varphi = \lambda_1\varphi$ to get $\lambda_1 \int_{\mathbb{R}^n} \varphi\, dx = 0$. Well, that means that $\lambda_1 = 0$. Recalling the simplicity of $\lambda_1=0$, we see that the equation $Lu = 0$ not only has a positive solution, its set of solutions is precisely $\{ c \varphi : c\in \mathbb{R} \}$. This implies the result.

Now, you may have already noticed that there aren't really such principal eigenvalues in general, for example when $f=0$. But I think it is possible that the idea can still be made into a rigorous proof. If we look at a really large domain, say the ball $B(0,R)$ with $R>0$ very large, there is a principal eigenvalue $\lambda_{1,R}$ of $L$ on $B(0,R)$ and it is going to be close to zero. This can be shown by considering the adjoint operator $L^*$, which has the same principal eigenvalue $\lambda_{1,R}$, and it is easy to show that $0 < \lambda_{1,R} \leq CR^{-2}$, due to boundedness of $f$. This is where we use the special form of the equation. If we have a solution to $Lu = 0$ on the whole space $\mathbb{R}^n$, with $u(0)>0$, I believe it should be possible to prove that if we choose the normalization $\varphi_{1,R}(0) = u(0)$, then $\varphi_{1,R}$ converges to $u$ (at least locally uniformly) as $R\to \infty$. In particular, $u$ must be positive everywhere.

There are obviously some details left to work out. It is very possible I am making a silly error and it doesn't work at all.
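For the one-dimensional computation in the question, the closed form is easy to sanity-check. The snippet below (my own, with the concrete choice $f(x) = c$ constant, so $F(x) = cx$) verifies with SymPy that $u(x) = A\int_0^x e^{F(t)-F(x)}\,dt$ solves $u' + fu = A$, and that $u$ tends to the nonzero constant $A/c$, so it is not integrable unless $A = 0$.

```python
import sympy as sp

x, t, A, c = sp.symbols('x t A c', positive=True)

# f(x) = c constant, so F(x) = c*x and u(x) = A * integral_0^x e^{c(t - x)} dt.
u = sp.simplify(A * sp.integrate(sp.exp(c*t - c*x), (t, 0, x)))
print(u)  # equals A*(1 - exp(-c*x))/c up to simplification

# u satisfies the first-order ODE u' + c*u = A from the question.
assert sp.simplify(sp.diff(u, x) + c*u - A) == 0

# As x -> oo, u -> A/c != 0, so u is not in L^1(0, oo) unless A = 0.
print(sp.limit(u, x, sp.oo))
```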
Four-Impulsive Rendezvous Maneuvers for Spacecrafts in Circular Orbits Using Genetic Algorithms
Mathematical Problems in Engineering, Volume 2012 (2012), Article ID 493507, 16 pages
Research Article
^1Division of Space Mechanics and Control INPE, CP 515, 12227-310 São José dos Campos SP, Brazil
^2Department of Aerospace and Mechanical Engineering, Università degli Studi di Roma "La Sapienza", Piazzale Aldo Moro 5, 00185 Roma, Italy
Received 1 December 2011; Accepted 27 January 2012
Academic Editor: Maria Zanardi
Copyright © 2012 Denilson Paulo Souza dos Santos et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Spacecraft maneuvering is a very important topic in aerospace engineering activities today. In a more generic way, a spacecraft maneuver has the objective of transferring a spacecraft from one orbit to another, taking into account some restrictions. In the present paper, the problem of rendezvous is considered. In this type of problem, it is necessary to transfer a spacecraft from one orbit to another, but with the extra constraint of meeting another spacecraft when reaching the final orbit. In particular, the present paper aims to analyze rendezvous maneuvers between two coplanar circular orbits, seeking to perform this transfer with the lowest possible fuel consumption, assuming that this problem is time-free and using four burns during the process. The assumption of four burns is used to represent a constraint posed by a real mission. Then, a genetic algorithm is used to solve the problem. After that, a study is made for a maneuver that will make a spacecraft encounter a planet, in order to make a close approach that will change its energy. Several simulations are presented.

1.
Introduction

This paper aims to analyze optimal rendezvous maneuvers between two spacecrafts that are initially in circular coplanar orbits around the Earth. The main goal is to perform this transfer with fuel consumption as the penalty function, so the minimization of this quantity is sought during the process of finding the solutions. The approach used here is to assume that the problem is time-free, which means that the time of the transfer is not important. The control assumed to perform this task is an engine that can deliver four burns. This assumption is used to represent a common constraint posed by real missions. In the present paper, we are considering a generic problem, not a specific mission, but this type of constraint appears very often in space activities. Then, a genetic algorithm is used in order to solve the problem. This type of approach represents a new alternative to solve this problem and can be used for comparisons with results obtained by standard procedures available in the literature, as shown in [1–28]. Preliminary studies showed that, in some situations, this algorithm can be faster in convergence and more accurate, while in some others, it is slower and presents less accuracy. A detailed comparison still has to be made to evaluate under which circumstances this algorithm can be more efficient. In any case, several kinds of missions can use the benefits of the techniques based on the genetic algorithm shown in this work. The main types are transfers with free time (to change the orbit of the space vehicle without restrictions on the time required to execute the maneuver), "rendezvous" (when one desires that the space vehicle stand alongside another spacecraft), "flyby" (a mission to intercept another body, however without the objective of remaining next to it), "swing-by" (a close approach to a celestial body to gain or lose energy), and so forth. But, in the present paper, only the rendezvous maneuver is considered.

2.
Description of the Problem The problem of orbital maneuvers has been studied in several published papers; some of them are shown in [1-28], where the different approaches to solve this problem can be appreciated. Some authors assumed that a low-magnitude force is applied to the spacecraft during a finite time. This is the so-called continuous thrust approach; references [7, 10, 17, 28] have some details on this topic. As an alternative, the idea of an impulsive maneuver is also studied. In this situation, a high-magnitude force is applied during a time that can be considered negligible. References [3, 5, 8, 27] used this important approach. More recently, two more ideas appeared in the literature to perform orbital maneuvers. The first one is the use of a close approach with a celestial body to change the orbit of a spacecraft: the swing-by maneuver. References that used this approach are [2, 13]. The second recent approach is the gravitational capture, where the force generated by the perturbation of a third body [14] can be used to decrease the fuel expenditure of a space maneuver. References [11, 12] have some details of this idea. Some publications cover all those topics in more detail, like [6, 9, 15]. Studies more related to the research shown in the present paper are the ones considering the Lambert's problem ([1, 16]), the rendezvous maneuver ([20-26]), and genetic algorithms themselves ([18, 19]). In the present research, the Lambert's problem is used to solve the transfer proposed here, in the way described below. The Lambert's problem can be formulated as follows: "Find an unperturbed orbit, under the mathematical model given by a law that works with the inverse square of the distance (Newtonian formulation), that connects two given points with a specified transfer time." In the literature, several researchers have solved this problem using distinct formulations; reference [1] shows several of them.
In this way, the transfer orbit can be defined by three parameters: the true anomaly of the departure point on the initial orbit, the angular length of the transfer, and the semimajor axis of the transfer orbit. Note that, for each pair of departure and arrival points, a minimum value exists for the semimajor axis, and two transfer orbits can be found for the same value of the semimajor axis, depending on the sense of the transfer. For this reason, the semimajor axis is usually replaced by a different parameter whose values lie between 0 and 1; the relationship between the two is given in [19]. The first two parameters determine the position of the departure and arrival points, which can be related to their radius vectors. Any permitted value of the parameters determines univocally one transfer orbit. These parameters are, from the point of view of the genetic optimizer, the genes of the members of the population. The genetic algorithm searches for the best solution among a number of possible solutions, represented by vectors in the solution space. To find a solution is to look for some extreme value (minimum or maximum) in the solution space. The fitness of each individual is represented by the total velocity impulse required to perform the orbital transfer. The total impulse is given by the sum of the single impulses provided at each thrust point in order to pass from one orbital arc to the following one; it corresponds to the velocity difference at the relevant thrust point. The positions of the thrust points and the parameters of the transfer orbit are obtained using the three genes, that is, the parameters previously chosen, as input. The velocities at the thrust points, before and after firing the engine, are easily computed, and this provides the total velocity impulse, which is the measurement of the individual fitness. The evolutionary process will select individuals with the genes corresponding to the optimal maneuver. Figure 1 shows an instantaneous scenario of the problem.
In that figure are identified the true anomaly of the departure point on the initial orbit; the true anomaly of the arrival point on the final orbit; the angular length of the transfer; the orientation of the transfer orbit, defined by the angle between its axis and the axis of the initial orbit; the distance between the departure and arrival points (2.4); and the foci of the ellipse. 2.1. The Genetic Algorithm The procedure starts with a random population of up to 800 individuals. The initial population is generated randomly, considering its characteristic distances and angles according to the constraints of each variable. The vectors are assembled according to the allowed boundary conditions. Then, the fitness of each individual is verified, following the criterion of the objective function, which is to minimize the fuel consumption (measured by the total velocity increment found by solving the Lambert's problem). The best individuals, parents and children, are then selected to go to the next generation. The procedure of crossover is then applied, as well as a mutation to insert diversity in the population (Figure 2). The random variables used in the implementation of the algorithm are the true anomalies of the points that determine the transfer orbits, which fix the radius vector (position) of each thrust, and the angles between them (see Figure 3). Eventually, there are epidemics, with the goal of inserting diversity and reducing the elitism. After that, a new population is created, and the procedure is started again, finishing after n attempts. The block diagram of the genetic algorithm (Figure 2) shows the procedures followed to solve the problem. More details of the genetic algorithm can be obtained in [18, 19]. 2.2. Selecting the Next Generation and Performing the Crossover and the Epidemic Process The selection of the new generation is made after the analysis of each individual by measuring its objective function (fitness).
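The cycle just described (random initialization, fitness evaluation, selection, crossover, mutation, and the occasional epidemic) can be sketched as follows. This is a hedged illustration, not the paper's implementation: the fitness function is only a placeholder standing in for the Lambert-based velocity increment, and the mutation and epidemic rates are invented.

```python
import random

# Illustrative values; the paper reports up to 800 individuals and 400 generations.
POP_SIZE, N_GENES, GENERATIONS = 800, 3, 400

def fitness(individual):
    # Placeholder for the total velocity increment of the transfer
    # (smaller is better, so the algorithm minimizes this value).
    return sum((g - 0.5) ** 2 for g in individual)

def evolve():
    pop = [[random.random() for _ in range(N_GENES)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness)                      # best individuals first
        parents = pop[:POP_SIZE // 2]              # selection by fitness
        children = []
        while len(children) < POP_SIZE - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_GENES)     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.05:             # mutation for diversity
                child[random.randrange(N_GENES)] = random.random()
            children.append(child)
        pop = parents + children
        # "Epidemic": if the population became too uniform, replace part of
        # it with fresh random individuals to avoid premature convergence.
        if fitness(pop[-1]) - fitness(pop[0]) < 1e-9:
            pop[POP_SIZE // 2:] = [[random.random() for _ in range(N_GENES)]
                                   for _ in range(POP_SIZE // 2)]
    return min(pop, key=fitness)

best = evolve()
```

Because the parents survive into the next generation, the best fitness found never worsens from one generation to the next.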
The ones with better values for this measurement are selected to undergo a process of reproduction (crossover), where parents are selected and the children of this crossing are raised (Figure 2). When the population is too uniform, as measured by the values of the objective functions, part of the population suffers an epidemic process, where many individuals are killed and replaced by others generated again by a random process, to insert diversity in the population and to prevent premature convergence to local optimal values. The crossover starts by separating the chromosomes of the parents in two parts. After this separation, the first part of parent 1 is combined with the second part of parent 2, and the first part of parent 2 is combined with the second part of parent 1. In this way, a second generation is created. See [18, 19] for more details. 2.3. Chromosome The chromosome representation is vital for a genetic algorithm (GA), because it is the way that the information from the problem is translated into a format that can be handled by the computer. This representation is completely arbitrary, so it varies according to the choice made by each developer, without any obligation to adopt a representation available in the literature. This is a very important point to emphasize. The vast majority of researchers use the binary representation for this problem because it is the simplest one. In fact, many people, when they imagine a GA, quickly make an association with binary chromosomes (used to facilitate the crossing). However, other formulations using real chromosomes, modifying the way of performing the crossover, also obtain satisfactory results [18]. In this paper, each gene is chosen to be a real number between 0 and 1, generated in binary form and then converted into a real number.
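The two-part crossover described above can be illustrated directly; the chromosome values and the cut point below are arbitrary choices, not values from the paper.

```python
import random

# Each parent chromosome is cut in two parts and the parts are swapped,
# producing two children, exactly as in the description above.
def crossover(parent1, parent2, cut=None):
    if cut is None:
        cut = random.randrange(1, len(parent1))  # random cut if none given
    child1 = parent1[:cut] + parent2[cut:]       # part 1 of p1 + part 2 of p2
    child2 = parent2[:cut] + parent1[cut:]       # part 1 of p2 + part 2 of p1
    return child1, child2

c1, c2 = crossover([0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8], cut=2)
# c1 == [0.1, 0.2, 0.7, 0.8] and c2 == [0.5, 0.6, 0.3, 0.4]
```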
The value of the corresponding parameter is obtained by scaling the gene linearly between the minimum and maximum values of that variable, which are its boundary conditions. The main reason to use the binary approach is to validate this usual approach of GA problems in the particular type of problem considered here. References [18, 19], which studied this same problem using GA, used different approaches, so the validation of the binary approach was considered important. 2.4. Objective Function Most of the selection techniques used in this procedure require comparisons of the fitness to decide which solutions should be propagated to the next generation. Normally, the fitness has a direct relation with the value of the objective function, according to the rule that better values of the objective function generate higher values of the fitness parameter. When the genetic algorithm calls the objective function, it transfers an array of parameters that specify the selected solution. This selection parameter must not be changed in any way by the objective function. Genetic algorithms are based on biological evolution, and they are able to identify and explore environmental factors to converge to optimal solutions, or to approximately optimal global solutions. Then, the fitness of each individual can be computed by using the five quantities that define the problem (the first of them being unity because of the normalization of the variables) and the three genes that characterize the individual. From these, several important parameters can be obtained [15]: the true anomaly of the arrival point, the radii of the departure and arrival points, the distance between those points, the semimajor axis of the transfer orbit, and the distances of the departure and arrival points from the vacant focus. Figure 3 shows a description of several important variables.
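The linear scaling between the boundary values can be sketched as below. The bit width and the bounds are illustrative assumptions, not values from the paper: a binary gene is read as a fraction in [0, 1] and then mapped onto the allowed range of the physical parameter.

```python
# Hedged sketch of the gene decoding implied above: the binary string is
# interpreted as a real number g in [0, 1], then scaled linearly between the
# minimum and maximum values (the boundary conditions) of the parameter.
def decode(bits, lo, hi):
    g = int(bits, 2) / (2 ** len(bits) - 1)   # gene as a real in [0, 1]
    return lo + g * (hi - lo)                 # parameter inside its bounds

top = decode("11111111", 0.0, 360.0)   # all ones maps to the upper bound
mid = decode("10000000", 0.0, 360.0)   # roughly the middle of the range
```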
From those quantities, the relevant angles, the eccentricity of the transfer orbit, the true anomaly of the thrust points on the transfer orbit, and the argument of perigee of the transfer orbit (the angle between the perigees of the transfer and the initial orbits) can be calculated. Now that the geometry of the maneuver has been shown, it is possible to calculate the radial and the tangential components of the spacecraft velocity before and after both impulses, which permits the computation of the total velocity increment, assumed here to be the measurement of the individual fitness. 2.5. Normalization Nondimensional variables are used in the procedure. The distance unit is the semimajor axis of the initial orbit, and the velocity unit is the velocity of a circular orbit with the same energy as the initial one; the reference time follows from these two units. 3. Numerical Solutions Several maneuvers were simulated with the procedure developed here, using the genetic algorithm. Then, the equivalent Hohmann maneuvers were calculated to provide a level of comparison. The idea is not to find a transfer with a smaller total velocity increment than the Hohmann transfer, but to minimize the difference in costs, assuming that the engine of the spacecraft has a limitation that does not allow a two-impulse maneuver to be performed. In theory, for the cases simulated here, the two-impulse maneuver always has a lower consumption. So, the idea is to find the best maneuver that has four impulses, in order to compare with other works [18, 19]. The number of impulses is a parameter that can be modified in the input data of the algorithm, to be useful for other applications. The results shown in the present paper always consider a rendezvous maneuver between two spacecraft, where the radius of the orbit of the first spacecraft is taken as the unit of distance, and several values were used for the radius of the orbit of the second spacecraft (see Table 1).
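For reference, the Hohmann bi-impulsive cost used as the comparison level can be computed from the classical vis-viva relations. The sketch below works in normalized units of the kind described in Section 2.5, where the initial radius, the circular velocity on the initial orbit, and the gravitational parameter are all 1; the sample radii are illustrative.

```python
from math import sqrt

# Classical Hohmann transfer between circular coplanar orbits of radii r1, r2:
# the first burn enters the transfer ellipse, the second circularizes at r2.
def hohmann_dv(r1, r2, mu=1.0):
    dv1 = sqrt(mu / r1) * (sqrt(2 * r2 / (r1 + r2)) - 1)   # departure burn
    dv2 = sqrt(mu / r2) * (1 - sqrt(2 * r1 / (r1 + r2)))   # arrival burn
    return dv1 + dv2

dv = hohmann_dv(1.0, 2.0)   # raise a circular orbit from r = 1 to r = 2
# dv is approximately 0.2845 in these normalized units
```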
The genetic algorithm provided satisfactory solutions when compared with the solutions of the literature [18], as shown in Table 1. The population is composed of 800 individuals, and up to 400 generations were used. The results indicated that the maneuvers using the GA with 4 impulses do not provide savings over the Hohmann transfer in any of the cases simulated (see Figure 4 and Table 1), as expected and explained before, but they minimize the difference in costs for the assumed four-impulse maneuvers. Figure 4 shows all the details for this comparison. Figures 5, 6, 7, 8, and 9, as well as Table 1, show a series of maneuvers. In general, an impulse is applied in the initial orbit, generating the first elliptical transfer arc; then, according to the procedure, the second impulse is applied, leading to another elliptical transfer orbit; the third burn puts the spacecraft in the last transfer arc; and, finally, the last impulse is applied to place the vehicle in the desired orbit. The total consumption is the sum of all the intermediate impulses, and it is listed in Table 1. This total consumption serves as an index of measurement and comparison between the methods. In other words, the extra cost due to the fact that a two-impulse maneuver is not possible is quantified, and a detailed view of the best four-impulse strategy is generated by the GA. The variables of the problem are shown in Figures 5 and 7. In each new generation of the population, the individuals approach the values suggested by the algorithm, converging to a solution of the problem. The best fitness values of the parameters show the convergence to the optimal value.
Table 2 shows a detailed view of the maneuver, explaining all the intermediate Keplerian orbits. Simulation 8 and Figure 7 show some new results that confirm that the procedure with four impulses has a higher consumption than the bi-impulsive maneuver (Table 1), but minimizes the cost of the four-impulse technique. This study can also be applied to find orbital maneuvers that search for the minimum fuel consumption for a spacecraft that leaves one celestial body and goes back to this same body (Figures 9 and 10). This question is of great importance for missions whose objective is to shift the position of the satellite in a given orbit, without changing the other orbital elements. Prado and Broucke [1] also studied this problem using the Lambert method, under different circumstances. 3.1. The Swing-By Maneuvers The next step is to use the algorithm developed here to study a maneuver that makes a spacecraft encounter a planet, in order to make a close approach that will change its energy. This problem can be seen as a rendezvous problem where the second spacecraft, the one to be reached, is a planet and not a space vehicle. Using this approach, a transfer maneuver using an impulsive engine with four burns is followed by a gravity-assisted maneuver to send the spacecraft further into the solar system. This technique reduces the cost of an interplanetary mission. It is a standard procedure in orbital maneuvers, and a more detailed description is available in references [2, 9]. In this case, the system consists of three bodies: a main body with finite mass, situated at the center of mass of the Cartesian reference system; a smaller body, which can be a planet or a satellite of the main body, in a Keplerian orbit around it; and a space vehicle with infinitesimal mass, traveling in a Keplerian orbit around the main body, when it passes close to the smaller one.
This close approach changes the orbit of the space vehicle and, by the hypothesis assumed for the problem, it is considered that the orbits of the two massive bodies do not change. Using the "patched conics" approximation, the equations that quantify those changes are available in the literature [9]. The standard maneuver can be identified by three parameters (Figure 11): the magnitude of the velocity of the spacecraft with respect to the celestial body when approaching it; the distance between the spacecraft and the celestial body during the closest approach; and the angle of the approach. Having those variables, it is possible to obtain half of the total deflection angle by using the equation given in [2], which involves the velocity of the celestial body with respect to the main body and the velocity of the spacecraft when passing by the periapsis. A complete description of this maneuver and the derivation of the equations can be found in Prado [9]. The final equations involve the angular velocity of the motion of the primaries and give the variation of energy, the variation of the angular momentum, and the variation of the magnitude of the velocity due to the swing-by. The gravity-assisted maneuver (swing-by) can provide a considerable change of the velocity and energy of the spacecraft, reducing the costs of the mission. During this approach, the spacecraft is transferred to another orbit of interest to the mission. The dynamics used to solve this problem is the traditional model given by the "patched conics," so it is assumed that all three bodies involved are point masses and do not suffer external disturbances. The variations given by the swing-by, in terms of velocity and energy, can now be obtained. Figure 12 shows the maneuver obtained by the genetic algorithm.
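The patched-conics relations summarized above can be sketched with the standard textbook formulas. The notation below is the usual one and is an assumption on our part (the paper's own symbols did not survive extraction): the half deflection angle delta satisfies sin(delta) = 1 / (1 + rp * v_inf^2 / mu), and the magnitude of the velocity change is 2 * v_inf * sin(delta).

```python
# Standard patched-conics swing-by relations (a sketch):
#   v_inf - approach velocity relative to the celestial body
#   rp    - periapsis distance of the close approach
#   mu    - gravitational parameter of the celestial body
def swing_by_dv(v_inf, rp, mu):
    sin_delta = 1.0 / (1.0 + rp * v_inf ** 2 / mu)  # sine of half deflection
    return 2.0 * v_inf * sin_delta                   # |delta-v| of the flyby

# Illustrative numbers only, in canonical units with mu = 1:
dv = swing_by_dv(v_inf=1.0, rp=0.1, mu=1.0)
# sin(delta) = 1/1.1, so dv = 2/1.1, about 1.818
```

Note that a lower periapsis (smaller rp) bends the trajectory more and therefore yields a larger velocity change, up to the physical limit of the body's surface or atmosphere.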
The spacecraft comes from an initial orbit with a radius of 1 AU, the Earth's heliocentric distance in astronomical units; that is, the spacecraft starts from the Earth. Then, it performs a maneuver with 4 impulses, using three elliptic intermediate transfer orbits, and finally arrives in an orbit with the heliocentric radius of Jupiter. At this moment, it performs a swing-by maneuver with the planet Jupiter, gaining both velocity and energy. During this approach, the space vehicle places itself in another orbit of interest to the mission. In this mission, the role of the GA is to find the best procedure to make the spacecraft reach the planet Jupiter. From this point, standard procedures of interplanetary trajectories can complete the mission. 4. Conclusion Based on the analysis of the results obtained, the genetic algorithm implemented here shows that this technique brings good results for the proposed four-impulse rendezvous maneuvers, when compared with the ones obtained by the traditional impulsive methods. It means that it can be used in real cases, especially when a bi-impulsive transfer is not possible due to the limitations of the engine of the spacecraft. The procedure is also effective in maneuvering the spacecraft from one body back to the same body, that is, making it leave and return to the same orbit. The results indicate that the maneuver using the genetic algorithm with four impulses does not provide better fuel consumption in any case simulated, since the bi-impulsive maneuver is always better in this situation, but the method proves to be efficient in minimizing the cost of the four-impulse maneuvers. It is necessary to take into account that, in many cases, the limitations of the propulsion system of the spacecraft require that the maneuver be performed using several impulses, passing through intermediate orbits to reach the target.
Then, we studied a maneuver where the goal is to send a spacecraft to encounter the planet Jupiter to make a swing-by maneuver. The algorithm worked well in finding a good solution for this problem. In general, the proposed technique can be used when a rendezvous maneuver is required between two given circular orbits for a spacecraft that has an engine that requires the application of four impulses. In the future, it is possible to apply this technique in three dimensions, in maneuvers that require more impulses, and also in maneuvers to avoid collisions between a spacecraft and asteroids. This work was accomplished with the support of the São Paulo State Science Foundation (FAPESP) under Contract 2009/16517-7 and the National Institute for Space Research (INPE), Brazil. 1. A. F. B. A. Prado and R. A. Broucke, "Study of Hénon's orbit transfer problem using the Lambert algorithm," AIAA Journal of Guidance, Control, and Dynamics, vol. 17, no. 5, pp. 1075-1081, 1993. 2. D. P. S. Santos, A. F. B. A. P. Prado, and E. M. Rocco, "The use of consecutive collision orbits to obtain swing-by maneuvers," in Proceedings of the 56th International Astronautical Congress, Fukuoka, Japan, October 2005. 3. F. W. Gobetz and J. R. Doll, "A survey of impulsive trajectories," AIAA Journal, vol. 7, pp. 801-834, 1969. 4. R. H. Goddard, "A method of reaching extreme altitudes," Smithsonian Miscellaneous Collections, vol. 71, no. 2, pp. 809-811, 1920. 5. W. Hohmann, Die Erreichbarkeit der Himmelskörper, Oldenbourg, Munich, 1925. 6. J. P. Marec, Optimal Space Trajectories, Elsevier, New York, NY, USA, 1979. 7. D. P. S. Santos, L. Casalino, G. Colasurdo, and A. F. B. A. P.
Prado, "Optimal trajectories using gravity assisted maneuver and solar electric propulsion (SEP) towards near-Earth objects," in Proceedings of the 4th WSEAS International Conference on Applied and Theoretical Mechanics (Mechanics '08), pp. 62-68, Cairo, Egypt, December 2008. 8. E. M. Rocco, A. F. B. A. Prado, and M. L. O. Souza, "Bi-impulsive orbital transfers between non-coplanar orbits with time limit," in Applied Mechanics in the Americas, D. Pamplona, C. Steele, H. I. Weber, P. B. Gonçalves, I. Jasiuk, and L. Bevilacqua, Eds., vol. 6, pp. 259-262, 1999. 9. A. F. B. A. Prado, Optimal transfer and swing-by orbits in the two- and three-body problems, Ph.D. thesis, Department of Aerospace Engineering and Engineering Mechanics, University of Texas. 10. A. A. Sukhanov and A. F. B. A. Prado, "Constant tangential low-thrust trajectories near an oblate planet," Journal of Guidance, Control, and Dynamics, vol. 24, no. 4, pp. 723-731, 2001. 11. A. F. B. A. Prado, "Numerical and analytical study of the gravitational capture in the bicircular problem," Advances in Space Research, vol. 36, no. 3, pp. 578-584, 2005. 12. A. F. B. A. Prado, "Numerical study and analytic estimation of forces acting in ballistic gravitational capture," Journal of Guidance, Control, and Dynamics, vol. 25, no. 2, pp. 368-375, 2002. 13. A. F. B. A. Prado and R. Broucke, "Transfer orbits in the restricted problem," Journal of Guidance, Control, and Dynamics, vol. 18, no. 3, pp. 593-598, 1995. 14. A. F. B. A. Prado, "Third-body perturbation in orbits around natural satellites," Journal of Guidance, Control, and Dynamics, vol. 26, no. 1, pp. 33-40, 2003. 15. V. A.
Chobotov, Orbital Motion, American Institute of Aeronautics and Astronautics, 2nd edition, 1996. 16. J. E. Prussing, "Geometrical interpretation of the angles α and β in Lambert's problem," Journal of Guidance, Control, and Dynamics, vol. 2, no. 5, pp. 442-443, 1979. 17. J. E. Prussing, "Equation for optimal power-limited spacecraft trajectories," Journal of Guidance, Control, and Dynamics, vol. 16, no. 2, pp. 391-393, 1993. 18. M. Rosa Sentinella and L. Casalino, "Genetic algorithm and indirect method coupling for low-thrust trajectory optimization," AIAA 06-4468, 2006. 19. F. Cacciatore and C. Toglia, "Optimization of orbital trajectories using genetic algorithms," Journal of Aerospace Engineering, Sciences and Applications, vol. 1, no. 1, pp. 58-69, 2008. 20. J. E. Prussing and J. H. Chiu, "Optimal multiple-impulse time-fixed rendezvous between circular orbits," Journal of Guidance, Control, and Dynamics, vol. 9, no. 1, pp. 17-22, 1986. 21. B. H. Billik and H. L. Roth, "Studies relative to rendezvous between circular orbits," Astronautica Acta, vol. 13, 1967. 22. J. E. Prussing, "Optimal four-impulse fixed-time rendezvous in the vicinity of a circular orbit," AIAA Journal, vol. 7, no. 5, pp. 928-935, 1969. 23. J. E. Prussing, "Optimal two- and three-impulse fixed-time rendezvous in the vicinity of a circular orbit," AIAA Journal, vol. 8, no. 7, pp. 1221-1228, 1970. 24. H. Shen and P. Tsiotras, "Optimal two-impulse rendezvous using multiple-revolution Lambert solutions," Journal of Guidance, Control, and Dynamics, vol. 26, no. 1, pp. 50-61, 2003. 25. H. Shen and P.
Tsiotras, "Optimal two-impulse rendezvous between two circular orbits using multiple revolution Lambert's solutions," in Proceedings of the AIAA Guidance, Navigation, and Control Conference, 1999. 26. Y. Z. Luo, G. J. Tang, and H. Y. Li, "Optimization of multiple-impulse minimum-time rendezvous with impulse constraints using a hybrid genetic algorithm," Aerospace Science and Technology, vol. 10, no. 6, pp. 534-540, 2006. 27. E. M. Rocco, A. F. B. A. Prado, M. L. O. Souza, and J. E. Baldo, "Optimal bi-impulsive non-coplanar maneuvers using hyperbolic orbital transfer with time constraint," Journal of Aerospace Engineering, Sciences and Applications, vol. 1, no. 2, pp. 43-51, 2008. 28. D. P. S. Santos, A. F. B. A. Prado, L. Casalino, and G. Colasurdo, "Optimal trajectories towards near-Earth objects using solar electric propulsion (SEP) and gravity assisted maneuver," Journal of Aerospace Engineering, Sciences and Applications, vol. 1, no. 2, pp. 51-64, 2008.
non linear systems

June 3rd 2011 (question): Can anyone help me with this? Find the exact values of r, k, and x at the cusp point shown in Figure 3.7.5, where r(x) = 2x^2 / (1 + x^2)^2 and k(x) = 2x^3 / (x^2 - 1).

June 3rd 2011 (reply): Surely that is not the whole problem? We can read approximate values for r and k off the graph, but we cannot get exact values without knowing what the equations of those graphs are. And there is no mention even of what x means or how it is related to r and k, much less any way of finding a value for x.
Powers and roots

Post #1 (October 8th 2008): The problem says to insert brackets to make this a true statement:

$\sqrt{2}^{{\sqrt{2}}^{{\sqrt{2}}^{\sqrt{2}}}} = \sqrt{2}^{{\sqrt{2}}^{\sqrt{2}}}$

I know that:

$\sqrt{2}^{\sqrt{2}} = 2^{\frac{\sqrt{2}}{2}}$

$(\sqrt{2}^{\sqrt{2}})^{\sqrt{2}} = 2$

I'm not sure how to proceed from here, apart from brute force. A hint would be appreciated.

Post #2 (October 9th 2008): Hello! Noting that $(\sqrt{2}^{\sqrt{2}})^{\sqrt{2}} = 2$ is a good thing! However, and pardon me for that, I don't know how I got to the answer... it was quite automatic:

$\sqrt{2}^{\left(\left({\sqrt{2}}^{\sqrt{2}}\right)^{\sqrt{2}}\right)}=\left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}}$

(it equals 2)

Post #3: Dear Moo, thank you! Guess I'll just have to work on developing the kind of automatic response you have.

Post #4: Lol, am I able not to agree? You're welcome! Actually, I think that at first I was telling myself that the RHS needed parentheses. Making parentheses over the last two sqrt(2) wouldn't have made the problem easier, so if there were any, they should have been over the first two sqrt(2). After that... blackout...
Summary: TOWARD A MACKEY FORMULA FOR COMPACT RESTRICTION. Pramod N. Achar and Clifton L. R. Cunningham. Abstract. We generalize [6, Theorem 3] to a Mackey-type formula for the compact restriction of a semisimple perverse sheaf produced by parabolic induction from a character sheaf, under certain conditions on the parahoric group scheme used to define compact restriction. This provides new tools for matching character sheaves with admissible representations. In this paper we prove a Mackey-type formula for the compact restriction functors introduced in [6]. The main result, Theorem 1, applies to any connected reductive linear algebraic group G over any non-Archimedean local field K that satisfies the following three hypotheses: (H.0) G is the generic fibre of a smooth, connected reductive group scheme over the ring of integers OK of K; (H.1) the characteristic of K is not 2 (in particular, this condition is met if the characteristic of K is 0); (H.2) for every parabolic subgroup P_{K̄} ⊆ G ×_{Spec(K)} Spec(K̄) there is a finite unramified extension K′ of K and a subgroup P′ ⊆ G ×_{Spec(K)} Spec(K′) such that P′ ×_{Spec(K′)} Spec(K̄) is conjugate to P_{K̄} by an element of G(Ktr
mysql mathematical operation,database Jan 27, 2011, 21:04 mysql mathematical operation,database I have 4 column in my database, view name column, X cordinate,Y co ordinate ,Z coordinate for each xyz coordinate there is one view name. I want to run an sql query whcih will calculate for eg. ( sqrt root of ((x -x1)square+(y -y1)square + (z -z1)square)) and for each value of x,y,z cordinate in my whole column , it should show me the name of associated with that column if the ans is less than 100, beside each column for each value of x,y,z can anybody help? if question is not clear plz do reply. simplified way of question is- i want to do some calculation with my xyz co ordinate with all the other cordinate in this table and if the ans is less than 100 it should show me the name of all that column for which the value is less than 100. and put it beside the xyz cordinate for which it got result.. and do the same for each xyz cordinate and put the result beside it.. i did it using excel..but not able to implement it,in myphpmyadmin sql database.. Jan 27, 2011, 21:55 Jan 30, 2011, 03:57 Ok..thanks for ur reply.. I am attaching a printscreen of my database.. 1. let us suppose the value of x,y,z which are in table are x1,y1,z1,x2,y2,z2 and so on. 2.I want shortest distance between the first value x1,y1,z1 and second value of x,y,z i.e x2,y2,z2. using formulae : sqrt{(x2-x1)^2+(y2-y1)^2+(z2-z1)^2}=d1 again with second value x,y,z i.e d2=sqrt{(x3-x1)^2+(y3-y1)^2+(z3-z1)^2}. similarly it will calculate,d1,d2,d3,d4... till the last row for first value of x1,y1,z1. now in all this distances it should me the 5 "link" which is having the distance value less than or equal to 400 in Row R1,R2,R3,R4 and R5 in the first row as this all distance has been calculated for only first value of x,y,z .i.e x1,y1,z1 than again this query will run for second value of x,y,z i.e x2,y2,z2 of all the other value of x,y,z from x1,y1,z1 till the last value of x,y,z. 
and show me the 5 "link" values whose distance is less than or equal to 400, in columns R1, R2, R3, R4 and R5 of the second row, as these distances were calculated for the second point (x2, y2, z2). In the same way this query will run for each point down to the last one, and show the results in R1, R2, R3, R4 and R5 of every row. :) I think the above explanation is somewhat clear but quite long. I hope I will get some reply.

Jan 30, 2011, 04:44

Sorry, I still do not understand what you want.

Jan 30, 2011, 06:39

It is simple mathematics: first we have to calculate the distance between one point and all the other points, and if the distance is less than 400, show the text that is in the "link" column.

link    X    Y    Z    R1     R2    R3
cow     x1   y1   z1   cow    monk  tony
goat    x2   y2   z2   goat   raj   tatoo
monk    x3   y3   z3   monk   cow   tony
raj     x4   y4   z4
jony    x5   y5   z5
tatoo   x6   y6   z6
tony    x7   y7   z7

In the above example, for (x1, y1, z1) we have found cow, monk and tony. That means if we apply the above formula between
(x1, y1, z1) and (x1, y1, z1),
(x1, y1, z1) and (x3, y3, z3),
(x1, y1, z1) and (x7, y7, z7),
the value we get is less than 400. Similarly for goat we got goat, raj and tatoo, and for monk we got monk, cow and tony. I think now it is clear.
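For what it's worth, the calculation described in this thread maps to a self-join on the table, filtering on the (squared) distance. Here is a small runnable sketch (SQLite through Python, purely for illustration; the table name, column names and coordinates are invented, and the same SELECT works in MySQL, where you could equally write SQRT(...) <= 400):

```python
import sqlite3

# Toy table mirroring the thread's layout: a "link" name plus X, Y, Z.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE points (link TEXT, x REAL, y REAL, z REAL)")
conn.executemany("INSERT INTO points VALUES (?, ?, ?, ?)", [
    ("cow",  0,   0,   0),
    ("monk", 100, 100, 100),
    ("tony", 200, 200, 200),
    ("goat", 900, 900, 900),
])

# Self-join: for every point a, list every point b within distance 400.
# Comparing the squared distance to 400*400 avoids needing a SQRT function.
rows = conn.execute("""
    SELECT a.link, b.link
    FROM points AS a
    JOIN points AS b
      ON (b.x - a.x) * (b.x - a.x)
       + (b.y - a.y) * (b.y - a.y)
       + (b.z - a.z) * (b.z - a.z) <= 400 * 400
    ORDER BY a.link, b.link
""").fetchall()

neighbours = {}
for link, nb in rows:
    neighbours.setdefault(link, []).append(nb)
print(neighbours)
# e.g. cow ends up with ["cow", "monk", "tony"], goat only with itself
```

Pivoting the matching names into fixed columns R1..R5 is a separate (and in SQL fairly awkward) step; doing that part in application code, as with the neighbours dictionary above, is usually simpler.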
The VXL Project

I am facing some trouble using the FMatrixComputeNonLinear class in vxl's multi-view geometry (mvl) module. The way I am using it is illustrated in the attached code snippet. It seems that FMatrixComputeNonLinear's compute(FMatrix *f) method does not modify the parameter f to the new value of F computed. So the problem I am having is that after making a call to FMatrixComputeNonLinear::compute() I have no way of getting back the new F matrix. Is this a known problem? Do I have to modify the vxl source code to fix this? I would be grateful to know if someone else has had this problem before and how it was resolved. Thanks in advance,

FMatrix fm;
PairMatchSetCorner match;
// computed a set of corresponding matches in 'match'
// initial estimate of the F matrix is 'fm'

// Non-linear estimation
FMatrix *init_fmatrix = new FMatrix(fm);
cout << "F before non linear minimization " << *init_fmatrix << endl;
FMatrixComputeNonLinear computor(&match);
computor.compute(init_fmatrix);
cout << "F after non linear minimization " << *init_fmatrix << endl;
// In the above code snippet the value of 'init_fmatrix' does not change
// after the call to compute() as expected.
Gene Ontology consistent protein function prediction: the FALCON algorithm applied to six eukaryotic genomes

Gene Ontology (GO) is a hierarchical vocabulary for the description of biological functions and locations, often employed by computational methods for protein function prediction. Due to the structure of GO, function predictions can be self-contradictory. For example, a protein may be predicted to belong to a detailed functional class, but not to a broader class that, due to the vocabulary structure, includes the predicted one. We present a novel discrete optimization algorithm called Functional Annotation with Labeling CONsistency (FALCON) that resolves such contradictions. The GO is modeled as a discrete Bayesian Network. For any given input of GO term membership probabilities, the algorithm returns the most probable GO term assignments that are in accordance with the Gene Ontology structure. The optimization is done using the Differential Evolution algorithm. Performance is evaluated on simulated as well as real data from Arabidopsis thaliana, showing improvement compared to related approaches. We finally applied the FALCON algorithm to obtain genome-wide function predictions for six eukaryotic species based on data provided by the CAFA (Critical Assessment of Function Annotation) project.

Protein function prediction; Gene Ontology; Evolutionary optimization

A central aim of computational protein function prediction methods is to provide reliable and interpretable results, in order to be useful for the biological community. For this reason, prediction methods often make use of the Gene Ontology (GO) controlled vocabulary [1] to describe functional properties of proteins. GO terms are organized in three separate domains that describe different aspects of gene and protein function: Molecular Function (MF), Biological Process (BP) and Cellular Component (CC). Within each domain the terms are arranged in a Directed Acyclic Graph (DAG).
Due to the hierarchical structure of the GO-DAG, a protein that is assigned to a particular term is by definition assigned to all of its predecessors, which are more general GO terms. On the other hand, if a protein does not perform a particular function, it is not assigned to the corresponding GO term, nor to any of the successors (more detailed terms) of that term. This constraint of the GO-DAG is referred to as the True Path Rule (TPR) and provides a framework to ensure that functional descriptions of proteins are not self-contradictory. Computational methods often neglect the TPR in their predictions, making their interpretation problematic. Taking the GO DAG (and thus the TPR) into account in protein function prediction may lead to improvement of both performance and interpretation. Violation of the TPR can be described in a continuous or in a discrete manner. In the former, the probability (or confidence) of membership in a GO term does not decrease monotonically when moving from more general GO terms to more detailed ones. Therefore the space of probability vectors (where a vector denotes the joint set of per-GO term membership probabilities) can be divided in two sets: one set (C) that contains the probability vectors that satisfy the monotonicity constraint and another set (V) that contains those that violate the constraint. The challenge from the continuous point of view is, given a vector in V, to find an optimal corresponding vector in C, according to a criterion. Obozinski et al. [2] developed different “reconciliation” approaches to infer consistent probability vectors from Support Vector Machine (SVM) outputs transformed to probabilities. Performance comparisons between methods based on Belief Propagation, Kullback-Leibler minimization and Isotonic Regression (IR) showed that the last outperformed the rest.
In IR [3,4] the predictors are the ranks in the ordering of terms in the GO-DAG from general to detailed, and the responses are the membership probabilities. The aim is to identify the probability vector that minimizes the squared error with respect to the original input vector and that is monotonic in the predictors, and thus belongs to C. In the discrete case, the interest is shifted from the probabilities of membership to the memberships themselves. The TPR violation can be evaluated by checking whether all dependencies are satisfied or not. Given an inconsistent probability vector, the aim is to find the most probable set of GO assignments that do not violate the TPR. The task of inferring the most probable latent binary vector given the input probabilities is a decoding problem, which is well studied in information theory when the underlying structure of constraints has a tree-like structure (including chains). The Viterbi algorithm [5] (also called min-sum [6]) performs such exact inference in tree-like structures. Standard hierarchical classification is not a suitable approach to this problem due to the DAG structure of GO and the multi-functionality of proteins [7]. For instance, applying hierarchical classification to the DAG depicted in Figure 1A, one starts from the root (x[1]) and moves to either x[2] or x[3]. Regardless of the outcome of this classification, it is not possible to give a positive prediction for x[4] without violating the TPR (since exactly one of its parents will not be predicted). However, Vens et al. [8] proposed a hierarchical classification methodology adapted for the GO vocabulary. Other interesting approaches come from fuzzy classification [9]. Exact inference in DAG structures is an NP-hard problem [10] that can be performed by the Junction Tree algorithm [11], but the computational cost is intractable for graphs of the size of the GO. Barutcuoglu et al.
[12] modeled the GO-DAG as a Bayesian Network and combined SVM outputs per GO term in order to obtain GO assignments. In their case, exact inference was feasible because of the small size of the part of the GO-DAG used in the study (105 terms). Another related approach was developed by Sokolov and Ben-Hur [13], where SVM classifiers for structured spaces, such as the Gene Ontology, were developed. Valentini et al. [14] and Cesa-Bianchi et al. [15] developed ensemble algorithms that transfer the decisions between base (GO term) classifiers according to the GO DAG structure. Jiang et al. [10] first converted the GO DAG to a tree structure and then applied exact inference.

Figure 1. Example graphs. A. Minimal Directed Acyclic Graph (DAG). B. DAG with 15 nodes, which was used in our experiments.

Here, we take a discrete approach to the problem of TPR violations and we develop an algorithm for the inference of the most probable TPR-consistent assignments using per-GO term probabilities as input. To the best of our knowledge there is no other algorithm for this task that is suitable for large DAGs. We model the GO DAG as a Bayesian Network and we infer the most probable assignments employing the global optimization method of Differential Evolution [16], adapted for a discrete space. We test our algorithm on small graphs of 6 and 15 nodes, for which we can perform exact inference. We show that our algorithm consistently finds the correct optimal configuration. Further, we evaluate the performance of the algorithm on probabilistic outputs of Bayesian Markov Random Fields (BMRF) [17] as applied previously to Arabidopsis thaliana protein function prediction [18]. Our algorithm is applied to a graph that contains 1024 GO terms. We show that besides providing consistent predictions, our algorithm improves the performance of the predictions compared to a supervised method used in a previous study.
We finally applied our algorithm at large scale and we provide function predictions for 32,000 unannotated proteins from six eukaryotic species.

Materials and methods

The GO is a vocabulary that describes the functions and locations of genes, and its terms are arranged in a DAG structure, i.e. every node has zero, one or more parents and children. A protein can be assigned to one or multiple terms from each domain of GO [7]. The TPR of the GO-DAG implies that when a protein is known to be assigned to a particular GO term, it should also be assigned to all ancestor terms. In contrast, when a protein is known not to be a member of a GO term, it should not be a member of any of the successors of that term. By GO-DAG consistency we denote satisfaction of the TPR (also see Table 1). In terms of prediction, given a possibly inconsistent vector of input probabilities, one has to find the most probable set of consistent GO-DAG paths to which the protein should be annotated.

Table 1. Parent-child relationship in a GO-DAG

Naturally, methods that treat GO terms independently and neglect the DAG structure of the GO can make predictions that are inconsistent. In particular, for probabilistic methods those inconsistencies may appear in the form of p[i]>p[j], in which term j is an ancestor of term i, and thus more general. In this study, we aim to find the most probable consistent GO term assignments, using such probability vectors as input. We first describe the general probabilistic setting, then derive two likelihood-based objective functions and finally an evolutionary algorithm for the optimization.

Bayesian network modelling of GO

Consider a Directed Acyclic Graph (DAG) G=(V,E) with nodes V (denoting the set of GO terms) and E directed edges (the set of parent-child relationships).
Vector θ denotes the input probability vector, which is ∣V∣-dimensional, and x is the corresponding binary labeling, where x[g]=1 denotes membership of a particular protein in the g-th GO term of V. We model the GO-DAG as a Bayesian Network, with density for x:

p(x) = ∏[g∈V] p(x[g] ∣ x[pa(g)])   (1)

where pa(g) denotes the parent set of node g and x[pa(g)] the set of labels that correspond to those parents. The probability p(x[g]∣x[pa(g)]), under the DAG constraints, is given using the Conditional Probability Table (CPT) of Table 2. The table shows that when min(x[pa(g)])=0 (i.e. at least one of the parents has label 0) then x[g]=0 with probability 1. Otherwise x[g]=1 with probability θ[g] and x[g]=0 with probability 1−θ[g]. Note that all inconsistent labelings have zero probability.

Table 2. Conditional probability table, under the DAG constraints

Given equation (1) and conditional probability tables with parameters θ = (θ[1],⋯,θ[∣V∣]), one wishes to identify the most probable labelings vector x. There are two challenges in this. The first is how to choose the parameter vector θ, discussed in this section, and the second is how to search for the most probable labelings vector x, which is discussed in the next section. Most computational methods for GO term prediction are developed under a multi-class classification framework, where each GO term denotes a class and for each protein the probability of being a member of that class is evaluated by the method. Classes are arranged according to a DAG hierarchy, and further, each protein may belong to one or multiple classes. In GOStruct [13] an SVM approach was developed to perform multi-class classification in a single step. However, the vast majority of the methods split the multi-class problem into multiple binary classification ones (i.e. one versus all) and therefore act per GO term and disregard the GO hierarchy.
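Before moving on, equation (1) and the CPT of Table 2 can be made concrete with a toy evaluation on the four-node DAG of Figure 1A (this is an illustration of ours, not the paper's code; the probabilities are invented):

```python
# DAG of Figure 1A: node -> parents (x1 is the root, x4 has parents x2 and x3).
parents = {1: [], 2: [1], 3: [1], 4: [2, 3]}

def labeling_probability(x, theta):
    """p(x) = prod over g of p(x[g] | x[pa(g)]) under the CPT of Table 2.

    x maps node -> label (0/1), theta maps node -> membership probability.
    Labelings that violate the True Path Rule get probability 0.
    """
    p = 1.0
    for g, pa in parents.items():
        if pa and min(x[q] for q in pa) == 0:
            if x[g] == 1:
                return 0.0  # a parent is 0 but the child is 1: inconsistent
            # otherwise x[g] = 0 with probability 1, i.e. a factor of 1
        else:
            p *= theta[g] if x[g] == 1 else 1.0 - theta[g]
    return p

theta = {1: 0.9, 2: 0.8, 3: 0.3, 4: 0.6}
print(labeling_probability({1: 1, 2: 1, 3: 0, 4: 0}, theta))  # 0.9 * 0.8 * 0.7 = 0.504
print(labeling_probability({1: 1, 2: 1, 3: 0, 4: 1}, theta))  # inconsistent -> 0.0
```

The second call returns 0.0 because x[4]=1 while its parent x[3]=0, exactly the kind of inconsistent labeling that equation (1) assigns zero probability.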
GeneMANIA [19], Kernel Logistic Regression [20] and BMRF [17] propagate function information through networks of protein associations, and this operation is performed per GO term. Blast2GO [21], GO-At [22] and Aranet [23] perform overrepresentation analysis for each GO term separately. Such methods do not return the conditional probabilities in the sense of equation (1). The membership probabilities that they return are perhaps best viewed as marginal probabilities, i.e. summed over all configurations for GO terms other than the specific term g. We might have tried to retrieve θ from the relation between marginal and conditional probabilities, but this is certainly not an easy way. We attempted other ways. Methods such as BMRF return low probabilities for detailed GO terms and high ones for general terms. Prioritization of the proteins in a particular GO term can then be achieved by simply sorting them. By contrast, prioritization of GO terms for a particular protein (a more important task) is not simple, as the sets of probabilities for different GO terms are not directly comparable. To make them comparable, the probabilities need to be calibrated. We derive two approaches. The first, called DeltaL, is based on the maximization of the difference of the likelihood and the prior probability of the labelings, as they are defined in equation (1). The second, called LogitR, is based on explicit calibration of the input probability vector. For DeltaL, we modify the objective function of equation (1) by incorporating the prior probabilities of membership θ^∗. The prior probability depends on the generality of the GO term g (i.e. the class size) and is estimated as the fraction of the total proteins annotated to that GO term. We use the log ratio between the input probability and the prior as score function for the labeling of the g-th GO term. For x[g]=1 the score is equal to log(θ[g]/θ[g]^∗), while for x[g]=0 it takes the value of log((1−θ[g])/(1−θ[g]^∗)). When θ[g]>θ[g]^∗, then x[g]=1 maximizes the function.
In the opposite case x[g]=0 gives the maximum. The extended function is given by the difference of the log likelihoods:

f(x) = log p(x ∣ θ) − log p(x ∣ θ^∗)   (3)

Note that when the input probabilities are very close to the priors, the objective function of DeltaL becomes multimodal. In LogitR, optimization of equation (1) is performed on a calibrated input probability vector. The calibration is done as follows:

logit(θ[cg]) = logit(θ[g]^∗) + α (logit(θ[g]) − logit(θ[g]^∗))   (4)

where θ[cg] is the calibrated probability for node g, which can be calculated using the inverse of the logistic transformation, θ[g]^∗ is the prior probability of membership for node g and α is a slope parameter. Under this calibration, when the posterior probability θ[g]=θ[g]^∗, the calibrated probability of membership is equal to θ[g]^∗ (Figure 2A). As θ[g] deviates from the prior, the calibrated probability θ[cg] changes according to the logistic function, given θ[g] and α (Figure 2B). The α parameter was tuned using Saccharomyces cerevisiae data. In particular, for a range of α=1, 1.5, 2.0, 2.5, 3.0 the LogitR approach was applied taking as input BMRF-based predictions obtained from a previous study [17] before March 2010. The evaluation set consisted of 327 proteins that were annotated after March 2010, according to the GO annotation file of July 2011. The relevant part of the GO DAG contained 423 terms from Biological Process. For each value of α the prediction performance was measured using the F-score, which is the harmonic mean of precision and recall. The largest F-score was obtained for α=2 and therefore we fixed α to that value.

Figure 2. Calibration of posterior probabilities using α=2. A. Calibrated probabilities (y-axis) against the posterior probabilities (x-axis) when the prior is equal to 0.2. B. Image plot, for the entire range of prior and posterior probabilities. The colors denote the calibrated probabilities.

Optimization by differential evolution: The FALCON algorithm

The DeltaL (equation (3)) and LogitR (equation (4)) approaches do not directly involve the TPR constraints.
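As a concrete reading of the LogitR calibration described above, the simplest map with the stated properties (a fixed point at the prior and a logistic-shaped response with slope α) is linear in logit space; the paper's exact parametrization may differ, and the numbers below are invented:

```python
import math

def logit(p):
    """Inverse of the logistic transformation."""
    return math.log(p / (1.0 - p))

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def calibrate(theta, prior, alpha=2.0):
    """Scale the deviation from the prior by alpha in logit space.

    By construction calibrate(prior, prior) == prior, and a larger alpha
    makes the calibrated probability react more sharply to deviations.
    """
    return logistic(logit(prior) + alpha * (logit(theta) - logit(prior)))

print(round(calibrate(0.2, 0.2), 6))  # -> 0.2, the prior is a fixed point
print(round(calibrate(0.5, 0.2), 6))  # -> 0.8, pushed up for alpha = 2
```

A posterior of 0.5 against a prior of 0.2 becomes 0.8 here, illustrating how the calibration makes scores for rare (detailed) terms comparable with those for common (general) ones.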
We develop an optimization algorithm inspired by Differential Evolution (DE) [16] that by construction is restricted to the subspace of consistent labelings. We call our algorithm Functional Annotation with Labeling CONsistency (FALCON). In general, DE works by evolving a population of candidate solutions to explore the search space and retrieve the maximum. Because DE is derivative-free, it has appealing global optimization properties. Also, it is suitable for optimization in discrete spaces (like the labelings space in our problem). The graph representation of the labelings is helpful to explain how the algorithm works. Given the graph G and its corresponding labeling X, we define a reduced graph G[X] which contains the nodes with corresponding labels x=1. If X is consistent, in the TPR sense, G[X] will be a connected sub-graph of G, maintaining the original structure for the nodes. Consider two labelings X[1], X[2] and their graphs G[1], G[2] respectively; an example is given in Figure 3. Graph union G[1]∪G[2] gives the expanded graph, while graph intersection G[1]∩G[2] gives the contracted one. The nodes that will be included in the resulting graph are given by set operations on the node sets (i.e. union and intersection respectively), but also, equivalently, by performing logical OR (for union), X[1]∨X[2], and logical AND (for intersection), X[1]∧X[2], operations on the labelings directly. Table 3 and Figure 3 illustrate those operations.

Figure 3. Examples of graph (upper row) and logical (lower row) operations, using the DAG structure of Figure 1A.

Table 3. Logical operations OR and AND for all the combinations of labels

Operations between consistent graphs (labelings) result in consistent graphs (labelings) as well, because the edge set of the result is the union or the intersection of the edge sets of the operands, and therefore a particular edge has to pre-exist in at least one of the operands without violating the TPR.
This property can be seen as follows: for any parent-child pair of nodes there are three types of configurations that are consistent (Table 1). Graph union and intersection between any combination of those pairs leads to a locally consistent labeling. This holds for all the parent-child pairs, so it holds for the full labeling. Therefore the outcome of graph union and intersection will be consistent as well. Further, operations between more than two labelings will be consistent as well, due to the associativity property. The FALCON optimization algorithm is based on the generation and evolution of a population of subgraphs G[1],…,G[N], with N=2∣V∣. The population is first initialized with consistent labelings (graphs) and evolved exploiting the graph-union and graph-intersection operations between individuals. Through the generations, all the constructed labelings will be consistent due to the abovementioned property. In our optimization problem we used four strategies to propose a new candidate solution (labeling) for the i-th graph G[i], a member of the population. The first two types are called global because they do not involve G[i], while the latter two are local moves. Graph e is a random subgraph of the original full graph (i.e. the GO-DAG), constructed by sampling a random node and all its ancestors; e ensures that all consistent configurations can eventually be proposed and reached. With f the objective function, i.e.
being DeltaL or LogitR, the scheme of the FALCON algorithm is as follows:

Initialize the population of size N=2∣V∣ by picking random consistent vectors (see below)
while convergence or the maximum number of generations is not reached do
    for i=1 to N do
        Sample two labelings from the population
        Construct a candidate X[new] using a randomly picked strategy S1, S2, S3, S4
        if f(X[new]) > f(X[i]) then X[i] ← X[new]
    end for
end while

Initialization of the population for DeltaL is done by randomly sampling GO terms according to their individual score (the log ratio of the input and prior probability), and for LogitR by sampling from the binomial distribution with probability equal to the calibrated one. In both cases the nodes were up-propagated in order to construct a consistent labeling. The computation was terminated after 10,000 generations or after reaching a plateau (i.e. no improvement in the objective function for 100 generations). Finally we point out that a valid Markov Chain Monte Carlo algorithm cannot be derived using those proposal strategies because they do not represent reversible moves. The bitwise exclusive OR move proposed by Sterns in [24] is reversible but does not lead to consistent labelings. Implementation of the algorithm was done in the R language for Statistical Computing, using the igraph R package [25].

Performance evaluation

We evaluated the performance of the FALCON algorithm on the DeltaL and LogitR objective functions using Precision, Recall and F-score. Precision is defined as the percentage of correct GO terms in the list of GO predictions. Recall is equal to the percentage of the true GO assignments that were identified, and F-score is the harmonic mean of Precision and Recall.

Simulated data

First, we tested the capability of FALCON to retrieve the most probable graph using the full graphs in Figure 1 with a hundred simulated probability vectors. The first contains six nodes and the second fifteen. Because the graphs are small, exhaustive search of the most probable labeling was computationally tractable.
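A condensed sketch of the FALCON scheme just described, on the four-node DAG of Figure 1A (our illustration, not the paper's R code; the four proposal strategies are not fully specified in the text, so union/intersection moves with a second population member or with a random subgraph e stand in for S1-S4):

```python
import math
import random

# DAG of Figure 1A: node -> parents. Invented input probabilities theta.
parents = {1: [], 2: [1], 3: [1], 4: [2, 3]}
theta = {1: 0.9, 2: 0.8, 3: 0.7, 4: 0.6}

def consistent(x):
    """True Path Rule: a node may be labeled 1 only if all its parents are."""
    return all(x[g] == 0 or all(x[q] == 1 for q in pa)
               for g, pa in parents.items())

def log_prob(x):
    """Log of equation (1) under the Table 2 CPT; x is assumed consistent."""
    s = 0.0
    for g, pa in parents.items():
        if any(x[q] == 0 for q in pa):
            continue  # some parent is 0: the child is forced to 0, factor 1
        s += math.log(theta[g] if x[g] == 1 else 1.0 - theta[g])
    return s

def random_consistent():
    """Pick a random node and up-propagate to all of its ancestors."""
    x = {g: 0 for g in parents}
    stack = [random.choice(list(parents))]
    while stack:
        g = stack.pop()
        x[g] = 1
        stack.extend(parents[g])
    return x

random.seed(0)
pop = [random_consistent() for _ in range(2 * len(parents))]  # N = 2|V|
for _ in range(200):
    for i in range(len(pop)):
        a, b = random.sample(pop, 2)
        e = random_consistent()                # random subgraph, ensures reachability
        cand = random.choice([
            {g: a[g] | b[g] for g in a},       # global move: union of two members
            {g: a[g] & b[g] for g in a},       # global move: intersection
            {g: pop[i][g] | e[g] for g in e},  # local move: expand the i-th member
            {g: pop[i][g] & e[g] for g in e},  # local move: contract it
        ])
        assert consistent(cand)                # union/intersection preserve the TPR
        if log_prob(cand) > log_prob(pop[i]):
            pop[i] = cand                      # greedy acceptance, as in the scheme

best = max(pop, key=log_prob)
print(best)  # with every theta above 0.5, the full graph {1,2,3,4} wins
```

Because every proposal is a union or intersection of consistent labelings, the consistency assert never fires, which is exactly the closure property proved in the text.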
We generated a hundred random probability vectors, by sampling probabilities for each node from the uniform distribution. Then we identified the most probable labeling for each simulated probability vector and the one returned by FALCON using equation (1) as objective function. Performance measures were calculated by comparing the vectors obtained by FALCON with the most probable ones as calculated from the exhaustive search. Real data The performance of FALCON was further evaluated using as input the GO membership probabilities of the Arabidopsis proteins as computed by BMRF in [18]. This method provides membership probabilities per GO term independently. We constructed two evaluation datasets from those data. First, we randomly picked 100 Arabidopsis proteins that were already annotated at the time of computing the BMRF posterior probabilities. One constraint was that they should have at least fifty annotations (after up-propagating their original annotations). In this way we ensured that they were annotated in rather detailed GO terms, and therefore the attempt to get GO-DAG consistent predictions would be sensible. Although these proteins had a fixed labeling in the computations, BMRF can calculate membership probabilities for them, by reconstitution, i.e. as if they were unknown. The second dataset consisted of 387 proteins that were annotated later than the date of the BMRF computations. Thus, at the time of the computation the proteins were not annotated. We used this second set of proteins to evaluate the performance of FALCON in realistic conditions. In addition, we obtained a further list of predictions using the supervised approach proposed in [18]. In this approach, from the posterior probabilities of the annotated proteins, an F-score based optimal threshold was calculated per GO term. Using this approach, called maxF, we derived a set of predictions for each evaluation dataset. Note that those lists are not guaranteed to be GO-DAG consistent. 
Results and discussion

Performance of FALCON on simulated data

We initially evaluated the performance of FALCON on the two small graphs of Figure 1. For each graph we simulated 100 probability vectors by drawing from the uniform distribution. Because the graphs are small, we could identify the most probable labeling by exhaustive search. Using equation (1) as objective function and setting all the prior probabilities to 0.5, LogitR retrieved 98/100 of the labelings for the 6-node graph and 92/100 of the labelings for the 15-node graph. The DeltaL approach also retrieved 98/100 labelings for the small graph (using priors of 0.5 for all the nodes).

Performance of FALCON on real data

To assess performance we used Arabidopsis proteins for which we previously calculated GO membership probabilities [18]. The true labelings of the proteins included in the evaluation datasets were known, so we were able to calculate performance metrics. Table 4 shows mean performance measures per protein and per GO term. The LogitR approach leads to the highest F-score, while maxF comes second and DeltaL comes last. We see that all three of them follow the precision-recall trade-off (i.e. for larger precision there is lower recall and vice versa), with maxF being more precise but with reduced recall, and the opposite for DeltaL. LogitR stays in the middle. In Figure 4 performance measures are shown in relation to the GO term level of detail and to the number of GO assignments per protein. Using the F-score to summarize the performance (Figure 4A), we see that for the GO terms that are rather general DeltaL (blue) performs well, but for the more detailed ones its performance deteriorates. On the other hand, LogitR and maxF perform well in detailed GO terms. In terms of Precision (Figure 4B) and Recall (Figure 4C), the latter methods have similar performance but LogitR performs slightly better.
On the other hand, DeltaL predicts large numbers of terms and therefore shows high recall but low precision, in particular for the detailed GO terms that are of real interest. Comparing the performance of predicting the assignments per protein (Figure 4D-F), the LogitR approach performs consistently better than the others for proteins that need either a small or a large number of GO terms to be functionally described.

Table 4. Mean performance measures for the evaluation dataset consisting of 100 Arabidopsis proteins

Figure 4. Performance on the evaluation dataset for the methods LogitR (red), DeltaL (blue), maxF (yellow). A-C. F-score, Precision and Recall scores for different sizes of GO terms. D-F. The same scores against the number of annotations per protein. Smoothed splines in each subplot show fitted generalized additive models, using the R function smooth.spline. Because a large number of points in the scatterplot coincided, we performed jittering by adding a small error term e∼N(0,10^−4) to each value, in order to make the maximum number of points visible.

We further evaluated the performance of our approaches using a set of proteins that were annotated after obtaining the BMRF predictions (Table 5). From the total of 387 newly annotated proteins, maxF returned predictions for 84 of them, DeltaL for 328 proteins and LogitR for 147 proteins. Again, maxF and DeltaL show comparable performance, while LogitR returned an improved list in terms of F-score. Further, the higher recall rates of DeltaL tend to give longer lists of predictions. Importantly however, DeltaL and LogitR return predictions that are consistent with the GO-DAG and are therefore preferred, because such predictions are biologically interpretable. Table 5.
Mean performance measures for the newly annotated proteins

Novel predictions

We performed protein function predictions using the FALCON algorithm on the unannotated parts of the genomes of 6 eukaryotes (human, mouse, rat, slime mold, frog and Arabidopsis). This dataset includes the eukaryotic targets used in the Critical Assessment of protein Function Annotation (CAFA) experiment of 2011 [26] and consists of 32,201 proteins. Function predictions were made for 1,917 GO terms from the Biological Process and Molecular Function domains of the Gene Ontology. The input probabilities were computed during CAFA'11 by BMRF, integrating protein networks constructed from the STRING database [27] with orthology information obtained from ProgMap [28]. The BMRF and FALCON predictions are available on the BMRF website: http://www.ab.wur.nl/bmrf_yk/FALCON_CAFA.tab.gz

Overall, we examined the performance of FALCON for two objective functions, but FALCON is in principle suitable for optimization of a wide range of objective functions. The main purpose of FALCON is to provide GO-DAG consistent predictions. We showed that this comes with no loss of prediction performance; in fact, LogitR outperforms the maxF method. The predictions of FALCON are GO-DAG consistent and therefore much easier for the curators of protein function annotations to interpret biologically. In this study, an estimate of the calibration parameter α for LogitR was obtained using a yeast data set and the input probabilities were obtained from a semi-supervised method (BMRF); but, thereafter, FALCON is unsupervised: it infers the optimal GO term assignment using only the input probability vectors and the prior probabilities per GO term (computed from a set of predictions or using external Gene Ontology information). In contrast, in maxF a training set is necessary in order to obtain the optimal cutoffs per GO term.
In this study both approaches were applicable but the FALCON algorithm is expected to have broader applicability. Authors’ contributions YK and CtB conceived the study and developed the algorithm. YK and ADJvD performed the BMRF analysis on the CAFA dataset. YK performed the analysis on FALCON and wrote the manuscript. CtB supervised the research and helped in writing the manuscript. All authors read and approved the final manuscript. We thank Roeland van Ham, James Holzwarth and the two reviewers for constructive remarks on the manuscript. YK was supported by the Biorange grant SP3.2.1 from the Netherlands Bioinformatics Centre. ADJvD was supported by the transPLANT project (funded by the European Commission within its 7th Framework Programme under the thematic area ‘Infrastructures’, contract number 283496) Sign up to receive new article alerts from Algorithms for Molecular Biology
All posts tagged '12-pack' Let’s take a last stab at our beer-delivery problem. We tried out a Sieve, we used the Microsoft Solver – time for some recursion. How can we organize our recursion? If we had only 1 type of beer pack, say, 7-packs, the best way to supply n bottles of beer is to supply the closest integer greater than n/7, that is, $$\lceil {n \over 7} \rceil$$ If we had 7-packs and 13-packs, we need to consider multiple possibilities. We can select from 0 to the ceiling of n/7 7-packs, and, now that we have only one type of case pack left, apply the same calculation as previously to the remaining bottles we need to supply – and select the best of the combinations, that is, the combination of beer packs closest to the target. If we had even more types of beer packs available, we would proceed the same way, by trying out the possible quantities for the first pack, and given the first, for the second, and so on until we reach the last type of pack – which is pretty much the outline of a recursive algorithm.
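The outline above can be sketched as a short recursive function. This is an illustrative reconstruction in Python (function and variable names are mine, not from the original post): with one pack type left, round up; otherwise try every feasible count of the first pack and recurse on the rest, keeping the smallest total at or above the target.

```python
from math import ceil

def best_supply(packs, n):
    """Smallest bottle total >= n achievable with the given pack sizes.

    packs: list of pack sizes, e.g. [7, 13]; n: bottles needed.
    Tries every feasible count of the first pack, then recurses on the rest.
    """
    if len(packs) == 1:
        # One pack type left: round n/size up to the next whole pack.
        size = packs[0]
        return size * max(0, ceil(n / size))
    size, rest = packs[0], packs[1:]
    best = float("inf")
    # From 0 up to the ceiling of n/size packs of the first type.
    for k in range(max(0, ceil(n / size)) + 1):
        total = k * size + best_supply(rest, n - k * size)
        best = min(best, total)
    return best

print(best_supply([7, 13], 20))  # -> 20 (7 + 13 exactly)
```

For 20 bottles with 7-packs and 13-packs, one 7-pack plus one 13-pack hits the target exactly, which is what the search finds.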
Sean Lewis I'm currently a Computer Science and Mathematics student at the University of Texas at Austin. I'm also a member of the Turing Scholars Honors program. I've got some work hosted at github.com/splewis/. I also really enjoy Mahler symphonies. But who doesn't... right? Gravity Golf This is a game I started long ago in high school. The premise is that you launch a ball into space and try to get it to reach its goal. This is made harder by the planets in the way, which pull the ball and affect its trajectory. It's mostly written in Java, but I've written/rewritten certain sections in Scala and begun getting it to work properly with SBT. (Before, I relied on Eclipse to do the dirty work.) It is hosted at github.com/splewis/GravityGolf. Below are some screenshots of the game.
Calculate Rate of Disintegration of Projectile What fates may befall the material? Combustion, evaporation, thermal cracking, explosion... The rate of heating will matter, since rapid external heating would create greater stresses, and so will conductivity. For now let's assume the particle does not fracture into macroscopic fragments; should a fate like that befall it, the methodology we create here can be reapplied to the fragments in question (with a lot of new variables, that is). How would one determine the heat gained from air friction? Also, how would one determine the transfer of heat into the "cooler" center of the sphere? Obviously thermal conductivity plays a very large role here, but even if the conductivity is high, would it affect disintegration in the first (maybe few) second(s) at some non-negligible rate?
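One way to put a rough number on the conduction question is the diffusion timescale tau ~ r^2 / alpha, where alpha = k / (rho * cp) is thermal diffusivity. The sketch below is illustrative only (the material values are roughly iron-like and are my assumption, not from the thread); it ignores ablation, radiation and shape change, and only estimates whether the interior can stay cool during the first seconds.

```python
def conduction_timescale(radius_m, k, rho, cp):
    """Order-of-magnitude time (s) for heat to diffuse to a sphere's centre.

    tau ~ r^2 / alpha, with thermal diffusivity alpha = k / (rho * cp).
    k in W/(m K), rho in kg/m^3, cp in J/(kg K), radius in metres.
    """
    alpha = k / (rho * cp)          # thermal diffusivity, m^2/s
    return radius_m ** 2 / alpha

# Illustrative, iron-like values: k = 80 W/(m K), rho = 7870 kg/m^3, cp = 450 J/(kg K)
tau = conduction_timescale(0.01, 80.0, 7870.0, 450.0)
```

For a 1 cm radius with these values tau comes out at a few seconds, so on the "first few seconds" timescale the centre of a centimetre-scale metallic sphere really can lag well behind the surface temperature.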
Algebra Tutors Middletown, NY 10940 Science and Math Tutor ...Mainly, I work on lower-level math skills they have not understood and mastered, homework help, or study skills. I have also helped many students prepare for the Regents in Math (Algebra I, Geometry, Algebra II/Trigonometry), Science (Earth Science, Living Environment),... Offering 10+ subjects including algebra 1 and algebra 2
“Since besides the physico-theological proof, the cosmological proof, and the ontological proof of the existence of a highest being no other road is open to speculative reason, the ontological proof, of pure concepts of reason alone, is the only proof possible, if indeed a proof is possible at all of a proposition towering so high above all empirical use of the understanding.” Immanuel Kant Sunday Meditations & Devotions
Injective modules and torsion functors

(This is a related question.) Local cohomology is studied mostly over Noetherian rings. Parts of the machinery do in fact not rely on Noetherianness, but on some weaker properties, for example the following:

(ITI) $\mathfrak{a}$-torsion submodules of injective modules are injective.

Noetherian rings have the ITI-property with respect to every ideal $\mathfrak{a}$; this is usually proven using the Artin-Rees Lemma or Matlis' structure theory for injective modules. Rings with the ITI-property with respect to every ideal are not necessarily Noetherian; a (somewhat silly) example can be found in my comment to this question. Of course, in order for this property to be really useful one should know whether certain classes of rings have the ITI-property with respect to certain classes of ideals. A particular choice of rings and ideals yields the following question:

Does every ring have the ITI-property with respect to every principal ideal generated by a non-zero-divisor?

(If this is the case, then it follows of course that integral rings have the ITI-property with respect to finitely generated ideals, and that polynomial algebras over arbitrary rings have the ITI-property with respect to finitely generated monomial ideals - hence local cohomology would not behave too badly in these two large and interesting settings.)

Answer: If $R$ is a valuation ring whose maximal ideal $\mathfrak{m}$ is of finite type, then $R$ has ITI with respect to $\mathfrak{m}$ if and only if $R$ is Noetherian. Since there exists a non-Noetherian valuation ring whose maximal ideal is of finite type, the answer to the question is no. Moreover, it follows that integral rings do not necessarily have the ITI property with respect to ideals of finite type.
A proof of the above, concrete examples, and further details on the ITI-property will be found in a joint work with P.H. Quý, available in due time.
Prove root 2 is Irrational? An irrational number is a real number that cannot be written as a simple fraction, or we can say irrational means not rational. For example: π (pi) is an irrational number, whose value begins 3.14... Now we will come to square roots: a square root of a number is a value that can be multiplied by itself to give the original number. For example: the square root of 16 is 4, because when 4 is multiplied by itself we get 16: 4² = 16. Here 2 is the power (exponent) of 4. Now, prove that the square root of 2 is irrational: √2 = 1.41421356..., an irrational number. Yes, the square root of 2 is irrational. Let us see how!! First we square a rational number. If the rational number is a/b, then it will become a²/b² when squared. For example: (3/4)² = 3²/4². Now we see that the exponent is 2, which is an even number. But to take this further we need to break the numbers down into their prime factors. Example: (3/4)² = (3/(2×2))² = 3²/2⁴. We notice that the exponents are still even numbers: the 3 has an exponent of 2 (in 3²) and the 2 has an exponent of 4 (in 2⁴). Now one thing becomes obvious: every exponent is an even number!! So we can see that when we square a rational number, the result will be made up of prime numbers whose exponents are all even numbers. So, when we square a rational number, each prime factor has an even exponent. Now, let us just look at the number 2. As a fraction, 2 = 2/1, which is 2¹/1¹, and that has odd exponents!! We can write 1 as 1² (so it has an even exponent), and then we have 2 = 2¹/1². The numerator still has an odd exponent. We could even try things like 2 = 4/2 = 2²/2¹, but we still cannot get rid of an odd exponent. So 2 could not have been made by squaring a rational number! This means that the value that was squared to make 2 (i.e. the square root of 2) cannot be a rational number. Hence we can say that the square root of 2 is irrational.
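The "every exponent of a square is even" observation can be checked mechanically. Here is a small illustrative sketch (the helper functions are mine, not from the article): factor an integer, then test whether every prime exponent is even.

```python
def prime_exponents(n):
    """Prime factorization of n as {prime: exponent}, by trial division."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def all_exponents_even(n):
    """True exactly when n is a perfect square."""
    return all(e % 2 == 0 for e in prime_exponents(n).values())

# Squares always pass; 2 never can, so sqrt(2) is not a ratio of integers.
print(all_exponents_even(36))   # -> True  (36 = 2^2 * 3^2)
print(all_exponents_even(2))    # -> False (2 = 2^1)
```

Since squaring a fraction doubles every exponent in both numerator and denominator, no ratio of integers can square to a number like 2 whose exponent of 2 is odd.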
How to Calculate the Payments for Amortized Loans An amortized loan payment is a level payment that completely pays off a loan in a set period of time. For long-term loans like mortgages, it is helpful to understand how the loan amortizes. The amortization is how each payment is allocated between principal repayment and interest to the lender. An amortized loan payment schedule is usually determined using an online mortgage calculator or a spreadsheet program with a mortgage template. The payment and amortization amounts can also be calculated using a calculator with the ability to perform exponential calculations. Calculate the monthly interest rate by dividing the annual interest rate by 12. This amount will be designated R. The loan amount will be P, and the number of payments will be N. Calculate the monthly payment using this formula: Payment = P * R / (1 - (1 + R)^-N). That is, R is divided by a denominator calculated by taking (1 + R) to the negative exponent of N and subtracting the result from 1; the resulting fraction of R divided by the calculated denominator is multiplied by P. The result will be an amortizing monthly payment. Calculate the interest and principal for the first monthly payment. The interest is P times R. The principal is the monthly payment minus the calculated interest. Calculate the loan balance after the first payment. Subtract the principal of the first payment from P. The result will be the loan balance after the first payment, labeled B. Calculate the principal and interest for the second payment. The monthly interest rate, R, is multiplied by B for the interest on the second payment. Subtract the new interest amount from the monthly payment for the principal repayment in the second payment. Repeat the interest and principal calculation for each new loan balance until all of the payments have been calculated and the loan balance, B, is zero. Things You Will Need • Calculator with exponential function • Fixed-rate, fixed-term loan payments are all calculated the same way. The calculations will work for a car payment or a mortgage. 
• The spreadsheet functions PMT, IPMT and PPMT will calculate the payment, interest and principal amounts for setting up an amortization schedule in a spreadsheet program like Excel or Calc. • The output of any calculator is only as good as the input. Make sure you have accurate loan data when calculating a payment. • The actual payment on a loan may vary slightly. Loans often add in several days of extra interest to bring the payment due date to the first of the month.
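The steps above can be sketched as a short function. This is an illustrative sketch (the names are mine, not from any particular calculator): it computes the level payment as P * R / (1 - (1 + R)^-N) and then rolls the balance forward one period at a time.

```python
def amortization_schedule(principal, annual_rate, n_payments):
    """Level payment plus the per-period interest/principal split.

    annual_rate is a decimal (0.06 for 6%); R = annual_rate / 12 is the
    monthly rate and Payment = P * R / (1 - (1 + R) ** -N).
    """
    r = annual_rate / 12
    payment = principal * r / (1 - (1 + r) ** -n_payments)
    balance, rows = principal, []
    for _ in range(n_payments):
        interest = balance * r              # interest on the current balance
        principal_part = payment - interest # rest of the payment repays principal
        balance -= principal_part
        rows.append((payment, interest, principal_part, balance))
    return payment, rows

# A 30-year $100,000 loan at 6% annual interest.
payment, rows = amortization_schedule(100000, 0.06, 360)
```

The final row's balance entry comes out at essentially zero (up to floating-point rounding), mirroring the last step of the procedure above.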
Itinerant Wargamer 2 btns + skirmishers. So, here are the first 2 Perry btns; they will be joined by a 3rd very soon (about 1/3 done). I decided to do a bit of a mixture, so 1 btn is all greatcoats, 1 in just coats, and the third will be a mix. This will make them readily identifiable on the table and is a nice simple system with a bit of continuity whilst making every unit different. In order to do this I needed more figures than were in the plastics boxes. I wanted more officers, including mtd ones, and needed a few more figures generally. There are "only" 2 variations of flank coys in each of the coated and great-coated figures, which is great if you are mixing them all up but a bit weedy if you are an awkward bleeder like me and want to separate them out. So what you are looking at above is 2 x 36-man units and half a dozen skirmishers. To get this I only used 2 boxes of plastics and then added in extra metals. I still have enough figures left over to get a third btn out of the 2 boxes. Each btn has a metal mtd officer, 3 metal foot officers, 4 metal flank coy figures, 3 metal centre coy bods plus 2 or 3 metal command figures. This way I have at least 2 metal figures per 6-man base, which gives them a bit of weight - for those of us who like metal figures it does make a bit of a difference; they do feel a bit better. Obviously this increases the unit cost overall but it's still very cheap. Let's see.... 2 boxes of plastics = £30.00; 42 metal foot plus 3 mtd colonels = £49.50; total = £79.50. That's enough to make up THREE btns with 5 metal and 7 plastic figures left over - plus 6 more skirmishers. So it works out to less than £25 a btn, which is still a bit of a bargain. The "coated" btn. I will end up with a shed load of skirmishers, but they are nice to paint so I'll probably bang them out and they will end up on e-bay. Having sorted out a 3-btn light infantry regt, the question is: what to do next? 
I know what Noel's answer to this question is: if I am to continue with more of these, then the obvious next step would be a 3-btn line regt, but that is too obvious. Italians, maybe? Berg? I've already got 4 btns of Swiss, the Westfalians are sorted... suggestions welcome.
Full-Text Searching & the Burrows-Wheeler Transform Pattern Matching If you pick an arbitrary pattern string P, say "abr," one way to find all occurrences of it is to search the sorted blocks in M, finding the range of blocks that start with a, then narrowing it to blocks prefixed by "ab," and so on, extending the pattern from left to right. This method is workable, but a more efficient algorithm (first developed by Ferragina and Manzini) works in the opposite direction, extending and matching the pattern one character to the left at each turn. To understand this method inductively, first consider how to match one character c, then how to extend a single character beyond a pattern that has already been matched. The answer to the first problem is easy, since you know blocks in the range C[c]...C[c+1]-1 start with c. I call this range "R[c]." To left-extend a pattern match, consider the string cP formed by prepending a character c onto the already matched string P. Use FL, starting from the range of locations prefixed by P, mapping FL inversely to find the interval of blocks prefixed by cP, as follows. Given the next character c and the range R[P] of blocks prefixed by P, you need to find the range R[cP] of blocks prefixed by cP. You know two things about R[cP]: • It must fall within the range R[c] of blocks starting with c, that is, R[cP] is a subrange of R[c]. • FL must map all blocks in R[cP] into blocks in R[P], because every block starting with cP must left-shift to a block that starts with P. My approach is to start with the widest possible range R[c], and narrow it down to those entries that FL maps into R[P]. Because of the sorting, you know that entries prefixed by cP form a contiguous range. Since FL is order-preserving on R[c], you can find R[cP] as follows: • Scan R[c] from the start until you find the lowest position i that FL maps into R[P]. 
• Scan backwards from the end to find the highest position j that FL maps into R[P] (in practice, you use binary search, but the idea is the same). The resulting [i,j] range is R[cP], the range of blocks prefixed by cP. Figures 3(a), 3(b), 3(c), and 3(d) show this narrowing-down process; the refine method implements this algorithm in bwtindex.cc. Figure 3(a): Start by finding the range of all blocks starting with "r." Figure 3(b): To find R_br, find bs that precede rs. I show two copies of F for clarity. Working right-to-left, the copy on the right represents the previous range, and the one on the left is new. To search: 1. Take all blocks starting with b; 2. search this set for the first i where FL[i] >= startp, and the last j where FL[j] <= endp; everything in this [i,j] range must map into the target interval between startp and endp. This [i,j] interval identifies all bs that are followed by r; in other words, all blocks starting with "br." i and j become the new startp/endp for the next step.
The Location Problem There is one valuable piece of information you haven't found: The exact offset of each match within the original text. I can call this the "location problem," because there is virtually no information in a sorted block to tell you how far you are from the start or end of the text, unless you decode and count all the characters in between. There are a number of solutions to the location problem that I won't address here except to say that all of them require extra information beyond FL and C, or any BWT representation. The simple but bulky solution is just to save the offset of each block in an array of n integers, reducing the problem to a simple lookup, but adding immensely to the space requirement. The problem is how to get the equivalent information into less space. Some approaches rely on marking an explicit offset milepost at only a few chosen blocks, so you quickly encounter a milepost while decoding forward from any block. Others use the text itself as a key, to index locations by unique substrings. Another possibility lets you jump ahead many characters at a time from certain blocks, so as to reach the end of the text more quickly while counting forward. The variety of possible solutions makes it impossible to cover them here. A Word About Compression Recall that I promised a full-text index that consumes only a few bits per character, but so far you've only seen a structure taking at least one int per character hardly an improvement. However, the integers in FL have a distribution that makes them highly compressible. You already know FL contains long sections of integers in ascending order. Another useful fact is that consecutive entries often differ by only one; in normal text, as many as 70 percent of these differences are one, with the distribution falling off rapidly with magnitude. 
My own experiments using simple differential and gamma coding have shrunk FL to fewer than 4 bits per character, and more sophisticated methods (see "Second Step Algorithms in the Burrows-Wheeler Compression Algorithm," by Sebastian Deorowicz; Software: Practice and Experience, Volume 32, Issue 2, 2002), have shrunk FL to even more competitive levels. The practical problem with compression is that elements of FL then vary in size, so finding an element FL[i] requires scanning from the beginning of the packed array. To eliminate most of the scanning, you need to use a separate bucket structure, which records the value and position of the first element of each bucket. To find FL[i], you scan forward from the beginning of the closest bucket preceding i, adding the encoded differences from that point until position i is reached. The process is laborious, but does not affect the higher level search and decoding algorithms. Kendall is a software engineer living in San Francisco and can be contacted at kendall@willets.org.
Mathematics and Puzzles As children, we all loved mathematics and working out puzzles. Mathematics was an all-important tool to answer questions like "How many," "Who is older," "Which is larger." And puzzles were of course everywhere. We did not stop to check a dictionary to ascertain that a puzzle is something, such as a toy or game, that tests one's ingenuity. We did not care about our ingenuity a little bit, but just thrived on learning new things and skills that nature made us curious about. Growing up was great fun. Time brought a change. In school we were made to realize that learning is a serious business, and for many of us much of it has ceased to be entertaining. Although not for all. Some could not give up their erstwhile pursuits of mental entertainment. There are enough puzzle lovers to provide a living for the selected few who invent and publish puzzles - in accordance with the dictionary definition, to challenge one's ingenuity - puzzles old and new. The luckiest of the breed grew up to become scientists, mathematicians in particular. Mathematicians solve puzzles as a matter of vocation. Puzzlists seek puzzles in newspapers, books, and now on the Web. There are many kinds of puzzles - jigsaw puzzles, slider puzzles, sliding-block puzzles, logic puzzles, mazes, cryptarithms, crosswords, strategy games, dissections, magic squares - it's hard to enumerate all the known kinds. Puzzlists and mathematicians have their preferences. Most mathematicians will probably deem the classification of their occupation as puzzle solving a misnomer. (Due to their mindset, they will likely inquire as to the definition of puzzle solving - just in case.) Mathematicians call their puzzles problems. Solved problems become lemmas, theorems, propositions. Why would they object to being categorized as puzzlists? Solving both puzzles and mathematical problems requires perseverance and ingenuity. 
However, there is a profound difference between solving puzzles and what mathematicians do for a living. The difference is mainly one of attitude towards either activity. For the puzzlist, solving a puzzle is a goal in itself. For the mathematician, solving a problem is an enjoyable and desirable occupation, but it is seldom (with the exception, for instance, of great problems of long standing, like Fermat's Last Theorem) a satisfactory achievement in itself. In most cases, after solving a problem a mathematician will try something else: modify or generalize the solved problem, seek another proof - perhaps simpler or more enlightening than the original one, attempt to understand what made the proof work, etc., which will lead him to another problem and so on. Whatever he does, he eventually gets a hierarchical network of interrelated solved problems - a theory. Why does a mathematician seek new problems? The reason is that mathematics, even if perceived by many as a not very meaningful manipulation of abstract symbols, embodies in its abstractness a rare power of explanation. Some mathematics directly explains natural phenomena; some sheds light on other portions of mathematics or other sciences. (A famous Russian mathematician, V.I. Arnold, even categorized mathematics as that part of physics in which experiments are inexpensive.) Understanding in mathematics is born not only from formulas, definitions and theorems but, even more so, from those networks of related problems. The process is very much like distilling the many meanings of a word in a thesaurus into a unique shade of the concept it represents. Mathematics - the most exact science of all - is least of all a dictionary of term definitions. Mathematicians seek knowledge. In search of knowledge, they enjoy themselves tremendously inventing and solving new problems.
The site - Interactive Mathematics Miscellany and Puzzles - makes an attempt to present mathematics as an evolving and entertaining subject in which an unsuspecting visitor may take an active part. Copyright © 1996-2000 Alexander Bogomolny
mg/hr into ppm Hey Anybody, Does anyone know how to change mg/hr into ppm? Specifically for ozone. Not how much ozone would be in a room, but how much the machine would be producing. If the most the machine would produce is 1,200 mg/hr, how much ozone, in ppm, would the machine produce? Thanks! Well, to be perfectly accurate, you can't. I'll start with the mass, since it's the hardest to compute. In 1 hr, you produce 1200 mg of ozone. In order to find ppm, I need to know the mass of the air into which these 1200 mg of ozone will be spread. This is where the measurements come in. Let's say the room is 4 x 5 x 2.5 = 50 m³. The mass of the air in the room will be its Volume x Density = 50 m³ x 1.25 g/L x 1000 L/m³ x 1000 mg/g = 62,500,000 mg. If you divide the mass of the ozone by the mass of the air, you'll get the mass fraction of ozone. Multiply that number by 1 million, and you've got ppm/hr: 1200/62,500,000 x 1,000,000 = 19.2 ppm/hr. Now all you've got to do is multiply that number by the time you run the machine and you've got ppm. For example, after 1 hour you'll have 19.2 ppm/hr x 1 hr = 19.2 ppm.
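Packaged as a function, the same calculation looks like this. A rough sketch (names are mine): 1.25 g/L is an approximate air density at room conditions, and the model assumes a sealed, well-mixed room with no ozone decay, so it overestimates real concentrations.

```python
def ozone_ppm(mg_per_hour, hours, room_volume_m3, air_density_g_per_l=1.25):
    """Approximate mass-fraction ppm of ozone after running a generator.

    Assumes a sealed room, uniform mixing and no ozone decay.
    """
    # Room air mass in mg: m^3 -> L (x1000), g -> mg (x1000).
    air_mass_mg = room_volume_m3 * air_density_g_per_l * 1000 * 1000
    ozone_mass_mg = mg_per_hour * hours
    return ozone_mass_mg / air_mass_mg * 1_000_000

print(ozone_ppm(1200, 1, 50))  # ~19.2 ppm for the 50 m^3 room above
```

The 1,200 mg/hr generator in a 50 m³ room reproduces the thread's 19.2 ppm after one hour; doubling the room volume halves the result.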
One-to-One Matching with Interdependent Preferences

Mumcu, Ayse and Saglam, Ismail (2006): One-to-One Matching with Interdependent Preferences.

In this paper, we introduce interdependent preferences to a classical one-to-one matching problem that allows for the prospect of being single, and study the existence and properties of stable matchings. We obtain the relationship between the stable set, the core, and the Pareto set, and give a sufficiency result for the existence of the stable set and the core. We also present several findings on the issues of gender optimality, lattices, strategy-proofness, and rationalizability.

Item Type: MPRA Paper
Institution: Bogazici University
Original Title: One-to-One Matching with Interdependent Preferences
Language: English
Keywords: One-to-one matching; externalities
Subjects: D - Microeconomics > D6 - Welfare Economics > D62 - Externalities
C - Mathematical and Quantitative Methods > C7 - Game Theory and Bargaining Theory > C71 - Cooperative Games
C - Mathematical and Quantitative Methods > C7 - Game Theory and Bargaining Theory > C78 - Bargaining Theory; Matching Theory
Item ID: 1908
Depositing User: Ayşe Mumcu
Date Deposited: 25 Feb 2007
Last Modified: 20 Feb 2013 22:37
URI: http://mpra.ub.uni-muenchen.de/id/eprint/1908
GCSE Maths : Algebra Revision for Android

Take a breath and make your GCSE preparation a fun activity with our collection of GCSE apps. Here comes the most comprehensive Algebra app.

• HIGHEST QUALITY and QUANTITY: 730 questions and 73 revision notes in all, just for Algebra! High-quality content written by an experienced mathematician.
• REVISE BY TOPIC: Expressions, equations, inequalities, advanced expression equations, patterns and sequences, graphs.
• MOCK TEST: Mixed questions from all topics.
• REVIEW with EXPLANATION: Review each question at the end of the test. Know the right answer, with a detailed explanation for each question.

With our unique progress-tracking feature, including pie charts and bar graphs showing your progress, you know you are ready to take on the real test at the board when your progress meter says 100%.

More details on topics:

1. Expressions, equations etc.: The language of algebra; Simplifying expressions (1); Simplifying expressions (2); Solving equations with brackets; Multiplying expressions; Equations with the variable on both sides (1); Formulae, expressions and equations; Expansion, simplification and factorisation (1); Rearranging formulas (1); Solving linear equations; Equations with the variable on both sides (2); Setting up equations; Trial and Improvement; Expansion, simplification and factorisation (2); Rearranging formulas (2); Quadratic expansion; Squaring brackets; Difference of two squares; Solving quadratic equations by factorisation (1); Factorising a quadratic with a unit coefficient of x²; Solving quadratics of the form ax² + bx + c = 0; Solving quadratic equations by factorisation (2); Solving the general quadratic by the quadratic formula; Using the quadratic formula without a calculator; Solving problems with quadratic equations; Solving a quadratic equation by completing the square; Quadratic equations with no solution.

2. Inequalities (F & H): Solving inequalities; Inequalities on number lines; Graphical inequalities; More than one inequality.

3. Patterns and sequences (F): Patterns in number; Number sequences; The nth term of a sequence; Finding the nth term; Special sequences; Finding the nth term from given patterns.

4. Advanced expressions equations (H): Uses of graphs - solving linear simultaneous equations; Simultaneous equations; Setting up simultaneous equations; Algebraic fractions; Solving equations with algebraic fractions; Linear and non-linear simultaneous equations.

5. Graphs (F): Negative coordinates; Conversion graphs; Drawing graphs from tables; Travel graphs; Linear graphs; Line lengths and mid-points; Real-life graphs; Cover-up method for drawing graphs; Drawing quadratic graphs; Reading values from quadratic graphs; Using graphs to solve quadratic equations.

6. Graphs (H): 3D coordinates; Parallel and perpendicular lines; Gradient-intercept method; Drawing a line with a given gradient; Finding the equation of a line from its graph; Uses of graphs - finding formulae or rules; Significant points of a quadratic graph; Cubic graphs; Exponential graphs; Reciprocal graphs; Graphs of loci and trig functions; Solving equations by the method of intersection; Sine and cosine graphs; Transformations of the graph y = f(x).

Keywords: AQA, Algebra, Bank, CCEA, Edexcel, GCSE, Maths, Number, OCR, Pattern, Question, WJEC, bitesize, collins, equations, exam, expressions, foundation, graphs, higher, learn, notes, preparation, questions, revision, test

Tags: expanding brackets quiz gcse.

Recently changed in this version: Bug fixes. Redesigned some screens for tablets.
New Self-Routing Permutation Networks

S. Lee, M. Lu, "New Self-Routing Permutation Networks," IEEE Transactions on Computers, vol. 43, no. 11, pp. 1319-1323, November 1994, doi:10.1109/12.324564.

This contribution is focused on self-routing permutation networks capable of routing all n! permutations of its n inputs to its n outputs without internal conflict. First, a self-routing permutation network named BNB SRPN is described. The network realizes the self-routing capability on the structure of the generalized baseline network, a modified model of the original baseline network. The network reduces both the hardware and the delay time compared with other comparable networks by a simple algorithm using 1-bit information for the routing decision. The network also has good hardware regularity. In addition, a cost-effective self-routing network is presented, which is derived from the BNB SRPN. The network's hardware complexity, O(N log N), is the same as that of the Benes network, which is not self-routing. The principle realizing a modular structure is also presented.
The modular structure is derived from the principle of localizing the routing decision. It is shown that the modular structure results in the reduction of the total delay through the network.
Index Terms: network routing; switching networks; multiprocessor interconnection networks; computational complexity; self-adjusting systems; self-routing permutation networks; BNB SRPN; generalized baseline network; delay time; routing decision; hardware complexity; Benes network.
I disagree with 0.999... = 1

Topic closed

Re: I disagree with 0.999... = 1

It's also a good lesson as to why we can't use inductive logic in science. Math, however, is a different story.

"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."

Re: I disagree with 0.999... = 1

But 0 really isn't a number - the Romans never had any concept of a zero in their system, and the Greeks managed fine without a zero for quite a while too. In fact, you will see what I am saying in a second: the number 1378 is merely a kind of shorthand for the sum 1000.000... + 300.000... + 70.000... + 8.000... Notice all the zeros after the point stretch to infinity. The zero merely denotes the complete absence of a number at a certain point in the sum, so 1305 = 1000.000... + 300.000... + 5.000... I rest my case - zero is the absence of a number, not a number in itself. Therefore it is true that numbers range on the real number scale, and since .999... is neither -1 nor +1, the difference is between 1 and -1. That has to be the case; it just cannot be otherwise. I haven't got time to continue this right now - I'll be back tomorrow.

Last edited by cray (2006-10-06 15:01:10)

Re: I disagree with 0.999... = 1

Zero is indeed a strange bird. It is true that zero is a "placeholder" when writing down numerals. 302 is not 32. But I believe it also has a role as a number. Imagine the descending number series 5, 4, 3, 2, ... What happens next? 1 (a number), 0 (a what?), -1 (a number), -2, -3, ... etc. To me a number is a count or measurement. It is debatable if you can count "0", but you can measure it: 0 degrees, 0 meters above sea level, $0 bank balance. And if two bags have the same number of apples each, then the difference is 0.

"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman

Re: I disagree with 0.999...
= 1

Like I said before, cray: if zero isn't a number, then you throw entire branches of math into complete chaos. "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."

Re: I disagree with 0.999... = 1

And it's strange. In kindergarten, they teach you the numbers 1-10, then 10-20, but never 0. What gives? I believe 0 doesn't exist - it is just a term given to nothing. -1, -2, -3..., however, are different, since those numbers represent debts.

Re: I disagree with 0.999... = 1

Mary has 5 apples. Joe has 5 apples. How many more apples does Joe have than Mary? "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."

Re: I disagree with 0.999... = 1

Again - even though we say 0 to represent the distance between the two amounts of apples, it is still nothing, so if 0 = nothing, then the distance between two of the same numbers must be nothing.

Re: I disagree with 0.999... = 1

You're right, zero is nothing. And because of that, it's the most important number: x + 0 = 0 + x = x. No other number has that special property. And without it, all of algebra is lost. "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."

Re: I disagree with 0.999... = 1

And James Bond would also be ruined. He wouldn't be 007 - he would just be 7. Yeah, I was thinking of the algebra factors of the problem. 0 has a crucial part to play, and we cannot do without it. But is there a better way to define 0 than 'nothing' or 'the lack of something'? Something similar - about 0^0: http://mathforum.org/dr.math/faq/faq.0.to.0.power.html

Re: I disagree with 0.999... = 1

I believe there is a subtle difference between zero and nothing. Zero specifically says what the amount is.
There are zero dogs. He has zero dollars. This is my definition of Zero: http://www.mathsisfun.com/definitions/zero.html Also my definition of Number: http://www.mathsisfun.com/definitions/number.html Please correct me if you think I am wrong.

"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman

Re: I disagree with 0.999... = 1

Nice definitions - although there are other forms of zero. Let's say, in graphs: to draw a straight line from -3 to 3, you would need the zero to help you. If zero told us the amount, I think we could say 'nothing' - no dogs, no dollars, etc. I think zero is just another word for nothing or the lack of something - I know I'm not 100% right, though.

Re: I disagree with 0.999... = 1

Ok then, I accept that, but I just feel the solution is something to do with the idea of zero. Except in case it isn't, I'll put that on hold, though I have another question! Is the reason why this is wrong: 1/3 = .333..., 2/3 = .666..., etc., because 1/3 is not a number but an idealisation of a perfect fraction? After all, 3/3 would literally mean 3 divided by 3 equals 1, but a third of 1 is a lot more complex in reality than we think? So it acts more like a label than an actual number, whereas .333... is an actual number? Maybe it's to do with that?

Last edited by cray (2006-10-06 21:59:57)

Re: I disagree with 0.999... = 1

All numbers are labels. 0 is the label for the identity with respect to addition. 1 is the label for the identity with respect to multiplication. 2, 3, 4, 5... are the labels for adding a certain number of 1's. The negatives of all of these are the labels for the additive inverses of them. 1/x is the label for the multiplicative inverse. a/x is just the same as saying a * 1/x.

"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."

Power Member Re: I disagree with 0.999...
= 1

If not 0.999... = 1, then 1.000... isn't = 1, since there could be a 1 or a 5 or any number at the end of these infinite zeroes... And also, as said, 0.999... = 9/10 + 9/10^2 + 9/10^3 + ... + 9/10^n, so the difference (h) between 0.999... and 1 must be 1/10^n. But since n = infinity, h = 1/infinity = 0 (since 1/0 = infinity, 1/infinity = 0), so the difference between 0.999... and 1 is 0, and therefore 0.999... = 1.

Super Member Re: I disagree with 0.999... = 1

Zero is a useful concept, which means "nothing", "not any". It is particularly useful when it comes to negative numbers. I guess the first time a negative number occurred was the time when people had property and trade with debts. Hence zero played an important role as the cancelled-out. I owe you 5 pigs, I have 5 from you, but then I give you 5, I have -5 from you, thus due.

cray, the decimal system isn't just symbols, and every decimal represents some exponent of 10. 123.45 represents 1*10^2 + 2*10^1 + 3*10^0 + 4*(1/10) + 5*(1/10)^2, so decimals are fractions by nature. Infinite decimals add even one more idealization than rationals.

An infinitesimal isn't zero, at least in the classic definition made more than 120 years ago. That's the trick. Actually there could be two types of functions or series having the same limit C. One is constantly C, within some domain or since some step (which could be 1 to be general). The other one is approaching C, and never reaches C in the field of functions. However, some people believe it can reach C when it is a series and at Entry Infinity.

Here is another piece of strong evidence of objection - inconsistency. 1.0 = 1, 1.00 = 1, 1.000 = 1, ... Though 1.000... may not exist, it can lead to some 1.00000000 in practice and cause little trouble. It's fortunate enough to get a clear result, to put it in other words. But 0.999... does cause some trouble. Perhaps 0.9999999999 in practice. For example, NASA can reduce the chance of accident by having one more check. Are you confident that after many checks no accident?
Infinite checks??? Come on...

Re: I disagree with 0.999... = 1

infinitesimal isn't zero, at least in the classic definition made more than 120 years ago.

Since infinitesimals don't exist in the reals, they are 0. But in more, uh, "complex" number systems, there are non-zero infinitesimals.

"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."

Re: I disagree with 0.999... = 1

zero is a useful concept, which means "nothing", "not any". It is particularly useful when it comes to negative numbers. I guess the first time a negative number occurred was the time when people had property and trade with debts. Hence zero played an important role as the cancelled-out. I owe you 5 pigs, I have 5 from you, but then I give you 5, I have -5 from you, thus due.

Interesting - I must read up on the history of math more! I mean, I was always under the impression the Romans didn't have a symbol for 0, and I heard they were also fantastic at bartering domestic animals etc. It may be a mistake I have made in thinking that some cultures did not understand the concept of zero (at least I thought they didn't have a term for it).

But 0.999... does cause some trouble. Perhaps 0.9999999999 in practice. For example, NASA can reduce the chance of accident by having one more check. Are you confident that after many checks no accident? Infinite checks??? Come on...

This comment blew me away; I just realised that a mathematical problem like this seems trivial and like a minor puzzle, but in actual fact it could have very serious consequences. My immediate response was to realise that the NASA mathematicians and programmers must surely "engineer" problems like this out of their systems by using a more precise definition of terms. It's safe to say my knowledge of mathematics runs out at being awe-inspired by the simple qualities of a sum like 3/3 = .999...
it shook me up to realise that there must be programmers out there who don't realise this crucial, simple piece of mathematics might have serious implications unless they understand the problem properly - in the aviation business for example. This problem has done my head in! I thought maths was totally absolute and could be defined always in absolute terms. Infinity has a lot to answer for and we should ban it! LOL ha ha ha. Just one more line of questioning from me and then I really have to stop thinking about infinity - mainly because it makes my poor little brain hurt! Does the concept of infinity only produce problems? Is infinity a major problem for mathematics?

Last edited by cray (2006-10-10 02:29:18)

Power Member Re: I disagree with 0.999... = 1

cray wrote: This problem has done my head in! I thought maths was totally absolute and could be defined always in absolute terms.

It is and it can - there is no opportunity for discussion about it; the simple fact of the matter is that 0.999... = 1. What this brings out is simply the fact that one number can have more than one decimal expansion. So what? The number 1 is unique, even though we can write down more than one way to represent it. This is simply what you could call a (minor) "flaw" in the way we represent numbers - the problem, as ever, is with the humans, not the mathematics. There is a lot that could be written about recurring numbers. What exactly do we mean when we write a number such as 0.333... (recurring), for example? There are clear definitions of this that should not be ignored.

cray wrote: Does the concept of infinity only produce problems? Is infinity a major problem for mathematics?

Infinity, as with anything else, is entirely well defined and poses no problems. It is, as it happens, a more complicated subject than you might think, though. Bad speling makes me [sic]

Re: I disagree with 0.999...
= 1

It is and it can - there is no opportunity for discussion about it; the simple fact of the matter is that 0.999... = 1.

But that leaves open the possibility of ambiguity, doesn't it? Surely the computer that takes astronauts to Mars is going to be programmed to define 1 as 1, isn't it? Else, if there is a calculation where the sum turns out to be .999..., it just might conflict with a backup calculation that gathers the answer 1 to be more appropriate - I mean, you can imagine a situation where, no matter how infinitely small the difference between .999... and 1, it could have a consequence in the physical world. Or is that just not possible?

Super Member Re: I disagree with 0.999... = 1

Ricky wrote: infinitesimal isn't zero, at least in the classic definition made more than 120 years ago. Since infinitesimals don't exist in the reals, they are 0. But in more, uh, "complex" number systems, there are non-zero infinitesimals.

Hey, since Ricky don't exist in the reals, he is 0 - just kidding. An infinitesimal isn't even a number at all, at least as highlighted in most calculus books.

Super Member Re: I disagree with 0.999... = 1

cray wrote: I mean you can imagine a situation where no matter how infinitely small the difference between .999... and 1 - it could have a consequence in the physical world. Or is that just not possible?
You seem to be trying to say that a computer would not be able to tell the difference between 0.999...(recuring) and 1. A computer would never have to deal with 0.999...(recuring) as a decimal expansion, since it does not have enough memory. It only has a certain number of floating points. It may, if the programmer chooses, round off to 1 if it gets close enough, but there again lies the human factor. The maths is all good. Bad speling makes me [sic] Re: I disagree with 0.999... = 1 A computer would never have to deal with 0.999...(recuring) as a decimal expansion, since it does not have enough memory. "All [computer scientists] ask for is that you engineers fit an infinite amount of transistors on a finite sized chip" - A professor of mine. An infinitesimal isn't even a number at all, at least highlighted in most caculus books. Did I not just say that infinitesimals don't exist in the reals? And what is calculus? The study of real functions. But you need to go to other number systems: superreals, hyperreals. Think of .999... this way. How much would you have to subtract from 1 to get to 0.999.....? "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 It may, if the programmer chooses, round off to 1 if it gets close enough, but there again lies the human factor. Exactly, so within say the 300 Million lines of code it will take to put a manned space craft on mars - well there is enough of a gamble to call it a risk. However I have at last come to accept that they are one and the same, or 1 and .999..., to be less precise. Power Member Re: I disagree with 0.999... = 1 luca-deltodesco wrote: if you have a number a, no matter how many times you add 1/inf to it, its still 'a' and it does not change. Dont think of it as adding a one to the end of an infinate chain of 0's. 
Thats trying to think of infinity and infintisemals as numbers. there not. just noticed a funny thing here. but if n= infinity, you add an infinty amount of 1/infinity, wouldnt this look like: 1/infinity > 0 thats if infinty can be used as a number, which i guess it cant, but still...funny Last edited by Kurre (2006-10-11 05:00:45) Topic closed
Struct template child_c

boost::proto::result_of::child_c - A metafunction that returns the type of the Nth child of a Proto expression. N must be 0 or less than Expr::proto_arity::value.

child_c public types

1. typedef typename Expr::proto_child0 value_type;

The raw type of the Nth child as it is stored within Expr. This may be a value or a reference.

2. If Expr is not a reference type, type is computed as follows:
- T const & becomes T
- T & becomes T
- T becomes T

If Expr is a non-const reference type, type is computed as follows:

If Expr is a const reference type, type is computed as follows:
Portability: Rank2Types
Stability: provisional
Maintainer: Edward Kmett <ekmett@gmail.com>
Safe Haskell: Trustworthy

The name "plate" stems originally from "boilerplate", which was the term used by the "Scrap Your Boilerplate" papers, and later inherited by Neil Mitchell's "Uniplate". The combinators in here are designed to be compatible with, and subsume, the uniplate API, with the notion of a Traversal replacing a uniplate or biplate. By implementing these combinators in terms of plate instead of uniplate, additional type safety is gained, as the user is no longer responsible for maintaining invariants such as the number of children they received.

Note: The Biplate is deliberately excluded from the API here, with the intention that you replace them with either explicit traversals, or by using the On variants of the combinators below with biplate from Data.Data.Lens. As a design, it forced the user into too many situations where they had to choose between correctness and ease of use, and it was brittle in the face of competing

The sensible use of these combinators makes some simple assumptions. Notably, any of the On combinators are expecting a Traversal, Setter or Fold to play the role of the biplate combinator, and so when the types of the contents and the container match, they should be the id Traversal, Setter or Fold.

It is often beneficial to use the combinators in this module with the combinators from Data.Data.Lens or GHC.Generics.Lens to make it easier to automatically derive definitions for plate, or to derive custom traversals.

class Plated a where

A Plated type is one where we know how to extract its immediate self-similar children.
Example 1:

import Control.Applicative
import Control.Lens
import Control.Lens.Plated
import Data.Data
import Data.Data.Lens (uniplate)

data Expr = Val Int | Neg Expr | Add Expr Expr deriving (Eq,Ord,Show,Read,Data,Typeable)

instance Plated Expr where
  plate f (Neg e) = Neg <$> f e
  plate f (Add a b) = Add <$> f a <*> f b
  plate _ a = pure a

instance Plated Expr where
  plate = uniplate

Example 2:

import Control.Applicative
import Control.Lens
import Control.Lens.Plated
import Data.Data
import Data.Data.Lens (uniplate)

data Tree a = Bin (Tree a) (Tree a) | Tip a deriving (Eq,Ord,Show,Read,Data,Typeable)

instance Plated (Tree a) where
  plate f (Bin l r) = Bin <$> f l <*> f r
  plate _ t = pure t

instance Data a => Plated (Tree a) where
  plate = uniplate

Note the big distinction between these two implementations. The former will only treat children directly in this tree as descendants; the latter will treat trees contained in the values under the tips also as descendants! When in doubt, pick a Traversal and just use the various ...Of combinators rather than pollute Plated with orphan instances! If you want to find something unplated and non-recursive with biplate, use the ...OnOf variant with ignored, though those use cases are much better served in most cases by using the existing Lens combinators! e.g.

toListOf biplate ≡ universeOnOf biplate ignored

This same ability to explicitly pass the Traversal in question is why there is no analogue to uniplate's Biplate. Moreover, since we can allow custom traversals, we implement reasonable defaults for polymorphic data types that only traverse into themselves, and not their polymorphic arguments.

plate :: Traversal' a a

Traversal of the immediate children of this structure. If you're using GHC 7.2 or newer and your type has a Data instance, plate will default to uniplate and you can choose to not override it with your own definition.
Plated Exp
Plated Pat
Plated Type
Plated Dec
Plated Stmt
Plated Con
Plated [a]
Plated (Tree a)

Uniplate Combinators

rewrite :: Plated a => (a -> Maybe a) -> a -> a

Rewrite by applying a rule everywhere you can. Ensures that the rule cannot be applied anywhere in the result:

propRewrite r x = all (isNothing . r) (universe (rewrite r x))

Usually transform is more appropriate, but rewrite can give better compositionality. Given two single transformations f and g, you can construct \a -> f a `mplus` g a which performs both rewrites until a fixed point.

rewriteOf :: ASetter' a a -> (a -> Maybe a) -> a -> a

Rewrite by applying a rule everywhere you can. Ensures that the rule cannot be applied anywhere in the result:

propRewriteOf l r x = all (isNothing . r) (universeOf l (rewriteOf l r x))

Usually transformOf is more appropriate, but rewriteOf can give better compositionality. Given two single transformations f and g, you can construct \a -> f a `mplus` g a which performs both rewrites until a fixed point.

rewriteOf :: Iso' a a -> (a -> Maybe a) -> a -> a
rewriteOf :: Lens' a a -> (a -> Maybe a) -> a -> a
rewriteOf :: Traversal' a a -> (a -> Maybe a) -> a -> a
rewriteOf :: Setter' a a -> (a -> Maybe a) -> a -> a

rewriteM :: (Monad m, Plated a) => (a -> m (Maybe a)) -> a -> m a

Rewrite by applying a monadic rule everywhere you can. Ensures that the rule cannot be applied anywhere in the result.

rewriteMOf :: Monad m => LensLike' (WrappedMonad m) a a -> (a -> m (Maybe a)) -> a -> m a

Rewrite by applying a monadic rule everywhere you can, recursing with a user-specified Traversal. Ensures that the rule cannot be applied anywhere in the result.

rewriteMOn :: (Monad m, Plated a) => LensLike (WrappedMonad m) s t a a -> (a -> m (Maybe a)) -> s -> m t

Rewrite by applying a monadic rule everywhere inside of a structure located by a user-specified Traversal. Ensures that the rule cannot be applied anywhere in the result.
rewriteMOnOf :: Monad m => LensLike (WrappedMonad m) s t a a -> LensLike' (WrappedMonad m) a a -> (a -> m (Maybe a)) -> s -> m t

Rewrite by applying a monadic rule everywhere inside of a structure located by a user-specified Traversal, using a user-specified Traversal for recursion. Ensures that the rule cannot be applied anywhere in the result.

universeOf :: Getting [a] a a -> a -> [a]

Given a Fold that knows how to locate immediate children, retrieve all of the transitive descendants of a node, including itself.

universeOf :: Fold a a -> a -> [a]

universeOn :: Plated a => Getting [a] s a -> s -> [a]

Given a Fold that knows how to find Plated parts of a container, retrieve them and all of their descendants, recursively.

universeOnOf :: Getting [a] s a -> Getting [a] a a -> s -> [a]

Given a Fold that knows how to locate immediate children, retrieve all of the transitive descendants of a node, including itself, that lie in a region indicated by another Fold.

toListOf l ≡ universeOnOf l ignored

transform :: Plated a => (a -> a) -> a -> a

Transform every element in the tree, in a bottom-up manner. For example, replacing negative literals with literals:

negLits = transform $ \x -> case x of
  Neg (Lit i) -> Lit (negate i)
  _           -> x

transformOf :: ASetter' a a -> (a -> a) -> a -> a

Transform every element by recursively applying a given Setter in a bottom-up manner.

transformOf :: Traversal' a a -> (a -> a) -> a -> a
transformOf :: Setter' a a -> (a -> a) -> a -> a

transformM :: (Monad m, Plated a) => (a -> m a) -> a -> m a

Transform every element in the tree, in a bottom-up manner, monadically.

transformMOf :: Monad m => LensLike' (WrappedMonad m) a a -> (a -> m a) -> a -> m a

Transform every element in a tree using a user-supplied Traversal in a bottom-up manner with a monadic effect.
transformMOf :: Monad m => Traversal' a a -> (a -> m a) -> a -> m a contextsOnOf :: ATraversal s t a a -> ATraversal' a a -> s -> [Context a a t]Source Return a list of all of the editable contexts for every location in the structure in an areas indicated by a user supplied Traversal, recursively using another user-supplied Traversal to walk each contextsOnOf :: Traversal' s a -> Traversal' a a -> s -> [Context a a s] holes :: Plated a => a -> [Pretext (->) a a a]Source The one-level version of context. This extracts a list of the immediate children as editable contexts. Given a context you can use pos to see the values, peek at what the structure would be like with an edited result, or simply extract the original structure. propChildren x = children l x == map pos (holes l x) propId x = all (== x) [extract w | w <- holes l x] holes = holesOf plate paraOf :: Getting (Endo [a]) a a -> (a -> [r] -> r) -> a -> rSource Perform a fold-like computation on each value, technically a paramorphism. paraOf :: Fold a a -> (a -> [r] -> r) -> a -> r Provided for compatibility with Björn Bringert's compos library. Note: Other operations from compos that were inherited by uniplate are not included to avoid having even more redundant names for the same operators. For comparison: composOpMonoid ≡ foldMapOf plate composOpMPlus f ≡ msumOf (plate . to f) composOp ≡ descend ≡ over plate composOpM ≡ descendM ≡ mapMOf plate composOpM_ ≡ descendM_ ≡ mapMOf_ plate parts :: Plated a => Lens' a [a]Source The original uniplate combinator, implemented in terms of Plated as a Lens. parts ≡ partsOf plate The resulting Lens is safer to use as it ignores 'over-application' and deals gracefully with under-application, but it is only a proper Lens if you don't change the list length!
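The combinators above are from Haskell's lens/uniplate. As a language-neutral illustration (not part of the lens API), here is a small Python sketch of the bottom-up transform and fixed-point rewrite semantics on a toy expression tree; all names here (descend, transform, rewrite, universe, neg_lit) are hypothetical stand-ins for the Haskell originals.

```python
# Toy expression tree: ("lit", n), ("neg", e), ("add", e1, e2).
# transform applies a function bottom-up; rewrite applies a partial rule
# (returning None when it no longer applies) until a fixed point,
# mirroring the propRewrite law quoted in the docs above.

def descend(f, e):
    """Apply f to the immediate children of e (one level, not recursive)."""
    tag = e[0]
    if tag == "lit":          # literals have no child expressions
        return e
    return (tag,) + tuple(f(c) for c in e[1:])

def transform(f, e):
    """Bottom-up: rebuild children first, then apply f to the result."""
    return f(descend(lambda c: transform(f, c), e))

def rewrite(rule, e):
    """Apply rule everywhere until it can no longer fire anywhere."""
    def step(x):
        r = rule(x)
        return x if r is None else rewrite(rule, r)
    return transform(step, e)

def universe(e):
    """Yield the node and all of its transitive descendants."""
    yield e
    for c in e[1:]:
        if isinstance(c, tuple):
            yield from universe(c)

# The negLits example from the docs: replace Neg (Lit i) with Lit (-i).
def neg_lit(e):
    if e[0] == "neg" and e[1][0] == "lit":
        return ("lit", -e[1][1])
    return None

expr = ("add", ("neg", ("lit", 3)), ("neg", ("neg", ("lit", 5))))
result = rewrite(neg_lit, expr)
# propRewrite: the rule fires nowhere in the result.
assert all(neg_lit(x) is None for x in universe(result))
```

The double negation shows why rewrite must re-run the rule after rewriting children: Neg (Neg (Lit 5)) only becomes Neg (Lit (-5)), and hence Lit 5, once the inner rewrite has happened.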
Maths IB Textbook Curriculum Outline. 2010-2011. Welcome to the start of a new school year! This year, IB Learning Partners will be studying the first year of the IB Standard Level curriculum. IB Mathematical Studies. An InThinking workshop for teachers new to the course. Workshop leader - Jim Noble. Table of Contents. Course Overview4. Aims and Objectives4 Textbook Math Quest 9 , Ch 1. Textbook Math Quest 9, Ch 2. Title: MYP UNIT PLANNER Subject: Mathematics Author: Ruth Clarke and Cam Hall Last modified by: cameron_hall Created Date: 9/1/2008 6:58:00 AM Other titles: In MYP mathematics, the four main objectives support the IB learner profile, ... from their own and other subject groups teachers working in isolation multiple sources and resources for learning a textbook-driven curriculum students investigating, ... The International Baccalaureate aims to develop inquiring, ... IB Maths helps the students to develop a variety of skills which include understanding, ... If a key point is not very clear then seek clarification from your notes or textbook. Maths Assessment Guidelines. ... These can include questions from the textbook or homework book, test revision, ... In line with both the new International IGCSE Maths course and IB Maths courses, revision tests will consist of two main sections. Grade : 11 Maths Studies. Lesson 2. Topic : Sets. Set by Mr S Mahendra. Read ‘Number Sets’, Section B on pages 19 and 20 of the Haese and Harris textbook. In IB Capital letters are used to denote number sets as follows. N* - is the set of Counting Numbers. e.g {1, 2, 3, 4, 5,…………} IB Chemistry in Shanghai 3. Topic List 3. Teaching Time 4. External Assessment 5. IB assessment criteria 6. Criteria 6. Aspects 6. Design 7. Data processing and presentation 7 International Baccalaureate. Primary Years Programme. ... A choice of Chinese Maths or English Maths. ... 
PYP students access a wide range of learning resources and materials and do not follow one specific textbook. ( Collaborative learning. In particular we are an International Baccalaureate world school. ... and the students start to use their GCSE textbook in Year 9. ... Mathematics is a very popular and successful subject in our Sixth Form with two thirds of our students taking A level Maths, Further Maths or IB options. Students will follow advanced IB&M modules whose content extends beyond the basic textbook knowledge. The MSc in IB&M modules reflect state-of-the-art ... The MSc in International Business and Management, ... (IB/EB) that includes the subjects Maths Higher or Maths Standard/Methods and English, ... [See also holdings for newer editions, if any]. Textbook recommended for a few of our courses/modules. May be out of print now. MMM114 Materials Science I. (a+, Oct) check on MMM Book List. MMM211 ... ENM101 Engineering Maths IB. (a, Oct) but check on ENM Book List. Books 035 MTL Record No ... The Academy offers the International Baccalaureate Diploma. ... By moving away from textbook teaching we ... Leading Teacher Performing Arts + Enrichment Leading Teacher Business & Economics + Aim Higher Maths KS 5 Maths KS 3 PE Economics Applied Business Psychology CoPE Biology ... IB. The Language Arts Programme at MIA will include: The integration of reading, ... The Maths Programme at MIA enables students: ... Curriculum Framework June 2012 4th Grade Science Textbook: Scott Foresman Science Mathematical Studies Standard Level for the IB Diploma. ... textbook exercise [ggb] GeoGebra activity [V] ... (This could also be used for a history of maths activity, as it contains a discussion of the historical development of ‘zero’.) [www] Mathematical Studies Standard Level for the IB Diploma. ... students should be encouraged to talk about these both in maths lessons and during specific Theory of Knowledge lessons. ... 
textbook exercise [ggb] GeoGebra activity [V] video link ... school, grades, maths. Easy! Choose the correct sources. Your essay must include a mixture of sources from: websites, books ... Keep in mind that the examiners are IB teachers from your chosen subject ... • Provided a foot note/ citation for any method found in a textbook or reference ... Should I Do More to Upgrade My Maths? What Study Skills Will I Need? ... e.g. IB) you will be taking ... For the revision of basic algebra, any GCSE (Higher Level) textbook will be useful, although Part 1 (chapters 1 to 5) of G. Renshaw. 2005. Maths for Economics, ... in International Business endows its graduates with solid academic knowledge in Business Administration, ... Basic maths: introductory course. ... The students may choose among the following - There is no single adopted textbook. Textbook of Work Study and Ergonomics, Standard Publ. Grandgean E. 1978. Ergonomics of the Home. Taylor & Francis. Ian Galer. 1982 . Applied Ergonomics Handbook. Butterworths & Co. Panero J & Zelnik M. 1979. Human Dimension and Interior Space. Maths. Music. Physical Education. Science. ... A textbook will be used to resource the section on the French Revolution. Ancient Civilizations: ... An end of term assessment will follow the IB There exist strong beliefs among them that if you do not follow the textbook, ... at the same time as some have great ambitions and want to enter the International Baccalaureate (IB) programme. ... Before, I had a hard time with maths, ... Bored by Elizabeth I but could avoid questions on her in the exam. European History – big syllabus (as for IB today but not modern A levels) - included ... 
Which would be in a modern textbook. ... So I never got on with maths because I used to say, ‘Hang on a minute, I need to know ... International Baccalaureate. Diploma Programme. Contents. Contents. ... Maths Studies SL. 33. Environmental Systems SL. 35. Biology, Chemistry, ... Themen Neu 1 and 2, textbook, tapes and workbook (Max Hueber) Mastering German Vocabulary. The existing curriculum has yet to be fully integrated into textbook content, teacher training, and student assessment or to be developed in detail in the teachers’ guides to textbooks. In addition, Past Questions and Answers for JSCE ISI Maths Department. Supplementary. Progressive ... Akwukwo Arumaru Igbo (JS 1 – 3) M.C. Aguo Fodie J.B. Global Ib. Mepuru Onyekuru Mrs. B. Ndubodi. ... Basics Science Pupils Textbook Two Heinemann Publisher Plc Ibadan. Elements was widely considered the most successful and influential Mathematics textbook of all time, ... A complex number a + ib represents a point in a plane. ... Russian maths genius Perelman urged to take $1m prize bbc.co.uk, Wednesday, 24 March 20102. The information contained in these notes is correct to my knowledge and by no means copied from any textbook, therefore, ... The (.ib model for the voltage controlled current source comes in useful for the 'common ... This time I'll go through the maths a lot faster, and let you work it out ... Monga, G S., Maths for Management & Economics, Vikas Publishing House, New Delhi. Chandan, J.S. ... International Business : An overview-types of international business; ... A Textbook On Foreign Exchange. Maurice D. Levi, International Finance, Mcgraw Hill, ... ... e.g. a textbook, ... Maths and Science, Modern Foreign Languages, Humanities, Art, ... In addition, we are the only college in St Helens offering the highly regarded International Baccalaureate ( IB) Diploma. At Cowley we strive to create a partnership between students, parents, ... ... 
asked the class which of us enjoyed Maths and preferred translating from English to Latin rather than from Latin ... I had been amazed by a sentence in my history textbook mentioning Julius Caesar's De Bello Gallico as something `you ... But this is an insidious and damaging course.(ib.). ... Textbook and Home Science Laboratory . Development and designing of curriculum. Teaching aids ... (1970). Curriculum and Teaching of Maths in Secondary Schools, A Research Monograph. Delhi: NCERT. David Wood (1988). How Children Think and ... i&iB~] fy[k~] vl~] Hkw] d` ¼yV~ rFkk y ... Maths. Music. Physical Education. Physics *Spanish (Language B2) Technology (ICT) ... All assessment will follow the IB pattern. ... Tasks from within the textbook. Supplementary tasks based on worksheet material. 7 B.Sc(Maths) 89 - 100 8 B.C.A. 101 - 141 9 B.Sc(Computer Science) 142 -171 ... > kz;> Rij> Ib> cNyhfk;> nty;yk;> re;jdk; Nghd;wit) mtw;Wf;Fupa tpjpfs; – topghl;bd; ... South Gate, A Textbook of Modern European History. Give MCS weight to Advanced Placement (or International Baccalaureate) ... Additional Maths (2) Algebra 2 with Trig(1) Additional Math (1) ... May 13th The master schedule, textbook purchase, ... Textbook in preparation where nuclear physics and particle physics in integrated. ... THE PROFILE OF OUR SCHOOL IS MATHS-SCIENCES OVER ALL THE FOURTH GRADES. Several times, with several persons ... More of Particle Physics I teach within IB Diploma Programme. 1 Carnegie unit in an IB course with a score of 4 or higher on the exam OR. ... (English I, English II, 2 required maths, 2 required sciences, 2 required social studies ... and current events related to astronomy. The course work will include textbook assignments, Internet activities, online ... maths with mummy . v zhitomirsky, l shevrin. moscow : raduga pub. 1989 . 431 2129 03 mar 2012. the cossack holota . maira prihara. ... elementary textbook on physics . moscow : mir publication. 1989 . 452 2160 18 feb 2012. 
fundamentals of physics vol i . moscow : mir publication. 1989 . The International Baccalaureate Diploma Programme ... Art & Design Business Studies Music Additional Maths Geography Information & Communication Technology History German EAL In addition, all ... The primary textbook will be: McAleavy, Tony. ... the IB Middle Years and Diploma programmes and the Pre-U examination. ... other than in the core subjects of maths, English and science, ... or superior to the exemplar material in the course 3.21 The International Baccalaureate includes an extended research essay as a final year capstone to the ... Communicating Maths is an optional module for third and fourth year students in the ... It builds on the more textbook-orientated knowledge and limited controlled laboratory ... - of working with accounting, resolving tasks for finance maths, estimating investment projects, ... World Economy and International Business. Textbook, edited by Professor Polyakova V.V. and Professor Schenin, 4th edition. M.: KNORUS, 2007. The International Baccalaureate Program is a rigorous academic program for students in their final two years of high school. ... Biology HL Group 5 Maths and computer science. Math Studies SL. ... Beside the textbook, students will use resources such as magazine articles, songs, games, ... Two Maths. Two Sciences. ... -AP -IB -Dual or college equivalent course -Advanced CTE/CTE credentialing courses -On-line courses -Other honors or above designated ... Lock Replacement $ 6.00 Textbook Replacement Fees vary based on textbook Workbook Replacement Fees vary based on workbook ... In this case the text will be heavily referred to but will not be the only textbook formally used by the lecturer in the delivery of the course. ... for example lecture material (maths and physics), ... Mathematics IB. Laboratory I Electricity. Thermodynamics. Mathematics II. Laboratory II ... u3 KGñnD 1÷ N IB Òs ... Assessment File Maths Year 5-6. 
Mark Patmore Code : 9781857584783. Pub Price : 35 UK.P. ... Keeping this in mind this self-contained d textbook is written which addresses to general relativity and cosmology. Special needs (VI) Reading and Maths difficulties 2,5 Psychological assistance for children with learning difficulties. ... A Textbook. – Great Britain: Open University Press: McGraw-Hill Education, 2002 (reprinted 2003). Suggested titles(02-further reading) Венар Ч., Кериг П. Major-Practical IIB 3 2 3 60 40 100 Allied Subject-II Paper – I - Allied Part-IB 4. 2 3. 2 3 75 25 100 Part ... Maths - I. Maths - II. Chemistry - I . Chemistry - II ... A Textbook of Chordate Embryology – Saras Publication – 420 pp. Balinsky, ... Research project on human resource management at Shanghai University of International Business and Economics (SUIBE), China. ... It builds on the more textbook orientated knowledge and limited controlled laboratory experiences in years one and two.
Spring School on higher dimensional class field theory

This is a spring school of the SFB/TRR 45 Bonn-Essen-Mainz financed by Deutsche Forschungsgemeinschaft. It takes place March 14-18, 2011 at the University of Mainz. The workshop intends to improve the training of PhD students and postdocs in the area, in particular of the members of the SFB/TRR 45.

What: Summer School
When: Mar 14, 2011, 09:00 AM to Mar 18, 2011, 05:00 PM
Where: Mainz, 05-514 (lectures) and 05-432 (registration)
Contact Name: Jutta Gonska
Contact Phone: +49-6131-3922327

• Moritz Kerz (Essen)
• Stefan Müller-Stach (Mainz)

Speakers and Titles

The goal of this workshop is to give an introduction to recent progress on the class field theory of higher dimensional arithmetic schemes and on higher dimensional Hasse principles. The lectures will assume only a basic acquaintance with classical class field theory and some knowledge of algebraic geometry. Ralf Gerkmann will recall classical results from class field theory of local and global fields. Alexander Schmidt and Tamas Szamuely will describe a new approach to the higher dimensional reciprocity isomorphism originating in the work of Goetz Wiesend. The reciprocity isomorphism describes the abelian fundamental group of an arithmetic scheme in terms of some idelic data, namely the class group of the scheme. An example of a higher dimensional class group is given by the Chow group of zero cycles if the scheme is proper over the integers. Uwe Jannsen will explain Kato's conjectures on higher cohomological Hasse principles. These conjectures generalize the classical Brauer-Hasse-Noether theorem about the Brauer group of a number field. There has been a lot of progress towards a proof of these conjectures during the last twenty years and in his talks Jannsen will explain the role of etale homology and the Weil conjectures. 
Provisional Schedule

Registration: Monday 9 am, room 05-432 (Hilbertraum)
Conference dinner: Tuesday, 7 p.m., Proviant-Magazin, Schillerstr. 11a, Mainz

Time         | Monday        | Tuesday  | Wednesday | Thursday      | Friday
10:00-11:00  | Gerkmann      | Gerkmann | Schmidt   | Gerkmann      | Jannsen/Saito
11:00-11:30  | Coffee        | Coffee   | Coffee    | Coffee        | Coffee
11:30-12:30  | Gerkmann      | Szamuely | Schmidt   | Jannsen/Saito | Jannsen/Saito
12:30-14:00  | Lunch         | Lunch    | Lunch     | Lunch         | Lunch
14:00-15:00  | Szamuely      | Szamuely | –         | Jannsen/Saito | –
15:00-15:30  | Coffee        | Coffee   | –         | Coffee        | –
15:30-16:30  | Jannsen/Saito | Schmidt  | –         | Szamuely      | –
16:45-17:45  | –             | Schmidt  | –         | –             | –

Travel information

All lectures will take place in the Mathematics Department of the University of Mainz, Staudinger Weg 9, in room 05-514. Here is a map of the campus. The closest airport is located at Frankfurt/Main. Find your way from the airport to the institute here.
Explicit Dimension Reduction and Its Applications
2011 26th Annual IEEE Conference on Computational Complexity
San Jose, California, USA, June 8-11, 2011
ISBN: 978-0-7695-4411-3

Zohar S. Karnin, Yuval Rabani, Amir Shpilka, "Explicit Dimension Reduction and Its Applications," 2011 26th Annual IEEE Conference on Computational Complexity, pp. 262-272, 2011. doi: 10.1109/CCC.2011.20

Abstract: We construct a small set of explicit linear transformations mapping R^n to R^t, where t = O(log(γ^{-1}) ε^{-2}), such that the L_2 norm of any vector in R^n is distorted by at most 1 ± ε in at least a fraction 1 − γ of the transformations in the set. Albeit the tradeoff between the size of the set and the success probability is sub-optimal compared with probabilistic arguments, we nevertheless are able to apply our construction to a number of problems. 
In particular, we use it to construct an ε-sample (or pseudo-random generator) for linear threshold functions on S^{n-1}, for ε = o(1). We also use it to construct an ε-sample for spherical digons in S^{n-1}, for ε = o(1). This construction leads to an efficient oblivious derandomization of the Goemans-Williamson MAX CUT algorithm and similar approximation algorithms (i.e., we construct a small set of hyperplanes, such that for any instance we can choose one of them to generate a good solution). Our technique for constructing an ε-sample for linear threshold functions on the sphere is considerably different from previous techniques that rely on k-wise independent sample spaces.

Index Terms: Sample Space, Pseudo Random Generator, PRG, Johnson-Lindenstrauss, Dimension Reduction, Derandomization, Max-Cut, Linear Threshold Function, Halfspace, Digon
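The paper derandomizes constructions of Johnson-Lindenstrauss type. As background only (this is the standard randomized construction, not the explicit family from the paper), a Gaussian random projection preserves the L_2 norm up to 1 ± ε with high probability; the dimensions and tolerance below are illustrative choices.

```python
import math
import random

def random_projection(x, t, rng):
    """Project x from R^n to R^t with a Gaussian matrix scaled by 1/sqrt(t).

    E[||Ax||^2] = ||x||^2, and ||Ax|| concentrates around ||x|| as t grows
    (Johnson-Lindenstrauss). The paper's contribution is replacing the
    random matrix by a small explicit set of transformations.
    """
    n = len(x)
    y = []
    for _ in range(t):
        row = [rng.gauss(0.0, 1.0) for _ in range(n)]
        y.append(sum(r * xi for r, xi in zip(row, x)) / math.sqrt(t))
    return y

def l2(v):
    """Euclidean (L_2) norm."""
    return math.sqrt(sum(vi * vi for vi in v))

rng = random.Random(0)                            # fixed seed for repeatability
x = [rng.uniform(-1.0, 1.0) for _ in range(50)]   # a vector in R^50
y = random_projection(x, 2000, rng)               # its image in R^2000
distortion = l2(y) / l2(x)                        # close to 1 for large t
```

With t = 2000 the standard deviation of the norm ratio is roughly 1/sqrt(2t), so the observed distortion sits well within 1 ± 0.1; shrinking t trades accuracy for dimension, which is the t = O(log(γ^{-1}) ε^{-2}) tradeoff in the abstract.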
I disagree with 0.999... = 1 Topic closed Re: I disagree with 0.999... = 1 It's also a good lesson as to why we can't use inductive logic in science. Math, however, is a different story. "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 But 0 really isn't a number - the Romans never had any concept of a zero in their system, and the Greeks managed fine without a zero for quite a while too. In fact, you will see what I am saying in a second: the number 1378 is merely a kind of shorthand for the sum 1000.000... + 300.000... + 70.000... + 8.000... Notice all the zeros after the point stretch to infinity; the zero merely denotes the complete absence of a number at a certain point in the sum, so 1305 = 1000.000... + 300.000... + 5.000... I rest my case - zero is the absence of a number, not a number in itself. Therefore it is true that numbers range on the real number scale, and since .999... is neither -1 or +1, the difference is between 1 and -1. That has to be the case; it just cannot be otherwise. I haven't got time to continue this right now - I'll be back tomorrow. - Last edited by cray (2006-10-06 15:01:10) Re: I disagree with 0.999... 
= 1 Like I said before cray, if zero isn't a number, then you throw entire branches of math into complete chaos. "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 And it's strange. In kindergarten, they teach you the number 1-10, then 10-20, but never 0. What gives? I believe 0 doesn't exist - It is just a term given to nothing. -1, -2, -3...however, is different, since those numbers represent debts. Re: I disagree with 0.999... = 1 Mary has 5 apples. Joe has 5 apples. How many more apples does Joe have than Mary? "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 Again - Even though we say 0 to represent the distance between the two amounts of apples, it is still nothing, so if 0 = nothing, then the distance between two of the same numbers must be nothing. Re: I disagree with 0.999... = 1 You're right, zero is nothing. And because of that, it's the most important number: x + 0 = 0 + x = x No other number has that special property. And without it, all of algebra is lost. "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 And James Bond would also be ruined. He wouldn't be 007 - He would just be 7. Yeah, I was thinking of the algebra factors of the problem. 0 is a crucial part to play, and we cannot do without it. But is there a better way to define 0 than 'nothing' or 'the lack of something'? Something similar - About 0^0: http://mathforum.org/dr.math/faq/faq.0.to.0.power.html Re: I disagree with 0.999... = 1 I believe there is a subtle difference between zero and nothing. Zero specifically says what the amount is. 
There are zero dogs. He has zero dollars. This is my definition of Zero: http://www.mathsisfun.com/definitions/zero.html Also my definition of Number: http://www.mathsisfun.com/definitions/number.html Please correct me if you think I am wrong. "The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman Re: I disagree with 0.999... = 1 Nice definitions - Although there are other forms of zero. Let's say, in graphs. To draw a straight line from -3 to 3, you would need the zero to help you. If zero told us the amount, I think we could say 'nothing' - No dogs, no dollars, etc. I think zero is just another word for nothing or the lack of something - I know I'm not 100% right, though. Re: I disagree with 0.999... = 1 Ok then I accept that but I just feel the solution is something to do with the idea of zero. except incase it isnt I'll put that on hold , though I have another question ! Is the reason why this is wrong 1/3 = .333... 2/3 = .666... etc etc because 1/3 is not a number but an idealisation of a perfect fraction ? after all 3/3 would literaly mean 3 divided by 3 equals 1 but a third of 1 is a lot more complex in reality than we think? so it acts more like a label than a actual number whereas .333... is an actual number? maybe its to do with that ? Last edited by cray (2006-10-06 21:59:57) Re: I disagree with 0.999... = 1 All numbers are labels. 0 is the label for the identity with respect to addition. 1 is the label for the identity with respect to multiplication. 2, 3, 4, 5... are the labels for adding a certain number of 1's. The negatives of all of these are the labels for the additive inverses of them. 1/x is the label for the multiplicative inverse. a/x is just the same as saying a * 1/x. "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Power Member Re: I disagree with 0.999... 
= 1 if not 0.999... = 1 , then 1.000... isnt = 1 since there could be a 1 or a 5 or any number at the end of these infinite zeroes... and also, as said, 0.999... = 9/10+9/10^2+9/10^3+...+9/10^n then the difference (h) between 0.999... and 1 must be 1/10^n but since n = infinity, h=1/infinity=0 (since 1/0=infinity, 1/infinty=0) so the difference between 0.999... and 1 is 0, and therefor 0.999...=1 Super Member Re: I disagree with 0.999... = 1 zero is a useful concept, which means "nothing", "not any". It is particularly useful when it comes to negative numbers. I guess the first time a negative number occurred was the time when people had property and trade with debts. Hence zero played an important role as the cancelled-out. I owe you 5 pigs, I have 5 from you, but then I give you 5, I have -5 from you, thus due. cray, decimal system isn't just symbols, and every decimal represents some exponent of 10. 123.45 represents 1*10^2+2*10^1+3*10^0+4*(1/10)+5*(1/10)^2 so decimals are fractions by nature. Infinite decimals add even one more idealization than rationals. infinitesimal isn't zero, at least in classic defination made more than 120 years ago. That's the trick. Actually there could be two types of functions or series having the same limit C. One is constantly C, within some domain or since some step (which could be 1 to be general) The other one is approaching C, and never reach C in the field of functions . However, some people believe it can reach C when it is a series and at Entry Infinity. Here another piece of strong evidence of objection - inconsistency. 1.0=1 1.00=1 1.000=1 ... though 1.000... may not exist, it can lead to some 1.00000000 in practice and caus little trouble. It's fortunate enough to get a clear result to put it other words. But 0.999... do cause some trouble. Perhaps 0.9999999999 in practice. For example, NASA can reduce the chance of accident by having one more check. Are you confident that after many checks no accident? 
Infinite checks??? Come on... Re: I disagree with 0.999... = 1 infinitesimal isn't zero, at least in classic defination made more than 120 years ago. Since infinitesimals don't exist in the reals, they are 0. But in more, uh, "complex" number systems, there are non-zero infinitesimals. "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 zero is a useful concept, which means "nothing", "not any". It is particularly useful when it comes to negative numbers. I guess the first time a negative number occurred was the time when people had property and trade with debts. Hence zero played an important role as the cancelled-out. I owe you 5 pigs, I have 5 from you, but then I give you 5, I have -5 from you, thus due. Interesting - I must read up on the history of math more ! I mean I was always under the impression the Romans didnt have a symbol for 0, and I heard they were also fantastic at bartering domestic animals etc. It maybe a mistake I have made in thinking that some cultures did not understand either the concept of zero (at least I thought they didnt have a term for it) But 0.999... do cause some trouble. Perhaps 0.9999999999 in practice. For example, NASA can reduce the chance of accident by having one more check. Are you confident that after many checks no accident? Infinite checks??? Come on... This comment blew me away, I just realised that a mathematical problem like this seems trivial and like a minor puzzle, but in actual fact it could have very serious consequences. My immediate response was to realise that the NASA mathematicians and programmers must surely "engineer" problems like this out of their systems by using a more precise definition of terms, its safe to say my knowledge of mathematics runs out at being awe inspired by the simple qualities of a sum like 3/3 = .999... 
It shook me up to realise that there must be programmers out there who don't realise this crucial, simple piece of mathematics might have serious implications unless they understand the problem properly - in the aviation business, for example.

This problem has done my head in! I thought maths was totally absolute and could be defined always in absolute terms. Infinity has a lot to answer for and we should ban it! LOL ha ha ha

Just one more line of questioning from me and then I really have to stop thinking about infinity - mainly because it makes my poor little brain hurt! Does the concept of infinity only produce problems? Is infinity a major problem for mathematics?

Last edited by cray (2006-10-10 02:29:18)

Power Member

Re: I disagree with 0.999... = 1

cray wrote: This problem has done my head in! I thought maths was totally absolute and could be defined always in absolute terms

It is and it can - there is no opportunity for discussion about it; the simple fact of the matter is that 0.999... = 1. What this brings out is simply the fact that one number can have more than one decimal expansion. So what? The number 1 is unique, even though we can write down more than one way to represent it. This is simply what you could call a (minor) "flaw" in the way we represent numbers - the problem, as ever, is with the humans, not the mathematics.

There is a lot that could be written about recurring numbers. What exactly do we mean when we write a number such as 0.333... (recurring), for example? There are clear definitions of this that should not be ignored.

cray wrote: Does the concept of infinity only produce problems? Is infinity a major problem for mathematics?

Infinity, as with anything else, is entirely well defined and poses no problems. It is, as it happens, a more complicated subject than you might think, though.

Bad speling makes me [sic]

Re: I disagree with 0.999...
= 1

It is and it can - there is no opportunity for discussion about it, the simple fact of the matter is that 0.999... = 1

But that leaves open the possibility of ambiguity, doesn't it? Surely the computer that takes astronauts to Mars is going to be programmed to define 1 as 1, isn't it? Else, if there is a calculation where the sum turns out to be .999..., it just might conflict with a backup calculation that gathers the answer 1 to be more appropriate - I mean, you can imagine a situation where, no matter how infinitely small the difference between .999... and 1, it could have a consequence in the physical world. Or is that just not possible?

Super Member

Re: I disagree with 0.999... = 1

Ricky wrote: infinitesimal isn't zero, at least in classic defination made more than 120 years ago. Since infinitesimals don't exist in the reals, they are 0. But in more, uh, "complex" number systems, there are non-zero infinitesimals.

Hey, since Ricky doesn't exist in the reals, he is 0 - just kidding. An infinitesimal isn't even a number at all, at least as highlighted in most calculus books.

Super Member

Re: I disagree with 0.999... = 1

cray wrote: I mean you can imagine a situation where no matter how infinitely small the difference between .999... and 1 - it could have a consequence in the physical world. Or is that just not possible?

Infinitely small can never make a difference. For example, if you have a number a, no matter how many times you add 1/inf to it, it's still 'a' and it does not change. Don't think of it as adding a one to the end of an infinite chain of 0's. That's trying to think of infinity and infinitesimals as numbers. They're not.

The Beginning Of All Things To End. The End Of All Things To Come.

Power Member

Re: I disagree with 0.999... = 1

cray wrote: I mean you can imagine a situation where no matter how infinitely small the difference between .999... and 1 - it could have a consequence in the physical world. Or is that just not possible?
You seem to be trying to say that a computer would not be able to tell the difference between 0.999... (recurring) and 1. A computer would never have to deal with 0.999... (recurring) as a decimal expansion, since it does not have enough memory; it only has a certain number of floating points. It may, if the programmer chooses, round off to 1 if it gets close enough, but there again lies the human factor. The maths is all good.

Bad speling makes me [sic]

Re: I disagree with 0.999... = 1

A computer would never have to deal with 0.999... (recurring) as a decimal expansion, since it does not have enough memory.

"All [computer scientists] ask for is that you engineers fit an infinite amount of transistors on a finite sized chip" - a professor of mine.

An infinitesimal isn't even a number at all, at least highlighted in most caculus books.

Did I not just say that infinitesimals don't exist in the reals? And what is calculus? The study of real functions. But you need to go to other number systems: superreals, hyperreals.

Think of .999... this way: how much would you have to subtract from 1 to get to 0.999...?

"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."

Re: I disagree with 0.999... = 1

It may, if the programmer chooses, round off to 1 if it gets close enough, but there again lies the human factor.

Exactly - so within, say, the 300 million lines of code it will take to put a manned spacecraft on Mars, there is enough of a gamble to call it a risk. However, I have at last come to accept that they are one and the same; or 1 and .999..., to be less precise.

Power Member

Re: I disagree with 0.999... = 1

luca-deltodesco wrote: if you have a number a, no matter how many times you add 1/inf to it, its still 'a' and it does not change. Dont think of it as adding a one to the end of an infinate chain of 0's.
Thats trying to think of infinity and infintisemals as numbers. there not.

Just noticed a funny thing here: if n = infinity, you add an infinite amount of 1/infinity - wouldn't this look like 1/infinity > 0? That's if infinity can be used as a number, which I guess it can't, but still... funny.

Last edited by Kurre (2006-10-11 05:00:45)

Topic closed
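A footnote to the computer discussion in the thread above: the floating-point behaviour the posters are circling can be checked directly. This is a small illustration (Python, my choice of language, not from the thread) of how a finite machine handles long strings of nines:

```python
# A double carries about 16 significant decimal digits, so a finite
# literal with enough nines is rounded to exactly 1.0 by the parser.
twenty_nines = float("0." + "9" * 20)
print(twenty_nines == 1.0)  # True: the literal rounds to 1.0

# Ordinary decimal fractions are already inexact in binary floating point:
print(0.1 + 0.2 == 0.3)     # False

# The true 0.999... (infinitely many nines) equals 1 exactly; a computer
# never stores it, only finitely many digits, as noted in the thread.
```

So the forum point stands: a program never meets 0.999... (recurring) itself, only finite approximations, and where rounding happens is a choice the programmer makes.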
Pneumatic Air Cylinders - Force Exerted

Pneumatic air cylinders - air pressure and force exerted calculator

Single Acting Cylinder

The force exerted by a single acting pneumatic cylinder can be expressed as

F = p A = p π d^2 / 4    (1)


F = force exerted (N)
p = gauge pressure (N/m^2, Pa)
A = full bore area (m^2)
d = full bore piston diameter (m)

Example - Single Acting Piston

The force exerted by a single acting pneumatic cylinder with 1 bar (10^5 N/m^2) gauge pressure and full bore diameter of 100 mm (0.1 m) can be calculated as

F = p π d^2 / 4
  = (10^5 N/m^2) π (0.1 m)^2 / 4
  = 785 N
  = 0.785 kN

[Diagram: Air Cylinder - Pressure/Force, Imperial Units]

Double Acting Cylinder

The force exerted by a double acting pneumatic cylinder on the outstroke can be expressed as (1). The force exerted on the instroke can be expressed as

F = p π (d1^2 - d2^2) / 4    (2)


d1 = full bore piston diameter (m)
d2 = piston rod diameter (m)

Example - Double Acting Piston

The force exerted on the instroke by a double acting pneumatic cylinder with 1 bar (10^5 N/m^2) gauge pressure, full bore diameter of 100 mm (0.1 m) and rod diameter of 10 mm (0.01 m) can be calculated as

F = p π (d1^2 - d2^2) / 4
  = (10^5 N/m^2) π [(0.1 m)^2 - (0.01 m)^2] / 4
  = 778 N
  = 0.78 kN

Note that instroke capacity is reduced compared to outstroke capacity, due to the rod and the reduced active pressurized area.
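The two formulas above are easy to sanity-check in code. A minimal sketch (Python; the function names are my own, not from the article) reproducing both worked examples:

```python
import math

def single_acting_force(p, d):
    """Outstroke force, F = p * pi * d^2 / 4 (p in Pa, d in m, F in N)."""
    return p * math.pi * d**2 / 4

def double_acting_instroke_force(p, d1, d2):
    """Instroke force, F = p * pi * (d1^2 - d2^2) / 4.
    The rod of diameter d2 blocks part of the pressurized area."""
    return p * math.pi * (d1**2 - d2**2) / 4

# The article's examples: 1 bar = 1e5 Pa, 100 mm bore, 10 mm rod.
print(round(single_acting_force(1e5, 0.1)))                  # 785 N
print(round(double_acting_instroke_force(1e5, 0.1, 0.01)))   # 778 N
```

The small gap between 785 N and 778 N is exactly the rod-area effect the article's closing note describes.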
Neural Network Toolbox

adapt - Adapt neural network to data as it is simulated
adaptwb - Adapt network with weight and bias learning rules
adddelay - Add delay to neural network response
boxdist - Distance between two position vectors
cascadeforwardnet - Cascade-forward neural network
catelements - Concatenate neural network data elements
catsamples - Concatenate neural network data samples
catsignals - Concatenate neural network data signals
cattimesteps - Concatenate neural network data timesteps
closeloop - Convert neural network open-loop feedback to closed loop
combvec - Create all combinations of vectors
compet - Competitive transfer function
competlayer - Competitive layer
con2seq - Convert concurrent vectors to sequential vectors
configure - Configure network inputs and outputs to best match input and target data
confusion - Classification confusion matrix
convwf - Convolution weight function
crossentropy - Neural network performance
disp - Neural network properties
display - Name and properties of neural network variables
dist - Euclidean distance weight function
distdelaynet - Distributed delay network
divideblock - Divide targets into three sets using blocks of indices
divideind - Divide targets into three sets using specified indices
divideint - Divide targets into three sets using interleaved indices
dividerand - Divide targets into three sets using random indices
dividetrain - Assign all targets to training set
dotprod - Dot product weight function
elliot2sig - Elliot 2 symmetric sigmoid transfer function
elliotsig - Elliot symmetric sigmoid transfer function
elmannet - Elman neural network
errsurf - Error surface of single-input neuron
extendts - Extend time series data to given number of timesteps
feedforwardnet - Feedforward neural network
fitnet - Function fitting neural network
fixunknowns - Process data by marking rows with unknown values
formwb - Form bias and weights into single vector
fromnndata - Convert data from standard neural network cell array form
gadd - Generalized addition
gdivide - Generalized division
genFunction - Generate MATLAB function for simulating neural network
gensim - Generate Simulink block for neural network simulation
getelements - Get neural network data elements
getsamples - Get neural network data samples
getsignals - Get neural network data signals
getsiminit - Get Simulink neural network block initial input and layer delays states
gettimesteps - Get neural network data timesteps
getwb - Get network weight and bias values as single vector
gmultiply - Generalized multiplication
gnegate - Generalized negation
gridtop - Grid layer topology function
gsqrt - Generalized square root
gsubtract - Generalized subtraction
hardlim - Hard-limit transfer function
hardlims - Symmetric hard-limit transfer function
hextop - Hexagonal layer topology function
ind2vec - Convert indices to vectors
init - Initialize neural network
initcon - Conscience bias initialization function
initlay - Layer-by-layer network initialization function
initlvq - LVQ weight initialization function
initnw - Nguyen-Widrow layer initialization function
initsompc - Initialize SOM weights with principal components
initwb - By weight and bias layer initialization function
initzero - Zero weight and bias initialization function
isconfigured - Indicate if network inputs and outputs are configured
layrecnet - Layer recurrent neural network
learncon - Conscience bias learning function
learngd - Gradient descent weight and bias learning function
learngdm - Gradient descent with momentum weight and bias learning function
learnh - Hebb weight learning rule
learnhd - Hebb with decay weight learning rule
learnis - Instar weight learning function
learnk - Kohonen weight learning function
learnlv1 - LVQ1 weight learning function
learnlv2 - LVQ2.1 weight learning function
learnos - Outstar weight learning function
learnp - Perceptron weight and bias learning function
learnpn - Normalized perceptron weight and bias learning function
learnsom - Self-organizing map weight learning function
learnsomb - Batch self-organizing map weight learning function
learnwh - Widrow-Hoff weight/bias learning function
linearlayer - Linear layer
linkdist - Link distance function
logsig - Log-sigmoid transfer function
lvqnet - Learning vector quantization neural network
lvqoutputs - LVQ outputs processing function
mae - Mean absolute error performance function
mandist - Manhattan distance weight function
mapminmax - Process matrices by mapping row minimum and maximum values to [-1 1]
mapstd - Process matrices by mapping each row's means to 0 and deviations to 1
maxlinlr - Maximum learning rate for linear layer
meanabs - Mean of absolute elements of matrix or matrices
meansqr - Mean of squared elements of matrix or matrices
midpoint - Midpoint weight initialization function
minmax - Ranges of matrix rows
mse - Mean squared normalized error performance function
narnet - Nonlinear autoregressive neural network
narxnet - Nonlinear autoregressive neural network with external input
nctool - Neural network classification or clustering tool
negdist - Negative distance weight function
netinv - Inverse transfer function
netprod - Product net input function
netsum - Sum net input function
network - Create custom neural network
newgrnn - Design generalized regression neural network
newlind - Design linear layer
newpnn - Design probabilistic neural network
newrb - Design radial basis network
newrbe - Design exact radial basis network
nftool - Neural network fitting tool
nncell2mat - Combine neural network cell data into matrix
nncorr - Cross-correlation between neural network time series
nndata - Create neural network data
nndata2sim - Convert neural network data to Simulink time series
nnsize - Number of neural data elements, samples, timesteps, and signals
nnstart - Neural network getting started GUI
nntraintool - Neural network training tool
normc - Normalize columns of matrix
normprod - Normalized dot product weight function
normr - Normalize rows of matrix
nprtool - Neural network pattern recognition tool
ntstool - Neural network time series tool
numelements - Number of elements in neural network data
numfinite - Number of finite values in neural network data
numnan - Number of NaN values in neural network data
numsamples - Number of samples in neural network data
numsignals - Number of signals in neural network data
numtimesteps - Number of time steps in neural network data
openloop - Convert neural network closed-loop feedback to open loop
patternnet - Pattern recognition network
perceptron - Perceptron
perform - Calculate network performance
plotconfusion - Plot classification confusion matrix
plotep - Plot weight-bias position on error surface
ploterrcorr - Plot autocorrelation of error time series
ploterrhist - Plot error histogram
plotes - Plot error surface of single-input neuron
plotfit - Plot function fit
plotinerrcorr - Plot input to error time-series cross correlation
plotpc - Plot classification line on perceptron vector plot
plotperform - Plot network performance
plotpv - Plot perceptron input/target vectors
plotregression - Plot linear regression
plotresponse - Plot dynamic network time series response
plotroc - Plot receiver operating characteristic
plotsomhits - Plot self-organizing map sample hits
plotsomnc - Plot self-organizing map neighbor connections
plotsomnd - Plot self-organizing map neighbor distances
plotsomplanes - Plot self-organizing map weight planes
plotsompos - Plot self-organizing map weight positions
plotsomtop - Plot self-organizing map topology
plottrainstate - Plot training state values
plotv - Plot vectors as lines from origin
plotvec - Plot vectors with different colors
plotwb - Plot Hinton diagram of weight and bias values
pnormc - Pseudonormalize columns of matrix
poslin - Positive linear transfer function
preparets - Prepare input and target time series data for network simulation or training
processpca - Process columns of matrix with principal component analysis
prune - Delete neural inputs, layers, and outputs with sizes of zero
prunedata - Prune data for consistency with pruned network
purelin - Linear transfer function
quant - Discretize values as multiples of quantity
radbas - Radial basis transfer function
radbasn - Normalized radial basis transfer function
randnc - Normalized column weight initialization function
randnr - Normalized row weight initialization function
rands - Symmetric random weight/bias initialization function
randsmall - Small random weight/bias initialization function
randtop - Random layer topology function
regression - Linear regression
removeconstantrows - Process matrices by removing rows with constant values
removedelay - Remove delay to neural network's response
removerows - Process matrices by removing rows with specified indices
roc - Receiver operating characteristic
sae - Sum absolute error performance function
satlin - Saturating linear transfer function
satlins - Symmetric saturating linear transfer function
scalprod - Scalar product weight function
selforgmap - Self-organizing map
separatewb - Separate biases and weight values from weight/bias vector
seq2con - Convert sequential vectors to concurrent vectors
setelements - Set neural network data elements
setsamples - Set neural network data samples
setsignals - Set neural network data signals
setsiminit - Set neural network Simulink block initial conditions
settimesteps - Set neural network data timesteps
setwb - Set all network weight and bias values with single vector
sim - Simulate neural network
sim2nndata - Convert Simulink time series to neural network data
softmax - Soft max transfer function
srchbac - 1-D minimization using backtracking
srchbre - 1-D interval location using Brent's method
srchcha - 1-D minimization using Charalambous' method
srchgol - 1-D minimization using golden section search
srchhyb - 1-D minimization using a hybrid bisection-cubic search
sse - Sum squared error performance function
sumabs - Sum of absolute elements of matrix or matrices
sumsqr - Sum of squared elements of matrix or matrices
tansig - Hyperbolic tangent sigmoid transfer function
tapdelay - Shift neural network time series data for tap delay
timedelaynet - Time delay neural network
tonndata - Convert data to standard neural network cell array form
train - Train neural network
trainb - Batch training with weight and bias learning rules
trainbfg - BFGS quasi-Newton backpropagation
trainbr - Bayesian regulation backpropagation
trainbu - Batch unsupervised weight/bias training
trainc - Cyclical order weight/bias training
traincgb - Conjugate gradient backpropagation with Powell-Beale restarts
traincgf - Conjugate gradient backpropagation with Fletcher-Reeves updates
traincgp - Conjugate gradient backpropagation with Polak-Ribière updates
traingd - Gradient descent backpropagation
traingda - Gradient descent with adaptive learning rate backpropagation
traingdm - Gradient descent with momentum backpropagation
traingdx - Gradient descent with momentum and adaptive learning rate backpropagation
trainlm - Levenberg-Marquardt backpropagation
trainoss - One-step secant backpropagation
trainr - Random order incremental training with learning functions
trainrp - Resilient backpropagation
trainru - Unsupervised random order weight/bias training
trains - Sequential order incremental training with learning functions
trainscg - Scaled conjugate gradient backpropagation
tribas - Triangular basis transfer function
tritop - Triangle layer topology function
unconfigure - Unconfigure network inputs and outputs
vec2ind - Convert vectors to indices
view - View neural network
Patent US4718103 - Method and apparatus for on-line recognizing handwritten patterns

This invention is related to U.S. patent application Ser. No. 686,001, filed on Dec. 24, 1984, now U.S. Pat. No. 4,653,107, assigned to the same assignee.

This invention relates to a method and apparatus for ON-LINE recognizing handwritten patterns and, particularly, to a method and apparatus for ON-LINE recognizing handwritten patterns which are liable to variation and rotation.

A conventional handwritten pattern recognition system operates in such a way that each handwritten stroke is divided into numerous short vectors and it is tested whether part or all of the vectors match the majority of vectors of a candidate pattern in the dictionary, as described in an article entitled "Application of DP Matching Process to Character Recognition", Nikkei Electronics, pp. 198-199, Nov. 7, 1983. This method is capable of character recognition even if the dimension of the input vectors differs from that of the dictionary vectors, and is advantageous in recognizing correct-attitude handwritten patterns. However, because the comparison process is based on the direction of each vector, a rotated character, which changes the direction of every vector, becomes difficult to recognize; therefore the method could not readily be applied directly to pattern recognition where rotation of the pattern occurs.

There has been proposed another handwritten pattern recognition system operating in such a way that a handwritten pattern is decomposed into a plurality of segments, comparison is carried out between angular variations of adjoining segments and the counterpart of a candidate pattern stored in the dictionary to evaluate the difference between them, and the entered handwritten pattern is recognized depending on the degree of difference, as disclosed in U.S. patent application Ser. No. 686,001, now U.S. Pat. No. 4,653,107.
However, this method is entirely based on the comparison of the angular variations of adjoining segments between an input handwritten pattern and candidate patterns in the dictionary, and therefore it cannot deal with input handwritten patterns which are rich in variation. Namely, in FIG. 1a, an input handwritten pattern is approximated to segments a1, a2 and b, and the adjoining segment pairs a1-a2 and a2-b have angular variations Δθ1 and Δθ2, respectively. In the recognition process for the input handwritten pattern of FIG. 1a, which is pertinent to the dictionary pattern shown in FIG. 1b, the angular variation Δθ1 between the adjoining segments a1 and a2 of FIG. 1a is compared with the angular variation θ between the adjoining segments a' and b' in FIG. 1b, occasionally resulting in a judgement of inconsistency. The presence of the segment a2 in FIG. 1a is a result of a significant variation in the input handwritten pattern, and for a reasonable pattern recognition process, the angular variation across the three consecutive segments a1, a2 and b (i.e., Δθ1 + Δθ2) should be compared with the angular variation between the adjoining segments a' and b' of the dictionary pattern shown in FIG. 1b.

This invention is intended to overcome the foregoing prior art deficiency, and its prime object is to provide a method and apparatus for recognizing handwritten patterns operative to correctly recognize even rotated handwritten patterns and variations of patterns.

Generally, a pattern can be defined by a group of segments and their linkage relationship. It is not easy for hand-writing to draw a complete straight line and arc, and the ambiguity of hand-writing creates unnecessary segments, resulting generally in an increased number of segments.
The inventive handwritten pattern recognition system operates in such a way that a handwritten pattern is decomposed into a series of segments and the angular variation between adjoining segments, in consideration of the angular variation across three or more segments, is compared with the angular variation between adjoining segments of a candidate dictionary pattern; the ambiguity of hand-writing is thereby overcome and correct pattern recognition is made possible. In the angular variation comparing process, the starting segment of the dictionary pattern is shifted sequentially, which allows recognition of a rotated handwritten pattern.

FIGS. 1a and 1b are diagrams explaining a typical relationship between an input handwritten pattern and a pertinent dictionary pattern;
FIG. 2 is a block diagram of the handwritten pattern recognition system embodying the present invention;
FIG. 3 is a diagram explaining the operation of the system shown in FIG. 2;
FIGS. 4a and 4b are diagrams explaining the process of approximating an input handwritten pattern to polygonal lines;
FIGS. 5a and 5b are diagrams explaining the polygonal line approximation for an input handwritten pattern and a pertinent dictionary pattern, respectively;
FIG. 6 is a flowchart showing the handwritten pattern recognition method according to this invention; and
FIG. 7 is a diagram showing an example of the matching decision process for an input handwritten pattern and a dictionary pattern.

The present invention will now be described in detail with reference to the accompanying drawings.

In FIG. 2, showing in block diagram an embodiment of this invention, strokes of a pattern written by hand on a tablet 1 are sampled periodically by a sampling unit 2 and converted into plots on the 2-dimensional coordinates. The plots are stored temporarily in a file 3.
A polygonal line approximation unit 4 reads out the plots in the file 3 and eliminates intermediate plots residing on straight line segments, so that the strokes are approximated to groups of polygonal lines, which are stored in the file 3. A tracing unit 5 traces in a certain direction the sequence of the polygonal lines read out of the file 3 to produce a series of segments, and stores the result in the file 3. A matching unit 6 reads out the series of input segments and a series of reference segments from the file 3, and decides on the consistency or inconsistency between the two series of segments. If consistency has resulted, a display data generating unit 7 produces a clean pattern for the recognized input pattern and operates on a display unit 8 to display it.

Next, each of the above processing steps will be described in detail with reference to FIGS. 3, 4 and 5. FIG. 3 shows in flowchart the sequential operations implemented by the tablet 1 through the tracing unit 5 in FIG. 2. Initially, in step 31, handwritten strokes are entered through the tablet 1. The entered handwritten strokes are sampled at a speed of 100 points per second, for example, by the sampling unit 2 in FIG. 2, so that a series of plots is formed on the coordinates in step 32. In the subsequent step 33, the sampled plots undergo polygonal approximation by the polygonal approximation unit 4 in FIG. 2.

The polygonal approximation takes place, as shown in FIG. 4a, in such a way that a chord l connecting two points P2 and P7 is specified and then the area S27 enclosed by the chord l and the polygonal lines P2 through P7 is calculated. If the area S27 is smaller than the threshold value Sth(l), which is a function of the chord l, i.e., S27 < Sth(l), the curve connecting the two points P2 and P7 is approximated to a straight line.
Conversely, if the area S27 is larger than the threshold value Sth(l), i.e., S27 > Sth(l), the point P7 is replaced with point P6, so that the distance from the point P2 decreases, and the above process is repeated until eventually a polygonal approximation as shown in FIG. 4b is reached. The example shown in FIG. 3 will result in a polygonal approximation as shown in step 33 of FIG. 3.

In the subsequent step 34, the result of polygonal approximation is processed by the tracing unit 5, and the polygonal lines are converted into segments. Namely, the polygonal lines are traced starting with segment a up to the last segment n in a certain direction (e.g., the clockwise direction), and a set of segments as shown is produced.

The operation of the matching unit 6 for comparing the series of input segments obtained in step 34 with a series of reference segments in the dictionary is as follows. In the matching process, the angular variation between two adjoining segments is used as a characteristic value for comparison. For example, the initial four segments a1, a2, a3 and a4 shown in step 34 of FIG. 3 yield the angular variations shown in FIG. 5a. FIG. 5b shows a set of reference segments of the part of the dictionary pattern pertinent to the input segments. The matching process compares the input angular variations Δθ1, Δθ2 and Δθ3, yielded by the input segments derived from the polygonal lines which approximate the handwritten pattern, with the angular variations θ1 and θ2 for the segments derived from the corresponding portion of a candidate pattern in the dictionary.

In FIG. 6, showing the above matching process in flowchart form, step 11 sets the variables i and j to "1", sets the sum of angular variations, K, to "0", sets the degree of partial difference, D, to "0", and resets the flag which indicates the comparison result to be within a certain range. The subsequent step 12 adds the input angular variation Δθi to the summed angular variation K.
Since the summed angular variation K and variable i have been initialized to "0" and "1", respectively, in step 11, the operation of the first cycle yields K = Δθ1. The subsequent step 13 tests whether the summed input angular variation K is within the range of the reference angular variation θj plus/minus α, where α is preset, for example, to a quarter of the reference angular variation θj. If the test result is affirmative, i.e., K is within θj ± α, the subsequent step 14 sets the flag indicative of "in-range" and calculates the partial difference D. The partial difference D is calculated by adding the absolute value of the difference between the reference angular variation θj and the summed input angular variation K to the previous partial difference D, and it provides an estimation value indicative of the degree of variation of a handwritten character.

Conversely, if step 13 provides a negative test result, i.e., K is outside the range θj ± α, the subsequent step 15 tests whether the flag was set in the operation of the previous cycle. If the flag is found reset, step 17 updates the partial difference D by adding the summed angular variation K to the partial difference D of the previous cycle. Namely, adjustment is made such that the more unnecessary input segments there are, i.e., the more adding operations for angular variations, the greater the partial difference will be. If, on the other hand, the flag is found set in step 15, the summed angular variation K is reset to zero, the variables i and j are decremented and incremented by one, respectively, and the flag is reset. The variable i is incremented by one in step 18, and a new angular variation θj of the dictionary pattern is retrieved through the above operations and compared with the last input angular variation Δθi outside the range.
Step 19 tests whether checking for at least one of the input angular variations and the dictionary angular variations is completed, and, if it is affirmative, step 20 divides the partial difference D by the number of dictionary segments, N, so as to evaluate the final difference.

Next, the matching process for the case shown in FIGS. 5a and 5b will be described with reference to the following Table 1.

TABLE 1
Item  Processing                                          Process step
1     K (= Δθ1) < (θ1 - α)                                12, 13
2     Reset flag; D = D + |K|                             15, 17
3     (θ1 - α) < K (= Δθ1 + Δθ2) < (θ1 + α)               12, 13
4     D = D + |θ1 - K|; set flag                          14
5     K (= Δθ1 + Δθ2 + Δθ3) < (θ1 - α);                   13, 15, 18
      set K to "0", reset flag; set i to "3", set j to "2"
6     (θ2 - α) < K (= Δθ3) < (θ2 + α)                     12, 13
7     D = D + |θ2 - K|; set flag                          14
:     :                                                   :

At item 1, the reference angular variation θ1 and input angular variation Δθ1 are taken out, and it is found that the summed input angular variation K, which is Δθ1 at this time, is smaller than θ1 - α (step 13). Next, at item 2, the flag is left reset so as to indicate that the input angular variation Δθ1 is outside the specified range, and the partial difference D is made equal to K (i.e., Δθ1). At the next item 3, the summed input angular variation K is made equal to Δθ1 + Δθ2, and it is found that the summed input angular variation K is within the specified range. At item 4, the flag is set and at the same time |θ1 - K| is added to the partial difference D so as to produce a new partial difference D. At the next item 5, the summed input angular variation K is made equal to Δθ1 + Δθ2 + Δθ3, and it is found that this value is out of the specified range, with the flag being reset and the summed input angular variation K being reset to zero. At the next item 6, a new reference angular variation θ2 is taken, a new summed input angular variation K is made equal to Δθ3, and it is found that the value K is within the specified range.
At the next item 7, the flag is set and the partial difference D is updated. In the example shown in FIGS. 5a and 5b, the inputs Δθ1 and Δθ2 are found to correspond to the reference angular variation θ1, so the matching process is evidently possible even if an input pattern and a dictionary pattern have different numbers of segments. This is a clear distinction of the present invention from U.S. patent application Ser. No. 686,001. Generally, a handwritten pattern may be entered in a rotated attitude, and it is therefore effective to repeat steps 11 through 20 shown in FIG. 6 while shifting the starting angular variation of the dictionary pattern one position at a time. FIG. 7 shows an example of correspondence between segments of an input pattern and segments of a dictionary pattern. The input segment varies from segment a to b to c and so on, with the direction of angular variation (clockwise or counterclockwise) indicated by the arrows R1, R2, R3, R4 and R5. Similarly, the dictionary segment varies from segment a' to b' to c' and so on, with the direction of angular variation indicated by the arrows R1', R2', R3' and so on. The dictionary segment sets L1 and L2 are identical; they are provided so that the recognition process can continue through the last block of L2 even if the input pattern begins to yield a matching result at an intermediate block of L1. The following describes the process in more detail. In FIG. 7, block B1 of the input pattern and block B1' of the dictionary candidate pattern, each representing an angular variation, have their angular variations in the same direction, as shown by the arrows R1 and R1', and therefore the matching process can be commenced with the pairing of blocks B1 and B1'.
Once the matching process for blocks B1 and B1' has been commenced, however, the subsequent blocks B2 and B2' reveal angular variations in opposite directions, as shown by the arrows R2 and R2', and it is found that blocks B2 and B2' do not match each other. Accordingly, it is impossible for the input segments a, b, c and so on to reach a matching result when the matching process is commenced from block B1', i.e., segment a', of the dictionary candidate pattern. The symbol "x" in the figure indicates a failure of matching between segments of the two patterns. The next trial of the matching process for block B1 of the input pattern commences with block B2' of the dictionary candidate pattern. Blocks B1 and B2' have their angular variations in the same direction, as indicated by the arrows R1 and R2', and a matching result is reached. Comparing the subsequent blocks B2 and B3' reveals that their angular variations are in the same direction, and these segments match each other. In the same way, the pairings of blocks B3 and B4', B3 and B5', B4 and B6', B5 and B7', B5 and B8', and B5 and B1' (in dictionary segment set L2), for all corresponding combinations of the input segments and dictionary segments, reach matching results. In order for an input pattern to be identified as a candidate pattern in the dictionary, the number of blocks counted from the block of the dictionary segment set at which the recognition process commenced (block B2' of dictionary segment set L1 in the above example) up to the ending block (block B1' of dictionary segment set L2 in the above example) must equal the number of blocks which constitute the candidate pattern in the dictionary.
In the above example, the dictionary candidate pattern consists of eight blocks B1'-B8', equal in number to the blocks from the commencement of recognition to its end, which means that the character may have been recognized correctly. However, there may exist more than one candidate pattern equal in number of blocks to the input pattern, and therefore an input pattern cannot be recognized as a candidate pattern merely on the basis of the equality of the number of blocks. In such cases, the matching process is conducted for all candidate patterns having the same number of blocks as the input pattern, and their ultimate differences are evaluated. Then, among them, the candidate pattern with the minimal ultimate difference is selected as the result of recognition for the input pattern. If the numbers of blocks are not equal, the input character cannot be recognized accurately even if the matching process for each block pairing provides a successful result. Although the foregoing embodiment performs character recognition based merely on the angular variations of the input segment set and dictionary segment set, the present invention is not confined to this scheme; the recognition process may further deal with an additional parameter β, indicative of the lengths of adjacent segments of a character, so as to further clarify the feature of each character. In this case, the polygonal lines which approximate an input pattern provide not only the angular variations of adjacent segments but also the ratios of the lengths of adjacent segments. For example, instead of the summed angular variation K shown in FIG. 6, an estimation function of K × β, where β is the above-mentioned parameter, is preferably employed.
In addition, it will be appreciated that the matching process for the input and candidate patterns may be preceded by the calculation of the number of blocks which indicate the angular variation directions of an input pattern as shown by B1-B5 in FIG. 7 and the selection of candidate patterns having the same number of blocks as calculated among dictionary patterns. As described above, the inventive handwritten pattern recognition method and apparatus are capable of correct pattern recognition even for deformed or rotated handwritten patterns.
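The block-count pre-selection described above can be sketched by grouping consecutive angular variations that share one direction (sign) into blocks and filtering candidates by block count before the full matching pass. This is a minimal illustration; the function names and the sign convention are my assumptions, not from the patent:

```python
def count_direction_blocks(dthetas):
    """Count runs of consecutive angular variations sharing one
    direction (positive = counterclockwise, negative = clockwise).
    Each run corresponds to one block such as B1..B5 in FIG. 7."""
    if not dthetas:
        return 0
    signs = [1 if d >= 0 else -1 for d in dthetas]
    blocks = 1
    for prev, cur in zip(signs, signs[1:]):
        if prev != cur:
            blocks += 1
    return blocks

def preselect(input_dthetas, dictionary):
    # Keep only dictionary patterns whose block count matches the
    # input's, so the costly matching loop runs on fewer candidates.
    n = count_direction_blocks(input_dthetas)
    return [d for d in dictionary if count_direction_blocks(d) == n]
```

For example, `[0.1, 0.2, -0.3, 0.4]` has three direction blocks (up, down, up), so only candidates with three blocks would be retained.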
Essington Calculus Tutors

...If you need help with mathematics, physics, or engineering, I'd be glad to help out. With dedication, every student succeeds, so don't despair! Learning new disciplines keeps me very aware of the struggles all students face.
14 Subjects: including calculus, physics, geometry, ASVAB

...I teach 8th grade math at a school in North Philly. I love math and problem solving, but what I love even more is helping others reach that "a-ha!" moment when all the pieces fall into place! I bring my practical experience with math from studying and working in civil engineering with me as I help tie the concepts that you are learning in your math classes to everyday life.
21 Subjects: including calculus, reading, physics, geometry

...All students are different, so I try to determine their needs and most effective learning styles at the beginning. Although I can adjust to any curriculum and will help students meet the requirements of their teachers, I use my own well-tested methods to give students different perspectives that...
32 Subjects: including calculus, English, geometry, biology

...The math section of the SAT tests students' math skills learned up to grade 12. I have a college degree in mathematics. I have successfully passed the GRE (to get into graduate school) as well as the Praxis II content knowledge test for mathematics.
16 Subjects: including calculus, English, physics, geometry

...In particular, I worked with a school administrator who was required to pass the Praxis exam. Even though she was deathly afraid of math, my approach gave her the confidence to take and pass the math portion of the exam. Praxis math is for the most part below or at high school level, with an emphasis on practicality.
23 Subjects: including calculus, reading, geometry, statistics
Will a Payroll Tax Cut Stimulate the Economy? Will a payroll tax cut stimulate the economy? I am going to answer this in the context of Gauti B. Eggertsson's paper "What Fiscal Policy Is Effective at Zero Interest Rates?", where this question is addressed directly (the analysis begins on page 13). The model is New Keynesian. The answer, in this model anyway, is that in normal times a payroll tax cut would be stimulative, but at the zero bound it's not so clear. Let me see if I can explain why. When interest rates are positive, the framework is essentially a standard AS-AD model: A payroll tax cut increases labor supply and shifts out the AS curve. The shift in the AS curve results in lower inflation and higher output/employment. One thing that is left out of this model to simplify the analysis and keep it tractable is the demand-side effect of such policies. That is, a tax cut would also increase AD. If we add this effect, the graph then looks like: Output goes up even more, but whether inflation goes up or down depends upon which shift is larger, the shift in the AS or the shift in the AD (based upon the evidence on how labor supply responds to changes in taxes, I would expect that the shift in the AD would be larger and come first, but ultimately that is an empirical matter). When the economy is at the zero bound for the nominal interest rate, things change. In particular, the AD curve slopes upward. This will be explained intuitively in a moment, but mechanically the effect of a positively sloped AD curve is as follows: Thus, when we consider only the supply-side effects of a tax cut, it has a negative impact on output and employment. Why is this? Figure 5 clarifies the intuition for why labor tax cuts become contractionary at zero interest rates while being expansionary under normal circumstances. The key is aggregate demand. At positive interest rates the AD curve is downward-sloping in inflation.
The reason is that as inflation decreases, the central bank will cut the nominal interest rate more than 1-for-1 with inflation..., which is the Taylor principle... Similarly, if inflation increases, the central bank will increase the nominal interest rate more than 1-for-1 with inflation, thus causing an output contraction along with the higher inflation. As a consequence, the real interest rate will decrease with deflationary pressures, expanding output, because any reduction in inflation will be met by a more than proportional change in the nominal interest rate. This, however, is no longer the case at zero interest rates, because interest rates can no longer be cut. This means that the central bank will no longer be able to offset deflationary pressures with aggressive interest rate cuts, shifting the AD curve from downward-sloping to upward-sloping in (YL, πL) space... The reason is that lower inflation will now mean a higher real rate, because the reduction in inflation can no longer be offset by interest rate cuts. Similarly, an increase in inflation is now expansionary because the increase in inflation will no longer be offset by an increase in the nominal interest rate; hence, higher inflation implies lower real interest rates and thus higher output. Once again, however, demand-side effects are missing. Tacking those on gives: Thus, the overall effect on employment depends upon the net effect of the AD and AS shifts. If the AD shift dominates, as I suspect it would, this policy will still have positive effects on output and employment. But the size of the effect depends upon the strength of the demand-side shift, and how strong the shift would be is an open question, particularly given the degree of household balance-sheet rebuilding we are seeing, which causes the tax cuts to be saved rather than spent. [The timing matters as well, with the AD effects generally coming first, so in the SR the demand-side effects should dominate.
If so, that is a reason to be a bit more supportive of these policies.] Another way to think about this is the following. Supply is not the problem right now, it's lack of demand, and a policy that encourages more supply and threatens deflation is not helpful except to the extent that it increases aggregate demand in the process. Other types of policies can avoid this problem (see, for example, the sales tax cut discussed on page 20 or the discussion of fiscal policy multipliers on page 17), but they may not have the same political feasibility as a tax cut for labor, which itself doesn't seem all that likely given the degree of opposition it will likely hit in Congress (the sales tax cut would be difficult to implement given that sales taxes are levied at the state level, and there's no chance that government spending increases will pass Congress right now; on the politics of a payroll tax cut, see the end of this post). [Note: The demand-side effects were left out of the paper to keep the mathematics tractable, and it may be that simply tacking on the demand-side effects as I've done (the red lines) isn't quite correct. I think it's okay, but if anyone can speak to this, that would be great. Also, the policy analyzed in the paper is best interpreted as a payroll tax cut on the worker side. I don't think it matters if the cut is on the employer side, and I hope the administration doesn't pursue this anyway since the employer-side tax cut may not pass through to labor fully, or much at all in the very short run, but, again, if that matters and someone can speak to this point, please do.] Posted by Mark Thoma on Friday, September 3, 2010 at 12:33 PM in Academic Papers, Economics, Fiscal Policy, Taxes, Unemployment | Permalink
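The sign flip behind the upward-sloping AD curve can be illustrated numerically. The sketch below uses a stylized Taylor rule with hypothetical parameters (a neutral rate of 2 percent and an extra inflation response of 0.5, so the total response exceeds 1-for-1); none of this comes from Eggertsson's paper, it just shows the real-rate mechanism described above:

```python
def real_rate(pi, r_star=2.0, phi=0.5, zero_bound=False):
    """Real interest rate given inflation pi (percent).
    Normal times: nominal rate follows a Taylor rule i = r* + pi + phi*pi,
    responding more than 1-for-1 to inflation (the Taylor principle).
    At the zero bound: the nominal rate is stuck at zero."""
    nominal = 0.0 if zero_bound else max(0.0, r_star + pi + phi * pi)
    return nominal - pi

# Normal times: a 1-point fall in inflation lowers the real rate (stabilizing).
print(real_rate(2.0) - real_rate(1.0))   # 0.5: real rate moves with inflation

# At the zero bound: the same fall in inflation *raises* the real rate
# (destabilizing), which is why the AD curve flips to upward-sloping.
print(real_rate(2.0, zero_bound=True) - real_rate(1.0, zero_bound=True))   # -1.0
```

Lower inflation with a fixed nominal rate mechanically means a higher real rate, reversing the usual stabilizing response.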
The Purplemath Forums

How do I solve log[3](log[x](log[4](16))) = -1? Also, if log 5 = a and log 36 = b, determine an expression for log(6/25) in terms of a and b.

Convert the outer log to the corresponding exponential equation. Simplify to get:
. . . . .log[x](log[4](16)) = 1/3
Use The Relationship to evaluate log[4](16) = log[4](4^2). See where that leads...
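Following the hint through (my own sketch, not the forum's posted solution): log[4](16) = 2, so log[x](2) = 1/3, which gives x^(1/3) = 2 and hence x = 8. A quick numeric check:

```python
import math

def log(base, x):
    # change-of-base formula: log base b of x
    return math.log(x) / math.log(base)

inner = log(4, 16)              # = 2, since 4**2 == 16
x = 2 ** 3                      # from x**(1/3) == 2, so x == 8
print(log(3, log(x, inner)))    # approximately -1, as required
```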
From: nikl@mathematik.tu-muenchen.de (Gerhard Niklasch)
Newsgroups: sci.math
Subject: Re: Euclidean valuations on number rings
Date: 28 Jun 1997 13:06:37 GMT

In article <5p21l4$hjv$3@news.rain.org>, bowe@rain.org (Nick Halloway) writes:
|> Suppose F is a finite extension of the rationals, D its domain of
|> algebraic integers. If D is a principal ideal domain, does it have
|> a Euclidean valuation? absolute value of the norm, perhaps?

For a comprehensive survey of what's known about Euclidean number rings, see Franz Lemmermeyer's 1995 paper in Expositiones Math. You can find it online starting from his WWW homepage by following the Preprints link; last time I looked it was number 7 on the preprints page. It has an extensive bibliography.

Enjoy, Gerhard
--
* Gerhard Niklasch *** Some or all of the con-
* http://hasse.mathematik.tu-muenchen.de/~nikl/ ******* tents of the above mes-
* sage may, in certain countries, be legally considered unsuitable for consump-
* tion by children under the age of 18. Me transmitte sursum, Caledoni... :^/
==============================================================================
Date: Mon, 30 Jun 1997 02:34:26 -0600
From: rjc@maths.ex.ac.uk
Subject: Re: Euclidean valuations on number rings
Newsgroups: sci.math

In article <5p21l4$hjv$3@news.rain.org>, bowe@rain.org (Nick Halloway) wrote:
>
> Suppose F is a finite extension of the rationals, D its domain of
> algebraic integers. If D is a principal ideal domain, does it have
> a Euclidean valuation? absolute value of the norm, perhaps?
>
> Z[i] and Z[cube root of 1] have Euclidean valuations, which you can
> see geometrically since an ideal forms a regular lattice on the
> complex plane.
>
> How about if F is a quadratic extension of the rationals and D is a PID?
>
Not in general. It's quite easy to prove that the imaginary quadratic fields F for which D is Euclidean are Q(sqrt(-1)), Q(sqrt(-2)), Q(sqrt(-3)), Q(sqrt(-7)) and Q(sqrt(-11)).
Hence the integers of Q(sqrt(-19)) form a PID which is not Euclidean. All the real quadratic fields with D Euclidean with respect to the absolute value of the norm have been known for some time, but until recently it was an open problem whether such a D existed which was Euclidean but not with respect to the norm. About 5 years ago Clark proved that Q(sqrt(69)) is Euclidean but not norm-Euclidean.

Robin Chapman                           "256 256 256.
Department of Mathematics               O hel, ol rite; 256; whot's
University of Exeter, EX4 4QE, UK       12 tyms 256? Bugird if I no.
rjc@maths.exeter.ac.uk                  2 dificult 2 work out."
http://www.maths.ex.ac.uk/~rjc/rjc.html Iain M. Banks - Feersum Endjinn
==============================================================================
From: nikl@mathematik.tu-muenchen.de (Gerhard Niklasch)
Newsgroups: sci.math
Subject: Re: Euclidean valuations on number rings
Date: 30 Jun 1997 18:15:10 GMT

In article <867655710.20460@dejanews.com>, rjc@maths.ex.ac.uk writes:
|> [...] All the real quadratic fields with D
|> Euclidean with respect to the absolute value of the norm have been known
|> for some time, but until recently it was an open problem as to whether
|> such a D existed which was Euclidean but not with respect to the norm.
|> About 5 years ago Clark proved that Q(sqrt(69)) was Euclidean but not
|> norm-Euclidean.

More precisely, Clark gave a completely explicit example of a modified norm with respect to which the ring of integers of that quadratic field is Euclidean (it was well known that it isn't Euclidean wrt. the absolute norm). Apparently, Q(sqrt(69)) is the only quadratic field for which this particular trick works. [David A. Clark, A quadratic field which is euclidean but not norm-euclidean. Manuscripta Mathematica 83 (1994), 327--330.]
[Franz Lemmermeyer and Yours Truly independently verified the computations, see loc. cit., pp. 443--446. There are by now several more examples of this kind in higher degrees.] Non-constructively, and assuming certain Generalized Riemann Hypotheses, P. Weinberger had shown in 1973 that whenever the ring of integers of a number field is a PID and possesses infinitely many units, it is Euclidean for _some_ valuation (however, the proof provides no way to make this valuation explicit). Basically, the "best" valuation will map 0 to 0, all the units to 1, all the prime elements for which it makes sense to 2 (those for which all the prime residue classes contain units), and all the remaining prime elements to 3; its values for composites can then be determined inductively. The crux lies in showing that there are enough primes at level 2 to enable us to lob all the others into the third bin. This is non-explicit insofar as, given a level-3 prime as a denominator and some residue class of numerators which contains no unit, we have no way of knowing how far out we'll have to look for a level-2 prime in this residue class: the proof only tells us that we'd find one if we kept looking long enough. [Peter J. Weinberger, On Euclidean rings of algebraic integers. In: H.G. Diamond (ed.), Analytic Number Theory. (Proc. 24th Sympos. Pure Math. of the AMS, St. Louis (MO) 1972.) AMS, Providence (RI) 1973, 321--332.]

Enjoy, Gerhard
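As a concrete illustration of the norm-Euclidean property mentioned at the start of the thread, division with remainder in Z[i] can be sketched by rounding the exact complex quotient to the nearest Gaussian integer (an illustrative sketch of my own; the function names are not from the thread):

```python
def gaussian_divmod(a, b):
    """Division with remainder in Z[i]: round the exact complex quotient
    to the nearest Gaussian integer. The rounding error is at most
    sqrt(2)/2 in absolute value, so N(r) <= N(b)/2 < N(b), which is
    exactly the norm-Euclidean property of Z[i]."""
    q_exact = complex(a) / complex(b)
    q = complex(round(q_exact.real), round(q_exact.imag))
    return q, a - q * b

def norm(z):
    # field norm N(a + bi) = a^2 + b^2
    return z.real * z.real + z.imag * z.imag

q, r = gaussian_divmod(complex(7, 3), complex(2, 1))
print(q, r)   # remainder norm is strictly smaller than norm(2 + 1j)
```

Geometrically this is the lattice picture from the quoted post: every point of the complex plane lies within distance sqrt(2)/2 of some lattice point of Z[i].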
Clustering Large Data Sets with Mixed Numeric and Categorical Values Results 1 - 10 of 22 , 1998 "... The k-means algorithm is well known for its efficiency in clustering large data sets. However, working only on numeric values prohibits it from being used to cluster real world data containing categorical values. In this paper we present two algorithms which extend the k-means algorithm to categoric ..." Cited by 156 (2 self) Add to MetaCart The k-means algorithm is well known for its efficiency in clustering large data sets. However, working only on numeric values prohibits it from being used to cluster real world data containing categorical values. In this paper we present two algorithms which extend the k-means algorithm to categorical domains and domains with mixed numeric and categorical values. The k-modes algorithm uses a simple matching dissimilarity measure to deal with categorical objects, replaces the means of clusters with modes, and uses a frequency-based method to update modes in the clustering process to minimise the clustering cost function. With these extensions the k-modes algorithm enables the clustering of categorical data in a fashion similar to k-means. The k-prototypes algorithm, through the definition of a combined dissimilarity measure, further integrates the k-means and k-modes algorithms to allow for clustering objects described by mixed numeric and categorical attributes. We use the well known soybean disease and credit approval data sets to demonstrate the clustering performance of the two algorithms. Our experiments on two real world data sets with half a million objects each show that the two algorithms are efficient when clustering large data sets, which is critical to data mining applications. - In Research Issues on Data Mining and Knowledge Discovery , 1997 "... Partitioning a large set of objects into homogeneous clusters is a fundamental operation in data mining. 
The k-means algorithm is best suited for implementing this operation because of its efficiency in clustering large data sets. However, working only on numeric values limits its use in data mining ..." Cited by 82 (2 self) Add to MetaCart Partitioning a large set of objects into homogeneous clusters is a fundamental operation in data mining. The k-means algorithm is best suited for implementing this operation because of its efficiency in clustering large data sets. However, working only on numeric values limits its use in data mining because data sets in data mining often contain categorical values. In this paper we present an algorithm, called k-modes, to extend the k-means paradigm to categorical domains. We introduce new dissimilarity measures to deal with categorical objects, replace means of clusters with modes, and use a frequency based method to update modes in the clustering process to minimise the clustering cost function. Tested with the well known soybean disease data set the algorithm has demonstrated a very good classification performance. Experiments on a very large health insurance data set consisting of half a million records and 34 categorical attributes show that the algorithm is scalable in terms of ... - IEEE Transactions on Knowledge and Data Engineering , 2004 "... Abstract—In high-dimensional data, clusters can exist in subspaces that hide themselves from traditional clustering methods. A number of algorithms have been proposed to identify such projected clusters, but most of them rely on some user parameters to guide the clustering process. The clustering ac ..." Cited by 23 (3 self) Add to MetaCart Abstract—In high-dimensional data, clusters can exist in subspaces that hide themselves from traditional clustering methods. A number of algorithms have been proposed to identify such projected clusters, but most of them rely on some user parameters to guide the clustering process. 
The clustering accuracy can be seriously degraded if incorrect values are used. Unfortunately, in real situations, it is rarely possible for users to supply the parameter values accurately, which causes practical difficulties in applying these algorithms to real data. In this paper, we analyze the major challenges of projected clustering and suggest why these algorithms need to depend heavily on user parameters. Based on the analysis, we propose a new algorithm that exploits the clustering status to adjust the internal thresholds dynamically without the assistance of user parameters. According to the results of extensive experiments on real and synthetic data, the new method has excellent accuracy and usability. It outperformed the other algorithms even when correct parameter values were artificially supplied to them. The encouraging results suggest that projected clustering can be a practical tool for various kinds of real applications. Index Terms—Data mining, mining methods and algorithms, clustering, bioinformatics. 1 - In Pacific Rim International Conference on Artificial Intelligence , 2000 "... General purpose and highly applicable clustering methods are usually required during the early stages of knowledge discovery exercises. k-Means has been adopted as the prototype of iterative model-based clustering because of its speed, simplicity and capability to work within the format of very larg ..." Cited by 16 (2 self) Add to MetaCart General purpose and highly applicable clustering methods are usually required during the early stages of knowledge discovery exercises. k-Means has been adopted as the prototype of iterative model-based clustering because of its speed, simplicity and capability to work within the format of very large databases. However, k-Means has several disadvantages derived from its statistical simplicity. We propose an algorithm that remains very efficient, generally applicable, multi-dimensional but is more robust to noise and outliers. 
We achieve this by using the discrete median rather than the mean as the estimator of the center of a cluster. Comparison with k-Means, Expectation Maximization and Gibbs sampling demonstrates the advantages of our algorithm. , 1997 "... This paper looks at clustering using tools from graph theory. It first triangulates the data, then partitions the edges of the resulting graph into inter- and intra-cluster edges. The technique is unaffected by the actual shape of the clusters, thus allowing a far more general version of the cluster ..." Cited by 14 (0 self) Add to MetaCart This paper looks at clustering using tools from graph theory. It first triangulates the data, then partitions the edges of the resulting graph into inter- and intra-cluster edges. The technique is unaffected by the actual shape of the clusters, thus allowing a far more general version of the clustering problem to be solved. Section 2 of the paper is a general introduction to clustering, which includes a brief description of the commonly used k-means technique. Following this is a discussion of the problems which arise in the k-means (and related) methods and why there is a need for graph-based methods. Sections 4 and 6 explain the proposed new method, and give examples of its success. Section 5 discusses a few existing graph-based methods and why they can be improved upon. The test programs, which provide the results discussed in this paper, are currently written for two dimensional data sets, but Section 7 explains how the same principles can be extended to higher dimensional problems. - Int. J. Appl. Math. Comput. Sci , 2004 "... Most of the earlier work on clustering has mainly been focused on numerical data whose inherent geometric properties can be exploited to naturally define distance functions between data points. Recently, the problem of clustering categorical data has started drawing interest. However, the computatio ..." 
Cited by 14 (0 self) Add to MetaCart Most of the earlier work on clustering has mainly been focused on numerical data whose inherent geometric properties can be exploited to naturally define distance functions between data points. Recently, the problem of clustering categorical data has started drawing interest. However, the computational cost makes most of the previous algorithms unacceptable for clustering very large databases. The k-means algorithm is well known for its efficiency in this respect. At the same time, working only on numerical data prohibits them from being used for clustering categorical data. The main contribution of this paper is to show how to apply the notion of “cluster centers ” on a dataset of categorical objects and how to use this notion for formulating the clustering problem of categorical objects as a partitioning problem. Finally, a k-means-like algorithm for clustering categorical data is introduced. The clustering performance of the algorithm is demonstrated with two well-known data sets, namely, soybean disease and nursery databases. - In Proc. of the Canadian AI Conference , 2000 "... Abstract. Today’s case based reasoning applications face several challenges. In a typical application, the case bases grow at a very fast rate and their contents become increasingly diverse, making it necessary to partition a large case base into several smaller ones. Their users are overloaded with ..." Cited by 7 (0 self) Add to MetaCart Abstract. Today’s case based reasoning applications face several challenges. In a typical application, the case bases grow at a very fast rate and their contents become increasingly diverse, making it necessary to partition a large case base into several smaller ones. Their users are overloaded with vast amounts of information during the retrieval process. These problems call for the development of effective case-base maintenance methods. 
As a result, many researchers have been driven to design sophisticated case-base structures or maintenance methods. In contrast, we hold a different point of view: we maintain that the structure of a case base should be kept as simple as possible, and that the maintenance method should be as transparent as possible. In this paper we propose a case-base maintenance method that avoids building sophisticated structures around a case base or perform complex operations on a case base. Our method partitions cases into clusters where the cases in the same cluster are more similar than cases in other clusters. In addition to the content of textual cases, the clustering method we propose can also be based on values of attributes that may be attached to the cases. Clusters can be converted to new case bases, which are smaller in size and when stored distributedly, can entail simpler maintenance operations. The contents of the new case bases are more focused and easier to retrieve and update. To support retrieval in this distributed case-base network, we present a method that is based on a decision forest built with the attributes that are obtained through an innovative modification of the ID3 algorithm. 1 - In Proc. of the 3rd ACM SIGKDD Workshop on Data Mining in Bioinformatics (BIOKDD’03 , 2003 "... Projected clustering has become a hot research topic due to its ability to cluster high-dimensional data. However, most existing projected clustering algorithms depend on some critical user parameters in determining the relevant attributes of each cluster. In case wrong parameter values are used, th ..." Cited by 5 (0 self) Add to MetaCart Projected clustering has become a hot research topic due to its ability to cluster high-dimensional data. However, most existing projected clustering algorithms depend on some critical user parameters in determining the relevant attributes of each cluster. 
In case wrong parameter values are used, the clustering performance will be seriously degraded. Unfortunately, correct parameter values are rarely known in real datasets. In this paper, we propose a projected clustering algorithm that does not depend on user inputs in determining relevant attributes. It responds to the clustering status and adjusts the internal thresholds dynamically. From experimental results, our algorithm shows a much higher usability than the other projected clustering algorithms used in our comparison study. It also works well with a gene expression dataset for studying lymphoma. The high usability of the algorithm and the encouraging results suggest that projected clustering can be a practical tool for analyzing gene expression profiles.

- Journal of Biomedical Informatics (JBI). Cited by 3. In microarray gene expression data, clusters may hide in certain subspaces. For example, a set of co-regulated genes may have similar expression patterns in only a subset of the samples in which certain regulating factors are present. Their expression patterns could be dissimilar when measured in the full input space. Traditional clustering algorithms that make use of such similarity measurements may fail to identify the clusters. In recent years a number of algorithms have been proposed to identify this kind of projected clusters, but many of them rely on some critical parameters whose proper values are hard for users to determine. In this paper a new algorithm that dynamically adjusts its internal thresholds is proposed.
It has a low dependency on user parameters while allowing users to input some domain knowledge should it be available. Experimental results show that the algorithm is capable of identifying some interesting projected clusters.

Cited by 3. Abstract — In many applications the objects to cluster are described by quantitative as well as qualitative features. A variety of algorithms has been proposed for unsupervised classification if fuzzy partitions and descriptive cluster prototypes are desired. However, most of these methods are designed for data sets with variables measured in the same scale type (only categorical, or only metric). We propose a new fuzzy clustering approach based on a probabilistic distance measure. Thus a major drawback of present methods can be avoided, which lies in the vulnerability to favor one type of attributes.
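The "cluster centers for categorical objects" idea in the first abstract above can be sketched as a small k-modes-style loop: centers are column-wise modes rather than means, and dissimilarity is simple matching (Hamming) rather than Euclidean distance. This is only an illustrative sketch, not any paper's reference implementation; all function names are mine.

```python
import random
from collections import Counter

def hamming(a, b):
    # Simple-matching dissimilarity between two categorical records.
    return sum(x != y for x, y in zip(a, b))

def mode_of(records):
    # Column-wise mode: the "cluster center" for categorical data.
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*records))

def k_modes(data, k, iters=20, seed=0):
    # Alternate assignment and center-update steps, as in k-means,
    # but with modes as centers and Hamming distance as the metric.
    rng = random.Random(seed)
    centers = rng.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for rec in data:
            i = min(range(k), key=lambda j: hamming(rec, centers[j]))
            clusters[i].append(rec)
        # Keep the old center if a cluster happens to be empty.
        new_centers = [mode_of(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters

data = [("a", "x"), ("a", "x"), ("a", "y"),
        ("b", "z"), ("b", "z"), ("b", "w")]
centers, clusters = k_modes(data, 2)
```

Like k-means, this only finds a local optimum and is sensitive to the initial sample of centers.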
ALEX Lesson Plans

Subject: Mathematics (6 - 7). Title: How Big Should it Be? Description: This lesson will allow students to become familiar with the concept of equivalent ratios and similar objects. Through an open investigation students will develop methods to find equivalent ratios. This is a lesson to be used as part of a unit with Painter Problems and How Far Can You Leap found in ALEX. This is a College- and Career-Ready Standards showcase lesson plan.

Subject: Mathematics (7), or Technology Education (6 - 8). Title: Shaping the Future. Description: Students will utilize newfound knowledge of architectural design and engineering to apply math and science skills in building a structure. By using interactive websites, they will create scale drawing plans and design and engineer a card structure that should withstand the weight of a regular textbook.

Subject: Mathematics (7). Title: Search the Perimeter and Secure the Area. Description: This lesson will not only help students compare and contrast the concepts of area and perimeter, but it will also move toward the concept that area is maximized while minimizing perimeter with a square.

Subject: Mathematics (7). Title: Using Scale Factors: Drawing Your Body to Scale. Description: Teachers will use this lesson to review and apply skills involving scale factors. Teachers will use the Illuminations website to review scale factor. Then students will be given instructions to draw their bodies to scale.

Subject: Mathematics (7), or Technology Education (6 - 8). Title: 2020 - Year of the Cool School! Description: Hey! Are you tired of your old lame school? Are the halls too narrow? Do you like walking outside to get to the cafeteria? Are the classrooms too small? Is the design too simple - just too wacky for a middle school? Well, now is the biggest break of your lives - You and your peers will design the school of the future and make history!
Your biggest challenge is to create a section of a dream school that oozes and awes your peers. Each team will design a section (cafeteria, library, performing arts room, or recreational room) of the school, research different floor coverings for the section, research different loans to cover the cost of the floor coverings, and design a model of each section. Grab your math knowledge and as many construction tools as you can because you are about to construct a building for the world to remember. But wait, THE CATCH - your team will be competing against other groups. So put your thinking caps on and let the building begin!

Subject: Agriculture, Food, and Natural Resources (9 - 12), or Mathematics (7 - 12). Title: Wildlife Math - Enhancing mathematics in the career/technical classroom and providing relevance in the mathematics classroom. Description: This integrated lesson is the result of collaboration between Chip Blanton, a wildlife management teacher, and Greg Pendergrass, a math teacher (Ft. Payne High School). Learning to manage wildlife requires an understanding of planting food plots, creating areas for cover, and providing water. Students learn to use the math skills required to calculate the number of acres and the cost of fertilizer and seed needed in food plots and sanctuaries. Given a certain amount of land, students decide what part of that land they are to plant and maintain for deer, quail, turkey, rabbit and forestation.

Subject: Arts Education (7 - 12), or Mathematics (7). Title: Cartoons and Scale Drawings. Description: This lesson will lead students to understand the concept of scale drawings. The students will artistically and mathematically enlarge a cartoon. This lesson plan was created as a result of the Girls Engaged in Math and Science University, GEMS-U Project.

Subject: Mathematics (7). Title: Circles All Around Us. Description: In this lesson, students will estimate several dimensions of circles in everyday objects.
Students will use the metric and customary systems for their estimations. This lesson plan was created by exemplary Alabama Math Teachers through the AMSTI project.

Subject: Mathematics (6 - 7), or Science (4 - 5), or Technology Education (3 - 5). Title: Scaling Down the Solar System. Description: In this lesson students will work collaboratively to gain a better understanding of the vastness of space by scaling down the solar system. This lesson plan was created as a result of the Girls Engaged in Math and Science University, GEMS-U Project.

Subject: Mathematics (7). Title: Shadows and Indirect Measurement. Description: This lesson will allow students to measure shadows to compute heights indirectly. They will set up and solve a proportion to find the height of a light pole on campus. This lesson plan was created as a result of the Girls Engaged in Math and Science University, GEMS-U Project.

Subject: Mathematics (6 - 7). Title: "Movin' On Up". Description: Students will use their knowledge of scale factor to enlarge a picture to scale. They will then use PhotoStory to create a collage of their pictures. This lesson plan was created as a result of the Girls Engaged in Math and Science University, GEMS-U Project.

Subject: Arts Education (7 - 12), or Mathematics (7). Title: Enlarge my "Ride" (Scale & Proportion). Description: Students will take the measurements of model cars. The students will use these measurements and scale factors to set up proportions and find the actual size of the vehicle. As a final activity, students will use sidewalk chalk to sketch the vehicle in the parking lot. An alternate indoor activity is provided under the Modifications section. This lesson plan was created as a result of the Girls Engaged in Math and Science, GEMS Project funded by the Malone Family Foundation.

Subject: Mathematics (7). Title: Similar Figure Activity. Description: The purpose of this lesson is to allow students to self-discover the relationships between similar figures.
The students should then take what they learn about these relationships and apply these concepts to problem-solving.

Subject: Mathematics (7 - 12), or Technology Education (9 - 12). Title: Water Tank Creations Part I. Description: In this lesson students will study the surface area and volume of three-dimensional shapes by creating a water tank comprised of these shapes. Students will work in groups of 4-5 to research water tanks, develop scale drawings and build a scale model. The teacher will evaluate the project using a rubric, and students will assess one another's cooperative skills using a rubric.

Subject: Mathematics (7 - 12), or Technology Education (9 - 12). Title: Creating a Water Tank - Part II "Selling the Tank". Description: Working in groups of 4-5, students will take the information, pictures and 3-D model of the water tank they assembled in Part I of Creating a Water Tank and develop a web page and a video presentation. The web page will be a tool to advertise their water tank construction company and must include hyperlinks and digital pictures. The video presentation will be a "sales pitch" to a city council. The web page and video will be scored using a rubric. The web page and video must include the surface area, volume and cost of construction.

Subject: Mathematics (6 - 12), or Technology Education (9 - 12). Title: Golden Ratios of the Body, Architecture, and Nature. Description: Students will study the golden ratio as it relates to human body measurements, architecture, and nature. Students will use a desktop publishing program to create a poster. The poster will have digital photos of themselves, architecture samples, or nature examples. Students will also include a spreadsheet with the lengths, widths, and length/width ratios of the samples included in the photos.
Subject: Mathematics (6 - 7), or Social Studies (12), or Technology Education (6 - 8). Title: Trading and Tracking Stock Portfolios Online. Description: Student teams will trade stocks and track the progress of their Alabama Stock Market Simulation portfolio by using spreadsheet software and the Internet. They will relate the information to a previous social studies unit on economics and will be utilizing math concepts they have learned. The Internet will be used to research stock trends.

Thinkfinity Lesson Plans

Subject: Mathematics. Title: Cubes Everywhere. Description: In this Illuminations lesson, students use cubes to develop spatial thinking and review basic geometric principles through real-life applications. Students are given the opportunity to build and take apart structures based on cubes. Thinkfinity Partner: Illuminations. Grade Span: 6,7,8.

Subject: Mathematics. Title: A Swath of Red. Description: In this lesson, one of a multi-part unit from Illuminations, students estimate the area of the country that voted for the Republican candidate and the area that voted for the Democratic candidate in the 2000 presidential election using a grid overlay. Students then compare the areas to the electoral and popular vote election results. Ratios of electoral votes to area are used to make generalizations about the population distribution of the United States. Thinkfinity Partner: Illuminations. Grade Span: 6,7,8.

Subject: Mathematics. Title: Scale Factor. Description: This reproducible transparency, from an Illuminations lesson, features information about scale factors. Thinkfinity Partner: Illuminations. Grade Span: 6,7,8.

Subject: Mathematics. Title: Triangula Island Overhead. Description: This reproducible transparency, from an Illuminations lesson, contains an activity that asks students to conjecture the best location of a point inside a regular polygon such that the sum of the distances to each side is a minimum.
Thinkfinity Partner: Illuminations. Grade Span: 9,10,11,12.

Subject: Mathematics. Title: Making Your First Million. Description: In this lesson, one of a multi-part unit from Illuminations, students attempt to identify the concept of a million by working with smaller numerical units, such as blocks of 10 or 100, and then expanding the idea by multiplication or repeated addition until a million is reached. Additionally, they use critical thinking to analyze situations and to identify mathematical patterns that enable them to develop the concept of very large numbers. Thinkfinity Partner: Illuminations. Grade Span: 3,4,5,6,7,8.

Subject: Mathematics. Title: Area Contractor. Description: In this Illuminations lesson, students are given the opportunity to explore surface area in the same way that a contractor might when providing an estimate to a potential customer. Once the customer accepts the estimate, a more detailed measurement is taken and a quote prepared. Thinkfinity Partner: Illuminations. Grade Span: 6,7,8.

Subject: Mathematics. Title: Learning about Length, Perimeter, Area, and Volume of Similar Objects by Using Interactive Figures: Side Length, Volume, and Surface Area of Similar Solids. Description: This is part two of a two-part e-example from Illuminations that illustrates how students can learn about the length, perimeter, area, and volume of similar objects using dynamic figures. In this part, Side Length, Volume, and Surface Area of Similar Solids, the user can manipulate the scale factor that links two three-dimensional rectangular prisms and learn about the relationships among edge lengths, surface areas, and volumes. e-Math Investigations are selected e-examples from the electronic version of the Principles and Standards for School Mathematics (PSSM). Given their interactive nature and focused discussion tied to the PSSM document, the e-examples are natural companions to the i-Math Investigations.
Thinkfinity Partner: Illuminations. Grade Span: 6,7,8.

Subject: Arts, Mathematics, Social Studies, Language Arts. Title: History on Stage Pop-Up Lesson. Description: This lesson plan, developed in support of the exhibition Paper Engineering: Fold, Pull, Pop, and Turn, introduces students to the variety of mechanisms included in movable books and encourages them to build their own pop-up in support of a social studies lesson. Making pop-ups subtly reinforces students' understanding of mechanical movement and helps budding architects, designers, and engineers begin to envision objects three-dimensionally. Thinkfinity Partner: Smithsonian. Grade Span: 5,6,7,8.

Subject: Mathematics. Title: Shops at the Mall. Description: In this lesson, one of a multi-part unit from Illuminations, students develop number sense in and around the shopping mall. They explore the allotment of space within a mall by generating a class list of stores typically found in a shopping center or mall. They then categorize stores into general categories, such as women's clothing, food service, and so on, and conduct research about malls in their own area. Finally, students plan a mall and create a scale drawing. Thinkfinity Partner: Illuminations. Grade Span: 6,7,8.

Subject: Mathematics. Title: Scaling Away. Description: This reproducible worksheet, from an Illuminations lesson, contains questions regarding the effect of multiplying by a scale factor on the surface area and volume of a rectangular prism or cylinder. Thinkfinity Partner: Illuminations. Grade Span: 6,7,8.

Subject: Language Arts, Mathematics. Title: Making Beds. Description: In this lesson, one of a multi-part unit from Illuminations, students participate in activities in which they focus on connections between mathematics and children's literature. They listen to the story How Big Is a Foot? by Rolf Myller and then explore the need for a standard unit of measure.
Thinkfinity Partner: Illuminations. Grade Span: K,PreK,1,2,3,4,5,6,7,8.

Subject: Mathematics. Title: Purple Prisms. Description: In this lesson, one of a multi-part unit from Illuminations, students investigate rectangular prisms using an online, interactive applet. They manipulate the scale factor that links two three-dimensional rectangular prisms to learn about edge lengths and surface area relationships. Thinkfinity Partner: Illuminations. Grade Span: 6,7,8.

Subject: Mathematics. Title: Turtle Pond. Description: In this student interactive, from Illuminations, students guide a turtle to a pond using computer commands. They improve their skills in estimating length and angle measurement as they enter a sequence of commands (distances and angles at which to turn) to help the turtle move toward the pond. The goal is to find the shortest path to the pond. Thinkfinity Partner: Illuminations. Grade Span: K,PreK,1,2,3,4,5,6,7,8.

Subject: Language Arts, Mathematics. Title: Mathematics and Children's Literature. Description: In this five-lesson unit, from Illuminations, students participate in activities in which they focus on connections between mathematics and children's literature. Five pieces of literature are applied to teaching a wide range of topics in the mathematics curriculum, from sorting and classifying to the meaning of averages. Thinkfinity Partner: Illuminations. Grade Span: K,PreK,1,2,3,4,5,6,7,8.

Subject: Mathematics. Title: Canada Data Map. Description: Investigate data for the Canadian provinces and territories with this interactive tool. Students can examine data sets contained within the interactive, or they can enter their own data. Thinkfinity Partner: Illuminations. Grade Span: 3,4,5,6,7,8,9,10,11,12.

Subject: Mathematics. Title: Bermuda Triangle. Description: This printable overhead projector sheet, from an Illuminations lesson, features a map with a drawing of the Bermuda Triangle.
Students are asked to determine the area of the Bermuda Triangle based on this drawing and using the scale provided. Thinkfinity Partner: Illuminations. Grade Span: 6,7,8.

Subject: Mathematics. Title: Fractal Tool. Description: This student interactive, from Illuminations, illustrates iteration graphically. Students can view preset iterations of various shapes and/or choose to create their own iterations. Thinkfinity Partner: Illuminations. Grade Span: 3,4,5,6,7,8,9,10,11,12.

Subject: Mathematics. Title: Scaling Away. Description: In this Illuminations lesson, students measure the dimensions of a common object, multiply each dimension by a scale factor, and examine a model using the multiplied dimensions. Students then compare the surface area and volume of the original object with those of the enlarged model. Thinkfinity Partner: Illuminations. Grade Span: 6,7,8.

Subject: Mathematics. Title: Centimeter Grid Paper. Description: This reproducible grid, from an Illuminations lesson, which features 1 cm by 1 cm squares, can be photocopied onto transparency paper and used by students to estimate the area of various figures. Thinkfinity Partner: Illuminations. Grade Span: 6,7,8.

Subject: Mathematics. Title: Soda Rack. Description: In this lesson, one of a three-part unit from Illuminations, students consider the arrangement of cans placed in a bin with two vertical sides and discover an interesting result. They then prove their conjectures about the interesting results. In addition, there are links to online activity sheets and other related resources. Thinkfinity Partner: Illuminations. Grade Span: 9,10,11,12.

Subject: Mathematics. Title: Building Height. Description: In this Illuminations lesson, students use a clinometer (a measuring device built from a protractor) and isosceles right triangles to find the height of a building.
The class compares measurements, talks about the variation in their results, and selects the best measure of central tendency to report the most accurate height. Thinkfinity Partner: Illuminations. Grade Span: 6,7,8.

Subject: Mathematics. Title: Shopping Mall Math. Description: In this two-lesson unit, from Illuminations, students participate in activities in which they develop number sense in and around the shopping mall. Two grade-level activities deal with size and space, estimation, measurement and applications involving percent. Thinkfinity Partner: Illuminations. Grade Span: 3,4,5,6,7,8.

Subject: Mathematics. Title: Planning a Playground. Description: In this Illuminations lesson, students design a playground using manipulatives and multiple representations. Maximum area with a given perimeter will be explored using tickets. This is an interesting demonstration of how a real-world context can change a purely mathematical result. Thinkfinity Partner: Illuminations. Grade Span: 6,7,8.

Subject: Mathematics, Social Studies. Title: Hospital Map Activity Sheet. Description: This student reproducible, from an Illuminations lesson, contains a map of the area wherein students must choose a location to build a new hospital so it is equidistant from Boise, Idaho; Helena, Montana; and Salt Lake City, Utah. Students do their work on a blank transparency and overlay their work onto this map. Thinkfinity Partner: Illuminations. Grade Span: 9,10,11,12.
Non-Drinfeld--Jimbo Deformations and Finite Quantum Groups

As is well-known, for the compact semi-simple Lie groups $G$, there exist non-commutative Hopf algebra deformations ${\cal O}_q[G]$ of their coordinate algebras ${\cal O}[G]$, the so-called Drinfeld--Jimbo quantized coordinate algebras. Do there exist other examples of noncommutative Hopf algebra deformations of ${\cal O}[G]$? Have such deformations been classified? Moreover, do there exist analogous constructions for "quantizing" the group algebra of a finite group?

Tags: qa.quantum-algebra, quantum-groups, deformation-theory

4 Answers

In addition to the Hopf algebra $O_q(G)$ which you mentioned, there is an important twist-equivalent Hopf algebra $A_q$ introduced by Majid, as the "equivariantized coordinate algebra". In modern terminology, $O_q(G)$ is an algebra in a rather unnatural category: $C^{op}\boxtimes C$, where $C$ is the braided tensor category of locally finite $U_q(g)$-modules. Majid's algebra is the image of $O_q$ under the composition $C^{op}\boxtimes C\to C\boxtimes C \to C$, first applying the braiding on the $C^{op}$ factor, and then applying the functor of tensor product. By construction $A_q$ is equivariant for the adjoint action (hence the name) of $U_q$ on itself. Generally speaking, if you are wanting to quantize something related to the diagonal or adjoint action of $G$, $A_q$ is the one you want. If you are considering rather the one-sided action of some subgroup of $G$, then you want $O_q(G)$. For $GL_n$, the algebra $A_q$ is sometimes called the reflection equation algebra, because its defining relations are related to affine reflection groups. The algebra $A_q$ also has a nice interpretation as the CoEnd of the tensor functor $C\boxtimes C\to C$; equivalently, it is a direct limit of $V^*\otimes V$, over all finite-dimensional representations $V$ of $U_q(g)$, subject to certain natural relations involving duals.
Finally, work of Caldero and Joseph-Letzter exhibits $A_q$ as a certain canonical sub-algebra of $U_q$, so that one can view $U_q$ as degenerating both to $U(g)$ and $O(G)$ at the same time; this can be regarded as a non-commutative Fourier transform. Regarding the question of quantizing the group algebra of a finite group: one issue is that even infinitesimal deformations are necessarily trivial, since (at least over a field of char zero) the group ring of a finite group is semi-simple and so admits no non-trivial deformations. That said, there are the Hecke algebras which "deform" the group algebras of reflection groups; although these deformations are trivial for generic parameters (they are actually isomorphic to the group algebra itself), they are still interesting for many reasons.

I do not know of a general method for quantizing the group algebra of a finite group. However, there is a way to do it for Coxeter groups (finite or not): the result is called an Iwahori-Hecke algebra. These are closely related to Drinfeld-Jimbo quantum groups, at least when the Coxeter group is the Weyl group of a finite-dimensional semisimple Lie algebra (and probably this extends to Kac-Moody algebras, but I don't know enough about that to make a definitive statement). One thing to think about is that if $G$ is a finite group, the group algebra $kG$ is semisimple as long as the characteristic of the field doesn't divide the order of $G$. And semisimple algebras are somewhat resistant to deformation. See this question for some more details on that story.

Comment: Sorry, the link to Iwahori-Hecke algebra isn't working. The wiki page has a long dash in the title which seems to make copy/paste not work properly. But if you click the link I put there it will take you to the disambiguation page for "Hecke algebra", which has a link to the page for Iwahori-Hecke algebras.
– MTS Mar 15 '13 at 3:13

Comment: I may have forgotten/mixed things, but the problem of semisimplicity was already there for D-J quantum groups; actually, the 'trick' is that one doesn't deform the algebra (it's essentially impossible), but rather the coproduct. – Amin Mar 15 '13 at 7:47

Much depends on what you want to do with it.... ;-) There is a duality between coordinate algebras and the Drinfel'd-Jimbo $U_q(\mathfrak{g})$ (your title suggests you're interested rather in the finite-dimensional truncations??). Generalizations of the latter (so-called finite-dimensional pointed Hopf algebras) have indeed been classified in certain cases (Andruskiewitsch-Schneider, arXiv). You give a Dynkin diagram of Cartan type, q-decorations and so-called linkings (dotted lines) fulfilling certain diagrammatic rules. There are exotic examples with several different "Borel-algebras". Maybe your problem can be reformulated using the duality? Can you give further information? Then I'll be glad to try to help!

In the compact case: twisted quantized function algebras (which shouldn't be the same as what David Jordan is discussing, since they are true Hopf algebras over $\mathbb C$) were introduced by Levendorskij and Soibelman:

1. Levendorskij, Twisted function algebras on a compact quantum group and their representations, St. Petersburg Math. J. 3 (1992).
2. Levendorskij and Soibelman, Algebras of functions on compact quantum groups, Schubert cells and quantum tori, Comm. Math. Phys. 139 (1991).

Such Hopf algebras are a family parametrized by a real number $a$ and an element $u\in\wedge^2\mathfrak t$, where $\mathfrak t$ is the Lie algebra of the maximal torus $T$ of $G$. When $a=1$ and $u=0$ one gets back the more familiar quantization. In the $SU(2)$ case, since $\wedge^2\mathfrak t=0$ and the real parameter $a$ can be "rescaled", there is nothing differing from Drinfel'd-Jimbo.
It is often the case that the term "twisted" is reserved only for the $a=0$ case, so my terminology here could be not completely standard.

Are they classified? Well, in a sense yes. The point is that we know that Hopf algebra quantizations of compact groups are (from the work of Etingof-Kazhdan) functorial from Poisson-Lie group structures. So we have one, up to iso, for any non-isomorphic Poisson-Lie group structure on $G$. Now these twisted quantized function algebras are quantizing all compact Poisson-Lie group structures, the latter being classified. So in this setting this list is complete. A good place to study such things is the book "Algebras of functions on quantum groups Part I", Math. Surv. and Monographs 56, by Korogodski and Soibelman. However references there are quite limited in number. The "complexified" part of the story is much more complicated. A good starting point is Hodges-Levasseur-Toro, "Algebraic structure of multiparametric quantum groups", Advances in Math. 126 (1997).
Math Forum Discussions

Topic: Shutting down sci.math. Replies: 2. Last Post: Apr 30, 2012 10:40 AM.

Re: Shutting down sci.math
Posted: Apr 30, 2012 10:40 AM

> Has anyone put in some serious thought into shutting
> down sci.math?
> Is there really a point anymore? The serious
> mathematicians just go to mathoverflow anyway. And
> there's mathstackexchange for the not as much
> research related.
> sci.math seems to mostly be a forum for cranks. Is
> having a forum for cranks to vent a net positive?

This is a good question. Certainly the site has been overwhelmed with cranks for a long time. Yet occasionally some good people who don't know any better post here, and several times over the years I have met some interesting people with good questions that way.

Robert H. Lewis
Fordham University

Thread:
4/29/12 - Shutting down sci.math - James
4/29/12 - Re: Shutting down sci.math - adamk
4/30/12 - Re: Shutting down sci.math - Robert H. Lewis
CSC 344 Algorithms and Complexity
Spring, 1997

This course meets from 9:25-10:40 AM TTh in room 116 Alumnae Hall. Note room change! We are not meeting in room 39 in the basement of the Business Building. The textbook Fundamentals of Algorithmics, by Gilles Brassard and Paul Bratley, is required. The syllabus is available in LaTeX, DVI, Postscript, and HTML. An updated schedule will contain the latest updates to homework due dates, lecture topics, etc. Please check the schedule regularly and keep up on the assigned reading!

Homework Assignments

I shall assign several homework assignments during the semester, mostly "paper assignments" (i.e. not programming), although they may be turned in on paper or by email.

Homework 1, assigned 30 Jan, due 18 Feb
Read problems 1.13, 1.17, 1.21, 1.24, 1.27, 1.28, 1.29, 1.30, 1.34, 1.46. Work and turn in any four of them, including at least one induction proof.
Read problems 2.2, 2.5, 2.6, 2.7, 2.8, 2.11, 2.19, 2.20. Work and turn in any three of them.

Homework 2, assigned 21 Feb, due 18 Mar
Read problems 3.2, 3.5, 3.6, 3.7, 3.11, 3.14, 3.15, 3.17, 3.21, 3.28. Work and turn in any four of them.
Read problems 4.1, 4.2, 4.5, 4.6, 4.7, 4.8, 4.10, 4.11, 4.17, 4.21. Work and turn in any four of them.

Homework 3, assigned 3 April, due 17 April
Do 50 points' worth of problems from the following:
5 point (easy) problems
10 point (moderate) problems: 5.2, 5.3, 5.4, 5.10, 5.11, 5.12, 5.13, 5.17, 5.19, 5.21, 5.22, 5.23, 5.25, 5.26, 5.27, 5.28, 5.29, 5.30, 5.31, 5.32, 5.36, 5.37
15 point (hard) problems: 5.5, 5.14, 5.15, 5.16, 5.24

Homework 4, assigned 17 April, due 1 May
Read problems 6.2, 6.3, 6.7, 6.8, 6.9, 6.10, 6.14, 6.15, 6.18, 6.19, 6.21(a), 6.21(b). Work and turn in any six of them, including at least one proof. (Yes, problem 6.21 counts as two.)

Reading assignments
By Thursday, 17 April, you should have read pages 1-180 of the textbook.
I'll discuss section 5.7 in detail in class on the 17th; you can read section 5.8 for yourselves; and I'll try to discuss section 5.9 in class.

Last modified: Stephen Bloch / sbloch@boethius.adelphi.edu
Could this be NP-complete?

Given an undirected, unweighted graph $G(V,E)$, a subset $M$ of the vertices $V$, and a vertex $s \in V - M$, find an optimal tree $T$ of $G$ defined as follows:

(1) $M$ and $s$ are in $V(T)$.
(2) The distance (i.e. the length of a shortest path) from $s$ to any vertex of $M$ in the tree $T$ is equal to the distance from $s$ to that vertex in $G$.
(3) No other tree $T'$ satisfying conditions (1) and (2) has fewer nodes than $T$.

My idea was to use Dijkstra's algorithm to find a shortest path from $s$ to each vertex in $M$. However, there can be many shortest paths from $s$ to a vertex $v$, so I would pick the shortest path that contains the most vertices of $M$, and then merge all of these paths together to get the tree $T$. This seems to solve the problem in polynomial time. My concern, though, is that the number of shortest paths from $s$ to a vertex $v$ could be so large that it makes this algorithm exponential; I don't know whether there is an upper bound on the number of shortest paths between two vertices in a graph. Also, does anyone know whether this problem is NP-hard, or whether it can be solved in polynomial time?

Tags: graph-theory, np, computational-complexity

2 Answers

The problem is NP-complete. I think that the following describes a polynomial reduction of SAT to your problem.

Let $S$ be an instance of SAT, i.e. a finite set of clauses $C_1, C_2, \ldots, C_n$ over a finite set of variables $p_1, p_2, \ldots, p_k$. Each clause contains some literals, i.e. variables $p_i$ and/or negated variables $\lnot p_i$. (In 3SAT we assume that each clause contains at most 3 literals.) We may assume that for each variable $p$ there is a clause $C_p$ containing only $p$ and $\lnot p$, so $n \ge k$.

Make $S$ into a graph as follows: There is a special vertex $s$. For each variable $p$ there are two vertices $p$ and $\lnot p$, both connected to $s$ by an edge (EDITED to simplify). There is a vertex for every clause.
Each literal $L$ is connected by an edge to each clause $C$ in which $L$ appears. The set $M$ will be the set of all clauses.

If the original problem $S$ was satisfiable, say with an assignment $A$, then there is an optimal tree with $n+k$ edges: connect $s$ with all literals which are true under $A$, and connect each clause $C$ with a literal $L$ in $C$ that is true under $A$.

(EDITED to clarify and to close a gap:) Conversely, if there is an optimal tree with at most $n+k$ edges, then:

1. Each clause has to be on the tree, so it has to be connected to some literal. This costs $n$ edges.
2. For each variable $p$, either $p$ or $\lnot p$ has to be on the tree (because of $C_p$), so either $p$ or $\lnot p$ has to be connected by an edge to $s$ (because the distance has to be $1$). These connections cost $k$ edges.
3. So from each such pair EXACTLY one is connected with $s$. The literals which are connected to $s$ now define a satisfying truth assignment.

Hence the instance $G, M$ of your problem that I constructed from the SAT instance $S$ has a solution of size at most $n+k$ iff $S$ is satisfiable. So any algorithm that solves your problem also solves SAT. Hence your problem is NP-complete.

Thanks a lot, Goldstern. I think this transformation is correct. I just wonder why we don't connect $s$ directly to each variable instead of going through a path of length $n$. That way we can have a tree of $n+k$ edges, can't we? – chepukha May 8 '11 at 1:18

Another problem with this transformation is that you're trying to limit the number of edges, while the optimal tree must have the minimum number of vertices. With the way you select the tree, the number of vertices is not minimum. – chepukha May 8 '11 at 5:02

You are right, the paths are not necessary. I edited my answer to simplify it (and also to clarify some points). – Goldstern May 8 '11 at 8:19

I do not understand your question about edges vs vertices. A tree with $v$ vertices has $v-1$ edges, so you minimize the number of edges iff you minimize the number of vertices. – Goldstern May 8 '11 at 8:20

Maybe I didn't quite understand your proof, so let me give an example. Say I have the 3SAT instance $(x+y+z)(x+\lnot y+z)$. We can assign $x=y=z=T$. So, if we follow the proof, we get a tree with 6 vertices and $n+k = 2+3 = 5$ edges. However, I can have another, smaller tree that has the same shortest paths to the vertices in $M=\{C_1, C_2\}$: that tree has 3 edges connecting the vertex representing $x$ with $s$, $C_1$, and $C_2$, and it has only 4 vertices. Am I missing something here? – chepukha May 8 '11 at 8:48

Second answer:

Just a rough idea (I am not really an expert on graph theory, so there may be a much better upper bound): You can group the vertices according to their distance to $s$, say $L_i = \{v \in V \mid \mathrm{dist}(s,v) = i\}$. Then of course the $L_i$ are pairwise disjoint. Any shortest path from $s$ to a given vertex $v$ has to pass through the $L_i$ in ascending order (you can easily prove this fact): first a vertex from $L_1$, then one from $L_2$, and so on. That means there are at most $\prod_{i=1}^{\mathrm{dist}(s,v)-1} |L_i|$ shortest paths. As $|L_i| \le |V|$, this gives an upper bound of $|V|^{\mathrm{dist}(s,v)-1}$.

While thinking about it: there might be a way to press this bound much lower, as $|L_i| = |V|$ only happens when all vertices have distance 1, which means the shortest path is the direct connection between $s$ and $v$. For decreasing numbers of vertices in the $L_i$, the possible length of the path grows. You could probably use this to get a better bound. I do not see why the rest of your algorithm should not work.

Thank you. I didn't have time to verify your proof here, but I think the above transformation is correct. – chepukha May 8 '11 at 1:19
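The reduction in the first answer is easy to make concrete. The sketch below builds the instance $(G, M, s)$ from a CNF formula; the function name, the DIMACS-style clause encoding (a clause is a list of nonzero integers, with $-i$ meaning $\lnot p_i$), and the vertex labels are my own choices, but the edges follow the answer: $s$ is adjacent to every literal vertex, each clause vertex is adjacent to the literals it contains, and a clause $C_p = \{p, \lnot p\}$ is added for any variable that lacks one.

```python
def reduce_sat_to_tree(clauses, num_vars):
    """Build (adjacency dict, terminal set M, source s) from a CNF formula.

    clauses: list of clauses, each a list of nonzero ints
             (i means variable p_i, -i means its negation).
    Vertex labels: 's' is the source, ('lit', i) a literal, ('cl', j) a clause.
    """
    adj = {}

    def add_edge(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    # The answer assumes every variable p has a clause C_p = {p, ~p};
    # append the missing ones so the assumption holds.
    present = {frozenset(c) for c in clauses}
    all_clauses = [list(c) for c in clauses]
    for i in range(1, num_vars + 1):
        if frozenset((i, -i)) not in present:
            all_clauses.append([i, -i])

    # s is adjacent to both literal vertices of every variable.
    for i in range(1, num_vars + 1):
        add_edge('s', ('lit', i))
        add_edge('s', ('lit', -i))

    # Each clause vertex is adjacent to exactly the literals it contains.
    for j, clause in enumerate(all_clauses):
        for lit in set(clause):
            add_edge(('cl', j), ('lit', lit))

    M = {('cl', j) for j in range(len(all_clauses))}
    return adj, M, 's'
```

On the 3SAT example from the comments, $(x+y+z)(x+\lnot y+z)$ with three variables, this produces five clause vertices in $M$ (the two given clauses plus three forced $C_p$ clauses), each at distance 2 from $s$.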
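A footnote to the second answer: the layers $L_i$ are exactly what breadth-first search computes, and while enumerating all shortest paths can take exponential time (a chain of $k$ "diamond" gadgets already has $2^k$ of them), merely counting them is a linear-time dynamic program over those layers. A sketch, not from the thread, assuming an adjacency-dict graph representation:

```python
from collections import deque

def count_shortest_paths(adj, s):
    """Count shortest s->v paths for every v via the BFS layer
    decomposition L_i = {v : dist(s, v) = i}.  Runs in O(V + E)
    even when the count itself is exponentially large."""
    dist, count = {s: 0}, {s: 1}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:            # v first reached: next layer
                dist[v] = dist[u] + 1
                count[v] = 0
                queue.append(v)
            if dist[v] == dist[u] + 1:   # u precedes v on some shortest path
                count[v] += count[u]
    return dist, count
```

On the 4-cycle with edges $a\,b$, $a\,c$, $b\,d$, $c\,d$ there are two shortest $a \to d$ paths, and the function reports a count of 2 without listing them.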