Minimum distance from a circle

September 19th 2009, 05:26 AM #1
I'm not sure if this is a very easy or a very hard question... What I know is that you have to be patient, as my definition of it is probably not very rigorous. In a circle, the centre is equidistant from the points on the edge. However, can I also safely assume that the centre is the point at the minimum combined distance from the edge? In other words, is the sum of the distances from all the points on the edge of a circle to a certain point within the circle minimal when the given point is the centre? Thanks for your answers.

September 19th 2009, 10:24 AM #2
I think if you sum all the lines from the edge of the circle to the point, you will obtain the area of the circle. Hence it doesn't matter which point you take; the sum will always be the same.

September 19th 2009, 12:49 PM #3
Hi Matteo, I think your question makes sense if we interpret "sum of the distances from all the points on the edge of a circle to a certain point" as a suitable integral. For simplicity let's assume our circle has radius 1 and is centered at (0,0). Take an arbitrary point P in R^2; because our problem is invariant under rotation of the coordinate axes, we can assume P=(a,0), a >= 0. The integral I have in mind is:
$\int_0^{2\pi}\sqrt{(a-\cos(x))^2+\sin^2(x)}\, \mbox{d}x = \int_0^{2\pi}\sqrt{1+a^2-2a\cos(x)}\, \mbox{d}x =\ldots$
This leads to a complete elliptic integral of the second kind; for $a \ge 0$ we get
$\ldots = 4\cdot (a+1)\cdot E\left(2\sqrt{a/(a+1)^2}\right)$
and this has a minimum $=2\pi$ at $a=0$. This means that the point P=(0,0), which is the centre of the circle, is the point that minimizes the integral.

September 28th 2009, 08:08 AM #4
Ah, the wonderful tools of mathematics. I'm a little late with this, but thank you very much!

September 28th 2009, 08:25 AM #5
Why is my answer wrong?
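For readers who want to check post #3 numerically, here is a short Python sketch (not from the original thread) that evaluates the integral by quadrature for several values of a; the minimum should be 2π ≈ 6.28319 at a = 0:

import numpy as np
from scipy.integrate import quad

def edge_distance_integral(a):
    """Integral of the distance from P=(a,0) to the points on the unit circle."""
    integrand = lambda x: np.sqrt(1 + a**2 - 2*a*np.cos(x))
    value, _ = quad(integrand, 0, 2*np.pi)
    return value

for a in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"a = {a:.2f}: integral = {edge_distance_integral(a):.5f}")
# a = 0 gives 2*pi ~ 6.28319; the integral increases with a,
# confirming that the centre minimizes the total distance.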
{"url":"http://mathhelpforum.com/geometry/103088-minimum-distance-circle.html","timestamp":"2014-04-17T17:01:36Z","content_type":null,"content_length":"42124","record_id":"<urn:uuid:13a21148-b197-4a19-9929-0259ee432877>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
APPENDIX C

ESTIMATION OF DESIGN LENGTHS OF ATL COMPONENTS

Introduction

A key design element of ATLs is the appropriate length of the ATL's upstream and downstream components. Although it may be hypothesized that a longer ATL promotes higher ATL use, extensive field observations of ATLs tend to contradict this theory. In fact, some of the ATL sites used to develop the operational models in the "ATL Volume Estimation" section in Chapter 3 had high ATL utilization and short downstream lengths. Instead, the primary motivator for using the ATL appears to be a defensive one: avoiding a cycle failure when traffic in the adjoining CTL is moderately to highly congested.

Based on the above premise, the required ATL upstream length is predicated on the provision of adequate storage for and access to the ATL from the neighboring CTL. The downstream length, on the other hand, is predicated on servicing the queued vehicles in the ATL so that they can accelerate to the approach free flow speed and smoothly merge before reaching the end of the downstream taper. Gap availability and acceptance in the CTL for ATL vehicles operating under relatively high-speed, uninterrupted conditions must also be considered. Therefore, the recommended minimum downstream length is the greater of the lengths determined from these two operating conditions.

Note that the lengths determined from this method represent minimum design requirements for ATLs. Poor downstream sight distance, lack of proper signage (or existence of overhead lane signs), presence of downstream driveways, and significant right-turn-on-red (RTOR) flow from cross-street traffic may all necessitate adjustments to the minimum length to accommodate those effects.

Finally, the minimum ATL lengths developed in this section are predicated on the assumption that an ATL will in fact be built. This is a strong assumption, but it is one that relies on the engineer's judgment on the practical need for such a lane. Because one of the major outputs of these guidelines is the predicted ATL through flow rate under various conditions, it is incumbent on the practitioner to decide whether the estimated ATL volume indeed warrants the additional lane, especially if the anticipated flow is only one to three vehicles per cycle on average. However, if the decision is to proceed with an ATL installation, then the procedure described in the next section can be followed.

ATL Length Estimation Procedure

This procedure is built around the ATL flow rate estimation models described in the "ATL Volume Estimation" section in Chapter 3. Since there are separate models for the one-CTL and 2-CTL cases, the same reasoning applies to the ATL length estimation process. The procedure is implemented in two Microsoft Excel spreadsheets that estimate minimum ATL length and provide other important performance measures as outputs. The starting point of the analysis has no ATL presence.
For the one-CTL case, the procedure considers an approach with a single shared through-right continuous lane, while the 2-CTL case assumes an exclusive through-movement lane and a shared through-right continuous lane. In all cases, left turns are assumed to operate from an exclusive lane or pocket and therefore are not part of the analysis. An outline of the procedure as it relates to the ATL upstream length determination is explained in the following steps:

1. Identify whether the one-CTL or 2-CTL case applies.
2. Supply the data required for ATL flow rate estimation, including:
   a. Total approach through and right-turn flow rates,
   b. Cycle length and effective green time for the subject approach, and
   c. Saturation flow rate for both through and right-turn movements.
3. Estimate the ATL flow rate based on the one-CTL or 2-CTL model in the "ATL Volume Estimation" section in Chapter 3.
4. Calculate the ATL through flow rate assuming equal lane volume-to-adjusted saturation flow rate (v/s) based on the HCM 2010 shared or exclusive lane-group volume distribution.
5. Take the predicted ATL flow rate as the lower estimate from steps 3 and 4.
6. Calculate the ATL and CTL volumes, capacity, control delay, and back of queue using the HCM 2010 signalized intersection procedures. For shared ATLs, include the right-turn flow rate in the lane flow computations.
7. Estimate the 95th percentile queues in both the ATL and CTL (for one CTL lane in the case of two CTLs) using HCM procedures.
8. Select a storage length based on the greater of the 95th percentile queues in the ATL and CTL. Queue storage or access distance is calculated based on an estimate of average vehicle spacing in a stopped queue.

The determination of the requisite downstream length requires a further set of input parameters, some of which may be defaulted as shown in parentheses, namely:

· Approach free flow speed or speed limit,
· Average acceleration rate from a stop on the ATL (10 feet/second²),
· Intersection width measured from the stop line to the far curb (40 feet),
· Minimum acceptable headway in CTL traffic stream (6 seconds), and
· Driver reaction time (1 second).

The downstream length estimation based on storage of vehicles at the desired spacing in the downstream length (DSL1) proceeds as follows. Estimate the average uniform, random, and oversaturation back of queue (BOQ) for ATL through traffic only (Q1 + Q2 in HCM terminology). This approach incorporates two opposing and simplifying assumptions. The first is that the required length will be based on the average BOQ, as opposed to the 95th percentile value as was done in the upstream case. This is offset by another assumption where all through-movement vehicles in the ATL are assumed to be contiguous in the queue and not separated intermittently by right-turning vehicles in a shared lane, which would result in a larger separation between through-movement vehicles. This procedure assumes that the effects of the two assumptions will balance. The downstream storage criterion is based on providing sufficient spacing between ATL vehicles at the free flow speed or speed limit.
Since vehicles accelerate from the stop line position, the downstream distance measured from the far curb can be shown to be a function of the following quantities (the equation itself was not captured in the OCR text):

V = free flow speed or speed limit (in feet/second),
A = acceleration rate from the stop line (in feet/second²),
L = spacing between vehicles at stop (in feet),
T = driver reaction time (in seconds), and
INTW = intersection width measured from the stop bar to the far curb (in feet).

The second criterion for estimating the required downstream length is based on gap availability and acceptance under uninterrupted flow conditions, especially on high-speed approaches. The concept is that, after traveling a reaction distance past the intersection, an ATL driver must find an acceptable merge gap in the neighboring CTL within the confines of the downstream ATL length. Using assumptions on the headway distribution in the CTL and a minimum acceptable merge headway value, the distance measured from the far curb can be shown to be:

DSL2 = V · (T + NUM · Gr)

Where:
NUM = the number of rejected gaps in the CTL. This could be either the mean value of rejected gaps or a pre-specified percentile number of rejected gaps, as explained below.
Gr = expected or average size of a rejected headway in the CTL (in seconds).

This model used to calculate DSL2 is based upon a gap acceptance procedure with the following assumptions:

· Drivers begin searching for gaps as soon as they pass the stop bar,
· Drivers have reached the operating speed of the arterial,
· Drivers are homogeneous with regard to a critical headway or gap (tc), and
· Traffic in the adjacent CTL follows an exponential headway distribution.

The following steps describe the model development:

Step 1. Determine the number of rejected gaps encountered until an acceptable gap is found. Let p be the probability of rejecting a gap in the CTL, tc be the size of the critical headway, and h be the time headway between vehicles in the CTL. Then

p = P(h < tc) = 1 − e^(−q·tc/3600)

where q is the flow rate in the CTL (in vehicles per hour). Then the probability of rejecting exactly i gaps is p^i·(1 − p), and the expected number of rejected gaps is:

Nr = p / (1 − p)

An alternative approach to using Nr is to design the downstream length to accommodate the 95th percentile number of rejected gaps, as opposed to the mean value. In this case, we would like to determine the number of rejected gaps that would only be exceeded at most (1 − α) percent of the time. In other words, find I such that the number of rejected gaps X satisfies

P(X ≤ I) ≥ α

or conversely

P(X > I) ≤ 1 − α

which can then be expressed as

p^(I+1) ≤ 1 − α

Solving for I gives the condition for the percentile rejected gap:

I ≥ ln(1 − α) / ln(p) − 1

For example, if the probability of a rejected gap is p = 0.50 and a 95th percentile confidence level on the number of rejected gaps is desired, then

I ≥ ln(0.05) / ln(0.50) − 1 ≈ 3.32, so I = 4.

This compares with a mean number of rejected gaps of

Nr = 0.50 / (1 − 0.50) = 1.

In the remaining steps, the user may choose to apply either the percentile or mean value of rejected gaps.

Step 2. Determine the expected size of a rejected gap, E(t | t < tc). With λ = q/3600,

E(t | t < tc) = (1/p) · ∫₀^tc t·λ·e^(−λt) dt

where, using integration by parts and simplifying,

E(t | t < tc) = 1/λ − tc·e^(−λ·tc) / (1 − e^(−λ·tc)).

Since p = 1 − e^(−λ·tc), this can be written as

Gr = E(t | t < tc) = 1/λ − tc·(1 − p)/p.

Step 3. Calculate the expected waiting time for an acceptable gap, which is equal to the product of the number of rejected gaps and the expected size of a rejected gap:

E(W) = Nr · Gr

Optionally, if one selected the percentile gap approach, then the waiting time for the α percentile rejected gap would be

Wα = Iα · Gr

Step 4. Calculate the distance traveled before an acceptable gap is found:

D = V · E(W) = V · Nr · Gr

or, in the case of the percentile gap:

Dα = V · Iα · Gr

where V is the operating speed in feet per second. Incorporating the reaction time T, the total distance traveled (in feet) is given by

DSL2 = V · (T + Nr · Gr)

or, in the case of the percentile gap,

DSL2 = V · (T + Iα · Gr).

The computational engine described in Appendix B provides both a mean and percentile option for computing the design value of DSL2.
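As an illustration of the Step 1–4 computations, here is a small Python sketch (not part of the original appendix; the function and parameter names are our own, and the default parameter values follow the input list above) that computes DSL2 under both the mean and percentile options:

import math

def dsl2(q_ctl, v_fps, t_c=6.0, t_react=1.0, alpha=None):
    """Downstream length DSL2 (feet) from the gap-acceptance model.

    q_ctl   : CTL flow rate (vehicles per hour)
    v_fps   : operating speed (feet per second)
    t_c     : critical (minimum acceptable) headway, seconds
    t_react : driver reaction time, seconds
    alpha   : if given (e.g. 0.95), use the percentile number of
              rejected gaps; otherwise use the mean.
    """
    lam = q_ctl / 3600.0                  # arrivals per second
    p = 1.0 - math.exp(-lam * t_c)        # P(headway < t_c) = P(reject)
    # expected size of a rejected gap, Gr = 1/lam - t_c (1-p)/p, in seconds
    g_r = 1.0 / lam - t_c * (1.0 - p) / p
    if alpha is None:
        num = p / (1.0 - p)               # mean number of rejected gaps
    else:                                 # smallest I with p^(I+1) <= 1-alpha
        num = math.ceil(math.log(1.0 - alpha) / math.log(p) - 1.0)
    return v_fps * (t_react + num * g_r)

# Example: 600 vph in the CTL, 66 ft/s (45 mph) operating speed
print(dsl2(600, 66))              # mean-based design length, ~351 ft
print(dsl2(600, 66, alpha=0.95))  # 95th percentile design length, ~1060 ft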
{"url":"http://www.nap.edu/openbook.php?record_id=14617&page=78","timestamp":"2014-04-19T15:14:32Z","content_type":null,"content_length":"47848","record_id":"<urn:uuid:03bcc919-11eb-4253-ace8-cebf0c6cfa40>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
Showing a function is one-to-one or onto.

April 10th 2010, 11:11 AM
Can someone help me understand how to do this problem? Let F: N -> {0,1} be defined by F(n) = 0 if 3|n, or 1 if 3|n.
a) Show F is NOT one-to-one.
b) Show F is onto.

April 12th 2010, 11:44 AM
It still makes no sense. "0 if 3|n or 1 if 3|n" reads as "0 if three divides n, or 1 if three divides n." That is nonsense.

April 12th 2010, 11:44 AM (Math Major)
Sorry, but what are you using | to mean?

April 12th 2010, 01:37 PM
This is what it was supposed to say -- I found out that there was a TYPO on the assignment itself. It is supposed to read: F(n) = 0 if 3 does NOT divide n, or 1 if 3|n.

April 12th 2010, 02:10 PM
So b) is ONTO. I need help trying to understand the question and knowing what to look for... I don't really need help solving it, and hints don't help much either. Can someone please explain how to approach such a problem and what I need to look for in order to prove something is onto or one-to-one?

April 12th 2010, 02:26 PM
Any function is a set of ordered pairs. A function is one-to-one if no two pairs have the same second term. A function is onto if the range is the same as the co-domain (the final set).

April 13th 2010, 06:58 AM
So I know the answer because I went over it with my prof, but I still don't really understand it. Can someone please explain? Here it is:
F: N -> {0,1}, F(n) = 0 if 3 does not divide n, 1 if 3|n.
a) F is NOT one-to-one. I know this is due to having more than one value mapped to the same second value, but I don't understand how this conclusion is reached from the given information.
Consider n1 = 1 and n2 = 2. Then F(n1) = F(n2) = 0, but n1 ≠ n2.
b) Is F onto? YES.
Let y ∈ {0,1}. If y = 0, choose n = 1; then n ∈ N and F(n) = y, since F(1) = 0. If y = 1, choose n = 3; then n ∈ N and F(n) = y, since F(3) = 1.
Can someone please explain how this proof was reached and the logic behind what's going on, in detail, so I understand it once and for all?
Thank you in advance, Matt H
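As a quick sanity check of the two witnesses used in the proof, here is a short Python sketch (not from the thread):

def F(n):
    """F(n) = 0 if 3 does not divide n, 1 if 3 divides n."""
    return 1 if n % 3 == 0 else 0

# Not one-to-one: two different inputs map to the same output.
assert F(1) == F(2) == 0 and 1 != 2

# Onto: every element of the codomain {0, 1} is hit.
assert F(1) == 0   # witness for y = 0
assert F(3) == 1   # witness for y = 1
print("F is not injective, but it is surjective onto {0, 1}")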
{"url":"http://mathhelpforum.com/discrete-math/138329-showing-function-one-one-onto-print.html","timestamp":"2014-04-18T11:40:07Z","content_type":null,"content_length":"12639","record_id":"<urn:uuid:f4b1ba3c-27c5-49b6-9945-22b8eeb9fd77>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: Negative value of cost function (objective function)

Re: Negative value of cost function (objective function)
Posted: Jan 2, 2014 8:34 AM

On 1/1/2014 11:30 PM, Rubayet wrote:
> Hello everyone,
> I am developing my cost function as the objective function. This function
> is nonlinear, and my optimization model is an unconstrained nonlinear
> model. When I use fminsearch as the solver, I get a negative value for my
> cost function, and one of my two decision variables is also negative.
> Maybe the fminsearch solver searches from its default -Inf to Inf? I don't
> know what's going on. Can anybody help me figure out this problem? Please
> note that when I plug fixed values of the decision variables into my
> objective function, I get a positive value, so there is no problem in my
> cost (objective) function. But when I run it with the fminsearch solver,
> I get a negative value. I guess this is a very basic problem. Does
> fminsearch have a default search area like -Inf to Inf, so that it starts
> taking negative values of my decision variable and gives me a negative
> result in the final iteration? I am really looking for a fruitful
> solution to this problem.
> Regards
> Rubayet

You say that a decision variable is negative in the fminsearch solution, and you imply that you don't want that to happen. Unconstrained means any value of a decision variable is OK. So fminsearch is doing what it is supposed to do.

If you want your decision variables to remain positive, then I suggest that you use fmincon (assuming that you have Optimization Toolbox), or else use abs(x) as your decision variables, which keeps the values positive.

Alan Weiss
MATLAB mathematical toolbox documentation
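A minimal MATLAB sketch of the abs(x) reparameterization Alan suggests (the cost function here is a made-up example, not Rubayet's):

% Hypothetical nonlinear cost whose minimizer should stay positive.
cost = @(v) (v(1) - 2)^2 + (v(2) - 0.5)^4 + v(1)*v(2);

% Plain fminsearch is unconstrained: it may return negative variables.
x0 = [1; 1];
xUnc = fminsearch(cost, x0);

% Reparameterize: optimize over y, but evaluate the cost at abs(y).
% fminsearch still roams over all of R^2, yet the cost only ever
% sees nonnegative decision variables.
yOpt = fminsearch(@(y) cost(abs(y)), x0);
xPos = abs(yOpt);   % the nonnegative solution actually used by the cost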
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2614160&messageID=9354471","timestamp":"2014-04-16T08:30:09Z","content_type":null,"content_length":"21300","record_id":"<urn:uuid:a3807648-1edb-4e7b-a7ed-b935739cfcbf>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
Integer Programming

March 10th 2010, 06:56 PM #1

1. Let P ⊆ R^n be a nonempty polyhedron. For c ∈ R^n, define

z(c) := max{c^T x : x ∈ P}     (1)

Show that the following statements are equivalent.
(i) P = conv(P ∩ Z^n).
(ii) Let F ⊆ P be any face. Then F ∩ Z^n ≠ ∅.
(iii) The optimal value z(c) of the linear program (1) is attained at some point in P ∩ Z^n, for all c ∈ R^n such that z(c) is finite.

2. Let P_i = Q_i + C_i, i = 1, 2 be polyhedra in R^n (here, the Q_i are polytopes and the C_i are polyhedral cones).
(a) Let R ⊆ R^n be a polyhedron. Show that R is a closed set. Now let R_1 ⊆ R^2 be a line, and R_2 ⊆ R^2 be a singleton point not lying on R_1. Then show that conv(R_1 ∪ R_2) is not a closed set.
(b) Show that Q := conv(Q_1 ∪ Q_2) is a polytope and C := conv(C_1 ∪ C_2) is a polyhedral cone.
(c) Let P := conv(P_1 ∪ P_2). Show that the closure of P equals Q + C, where Q and C are as defined in (b).
{"url":"http://mathhelpforum.com/advanced-math-topics/133218-integer-programming.html","timestamp":"2014-04-20T06:07:48Z","content_type":null,"content_length":"30106","record_id":"<urn:uuid:c10431fa-3231-4e03-ba35-f19320fb6d04>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
Conflict resolution in the scheduling of television commercials
Rajeev Kohli
Coauthor(s): Daya Gaur, Ramesh Krishnamurti.

We extend a previous model for scheduling commercial advertisements during breaks in television programming. The proposed extension allows differential weighting of conflicts between pairs of commercials. We formulate the problem as a capacitated generalization of the max k-cut problem in which the vertices of a graph correspond to commercial insertions and the edge weights to the conflicts between pairs of insertions. The objective is to partition the vertices into k capacitated sets to maximize the sum of conflict weights across partitions. We note that the problem is NP-hard. We extend a previous local-search procedure to allow for the differential weighting of edge weights. We show that for problems with equal insertion lengths and break durations, the worst-case bound on the performance of the proposed algorithm increases with the number of program breaks and the number of insertions per break, and that it is independent of the number of conflicts between pairs of insertions. Simulation results suggest that the algorithm performs well even if the problem size is small.

Source: Operations Research
Exact Citation: Gaur, Daya, Ramesh Krishnamurti, and Rajeev Kohli. "Conflict resolution in the scheduling of television commercials." Operations Research 57, no. 5 (September 2009): 1098-1105.
Volume: 57
Number: 5
Date: September 2009
{"url":"http://www0.gsb.columbia.edu/whoswho/more.cfm?uni=rk35&pub=2279","timestamp":"2014-04-16T04:23:22Z","content_type":null,"content_length":"4872","record_id":"<urn:uuid:bcc3707f-4a62-4465-8b55-b2879cc04a16>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
Two sets intersect => their boundaries also intersect?

March 1st 2010, 02:20 PM #1
In a topological space, if two subsets A and B intersect, do their boundaries also intersect? If not, under what conditions can this statement be true? Thanks!

March 1st 2010, 02:23 PM #2
What about $A=(0,1)$, $B=\left[\tfrac{1}{2},\tfrac{3}{4}\right]$? We see that $A\cap B\neq \varnothing$ but $\partial A\cap\partial B=\{0,1\}\cap\left\{\tfrac{1}{2},\tfrac{3}{4}\right\}=\varnothing$.
{"url":"http://mathhelpforum.com/differential-geometry/131478-two-sets-intersect-their-boundaries-also-intersect.html","timestamp":"2014-04-16T19:52:00Z","content_type":null,"content_length":"34238","record_id":"<urn:uuid:932d8676-7a2b-4773-b41c-cf42088f0ae8>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving Word Problems

Now that we've practiced turning words into linear equations, let's actually solve a couple of word problems. This is usually a three-step process:

1. Find the linear equation being described.
2. Figure out what question is being asked, and answer that question.
3. Check your answer.

The fourth step, "take a nap," is totally optional.

Sample Problem

Jenna works at a retail shop. Yes, she still works there, even after all her thievery, but she will tell you it has nothing to do with her old man owning the joint. She still makes $10 per hour, plus $3 for each item she sells.

1. How much does Jenna make in one hour if she sells 5 items during that hour?
2. How many items would Jenna need to sell in an hour to make $43 during that hour?

This word problem is describing a line with an equation we found earlier: y = 3x + 10. Since we've found the linear equation, now we can answer the questions.

1. How much does Jenna make in one hour if she sells 5 items during that hour?

Since x is the number of items Jenna sells during one hour, if Jenna sells 5 items during an hour we want to have x = 5. Then y = 3(5) + 10 = 25, which means Jenna would be paid $25. This amount doesn't include tips. Yeah, she makes tips, too. What can we say, this girl knows how to turn a buck.

2. How many items would Jenna need to sell in an hour to make $43 during that hour?

Since y is the amount Jenna is paid, if Jenna makes $43 we want to have y = 43. Then, using the equation of the line, we have 43 = 3x + 10. We can solve this equation for x to find x = 11.

Since x is the number of items Jenna sells during an hour, in order to make $43 Jenna must sell 11 items. Given her foolproof sales technique of breaking down into tears whenever someone decides not to buy something, she shouldn't have any problem hitting that mark. Let's check that this is correct, though: if Jenna sells 11 items she will make 3(11) + 10 dollars, which is indeed $43.

Sample Problem

Marcio spent $7 per day. Knowing Marcio, he probably spent it on Lotto scratchers. After five days, he had $8 left. How much money did Marcio start with?

First, we need to come up with a linear equation. The amount of money Marcio has depends on how many days have passed. Let's have x be the number of days that have passed, and y be the amount of money Marcio has. The statement "After five days, he had $8 left" tells us that the point (5, 8) is on the graph. It also tells us he "shockingly" hasn't struck it rich yet, or he probably would have given up on these silly things by now. Since Marcio is spending $7 per day, the slope of the line is -7. We can use this information to find an equation for the line. Let's use point-slope form, since we have a point and a slope. We find the equation

y – 8 = -7(x – 5)

Now we can worry about answering the question. The amount of money Marcio started with is the amount of money he had when 0 days have passed. Oh, to go back in time and have all that hard-earned cashola back, eh, Marcio? We want to find the y-intercept of the line. We can do this by rearranging our point-slope equation into slope-intercept form: y = -7x + 35 + 8, or y = -7x + 43.

The y-intercept is 43, which means Marcio started with $43. Hey... that's how much Jenna made from selling her 11 items! These two might be in cahoots... Let's make sure we're right. If Marcio started with $43 and spent $7 per day, after 5 days he would have 43 – 5(7) = 43 – 35, which is indeed 8 dollars.
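If you want to check this kind of setup with a computer algebra system, here is a small Python/SymPy sketch (ours, not Shmoop's) that solves the Marcio problem symbolically:

from sympy import symbols, Eq, solve

x, y = symbols('x y')

# Marcio spends $7/day and has $8 left after 5 days:
# point (5, 8) with slope -7, in point-slope form.
line = Eq(y - 8, -7 * (x - 5))

# Money he started with = value of y when x = 0 (the y-intercept).
start = solve(line.subs(x, 0), y)[0]
print(start)  # 43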
Word problems that involve a linear equation can give us the information we need to write that equation in several different ways. We could be told two points on the line, or a point and a slope, or the y-intercept and the slope, or both intercepts. We could be given a treasure map that will lead us to the information we need, although those problems are more rare. Word problems can ask questions about the intercepts of the line or the slope. They can provide one coordinate of a point on the line and then ask for the other coordinate. Come to think of it, they ask us for a whole lot of stuff without giving much back in return. We are in a one-sided relationship, and should probably get out of it. We'll see what our therapist has to say about this on Tuesday.

After we find the line described by the word problem, the trick, as usual, is to figure out what the question is actually asking. Don't be distracted by any of its mumbo-jumbo.

Solving Word Problems Practice:

1. Antonio works for a car dealership. He's paid a base salary plus a commission for each car he sells. One year Antonio sold 10 cars and made $35,000. Another year he sold 13 cars and made $39,500. Gee, Antonio. Only 23 cars in two whole years? What are you doing, being nice and non-aggressive toward your customers? What kind of a car salesman are you? What is Antonio's base salary, and what is his per-car commission?

2. A dolphin eats seven fish per hour. How many hours until the dolphin has eaten 56 fish?

3. Leslie sells pianos. She gets a base salary of $30,000 per year plus a commission for every piano she sells. Last year she sold 20 pianos and made $48,000. What is her per-piano commission?

4. Giovanni starts 700 miles from L.A. and drives straight towards the city at 55 miles per hour. After how many hours of driving is he 370 miles away from L.A.?

5. Anna made forty-one cookies and went to a party. She gave each kid 2 cookies, and had 3 cookies left over. She threw those out rather than eat them herself because she's trying to watch her girlish figure. How many kids were at the party?

6. Robert lives due south of an amusement park. One day he left his home and drove due south, directly away from the amusement park. He couldn't take the screaming kids any more. After two hours, Robert was 95 miles from the amusement park. After five hours, he was 212 miles away. Strangely, he could still hear them...
   1. How far is Robert's home from the amusement park?
   2. How fast was Robert driving?
{"url":"http://www.shmoop.com/linear-equation-systems/solving-word-problems-help.html","timestamp":"2014-04-18T00:42:58Z","content_type":null,"content_length":"46414","record_id":"<urn:uuid:d01c8aa6-d8cc-4fdc-8d87-825bfe1a94a8>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
SparkNotes: SAT Subject Test: Math Level 2: Polar Coordinates

Polar Coordinates

Polar coordinates make rare appearances on the Math IIC. They offer an alternate way of expressing the location of a point in the coordinate plane. Instead of measuring a given point's distance from the y and x axes, as rectangular coordinates do, polar coordinates measure the distance between a point and the origin, and the angle between the positive x-axis and the ray whose endpoint is the origin that contains the point. The distance from a point to the origin is the first coordinate, r, and the angle between the positive x-axis and the ray containing the point is the second coordinate, θ.

The distance r and the angle θ are both directed -- meaning that they represent the distance and angle in a given direction. A positive angle starts at the positive x-axis and rotates counterclockwise, whereas a negative angle rotates clockwise. Once you have rotated through an angle of θ degrees (or radians), a positive value of r means that the point lies on the ray at which the angle terminated. If r is negative, however, then the point lies r units from the origin in the opposite direction of the specified angle. It is possible, therefore, to have negative values for both r and θ.

In the rectangular coordinate system, each point is specified by exactly one ordered pair. This is not true in the polar coordinate system. A point can be specified by many ordered pairs. To express the same point using different polar coordinates, simply add or subtract 360° to the measure of θ. The point (7, 45°), for example, can also be expressed as (7, 405°) or (7, –315°). Another way to express the same point using different polar coordinates is to add or subtract 180° and reverse the sign of r. The point (7, 45°), for example, is the same as (–7, 225°) and (–7, –135°). Generally speaking, any point (r, θ) is also given by the coordinates (r, θ + 2nπ) and (–r, θ + (2n + 1)π), where n is an integer. However, the usual way to express a point in polar coordinates is with a positive r and θ between 0° and 360°. A given point has only one set of polar coordinates that satisfies these conditions.

For the Math IIC, you should know how to convert polar coordinates into rectangular coordinates and back. To make these conversions, you have to have some knowledge of trigonometry, which we will cover in the next chapter. To find the normal rectangular coordinates of the point (r, θ), use the following two formulas:

x = r cos θ
y = r sin θ

To find the polar coordinates of the point (x, y), use these formulas:

r = √(x² + y²)
tan θ = y/x

(The original page includes a diagram relating (x, y) to (r, θ), which might help you see these relationships.)

For example, the point (12, 60°) can be expressed in rectangular coordinates as the point (12 cos 60°, 12 sin 60°) = (6, 6√3). Practice this conversion by finding the polar coordinates of (–2, –2):

r = √((–2)² + (–2)²) = 2√2
tan θ = –2/–2 = 1, and since (–2, –2) lies in the third quadrant, θ = 225°.

So the polar coordinates of (–2, –2) are (2√2, 225°).
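A short Python sketch (ours, not SparkNotes') of the two conversions; note that atan2 handles the quadrant bookkeeping that the tan θ = y/x formula leaves to you:

import math

def polar_to_rect(r, theta_deg):
    t = math.radians(theta_deg)
    return r * math.cos(t), r * math.sin(t)

def rect_to_polar(x, y):
    r = math.hypot(x, y)
    theta = math.degrees(math.atan2(y, x)) % 360  # put theta in [0, 360)
    return r, theta

print(polar_to_rect(12, 60))   # (6.0, 10.392...) i.e. (6, 6*sqrt(3))
print(rect_to_polar(-2, -2))   # (2.828..., 225.0) i.e. (2*sqrt(2), 225 deg)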
{"url":"http://www.sparknotes.com/testprep/books/sat2/math2c/chapter8section7.rhtml","timestamp":"2014-04-16T04:29:42Z","content_type":null,"content_length":"50396","record_id":"<urn:uuid:68591164-866c-4520-911f-e2ec3fcec55e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Bindings and CAFs on the Haskell Heap
by Edward Z. Yang

Today, we discuss how presents on the Haskell Heap are named, whether by top-level bindings, let-bindings or arguments. We introduce the Expression-Present Equivalent Exchange, which highlights the fact that expressions are also thunks on the Haskell heap. Finally, we explain how let-bindings inside functions can result in the creation of more presents, as opposed to constant applicative forms (CAFs), which exist on the Haskell Heap from the very beginning of execution.

When we've depicted presents on the Haskell Heap, they usually have names. We've been a bit hush-hush about where these names come from, however. Partially, this is because the source of most of these names is straight-forward: they're simply top-level bindings in a Haskell file:

    y = 1
    maxDiameter = 100

We also have names that come as bindings for arguments to a function. We've also discussed these when we talked about functions. You insert a label into the machine, and that label is how the ghost knows what the "real" location of x is:

    f x = x + 3
    pred = \x -> x == 2

So if I write f maxDiameter the ghost knows that wherever it sees x it should instead look for maxDiameter. But this explanation has some gaps in it. What if I write f (x + 2): what's the label for x + 2? One way to look at this is to rewrite this function in a different way: let z = x + 2 in f z, where z is a fresh variable: one that doesn't show up anywhere else in the expression. So, as long as we understand what let does, we understand what the compact f (x + 2) does. I'll call this the Expression-Present Equivalent Exchange.

But what does let do anyway? Sometimes, exactly the same job as a top-level binding. These are Constant Applicative Forms (CAFs). So we just promote the variable to the global heap, give it some unique name, and then it's just like the original situation. We don't even need to re-evaluate it on a subsequent call to the function. To reiterate, the key difference is free variables (see bottom of post for a glossary): a constant applicative form has no free variables, whereas most let bindings we write have free variables.

Glossary. The definition of free variables is pretty useful, even if you've never studied the lambda calculus. The free variables of an expression are variables whose values I don't know simply by looking at the expression. In the expression x + y, x and y are free variables. They're called free variables because a lambda "captures" them: the x in \x -> x is not free, because it is defined by the lambda \x ->. Formally:

    fv(x) = {x}
    fv(e1 e2) = fv(e1) `union` fv(e2)
    fv(\x -> e1) = fv(e1) - {x}

If we do have free variables, things are a little trickier. So here is an extended comic explaining what happens when you force a thunk that is a let binding. Notice how the ghosts pass the free variables around. When a thunk is left unevaluated, the most important things to look at are its free variables, as those are the other thunks that will have been left unevaluated. It's also worth repeating that functions always take labels of presents, never actual unopened presents themselves. The rules are very simple, but the interactions can be complex!

Last time: How the Grinch stole the Haskell Heap

Technical notes. When writing strict mini-languages, a common trick when implementing let is to realize that it is actually syntax sugar for lambda application: let x = y in z is the same as (\x -> z) y. But this doesn't work for lazy languages: what if y refers to x?
In this case, we have a recursive let binding, and usually you need to use a special let-rec construction instead, which requires some mutation. But in a lazy language, it's easy: making the binding will never evaluate the right-hand side of the equation, so I can set up each variable at my leisure. I also chose to do the presentation in the opposite way because I want people to always be thinking of names. CAFs don't have names, but for all intents and purposes they're global data that does get shared, and so naming it is useful if you're trying to debug a CAF-related space leak.

Perhaps a more accurate translation for f (x + 2) is f (let y = x + 2 in y), but I thought that looked kind of strange. My apologies.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

2 Responses to "Bindings and CAFs on the Haskell Heap"

1. > But this doesn't work for lazy languages: what if y refers to x?

Urgh. It sounds like all bindings in all lazy languages are recursive. While it's absolutely true that laziness makes it easy to define value recursion, you may also have non-recursive bindings (and I think Haskell would be better off with an easily accessible non-recursive binder). I think the semantic distinction between recursive and non-recursive bindings is important regardless of the evaluation strategy. If you want to be able to describe both, you may use (\x -> z) y for non-recursive binding, and (\x -> z) (fix (\x -> y)) for the recursive one, regardless of whether the language is strict, lazy, etc.

2. Well, value recursion doesn't quite work in a strict language. :-)
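To make the value-recursion point from the technical notes and the comments concrete, here is a tiny GHCi-checkable Haskell sketch (mine, not from the post or the comments), using the fix-based construction from comment 1:

-- A self-referential binding: xs is defined in terms of xs.
-- Lazily, this is a finite description of an infinite list.
ones :: [Int]
ones = let xs = 1 : xs in xs

-- Desugaring "let x = y in z" to "(\x -> z) y" would break here,
-- because y = "1 : xs" has xs free; comment 1's recipe uses fix instead:
ones' :: [Int]
ones' = (\xs -> xs) (fix (\xs -> 1 : xs))
  where fix f = let x = f x in x

main :: IO ()
main = print (take 3 ones, take 3 ones')  -- ([1,1,1],[1,1,1])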
{"url":"http://blog.ezyang.com/2011/05/bindings-and-cafs-on-the-haskell-heap/comment-page-1/","timestamp":"2014-04-17T04:51:52Z","content_type":null,"content_length":"18271","record_id":"<urn:uuid:39ac78cf-4151-42a1-ad9a-4e96d15a9f0f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
Solve the system

August 19th 2009, 12:06 PM
Solve the system:
$\left\{ \begin{array}{l} x^y = y^x \\ a^x = b^y \end{array} \right.$

August 21st 2009, 03:19 AM
I hope that you like logarithms. Hopefully there may be an easier solution than mine!
From equation #2, $x = y\log_a b$.
Substituting for x in equation #1:
$(y\log_ab)^y=y^{y\log_ab}$
$\therefore y^y(\log_ab)^y=(y^y)^{\log_ab}$
$\therefore (\log_ab)^y=(y^y)^{(\log_ab-1)}$
Taking the log of both sides to base $\log_ab$:
$\therefore y=(\log_ab-1)\,y\log_{\log_ab}(y)$
Dividing through by y:
$\therefore (\log_ab-1)\log_{\log_ab}(y)=1$
$\therefore \log_{\log_ab}(y)=\frac{1}{\log_ab-1}$
$\therefore y=(\log_ab)^{\frac{1}{\log_ab-1}}$
Example: say a = e = 2.718 and b = 17. Then y = 1.764875 and, from equation #2, $x=\log_a(b^y)=5.000267$.

August 24th 2009, 12:46 AM
It is usual for letters from the first half of the alphabet to be considered as constants and letters from the last half of the alphabet to be variables. Therefore, if the question had been written more formally it might have said something like: "Solve the following system of equations for x and y, given the constants a and b."

August 24th 2009, 04:32 AM
That to me means no more than "it is usual for men to wear pants and women to wear dresses". "It is usual" is quite dangerous in mathematics... In programming, using a to m as variables is not a "syntax error". Ain't arguing, mate, just my opinion.
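A quick numerical check of the worked example (a = e, b = 17), in a Python sketch that is not from the thread:

import math

a, b = math.e, 17.0
k = math.log(b, a)               # log_a(b)
y = k ** (1.0 / (k - 1.0))       # y = (log_a b)^(1/(log_a b - 1))
x = y * k                        # from a^x = b^y

print(x, y)                      # ~5.000267, ~1.764875
print(x**y, y**x)                # should agree (both ~17.1)
print(a**x, b**y)                # should agree as well (~148.4)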
{"url":"http://mathhelpforum.com/advanced-math-topics/98600-solve-system-print.html","timestamp":"2014-04-17T23:04:58Z","content_type":null,"content_length":"9537","record_id":"<urn:uuid:5dacc51e-66fa-447f-b002-3e32d3bf2db5>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
Simultaneous equations

September 15th 2007, 02:59 AM #1
Need help on this one please.... Thanks very much.
Sandra can take two alternative routes from Melbourne to Brisbane. If she takes route A she passes through 8 toll booths and takes 19 hours and 10 minutes to get there, and the trip costs her $155.00. If she takes route B she passes through 2 toll booths and takes 21 hours and 40 minutes to get there, costing her $140.00. All toll booths charge the same price. How much does petrol cost Sandra per hour, and what price do the toll booths charge?

September 15th 2007, 03:21 AM #2
We know the toll has to be paid a certain number of times for each trip and is the same, and the running cost will be the same per hour (fuel per hour). The total cost of each trip is known, so if we write out the various costs of each trip we get the known total. Try solving these equations for X and Y. X will be the cost of the toll and Y will be the cost per hour of the trip.
8x + (19 + 1/6)y = 155
2x + (21 + 2/3)y = 140
(If you don't get along with fractions very well, you might like to convert the times to minutes, then multiply the final answer by 60.)
Have a go at it and let me know if you'd like a worked solution...
[Ans: Toll = $5, Fuel = $6/hr]
Last edited by Spimon; September 15th 2007 at 03:32 AM.

September 15th 2007, 04:16 AM #3
Thanks so much for your quick response. I will try and figure it out. If I have problems I will get back to you. Thanks again.

September 15th 2007, 06:34 AM #4
Thank you so much Spimon, I worked it out. It wasn't easy but it was done (with a lot of help from above, and yourself of course)... Thanks a lot.

September 15th 2007, 06:47 AM #5
No worries matey. Definitely better if you do the working yourself so you really understand what's going on. Well done getting the answer!
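A small Python check of Spimon's system (not from the thread), using exact fractions and Cramer's rule:

from fractions import Fraction

# 8x + (19 + 1/6)y = 155
# 2x + (21 + 2/3)y = 140
a1, b1, c1 = Fraction(8), Fraction(19) + Fraction(1, 6), Fraction(155)
a2, b2, c2 = Fraction(2), Fraction(21) + Fraction(2, 3), Fraction(140)

det = a1 * b2 - a2 * b1          # Cramer's rule on the 2x2 system
x = (c1 * b2 - c2 * b1) / det    # toll per booth
y = (a1 * c2 - a2 * c1) / det    # fuel cost per hour

print(x, y)  # 5 and 6: toll = $5, fuel = $6/hr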
{"url":"http://mathhelpforum.com/algebra/18984-simultaneous-equations.html","timestamp":"2014-04-17T11:12:57Z","content_type":null,"content_length":"39172","record_id":"<urn:uuid:53cfe76d-11ea-43b8-8b0e-898901c29414>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from April 2010 on ManjushaJoshi's Blog

The project on the use of open source software for teaching Mathematics has now started at Bhaskaracharya Institute in Maths (BIM), Pune. Initially there will be 4 national workshops based on Scilab, LaTeX, Maxima, Geogebra, Linux, and an introduction to SAGE & R. These will be 5-day workshops for college teachers of Mathematics. More information on the workshops can be found at http://fossme.bprim.org

20 Apr
More workshops on Scilab and LaTeX in Feb-March, 2010
• 19th April, Numerical Methods with Scilab, MIT COE, Pune; 85 second-year students participated.
• Dept. of Maths, Nagpur University, 29-31 March. Workshop on LaTeX and other free & open source software; talk on LaTeX and Scilab graphics. Around 55 Maths teachers participated.
• NIT Calicut, Kerala, 14-15 March; around 30-35 student participants. Scilab graphics, "Applications of Maths in Engineering using Scilab".
• U. C. College, Aluwa, Cochin, Kerala, 25-27 Feb. Talked on Scilab, LaTeX, and an introduction to R; around 58 teacher participants. Some photos
• 19-20 Feb, Gnunify 2010, talk on "Beamer: scientific presentation tool using LaTeX"
{"url":"https://manjushajoshi.wordpress.com/2010/04/","timestamp":"2014-04-21T04:32:14Z","content_type":null,"content_length":"22442","record_id":"<urn:uuid:3c7081fb-0474-46a3-b26f-d69a332a0bc0>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
Trains A and B are traveling in the same direction on parallel tracks

I am hoping someone can point me in the right direction. I have no idea where to start.
Trains A and B are traveling in the same direction on parallel tracks. Train A is traveling at 80 mph and B at 84 mph. Train A passes the station at 7:20 pm. If train B passes the same station at 7:50 pm, at what time will train B catch up to train A?
Thank you for any help.

Re: Trains A and B are traveling in the same direction on parallel tracks
jpalmer11 wrote: I am hoping someone can point me in the right direction. I have no idea where to start. Trains A and B are traveling in the same direction on parallel tracks....

To learn how to set up and solve this sort of "uniform rate" exercise, please study this lesson. Once you have learned the basic methodology, please attempt the exercise. (A good start will be to figure out how much of a "head start" Train A has, in having traveled at 80 mph for 0.5 hours.)
If you get stuck, you can then reply with a clear listing of your steps and reasoning thus far. Thank you.
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=7&t=2255&p=6580","timestamp":"2014-04-20T19:41:04Z","content_type":null,"content_length":"19000","record_id":"<urn:uuid:2f78f216-ad78-4194-95ab-858c4637388f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
Newport, DE Geometry Tutor

Find a Newport, DE Geometry Tutor

...This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra, differential equations, analysis, complex variables, number theory, and non-euclidean geometry. I taught Trigonometry with a national tutoring chain for five years. I have taught Trigonometry as a private tutor since 2001.
12 Subjects: including geometry, calculus, algebra 1, writing

I have experience tutoring students for the SAT and ACT in all areas of the tests and have taught mathematics at the high school level. I have a proven track record of increasing students' ACT and SAT scores and improving their skills. My approach is tailored specifically to the student, so no two programs are alike.
19 Subjects: including geometry, calculus, statistics, algebra 1

...I am stern but caring, serious but fun, and nurturing but have high expectations of all of my students. Together as a team, you and I can help your child to do his or her best. I look forward to working with you and your child! I am a certified and current teacher in the public schools.
12 Subjects: including geometry, algebra 1, trigonometry, algebra 2

...I have a B.S. degree in chemical engineering and teach Physical Science 307 (chemistry and physics) at Wilmington University. General subject matter covered: Addition/Subtraction, Multiplication/Division; Factoring and Lowest Common Denominator; Fraction Notation; Decimal Notation; Percent Notation.
7 Subjects: including geometry, chemistry, algebra 1, algebra 2

...I tutor trigonometry using a calm, persistent style and bring practical examples from a career as a professional physicist. I took a one-year course in astronomy as a physics major and I have been fascinated by it ever since. I keep up with the latest research in the scientific journals and have read many books.
10 Subjects: including geometry, calculus, physics, algebra 1
{"url":"http://www.purplemath.com/Newport_DE_geometry_tutors.php","timestamp":"2014-04-16T13:34:14Z","content_type":null,"content_length":"24129","record_id":"<urn:uuid:c6b14acc-006e-4d5e-9146-c405b37c56cf>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
[Help-gsl] Comments/questions about BFGS algorithm implemented for the GSL

From: Alan W. Irwin
Subject: [Help-gsl] Comments/questions about BFGS algorithm implemented for the GSL
Date: Fri, 14 Apr 2006 01:36:42 -0700 (PDT)

Background: I have a GPLed fortran library (see freeeos.sf.net) that requires a routine to minimize the thermodynamic free energy in order to calculate the equation of state (pressure as a function of density and temperature) for the physical conditions in the interiors of stars. I have been experimenting with the BFGS technique to accomplish this task since that technique comes highly recommended in Fletcher's book, "Practical Methods of Optimization", 2nd edition.

For months I have been experimenting with the L-BFGS-B fortran implementation (http://www.ece.northwestern.edu/~nocedal/lbfgsb.html) to minimize the free energy. The L-BFGS-B implementation is recommended everywhere, and it usually works for me, but there are some problems.

(1) The free license is non-standard and has an obnoxious advertising clause that means not only my scientific paper (which is a reasonable requirement which I would do out of professional courtesy in any case), but also unreasonably all papers that ever use my equation of state calculation, must include two references specified by the original authors of the L-BFGS-B fortran implementation.

(2) The authors deliberately do not maintain the code now and instead prefer users (according to recent e-mail from Nocedal to me) to switch to proprietary (and quite costly) software for minimization. Presumably because of the bad free license, nobody else has stepped forward to maintain the L-BFGS-B code.

(3) From my many experiments for a wide range of physical conditions, L-BFGS-B usually does the job, but I have found two bugs: (a) the code sometimes asks for x values substantially outside the bounds specified by the user. Fortunately, I was able to transform my problem to one which did not require bounds, but for general use this is a major deficiency of the L-BFGS-B implementation since its major claim to fame is the ability to specify simple bounds. (b) Even for the transformed unbounded case, I found one case in my experiments where the code ignored a good minimum solution found during the line minimization and demanded a bad solution be used instead.

Because of these licensing and non-robustness issues I have now given up completely on the L-BFGS-B fortran implementation, and I have now implemented a fortran alternative which follows exactly the BFGS technique implemented (in C) in the GSL. It is early days yet, and I have only tested it on one standard problem (Rosenbrock's function), but for that my fortran implementation gives essentially identical results to using the GSL. Here are the results for the iteration number, the two components of x - known minimum, f, and the two components of the gradient for the GSL case. (My own fortran implementation provides essentially identical results.)
1 -2.03710e+00 8.38208e-02 4.15657e+00
2 -1.95205e+00 -1.28596e-01 3.93289e+00
3 -1.95496e+00 -1.21321e-01 3.93253e+00
4 -1.94768e+00 -1.18408e-01 3.82074e+00
116 -4.77646e-05 -9.57178e-05 2.28510e-09
117 -4.01129e-05 -8.03843e-05 1.61161e-09
118 -2.48095e-05 -4.97174e-05 6.16493e-10
119 5.79720e-06 1.16164e-05 3.36557e-11
120 3.19460e-10 -1.72199e-10 6.58936e-17
121 -5.10669e-12 -1.02340e-11 2.61210e-23
122 -1.11022e-16 0.00000e+00 4.94271e-30
123 0.00000e+00 0.00000e+00 0.00000e+00

Obviously it converges very nicely at the end, with the maximum gradient component rapidly going to exactly zero. But why does it take so long (119 iterations) to get to the final rapid convergence phase (both for the GSL C version and my fortran implementation of the same algorithm)? In Fletcher's book, "Practical Methods of Optimization", 2nd edition, the problem only requires 21 iterations to converge (see Table 3.5.1) with Fletcher's own BFGS code. Although my experiments showed the L-BFGS-B implementation is fundamentally non-robust, for this test function it seems to be okay, and it converges in 39 iterations, confirming there is something wrong with the efficiency of the preliminary convergence steps in the GSL case.

I suspect there is a substantial inefficiency problem with the GSL line-search implementation (since the BFGS update part is completely straightforward). I am not any sort of expert in this field, though, so I cannot pin down where the problem is occurring, but a superficial google search reveals a number of complaints about the GSL minimization efficiency. Thus, I urge that someone with some experience of line-search algorithms have a look to make sure some easily rectifiable mistake is not being made. A simple bug fix that results in a factor of 3 to 5 efficiency improvement in optimization would be well worth having.

Meanwhile, I am about to embark on a long series of tests of my fortran implementation of the GSL BFGS technique for solving the equation of state for a wide variety of physical conditions, and I will report back here how robust it is. Hopefully, it will be completely robust (always determining a good solution for all physical conditions) for my equation of state problem.

Alan W. Irwin

Astronomical research affiliation with Department of Physics and Astronomy, University of Victoria (astrowww.phys.uvic.ca). Programming affiliations with the FreeEOS equation-of-state implementation for stellar interiors (freeeos.sf.net); PLplot scientific plotting software package (plplot.org); the Yorick front-end to PLplot (yplot.sf.net); the Loads of Linux Links project (loll.sf.net); and the Linux Brochure Project.

Linux-powered Science
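For readers who want to reproduce the GSL side of this experiment, here is a minimal C sketch (not from the original email; the starting point, step size, and tolerances are illustrative guesses, not the poster's exact settings) that minimizes Rosenbrock's function with GSL's BFGS fdfminimizer:

#include <stdio.h>
#include <gsl/gsl_multimin.h>

/* Rosenbrock's function f(x,y) = 100 (y - x^2)^2 + (1 - x)^2 */
static double rosen_f(const gsl_vector *v, void *params)
{
    double x = gsl_vector_get(v, 0), y = gsl_vector_get(v, 1);
    (void) params;
    return 100.0 * (y - x * x) * (y - x * x) + (1.0 - x) * (1.0 - x);
}

static void rosen_df(const gsl_vector *v, void *params, gsl_vector *df)
{
    double x = gsl_vector_get(v, 0), y = gsl_vector_get(v, 1);
    (void) params;
    gsl_vector_set(df, 0, -400.0 * x * (y - x * x) - 2.0 * (1.0 - x));
    gsl_vector_set(df, 1, 200.0 * (y - x * x));
}

static void rosen_fdf(const gsl_vector *v, void *params,
                      double *f, gsl_vector *df)
{
    *f = rosen_f(v, params);
    rosen_df(v, params, df);
}

int main(void)
{
    gsl_multimin_function_fdf func = { rosen_f, rosen_df, rosen_fdf, 2, NULL };
    gsl_multimin_fdfminimizer *s =
        gsl_multimin_fdfminimizer_alloc(gsl_multimin_fdfminimizer_vector_bfgs, 2);
    gsl_vector *x = gsl_vector_alloc(2);
    int status, iter = 0;

    gsl_vector_set(x, 0, -1.2);   /* classical Rosenbrock start (a guess) */
    gsl_vector_set(x, 1, 1.0);
    gsl_multimin_fdfminimizer_set(s, &func, x, 0.01, 1e-4);

    do {
        iter++;
        status = gsl_multimin_fdfminimizer_iterate(s);
        if (status) break;        /* e.g. GSL_ENOPROG: no progress */
        status = gsl_multimin_test_gradient(s->gradient, 1e-10);
    } while (status == GSL_CONTINUE && iter < 1000);

    printf("%d iterations: x = (%g, %g), f = %g\n", iter,
           gsl_vector_get(s->x, 0), gsl_vector_get(s->x, 1), s->f);

    gsl_multimin_fdfminimizer_free(s);
    gsl_vector_free(x);
    return 0;
}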
{"url":"http://lists.gnu.org/archive/html/help-gsl/2006-04/msg00033.html","timestamp":"2014-04-17T04:54:12Z","content_type":null,"content_length":"11939","record_id":"<urn:uuid:071ac164-ec65-4096-869d-cf67964f72e6>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus Questions

October 9th 2008, 08:39 AM #1
Hi, I need help with these questions:
1. Write the composite function in the form f(g(x)), then find the derivative dy/dx: $y=\sin\sqrt{x}$.
2. Find the first and second derivatives of the function $y=\frac{4x}{\sqrt{x+1}}$.
3. Find dy/dx by implicit differentiation: $\sqrt{x+y}=1+x^2y^2$.

October 9th 2008, 08:50 AM #2
1) $f(x)=\sin x$, $g(x)=\sqrt{x}$; by the chain rule: $dy/dx=f'(g(x))g'(x)$. Try it by yourself first!

October 9th 2008, 08:56 AM #3
I got $\frac{\cos\sqrt{x}}{2\sqrt{x}}$. Not sure if this is correct. :/

October 9th 2008, 09:12 AM #4
You need to have confidence in yourself! For number 2, use the quotient rule. For number 3, I'll do the LHS and you do the RHS:
$\frac{d}{dx}(x+y)^{1/2}=\frac{1}{2}(x+y)^{-1/2}\cdot \frac{d}{dx}(x+y)=\frac{1}{2}(x+y)^{-1/2}\left(1+\frac{dy}{dx}\right)$
After you do the RHS, solve the equation for dy/dx.

October 9th 2008, 09:37 AM #5
So is my answer to the first question correct? As for number 2, I got $y'=\frac{2x+1}{2(x+1)^{1/2}}$. Not sure about that. :/ As for number 3, I am still lost. :/ It's not that I lack confidence; I am trying to get better with practice, and it's hard when I get stuck on questions. I really appreciate the help.

October 9th 2008, 09:43 AM #6
In implicit differentiation, you treat y as a function of x. So when you differentiate $\sqrt{x+y}=1+x^2y^2$, you need to apply the chain rule to $\sqrt{x+y}$ and the product rule to $x^2y^2$. I'll show the initial first step, and then I'll let you simplify:
$\frac{\,d}{\,dx}\left[\sqrt{x+y}\right]=\frac{\,d}{\,dx}\left[1+x^2y^2\right]\implies \tfrac{1}{2}(x+y)^{-\frac{1}{2}}\cdot\left(1+\frac{\,dy}{\,dx}\right)=2xy^2+2x^2y\frac{\,dy}{\,dx}$
From here, simplify a little bit, but the important part is to group the terms that contain $\frac{\,dy}{\,dx}$ together. Then we can easily solve for $\frac{\,dy}{\,dx}$. Can you take it from here?
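A SymPy sketch (ours, not from the thread) that checks all three derivatives; for problem 3 it computes dy/dx as -F_x/F_y for F(x, y) = sqrt(x+y) - 1 - x^2 y^2:

import sympy as sp

x, y = sp.symbols('x y')

# 1. y = sin(sqrt(x)); chain rule gives cos(sqrt(x)) / (2*sqrt(x))
print(sp.diff(sp.sin(sp.sqrt(x)), x))

# 2. first and second derivatives of 4x / sqrt(x+1)
f = 4*x / sp.sqrt(x + 1)
print(sp.simplify(sp.diff(f, x)))      # (2x + 4)/(x + 1)**(3/2)
print(sp.simplify(sp.diff(f, x, 2)))

# 3. implicit differentiation of sqrt(x+y) = 1 + x^2 y^2:
#    write F(x, y) = 0 and use dy/dx = -F_x / F_y
F = sp.sqrt(x + y) - 1 - x**2 * y**2
dydx = -sp.diff(F, x) / sp.diff(F, y)
print(sp.simplify(dydx))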
{"url":"http://mathhelpforum.com/calculus/52825-calculus-questions.html","timestamp":"2014-04-21T11:38:52Z","content_type":null,"content_length":"46463","record_id":"<urn:uuid:59d549b8-8712-45c3-bb58-e69b18e03eb9>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculating Z score for batting average I checked out a link with tangos instructions for calculating, but am having trouble following how he calculates the z for batting average. For HR's, sb's etc I;m taking (projection-mean for group)/SD for group I understand with avg (as with era or whip) the number of ab's should be a factor. Can someone clarify? Re: Calculating Z score for batting average Good question. Yes, the AB's need to be counted. Player A who has a .320 average over 200 ABs might actually contribute less than play B who has a .310 average over 600 ABs. I'll tell you what I do, would be interested in hearing if others have different methods. I assume I already have the rest of my team in place, and it's a completely league average team. That would be about a .270 average. Say I already had 10 hitters in place averaging 500 ABs. That would be 5000 AB for my team, at .270, they would have 1350 hits. The numbers for average change a bit using different roster sizes, so I change the formula based on the league I want values for. So for each player, I add his ABs and hits to my league total, then see what my team BA would be. Let's use ARod's CHONE projections as an example. ARod is projected to hit 159/541 = .294. So add ARod's 159 hits to my existing 1350, and his 541 ABs to my 5000. We then have my team BA is 1509/5541 = .2723. So ARod brings my team BA up .0023 or 2.3 points if you will. Do the same thing for every player and take z-scores. Re: Calculating Z score for batting average I do like Rock does in a way. If I'm trying to determine the value of a 2B within the draftable 2B universe (15 players let's say), the value of a player's BA will be weighted by how many ABs I have projected for him relative to the average of all 15. TennCare rocks!!!! Re: Calculating Z score for batting average Thanks alot - do you use IP and work it the same way for era and whip? Re: Calculating Z score for batting average Mike_nyc wrote:Thanks alot - do you use IP and work it the same way for era and whip? Sure do Re: Calculating Z score for batting average I think TheRock's way works, but here's another way to think about it: First find what the average BA of the entire draft pool would be: Take all 168 hitters drafted (or however many your league drafts) and get the league average BA. Let's say it ends up at .270. The basic formula Tango suggests is: H - (AB * league_average) So if A-Rod gets 159 H in 549 AB: 159 - (541 * .270) = 159 - 146 = 13 Here's what that formula is doing: Let's suppose that a league average player (the .270 hitter) gets the same number of AB as A-Rod (541). In that situation, how many hits would he have? He would get 146 hits (541 * .270). Now A-Rod is projected to get 159 hits. When we subtract the pro-rated hits of the average player (146), we find that A-Rod is projected to get 13 more hits than the league average. That 13 is what Tango uses for xH. EDIT: I can't subtract. Re: Calculating Z score for batting average Note that this general formula works to convert any rate stat into a counting stat: numerator - (denominator * league_average) So ,for example, OBP: xOBP = (H + BB + HBP) - ( (AB + BB + HBP + SF) * league_average) For stats where a small number is good, all you have to do is multiply by -1: xERA = ( ER - (IP * league_average) ) * -1 xWHIP - ( (BB + H) - (IP * league_average) ) * -1 EDIT: Note that for ERA I'm assuming for simplicity that it's ER/IP, and the league average is the league average ER/IP. 
It would work the same if you prefer multiplying/dividing everything by 9.

Re: Calculating Z score for batting average
This has been great info, thanks a lot. After calculating all the xH's, he calculates the SD of these values as part of his formula, correct?

Re: Calculating Z score for batting average
Mays wrote: I think TheRock's way works, but here's another way to think about it: First find what the average BA of the entire draft pool would be: Take all 168 hitters drafted (or however many your league drafts) and get the league average BA. Let's say it ends up at .270. The basic formula Tango suggests is: H - (AB * league_average) So if A-Rod gets 159 H in 541 AB: 159 - (541 * .270) = 159 - 146 = 13 Here's what that formula is doing: Let's suppose that a league average player (the .270 hitter) gets the same number of AB as A-Rod (541). In that situation, how many hits would he have? He would get 146 hits (541 * .270). Now A-Rod is projected to get 159 hits. When we subtract the pro-rated hits of the average player (146), we find that A-Rod is projected to get 13 more hits than the league average. That 13 is what Tango uses for xH. EDIT: I can't subtract.

In the end, this actually takes ABs back out of the equation. xH tells us how much a player's performance increases the overall team numerator when computing team BA, but not how much it increases the denominator. And you need both to know the value. I.e. A-Rod has 13 hits over average. My team altogether has 60 hits over average. Some other team also has 60 hits over average. The winner is whichever team attained that in the fewest at bats. It's information you need to know. But as a rough guide of value, it does have its purpose.

Re: Calculating Z score for batting average
Another question if I may - I have a player pool of 250 hitters. For a league where 150 are used, I use the top 150 values for each statistical category, correct? Then for a league with only 100 batters, I just cut the values used down to the top 100 for each?
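For anyone who wants the whole pipeline in one place, here is a minimal sketch in Python of the xH-then-z-score approach described in this thread. The A-Rod line uses the 159/541 projection quoted above; the other players, the pool size, and the .270 league average are made-up placeholders, not real projections.

import statistics

def x_hits(hits, at_bats, league_avg):
    # hits above what a league-average hitter would collect in the same ABs
    return hits - at_bats * league_avg

league_avg = 0.270
projections = [              # (name, projected H, projected AB)
    ("A-Rod", 159, 541),     # from the thread
    ("Player B", 186, 600),  # hypothetical
    ("Player C", 64, 200),   # hypothetical
]

xh = {name: x_hits(h, ab, league_avg) for name, h, ab in projections}
mu = statistics.mean(xh.values())
sd = statistics.stdev(xh.values())
z_scores = {name: (v - mu) / sd for name, v in xh.items()}
print(xh)
print(z_scores)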
{"url":"http://www.fantasybaseballcafe.com/forums/viewtopic.php?t=368208","timestamp":"2014-04-21T05:17:09Z","content_type":null,"content_length":"79014","record_id":"<urn:uuid:b4caa8ae-1688-4181-ac90-c626bd646e49>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
M Theory Lesson 64 Whilst on the topic of AdS/CFT, Michael Rios has an interesting post on dimension altering weak coupling phase transitions for N=4 SUSY Yang-Mills. A continuous change in dimension from six down to five is reminiscent of Thurston's beautiful fractal 2-spheres, which are filled with a 1-dimensional curve. These arise in the study of 3-manifolds such as those with 1-punctured torus fibres over the circle. The punctures draw out a boundary for the manifold by tracing a knot. Now according to Matti, the fractional modular domains would fit into the domain for the once punctured torus moduli (the $n=2$ case) on the upper half plane. Perhaps the $n=5$ domain (or rather the theta functions) could be used to model a 5-sphere, much as the j-invariant Belyi map links the $n=2$ domain to $\mathbb{CP}^1$. Note: For the new readers to this blog, our use of the term M Theory must not be confused with its more popular usage in string theory related papers. The letter M stands here possibly for Motive, or perhaps Monad. Although these terms do appear in the popular literature, they rarely correspond to the physical usage we would like to make of them. Dear Kea, thanks for the clarification of terms. Of course, although I cannot take M-theory in the conventional sense seriously as a physical theory, I feel great sympathy for those who work seriously to develop it or any other theory, and it is very stimulating to exchange ideas. What creates tensions is the refusal of string theorists to publicly respond to the criticism and admit some basic difficulties besides hype. And of course, the censorship in archives which emerged around the second superstring revolution is rather frustrating. Yes, censorship has been a real concern, which is why we are so fortunate to live through the web revolution. Thanks for the clarity in your definition of M-theory! ;-) Yes, censorship has been a real concern, which is why we are so fortunate to live through the web revolution. – Kea It's self-devaluating. Before the web everyone in a democracy was free to stand on a soap box at a street corner and lecture those passing if they wanted to hear (so long as they didn't cause too much of an obstruction in so doing). That was the definition of freedom. Now it is the internet, doing the same thing. How many sites are there on the internet? The internet has not really changed things for the better. If anyone uses it to try to bypass censorship, no one listens. Even Feynman's revolutionary work was censored out until he gave up. Tony Smith quotes the following about Feynman's censorship problems: "… My way of looking at things was completely new, and I could not deduce it from other known mathematical schemes, but I knew what I had done was right. … For instance, take the exclusion principle … it turns out that you don't have to pay much attention to that in the intermediate states in the perturbation theory. I had discovered from empirical rules that if you don't pay attention to it, you get the right answers anyway …. Teller said: "… It is fundamentally wrong that you don't have to take the exclusion principle into account." … … Dirac asked "Is it unitary?" … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron.
Dirac could not think of going forwards and backwards … in time … … Bohr … said: "… one could not talk about the trajectory of an electron in the atom, because it was something not observable." … Bohr thought that I didn't know the uncertainty principle … … it didn't make me angry, it just made me realize that … [ they ] … didn't know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …". The above quotation is from The Beat of a Different Drum: The Life and Science of Richard Feynman, by Jagdish Mehra (Oxford 1994) (pp. 245-248). If you look to see how Feynman's ideas eventually gained attention, it resulted from a long struggle of Dyson versus Oppenheimer, explained by Dyson in a video here (Dyson says Oppenheimer behaved like an "old bigot" and wouldn't listen at first). … it didn't make me angry, it just made me realize that … [they] … didn't know what I was talking about, and it was hopeless to try to explain it further. Thanks for the great Feynman quote, Nigel. But we live in a different time to Feynman. Physics needs a more radical change right now, and nobody who realises what that means can possibly give up trying to explain it. Ummm. I think you're right. Another example, allegedly, of ignored genius was Weyl. I'm reading Weyl's The Theory of Groups and Quantum Mechanics, 2nd ed., 1930. It's kind of funny because it's the book mentioned in Not Even Wrong as being the one written with chapters on symmetry groups alternating with chapters on quantum theory: the joke is that only half the chapters were read. I can see why. All the odd numbered chapters are pure maths (chapter 1: unitary geometry, chapter 3: groups and their representations, chapter 5: the symmetry permutation group and the algebra of symmetric transformations) while the even numbered chapters are straightforward physics (chapter 2: quantum theory, chapter 4: application of the theory of groups to quantum mechanics). The style is totally and obviously different in each case. Surprisingly, the physics chapters are brilliant. The maths chapters begin with pages full of boring definitions that look hard to remember, easy to confuse, etc., while the physics chapters are exciting from the first line, stick to solid facts and are brilliant even in comparison to modern introductions to quantum mechanics. It's like two totally separate books, with the chapters shuffled together. Some of the symmetry group theory is soaking in, but I won't get all I need from Weyl's book. Problem is, I write down a list of things I expect to find out from reading a book before starting to read, and always end up learning hardly any of the things I intended, but a whole load of unexpected gems. The funny thing is, historically all the most mathematical developments in physics have been experimentally guided. I can't think of one instance where maths is any more than a tool in empirically confirmed physics. Weyl couldn't use group theory to sort out forces because there wasn't any data on the weak or strong forces to reveal what the symmetry groups were in 1930. Yet the whole idea of using maths to usefully model physical facts is anathema to the string theorists, obsessed with unobservable speculation with spin-2 gravitons and supersymmetry. Sometime during the 60s or 70s people became disillusioned with the restrictions of reality and observable spacetime and decided there must be more to the universe than the observed number of dimensions, etc.
“These arise in the study of 3-manifolds such as those with 1-punctured torus fibres over the circle.” If you mean by “3-manifolds” something like 3-d manifolds, then you’re slipping into Weyl-type mathspeak, unless you’re just writing for mathematicians.
{"url":"http://arcadianfunctor.wordpress.com/2007/06/06/m-theory-lesson-64/","timestamp":"2014-04-19T11:55:37Z","content_type":null,"content_length":"58860","record_id":"<urn:uuid:3fbf4a41-7a38-4342-acab-3ca7ba907615>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
8. Assume that S is a decidable set of natural numbers, and that f is a total effectively calculable function on N. Explain why {x | f(x) ∈ S} is decidable. (This set is called the inverse image of S under f.)

9. Assume that S is a semidecidable set of natural numbers and that f is an effectively calculable partial function on N. Explain why {x | f(x) is defined and f(x) ∈ S} is semidecidable.

10. In the decimal expansion of π, there might be a string of many consecutive 7's. Define the function f so that f(x) = 1 if there is a string of x or more consecutive 7's and f(x) = 0 otherwise:

    f(x) = 1 if π has a run of x or more 7's
           0 otherwise

Explain, without using any special facts about π or any number theory, why f is effectively calculable.

11. Assume that g is a total nonincreasing function on N (that is, g(x) ≥ g(x + 1) for all x). Explain why g must be effectively calculable.

12. Assume that f is a total function on the natural numbers and that f is eventually periodic. That is, there exist positive numbers m and p such that for all x greater than m, we have f(x + p) = f(x). Explain why f is effectively calculable.

13. (a) Assume that f is a total effectively calculable function on the natural numbers. Explain why the range of f (that is, the set {f(x) | x ∈ N}) is semidecidable.
(b) Now suppose f is an effectively calculable partial function (not necessarily total). Is it still true that the range must be semidecidable?

14. Assume that f and g are effectively calculable partial functions on N. Explain why the set {x | f(x) = g(x) and both are defined} is semidecidable.

1.2 Formalizations – An Overview

In the preceding section, the concept of effective calculability was described only very informally. Now we want to make those ideas precise (i.e., make them part of mathematics). In fact, several approaches to doing this will be described: idealized computing devices, generative definitions (i.e., the least class containing certain initial functions and closed under certain constructions), programming languages, and definability in formal languages. It is a significant fact that these very different approaches all yield exactly equivalent concepts. This section gives a general overview of a number of different (but equivalent) ways of formalizing the concept of effective calculability. Later chapters will develop a few of these ways in full detail.
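Returning to Exercise 8 above, here is a hedged, concrete illustration in Python of the shape of the argument the exercise is after (the particular f and S are made-up examples, not from the book): composing a total calculable function with a decision procedure for S yields a decision procedure for the inverse image, because both pieces always halt.

def f(x):
    # some total effectively calculable function (hypothetical example)
    return 3 * x + 1

def decide_S(y):
    # a decider for S = the even numbers (hypothetical example)
    return y % 2 == 0

def decide_inverse_image(x):
    # halts on every x: f is total and decide_S always halts
    return decide_S(f(x))

print([x for x in range(10) if decide_inverse_image(x)])  # [1, 3, 5, 7, 9]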
{"url":"http://my.safaribooksonline.com/book/-/9780123849588/chapter-1dot-the-computability-concept/12_formalizations__an_overview?bookview=contents","timestamp":"2014-04-18T08:36:54Z","content_type":null,"content_length":"47440","record_id":"<urn:uuid:6a27e52c-4aac-4070-809a-a3d6cabd0660>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
In a sample of items from a population, the average of the squares of the individual deviations of the items from the mean. The sample variance is normally denoted by s², the variance of the whole population by σ² (see also standard deviation).
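A minimal sketch of the computation in Python (my own example data, not from the entry). Note that whether the sample variance divides by n or by n − 1 (Bessel's correction) depends on convention; both variants are shown here.

def population_variance(xs):
    mu = sum(xs) / len(xs)                               # mean
    return sum((x - mu) ** 2 for x in xs) / len(xs)      # sigma squared

def sample_variance(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / (len(xs) - 1)  # s squared, n - 1 convention

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(population_variance(data))  # 4.0
print(sample_variance(data))      # about 4.571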
{"url":"http://www.daviddarling.info/encyclopedia/V/variance.html","timestamp":"2014-04-21T07:20:56Z","content_type":null,"content_length":"5473","record_id":"<urn:uuid:27e083e2-bda7-4b26-9d5f-a9d05c0eecc9>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
amplitude for cos graph

From the diagram, I can say that the amplitude a is 3, but the ans given is 1.5. The y intercept b is 0, but the ans given is 1.5. And how to determine the n? The ans for n is 4. I need the above ans before continuing to the b question.. tq

I had a big thing typed out but lost it. Long story short - their answers are correct. This graph has been flipped and shifted horizontally. Go back to what the graph of cosine looks like; if you still need help let me know.

Do you mean the attached graph is already flipped and shifted.. I need your help on this. Thanks for your quick reply

Well, yes, the attached graph has already been flipped and shifted. . . but you already know that from the equation they give you (assuming they aren't being jerks): If you work from scratch and start applying the transformations: -cos(x): Graph of cosine is flipped (so 0,1 becomes 0,-1) -acos(x): Graph of cosine is stretched/compressed by a factor of a. -acos(nx): Graph of cosine's period is adjusted from $2\pi$ to $\frac{2\pi}{n}$ -acos(nx)+b: Graph of cosine is shifted vertically by b units. You then end up with the graph you are looking at right now. It is up to you however to, using the values of points on this graph as well as an understanding of how to calculate amplitude, period and shifting, figure out what a, b, c and n are.

tq for the above information and I will try to work it out..

How are you going with the problem? Thought I'd help. The best way I think is to first draw in the equilibrium line (sometimes called the mean value line); basically it's the horizontal line through the middle of the graph. That's at y=1.5. That's what b is.

a is the amplitude, which is the max vertical distance between the equilibrium line and the curve, so a = 1.5. (The neg sign on a simply means the graph has been flipped.)

The period of the function is 2pi/n. On your graph the period is clearly 0.5pi. Therefore solve 2pi/n = 0.5pi to give n=4. Hope this helps.
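A quick numeric check of those values (a sketch of my own; the function below is the y = -1.5 cos(4x) + 1.5 implied by the answers in the thread):

import numpy as np

a, n, b = -1.5, 4, 1.5
x = np.linspace(0, 2 * np.pi, 1000)
y = a * np.cos(n * x) + b
print(y.min(), y.max())   # ~0.0 and ~3.0: equilibrium line 1.5, amplitude 1.5
print(2 * np.pi / n)      # ~1.5708, i.e. a period of 0.5*pi, matching the graph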
{"url":"http://mathhelpforum.com/calculus/110654-amplitude-cos-graph.html","timestamp":"2014-04-19T00:43:43Z","content_type":null,"content_length":"42244","record_id":"<urn:uuid:4c661f9b-b987-47e3-b288-23a58da239db>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Increase speed of -replace-

From "fhuebler@gmail.com" <fhuebler@gmail.com>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject st: Increase speed of -replace-
Date Thu, 8 May 2008 12:31:52 -0700

I am looking for a way to increase the speed of -replace-. I have a long string variable consisting of several words that should be reduced to a shorter string, depending on the text in each observation. The problem can be reproduced with the auto data. Assume that we want to replace the text in the variable "make" by a single word. Assume further that the text we are looking for (e.g. "Chev.") is not necessarily at the beginning of the string but that it can be anywhere in the variable. My solution is shown below but it is slow with more than 200 -replace- commands and about 150,000 observations. Is there a faster solution?

sysuse auto
replace make = "AMC" if strpos(make,"AMC")>0
replace make = "Buick" if strpos(make,"Buick")>0
replace make = "Cadillac" if strpos(make,"Cad.")>0
replace make = "Chevrolet" if strpos(make,"Chev.")>0
replace make = "Dodge" if strpos(make,"Dodge")>0
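One common way to cut the cost of this pattern is to scan each observation once against a lookup table of substrings, instead of making one full pass over the data per replace command. As a language-neutral illustration only (this is Python/pandas, not Stata, and the pattern table simply mirrors the examples above):

import pandas as pd

patterns = {            # substring pattern -> canonical make
    "AMC": "AMC",
    "Buick": "Buick",
    "Cad.": "Cadillac",
    "Chev.": "Chevrolet",
    "Dodge": "Dodge",
}

def canonical_make(text):
    # one pass per row: stop at the first matching pattern
    for pat, name in patterns.items():
        if pat in text:
            return name
    return text  # leave unmatched values unchanged

df = pd.DataFrame({"make": ["AMC Concord", "Buick Century", "Cad. Deville"]})
df["make"] = df["make"].map(canonical_make)
print(df)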
{"url":"http://www.stata.com/statalist/archive/2008-05/msg00364.html","timestamp":"2014-04-16T07:19:05Z","content_type":null,"content_length":"6795","record_id":"<urn:uuid:73d8ef6c-13ff-4bf9-8bae-4a602d32fd92>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Catalog of Some Elementary Functions This Demonstration presents a catalog of some elementary functions (no transcendental functions like sine or log included) that can be used to illustrate how each of the parent functions can be transformed by varying different parameters. The transformations include shift, reflection, stretch, and shrink in both horizontal and vertical directions. You can show or hide the domain, range, or graph of the function as well as the expression representing it.
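The transformations the Demonstration varies can be summarized by the usual four-parameter pattern y = a·f(b(x − h)) + k applied to a parent function f. A hedged sketch in Python (the parameter names and the sample parent function are my own, not taken from the Demonstration):

def transform(f, a=1.0, b=1.0, h=0.0, k=0.0):
    # vertical stretch/reflection a, horizontal shrink b, shifts h and k
    return lambda x: a * f(b * (x - h)) + k

parent = lambda x: x ** 2                  # one parent function from the catalog
g = transform(parent, a=0.5, h=1.0, k=3.0) # shrink vertically, shift right and up
print(g(1.0), g(3.0))                      # 3.0 at the shifted vertex, then 5.0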
{"url":"http://demonstrations.wolfram.com/CatalogOfSomeElementaryFunctions/","timestamp":"2014-04-16T21:55:15Z","content_type":null,"content_length":"43627","record_id":"<urn:uuid:54f86e54-aa65-47a4-bdd9-f6ee41b71ddf>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
Weekly Problem 49 - 2011

Inspector Remorse estimates that he can solve the average murder in $x$ hours, a bank robbery in half that time, and a car theft in one third of the time he takes to solve a bank robbery. How much time would he expect to take in solving two murders, six car thefts and four bank robberies?

This problem is taken from the UKMT Mathematical Challenges.
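One way to set up the computation (a sketch of my own; the original page links to its solution separately): a robbery takes x/2 hours and a car theft takes (x/2)/3 = x/6 hours, so the total is 2x + 6(x/6) + 4(x/2) = 2x + x + 2x = 5x hours.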
{"url":"http://nrich.maths.org/2219/index?nomenu=1","timestamp":"2014-04-19T06:54:25Z","content_type":null,"content_length":"3643","record_id":"<urn:uuid:dade1881-5c2b-4c51-ba48-2a726c105c2d>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
Leah Edelstein-Keshet: Publications
Last updated: May 2011
• Understanding Complex Systems. Synergy 5.1 (UBC Faculty of Science publication), December 1999.
Book or Chapters in Books
• Lee A Segel and Leah Edelstein-Keshet (2010) Perspectives on Mathematical Biology; (under review for SIAM; 427pp). [An expanded version of a book project that Lee Segel was developing before he passed away.]
• Edelstein-Keshet, L (2005) Adapting mathematics to the new biology, Chapter in Math & Bio 2010; Linking Undergraduate Disciplines, Lynn A Steen, Ed, Mathematical Association of America (MAA).
• Edelstein-Keshet, L (2005; originally 1988) Mathematical Models in Biology, reprinted by SIAM under the "classics" editions. Link to the SIAM book webpage. Other details on my site.
• Okubo, A, Grunbaum, D, Edelstein-Keshet, L (2001) The Dynamics of Animal Grouping, Chapter 7 in: A Okubo and S Levin, Diffusion and Ecological Problems: Modern Perspectives, Springer Verlag, NY
• Leah Edelstein-Keshet: Nature or Nurture? My Mathematical Biology Upbringing AAAS Highwire Press, ScienceCareers.org Feb 20, 2004.
Recent publications
• Jilkine A, Edelstein-Keshet L (2011) A Comparison of Mathematical Models for Polarization of single eukaryotic cells in response to guided cues, (Review), PLOS Computational Biology 7 (4):e1001121, April 28, 2011. Link to PLOS paper
• Tania N, Prosk E, Condeelis J, Edelstein-Keshet L (2011) A temporal model of cofilin regulation and the early peak of actin barbed ends in invasive tumor cells, Biophys J 100, 1883-1892. Link to Journal
• Mori Y, Jilkine A, Edelstein-Keshet L (2010) Asymptotic and bifurcation analysis of wave-pinning in a reaction-diffusion model for cell polarization, under review, SIAM J Applied Math. Link to ArXiv
• Vanderlei B, Feng J, Edelstein-Keshet L. (2010) A computational model of cell polarization and motility coupling mechanics and biochemistry, submitted. Link to ArXiv
• Green JEF, Waters SL, Whiteley JP, Edelstein-Keshet L, Shakesheff KM, Byrne HM (2010) Non-local models for the formation of hepatocyte-stellate cell aggregates, J Theor Biol 267 (1): 106-120. Link.
• Lukeman R, Li YX, Edelstein-Keshet L (2010) How do ducks line up in rows: inferring individual rules from collective behaviour, PNAS 107 (28): 12576-12580. Link. This paper was featured in a short editorial in Nature 466:163 (July 8, 2010) and on the cover of the AMS Notices, in April 2011
• Das R, Nachbar R, Saltzman J, Bagchi A, Bailey J, Edelstein-Keshet L, Coombs D, Cook J, Hargreaves R, Simon A (2010) Modeling effect of a gamma-secretase inhibitor on amyloid-beta dynamics reveals significant role of an amyloid clearance mechanism, Bull Math Biol. 73(1): 230-247. Link to Journal
• Khadra A, Tsai, Santamaria P, Edelstein-Keshet L (2010) On how monospecific memory-like autoregulatory CD8+ T cells can blunt diabetogenic autoimmunity: a computational approach. Journal of Immunology 185: 5962-5972. Link to Anmar's Publications
• Khadra A, Santamaria P, Edelstein-Keshet L (2010) The pathogenicity of self-antigen decreases at high levels of autoantigenicity: a computational approach, International Immunology 22 (7), 571-852. Link to Anmar's Publications
• Khadra A, Santamaria P, Edelstein-Keshet L (2008) The role of low avidity T cells in the protection against Type 1 Diabetes: a modeling investigation, J theor biol 256 (1): 126-141. Link to Anmar's Publications
• Lukeman R, Li Y X, Edelstein-Keshet L (2008) A conceptual model for milling formations in biological aggregates, Bull Math Biol.
71(2): 352-382 Preprint • Mori Y, Jilkine A, Edelstein-Keshet L (2008) Wave-pinning and cell polarity from a bistable reaction-diffusion system, Biophysical Journal 94(9): 3684-3697. Preprint and Supplementary material • Maree AFM, Komba M, Finegood D, Edelstein-Keshet L(2008) A quantitative comparison of rates of phagocytosis and digestion of apoptotic cells by macrophages from normal (BALB/c) and diabetes-prone (NOD) mice, Journal of Applied Physiology 104(1): 157-169 Preprint. • Li YX, Lukeman R, Edelstein-Keshet L (2008) Minimal mechanisms for school formation in self-propelled particles, Physica D 237 (5):699-720. • Dawes AT, Edelstein-Keshet L (2007) Phosphoinositides and Rho proteins spatially regulate actin polymerization to initiate and maintain directed movement in a 1D model of a motile cell, Biophysical Journal 92: 1-25 (February) (Uncorrected proofs). • Jilkine A, Maree AFM, Edelstein-Keshet L (2007) Mathematical model for spatial segregation of the Rho-family GTPases based on inhibitory crosstalk, Bull Math Biol 68(5):1169-1211 Preprint • Mahaffy JM, Edelstein-Keshet L (2007) Modeling cyclic waves of circulating T-cells in autoimmune diabetes, SIAM J Applied Math (SIAP) 67(4): 915-937Preprint. • Maree AFM, Jilkine A, Dawes AT, Greineisen VA, Edelstein-Keshet L (2006) Polarization and movement of keratocytes: a multiscale modelling approach. Bull Math Biol, 68(5):1169-1211. Preprint. You can also see Movies on Stan's website. • Maree AFM, Santamaria P, Edelstein-Keshet L (2006) Modeling competition among CD8+ T cells in autoimmune diabetes: implications for antigen-specific therapy. International Immunology, 18(7): 1067-1077 reprint. • Dawes AT, Ermentrout GB, Cytrynbaum EN, Edelstein-Keshet L (2006) Actin filament branching and protrusion velocity in a simple 1D model of a motile cell, J theor Biol 242(2):265-279. reprint • Gutenkunst R, Newlands N, Lutcavage M, Edelstein-Keshet L (2006) Inferring resource distributions from Atlantic bluefin tuna movements: an analysis based on net displacement and length of track, J theor Biol 245(2): 243-257.Preprint. • Maree AFM, Kublik R, Finegood DT, Edelstein-Keshet L (2006) Modelling the onset of Type 1 Diabetes: can impaired macrophage phagocytosis make the difference between health and disease? Philosophical Transactions of the Royal Society A, 364, 1267-1282. reprint • Han B, Serra P, Amrani A Yamanouchi J, Maree AFM, Edelstein-Keshet L, Santamaria P (2005) Prevention of diabetes by manipulation of anti-IGRP autoimmunity: high efficiency of a low-affinity peptide. Nature Medicine (June) 11 (6): 645-652.reprint • Maree AFM, Komba M, Dyck C, Labecki M, Finegood DT, Edelstein-Keshet L (2005) Quantifying macrophage defects in type 1 diabetes. J theor Biol (April 21) 233(4): 533-551. reprint • Mogilner A, Edelstein-Keshet L, Bent L, Spiros A (2003) Mutual interactions, potentials, and individual distance in a social aggregation, J Math Biol 47(4): 353-389. reprint • Luca M, Chavez-Ross A, Edelstein-Keshet L, Mogilner A (2003) Chemotactic signalling, microglia, and Alzheimer's disease senile plaques: is there a connection? Bull Math Biol 65: 693-730. reprint • Mogilner A, Edelstein-Keshet L (2002) Regulation of Actin Dynamics in Rapidly moving cells: A quantitative analysis. Biophys. J. 83: 1237-1258. reprint Note: Typo on p1254: col 1, middle of pg: k_{-3} and k_{3}: the coefficients k^{pt} and k^t should be interchanged. 
• Edelstein-Keshet L, Spiros A (2001) Exploring the formation of Alzheimer's Disease senile plaques in Silico, J theor Biol. 216: 301-326. reprint • Edelstein-Keshet L, Israel A, Lansdorp P (2001) A modeller's perspectives on aging: Can Mathematics help us stay young? J theor Biol 213 (4): 325-355. reprint • Edelstein-Keshet, L and Ermentrout, G B (2001) A model for actin filament length distribution in a lamellipod. J Math Biol. 43 (4):325-355 reprint • Edelstein-Keshet, L., and G.B. Ermentrout (2000) Models for the spatial polymerization dynamics of rod-like polymers, J. Math. Biol., 40 (1), 64-96. reprint • Parrish, J. and Edelstein-Keshet, L. (1999) Complexity, Pattern, and Evolutionary Trade-offs in Animal Aggregation, Science, 284 (April 2). Reprint in PS format • Mogilner A, Edelstein-Keshet L (1999) A non-local model for a swarm, J. Math Biology, 38:534-570. reprint • Edelstein-Keshet, L. (1998) Mathematical approaches to cytoskeletal assembly, European Biophysics Journal 27:521-531. reprint • Edelstein- Keshet, L. J. Watmough, and D. Grunbaum (1998) Do traveling band solutions describe cohesive swarms? An investigation for migratory locusts, J. Math. Biol., 36, 515-549. reprint • Spiros, A., and Edelstein-Keshet, L. (1998) Testing a model for the dynamics of actin structures with biological parameter values, Bull. Math. Biol., 60 (2), 275-305. reprint • Edelstein-Keshet, L. and G.B. Ermentrout (1998) Models for the length distribution of actin filaments I: Simple polymerization and fragmentation, Bull. Math. Biol 60(3), 449-475. reprint • Ermentrout, G.B. and L. Edelstein-Keshet (1998) Models for the length distribution of actin filaments II: Polymerization and fragmentation by gelsolin acting together, in press, Bull. Math. Biol 60(3), 477-503. reprint • Dukas, R. and Edelstein-Keshet, L. (1998) The spatial distribution of colonial provisioners, J. Theor. Biol. 190, 121-134. reprint • Mogilner, A. Edelstein-Keshet, L. and G.B. Ermentrout (1996) Selecting a common direction: II. Peak-like solutions representing total alignment of cell clusters, J. Math. Biol., 34, 811-842. • Mogilner, A. and Edelstein-Keshet, L. (1996) Spatio-angular order in populations of self-aligning objects: formation of oriented patches. Physica D 89, 346-367.reprint • Mogilner, A and Edelstein-Keshet, L. (1995) Selecting a common direction: I: how orientational order can arise from simple contact responses between interacting cells. J. Math. Biol. 33; 619-660 • Edelstein-Keshet, L., Watmough, J. and G.B. Ermentrout (1995) Trail-following in social insects: Individual Properties determine population behaviour. Behavioral Ecology and Sociobiology 36(2), 119-133. reprint • Watmough, J. and Edelstein-Keshet, L. (1995) A one dimensional model of trail propagation by army ants. J. Math. Biol., 33, 459-476. reprint • Watmough, J. and Edelstein-Keshet, L. (1995) Modelling the Formation of trail networks by foraging ants. J. Theor. Biol., 176, 357-371. reprint • Edelstein-Keshet, L. (1994) Simple models for Trail following behaviour; trunk trails versus individual foragers, J. Math. Biol. 32:303-328. reprint • Civelecoglu, G. and Edelstein-Keshet, L. (1994) Modelling the dynamics of F-Actin in the cell, Bull. Math. Biol. 56 (4), 587-616. reprint • Ermentrout, G.B., Edelstein-Keshet, L. (1993) Cellular automata approaches to biological modelling, J. Theor. Biol., 160 (1), 97-133. reprint • Crenshaw, H. and Edelstein-Keshet, L. 
(1993) Orientation by helical motion- II, Changing the direction of the axis of motion, Bull. Math. Biol., 55 (1), 213-230. reprint
• Edelstein-Keshet, L.(1991) Heat therapy for tumors, UMAP Journal Vol. 12, (2), 113-142.
• Edelstein-Keshet, L., G.B. Ermentrout (1990) Models for contact mediated pattern formation: cells that form parallel arrays, J. Math. Biol. 29: 33-58. reprint
• Edelstein-Keshet, L., and G.B. Ermentrout (1990) Contact responses of cells can mediate morphogenetic pattern formation, Differentiation: 45 (3), 147-159.
• Edelstein-Keshet L (1986) Mathematical theory for plant-herbivore systems, J Math Biol 24: 25-58. reprint
Conference Proceedings
• Edelstein-Keshet, L (2001) Mathematical models of swarming and social aggregation, invited lecture; The 2001 International Symposium on Nonlinear Theory and its Applications, (NOLTA 2001) Miyagi, Japan (Oct 28-Nov 1, 2001) Preprint in PDF form
{"url":"http://www.math.ubc.ca/~keshet/papers.html","timestamp":"2014-04-20T05:56:37Z","content_type":null,"content_length":"15479","record_id":"<urn:uuid:a6690d19-d58e-4f19-8e7d-492d95d748ac>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple Statistics

The mean and the standard deviation of the marks of a class in an exam were 53 and 7 respectively. The test was then given to a new student who scored 63. Which of the following statements is certainly true?
A) The range increased
B) The mean increased
C) The standard deviation increased
D) The median increased
The answer to this question was B, C and D. I know that the mean will increase as the score is higher than the mean, but I don't understand why the standard deviation and the median would CERTAINLY increase?

Re: Simple Statistics
The mean and the standard deviation of the marks of a class in an exam were 53 and 7 respectively. The test was then given to a new student who scored 63. Which of the following statements is certainly true?
A) The range increased
B) The mean increased
C) The standard deviation increased
D) The median increased
The answer to this question was B, C and D. I know that the mean will increase as the score is higher than the mean, but I don't understand why the standard deviation and the median would CERTAINLY increase?
Look at the definition of variance and you will see why the variance must increase. However, I would argue that it's not certain that the median will change (there may be several scores of 63 in the middle ranks ....)
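A sketch of why the variance (and hence the standard deviation) must go up, using the divide-by-n definition of variance; the n - 1 convention leads to the same condition. Write d = 63 - 53 = 10 for the new score's distance from the old mean and let n be the class size. Adding the new score moves the mean to 53 + d/(n+1) and adds n*d^2/(n+1) to the sum of squared deviations, and a little algebra shows the variance increases exactly when d^2 > s^2*(n+1)/n. Here d^2 = 100 while s^2*(n+1)/n = 49(n+1)/n is at most 98 (its value at n = 1), so the inequality holds for every possible class size; that is why the increase is certain rather than merely likely.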
{"url":"http://mathhelpforum.com/statistics/71091-simple-statistics.html","timestamp":"2014-04-18T21:50:31Z","content_type":null,"content_length":"33581","record_id":"<urn:uuid:238dc75f-38bf-4a17-9403-3c608d82b497>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
A hidden Herbrand theorem: Combining the object, logic and functional paradigms, 2000

This paper describes the Tatami project at UCSD, which is developing a system to support distributed cooperative software development over the web, and in particular, the validation of concurrent distributed software. The main components of our current prototype are a proof assistant, a generator for documentation websites, a database, an equational proof engine, and a communication protocol to support distributed cooperative work. We believe behavioral specification and verification are important for software development, and for this purpose we use first order hidden logic with equational atoms. The paper also briefly describes some novel user interface design methods that have been developed and applied in the project.
Cited by 13 (8 self)

- Annals of Software Engineering, 2001
"... recent advances in web technology, interface design, and specification. Our effort to improve the usability of such systems has led us into algebraic semiotics, while our effort to develop better formal methods for distributed concurrent systems has led us into hidden algebra and fuzzy logic. This paper discusses the Tatami system design, especially its software architecture, and its user interface principles. New work in the latter area includes an extension of algebraic semiotics to dynamic multimedia interfaces, and integrating Gibsonian affordances with algebraic semiotics.
Cited by 7 (2 self)

We show that for any behavioral Sigma-specification B there is an ordinary algebraic specification ~B over a larger signature, such that a model behaviorally satisfies B if and only if it satisfies ~B, where is the information hiding operator exporting only the Sigma-theorems of ~B. The idea is to add machinery for contexts and experiments (sorts, operations and equations), use it, and then hide it. We develop a procedure, called unhiding, that takes a finite B and produces a finite ~B. The practical aspect of this procedure is that one can use any standard equational or inductive theorem prover to derive behavioral theorems, even if neither equational reasoning nor induction is sound for behavioral satisfaction.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=286022","timestamp":"2014-04-16T22:44:26Z","content_type":null,"content_length":"17610","record_id":"<urn:uuid:fa8d2d9d-4509-4951-8545-9e3814b3fc4e>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Quad equation using functions problem

#include <iostream>
using namespace std;
void quadraticequation(float a, float b, float c);
int main()
{
    int a,b,c;
    cout <<"a"<< endl;
}
void quadraticequation(float a, float b, float c)
{
    if (((b*b)-4*a*c)<0)
        cout<<"No real solution"<<endl;
    else if (((b*b)-4*a*c)=0 )
        cout<<"Answer is"<<-b/2*a;
    else if (((b*b)-4*a*c)>0)
    {
        cout<<"X1 ="<< -b+sqrt((b*b)-4*a*c)/2*a;
        cout<<"X2 ="<< -b-sqrt((b*b)-4*a*c)/2*a;
    }
}

In the above code I'm to write out a function that solves a quadratic equation. I have one error that comes up but I don't know how to stop it because it seems okay to me (the beginner).

Try to always tell us what the errors are when you get them. It seems to me the error comes from the fact that you have mistaken '==' for '='. It's in your first 'else if' statement.

Well, mathematically you should have
cout<<"X1 ="<< (-b+sqrt((b*b)-4*a*c))/(2*a);
cout<<"X2 ="<< (-b-sqrt((b*b)-4*a*c))/(2*a);
You could also save some typing by defining a variable for b*b-4*a*c.
Edit: And also cout<<"Answer is"<<-b/(2*a);
Last edited by robatino; 12-07-2006 at 10:09 AM.

#include <math.h>
#include <iostream>
using namespace std;
void quadraticequation(float a, float b, float c);
int main()
{
    int a,b,c;
    cout <<"a"<< endl;
}
void quadraticequation(float a, float b, float c)
{
    if (((b*b)-4*a*c)<0)
        cout<<"No real solution"<<endl;
    else if (((b*b)-4*a*c)==0 )
        cout<<"Answer is"<<-b/(2*a);
    else if (((b*b)-4*a*c)>0)
    {
        cout<<"X1 ="<< (-b+sqrt((b*b)-4*a*c))/2*a;
        cout<<"X2 ="<< (-b-sqrt((b*b)-4*a*c))/2*a;
    }
}

there are two more places with /2*a that should be fixed
The first 90% of a project takes 90% of the time, the last 10% takes the other 90% of the time.

Thanks guys, it works now. I really appreciate the help. I have an exam in this Wednesday and up till today I was clueless, still am, but I think I know the basics.
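A note on why those remaining /2*a spots are wrong: in C++, * and / have equal precedence and associate left to right, so -b+sqrt((b*b)-4*a*c)/2*a is parsed as -b + ((sqrt((b*b)-4*a*c) / 2) * a), not as the quadratic formula. You need the explicit parentheses shown in the earlier correction, (-b+sqrt((b*b)-4*a*c))/(2*a).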
{"url":"http://cboard.cprogramming.com/cplusplus-programming/86312-quad-equation-using-fuctions-problem.html","timestamp":"2014-04-17T19:56:21Z","content_type":null,"content_length":"59829","record_id":"<urn:uuid:52879962-0299-428f-adc8-a2d748f22ebb>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
College Point Calculus Tutor ...Hello,I've been tutoring all aspects of the ASVAB for over 2 years. I have found my knowledge of advanced mathematics, English and other standardized tests can be directly applied to help potential students achieve their goals in this test. I break down the exam into efficient and effective tes... 55 Subjects: including calculus, reading, English, geometry ...I have tutored students both at the high school and college level in CS. I particularly enjoy explaining concepts like data structures and basic algorithms in a user-friendly way, as well as familiarizing students and getting them more comfortable with general programming syntax. I am familiar with Macintosh and other Apple technologies. 37 Subjects: including calculus, chemistry, French, physics ...Therefore, I am fluent in Mandarin, Japanese and English. I am also proficient to teach math to children up to 7th grade because I have tutored my own nieces so far and they both have the best math scores in their class (level 1).I can teach Mandarin at business level and Japanese at conversational level. I am easygoing and get along with children at all ages very well. 6 Subjects: including calculus, Japanese, Chinese, precalculus ...During those times, I have helped numerous students master the subject like I did. Now with 20+ years of tutoring under my belt, I believe that I can give adequate assistance to a new crop of students who may be experiencing difficulty in certain areas of math. I am more than qualified to assist such students, and it is a pleasure of mine that will probably never die. 22 Subjects: including calculus, chemistry, Spanish, geometry ...Neither of his parents spoke French or were able to help him. We worked it out and he was very successful in his new school. The experience was challenging but rewarding. 18 Subjects: including calculus, chemistry, physics, French
{"url":"http://www.purplemath.com/College_Point_Calculus_tutors.php","timestamp":"2014-04-20T21:32:48Z","content_type":null,"content_length":"24198","record_id":"<urn:uuid:7807fb31-9935-4c37-b276-a71eaddc1547>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
making a 2-d array

I have to write a function called transpose that accepts a two-dimensional array of doubles named matrix and an integer named size. The matrix is assumed to be "square" such that size is the number of rows as well as the number of columns (columns = rows). For each element, the function determines which other element corresponds to the first when mirrored about the diagonal axis (from top-left to bottom-right). Be careful to ensure that elements are only transposed once, not twice. The function returns nothing (void). I have never done a 2-d array before, so can someone help me with this?
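The assignment itself asks for C++, but the key idea is language-independent: walk only the entries above the diagonal (column index greater than row index) and swap each with its mirror, so no pair is touched twice. A hedged sketch in Python (the names are my own):

def transpose(matrix, size):
    # in-place transpose of a size x size matrix (list of lists of doubles)
    for i in range(size):
        for j in range(i + 1, size):  # j > i: each mirrored pair is swapped exactly once
            matrix[i][j], matrix[j][i] = matrix[j][i], matrix[i][j]

m = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]
transpose(m, 3)
print(m)  # [[1.0, 4.0, 7.0], [2.0, 5.0, 8.0], [3.0, 6.0, 9.0]]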
{"url":"http://www.cplusplus.com/forum/beginner/119064/","timestamp":"2014-04-16T19:00:39Z","content_type":null,"content_length":"6884","record_id":"<urn:uuid:806f03a1-54bf-45e2-ab7c-f86e68b12cad>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
Does every vector bundle allow a finite trivialization cover?

Suppose there is a vector bundle (smooth, with constant rank finite-dimensional fibres) over a (smooth, second-countable, Hausdorff, not necessarily connected) manifold $B$ of dimension $n$.
(a) Is it true that the manifold $B$ can be covered by a finite number of sets $U_1,\dots,U_N$ s.t. the vector bundle, restricted to $U_i$, is isomorphic to a trivial one for every $i=1,\dots,N$?
(b) If yes, can $N$ be taken to be $n+1$?
P.S. Some observations:
1. It's proven in the book by Milnor and Stasheff that every bundle allows a countable locally finite trivialization cover.
2. Part (a) is obviously trivial for compact manifolds.
3. It seems that (b) is true if $B$ is an $n$-dimensional CW-complex. Proof: Denote by $B_k$ the union of cells of dimension $0,\dots,k$. Prove by induction on $k$ that there are subsets $U_0,\dots,U_k$ of $B$ which cover $B_k$, s.t. the restriction of the bundle to each of them is trivializable. Start with the case $k=0$: construct contractible neighbourhoods of each 0-cell which do not intersect with each other. Take their union. Now to prove the claim for the next value of $k$ it is enough to construct contractible non-intersecting neighbourhoods of each $X_\alpha=e_\alpha\setminus (U_0\cup \dots\cup U_{k-1})$. Call the desired neighbourhood $V_\alpha$. First, note that $X_\alpha$ is closed in $e_\alpha^k$ and doesn't intersect its boundary $\partial e_\alpha^k$, so we can find its neighbourhood in $e_\alpha^k$ which doesn't intersect $\partial e_\alpha^k$. This set is our candidate for $V_\alpha\cap B_k$. Extending it to an open set in $B_{k+1}$ can be done cell by cell: interpreting $e^{k+1}_\beta$ as a unit ball with the center in the origin, we can write every point of it as $r\theta$, where $\theta\in S^k$ and $r\in [0,1]$. We include $r\theta$ in $V_\alpha\cap B_{k+1}$ iff $\theta$ is already there and $r>0.99$. Repeating this procedure we extend it to $B$.
Edit: The sets $U_1,\dots, U_N$ are assumed to be open. I don't ask them to be connected.

6 Answers

The answer (to both questions (a) and (b)) is YES (assuming $B$ is a smooth manifold). A proof can be found in Walschap's book "Metric Structures in Differential Geometry", p. 77, Lemma. For the OP's convenience, here's a sketch of the proof. Choose an open cover of $B$ such that your vector bundle is trivial over each element. From general results in topology, this (and
– Renato G Bettiol Apr 19 '12 at @Theo: thanks! that's exactly what I needed... I'll look what you did so I finally learn to deal with this issue! Thanks again. – Renato G Bettiol Apr 19 '12 at 3:45 3 Walschap's book is available in electronic form here: libgen.info/view.php?id=539769 – Dmitri Pavlov Apr 19 '12 at 8:51 @Renato: The problem, of course, is that Markdown and LaTeX both consider braces special, and so to escape them you need a backslash. Markdown goes first, sees \{ , and replaces it with { , which is all MathJax sees. The solution is to put the entire paragraph between <p> ... </p> tags, which Markdown interprets as "don't process anything in this section". The cost is that you don't get to use any MarkDown syntax within the paragraph, so to make something bold (for example) requires <b> or <strong> or other html. <p> also protects underscores from Markdown, of course. – Theo Johnson-Freyd Apr 21 '12 at 22:22 add comment This should be a comment to the answer of Andreas Blass, but was too long. The Lusternik-Schnirelmann category $\operatorname{cat}(X)$ of a space $X$ is the smallest number $k$ such that $X$ has a cover by open sets $U_1,\ldots , U_k$ which are contractible in $X$. This means the inclusions $U_i\hookrightarrow X$ are null-homotopic. Note that the $U_i$ need not be contractible themselves, or even connected (although each $U_i$ should be contained within a connected component $X_i$). Note also that $\operatorname{cat}(X)$ is greater than or equal to the minimum number of open sets needed to trivialize any vector bundle on $X$, by the bundle creeping lemma and the fact that any bundle over a point is trivial. One of the first theorems about LS-category is that if $X$ is paracompact, then $\operatorname{cat}(X)\le \operatorname{dim}(X)+1$, where $\operatorname{dim}$ denotes the Lebesgue covering dimension (which for manifolds agrees with the usual dimension). So Andreas Blass is correct. up vote 9 down If you want the minimum number of sets in a trivializing cover for your bundle $E$, you can do better, using the notion of sectional category of a fibre bundle (also known as the Schwarz vote genus). The sectional category $\operatorname{secat}(p)$ of a fibre bundle $p\colon\thinspace E\to B$ is the smallest number $k$ such that $B$ has a cover by open sets $U_1,\ldots , U_k$ on each of which $p$ admits a continuous local section (that is, a continuous map $s\colon\thinspace U_i\to E$ such that $p\circ s =\operatorname{incl}\colon\thinspace U_i\hookrightarrow B$). Then the minimum number of sets in a trivializing cover for the vector bundle $E\to B$ equals the sectional category of the frame bundle $F(E)\to B$. Addendum: One can show using obstruction theory that for a $r$-connected manifold $B$, $$\operatorname{cat}(B)< \frac{\dim(B)+1}{r+1}+1.$$ So if, say, your manifold is simply-connected, you can find a trivializing cover with roughly half as many sets. add comment I think what you're looking for (or rediscovering) is the concept of the Lusternik-Schnirelman category of a space - the minimum number of contractible open sets needed to cover the space. up vote 6 More precisely, you need the maximum LS-category of the components of your space. down vote 2 @Andreas Blass : It's not necessary that the open sets be contractible, merely that each of their components are contractible. Once you allow disconnected components, then the OP has a correct proof that $(n+1)$ open sets are necessary for an $n$-dimensional manifold. 
– Andy Putman Apr 19 '12 at 2:23 @Andry Putman : I do allow a cover by disconnected open sets (I've just added this clarification to the question), but I didn't noticed my proof, you are referencing to. I have some proof for the case of CW-complexes, but Wikipedia says only that manifolds are homotopy equivalent to CW-complexes. – Fiktor Apr 19 '12 at 3:14 1 ALso, since the OP is asking for a local trivialisation of a given vector bundle (as opposed to all vector bundles), the connected components of the open sets don't need to be contractible, merely that the vector bundle restricts to be trivial over them. If he has proved that the manifold has a cover by (disjoint unions of) contractible opens, then this is even stronger than what the question requires. – David Roberts Apr 19 '12 at 3:16 1 @Fiktor : I guess that's true, but Kirby-Siebenmann proved that all manifolds have topological handle decompositions, and thus are homeomorphic to CW complexes. – Andy Putman Apr 19 '12 at 3:26 3 Actually, you specified that your manifolds are smooth, so there is no need to appeal to fancy things like Kirby-Siebenmann. Smooth manifolds can be triangulated, and thus are homeomorphic to simplicial complexes (better than just CW complexes!). – Andy Putman Apr 19 '12 at 3:28 show 2 more comments From the reference above (Walschap's "Metric Structures in Differential geometry") it seems that, in order to construct the classifying map to a suitable Grassmannian $G(\mathbb{R}^N,k)$, where $k$ is the rank of your bundle $E$, one needs the existence of a finite trivializing cover. up vote Personally, I believe that such a perspective can be reversed, i.e., first construct a classifying map and then use it to prove the existence of a finite trivializing cover. Indeed, the 3 down classifying map can be constructed just by using the Gauss map associated with a Whitney embedding $E\subseteq\mathbb{R}^N$. Namely, the base manifold $M$ is embedded into $E$ via the zero vote section, and attached to any point $x$ of $M$ there is the vertical tangent space (i.e., the subspace of $T_xE$ tangent to the fiber through $x$). Apply then the Gauss map to this vertical tangent space, and you'll get an element of $G(\mathbb{R}^N,k)$. Since any bundle over $G(\mathbb{R}^N,k)$ admits a finite trivializing cover, so will any bundle pulled-back from it. I think this is really the most conceptual answer. – Ryan Reich Aug 7 '13 at 6:16 add comment (a) and (b) are also true for topological manifolds of dimension $n$, see pages 17-21 of up vote 2 down • MR0336650 Greub, Werner; Halperin, Stephen; Vanstone, Ray: Connections, curvature, and cohomology. Vol. I: De Rham cohomology of manifolds and vector bundles. Pure and Applied vote Mathematics, Vol. 47. Academic Press, New York-London, 1972. add comment I wonder that this was not said before. Take a triangulation of your manifold. Choose disjoint open balls around each 0-cell in the triangulation and set the union to be $U_0$ (a ball means here something diffeomorphic to an open ball). For each 1-cell, choose an open ball such that together with the balls chosen before around the corners, the 1-cell is completely covered, and choose those balls to be disjoint. Take the union of those to be $U_1$. up vote 2 down vote Keep on going in the same way, until you reach the $n$-cells. Now you have $n+1$ sets $U_0, \dots, U_n$, each consisting of disjoint unions of open balls and such that $U_0 \cup \dots \cup U_j$ covers all cells of dimension less or equal to $j$. 
In particular, the union of all these sets covers your manifold. Your vector bundle must be trivial on each of the sets $U_j$, as each are disjoint unions of open balls. add comment Not the answer you're looking for? Browse other questions tagged at.algebraic-topology vector-bundles smooth-manifolds manifolds differential-topology or ask your own question.
{"url":"http://mathoverflow.net/questions/94479/does-every-vector-bundle-allow-a-finite-trivialization-cover?sort=oldest","timestamp":"2014-04-17T21:52:55Z","content_type":null,"content_length":"89067","record_id":"<urn:uuid:98ceb542-4384-48dd-a41d-59b06f26aea2>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: Finding the root of an equation
Posted: May 11, 2013 4:58 AM

I would like to create a function which finds the root of an equation numerically by using the following recursive formula:

x(n+1) = x(n) - ( f(x(n)) / g(x(n)) ), for n = 1, 2, 3, ...

where g(x(n)) = f( x(n) + f(x(n)) ) / f(x(n)). This iterative procedure must stop when the absolute difference between x(n) and x(n+1) is less than a given tolerance epsilon. The function must accept as inputs a scalar function 'f', an initial number 'x' and a positive number 'epsilon' to terminate the procedure. Hence, by using this numerical technique, I would like to find the root of the equation e^x - x^2 = 0.

function y = q3(f, x, epsilon)
% f: scalar function handle, x: initial guess, epsilon: stopping tolerance
% one step of x - f(x)/g(x); with g(x) = f(x+f(x))/f(x) this is x - f(x)^2/f(x+f(x))
xnew = x - f(x)^2 / f(x + f(x));
while abs(xnew - x) >= epsilon
    x = xnew;
    xnew = x - f(x)^2 / f(x + f(x));
end
y = xnew;

Any help would be greatly appreciated. Thanks in advance.
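A hedged usage note on the sketch above: a call like q3(@(x) exp(x) - x.^2, -1, 1e-8) iterates toward the negative root of e^x - x^2 = 0, which sits near x ≈ -0.7035; with the formula as given, each step only needs to evaluate f at two points, x and x + f(x).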
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2571802","timestamp":"2014-04-16T16:29:18Z","content_type":null,"content_length":"14234","record_id":"<urn:uuid:3d2ac3d4-6b22-45c8-a762-c201435e1be7>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
Is it sand running through our fingers, or a taffy like substance, in symbolic form? The difference, discretium and fluidity of nature, geometrically/topologically driven, are at war with what we might interpret in time? Early on, Salvador Dali understood well this geometrical propensity to the tesserack, that he embued his art with higher religious context(time). But in real life, he was different man?:) The issues were not far removed from perspective, that this battle would find itself challenged, in how we would portray the nature of reality? That it had burst forth in science and it's But come back to earth, and we have to wonder indeed if this fluid is slipping through our fingers as time reveals a more intrinistic view of the reality in the cosmos? Sean Carroll said Friedmann fights back: For those of you interested in the attempt by Kolb, Matarrese, Notari, and Riotto to do away with dark energy, some enterprising young cosmologists (not me, I'm too old to move that quickly) have cranked through the equations and come out defending the conventional wisdom. Three papers in particular seem interesting: Lubos Motl Superhorizon fluctuations and accelerating Universe: Several physicists and bloggers, e.g. Jacques Distler, Peter Woit and especially Sean Carroll who may be considered a true expert in these questions and who added a very new article after this article of mine was published, recently noticed a paper that claimed that the cosmological constant was not needed. Instead, the accelerating expansion was conjectured to be a consequence of fluctuations of a scalar field (and the associated stress energy tensor) whose wavelength was longer than the Hubble radius i.e. the size of the visible Universe, roughly speaking. I agree with Lubos here in regards to what has already been establish to date in the positions. Here with Sean Carroll Jacques Distler Peter Woit , and Lubos Motl respectively, that they all agree on the standards set here? This would be a clear statement of position, and one that would signal, accepted practice on the expository view of our cosmos? Is it to ambitious? Out of this a standard, even if there are divergences of personality; this is wiped away, so that we are introduced to new information as Sean shows us withRaychaudhuri equation? This gives one direction to look at. This equation has the special characteristic that it is true without reference to the Einstein equations . That is, it is true for any spacetime. It is an intrinsic property of Now we come back to the intuitive development from this standard presence. Would it be so wrong to ask that four minds to stand together and paper their perspective? Then open it up to geometry/ topological views, in relation to how we might develop the imagery of what might have been gathered from the dynamical realization of early universe idealizations? In regards to the tactile experience one might want to comprehend is in the way the universe now has unfolded? Now there is a most definite need to grasp the issue here in terms of what causality might mean in terms of balckhole/3 brane collapse as a perspective to the dynamics that would be revealled, for photon,/graviton production from the blackhole? Using Calorimeter, we see where such advances help us to distinquish early universe information in Glast cosiderations, but how much more suttle has this experience need to be expanded upon, to understand the exchange that takes place in the gravitational collapse? 
John Baez: Now, the way Hawking likes to calculate things in this sort of problem is using a "Euclidean path integral". This is a rather controversial approach - hence his grin when he said it's the "only sane way" to do these calculations - but let's not worry about that. Suffice it to say that we replace the time variable "t" in all our calculations by "it", do a bunch of calculations, and then replace "it" by "t" again at the end. This trick is called "Wick rotation". In the middle of this process, we hope all our formulas involving the geometry of 4d spacetime have magically become formulas involving the geometry of 4d space. The answers to physical questions are then expressed as integrals over all geometries of 4d space that satisfy some conditions depending on the problem we're studying. This integral over geometries also includes a sum over topologies. That's what Hawking means by this:

Stephen Hawking: I adopt the Euclidean approach, the only sane way to do quantum gravity non-perturbatively. In this, the time evolution of an initial state is given by a path integral over all positive definite metrics that go between two surfaces that are a distance T apart at infinity. One then Wick rotates the time interval, T, to the Lorentzian. The path integral is taken over metrics of all possible topologies that fit in between the surfaces.

How would missing-energy events isolate the realization that such ventures would have been specific in detailing the envelope capturing all that has evolved in our universe, to know that there is this consistency, spreading itself through all possibilities of Feynman's sum-over-paths expression, that still needs to be identified? Now you must know that there are consequences when we see this collapse take place, which ask us to consider the nature of the temperatures and the diameter in reduction. What has been reduced in this energy-developing scenario of the cosmos in action is an applicable view of geometry/topology that at the same time reveals the idealization of entropic features of the supersymmetrical views that we learn to see.

How this experience, as tactile as I approach it, is induced, is a very illusory experience way back in some speculative past. :) Whooh! What? Careful now, I am speaking analogically here, because I like to see it this way. It feels right (not saying it is right), as a simple statement quickly summing up many mathematical views in a very short and simple way. That's what I hope, anyway.

When you look at this fluid, geometrically/topologically driven, what view has transpired in black hole production? You want to be able to understand the symmetry breaking that is taking place? Crystallization processes would quickly suggest a Laughlin view from a fast-cooling temperature, until you realize the cooling is much slower than this (the 15-billion-year assumption) in a cosmological process? So we understand that curvature is well acquainted with vast tracts of cosmological views, but it becomes much more difficult in such microscopic thinking. Sort of all smeared out in a vast supersymmetrical view of previous states of existence, that quickly gather to form, maybe, cosmic strings? :)

John Baez said: But you shouldn't imagine the mood as one of breathless anticipation. At least for the physicists present, a better description would be something like "skeptical curiosity". None of them seemed to believe that Hawking could suddenly shed new light on a problem that has been attacked from many angles for several decades.
One reason is that Hawking's best work was done almost 30 years ago. A string theorist I know said that thanks to work relating anti-de Sitter space and conformal field theory - the so-called "AdS-CFT" hypothesis - string theorists had become convinced that no information is lost by black holes. Thus, Hawking had been feeling strong pressure to fall in line and renounce his previous position, namely that information is lost. A talk announcing this would come as no big surprise.

Sometimes you have to venture further into the logic of strings to see where these applications are revealing themselves for consideration first, and then work from the idea of compacted states and the relevance of dimensional attributes. Here the question points to bulk information in relation to the black hole, the three-brane wrap, and gravitational collapse. How is this cyclical nature revealing itself, not limiting time to an end, but recognizing the value of "time" in those dimensions? The idea of taffy seems like a very tactile experience to me, because of how I see entropic issues as relevant to unsymmetrical views of symmetry breaking, and this relation not only to the black hole expansion that takes place but to the recognition of previous states of existence at high energy scales.

Did Picasso know about Einstein? Was it a coincidence that Picasso developed Cubism at about the same time that Einstein published his theory of relativity? Arthur I Miller thinks not, as he explains to Ciara Muldoon.

You have to understand that artists' renditions must sometimes be employed to help good scientists extend their visions of things most appropriately. I found evidence of this when reading about Arthur Miller, and in looking at what Penrose did when he employed the skills of Escher.

Basic intuition tells us that there are three spatial dimensions in our universe. In more normal terms, this means that we are able to move along three different axes (basic directions) of motion, back/forth, left/right, and up/down. Einstein, in his theory of relativity, proposes that time is also a dimension, similar to the three spatial dimensions, except for the fact that we do not control our motion through it. We almost never consider the idea that there could be more than these dimensions, because we have never experienced anything that suggests this.

In the context of this information, what would degrees of freedom have to do with what these extra spatial dimensions signify? What rules tell us what actions will take place there?

What extra dimensions, you probably think, having just read the title. We know very well that the world around us is three-dimensional. We know East from West, North from South, up from down – what extra dimensions could there possibly be if we never see them? Well, it turns out that we do not really know yet how many dimensions our world has. All that our current observations tell us is that the world around us is at least 3+1-dimensional. (The fourth dimension is time. While time is very different from the familiar spatial dimensions, Lorentz and Einstein showed at the beginning of the 20th century that space and time are intrinsically related.) The idea of additional spatial dimensions comes from string theory, the only self-consistent quantum theory of gravity so far. It turns out that for a consistent description of gravity, one needs more than 3+1 dimensions, and the world around us could have up to 11 spatial dimensions!
For those who do not understand the issues regarding the compactified dimensions, you have to understand the implications that this topic speaks to. I have listened to well-intentioned individuals reject this notion outright, without further explanation. To me, logical discourse must speak to this. If you think about Plato's cave, you soon learn what is encouraged here again? Though not from the usual framework the cave offered us, as we look out from inside and explain the shadow figures on the wall. It is a paradoxical twist on human comprehension?

Science fiction characters make travel through extra dimensions look as easy as getting on the subway, but physicists have never taken them seriously. Now in the 6 December PRL a team proposes a radical idea: We may indeed live in a world with more than three spatially infinite dimensions, yet the extra dimensions might be essentially imperceptible. For years researchers have discussed extra dimensions that might be "compactified"--curled up to a very small size--but no one thought that non-compact dimensions could exist without obvious effects on experiments. Many physicists hope that string theory will ultimately unify quantum mechanics, the theory of small-scale interactions, with general relativity, the theory of gravity. String theory requires at least nine spatial dimensions, so proponents normally claim that all but three of them are compactified and only accessible in extremely high-energy particle collisions. As an alternative to compactified dimensions, Lisa Randall of Princeton University and Raman Sundrum, now of Stanford University, describe a scenario in which an extra, infinite dimension could have remained undetected so far.

One of the things that appeared so strange to me was how we could look at gravitational variances by scientific means. As we know now, this is being accomplished in ways that test the mind's imagination, as to how we would apply these features here to earth, and beyond.

You and I know it as a time machine. Physicists, on the other hand, call it a "closed timelike curve." Below, feast on the concepts and conjectures, the dialects and definitions that physicists rely on when musing about the possibility of time travel. If this list only whets your appetite for more, we recommend you have a gander at the book from which we excerpted this glossary: Black Holes and Time Warps: Einstein's Outrageous Legacy, by Kip S. Thorne (Norton, 1994).

Failing to move into the abstract realms of higher-dimensional figurations is one of the exploits that we can operate with in the realm of human experience? Why would we not look at these examples? So, if we have enough examples, how would this lay out in an experimental context? Is the Earth not so round any more?

Now of course I will assume that the world of spacetime has been reached and understood by most readers. In recognizing this capability of those same people, I then want to reintroduce some of the concepts that captured my mind as I delved further into string theory and the dimensional inferences that soon materialized:

1. Where is Gravity Stronger?
2. When Gravity Becomes Strong?
3. Time and Gravity
4. Microstates of Quantum Gravity
5. Primordial Gravitational Waves?
6. A Sphere that is Not So Round

Time is of your own making; its clock ticks in your head. The moment you stop thought, time too stops dead.
— Angelus Silesius

Now, much as we would like to speak about the concepts of "Time" in a spacetime regard, it is very important that we extend our vocabulary in terms of what the mind will do to stretch its capabilities when considering the abstract world of dimensional attributes. We understand well, I think, the limits, the strengths and weaknesses, of what we may see of the universe now. The microscopic view, within the context of the cosmological view?

It's no secret now that I see where symbols are very important in the analysis of complex structures, once modeled. They might move the definition of everything we had encountered, from that model assumption. One had to know that Michio Kaku had prepped our minds for this deeper understanding, and with it something very powerful about the symbols he employs.

For the first time, physicists appreciate the power of symmetry in their equations. When a physicist talks about "beauty and elegance" in physics, what he or she often really means is that symmetry allows one to unify a large number of diverse phenomena and concepts into a remarkably compact form. The more beautiful an equation is, the more symmetry it possesses, and the more phenomena it can explain in the shortest amount of space. — Pg 76, Einstein's Cosmos by Michio Kaku

In looking at what Michio Kaku presents in his books, one thing I learnt from reading was the powerful way in which such images are employed to help us see in ways that we might not have seen.

And I fiddled with it, I monkeyed with it. I sat in my attic, I think for two months on and off. But the first thing I could see in it, it was describing some kind of particles which had internal structure which could vibrate, which could do things, which wasn't just a point particle. And I began to realize that what was being described here was a string, an elastic string, like a rubber band, or like a rubber band cut in half. And this rubber band could not only stretch and contract, but wiggle. And marvel of marvels, it exactly agreed with this formula. I was pretty sure at that time that I was the only one in the world who knew this.

In looking at Susskind and the history of strings, flashes of insight are very important features of work which has previously and intensely occupied a mind. Such a mind might all of a sudden be revealed a synthesis of all that it has worked through, in such an image, as was revealed to Susskind. These applications are very interesting to me on two levels: we see where constructive phases would encourage the mathematical mind to work within an environment, and then succeed where new work might be introduced to help explain previous mathematical processes that lack expression.

As to the historical figurations, such views are important to determining the process by which evolution has embedded itself in the evolutionary tactics of the brain's development (systems of science)? Are such adaptations significant in the brain's developmental encasement, to see where evolution has evolved its capacity to think differently? Banchoff's fifth-dimensional capabilities, as they are explained in regard to computer screens, are something the brain is quite capable of handling. We just didn't know that it could visualize things this way before?

Lastly, in the case of Witten, where such work intensely occupies the mind, a nice quiet walk by a stream, or anything that frees it from such engagement, might find a free line and direct outwardness to expression. That's what I call creativity.
I have examples of this in terms of the effort of Cubist art, and of the Monte Carlo methods used to induce idealization in terms of quantum gravity. One method, anyway. :)

Cubist Art: Picasso's painting 'Portrait of Dora Maar'. Cubist art revolted against the restrictions that perspective imposed. Picasso's art shows a clear rejection of the perspective, with women's faces viewed simultaneously from several angles. Picasso's paintings show multiple perspectives, as though they were painted by someone from the 4th dimension, able to see all perspectives simultaneously. The appearance of figures in cubist art --- which are often viewed from several directions simultaneously --- has been linked to ideas concerning extra dimensions.

Hyperspace: A Scientific Odyssey, a look at the higher dimensions, by Michio Kaku: "Why must art be clinically 'realistic?' This Cubist 'revolt against perspective' seized the fourth dimension because it touched the third dimension from all possible perspectives. Simply put, Cubist art embraced the fourth dimension. Picasso's paintings are a splendid example, showing a clear rejection of three dimensional perspective, with women's faces viewed simultaneously from several angles. Instead of a single point-of-view, Picasso's paintings show multiple perspectives, as if they were painted by a being from the fourth dimension, able to see all perspectives simultaneously. As art historian Linda Henderson has written, 'the fourth dimension and non-Euclidean geometry emerge as among the most important themes unifying much of modern art and theory.'"

When you look at the spacetime fabric, the cosmological views make it nice and neat for us when we are trying to comprehend the ripples and waves that are generated.

So how, you might ask, can multiple strings make up a proton if each has a mass of ten billion billion times that of a proton? The answer has to do with quantum jitter. According to the uncertainty principle in quantum mechanics, nothing is completely at rest. Quantum jitter actually has negative energy that cancels out much of a string's mass. In the case of the graviton, the cancellation is perfect, yielding a particle with zero mass. This is what was predicted since gravitons travel at the speed of light.

The microscopic view of gravitational wave generation asks that we look much closer at how we perceive the actions of turbulence and uncertainty, as we move in for an introspective view of the compact spaces in which gravitons, amid such uncertainty, might be generated.

Reviews Bernhard (Georg) Riemann's view of curved spaces, which is the mathematical core of general relativity. Quantum geometry is the mathematical core of string theory, though it is not as ready-made as was Riemann's geometry for Einstein. Riemann drew on Gauss, Lobachevsky, Bolyai etc. and evaluated the measure of distances in curved space. Einstein concluded the curvature of space is gravity.

All the information here is gathered material that held the Larry Summers issue in context. I was revealing aspects of this issue on another level that many would not have understood. I wanted to bring it together here because the understanding of rhythms is at the core of my belief system. I have expounded on this greatly throughout this blog. It is not altogether clear sometimes, even for me. So I hope the subtle implications here are understood not only from a psychological point of view, but from a scientific one as well.
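One gloss on the "quantum jitter" point (my own addition, stated for the bosonic closed string, not taken from the quoted passage): in old covariant quantization, the closed-string mass spectrum reads

$$M^{2} = \frac{4}{\alpha'}\,(N-1),$$

where $N$ is the oscillator level and the $-1$ is precisely the zero-point ("jitter") contribution. The level-one state, the graviton, comes out exactly massless, which is the "perfect cancellation" described above.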
It is a strange thing when the mind has jumped, in model apprehension, to how it might look at the world in a new way (quantum harmonic oscillators). I don't know when it happened, and how; it's just that I don't look at the world in a normal way anymore. :) Model apprehension, if adopted, will color one's thinking. If life were as simple as the circle would have implied, then its varying topological attributes would be deeply embedded within our thinking? Thinking that extends far beyond the earth and its solid form, to the far reaches of the cosmos, to all of our suns and black holes? Without this cyclical nature, the universe does not make much sense to me. Of course I have to bite my tongue here, so that I do not lose the many good minds that are speaking to the finer details of our views, theoretically developing at the forefront of physics and the theoretic examination of those same "fields" of endeavor. Anyway, to the information gathered below, for some who wanted to know.

Have a good look at what took place today in a Harvard vote. Of course this is dated material, so looking back, one should look forward to what is presented today here. I present, on the one hand, material for revealing a closer view of the "source material." I have refrained from commenting as well, until now.

I tend to look at the ideas here as more inclined to questioning the dynamics of geometry. Male and female, as a dynamical expression, not only in the cosmos (Friedmann equations) but in how this may be deeply inserted in our psychological processes. I previously introduced the subject of Liminocentric structures under the title of PI Day, as part of this comprehension and expression. Maybe see an old rerun of the movie Pi and wonder if the show Numb3rs was based on it? Or think about the Perimeter Institute (PI), for those less familiar.

....or even contemplate John Baez's Fool's Gold: PHI and the golden ratio, beginning from some sort of mandalic interpretation, coming from a psychological model based on Liminocentric structures? :) This might seem a little kookish as well? :)

I try to reserve judgement on characters, while maintaining a view of the situation as it is reported. I do not look at the character of whoever is reporting, but maintain awareness of the bent they might have in that same reporting. European attitudes, are they different from Canadian views on men and women? So I try to move past these stereotyping behaviors to look deeper into the process of the human structure of thinking. You know, right brain, left brain? :) Induction and deduction.

In looking back to the origins of expression, to understand what it is that both sexes use to express either of these nurturing or abstract tools of mathematics? As for moving beyond the home, the ability is quite feasible in either sex, when employed. If you wanted to understand democracy's first principles, this is where I began my journey. Condensed matter physics under the likes of Robert Laughlin and his building blocks of matter, or a deeper look into the structure of our universe? Some reject the Mother principle of M theory and prefer to stay in the coordinates of a Euclidean world. This does not reject what we could apply in our non-Euclidean thinking. Maxwell and Gauss and Riemann are all part of this process in enlightened thinking? Einstein moved it further, in the use of GR, with the help of Grossmann? Now there is this attempt to join the cosmological world of the very large with the very small (reductionist views).
You just had to know how to get there in the expression, using the Fifth Postulate. Emmy Noether and others tell me that we can all use this faculty for inductive/deductive processes in our thinking. Even Thales recognized the Arche of reason? Parts of this picture are pasted throughout this site, linked above. Thales might have called it the primary principle, and used water. It is a very defining one, I think, in that we understand this fluid nature as a continuity of expression, while we like discrete things as well. Plato's solids eventually lead to a definition of what God might be? :) Discrete things could be viewed as solidification processes we like to see consolidating in the shadows of things, while the sun can be a very abstract thing, topologically, in revealing the energy that we can use? :)

I tend to look at the ideas here as more inclined to questioning the dynamics of geometry. Male and female, as a dynamical expression, not only in the cosmos (Friedmann equations) but in how this may be deeply inserted in our psychological processes.

One may turn back to philosophical questions about these differences, but they are innate features in all of us? Jung's determinations, in the animus or anima, and the responses to the male and female that seek balance in one's life? The distinction may be domineering at one point or another, but there is this balance contained, much as we would define the relationship of Maxwell's electric/magnetic fields? There, one would ask, good common sense should rule? The humanities in science should then beg the question that neither of these things should ever be considered separate, or not equal, for they would be interchangeable and contained deeply in our extensions and consolidations of expressive thought? That these distinctions are never really separate issues, contained in the brain's alternating features, the brain's capacity to babble, left/right? :) Topologically it would be difficult to distinguish the inner from the outer, but this is not to say that this can't be done in perspective.

....or even contemplate John Baez's Fool's Gold: PHI and the golden ratio, beginning from some sort of mandalic interpretation, coming from a psychological model based on Liminocentric structures?

The mind, all the time engaged in heavy thought, can easily recognize when it has all come together in model apprehension. Its own image, for complete acceptance; we then understand well the forays the mind has ventured through to come to such forms of consolidated thought and image reproduction? :)

It is not until well into the high school years that males begin to close the gaps in terms of language and social skills. Unfortunately, the boys, society, and educators continue to view boys as poor communicators and as doing poorly in L.A. (language arts), and therefore boys don't view their language skills as strong. They are just plain busy elsewhere? It's the primal words spoken, of beating drums and far-off places? :) Maybe the educational system is not in tune with the developing scenarios of male/female attributes??

I was reading the other day an opening in regard to Plato's academy for learning. Although this was in the context of early historical development, I couldn't help but be drawn to the views that were developed alongside of perspective. I will have to go back and look at this. One thing that attracted my interest was the role music would play in developing rhythms of youth that were conducive to awareness and steady development?
Now I don't like to be called old-fashioned, but the rhythms in youth were attractive to me because they might have taught the basis of movement in life, much like, and in concert with, the expansion and contraction scenarios of thinking and development. Now remember: focus on the rhythm.

Of course I always come back to the regress of reasons to explain this intuitive leap that developed trends found in new realities emerging. Arche means elements, and any reductionistic system asks us to delve into the reasons why, even though on large-scale media observation, the Summers affair might have had deeper implications to consider? Not fully understanding the emotive development and differences between the two, or mental enhancement through age progression to advance thinking, these rhythms would have helped to place societal thinking above the aggressions of war, our own human struggle to rise above those things that would hold us to the earth? :)

I am not sure if this bothers others, but when I close a link, I do not like the whole website to shut down, so that I have to go back to a link and re-establish contact. So what I do is add target="_blank" to my HTML anchors (that is, <a href="..." target="_blank">), so that the link opens in a new window that can be closed, leaving you with the site in question. So, any thoughts on this, you HTML buffs?

1. Here Previously
2. Here again
3. I'm adding this link so that the views of cosmic strings are held in the context of what Emergence might mean.

Circles within circles; the Sklar interpretation is posted throughout this site. A deeper look into the "manifestation point" (let's call it emergence in this thread) of black hole consideration would ask: okay, at the supersymmetrical level, what is this "point" that is to emerge? See, when you take this vision of the three-brane collapsing, in the context of gravitational collapse, you are given a geometric/topological perspective that any abstract mind would have missed had they not understood that the physics involved is also tied to these views.

Now the interesting thing to me is, if you move your perspective to the black hole for a minute here, you learn to question what value might be attained from emission standards that would help you orient the views to a much more dynamical version of the cosmos? What is the ideal supersymmetrical view that would arise from a 1-brane, and could it have manifested from three-brane collapse? Now you know I am working backwards here, just to point to the source of the cosmic string, so that whatever its manifestation, I am quick to think of a vast network of energy, very spread out, all of a sudden igniting some lightning strike across the universe? Thinking of a universe in a box helps sometimes, but with the move to ballooning features, you need to understand this progression through universal manifestation from those bubbling universes? So there are two views here, which ask whether the constructive mode of the current universe, arising from this cosmic string, has painted a nice little framework for topological considerations, in which we could now see where the micro views of black hole emissions are held as standard?

Witten said: One thing I can tell you, though, is that most string theorists suspect that spacetime is an emergent phenomenon in the language of condensed matter physics.

Part of the difficulty was realizing that the end result of a current depiction of the universe, and the reality around us now, had led us to assume discrete manifestations of an earlier prospective universe. From that early universe, until now.
In 1877 Boltzmann used statistical ideas to gain valuable insight into the meaning of entropy. He realized that entropy could be thought of as a measure of disorder, and that the second law of thermodynamics expressed the fact that disorder tends to increase. You have probably noticed this tendency in everyday life! However, you might also think that you have the power to step in, rearrange things a bit, and restore order. For example, you might decide to tidy up your wardrobe. Would this lead to a decrease in disorder, and hence a decrease in entropy? Actually, it would not. This is because there are inevitable side-effects: whilst sorting out your clothes, you will be breathing, metabolizing and warming your surroundings. When everything has been taken into account, the total disorder (as measured by the entropy) will have increased, in spite of the admirable state of order in your wardrobe. The second law of thermodynamics is relentless. The total entropy and the total disorder are overwhelmingly unlikely to decrease. (Boltzmann's relation itself is written out at the end of this post.)

Now the apparent contradiction is to understand that when the views are taken to those small spaces, reductionistic features of a discrete nature have forced us to consider the building blocks of matter; but at the same time, something else makes its way into our views that would have been missed had you not realized that the space contains a lot of energy? To build this symmetrical and simple model of elegance, you needed some model, some framework, in which to consider that the distant measure here would be ultimately derived from the black hole and its dynamics? The simple solution would help you recognize that any massless particle emitted from this state would automatically signal the closest source of consideration that any of us could have. Even Smolin recognized the GLAST determinations. That is why I have said that Smolin could not have gotten any closer than what is surmised from the origination of emission from the black hole.

You know, it is very frustrating sometimes when the paradoxical is presented to the mind through observation, to have it sloughed off as some speculative point that might be less than what the Doctor ordered? In this case, the cause of the observation posts: what can a three-brane-wrapped black hole mean? Here we see where the issues of space-tearing conifold transitions are presented in a theoretical approach and quickly discarded by some, because they contain the brane word and would imply some kind of Brain world (Brane world). Well, if you do not catch it the first time, there is hope that the theoretics applied will speak to what Hawking radiation might mean in regard to the gravitational collapse initiated. What is this physics doing here? This is not some free jaunt down memory lane, but an advancement in what is proposed? It is very difficult to do this if the environment is not translated, in and by other ways, to see what the outcome of such a gravitational collapse can do. What is transmitted back into the space?

So I leave this here for a minute, and draw a quote from someone who is a good writer and has a good comprehension of the world as it sits. He might have been targeted as some wonder-seeker by some, but his position, to me, has presented inquiring minds with the knowledge and basis from which we must think. So here is his quote, and I shall not name him, so that those who think he is some "wonder seeker" who has bastardized the science and sold out his values might wonder about their own position in the developing world of theoretics.
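As a reference point (my own addition, not part of the quoted passage), Boltzmann's statistical relation ties the entropy $S$ to the number of microstates $W$:

$$S = k_{B}\ln W.$$

It is this counting of microstates that makes the black-hole discussion above sharp: the Bekenstein–Hawking entropy, $S_{\rm BH} = k_{B}c^{3}A/4G\hbar$, assigns a black hole an entropy proportional to its horizon area $A$, and asking what microstates it counts is exactly the "microstates of quantum gravity" question raised earlier.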
To wonder why such a message might not be important, when they think they can propose their own views about what the world should be? Should what they have to say be held in any less contempt? Should we not apply these same rules to those who hold a position about the harmonious whole, and take the time to listen to what they have to say, rather than have it quickly dismissed as verbiage not worth seeking because of some entertainment value, as though he might have sold out his profession? Anyone who uses this medium and blogging can now consider themselves part of the evil they think has manifested and been sold out on. I refuse to even name this individual, because he is advanced in his thinking and is courting the world of theoretics.

For the first time, physicists appreciate the power of symmetry in their equations. When a physicist talks about "beauty and elegance" in physics, what he or she often really means is that symmetry allows one to unify a large number of diverse phenomena and concepts into a remarkably compact form. The more beautiful an equation is, the more symmetry it possesses, and the more phenomena it can explain in the shortest amount of space. — Pg 76, Einstein's Cosmos by Michio Kaku

And again here, so that we see no less the value of these insights, I place this final quote of his as well: "Rotating in four dimensional space unifies the concept of space and time."

You had to know that the pre-existing set of circumstances would highlight the accomplishments of what Maxwell had done, as well as teach us to see into what Gauss and Riemann sought to exemplify beyond our normal modes of comprehension. Moving into such realms does not, as far as I see it, lessen the impact of what theoretics has done by way of describing the physics, but cautiously asks us to see what is happening in those compacted spaces.

Many years ago, in my doodling, I created some comparisons to what I would have perceived in describing a point, line and plane. I wanted to find a way to describe this point amidst a vast background of all points, so constructing this diagram, realizing coordinates and the intersections of lines and planes, seemed an interesting way to get to this point. This brought some consideration to what was being shown by Greene below.

[Figure: The Elegant Universe, by Brian Greene, pg 326]

Now at the time, this being far removed from the stories that are developing in string theory, we learn that, having moved to brane considerations, a three-brane world wrapped around a sphere could produce wonderful things for us to further ponder. Such emissions from the gravitational collapse could all of a sudden produce massless vibrating strings. We know then that such strings can be a photon, or other massless particles? :)

[Figure: The Elegant Universe, by Brian Greene, pg 327]

Part of the problem then, for me, is to figure out the stages of the development of the cosmos (what stage followed which stage) and the scheme within the cosmological display: the torus that had to become a sphere, or the sphere collapsing to a torus? Concentrations of gravitonic expressions? There were geometrical considerations here to think about.

Physicists found that a three-brane wrapped around a three-dimensional sphere will result in a gravitational field bearing the appearance of an extremal black hole, or one that has the minimum mass consistent with its force charges. Additionally, the mass of the three-brane is the mass of the black hole and is directly proportional to the volume of the sphere.
Therefore, a sphere that collapses to a point as described above appears to us as a massless black hole, which we will return to in the discussion later.

Now, as you know from my previous thread on the flower considerations, color is a wonderful thing; but if my view was to be consistent, then how could there be any tearing in the use of a topological structure? The flower became very symbolic to me of what we see in the universe unfolding in these galaxies?

Two-dimensional strings trace out two-dimensional worldsheets. Since strings, according to Feynman's sum-over-paths formulation of quantum mechanics, simultaneously travel by all paths from one point to another, they are always passing by every point in space. According to physicist Edward Witten, this property of strings ensures that six-dimensional figures called Calabi-Yau spaces (theorized to be the shape of the other dimensions of our universe) can be transformed by certain topology-changing deformations called flop transitions without causing physical calamity. This is because strings are constantly sweeping out two-dimensional worldsheets that shield the flop transition point from the rest of the universe. A similar thought process goes toward the ability of Calabi-Yau spaces to undergo more drastic changes called space-tearing conifold transitions.

In order for me to consider the complexity of the question, certain insights about the nature of our universe have pointed out that there always had to be something existing, even in the face of what any of us might have thought of as a singularity in that black hole collapse. But it is not that easy. One had to assume that the bulk represented the continuance of some kind of fluctuating field of endeavor that could hold our thoughts to dimensional attributes shared in the presentation of Riemann's sphere. Gauss saw this early, and Gaussian coordinates also helped to unite Maxwell into the glorified picture of a dynamical world?

The replacement of a 1-D sphere (a circle) with a 0-D sphere (two points) can create a different topological shape. A do-nut has a circle, round its lesser diameter, which is pinched to nothing. The do-nut turns into a crescent or banana-shape, with the two end-points repaired by the two points of a zero-dimensional sphere. The torus cum crescent can now transform into a ball, without further tearing. This is as if Klein's hidden extra dimensions of space transformed from one curled-up shape to another, comparably to the normal extended three dimensions changing the shape of the universe from a torus to a ball. The evolution of the universe may involve such transmutations between curled-up Calabi-Yau spaces. Equations governing the 'branes' showed that, from our limited three-dimensional view-point, the three-brane "smeared" around a three-dimensional sphere, within a (curled-up) Calabi-Yau space, sets up a gravitational field like a black hole. The space-tearing conifold transition from a three- to a two-dimensional sphere happens to increase the number of holes by one. These holes determine the number of low-mass particles, considered as low-energy string vibration patterns. The shrinking volume of the 3-D sphere goes with a proportionate mass decrease to zero: a massless black hole.

These are beautiful pictures of flowers my wife grew, and as a collage they make a nice way of expressing the diversity of galaxies within the context of our whole universe. :) So you develop this sense, on the large scale, of what is possible given certain circumstances. What is driving inflation?
As this universe expands, and we realize that Omega = 1, one has to assume that teetering on the brink of a topological form has some significance in how we see the overall expression of this same universe? What are supersymmetrical valuations telling us about the nature of the universe at the beginning? Is it "seed-like," and how would such things be driven, if something did not already exist? Can this nugget actually be living in nothing and arise from nothing? This logic is really hard for me to swallow; yet I recognized that a dynamical universe needed something in order to drive it from such a flat state of existence to the indicators that would have revealed and explained these geometries/topologies.

[Diagram: a funnel opening upward from a symmetrical supergravity state at the base to an unsymmetrical, cooling state at the top, where gravity is weaker.]

If you define something arising from such a state where nothing exists, the logic says that the geometry could never have arisen if you did not have some motivator telling it to? So you begin to entertain cyclical natures that would be very revealing. Steinhardt, Turok, and others start to wonder, then, about how these things could materialize? So we look at the span of time in relation, from the supersymmetrical state to the 300,000-year mark? Yet on a dynamical level, if the universe was to level out in fifteen billion years, then we would have understood that we had only seen one part of this dynamic process revealing itself, from a state of existence, maybe in nugget form, extending itself all the way to the outer fringes and a cooling nature, in a flat state. Will it turn back to the crunch?

One consequence of general relativity is that the curvature of space depends on the ratio of rho to rho(crit). We call this ratio Omega = rho/rho(crit). For Omega less than 1, the Universe has negatively curved or hyperbolic geometry. For Omega = 1, the Universe has Euclidean or flat geometry. For Omega greater than 1, the Universe has positively curved or spherical geometry. We have already seen that the zero density case has hyperbolic geometry, since the cosmic time slices in the special relativistic coordinates were hyperboloids in this model.

(The definitions are collected in symbols at the end of this post.) So the logic is telling me that such a crunch would have had to signal other geometries/topologies that would kick in; and taken in view of the large way in which we are taking snapshots, this consistency is being, and should be, topologically considered, even though it is happening on such large scales? If a black hole existed in the center of every galaxy, then the universal expression in nature would detail for us "phases in symmetry breaking" within the overall larger perspective? In this larger perspective and sense, we would see this modus operandi expressing itself many times, not just in the context of the whole universe, but within the subtle parts, all the way down to the microstates of existence? These would have to be initiated even within the context of our safe and surreal world of matter states, which we have come to love and feel safe in? :)

So what does sound have to do with all this? I like knocking the wind out of the sails in order for one to shift perspective on how resonances might be perceived, and such gatherings in nodal-point considerations, as string indicators of gravitonic expression.
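Putting the quoted classification into symbols (my own summary, under the standard FRW assumptions): with Hubble parameter $H$ and Newton's constant $G$, the critical density and density parameter are

$$\rho_{\rm crit} = \frac{3H^{2}}{8\pi G}, \qquad \Omega \equiv \frac{\rho}{\rho_{\rm crit}},$$

with $\Omega < 1$ giving hyperbolic (open) spatial geometry, $\Omega = 1$ flat, and $\Omega > 1$ spherical (closed).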
In order to shift this focus to such states of cyclical nature in the realm of topological considerations, you had to understand that even in the Chladni examples on a flat plate, these views were developing on much larger scales, on balloons with dyes, all the time revealing resonant features tied to the quality of those same sounds? Ahem! :) Ya, I know. How do you transfer such thinking from the orbits of Mercury and binary star rotations to signal valuations in sound determinations?

Now remember, I gave a very global perspective on the universe that includes geometric/topological considerations. I wanted to shift these views to viable means of expression. :) Given that the Earth as a sphere is not so round, and given the symmetrical realization of a sphere, smaller circles and all, there had to be a way to speak to the 1/R radius of expression, not just as an inverse-square-law valuation of gravity, but also within the context of other things based on this law. These, in the case of the standard model, would have to be included in a model design. I know it is very difficult for some people to understand this translation to harmonic expressions (any horizon, and what is to lie beyond?) and the way in which we would perceive this dynamical nature, using the expressions of non-Euclidean geometries? We understood this creation of positive and negative in the context of each other, didn't we?

Riemannian Geometry, also known as elliptical geometry, is the geometry of the surface of a sphere. It replaces Euclid's Parallel Postulate with, "Through any point in the plane, there exists no line parallel to a given line." A line in this geometry is a great circle. The sum of the angles of a triangle in Riemannian Geometry is > 180°.

It is a strange thing to wonder how the heck one gets to translating harmonic oscillations in the context of what we see expounded by Taylor and Hulse. To understand that at some point, the distance of the two stars' rotation around each other will decrease in time, and the oscillations will increase? What does this signal? :) You do not discard thinking about the cosmological nature, the methods that have been used to orient the world view in such a way, where all of a sudden the complexity of this dynamical nature has moved your thinking to the strengths and weaknesses of those same gravitational wave explanations.

Working closely with the experimental group, we use astrophysical, particle physics and superstring theory combined with observations to study gravitation and the origin and evolution of our universe.

The beautiful consistency of the cosmological tests with the Lambda CDM theory for structure formation maybe is particularly impressive to me because I spent so much of the last 15 years studying alternatives; you can trace through astro-ph my history of proposals that were viable when submitted but soon ruled out by advances in measurements of the angular distribution of the 3K thermal background radiation. But the constraints from the cosmological tests are not yet much more numerous than the assumptions in Lambda CDM and related models; it's too soon to declare closure of the cosmological tests.

You would think, with this uncertainty, that I had answered my own question about what Peter Woit has done in terms of offering some substantive understanding for the rejection of string theory. But it doesn't.
Sometimes we rely on the roads taken by Weber, Wheeler, and Kip Thorne, those who understood well the consequences of gravitational considerations, to further enhance these speculative journeys and to better explore the bulk that might have varying attributes?

The theory of relativity predicts that, as it orbits the Sun, Mercury does not exactly retrace the same path each time, but rather swings around over time. We say therefore that the perihelion -- the point on its orbit when Mercury is closest to the Sun -- advances.

What am I saying here? You mean that, in the primordial understanding, there are extensions of what could be thought of in terms of strength and weakness, in gravitational terms? I would most certainly like to see the light shining in these circumstances. :) But like anything, of course, we like to LIGO-lize these terms for further consideration. Don't we? Let us see how these great physicists used harmonic oscillators to establish beachheads to new physics.

Using a rubber-band analogy over top of a ball is an interesting way to approach the circle used, and the energy determinations found of value in calculating the 1/R radius (the KK tower) of that same circle as you move to the top. But if you move it along a length, you find that you can calculate the difference in this length by the changes in the energy valuations? It's how you look at this space inside the bubble versus outside the bubble. John Baez might look at it from the outside, as above, recognizing well that the connecting lines flip and change depending on the energy demonstrated in a quantum gravity model? While the inside of the bubble is dictated by some other means of interpretation? From the inside, a soccer-ball universe (Poincaré structure) would seem so appropriate, but here Max Tegmark has answered this view, through the WMAP data? For me, I would look at the surface of the bubble and the rainbows that could shimmer across its surface. We would be defining the shape of the bubble in a way we had not considered before?

Moving sound, in analogy, to the world of gravitational considerations: how would this view be considered now in the context of bubble technologies? Using circles as energy determinations seems viable as they travel the length, but it becomes much more difficult when you are trying to merge these bubbles; it looks discrete when the lines are joining, while curvature defines the connection between the two? You see, the bubbles have an outer structure. As these circles merge, it is not beyond knowing to consider that the path-integral approach is being exemplified. Running contrary to the view of the bubble world, the image of a vast system of cosmic strings that would flash across a universe may seem very interesting, as I gaze with artistic perception at the flash of a lightning strike? That ignited new possibilities into expression, new life in the universe?

Quantum gravity, the as yet unconsummated marriage between quantum physics and Einstein's general relativity, is widely (though perhaps not universally) regarded as the single most pressing problem facing theoretical physics at the turn of the millennium. The two main contenders, "Brane theory/ String theory" and "Quantum geometry/ new variables", have their genesis in different communities. They address different questions, using different strategies, and have different strengths (and weaknesses).

What is Quantum Gravity? Quantum gravity is the field devoted to finding the microstructure of spacetime. Is space continuous?
Does spacetime geometry make sense near the initial singularity? Deep inside a black hole? These are the sorts of questions a theory of quantum gravity is expected to answer. The root of our search for the theory is an exploration of the quantum foundations of spacetime. At the very least, quantum gravity ought to describe physics on the smallest possible scales - expected to be 10^-35 meters. (Easy to find with dimensional analysis: build a quantity with the dimensions of length using the speed of light, Planck's constant, and Newton's constant.) Whether quantum gravity will yield a revolutionary shift in quantum theory, general relativity, or both remains to be seen.

This high-energy consideration does this, as well as directing the mind to consider the cosmological evidence that lies before us now. Dimensional interpretation has to have its basis contained within this whole view. With the cosmic string, we are only defining a period of time within the whole expression of this same universe? This would include pre-bang scenarios, and how these must be considered.

Cosmic Superstrings Revisited, by Joseph Polchinski: Thus far we have quoted upper bounds, but there are possible detections of strings via gravitational lensing. A long string will produce a pair of images symmetric about an axis, very different from lensing by a point mass. Such an event has been reported recently.

Universe Inside a Box

Lens candidates in the Capodimonte Deep Field in the vicinity of the CSL1 object, by Sazhin M.V., Khovanskaya O.S., Capaccioli M., Longo G., Alcalá J.M., Silvotti R., Pavlov M.V.: In Paper I we discussed the strange properties of CSL1: a peculiar object discovered in the OACDF which spectroscopic investigations proved to be the double undistorted image of an elliptical galaxy. Also in Paper I, we showed that CSL1 could be interpreted as the first case of lensing by a cosmic string. In the present work, starting from the consideration that a cosmic string is an elongated structure which produces non-local effects, we investigated the statistics of lens candidates around the CSL1 position.

There is no branch of mathematics, however abstract, which may not some day be applied to phenomena of the real world. — Nikolai Lobachevsky

John Ellis: Extensions of the Standard Model often contain more discriminatory parameters, and this is certainly true of supersymmetry, my personal favourite candidate for new physics beyond the Standard Model. One of the possibilities suggested by supersymmetry is that Higgs bosons might couple differently to matter.

Without consideration of that early universe, the quantum interpretation doesn't make sense unless you include it in something whole? Lubos said: "There are also many other, indirect ways how we can 'go' back in time. This is what evolution, cosmology, and other fields of science are all about."

[Diagram: the same funnel again, now annotated: symmetrical at the base, where gravity is strong and the quark measure Q is stronger, opening upward through the 300,000-year mark to an unsymmetrical, cooling state where gravity is weaker.]

Symbolically, how do you create an inclusive system, other than by looking at alien and foreign ways in which this logic might force you to consider the interactivity of a theory of everything? Greater quark distance, greater energy, higher gravitational field generation.
The field around this distance, and the supersymmetrical realization, bring us closer to the source of the energy creation, closer to the source of the universe's beginnings: "....to consider such energies within the sphere of M, at a quantum level, as well as at such cosmological scales."

The Bubble Universe / Andrei Linde's Self-Creating Universe: These are the theories discussed in class. The bubble universe concept involves creation of universes from the quantum foam of a "parent universe." On very small scales, the foam is frothing due to energy fluctuations. These fluctuations may create tiny bubbles and wormholes. If the energy fluctuation is not very large, a tiny bubble universe may form, experience some expansion like an inflating balloon, and then contract and disappear from existence. However, if the energy fluctuation is greater than a particular critical value, a tiny bubble universe forms from the parent universe, experiences long-term expansion, and allows matter and large-scale galactic structures to form. The "self-creating" in Andrei Linde's self-creating universe theory stems from the concept that each bubble or inflationary universe will sprout other bubble universes, which in turn sprout more bubble universes. The universe we live in has a set of physical constants that seem tailor-made for the evolution of living things.

It is very difficult sometimes to bring another individual's view in line with the vast resources that could point the mind to consider the whole thing? If you did not have an encompassing philosophy (and I know this word is dirty to some), without pointing to a basis from which the universe sprang, then such topological features would never make sense. So you direct the thinking to what the early universe looked like (?), and its potential for expression. A lot of things are going on that are not considered as geometrically/topologically unfolding, which hide within the basis of expression. So you have to use analogies to nudge the mind into possible structural considerations, with evidence of graviton production?

Notes on Hyperspace, Saul-Paul Sirag: The rule is that for n hidden dimensions the gravitational force falls off with the inverse (n + 2) power of the distance R. This implies that as we look at smaller and smaller distances (by banging protons together in particle accelerators) the force of gravity should look stronger and stronger. How much stronger depends on the number of hidden dimensions (and how big they are). There may be enough hidden dimensions to unify all the forces (including gravity) at an energy level of around 1 TeV (10^12 eV), corresponding to around 10^-19 meters. This would be a solution to the hierarchy problem of the vast difference in energy scale between the three standard gauge forces and gravity. This is already partly solved by supersymmetry (as mentioned previously); but this new idea would be a more definitive solution--if it were the right solution!
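In symbols (my own gloss on Sirag's rule, valid at separations smaller than the size $R$ of the hidden dimensions): with $n$ extra dimensions, the gravitational force between two masses scales as

$$F \propto \frac{m_{1}m_{2}}{r^{\,n+2}} \quad (r \ll R), \qquad F \propto \frac{m_{1}m_{2}}{r^{2}} \quad (r \gg R),$$

so gravity looks stronger and stronger as shorter distances are probed, which is the hierarchy-problem loophole the quote describes.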
{"url":"http://www.eskesthai.com/2005_03_01_archive.html","timestamp":"2014-04-19T09:25:30Z","content_type":null,"content_length":"332471","record_id":"<urn:uuid:77634630-7dd4-4100-a73d-e82a76b80d06>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from October 2011 on Discount Gambling

My local Barona Casino offers a $1 progressive blackjack side bet that pays when your 20 loses to a dealer 21. The payout depends on the number of dealer cards, where the payout increases with the number of cards in the dealer's hand. The progressive pays out when your 20 is beaten by a 7-or-more card dealer 21.

Blackjack Bad Beat Progressive

Number of Dealer Cards   Payout            Probability*     Return
7 or more cards          100% of jackpot   4.6827e-6        jackpot/213,500
6 cards                  1000-for-1        5.7405e-5        0.057347
5 cards                  100-for-1         5.0457e-4        0.049952
4 cards                  50-for-1          0.002790         0.136727
3 cards                  25-for-1          0.008282         0.198762
2 cards                  10-for-1          0.004975         0.044771
magic card**             20-for-1          ~(1/26)(2/52)    ~0.028
miss                     -1                ~98%
total                                      100%             ~0.52 + jackpot/213,500

*calculated using basic strategy (includes surrenders and other decisions that are not optimal for the side bet).
**magic card average frequency assumed at 1-every-26 hands, and average 3 cards/hand.

So, the break-even point for the jackpot is about $100,000. I used basic strategy and a 6-deck shoe in this analysis. (A quick script reproducing these totals is sketched below.)

Effect of Shoe Depth on EV

While looking for a counting edge on this side bet, I ran into a very pronounced effect of shoe depth on the bet EV. I was expecting the random distribution of cards at the end of the shoe to distribute the EVs around 0. Well, I did find a roughly bell-shaped distribution of EVs, but I discovered that the average return decreases sharply with the shoe depth. What this means is that as you reach the end of the shoe, the average return of the bad beat side bet decreases very quickly.

Most bets don't behave like this. You want the bet to have the same average return at the start of the shoe as at the end of the shoe. Imagine if blackjack started at a 0.5% house edge for the first hand, but for the last hand ended up at a 50% house edge, on average. Most people wouldn't stand for that (if they knew it was happening). [You'd probably sense such a bias on an "even-money" bet, but not for a bet that hits < 2% of the time.]

I've plotted out some curves to show the effect of shoe depth on the bad beat EV. I set the jackpot at $100k, so the EV for a full six-deck shoe is near 0 (magic card not included). Then, I plotted EV vs. number of decks in the shoe, assuming no missing cards (i.e., a neutral shoe). So, in the graph below, 0.5 decks means half a deck.

[Graph: bad beat EV vs. decks remaining in the shoe.]

The above graph shows the big disparity between playing the bonus at a 6-deck shoe game vs. a double-deck pitch game. The graph also reflects the bet's average return as a function of decks remaining in the shoe. Taking an extreme example (1/2 deck remaining in the shoe; most houses shuffle before this point), the distribution of actual EVs is sampled in the graph below. Note how it's shaped mostly below the -41.5% average return of the above graph.

[Graph: distribution of actual EVs with 1/2 deck remaining.]

This effect is probably due to the fact that you need a lot of perfect cards to make a 7-or-more card dealer 21. When you run low on cards, there's probably a much smaller fraction of orderings (percentage-wise) that make the perfect hand. Some players might intuitively suspect this, or at least would not be surprised when they see these results. So be careful with this bet. It gets a lot worse than you think it might, especially at the end of the shoe.

Read all my posts on the Panda-8 side bet. There's another bonus bet on the EZ Baccarat table, called Panda-8, that pays 25-1 for a 3-card player 8 win.
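First, the quick check promised in the bad-beat post above. This is my own minimal sketch, not the blog's code: the probabilities are copied straight from the table, and each "X-for-1" hit is counted as a net win of X-1.

```python
# Sketch: return per $1 bet on the blackjack bad-beat side bet,
# using the probabilities listed in the table above.
P_JACKPOT = 4.6827e-6            # 7+ card dealer 21 vs. player 20 (~1 in 213,500)

FIXED = [                        # (pays X-for-1, probability)
    (1000, 5.7405e-5),           # 6-card dealer 21
    (100,  5.0457e-4),           # 5-card dealer 21
    (50,   0.002790),            # 4-card dealer 21
    (25,   0.008282),            # 3-card dealer 21
    (10,   0.004975),            # 2-card (natural) dealer 21
    (20,   (1 / 26) * (2 / 52)), # magic card, per the table's assumption
]

fixed_return = sum((x - 1) * p for x, p in FIXED)          # hit return, ~0.52
p_miss = 1 - P_JACKPOT - sum(p for _, p in FIXED)          # ~98% of hands lose

def ev(jackpot):
    """Net expected value per $1 bet for a given jackpot size."""
    return fixed_return + jackpot * P_JACKPOT - p_miss

breakeven = (p_miss - fixed_return) / P_JACKPOT            # solve ev(J) = 0

print(f"fixed-pay return:   {fixed_return:.4f}")           # ~0.52, as in the table
print(f"break-even jackpot: ${breakeven:,.0f}")            # ~$100,000, as in the post
```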
Using the same methods as the prior posts on countable baccarat side bets, I came up with the following card count values:

Panda-8 Count
Card | Count Value
Ace | 1
Deuce | 1
Trey | -2
Four | -2
Five | -2
Six | -1
Seven | -1
Eight | -2
Nine | 4
Ten/Face | 1

The player should bet the Panda-8 when the true count is ≥ 11. (The plot below shows the EV curve crosses the x-axis at a true count of 10.5.)

Use the following running count (RC) thresholds when betting the Panda-8 side bet:

Hand # | RC[Min]
(threshold values not preserved in this copy)

Using the above RC[min] thresholds, simulations showed a profit of 22% of a fixed bet per shoe, at an average 3.8 bets/shoe.

I worked out a baccarat tracking sheet for counting the player dragon bonus available at my local Barona Casino. The tracking sheet simulates at +0.112 EV/shoe, gained from an average 3.9 bets/shoe. That means the count gets good enough to bet the player dragon about 3.9 times per shoe (4.8% of the time), and you'll profit a total of 11.2% of a fixed bet, per shoe. So, if you're betting the max $300 per dragon, you'll make about $33.60/shoe, on average. That's not bad, since you're allowed (encouraged, actually) to use tracking sheets while sitting at the Rapid Baccarat terminals. The player dragon has payouts ranging from 30-to-1 down to 1-to-1 and pushes, so you'll win something about 30% of the time. This is much better than waiting for a 40-to-1 payout on an infrequent $100 dragon-7 bet.

The tracking sheet below helps you maintain the running count (RC) for each hand of the shoe, and lists the required minimum RC for betting the player dragon on any given hand. You'll see the minimum betting RCs decrease as more hands are played. Typically, you'll probably only be able to bet the dragon for the last 8 hands of the shoe, if at all. Welcome to the world of baccarat card counting.

Use the following count values for each card:

Player Dragon Count
Card | Count Value
Ace | 1
Deuce | 2
Trey | 2
Four | 1
Five | 1
Six | 0 (ignore)
Seven | -1
Eight | -1
Nine | -1
Face/Ten | -1

The dealer pulls the first card out of the shoe, and turns it face up. Start the running count with the count value of the card. The following unseen burn cards do not affect the RC or the bet.

Make sure you only use this sheet for betting the player dragon bet (1-1 for natural player win, push for natural player tie; else 30-1 for player win by 9 points, 10-1 for win by 8 points, 6-1 for win by 7 points, 4-1 for win by 6 points, 2-1 for win by 5 points, 1-1 for win by 4 points; lose all others).

For each hand dealt, add up the count values of each card to get the count for the hand. Notice that 2's and 3's are +2, the "high cards" (7, 8, 9, 10) are -1, and the "low cards" (1, 4, 5) are +1. Write down the count for the hand in the sub-box, and add it to the running count (RC). Write the new RC value in the box. If the new RC is greater or equal to the number printed in the next hand's box, then bet the dragon. That's it.

Example: You're at the start of the shoe. The dealer pulls out a 7, and burns seven cards. Start the running count at -1. The first hand dealt is player (10,9) and banker (4,5). There are no 2's or 3's in the hand. The two high cards cancel out the two low cards, so the count for the hand is 0. The running count remains at -1; write it in the box.

The next hand dealt is player (1,2) and banker (8,10). The count value of the hand is 1+2-1-1 = +1. Write the new RC of 0 into the box. The RC is less than 56, so don't bet the player dragon the next hand.

The next hand dealt is player (3,1,5) and banker (4,10,5). The count value of the hand is 2+1+1+1-1+1 = +5.
Update the running count by +5, and write the new RC of 5 in the box. The RC is less than 55, so don't bet the player dragon the next hand.

Update: See the tracking sheet in action with my Dragon-7 shoe-by-shoe simulator. Also, check out the much easier-to-use unbalanced Dragon-7 count.

I tested out Eliot Jacobson's true count system for the 40-to-1 baccarat dragon-7 bet (banker wins with a 3-card 7), and got excellent results with the following tracking sheet. The sheet helps you track the running count (RC) for each hand in the shoe. You just compare the running count (RC) to the minimum betting threshold for the next hand. When the RC is greater or equal to the number printed in the next box, bet the dragon. The tracking sheet simulates at an average profit of about $53 per shoe, for $100 dragon bets. On average, you'll make about 5.2 dragon bets per shoe.

I put together this tracking sheet because I know no one is going to read the WoO post and implement the true count correctly at the table. Half the people around the baccarat table write down something complicated every hand, so this will be my craziness.

Use the following count values for each card:

Dragon-7 Count Values
Card | Count Value
Ace | 0 (ignore)
Deuce | 0 (ignore)
Trey | 0 (ignore)
Four | -1
Five | -1
Six | -1
Seven | -1
Eight | 2
Nine | 2
Face/Ten | 0 (ignore)

The dealer pulls the first card out of the shoe, and turns it face up. Start the running count with the count value of the card. The following unseen burn cards do not affect the RC or the bet.

For each hand dealt, add up the count values of each card to get the count for the hand. Notice that 8's and 9's are +2, and 4-thru-7's are -1. Ignore all other cards. Write down the count for the hand in the sub-box, and add it to the running count (RC). Write the new RC value in the box. If the new RC is greater or equal to the number printed in the next hand's box, then bet the dragon. That's it.

Example: You're at the start of the shoe. The dealer pulls out a 7, and burns seven cards. Start the running count at -1. The first hand dealt is player (10,9) and banker (4,5). The count value of this hand is 2-1-1 = 0. The RC doesn't change, so write the RC of -1 in the box. The RC of -1 is less than the minimum 40 required to bet the dragon on the next hand.

The next hand dealt is player (1,2) and banker (8,10). The count value of the hand is +2. Write the new running count of +1 in the box. Again, the RC of +1 is too low to bet the dragon on the next hand (need at least 40).

Keep filling in the hand boxes, from left to right, and down the page. There are about 80.9 hands/shoe on average. There's room for 88 hands, which occurs very infrequently. You can track two shoes per sheet. You'll see, as you get deeper into the shoe, that the minimum RC for betting the dragon decreases. This follows from true count = RC/(decks remaining).

OMG, did you see the recent WoO post on counting the baccarat dragon bet?! That bet gets so +EV that a simple counting scheme yields an average +4% edge over 7.5% of the time. Betting $100 dragons yields an average $25 profit per shoe with the simple count. Update: use my simple dragon tracking sheet that simulates at a $53 profit/shoe (betting $100 dragons 6.4% of the time).

I had never looked into counting for the dragon, because I assumed the 7.611% house edge was too big to overcome. However, given the 40-to-1 payout odds, a 0.25% (quarter percent) change in the dragon frequency yields a 10% change in the bet EV.
And it works out that the effect-of-removal (EOR), i.e. the sensitivity, of a card to the dragon-7 EV is rather large. So I looked into other countable dragon opportunities at my nearby Barona Casino. They have a Rapid Baccarat bank of player consoles, with the following dragon bet:

Rapid Baccarat Dragon Bet (on Player Hand)
Hand | Payout | Frequency | Return
Non-natural winner by 9 points | 30-to-1 | 0.003683 | 0.110492
Non-natural winner by 8 points | 10-to-1 | 0.006822 | 0.068217
Non-natural winner by 7 points | 6-to-1 | 0.017924 | 0.107543
Non-natural winner by 6 points | 4-to-1 | 0.028257 | 0.113027
Non-natural winner by 5 points | 2-to-1 | 0.033244 | 0.066489
Non-natural winner by 4 points | 1-to-1 | 0.037368 | 0.037368
Natural winner | 1-to-1 | 0.162589 | 0.162589
Natural tie | push | 0.017871 | 0.000000
All others | -1 | 0.692242 | -0.692242
Total | | 1.0000 | -0.026517

The sensitivity of the player dragon bet to any given card is listed in the following EOR table (8-deck shoe):

EOR per card for Player Dragon EV
Card | EOR
Face/Ten | -0.000530
Ace | +0.000777
Deuce | +0.001160
Trey | +0.001273
Four | +0.000470
Five | +0.000172
Six | +0.000053
Seven | -0.000563
Eight | -0.000619
Nine | -0.000604

The result is that the player dragon bet on the Rapid Baccarat machines at Barona is countable (you can calculate when the bet is +EV). Here's a baccarat tracking sheet that tells you when to make the dragon bet.

Below are the theoretical distributions of the dragon-7 and the player dragon EVs after 7 decks are dealt. The graph shows that the dragon-7 (banker wins with a 3-card 7) often gets very advantageous at the 7th-deck point of the shoe. This graph shows the actual EV of the bets, independent of counts. You'll need a mobile app to track the shoe exactly to get this calculation. In the next posts, I'll compare the return using a simple count to these ideal returns.

The graph of the Rapid Baccarat Player Dragon shows a much narrower distribution, which still yields a player advantage. The player dragon compares favorably to the dragon-7, in a few important aspects. First, you can sit around at a Rapid Baccarat terminal all day, without any floor person even noticing you. You can use your mobile device at a Rapid Baccarat terminal. Secondly, the variance of the player dragon is much, much less than the dragon-7, because you hit something 30% of the time, more than 10x as often as the dragon-7's infrequent 2.5% hit rate. The Kelly criterion will allow you to bet much more on the player dragon than the dragon-7 for any given bankroll. (Think about it: you're still betting on a 40-to-1 longshot on the dragon-7. On the other hand, the player dragon pays even money for a natural winner.)

Would I sit around all day at a Rapid Baccarat terminal waiting for the count to go +EV on the player dragon? While listening to my podcasts and music on my iPhone? Probably not, unless the numbers work out amazingly well. I'd need the shoe to be +EV around 10% of the time. Otherwise, I'd rather play the fun -EV carnival games.
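For concreteness, here is a minimal Python sketch of the bookkeeping these tracking sheets automate. The count values and the worked hands come from the player-dragon example above; the function names and the true-count test are illustrative only (the posts note that true count = RC/(decks remaining), and that the Panda-8 bet is flagged at a true count of 11 or more), and none of this code is from the original posts:

    # Player-dragon count values from the post (10 stands for any ten/face card).
    PLAYER_DRAGON = {1: 1, 2: 2, 3: 2, 4: 1, 5: 1, 6: 0, 7: -1, 8: -1, 9: -1, 10: -1}

    def hand_count(cards, values=PLAYER_DRAGON):
        """Sum the count values of every card seen in one hand."""
        return sum(values[c] for c in cards)

    rc = PLAYER_DRAGON[7]                  # first card off the shoe is a 7 -> RC = -1
    rc += hand_count([10, 9, 4, 5])        # player (10,9), banker (4,5): 0  -> RC = -1
    rc += hand_count([1, 2, 8, 10])        # player (1,2), banker (8,10): +1 -> RC = 0
    rc += hand_count([3, 1, 5, 4, 10, 5])  # player (3,1,5), banker (4,10,5): +5 -> RC = 5

    def bet_ok(rc, decks_remaining, tc_threshold):
        """A sheet's printed RC[min] for a hand is just this test in disguise."""
        return rc / decks_remaining >= tc_threshold  # e.g. tc_threshold = 11 for Panda-8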
{"url":"http://discountgambling.net/2011/10/","timestamp":"2014-04-17T09:34:07Z","content_type":null,"content_length":"57091","record_id":"<urn:uuid:83c0c00c-dbf8-4a25-a8c7-6ad5c49e314d>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
Kids.Net.Au - Encyclopedia > Proportionality constant

Two related quantities x and y are called directly proportional if there exists a constant non-zero number k such that y = kx. In this case, k is called the proportionality constant of the relation. If x and y are proportional, we often write y ~ x.

For example, if you travel at a constant speed, then the distance you cover and the time you spend are proportional, the proportionality constant being the speed. Similarly, the amount of force acting on a certain object from the gravity of the Earth at sea level is proportional to the object's mass.

To test whether x and y are proportional, one performs several measurements and plots the resulting points in a Cartesian coordinate system. If the points lie on (or close to) a straight line passing through the origin (0,0), then the two variables are proportional, with the proportionality constant given by the line's slope.

The two quantities x and y are inversely proportional if there exists a non-zero constant k such that y = k/x.

For instance, the number of people you hire to shovel sand is (approximately) inversely proportional to the time needed to get the job done.

See also: proportional font, proportional representation
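As a quick numerical version of the plotting test just described (the measurements below are made up purely for illustration), the proportionality constant k can also be estimated by fitting a straight line through the origin:

    # Made-up measurements that are roughly proportional (y is about 2x).
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [2.1, 3.9, 6.0, 8.1]

    # Least-squares slope of a line through the origin: k = sum(x*y) / sum(x*x).
    k = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    print(k)  # about 2.0; if the points lie near y = k*x, the quantities are proportional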
{"url":"http://encyclopedia.kids.net.au/page/pr/Proportionality_constant","timestamp":"2014-04-20T18:26:41Z","content_type":null,"content_length":"13837","record_id":"<urn:uuid:3ca7a016-fb6d-4934-b5bd-dd493a8e03ca>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
point charge and electric flux

1. The problem statement, all variables and given/known data
You measure an electric field E at a distance x from a point charge. What is the electric flux through a sphere whose center is at that distance and whose radius is less than x from the charge?

2. Relevant equations
∮ E · dA = q_enc / ε_0

3. The attempt at a solution
I'm not sure that I'm understanding the question... The sphere does not enclose the point charge, since its center is at x and its radius is less than x from the charge, so doesn't that mean q_enc = 0 and the flux is 0? (I have the answer so I know it's not zero, but I don't understand what I'm missing conceptually here...)
{"url":"http://www.physicsforums.com/showthread.php?t=470189","timestamp":"2014-04-17T12:47:25Z","content_type":null,"content_length":"27420","record_id":"<urn:uuid:7ccbeaf2-efd6-42b6-a67a-b1b9ccedcdc7>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
The Shadow Case Model of Complexity
May 17, 2010
An approach to clustering based on a new model of complexity

Nina Balcan is one of the top young researchers in computational learning theory. She is not only a strong problem solver, but has a vision of her field that is wonderful.

Today I want to talk about a breakthrough she has made with Avrim Blum and Anupam Gupta. Their work has created an entire new way of looking at problems, and I think it will be considered soon, if not already, as one of the great results of recent times.

Nina was hired over two years ago, then she spent a year at the Microsoft Labs in Boston. We—at Tech—held our collective breath hoping she would leave there and come “back” to Tech. She did, much to our delight.

Classic Complexity Models

One of the main contributions of complexity theory is the formalization of what it means for an algorithm to be efficient. Prior to modern complexity theory, there were algorithms, but there was no way to understand their performance abstractly. Of course, one could use a stop-watch approach, and people did. Algorithms were timed on benchmark data—this is still done today. But, the ability to compare the performance of algorithms as a mathematical notion is, in my opinion, one of the great contributions of the field.

In the following let ${\cal I}$ be a collection of problem instances. As usual ${n}$ will be used to denote the size of the problem.

${\bullet}$ Worst Case Model: This is the classic gold standard model we all know and love. Given an algorithm we judge its performance by how well it does on the worst possible instance ${I \in \cal I}$ of size ${n}$. I have discussed this model before here. The strength of the model is also its weakness: sometimes problem instances are not worst case. Sometimes, it “feels” that instances have some additional structure or some additional properties—but it is hard to make this “feeling” precise.

${\bullet}$ Average Case Model: This is now a classic model whose formal version is due to Leonid Levin—see the survey by Andrej Bogdanov and Luca Trevisan. The concept here is that additional structure is captured by assuming the problem instances come from a random distribution. There are many pretty results in this area, but there seems to be something unrealistic about the model. How often are instances of a problem really selected according to a random distribution? This is the primary difficulty with this model of complexity—it is not easy to see why instances are random.

There are other models, but these are two of the main ones. In the next sections I will discuss the new model due to Nina and her colleagues.

A New Model

Nina has developed a new model that is currently called the BBG model after the three authors of the seminal paper: Balcan, Blum, and Gupta.

${\bullet}$ Shadow Case Model: This is the new model—the name is mine. The central idea is that in practice, problem instances are not selected by an all powerful adversary, nor are they selected by the flip of a coin. BBG argue instances are selected to have a certain formal property that allows approximation algorithms to work. I think we need a better name, but for now I will stay with “the Shadow Case model.”

I have always thought problem instances are neither worst case nor random, but have never been able to find a good replacement. BBG seem to have succeeded in finding the right way to model many situations.
At first it seems like their idea might be circular: a problem instance is “good,” provided our algorithms work well on it. This might be true, but it seems useless for building a sound theory. The magic is that BBG have a notion of what a “good” instance is, they can build a sound mathematical theory, further the theory is beautiful, and even better it works in practice. Hard to beat.

Application to Clustering

Here is how they make the Shadow Case model concrete for the classic problem of clustering. As always see their paper for the details and for the generalization to domains other than clustering.

They assume there is a metric space ${X}$ with a distance function ${d}$. You are given ${n}$ points from the space, and the task is to find a division of the points into ${k}$ sets (clusters). They assume there is a true clustering into sets,

$\displaystyle \mathbf C = C_{1}, \dots, C_{k}.$

Finding this clustering is usually very hard, so we almost always settle for a clustering

$\displaystyle \mathbf C' = C'_{1}, \dots, C'_{k}$

that is “near” the true clustering. The notion of near is quite simple—just count the fraction of points that are in the wrong cluster. This induces a natural distance function on set systems. The goal is to find a clustering so the distance to the true one is at most some small ${\varepsilon}$.

So far nothing new, nothing different: this is all standard stuff. Of course the clustering problem as set up is impossible to solve. Not just hard. Impossible, since we have no prior knowledge of the true clustering. The key to making all this work is the following.

Let ${\Phi}$ be any measure on the quality of a particular clustering. Say the Shadow Hypothesis holds for ${\Phi}$ with parameters ${(\alpha,\varepsilon)}$ provided: for any clustering ${\mathbf C' = C'_{1}, \dots, C'_{k}}$ with

$\displaystyle \Phi(\mathbf C') \le \alpha \cdot \mathrm{OPT}_{\Phi},$

the clustering ${\mathbf C'}$ is within ${\varepsilon}$ of the true clustering ${\mathbf C}$.

There are several remarks I need to make. Note, the definition is not meant to be checkable, since it refers to ${\mathrm{OPT}_{\Phi}}$, which is almost always impossible to compute efficiently. Second, they use the assumption in a very clever way. They do not go and find a clustering that is close to the ${\Phi}$ optimum and then move it around somehow. No. They use the assumption purely in the background. The mere existence of the property is used to get an algorithm that works. This is the brilliance of their notion.

The reason I call it the Shadow Case Model should now be clear. It is not that I do not like BBG, but I wanted to highlight that there is something lurking in the “shadows.” The assumption talks about an optimum they never find, and an approximation to that optimum they never find either.

Here is a kind of lemma they can prove using the Shadow Hypothesis.

Lemma: Suppose a ${k}$-median clustering problem instance satisfies the Shadow Hypothesis with parameters ${(\alpha,\varepsilon)}$. Then, at most ${10\varepsilon n/\alpha}$ points have a distance to their true cluster of more than

$\displaystyle \frac{\alpha w_{\mathrm{avg}}}{10 \varepsilon},$

where ${w_{\mathrm{avg}}}$ is the average distance of each point to its true cluster.

Lemmas like this allow them to create simple and fast algorithms that get terrific clustering results. Again the measure ${\Phi}$, the optimal clustering, and even the approximation to it, all remain in the shadows. This is very neat work—see their paper for how all the magic works.
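As a small aside, the counting half of this lemma needs no Shadow Hypothesis at all; as one of the authors points out in the comments below, it is just Markov's inequality applied to the point-to-cluster distances. Writing ${d(x)}$ for the distance of a point ${x}$ to its true cluster, so that ${w_{\mathrm{avg}}}$ is the average of ${d(x)}$, Markov gives

$\displaystyle \Pr_{x}\left[ d(x) > \frac{\alpha w_{\mathrm{avg}}}{10\varepsilon} \right] \le \frac{w_{\mathrm{avg}}}{\alpha w_{\mathrm{avg}}/(10\varepsilon)} = \frac{10\varepsilon}{\alpha},$

which is exactly the bound of at most ${10\varepsilon n/\alpha}$ points. The Shadow Hypothesis only enters in the companion statement about distances to second-closest cluster centers, and the combination of the two is what lets the algorithm find the clusters.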
Open Problems

There are two obvious open problems. What is the right name for this notion? Is BBG the right one? If they use Shadow they can use, perhaps, Shadow the Hedgehog as their logo.

The most important open problem is to start using the model on other areas of theory. This is already well underway, but I expect a flood of papers using their brilliant idea. Finally, the complexity theorist in me wants to know if there are ways to use their model to define formal complexity classes? Is this possible?

1. May 17, 2010 2:47 pm
Hi Dick, there are many models of “semi-random” instances that are intermediate between worst case and average case, most notably the model of smoothed complexity. What I find most appealing in their paper (and this gets a bit lost in the level of abstraction you propose) is that, in a clustering problem, we are not really interested in optimizing a cost function, but we are interested in finding the clusters. (The cost function is usually sort of reverse-engineered to make the clusters emerge in the optimal and near-optimal solutions.) So if no good clustering exists, this means that either we are working with the wrong cost function or the data itself has no useful clustering. In such a case, it is not so useful to be able to find near-optimal solutions, and so an algorithm that has good approximation ratio only on instances that have good solutions is exactly as useful as an algorithm that has good approximation ratio on all inputs. In many other problems this is not really true, because the cost function represents a “real” cost, and we may be interested in approximate solutions even when we are working with instances that have no “good” solution.

□ May 17, 2010 4:17 pm
You said it very well. The measures that are introduced to help find the clusters are not as basic as measures in other problems.

2. May 17, 2010 5:20 pm
When you feature a multi-author paper like this, how do you choose which author to picture?

□ May 18, 2010 8:02 am
I picked Nina for two reasons: I just spoke to her about the work and I have featured Blum before. It is usually a bit random, but I will feature her third co-author another time. Good question.

3. May 17, 2010 11:08 pm
What do you think of Existential Complexity as a name? I think it captures the features, has a nice ring to it, is descriptive, and won't be confused with other jargon.

4. May 18, 2010 3:16 am
“…sometimes problem instances are not worst case. Sometimes, it “feels” that instances have some additional structure or some additional properties—but it is hard to make this “feeling” precise.” I think that this is the motivation of the parametrized approach of Downey and Fellows.

5. May 18, 2010 5:56 am
I think the moral of that paper is that you shouldn't think too hard about approximation algorithms. They show roughly “if you have a problem for which each and every alpha approximation to the optimum is goodish, then here is a 1-line algorithm that will solve that problem perfectly.” And I'd guess that the 1-line algorithm doesn't do that too often. So there's no point trying to get any old alpha approximation. Much less trying to show hardness of alpha approximation. For the sake of much of modern theory, we shouldn't let the paper get any publicity.

□ May 19, 2010 3:28 pm
I wouldn't say it's a 1-line algorithm – more like a 2-stage version of k-means where the first stage gives a somewhat greedy initial clustering and the second stage reclusters by median distance.
In any event, it does suggest that for problems where the objective being optimized is really a “measurable proxy” for some other underlying goal, it makes sense to consider the implications of the implicit assumption that the two are related. It also suggests perhaps expanding beyond approximation-ratio as the sole measure of quality for problems of this type. E.g., suppose your algorithm achieves a good objective score *and* produces solutions with some other desirable criterion as well: in that case you are only assuming that solutions of *that* kind are high quality. We've been recently trying to look at some extensions of this form.

On a different note, I think it might be interesting to consider something of this style for problems of evolutionary tree reconstruction, where again the measurable objectives are typically a proxy for what you really want, namely a correct tree. Thanks, Dick, for the nice post!

6. May 18, 2010 7:12 am
Hi Dick. Two quick comments, one technical and one subjective:

Technical: For the key lemma there are actually two parts. The part you listed is the one that holds without assumptions, just by definition of w_avg (by Markov's inequality). The part that uses the Shadow Hypothesis is the companion statement that says that at most 6*epsilon*n points can have distance to their second-closest cluster center less than (alpha*w_avg)/(2*epsilon). The combination of these two is what then allows one to find the clusters.

Subjective: I'm not sure we really argue that necessarily in practice instances are selected to satisfy the condition. It's just that the approximation algorithms paradigm for these problems is implicitly assuming that something of this form is true. (Since if it's false, then why work so hard on approximating the objective!)

7. May 18, 2010 11:13 am
This reminds me of recent empirical work in protein folding. Rea proteins have a deep well around their minimum-energy configuration; the lowest local minimum has a much higher energy, and configurations whose energy is close to the minimum are in fact close to the configuration in some sense. (Indeed, protein sequences have evolved to have these properties, so that they are easy to fold, and don't get stuck in local optima.) In a setting like this, any approximation algorithm which gets close to the optimum energy is actually getting close to the optimum configuration.

□ May 18, 2010 12:15 pm
typo: I meant “real” proteins.

8. May 19, 2010 6:34 am
This great topic unites two of my own (and many people's) main interests. In line with Christopher Moore's post on protein folding, for many months my quantum systems engineering colleagues and I have been attending the regular weekly meeting of a synthetic biology group; the idea being to learn (by direct observation) better answers to the Ferengi question: “How does synthetic biology work, exactly?”

The kind of answers we engineers are looking to find have everything to do with Dick's fine blog in general, and the BBG Shadow Case Model in particular: What kinds of proof technologies—whether explicit or implicit—are synthetic biologists exploiting nowadays? What kinds of dynamical simulation algorithms do they build upon those proofs? And particularly: what new notions of naturality in computational complexity theory can we abstract from this work? What makes the BBG Shadow Case Model so interesting (to me) is that it points toward new notions of abstractable naturality in complexity theory.
Everyone is familiar with the notion of reduction in complexity theory … essentially a certificate that your problem instance might (in the worst case) be encoding an NP-complete problem. The BBG Shadow Hypothesis seems to be a solid step in the opposite direction … a new kind of certificate that the problem instance is not encoding an NP-complete problem.

In practice, “I am not a computation” certificates are ubiquitous in synthetic biology … even though biologists usually do not (as yet) formulate them in a rigorous or abstractably natural way. Typically the certification is associated with a (classical) thermostat process or a (quantum) Lindblad process; in both cases these processes inject enough entropy that the desired “not a computation” dynamics are achieved. Even though synthetic biologists do not think of these noise processes as (effectively) inducing a Shadow Hypothesis, this might IMHO be a very good way to think about it.

It would be terrific for engineers in general, and synthetic biologists in particular, if the methods already in-use were better understood in terms of the elements that complexity theorists hold dear: proof technologies and dynamical simulation algorithms that are solidly grounded in (new?) notions of complexity-theoretic naturality.

My experience has been that it's best not to approach these problems with too many preconceptions … older textbooks contain plenty of mumpsimus that (for students) can solidify into cognitive obstruction to perceiving what's going on. That's why there's real excitement at the frontiers like DisProt: the Database of Protein DISorder … and in the recognition that, even taking the rapidly accelerating progress of synthetic biology into account, we are very far from approaching the (still unknown) limits to observing and simulating the classical and quantum dynamics of complex systems.

We systems engineers urgently need, in particular, a better mathematical toolset for describing and appreciating the “naturality” of these challenges. So thanks for a blog, and a topic, and a BBG model, that is “naturally” of immense interest to us engineers!

9. May 19, 2010 11:30 am
PS: Oh yeah … one other thing we've been learning from the synthetic biologists, which connects to Dick's topic challenge to “start using the (BBG) model on other areas of theory”. The name synthetic biologists have for this strategy is repurposing … a term that synthetic biologists love with much the same ardor that mathematicians have for naturality. Acting over billions of years, Nature has ubiquitously repurposed almost every genetic and structural motif (often many times over), at every scale of dynamical and informatic complexity, from genes to biomes. In effect, repurposing is a strategy that Nature *wants* synthetic biologists to adopt, in the same way that Nature (in effect) *wants* mathematical frameworks to be natural.

Although they don't mean quite the same thing, “naturality” and “repurposing” *do* dovetail nicely, in the sense that (for example) a mathematical measure of complexity is natural if it can be repurposed to serve many practical applications. These are two concrete directions in which (hopefully) the ideas of the BBG article can be developed, and more broadly, they are two concrete reasons why the topics in this blog are (usually) so very fruitful from a systems engineering point-of-view. So, thanks again for a *wonderful* post, about a wonderful article! :)
10. May 20, 2010 4:33 am
It strikes me that this has similarities to the idea of adaptive algorithms, where (usually) the time complexity is a function of some “difficulty measure” of the instance. Most of the work on adaptive algorithms has been in sorting and searching, but maybe it is time to do similar things for other types of problems. Another term that has been used is “opportunistic algorithms”: algorithms that take advantage of features of the input. Not sure if any of these terms really capture the new idea here, though.

12. May 25, 2010 11:24 am
There was a paper in ICS 2010 that seems to do something similar for MaxCut: “Are Stable Instances Easy?” by Yonatan Bilu and Nathan Linial. It argues that if one MaxCut (which can be thought of as clustering a graph into 2 clusters) stands out as the best even if the graph is perturbed a little, then it can be found. Paper available here: http://www.cs.huji.ac.il/~nati/PAPERS/

13. April 7, 2012 11:58 pm
Hi, Dick. I read this paper recently and have been considering some other problems related to this model. For example, the paper 'A Randomized Rounding Approach to the Traveling Salesman Problem', which appeared in FOCS 2011 and won the best paper award, breaks the 1.5 ratio for TSP by using a random sampling tree technique. I think that if we assume that any c-approximation solution for TSP is \epsilon-close to the optimal solution (or the ground truth), a smaller ratio may be achievable. Thanks.
{"url":"https://rjlipton.wordpress.com/2010/05/17/the-shadow-case-model-of-complexity/","timestamp":"2014-04-20T08:24:05Z","content_type":null,"content_length":"113823","record_id":"<urn:uuid:995bfb00-fb07-4b78-9afd-780c2bb0238a>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
CSA - Discovery Guides

Black hole: A gravitationally collapsed mass inside the Schwarzschild radius (the critical radius, according to the general theory of relativity, at which light is unable to escape to infinity) from which no light, matter, or signal of any kind can escape; quantum corrections indicate a black hole radiates particles with a temperature inversely proportional to the mass and directly proportional to Planck's constant (the constant of proportionality relating the frequency of a photon to its quantum of energy).

Continental drift: The concept of continent formation by the fragmentation and movement of land masses on the surface of the Earth.

Electromagnetic waves: Disturbances propagating outward from any electric charge that oscillates or is accelerated; far from the charge, they consist of vibrating electric and magnetic fields that move at the speed of light and are at right angles to each other and to the direction of motion.

Gravitational waves: Also known as gravitational radiation; according to general relativity, a propagating gravitational field, which is produced by some change in the distribution of matter; generally, any body which experiences varying acceleration will emit gravitational waves by an amount proportional to the rate of change of the acceleration.

Jovian: Of, relating to, or characteristic of the planet Jupiter.

Magnetosphere: The region of space surrounding a rotating magnetized sphere. Specifically, the outer region of a planet's ionosphere. On Earth, the magnetosphere starts at about 100 km (about 60 miles) above Earth's surface, and extends to about 60,000 km (or considerably farther, on the side away from the Sun), i.e., to a distant boundary that marks the beginning of interplanetary space.

Maxwell's equations: Equations governing the varying electric and magnetic fields in a medium.

Microarcseconds: One millionth (1/1,000,000) of a second of arc; a second of arc being a unit of plane angle equal to 1/60th of a minute; a minute being 1/60th of a degree.

Plasma: 1. A highly ionized gas that contains equal numbers of ions and electrons in sufficient density so that the Debye shielding length (the characteristic distance beyond which the electric field of a charged particle is shielded by particles having charges of the opposite sign) is much smaller than the dimensions of the gas. 2. A completely ionized gas composed entirely of a nearly equal number of positive and negative free charges (positive ions and electrons).

Quasar: Quasi-stellar astronomical object, often a radio source; all quasars have large red shifts and small optical diameter, but may have large radio diameter. Also known as quasi-stellar object.

Space-time: A four-dimensional space used to represent the Universe in the theory of relativity, with three dimensions corresponding to ordinary space and the fourth to time. Also known as space-time continuum.
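For reference, two textbook formulas behind the black-hole entry above (standard results, not part of the original glossary): the Schwarzschild radius and the Hawking temperature are

$r_s = \dfrac{2GM}{c^2}, \qquad T_H = \dfrac{\hbar c^3}{8\pi G M k_B},$

which makes explicit that the radiation temperature is directly proportional to Planck's constant $\hbar$ and inversely proportional to the mass $M$, exactly as the definition states.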
{"url":"http://www.csa.com/discoveryguides/gravity/gloss_f.php","timestamp":"2014-04-19T02:54:57Z","content_type":null,"content_length":"19540","record_id":"<urn:uuid:3d6a1fe1-238a-447d-a897-cfb1f11ba4fd>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
[HELP!] Parametric Surface Area

I'm so stuck with the Parametric Surface Area problem right now T_T Can anyone please help me out? There are two questions in total; if you can help me with any one of the two, it'll be so appreciated!

Question 1: The part of the plane 3x + 2y + z = 6 that lies in the 1st octant.

Question 2: The surface z = 2/3 (x^(3/2) + y^(3/2)), 0 <= x <= 1, 0 <= y <= 1

Please, help!

Last edited by violet8804; December 10th 2009 at 08:30 AM. Reason: Question is solved.

For reference, the formula for surface area is

$\int\int_R \sqrt{\left(\frac{\partial z}{\partial x}\right)^2+\left(\frac{\partial z}{\partial y}\right)^2+1}\,dA.$

Because the region is defined by $0\le x \le 1$ and $0\le y \le 1$, the integral for surface area will be

$\int_0^1\int_0^1 \sqrt{\left(\frac{\partial z}{\partial x}\right)^2+\left(\frac{\partial z}{\partial y}\right)^2+1}\,dx\,dy.$

As we are given a square region and a simple integrand (when you simplify it, that is!), we can make do with Cartesian coordinates in this integral.

OHHH~~ get it now~! thank you so much!! =D

Not a problem.

The plane 3x + 2y + z = 6 crosses the xy-plane, where z = 0, in the line 3x + 2y = 6. That, in turn, has y-intercept (0, 3) and x-intercept (2, 0). You can take the limits of integration to be x = 0 to 2 and, for each x, y = 0 to 3 - (3/2)x. Or, if you prefer to integrate with respect to x first, y from 0 to 3 and, for each y, x from 0 to 2 - (2/3)y.
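For reference, one can carry both questions through with the formula quoted above; the following worked values are a sketch of that computation, not quoted from the thread.

Question 1: with $z = 6 - 3x - 2y$ the integrand is $\sqrt{(-3)^2 + (-2)^2 + 1} = \sqrt{14}$, a constant, and the region is the right triangle with legs 2 and 3, so

$S = \sqrt{14} \cdot \tfrac{1}{2}(2)(3) = 3\sqrt{14} \approx 11.22.$

Question 2: with $z = \tfrac{2}{3}\left(x^{3/2} + y^{3/2}\right)$ we get $\partial z/\partial x = \sqrt{x}$ and $\partial z/\partial y = \sqrt{y}$, so

$S = \int_0^1 \int_0^1 \sqrt{1 + x + y}\, dx\, dy = \tfrac{4}{15}\left(9\sqrt{3} - 8\sqrt{2} + 1\right) \approx 1.41.$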
{"url":"http://mathhelpforum.com/calculus/119611-help-parametric-surface-area.html","timestamp":"2014-04-16T21:56:19Z","content_type":null,"content_length":"49904","record_id":"<urn:uuid:8a3bcc61-5913-42bc-8ab6-ae85261849f6>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
OpenOffice.org Forum :: Equation editor integration limits

Guest - Posted: Thu Mar 18, 2004 4:45 pm - Post subject: Equation editor integration limits
Am I being an idiot?? How do I specify the limits for integration within the equation editor? I looked in the helpfile and found the bit about from and to, but that seems to just end up printing what I type in and an upside-down question mark, which I assume means a parse error of some sort? Can someone paste a simple example of how to use it here please? Basically I want to say: integrate f(x) between a and b.

Guest - Posted: Thu Mar 18, 2004 8:05 pm - Post subject: Re: Equation editor integration limits
I don't have it handy, and simply don't use that math editor at all, but... A question mark often indicates a spot to put things you want into an auto-generated formula. Try putting upper and lower bounds where there are question marks to see what you get. If doing math, there are many more powerful programs out there. If dead serious, I'd look into MacKichan software, or even the math editor in WordPerfect 8 (now much better in later versions of that and Word). It's all MUCH easier with click-and-paste than typing pseudonames. Corel (WordPerfect) still has academic versions that make the purchase within reason.

dkeith (Power User) - Posted: Thu Mar 18, 2004 9:52 pm
Try

    int from a to b f(x) dx

or

    int from {x=0} to {x = infinity}{1 over {1+x^2}} "d"x

I wouldn't agree with David about point-and-click vs. pseudocode -- if you've ever used latex you should find OOo's code familiar. For text + some equations OOo is fine.

awoodland (Newbie) - Posted: Fri Mar 19, 2004 2:12 am
Thanks for the help, I was just being an idiot. From the help documentation I had concluded that writing "from a to b int f(x) dx" was how it was done.

awoodland (Newbie) - Posted: Fri Mar 19, 2004 2:25 am
OK, now I've run into another slight problem... Basically I'm trying to show integration in my document, and now I want to show that I have yet to apply the limits, using square brackets. Is

    stack {b # a}[x over 2]

the way to do this, or is there something else I'm missing about from and to?

RGB (Super User) - Posted: Fri Mar 19, 2004 2:34 am
Try this:

    left[ a over b right]_a ^b

P.S.: For this kind of question, the Math forum is best.
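Putting the suggestions in this thread together, a sketch of a fully evaluated integral in OOo Math markup, built only from the keywords shown above (int, from, to, over, left[ ... right], and the _ and ^ scripts), might be:

    int from a to b {x over 2} dx = left[ {x^2} over 4 right]_a ^b

The braces keep the fractions grouped so that the sub- and superscripted bracket renders as intended.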
{"url":"http://www.oooforum.org/forum/viewtopic.phtml?p=25742","timestamp":"2014-04-18T15:44:18Z","content_type":null,"content_length":"32162","record_id":"<urn:uuid:f7b52ef3-7cab-4d6b-b968-9b802bf9cbb5>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
quickly looping over a 2D array?

On 2009-07-27 06:24, Martin wrote:
> Hi,
> I am new to python and I was wondering if there was a way to speed up
> the way I index 2D arrays when I need to check two arrays
> simultaneously? My current implementation is (using numpy) something
> like the following...
>
> for i in range(numrows):
>     for j in range(numcols):
>         if array_1[i, j] == some_value or array_2[i, j] >= array_1[i, j] * some_other_value:
>             array_1[i, j] = some_new_value

Peter has given you good answers, but you will probably want to ask future numpy questions on the numpy mailing list.

Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
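For the record, the idiomatic vectorized rewrite of that loop uses a boolean mask instead of explicit indices. The some_value names below are the original poster's placeholders, given dummy values here only so the snippet runs; this is a sketch of the standard numpy idiom, not Peter's (unquoted) reply:

    import numpy as np

    some_value, some_other_value, some_new_value = 0.5, 2.0, -1.0
    array_1 = np.random.rand(10, 10)
    array_2 = np.random.rand(10, 10)

    # Evaluate the OP's condition element-wise over the whole arrays at once...
    mask = (array_1 == some_value) | (array_2 >= array_1 * some_other_value)
    array_1[mask] = some_new_value  # ...then assign to every matching cell in one step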
{"url":"http://www.velocityreviews.com/forums/t692662-quickly-looping-over-a-2d-array.html","timestamp":"2014-04-19T12:43:44Z","content_type":null,"content_length":"66699","record_id":"<urn:uuid:8a5c14e5-1999-4815-aea6-a7497a3534b1>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
List of Important Mathematicians - The Story of Mathematics

This is a chronological list of some of the most important mathematicians in history and their major achievements, as well as some very early achievements in mathematics for which individual contributions cannot be acknowledged. Where the mathematicians have individual pages in this website, these pages are linked; otherwise more information can usually be obtained from the general page relating to the particular period in history, or from the list of sources used. A more detailed and comprehensive mathematical chronology can be found at http://www-groups.dcs.st-and.ac.uk/~history/Chronology/full.html.

Date | Name | Nationality | Major Achievements
35000 BC | | African | First notched tally bones
3100 BC | | Sumerian | Earliest documented counting and measuring system
2700 BC | | Egyptian | Earliest fully-developed base 10 number system in use
2600 BC | | Sumerian | Multiplication tables, geometrical exercises and division problems
2000-1800 BC | | Egyptian | Earliest papyri showing numeration system and basic arithmetic
1800-1600 BC | | Babylonian | Clay tablets dealing with fractions, algebra and equations
1650 BC | | Egyptian | Rhind Papyrus (instruction manual in arithmetic, geometry, unit fractions, etc)
1200 BC | | Chinese | First decimal numeration system with place value concept
1200-900 BC | | Indian | Early Vedic mantras invoke powers of ten from a hundred all the way up to a trillion
800-400 BC | | Indian | "Sulba Sutra" lists several Pythagorean triples and simplified Pythagorean theorem for the sides of a square and a rectangle, quite accurate approximation to √2
650 BC | | Chinese | Lo Shu order three (3 x 3) "magic square" in which each row, column and diagonal sums to 15
624-546 BC | Thales | Greek | Early developments in geometry, including work on similar and right triangles
570-495 BC | Pythagoras | Greek | Expansion of geometry, rigorous approach building from first principles, square and triangular numbers, Pythagoras' theorem
500 BC | Hippasus | Greek | Discovered potential existence of irrational numbers while trying to calculate the value of √2
490-430 BC | Zeno of Elea | Greek | Describes a series of paradoxes concerning infinity and infinitesimals
470-410 BC | Hippocrates of Chios | Greek | First systematic compilation of geometrical knowledge, Lune of Hippocrates
460-370 BC | Democritus | Greek | Developments in geometry and fractions, volume of a cone
428-348 BC | Plato | Greek | Platonic solids, statement of the Three Classical Problems, influential teacher and popularizer of mathematics, insistence on rigorous proof and logical methods
410-355 BC | Eudoxus of Cnidus | Greek | Method for rigorously proving statements about areas and volumes by successive approximations
384-322 BC | Aristotle | Greek | Development and standardization of logic (although not then considered part of mathematics) and deductive reasoning
300 BC | Euclid | Greek | Definitive statement of classical (Euclidean) geometry, use of axioms and postulates, many formulas, proofs and theorems including Euclid's Theorem on infinitude of primes
287-212 BC | Archimedes | Greek | Formulas for areas of regular shapes, "method of exhaustion" for approximating areas and value of π, comparison of infinities
276-195 BC | Eratosthenes | Greek | "Sieve of Eratosthenes" method for identifying prime numbers
262-190 BC | Apollonius of Perga | Greek | Work on geometry, especially on cones and conic sections (ellipse, parabola, hyperbola)
200 BC | | Chinese | "Nine Chapters on the Mathematical Art", including guide to how to solve equations using sophisticated matrix-based methods
190-120 BC | Hipparchus | Greek | Developed first detailed trigonometry tables
36 BC | | Mayan | Pre-classic Mayans developed the concept of zero by at least this time
10-70 AD | Heron (or Hero) of Alexandria | Greek | Heron's Formula for finding the area of a triangle from its side lengths, Heron's Method for iteratively computing a square root
90-168 AD | Ptolemy | Greek/Egyptian | Developed even more detailed trigonometry tables
200 AD | Sun Tzu | Chinese | First definitive statement of Chinese Remainder Theorem
200 AD | | Indian | Refined and perfected decimal place value number system
200-284 AD | Diophantus | Greek | Diophantine Analysis of complex algebraic problems, to find rational solutions to equations with several unknowns
220-280 AD | Liu Hui | Chinese | Solved linear equations using matrices (similar to Gaussian elimination), leaving roots unevaluated, calculated value of π correct to five decimal places, early forms of integral and differential calculus
400 AD | | Indian | "Surya Siddhanta" contains roots of modern trigonometry, including first real use of sines, cosines, inverse sines, tangents and secants
476-550 AD | Aryabhata | Indian | Definitions of trigonometric functions, complete and accurate sine and versine tables, solutions to simultaneous quadratic equations, accurate approximation for π (and recognition that π is an irrational number)
598-668 AD | Brahmagupta | Indian | Basic mathematical rules for dealing with zero (+, - and x), negative numbers, negative roots of quadratic equations, solution of quadratic equations with two unknowns
600-680 AD | Bhaskara I | Indian | First to write numbers in Hindu-Arabic decimal system with a circle for zero, remarkably accurate approximation of the sine function
780-850 AD | Muhammad Al-Khwarizmi | Persian | Advocacy of the Hindu numerals 1-9 and 0 in Islamic world, foundations of modern algebra, including algebraic methods of "reduction" and "balancing", solution of polynomial equations up to second degree
908-946 AD | Ibrahim ibn Sinan | Arabic | Continued Archimedes' investigations of areas and volumes, tangents to a circle
953-1029 AD | Muhammad Al-Karaji | Persian | First use of proof by mathematical induction, including to prove the binomial theorem
966-1059 AD | Ibn al-Haytham (Alhazen) | Persian/Arabic | Derived a formula for the sum of fourth powers using a readily generalizable method, "Alhazen's problem", established beginnings of link between algebra and geometry
1048-1131 | Omar Khayyam | Persian | Generalized Indian methods for extracting square and cube roots to include fourth, fifth and higher roots, noted existence of different sorts of cubic equations
1114-1185 | Bhaskara II | Indian | Established that dividing by zero yields infinity, found solutions to quadratic, cubic and quartic equations (including negative and irrational solutions) and to second order Diophantine equations, introduced some preliminary concepts of calculus
1170-1250 | Leonardo of Pisa (Fibonacci) | Italian | Fibonacci Sequence of numbers, advocacy of the use of the Hindu-Arabic numeral system in Europe, Fibonacci's identity (product of two sums of two squares is itself a sum of two squares)
1201-1274 | Nasir al-Din al-Tusi | Persian | Developed field of spherical trigonometry, formulated law of sines for plane triangles
1202-1261 | Qin Jiushao | Chinese | Solutions to quadratic, cubic and higher power equations using a method of repeated approximations
1238-1298 | Yang Hui | Chinese | Culmination of Chinese "magic" squares, circles and triangles, Yang Hui's Triangle (earlier version of Pascal's Triangle of binomial coefficients)
1267-1319 | Kamal al-Din al-Farisi | Persian | Applied theory of conic sections to solve optical problems, explored amicable numbers, factorization and combinatorial methods
1350-1425 | Madhava | Indian | Use of infinite series of fractions to give an exact formula for π, sine formula and other trigonometric functions, important step towards development of calculus
1323-1382 | Nicole Oresme | French | System of rectangular coordinates, such as for a time-speed-distance graph, first to use fractional exponents, also worked on infinite series
1446-1517 | Luca Pacioli | Italian | Influential book on arithmetic, geometry and book-keeping, also introduced standard symbols for plus and minus
1499-1557 | Niccolò Fontana Tartaglia | Italian | Formula for solving all types of cubic equations, involving first real use of complex numbers (combinations of real and imaginary numbers), Tartaglia's Triangle (earlier version of Pascal's Triangle)
1501-1576 | Gerolamo Cardano | Italian | Published solution of cubic and quartic equations (by Tartaglia and Ferrari), acknowledged existence of imaginary numbers (based on √-1)
1522-1565 | Lodovico Ferrari | Italian | Devised formula for solution of quartic equations
1550-1617 | John Napier | British | Invention of natural logarithms, popularized the use of the decimal point, Napier's Bones tool for lattice multiplication
1588-1648 | Marin Mersenne | French | Clearing house for mathematical thought during 17th Century, Mersenne primes (prime numbers that are one less than a power of 2)
1591-1661 | Girard Desargues | French | Early development of projective geometry and "point at infinity", perspective theorem
1596-1650 | René Descartes | French | Development of Cartesian coordinates and analytic geometry (synthesis of geometry and algebra), also credited with the first use of superscripts for powers or exponents
1598-1647 | Bonaventura Cavalieri | Italian | "Method of indivisibles" paved way for the later development of infinitesimal calculus
1601-1665 | Pierre de Fermat | French | Discovered many new number patterns and theorems (including Little Theorem, Two-Square Theorem and Last Theorem), greatly extending knowledge of number theory, also contributed to probability theory
1616-1703 | John Wallis | British | Contributed towards development of calculus, originated idea of number line, introduced symbol ∞ for infinity, developed standard notation for powers
1623-1662 | Blaise Pascal | French | Pioneer (with Fermat) of probability theory, Pascal's Triangle of binomial coefficients
1643-1727 | Isaac Newton | British | Development of infinitesimal calculus (differentiation and integration), laid ground work for almost all of classical mechanics, generalized binomial theorem, infinite power series
1646-1716 | Gottfried Leibniz | German | Independently developed infinitesimal calculus (his calculus notation is still used), also practical calculating machine using binary system (forerunner of the computer), solved linear equations using a matrix
1654-1705 | Jacob Bernoulli | Swiss | Helped to consolidate infinitesimal calculus, developed a technique for solving separable differential equations, added a theory of permutations and combinations to probability theory, Bernoulli Numbers sequence, transcendental curves
1667-1748 | Johann Bernoulli | Swiss | Further developed infinitesimal calculus, including the "calculus of variation", functions for curve of fastest descent (brachistochrone) and catenary
1667-1754 | Abraham de Moivre | French | De Moivre's formula, development of analytic geometry, first statement of the formula for the normal distribution curve, probability theory
1690-1764 | Christian Goldbach | German | Goldbach Conjecture, Goldbach-Euler Theorem on perfect powers
1707-1783 | Leonhard Euler | Swiss | Made important contributions in almost all fields and found unexpected links between different fields, proved numerous theorems, pioneered new methods, standardized mathematical notation and wrote many influential textbooks
1728-1777 | Johann Lambert | Swiss | Rigorous proof that π is irrational, introduced hyperbolic functions into trigonometry, made conjectures on non-Euclidean space and hyperbolic triangles
1736-1813 | Joseph Louis Lagrange | Italian/French | Comprehensive treatment of classical and celestial mechanics, calculus of variations, Lagrange's theorem of finite groups, four-square theorem, mean value theorem
1746-1818 | Gaspard Monge | French | Inventor of descriptive geometry, orthographic projection
1749-1827 | Pierre-Simon Laplace | French | Celestial mechanics, translated geometric study of classical mechanics to one based on calculus, Bayesian interpretation of probability, belief in scientific determinism
1752-1833 | Adrien-Marie Legendre | French | Abstract algebra, mathematical analysis, least squares method for curve-fitting and linear regression, quadratic reciprocity law, prime number theorem, elliptic functions
1768-1830 | Joseph Fourier | French | Studied periodic functions and infinite sums in which the terms are trigonometric functions (Fourier series)
1777-1855 | Carl Friedrich Gauss | German | Pattern in occurrence of prime numbers, construction of heptadecagon, Fundamental Theorem of Algebra, exposition of complex numbers, least squares approximation method, Gaussian distribution, Gaussian function, Gaussian error curve, non-Euclidean geometry, Gaussian curvature
1789-1857 | Augustin-Louis Cauchy | French | Early pioneer of mathematical analysis, reformulated and proved theorems of calculus in a rigorous manner, Cauchy's theorem (a fundamental theorem of group theory)
1790-1868 | August Ferdinand Möbius | German | Möbius strip (a two-dimensional surface with only one side), Möbius configuration, Möbius transformations, Möbius transform (number theory), Möbius function, Möbius inversion formula
1791-1858 | George Peacock | British | Inventor of symbolic algebra (early attempt to place algebra on a strictly logical basis)
1791-1871 | Charles Babbage | British | Designed a "difference engine" that could automatically perform computations based on instructions stored on cards or tape, forerunner of programmable computer
1792-1856 | Nikolai Lobachevsky | Russian | Developed theory of hyperbolic geometry and curved spaces independently of Bolyai
1802-1829 | Niels Henrik Abel | Norwegian | Proved impossibility of solving quintic equations, group theory, abelian groups, abelian categories, abelian variety
1802-1860 | János Bolyai | Hungarian | Explored hyperbolic geometry and curved spaces independently of Lobachevsky
1804-1851 | Carl Jacobi | German | Important contributions to analysis, theory of periodic and elliptic functions, determinants and matrices
1805-1865 | William Hamilton | Irish | Theory of quaternions (first example of a non-commutative algebra)
1811-1832 | Évariste Galois | French | Proved that there is no general algebraic method for solving polynomial equations of degree greater than four, laid groundwork for abstract algebra, Galois theory, group theory, ring theory, etc
1815-1864 | George Boole | British | Devised Boolean algebra (using operators AND, OR and NOT), starting point of modern mathematical logic, led to the development of computer science
1815-1897 | Karl Weierstrass | German | Discovered a continuous function with no derivative, advancements in calculus of variations, reformulated calculus in a more rigorous fashion, pioneer in development of mathematical analysis
1821-1895 | Arthur Cayley | British | Pioneer of modern group theory, matrix algebra, theory of higher singularities, theory of invariants, higher dimensional geometry, extended Hamilton's quaternions to create octonions
1826-1866 | Bernhard Riemann | German | Non-Euclidean elliptic geometry, Riemann surfaces, Riemannian geometry (differential geometry in multiple dimensions), complex manifold theory, zeta function, Riemann Hypothesis
1831-1916 | Richard Dedekind | German | Defined some important concepts of set theory such as similar sets and infinite sets, proposed Dedekind cut (now a standard definition of the real numbers)
1834-1923 | John Venn | British | Introduced Venn diagrams into set theory (now a ubiquitous tool in probability, logic and statistics)
1842-1899 | Marius Sophus Lie | Norwegian | Applied algebra to geometric theory of differential equations, continuous symmetry, Lie groups of transformations
1845-1918 | Georg Cantor | German | Creator of set theory, rigorous treatment of the notion of infinity and transfinite numbers, Cantor's theorem (which implies the existence of an "infinity of infinities")
1848-1925 | Gottlob Frege | German | One of the founders of modern logic, first rigorous treatment of the ideas of functions and variables in logic, major contributor to study of the foundations of mathematics
1849-1925 | Felix Klein | German | Klein bottle (a one-sided closed surface in four-dimensional space), Erlangen Program to classify geometries by their underlying symmetry groups, work on group theory and function theory
1854-1912 | Henri Poincaré | French | Partial solution to "three body problem", foundations of modern chaos theory, extended theory of mathematical topology, Poincaré conjecture
1858-1932 | Giuseppe Peano | Italian | Peano axioms for natural numbers, developer of mathematical logic and set theory notation, contributed to modern method of mathematical induction
1861-1947 | Alfred North Whitehead | British | Co-wrote "Principia Mathematica" (attempt to ground mathematics on logic)
1862-1943 | David Hilbert | German | 23 "Hilbert problems", finiteness theorem, "Entscheidungsproblem" (decision problem), Hilbert space, developed modern axiomatic approach to mathematics
1864-1909 | Hermann Minkowski | German | Geometry of numbers (geometrical method in multi-dimensional space for solving number theory problems), Minkowski space-time
1872-1970 | Bertrand Russell | British | Russell's paradox, co-wrote "Principia Mathematica" (attempt to ground mathematics on logic), theory of types
1877-1947 | G.H. Hardy | British | Progress toward solving Riemann hypothesis (proved infinitely many zeroes on the critical line), encouraged new tradition of pure mathematics in Britain, taxicab numbers
1878-1929 | Pierre Fatou | French | Pioneer in field of complex analytic dynamics, investigated iterative and recursive processes
1881-1966 | L.E.J. Brouwer | Dutch | Proved several theorems marking breakthroughs in topology (including fixed point theorem and topological invariance of dimension)
1887-1920 | Srinivasa Ramanujan | Indian | Proved over 3,000 theorems, identities and equations, including on highly composite numbers, partition function and its asymptotics, and mock theta functions
1893-1978 | Gaston Julia | French | Developed complex dynamics, Julia set formula
1903-1957 | John von Neumann | Hungarian/American | Pioneer of game theory, design model for modern computer architecture, work in quantum and nuclear physics
1906-1978 | Kurt Gödel | Austrian | Incompleteness theorems (there can be solutions to mathematical problems which are true but which can never be proved), Gödel numbering, logic and set theory
1906-1998 | André Weil | French | Theorems allowed connections between algebraic geometry and number theory, Weil conjectures (partial proof of Riemann hypothesis for local zeta functions), founding member of influential Bourbaki group
1912-1954 | Alan Turing | British | Breaking of the German enigma code, Turing machine (logical forerunner of computer), Turing test of artificial intelligence
1913-1996 | Paul Erdös | Hungarian | Set and solved many problems in combinatorics, graph theory, number theory, classical analysis, approximation theory, set theory and probability theory
1917-2008 | Edward Lorenz | American | Pioneer in modern chaos theory, Lorenz attractor, fractals, Lorenz oscillator, coined term "butterfly effect"
1919-1985 | Julia Robinson | American | Work on decision problems and Hilbert's tenth problem, Robinson hypothesis
1924-2010 | Benoît Mandelbrot | French | Mandelbrot set fractal, computer plottings of Mandelbrot and Julia sets
1928- | Alexander Grothendieck | French | Mathematical structuralist, revolutionary advances in algebraic geometry, theory of schemes, contributions to algebraic topology, number theory, category theory, etc
1928- | John Nash | American | Work in game theory, differential geometry and partial differential equations, provided insight into complex systems in daily life such as economics, computing and military
1934-2007 | Paul Cohen | American | Proved that continuum hypothesis could be both true and not true (i.e. independent from Zermelo-Fraenkel set theory)
1937- | John Horton Conway | British | Important contributions to game theory, group theory, number theory, geometry and (especially) recreational mathematics, notably with the invention of the cellular automaton called the "Game of Life"
1947- | Yuri Matiyasevich | Russian | Final proof that Hilbert's tenth problem is impossible (there is no general method for determining whether Diophantine equations have a solution)
1953- | Andrew Wiles | British | Finally proved Fermat's Last Theorem for all numbers (by proving the Taniyama-Shimura conjecture for semistable elliptic curves)
1966- | Grigori Perelman | Russian | Finally proved Poincaré Conjecture (by proving Thurston's geometrization conjecture), contributions to Riemannian geometry and geometric topology
Inverse function on definite integral

The function $\displaystyle f$ and its inverse $\displaystyle f^{-1}$ are continuous. If $\displaystyle f(0) = 0$, find $\displaystyle\int^5_0 f(x)\ dx + \int^{f(5)}_0 f^{-1}(t)\ dt$

I really have no idea how to proceed. Answer: 5f(5)

Reply: I would interpret the problem geometrically, in terms of area. Obviously, $\int_0^5 f(x)\, dx$ is the area under the function $f(x)$, if we assume $f(x)\ge 0\;\forall\,x\in[0,5]$, which we will do for now. Now the graph of the function $f^{-1}$ is just the graph of $f$, but flipped over the line $y=x$. So the integral $\int_0^{f(5)} f^{-1}(t)\, dt$ is the area under the function $f^{-1}$. If you plot this up for a few functions, you might notice that if you flip the area represented by the second integral over the line $y=x$, it will just equal the area required to fill the rectangle whose area is $5f(5)$. The result follows for positive functions, at least. I should point out that one assumption here, which is justified by the hypothesis that both $f$ and $f^{-1}$ are continuous, is that $f^{-1}$ exists. That means $f$ would be 1-1.

Reply: Hint: in the 2nd integral let $u=f^{-1}(t) \iff f(u)=t \implies f'(u)du=dt$, and the new limits of integration are $t=0 \implies u=0 \text{ and } t=f(5) \implies u=5$; then use integration by parts. Also this can be seen on a graph: the first integral is in blue, the inverse is in brown.

Edit: Wow, I am really slow at posting today.
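A quick sanity check of the result (not part of the original thread): take $f(x) = x^2$ on $[0,5]$, so that $f(0)=0$, $f(5)=25$ and $f^{-1}(t)=\sqrt{t}$. Then $\int_0^5 x^2\, dx = \frac{125}{3}$ and $\int_0^{25} \sqrt{t}\, dt = \frac{2}{3}\cdot 25^{3/2} = \frac{250}{3}$, so the sum is $\frac{125}{3}+\frac{250}{3} = 125 = 5\,f(5)$, exactly as claimed.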
Alfred North Whitehead

Born: 15 February 1861 in Ramsgate, Isle of Thanet, Kent, England
Died: 30 December 1947 in Cambridge, Massachusetts, USA

Alfred North Whitehead's father, also named Alfred Whitehead, was an Anglican clergyman from Ramsgate. He is said to have been an upright man with countless friends, and Alfred North Whitehead's son, North Whitehead, wrote of his grandfather:-

He never asked a favour of anyone and never shirked what he considered to be a duty, but it cannot be said that he spent more time absorbing the lessons of the New Testament than was necessitated by his calling.

Canon Alfred Whitehead, the mathematician's father, married Maria Sarah Buckmaster, who came from London, on 20 December 1851. She is described as (see [6]):-

... an unimaginative, small minded woman with some wit but no sense of humour.

Alfred and Maria Whitehead had four children, with Alfred North Whitehead as the youngest of the family. He had two brothers who were seven and eight years older than he was, and a sister who was two years older. Whitehead was always treated by his parents as the baby of the family and, rather surprisingly, they considered him a sickly and frail child when it appears that this was not the case.

Whitehead was not sent to primary school because his parents thought that he was too delicate, so he was taught at home by his father until he was 14. Other than the usual childhood illnesses he was, despite his parents' views, a healthy child. He received much affection from his father and brothers (but sadly little from his mother) and he seems to have had a childhood which was not unhappy, even though he was on his own a great deal and must have been somewhat lonely.

Whitehead's father taught him Latin from the age of ten and Greek from the age of twelve. His ability in these subjects could certainly be classed as competent but it was certainly not outstanding; there was no sign of the genius that he showed later in life. He did learn a little mathematics from his father but quite how he developed an interest in the subject is a mystery. In September 1875 he left his father's vicarage and entered Sherbourne Independent School. His oldest brother became a teacher at the school in 1876 when Whitehead was entering his second year of study. The course he followed at Sherbourne was a fairly standard one for the time. There was little choice of subjects and all the boys studied as their major subjects Latin, Greek and English, with the minor subjects of mathematics, physical sciences, history, geography and modern languages receiving less attention. Whitehead showed a special gift for mathematics and was allowed to devote extra time to that subject in his final school year, dropping composition and reading of Latin poetry to make way for the extra mathematics.

In 1879 Whitehead took the entrance examinations for Trinity College, Cambridge, and he won a scholarship. Following this he spent his final year at Sherbourne as Head Boy and Captain of Games before he entered university in October 1880. As the holder of a scholarship, Whitehead lived in College. He attended only mathematics lectures and was taught by J W L Glaisher, H M Taylor, and W D Niven. He also attended lectures by Stokes and Cayley while his coach was the famous E J Routh. Among his close friends at Cambridge was D'Arcy Thompson.
Whitehead won a second scholarship, a College Foundation, and so by the time he entered his second year of study he was quite well off. He took the Mathematical Tripos examinations in 1883 and was placed Fourth Wrangler; the Senior Wrangler that year was G B Mathews (the Senior Wrangler was ranked first, the Fourth Wrangler ranked fourth in the list of students awarded a First Class degree). In the following year he was also placed in the First Class of Part III of the Mathematical Tripos. He presented a dissertation on Maxwell's theory of electricity and magnetism in the competition for a Fellowship in 1884. Thomson and Forsyth were appointed to examine Whitehead and, much to his surprise, he won one of the five scholarships available that year.

After winning the Fellowship, Whitehead was appointed to an assistant lectureship. He taught mostly applied mathematics but, surprisingly, he published no papers during the first five years of his tenure of the Fellowship. It is not known if he worked on mathematical research over this period. Certainly he was very much of a loner and did not talk much with the other mathematicians. In the twelve years following his taking up the teaching position at Cambridge he published only two papers, both in 1889 on the motion of viscous fluids. The topic almost certainly interested him because he had attended lectures by Stokes on viscous fluids. Despite his poor publication record, Whitehead was promoted to a Lectureship at Cambridge in 1888. He took up additional teaching duties by accepting a teaching position at Girton College. All the signs at this time would point to him having decided that his strength was in teaching and not in publishing. A rather remarkable change came, however, when he married Evelyn Wade in London on 16 December 1890 [6]:-

Whereas he was quiet and restrained, she was active and outgoing.

He had become interested in pure mathematics and he started work on Treatise on Universal Algebra in January 1891, just weeks after his marriage. The work would take him seven years to complete, not finally being published until 1898. Whitehead's wife, Evelyn Wade, was the daughter of Captain A Wade, and they had three children, two sons and a daughter. The younger of the two sons, Eric Alfred Whitehead, became a second lieutenant in the Royal Flying Corps (which was set up in 1912 and later became part of the Royal Air Force) and died while on a flying patrol in France in 1918.

Other changes in Whitehead's life took place around the time of his marriage. We have already indicated that Whitehead's father was an Anglican vicar and, of course, Whitehead was brought up as an Anglican. However around 1889-90 he began to move towards the Roman Catholic Church. He debated with himself for seven years whether to remain an Anglican or join the Roman Catholic Church. In the end he chose neither and became an agnostic around the mid 1890s. He himself stated that the biggest factor in his becoming an agnostic was the rapid developments in science, particularly his view that Newton's physics was false. It may seem surprising to many that the correctness of Newton's physics could be a major factor in deciding anyone's religious views. However one has to understand the complex person that Whitehead was, and in particular the interest which he was developing in philosophy and metaphysics. We should return to the story of the Treatise on Universal Algebra which Whitehead worked on for much of the 1890s.
Perhaps the first comment we should make is that the work is not on the modern topic of universal algebra, for the term 'universal algebra' had quite a different meaning to Whitehead. In fact the name was taken from a paper published by Sylvester fourteen years earlier. In the Preface to the treatise he writes that his aim is:-

... to present a thorough investigation of the various systems of symbolic reasoning allied to ordinary algebra ... . The chief examples of such systems are Hamilton's Quaternions, Grassmann's Calculus of Extension, and Boole's Symbolic Algebra.

Also in the Preface Whitehead gives his views on the nature of mathematics and the philosophy of mathematics:-

Mathematics in its widest signification is the development of all types of formal, necessary, deductive reasoning. The reasoning is formal in the sense that the meaning of propositions forms no part of the investigation. The sole concern of mathematics is the inference of proposition from proposition. ... The ideal of mathematics should be to erect a calculus to facilitate reasoning in connection with every province of thought, or external experience, in which the succession of thoughts, or of events can be definitely ascertained and precisely stated. So that all serious thought which is not philosophy, or inductive reasoning, or imaginative literature, shall be mathematics developed by means of a calculus.

Although Whitehead became very productive after his marriage, he never considered himself a creator of new areas of mathematics, but rather a developer of ideas introduced by others. This does not mean that his contribution should be considered any less important because of this, but certainly Cambridge seems to have undervalued his contribution. In 1894 Whitehead became an examiner for the Mathematical Tripos. In 1903 he was promoted to Senior Lecturer, a position which had only just been established at Cambridge.

Whitehead is perhaps best known for his collaboration with Bertrand Russell. We shall give details of this collaboration below, but first we shall complete the details of Whitehead's career. He remained at Cambridge until 1910 but, in some sense, having not made the grade in mathematics and having little prospect of a mathematics chair at Cambridge, he moved to the University of London. This explanation of his move is almost certainly basically correct and this indeed was the motivation behind Whitehead's thinking; on the face of it, however, rather different and dramatic events ended his association with Cambridge. In 1910 Andrew Forsyth, who had been a close friend of Whitehead's since his student days, had a love affair with Marion Amelia Boys, the wife of C V Boys, and the scandal forced him to resign his chair at Cambridge. Whitehead did everything he could to ensure that Forsyth kept his Fellowship. The decision as to whether he could keep the Fellowship was taken by the Council of Trinity and Whitehead, as a member of that Council, argued strongly that Forsyth should be allowed to remain a Fellow of Trinity. Whitehead was outvoted on the Council, however, and shortly after this he resigned his Senior Lectureship and his Fellowship. The Council then voted that Whitehead had served as a Lecturer for over 25 years (the maximum period) and so had to leave his post. Whitehead's appointment as Senior Lecturer still had three years to run but he did not stay to argue his case. He moved to London in the summer of 1910 with no job to go to.
In 1914, after four years without a proper position, he became Professor of Applied Mathematics at the Imperial College of Science and Technology in London. He accepted a chair in philosophy at Harvard University in 1924, and he taught at Harvard until his retirement in 1937. Bertrand Russell entered Cambridge in 1890 and immediately Whitehead, as examiner for the entrance examinations, spotted Russell's brilliance in his examination papers. Whitehead argued that Russell should be awarded a more prestigious scholarship than his marks would have merited and indeed this was agreed. When Russell was in his second year as an undergraduate he was taught by Whitehead. Their collaboration on Principia Mathematica appears to have begun near the end of 1900, although both men failed to remember the exact time their collaboration began when interviewed late in their lives. In fact they had attended the International Congress of Mathematicians in Paris in 1900 and there they had learnt about Peano's work on the foundations of mathematics. This led to them study Peano's papers and this must have been a major factor in getting their collaboration started. At the time they began collaborating, Whitehead was working on his article Memoir on the algebra of symbolic logic while Russell was close to finishing the first draft of his Principles of mathematics. Whitehead was planning a second volume of Treatise on Universal Algebra but both their plans were somewhat disrupted in 1901 when Russell discovered his famous set theory paradox. After the initial worry over the paradox they joined forces on Volume 2 of Russell's work so, by 1903, Whitehead was working simultaneously on two different second volumes. Realising that this was not the optimal course for him he abandoned the second volume of his own work to concentrate on his collaboration with Russell. Their joint work attempted to construct the foundations of mathematics on a rigorous logical basis and it was carried out with Russell as the philosopher on the project and Whitehead as the mathematician. Working with Russell did not occupy Whitehead completely for he continued to produce work of his own. In 1906 he published The axioms of projective geometry and, in the following year, The axioms of descriptive geometry. The first volume of Principia Mathematica was published in 1910, the second in 1912, and the third in 1913. He also wrote the popular mathematics book An introduction to mathematics which was published in 1911, between Volumes 1 and 2 of the Principia. As the Principia Mathematica neared completion, Whitehead turned his attention to the philosophy of science. This interest arose out of the attempt to explain the relation of formal mathematical theories in physics to their basis in experience, and was sparked by the revolution brought on by Einstein's general theory of relativity. In The Principle of Relativity (1922), Whitehead presented an alternative to Einstein's views. Science and the Modern World (1925), a series of lectures given in the United States, served as an introduction to his later metaphysics. Whitehead's most important book, Process and Reality (1929), took this theory to a level of even greater generality. Whitehead received many honours throughout his career. Elected to the Royal Society in 1903, he was awarded the Society's Sylvester Medal in 1925 because of his work on the foundations of mathematics and his studies of physical concepts. 
The Royal Society of Edinburgh awarded him their James Scott Prize in 1922 (he was the first recipient). Columbia University awarded him their Butler Medal in 1930 and in the following year he was elected to the British Academy. He was awarded the Order of Merit in 1945. Many universities awarded him an honorary degree, including Manchester, St Andrews, Wisconsin, Harvard, Yale and Montreal.

Article by: J J O'Connor and E F Robertson
Audubon Park, NJ Calculus Tutor

Find an Audubon Park, NJ Calculus Tutor

...I believe that the best approach for teaching is to help students conceptualize some seemingly abstract topics in these subjects. Most of the time people get hung up on the language or complex symbols used in math and science when really the key to understanding is to be able to look beyond those...
16 Subjects: including calculus, Spanish, physics, algebra 1

...In between formal tutoring sessions, I offer my students FREE email support to keep them moving past particularly tough problems. In addition, I offer FREE ALL NIGHT email/phone support just before the "big" exam, for students who pull "all nighters". One quick note about my cancellation policy...
14 Subjects: including calculus, physics, geometry, ASVAB

...I teach reading music in both treble and bass clefs, time and key signatures. I also teach Solfege (Do, Re, Mi...) and intervals to help with correct pitch. I use IPA to teach correct pronunciation of foreign language lyrics (particularly Latin). I have a portable keyboard to bring to lessons and music if the student doesn't have their own selections that they wish to learn.
58 Subjects: including calculus, reading, geometry, biology

...Our sessions will be based on mutual respect. In a short time we can build ourselves a partnership of learning. I can offer endless alternative ways of presenting material to help you...
10 Subjects: including calculus, physics, geometry, algebra 1

...If your home isn't ideal, contact me and we can work out a convenient location. I'm looking forward to hearing from you soon! As a tutor with a primary focus in math and science, I not only tutor algebra frequently, but also encounter this fundamental math subject every day in my professional lif...
9 Subjects: including calculus, physics, geometry, algebra 1
Stafford, TX Trigonometry Tutor

Find a Stafford, TX Trigonometry Tutor

A mother of three, I graduated with honors in Chemical Engineering and taught at Kumon for over 5 years. I love helping students discover the unlimited ways of learning higher level math. I am available on Saturday mornings and provide a free diagnostic to test for areas of weakness and strengths.
9 Subjects: including trigonometry, calculus, algebra 1, algebra 2

...This allows me to more effectively motivate and inspire them to achieve their goals. I am happy to tutor on campus in one of the private study rooms at the library or come to another location (as long as there are minimal distractions)! Feel free to reach out to me for more details and I look f...
22 Subjects: including trigonometry, chemistry, geometry, physics

...I have conducted TAKS tutorials for over 20 years. I helped organize TAKS tutorials when I was math department head at Stafford High School. I have had much success with my students.
14 Subjects: including trigonometry, calculus, geometry, algebra 1

...I am a retired state-certified teacher in Texas in both composite high school science and mathematics. I offer a no-fail guarantee (contact me via WyzAnt for details). I am available at any time of the day; I try to be as flexible as possible. I try as much as possible to work in the comfort of your own home at a schedule convenient to you.
35 Subjects: including trigonometry, chemistry, calculus, physics

I have taught calculus and analytic geometry at the U.S. Air Force Academy for 7 years.
11 Subjects: including trigonometry, calculus, geometry, statistics
Fort Worth Trigonometry Tutor ...I hold a certificate in composite science and physical education from the state. I live in the DFW area and would be very happy to offer help and assistance to a student in a variety of content areas. I am also bilingual in Spanish if that is an issue of concern. 21 Subjects: including trigonometry, chemistry, Spanish, biology ...In high school, I took math courses up to AP Calculus BC, receiving A's in these math courses. During my time in high school, I was a part of the National Honor Society and completed community service hours by tutoring my peers in math after school. I am easy-going and get along great with kids. 7 Subjects: including trigonometry, geometry, biology, algebra 1 ...So, it is always to the students' advantage to send ahead of time prior to our scheduled lesson. Praxis is a teacher's qualification test for various subjects - similar to the Texas TExES 135 Math exam - which must be passed for math teachers to teach mathematics in public high schools in Texas. I passed this 5 hour exam the first time taking it. 20 Subjects: including trigonometry, calculus, statistics, geometry ...I have a Masters degree in Mathematics and have tutored Linear Algebra in the past. I have a master's degree in Mathematics. I have also passed two of the actuarial exams. 15 Subjects: including trigonometry, chemistry, calculus, geometry ...I have taught cell biology to undergraduates at a major university. My research has required an understanding of biology at the molecular level (DNA sequencing, enzymology), the cellular level (cell culture, electron microscopy) and at the animal level (electrophysiology, muscle physiology and r... 55 Subjects: including trigonometry, chemistry, reading, writing
Multiscale expansion of invariant measures for SPDEs

Cited in: Linearization of Random Dynamical Systems. Dynamics Report, Volume 4. Springer-Verlag, 1995. Cited by 4 (1 self).
Abstract. Stochastic partial differential equations arise as mathematical models of complex multiscale systems under random influences. Invariant manifolds often provide geometric structure for understanding stochastic dynamics. In this paper, a random invariant manifold reduction principle is proved for a class of stochastic partial differential equations. The dynamical behavior is shown to be described by a stochastic ordinary differential equation on an invariant manifold, under suitable conditions. The change of dynamical structures for the stochastic partial differential equations is thus obtained by investigating the stochastic ordinary differential equation. The random cone invariant property is used in the approach. Moreover, the invariant manifold reduction principle is applied to detect bifurcation phenomena and stationary states in stochastic parabolic and hyperbolic partial differential equations.

Cited in: 2009. Cited by 2 (0 self).
Abstract. The qualitative properties of local random invariant manifolds for stochastic partial differential equations with quadratic nonlinearities and multiplicative noise are studied by a cut-off technique. By detailed estimates on the Perron fixed point equation describing the local random invariant manifold, the structure near a bifurcation is given.
The notion of finding area was rigorously defined by Riemann, which is why we call these Riemann sums. The sums transform into the integral, which is used to find area.

EDIT: I apologize. My LaTeX coding is not working. Perhaps a mod could point out the error.

[Don't worry about it. LaTeX is pretty complicated. It took me a while to get that working. Hopefully that's what you meant to say.]
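To make "the sums transform into the integral" concrete, here is a small C++ sketch (not from the thread; function names are illustrative) that computes left-endpoint Riemann sums for f(x) = x^2 on [0,1]. As the number of rectangles n grows, the sums approach the exact area, the integral of x^2 from 0 to 1, which is 1/3.

    #include <cstdio>

    // f(x) = x^2, whose exact area on [0,1] is 1/3.
    double f(double x) { return x * x; }

    // Left-endpoint Riemann sum of f on [a,b] with n subintervals.
    double riemannSum(double a, double b, int n) {
        double dx = (b - a) / n;
        double sum = 0.0;
        for (int i = 0; i < n; ++i) {
            sum += f(a + i * dx) * dx;   // area of the i-th rectangle
        }
        return sum;
    }

    int main() {
        for (int n = 10; n <= 100000; n *= 10) {
            std::printf("n = %6d  sum = %.8f\n", n, riemannSum(0.0, 1.0, n));
        }
        // The printed values converge to 0.33333333... = 1/3.
        return 0;
    }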
MySQL Lists: mysql: Defragmentation of MySQL tables, how many times do I have to do it?
List: General Discussion
From: Antonio Fernández Pérez
Date: March 28 2013 11:09am
Subject: Defragmentation of MySQL tables, how many times do I have to do it?

Hi everybody,

Once I have done the defragmentation of MySQL tables, mysqltuner.pl suggests that I do it again. Is this correct? I thought the idea was that, after defragmenting, the script's report should show no fragmented tables...

Any ideas?

Thank you very much.

Best regards,
Larry Simpson, Marketing Consultant, Technical Writer, Computer Programmer, Designer, Runner

If you are not really much of a math person, this formula may look a little intimidating at first. The "e" is Euler's number, the base of the natural logarithm; it is a constant, equal to 2.71828... (the calculation below uses the truncated value 2.7182). The "t" is the amount of time that a human can run at the calculated intensity. And finally the "I" is the intensity, expressed as a percentage of a person's maximum oxygen uptake capacity. So in other words, this formula predicts how long you can run at a given percentage of your maximum oxygen uptake capacity before the body starts to lock up and you are forced to slow down because the muscles will no longer fire properly.

So what does this mean given the fact that in our example we want to run the 5000 meters in 13:17? What is the "drop dead" percentage of maximum oxygen uptake that we can run for this length of time? Again, we must substitute our known data into our equation. We know the value of "t" already: it is 13.28 minutes. And we know that "e" is equal to 2.7182. We now make the substitutions (the exponent in each term is the product of the decay constant and t):

I = 0.2989558 x 2.7182^(-0.1932605 x 13.28) + 0.1894393 x 2.7182^(-0.012778 x 13.28) + 0.8
I = 0.2989558 x 2.7182^(-2.5664994) + 0.1894393 x 2.7182^(-0.16969184) + 0.8
I = 0.2989558 x 0.07680394 + 0.1894393 x 0.84392484 + 0.8
I = 0.0229610 + 0.15987253 + 0.8
I = 0.98, or 98% of maximum oxygen uptake

Now suppose that you have gone to a physiology laboratory or a good sports doctor and it has been determined that you have a maximum oxygen uptake of 83 ml/kg/min. Given your goal of 13.28 minutes, you can expect to expend for that length of time at most 81.34 ml/kg/min if you want to make it to the finish line in time (0.98 x 83 = 81.34). You also learned on the previous page that it would take 78.63 ml/kg/min to run at the velocity that you are trying to achieve. Since 81.34 is larger than 78.63, you know from these predictions that your goal of 13.28 minutes is now achievable. So like the Nike commercial says, "Just Do It!"
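A small C++ sketch of the same computation (the coefficients are the ones quoted above; the function name is my own, and exp() uses the exact value of e rather than 2.7182, so the last decimal places differ slightly from the hand calculation):

    #include <cmath>
    #include <cstdio>

    // Fraction of maximum oxygen uptake sustainable for t minutes,
    // using the coefficients quoted in the text above.
    double sustainableIntensity(double t) {
        return 0.2989558 * std::exp(-0.1932605 * t)
             + 0.1894393 * std::exp(-0.012778  * t)
             + 0.8;
    }

    int main() {
        double t = 13.28;  // goal time in minutes for the 5000 meters
        double I = sustainableIntensity(t);
        std::printf("I(%.2f) = %.4f  (about %.0f%% of max uptake)\n", t, I, 100.0 * I);
        // With a measured maximum uptake of 83 ml/kg/min:
        std::printf("sustainable uptake = %.2f ml/kg/min\n", I * 83.0);
        return 0;
    }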
Graphical Solutions to Systems of Inequalities

5.6: Graphical Solutions to Systems of Inequalities
Created by: CK-12

Graph the following system of linear inequalities on the same Cartesian grid. $\begin{Bmatrix} y>-\frac{1}{2}x+2 \\ y \le 2x-3\end{Bmatrix}$ What is the solution to this system?

Watch This: Khan Academy, Systems of Linear Inequalities

When a linear inequality is graphed it appears as a shaded region on the Cartesian plane. Therefore, when a system of linear inequalities is graphed, you will see two shaded regions that most likely overlap in some places. The solution to a system of linear inequalities is this region of intersection. The solution to a system of linear inequalities is also known as the feasible region.

Example A

Solve the following system of linear inequalities by graphing: $\begin{Bmatrix} -2x-6y \le 12\\ -x +2y > -4\end{Bmatrix}$

Write each inequality in slope-intercept form.

$-2x-6y \le 12 \implies -6y \le 2x+12 \implies y \ge -\frac{1}{3}x-2$ (dividing by $-6$ reverses the inequality, and the slope $-\frac{2}{6}$ simplifies to $-\frac{1}{3}$)

$-x+2y > -4 \implies 2y > x-4 \implies y > \frac{1}{2}x-2$

Graph: $y \ge -\frac{1}{3}x-2$

The point (1, 1) is not on the graphed line. Test the point (1, 1) to determine the location of the shading.

$-2x-6y \le 12 \implies -2(1)-6(1) \le 12 \implies -8 \le 12$. Is it true?

Yes, negative eight is less than or equal to twelve. The point (1, 1) satisfies the inequality and will lie within the shaded region.

Now graph $y > \frac{1}{2}x-2$.

The point (1, 1) is not on the graphed line. Test the point (1, 1) to determine the location of the shading.

$-x+2y > -4 \implies -(1)+2(1) > -4 \implies 1 > -4$. Is it true?

Yes, one is greater than negative four. The point (1, 1) satisfies the inequality and will lie within the shaded region.

The region that is indicated as the feasible region is the area on the graph where the shading from each line overlaps. This region contains all the points that will satisfy both inequalities. The feasible region is the area shaded in blue.

Example B

Solve the following system of linear inequalities by graphing: $\begin{Bmatrix} x \ge 1\\ y > 2\end{Bmatrix}$

These are the special lines that need to be graphed. The first line has an undefined slope; its graph is a vertical line parallel to the $y$-axis. The second line has a slope of zero; its graph is a horizontal line parallel to the $x$-axis.

Graph: $x \ge 1$. On the graph, every point to the right of the vertical line has an $x$-coordinate greater than 1.

Now graph $y > 2$. On the graph, every point above the horizontal line has a $y$-coordinate greater than 2.

The region that is indicated as the feasible region is the area on the graph where the shading from each line overlaps. This region contains all the points that will satisfy both inequalities. The feasible region is the area shaded in purple.

Example C

More than two inequalities can be shaded on the same Cartesian plane.
The solution set is all of the coordinates that lie within the shaded regions that overlap. When more than two inequalities are being shaded on the same grid, the shading must be done accurately and neatly.

Solve the following system of linear inequalities by graphing: $\begin{Bmatrix} y < x+1\\ y \ge -2x+4\\ y > 0 \end{Bmatrix}$

Graph: $y<x+1$

The point (1, 1) is not on the graphed line. Test the point (1, 1) to determine the location of the shading.

$y < x+1 \implies (1) < (1)+1 \implies 1 < 2$. Is it true?

Yes, one is less than two. The point (1, 1) satisfies the inequality and will lie within the shaded region.

Graph: $y \ge -2x+4$

The point (1, 1) is not on the graphed line. Test the point (1, 1) to determine the location of the shading.

$y \ge -2x+4 \implies (1) \ge -2(1)+4 \implies 1 \ge 2$. Is it true?

No, one is not greater than or equal to two. The point (1, 1) does not satisfy the inequality and will not lie within the shaded region.

The graph of $y>0$ is a horizontal line that coincides with the $x$-axis; the shading covers every point with a $y$-coordinate greater than zero.

The region that is indicated as the feasible region is the area on the graph where the shading from each line overlaps. This region contains all the points that will satisfy both inequalities. The feasible region is the area on the right shaded in pink.

Concept Problem Revisited

$\begin{Bmatrix} y>-\frac{1}{2}x+2 \\ y \le 2x-3\end{Bmatrix}$

Both inequalities are in slope-intercept form. Begin by graphing $y > -\frac{1}{2}x+2$.

The point (1, 1) is not on the graphed line. Test the point (1, 1) to determine the location of the shading.

$y > -\frac{1}{2}x+2 \implies (1) > -\frac{1}{2}(1)+2 \implies 1 > 1\frac{1}{2}$. Is it true?

No, one is not greater than one and one-half. Therefore, the point (1, 1) does not satisfy the inequality and will not lie in the shaded area. The shaded area is above the dashed line.

Now, the inequality $y \le 2x-3$:

$(1) \le 2(1)-3 \implies 1 \le -1$. Is it true?

No, one is not less than or equal to negative one. Therefore, the point (1, 1) does not satisfy the inequality and will not lie in the shaded area. The shaded area is below the solid line.

The region that is indicated as the feasible region is the area on the graph where the shading from each line overlaps. This region contains all the points that will satisfy both inequalities. Another way to indicate the feasible region is to shade the entire region a different color. If you were to do this exercise in your notebook using colored pencils, the feasible region would be very easy to see.

The feasible region is the area shaded in yellow.

Feasible region: the part of the graph where the shaded areas of the inequalities overlap. This area contains all the solution sets for the inequalities.

Guided Practice

1. Solve the following system of linear inequalities by graphing: $\begin{Bmatrix} 4x+5y \le 20\\ 3x + y \le 6\end{Bmatrix}$

2. Solve the system of linear inequalities by graphing: $\begin{Bmatrix} 2x+y \le 8\\ 2x+3y < 12\\ x \ge 0\\ y \ge 0\end{Bmatrix}$

3.
Determine and prove three points that satisfy the following system of linear inequalities: $\begin{Bmatrix} y < 2x+7\\ y \ge -3x-4 \end{Bmatrix}$

1. $\begin{Bmatrix} 4x+5y \le 20\\ 3x + y \le 6\end{Bmatrix}$

Write each inequality in slope-intercept form.

$4x+5y \le 20 \implies 5y \le -4x+20 \implies y \le -\frac{4}{5}x+4$

$3x+y \le 6 \implies y \le -3x+6$

Graph: $y \le -\frac{4}{5}x+4$

The point (1, 1) is not on the graphed line. Test the point (1, 1) to determine the location of the shading.

$4x+5y \le 20 \implies 4(1)+5(1) \le 20 \implies 9 \le 20$. Is it true?

Yes, nine is less than or equal to twenty. The point (1, 1) satisfies the inequality and will lie within the shaded region.

Graph: $y \le -3x+6$

The point (1, 1) is not on the graphed line. Test the point (1, 1) to determine the location of the shading.

$3x+y \le 6 \implies 3(1)+(1) \le 6 \implies 4 \le 6$. Is it true?

Yes, four is less than or equal to six. The point (1, 1) satisfies the inequality and will lie within the shaded region.

The feasible region is the area shaded in pink.

2. $\begin{Bmatrix} 2x+y \le 8\\ 2x + 3y < 12 \\ x \ge 0\\ y \ge 0\end{Bmatrix}$

Write the first two inequalities in slope-intercept form.

$2x+3y < 12 \implies 3y < -2x+12 \implies y < -\frac{2}{3}x+4$

$2x+y \le 8 \implies y \le -2x+8$

Graph: $y<-\frac{2}{3}x+4$

Test (1, 1): $2x+3y < 12 \implies 2(1)+3(1) < 12 \implies 5 < 12$. Is it true? Yes, five is less than twelve. The point (1, 1) satisfies the inequality and will lie within the shaded region.

Graph: $y \le -2x+8$

Test (1, 1): $2x+y \le 8 \implies 2(1)+(1) \le 8 \implies 3 \le 8$. Is it true? Yes, three is less than or equal to eight. The point (1, 1) satisfies the inequality and will lie within the shaded region.

Graph: $x \ge 0$. The graph is a vertical line that coincides with the $y$-axis; all points with $x$-coordinates greater than or equal to zero satisfy this inequality.

Graph: $y \ge 0$. The graph is a horizontal line that coincides with the $x$-axis; all points with $y$-coordinates greater than or equal to zero satisfy this inequality.

The feasible region is the area shaded in green.

3. $\begin{Bmatrix} y < 2x+7\\ y \ge -3x-4 \end{Bmatrix}$

Graph the system of inequalities to determine the feasible region. Three points in the feasible region are (-1, 3); (4, -2); and (6, 5). These points will be tested in each of the linear inequalities. All of these points should satisfy both inequalities.
Test (-1, 3):

$y < 2x+7 \implies (3) < 2(-1)+7 \implies 3 < 5$ and $y \ge -3x-4 \implies (3) \ge -3(-1)-4 \implies 3 \ge -1$

The point (-1, 3) satisfies both inequalities. In the first inequality, three is less than five. In the second inequality, three is greater than or equal to negative one. Therefore, the point lies within the feasible region and is a solution for the system of linear inequalities.

Test (4, -2):

$y < 2x+7 \implies (-2) < 2(4)+7 \implies -2 < 15$ and $y \ge -3x-4 \implies (-2) \ge -3(4)-4 \implies -2 \ge -16$

The point (4, -2) satisfies both inequalities. In the first inequality, negative two is less than fifteen. In the second inequality, negative two is greater than or equal to negative sixteen. Therefore, the point lies within the feasible region and is a solution for the system of linear inequalities.

Test (6, 5):

$y < 2x+7 \implies (5) < 2(6)+7 \implies 5 < 19$ and $y \ge -3x-4 \implies (5) \ge -3(6)-4 \implies 5 \ge -22$

The point (6, 5) satisfies both inequalities. In the first inequality, five is less than nineteen. In the second inequality, five is greater than or equal to negative twenty-two. Therefore, the point lies within the feasible region and is a solution for the system of linear inequalities. (A short code sketch of this point-checking appears after the practice list below.)

Solve the following systems of linear inequalities by graphing.

1. $\begin{Bmatrix} 3x+5y>15\\ 2x-7y \le 14\end{Bmatrix}$
2. $\begin{Bmatrix} 3x+2y \ge 10\\ x-y < -1\end{Bmatrix}$
3. $\begin{Bmatrix} x-y > 4\\ x+y > 6\end{Bmatrix}$
4. $\begin{Bmatrix} y>3x-2\\ y < -2x+5\end{Bmatrix}$
5. $\begin{Bmatrix} 3x-6y > -6\\ 5x+9y \ge -18\end{Bmatrix}$
6. $\begin{Bmatrix} 2x-y<4\\ x \ge -1\\ y \ge -2\end{Bmatrix}$
7. $\begin{Bmatrix} 2x+y>6\\ x +2y \ge 6\\ x \ge 0\\ y \ge 0\end{Bmatrix}$
8. $\begin{Bmatrix} x \le 3\\ x \ge -2\\ y \le 4\\ y \ge -1\end{Bmatrix}$
9. $\begin{Bmatrix} y < x+1\\ y \ge -2x+3\\ y > 0\end{Bmatrix}$
10. $\begin{Bmatrix} x+y > -1\\ 3x -2y \ge 2\\ x < 3\\ y \ge 0\end{Bmatrix}$
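The guided practice checks each candidate point by hand. The same check can be automated; here is a brief C++ sketch (the system and points are the ones from guided practice 3; everything else, including the function name, is illustrative):

    #include <cstdio>

    // True when (x, y) satisfies the system  y < 2x + 7  and  y >= -3x - 4
    // from guided practice 3.
    bool inFeasibleRegion(double x, double y) {
        return (y < 2.0 * x + 7.0) && (y >= -3.0 * x - 4.0);
    }

    int main() {
        const double points[][2] = {{-1, 3}, {4, -2}, {6, 5}};
        for (const auto& p : points) {
            std::printf("(%g, %g) -> %s\n", p[0], p[1],
                        inFeasibleRegion(p[0], p[1]) ? "inside" : "outside");
        }
        return 0;
    }

All three points print "inside", matching the hand calculations above.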
Visualizing Parametric Equations for the Cycloid Play with the applet to get a feel for it. • It is color coordinated so that things with the same color are related. • Try to figure out the relationships. Set the applet as follows 1. Set a = 1 and set b = 1. (When a and b are equal we get the cycloid.) 2. Check the 'trace' box. 3. Check the 'velocity vector' box. The following questions will guide you through a visual exploration of the cycloid and related curves. 1. Describe a real world system that this applet models. 2. What is the yellow dot in your model? 3. Notice that the radius of the wheel and the length of the blue strut can be adjusted. □ Are there values of these parameters for which the yellow dot sometimes moves to the left? What are they? □ Are there values for which the yellow dot is sometimes stationary? What are they? 4. Select 'trace' and animate the system. 5. For what values of the parameters does the yellow curve cross the x-axis? 6. When it does cross the axis, what is the direction of the tangent vector? Please explain. 7. When 'trace' is selected, a purple line is drawn. What does it represent, in the real world system you described above? 8. Carry out the following vector manipulations: □ Express the directed segment joining the origin to the left end of this purple line as a vector. □ Express the directed segment along the purple line from the left end to the right end as a vector. (It will depend upon theta). □ Express the blue segment directed from the center of the circle to the end as a vector. (This will also depend upon theta). □ Now express the position of the yellow dot as a vector-valued function of theta. Exploring the cusp of the cycloid Set the parameters a and b back to 1 and turn trace on. 1. Grab the circle with your mouse and drag it to the right. 2. The 'cusps' are the sharp points on the cycloid. What happens to the velocity vector at these points? 3. Now set b to 3.5 and drag the circle to the right. What happened to the cusp? What happens to the velocity vector at the bottom of the trajectory? 4. Repeat this with b = .5.
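For reference when carrying out the vector manipulations above (this is the standard textbook parametrization, not something stated on the worksheet itself): with wheel radius $a$, strut length $b$, and rolling angle $\theta$, the yellow dot traces

$x(\theta) = a\,\theta - b\,\sin\theta, \qquad y(\theta) = a - b\,\cos\theta,$

which reduces to the cycloid when $b = a$. The velocity vector is $\langle a - b\cos\theta,\; b\sin\theta \rangle$; it points to the left exactly when $b\cos\theta > a$, which is possible only for $b > a$, and it vanishes at the cusps $\theta = 2k\pi$ precisely when $b = a$. This is consistent with what the applet questions ask you to observe.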
User-defined function for "for loop"

My program works fine, but I am struggling to develop a user-defined function for the for loop. Everything I try won't work; any help would be greatly appreciated.

    #include <iostream>
    #include <cstdio>
    #include <cmath>
    #include <iomanip>
    using namespace std;

    int main()
    {
        double firstTerm = 0;
        double commRatio = 0;
        double numTerms = 0;
        double total = 0;
        double sum = 0;

        cout << "Program will compute terms in a Geometric Series" << endl << endl;
        cout << "Enter the first term in the series: ";
        cin >> firstTerm;
        cout << "Enter the common ratio for the series: ";
        cin >> commRatio;
        cout << "Enter the number of terms to use: ";
        cin >> numTerms;
        // while (numTerms < 2)   // input validation left commented out

        cout << fixed << showpoint;
        cout << setprecision(4) << "Terms in series for Common Ratio of " << commRatio << " are:" << endl;
        cout << setprecision(9);
        for (int counter = 1; counter <= numTerms; counter++)
        {
            sum = firstTerm * pow(commRatio, (counter - 1));  // the counter-th term
            total += sum;                                     // running total of the series
            cout << setw(20) << sum << endl;
        }
        system("pause");
        return 0;
    }

Reply: Not sure why the pow function is used. In a geometric series, each term can be derived from the previous one by a simple multiplication by the common ratio. A question: are you trying to list all of the terms, or just calculate the sum of all the terms?

Original poster: Okay, I got rid of the ' ' initializers and initialized to 0. I am trying to list the terms. My dilemma is now this: any value returned from the function to main is 0. How do I take the input value from the function to main?

    double startTerm(double firstTerm, string str1);

    double startTerm(double ft, string str1)
    {
        double newTerm;
        cout << "Enter first term: ";
        while (newTerm > 1)
            cin >> newTerm;
        return newTerm;
        if (newTerm <= 0)
            cout << str1 << " " << endl;
    }
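One way to answer the question in this thread, as a minimal sketch (the function names and prompts are my own, not from the thread): read the input inside a helper that returns the value, capture that return value at the call site, and pass the loop's inputs into a second helper as parameters.

    #include <iostream>
    #include <string>
    #include <cmath>
    #include <iomanip>
    using namespace std;

    // Prompt for and return one double. The returned value must be
    // captured at the call site, e.g. firstTerm = readDouble("...").
    double readDouble(const string& prompt)
    {
        double value = 0.0;
        cout << prompt;
        cin >> value;
        return value;
    }

    // The original for loop, moved into its own function; the values it
    // needs arrive as parameters instead of being read inside the loop.
    void printSeries(double firstTerm, double commRatio, int numTerms)
    {
        cout << fixed << showpoint << setprecision(9);
        for (int counter = 1; counter <= numTerms; counter++)
            cout << setw(20) << firstTerm * pow(commRatio, counter - 1) << endl;
    }

    int main()
    {
        double firstTerm = readDouble("Enter the first term in the series: ");
        double commRatio = readDouble("Enter the common ratio for the series: ");
        int numTerms = static_cast<int>(readDouble("Enter the number of terms to use: "));
        printSeries(firstTerm, commRatio, numTerms);
        return 0;
    }

The key point for the poster's dilemma: a value "comes back" from a function only through its return value (or a reference parameter), and only if the caller assigns it to something.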
shape modality

In a locally ∞-connected (∞,1)-topos with fully faithful inverse image (such as a cohesive (∞,1)-topos), the extra left adjoint $\Pi$ to the inverse image $Disc$ of the global sections geometric morphism $\Gamma$ induces a higher modality $\int \coloneqq Disc \circ \Pi$, which sends an object to something that may be regarded equivalently as its geometric realization or its fundamental ∞-groupoid. In either case $\int X$ may be thought of as the shape of $X$, and therefore one may call $\int$ the shape modality. It forms an adjoint modality with the flat modality $\flat \coloneqq Disc \circ \Gamma$.

Generally, given an (∞,1)-topos $\mathbf{H}$ (or just a 1-topos) equipped with an idempotent monad $\mathbf{\Pi} \colon \mathbf{H} \to \mathbf{H}$ (a (higher) modality/closure operator) which preserves (∞,1)-pullbacks over objects in its essential image, one may call a morphism $f \colon X \to Y$ in $\mathbf{H}$ $\mathbf{\Pi}$-closed if the unit-diagram

$\array{ X &\stackrel{\eta_X}{\to}& \mathbf{\Pi}(X) \\ \downarrow^{\mathrlap{f}} && \downarrow^{\mathrlap{\mathbf{\Pi}(f)}} \\ Y &\stackrel{\eta_Y}{\to}& \mathbf{\Pi}(Y) }$

is an (∞,1)-pullback diagram. These $\mathbf{\Pi}$-closed morphisms form the right half of an orthogonal factorization system, the left half being the morphisms that are sent to equivalences by $\mathbf{\Pi}$.

Let $(\Pi \dashv Disc \dashv \Gamma) \colon \mathbf{H} \to \infty Grpd$ be an ∞-connected (∞,1)-topos, let $\mathbf{\Pi} \coloneqq Disc\,\Pi$ be the geometric path functor / geometric homotopy functor, let $f \colon X \to Y$ be a morphism in $\mathbf{H}$, and let $c_{\mathbf{\Pi}} f$ denote the ∞-pullback

$\array{c_{\mathbf{\Pi}} f&\to& {\mathbf{\Pi}} X\\\downarrow&&\downarrow^{{\mathbf{\Pi}}_f}\\Y&\xrightarrow{1_{(\Pi\dashv Disc)}}&{\mathbf{\Pi}}Y}$

Then $c_{\mathbf{\Pi}} f$ is called the ${\mathbf{\Pi}}$-closure of $f$, and $f$ is called ${\mathbf{\Pi}}$-closed if $X\simeq c_{\mathbf{\Pi}}f$.

If a morphism $f:X\to Y$ factors into $f=g\circ h$ and $h$ is a $\mathbf{\Pi}$-equivalence, then $g$ is $\mathbf{\Pi}$-closed; this is seen by using that ${\mathbf{\Pi}}$ is idempotent.

$\Pi$-closed morphisms are a right class of an orthogonal factorization system (in an (∞,1)-category) and hence, as discussed there, are closed under limits, composition, retracts and satisfy the left cancellation property.

As open maps

A consequence of the previous property is that the class of $\mathbf{\Pi}$-closed morphisms gives rise to an admissible structure in the sense of structured spaces on an ∞-connected (∞,1)-topos, hence they serve as a class of a kind of open maps.

Internal locally constant ∞-stacks

In a cohesive (∞,1)-topos $\mathbf{H}$ with an ∞-cohesive site of definition, the fundamental ∞-groupoid functor $\mathbf{\Pi}$ satisfies the above assumptions (this is the example that gives this entry its name). The $\mathbf{\Pi}$-closed morphisms into some $X \in \mathbf{H}$ are canonically identified with the locally constant ∞-stacks over $X$. The correspondence is effectively what is called categorical Galois theory.

Let $\mathbf{H}$ be a cohesive (∞,1)-topos possessing an ∞-cohesive site of definition.
Then for $X \in \mathbf{H}$ the locally constant ∞-stacks $E \in LConst(X)$, regarded as ∞-bundle morphisms $p : E \to X$, are precisely the $\mathbf{\Pi}$-closed morphisms into $X$. Formally étale morphisms: In a differential cohesive (∞,1)-topos $\mathbf{H}_{th}$, the de Rham space functor $\mathbf{\Pi}_{inf}$ satisfies the above assumptions. The $\mathbf{\Pi}_{inf}$-closed morphisms are precisely the formally étale morphisms. The examples of locally constant $\infty$-stacks and of formally étale morphisms are discussed in sections 3.5.6 and 3.7.3 of
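Returning to the proposition identifying locally constant ∞-stacks with $\mathbf{\Pi}$-closed morphisms, a quick sanity check on the definition (an editorial illustration, not part of the source text): take $Y = *$ the terminal object. In the cohesive setting $\Pi$ preserves the terminal object, so $\mathbf{\Pi}(*) \simeq *$ and the unit-diagram reduces to $\array{ X &\stackrel{\eta_X}{\to}& \mathbf{\Pi}(X) \\ \downarrow && \downarrow \\ * &\to& * }$, which is an (∞,1)-pullback precisely when $\eta_X \colon X \to \mathbf{\Pi}(X)$ is an equivalence. So the $\mathbf{\Pi}$-closed morphisms to the point are exactly the $\mathbf{\Pi}$-modal objects; consistently, the locally constant ∞-stacks over the point are the constant objects $Disc S$, and for these $\mathbf{\Pi} Disc S = Disc \Pi Disc S \simeq Disc S$, using that $\Pi Disc \simeq id$ since $Disc$ is fully faithful.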
{"url":"http://ncatlab.org/nlab/show/shape%20modality","timestamp":"2014-04-16T13:06:03Z","content_type":null,"content_length":"55020","record_id":"<urn:uuid:26ae603e-c942-4143-825c-aa8ffd9e6d73>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple Homework, New to Java, Please Help!! Join Date Sep 2012 Rep Power This is my assignment; my professor wants me to output 4500.0 in interest, but all I'm getting is 0. My code is below:
public class Interest {
public static void main(String[] args) {
//declare variables
double loan=5000, interest=0.0;
int years=15, rate=6;
interest = loan * (rate/100) * years;
System.out.println("Interest: " + interest);
}
}
THIS IS THE ASSIGNMENT PAPER
Assignment 2 (10 points): Calculating Interest
Our goal is to calculate the interest given the loan amount, rate, and years to be taken out. Your program should have the following:
Make the name of the project Interest.
4 comment lines that state the purpose of the program, author, date and JDK used. (1 point)
Include 4 variables for the amount of loan, rate, years, and interest. The amount of loan and interest variables are decimal numbers. The years and rate variables should be integers. Make up your own meaningful correctly-formed variable names for these 4 items and declare them appropriately as an int or double. (4 points)
Set the loan amount to be 5000. Set interest rate to be 6. Set years to be 15.
With an assignment statement, have the computer calculate the interest using the following formula: (3 points)
interest = amount * (rate/100) * years
Note: Please use your own variable names in above formula.
Have the computer display the amount of loan, rate, years and the interest that you calculated. You should print this on several lines. (2 points)
Compile your program until you have no compilation errors.
When you run this application, you should get an answer for interest as 4500.0. If you are getting an answer of 0, THINK!! Don't change the variable types. Don't worry about it appearing with dollars and cents since formatting has not been covered yet.
Join Date Sep 2012 Rep Power In the expression both rate and 100 are integers so it's an integer operation also. Which means instead of 0.06 you get 0. And anything multiplied by 0 is 0. To solve this you can do several things.
- You can make rate be of a floating-point number type in the first place.
- You can write 100 as a floating-point number.
- You can cast either rate or 100 in the (rate/100) expression to a floating-point number.
Join Date Sep 2012 Rep Power In the expression both rate and 100 are integers so it's an integer operation also. Which means instead of 0.06 you get 0. And anything multiplied by 0 is 0. To solve this you can do several things.
- You can make rate be of a floating-point number type in the first place.
- You can write 100 as a floating-point number.
- You can cast either rate or 100 in the (rate/100) expression to a floating-point number.
I tried making rate floating as "float rate = 6F;", and the answer comes out like 4499.9999999. How do I make 100 a floating point? Join Date Jun 2008 Blog Entries Rep Power For one, don't use floats when you can use double since the accuracy is much greater with double with minimal cost.
For another, please understand that no programming language that runs on a digital computer will be able to represent floating point type values with exact precision. Instead you need to accept a very close approximation. This is not a fault of Java but rather a limitation of digital computers in general. What many do is to format our values to the significant digit desired when converting to String for display. Join Date Sep 2012 Rep Power For one, don't use floats when you can use double since the accuracy is much greater with double with minimal cost. For another, please understand that no programming language that runs on a digital computer will be able to represent floating point type values with exact precision. Instead you need to accept a very close approximation. This is not a fault of Java but rather a limitation of digital computers in general. What many do is to format our values to the significant digit desired when converting to String for display. Can you please tell me what to put, and where to put it, so that I can get the output 4500? Please... I'm new to Java, so I'm having problems understanding that. Join Date Jun 2008 Blog Entries Rep Power I don't like preventing you from a learning experience, so I'd rather give you suggestions, but you should write the code. As noted above, do floating point division, say by dividing rate by 100.0, a double literal, not 100, an int literal. This will result in a double result, which is what you want. If later you want to display the result as a String with no decimal part, then use String.format(...) or a DecimalFormat object to do this for you. For example, String.format("%.0f", interest) will return a String representation of the interest variable with no decimal portion. The API of the Formatter class will give all the gory details of how to use this construct.
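To make the suggestions above concrete, here is a minimal sketch of the corrected program (an editorial illustration assembled from the advice in this thread, not code actually posted by the helpers; the class and variable names follow the original assignment):

public class Interest {
    public static void main(String[] args) {
        // Purpose: calculate simple interest (Assignment 2).
        double loan = 5000, interest = 0.0;
        int years = 15, rate = 6;

        // Dividing by the double literal 100.0 forces floating-point
        // division, so (rate / 100.0) is 0.06 rather than the int 0.
        interest = loan * (rate / 100.0) * years;

        System.out.println("Loan:     " + loan);
        System.out.println("Rate:     " + rate);
        System.out.println("Years:    " + years);
        System.out.println("Interest: " + interest); // prints 4500.0

        // Optional, for later: drop the decimal portion for display.
        System.out.println("Interest: " + String.format("%.0f", interest));
    }
}

Equivalently, a cast works too: interest = loan * ((double) rate / 100) * years.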
{"url":"http://www.java-forums.org/new-java/63019-simple-homework-new-java-please-help.html","timestamp":"2014-04-17T08:16:20Z","content_type":null,"content_length":"90145","record_id":"<urn:uuid:560b5974-a85e-4072-b330-b2922ed431df>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
call-by-value with sums and products I'm trying to understand the state-of-the-art in call-by-value with products and sums. Where in the literature would I find, say, Moggi's computational lambda calculus augmented with sums and products? (NB: This is distinct from Moggi's monadic metalanguage -- there are no monads in this calculus, though it is designed to be sound and complete with regard to translation into the monadic metalanguage.) The closest I know of is Selinger (2001), who gives an extension of Moggi's computational lambda calculus with sums and products (as well as incorporating Parigot's lambda-mu, so it also has, in effect, call/cc). But Selinger uses equations, not reductions. Where in the literature would I find equivalent or related systems that use reductions? Also, his system has the slightly surprising property that fst(x) and snd(x) are considered as values, where x is a variable. Are there alternative formulations that avoid this? I'll summarize and post the answers. Many thanks! -- P Peter Selinger, Control Categories and Duality: on the Categorical Semantics of the Lambda-Mu Calculus. Mathematical Structures in Computer Science 11:207-260, 2001.
{"url":"http://www.seas.upenn.edu/~sweirich/types/archive/1999-2003/msg01365.html","timestamp":"2014-04-21T08:14:43Z","content_type":null,"content_length":"3228","record_id":"<urn:uuid:07224411-7b02-4fd3-b58b-eda6666230a0>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by dee Total # Posts: 757 a pair of dice is rolled. what is the probability of each of the following? a. the sum of the numbers shown uppermost is less than 5. b. at least one six is cast. american national government 1. Locke's use of the term property (Points : 1) serves to stress the absolute sanctity of property. serves as a defense against the arbitrary exercise of power and authority. dramatizes his commitment to natural law. means that a sovereign who seizes property... Why do I subtract .9375 from H^+ instead of adding .9375 and then subtracting .9375 from NH4^+ instead of adding? Okay thank you so much! A buffer solution is prepared by mixing 50.0 mL of 0.300 M NH3 with 50 mL of 0.300 M NH4Cl. The pKb of NH3 is 4.74. NH3 + H2O-> NH4+ +OH- 7.50 mL of 0.125 M HCl is added to the 100 mL of the buffer solution. Calculate the concentration of NH3 and NH4Cl for the buffer solution... Using f(x), determine a formula for the Riemann Sum S_n obtained by dividing the interval [0, 4] into n equal sub-intervals and using the right-hand endpoint for each c_k. f(x)= 5x+2 Now compute the limit as n goes to infinity of S_n to compute the area under the curve over [0,4]. a piece of metallic iron (10 moles) was dissolved in concentrated hydrochloric acid. the reaction formed iron (II) chloride and hydrogen gas. how many moles of iron (II) chloride were formed? Suppose that Ramos contributes $6000/year into a traditional IRA earning interest at the rate of 2%/year compounded annually, every year after age 35 until his retirement at age 65. At the same time, his wife Vanessa deposits $4900/year into a Roth IRA earning interest at the ... Base is found by ________ the portion by the rate Two insulated wires, each 2.70 m long, are taped together to form a two-wire unit that is 2.70 m long. One wire carries a current of 7.00 A; the other carries a smaller current I in the opposite direction. The two wire unit is placed at an angle of 77.0° relative to a magn... A proton, traveling with a velocity of 4.9 × 10^6 m/s due east, experiences a magnetic force that has a maximum magnitude of 7.4 × 10^-14 N and a direction of due south. What are the magnitude and direction of the magnetic field causing the force? The drawing shows a parallel plate capacitor that is moving with a speed of 30 m/s through a 3.1 T magnetic field. The velocity vector v is perpendicular to the magnetic field. The electric field within the capacitor has a value of 170 N/C, and each plate has an area of 7.5 × 10... college math CalJuice Company has decided to introduce three fruit juices made from blending two or more concentrates. These juices will be packaged in 2-qt (64-oz) cartons. One carton of pineapple-orange juice requires 8 oz each of pineapple and orange juice concentrates. One carton of or... college math Acrosonic manufactures a model G loudspeaker system in Plants I and II. The output at Plant I is at most 800/month, and the output at Plant II is at most 600/month. Model G loudspeaker systems are also shipped to the three warehouses A, B, and C whose minimum monthly... college math HELP!!!!! Acrosonic manufactures a model G loudspeaker system in Plants I and II. The output at Plant I is at most 800/month, and the output at Plant II is at most 600/month. Model G loudspeaker systems are also shipped to the three warehouses A, B, and C whose minimum monthly... college math HELP!!!!! CalJuice Company has decided to introduce three fruit juices made from blending two or more concentrates.
These juices will be packaged in 2-qt (64-oz) cartons. One carton of pineapple-orange juice requires 8 oz each of pineapple and orange juice concentrates. One carton of or... college math HELP!!!!! Sharon has a total of $190,000 to invest in three types of mutual funds: growth, balanced, and income funds. Growth funds have a rate of return of 12%/year, balanced funds have a rate of return of 10%/year, and income funds have a return of 6%/year. The growth, balanced, and i... College math HELP!!!!! nevermind, i figured it out. thank you! College math HELP!!!!! how did you get 0.6? College math HELP!!!!! A financier plans to invest up to $450,000 in two projects. Project A yields a return of 10% on the investment, whereas Project B yields a return of 15% on the investment. Because the investment in Project B is riskier than the investment in Project A, the financier has decide... MATH HELP!!!!!!!! i need the 3 numbers for the answer, how do i find that? MATH HELP!!!!!!!!
| 3  2  7 | | x1 |   | 46 |
| -2 1  4 | | x2 | = | 12 |
| 6 -5  8 | | x3 |   | 60 |
(b). Solve the system using the inverse of the coefficient matrix. (x1, x2, x3) = (________) An electric heater used to boil small amounts of water consists of a 25-Ω coil that is immersed directly in the water. It operates from a 60-V socket. How much time is required for this heater to raise the temperature of 0.50 kg of water from 29° C to the normal boili... A tungsten wire has a radius of 0.077 mm and is heated from 20.0 to 1205 °C. The temperature coefficient of resistivity is α = 4.5 × 10^-3 (C°)^-1. When 110 V is applied across the ends of the hot wire, a current of 1.3 A is produced. How long is the wire? Negle... Business Math Convert the following mixed number to an improper fraction: 12 3/4 college math HELP!!!!! i need to find equilibrium price using these two equations: P= -(1/25)x+190 P= (1/75)x+70 the equilibrium quantity is: 2250 units the equilibrium price is: ______? math HELP!! The quantity demanded x each month of Russo Espresso Makers is 250 when the unit price p is $180; the quantity demanded each month is 1000 when the unit price is $150. The suppliers will market 750 espresso makers if the unit price is $80. At a unit price of $90, they are will... college math HELP!!!!! The quantity demanded x each month of Russo Espresso Makers is 250 when the unit price p is $180; the quantity demanded each month is 1000 when the unit price is $150. The suppliers will market 750 espresso makers if the unit price is $80. At a unit price of $90, they are will... college finite math The quantity demanded x each month of Russo Espresso Makers is 250 when the unit price p is $180; the quantity demanded each month is 1000 when the unit price is $150. The suppliers will market 750 espresso makers if the unit price is $80. At a unit price of $90, they are will... college finite math A product may be made using Machine I or Machine II. The manufacturer estimates that the monthly fixed costs of using Machine I are $17,000, whereas the monthly fixed costs of using Machine II are $14,000. The variable costs of manufacturing 1 unit of the product using Machine... college finite math A division of a company produces "Personal Income Tax" diaries. Each diary sells for $6. The monthly fixed costs incurred by the division are $30,000, and the variable cost of producing each diary is $1. (a) Find the break-even point for the division. (x, y) = (6000,... college finite math Cowling's Rule is a method for calculating pediatric drug dosages.
If a denotes the adult dosage (in milligrams) and if t is the child's age (in years), then the child's dosage is given by D(t) = ((t + 1)/24)a (a) Show that D is a linear function of t. Hint: Think of... college finite math Cowling's Rule is a method for calculating pediatric drug dosages. If a denotes the adult dosage (in milligrams) and if t is the child's age (in years), then the child's dosage is given by D(t) = ((t + 1)/24)a (a) Show that D is a linear function of t. Hint: Think of... college finite math For wages less than the maximum taxable wage base, Social Security contributions (including those for Medicare) by employees are 7.65% of the employee's wages. (a) Find an equation that expresses the relationship between the wages earned (x) and the Social Security t... In a vacuum, two particles have charges of q1 and q2, where q1 = +3.7 µC. They are separated by a distance of 0.22 m, and particle 1 experiences an attractive force of 3.5 N. What is q2 (magnitude and sign)? What part of speech is coursework? college Algebra Write the system as a matrix and solve it by Gauss-Jordan elimination. (If the system is inconsistent, enter INCONSISTENT. If the system has infinitely many solutions, show a general solution in terms of x, y, or z.) x + 2y − z = 5 2x − y + z = 2 3x − 4y + 3z... college Algebra Use matrix methods to solve the problem. To use space effectively, librarians like to fill shelves completely. One 105-inch shelf can hold 3 dictionaries, 5 atlases, and 1 thesaurus; or 6 dictionaries and 2 thesauruses; or 2 dictionaries, 4 atlases, and 3 thesauruses. How wide...
The domain is {1,2,3,4} or the x values in numerical order and not listed more than once. The range is {1,2,3,5} or the y ... that's exactly what I got but it keeps saying it is wrong. find all roots of the polynomial equation. x^5+3x^4-2x^3-14x^2-15x-5=0. thank you soooo much!!! a hot-air balloon is tethered to the ground and only moves up and down. You and a friend take a ride on the balloon for approximately 25 minutes. On this particular ride the velocity of the balloon, v(t) in feet per minute, as a function of time, t in minutes, is represented b... Help me factor x^2 -2x - 4 = 0 and 3x^2 + x + 3 = 0. college algebra A partial solution set is given for the polynomial equation. Find the complete solution set. (Enter your answers as a comma-separated list.) x^3 + 3x^2 − 14x − 20 = 0; {−5} college algebra Use long division to perform the division. (Express your answer as quotient + remainder/divisor.) (x^4 + 7x^3 − 2x^2 + x − 1) / (x − 1) college algebra Solve the exponential equation using logarithms. Give the answer in decimal form, rounding to four decimal places. (Enter your answers as a comma-separated list.) 4^(x − 3) = 3^(2x) x = _____ college algebra Use a calculator to help solve the problem. An isotope of lead, 201Pb, has a half-life of 8.4 hours. How many hours ago was there 40% more of the substance? (Round your answer to one decimal place.) college algebra not sure how to find the t, tried using the log approach but can't get the right answer. college algebra An isotope of lead, 201Pb, has a half-life of 8.4 hours. How many hours ago was there 40% more of the substance? (Round your answer to one decimal place.) ____hr college algebra no it looks more like this: 4^(x-3)=3^2x college algebra Solve the exponential equation using logarithms. Give the answer in decimal form, rounding to four decimal places. (Enter your answers as a comma-separated list.) 4x − 3 = 32x x = _____ college algebra In 4 years, 30% of a radioactive element decays. Find its half-life. (Round your answer to one decimal place.) _____ yr college algebra thank you college algebra Use a calculator to find y to four decimal places, if possible. (If an answer is undefined, enter UNDEFINED.) ln y = log 9 college algebra Use a calculator to find y to four decimal places, if possible. (If an answer is undefined, enter UNDEFINED.) ln y = −0.35 college Algebra Use a calculator to find y to four decimal places, if possible. (If an answer is undefined, enter UNDEFINED.) ln y = −0.35 college Algebra Use a calculator to find y to four decimal places, if possible. (If an answer is undefined, enter UNDEFINED.) ln y = log 9 college Algebra Thank you, big help! college Algebra Use a calculator to help solve the problem. The life expectancy of white females can be estimated by using the function = 78.5(1.001)x, where x is the current age. Find the life expectancy of a white female who is currently 50 years old. Give the answer to the nearest tenth. L... college algebra i did that and it is not working. i get 0.0348 college algebra Use a calculator to help solve the problem. The intensity I of light (in lumens) at a distance x meters below the surface is given by I = I0kx, where I0 is the intensity at the surface and k depends on the clarity of the water. At one location in the Atlantic Oce... College Algebra thank you Reiny and Bob College Algebra Use a calculator to help solve the problem. One isotope of holmium, 162Ho, has a half life of 22 minutes. The half-life of a second isotope, 164Ho, is 37 minutes. 
Starting with a sample containing equal amounts, find the ratio of the amounts of 162Ho to 164Ho after two hours. ... P5. A thirty-year U.S. Treasury bond has a 4.0 percent interest rate. In contrast, a ten-year Treasury bond has an interest rate of 2.5 percent. A maturity risk premium is estimated to be .2 percentage points for the longer maturity bond. Investors expect inflation to average 1.5 p... College Algebra I think the question is asking what she would pay at the end of the year. I tried $159.93 and it was not correct. College Algebra A bank credit card charges interest at the rate of 23% per year, compounded monthly. If a senior in college charges $1,700 to pay for college expenses, and intends to pay it in one year, what will she have to pay? if a 50 ft flagpole casts a 10 ft shadow, how far is it from the end of the shadow to the top of the pole college algebra graph the following: 7x + 6y = 42 (-6x+42)/(7) college algebra graph the following: (5/4)x-4 (4x+16)/(5) college algebra Thanks, I finally got it. college algebra now all I need help on is finding the domain of the f(f) that you just posted. college algebra now all I need help on is finding the domain of the f(f) that you just posted. college algebra no sir, it asks for f(f). college algebra thank you so much steve!! could you help me with this one? Let f(x) = square root (x + 2) and g(x) = x^2 − 2. Determine the domain of the composite function. (Enter your answer using interval notation.) f compose f _____? and Find the composite function. (f compose f)(x)... college algebra find the domain of (x-2)/(-3x+7) college algebra Let f(x) = 1/x − 2 and g(x) = 1/x − 3 . Determine the domain of the composite function. (Enter your answer using interval notation.) g compose f _____? & Find the composite function. (g compose f)(x) = _____? college algebra Let f(x) = square root (x + 2) and g(x) = x^2 − 2. Determine the domain of the composite function. (Enter your answer using interval notation.) f compose f ______? & Find the composite function. (f compose f)(x) = _____? college algebra (Enter your answer using interval notation.) f compose f college algebra Let f(x) = x + 2 and g(x) = x^2 − 2. Determine the domain of the composite function. college algebra Thank you so much. college algebra what is the domain of (square root of x) +7? college algebra Thank you Steve, that was a big help. college algebra the following rational function gives the number of days it would take two construction crews, working together, to frame a house that crew 1 (working alone) could complete in t days and crew 2 (working alone) could complete in t+3 days. f(t)=(t^2+3t)/(2t+3). (a) if crew 1 coul... college algebra Need help in finding all vertical, horizontal, and slant asymptotes, x- and y-intercepts, and symmetries of f(x)=(x-3)/(x-4) What is the relationship of the concentration of H3O and of OH social studies men from higher social classes in------ cultures tend to have low levels of conformity, more self-direction, and have greater intellectual flexibility than did men from lower social classes A) collectivist B) individualist C) interdependent D) all My choice is C but I'm a littl...
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=dee","timestamp":"2014-04-19T11:29:34Z","content_type":null,"content_length":"29749","record_id":"<urn:uuid:756cc3c8-53b1-47a3-82ac-0c5708183c2e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Topic: Mathematics challenges for students Replies: 0 Mathematics challenges for students Posted: Sep 14, 2010 10:58 AM Odysseys2sense is a fairly new game-like forum in which anonymous contributors gain power by making posts that are rated highly by other players on criteria such as completeness, accuracy, and helpfulness. Several current forums are focused on mathematical topics that might be of interest to you and your students. Each forum consists of challenges, which in these cases are mathematics problems with a twist. In fact, college students sometimes find even the elementary-level problems challenging. These forums (Odysseys) were set up for various courses, whose students are responding to the challenges. You and your students can freely read and review those responses and participate in the general discussions of them. I personally find the process, which encourages critical thinking, entertaining and very informative. You can also apply to set up forums for your own courses. The site is After free and private registration, you can find mathematical Odysseys at
Copes: Problems in probability and statistics
Copes: Conic sections
Copes: Problems in secondary geometry
Copes: Problems in secondary mathematics
Copes: Problems in middle-school mathematics
Copes: Problems in elementary mathematics
Schield: Statistical literacy
There are other forums there as well, concerning pedagogical issues as well as a wide range of social questions. Some of these are more developed than others.
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2139705","timestamp":"2014-04-17T13:30:58Z","content_type":null,"content_length":"15064","record_id":"<urn:uuid:d61900c9-23d3-4218-bbc8-7db95ca837c4>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
The String Ideology If you want to get an understanding of the ideology that many string theorists subscribe to, you should check out Lubos Motl's latest posting. Besides the usual dismissal of non-believers as idiots, incompetents and crackpots (an attitude that unfortunately seems to be all too common among string theorists), Lubos does actually address some scientific issues. There's nothing at all in what he has to say that actually makes any connection between string theory and the real world. The effort to find such a connection is completely ignored, including the work of the large part of the string theory community that continues to unsuccessfully work on this. No mention of "string phenomenology", the landscape, or anything of this kind. He chooses instead to address scientific issues in a resolutely unscientific way, basing everything upon faith and ideology, beginning with the opening part of his argument: I will treat the "whole Universe" and "all of string theory" as synonyma because I am not aware of any controllable framework that would allow me to separate them sharply. Most of the rest of the posting is a series of criticisms of other ideas that people have advanced as alternatives to string theory. At one point, after criticizing John Baez and Urs Schreiber for their interest in 2-groups and gerbes, he makes clear what he sees as the proper way to approach new ideas about fundamental physics that one is not familiar with: The previous paragraph also clarifies my style of reading these papers. The abstract has so far been always enough to see that these fundamental gerbes papers make no quantitative comparison with the known physics – i.e. physics of string theory – and for me, it is enough to be 99.99% certain (I apologize for this Bayesian number whose precise value has no physical meaning) that the paper won't contain new interesting physics insights. This attitude makes life very simple. You don't have to bother doing the hard work of trying to understand what non-string theorists are doing. All you need to do is to read the abstracts of their papers, note that they aren't doing string theory, and then you can be sure you don't need to read any farther, because if it isn't string theory, it can't provide any interesting new insights into physics. Lubos dismisses various ideas about string theory one after the other. Much of this is devoted to dismissing the idea that has led particle physics to many of its biggest successes: that of looking for new symmetries or new ways of exploiting ones that are already known. He insists that: we have learned that the gauge symmetries are not fundamental in physics. with the idea being that because of dualities, the character of gauge symmetries is not fundamental but what he calls "social scientific". This argument doesn't make any sense to me. An equivalence of two different gauge theories is very interesting, but it in no way tells you that gauge symmetry is not fundamental. Making such an argument is like arguing that representations of Galois groups in number theory are not fundamentally important because of Langlands duality. More seriously, Lubos does mention the philosophically trickiest aspect of gauge theories: the physical degrees of freedom are not parametrized explicitly, but as quotients by the gauge group action of a larger space of degrees of freedom.
It’s certainly true that this is how gauge theory works, and one can try and argue that one should just ignore gauge symmetry and work directly with gauge invariant degrees of freedom. In terms of representation theory, physical states are gauge-invariant ones, so one could hope to just work with these physical states. The problem is that in most interesting cases this isn’t possible. The space of connections modulo gauge transformations is non-linear and in general can’t be parametrized in a useful way. Working with the linear space of connections, which can be easily parametrized and understood, and then taking into account the action of the gauge group, is the method that actually works and has been hugely successful. All experience shows that fundamental theories are best understood using an extended space of states, together with a method for picking out the physical subspace. After dismissing alternatives to string theory, Lubos finally gets around to explaining what he sees as the fundamental principle of string theory. Amazingly, it’s the bootstrap philosophy, the failed idea that guided much of particle theory during the sixties and early seventies, before the advent of gauge theories and the standard model. The bootstrap philosophy is that symmetries are nothing fundamental, what is really fundamental are certain kinds of consistency conditions. All you need to do is impose these consistency conditions, and miraculously a unique solution will appear, one which describes the real world. In the sixties the hope was that the strong interactions could be understood simply by imposing things like unitarity and analyticity conditions, and that this would lead to a unique solution of the problem. It turned out that this can’t work. While unitarity and analyticity properties are very useful and tell you a lot about the implications of a theory, they in no way pick out any particular theory. There are lots and lots (a whole landscape of them, even) of QFTs that satisfy the consistency conditions. There never was evidence for uniqueness, and the bootstrap philosophy was from the beginning built on a pipe dream and large helpings of wishful thinking. The new version of the bootstrap that Lubos wants to promote goes as follows: In the context of quantum gravity, many of us more or less secretly believe another version of the bootstrap. I think that most of the real big shots in string theory are convinced that all of string theory is exactly the same thing as all consistent backgrounds of quantum gravity. By a consistent quantum theory of gravity, we mean e.g. a unitary S-matrix with some analytical conditions implied by locality or approximate locality, with gravitons in the spectrum that reproduce low-energy semiclassical general relativity, and with black hole microstates that protect the correct high-energy behavior of the scattering that can also be derived from a semi-classical description of general relativity, especially from the black hole physics. So, the idea is that, at its most fundamental level, physics does not involve simple laws or symmetry principles, just some consistency conditions (of a much more obscure kind than the analyticity ones of the original bootstrap). Lubos avoids the crucial question of how big the space of solutions to these consistency conditions is. 
All the evidence so far is that it is so large that one can't hope to ever get any predictions about physics out of it, and the string theory community is now divided between those who hope this problem will magically go away, and those who want to give up and stop doing science as it has traditionally been understood. In 1973 the theory of strong interactions was heavily dominated by string theory and the bootstrap philosophy. The willingness of Veltman and 't Hooft to do the hard work of understanding how to properly quantize and renormalize non-abelian gauge theories ultimately led to asymptotic freedom and QCD. This pulled the plug conclusively on that era's version of the bootstrap. Perhaps sometime in the future, new hard work on gauge theories will lead to insights that will pull the plug on this latest version, which thrives despite conclusive failure due to the kind of unscientific ideological fervor that Lubos so perfectly embodies. 54 Responses to The String Ideology 1. Hi Eugene, One reason we don't have an exact or approximate theory telling us which electron will show spin up and which will show spin down is because Copenhagen has bamboozled the physics community into not looking for one. It then becomes a self-fulfilling negative mindset. I want to know whether the next electron will show spin up or down. If you don't want to know, that is up to you, but isn't this the sort of question we are there to ask? Bohm dared to look for hidden variables, though he didn't find them and his "quantum potential" was really an interpretation of quantum mechanics, since it could not generate any differing testable predictions. Today there are other such rationalisations, such as many-worlds. The key, though, is to find a theory that predicts where QM doesn't. Anton [preferred abbreviation] Anthony Garrett The challenge is to find a deeper theory that, with its extra variables suitably averaged (marginalised) over, reproduces the probabilistic predictions of quantum mechanics. We know that such a theory must be nonlocal and partly acausal. That is a tough assignment, but if you want easy problems you should stick to trainspotting. My worry is that nobody is trying. I thought those theories were studied for quite a while but without public resonance. It is even hard to find qualified criticism – if any. 3. A reminder: This is not a general discussion forum or an appropriate place to discuss foundational problems of QM, unless that's what the posting topic is about. Please take this kind of off-topic discussion elsewhere. I just don't have the time or ability to properly moderate it, and unmoderated, the results are depressing and something I don't want here. 4. Peter, Duly noted. I simply wanted to present a positive alternative area to strings where basic progress might be made, as well as congratulate you on your fine book that debunks string theory.
{"url":"http://www.math.columbia.edu/~woit/wordpress/?p=432","timestamp":"2014-04-19T22:30:39Z","content_type":null,"content_length":"44380","record_id":"<urn:uuid:36410941-6d64-477b-aebe-bcce0a7407b2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Our Lvl 90 Talents - Why Incanters Ward is Good! 2013-03-11, 02:17 AM #81 Field Marshal Join Date Aug 2012 Using the active on the other hand means 2 GCDs, or 3 seconds every 60 seconds, cast time spent to maintain a 15% buff. Invocation means 3 seconds every 63 seconds spent casting to maintain a 15% buff. The passive goes away when you are using the active. So, it works out to about the same if you can consistently trigger the effect every CD. That is a big 'if' to trust your damage to. Theoretically if you had a fight with the right timing to use the triggered effect at the right moment and use the downtime when you couldn't cast anyway then IW may be able to win. But, that will be somewhat rare. Conclusion: Invocation wins every reasonable case. (Edited to fix my invocation cast time. Sorry, after all that, I forgot to take haste out of it......doh. The basic principle is correct though.) My abilities in math aren't great, but I think you are missing something with the active. Correct me if I've missed something, but it looks to me like you have assumed 2 casts of IW gives a full 60 sec 'uptime' when it's actually only 50 s (15 with the buff, 10 without, x2), so in your reasoning you still need to include the extra 10 seconds a minute of 6% passive. I realise those few seconds aren't much, but doesn't it mean that IW used to its max, every 30 s, is actually 16%? ([15*1.3] + [10*1.0] + [5*1.06])/30 = 1.16 Last edited by Agzarah; 2013-03-11 at 02:29 AM. zomgDPS - is like the broccoli and carrot in your meal. MVPS - are like the ice cream at the end of the meal. Blizzard prefers the sweet sounding fatty icecream to the sometimes salty and healthy vegies. It is a problem with much of the world, misinformation spoken sweetly gets through more often than the hard salty truth. Disagree, i find Lhivera to be a very competent theorycrafter, posting in a forum full of poison. As soon as he dares suggest something that is not a pure buff to mages, the hatred is unleashed upon him by the angry nerds. Floppi Mage on DSH server: http://eu.battle.net/wow/de/characte.../Floppi/simple Lhivera has a tendency to be outright dismissive of context and brings little to no current practical experience to the table. These are my only gripes with him. Yes, he is good at math. Congratulations, but that doesn't make him the oracle for mages. OT: Does anyone think that Incanter's being off the global would make it more attractive? I'd like to use it with one button push macro'ed to Ice Barrier or some other defensive cooldown. I know, it isn't a big deal to hit a button twice, but still, given the cumbersome nature of these talents, maybe this would help? Last edited by Malfecto; 2013-03-11 at 01:04 PM. OT: Does anyone think that Incanter's being off the global would make it more attractive? I'd like to use it with one button push macro'ed to Ice Barrier or some other defensive cooldown. I know, it isn't a big deal to hit a button twice, but still, given the cumbersome nature of these talents, maybe this would help?
It'd be a small help i suppose, i mean really anything off the GCD is a minor QoL improvement, but so minor I doubt it'll happen - you can currently achieve the same effect with Temporal Shield, since TS is off the GCD, so you only use one GCD to pop both IW and TS at the same time. (And they have the same CD) He also tends to over inflate advantages. He likes to take a best case scenario as the norm. Partly a problem of doing everything on paper and not really experiencing any of it. I think the other problem is he starts with a preconceived idea and looks for math to prove it. It causes him to miss a lot and come to some very wrong conclusions. Please change the title of this thread to be "Our Lvl 90 Talents - Why Evocation and RoP is Bad!" Reason: this title will only make Blizzard want to nerf IW, that is quite their style. Why don't we ask a different question here: let's say we have these dead-time cycles where we can't get the full benefit of a channeled Invocation. How many full, uninterrupted cycles do we need to make up for that? The answer is not many. Let's say the overall damage multiplier of Invocation for an abbreviated cycle is 1.15*s/(s+3), where s < 60. The damage multiplier of passive Incanter's Ward is always 1.06. We get a table like this:
Uptime  Invo    IW
5       0.7188  1.06
10      0.8846  1.06
15      0.9583  1.06
20      1.0000  1.06
25      1.0268  1.06
30      1.0455  1.06
35      1.0592  1.06
40      1.0698  1.06
45      1.0781  1.06
50      1.0849  1.06
55      1.0905  1.06
60      1.0952  1.06
IW is probably pretty good on many boss fights and is -viable-, but most of us heroic raiders will be continuing to use invocation, barring today's potential buffs changing something in relation to 90 talents. People seem to be forgetting trinket procs with invocation as well. Both last tier's sha trinket and this tier's badge trinket give enormous haste procs. Evocation for me in raid as frost (properly gemmed as int > haste) is 2.21s cast. With the badge trinket proc, it drops to 1.8s cast. The badge trinket ICD causes it to frequently line up with the invocation buff falling off so many of my evocation casts benefit. I feel that much of the theorycrafting and discussion surrounding the 90 talents is being done by people who have experienced little of this tier's content and did not clear last tier's available heroic content. I did all 12 boss fights last week with invocation and it was perfectly fine. If evocation was still a 6 second base cast / 25% buff instead of 3s / 15, then I can see IW being considerably better for several encounters where there's too much unpredictable movement. But as it stands now invocation is the go-to talent for both frost and fire on just about every encounter. IW is probably pretty good on many boss fights and is -viable-, but most of us heroic raiders will be continuing to use invocation, barring today's potential buffs changing something in relation to 90 talents. I consider myself a heroic raider. I play fire and I most often use Invo as well (Obviously not haste stacking). As you can tell by my signature I did not complete all content on heroic last tier but I did finish a lot of it previous to 5.2. I didn't start this thread as a call to everyone to switch or anything but I did start it for people, like myself, who maybe weren't thinking about its quality if used correctly. My first thought for this tier is Tortos. So much incoming damage constantly makes it a valid choice. When used correctly it can be as good if not better than the other choices. As you know as well, when lined up with cool downs for fire, the massive SP boost is even better for our AT/PoM/Combustion stacking. <The Decoy> 2 Night/Week 10M - US - Illidan - 11/13 HC T15, 11/14 HC T16 Our Kill Videos Guild Twitter: @illidandecoy Pokemon Y - Ghost Safari FC: 2036 - 7493 - 8881 OK... could somebody explain to me why the pro-incanter's ward arguments don't work for Arcane? Probably a silly question, I'm not much of a theorycrafter. I've been playing arcane with RoP since MoP came out and it was working fine for me, although I'm sure I wasn't a top performer. Since 5.2 came out though, I've been finding myself moving too much for RoP to be as effective. I started reading this and thinking incanter's ward might be worth a try, but there are a few comments in here that indicate it's not so good with arcane? The vast majority of Arcane's damage comes from the Mage standing still. If he's standing still, he may as well throw down RoP as it's better than IW in no movement situations. As with all the talents they're supposed to be fight based and personal, and relatively equal in power. Try IW and see what you think. Isomorphic for LoL EU West W/L/Death count: Wolf: 0/1/1 | Mafia: 0/5/5 | TPR: 0/2/2 SK: 0/1/1 | VT: 1.5/3.5/5 | Cult: 1/0/1 Legendary Overlooked for WoW EU-EN. ? i don't get this how could you have anything less then 100% up time on invo. i don't see how anyone could mess it up i mean 1 min that's easy as hell to keep up. 
Isomorphic for LoL EU West W/L/Death count: Wolf: 0/1/1 | Mafia: 0/5/5 | TPR: 0/2/2 SK: 0/1/1 | VT: 1.5/3.5/5 | Cult: 1/0/1 Legendary Overlooked for WoW EU-EN. First of all, thumbs up to this thread, it was a good read. However the original poster was assuming semi-pro skill-level. In other words, he was assuming that players would not be attacking with the buff for the full duration, and also calculated in the time required to cast or channel a spell to activate the buff. While we are all human and may not perform perfectly at all times, I am not going to make my decision on which talent to choose based on assuming I'll be making mistakes. Planning to fail is never a good strategy imho. So below is the math-craft rehashed and assumes Pro performance (ie attacking with the buff the entire time, counting the time spent casting or channeling to activate the buff). Since I did not see this explained, however it did appear in at least one reply: Average IW Buff w/o regard to time spent activating or skill. Incanter's Ward Buff Average over 25s = [ ( 15 * 1.3 + 10 * 1.0 ) / 25 ] = 1.18 multiplier (18%) For the purpose of keeping all values standardized it is assumed below that the player leaves no down-time between the buff ending and re-applying the buff. Average IW Buff with regard to time spent activating, and Pro skill. Incanter's Ward Buff with regard to 1s GCD to activate [ 25 / ( 25 + 1 ) ] * 1.18 multiplier = 1.135 (13.5%) Invocation and RoP with regard to time spent activating, and Pro skill. Invocation Buff = [ 60 / ( 60 + 3 ) ] * 1.15 = 1.095 (9.5%) Rune of Power Buff = [ 60 / ( 60 + 1.5 ) ] * 1.15 = 1.122 multiplier (12.2%) *EDIT: Post 5.3 IW* New Incanter's Ward Buff Average over 25s = [ ( 25 * 1.15 ) / 25 ] = 1.15 multiplier (15%) New Incanter's Ward Buff with regard to 1s GCD to activate [ 25 / ( 25 + 1 ) ] * 1.15 = 1.106 multiplier (10.6%) The benefit being now IW should have %100 upkeep, and it doesn't matter where you stand (whereas you may have to move out of your RoP circle). I still like it better, much more versatile as long as you can handle another timer to eagle eye in your priority/rotation. Last edited by Anomlety; 2013-05-15 at 12:09 AM. 2013-03-11, 02:45 AM #82 2013-03-11, 12:45 PM #83 2013-03-11, 01:02 PM #84 Stood in the Fire Join Date Jun 2011 2013-03-11, 03:23 PM #85 2013-03-11, 05:59 PM #86 High Overlord Join Date Nov 2011 2013-03-11, 08:20 PM #87 Join Date Nov 2011 2013-03-11, 09:39 PM #88 Join Date Mar 2013 2013-03-12, 06:00 PM #89 2013-03-12, 07:45 PM #90 2013-03-12, 09:32 PM #91 2013-03-13, 09:22 PM #92 Join Date Feb 2012 2013-03-13, 10:54 PM #93 2013-03-14, 05:02 AM #94 The Patient Join Date Jun 2010 2013-03-14, 03:19 PM #95 2013-05-04, 08:36 PM #96
{"url":"http://www.mmo-champion.com/threads/1272012-Our-Lvl-90-Talents-Why-Incanters-Ward-is-Good!?p=21022444","timestamp":"2014-04-19T18:13:42Z","content_type":null,"content_length":"137272","record_id":"<urn:uuid:370a809f-f262-438b-be4a-959cd01fb9df>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
Illustration 14.1: Pressure in a Liquid With fluids, instead of discussing forces, we usually talk about pressure, which is defined as the force per unit area or P = F/A. This is because the direction of the force a liquid exerts on its container depends on the shape of the container (force is normal to the surface of the container) and the size of the container. Pressure is not a vector (no direction) and does not depend on the size of the container (position is given in meters and pressure is given in pascals). Move the pressure indicator in the tube and note the pressure readings (the pressure is only measuring the effect of the liquid as described below). Let's discuss why pressure increases as a function of depth. Assume the blue liquid is water (density 1000 kg/m^3). Pick a point to measure the pressure somewhere in the upper tube. If the dimension of the container into the screen (the dimension you cannot see) is 1 m, what is the volume of water above the point you picked? What is the mass and thus the weight of the water at that point? For example, consider a depth of 3 m. The pressure is 29,400 N/m^2. The volume of water above this point is a cylinder of volume 9.4 m^3. The mass of the water is the volume times the water's density, or 9,400 kg, and therefore the weight of the water is 92,120 N. What is the force downward at that point? It is just this weight. Dividing the weight by the cross-sectional area of the column of water at that point, 3.14 m^2, gives the pressure, which should be equal to the pressure reading. The units of pressure are N/m^2 = pascals (abbreviated Pa). Strictly speaking, this is the gauge pressure, not the absolute pressure, because we assumed P = 0 at the top of the water column when the pressure (due to the atmosphere) is actually around 1 x 10^5 Pa. The absolute pressure then would be the pressure at the top due to the atmosphere added to the pressure due to the weight of the water. All of this comes together in the equation: P = P_0 + ρgy, where P_0 is the pressure at the top, ρ is the density of the liquid, g is the acceleration due to gravity, and y is the depth of the liquid. What will be the pressure at point A? Add a second pressure indicator to check. Illustration authored by Anne J. Cox.
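The 3 m worked example maps directly onto that equation; here is a minimal sketch of the computation (an editorial illustration, not part of the original Physlet):

public class GaugePressure {
    public static void main(String[] args) {
        double rho = 1000.0; // density of water, kg/m^3
        double g   = 9.8;    // acceleration due to gravity, m/s^2
        double y   = 3.0;    // depth below the surface, m
        double p0  = 0.0;    // surface pressure: 0 for gauge, ~1.0e5 Pa for absolute

        double p = p0 + rho * g * y; // P = P_0 + rho*g*y
        System.out.printf("Pressure at %.1f m depth: %.0f Pa%n", y, p); // 29400 Pa
    }
}

Setting p0 = 1.0e5 instead gives the absolute pressure, roughly 1.3 x 10^5 Pa at the same depth.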
{"url":"http://www.compadre.org/Physlets/fluids/illustration14_1.cfm","timestamp":"2014-04-17T21:45:54Z","content_type":null,"content_length":"17010","record_id":"<urn:uuid:20e885a5-9155-4736-9005-fb7ba4359a8f>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
Michael Newton IMA Complex Systems Seminar 3:30 Wednesday, October 8, 2003 Randomization test for array-based comparative genomic hybridization Departments of Statistics and Biostatistics and Medical Informatics University of Wisconsin – Madison Madison, WI 53706 After low-level preprocessing, data from an array-based comparative genomic hybridization (aCGH) experiment may be viewed as a set of independent and identically distributed (iid) copies of the random vector X = (X_1, X_2, ..., X_n) where X_i ~ Bernoulli( p_i ), but in which there is dependence among the components owing to aspects of the tumor growth process that is being measured by X. Here i denotes a position in the genome and X_i indicates whether or not genomic damage occurs at position i in the sampled tumor. One statistical problem is to test the null hypothesis H_0: p_i = p for all i; that is, there are no aberration hot-spots in the genome. I will describe several attempts to implement such a test using conditional inference and three permutation methods. I conjecture that an elementary shuffling procedure provides a conservative hypothesis test. The approach is tested on a set of 60 aCGH profiles from cancer cell lines. Issues with hidden-Markov model-based inferences will also be discussed.
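The abstract does not spell out the shuffling procedure, but the general flavor of such a test is easy to sketch. In the illustration below (an editorial sketch on synthetic data; the statistic, the within-profile shuffling scheme, and all parameters are my assumptions, not details from the talk), an observed maximum per-position aberration count is compared against its distribution under random permutation of positions within each profile:

import java.util.Random;

public class ShuffleTest {
    // Test statistic: the largest per-position count of aberrations.
    static int maxColumnSum(boolean[][] x) {
        int best = 0;
        for (int j = 0; j < x[0].length; j++) {
            int s = 0;
            for (boolean[] row : x) if (row[j]) s++;
            best = Math.max(best, s);
        }
        return best;
    }

    // Permute positions independently within each profile (Fisher-Yates).
    static void shuffleRows(boolean[][] x, Random rng) {
        for (boolean[] row : x)
            for (int j = row.length - 1; j > 0; j--) {
                int k = rng.nextInt(j + 1);
                boolean t = row[j]; row[j] = row[k]; row[k] = t;
            }
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        int m = 60, n = 200; // profiles x genome positions
        boolean[][] data = new boolean[m][n];
        for (boolean[] row : data) // synthetic data generated under H_0
            for (int j = 0; j < n; j++) row[j] = rng.nextDouble() < 0.2;

        int observed = maxColumnSum(data), hits = 0, B = 2000;
        for (int b = 0; b < B; b++) {
            // Re-shuffling in place still draws uniformly over row permutations.
            shuffleRows(data, rng);
            if (maxColumnSum(data) >= observed) hits++;
        }
        System.out.printf("permutation p-value = %.3f%n", (hits + 1.0) / (B + 1));
    }
}

As the abstract notes, dependence among positions within a tumor means such a shuffle is not exact under the null, which is why conservativeness is only conjectured.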
{"url":"http://www.ima.umn.edu/~tkurtz/Newton.htm","timestamp":"2014-04-18T15:42:23Z","content_type":null,"content_length":"5610","record_id":"<urn:uuid:a946b2e0-16ff-4da5-8c2f-6f8522193d7c>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
Well Ordering January 10th 2013, 10:27 AM Re: Well Ordering I assumed that, since the only number mentioned was an integer, that the usual order relation was intended. I really don't see why this has gone to, now, 16 posts! January 10th 2013, 11:30 AM Re: Well Ordering Thank you, Thank you, HallsofIvy. If the concept of well ordering is meaningless for a single digit, and the positive integers are well ordered, then there is no such thing as a single integer subset, so if a subset of the integers contains n, it has to contain a larger number → Archimedes Postulate <-> Euclids Postulate And also Archimedes postulate (Euclid) implies well ordering. Now wasn’t that worth 16 posts? January 10th 2013, 11:35 AM Re: Well Ordering one can ask a similar, but related question: suppose A is a singleton set: A = {a}. how many possible partial orders can be defined on A? i humbly submit that there is exactly ONE: x ≤ y iff x = y, for all x, y in A. we can make this a strict order by defining: x < y iff x ≠ y. under this strict ordering A is indeed well-ordered, with smallest element a. January 10th 2013, 11:42 AM Re: Well Ordering one can ask a similar, but related question: suppose A is a singleton set: A = {a}. how many possible partial orders can be defined on A? i humbly submit that there is exactly ONE: x ≤ y iff x = y, for all x, y in A. we can make this a strict order by defining: x < y iff x ≠ y. under this strict ordering A is indeed well-ordered, with smallest element a. I was afraid of that, you are of course right if a single integer can be ordered (< = or >). January 10th 2013, 12:11 PM Re: Well Ordering Thank you, Thank you, HallsofIvy. If the concept of well ordering is meaningless for a single digit, and the positive integers are well ordered, then there is no such thing as a single integer subset, so if a subset of the integers contains n, it has to contain a larger number → Archimedes Postulate <-> Euclids Postulate And also Archimedes postulate (Euclid) implies well ordering. Now wasn’t that worth 16 posts? oh, bother! the natural numbers come with a CANONICAL well-ordering. if one uses the construction, s(x) = x U {x}, then k < n iff k is an element of n. for example: 2 = {0,1}, so 1 < 2, and 0 < 2. the well-ordering of the natural numbers is "axiomatic", that is: it is INTRINSIC, and equivalent in strength to the axiom schema of induction. actually, ZF set theory says something a bit MORE: there exists an infinite well-ordered set (which may, or may not be, the natural numbers). this stops "just short" of the axiom of choice, in that it does not assert that EVERY set is well-ordered (but certainly implies a method of well-ordering any FINITE set, using an injection into the well-ordered infinite set). the notion that singleton subsets do not exist is absurd, and violates the axiom of extensionality. January 10th 2013, 01:04 PM Re: Well Ordering I didn't say singleton subsets don't exist. I simply asked whether a singleton could be ordered. It's academic because {2,2} doesn't take me where I want to go because < OR = rips it. January 10th 2013, 01:30 PM Re: Well Ordering the set {2,2} is just the same as the set {2} (they have the same elements). it seems to me that a more profitable question for you, and one that is far more likely to be better-received on these forums is: under what constraints on a set S (with whatever algebraic structure and order structure is necessary for our purposes), are the euclidean algorithm and the archimedean property equivalent? 
that might actually lead to an interesting and fruitful discussion. January 12th 2013, 01:38 PM Re: Well Ordering It's hard to believe this isn't spam. Your original question was "What is the least member of {2}?" That is a set with one member. Whether you ask for "least member", "largest member" with respect to whatever order relation, it must be a member so there is only one possible answer! January 12th 2013, 02:25 PM Re: Well Ordering I think that pretty well wraps things up. hartlw: Find a new topic to troll. January 12th 2013, 06:39 PM Re: Well Ordering But the concept of order is NOT meaningless for the set of integers. And even a single integer is a member of the set of integers. The smallest member of {2} is 2. In fact it is also the largest member of {2}, the "evenest" member of {2}, and the "*****est" member of {2} as long as "*****" is an attribute that 2 has- because 2 is the only member of the set so any answerable question about a member of {2} must be "2". The well ordering principle does apply to sets of integer that have a "lower bound". And we don't need to talk about a "glb". If a set of integers has a lower bound, then it has a smallest member, the least member of the set. We usually apply the term "glb" to sets that do NOT have a smallest member. For example, the set of all positive rational numbers does not have a smallest member. It has glb 0 which is not in the set.
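In symbols, the principle the whole thread circles is the well-ordering of the positive integers,

$$\forall S \subseteq \mathbb{Z}^{+}:\; S \neq \varnothing \;\Rightarrow\; \exists\, m \in S\ \forall s \in S\ m \le s,$$

and with $S = \{2\}$ the only candidate is $m = 2$: the least (and, vacuously, also the greatest) element of a singleton is its unique member, exactly as the final post says.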
{"url":"http://mathhelpforum.com/higher-math/211116-well-ordering-2-print.html","timestamp":"2014-04-21T09:03:04Z","content_type":null,"content_length":"14691","record_id":"<urn:uuid:0c38e09c-6f20-42b9-a235-90b088871ee8>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: Question: Centroid given a distance metric
Replies: 3   Last Post: Feb 13, 2013 7:25 AM

Re: Question: Centroid given a distance metric
Posted: Feb 12, 2013 4:39 PM

On Tuesday, February 12, 2013 10:26:24 AM UTC-8, Nicolas Bonneel wrote:
> On 2/11/2013 12:17 PM, Andrey Savov wrote:
> > Was wondering if you guys can point me in the right direction.
> >
> > Are there any known/studied methods to calculate a centroid (geometric center) of a finite set of points in n-dimensional real Euclidean space by only knowing a distance metric f(x,y): R^n x R^n -> R ?
>
> Have you tried posing it as an optimization problem:
> F(x) = \sum_i d(x, x_i)^2, x* = \argmin_x F(x)
> and running any optimization method ?
> There won't likely be a closed-form solution for an arbitrary distance
> d(x,y), but if it's smooth and the dimension not too large, you can
> manage to find a global optimum. It will not necessarily be unique
> though, but should exist if d is not a strange function (like d=\infty
> everywhere etc.).
> --
> Nicolas Bonneel

If, as is often the case, the distance function is convex, the sum of squares of it is also convex, so a local min will be a global min. However, the problem arises that sometimes the F(x) function is NOT smooth: the minimum may--and in practical problems, often does--lie right on top of one of the points x_i, making F non-differentiable at the optimal solution. This does not always happen, but it does happen often enough that location-analysis folks have to devise special algorithms to handle the problem.

I would ask: why do you want to minimize the sum of squares? For Euclidean distance, that F(x) has some physical and statistical meaning, and furthermore leads to a simple solution. However, for other norms such as d(x,y) = |x|+|y| or d(x,y) = max(|x|,|y|), or for a p-norm with 1 < p < 2, what significance can one attach to the sum of squares? Certainly it makes _some_ problems much harder instead of easier (for example, when d(x,y) = |x| + |y|).

Ray Vickson

Date / Subject / Author:
2/12/13 Re: Question: Centroid given a distance metric RGVickson@shaw.ca
2/13/13 Re: Question: Centroid given a distance metric quasi
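Bonneel's suggestion is easy to try numerically. Below is a minimal Perl sketch (made-up points, plain gradient descent) for the smooth Euclidean case, where the minimizer of F(x) = sum_i d(x, x_i)^2 is known to be the coordinate-wise mean, so the optimizer can be checked against it:

#!/usr/bin/perl
use strict; use warnings;

# Hypothetical sample points in R^2.
my @pts = ([0,0], [4,0], [0,4], [4,4], [10,2]);

# F(x) = sum over points of squared Euclidean distance to x.
sub F {
    my ($x) = @_;
    my $s = 0;
    for my $p (@pts) {
        $s += ($x->[0]-$p->[0])**2 + ($x->[1]-$p->[1])**2;
    }
    return $s;
}

# Plain gradient descent; fine here because F is smooth and convex.
my @x = (0, 0);
my $step = 0.01;
for (1 .. 2000) {
    my @g = (0, 0);
    for my $p (@pts) {
        $g[0] += 2*($x[0]-$p->[0]);   # gradient of sum_i ||x - x_i||^2
        $g[1] += 2*($x[1]-$p->[1]);
    }
    $x[0] -= $step*$g[0];
    $x[1] -= $step*$g[1];
}

# The known closed-form answer: the coordinate-wise mean.
my @mean = (0, 0);
for my $p (@pts) { $mean[0] += $p->[0]/@pts; $mean[1] += $p->[1]/@pts; }

printf "descent: (%.4f, %.4f)   mean: (%.4f, %.4f)   F = %.4f\n",
       $x[0], $x[1], $mean[0], $mean[1], F(\@x);

Swapping in a non-smooth metric such as the taxicab distance reproduces exactly the difficulty Vickson raises: the gradient is undefined whenever the iterate sits on one of the x_i, which is where the minimum often lands.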
{"url":"http://mathforum.org/kb/thread.jspa?messageID=8308086&tstart=0","timestamp":"2014-04-18T13:34:29Z","content_type":null,"content_length":"19391","record_id":"<urn:uuid:a9066dc5-bea3-44ca-a394-084486dcbc6d>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: [PATCH 2.0.18] plothraw with more flags
Ilya Zakharevich on Fri, 3 Mar 2000 15:24:21 -0500

On Wed, Mar 01, 2000 at 03:45:39PM -0500, Ilya Zakharevich wrote:
> This change is a first step of a more ambitious plan. For the most
> harmonic interaction of high-level and low-level plotting functions we
> need
> a) a possibility of high-level plotting functions "plotting to
> vectors" instead of plotting to rectangles/output-device,
> *and*
> b) a possibility to plot these vectors in the same ways as the
> current high-level functions do.
> This splitting of plotting into two stages would allow arbitrary
> transformations inserted between these two steps. One possible
> solution for "b" is to modify plotpoints() to accept a flag (with the
> default being 64). Another one would be to have an entirely new
> function which would allow plotting several curves simultaneously, and
> would accept different formats: those good for several parametric
> curves or several x-to-y curves.

Stupid me! Just let function(s)-graph operators accept a precooked array of results as the "expr" argument if an appropriate flag is given. So instead of providing the function to calculate points of the graphs, one would be able to provide the list of results of evaluating this function at appropriate points.

ploth(X=7, 13, [ [1,1], [2,8], [3,27], [4,64] ], parametric+list+splines)

will plot the graph of x^3 on [1,4] (so arguments X, 7, and 13 are ignored),

ploth(X=7, 13, [ [1,1], [2,8], [3,27], [4,64] ], list+splines)

will plot two curves (x-6)/2 and ((x-6)/2)^3 on [7,13], and

ploth(X=[5,6,7,8], 13, [ [1,1], [2,8], [3,27], [4,64] ], list+splines)

will plot two curves (x-4) and (x-4)^3 on [5,8] (so the argument 13 is ignored).

In all of these cases the bit "list" can be deduced from the input. It cannot be deduced in

ploth(X=7, 13, [ 1, 8, 27, 64 ], list+splines)

which should plot ((x-6)/2)^3.

Do you like this? Should we autodeduce "list" (probably 512)?
{"url":"http://pari.math.u-bordeaux.fr/archives/pari-dev-0003/msg00009.html","timestamp":"2014-04-17T09:57:33Z","content_type":null,"content_length":"5945","record_id":"<urn:uuid:0937791d-a2d3-4fa6-9195-2dc181b29efa>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Block-specific Sprayer Calibration Worksheet
Department of Plant, Soil, & Insect Sciences

1. CALCULATE DILUTE GALLONAGE REQUIREMENT PER ACRE (based on Tree Row Volume, TRV) = Dilute GPA

The formula: (Tree Shape X Tree Width (ft.) X Canopy Height (ft.) X 35) / Row Spacing (ft.)

Dilute Gallons Per Acre (GPA) is a fundamental concept of sprayer calibration. It is based on Tree Row Volume (TRV), i.e. the total canopy volume (cubic feet) per acre, and the fact that (for Eastern orchards) it takes 1 gallon of spray material (water plus crop protectant) to cover 1,450 cubic feet of foliage. Dilute GPA should be calculated for every orchard block that differs significantly in tree size, shape, or age, and row spacing. The calculated GPA represents a full dilute (1X concentrate) spray volume of water plus crop protectant.

2. SPRAYER OUTPUT IF NOZZLED FOR SPECIFIC BLOCK = Sprayer Gallons Per Minute (GPM)

The formula: Dilute GPA / Concentration (1X, 2X, 3X, etc.) = Gallons Per Acre; (Gallons Per Acre X Travel Speed (m.p.h.) X Row Spacing (ft.)) / 495 = Gallons Per Minute (GPM).

Gallons per minute (GPM) is the desired sprayer output (calibration 'target') based on dilute GPA, desired concentration (1X, 2X, 3X, etc.), tractor travel speed, and block row spacing. Dilute GPA and GPM are the two orchard factors essential to calibrate your sprayer to accurately deliver recommended pesticide rates on a block-by-block basis.

3. ADJUSTING FOR BLOCK DIFFERENCES WITH KNOWN SPRAYER OUTPUT

The formula: Actual output per acre (GPA) = (GPM X 495) / (Travel Speed X Row Spacing); Actual concentration = Dilute GPA / GPA

Given the known GPM (sprayer output), you can easily calculate GPA for another block with different row spacing and/or travel speed. (Of course, this assumes the same nozzle set-up.) Then divide Dilute GPA by GPA to determine the actual concentration (2X, 3X, etc.). Ideally, dilute gallonage requirements per acre (based on TRV) and sprayer output (GPM) should be calculated first for each block.

Some other sprayer calibration considerations:
• if tree size is variable, calibrate using average tree width/height measurements for the larger trees
• in general, two-thirds of the spray volume should be directed towards the top half of the tree; an exception is early in the season when it is reasonable to direct half the spray in the bottom, half in the top (i.e., oil sprays)
• concentrate spraying is more sensitive to wind speed, drying conditions, and sprayer calibration errors; when spraying at 3X or more, the rate of crop protectant can be reduced by 20% from the recommended dilute rate for most insecticides and fungicides, except when the label indicates the product should be applied dilute only (usually needed for greater coverage); plant growth regulators should be applied as a dilute spray if possible (certainly no more concentrate than 3X)
• note that newer insecticide/fungicide/plant growth regulator labels will give a rate per acre only, regardless of tree row volume and/or dilute vs. concentrate applications; in this case, use no less than the recommended rate per acre
• always consult the label before making any pesticide application; the label is the law.
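The three formulas above chain together; here is a short Perl sketch of the whole worksheet for one hypothetical block (the tree dimensions, speed, and concentration below are illustrative values only, not recommendations):

#!/usr/bin/perl
use strict; use warnings;

# Worksheet inputs for one block (illustrative values only).
my $shape     = 0.7;   # tree shape factor
my $width_ft  = 12;    # tree width (ft.)
my $height_ft = 14;    # canopy height (ft.)
my $row_ft    = 20;    # row spacing (ft.)
my $conc      = 3;     # spraying 3X concentrate
my $speed_mph = 2.5;   # travel speed (m.p.h.)

# Step 1: dilute gallons per acre from tree row volume.
my $dilute_gpa = ($shape * $width_ft * $height_ft * 35) / $row_ft;

# Step 2: target sprayer output in gallons per minute.
my $gpa = $dilute_gpa / $conc;
my $gpm = ($gpa * $speed_mph * $row_ft) / 495;

printf "Dilute GPA: %.1f   applied GPA at %dX: %.1f   target GPM: %.2f\n",
       $dilute_gpa, $conc, $gpa, $gpm;

# Step 3: reuse the same nozzling in a block with different geometry.
my ($row2_ft, $speed2_mph) = (18, 3.0);
my $gpa2 = ($gpm * 495) / ($speed2_mph * $row2_ft);
printf "Same nozzles at %.1f mph in %d ft rows: %.1f GPA\n",
       $speed2_mph, $row2_ft, $gpa2;

With these inputs, dilute GPA comes out near 205.8 and the 3X target output near 6.9 GPM; dividing the second block's own dilute GPA by its 63.5 GPA figure then gives that block's actual concentration, per step 3.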
{"url":"http://extension.umass.edu/fruitadvisor/fact-sheets/block-specific-sprayer-calibration-worksheet","timestamp":"2014-04-25T00:40:01Z","content_type":null,"content_length":"36325","record_id":"<urn:uuid:06dd1e7c-6573-4605-b574-6442ad96815d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Stickney, IL Chicago, IL 60610 Talented Math, Music Tutor ...Young high school, one of the leading high schools in Chicago. I was a part of their Academic Decathlon team, a prestigious academic competition in ten different subjects encompassing , Music, Economics, Science, History, Art, Literature, Speech, Interview,... Offering 10+ subjects including algebra 1, algebra 2 and geometry
{"url":"http://www.wyzant.com/geo_Stickney_IL_Math_tutors.aspx?d=20&pagesize=5&pagenum=1","timestamp":"2014-04-17T01:09:39Z","content_type":null,"content_length":"61673","record_id":"<urn:uuid:ea30b56e-0771-464d-a95c-b5e526b786b8>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
semigroups acting as continuous functions on regular rooted trees

Let $T$ be a regular rooted tree. Make $T$ into a metric space by making each edge isometric to the unit interval. What is known about what semigroups can act as continuous functions on $T$ such that each element of the semigroup fixes the root of $T$ and each element maps vertices to vertices? Are there any algebraic restrictions on such semigroups, for example? Can every semigroup arise in this way? Thanks!

Tags: semigroups

1 Answer:

It depends how you allow the semigroup to act and if you are assuming the action is faithful. If the semigroup acts faithfully by level preserving endomorphisms of the tree, then it must be residually finite. If you remove the restriction on regular tree, you can obtain all residually finite semigroups. Good references are the papers Automata, Dynamical Systems, and Groups by Nekrashevych, Grigorchuk and Sushchanskii http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=tm&paperid=515&option_lang=eng and Monoids acting on trees by Rhodes (in the first issue of IJAC) and Further results on monoids acting on trees by Rhodes and Silva http://arxiv.org/abs/1103.2344

I guess as long as your endomorphisms preserve the graph structure of the tree (and hence will be contractions) then you will still have residual finiteness, since you cannot increase distance from the root, and so if you fix a level, then the set of levels less than or equal to that level is invariant. It is natural to look at level preserving maps because they correspond to sequential machines (or transducers) and they induce contractions of the boundary of the tree.
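To spell out the residual-finiteness argument sketched in the answer: for each level $k$, let $T_k$ be the finite subtree of vertices at distance at most $k$ from the root. Level preserving endomorphisms keep each $T_k$ invariant, so restriction gives a homomorphism into a finite semigroup,

$$\varphi_k \colon S \to \operatorname{End}(T_k), \qquad \varphi_k(s) = s|_{T_k},$$

and if the action is faithful, then any $s \neq t$ in $S$ already differ on some vertex, which lies at some finite level $k$, so $\varphi_k(s) \neq \varphi_k(t)$. A family of homomorphisms into finite semigroups that separates points is precisely residual finiteness.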
{"url":"http://mathoverflow.net/questions/53847/semigroups-acting-as-continuous-functions-on-regular-rooted-trees","timestamp":"2014-04-17T09:39:43Z","content_type":null,"content_length":"49632","record_id":"<urn:uuid:d2cd3080-24ba-472f-86ad-98975a91c078>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
The Computer Guy 1. Code without thinking. 2. Believe the first answer you get. 3. Use an array as a database field type. 4. Create a custom database engine. 5. Cram all options in one dialog box. 6. Validate data w/o feedback. 7. Launch with “Implement this” messages. 8. Create a custom application framework. 9. Work in only one language. 10. Have a negative attitude. 1. Document your code. 2. Use a Version Control System. 3. Admit that you know nothing of design. 4. Understand that functionality doesn’t beat ease of use. 5. Test! Test! TEST! 6. Indent properly. 7. Optimize to out perform. 8. Hardcode NOTHING. 9. Speak with someone who will use the program 30+ hrs/wk 10. `grep -i -R 'TODO' *` For about a year now, I’ve been playing around with my digital tuner card. It wasn’t until I turned off the cable that I have a need to use it. Using some cool linux tools, I’ve made a script to record HDTV broadcasts to my computer. It is a work in progress, but here’s what I’ve got so far. The Tuner Card The tuner card is DVB based on a Conextant chipset, so the first step was to get my kernel to make the card usable. A quick check will show if the driver is loaded: dmesg | grep dvb The Tools Required • dvb-atsc-tools • azap • ffmpeg Channel Scan Scan for channels using: dvbscan /usr/share/dvb/atsc/us-ATSC-center-frequencies-8VSB > ~/.azap/channels.conf Edit the file ~/.azap/channels.conf to make sure the channel names are correct. Your base frequencies file may be in a different location, but it is usually under /usr/share. Iteration 1: Crontab Recording At first, I used only the crontab to record. Here’s an example: 24 12 * * * /usr/bin/azap -c /home/dvr/.azap/channels.conf -r WRDW-HD 25 12 * * * /bin/cat /dev/dvb/adapter0/dvr0 > /data/dvr/young-restless.mpeg 35 13 * * * /usr/bin/pkill cat 36 13 * * * /usr/bin/pkill azap 37 13 * * * /usr/bin/ffmpeg -i /data/dvr/young-restless.mpeg -s 1024x476 -vcodec libxvid -b 1600000 -acodec copy /data/dvr/y-r-friday.avi This is a very ugly solution with lots of cracks. For instance, if I were running cat from a console when /usr/bin/pkill cat were running, it would die. Heaven forbid another processes is using cat when that runs. Also, I had to change the name of the ffmpeg output file every day. Iteration 2: Cronable Perl Script This script does pretty much the same thing as the above 4 lines in the crontab does. This means you don’t have to write 4 lines in the crontab for each recording, just 1 line. Also, the file name is appended with the date in yyyy-mm-dd format. #Does someone need a reminder? 
if ( $#ARGV != 2 ) {
    print "Usage:\n";
    print "record.pl <channel> <minutes> <name>\n";
    exit;
}

#Creates a random 16 character (a-z) string
sub randstr {
    my @chars = ('a'..'z');
    my $res = "";
    for ( my $i = 0; $i < 16; $i++ ) {
        # rand(@chars) gives 0..25.999..., which perl truncates to a valid index
        $res .= $chars[rand(@chars)];
    }
    return $res;
}

#Grab the command line args
my ( $channel, $length, $finalFileName ) = @ARGV;

#Temporary mpeg filename
my $tempFileName = randstr();

#Add date to final filename
$finalFileName .= "-" . `date +%Y-%m-%d`;
$finalFileName =~ s/\n//;

#Start azap in the background
print "Starting azap\n";
system( "/usr/bin/azap -c /home/barry/.azap/channels.conf -r $channel >/dev/null 2>/dev/null &" );
sleep 5;

#Start cat in the background
print "Starting cat\n";
system( "/bin/cat /dev/dvb/adapter0/dvr0 > /data/dvr/$tempFileName.mpeg &" );

#Sleep the required seconds for the show to record
print "Recording for " . (60*$length) . " seconds...\n";
sleep 60*$length;

#TODO: Remove pkill, as it may kill unrelated processes
print "Killing cat and azap.\n";
`pkill cat`;
`pkill azap`;

#Resize & encode to XviD using ffmpeg
#ffmpeg sometimes stops working b/c of bad mpeg data
#TODO: Replace with mencoder
print "Encoding...\n";
`/usr/bin/ffmpeg -i /data/dvr/$tempFileName.mpeg -s 1024x476 -vcodec libxvid -b 1600000 -acodec copy /data/dvr/$finalFileName.avi`;

#Remove the temporary mpeg file
`rm /data/dvr/$tempFileName.mpeg`;

print "Done!\n";

I know it's not the most elegant of perl scripts, but it gets the job done. Here's a sample cron:

25 12 * * * /home/dvr/record.pl WRDW-HD 70 young-restless

As you can see from the TODO comments, I continue to tinker with the script. When I make a good development, I'll post it. If you have any suggestions, feel free to post a comment or contact me.

In the previous post, simple regular expressions were explained. Today, regex becomes useful. If you didn't read the previous post, you should at least skim it. For this post, all examples will be using Perl.

Getting a Match

Parentheses are used to extract a match from a string. Let's say you want to know what is inside the "head" html tags. Here's the code:

if ( $html =~ m/<head>(.*)<\/head>/is ) {
    print "HTML Header:\n$1\n";
}

The match is given to the code as the variable $1. Note that this example has an "s" after the closing forward slash. The s treats the string to be compared as a single line. Without it, you probably wouldn't get a match. Also, this simple regex will not match the entire head in all cases. If you put "</head>" inside a meta keyword list, it would match, but stop at the first "</head>".

Here's another example:

if ( $text =~ m/ a ([aeiou][a-z]+)/i ) {
    print "Grammar error: use \"an\" when the following word starts with a vowel.\n i.e. an $1\n";
}

Yup, it's a grammar rule check. Now you know where that green squiggly underline comes from.

Whitespace and Non-whitespace matching

Whitespace refers to a space, tab, and carriage returns. "\s" matches a whitespace character and "\S" matches a non-whitespace character.

That's the basics of regular expressions. These magical expressions work in almost every language, including perl, php, javascript, and python. Happy pattern matching.

Regular Expressions, those oddities that live between two forward slashes, are very powerful and quite mysterious. Staring at something like /([abcdef0123456789]+)/i all day can give you a headache. With a little luck and a bit of hard work, you'll know exactly what the previous expression means.

For this post, all examples will be using Perl.

Text Search

A regular expression, or regex, in its simplest form is a text search.
Here's an example:

$var = "Hello World";
if ( $var =~ m/Hello/ ) {
    print "Match\n";
}

In perl, the operator =~ is used to run a regex against a variable. The m/Hello/ will match if the variable has "Hello" anywhere. To make the match case-insensitive, simply add an i after the last forward slash. So change the regex to m/Hello/i to match "Hello", "HeLlO" and "hello".

Carets and Dollar Signs

A caret (^) at the beginning of a regex represents the beginning of a string. Here's an example:

$var = "Hello World";
if ( $var =~ m/^Hello/ ) {
    print "Match\n";
}

A dollar sign ($) at the end of a regex represents the end of a string. Another example:

$var = "Hello World";
if ( $var =~ m/World$/ ) {
    print "Match\n";
}

If you want to match one of these special characters, put a backslash before it.

$var = "Hello^ $World";
if ( $var =~ m/e\^ \$W/ ) {
    print "Match\n";
}

Putting a list of characters inside brackets "[]" will match any one of those characters.

$var = "Hello World";
if ( $var =~ m/[aeiou]/ ) {
    print "There is a vowel.\n";
}

You can even tell if a string contains a hexadecimal value. This example uses the special character +. It means that the previous character must appear 1 or more times.

$var = "0x157afde";
if ( $var =~ m/^0x[0123456789abcdef]+$/ ) {
    print "It is hexadecimal\n";
}

Within the brackets, instead of listing every possible character, you can specify a range to be matched. For instance, 0-9 will match any digit 0 through 9. Here's a slightly shorter example:

$var = "0x157afde";
if ( $var =~ m/^0x[0-9a-f]+$/ ) {
    print "It is hexadecimal\n";
}

The caret (^) continues its job as a special character within brackets. Putting one at the beginning of the brackets will match anything but those listed inside. (Note: with this $var the test below never prints, since every character of the string is in the excluded set; the example is about the syntax.)

$var = "0x157afde";
if ( $var =~ m/^[^0-9a-fx]+$/ ) {
    print "It is not hexadecimal\n";
}

Periods and Asterisks

A period (.) will match any character.

$var = "Hello World";
if ( $var =~ m/^H.llo W.+$/ ) {
    print "Match\n";
}

An asterisk (*) is similar to a plus sign (+), but an asterisk will match 0 or more of the previous character.

$var = "Hello World";
if ( $var =~ m/^Hello .*$/ ) {
    print "Saying hello\n";
}

Tomorrow, more special characters, including white-space characters, non-whitespace characters, and matching parentheses.

About 6 months ago, I was surfing and came across a math/programming site that has fascinated me ever since, Project Euler. They have over 200 problems to solve and continue to add more. The problems start off very easy and get difficult rather quickly. Some of the beginning problems can even be solved using paper and pencil. If you are just starting to learn computer programming, you should check out this site.

The first problem can be solved fairly easily, and I will show you how I did it in perl. First, let's look at the problem. They want you to "Find the sum of all the multiples of 3 or 5 below 1000."

What the program will have to do is loop through all whole numbers from 1 to 999. Inside the loop, there is a check to see if the number is a multiple of 3 or 5. If it is, it is added to the sum. After the loop, print the sum to the screen. Here's the code:

$sum = 0;
$counter = 1;
while( $counter < 1000 ) {
    if ( $counter % 3 == 0 || $counter % 5 == 0 ) {
        $sum = $sum + $counter;
    }
    $counter = $counter + 1;
}
print "The answer is $sum\n";

This program, just like every other computer program, uses flow control. Basically flow control tells the computer what to do and when to do it. There are 2 flow control structures here, a while loop and an if statement.
Both of these are started by a comparison inside parentheses, then an opening curly brace. They are ended by the closing curly brace. The while statement should be fairly straightforward, but the if statement is a little complex. The percent (%) sign means modulus (mod for short), which is simply the remainder of division. For example, 4 % 3 is 1, 4 % 5 is 4, and 9 % 3 is 0. The double pipe (||) means OR. When reading line 4, you say "If counter mod 3 equals 0 or counter mod 5 equals 0, then."

Let's go line by line.

Line:1 $sum = 0; Set a variable named "sum" to 0.
Line:2 $counter = 1; Set a variable named "counter" to 1.
Line:3 while( $counter < 1000 ) { Start a loop and continue the loop while counter is less than 1000.
Line:4 if ( $counter % 3 == 0 || $counter % 5 == 0 ) { Check to see if the remainder of counter divided by 3 is zero OR the remainder of counter divided by 5 is zero.
Line:5 $sum = $sum + $counter; Add counter to sum if so.
Line:6 } Close the if block.
Line:7 $counter = $counter + 1; Add 1 to counter.
Line:8 } Close the while loop. At this point, the program repeats the check at line 3; if the check is true, it goes back to line 4.
Line:9 print "The answer is $sum\n"; Print the answer to the screen.

You can download perl from perl.com. Once it is installed, you can copy and paste the source code into notepad and save it as euler-1.pl or whatever you want. To run the code, double click on the file.

Play around with the code. Poke it. Prod it. Change the while comparison so it only goes to 10, 100, 100000, etc. Change the if to check for numbers divisible by 2 and 3. If you mess it up, copy and paste the code back into the file. If you have any problems, leave a comment.

Octal, or base-8, is commonly used in Unix style operating systems. This number system, being base-8, uses digits 0-7. It's easily translated to binary, but hex and decimal are a little harder. Each digit in octal is 3 digits in binary, so 0[8] is 000[2], 1[8] is 001[2], ... 7[8] is 111[2]. Because it is simple to convert octal to binary, I would suggest that you convert octal to binary and then to decimal or hex.

Unix style operating systems use octal to define file permissions. Each file has a 3 octal digit code. The first digit defines the user's permissions. The second defines the group's permissions. The final digit defines everyone else's permissions. These permissions are based in binary. The first digit is to allow reading of the file. The second digit is to allow writing to the file. The third digit is to allow execution of the file. A file with permissions of 777 allows everyone to do everything, while 664 allows the owner and group to read and write, but everyone else only gets to read the file.

Check back Wednesday for a discussion of binary truth tables.

Homework: Convert the following from octal to decimal:

Now that you understand binary, let's move onto hexadecimal, or base 16. This is the numbering system most programmers use, because it translates easily to and from binary. Also, 2 hexadecimal digits make up a byte, or 8 bits. How do we have 16 digits? Simple, hexadecimal uses the digits 0-9 and the letters A-F. 0-9 is the same in hexadecimal as decimal. The digits A-F in hexadecimal are 10-15 in decimal.
Here is a conversion chart of single digit hexadecimal:

Hexadecimal Binary Decimal
0[16] 0[2] 0[10]
1[16] 1[2] 1[10]
2[16] 10[2] 2[10]
3[16] 11[2] 3[10]
4[16] 100[2] 4[10]
5[16] 101[2] 5[10]
6[16] 110[2] 6[10]
7[16] 111[2] 7[10]
8[16] 1000[2] 8[10]
9[16] 1001[2] 9[10]
A[16] 1010[2] 10[10]
B[16] 1011[2] 11[10]
C[16] 1100[2] 12[10]
D[16] 1101[2] 13[10]
E[16] 1110[2] 14[10]
F[16] 1111[2] 15[10]

Hexadecimal, sometimes simply called hex, is very easy to translate to and from binary. This is because each digit in hex is 4 digits in binary. All you have to do is replace the hexadecimal digit with the binary equivalent. For instance, the number FF[16] is 11111111[2]. Converting a binary number to hex works just slightly differently. When you convert a number from hex to binary, you can work from left to right or right to left. When you convert a number from binary to hex, you must work from right to left. This is because a binary number might not be the correct length to work from left to right.

That's all for today, check back Friday for Octal, or base 8. Here's some homework: Convert the following hexadecimal numbers to binary and decimal:

Since it is Monday, I won't ask you to think too much today. Adding binary numbers is actually easier than adding decimal numbers, if you can believe that. Here's how to do it.

Write 2[10] binary numbers down, say 1111[2] and 111[2]. Make sure to align them on the right, same as you would to add decimal numbers. This should be on your paper/notepad:

  1111
+  111

Starting from the right, add the first two digits. 1[2] + 1[2] is 10[2], so write 0 as the right most digit and carry the 1. Moving on to the next digit, 1[2] + 1[2] + 1[2](carried) = 11[2], or 3[10], so write 1 as the next digit and carry the 1. Third digit is the exact same. 1[2] + 1[2] + 1[2](carried) = 11[2], or 3[10], so write 1 as the next digit and carry the 1. For the fourth digit, 1[2] + 1[2](carried) is 10[2], so write 0 as the next digit and carry the 1. Drop down the carry, because 1 + 0 is 1 no matter what base you are working with. There's the answer:

  1111
+  111
------
 10110

Checking the solution is as simple as opening Windows calculator. If you're not in scientific view, switch to it from the view menu. Select "Bin" for binary mode. Then put in the math problem, same as any other. Now that you know about calculator, try not using it.

Vocab: A digit in binary can be referred to as a bit, so 64-bit means 64 binary digits.

Create 8 or 9 random binary addition problems. If you want to make it hard, write out 64 digits, or bits, for each binary number.

Wednesday, I discussed the very basics of binary and how to count, or increment, in binary. Today I'll be discussing converting decimal numbers to and from binary numbers.

Before I cover conversion, let's talk for a second about bases. That's what this series boils down to. I'm not talking about military bases, but number bases. Binary is base 2, meaning that there are 2 digits. Because decimal uses 10 digits, it is base 10. When a number is of a certain base, you denote that by putting a subscripted 2, or 10, immediately after the number. So 100[2] means 100 in binary, while 100[10] means 100 in decimal. If you don't fully grasp this, I'm sure you will once you finish the exercises for today.

So, we all know that 1[2] is 1[10], but what does 10[2] equal in base 10? It's 2[10].
Check out this chart: Binary Decimal 1[2] 1[10] 10[2] 2[10] 100[2] 4[10] 1000[2] 8[10] 10000[2] 16[10] 100000[2] 32[10] 1000000[2] 64[10] 10000000[2] 128[10] 100000000[2] 256[10] For every extra zero on the binary side, the decimal side doubles. This is because binary is base 2. You can say that, counting from the right in binary, each digit placement is worth double the You can use this chart to convert a binary number to a decimal number. For every 1, add the decimal equivilent. For instance, the number 111[2] is 1[10] + 2[10] + 4[10] which is 7[10]. Another example is 1001[2], which is 8[10] + 1[10], or 9. OK, converting binary to decimal is the easier part for today. On to converting decimal to binary. You do pretty much the same thing as converting binary to decimal, but in reverse. The first step is to grab a scrap piece of paper, or open notepad. To walk through the process, I’ll convert 105[10] to binary. The first step is to find the decimal number in the chart that is closest to 105[10] without going over, which is 64[10]. 64[10] is 1000000[2], so write that binary number on the first line. The second step is to subtract 64[10] from 105[10], which is 41[10]. Then we repeat. So, 32[10], or 100000[2], is the next number. Write 100000[2] on the second line, making sure to line up the numbers on the right side, same as you would for decimal. Then subtract 32 [10] from 41[10], which is 9[10]. Then we repeat. So, 8[10], or 1000[2], is the next number. Write 1000[2] on the second line, making sure to line up the numbers on the right side, same as you would for decimal. Then subtract 8[10] from 9[10], which is 1[10]. Then we repeat. So, 1[10], or 1[2], is the next number. Write 1[2] on the second line, making sure to line up the numbers on the right side, same as you would for decimal. Then subtract 1[10] from 1 [10], which is 0[10]. Because, we have reached zero, we can move on the last step. On your paper/notepad should be: All you have to do is combine these. Starting on the left, any column that has a 1 in it, write a 1, or if the column only has zeros, write a 0. This yeilds: 1101001[2]. Converting this back to decimal will let you know you did it right. Convert the following binary numbers to decimal: Convert the following decimal numbers to binary: Remember to convert it back to check your work. Check back Monday when I’ll discuss adding binary numbers.
{"url":"http://compguyaug.com/category/programming/page/2/","timestamp":"2014-04-19T22:26:46Z","content_type":null,"content_length":"51390","record_id":"<urn:uuid:ff313636-399c-461c-838e-dc444a3d7f50>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Project MATH Who's Involved | About CSU, Chico | Get Paid | Housing | Getting Your Credential | Applications | Related Links Mathematics and Teaching on the Horizon This program builds on CSU, Chico’s long tradition of innovation in the preparation of teachers. In fact, we think you will find that few, if any universities offer prospective mathematics teachers the type of opportunities provided by Project MATH. Take a look at our VIDEO created by Project MATH students and mentors. Students are guaranteed housing in the Konkow Math and Science House, one of the premier on-campus housing units in Konkow Residence Hall. These apartment-style resident halls, equipped with a full kitchen, are considered the best on-campus housing at CSU, Chico. Students in the Math and Science House build a supportive community that provides a mix of academic support and extracurricular opportunities. Residents have access to a mathematics tutor who comes to the house 4 hours a week. They join in house activities such as bowling or evening walks to town. In the first two years of Project MATH, students participate in a bi-weekly seminar in which they work closely with mentor teachers to begin their transition from students of mathematics to teachers of mathematics. They take turns developing and presenting lessons in a safe, supportive environment. Connected to the seminar is an early field experience in local junior high schools and senior high school classrooms. This give the Project MATH students an opportunity to teach lessons previously developed in seminar. At the end of the sophomore year, Project MATH students are well positioned to take advantage of the newly created Credential Pathway in the Math Education Option of the degree. This program blends mathematics and credential course work in the junior and senior years, allowing a student to complete their degree and credential program by the end of the senior year. Whether a student chooses this pathway, or the traditional Math Ed Option, they will be taking three mathematics courses specifically designed for those interested in becoming quality single subject mathematics teachers. CSU, Chico students interested in becoming mathematics or science teachers are supported by scholarships from the Mathematics and Science Teaching Initiative (MSTI) and Teacher Recruitment Project Scholarships, as well as the Robert Noyce Scholarship Program. Visit our scholarship page. Scholarship applications from Project MATH students are considered favorably since involvement in Project MATH indicates a commitment to becoming a teacher. In fact, we have scholarship money dedicated to incoming freshmen who are accepted into Project MATH. Program Features: • Guaranteed apartment-style housing in Konkow Residence Hall during freshman year • Experiences in public schools throughout their undergraduate careers • In-house tutoring in calculus and other course as necessary • Seminars in math education • Mentoring by secondary math teachers and math faculty members • Paid tutoring positions at middle and high schools • Math education conferences and workshops • Math related field trips
{"url":"http://www.csuchico.edu/cmse/csuc_students/project_math/index.shtml","timestamp":"2014-04-20T06:59:21Z","content_type":null,"content_length":"13632","record_id":"<urn:uuid:679d00c1-ee7c-4bdc-97dd-2d2a38c627e5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
Cohomology of orthogonal and symplectic groups up vote 8 down vote favorite in their book Cohomology of Finite Groups Adem and Milgram investigate the cohomology of the finite orthogonal and symplectic groups only in case $\mathbb{F}_2$. Let $p$ be a prime dividing the order of $\text{O}_n(q)$, $\text{Sp}_n(q)$ and $q$ a prime power. I am wondering if anything is known about $\text{H}^\ast(\text{O}_n(q),\mathbb{F}_p)$ or $\text{H}^\ast(\text{Sp}_n(q),\mathbb{F}_p)$. I am also interested in the maximal elementary abelian $p$-subgroups of these groups. In light of Quillen's stratification theorem these two questions are, of course, related to each other. I would be grateful for any kind of information. finite-groups group-cohomology add comment 1 Answer active oldest votes Direct finite group computations of cohomology of the finite groups of Lie type tend to be very sparse. The case $p=2$ has special interest for topologists and does provide some explicit results. More generally, some indirect results of interest have been found in recent decades by systematically comparing cohomology of the finite groups (in the defining characteristic $p$) with rational cohomology of the ambient algebraic groups. The main work in this direction is found in papers by some combination of Cline, Parshall, Scott, van der Kallen (see their joint 1977 paper in Invent. Math.), Friedlander. But a full description of the cohomology rings is elusive. Benson and Carlson have done a lot of work in the traditional setting as well. There are lots of papers, but not many definitive results. up vote 6 down vote For maximal elementary abelian $p$-subgroups, some of the work by Avrunin, Parshall, Scott would be relevant, as well as the older LMS Lecture Notes by Kleidman and Liebeck. Perhaps the accepted most helpful source is the third volume of an ongoing series of books on the classification of finite simple groups. See especially section 3.3 and the summary table there for groups of Lie type:MR1490581 (98j:20011) 20D05 (20-02) Gorenstein, Daniel; Lyons, Richard (1-RTG); Solomon, Ronald (1-OHS) The classification of the finite simple groups. Number 3. Part I. Chapter A. Almost simple K-groups. Mathematical Surveys and Monographs, 40.3. American Mathematical Society, Providence, RI, 1998. xvi+419 pp. ISBN 0-8218-0391-3 Thank yo for all these references and comments! – Tilemachos Vassias May 27 '10 at 16:37 1 The literature I was aware of five years ago is summarized in Chapters 14-15 of my 2006 book *Modular Representations of Finite Groups of Lie Type" (Cambridge Univ. Press), with lots of references. But the only really strong calculations occur for low degree cohomology with coefficients in trivial or other modules. – Jim Humphreys May 27 '10 at 16:59 add comment Not the answer you're looking for? Browse other questions tagged finite-groups group-cohomology or ask your own question.
{"url":"http://mathoverflow.net/questions/26134/cohomology-of-orthogonal-and-symplectic-groups","timestamp":"2014-04-19T10:29:51Z","content_type":null,"content_length":"54988","record_id":"<urn:uuid:6c5c356c-e6d4-4094-9bbc-455835b474fb>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
Responses to John Cochran's reply to Paul Krugman's Attack

From philosopher Alex Rosenberg and economists Bob Murphy and Mario Rizzo.

If Hayek thinks the economy is a calculating machine then mathematically model one.

What would the economy calculate? The yield curve, of course.
How would the economy compute? Using digits, of course.
What are digits? They are inventory steps in a finite length distribution network.
How does the economy choose them? By minimizing the total number of transactions, as Hayek would point out.
How does the economy do that? By adjusting the lot sizes and transaction rates of each good at each step such that the volatility of inventory levels is equal (within the standard error) at each step. Hence each inventory level in a production system has equal probability of going to zero. We call this achieving economies of scale.

Why does this result in asymmetry when we have a restructuring? (This is the big question the Keynesians and EMH folks can't figure out.) Asymmetry results when a positive or negative shock to the production system causes the steps in production to drop by one (sudden constraint) or add one step (positive shock). Adding a step to production is inflation: more steps in production to deliver a greater variety of intermediate goods. A sudden constraint causes deflation: the removal of a step to get greater economies of scale. In a restructuring, all the stages have to adjust their lot sizes and transaction rates such that the amount of the yield curve allocated to each step results in a fair share of the bandwidth. Bandwidth equals volatility management, and inventory volatility has to be minimized and equal across steps. This adjustment process is the recalculation that Arnold Kling talks about.

Does the entire economy seek the same number of steps in production? Yes; in the aggregate the yield curve will be coherent, each good being processed by the same number of steps, or an integer multiple thereof. That is, there must not be overlap of the space shared on the yield curve among the different steps of the different goods. Otherwise we get interference and price volatility: agents cannot tell self price volatility from correlated volatility. Call this coherence at equilibrium.

What is money? Just another good (sorry Austrians, money has no special role), but the financial system will deflate and inflate in response to shocks.

What is price? Price is exactly a constant times sqrt(volatility)/mean of a particular inventory at equilibrium. (Price is the inverse of the signal-to-noise ratio at equilibrium.) In fact, at the impossible equilibrium point, all prices for all goods should be equal, because all inventory has a constant probability of dropping to zero.

Why aren't we stuck at equilibrium? Because the acceptable uncertainty level of inventory measurement is not a fixed number, but an acceptable band. This uncertainty band is fixed by the biology of herding animals.

What mathematics should we use? Quantum mechanics. Minimization of total transactions is your Hamiltonian. Minimization of inventory variance is the force. Quantization comes from having a relatively fixed band of measurement uncertainty. As an aside, Hayek was there when physics adopted quantum mechanics; it must have influenced him.
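In symbols, the price claim made above amounts to $P = c \cdot \sqrt{\operatorname{Var}(I)}\,/\,\mathbb{E}[I]$ for the inventory level $I$ of a good at equilibrium, i.e. a constant times the coefficient of variation of the inventory, which is the reciprocal of its signal-to-noise ratio, as the parenthetical says.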
{"url":"http://hayekcenter.org/?p=1672","timestamp":"2014-04-17T16:04:37Z","content_type":null,"content_length":"137631","record_id":"<urn:uuid:8a87ef72-e94c-4254-ad82-518aacc1e84d>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
Faber Polynomial Coefficient Estimates for Meromorphic Bi-Starlike Functions International Journal of Mathematics and Mathematical Sciences Volume 2013 (2013), Article ID 498159, 4 pages Research Article Faber Polynomial Coefficient Estimates for Meromorphic Bi-Starlike Functions ^1Institute of Mathematical Sciences, Faculty of Science, University of Malaya, 50603 Kuala Lumpur, Malaysia ^2Department of Mathematical Sciences, Kent State University, Burton, OH 44021, USA Received 6 January 2013; Accepted 11 March 2013 Academic Editor: Paolo Ricci Copyright © 2013 Samaneh G. Hamidi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We consider meromorphic starlike univalent functions that are also bi-starlike and find Faber polynomial coefficient estimates for these types of functions. A function is said to be bi-starlike if both the function and its inverse are starlike univalent. Consider the function where the coefficients are in the submanifold on such that is univalent in . Therefore where is a Faber polynomial of degree . (Also see [1, 2].) We note that In general (also see Bouali [3, page 52]) where The coefficients of , the inverse map of are given by where and with is a homogeneous polynomial of degree in the variables . (Also see [1, page 349].) Similarly where is a Faber polynomial of degree and where for is a homogeneous polynomial of degree in the variables . The Faber polynomials introduced by Faber [4] play an important role in various areas of mathematical sciences, especially in geometric function theory (e.g., see Gong [5] and Schiffer [6]). The recent interest in the calculus of the Faber polynomials, especially when it involves the function , the inverse map of (see [2, page 186]) beautifully fits the case for the meromorphic bi-univalent The function is said to be meromorphic bi-univalent in if both and its inverse are meromorphic univalent in . By the same token, the function is said to be meromorphic bi-starlike of order in if both and its inverse map are meromorphic starlike of order in , that is, Estimates on the coefficients of meromorphic univalent functions were widely investigated in the literature. For example, Schiffer [6] obtained the estimate for meromorphic univalent functions with and Duren ([7] or [8, Theorem 4.9, page 139]) proved that if for then . He then proved that this bound also holds for meromorphic starlike univalent functions of order zero (Duren [8, Theorem 4.8, page 137]). So far, the latest known results are given by the following two articles. Kapoor and Mishra [9] found sharp bounds for the coefficients of starlike univalent functions of order ; in and for its inverse functions they obtained the bound when . More recently, Srivastava et al. [10] found sharp bounds for the coefficients of starlike univalent functions of order , , having -fold gaps in their series representation in and also for their inverse functions. The above two articles settled the coefficient bounds for starlike functions and their inverses but they have not considered the bi-starlike case. The problem arises when the bi-univalency condition is imposed on the meromorphic functions . The bi-univalency requirement makes the task of finding bounds for the coefficients of and its inverse map more involved. 
In this paper, for the first time, we use the Faber polynomial expansions to study the coefficients of meromorphic bi-starlike functions. As a result, we are able to prove. Theorem 1. Let be meromorphic bi-starlike of order in . If for being odd or if for being even, then Proof. Suppose that the function is a meromorphic bi-starlike function of order in . Then both and its inverse are starlike of order in . Therefore, by definition, there exist two functions and with positive real parts in of the form so that Note that, according to the Caratheodory lemma (see Duren [8, page 41]), and for . On the other hand, comparing the corresponding coefficients of the functions and , we obtain Now, from and , upon noting that there are just two choices of and or and , we obtain Since for the second system of equation we can write Therefore, for odd , we obtain the system of equations Hence Applying the Caratheodory Lemma yields Similarly, for even with , we obtain Hence which upon applying the Caratheodory Lemma, we obtain Relaxing the coefficient restrictions imposed on Theorem 1, we can prove the following. Theorem 2. Let be meromorphic bi-starlike of order in . Then,. Proof. Comparing the corresponding coefficients of we obtain Similarly, comparing the corresponding coefficients of we obtain Adding and , we obtain which, upon applying the Caratheodory Lemma, yields On the other hand, subtracting from , we obtain which upon, applying the Caratheodory Lemma, yields Remark 3. For the estimates of the first two coefficients of certain subclasses of analytic and bi-univalent functions, also see recent publications by Srivastava et al. [11] and Frasin and Aouf [12
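For reference, the Carathéodory lemma invoked repeatedly in the proofs above (Duren [8, page 41]) is the standard bound: if $p(z) = 1 + \sum_{n \ge 1} c_n z^n$ is analytic with $\operatorname{Re}\, p(z) > 0$ in the unit disk, then $|c_n| \le 2$ for every $n \ge 1$; this is the estimate applied to the coefficients of the functions with positive real part in Theorems 1 and 2.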
{"url":"http://www.hindawi.com/journals/ijmms/2013/498159/","timestamp":"2014-04-19T18:29:09Z","content_type":null,"content_length":"299978","record_id":"<urn:uuid:3fcd3a65-d959-433d-8959-37087824148e>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
Transfer entropy—a model-free measure of effective connectivity for the neurosciences

Journal of Computational Neuroscience
J Comput Neurosci. Feb 2011; 30(1): 45–67.

Understanding causal relationships, or effective connectivity, between parts of the brain is of utmost importance because a large part of the brain's activity is thought to be internally generated and, hence, quantifying stimulus response relationships alone does not fully describe brain dynamics. Past efforts to determine effective connectivity mostly relied on model based approaches such as Granger causality or dynamic causal modeling. Transfer entropy (TE) is an alternative measure of effective connectivity based on information theory. TE does not require a model of the interaction and is inherently non-linear. We investigated the applicability of TE as a metric in a test for effective connectivity to electrophysiological data based on simulations and magnetoencephalography (MEG) recordings in a simple motor task. In particular, we demonstrate that TE improved the detectability of effective connectivity for non-linear interactions, and for sensor level MEG signals where linear methods are hampered by signal cross-talk due to volume conduction.

Keywords: Information theory, Effective connectivity, Causality, Information transfer, Electroencephalography, Magnetoencephalography

Science is about making predictions. To this aim scientists construct a theory of causal relationships between two observations. In neuroscience, one of the observations can often be manipulated at will, i.e. a stimulus in an experiment, and the second observation is measured, i.e. neuronal activity. If we can correctly predict the behavior of the second observation we have identified a causal relationship between stimulus and response. However, identifying causal relationships between stimuli and responses covers only part of neuronal dynamics—a large part of the brain's activity is internally generated and contributes to the response variability that is observed despite constant stimuli (Arieli et al. 1996). For the case of internally generated dynamics it is rather difficult to infer a physical causality because a deliberate manipulation of this aspect of the system is extremely difficult. Nevertheless, we can try to make predictions based on the concept of causality as it was introduced by Wiener (1956). In Wiener's definition an improvement of the prediction of the future of a time series X by the incorporation of information from the past of a second time series Y is seen as an indication of a causal interaction from Y to X. Such causal interactions across brain structures are also called 'effective connectivity' (Friston 1994) and they are thought to reveal the information flow associated to neuronal processing much more precisely than functional connectivity, which only reflects the statistical covariation of signals as typically revealed by cross-correlograms or coherency measures. Therefore, we must identify causal relationships between parts of the brain, be they single cells, cortical columns, or brain areas.

Various measures of causal relationships, or effective connectivity, exist. They can be divided into two large classes: those that quantify effective connectivity based on the abstract concept of information of random variables (e.g.
Schreiber 2000), and those based on specific models of the processes generating the data. Methods in the latter class are most widely used to study effective connectivity in neuroscience, with Granger causality (GC, Granger 1969) and dynamic causal modeling (DCM, Friston et al. 2003) arguably being most popular. In the next two paragraphs we give a short overview over the data generation models in GC and DCM and their specific consequences so that the reader can appreciate the fundamental differences between these model based approaches and the information theoretic approach presented below: Standard implementations of GC use a linear stochastic model for the intrinsic dynamics of the signal and a linear interaction.^1 Therefore, GC is only well applicable when three prerequisites are met: (a) The interaction between the two units under observation has to be well approximated by a linear description, (b) the data have to have relatively low noise levels (see e.g. Nalatore et al. 2007), and (c) cross-talk between the measurements of the two signals of interest has to be low (Nolte et al. 2008). Frequency domain variants of GC such as the partial directed coherence or the directed transfer function fall in the same category (Pereda et al. 2005). DCM assumes a bilinear state space model (BSSM). Thus, DCM covers non-linear interactions—at least partially. DCM requires knowledge about the input to the system, because this input is modeled as modulating the interactions between the parts of the system (Friston et al. 2003). DCM also requires a certain amount of a priori knowledge about the network of connectivities under investigation, because ultimately DCM compares the evidence for several competing a priori models with respect to the observed data. This a priori knowledge on the input to the system and on the potential connectivity may not always be available, e.g. in studies of the resting-state. Therefore, DCM may not be optimal for exploratory analyses. Based on the merits and problems of the methods described in the last paragraph we may formulate four requirements that a new measure of effective connectivity must meet to be a useful addition to already established methods: 1. It should not require the a priori definition of the type of interaction, so that it is useful as a tool for exploratory investigations. 2. It should be able to detect frequently observed types of purely non-linear interactions. This is because strong non-linearities are observed across all levels of brain function, from the all-or none mechanism of action potential generation in neurons to non-linear psychometric functions, such as the power-law relationship in Weber’s law or the inverted-U relationship between arousal levels and response speeds described in the Yerkes-Dodson law (Yerkes and Dodson 3. It should detect effective connectivity even if there there is a wide distribution of interaction delays between the two signals, because signaling between brain areas may involve multiple pathways or transmission over various axons that connect two areas and that vary in their conduction delays (Swadlow and Waxman ; Swadlow et al. 4. It should be robust against linear cross-talk between signals. This is important for the analysis of data recorded with electro- or magnetoencephalography, that provide a large part of the available electrophysiological data today. The fact that a potential new method should be as model free as possible naturally leads to the application of information theoretic techniques. 
Information theory (IT) sets a powerful framework for the quantification of information and communication (Shannon 1948). It is not surprising then that information theory also provides an ideal basis to precisely formulate causal hypotheses. In the next paragraph, we present the connection between the quantification of information and communication and Wiener’s definition of causal interactions (Wiener 1956) in more detail because of its importance for the justification of using IT methods in this work. In the context of information theory, the key measure of information of a discrete^2 random variable is its Shannon entropy (Shannon 1948; Reza 1994). This entropy quantifies the reduction of uncertainty obtained when one actually measures the value of the variable. On the other hand, Wiener’s definition of causal dependencies rests on an increase of prediction power. In particular, a signal X is said to cause a signal Y when the future of signal Y is better predicted by adding knowledge from the past and present of signal X than by using the present and past of Y alone (Wiener 1956). Therefore, if prediction enhancement can be associated to uncertainty reduction, it is expected that a causality measure would be naturally expressible in terms of information theoretic First attempts to obtain model-free measures of the relationship between two random variables were based on mutual information (MI). MI quantifies the amount of information that can be obtained about a random variable by observing another. MI is based on probability distributions and is sensitive to second and all higher order correlations. Therefore, it does not rely on any specific model of the data. However, MI says little about causal relationships, because of its lack of directional and dynamical information: First, MI is symmetric under the exchange of signals. Thus, it cannot distinguish driver and response systems. And second, standard MI captures the amount of information that is shared by two signals. In contrast, a causal dependence is related to the information being exchanged rather than shared (for instance, due to a common drive of both signals by an external, third source). To obtain an asymmetric measure, delayed mutual information, i.e. MI between one of the signals and a lagged version of another has been proposed. Delayed MI results in an asymmetric measure and contains certain dynamical structure due to the time lag incorporated. Nevertheless, delayed mutual information has been pointed out to contain certain flaws such as problems due to a common history or shared information from a common input (Schreiber 2000). A rigorous derivation of a Wiener causal measure within the information theoretic framework was published by Schreiber under the name of transfer entropy (Schreiber 2000). Assuming that the two time series of interest Xx[t] and Yy[t] can be approximated by Markov processes, Schreiber proposed as a measure of causality to compute the deviation from the following generalized Markov condition where , , while m and n are the orders (memory) of the Markov processes X and Y, respectively. Notice that Eq. 1 is fully satisfied when the transition probabilities or dynamics of Y is independent of the past of X, this is in the absence of causality from X to Y. To measure the departure from this condition (i.e. the presence of causality), Schreiber uses the expected Kullback-Leibler divergence between the two probability distributions at each side of Eq. 
to define the transfer entropy from X to Y as

$TE_{X \rightarrow Y} = \sum p\left(y_{t+1}, \mathbf{y}_t^{(n)}, \mathbf{x}_t^{(m)}\right) \log \frac{p\left(y_{t+1} \mid \mathbf{y}_t^{(n)}, \mathbf{x}_t^{(m)}\right)}{p\left(y_{t+1} \mid \mathbf{y}_t^{(n)}\right)}. \quad (2)$

Transfer entropy naturally incorporates directional and dynamical information, because it is inherently asymmetric and based on transition probabilities. Interestingly, Paluš has shown that transfer entropy can be rewritten as a conditional mutual information (Paluš 2001; Hlavackova-Schindler et al. 2007). The main convenience of such an information theoretic functional designed to detect causality is that, in principle, it does not assume any particular model for the interaction between the two systems of interest, as requested above. Thus, the sensitivity of transfer entropy to all order correlations becomes an advantage for exploratory analyses over GC or other model based approaches. This is particularly relevant when the detection of some unknown non-linear interactions is required. Here, we demonstrate that transfer entropy does indeed fulfill the above requirements 1–4 and is therefore a useful addition to the available methods for the quantification of effective connectivity, when used as a metric in a suitable permutation test for independence. We demonstrate its ability to detect purely non-linear interactions, its ability to deal with a range of interaction delays, and its robustness against linear cross-talk on simulated data. This latter point is of particular interest for non-invasive human electrophysiology using EEG or MEG. The robustness of TE against linear cross-talk in the presence of noise has, to our knowledge, not been investigated before. We test transfer entropy on a variety of simulated signals with different signal generation dynamics, including biologically plausible signals with spectra close to 1/f. We also investigate a range of linear and purely non-linear coupling mechanisms. In addition, we demonstrate that transfer entropy works without specifying a signal model, i.e. that requirement 1 is fulfilled. We extend earlier work (Hinrichs et al. 2008; Chávez et al. 2003; Gourvitch and Eggermont 2007) by explicitly demonstrating the applicability of transfer entropy for the case of linearly mixed signals. The Methods section is organized in four main parts. In the first part we describe how to compute TE numerically. As several estimation techniques could be applied for this purpose, we quickly review these possibilities and give the rationale for our particular choice of estimator. In the second part, we describe two particular problems that arise in neuroscience applications—delayed interactions, and observation of the signals of interest by measurements that only represent linear mixtures of these signals. The third part provides details on the simulation of test cases for the detection of effective connectivity via TE. The last part contains details of the MEG recordings in a self-paced finger-lifting task that we chose as a proof-of-concept for the analysis of neuroscience data.

Computation of transfer entropy

Transfer entropy for two observed time series x[t] and y[t] can be written as

$TE_{X \rightarrow Y}(u) = \sum p\left(y_{t+u}, \mathbf{y}_t^{d_y}, \mathbf{x}_t^{d_x}\right) \log \frac{p\left(y_{t+u} \mid \mathbf{y}_t^{d_y}, \mathbf{x}_t^{d_x}\right)}{p\left(y_{t+u} \mid \mathbf{y}_t^{d_y}\right)}, \quad (3)$

where t is a discrete valued time-index and u denotes the prediction time, a discrete valued time-interval. $\mathbf{x}_t^{d_x}$ and $\mathbf{y}_t^{d_y}$ are $d_x$- and $d_y$-dimensional delay vectors, as detailed below. An estimator of the transfer entropy can be obtained via different approaches (Hlavackova-Schindler et al. 2007). As with other information-theoretic functionals, any estimate shows biases and statistical errors which depend on the method used and the characteristics of the data (Hlavackova-Schindler et al. 2007; Kraskov et al. 2004).
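To make Eq. (3) concrete before discussing estimators: for discrete (or coarsely binned) signals the probabilities can be estimated by simple plug-in counting. The following minimal sketch is our own illustration, not the estimator used in this work; it assumes integer-valued series and an embedding dimension of one for both signals.

import numpy as np
from collections import Counter

def plugin_te(x, y, u=1):
    """Plug-in transfer entropy TE(X -> Y), in bits, for integer series.

    Uses embedding dimension 1 for both signals, i.e. it estimates
    I(y[t+u]; x[t] | y[t]) from empirical state frequencies.
    """
    triples = Counter(zip(y[u:], y[:-u], x[:-u]))  # (y_future, y_past, x_past)
    pairs_yy = Counter(zip(y[u:], y[:-u]))         # (y_future, y_past)
    pairs_yx = Counter(zip(y[:-u], x[:-u]))        # (y_past, x_past)
    singles = Counter(y[:-u])                      # (y_past,)
    n = len(y) - u
    te = 0.0
    for (yf, yp, xp), c in triples.items():
        p_joint = c / n
        num = c / pairs_yx[(yp, xp)]               # p(y_f | y_p, x_p)
        den = pairs_yy[(yf, yp)] / singles[yp]     # p(y_f | y_p)
        te += p_joint * np.log2(num / den)
    return te

# Example: y copies x with one step delay, so TE(X -> Y) should be ~1 bit
# while TE(Y -> X) should be ~0.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10000)
y = np.roll(x, 1)
print(plugin_te(x, y, u=1), plugin_te(y, x, u=1))

For continuous data such plug-in counting depends strongly on the binning, which motivates the kernel and nearest-neighbor estimators discussed below.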
In some applications the magnitude of such estimation errors is so large that it prevents any meaningful interpretation of the measure. For our purposes, it is crucial to use an estimator that is as accurate as possible under the specific and severe constraints that most neuronal data sets present, and to complement it with an appropriate statistical test. In particular, a quantifier of transfer entropy apt for neuroscience applications should cope with at least three difficulties. First, the estimator should be robust to moderate levels of noise. Second, the estimator should rely only on a very limited number of data samples. This point is particularly restrictive since relevant neuronal dynamics typically unfolds over just a few hundred milliseconds. And third, due to the need to reconstruct the state space from the observed signals, the estimator should be reliable when dealing with high-dimensional spaces. Under such restrictive conditions, obtaining a highly accurate estimator of TE is probably impossible without strong modelling assumptions. Unfortunately, strong modelling assumptions require specific information which is typically not available for neuroscience data. Nevertheless, some very general and biophysically motivated assumptions are available that enable the use of particular kernel-based estimators (Victor 2002). Here, we build on this framework to derive a data-efficient estimator, detailed below. Even using this improved estimator, inaccuracies in estimation are unavoidable, especially under the restrictive conditions noted above, and it is necessary to evaluate the statistical significance of the TE measures; i.e. we use TE as a statistic measuring the dependency of two time series and test against the null hypothesis of independent time series. Since no parametric distribution of errors is known for TE, one needs suitable surrogate data to test the null hypothesis of independent time series ('absence of causality'). Suitable in this context means that the surrogate data should be prepared such that the causal dependency of interest is destroyed when constructing the surrogates, but trivial dependencies of no interest are preserved. It is the particular combination of a data-efficient estimator and a suitable statistical test that forms the core part of this study and its contribution to the field of effective connectivity analysis. In the next subsections we detail both how to obtain a data-efficient estimate of Eq. 3 from the raw signals and a statistical significance analysis based on surrogate data.

Reconstructing the state space

Experimental recordings can only access a limited number of variables which are more or less related to the full state of the system of interest. However, sensible causality hypotheses are formulated in terms of the underlying systems rather than of the signals actually being measured. To partially overcome this problem, several techniques are available to approximately reconstruct the full state space of a dynamical system from a single series of observations (Kantz and Schreiber 1997). In this work, we use a Takens delay embedding (Takens 1981) to map our scalar time series into trajectories in a state space of possibly high dimension. The mapping uses delay-coordinates to create a set of vectors or points in a higher dimensional space according to

$\mathbf{x}_t^{d} = \left(x(t),\, x(t-\tau),\, x(t-2\tau),\, \ldots,\, x(t-(d-1)\tau)\right). \quad (4)$

This procedure depends on two parameters, the dimension d and the delay τ of the embedding.
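A minimal sketch of this embedding, together with a simple autocorrelation-based heuristic for τ, is given below. This is our illustration of the recipes discussed next, not the toolbox implementation; the function names are ours.

import numpy as np

def delay_embed(x, d, tau):
    """Takens delay vectors (x[t], x[t-tau], ..., x[t-(d-1)*tau]).

    Output shape (n_points, d); row i corresponds to time index
    t = (d-1)*tau + i, i.e. rows are aligned with the end of the series.
    """
    n = len(x) - (d - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (d, tau)")
    return np.column_stack([x[(d - 1 - j) * tau : (d - 1 - j) * tau + n]
                            for j in range(d)])

def act(x):
    """Autocorrelation decay time: first lag at which the autocorrelation
    of the mean-removed signal falls below 1/e (O(N^2), fine for a sketch)."""
    x = x - np.mean(x)
    c = np.correlate(x, x, mode="full")[len(x) - 1:]
    c = c / c[0]
    below = np.where(c < 1.0 / np.e)[0]
    return int(below[0]) if below.size else len(x)

For example, delay_embed(x, d=7, tau=act(x)) produces the state-space points used by the estimators below.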
While there is an extensive literature on how to choose such parameters, the different methods proposed are far from reaching any consensus (Kantz and Schreiber 1997). A popular option is to take the embedding delay τ as the auto-correlation decay time of the signal (e.g. the lag at which the autocorrelation drops below 1/e) or as the first minimum (if any) of the auto-information. To determine the embedding dimension, the Cao criterion offers an algorithm based on false-neighbor computation (Cao 1997). However, alternatives for non-deterministic time series are available (Ragwitz and Kantz 2002). The parameters d and τ considerably affect the outcome of the TE estimates. For instance, a low value of d can be insufficient to unfold the state space of a system and consequently degrade the meaning of any TE measure, as will be demonstrated below. On the other hand, a too large dimensionality makes the estimators less accurate for a given data length and significantly enlarges the computing time. Consequently, while we have used the recipes described above to orient our search for good embedding parameters, we have systematically scanned d and τ to optimize the performance of the TE measures.

Estimating the transfer entropy

After having reconstructed the state spaces of any pair of time series, we are now in a position to estimate the transfer entropy between their underlying systems. We proceed by first rewriting Eq. (3) as a sum of four Shannon entropies according to

$TE_{X \rightarrow Y} = H\left(\mathbf{y}_t^{d_y}, \mathbf{x}_t^{d_x}\right) - H\left(y_{t+u}, \mathbf{y}_t^{d_y}, \mathbf{x}_t^{d_x}\right) + H\left(y_{t+u}, \mathbf{y}_t^{d_y}\right) - H\left(\mathbf{y}_t^{d_y}\right). \quad (5)$

Thus, the problem amounts to computing the different joint and marginal probability distributions implicated in Eq. (5). In principle, there are many ways to estimate such probabilities, and their performance strongly depends on the characteristics of the data to be analyzed. See Hlavackova-Schindler et al. (2007) for a detailed review of techniques. For discrete processes, the probabilities involved can easily be determined by the frequencies of visitation of the different states. For continuous processes, the case of main interest in this study, a reliable estimation of the probability densities is much more delicate, since a continuous density has to be approximated from a finite number of samples. Moreover, the solution of coarse-graining a continuous signal into discrete states is hard to interpret unless the measure converges when reducing the coarsening scale. In the following, we give the rationale for our choice of the estimator and describe its functioning. A possible strategy for the design of an estimator relies on finding the parameters that best fit the sample probability densities to some known distribution. While computationally straightforward, such an approach amounts to assuming a certain model for the probability distribution, which without further constraints is difficult to justify. Among the nonparametric approaches, fixed and adaptive histogram or partition methods are very popular and widely used. However, other nonparametric techniques such as kernel or nearest-neighbor estimators have been shown to be more data efficient and accurate while avoiding certain arbitrariness stemming from binning (Victor 2002; Kaiser and Schreiber 2002). In this work we shall use an estimator of the nearest-neighbor class. Nearest-neighbor techniques estimate smooth probability densities from the distribution of distances of each sample point to its k-th nearest neighbor. Consequently, this procedure results in an adaptive resolution, since the distance scale used changes according to the underlying density.
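For orientation, a classical estimator of this type approximates the differential entropy of N samples in d dimensions from the k-th neighbor distances. In the form given by Kraskov et al. (2004) it reads (ψ is the digamma function, c_d the volume of the d-dimensional unit ball, and ε(i) twice the distance from sample i to its k-th nearest neighbor):

$\hat{H}(X) = -\psi(k) + \psi(N) + \log c_d + \frac{d}{N} \sum_{i=1}^{N} \log \varepsilon(i).$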
The Kozachenko-Leonenko (KL) estimator is an example of such a class of estimators and a standard algorithm to compute the Shannon entropy (Kozachenko and Leonenko 1987). Nevertheless, a naive approach of estimating TE via computing each term of Eq. 5 with a KL estimator is inadequate. To see why, it is important to notice that the probability densities involved in computing TE or MI can be of very different dimensionality (from $d_y$ up to $1 + d_x + d_y$ in the case of TE). For a fixed k, this means that different distance scales are effectively used for spaces of different dimension. Consequently, the biases of each Shannon entropy arising from the non-uniformity of the distribution will depend on the dimensionality of the space, and therefore will not cancel each other. To overcome such problems in mutual information estimates, Kraskov, Stögbauer, and Grassberger have proposed a new approach (Kraskov et al. 2004). The key idea is to use a fixed mass (k) only in the highest dimensional space and to project the distance scale set by this mass into the lower dimensional spaces. Thus, the procedure designed for mutual information suggests to first determine the distances to the k-th nearest neighbors in the joint space. Then, an estimator of MI can be obtained by counting the number of neighbors that fall within such distances for each point in the marginal spaces. The estimator of MI based on this method displays many good statistical properties, it greatly reduces the bias obtained with individual KL estimates, and it seems to become an exact estimator in the case of independent variables. For these reasons, in this work we have followed a similar scheme to provide a data-efficient sample estimate of transfer entropy (Gomez-Herrero et al. 2010). Thus, we have obtained an estimator that permits us, at least partially, to tackle some of the main difficulties of neuronal data sets mentioned at the beginning of the Methods section. In summary, since the estimator is more data efficient and accurate than other techniques (especially those based on binning), it allows the analysis of shorter data sets possibly contaminated by small levels of noise. At the same time, the method is especially geared to handle the biases of the high dimensional spaces naturally occurring after the embedding of raw signals. As to computing time, this class of methods spends most of its resources in finding neighbors. It is then highly advisable to implement an efficient search algorithm which is optimal for the length and dimensionality of the data to be analyzed (Cormen et al. 2001). For the current investigation, the algorithm was implemented with the help of OpenTSTool (Version 1.2 on Linux 64 bit; Merkwirth et al. 2009). The full set of methods applied here is available as an open source MATLAB toolbox (Lindner et al. 2009). In practice, it is important to consider that this kernel estimation method carries two parameters. One is the mass of the nearest-neighbor search (k), which controls the level of bias and statistical error of the estimate. For the remainder of this manuscript this parameter was set to k = 4 (Kraskov et al. 2004), unless stated otherwise. The second parameter refers to the Theiler correction, which aims to exclude autocorrelation effects from the density estimation. It consists of discarding from the nearest-neighbor search those samples which are closer in time to a reference point than a given lapse (T). Here, we chose T equal to the autocorrelation decay time, unless stated otherwise.
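The scheme just described can be sketched as follows. This is our simplified illustration of such a nearest-neighbor estimator, written in the conditional-mutual-information form of Frenzel and Pompe rather than as the exact toolbox implementation; for brevity it uses d_x = d_y = d and omits the Theiler correction.

import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def te_knn(x, y, d=3, tau=1, u=1, k=4):
    """Nearest-neighbor transfer entropy TE(X -> Y), Eq. (3), in nats.

    Kraskov-style fixed-mass estimator for the conditional mutual
    information I(y[t+u]; X_t | Y_t), using the maximum norm.
    """
    def embed(s):
        n = len(s) - (d - 1) * tau
        return np.column_stack([s[(d - 1 - j) * tau : (d - 1 - j) * tau + n]
                                for j in range(d)])
    X, Y = embed(x), embed(y)
    # Align the embedded pasts with the future sample y[t+u].
    yf = y[(d - 1) * tau + u:]
    X, Y = X[:len(yf)], Y[:len(yf)]
    joint = np.column_stack([yf, Y, X])        # (y_future, Y_past, X_past)
    # Distance to the k-th neighbor in the joint space (self excluded).
    eps = cKDTree(joint).query(joint, k=k + 1, p=np.inf)[0][:, -1]
    def count(space):
        # Neighbors strictly within eps in the projected space.
        tree = cKDTree(space)
        return np.array([len(tree.query_ball_point(pt, np.nextafter(e, 0),
                                                   p=np.inf)) - 1
                         for pt, e in zip(space, eps)])
    n_yz = count(np.column_stack([yf, Y]))     # (y_future, Y_past)
    n_xz = count(np.column_stack([X, Y]))      # (X_past, Y_past)
    n_z = count(Y)                             # (Y_past,)
    return digamma(k) + np.mean(digamma(n_z + 1) - digamma(n_yz + 1)
                                - digamma(n_xz + 1))

For independent signals the estimate fluctuates around zero rather than vanishing exactly, which is one reason why the statistical testing described next is indispensable.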
In general, this means that even though TE does not assume any particular model, its numerical estimation relies on at least five different parameters: the embedding delay (τ) and dimension (d), the mass of the nearest-neighbor search (k), the Theiler correction window (T), and the prediction time (u). The latter accounts for non-instantaneous interactions. Specifically, it reflects that in that case an increment of predictability of one signal thanks to the incorporation of the past of others should only occur at a certain latency or prediction time. Since axonal conduction delays among remote areas can amount to tens of milliseconds (Swadlow and Waxman 1975; Swadlow 1994), incorporating this parameter is important for a sensible causality analysis of neuronal data sets, as we shall see below.

Significance analysis

To test the statistical significance of an obtained TE value we used surrogate data. In general, generating surrogate data with the same statistical properties as the original data but selectively destroying any causal interaction is difficult. However, when the data set has a trial structure it is possible to reason that shuffling trials generates suitable surrogate data sets for the absence-of-causality hypothesis, provided stationarity and trial independence are assured. On these data we have then used a permutation test (~19,000 permutations) on the unshuffled and shuffled trials to obtain a p-value. P-values below 0.05 were considered significant. Where necessary, a correction of this threshold for multiple comparisons was applied using the false discovery rate (FDR, q = 0.05; Genovese et al. 2002).
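A minimal sketch of this trial-shuffling permutation test follows. This is our illustration; te stands for any per-trial TE estimator, such as the nearest-neighbor sketch above, and trials are assumed stationary and mutually independent.

import numpy as np

def permutation_test(trials_x, trials_y, te, n_perm=1000, seed=0):
    """Permutation test for TE(X -> Y) with trial-shuffled surrogates.

    trials_x, trials_y: arrays of shape (n_trials, n_samples).
    te: callable te(x, y) returning a TE estimate for one trial pair.
    Shuffling the trial assignment of X destroys any X -> Y interaction
    but preserves the within-trial dynamics of both signals.
    """
    rng = np.random.default_rng(seed)
    n_trials = len(trials_x)
    observed = np.mean([te(x, y) for x, y in zip(trials_x, trials_y)])
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(n_trials)
        null[i] = np.mean([te(trials_x[j], trials_y[m])
                           for j, m in zip(perm, range(n_trials))])
    # One-sided p-value: fraction of surrogates at least as large.
    return (1 + np.sum(null >= observed)) / (1 + n_perm)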
Particular problems in neuroscience data: instantaneous mixing and delayed interactions

Neuroscience data have specific characteristics that challenge a simple analysis of effective connectivity. First, the interaction may involve large time delays of unknown duration and, second, the data generated by the original processes may not be available but only measurements that represent linear mixtures of the original data—as is the case in EEG and MEG. In this section we describe a number of additional tests that may help to interpret the results obtained by computing TE values from these types of neuroscience data.

Tests for instantaneous linear mixing and for multiple noisy observations of a single source

Instantaneous, linear mixing of the original signals by the measurement process is always present in MEG and EEG data. This may result in two problems: First, linear mixing may reduce signal asymmetry and thus make it more difficult to detect effective connectivity of the underlying sources. This problem is mainly one of reduced sensitivity of the method and may be dealt with, e.g., by increasing the amount of data. A second problem arises when a single source signal with an internal memory structure is observed multiple times on different channels with individual channel noise. As demonstrated before (Nolte et al. 2008), this latter case can result in false positive detection of effective connectivity for methods based on Wiener's definition of causality (Wiener 1956). This problem is more severe, because it reduces the specificity of the method. As an example of this second problem, think of an AR process of order m, s(t), that is mixed with a mixing parameter ε onto two sensor signals X′, Y′ in the following way:

$X'(t) = s(t), \qquad Y'(t) = (1-\epsilon)\, s(t) + \epsilon\, \eta_Y(t),$

where, using the AR dynamics $s(t) = \sum_{i=1}^{m} \alpha_i\, s(t-i) + \eta_s(t)$, the dynamics for Y′ can be rewritten as

$Y'(t) = (1-\epsilon) \sum_{i=1}^{m} \alpha_i\, X'(t-i) + (1-\epsilon)\, \eta_s(t) + \epsilon\, \eta_Y(t).$

In this case TE will identify a causal relationship between X′ and Y′, as it detects the relationship between the past of X′ and that part of the present of X′ that is contained in Y′, up to the innovation term $(1-\epsilon)\,\eta_s$. Therefore, we implemented the following additional test ('time-shift test') to avoid false positive reports for the case of instantaneous, linear mixing: We shifted the time series for X′ by one sample into the past, i.e. X′′(t) = X′(t + 1), such that a potential instantaneous mixing becomes lagged and thereby causal in Wiener's sense. For instantaneous mixing processes, TE values increase for the interaction from the shifted time series X′′(t) to Y′ compared to the interaction from the original time series X′(t) to Y′. Therefore, an increase of this kind may indicate the presence of instantaneous mixing. The actual shift test implements the null hypothesis of instantaneous mixing and the alternative hypothesis of no instantaneous mixing in the following way:

$H_0: TE_{X'' \rightarrow Y'} \geq TE_{X' \rightarrow Y'}, \qquad H_1: TE_{X'' \rightarrow Y'} < TE_{X' \rightarrow Y'}. \quad (10)$

If the null hypothesis of instantaneous mixing is not discarded by this test, i.e. if TE values for the original data are not significantly larger than those for the shifted data, then we have to discard the hypothesis of a causal interaction from X′ to Y′. Therefore, when data potentially contained instantaneous mixing, we tested for the presence of instantaneous mixing before proceeding to test the hypothesis of effective connectivity. More specifically, this test was applied to the instantaneously mixed simulation data (Figs. 4, 5, 6) and the MEG data (Fig. 8). In general, we suggest using this test whenever the data in question may have been obtained via a measurement function that contained linear, instantaneous mixing. A less conservative approach to the same problem would be to discard data from TE analysis only when we have significant evidence for the presence of instantaneous mixing. In this case the hypotheses would be:

$H_0: TE_{X'' \rightarrow Y'} \leq TE_{X' \rightarrow Y'}, \qquad H_1: TE_{X'' \rightarrow Y'} > TE_{X' \rightarrow Y'}.$

In this case we would proceed with analysing the data if we did not have to reject H0. For the remainder of this manuscript, however, we stick to testing the more conservative null hypothesis presented in Eq. (10).

Fig. 4 Simulation results for linearly mixed measurements (X_ε, Y_ε) of two unidirectionally and linearly coupled underlying source signals (X → Y). (a) Mixing model and original autoregressive source time courses X, Y. (b–d) Effective ...

Fig. 5 Simulation results for linearly mixed measurements (X_ε, Y_ε) of two unidirectionally coupled underlying source signals (X → Y) coupled via a threshold function. (a) Mixing model and original autoregressive source time courses ...

Fig. 6 False positive rates for the detection of effective connectivity when observing one source via two EEG or MEG sensors. (a) Signal generation by an autoregressive order ten process X(t) and simultaneous observation of this source signal on two sensor signals ...

Fig. 8 Differences in effective connectivity (EC) between lifting of the right (RFL) and left index finger (LFL) for subject 1 (left) and subject 2 (right). The investigated frequency band was 5–29 Hz encompassing the μ and β rhythms, ...
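The time-shift test just described can be sketched as follows. This is our illustration; te(x, y) is a per-trial TE estimator as before, the paired sign-flip permutation is one possible implementation of the one-sided comparison, and the one-sample wrap-around introduced by np.roll at the trial edge is ignored for brevity.

import numpy as np

def shift_test(trials_x, trials_y, te, n_perm=1000, seed=0):
    """Conservative time-shift test against instantaneous mixing.

    Compares TE(X -> Y) for the original trials with TE(X'' -> Y),
    where X''(t) = X(t + 1) is X shifted one sample into the past.
    H0 (instantaneous mixing) is rejected only if the original TE is
    significantly larger than the shifted one.
    """
    orig = np.array([te(x, y) for x, y in zip(trials_x, trials_y)])
    shif = np.array([te(np.roll(x, -1), y)
                     for x, y in zip(trials_x, trials_y)])
    diff = orig - shif
    rng = np.random.default_rng(seed)
    # Random sign flips implement the paired one-sided permutation test.
    null = np.array([np.mean(diff * rng.choice([-1, 1], len(diff)))
                     for _ in range(n_perm)])
    p = (1 + np.sum(null >= np.mean(diff))) / (1 + n_perm)
    return p  # small p: reject mixing, causal interpretation allowed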
Delayed interactions, Wiener's definition of causality, and choice of embedding parameters

This paragraph introduces a difficulty related to Wiener's definition of causality. As described above, non-zero TE values can be directly translated into improved predictions in Wiener's sense by interpreting the terms in Eq. 2 as transition probabilities, i.e. as information that is useful for prediction. TE quantifies the gain in our knowledge about the transition probabilities in one system Y that we obtain if we condition these probabilities on the past values of another system X. It is obvious that this gain, i.e. the value of TE, can be erroneously high if the transition probabilities for system Y alone are not evaluated correctly. We now describe a case where this error is particularly likely to occur: Consider two processes with lagged interactions and long autocorrelation times. We assume that system X drives Y with an interaction delay δ (Fig. 1). A problem arises if we test for a causal interaction from Y to X, i.e. the reverse direction compared to the actual coupling, and do not take enough care to fully capture the dynamics of X via embedding. If, for example, the embedding dimension d or the embedding delay τ was chosen too small, then some information contained in the past of X is not used although it would improve (auto-)prediction. This information is actually transferred to Y via the delayed interaction from X to Y. It is available in Y with a delay δ, and therefore at time-points where data from Y is used for the prediction of X. As stated before, this information is useful for the prediction of X. Thus, inclusion of Y will improve prediction. Hence, TE values will be non-zero and we will wrongly conclude that process Y drives process X.

Fig. 1 Illustration of false positive effective connectivity due to insufficient embedding for delayed interactions. Source signal X drives target signal Y with a delay δ. The internal memory of process X is reflected in the slowly decaying autocorrelation ...

Simulated data

We used simulated data to test the ability of TE to uncover causal relations under different situations relevant to neuroscience applications. In particular, we always considered two interacting systems and simulated different internal dynamics (autoregressive and 1/f characteristics), effective connectivity (linear, threshold and quadratic coupling), and interaction delays (a single delay and a distribution of delays). In addition, we simulated linear instantaneous mixing processes during measurement, because of their relevance for EEG and MEG.

Internal signal dynamics

We have simulated two types of complex internal signal dynamics. In the first case, an autoregressive process of order 10, AR(10), is generated for each system. The dynamics is then given by

$x(t+1) = \sum_{i=1}^{10} \alpha_i\, x(t+1-i) + \sigma\, \eta(t),$

where the coefficients α_i are drawn from a normalized Gaussian distribution, the innovation term η represents a Gaussian white noise source, and σ controls the relative strength of the noise contribution. Notice that we use here the typical notation in dynamical systems, where the innovation term η(t) is delayed one unit with respect to the output x(t + 1). As a second case, we have considered signals with a 1/f^θ profile in their power spectra. To produce such signals we have followed the approach in Granger (1980). Accordingly, the 1/f^θ time series are generated as the aggregation of numerous AR(1) processes with an appropriate distribution of coefficients. Mathematically, each 1/f^θ signal is then given by

$x(t) = \sum_{i=1}^{N} x_i(t), \qquad x_i(t) = \alpha_i\, x_i(t-1) + \eta_i(t),$

where we aggregate over N such AR(1) processes, with the coefficients α_i randomly chosen according to an appropriate probability density function (Granger 1980).
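The two signal classes can be generated as in the following sketch. This is our illustration: stability of the random AR coefficients is enforced crudely by rescaling, and the density of the AR(1) coefficients in one_over_f (uniform in α²) is purely an assumption for illustration, since the exact density used in the paper is not reproduced here.

import numpy as np

def ar_process(n, order=10, sigma=1.0, rng=None):
    """AR(order) series with Gaussian coefficients, rescaled for stability."""
    if rng is None:
        rng = np.random.default_rng()
    a = rng.standard_normal(order)
    # Shrink the coefficients until the companion matrix is stable.
    comp = np.zeros((order, order))
    comp[1:, :-1] = np.eye(order - 1)
    while True:
        comp[0] = a
        if np.max(np.abs(np.linalg.eigvals(comp))) < 1:
            break
        a *= 0.95
    x = np.zeros(n + order)
    for t in range(order, n + order):
        x[t] = a @ x[t - order:t][::-1] + sigma * rng.standard_normal()
    return x[order:]

def one_over_f(n, n_ar=200, rng=None):
    """Approximate 1/f-type series as an aggregate of AR(1) processes
    (Granger 1980). The density of the coefficients controls the
    spectral exponent; alpha**2 uniform in (0, 1) is an assumption."""
    if rng is None:
        rng = np.random.default_rng()
    alpha = np.sqrt(rng.uniform(0.0, 1.0, n_ar))
    s = np.zeros(n_ar)
    out = np.empty(n)
    for t in range(n):
        s = alpha * s + rng.standard_normal(n_ar)
        out[t] = s.sum()
    return out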
Types of interaction

To simulate a causal interaction between two systems we added to the internal dynamics of one process (Y) a term related to the past dynamics of the other (X). Three types of interaction or effective connectivity were considered: linear, quadratic, and threshold. In the linear case, the interaction is proportional to the amount of signal at X. The last two cases represent strong non-linearities which challenge detection approaches based on linear or parametric methods. The effective connectivity mediated by the threshold function is of special relevance in neuroscience applications due to the approximate all-or-none character of neuronal spike generation and transmission. Mathematically, the update of y(t) is then modeled by the addition of an interaction term I such that the full dynamics is described as

$y(t) = D\left(y_{\text{past}}\right) + I(t),$

with

$I_{\text{lin}}(t) = \gamma_{\text{lin}}\, x(t-\delta), \qquad I_{\text{quad}}(t) = \gamma_{\text{quad}}\, x(t-\delta)^2, \qquad I_{\text{thresh}}(t) = \gamma_{\text{thresh}}\, \frac{1}{1 + e^{-b_2 \left(x(t-\delta) - b_1\right)}},$

where D(.) represents the internal dynamics (AR(10) or 1/f) of y and y_past represents past values of y. In the last case, the threshold function is implemented through a sigmoidal with parameters b_1 and b_2, which control the threshold level and its slope, respectively. Here, b_1 was set to 0 and b_2 was set to 50. In all cases, δ represents a delay which typically arises from the finite speed of propagation of any influence between physically separated systems. Note that since we deal with discrete time models (maps), in our modeling δ takes only positive integer values. In case two systems interact via multiple pathways, it is possible that different latencies arise in their communication. For example, it is known that the different characteristics of the axons joining two brain areas typically lead to a distribution of axonal conduction delays (Swadlow et al. 1978; Swadlow 1985). To account for that scenario we have also simulated the case where δ, instead of taking a single value, follows a distribution. Accordingly, for each type of interaction we have considered the case where the interaction term is a sum over delayed contributions, e.g. for the linear coupling

$I_{\text{lin}}(t) = \sum_{\delta'} \gamma_{\text{lin}}\, x(t-\delta'),$

and analogously for the other couplings, where the sums extend over a certain domain of positive integer values. In the Results section we consider the case in which δ′ takes values on a uniform distribution of width 6 centered around a given delay. The coupling constants γ_lin, γ_quad, γ_thresh were always chosen such that the variance of the interaction term was comparable to the variance of the y(t) that would be obtained in the absence of any interaction.
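The three couplings, including the multi-delay variant, can be written compactly as below. This is our sketch; the function names are ours, and γ is assumed to be pre-scaled so that the interaction variance matches that of the target, as described above.

import numpy as np

def interaction(x_past, kind, gamma, b1=0.0, b2=50.0):
    """Interaction term added to y(t); x_past holds x(t - delta) for one
    or several delays delta (multi-delay couplings are summed)."""
    x_past = np.atleast_1d(x_past)
    if kind == "linear":
        return gamma * np.sum(x_past)
    if kind == "quadratic":
        return gamma * np.sum(x_past ** 2)
    if kind == "threshold":
        # Sigmoid with threshold level b1 and slope b2.
        return gamma * np.sum(1.0 / (1.0 + np.exp(-b2 * (x_past - b1))))
    raise ValueError(kind)

def couple(x, dynamics, kind, gamma, delays=(20,)):
    """Generate y driven by x: y(t) = D(y_past) + I(x(t - delta))."""
    n = len(x)
    y = np.zeros(n)
    for t in range(max(delays), n):
        x_del = np.array([x[t - d] for d in delays])
        y[t] = dynamics(y[:t]) + interaction(x_del, kind, gamma)
    return y

For example, with the coefficients a of an AR(10) process, dynamics could be lambda past: a @ past[-10:][::-1] + rng.standard_normal(), assuming max(delays) is at least the AR order.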
Linear mixing

Linear instantaneous mixing is present in human non-invasive electrophysiological measurements such as EEG or MEG and has been shown to be problematic for GC (Nolte et al. 2008). The problem we encounter for linearly and instantaneously mixed signals is twofold: On the one hand, instantaneous mixing from coupled source signals onto sensor signals by the measurement process degrades signal asymmetry (Tognoli and Scott Kelso 2009); it will therefore be harder to detect effective connectivity. On the other hand—as shown in Nolte et al. (2008)—the instantaneous presence of a single source signal in two measurements of different signal to noise ratio may erroneously be interpreted as effective connectivity. To test the influence of linear instantaneous mixing we created two test cases:

1. The first test case consisted of unidirectionally coupled signal pairs generated from coupled AR(10) processes as described above and then transformed into two linear instantaneous mixtures in the following way:

$X_\epsilon(t) = (1-\epsilon)\, x(t) + \epsilon\, y(t), \quad (15)$

$Y_\epsilon(t) = \epsilon\, x(t) + (1-\epsilon)\, y(t). \quad (16)$

Here, ε is a parameter that describes the amount of linear mixing or 'signal cross-talk'. A value of ε of 0.5 means that the mixing leads to two identical signals and, hence, no significant TE should be observed. We then investigated, for three different values of ε, whether effective connectivity could still be detected if only the linear mixtures are available.

2. The second test case consisted of generating measurement signals in the following way:

$X_\epsilon(t) = s(t), \qquad Y_\epsilon(t) = (1-\epsilon)\, s(t) + \eta_Y(t).$

Here, s(t) is the common source, a mean-free AR(10) process with unit variance. s(t) is measured twice: once noise free in X_ε, and once dampened by a factor (1−ε) and corrupted by independent Gaussian noise of unit variance, η_Y, in Y_ε. Here, we tested the ability of our implementation of TE to reject the hypothesis of effective connectivity. This second test case is of particular importance for the application of TE to EEG and MEG measurements, where often a single source may be observed on two sensors that have different noise characteristics, e.g. due to differences in the contact resistance of EEG electrodes or the characteristics of the MEG SQUIDs.

Choice of embedding parameters for delayed interactions

To demonstrate the effects of suboptimal embedding parameters for the case of delayed interactions, we simulated processes with autoregressive order 10 (AR(10)) dynamics, three different interaction delays (5, 20, 100 samples) and all three coupling types (linear, threshold, quadratic). The two processes were coupled unidirectionally X → Y. 15, 30, 60, and 120 trials were simulated. We tested for effective connectivity in both possible directions using permutation testing. All coupled processes were investigated with three different prediction times u of 6, 21, and 101 samples. The remaining analysis parameters were an embedding dimension d of 7, an embedding delay τ of one autocorrelation time, and k = 4. In addition, we simulated processes with 1/f dynamics, an interaction delay δ of 100 samples and a unidirectional, quadratic coupling. 30 trials were simulated and we tested for effective connectivity in both directions. These coupled processes were investigated with all possible combinations of three different embedding dimensions d, embedding delays τ, and prediction times u. Results are presented in Tables 1 and 2.

Table 1 Detection of true and false effective connectivity for a fixed embedding dimension d of 7 and an embedding delay τ of 1 autocorrelation time

Table 2 Detection of true and false effective connectivity in dependence of the parameters embedding delay τ, embedding dimension d, and prediction time u for data with unidirectional coupling X → Y via a quadratic function, 1/f dynamics and an ...

MEG experiment

Rationale

In order to demonstrate the applicability of TE to neuroscience data obtained non-invasively, we performed MEG recordings in a motor task. Our aim was to show that TE indeed gave the results that were expected based on prior neuroanatomical knowledge. To verify the correctness of results in experimental data is difficult, because no knowledge about the ultimate ground truth exists when data are not simulated. Therefore, we chose an extremely simple experiment—self-paced lifting of the index fingers in a self-chosen sequence—where very clear hypotheses about the expected connectivity from the motor cortices to the finger muscles exist.

Subjects and experimental task

Two subjects (S1, m, RH, 38 yrs; S2, f, RH, 23 yrs) participated in the experiment. Subjects gave written informed consent prior to the recording. Subjects had to lift the right and left index finger in a self-chosen, randomly alternating sequence with approximately 2 s pause between successive finger lifts. Finger movements were detected using a photosensor.
In addition, an electromyogram (EMG) response was recorded from the extensor muscles of the right and left index fingers.

Recording and preprocessing

MEG data were recorded using a 275 channel whole head system (OMEGA 2005, VSM MedTech Ltd., Coquitlam, BC, Canada) in a synthetic 3rd order gradiometer configuration. Additional electrocardiographic, -oculographic and -myographic recordings were made to measure the electrocardiogram (ECG), horizontal and vertical electrooculography (EOG) traces, and the electromyogram (EMG) of the extensor muscles of the right and left index fingers. Data were hardware filtered between 0.5 and 300 Hz and digitized at a sampling rate of 1.2 kHz. Data were recorded in two continuous sessions lasting 600 s each. For the analysis of effective connectivity between scalp sensors and the EMG, data were preprocessed using the Fieldtrip open-source toolbox for MATLAB (http://fieldtrip.fcdonders.nl/; version 2008-12-10). Data were digitally filtered between 5 and 200 Hz and then cut into trials from −1,000 ms before to 90 ms after the photosensor indicated a lift of the left or right index finger. This latency range ensured that enough EMG activity was included in the analysis. We used the artifact rejection routines implemented in Fieldtrip to discard trials contaminated with eye-blinks, muscular activity and sensor jumps.

Analysis of effective connectivity at the MEG sensor level using transfer entropy

Effective connectivity was analyzed using the algorithm to compute transfer entropy as described above. The algorithm was implemented as a toolbox (Lindner et al. 2009) for Fieldtrip data structures (http://fieldtrip.fcdonders.nl/) in MATLAB. The nearest neighbour search routines were implemented using OpenTSTool (Version 1.2 on Linux 64 bit; Merkwirth et al. 2009). Parameters for the analysis were chosen based on a scanning of the parameter space, to obtain maximum sensitivity. In more detail, we computed the difference between the transfer entropy for the MEG data and the surrogate data for all combinations of parameter values from a grid over the embedding delay τ, the prediction time u, the embedding dimension d, and the neighbor mass k, with FDR correction for multiple comparisons (q = 0.05; Genovese et al. 2002). The parameter values with optimum sensitivity, i.e. the most significant results across sensor pairs after correction for multiple comparisons, included an embedding dimension d of 7 and a forward prediction time u corresponding to roughly 16 ms (see Discussion). In addition, we required that prediction should be possible for at least 150 samples per individual trial; trials where the combination of a long autocorrelation time and the embedding dimension of 7 did not leave enough data for prediction were discarded. We required that at least 30 trials survive this exclusion step for a dataset to be analyzed. Even a simple task like self-paced lifting of the left or right index finger potentially involves a very complex network of brain areas related to volition, self-paced timing, and motor execution. Not all of the involved causal interactions are clearly understood to date. We therefore focused on a set of interactions for which clear-cut hypotheses about the direction of causal interactions and the differences between the two conditions existed: We examined TE from the three bilateral sensor pairs displaying the largest amplitudes in the magnetically evoked fields (MEFs) (compare Fig. 7) before onset of the two movements (left or right finger lift) to both EMG channels. This also helped to reduce computation time, as an all-to-all analysis of effective connectivity at the MEG and EMG sensor level would involve the analysis of 277 × 276 directed connections.
We then tested the connectivities in the two conditions against each other by comparing the distributions of TE values in the two conditions using a permutation test. For this latter comparison a clear lateralization effect was expected, as task related causal interactions common to both conditions should cancel. Activity in at least three different frequency bands has been found in the motor cortex, and it has been proposed that each of these frequency bands subserves a different function:

• A slow rhythm (6–10 Hz) has been postulated to provide a common timing for agonist/antagonist muscle pairs in slow movements and is thought to arise from synchronization in a cerebello-thalamo-cortical loop (Gross et al.). The coupling of cortical (primary motor cortex M1, primary somatosensory cortex S1) activity to muscular activity was proposed to be bidirectional (Gross et al.) in this frequency range. The coupling may also depend on oscillations in spinal stretch reflex loops (Erimaki and Christakos 2008).

• Activity in the beta range (~20 Hz) has been suggested to subserve the maintenance of current limb position (Pogosyan et al.), and strong cortico-muscular coherence in this band has accordingly been found in isometric contraction (Schoffelen et al.). Coherent activity in the beta band has also been demonstrated between bilateral motor cortices (Mima et al.; Murthy and Fetz).

• In contrast, motor-act related activity in the gamma band (>30 Hz) is reported less frequently and its relation to motor control is less clearly understood to date (Donoghue et al. 1998).

We therefore focused our analysis on a frequency interval from 5–29 Hz. Note that we omitted the frequently proposed preprocessing of the EMG traces by rectification (Myers et al. 2003), as TE should be able to detect effective connectivity without this additional step.

Fig. 7 Neuromagnetic fields in a finger lifting task. (a) Single-trial raw traces of magnetic fields (thin line) measured by two MEG sensors over left (MLT24) and right (MRT24) motor cortex (also compare (d) for the position of these sensors). In this trial ...

Results

In this section we first present the analysis of effective connectivity in pairs of simulated signals {X, Y}. All signal pairs were unidirectionally coupled from X to Y. We used three coupling functions: linear, threshold, and a purely non-linear quadratic coupling. We simulated two different signal dynamics, AR(10) processes and processes with 1/f spectra, the latter being close to spectra observed in biological signals. The two signals of a pair always had similar characteristics. We always analyzed both directions of potential effective connectivity, X → Y and Y → X, to quantify both the sensitivity and the specificity of our method. In addition to this basic simulation we investigated the following special cases: coupling via multiple coupling delays for linear and threshold interactions, linearly mixed observations of two coupled signals for linear and threshold coupling, and observation of a single signal via two sensors with different noise levels. In this last case no effective connectivity should be detected. The absence of false positives in this latter case is of particular importance for EEG and MEG sensor-level analysis. As a proof of principle we then applied the analysis of effective connectivity via TE to MEG signals recorded in a self-paced finger lifting task.
Here the aim was to recover the known connectivity from contralateral motor cortices to the muscles of the moved limb, via a comparison of effective connectivity for left and right finger lifting.

Simulation study

Detection of non-linear interactions for various signal dynamics

Transfer entropy in combination with permutation testing correctly detected effective connectivity (X → Y) for both autoregressive order 10 and 1/f signal dynamics and all three simulated coupling types (linear, threshold, quadratic) if at least 30 trials were used to compute statistics (Fig. 2). No false positives, i.e. significant results for the direction Y → X, were observed. We note that the cross-correlation function between the signals X and Y was flat when they were coupled non-linearly, which indicates that linear approaches may be insufficient to detect a significant interaction in those cases.

Fig. 2 Detection of effective connectivity by TE for two unidirectionally coupled signals (X → Y). (a–c) Signals generated from an autoregressive order ten process and coupled via (a) linear, (b) threshold, and (c) quadratic coupling. (d ...

Detection of interactions with multiple interaction delays

The statistical evaluation of TE values robustly detected the correct direction of effective connectivity (X → Y) for the two unidirectionally coupled AR(10) time series (X, Y), coupled via a range of delays δ from 17–23 samples, and for the two unidirectionally coupled 1/f time series, coupled via a range of delays δ from 97–103 samples. The correct coupling direction (X → Y) was found for all three investigated coupling functions (linear, threshold, quadratic), even if only 15 trials were investigated (Fig. 3). For these analyses we used a prediction time u of 21 samples for the case of a delay δ of 17–23 samples, and a prediction time u of 101 samples for the delay δ of 97–103 samples. Correct detection of effective connectivity was also possible when using a prediction time u of 21 samples for the delay δ of 97–103 samples, i.e. a prediction time that was shorter than the interaction delay (data not shown). This was expected because of the delocalization in time provided for by the delay embedding. However, no effective connectivity was detected when using a prediction time u of 101 samples for an interaction delay δ of 17–23 samples, i.e. when using a prediction time that was considerably longer than the interaction delay (data not shown; compare Table 1 for single interaction delays). No false positive effective connectivities (Y → X) were found. However, relatively high values of (1-p) in some cases indicate that the embedding parameters were not optimally chosen, as discussed below.

Fig. 3 Detection of effective connectivity by TE for two unidirectionally coupled time series (X → Y) with a range of coupling delays as indicated by the shaded boxes in (a) and (d). (a–c) autoregressive order ten processes; interaction ...

Detection of effective connectivity from linearly mixed measurement signals

In order to investigate the application of TE to EEG and MEG sensor signals, where the signals from the processes in question can only be observed after linear mixing, we simulated two unidirectionally coupled AR(10) signals (X → Y with linear or threshold coupling). These signals then underwent a symmetric linear mixing process in dependence of a parameter ε in the range from 0.1 to 0.4 (Eqs. 15, 16); a value of ε = 0.5 would lead to two identical signals.
For the case of linearly coupled source signals, TE indicated effective connectivity in the direction from the sensor signal X_ε that had the higher contribution from the driving process (X) to the sensor signal Y_ε dominated by the receiving process (Y) for all investigated cases of linearly mixed measurement signals except for the case of ε = 0.4 (Figs. 4 and 5).

Robustness against instantaneous mixing

To quantify the false positive rates when applying transfer entropy to multiple observations of the same signal, but with differential noise, we simulated an autoregressive order 10 process and two observations of this process: one noise free observation, X_ε, and a second observation, Y_ε, corrupted by a varying amount of white noise (Fig. 6(a) and (b)). Similar to the performance of GC in this case (Nolte et al. 2008), the application of TE resulted in a considerable number of false positive detections of effective connectivity from the noise free sensor signal to the noise-corrupted sensor signal (Fig. 6(c)). However, application of the time-shifting test proposed in the Methods section removed all false positive cases.

Choosing embedding parameters for delayed interactions

To demonstrate the importance of correct embedding we simulated unidirectionally coupled signals with various interaction delays and analyzed effective connectivity with different choices for the embedding dimension d, the embedding delay τ and the prediction time u (Tables 1 and 2). As expected from theoretical considerations (see Fig. 1), false positive effective connectivity is reported for short interaction delays (5, 20 samples) in combination with short prediction times (six samples) and insufficient embedding. In contrast, if we try to detect long interaction delays with one fixed set of parameters (d, τ, u), the range of interaction delays δ that can be investigated reliably is limited (Table 1). This problem is solved naturally by increasing embedding dimensions and embedding delays, as demonstrated in Table 2, although this may sometimes not be possible in practical terms. In our simulations we generally found an embedding delay of one autocorrelation time, in combination with embedding dimensions between 7 and 10, to be more appropriate than smaller (Table 2) or larger embedding dimensions. While smaller values are often proposed for embedding, our data suggest that for the evaluation of TE it is particularly important to cover most or all of the memory inherent in both source and target signals; for our data this could be achieved by choosing d and τ such that the embedding spans approximately one autocorrelation time, which prevents false positive detection of causality in the presence of delayed interactions. We also observed that values of the prediction time u close to the actual interaction delay δ made the analysis of TE both more sensitive and more robust against false positives, even for suboptimal choices of d and τ (Tables 1 and 2). Hence, a choice of u close to δ, e.g. based on prior (e.g. anatomical) knowledge, may yield a method that is more robust in the face of unknown and hard to determine values of d and τ.

Effective connectivity at the MEG sensor level

Motor evoked fields

Self-paced lifting of the right or left index finger in a self-chosen sequence resulted in robust motor evoked fields that were compatible with motor evoked fields reported in the literature (Mayville et al. 2005; Weinberg et al. 1990; Nagamine et al. 1996; Pedersen et al. 1998) (Fig. 7).
We observed a slow readiness field at sensors over contralateral motor cortices starting approximately 350 ms before onset of EMG activity and a pronounced reversal of field polarity during movement execution (data not shown).

Movement related effective connectivity

As expected, effective connectivity from sensors over contralateral motor cortices was significantly larger to the EMG electrodes over the muscle of the moved finger than to the EMG electrode over the muscle of the non-moved finger (Fig. 8). Unexpectedly, however, effective connectivity from ipsilateral motor cortices was also significantly larger to the EMG electrodes over the muscle of the moved finger than to the EMG electrode over the muscle of the non-moved finger. Effective connectivity was never larger from any sensor over motor cortices to the EMG electrodes over the muscle of the non-moved finger.

Discussion

Transfer entropy as a tool to quantify effective connectivity

In the present study we aimed to demonstrate that TE is a useful addition to existing methods for the quantification of effective connectivity. We argued that existing methods like GC, which are based on linear stochastic models of the data, may have difficulties detecting purely non-linear interactions, such as inverted-U relationships. Here, we could show that transfer entropy reliably detected effective connectivity when two signals were coupled by a quadratic, i.e. purely non-linear, function (Fig. 2). Particularly relevant for neural interactions, we have also shown that couplings mediated by threshold or sigmoidal functions are correctly captured by TE. Furthermore, we extended the original definition of TE to deal with long interaction delays and demonstrated that TE detected effective connectivity correctly when the coupling of two signals was mediated by multiple interactions that spanned a range of latencies (Fig. 3). Moreover, we considered the problem of volume conduction and showed that TE robustly detected effective connectivity when only linear mixtures of the original coupled signals were available (Figs. 4 and 5), as long as the signals were not too close to being identical. In addition, if the two measurements reflected a common underlying source signal ('common drive') but had different levels of measurement noise added, TE in combination with a test on time-shifted data correctly rejected the hypothesis of effective connectivity between the two measurement signals, in contrast to a naive application of GC (Nolte et al. 2008). Therefore, TE in combination with this test is well applicable to EEG and MEG sensor-level signals, where linear instantaneous mixing is inherent in the measurement method. However, without the additional test on time-shifted data, TE had a non-negligible rate of false positive detections of effective connectivity. The origin of these false positives can be understood as follows. Theoretically, transfer entropy is zero in the absence of causality, i.e. when the processes are fully independent—as should be the case for surrogate data. TE is also zero for identical copies of a single signal, as required of a causality measure, when driver and response system cannot be distinguished. Here, we considered the case of volume conduction of a single signal onto two sensors in the presence of additional noise. Hence, the use of surrogate data for a test of the causality hypothesis inevitably leads to the comparison of two (noisy) zeros and to false positives.
Because of this difficulty we suggest performing the time-shift test whenever multiple observations of a single source signal are likely to be present in the data, as is the case for EEG and MEG measurements. Last but not least, we proposed TE as a tool for the exploratory investigation of effective connectivity, because it is a model-free measure based on information theory. Complicated types of coupling, such as cross-frequency phase coupling (Palva et al. 2005), should be readily detectable without prior specification. For example, the coupling via a quadratic function, as investigated here, introduces a frequency-doubled (and distorted) input to the target signal; nevertheless it was readily detected by TE. While the argument on model-freeness holds theoretically, any practical implementation comes with certain parameters that have to be adapted to the data empirically, such as the correct choice of a delay τ and the number of dimensions d used for delay embedding. In addition, the implementation of TE proposed here incorporates a parameter for the prediction time u to adapt the analysis to cases where a long interaction delay is present. If chosen ad hoc, these parameters amount to a sort of model for the data. To keep the method model-free we therefore propose to scan a sufficiently large parameter space on pilot data before analyzing the data of interest, or to scan the parameter space and to correct for the arising multiple comparison problem later on, during statistical testing. To handle the estimation of TE, the parameter scanning and the statistical testing, including the shift test, we implemented the proposed procedure in the form of a convenient open-source MATLAB toolbox for the Fieldtrip data format that is available from the authors (Lindner et al. 2009).

Limitations

Despite the above-mentioned merits, the TE method also has limitations that have to be considered carefully to avoid misinterpretations of the results: We note that model-freeness is not always an advantage. In contrast to model-based methods, the detection of effective connectivity via TE does not entail information on the type of interaction. This fact has two important consequences. First, the absence of a specific model of the interaction leads to a high sensitivity for all types of dependencies between two time series. This way, trivial (nuisance) dependencies might be detected when testing against surrogates. This is bound to happen if these dependencies are not kept intact when creating the surrogate data. Second, the specific type of interaction must be assessed separately post hoc by using model based methods, after the presence of effective connectivity has been established using transfer entropy. In principle, the analysis of effective connectivity using TE and the post-hoc comparison of signal pairs with and without significant interaction in an exploratory search for the actual mechanism of this interaction are possible in the same dataset. This is because these two questions are orthogonal. However, the relationship between significant effective connectivity—detected by TE—and a specific mechanism of the interaction needs to be tested on independent data. Another limitation is that false positive reports are possible when the embedding parameters for the reconstruction of the state space are not chosen correctly. We therefore suggest using TE with a careful choice of parameters, especially with respect to τ, and only after checking that the data to be analyzed meet certain characteristics.
In the following we list a number of characteristics to be considered. First, strong non-stationarities in the data can make it impossible to average over time to reliably estimate the probability densities on which TE is based. Consequently, TE should only be used on data of sufficient length that show at most weak non-stationarities. For an approach to overcoming this limitation by using the trial structure of data sets, see Gomez-Herrero et al. (2010). Second, in this work we have only assessed pairwise interactions. Although a fully multivariate extension is conceptually possible (Gomez-Herrero et al. 2010; Lizier et al. 2008), practical data lengths and computing time restrict its use. Third, TE analysis is difficult to interpret when the signals have a different physical origin, such as, for example, a chemical concentration and an electric field. The reason is that even though the signals entering the TE analysis are z-scored to obtain a certain normalization, there is no clear physical meaning of distance in the joint space of the signals, and consequently no a priori justification to use any particular coarse-graining box in the two directions. Since the results of TE are sensitive to the use of different coarse-graining scales in the two directions, the meaning of any numerical estimate of TE for signals of different physical origin is difficult to establish. Finally, if the interaction to be captured is known to be linear, then the use of linear approaches is fully justified and usually outperforms TE in aspects such as computing time and data-efficiency. Last but not least, we should comment on some general limitations related to the concept of causality as defined by Wiener. It is important to note that Wiener's definition does not include any interventions to determine causality, i.e. it describes observational causality. Methods based on Wiener's principle, such as GC and TE, share certain limitations:

1. The description of all systems involved has to be causally complete, i.e. there must not be unobserved common causes that do not enter the analysis.

2. If two systems are related by a deterministic map, no causality can be inferred. This would exclude, for example, systems exhibiting complete synchronization. Technically, this is reflected in Eq. (3): for TE to be well defined, the probability densities and their logarithms must exist. Therefore δ-distributions in the joint embedding space of two signals, which are equivalent to deterministic maps between these signals, are excluded.

3. The concept of observational causality rests on the axiom that the past and present may cause the future but the future may not cause the past. For this axiom to be useful, observations must be made at a rate that allows a meaningful distinction between past, present and future with respect to the interaction delays involved. This means that interactions that take place on a timescale faster than the sampling rate must be missed by methods based on observational causality.

Application of TE to MEG recordings in a motor task

As a proof-of-principle, we applied TE to MEG data recorded during self-paced finger lifting. The analysis of the effective connectivity from MEG to EMG signals was performed without the commonly recommended rectification of the EMG signal (Myers et al. 2003), to prove that TE could perform the analysis well without this step. Our expectations of stronger effective connectivity from contralateral motor cortex to the moved finger were met for both fingers in both investigated subjects.
Surprisingly, however, we also found stronger effective connectivity from ipsilateral motor cortex to the moved finger. It is not clear at present whether this effective connectivity reflected an indirect interaction: contralateral motor cortex may drive both ipsilateral cortex and the muscles of the moved finger, albeit with strongly differing delays. In this case, TE may erroneously detect effective connectivity from ipsilateral cortex to the muscle, as discussed above. Additional analyses, quantifying the coupling between the two motor cortices, will be necessary to clarify this issue. As discussed below, these analyses should preferentially be performed using a multivariate extension of the TE method.

Comparison to existing literature

The application of non-linear methods to detect effective connectivity in neuroscience data has been suggested before: One of the earliest attempts to extend GC to the non-linear case and to apply it to neurophysiological data was presented by Freiwald et al. (1999). They used a locally linear, non-linear autoregressive (LLNAR) model where time-varying autoregression coefficients were used to capture non-linearities. This model was, however, only tested on simulations of unidirectionally and linearly coupled signals, and it correctly identified the coupling as unidirectional and as linear. No attempt was made to validate the model on simulations of explicitly non-linear directed interactions. Application to EEG data from a patient with complex partial seizures indicated non-linear coupling of the signals measured at electrode positions C3 and C4. Another test, on local field potential (LFP) data recorded in the anterior inferotemporal cortex (macaque area TE) of the macaque monkey, however, detected no indication of a non-linear interaction. We add to these results by demonstrating that purely non-linear (square, threshold) interactions are also reliably detected using TE in combination with appropriate statistical testing, and by demonstrating that interactions can also be found in MEG and EMG data, even when omitting the usual rectification of the EMG. Chávez et al. (2003) used TE on data from an epileptic patient and also proposed a statistical test based on block-resampling of the data that is similar to the trial shuffling approach used here. They found that TE with a fixed prediction time and a fixed inclusion radius for the neighbor search was able to detect the directed linear and non-linear interactions in the simulated models. Our findings are in agreement with these results. In addition, we demonstrated that TE also detects directed non-linear interactions for biologically plausible data with 1/f characteristics and for a range of interaction delays instead of a single delay. Hinrichs et al. (2008) used a measure that is very similar to transfer entropy as it was investigated here. However, in contrast to our study, they substituted the time-consuming estimation of probability densities by kernel-based methods with a linear method based on the data covariance matrices. As explicitly stated in the mathematical appendix of Hinrichs et al. (2008), this effectively limits the detection of directed interactions to linear ones. Here, we demonstrate that, while being relatively time consuming, a kernel based estimation of the required probability densities is feasible using the Kraskov-Stögbauer-Grassberger estimator (Kraskov et al. 2004), even for a dimensionality of five and higher.
We note, however, that the amount of data necessary for these estimations may not always be available and that the achievable 'temporal resolution' is limited by this factor. Interestingly, scanning the prediction time u revealed an optimal interaction delay in the MEG/EMG data of around 16 ms, in accordance with their findings.

Outlook
As demonstrated in this study, TE is a useful tool to quantify effective connectivity in neuroscience data. Its ability to detect purely non-linear interactions and to operate without the specification of one or more a priori models makes it particularly useful for exploratory data analysis, but its use is not limited to this application. The implementation of TE estimation used here only considered pairs of signals, i.e. it is a bivariate method. Direct and indirect interactions may, therefore, not be separated well. However, an extension to the multivariate case is possible, as noted before (e.g. Chávez et al. 2003), and is currently under investigation. Its application to cellular automata by Lizier and colleagues has already revealed interesting insights into the pattern formation and information flow in these models of complex systems (Lizier et al. 2008). The problem of direct versus indirect interactions can also be ameliorated for the case of MEG and EEG data by performing the analysis at the level of source time-courses obtained from a suitable source analysis method. Using source-level time-courses will reduce the number of signals for analysis. A post hoc analysis of the obtained reduced network of effective connectivity by DCM may then be possible. Using source-level time-courses will also improve the interpretability of the obtained effective connectivities compared to those at the sensor level. This is because, for a given causal interaction observed at the sensor level, any of the multiple sources reflected in the sensor signal may be responsible for the observed effective connectivity. Although the estimation of TE presented here is geared towards continuous data, TE has also found application in the analysis of spiking data, as reported in Gourvitch and Eggermont (2007). The particulars of estimating TE from point processes can be found there. Thus, both macroscopic (fMRI, EEG/MEG) and more local signals (LFP, single-unit activity) can be readily analyzed in the common framework of TE. In the future, it will be interesting to compare the effective connectivities for a variety of temporal and spatial scales as revealed by TE.

Conclusion
Transfer entropy robustly detected effective connectivity in simulated data, both for complex internal signal dynamics (1/f) and for strongly non-linear coupling. Detection of effective connectivity was possible without specifying an a priori model. With the use of an additional test for linear instantaneous mixing, it was robust against false positives due to simulated volume conduction. Therefore it is not only applicable to invasive electrophysiological data but also to EEG and MEG sensor-level analysis. Analysis of MEG and EMG sensor-level data recorded in a simple motor task revealed the expected connectivity, even without rectification of the EMG signal.
We therefore propose TE as a useful tool for the analysis of effective connectivity in neuroscience.

Acknowledgments The authors would like to thank Viola Priesemann from the Max Planck Institute for Brain Research, Frankfurt, for valuable comments on this manuscript, German Gomez Herrero from the Technical University of Tampere, Wei Wu from the Humboldt-Universität in Berlin, Mikhail Prokopenko from the CSIRO in Sydney, and Prof. Jochen Triesch from the Frankfurt Institute for Advanced Studies (FIAS) for stimulating discussions, and Sarah Straub from the Department of Psychology, University of Regensburg, for assistance in data acquisition.

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

^1 Historically, however, GC was formulated without explicit assumptions about the linearity of the system (Granger 1969) and was therefore closely related to Wiener's formal definition of causality (Wiener 1956).

^2 For a continuous random variable the natural generalization of Shannon entropy is its differential entropy. Although differential entropy does not inherit the properties of Shannon entropy as an information measure, the derived measures of mutual information and transfer entropy retain the properties and meaning they have in the discrete-variable case. We refer the reader to Kaiser and Schreiber (2002) for a more detailed discussion of TE for continuous variables. In addition, measurements of physical systems typically come as discrete random variables because of the binning inherent in the digital processing of the data.

R. Vicente, M. Wibral, and M. Lindner contributed equally. ML was funded by the Hessian initiative for the development of scientific and economic excellence (LOEWE). RV and GP were in part supported by the Hertie Foundation and the EU (EU project

Contributor Information
Raul Vicente, Email: vicente/at/fias.uni-frankfurt.de.
Michael Wibral, Email: michael.wibral/at/web.de.
Michael Lindner, Email: m.lindner/at/idea-frankfurt.eu.
Gordon Pipa, Email: pipa/at/mpih-frankfurt.mpg.de.

• Arieli A, Sterkin A, Grinvald A, Aertsen A. Dynamics of ongoing activity: Explanation of the large variability in evoked cortical responses. Science. 1996;273(5283):1868–1871. doi: 10.1126/science.273.5283.1868. [PubMed] [Cross Ref]
• Cao L. Practical method for determining the minimum embedding dimension of a scalar time series. Physica D. 1997;110:43–50.
• Chávez M, Martinerie J, Quyen MLV. Statistical assessment of non-linear causality: Application to epileptic EEG signals. Journal of Neuroscience Methods. 2003;124(2):113–128. doi: 10.1016/S0165-0270(02)00367-9. [PubMed] [Cross Ref]
• Cormen, T., Leiserson, C., Rivest, R., & Stein, C. (2001). Introduction to algorithms. MIT Press and McGraw-Hill.
• Donoghue JP, Sanes JN, Hatsopoulos NG, Gal G. Neural discharge and local field potential oscillations in primate motor cortex during voluntary movements. Journal of Neurophysiology. 1998;79(1):159–173. [PubMed]
• Erimaki S, Christakos CN. Coherent motor unit rhythms in the 6–10 Hz range during time-varying voluntary muscle contractions: Neural mechanism and relation to rhythmical motor control. Journal of Neurophysiology. 2008;99(2):473–483. doi: 10.1152/jn.00341.2007. [PubMed] [Cross Ref]
• Freiwald WA, Valdes P, Bosch J, Biscay R, Jimenez JC, Rodriguez LM, et al.
Testing non-linearity and directedness of interactions between neural groups in the macaque inferotemporal cortex. Journal of Neuroscience Methods. 1999;94(1):105–119. doi: 10.1016/S0165-0270(99)00129-6. [PubMed] [Cross Ref]
• Friston K. Functional and effective connectivity in neuroimaging: A synthesis. Human Brain Mapping. 1994;2:56–78. doi: 10.1002/hbm.460020107. [Cross Ref]
• Friston KJ, Harrison L, Penny W. Dynamic causal modelling. NeuroImage. 2003;19(4):1273–1302. doi: 10.1016/S1053-8119(03)00202-7. [PubMed] [Cross Ref]
• Genovese CR, Lazar NA, Nichols T. Thresholding of statistical maps in functional neuroimaging using the false discovery rate. NeuroImage. 2002;15(4):870–878. doi: 10.1006/nimg.2001.1037. [PubMed] [Cross Ref]
• Gomez-Herrero, G., Wu, W., Rutanen, K., Soriano, M. C., Pipa, G., & Vicente, R. (2010). Assessing coupling dynamics from an ensemble of time series. arXiv:1008.0539v1.
• Gourvitch B, Eggermont JJ. Evaluating information transfer between auditory cortical neurons. Journal of Neurophysiology. 2007;97(3):2533–2543. doi: 10.1152/jn.01106.2006. [PubMed] [Cross Ref]
• Granger C. Long memory relationships and the aggregation of dynamic models. Journal of Econometrics. 1980;14:227–238. doi: 10.1016/0304-4076(80)90092-5. [Cross Ref]
• Granger CWJ. Investigating causal relations by econometric models and cross-spectral methods. Econometrica. 1969;37:424–438. doi: 10.2307/1912791. [Cross Ref]
• Gross J, Timmermann L, Kujala J, Dirks M, Schmitz F, Salmelin R, et al. The neural basis of intermittent motor control in humans. Proceedings of the National Academy of Sciences of the United States of America. 2002;99(4):2299–2302. doi: 10.1073/pnas.032682099. [PMC free article] [PubMed] [Cross Ref]
• Hinrichs H, Noesselt T, Heinze HJ. Directed information flow: A model-free measure to analyze causal interactions in event-related EEG-MEG experiments. Human Brain Mapping. 2008;29(2):193–206. doi: 10.1002/hbm.20382. [PubMed] [Cross Ref]
• Hlavackova-Schindler K, Palus M, Vejmelka M, Bhattacharya J. Causality detection based on information-theoretic approaches in time series analysis. Physics Reports. 2007;441:1–46. doi: 10.1016/j.physrep.2006.12.004. [Cross Ref]
• Kaiser A, Schreiber T. Information transfer in continuous processes. Physica D. 2002;166:43–62. doi: 10.1016/S0167-2789(02)00432-3. [Cross Ref]
• Kantz, H., & Schreiber, T. (1997). Nonlinear time series analysis. Cambridge University Press.
• Kozachenko L, Leonenko N. Sample estimate of entropy of a random vector. Problems of Information Transmission. 1987;23:95–100.
• Kraskov, A., Stögbauer, H., & Grassberger, P. (2004). Estimating mutual information. Physical Review E, 69(6 Pt 2), 066138. [PubMed]
• Lindner, M., Vicente, R., & Wibral, M. (2009). Trentool: the transfer entropy toolbox. http://www.michael-wibral.de/TRENTOOL. Accessed 7 August 2010.
• Lizier, J., Prokopenko, M., & Zomaya, A. (2008). Local information transfer as a spatiotemporal filter for complex systems. Physical Review E, 77, 026110. [PubMed]
• Mayville JM, Fuchs A, Kelso JAS. Neuromagnetic motor fields accompanying self-paced rhythmic finger movement at different rates. Experimental Brain Research. 2005;166(2):190–199. doi: 10.1007/s00221-005-2354-2. [PubMed] [Cross Ref]
• Merkwirth, C., Parlitz, U., Wedekind, I., Engster, D., & Lauterborn, W. (2009). OpenTSTOOL version 1.2 (2/2009). http://www.physik3.gwdg.de/tstool/index.html. Accessed 7 August 2010.
• Mima T, Matsuoka T, Hallett M.
Functional coupling of human right and left cortical motor areas demonstrated with partial coherence analysis. Neuroscience Letters. 2000;287(2):93–96. doi: 10.1016/S0304-3940(00)01165-4. [PubMed] [Cross Ref]
• Murthy VN, Fetz EE. Oscillatory activity in sensorimotor cortex of awake monkeys: Synchronization of local field potentials and relation to behavior. Journal of Neurophysiology. 1996;76(6):3949–3967. [PubMed]
• Myers LJ, Lowery M, O'Malley M, Vaughan CL, Heneghan C, Gibson ASC, et al. Rectification and non-linear pre-processing of EMG signals for cortico-muscular analysis. Journal of Neuroscience Methods. 2003;124(2):157–165. doi: 10.1016/S0165-0270(03)00004-9. [PubMed] [Cross Ref]
• Nagamine, T., Kajola, M., Salmelin, R., Shibasaki, H., & Hari, R. (1996). Movement-related slow cortical magnetic fields and changes of spontaneous MEG and EEG brain rhythms. Electroencephalography and Clinical Neurophysiology, 99(3), 274–286. [PubMed]
• Nalatore, H., Ding, M., & Rangarajan, G. (2007). Mitigating the effects of measurement noise on Granger causality. Physical Review E, 75(3 Pt 1), 031123. [PubMed]
• Nolte, G., Ziehe, A., Nikulin, V. V., Schloegl, A., Kraemer, N., Brismar, T., et al. (2008). Robustly estimating the flow direction of information in complex physical systems. Physical Review Letters, 100(23), 234101. [PubMed]
• Paluš, M. (2001). Synchronization as adjustment of information rates: Detection from bivariate time series. Physical Review E, 63, 046211. [PubMed]
• Palva JM, Palva S, Kaila K. Phase synchrony among neuronal oscillations in the human cortex. Journal of Neuroscience. 2005;25(15):4250–4254. doi: 10.1523/JNEUROSCI.4250-04.2005. [PubMed] [Cross Ref]
• Pedersen JR, Johannsen P, Bak CK, Kofoed B, Saermark K, Gjedde A. Origin of human motor readiness field linked to left middle frontal gyrus by MEG and PET. NeuroImage. 1998;8(2):214–220. doi: 10.1006/nimg.1998.0362. [PubMed] [Cross Ref]
• Pereda E, Quiroga R, Bhattacharya J. Nonlinear multivariate analysis of neurophysiological signals. Progress in Neurobiology. 2005;77:1–37. doi: 10.1016/j.pneurobio.2005.10.003. [PubMed] [Cross Ref]
• Pogosyan A, Gaynor LD, Eusebio A, Brown P. Boosting cortical activity at beta-band frequencies slows movement in humans. Current Biology. 2009;19(19):1637–1641. doi: 10.1016/j.cub.2009.07.074. [PMC free article] [PubMed] [Cross Ref]
• Ragwitz M, Kantz H. Markov models from data by simple nonlinear time series predictors in delay embedding spaces. Physical Review E. 2002;65:056201. doi: 10.1103/PhysRevE.65.056201. [PubMed] [Cross Ref]
• Reza, F. (1994). An introduction to information theory. Dover.
• Schoffelen JM, Oostenveld R, Fries P. Imaging the human motor system's beta-band synchronization during isometric contraction. NeuroImage. 2008;41(2):437–447. doi: 10.1016/j.neuroimage.2008.01.045. [PubMed] [Cross Ref]
• Schreiber T. Measuring information transfer. Physical Review Letters. 2000;85(2):461–464. doi: 10.1103/PhysRevLett.85.461. [PubMed] [Cross Ref]
• Shannon C. A mathematical theory of communication. Bell System Technical Journal. 1948;27:379–423.
• Swadlow H. Physiological properties of individual cerebral axons studied in vivo for as long as one year. Journal of Neurophysiology. 1985;54:1346–1362. [PubMed]
• Swadlow H. Efferent neurons and suspected interneurons in motor cortex of the awake rabbit: Axonal properties, sensory receptive fields, and subthreshold synaptic inputs. Journal of Neurophysiology. 1994;71:437–453.
[PubMed]
• Swadlow H, Rosene D, Waxman S. Characteristics of interhemispheric impulse conduction between the prelunate gyri of the rhesus monkey. Experimental Brain Research. 1978;33:455–467. doi: 10.1007/BF00235567. [PubMed] [Cross Ref]
• Swadlow H, Waxman S. Observations on impulse conduction along central axons. Proceedings of the National Academy of Sciences. 1975;72:5156–5159. doi: 10.1073/pnas.72.12.5156. [PMC free article] [PubMed] [Cross Ref]
• Takens, F. (1981). Detecting strange attractors in turbulence. In Dynamical systems and turbulence, Warwick 1980, Lecture Notes in Mathematics (Vol. 898, pp. 366–381). Springer.
• Tognoli E, Scott Kelso J. Brain coordination dynamics: True and false faces of phase synchrony and metastability. Progress in Neurobiology. 2009;12:31–40. doi: 10.1016/j.pneurobio.2008.09.014. [PMC free article] [PubMed] [Cross Ref]
• Victor J. Binless strategies for estimation of information from neural data. Physical Review E. 2002;66:051903. doi: 10.1103/PhysRevE.66.051903. [PubMed] [Cross Ref]
• Weinberg H, Cheyne D, Crisp D. Electroencephalographic and magnetoencephalographic studies of motor function. Advances in Neurology. 1990;54:193–205. [PubMed]
• Wiener N. The theory of prediction. In: Beckenbach EF, editor. Modern mathematics for the engineer. New York: McGraw-Hill; 1956.
• Yerkes RM, Dodson JD. The relation of strength of stimulus to rapidity of habit-formation. Journal of Comparative Neurology and Psychology. 1908;18:459. doi: 10.1002/cne.920180503. [Cross Ref]

Articles from Springer Open Choice are provided here courtesy of Springer
Exercise 2.1

Figure out the series for exp x and prove it to be so.

The power series expansion of exp x about 0 has the form

exp x = a[0] + a[1] x + a[2] x^2 + ...

When x is near 0, exp x is near 1. This implies a[0] = 1. The derivative of exp x is itself and so is also near 1 when x is near 0. Differentiating the series term by term we find

(exp x)' = a[1] + 2a[2] x + 3a[3] x^2 + ...

and this must equal the series for exp x itself. This allows us to identify a[1] = a[0], 2a[2] = a[1], 3a[3] = a[2], from the fact that the coefficients of each power of x must be the same here and in the previous expression for exp x, and in general ja[j] = a[j-1]. Since a[0] = 1, this allows us to identify a[j] = 1/j!, so that

exp x = 1 + x + x^2/2! + x^3/3! + ...
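As a side note (not part of the original exercise), the resulting series is easy to check numerically; a minimal Python sketch comparing partial sums against math.exp:

```python
import math

def exp_series(x, n_terms=20):
    total, term = 0.0, 1.0  # term starts at x^0 / 0! = 1
    for j in range(n_terms):
        total += term
        term *= x / (j + 1)  # turns x^j/j! into x^(j+1)/(j+1)!
    return total

for x in (0.0, 1.0, -2.0):
    print(x, exp_series(x), math.exp(x))  # partial sums match math.exp closely
```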
How to reduce the number of tables?

05-16-2010, 06:36 PM

How to reduce the number of tables?

Hi everyone! I have a big problem: there is a set A = {1, 2, 3, ..., 47, 48, 49} with 49 elements, and we choose 6 numbers from it at random. What is the probability of any one combination? If we generate all of these combinations we get 13,983,816 of them. We want to put these in a table and add six more columns, so it will be 12 columns by 13,983,816 rows; then the number of rows should be reduced. I have built some tables with a sample of 2 elements chosen from 9: the first table is the main one, and the other tables are the tables in which all of these combinations can be found. My question: how can I fit all of these combinations into a small number of tables?

05-16-2010, 08:11 PM

What is the context here? Is this a Swing app and are these JTables? Something else? Good luck!
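An editorial sanity check of the thread's numbers (shown in Python for brevity; the same arithmetic ports directly to Java with a loop or java.math.BigInteger):

```python
import math  # math.comb requires Python 3.8+

total = math.comb(49, 6)   # number of 6-element subsets of {1, ..., 49}
print(total)               # 13983816
print(1 / total)           # chance of matching one particular combination
```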
How to read a chi-square table
Best results from Wikipedia, Yahoo Answers, and YouTube

From Wikipedia

Chi-square distribution

In probability theory and statistics, the chi-square distribution (also chi-squared or χ²) with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables. It is one of the most widely used probability distributions in inferential statistics, e.g. in hypothesis testing or in construction of confidence intervals. When there is a need to contrast it with the noncentral chi-square distribution, this distribution is sometimes called the central chi-square distribution.

The best-known situations in which the chi-square distribution is used are the common chi-square tests for goodness of fit of an observed distribution to a theoretical one, and of the independence of two criteria of classification of qualitative data; a third well-known use is the confidence interval estimation for a population standard deviation of a normal distribution from a sample standard deviation. Many other statistical tests also lead to a use of this distribution, like Friedman's analysis of variance by ranks. The chi-square distribution is a special case of the gamma distribution.

If X[1], ..., X[k] are independent, standard normal random variables, then the sum of their squares

Q\ = \sum_{i=1}^k X_i^2

is distributed according to the chi-square distribution with k degrees of freedom. This is usually denoted as

Q\ \sim\ \chi^2(k)\ \ \text{or}\ \ Q\ \sim\ \chi^2_k

The chi-square distribution has one parameter: k, a positive integer that specifies the number of degrees of freedom (i.e. the number of X[i]'s). Further properties of the chi-square distribution can be found in the box at right.

Probability density function
The probability density function (pdf) of the chi-square distribution is

f(x;\,k) = \frac{1}{2^{k/2}\Gamma(k/2)}\,x^{k/2 - 1} e^{-x/2}\, I_{\{x\geq0\}},

where Γ(k/2) denotes the Gamma function, which has closed-form values at the half-integers. For derivations of the pdf in the cases of one and two degrees of freedom, see Proofs related to chi-square distribution.

Cumulative distribution function
Its cumulative distribution function is:

F(x;\,k) = \frac{\gamma(k/2,\,x/2)}{\Gamma(k/2)} = P(k/2,\,x/2),

where γ(k,z) is the lower incomplete Gamma function and P(k,z) is the regularized Gamma function. In the special case k = 2 this function has a simple form:

F(x;\,2) = 1 - e^{-x/2}.

Tables of this distribution (usually in its cumulative form) are widely available and the function is included in many spreadsheets and all statistical packages. For a closed-form approximation for the CDF, see under Noncentral chi-square distribution.

Additivity
It follows from the definition of the chi-square distribution that the sum of independent chi-square variables is also chi-square distributed. Specifically, if X[1], ..., X[n] are independent chi-square variables with k[1], ..., k[n] degrees of freedom, respectively, then Y = X[1] + ... + X[n] is chi-square distributed with k[1] + ... + k[n] degrees of freedom.

Information entropy
The information entropy is given by

H = -\int_{-\infty}^\infty f(x;\,k)\ln f(x;\,k) \, dx = \frac{k}{2} + \ln\big( 2\Gamma(k/2) \big) + \big(1 - k/2\big) \psi(k/2),

where ψ(x) is the digamma function.

Noncentral moments
The moments about zero of a chi-square distribution with k degrees of freedom are given by

\operatorname{E}(X^m) = k (k+2) (k+4) \cdots (k+2m-2) = 2^m \frac{\Gamma(m+k/2)}{\Gamma(k/2)}.
The cumulants are readily obtained by a (formal) power series expansion of the logarithm of the characteristic function:

\kappa_n = 2^{n-1}(n-1)!\,k

Asymptotic properties
By the central limit theorem, because the chi-square distribution is the sum of k independent random variables, it converges to a normal distribution for large k (k > 50 is "approximately normal"). Specifically, if X ~ χ²(k), then as k tends to infinity, the distribution of (X-k)/\sqrt{2k} tends to a standard normal distribution.

From Yahoo Answers

Question: Here's the question: A heterozygous white-fruited squash plant is crossed with a yellow-fruited plant, yielding 200 seeds. Hypothesis: White is autosomal dominant to yellow. Of these, 110 produce white-fruited plants while only 90 produce yellow-fruited plants. Does the data support or not support the hypothesis? Explain using chi-square analysis. When I attempted this question, I ended up with some weird (did I mention BIG) number :P Help is very much appreciated! :D

Answers: I typed it into a chi-square calculator on the net and got 2 for my chi value. At first, I didn't read the question right and got 42. Don't forget that because the white squash is heterozygous, we expect a fifty-fifty chance of getting white or yellow fruit, not 3:1, like I originally thought. The p-value, with df = 1, is about 0.16, so roughly a one-in-six chance of the deviation arising by chance alone. I'd say based on the chi value the data are consistent with the hypothesis (2 is well below the 3.84 cutoff at p = .05, so you cannot reject it), but it depends on how strict you want to be.

Question: Thanks for the previous question's answers, but what I am looking for is how to figure out an area of, let's say, .95 when all I am given in my table are areas for .10, .05, .025, .01, .005. I need to know how to figure out an area when all I have are these numbers.

Answers: A chi-square table gives you the values such that an area of 0.10, 0.05, 0.025, and so on lies to the right of those values. So for a left-hand area of .95, use the value such that 0.05 is to the right.

Question: Sorry for the unscientific way I'm asking, but what I mean is: does the table "vary" for different tests, or (for example) will the critical value of chi-squared at .05, with 5 degrees of freedom, always equal 11.07 no matter where you look? Thanks. Thanks for the answer, but I mean in a BIO 101 context (so, yes or no?).

Answers: This is a good question, and you have to be really careful how the table is set up. If the table is indexed by the upper-tail probability P(X > x), the .05 entry for 5 df is 11.07 in any table. Some tables are indexed the other way around: 1 - 0.05 = 0.95, and the entry for chi-squared (subscript 0.95) with 5 df is 1.145. So the critical values themselves never vary between tables; only the labeling convention does.

Question: The purpose of the chi-square (χ²) test is to determine whether experimentally obtained data constitute a good fit to a theoretical, expected ratio. In other words, the χ² test enables one to determine whether it is reasonable to attribute deviations from an expected value to chance. Obviously, if deviations are small then they can be more reasonably attributed to chance. The question is how small must deviations be in order to be attributed to chance? The formula for χ² is as follows:

χ² = Σ (O-E)²/E

where O = the observed number of individuals of a particular phenotype, E = the expected number in that phenotype, and Σ = the summation of (O-E)²/E over the various phenotypic categories. The following is an example of how one might apply the test to genetics. In a cross of tall tomato plants to dwarf ones, the F1 consisted entirely of tall plants, and the F2 consisted of 102 tall and 44 dwarf plants. Does this data fit a 3:1 ratio? To answer this question, a χ² value can be calculated (see Table 1).
Table 1

Phenotype  Genotype  O    E      (O-E)   (O-E)²   (O-E)²/E
Tall       T_        102  109.5  -7.5    56.25    0.5137
Dwarf      tt        44   36.5    7.5    56.25    1.5411
Totals               146  146                     2.0548

The calculated χ² value is 2.0548. What does this mean? If the observed had equaled the expected, the value would have been zero. Thus, a small χ² value indicates that the observed and expected ratios are in close agreement. However, are the observed deviations within the limits expected by chance? In order to determine this, one must look up the χ² value on a chi-square table (see Table 1.2). Statisticians have generally agreed on the arbitrary limits of odds of 1 chance in 20 (probability = .05) for drawing the line between acceptance and rejection of the hypothesis as a satisfactory explanation of the data tested. A χ² value of 3.841 for a two-term ratio corresponds to a probability of .05. That means that one would obtain a χ² value of 3.841 due to chance alone on only about 5% of similar trials. When χ² exceeds 3.841 for a two-term ratio, the probability that the deviations can be attributed to chance alone is less than 1 in 20. Thus, the hypothesis of compatibility between the observed and expected ratios must be rejected. In the example given the χ² value is much less than 3.841. Therefore, one can attribute the deviations to chance alone.

Table 1.2 - Values of chi-square (probability of a larger value of χ²)

df    .995      .990     .975     .950     .900    .750    .500    .250    .100    .050    .025    .010    .005
1     .0000393  .000157  .000982  .00393   .0158   .102    .455    1.32    2.71    3.84    5.02    6.63    7.88
2     .0100     .0201    .0506    .103     .211    .575    1.39    2.77    4.61    5.99    7.38    9.21    10.6
3     .0717     .115     .216     .352     .584    1.21    2.37    4.11    6.25    7.81    9.35    11.3    12.8
4     .207      .297     .484     .711     1.06    1.92    3.36    5.39    7.78    9.49    11.1    13.3    14.9
5     .412      .554     .831     1.15     1.61    2.67    4.35    6.63    9.24    11.1    12.8    15.1    16.7

The number of degrees of freedom is the number of terms in the ratio minus one. In the example above (3:1) there are two terms. Therefore, the degrees of freedom is 2 - 1 = 1.

Application of the Chi-Square Test
M&Ms are carefully selected for equality of size and shape. Quantities of each color were packaged in your bag. Randomly select enough M&Ms to cover the bottom of the cup. Count the number of M&Ms of each color and record your data in Table 1.3. Then calculate the expected numbers based on the sample size and the known ratio of M&Ms in general. Complete the table and calculate χ².

Table 1.3

Phenotype (color)   O   E   (O-E)   (O-E)²   (O-E)²/E
Sum

How many degrees of freedom are there? ____________________
Using Table 1.2, what χ² values lie on either side of your calculated χ² value? ____________________
What are the probability values associated with those χ² values? ____________________
Briefly interpret the χ² value you have just calculated.

Answers: lol you typed so much i had to make a comment :D

From Youtube

Minitab - Chi-Square analysis (two-way table in worksheet): If you only have a two-way table (and not the raw data) this is the command you need to get a chi-square test of independence done. I use data from this paper: Law et al., Journal of Affective Disorders 114 (2009) 254-262, on railway suicides. If you are interested in the actual results you should read this paper to find out what the authors think - I'm just showing you how to do the test! See my additional lecture notes for more information about the test.

Stephen's Tutorials - chi squared: Using Excel's CHITEST tool to test for independence between row and column variables in a contingency table. Useful for surveys, especially when the data is interval and not ratio.
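An editorial addition (not part of the aggregated pages above): the tomato χ² arithmetic in Table 1 can be re-checked in a few lines of Python.

```python
# Re-check of the tomato example: F2 counts 102 tall : 44 dwarf against a 3:1 ratio.
observed = {"tall": 102, "dwarf": 44}
n = sum(observed.values())                       # 146
expected = {"tall": n * 3 / 4, "dwarf": n / 4}   # 109.5 and 36.5

chi2 = sum((observed[k] - expected[k]) ** 2 / expected[k] for k in observed)
print(round(chi2, 4))  # 2.0548, below the 3.841 cutoff for 1 df at p = .05
```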
Results about F^omega with type fixed-points ?!

• To: types@cis.upenn.edu
• Subject: Results about F^omega with type fixed-points ?!
• From: Andreas Abel <abel@informatik.uni-muenchen.de>
• Date: Tue, 23 Sep 2003 19:53:41 +0200
• Organization: LMU Muenchen
• User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.3) Gecko/20030312

Currently I am studying extensions of System F^omega by positive fixed-points of type operators. These fixed-points are useful to formalize so-called heterogeneous or nested datatypes. An example of such a fixed-point would be

  PList : * -> *
  PList = \A. A + PList (A x A)

which denotes powerlists, i.e. perfectly balanced leaf-labelled trees.

My question: Is something known about extensions of F^omega by higher-order fixed-points like "PList"? Does a proof of normalization exist?

The literature I came across only deals with positive fixed-points of kind "*" (lists, trees ...) or with fixed-points that are not positive and hence destroy normalization.

Any hints very much appreciated.

Andreas Abel <>< "Du bist der geliebte Mensch." (You are the beloved person.)
Theoretical Computer Science, University of Munich
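An editorial illustration, not part of Abel's message: the nesting that makes PList heterogeneous can be mirrored at run time in a dynamically typed language. A hypothetical Python sketch (the names to_plist, "leaf", and "nest" are mine):

```python
# PList A = A + PList (A x A): a value is a single A, or a powerlist of pairs.
def to_plist(xs):
    """Pack a list of length 2**n into the nested powerlist shape."""
    if len(xs) == 1:
        return ("leaf", xs[0])
    pairs = list(zip(xs[0::2], xs[1::2]))  # element type goes from A to (A, A)
    return ("nest", to_plist(pairs))

print(to_plist([1, 2, 3, 4]))
# ('nest', ('nest', ('leaf', ((1, 2), (3, 4)))))
```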
Island Lake Algebra Tutor

Find an Island Lake Algebra Tutor

...I led workshops for a year at the University. My job was to make sure students understood the lecture material and answer any questions they had. In addition to that I led the students through exercises that were created by the Biology Department.
27 Subjects: including algebra 2, chemistry, English, algebra 1

...Depending on the needs, I am able to help the student, whether the needs concern beginning sounds, letters, or processing, or deficits relating to language, writing, mathematics, or social studies. I am a certified Reading Specialist with experience helping SPED students in math and reading-based subjects. I have been interested in chess since 1995.
40 Subjects: including algebra 2, algebra 1, reading, English

...I have many strategies for making sense out of scientific vocabulary and concepts. I am familiar with different learning styles and can adapt lessons to meet the student's skills. I have experience teaching biology, genetics, Earth science, ecology, physics (force and motion), and chemistry.
13 Subjects: including algebra 1, physics, photography, biology

...Everything around us has some explanation involving these subjects! I would like to share my enthusiasm with others. My background includes physical science (two degrees in Engineering) as well as biological sciences (MD). I have teaching experience at the university level as a teaching assistant for biology as well as for anatomy and physiology.
2 Subjects: including algebra 1, geometry

...I have taught resource classes for students for 8 years in which we work on study skills. Within these classes we developed plans that were individualized to each student. I have been coaching basketball at the middle school level for the past 8 years.
12 Subjects: including algebra 2, chemistry, physics, algebra 1
sum 1 plz help me with hw1 exercise 1-2 • 10 months ago

i have given a possible solution of ur problem to Keni_jk plz look at it

[whiteboard drawing] solve this u will get solution of 1st part

[whiteboard drawing] solve box. get answer. now solve with resistor in series

tankzz helped a lot

welcome :-)

1K||1K = 1K(1||1). 1K + 1K = 1K(1+1). So I'll only use 1's.
(1+1)||1 + 1 = 5/2
(1+1||1)||1 = 3/5
I had to try different arrangements until I came upon this.
1||1 = 1/2
2||1 = 2/3
general rule: 1||a = a/(1+a)

should be 5/3

This principle of factoring resistance can be used on all passive networks: aR -> a, L -> L/R, C -> RC. You need to make the appropriate change in the driving function, whether you are preserving loop currents or nodal voltages. This is used in filter design.
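An editorial addition: the series/parallel arithmetic above is easy to double-check with a tiny helper (resistances normalized so that 1 kΩ = 1; the name par is mine). It confirms the 5/3 correction.

```python
# Parallel combination of two resistances; note par(1, a) = a / (1 + a).
def par(a, b):
    return a * b / (a + b)

print(par(1 + 1, 1) + 1)        # (1+1)||1 + 1  -> 1.666..., i.e. 5/3 (not 5/2)
print(par(1 + par(1, 1), 1))    # (1 + 1||1)||1 -> 0.6, i.e. 3/5
```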
Groundwater Investigation and Remediation Projects Software and Apps Following is a list of freeware that provides tools for conducting groundwater investigations and remediation projects. Before you access any software or apps, please read our disclaimer and permissions page if you haven't already done so. AIRSLUG (for Data General AViiON) By US Geological Survey (USGS). Used to generate type curves needed to estimate transmissivity and storativity from water level data collected during the recovery part of an air-pressurized slug test. UNIX source code is available. AIRSLUG (for DOS) By US Geological Survey (USGS). Used to generate type curves needed to estimate transmissivity and storativity from water level data collected during the recovery part of an air-pressurized slug test. Source code is available. AIRSLUG (for Silicon Graphics Indigo) By US Geological Survey (USGS). Used to generate type curves needed to estimate transmissivity and storativity from water level data collected during the recovery part of an air-pressurized slug test. UNIX source code is available. AIRSLUG (for Sun SPARCstation Solaris) By US Geological Survey (USGS). Used to generate type curves needed to estimate transmissivity and storativity from water level data collected during the recovery part of an air-pressurized slug test. UNIX source code is available. BALNINPT (for Data General AViiON) By US Geological Survey (USGS). An interactive program for mass-balance calculations (obsolete program--superseded by NETPATH and PHREEQC). UNIX source code is available. BALNINPT (for DOS) By US Geological Survey (USGS). An interactive program for mass-balance calculations (obsolete program--superseded by NETPATH and PHREEQC). Source code is available. BALNINPT (for Sun SPARCstation Solaris) By US Geological Survey (USGS). An interactive program for mass-balance calculations (obsolete program--superseded by NETPATH and PHREEQC). UNIX source code is available. Bioassay Quick Reference Table By California Department of Toxic Substances Control. This guide is the prototype of a project aimed at creating an index of all the standard bioassays currently being used in the scientific community. Initially includes ASTM standards. Darcy Flux & Groundwater Velocity Calculator By GroundwaterSoftware.com. A free, online tool to calculate Darcy's Law and average groundwater velocity (a minimal sketch of the underlying formulas appears at the end of this list). HydroOffice By Michal Truban. The HydroOffice package is a modular software package for users interested in the water sciences. It was created and is currently being developed mainly in the author's free time. Its aim is to create powerful software tools to streamline the work in the water sciences. The beginnings of the HydroOffice module development date back to 2002; the individual modules are progressively developed and new modules are added. HYSEP (for Data General AViiON) By US Geological Survey (USGS). Performs hydrograph separation, estimating the ground-water, or base flow, component of streamflow. UNIX source code is available. HYSEP (for DOS) By US Geological Survey (USGS). Performs hydrograph separation, estimating the ground-water, or base flow, component of streamflow. HYSEP (for Solaris) By US Geological Survey (USGS). Performs hydrograph separation, estimating the ground-water, or base flow, component of streamflow. UNIX source code is available. Johnson and Ettinger Model for Subsurface Vapor Intrusion into Buildings By US Environmental Protection Agency, Office of Solid Waste and Emergency Response.
A series of models for estimating indoor air concentrations and associated health risks from subsurface vapor intrusion into buildings. MODBRNCH (for Data General AViiON) By US Geological Survey (USGS). Surface- and ground-water interactions can be simulated by the coupled BRANCH and USGS modular, three-dimensional, finite-difference ground-water flow (MODFLOW-96) models, referred to as MODBRNCH. Source code is available. MODBRNCH (for DOS) By US Geological Survey (USGS). Surface- and ground-water interactions can be simulated by the coupled BRANCH and USGS modular, three-dimensional, finite-difference ground-water flow (MODFLOW-96) models, referred to as MODBRNCH. Source code is available. MODBRNCH (for Sun SPARCstation Solaris) By US Geological Survey (USGS). Surface- and ground-water interactions can be simulated by the coupled BRANCH and USGS modular, three-dimensional, finite-difference ground-water flow (MODFLOW-96) models, referred to as MODBRNCH. Source code is available. MODFLOW-2000 (for UNIX) By US Geological Survey (USGS). MODFLOW-2000 simulates steady and nonsteady flow in an irregularly shaped flow system in which aquifer layers can be confined, unconfined, or a combination of confined and unconfined. Flow from external stresses, such as flow to wells, areal recharge, evapotranspiration, flow to drains, and flow through river beds, can be simulated. Hydraulic conductivities or transmissivities for any layer may differ spatially and be anisotropic (restricted to having the principal directions aligned with the grid axes), and the storage coefficient may be heterogeneous. Specified head and specified flux boundaries can be simulated as can a head dependent flux across the model's outer boundary that allows water to be supplied to a boundary block in the modeled area at a rate proportional to the current head difference between a "source" of water outside the modeled area and the boundary block. MODFLOW is currently the most used numerical model in the U.S. Geological Survey for ground-water flow problems. In addition to simulating ground-water flow, the scope of MODFLOW-2000 has been expanded to incorporate related capabilities such as solute transport and parameter estimation. MODFLOW-2000 (for Windows) By US Geological Survey (USGS). MODFLOW-2000 simulates steady and nonsteady flow in an irregularly shaped flow system in which aquifer layers can be confined, unconfined, or a combination of confined and unconfined. Flow from external stresses, such as flow to wells, areal recharge, evapotranspiration, flow to drains, and flow through river beds, can be simulated. Hydraulic conductivities or transmissivities for any layer may differ spatially and be anisotropic (restricted to having the principal directions aligned with the grid axes), and the storage coefficient may be heterogeneous. Specified head and specified flux boundaries can be simulated as can a head dependent flux across the model's outer boundary that allows water to be supplied to a boundary block in the modeled area at a rate proportional to the current head difference between a "source" of water outside the modeled area and the boundary block. MODFLOW is currently the most used numerical model in the U.S. Geological Survey for ground-water flow problems. In addition to simulating ground-water flow, the scope of MODFLOW-2000 has been expanded to incorporate related capabilities such as solute transport and parameter estimation. MODFLOW-2005 By US Geological Survey (USGS).
MODFLOW-2005 can be used to help address such issues as water availability and sustainability, interaction of ground water and surface water, wellhead protection, seawater intrusion, and remediation of contaminated ground water. MODFLOWP (for Data General AViiON) By US Geological Survey (USGS). A version of the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model (MODFLOW), which, with the Parameter-Estimation Package, can be used to estimate parameters by nonlinear regression. The following MODFLOW model inputs can be estimated: layer transmissivity, storage, coefficient of storage, hydraulic conductivity, and specific yield; vertical leakance; horizontal and vertical anisotropy; hydraulic conductance of the River, Streamflow-Routing, General-Head Boundary, and Drain Packages; areal recharge; maximum evapotranspiration; pumpage; and the hydraulic head at constant-head boundaries. Source code is available. Replaced by MODFLOW-2000. MODFLOWP (for DOS) By US Geological Survey (USGS). A version of the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model (MODFLOW), which, with the Parameter-Estimation Package, can be used to estimate parameters by nonlinear regression. The following MODFLOW model inputs can be estimated: layer transmissivity, storage, coefficient of storage, hydraulic conductivity, and specific yield; vertical leakance; horizontal and vertical anisotropy; hydraulic conductance of the River, Streamflow-Routing, General-Head Boundary, and Drain Packages; areal recharge; maximum evapotranspiration; pumpage; and the hydraulic head at constant-head boundaries. Source code is available. Replaced by MODFLOW-2000. MODFLOWP (for Sun SPARCstation Solaris) By US Geological Survey (USGS). A version of the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model (MODFLOW), which, with the Parameter-Estimation Package, can be used to estimate parameters by nonlinear regression. The following MODFLOW model inputs can be estimated: layer transmissivity, storage, coefficient of storage, hydraulic conductivity, and specific yield; vertical leakance; horizontal and vertical anisotropy; hydraulic conductance of the River, Streamflow-Routing, General-Head Boundary, and Drain Packages; areal recharge; maximum evapotranspiration; pumpage; and the hydraulic head at constant-head boundaries. Source code is available. Replaced by MODFLOW-2000. National Water Information System (NWIS) By US Geological Survey (USGS). Provides access to water-resources data collected at approximately 1.5 million sites in all 50 US States, the District of Columbia, and Puerto Rico. It includes information on the occurrence, quantity, quality, distribution, and movement of surface and underground waters. Natural Attenuation Software (NAS) By Virginia Tech. A screening tool to estimate remediation timeframes for monitored natural attenuation (MNA) to lower groundwater contaminant concentrations to regulatory limits, and to assist in decision-making on the level of source zone treatment in conjunction with MNA using site-specific remediation objectives. NAS consists of a combination of analytical and numerical solute transport models.
An interactive program for calculating NET geochemical reactions and radiocarbon dating along a flow PATH. NETPATH (for Windows) By US Geological Survey (USGS). An interactive program for calculating NET geochemical reactions and radiocarbon dating along a flow PATH. OnSite OnLine Tools for Site Assessment By US Environmental Protection Agency, Office of Research and Development. The OnSite set of online tools for site assessment contains calculators for formulas, models, unit conversion factors and scientific demonstrations to assess the impacts from ground water contaminants. PHAST (for Windows) By US Geological Survey (USGS). A 3-dimensional model for simulating ground-water flow, solute transport, and multicomponent geochemical reactions. PHAST (RPM file for Linux) By US Geological Survey (USGS). A 3-dimensional model for simulating ground-water flow, solute transport, and multicomponent geochemical reactions. PHAST (tar.gz file for Linux) By US Geological Survey (USGS). A 3-dimensional model for simulating ground-water flow, solute transport, and multicomponent geochemical reactions. PHREEQC (for Linux) By US Geological Survey (USGS). A program for aqueous geochemical calculations that can perform speciation, inverse, reaction-path, and 1D advective reaction-transport modeling. UNIX source code is PHREEQC (for Mac) By US Geological Survey (USGS). A program for aqueous geochemical calculations that can perform speciation, inverse, reaction-path, and 1D advective reaction-transport modeling. UNIX source code is PHREEQC (for Windows) By US Geological Survey (USGS). A program for aqueous geochemical calculations that can perform speciation, inverse, reaction-path, and 1D advective reaction-transport modeling. Source code is PHREEQC (generic UNIX version) By US Geological Survey (USGS). A program for aqueous geochemical calculations that can perform speciation, inverse, reaction-path, and 1D advective reaction-transport modeling. PHREEQCI (for Windows) By US Geological Survey (USGS). A graphical user interface for PHREEQC, a multipurpose geochemical program that can perform speciation, inverse, reaction-path, and 1D advective reaction-transport modeling. For Windows. By Integrated GroundWater Modeling Center (IGWMC). SLUGT2 is an updated version of program 'SLUGT' and computes hydraulic conductivity values based on the analysis of slug-test data. Tracer-Test Planning Using the EHTD Program By US Environmental Protection Agency, National Center for Environmental Assessment (NCEA). A hydrologic tracer-test design (EHTD) methodology has been developed that combines basic measured field parameters (e.g., discharge, distance, cross-sectional area) in functional relationships that describe solute-transport processes related to flow velocity and time of travel. The new method applies these initial estimates for time of travel and velocity to a hypothetical continuously stirred tank reactor as an analog for the hydrologic flow system to develop initial estimates for tracer concentration and axial dispersion, based on a preset average tracer concentration. Root determination of the one-dimensional advection-dispersion equation (ADE) using the preset average tracer concentration then provides a theoretical basis for an estimate of necessary tracer mass. US EPA Johnson & Ettinger Models (with California Health Criteria) (Groundwater Contamination) By California Department of Toxic Substances Control. Estimates human health risks from subsurface vapor intrusion into buildings. 
Incorporates human health criteria specific to California, as developed by the Cal/EPA Office of Environmental Health Hazard Assessment (OEHHA). By North Carolina State University (NCSU) Water Quality Group. WATERSHEDSS (WATER, Soil, and Hydro- Environmental Decision Support System) was designed to help watershed managers and land treatment personnel identify their water quality problems and select appropriate best management practices. ZONEBUDGET (for Data General AViiON) By US Geological Survey (USGS). Computes subregional water budgets using results from the MODFLOW ground-water flow model. Source code is available. ZONEBUDGET (for DOS) By US Geological Survey (USGS). Computes subregional water budgets using results from the MODFLOW ground-water flow model. Source code is available. ZONEBUDGET (for Silicon Graphics Indigo) By US Geological Survey (USGS). Computes subregional water budgets using results from the MODFLOW ground-water flow model. Source code is available. ZONEBUDGET (for Sun SPARCstation Solaris) By US Geological Survey (USGS). Computes subregional water budgets using results from the MODFLOW ground-water flow model. Source code is available. Can't find what you need?
Explaining Kilowatts vs. Kilowatt-Hours

What's the practical difference between kilowatts and kilowatt-hours? Knowing the difference can help you look smart and impress your friends.

Kilowatts (kW) - this is the rate at which energy is produced or consumed. A kilowatt is 1000 watts (W). Sometimes you may see megawatt (MW), which is a million watts. The power of a generator is specified in kW. Devices consume electricity at a rate specified in kW. A lightbulb may be 60 watts (0.06 kW).

Kilowatt-hours (kWh) - the amount of electricity produced or consumed. This is rate (kW) multiplied by time (hours). Your electric bill will charge you for the number of kilowatt-hours you consumed. For example, running a 0.06 kW (60 W) lightbulb for 1 hour consumes 0.06 kWh of electricity. Running a 30 W bulb for 2 hours also consumes 0.06 kWh of electricity. Running a 0.3 kW solar panel for 6 hours a day may produce 1.8 kWh per day.

Power generation must match power consumption. In order to provide power to devices that cumulatively consume 10 kW, you need to cumulatively produce 10 kW. If there's not enough electricity to meet demand, then we have blackouts. But we can't have too much electricity either or the grid gets overloaded. For example, the Seattle Times reported that hydro plants were producing higher levels of electricity due to excess snow, and so other electricity sources were being shut down to avoid overloading the grid.

What are some actual wattage numbers? Here's a table showing very rough ballpark kW values for various generators.

Nuclear power plant: 1 million kW
A single wind turbine (when wind blows): 1000 kW
A single household solar panel (average during day): 0.3 kW
Generator hooked up to a bicycle (while pedaling at average rate): 0.1 kW

Most electronic devices will label how many watts they consume. My laptop requires 65 W. (So in theory, I could power my laptop with a bicycle-generator and get some exercise while I work.) A hairdryer may be 1800 W.

While conventional gas cars measure efficiency in miles-per-gallon (mpg), electric cars measure efficiency in kWh-per-mile. For example, the EV1 gets 0.180 kWh/mile.

According to the Department of Energy, the U.S. consumes about 3.5 trillion kWh a year. (Here is a breakdown by sources.) The Energy Information Administration (EIA) has the national averages: "In 2008, the average annual electricity consumption for a U.S. residential utility customer was 11,040 kWh, an average of 920 kilowatt-hours (kWh) per month." In my own personal electric bill from a few months ago, I consumed 780 kWh. It cost about $80 (including taxes), so that's 10.25 cents/kWh.

Math and why it matters

The ability to calculate kW and kWh is essential because it lets us evaluate the effectiveness of various proposals. Here are some examples:

Proposal #1: Say we have people pedaling on bicycles connected to generators. How many such bicycle-generators would be needed to replace a 1 million kW nuclear power plant? At 0.1 kW per person, it would take (1 million / 0.1) = 10 million people. The power plant runs 24 hours a day. Say a person only works a standard 8-hour day. So we need 30 million people pedaling bicycles full-time (8 hours per day) to replace a 1 million kW plant.

Proposal #2: How cost-effective would it be to employ people selling electricity produced from bicycle-generators? A 100 watt bicycle generator may produce 0.8 kWh in an 8-hour work day. At 10 cents per kWh, that's 8 cents worth of electricity produced for a day's work pedaling your bicycle generator.
So we can see that manually generating electricity like this is not a very effective policy. If the bicycle generator system cost $500, it would take $500 / ($0.08/day) = 6250 days = 17 years just to earn back the money to pay for the initial capital investment of the generator.

Proposal #3: How many bicycle-generators would be needed to provide for the U.S. electricity needs? Assume the U.S. uses 3.5 trillion kWh per year and a generator produces 100 watts. The U.S. needs 3.5 trillion kWh per year / (365 days/year) = 9.5 billion kWh per day. One person can produce 0.8 kWh per day. So that would take 11.9 billion people pedaling bicycle-generators to meet the electricity demands of the U.S. The U.S. population is about 300 million, which is about 1/40 the amount needed. So clearly such an energy source does not scale to national demand.

1. Hey, how about some practical situations? I.e. you have a 1 kW generator: what can you power with it, and for how long? How about a formula?
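The commenter's question above has a one-line answer: energy (kWh) = power (kW) x time (h), so a 1 kW generator can run any combination of loads totaling at most 1 kW, for as long as its fuel holds out. An editorial sketch (device wattages are the ballpark figures from the post):

```python
def energy_kwh(power_kw, hours):
    return power_kw * hours

generator_kw = 1.0
loads_kw = {"laptop": 0.065, "hair dryer": 1.8, "60 W bulb": 0.06}
for name, kw in loads_kw.items():
    verdict = "fits" if kw <= generator_kw else "exceeds"
    print(f"{name}: {kw} kW {verdict} a 1 kW generator; "
          f"{energy_kwh(kw, 8):.2f} kWh over 8 h")
```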
The Endeavour

I have worked as a math professor, programmer, consultant, manager, and statistician. I enjoy combining these skills and experiences to solve problems and see the solutions carried out.

The Endeavour's Latest Posts

This evening something reminded me of the following line from Rudyard Kipling's famous poem If: "... If all men count with you, but none too much ..." It would be good career advice for a mathematician to say "Let all... Read more ›

Cancer research is sometimes criticized for being timid. Drug companies run enormous trials looking for small improvements. Critics say they should run smaller trials and more of them. Which side is correct depends on what's out there waiting to be... Read more ›

There are numerous packages for creating commutative diagrams in LaTeX. My favorite, based on my limited experience, is Paul Taylor's package. Another popular package is tikz-cd. To install Paul Taylor's package on Windows, I created a directory called localtexmf, set... Read more ›

There's a theorem in statistics that says E(X̄) = μ. You could read this aloud as "the mean of the mean is the mean." More explicitly, it says that the expected value of the average of some number of samples from some distribution... Read more ›

From Leslie Lamport: "Every time code is patched, it becomes a little uglier, harder to understand, harder to maintain, bugs get introduced. If you don't start with a spec, every piece of code you write is a patch." Which means... Read more ›
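An editorial aside on the statistics post above: the theorem E(X̄) = μ is easy to see in simulation; a hypothetical sketch:

```python
import random

# The average of many sample means sits very close to the population mean.
random.seed(0)
mu = 5.0
sample_means = [
    sum(random.gauss(mu, 2.0) for _ in range(10)) / 10   # mean of 10 draws
    for _ in range(100_000)
]
print(sum(sample_means) / len(sample_means))  # approximately 5.0
```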
Is this correct?

This is the question: Write the equation of a sine function whose amplitude is 2, has a phase shift of -2, and a period of pi.

Is this correct: y = 2 sin(2x + 4)?

Thank you

Re: Is this correct?

Yes. Rewriting it as y = 2 sin(2x + 4) = 2 sin 2(x + 2) makes a good argument for what you have said: the amplitude is |2| = 2, the period is 2*pi/2 = pi, and the factored form shows a horizontal shift of -2.
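For anyone who wants a numerical sanity check of the three properties, a short script works (my own sketch, not part of the original thread):

```python
import numpy as np

f = lambda x: 2*np.sin(2*x + 4)
x = np.linspace(-10, 10, 200_001)

print(np.max(np.abs(f(x))))                 # ~ 2: the amplitude
print(np.max(np.abs(f(x) - f(x + np.pi))))  # ~ 0: the period is pi
print(f(-2.0))                              # 0, and rising: x = -2 plays the
                                            # role of x = 0 for sin, i.e. a
                                            # phase shift of -2
```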
{"url":"http://mathhelpforum.com/trigonometry/188330-correct.html","timestamp":"2014-04-16T16:28:25Z","content_type":null,"content_length":"31857","record_id":"<urn:uuid:cadf47ee-5920-418c-9668-0d185f9a88f1>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometric Construction #2

Given three medians of a triangle ABC, construct the triangle.

$\setlength{\unitlength}{1.3mm}
\begin{picture}(50,45)
\put(24.6,38){\bullet}
\put(19.6,5.6){\bullet}
\put(33.7,3.9){\bullet}
\put(21.6,39){A}
\put(0,6){B}
\put(36,3.2){C}
\put(24,16){O}
\put(20,3){D}
\put(36,22.5){X}
\put(43,11){Y}
\put(25.6,38.6){\line(3,-5){20}}
\put(45,11){\line(-5,-3){20}}
\thicklines
\put(39,0){\line(-1,1){34}}
\put(26,41){\line(-1,-6){7}}
\put(42,29){\line(-5,-3){42}}
\end{picture}$

Given lines AO, BO, CO, you want to locate the positions of A, B, C on those lines so that they form the vertices of a triangle having those lines as medians. The size of the triangle is not determined by that condition, so you can fix A arbitrarily on the line AO.

Construct the point D on the line AO, on the opposite side of O from A, so that $DO = \tfrac12 AO$. Then D will lie on the side BC of the triangle.

The centroid O of the triangle is the centre of mass of three equal masses at the vertices. So the vertices A and C must be equidistant from the median BO. Construct a line AXY through A, perpendicular to BO, meeting BO at X, and such that AX = XY. Then construct a line through Y perpendicular to AXY (and therefore parallel to BOX). This line meets CO at the vertex C of the triangle, and thus determines the position of C.

Finally, you can construct B as the point where the line CD meets the median BOX.
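The two facts the construction relies on are easy to check with coordinates. The following sketch (my own verification, not part of the original post) builds a random triangle, computes its centroid O, and confirms that D with DO = AO/2 on the opposite ray is the midpoint of BC, and that A and C are equidistant from the line BO:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = rng.uniform(-5, 5, size=(3, 2))
O = (A + B + C) / 3                      # centroid = intersection of the medians

# D lies on line AO, beyond O, with DO = AO/2.
D = O + 0.5 * (O - A)
print(np.allclose(D, (B + C) / 2))       # True: D is the midpoint of BC

def dist_to_line(P, B, O):
    """Perpendicular distance from point P to the line through B and O."""
    d = (O - B) / np.linalg.norm(O - B)  # unit direction of the line
    v = P - B
    return abs(v[0]*d[1] - v[1]*d[0])    # |component of v perpendicular to d|

print(np.isclose(dist_to_line(A, B, O), dist_to_line(C, B, O)))  # True
```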
{"url":"http://mathhelpforum.com/geometry/141566-geometric-construction-2-a.html","timestamp":"2014-04-19T12:30:12Z","content_type":null,"content_length":"35298","record_id":"<urn:uuid:8ed2a5bc-3b39-48e3-aa3e-6e579325071d>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
3D graphing software (free? linux-compatible?)

Hello everyone, I'm a high school student and I'm looking for some software which will allow me to graph lines and surfaces in 3D, and which will also perform some calculations such as finding intersections of lines and/or surfaces. I have been working on a research project which involved graphing lines and finding their intersections: initially I used a program called GeoGebra, which is very simple but works well. However, extending the investigation, I would like to find intersections of planes (for a large range of functions; by hand just won't cut it). I'd prefer the software to be free (even better if it's open source) and also preferably Linux-compatible. However, just general information on good programs out there would be helpful. I tried gnuplot, but looking through the documentation I can't find any mention of calculating intersections, and I suspect it is not possible. Is this right? Thanks in advance.

An update: Any comments on Freemat or Sage (which I have only just found and downloaded)?

Re: 3D graphing software (free? linux-compatible?)

Not completely free, but almost so, and not completely Linux-compatible, is OpenGeometry (Ordinariat für Geometrie - Universität für Angewandte Kunst). It is not completely free: it comes as source code with a book (aka "bookware"). And it is not completely Linux-compatible: the main version of this 3D modeling software is written for the commercial version of Visual C++ (although the 2002 edition of the book came with a Linux version, too). A commercial version of Visual C++ is needed because the user interface is based on Microsoft's MFC classes, which are not included in the free Visual C++ "Express" editions.

Similarly, the book "Computer Graphics and Geometric Modeling: Implementation and Algorithms" by Max K. Agoston (Springer, 2005) includes the source code of modeling software written in C++. Like OpenGeometry it is written in Visual C++ with a GUI that is, unfortunately, based on Microsoft's commercial MFC classes.

The software for the book "Geometric Algebra" (by Dorst, Fontijne, Mann) is freely available on the web (Geometric Algebra For Computer Science), though I'm not sure about the software requirements. It seems that it runs on OS X and Solaris, too. So, maybe, it will run on Linux. But don't take my word for it.

There is also software that might do the trick for you that is not strictly for geometric modeling, but for making 3D drawings, such as ePiX (ePiX Home Page). This software is completely free and Linux-compatible.
It is typically used to create 2D or 3D drawings for inclusion in LaTeX documents. Another program of this sort is Asymptote.

Thanks very much for the suggestions, I will have a look at all those.

Re: 3D graphing software (free? linux-compatible?)

For not too complex problems you can also try this: Plot Graphs. It is online, although Linux-compatible...
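One option the posters did not mention: the free Python stack covers both needs, since matplotlib plots surfaces in 3D and SymPy computes intersections exactly. A small sketch of the intersection part (the two planes are made-up examples of mine):

```python
from sympy import Plane, Point3D

p1 = Plane(Point3D(0, 0, 0), normal_vector=(1, 1, 1))
p2 = Plane(Point3D(1, 0, 0), normal_vector=(1, -1, 0))

# Two non-parallel planes intersect in a line; SymPy returns it exactly.
line = p1.intersection(p2)[0]
print(line)
```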
{"url":"http://mathhelpforum.com/math-software/139121-3d-graphing-software-free-linux-compatible.html","timestamp":"2014-04-18T02:09:05Z","content_type":null,"content_length":"43332","record_id":"<urn:uuid:409a2d73-8bf5-4768-85b2-2107e421264e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Brisbane Geometry Tutors

...I took a number of courses in the subject. I've used the concepts during my years as a programmer and have tutored many students in the subject. I have a strong background in linear algebra and differential equations.
49 Subjects: including geometry, calculus, physics, statistics

...I served as a teacher's assistant in my math class because I excelled in math during middle and high school. My responsibilities included tutoring my classmates and grading homework and exams. I enjoyed helping my classmates with their challenges as math has always been one of my favorite subjects, and I continued to help my classmates during my free time in college.
22 Subjects: including geometry, calculus, statistics, biology

...My doctoral degree is in psychology. I think this is a wonderful combination: I can relate to students, understand their frustrations and fears, and at the same time I deeply understand math and take great joy in communicating this to reluctant and struggling students, as well as to able student...
20 Subjects: including geometry, calculus, statistics, biology

...I will be adding availability to tutor students in college algebra and other college-level math courses like calculus, trigonometry, and pre-calculus in the future as soon as I have a chance to review some more mathematics to refresh my skills in these subjects at this level. But for now, I am...
48 Subjects: including geometry, reading, Spanish, English

...I offer a wide variety of services and a flexible schedule for your convenience, though I do request 24-hour notice of lesson cancellation. Centrally located in San Francisco, I am able to travel anywhere BART goes and to many locations in Marin. I have many professional references available upon request.
41 Subjects: including geometry, English, reading, writing
{"url":"http://www.algebrahelp.com/Brisbane_geometry_tutors.jsp","timestamp":"2014-04-19T07:26:01Z","content_type":null,"content_length":"24974","record_id":"<urn:uuid:4dba7abf-c0c2-461f-8ce4-2ad11a03d07b>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Definitions of cognitive functions - Any pointers?

From: Ben Goertzel (ben@goertzel.org)
Date: Mon Feb 12 2007 - 22:14:28 MST

> For some time now, I've been considering the idea of defining many
> -typically hard to define- cognitive functions (such as thought, memory
> recollection, consciousness, pain, desire) using a rationally concrete
> mathematical model. That is, defining the above functions as states,
> transformations or relations within this mathematical model.

I have attempted such a thing in a series of books written over the last 15 years. For a summary of my perspective see my recent book "The Hidden Pattern." Most of the book is nonmathematical, but there are two appendices that put some of the foundational concepts in the book in mathematical form.

Then, if you're still curious, you can dip back into earlier works such as The Structure of Intelligence, The Evolving Mind and Chaotic Logic (which, alas, are more riddled with typos, as I was an even messier copy-editor back then...) for more detailed models of things like different types of memory and different kinds of learning.

I spent a long time trying to mathematize different aspects of cognition. What I arrived at was a bunch of mathematical equations that represented my intuitive understanding of how mind works, but were not tractable or solvable using contemporary mathematical tools! So, the formalizations so far have not been much use for anything except refining and guiding my intuitive understanding. Of course, that is a significant use, and I think if I hadn't gone through this extended mental exercise of cognitive formalization, I would not have been able to come up with a workable AGI design (although that also took me many years of effort ... the challenge being that the first AGI design my formalizations led me to, the Webmind design, was workable-in-principle but overly complex from a practical implementation standpoint...)

-- Ben Goertzel

Anthony Mak wrote:
> Not sure if these pointers are mathematical enough.
> But I am currently looking into cognitive psychology as part of my study.
> In "The Architecture of Cognition", Anderson, there are two chapters called
> Spread of Activation and Control of Cognition, and there seem to be some
> differential equations in them.
> And I am having a look at this interesting book called "Cognitive Psychology:
> A Neural-Network Approach", Martindale, even though it does not seem to
> contain any mathematical equations.
> Sorry, I can't give much detail yet as I haven't gone through them yet :)
> (I am a PhD student at ANU working on hybridization of machine learning and
> automated reasoning methods)
> Anthony
>
> -----Original Message-----
> From: owner-sl4@sl4.org On Behalf Of Konstantinos Natsakis
> Sent: Tuesday, 13 February 2007 8:05 AM
> To: sl4@sl4.org
> Subject: Definitions of cognitive functions - Any pointers?
>
> Hello all,
> I've been reading this list for the last couple of months, and I
> enjoyed reading about novel (and new to me) ideas in fields that I
> consider the frontiers of the human intellect.
> For some time now, I've been considering the idea of defining many
> -typically hard to define- cognitive functions (such as thought,
> memory recollection, consciousness, pain, desire) using a
> rationally concrete mathematical model. That is, defining the
> above functions as states, transformations or relations within
> this mathematical model.
> For example, defining thought as a state of a neural state machine
> with particular properties, memory recollection as a state change
> and a slight transformation of the underlying state machine
> structure, conscious thoughts as a subset of thoughts with certain
> properties, etc.
> I find it a bit hard to believe that no one has ever thought this
> way before, but I have found nothing similar in any of the
> resources I've read so far (which is not much).
> Do you know of any work that has been done in this field? Or any
> arguments supporting that this is just a useless way of thinking? :-)
> Any pointers will be greatly appreciated.
> I'm currently doing the first year of a "Computer Science" BSc at
> the University of Sheffield, UK.
> --
> cyfex
{"url":"http://www.sl4.org/archive/0702/16224.html","timestamp":"2014-04-18T23:56:41Z","content_type":null,"content_length":"9864","record_id":"<urn:uuid:86caed80-70fb-42b7-b2b5-dbe34194ad49>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
circular image plot (MATLAB)

Thanks for the reply. I don't fully understand how to apply it to my problem (I've never been strong on plot tools): my image has no coordinates, it's just a 500x500 matrix. I don't need the axes; I just need the plotter not to draw anything outside a circular region. Currently I'm using pcolor with "EdgeAlpha" set to zero, so it looks very similar to that image I linked, but it's a square.
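One standard trick for this (my own suggestion, not from the original thread) is to blank everything outside the circle with NaN before plotting: pcolor-style plots in both MATLAB and matplotlib leave NaN cells unpainted. In MATLAB terms, build a distance-from-center mask and set img(mask) = NaN. A Python/numpy sketch of the same idea:

```python
import numpy as np
import matplotlib.pyplot as plt

img = np.random.rand(500, 500)            # stand-in for the 500x500 matrix
yy, xx = np.mgrid[:500, :500]
r = np.hypot(xx - 249.5, yy - 249.5)      # distance of each cell from the center
img[r > 250] = np.nan                     # NaN cells are not drawn

plt.pcolormesh(img)                       # pcolormesh is the fast pcolor
plt.gca().set_aspect("equal")
plt.axis("off")                           # no axes, as requested
plt.show()
```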
{"url":"http://www.physicsforums.com/showthread.php?t=517274","timestamp":"2014-04-20T09:01:33Z","content_type":null,"content_length":"32519","record_id":"<urn:uuid:0a55dba4-df13-4afd-936a-9334535bcc10>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
system of inequalities

The question: The science club can spend at most $400 on a field trip to a dinosaur exhibit. It has enough chaperones to allow at most 100 students to go on the trip. The exhibit costs $3 for students 12 years and younger and $6 for students over 12. Write a system of linear inequalities to model this situation. Graph your system of inequalities. Be sure it is clear which region shows the solution. Thanks so much!!!

Let the variable x represent the number of younger students.
Let the variable y represent the number of older students.
Then x + y <= 100.
Each x costs $3, thus 3x dollars in total. Each y costs $6, thus 6y dollars in total. The net total is 3x + 6y. Then 3x + 6y <= 400.
This yields two inequalities. But there are two more, x >= 0 and y >= 0. These are natural inequalities, since you cannot have a negative number of students going. So we really have:
x + y <= 100
3x + 6y <= 400
x >= 0
y >= 0
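To produce the graph the problem asks for, one can shade the points satisfying all four inequalities at once. A sketch (my own addition, not part of the original answer):

```python
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(0, 110, 400), np.linspace(0, 110, 400))
feasible = (x + y <= 100) & (3*x + 6*y <= 400) & (x >= 0) & (y >= 0)

# Shade the feasible region (where all four inequalities hold).
plt.contourf(x, y, feasible.astype(float), levels=[0.5, 1], colors=["lightblue"])
plt.xlabel("students 12 and younger")
plt.ylabel("students over 12")
plt.show()
```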
{"url":"http://mathhelpforum.com/pre-calculus/15032-system-inequalities.html","timestamp":"2014-04-16T07:00:23Z","content_type":null,"content_length":"34204","record_id":"<urn:uuid:b0bf8fbb-2926-4eac-a90e-f37c10dc15e6>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Surjectively isometric normed spaces: Hamel vs (extended) Schauder dimension

Hello everyone. This may really be a very basic question, but...

Let $\mathbf{X} \equiv (X, \|\cdot\|_X)$ and $\mathbf{Y} \equiv (Y, \|\cdot\|_Y)$ be surjectively isometric (1) normed spaces (over the real or the complex field).

Question 1. Do $\mathbf{X}$ and $\mathbf{Y}$ need to share the same Hamel dimension?

The answer is clearly yes if the scalar field is the real numbers, since then any surjective isometry $\mathbf{X} \to \mathbf{Y}$ is, a fortiori, an affine transformation $X \to Y$ (via the Mazur-Ulam theorem). But what about the complex case?

Added later. As pointed out in the comments below, the answer to Question 1 is still yes if $\mathbf{X}$ and $\mathbf{Y}$ are (real or complex) Banach spaces, essentially because the Hamel dimension of an infinite-dimensional (real or complex) Banach space is always at least the cardinality of the continuum (even if the CH fails), as proved in H. E. Lacey, The Hamel Dimension of any Infinite Dimensional Separable Banach Space is c, The American Mathematical Monthly, Vol. 80 (1973), p. 298. This is why I resolved to edit the OP and drop the earlier assumption on the completeness of $\mathbf{X}$ and $\mathbf{Y}$.

Question 2. Now assuming that $\mathbf{X}$ and $\mathbf{Y}$ are Banach, do they need to share the same (extended) Schauder dimension, as defined in J. W. Evans and R. A. Tapia, Hamel Versus Schauder Dimension, The American Mathematical Monthly, Vol. 77, No. 4 (Apr., 1970), pp. 385-388?

Notes. (1) I'm using the term isometry to refer, herein, to both linear and non-linear isometries.

Comments:

As to the first question, isn't the real case sufficient? A complex vector space is also a real vector space by restriction of the scalar field, and the real Hamel dimension (finite or not) is twice the complex Hamel dimension. – Pietro Majer

Also, isn't the Hamel dimension of a (real or complex) infinite-dimensional Banach space equal to its cardinality? – Pietro Majer

As for Question 1, there is an article by J. Bourgain, "Real isomorphic complex Banach spaces need not be complex isomorphic," Proc. AMS, vol. 96, no. 2 (1986), where an example is given of two complex Banach spaces which are isometric but not linearly isomorphic (over the complex numbers). I don't remember the details, so I'm not sure it really has something to do with your question, but it could be a start. – Samuele

@Pietro. You're absolutely right with your 2nd comment! The key point is that the Hamel dimension of an $\infty$-dimensional (real or complex) Banach space cannot be less than $|\mathbb{R}|$ (even if the CH fails!), as proved in H. E. Lacey, The Hamel Dimension of any Infinite Dimensional Separable Banach Space is $\mathfrak{c}$, The AMM, Vol. 80 (1973), p. 298. I confess this is somehow surprising for me, as I really thought the answer should not depend on the completeness of the space, and I'm editing the OP accordingly to pose the question in the right (normed) setting. – Salvo Tringali

Note. By error, I deleted my previous comment (dated 25 Aug 2011). I've tried to remember what it should be and posted it again. Sorry for the inconvenience! @Samuele. Not really what I was seeking, but still useful. Thanks! – Salvo Tringali
{"url":"http://mathoverflow.net/questions/73657/surjectively-isometric-normed-spaces-hamel-vs-extended-schauder-dimension","timestamp":"2014-04-21T15:14:29Z","content_type":null,"content_length":"54602","record_id":"<urn:uuid:aceb62a9-330b-405e-bd33-fda894c8d5d9>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
Annu. Rev. Astron. Astrophys. 1992. 30: 311-358

The angular diameter distance quoted in Equation 1 is only useful in a homogeneous universe. Inhomogeneous universes necessitate drastic approximations. Most treatments of this problem assume that the universe is uniform on the scale of the Hubble radius and that the relationship between redshift and cosmological time is the same as that in a FRW universe of similar mean density.

We are interested in the evolution of the cross section of a null geodesic congruence (a bundle of rays) as it propagates backward in time away from the observer, specifically in the mapping from the observer's angle space to proper distance perpendicular to some central or fiducial ray. Let us generalize Figure 5 and set the vector $\xi = (\xi_1, \xi_2)$ equal to this offset as measured all along the bundle, not just in the lens plane. The subscripts 1, 2 represent components with respect to an orthonormal basis parallel propagated along the fiducial ray, and we write $\xi = \xi_1 + i\xi_2$. The angle between a ray in the congruence and the fiducial ray at the observer can also be represented as a complex number, $\theta = \dot\xi(0)$, where a dot denotes differentiation with respect to distance along the ray. We can relate $\xi$ to $\theta$ through a pair of complex distances $(D_1, D_2)$, where both $D_1$ and $D_2$ are complex, using the general linear relation

$\xi = D_1 \theta + D_2 \bar\theta.$

The real part of $D_1$ measures the expansion of the ray while its imaginary part describes pure rotation. (In practice, rotation is usually small and $D_1$ is approximately real.) $D_2$ measures the shear. All the information about the local image distortion is contained in $D$. The conventional angular diameter distance, whose square is the ratio of the source area to the solid angle it subtends, is given by $[|D_1|^2 - |D_2|^2]^{1/2}$, and suffices for point sources where only the flux can be measured.

In an inhomogeneous universe containing Newtonian matter, $D$ can be shown to evolve according to

$\ddot D_1 = \mathcal{R}\, D_1 + \mathcal{F}\, \bar D_2, \qquad \ddot D_2 = \mathcal{R}\, D_2 + \mathcal{F}\, \bar D_1,$

where the quantity $\mathcal{R} = -(1+z)^2 (\Phi_{,11} + \Phi_{,22}) = -4\pi (1+z)^2 G \rho$ measures the focusing by matter within the beam, and $\mathcal{F} = -(1+z)^2 (\Phi_{,11} - \Phi_{,22} + 2i \Phi_{,12})$ describes the influence of matter external to the congruence (e.g. Penrose 1966, Blandford et al. 1991). This formalism immediately gives expressions for the magnification tensors, $\mu$ (cf Equation 3), whose definition we can now generalize by identifying the source position $\beta$ with $\xi / \bar{D}$, where $\bar{D}$ is the angular diameter distance in a FRW universe of similar average density to the inhomogeneous universe under consideration. (See Ehlers & Schneider 1986 for an alternative choice of reference universe.)

In the limiting case when all the matter in the universe apart from the lens is isolated from the congruence ($D_2 = 0$), the lack of focusing by matter in the beam (save for the lens) compared to a FRW universe of the same $\Omega_0$ increases the angular diameter distance of the source (Dashevskii & Zel'dovich 1965, Dyer & Roeder 1972, Nottale 1983, Nottale & Hammer 1984, Kasai et al. 1990). The increase is about 30 per cent for a source with $z_s \sim 2$ in an Einstein-De Sitter universe. However, the cumulative shear caused by external matter usually produces a second order focusing which leads to a diminished net effect. In general, if multiple imaging is uncommon, the distribution of magnifications due to smoothly distributed matter is dominated by the convergence rather than the shear (Lee & Paczynski 1990, Watanabe & Sasaki 1990). The total flux is always conserved when suitably averaged over all directions (Weinberg 1976, Peacock 1986).
If the focusing (say due to a lensing galaxy) is strong enough to make the rays cross along any congruence (Figure 8), then multiple images must form and we have an example of gravitational lensing. At the point where the rays cross, known as a conjugate point to the observer, the conventional angular diameter distance vanishes ($|D_1| = |D_2|$) and the formal magnification diverges. The locus of these conjugate points is a two-dimensional surface, a caustic sheet, to which the rays are tangent (see Blandford & Narayan 1986 for a schematic diagram showing the caustic sheets associated with an elliptical lens). Equivalently, we can think in terms of wavefronts normal to the rays merging at a caustic (Kayser & Refsdal 1983). For a source at a fixed redshift, the source plane intersects the caustic sheets at caustic lines. The images of these lines are known as critical curves (cf Figures 6, 7).

Figure 8. An infinitesimal conical bundle of rays is shown drawn backwards from an observer, past an elliptical lens, and touching two caustic sheets. The second caustic sheet, on the left, has a cusp line perpendicular to the plane of the diagram, while the first caustic sheet has a cusp in the plane. Representative cross sections of the bundle are indicated at the bottom. Where the bundle touches a caustic, its cross section degenerates to a straight line. Beyond this point, the bundle is "inverted" and a source located here will acquire two additional images.

In general there could be many caustic sheets behind a complex lens, but with a single elliptical lens there are only two sheets (which may penetrate each other, cf Blandford & Narayan 1986). In the generic situation, the caustic sheet corresponds to a fold caustic. When a source crosses a fold, an extra pair of images will either be created or destroyed. These image pairs will be stretched toward each other along a direction essentially perpendicular to the projection of the caustic on the sky (Blandford & Kovner 1988). Because of the stretching, the images will be bright; an example is the pair of bright images, $A_1 A_2$, in Q1115+080. The magnifications of the two images will be inversely proportional to their separations and also inversely proportional to the square root of the distance of the source from the caustic (Benson & Cooke 1979, Ohanian 1983, Blandford & Narayan 1986, Kayser & Witt 1989). Therefore, for a fold caustic, the cross section $\sigma(>\mu)$ for the magnification to be greater than $\mu$ has a universal scaling, $\sigma(>\mu) \propto \mu^{-2}$, for $\mu \gg 1$. Equivalently, the differential cross section scales as $d\sigma/d\mu \propto \mu^{-3}$.

Every time a ray touches a caustic (grazes it tangentially), the associated image is inverted, i.e. its parity is reversed. (Polarization directions are parallel propagated and unaffected.) In Q0957+561, the A image is believed to be uninverted while the ray associated with the B image has touched one caustic, so the two images are approximate reflections of each other. A faint third image ought to be formed in the galaxy nucleus, inverted twice through roughly orthogonal planes, hence rotated through ~ 180°.

Fold surfaces meet at cusp lines, which correspond to a cusp caustic. Sources lying just inside cusps create three bright images (plus any additional images that are not associated with the cusp). Sources lying just outside cusps have one of their images highly brightened. In this region, the cross section for large $\mu$ scales as $\sigma(>\mu) \propto \mu^{-5/2}$, or $d\sigma/d\mu \propto \mu^{-7/2}$. Cusps are believed to play an important role in the luminous arcs.
Cusp lines meet at points associated with higher order singularities, but these have not yet been identified in the observations. The closest point of the caustic to the observer is generically a cusp. When a source is located close to this point, the lens is said to be marginal and may produce one or three bright images (Narayan et al. 1984, Kovner 1987d,e). In general, for a nonsingular lens, caustic surfaces separate regions with image multiplicities differing by two. Since far from the lens a source has but one image, the total number of images has to be odd (Burke 1981, McKenzie 1985).
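As a side note (not spelled out in the review itself), the fold scaling quoted above follows from two standard facts: near a fold, $\mu \propto d^{-1/2}$, where $d$ is the source's distance from the caustic line, and the area of the source-plane strip within distance $d$ of the caustic grows linearly with $d$:

```latex
\mu \propto d^{-1/2}
\;\Rightarrow\; d \propto \mu^{-2}
\;\Rightarrow\; \sigma(>\mu) \propto d \propto \mu^{-2}
\;\Rightarrow\; \frac{d\sigma}{d\mu} \propto \mu^{-3}.
```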
{"url":"http://ned.ipac.caltech.edu/level5/Blandford/Blandford3_3.html","timestamp":"2014-04-19T04:40:48Z","content_type":null,"content_length":"14578","record_id":"<urn:uuid:8c3b9baf-084b-4bc2-9b22-53bf7f3d6e23>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
Math 341 Group Project: Geometry on the Cone

See how much geometry you can develop on cones with cone angle < 360 degrees (e.g., the 90 degree cone). You may also be interested in comparing these with cones with cone angle > 360 degrees (e.g., the 450 degree cone) and/or cylinders. For any of these surfaces you can ask any of the following questions:

- What do the intrinsically straight lines look like?
- What happens to lines that run into the cone point?
- What do circles look like?
- Which of Euclid's Postulates are true?
- What is the sum of the interior angles of a triangle?
- What is the holonomy of a triangle?
- Which triangle congruence theorems hold (or don't hold)?
- Is there a unique straight line joining any two points? If not, for which points is there a unique straight line joining them?
- Do any straight lines intersect themselves? If so, how many times?

Explore any of these questions, or others, that interest you, and write a paper explaining what you find.
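One concrete way to start experimenting (a hint added here, not part of the assignment text): unroll the cone into a flat sector whose angle equals the cone angle. Intrinsically straight lines on the cone become ordinary straight lines in the unrolled picture, so distances can be computed with the law of cosines. A small sketch, with the apex-path rule stated as an assumption in the comments:

```python
import numpy as np

def cone_distance(r1, psi1, r2, psi2, cone_angle):
    """Geodesic distance between two points on a cone (all angles in degrees).
    Points are given intrinsically: r = distance from the apex along the
    surface, psi = angle around the cone, 0 <= psi < cone_angle.
    Unrolling the cone flat turns geodesics into straight lines, so the
    law of cosines in the unrolled sector gives the distance."""
    dpsi = abs(psi2 - psi1) % cone_angle
    dpsi = min(dpsi, cone_angle - dpsi)   # wrap the shorter way around
    if dpsi >= 180:                       # only possible when cone_angle > 360;
        return r1 + r2                    # assume the shortest route passes the apex
    d = np.deg2rad(dpsi)
    return np.sqrt(r1**2 + r2**2 - 2*r1*r2*np.cos(d))

# Two points on the 90-degree cone, a quarter of the way around from each other:
print(cone_distance(1.0, 0.0, 1.0, 45.0, 90.0))   # ~0.765, shorter than the
                                                  # circular arc of length ~0.785
```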
{"url":"http://www.unco.edu/nhs/mathsci/ClassSites/hoppercourse/math341/asscone.html","timestamp":"2014-04-16T14:48:09Z","content_type":null,"content_length":"1860","record_id":"<urn:uuid:c3e58f57-eb6b-4433-b21c-495a5e6bd23c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
help me

Re: help me
Ugly is in the idea of the beholder. To us lagooners I am considered a real hunk.

Re: help me
If that is ugly, I would enjoy being ugly.

Re: help me
I do too. When I walk around I wear a big cloak so as to not scare the kids.

Re: help me
Yes, the kiddies stare at me for some reason. Children find me fascinating. It must be my honest expression. See post #249.

Re: help me
Hmmm... By now I have enough evidence to "accuse" you of being a Time Lord.

Re: help me
A what?

Re: help me
A Time Lord. It seems you're not a fan of British TV shows.

Re: help me
Hmmmm. Monty Python, Sherlock Holmes (Jeremy Brett), Rumpole of the Bailey and Benny Hill is all I know of British TV.

Re: help me
Monty Python's Flying Circus? Isn't that the show the name of the programming language "Python" was derived from? Is it still running?

Re: help me
Monty Python's Flying Circus is still being shown, but I do not think it is still running. I do not know whether they used that to derive the name for Python.

Re: help me
um very fun
Re: help me
They are funny but kind of zany.

Re: help me
What does the word "zany" mean?

Re: help me
Crazy, goofy, wacky and kooky best describe it.

Re: help me
Hi Sarah; how are you today?

Re: help me
I'm good thanks, wbu?

Re: help me
About the same. Doing chores all morning and now I have to go shopping.

Re: help me
I didn't go shopping. Just did not feel like it. Too many people in the store, probably.

Re: help me
Did you go to the store and just come back?

Re: help me
No, I never went. People here tend to shop on Saturday, so I go any other day.

Re: help me
I got that number theory book. It is surely a great "Recreation", but would it help?

Re: help me
Very good, you are a clever fellow. Do not be fooled by the term recreational: those are very serious problems. Take a look at it; if you do not like it I will try to find another.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=242863","timestamp":"2014-04-21T02:26:48Z","content_type":null,"content_length":"35685","record_id":"<urn:uuid:395209cc-e40a-4999-a478-ac94cd033500>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Application: polynomial time algorithm for simultaneous Diophantine approximation

Throughout this section, let $\alpha_1, \ldots, \alpha_n$ be real numbers and let $\epsilon > 0$. The problem is to find integers $q \ge 1$ and $p_1, \ldots, p_n$ such that $|q\alpha_i - p_i| \le \epsilon$ for all $i$. The question is, for what bound on $q$ can we
- guarantee that there exists a solution;
- find a solution in polynomial time?

Dirichlet's Theorem provides an answer to the existence question:

Theorem 3.1 (Dirichlet). There exists a solution to the above problem with $1 \le q \le \epsilon^{-n}$.

Recall our second proof of Dirichlet's theorem, which used Minkowski's theorem. We examined the $(n+1) \times (n+1)$ matrix whose first $n$ columns are the standard unit vectors and whose last column is $(-\alpha_1, \ldots, -\alpha_n, c)^T$ for a small constant $c > 0$. We took integer linear combinations of the columns with coefficients $p_1, \ldots, p_n, q$, obtaining lattice vectors of the form $(p_1 - q\alpha_1, \ldots, p_n - q\alpha_n, qc)$; now we wish to construct such a short vector (not just guarantee its existence). From part (b) of Theorem 2.11, we can find a Lovász-reduced basis for the lattice spanned by the columns of this matrix, and that gives a vector $v$ with $\|v\| \le 2^{n/4} c^{1/(n+1)}$, since the determinant of the lattice is $c$. For this to be less than $\epsilon$, we choose $c = 2^{-n(n+1)/4} \epsilon^{n+1}$. So for this value of $c$, the last coordinate of $v$ gives $q \le \epsilon / c = 2^{n(n+1)/4} \epsilon^{-n}$: slightly worse than Dirichlet's bound, but achieved in polynomial time.

Varsha Dani 2003-07-25
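To make the lattice concrete, here is a sketch of the basis one would feed to an LLL routine. The function `lll_reduce` below is a placeholder for any off-the-shelf implementation (e.g. one from the fpylll library); it is not defined here, and the constant $c$ matches the derivation above:

```python
import numpy as np

def dirichlet_basis(alpha, eps):
    """Columns span the lattice used above: the integer combination with
    coefficients p_1..p_n, q is (p_1 - q*a_1, ..., p_n - q*a_n, q*c),
    with c chosen so an LLL-short vector has all entries below eps."""
    n = len(alpha)
    c = 2.0**(-n*(n+1)/4) * eps**(n+1)
    B = np.eye(n + 1)
    B[:n, n] = -np.asarray(alpha, dtype=float)  # last column: (-a_1, ..., -a_n, c)
    B[n, n] = c
    return B, c

# Placeholder usage (lll_reduce is assumed, not defined here):
#   B, c = dirichlet_basis(alpha, eps)
#   v = lll_reduce(B)[0]              # a short nonzero lattice vector
#   q = round(v[-1] / c)              # q <= 2^{n(n+1)/4} * eps^{-n}
#   p = [round(v[i] + q*alpha[i]) for i in range(len(alpha))]
```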
{"url":"http://people.cs.uchicago.edu/~laci/reu03/notes10/node3.html","timestamp":"2014-04-21T07:06:43Z","content_type":null,"content_length":"8343","record_id":"<urn:uuid:cf758330-3382-4870-b3ac-0b2e6e556e2f>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
st: reg3: averaging across K equations and testing that average coeff equals zero

From: Guy Grossman <guygrossman1@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: st: reg3: averaging across K equations and testing that average coeff equals zero
Date: Sat, 9 Jul 2011 13:01:55 -0400

Hi - I am hoping someone can assist me with the following question. I am estimating the following system of equations:

Y1 = alpha_1*T + beta_1*X + e_1
Y2 = alpha_2*T + beta_2*X + e_2
Y3 = alpha_3*T + beta_3*X + e_3
...
Yk = alpha_k*T + beta_k*X + e_k

where the Y are standardized dependent variables, X is an exogenous covariate, and T is a binary endogenous variable, instrumented by Z. I am using reg3 (three-stage estimation). My main interest is in the coefficients on T (the alphas). Y1-Yk are thought to be from the same family of outcomes (say, political participation). I am hoping to conduct the following analysis (similar to the one described in the fantastic Working Paper by Casey, Glennerster and Miguel (2011), "Reshaping Institutions: Evidence on External Aid and Local Collective Action," p. 22):

1. calculate a mean effect index that captures the average relationship between T and the K different outcomes that belong to the same family;
2. calculate the standard error of this index, which depends on the variance of each individual alpha and the covariances between the alphas;
3. test the cross-equation hypothesis that the average index of the K coefficients equals zero.

My Stata code starts with:

global y1 "(q1: y1 t x)"
global y2 "(q2: y2 t x)"
global y3 "(q3: y3 t x)"
global y4 "(q4: y4 t x)"

reg3 $y1 $y2 $y3 $y4, endog(t) exog(z)

I am not sure, however, how to continue from here. Suggestions would be highly appreciated!
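One possible continuation (a sketch added here, not a reply from the thread): after -reg3-, the coefficients and their full cross-equation covariance matrix are stored in e(b) and e(V), so -lincom- can form the average of the four alphas and test it against zero in one step, using the standard [eqname]coefficient syntax. With standardized outcomes, this average is the mean effect index described above.

```stata
reg3 $y1 $y2 $y3 $y4, endog(t) exog(z)

* Mean effect of t across the four (standardized) outcomes, with a
* standard error that uses the cross-equation covariances from e(V):
lincom ([q1]t + [q2]t + [q3]t + [q4]t) / 4
```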
{"url":"http://www.stata.com/statalist/archive/2011-07/msg00372.html","timestamp":"2014-04-17T00:56:08Z","content_type":null,"content_length":"8676","record_id":"<urn:uuid:8a2f8266-1388-4984-979e-ac7a28290591>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
5.1. Open and Closed Sets

In the previous chapters we dealt with collections of points: sequences and series. Each time, the collection of points was either finite or countable, and the most important property of a point, in a sense, was its location in some coordinate or number system. Now we will deal with points, or more precisely with sets of points, in a more abstract setting. The location of points relative to each other will be more important than their absolute location, or size, in a coordinate system. Therefore, concepts such as addition and multiplication will not work anymore, and we will have to start, in a sense, at the beginning again.

All of the previous sections were, in effect, based on the natural numbers. Those numbers were postulated as existing, and all other properties - including other number systems - were deduced from those numbers and a few principles of logic. We will now proceed in a similar way: first, we need to define the basic objects we want to deal with, together with their most elementary properties. Then we will develop a theory of those objects and call it topology.

Definition 5.1.1: Open and Closed Sets
A set U ⊆ R is called open if for each x ∈ U there exists an ε > 0 such that the interval (x − ε, x + ε) is contained in U. Such an interval is often called an ε-neighborhood of x, or simply a neighborhood of x. A set F is called closed if the complement of F, R \ F, is open.

Examples 5.1.2:
Which of the following sets are open, closed, both, or neither?
1. The intervals (-3, 3), [4, 7], (-4, 5], (0, ∞)
2. The sets R (the whole real line) and ∅ (the empty set)
3. The sets {1, 1/2, 1/3, 1/4, 1/5, ...} and {0, 1, 1/2, 1/3, 1/4, ...}

It is fairly clear that when combining two open sets (either via union or intersection) the resulting set is again open, and the same statement should be true for closed sets. What about combining infinitely many sets?

Proposition 5.1.3: Unions of Open Sets, Intersections of Closed Sets
- Every union of open sets is again open.
- Every intersection of closed sets is again closed.
- Every finite intersection of open sets is again open.
- Every finite union of closed sets is again closed.

How complicated can an open or closed set really be? The basic open (or closed) sets in the real line are the intervals, and they are certainly not complicated. As it will turn out, open sets in the real line are generally easy, while closed sets can be very complicated. The worst-case scenario for the open sets, in fact, is given in the next result, and we will concentrate on closed sets for much of the rest of this chapter.

Theorem 5.1.4: Open Sets are Countable Unions of Disjoint Open Intervals
Every open set U ⊆ R is the union of countably many disjoint open intervals.

Next we need to establish some relationship between topology and our previous studies, in particular sequences of real numbers. We shall need the following definitions:

Definition 5.1.5: Boundary, Accumulation, Interior, and Isolated Points
Let S be an arbitrary set in the real line R.
1. A point b ∈ R is called a boundary point of S if every non-empty neighborhood of b intersects S and the complement of S. The set of all boundary points of S is called the boundary of S, denoted by bd(S).
2. A point s ∈ S is called an interior point of S if there exists a neighborhood of s completely contained in S. The set of all interior points of S is called the interior, denoted by int(S).
3. A point t ∈ S is called an isolated point of S if there exists a neighborhood U of t such that U ∩ S = {t}.
4. A point r ∈ R is called an accumulation point of S if every neighborhood of r contains infinitely many distinct points of S.
Examples 5.1.6:
- What is the boundary and the interior of (0, 4), [-1, 2], R, and ∅? Which points are isolated and accumulation points, if any?
- Find the boundary, interior, isolated and accumulation points, if any, for the set {1, 1/2, 1/3, ...}.

Here are some results that relate these various definitions with each other.

Proposition 5.1.7: Boundary, Accumulation, Interior, and Isolated Points
- Let S ⊆ R. Then each point of S is either an interior point or a boundary point.
- Let S ⊆ R. Then bd(S) = bd(R \ S).
- A closed set contains all of its boundary points. An open set contains none of its boundary points.
- Every non-isolated boundary point of a set S ⊆ R is an accumulation point of S.
- An accumulation point is never an isolated point.

Finally, here is a theorem that relates these topological concepts with our previous notion of sequences.

Theorem 5.1.8: Closed Sets, Accumulation Points, and Sequences
- A set S ⊆ R is closed if and only if every Cauchy sequence of elements in S has a limit that is contained in S.
- Every bounded, infinite subset of R has an accumulation point.
- If S is closed and bounded, and (x_n) is a sequence in S, then there exists a subsequence of (x_n) that converges to an element of S.

Interactive Real Analysis, ver. 1.9.5 (c) 1994-2007, Bert G. Wachsmuth
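A quick numerical illustration of Definition 5.1.5 for the set S = {1, 1/2, 1/3, ...} from Examples 5.1.6 (a sketch; a finite truncation stands in for the infinite set):

```python
# S = {1/n : n = 1..N}.  Each 1/n is isolated: the nearest other element is
# at distance min(1/n - 1/(n+1), 1/(n-1) - 1/n) > 0, so a small enough
# neighborhood of 1/n meets S only in {1/n}.  By contrast, every
# eps-neighborhood of 0 contains all 1/n with n > 1/eps -- infinitely many
# points -- so 0 is an accumulation point of S (note 0 itself is not in S).
N = 10_000
S = [1.0/n for n in range(1, N + 1)]

n = 7
gap = min(abs(1.0/n - s) for s in S if s != 1.0/n)
print(gap > 0)                        # True: 1/7 is an isolated point of S

eps = 1e-3
print(sum(1 for s in S if s < eps))   # 9000 points of S inside (0 - eps, 0 + eps)
```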
{"url":"http://pirate.shu.edu/~wachsmut/ira/topo/open.html","timestamp":"2014-04-19T09:22:58Z","content_type":null,"content_length":"12418","record_id":"<urn:uuid:c06a4973-fbbb-4326-a0ce-a0f9c5f91106>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
CFD: Computational Fluid Dynamics This CFD (computational fluid dynamics) project develops parallel algorithms and software to enable accurate simulation of unsteady incompressible fluid flows in general three-dimensional geometries. The difficulty with this class of problems stems from a number of features of the governing Navier-Stokes equations. First, the highest order derivative is typically multiplied by a small number (the inverse of the Reynolds number). This singular perturbation results in thin boundary layers and gives rise to disparate length scales to be resolved by the numerical grid. Second, the equations are nonlinear, which leads to thin internal layers having unknown and possibly time varying positions. Finally, the incompressibility constraint must be satisfied at all times, implying global coupling of degrees-of-freedom at each time step.
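The global coupling imposed by the incompressibility constraint is what projection (pressure-correction) schemes isolate: after an explicit velocity update, a single Poisson solve pushes the field back onto the divergence-free subspace. A minimal sketch of that projection step on a doubly periodic 2-D grid, using FFTs for the Poisson solve (illustrative only; this is not the project's actual solver, which targets general geometries where FFTs do not apply):

```python
import numpy as np

def project_divergence_free(u, v, dx):
    """Remove the divergent part of a periodic 2-D velocity field (u, v):
    solve the Poisson problem  lap(phi) = div(u)  spectrally, then subtract
    grad(phi).  This enforces div(u) = 0, the incompressibility constraint."""
    n = u.shape[0]
    k = 2*np.pi*np.fft.fftfreq(n, d=dx)        # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                             # avoid 0/0; the mean mode has zero divergence

    div = np.fft.fft2(u)*1j*kx + np.fft.fft2(v)*1j*ky   # spectral divergence
    phi = div / (-k2)                                   # lap(phi) = div  =>  -k^2 phi^ = div^
    u = u - np.real(np.fft.ifft2(1j*kx*phi))            # u <- u - d(phi)/dx
    v = v - np.real(np.fft.ifft2(1j*ky*phi))            # v <- v - d(phi)/dy
    return u, v
```

The Poisson solve couples every grid point to every other one, which is exactly the global, per-time-step coupling the project description refers to.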
{"url":"http://www.mcs.anl.gov/project/cfd-computational-fluid-dynamics","timestamp":"2014-04-17T16:46:37Z","content_type":null,"content_length":"25194","record_id":"<urn:uuid:29648195-be30-4043-89e6-944a288d7648>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00437-ip-10-147-4-33.ec2.internal.warc.gz"}