December 6th 2012, 12:26 PM #1

Three people are picked at random. What is the probability that all three will have different birthdays? N.B. Take all months to be equally likely. The answer is 55/72. Please explain!

December 6th 2012, 12:37 PM #2

Re: probability

I think you mean: what is the probability that their birthdays will be in different months? It doesn't matter what month the first person's birthday is in. The second person has 11 months their birthday may be in, and the third person has 10 months their birthday can be in. So we have:

(11/12) × (10/12) = 110/144 = 55/72

For future reference, this forum is for introductions. Math queries should be posted in the appropriate forum; your questions will be more likely to be answered there.

December 6th 2012, 12:37 PM #3

Re: probability

Please post in the appropriate forum; there is a list. I assume Statistics?
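The same product extends to any group size. A quick sketch in Python (the function name is mine, not from the thread), assuming every month is equally likely:

```python
from fractions import Fraction

def all_different_months(k, months=12):
    """Probability that k randomly chosen people have birthdays in
    k different months, with every month equally likely."""
    p = Fraction(1)
    for i in range(k):
        # the (i+1)-th person must avoid the i months already taken
        p *= Fraction(months - i, months)
    return p
```

For three people this gives 1 × 11/12 × 10/12 = 55/72, matching the answer above.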
{"url":"http://mathhelpforum.com/new-users/209214-probability.html","timestamp":"2014-04-18T10:12:17Z","content_type":null,"content_length":"35975","record_id":"<urn:uuid:0ef9e54a-01d4-440f-a2af-9ab18f7856d0>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
Intermediate AutoCAD (ENGR-1012)

1. Course title: Intermediate AutoCAD
2. Prerequisite: ENGR 1011 or consent of the instructor.
3. Textbook: Your AutoCAD 2000 Instructor by James A. Leach. McGraw-Hill Publishers. ISBN 0-07-234761-9
4. Catalog Description: This is the second of a two-part course. It continues the review of basic AutoCAD for Windows commands used to produce two-dimensional drawings, and provides hands-on instruction in using this industry-standard software to create three-dimensional engineering drawings and solid models.
5. Course objective: To introduce students to advanced features of the widely used computer-aided design software AutoCAD so that students can effectively:
   a. Create, insert, and edit blocks with attributes.
   b. Extract attribute data from a drawing.
   c. Use the display controls needed for viewing three-dimensional drawings.
   d. Use three-dimensional drawing aids.
   e. Produce three-dimensional representations.
   f. Apply basic concepts of rendering.
6. General notes:
   a. The instructor should note that the class may consist of some students registered for credit and some as non-credit.
   b. The evaluation procedure is mandatory for the grade of credit students and optional for non-credit students.
   c. Non-credit students are given a certificate at the end of the course after successful completion of all instructor-assigned work.
7. Course Outline:
   Block attributes, editing and extracting: Chapter 22
   XYZ point filters: Chapter 20
   Pictorial views: Chapter 25 (review)
   Auxiliary views: Chapter 27 (review)
   Dimensioning 3-D parts: Chapter 28
   Advanced paper space techniques: Chapter 33
   3-D modeling basics: Chapter 34
   Display controls for 3-D drawings: Chapter 26
   User coordinate systems (UCS): Chapter 36
   Wireframe modeling: Chapter 37
   Solid modeling construction: Chapter 38
   Surface modeling: Chapter 40
   Rendering concepts (only): Chapter 41
8. Evaluation: The emphasis should be on measuring the level of expertise achieved in applying the software commands to successfully reproduce 2-dimensional drawings. The details of grade determination are flexible; however, one recommended procedure would be to weight the four components as follows:
   Labs/Homework: 15%-20%
   Tests (two): 30%
   Project: 20%
   Final Examination: 30%-35%

Effective date: January, 2002
{"url":"http://depts.gpc.edu/~mcse/CourseDocs/engr/1012_tg_2002_Jan.htm","timestamp":"2014-04-18T13:06:59Z","content_type":null,"content_length":"17929","record_id":"<urn:uuid:bdd60d25-26da-4bc4-bceb-ae2e585616d9>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
Resolved: sorting variables c++

smartcat99s wrote: Couldn't you toss them in an array, sort them, and yank them back out?

CPP / C++ / C Code:
if (var1 < var2) {
}
Notice that this will take forever to come up with all the possible combinations. I'm learning arrays, but I don't know how to sort them yet.

I think I've resolved it. Here is the assignment:

Write a program that takes five numbers from the user and calls the following functions:
isEqual5() - determines if the 5 numbers are all identical.
isEqual4() - determines if any 4 numbers are identical.
isEqual3() - determines if any 3 numbers are identical.
No sorting is allowed in any of these functions, including main. All comparisons must be performed within the functions, which return answers, true or false. The input and output must remain in the main function.

And here is what I got:

#include <iostream>
using namespace std;

// prototype functions for each event
bool isEqual5(int, int, int, int, int);
bool isEqual4(int, int, int, int, int);
bool isEqual3(int, int, int, int, int);

int main()
{
    // initialize variables
    int n1, n2, n3, n4, n5;
    bool three, four, five;

    cout << "Please enter five numbers." << endl;
    cin >> n1 >> n2 >> n3 >> n4 >> n5;

    // function calls with variable assignments
    five = isEqual5(n1, n2, n3, n4, n5);
    four = isEqual4(n1, n2, n3, n4, n5);
    three = isEqual3(n1, n2, n3, n4, n5);

    // output statements for true functions
    if (five == 1)
        cout << "There are five matching numbers.";
    else if (four == 1)
        cout << "There are four matching numbers.";
    else if (three == 1)
        cout << "There are three matching numbers.";
    return 0;
}

// comparison function for instance of five equal numbers
bool isEqual5(int x1, int x2, int x3, int x4, int x5)
{
    int a = 0;
    if (a == 5)
        return true;
    return false;
}

// comparison function for instance of four equal numbers
bool isEqual4(int x1, int x2, int x3, int x4, int x5)
{
    int a = 0;
    if (a == 4)
        return true;
    return false;
}

// comparison function for instance of three equal numbers
bool isEqual3(int x1, int x2, int x3, int x4, int x5)
{
    int a = 0;
    if (a == 3)
        return true;
    return false;
}

You think that solves the problem? Did your teacher not teach any sorting algorithms?
He will, we just haven't gotten to that yet. If I were allowed to use an array I would do a bubble sort, but we're not allowed to sort, so what are you going to do?

Having read the assignment, I realise that I was wrong. You are told not to sort because it is a waste of time. You need to sit and think about what you are doing before you start writing code. For instance:

bool isEqual5(int x1, int x2, int x3, int x4, int x5)
{
    if (x1 != x2)
        return false; // if these two numbers are not equal, all five numbers are not equal
    else if (x1 != x3)
        return false;
    else if (x1 != x4)
        return false;
    else if (x1 != x5)
        return false;
    else
        return true;
}

This code was based on the simple fact that if any pair of numbers in the set of 5 is not equal, then the function must return false. By searching for non-equal pairs, we can shortcut the search as soon as we find one.

Next, we are looking for 4 out of 5 positive matches. This means that we are looking for a set of 5 numbers, 4 of which are equal.
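A working version of the counting idea the thread is building toward could look like this. It is a sketch, not the assignment's official solution, and countMatches is a helper name I introduced:

```cpp
#include <cassert>

// Hypothetical helper: how many of the five values equal v?
int countMatches(int v, int x1, int x2, int x3, int x4, int x5) {
    return (v == x1) + (v == x2) + (v == x3) + (v == x4) + (v == x5);
}

// True if at least four of the five numbers are identical.
// If four or more values are equal, at most one position differs,
// so the repeated value must appear among x1 and x2.
bool isEqual4(int x1, int x2, int x3, int x4, int x5) {
    return countMatches(x1, x1, x2, x3, x4, x5) >= 4 ||
           countMatches(x2, x1, x2, x3, x4, x5) >= 4;
}

// True if at least three of the five numbers are identical:
// at most two positions differ, so checking x1, x2 and x3 suffices.
bool isEqual3(int x1, int x2, int x3, int x4, int x5) {
    return countMatches(x1, x1, x2, x3, x4, x5) >= 3 ||
           countMatches(x2, x1, x2, x3, x4, x5) >= 3 ||
           countMatches(x3, x1, x2, x3, x4, x5) >= 3;
}
```

No sorting is used, which satisfies the assignment's constraint.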
{"url":"http://www.maximumpc.com/forums/viewtopic.php?f=23&t=84143","timestamp":"2014-04-20T02:41:41Z","content_type":null,"content_length":"45832","record_id":"<urn:uuid:4161d78b-b728-4230-99a9-2aac89f804fa>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the standard form of the equation y = 7x + 9 with integer coefficients?
What is the standard form of the equation y = -5x + 8 with integer coefficients?
What is the standard form of the equation y = 7x - 8 with integer coefficients?
What is the standard form of the equation y = (10/9)x + 4 with integer coefficients?
What is the standard form of the equation y = -(5/6)x - 3 with integer coefficients?
What is the standard form of the equation y = 9x + 3/4 with integer coefficients?
What is the standard form of the equation y = -(5/6)x - 4/6 with integer coefficients?
What is the standard form of the equation y = (2/3)x + 4/3 with integer coefficients?
Write an equation of the line (in standard form using integer coefficients), which passes through the points (2, -3) and (-4, 5).
What is the equation of the line, which passes through the point (-9, 6) with a slope of 5, in standard form with integer coefficients?
What is the equation of the line, which passes through the point (-4, 6) with a slope of -5, in standard form with integer coefficients?
What is the equation of the line (in standard form with integer coefficients), which passes through the point (3, 3) and has a slope of 2?
What is the equation of the line (in standard form with integer coefficients) that passes through the point (2, 4) and has a slope of -4/7?
Write the equation of the line (in standard form with integer coefficients) that passes through the point (-4/5, 5) with a slope of 5/6.
What is the equation of the line (in standard form with integer coefficients) that passes through the point (5, -3/2) with a slope of 2/3?
What is the equation of the line (in standard form with integer coefficients), which passes through the point (-4, -4) with a slope of 3/4?
What is the standard form of the equation y = -(2/3)x + 2 with integer coefficients?
What is the standard form of the equation of the line that passes through the points (4, 5) and (6, 15) with integer coefficients?
What is the standard form of the equation of the line that passes through the points (3, 2) and (-3, 3) with integer coefficients?
What is the equation of the line (in standard form with integer coefficients), which passes through the point (-6, -5) with a slope of 5/6?
What is the standard form of the equation y = -(2/3)x + 4 with integer coefficients?
What is the standard form of the equation y = -(2/3)x - 1/3 with integer coefficients?
Write an equation of the line (in standard form using integer coefficients), which passes through the points (5, -6) and (-7, 8).
Which of the following represents the standard form of the equation of a line?
Identify the equation in standard form of the line in the graph with integer coefficients.
What is the equation of the line (in standard form with integer coefficients), which passes through the point (4, 2) and has a slope of 5?
What is the equation of the line (in standard form with integer coefficients) that passes through the point (2, 4) and has a slope of -5/9?
What is the equation of the line, which passes through the point (-3, 5) with a slope of 3, in standard form with integer coefficients?
Write the equation of the line (in standard form with integer coefficients) that passes through the point (-4/5, 4) with a slope of 5/6.
What is the equation of the line, which passes through the point (-3, 5) with a slope of -4, in standard form with integer coefficients?
What is the standard form of the equation y = 4x + 8 with integer coefficients?
What is the standard form of the equation y = -9x + 7 with integer coefficients?
What is the standard form of the equation y = 7x - 2 with integer coefficients?
What is the standard form of the equation y = (8/7)x + 8 with integer coefficients?
What is the standard form of the equation y = -(3/4)x - 3 with integer coefficients?
What is the standard form of the equation y = 7x + 5/6 with integer coefficients?
What is the standard form of the equation y = (3/4)x + 5/4 with integer coefficients?
What is the equation of the line (in standard form with integer coefficients) that passes through the point (2, -3/2) with a slope of 2/3?
What is the standard form of the equation of the line that passes through the points (4, 2) and (6, 6) with integer coefficients?
What is the standard form of the equation of the line that passes through the points (4, 5) and (-3, 6) with integer coefficients?
What is the standard form of the equation of the line in the graph?
What is the standard form of the equation of the line in the graph?
What is the standard form of the equation of the line in the graph?
Identify the equation of the line in the graph in standard form (with integer coefficients).
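All of the slope-intercept items above follow one mechanical recipe: clear the denominators, move the x term across, and reduce. A sketch (the helper name is mine, and this uses one common convention for "standard form", Ax + By = C with a non-negative x coefficient):

```python
from fractions import Fraction
from math import gcd

def to_standard_form(m, b):
    """Rewrite y = m*x + b as (A, B, C) with integers A*x + B*y = C."""
    m, b = Fraction(m), Fraction(b)
    # Multiply through by the least common multiple of the denominators.
    lcm = m.denominator * b.denominator // gcd(m.denominator, b.denominator)
    A = -m.numerator * (lcm // m.denominator)
    B = lcm
    C = b.numerator * (lcm // b.denominator)
    # Make the x coefficient non-negative, by convention.
    if A < 0 or (A == 0 and B < 0):
        A, B, C = -A, -B, -C
    return A, B, C
```

For example, y = (2/3)x + 4/3 becomes 2x - 3y = -4, and y = 7x + 9 becomes 7x - y = -9.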
{"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxaxbgdkaxkjkdj&.html","timestamp":"2014-04-16T07:19:39Z","content_type":null,"content_length":"92528","record_id":"<urn:uuid:0919f238-0156-4dd7-889b-7ef083aeac9b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
Linked: Evocative or Explanatory?

Friday, October 17, 2003, 9:30 a.m.

I just finished Linked: The New Science of Networks, Albert-László Barabási's book about the study of networks -- specifically scale-free networks and their use in describing the Internet, biological systems, social networks, and the economy. It was an interesting but rather unsatisfying book.

The writing was at times forced. The opening chapters discuss mathematical models of random networks. Each section begins with a vignette about Paul Erdős or some other bit of history. They had some interest, but served primarily as an interruption to the main narrative. The writing is best when it sticks to the science. There's also some interesting personal background about the author's own exploration of the field, but it's not as good as Alan Guth's book The Inflationary Universe.

My main gripe, however, was never being sure whether the mathematical model of a scale-free network -- i.e. one in which power-law distributions exist and there is no typical node, no characteristic scale -- was intended to be just a mathematical convenience or whether it was really a representative model of the networks in question. Walter Willinger and others have a critique of the Barabási-Albert model that suggests my uneasiness was well-founded. Their paper, Scaling phenomena in the Internet: Critically examining criticality (Proceedings of the National Academy of Sciences, Feb. 19, 2002):

bring[s] to bear a simple validation framework that aims at testing whether a proposed model is merely evocative, in that it can reproduce the phenomenon of interest but does not necessarily capture and incorporate the true underlying cause, or indeed explanatory, in that it also captures the causal mechanisms (why and how, in addition to what).

The Barabási-Albert model is widely known. In addition to Linked, it has been described by papers in Science and Nature.
It explains the evolution of a graph relying on two simple phenomena -- incremental growth and preferential attachment. In the resulting model, the number of links per node follows a power law distribution; we expect to see many nodes with few links and a few nodes with very many links. Willinger et al. conclude that this model is evocative: In particular, we examine a number of recently proposed dynamical models of Internet traffic and Internet topology at the AS level that explain the entirely unexpected scaling behavior in terms of critical phenomena. In the process, we offer conclusive evidence that even though the models produce the self-similar scaling phenomena of interest, they do not explain why or how the observed phenomena arise in the Internet. Some of these criticality-based explanations can still be put to good use. I'll have to read the article more carefully.
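For readers who want to see the model rather than the debate: incremental growth plus preferential attachment fits in a few lines of code. This is my own simplified sketch, not code from the book or the paper:

```python
import random

def barabasi_albert(n, m, seed=None):
    """Grow a graph by incremental growth and preferential attachment.

    Start from a small clique; each new node attaches m edges to
    existing nodes chosen proportionally to their current degree.
    Returns an adjacency list (dict of node -> set of neighbours).
    """
    rng = random.Random(seed)
    # Seed graph: a clique on m + 1 nodes.
    adj = {i: set() for i in range(m + 1)}
    # `endpoints` holds one entry per edge endpoint, so drawing
    # uniformly from it samples nodes proportionally to degree.
    endpoints = []
    for i in range(m + 1):
        for j in range(i + 1, m + 1):
            adj[i].add(j); adj[j].add(i)
            endpoints += [i, j]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:  # m distinct, degree-biased targets
            chosen.add(rng.choice(endpoints))
        adj[new] = set()
        for t in chosen:
            adj[new].add(t); adj[t].add(new)
            endpoints += [new, t]
    return adj
```

Running this for large n and plotting the degree counts reproduces the heavy-tailed, power-law-like distribution the book describes: many low-degree nodes and a few highly connected hubs.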
{"url":"https://www.python.org/~jeremy/weblog/031017b.html","timestamp":"2014-04-20T05:52:25Z","content_type":null,"content_length":"4166","record_id":"<urn:uuid:5009b25f-a890-44c7-86d9-55af0bd8cea1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
Math tricks for everyone

I would like to have this thread dedicated to showing math tricks from all areas of mathematics. Hopefully the title has aroused your interest and you have an interesting trick you would like to share with everyone. Let me start by showing one of my favorite tricks, perhaps something that has not occurred to many of you.

Start with a general quadratic; do not set it equal to zero, set it equal to bx + c:

ax^2 = bx + c

Multiply everything by 4a:

4(ax)^2 = 4abx + 4ac

Subtract 4abx from both sides:

4(ax)^2 - 4abx = 4ac

Add b^2 to both sides:

4(ax)^2 - 4abx + b^2 = b^2 + 4ac

Factor the left-hand side:

(2ax - b)^2 = b^2 + 4ac

Take square roots of both sides:

2ax - b = +-sqrt(b^2 + 4ac)

Add b to both sides:

2ax = b +- sqrt(b^2 + 4ac)

Divide by 2a, with a NOT zero:

x = [b +- sqrt(b^2 + 4ac)]/(2a)

This quadratic formula works perfectly fine for quadratic equations; just make sure you isolate the ax^2 term BEFORE you identify a, b, and c.

1) Notice that this version has two fewer minus signs than the more popular version.
2) The division in the derivation is done AT THE LAST STEP instead of at the first step as in the more popular derivation, avoiding 'messy' fractions.
3) In this derivation there was no need to split the numerator and denominator into separate radicals.
4) Writing a program using this version, instead of the more popular version, requires less memory, since there are fewer 'objects' the program needs to keep track of. (Zero is absent, and there are two fewer minus signs.)

I hope you find this interesting and I look forward to seeing your tricks. The method of completing the square by multiplying by 4a and adding b^2 I learned from Niven and Zuckerman in their book Elementary Number Theory; however, it was an example they used on a congruence, and they did not apply it to the quadratic formula.
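The variant formula is easy to check numerically. A small sketch (names are mine; it assumes real roots):

```python
import math

def solve_quadratic(a, b, c):
    """Roots of a*x^2 = b*x + c via x = (b +/- sqrt(b^2 + 4ac)) / (2a).

    Note the sign pattern: because the equation is ax^2 = bx + c
    rather than ax^2 + bx + c = 0, both minus signs of the familiar
    formula become plus signs.
    """
    if a == 0:
        raise ValueError("a must be nonzero")
    disc = b * b + 4 * a * c
    if disc < 0:
        raise ValueError("no real roots")
    r = math.sqrt(disc)
    return (b + r) / (2 * a), (b - r) / (2 * a)
```

For x^2 = x + 6 (a = 1, b = 1, c = 6) this returns 3 and -2, which are indeed the roots of x^2 - x - 6 = 0.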
{"url":"http://www.physicsforums.com/showthread.php?p=3380659","timestamp":"2014-04-19T04:44:16Z","content_type":null,"content_length":"74668","record_id":"<urn:uuid:5e773409-6833-4cdb-b546-7e7a7504acf3>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Dickinson, TX Prealgebra Tutors

I am an experienced and effective tutor who truly loves to see a student's confidence and knowledge grow! I have a B.S. in Biology and absolutely love math and reading. I have experience tutoring in phonics, reading, reading comprehension, and math, including elementary math, pre-algebra, algebra, and geometry.
12 Subjects: including prealgebra, reading, English, biology

Hi, my name is Richard. I am a graduate student at the University of Houston and a mathematics tutor at San Jacinto College. I have 3 to 4 years of experience in mathematics.
16 Subjects: including prealgebra, calculus, geometry, algebra 1

...I really enjoy it and I always receive great feedback from my clients. I consider my client's grade as if it were my own grade, and I will do whatever it takes to make sure you get it, and at the same time make sure our sessions are easy and enjoyable. I can tutor almost any subject, but my spe...
38 Subjects: including prealgebra, English, calculus, reading

...I have worked with a variety of students and have a 5.0 rating over 300 hours of work. I am able to introduce many different methods of instruction and studying because of my background in education and teaching. As a certified teacher in Texas, I have ample experience with TAKS testing.
24 Subjects: including prealgebra, chemistry, English, reading

...I can almost guarantee that you will have "Aha! So that's how it works!" moments as algebra becomes more familiar and understandable. Algebra 2 builds on the foundation of algebra 1, especially in the ongoing application of the basic concepts of variables, solving equations, and manipulations such as factoring.
20 Subjects: including prealgebra, writing, algebra 1, algebra 2
{"url":"http://www.purplemath.com/Dickinson_TX_prealgebra_tutors.php","timestamp":"2014-04-17T01:23:24Z","content_type":null,"content_length":"24148","record_id":"<urn:uuid:96b741a8-4c3d-487d-8fd2-2f416d36fae0>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
hi! can anybody explain relativity briefly and clearly?

Relativity is... in short words... that situations or measurements are different for each reference system.

This arises from the Maxwell equations, because in any inertial reference frame the Maxwell equations indicate that c (the speed of light) is the same (in vacuum, of course). If we strongly believe in this, and "constant speed of light" = distance/time, then we will have relative time, because we do not agree on distance when in different reference frames. This is the origin. Just think about it more.

If two objects are moving in the same direction with the same velocity, a, their relative velocity is zero! If the two objects were moving in opposite directions with the same velocity, a, their relative velocity would be 2a!

According to Einstein's theory of relativity, all inertial frames see the speed of light as the same, i.e. 3x10^8 m/s. Even if the two frames have some difference in their velocities, they'd see a beam of light coming towards them at the same speed. This happens because no frame can be preferred over another frame.
Relativity comes from the word relative. This means that length, time and velocity are different according to any observer in their frame of reference. The length, time and velocity measured by an observer in a stationary frame of reference are different from those measured by an observer in a moving frame of reference. All come from the Doppler effect, but the Doppler effect is only applicable for velocities very much lower than the speed of light, 299 792 458 m/s. Please watch this video on YouTube: http://www.youtube.com/watch?v=wteiuxyqtoM

The theory of relativity stems from two postulates (assumptions): that light travels at a constant speed, c, and that the laws of physics stay the same in all reference frames (from all points of view, in any type of motion). Let us imagine we have a clock comprising two mirrors within which a beam of light bounces back and forth (a tick could be said to be the beam bouncing off a mirror). Now let us imagine someone is holding the clock and that I am looking at them. If they start to run, then from my reference frame the beam of light has to travel further (since it is traveling both sideways and vertically). However, from the reference frame of the person with the clock, the beam of light only travels vertically (since relative to him the clock is not moving).

Now, it seems logical that for both people the light will take the same time to bounce between the mirrors. This is not the case, however; if it were, the light would have to speed up in some way to allow it to cover a greater distance in the same time. Therefore, according to me (the person watching), the other person has slowed down! They have actually experienced less time than I have. This is the interesting bit, though: from the other postulate (that you cannot tell if you are in motion because there is no change in the laws of physics), from the other person's point of view he must not be able to tell he is moving.
Therefore the same thing must happen from his point of view (so that neither can tell who is moving); therefore, from his point of view it is the person watching who slows down. The theory shows that there is no such thing as absolute time or space, and that events can appear to happen in a different order to different people.

Sorry if that was a little rushed (you did ask, though). If you want to understand it better I would suggest reading a book called 'Why Does E=mc^2?' by Prof. Brian Cox. It explains it really clearly and simply.

P.S. That was only an explanation of special relativity; explaining general relativity would take too long (it's explained in that book). Hope this helps.

Special relativity involves making a 4-dimensional manifold from space and time and then putting a minus sign in the appropriately generalized Pythagoras's theorem.
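The light-clock argument above can be made quantitative. A minimal sketch (the function name is mine, not from the thread) of the resulting time-dilation factor gamma = 1/sqrt(1 - v^2/c^2):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_factor(v):
    """Factor by which the watcher sees the runner's clock tick slowly.

    Derived from the light clock: the diagonal beam path is longer
    than the vertical one by exactly 1/sqrt(1 - (v/c)**2).
    """
    if not 0 <= v < C:
        raise ValueError("speed must satisfy 0 <= v < c")
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)
```

At everyday speeds gamma is essentially 1, which is why we never notice the effect; at 0.6c it is already 1.25.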
{"url":"http://openstudy.com/updates/4e2a00aa0b8b3d38d3b99f1f","timestamp":"2014-04-20T08:19:55Z","content_type":null,"content_length":"47919","record_id":"<urn:uuid:c0dfc71b-4b0e-4a74-8971-198401491da9>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
data Event t a

Constant-time events. A value of type a starts at some time and lasts for some time.

Instances: Functor (Event t), (Eq t, Eq a) => Eq (Event t a), (Show t, Show a) => Show (Event t a)

eventEnd :: Num t => Event t a -> t

End point of an event (start time plus duration).

temp :: a -> Score a

temp constructs just an event: a value of type a that lasts for one time unit and starts at zero.

event :: Dur -> a -> Score a

Creates a single event: event dur a. The event lasts for the given time and contains the value a.

(=:/) :: Score a -> Score a -> Score a

Truncating parallel composition. The total duration equals the minimum of the two scores. All events that go beyond the limit are dropped.

sustainT :: Dur -> Score a -> Score a

Prolonged events cannot exceed the total score duration. All events are sustained, but those that are close to the end of the score are clipped. It resembles the sustain pedal on a piano: when the score ends, you release the pedal.

Common patterns

slice :: Dur -> Dur -> Score a -> Score a

slice cuts out the piece of the score within the given time interval. For slice t0 t1 m, if t1 < t0 the result is reversed. If t0 is negative or t1 goes beyond dur m, blocks of nothing are inserted so that the duration of the result equals abs (t0 - t1).

alignByZero :: Real t => [Event t a] -> [Event t a]

Shifts all events so that the minimal start time equals zero if the first event has a negative start time.

linfun :: (Ord t, Fractional t) => [t] -> t -> t

Linear interpolation. Can be useful with mapEvents for envelope changes.

linfun [a, da, b, db, c, ...]

a, b, c, ... are values; da, db, ... are the durations of the segments.

linfunRel :: (Ord t, Fractional t) => t -> [t] -> t -> t

With linfunRel you can make a linear interpolation function that has equal distances between points. The first argument gives the total length of the interpolation function and the second argument gives the list of values.
So the call linfunRel dur [a1, a2, a3, ..., aN] is equivalent to:

linfun [a1, dur/N, a2, dur/N, a3, ..., dur/N, aN]

Monoid synonyms

This package heavily relies on Monoids, so there are shortcuts for Monoid methods.

Volume control

withAccent :: VolumeLike a => (Dur -> Accent) -> Score a -> Score a

An accent that depends on the time of a note. Time is relative, so the Score starts at 't = 0' and ends at 't = 1'.

Pitch control

Denotes lower 1-2 and higher 1-2.

Time stretching

Naming conventions: the first part x can be [b | w | h | q | e | s | t | d[x]]

b means brevis (stretch 2)
w means whole (stretch 1)
h means half (stretch $ 1/2)
q means quarter (stretch $ 1/4)
e means eighth (stretch $ 1/8)
s means sixteenth (stretch $ 1/16)
t means thirty-second (stretch $ 1/32)
d[x] means dotted [x] (stretch 1.5 $ x)

Naming conventions are the same as for 'time stretching'.
{"url":"http://hackage.haskell.org/package/temporal-music-notation-0.2.3/docs/Temporal-Music-Score.html","timestamp":"2014-04-19T10:50:17Z","content_type":null,"content_length":"68277","record_id":"<urn:uuid:f92907ff-3d5c-459c-8d48-59e8b0c359f8>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
Reading to Learn Math

Sep 20, 2007

Do you ever take your kids' math tests? It helps me remember what it is like to be a student. I push myself to work quickly, trying to finish in about 1/3 the allotted time, to mimic the pressure students feel. And whenever I do this, I find myself prone to the same stupid mistakes that students make. Even teachers are human.

In this case, it was a multi-step word problem, a barrage of information to stumble through. In the middle of it all sat this statement:

…and there were 3/4 as many dragons as gryphons…

My eyes saw the words, but my mind heard it this way:

…and 3/4 of them were dragons…

What do you think — did I get the answer right? Of course not! Every little word in a math problem is important, and misreading even the smallest word can lead a student astray. My mental glitch encompassed several words, and my final tally of mythological creatures was correspondingly screwy.

But here is the more important question: Can you explain the difference between these two statements?

If Johnny Can't Read, Then He Can't Do Math

To solve word problems, students must be able to read and understand what is written, and they must be able to follow directions. They need to comprehend what they read — to paraphrase it, concentrating on the relevant facts — and then to translate that information into a mathematical expression. Many times, they must be able to "read between the lines" and understand something that is implied, not explicitly stated. When students struggle with word problems, more often than not it is a language issue that confuses them.

Paraphrasing is one of the most important skills we can teach junior high and high school students. Often they want to rush into interpreting and reacting to a text even before they know what it means. We teachers sometimes suffer from the delusion that since a student can read the words on the page, he or she understands what's been read.
But that’s not always true. — Nick Senger

That quote is from an article at Teen Literacy Tips blog. Does a literature teacher have anything useful to say about solving math problems? Well, the fact that word problems are also called story problems should clue us in to a significant connection.

As important as mathematics is, it is a distant second to the need for good reading comprehension. We teachers so often hear students summarize a course by saying, ‘I could do everything except the word problems.’ Sadly, in the textbook of life, there are only word problems. — Herb Gross, quoted by Jerome Dancis in Reading Instruction for Arithmetic Word Problems

[The entire article by Dancis is worth reading, and you may want to explore the rest of his webpage as well. I will be using Supposedly Difficult Arithmetic Word Problems as ratio practice with my MathCounts students later this semester.]

For a simple (yet often confusing) example, consider these two statements. Can you explain the difference?

• Eight divided in half is four.
• Eight divided by one-half is sixteen.

If your students keep a Math Journal, this would be a great writing prompt. An answer is given at the bottom of this post.

Now, Let’s Analyze My Mistake

In my word problem, it turned out there were 56 creatures in all. I got that part of the answer just fine, but then I needed to know how many of those creatures were gryphons. This is how I did it:

…and 3/4 of them were dragons…
4 units = 56
1 unit = 56 ÷ 4 = 14 gryphons

But that was not at all what the problem said. There should have been several more gryphons than dragons. If I had been paying better attention to what I read, this is how I should have solved the problem:

…and there were 3/4 as many dragons as gryphons…
7 units = 56
1 unit = 56 ÷ 7 = 8
4 units = 8 x 4 = 32 gryphons

Just to make the language issue more difficult, consider this: All of the following statements are equivalent.
Compare each statement to the second drawing above (the correct one). Can you see each of them in the drawing?

• There are 3/4 as many dragons as gryphons.
• For every 4 gryphons, there are 3 dragons.
• The ratio of dragons to gryphons is 3:4.
• 4 out of every 7 creatures is a gryphon.
• There are 1/3 more gryphons than dragons.
• There are 25% fewer dragons than gryphons.
• If you tag a creature at random from the group, the probability of choosing a dragon is 3/7.

Can you think of any other ways to say it? This would be another good math journal writing prompt.

Ratio problems like this are some of the most confusing word problems our pre-algebra students will face. The more we can work with them on reading, paraphrasing, and translating these problems into mathematical expressions, the better prepared our students will be to face the word problems they meet in “the textbook of life.”

[Edited to add: This problem follows students beyond middle school. Jackie is struggling to get her high school math students to read carefully. See her post Mis-Reading in Mathematics (and the comments section).]

One Possible Answer to the Question About Dividing by 1/2

When you divide a number in half, you split it into two equal parts. But if you divide a number by 1/2, you are finding how many halves it takes to make that number — that is, you are cutting it into half-size pieces and counting how many there are. And in that case, because each whole thing is two halves, there will be twice as many pieces as the number you started with.

11 comments on “Reading to Learn Math”

1. How timely! My students are doing word problems right now and converting from prose to mathematical expressions is challenging. I’ll watch out for tricky wording like you pointed out.

2. These things are hard to read. Even for good readers. I slow kids down, it probably helps.

3.
I was just working on this (again) with my pre-algebra students. They admitted they just skip the word problems – especially on standardized tests. Our new goal is not to be “tricked” by them. I love your quotes – especially the one by Herb Gross.

sorry – somehow your comment was marked as spam – not anymore!

4. I started out tutoring algebra, then switched to remedial reading using phonics. Once you get into higher math where you need to be able to read the explanations and the word problems, reading is important to math. Without the ability to read well, you’ll never excel in your other subjects.

5. Eight divided in half is four. Eight divided by (one-)half is sixteen. Eight divided in two is four. Eight divided by two is four. (brain explodes)

6. LOL! I did it again this week. My son gets a laugh out of my mistakes on his homework, especially when he got the problem right. Last time, I ignored the word “more” in a MathCounts problem, and this week I missed the word “additional.” You’d think I would have learned by now…

As for comments, Jackie, I have given up on rescuing them from the spam folder. I have been getting way over 100 spam a day, and I just don’t have that much time to sort them. But I did fix your blog link for you!

7. I teach a lot of literacy in my class, even when it seems it’s weird to do so in a math class. It helps when kids actually read with me the problems they have to do. That’ll be especially important for a year in which I have a class full of ELLs. In any case, good post.

8. Very good post. This is where we had many fun discussions in college about the validity of tests and knowing what you are really testing. But the problem is, if we want to prepare our children for life, they need this kind of reasoning as well.

9. You have some good examples here about the critical importance of knowing how to translate from words to math. I also liked the quote about the paraphrasing.
Of course, I like those because my book, “Solving Word Problems,” explains exactly these things (and more)! :-)

The dragons-gryphons sentence is actually a very difficult one. Your mistake was a simple one, but the sentence structure “3/4 as many dragons as gryphons” requires either an ability to manipulate the two objects, dragons and gryphons, in one’s head and understand that dragons = 3/4 gryphons, or to know how to rephrase it to an easy-to-understand sentence. The first is EXTREMELY hard to do and most students will write 3/4 dragons = gryphons. The second can be easily taught! [See my book of course LOL].

To the one with the exploding brain: you can’t divide eight in two. You can only use this grammatical structure for fractions. It took me a minute to understand that this is the issue, but maybe I noticed it because English is not my first language so I’m more sensitive to the translation issue :-)

10. Hi. This is Herb Gross and I am now putting together all of my arithmetic and algebra materials (including textbooks, videos and slide shows) on my website for anyone to use free of charge. The website is just temporary and in a short time it will be made more user-friendly in terms of being able to access items quickly. Please feel free to use the material on my site (www.adjectivenounmath.com) in any ways that you wish. You may email me at hgross3@comcast.net

Please excuse any typos. At age 81 the small print is my nemesis.
[Haskell-cafe] Re: Paths to tree
apfelmus at quantentunnel.de
apfelmus at quantentunnel.de
Tue Jan 30 12:39:09 EST 2007

John Ky wrote:
> I can't know, but it doesn't seem unreasonable that you intend to use
>> the ArcForest as a trie, i.e. an efficient implementation of a set of
>> paths which allows to look up quickly whether a given path (here of type
>> [String]) is in the set or not. So, we have
> For a while, I was thinking what on Earth are you talking about, even while
> I continued reading the rest of the email, but it eventually clicked what
> you were trying to show me - which was something I didn't dare try until I
> got more familiar with Haskell.
> Your examples got me started on dealing with these sorts of complex tree
> structures (or tries as you call them). They made more sense as I spent
> more time reading and rereading them. :)

I think that the important point is that one can think of the trees you
had as things where one can insert and lookup (path,value)-pairs. This
suggests a lot of useful functions like 'insert', 'union' and
'singleton' together with corresponding laws like

   insert k v m == union (singleton k v) m -- left biased union

that are very handy for implementation.

> Now what about 'MapString v', how do we get this? Well, your
>> implementation corresponds to the choice
>> type MapString v = [(String,v)]
>> But in our case, we can apply the same trick again!
> [...]
> That's quite beautiful, but I don't actually need to go that far.
> Question though, does taking the approach to this conclusion
> actually have real applications?

Well, besides providing an actual implementation of finite maps, it is
also one of the fastest available. So while 'MapString v' and 'Data.Map
String v' have the same purpose, 'MapString v' will be faster. But in
your case, I wouldn't bother about this now, because if it turns out
that you need to change the trie data structure again, the effort spent
in optimization would be wasted.
Moreover, changing from 'Data.Map' to 'MapString' or similar is very
transparent and therefore can be done later because you only rely on the
functions like 'unionWith' that are provided by both.

Also, the trick that currently reduces the problem of a finite map for
the list [k] to the problem of a finite map for k can be extended to
decompose arbitrary types. To get a finite map for either one of the
keys k1 or k2, you can take a pair of finite maps for the keys

   Either k1 k2 -> v  ^=  (k1 -> v, k2 -> v)

Similarly, a finite map for a pair of keys (k1,k2) can be encoded as a
composition of finite maps

   (k1,k2) -> v  ^=  k1 -> (k2 -> v)

The paper has more on this.

> Now, we can build up our finite map for paths:
>> data MapPath v = TriePath (Maybe v) (MapString (MapPath v))
>> because it (maybe) contains a value for the key '[] :: Path' and it
>> (maybe) contains a map of paths that is organized by their first String
>> element.
> In my own code I had to diverge from your definition because for my needs,
> every node needed to contain a value (even if it was a default value). I
> plan to later add other numerical values to every node so that I can
> traverse them and do calculations that feed up and trickle down the tree.

> type Path k = [k]
> data Trie k v = Trie v (Map k (Trie k v)) deriving Show

That's fine, adapt them recklessly to your task :)

> I did try to write my own insertWithInit called by fromPath (below),
> which I couldn't get working. Branches went missing from the result.
> I had so much trouble figuring out where in the function I forgot to do
> something.

I don't know an easy way to implement 'insertWithInit' that works with
default elements. The problem is that one has to create the default
nodes when inserting

   insertWithInit v0 f ["a","b","c"] x $
        / \
      "a" "b"
       v   w

        / \
      "a" "b"
       v   w

while still guaranteeing that f only acts on the inserted value x. This
somehow breaks the intuition of inserting a single (key,value)-pair.
If you dispense with the 'With' part, you can outsource the creation of
the default nodes to 'fromPath' and employ 'union' to implement
'insertInit':

   insertInit :: (Ord k) => v -> Path k -> v -> Trie k v -> Trie k v
   insertInit vInit path v m = union m (fromPath vInit v path)

In fact, that's what you did for fromList'. If there is a globally known
default element, you also have the option to actually stick with (Maybe
v). For example, if you do calculations with 'Int', you can do

   vdefault = 5

   withDefault :: Maybe Int -> Int
   withDefault Nothing  = vdefault
   withDefault (Just x) = x

   instance Num (Maybe Int) where
       x + y = Just $ withDefault x + withDefault y

One could also do with 'Trie k v = Trie (Either v) ...' but I don't
think that it's really worth it.

> At this point my head was about to explode, so I took a different approach
> using union called by fromList' (also below), which from my limited testing
> appears to work. I also find the union function incredibly easy to
> understand. I only hope I got it right.

> union :: (Ord k) => Trie k v -> Trie k v -> Trie k v
> union (Trie k0 v0) (Trie k1 v1) = Trie k0 v
>   where
>   v = Map.unionWith union v0 v1

Well, once you found such a really concise function, it can only be
correct :)

While it is not relevant in your case, note that 'union' can be extended
to be applicable with the recursive trick. But the extension suggests
itself by noting that you used 'Map.unionWith' instead of 'Map.union'.
So, you could do

   unionWith :: (Ord k) => (v -> v -> v) -> Trie k v -> Trie k v -> Trie k v
   unionWith f (Trie k0 v0) (Trie k1 v1) = Trie (f k0 k1) v
       where
       v = Map.unionWith (unionWith f) v0 v1

   union = unionWith (\x _ -> x)
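For readers who do not speak Haskell, the same trie-with-defaults idea can be sketched in a different language. The following Python illustration is not part of the original thread (the function names are mine); each node is a (value, children) pair, and union keeps the left value at every node, matching the law `insert k v m == union (singleton k v) m` stated above.

```python
def singleton(path, v, v_init):
    # Build a one-branch trie: the leaf carries v, every node above it
    # carries the default value v_init.
    node = (v, {})
    for k in reversed(path):
        node = (v_init, {k: node})
    return node

def union(t1, t2):
    # Left-biased on node values, like the `union` in the thread.
    v1, c1 = t1
    _, c2 = t2
    children = dict(c1)
    for k, sub in c2.items():
        children[k] = union(children[k], sub) if k in children else sub
    return (v1, children)

def insert_init(v_init, path, v, t):
    # Mirrors insertInit: default nodes come from the singleton trie.
    return union(t, singleton(path, v, v_init))

t = singleton(["a", "b"], 1, 0)
t = insert_init(0, ["a", "c"], 2, t)
print(t)  # (0, {'a': (0, {'b': (1, {}), 'c': (2, {})})})
```

As in the Haskell version, inserting along an existing path keeps the values already stored in the trie, because union is left-biased.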
Trace of a square matrix

From Encyclopedia of Mathematics

2010 Mathematics Subject Classification: Primary: 15A15 [MSN][ZBL]

$ \newcommand{\tr}{\mathop{\mathrm{tr}}} \newcommand{\Tr}{\mathop{\mathrm{Tr}}} \newcommand{\Sp}{\mathop{\mathrm{Sp}}} $

The sum of the entries on the main diagonal of this matrix. The trace of a matrix $A = [a_{ij}]$ is denoted by $\tr A$, $\Tr A$ or $\Sp A$:
$$ \tr A = \sum_{i=1}^n a_{ii}. $$

Let $A$ be a square matrix of order $n$ over a field $k$. The trace of $A$ coincides with the sum of the roots of the characteristic polynomial of $A$. If $k$ is a field of characteristic 0, then the $n$ traces $\tr A, \ldots, \tr A^n$ uniquely determine the characteristic polynomial of $A$. In particular, $A$ is nilpotent if and only if $\tr A^m = 0$ for all $m=1,\ldots,n$.

If $A$ and $B$ are square matrices of the same order over $k$, and $\alpha,\beta \in k$, then
$$ \tr(\alpha A + \beta B) = \alpha \tr A + \beta \tr B, \quad \tr AB = \tr BA, $$
while if $\det B \neq 0$,
$$ \tr(BAB^{-1}) = \tr A. $$
The trace of the tensor (Kronecker) product of square matrices over a field is equal to the product of the traces of the factors.

The trace of a product of matrices $A \in \mathbb{R}^{n \times m}$ and $B \in \mathbb{R}^{m \times n}$ (so that $AB$ is square) is equal to the sum over all entries of the Hadamard product of $A$ and $B^T$:
$$ \tr(AB) = \sum_{i=1}^n \sum_{j=1}^m (A \circ B^T)_{i,j}. $$

References
[Co] P.M. Cohn, "Algebra", 1, Wiley (1982) pp. 336
[Ga] F.R. [F.R. Gantmakher] Gantmacher, "The theory of matrices", 1, Chelsea, reprint (1959) (Translated from Russian)

How to Cite This Entry:
Trace of a square matrix. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Trace_of_a_square_matrix&oldid=31319
This article was adapted from an original article by D.A. Suprunenko (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
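The product identities $\tr AB = \tr BA$ and the Hadamard-product formula are easy to check numerically. A small pure-Python sketch (an illustration added here, not part of the encyclopedia article; the helper names are mine):

```python
def matmul(A, B):
    # Naive matrix product of nested-list matrices.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def trace(M):
    # Sum of the main-diagonal entries of a square matrix.
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2, 3], [4, 5, 6]]        # 2 x 3
B = [[7, 8], [9, 10], [11, 12]]   # 3 x 2

# tr(AB) = tr(BA), even though AB is 2 x 2 while BA is 3 x 3.
assert trace(matmul(A, B)) == trace(matmul(B, A))

# tr(AB) equals the sum of all entries of the Hadamard product of A and B^T.
Bt = [list(col) for col in zip(*B)]
hadamard_sum = sum(A[i][j] * Bt[i][j] for i in range(2) for j in range(3))
assert trace(matmul(A, B)) == hadamard_sum
print(trace(matmul(A, B)))  # 212
```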
When do sheaves deform over a family?

Let $\mathcal{X} \to B$ be a flat family with some fibre $X_b \to b$. Suppose I have a coherent sheaf $F_b$ on $X_b$. When does it spread out to a sheaf $\mathcal{F}$ on $\mathcal{X}$ flat over $B$?

What about a subscheme $z \subset X_b$? Arbitrary diagrams of sheaves?

(I am only concerned with the case where everything is defined over $\mathbf{C}$, and moreover in the local case where $B$ is a disc and I am perfectly happy to take $\mathcal{X}$ to be affine. However $\mathcal{X}$ should not be assumed smooth, nor $X_b$ to even be reduced.)

Vivek, if $X$ is proper over $B$ then this is related to the local structure "near" $F_b$ (or the closed subscheme) viewed as a point in the corresponding moduli space/scheme for a Quot-functor or Hilbert functor. This doesn't help with non-proper $X$, nor for diagrams of sheaves. Doesn't really answer the question, but suggests what can go wrong (given the horrors one can find on Hilbert schemes). It also raises the question: are you happy to make a preliminary quasi-finite flat base change on $B$ near $b$? That may improve things. – BCnrd Sep 23 '10 at 7:30

Hmm, correction: for closed subschemes the link with Hilb is OK, but for "abstract" (coherent) sheaf $F_b$ we should assume projectivity to get an a-priori cohomological control of some Serre twist that will be generated by enough global sections even after coherent $B$-flat lifting, so one has an actual Quot-functor to grab onto. That's all quite abstract, so likely not going to really give a useful answer (but maybe some justified bad examples). – BCnrd Sep 23 '10 at 7:40

I am indeed happy to make a quasi-finite flat base change on B. – Vivek Shende Sep 23 '10 at 7:41

2 Answers

Suppose that $B_n$ is the $n^{\rm th}$ infinitesimal neighborhood of $b$ in $B$; that is, if $\frak m$ is the maximal ideal of $b$ in $B$, we set $B_n := \mathop{\rm Spec} \mathcal O_B/ {\frak m}^{n+1}$.
If $\mathcal F_n$ is an extension of $\mathcal F_b$ to $B_n$, there is a canonically defined element of $({\frak m}^{n+1}/{\frak m}^{n+2})\otimes_{\mathbb C}\mathop{\rm Ext}^2_{\mathcal O_{X_b}}(\mathcal F, \mathcal F)$, called the obstruction; if this is zero, then the sheaf $\mathcal F_n$ extends to $B_{n+1}$. This depends on $\mathcal F_n$, not only on $\mathcal F_b$.

If these obstructions are always 0 (for example, if $\mathop{\rm Ext}^2_{\mathcal O_{X_b}}(\mathcal F, \mathcal F) = 0$), then $\mathcal F_b$ will extend to some étale neighborhood of $b$ in $B$.

I don't think you can say much more in this generality. There is a whole subject devoted to the study of this kind of problem (not only for sheaves, but for much more general objects), called deformation theory.

Does (how does) the obstruction depend on the family? How to compute it? I assume there's some sequence? – Vivek Shende Sep 23 '10 at 7:42

My first answer was not very clear, nor correct, I edited it. The obstructions are described, for example, in Theorem 5.4 of <arxiv.org/abs/1006.0497>, and (undoubtedly) in lots of other references that I am too lazy to look up. – Angelo Sep 23 '10 at 9:09

Ah, thanks for the reference. – Vivek Shende Sep 23 '10 at 9:11

A special case of what Angelo wrote: if $X\to S$ is smooth and $Z$ a smooth subscheme of $X_b$, the obstruction space is $H^1(Z,N_{Z/X})$, where $N$ is the normal sheaf. So, e.g., if $X_b$ is a surface and $Z$ a copy of $\mathbb P^1$ with $Z^2=-1$, then $Z$ always extends, but if $Z^2=-2$ then sometimes it does and sometimes it doesn't. – inkspot Sep 23 '10 at

@inkspot $H^1(Z, N_{Z/X})$ is the obstruction space for deforming $Z$ inside $X$ with $X$ fixed (embedded deformations). The correct obstruction space for the first-order extension problem is $H^2(X, T_X(- \log Z))$.
However, if $H^1(Z, N_{Z/X})=0$ then there is a surjective map $H^1(X, T_X(- \log Z)) \to H^1(X, T_X)$, which means that no first-order deformation of $X$ makes $Z$ disappear. So your conclusion was correct, after all :-) – Francesco Polizzi Sep 23 '10 at 20:12

If $\mathcal{F}_b$ is an invertible sheaf and $B=\textrm{Spec}\, \mathbb{C}[\epsilon]/(\epsilon^2)$ (first-order deformations) then the obstruction theory for deforming $\mathcal{F}_b$ described in Angelo's answer becomes very explicit. Indeed, there is the following result, whose proof can be found in Sernesi's book "Deformations of Algebraic Schemes", p. 147. Set $\mathcal{L}:=\mathcal{F}_b$, $X:=X_b$.

THEOREM Given a first-order deformation $\xi$ of $X$, there is a first-order deformation of $\mathcal{L}$ along $\xi$ if and only if $\kappa(\xi) \cdot c(\mathcal{L})=0$.

Here $\kappa$ is the Kodaira-Spencer map, $c$ is the first Chern class and "$\cdot$" denotes the composition

$H^1(X, T_X) \times H^1(X, \Omega^1_X) \to H^2(X, T_X \otimes \Omega^1_X) \to H^2(X, \mathcal{O}_X)$,

where the first arrow is induced by the cup-product and the second one by the duality pairing $T_X \otimes \Omega^1_X \to \mathcal{O}_X$.

Using this, one can prove for instance that if $X$ is an Abelian variety of dimension $g$ and $\mathcal{L}$ is an ample line bundle, then $\mathcal{L}$ extends along a subspace of $H^1(X, T_X)$ of dimension $g(g+1)/2$. For $g \geq 2$, $\mathcal{L}$ does not extend to the whole of $H^1(X, T_X)$ (not even to first order!), since the general deformation of $X$ is not algebraic.
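A sketch (not part of the original answers, and assuming for simplicity that the polarization is principal) of where the $g(g+1)/2$ count comes from:

```latex
% For an abelian variety X of dimension g the tangent bundle is trivial,
% so first-order deformations of X form a space of g x g matrices:
T_X \cong \mathcal{O}_X^{\oplus g}
\quad\Longrightarrow\quad
H^1(X, T_X) \cong H^1(X, \mathcal{O}_X) \otimes_{\mathbb{C}} \mathbb{C}^{g}
\cong \mathbb{C}^{g \times g}.
```

Under this identification, the pairing $\kappa(\xi) \cdot c(\mathcal{L})$ sends a first-order deformation, viewed as a $g \times g$ matrix $M$, to (a multiple of) its antisymmetric part, so the condition $\kappa(\xi) \cdot c(\mathcal{L}) = 0$ cuts out the symmetric matrices, a subspace of dimension $g(g+1)/2$. This matches the dimension of the Siegel upper half-space, the moduli space of principally polarized abelian varieties.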
North Metro Prealgebra Tutor

Find a North Metro Prealgebra Tutor

...These subjects ranged from calculus, chemistry, and biology to general engineering and general education courses. I am confident I can help you sharpen your study skills so you can truly understand the material. I'm ready to get started when you are! It's the science of life!
15 Subjects: including prealgebra, reading, biology, calculus

...My goal is always to make sure that the student not only understands the material, but also feels confident in what they are doing. I feel that every student is different in what builds their confidence in the material, so I try to figure out what that is as we work together. I also ask for some...
9 Subjects: including prealgebra, chemistry, calculus, geometry

...During those 19 years, I have tutored students in math from grades 6 - 12. I recently started offering baseball lessons as well. I am a certified math teacher with a Master's degree in math education from Georgia State University, and a Bachelor's degree from University of Georgia. I have 18 years' experience coaching high school baseball.
9 Subjects: including prealgebra, geometry, algebra 2, algebra 1

...Not only am I a certified ESL teacher (by the London Teacher Training School), I have been a student of 4 foreign languages (Arabic, French, Spanish and Chinese) myself - so I know what it is like to learn a new language and to face the challenge of communicating in a language that is not one's m...
14 Subjects: including prealgebra, Spanish, geometry, statistics

...One of the most important moments in my career path was entering the College of Foreign Languages at Vinnitsa State Pedagogical University and being accepted by a government-funded program. The time spent studying methodology, lexicology, grammar and stylistics gave me the ability to put my know...
10 Subjects: including prealgebra, chemistry, algebra 1, algebra 2
Steel Drum

June 20th 2012, 02:20 PM   #1
Jun 2012

Hello! I am having problems in solving this problem. I don't know where to start, but I have some formulas that I started with, and I don't know what to do from there.

Your task is to build a steel drum (right circular cylinder) of fixed volume. This time the consideration of waste material is added, but the material cost is still the same for the top and the sides (same gage and same cost). The tops and the bottoms will be cut from sheet metal from squares of length 2r. Use calculus to show that the amount of metal used is minimized when h/r = 8/π.

I have just the formulas:
--Area of a circle
--Volume of a cylinder
--Area of a square

I will appreciate your help.

June 21st 2012, 07:18 AM   #2
Junior Member, Jun 2012

Re: Steel Drum

With the radius r and the height h, how much material is used? Using the formula for the volume (and the fixed volume V), can you find a relation between r and h? Can you express the used material as a function of a single variable (r or h) only? Do you know how to find the minimum of this function? These steps are not specific to your problem here; they can be used for all problems of this type.

June 23rd 2012, 01:46 PM   #3
MHF Contributor, Apr 2005

Re: Steel Drum

I presume the barrel can be made just by bending a rectangle of width h and length $2\pi r$, the circumference of the circle. Now, since we have to pay for waste as well as the material used, from "sheet metal from squares of length 2r" we have to pay for the full $2(2r)^2$. Now what is the total material used for both barrel and ends in terms of h and r? Use the formula for volume to reduce that to one variable. Because the problem asks for a relation between r and h, rather than explicit values, you might try the "Lagrange multiplier" method. It tends to give relations first, which could then be solved for explicit values. Do you know that method? However, doing it both ways, I get the same answer but NOT "$8/\pi$"!

Last edited by HallsofIvy; June 23rd 2012 at 01:53 PM.
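For what it is worth, carrying out the steps of reply #2 does give the ratio stated in the problem. A quick numeric sketch (an illustration added here, not from the thread; the function names are mine):

```python
import math

def material(r, V):
    # Side is a bent rectangle 2*pi*r*h; the two ends are cut from
    # square blanks of side 2r, so we pay for 2*(2r)^2 = 8r^2.
    h = V / (math.pi * r * r)
    return 2 * math.pi * r * h + 8 * r * r

def optimal_ratio(V):
    # Substituting h = V/(pi r^2) gives M(r) = 2V/r + 8r^2;
    # setting M'(r) = -2V/r^2 + 16r = 0 gives V = 8 r^3.
    r = (V / 8) ** (1 / 3)
    h = V / (math.pi * r * r)
    return h / r

V = 1000.0
print(optimal_ratio(V))  # approximately 2.5465, i.e. 8/pi

# Sanity check: the closed-form radius beats nearby radii.
r_star = (V / 8) ** (1 / 3)
assert material(r_star, V) <= min(material(0.9 * r_star, V), material(1.1 * r_star, V))
```

The ratio is independent of the fixed volume V, which is why the problem can ask for h/r without specifying V.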
C# isPowerOf function

I have the next function:

static bool isPowerOf(int num, int power)
{
    double b = 1.0 / power;
    double a = Math.Pow(num, b);
    Console.WriteLine(a);
    return a == (int)a;
}

I inserted the print function for analysis. If I call the function:

isPowerOf(25, 2)

it returns true, since 5^2 equals 25. But, if I call 16807, which is 7^5, the next way:

isPowerOf(16807, 5)

In this case, it prints '7' but a == (int)a returns false. Can you help? Thanks!

c# math exponentiation

Obligatory link to What Every Computer Scientist Should Know About Floating-Point Arithmetic – AakashM Jul 6 '12 at 9:47

Everyone's going to suggest better floating point comparisons, but IMO the root of the problem is the algorithm here. – harold Jul 6 '12 at 9:48

4 Answers

Accepted answer:

Try using a small epsilon for rounding errors:

return Math.Abs(a - (int)a) < 0.0001;

As harold suggested, it will be better to round in case a happens to be slightly smaller than the integer value, like 3.99999:

return Math.Abs(a - Math.Round(a)) < 0.0001;

It works now, but how come that 7 != (int)7 ? – Tyymo Jul 6 '12 at 9:45

@GuyDavid: It's because of rounding errors, the number you got isn't 7, but it is 7.000000001 or something like that – Dani Jul 6 '12 at 9:46

@Guy David try : Console.WriteLine((int)a); – Nahuel Fouilleul Jul 6 '12 at 9:50

Isn't 0.0001 a magic number? – Danny Chen Jul 6 '12 at 10:08

What if the result of math.pow is off by more than 0.0001? – harold Jul 6 '12 at 10:11

Comparisons that fix the issue have been suggested, but what's actually the problem here is that floating point should not be involved at all. You want an exact answer to a question involving integers, not an approximation of calculations done on inherently inaccurate measurements. So how else can this be done?
The first thing that comes to mind is a cheat:

double guess = Math.Pow(num, 1.0 / power);
return num == exponentiateBySquaring((int)guess, power)
    || num == exponentiateBySquaring((int)Math.Ceiling(guess), power);
// do NOT replace exponentiateBySquaring with Math.Pow

It'll work as long as the guess is less than 1 off. But I can't guarantee that it will always work for your inputs, because that condition is not always met.

So here's the next thing that comes to mind: a binary search (the variant where you search for the upper boundary first) for the base in exponentiateBySquaring(base, power) for which the result is closest to num. If and only if the closest answer is equal to num (and they are both integers, so this comparison is clean), then num is a power-th power. Unless there is overflow (there shouldn't be), that should always work.

Yes indeedy, there are good reasons why integers and floating-point numbers are separate types. – High Performance Mark Jul 6 '12 at 11:03

Math.Pow operates on doubles, so rounding errors come into play when taking roots. If you want to check that you've found an exact power:

• perform the Math.Pow as currently, to extract the root
• round the result to the nearest integer
• raise this integer to the supplied power, and check you get the supplied target.

Math.Pow will be exact for numbers in the range of int when raising to integer powers.

If you debug the code then you can see that in the first comparison:

isPowerOf(25, 2)

a is holding 5.0. Here 5.0 == 5 => that is why you get true. And in the second,

isPowerOf(16807, 5)

a is holding 7.0000000000000009, and since 7.0000000000000009 != 7 => you are getting false.

Console.WriteLine(a) is truncating/rounding the double and only shows 7. That is why you need to compare the nearest value, like in Dani's solution.
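The integer-only binary search suggested above is easy to sketch. Here is an illustration in Python rather than C# (an addition for this write-up; the function name is mine, not from the thread). No floating point is involved, so no epsilon is needed.

```python
def is_power_of(num, power):
    # Binary-search for an integer base b with b**power == num.
    if num < 1 or power < 1:
        return False
    lo, hi = 1, num
    while lo <= hi:
        mid = (lo + hi) // 2
        p = mid ** power  # exact integer exponentiation
        if p == num:
            return True
        if p < num:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

print(is_power_of(16807, 5))  # True: 7**5 = 16807
print(is_power_of(25, 2))     # True: 5**2 = 25
print(is_power_of(26, 2))     # False
```

In C# the same idea works with `long` arithmetic (or a checked exponentiation-by-squaring helper) in place of Python's arbitrary-precision integers, to guard against overflow.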
Riverdale Park, MD Algebra 2 Tutor

Find a Riverdale Park, MD Algebra 2 Tutor

...I have a love and passion for the English language, and love making others' work better. Personal ACT Scores: 33 Composite, 32 Reading. My BA in Political Science and Japanese from Tufts University gives me an exceptional ability to analyze complex pieces of writing, which is well-suited for the A...
33 Subjects: including algebra 2, English, writing, reading

...I work with students to develop ideas in an organized manner through the use of outlines, as well as additional techniques to address their specific needs as writers. I have worked as an English writing tutor for middle school, high school, and college level writing. I have worked with st...
11 Subjects: including algebra 2, writing, algebra 1, public speaking

...My approach is flexible to meet the needs of the learner. ALGEBRA II The contents of Algebra II include solving equations and inequalities involving absolute values, solving systems of linear equations and inequalities (in two or three variables), operations on polynomials, factoring polynomials...
7 Subjects: including algebra 2, geometry, algebra 1, SAT math

...Through studying mathematics, students can learn to process information and make decisions based on data and established facts rather than through gut feelings and bad info. Some students aren't really reaping these benefits. Maybe the class is moving too fast, or there are holes in their basic math knowledge.
19 Subjects: including algebra 2, English, reading, physics

...The Russian school system puts a lot of focus on developing excellent math skills and teaching outstanding abilities to use applied math in everyday life. I pride myself in having acquired excellent math skills, and I will be glad to help my students improve their math skills. I can help with E...
10 Subjects: including algebra 2, calculus, ESL/ESOL, algebra 1
Some norm inequalities for operators

Canad. Math. Bull. 42 (1999), 87-96
Printed: Mar 1999
• Fuad Kittaneh

Let $A_i$, $B_i$ and $X_i$ $(i=1, 2, \dots, n)$ be operators on a separable Hilbert space. It is shown that if $f$ and $g$ are nonnegative continuous functions on $[0,\infty)$ which satisfy the relation $f(t)g(t)=t$ for all $t$ in $[0,\infty)$, then
$$
\Bigl\lVert\, \Bigl|\sum^n_{i=1} A^*_i X_i B_i \Bigr|^r \,\Bigr\rVert^2
\leq
\Bigl\lVert \Bigl( \sum^n_{i=1} A^*_i f (|X^*_i|)^2 A_i \Bigr)^{r} \Bigr\rVert
\, \Bigl\lVert \Bigl( \sum^n_{i=1} B^*_i g (|X_i|)^2 B_i \Bigr)^{r} \Bigr\rVert
$$
for every $r>0$ and for every unitarily invariant norm. This result improves some known Cauchy-Schwarz type inequalities. Norm inequalities related to the arithmetic-geometric mean inequality and the classical Heinz inequalities are also obtained.

Keywords: Unitarily invariant norm, positive operator, arithmetic-geometric mean inequality, Cauchy-Schwarz inequality, Heinz inequality

MSC Classifications:
47A30 - Norms (inequalities, more than one norm, etc.)
47B10 - Operators belonging to operator ideals (nuclear, $p$-summing, in the Schatten-von Neumann classes, etc.) [See also 47L20]
47B15 - Hermitian and normal operators (spectral measures, functional calculus, etc.)
47B20 - Subnormal operators, hyponormal operators, etc.
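As a point of orientation (my specialization, not spelled out in the abstract): taking $n=1$, $r=1$, and $f(t)=g(t)=t^{1/2}$ in the displayed inequality, so that $f(|X^*|)^2=|X^*|$ and $g(|X|)^2=|X|$, recovers a familiar Cauchy-Schwarz-type bound.

```latex
% Special case n = 1, r = 1, f(t) = g(t) = t^{1/2}:
\[
  \lVert A^{*} X B \rVert^{2}
  \;\leq\;
  \lVert A^{*}\,|X^{*}|\,A \rVert \,\cdot\, \lVert B^{*}\,|X|\,B \rVert
  \qquad \text{for every unitarily invariant norm.}
\]
```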
Interdepartmental Major: Math / Economics Scope and Objectives In recent years economics has become increasingly dependent on mathematical methods, and the mathematical tools it employs have become more sophisticated. Mathematically competent economists, with bachelor's degrees and with advanced degrees, are needed in industry and government. Graduate programs in economics and finance programs in graduate schools of management require strong undergraduate preparation in mathematics for admission. This degree program is designed to give students a solid foundation in both mathematics and economics, stressing those areas of mathematics and statistics that are most relevant to economics and the parts of economics that emphasize the use of mathematics and statistics. Undergraduate Study For students who declared the major from Fall 2012 to Fall 2013. Students should review the general catalog for more detailed information.
Cronbach's Alpha

We use Cronbach's alpha to evaluate the unidimensionality of a set of scale items. It's a measure of the extent to which all the variables in your scale are positively related to each other. In fact, it is really just an adjustment to the average correlation between every variable and every other. The formula for alpha is this:

alpha = K * r-bar / (1 + (K - 1) * r-bar)

In the formula, K is the number of variables, and r-bar is the average correlation among all pairs of variables.

People always want to know what's an acceptable alpha. Nunnally (1978) offered a rule of thumb of 0.7. More recently, one tends to see 0.8 cited as a minimum alpha.

One thing to keep in mind is that alpha is heavily dependent on the number of items composing the scale. Even using items with poor internal consistency you can get a reliable scale if your scale is long enough. For example, 10 items that have an average interitem correlation of only .2 will produce a scale with a reliability of .714. Similarly, if the average correlation among 5 variables is .5, the alpha coefficient will be 0.833. But if the number of variables is 10 (with the same average correlation), the alpha coefficient will be 0.909.

Avg Corr   # of Vars   Alpha
0.5        1           0.500
0.5        2           0.667
0.5        3           0.750
0.5        4           0.800
0.5        5           0.833
0.5        6           0.857
0.5        7           0.875
0.5        8           0.889
0.5        9           0.900
0.5        10          0.909
0.5        11          0.917
0.5        12          0.923
0.5        13          0.929
0.5        14          0.933
0.5        15          0.938

Another way to think about alpha is that it is the average split-half reliability for all possible splits. A split-half reliability is obtained by taking, at random, half of the variables in your scale, averaging them into a single variable, then averaging the remaining half, and correlating the two composite variables. The expected value for the random split-half reliability is alpha.

• Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-334.
• Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.
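The table values can be reproduced directly from the formula (a small sketch; the function name is mine):

```python
def cronbach_alpha(avg_corr: float, k: int) -> float:
    """Standardized Cronbach's alpha from the average inter-item
    correlation r-bar and the number of items K."""
    return k * avg_corr / (1.0 + (k - 1) * avg_corr)
```

For instance, `cronbach_alpha(0.5, 10)` gives the 0.909 shown in the table, and `cronbach_alpha(0.2, 10)` gives the 0.714 mentioned in the text.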
Issaquah Science Tutor

Find an Issaquah Science Tutor

...I aim towards improving the studying methods of individuals and developing their interest in the subject. Students can be best guided to start thinking as scientists in their primary years of education. Hence, it is crucial that they obtain good training in their formative undergraduate years, and I aim towards achieving that.
1 Subject: chemistry

...I was voted the most inspirational student by my senior year band class. For every semester that I attended WSU, I was on the President's Honor Roll. I was recognized as the student of the month by my AP US History instructor, immediately following the AP season.
62 Subjects: including ACT Science, biochemistry, physics, Spanish

...I think I have a patient, encouraging, and intuitive teaching style that works well with students that age, and I also adapt my teaching style to the needs of the student. I regularly tutor students in math through calculus, biology, English, and chemistry. I also coach students through the college application process and enjoy helping them write their personal statement or essay.
28 Subjects: including ACT Science, physiology, anatomy, ESL/ESOL

...And then, I give the student sample problems to solve independently and coach them further as needed. My main goal is to make sure the student is self-sufficient, and capable of using the methods on quizzes or tests. With respect to my educational background and work experience, I'm a Physiology major, and I just graduated from the University of Washington.
26 Subjects: including chemistry, ACT Science, physiology, physics

...There I taught a few foreign students whose native languages were French, German, Italian and Mandarin. At McGill University in Montreal, my students' numbers, range of backgrounds and languages, vastly increased. Out of the thousands I taught, hundreds came from Central America, Europe, Africa and Asia, the first of their immigrant families to enter university.
30 Subjects: including geology, GRE, zoology, botany
14.30 Introduction to Statistical Methods in Economics (MIT)

This course will provide a solid foundation in probability and statistics for economists and other social scientists. We will emphasize topics needed for further study of econometrics and provide basic preparation for 14.32. Topics include elements of probability theory, sampling theory, statistical estimation, and hypothesis testing.
• Language: English
• Author: Menzel, Konrad
• License Terms: Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm
• Tags: statistics, economic applications, probability theory, sampling theory, statistical estimation, regression analysis, hypothesis testing, elementary econometrics, statistical tools, economic data, economics, statistical, probability distribution function, cumulative distribution function, normal, Student's t, chi-squared, central limit theorem, law of large numbers, Bayes theorem
• Course Publishing Date: Sep 30, 2009
Why is this not working?

September 28th, 2012, 02:34 PM #1
Junior Member
Join Date Sep 2012
Thanked 0 Times in 0 Posts

Hi all,
I have to write a program that uses this equation: when you type in the numbers 7500, 14.5, and 36 you should get:
The amount I wish to borrow is? 7500
The loan rate I can get is? 14.5
The number of months it will take me to pay off the loan is? 36
Why doesn't it work? Here's the code:

import java.io.*;
import java.util.*;

public class Prog58i
{
    public static void main(String[] args)
    {
        System.out.print("The amount I wish to borrow is? ");
        Scanner a = new Scanner(System.in);
        double borrow = a.nextDouble(); //P

        System.out.print("The loan rate I can get is? ");
        Scanner b = new Scanner(System.in);
        double rate = b.nextDouble(); //R

        System.out.print("The number of months it will take me to pay off this loan is? ");
        Scanner c = new Scanner(System.in);
        double months = c.nextDouble(); //M

        double MP = borrow * (rate/1200) * Math.pow((1+rate/1200), months) / (Math.pow((1+rate/1200), months) - 1);
        double intrest = (MP * months) - borrow;
        double repaid = (MP * months);

        System.out.print("My monthly payments will be $" + MP + "\n");
        System.out.print("Total Interest Paid is $" + intrest + "\n");
        System.out.print("Total Amount Paid is $" + repaid + "\n");
    }
}

I'm very new to Java so please make the answers as simple as possible. Thanks

September 28th, 2012, 03:18 PM #2
Super Moderator
Join Date May 2010
Eastern Florida
Thanked 1,954 Times in 1,928 Posts

Why doesn't it work? Please explain what happens. What doesn't work?
If you are getting error messages, please copy and paste the full text here.
If the output is wrong, copy and paste the full contents of the console window showing your input and the program's output.
To copy the contents of the command prompt window:
Click on Icon in upper left corner
Select Edit
Select 'Select All' - The selection will show
Click in upper left again
Select Edit and click 'Copy'
Paste here.
If you don't understand my answer, don't ignore it, ask a question.

September 28th, 2012, 04:38 PM #3
Junior Member
Join Date Sep 2012
Thanked 0 Times in 0 Posts

Never mind, I got it to work. The problem I was having was that the output kept coming out as 0.0 instead of 283.17. The problem was that there was an error in the typing of the formula. It should have been:
double MP = borrow * (rate/1200) * (Math.pow((1+rate/1200), months)) / (Math.pow((1+rate/1200), months) - 1);
Instead of:
double MP = borrow * (rate/1200) * Math.pow((1+rate/1200), months) / (Math.pow((1+rate/1200), months) - 1);
Thanks for the fast reply though
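For reference, the amortization formula the thread's code implements can be sketched in Python (names are mine). The second function sanity-checks the payment by simulating the loan month by month; with the correct payment, the balance should reach exactly zero after the last payment:

```python
def monthly_payment(principal: float, annual_rate_pct: float, months: int) -> float:
    """Standard amortized-loan payment: P * r * (1+r)^n / ((1+r)^n - 1),
    with r the monthly rate (annual percentage / 1200)."""
    r = annual_rate_pct / 1200.0
    growth = (1 + r) ** months
    return principal * r * growth / (growth - 1)

def balance_after(principal: float, annual_rate_pct: float,
                  months: int, payment: float) -> float:
    """Remaining balance after paying `payment` each month: interest accrues,
    then the payment is subtracted."""
    r = annual_rate_pct / 1200.0
    bal = principal
    for _ in range(months):
        bal = bal * (1 + r) - payment
    return bal
```

With the thread's inputs (7500, 14.5, 36), the simulated balance after 36 payments comes out to zero up to rounding, confirming the formula itself is self-consistent.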
Markup - Math Central

We have two responses for you.

Both methods are used, but in my opinion when you say you have a 20% markup most people will assume you used the first method. In this case you are calculating 20% of your cost (0.20 × $938.00 = $187.60) and adding it to your cost ($938.00 + $187.60 = $1125.60) to obtain the "sticker price". Thus the markup ($187.60) is 20% of your cost.

In the second method, 20% of $1172.50 is $234.50, and if you add $234.50 to your cost of $938.00 you arrive at $1172.50. Thus the markup ($234.50) is 20% of the sticker price.

The Markup percentage is the percentage of the selling price not represented in the cost of the goods. So if the markup is 20%, then 80% of the selling price is the cost. Your cost is $938, so $938/80% = $1172.50 would be the selling price for a product with a 20% markup.

This contrasts with the gross Margin percentage, which is the percentage by which you increase the cost in order to find the selling price. If the margin were 20%, then you would calculate $938 × 120% = $1125.60 as the selling price.

Thus the Margin is the view from the manufacturing side of things and the Markup is the view from the sales side of things. The Margin is always lower, and you can relate the two using this equation, where U is the markUp as a decimal (20% = 0.20) and G is the marGin as a decimal:

U = G / (1 - G)

So for instance, if you knew you wanted a 20% margin, you could find the related markup:

U = 0.20 / (1 - 0.20) = 0.25

So a margin of 20% is a markup of 25%. (Conversely, a markup of 20% is a margin of 16 2/3%.)

Hope this helps,
Stephen La Rocque.
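The conversion described by the relation U = G / (1 - G) can be sketched as a tiny pair of helpers (a sketch; function names are mine, with one percentage measured against cost and the other against selling price):

```python
def markup_from_margin(margin: float) -> float:
    """Given a margin (fraction of the selling price), return the
    equivalent markup (fraction of the cost): U = G / (1 - G)."""
    return margin / (1.0 - margin)

def margin_from_markup(markup: float) -> float:
    """Inverse relation: G = U / (1 + U)."""
    return markup / (1.0 + markup)
```

For example, a 20% margin corresponds to a 25% markup, and a 25% markup corresponds back to a 20% margin, matching the worked numbers in the answer.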
[Lapack] Regarding functions and results from CLAPACK
LAPACK Archives

From: koushik
Date: Mon, 28 Aug 2006 16:24:03 -0400

I am Koushik Aravalli from Clemson University. I am using CLAPACK for my research and trying to decode the resultant matrices formed from the subroutines dgeqr2.c, dgetrf.c. For example the final matrix formed out from the dgetrf subroutine is a combination of a matrix produced from a scalar and a vector. The below message is from the subroutine mentioned.

FROM CLAPACK dgetrf subroutine:

Further Details
The matrix Q is represented as a product of elementary reflectors
Q = H(1) H(2) . . . H(k), where k = min(m,n).
Each H(i) has the form
H(i) = I - tau * v * v'
where tau is a real scalar, and v is a real vector with v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in A(i+1:m,i), and tau in TAU(i).

So if I solve for a linear equation set using QR decomposition, the final matrix obtained is a product of reflectors. Comparing with the MATLAB results I am not able to figure out a way to decode the matrix obtained from "dgetrf" subroutine. By the way I am using MS Visual Studio to do these matrix computations. Can you please help me out in obtaining the decoded matrix results.

Thank you,
Koushik Aravalli
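Worth noting: the "Further Details" text quoted in the message describes the QR routines (dgeqr2/dgeqrf); dgetrf computes an LU factorization instead. How Q is decoded from stored elementary reflectors can be sketched with NumPy (an illustrative reimplementation, not CLAPACK's exact storage; here v is kept unnormalized rather than using LAPACK's v(i) = 1 convention):

```python
import numpy as np

def householder_qr(A):
    """QR via elementary reflectors H(i) = I - tau * v * v'
    (the factored form described in the quoted docs)."""
    R = np.array(A, dtype=float)
    m, n = R.shape
    vs, taus = [], []
    for i in range(min(m, n)):
        x = R[i:, i].copy()
        beta = -np.copysign(np.linalg.norm(x), x[0])
        v = x
        v[0] -= beta                      # v = x - beta * e1
        vtv = v @ v
        tau = 0.0 if vtv == 0.0 else 2.0 / vtv
        # Apply H(i) to the trailing submatrix; column i becomes beta * e1.
        R[i:, i:] -= tau * np.outer(v, v @ R[i:, i:])
        vs.append(v)
        taus.append(tau)
    return np.triu(R), vs, taus

def build_q(m, vs, taus):
    """Decode the factored form: Q = H(1) H(2) ... H(k)."""
    Q = np.eye(m)
    for i, (v, tau) in enumerate(zip(vs, taus)):
        H = np.eye(m)
        H[i:, i:] -= tau * np.outer(v, v)
        Q = Q @ H
    return Q
```

Multiplying the decoded Q by the triangular R recovers the original matrix, which is one way to check that the reflectors were interpreted correctly.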
UWO and the Fields Institute

The University of Western Ontario is a principal sponsoring university for The Fields Institute for Research in the Mathematical Sciences. As a consequence, Fields supports a wide range of UWO-related research activity in the computational sciences on the UWO campus and at the Institute.

Proposals for Fields Institute support may be submitted by UWO researchers to the UWO/Fields Committee. The membership of the Committee consists of representatives from the supporting UWO budget units:
• Stephen Watt, Department of Computer Science, ORCCA
• Masoud Khalkhali, Department of Mathematics
• John Braun, Department of Statistical & Actuarial Sciences
• Mike Cottam, Faculty of Science
• George Gadanidis, Faculty of Education

The Committee is presently chaired by Masoud Khalkhali. This committee approves proposals locally and oversees their submission to the Fields Institute for final approval. The Committee encourages and welcomes submissions from the UWO research community. Here is a page of guidelines for submitting a proposal for a conference or program: UWO/Fields Proposal Submission. Click here for the Fields Institute's Annual Reports, highlighting all Fields sponsored activities.

Fields Postdoctoral Fellowships
□ Oleg Golubitsky, Department of Computer Science, 2008-2009
□ Wenyuan Wu, Department of Applied Mathematics (2007-2008)
□ Gabor Sass, Department of Statistical & Actuarial Sciences (2007)
□ Atabey Kaygun (PhD, Ohio State, 2005), Department of Mathematics (2005-06), Noncommutative geometry
□ Elena Smirnova (PhD, Paris 12, 2002; PhD, Saint-Petersburg State, 2002), Department of Computer Science (2004-05), Computer Science
□ Zengjing Chen (PhD, Shandong University, 1998), Statistical and Actuarial Sciences (2002-03), Applied Mathematics (2003-04), Financial Mathematics

The Fields Postdoctoral Fellow position rotates between the four mathematical science departments on a yearly basis.
It will be held in the Department of Computer Science in 2008-2009.

Conferences and Programs
□ Workshop: Disturbances: Modelling Spread in Forests was held at UWO, Oct 18-19, 2007.
□ SNC'07 (Symbolic-Numeric Computation '07), July 25 - 27, 2007.
□ PASCO '07 (Parallel Symbolic Computation '07), July 27 - 28, 2007.
□ CPM '07 (Combinatorial Pattern Matching), July 9 - 11, 2007.
□ Nerenberg Lecture 2007, March 12, 2007.
□ Geometric Applications of Homotopy Theory, Research Program, Fields Institute (Department of Mathematics, UWO), January-June, 2007.
□ Actuarial Research Day, June 1, 2006.
□ Nerenberg Lecture 2006, March 21, 2006.
□ Quantitative Finance Conference on Credit Risk, Workshop, Department of Applied Mathematics, UWO, November 4-5, 2005.
□ Designing Mathematical Thinking Tools, Symposium, Faculty of Education, UWO, June 9-15, 2005.
□ DNA11: The 11th International DNA Computing Conference, Department of Computer Science, June 6-9, 2005.
□ Workshop on Forest Fires and Point Processes, Fields Institute (Department of Statistics and Actuarial Sciences, UWO), May 24-28, 2005.
□ Modelling the Rapid Evolutions of Infectious Disease: Epidemiology and Treatment Strategies, Workshop, Department of Applied Mathematics, UWO, May 12-15, 2005.
□ Noncommutative Geometry, the Local Index Formula and Hopf Algebras, Fields Institute (Department of Mathematics, UWO), September 10-12, 2004.
□ Algebraic Topological Methods in Computer Science, II, Department of Mathematics, July 16-20, 2004.
□ Symposium: Online mathematical investigation as a narrative experience, Faculty of Education, June 11-13, 2004.
□ 2004 Canadian Symposium on Abstract Harmonic Analysis, Department of Mathematics, May 17-18, 2004.
□ Midwest Several Complex Variables Meeting, Department of Mathematics, April 2-4, 2004.
□ Fields Institute Program: Homotopy Theory and its Applications, Department of Mathematics, September, 2003.
□ Mathematics as Story, Faculty of Education, June 13-15, 2003.
□ Southern Ontario Statistical Graduate Student Seminar, Department of Statistical and Actuarial Sciences, May, 2003.
□ Symbolic Computational Algebra, 2002, Ontario Research Centre for Computer Algebra, July 15-22, 2002.
□ Ontario Topology Seminar, Department of Mathematics, October 13-14, 2001.
□ Future Directions in Categorical Programming Languages, Ontario Research Centre for Computer Algebra, July 26, 2001.
Energy conservation during free-fall

Next: Work Up: Conservation of energy Previous: Introduction

Consider a mass $m$ falling freely under gravity. (N.B., this is clearly an example of a closed system, involving only the mass and the gravitational field.) The physics of free-fall under gravity is summarized by the three equations (24)-(26). Let us examine the last of these equations, written for a fall from height $h_1$, where the speed is $v_1$, to height $h_2$, where the speed is $v_2$:

$$\frac{1}{2}\,m\,v_1^{\,2} + m\,g\,h_1 = \frac{1}{2}\,m\,v_2^{\,2} + m\,g\,h_2. \qquad(123)$$

The above equation clearly represents a conservation law, of some description, since the left-hand side only contains quantities evaluated at the initial height, whereas the right-hand side only contains quantities evaluated at the final height. In order to clarify the meaning of Eq. (123), let us define the kinetic energy of the mass,

$$K = \frac{1}{2}\,m\,v^2, \qquad(124)$$

and the gravitational potential energy of the mass,

$$U = m\,g\,h. \qquad(125)$$

Note that kinetic energy represents energy the mass possesses by virtue of its motion. Likewise, potential energy represents energy the mass possesses by virtue of its position. It follows that Eq. (123) can be written

$$E_1 = E_2. \qquad(126)$$

Here, $E = K + U$ is the total energy of the mass: i.e., the sum of its kinetic and potential energies. It is clear that

$$E = K + U = \mathrm{constant}: \qquad(127)$$

i.e., although the kinetic and potential energies of the mass vary as it falls, its total energy remains the same.

Incidentally, the expressions (124) and (125) for kinetic and gravitational potential energy, respectively, are quite general, and do not just apply to free-fall under gravity.

The mks unit of energy is called the joule (symbol J). In fact, 1 joule is equivalent to 1 kilogram meter-squared per second-squared, or 1 newton-meter. Note that all forms of energy are measured in the same units (otherwise the idea of energy conservation would make no sense).

One of the most important lessons which students learn during their studies is that there are generally many different paths to the same result in physics. Now, we have already analyzed free-fall under gravity using Newton's laws of motion. However, it is illuminating to re-examine this problem from the point of view of energy conservation.

Suppose that a mass $m$ is dropped from rest and falls a distance $h$. What is the final speed $v$ of the mass? Well, according to Eq. (123), if energy is conserved then

$$\Delta K = -\Delta U:$$

i.e., any increase in the kinetic energy of the mass must be offset by a corresponding decrease in its potential energy. Now, the change in potential energy of the mass is simply $\Delta U = -m\,g\,h$, so the kinetic energy gained is $\Delta K = \frac{1}{2}\,m\,v^2 = m\,g\,h$, giving

$$v = \sqrt{2\,g\,h}.$$

Suppose that the same mass is thrown upwards with initial velocity $v$. How high does it rise? Well, it is clear from Eq. (125) that as the mass rises its potential energy increases. It, therefore, follows from energy conservation that its kinetic energy must decrease with height. Note, however, from Eq. (124), that kinetic energy can never be negative (since it is the product of the two positive definite quantities, $m/2$ and $v^2$). Hence, once the mass has risen to a height at which its kinetic energy falls to zero it can rise no further, and must, presumably, start to fall. The change in potential energy of the mass in moving from its initial height to its maximum height $h$ is $\Delta U = m\,g\,h$, which must equal the kinetic energy lost, $\frac{1}{2}\,m\,v^2$. It follows from Eq. (127) that

$$h = \frac{v^2}{2\,g}.$$

It should be noted that the idea of energy conservation--although extremely useful--is not a replacement for Newton's laws of motion. For instance, in the previous example, there is no way in which we can deduce how long it takes the mass to rise to its maximum height from energy conservation alone--this information can only come from the direct application of Newton's laws.

Richard Fitzpatrick 2006-02-02
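The two worked examples above can be checked numerically (a small sketch; the mass, drop height, and value of g are illustrative numbers of my choosing):

```python
import math

g = 9.8      # m/s^2, illustrative
m = 2.0      # kg, illustrative
h = 10.0     # m, drop height

# Drop from rest: final speed from Delta K = -Delta U.
v = math.sqrt(2 * g * h)

# Total energy E = K + U at the top (at rest, height h) and bottom (speed v, height 0):
E_top = 0.5 * m * 0.0**2 + m * g * h
E_bottom = 0.5 * m * v**2 + m * g * 0.0

# Thrown back up with speed v, the mass rises to h_max = v^2 / (2 g),
# which should return the original drop height.
h_max = v**2 / (2 * g)
```

The total energy agrees at top and bottom, and the round trip recovers the starting height, as the conservation argument requires.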
The Low-z Intergalactic Medium. I. OVI Baryon Census

Charles Danforth & J. Michael Shull 2004, ApJ, 624, 555

Intergalactic absorbers along lines of sight to distant quasars are a powerful diagnostic for the evolution and content of the intergalactic medium (IGM). In this study, we use the FUSE satellite to search 128 known Lya absorption systems at z less than 0.15 toward 31 AGN for corresponding absorption from higher Lyman lines and the important metal ions OVI and CIII. We detect OVI in 52 systems over a smaller range of column density (log N[OVI] = 12.8-14.4) than seen in HI (log N[HI] = 13.0-16.0). The co-existence of OVI and HI suggests a multiphase IGM, with both warm neutral and hot ionized components. With improved OVI detection statistics, we find a steep distribution in OVI column density, dN/dN[OVI] ~ N^-2.1, which suggests that numerous, weak OVI absorbers contain baryonic mass comparable to the rare strong absorbers. The total cosmological mass fraction is at least Omega[WHIM] h[70] = 0.0030 +- 0.0005, assuming (O/H) of 10% solar metallicity and an ionization fraction f[OVI] = 0.2. Thus, gas in the WHIM at 10^5-6 K contributes at least 6.6 +- 1.1% of the total baryonic mass at low redshift, a value 50% higher than previous estimates. Our survey is based on a large improvement in the number of OVI absorbers (52 vs. 10) and total redshift pathlength (Delta z = 2.2 vs. Delta z = 0.5) compared to earlier surveys.

The Low-z Intergalactic Medium. II. LyB, OVI, and CIII Forest

Danforth, Shull, Rosenberg, & Stocke 2004, submitted to ApJ, astro-ph/0508656

We present the results of a large survey of HI, OVI, and CIII absorption lines in the low-redshift (z less than 0.3) intergalactic medium (IGM). We begin with 171 strong Lyalpha absorption lines (W > 80 mA) in 31 AGN sight lines studied with the Hubble Space Telescope and measure corresponding absorption from higher-order Lyman lines with FUSE.
Higher-order Lyman lines are used to determine N[HI] and b[HI] accurately through a curve-of-growth (COG) analysis. We find that the number of HI absorbers per column density bin is a power-law distribution, dN/dN[HI] ~ N[HI]^-beta, with beta[HI] = 1.68 +- 0.11. We made 40 detections of OVI 1032,1038 and 30 detections of CIII 977 out of 129 and 148 potential absorbers, respectively. The column density distribution of CIII absorbers has beta[CIII] = 1.68 +- 0.04, similar to beta[HI] but not as steep as beta[OVI] = 2.1 +- 0.1. From the absorption-line frequency, dN[CIII]/dz = 12 (+3/-2) for W > 30 mA, we calculate a typical IGM absorber size r[0] ~ 400 kpc. The COG-derived b-values show that HI samples material with T less than 10^5 K, incompatible with a hot IGM phase. By calculating a grid of CLOUDY models of IGM absorbers with a range of collisional and photoionization parameters, we find it difficult to simultaneously account for the OVI and CIII observations with a single phase. Instead, the observations require a multiphase IGM in which HI and CIII arise in photoionized regions, while OVI is produced primarily through shocks. From the multiphase ratio N[HI]/N[CIII], we infer the IGM metallicity to be Z[C] = 0.12 Z[sun], similar to our previous estimate of Z[O] = 0.09 Z[sun] from OVI.

Charles Danforth
Last modified: Thu Sep 1 08:58:59 MDT 2005
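A quick way to see why a steep column-density distribution puts comparable mass in weak absorbers (an illustrative sketch I am adding, not a calculation from the papers): the mass traced per decade of column density scales as the integral of N times dN/dN, which goes as N^(2 - beta), nearly flat for beta near 2.

```python
import math

def mass_per_decade(beta: float, n_low: float) -> float:
    """Relative (unnormalized) mass traced by absorbers with column density
    in [n_low, 10 * n_low], for dN/dN ~ N**-beta:
    integral of N * N**-beta dN over one decade."""
    n_high = 10.0 * n_low
    p = 2.0 - beta
    if abs(p) < 1e-12:       # beta = 2 exactly: logarithmic integral
        return math.log(n_high / n_low)
    return (n_high ** p - n_low ** p) / p
```

For beta = 2.1, each successive decade contributes 10^(-0.1), or about 79%, as much mass as the one below it, so weak absorbers matter; for beta = 1.7 (the HI slope), each decade contributes 10^(0.3), about twice as much, and the strong absorbers dominate.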
Vertex culling without GL_EXT_cull_vertex [Archive] - OpenGL Discussion and Help Forums

07-05-2007, 02:32 AM

I draw a sphere in wireframe mode with face culling on. I also want to draw the vertices of the wireframe by using glVertex3fv. I cannot use the extension GL_EXT_cull_vertex for vertex culling. Because I have the normal for each vertex, it must be possible to decide which vertex has to be drawn and which not. Can I accomplish this task by using a vertex shader? If yes, is it possible to prevent drawing the vertex after it reached the vertex shader executing the culling test? If I can't use a vertex shader, how do I get the eye vector in object space to do the scalar product with the vertex normal?
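For what it's worth, the scalar-product test the post describes can be prototyped outside the pipeline (an illustrative NumPy sketch I am adding; all names and the sample matrix are mine). The eye position in object space is the inverse modelview matrix applied to the eye-space origin (0, 0, 0, 1); a vertex passes the test when its normal points toward the eye:

```python
import numpy as np

def visible_vertices(vertices, normals, modelview):
    """Per-vertex back-face style test: keep vertices whose normal
    faces the eye. `vertices` and `normals` are (N, 3) arrays in
    object space; `modelview` is the 4x4 modelview matrix."""
    eye_h = np.linalg.inv(modelview) @ np.array([0.0, 0.0, 0.0, 1.0])
    eye_obj = eye_h[:3] / eye_h[3]          # eye position in object space
    to_eye = eye_obj - vertices             # per-vertex eye vectors
    # Keep where dot(to_eye, normal) > 0, i.e. the normal faces the viewer.
    return np.einsum('ij,ij->i', to_eye, normals) > 0.0
```

With a modelview that translates the object to z = -5, a unit-sphere vertex at (0, 0, 1) (normal toward the eye) passes, while the antipodal vertex at (0, 0, -1) is culled.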
Math Forum Discussions - Order Isomorphic

Date: Jan 31, 2013 12:01 AM
Author: William Elliot
Subject: Order Isomorphic

Is every infinite subset S of omega_0, with the inherited order, order isomorphic to omega_0?

Yes. With the inherited order, S is well-ordered, so its order type is an ordinal, in fact a denumerable ordinal. Let eta be the order type of S. Since S is a subset of omega_0, eta <= omega_0. Since S is infinite and omega_0 is the smallest infinite ordinal, omega_0 <= eta. Thus S and omega_0 are order isomorphic.

Does the same reasoning hold to show that an uncountable subset of omega_1, with the inherited order, is order isomorphic to omega_1? It seems intuitive that since S is a subset of omega_1, the order type of S satisfies eta <= omega_1. How could that be rigorously shown?
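One way the step the post asks about can be made rigorous (a sketch I am adding, not part of the original thread): a subset of an ordinal embeds order-preservingly into that ordinal via the identity map, and a strictly increasing map between well-orders bounds the order type of its domain.

```latex
% Sketch: the order type of a subset of an ordinal is at most that ordinal.
\begin{itemize}
  \item Let $S \subseteq \beta$ with order type $\eta$, and let
        $f : \eta \to S$ be the order isomorphism.
  \item Composing $f$ with the inclusion $S \hookrightarrow \beta$ gives a
        strictly increasing map $g : \eta \to \beta$.
  \item By transfinite induction, $g(\alpha) \ge \alpha$ for every
        $\alpha < \eta$, so $\eta \le \beta$.
  \item Hence if $S \subseteq \omega_1$ is uncountable, its order type
        $\eta$ is an uncountable ordinal with $\eta \le \omega_1$;
        minimality of $\omega_1$ gives $\eta = \omega_1$.
\end{itemize}
```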
18 Aug 17:38 2013 abs minBound < (0 :: Int) && negate minBound == (minBound :: Int) Nicolas Frisby <nicolas.frisby <at> gmail.com> 2013-08-18 15:38:07 GMT The docs at give a NB mentioning that (abs minBound == minBound) is possible for fixed-width types. This holds, for example, at Int. It is also the case that (negate minBound == minBound). Two questions: 1) This behavior surprised me. Does it surprise enough people to include a warning in the Haddock for abs and negate? IE Here. 2) Is this a common behavior in other languages? My tinkering with gcc suggests it does not support the value -2^63, but instead bottoms out at (-2^63+1). Haskell-Cafe mailing list Haskell-Cafe <at> haskell.org
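The fixed-width wraparound under discussion can be mimicked with plain Python two's-complement arithmetic (an illustrative sketch; the 64-bit width is an assumption about Int on a typical 64-bit GHC platform, since Haskell only guarantees Int is at least 30 bits):

```python
def to_int64(x: int) -> int:
    """Interpret an arbitrary-precision integer as a two's-complement
    64-bit value, i.e. reduce it modulo 2**64 into [-2**63, 2**63 - 1]."""
    x &= (1 << 64) - 1
    return x - (1 << 64) if x >= (1 << 63) else x

INT64_MIN = -(1 << 63)

# Both abs and negate overflow back onto the minimum value,
# mirroring (abs minBound == minBound) and (negate minBound == minBound):
abs_min = to_int64(abs(INT64_MIN))
neg_min = to_int64(-INT64_MIN)
```

The asymmetry is just two's complement: there are 2^63 negative values but only 2^63 - 1 positive ones, so -2^63 has no representable negation.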
pfu and Taq PCR mutation percentage

Gys de Jongh GysdeJongh at compuserve.com
Wed Jan 19 18:17:20 EST 2000

Gys de Jongh <GysdeJongh at compuserve.com> wrote in message news:862t5t$4mi$1 at ssauraab-i-1.production.compuserve.com...

I think I found an analytical solution for the error propagation in a symmetrical PCR.

right : does not contain any error.
wrong : contains 1 or *more* errors.

Right molecules can only be copied from other right molecules; a copy of a wrong molecule will always be wrong.

A, B : to distinguish the forward from the reverse strand.
n = PCR cycle number.
b = number of bases in the ss DNA molecule; the number of base pairs in the ds DNA molecule.
N(n) = the number of ds DNA molecules at the completion of PCR cycle number n; this equals the number of ss DNA molecules of strand A, and also the number of ss DNA molecules of strand B, because we only consider symmetrical PCR.
p1w = the polymerase error rate; the chance of incorporating 1 base wrong.
p1r = the chance of incorporating 1 base right.
pssMr = the chance that a ss DNA molecule of b bases is copied (*only 1 time*) right.
FssMr(n) = the chance that a ss DNA molecule of b bases is copied right after the completion of n PCR cycles.
FdsMr(n) = the chance that a ds DNA molecule of b bases is copied right after the completion of n PCR cycles.
CF = Copied Fraction, between 0 and 1: the fraction of the template molecules that is copied.

First calculate pssMr. The chance of incorporating 1 base right is p1r = (1 - p1w), by definition. If a ss DNA molecule of b bases is to be copied (*only 1 time*) right, then the first base must be right (chance 1 - p1w), AND the second base must be right (chance 1 - p1w), and so on up to the last of the b bases. So b events, each with chance (1 - p1w), must *all* happen. From chance theory it follows that:

pssMr = (1 - p1w) ^ b

In a symmetrical PCR, pssMr is the same for the A strand and the B strand.
At the start of cycle 1 there are N(0) ds DNA molecules of b base pairs with no errors at all. After melting there are N(0) ss DNA molecules of the A strand and N(0) ss DNA molecules of the B strand, both with no errors at all.

The amplification of the A strand: the N(0) ss DNA molecules of the B strand will be copied to ss DNA molecules of the A strand. However, only a fraction, CF, of the B strand molecules will be copied, given the finite extension time and reaction velocity of the enzyme. Of this fraction, only a second fraction, pssMr, is copied right. So there were N(0) right ss DNA molecules at the start of cycle 1. After cycle 1 there will be the original N(0) right ss DNA molecules plus pssMr * CF * N(0) new right ones synthesized from the right B strands as template. So after the completion of cycle 1 there are:

N(0) + pssMr * CF * N(0) = N(0) * (1 + pssMr * CF)

right A strand molecules. This number is the input for the next cycle, where the process is repeated. Thus the number of right ss DNA molecules of the A strand will be N(0) * (1 + pssMr * CF) ^ n after completion of cycle number n. In a similar way we find the total number of ss DNA molecules (right or wrong): N(0) * (1 + CF) ^ n after completion of cycle number n. Division gives the fraction of right ss DNA molecules of the A strand after completion of cycle number n:

FssMr(n) = [ (1 + pssMr * CF) / (1 + CF) ] ^ n

which is, of course, also the chance that we pick 1 right ss DNA molecule of the A strand from the reaction mix after completion of cycle number n.
Observe that:

1) if the reaction is driven to completion, thus CF = 1, then the total number will increase as N(0) * (1 + 1) ^ n = N(0) * 2 ^ n;
2) if also the duplication is perfect, thus pssMr = 1, then the number of right molecules will increase as N(0) * (1 + 1 * 1) ^ n = N(0) * 2 ^ n;
3) the fraction of right molecules is *not* a linear function of the cycle number.

As the PCR is symmetrical, the same is true for the B strand.

The amplification of the ds DNA molecule: if we are interested in right ds DNA molecules, then at the condensation of the ss DNA molecules we must first pick a right A strand AND then a right B strand. The chance for this event is the product of these (equal) chances. This is of course also the fraction of right ds DNA molecules after the completion of cycle number n. So:

FdsMr(n) = FssMr(n) ^ 2

Observe that in this model a combination of 1 right A strand and 1 B strand with only 1 error base is considered wrong. If we put all this in a spreadsheet, the influence of the various parameters can be seen. FssMr(n) can be very well approximated by a linear function of n for some "real life" PCR parameters. So there are useful approximations of the problem.

More information about the Methods mailing list
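The spreadsheet exercise is easy to reproduce in a few lines of code. A sketch of the same model (parameter values here are illustrative, not from the post):

```python
# Error-propagation model from the derivation above:
#   pssMr    = (1 - p1w)^b
#   FssMr(n) = [(1 + pssMr*CF) / (1 + CF)]^n
#   FdsMr(n) = FssMr(n)^2
p1w = 1e-5   # polymerase error rate per base (assumed, Taq-like)
b   = 1000   # template length in bases (assumed)
CF  = 0.9    # fraction of templates copied per cycle (assumed)

pssMr = (1 - p1w) ** b   # chance a single strand is copied right once

def FssMr(n):
    """Fraction of right single strands after n cycles."""
    return ((1 + pssMr * CF) / (1 + CF)) ** n

def FdsMr(n):
    """Fraction of right double-stranded molecules after n cycles."""
    return FssMr(n) ** 2

for n in (10, 20, 30):
    print(n, FssMr(n), FdsMr(n))
```

With these parameters the decline of FssMr(n) over a typical 30-cycle run is indeed nearly linear in n, matching the remark above about useful linear approximations.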
Prototyping Algorithms and Testing CUDA Kernels in MATLAB NVIDIA GPUs are becoming increasingly popular for large-scale computations in image processing, financial modeling, signal processing, and other applications—largely due to their highly parallel architecture and high computational throughput. The CUDA programming model lets programmers exploit the full power of this architecture by providing fine-grained control over how computations are divided among parallel threads and executed on the device. The resulting algorithms often run significantly faster than traditional code written for the CPU. While algorithms written for the GPU are often much faster, the process of building a framework for developing and testing them can be time-consuming. Many programmers write CUDA kernels with the expectation that they will be integrated into C or Fortran programs for production. For this reason, they often use these languages to iterate on and test their kernels, which requires writing significant amounts of “glue code” for tasks such as transferring data to the GPU, managing GPU memory, initializing and launching CUDA kernels, and visualizing kernel outputs. This glue code is time-consuming to write and difficult to modify if, for example, you want to evaluate your kernel for different input data or visualize kernel outputs using a different type of plot. Using an image white balancing example, this article describes how MATLAB^® supports CUDA kernel development by providing a language and development environment for quickly evaluating kernels, analyzing and visualizing kernel results, and writing test harnesses to validate kernel results. Image White Balancing Example White balancing is a technique that is used to adjust the colors in an image so that the image does not have a reddish or bluish tint. Suppose you want to write a white balance routine in CUDA C for integration into a larger C program. 
Before writing any C code, it’s useful to explore the algorithm, investigate different algorithmic approaches, and develop a working prototype. We do this in MATLAB with a short whitebalance routine. This code computes the average amount of each color present in the input image and then applies scaling factors to ensure that the output image has an equal amount of each color. Notice that, with MATLAB, developing the algorithm takes just five lines of code—far fewer than it would take in C or Fortran. One reason is that MATLAB is a high-level, interpreted language, and therefore there is no need to perform administrative tasks such as declaring variables and allocating memory. Another is that MATLAB includes thousands of built-in math, engineering, and plotting functions and can be extended with domain-specific algorithms in signal and image processing, computational finance, communications, and other areas. We call the MATLAB white balance algorithm using an input image that includes a Gretag Macbeth color chart, which is commonly used to calibrate cameras. We then visualize the output using the imshow command in Image Processing Toolbox™:

adjustedImage = whitebalance(imagedata);

The algorithm removes the reddish tint from the original image (Figure 1). This working MATLAB implementation will serve as a reference as we develop and test CUDA kernels for the white balance algorithm.
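The MATLAB source itself is not reproduced in this copy of the article, but the gray-world idea it describes is easy to sketch. A rough pure-Python equivalent (names and details are mine, not MathWorks'): scale each channel so that all three channel means match the overall gray level.

```python
def whitebalance(pixels):
    """Gray-world white balance: pixels is a list of (r, g, b) tuples."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]  # per-channel means
    gray = sum(means) / 3                                      # target gray level
    scale = [gray / m for m in means]                          # per-channel scale
    return [tuple(p[c] * scale[c] for c in range(3)) for p in pixels]

# A reddish test image: the red channel is systematically too strong.
img = [(200, 100, 100), (220, 120, 110), (180, 90, 95)]
balanced = whitebalance(img)
for c in range(3):
    print(sum(p[c] for p in balanced) / len(balanced))  # all three means now equal
```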
We launch the MATLAB Profiler using the Run and Time button on the MATLAB desktop (Figure 2). We see that the final three lines of code take 0.15 seconds to run, making this the most time-consuming section of the algorithm. This code multiplies every element in the image data with an appropriate scale factor. It is clearly an operation that can be parallelized massively, and one that could be accelerated significantly on the GPU. We reimplement this code as a CUDA C/C++ kernel.

Before writing kernels for the other computational steps in the white balance algorithm, we will transition back to MATLAB to evaluate and test this kernel to make sure that it runs properly and gives correct results.

Evaluating the CUDA Kernel in MATLAB

To load the kernel into MATLAB, we provide paths to the compiled PTX file and source code:

kernel = parallel.gpu.CUDAKernel( 'applyScaleFactorsKernel.ptx', ...
    'applyScaleFactorsKernel.cu' );

Once the kernel is loaded we must complete a few setup tasks before we can launch it, such as initializing return data and setting the sizes of the thread blocks and grid. The kernel can then be used just like any other MATLAB function, except that we launch the kernel using the feval command, with the following syntax:

[outArguments] = feval(kernelName, inArguments)

We replace the final three lines of code in our MATLAB white balance algorithm with code that loads and launches the kernel; the updated white balance routine is saved as whitebalance_gpu.m. Notice the relative ease of calling CUDA kernels from MATLAB. Each task, such as transferring data to the GPU, initializing return data, and launching the kernel, is performed using a single line of MATLAB code. Furthermore, the code is robust in that we can evaluate the kernel for different sized images without updating the code. In lower-level languages like C or Fortran, the process of moving the data to the GPU, managing memory, and launching CUDA kernels requires significantly more coding.
The code is not only more difficult to write but also more difficult for other developers and project collaborators to understand and modify for their own purposes.

Using the MATLAB Prototype Code as a Test Harness

Now that we have integrated our kernel into MATLAB, we test whether the results are correct by comparing the original MATLAB implementation of the white balance algorithm (whitebalance.m) with the new version that incorporates the kernel (whitebalance_gpu.m). We see that the output images appear identical, providing visual validation that the kernel is working properly (Figure 3). We also calculate the norm of the difference of the output images to be zero, which validates the kernel numerically. We could easily use this test harness to test the kernel for additional input images with different characteristics. We could also develop more sophisticated test harnesses to perform more detailed postprocessing or to automate testing.

Incrementally Developing Additional CUDA Kernels

So far, we have reimplemented one portion of the white balance algorithm in CUDA. We ultimately want the entire algorithm written as a collection of CUDA kernels, since the major computational steps appear to be parallelizable and good candidates for the GPU. Performing the entire computation on the GPU would also reduce the overhead associated with transferring data between the CPU and GPU multiple times. We will not implement the remaining steps in CUDA C in this example; however, you could use the process we just used for the image scaling operation, writing kernels and then testing them against the original MATLAB code. If the computation is available in a CUDA library such as NPP, you can use the GPU MEX API in Parallel Computing Toolbox™ to call the host-side C functions, and pass them pointers to the underlying GPU data.
This process of incrementally developing kernels and testing them as you go makes it easier to isolate bugs in your code, and ensures a more organized development process.

Developing GPU Applications in MATLAB

This article has focused on how MATLAB can help you incrementally develop and validate CUDA kernels that will be integrated into a larger C application. For applications that do not have to be delivered in C, you can often save significant development time by staying in MATLAB and leveraging its built-in GPU capabilities. Most core math functions in MATLAB, as well as a growing number of toolbox functions, are overloaded to run on the GPU when given input data of the gpuArray data type. This means that you can get the speed advantages of the GPU without the need to write any CUDA kernels, and with minimal changes to your MATLAB code. Recall our original MATLAB prototype of the white balance routine. Rather than writing a CUDA kernel for the image scaling operation, we could have done it on the GPU simply by transferring the imageData variable to the GPU using the gpuArray command and then performing the image scaling without any additional changes to the code. This approach reduces the total time for the image scaling operation from 150 ms on the CPU to 9 ms on the GPU, of which 2.3 ms is execution time and 6.7 ms is for transferring the data to the GPU and back. In a larger algorithm the data transfer time often becomes negligible since data transfer needs to be completed only once, and we can compare execution times only. In our example, that equates to a 65x speedup on the GPU. As this example has shown, with MATLAB you can develop your algorithms much faster than in C or Fortran, and still take advantage of GPU computing for computationally intensive parts of your code.
Research Institute for Mathematics (Maine)

The mission of the Research Institute for Mathematics at Orono, Maine is to conduct and supervise pure mathematical research, grant PhDs to doctoral candidates, publish mathematical research and monographs, and advance the field of mathematical research. RIM is an independent research institute modeled on the IAS (Institute for Advanced Study) at Princeton, NJ.
Transcendental Functions

Computing transcendental functions is performed as follows. Suppose we wish to compute the value f(x): we find a sequence or series in terms of x which either tends towards the limit f(x) with a known rate of convergence, or converges to the limit f(x) while oscillating around it. Once we have this, we can generate a sequence of upper and lower bounds on this limit at each term of the original sequence. We know that the sequence converges, so we can use this fact to generate an infinite and strictly nested stream of intervals containing the limit of the sequence. Once we have this stream, we can then convert it into a signed binary representation of the desired result using the method described in section 5.1. We now give some sequences satisfying these conditions for a number of trigonometric and logarithmic transcendental functions.

Martin Escardo
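As a concrete illustration (my own sketch, not part of the original page): for an alternating series whose terms decrease in magnitude, consecutive partial sums bracket the limit, so they yield a strictly nested stream of intervals. For arctan(x) with |x| < 1:

```python
import math

def arctan_intervals(x, n):
    """Return n nested (lo, hi) intervals containing arctan(x), |x| < 1,
    built from partial sums of x - x^3/3 + x^5/5 - ..."""
    s = 0.0
    prev = None
    out = []
    for k in range(n + 1):
        s += (-1) ** k * x ** (2 * k + 1) / (2 * k + 1)
        if prev is not None:
            # consecutive partial sums bracket the limit
            out.append((min(prev, s), max(prev, s)))
        prev = s
    return out

ivals = arctan_intervals(0.5, 8)
for lo, hi in ivals:
    print(lo, hi)   # each interval contains arctan(0.5); widths shrink
```

Each interval shares one endpoint with the next and strictly contains it, which is exactly the nested stream of intervals described above.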
Epic pi quest sets 10 trillion digit record A pair of pi enthusiasts have calculated the largest chunk of the mathematical constant yet, reaching just over 10 trillion digits. Alexander Yee and Shigeru Kondo, respectively a computer scientist in the US and a systems engineer in Japan, fought hard-drive failures and narrowly missed widespread technical disruptions due to the Japan earthquake to break their previous Guinness world record of 5 trillion digits. As the title of the announcement on their website - "Same program, same computer, just a longer wait..." - suggests, it was only a matter of time before the record was smashed. Indeed, calculating so many digits of pi serves no useful mathematical purposes - pi goes on forever, but just 39 digits are enough to calculate the circumference of a circle the size of the observable universe with an error no larger than the radius of a hydrogen atom. Yet, as demonstrated by Yee and Kondo's recent epic quest - which was particularly fraught this time around - the feat still sparks intense passion, a testament to the enduring fascination with this curious ratio. Yee wrote the pi-calculating software while Kondo performed the number-crunching on his custom-built PC, adding another ten hard drives since the previous attempt to calculate pi. Calculations began on 16 October last year and were over a third complete when a hard drive failure on 9 December meant the pair had to start from scratch - the failure occurred just before a scheduled backup. Then the earthquake struck on 11 March, soon after the pair had reached 47 per cent completion. Thankfully, Kondo was fine and the earthquake failed to disrupt the pair's calculations, as his PC was connected to Japan's unaffected western electricity grid. Further hard-drive failures, and subsequent replacements, however, slowed things further until finally, on 26 August, the feat was complete. 
The calculations required were so intense that Kondo's computer heated the air in its room to nearly 40 °C. "We could dry the laundry immediately, but we had to pay 30,000 yen [$400] a month for electricity," his wife Yukkio told The Japan Times. The pair then had to verify that all 10 trillion digits were correct. After all, no one had ever calculated them before. Thankfully there is a formula for calculating any particular digit of pi, which they could use to check the result. A researcher at Yahoo used the same formula last year to find the 2-quadrillionth binary digit of pi. One final step remained: converting the digits from base 16, the number system used to carry out the calculations, to base 10, the familiar system we use every day. The pair finally finished this step last Sunday. Phew. Oh, and in case you are wondering, the 10 trillionth digit is 5. Top image: Pi-crunching machine/Alexander J. Yee & Shigeru Kondo. This post originally appeared on New Scientist.
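The per-digit formula the article alludes to is presumably the Bailey-Borwein-Plouffe (BBP) formula, which expresses pi as a base-16 series; combined with modular arithmetic it lets you compute a hexadecimal digit at a given position without computing the earlier ones. A minimal sketch that just sums the series directly:

```python
# Bailey-Borwein-Plouffe series for pi. A handful of terms already
# gives full double precision; true digit extraction at position d
# additionally uses modular exponentiation to skip ahead, which this
# sketch omits.
def bbp_pi(terms):
    s = 0.0
    for k in range(terms):
        s += (1 / 16 ** k) * (4 / (8 * k + 1) - 2 / (8 * k + 4)
                              - 1 / (8 * k + 5) - 1 / (8 * k + 6))
    return s

print(bbp_pi(12))   # ~3.141592653589793
```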
Linear equivalence of divisors in smooth algebraic surface

Let's assume $X$ is a smooth algebraic surface and $C$ a curve containing a smooth point $p_0\in X$; then there exist divisors $H_1$ and $H_2$, neither of which contains $p_0$, such that $C+H_1$ is linearly equivalent to $H_2$.

This is a fairly straightforward consequence of the definition of ample divisor. It's probably useful to work out the details for yourself. – Artie Prendergast-Smith Feb 1 '12 at 17:23

Besides, it is worded like a homework problem, not as a question. – Angelo Feb 2 '12 at 4:40
Quantum computing in free-fall What can a quantum computer do? The answer is as simple as watching a ball free-falling under gravity, say Australian physicists. They say the maths that describes a falling ball can be used to identify algorithms for quantum computers to work on. Mark Dowling and colleagues at the University of Queensland report their argument in today's issue of the journal Science. Some problems, like finding the factors of very large numbers, are beyond the capability of normal computers, says Dowling. "A simple example would be 15. The factors are 3 and 5," he says. "But if I give you 256 billion nine hundred and whatever and ask you what numbers multiply to give that, that's a hard problem." In general, a standard computer will require 10^n steps to find the factors of a number with 'n' digits. This means that as a number gets bigger, the number of steps in the algorithm used to solve the problem increases exponentially and it becomes unfeasible to calculate. But it is expected that quantum computers will be able to solve such problems more easily, using algorithms that do not have an exponential increase in steps as the number increases. Quantum computers are only in their infancy right now, which is why internet commerce can rely on encryption codes based on factoring very large numbers to secure credit card details. But as quantum computers mature, not only will this present a challenge for those involved in internet security, researchers will also want to put these powerful computers to more useful work than cracking codes. Scientists have been looking for problems, like factoring, which would be suitable for a quantum computer to solve. But so far it's been a difficult task. Now the Queensland researchers have found a surprising way of identifying quantum computer algorithms. 
Geometric inspiration Dowling and colleagues were inspired by the field of mathematics called Riemannian geometry, which helps to find the shortest path between two points in a curved space. Picture a ball at the top of a hill in a hilly landscape about to travel from A to B. The quickest path would be for the ball to fall freely from A under gravity, working its way down the hill to B. Now imagine the hills are like steps in an algorithm. The bigger the hill, the more steps. There may be many possible algorithms that solve a problem, or paths to get from A to B. But the ones with the least number of steps are equivalent to the path taken by the free-falling ball. The researchers have found that the maths that describes the path of the free-falling ball can be used to identify algorithms suitable for quantum computer problems. "It's a new route to finding problems that a quantum computer can do easily," says Dowling. He says there are many problems that standard computers cannot solve and that may be candidates for quantum computing. One such problem is the 'travelling salesman problem', which identifies the shortest route for someone visiting a large number of locations.
May 20, 2000

This Week's Finds in Mathematical Physics (Week 147)

John Baez

Various books are coming out to commemorate the millennium.... describing the highlights of the math we've done so far, and laying out grand dreams for the future. The American Mathematical Society has come out with one:

1) Mathematics: Frontiers and Perspectives, edited by Vladimir Arnold, Michael Atiyah, Peter Lax and Barry Mazur, AMS, Providence, Rhode Island, 2000.

This contains 30 articles by bigshots like Chern, Connes, Donaldson, Jones, Lions, Manin, Mumford, Penrose, Smale, Vafa, Wiles and Witten. I haven't actually read it yet, but I want to get ahold of it. Springer Verlag is coming out with one, too:

2) Mathematics Unlimited: 2001 and Beyond, edited by Bjorn Engquist and Wilfried Schmid, Springer Verlag, New York, 2000.

It should appear in the fall. I don't know what the physicists are doing along these lines. The American Physical Society has a nice timeline of 20th century physics on their website:

3) The American Physical Society: A Century of Physics, available at http://timeline.aps.org/APS/home_HighRes.html

But I don't see anything about books. One reason I haven't been doing many This Week's Finds lately is that I've been buying and then moving into a new house. Another is that James Dolan and I have been busily writing our own millennial pontifications, which will appear in the Springer-Verlag book:

4) John Baez and James Dolan, From finite sets to Feynman diagrams, preprint available as math.QA/0004133

So let me talk about this stuff a bit....

As usual, the underlying theme of this paper is categorification. I've talked about this a lot already - e.g.
in "week121" - so I'll assume you remember that when we categorify, we use this analogy: SET THEORY CATEGORY THEORY elements objects equations between elements isomorphisms between objects sets categories functions functors equations between functions natural isomorphisms between functors to take interesting equations and see them as shorthand for even more interesting isomorphisms. To take a simple example, consider the laws of basic arithmetic, like a+b = b+a or a(b+c) = ab+ac. We usually think of these as equations between elements of the set of natural numbers. But really they arise from isomorphisms between objects of the category of finite sets. For example, if we have finite sets a and b, and we use a+b to denote their disjoint union, then there is a natural isomorphism between a+b and b+a. Moreover, this isomorphism is even sort of interesting! For example, suppose we use 1 to denote a set consisting of one dot, and 2 to denote a set of two dots. Then the natural isomorphism between 1+2 and 2+1 can be visualized as the process of passing one dot past two, like this: . . . \ / / \ / / \ / / / / / \ / / / / / \ / / \ / / \ / / \ . . . This may seem like an excessively detailed "picture proof" that 1+2 indeed equals 2+1, perhaps suitable for not-too-bright kindergarteners. But in fact it's just a hop, skip and a jump from here to heavy-duty stuff like the homotopy groups of spheres. I sketched how this works in "week102" so I won't do so again here. The point is, after we categorify, elementary math turns out to be pretty Now, let me make this idea of "categorifying the natural numbers" a bit more precise. Let FinSet stand for the category whose objects are finite sets and whose morphisms are functions between these. If we "decategorify" this category by forming the set of isomorphism classes of objects, we get N, the natural numbers. All the basic arithmetic operations on N come from operations on FinSet. 
I've already noted how addition comes from disjoint union. Disjoint union is a special case of what category theorists call the "coproduct", which makes sense for a bunch of categories - see "week99" for the general definition. Similarly, multiplication comes from the Cartesian product of finite sets, which is a special case of what category theorists call the "product". To get the definition of a product, you just take the definition of a coproduct and turn all the arrows around. There are also nice category-theoretic interpretations of the numbers 0 and 1, and all the basic laws governing 0, 1, addition and multiplication. Exponentiation too! Combinatorists have lots of fun thinking about how to take equations in N and prove them using explicit isomorphisms in FinSet - they call such a proof a "bijective proof". To read more about this, try:

5) James Propp and David Feldman, Producing new bijections from old, Adv. Math. 113 (1995), 1-44. Also available at http://www.math.wisc.edu/~propp/articles.html

6) John Conway and Peter Doyle, Division by three. Available at http://math.dartmouth.edu/~doyle/docs/three/

The latter article studies this question: if I give you an isomorphism between 3x and 3y, can you construct an isomorphism between x and y? Here of course x and y are finite sets, 3 is any 3-element set, and multiplication means Cartesian product. Of course you can prove an isomorphism exists, but can you construct one in a natural way - i.e., without making any random choices? The history of this puzzle turns out to be very interesting. But I don't want to give away the answer! See if you can do it or not.

Anyway, having categorified the natural numbers, we might be inclined to go on and categorify the integers. Can we do it? In other words: can we find something like the category of finite sets that includes "sets with a negative number of elements"? There turns out to be an interesting literature on this subject:

7) Daniel Loeb, Sets with a negative number of elements, Adv. Math. 91 (1992), 64-74.

8) S. Schanuel, Negative sets have Euler characteristic and dimension, Lecture Notes in Mathematics 1488, Springer Verlag, Berlin, 1991, pp. 379-385.

9) James Propp, Exponentiation and Euler measure, available as math.CO/0204009.

10) Andre Joyal, Regle des signes en algebre combinatoire, Comptes Rendus Mathematiques de l'Academie des Sciences, La Societe Royale du Canada, VII (1985), 285-290.

See also "week102" for more.... But I don't want to talk about negative sets right now! Instead, I want to talk about fractional sets. It may seem odd to tackle division before subtraction, but historically, the negative numbers were invented quite a bit after the nonnegative rational numbers. Apparently half an apple is easier to understand than a negative apple! This suggests that perhaps `sets with fractional cardinality' are simpler than `sets with negative cardinality'.

The key is to think carefully about the meaning of division. The usual way to get half an apple is to chop one into "two equal parts". Of course, the parts are actually NOT EQUAL - if they were, there would be only one part! They are merely ISOMORPHIC. This suggests that categorification will be handy. Indeed, what we really have is a Z/2 symmetry group acting on the apple which interchanges the two isomorphic parts.

In general, if a group G acts on a set S, we can "divide" the set by the group by taking the quotient S/G, whose points are the orbits of the action. If S and G are finite and G acts freely on S, this construction really corresponds to division, since the cardinality |S/G| is equal to |S|/|G|. However, it is crucial that the action be free. For example, why is 6/2 = 3? We can take a set S consisting of six dots in a row:

o o o o o o

let G = Z/2 act freely by reflections, and identify all the elements in each orbit to obtain a 3-element set S/G. Pictorially, this amounts to folding the set S in half, so it is not surprising that |S/G| = |S|/|G| in this case.
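Computing orbits directly makes the "free action" caveat concrete. A small Python sketch (mine, not Baez's):

```python
# Orbits of the Z/2 reflection action on n dots {0, ..., n-1}.
def reflection_orbits(n):
    g = lambda i: n - 1 - i   # the nontrivial element of Z/2
    return sorted({frozenset({i, g(i)}) for i in range(n)}, key=min)

print(len(reflection_orbits(6)))   # 3 orbits: free action, so |S/G| = 6/2
print(len(reflection_orbits(5)))   # 3 orbits, not 5/2: the middle dot is fixed
print([sorted(o) for o in reflection_orbits(5)])   # [[0, 4], [1, 3], [2]]
```

The singleton orbit {2} is what breaks the naive division: taking the ordinary quotient can never produce a fractional count.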
Unfortunately, if we try a similar trick starting with a 5-element set:

o o o o o

it fails miserably! We don't obtain a set with 2.5 elements, because the group action is not free: the point in the middle gets mapped to itself. So to define "sets with fractional cardinality", we need a way to count the point in the middle as "half a point".

To do this, we should first find a better way to define the quotient of S by G when the action fails to be free. Following the policy of replacing equations by isomorphisms, let us define the "weak quotient" S//G to be the category with elements of S as its objects, with a morphism g: s → s' whenever g(s) = s', and with composition of morphisms defined in the obvious way.

Next, let us figure out a good way to define the "cardinality" of a category. Pondering the examples above leads us to the following recipe: for each isomorphism class of objects we pick a representative x and compute the reciprocal of the number of automorphisms of this object; then we sum over isomorphism classes. It is easy to see that with this definition, the point in the middle of the previous picture gets counted as `half a point' because it has two automorphisms, so we get a category with cardinality 2.5. In general,

|S//G| = |S|/|G|

whenever G is a finite group acting on a finite set S. This formula is a simplified version of `Burnside's lemma', so-called because it is due to Cauchy and Frobenius. Burnside's lemma gives the cardinality of the ordinary quotient. But the weak quotient is nicer, so Burnside's lemma simplifies when we use weak quotients.

Now, the formula for the cardinality of a category makes sense even for some categories that have infinitely many objects - all we need is for the sum to make sense. So let's try to compute the cardinality of the category of finite sets! Since any n-element set has n! automorphisms (i.e.
permutations), we get the following marvelous formula:

|FinSet| = e

This turns out to explain lots of things about the number e. Now, a category all of whose morphisms are isomorphisms is called a "groupoid". Any category C has an underlying groupoid C[0] with the same objects but only the isomorphisms as morphisms. The cardinality of a category C always equals that of its underlying groupoid C[0]. This suggests that this notion should really be called "groupoid cardinality".

If you're a fan of n-categories, this suggests that we should generalize the concept of cardinality to n-groupoids, or even ω-groupoids. And luckily, we don't need to understand ω-groupoids very well to try our hand at this! Omega-groupoids are supposed to be an algebraic way of thinking about topological spaces up to homotopy. Thus we just need to invent a concept of the `cardinality' of a topological space which has nice formal properties and which agrees with the groupoid cardinality in the case of homotopy 1-types. In fact, this is not hard to do. We just need to use the homotopy groups π[k](X) of the space X.

So: let's define the "homotopy cardinality" of a topological space X to be the alternating product

|X| = |π[1](X)|^-1 |π[2](X)| |π[3](X)|^-1 ....

when X is connected and the product converges; if X is not connected, let's define its homotopy cardinality to be the sum of the homotopy cardinalities of its connected components, when the sum converges. We call spaces with well-defined homotopy cardinality "tame". The disjoint union or Cartesian product of tame spaces is again tame, and we have

|X + Y| = |X| + |Y| , |X × Y| = |X| × |Y|

just as you would hope. Even better, homotopy cardinality gets along well with fibrations, which we can think of as `twisted products' of spaces. Namely, if F → X → B is a fibration and the base space B is connected, we have

|X| = |F| × |B|

whenever two of the three spaces in question are tame (which implies the tameness of the third).
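Both cardinality computations above lend themselves to a quick numerical check. The sketch below (names are my own) computes the groupoid cardinality of the weak quotient of five dots by the Z/2 reflection, recovering the "2.5-element set", and then sums 1/n! to approximate |FinSet| = e:

```python
import math
from fractions import Fraction

# Groupoid cardinality of the weak quotient S//G: one term 1/|Aut(s)| per
# orbit, where Aut(s) = {g in G : g(s) = s}.  Here Z/2 acts on 5 dots by
# the reflection r(i) = 4 - i, which fixes the middle dot.
def weak_quotient_cardinality(S, group_elements):
    total, seen = Fraction(0), set()
    for s in S:
        orbit = frozenset(g(s) for g in group_elements)
        if orbit not in seen:
            seen.add(orbit)
            automorphisms = sum(1 for g in group_elements if g(s) == s)
            total += Fraction(1, automorphisms)
    return total

print(weak_quotient_cardinality(range(5), [lambda i: i, lambda i: 4 - i]))
# 5/2 -- the "2.5-element set", matching |S|/|G|

# Partial sums of sum_n 1/n! approach |FinSet| = e.
print(sum(1 / math.factorial(n) for n in range(20)))   # 2.718281828...
```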
As a fun application of this fact, recall that any topological group G has a "classifying space" BG, meaning a space with a principal G-bundle over it

G → EG → BG

whose total space EG is contractible. I described how to construct the classifying space in "week117", at least in the case of a discrete group G, but I didn't say much about why it's so great. The main reason it's great is that any G-bundle over any space is a pullback of the bundle EG over BG. But right now, what I want to note is that since EG is contractible it is tame, and |EG| = 1. Thus G is tame if and only if BG is, and

|BG| = 1 / |G|

so we can think of BG as the "reciprocal" of G!

This idea is already lurking behind the usual approach to "equivariant cohomology". Suppose X is a space on which the topological group G acts. When the action of G on X is free, it is fun to calculate cohomology groups (and other invariants) of the quotient space X/G. When the action is not free, this quotient can be very pathological, so people usually replace it by the "homotopy quotient" X//G, which is defined as (EG × X)/G. This is like the ordinary quotient but with equations replaced by homotopies. And there is a fibration

X → X//G → BG,

so when X and G are tame we have

|X//G| = |X| × |BG| = |X|/|G|

just as you would hope!

Now in the paper, Jim and I go on to talk about how all these ideas can be put to use to give a nice explanation of the combinatorics of Feynman diagrams. But I don't want to explain all that stuff here - then you wouldn't need to read the paper! Instead, I just want to point out something mysterious about homotopy cardinality.

Homotopy cardinality is formally very similar to Euler characteristic. The Euler characteristic χ(X) is given by the alternating sum

χ(X) = dim(H[0](X)) - dim(H[1](X)) + dim(H[2](X)) - ....

whenever the sum converges, where H[n](X) is a vector space over the rational numbers called the nth rational homology group of X.
Just as for homotopy cardinality, we have

χ(X + Y) = χ(X) + χ(Y), χ(X × Y) = χ(X) × χ(Y)

and more generally, whenever F → X → B is a fibration and the base space B is connected, we have

χ(X) = χ(F) × χ(B)

whenever any two of the spaces have well-defined Euler characteristic, which implies that the third does too (unless I'm confused). So Euler characteristic is a lot like homotopy cardinality. But not many spaces have both well-defined homotopy cardinality and well-defined Euler characteristic. So they're like Jekyll and Hyde - you hardly ever see them in the same place at the same time, so you can't tell if they're really the same guy.

But there are some weird ways to try to force the issue and compute both quantities for certain spaces. For example, suppose G is a finite group. Then we can build BG starting from a simplicial set with 1 nondegenerate 0-simplex, |G|-1 nondegenerate 1-simplices, (|G|-1)^2 nondegenerate 2-simplices, and so on. If there were only finitely many nondegenerate simplices of all dimensions, we could compute the Euler characteristic of this space as the alternating sum of the numbers of such simplices. So let's try doing that here! We get:

χ(BG) = 1 - (|G|-1) + (|G|-1)^2 - ....

Of course the sum diverges, but if we go ahead and use the geometric series formula anyway, we get

χ(BG) = 1/|G|

which matches our previous (rigorous) result that

|BG| = 1/|G|

So maybe they're the same after all! There are similar calculations like this in James Propp's paper "Exponentiation and Euler measure", referred to above... though he uses a slightly different notion of Euler characteristic, due to Schanuel. Clearly something interesting is going on with these "divergent Euler characteristics". For appearances of this sort of thing in physics, see:

11) Matthias Blau and George Thompson, N = 2 topological gauge theory, the Euler characteristic of moduli spaces, and the Casson invariant, Comm. Math. Phys. 152 (1993), 41-71.

and the references therein.
(I discussed this paper a bit in "week51".) However, there are still challenging tests to the theory that homotopy cardinality and Euler characteristic are secretly the same. Here's a puzzle due to James Dolan. Consider a Riemann surface of genus g > 1. Such a surface has Euler characteristic 2 - 2g, but such a surface also has vanishing homotopy groups above the first, which implies that it's BG for G equal to its fundamental group. If homotopy cardinality and Euler characteristic were the same, this would imply

|G| = 1/|BG| = 1/χ(S) = 1/(2 - 2g)

But the fundamental group G is infinite! What's going on?

Well, I'm actually sort of glad that 1/(2 - 2g) is negative. Sometimes a divergent series of positive integers can be cleverly summed up to give a negative number. The simplest example is the geometric series

1 + 2 + 4 + 8 + 16 + ... = 1/(1 - 2) = -1

but in "week126" I talked about a more sophisticated example that is very important in string theory:

1 + 2 + 3 + 4 + 5 + ... = ζ(-1) = -1/12

So maybe some similar trickery can be used to count the elements of G and get a divergent sum that can be sneakily evaluated to obtain 1/(2 - 2g). Of course, even if we succeed in doing this, the skeptics will rightly question the significance of such tomfoolery. But there is sometimes a lot of profound truth lurking in these bizarre formal manipulations, and sometimes if you examine what's going on carefully enough, you can discover cool stuff.

To wrap up, let me mention an interesting paper on the foundations of categorification:

12) Claudio Hermida, From coherent structures to universal properties, available at http://www.cs.math.ist.utl.pt/cs/s84/claudio.html

and also two papers about 2-groupoids and topology:

13) K. A. Hardie, K. H. Kamps, R. W. Kieboom, A homotopy bigroupoid of a topological space, in: Categorical Methods in Algebra and Topology, pp. 209-222, Mathematik-Arbeitspapiere 48, Universitaet Bremen, 1997. Appl. Categ. Structures, to appear.

K. A. Hardie, K. H.
Kamps, R. W. Kieboom, A homotopy 2-groupoid of a Hausdorff space, preprint.

I would talk about these if I had the energy, but it's already past my bed-time. Good night!

Addenda: Toby Bartels had some interesting things to say about this issue of This Week's Finds. Here is my reply, which quotes some of his remarks....

Toby Bartels (toby@ugcs.caltech.edu) wrote:

>>3) The American Physical Society: A Century of Physics, available
>>at http://timeline.aps.org/APS/home_HighRes.html

>I like how they make the famous picture of Buzz Aldrin,
>the one that everyone thinks is a picture of Neil Armstrong,
>into a picture of Neil Armstrong after all:
>"Here he is reflected in Buzz Aldrin's visor.".

Heh. Sounds like something a doting grandmother would say!

>> 5) John Conway and Peter Doyle, Division by three.
>> http://math.dartmouth.edu/~doyle/docs/three/three/three.html

>>The latter article studies this question: if I give you an isomorphism
>>between 3x and 3y, can you construct an isomorphism between x and y?

>The answer must be something that won't work
>if 3 is replaced by an infinite cardinal.
>That said, I can't even figure out how to divide by 2!
>If I take the 3 copies of X or Y and put them on top of each other,
>I get a finite, 2coloured, 3valent, nonsimple, undirected graph.
>I remember from combinatorics that the 2 colours of
>a finite, 2coloured, simple, undirected graph of fixed valency
>are equipollent, but I can't remember the bijective proof.
>(Presumably it can be adopted to nonsimple graphs.)

It's a tricky business. Let me quote from the above article:

A proof that it is possible to divide by two was presented by Bernstein in his Inaugural Dissertation of 1901, which appeared in Mathematische Annalen in 1905; Bernstein also indicated how to extend his results to division by any finite n, but we are not aware of anyone other than Bernstein himself who ever claimed to understand this argument.
In 1922 Sierpinski published a simpler proof of division by two, and he worked hard to extend his method to division by three, but never succeeded. In 1927 Lindenbaum and Tarski announced, in an infamous paper that contained statements (without proof) of 144 theorems of set theory, that Lindenbaum had found a proof of division by three. Their failure to give any hint of a proof must have frustrated Sierpinski, for it appears that twenty years later he still did not know how to divide by three. Finally, in 1949, in a paper `dedicated to Professor Waclaw Sierpinski in celebration of his forty years as teacher and scholar', Tarski published a proof. In this paper, Tarski explained that unfortunately he couldn't remember how Lindenbaum's proof had gone, except that it involved an argument like the one Sierpinski had used in dividing by two, and another lemma, due to Tarski, which we will describe below. Instead of Lindenbaum's proof, he gave another. Now when we began the investigations reported on here, we were aware that there was a proof in Tarski's paper, and Conway had even pored over it at one time or another without achieving enlightenment. The problem was closely related to the kind of question John had looked at in his thesis, and it was also related to work that Doyle had done in the field of bijective combinatorics. So we decided that we were going to figure out what the heck was going on. Without too much trouble we figured out how to divide by two. Our solution turned out to be substantially equivalent to that of Sierpinski, though the terms in which we will describe it below will not much resemble Sierpinski's. We tried and tried and tried to adapt the method to the case of dividing by three, but we kept getting stuck at the same point in the argument. So finally we decided to look at Tarski's paper, and we saw that the lemma Tarski said Lindenbaum had used was precisely what we needed to get past the point we were stuck on! 
So now we had a proof of division by three that combined an argument like that Sierpinski used in dividing by two with an appeal to Tarski's lemma, and we figured we must have hit upon an argument very much like that of Lindenbaum's. This is the solution we will describe here: Lindenbaum's argument, after 62 years.

>>So: let's define the "homotopy cardinality" of a topological space X to
>>be the alternating product |X| = \prod_{i>0} |π_i(X)|^{(-1)^i}
>>when X is connected and the product converges;

>What about divergence to 0?
>If π_i(X) is infinite for some odd i but no even i,
>can we say |X| is 0?

Well, we can, but we might regret it later. In a sense 0 is no better than ∞ when one is doing products, so if you allow 0 as a legitimate value for a homotopy cardinality, you should allow ∞, but if you allow both, you get in trouble when you try to multiply them. This dilemma is familiar from the case of infinite sums (where +∞ and -∞ are the culprits), and the resolution seems to be to either:

- disallow both 0 and ∞ as legitimate answers for the above product, or
- allow both, but then be extra careful when stating your theorems so that you don't run into problems.

>>As a fun application of this fact, recall that any topological group G
>>has a "classifying space" BG, meaning a space with a principal G-bundle
>>over it G → EG → BG
>>whose total space EG is contractible. I described how to construct
>>the classifying space in "week117", at least in the case of a discrete
>>group G, but I didn't say much about why it's so great. The main
>>reason it's great is that any G-bundle over any space is a pullback
>>of the bundle EG over BG. But right now, what I want to note is that
>>since EG is contractible it is tame, and |EG| = 1. Thus G is tame if
>>and only if BG is, and |BG| = 1 / |G|,
>>so we can think of BG as the `reciprocal' of G!

>OTOH, G is already a kind of reciprocal of itself.
>If G is a discrete group, it's a topological space
>with |G|[homotopy] = |G|[set].
>But G is also a groupoid with 1 object,
>and |G|[groupoid] = 1 / |G|[set].
>So, |G|[homotopy] |G|[groupoid] = 1.

Believe it or not, you are reinventing BG! A groupoid can be reinterpreted as a space with vanishing homotopy groups above the first, and if you do this to the groupoid G, you get BG.

More generally: Recall that we can take a pointed space X and form a pointed space LX of loops in X that start and end at the basepoint. This clearly has

π[n+1](X) = π[n](LX)

so if X is connected and tame we'll have

|LX| = 1/|X|

Now with a little work you can make LX (or a space homotopy-equivalent to it!) into a topological group with composition of loops as the product. And then it turns out that BLX is homotopy equivalent to X when X is connected. Conversely, given a topological group G, LBG is homotopy equivalent to G. So what we're seeing is that topological groups and connected pointed spaces are secretly the same thing, at least from the viewpoint of homotopy theory. In topology, few things are as important as this fact.

But what's really going on here? Well, to go from a topological group G to a connected pointed space, you have to form BG, which has all the same homotopy groups but just pushed up one notch:

π[n+1](BG) = π[n](G)

And to go from a connected pointed space X to a topological group, you have to form LX, which has all the same homotopy groups but just pushed down one notch:

π[n-1](LX) = π[n](X)

This is actually the trick you are playing, in slight disguise. And the real point is that a 1-object ω-groupoid can be reinterpreted as an ω-groupoid by forgetting about the object and renaming all the j-morphisms "(j-1)-morphisms". See? When you finally get to the bottom of it, this "BG" business is just a silly reindexing game!!! Of course no textbook can admit this openly - partially because they don't talk about ω-groupoids.

>>So Euler characteristic is a lot like homotopy cardinality.
>>But not many spaces have both well-defined homotopy cardinality and
>>well-defined Euler characteristic. So they're like Jekyll and Hyde -
>>you hardly ever see them in the same place at the same time, so you
>>can't tell if they're really the same guy.

>So, are they ever both defined but different?

I don't recall any examples where they're both finite, but different. I know very few cases where they're both finite! How about the point? How about the circle? How about the 2-sphere? I leave you to ponder these cases.

>>However, there are still challenging tests to the theory that homotopy
>>cardinality and Euler characteristic are secretly the same. Here's a
>>puzzle due to James Dolan. Consider a Riemann surface of genus g > 1.
>>Such a surface has Euler characteristic 2 - 2g, but such a surface also
>>has vanishing homotopy groups above the first, which implies that it's
>>BG for G equal to its fundamental group. If homotopy cardinality and
>>Euler characteristic were the same, this would imply
>> |G| = 1/|BG| = 1/χ(S) = 1/(2 - 2g).
>>But the fundamental group G is infinite! What's going on?

>This doesn't seem too surprising. 1/(2 - 2g) is also infinite.
>Just use the geometric series in reverse:
> 1/(2 - 2g) = (1/2) ∑[i] g^i,
>which diverges since g > 1.

Well, what I really want is a way of counting elements of the fundamental group of the surface S which gives me a divergent sum that I can cleverly sum up to get 1/(2 - 2g).

Later, my wish above was granted by Laurent Bartholdi and Danny Ruberman! People have already figured out how to count the number of elements in the fundamental group of a Riemann surface, resum, and get 1/(2 - 2g) in a nice way. Here are two references:

14) William J. Floyd and Steven P. Plotnick, Growth functions on Fuchsian groups and the Euler characteristic, Invent. Math. 88 (1987), 1-29.

15) R. I. Grigorchuk, Growth functions, rewriting systems and Euler characteristic, Mat. Zametki 58 (1995), 653-668, 798.
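The "sneaky evaluation" of divergent geometric series discussed above can be imitated numerically by plugging the divergent ratio into the closed form 1/(1 - r) anyway. A small Python sketch of the formal manipulation (my own illustration, not from the article):

```python
# The geometric series sum_{i>=0} r**i converges to 1/(1 - r) only for |r| < 1;
# "sneakily" plugging in a divergent ratio anyway reproduces the formal sums.
def geometric_closed_form(r):
    return 1 / (1 - r)

print(geometric_closed_form(2))        # -1.0: "1 + 2 + 4 + 8 + ... = -1"

# Toby's reversed series 1/(2 - 2g) "=" (1/2) * sum_i g**i, for genus g = 2, 3:
for g in (2, 3):
    print(0.5 * geometric_closed_form(g), 1 / (2 - 2 * g))
# -0.5 -0.5, then -0.25 -0.25: the sneaky sum agrees with 1/(2 - 2g)
```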
You can read more about Euler characteristic and homotopy cardinality here: 16) John Baez, Euler characteristic versus homotopy cardinality, lecture at the Fields Institute Program on Applied Homotopy Theory, September 20, 2003. Available in PDF form at http:// The imaginary expression √-a and the negative expression -b resemble each other in that each one, when they seem the solution of a problem, they indicate that there is some inconsistency or nonsense. - Augustus De Morgan, 1831. © 2000 John Baez
Finding the acceleration of a lever?
October 27th 2009, 03:34 AM #1

Consider the rotating rod in the figure. The length of the rod is L m. While the rod is oriented vertically as shown, the velocity and tangential acceleration of its end point B are given as 1 m/s and -b m/s², respectively. Calculate the normal component of the acceleration at point B. The values of L and b are given below.

b[m/s²] = 3.6; L[m] = 3;

Please help!
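For what it's worth, if the rod pivots about its other end so that B moves on a circle of radius L (an assumption the problem seems to intend, but which is not stated in the post), the normal component is just v²/r, independent of the tangential value b. A quick check in Python:

```python
# Normal (centripetal) acceleration of a point on a circular path: a_n = v**2 / r.
# Assumption (mine, not stated in the post): the rod pivots about its other end,
# so point B travels a circle of radius r = L.
v = 1.0    # m/s, speed of end point B
L = 3.0    # m, rod length = assumed radius of B's path

a_n = v**2 / L
print(round(a_n, 4))   # 0.3333 m/s^2; the tangential -3.6 m/s^2 doesn't enter
```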
Ming the Mechanic: Square Wheels by Flemming Funch From Roland Piquepaille's Technology Trends. So, you didn't think a bicycle could have square wheels? Well, it all depends on the surface you're riding on. Stan Wagon, a mathematician at Macalester College in St. Paul, Minn., has a bicycle with square wheels. It's a weird contraption, but he can ride it perfectly smoothly. His secret is the shape of the road over which the wheels roll. A square wheel can roll smoothly, keeping the axle moving in a straight line and at a constant velocity, if it travels over evenly spaced bumps of just the right shape. This special shape is called an inverted catenary. A catenary is the curve describing a rope or chain hanging loosely between two supports. At first glance, it looks like a parabola. In fact, it corresponds to the graph of a function called the hyperbolic cosine. Turning the curve upside down gives you an inverted catenary -- just like each bump of Wagon's road. OK, so here's an idea: What about wheels that dynamically change shape quickly enough that they always fit whatever road surface you're going over, so that you can always have a smooth ride. And we might become less attached to smooth surfaces.
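The shape of one road bump can be sketched numerically. For a suitably normalized square wheel, each bump is an arc of y = -cosh(x) truncated where the slope reaches ±1, so that adjacent bumps meet at 45° and a corner of the square sits in the trough. A rough Python illustration (the normalization is my simplifying assumption):

```python
import math

# One bump of the square-wheel road: an inverted catenary y = -cosh(x), cut off
# where the slope dy/dx = sinh(x) reaches +/-1, i.e. at x = +/- asinh(1)
# (wheel size normalized here; other sizes rescale the catenary).
half_width = math.asinh(1.0)             # = ln(1 + sqrt(2))

print(round(half_width, 4))              # 0.8814: half-width of one bump
print(round(math.sinh(half_width), 4))   # 1.0: adjacent bumps meet at 45 degrees
print(round(-math.cosh(half_width), 4))  # -1.4142: the trough where bumps join
```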
Long Division with Remainder or Decimals
July 1st 2008, 08:54 PM #1 Senior Member Feb 2008 Berkeley, Illinois

I've put together a long division calculator that shows the math for remainders or decimal places. I've done some testing on it, but want to work out any kinks that you may find or add enhancements if requested. This lesson is located on the basic math page, and searchable by division, remainder, long, and decimal. I've tried to design it similar to the mathisfun.com examples in some of the threads here. Long Division with Remainders or Decimals

There's a bug. Do first, 4 / 6 (long division) then 14 / 6. The first problem is not entirely erased.

I made one change last night, not sure if it is related to this. I ran it this morning immediately one after another and it looks fine. Did you scroll down to see the math? Let me know if it still does not work, I'll clear out my arrays. I did 4/6 and then 14/6 using both methods, one after the other and it looks ok. I've seen something like you described before with javascript with rapid fire pushing of buttons. Also, if you scroll down to the bottom of the math bar, and then run another calc, the scroll bar stays at the bottom. Let me know if it looks ok on your end. Refresh your browser first please.

My apologies. I read up on what happened, just because it worked for me does not mean it will work for you. I've emptied my results upon each calculation now, so this should fix it on your end. Sorry about the confusion. I just made the change 2 minutes ago.

I've put together a long division calculator that shows the math for remainders or decimal places. I've done some testing on it, but want to work out any kinks that you may find or add enhancements if requested. This lesson is located on the basic math page, and searchable by division, remainder, long, and decimal. I've tried to design it similar to the mathisfun.com examples in some of the threads here.
Long Division with Remainders or Decimals

Looking good! Two things though:

- When writing escape characters in HTML, the proper format is "&---;". Some browsers allow you to omit the semicolon, but this is not standard. In my browser, your non-breaking spaces are not parsed correctly, and I see "&nbsp" on the page by the buttons. In fact, the W3C validator finds a number of other problems with the page's compliance.

- You might want to improve the decimal fraction division method to detect repeating decimals rather than looping blindly until the digit extraction limit is reached. For example, when you calculate $\frac13$, you could end it after the second iteration or so. This could be a bit tricky though, especially with things like $\frac{16}{99}$ or $\frac{448451}{3333333}$ where the repeated sequence is more than one digit long. Still, good work.

Thanks for the review! I've performed some of the changes on the validation error report you supplied. In addition, for answers with decimals, I've limited the loops now to 5. Please review after refreshing your browser and see if the nbsp's are gone as well as the new decimal iteration maximum.

One additional enhancement has been made to this lesson: Long Division with Remainders or Decimals. A random problem generator button has now been added, similar to what was done on the quadratic lesson.

One additional requested enhancement has just been added: When running the remainder calculation, the remainder answer portion will also be expressed as the remainder divided by the denominator in addition to the original answer of quotient and remainder. In addition, this will be reduced down as far as possible using a GCF. The math work portion of the above will be located after the program loops through the iterations of division near the bottom of the chalkboard. Thank you for your suggestions.
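The repeating-decimal detection suggested here falls out of long division naturally: the digits start repeating exactly when a remainder recurs, so tracking remainders finds the cycle, including multi-digit periods like 16/99. A sketch in Python (my own, not the site's actual code):

```python
# Long division with cycle detection: the decimal digits repeat exactly when a
# remainder recurs, so remembering remainders finds the repeating block.
def decimal_expansion(numerator, denominator):
    integer_part, remainder = divmod(numerator, denominator)
    digits, seen = [], {}        # seen: remainder -> index where it appeared
    while remainder and remainder not in seen:
        seen[remainder] = len(digits)
        digit, remainder = divmod(remainder * 10, denominator)
        digits.append(str(digit))
    if not remainder:            # terminating decimal
        frac = "".join(digits)
        return f"{integer_part}.{frac}" if frac else str(integer_part)
    start = seen[remainder]      # repeating block begins here
    return (f"{integer_part}." + "".join(digits[:start])
            + "(" + "".join(digits[start:]) + ")")

print(decimal_expansion(1, 3))     # 0.(3)
print(decimal_expansion(16, 99))   # 0.(16)
print(decimal_expansion(1, 4))     # 0.25
```

For example, this yields "0.(3)" for 1/3 after a single recurring remainder, and "0.(16)" for the two-digit period of 16/99.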
One significant gigantic update to this lesson, and hopefully, the final one: This lesson now can take any 2 positive numbers, and add/subtract/multiply as well as the original long division operations with remainder and decimals. I've rewritten the entire lesson plus the division in another language so the alignments and spacing would be more lined up and readable. For the addition/subtraction/multiplication piece, I eliminated my old lessons in entry boxes that could only handle certain digits and did not have full math work and replaced it with this. This will show the borrowing/carrying for the 3 operations. As a final add-in, for multiplication, when all the rounds of multiplication are complete, the program will then add each column step by step. As always, let me know if you have questions or enhancements or corrections.

This lesson now calculates the partial sums of two numbers as well.

This lesson now calculates the partial quotient. This method is used instead of long division at times to do division. Be advised that the method of partial quotients can be done many ways, but the general consensus of the teachers I spoke with who asked me to add this feature was that factors of 10, 5, 2, and 1 are targeted in classrooms; therefore, this is how I built this lesson. Color coding is included.

Enhancement Update: I have added another button on this lesson for "short division".
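The partial-quotients method as described, restricted to the friendly multiples 10, 5, 2, and 1 of the divisor, might look like this (a sketch of the general idea, not the site's implementation):

```python
# Partial-quotients division: repeatedly subtract "easy" multiples of the
# divisor (10x, then 5x, 2x, 1x) and add up the partial quotients.
def partial_quotients(dividend, divisor):
    quotient, remaining, steps = 0, dividend, []
    for factor in (10, 5, 2, 1):
        while remaining >= factor * divisor:
            remaining -= factor * divisor
            quotient += factor
            steps.append((factor, factor * divisor, remaining))
    return quotient, remaining, steps

q, r, steps = partial_quotients(157, 6)
print(q, r)          # 26 1, agreeing with 157 = 26*6 + 1
for factor, chunk, left in steps:
    print(f"subtract {factor} x 6 = {chunk}, leaving {left}")
```

For 157 ÷ 6 this subtracts 60, 60, 30, and 6, recording partial quotients 10 + 10 + 5 + 1 = 26 with remainder 1.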
Exploring the World of Mathematics by John Hudson Tiner

ISBN-13: 978-0890514122
Paperback: 160 pages
Publisher: Master Books
Released: June 2004, Nov. 2005
Bought from local bookstore.

Book Description from Back Cover:

Numbers surround us. Just try to make it through a day without using any. It’s impossible: telephone numbers, calendars, volume settings, shoe sizes, speed limits, weights, street numbers, microwave timers, TV channels, and the list goes on and on. The many advancements and branches of mathematics were developed through the centuries as people encountered problems and relied upon math to solve them. For instance:

• What timely invention was tampered with by the Caesars and almost perfected by a pope?
• Why did ten days vanish in September of 1752?
• How did Queen Victoria shorten the Sunday sermons at chapel?
• What important invention caused the world to be divided into time zones?
• What simple math problem caused the Mars Climate Orbiter to burn up in the Martian atmosphere?
• What common unit of measurement was originally based on the distance from the equator to the North Pole?
• Does water always boil at 212° Fahrenheit?
• What do Da Vinci’s Last Supper and the Parthenon have in common?
• Why is a computer glitch called a “bug”?

It’s amazing how ten simple digits can be used in an endless number of ways to benefit man. The development of these ten digits and their many uses is the fascinating story you hold in your hands: Exploring the World of Mathematics

Review: Exploring the World of Mathematics is a history of the development of mathematics with some instruction on how to do the various types of math worked in. (Chapters 5, 9, and 10 were more focused on math instruction than history.) The text was engaging and easy to understand. Much of the book was suitable for middle schoolers, though some chapters were more high school level.
There were useful black and white charts and illustrations. At the end of each chapter, there were 10 questions--most tested if you learned the important points in the chapter, but some were math problems based on what was learned. The answers were in the back. The book occasionally referred to things in the Bible, like explaining the cubit as an ancient measurement of length. The author had math start with the ancient Egyptians (since, according to him, it wasn't needed before then because people were roaming herders). It also referred to a Sumerian counting system that started back in 3300 B.C. Overall, the book was interesting and well-written. I'd recommend it to those interested in an overview of the development of mathematics or to those desiring to teach their children math in an interesting way. Chapter 1 talked about ancient calendars (how days, months, and years were calculated in various cultures) and how the modern calendar was developed. Chapter 2 talked about marking the passage of time (including how & why people started counting hours, minutes, and seconds). Chapter 3 talked about the development of weights and measures from ancient ones to modern non-metric systems. Chapter 4 talked about the development of the metric system (mostly weight, length, capacity, and temperature). Chapter 5 talked about how ancient Egyptians used basic geometry to build pyramids and survey farm land. Chapter 6 talked about how ancient Greeks continued to develop mathematics. Chapter 7 talked about the different systems and symbols for numbers in various cultures and times. Chapter 8 talked about number patterns (like odd, even, prime, Fibonacci numbers, square numbers, and triangular Chapter 9 talked about mathematical proofs, decimal points, fractions, negative numbers, irrational numbers, and never-ending numbers. Chapter 10 talked about algebra and analytical geometry. 
Chapter 11 talked about network design, combinations & permutations, factorials, Pascal's triangle, and probability.
Chapter 12 talked about the development of counting machines, from early mechanical calculators to modern digital calculators.
Chapter 13 talked about the development of modern computers.
Chapter 14 gave some math tricks and puzzles.

If you've read this book, what do you think about it? I'd be honored if you wrote your own opinion of the book in the comments.

Excerpt from page 9 (chapter one): Most names for months in our calendar are from the Roman calendar. The ancient Roman calendar originally had only 10 months and 304 days. The year began with the month of March. Later, the months of January and February were inserted before March, and the new year began with January. January was named for Janus. In Roman mythology, he was the keeper of doorways. January was the entrance to the new year. February was from a Roman word meaning "festival." March was named after Mars, the Roman god of war. April came from a Roman word meaning "to open," probably because buds opened in April. May was named after Maia, the mother of Mercury. June was named after Juno, the queen of the gods in Roman mythology. She was portrayed as the protector of women. In the Roman calendar, months after June had names based on their original calendar before January and February were added: Quintilis (quin, "fifth"), Sextilis (sex, "sixth"), September (sep, "seventh"), October (oct, "eighth"), November (non, "ninth"), and December (dec, "tenth"). Julius Caesar took the month Quintilis and named it July after himself. The next Roman ruler, Augustus Caesar, took the month Sextilis and named it August after himself. August had only 30 days but July had 31 days. Augustus took another day from February and added it to August so his month would be as long as the one for Julius Caesar.
Hypercomputation in A Computable Universe

This is the full version of my answer to a question formulated by Francisco A. Doria to me, included in the Discussion Section of A Computable Universe, my edited volume being published by World Scientific and Imperial College Press, coming out next month (already available in Asia), concerning whether I think Hypercomputation is possible:

I was once myself a hypercomputation enthusiast (my Master's thesis--in French--was on "Hyper calcul", focused on the feasibility of hypercomputational models). Paradoxically, it wasn't until I had a good knowledge of it that I started to better appreciate, and increasingly enjoy, the beauty of the digital (Turing) model. On the one hand, hypercomputational models do not converge in computational power. There is a plethora of possible models of hypercomputation, while there is only one digital model (in terms of computational power). I find it astonishing how little it takes to reach the full power of (Turing) universality, and I don't see the power of universal machines as being subject to any limitation; quite the opposite. I happen to think the question has a truth value and is ultimately susceptible to a physical answer. However, I don't think the answer will emerge in the foreseeable future, if it ever does. Still, I found Doria's contribution to the subject interesting, as he points out a particular ("ideal") model of hypercomputation that I think gets around some of Martin Davis' objections (The Myth of Hypercomputation), though it doesn't fully address the problem of verification. Unlike a Turing computation, which can in principle be verified by carrying it out by hand step by step, the inner workings of a hypercomputer can only be followed by another hypercomputer. The caricature version of the problem is in the whimsical answer given by the (hyper?)computer Deep Thought in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams, which proposed "42" as the "Ultimate (uncomputable?)
answer to the Ultimate Question of Life, The Universe, and Everything" [added parenthesis]. The only way to verify such an answer would be by building another, more powerful and even less understandable computer. This makes me wonder whether we ought not to favour meaningful computation over what could potentially be hypercomputation, even if hypercomputation were possible. There is a strong analogy to the concept of proof in math. Mathematical proofs seem to fall into two types. They either serve to convince of the truth of a statement one wasn't certain was true, or else to provide logical evidence that a statement intuitively believed to be true was in fact so (e.g. the normality of the mathematical constant pi). But in the latter case, why would one bother to provide a proof of a statement that nobody would argue to be false? It is because ideally math proofs should provide insight into why a statement is true, and not simply establish whether or not it is so. There are a few exceptions of course. Some come from mathematical practice, for example Wiles' proof of Fermat's last theorem. It is not clear whether Wiles' proof provides any insight into the original question of the truth value of Fermat's theorem (though it does contribute to understanding the connections among different powerful mathematical theories). Some other cases, especially among computer-automated proofs, are of the same kind, often neglecting the fact that a proof is also about explanation (for humans). In that sense I think we should also favour meaningful mathematical proofs (from meaningful mathematical questions!), just as we should better appreciate meaningful (digital) computation. The study of infinite objects has given us great insight into profound and legitimate questions of math and computation, such as the question of the nature of what is a number or what is a computation. And it has been immensely useful to focus on limits of computation in order to better understand it.
It is at the boundary between decidability and undecidability that one seems best positioned to answer the question of what computation means. And there are some examples where the study of a hypercomputational model is valuable not as a model of hypercomputation but for its ancillary results. Unlike most people, I think the contribution of, for example, Siegelmann's ARNN model has much more to say about computational complexity (the ARNN model relativises P = NP and P != NP in a novel fashion), and therefore about classical computation, than about hypercomputation! (Siegelmann's ARNN model has come to be strangely revered among hypercomputation enthusiasts.) While I may agree that the problem of verification of a computation is not necessarily an argument against hypercomputation (because it can also be used against digital computation in practice), the answer is only in the realm of physics and not in paper models. As Davis points out, it is not a surprise that encoding non-computable numbers as weights in an artificial neural network leads to non-computable computation! So, assuming real numbers exist, the consequence is straightforward and the justification of a hypercomputational model is just circular. Other models of hypercomputation are just fundamentally wrong, such as Peter Wegner's model of "interactive computation", which is claimed to break Turing's barrier. For a recent commentary pointing out the flaws of yet another hypercomputational claim, you can consult this entertaining blog post by Scott Aaronson and his Toaster-Enhanced Turing Machine.

1. Regarding the use of mathematical proofs. Both Wiles's proof and "human-unfriendly" computer-generated proofs are useful if you spend enough time studying them, with appropriate mathematical knowledge.
They shorten the "search" for a proof of a statement, say P, in the sense of building in our head the appropriate certainty, implemented as neural networks that make us behave as if P is true. Actually, knowing or thinking a bit about the history of Wiles's proof may convince you of this. I myself do not understand the argument well, but even the faint picture I have of it gives me confidence in FLT, more than, say, a proof of the n=3 or 4 cases. So here are those historical details I am thinking about: many people have contributed to the only currently accepted proof. (Though stay tuned for another proof if Mochizuki's work settles ABC, which implies FLT for large exponents, and perhaps all exponents with some tuning.) Actually what motivated Wiles to start working on FLT was not really working on FLT itself, but on the Shimura-Taniyama-Weil modularity conjecture, because the biggest insight into FLT had been obtained before: Hellegouarch and Frey (and others) sought to prove that Fermat triples for any n>2 yielded elliptic curves with strange properties. It would contradict, via a naturally associated elliptic curve, a conjecture of Szpiro related to the ABC conjecture. And Frey also looked at a similar contradiction supposing that curve was modular. He was helped (or rescued) by Serre and Ribet. And once this connection to modularity was well understood, Wiles was quite confident and "only" had to prove modularity, the STW conjecture. In fact he did not himself have the insight that FLT was reachable, but when he understood the work of the above-mentioned people he was convinced, enough to work hard for 7 years on STW. So that tells you that he definitely got insight that FLT was true from those partial results toward it, and he completed the work, and certainly he got surer and surer that FLT was true as he fleshed out his arguments and uncovered key parts of his eventual proof.
Similarly, all the people involved in that (Nicholas Katz, Ribet, Sarnak, grad students like Buzzard) felt more and more sure as they learned and thought, each at their own pace, about the different parts of the proof, some old (the theory of elliptic curves, or even algebraic number theory), some new (Galois representations in GL(2,F_p)). If you invested much time in studying that, you would also gain insight into FLT and become comfortable with it. Unfriendly computer proofs are just hard to fit to our relatively flexible neural constructs, but they can be grasped. Now the question of whether it's more efficient to look for proofs by ourselves rather than try to make computers good at that and then understand them is a very interesting question. But in the case of Wiles's proof, it definitely does shed light on FLT. I guess though it is still a matter of taste: you may say "I am satisfied with checking 3 exponents up to a,b<1000 on the computer to be convinced of FLT," or you may be satisfied with Kummer's proof for regular primes. The questions seem to be: "How much certainty am I looking for?", "How much work am I willing to put in?", "What is the best tradeoff?". From this point of view we can question complicated proofs, but if they are full and fully understood proofs, then compared to heuristics they do provide more insight. We can also think of the interesting situation where a proof is so complicated that we do not have the resources to understand it, and we do not trust experts that much. Then a heuristic may be more reliable than the proof, though not fully reliable. And we can argue about the definition of insightful; it may be a product "information x speed to get it" = "information / time to get it". But I still like to say Wiles et al.'s proof provides insight beyond partial results. Well, this was much rambling; I hope at least it will not bother you too much. I have to read more of your work to understand it, but I should already take the opportunity to thank you for it.
On the definition of convergence of a sequence of sections of a bundle

Convergence of a sequence of sections of a bundle is defined as follows:

Definition: Let $E$ be a vector bundle over a manifold $M$, and let metrics $g$ and connections $\nabla$ be given on $E$ and on $TM$. Let $\Omega \subset M$ be an open set with compact closure $\bar{\Omega}$ in $M$, and let $(\xi_k)$ be a sequence of sections of $E$. For any $p \geq 0$ we say that $\xi_k$ converges in $C^p$ to $\xi_\infty \in \Gamma(E\big|_{\bar{\Omega}})$ if for every $\varepsilon > 0$ there exists $k_0 = k_0(\varepsilon)$ such that $$\sup_{0\leq |\alpha | \leq p}\sup_{x\in \bar{\Omega}}|\nabla^{\alpha}(\xi_k -\xi_\infty)|_{g}<\varepsilon$$ whenever $k > k_0$. Here $\nabla^\alpha$ is the covariant derivative corresponding to the multi-index $\alpha$.

Question: In the book "The Ricci Flow in Riemannian Geometry" by Ben Andrews and Christopher Hopper, it is written: "Note that since we are working on a compact set, the choice of metric and connection on $E$ and $TM$ have no effect on the convergence." I can't understand why this sentence is true. Can someone help me? Thanks in advance.

(Tags: dg.differential-geometry, riemannian-geometry, vector-bundles, ricci-flow)

Answer: This is just to supply some details to what Rafe Mazzeo wrote. Let $g_{i}$ be metrics and $^{\left( i\right) }\nabla$ be connections on $E$ and on $M$ for $i=1,2$. Since $\bar{\Omega}$ is compact, the uniform equivalence of norms reduces to a local coordinate chart $(U,\{x^{i}\})$ over which the bundle $E$ is trivialized. In the following, the constant $C$ may change from line to line. Since $C^{-1}g_{1}\leq g_{2}\leq Cg_{1}$ (uniform equivalence) on $E$ (fiberwise) and on $M$ for some $C$, for any $\xi\in\Gamma(E\otimes\bigotimes^{k}T^{\ast}M)$ we have $|\xi |_{g_{1}}\leq C|\xi|_{g_{2}}$ (same for $1$ and $2$ switched).
Let $\alpha=(\alpha_{1},\ldots,\alpha_{n})$, so that $\nabla^{\alpha}=\nabla_{1}^{\alpha_{1}}\cdots\nabla_{n}^{\alpha_{n}}$ (up to uniform equivalence of norms, we may order it this way since commutators yield curvature and its derivative terms, which are bounded). Let $\lesssim$ denote $\leq C\cdot$. Now for $\xi\in\Gamma(E)$,
$$|{}^{\left( 1\right) }\nabla^{\alpha}\xi|_{g_{1}}\lesssim|{}^{\left( 1\right) }\nabla^{\alpha}\xi|_{g_{2}}\lesssim|{}^{\left( 2\right) }\nabla^{\alpha}\xi|_{g_{2}}+\Big|\sum_{k=0}^{\left\vert \alpha\right\vert -1}{}^{\left( 1\right) }\nabla^{\ast k}\circ({}^{\left( 1\right) }\nabla-{}^{\left( 2\right) }\nabla)\circ{}^{\left( 2\right) }\nabla^{\ast(\left\vert \alpha\right\vert -k-1)}\xi\Big|_{g_{2}},$$
where the sum is comprised of linear combinations of $\ell$-th order covariant derivatives ${}^{\left( i\right) }\nabla^{\ast\ell}$. Since the sum has only covariant derivatives of lower order together with (bounded) derivative-of-the-difference-of-connections terms, by induction on $p$ we obtain $\sum_{\left\vert \alpha\right\vert \leq p}|{}^{\left( 1\right) }\nabla^{\alpha}\xi|_{g_{1}}\lesssim\sum_{\left\vert \alpha\right\vert \leq p}|{}^{\left( 2\right) }\nabla^{\alpha}\xi|_{g_{2}}$ over $U$, independent of $\xi$.

Answer (Rafe Mazzeo): The reason is simply that over a compact set, all the choices (metrics, connections, etc.) are uniformly (or in whatever $C^k$ topology you want) equivalent to one another.

Comment: Where can I find proof of your claim? It is extremely important for me. Thanks in advance. – Sepideh Bakhoda Nov 19 '13 at 4:46

Comment: Once one chooses trivializations of bundles, coordinates, etc., then this reduces to the standard real analysis exercise that if $f_1$ and $f_2$ are two strictly positive functions on a compact set $K$, then there exist positive constants $C_1$, $C_2$ such that $C_1 f_1 \leq f_2 \leq C_2 f_1$.
Similarly, if $A_1$ and $A_2$ are two positive definite matrices depending smoothly on a compact set $K$, then $C_1 A_1 \leq A_2 \leq C_2 A_1$ (where $A \leq B$ means $\langle Av,v \rangle \leq \langle Bv, v\rangle$). – Rafe Mazzeo Nov 19 '13 at 5:03
Progress and references. Prequel to this course. I will try to make it as self-contained as possible, though. All references are to the book by Arora and Barak [1], unless stated otherwise. Many of the sources listed below can be used for further reading on some of the topics only briefly touched in the classroom. If I promised a reference in class and then forgot about it, please remind me.

• First week: Boolean circuits [Ch. 6.1]. Shannon counting argument [Thm. 6.21]. Nonuniform hierarchy theorem [Thm. 6.22]. Advice Turing machines and an alternate description of P/poly [Ch. 6.3]. BPP is in P/poly [Thm. 7.14]. Karp-Lipton Theorem [Thm. 6.19]. $\Sigma_2^p$ does not have small circuits [Exc. 6.5 and 6.6], see also this recent post pertaining to the discussion we had in class. Circuit depth, $NC^k$ etc. [Ch. 6.7.1]; for a comprehensive survey of related complexity classes see [2]. Complete bases [3, Ch. 1.3]. Boolean formulas [3, Ch. 1.4]. Relations between circuit size, formula size and depth [3, Ch. 7]. Khrapchenko's bound [3, Ch. 8.8]; our exposition follows ideas from [4,5].

• Second week: Khrapchenko's bound cntd. Sub-linear space, classes L and NL [Ch. 4.1]. Uniform families of circuits [Ch. 6.2]. Branching programs [Ch. 14.4.4]. For a comprehensive survey of related models see [6]. USTCONN in logarithmic space (without proof) [7]. Nechiporuk's bound [3, Ch. 14.3]. Immerman-Szelepcsenyi theorem [Thm. 4.20]; our proof follows the exposition in [8, Ch. 8.6]. Lower bounds for $AC^0$ [Ch. 14.1].

• Third week (short): Lower bounds for $AC^0$ cntd. Applications to Fourier analysis [9]. Small restrictions switching lemma [10]. Motivations for the "constructive" proof of the switching lemma can be found in [11, Appendix], and another exposition is in [12]. Monotone circuit lower bounds (without proof) [Ch. 14.3].

• Fourth week: Lower bounds for $ACC^0$ [Ch. 14.2]; our version closely follows one of the original papers [13]. A survey on polynomial correlations: [14]. Barrington's theorem [15]. Non-uniform automata over other groups: [16]. Communication complexity [Ch. 13.1]; a comprehensive reference on the subject is [17], see also [24]. Combinatorial rectangles and tiling complexity [Ch.

• Fifth week: Rank lower bound [Ch. 13.2.3]. Log-rank conjecture [Ch. 13.2.6]. Non-deterministic communication complexity [17, Ch. 2.1]. The relation between deterministic and non-deterministic complexities [17, Ch. 2.3]. Quadratic separation between them [17, Example 2.12]. Application to circuit complexity (simple lower bound for MAXIMAL COVER) and the Karchmer-Raz-Wigderson approach (without proofs) [17, Ch. 10]; see also the original papers [4,18,19]. Multi-party communication complexity [Ch. 13.3]. Discrepancy [Ch. 13.2.4]. Babai-Nisan-Szegedy bound [Thm. 13.24]. Gowers's exposition of the hypergraph regularity lemma [20].

• Sixth week: BNS bound cntd. Majority and threshold circuits [21]. $ACC^0$ vs. depth-3 majority circuits (without proof) [22]. Forster's bound for unbounded error communication complexity (without proof) [23]. One-way functions [Ch. 9.2.1]. Pseudo-random generators [Ch. 9.2.3]. All polynomially bounded stretches are equivalent [Exc. 9.10].

• Seventh week: All polynomial stretches are equivalent cntd. Unpredictability implies pseudo-randomness (without proof) [Thm. 9.11]. Goldreich-Levin theorem [Thm. 9.12]. Pseudorandom function generators [Sct. 9.5.1]. Goldreich-Goldwasser-Micali construction [Thm. 9.17]. Natural proofs [Ch. 23].

• Eighth week: Natural Proofs cntd. Decision trees [Ch. 12]. Computational, combinatorial and analytical complexity measures of Boolean functions and their polynomial equivalence [26].

• Ninth week: Algebraic complexity [28], see also [Ch. 16]. The degree bound (without proof) [28, Sct. 6]. Baur-Strassen Lemma [28, Sct. 7]. Bilinear complexity [28, Sct. 10]. Valiant's complexity classes [Ch. 16.1.4]. Uniform algebraic models [Ch. 16.3]. Proof Complexity [29], see also [Ch. 15].

• Tenth week (short): Proof Complexity cntd. Philosophical discussion, circuit tautologies, connections to Natural Proofs, pseudorandom generators etc. [30, Introduction]. Feasible Interpolation [Ch. 15.2.2]. Cutting planes and feasible interpolation for this system [29, Sct. 7.2.1].
San Juan Bautista Calculus Tutors

I have experience in tutoring for 3 years now. I was a tutor for the MESA program at Hartnell College for 2 years. I specialize in Calculus, but I do have experience in tutoring all math topics.
17 Subjects: including calculus, English, physics, algebra 1

...In fact, I enjoyed teaching so much that I kept on doing it to this day. Through the years, I have worked mostly with junior high and high schoolers, but I have also worked with kids as young as 4th graders and adults at university or community colleges. My number one goal is the academic success of my students.
11 Subjects: including calculus, chemistry, physics, geometry

...As a current local K-12 substitute teacher I am fingerprinted and background checked. Prior to having children, I taught 6th grade in public school and a 2nd/3rd combo class in a private school, was Director of Children's Ministries and a Junior High youth leader, and worked for a college regist...
34 Subjects: including calculus, English, reading, writing

I tutored all lower division math classes at the Math Learning Center at Cabrillo Community College for 2 years. I assisted in the selection and training of tutors. I have taught algebra, trigonometry, precalculus, geometry, linear algebra, and business math at various community colleges and a state university for 4 years.
11 Subjects: including calculus, statistics, algebra 2, geometry

...I absolutely love science. I have years of experience with not only the software and applications that run on computer systems but also the hardware. I have written a pc bios, scientific computing and control applications for a jet propulsion laboratory.
19 Subjects: including calculus, chemistry, Spanish, physics
Chess Ratings - How They Work

Like it or not, we ALL have a chess rating. You may not care at all about your rating, or you may be whining every time it goes down in the slightest. You might be someone who plays a game a year, or someone who plays 1,000 a day. Still, there is a number out there that represents how well you play chess. Well, that's the theory, anyway. To understand chess ratings you have to understand two things: #1 - that you have a TRUE rating that perfectly represents your strength of play, and #2 - that that TRUE rating will never be known and so we have to use statistics to get as close as possible to the truth. I'm writing this article in response to many people who ask about ratings and need a simple explanation of how they work. (I only know about all this because of a recent super-in-depth statistics course I took and my research in building Chess.com!) There are two main rating systems, and each one has its merits. The Elo System (used by the United States Chess Federation, FIDE, and many other online chess sites) is popular for two reasons - it has been around for a long time, and it is simple. The idea is this: given two chess players of different strengths, we should be able to calculate the % chance that the better player will win the game. For example, Garry Kasparov has ~100% chance of beating my 4-year-old daughter. But he may only have a ~60% chance of beating another Grandmaster. So when playing that other Grandmaster, if he wins 6 games out of 10, his rating would stay the same. If he won 7 or more, it would go up, and with 5 or fewer, his rating would go down. Basically, the wider the spread of the ratings, the higher percentage of games the higher rated player is expected to win. So to calculate a person's rating after playing a few games you calculate the average ratings of his opponents, and then how many games he was expected to win, and then plug it into a formula that spits out the new rating. Simple enough.
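The update just described fits in a few lines. This is a minimal sketch of the standard Elo formulas, not any federation's exact implementation; the K-factor of 32 is an assumed constant for the example.

```python
# Expected score and rating update in the Elo system.
# K controls how far one game can move a rating (32 is a common choice,
# but federations and sites use different values -- an assumption here).

def elo_expected(rating_a: float, rating_b: float) -> float:
    """Expected score (0..1) of player A against player B.

    A 400-point rating gap corresponds to 10-to-1 odds.
    """
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating: float, opponent: float, score: float,
               k: float = 32.0) -> float:
    """New rating after one game; score is 1 (win), 0.5 (draw), or 0 (loss)."""
    return rating + k * (score - elo_expected(rating, opponent))
```

Two equally rated players each have an expected score of 0.5, so a win moves the winner up by K/2 points. And if Kasparov's expected score against a fellow Grandmaster is 0.6, then winning exactly 6 games out of 10 leaves his rating unchanged, matching the description above.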
Well, it turns out, that is maybe TOO simple. The Glicko System (used by Chess.com, the Australian Chess Federation, and some other online sites) is a more modern approach that builds on some of the concepts above, but uses a more complicated formula. (This only makes sense now that we have computers that can calculate this stuff in the blink of an eye - when Elo created his system they were doing it on paper!) It is a bit trickier than the Elo system, so pay attention. With the Elo system you have to assume that everyone's rating is just as sure as everyone else's rating. So my rating is as accurate as your rating. But that is just not true. For example, if this is your first game on Chess.com and you start at 1200, how do we really know what your rating is? We don't. But if I have played 1,000 games on this site, you would be much more sure that my current rating is accurate. So the Glicko system gives everyone not only a rating, but an "RD", called a Rating Deviation. Basically what that number means is "I AM 95% SURE YOUR RATING IS BETWEEN X and Y." (Nerd Fact: In technical terms this is called a "confidence interval".) If this is your first game on Chess.com I might say, "I am 95% sure that your rating is somewhere between 400 and 2400". Well that is a REALLY big range! And that is represented by a really big RD, or Rating Deviation. If you have played 1,000 games and your rating is currently 1600 I might say "I am 95% sure your rating is between 1550 and 1650". So you would have a low RD. As you play more games, your RD gets lower. To add one extra wrinkle in there, the more recent your games, the lower your RD. Your RD gets bigger over time (because maybe you have gotten better or worse over time - I'm just less sure of what your actual rating is if I haven't seen you play recently). Now, how does this affect ratings? Well, if you have a big RD, then your rating can move up and down more drastically because your rating is less accurate. But if you have a small RD then your rating will move up and down more slowly because your rating is more accurate. The opposite is true for your opponent! If they have a HIGH RD, then your rating will change LESS when you win or lose because their rating is less accurate.
But if you have a small RD then your rating will move up and down more slowly because your rating is more accurate. The opposite is true for your opponent! If they have a HIGH RD, then your rating will change LESS when you win or lose because their rating is less accurate. But if they have a LOW RD, then your rating will move MORE because their rating is more accurate. I wish there was some simple analogy to explain all this, but there isn't. It all comes back to this: you have a theoretically exact chess rating at any given moment, but we don't know what that is and so we have to use math to estimate what it is. There are really smart people out there who work on this stuff for a living, and at the end of it all we get to put their proven methods into our code so that we can all enjoy knowing what little numbers next to our name we deserve. If you want to read more, check out these articles (WARNING - SEVERE NERD CONTENT AHEAD): - The Glicko System by Professor Mark Glickman, Boston University
internal logic of an (infinity,1)-topos

Just as an ordinary topos comes with its internal logic formalized by type theory, an (∞,1)-topos should come with its internal ”$(\infty,1)$-logic” formalized by homotopy type theory.

Type theory versus logic

As remarked at type theory, it is useful to distinguish between the internal type theory of a category and the internal logic which sits on top of that type theory. The type theory is about constructing objects, while the logic is about constructing subobjects. For instance, limits and colimits, exponentials, and object classifiers belong to the type theory, while images, dual images, intersections, unions, and subobject classifiers belong to the logic. Thus, the semantics of (extensional) type theory naturally lies in a category with appropriate structure, while the semantics of logic over that type theory naturally lies in some indexed poset over that category. However, we commonly take this indexed poset to consist of the subobjects in the category in question, in which case additional “logical” structure on the category is required, for instance that it be a Heyting category. In an elementary 1-topos, all of the “logical” structure is not usually included in the definition, because it comes for free once you have power objects. But object classifiers may not be as powerful as power objects in this respect, so for purposes of studying the internal logic (and not just the internal type theory) of an $(\infty,1)$-topos, it’s good to keep in mind both the type-theoretic structure and the logical structure, and in particular both the object classifier and the subobject classifier.

The type theory of an $(\infty,1)$-category

Amazingly, a variant of type theory that seems appropriate for interpretation in an $(\infty,1)$-category already exists, namely intensional type theory with identity types.
Intensional identity types in $(\infty,1)$-categories

The usual sort of type theory that one interprets in a 1-category is extensional type theory. To explain what this means, consider how the categorical structure of finite limits is represented in the type theory. On the one hand, we have product types $A\times B$, which of course represent categorical products; thus to obtain finite limits it suffices to have equalizers. We can obtain these from identity types, which supply for each type $A$ and each pair of terms $x,y:A$, a dependent type $Id_A(x,y)$, whose intended interpretation is that it is inhabited precisely when $x=y$. In terms of 1-categorical semantics, it is natural to require that any two elements of $Id_A(x,y)$ be equal, i.e. that $Id_A(x,y)$ be essentially a truth value/subsingleton. Then if we have two terms $x:A\vdash f(x):B$ and $x:A\vdash g(x):B$ representing morphisms $f,g\colon A\to B$, their equalizer is represented by the type $\Sigma_{x:A} Id_B(f(x),g(x))$.

However, for semantics in an $(\infty,1)$-category, it makes sense to use the same identity types, but now interpreted as a path space. Now it will no longer be true that $Id_A(x,y)$ is a subsingleton, since two points can be connected by more than one path, so we must drop that axiom. This intensional type theory has been widely studied by type-theorists, although from a different point of view: assuming the propositions-as-types approach, $Id_A(x,y)$ should be the type of proofs or reasons why $x=y$, which can also of course have many different elements. Thus we suspect that intensional type theory may be a natural sort of type theory to have semantics in an $(\infty,1)$-category. According to the general framework of syntax/semantics, we would hope that:

1. From any intensional type theory, we can construct a syntactic $(\infty,1)$-category, and
2. In any $(\infty,1)$-category, we can model an intensional type theory.

Some work in both of these directions has been done.
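The equalizer construction just described can be written down directly in a proof assistant. A minimal Lean 4 sketch (using Lean's built-in identity type `=` in the role of `Id`, and `Σ'` for the dependent sum; the name `Equalizer` is ours, not a library definition):

```lean
-- The equalizer of f, g : A → B as a Σ-type over the identity type,
-- mirroring the construction Σ_{x:A} Id_B(f(x), g(x)) from the text.
def Equalizer {A B : Type} (f g : A → B) : Type :=
  Σ' x : A, f x = g x

-- A point of the equalizer is a point of A together with a proof (a "path")
-- that f and g agree there; read intensionally, that path need not be unique.
example : Equalizer (fun n : Nat => n + 0) (fun n => n) :=
  ⟨3, rfl⟩
```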
On the one hand, it is known that in any intensional type theory with identity types, for any type $A$ the globular set (or more accurately globular context) given by

$A \leftleftarrows Id_A \leftleftarrows Id_{Id_A} \leftleftarrows \dots$

has the structure of a Batanin ω-groupoid. This can be found in:

Moreover, the syntactic category of such a theory carries a natural weak factorization system, the identity type weak factorization system. However, there seems as yet to be no published work constructing a full syntactic $(\infty,1)$-category.

On the other hand, it is known that in any nice enough model category (and in fact, in any category with a nice enough weak factorization system), one can model intensional type theory. This is studied in:

Vladimir Voevodsky has also studied the particular model of intensional type theory in simplicial sets, which he calls the univalent model; see his website.

Additional axioms

Although intensional type theory has semantics in $(\infty,1)$-categories, one can naturally expect that these models will all satisfy additional axioms. This is especially true if we want to add additional structure to our $(\infty,1)$-categories.

• Exponential (and dependent product) types can probably be modeled by (locally) cartesian closed $(\infty,1)$-categories. However, although exponentials in an $(\infty,1)$-category are not strictly extensional the way they are in a 1-category, they are still extensional "up to coherent higher homotopies," which (unlike the case for identity types) is seemingly not guaranteed by the type-theoretic structure. Thus, there may be an $\infty$-extensionality axiom to be added.

• Disjoint sum types may be expected to correspond to coproducts in an $(\infty,1)$-category.

• The usual notions of quotient type make little sense without extensional identity types. In the 1-categorical world, quotient types correspond to exact categories, while the appropriate notion of "exactness" for an $(\infty,1)$-category deals with groupoid objects in an (∞,1)-category. It remains to be seen how to phrase a corresponding axiom in the type theory.

• The object classifier in an $(\infty,1)$-topos does in fact correspond to a well-known concept in type theory, namely that of a universe such as the type $Type$. However, as a universe, the object classifier in an $(\infty,1)$-topos has the special property that the paths between two types $A$ and $B$ as elements of $Type$ (that is, the path space $Id_{Type}(A,B)$) is equivalent to the space of equivalences between $A$ and $B$ as types (an appropriate subspace of the exponential $B^A$). A type-theoretic axiom asserting this equivalence was introduced by Voevodsky under the name of the equivalence axiom.

Logic over type theory in an $(\infty,1)$-category

Now when we go to add logic to the type theory of an $(\infty,1)$-category, it seems natural by analogy that it will deal with subobjects, i.e. with monomorphisms in an (∞,1)-category. That is, a proposition $\varphi(x)$ with a variable $x:A$ will be interpreted by a monomorphism $[\varphi] \rightarrowtail [A]$. Just as in the internal logic of a 1-category and of a 2-category, in order to interpret the logical connectives and quantifiers we will then need suitable structure on the posets of subobjects in our $(\infty,1)$-category. It is natural to expect that any $(\infty,1)$-topos will have this necessary structure.

Again, the requisite type theory more or less exists, namely intensional type theory together with a sort of propositions that can depend on types. In fact, this type theory is very closely related to the calculus of constructions used in the proof assistant Coq, making Coq a very convenient place to play around with the type theory that ought to be valid in an $(\infty,1)$-topos.
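As a concrete and purely illustrative aside, the intensional identity type that drives all of this can be written down in a few lines of a proof assistant. The sketch below uses Lean 4 syntax rather than Coq, and the names `Path`, `Path.symm`, and `Path.trans` are our own; the point is only that the sole constructor is reflexivity, while nothing forces two inhabitants of `Path x y` to coincide.

```lean
universe u

/-- An intensional identity type, as an inductive family: the only
    constructor is reflexivity, but (unlike extensional equality)
    nothing collapses two inhabitants of `Path x y` into one. -/
inductive Path {A : Type u} : A → A → Type u where
  | refl (a : A) : Path a a

/-- Symmetry, by path induction (pattern matching on `refl`). -/
def Path.symm {A : Type u} {x y : A} : Path x y → Path y x
  | .refl a => .refl a

/-- Transitivity (path composition), again by path induction. -/
def Path.trans {A : Type u} {x y z : A} : Path x y → Path y z → Path x z
  | .refl _, q => q
```

In homotopy type theory these same declarations are read as path spaces, which is exactly the semantic shift described above.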
In particular, Voevodsky has written out a Coq script up to the statement of his equivalence axiom, to be found on his website.

The problem of finiteness

In describing the internal type theory and logic of an $(\infty,1)$-category we encounter the problem that many structures in an $(\infty,1)$-category require a (countably) infinite amount of data to describe. For instance, when looking for a way to state the "exactness" property one has to say what is meant by a groupoid object, but since this really means a "coherent" or "$A_\infty$" groupoid object, it involves an infinite amount of data. By contrast, the most common type theories are purely finitary systems.

There is the one amazing fact that the entire complicated infinitary structure of a Batanin $\omega$-groupoid can be recovered from the simple finitary rules of identity types. It is not clear, however, whether we can expect this happy occurrence to continue. We might have to bite the bullet and work with an infinitary type theory, i.e. one allowing derivation rules taking as input an infinite list of hypotheses. In fact, this is almost certainly what we will need if we want a good notion of a geometric theory in the $(\infty,1)$-case, since that involves infinitary logic even in the 1-categorical case. However, such a type theory would obviously no longer have "computational content" and couldn't be modeled in a proof assistant such as Coq, and also wouldn't provide a fully "elementary," i.e. finitary first-order, theory such as ETCS provides in the 1-categorical case.

It might be helpful to note that infinitary structures can at least sometimes be finitarily described using inductive types and/or coinductive types, but it is not clear yet whether this is useful in the $(\infty,1)$-categorical context.

Internal logic in $\infty Grpd$

The archetypical (∞,1)-topos is ∞Grpd. This is to be thought of as the $(\infty,1)$-categorification of the archetypical 1-topos Set.
At internal logic - in Set there is a step-by-step discussion of how ordinary logic is recovered from the point of view of the internal logic of a topos $\mathcal{T}$ when choosing $\mathcal{T} := Set$. Here we look at the $(\infty,1)$-categorical analog of that discussion, step-by-step, now with everything internal to ∞Grpd.

• The terminal object of $\infty Grpd$ is the contractible $\infty$-groupoid $*$. This generates $\infty Grpd$ under colimits: every small $\infty$-groupoid is a colimit over a small diagram consisting only of copies of the terminal $\infty$-groupoid.

• The subobject classifier of $\infty Grpd$ is $\Omega = \{\top, \bot\}$.

• The object classifier should be the core of the universal left fibration.
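To spell out the subobject classifier point (this elaboration is ours, not the article's): a monomorphism in ∞Grpd is, up to equivalence, the inclusion of a union of connected components, so a characteristic map into $\Omega$ only sees the set $\pi_0(X)$ of components:

```latex
% A mono U \hookrightarrow X in \infty Grpd is (-1)-truncated, i.e. a
% full inclusion of connected components; hence subobjects of X are
% classified by maps to \Omega that factor through \pi_0(X):
\mathrm{Sub}(X) \;\simeq\; \mathrm{Hom}_{\infty\mathrm{Grpd}}(X,\Omega) \;\simeq\; 2^{\pi_0(X)} .
```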
Prof. W. Kahan's Notes and Problems for Math. 185 & H185: Analytic Functions of a Complex Variable

Math. 185 was taught in the Fall semester 2006. Math. H185 was taught in the Spring semester 2008.

The 185 class voted for an exam to which each examinee could bring a Crib-Sheet: an 8.5x11" sheet covered with the examinee's own notes. No other notes, books, papers, computers nor telephones were allowed. The H185 class voted for a closed-book exam to which each examinee could bring no notes, books, papers, computers nor communications devices.

Files available from this web page as of 19 May 2008; some may have been updated since first posted, so get the latest!

A closed-book Midterm Exam took place on Mon. 30 Oct. 2006. A closed-book Midterm Test took place on Wed. 23 Apr. 2008.

Assignment due Mon. 25 Sept. 2006: Hand in a solution for Exercise 19 on p. 14 of the foregoing notes.

Grades reported in 2006 to the Registrar. (ASCII file)
Indian researcher helps prove math conjecture from the 1950s

On June 18, Adam Marcus and Daniel A. Spielman of Yale University, along with Nikhil Srivastava of Microsoft Research India, announced a proof of the Kadison-Singer conjecture, a question about the mathematical foundations of quantum mechanics. Ten days later, they posted, on Cornell University's arXiv open-access e-prints site, a manuscript titled Interlacing Families II: Mixed Characteristic Polynomials and The Kadison-Singer Problem.

Thousands of academic papers are published every year, and this one's title wouldn't necessarily earn it much attention beyond a niche audience … except for the fact that the text divulged a proof of a mathematical conjecture more than half a century old, and the ramifications could be broad and significant.

The Kadison-Singer conjecture was first offered in 1959 by mathematicians Richard Kadison and Isadore Singer. In a summary of the achievement, the website Soul Physics says, "… this conjecture is equivalent to a remarkable number of open problems in other fields … [and] has important consequences for the foundations of physics!"

That description will get no argument from Ravi Kannan, principal researcher in the Algorithms Research Group at Microsoft Research India. "Nikhil Srivastava and his co-authors have settled an important, 54-year-old problem in mathematics," Kannan says. "They gave an elegant proof of a conjecture that has implications for many areas of mathematics, computer science, and quantum physics."

Srivastava offers a layman's explanation of what he, Marcus, and Spielman have achieved. "We proved a very fundamental and general statement about quadratic polynomials that was conjectured by [mathematician] Nik Weaver and that, he showed, implies Kadison-Singer.
The proof is based on a new technique we developed, which we call the 'method of interlacing families of polynomials.'"

The proof (for a more technical, extended discussion, see Srivastava's post on the Windows on Theory blog) elicited the most basic of emotions from Srivastava when he got a chance to contemplate what he and his colleagues had wrought. "My main reaction was awe at how beautiful the final proof was," he recalls. "I actually started laughing when I realized that it worked. It fit together so beautifully and sensibly you knew it was the 'right' proof and not something ad hoc. It combined bits of ideas that we had generated from all over the five years we spent working on this."

The Soul Physics site goes on to state, "Settling this conjecture shows an important way in which our experiments are enough to provide a complete description of a quantum system." Srivastava is in complete agreement. "It has clear implications for the foundations of quantum physics," he says. "This is something [theoretical physicist] Paul Dirac mistakenly thought was obvious, and Kadison and Singer and many other experts thought this was probably false.

"It implies that it is possible to 'approximate' a broad class of networks by networks with very few edges, which should have impact in combinatorics and algorithms. Finally, it is equivalent to several conjectures in signal processing and applied mathematics that seem to have practical use."

More information: arxiv.org/pdf/1306.3969v3.pdf

Comments:

Jul 17, 2013: Almost completely content free.

Jul 19, 2013 (in reply): 15 votes which rate this article at 4.5 disagree, and I disagree as well; this article describes what this maths means to physics and that is all we really need to know. If you'd like to read up on the actual maths then follow the link provided at the bottom of the article.

Jul 23, 2013: Once again written by a seventh grader ... I understand that many authors may not be very familiar with a subject, but to not include basic things like: 1) what are the implications of the proof -- answer: things like quantum physics, which measures independent properties that are mutually exclusive (momentum vs position), spin; if you measure all the properties that are possible at one time you can learn everything there is about the particle.
Mathematical theory of connecting networks and telephone traffic
Results 1 - 10 of 96

- IEEE Transactions on Computers, 1988. Cited by 296 (16 self).
VLSI communication networks are wire limited. The cost of a network is not a function of the number of switches required, but rather a function of the wiring density required to construct the network. This paper analyzes communication networks of varying dimension under the assumption of constant wire bisection. Expressions for the latency, average case throughput, and hot-spot throughput of k-ary n-cube networks with constant bisection are derived that agree closely with experimental measurements. It is shown that low-dimensional networks (e.g., tori) have lower latency and higher hot-spot throughput than high-dimensional networks (e.g., binary n-cubes) with the same bisection width. Keywords: communication networks, interconnection networks, concurrent computing, message-passing multiprocessors, parallel processing, VLSI. 1 Introduction. The critical component of a concurrent computer is its communication network. Many algorithms are communication rather than processing limited. Fi...

- 1996. Cited by 193 (9 self).
In the past 20 years there has been tremendous progress in developing and analyzing parallel algorithms. Researchers have developed efficient parallel algorithms to solve most problems for which efficient sequential solutions are known. Although some of these algorithms are efficient only in a theoretical framework, many are quite efficient in practice or have key ideas that have been used in efficient implementations. This research on parallel algorithms has not only improved our general understanding of parallelism but in several cases has led to improvements in sequential algorithms. Unfortunately there has been less success in developing good languages for programming parallel algorithms, particularly languages that are well suited for teaching and prototyping algorithms. There has been a large gap between languages...

- 1981. Cited by 89 (2 self).
In this paper we implement several basic operating system primitives by using a "replace-add" operation, which can supersede the standard "test and set", and which appears to be a universal primitive for efficiently coordinating large numbers of independently acting sequential processors. We also present a hardware implementation of replace-add that permits multiple replace-adds to be processed nearly as efficiently as loads and stores. Moreover, the crucial special case of concurrent replace-adds updating the same variable is handled particularly well: if every PE simultaneously addresses a replace-add at the same variable, all these requests are satisfied in the time required to process just one request.

- Journal of Algorithms, 1994. Cited by 88 (13 self).
This paper presents a general paradigm for the design of packet routing algorithms for fixed-connection networks. Its basis is a randomized on-line algorithm for scheduling any set of N packets whose paths have congestion c on any bounded-degree leveled network with depth L in O(c + L + log N) steps, using constant-size queues. In this paradigm, the design of a routing algorithm is broken into three parts: (1) showing that the underlying network can emulate a leveled network, (2) designing a path selection strategy for the leveled network, and (3) applying the scheduling algorithm. This strategy yields randomized algorithms for routing and sorting in time proportional to the diameter for meshes, butterflies, shuffle-exchange graphs, multidimensional arrays, and hypercubes. It also leads to the construction of an area-universal network: an N-node network with area Θ(N) that can simulate any other network of area O(N) with slowdown O(log N).

- Parallel Computing, 1995. Cited by 80 (1 self).
Hierarchical clustering is a common method used to determine clusters of similar data points in multidimensional spaces. O(n²) algorithms are known for this problem [3, 4, 10, 18]. This paper reviews important results for sequential algorithms and describes previous work on parallel algorithms for hierarchical clustering. Parallel algorithms to perform hierarchical clustering using several distance metrics are then described. Optimal PRAM algorithms using n log n processors are given for the average link, complete link, centroid, median, and minimum variance metrics. Optimal butterfly and tree algorithms using n log n processors are given for the centroid, median, and minimum variance metrics. Optimal asymptotic speedups are achieved for the best practical algorithm to perform clustering using the single link metric on an n log n processor PRAM, butterfly, or tree. Keywords: hierarchical clustering, pattern analysis, parallel algorithm, butterfly network, PRAM algorithm. 1 In...

- DNA Based Computers III, volume 48 of DIMACS, 1999. Cited by 53 (16 self).
Biomolecular Computation (BMC) is computation at the molecular scale, using biotechnology engineering techniques. Most proposed methods for BMC used distributed (molecular) parallelism (DP), where operations are executed in parallel on large numbers of distinct molecules. BMC done exclusively by DP requires that the computation execute sequentially within any given molecule (though done in parallel for multiple molecules). In contrast, local parallelism (LP) allows operations to be executed in parallel on each given molecule. Winfree, et al [W96, WYS96] proposed an innovative method for LP-BMC, that of computation by unmediated self-assembly of arrays of DNA molecules, applying known domino tiling techniques (see Buchi [B62], Berger [B66], Robinson [R71], and Lewis and Papadimitriou [LP81]) in combination with the DNA self-assembly techniques of Seeman et al [SZC94]. The likelihood for successful unmediated self-assembly of computations has not been determined (we discuss a simple model of assembly where there may be blockages in self-assembly, but more sophisticated models may have a higher likelihood of success). We develop improved techniques to more fully exploit the potential power of LP-BMC. To increase...

- Advances in Computing Research, 1996. Cited by 51 (11 self).
Fat-trees are a class of routing networks for hardware-efficient parallel computation. This paper presents a randomized algorithm for routing messages on a fat-tree. The quality of the algorithm is measured in terms of the load factor λ of a set of messages to be routed, which is a lower bound on the time required to deliver the messages. We show that if a set of messages has load factor λ on a fat-tree with n processors, the number of delivery cycles (routing attempts) that the algorithm requires is O(λ + lg n lg lg n) with probability 1 − O(1/n). The best previous bound was O(λ lg n) for the off-line problem in which the set of messages is known in advance. In the context of a VLSI model that equates hardware cost with physical volume, the routing algorithm can be used to demonstrate that fat-trees are universal routing networks. Specifically, we prove that any routing network can be efficiently simulated by a fat-tree of comparable hardware cost. 1 Introduction. Fat-trees...

- 21st ACM Symp. on Theory of Computing, 1989. Cited by 46 (18 self).
Abstract. In this paper, we study the problem of emulating T_G steps of an N_G-node guest network, G, on an N_H-node host network, H. We call an emulation work-preserving if the time required by the host, T_H, is O(T_G N_G / N_H), because then both the guest and host networks perform the same total work (i.e., processor-time product), Θ(T_G N_G), to within a constant factor. We say that an emulation occurs in real-time if T_H = O(T_G), because then the host emulates the guest with constant slowdown. In addition to describing several work-preserving and real-time emulations, we also provide a general model in which lower bounds can be proved. Some of the more interesting and diverse consequences of this work include: (1) a proof that a linear array can emulate a (much larger) butterfly in a work-preserving fashion, but that a butterfly cannot emulate an expander (of any size) in a work-preserving fashion, (2) a proof that a butterfly can emulate a shuffle-exchange network in a real-time work-preserving fashion, and vice versa, (3) a proof that a butterfly can emulate a mesh (or an array of higher, but fixed, dimension) in a real-time work-preserving fashion, even though any O(1)-to-1 embedding of an N-node mesh in an N-node butterfly has dilation Ω(log N), and...

- 1997. Cited by 42 (5 self).
Integrated network technologies, such as ATM, support multimedia applications with vastly different bandwidth needs, connection request rates, and holding patterns. Due to their high level of flexibility and communication rates approaching several gigabits per second, the classical network planning techniques, which rely heavily on statistical analysis, are less relevant to this new generation of networks. In this paper, we propose a new model for broadband networks and investigate the question of their optimal topology from a worst-case performance point of view. Our model is more flexible and realistic than others in the literature, and our worst-case bounds are among the first in this area. Our results include a proof of intractability for some simple versions of the network design problem, and efficient approximation algorithms for designing nonblocking networks of provably small cost. More specifically, assuming some mild global traffic constraints, we show that a minimum-cost non...

- 1991. Cited by 36 (9 self).
In this paper, we describe an O(log N)-bit-step randomized algorithm for bit-serial message routing on a hypercube. The result is asymptotically optimal, and improves upon the best previously known algorithms by a logarithmic factor. The result also solves the problem of on-line circuit switching in an O(1)-dilated hypercube (i.e., the problem of establishing edge-disjoint paths between the nodes of the dilated hypercube for any one-to-one mapping). Our algorithm is adaptive, and we show that this is necessary to achieve the logarithmic speedup. We generalize the Borodin-Hopcroft lower bound on oblivious routing by proving that any randomized oblivious algorithm on a polylogarithmic degree network requires at least Ω(log² N / log log N) bit steps with high probability for almost all permutations. 1 Introduction. Substantial effort has been devoted to the study of store-and-forward packet routing algorithms for hypercubic networks. The fastest algorithms are randomized, and c...
Rotation around a tilted axis

Q: I can't work out how to use it. Does the formula return an array?

A: Which formula in that article are you talking about? If you want a conceptually simple way of doing it, consider this: do you know how to write code that would change from one coordinate system to another? If you change coordinates so that the "center" of the rotation becomes the origin in the new coordinate system and the axis of rotation becomes the z-axis in the new coordinate system, then you can apply the formula for rotating an object about the z-axis. Get the coordinates of interest in the new coordinate system and then transfer those coordinates back to the original coordinate system.

I've read articles by people who do computer animation that say that using quaternions is the best way to deal with the motion of objects. Are you doing animation?
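The recipe in the reply (shift the rotation center to the origin, rotate about the axis, shift back) can be sketched in code. The snippet below is one possible implementation, not from the thread itself; it uses Rodrigues' rotation formula, which packages the "align the axis with z, rotate, undo" steps into a single closed form.

```python
import math

def rotate_about_axis(p, center, axis, theta):
    """Rotate point p about the line through `center` with direction
    `axis` by angle theta (radians, right-hand rule), via Rodrigues'
    rotation formula."""
    # Step 1: shift so the rotation center sits at the origin.
    v = [p[i] - center[i] for i in range(3)]
    # Normalize the axis direction to a unit vector k.
    n = math.sqrt(sum(a * a for a in axis))
    k = [a / n for a in axis]
    c, s = math.cos(theta), math.sin(theta)
    # Cross product k x v and dot product k . v.
    kxv = [k[1] * v[2] - k[2] * v[1],
           k[2] * v[0] - k[0] * v[2],
           k[0] * v[1] - k[1] * v[0]]
    kdv = sum(k[i] * v[i] for i in range(3))
    # Step 2: rotate. Rodrigues: v' = v cos t + (k x v) sin t + k (k.v)(1 - cos t)
    vp = [v[i] * c + kxv[i] * s + k[i] * kdv * (1 - c) for i in range(3)]
    # Step 3: shift back to the original coordinate system.
    return [vp[i] + center[i] for i in range(3)]

# Rotating (1,0,0) by 90 degrees about the z-axis gives (0,1,0);
# rotating it by 120 degrees about the tilted axis (1,1,1) also gives (0,1,0).
print(rotate_about_axis([1, 0, 0], [0, 0, 0], [0, 0, 1], math.pi / 2))
print(rotate_about_axis([1, 0, 0], [0, 0, 0], [1, 1, 1], 2 * math.pi / 3))
```

As for the question as posed: yes, a routine like this naturally returns an array (here a 3-element list) holding the rotated coordinates.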
Articles under category: Note
Theory of Computing: An Open Access Electronic Journal in Theoretical Computer Science

Vol 9, Article 29 (pp 889-896) [NOTE] [Boolean Spec Issue] Hypercontractivity Via the Entropy Method by Eric Blais and Li-Yang Tan
Vol 9, Article 17 (pp 587-592) [NOTE] [Boolean Spec Issue] A Monotone Function Given By a Low-Depth Decision Tree That Is Not an Approximate Junta by Daniel Kane
Vol 9, Article 6 (pp 273-282) [NOTE] The Complexity of the Fermionant and Immanants of Constant Width by Stephan Mertens and Cristopher Moore
Vol 8, Article 16 (pp 369-374) [NOTE] Quantum Private Information Retrieval with Sublinear Communication Complexity by François Le Gall
Vol 8, Article 10 (pp 231-238) [NOTE] Monotone Circuits: One-Way Functions versus Pseudorandom Generators by Oded Goldreich and Rani Izsak
Vol 7, Article 13 (pp 185-188) [NOTE] Computing Polynomials with Few Multiplications by Shachar Lovett
Vol 7, Article 12 (pp 177-184) [NOTE] On Circuit Lower Bounds from Derandomization by Scott Aaronson and Dieter van Melkebeek
Vol 7, Article 10 (pp 147-153) [NOTE] The Influence Lower Bound Via Query Elimination by Rahul Jain and Shengyu Zhang
Vol 7, Article 4 (pp 45-48) [NOTE] Tight Bounds on the Average Sensitivity of k-CNF by Kazuyuki Amano
Vol 7, Article 2 (pp 19-25) [NOTE] Inverting a Permutation is as Hard as Unordered Search by Ashwin Nayak
Vol 6, Article 4 (pp 81-84) [NOTE] Decision Trees and Influence: an Inductive Proof of the OSSS Inequality by Homin K. Lee
Vol 5, Article 7 (pp 135-140) [NOTE] A Simple Proof of Toda's Theorem by Lance Fortnow
Vol 5, Article 5 (pp 119-123) [NOTE] Discrete-Query Quantum Algorithm for NAND Trees by Andrew M. Childs, Richard Cleve, Stephen P. Jordan, and David Yonge-Mallo
A Small Round Object Is Tested In A 1 M Diameter ... | Chegg.com

A small round object is tested in a 1 m diameter wind tunnel. The pressure is uniform across sections 1 and 2. The upstream pressure is 20 mm H2O (gage), the downstream pressure is 10 mm H2O (gage), and the mean air speed is 10 m/s. The velocity profile at section 2 is linear; it varies from zero at the tunnel centerline to a maximum at the tunnel wall. Calculate (a) the mass flow rate in the wind tunnel, (b) the maximum velocity at section 2, and (c) the drag of the object and its supporting vane. Neglect viscous resistance at the tunnel wall.

Mechanical Engineering
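A worked sketch of the three parts (our own solution outline, not Chegg's; it assumes standard-condition air density ρ ≈ 1.23 kg/m³, which the problem does not state): mass conservation gives the flow rate and, for the linear profile u(r) = u_max·r/R, the maximum velocity u_max = 1.5·V; a momentum balance on the control volume between the two sections then yields the drag.

```python
import math

# Assumed fluid properties (not stated in the problem): standard air,
# water in the manometer, standard gravity.
rho_air, rho_w, g = 1.23, 1000.0, 9.81    # kg/m^3, kg/m^3, m/s^2
D, V = 1.0, 10.0                          # tunnel diameter (m), mean speed (m/s)
A = math.pi * D ** 2 / 4                  # cross-sectional area (m^2)

# (a) mass flow rate: m_dot = rho * A * V
m_dot = rho_air * A * V

# (b) linear profile u(r) = u_max * r / R. Mass conservation:
#     rho*A*V = integral of rho*u over A = rho*u_max*(2/3)*A, so u_max = 1.5*V.
u_max = 1.5 * V

# (c) momentum balance between sections 1 and 2:
#     (p1 - p2)*A - F_drag = (flux out at 2) - (flux in at 1)
dp = rho_w * g * (0.020 - 0.010)          # 20 mm - 10 mm of water, in Pa
flux_in = rho_air * V ** 2 * A            # uniform profile at section 1
flux_out = rho_air * u_max ** 2 * A / 2   # integral of rho*u^2 for linear profile
F_drag = dp * A - (flux_out - flux_in)

print(round(m_dot, 2), u_max, round(F_drag, 1))  # 9.66 15.0 65.0
```

So roughly 9.7 kg/s, 15 m/s at the wall, and about 65 N of drag under the assumed air density.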
zbMATH: the first resource for mathematics

Analytic study on the higher order Itô equations: new solitary wave solutions using the Exp-function method. (English) Zbl 1198.35224

Summary: We use the Exp-function method to construct the generalized solitary wave solutions of the generalized $\left(1+1\right)$-dimensional and the generalized $\left(2+1\right)$-dimensional Ito equations. These equations play a very important role in mathematical physics and engineering sciences. The suggested algorithm is quite efficient and is practically well suited for use in these problems. The results show the reliability and efficiency of the proposed method. Finally, some new solitary wave solutions are obtained.

Editorial remark: There are doubts about a proper peer-reviewing procedure of this journal. The editor-in-chief has retired, but, according to a statement of the publisher, articles accepted under his guidance are published without additional control.

MSC classification:
35Q53 KdV-like (Korteweg-de Vries) equations
35C08 Soliton solutions of PDE
What is 47.2 kg in pounds? The mass 47.2 kg is 104.058187751262 pounds.
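The figure can be checked from the exact definition of the avoirdupois pound, 1 lb = 0.45359237 kg; a one-line sketch:

```python
KG_PER_POUND = 0.45359237  # exact definition of the avoirdupois pound

kg = 47.2
pounds = kg / KG_PER_POUND
print(f"{pounds:.12f}")  # 104.058187751262
```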
Number of results: 106,303 The circumference of a circle changes from 20 to 4 after a dilation. What was the scale factor of the dilation? A. 1/25 B. 1/5 C. 5 D. 25 Sunday, March 2, 2014 at 10:40am by anon The scale factor of two similar solids is 7:5. 8) Find the ratio of volumes. 9) Find the ratio of surface areas. Thursday, April 14, 2011 at 8:52pm by Need Help The scale factor for two similar triangles is 4 : 3. The perimeter of the smaller triangle is 12. What is the perimeter of the larger triangle? Wednesday, April 27, 2011 at 2:21pm by Tyler The scale factor for two similar triangles is 4 : 3. The perimeter of the smaller triangle is 12. What is the perimeter of the larger triangle? Sunday, May 13, 2012 at 12:10pm by adeny The scale factor is 5/4. Find the area of the pre-image if the area of the image is 150 in2.Show your work. Wednesday, March 27, 2013 at 3:58pm by bre AB has a length of 5 cm the line segment is dilated to produce A'B' which has a length of 2 cm what is the scale factor of the dilation Sunday, April 21, 2013 at 6:40pm by Dannelle A model train is a scale model created from actual measurements. The scale factor for HO or Half Zero model trains is 1:87. A typical engine is 50mm in height and 200mm in length. Determine the actual dimensions of the train engine. Help Please Sunday, March 28, 2010 at 6:45pm by Anonymous A model train is a scale model created from actual measurements. The scale factor for HO or Half Zero model trains is 1:87. A typical engine is 50mm in height and 200mm in length. Determine the actual dimensions of the train engine. Help Please Sunday, March 28, 2010 at 7:15pm by Anonymous The scale factor of two similar triangles is 2/5. if the perimeter of the small triangles is 80 cm, what is the perimeter of the large triangle? Saturday, April 2, 2011 at 5:42pm by Amanda the scale factor of two similar polygons as 4:7. 
the perimeter of the smaller polygon is 320 centimeter what is the perimeter of the larger polygon Tuesday, March 27, 2012 at 9:11pm by joselynn if a scale factor of four is used, the points in the image will be eight times as far from the origin as at he pre-image points true or false Friday, June 28, 2013 at 7:35pm by Barb Scale Representations A model airplane is made on a scale of 1:128. A)What is the scale factor of the model? B)If the airplane has a wingspan of 30 cm, what is the wingspan of the actual airplane? Thursday, March 29, 2012 at 4:34pm by Sari the area of the first triangle is 22 sq feet. if the scale factor of two similar traingles if 3, what is the area of that second triangle>? Tuesday, January 18, 2011 at 9:19pm by dylan Assume there are 2 similar polygons, whose scale factor is 4:5. If the larger polygon has an area of 30 units squared, what is the area o the smaller polygon? Sunday, February 13, 2011 at 9:39pm by Mark Given the volume of two cubes, find the scale factor. 6) V = 343 in3 and V = 729 in3 7) V = 1331 ft3 and V = 125 ft3 Thursday, April 14, 2011 at 8:52pm by Need Help I had to puzzle this out myself. In this case, we need to (a) find the center of dilation (b) verify that the dilation of A to A' is the same as that of B to B' Since dilation is a linear scaling, both A and B are moving toward some point C. Naturally, if the dilation were 0, ... Monday, December 19, 2011 at 3:50pm by Steve Two similar rectangular prisms have a scale factor of 4:9. The volume of the smaller prism is 60 cm3. Find the volume of the larger prism. Friday, July 16, 2010 at 12:37pm by hello the endpoints of AB are A(9,4) and B(5,-4) the endpoints of its image after a dilation are A1(6,3) and B1(3,-3) explain how to find the scale factor. Is there someone that can help me with this problem and explain foe me? Wednesday, March 7, 2012 at 10:30am by Brandon Please help me with this!! The actual length of the picnic table is 180cm with legs 60cm. 
What is the scale factor for this diagram? Thursday, February 27, 2014 at 12:41pm by Cherie A computer chip on a circuit board has a rectangular shape, with a width of 6mm and a length of 9mm. Plans for the circuit board must be drawn using a scale factor of 15. Draw a scale diagram of the computer chip as it would appear on the plans. Please show work. Wednesday, July 31, 2013 at 10:05pm by Anonymous If i have a scale factor to enlarge a picture at 11/3, how would i get the scale to get from the big picture to the small picture? Thursday, October 11, 2007 at 6:46pm by Brittany Pre-Algebra Help Ms. Sue Please!!! Given a scale factor of 2, find the coordinates for the dilation of the line segment with endpoints (–1, 2) and (3, –3). A.(–2, 4) and (6, 6) B.(2, 4) and (6, 6) C.(–2, 4) and (6, –6) D.(2, –1) and (–3, 3) Wednesday, November 7, 2012 at 12:10pm by Anonymous PLEASE CHECK AND CORRECT MY ANSWERS!!!! MRTS = marginal rate of technical substitution RTS = returns to scale MP = marginal product Suppose the production function is Cobb-Douglas and f(x1, x2) = [x1 ^(1/2)][x2^(3/2)]. (Note: x1 and x2 are variables). a) Select an expression ... Wednesday, October 14, 2009 at 8:41pm by Anonymous math 2 When two figures are similar, the reduced ratio of any two corresponding sides is called the scale factor of the similar figure. Example One side of a square is 6 and the corresponding side of a similar square is 3. The ratio is 6 : 3 or 6/3 = 2. The scale factor is 2 : 1. Thursday, February 10, 2011 at 3:45pm by helper volume ratio = 216/27, and is the cube of the linear scale factor. So, each side of the larger is 6/3 = 2 times the smaller. So, the area of the larger is 2^2 = 4 times the smaller. Monday, October 22, 2012 at 11:08am by Steve Troy is building a pool that is proportional to the Olympic-sized pool. The width of his pool will be 625 centimeters. 
find the scale factor troy will use to build his pool use the scale factor to show the length of troys pool if the length of the Olympica pool is 50 meters ... Thursday, December 16, 2010 at 9:53pm by Court An architect makes a scale drawing. She uses 2 cm to represent 100 m. What is the scale ratio for her drawing? Show your work. Friday, November 26, 2010 at 11:59am by Kristen physics (3 seperate questions) (1) f = [1/(2 pi)]*(k/m)^1/2 Rearrange for k. (2) With k staying the same, f drops a factor sqrt2 if the mass increases by a factor of 2. (3) Spring scales measure weight, and weight = M g. The scale reading decreases by a factor of 6. M stays the same and g decreases to 1/6 ... Monday, March 28, 2011 at 8:01pm by drwls You have purchased a scale model of a car. The scale factor is 1:24. The model is 2.9 inches high, 2.75 inches wide, and 6.4 inches long. Find the dimensions of the car in feet. If the rear cargo area of the actual car has a volume of 12.5 cubic feet, what is the volume of the... Friday, May 13, 2011 at 4:40am by Sharon rectangle a is similar to smaller rectangle b. the scale factor is 5/3. if the area of rectangle a is 150 square inches, what is the area of rectangle b? Thursday, March 22, 2012 at 4:21pm by josh Without seeing the oscilloscope traces, we cannot answer your question. 1) Read the peak-to-peak height yourself, in squares, and multiply it by the vertical scale factor. The rms voltage is the peak-to-peak voltage divided by 2*sqrt2, for a sine wave. 2) Measure the distance ... Tuesday, January 17, 2012 at 12:27am by drwls Geometry - Transformations and Dilations When a square of area 4 is dilated by a scale factor of k, we obtain a square of area 9. Find the sum of all possible values of k. I do not understand...it is not 3/2, as I was told, but I don't understand why? Help? Thursday, February 21, 2013 at 9:32am by Knights the 5 is a scale factor, so the graph is stretched by a factor of 5 vertically. 
(x-1) shifts the graph to the right one unit. Saturday, December 14, 2013 at 3:12pm by Steve 216 in cubed is the volume of a figure 27 in cubed is the volume of a similiar figure what is the scale factor? Wednesday, March 23, 2011 at 5:10pm by Nasser Polygon ABCD has vertices A(–4, 0), B(–2, 4), C(3, 4), and D(8, 0). Polygon A'B'C'D' is the dilation image of ABCD, using the center (0, 0) and a scale factor of 0.5. What are the coordinates of A', B', C', and D'? This question has me confused. Could someone just show me an ... Monday, May 23, 2011 at 11:03pm by Miranda Can someone help? Your make a scale drawing of a tree using the scale 5 in. =27 ft. If the tree is 67.5 ft tall, how tall is the scale drawing? Saturday, February 18, 2012 at 10:34am by myles Each step on the richter scale is a factor of ten. So an earthquake of magnitude 3 has ten times the energy of one of magnitude 2. Or if you like a difference of 1 between the magnitudes is a factor of 10 increase, a difference of 2 is a factor of a hundred increase, a ... Monday, March 16, 2009 at 10:08pm by Dr Russ You are designing a scale model of the solar system mercury it is 4,878 km in diameter.using the following scale factor, how large does your model of mercury need to be? A. 160cm B. 16cm C.146cm Wednesday, January 26, 2011 at 7:42am by walter Below is a scale of my garden. It is drawn at 3in: 8ft scale. To the nearest whole squre foot what is the area of my garden? **picture of octagon with one side 8in** Tuesday, December 13, 2011 at 4:02pm by Nicole A photographer enlarges a photo using a scale factor of 6. The original photo has a width of 9 cm and a length of 15 cm. What is the width of the enlarged photo? Thursday, December 8, 2011 at 1:39pm by dorothy Water is in the big beaker in the figure on the left. Scale 1 reads 87 Newtons, scale 2 reads 544 newtons, and scale 3 reads 0 newtons. The hanging block has a density of 15 x 10^3 kg/m^3. 
What does scale 1 read after the block is fully lowered into the beaker of water? Scale... Monday, April 7, 2014 at 4:55pm by sean George is building a model of his father's sailboat with a scale factor of 1:32. The actual sail is in the shape of a right triangle with a base of 8 meters and a hypotenuse of 13 meters. What will be the approproximate perimeter of the sail on the model boat? Monday, January 24, 2011 at 12:18pm by Marjorie If you stand on a bathroom scale, the spring inside the scale compresses 0.50 mm, and it tells you your weight is 680 N. Now if you jump on the scale from a height of 1.4 m, what does the scale read at its peak? Friday, March 2, 2012 at 12:11am by Caroline If you stand on a bathroom scale, the spring inside the scale compresses 0.50 mm, and it tells you your weight is 760 N. Now if you jump on the scale from a height of 1.0 m, what does the scale read at its peak? Monday, November 5, 2012 at 8:06pm by Lauria Scale factor between two similar polygons is the ratio of the linear dimensions, in this case, perimeter. The factor of the larger one to the smaller one is 50/20=2.5 Sunday, June 5, 2011 at 9:27pm by MathMate please help. I am working on graphing. i do not need help on graphing i need to know what 1. reflect in the x- axis 2. dilate a scale factor of 1/2 please answer my questions oh and the numbers are a= (-2,4) b= (0,2) c= (-2, 6-) for both of the questions. Monday, January 7, 2008 at 7:58pm by Taylor arrange in increasing size order to test corresponding sides RI = 7.5 TR = 10 TI = 12.5 now if all sides are related by the same scale factor, they are similar 7.5/6 = 1.25 10/8 = 1.25 12.5/10 = 1.25 YES!! the corresponding sides are in the same ratio. They are similar with ... Wednesday, February 6, 2008 at 8:20pm by Damon The side length of square ABCD is 1 unit. Its diagonal, AC, is a side of square ACEF. Square ACEF is then enlarged by a scale factor of 2. What is the area of the enlargement, in square units? 
Tuesday, March 27, 2012 at 7:50pm by Kayla They cagily disguise the growth factor r as a percentage. You need to convert 15% reduction to a scale factor: 85% or 0.85 So, Tn = 1120 * .85^(n-1) Wednesday, September 21, 2011 at 6:18pm by Steve If it is less than one, the actual object is smaller than the model. If it is greater than one, the object is greater than its model. All dimensions have the same scale factor in either case. Wednesday, December 3, 2008 at 10:11pm by drwls the scale factor between figure A and figure B is 0.25. If one side length of figure B is 56 yards, what is the corresponding side length of figure A? Sunday, October 3, 2010 at 9:31pm by linda i have math homework and it says i have to find 1 pair of rectangle that are simular and find the scale factor but i cant find the scale factorthe k rectangle is 4 cubes base by 3cubes height and n rectangle is 12 cubes base and 1 cube height but i cant find the scal factor ... Tuesday, September 21, 2010 at 8:34pm by hawk On one side of a scale there are 6 coins, 3 weighing 2 grams each and 3 weighing x grams each. The scale is balanced if 5 coins weighing x grams each are placed on the other side of the scale. How much does each of the unknown coins weigh? Answer in units of gram Friday, November 27, 2009 at 8:45pm by Chitra A scale model of a park has an area of 192 cm2. The actual park has an area of 12 km2. What is the scale factor used for the model? Thursday, October 27, 2011 at 5:13pm by sak MATH 11 Yani wants to make a scale diagram of the floor plan of his school. He wants his diagram to fit on an 8.5in. by 11in. sheet of paper. The school is 650ft long and 300 ft wide at its widest point. a) what would be a reasonable scale for Yani to use so that his diagram will fit ... Wednesday, July 31, 2013 at 9:56pm by Please Help!!! Triangle DEF ~ Triangle HJK, and the scale factor of Triangle DEF to Triangle HJK is 5/2. 
If EF=15, find JK Wednesday, June 16, 2010 at 10:09am by April I don't understand the problem. I get the T and D scales and I understand that the T scale is -10 to 80. On the D scale, however, did you mean ice melts at 20 on the D scale and boils at 100 on the D Monday, August 9, 2010 at 12:26pm by DrBob222 8km^2*scale^2=128cm^2 8*k(10cm)^2*scale^2=128cm^2 scale^2= 128/*8000*100)=1/6250 scale= 1/79 appx Saturday, June 2, 2012 at 9:40pm by bobpursley Joe's living room is a rectangle and measures 15 ft by 27 ft. Using a scale of 1 in. = 3ft. How could I sketch it and show both the actual measurements and the scale measurements Monday, February 20, 2012 at 9:47pm by snoop the dimensions of the triangular lateral faces of a pyramid are dilated with a scale factor of 8/3 to create similar triangles. if the perimeter of one triangular face was 57 inches what is the perimeter of each of the dilated triangular faces Saturday, April 13, 2013 at 4:56pm by stormy Without knowing the 8 steps to which you are referring, you can do the following: Factor both numerator and denominator: R(x)=(x-5)(x+6)/[(x-5)(x+4)] You will find the zeroes and asymptotes. Since there is a common factor (x-5), the point f(5) is indeterminate and should be ... Saturday, May 25, 2013 at 11:14pm by MathMate Hexagon D and hexagon H are regular hexagons. The scale factor from hexagon D to hexagon H is 0.25. One side of hexagon D measures 18 cm. What is the length of one side of hexagon H? Tuesday, December 6, 2011 at 4:23pm by Anonymous A park at the end of a city block is a right triangle with legs 150 ft and 200 ft long. Make a scale drawing of the park using the following scale 1in:300ft. Wednesday, February 1, 2012 at 4:44pm by ian A model airplane is made on a scale of 1:128. A)what is the scale factor? b) if the airplane model has a wingspan of 30cm,what is the wingspan of the actual plane? use model over original Wednesday, April 2, 2014 at 4:54pm by Cleopatra Math PLEASE HELP!!! 
A figure is dilated by a scale factor of 4. If the origin is the center of dilation, what are the coordinated in the original figure of a vertex located at (8,8) in the enlarged figure? A/(2,2) B/ (4,4) No one has answered Wednesday, January 9, 2013 at 5:42pm by Sammy in a scale drawing of a school's floor plan, 6 inches represents 100 feet. If the distance from one end of the main hallway to the other is 175 feet, find the corresponding length in the scale Monday, February 28, 2011 at 1:08am by shannon 7th grade math Suppose you copy a figure on a copier using the given size factor. Find the scale factor from the original figure to the copy in decimal form. a] 200% b] 150% c] 75% Wednesday, October 28, 2009 at 8:19pm by DIANE A model airplane is made on a scale of 1:128. A)What is the scale factor of the model? B)If the airplane has a wingspan of 30 cm, what is the wingspan of the actual airplane Need help on this. Friday, March 30, 2012 at 4:22pm by Sari Two cones are similar. The larger cone, R, has a volume of 1331 cubic feet and the smaller cone, S, has a volume of 64 cubic feet. Find the scale factor of Cone R to Cone S. Thursday, May 10, 2012 at 11:12am by jack a) If you are building a scale model in which the moon's diameter were 2.5 mm, the sun should be 400 times larger, according to what you have stated. b) The answer depends upon the scale of your model. You have stated that the moon is 2.5 mm dia in your model. The sun is ... Friday, January 9, 2009 at 7:00pm by drwls suppose you copy a figure on the copier using the givin size factor. Find the scale factor from the orignal figure to copy in desimal form. for 200%, 150% 75% 50% 125% and 25% Thursday, November 26, 2009 at 8:06pm by ashtyn suppose you copy a figure on a copier using the given size factor. find the scale factor from the original figure to the copy in decimal form. a)200%, b)50%, c)150%, d)125%, e)75%, f)25% Tuesday, December 8, 2009 at 11:02am by dillan algebra, HELP! 9. 
The sides of a rectangle are increased by a scale factor of 4. The perimeter of the smaller rectangle is 20 cm. What is the perimeter of the larger rectangle? (1 point) 320 cm 80 cm 120 cm 60 cm 10. A right triangle has an area of 13 m2. The dimensions of the triangle are ... Monday, November 12, 2012 at 12:33pm by Lilly Trying to find the scale for a 77 centimeter by 53 centimeter painting to fit on a 8.5 by 11 inch paper. Please show me how- Sunday, February 2, 2014 at 10:40pm by RR PLEASE, please, don't call a balance a scale. I weigh a sack of potatoes on a scale but in a chemistry lab I use a balance. :-) My students used to think my antics about this were very funny; in fact, so funny that they bought large poster boards and taped them up on the doors... Monday, June 16, 2008 at 9:25pm by DrBob222 X(-1,-2),Y(2,1),Z(4,-1); scale factor 3 Monday, November 8, 2010 at 8:00pm by Sarah What is the scale factor? Tuesday, January 15, 2013 at 6:50pm by Janir A preimage includes a line segment of length x and slope m. If the preimage is dilated by a scale factor of n, what are the length and slope of the corresponding line segment in the image Tuesday, June 18, 2013 at 9:21am by ELIJAH A preimage includes a line segment of length x and slope m. If the preimage is dilated by a scale factor of n, what are the length and slope of the corresponding line segment in the image Monday, January 24, 2011 at 1:51pm by chy ENGLISH/MATH HELP!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! NYS ELA 7 Score - Level 2 (high two so I was seven points away to a three) My Scale Scare - 658 Level Two: 642-664 (scale score) Level Three: 665-697 (scale score) NYS Math 7 Score - Level 3 My Scale Score - 678 Level Three - 670-693 (scale score) I really want to know my ... Saturday, March 23, 2013 at 4:19pm by Laruen Factor at the greatest common factor from the expression. Can someone please show me how to factor so I will know? Thank you! 
18x^714x^2 Thursday, August 2, 2012 at 7:57am by Anonymous* p: scale by 5/8, reflect in x-axis r: translate x by 5, reflect about x=5 or, reflect in y-axis, then shift x by 5 t: contract x by a factor of 2, stretch y by a factor of 2 Friday, June 1, 2012 at 11:35pm by Steve Please Help!!, The dimensions of a photo of a mountain bike are 15cm by 12cm. An enlargement is to be made for a poster with dimensions 4.0m by 3.2m. What is the scale factor of the poster to the nearest tenth? Monday, February 24, 2014 at 1:49pm by Cherie 7th grade math last question! #11 The coordinate of rectangle ABCD are given. Find the coordinates of its image after a dilation with the scale factor of 2. A (2,4) B (-3, -1) C (0,9) D (6,-4) I DESPERATELY NEED HELP WITH THIS QUESTION IT ISN'T MULTIPLE CHOICE SOMEONE HELP PLEASE!!!! Monday, November 5, 2012 at 11:22pm by Delilah On the scale drawing of a floor plan 2 inches= 5 ft. A room is 7 inches long on the scale drawing. Find the actual length of the room. Thursday, March 29, 2012 at 12:44am by Cassie Triangle UVW is the pre-image of triangle RST with a scale factor of 3. Find the area of triangle UVW if the area of triangle RST is 9 cm2. Wednesday, May 2, 2012 at 10:54am by Lindsey Scale factor: 8/4 x 10/6 = 2 x 1 2/3. Monday, March 14, 2011 at 6:10pm by Henry What is 36:6 with a scale factor of .3 Tuesday, August 14, 2012 at 11:16pm by Keara how do we find the scale factor Thursday, December 20, 2012 at 8:02pm by jamine Algebra 8th grade please help!!!!!! A right triangle has an area of 13 m2. The dimensions of the triangle are increased by a scale factor of 3. What is the area of the new triangle? Please help!!!! a)39 m2 b)169 m2 c)117 m2 d)142 m2 Wednesday, November 28, 2012 at 1:56pm by April :) Pilar made a scale drawing of a game room. The pool table is 2 inches long in the drawing. The actual pool table is 6 feet long. What is the scale factor of the drawing? 
Monday, September 17, 2012 at 11:59am by guillermo anaya 8th grade math A model car is a 1/43 scale model of the original car. What is the scale factor needed to find the dimensions of the original car? model car=4.2" long, 1.3"tall, and 1.6" wide? Wednesday, January 16, 2013 at 5:29pm by Kory A male moose is 2.6 m tall and 3.2 m long, with antlers that are 1.2 m across. An artist wants to carve scale models of the moose. She uses a scale factor of 1/30. a) What are the dimensions of the carvings to the nearest centimetre? b) How many carvings can she make using ... Monday, January 14, 2013 at 2:52am by alan The sides of a triangle are 8,15 and 18 the shortest side of a similar triangle is 10 how long are the other sides? Find the scale factor of similar triangles whose sides are 4,12,20 and 5,15,25 Assume that traingle xyz is similar to triangle rpn with x(ray sign) r and p(ray ... Saturday, November 16, 2013 at 7:13pm by BARBIE LEE The sides of a triangle are 8,15 and 18 the shortest side of a similar triangle is 10 how long are the other sides? Find the scale factor of similar triangles whose sides are 4,12,20 and 5,15,25 Assume that traingle xyz is similar to triangle rpn with x(ray sign) r and p(ray ... Sunday, November 17, 2013 at 5:24am by barbie lee are you thinking of y=a x^2 + b if a is varied, the "dialation" is obvious. Here is a another shape reflected on the y axis (and x axis) by+^2+ax^2= c^2 the ellipse. Tuesday, January 21, 2014 at 7:46pm by bobpursley what does a scale factor represent ? Wednesday, August 25, 2010 at 7:19am by Bob lee How to find scale factor of dilation: A-(-1,2) B-(2,5) Friday, January 25, 2013 at 6:29am by Nina geometry easy The sides of a triangle are 8,15 and 18 the shortest side of a similar triangle is 10 how long are the other sides? Find the scale factor of similar triangles whose sides are 4,12,20 and 5,15,25 Assume that traingle xyz is similar to triangle rpn with x(ray sign) r and p(ray ... 
Sunday, November 17, 2013 at 7:26am by barbie lee A microscope shows you an image of an object that is 80 times the actual size. Therefore, the scale factor of the enlargement is 80. An insect has a body length of 7 millimeters. What is the body length of the insect under the microscope? I know it's 560, but not sure if ... Tuesday, March 6, 2012 at 3:28pm by Brandon
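Nearly every question above reduces to one rule: if two figures are similar with linear scale factor k, then lengths and perimeters scale by k, areas by k², and volumes by k³. A quick sketch checking several of the problems quoted above:

```python
def scaled(value, k, dimension):
    """Scale a length (dimension=1), area (2), or volume (3) by linear factor k."""
    return value * k**dimension

# Perimeter scales linearly: small triangle perimeter 80 cm, large-to-small factor 5/2
assert scaled(80, 5/2, 1) == 200

# Area scales with k^2: right triangle of area 13 m^2, dimensions tripled -> 117 m^2
assert scaled(13, 3, 2) == 117

# Volume scales with k^3: prisms with scale factor 4:9, smaller volume 60 cm^3
larger = scaled(60, 9/4, 3)
print(larger)  # 683.4375

# Going backwards: volumes 216 and 27 give linear scale factor (216/27)^(1/3) = 2
k = (216 / 27) ** (1 / 3)
print(round(k))  # 2
```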
Stat-134, Section 2
Concepts of Probability
Fall, 2002
Instructor : Antar Bandyopadhyay ( Email : antar@stat.berkeley.edu, Office : 357 Evans Hall ).
Class Time : MWF 10:00 - 11:00 in room 60 Evans. NOTE : The room has changed to 60 Evans ( effective from Friday, September 6, 2002 ).
GSI : Gabor Pete ( Email : gabor@stat.berkeley.edu, Office : 447 Evans Hall ).
Office Hours ( changed and effective from Monday, September 30, 2002 ) :
• Antar :
□ Mon 11:10 - 12:10 in room 443 Evans Hall.
□ Tue 3:00 - 5:00 in room 357 Evans Hall.
• Gabor :
□ Mon 2:10 - 4:00 in room 399 Evans Hall.
□ Tue 2:10 - 3:00 in room 447 Evans Hall.
□ Wed 4:10 - 5:00 in room 399 Evans Hall.
Review for the Final Exam :
• Antar : Monday ( December 9 ) 10:00 - 12:00 in room 60 Evans Hall.
• Gabor : Thursday ( December 5 ) 5:00 - 6:30 in room 330 Evans Hall.
Course Outline : Click here to get the course outline.
Prerequisite : Calculus is a serious prerequisite. Any calculus course equivalent to Math-53 is good enough. We will need various summation formulae, inequalities, summation of infinite series, limits, differentiation, integration, and integration of functions of two variables.
Text : Probability by Jim Pitman.
Other References : Here are two more books which are good to look at for reading and problem solving. You may not want to buy them; it would be best to borrow them from the library.
• R. G. Hoel, S. C. Port and C. J. Stone - Introduction to Probability Theory.
• Sheldon M. Ross - A First Course in Probability, 6th Edition.
Lecture Schedule : Click here to get the lecture schedule ( this schedule may change as the semester progresses ).
Exams :
• One midterm on October 16, 2002 ( Wednesday ) ( in class ).
• Final exam on December 11, 2002 ( Wednesday ), 8:00 AM - 11:00 AM in room 4 LECONTE ( Click here to check ).
The exams will be open note: you are allowed to bring your own handwritten notes, like class notes, your homework solutions, etc., but no printed materials, like books, computer printouts, or photocopied materials, are allowed. If needed, the Normal Table will be supplied. You may use a basic calculator for calculation purposes.
Note that the final exam is on the first day of the exam week. There will be no late, early, or repeat exam. If you cannot take the final on the date and time mentioned above, then you cannot take this class.
Syllabus for the Exams :
Practice Exams :
• For Midterm : [ Also you may want to look at the back of the book to get two more practice midterm exams. If you still want some more practice then you may want to see this one, which is from this semester's Statistics-101 course ( this midterm is longer than ours ). ]
• For Final : [ Also you may want to look at the back of the book to get two more practice exams. If you still want some more practice then you may want to see this one, which is from this semester's Statistics-101 course. ]
Solutions to the Exam Problems :
Homework : There will be 12 sets of homework: 6 before the midterm and 6 after. The 10 best homework scores will be taken for the final grading. I will assign homework in class on Monday, and it will be due in class on Wednesday of the following week. Each homework set will be based on course materials covered in the lectures given during the week it is assigned. For example, the homework assigned on September 9 (Monday) will be due on September 18 (Wednesday), and will be on materials covered on the 9th, 11th, and 13th.
Late submission of homework will not be accepted. If you cannot submit a homework on time, don't worry about it, and try to do well on the others. It will not count in your final grade, since you have two extra homework sets for the whole semester.
Here is the Homework Schedule. This schedule may also change as the semester progresses.
The homework sets are mainly from the book, but sometimes I may also assign problems from outside the textbook.
Homework solutions will be posted in one of the glass cases on the middle corridor of the third floor of Evans Hall. Each week's homework solutions will be posted by Friday afternoon at the latest, and will remain in the glass case for two consecutive weeks. NO solutions will be made available online or in any printable version because of copyright issues.
Grades :
Note : To receive any extra credit you have to submit your solution to the Extra Credit Problem Set by Friday, December 6 ( in class ).
No letter grades will be given on the homework, midterm, or final. Your letter grade for the course will be based on your overall score, computed according to the scheme above.
Celeste Prealgebra Tutor Find a Celeste Prealgebra Tutor ...As an educational psychologist, I have completed many hours of advanced coursework, and I am well-versed in the current research regarding learning, memory, and instructional practices. I utilize this knowledge to identify underlying processing difficulties that may be interfering with learning,... 39 Subjects: including prealgebra, reading, English, chemistry ...Learning chemistry can be very difficult at times. However, I will do my best to help you achieve success in your studies.I have taught organic chemistry at the college level. My background is in synthetic organic chemistry and organometallic chemistry. 10 Subjects: including prealgebra, chemistry, geometry, algebra 1 ...I have also found fulfilling this role requires my constant study, reflection, and adaptation of teaching methods and styles. As a math teacher, I encounter many students who believe they “just aren’t good at math” or “just have to get through their math class(es).” Upon meeting these students,... 49 Subjects: including prealgebra, English, geometry, calculus ...Please leave a message if I'm unable to answer and I will promptly return your call.I'm a biologist. Math is a necessity for what I do. I've been educated in Physics and up to Calculus I (Thomas'). I'm a biologist and a scientist. 14 Subjects: including prealgebra, chemistry, reading, algebra 1 ...I am currently teaching AP Spanish to help students get Spanish credit for college. Students are trained to take the College Board AP Spanish Test each year. They receive the specific training in Listening Speaking, Reading, and Writing to be successful in the test. 6 Subjects: including prealgebra, Spanish, TOEFL, soccer
Crofton, MD Statistics Tutor Find a Crofton, MD Statistics Tutor ...I have served as a copy editor for daily publications. My current job requires me to proof documents everyday. Fellow classmates always called upon me to proof and edit their work on papers, essays, or articles. 33 Subjects: including statistics, reading, English, writing ...Thanks to this combination I have an extensive background in science, math, Spanish, and writing. Although I am not a native speaker, I have lived in Spain for 4 months and traveled to Costa Rica as well. As an undergraduate, I tutored peers in Spanish including grammar, writing, and speaking skills. 17 Subjects: including statistics, Spanish, writing, physics ...That means I have taken three bar exams (DC I waived into). I have also tutored a student taking the NY Bar exam. I enjoy teaching legal concepts and I am more than qualified to tutor bar students in MD, DC, FL or NY bar exams. I graduated from the University of Miami School of Law in May 2007. 36 Subjects: including statistics, reading, calculus, algebra 1 ...An avid reader, I am hardly ever seen without a book. I can teach general reading strategies, and I have experience teaching Critical Reading for the SAT/ACT as a tutor for another tutoring company. As a college math major, I have full command of trigonometry topics. 28 Subjects: including statistics, English, reading, calculus ...I have a PhD in Economics from Howard University. I have been a professor there since 1993. I have over twenty years of tutoring experience. 11 Subjects: including statistics, geometry, accounting, algebra 1
Mapping the Topology of a Cold World The properties of particles in periodic potentials are determined not only by the energy-band structure but also by the topology of the eigenstates in the bands. Following a closed trajectory in momentum space within the Brillouin zone, a particle may acquire a Berry phase that is the integral of the Berry curvature over the surface bounded by the contour. The periodicity of the lattice requires that the integral over the entire Brillouin zone is quantized, which implies the existence of topological invariants underlying the behavior of the system. In a paper appearing in Physical Review A, Hannah Price and Nigel Cooper at the University of Cambridge, UK, propose a new protocol for mapping the local Berry curvature in ultracold gas experiments. The idea consists of adiabatically moving an atomic wave packet in a two-dimensional lattice subjected to an external force. Even though the trajectory in real-space is very complicated, the path in momentum space can be traced and, most importantly, the force can be cleverly managed in such a way to measure the effect of the Berry curvature at each point in the Brillouin zone. Price and Cooper show how their protocol is expected to work in the case of an asymmetric hexagonal lattice and in the so-called “optical flux lattices,” where the atoms feel an artificial magnetic field with high flux density. They also provide concrete arguments about the feasibility of experiments with the state-of-the-art techniques. A successful program in this direction could eventually open new perspectives in the study of quantum Hall physics in ultracold gases and, more generally, of topological effects in the dynamics of matter waves. – Franco Dalfovo
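How such a quantized invariant emerges numerically can be illustrated on a standard toy model (the Qi-Wu-Zhang Chern insulator, chosen here for convenience; it is not the lattice of the paper, and this is not the protocol of Price and Cooper). In the Fukui-Hatsugai-Suzuki discretization, each plaquette of a momentum-space grid carries a Berry flux equal to the phase of a loop of state overlaps, and the fluxes sum to 2π times an integer:

```python
import numpy as np

def hamiltonian(kx, ky, u=-1.0):
    # Qi-Wu-Zhang two-band model (an illustrative stand-in, not the
    # optical flux lattice of the paper): H(k) = d(k) . sigma
    dx, dy, dz = np.sin(kx), np.sin(ky), u + np.cos(kx) + np.cos(ky)
    return np.array([[dz, dx - 1j * dy],
                     [dx + 1j * dy, -dz]])

def lower_band_state(kx, ky):
    # eigenvector of the lower energy band at momentum (kx, ky)
    _, vecs = np.linalg.eigh(hamiltonian(kx, ky))
    return vecs[:, 0]

def berry_flux_map(n=40):
    # Fukui-Hatsugai-Suzuki discretization: the Berry flux through each
    # plaquette is the phase of the product of its four link overlaps
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    states = [[lower_band_state(kx, ky) for ky in ks] for kx in ks]
    flux = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ip, jp = (i + 1) % n, (j + 1) % n
            loop = (np.vdot(states[i][j], states[ip][j])
                    * np.vdot(states[ip][j], states[ip][jp])
                    * np.vdot(states[ip][jp], states[i][jp])
                    * np.vdot(states[i][jp], states[i][j]))
            flux[i, j] = np.angle(loop)   # gauge-invariant plaquette flux
    return flux

flux = berry_flux_map()
chern = flux.sum() / (2 * np.pi)   # quantized integer topological invariant
```

The per-plaquette phase is gauge invariant because each state enters the loop once as bra and once as ket, so the arbitrary eigenvector phases from the diagonalization cancel.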
Post a reply

krassi_holmz wrote: Merry Christmas Ricky and Mathsy! I have a problem: What picture is better than e^Pii? This is e^Pii with christmas hat on e! But when I try to change my picture, I get the message that the picture has been refreshed and then I go back. But then I see my previous picture. Has anybody had the same problem as mine?

Merry Christmas Ricky and krazzi! I had that problem a while ago, back in my days of frequent avatar changing. I think it's because the forum has something that only lets you change avatars once every [insert amount of time here]. In my experience, the forum saves the picture you uploaded but doesn't actually put it as your avatar until the time limit is up, to stop the forum from slowing down.

To Ricky, you want the format (dx + e)(fx + g), where:
e * g = c
d * g + e * f = b
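The coefficient matching at the end of the post, expanding (dx + e)(fx + g) against ax^2 + bx + c (which also requires d * f = a), can be brute-forced over integer divisor pairs. A small sketch (the function name and search strategy are mine, not from the thread):

```python
def factor_quadratic(a, b, c):
    # Search integers d, e, f, g with (d*x + e)*(f*x + g) == a*x^2 + b*x + c,
    # i.e. d*f = a, e*g = c, and the cross terms satisfy d*g + e*f = b.
    # Assumes a and c are nonzero integers.
    def divisor_pairs(n):
        pairs = []
        for k in range(1, abs(n) + 1):
            if n % k == 0:
                pairs.append((k, n // k))
                pairs.append((-k, -(n // k)))
        return pairs

    for d, f in divisor_pairs(a):
        for e, g in divisor_pairs(c):
            if d * g + e * f == b:
                return d, e, f, g
    return None   # irreducible over the integers

# 6x^2 + 11x + 3 factors as (2x + 3)(3x + 1)
result = factor_quadratic(6, 11, 3)
```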
About This Blog

March 2, 2013 By Isaac

My name is Isaac and I'm a Ph.D. student in Clinical Psychology. Why am I writing about fantasy football and data analysis? Because fantasy football involves the intersection of two things I love: sports and statistics. With this blog, I hope to demonstrate the relevance of statistics for improving your performance in fantasy football. In particular, I will use a statistical software package called R.

Why R? R is free and open source, and has great flexibility for advanced statistical techniques and graphics. You can download it here: . I strongly recommend the RStudio text editor for working with R scripts: . R scripts and data files for this blog are located in the following GitHub repository:

How Can I Learn R?

About The Author

Everyone has biases. For full disclosure, here are mine. I tend not to believe in the following: Instead, I prefer the following:
1. Previous performance does not affect future performance, yet our brains perceive order out of randomness and streaks out of nothing (known as cognitive biases)
2. Random variation around the central tendency (e.g., mean)
3. Actuarial formulas

Future Posts

These assumptions will serve as an important conceptual building block for the analytical approaches that I will discuss in the future. In future posts, I will show you how to download and calculate fantasy projections, how to determine the riskiness of a player, and how to determine the best possible players to pick in a snake and auction draft to maximize your team's chances of winning your league championship. Thanks for reading, and I would appreciate your ideas, comments, thoughts, and suggestions below!

1. Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical versus actuarial judgment. Science, 243, 1668-1674.
2. Gilovich, T., Vallone, R., & Tversky, A. (1985). The hot hand in basketball: On the misperception of random sequences. Cognitive Psychology, 17, 295-314. doi: 10.1016/0010-0285(85)90010-6
Physics Forums - View Single Post - Calculating Magnetic Flux in 3D Hi everybody, first time poster here. I am working on calculating the force of magnets in a 3 dimensional space. I have found a formula for the magnetic flux density at a distance z from the magnet face at this , under Flux density at a distance from a single rod magnet. My problem is that I can't find a formula which will relate the magnetic flux density with distances in the x and y directions as well as z. Does anyone know of a formula or way to figure this out? On a similar note, how do I then relate magnetic flux density to the pulling force at that distance?
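The formula the post links to is not reproduced there, but a commonly quoted closed form exists for the on-axis case of an axially magnetized cylinder; off-axis (full x, y, z) fields require elliptic integrals or numerical methods. A sketch under those assumptions, with made-up magnet parameters (remanence Br, length L, radius R are illustrative values, not from the post); the pull on a small dipole scales with the axial field gradient:

```python
from math import sqrt

def b_axial(z, Br=1.2, L=0.01, R=0.005):
    # On-axis flux density (tesla) a distance z (meters) from the face of an
    # axially magnetized cylinder; standard closed form, valid on the axis only.
    return (Br / 2.0) * ((L + z) / sqrt((L + z) ** 2 + R ** 2)
                         - z / sqrt(z ** 2 + R ** 2))

def axial_gradient(z, h=1e-6):
    # Central-difference dB/dz; the force on a small dipole of moment m
    # aligned with the axis is approximately F = m * dB/dz.
    return (b_axial(z + h) - b_axial(z - h)) / (2.0 * h)
```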
Math (joke) [Archive] - Spyder Forums

09-28-2006, 03:37 PM

Math 1950-2006

Last week I purchased a burger at Burger King for $1.58. The Counter girl took my $2 and I was digging for my change when I pulled 8 cents from my pocket and gave it to her. She stood there, holding the nickel and 3 pennies, while looking at the screen on her register. I sensed her discomfort and tried to tell her to just give me two quarters, but she hailed the manager for help. While he tried to explain the transaction to her, she stood there and cried. Why do I tell you this? Because of the evolution in teaching math since the 1950s:

1. Teaching Math In 1950
A logger sells a truckload of lumber for $100. His cost of production is 4/5 of the price. What is his profit?

2. Teaching Math In 1960
A logger sells a truckload of lumber for $100. His cost of production is 4/5 of the price, or $80. What is his profit?

3. Teaching Math In 1970
A logger sells a truckload of lumber for $100. His cost of production is $80. Did he make a profit?

4. Teaching Math In 1980
A logger sells a truckload of lumber for $100. His cost of production is $80 and his profit is $20. Your assignment: Underline the number 20.

5. Teaching Math In 1990
A logger cuts down a beautiful forest because he is selfish and inconsiderate and cares nothing for the habitat of animals or the preservation of our woodlands. He does this so he can make a profit of $20. What do you think of this way of making a living? Topic for class participation after answering the question: How did the birds and squirrels feel as the logger cut down their homes? (There are no wrong answers.)

6. Teaching Math In 2006
Un hachero vende una carretada de madera para $100. El costo de la producción es $80. ¿Cuánto dinero ha hecho? (A logger sells a truckload of lumber for $100. The cost of production is $80. How much money has he made?)
Tapping Into The Subconscious to Solve Complex Math Problems There was a time when I struggled with an advanced mathematics class in High School when the teacher presented math theory from an abstract perspective. This bothered me, but I was determined to allow the subject to seep into my mind, as well as I possibly could, while listening to my teacher lecture. When he, the teacher, decided to give us one problem for homework, I secretly thought that I got off easy for a 1 problem math homework night. Little did I know that the question was posed in such an abstract, new language, manner that I really did not know how to approach solving the problem because I did not completely understand the question. First of all, it is difficult for any of us to admit weakness. This is just not built into our DNA. We want to feel capable, and when we do not, we panic. This is a major issue for many students who want to more easily learn mathematics. When I stumbled upon the same issue with trying to understand the one question given to me, I was petrified. One question my teacher gave me and I did not get it. I remember thinking that the new language he used during his lecture went over my head. I did not record his sentences but I did take notes. I struggled desperately to understand what he said as he jotted notations on the board. But, no, I was at a stand-still. Mentally, I just could not wrap my mind around the new concepts presented. What was I to do? I did not want to continuously struggle, for this was wearing me out. In my stupor, I decided to go to the school library, hoping that the studious atmosphere might rub off onto me and somehow the answer would pop into my head. So, I quietly entered the library and found a quiet remote table to sit at. I put the paper on the table directly in front of me and stared at it. "I can't believe this," I said to myself, "I just don't get it.
It's like a foreign language to me." Sitting there frustrated, I again became much stressed, and then the tension simply wore me out. "I need to not think about it so much," I decidedly told myself. "Hmm, let me lay my head down on the table and take a little rest." I simply followed this little voice inside of me. I put my head down and went into a semi-meditative resting state. All the stress left my body, and my mind was free to roam with new information that left my conscious mind and seeped into my subconscious. Suddenly, I visualized while resting with my eyes closed, the ideas clearly needed to answer the question. Suddenly, the teacher's presentation of newly formed ideas came together perfectly. When I opened my eyes, I hurriedly wrote down what I discovered. Then I reviewed it again with such clarity, that it was hard to understand why my conscious mind could not figure it out. I needed my subconscious to communicate to my waking mind and it did so; calmly and without hesitation once I allowed my resting mind to reach a peaceful state pulling all the new information together in a very clear, logical manner. I was lucky to learn this technique prior to college. Many times, especially when studying difficult math theory, I used this technique to ease the completion of homework while either dreaming at night, or going into this meditative state during the day. I hope to help students overcome unproductive tensions while attempting to learn mathematics and teach them this stress-free technique. My students will be amazed of their minds' innate capabilities of learning mathematics in this peaceful manner.
Age of onset in chronic diseases: new method and application to dementia in Germany

Age of onset is an important outcome to characterize a population with a chronic disease. With respect to social, cognitive, and physical aspects for patients and families, dementia is especially burdensome. In Germany, like in many other countries, it is highly prevalent in the older population and imposes enormous efforts for caregivers and society. We develop an incidence-prevalence-mortality model to derive the mean and variance of the age of onset in chronic diseases. Age- and sex-specific incidence and prevalence of dementia are taken from published values based on health insurance data from 2002. Data about the age distribution in Germany in 2002 come from the Federal Statistical Office. Mean age of onset of a chronic disease depends on a) the age-specific incidence of the disease, b) the prevalence of the disease, and c) the age distribution of the population. The resulting age of onset of dementia in Germany in 2002 is 78.8±8.1 years (mean±standard deviation) for men and 81.9±7.6 years for women. Although incidence and prevalence of dementia in men are not greater than in women, men contract dementia approximately three years earlier than women. The reason lies in the different age distributions of the male and the female population in Germany.

Worldwide, dementia is a major public health problem today and in the future. The current number of cases is estimated to be 35.6 million, about one-fifth of those living in Western Europe [1]. In Germany, the country with most inhabitants in Europe, the number of cases will likely double by 2050 [1]. Patients with dementia encounter a variety of limitations including social, cognitive, psychological, and physical aspects with substantial loss of quality of life for the patients themselves and also for caregivers and families [2]. The economic impact of dementia is enormous.
Associated annual costs are estimated at 604 billion US dollars worldwide and will increase even more quickly than the prevalence [1]. Age of onset of a disease has been described as an alternative to incidence as a measure for occurrence and effect in epidemiology [3]. Traditionally, comparisons between groups with a factor present or absent are expressed as relative risks. In common diseases with a high background risk, rate ratios between groups (i.e., ratios of person-time incidence rates) cannot be interpreted as risk ratios. In these cases, a statement that someone being exposed to a risk factor contracts the disease, on average, a number of years earlier than someone who is not exposed, is easily interpretable to nonepidemiologists [3]. In decisions of policy-makers, such as the planning of the need for special care units and nursing homes, the age of onset can be seen as a key measure. With respect to dementia, the age of onset is hardly accessible by empirical studies. In Germany, registers of newly diagnosed cases do not exist, and representative surveys of the age of onset are difficult to conduct. Besides presenting a feasible, new way of estimating the mean age of onset of a chronic disease, this article shows that age of onset depends on the age distribution of the population under consideration.

Assuming that the age-course of the incidence is known, we use a simple incidence-prevalence-mortality (IPM) model for calculating the mean age of onset of the chronic disease. In a first step, the general IPM model is introduced. Then, formulas for the age of onset will be developed and will be applied to epidemiological data on dementia. In consideration of basic epidemiological parameters such as incidence of, prevalence of, and mortality from a disease, it is helpful to look at state (or compartmental) models. The model used here consists of the three states Normal, Disease, and Death and the transitions between the states.
Normal means healthy with respect to the disease under consideration. The numbers of people in the Normal state are denoted as S (susceptible), while in the Disease state they are denoted as C (cases). The transition rates are the incidence rate i and the mortality rates m[0] and m[1] of the nondiseased and diseased people, respectively (Figure 1).

Age of onset in the IPM model

In the general IPM model, the rates depend on calendar time (t), age (a), and in the case of m[1], the disease duration (d). For a specific point in time t* and a small time period Δ>0, the number of newly diseased people aged a is about i(t*, a) S(t*, a) Δ. By integration we obtain the number of all newly diseased people at time t* across all ages:

∫[0, w] i(t*, a) S(t*, a) da.   (1)

The upper limit w in Equation (1) is the age of the oldest member in the population. The mean age of onset Ā(t*) at time t* is obtained by weighting the integrand in Equation (1) with the age a and dividing by the number of all newly diseased people. Hence it holds

Ā(t*) = ∫[0, w] a · i(t*, a) S(t*, a) da / ∫[0, w] i(t*, a) S(t*, a) da.   (2)

In practical applications the number S of nondiseased subjects in a population is not accessible. By setting N(t*, a)=S(t*, a)+C(t*, a) and p(t*, a)=C(t*, a)/N(t*, a), it holds S(t*, a)={1 – p(t*, a)} N(t*, a). The function N(t*, a) is the (absolute) age distribution of the population, and p(t*, a) is the age-specific prevalence at time t*. Then, Equation (2) reads as

Ā(t*) = ∫[0, w] a · i(t*, a) {1 – p(t*, a)} N(t*, a) da / ∫[0, w] i(t*, a) {1 – p(t*, a)} N(t*, a) da.   (3)
The incidence i and the prevalence p in Equation (3) is subject to epidemiological By interpreting the mean age at onset as the first moment of a random variable A(t*), the corresponding variance is Equations (3) and (4) hold true for subpopulations as well. In many diseases, the incidence i, the prevalence p, and the age distribution N differ substantially between sexes. Thus, it may be useful to apply Equations (3) and (4) to males and females separately. Relations between incidence, prevalence, and mortality Besides the age distribution N, Equations (3) and (4) depend on the incidence i and the prevalence p. In cases where one of i or p is unknown, it may be possible to approximate it. For this, we assume that the transition rates do not depend on t or on d. In this situation Murray and Lopez considered a system of ordinary differential equations (ODEs), which expresses the change in the numbers of healthy and sick patients aged a with the corresponding rates [4,5]. The system can be transformed into a scalar ODE of Riccati type [6]: This equation relates the change in the prevalence at age a to the rates i, m[0], and m[1]. The advantage of such closed-form ODEs includes the possibility of calculating the age profile of the prevalence from given age-specific incidence and mortality rates. Under certain smoothness constraints, the incidence and mortality rates uniquely determine the prevalence. In addition, for given prevalence and mortality rates the incidence can be obtained, which allows cross-sectional studies to be used for incidence estimates [6]. Application: dementia in Germany The formulas developed above are applied to epidemiological data on dementia in Germany. The age-specific incidence has been taken from German health insurance data, separately for males and females [7]. The data have been interpolated affine-linearly using the middle of the age classes as knots. 
The mortality m of the general population is taken from the life tables of the Federal Statistical Office of Germany [8]. The reference year t* is 2002. The relative mortality R=m[1]/m[0] is set constant to R=2.4 [9]. Although it is likely that R depends on a, the age-specific values are not reported [9]. In case the general mortality m=(1 – p)·m[0]+p·m[1] and the relative mortality R are given, the ODE (2) changes its type and becomes Abelian [10]: The age-specific prevalence for men and women is derived by integrating the Abelian ODE (6) with initial condition p(60)=0 via the classical Runge-Kutta method, cf. [11]. With N(2002, a) given for every age a=0, …, 99, 100+ from the official statistics [9], the integrals in Equations (3) and (4) are replaced by sums. All calculations are performed with the Software R (The R Foundation for Statistical Computing), version 2.12.0. Figure 2 shows the age-specific prevalence of dementia in Germany for males and females. In both cases, the prevalence starts at 0 at the age of 60 years, which is the initial condition. Until the age of 70 years, the prevalences of dementia in men and women are almost identical. Then the curves start to diverge, which is an effect of the difference in general mortality. Incidence rates in this age class are almost the same for men and women. However, general mortality m for men is almost twice as high as for women. It is striking that both prevalence curves have a maximum at age a*= 96 years. At this age it holds dp/da=0. Figure 2. Prevalence of dementia in Germany. Age- and sex-specific prevalence of dementia after integration of the ODE (6). The age distribution of the new cases of dementia i(t*, a)·(1 – p(t*, a))·N(t*, a) for each age a=60, …, 99, 100+ at t*=2002 is shown in Figure 3 for males and females. Both distributions are left-skewed. The discontinuities stem from the discontinuous structure of the age distribution. Women are far more often affected than men. 
The modus of the age of new cases is at age 80 and 85 in men and women, respectively. The associated mean age of onset of dementia together with the standard deviation are presented in Table 1. On average, males contract the disease at the age of 78.8 years, whereas females develop it three years later at 81.9 years of age. The standard deviation of the age of onset is similar in men and women at about 8 years. The source and data files for calculating these numbers using the R software are provided as Additional file 1 to this article. Additional file 1. Data and source files for use with the statistical software R. Format: ZIP Size: 3KB Download file The framework of the IPM model allows the calculation of the mean age of onset of a chronic disease. One might expect that the age at onset only depends on the disease, that it is disease inherent. However, the age at onset depends on the shape of the age distribution. The age distribution is a subject of demography, and there are population models where the numbers of people in the age groups can be represented analytically. The simplest example is the stationary population[12]. However, real populations typically are nonstationary and have to be managed differently. In Germany, like in many other countries, the age structure of the population is captured accurately by the Federal Statistical Office. Figure 3. Age distribution of incident dementia. Age distribution of the number of new cases (blue: males, red: females) in 2002 based on the age distribution N(2002, a) in Germany. Table 1. Onset and duration of dementia for males and females in Germany in 2002 When applying the methods to dementia in Germany, the mean age of onset of dementia is about 79 in men and 82 in women. Due to the different life expectancies of men and women in Germany, there are far more females in the older age groups, and the difference is not surprising. 
It is clear that there is a large difference in the numbers of male and female patients with dementia. Figure 4 shows the numbers of female and male patients in each of the age groups. In 2002, a total of about 63,000 men and 147,000 women aged 60 years and above fell ill. The reasons for this discrepancy between men and women are twofold. First, the incidence of dementia is higher in females, which leads to a higher prevalence (see Figure 2). Second, the number of individuals over 60 years is higher in females. For comparison, there were only 8.5 million men and 11.6 million women 60 years and above in Germany in 2002 [8]. Another point is worth being mentioned: the examinations in this paper predict a peak in the prevalence of dementia in the second half of the ninth decade of life for men and women. After the peak, prevalence decreases. In another survey from 2007 about the prevalence of dementia in Germany, there are indications of the existence of a maximum in the age-specific prevalence [13]. Figure 4. Age distribution of people with dementia. Age distribution of the number of people with dementia (blue: males, red: females) in 2002 based on the age distribution N(2002, a) in Germany. The analytical representations of the mean age of onset and have several advantages. First, by the formulas (2) – (4) the effects of interventions in chronic diseases on can be estimated in advance. For example, if a prevention program lowers the incidence of the disease by a certain amount, the prevalence is lowered [6] and the impact of the incidence reduction on can be predicted. Second, by making the dependence of on the age distribution explicit, the necessity of proper age profiles (or adjustment methods) in epidemiological studies that survey age of onset becomes obvious. 
With respect to surveying the age of onset empirically (e.g., by questioning patients about their age at date of diagnosis), Chen et al were aware of the problem of choosing a representative age distribution and gave some corresponding advice [14]. Finally, the approach presented for the first time in this article allows the estimation of the mean age of onset of dementia in the entire relevant population of Germany. Currently, there are no patient registers about dementia in Germany, and surveys involving patients with dementia and relatives are very difficult. Nevertheless, this work has some weaknesses. First, our way of calculating the age-specific prevalence of dementia for men and women requires the incidence and mortality rates to be independent from calendar time and independent from disease duration. These independence assumptions are hardly fulfilled in real data. In most populations, mortality has a secular trend due to medical progress and health awareness. Similarly, in the calculation of prevalence, the relative mortality is assumed to be constant for all age groups and both sexes. This is unlikely to be true, but more detailed data are lacking. However, the age- and sex-specific prevalences are quite similar to another survey [13], which gives justification for our method. Second, the age distribution of the Federal Statistical Office does not stratify ages beyond 100 years. The people 100 years and above are summarized in one age class 100+. With a view to the relatively low case numbers (see Figure 3), the effect of this limitation is negligible. Third, the incidences are based on claims data of the statutory health insurance (SHI) from 2002. Therefore, age of onset actually means age of diagnosis. Furthermore, there is a large proportion of undetected cases in dementia [15,16], which are not considered in the present study. 
Additionally, the rate of officially diagnosed cases may depend on formal or reimbursement reasons or the sensibility for the disease. In the present work, an IPM model has been used to study the age of onset of dementia. The mean age of onset depends on the incidence and prevalence of the disease and on the age distribution of the population under consideration. If the age-specific prevalence is known, the formulas for mean age of onset can be applied directly. Alternatively, the age-specific prevalence inherent in the numbers might be obtained as the solution of a new ODE [6]. As a practical example, the calculations were applied to data on dementia in Germany. The new approach might be beneficial, because studying dementia by empirical studies is very difficult. Characteristics of the age of onset of dementia and the estimated numbers of diseased people in different age groups are highly relevant for health services allocation planning. The methods described here help to predict characteristics of people with chronic diseases (for instance the proportion of people with walking disabilities or in need of care). The methods also allow predictions on regional levels. Because age distributions regionally differ quite substantially, the associated mean ages of onset will be different as well. Authors’ contributions RB developed the methods, drafted the text, and made the programming. SL proofread the programming. SL and RW critically revised the text, methods, and results. All authors have given important intellectual contributions and final approval of the version to be published. 1. World health organization and Alzheimer’s disease international: dementia: a public health priority. http://www.who.int/mental_health/publications/dementia_report_2012/en/ webcite PubMed Abstract 2. Bruvik FK, Ulstein ID, Ranhoff AH, Engedal K: The quality of life of people with dementia and their family carers. Dement Geriatr Cogn Disord 2012, 34(1):7-14. 
3. Boshuizen HC: Average age at first occurrence as an alternative occurrence parameter in epidemiology. Int J Epidemiol 1997, 26(4):867-872.
4. Murray CJL, Lopez AD: Quantifying disability: data, methods and results. Bull WHO 1994, 72(3):481-494.
5. Murray CJL, Lopez AD: Global and regional descriptive epidemiology of disability: incidence, prevalence, health expectancies and years lived with disability. In The Global Burden of Disease. Edited by Murray CJL, Lopez AD. Boston: Harvard School of Public Health; 1996:201-246.
6. Brinks R, Landwehr S, Icks A, Koch M, Giani G: Deriving age-specific incidence from prevalence with an ordinary differential equation. Stat Med 2012. http://dx.doi.org/10.1002/sim.5651
7. Ziegler U, Doblhammer G: Prävalenz und Inzidenz von Demenz in Deutschland [Prevalence and incidence of dementia in Germany]. Gesundheitswesen 2009, 71(5):281-290.
8. Federal Statistical Office of Germany. http://www.destatis.de
9. Rait G, Walters K, Bottomley C, Petersen I, Iliffe S, Nazareth I: Survival of people with clinical diagnosis of dementia in primary care: cohort study. BMJ 2010, 341:c3584.
10. Schulz A, Doblhammer G: Aktueller und zukünftiger Krankenbestand von Demenz in Deutschland [Current and future number of patients with dementia in Germany]. In Versorgungs-Report 2012. Edited by Guenster C, Klose J, Schmacke N. Stuttgart: Schattauer; 2012:161-175.
11. Chen WJ, Faraone SV, Orav EJ, Tsuang MT: Estimating age at onset distributions: the bias from prevalent cases and its impact on risk estimation. Genet Epidemiol 1993, 10(1):43-59.
12. Bradford A, Kunik ME, Schulz P, Williams SP, Singh H: Missed and delayed diagnosis of dementia in primary care: prevalence and contributing factors. Alzheimer Dis Assoc Disord 2009, 23(4):306-314.
13. Connolly A, Gaehl E, Martin H, Morris J, Purandare N: Underdiagnosis of dementia in primary care: variations in the observed prevalence and comparisons to the expected prevalence. Aging Ment Health 2011, 15(8):978-984.
Higher Clifford Algebras

Posted by John Baez

Lately Urs has been dreaming of categorified Clifford algebras. But he’s not the only one! We should send one of our spies to this talk tomorrow:

• Chris Douglas, Higher Clifford algebras, Topology Seminar, University of Chicago, talk in E203 at 4:30 pm, pre-talk in the same room at 3:00, October 30, 2007.

Here’s the abstract:

Real K-Theory is 8-periodic. This periodicity can be seen algebraically from the periodicity of Clifford algebras: Clifford algebras form a 2-category, and in that 2-category, the generator Cl(1) has order 8. The analogous algebraic objects for elliptic cohomology might be called “higher Clifford algebras” and ought to form a 3-category. We introduce a candidate such 3-category whose objects are invertible conformal nets. We show that the generating net, the net of free fermions, will have order at least 24. This is joint work with Arthur Bartels and André Henriques.

Life is getting interesting. For more on the 8-hour Clifford clock, the 2-category of Clifford algebras, and the super-Brauer group, try these:

Posted at October 30, 2007 12:30 AM UTC

Re: Higher Clifford Algebras

higher Clifford algebras […] ought to [be] invertible conformal nets.

Sounds good. That’s probably what WIAEO? suggests. $spin(n)$ acts on the Clifford algebra; $\mathrm{string}(n)$ acts, via $\hat \Omega_k \mathrm{spin}(n)$, on fermions on the circle. Dirac-Ramond theory also suggests that the 2-Clifford algebra is the Clifford algebra on the circle.

What I think we’d eventually want is a way to say: under the principle of least resistance, under categorification, Dirac operators are this-or-that. Namely “quantized” covariant derivatives, with “quantized” meaning: send Grassmann to Clifford. We already know what covariant derivatives are, under the principle of least resistance (namely morphisms into the curvature). So it remains to understand what the notion of “quantization” here is.
Posted by: Urs Schreiber on October 30, 2007 1:22 AM | Permalink | Reply to this

Re: Higher Clifford Algebras

Mike Shulman said he’d spy on this talk and report back to us.

Posted by: John Baez on October 30, 2007 5:03 AM | Permalink | Reply to this

Re: Higher Clifford Algebras

As usual at Chicago, Chris first gave a `pre-talk’ with background material before the main talk. The pre-talk was about Clifford algebras, which you all know about (or can read about in TWF). To summarize: we have a sequence of Clifford algebras $Cl_n$ which are generated by $n$ anticommuting square roots of $\pm 1$. The sequence is periodic up to Morita equivalence; $Cl_8$ is $\mathbb{R}(16)$, the algebra of $16\times 16$ real matrices, which is Morita equivalent to $\mathbb{R}$, and from then on it repeats every 8 with extra matrix dimensions thrown in.

By the way, Chris remarked on something which I’ve never thought about before: it’s also true that $Cl_6$ is $\mathbb{R}(8)$, so why don’t we get a period of 6 instead of 8? The answer is that the Clifford algebras are really best thought of as $\mathbb{Z}/2$-graded algebras, and $Cl_6$ is not Morita equivalent to $\mathbb{R}$ as a graded algebra.

All of this has relevance to K-theory, because it turns out that $K^n(X)$ can be represented geometrically by `bundles of Clifford modules’ over $X$. Let’s start with $K^0$; we know that elements of $K^0(X)$ are `formal differences’ $V-W$ of vector bundles over $X$. We can model the formal difference $V-W$ with an honest geometric object by using the $\mathbb{Z}/2$-graded vector bundle $V\oplus W$, where $V$ is even and $W$ is odd. Such a thing should represent the zero class in K-theory just when $V$ and $W$ are isomorphic; this can be rephrased as saying that there exists an odd operator $e$ on $V\oplus W$ (hence, taking $V$ to $W$ and vice versa) such that $e^2=1$. But this just says that $V\oplus W$ has an action of the first Clifford algebra $Cl_1$.
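None of the following is from the talk — it is just a quick numerical sanity check of the claim that $Cl_8$ is $\mathbb{R}(16)$, in the convention where all eight generators square to $-1$. The explicit generators are one (non-canonical) choice of my own, built from real $2\times 2$ blocks:

```python
import numpy as np
from itertools import combinations

# Real 2x2 building blocks: I, X, Z square to +I; E squares to -I.
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
E = np.array([[0., 1.], [-1., 0.]])

def kron(*ms):
    out = ms[0]
    for m in ms[1:]:
        out = np.kron(out, m)
    return out

# Seven mutually anticommuting real 8x8 matrices, each squaring to -I
# (a representation of Cl_7 on R^8).
L = [kron(E, X, I2), kron(E, Z, I2),
     kron(I2, E, X), kron(I2, E, Z),
     kron(X, I2, E), kron(Z, I2, E),
     kron(E, E, E)]

# Eight generators of Cl_8 acting on R^16.
G = [np.kron(X, Li) for Li in L] + [np.kron(E, np.eye(8))]

# Clifford relations: G_i G_j + G_j G_i = -2 delta_ij.
for i, Gi in enumerate(G):
    for j, Gj in enumerate(G):
        target = -2 * np.eye(16) if i == j else np.zeros((16, 16))
        assert np.allclose(Gi @ Gj + Gj @ Gi, target)

# The 2^8 = 256 subset products are linearly independent, so the algebra
# they span is all 256 dimensions of R(16): Cl_8 is R(16), as claimed.
products = []
for r in range(9):
    for subset in combinations(range(8), r):
        P = np.eye(16)
        for k in subset:
            P = P @ G[k]
        products.append(P.ravel())
print(np.linalg.matrix_rank(np.array(products)))  # → 256
```

The rank computation is the whole point: $\dim Cl_8 = 2^8 = 256 = \dim \mathbb{R}(16)$, and since the products are independent, the representation is onto.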
More generally, Karoubi proved that for any $n$, $K^{-n}(X)$ can be represented by $Cl_n$-module bundles on $X$ modulo those such that the $Cl_n$-action extends to a $Cl_{n+1}$-action. When $n=0$ this is what we had above, since a $Cl_0$-module is just a vector space. This allows us to deduce Bott periodicity for K-groups from the algebraic periodicity (up to Morita equivalence) of Clifford algebras.

K-theory tells us about bundles of C-modules for a Clifford algebra C, so it cares about the category C-mod of C-modules. A vector bundle can be thought of as a functor from the category of paths in X to Vect, and similarly a C-module bundle is a functor to C-mod. Now, Clifford algebras live naturally as the 0-cells of a symmetric monoidal bicategory CL2, whose 1-cells are bimodules and whose 2-cells are bimodule maps. Internal equivalence in this bicategory gives us the notion of Morita equivalence. We find the category C-mod inside CL2 as the hom-category from C to the unit 1. Moreover, the endomorphism category CL2(1,1) of the unit object is just the category Vect that we originally thought $K^0$ was telling us about.

What we want to do is find a `higher’ version of all this that applies to elliptic cohomology. We can no longer get away with using finite-dimensional things, so we have to replace our ordinary vector spaces with Hilbert spaces and our Clifford algebras with von Neumann algebras (algebras that embed into operators on a Hilbert space and are complete in an induced topology). We also move up one level, so that instead of ordinary bundles, we’re talking about functors defined on the 2-category of points, paths, and surfaces in X. Instead of Vect, these 2-functors should land in the 2-category of von Neumann algebras, bimodules, and maps. Thus, this looks like a 2-dimensional QFT, rather than a 1-dimensional one. Stolz and Teichner had the idea that this should tell us something about $Ell^0(X)$, the 0-degree part of the elliptic cohomology of $X$.
The goal is now to get information about $Ell^n(X)$ for higher $n$. By analogy, we hope to find some object $?_n$ like $Cl_n$, so that elements of $Ell^n(X)$ will be represented by functors into $?_n$-mod. We hope to find some sort of 3-category $??3$ such that $??3(?_n,1)$ is the 2-category $?_n$-mod and $??3(1,1)$ is the 2-category of von Neumann algebras, bimodules, and maps that we started with for $Ell^0$.

The solution proposed by Chris, Arthur Bartels, and André Henriques (DBH) is a 3-categorical structure whose objects are conformal nets. A conformal net is a `cosheaf of von Neumann algebras on the category of intervals’. The idea is that you have a von Neumann algebra for each subinterval of an interval, with inclusions for inclusions of intervals, and gluing conditions for unions of subintervals. Chris told us that conformal nets have been around for a while in other contexts, with slightly different definitions. There is a conformal net $Ff$ called the net of free fermions which plays the role of $Cl_1$, and the three of them have proven that its order is divisible by 24 (this would be the periodicity, analogous to the 8-periodicity of Clifford algebras). They don’t know what the order actually is yet, but they’re betting on 24 or 48.

The thing I haven’t said much about is the type of `3-categorical structure’ they’re using. It’s not a 3-category or tricategory, but something more like a 3-dimensional version of a double category (or a framed bicategory), provided you interpret that correctly. A double category can be defined to be an internal category object in the 2-category Cat; what they’re looking at is an `internal bicategory object’ in the 2-category of symmetric monoidal categories. This notion is something I thought about myself a while ago as a natural generalization of framed bicategories, but never developed much, so it makes me happy to see that the same idea has occurred to other people independently.
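To keep the two stories straight, the analogy described above can be lined up in a small table (this is just a restatement of the report; the right-hand column is of course still conjectural):

```latex
\begin{tabular}{l|l|l}
 & \text{K-theory} & \text{elliptic cohomology} \\ \hline
\text{dimension of QFT} & 1 & 2 \\
\text{coefficient objects} & \text{Clifford algebras in } CL2 & \text{conformal nets in a 3-category} \\
\text{generator} & Cl_1 & Ff \ \text{(free fermions)} \\
\text{order of generator} & 8 & \text{divisible by } 24 \\
\end{tabular}
```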
It’s easy to define an internal strict $n$-category object in any category with pullbacks: you have objects $C_k$ of $k$-cells for all $0\le k\le n$, source and target maps $C_k\to C_{k-1}$, compositions $C_k\times_{C_\ell} C_k\to C_k$, and so on, satisfying the obvious associativity, unit, and interchange axioms. Given this, it’s not too hard to see what you mean by a `pseudo’ such object (which DBH call a `coherent $n$-category object’) in a 2-category with pullbacks: you weaken the associativity, unit, and interchange axioms to be up to coherent isomorphism in your 2-category. For instance, a pseudo 1-category object in the 2-category of categories is just a pseudo double category. A pseudo 1-category object in the 2-category of symmetric monoidal categories is a symmetric monoidal pseudo double category, and so on. What I’m calling `pseudo $n$-category objects in Cat’ were actually defined by Michael Batanin, who called them `monoidal $n$-globular categories’.

The really neat thing, from my point of view, is that they also impose fibrational conditions on the source and target maps, just like in my definition of framed bicategory. As in the case of monoidal framed bicategories, which I mentioned in the appendix to my paper, here fibrational conditions imply that if you look only at the objects of $C_0$, the objects of $C_1$, and the objects and morphisms of $C_2$, you get an ordinary tricategory—and often with a lot less work than would be involved in checking the definition by hand, since here the only coherence involved is
This fits the way I tend to think about them, generalizes nicely to `$n$-level category’, and, like `$n$-fold category’, suggests something $n$-dimensional which isn’t quite the same as an $n$-category. Actually, DBH end up weakening this notion further, although it’s not clear whether this extra weakness is essential. You can actually define an internal bicategory object in any category with pullbacks, by equipping yourself with maps such as $C_1\times_{C_0} C_1\times_{C_0} C_1 \to C_2$ that picks out an associator for each composable triple. And you can then `pseudoify’ that in a 2-category, in an appropriate way. DBH call this a `coherent bicategory object’. If what you pseudoify is the notion of `bicategory with strict associativity’, they call it a `coherent semi-strict bicategory object’. Fibrational conditions on a coherent 2-category object should allow you to `lift’ the coherence to make it a coherent bicategory object as well. Chris has given me permission to link to his precise definitions of coherent semi-strict bicategory object and coherent 2-category object. Posted by: Mike on November 1, 2007 9:55 PM | Permalink | Reply to this Re: Higher Clifford Algebras A vector bundle can be thought of as a functor from the category of paths in $X$ to $\mathrm{Vect}$, and similarly a $C$-module bundle is a functor to $C-\mathrm{mod}$. All right. So now we are even talking about differential K-theory. Since, unless we restrict to constant paths, these functors yield vector bundles equipped with a connection. Now, Clifford algebras live naturally as the 0-cells of a symmetric monoidal bicategory CL2, whose 1-cells are bimodules and whose 2-cells are bimodule maps. Internal equivalence in this bicategory gives us the notion of Morita equivalence. We find the category C-mod inside CL2 as the hom-category from C to the unit 1. 
Moreover, the endomorphism category CL2(1,1) of the unit object is just the category Vect that we originally thought $K^0$ was telling us about.

Okay, so let’s put this together with the previous statement. We find that instead of thinking about our vector bundle directly as a functor, we should rather be thinking about it as the component map of a pseudonatural transformation. For let $1 : P_2(X) \to CL2$ be the tensor unit in the 2-category of 2-functors from 2-paths to CL2, i.e. the one that sends each point to $\mathrm{Cl}_0 \simeq \mathbb{R}$ and all paths to identities. Moreover, let $C : P_2(X) \to CL2$ be the 2-functor that sends everything to the identity on the Clifford algebra $C$. Then a $C$-module bundle (with connection) is a (pseudonatural) transformation $V : 1 \to C \,.$ When we replace $C$ by something less trivial, we obtain twisted K-theory.

Posted by: Urs Schreiber on November 2, 2007 4:14 PM | Permalink | Reply to this

Re: Higher Clifford Algebras

Viewing C-module bundles as natural transformations $V:1 \rightarrow C$ is exactly right, and is how Stolz and Teichner originally formulated this story, and is our preferred way to think about it. In particular, the idea that a twisting of a theory (eg K-theory) and the degree grading have exactly the same status is a nice consequence of this viewpoint, as you say. One of the key tests that is going to tell us that we have the right notion of Higher Clifford Algebras is when we can recognize $bgl_1 tmf$ (the classifying space for twistings of tmf) from our 3-category.

Posted by: Chris Douglas on November 5, 2007 5:52 PM | Permalink | Reply to this

Re: Higher Clifford Algebras

is how Stolz and Teichner originally formulated this story

Hm. Last time I talked to them, they were thinking of this in slightly different terms. Unless my memory is failing me, I then pointed out in our elliptic seminar that this is equivalent to having a morphism into the twisting object.
Anyway, it’s not so important who had which idea first (except that for me it is a bad issue that I am lagging behind so much with writeups). Anyway, I am pretty fond of this idea of realizing (twisted) $n$-bundles (with connection) as morphisms into $(n+1)$-bundles. I gave a talk on that at the Fields Institute last winter, and have been developing it a little further since. One way to look at it on the classical side is as a vast generalization (nonabelian and higher $n$) of Stokes’ theorem, really. If you think about it. It’s that shift in dimension which one encounters every once in a while.

And then the quantum analog of this statement: $n$-dimensional QFTs as morphisms into $(n+1)$-dimensional ones. This is what physicists call the holographic principle. And when thinking in terms of $n$-transport one finds that this is really the holographic principle of $n$-category theory at work. This principle simply says: the transformation of two $n$-functors is itself, in components, an $(n-1)$-functor. Once we equip all our $n$-functors here with extra structure that makes them what I call transport $n$-functors (smoothness, local trivializability), this obvious statement becomes a powerful generating mechanism for interesting phenomena.

There are lots of fun variations on this theme. Like: The covariant derivative of an $n$-connection $\mathrm{tra}$ is a morphism into the corresponding curvature $(n+1)$-transport. (I think this particular point should eventually be most relevant to what you are considering here: Dirac operators are “quantized” covariant derivatives (Grassmann to Clifford). We know by the above what $n$-covariant derivatives are. So what we need to figure out is what the $n$-“symbol map” does to them. I am pretty sure the answer involves the 2-Clifford algebras you found, but I need to understand this better.)
I have been talking about this holographic thing in Sections, States, Twists and Holography, recalling there some of the aspects that made it into my Toronto notes. What I am eventually trying to finish is the thing sketched in Towards 2-functorial CFT: Classical WZW 2-transport is a morphism into CS 3-transport (“trivializing” in a generalized way the Chern-Simons 3-transport which obstructs the lift of a $G$-transport to a $\mathrm{String}(G)$-transport). After quantization this becomes the statement that 2dWZW CFT 2-transport is a morphism into 3dCS TFT 3-transport. I am claiming that this last statement is secretly the mechanism behind the FRS construction. Only problem that I keep making these claims while still not producing the corresponding writeups. There is too much to do.

Anyway, it is good to see you emphasize the importance of morphisms into twisting objects in particular and to see your highly exciting work on Clifford 2-algebras in general. And thanks to Mike Shulman for reporting on it!

Posted by: Urs Schreiber on November 5, 2007 6:33 PM | Permalink | Reply to this

Re: Higher Clifford Algebras

Only problem that I keep making these claims while still not producing the corresponding writeups.

Maybe at least some of these things can now finally be written up. I arrived in Aarhus last week, and I’m almost done taking care of all the practical stuff associated with relocating. So, I predict that I will start working on this again during the second half of this week, eagerly looking forward to actually doing some research again…

Posted by: Jens on November 5, 2007 7:42 PM | Permalink | Reply to this

Re: Higher Clifford Algebras

Hi Jens! Good to hear from you, indeed.

I arrived in Aarhus last week

Ah, that’s good. I was wondering if you are already there. I am about to leave for Trondheim for a week, where I’ll visit Nils Baas together with Konrad. There are $x+1$ things I need to do when I get back, but let’s keep an eye on our stuff.
We should proceed by writing out the big story further, along the pictures already produced, and then iteratively filling in the details.

Posted by: Urs Schreiber on November 5, 2007 8:05 PM | Permalink | Reply to this

Re: Higher Clifford Algebras

By the way, in the unlikely event that anyone happens to consider following any of the links I was providing: don’t bother for a couple of hours. The server which hosts the pictures and docs to be found there is down.

Posted by: Urs Schreiber on November 5, 2007 10:01 PM | Permalink | Reply to this

Karoubi K-theory

I turned the part on Karoubi’s Clifford-algebraic description of K-theory into an $n$Lab entry: [[Karoubi K-theory]]

I am wondering if anyone can provide me with more or maybe just other references than collected there so far. I’d be grateful for any pointers.

Posted by: Urs Schreiber on July 14, 2009 9:01 PM | Permalink | Reply to this

Re: Higher Clifford Algebras

Clifford modules are those super Lie modules for the super Lie algebra $\mathbb{R}^{1|q}$ on which $\mathbb{R}^{1|0}$ acts as a multiple of the identity. Another way to say this is that the Clifford algebra $Cl_q$ is obtained from the universal enveloping algebra of the super Lie algebra $\mathbb{R}^{1|q}$ by identifying the generator of $\mathbb{R}^{1|0}$ with the identity in the algebra. Here $\mathbb{R}^{1|q}$ is the super Lie algebra spanned by a single even element $I$ and $q$ odd elements $\{\theta_i\}$ with the only non-trivial super Lie bracket being $[\theta_i,\theta_j] = \delta_{ij} \cdot I \,.$ So possibly it would be fruitful to look for super Lie 2-algebra extensions of $\mathbb{R}^{1|q}$.

Posted by: Urs Schreiber on May 30, 2008 6:43 PM | Permalink | Reply to this
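Spelling out the identification in that last comment (a routine check, not part of the original thread): in the quotient of the universal enveloping algebra $U(\mathbb{R}^{1|q})$ by the relation $I = 1$, the super Lie bracket becomes the Clifford relation

```latex
\theta_i \theta_j + \theta_j \theta_i \;=\; [\theta_i, \theta_j] \;=\; \delta_{ij}\, I \;\longmapsto\; \delta_{ij}\,,
```

so in particular $\theta_i^2 = \tfrac{1}{2}$, and the rescaled generators $e_i := \sqrt{2}\,\theta_i$ satisfy $e_i e_j + e_j e_i = 2\delta_{ij}$, the standard presentation of $Cl_q$ in the convention where generators square to $+1$.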
Office: 345 Gordon Palmer Hall

Each student entering the University takes a mathematics placement examination. Students placed in MATH 005 must complete MATH 005 as a prerequisite for MATH 100. Students placed in MATH 100 must complete MATH 100 as a prerequisite for MATH 110, MATH 112, MATH 115, or any other MA-designated course. A grade of “C-” or higher is required in all prerequisite mathematics courses.

MATH 005 Remedial Mathematics. No credit awarded. Prerequisite: One unit of high-school mathematics. Brief review of arithmetic operations followed by intensive drill in basic algebraic concepts: factoring, operations with polynomials and rational expressions, linear equations and word problems, graphing linear equations, simplification of expressions involving radicals or negative exponents, and elementary work with quadratic equations. Grades are reported as pass/fail.

MATH 100 Intermediate Algebra. 3 hours. Prerequisites: Placement and two units of college-preparatory mathematics; if a student has previously been placed in MATH 005, a grade of “C-” or higher in MATH 005 is required. Intermediate-level course including work on functions, graphs, linear equations and inequalities, quadratic equations, systems of equations, and operations with exponents and radicals. The solution of word problems is stressed. NOT APPLICABLE to UA Core Curriculum mathematics requirement. Grades are reported as “A,” “B,” “C,” or “NC” (No credit).

MATH 110 Finite Mathematics. 3 hours. Prerequisites: Placement and two units of college-preparatory mathematics; if a student has previously been placed in MATH 100, a grade of “C-” or higher in MATH 100 is required. Sets and counting, permutations and combinations, basic probability, conditional probability, matrices and their application to Markov chains, and a brief introduction to statistics. Grades are reported as “A,” “B,” “C,” or “NC” (No credit).

MATH 112 Precalculus Algebra. 3 hours. Prerequisites: Placement and three units of college-preparatory mathematics; if a student has previously been placed in MATH 100, a grade of “C-” or higher in MATH 100 is required. A higher-level course emphasizing functions including polynomial functions, rational functions, and the exponential and logarithmic functions. Graphs of these functions are stressed. The course also includes work on equations, inequalities, systems of equations, the binomial theorem, and the complex and rational roots of polynomials. Applications are stressed. Grades are reported as “A,” “B,” “C,” or “NC” (No credit).

MATH 113 Precalculus Trigonometry. 3 hours. Prerequisite: MATH 112. Continuation of MATH 112. The course includes study of trigonometric functions, inverse trigonometric functions, trigonometric identities, and trigonometric equations. Complex numbers, De Moivre’s Theorem, polar coordinates, vectors, and other topics in algebra are also addressed, including conic sections, sequences, and series. Grades are reported as “A,” “B,” “C,” or “NC” (No credit).

MATH 115 Precalculus Algebra and Trigonometry. 3 hours. Prerequisite: Placement and a strong background in college-preparatory mathematics, including one-half unit in trigonometry. Properties and graphs of exponential, logarithmic, and trigonometric functions are emphasized. Also includes trigonometric identities, polynomial and rational functions, inequalities, systems of equations, vectors, and polar coordinates. Grades are reported as “A,” “B,” “C,” or “NC” (No credit). Degree credit will not be granted for both MATH 115 and MATH 112 or MATH 113.

MATH 121 Calculus and Its Applications. 3 hours. Prerequisite: MATH 112 or equivalent. A brief overview of calculus primarily for students in the Culverhouse College of Commerce and Business Administration. Warning: This course is not satisfactory preparation for curricula requiring standard calculus or higher mathematics, and it is not a prerequisite to calculus or higher mathematics. Includes differentiation and integration of algebraic, exponential, and logarithmic functions, and applications in business and economics. Some work on functions of several variables and Lagrange multipliers is done. L’Hopital’s Rule and multiple integration are included. Only business-related applications are covered. Degree credit will not be granted for both MATH 121 and MATH 125.

MATH 125 Calculus I. 4 hours. Prerequisites: MATH 112 and MATH 113, MATH 115, or placement. This is the first of three courses in the basic calculus sequence. Topics include the limit of a function; the derivative of algebraic, trigonometric, exponential, and logarithmic functions; and the definite integral. Applications of the derivative are covered in detail, including approximations of error using differentials, maxima and minima problems, and curve sketching using calculus. There is also a brief review of selected precalculus topics at the beginning of the course. Degree credit will not be granted for both MATH 121 and MATH 125.

MATH 126 Calculus II. 4 hours. Prerequisites: MATH 125. This is the second of three courses in the basic calculus sequence. Topics include vectors and the geometry of space, applications of integration, integration techniques, L’Hopital’s Rule, improper integrals, parametric equations, polar coordinates, conic sections, and infinite series.

MATH 145 Honors Calculus I. 4 hours. Honors sections of MATH 125.

MATH 146 Honors Calculus II. 4 hours. Honors sections of MATH 126.

MATH 208 Mathematics for Elementary School Teachers: Numbers and Operations. 3 hours. Prerequisites: Elementary education or special education major and grade of “C-” or higher in MATH 100. Arithmetic of whole numbers and integers, fractions, proportion and ratio, and place value.
Class activities initiate investigations of underlying mathematical structure in arithmetic processes and include hands-on manipulatives for modeling solutions. Emphasis is on the explanation of the mathematical thought process. Students are required to verbalize explanations and thought processes and to write reflections on assigned readings on the teaching and learning of mathematics.

MATH 209 Mathematics for Elementary School Teachers: Geometry and Measurement. 3 hours. Prerequisites: Elementary education or special education major and grade of “C-” or higher in MATH 208. Properties of two- and three-dimensional shapes, rigid motion transformations, similarity, spatial reasoning, and the process and techniques of measurement. Class activities initiate investigations of underlying mathematical structure in the exploration of shape and space. Emphasis is on the explanation of the mathematical thought process. Technology specifically designed to facilitate geometric explorations is integrated throughout the course.

MATH 210 Mathematics for Elementary School Teachers: Data Analysis, Statistics, and Probability. 3 hours. Prerequisites: Elementary education or special education major and grade of “C-” or higher in MATH 209. Data analysis, statistics, and probability, including collecting, displaying/representing, exploring, and interpreting data, probability models, and applications. Focus is on statistics for problem solving and decision making, rather than calculation. Class activities deepen the understanding of fundamental issues in learning to work with data. Technology specifically designed for data-driven investigations and statistical analysis is integrated throughout the course.

MATH 227 Calculus III. 4 hours. Prerequisite: MATH 126. This is the third of three courses in the basic calculus sequence. Topics include vector functions and motion in space, functions of two or more variables and their partial derivatives, applications of partial derivatives (including Lagrange multipliers), quadric surfaces, multiple integration (including Jacobian), line integrals, Green’s Theorem, vector analysis, surface integrals, and Stokes’ Theorem.

MATH 237 Applied Matrix Theory. 3 hours. Prerequisite: MATH 126. Corequisite: MATH 227. Fundamentals of matrices and vectors in Euclidean space. Topics include solving linear systems of equations, matrix algebra, inverses, determinants, eigenvalues and eigenvectors. Also covers the basic notions of span, subspace, linear independence, basis, dimension, linear transformation, range, and null-space. Use of mathematics software is an integral part of the course.

MATH 238 Applied Differential Equations I. 3 hours. Prerequisite: MATH 227. Introduction to analytic and numerical methods for solving differential equations. Topics include numerical methods and qualitative behavior of first order equations, analytic techniques for separable and linear equations, applications to population models and motion problems; techniques for solving higher-order linear differential equations with constant coefficients (including undetermined coefficients, reduction of order, and variation of parameters), applications to physical models; the Laplace transform (including initial value problems with discontinuous forcing functions). Use of mathematics software is an integral part of the course.

MATH 247 Honors Calculus III. 4 hours. Honors sections of MATH 227.

MATH 257 Linear Algebra. 3 hours. Prerequisite: MATH 126. Corequisite: MATH 227. A theory-oriented course in which students are expected to understand and prove theorems. Topics include vector spaces and subspaces, linear independence, bases and dimension of vector spaces, solving systems of linear equations, matrices, determinants, linear transformations, eigenvalues, eigenvectors, and diagonalization.
MATH 300 Introduction to Numerical Analysis. 3 hours. Prerequisites: MATH 227, CS 114 or GES 126, and ability to program in a high-level programming language. Credit will not be granted for both MATH 300 and MATH 411. A beginning course in numerical analysis. Topics include number representation in various bases, error analysis, location of roots of equations, numerical integration, interpolation and numerical differentiation, systems of linear equations, approximations by spline functions, and approximation methods for first-order ordinary differential equations and for systems of such equations. MATH 301 Discrete Mathematics. 3 hours. Prerequisite: MATH 125. An introductory course that primarily covers logic, recursion, induction, modeling, algorithmic thinking, counting techniques, combinatorics, and graph theory. MATH 303 Contemporary Applied Mathematics. 3 hours. Prerequisites: CS 110 or CS 114, and MATH 125. The course is primarily concerned with mathematical models of real-world situations in the physical and social sciences and the professions. It provides excellent background material for middle-school and secondary-school mathematics teachers. Usually offered in the fall semester. MATH 307 Introduction to the Theory of Numbers. 3 hours. Prerequisite: MATH 227. Divisibility theory in the integers, the theory of congruences, Diophantine equations, Fermat’s theorem and generalizations, and other topics. Usually offered in the spring semester. MATH 309 Foundations of Mathematics. 3 hours. Prerequisite: MATH 126. Provides background material for middle school and secondary school mathematics teachers. Topics include logic and proof, set theory, mathematical induction, Cartesian products, relations, functions, cardinality, basic concepts of higher algebra, and field properties of real numbers. Usually offered in the fall semester. MATH 343 Applied Differential Equations II. 3 hours. Prerequisite: MATH 238. Continuation of MATH 238.
Topics include Laplace Transform methods, series solutions of second-order differential equations, the method of Frobenius, Bessel equations and functions, Fourier series, the separation of variables method, elementary boundary value problems for the Laplace, heat, and wave equations, an introduction to Sturm-Liouville boundary value problems, and phase plane analysis. Usually offered in the fall semester. MATH 355 Theory of Probability. 3 hours. Prerequisite: MATH 227. The foundations of the theory of probability, laws governing random phenomena, and their practical applications in other fields. Topics include probability spaces, properties of probability set functions, conditional probability, an introduction to combinatorics, discrete random variables, expectation of discrete random variables, Chebyshev’s Inequality, continuous variables and their distribution functions, and special densities. MATH 371 Advanced Linear Algebra. 3 hours. Prerequisite: MATH 237 or MATH 257. Topics include inner product spaces, norms, self-adjoint and normal operators, orthogonal and unitary operators, orthogonal projections and the spectral theorem, bilinear and quadratic forms, generalized eigenvectors, and Jordan canonical form. Usually offered in the spring semester. MATH 382 Advanced Calculus. 3 hours. Prerequisites: MATH 227 and MATH 237 or MATH 257. Further study of calculus with emphasis on theory. Topics include limits and continuity of functions of several variables; partial derivatives; transformations and mappings; vector functions and fields; vector differential operators; the derivative of a function of several variables as a linear transformation; Jacobians; orthogonal curvilinear coordinates; multiple integrals; change of variables; line integrals; and Green’s, Stokes’, and Divergence Theorems. MATH 402 History of Mathematics. 3 hours. Prerequisite: Permission of the department; background in traditional high-school geometry, algebra, or calculus is recommended.
Survey of the development of some of the central ideas of modern mathematics, with emphasis on the cultural context. MATH 405 Geometry for Teachers. 3 hours. Prerequisite: MATH 125 or permission of the instructor. Topics include advanced Euclidean and analytic geometry. The course provides excellent background material for middle school and secondary school mathematics teachers. Usually offered in the fall semester. MATH 406 Curriculum in Secondary Mathematics. 3 hours. Prerequisites: Admission to the teacher education program in secondary mathematics, BCT 300, and MATH 227; or permission of the instructor. Future secondary mathematics teachers examine advanced concepts, structures, and procedures that comprise secondary mathematics. MATH 410 Numerical Linear Algebra. 3 hours. Prerequisite: MATH 237 or MATH 257. Further study of matrix theory, emphasizing computational aspects. Topics include direct solution of linear systems, analysis of errors in numerical methods for solving linear systems, least-squares problems, orthogonal and unitary transformations, eigenvalues and eigenvectors, and singular value decomposition. Usually offered in the spring semester. MATH 411 Introduction to Numerical Analysis (previously MATH 311). 3 hours. Prerequisites: MATH 238; MATH 237 or MATH 257; CS 114 or GES 126; and ability to program in a high-level programming language. Credit will not be granted for both MATH 411 and MATH 300. A rigorous introduction to numerical methods, formal definition of algorithms, and error analysis and their implementation on a digital computer. Topics include interpolation, roots, linear equations, integration and differential equations, and orthogonal function approximation. Usually offered in the fall semester. MATH 413 Finite-Element Methods. 3 hours. Prerequisites: MATH 343 and MATH 382. Corequisite: MATH 410.
Quadratic functionals on finite dimensional vector spaces, variational formulation of boundary value problems, the Ritz-Galerkin method, the finite-element method, and direct and iterative methods for solving finite-element equations. MATH 419 Introduction to Optimization. 3 hours. Prerequisite: MATH 237 or MATH 257. A one-semester introduction to both linear and nonlinear programming for undergraduate students and non-math graduate students. Emphasis is on basic concepts and algorithms and the mathematical ideas behind them. Major topics in linear programming include the simplex method, duality, sensitivity, and network problems; major topics in nonlinear programming include optimality conditions, several search algorithms for unconstrained problems, and a brief discussion of constrained problems. In-depth theoretical development and analysis are not included. MATH 420 Linear Optimization Theory. 3 hours. Prerequisite: MATH 237 or MATH 257. In-depth theoretical development and analysis of linear programming. Topics include formulation of linear programs, various simplex methods, duality, sensitivity analysis, transportation and networks, and various geometric concepts. MATH 421 Nonlinear Optimization Theory. 3 hours. Prerequisite: MATH 237 or MATH 257. In-depth theoretical development and analysis of nonlinear programming with emphasis on traditional constrained and unconstrained nonlinear programming methods and an introduction to modern search methods. MATH 422 Mathematics for Finance. 3 hours. Prerequisites: MATH 227 and MATH 355, or consent of the department. Topics include the basic no-arbitrage principle, binomial model, time value of money, money market, risky assets such as stocks, portfolio management, forward and future contracts, and interest rates. MATH 428 Introduction to Optimal Control. 3 hours. Prerequisite: MATH 238. Corequisite: MATH 410 or permission of the instructor. Introduction to the theory and applications of deterministic systems and their controls.
Major topics include calculus of variations, Pontryagin’s maximum principle, dynamic programming, stability, controllability, and numerical aspects of control problems. Usually offered in the fall semester. MATH 432 Graph Theory and Applications. 3 hours. Prerequisite: MATH 237 or MATH 257. Survey of several of the main ideas of general graph theory with applications to network theory. Topics include oriented and nonoriented linear graphs, spanning trees, branching and connectivity, accessibility, planar graphs, networks and flows, matching, and applications. Usually offered in the fall semester. MATH 441 Boundary Value Problems. 3 hours. Prerequisite: MATH 238. Methods of solving the classical second-order linear partial differential equations: Laplace’s equation, the heat equation, and the wave equation, together with appropriate boundary or initial conditions. Usually offered in the fall semester. MATH 442 Integral Transforms and Asymptotics. 3 hours. Prerequisite: MATH 441. Complex variable methods, integral transforms, asymptotic expansions, WKB method, Airy’s equation, matched asymptotics, and boundary layers. Usually offered in the spring semester. MATH 445 Theoretical Foundations of Fluid Dynamics I. 3 hours. Prerequisite: MATH 343, AEM 264 or equivalent, or permission of the department. Introduction to continuum mechanics and tensors. Local fluid motion. Equations governing fluid flow and boundary conditions. Some exact solutions of the Navier-Stokes equations. Vortex motion. Potential flow and aerofoil theory. MATH 451 Mathematical Statistics with Applications I. 3 hours. Prerequisites: MATH 237 or MATH 257, and MATH 355. Introduction to mathematical statistics.
Topics include bivariate and multivariate probability distributions, functions of random variables, sampling distributions and the central limit theorem, concepts and properties of point estimators, various methods of point estimation, interval estimation, tests of hypotheses, and Neyman-Pearson Lemma, with some applications. Usually offered in the fall semester. MATH 452 Mathematical Statistics with Applications II. 3 hours. Prerequisite: MATH 451. Further applications of the Neyman-Pearson Lemma, Likelihood Ratio tests, Chi-square test for goodness of fit, estimation and test of hypotheses for linear statistical models, analysis of variance, analysis of enumerative data, and some topics in nonparametric statistics. Usually offered in the spring semester. MATH 457 Stochastic Processes with Applications I. 3 hours. Prerequisite: MATH 355 or equivalent. Introduction to the basic concepts and applications of stochastic processes. Markov chains, continuous-time Markov processes, Poisson and renewal processes, and Brownian motion. Applications of stochastic processes including queueing theory and probabilistic analysis of computational algorithms. MATH 459 Stochastic Processes with Applications II. 3 hours. Prerequisite: MATH 457 or equivalent. Continuation of MATH 457. Advanced topics of stochastic processes including Martingales, Brownian motion and diffusion processes, advanced queueing theory, stochastic simulation, and probabilistic search algorithms (simulated annealing). Usually offered in the fall semester. MATH 460 Introduction to Differential Geometry. 3 hours. Prerequisite: MATH 486, or MATH 382 and permission of the department. Introduction to basic classical notions in differential geometry: curvature, torsion, geodesic curves, geodesic parallelism, differential manifold, tangent space, vector field, Lie derivative, Lie algebra, Lie group, exponential map, and representation of a Lie group. Usually offered in the spring semester. 
MATH 465 Introduction to General Topology. 3 hours. Prerequisite: MATH 486. Basic notions in topology that can be used in other disciplines in mathematics. Topics include topological spaces, open sets, closed sets, basis for a topology, continuous functions, separation axioms, compactness, connectedness, product spaces, quotient spaces, and metric spaces. Usually offered in the spring semester. MATH 466 Introduction to Algebraic Topology. 3 hours. Prerequisites: MATH 465 and MATH 470. Homotopy, fundamental groups, covering spaces, covering maps, and basic homology theory, including the Eilenberg-Steenrod axioms. Usually offered in the fall semester. MATH 467 Advanced Geometry. 3 hours. Prerequisite: MATH 405 or permission of the instructor. This is a second course in axiomatic geometry. Topics include Euclidean and non-Euclidean geometry, studied from an analytic point of view and from the point of view of transformation geometry. Some topics in projective geometry may also be treated. Usually offered in the spring semester. MATH 470 Principles of Modern Algebra I. 3 hours. Prerequisite: MATH 237 or MATH 257. A first course in abstract algebra. Topics include groups, permutation groups, Cayley’s theorem, finite abelian groups, isomorphism theorems, rings, polynomial rings, ideals, integral domains, and unique factorization domains. Usually offered in the fall semester. MATH 471 Principles of Modern Algebra II. 3 hours. Prerequisite: MATH 470. Introduction to the basic principles of Galois Theory. Topics include rings, polynomial rings, fields, algebraic extensions, normal extensions, and the fundamental theorem of Galois Theory. Usually offered in the spring semester. MATH 474 Cryptography. 3 hours. Prerequisites: MATH 307, MATH 470, or permission of the department. Introduction to the rapidly growing area of cryptography, an application of algebra, especially number theory. Usually offered in the fall semester. MATH 485 Introduction to Complex Calculus. 3 hours.
Prerequisite: MATH 227. Some basic notions in complex analysis. Topics include analytic functions, complex integration, infinite series, contour integration, and conformal mappings. Usually offered in the spring semester. MATH 486 Introduction to Real Analysis I (previously MATH 380). 3 hours. Prerequisite: MATH 237 or MATH 257. Rigorous development of the calculus of real variables. Topics include topology of the real line, sequences, limits, continuity, and differentiation. Usually offered in the fall semester. MATH 487 Introduction to Real Analysis II (previously MATH 481). 3 hours. Prerequisite: MATH 486. Riemann integration, introduction to Riemann-Stieltjes integration, series of constants and convergence tests, sequences and series of functions, uniform convergence, power series, Taylor series, and the Weierstrass Approximation Theorem. Usually offered in the spring semester. MATH 495 Seminar/Directed Reading. 1 to 3 hours. Offered as needed.
Conduction electrons in a metal

Next: White-dwarf stars Up: Quantum statistics Previous: The Stefan-Boltzmann law

The conduction electrons in a metal are non-localized (i.e., they are not tied to any particular atoms). In conventional metals, each atom contributes a single such electron. To a first approximation, it is possible to neglect the mutual interaction of the conduction electrons, since this interaction is largely shielded out by the stationary atoms. The conduction electrons can, therefore, be treated as an ideal gas. However, the concentration of such electrons in a metal far exceeds the concentration of particles in a conventional gas. It is, therefore, not surprising that conduction electrons cannot normally be analyzed using classical statistics: in fact, they are subject to Fermi-Dirac statistics (since electrons are fermions).

Recall, from Sect. 8.5, that the mean number of particles occupying state $s$ (of energy $\epsilon_s$) is given by

$\bar{n}_s = \frac{1}{e^{\beta(\epsilon_s-\mu)}+1}$

according to the Fermi-Dirac distribution. Here, $\mu$ is termed the Fermi energy of the system. This energy is determined by the condition that

$\sum_r \bar{n}_r = \sum_r \frac{1}{e^{\beta(\epsilon_r-\mu)}+1} = N,$

where $N$ is the total number of electrons.

Let us investigate the behaviour of the Fermi function

$F(\epsilon) = \frac{1}{e^{\beta(\epsilon-\mu)}+1}$

as $\epsilon$ varies, at fixed temperature $T$. In the limit $T\to 0$ (i.e., $\beta\to\infty$), if $\epsilon<\mu$ then $F(\epsilon)\to 1$, whereas if $\epsilon>\mu$ then $F(\epsilon)\to 0$. Thus, at $T=0$ all single-particle states with energies below the Fermi energy are occupied, and all those with energies above it are empty. This is an obvious result, since when $T=0$ the gas is in its lowest energy, or ground-state, configuration. Since the Pauli exclusion principle requires that there be no more than one electron per single-particle quantum state, the lowest energy configuration is obtained by piling electrons into the lowest available unoccupied states until all of the electrons are used up. Thus, the last electron added to the pile has quite a considerable energy, $\epsilon=\mu$, since all of the lower energy states are already filled.

Let us calculate the Fermi energy $\mu=\epsilon_F$ of a degenerate electron gas at $T=0$. The energy of each particle is related to its momentum $p=\hbar k$ via

$\epsilon = \frac{p^2}{2m} = \frac{\hbar^2 k^2}{2m},$

where $k$ is the de Broglie wave-vector. At $T=0$ all quantum states whose momentum is less than the critical Fermi momentum $p_F=\hbar k_F$ are filled, and all those with larger momenta are empty: the occupied states form a sphere of radius $k_F$ in $k$-space. Now, we know, by analogy with Eq. (514), that there are $(2\pi)^{-3}\,V$ allowed translational states per unit volume of $k$-space. The volume of the Fermi sphere of radius $k_F$ is $(4/3)\,\pi k_F^3$, so the number of translational states inside the sphere is $(2\pi)^{-3}\,V\,(4/3)\,\pi k_F^3$. The actual number of quantum states inside the sphere is twice this, because electrons possess two possible spin states for every possible translational state.

Since the total number of occupied states (i.e., the total number of quantum states inside the Fermi sphere) must equal the total number of particles in the gas, it follows that

$\frac{2\,V}{(2\pi)^3}\,\frac{4}{3}\,\pi k_F^3 = N.$

The above expression can be rearranged to give

$k_F = \left(3\pi^2\,\frac{N}{V}\right)^{1/3},$

which implies that the de Broglie wavelength $\lambda_F = 2\pi/k_F$ corresponding to the Fermi energy is of order the mean inter-particle spacing $(V/N)^{1/3}$. The Fermi energy at $T=0$ therefore takes the form

$\mu = \epsilon_F = \frac{\hbar^2 k_F^{\,2}}{2m} = \frac{\hbar^2}{2m}\left(3\pi^2\,\frac{N}{V}\right)^{2/3}.$

It is easily demonstrated that $\epsilon_F$ amounts to a few electron-volts for a typical metal, so the corresponding Fermi temperature $T_F=\epsilon_F/k$ is of order $10^4$ K — far in excess of room temperature.

The majority of the conduction electrons in a metal occupy a band of completely filled states with energies far below the Fermi energy. In many cases, such electrons have very little effect on the macroscopic properties of the metal. Consider, for example, the contribution of the conduction electrons to the specific heat of the metal. The molar heat capacity at constant volume is

$c_V = \frac{1}{\nu}\left(\frac{\partial \bar{E}}{\partial T}\right)_V,$

where $\nu$ is the number of moles. If the electrons obeyed classical Maxwell-Boltzmann statistics, so that $F\propto e^{-\beta\epsilon}$ for all electrons, then the equipartition theorem would give

$c_V = \frac{3}{2}\,R.$

However, the actual situation, in which $T\ll T_F$, is very different. A small change in $T$ does not affect the mean energies of the majority of the electrons, whose energies lie far below the Fermi energy, since such electrons cannot find any unoccupied states nearby into which they can be thermally excited. Only those electrons lying within roughly $kT$ of the Fermi energy can be excited, and we expect each electron in this region to contribute roughly an amount $k$ to the heat capacity. However, since only a fraction $kT/\mu$ of the total electrons lie in this region, it follows that the heat capacity per mole is of order

$c_V \sim \frac{kT}{\mu}\,R.$

Since $T\ll T_F=\mu/k$, the molar specific heat of the conduction electrons is far less than the classical value $(3/2)\,R$; i.e., the electronic contribution to the heat capacity is strongly suppressed. Note that this specific heat is not temperature independent. In fact, using the superscript $e$ to denote the electronic specific heat, the molar specific heat can be written

$c_V^{(e)} = \gamma\,T,$

where $\gamma$ is a (positive) constant of proportionality. At room temperature $c_V^{(e)}$ is completely masked by the much larger specific heat $c_V^{(L)}$ due to lattice vibrations (cf. Sect. 7.12). Clearly, at low temperatures $c_V^{(L)} = A\,T^3$, where $A$ is a (positive) constant of proportionality, so the lattice contribution eventually falls below the electronic contribution. The total molar specific heat of a metal at low temperatures takes the form

$c_V = \gamma\,T + A\,T^3.$

It follows that a plot of $c_V/T$ versus $T^2$ should yield a straight line whose intercept on the vertical axis gives the coefficient $\gamma$. Fig. 11 shows such a plot. The fact that a good straight line is obtained verifies that the temperature dependence of the heat capacity predicted above is indeed correct.

Richard Fitzpatrick 2006-02-02
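As a quick numerical check of the Fermi-energy formula in the notes above, the snippet below evaluates the Fermi energy and Fermi temperature for copper, taking a conduction-electron density of about 8.5 × 10^28 m⁻³ (an assumed textbook value, not taken from this document):

```python
import math

hbar = 1.054571e-34   # reduced Planck constant (J s)
m_e = 9.109383e-31    # electron mass (kg)
k_B = 1.380649e-23    # Boltzmann constant (J/K)
eV = 1.602177e-19     # joules per electron-volt

def fermi_energy(n):
    """Fermi energy (J) of a free-electron gas with number density n (m^-3)."""
    k_F = (3.0 * math.pi**2 * n) ** (1.0 / 3.0)   # Fermi wave-vector
    return hbar**2 * k_F**2 / (2.0 * m_e)

n_cu = 8.5e28                 # assumed electron density of copper (m^-3)
E_F = fermi_energy(n_cu)
T_F = E_F / k_B               # Fermi temperature
print(f"E_F = {E_F / eV:.2f} eV, T_F = {T_F:.3g} K")
```

This gives roughly 7 eV and 8 × 10⁴ K, consistent with the claim that the Fermi temperature far exceeds room temperature.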
Category:Mathematical physics This category contains books on mathematical physics: the scientific discipline concerned with the application of mathematics to problems in physics, the development of mathematical methods suitable for physics applications, and the formulation of physical theories. Related categories The following 3 related categories may be of interest, out of 3 total. Books or Pages The following 4 pages are in this category, out of 4 total. Last modified on 10 June 2009, at 03:50
SparkNotes: SAT Subject Test: Math Level 2: Parametric Equations

Parametric Equations

Just like polar coordinates, parametric equations will not show up on the Math IIC test very often. However, they do show up occasionally, and knowing them might separate you from the pack. Parametric equations are a useful way to express two variables in terms of a third variable. The third variable is called the parameter. Here is an example:

x = 3t – 2; y = –t + 4

As the value of t changes, the ordered pair (x, y) changes according to the parametric equations, and a graph can be drawn. Below is a graph of the parametric equations x = 3t – 2; y = –t + 4 for the range of values 0 ≤ t ≤ 3.

Eliminating the Parameter

As you might have guessed from the graph above, plotting parametric equations by substituting values of the parameter can be tedious. Luckily, some parametric equations can be reduced into a single equation by eliminating the parameter. All this involves is a little algebra. Consider the parametric equations x = 2t; y = t + 1. In the first equation, we can solve for t: t = (1/2)x. Now we can substitute this value into the second equation to get y = (1/2)x + 1, which is a line we can easily sketch. But be careful to keep the range of the original equations in mind when you eliminate the parameter in parametric equations. For example, by eliminating the parameter in the parametric equations x = 2t^2; y = 4t^2 + 3, you arrive at the equation y = 2x + 3. The range of this function, however, does not include x values below 0 or y values below 3 because the ranges of the original parametric equations do not include these values.
So, the graph of these parametric equations actually looks like the graph of y = 2x + 3 cut off below the point (0, 3). When questions on parametric equations do appear on the test, they’re usually quite simple. Given a parametric equation, you should be able to recognize or sketch the proper graph, whether by plotting a few points or by eliminating the parameter.
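A tiny sketch (illustrative, not from the original article) that checks the parameter elimination above numerically — sampling t and confirming that every point of x = 2t², y = 4t² + 3 lies on y = 2x + 3 with x ≥ 0:

```python
def parametric_point(t):
    """Return (x, y) for the curve x = 2t^2, y = 4t^2 + 3."""
    return 2 * t**2, 4 * t**2 + 3

# Sample the parameter over a symmetric range.
for i in range(-10, 11):
    t = i / 5.0
    x, y = parametric_point(t)
    assert x >= 0                        # x never drops below 0
    assert abs(y - (2 * x + 3)) < 1e-12  # eliminated form y = 2x + 3 holds
print("all sampled points satisfy y = 2x + 3 with x >= 0")
```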
subcanonical coverage

A coverage (resp. Grothendieck topology, resp. Grothendieck pretopology) defining a site is called subcanonical if all representable presheaves on this site are sheaves. Of course, a subcanonical site is one whose coverage is subcanonical. The term “subcanonical” comes about because the largest coverage for which the representables are sheaves is called the canonical coverage, and the subcanonical coverages are precisely the “sub-coverages” of the canonical one.

Effective-epimorphic sieves

An alternate definition is that a Grothendieck coverage is subcanonical if and only if all of its covering sieves $R\hookrightarrow C(-,U)$ are effective-epimorphic, meaning that the morphisms $f:V\to U$ in $R$ form a colimit cone under the diagram consisting of all morphisms between them over $U$.

To see this, first recall that if $R\hookrightarrow C(-,U)$ is a sieve, then a functor $X:C^{op}\to Set$ satisfies the sheaf axiom for $R$ if and only if

• for every family $(x_f)_{f\in R}$ which is compatible, in the sense that $X(g)(x_f) = x_{f g}$ whenever this makes sense, there exists a unique $x\in X(U)$ such that $x_f = X(f)(x)$.

Interpreting this when $X$ is a representable functor $C(-,Z)$, we obtain

• for every family of maps $(h_f:V\to Z)$, where $f:V\to U$ is in $R$, such that $h_f g = h_{f g}$ for any $g:V'\to V$, there exists a unique $k:U\to Z$ such that $h_f = k f$.

But this says precisely that $R$ is effective-epimorphic, as defined above.

In fact, since the covering sieves in a subcanonical coverage must also satisfy pullback-stability, they must be not only effective-epimorphic but universally effective-epimorphic (meaning that any pullback of them is effective-epimorphic). It is then easy to see that the canonical coverage consists precisely of all the universally effective-epimorphic sieves.
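The sheaf axiom quoted above is often packaged as an equalizer condition; the following display is a standard reformulation, added here only for orientation (it is not part of the original entry):

```latex
X(U) \;\xrightarrow{\;\cong\;}\;
\mathrm{eq}\Biggl(\, \prod_{(f\colon V\to U)\in R} X(V)
\;\rightrightarrows\;
\prod_{\substack{(f\colon V\to U)\in R \\ g\colon V'\to V}} X(V') \,\Biggr)
```

where one parallel map sends a family $(x_f)$ to $(x_{f g})$ and the other sends it to $(X(g)(x_f))$; the sheaf axiom for $R$ says exactly that the canonical comparison map from $X(U)$ into this equalizer is a bijection.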
Note also that if $f:V\to U$ is a single morphism having a kernel pair $p,q:V\times_U V \;\rightrightarrows\; V$, then the sieve generated by $f$ is effective-epimorphic iff $f$ is the coequalizer of its kernel pair, and thus iff $f$ is a effective epimorphism. Revised on November 26, 2013 23:46:15 by Urs Schreiber
infinite universe This had better not be an infinite universe , because if it is, then it contains everything. EVERYTHING. Even a universe that is not infinite. All possibilities would become realities, given infinite time and space: A planet full of failed carpet cleaning businesses Another planet full of the people who sold the machines to the carpet cleaners who ran the businesses into the ground An entire galaxy full of the psychiatrists to the parking meter attendants who ticketed the veteranarians of the sickly dogs of the former owners of the few customers of the failed carpet cleaning businesses And so on, ad suicidem.
ISRN Geometry Volume 2013 (2013), Article ID 932564, 6 pages Research Article On the M-Projective Curvature Tensor of N(κ)-Contact Metric Manifolds Department of Mathematical Sciences, APS University, Rewa, Madhya Pradesh 486003, India Received 29 December 2012; Accepted 20 January 2013 Academic Editors: J. Keesling, A. Morozov, and E. Previato Copyright © 2013 R. N. Singh and Shravan K. Pandey. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The object of the present paper is to study some curvature conditions on N(κ)-contact metric manifolds. 1. Introduction The notion of the odd dimensional manifolds with contact and almost contact structures was initiated by Boothby and Wong in 1958 rather from a topological point of view. Sasaki and Hatakeyama reinvestigated them using tensor calculus in 1961. Tanno [1] classified the connected almost contact metric manifolds whose automorphism groups possess the maximum dimension. For such a manifold, the sectional curvature of plane sections containing ξ is a constant, say c. He showed that they can be divided into three classes: (i) homogeneous normal contact Riemannian manifolds with c > 0, (ii) global Riemannian products of a line or a circle with a Kähler manifold of constant holomorphic sectional curvature if c = 0, and (iii) a warped product space if c < 0. It is known that the manifolds of class (i) are characterized by admitting a Sasakian structure. Kenmotsu [2] characterized the differential geometric properties of the manifolds of class (iii); so the structure obtained is now known as a Kenmotsu structure. In general, these structures are not Sasakian [2]. On the other hand, in 1970 Pokhariyal and Mishra [3] defined a tensor field W* on an n-dimensional Riemannian manifold as W*(X, Y)Z = R(X, Y)Z − (1/(2(n − 1)))[S(Y, Z)X − S(X, Z)Y + g(Y, Z)QX − g(X, Z)QY], where S is the Ricci tensor and Q is the Ricci operator given by S(X, Y) = g(QX, Y). Such a tensor field W* is known as the m-projective curvature tensor.
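As a sanity check on the m-projective tensor (using the 1/(2(n−1)) convention above; on a (2n+1)-dimensional manifold this coefficient is written 1/(4n)), the sketch below verifies numerically that a constant-curvature model — where R(X, Y)Z = c[g(Y, Z)X − g(X, Z)Y] and S = c(n−1)g — has vanishing W*. The setup and names are illustrative, not taken from the paper:

```python
n = 5      # dimension of the model Riemannian manifold
c = 0.7    # constant sectional curvature

def g(X, Y):                       # orthonormal frame: metric = dot product
    return sum(x * y for x, y in zip(X, Y))

def scale(a, X):
    return [a * x for x in X]

def sub(X, Y):
    return [x - y for x, y in zip(X, Y)]

def R(X, Y, Z):
    # constant-curvature model: R(X,Y)Z = c[g(Y,Z)X - g(X,Z)Y]
    return scale(c, sub(scale(g(Y, Z), X), scale(g(X, Z), Y)))

def S(X, Y):
    return c * (n - 1) * g(X, Y)   # Ricci tensor of the model

def Q(X):
    return scale(c * (n - 1), X)   # Ricci operator, S(X,Y) = g(QX,Y)

def W(X, Y, Z):
    # m-projective tensor: W* = R - [S(Y,Z)X - S(X,Z)Y + g(Y,Z)QX - g(X,Z)QY]/(2(n-1))
    corr = [S(Y, Z) * x - S(X, Z) * y + g(Y, Z) * qx - g(X, Z) * qy
            for x, y, qx, qy in zip(X, Y, Q(X), Q(Y))]
    return sub(R(X, Y, Z), scale(1.0 / (2 * (n - 1)), corr))

X = [1.0, 2.0, 0.0, -1.0, 3.0]
Y = [0.5, -1.0, 2.0, 0.0, 1.0]
Z = [2.0, 0.0, 1.0, 1.0, -1.0]
print(max(abs(w) for w in W(X, Y, Z)))  # ~0: constant curvature => m-projectively flat
```

Algebraically, the correction term collapses to 2c(n−1)[g(Y, Z)X − g(X, Z)Y]/(2(n−1)), which cancels R exactly, so spaces of constant curvature are m-projectively flat in this convention.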
Later, Ojha [4] defined and studied the properties of the m-projective curvature tensor in Sasakian and Kähler manifolds. He also showed that it bridges the gap between the conformal curvature tensor, conharmonic curvature tensor, and concircular curvature tensor on one side and the H-projective curvature tensor on the other. Recently the m-projective curvature tensor has been studied by Chaubey and Ojha [5], Singh et al. [6], Singh [7], and many others. Motivated by the above studies, in the present paper, we study flatness and symmetry properties of N(κ)-contact metric manifolds regarding the m-projective curvature tensor. The present paper is organized as follows. In this paper, we study the m-projective curvature tensor of N(κ)-contact metric manifolds. In Section 2, some preliminary results are recalled. In Section 3, we study m-projectively semisymmetric N(κ)-contact metric manifolds. Section 4 deals with m-projectively flat N(κ)-contact metric manifolds. -m-projectively flat N(κ)-contact metric manifolds are studied in Section 5, where a necessary and sufficient condition is obtained for an N(κ)-contact metric manifold to be -m-projectively flat. In Section 6, m-projectively recurrent N(κ)-contact metric manifolds are studied. Section 7 is devoted to the study of N(κ)-contact metric manifolds satisfying . The last section deals with N(κ)-contact metric manifolds satisfying . 2. Contact Metric Manifolds An odd dimensional differentiable manifold M is said to admit an almost contact structure if there exist a tensor field φ of type (1, 1), a vector field ξ, and a 1-form η satisfying φ² = −I + η ⊗ ξ, η(ξ) = 1. An almost contact metric structure is said to be normal if the induced almost complex structure J on the product manifold M × ℝ defined by J(X, f d/dt) = (φX − fξ, η(X) d/dt) is integrable, where X is tangent to M, t is the coordinate of ℝ, and f is a smooth function on M × ℝ. Let g be a Riemannian metric compatible with the almost contact structure (φ, ξ, η), that is, g(φX, φY) = g(X, Y) − η(X)η(Y). Then, M becomes an almost contact metric manifold equipped with an almost contact metric structure (φ, ξ, η, g).
From (2) and (7), it can be easily seen that g(X, φY) = −g(φX, Y) and g(X, ξ) = η(X) for all vector fields X and Y. An almost contact metric structure becomes a contact metric structure if g(X, φY) = dη(X, Y) for all vector fields X and Y. The 1-form η is then a contact form, and ξ is its characteristic vector field. We define a (1, 1)-tensor field h by h = (1/2)£_ξφ, where £ denotes the Lie-differentiation. Then, h is symmetric and satisfies hφ = −φh. We have Tr h = Tr φh = 0 and hξ = 0. Also, ∇_X ξ = −φX − φhX holds in a contact metric manifold. A normal contact metric manifold is a Sasakian manifold. An almost contact metric manifold is Sasakian if and only if (∇_X φ)Y = g(X, Y)ξ − η(Y)X for all vector fields X and Y, where ∇ is the Levi-Civita connection of the Riemannian metric g. A contact metric manifold for which ξ is a Killing vector is said to be a K-contact manifold. A Sasakian manifold is K-contact, but the converse need not be true. However, a 3-dimensional K-contact manifold is Sasakian [8]. It is well known that the tangent sphere bundle of a flat Riemannian manifold admits a contact metric structure satisfying R(X, Y)ξ = 0 [9]. On the other hand, on a Sasakian manifold the following holds: R(X, Y)ξ = η(Y)X − η(X)Y. As a generalization of both R(X, Y)ξ = 0 and the Sasakian case, Blair et al. [11] considered the (κ, μ)-nullity condition on a contact metric manifold and gave several reasons for studying it. The (κ, μ)-nullity distribution N(κ, μ) ([10, 11]) of a contact metric manifold M is defined by N_p(κ, μ) = {Z ∈ T_pM : R(X, Y)Z = κ[g(Y, Z)X − g(X, Z)Y] + μ[g(Y, Z)hX − g(X, Z)hY]} for all X, Y ∈ T_pM, where (κ, μ) ∈ ℝ². A contact metric manifold with ξ ∈ N(κ, μ) is called a (κ, μ)-manifold. In particular, on a (κ, μ)-manifold, we have R(X, Y)ξ = κ[η(Y)X − η(X)Y] + μ[η(Y)hX − η(X)hY]. On a (κ, μ)-manifold, κ ≤ 1. If κ = 1, the structure is Sasakian (h = 0 and μ is indeterminate), and if κ < 1, the (κ, μ)-nullity condition determines the curvature of M completely [11]. In fact, for a (κ, μ)-manifold, the conditions of being a Sasakian manifold, a K-contact manifold, κ = 1, and h = 0 are all equivalent. In a (κ, μ)-manifold, the following relations hold ([11, 12]), where S is the Ricci tensor of type (0, 2), Q is the Ricci operator, that is, S(X, Y) = g(QX, Y), and r is the scalar curvature of the manifold. From (11), it follows that Also in a (κ, μ)-manifold, holds. The κ-nullity distribution N(κ) of a Riemannian manifold M [13] is defined by N(κ) = {Z ∈ T_pM : R(X, Y)Z = κ[g(Y, Z)X − g(X, Z)Y]} for all X, Y ∈ T_pM, κ being a constant.
If the characteristic vector field ξ belongs to N(κ), then we call the contact metric manifold an N(κ)-contact metric manifold [14]. If κ = 1, then the N(κ)-contact metric manifold is Sasakian, and if κ = 0, then the N(κ)-contact metric manifold is locally isometric to the product E^{n+1}(0) × S^n(4) for n > 1 and flat for n = 1. If κ < 1, the scalar curvature is r = 2n(2n − 2 + κ). If μ = 0, then a (κ, μ)-contact metric manifold reduces to an N(κ)-contact metric manifold. In [9], N(κ)-contact metric manifolds were studied in some detail. In N(κ)-contact metric manifolds the following relations hold ([15, 16]): For a (2n + 1)-dimensional almost contact metric manifold, the m-projective curvature tensor is given by [3] W*(X, Y)Z = R(X, Y)Z − (1/(4n))[S(Y, Z)X − S(X, Z)Y + g(Y, Z)QX − g(X, Z)QY] for arbitrary vector fields X, Y, and Z, where S is the Ricci tensor of type (0, 2) and Q is the Ricci operator, that is, S(X, Y) = g(QX, Y). The m-projective curvature tensor for an N(κ)-contact metric manifold is given by 3. M-Projectively Semisymmetric N(κ)-Contact Metric Manifolds Definition 1. A (2n + 1)-dimensional N(κ)-contact metric manifold is said to be m-projectively semisymmetric [17] if it satisfies R · W* = 0, where R is the Riemannian curvature tensor and W* is the m-projective curvature tensor of the manifold. Theorem 2. An m-projectively semisymmetric N(κ)-contact metric manifold is an Einstein manifold. Proof. Suppose that an N(κ)-contact metric manifold is m-projectively semisymmetric. Then, we have The above equation can be written as follows: In view of (22), the above equation reduces to Now, taking the inner product of the above equation with ξ and using (3) and (9), we get which on using (30), (32), (34), and (35), gives Putting in the above equation and taking summation over i, 1 ≤ i ≤ 2n + 1, we get which shows that M is an Einstein manifold. This completes the proof. 4. M-Projectively Flat N(κ)-Contact Metric Manifolds Theorem 3. An m-projectively flat N(κ)-contact metric manifold is an Einstein manifold. Proof. Let W* = 0. Then, from (30), we have Let {e_i} be an orthonormal basis of the tangent space at any point. Putting in the above equation and summing over i, 1 ≤ i ≤ 2n + 1, we get which shows that M is an Einstein manifold. This completes the proof. 5.
ξ-M-Projectively Flat $N(k)$-Contact Metric Manifolds

Definition 4. A $(2n+1)$-dimensional $N(k)$-contact metric manifold is said to be ξ-m-projectively flat [18] if $W(X,Y)\xi = 0$ for all vector fields $X$ and $Y$.

Theorem 5. A $(2n+1)$-dimensional $N(k)$-contact metric manifold is ξ-m-projectively flat if and only if it is an η-Einstein manifold.

Proof. Let $W(X,Y)\xi = 0$. Then, in view of (30), we have an identity which, by virtue of (9), (23), and (28), reduces further; putting $Y = \xi$ and then taking the inner product of the resulting equation with an arbitrary vector field, we obtain a relation which shows that the $N(k)$-contact metric manifold is an η-Einstein manifold. Conversely, suppose that (47) is satisfied. Then, by virtue of (46) and (31), we have $W(X,Y)\xi = 0$. This completes the proof.

6. M-Projectively Recurrent $N(k)$-Contact Metric Manifolds

Definition 6. A nonflat Riemannian manifold is said to be m-projectively recurrent if its m-projective curvature tensor satisfies the condition $\nabla W = A \otimes W$, where $A$ is a nonzero 1-form.

Theorem 7. If an $N(k)$-contact metric manifold is m-projectively recurrent, then it is an η-Einstein manifold.

Proof. We define a function $f^2 = g(W,W)$ on $M$, where the metric is extended to the inner product between the tensor fields. Then, we have $f(Yf) = f^2 A(Y)$. This can be written as $Yf = f A(Y)$ (for $f \ne 0$). From the above equation, we have $X(Yf) - Y(Xf) - [X,Y]f = 2f\,dA(X,Y)$. Since the left-hand side of the above equation is identically zero and $f \ne 0$ on $M$, then $dA(X,Y) = 0$; that is, the 1-form $A$ is closed. Now from $\nabla W = A \otimes W$, we obtain a second-order relation, and in view of (52) and (54), it reduces to $W = 0$. Thus by virtue of Theorem 3, the above equation shows that $M$ is an η-Einstein manifold. This completes the proof.

7. $N(k)$-Contact Metric Manifolds Satisfying a Curvature Condition

Theorem 8. If an $N(k)$-contact metric manifold satisfies the stated curvature condition, then the Ricci tensor takes the Einstein form.

Proof. Suppose the condition holds. In this case, we can write it out explicitly; in view of (34), the above equation reduces further. Now, putting $Y = \xi$ in the above equation and using (3), (9), and (23), we get the required relation. This completes the proof.

8. $N(k)$-Contact Metric Manifolds Satisfying a Second Curvature Condition

Theorem 9. On an $N(k)$-contact metric manifold, if the second condition holds, then the Ricci tensor takes the Einstein form.

Proof.
Suppose that the condition holds; then it can be written out explicitly, which, on using (33), takes the form of a tensor identity. Taking the inner product of the above equation with $\xi$, we get a scalar relation. Now using (22), (28), and (29) in the above equation, putting $X = Y = e_i$ and summing over $i$, $1 \le i \le 2n+1$, we get the required relation. This completes the proof.

References

1. S. Tanno, “The automorphism groups of almost contact Riemannian manifolds,” The Tohoku Mathematical Journal, vol. 21, pp. 21–38, 1969.
2. K. Kenmotsu, “A class of almost contact Riemannian manifolds,” The Tohoku Mathematical Journal, vol. 24, pp. 93–103, 1972.
3. G. P. Pokhariyal and R. S. Mishra, “Curvature tensors' and their relativistics significance,” Yokohama Mathematical Journal, vol. 18, pp. 105–108, 1970.
4. R. H. Ojha, “M-projectively flat Sasakian manifolds,” Indian Journal of Pure and Applied Mathematics, vol. 17, no. 4, pp. 481–484, 1986.
5. S. K. Chaubey and R. H. Ojha, “On the m-projective curvature tensor of a Kenmotsu manifold,” Differential Geometry, vol. 12, pp. 52–60, 2010.
6. R. N. Singh, S. K. Pandey, and G. Pandey, “On a type of Kenmotsu manifold,” Bulletin of Mathematical Analysis and Applications, vol. 4, no. 1, pp. 117–132, 2012.
7. J. P. Singh, “On m-projective recurrent Riemannian manifold,” International Journal of Mathematical Analysis, vol. 6, no. 24, pp. 1173–1178, 2012.
8. J.-B. Jun, I. B. Kim, and U. K. Kim, “On 3-dimensional almost contact metric manifolds,” Kyungpook Mathematical Journal, vol. 34, no. 2, pp. 293–301, 1994.
9. C. Baikoussis, D. E. Blair, and T. Koufogiorgos, “A decomposition of the curvature tensor of a contact manifold satisfying $R(X,Y)\xi = \kappa[\eta(Y)X - \eta(X)Y]$,” Mathematics Technical Report, University of Ioannina, 1992.
10. B. J.
Papantoniou, “Contact Riemannian manifolds satisfying $R(\xi, X) \cdot R = 0$ and $\xi \in (\kappa, \mu)$-nullity distribution,” Yokohama Mathematical Journal, vol. 40, no. 2, pp. 149–161, 1993.
11. D. E. Blair, T. Koufogiorgos, and B. J. Papantoniou, “Contact metric manifolds satisfying a nullity condition,” Israel Journal of Mathematics, vol. 91, no. 1–3, pp. 189–214, 1995.
12. E. Boeckx, “A full classification of contact metric $(\kappa, \mu)$-spaces,” Illinois Journal of Mathematics, vol. 44, no. 1, pp. 212–219, 2000.
13. S. Tanno, “Ricci curvatures of contact Riemannian manifolds,” The Tohoku Mathematical Journal, vol. 40, no. 3, pp. 441–448, 1988.
14. D. E. Blair, J.-S. Kim, and M. M. Tripathi, “On the concircular curvature tensor of a contact metric manifold,” Journal of the Korean Mathematical Society, vol. 42, no. 5, pp. 883–892, 2005.
15. D. E. Blair, T. Koufogiorgos, and R. Sharma, “A classification of 3-dimensional contact metric manifolds with $Q\phi = \phi Q$,” Kodai Mathematical Journal, vol. 13, no. 3, pp. 391–401, 1990.
16. D. E. Blair and H. Chen, “A classification of 3-dimensional contact metric manifolds with $Q\phi = \phi Q$. II,” Bulletin of the Institute of Mathematics, vol. 20, no. 4, pp. 379–383, 1992.
17. U. C. De and A. Sarkar, “On a type of P-Sasakian manifolds,” Mathematical Reports, vol. 11(61), no. 2, pp. 139–144, 2009.
18. G. Zhen, J. L. Cabrerizo, L. M. Fernández, and M.
Fernández, “On ξ-conformally flat contact metric manifolds,” Indian Journal of Pure and Applied Mathematics, vol. 28, no. 6, pp. 725–734, 1997.
{"url":"http://www.hindawi.com/journals/isrn.geometry/2013/932564/","timestamp":"2014-04-16T05:28:16Z","content_type":null,"content_length":"515176","record_id":"<urn:uuid:5bf44fd8-fe2a-4011-a5d5-22e7c8aee834>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Parents
This is the Mathematician Parent series – a series of interviews with mathematicians who have kids or grandkids.
• Libby Often - mom of two boys (ages 10 & 12), math teacher and EdD student
• Jennifer Wilson - mom of two girls (ages 6 & 9), math teacher and Texas Instruments instructor
• David Wees - dad of one 5 year old son, math teacher and blogger
• Marilyn Curtain-Phillips - mom of two grownups, author of math books
• John Golden - dad of two (ages 11 & 12), teacher of math educators and blogger
• Caroline Mukisa - mom of four (ages 2-11), math teacher and blogger
• David Chandler - dad of two grownup daughters, physics and math teacher and online math business owner
{"url":"http://mathfour.com/math-parents","timestamp":"2014-04-19T19:45:49Z","content_type":null,"content_length":"16342","record_id":"<urn:uuid:72fe0be7-16c1-4721-93a6-d7012364329f>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
"Convex Optimization" over varying-dimension vector space?

For all instances of Convex Optimization I know of, the dimension of the vector space is defined beforehand. Is there an area of mathematics that deals with "convex optimization" over a varying-dimension vector space? I.e. a problem like:

min: k
s.t. x_1 + ... + x_k = 10; 1 <= x_i <= 2

abstract-algebra oc.optimization-control

5 Answers
The algebraic structure of 1 down this "noncommutative geometry" is very rigid (much more than the commutative case), so you can say a lot about how these polynomials must look. This in turn tells you things about which vote problems can be cast in such a uniform way, and perhaps why so many of them are semidefinite programs. I don't know enough to even call myself a novice in this area, let alone an expert, so I'm sure I am not doing this work justice. I was hoping someone more qualified might come along and give a better explanation, but no one has mentioned it so far and it may be what you are looking for, so I gave it a shot. add comment A lot of convex optimization theory carries over to infinite dimensions. The $\mathbb{R}^{\infty}$ space is the injective limit of $\mathbb{R}^1 \subset \mathbb{R}^2 \subset \cdots$ and its elements are finite sequences of real numbers, so you should be able to investigate finite-dimensional questions of varying dimension by working with such a space. But I can't say I've seen up vote 0 any applications along these specific lines. If you could go into more detail about what you are trying to accomplish, people would be able to give more pertinent answers. down vote add comment If $x_j$ are integers, maybe you could pose and solve this as a knapsack problem? You can think of $x_1, x_2, \ldots, x_k$ (for some large enough $k$) as all the possible items (i.e. numbers) that you want to place into your knapsack and then solve $$ \min &\sum_{j=1} up vote 0 ^k I_j\\ \text{s.t.}&\sum_{j=1}^k x_j I_j = 10 $$ where $I_j=1$ if $x_j$ is put in your knapsack, otherwise $I_j=0$. down vote If $x_j$ are not integers, then maybe there is a nice way to extend the knapsack approach? add comment Dimension minimization problems are notoriously hard (a standard example is: given a graph $G,$ what is the minimal dimension Euclidean space where the graph can be embedded with unit edge lengths? 
(minimum is finite since every graph with N vertices can be realized with unit edge lengths in dimension $N(N-1)/2 -1.$)) Or, more generally, finding the minimal realization up vote dimension of a metric space. All of these questions are NP-hard. There is a large literature on the subject (look under Maryam Fazel, or Stephen Boyd), but most of the work is purely 0 down heuristic. Candes and Tao's work on compressed sensing is one of the few bright spots where rigorous results are known. add comment Not the answer you're looking for? Browse other questions tagged abstract-algebra oc.optimization-control or ask your own question.
{"url":"http://mathoverflow.net/questions/34213/convex-optimization-over-varying-dimension-vector-space","timestamp":"2014-04-20T01:45:10Z","content_type":null,"content_length":"66278","record_id":"<urn:uuid:2ec09e03-3b0b-4b07-b573-4450dfc3b409>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiplication Issue when Variables as Double Datatype

Ranch Hand (Joined: Jul 07, 2009, Posts: 50):

    package test1;

    public class TestApp {
        public static void main(String[] args) {
            double a = 0.43;
            double b = 0.1;
            System.out.println(a * b);
        }
    }

The output shows 0.043000000000000003. But 0.43 * 0.1 is 0.043. May I know why the value has a 3 at the end in the output (0.043000000000000003)? Thanks in advance.

N.Senthil Kumar (Ranch Hand, Joined: Oct 27, 2009, Posts: 92):
Use float, it will work.

(Ranch Hand, Joined: Mar 13, 2009, Posts: 492):
Do you need that much precision? A 3 that far down seems pretty negligible to me.
-Hunter
I like... "If the facts don't fit the theory, get new facts" --Albert Einstein

(Ranch Hand, Joined: Oct 22, 2009, Posts: 237):
N.Senthil Kumar wrote: May I know why the value has a 3 at the end in the output (0.043000000000000003)?
Not every decimal number (which humans prefer) can be exactly converted to a binary number (which computers prefer). So conversions back and forth between number systems tend to produce small errors. Also the number of bits used to represent numbers internally in the computer is fixed. As you probably know, a float uses 32 bits and a double 64. This means that when arithmetical operations are performed, the result may not be exact. So operations on numbers in fixed-size representation also tend to introduce small errors. The result is what you see in your example. The combination of conversion errors and fixed-size arithmetic errors show up as a "ripple" in the result. These kinds of errors are fundamental to digital computers so they will always be present. One way to handle them is to round the result of a calculation. As you can see, the error is far smaller than the 2 digits of precision you've used in the input numbers. If you round the output to the same precision, the error disappears and the result is "correct".
Also note that programmers must watch out so the formulas they use don't magnify those small, fundamental and unavoidable errors. Such formulas are called ill-conditioned, and there's a whole branch of math called numerical mathematics which deals with that.

(Joined: Mar 22, 2005, Posts: 39547):
Please UseAMeaningfulSubjectLine. You can edit the subject using the
As to your question, see #20 in the http://faq.javaranch.com/java/JavaBeginnersFaq.
Ping & DNS - updated with new look and Ping home screen widget

(Ranch Hand, Joined: Jul 07, 2009, Posts: 50):
Friends, thanks for the reply.

subject: Multiplication Issue when Variables as Double Datatype
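The explanation in this thread is not Java-specific: any language using IEEE-754 doubles shows the same artifact, and the rounding fix works the same way. A small Python sketch (illustrative, not from the thread) reproducing the effect and two standard remedies:

```python
a, b = 0.43, 0.1
raw = a * b
print(raw)            # 0.043000000000000003, the same digits the Java program printed
print(round(raw, 3))  # 0.043 after rounding to the precision of the inputs

# For exact decimal arithmetic, use a decimal type instead of binary floats.
from decimal import Decimal
exact = Decimal("0.43") * Decimal("0.1")
print(exact)          # 0.043
```

Java's analogue of the last approach is java.math.BigDecimal, which represents 0.43 and 0.1 exactly and multiplies them without the binary-conversion "ripple".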
{"url":"http://www.coderanch.com/t/469611/java/java/Multiplication-Variables-Double-Datatype","timestamp":"2014-04-19T07:24:58Z","content_type":null,"content_length":"31659","record_id":"<urn:uuid:8bdcbf67-412e-4677-b2e0-e9a6782f1be3>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple question about C of a solution

Suppose that the molar concentration of a (2K+, (S2O4)2-) salt solution is C = 1 mol/L. We have C((S2O4)2-) = C = 1 mol/L, but does C(K+) = 1 mol/L, or 2C = 2 mol/L (since there are 2 K+)? This question might be silly, but it really confused me and I can't come up with the correct answer.
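The resolution is stoichiometric: each formula unit of the salt releases two K+ ions, so [K+] = 2C = 2 mol/L while [S2O4^2-] = C = 1 mol/L. A tiny Python sketch of that bookkeeping (the helper function is mine, added for illustration):

```python
def ion_concentrations(c_salt, stoichiometry):
    """Concentration of each ion after complete dissociation of a salt.

    stoichiometry maps each ion name to the number of ions released
    per formula unit of the dissolved salt.
    """
    return {ion: n * c_salt for ion, n in stoichiometry.items()}

# The salt dissociates as: 2 K+ + S2O4^2-
conc = ion_concentrations(1.0, {"K+": 2, "S2O4^2-": 1})
print(conc)  # {'K+': 2.0, 'S2O4^2-': 1.0}
```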
{"url":"http://www.physicsforums.com/showthread.php?s=e49d27fbf4e036357e1bcd0950eb9cdb&p=4551235","timestamp":"2014-04-19T22:45:21Z","content_type":null,"content_length":"22378","record_id":"<urn:uuid:f4e8ec93-ae35-4f16-949b-034b43dacd24>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of imaginary-unit

The powers of $i$ repeat in a cyclic pattern: $\ldots,\ i^{-3} = i,\ i^{-2} = -1,\ i^{-1} = -i,\ i^0 = 1,\ i^1 = i,\ i^2 = -1,\ i^3 = -i,\ i^4 = 1,\ i^5 = i,\ i^6 = -1,\ \ldots$

In mathematics, physics, and engineering, the imaginary unit is denoted by $i$ (or by the Latin $j$ or the Greek iota; see the alternative notations below). It allows the real number system to be extended to the complex number system. Its precise definition is dependent upon the particular method of extension. The primary motivation for this extension is the fact that not every polynomial equation with real coefficients $f(x)=0$ has a solution in the real numbers. In particular, the equation $x^2+1=0$ has no real solution (see "Definition", below). However, if we allow complex numbers as solutions, then this equation, and indeed every polynomial equation $f(x)=0$, does have a solution. (See algebraic closure and fundamental theorem of algebra.) For a history of the imaginary unit, see the history of complex numbers. The imaginary unit is often loosely referred to as the "square root of negative one" or the "square root of minus one", but see below for difficulties that may arise from a naïve use of this idea. By definition, the imaginary unit is one solution (of two) of the quadratic equation $x^2 + 1 = 0$ or equivalently $x^2 = -1.$ Since there is no real number that produces a negative real number when squared, we imagine such a number and assign to it the symbol $i$. It is important to realize, though, that $i$ is as well-defined a mathematical construct as the real numbers, despite its formal name and being less than immediately intuitive. Real number operations can be extended to imaginary and complex numbers by treating $i$ as an unknown quantity while manipulating an expression, and then using the definition to replace any occurrence of $i^2$ with $-1$.
Higher integral powers of $i$ can also be replaced with $-i$, $1$, $i$, or $-1$:

$i^3 = i^2 i = (-1) i = -i,$
$i^4 = i^3 i = (-i) i = -(i^2) = -(-1) = 1,$
$i^5 = i^4 i = (1) i = i.$

$i$ and $-i$

Being a second order polynomial with no multiple real root, the above equation has two distinct solutions that are equally valid and that happen to be additive and multiplicative inverses of each other. More precisely, once a solution $i$ of the equation has been fixed, the value $-i$ (which is not equal to $i$) is also a solution. Since the equation is the only definition of $i$, it appears that the definition is ambiguous (more precisely, not well-defined). However, no ambiguity results as long as one of the solutions is chosen and fixed as the "positive $i$". This is because, although $-i$ and $i$ are not quantitatively equivalent (they are negatives of each other), there is no qualitative difference between $i$ and $-i$ (which cannot be said for $-1$ and $+1$). Both imaginary numbers have equal claim to being the number whose square is $-1$. If all mathematical textbooks and published literature referring to imaginary or complex numbers were rewritten with $-i$ replacing every occurrence of $+i$ (and therefore every occurrence of $-i$ replaced by $-(-i) = +i$), all facts and theorems would continue to be equivalently valid. The distinction between the two roots $x$ of $x^2 + 1 = 0$ with one of them as "positive" is purely a notational relic; neither root can be said to be more primary or fundamental than the other. The issue can be a subtle one. The most precise explanation is to say that although the complex field, defined as $\mathbb{R}[X]/(X^2 + 1)$ (see complex number), is unique up to isomorphism, it is not unique up to a unique isomorphism — there are exactly 2 field automorphisms of $\mathbb{R}[X]/(X^2 + 1)$, the identity and the automorphism sending $X$ to $-X$.
(These are not the only field automorphisms of $\mathbb{C}$, but are the only field automorphisms of $\mathbb{C}$ which keep each real number fixed.) See complex number, complex conjugation, field automorphism, and Galois group. A similar issue arises if the complex numbers are interpreted as 2 × 2 real matrices (see complex number), because then both $X = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ and $X = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ are solutions to the matrix equation $X^2 = -I.$ In this case, the ambiguity results from the geometric choice of which "direction" around the unit circle is "positive" rotation. A more precise explanation is to say that the automorphism group of the special orthogonal group SO(2, R) has exactly 2 elements — the identity and the automorphism which exchanges "CW" (clockwise) and "CCW" (counter-clockwise) rotations. See orthogonal group. All these ambiguities can be solved by adopting a more rigorous definition of complex number, and explicitly choosing one of the solutions to the equation to be the imaginary unit. For example, the ordered pair (0, 1), in the usual construction of the complex numbers with two-dimensional vectors. Proper use The imaginary unit is sometimes written $\sqrt{-1}$ in advanced mathematics contexts (as well as in less advanced popular texts); however, great care needs to be taken when manipulating formulas involving radicals. The notation is reserved either for the principal square root function, which is defined for real $x \ge 0$, or for the principal branch of the complex square root function. Attempting to apply the calculation rules of the principal (real) square root function to manipulate the principal branch of the complex square root function will produce false results: $-1 = i \cdot i = \sqrt{-1} \cdot \sqrt{-1} = \sqrt{(-1) \cdot (-1)} = \sqrt{1} = 1$ (incorrect).
The calculation rule $\sqrt{a} \cdot \sqrt{b} = \sqrt{a \cdot b}$ is only valid for real, non-negative values of $a$ and $b$. For a more thorough discussion of this phenomenon, see square root and branch. To avoid making such mistakes when manipulating complex numbers, a strategy is never to use a negative number under a square root sign. For instance, rather than writing expressions like $\sqrt{-7}$, one should write $i\sqrt{7}$ instead. That is the use for which the imaginary unit was created. Square root of the imaginary unit One might assume that a further set of imaginary numbers need to be invented to account for the square root of $i$. However this is not necessary as it can be expressed (albeit rather poorly - see above) as either of two complex numbers: $\pm \sqrt{i} = \pm \frac{\sqrt{2}}{2} (1 + i).$ This can be shown to be a valid solution: $\left(\pm \frac{\sqrt{2}}{2} (1 + i)\right)^2 = \left(\pm \frac{\sqrt{2}}{2}\right)^2 (1 + i)^2 = \frac{1}{2} (1 + i)(1 + i) = \frac{1}{2} (1 + 2i + i^2) = \frac{1}{2} (1 + 2i - 1) = \frac{1}{2} (2i) = \frac{2i}{2} = i.$ Powers of $i$ The powers of $i$ repeat in a cycle: $i^{-3} = i,$ $i^{-2} = -1,$ $i^{-1} = -i,$ $i^0 = 1,$ $i^1 = i,$ $i^2 = -1,$ $i^3 = -i,$ $i^4 = 1.$ This can be expressed with the following pattern where n is any integer: $i^{4n} = 1,$ $i^{4n+1} = i,$ $i^{4n+2} = -1,$ $i^{4n+3} = -i.$ This leads to the conclusion that $i^n = i^{n \bmod 4},$ where $n \bmod 4$ represents arithmetic modulo 4.
i and Euler's formula Euler's formula is $e^{ix} = \cos(x) + i\sin(x)$, where x is a real number. The formula can also be analytically extended for complex x. Substituting $x = \pi$ yields $e^{i\pi} = \cos(\pi) + i\sin(\pi) = -1 + i \cdot 0$ and one arrives at the elegant Euler's identity: $e^{i\pi} + 1 = 0.$ This remarkably simple equation relates five significant mathematical quantities (0, 1, π, e, and i) by means of the basic operations of addition, multiplication, and exponentiation. Substitution of $x = \pi/2 - 2N\pi$, where N is an arbitrary integer, produces $e^{i(\pi/2 - 2N\pi)} = i.$ Or, raising each side to the power $i$, $e^{i \cdot i(\pi/2 - 2N\pi)} = i^i$, that is, $e^{-(\pi/2 - 2N\pi)} = i^i$, which shows that $i^i$ has an infinite number of elements in the form of $i^i = e^{-\pi/2 + 2\pi N}$ where N is any integer. This value, although real, is not uniquely determined. The reason is that the complex logarithm is multiply-valued. Operations with i Many mathematical operations that can be carried out with real numbers can also be carried out with $i$, such as exponentiation, roots, logarithms and trigonometric functions. A number raised to the $ni$ power is: $x^{ni} = \cos(\ln(x^n)) + i \sin(\ln(x^n)).$ The $ni$th root of a number is: $\sqrt[ni]{x} = \cos(\ln(\sqrt[n]{x})) - i \sin(\ln(\sqrt[n]{x})).$ The imaginary-base logarithm of a number is: $\log_i(x) = \frac{2 \ln(x)}{i\pi}.$ As with any logarithm, the log base i is not uniquely defined.
The cosine of $i$ is a real number: $\cos(i) = \cosh(1) = \frac{e + 1/e}{2} = \frac{e^2 + 1}{2e} = 1.54308064.$ And the sine of $i$ is imaginary: $\sin(i) = \sinh(1)\,i = \frac{e - 1/e}{2}\,i = \frac{e^2 - 1}{2e}\,i = 1.17520119\,i.$ Alternative notations □ In electrical engineering and related fields, the imaginary unit is often written as $j$ to avoid confusion with electrical current as a function of time, traditionally denoted by $i(t)$ or just $i$. The Python programming language also uses j to denote the imaginary unit, while in Matlab, both notations i and j are associated with the imaginary unit. □ Some extra care needs to be taken in certain textbooks which define j = −i, in particular for travelling waves (e.g. a right travelling plane wave in the x direction $e^{i(kx - \omega t)} = e^{j(\omega t - kx)}$). □ Some texts use the Greek letter iota (ι) to write the imaginary unit to avoid confusion. For example: Biquaternion. See also □ Paul J. Nahin, An Imaginary Tale, The Story of √-1, Princeton University Press, 1998 External links
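The numerical claims in this entry are easy to sanity-check with Python's built-in complex support. The following sketch (an illustration added here, not part of the original entry) verifies the power cycle, the principal square root of i, and the principal value of i**i:

```python
import cmath
import math

i = 1j  # Python's notation for the imaginary unit

# Powers of i cycle with period 4.
assert i**2 == -1 and i**3 == -i and i**4 == 1

# The principal square root of i is (1 + i) * sqrt(2) / 2.
root = cmath.sqrt(i)
assert abs(root - (1 + 1j) * math.sqrt(2) / 2) < 1e-12

# The principal value of i**i is the real number e**(-pi/2).
val = i**i
assert abs(val - math.exp(-math.pi / 2)) < 1e-12
print(val.real)  # approximately 0.2079 (= e**(-pi/2))
```

Note that cmath and the ** operator always return the principal branch; the other values of i**i (e**(-pi/2 + 2*pi*N) for integer N) come from the other branches of the complex logarithm, exactly as the entry describes.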
{"url":"http://www.reference.com/browse/imaginary-unit","timestamp":"2014-04-16T23:37:04Z","content_type":null,"content_length":"90826","record_id":"<urn:uuid:2290968e-1498-4c24-8749-ce38337a1ab4>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
Waves Glossary
Wave concepts and terminology for students and teachers.
By Margaret Olsen, SouthEast COSEE Education Specialist
You can download the glossary as a Microsoft Word document, an Open Document, or an Adobe PDF
The repeating and periodic disturbance that travels through a medium (e.g. water) from one location to another location.
Wave characteristics
Wave crest
The highest part of a wave.
Wave trough
The lowest part of a wave.
Wave height
The vertical distance between the highest (crest) and lowest (trough) parts of a wave.
The distance from a certain point on one wave to the same point on the next wave (e.g. distance between two consecutive wave crests or between two consecutive wave troughs).
Wave amplitude
One half the distance from the crest to the trough. Wave amplitude is a more technical term for wave height and is used in engineering technology.
Wave frequency
The number of waves passing a fixed point in a specified period of time. Frequency has units of waves per second or cycles per second. Another unit for frequency is the Hertz (abbreviated Hz) where 1 Hz is equivalent to 1 cycle per second.
Wave period
The time it takes for two successive crests (one wavelength) to pass a specified point.
Wave speed
The distance the wave travels divided by the time it takes to travel that distance. Wave speed is determined by dividing the wavelength by the wave period. In symbols c = λ / T, where c is the wave speed, λ (lambda) is the wavelength, and T is the period.
Wave Steepness
The ratio of height to wavelength. When wave steepness exceeds 1:7, breakers form. If a wave has a height of one foot and a length from crest to crest of 8 feet, then the ratio is 1:8 and this wave is not going to break. But if the height is 1 foot and the length decreases to 5 feet, then the ratio is 1:5 and this wave has now become so steep that the crest topples and the wave breaks.
Types of Ocean Waves
Capillary waves
Very small waves with wavelengths less than 1.7 cm or 0.68 inches. They are the first waves to form when the wind blows over the surface of the water and are created by the friction of wind and the surface tension of the water. These tiny little waves increase the surface area of the sea surface and if the wind continues to blow, the size of the wave will increase in size and become a wind wave.
Chop
Small waves causing the ocean surface to be rough.
Ripples
The ruffling of the water’s surface due to pressure variations of the wind on the water. This creates stress on the water and results in tiny short wavelength waves called ripples. Ripples are often called capillary waves. The motion of a ripple is governed by surface tension.
Standing Wave
Waves that move back and forth (oscillate) in a vertical position. They do not move forward but appear as crests and troughs in a fixed position. Standing waves are created when a wave strikes an obstruction head-on and then are reflected backwards in the direction they came from.
Swell
The smooth undulation (rising and falling of waves) of the ocean surface that forms as waves move away from the storm center where they are created. As waves move out and away from the storm center, they sort themselves out into groups of similar speeds and wavelengths. This produces the smooth undulating ocean surface called a swell. Swells may travel thousands of kilometers from the storm center until they strike shore.
Other terms
Beaufort Scale
A scale of wind velocity used for estimating the force or speed of winds. It is a numbered scale from 0 to 12 to describe wave size and sea conditions. The Beaufort Scale was developed by Rear Admiral Sir Francis Beaufort. 0 on the Beaufort scale represents the calmest of seas (the water is so smooth that it looks like glass).
A 12 on the Beaufort scale represents hurricane force winds.
Breakers
When a wave approaches shore, it touches bottom and the front of the wave slows down and the back overtakes the front. This forces the water into a peak that curves forward. This peak will eventually fall forward in a tumbling rush of foam and water called a breaker. Waves break on or near shore or over reefs or offshore sandbars. There are three types of breaking waves: plunging breakers, spilling breakers and collapsing breakers. Breakers may be one or a combination of these types.
Cat’s Paws
A light breeze that ruffles small areas of a water surface.
Fetch
The uninterrupted area or distance over which the wind blows (in the same direction). Fetch is important because the interrelationship between wind speed and duration, both functions of fetch, is predictive of wave conditions.
Orbital depth
The depth to which the orbital motion of the wave energy can be felt. This depth is equal to half of the wavelength. At the sea surface, orbital diameter is equal to wave height. As depth increases, less wave energy can be felt. The orbital depth is the depth where zero energy remains. For example, if a wave at the surface has a height of 4 meters and a wavelength of 45 m, then the depth where no motion from the wave exists is 45/2 or 22.5 meters.
Sea
A chaotic jumble of waves of many different sizes (wave heights, wavelengths, and periods).
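The formulas in this glossary are simple enough to compute directly. A short Python sketch of wave speed, the 1:7 steepness breaking criterion, and orbital depth (the function names are mine, not the glossary's):

```python
def wave_speed(wavelength, period):
    """Wave speed from the glossary's c = lambda / T."""
    return wavelength / period

def is_breaking(height, wavelength):
    """A wave breaks when its steepness (height : wavelength) exceeds 1:7."""
    return height / wavelength > 1 / 7

def orbital_depth(wavelength):
    """Depth below which essentially no wave motion is felt: lambda / 2."""
    return wavelength / 2

print(wave_speed(45, 15))   # 3.0 (units of length per second)
print(is_breaking(1, 8))    # False: a 1:8 wave is not steep enough to break
print(is_breaking(1, 5))    # True: 1:5 exceeds the 1:7 threshold
print(orbital_depth(45))    # 22.5, matching the glossary's 45 m example
```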
[SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places?

Fernando Perez fperez.net at gmail.com
Mon Jan 15 14:03:05 CST 2007

On 1/15/07, Dick Moores <rdm at rcblue.com> wrote:
> Of course! I forgot about integer division. Thanks, Vincent and Darren.
> But I still don't get the precision:
> ==========================
> # clnumTest3-c.py
> from __future__ import division
> import clnum as n
> n.set_default_precision(40)
> print repr(n.exp(n.log(5/23)*2/7))
> =========================
> gets mpf('0.6466073240654112295',17)
> How come?

Because you shouldn't use

from __future__ import division

in this case, since that will turn 5/23 into a plain float, with 64-bit accuracy (well, 53 bits for the mantissa, really).

In [3]: n.set_default_precision(50)

In [4]: n.exp(n.log(n.mpq(5,23)*n.mpq(2,7)))
Out[4]: mpf('0.06211180124223602484472049689440993788819875776397515527949',55)

or alternatively:

In [7]: n.exp(n.log(n.mpq('5/23')*n.mpq('2/7')))
Out[7]: mpf('0.06211180124223602484472049689440993788819875776397515527949',55)

clnum exposes true rationals, so you can use them for a computation such as this one. In this regard SAGE has the convenience that it preparses your input to apply on-the-fly conversions so that all objects are treated either as rationals or as extended-precision floats. This has a performance impact so it's not a good choice for everyday numerics, but it does save a lot of typing and makes it a bit more convenient for this kind of thing:

sage: r=RealField(200)
sage: r(5/3)**(r(2/7))

Which of the two approaches (full SAGE or clnum/mpfr inside regular python) suits your needs best is a question only you can answer.

ps - due to security problems, the public SAGE notebook is unfortunately down at the moment.

More information about the SciPy-user mailing list
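The same pitfall can be reproduced with nothing but the standard library: a true-division 5/23 is rounded to a 53-bit binary float before any extended-precision type ever sees it. A sketch using decimal and fractions (not clnum, but the principle — keep the value exact, as a true rational, until the high-precision step — is identical):

```python
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 50

# With `from __future__ import division` (or in Python 3), 5/23 is
# rounded to a 53-bit binary float BEFORE Decimal ever sees it:
lossy = Decimal(5 / 23)
# Doing the division at 50-digit precision instead:
exact = Decimal(5) / Decimal(23)
print(lossy)   # only ~17 digits are meaningful
print(exact)   # correct to the working precision

# Keeping ratios as true rationals until the last moment, as with
# clnum's mpq above, avoids the problem entirely:
r = Fraction(5, 23) * Fraction(2, 7)            # exactly 10/161
q = Decimal(r.numerator) / Decimal(r.denominator)
print(q.ln().exp())   # round-trips to ~50 digits, like the mpf above
```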
Highlights of the History of the Lambda-Calculus
Results 11 - 20 of 24

(2010) "This lecture presents the Church-Rosser Theorem (i.e., −→α,β is confluent)."

(2009) "– If you give me an algorithm to solve Π, I can check whether this algorithm really solves Π. – But, if you ask me to find an algorithm to solve Π, I may go on forever trying but without success. • But, this result was already found by Aristotle: Assume a proposition Φ. – If you give me a proof of Φ, I can check whether this proof really proves Φ. – But, if you ask me to find a proof of Φ, I may go on forever trying but without success. • In fact, programs are proofs and much of computer science in the early part of the 20th century was built by mathematicians and logicians. • There were also important inventions in computer science made by physicists (e.g., von Neumann) and others, but we ignore these in this talk. (ISR 2009, Brasiliá, Brasil) An example of a computable function / solvable problem: • E.g., 1.5 chickens lay 1.5 eggs in 1.5 days. • How many eggs does 1 chicken lay in 1 day? • 1.5 chickens lay 1.5 eggs in 1.5 days. • Hence, 1 chicken lays 1 egg in 1.5 days. • Hence, 1 chicken lays 2/3 egg in 1 day. Unsolvability of the Barber problem: • Which man barber in the village shaves all and only those men who do not shave themselves? • If John was the barber then – John shaves Bill ⇐⇒ Bill does not shave Bill – John shaves x ⇐⇒ x does not shave x – John shaves John ⇐⇒ John does not shave John • Contradiction. Unsolvability of the Russell set problem ..."

ACIT 2005 (2005)

ILLC ALUMNI EVENT, AMSTERDAM 2004 (2004)

(1987) "This dissertation presents three implementation models for the Scheme Programming Language. The first is a heap-based model used in some form in most Scheme implementations to date; the second is a new stack-based model that is considerably more efficient than the heap-based model at executing most programs; and the third is a new string-based model intended for use in a multiple-processor implementation of Scheme. The heap-based model allocates several important data structures in a heap, including actual parameter lists, binding environments, and call frames. The stack-based model allocates these same structures on a stack whenever possible. This results in less heap allocation, fewer memory references, shorter instruction sequences, less garbage collection, and more efficient use of memory. The string-based model allocates versions of these structures right in the program text, which is represented as a string of symbols. In the string-based model, Scheme programs are translated into an FFP language designed specifically to support Scheme. Programs in this language are directly executed by the ..."

(2005) "The evolution of types and logic in the 20th century"

(Brasiliá 2010) "Welcome to the fastest developing and most influential subject: Computer Science. • Computer Science is by nature highly applied and needs much precision, foundation and theory. • Computer Science is highly interdisciplinary, bringing many subjects together in ways that were not possible before. • Many recent scientific results (e.g., in chemistry) would not have been possible without computers. • The Kepler Conjecture: no packing of congruent balls in Euclidean space has density greater than the density of the face-centered cubic packing. • Sam Ferguson and Tom Hales proved the Kepler Conjecture in 1998, but it was not published until 2006. • The Flyspeck project aims to give a formal proof of the Kepler Conjecture."

(2007) "Can we formalise a mathematical text, avoiding as much as possible the ambiguities of natural language, while still guaranteeing the following four goals? 1. The formalised text looks very much like the original mathematical text (and hence the content of the original mathematical text is respected). 2. The formalised text can be fully manipulated and searched in ways that respect its mathematical structure and meaning. 3. Steps can be made to do computation (via computer algebra systems) and proof checking (via proof checkers) on the formalised text. 4. This formalisation of text is not much harder for the ordinary mathematician than LaTeX. Full formalization down to a foundation of mathematics is not required, although allowing and supporting this is one goal. (No theorem prover's language satisfies these goals.) (University of West of England, Bristol) A brief history • There are two influencing questions: ..."

(2007) "What is the aim for MathLang? Can we formalise a mathematical text, avoiding as much as possible the ambiguities of natural language, while still guaranteeing the following four goals? 1. The formalised text looks very much like the original mathematical text (and hence the content of the original mathematical text is respected). 2. The formalised text can be fully manipulated and searched in ways that respect its mathematical structure and meaning. 3. Steps can be made to do computation (via computer algebra systems) and proof checking (via proof checkers) on the formalised text. 4. This formalisation of text is not much harder for the ordinary mathematician than LaTeX. Full formalization down to a foundation of mathematics is not required, although allowing and supporting this is one goal. (No theorem prover's language satisfies these goals.) (Saarbruecken, Germany) A brief history • There are two influencing questions: ..."

(2004) "The evolution of types and logic in the 20th ..."
If else statement problem and solve

10-18-2005, 12:26 AM #1
Registered User
Join Date Oct 2005

Problem: The "I can't believe it's true" movie company is looking for extras to be in their new movie. Your job is to write a program that calculates the total pay due to an extra based on the number of hours they are needed on the set. The pay is calculated using the following rules: there is a base rate of $1000 plus an hourly rate of $20 for someone working less than 8 hours, an hourly rate of $25 for someone acting from 8 to 15 hours, an hourly rate of $37 for someone working 16 to 30 hours, and $45 an hour for those working more than 30 hours. Use the conditional operator. Ask the user to input the number of hours they have worked. Write the output in the following format: you have worked X hours and your total pay is $yyy.yy

#include <iostream>
using namespace std;

int main()
{
    float totalpay = 0;
    int hours;

    cout << "Enter the number of hours you worked: ";
    cin >> hours;  // read the input before using it

    if (hours < 8)
        totalpay = 1000 + 20 * hours;
    else if (hours >= 8 && hours <= 15)
        totalpay = 1000 + 25 * hours;
    else if (hours >= 16 && hours <= 30)
        totalpay = 1000 + 37 * hours;
    else if (hours > 30)
        totalpay = 1000 + 45 * hours;

    cout << "\nYou have worked " << hours << " hours and your total pay is " << totalpay << " dollars" << endl;

    return 0;
}

thank you

Thank you Mansur, but the Q said to use the conditional operator — is it the same as what I sent you?

#include <iostream>
using namespace std;

int main()
{
    float totalpay;
    int hours;

    cout << "Enter the number of hours you worked: ";
    cin >> hours;

    // conditional (ternary) operator version
    totalpay = 1000 + hours * (hours < 8 ? 20 : hours <= 15 ? 25 : hours <= 30 ? 37 : 45);

    cout << "\nYou have worked " << hours << " hours and your total pay is " << totalpay << " dollars" << endl;

    return 0;
}

thank you again :)
Last edited by malehda3y; 10-18-2005 at 12:44 AM.

You sure? The ? alternative: the ? operator can be used to replace if/else statements of the general form:

if (condition) expression;
else expression;

The ? is called a ternary operator and takes the general form

Exp1 ? Exp2 : Exp3

I am reading this in The Complete Reference, Borland C++.

I will try to solve it with the ternary operator. It's ok Mansur, don't worry about it :) don't do anything with it, you really helped me a lot and thank you a lot :D but would you mind taking a look at the other problem I sent you please?? Thank you so much. alllah ya7fe9'ak (may God protect you) :)

The ? operator is ugly and there is no good reason to use it. It makes the code harder to read and should be avoided in real code.
White Hall, PA Math Tutor Find a White Hall, PA Math Tutor ...I have tutored Algebra (all levels) multiple times over the past 5 years. I often find it is best to combine the notes from the teacher and examples from the book to come up with new problems to review. Some students are more visual and need to see the problem a couple of different ways until they understand it. 27 Subjects: including algebra 2, ACT Math, algebra 1, SAT math ...Though my degree and passion was for International Relations, I started with a love of math and science, going into university as an undeclared BioChem major, and so took college level courses in mathematics through calculus 2, biology through micro and macro biology, and chemistry through organi... 52 Subjects: including SAT math, precalculus, prealgebra, trigonometry ...I have been successfully speaking publicly for more than 40 years. I began as a championship debater, representing the state of Pennsylvania at the National Debate Tournament two years in a row. I majored in Rhetoric and Public Address as an undergraduate in college, attending on a debating sch... 43 Subjects: including algebra 1, geometry, prealgebra, reading ...I also completed over three months of student teaching, where I gained classroom experience as well as worked with individual students on a daily basis in the Subjects of Algebra and Pre-Calculus. After having worked with so many different students and mathematical abilities, I try to adjust my ... 12 Subjects: including calculus, prealgebra, reading, writing I am currently finishing up my undergraduate degrees in math and Spanish. I have completed much of a teaching program for mathematics as well, so I have completed many field experience hours in the classroom. I am flexible in terms of my style of teaching and can adjust to whatever learning style is necessary. 
12 Subjects: including algebra 1, algebra 2, calculus, geometry
REC-MathML-19980407

3.2 Token Elements

Token elements can contain any sequence of 0 or more characters, or extended characters represented by entity references. In particular, tokens with empty content are allowed, and should typically render invisibly, with no width except for the normal extra spacing for that kind of token element. The allowed set of entity references for extended characters is given in Chapter 6. In MathML, characters and MathML entity references are only allowed to occur as part of the content of a token element. The only exception is whitespace between elements, which is ignored. The <malignmark/> element (see Section 3.5.4) is the only element allowed in the content of tokens. It marks a place which can be vertically aligned with other objects, as explained in that section.

3.2.1 Attributes common to token elements

Several attributes related to text formatting are provided on all token elements except <mspace/>, but on no other elements except <mstyle>. These are:

Name         values                              default
fontsize     number v-unit                       inherited
fontweight   normal | bold                       inherited
fontstyle    normal | italic                     normal (except on <mi>)
fontfamily   string | css-fontfamily             inherited
color        #rgb | #rrggbb | html-color-name    inherited

(See Section 2.3.3 for terminology and notation used in attribute value descriptions.)

Token elements (other than <mspace/>) should be rendered as their content (i.e., in the visual case, as a closely-spaced horizontal row of standard glyphs for the characters in their content) using the attributes listed above, with surrounding spacing modified by rules or attributes specific to each type of token element. Some of the individual attributes are further discussed below.

Recall that all MathML elements, including tokens, accept class, style, and id attributes for compatibility with style sheet mechanisms, as described in Section 2.3.4.
In principle, the font properties controlled by the attributes listed above might be better handled using style sheets. When style sheet support becomes available for XML, future revisions of MathML will likely revisit the issue of font control.

MathML expressions are often embedded in a textual data format such as HTML, and their renderings are likewise embedded in a rendering of the surrounding text. The renderer of the surrounding text (e.g. a browser) should provide the MathML renderer with information about the rendering environment, including attributes of the surrounding text such as its font size, so that the MathML can be rendered in a compatible style. For this reason, most attribute values affecting text rendering are inherited from the rendering environment, as shown in the "default" column in the table above. (Note that it is also important for the rendering environment to provide the renderer with additional information, such as the baseline position of surrounding text, which is not specified by any MathML attributes.)

The exception to the general pattern of inheritance is the fontstyle attribute, whose default value is "normal" (non-slanted) for most tokens, but for <mi> depends on the content in a way described in the section about <mi>, Section 3.2.2. Note that fontstyle is not inherited in MathML, even though the corresponding CSS1 property "font-style" is inherited in CSS.

The fontsize attribute specifies the desired font size. v-unit represents a unit of vertical length (see Section 2.3.3). The most common unit for specifying font sizes in typesetting is pt (points). If the requested size of the current font is not available, the renderer should approximate it in the manner likely to lead to the most intelligible, highest quality rendering. Many MathML elements automatically change fontsize in some of their children; see the discussion of scriptlevel in the section on <mstyle>, Section 3.3.4.
The value of the fontfamily attribute should be the name of a font which may be available to a MathML renderer, or information which permits the renderer to select a font in some manner; acceptable values and their meanings are dependent on the specific renderer and rendering environment in use, and are not specified by MathML (but see the note about css-fontfamily below). (Note that the renderer's mechanism for finding fonts by name may be case-sensitive.)

If the value of fontfamily is not recognized by a particular MathML renderer, this should never be interpreted as a MathML error; rather, the renderer should either use a font which it considers to be a suitable substitute for the requested font, or ignore the attribute and act as if no value had been given.

Note that any use of the fontfamily attribute is unlikely to be portable across all MathML renderers. In particular, it should never be used to try to achieve the effect of a reference to an extended character (for example, by using a reference to a character in some symbol font which maps ordinary characters to glyphs for extended characters). As a corollary to this principle, MathML renderers should attempt to always produce intelligible renderings for the extended characters listed in Chapter 6, even when these characters are not available in the font family indicated. Such a rendering is always possible -- as a last resort, a character can be rendered to appear as an XML-style entity reference using one of the entity names given for the same character in Chapter 6.

The symbol css-fontfamily refers to a legal value for the "font-family" property in CSS1, which is a comma-separated list of alternative font family names or generic font types in order of preference, as documented in more detail in CSS1. MathML renderers are encouraged to make use of the CSS syntax for specifying fonts when this is practical in their rendering environment, even if they do not otherwise support CSS.
(See also the subsection CSS-compatible attributes within Section 2.3.3.)

The syntax and meaning of the color attribute are as described for the same attribute of <mstyle> (Section 3.3.4).

3.2.2 <mi> -- identifier

An <mi> element represents a symbolic name or arbitrary text which should be rendered as an identifier. Identifiers can include variables, function names, and symbolic constants. Not all "mathematical identifiers" are represented by <mi> elements -- for example, subscripted or primed variables should be represented using <msub> or <msup> respectively. Conversely, arbitrary text playing the role of a "term" (such as an ellipsis in a summed series) can be represented using an <mi> element, as shown in an example under the subheading Mixing text and math in Section 3.2.5.

It should be stressed that <mi> is a presentation element, and as such, it only indicates that its content should be rendered as an identifier. In the majority of cases, the contents of an <mi> will actually represent a mathematical identifier such as a variable or function name. However, as the preceding paragraph indicates, the correspondence between notations which should render like identifiers and notations which are actually intended to represent mathematical identifiers is not perfect. For an element whose semantics is guaranteed to be that of an identifier, see the description of <ci> in Chapter 4.

Attributes of <mi>:

<mi> elements accept the attributes listed in Section 3.2.1, but in one case with a different default value:

Name        values             default
fontstyle   normal | italic    (depends on content; described below)

A typical graphical renderer would render an <mi> element as the characters in its content, with no extra spacing around the characters (except spacing associated with neighboring elements). The default fontstyle would (typically) be "normal" (non-slanted) unless the content is a single character, in which case it would be "italic".
Note that this rule for fontstyle is specific to <mi> elements; the default value for the fontstyle attribute of other MathML token elements is "normal".

Examples of <mi>:

<mi> x </mi>
<mi> D </mi>
<mi> sin </mi>

An <mi> element with no content is allowed; <mi></mi> might, for example, be used by an "expression editor" to represent a location in a MathML expression which requires a "term" (according to conventional syntax for mathematics) but does not yet contain one.

Identifiers include function names such as "sin". Expressions such as "sin x" should be written using the "&ApplyFunction;" operator (which also has the short name "&af;") as shown below; see also the discussion of invisible operators in Section 3.2.4.

<mi> sin </mi> <mo> &ApplyFunction; </mo> <mi> x </mi>

Miscellaneous text that should be treated as a "term" can also be represented by an <mi> element, as in:

<mn> 1 </mn> <mo> + </mo> <mi> ... </mi> <mo> + </mo> <mi> n </mi>

When an <mi> is used in such exceptional situations, explicitly setting the fontstyle attribute may give better results than the default behavior of some renderers.

The names of symbolic constants should be represented as <mi> elements:

<mi> &pi; </mi>
<mi> &ImaginaryI; </mi>
<mi> &ExponentialE; </mi>

Use of special entity references for such constants can simplify the interpretation of MathML presentation elements. See Chapter 6 for a complete list of character entity references in MathML.

3.2.3 <mn> -- number

An <mn> element represents a "numeric literal" or other data which should be rendered as a numeric literal. Generally speaking, a numeric literal is a sequence of digits, perhaps including a decimal point, representing an unsigned integer or real number. The concept of a mathematical "number" depends on the context, and is not well-defined in the abstract.
As a consequence, not all mathematical numbers should be represented using <mn>; examples of mathematical numbers which should be represented differently are shown below, and include negative numbers, complex numbers, ratios of numbers shown as fractions, and names of numeric constants.

Conversely, since <mn> is a presentation element, there are a few situations where it may be desirable to include arbitrary text in the content of an <mn> which should merely render as a numeric literal, even though that content may not be unambiguously interpretable as a number according to any particular standard encoding of numbers as character sequences. As a general rule, however, the <mn> element should be reserved for situations where its content is actually intended to represent a numeric quantity in some fashion. For an element whose semantics are guaranteed to be that of a particular kind of mathematical number, see the description of <cn> in Chapter 4.

Attributes of <mn>:

<mn> elements accept the attributes listed in Section 3.2.1.

A typical graphical renderer would render an <mn> element as the characters of its content, with no extra spacing around them (except spacing from neighboring elements such as <mo>). Unlike <mi>, <mn> elements are (typically) rendered in an unslanted font by default, regardless of their content.

Examples of <mn>:

<mn> 2 </mn>
<mn> 0.123 </mn>
<mn> 1,000,000 </mn>
<mn> 2.1e10 </mn>
<mn> 0xFFEF </mn>
<mn> MCMLXIX </mn>
<mn> twenty one </mn>

Examples of numbers which should not be written using <mn> alone:

Many mathematical numbers should be represented using presentation elements other than <mn> alone; this includes negative numbers, complex numbers, ratios of numbers shown as fractions, and names of numeric constants.
Examples of MathML representations of such numbers include:

<mrow> <mo> - </mo> <mn> 1 </mn> </mrow>

<mn> 2 </mn> <mo> + </mo> <mn> 3 </mn> <mo> &InvisibleTimes; </mo> <mi> &ImaginaryI; </mi>

<mfrac> <mn> 1 </mn> <mn> 2 </mn> </mfrac>

<mi> &pi; </mi>

<mi> &ExponentialE; </mi>

3.2.4 <mo> -- operator, fence, separator or accent

An <mo> element represents an operator or anything which should be rendered as an operator. In general, the notational conventions for mathematical operators are quite complicated, and therefore MathML provides a relatively sophisticated mechanism for specifying the rendering behavior of an <mo> element. As a consequence, in MathML the list of things which should "render as an operator" includes a number of notations which are not mathematical operators in the ordinary sense. Besides ordinary operators with infix, prefix, or postfix forms, these include fence characters such as braces, parentheses, and "absolute value" bars, separators such as comma and semicolon, and mathematical accents such as a bar or tilde over a symbol.

The term "operator" as used in Chapter 3 means any symbol or notation which should render as an operator, and which is therefore representable by an <mo> element. That is, the term "operator" includes any ordinary operator, fence, separator, or accent unless otherwise specified or clear from the context. All such symbols are represented in MathML with <mo> elements since they are subject to essentially the same rendering attributes and rules; subtle distinctions in the rendering of these classes of symbols, when they exist, are supported using the boolean attributes fence, separator and accent, which can be used to distinguish these cases.
In particular, default values for fence, separator and accent can usually be found in the operator dictionary and therefore need not be specified on each <mo> element. Note that some mathematical operators are represented not by <mo> elements alone, but by <mo> elements "embellished" with (for example) surrounding superscripts; this is further described below. Conversely, as presentation elements, <mo> elements can contain arbitrary text, even when that text has no standard interpretation as an operator; for an example, see the discussion Mixing text and math in Section 3.2.5. See also Chapter 4 for definitions of MathML content elements which are guaranteed to have the semantics of specific mathematical operators. Attributes of <mo>: <mo> elements accept the attributes listed in Section 3.2.1, and the additional attributes listed here. Most attributes get their default values from the "operator dictionary", as described later in this section. When a dictionary entry is not found for a given <mo> element, the default value shown here in parentheses is used. Name values default form prefix | infix | postfix set by position of operator in an <mrow> (rule given below); used with <mo> content to index operator dictionary fence true | false set by dictionary (false) separator true | false set by dictionary (false) lspace number h-unit set by dictionary (.27777em) rspace number h-unit set by dictionary (.27777em) stretchy true | false set by dictionary (false) symmetric true | false set by dictionary (true) maxsize number [ v-unit | h-unit ] | infinity set by dictionary (infinity) minsize number [ v-unit | h-unit ] set by dictionary (1) largeop true | false set by dictionary (false) movablelimits true | false set by dictionary (false) accent true | false set by dictionary (false) h-unit represents a unit of horizontal length (see Section 2.3.3). v-unit represents a unit of vertical length. 
If no unit is given with maxsize or minsize, the number is a multiplier of the normal size of the operator in the direction (or directions) in which it stretches. These attributes are further explained below.

Typical graphical renderers show all <mo> elements as the characters of their content, with additional spacing around the element determined from the attributes listed above. Detailed rules for determining operator spacing in visual renderings are described in a subsection below. As always, MathML does not require a specific rendering, and these rules are provided as suggestions for the convenience of implementors.

Renderers without access to complete fonts for the MathML character set may choose not to render an <mo> element as precisely the characters in its content in some cases. For example, <mo> &le; </mo> might be rendered as <= to a terminal. However, as a general rule, renderers should attempt to render the content of an <mo> element as literally as possible. That is, <mo> &le; </mo> and <mo> &lt;= </mo> should render differently. (The first one should render as a single extended character representing a less-than-or-equal-to sign, and the second one as the two-character sequence <=.)

Examples of <mo> elements representing ordinary operators:

<mo> + </mo>
<mo> &lt; </mo>
<mo> &le; </mo>
<mo> &lt;= </mo>
<mo> ++ </mo>
<mo> &sum; </mo>
<mo> .NOT. </mo>
<mo> and </mo>
<mo> &InvisibleTimes; </mo>

Examples of expressions using <mo> elements for fences and separators:

Note that the <mo> elements in these examples don't need explicit fence or separator attributes, since these can be found using the operator dictionary as described below. Some of these examples could also be encoded using the <mfenced> element described in Section 3.3.8.
<mo> ( </mo> <mi> a </mi> <mo> + </mo> <mi> b </mi> <mo> ) </mo>

<mo> [ </mo> <mn> 0 </mn> <mo> , </mo> <mn> 1 </mn> <mo> ) </mo>

<mi> f </mi> <mo> &ApplyFunction; </mo> <mo> ( </mo> <mi> x </mi> <mo> , </mo> <mi> y </mi> <mo> ) </mo>

Invisible operators

Certain operators which are "invisible" in traditional mathematical notation should be represented using specific entity references within <mo> elements, rather than simply by nothing. The entity references used for these "invisible operators" are:

Full name          Short name   Examples of use
&InvisibleTimes;   &it;         xy
&ApplyFunction;    &af;         f(x), sin x
&InvisibleComma;   &ic;         m[12]

The MathML representations of the examples in the above table are:

<mi> x </mi> <mo> &InvisibleTimes; </mo> <mi> y </mi>

<mi> f </mi> <mo> &ApplyFunction; </mo> <mo> ( </mo> <mi> x </mi> <mo> ) </mo>

<mi> sin </mi> <mo> &ApplyFunction; </mo> <mi> x </mi>

<mi> m </mi> <mn> 1 </mn> <mo> &InvisibleComma; </mo> <mn> 2 </mn>

The reasons for using specific <mo> elements for invisible operators include:

• such operators should often have specific effects on visual rendering (particularly spacing and linebreaking rules) which are not the same as either the lack of any operator, or spacing represented by <mspace/> or <mtext> elements;
• these operators should often have specific audio renderings different than that of the lack of any operator;
• automatic semantic interpretation of MathML presentation elements is made easier by the explicit specification of such operators.

For example, an audio renderer might render f(x) (represented as in the above examples) by speaking "f of x", but use the word "times" in its rendering of xy.
Although its rendering must still be different depending on the structure of neighboring elements (sometimes leaving out "of" or "times" entirely), its task is made much easier by the use of a different <mo> element for each invisible operator.

Entity references for other special operators

For reasons like those for including special entities for invisible operators, MathML also includes "&DifferentialD;" for use in an <mo> element representing the differential operator symbol usually denoted by "d".

Detailed rendering rules for <mo> elements

Typical visual rendering behaviors for <mo> elements are more complex than for the other MathML token elements, so the rules for rendering them are described in this separate subsection. Note that, like all rendering rules in MathML, these rules are suggestions rather than requirements. Furthermore, no attempt is made to specify the rendering completely; rather, enough information is given to make the intended effect of the various rendering attributes as clear as possible.

The operator dictionary

Many mathematical symbols, such as an integral sign, a plus sign, or a parenthesis, have a well-established, predictable, traditional notational usage. Typically, this usage amounts to certain default attribute values for <mo> elements with specific contents and a specific form attribute. Since these defaults vary from symbol to symbol, MathML anticipates that renderers will have an "operator dictionary" of default attributes for <mo> elements (see Appendix C) indexed by each <mo> element's content and form attribute. If an <mo> element is not listed in the dictionary, the default values shown in parentheses in the table of attributes for <mo> should be used, since these values are typically acceptable for a generic operator.

Some operators are "overloaded", in the sense that they can occur in more than one form (prefix, infix, or postfix), with possibly different rendering properties for each form.
For example, "+" can be either a prefix or an infix operator. Typically, a visual renderer would add space around both sides of an infix operator, while only on the left of a prefix operator. The form attribute allows specification of which form to use, in case more than one form is possible according to the operator dictionary and the default value described below is not suitable.

Default value of form attribute

The form attribute does not usually have to be specified explicitly, since there are effective heuristic rules for inferring the value of the form attribute from the context. If it is not specified, and there is more than one possible form in the dictionary for an <mo> element with given content, the renderer should choose which form to use as follows (but see the exception for embellished operators, described later):

• If the operator is the first argument in an <mrow> of length (i.e. number of arguments) greater than one (ignoring all spacelike arguments (see Section 3.2.6) in the determination of both the length and the first argument), the prefix form is used;
• if it is the last argument in an <mrow> of length greater than one (ignoring all spacelike arguments), the postfix form is used;
• in all other cases, including when the operator is not part of an <mrow>, the infix form is used.

Note that these rules make reference to the <mrow> in which the <mo> element lies. In some situations, this <mrow> might be an inferred <mrow> implicitly present around the arguments of an element such as <msqrt> or <mtd>.

Opening (left) fences should have form="prefix", and closing (right) fences should have form="postfix"; separators are usually "infix", but not always, depending on their surroundings. As with ordinary operators, these values do not usually need to be specified explicitly.
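The position heuristic above can be sketched in a few lines of Python (not part of the specification; the list-of-arguments node representation and the is_spacelike test are hypothetical, and the embellished-operator exception described next is ignored here):

```python
def infer_form(arguments, index, is_spacelike):
    """Infer the default `form` of an <mo> element from its position
    in an <mrow>, per the heuristic above.  `arguments` is the list of
    the <mrow>'s arguments, `index` the position of the operator, and
    `is_spacelike(arg)` tests whether an argument is a spacelike
    element (e.g. <mtext> or <mspace/>)."""
    # Ignore spacelike arguments when determining both the length of
    # the <mrow> and its first/last argument.
    real = [i for i, arg in enumerate(arguments) if not is_spacelike(arg)]
    if len(real) > 1:
        if index == real[0]:
            return "prefix"
        if index == real[-1]:
            return "postfix"
    return "infix"
```

For example, a leading minus in an <mrow> of two arguments comes out prefix, while an operator that is the sole argument of an <mrow> defaults to infix.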
If the operator does not occur in the dictionary with the specified form, the renderer should use one of the forms which is available there, in the order of preference: infix, postfix, prefix; if no forms are available for the given <mo> element content, the renderer should use the defaults given in parentheses in the table of attributes for <mo>.

Exception for embellished operators

There is one exception to the above rules for choosing an <mo> element's default form attribute. An <mo> element which is "embellished" by one or more nested subscripts, superscripts, surrounding text or whitespace, or style changes behaves differently. It is the embellished operator as a whole (this is defined precisely, below) whose position in an <mrow> is examined by the above rules and whose surrounding spacing is affected by its form, not the <mo> element at its core; however, the attributes influencing this surrounding spacing are taken from the <mo> element at the core (or from that element's dictionary entry). For example, a "+" carrying a subscript (say "+4") should be considered an infix operator as a whole, due to its position in the middle of an <mrow>, but its rendering attributes should be taken from the <mo> element representing the "+", or when those are not specified explicitly, from the operator dictionary entry for <mo form="infix"> + </mo>.

The precise definition of an "embellished operator" is:

• an <mo> element;
• or one of the elements <msub>, <msup>, <msubsup>, <munder>, <mover>, <munderover>, <mmultiscripts>, <mfrac>, or <semantics> (Section 4.2.6), whose first argument exists and is an embellished operator;
• or one of the elements <mstyle>, <mphantom>, or <mpadded>, such that an <mrow> containing the same arguments would be an embellished operator;
• or an <maction> element whose selected subexpression exists and is an embellished operator;
• or an <mrow> whose arguments consist (in any order) of one embellished operator and zero or more spacelike elements.
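The recursive definition above translates directly into a predicate. Below is a sketch (Python; the (tag, children) node model is hypothetical, the spacelike test is simplified, and <maction> selection is omitted for brevity -- the full definition of spacelike elements is in Section 3.2.6):

```python
SPACELIKE_TAGS = {"mtext", "mspace", "maligngroup", "malignmark"}

def is_spacelike(node):
    # Simplified spacelike test (see Section 3.2.6 for the full rules).
    tag, children = node
    if tag in SPACELIKE_TAGS:
        return True
    if tag in {"mstyle", "mphantom", "mpadded", "mrow"}:
        return all(is_spacelike(c) for c in children)
    return False

EMBELLISHERS = {"msub", "msup", "msubsup", "munder", "mover",
                "munderover", "mmultiscripts", "mfrac", "semantics"}

def is_embellished(node):
    """True if `node` is an embellished operator per the definition
    above."""
    tag, children = node
    if tag == "mo":
        return True
    if tag in EMBELLISHERS:
        # First argument must exist and be an embellished operator.
        return bool(children) and is_embellished(children[0])
    if tag in {"mstyle", "mphantom", "mpadded"}:
        # Treated like an <mrow> containing the same arguments.
        return is_embellished(("mrow", children))
    if tag == "mrow":
        # One embellished operator plus zero or more spacelike elements.
        core = [c for c in children if not is_spacelike(c)]
        return len(core) == 1 and is_embellished(core[0])
    return False
```

So an <msup> whose base is an <mo> is an embellished operator, and so is an <mrow> wrapping it together with an <mtext>.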
Note that this definition permits nested embellishment only when there are no intervening enclosing elements not in the above list.

The above rules for choosing operator forms and defining embellished operators are chosen so that in all ordinary cases it will not be necessary for the author to specify a form attribute.

Rationale for definition of embellished operators: The following notes are included as a rationale for certain aspects of the above definitions, but should not be important for most users of MathML.

An <mfrac> is included as an "embellisher" because of the common notation for a differential operator:

<mfrac>
  <mo> &DifferentialD; </mo>
  <mrow>
    <mo> &DifferentialD; </mo>
    <mi> x </mi>
  </mrow>
</mfrac>

Since the definition of embellished operator affects the use of the attributes related to stretching, it is important that it includes embellished fences as well as ordinary operators; thus it applies to any <mo> element.

Note that an <mrow> containing a single argument is an embellished operator if and only if its argument is an embellished operator. This is because an <mrow> with a single argument must be equivalent in all respects to that argument alone (as discussed in Section 3.3.1). This means that an <mo> element which is the sole argument of an <mrow> will determine its default form attribute based on that <mrow>'s position in a surrounding, perhaps inferred, <mrow> (if there is one), rather than based on its own position in the <mrow> it is the sole argument of.

Note that the above definition defines every <mo> element to be "embellished" -- that is, "embellished operator" can be considered (and implemented in renderers) as a special class of MathML expressions, of which "<mo> element" is a specific case.

Spacing around an operator

The amount of space added around an operator (or embellished operator), when it occurs in an <mrow>, can be directly specified by the lspace and rspace attributes. These values are in ems if no units are given.
By convention, operators that tend to bind tightly to their arguments have smaller values for spacing than operators that tend to bind less tightly. This convention should be followed in the operator dictionary included with a MathML renderer. In TeX, operator spacing can take only one of three values, typically 3/18 em, 4/18 em, and 5/18 em; MathML does not impose this limit. Some renderers may choose to use no space around most operators appearing within subscripts or superscripts, as is done in TeX.

Non-graphical renderers should treat spacing attributes, and other rendering attributes described here, in analogous ways for their rendering medium.

Stretching of operators, fences and accents

Four attributes govern whether and how an operator (perhaps embellished) stretches so that it matches the size of other elements: stretchy, symmetric, maxsize, and minsize. If an operator has the attribute stretchy="true", then it (that is, each character in its content) obeys the stretching rules listed below, given the constraints imposed by the fonts and font rendering system. In practice, typical renderers will only be able to stretch a small set of characters, and quite possibly will only be able to generate a discrete set of character sizes.

There is no provision in MathML for specifying in which direction (horizontal or vertical) to stretch a specific character or operator; rather, when stretchy="true" it should be stretched in each direction for which stretching is possible. It is up to the renderer to know in which directions it is able to stretch each character. (Most characters can be stretched in at most one direction by typical renderers, but some renderers may be able to stretch certain characters, such as diagonal arrows, in both directions independently.)

The minsize and maxsize attributes limit the amount of stretching (in either direction).
These two attributes are given as multipliers of the operator's normal size in the direction or directions of stretching, or as absolute sizes using units. For example, if a character has maxsize="3", then it can grow to be no more than three times its normal (unstretched) size.

The symmetric attribute governs whether the height and depth above and below the axis of the character are forced to be equal (by forcing both height and depth to become the maximum of the two). An example of a situation where one might set symmetric="false" arises with parentheses around a matrix not aligned on the axis, which frequently occurs when multiplying non-square matrices. In this case, one wants the parentheses to stretch to cover the matrix, whereas stretching the parentheses symmetrically would cause them to protrude beyond one edge of the matrix. The symmetric attribute only applies to characters that stretch vertically (otherwise it is ignored).

If a stretchy <mo> element is embellished (as defined earlier in this section), the <mo> element at its core is stretched to a size based on the context of the embellished operator as a whole, i.e. to the same size as if the embellishments were not present. For example, the parentheses in the following example (which would typically be set to be stretchy by the operator dictionary) will be stretched to the same size as each other, and the same size they would have if they were not underlined and overlined, and furthermore will cover the same vertical interval:

<mrow>
  <munder>
    <mo> ( </mo>
    <mo> &UnderBar; </mo>
  </munder>
  <mfrac>
    <mi> a </mi>
    <mi> b </mi>
  </mfrac>
  <mover>
    <mo> ) </mo>
    <mo> &OverBar; </mo>
  </mover>
</mrow>

Note that this means that the stretching rules given below must refer to the context of the embellished operator as a whole, not just to the <mo> element itself.

Example of stretchy attributes: This shows one way to set the maximum size of a parenthesis so that it does not grow, even though its default value is stretchy="true".
<mrow>
  <mo maxsize="1"> ( </mo>
  <mfrac>
    <mi> a </mi>
    <mi> b </mi>
  </mfrac>
  <mo maxsize="1"> ) </mo>
</mrow>

The above should render with the parentheses at their normal (unstretched) size rather than stretched to cover the fraction. Note that each parenthesis is sized independently; if only one of them had maxsize="1", they would render with different sizes.

Vertical Stretching Rules:

• If a stretchy operator is a direct subexpression of an <mrow> element, or is the sole direct subexpression of an <mtd> element in some row of a table, then it should stretch to cover the height and depth (above and below the axis) of the non-stretchy direct subexpressions in the <mrow> element or table row, unless stretching is constrained by minsize or maxsize attributes.
• In the case of an embellished stretchy operator, the preceding rule applies to the stretchy operator at its core.
• If symmetric="true", then the maximum of the height and depth is used to determine the size, before application of the minsize or maxsize attributes.
• The preceding rules also apply in situations where the <mrow> or <mtd> element is inferred (see Section 3.5.1 for a discussion of inferred <mtd> elements).

Most common opening and closing fences are defined in the operator dictionary to stretch by default, and they stretch vertically. Also, operators such as &sum;, &int;, /, and vertical arrows stretch vertically by default.

In the case of a stretchy operator in a table cell (i.e. within an <mtd> element), the above rules assume each cell of the table row containing the stretchy operator covers exactly one row. (Equivalently, the value of the rowspan attribute is assumed to be 1 for all the table cells in the table row, including the cell containing the operator.) When this is not the case, the operator should only be stretched vertically to cover those table cells which are entirely within the set of table rows that the operator's cell covers. Table cells which extend into rows not covered by the stretchy operator's table cell should be ignored.
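The interaction of the vertical stretching rules with symmetric, minsize and maxsize can be sketched as follows (Python; sizes are hypothetical (height, depth) pairs measured from the math axis, and minsize/maxsize are taken as multipliers of the operator's normal size, as when the attributes carry no unit):

```python
def stretch_vertical(op_height, op_depth, sibling_sizes,
                     symmetric=True, minsize=1.0, maxsize=float("inf")):
    """Target (height, depth) for a vertically stretchy operator.

    sibling_sizes is a list of (height, depth) pairs for the
    non-stretchy direct subexpressions of the same <mrow>, measured
    above and below the math axis."""
    # Cover the tallest height and deepest depth among the siblings.
    height = max((h for h, _ in sibling_sizes), default=op_height)
    depth = max((d for _, d in sibling_sizes), default=op_depth)
    if symmetric:
        # Force equal extent above and below the axis.
        height = depth = max(height, depth)
    # Clamp the total size to [minsize, maxsize] times the normal size.
    normal = op_height + op_depth
    factor = (height + depth) / normal
    factor = max(minsize, min(maxsize, factor))
    scale = factor * normal / (height + depth)
    return height * scale, depth * scale
```

With maxsize=1 the operator stays at its normal size no matter how tall its siblings are, as in the parenthesis example above; the default minsize=1 keeps it from shrinking below normal size when its siblings are small.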
Horizontal Stretching Rules:

• If a stretchy operator, or an embellished stretchy operator, is a direct subexpression of an <munder>, <mover>, or <munderover> element, or if it is the sole direct subexpression of an <mtd> element (perhaps an inferred one) in some column of a table (see <mtable>), then it, or the <mo> element at its core, should stretch to cover the width of the other direct subexpressions in the given element (or in the same table column), given the constraints mentioned above.
• The preceding rule also applies in situations where the <mtd> element is inferred (see Section 3.5.1 for a discussion of inferred <mtd> elements).

By default, most horizontal arrows and some accents stretch horizontally.

In the case of a stretchy operator in a table cell (i.e. within an <mtd> element), the above rules assume each cell of the table column containing the stretchy operator covers exactly one column. (Equivalently, the value of the columnspan attribute is assumed to be 1 for all the table cells in the table column, including the cell containing the operator.) When this is not the case, the operator should only be stretched horizontally to cover those table cells which are entirely within the set of table columns that the operator's cell covers. Table cells which extend into columns not covered by the stretchy operator's table cell should be ignored.

The rules for horizontal stretching include <mtd> elements to allow arrows to stretch for use in commutative diagrams laid out using <mtable>.
The rules for horizontal stretchiness include scripts to make examples such as the following work:

<mrow>
  <mi> x </mi>
  <munder>
    <mo> &RightArrow; </mo>
    <mtext> maps to </mtext>
  </munder>
  <mi> y </mi>
</mrow>

This displays as an arrow from x to y, stretched to cover the width of the words "maps to" beneath it.

Rules Common to both Vertical and Horizontal Stretching:

If a stretchy operator is not required to stretch (i.e. if it is not in one of the locations mentioned above, or if there are no other expressions whose size it should stretch to match), then it has the standard (unstretched) size determined by the font and current fontsize.

If a stretchy operator is required to stretch, but all other expressions in the containing element or object (as described above) are also stretchy, all elements that can stretch should grow to the maximum of the normal unstretched sizes of all elements in the containing object, if they can grow that large. If the value of minsize or maxsize prevents this then that (min or max) size is used. For example, in an <mrow> containing nothing but vertically stretchy operators, each of the operators should stretch to the maximum of all of their normal unstretched sizes, provided no other attributes are set which override this behavior. Of course, limitations in fonts or font rendering may result in the final, stretched sizes being only approximately the same.

Other attributes of <mo>

The largeop attribute specifies whether the operator should be drawn larger than normal if displaystyle="true" in the current rendering environment. This roughly corresponds to TeX's \displaystyle style setting. MathML uses two attributes, displaystyle and scriptlevel, to control orthogonal presentation features that TeX encodes into one "style" attribute with values \displaystyle, \textstyle, \scriptstyle, and \scriptscriptstyle. These attributes are discussed further in Section 3.3.4 describing the <mstyle> element. Note that these attributes can be specified directly on an <mstyle> element's begin tag, but not on most other elements.
Examples of large operators include &int; and &prod;.

The movablelimits attribute specifies whether underscripts and overscripts attached to this <mo> element should be drawn as subscripts and superscripts when displaystyle="false". movablelimits="false" means that underscripts and overscripts should never be drawn as subscripts and superscripts. In general, displaystyle is "true" for displayed math and "false" for inline math. Also, displaystyle is "false" by default within tables, scripts and fractions, and a few other exceptional situations detailed in Section 3.3.4. Thus, operators with movablelimits="true" will display with limits (i.e., underscripts and overscripts) in displayed math, and with subscripts and superscripts in inline math, tables, scripts and so on. Examples of operators that typically have movablelimits="true" are &sum;, &prod;, and "lim".

The accent attribute determines whether this operator should be treated by default as an accent (diacritical mark) when used as an underscript or overscript; see <munder>, <mover>, and <munderover> (Sections 3.4.4 - 3.4.6).

The separator attribute may affect automatic linebreaking in renderers which position ordinary infix operators at the beginnings of broken lines rather than at the ends (that is, which avoid linebreaking just after such operators), since linebreaking should be avoided just before separators, but is acceptable just after them.

The fence attribute has no effect in the suggested visual rendering rules given here; it is not needed for properly rendering traditional notation using these rules. It is provided so that specific MathML renderers, especially non-visual renderers, have the option of using this information.

3.2.5 <mtext> -- text

An <mtext> element is used to represent arbitrary text which should be rendered as itself.
In general, the <mtext> element is intended to denote commentary text which is not central to the mathematical meaning or notational structure of the expression it is contained in. Note that some text with a clearly defined notational role might be more appropriately marked up using <mi> or <mo>; this is discussed further below.

An <mtext> element can be used to contain "renderable whitespace", i.e., invisible characters which are intended to alter the positioning of surrounding elements. In non-graphical media, such characters are intended to have an analogous effect, such as introducing positive or negative time delays or affecting rhythm in an audio renderer. This is not related to any whitespace in the source MathML consisting of blanks, newlines, tabs, or carriage returns; whitespace present directly in the source is trimmed and collapsed, as described in Section 2.3.5. Whitespace which is intended to be rendered as part of an element's content must be represented by entity references (unless it consists only of single blanks between non-whitespace characters).

Renderable whitespace can have a positive or negative width, as in "&ThinSpace;" and "&NegativeThinSpace;", or zero width, as in "&ZeroWidthSpace;". The complete list of such characters is given in Chapter 6. Note that there is no formal distinction in MathML between renderable whitespace characters and any other class of characters, in <mtext> or in any other element.

Renderable whitespace can also include characters that affect alignment or linebreaking. Some of these characters are:

Entity name        Purpose (rough description)
NewLine            start a new line -- don't indent
IndentingNewLine   start a new line -- indent
NoBreak            do not allow a linebreak here
GoodBreak          if a linebreak is needed on the line, here is a good spot
BadBreak           if a linebreak is needed on the line, try to avoid breaking here

For the complete list of MathML entities, consult Chapter 6.
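One way a renderer might act on these linebreaking hints is sketched below (Python; the token/hint representation and the penalty values are invented for illustration -- the specification only says which spots are forced, good, bad or forbidden):

```python
# Each item is (text, hint), where hint is the linebreaking entity
# attached after that token, or None.
PENALTY = {"GoodBreak": 0, None: 1, "BadBreak": 2}

def pick_break(items, max_width):
    """Index after which to break so the first line fits in max_width
    characters; None if no break is needed or none is allowed."""
    if sum(len(text) for text, _ in items) <= max_width:
        return None
    best, best_penalty = None, None
    width = 0
    for i, (text, hint) in enumerate(items[:-1]):
        if hint in ("NewLine", "IndentingNewLine"):
            return i                      # forced break
        width += len(text)
        if width > max_width:
            break
        if hint == "NoBreak":
            continue                      # never break here
        penalty = PENALTY.get(hint, 1)
        if best_penalty is None or penalty <= best_penalty:
            best, best_penalty = i, penalty
    return best
```

Here a GoodBreak spot is preferred over an unmarked one, a BadBreak spot is used only as a last resort, and a NoBreak spot is never chosen.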
Attributes of <mtext>: <mtext> elements accept the attributes listed in Section 3.2.1.

See also the warnings about the legal grouping of "spacelike elements" in Section 3.2.6, and about the use of such elements for "tweaking" or conveying meaning in Section 3.3.6.

Examples of <mtext>:

<mtext> Theorem 1: </mtext>
<mtext> &ThinSpace; </mtext>
<mtext> &ThickSpace;&ThickSpace; </mtext>
<mtext> /* a comment */ </mtext>

Mixing text and math

In some cases, text embedded in math could be more appropriately represented using <mo> or <mi> elements. For example, the expression "there exists &delta; > 0 such that f(x) < 1" could be represented as:

<mo> there exists </mo>
<mi> &delta; </mi>
<mo> &gt; </mo>
<mn> 0 </mn>
<mo> such that </mo>
<mi> f </mi>
<mo> &ApplyFunction; </mo>
<mo> ( </mo>
<mi> x </mi>
<mo> ) </mo>
<mo> &lt; </mo>
<mn> 1 </mn>

An example involving an <mi> element is an ellipsis standing for omitted terms in a sum, which is appropriately represented by an <mi> element, since it takes the place of a term in the sum (see Section 3.2.2, <mi>). On the other hand, expository text within MathML is best represented with an <mtext> element; an example of this is a statement in which two inequalities are joined by expository words. However, when MathML is embedded in HTML, such an example is probably best rendered with only the two inequalities represented as MathML at all, letting the text be part of the surrounding HTML.

Another factor to consider in deciding how to mark up text is the effect on rendering. Text enclosed in an <mo> element is unlikely to be found in a renderer's operator dictionary, so it will be rendered with the format and spacing appropriate for an "unrecognized operator", which may or may not be better than the format and spacing for "text" obtained by using an <mtext> element. An ellipsis entity in an <mi> element is apt to be spaced more appropriately for taking the place of a term within a series than if it appeared in an <mtext> element.

3.2.6 <mspace/> -- space

An <mspace/> empty element represents a blank space of any desired size, as set by its attributes.
The default value for each attribute is "0em" or "0ex", so it will not be useful without some attributes specified.

Attributes of <mspace/>:

Name     values          default
width    number h-unit   0em
height   number v-unit   0ex
depth    number v-unit   0ex

h-unit and v-unit represent units of horizontal or vertical length, respectively (see Section 2.3.3).

Note the warning about the legal grouping of "spacelike elements" given below, and the warning about the use of such elements for "tweaking" or conveying meaning in Section 3.3.6. See also the other elements which can render as whitespace, namely <mtext>, <mphantom>, and <maligngroup/>.

Definition of spacelike elements

A number of MathML presentation elements are "spacelike" in the sense that they typically render as whitespace, and do not affect the mathematical meaning of the expressions in which they appear. As a consequence, these elements often function in somewhat exceptional ways in other MathML expressions. For example, spacelike elements are handled specially in the suggested rendering rules for <mo> given in Section 3.2.4. The following MathML elements are defined to be "spacelike":

• an <mtext>, <mspace/>, <maligngroup/>, or <malignmark/> element;
• an <mstyle>, <mphantom>, or <mpadded> element, all of whose direct subexpressions are spacelike;
• an <maction> element whose selected subexpression exists and is spacelike;
• an <mrow> all of whose direct subexpressions are spacelike.

Note that an <mphantom> is not automatically defined to be spacelike, unless its content is spacelike. This is because operator spacing is affected by whether adjacent elements are spacelike. Since the <mphantom> element is primarily intended as an aid in aligning expressions, operators adjacent to an <mphantom> should behave as if they were adjacent to the contents of the <mphantom>, rather than to an equivalently sized area of whitespace.

Legal grouping of spacelike elements

Authors who insert spacelike elements or <mphantom> elements into an existing MathML expression should note that such elements are counted as arguments, in elements which require a specific number of arguments, or which interpret different argument positions differently.
Therefore, spacelike elements inserted into such a MathML element should be grouped with a neighboring argument of that element by introducing an <mrow> for that purpose. For example, to allow for vertical alignment on the right edge of the base of a superscript, the expression

<msup>
  <mi> x </mi>
  <malignmark edge="right"/>
  <mn> 2 </mn>
</msup>

is illegal, because <msup> must have exactly 2 arguments; the correct expression would be:

<msup>
  <mrow>
    <mi> x </mi>
    <malignmark edge="right"/>
  </mrow>
  <mn> 2 </mn>
</msup>

See also the warning about "tweaking" in Section 3.3.6.

3.2.7 <ms> -- string literal

The <ms> element is used to represent "string literals" in expressions meant to be interpreted by computer algebra systems or other systems containing "programming languages". By default, string literals are displayed surrounded by double quotes. As explained in Section 3.2.5, ordinary text embedded in a mathematical expression should be marked up with <mtext>, or in some cases <mo> or <mi>, but never with <ms>.

Note that the string literals encoded by <ms> are "Unicode strings" rather than "ASCII strings". In practice, non-ASCII characters will typically be represented by entity references. For example, <ms>&amp;</ms> represents a string literal containing a single character, '&', and <ms>&amp;amp;</ms> represents a string literal containing 5 characters, the first one of which is '&'. (In fact, MathML string literals are even more general than Unicode string literals, since not all MathML entity references necessarily refer to existing Unicode characters, as discussed in Chapter 6.)

Like all token elements, <ms> does trim and collapse whitespace in its content according to the rules of Section 2.3.5, so whitespace intended to remain in the content should be encoded as described in that section.
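The trim-and-collapse behavior mentioned above can be sketched as follows (Python; a simplification of the Section 2.3.5 rules -- entity-encoded whitespace such as &ThinSpace; survives, since it is not raw whitespace in the source):

```python
import re

def collapse_whitespace(content):
    """Trim leading/trailing whitespace and collapse internal runs of
    blanks, tabs, newlines and carriage returns to a single blank."""
    return re.sub(r"[ \t\r\n]+", " ", content).strip()
```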
Attributes of <ms>: <ms> elements accept the attributes listed in Section 3.2.1, and additionally:

Name     values   default
lquote   string   &quot;
rquote   string   &quot;

In visual renderers, the content of an <ms> element is typically rendered with no extra spacing added around the string, and a quote character at the beginning and the end of the string. By default, the left and right quote characters are both the standard double quote character &quot;. However, these characters can be changed with the lquote and rquote attributes respectively.

The content of <ms> elements should be rendered with visible "escaping" of certain characters in the content, including at least "double quote" itself, and preferably whitespace other than individual blanks. The intent is for the viewer to see that the expression is a string literal, and to see exactly which characters form its content. For example, <ms>double quote is "</ms> might be rendered as "double quote is \"".
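A sketch of the quoting and escaping behavior described above (Python; render_ms is a hypothetical helper, not part of MathML, and a real renderer would preferably also make other whitespace visible):

```python
def render_ms(content, lquote='"', rquote='"'):
    """Render <ms> content: surround it with the quote characters and
    visibly escape the quote characters (and the escape character
    itself) wherever they occur inside the string."""
    escaped = content.replace("\\", "\\\\")
    for q in {lquote, rquote}:
        escaped = escaped.replace(q, "\\" + q)
    return lquote + escaped + rquote
```

With the defaults, render_ms('double quote is "') reproduces the rendering "double quote is \"" from the example above; with lquote and rquote changed, the surrounding and escaped characters change accordingly.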
Calculation of Unbalanced Transformer Loads

7JLAman4 (Electrical) 7 Jun 06 14:51
I'm attempting to calculate the line currents (with no success) of a 120V delta transformer secondary with 3, 4 & 5 kVA on phase windings A, B & C respectively, with a phase sequence ABC. I originally started out with a 15kVA 3Ph load (5kVA per phase) and, knowing how to calculate for a balanced system, tried to achieve the currents using nodal analysis, which I was then going to apply to the problem above. I can't seem to calculate the line currents from the phase currents using vectors. I had started out with: Ia=3kVA/120V=25A@0deg, Ib=4kVA/120V=33.3A@-120deg, Ic=5kVA/120V=41.7A@120deg. IA=Ia-Ic, IB=Ib-Ia, IC=Ic-Ib. IA=(25cos(0)+i25sin(0)) - (41.7cos(120)+i41.7sin(120)). I'm not sure if this is the correct method. However, using a sample where I know what the outcome should be, I can't seem to get close enough to the answer.

jghrist (Electrical) 7 Jun 06 15:42
Your method seems correct assuming that Ia is the current in winding A, Ib is the current in winding B, etc.; the line currents are IA, IB, and IC; and the transformer connection is D[AC] (top of winding A connected to bottom of winding C). If the connection were D[AB] (top of winding A connected to bottom of winding B), then IA=Ia-Ib, IB=Ib-Ic, and IC=Ic-Ia.

cflatters (Electrical) 7 Jun 06 15:42
Sounds like a college exercise to me!
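jghrist's D[AC] equations are easy to check with complex arithmetic. The sketch below (Python) assumes unity power factor, so each winding current is placed at the angle of its winding voltage (sequence ABC) -- only kVA was given in the original post, so these angles are an assumption:

```python
import cmath
import math

def phasor(mag, deg):
    return cmath.rect(mag, math.radians(deg))

# Winding currents for 3, 4 and 5 kVA on 120 V windings (assumed
# unity power factor, each current at its winding-voltage angle).
Ia = phasor(3000 / 120, 0)
Ib = phasor(4000 / 120, -120)
Ic = phasor(5000 / 120, 120)

# Line currents for the D[AC] connection (jghrist's equations).
IA = Ia - Ic
IB = Ib - Ia
IC = Ic - Ib

for name, I in (("IA", IA), ("IB", IB), ("IC", IC)):
    print(f"{name} = {abs(I):6.2f} A @ {math.degrees(cmath.phase(I)):7.2f} deg")
print(f"|IA + IB + IC| = {abs(IA + IB + IC):.1e}")
```

The magnitudes come out to about 58.33 A, 50.69 A and 65.09 A (matching the figures jghrist gives later in the thread), and the three line currents sum to zero, as they must with no neutral return path.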
7JLAman4 (Electrical) 7 Jun 06 16:08
College didn't explain unbalanced systems. Work is driving this goal of mine to better understand certain theories and formula shortcuts. When using the above calculations on a "balanced" system, line current IC always somehow comes out much higher than line currents IA or IB.

jghrist (Electrical) 7 Jun 06 16:33
I get IA=58.3A, IB=50.6A, IC=65.0A using your equations for a D[AC] winding. IC doesn't seem inordinately high considering the unbalance in the winding kVA. Using 5 kVA in each phase gives 72.1A in each line, the same as winding current times sqrt(3).

stevenal (Electrical) 7 Jun 06 16:35
Why are you assuming currents are evenly displaced by 120 degrees?

7JLAman4 (Electrical) 7 Jun 06 16:40
If only given kVA of the loads connected to the transformer, how would the phase angles be calculated then? And is it necessary?

stevenal (Electrical) 7 Jun 06 17:09
You can't. Necessary? Not if you have a meter.

waross (Electrical) 7 Jun 06 19:03
Further to stevenal's comments: if the power factors of the different loads are different, you add another level of complexity to the problem. I suggest that you draw some vectors to scale and double check your calculations. Another suggestion to help you. On a balanced system: 50 amps phase current will result in 50 x 1.73 = 86.5 line amps. 65 amps phase current will give 65 x 1.73 = 112.4 amps. When you combine 50 amps on one phase with 65 amps on another phase, the result will be more than 86.5 amps and less than 112.4 amps. I suggest that you use scale vector drawings and the high and low limits to self check your calculations, until you develop confidence in your solutions.

7JLAman4 (Electrical) 8 Jun 06 9:24
Thank you for the replies.

stevenal (Electrical) 8 Jun 06 11:43
Even without differing pf loads, this is not an easy problem. A balanced delta can be converted to an equivalent wye, and solved on a per phase basis.
Unbalanced systems are generally handled by breaking the problem down into its balanced sequence components, each of which can be solved on a per-phase basis and recombined. Note that on a delta system, all the line currents must add to zero since there is no neutral return path. Even if the loads are 100% resistive, unequal-magnitude line currents cannot be evenly displaced by 120 degrees and still sum to zero. If this is really necessary for work, I suggest you look for a course on sequence components.

jghrist (Electrical) 8 Jun 06 15:24
Solving for the line currents with the winding currents known is an easy problem. It is just nodal analysis with two currents known going into each node. The equations are as jlamann gave in the original post. The more difficult problem is solving for winding currents with the line currents known. In this case, you know one current going into each node and need to find the other two.

stevenal (Electrical) 8 Jun 06 19:11
"Solving for the line currents with the winding currents known is an easy problem." Agreed, but in this case all that is known is the kVA per winding. From this jlamann got a set of winding currents that don't close the delta graphically. The equations are good, but the inputs into them don't work. GIGO.

stevenal (Electrical) 8 Jun 06 21:18
Another thought, though. The winding current magnitudes calculated above can only describe one triangle. You should be able to use the laws of cosines and sines to find the relative angles of the three phasors. Then use the formulas to find the line currents. Pf and sequence components avoided.

jghrist (Electrical) 8 Jun 06 23:33
The winding currents do not have to add to zero. The line currents do. In the original example:
IA = Ia - Ic = 58.33@-38.21°
IB = Ib - Ia = 50.69@-145.28°
IC = Ic - Ib = 65.09@93.67°
IA + IB + IC = 0

tinfoil (Electrical) 9 Jun 06 8:56
Further to jghrist: The line currents must sum to zero. The nodal currents (i.e.
the sum of currents at each corner of the delta) must sum to zero. The winding currents are a function only of the load. Imagine a delta service with only a single-phase load on one leg. Assuming ideal transformers, only one winding will have non-zero current.

jghrist (Electrical) 9 Jun 06 9:33
Or consider a delta-wye transformer with a single phase-to-ground fault on the wye secondary and no load on the unfaulted phases. There will be current in only one of the windings (primary and secondary). Obviously, with current in only one winding, the sum of the winding currents will not add to zero. The problem of calculating winding currents from the line currents arises because the set of equations
IA = Ia - Ic
IB = Ib - Ia
IC = Ic - Ib
cannot be solved for winding currents Ia, Ib, or Ic. If a constant is added to each winding current, the equations still hold. The constant adder (magnitude and phase) might be recognized as a zero-sequence current circulating in the delta.

waross (Electrical) 9 Jun 06 10:54
Hi tinfoil; I don't understand what you mean by an ideal transformer. "Imagine a delta service with only single phase load on one leg. Assuming ideal transformers, only one winding will have non-zero current." What about two transformers in parallel with a single-phase load? Won't ideal transformers share the load? When a single-phase load is applied to "A" phase of a delta transformer bank, "B" phase and "C" phase act as an open-delta transformer. The open delta may be resolved to a single-phase transformer in parallel with "A" phase. The "B" phase and "C" phase transformers share 50% of the load.

jghrist (Electrical) 9 Jun 06 11:42
Let's say you have a wye-delta transformer with A phase of the primary open. You could still serve load between the 'a' and 'b' terminals (ends of the delta A winding). There would be zero current in the A winding and the currents in the B and C windings would be equal.
This current would flow out of the 'b' terminal (top of B winding, bottom of A winding), through the load, and back into the 'a' terminal, which is connected to the bottom of the C winding. This would be equivalent to an open-wye, open-delta connection. Then again, if B and C of the primary were open, the currents in the delta B and C windings would be zero and all of the load current would flow in the A winding. This would be equivalent to a single-phase transformer. It just shows that if the delta winding currents are known, you can calculate the line currents, but not vice versa.

stevenal (Electrical) 9 Jun 06 11:43
I agree that in the general case, winding currents do not necessarily sum to zero. However, in this case, with a delta secondary feeding the load, winding currents will sum to zero. There is no source given for the circulating zero-sequence current. For jghrist's example to occur, there must be a source on the delta side and a wye winding with unbalanced loading or fault. Possibly an unbalanced source voltage on the wye side could also create circulating current, but the problem assumed the transformed voltage was 120V across each winding. You cannot have a single-phase load on a delta that involves only one leg unless you are speaking of that 4-wire delta that center-taps one of the windings. Given only the three nodes of the original post, I assume this is a 3-wire delta. Single-phase loads can only be connected line-to-line and therefore involve all three windings. Current consists of + and - sequence only. Winding current sums to zero to give no zero-sequence current. With Eng-Tips MVPs challenging me on this, I've double-checked my memory using Aspen: grounded-wye source with a line-to-line fault on the delta side gives no zero-sequence current on either side.
waross (Electrical) 9 Jun 06 12:14
Hi stevenal; I agree with your statement, "Possibly an unbalanced source voltage on the wye side could also create circulating current, but the problem assumed the transformed voltage was 120V across each winding." I would add: in North America, the primary neutral is left floating on a wye-delta bank. This allows the primary voltages and phase angles to rearrange themselves so as to prevent circulating currents in the delta secondary. In other parts of the world, I have seen standard practice of grounding the primary neutral on a wye-delta bank. The circulating currents do flow. Fuses blow. Transformers burn out. With the loss of one or two phases, the resulting low-voltage backfeed burns out a lot of refrigerators and freezers. It is common to see a transformer bank with one fuse blown, or missing altogether.

stevenal (Electrical) 9 Jun 06 12:43
IA=60.2<-33.6 deg

jghrist (Electrical) 9 Jun 06 13:00
Circulating currents or not, there is no reason to require that the winding currents sum to zero. Why can the originally proposed winding currents (magnitude and angle) not exist? It isn't like there is nowhere for the currents to go. There are three branches at each node, including the lines. I just threw in the circulating currents to show that the winding currents are indeterminate given only the line currents. In the original post, the winding currents are given, not the line currents. Regarding "IA=60.2<-33.6 deg": are IA, IB, and IC winding currents or line currents? The original posting used lower case for winding currents and upper case for line currents. I've been trying to keep with this convention even though it seems backwards.

waross (Electrical) 9 Jun 06 13:20
Hi fellas; a couple of points: The OP said a delta transformer bank. The load current is not the same as the winding current. This may sound like insignificant nit-picking. It may be nit-picking, but it is not insignificant.
I have seen several situations where there was zero load on a bank of distribution transformers, and zero current in the secondary lines, but the magnitude of the circulating winding currents was sufficient to either burn out the transformers or blow the primary fuses.

stevenal (Electrical) 9 Jun 06 14:05
By definition, the sum of the three winding currents is equal to 3I0. Since these currents cannot go to the load, they are trapped to circulate in the delta if they exist. To even exist, they must have corresponding currents in the primary. The OP has given no indication that there is any source of zero-sequence voltage in the primary to drive zero-sequence current around the delta. The original problem did not start with winding current; it started with winding kVA and balanced voltage. His first assumption, which I challenge, is that the winding currents, although unbalanced in magnitude, happen to be evenly displaced in phase. This cannot occur without zero-sequence current, which cannot occur with balanced voltage. It is self-contradictory. The currents I calculated above are line currents derived from winding currents. The winding currents were derived from winding kVA and balanced voltage using trig to ensure they sum to zero. It's no nit; I've tried to be clear when speaking of winding current versus line or load current. The condition you speak of was most likely caused by primary voltage unbalance or transformer impedance mismatch or both. Either one of these situations would change the 120V the OP used initially to obtain winding current.

jghrist (Electrical) 9 Jun 06 16:14
When you're dealing with sequence components, you are generally dealing with line currents, not delta winding currents. I'm not sure if the concept applies. Do you need to have a zero-sequence voltage to get zero-sequence currents? Let's look at a delta connected load with balanced voltages.
V[AB] = 120V < 0°
V[BC] = 120V < -120°
V[CA] = 120V < 120°
A 4.8 ohm resistor across A and B phases (3 kVA at 120V)
A 3.6 ohm resistor across B and C phases (4 kVA at 120V)
A 2.88 ohm resistor across C and A phases (5 kVA at 120V)
Current in the resistors:
I[AB] = 25A < 0°
I[BC] = 33.33A < -120°
I[CA] = 41.67A < 120°
The sequence components would be:
I0 = 4.81A < 150°
I1 = 33.33A < 0°
I2 = 4.81A < -150°
Where does the zero-sequence current come from?

stevenal (Electrical) 9 Jun 06 16:32
Sequence-component theory is perfectly valid within the windings. There is no zero-sequence current in your example. All three windings are involved in supplying current to each one of the resistors you're loading. I would begin by converting your delta-connected load to its equivalent wye. Loop equations might work also. If I get time I may work out the actual line and winding currents. Just by inspection, though, with no neutral return path there can be no zero-sequence current in the lines, and with balanced voltages none can exist in the windings.

tinfoil (Electrical) 9 Jun 06 16:41
An ideal xmfr is only ever encountered at school when they are trying to teach theory: it has no iron or copper losses, and can deliver infinite current with no regulation voltage drop. I referred to it as a shorthand way of saying "ignoring losses in..." In my utility experience, we used to generate delta banks using two (open-delta) or three (full delta, preferred) discrete single-phase two-primary-bushing xmfrs. It was a common practice to hook up a single can line-to-line to service single-phase loads. Obviously, with only one can, only one primary winding has nonzero current. If you hooked up a bank of three such transformers, and still only placed single-phase load on one leg, only one WINDING would have non-zero current (again, assuming ideal transformers). I agree that two of the LINES would provide this current, but that is not what I said in my post.
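The sequence-component figures in the resistor example above can be reproduced numerically. A sketch (mine, not from the thread) using the usual a-operator on the delta branch currents, and then on the line currents of a D[AC] connection:

```python
import cmath
import math

def pol(mag, deg):
    """Phasor from magnitude and angle in degrees."""
    return cmath.rect(mag, math.radians(deg))

a = pol(1, 120)  # sequence operator, 1 @ 120 deg

def seq(x, y, z):
    """(zero, positive, negative) sequence components of a phasor triple."""
    return ((x + y + z) / 3,
            (x + a * y + a**2 * z) / 3,
            (x + a**2 * y + a * z) / 3)

# Delta branch (load) currents from the resistor example
Iab, Ibc, Ica = pol(25, 0), pol(33.33, -120), pol(41.67, 120)
I0, I1, I2 = seq(Iab, Ibc, Ica)
print(round(abs(I0), 2), round(math.degrees(cmath.phase(I0))))  # 4.81 150
print(round(abs(I1), 2), round(abs(I2), 2))                     # 33.33 4.81

# Line currents and their components: the zero sequence drops out
IA, IB, IC = Iab - Ica, Ibc - Iab, Ica - Ibc
J0, J1, J2 = seq(IA, IB, IC)
print(abs(J0))              # ~0 (the line currents sum to zero)
print(round(abs(J1), 2))    # 57.74
```

The branch currents carry a 4.81 A zero-sequence component even with balanced voltages, while the line currents cannot, since they sum to zero; the negative-sequence line current comes out near the 8.33 A quoted later.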
Once you start using real rather than ideal xmfrs, SOME currents will flow in the other two transformers' primary windings, but not enough to cause the sum of these currents to be zero.

jghrist (Electrical) 9 Jun 06 16:51
Why can there be zero-sequence current in the delta load, but not in the delta windings? If sequence component theory is perfectly valid within the winding, why not within a delta-connected load? I used standard equations to calculate the sequence components of the current in the delta-connected resistors and I found a zero-sequence component with balanced voltages applied. Yes, you are correct that the line currents will have no zero-sequence component. The line currents will be:
IA = 58.33A < -38.2°
IB = 50.68A < -145.2°
IC = 65.08A < 93.6°
Sequence components of line current:
I0 = 0
I1 = 57.73 < -30°
I2 = 8.33 < -120°
If you convert the delta-connected load to a wye equivalent, you will get the same currents that I got in the line. How will that show that there is no zero-sequence current in the delta-connected load?

stevenal (Electrical) 9 Jun 06 16:51
Sorry jghrist, I read over your post too quickly. You were saying nothing about the winding currents, only the delta-connected load currents. I concede that this unbalanced loading creates zero-sequence circulating current within the load delta.

jghrist (Electrical) 9 Jun 06 17:10
Now, given these load currents, what is the current in the delta windings serving the load? I don't know how to calculate this. There may be more than one answer, depending on the primary source. My assumption would be that if the source was balanced, the winding currents would equal the currents in the delta-connected loads, which is where the OP started. If you had a wye primary and one phase was open, then there would be no current in the corresponding secondary winding.
The load could still be served, much as in an open-wye, open-delta connection, but the secondary winding currents would certainly be very different. I think that as long as all three primary connections were made to your three-phase bank, the other secondary windings would share some of the single-phase load.

waross (Electrical) 9 Jun 06 17:13
Hi tinfoil; Consider a single-phase transformer compared to the open end of an open delta on the other two phases. The terminal voltages and voltage regulation of the single transformer will be equal to the open-delta combination. The phase relationship will also be the same. Put a 2 kW load on the single transformer and a 2 kW load on the open delta. Now if you connect the single-phase transformer in parallel with the open delta, you have a closed delta with a load of 4 kW. There was no voltage difference between the two systems before you made the connection. The current to support a single-phase load on a delta bank divides equally between the single in-phase transformer and the open-delta equivalent. This assumes similar transformers and equal primary voltages. For a look at a similar division of load, look at the double-delta connection that is the standard connection to convert a three-phase generator to single-phase use.

stevenal (Electrical) 9 Jun 06 17:20
Assume a secondary delta abc with a single-phase load across terminals a and b and no other connection. You have the series combination of windings b to c and c to a in parallel with winding a to b feeding this load. All three windings have current. You can even remove winding a to b without disturbing the current going to the load. Note that this situation is similar to the l-to-l fault I simulated in Aspen. This situation is much easier to analyze. Obviously the one load (or fault) current cannot sum with two zeros to make zero.
But the delta winding currents feeding this load (fault) do sum to zero, or I would be seeing zero sequence primary current in my fault study.
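As a closing footnote to the thread, the triangle-closure idea suggested earlier (law of cosines on the winding-current magnitudes so that the delta closes) can be sketched. The choice of Ia as reference and unity power factor are my assumptions, not the thread's:

```python
import cmath
import math

V = 120.0
ma, mb, mc = 3000 / V, 4000 / V, 5000 / V  # winding current magnitudes (A)

# For Ia + Ib + Ic = 0 the three phasors must form a triangle with these
# side lengths. Law of cosines gives the interior angle between sides ma, mb:
gamma = math.acos((ma**2 + mb**2 - mc**2) / (2 * ma * mb))  # 90 deg here (3-4-5)

Ia = cmath.rect(ma, 0.0)                 # take Ia as the reference
Ib = cmath.rect(mb, -(math.pi - gamma))  # rotate so the phasors chain head-to-tail
Ic = -(Ia + Ib)                          # closes the triangle

assert abs(abs(Ic) - mc) < 1e-9          # law of cosines guarantees |Ic| = mc
assert abs(Ia + Ib + Ic) < 1e-9          # the delta closes: no zero sequence

# Line currents, same D[AC] convention as the rest of the thread
IA, IB, IC = Ia - Ic, Ib - Ia, Ic - Ib
print(round(abs(IA), 1), round(math.degrees(cmath.phase(IA)), 1))  # 60.1 -33.7
```

With this closure, IA comes out at about 60.1 A @ -33.7 deg, essentially the 60.2 A @ -33.6 deg figure posted in the thread (the small difference is rounding), and inside the sqrt(3) sanity bounds suggested earlier.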
Help with M

Re: Help with M
Since there is no known formula for the nth prime, use the loop and memoization.
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.

Re: Help with M
Yes. Here is the function that finds the nth number to satisfy the condition Test:
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment

Re: Help with M
What would you use it for?

Re: Help with M
Well, to get Harshad from HarshadQ, for example.

Re: Help with M
Isn't the one I suggested capable of that, and it is less procedural?

Re: Help with M
Which one?
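The code the thread refers to did not survive the scrape, but the HarshadQ/Harshad pair under discussion amounts to the following, sketched in Python rather than M (the names mirror the thread's; the base-5 variant is the one asked about later in the thread):

```python
def harshad_q(n, base=10):
    """HarshadQ: is n divisible by the sum of its digits in the given base?"""
    s, m = 0, n
    while m:
        s += m % base
        m //= base
    return n % s == 0

def nth_harshad(k, base=10):
    """Find the k-th Harshad number by a plain loop with a running count
    (there is no closed formula, just as with the nth prime)."""
    n = 0
    while k:
        n += 1
        if harshad_q(n, base):
            k -= 1
    return n

print([n for n in range(1, 16) if harshad_q(n, 5)])
# [1, 2, 3, 4, 5, 6, 8, 10, 12, 15]
print(nth_harshad(10, 5))  # 15
```

A cached, extendable list of hits, like the big prime list M keeps internally as one reply describes, would speed repeated queries; for infrequent use the plain loop is enough.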
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Help with M Post #16 looks like a beauty to me. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Help with M But, with that you only test which of those are Harshad. How do you plan on finding the nth one for a given n? The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Help with M By doing what M does. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Help with M The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Help with M Sort of like using NextPrime. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Help with M I still do not get it. The limit operator is just an excuse for doing something you know you can't. 
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Help with M You keep a list and expand upon it each time you need a higher number. M keeps a very big list of primes in memory. But although it would be faster it is harder to program so stick with what you have In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Help with M Oh, I get it. But, I do not think I am going to use the Harshad function so often that I need a list such as a prime list. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Help with M Actually you do not keep the entire list. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Help with M The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Real Member Re: Help with M anonimnystefy wrote: I want to find the 1000th Harshad number in base 5. Sounds like a Brilliant problem! 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: Help with M I agree. It most certainly seems like a Brilliant problem. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Help with M It is not a brilliant problem for L5. You do not need the whole list to speed everything up but you do not require it for a program that run infrequently. CSBFC. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Real Member Re: Help with M I solved that problem. BTW, I have a 17 MB list of primes, so I did not have to compute the primes again 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: Help with M I remember that problem. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Help with M By the way, I was looking over at Rosetta Code. There is a problem that attracts my attention. It's this one. How can it be done in M? The limit operator is just an excuse for doing something you know you can't. 
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Help with M I can provide the core commands but you must tell how you want the game to be played. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Post reply Pages: 1 2
Institute for Mathematics and its Applications (IMA) - Abstracts for New Mathematics and Algorithms for 3-D Image Analysis

Second Chances: Wednesday, January 11; Tuesday, January 10; Monday, January 9

Akram Aldroubi (Vanderbilt University)
Processing of Diffusion-Tensor Magnetic Resonance Images
Diffusion-Tensor Magnetic Resonance Imaging (DT-MRI) is a relatively recent imaging modality. DT-MRI images are measurements of the diffusion tensor of water in each voxel of an imaging volume. They can be viewed as noisy, discrete, voxel-averaged samples of a continuous function from 3D space into the positive definite symmetric matrices. These DT-MRI images can be used to probe the structural and architectural features of fiber tissue such as white matter and the heart ventricles. We will present an overview of the problems, methods and applications associated with DT-MRI data and their processing.

Gaik Ambartsoumian (Texas A & M University), Peter Kuchment (Texas A & M University)
Image Reconstruction in Thermoacoustic Tomography
Thermoacoustic tomography (TCT or TAT) is a new and promising method of medical imaging. It is based on a hybrid imaging technique, where the input and output signals have different physical nature. In TAT, a microwave or radiofrequency electromagnetic pulse is sent through the biological object, triggering an acoustic wave measured in the exterior of the object. The resulting data is then used to recover the absorption function. The poster addresses several problems of image reconstruction in thermoacoustic tomography. The presented results include injectivity properties of the related spherical Radon transform, its range description, reconstruction, and incomplete data problems.

Jung-Ha An (University of Minnesota Twin Cities)
Image Segmentation using a Modified Mumford-Shah Model
The purpose of this paper is to acquire image segmentation using a modified Mumford-Shah model.
A variational region-intensity-based image segmentation model is proposed. The boundary of the given image is extracted by using a modified Mumford-Shah segmentation model. The proposed model is tested against synthetic data and simulated normal noisy human-brain MRI images. The experimental results provide preliminary evidence of the effectiveness of the presented model.

Mark A. Anastasio (Illinois Institute of Technology)
Diffraction tomography using intensity measurements
Diffraction tomography (DT) is a well-established imaging method for determination of the complex-valued refractive index distribution of a weakly scattering object. The success of DT imaging in optical applications, however, has been limited because it requires explicit knowledge of the phase of the measured wavefields. To circumvent the phase-retrieval problem, a theory of intensity DT (I-DT) has been proposed that replaces explicit phase measurements on a single detector plane by intensity measurements on two or more different parallel planes. In this work, we propose novel I-DT reconstruction theories that are applicable to non-conventional scanning geometries. Such advancements can improve the effectiveness of existing imaging systems and, perhaps more importantly, prompt and facilitate the development of systems for novel applications. Numerical simulations are conducted to demonstrate and validate the proposed tomographic reconstruction algorithms.

Heath Barnett (Louisiana State University), Les G. Butler (Louisiana State University) http://chemistry.lsu.edu/butler/, Kyungmin Ham (Louisiana State University)
3D Image Acquisition and Image Analysis Algorithms
Given a near-perfect X-ray source, such as a synchrotron, what are the reasonable options for image reconstruction? Back-projection reconstruction has dominated, with lambda tomography not receiving the attention it deserves.
Given the computation power at the beamline, is it reasonable to perform both reconstructions so as to discern object domains and interfaces? Second, all imaging methods reach a similar bottleneck: image analysis. Here, analysis means counting domains, identifying structure, following paths. If only the “yellow book” (Numerical Recipes: The Art of Scientific Computing by Press, Flannery, Teukolsky, and Vetterling) had another couple of chapters on algorithms for image analysis. Today, we start writing those chapters. We present some sample data sets from our work at the LSU synchrotron, the Advanced Photon Source, and the National Synchrotron Light Source. Also, we discuss potential future issues for neutron tomography at the Spallation Neutron Source.

Oliver Brunke (Universität Bremen)
Synchrotron micro computed tomography as a tool for the quantitative characterization of the structural changes during the ageing of metallic foams
Metallic foams are a rather new class of porous and lightweight materials offering a unique combination of mechanical, thermal and acoustical properties. Their high stiffness-to-weight ratio, acoustic damping properties and thermal resistance provide possible applications in the automobile industry, for instance as crash energy absorbers or acoustic dampers, or in the aerospace industry for lightweight parts in rockets or aircraft. We will demonstrate the methods we use for the analysis of 3D datasets of aluminum foams obtained at the synchrotron µCT facility at HASYLAB/DESY. It will be shown that by means of standard 3D image processing techniques it is possible to study and quantify how different processing parameters, like foaming temperature and time, influence the structure formation and development of metallic foams.

Les G. Butler (Louisiana State University) http://chemistry.lsu.edu/butler/, Eric Todd Quinto (Tufts University) http://www.tufts.edu/~equinto
Summary session: What was accomplished? What's next?
no abstract

Boris Aharon Efros (Ben Gurion University of the Negev)
Multiframe Dim Target Detection Using 3D Multiscale Geometric Analysis
Joint work with Dr. Ofer Levi, Industrial Engineering Department, and Prof. Stanley Rotman, Electrical Engineering Department, Ben-Gurion University of the Negev. We present new multi-scale geometric tools for both analysis and synthesis of 3-D data which may be scattered or observed in voxel arrays, which are typically very noisy, and which may contain one-dimensional structures such as line segments and filaments. These tools mainly rely on the 3-D Beamlet Transform (developed by Donoho et al.), offering the collection of line integrals along a strategic multi-scale set of line segments, the Beamlet set (running through the image at different orientations, positions and lengths). 3D Beamlet methods can be applied in a wide variety of application fields that involve 3D imaging; in this work we focus on applying Beamlet methods to the problem of dim target multi-frame detection and develop specialized tools for this application. We use tools from graph theory and apply them to the special graph generated by the Beamlet set.

Adel Faridani (Oregon State University)
Tomography and Sampling Theory
Computed tomography entails the reconstruction of a function from measurements of its line integrals. In this talk we explore the question: How many and which line integrals should be measured in order to achieve a desired resolution in the reconstructed image? Answering this question may help to reduce the amount of measurements and thereby the radiation dose, or to obtain a better image from the data one already has. Our exploration leads us to a mathematically and practically fruitful interaction of Shannon sampling theory and tomography.
For example, sampling theory helps to identify efficient data acquisition schemes, provides a qualitative understanding of certain artifacts in tomographic images, and facilitates the error analysis of some reconstruction algorithms. On the other hand, applications in tomography have stimulated new research in sampling theory, e.g., on nonuniform sampling theorems and estimates for the aliasing error. Our dual aim is an exposition of the main principles involved as well as the presentation of some new insights.

Alex Gittens (University of Houston) http://tangentspace.net/cz
Frame Isotropic Multiresolution Analysis for Micro CT Scans of Coronary Arteries
Joint with Bernhard G. Bodmann, Donald J. Kouri, and Manos Papadakis. Recent studies have shown that as much as 85% of heart attacks are caused by the rupture of lesions comprising fatty deposits capped by a thin layer of fibrous tissue, so-called vulnerable plaques. An imaging modality for the reliable and early detection of vulnerable plaques is therefore of significant clinical relevance. As a move in that direction, we develop a texture-based algorithm for labeling tissues in high-resolution CT volume scans based upon variations in the local statistics of the wavelet coefficients. We use a fast wavelet transform associated with isotropic, three-dimensional wavelets; as a result, the algorithm is able to process large volume sets in their entirety, as opposed to two-dimensional cross-sections, and retains an orientation-independent sensitivity to features at all levels. The algorithm has been applied to the classification of tissues in scans of coronary artery specimens taken using a General Electric RS-9 Micro CT scanner with a linear resolution of 27 micrometers. In the current revision, it has shown promise for reliably distinguishing fibromuscular, lipid, and calcified tissues.
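The frame-isotropic transform in the preceding abstract is specialized machinery; purely to illustrate the underlying idea of texture features from local wavelet-coefficient statistics, here is a generic single-level separable 3-D Haar decomposition on synthetic data (an editor's sketch, not the authors' method, which is orientation independent):

```python
import numpy as np

def haar_step(x, axis):
    """One orthonormal Haar level along a single (even-length) axis."""
    sl = [slice(None)] * x.ndim
    sl[axis] = slice(0, None, 2)
    a = x[tuple(sl)]
    sl[axis] = slice(1, None, 2)
    b = x[tuple(sl)]
    return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

def haar3d(vol):
    """Single-level separable 3-D Haar transform -> 8 subbands keyed 'aaa'..'ddd'."""
    bands = {"": vol}
    for axis in range(3):
        bands = {key + tag: arr
                 for key, band in bands.items()
                 for tag, arr in zip("ad", haar_step(band, axis))}
    return bands

rng = np.random.default_rng(0)
vol = rng.normal(size=(16, 16, 16)).cumsum(axis=0)  # smooth along axis 0 only

bands = haar3d(vol)
feats = {k: float(v.var()) for k, v in bands.items() if "d" in k}
# Variance per detail subband is a crude, orientation-sensitive texture feature:
# the axis-0 detail ('daa') stays small because the volume is smooth along axis 0.
print(feats["daa"] < feats["ada"])  # True
```

In a real pipeline, per-voxel local statistics (not global variances) would feed a tissue classifier; the point here is only how subband statistics separate directional texture.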
Natalia Grinberg (Universität Fridericiana (TH) Karlsruhe) Factorization method in inverse obstacle scattering Many inverse problems from acoustics, elasticity or electromagnetism can be reduced to the inverse scattering problem for the Helmholtz equation. We consider scattering by inclusions or obstacles in a homogeneous background medium. The factorization method establishes an explicit relation between the spectral properties of the far field operator or its derivatives and the shape of the scatterer. This relation allows one to reconstruct the unknown scattering object pointwise. The factorization method works pretty well for any type of boundary condition and is dimension independent. Gabor T. Herman (City University of New York (CUNY)) http://www.cs.gc.cuny.edu/~gherman/ Discrete tomography Breakout groups 1/11/2006 Gabor T. Herman (City University of New York (CUNY)) http://www.cs.gc.cuny.edu/~gherman/ Recovery of the internal grain structure of polycrystals from X-ray diffraction data using discrete tomography Many materials (such as metals) are polycrystals: they consist of crystalline grains at various orientations. The interaction of these grains with X-rays can be detected as diffraction spots. Discrete tomography can be used to recover the internal orientation arrangement of the grains from such diffraction measurements. Michael Hofer (Vienna University of Technology) 3D Shape Recognition and Reconstruction with Line Element Geometry This poster presents a new method for the recognition and reconstruction of simple geometric shapes from 3D data. Line element geometry, which generalizes both line geometry and the Laguerre geometry of oriented planes, enables us to recognize a wide class of surfaces (spiral surfaces, cones, helical surfaces, rotational surfaces, cylinders, etc.) by fitting linear subspaces in an appropriate seven-dimensional image space.
In combination with standard techniques such as PCA and RANSAC, line element geometry is employed to effectively perform the segmentation of complex objects according to surface type. George Kamberov (Stevens Institute of Technology) Segmentation and Geometry of 3D Scenes from Unorganized Point Clouds We present a framework to segment 3D point clouds into 0D, 1D and 2D connected components (isolated points, curves, and surfaces) and then to assign robust estimates of the Gauss and mean curvatures and the principal curvature directions at each surface point. The framework is point-based. It does not use surface reconstruction, works on noisy data, and no human-in-the-loop is required to deal with non-uniformly sampled clouds and boundary points. The topology and geometry recovery are parallelizable with low overhead. Alexander Katsevich (University of Central Florida) Improved cone beam local tomography A new local tomography function g is proposed. It is shown that g still contains non-local artifacts, but their level is an order of magnitude smaller than that of the previously known local tomography function. We also investigate local tomography reconstruction in the dynamic case, i.e. when the object f being scanned is undergoing some changes during the scan. Properties of g are studied, the notion of visible singularities is suitably generalized, and a relationship between the wave fronts of f and g is established. It is shown that the changes in f do not cause any smearing of the singularities of g. Results of numerical experiments are presented. Richard Ketcham (University of Texas) Surface detection Breakout groups 1/10/2005 Richard Ketcham (University of Texas) Measuring features in volumetric data sets using Blob3D Blob3D is a software project begun at the University of Texas at Austin in 1999 for facilitating measurements of discrete features of interest in volumetric data sets.
It is designed in particular to deal with cases where features are touching or impinging, and to allow up to tens of thousands of features to be processed in a reasonable amount of time. Processing is broken into three stages: segmentation of a phase of interest, separation of touching objects, and extraction of measurements from the interpreted volume. For each stage a variety of three-dimensional algorithms have been created that account for vagaries of CT data, and program design is intended to enable relatively straightforward addition of new methods as they are developed. Separation is the most time-intensive step, as it utilizes manual and semi-automated methods that rely heavily on the user. This approach is most appropriate in many instances where the natural variation and complexity of the features require expert interpretation, but further automation is a future goal. Although designed in particular for geological applications using X-ray CT data, Blob3D is sufficiently general that it can be applied to other data types and in other fields. Richard Ketcham (University of Texas) Surface detection Breakout groups 1/12/2006 Seongjai Kim (Mississippi State University) Zoomable PDEs The presentation introduces edge-forming schemes for image zooming by arbitrary magnification factors. Image zooming via conventional interpolation methods often produces the so-called checkerboard effect, in particular, when the magnification factor is large. In order to remove the artifact and to form reliable edges, a nonlinear convex-concave model and its numerical algorithm are suggested along with anisotropic edge-forming numerical schemes. The algorithm is analyzed for stability and choices of parameters. It has been numerically verified that the resulting algorithm can form clear edges in 2 to 3 iterations of a linearized ADI method. Various results are given to show effectiveness and reliability of the algorithm, for both gray-scale and color images. 
This is a joint work with Dr. Youngjoon Cha. Carl E. Krill III (Universität Ulm) http://www.uni-ulm.de/matwis/ Unraveling the Mysteries of Grain Growth by X-Ray Tomography: Segmentation of the 3-D Microstructure of Polycrystalline Al-Sn During the phenomenon of grain growth, larger grains tend to grow at the expense of their smaller neighbors, resulting in a steady increase in the average grain size. Because the growth of any given grain is affected by that of its neighbors, the behavior of the ensemble of grains is a strong function of nearest-neighbor size correlations. Quantitative information concerning these correlations can be extracted only from a truly three-dimensional characterization of the sample microstructure. We have used x-ray microtomography to measure the size correlations in a polycrystalline specimen of Al alloyed with 2 at.% Sn. The tin atoms segregate to the grain boundaries, where they impart a strong contrast in x-ray attenuation that can be reconstructed tomographically; however, the nonuniform nature of the segregation process presents a formidable challenge to the automated segmentation of the reconstructions. By employing an iterative region-growing algorithm followed by a novel grain-boundary-network optimization routine (based on a phase-field simulation of grain growth), we were able to measure the size, topology and local connectivity of nearly 5000 contiguous Al grains, from which the nearest-neighbor size correlations could be computed. The resulting information was incorporated into a non-mean-field theory for grain growth, the accuracy of which was evaluated by comparing its predictions to the observed microstructure of the Al-Sn samples. Peter Kuchment (Texas A & M University) On mathematics of thermoacoustic imaging Joint with Gaik Ambartsoumian. In thermoacoustic tomography TAT (sometimes called TCT), one triggers an ultrasound signal from the medium by radiating it with a short EM pulse. 
Mathematically speaking, under ideal conditions, the imaging problem boils down to inversion of a spherical Radon transform. The talk will survey known results and open problems in this area. Ofer Levi (Ben Gurion University of the Negev) Real Time Multi-Scale Geometric Segmentation of 3D Images no abstract Chunming Li (Vanderbilt University) http://vuiis.vanderbilt.edu/~licm/ Active Contours with Local Binary Fitting Energy We propose a novel active contour model for image segmentation. The proposed model is based on an assumption that the image is locally binary. Our method is able to segment images with non-homogeneous regions, which is difficult for existing region based active contour models. Experimental results demonstrate the effectiveness of our method, and a comparative study shows its advantage over previous methods. Hyeona Lim (Mississippi State University) Method of Background Subtraction for Medical Image Segmentation Medical images can involve high levels of noise and unclear edges, and therefore their segmentation is often challenging. In this presentation, we consider the method of background subtraction (MBS) in order to minimize difficulties arising in the application of segmentation methods to medical imagery. When an appropriate background is subtracted from the given image, the residue can be considered as a perturbation of a binary image, for which most segmentation methods can detect desired edges effectively. New strategies are presented for the computation of the background and tested along with active contour models. Various numerical examples are presented to show the effectiveness of the MBS for segmentation of medical imagery. The method can be extended to an efficient surface detection of 3-D medical images. Jundong Liu (Ohio University) http://ace.cs.ohiou.edu/~liu A Unified Registration and Segmentation Framework for Medical Images In this paper, we present a unified framework for joint segmentation and registration.
The registration component of the method relies on two forces for aligning the input images: one from the image similarity measure, and the other from an image homogeneity constraint. The former, based on local correlation, aims to find the detailed intensity correspondence for the input images. The latter, generated from the evolving segmentation contours, provides extra guidance in assisting the alignment process towards a more meaningful, stable and noise-tolerant procedure. We present several 2D/3D examples on synthetic and real data. Thomas H. Mareci (University of Florida) http://faraday.mbi.ufl.edu/~thmareci/ Imaging Translational Water Diffusion with Magnetic Resonance for Fiber Mapping in the Central Nervous System Work in collaboration with Evren Ozarslan of the National Institutes of Health and Baba Vemuri of the University of Florida. Magnetic resonance can be used to measure the rate and direction of molecular translational diffusion. Combining this diffusion measurement with magnetic resonance imaging methods allows the visualization of 3D motion of molecules in structured environments, like biological tissue. In its simplest form, the 3D measure of diffusion can be modeled as a real, symmetric rank-2 tensor of diffusion rate and direction at each image voxel. At a minimum, this model requires seven unique measurements of diffusion to fit the model (Basser, et al., J Magn Reson 1994;B:247–254). The resulting rank-2 tensor can be used to visualize diffusion as an ellipsoid at each voxel, and fiber connections can be inferred by connecting the path, defined by the long axis (principal eigenvector) of the ellipse, passing through each voxel. However, the rank-2 model of diffusion fails to accurately represent diffusion in complex structured environments, like nervous tissue with many crossing fibers.
This limitation can be overcome by extending the angular resolution of diffusion measurements (Tuch, et al., Proceedings of the 7th Annual Meeting of Inter Soc Magn Reson Med, Philadelphia, 1999. p 321.) and by modeling the diffusion with higher rank tensors (Ozarslan et al., Magn. Reson. Med. 2003;50:955-965 & Magn. Reson. Med. 2005;53:866-876). At each voxel in this more complete model, the 3D diffusion is represented by an "orientation distribution function" (ODF) indicating the probability of diffusion rate and direction. The diffusion ODF can be used to infer fiber connectivity, but the issue of probable path selection remains a challenge. Moreover, the chosen procedure for path selection will influence the level of resolution required for the measurements. In this presentation, methods of diffusion measurement and examples of diffusion-weighted magnetic resonance images from brain and spinal cord will be presented to illustrate the potential and challenges of path selection leading to fiber mapping in the central nervous system. Frank Natterer (Westfälische Wilhelms-Universität Münster) http://wwwmath.uni-muenster.de/math/u/natterer/ Ultrasound tomography no abstract Ozan Oktem (Sidec Technologies) Electron tomography. A short overview of methods and challenges Already in 1968 it was recognized that the transmission electron microscope could be used in a tomographic setting as a tool for structure determination of macromolecules. However, its usage in mainstream structural biology has been limited, mostly due to the incomplete data problems that lead to severe ill-posedness of the inverse problem. Despite these problems its importance is beginning to increase, especially in drug discovery. In order to understand the difficulties of electron tomography one needs to properly formulate the forward problem that models the measured intensity in the microscope.
The electron-specimen interaction is modelled as a diffraction tomography problem, and the picture is completed by adding a description of the optical system of the transmission electron microscope. For weakly scattering specimens one can further simplify the forward model by employing the first order Born approximation, which enables us to explicitly express the forward operator in terms of the propagation operator from diffraction tomography acting on the specimen, convolved with a point spread function derived from the optics in the microscope. We next turn to the algorithmic and mathematical difficulties that one faces in dealing with the resulting inverse problem, especially the incomplete data problems that lead to severe ill-posedness. Even though we briefly mention single particle methods, our focus will be on electron tomography of general weakly scattering specimens, and we mention some of the progress that has been made in the field. Finally, if time permits, we provide some examples of reconstructions from electron tomography and demonstrate some of the biological interpretations that one can make. Sarah K. Patch (University of Wisconsin) http://www.phys.uwm.edu/people/faculty/patchs.html Thermoacoustic Tomography - Reconstruction of Data Measured under Clinical Constraints Thermoacoustic tomography (TCT) is a hybrid imaging technique that has been proposed as an alternative to x-ray mammography. Ideally, electromagnetic (EM) energy is deposited into the breast tissue uniformly in space, but impulsively in time. This heats the tissue causing thermal expansion. Cancerous masses absorb more energy than healthy tissue, creating a pressure wave, which is detected by standard ultrasound transducers placed on the surface of a hemisphere surrounding the breast. Assuming constant sound speed and zero attenuation, the data represent integrals of the tissue's EM absorptivity over spheres centered about the receivers (ultrasound transducers).
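In the 2D analogue of this data model, each ideal measurement is the mean of the absorptivity over a circle about the transducer. The sketch below is purely illustrative (a made-up disk "lesion" and an idealized point receiver, not the actual TCT setup of the talk):

```python
import numpy as np

def circular_mean(f, center, r, n=720):
    """Approximate the mean of f(x, y) over the circle of radius r
    about center, by sampling n points on the circle (a 2D stand-in
    for the spherical integrals measured in TCT)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x = center[0] + r * np.cos(t)
    y = center[1] + r * np.sin(t)
    return np.mean(f(x, y))

def lesion(x, y):
    """Absorptivity: a small absorbing disk of radius 0.2 at the origin."""
    return (x**2 + y**2 < 0.2**2).astype(float)

# a transducer at (1, 0) records a nonzero value only for radii at
# which its circle of integration actually intersects the lesion
assert circular_mean(lesion, (1.0, 0.0), 0.5) == 0.0
assert circular_mean(lesion, (1.0, 0.0), 1.0) > 0.0
```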
The inversion problem for TCT is therefore to recover the EM absorptivity from integrals over spheres centered on a hemisphere. We present an inversion formula for the complete data case, where integrals are measured for centers on the entire sphere. We discuss differences between ideal and clinically measurable TCT data and options for accurately reconstructing the latter. Henning Friis Poulsen (Riso National Laboratory) Grain maps and grain dynamics — a reconstruction challenge Crystalline materials such as most metals, ceramics, rocks, drugs, and bones are composed of a 3D space-filling network of small crystallites, the grains. The geometry of this network governs a range of physical properties such as hardness and lifetime before failure. Our group has pursued an experimental method, 3DXRD, which for the first time enables structural characterisation of the grains in 3D. Furthermore, changes in grain morphology can be followed during typical industrial processes such as annealing or deformation. 3DXRD is based on reconstruction principles. In comparison to conventional tomography, the use of higher dimensional spaces is required. The projections are subject to group symmetry and their number is inherently limited. The grains exhibit a number of geometric properties which can be utilised. Furthermore, the problem at hand can be reformulated in terms of both vector-type and scalar-type reconstructions. In conjunction these effects make 3DXRD reconstruction mathematically challenging. The 3DXRD method will be presented with a few applications. The algorithms developed so far, for simplifying cases, will be summarised with focus on continuous reconstruction methods. Eric Todd Quinto (Tufts University) http://www.tufts.edu/~equinto An introduction to the mathematics of tomography The speaker will provide an overview of the Radon transform, showing its relationship with X-ray tomography and other tomographic problems.
He will also describe the filtered back projection inversion formula and contrast it with Lambda tomography. Finally, he plans to give an elementary introduction to microlocal analysis and its implications for limited data tomography. Sample reconstructions (tomography pictures) will be provided to illustrate the ideas. Gregory J. Randall (University of the Republic) An Active Regions approach for the segmentation of 3D biological tissue Joint work with Juan Cardelino and Marcelo Bertalmio. Some of the most successful algorithms for the automated segmentation of images use an Active Regions approach, where a curve is evolved so as to maximize the disparity of its interior and exterior. But these techniques require the manual selection of several parameters, which makes working with long image sequences, or with very dissimilar sets of sequences, impractical. Unfortunately this is precisely the case with 3D biological image sequences. In this work we improve on previous Active Regions algorithms in two aspects: by introducing a way to compute and update the optimum weights for the different channels involved (color, texture, etc.) and by estimating if the moving curve has lost any object so as to launch a re-initialization step. Our method is shown to outperform previous approaches. Several examples of biological image sequences, quite long and different among themselves, are presented. Walter Richardson Jr. (University of Texas) Sobolev gradients and negative norms for image decomposition The use of Sobolev gradients and negative norms has proven to be a very useful preconditioning strategy for a variety of problems from mechanics and CFD, including transonic flow, minimal surface, and Ginzburg-Landau. We summarize results of applying this methodology in a variational approach for image decomposition f=u+v+w. Erik L.
Ritman (Mayo Clinic) http://mayoresearch.mayo.edu/mayo/research/staff/ritman_el.cfm Dual-Modality Micro-CT with Poly-Capillary X-ray Optics Conventional attenuation-based x-ray micro-CT is limited in terms of the image contrast it can convey for differentiating different tissue components, spaces and functions. Multi-modality imaging (e.g., radionuclide emission and/or x-ray scatter) can expand the information that can be obtained about those tissue aspects, but a challenge is accurate co-registration of the multiple images needed for the CT image data to be used to enhance the other modality's specificity. Poly-capillary optics consist of bundles of hollow glass capillaries (nominally 25µm in lumen diameter) which can "bend" x-rays or gamma rays by virtue of reflection of the photons within those capillaries. This approach serves both to exclude unwanted radiation (i.e., collimates the radiation) and to allow passage of radiation along accurately described paths - either parallel or focused. As both x-rays from an external x-ray source and from gamma ray emitters and x-ray scatterers within an object can be imaged with this approach, the images from these three modalities are perfectly co-registered. This allows use of the x-ray image to provide for attenuation correction of the internally generated radiation, as well as restricting that emission to specific anatomic structures and spaces by virtue of a priori physiological knowledge. Justin Romberg (California Institute of Technology) Image acquisition from a highly incomplete set of measurements Many imaging techniques acquire information about an underlying image by making indirect linear measurements. For example, in computed tomography we observe line integrals through the image, while in MRI we observe samples of the image's Fourier transform. To acquire an N-pixel image, we will in general need to make at least N measurements. 
What happens if the number of measurements is (much) less than N (that is, the measurements are incomplete)? We will present theoretical results showing that if the image is sparse, it can be reconstructed from a very limited set of measurements essentially as well as from a full set by solving a certain convex optimization program. By "sparse", we mean that the image can be closely approximated using a small number of elements from a known orthobasis (a wavelet system, for example). Although the reconstruction procedure is nonlinear, it is exceptionally stable in the presence of noise, both in theory and in practice. We will conclude with several practical examples of how the theory can be applied to "real-world" imaging problems. Partha S. Routh (Boise State University) http://cgiss.boisestate.edu/~routh Total variation imaging followed by spectral decomposition using continuous wavelet transform In general, geophysical images provide two kinds of information: (a) structural images of discontinuities that define various lithology units and (b) physical property distribution within these units. Depending on the resolution of the geophysical survey, large scale changes can usually be detected that are often correlated with the stratigraphic architecture of the subsurface. Knowledge of these architectural elements provides information about the subsurface. Total variation (TV) regularization is one possibility to preserve discontinuity in the images. Another goal in interpreting these images is to obtain features that have varying scale information. Wavelets have the attractive quality of being able to resolve scale information in a signal or data set. Moreover, heterogeneity produces non-stationary signals that can be effectively analyzed using wavelets due to their localization property.
In this work we will present a new methodology for computing a time-frequency map for non-stationary signals using the continuous wavelet transform (CWT) that is more advantageous than the conventional method of producing a time-frequency map using the Short Time Fourier Transform (STFT). This map is generated from the time-scale map by taking the Fourier transform of the inverse CWT to produce a time-frequency map. We refer to such a map as the time-frequency CWT (TFCWT). Imaging using a total variation regularization operator followed by spectral decomposition using the TFCWT can be used as an effective interpretive tool. Guillermo R. Sapiro (University of Minnesota Twin Cities) http://www.ece.umn.edu/users/guille/ 3D segmentation in tomography In this talk I will describe recent results in the segmentation of relevant structures in electron tomography. We have developed novel techniques based on PDEs to work with this extremely hard data. I will describe the problem and the proposed solution, both at a tutorial level for a general audience. This is joint work with A. Bartesaghi and S. Subramaniam from NCI at NIH. Eric S. Weber (Iowa State University) http://www.public.iastate.edu/~esweber/ Orthogonal Wavelet Frames for Color Image Compression We present an algorithm for constructing orthogonal wavelet frames from MRAs in L2(R), as well as for the associated filter banks. This construction gives rise to a vector-valued wavelet transform (VDWT) for vector-valued data, such as images. We present numerical results of image data compression using the VDWT. Martin Welk (Universität des Saarlandes) Structure-Preserving Filtering of DT-MRI Data: PDE and Discrete Approaches Joint work with: Joachim Weickert, Christian Feddern, Bernhard Burgeth, Christoph Schnoerr, Florian Becker Curvature-driven PDE filters like mean-curvature motion (MCM) and median filters are well-studied as structure-preserving filters for grey-value images.
They are related via a remarkable approximation result by Guichard and Morel. We show generalisations of both types of filters to multivariate, specifically matrix-valued, images. We discuss properties and algorithmic aspects, and demonstrate their usefulness for the filtering of diffusion-tensor data. Art W. Wetzel (Pittsburgh Supercomputing Center) http://www.psc.edu/~awetzel/ A Networked Framework for Visualization and Analysis of Volumetric Datasets Work in collaboration with the PSC-VB team and the Duke Center for In Vivo Microscopy (CIVM) with support from the National Library of Medicine. Volumetric datasets (CT, MRI, EM, etc.) on the gigabyte scale are relatively common in the basic and clinical Life Sciences, and datasets on the terabyte scale will become increasingly common in the near future. At these scales visualization and analysis using typical users' desktop systems is difficult. We have been developing a client-server system, the PSC Volume Browser (PSC-VB), that links the graphics power of users' PCs with remote high performance servers and supercomputing resources to enable the routine sharing, visualization, and analysis of large volumetric and time series datasets. PSC-VB provides the framework for efficient data transfer and data manipulation using both client and server side processing. The system is designed to link extensible toolsets for data analysis, including the National Library of Medicine Insight Segmentation and Registration Toolkit (ITK) and user-provided processing modules. We are currently using PSC-VB for analysis of mouse cardiac function using time series micro-CT volumes and mouse embryo development micro-MRI volumes captured at the Duke CIVM. This talk will provide an overview of the PSC-VB system and its specific application to the CIVM time series data analysis as well as preliminary efforts to build very large data volumes from serial section electron microscopy images.
Although our current applications involve biological data, the general framework is applicable to other data modalities and has been used to view, for example, earthquake ground motions and electromagnetic fields. Common reflection angle imaging Common reflection angle migration (CRAM) is a computationally efficient ray-based seismic imaging technology developed at ExxonMobil which, as its name implies, enables us to form images of the subsurface in which all reflection events are imaged at the same reflection angle. It is most useful in complex imaging situations, such as beneath salt masses where signal/noise is a key issue, and CRAM often enables us to separate signal and noise by comparing and contrasting different common reflection angle volumes. Our poster shows a recent example of how this works in practice. Sinogram decomposition for fan beam transform and its applications Recently several research groups independently proposed a sinogram decomposition approach for different problems in medical imaging. A sinogram is the set of projections of the reconstructed object. The main idea is to treat a sinogram as a family of sinogram curves (s-curves). Each s-curve is obtained by tracing a single object point in the sinogram. There are many operators that can be defined on the space of s-curves: backprojection (sum), minimum/maximum, etc. Therefore, the sinogram decomposition approach can be used in many applications: reconstruction from noisy data, sinogram completion for truncation correction and field-of-view extension, and artifact correction. In this poster we will derive equations of s-curves in fan-beam geometry, native to medical CT scanners, parameterize the family of s-curves through a given sinogram pixel, and consider some of the applications, where we suggest ways for estimation of missing data using this approach.
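For intuition about s-curves, consider the simpler parallel-beam geometry, where a single object point (x0, y0) traces the sinusoid s(theta) = x0*cos(theta) + y0*sin(theta) through the sinogram; the fan-beam s-curves of the poster generalize this. The conventions below are a standard textbook assumption, not taken from the poster:

```python
import numpy as np

def s_curve(x0, y0, thetas):
    """Detector coordinate of the projection of the point (x0, y0)
    at each projection angle, in parallel-beam geometry."""
    return x0 * np.cos(thetas) + y0 * np.sin(thetas)

thetas = np.linspace(0.0, np.pi, 180, endpoint=False)
s = s_curve(2.0, 1.0, thetas)

# the s-curve is a sinusoid whose amplitude is the point's
# distance from the rotation center, sqrt(x0^2 + y0^2)
assert np.isclose(np.max(np.abs(s)), np.hypot(2.0, 1.0), atol=1e-3)
```

Summing sinogram values along each point's s-curve gives (unfiltered) backprojection; taking the minimum or maximum along the same curves gives the other operators mentioned in the abstract.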
[Numpy-discussion] Updated Numpy reference guide Pauli Virtanen pav@iki... Sun Aug 31 13:21:06 CDT 2008 Hi all, I finished the first iteration of incorporating material from Travis Oliphant's "Guide to Numpy" to the Sphinxy reference guide we were constructing in the Doc marathon. Result is here: (the PDF is a bit ugly, though, some content is almost randomly scattered there) Source is here: (Stéfan, if it looks ok to you, could you pull and check if it builds for you when you have time?) What I did with the "Guide to Numpy" material was: - Collapsed each of the reference Chapters 3, 6, 8, 9 (ndarrays, scalars, dtypes, ufuncs) with the more introductory material in Chapter 2. - As this was supposed to be a reference guide, I tried to compress the text from Chapter 2 as much as possible, by sticking to definitions and dropping some more tutorial-oriented parts. This may have reduced readability at some points... - I added some small bits or rewrote parts in the above sections in places where I thought it would improve the result. - I did not include material that I thought was better to be put into appropriate docstrings in Numpy. What to do with class docstrings and obscure __xxx__ attributes was not so clear a decision, so what I did for these varies. - The sections about Ufuncs and array indexing are taken almost verbatim from the "Guide to Numpy". The ndarray, scalar and dtype sections somewhat follow the structure of the Guide, but the text is more heavily edited from the original. Some things to do: - Descriptions about constructing items with __new__ methods should probably still be clarified; I just replaced references to __new__ with references to the corresponding classes. - What to do with the material from numpy.doc.* should be decided, as the text there doesn't look like it should go into a reference manual. Some questions: - Is this good enough to go into Numpy SVN at some point? 
Or should we redo it and base the work closer to the original "Guide to Numpy"? - Does it build for you? (I'd recommend using the development 0.5 version of Sphinx, so that you get the nifty Inter-Sphinx links to the Python documentation.) We are unfortunately beating the Sphinx with a big stick to make it place the documentation of each function or class into a separate file, and to convert the Numpy docstring format to something the Sphinx can There's also some magic in place to make toctrees:: of function listings more pleasant to the eye. Any comments of what should be improved are welcome. (Even better: clone the bzr branch, make the changes yourself, and put the result somewhere available! E.g. as a bzr bundle or a branch on the launchpad.) Pauli Virtanen More information about the Numpy-discussion mailing list
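The docstring munging the post alludes to, turning NumPy-format sections into reST that stock Sphinx accepts, can be pictured with a toy transformation (an illustrative stand-in, not the project's actual converter):

```python
import re

def numpydoc_to_rst(doc):
    """Toy conversion: a section header underlined with dashes
    (as in NumPy docstrings) becomes a reST rubric directive."""
    return re.sub(
        r"(?m)^(\w[\w ]*)\n-+\n",
        lambda m: ".. rubric:: %s\n\n" % m.group(1),
        doc,
    )

src = "Add two numbers.\n\nParameters\n----------\nx : int\n"
out = numpydoc_to_rst(src)
assert ".. rubric:: Parameters" in out
```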
shared by Denis S on 03 Mar 10 - Cached shared by Denis S on 25 Feb 10 - Cached shared by Denis S on 04 Feb 10 - Cached shared by Denis S on 25 Jan 10 - Cached Mike McIlveen on 07 Nov 09 Java applets some of which the website claims may be used for assessment. E.g. Venn, Fractions, Geometry Lots for Grades 3-8 "In this section, you will find some fun ways to learn about math. You can start out with Estimation of Length, Place Value and Weight and Capacity. If you want something more challenging, take a look at Line Symmetry, Patterns and Tangrams." This is a collection of interactive lessons from a Grade 4 Math course developed by Winpossible - the course uses an innovative instruction format by combining an engaging animated character's visual and voice with Winpossible's unique ChalkTalk™ technology (patent pending) in order to replicate the "classroom experience" for students. The course matches with the Grade 4 correlations of most states in the US. Large collection of math activities, lessons, standards, and web links from the NCTM 1 - 11 of 11 items per page Selected Tags Related tags
{"url":"https://groups.diigo.com/group/math-links/content/tag/lessons%20Interactive","timestamp":"2014-04-17T19:13:14Z","content_type":null,"content_length":"81513","record_id":"<urn:uuid:aa63754f-24bf-441e-ad44-dec373ceb1fd>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
Grantville, GA Algebra 2 Tutor Find a Grantville, GA Algebra 2 Tutor ...I initially focus on improving the student's reading comprehension skills in all subjects (math and science included), since this is the ultimate key to academic success. I teach them how to thoroughly study class notes, as well as highlight key concepts, study graphs/tables and read chapter summaries in their textbooks. I also emphasize reviewing class notes on a nightly basis. 57 Subjects: including algebra 2, reading, chemistry, geometry ...I also have several years experience tutoring Chemistry and Math privately and while working at my University's Math and Science Learning Center. In addition I was in charge of Chemistry group tutoring sessions working directly with the Chemistry professors to coordinate lessons. Lastly during ... 10 Subjects: including algebra 2, chemistry, geometry, algebra 1 ...It is my mission to help students understand math, to see how it fits together, and to become independent, successful learners. I know that takes time and consistency, both of which I am more than willing to provide. I am a former math teacher, and a former teacher educator. 8 Subjects: including algebra 2, statistics, algebra 1, trigonometry ...I am a published chemist and a lecturer at the graduate school level. I am certified to teach science and mathematics in Ohio, was a "preferred substitute teacher" in two school districts in NE Ohio, and have tutored and taught science and mathematics in high school and middle school in Ohio for... 12 Subjects: including algebra 2, chemistry, algebra 1, GED Teaching others to understand and appreciate mathematics is an outlet for me. Because I enjoy mathematics, over the years I have learned how to teach and show my clients that it is nothing they can't do if they give themselves time to learn and understand. I'm a graduate of Southern Polytechnic State University. 
9 Subjects: including algebra 2, algebra 1, precalculus, trigonometry Related Grantville, GA Tutors Grantville, GA Accounting Tutors Grantville, GA ACT Tutors Grantville, GA Algebra Tutors Grantville, GA Algebra 2 Tutors Grantville, GA Calculus Tutors Grantville, GA Geometry Tutors Grantville, GA Math Tutors Grantville, GA Prealgebra Tutors Grantville, GA Precalculus Tutors Grantville, GA SAT Tutors Grantville, GA SAT Math Tutors Grantville, GA Science Tutors Grantville, GA Statistics Tutors Grantville, GA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Chattahoochee Hills, GA algebra 2 Tutors Concord, GA algebra 2 Tutors Gay, GA algebra 2 Tutors Glenn, GA algebra 2 Tutors Greenville, GA algebra 2 Tutors Haralson algebra 2 Tutors Hogansville algebra 2 Tutors Luthersville algebra 2 Tutors Molena algebra 2 Tutors Moreland, GA algebra 2 Tutors Roopville algebra 2 Tutors Sargent, GA algebra 2 Tutors Turin, GA algebra 2 Tutors Warm Springs, GA algebra 2 Tutors Williamson, GA algebra 2 Tutors
{"url":"http://www.purplemath.com/grantville_ga_algebra_2_tutors.php","timestamp":"2014-04-21T02:18:27Z","content_type":null,"content_length":"24275","record_id":"<urn:uuid:0f984a5c-7470-485a-be84-eeb68b4124cd>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
Boffy's Blog

Crises Analysis – 1847, 1857, 2008, 20?? (8)

c) 2008 - continued

The expansion of debt, described in Part 87, had two aspects. Firstly, as the fall in global commodity values proceeded, the expansion of credit led to a fall in the value of money tokens, which prevented the fall in values being manifest as a global deflation of commodity prices. But, this same process led to an unprecedented inflation of financial asset prices. Between 1982 and 2000, the Dow Jones Index rose from 1,000 to 10,000, a percentage rise way in excess of the growth of the US economy during that period. Similar rises took place in other stock markets, and in bond markets. In addition, property markets, in several countries, experienced the same kind of bubbles. The bubbles in these asset markets were, in turn, the unsafe collateral on which individuals were encouraged to take on even more unsustainable levels of debt.

Secondly, these growing levels of debt were the means by which the increasing gap between the exports of the US and UK to Asia, and their imports from Asia, was bridged. China and other Asian economies produced masses of cheap commodities, a large proportion of which were sold to the US and UK. These commodities contained large amounts of produced surplus value, but to realise it, US and UK consumers had to buy those commodities, despite stagnant or falling real wages. They did so by taking on ever larger levels of debt, collateralised on increasingly outrageous valuations of their houses, shares and bonds. The credit was provided in large part by those same Chinese and Asian producers, who recirculated the dollars they received into the purchase of US and UK bonds. In doing so, they ensured that they could continue to sell their commodities into these economies.

This situation, described in Part 74, is similar to that described by Marx, in Capital III, in relation to the 19th Century trade with China and India, except the situation is reversed. Then, Mill and Ricardo etc.
argued that Britain was not overproducing, but China and India were under-consuming. So, Britain forced its loans on China so that it could continue to consume British exports. Today, no one is forcing Britain or the US to borrow from China, but the end result is the same. Chinese overproduction is allowed to continue for so long as the loans keep flowing to finance the consumption.

At some point, when the flow of profits declines, so that the supply of money-capital declines relative to its demand, then global interest rates will rise. When interest rates rise, borrowers default. When borrowers default, lenders also go bust and become loath to lend. But, those who have borrowed to consume can no longer continue to do so. The producers of the commodities they previously bought now find their market has dried up, and their overproduction becomes manifest.

Nor can Central Banks remedy this situation of insolvency by printing money. Marx says,

"Ignorant and mistaken bank legislation, such as that of 1844-45, can intensify this money crisis. But no kind of bank legislation can eliminate a crisis."

In a system based on credit, a crisis must occur when it's not available. Bills represent sales, and purchases, "whose extension far beyond the needs of society is, after all, the basis of the whole crisis." In a message to today's politicians and central bankers, he continues,

"The entire artificial system of forced expansion of the reproduction process cannot, of course, be remedied by having some bank, like the Bank of England, give to all the swindlers the deficient capital by means of its paper and having it buy up all the depreciated commodities at their old nominal values."

"Incidentally, everything here appears distorted, since in this paper world, the real price and its real basis appear nowhere, but only bullion, metal coin, notes, bills of exchange, securities.
Particularly in centres where the entire money business of the country is concentrated, like London, does this distortion become apparent; the entire process becomes incomprehensible; it is less so in centres of production."

Once the mass of potential money-capital begins to fall relative to its demand, the consequent rise in interest rates cannot be reversed by increased money printing, because, under these conditions, the devaluation of the currency simply results in inflation. Suppliers of money-capital then demand even higher rates of interest to compensate. But, this is to get ahead of ourselves.

Crises Analysis – 1847, 1857, 2008, 20?? (7)

c) 2008

The background to the financial crisis of 2008 has been described earlier, in examining the post-war slump that ran from around 1974 to 1999. As was seen, that period of crisis and stagnation was fully explicable in terms of Marx and Engels' theories of crisis. The period of post-war boom led to the frequent kind of cyclical crisis in which capital is overproduced, as a result of exuberance, based on high rates of profit and strong demand. The development of new technologies led to the creation of a series of new industries, into which capital could flow. As each crisis was resolved, the boom continued and the accumulation of capital went on apace.

"On the other hand, new lines of production are opened up, especially for the production of luxuries, and it is these that take as their basis this relative over-population, often set free in other lines of production through the increase of their constant capital. These new lines start out predominantly with living labour, and by degrees pass through the same evolution as the other lines of production.
In either case the variable capital makes up a considerable portion of the total capital and wages are below the average, so that both the rate and mass of surplus-value in these lines of production are unusually high." (Capital III, Chapter 14)

But, this period gave way to a period of stagnation in the 1980's and 90's. During this period, the potential for establishing new, high-value/high-profit industries is limited. This limits, therefore, the potential for capital accumulation, which is the basis of the stagnation. This is more pronounced in developed economies such as the US and UK, because industrial capital, in search of higher rates of profit in existing industries, begins to relocate to a number of low-wage economies in Asia, where a combination of high levels of productivity, resulting from the capital-intensive nature of production, with low wages, results in a high rate of surplus value, and a higher rate of profit than is possible in developed economies. This results in a rapid industrialisation of a number of these Asian Tigers, and in particular China, alongside the de-industrialisation of the economy in the US and UK. In these two economies, in particular, political decisions of their conservative governments, tied historically, sociologically and electorally to the petit-bourgeoisie, the small capitalists, and to the landed and financial oligarchy, encourage this process.

This is the situation Marx describes in Capital III, Chapter 20, where he sets out the way that the predominance of Merchant Capital (in which he includes money-dealing capital) always leads to such backwardness, compared to where industrial capital predominates.

"On the contrary, wherever merchant's capital still predominates we find backward conditions.
This is true even within one and the same country, in which, for instance, the specifically merchant towns present far more striking analogies with past conditions than industrial towns." (Capital III, Chapter 20)

It connects up, Marx says, with those other throwbacks, the landed and financial oligarchy. The centre for such interests is London, and it is in London and its environs that the support for these reactionary policies pursued by the Tories is greatest, whereas in the rest of the country support for the Tories is much lower.

Instead of a social-democratic strategy, such as that followed in Germany, of seeking to move industrial capital away from these old, low-profit industries into newer high-value/high-profit industries, capable of sustaining higher wages on the basis of high levels of productivity and the use of complex labour, these governments instead fell back on to a 19th century strategy of trying to extract absolute surplus value, on the basis of low wages, that could now be imposed after costly battles had led to the defeat of the workers and the weakening of their organisations. But, for all the reasons Marx and Engels described, in relation to the limitations of extracting absolute surplus value, this strategy was never likely to be successful in the longer term.

As described in those sections, one consequence of this, and of the solutions adopted in these economies, was that large amounts of money tokens and credit-money were produced. In the global economy, the shift of production to Asia had caused the value of many commodities to fall drastically. This was intensified in the 1990's, as new technologies began to be introduced, which raised productivity, and also significantly reduced the turnover time of capital. Both brought a sharp rise in the rate and mass of profit.
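The last point, that a shorter turnover time raises the rate and mass of profit, can be shown with Marx's formula for the annual rate of profit (annual surplus value over the capital advanced). The figures below are hypothetical, chosen only to illustrate the mechanism the text relies on:

```python
# Illustrative sketch: the annual rate of profit rises as the turnover
# time of capital falls, because the same advanced variable capital
# produces surplus value more times in the year.

def annual_rate_of_profit(c, v, s_rate, n):
    """Annual surplus value (s_rate * v * n) over the capital advanced (c + v)."""
    return (s_rate * v * n) / (c + v)

# A capital of 800 constant + 200 variable, 100% rate of surplus value.
slow = annual_rate_of_profit(800, 200, 1.0, 2)  # two turnovers a year
fast = annual_rate_of_profit(800, 200, 1.0, 4)  # turnover time halved

print(f"2 turnovers a year: {slow:.0%}; 4 turnovers a year: {fast:.0%}")
# Halving the turnover time doubles the annual rate of profit, 40% to 80%.
```

Nothing here depends on the particular numbers: with the rate of surplus value and the capital advanced unchanged, the annual rate of profit varies directly with the number of turnovers.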
In so far as capital flowed into some of these new technology industries in the developed economies, particularly in the US, their own high-value/high-profit production led to rapid growth for a number of companies. The growth in the mass and rate of profit from the late 1980's onwards was so huge, however, that even after creating whole new economies in Asia etc., and whole new industries in the area of technology, the supply of money-capital exceeded its demand for conversion into productive-capital. It pressed down, therefore, on global money markets, on interest rates, in the way Marx describes. Global interest rates fell almost continuously from 1982 until 2012, at first because the demand for capital was low, and then because its supply was high.

This combination of low interest rates, caused by a rising mass of profit, and a devaluation of currencies resulting from the expansion of credit, was the basis for expanding aggregate demand in the UK and US as real wages stagnated or fell. A central plank was the deregulation of financial markets in the late 1980's, introduced by Thatcher and Reagan, which encouraged more and more people to go into debt, and removed all controls from the banks and finance houses that lent to them and who then gambled, via a series of derivatives, over how many of these debts would actually go bad.

In the upcoming elections to the European Parliament, in May, the Right and Far Right will do very well. In Britain, UKIP are likely to come second to Labour, beating even an increasingly conservative and eurosceptic Tory Party into third place. The Liberals, particularly after Clegg's abysmal performance against Farage, are likely to be annihilated. It could even spell an early bath for Clegg, if not for the Coalition. In France, the Front National may even top the poll. It has sought to shed its neo-fascist image, to present itself as merely an extreme nationalist party, similar to UKIP.
Yet, the history of many of the members of both parties – the BNP has openly admitted encouraging its members to join UKIP – and the underlying racism of both parties, remains. Similar Far Right parties in the Netherlands, in Austria, Finland and elsewhere look set to benefit from the nationalist bandwagon that short-sighted policies of austerity have generated across Europe. As living standards have dropped, and services been cut, as a result of those policies, the usual scapegoat of foreigners – be it EU bureaucrats, or immigrants – has formed an easy target, facilitating the message of the Far Right.

But, in many ways, all this gives a false picture. The main reason these Far Right parties will do well is that the turnout, in the EU elections, will be low. It is the same reason UKIP, and even the BNP, did well in the last Euro elections. It's why they tend to do well in local elections, where the turnout is usually less than 30%. Even where UKIP have done relatively well in by-elections, their actual share of the vote, for a normal General Election, has not been that significant. They have done well, in Labour constituencies, only to the extent that the Tories have done appallingly. Compared to the Labour vote, they have continued to lag well behind. No one seriously believes that, in a General Election, Farage and his circus of "loonies, fruitcakes and closet racists" will even win one seat, let alone pose any significant chance of winning. The most likely effect will be to take sufficient votes from the Tories to let Labour win.

Look at the experience of the BNP. It held many council seats, won in small-turnout elections, in the same way it won its Euro seats. Today, it's a busted flush. In the General Election, it went backwards; it lost most of the council seats it had; it's bankrupt politically and financially. Even the most successful of the Far Right parties, the FN in France, has no chance of winning the Presidency or a majority in the National Assembly.
It benefits from the semi-proportional representation system in France, as did the BNP and UKIP in the last Euro elections. But, the success of the Far Right in the Euro elections is likely to be part of their undoing. When Jean Marie Le Pen managed to get into the final round of the Presidential elections, several years ago, the response of the establishment was to muster against him, in favour of Chirac. The same could be seen in relation to the BNP, at its height, and to an extent today with Farage – though in part he has been a media-created figure in his own right.

Capital, particularly big capital, has no need of these Far Right, and certainly not fascist or neo-fascist, parties at the moment. In fact, after their experiences with Hitler in the 1930's, they are likely to set a high threshold before they resort to such measures again. The Far Right represent a destabilising force that capital does not need when it is secure in its position, entrenched within resilient bourgeois social-democratic regimes. Although the success of UKIP is likely to exert a further centrifugal force on the Tory Party – sending its conservative wing off in the direction of Farage, and its social-democratic wing off towards what is left of the Liberals and towards Labour – the main result will be a further coming together of the interests of big capital, under the aegis of social democracy. Whether that social democracy has the party label Labour or Tory, SDP or CDU etc. does not matter.

The other reason that May will mark the high water mark for the Far Right is the economic conjuncture. The long wave cycle turned from its Spring phase to its Summer phase around 2012/13. That means that strong global growth continues until around 2025-30, just as it did between 1999-2008, and indeed as it has done, in most of the world outside Europe and North America, since 2009. But, the conditions under which that growth occurs have changed, and will continue to change.
Firstly, the high prices of raw materials that characterised the earlier period stop rising and begin to fall, as large, new sources of supply come on stream. Secondly, the large gains in productivity that reduced the values of commodities and pushed up profits slow down. Thirdly, the large increases in the supply of labour-power (both new workers and the relative surplus population due to productivity growth) slow significantly. China is already experiencing that, and seeking sources of cheap labour in Vietnam, Indonesia, Africa etc. Even Britain is experiencing shortages of some skilled workers, exacerbated by the immigration cap. In the US, it was revealed that the top technology companies have formed a secret cartel so that none of them poach highly paid workers from the others, which would push up wages even further.

The consequence is that countries producing manufactured goods and services find it harder to sell to primary producing economies, as the latter see their income fall as raw material prices fall. Secondly, the latter see their currencies fall as their income falls. This pushes up domestic inflation. Workers seek higher wages, so profits fall. The rash of strikes across South Africa's mining sector is an indication of this process. But, this comes at a time when other emerging markets are seeing their currencies fall and inflation rates climb, in the backwash of the tapering of QE in the US. The result is sharply higher interest rates in these economies, to defend the currency and curtail inflation. But, this process plays into, and is part of, a general rise in interest rates across the global economy. Thirdly, the slowdown in productivity growth means that the fall in commodity values slows down or stops. That is exacerbated by the fact that the world's main manufacturing power – China – has faced rising costs and a rising currency value, which makes the commodities it supplies to the world's consumers increasingly expensive.
In the last thirty years, a massive expansion in the quantity of money tokens and credit-money, pumped into circulation, did not cause consumer price inflation, only because the value of those consumer goods was itself being massively reduced. In a world of slowing productivity growth and rising commodity values, the massive amount of liquidity already in circulation will inevitably result in sharply rising inflation. The latest US data already suggest inflation is rising, and the only reason inflation in the UK and EU has fallen (besides the fact the figures are bogus, because they do not include rising housing and pension costs) is because the values of the pound and the euro have risen against the dollar, reducing import costs. As interest rates rise across the globe, the money that flowed into Europe and the US will flow out again, causing their exchange rates to fall and inflation rates to rise, and prompting another rise in interest rates, as bond investors seek to defend their assets against depreciation.

The consequence of this is a weakening of the economic conditions which have strengthened the positions of those sections of capital on which conservative and nationalist parties rely. Low interest rates are the condition for the growth of the "plethora of small capital", as Marx describes it. It is seen in the 150,000 businesses in Britain described as "zombie firms", which just about survive, being able to pay this low rate of interest, but unable even to produce enough profit to repay the capital sum they have borrowed. These zombie firms cling to existence on the back of these low interest rates, and on the back of the extraction of absolute surplus value from their low-paid workers, who make up many of those on zero hours contracts.
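The "zombie firm" condition just described can be put in a couple of lines. The figures here are invented for illustration; the point is only the threshold, profit covering the interest but never the principal:

```python
# Hypothetical figures illustrating the "zombie firm" threshold described
# above: profit just covers the interest charge at a low rate, so the firm
# survives, but it can never repay the capital sum borrowed, and a modest
# rise in the rate of interest pushes it under.

def can_service_debt(annual_profit, principal, interest_rate):
    """True if annual profit at least covers the year's interest charge."""
    return annual_profit >= principal * interest_rate

profit = 3_000        # annual profit (illustrative)
principal = 100_000   # capital sum borrowed (illustrative)

print(can_service_debt(profit, principal, 0.02))  # True: survives at 2%
print(can_service_debt(profit, principal, 0.05))  # False: insolvent at 5%
```

Even in the surviving case the firm repays none of the £100,000 principal out of its own profit, which is the sense in which it is a zombie.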
Many survive only because the low wages they pay are subsidised by the state, through a transfer of tax taken from the wages of other workers, and because their workers, even then, have to resort to Pay Day lenders to make ends meet. These small capitalists are the bedrock of the Tory Party, and what they represent makes up the bulk of votes for the Tories. UKIP simply represents the more extreme, more consistent exposition of those ideas. But, the Tories also draw support from other traditional sources, from the financial and landed oligarchy, and commercial capital. As Marx points out, wherever these interests predominate, the political regime is more reactionary than where industrial capital predominates. The centre for these interests in Britain is London and its environs, and it's there that the Tories have most of their support.

But, the consequence of the change in the conjuncture is that, as well as interest rates rising for the reasons outlined, the rate of profit begins to fall, as all those causes of it previously rising go into reverse. A fall in the rate of profit first hits all of that plethora of small capital. The initial effect is likely to be a sharp rise in unemployment, as the zombie firms go bust. The large-scale disguised unemployment of millions employed part-time, on temporary contracts, underemployed, and on zero hours contracts will then be exposed, along with all of those who are supposedly self-employed, but who are simply scraping a living from underemployment on their own account.

But, ironically, big capital may benefit from this process. Part of the drag on its costs, represented by the taxes on its workers to subsidise low-paying small capitalists, will be lifted. To the extent it picks up capital on the cheap, its rate of profit will rise. More workers will be picked up by this big capital, paid higher wages, and may for the first time become organised in unions.
But, in any case, the fall in the rate of profit, at a time when more productive capital will need to be employed as productivity growth falls, and in order to retain markets, means that interest rates will rise, as the supply of money-capital falls relative to its demand. The likely consequence will be, in the short term, a more serious financial collapse even than that of 2008/9. It will fatally weaken the power of the financial and landed oligarchy and the merchant capitalists, as workers end their obsession with debt-fuelled consumption and property speculation. It will, by contrast, strengthen big industrial capital and encourage its logical drive to establish a European state. To the extent it does that by mobilising social-democratic forces to achieve it, the power of conservative and nationalistic ideas will be further weakened.

In the second half of 2014, a new 3-year cycle will lead to a slowdown in growth that will last until around mid-2015. Survey data is already indicating the onset of that cycle. In countries like the UK, where austerity has been inflicted, it will give the lie to the idea that those policies have been beneficial. In Britain, where much of the recovery has been built on an encouragement of further debt, and the same kind of state intervention in the property market that led to the US sub-prime crisis, that is likely to be even more acute, particularly considering the huge number of people who now rely on Pay Day lenders and food banks. Despite the government throwing everything it could at it, outside London and a few other cities, the property market barely flickered. How could it do any more, when, in most of Britain, around half the working-age population now use Pay Day loans, and about a quarter of the population have used food banks?
The Liberal-Tory claims that we are all in this together, suggested again recently by Employment Minister Esther McVey, who said, "It’s been a tough time for you, for me and everybody in the UK but we’ve now turned that round." (Paul Mason's Blog), show just how remote they are from the real world. A slowdown in the economy, rising unemployment, rising interest rates, and increasing debt defaults will kill the property market. All suggestions that "this time it's different" will be shown to be as false as when the same statements preceded the 75% drop in the NASDAQ in 2000! The denouement in all this financial froth will be the death knell for those conservative and nationalist political forces that rose on the back of it. We should say good riddance to both.

Crises Analysis – 1847, 1857, 2008, 20?? (6)

b) 1857 - continued

Marx explains the evolution of the crisis of 1857, and its relation to the cycle of stagnation, prosperity, boom, and crisis, and how this impacts the rate of profit and of interest, in similar terms to those I have set out elsewhere in relation to the Long Wave. Marx writes,

"After the reproduction process has again reached that state of prosperity which precedes that of over-exertion, commercial credit becomes very much extended; this forms, indeed, the "sound" basis again for a ready flow of returns and extended production. In this state the rate of interest is still low, although it rises above its minimum. This is, in fact, the only time that it can be said a low rate of interest, and consequently a relative abundance of loanable capital, coincides with a real expansion of industrial capital. The ready flow and regularity of the returns, linked with extensive commercial credit, ensures the supply of loan capital in spite of the increased demand for it, and prevents the level of the rate of interest from rising.
On the other hand, those cavaliers who work without any reserve capital or without any capital at all and who thus operate completely on a money credit basis begin to appear for the first time in considerable numbers. To this is now added the great expansion of fixed capital in all forms, and the opening of new enterprises on a vast and far-reaching scale. The interest now rises to its average level. It reaches its maximum again as soon as the new crisis sets in. Credit suddenly stops then, payments are suspended, the reproduction process is paralysed, and with the previously mentioned exceptions, a superabundance of idle industrial capital appears side by side with an almost absolute absence of loan capital."

In fact, at the moment before such crises erupt, everything appears to be going well, because, during such periods, asset and commodity prices are at their height, having been raised in the preceding boom.

"Thus business always appears almost excessively sound right on the eve of a crash. The best proof of this is furnished, for instance, by the Reports on Bank Acts of 1857 and 1858, in which all bank directors, merchants, in short all the invited experts with Lord Overstone at their head, congratulated one another on the prosperity and soundness of business — just one month before the outbreak of the crisis in August 1857. And, strangely enough, Tooke in his History of Prices succumbs to this illusion once again as historian for each crisis. Business is always thoroughly sound and the campaign in full swing, until suddenly the debacle takes place."

At this point, the demand for money as means of circulation and means of payment rises sharply. That very shortage causes the holders of money to hang on to it, increasing the shortage further, whilst sellers, who have previously been happy to accept payment on credit, now demand hard cash, increasing the shortage further. Short-run interest rates are driven sharply higher. There is a credit crunch.
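The movement of the rate of interest through the phases of the cycle, as Marx describes it in the passage above, can be summarised schematically. The phase names and behaviours below are only a restatement of the text, not an addition to it:

```python
# A schematic summary of Marx's description, above, of the rate of
# interest over the industrial cycle. Nothing here is quantitative; it is
# simply the sequence of phases and the behaviour of interest in each.

CYCLE = [
    ("stagnation", "rate of interest at its minimum; idle loanable capital abundant"),
    ("prosperity", "interest low, though above its minimum; commercial credit extends"),
    ("boom",       "interest rises to its average; fixed capital and new enterprises expand"),
    ("crisis",     "interest at its maximum; credit stops and payments are suspended"),
]

for phase, interest_behaviour in CYCLE:
    print(f"{phase:>10}: {interest_behaviour}")
```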
In 1847 and 1857, it was the limitations of the 1844 Bank Act which exacerbated this shortage. Engels writes,

"By such artificial intensification of demand for money accommodation, that is, for means of payment at the decisive moment, and the simultaneous restriction of the supply the Bank Act drives the rate of interest to a hitherto unknown height during a crisis. Hence, instead of eliminating crises, the Act, on the contrary, intensifies them to a point where either the entire industrial world must go to pieces, or else the Bank Act. Both on October 25, 1847, and on November 12, 1857, the crisis reached such a point; the government then lifted the restriction for the Bank in issuing notes by suspending the Act of 1844, and this sufficed in both cases to overcome the crisis. In 1847, the assurance that bank-notes would again be issued for first-class securities sufficed to bring to light the £4 to £5 million of hoarded notes and put them back into circulation; in 1857, the issue of notes exceeding the legal amount reached almost one million, but this lasted only for a very short time."

Second Case. A Change in the Price of Materials of Production, All Other Circumstances Remaining the Same.

Marx examines what happens if everything else is held constant, but the price of materials is halved. Of the £900 advanced capital, 4/5, or £720, was previously spent on materials, and £180 on wages. If the price of materials falls by 50%, only £360 is required for 9 weeks, or £240 for the 6-week working period. £180 is still required for wages, so the total capital advanced for 9 weeks is £180 + £360, or £540. That means £360 of the original £900 capital is now released. If the business is not to be expanded, this released capital now becomes superfluous, and enters the money market, in search of some other venture to finance.

"If this fall in prices were not due to accidental circumstances (a particularly rich harvest, over-supply, etc.)
but to an increase of productive power in the branch of production which furnishes the raw materials, then this money-capital would be an absolute addition to the money-market, and to the capital available in the form of money-capital in general, because it would no longer constitute an integral part of the capital already invested.” (p 295-6)

In other words, this money could only act as permanently released capital if the fall in prices was itself permanent rather than a temporary fluctuation in market prices. If it were the latter, it would be likely to be cancelled out by a future variation in the opposite direction. But a fall in price caused by a fall in value is itself reflected in the fact that, with the fall in the value of materials, goes a fall in the value of the end product. Less money-capital is advanced to purchase materials, and a smaller corresponding amount is returned from the sale of the end product. Less capital circulates in this sphere (£360) and is spun off to the money market.

Third Case. A Change in the Market Price of the Product Itself.

It should be noted that this is a change in its market price, not its value. A change in market price arises as a consequence of changes in its demand and supply. A change in its value arises from a change in the socially necessary labour time required for its production. Suppose a commodity is produced by the average productivity, but, when it is brought to market, for some reason, for example a change of fashion, demand for it has fallen sharply. Supply exceeds demand and prices fall. Technically, too much labour-time has been spent on its production, but this may be merely a temporary situation. If the product is ice cream, and this week is cold, demand next week, when there is a heatwave, could more than compensate for this week's low demand. Either way, the fact that the commodity has to be sold at a market price below its exchange value represents a capital loss for the seller.
In order to continue production on the same scale, they will have to make it good with additional capital from their own pocket, or by borrowing. The loss to the seller may be a gain to the buyer. If the price of ice cream falls this week, because of bad weather, the producers and wholesalers may suffer a loss as prices fall. But vendors who buy up these cheap supplies will benefit if they sell them next week during the height of a heat wave. That is a direct gain for the buyer. But the buyer may gain, “Indirectly, if the change of prices is caused by a change of value reacting on the old product and if this product passes again, as an element of production, into another sphere of production and there releases capital pro tanto.” (p 296)

In this case, the producer of X has sent it to market, having expended say £80 in materials and £20 in wages on its production. In the meantime, the value of the materials falls to £70, and only this can now be recovered in the product's price, which falls to £90. If X is used in the production of Y, the producers of Y gain indirectly, because £10 of the capital they previously advanced has now been released. But the producer of X does not really suffer a loss here. The £90 they receive for X is enough to buy the replacement labour-power, and the materials at their new price of £70. They can continue production on the same scale. They may have suffered a paper capital loss, because the historic price paid for these materials was £80 and they are now worth only £70, but not only is the £70 they now receive, as part of the price of their own commodity, sufficient to reproduce this capital, but because the capital they now have to advance for production has been reduced by £10, their rate of profit is correspondingly increased. In short, any surplus value they produce would now buy a greater quantity of these materials than it did previously. The same is true in reverse if prices rise.
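The arithmetic of released capital in the Second and Third Cases reduces to a simple difference between the old and new advances. A minimal sketch (the function and its name are mine, purely for illustration; the figures are the ones Marx uses above):

```python
def capital_released(materials_cost, wages, new_materials_cost):
    """Capital set free when the price of materials falls, with wages
    and the scale of production unchanged."""
    old_advance = materials_cost + wages
    new_advance = new_materials_cost + wages
    return old_advance - new_advance

# Second Case: of the £900 advanced, £720 went on materials and £180
# on wages; the price of materials is halved.
print(capital_released(720, 180, 360))   # 360 -- freed for the money market

# Third Case (indirect gain): £80 materials + £20 wages; the value of
# the materials falls to £70 before the product is sold on.
print(capital_released(80, 20, 70))      # 10 -- released for the producer of Y
```

As the text notes, the released sum only acts as permanently freed money-capital if the fall in price reflects a lasting fall in value, not a passing fluctuation.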
A rise in market price not related to a change in value provides a capital gain to the seller and a capital loss to the buyer. But a higher price could also be due to a change in its value resulting from productivity changes arising after it was sent to market. If it is linen, for example, and the price of cotton rises by 50% (say a £10 rise), then the price of linen will rise by £10 also, even though this £10 was never advanced for its production. The seller of the linen appears to make a £10 gain, but in reality they need this extra £10 in order to replace the cotton consumed in production. The value of the linen is based not on the money-capital advanced for its purchase, its historic price, or the labour-time embodied in the productive capital it bought, but on the labour-time currently required to reproduce it. In fact, value is not intrinsic to a commodity; it is not somehow embodied, and fixed within it. The commodity is only a shell, which at any time acts as a receptacle within which a given portion of society's available social labour-time is kept. Because the latter is constantly changing, the value residing in each commodity is constantly changing too.

“As we have assumed that the prices of the elements of the product were given before it was brought to market as commodity-capital, a real change of value might have caused the rise of prices since it acted retroactively, causing a subsequent rise in the price of, say, raw materials. In that event capitalist X would realise a gain on his product circulating as commodity-capital and on his available productive supply. This gain would give him an additional capital, which would now be needed for the continuation of his business with the new and higher prices of the elements of production.” (p 296)

It can be seen, from these examples, why interest rates have fallen over the last 30 years.
Not only have huge rises in productivity brought about a massive rise in the rate and volume of profit, but the same causes have also reduced the value of constant and variable capital, bringing about the kind of “freeing” of money-capital into the money market described by Marx above. In addition, those same increases in productivity have brought about a significant reduction in both the working period and circulation period of capital, throwing even greater amounts of “freed” money-capital into money markets, continually pushing down the global rate of interest.

Crises Analysis – 1847, 1857, 2008, 20?? (5)

b) 1857

1857 came in the latter part of the Summer phase of the Long Wave. 1847 and 2008 are financial crises in the context of high and rising rates of profit creating an abundance of money-capital, causing low interest rates and speculation. 1857 is more like the 1929 Wall Street Crash in the US; both are instances of over-trading and speculation, made possible by an increase in credit. Like 1847, 1857 was mostly a financial crisis, again resulting from the effects of the 1844 Bank Act, indicated by the fact that after the Act was suspended the crisis was overcome, and the economy continued to grow until the late 1860's. It too spilled over into the real economy, but, as Marx sets out, the basis of the financial crisis in 1857 was more closely tied to what was happening in the real economy than was the case in 1847. Many aspects were the same, such as the low interest rates and the availability of money-capital encouraging speculation, and the fact that this was a period of boom and high profits.
Marx sets out, “...the example of Ipswich, where in the course of a few years immediately preceding 1857 the deposits of the capitalist farmers quadrupled), what was formerly a private hoard or coin reserve is always converted into loanable capital for a definite time, does not indicate a growth in productive capital any more than the increasing deposits with the London stock banks when the latter began to pay interest on deposits. As long as the scale of production remains the same, this expansion leads only to an abundance of loanable money-capital as compared with the productive. Hence the low rate of interest.” (ibid) Especially when interest rates are low, these small money-hoards are frequently mobilised into larger pools organised by the banks, and today by the insurance companies, mutual funds and so on, and used for the purpose of speculation in property, bonds, shares etc. solely on the basis of obtaining quick, sizeable capital gains, which causes asset price bubbles of the type we have seen, in all these areas, grow larger and larger for the last 30 years. That is effectively what happened in 1857. A financial panic broke out in the US, whose economy had been growing rapidly in the preceding period, sucking in large amounts of imports from Europe, particularly Britain. Marx wrote a number of articles for the New York Daily Tribune at the time on the crisis. Marx, in this article, makes clear that the effects of the 1844 Bank Act had effects far wider than just in the UK, just as today the policy of QE has effects far wider than just in the US. Rather as happens today with China supplying credit, Britain supplied large amounts of credit to the US. US banks had already become more cautious in their lending earlier in 1857, as the end of the Crimean War had led to a restoration of agricultural production, which meant the need for US agricultural imports declined. 
When the panic erupted in the US, with the collapse of Ohio Life Insurance and Trust, it quickly spread into the economy, causing demand to fall, with a consequent effect on US imports from Europe. The US continued to send its shipments of cotton and other goods to Britain and Europe, but now, without revenue from exports to cover them, Britain had to pay with gold. Hence the “flow of gold from England to America”. But the consequence then is that British exporters, having lost markets, have to cut production, lay off workers, and go bust. That means Britain in turn buys less imported cotton etc. from the US. As US exports then no longer cover its imports from Britain, it has to pay for them with gold, and the flow goes back the other way, from the US to Britain. Marx describes it as being like volley-firing.

“In 1857, the crisis broke out in the United States. A flow of gold from England to America followed. But as soon as the bubble in America burst, the crisis broke out in England and the gold flowed from America to England. The same took place between England and the continent. The balance of payments is in times of general crisis unfavourable to every nation, at least to every commercially developed nation, but always to each country in succession, as in volley firing, i.e., as soon as each one’s turn comes for making payments; and once the crisis has broken out, e.g., in England, it compresses the series of these terms into a very short period. It then becomes evident that all these nations have simultaneously over-exported (thus over-produced) and over-imported (thus over-traded), that prices were inflated in all of them, and credit stretched too far. And the same break-down takes place in all of them.
The phenomenon of a gold drain then takes place successively in all of them and proves precisely by its general character 1) that gold drain is just a phenomenon of a crisis, not its cause; 2) that the sequence in which it hits the various countries indicates only when their judgement-day has come, i.e., when the crisis started and its latent elements come to the fore there.”
Norms and condition numbers of oscillatory integral operators in acoustic scattering

In this talk we discuss domain and boundary integral operators arising in the theory and numerical treatment by integral equation methods of the Helmholtz equation or time harmonic Maxwell equations. These integral operators are increasingly oscillatory as the wave number k increases (k proportional to the frequency of the time harmonic incident field). An interesting theoretical question, also of practical significance, is the dependence of the norms of these integral operators and their inverses on k. We investigate this question, in particular for classical single- and double-layer potential operators for the Helmholtz equation on the boundary of bounded Lipschitz domains. The results and techniques used depend on the domain geometry. In certain 2D cases (for example where the boundary is a starlike polygon) bounds which are sharp in their dependence on k can be obtained, but there are many open problems for more general geometries and higher dimension.
Seifert Conjecture Overthrown, Part 2

Part 1 of this article discussed some of the background of the following conjecture, which K.M. Kuperberg has recently disproved:

Seifert Conjecture: Every non-vanishing vector field on the three-sphere has a periodic orbit.

In this article, I give an outline of Kuperberg's construction of an infinitely differentiable counterexample. Kuperberg's construction builds on Wilson's construction mentioned in part one. Namely, on every compact manifold of Euler characteristic zero and dimension greater than three, Wilson constructs a non-vanishing infinitely differentiable vector field with no periodic orbit.

Wilson's Construction

Given a manifold as above of dimension n greater than 3, Wilson starts with a non-vanishing vector field with a finite number of periodic orbits. He then modifies the vector field in a small strangely-shaped neighborhood of a point of a periodic orbit. This neighborhood is called a plug. The modified vector field has the property that it breaks a periodic orbit without vanishing or creating additional periodic orbits. After a finite number of modifications, the vector field on the manifold has no periodic orbits.

Here is a description of Wilson's plug to modify a vector field; assume that the original vector field is constant in the (0,0,...,1) direction at each point in a cube in R^n. This is a reasonable assumption, since any non-vanishing vector field in a small neighborhood looks like this under a change of coordinates. The strangely shaped plug mentioned above is an embedding of (T^2) x ([0,1]^(n-3)) x ([0,1]) into this cube. Thus we describe below a vector field for points (x,y,z) in (T^2) x ([0,1]^(n-3)) x ([0,1]). This vector field has some very important properties:

1. It matches smoothly with the original vector field on the boundary.
Since (T^2) x ([0,1]^(n-3)) can be embedded in R^(n-1) for n greater than 3, the z coordinate of the embedded plug corresponds to the last coordinate for the cube. Thus the condition amounts to having z'=1 on the boundary.

2. The vector field has no periodic orbits inside the plug.

3. Any orbit which flows into and out of the plug has the same x and y values on exit as on entrance.

4. There is an orbit which flows into but never out of the plug.

Property 1 guarantees that the modified vector field is still smooth. Property 2 implies that locally there are no periodic orbits, and property 3 implies that the new vector field does not create new periodic orbits globally for the manifold, since we chose a neighborhood on which there was only one periodic orbit. In order to do this, Wilson uses a vector field which has a mirror image property; for this vector field this means: for z in (0,1/2), the flow winds irrationally around the torus, and in order to get back to the original x value, for z in (1/2,1), the flow unwinds in the x direction. In other words it flows irrationally, but in the opposite direction. This relies on the fact that z of the embedding corresponds to the last coordinate of R^n. Finally, property 4 means that we can get rid of the periodic orbit. Thus these four properties are enough to give a satisfactory counterexample.

Here is the vector field, with all vectors of length one. For z in (0,1/2):

x' = k f(x), where f is an irrational rotation on the torus and k normalizes the vector field;
y' = 0;
z' = g(y,z), where g(0,1/4) = 0 and g is greater than 0 everywhere else.

For z in (1/2,1), we use the mirror image property.

We can find a g so that properties 1, 2, and 3 hold. Property 4 holds since some orbit reaches (x_1,0,1/4). Since y'=z'=0 there, the orbit remains at (x_1,0,1/4) for all time. Since the rotation on the torus is irrational, the orbit never has the same x value again. Thus the counterexample holds.

Schweitzer's Counterexample

Schweitzer's construction in S^3 is based on Wilson's plug.
Again, assume the initial flow constant in the z direction on a cube in R^3. We are no longer guaranteed property 3, since we cannot use the mirror image property. In order to preserve this property, rather than using the whole torus, Schweitzer uses a punctured torus P crossed with [0,1]. Due to a construction of Denjoy, there is a C^1 flow on such a punctured torus which has no periodic orbits. Although it is still impossible to embed P in the plane, Schweitzer retains property 3 by embedding (P) x ([0,1]) in R^3 in such a way that although things may enter and exit the plug twice, they keep property 3 each time. The trick here is that he makes the two parts of the punctured torus with the same z component parallel. Figures 1 and 2 show the standard embedding for the punctured torus and P, to show that P really is another punctured torus. Like Wilson's plug, this new vector field still breaks the orbit of one point while allowing all other points to pass through without any change in their orbit. The construction of Denjoy is only C^1, so this is a C^1 vector field.

Kuperberg's Counterexample

Kuperberg's idea is to use the original Wilson plug. In order to fit it into three-dimensional space, she uses a one-dimensional torus crossed with [0,1] x [0,1], that is, (S^1) x ([0,1]) x ([0,1]), as opposed to using T^2 crossed with intervals. The problem is that a map on a one-dimensional torus has a periodic orbit, and there are two z values for which the flow is purely around a circle. Thus the plug creates two new orbits for each that it removes. We could try to remove the new orbits by attaching new plugs, but this will create new periodic orbits. Thus Kuperberg uses a different idea. By cleverly reinserting the plug into itself, she manages to let the flow of the plug kill off its own periodic orbits. She manages to do this in such a way that the vector field remains infinitely differentiable. With this example, Kuperberg has disproved the Seifert conjecture.

References

Krystyna M.
Kuperberg, "A C^infinity counterexample to the Seifert conjecture in dimension three," preprint, 1993.

P.A. Schweitzer, "Counterexamples to the Seifert conjecture and opening closed leaves of foliations," Annals of Math. 100 (1974), 386-400.

Wilson, "On the minimal sets of non-singular vector fields," Annals of Math. 84 (1966), 529-536.

This article and the previous are based on a lecture by Richard McGehee. McGehee's lecture took place February 24, 1994, as part of the Dynamics and Mechanics seminars at the University of Minnesota.

Created: April 16 1994 --- Last modified: Jun 18 1996
Comparison of Values from All Hinge and Quartile Methods

This is the fourth of a five part series.

Quartiles for Box Plots

This topic is covered in the companion page Quartiles for Box Plots.

Hinge Techniques for Determining Quartiles

This topic is covered in the companion page Hinges.

Interpolation Methods of Determining Quartiles

This topic is covered in the companion page Quartiles.

Comparison of Values from All Hinge and Quartile Methods

Effect of N

First and third quartiles (or first and second hinges) for N=8 through 15 are tabulated below for all of the quartile determination methods described in the previous sections.

Here the hinges for the Tukey (inclusionary) and the Moore and McCabe (exclusionary) methods are plotted. We see that for even N, the methods result in the same hinges, while for odd N, Tukey is closer to the median, and M&M is further from the median.

Here the CDF is overlaid on the previous plot of Tukey and M&M. For even N, all techniques agree, while for odd N, the CDF sticks with the method that yields a whole number index.

This chart plots the quartile indices for the N+1, N, and N-1 Basis approaches. The N-1 quartiles are closer to the median, the N+1 quartiles are further, and the N are in between. This is the pattern we noticed in the number lines in the previous section.

It becomes interesting when we overlay the various hinge techniques on the N+1/N/N-1 plot. We see that the Tukey hinges are bounded by the N-1 and N quartiles. The M&M quartiles are bounded by the N and N+1 quartiles. And the CDF hinges are bounded by N+1 and N-1.

Finally, since these quartiles are intended for use in box plots, here are box plots comparing the six techniques, one box plot each for N=8, 9, 10, and 11.

Doubling the Data Set

Before making any recommendations, let's see how the techniques compare when we double a data set.
For example, if we have a data set of {1,2,3,4,5} and another data set with the same values, but two of each, {1,1,2,2,3,3,4,4,5,5}, we would expect to find the same quartiles. Here is what all the techniques predict.

Forget staring at a table of numbers; the predictions are plotted in the following charts. Any pairs that are not vertically aligned have different quartiles for the data set and its double. These unmatched cases are drawn in orange in the charts below and also in the table above. Of all the techniques evaluated, only CDF yields the same quartiles for all cases of a data set and its double.

Techniques Used by Software Packages

The following chart rehashes the difference between the N+1 and N-1 techniques for interpolating quartiles. Microsoft Excel's legacy QUARTILE function uses the N-1 approach, while Minitab, JMP, and other packages use the N+1 approach. Microsoft added two functions to Excel 2010: QUARTILE.INC, which is based on N-1 and is therefore identical to QUARTILE, and QUARTILE.EXC, which is based on N+1. SAS also offers an N+1 option (see below).

The quartiles for the N-1 technique are closer to the median, so the interquartile range (IQR) is smaller, and the limits for identifying outliers, 1.5 IQR above Q3 and below Q1, are also closer to the median. This means that there are likely to be more outliers identified by Excel than by Minitab. This difference in behavior was a mystery to me a decade ago when my employer provided us with Minitab in addition to Excel, but now it's very clear. (Many more things were a mystery to the brilliant minds I was stuck working with, but that's a story for another day, when we're killing time and beers in the pub.)

The next chart shows the two SAS quartile options. The default is CDF (SAS option PCTLDEF = 5), which as we have seen yields identical quartiles for a data set and the same data set with two of each value.
SAS also offers the N+1 option (PCTLDEF = 4), which is used by Minitab, JMP, and Excel’s QUARTILE.EXC. SAS also offers three more options (PCTLDEF 1, 2, and 3), which often produce asymmetric median and quartile definitions, because they round to the larger or closer of two values instead of averaging. The two sets of results are slightly different, and the CDF quartiles tend to be closer to the median than the N+1 quartiles. As with Excel’s N-1 results, the CDF will have smaller IQR than N+1, leading to identification of more borderline outliers. So after all this noise, which quartile definition should you use? The CDF approach is considered by Langford (Quartiles in Elementary Statistics) to be the all-around “best” approach. It is also the default for the powerful software package SAS, though it doesn’t seem to be used in other packages. This may then be the option of choice. However, an important consideration is consistency. If others you work with are using Minitab or JMP, you should use the N+1 option for compatibility. Quartiles in the Peltier Tech Box Plot Utility This topic is covered in the companion page Quartiles in the Peltier Tech Box Plot Utility. 1. DaleW says: Great visual presentation and discussion of quartiles! Guess that I will have to fire up my spare PC with Excel 2010 to see what your beta charting utility does with them. A quartile method that you’ve left out is the recommended “R-8 method” from the 1996 statistical paper by Hyndman and Fan on calculating quantiles of any type. It uses (N+1/3) weighted interpolation to give approximately median-unbiased results. (My favorite statistical add-in for Excel also uses it.) 2. Jon Peltier says: Dale - Oh great, you mean I missed a relevant technique? So that’s the (k-1/3)/(N+1/3) one in the paper? I remember using something like (k-1/4)/(N+3/8) years ago when plotting Weibull distributions. I need to do some other work, then maybe I’ll get back to this. 3. 
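These software differences can be reproduced outside Excel and SAS. NumPy's quantile function (version 1.22 or later) exposes the Hyndman and Fan estimators by name; the mapping below between those names and the labels used in this post (the N-1 interpolation for QUARTILE.INC, N+1 for QUARTILE.EXC and Minitab, and the CDF rule for PCTLDEF=5) is my own reading of the NumPy documentation, not something stated in the post.

```python
import numpy as np

data = np.array([1, 2, 3])
doubled = np.repeat(data, 2)              # {1, 1, 2, 2, 3, 3}

methods = {
    "N-1  (Excel QUARTILE.INC)":    "linear",
    "N+1  (QUARTILE.EXC, Minitab)": "weibull",
    "CDF  (SAS PCTLDEF=5)":         "averaged_inverted_cdf",
}

for label, m in methods.items():
    q1 = np.quantile(data, 0.25, method=m)
    q1_dbl = np.quantile(doubled, 0.25, method=m)
    print(f"{label:30s} Q1={q1:.3f}  doubled Q1={q1_dbl:.3f}")

# For this data set, the N-1 interpolation shifts (1.5 vs 1.25) when the
# data are doubled, while the CDF method gives 1.0 both times.

# The tighter N-1 quartiles also mean a tighter IQR, hence 1.5*IQR
# fences that sit closer to the median:
iqr_inc = (np.quantile(data, 0.75, method="linear")
           - np.quantile(data, 0.25, method="linear"))
iqr_exc = (np.quantile(data, 0.75, method="weibull")
           - np.quantile(data, 0.25, method="weibull"))
print(iqr_inc, iqr_exc)                   # 1.0 2.0
```

The same three-way comparison can be run over the N=8 through 15 cases tabulated above to reproduce the charts.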
DaleW says: Yes, for when you have more free time again, R-8 is the depth h=(N+1/3)p+1/3 quantile method where p=k/4 for quartiles k={1,2,3 }. This gives nearly unbiased estimates of the median for population order statistics (including quartiles), regardless of distribution. The Analyse-it website gives a nice explanation of why they only use them for box plots in their Excel statistical add-in. You can plot Filliben’s estimate of the median order statistic for a 1st or 3rd quartile vs. N if you want to see how good the N+1/3 Basis is. If you don’t trust Filliben’s 1975 estimate, Monte Carlo simulations of quartiles in Excel (with enormously more computing power than Filliben is likely to have had!) for a few known population distributions and values of N should be quite You were probably using the R-9 method where h=(N+1/4)p+3/8 which gives nearly unbiased estimates of the mean for population order statistics when the distribution is normal. This basis can also be useful, now that you mention it, although for box plots it could be risky to assume a normal distribution. If we demand symmetry from our quantiles (or for quartiles, just that the k=2 median of N numbers be = Np+1/2), then once we select the N-related basis, the intercept constant for h is also determined. If we know h, and h has been limited to the obvious range of 1 to N (a simple and easy correction for our fenceposts regardless of basis) to avoid extrapolation, then our very general quantile or quartile interpolation rule would be simply =SMALL(X,h) — if only SMALL could do linear interpolation for non-integers! Fortunately, you teach how to do linear interpolation between two nearest neighbors in your blog. You won’t be able to use your Choose() one of 4 formulas for interpolated quartiles when the basis is no longer an integer, so the R-8 and R-9 methods would complicate your utility in that sense. 
The CDF is a special case of N-based quartiles with fair rounding (0.5 causes us to split the difference), and the inclusive Tukey hinges are a N-based biased rounding (historically significant and logical for quick box plots because both the median and extreme value are already in our 5 number summary), while the exclusive M&M ~hinges are a biased N-based rounding of dubious value in box plots (IMO). I hope that makes sense . . .
{"url":"http://peltiertech.com/WordPress/comparison/","timestamp":"2014-04-20T03:33:35Z","content_type":null,"content_length":"43764","record_id":"<urn:uuid:2daf259a-213f-4f12-83e3-0077076be6fc>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
Next Article Contents of this Issue Other Issues ELibM Journals ELibM Home EMIS Home An estimate for coeffcients of polynomials in $L^2$ norm, II G.V. Milovanovi\'c and L.Z. Ranci\'c Elektronski fakultet, Nis, Yugoslavia Abstract: Let ${\Cal P}_n$ be the class of algebraic polynomials $P(x)=\sum_{k=0}^na_kx^k$ of degree at most $n$ and $\|P\|_{d\sigma}= (\int_R|P(x)|^2d\sigma(x))^{1/2}$, where $d\sigma(x)$ is a nonnegative measure on $R$. We determine the best constant in the inequality $|a_k|\le C_{n,k} (d\sigma)\|P\|_{d\sigma}$, for $k=0,1,\dots,n$, when $P\in {\Cal P}_n$ and such that $P(\xi_k)=0$, $k=1, \dots,m$. The cases $C_{n,n}(d\sigma)$ and $C_{n,n-1}(d\sigma)$ were studed by Milovanovi\'c and Guessab [6]. In particular, we consider the case when the measure $d\sigma(x)$ corresponds to generalized Laguerre orthogonal polynomials on the real line. Classification (MSC2000): 26C05, 26D05; 33C45, 41A44 Full text of the article: Electronic fulltext finalized on: 1 Nov 2001. This page was last modified: 16 Nov 2001. © 2001 Mathematical Institute of the Serbian Academy of Science and Arts © 2001 ELibM for the EMIS Electronic Edition
{"url":"http://www.emis.de/journals/PIMB/072/15.html","timestamp":"2014-04-19T12:21:01Z","content_type":null,"content_length":"3626","record_id":"<urn:uuid:9656b812-a97a-43cb-b4ac-44d17a48174f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00473-ip-10-147-4-33.ec2.internal.warc.gz"}