Need help on C programming

Hey guys, can you show me the right direction?

#include <stdio.h>
#include <math.h>

float falsepositives(float accuracy, float incidence, float population)
{
    return population * (1 - incidence) * (1 - (accuracy / 100));
}

void a2question1()
{
    float a, i, f;
    int p;

    printf("Enter population size: ");
    scanf("%d", &p);
    printf("Enter the accuracy of the test as a percentage (e.g. 99): ");
    scanf("%f", &a);
    printf("Enter incidence of the disease as a decimal fraction (e.g. .0001): ");
    scanf("%f", &i);
    f = falsepositives(a, i, p);
    printf("Population = %d, accuracy = %.2f, incidence = %f,\nfalse positives = %.f\n\n", p, a, i, f);
}

float threenumbers(float num1, float num2, float num3)
{
    return num1, num2, num3;
}

void a2question2()
{
    float a, b, c, s, avg, prod;

    printf("Enter a whole number: ");
    scanf("%f", &a);
    printf("Enter a second whole number: ");
    scanf("%f", &b);
    printf("Enter a third whole number: ");
    scanf("%f", &c);
    s = threenumbers(a, b, c);
    avg = threenumbers(a, b, c);
    prod = threenumbers(a, b, c);
    printf("Sum = %f\nAverage = %f\nProduct = %f\n\n", s, avg, prod);
}

int main()
{
    return 0;
}

I need the program to add, multiply, and find the average, the lowest number, and the highest number, all in the float functions, but I can't seem to do it; maybe the teacher posted the wrong one? Anyway, can you guys show me what direction to go in? No need to write the code.

I don't really understand what your problem is; you need to elaborate more and point out the parts of the code you are having issues with.

I am going to take a guess and say that your problem is mainly with this function:

float threenumbers(float num1, float num2, float num3)
{
    return num1, num2, num3;
}

What it looks like to me is that you are trying to somehow store the results of various operations in num1, num2 and num3, and then give the results back to the caller. Unfortunately, C does not allow you to return multiple values as you are attempting here.
What happens is that your statements are executed (the comma here acts as a separator with some less-than-desirable side effects in most cases) and then the value of the last expression is returned. If you want to store the results in the arguments to the function, you need to pass them as pointers, like so:

void threenumbers(float *num1, float *num2, float *num3)
{
    *num1 = *num1 + *num2 + *num3;
    *num2 = (*num1 + *num2 + *num3) / 3;
    *num3 = *num1 * *num2 * *num3;
}

Obviously not the prettiest syntax here, but I'm just showing you a raw conversion of your function into one that does (one piece of) what you want. The other problem here is that after the first line of this function, *num1 now contains a different value than it first had, so it basically invalidates the remaining two statements (same problem with *num2 on the third statement). So you need to use temporary variables to store the intermediate results. That is the second problem, which is a logic problem, and I'm going to leave you to try and fix it on your own.

What I am trying to do is to calculate the sum, the product, and the average in one float function, which will return back into the main function. I am not sure whether it's possible, but that is my assignment. Thank you for your answer. Again, I do not understand what the asterisk you put before num1, num2 and num3 (*num1, *num2, *num3) is for. Can you elaborate?

If it's your assignment, then the chances are that it is possible ;) These are pointers. Tutorial on pointers in C. I strongly suggest you read up on pointers: from the link, from the slides of your course, from anywhere you can. C and pointers go together ;) If you don't know pointers, then you cannot program in C.

Hi std,

Thank you for the quick reply. I read through the link you gave and am still quite unsure how that will help. So I know that pointers are used to address "memory"; does this mean that whatever value I gave to x before does not change?
As in, say I have this function: int *x. If I declare that *x is 5 using scanf, does that mean I will get the values 6, 7, 8, or do I get *x = *x + 1, which is 6, so *x = 6 + 2, then *x = 8 + 3, with the final answer being 11?

I think this example can help.

#include <stdio.h>

/* Read a value. */
void input(int *v);

/* Suppose that we want to swap the values */
/* of two variables. */
void swapNoPointers(int a, int b);
void swap(int *a, int *b);

int main(void)
{
    int var;
    int n = 5;

    printf("Please input\n");
    input(&var);
    printf("You typed: %d\n", var);
    printf("var = %d and n = %d\n", var, n);

    swapNoPointers(var, n);
    printf("var = %d and n = %d\n", var, n);

    swap(&var, &n);
    printf("var = %d and n = %d\n", var, n);

    return 0;
}

void input(int *v)
{
    scanf("%d", v);
    /* We don't need to pass the address of the variable, */
    /* because we already have *v as the parameter. What we */
    /* would pass to scanf is &*v, which is just v. */
}

void swapNoPointers(int a, int b)
{
    int temp;
    temp = a;
    a = b;
    b = temp;
}

void swap(int *a, int *b)
{
    int temp;
    temp = *a;
    *a = *b;
    *b = temp;
}

Thanks, that was insightful. Somehow I still can't use it in my function; it will still give me the value for only one equation.

To consider your example, first write it in the "normal" way:

#include "standard_headers.h"

#define INIT_VALUE 5

int main(void)
{
    int x = INIT_VALUE;

    printf("Line %d: Now x has the value %d\n", __LINE__, x);
    x = x + 1;
    printf("Line %d: Now x has the value %d\n", __LINE__, x);
    x = x + 2;
    printf("Line %d: Now x has the value %d\n", __LINE__, x);
    x = x + 3;
    printf("Line %d: Now x has the value %d\n", __LINE__, x);

    return EXIT_SUCCESS;
}

And now consider how we can write it using pointers. This normally means we do our own memory management.
So it's helpful to define two functions: one to create an object and one to free it.

#include "standard_headers.h"

#define INIT_VALUE 5

int *new_int(void)
{
    int *x = malloc(sizeof *x);
    *x = INIT_VALUE;
    return x;
}

void delete_int(int *x)
{
    free(x);
}

int main(void)
{
    int *x = new_int();

    printf("Line %d: Now x has the value %d\n", __LINE__, *x);
    *x = *x + 1;
    printf("Line %d: Now x has the value %d\n", __LINE__, *x);
    *x = *x + 2;
    printf("Line %d: Now x has the value %d\n", __LINE__, *x);
    *x = *x + 3;
    printf("Line %d: Now x has the value %d\n", __LINE__, *x);

    delete_int(x);
    return EXIT_SUCCESS;
}

Notice that the code is the same, basically only replacing x by *x.

void threenumbers(float num1, float num2, float num3)
{
    printf("Total Added: %f\n", num1 + num2 + num3);
    printf("Total Mean: %f\n", (num1 + num2 + num3) / 3);
    printf("Total Multiplied: %f\n", num1 * num2 * num3);
}
Source: http://cboard.cprogramming.com/c-programming/153976-need-help-c-programming-printable-thread.html (retrieved 2014-04-24)
Borussia Monchengladbach v Marseille
Borussia Park | Referee: Serge Gumienny | Attendance: 45,000
Goals: Filip Daems 33' (pen); Peniel Kokou Mlapa 67'

• Filip Daems takes a direct freekick with his left foot from his own half. Outcome: open play • Florian Raspentino commits a foul on Juan Arango resulting on a free kick for Borussia Monchengladbach • Throw-in: Rod Fanni takes it (Attacking) • Marc-André ter Stegen takes an indirect freekick with his right foot from his own half. Outcome: open play • Offside called on Andre Ayew • Throw-in: Michel Lucas Mendes takes it (Attacking) • Marc-André ter Stegen takes a long goal kick • Benoit Cheyrou hits (volley) a left footed shot, but it is off target. Outcome: over bar • Throw-in: Rod Fanni takes it (Attacking) • Steve Mandanda takes a direct freekick with his right foot from his own half. Outcome: open play • Juan Arango commits a foul on Rod Fanni resulting on a free kick for Marseille • Marc-André ter Stegen takes a long goal kick • Andre Ayew takes a direct freekick with his right foot from the left wing. Outcome: pass • Havard Nordtveit commits a foul on Jordan Ayew resulting on a free kick for Marseille • Peniel Kokou Mlapa takes a direct freekick with his right foot from the left channel. Outcome: open play • Rod Fanni commits a foul on Peniel Kokou Mlapa resulting on a free kick for Borussia Monchengladbach • Throw-in: Filip Daems takes it (Attacking) • Throw-in: Charles Kabore takes it (Defending) • Marc-André ter Stegen takes a long goal kick • Throw-in: Charles Kabore takes it (Attacking) • Throw-in: Filip Daems takes it (Attacking) • Marseille makes a sub: Benoit Cheyrou enters for Kassim Abdallah. 
Reason: Tactical • Marc-André ter Stegen takes a long goal kick • Throw-in: Kassim Abdallah takes it (Attacking) • Throw-in: Kassim Abdallah takes it (Attacking) • Throw-in: Filip Daems takes it (Attacking) • Michel Lucas Mendes takes a direct freekick with his left foot from the left wing. Outcome: pass • Thorben Marx commits a foul on Jordan Ayew resulting on a free kick for Marseille • Throw-in: Michel Lucas Mendes takes it (Attacking) • Throw-in: Jordan Ayew takes it (Attacking) • Throw-in: Filip Daems takes it (Defending) • Borussia Monchengladbach makes a sub: Mike Hanke enters for Patrick Herrmann. Reason: Tactical • Marc-André ter Stegen takes a long goal kick • Throw-in: Kassim Abdallah takes it (Defending) • Nicolas N'Koulou takes a direct freekick with his right foot from his own half. Outcome: pass • Havard Nordtveit commits a foul on Joey Barton resulting on a free kick for Marseille • Marseille makes a sub: Florian Raspentino enters for Mathieu Valbuena. Reason: Tactical • Marc-André ter Stegen makes a very good save (Catch) • Mathieu Valbuena crosses the ball. Outcome: save • Steve Mandanda takes an indirect freekick with his right foot from his own half. Outcome: open play • Offside called on Martin Stranzl • Juan Arango crosses the ball. Outcome: open play • Juan Arango takes a direct freekick with his left foot from the right wing. Outcome: cross • Andre Ayew commits a foul on Lukas Rupp resulting on a free kick for Borussia Monchengladbach • Thorben Marx takes a direct freekick with his right foot from his own half. Outcome: pass • Charles Kabore is awarded a yellow card. Reason: unsporting behaviour • Charles Kabore commits a foul on Thorben Marx resulting on a free kick for Borussia Monchengladbach • Throw-in: Filip Daems takes it (Defending) • Steve Mandanda takes a long goal kick • Peniel Kokou Mlapa hits a right footed shot, but it is off target. Outcome: over bar • Rod Fanni takes a direct freekick with his right foot from his own half. 
Outcome: open play • Thorben Marx commits a foul on Mathieu Valbuena resulting on a free kick for Marseille • Throw-in: Michel Lucas Mendes takes it (Defending) • Throw-in: Filip Daems takes it (Attacking) • Marc-André ter Stegen takes a long goal kick • Mathieu Valbuena drills a right footed shot, but it is off target. Outcome: over bar • Patrick Herrmann crosses the ball. Outcome: open play • Peniel Kokou Mlapa drills a good right footed shot. Outcome: goal • Marc-André ter Stegen takes a long goal kick • Jordan Ayew hits a good header, but it is off target. Outcome: miss right • Mathieu Valbuena crosses the ball. Outcome: shot • Jordan Ayew takes a direct freekick with his right foot from the left wing. Outcome: pass • Lukas Rupp commits a foul on Andre Ayew resulting on a free kick for Marseille • Steve Mandanda takes a long goal kick • Marseille makes a sub: Andre Ayew enters for Loic Remy. Reason: Tactical • Borussia Monchengladbach makes a sub: Peniel Kokou Mlapa enters for Luuk de Jong. Reason: Injury • Juan Arango takes a direct freekick with his left foot from the left channel. Outcome: open play • Rod Fanni commits a foul on Patrick Herrmann resulting on a free kick for Borussia Monchengladbach • Marc-André ter Stegen takes a long goal kick • Joey Barton drills a good right footed shot, but it is off target. Outcome: miss right • Steve Mandanda takes a direct freekick with his right foot from his own half. Outcome: pass • Luuk de Jong commits a foul on Joey Barton resulting on a free kick for Marseille • Roel Brouwers hits a good header, but it is off target. Outcome: open play • Havard Nordtveit takes the corner kick from the right byline with his right foot and hits an outswinger to the near post, resulting in: shot • Luuk de Jong crosses the ball. Outcome: out of play • Steve Mandanda takes a long goal kick • Juan Arango drills a good left footed shot, but it is off target. 
Outcome: over bar • Juan Arango takes a direct freekick with his left foot from the left channel. Outcome: shot • Charles Kabore commits a foul on Havard Nordtveit resulting on a free kick for Borussia Monchengladbach • Throw-in: Filip Daems takes it (Attacking) • Marc-André ter Stegen takes a long goal kick • Mathieu Valbuena takes a direct freekick with his right foot from the left channel. Outcome: pass • Lukas Rupp commits a foul on Mathieu Valbuena resulting on a free kick for Marseille • Throw-in: Kassim Abdallah takes it (Defending) • Filip Daems takes an indirect freekick with his left foot from his own half. Outcome: open play • Joey Barton commits a foul on Filip Daems resulting on a free kick for Borussia Monchengladbach • Throw-in: Filip Daems takes it (Attacking) • Throw-in: Morgan Amalfitano takes it (Attacking) • Joey Barton takes the corner kick from the right byline with his right foot and hits an inswinger to the centre, resulting in: open play • Alvaro Dominguez Soto hits a good header, but it is off target. Outcome: over bar • Havard Nordtveit crosses the ball. Outcome: shot • Havard Nordtveit takes a direct freekick with his right foot from the left wing. Outcome: cross • Handball called on Kassim Abdallah • Marc-André ter Stegen takes a long goal kick • Michel Lucas Mendes hits a good header, but it is off target. Outcome: miss right • Mathieu Valbuena crosses the ball. Outcome: shot • Mathieu Valbuena takes a direct freekick with his right foot from the left wing. Outcome: cross • Martin Stranzl commits a foul on Jordan Ayew resulting on a free kick for Marseille • Michel Lucas Mendes takes an indirect freekick with his left foot from the right channel. Outcome: pass • Handball called on Patrick Herrmann • Throw-in: Filip Daems takes it (Attacking) • Havard Nordtveit clears the ball from danger. 
• Joey Barton takes the corner kick from the right byline with his right foot and hits an outswinger to the centre, resulting in: clearance • Nicolas N'Koulou clears the ball from danger. • Mathieu Valbuena crosses the ball. Outcome: clearance • Mathieu Valbuena takes a direct freekick with his right foot from the left channel. Outcome: cross • Havard Nordtveit commits a foul on Jordan Ayew resulting on a free kick for Marseille • Morgan Amalfitano takes a direct freekick with his right foot from the right wing. Outcome: pass • Filip Daems commits a foul on Morgan Amalfitano resulting on a free kick for Marseille • Throw-in: Filip Daems takes it (Attacking) • Throw-in: Martin Stranzl takes it (Defending) • Throw-in: Michel Lucas Mendes takes it (Attacking) • Michel Lucas Mendes takes an indirect freekick with his left foot from his own half. Outcome: open play • Offside called on Patrick Herrmann • Michel Lucas Mendes takes a direct freekick with his left foot from the left wing. Outcome: pass • Havard Nordtveit commits a foul on Jordan Ayew resulting on a free kick for Marseille • Nicolas N'Koulou clears the ball from danger. • Lukas Rupp crosses the ball. Outcome: clearance • Marc-André ter Stegen takes a long goal kick • Joey Barton hits(volley) a good right footed shot, but it is off target. Outcome: over bar • Steve Mandanda takes an indirect freekick with his right foot from his own half. Outcome: open play • Offside called on Luuk de Jong • Mathieu Valbuena takes a direct freekick with his right foot from the left wing. Outcome: pass • Thorben Marx commits a foul on Mathieu Valbuena resulting on a free kick for Marseille • Marc-André ter Stegen takes a long goal kick • Nicolas N'Koulou hits a good right footed shot, but it is off target. Outcome: miss right • Morgan Amalfitano takes a direct freekick with his right foot from the left channel. 
Outcome: pass • Juan Arango commits a foul on Joey Barton resulting on a free kick for Marseille • Nicolas N'Koulou takes an indirect freekick with his right foot from his own half. Outcome: pass • Offside called on Luuk de Jong • Throw-in: Kassim Abdallah takes it (Attacking) • Marc-André ter Stegen takes a direct freekick with his right foot from his own half. Outcome: open play • Morgan Amalfitano commits a foul on Filip Daems resulting on a free kick for Borussia Monchengladbach • Steve Mandanda makes a good save (Catch) • Juan Arango crosses the ball. Outcome: save • Juan Arango takes a direct freekick with his left foot from the right channel. Outcome: cross • Jordan Ayew commits a foul on Juan Arango resulting on a free kick for Borussia Monchengladbach • Lukas Rupp takes a direct freekick with his right foot from his own half. Outcome: pass • Joey Barton commits a foul on Luuk de Jong resulting on a free kick for Borussia Monchengladbach • Throw-in: Roel Brouwers takes it (Defending) • Steve Mandanda takes a long goal kick • Thorben Marx takes the corner kick from the right byline with his right foot and hits an outswinger to the centre, resulting in: open play • Thorben Marx takes a direct freekick with his right foot from the right channel. Outcome: pass • Charles Kabore commits a foul on Thorben Marx resulting on a free kick for Borussia Monchengladbach • Borussia Monchengladbach makes a sub: Roel Brouwers enters for Tony Jantschke. Reason: Injury • Handball called on Charles Kabore • Marc-André ter Stegen makes a very good save (Feet) • Juan Arango curls a good left footed shot. Outcome: save • Juan Arango takes a direct freekick with his left foot from the right wing. Outcome: shot • Michel Lucas Mendes commits a foul on Tony Jantschke resulting on a free kick for Borussia Monchengladbach • Tony Jantschke crosses the ball. Outcome: open play • Alvaro Dominguez Soto clears the ball from danger. • Morgan Amalfitano crosses the ball. 
Outcome: clearance • Nicolas N'Koulou clears the ball from danger. • Patrick Herrmann crosses the ball. Outcome: clearance • Alvaro Dominguez Soto takes a direct freekick with his right foot from his own half. Outcome: pass • Loic Remy commits a foul on Havard Nordtveit resulting on a free kick for Borussia Monchengladbach • Steve Mandanda takes a long goal kick • Juan Arango hits a left footed shot, but it is off target. Outcome: miss left • Martin Stranzl takes an indirect freekick with his right foot from his own half. Outcome: pass • Offside called on Mathieu Valbuena • Tony Jantschke takes a direct freekick with his right foot from his own half. Outcome: pass • Joey Barton commits a foul on Luuk de Jong resulting on a free kick for Borussia Monchengladbach • Filip Daems takes an indirect freekick with his right foot from his own half. Outcome: pass • Offside called on Mathieu Valbuena • Michel Lucas Mendes takes an indirect freekick with his left foot from his own half. Outcome: pass • Offside called on Patrick Herrmann • Tony Jantschke takes a direct freekick with his right foot from the right wing. Outcome: pass • Jordan Ayew commits a foul on Tony Jantschke resulting on a free kick for Borussia Monchengladbach • Marc-André ter Stegen takes a direct freekick with his right foot from his own half. Outcome: pass • Offside called on Loic Remy • Throw-in: Filip Daems takes it (Attacking) • Kassim Abdallah takes a direct freekick with his right foot from his own half. Outcome: pass • Filip Daems commits a foul on Morgan Amalfitano resulting on a free kick for Marseille • Throw-in: Michel Lucas Mendes takes it (Attacking) • Throw-in: Tony Jantschke takes it (Attacking) • Tony Jantschke clears the ball from danger. • Mathieu Valbuena takes the corner kick from the right byline with his right foot and hits an outswinger to the centre, resulting in: clearance • Marc-André ter Stegen takes a long goal kick • Loic Remy hits a good header, but it is off target. 
Outcome: miss left • Mathieu Valbuena takes the corner kick from the left byline with his right foot and hits an inswinger to the centre, resulting in: shot • Marc-André ter Stegen makes a very good save (Feet) • Mathieu Valbuena takes the corner kick from the left byline with his right foot and hits an inswinger to the centre, resulting in: save • Marc-André ter Stegen makes a very good save (Round Post) • Loic Remy hits a good left footed shot. Outcome: save • Morgan Amalfitano crosses the ball. Outcome: shot • Throw-in: Kassim Abdallah takes it (Attacking) • Throw-in: Kassim Abdallah takes it (Defending) • Marc-André ter Stegen takes a direct freekick with his right foot from his own half. Outcome: open play • Offside called on Loic Remy • Throw-in: Kassim Abdallah takes it (Attacking) • Havard Nordtveit clears the ball from danger. • Morgan Amalfitano crosses the ball. Outcome: clearance • Throw-in: Michel Lucas Mendes takes it (Attacking) • Joey Barton takes a direct freekick with his right foot from the left channel. Outcome: pass • Luuk de Jong commits a foul on Joey Barton resulting on a free kick for Marseille • Throw-in: Kassim Abdallah takes it (Defending) • Michel Lucas Mendes takes a direct freekick with his right foot from the left wing. Outcome: open play • Lukas Rupp commits a foul on Michel Lucas Mendes resulting on a free kick for Marseille • Throw-in: Filip Daems takes it (Attacking) • Throw-in: Kassim Abdallah takes it (Defending)

Match Stats (Borussia Monchengladbach | Marseille)
Shots (on goal): 6 (1) | 9 (1)
Fouls: 17 | 15
Corner kicks: 2 | 5
Offsides: 5 | 5
Time of Possession: 50% | 50%
Yellow Cards: 0 | 1
Red Cards: 0 | 0
Saves: 4 | 1
Source: http://www.espnfc.com/en/gamecast/355790/gamecast.html?soccernet=true&cc=null (retrieved 2014-04-18)
Find a Chamblee, GA Algebra 1 Tutor

...I begin with a diagnostic session to identify areas for improvement, and then I tailor the passages and questions that I ask to lead to the maximum possible improvement for the student. Assignments can be anything from speed drills to sentence diagramming, but they are guaranteed to be effective and engaging. I am an engineer, so I use ACT math every day for my job.
17 Subjects: including algebra 1, chemistry, writing, physics

...I love standardized tests and have scored within the 99th percentile on all tests I tutor. I have been able to consistently help my students increase their SAT scores significantly by reviewing all math concepts that one needs to know for each test, and by teaching strategies like substitution,...
19 Subjects: including algebra 1, physics, calculus, geometry

...I give assignments to help deepen understanding and to develop independent thinking skills. Oh yes, and I do help students understand what they are stumped on in the moment. But "understanding" is my goal -- I don't focus on quick tricks, but on lasting and deep learning.
8 Subjects: including algebra 1, statistics, trigonometry, algebra 2

...I look forward to helping you! All the best, Aris

One of the most difficult classes, calculus can be a killer. I probably have taught and tutored this subject more than any other, having taught it for the last 15 years without a break and having graded the AP calculus test a few years back.
20 Subjects: including algebra 1, calculus, statistics, geometry

I am a junior Mathematics major at LaGrange College. I will be graduating in 2015. I have been doing private math tutoring since I was a sophomore in high school.
9 Subjects: including algebra 1, geometry, algebra 2, precalculus
Source: http://www.purplemath.com/Chamblee_GA_algebra_1_tutors.php (retrieved 2014-04-19)
Math Forum: Teacher2Teacher - Q&A #7316

From: Ralph (for Teacher2Teacher Service)
Date: Nov 21, 2001 at 15:59:52
Subject: Re: Special Education -- estimating

Dear Ann,

Well, I hope you DO get a chance to be "teaching on your own" next year, because it certainly sounds like you have the patience, knowledge, and insights it takes to be a great teacher!

As for your "immediate challenge", trying to make mental math interesting and challenging for the student you're working with, I would suggest having a look at:

There you'll find quite an array of mental math tips and tricks, with the idea that using mental math you can come up with an answer faster than someone "punching in the numbers" and doing the calculations on the calculator. You might select some of the "Beatcalc" activities and have him learn them, then see if he can actually do them faster than you can do them on a calculator. I'm sure you'll find some of the Beatcalc activities that will be very similar to the mental math/estimation strategies you mentioned.

I hope this helps a bit! Good luck!

-Ralph, for the T2T service
Source: http://mathforum.org/t2t/message.taco?thread=7316&message=2 (retrieved 2014-04-18)
The Outer Membrane Beta-barrel Endoprotease, Omptin (Omptin) Family

The Omptin family is a large family of outer membrane proteases/adhesins, found and studied primarily in enterobacteria (Kukkonen and Korhonen 2004). They play important roles in the degradation of denatured periplasmic proteins (E. coli), and function in pathogenesis (in Shigella, Escherichia, Yersinia and Salmonella). In Yersinia pestis, the Pla protein is a plasminogen activator: it both activates plasminogen and inactivates α2-antiplasmin (Suomalainen et al. 2007). It also degrades complement components. In E. coli, the omptins OmpT and OmpP have been shown to cleave and inactivate cationic antimicrobial peptides (Kukkonen and Korhonen 2004). OmpT of E. coli cleaves peptide bonds between two basic amino acids using a histidyl residue and an aspartyl residue at the active site of the protease and, surprisingly, is functional in high concentrations of urea (Stathopoulos 1998; Hritonenko and Stathopoulos 2007).

The omptins Pla (Yersinia) and PgtE (Salmonella) attack innate immunity by affecting the plasminogen/plasmin, complement, coagulation, fibrinolysis, and matrix metalloproteinase systems, by inactivating antimicrobial peptides, and by enhancing bacterial adhesiveness and invasiveness (Haiko et al. 2009; Yun and Morrissey 2009; Korhonen et al. 2013). Although the mechanistic details of the functions of Pla and PgtE differ, the outcome is the same: enhanced spread and multiplication of Y. pestis and S. enterica in the host. Maximal activity requires association with lipopolysaccharide (Hritonenko and Stathopoulos 2007).
References associated with 9.B.50 family:

Haiko, J., M. Suomalainen, T. Ojala, K. Lähteenmäki, and T.K. Korhonen. (2009). Invited review: Breaking barriers--attack on innate immune defences by omptin surface proteases of enterobacterial pathogens. Innate Immun 15: 67-80.

Hritonenko, V. and C. Stathopoulos. (2007). Omptin proteins: an expanding family of outer membrane proteases in Gram-negative bacteria. Mol. Membr. Biol. 24: 395-406.

Korhonen, T.K., J. Haiko, L. Laakkonen, H.M. Järvinen, and B. Westerlund-Wikström. (2013). Fibrinolytic and coagulative activities of Yersinia pestis. Front Cell Infect Microbiol 3: 35.

Kukkonen, M. and T.K. Korhonen. (2004). The omptin family of enterobacterial surface proteases/adhesins: from housekeeping in Escherichia coli to systemic spread of Yersinia pestis. Int. J. Med. Microbiol. 294: 7-14.

Stathopoulos, C. (1998). Structural features, physiological roles, and biotechnological applications of the membrane proteases of the OmpT bacterial endopeptidase family: a micro-review. Membr Cell Biol 12: 1-8.

Suomalainen, M., J. Haiko, P. Ramu, L. Lobo, M. Kukkonen, B. Westerlund-Wikström, R. Virkola, K. Lähteenmäki, and T.K. Korhonen. (2007). Using every trick in the book: the Pla surface protease of Yersinia pestis. Adv Exp Med Biol 603: 268-278.

Yun, T.H. and J.H. Morrissey. (2009). Polyphosphate and omptins: novel bacterial procoagulant agents. J Cell Mol Med 13: 4146-4153.
[Bug-gnubg] TD(lambda) training for neural networks -- a question

From: boomslang
Subject: [Bug-gnubg] TD(lambda) training for neural networks -- a question
Date: Thu, 21 May 2009 00:12:46 +0000 (GMT)

Hi all,

I have a question regarding TD(lambda) training by Tesauro. The formula for adapting the weights of the neural net is

w(t+1) - w(t) = a * [Y(t+1) - Y(t)] * sum(lambda^(t-k) * nabla(w)Y(k); k=1..t).

I would like to know if nabla(w)Y(k) in the formula above is the gradient of Y(k) with respect to the weights of the net at time t (i.e. the current net) or with respect to the weights of the net at time k. I assume the former.

Thanks in advance!

greetings,
boomslang
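For a concrete reading of the formula, here is a minimal sketch (my own illustration, not from the thread) of the incremental, eligibility-trace form of the update for a linear value estimate. For a linear model the gradient is just the feature vector x(k), so the trace accumulates lambda-discounted gradients evaluated at their respective times k:

```python
import numpy as np

# One TD(lambda) step for a linear value estimate Y(s) = w . x(s).
# In the incremental (eligibility-trace) formulation the trace is
#   e(t) = lambda * e(t-1) + grad_w Y(t)
# where each gradient is evaluated with the weights in effect at its own
# time step, and the update is  w += a * [Y(t+1) - Y(t)] * e(t).
def td_lambda_step(w, trace, x_t, y_t, y_next, alpha, lam):
    trace = lam * trace + x_t                  # grad_w Y(t) = x(t) for a linear model
    w = w + alpha * (y_next - y_t) * trace     # temporal-difference update
    return w, trace

w = np.zeros(2)
trace = np.zeros(2)
w, trace = td_lambda_step(w, trace, np.array([1.0, 0.0]), 0.0, 1.0, 0.5, 0.9)
# the full TD error has flowed onto the active feature: w = [0.5, 0.0]
```

For a nonlinear net the same trace-based scheme is the usual implementation, i.e. each nabla(w)Y(k) is taken at the weights of time k, not re-evaluated with the current weights.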
Find the equation of the circle

06-01-2011, 03:16 AM

A circle with its centre on the y-axis intersects the graph of y = |x| at the origin, O, and at exactly two other distinct points, A and B. Prove that the ratio of the area of triangle ABO to the area of the circle is always 1 : π.

Since the circle has its centre on the y-axis (say, at coordinates (0, b)), its radius is equal to b (and b must be positive for there to be three points of intersection). So the circle has equation x^2 + (y - b)^2 = b^2.

I do not understand the sentence in bold. Can you please explain?
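As a sanity check on the claimed ratio (my own verification, assuming the curve is y = |x| as reconstructed above), the intersection points and areas can be computed directly:

```python
import math

# Circle centred at (0, b) through the origin: x^2 + (y - b)^2 = b^2.
# Intersecting with y = |x|: for x >= 0, substituting y = x gives
# 2x^2 - 2bx = 0, so x = 0 or x = b; by symmetry A = (b, b), B = (-b, b).
def ratio_triangle_to_circle(b):
    A, B, O = (b, b), (-b, b), (0.0, 0.0)
    base = A[0] - B[0]            # length of AB = 2b
    height = A[1] - O[1]          # perpendicular distance from O to AB = b
    tri = 0.5 * base * height     # = b^2
    circle = math.pi * b * b
    return tri / circle

# The ratio is 1/pi independently of b:
print(ratio_triangle_to_circle(3.0))   # about 0.3183, i.e. 1/pi
```

The b drops out of the quotient b^2 / (π b^2), which is exactly the claimed 1 : π.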
In CUDA, a kernel is a function that runs in parallel using many threads on the device. We can write a kernel version of our mandelbrot function by simply assuming that it will be run by a grid of threads. NumbaPro provides the familiar CUDA threadIdx, blockIdx, blockDim and gridDim intrinsics, as well as a grid() convenience function which evaluates to blockDim * blockIdx + threadIdx. Our example just needs a minor modification to compute a grid-size stride for the x and y ranges, since we will have many threads running in parallel. We just add these three lines:

startX, startY = cuda.grid(2)
gridX = cuda.gridDim.x * cuda.blockDim.x
gridY = cuda.gridDim.y * cuda.blockDim.y

And we modify the range in the x loop to use range(startX, width, gridX) (and likewise for the y loop). We decorate the function with @cuda.jit, passing it the type signature of the function. Since kernels cannot have a return value, we do not need the restype argument.
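To see why the grid-size stride covers every pixel exactly once, here is a pure-Python sketch (no GPU required; the thread/block sizes are made-up illustration values) that simulates each thread's strided loops:

```python
# Simulate a 2D grid of threads, each running the strided loops from the text.
# blockDim/gridDim values here are arbitrary illustration choices.
width, height = 37, 23
block_dim_x, grid_dim_x = 4, 3     # 12 thread x-indices in total
block_dim_y, grid_dim_y = 2, 5     # 10 thread y-indices in total

gridX = grid_dim_x * block_dim_x
gridY = grid_dim_y * block_dim_y

visits = [[0] * width for _ in range(height)]
for startX in range(gridX):        # one pass per thread x-index
    for startY in range(gridY):    # one pass per thread y-index
        for x in range(startX, width, gridX):
            for y in range(startY, height, gridY):
                visits[y][x] += 1

# Every pixel is visited exactly once, regardless of how width/height
# relate to the grid size:
assert all(v == 1 for row in visits for v in row)
```

The stride partitions each axis by residue class modulo the grid size, which is why no pixel is skipped or double-computed even when the image dimensions are not multiples of the grid dimensions.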
May 29th 2009, 03:26 PM #1

The following formula gives the velocity, v m s^-1, at time t seconds for the first 8 seconds of motion:

Find the acceleration when t = 8.

I know you have to differentiate, and I got 120t - 15t^2. However, the mark scheme says to reduce the mark if the 64 is left off -- why would you leave the 64 on? Equally, when I integrate the formula, apparently the 64 is not 64t but just 64. Why?

Thanks very much.

May 29th 2009, 04:08 PM #2

Because of the constant multiple rule ...

$\frac{d}{dt}[k \cdot f(t)] = k \cdot f'(t)$

think of $k$ as $\frac{1}{64}$
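The constant multiple rule is easy to check numerically. The sketch below uses a made-up polynomial f(t) (the thread's actual formula is not shown in the archive) scaled by k = 1/64 as in the reply; since d/dt[k f(t)] = k f'(t), the 1/64 factor survives differentiation (and, likewise, integration) unchanged:

```python
# Numerical check of d/dt [k * f(t)] = k * f'(t) with k = 1/64.
# f is an arbitrary illustrative polynomial, not the formula from
# the original exam question (which the archive does not show).
def f(t):
    return 60 * t**2 - 5 * t**3

def fprime(t):
    return 120 * t - 15 * t**2

k = 1 / 64
h = 1e-6
t = 8.0

numeric = (k * f(t + h) - k * f(t - h)) / (2 * h)   # central difference of k*f
assert abs(numeric - k * fprime(t)) < 1e-4           # matches k * f'(t)
```

The point the reply is making: a multiplicative constant is carried along intact, whereas an additive constant would differentiate to zero — dropping a factor of 1/64 changes the answer, so the mark scheme penalises it.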
The two-step Fagan's nomogram: ad hoc interpretation of a diagnostic test result without calculation 1. ^1Faculty of Science, School of Animal & Veterinary Sciences, The University of Adelaide, Roseworthy Campus, South Australia, Australia 1. Correspondence to : Dr Charles G B Caraguel Faculty of Science, School of Animal & Veterinary Sciences, The University of Adelaide, Leske Building, Roseworthy Campus, SA 5371, Australia; charles.caraguel{at}adelaide.edu.au • Accepted 11 February 2013 • Published Online First 6 March 2013 In 1975, Fagan published a nomogram to help practitioners determine, without the use of a calculator or computer, the probability of a patient truly having a condition of interest given a particular test result. Nomograms are very useful for bedside interpretations of test results, as no test is perfect. However, the practicality of Fagan's nomogram is limited by its use of the likelihood ratio (LR), a parameter not commonly reported in the evaluation studies of diagnostic tests. The LR reflects the direction and strength of evidence provided by a test result and can be computed from the conventional diagnostic sensitivity (DSe) and specificity (DSp) of the test. This initial computation is absent in Fagan's nomogram, making it impractical for routine use. We have seamlessly integrated the initial step to compute the LR and the resulting two-step nomogram allows the user to quickly interpret the outcome of a test. With the addition of the DSe and DSp, the nomogram, for the purposes of interpreting a dichotomous test result, is now complete. This tool is more accessible and flexible than the original, which will facilitate its use in routine evidence-based practice. The nomogram can be downloaded at: www.adelaide.edu.au/vetsci/research/pub_pop/2step-nomogram/. Collecting and interpreting evidence, both clinical and analytical, is essential to the diagnostic process. 
This evidence may or may not support the likelihood of a patient having a given condition, depending on the nature and strength of the evidence. In practice, the range of collectable evidence is wide and includes patient profile, exposure history, symptoms and clinical or laboratory test results. In the context of evidence-based medicine, the probability of a patient having a condition of interest, given the evidence collected, should be objectively quantified. This probability is referred to as the posterior (post-test) probability of having the condition or, in conventional epidemiology, the predictive value. However, the calculations for predictive values, derived from Bayes’ theorem, are tedious and rarely performed in practice.1 ,2 Fagan's nomogram In 1975, Dr Terrence J. Fagan3 integrated Bayes’ theorem into a nomogram for practitioners to quantify the post-test probability that an individual is affected by a condition given an observed test result and given the probability of the individual having the condition before the test was run (pretest probability). The Fagan's nomogram is widely recognised as a convenient graphical calculator and is frequently referenced in evidence-based medicine and clinically applied epidemiology textbooks.4 ,5 To use the Fagan's nomogram (as depicted in figure 1), a line must be drawn from the estimated pretest probability (left axis) through the likelihood ratio (LR) of the observed test result (centre axis) and the intersection of the line with the right axis provides the post-test probability. Regrettably, its routine use seems to be limited by the unfamiliarity of practitioners with the concept of diagnostic LRs,6 and also because LR estimates are rarely reported in studies evaluating diagnostic tests.7 LR of a test result The LR represents the direction and the strength of evidence provided by a test result. 
It is calculated by dividing the likelihood of the test result among patients with the condition by the likelihood of this same test result among patients without the condition.8 The values of the LR range from zero to infinity. When the LR is greater than one, the test result supports the presence of the condition (individuals with the condition are more likely to have the given test result than individuals without the condition), while, when it is lower than one, the test result supports the absence of the condition (individuals with the condition are less likely to have the given test result than individuals without the condition). An LR of one suggests that the observed test result has no diagnostic value. The farther the LR is away from one (towards zero or infinity), the stronger the evidence is provided by the test. LRs can be estimated for binary (positive or negative), ordinal (more than two categories) or continuous (number scale) diagnostic test outcomes. However, ordinal and continuous outcomes are often dichotomised using a cut-off value to help with the decision-making process,9 and validation studies for diagnostic tests conventionally report the corresponding diagnostic sensitivities and specificities (DSe and DSp, respectively), not LRs.7 For a test with a binary outcome, two LRs are reported, one for a positive test result (LR^+) and one for a negative test result (LR^−). The LR^+ and LR^− can be directly computed from the test DSe and DSp (LR^+=DSe/(1−DSp) and LR^−=(1−DSe)/DSp, respectively). The two-step Fagan's nomogram The original version of the Fagan's nomogram first requires the calculation of the LRs of the test result from the accessible DSe and DSp. 
The two-step Fagan's nomogram, proposed here, includes the initial calculation step for the LR^+ and LR^−, while maintaining the structure of the original nomogram (figure 2); it was generated using the Python-based program PyNomo,10 and the script is available from the corresponding author upon request.

User's guide for the two-step Fagan's nomogram

As compared to the original Fagan's nomogram (figure 1), the updated version includes two additional axes corresponding to the DSe and the DSp of the test (figure 2). The DSe and the DSp axes have red (left-hand side) and blue (right-hand side) scales that are used, respectively, to calculate the LR of a positive or a negative test result as a first step. The second step corresponds to the traditional Fagan's approach, where the post-test probability is deduced from the previously obtained LR and the pre-test probability.

Step 1: calculation of the LR of a given test result

It is first necessary to know the DSe and DSp of the test (from the manufacturer or the literature) and to have obtained a test result from the patient (ie, positive or negative). If the obtained test result is positive, the red scales on the DSe/DSp axes must be used, whereas, if the test result is negative, the blue scales should be used. A line is drawn to connect the appropriate DSe and DSp values for the test, and the intersection of the line with the central axis provides the LR of the obtained test result. At this stage, the user can appreciate the direction and strength of the evidence provided by the test result, regardless of the pre-test probability. If the LR of the test result is greater than one and very large, the evidence provided by the test result strongly supports the presence of the condition. However, if the LR of the test result is smaller than one and very close to zero, the evidence provided by the test result strongly supports the absence of the condition.
Naturally, if the LRs for the test are reported or available, this first step is not necessary.

Step 2: calculation of the post-test probability

The second step corresponds to the original use of the Fagan's nomogram. Given an estimate for the pre-test probability, a second line is drawn from the pre-test probability estimate on the far left axis through the previously obtained estimate of the LR on the central axis (from Step 1). The intercept of this line with the far right axis provides the corresponding post-test probability of the condition.

The example of MRI screening for women at high risk for breast cancer is used here to illustrate the application of the two-step Fagan's nomogram (eg, in figure 2). A meta-analysis, using data from 11 MRI evaluation studies, reported estimates for the DSe and DSp at 75% and 96%, respectively.11 In the instance where the MRI yields a positive screening result for at least one breast of a patient (red scales used for the DSe and DSp axes, figure 2), the line produced in the first step indicates an LR^+ of approximately 19 (red line #1, figure 2). The LR^+ is greater than one and is quite large, indicating that a positive result from the MRI supports the likelihood of a cancer being present. Subsequently, based on an estimated prevalence of 2% for breast cancer in these high-risk patients,11 the intercept of the line produced in the second step indicates that the probability for the patient to have breast cancer increased from approximately 2 to 28%, given the positive MRI result for this patient (red line #2, figure 2). Estimates of the prevalence are not always available in the literature for all the health conditions. In these instances, it is a common practice to use best guess estimates from clinical experience.
Alternatively, if the MRI yields a negative screening result for both breasts (blue scales used for the DSe and DSp axes, figure 2), the line produced in the first step indicates an LR^− of approximately 0.25 (blue line #1, figure 2). The LR^− is smaller than one and is close to zero indicating that a negative result from the MRI does not support the likelihood of a cancer being present. Subsequently, based on the previous 2% prevalence estimate, the intercept of the line produced in the second step indicates that the probability for the patient to have breast cancer decreased from approximately 2 to 0.6%, given the negative MRI result for this patient (blue line #2, figure 2). The Fagan's nomogram is the simplest of the Bayes’ theorem calculators to help practitioners determine the probability of a patient truly having a condition of interest given a particular test result.4 It is particularly useful for the clinical practice when speed is favoured over precision without the need of a calculator or computer.12 However, its practicality at the bedside is limited because the initial computation of the LR, from the DSe and DSp, is missing. With the addition of this first step, the two-step Fagan's nomogram is expected to facilitate the interpretation of any test outcome (positive or negative) and enhance its utilisation for routine use by evidence-based practitioners. The Bayes’ theorem, and thus the nomogram, is the proper approach to interpret the result from one test at a time, however, users should be cautious when interpreting multiple results from a chain of tests (eg, history, clinical examination, etc.). Unless the tests used in the chain are shown to be conditionally independent (ie, the result of one test does not depend on the result of the other test given the health status), it is not recommended to use the post-test probability from one test result as the pre-test probability for the subsequent test. 
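The two steps are also easy to carry out numerically. The sketch below (my own illustration of the standard likelihood-ratio arithmetic, using the MRI figures quoted above) computes the LRs from DSe/DSp and pushes the pre-test probability through Bayes' theorem in odds form:

```python
# Step 1: likelihood ratios from diagnostic sensitivity and specificity.
def likelihood_ratios(dse, dsp):
    lr_pos = dse / (1 - dsp)          # LR+ = DSe / (1 - DSp)
    lr_neg = (1 - dse) / dsp          # LR- = (1 - DSe) / DSp
    return lr_pos, lr_neg

# Step 2: Bayes' theorem in odds form: post-odds = pre-odds * LR.
def post_test_probability(pretest, lr):
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# MRI screening example: DSe = 75%, DSp = 96%, prevalence = 2%.
lr_pos, lr_neg = likelihood_ratios(0.75, 0.96)
print(round(lr_pos, 2))                                # 18.75, read as ~19 on the nomogram
print(round(post_test_probability(0.02, lr_pos), 3))   # ~0.277, i.e. ~28%
print(round(post_test_probability(0.02, lr_neg), 4))   # ~0.0053 (the article reads ~0.6% off the nomogram)
```

The small discrepancies with the article's numbers come only from reading values graphically off the nomogram (eg, LR^− rounded to 0.25) rather than computing them.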
The two-step Fagan's nomogram can be downloaded at: www.adelaide.edu.au/vetsci/research/pub_pop/2step-nomogram/. • Competing interests None.
Mori-Tanaka approximation

Next: CONSISTENCY WITH EXACT RESULTS Up: EFFECTIVE MEDIUM THEORIES Previous: Differential effective medium approximation

The final approximation I consider is the Mori-Tanaka (MT) scheme of Mori and Tanaka (1973), as described by Weng (1984), Benveniste (1987), and others [see, for example, Berryman and Berge (1996)]. For the drained frame, the Mori-Tanaka approximation is obtained by assuming the composite has a host material with imbedded inclusions and then choosing the host to serve as the reference material, so r = h. Making this choice in (general) and substituting gives the Mori-Tanaka conditions. The Mori-Tanaka result for the bulk modulus with arbitrary ellipsoidal inclusion shapes is

$$\sum_i v_i \left(K^{(i)} - K^{*}_{MT}\right) P^{hi} = 0.$$

Because the Mori-Tanaka scheme can not be derived using any analogy to scattering theory (unlike the other three schemes considered so far), there is some ambiguity about how to apply the present method within Mori-Tanaka, and different choices of formulas for the Biot-Willis parameter may result. One of the more straightforward approaches can be shown to lead to the formula

$$\sum_i v_i \left(\alpha^{(i)} - \alpha^{*}_{MT}\right) P^{hi} = 0,$$

when the inclusions are all spherical in shape. I stress, however, that (MTBW) is not the only possible formula that could be obtained or that could be considered to be fully consistent with the Mori-Tanaka scheme. Note that it is easy to show that both (MTBW) and (DEMBW) have the advantage that they reproduce the known exact results (Berryman and Milton, 1991) for two-component poroelastic media. This fact provides a useful criterion for choosing among various possibilities that arise when trying to identify the proper generalizations of these theories for the poroelastic case.

Stanford Exploration Project
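As an illustration of how the bulk-modulus condition is used in practice, here is a sketch (my own, under the standard assumption — not stated in this excerpt — that for spherical inclusions P^{hi} = (K_h + 4*mu_h/3)/(K_i + 4*mu_h/3)) that solves the linear condition for K*_MT and checks it against the Reuss and Voigt bounds:

```python
# Mori-Tanaka effective bulk modulus for spherical inclusions in a host.
# Assumption: for spheres, P^{hi} = (K_h + 4*mu_h/3) / (K_i + 4*mu_h/3).
# The condition  sum_i v_i (K_i - K*) P^{hi} = 0  is linear in K*, so:
def mt_bulk_modulus(vols, Ks, K_host, mu_host):
    P = [(K_host + 4 * mu_host / 3) / (Ki + 4 * mu_host / 3) for Ki in Ks]
    num = sum(v * Ki * Pi for v, Ki, Pi in zip(vols, Ks, P))
    den = sum(v * Pi for v, Pi in zip(vols, P))
    return num / den

# Illustrative two-phase case: quartz-like host (K = 37 GPa, mu = 44 GPa)
# with 20% water-like inclusions (K = 2.2 GPa):
vols, Ks = [0.8, 0.2], [37.0, 2.2]
K_mt = mt_bulk_modulus(vols, Ks, K_host=37.0, mu_host=44.0)

reuss = 1.0 / sum(v / K for v, K in zip(vols, Ks))   # harmonic (lower) average
voigt = sum(v * K for v, K in zip(vols, Ks))          # arithmetic (upper) average
print(round(K_mt, 2), round(reuss, 2), round(voigt, 2))
assert reuss < K_mt < voigt
```

For two phases this Mori-Tanaka estimate coincides with the Hashin-Shtrikman bound corresponding to the chosen host, which is one way to see the connection to the exact two-component results mentioned above.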
Illustration 2.3: Average and Instantaneous Velocity

When an object's velocity is changing, it is said to be accelerating. In this case, the average velocity over a time interval is (in general) not equal to the instantaneous velocity at each instant in that time interval. So how do we determine the instantaneous velocity?

Play the first animation, where the toy Lamborghini's velocity is changing (increasing) with time (position is given in centimeters and time is given in seconds). Click the "show rise, run, and slope" button. The slope of the blue line segment represents the Lamborghini's average velocity, v[avg], during the time interval (5 s, 10 s).

What is the Lamborghini's average velocity during the time interval (6 s, 9 s)? It is the slope of the new line segment shown when you enter 6 s for the start and 9 s for the end and click the "show rise, run, and slope" button. When you get a good-looking graph, right-click on it to clone the graph and resize it for a better view. What is the Lamborghini's average velocity, v[avg], during the time interval (7 s, 8 s)? How about the average velocity during the time interval (7.4 s, 7.6 s)?

As the time interval gets smaller and smaller, the average velocity approaches the instantaneous velocity, as shown by the Instantaneous Velocity Animation. The instantaneous velocity therefore is the slope of the position vs. time graph at any time. If you have taken calculus, you know that this slope is also the derivative of the function shown, here x(t). The Lamborghini moves according to the function x(t) = 1.0*t^2, and therefore v(t) = 2*t, which is the slope depicted in the Instantaneous Velocity Animation.
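Since x(t) = t^2 here, the shrinking intervals are easy to check numerically (a quick illustration of the limit, not part of the original applet):

```python
# Average velocity of x(t) = 1.0 * t^2 over the intervals from the text.
def x(t):
    return 1.0 * t**2

def average_velocity(t1, t2):
    return (x(t2) - x(t1)) / (t2 - t1)

for t1, t2 in [(5, 10), (6, 9), (7, 8), (7.4, 7.6)]:
    print((t1, t2), average_velocity(t1, t2))

# For a parabola, the average velocity over any interval centred on t = 7.5 s
# equals the instantaneous velocity v(7.5) = 2 * 7.5 = 15 cm/s, so every
# interval above gives (approximately, up to floating point) 15.
```

That all four intervals share the midpoint 7.5 s is why the slope never changes as the interval shrinks; for an interval centred elsewhere, the average velocity would converge to 2t at that centre instead.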
Probability and odds question

April 5th 2013, 04:35 PM #1

Imagine there are two tennis players (X and Y) that are evenly matched. If they play 4 matches, what are the odds that one of them will win all four?

Now assume there are two tennis players that may or may not be evenly matched. They play 4 matches and player X wins all 4 matches. What are the odds that player Y is at least as good as, if not better than, player X?

April 5th 2013, 09:29 PM #2

Re: Probability and odds question

Hey ItsOmar. You should think about whether the number of matches one player wins is modeled by a binomial random variable. If each match is independent and each player has the same probability of winning, then this is a binomial distribution with n matches and probability p for a fixed player. If the probabilities change, but are independent, you need to use a probability generating function. If the probabilities depend on the outcome of the last match, you need to use a Markov chain model. If it is more complex than this, you will probably need to use simulation models on a computer.

April 6th 2013, 04:51 AM #3

Re: Probability and odds question

Thanks for the help! Assuming each match is independent, I used the binomial distribution and got an answer of .0625 for the first question. I doubled it, however, since either player could potentially win all four. So: 12.5%. As for the second question, I'm still completely lost.

April 6th 2013, 05:10 PM #4

Re: Probability and odds question

For the second one you need to do a hypothesis test. If you assume a binomial with parameters n = 4 and p = probability of player X winning, then the hypothesis that player Y is at least as good as or better than X means that p <= 0.5. So now you are testing H0: p <= 0.5 against H1: p > 0.5. You first have to estimate p from your sample (it will be 1 or close to 1, given your sample). Then you need to construct a test statistic (it will be from a binomial distribution) and show that the p-value for H0 is small enough to reject H0.
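The numbers in the thread are easy to reproduce (a quick check of the binomial arithmetic, not part of the original posts):

```python
from math import comb

# Binomial(n, p) probability mass function.
def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Question 1: a fixed player sweeps all 4 matches at p = 0.5.
p_fixed = binom_pmf(4, 4, 0.5)      # 0.5**4 = 0.0625
p_either = 2 * p_fixed              # either player sweeps: 0.125, i.e. 12.5%

# Question 2: the p-value of observing 4 wins in 4 at the boundary of
# H0 (p = 0.5) is P(X >= 4) = 0.0625 -- suggestive, but not below the
# conventional 0.05 threshold, so H0 is not rejected at that level.
p_value = sum(binom_pmf(k, 4, 0.5) for k in range(4, 5))
print(p_fixed, p_either, p_value)
```

Note the two questions use the same 0.0625 differently: once as a plain probability (doubled because either player can sweep), once as a one-sided tail probability for the test.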
XFree86 Video Timings HOWTO : Plotting Monitor Capabilities
Previous: Fixing Problems with the Image. Next: Credits

15. Plotting Monitor Capabilities

To plot a monitor mode diagram, you'll need the gnuplot package (a freeware plotting language for UNIX-like operating systems) and the tool modeplot, a shell/gnuplot script to plot the diagram from your monitor characteristics, entered as command-line options. Here is a copy of modeplot:

# modeplot -- generate X mode plot of available monitor modes
# Do `modeplot -?' to see the control options.
# ($Id: video-modes.sgml,v 1.2 1997/08/08 15:07:24 esr Exp $)

# Monitor description. Bandwidth in MHz, horizontal frequencies in kHz
# and vertical frequencies in Hz. The defaults below match the help text.
TITLE="Viewsonic 21PS"
BANDWIDTH=185
MINHSF=31
MAXHSF=85
MINVSF=50
MAXVSF=160
ASPECT="4/3"
vesa=72.5		# VESA-recommended minimum refresh rate

while [ "$1" != "" ]
do
	case $1 in
	-t) TITLE="$2"; shift;;
	-b) BANDWIDTH="$2"; shift;;
	-h) MINHSF="$2" MAXHSF="$3"; shift; shift;;
	-v) MINVSF="$2" MAXVSF="$3"; shift; shift;;
	-a) ASPECT="$2"; shift;;
	-g) GNUOPTS="$2"; shift;;
	-?) cat <<EOF
modeplot control switches:
-t "<description>"	name of monitor		defaults to "Viewsonic 21PS"
-b <nn>			bandwidth in MHz	defaults to 185
-h <min> <max>		min & max HSF (kHz)	defaults to 31 85
-v <min> <max>		min & max VSF (Hz)	defaults to 50 160
-a <aspect ratio>	aspect ratio		defaults to 4/3
-g "<options>"		pass options to gnuplot

The -b, -h and -v options are required, -a, -t, -g optional.
You can use -g to pass a device type to gnuplot so that (for example)
modeplot's output can be redirected to a printer. See gnuplot(1) for details.

The modeplot tool was created by Eric S. Raymond <esr@thyrsus.com> based on
analysis and scratch code by Martin Lottermoser <Martin.Lottermoser@mch.sni.de>

This is modeplot $Revision: 1.2 $
EOF
	    exit 0;;
	esac
	shift
done

gnuplot $GNUOPTS <<EOF
set title "$TITLE Mode Plot"

# Magic numbers. Unfortunately, the plot is quite sensitive to changes in
# these, and they may fail to represent reality on some monitors.
# We need to fix values to get even an approximation of the mode diagram.
# These come from looking at lots of values in the ModeDB database.
F1 = 1.30	# multiplier to convert horizontal resolution to frame width
F2 = 1.05	# multiplier to convert vertical resolution to frame height

# Function definitions (multiplication by 1.0 forces real-number arithmetic)
ac = (1.0*$ASPECT)*F1/F2
refresh(hsync, dcf) = ac * (hsync**2)/(1.0*dcf)
dotclock(hsync, rr) = ac * (hsync**2)/(1.0*rr)
resolution(hv, dcf) = dcf * (10**6)/(hv * F1 * F2)

# Put labels on the axes
set xlabel 'DCF (MHz)'
set ylabel 'RR (Hz)' 6	# Put it right over the Y axis

# Generate diagram
set grid
set label "VB" at $BANDWIDTH+1, ($MAXVSF + $MINVSF) / 2 left
set arrow from $BANDWIDTH, $MINVSF to $BANDWIDTH, $MAXVSF nohead
set label "max VSF" at 1, $MAXVSF-1.5
set arrow from 0, $MAXVSF to $BANDWIDTH, $MAXVSF nohead
set label "min VSF" at 1, $MINVSF-1.5
set arrow from 0, $MINVSF to $BANDWIDTH, $MINVSF nohead
set label "min HSF" at dotclock($MINHSF, $MAXVSF+17), $MAXVSF + 17 right
set label "max HSF" at dotclock($MAXHSF, $MAXVSF+17), $MAXVSF + 17 right
set label "VESA $vesa" at 1, $vesa-1.5
set arrow from 0, $vesa to $BANDWIDTH, $vesa nohead
# style -1
plot [dcf=0:1.1*$BANDWIDTH] [$MINVSF-10:$MAXVSF+20] \
	refresh($MINHSF, dcf) notitle with lines 1, \
	refresh($MAXHSF, dcf) notitle with lines 1, \
	resolution(640*480, dcf) title "640x480 " with points 2, \
	resolution(800*600, dcf) title "800x600 " with points 3, \
	resolution(1024*768, dcf) title "1024x768 " with points 4, \
	resolution(1280*1024, dcf) title "1280x1024" with points 5, \
	resolution(1600*1200, dcf) title "1600x1200" with points 6
pause 9999
EOF

Once you know you have modeplot and the gnuplot package in place, you'll need the following monitor characteristics:

• video bandwidth (VB)
• range of horizontal sync frequency (HSF)
• range of vertical sync frequency (VSF)

The plot program needs to make some simplifying assumptions which are not necessarily correct.
This is the reason why the resulting diagram is only a rough description. These assumptions are:

1. All resolutions have a single fixed aspect ratio AR = HR/VR. Standard resolutions have AR = 4/3 or AR = 5/4. The modeplot program assumes 4/3 by default, but you can override this.

2. For the modes considered, horizontal and vertical frame lengths are fixed multiples of horizontal and vertical resolutions, respectively:

	HFL = F1 * HR
	VFL = F2 * VR

As a rough guide, take F1 = 1.30 and F2 = 1.05 (see frame "Computing Frame Sizes").

Now take a particular sync frequency, HSF. Given the assumptions just presented, every value of the clock rate DCF already determines the refresh rate RR, i.e. for every value of HSF there is a function RR(DCF). This can be derived as follows. The refresh rate is equal to the clock rate divided by the product of the frame sizes:

	RR = DCF / (HFL * VFL)					(*)

On the other hand, the horizontal frame length is equal to the clock rate divided by the horizontal sync frequency:

	HFL = DCF / HSF						(**)

VFL can be reduced to HFL by means of the two assumptions above:

	VFL = F2 * VR = F2 * (HR / AR) = (F2/F1) * HFL / AR	(***)

Inserting (**) and (***) into (*) we obtain:

	RR = DCF / ((F2/F1) * HFL**2 / AR)
	   = (F1/F2) * AR * DCF * (HSF/DCF)**2
	   = (F1/F2) * AR * HSF**2 / DCF

For fixed HSF, F1, F2 and AR, this is a hyperbola in our diagram. Drawing two such curves for minimum and maximum horizontal sync frequencies, we have obtained the two remaining boundaries of the permitted region.

The straight lines crossing the capability region represent particular resolutions. This is based on (*) and the second assumption:

	RR = DCF / (HFL * VFL) = DCF / (F1 * HR * F2 * VR)

By drawing such lines for all resolutions one is interested in, one can immediately read off the possible relations between resolution, clock rate and refresh rate of which the monitor is capable. Note that these lines do not depend on monitor properties, but they do depend on the second assumption.
The modeplot tool provides you with an easy way to do this. Do modeplot -? to see its control options. A typical invocation looks like this:

   modeplot -t "Swan SW617" -b 85 -v 50 90 -h 31 58

The -b option specifies the video bandwidth; -v and -h give the vertical and horizontal sync frequency ranges, respectively.

When reading the output of modeplot, always bear in mind that it gives only an approximate description. For example, it disregards limitations on HFL resulting from a minimum required sync pulse width, and it can only be as accurate as the assumptions are. It is therefore no substitute for a detailed calculation (involving some black magic) as presented in Putting It All Together. However, it should give you a better feeling for what is possible and which tradeoffs are involved.

XFree86 Video Timings HOWTO : Plotting Monitor Capabilities
Intuition for Nagata's altitude formula?

This is theorem 14.C on p. 84 of Matsumura's Commutative Algebra. Let $A$ be a noetherian domain, and let $B$ be a finitely generated overdomain of $A$. Let $P \in Spec(B)$ and $p = P \cap A$. Then we have

$ht(P) \leq ht(p) + tr.d._{A} B - tr.d._{K(p)} K(P)$,

with equality when $A$ is universally catenary or when $B$ is a polynomial ring over $A$.

Question: How should one understand this formula? I'm hazarding a guess that the term $tr.d._{A} B - tr.d._{K(p)} K(P)$ can somehow measure how primes of $B$ are identified when they are restricted back to $A$. But this may be woefully wrong, and I just want to know how I should view this result, or whether there is any (geometric) intuition behind it.

ac.commutative-algebra ag.algebraic-geometry

1 Answer

Put $n = \dim B$ for the dimension of the variety with coordinate ring $B$. Then the formula can be rewritten as

$n - ht(P) \geq ((n - tr.d._{A} B) - ht(p)) + tr.d._{k(p)} k(P)$.

The left-hand side of the inequality is the dimension of the subvariety defined by $P$. The term $n - tr.d._{A} B$ on the right is the dimension of the variety with coordinate ring $A$: it loses $tr.d._{A} B$ dimensions with respect to the variety with coordinate ring $B$. Then $(n - tr.d._{A} B) - ht(p)$ represents the dimension of the subvariety defined by $p$. The term $tr.d._{k(p)} k(P)$ is a correction term, because blow-ups can occur.

hmm, this looks reasonable for finitely generated k-algebras. Just to be safe, what you have said is not going to be true for an arbitrary noetherian domain, right? – Ho Chung Siu Nov 25 '09 at 14:21

Yes, I'm thinking of algebraic varieties, but I added a special case for Krull domains in my new answer – Francisco Perdomo Nov 25 '09 at 19:08
Radical Notation

Any expression involving an nth root can be written using radical notation. The symbol √ is called the radical symbol. If n is a positive integer and a is a real number for which a^(1/n) is defined, then the expression ⁿ√a is called a radical, and ⁿ√a = a^(1/n). The number a is called the radicand. The number n is called the index of the radical. Radicals of index 2 and 3 are referred to as "square roots" and "cube roots," respectively. Even roots of negative numbers, such as √(-4), are not real numbers because there are no even roots of negative numbers in the real number system.

Example 1 Changing notations
Write each exponential expression using radical notation and each radical expression using exponential notation. Assume the variables represent positive numbers. Do not simplify.

Example 2 Simplifying radical expressions
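A few standard instances of the conversion pattern described above (illustrative examples of my own, not the text's original worked examples):

```latex
% Converting between exponential and radical notation, assuming a, x, y > 0:
a^{1/2} = \sqrt{a}, \qquad
a^{1/3} = \sqrt[3]{a}, \qquad
x^{1/5} = \sqrt[5]{x}, \qquad
\sqrt[4]{y} = y^{1/4}.
% A simplification in the spirit of Example 2, plus an even root that fails:
8^{1/3} = \sqrt[3]{8} = 2, \qquad
\sqrt{-4} \ \text{is not a real number.}
```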
The Plot Of The Week - Higgs Windows Of Opportunity

Last week an important new paper appeared on the arXiv: "MSSM Higgs Boson Searches at the LHC: Benchmark Scenarios after the Discovery of a Higgs-like Particle", by M. Carena, S. Heinemeyer, O. Stal, C. Wagner, and G. Weiglein. The paper fills a void that was created by the discovery of the Higgs particle last July by the ATLAS and CMS experiments: a thorough assessment of the constraints on the allowed chunks of SUSY parameter space in light of the existence of a neutral scalar at 125 GeV.

The issue is complicated, because Supersymmetry is not just a theory, but a framework within which one can build quite different phenomenologies, depending on the exact values of a multitude of free parameters. So hard is the problem of characterizing this wide 100+-dimensional space that before one ventures to study the model predictions one must decide on a set of "test points" which can be used as benchmarks. These are called "benchmark scenarios", and agreeing on the particular values of the defining parameters is important for a number of reasons: understanding how strong the experimental constraints of subsequent experiments are, comparing the sensitivity of different searches, and figuring out what the most distinctive signatures of the considered models may be.

Hence the study, which is quite extensive and covers several possibilities for some of the experimental signatures of the Higgs sector of the MSSM (the minimal Supersymmetric extension of the Standard Model, already a restrictive choice in the mare magnum of possibilities) which can be nailed early on by LHC measurements: for example, rates of specific decays of the Higgs-like particle discovered last July.
The study also considers the possibility that the particle is not the lightest neutral scalar, but actually the heavy one; such a hypothesis of course creates wholly different scenarios.

One bit of the discussion in which I am personally interested (although, as you might recall, I remain a strong SUSY sceptic) is the scenario where the discovered light Higgs boson can be produced in pairs by the decay of a heavier counterpart: H->hh. Such a circumstance is possible in a wide chunk of parameter space and would lead to quite interesting experimental signatures. Given that I was involved in the search for the MSSM bbH->bbbb production process, which resulted in a recent publication (cited by the article discussed here), it is clear that the option of studying the H->hh->bbbb process in the same final state (one involving four b-flavoured jets) is quite attractive. We'll see what we end up doing there...

Anyway, this article is titled "the plot of the week", so let me pick a representative graph from the paper to stimulate you to give it a deeper look. The graph is a representation of the plane of two of the parameters defining the MSSM, tan(β) and μ, in the so-called "low-mH scenario". Different coloured swaths of the plane correspond to areas that experimental searches have already excluded (such as the purple one at the bottom, excluded by direct LHC searches for MSSM particles, the blue one excluded by the LEP experiments, or the red one excluded by limits on the H->ττ decay by ATLAS and CMS); the most interesting regions are those in green, which correspond to parameter values compatible with the Higgs boson mass measured by the experiments. The black areas instead correspond to parameter-space points which would predict rates of Higgs production too high with respect to what has been observed.

All in all one gets the impression that the "window of opportunity" for the MSSM is closing down. But if you read the paper (written by MSSM enthusiasts) you might get a different idea!
But what measurements actually affect the odds (this particular version of) the MSSM is correct? An analogy to clarify: You've dropped your car keys somewhere and it seems most likely you did it while walking across a football field. Direct searches obviously affect the odds you actually did drop them there, but increasingly accurate measurements of which path you walked across the field don't. They obviously limit the part of the field where the keys could be, but knowing the exact path you took would "only" limit your search to a line rather than an area, not affect the odds of the keys being in the field at all. Do the measurements of Higgs mass and production rate really affect the odds of Susy being right?
JollyJoker (not verified) | 03/04/13 | 16:17 PM

Correction: it's not the Higgs particle, it's the Higgs-like particle. Correction: it's not the MSSM, it's the MSSM-like failed-like theory. Nemo
ArtCarney-Humide (not verified) | 03/04/13 | 21:23 PM

Those purple, red, blue and black exclusion regions look impressively ominous to me. If I were a SUSY practitioner I would be feeling definitely claustrophobic... But then, I guess one might also think that the fact that the window is closing down only makes discovery more imminent. After all, that's what happened with the Higgs. Pity we'll have to wait until the end of 2015, at least, to find out.
Anonymous4 (not verified) | 03/05/13 | 12:01 PM

The "closing window" only refers to the case where the *heavy* CP-even Higgs is interpreted as the newly discovered state at 125.5 GeV. That possibility might indeed be ruled out by non-SM Higgs searches. The more "conventional" interpretation, in which the light CP-even Higgs has a mass of 125.5 GeV, remains a perfect possibility (as also mentioned in the article arXiv:1302.7033).
Sven (not verified) | 03/06/13 | 03:21 AM
Facts about Erdös Numbers and the Collaboration Graph

Return to Erdös Number Project home page.

The following interesting facts about the collaboration graph and Erdös numbers are mostly based on information in the database of the American Mathematical Society's Mathematical Reviews (MR) as of July, 2004. Internet access to MR data is provided by the service MathSciNet. We gratefully acknowledge the assistance of the AMS in making this information available. An article with much of the information contained in this page appears in Geographical Analysis.

[For an older page, with the corresponding facts as of May, 2000, click here. It is interesting to note that over this 4-year period, 64,000 new authors were added to the MR database, but the number of authors who have written only solo-authored papers has DECREASED, from just over 84,000 to just under 84,000. Similarly, the mean number of collaborators per author increased 14%, from 2.94 to 3.36.]

A different standard is used for collaborations here than is used in constructing our Erdös-1 and Erdös-2 lists. First, for our lists we use sources in addition to Mathematical Reviews; the conclusions on this page are based just on the MR data. Second, we generally do not count articles that are not the result of research collaboration as establishing a link. For example, if Jack and Jill wrote a joint obituary article on Humpty Dumpty when he died, that article might appear in the MR database and establish a link between Jack and Jill for the conclusions reached on this page, whereas the traditional definition of the collaboration graph would not suggest putting an edge between them just on this basis. There are also a few author identification problems in the Math Reviews database (primarily prior to 1985), which make the conclusions here only approximate.

Data on the entire collaboration graph

There are about 1.9 million authored items in the Math Reviews database, by a total of about 401,000 different authors.
(This includes all books and papers in MR except those items, such as some conference proceedings, that do not have authors.) Approximately 62.4% of these items are by a single author, 27.4% by two authors, 8.0% by three authors, 1.7% by four authors, 0.4% by five authors, and 0.1% by six or more authors. The largest number of authors shown for a single item is in the 20s, but sometimes the author list includes "et al.", whom we did not count as a real person. The fraction of items authored by just one person has steadily decreased over time, starting out above 90% in the 1940s and currently standing at under 50%.

Let B be the bipartite graph whose vertices are papers and authors, with an edge joining a paper with each author of that paper. Then B has about 2.9 million edges. The average number of authors per paper is 1.51, and the average number of papers per author is 7.21.

Click here to see the distribution of the number of papers per author. The median is 2, the mean is 7.21, and the standard deviation is 18.02. It is interesting (for tenure review committees?) to note that the 60th percentile is 3 papers, the 70th percentile is 4, the 80th percentile is 8, the 90th percentile is 18, and the 95th percentile is 32. Indeed, over 42% of all authors in the database have just one paper.

There are four authors with more than 700 papers: Paul Erdös with 1416 (he actually wrote more papers than that, but these are just the ones covered by Math Reviews), Drumi Bainov with 823, SAHARON SHELAH with 760, and Leonard Carlitz with 730. Bainov's Erdös number is 4, SHELAH's is 1, and Carlitz's is 2. The other mathematicians with more than 500 papers listed in MathSciNet (and their Erdös numbers) are Hari M. Srivastava (2), Lucien Godeaux (infinite — actually he wrote only one joint paper), Ravi Agarwal (3), Edoardo Ballico (3), FRANK HARARY (1), Josip E. Pecaric (2), Shigeyoshi Owa (3), and Richard Bellman (2).
The most prolific authors listed in the DBLP (dealing with computer science publications) can be found on a list at their website (DBLP), which is definitely worth a look.

The collaboration graph C has the roughly 401,000 authors as its vertices, with an edge between every pair of people who have a joint publication (with or without other coauthors — but see below for a discussion of "Erdös numbers of the second kind", where we restrict links to just two-author papers). Click here for a picture of a small portion of this graph. The entire graph has about 676,000 edges, so the average number of collaborators per person is 3.36. (If we were to view C as a multigraph, with one edge between two vertices for each paper in which they collaborated, then there would be about 1,300,000 edges, for an average of 6.55 collaborations per person.)

In C there is one large component consisting of about 268,000 vertices. Of the remaining 133,000 authors, 84,000 of them have written no joint papers (these are isolated vertices in C). The average number of collaborators for people who have collaborated is 4.25; the average number of collaborators for people in the large component is 4.73; and the average number of collaborators for people who have collaborated but are not in the large component is 1.65.

Click here to see the data on the number of collaborators per author (in other words, the numbers of coauthors mathematicians have). In graph-theoretical terms, this table shows the degrees of the vertices in C. The median is 1, the mean is 3.36, and the standard deviation is 6.61. (If we omit the isolated vertices, then the median degree is 2, and the mean is 5.37.) Recent research (see our research page) has indicated that we should expect the nonzero degrees to follow a power law: the number of vertices with degree x should be proportional to x raised to a power, where the exponent is somewhere around –2 or –3.
Indeed, when we fit such a model to our data from May, 2000 (grouping the data in the tail), we find the exponent to be about –2.97, with a correlation coefficient for the model of r = 0.97. A slightly more accurate model throws in an exponential decay factor, and with this factor present, the exponent is –2.46, and r = 0.98. Apparently these models are appropriate for our data.

The five people with more than 200 coauthors are Paul Erdös (of course) with 509 (although the MR data actually show only 504, missing some coauthors of very minor works or works before 1940, when MR was started), FRANK HARARY (Erdös number 1) with 268, Yuri Alekseevich Mitropolskii (Erdös number 3) with 244, NOGA ALON (Erdös number 1) with 227, and Hari M. Srivastava (Erdös number 2) with 244.

Click here for information on publication habits over time (1940 to 1999). It is clear from these data that collaboration has increased over the past 60-odd years, especially so recently. By 2000, less than half of all mathematics papers were by a single author, about a third were by two authors, about an eighth by three authors, and 3% by four or more authors. The table also indicates that the average number of papers per author in a decade has slowly increased over time, now standing at about 5 (although the variance is very large, and the median is only 2).

The radius of the large component of C (as it existed in Mathematical Reviews data as of July, 2004) is 12, and its diameter is 23. There are exactly two vertices with eccentricity 12 — Izrail M. Gelfand (Rutgers University) and Yakov Sinai (Princeton University), both of whom have Erdös number 3 — but not including Paul Erdös! (In other words, there is no one with Gelfand number or Sinai number greater than 12, whereas the maximum Erdös number is 13. In all, 1220 people have eccentricity 13.) Erdös does have the distinction of having the smallest mean distance to the other vertices, though: 4.65.
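The power-law fits quoted above are ordinary least squares on log-log data. A minimal sketch of that procedure (the function name and the synthetic test data are mine; this is not the project's actual fitting code):

```python
import math

def fit_power_law(degree_counts):
    """Least-squares fit of log(count) = log(c) + alpha * log(degree).

    degree_counts maps each degree x >= 1 to the number of vertices
    with that degree. Returns the fitted exponent alpha.
    """
    xs = [math.log(d) for d, n in degree_counts.items() if n > 0]
    ys = [math.log(n) for d, n in degree_counts.items() if n > 0]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx  # slope of the log-log regression line

# Synthetic check: counts drawn exactly from n(x) = 1000 * x**(-3),
# so the fit should recover an exponent very close to -3.
data = {x: 1000 * x ** -3 for x in range(1, 50)}
alpha = fit_power_law(data)
```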
There are five other people with means less than 5. In order of increasing mean, they are RONALD GRAHAM, ANDREW ODLYZKO, NOGA ALON, Larry Shepp, and FRANK HARARY. All of them have eccentricity 14 and Erdös number 1 except for Shepp, whose eccentricity is 13 and whose Erdös number is 2. The means for Gelfand and Sinai are slightly higher than 5.

Based on a sample of 100 pairs of vertices in this component, the average distance between two vertices is around 7.64 (between 7.41 and 7.87 with 95% confidence), with a standard deviation of about 1.19. The median of the sample was 7, with the quartiles at 6 and 8. The smallest and largest distances in the sample were 4 and 11, respectively. The appropriate phrase for C, then, is perhaps "eight degrees of separation", if we wish to account for three quarters of all pairs of mathematicians.

To analyze this another way, we took a sample of 100 vertices in the large component and computed for each of them: the degree, the mean distance to all the other vertices, the standard deviation of the distances to all the other vertices, and the maximum distance to another vertex (the "eccentricity"). Here are the results from the sample. The mean distance to other vertices varied from 5.80 to 10.67, with an average of 7.37 and a standard deviation of 0.86. The standard deviation of the distances to all the other vertices was remarkably constant, with the numbers varying only between 1.14 and 1.28 (mean 1.19, standard deviation 0.03). So although the average "Jane Doe" number varies quite a bit, depending on who Jane Doe is, the distribution of these numbers has pretty much the same shape and spread for everyone. It's as if those people further away from the heart of the graph may take longer to get to the heart, but once there, the fan-out pattern is the same. The eccentricities of the vertices in the sample ranged from 14 to 19, with a mean of 15.62 and a standard deviation of 1.04.
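The distance, eccentricity, and mean-distance figures discussed above all come down to breadth-first search from a vertex. A minimal sketch (the function and the toy graph are illustrative, not the MR data):

```python
from collections import deque

def bfs_distances(adj, source):
    """Distances from source to every vertex in its component, via BFS.

    This is the computation behind "Jane Doe numbers": the eccentricity
    of a vertex is the largest distance returned, and the mean distance
    is the average over the rest of the component.
    """
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

# Toy graph: a path a - b - c - d
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
d = bfs_distances(path, "a")
ecc = max(d.values())                  # eccentricity of "a"
mean = sum(d.values()) / (len(d) - 1)  # mean distance to the other vertices
```

Sampling vertices and averaging these per-vertex statistics is exactly the kind of estimate reported in the samples above.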
We also looked at correlations among Erdös number (n), vertex degree (d), and average distance to the other vertices (l). The associations are as one might predict: The correlation coefficient between d and n is –0.46 (people with a lot of collaborators tend to have smaller Erdös number); the correlation coefficient between d and l is –0.56 (people with a lot of collaborators tend to have shorter paths to other people); and the correlation coefficient between n and l is 0.78 (people with a small Erdös number are closer to the heart of the graph and therefore have shorter paths to others, compared to those out in the fringes).

The "clustering coefficient" of a graph is equal to the fraction of ordered triples of vertices a,b,c in which edges ab and bc are present that have edge ac present. (In other words, how often are two neighbors of a vertex adjacent to each other?) The clustering coefficient of the collaboration graph of the first kind is 1308045/9125801 = 0.14. The high value of this figure, together with the fact that average path lengths are small, indicates that this graph is a "small world" graph (as defined by Duncan Watts — see our pages on research on collaboration and related concepts).

We also have some data on the portion of the collaboration graph outside the "Erdös component" (the one giant component). We are ignoring here the 84,000 isolated vertices and looking only at those authors who have collaborated but do not have a finite Erdös number. There are about 50,000 such vertices. There are about 41,000 edges in these components, so the average degree of these vertices is 1.65. In other words, a person who has collaborated but does not find herself in the Erdös component of C has on the average collaborated with only one or two people. In contrast, the average degree of vertices in the Erdös component is 4.73 (there are about 634,000 edges and 268,000 vertices). Click here for the distribution of component sizes.
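The ordered-triples definition of the clustering coefficient given above can be computed directly. A brute-force sketch on a toy graph (illustrative only; this is not how the project computed its figure, and it would be far too slow on 400,000 vertices):

```python
def clustering_coefficient(adj):
    """Global clustering coefficient per the ordered-triples definition:
    the fraction of ordered triples (a, b, c) with edges ab and bc present
    that also have edge ac. adj maps each vertex to its set of neighbours.
    """
    paths = 0   # ordered triples a-b-c with a != c
    closed = 0  # those triples in which a and c are also adjacent
    for b, nbrs in adj.items():
        for a in nbrs:
            for c in nbrs:
                if a == c:
                    continue
                paths += 1
                if c in adj[a]:
                    closed += 1
    return closed / paths if paths else 0.0

# A triangle {1,2,3} with a pendant vertex 4 attached to 2:
# one triangle, five connected triples, so the coefficient is 3/5.
triangle_plus = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2}}
cc = clustering_coefficient(triangle_plus)
```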
As would be expected, most of these roughly 18,000 other components are isolated edges (64% of them, in fact). The largest component has 32 vertices. Its most collaborating author is Yu. A. Shevlyakov (Department of Applied Mathematics, Simferopol State University, Crimea, Ukraine), who has 13 coauthors. The person outside the Erdös component with the most coauthors is Gholam Reza Jahanshahloo (Department of Mathematics, University for Teacher Education, Tehran, Iran), who is in a component with 23 vertices (he has collaborated with all but two of them).

Smaller collaboration graphs

It would be interesting to see how much collaboration goes on within one department. In the Department of Mathematics and Statistics at Oakland University there seems to be quite a bit. Click here for a pdf file of their collaboration graph in 2004 and here for the 2012 graph. If other departments produce such a graph, please send the link to me, and I will list them here. So far we have the University of Georgia mathematics department.

The distribution of Erdös numbers

The following table shows the number of people with Erdös number 1, 2, 3, ..., according to the electronic data. Note that there are slightly fewer people shown here with Erdös numbers 1 and 2 than in our lists, since our lists are compiled by hand from various sources in addition to MathSciNet. In addition to these 268,000 people with finite Erdös number, there are about 50,000 published mathematicians who have collaborated but have an infinite Erdös number, and 84,000 who have never published joint works (and therefore of course also have an infinite Erdös number).
Erdös number 0 --- 1 person
Erdös number 1 --- 504 people
Erdös number 2 --- 6593 people
Erdös number 3 --- 33605 people
Erdös number 4 --- 83642 people
Erdös number 5 --- 87760 people
Erdös number 6 --- 40014 people
Erdös number 7 --- 11591 people
Erdös number 8 --- 3146 people
Erdös number 9 --- 819 people
Erdös number 10 --- 244 people
Erdös number 11 --- 68 people
Erdös number 12 --- 23 people
Erdös number 13 --- 5 people

Thus the median Erdös number is 5; the mean is 4.65, and the standard deviation is 1.21.

One of the five people with the largest finite Erdös number is Arturo Robles, and one shortest path goes like this (year of joint work in parentheses): Erdös to Daniel D. Bonar (1977) to Charles L. Belna (1979) to S. A. Obaid (1983) to Wadie A. Bassali (1981) to Ibrahim H. M. el-Sirafy (1976) to Konstantin Chernous (1977) to Jose Valdes (1980) to B. Dugnol (1980) to P. Suarez Rodriguez (1995) to A. E. Alvarez Vigil (1995) to C. Gonzalez Nicieza (1992) to Jose Angel Huidobro (1986) to Robles (1990).

Since Paul Erdös collaborated with so many people, one would expect this distribution for him to be shifted downward from that of a random mathematician. For example, "Jerry Grossman numbers" have a median of 6, a mean of 5.71 (standard deviation = 1.22), and range as high as 15; and "Arturo Robles numbers" have a median of 15, a mean of 15.06 (standard deviation = 1.21). It turns out that the standard deviation is almost exactly the same for almost everyone in the large component.

Erdös numbers of the second kind

The entire discussion so far has been based on linking two mathematicians if they have written a joint paper, whether or not other authors were involved. A purer definition of the collaboration graph (in fact, the one that Paul Erdös himself seemed to favor) would put an edge between two vertices if the mathematicians have a joint paper by themselves, with no other authors.
Under this definition, for example, YOLANDA DEBOSE would not have an Erdös number of 1, since her only joint publication with Erdös was a three-author paper with ARTHUR M. HOBBS as well. (But HOBBS would still have Erdös number 1, since some of his joint works are with Paul alone.) Let C' denote the collaboration graph under this more restrictive definition, and let us call the associated path lengths "Erdös numbers of the second kind" (and therefore call traditional Erdös numbers "Erdös numbers of the first kind" when we need to make a distinction). Here is what we know about C' and Erdös numbers of the second kind.

This two-author-only collaboration graph has about 166,000 isolated vertices (including the 84,000 people who have written no joint papers, together with another 83,000 people who have written joint papers but only when three or more authors were involved — these numbers all rounded to the nearest thousand). The remaining 235,000 mathematicians in C' account for about 284,000 edges, so the average degree of a nonisolated vertex in C' is about 2.41 (as opposed to 4.25 for C).

Click here to see the data on the distribution of these degrees, i.e., the number of collaborators per author counting only dual works. The median is 1, the mean is 1.34, and the standard deviation is 2.84. (If we omit the isolated vertices, then the median degree is still 1, the mean is 2.41, and the standard deviation is 3.37.) As with the collaboration graph of the first kind, we should expect the nonzero degrees to follow a power law, and when we fit this model to our data from May 2000 (again grouping the data in the tail), we find the exponent to be about –3.26, with a correlation coefficient for the model of r = 0.97. The model with an exponential decay factor present gives the exponent as –2.70, with r = 0.98.

The three people with 100 or more coauthors of this type are Paul Erdös (of course) with 230, FRANK HARARY with 124, and SAHARON SHELAH with 121.
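The two edge rules just described can be sketched as a small routine over lists of author names. The routine and the paper list below are illustrative placeholders (echoing the DEBOSE/HOBBS situation, but with made-up data), not the project's actual processing:

```python
from itertools import combinations

def collaboration_graphs(papers):
    """Build edge sets for both collaboration graphs from a paper list.

    papers: list of author-name lists, one per paper.
    First kind:  an edge for every pair of coauthors on any joint paper.
    Second kind: an edge only when the two are the *sole* authors of a paper.
    Edges are stored as sorted name tuples so each pair has one canonical form.
    """
    first, second = set(), set()
    for authors in papers:
        names = sorted(set(authors))
        for pair in combinations(names, 2):
            first.add(pair)
        if len(names) == 2:
            second.add(tuple(names))
    return first, second

# Hypothetical data: one three-author paper and one two-author paper.
papers = [
    ["Erdos", "Chung", "Graham"],  # links of the first kind only
    ["Erdos", "Graham"],           # a link of both kinds
]
first, second = collaboration_graphs(papers)
```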
HARARY's only papers with Erdös are 3-author works, so his Erdös number of the second kind is 2 (through BOLLOBAS, for example); SHELAH's is 1.

There are about 176,000 vertices in the large component of C' (versus 268,000 in C). The average number of two-author-only collaborators for people in the large component is 2.82; and the average number of two-author-only collaborators for people who have written two-author papers but are not in the large component is 1.21.

The radius of the large component of C' (as it existed in Mathematical Reviews data as of July, 2004) is 14. The unique center is J. Bryce McLeod (whose Erdös numbers, of both kinds, are 3), and not Paul Erdös, whose eccentricity is 15, as is the eccentricity of 392 other people. The diameter of C' is 26 (this is the distance between the two people with Erdös number of the second kind equal to 15). As is the case with the collaboration graph of the first kind, Erdös has the distinction of having the smallest mean distance to the other vertices, 5.58, and no one else has a mean less than 6.

As in the case of C, we took a sample of vertices in the large component of C' and computed for each of them: the degree, the mean distance to all the other vertices, the standard deviation of the distances to all the other vertices, and the maximum distance to another vertex (the "eccentricity"). Here are the results from the sample of 100 vertices. The mean distance to other vertices varied from 6.87 to 11.99, with an average of 9.18 and a standard deviation of 1.19. (Thus a 95% confidence interval for the average distance between vertices is 8.95 to 9.42.) The standard deviation of the distances to all the other vertices was again remarkably constant, with the numbers varying only between 1.48 and 1.63 (mean 1.54, standard deviation 0.034). The eccentricities of the vertices in the sample ranged from 15 to 21, with a mean of 18.21 and a standard deviation of 1.32.
As for the correlations among Erdös number (n), vertex degree (d), and average distance to the other vertices (l), the correlation coefficient between d and n is –0.41; the correlation coefficient between d and l is –0.48; and the correlation coefficient between n and l is 0.86.

The clustering coefficient of the collaboration graph of the second kind is 48132/1738599 = 0.028. This is actually a fairly high value (compared to a random graph with this density of edges, where the clustering coefficient is essentially 0), so again we have a "small world" graph. (The reason it is so much smaller than the clustering coefficient for the collaboration graph of the first kind is that the multi-author collaborations create a lot of triangles.) The three mathematicians with at least 25 two-author collaboration pairs among their collaborators whose collaborators most collaborate with each other are Masatoshi Fujii, Masahiro Nakamura, and Jian She Yu, each with about 30 two-author collaborators and local clustering coefficients in the 11% to 13% range — these are the only ones above 10%. (In other words, for these people, about 12% of the pairs of their two-author collaborators have themselves written a two-author paper. In fact, Fujii and Nakamura are adjacent in C'.)

We also have some data on the portion of the collaboration graph of the second kind outside the "Erdös component" (the one giant component). We are ignoring here the 166,000 isolated vertices and looking only at those authors who have written two-author papers but do not have a finite Erdös number of the second kind. There are about 59,000 such vertices. There are about 36,000 edges in these components, so the average degree of these vertices is 1.21. In contrast, the average degree of vertices in the Erdös component is 2.82 (there are about 248,000 edges and 176,000 vertices). Click here for the distribution of component sizes.
As would be expected, most of these roughly 23,000 other components are isolated edges (three fourths of them, in fact). The largest component has 28 vertices.

The distribution of Erdös numbers of the second kind

The following table shows the number of people with Erdös number 1, 2, 3, ..., according to the electronic data but counting only coauthorships on papers with just two authors. In addition to these 176,000 people with finite Erdös number of the second kind, there are about 59,000 mathematicians who have collaborated but have an infinite Erdös number of the second kind (this is about 9,000 greater than the corresponding number for Erdös numbers of the first kind). These are Erdös numbers of the second kind:

Erdös number 0 --- 1 person
Erdös number 1 --- 230 people
Erdös number 2 --- 2153 people
Erdös number 3 --- 10118 people
Erdös number 4 --- 28559 people
Erdös number 5 --- 47430 people
Erdös number 6 --- 44102 people
Erdös number 7 --- 25348 people
Erdös number 8 --- 11265 people
Erdös number 9 --- 4299 people
Erdös number 10 --- 1570 people
Erdös number 11 --- 533 people
Erdös number 12 --- 206 people
Erdös number 13 --- 61 people
Erdös number 14 --- 25 people
Erdös number 15 --- 2 people

Thus the median Erdös number of the second kind is 5; the mean is 5.58, and the standard deviation is 1.55, a little higher than the corresponding statistics for Erdös numbers of the first kind, as would be expected. The two people with maximum Erdös number of the second kind are Sunil Kumar-2 and N. V. Silenok. Paul Erdös asked the following question: Is the collaboration graph of the second kind planar? Our guess was that surely it was not, and we now have a proof. If we can find a homeomorphic copy of the complete graph on five vertices in C', or a copy of the complete bipartite graph with three vertices in each part, then we know that the graph cannot be imbedded in a plane.
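Returning to the distribution table above: the quoted summary statistics (median 5, mean 5.58, standard deviation 1.55) can be verified directly from the tabulated counts with a few lines of plain Python:

```python
import math

# Distribution of Erdös numbers of the second kind, from the table above.
counts = {0: 1, 1: 230, 2: 2153, 3: 10118, 4: 28559, 5: 47430,
          6: 44102, 7: 25348, 8: 11265, 9: 4299, 10: 1570,
          11: 533, 12: 206, 13: 61, 14: 25, 15: 2}

total = sum(counts.values())
mean = sum(n * c for n, c in counts.items()) / total

# Median: smallest n whose cumulative count reaches half the population.
half, cum, median = total / 2, 0, None
for n in sorted(counts):
    cum += counts[n]
    if cum >= half:
        median = n
        break

var = sum(c * (n - mean) ** 2 for n, c in counts.items()) / total
print(median, round(mean, 2), round(math.sqrt(var), 2))   # → 5 5.58 1.55
```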
A natural place to look for such subgraphs would be in a portion of the graph where there are lots of edges. The following concept, apparently introduced not by graph theorists but by sociologists, proved fruitful. The “k-core” of a graph is the (unique) largest subgraph all of whose vertices have degree at least k. (See the article in Social Networks discussed on the “research” subpage for references to the notion of core.) It is easy to find the k-core: just remove all vertices of degree less than k, then repeat again and again until no such vertices remain. If any vertices remain, then they form the k-core. It is clear that the 1-core contains the 2-core, which contains the 3-core, etc. The smallest nonempty k-core (i.e., the one for largest k) is called the “main core”. For the collaboration graph of the second kind, we found (using electronic data) that the main core is the 5-core, and it has 70 vertices (including Erdös, not surprisingly, with degree 30) and 272 edges. Click here for the names of these most social mathematicians (all of whom have Erdös number of the first kind at most 2, and 50 of whom are Erdös coauthors), and here for the adjacency matrix of this graph. It turns out that the main core of the collaboration graph of the second kind has four complete graphs on five vertices: ALON-FUREDI-KLEITMAN-WEST-ERDOS, COLBOURN-Hartman-Mendelson-PHELPS-ROSA, COLBOURN-Lindner-Mendelson-PHELPS-ROSA, and Lindner-MULLIN-ROSA-STINSON-Wallis. It also has 125 copies of the complete bipartite graph with three vertices in each part (the other canonical nonplanar graph), such as (FAN CHUNG, RODL, SZEMEREDI)-(RON GRAHAM, TROTTER, ERDOS). So this graph is certainly nonplanar. Actually, these are not the only complete graphs on five vertices in the collaboration graph of the second kind.
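The k-core trimming procedure described above — repeatedly deleting vertices of degree below k until none remain — is easy to state in code. A sketch on a toy graph (not the actual collaboration data):

```python
def k_core(adj, k):
    """Return the k-core of a graph given as {vertex: set-of-neighbors}."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    while True:
        low = [v for v, nbrs in adj.items() if len(nbrs) < k]
        if not low:
            return adj            # no low-degree vertices left: this is the k-core
        for v in low:
            for u in adj.pop(v):  # delete v and its incident edges
                if u in adj:
                    adj[u].discard(v)

# K4 on vertices 0-3, plus a pendant vertex 4 hanging off vertex 0.
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}, 4: {0}}
print(sorted(k_core(adj, 3)))   # → [0, 1, 2, 3]  (the K4 is the 3-core)
```

Running this for increasing k until the result is empty locates the “main core”; on the July 2004 MR data that was the 5-core with 70 vertices.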
For example, Gerald Ludden (Michigan State University) has only four collaborators of the second kind, but each of them has written a two-author paper with each of the others (Koichi Ogiue, Masafumi Okumura, Bang-Yen Chen, and David E. Blair).

Statistical summaries of the Erdos1 and Erdos2 lists (numbers of the first kind)

The data below are based on the 2007 data contained on this site (as opposed to the July 2004 MR data). This file contains a statistical summary of the number of Erdös number 1 coauthors for people with Erdös number 2, the number of Erdös number 1 coauthors for people with Erdös number 1, the total number of coauthors for people with Erdös number 1, the number of papers that Erdös’s coauthors have with him, and the number of new coauthors Paul Erdös added each year. This is a text file giving the adjacency lists for the induced subgraph of the collaboration graph on all Erdös coauthors. This file lists the Erdös number record holders (for example, which person with Erdös number 2 has the most coauthors with Erdös number 1?).

MORE INFORMATION: A paper summarizing some of what is on this page is available in pdf. It appears in the Proceedings of the 33rd Southeastern Conference on Combinatorics (Congressus Numerantium, Vol. 158, 2002, pp. 201-212). An abbreviated version appears in SIAM News 35:9 (November, 2002), pp. 1, 8-9; click here for a reprint (pdf). Another article, which also looks at the publication patterns as a function of area of mathematics, appears in the January 2005 issue of the Notices of the American Mathematical Society. Finally, here is a file of slides from a recent talk about the collaboration graph of papers rather than the collaboration graph of people.

URL = http://www.oakland.edu/enp/trivia.html
This page was last updated on November 20, 2012.
Return to Erdös Number Project home page.
Paradise Valley, AZ Statistics Tutor

Find a Paradise Valley, AZ Statistics Tutor

...You don't want to fall too far behind in your studies, so contact me soon to arrange a tutoring session. Since 2008, I have been both teaching and tutoring statistics at the college level. I am also the manager (and tutor) of a tutoring center at a community college that only tutors students enrolled in statistics and accounting courses.
2 Subjects: including statistics, accounting

...Minor studies were mathematics and computer programming. I received an Outstanding Math Award in Jr. College, and a Ph.D.
27 Subjects: including statistics, chemistry, reading, calculus

...Also, having been born and raised in Tokyo, I am able to teach proper Japanese. I have all the necessary qualifications to be an effective tutor and will be able to guide you through an enjoyable learning process. Thank you for reading my profile.
12 Subjects: including statistics, calculus, physics, geometry

..."Geometry I actually enjoyed." - Liam Neeson I am a highly qualified and state certified high school math teacher with a master's degree in mathematics. I have taught geometry for 14 years in the classroom. I would welcome the opportunity to work with you!
15 Subjects: including statistics, calculus, geometry, algebra 1

I am a recently retired System Engineer who did Operating System support for large-scale computer vendors for 30+ years. After receiving my BS in physics at DePaul University in Chicago, I taught mathematics (algebra, geometry and trigonometry) and physics at the high school level for 5 years while...
10 Subjects: including statistics, physics, precalculus, geometry
[Numpy-discussion] RFC: A (second) proposal for implementing some date/time types in NumPy
Ivan Vilata i Balaguer ivan@selidor.... Fri Jul 18 09:42:47 CDT 2008

Francesc Alted (on 2008-07-16 at 18:44:36 +0200) said::

> After tons of excellent feedback received for our first proposal about
> the date/time types in NumPy Ivan and me have had another brainstorming
> session and ended with a new proposal for your consideration.

After re-reading the proposal, Francesc and I found some points that needed small corrections and some clarifications or enhancements. Here you have a new version of the proposal. The changes aren't fundamental:

* Reference to POSIX-like treatment of leap seconds.
* Notes on default resolutions.
* Meaning of the stored values.
* Usage examples for the scalar constructor.
* Using an ISO 8601 string as a date value.
* Fixed str() and repr() representations.
* Note on operations with mixed resolutions.
* Other small corrections.

Thanks for the feedback!

A (second) proposal for implementing some date/time types in NumPy

:Author: Francesc Alted i Abad
:Contact: faltet@pytables.com
:Author: Ivan Vilata i Balaguer
:Contact: ivan@selidor.net
:Date: 2008-07-18

Executive summary

A date/time mark is something very handy to have in many fields where one has to deal with data sets. While Python has several modules that define a date/time type (like the integrated ``datetime`` [1]_ or ``mx.DateTime`` [2]_), NumPy lacks one. In this document, we are proposing the addition of a series of date/time types to fill this gap. The requirements for the proposed types are twofold: 1) they have to be fast to operate with and 2) they have to be as compatible as possible with the existing ``datetime`` module that comes with Python.

Types proposed

To start with, it is virtually impossible to come up with a single date/time type that fills the needs of every use case.
So, after pondering about different possibilities, we have stuck with *two* different types, namely ``datetime64`` and ``timedelta64`` (these names are preliminary and can be changed), that can have different resolutions so as to cover different needs.

.. Important:: the resolution is conceived here as metadata that *complements* a date/time dtype, *without changing the base type*. It provides information about the *meaning* of the stored numbers, not about their *structure*.

Now follows a detailed description of the proposed types.

``datetime64``

It represents a time that is absolute (i.e. not relative). It is implemented internally as an ``int64`` type. The internal epoch is the POSIX epoch (see [3]_). Like POSIX, the representation of a date doesn't take leap seconds into account. It accepts different resolutions, each of them implying a different time span. The table below describes the resolutions supported with their corresponding time spans.

======== =============== ==========================
    Resolution           Time span (years)
------------------------ --------------------------
Code     Meaning
======== =============== ==========================
 Y       year            [9.2e18 BC, 9.2e18 AC]
 Q       quarter         [3.0e18 BC, 3.0e18 AC]
 M       month           [7.6e17 BC, 7.6e17 AC]
 W       week            [1.7e17 BC, 1.7e17 AC]
 d       day             [2.5e16 BC, 2.5e16 AC]
 h       hour            [1.0e15 BC, 1.0e15 AC]
 m       minute          [1.7e13 BC, 1.7e13 AC]
 s       second          [ 2.9e9 BC,  2.9e9 AC]
 ms      millisecond     [ 2.9e6 BC,  2.9e6 AC]
 us      microsecond     [290301 BC, 294241 AC]
 ns      nanosecond      [  1678 AC,  2262 AC]
======== =============== ==========================

When a resolution is not provided, the default resolution of microseconds is used. The value of an absolute date is thus *an integer number of units of the chosen resolution* passed since the internal epoch.

Building a ``datetime64`` dtype

The proposed ways to specify the resolution in the dtype constructor are:

Using parameters in the constructor::

    dtype('datetime64', res="us")  # the default resolution is microseconds

Using the long string notation::

    dtype('datetime64[us]')  # equivalent to dtype('datetime64')

Using the short string notation::

    dtype('T8[us]')  # equivalent to dtype('T8')

Compatibility issues

This will be fully compatible with the ``datetime`` class of the ``datetime`` module of Python only when using a resolution of microseconds. For other resolutions, the conversion process will lose precision or will overflow as needed. The conversion from/to a ``datetime`` object doesn't take leap seconds into account.

``timedelta64``

It represents a time that is relative (i.e. not absolute). It is implemented internally as an ``int64`` type. It accepts different resolutions, each of them implying a different time span. The table below describes the resolutions supported with their corresponding time spans.

======== =============== ==========================
    Resolution           Time span
------------------------ --------------------------
Code     Meaning
======== =============== ==========================
 W       week            +- 1.7e17 years
 d       day             +- 2.5e16 years
 h       hour            +- 1.0e15 years
 m       minute          +- 1.7e13 years
 s       second          +- 2.9e12 years
 ms      millisecond     +- 2.9e9 years
 us      microsecond     +- 2.9e6 years
 ns      nanosecond      +- 292 years
 ps      picosecond      +- 106 days
 fs      femtosecond     +- 2.6 hours
 as      attosecond      +- 9.2 seconds
======== =============== ==========================

When a resolution is not provided, the default resolution of microseconds is used. The value of a time delta is thus *an integer number of units of the chosen resolution*.

Building a ``timedelta64`` dtype

The proposed ways to specify the resolution in the dtype constructor are:

Using parameters in the constructor::

    dtype('timedelta64', res="us")  # the default resolution is microseconds

Using the long string notation::

    dtype('timedelta64[us]')  # equivalent to dtype('timedelta64')

Using the short string notation::

    dtype('t8[us]')  # equivalent to dtype('t8')

Compatibility issues

This will be fully compatible with the ``timedelta`` class of the ``datetime`` module of Python only when using a resolution of microseconds. For other resolutions, the conversion process will lose precision or will overflow as needed.

Example of use

Here is an example of use of ``datetime64``::

    In [5]: numpy.datetime64(42)  # use default resolution of "us"
    Out[5]: datetime64(42, 'us')

    In [6]: print numpy.datetime64(42)  # use default resolution of "us"
    1970-01-01T00:00:00.000042  # representation in ISO 8601 format

    In [7]: print numpy.datetime64(367.7, 'D')  # decimal part is lost
    1971-01-02  # still ISO 8601 format

    In [8]: numpy.datetime64('2008-07-18T12:23:18', 'm')  # from ISO 8601
    Out[8]: datetime64(20273063, 'm')

    In [9]: print numpy.datetime64('2008-07-18T12:23:18', 'm')
    2008-07-18T12:23

    In [10]: t = numpy.zeros(5, dtype="datetime64[ms]")

    In [11]: t[0] = datetime.datetime.now()  # setter in action

    In [12]: print t
    [2008-07-16T13:39:25.315 1970-01-01T00:00:00.000 1970-01-01T00:00:00.000
     1970-01-01T00:00:00.000 1970-01-01T00:00:00.000]

    In [13]: t[0].item()  # getter in action
    Out[13]: datetime.datetime(2008, 7, 16, 13, 39, 25, 315000)

    In [14]: print t.dtype
    datetime64[ms]

And here goes an example of use of ``timedelta64``::

    In [5]: numpy.timedelta64(10)  # use default resolution of "us"
    Out[5]: timedelta64(10, 'us')

    In [6]: print numpy.timedelta64(10)  # use default resolution of "us"
    0:00:00.000010

    In [7]: print numpy.timedelta64(3600.2, 'm')  # decimal part is lost
    2 days, 12:00

    In [8]: t1 = numpy.zeros(5, dtype="datetime64[ms]")

    In [9]: t2 = numpy.ones(5, dtype="datetime64[ms]")

    In [10]: t = t2 - t1

    In [11]: t[0] = datetime.timedelta(0, 24)  # setter in action

    In [12]: print t
    [0:00:24.000 0:00:01.000 0:00:01.000 0:00:01.000 0:00:01.000]

    In [13]: t[0].item()  # getter in action
    Out[13]: datetime.timedelta(0, 24)

    In [14]: print t.dtype
    timedelta64[ms]

Operating with date/time arrays

``datetime64`` vs ``datetime64``

The only arithmetic operation allowed between absolute dates is subtraction::

    In [10]: numpy.ones(5, "T8") - numpy.zeros(5, "T8")
    Out[10]: array([1, 1, 1, 1, 1], dtype=timedelta64[us])

But not other operations::

    In [11]: numpy.ones(5, "T8") + numpy.zeros(5, "T8")
    TypeError: unsupported operand type(s) for +: 'numpy.ndarray' and 'numpy.ndarray'

Comparisons between absolute dates are allowed.

``datetime64`` vs ``timedelta64``

It will be possible to add and subtract relative times from absolute dates::

    In [10]: numpy.zeros(5, "T8[Y]") + numpy.ones(5, "t8[Y]")
    Out[10]: array([1971, 1971, 1971, 1971, 1971], dtype=datetime64[Y])

    In [11]: numpy.ones(5, "T8[Y]") - 2 * numpy.ones(5, "t8[Y]")
    Out[11]: array([1969, 1969, 1969, 1969, 1969], dtype=datetime64[Y])

But not other operations::

    In [12]: numpy.ones(5, "T8[Y]") * numpy.ones(5, "t8[Y]")
    TypeError: unsupported operand type(s) for *: 'numpy.ndarray' and 'numpy.ndarray'

``timedelta64`` vs anything

Finally, it will be possible to operate with relative times as if they were regular int64 dtypes *as long as* the result can be converted back into a ``timedelta64``::

    In [10]: numpy.ones(5, 't8')
    Out[10]: array([1, 1, 1, 1, 1], dtype=timedelta64[us])

    In [11]: (numpy.ones(5, 't8[M]') + 2) ** 3
    Out[11]: array([27, 27, 27, 27, 27], dtype=timedelta64[M])

    In [12]: numpy.ones(5, 't8') + 1j
    TypeError: the result cannot be converted into a ``timedelta64``

dtype/resolution conversions

For changing the date/time dtype of an existing array, we propose to use the ``.astype()`` method.
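The arithmetic rules above (absolute − absolute → relative, absolute ± relative → absolute) can be modeled with the standard ``datetime`` module. This is only a sketch of the intended stored-value semantics — an integer count of fixed-length units since the POSIX epoch — not the proposed NumPy API, and calendar resolutions (Y, Q, M) are omitted because they are not fixed-length:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
US = timedelta(microseconds=1)

def absolute(count):
    """A datetime64[us]-like value: `count` microseconds since the epoch."""
    return EPOCH + count * US

t0, t1 = absolute(0), absolute(42)
delta = t1 - t0                   # absolute - absolute -> relative
assert delta == 42 * US
assert t0 + delta == t1           # absolute + relative -> absolute
print(absolute(42).isoformat())   # → 1970-01-01T00:00:00.000042+00:00
```

The printed value matches the ISO 8601 rendering of ``datetime64(42, 'us')`` shown in the examples above.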
This will be mainly useful for changing resolutions. For example, for absolute dates::

    In [10]: t1 = numpy.zeros(5, dtype="datetime64[s]")

    In [11]: print t1
    [1970-01-01T00:00:00 1970-01-01T00:00:00 1970-01-01T00:00:00
     1970-01-01T00:00:00 1970-01-01T00:00:00]

    In [12]: print t1.astype('datetime64[d]')
    [1970-01-01 1970-01-01 1970-01-01 1970-01-01 1970-01-01]

For relative times::

    In [10]: t1 = numpy.ones(5, dtype="timedelta64[s]")

    In [11]: print t1
    [1 1 1 1 1]

    In [12]: print t1.astype('timedelta64[ms]')
    [1000 1000 1000 1000 1000]

Changing directly from/to relative to/from absolute dtypes will not be supported::

    In [13]: numpy.zeros(5, dtype="datetime64[s]").astype('timedelta64')
    TypeError: data type cannot be converted to the desired type

Final considerations

Why the ``origin`` metadata disappeared

During the discussion of the date/time dtypes in the NumPy list, the idea of having an ``origin`` metadata that complemented the definition of the absolute ``datetime64`` was initially found to be useful. However, after thinking more about this, we found that the combination of an absolute ``datetime64`` with a relative ``timedelta64`` does offer the same functionality while removing the need for the additional ``origin`` metadata. This is why we have removed it from this proposal.

Operations with mixed resolutions

Whenever an operation between two time values of the same dtype with the same resolution is accepted, the same operation with time values of different resolutions should be possible (e.g. adding a time delta in seconds and one in microseconds), resulting in an adequate resolution. The exact semantics of this kind of operation are yet to be defined.

Resolution and dtype issues

The date/time dtype's resolution metadata cannot be used in general as part of typical dtype usage. For example, in::

    numpy.zeros(5, dtype=numpy.datetime64)

we have yet to find a sensible way to pass the resolution.
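For the fixed-length resolutions, the ``astype`` conversion sketched above amounts to integer rescaling. A minimal sketch of that idea in plain Python (calendar units like months and years are omitted because they are not a fixed number of microseconds):

```python
# Microseconds per unit, for fixed-length resolutions only.
US_PER = {"W": 604_800_000_000, "d": 86_400_000_000, "h": 3_600_000_000,
          "m": 60_000_000, "s": 1_000_000, "ms": 1_000, "us": 1}

def convert(count, src, dst):
    """Rescale an integer count of `src` units into `dst` units.
    Floor division makes coarsening lossy, as in the astype examples."""
    return count * US_PER[src] // US_PER[dst]

print(convert(1, "s", "ms"))      # → 1000, matching the example above
print(convert(1500, "ms", "s"))   # → 1 (the half second is lost)
```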
At any rate, one can explicitly create a dtype::

    numpy.zeros(5, dtype=numpy.dtype('datetime64', res='Y'))

BTW, prior to all of this, one should also elucidate whether::

    numpy.dtype('datetime64', res='Y')

would be a consistent way to instantiate a dtype in NumPy. We do really think that could be a good way, but we would need to hear the opinion of the expert. Travis?

.. [1] http://docs.python.org/lib/module-datetime.html
.. [2] http://www.egenix.com/products/python/mxBase/mxDateTime
.. [3] http://en.wikipedia.org/wiki/Unix_time

.. Local Variables:
.. mode: rst
.. coding: utf-8
.. fill-column: 72
.. End:

Ivan Vilata i Balaguer @ Welcome to the European Banana Republic! @ http://www.selidor.net/ @ http://www.nosoftwarepatents.com/ @

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 307 bytes
Desc: Digital signature
Url : http://projects.scipy.org/pipermail/numpy-discussion/attachments/20080718/07b67134/attachment-0001.bin

More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-July/035838.html","timestamp":"2014-04-18T03:30:39Z","content_type":null,"content_length":"30526","record_id":"<urn:uuid:aa0f62cf-79b1-4b46-aec1-f9e963956ac6>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
CSE 542. Advanced Data Structures and Algorithms
Spring 2013

Course description (from the catalog): This course is concerned with the design and analysis of efficient algorithms, focusing principally on algorithms for combinatorial optimization problems. A key element of the course is the role of data structures in algorithm design and the use of amortized complexity analysis to determine how data structures affect performance. The course is organized around a set of core problems and algorithms, including classical network optimization algorithms, as well as newer and more efficient algorithms. This core is supplemented by algorithms selected from the recent technical literature. Prerequisites: CSE 241. Credit: 3 units.
• Time: Tuesday, Thursday 2:30-4:00
• Place: Cupples I, Room 115
• Texts: Data Structures and Network Algorithms by Robert Tarjan; Lecture Notes for CSE 542 by Turner; Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein; selected papers
• Supplemental Text: Network Flows by Ahuja, Magnanti and Orlin (not required)
• Professor: Jon Turner, Bryan 522, 935-8552, office hours: T 9:00-11:00, W 3:00-5:00 or by appointment (send me email or call me to schedule).
• Teaching Assistants: Yinfan Li, linfeixb27@gmail.com, Bryan 522, office hours: M 3:00-5:00; Paras Tiwari, tiwari.paras@gmail.com, Bryan 410, 935-4163, office hours: T 11:00-1:00
• Grading: Class preparation (10%), Quizzes (15%), Labs (25%), Exams (50%)

Class Preparation. For each class, there is a reading assignment and a set of review questions that address a portion of the material covered in that class. You are expected to do the reading assignments before coming to class and to turn in your answers to the review questions at the start of class, when one person will be asked to explain their answer to each question. Review questions will be checked to verify that you have made a reasonable effort to understand the material, but will not be graded in detail or returned.

Labs.
The class includes a series of laboratory exercises designed to help you get a deeper understanding of the various data structures and algorithms we will be covering. The labs require some programming, but the amount of new code you will need to write for each assignment is usually fairly small (1-4 pages). Implementations of many of the data structures and algorithms we'll be using will be provided to you, through a private subversion repository. The provided code is written in C++ and has been compiled and tested with the Gnu C++ compiler (g++) under MacOS and Linux. You can also expect it to work under cygwin on a Windows PC, although you may have to make some minor adjustments to get it to compile. You will typically be asked to implement at least some portion of an algorithm or data structure, and you will be asked to measure some aspect of its performance. There will also typically be some analysis, comparing the measured performance to the worst-case analytical bounds. Specific assignments can be found in the detailed class schedule below.

Quizzes. Every second Tuesday, there will be a short quiz. The first will be on January 22. Each quiz will address material covered since the previous quiz or exam. Quizzes will be given at the beginning of class, so don't be late. There will be no makeup quizzes, but your low quiz score will be dropped from your course grade.

Examinations. There will be three exams given during the semester. The first two will be in class on February 14 and March 28. The final exam will be May 8, 3:30-5:30. THERE WILL BE NO ALTERNATE TIMES FOR ANY OF THE EXAMS - IT IS UP TO YOU TO ARRANGE YOUR OTHER ACTIVITIES TO AVOID CONFLICTS.

Lecture Notes. Bound, paper copies of the lecture notes are available in the bookstore. These are printed with space provided for adding your own notes. You are strongly encouraged to purchase a copy, and use it for taking notes during class.
Links to online copies are also provided in the detailed schedule below.

Reading Assignments. Reading assignments in the two text books (Tarjan and Cormen et al.) appear in the detailed course schedule below. It is strongly recommended that you read over this material before class each day, then study it in more detail after class. The Tarjan book is not a typical text, and requires careful attention to detail. Don't let its conciseness fool you. He packs a great deal of meaning into every sentence and you need to think hard about what he is saying, in order to really understand it. The CLRS text is an easier read, but covers only a portion of the material in the course. In addition to these texts, there are notes that explain the material covered in the first few slides of each lecture. This is the material addressed by the review questions.

Practice Problems. Each section of the lecture notes concludes with a set of exercises. The most important single thing you can do to master the material in this course is to work through these problems. Solutions are provided for most questions, but you should make a serious effort to solve them on your own, rather than just look at the solutions.

Working with Other Students. You are encouraged to work with other students in study groups, so that you can help each other master the course material. However, all work that is to be handed in must be done individually. This includes answers to review questions and labs. You may discuss general approaches with your fellow students, and the TAs will provide hints and general guidance. However, you are expected to turn in your own work and only your own work. You should not share your solutions with other students. Sharing of source code, measurement results or any other written material is expressly forbidden.
Any group of students found to have collaborated inappropriately on an assignment will have the full value of the assignment subtracted from the grades of all students involved. Repeat offenses will not be treated so leniently.

Late Policy. Assignments are due in class on the assigned date. Late assignments will not be accepted, not even for partial credit. No exceptions. If you are not going to be in class, you may turn in your assignment early by giving it to me in person, or by putting it in my box in the CSE office after having it initialed by one of the CSE department staff, with the date and time. Please use this procedure only in exceptional circumstances.

Expectations. This course covers a great deal of material and you will need to devote substantial time and effort to mastering it. You should plan to spend an average of eight to twelve hours per week outside of class, preparing for class, doing the assigned readings, working exercises and doing the labs. Don't expect to master the material simply by sitting in class and listening to the lectures.

On-line Communication. Most information about the course can be obtained electronically. In addition to this web site, there is a discussion group set up on Piazza. All registered students should receive an invitation email to join the group. If you have not received such an email, let me know and I will make sure you do. I strongly recommend that you use the discussion group to post questions you may have about lecture material, exercises and labs. The TAs and I will use it to answer questions and to provide guidance and occasional hints. It will also be used to post clarifications and corrections and to make general announcements, so you should monitor it regularly. You are also encouraged to respond to posts from other students. The more you all use the group, the more useful it will be for everyone.

Course Lectures and Assignments

The course material is organized into three parts.
The first part will provide an introduction to each of the main problems we will be studying over the course of the semester. In the second and third parts, we will study additional data structures and algorithms, going into selected topics in greater depth. In the reading assignments listed below, JSTx stands for my online notes, T stands for the Tarjan text, CLRS2 stands for the second edition of Cormen, Leiserson, Rivest and Stein and CLRS3 stands for the third edition of CLRS. Readings from CLRS are used for material that is not covered in Tarjan. I have not had the bookstore order CLRS for this class, as I expect most of you already own a copy (if not, you should). Much of the material covered by Tarjan is also covered in CLRS. Some of you may find it useful to read this material in addition to the assigned readings in Tarjan.

First Half

Date  Lecture                                        Reading                             Review      Other
1/15  Introduction to CSE 542                        T 1-19                              -           -
1/17  Minimum Spanning Trees and d-Heaps             JST2, T 75-77, 33-38                .doc, .pdf  -
1/22  Shortest Paths                                 JST3, T 85-91                       .doc, .pdf  Quiz 1
1/24  Fibonacci Heaps                                JST4, CLRS2 476-495, CLRS3 505-526  .doc, .pdf  -
1/29  Maximum Flows in Graphs                        JST5, T 97-101                      .doc, .pdf  Lab 1
1/31  Minimum Cost Flows                             JST6, T 108-111                     .doc, .pdf  -
2/5   All Pairs Shortest Paths and Faster            JST7, T 94-95                       .doc, .pdf  Quiz 2
      Min Cost Flows
2/7   Matchings in Bipartite Graphs                  JST8, T 113-115                     .doc, .pdf  -
2/12  Review session                                 -                                   -           -

Second Half

Date  Lecture                                        Reading                             Review      Other
2/19  Kruskal's MST Algorithm and the Partition      JST9, T 74-75, 23-24, JST10 1-5     .doc, .pdf  Lab 2, Solution
      Data Structure
2/21  Analysis of Partition Data Structure           JST10 6-13                          .doc, .pdf  -
2/26  Applications of Matching                       JST13                               .doc, .pdf  Quiz 3
2/28  Round Robin MST Algorithm and Leftist Heaps    JST11, T 77-82, 38-43               .doc, .pdf  -
3/5   Edmonds Maximum Matching Algorithm             JST12, T 115-123                    .doc, .pdf  Lab 3
3/7   Linear Programming and Network Optimization    JST14, CLRS2 770-790                .doc, .pdf  -
3/19  Weighted Matching in General Graphs - Part 1   JST15                               .doc, .pdf  Quiz 4
3/21  Weighted Matching in General Graphs - Part 2   JST16                               .doc, .pdf  -
3/26  Review Session, Practice Questions             -                                   -           -

Third Half

Date  Lecture                                        Reading                             Review      Other
4/2   Dinic's Max Flow Algorithm                     JST17, T 102-104                    .doc, .pdf  Lab 4, Solution
4/4   Preflow Push Method for Max Flows              JST18, CLRS2 669-691, CLRS3 736-759 .doc, .pdf  -
4/9   Dinic's Algorithm with Dynamic Trees           JST19, T 107-108                    .doc, .pdf  Quiz 5
4/11  Binary Search Trees                            JST20, T 45-53                      .doc, .pdf  -
4/16  Self-Adjusting Binary Search Trees             JST21, T 53-56                      .doc, .pdf  -
4/18  Dynamic Trees and Path Sets                    JST22, T 59-64                      .doc, .pdf  -
4/23  Fast Implementation of Dynamic Trees           JST23, T 64-70                      .doc, .pdf  Quiz 6
4/25  Review Session                                 -                                   -           -
5/2   -                                              -                                   -           Lab 5, Solution

Corrections to the Tarjan book
• On page 18, two lines before the pseudo-code, "visited" should be replaced with "unvisited".
• On page 26, two lines from the bottom of the proof of Lemma 2.2, "rank(x) < rank(y) is symmetric" should be replaced with "rank(x) > rank(y) is symmetric."
• On page 38, Figure 3.5, the node with key 14 should have a rank of "1" not "2".
• On page 67, replace the body of the findpath routine with the following:
      x := v; do p(x) != null => x := p(x) od; return x;
  This change is required for the expose operation to work correctly.
• Page 104, line -11. Change the first occurrence of advance to augment.
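For readers less familiar with the compact guarded-command notation used in the findpath erratum above, the corrected body simply follows parent pointers until it reaches a node with no parent, i.e. the root of v's tree. A rough Python rendering (the `parent` dict here is a stand-in for Tarjan's p(x) function, not code from the course):

```python
# Python rendering of the corrected findpath body from the erratum above:
# follow parent pointers from v until a node with no parent is reached.
def findpath(v, parent):
    x = v
    while parent.get(x) is not None:   # do p(x) != null => x := p(x) od
        x = parent[x]
    return x                           # the root of v's tree

# Example: a small path 3 -> 2 -> 1, where 1 is the root.
parent = {3: 2, 2: 1, 1: None}
print(findpath(3, parent))   # -> 1
```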
{"url":"http://www.arl.wustl.edu/~jst/cse/542/intro.html","timestamp":"2014-04-16T07:59:42Z","content_type":null,"content_length":"19211","record_id":"<urn:uuid:cf37db8b-aa9a-47d6-a5ab-c4e4863be521>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
The finite state firing squad Myhill posed the firing squad synchronization problem in 1957. In this problem, you are given a row of soldiers, each of which can be in a finite number of states. A clock provides a sequence of time steps, 0, 1, 2, and so on; at each positive time step, the state of each soldier is set to a function, the transition function, of its previous state and the adjacent soldiers' previous states. This function is identical for all soldiers not at the ends of the row; the end soldiers, however, may have different transition functions. (Nowadays this would be called a `cellular automaton'.) Three of the soldiers' states are called the quiescent, excited, and firing states. At time 0, all soldiers are in their quiescent states, except for one soldier at an end, who is in his excited state. The problem is to arrange the soldiers' state sets and transition functions such that for this starting position, no matter how many soldiers you start with, all soldiers will, at some future time, enter their firing states simultaneously. Try to solve the problem now, or read on for the solution. Back to the home page David Moews ( dmoews@fastmail.fm ) Last significant update 20-IX-2004
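To make the model concrete, here is a minimal Python sketch of the cellular-automaton machinery described above. The state names, the BOUNDARY sentinel, and the toy rule are invented for illustration: this shows only the synchronous-update mechanism, and the toy rule merely spreads excitement; it is emphatically not a solution to the synchronization problem.

```python
# Sketch of the cellular-automaton machinery behind the firing squad
# problem.  NOT a solution: the toy rule below only spreads excitement.

QUIESCENT, EXCITED, FIRING = "Q", "E", "F"
BOUNDARY = "#"   # sentinel standing in for "no neighbor" at the row ends

def step(row, rule):
    """One synchronous time step: every soldier's new state is a function
    of (left neighbor, own state, right neighbor) at the previous step."""
    padded = [BOUNDARY] + list(row) + [BOUNDARY]
    return [rule(padded[i - 1], padded[i], padded[i + 1])
            for i in range(1, len(padded) - 1)]

def toy_rule(left, me, right):
    # Excitement spreads one soldier per step; nothing ever fires here.
    if me == QUIESCENT and EXCITED in (left, right):
        return EXCITED
    return me

row = [EXCITED] + [QUIESCENT] * 4   # time 0: one excited end soldier
for _ in range(4):
    row = step(row, toy_rule)
print(row)   # after 4 steps, all five soldiers are excited
```

The hard part of the actual problem is designing a transition function under which every soldier enters the firing state at the same step, for every row length, using only this local-neighborhood information.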
{"url":"http://djm.cc/fsquad/firing.html","timestamp":"2014-04-19T22:58:22Z","content_type":null,"content_length":"1883","record_id":"<urn:uuid:cf5a46ce-7bd5-43ff-b674-c45b8f459d11>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help

Posted by Jodie on Sunday, October 20, 2013 at 10:48am.

A poll was taken of 100 students at a commuter campus to find out how they got to campus. The results were as follows: 31 said they drove alone, 39 rode in a carpool, 35 rode public transportation, 10 used both carpool and public transportation, 7 used both a carpool and sometimes their own cars, 9 used buses as well as their own cars, and 5 used all three methods. How many used none of the above-mentioned means of transportation?

• Algebra - Reiny, Sunday, October 20, 2013 at 11:08am

Make a Venn diagram showing three overlapping circles, and label them:
A for driving alone
C for carpool
P for public transportation

Put 5 in the intersection of all 3 circles.
10 used C and P, so 10 goes in that intersection, BUT 5 are already counted, so place 5 in the outer part of the intersection of C and P.
In the same way, place 4 in the outer part of A and P (9 - 5).
In the same way, place 2 in the outer part of A and C (7 - 5).

Now for each circle itself:
A should have a total of 31, but I already have 11 counted in the A circle, leaving 20 in the unused part of A.
C should have 39, but I already have 12 filled in, leaving 27 for the rest of circle C.
P should have 35, but I already have 14 filled in, leaving 21 for the rest of circle P.

Adding up all the numbers I see filled in, I get 84, so of the 100 students, 16 do not use any of the 3 methods.
Check my arithmetic.

• OR - Algebra - Reiny, Sunday, October 20, 2013 at 11:11am

By inclusion-exclusion:
N(A or C or P) = N(A) + N(C) + N(P) - N(A and C) - N(A and P) - N(C and P) + N(A and C and P)
= 31 + 39 + 35 - 7 - 9 - 10 + 5
= 84

So 100 - 84 = 16 are unaccounted for; 16 don't use any of the given ways.
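The inclusion-exclusion computation above is easy to check mechanically. A small Python sketch (the variable names are mine, not from the original answer):

```python
# Inclusion-exclusion check for the commuter-campus poll above.
# A = drove alone, C = carpool, P = public transportation.
total = 100
A, C, P = 31, 39, 35
AC, AP, CP = 7, 9, 10   # pairwise overlaps (each includes the triple)
ACP = 5                 # used all three methods

union = A + C + P - AC - AP - CP + ACP   # |A ∪ C ∪ P|
none = total - union
print(union, none)   # -> 84 16
```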
{"url":"http://www.jiskha.com/display.cgi?id=1382280503","timestamp":"2014-04-18T09:06:04Z","content_type":null,"content_length":"9833","record_id":"<urn:uuid:22a81656-df67-4ddc-8163-94984968c680>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Law of gravitation

I was going through a site which tells:

Yes, as the others have said, we use a minus sign to indicate an attractive force as a vector. So the equation should be:

F = -G M m / R^2

so that F is inversely proportional to the square of R, as in the second equation in your original post. The equation follows the inverse square law: the greater the (squared) distance between the objects/masses, the smaller the force, and vice versa.
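As a quick numerical sanity check on the inverse square law, here is a short Python computation of the magnitude of the gravitational force on a 1 kg mass at the Earth's surface (the constants below are standard textbook values, not taken from the forum post):

```python
# |F| = G * M * m / R^2, Newton's law of gravitation (magnitude only;
# the minus sign in the vector form just marks the force as attractive).
G = 6.674e-11        # gravitational constant, N·m²/kg²
M_earth = 5.972e24   # mass of the Earth, kg
R_earth = 6.371e6    # mean radius of the Earth, m
m = 1.0              # test mass, kg

F = G * M_earth * m / R_earth**2
print(round(F, 2))   # ≈ 9.82, i.e. roughly g (in N) for a 1 kg mass
```

Doubling R in this computation cuts F by a factor of four, which is exactly the inverse-square behaviour described above.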
{"url":"http://www.physicsforums.com/showthread.php?p=4221450","timestamp":"2014-04-20T08:36:36Z","content_type":null,"content_length":"42924","record_id":"<urn:uuid:36560cd7-711c-4fa3-9b72-4144f3b7bbab>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
Student Support Forum: 'How do I merge 3D graphics?' topic

Author Comment/Response

In Response To 'Re: Re: How do I merge 3D graphics?'

You wrote "Why don't I see all three points in the output of Show[elips, pBeg, pMin, pMax]?"

I can't see what you see, but when I scrape that code off the screen, paste it into my notebook and evaluate it, I see all three of your points in the plot, with one just at the upper corner. If you can't see that, then perhaps opening a fresh notebook, pasting the code, evaluating the cell and doing a screen capture showing the code and the plot with the missing point might provide supporting evidence. (Usually a screen shot is a terrible way to ask for help because others just need to type it back in, but maybe in this particular case it might be OK.)

I suspect the bounding box is just large enough to contain the points. I purposely made the points huge, so that you couldn't miss them, and that can make part of the big black dot be outside the box, but I think the point is on the edge of the box. You can change the plot range to make it as large or small as you like.

You wrote "Why does Opacity work in

surf = Plot3D[Sin[x + y^2], {x, -3, 3}, {y, -2, 2}, PlotStyle -> Opacity[0.5]]

but not in

elips = ContourPlot3D[x^2 + 2 y^2 + 6 z^2 == 1, {x, -1, 1}, {y, -1, 1}, {z, -0.5, 0.5}, PlotStyle -> Opacity[0.5]]"

Options[Plot3D] tells you all the options Plot3D will accept; PlotStyle is one of those. Options[ContourPlot3D] tells you all the options ContourPlot3D will accept; PlotStyle is not one of them. (The analogous option for ContourPlot3D is ContourStyle, which also accepts Opacity.) Always checking that you are using valid options is an excellent idea. If you aren't getting a bright red warning about "Unknown option PlotStyle" in your ContourPlot3D, then something is very wrong with the error system.

You wrote "It's frustrating that Mathematica plotting is so inconsistent. Something that does just what I want in one place doesn't work in another.
I expect there's a work-around, but I need advice to find it" I do not mean anything rude by this, but you may not have even begun to see the magnitude of frustration waiting to be found. I highly recommend getting yourself some good books and read them carefully. "Mathematica Navigator" is still good, but it is somewhat dated. "Mathematica Cookbook" is good and newer. "Mathematica Graphics Guidebook" was excellent, but it is deeply unfortunate that it is now very very old and there has not been an up to date edition published. "The Mathematica Book" 5th edition is very old and heavy to ship, but it is the last one that will ever be published. Getting at least a couple of those and reading them over and over will probably be helpful. You are attempting to learn "the Mathematica way of thinking." That is not nearly as simple learning many other far simpler and more mechanical languages with fewer surprises in hiding. You are trying to learn how to, by the third or fourth guess, find what the answer to a problem is or at least what to look at which will lead you to the answer. I have a general rule that seems to be verified again and again: If getting just the math sort of working and maybe some kind of plot takes about X time and effort then getting the math really exactly correct often takes between two and ten times longer than that. Getting everything "desktop published" and the graphics exactly the way you want them done and the fonts and the superscripts and subscripts pretty consistently takes between two and ten times, and possibly infinitely, longer than all that, depending on your personal standards. Again and again I see "I got the math typed in in five minutes and well yes it did take me an hour to find and fix my mistakes in that, but I don't want to think about that. 
But what do you mean it is going to take me between two and twenty-four hours of intense effort and repeated failures and lots of experience before I have five minutes of math that looks close to how I imagine I want it to look?!?!?!" Well, that is because it is often just the way it is. Even more annoying, often the better someone gets at Mathematica, the higher their standards are for desktop publishing, so the "two to ten" factor doesn't go away and might even get worse.
{"url":"http://forums.wolfram.com/student-support/topics/499497","timestamp":"2014-04-19T09:45:30Z","content_type":null,"content_length":"30589","record_id":"<urn:uuid:488b47f3-28cc-4932-8c31-7a90c934b8d6>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
Modelling and analysis

From Steelconstruction.info

Structural analysis is the process of calculating the forces, moments and deflections to which the members in a structure are likely to be subjected. There is a vast range of analysis tools offering speed, precision and economy of design; 3-D and FE modelling, and bespoke portal frame, cellular beam or plate girder design software are now widely available. Modelling catenary actions, cold formed member performance or grillage analysis - all these are now commonplace for structures where hand analysis is impossible. Increasingly sophisticated analysis methods continue to improve the accuracy with which the behaviour of structures can be predicted. Modelling the real-world behaviour of a structure is made easier by the use of full model generating software, with load generating tools enabling frame stability verification along with member checks. The design can be performed to British or European Standards.

This article explains the basics of common structural modelling approaches and describes the differences between various types of analysis. Emphasis is placed on the verification of models and analysis results to ensure a safe and economic structure is obtained at the end of the design process.

Understanding structural behaviour

It is recognised that, with the universal presence of computer analysis, an intuitive understanding becomes increasingly important, both in the creation of analysis models and, critically, in the appraisal of the analysis results, such as the deflected shape, distribution of moments or distribution of reactions. Numerical analysis of structures relies on the designer's understanding of structural behaviour, choice of appropriate software, method of analysis and, above all, the use of engineering judgement to know when the answers are reasonable. An intuitive approach uses broader, more dynamic reasoning skills to evaluate the behaviour of any particular structure.

The key principles involved in developing this kind of understanding of structural behaviour are:
• To consider the deformed shape of a structure
• To use statically determinate simple systems, so that good appreciation of the behaviour of the real structure, with all its complexities, can be gained

It is helpful to use the graphic options of the software to review input data, such as loads, and output data, such as deflections and bending moments. Most orthodox buildings are a series of repeating 2-D frames and often it is convenient to model in this way. Most steel sections are highly efficient in one primary direction, and moment resisting connections to the minor axis can be difficult and expensive. However, many multi-storey buildings are modelled in 3-D, as it is very effective to copy and repeat similar floors together with defined load patterns. 3-D modelling is also useful for analysing complex frames and for cataloguing member size, type, location, etc. within the whole building model. Pinned, braced structures are most cost effective. Analysis can accommodate continuous design, but the connections are more expensive.

Modelling

It is generally convenient to consider first the form of the building frame in orthogonal directions, and to identify:
• The primary structural elements, which form the main frames and transfer both horizontal and vertical load to the foundations
• The secondary structural elements, such as secondary beams or purlins, which transfer the loads to the primary structural elements
• The other elements, such as cladding or partitions, which only transfer loads to the primary or secondary structural elements

At the same time, any constraints on the form of the building must be identified, as these may well dictate how the structure is modelled, and in particular, which (if any) frames may be braced, and which must be modelled as rigid.
The objective for the designer is (within the constraints of the specification and any architectural requirements) to provide a safe, economical structure. The definition of an economical structure is not straightforward, and it may be necessary to investigate several forms of framing before undertaking the detailed analysis and design. However, it is possible to provide general guidance based mainly on the understanding that moment resisting connections are more expensive than nominally pinned connections; thus the designer should consider the possible framing options in order of preference. It must be emphasised that, in most cases, there is more than one option for the form of the building frame. Further advice on structural form can be found in the Steel Designers' Manual.

Multi-storey buildings

Steel frame multi-storey structures in the UK are typically analysed and designed for two types of loading - gravity and lateral. For a structure where floor grids repeat, greater levels of repeatability within the structure, and thus a more economic design, can be achieved by analysing the structure in the following order:

1. Analyse and design the structure to resist gravity actions (self-weight, imposed actions, snow loads, etc). This structure comprises floors, often composite floor decks acting compositely or non-compositely with steel beams, and columns. It is recommended to model:
(i) One typical floor first - ensuring common member sizes are used where possible to maximize standardization
(ii) Using this floor, replicate it within the building as many times as possible; design all floors and the columns for the gravity combination of actions

2. Analyse and design for lateral actions (arising from wind and initial imperfections etc.) and design the lateral load resisting system.
This system can consist of one or more of the following:
• Braced frames - with bays containing diagonal braces or cross-bracing, which resist the lateral loading in tension and/or compression
• Continuous frames - with bays resisting lateral load due to frame action and moment-resisting connections between beams and columns
• Concrete shear walls - typically planar elements, or groups of planar elements (cores), which resist the lateral load in shear, or in shear and bending, respectively

A number of simplifying assumptions are made when modelling the building for analysis:
• Analysis elements are aligned with the tops of steel beams in floors, thereby ignoring the small offsets in centre line between beams of different depth
• The horizontal offset of edge beams is usually small enough to be ignored
• All columns are typically modelled as being co-linear along their centre line
• Small offsets of columns from grids are typically ignored in design
• To ensure that all the lateral loading is carried by the braced or moment frames (continuous framing), it is typical to assume that all columns not in braced bays or moment frames are pinned at each floor level, so they do not attract lateral loads

More information on modelling multi-storey buildings can be found in the Steel Designers' Manual.

Trusses and lattice girders

There are a variety of models which may be used for the analysis of a truss. These include:
• Pin jointed frames
• Continuous chords and pin jointed internal (i.e. web) members
• Continuous chords and continuous internals (relevant to Vierendeel trusses)

The first two options are preferred, since in most situations there will be no bending moments to be included in the joint capacity checks and connection design. Pin jointed frames are the traditional choice to model trusses, while the assumption of continuous chords and pin jointed internals usually better reflects the behaviour in practice.
In this case the chords resist some bending due to loading off 'panel points', but behave primarily as an axially loaded continuous beam. The internals are axially loaded only - moments due to self-weight are usually ignored. The analysis model should reflect this behaviour. In trusses using hollow sections, despite the fact that the internals are fully welded to the chords, this type of connection is still assumed to be pinned. This is due to the relatively thin walls of hollow sections and the large deformations that such connections can sustain.

The third option is usually only relevant to trusses that rely on Vierendeel action. In this case the truss resists loads through bending action in both the chords and the internals. This has the advantage that the diagonal internals are omitted, but the efficiency of the hollow section is somewhat lost, as the Vierendeel truss acts as a continuous frame and significant moments develop at the ends of members. The connections between the hollow sections have to be stiffer, since they have to be designed to resist the bending moments due to the Vierendeel action.

Portal frames

Proprietary software dedicated to the analysis of portal frames generally involves an elastic analysis to check frame deflection at the serviceability limit state, and an elastic-plastic analysis to determine the forces and moments in the frame at the ultimate limit state. These methods have largely replaced the rigid-plastic method, which is unable to account for the important second-order effects in portal frames. Nevertheless the rigid-plastic method continues to be allowed by BS EN 1993-1-1^[1] Cl 5.4.3 (1) providing certain criteria are met - see Cl 5.2.1(3) of the same standard - that ensure second-order effects can safely be disregarded.
BS EN 1993-1-1^[1] provides three alternatives for 'plastic global analysis':
• Elastic-plastic analysis with plastified sections and/or joints as plastic hinges (referred to below as the 'elastic-plastic method')
• Non-linear plastic analysis considering the partial plastification of members in plastic zones; this method is not generally used in commercial portal frame design
• Rigid plastic analysis neglecting the elastic behaviour between hinges

The approach to analysis in BS EN 1993-1-1^[1] is to presuppose that second-order effects need to be allowed for in the design, except under special circumstances when the second-order effects are small enough to be ignored.

Haunches are frequently provided at the eaves and apex connections of a portal frame. Typically, analysis software does not have the facility to use a 'tapered element'. In such cases it is acceptable to model tapered members as a series of uniform prismatic elements. The assumption that the neutral axis remains at the centre line of the rafter and does not descend towards the haunch is safe, since it tends to overestimate both the compression in the bottom flange and the shear.

Special members

Normal frame members are generally modelled as one (or more) straight elements, with associated section properties. Universal beams, universal columns, tees, angles, channels and hollow sections are modelled on this basis. Non-standard sections may require a different approach.

Curved members

Curved members are modelled as a series of short, straight elements. Modelling by using more, shorter elements improves the accuracy of the results. As a general guide, a length of arc corresponding to 15° produces reasonable results. More information can be found in Design of Curved Steel (SCI P281).

Tapered members

Tapered members can be simply modelled as a series of short elements, each with an inertia corresponding to the depth of the member at that position.
Generally, three such sections give reasonable accuracy when modelling tapered members. Many steelwork analysis programs provide libraries of standard section properties, and may also include the section properties for castellated beams. This will allow the structural designer to include castellated members in a frame model in the same way as standard sections. Whilst the frame bending moments produced by this approach will generally be satisfactory, the structural designer should note that the deflection of a castellated or cellular beam will be more than that predicted by Engineer's bending theory. This is due to the Vierendeel effect and to shear deflection. As a rule of thumb, the deflection of a cellular or castellated beam may be taken as 25% greater than the equivalent depth beam without openings. Additional deflection due to the Vierendeel effect becomes more significant with multiple, long openings. As a rule of thumb, the deflection of a beam with multiple, long openings may be taken as 35% greater than that of the equivalent depth beam without openings. In some circumstances, the structural designer may conclude that the additional deflection may be ignored, or is not critical. Alternatively an allowance may be made for the additional deflection during design and checking of the members. More information can be found in Design of composite and non-composite cellular beams (SCI P100). [top] Joints Within a frame, joint behaviour affects the distribution of internal forces and moments and the overall deformation of the structure. In many cases, however, the effect of modelling a continuous joint as fully rigid, or a simple joint as perfectly pinned, compared to modelling the real behaviour, is sufficiently small to be neglected. Elastic analysis programs consider only the stiffness of the joint and it is convenient to define three types as follows: • Simple - a joint which may be assumed not to transmit bending moments. 
Sometimes referred to as a pinned connection, it must also be sufficiently flexible to be regarded as a pin for analysis
• Continuous - a joint which is stiff enough for the effect of its flexibility on the frame bending moment diagram to be neglected. Sometimes referred to as 'rigid'; such joints are, by definition, treated as fully rigid in the analysis
• Semi-continuous - a joint which is too flexible to qualify as continuous, but is not a pin. The behaviour of this type of joint must be taken into account in the frame analysis.

BS EN 1993-1-8^[2] requires that joints be classified according to their stiffness for analysis, and should also have sufficient strength to transmit the forces and moments acting at the joint that result from the analysis. When classifying the stiffness of a joint:

Simple joints - are described as 'nominally pinned' rather than pinned, since it is accepted that some moment is transferred. In this definition these moments are insufficient to adversely affect the member design. The UK National Annex to BS EN 1993-1-8^[3] indicates that connections designed in accordance with the principles given in SCI P358 may be classified as nominally pinned.

Continuous joints - are described as 'rigid joints'. The UK National Annex^[3] refers the designer to SCI P398, in which, for many, but importantly not all, cases connections designed for strength alone can be considered as rigid.

Semi-continuous joints - are 'semi-rigid' and are usually also 'partial strength'. These are described as 'ductile connections' in UK practice and are used in plastically designed semi-continuous frames. For braced semi-continuous frames, the UK NA^[3] indicates that these may be designed using the principles given in SCI P183.

[top] Supports

The interaction between the foundation and supporting ground is complex. Detailed modelling of the soil-structure relationship is probably too involved for general analysis.
Base connections fall into the same categories of nominally pinned, semi-rigid and rigid as other joints, as BS EN 1993-1-8^[2] has no specific recommendations covering their rotational stiffness. More information can be found in SN045.

[top] Pinned and rocker bases

Where a true pin or rocker is used, the rotational stiffness is zero. The use of such bases is rarely justified in practice, as careful consideration needs to be given to the issues connected with shear transfer to the foundation and temporary stability of the column during erection.

If a column base is nominally pinned and the foundation design assumes the base moment is zero, it is recommended that:

• When using elastic global analysis to establish the design forces and moments at the Ultimate Limit State, the base should be assumed to be pinned
• When checking frame stability, i.e. when checking whether the frame is susceptible to second-order effects, the base may be assumed to have a stiffness equal to 10% of the column stiffness (which can be taken as 4EI/L)
• When calculating deflections at the Serviceability Limit State, the base may be assumed to have a stiffness equal to 20% of the column stiffness.

If a column is rigidly connected to a suitable foundation, the following recommendations should be applied:

• The stiffness of the base should be limited to the stiffness of the column when using elastic global analysis to establish the design forces and moments at the Ultimate Limit State
• The base may be assumed to be rigid when calculating deflections at the Serviceability Limit State
• For elastic-plastic global analysis, the assumed stiffness of the base must be consistent with the assumed moment capacity of the base, but should not exceed the stiffness of the column.
Any base moment capacity between zero and the plastic moment of resistance of the column may be assumed, provided that the foundation and base plate are designed to resist a moment equal to the assumed moment capacity, together with the forces obtained from the analysis.

[top] Semi-rigid bases

A nominal base stiffness of up to 20% of the column stiffness may be assumed in elastic global analysis, provided that the foundation is designed for the moments and forces obtained from this analysis.

[top] Modelling base stiffness

Bespoke software normally has the facility to select the recommended values of base stiffness. Unless such software is used, the base stiffness may be modelled by the use of a spring stiffness or dummy members at the column base.

When assessing the sensitivity of the frame to second-order effects (calculating α[cr]), a nominally pinned base can be modelled with a spring stiffness equal to 0.4EI[col]/L[col]. When calculating frame deflections at the serviceability limit state, a nominally pinned base can be modelled with a spring stiffness equal to 0.8EI[col]/L[col].

If the computer program cannot accommodate a rotational spring, the base fixity may be modelled by a dummy member of equivalent stiffness as shown below. When modelling a nominally pinned base, the second moment of area (I[y]) of the dummy member should be taken as:

• I[y] = 0.1 I[y,col] when assessing frame stability
• I[y] = 0.2 I[y,col] when calculating deflections at SLS.

In both cases, the length of the dummy member is L = 0.75 L[col], modelled with a pinned support at the extreme end. Results from analysis with the use of dummy members should not be used explicitly, as the provision of an additional support will affect the base reactions. The vertical base reaction should be taken as the axial force in the column.
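The base stiffness recommendations above can be sketched numerically. The values of E, I[col] and L[col] below are illustrative, not taken from any worked example in this article:

```python
# Sketch: nominal base stiffnesses for a nominally pinned base, and the
# equivalent dummy-member properties where no rotational spring is available.
# E, I_col and L_col are illustrative values for a hypothetical column.

E = 210e9        # steel modulus, N/m^2
I_col = 4.5e-4   # column major-axis inertia, m^4 (illustrative)
L_col = 6.0      # column length, m

# Spring stiffness values as described above:
k_stability = 0.4 * E * I_col / L_col  # 10% of 4EI/L, for alpha_cr checks
k_sls       = 0.8 * E * I_col / L_col  # 20% of 4EI/L, for SLS deflections

# Dummy-member alternative (pinned support at its far end):
I_dummy_stability = 0.1 * I_col
I_dummy_sls       = 0.2 * I_col
L_dummy           = 0.75 * L_col

print(f"spring (stability): {k_stability:.3e} Nm/rad")
print(f"spring (SLS):       {k_sls:.3e} Nm/rad")
print(f"dummy member: L = {L_dummy} m, I = {I_dummy_stability:.2e} or {I_dummy_sls:.2e} m^4")
```

The dummy-member proportions are consistent with the spring values: a prismatic member pinned at its far end has a rotational stiffness of 3EI/L, and 3 × 0.1 / 0.75 = 0.4, matching the stability-check spring of 0.4EI[col]/L[col].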
[top] Model verification

The most important part of any analysis exercise is to review the output in order to confirm that an appropriate structural model has been used, and that the applied loads are correct. This is not to confirm that the execution of the analysis is correct! When using proven software the analysis will be correct - the exercise is to check the structural designer's input.

Software frequently contains default values for certain input data. Support fixity and restraint conditions are common examples of data which may have default values. Default values are intended to avoid the necessity for the structural engineer to enter data, and represent the 'usual' condition, which may be amended by the user. The structural designer must give due regard to the default values assumed by the program, and either be satisfied that these are appropriate, or amend the values accordingly. All input data, whether default or user input, remains the responsibility of the designer. Default values are common in both analysis and design software.

Questions the designer should ask when reviewing the output from structural modelling software include:

Program defaults - Are these correct? Are the default conditions appropriate for the physical details and the loading of the frame?

Loading - Viewed graphically, do the loads in each loadcase appear correct? Are any elements without load? Do loads on some elements appear to be orders of magnitude different from others? Are the loads applied in the correct orientation?

Deflected form - Is the deflected form correct? Has the structure deflected as expected, and is the order of the overall deflection as expected?

Bending moment diagram - Is the form of the bending moment diagram as expected? Are moments shown where releases would have been appropriate? Does the overall envelope on a member equate to that calculated by a simplistic approach, typically wL²/8 or WL/4?
Reactions - Checking by hand calculation, do the total reactions provided in the output (vertically and horizontally) equate to the applied loads? Do the reactions quoted for different unfactored loadcases differ by orders of magnitude? Is the distribution of load to the supports as expected?

Spring stiffness - Are the spring stiffness values assumed for analysis appropriate to the members as designed?

Trusses - If the truss is simply supported and carrying a uniformly distributed load, do the maximum chord forces equate to wL²/8, divided by the depth of the truss? Does the vertical component of the end internal member equate to the vertical end reaction? Are the displacements of the correct form and order? In the analysis model, is one end of the truss free to move longitudinally? Do redundant members at the ends of the truss (if included in the model) attract load, or have they been released?

[top] Analysis of a structure

For common building structures, analysis is concerned with determining the building displacements together with the internal forces and moments that result from the applied loading. The results are determined by mathematically combining the structural stiffness of the analysis model with the applied actions.

[top] Assumed simplifications

The analytical model of the structure is usually created by defining the idealised geometry, material properties and the structural supports. Assuming perfectly straight members is common, although this is not always the case. Next, the load model is created, defining the location, magnitudes and directions of actions on the structure. These actions are typically grouped into design situations by type, e.g. permanent, variable, wind, snow, etc. Finally, combinations of actions are created, which add together the actions in the design situations, multiplying them by relevant factors as defined in design standards.
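Forming a combination of actions is a simple weighted sum. As a minimal sketch, the partial factors below are the common EN 1990 values for the STR ultimate limit state (γ[G] = 1.35, γ[Q] = 1.5); the characteristic loads are made up for illustration, and the factors should always be confirmed against the relevant standard and National Annex:

```python
# Sketch: forming a ULS combination of actions by factoring design situations.
# gamma_G = 1.35 and gamma_Q = 1.5 are the usual EN 1990 STR values;
# the characteristic loads are illustrative only.

actions = {          # characteristic values per design situation, kN/m
    "permanent": 12.0,
    "imposed":   8.0,
}

def uls_combination(actions, gamma_g=1.35, gamma_q=1.5):
    """ULS combination with imposed load leading: gamma_G*G + gamma_Q*Q."""
    return gamma_g * actions["permanent"] + gamma_q * actions["imposed"]

w_ed = uls_combination(actions)
print(f"design load w_Ed = {w_ed:.1f} kN/m")  # 1.35*12 + 1.5*8 = 28.2
```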
When the material properties are non-linear, or for tension-only elements or compression-only supports, a non-linear analysis is required. Similarly, should the loads be other than static loads, for example time-dependent loading from a machine or an acceleration spectrum to model an earthquake, then a time history or a response spectrum analysis will be required.

As well as common modelling idealisations such as assuming perfectly elastic material, straight members, consistent section properties, and actions applied at one point or uniformly distributed, there are also analysis idealisations and simplifications. To determine the internal forces and moments in a building frame the following should be taken into account (if significant):

• Second-order P-Δ effects - effects of deformed geometry that introduce additional forces caused by deformation of the frame
• Second-order P-δ effects - effects of deformed geometry that introduce additional forces caused by deformation of the member
• Global imperfections in the structure - e.g. the 'lean' of out-of-plumb columns
• Local imperfections in members - e.g. initial bow of the member
• Residual stresses in members
• Flexural, shear and axial deformations
• Joint behaviour.

[top] Analysis by hand

The calculations required to obtain the shear forces and bending moments in simply-supported beams form the basis of many other calculations. Generally, only the simplest analysis is undertaken by hand - software is pervasive. There is no need to use software when designing simple columns or beams. Hand calculations are also useful for initial sizing of frames or continuous beams. Details regarding hand calculations can be found in many textbooks. Useful formulae for bending moments, shear and deflection are presented in The Steel Designers' Manual, Appendix: Design theory, or in Steel Buildings.

[top] Analysis by software

There are many analysis packages currently available on the market.
They offer a wide range of features and analysis capabilities. The majority of analysis programs perform elastic analysis, but some also offer plastic or elastic-plastic analysis. Analysis software is applicable to a wide range of structural forms: buildings, bridges, towers, masts, tented structures, etc. Some programs include the facility for different structure types, such as trusses, where the nodes are all pinned, or portal frames, which allow the modelling of eaves haunches and restraint positions with ease. Usually, the engineer has to define the structure geometry, member sizes, supports and actions before the analysis can be run.

The modern trend is moving towards the analysis being a subset of 'design software'. A user builds a 'physical model' in the design software defining the members and connections; the software selects initial sizes for members based on engineering rules of thumb and then creates an analysis model automatically. Many analysis programs also offer some design capability; however, for the design of particular elements such as concrete or composite floors, designers often use specialist, bespoke software intended only for the design of these components.

[top] Element types

Analysis packages have a range of different element types. Some of the more common are:

• Inactive elements, which are not used in the analysis
• Linear elements such as beam, truss, link or rigid beam elements, applicable to all analysis types
• Non-linear elements such as tension/compression-only elements, cables, non-linear springs or gap elements, used only in non-linear analysis
• 2D elements such as membranes, plates or shells.

Software specific to steel design generally has libraries of default materials and member types, so as the member is selected its properties are automatically associated.

[top] Joints

Joints are usually by default set up as fixed, rigid or spring. In the latter case the user is prompted to enter the rotational spring stiffness.
Some programs may have pre-set standard defaults, for example the base stiffness values. Determination of connection stiffness is described in BS EN 1993-1-8^[2].

[top] Analysis types

Most commercially available software offers a multitude of analysis types. They can be broken down into a number of 'classes', which are described below:

• Static analysis - used to determine the nodal displacements and the element deflections, together with the element forces, moments and stresses. This is the most common form for the analysis of building structures.
• Dynamic analysis - also called vibration analysis - used to determine the natural frequencies and corresponding mode shapes of vibration.
• Buckling analysis - used to determine the modes and associated load factors for buckling, and to assess whether or not the structure is prone to buckling at a higher or lower load than has been applied.
• Response spectrum analysis - used in earthquake situations to apply an acceleration spectrum to a structure and to determine from this the design shears and moments in the elements.
• Time history analysis - used to apply time-dependent loading to a structure.

In the design of the majority of building structures in the UK, the only analysis types that are likely to be used are first-order analysis (linear static) and second-order analysis (P-delta static) - the latter for structures which are susceptible to second-order effects.

[top] First-order analysis

In first-order analysis the stiffness of the structure is assumed to be constant and unaffected by changes in the geometry of the structure when it is loaded. This is the standard assumption of linear-elastic first-order analysis. The principle of superposition applies to this approach. Where the analysis model remains the same, the results from analyses of different sets of applied actions can be added together, and the results of individual design situations can be scaled. The analysis results are proportional to the applied actions.
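The superposition property is easy to demonstrate on a small linear system. The 2-DOF stiffness matrix below is invented purely for illustration; the point is that with K constant, the response to a factored combination of load cases equals the same factored combination of the individual responses:

```python
# Sketch: superposition in linear-elastic first-order analysis.
# K is an illustrative 2-DOF stiffness matrix; u = K^-1 f, so results for
# combined or scaled load cases follow directly from the individual cases.
import numpy as np

K = np.array([[ 4.0, -2.0],
              [-2.0,  3.0]])   # illustrative stiffness matrix

f1 = np.array([10.0, 0.0])     # load case 1
f2 = np.array([ 0.0, 5.0])     # load case 2

u1 = np.linalg.solve(K, f1)
u2 = np.linalg.solve(K, f2)

# Analysing the factored combination gives the factored sum of the
# individual results - no re-analysis needed while K is unchanged:
u_comb = np.linalg.solve(K, 1.35 * f1 + 1.5 * f2)
assert np.allclose(u_comb, 1.35 * u1 + 1.5 * u2)
```

This is exactly the property that is lost in second-order analysis, where the effective stiffness depends on the applied loads.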
[top] Second-order analysis

In second-order analysis, the effective stiffness of the structure is changed by the action of the loads upon it. An example of this is a cable structure, where a cable becomes apparently stiffer as it straightens out. The principle of superposition does not apply, as the effects of actions interact.

Second-order effects, often called P-delta effects, are commonly illustrated by considering the additional displacements, forces and moments which arise from the application of actions on a deflecting structure. In some circumstances a first-order analysis may be used to approximate the results of a second-order analysis, by techniques such as the Amplified Sway Method, which is suitable for elastic frame analysis by computer.

[top] Second-order effects

Second-order effects are non-linear effects that occur in every structure where elements are subject to axial load. P-delta is actually only one of many second-order effects. It is a genuine 'effect' that is associated with the magnitude of the applied axial compression (P) and a displacement (delta). There are two kinds of second-order effects:

• P-Δ (P-'big' delta) - a structure effect resulting from joint displacement
• P-δ (P-'little' delta) - a member effect resulting from deformation in member geometry.

Second-order effects increase the deflections, moments and forces beyond those calculated by first-order analysis. Sensitivity to second-order effects needs to be assessed for each designed structure. When second-order effects are significant, two options are possible:

• a rigorous second-order analysis (i.e. in practice, using appropriate second-order analysis software)
• an approximate second-order analysis (i.e. a modified first-order analysis with appropriate allowance for second-order effects).
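The approximate option can be sketched using the amplification approach of BS EN 1993-1-1^[1], clause 5.2.2: sway effects from a first-order analysis are multiplied by 1/(1 − 1/α[cr]). The thresholds used below (α[cr] ≥ 10 for neglecting second-order effects in elastic analysis, and α[cr] ≥ 3.0 as the limit of validity of the amplifier) are the code's limits and should be confirmed against the standard:

```python
# Sketch: amplified first-order (sway) approach per BS EN 1993-1-1, cl. 5.2.2.
# Sway effects are multiplied by 1 / (1 - 1/alpha_cr); the method is only
# valid for alpha_cr >= 3.0, and for alpha_cr >= 10 (elastic analysis)
# second-order effects may be neglected altogether.

def sway_amplifier(alpha_cr):
    if alpha_cr >= 10.0:
        return 1.0  # frame not sensitive: second-order effects negligible
    if alpha_cr < 3.0:
        raise ValueError("alpha_cr < 3.0: use a rigorous second-order analysis")
    return 1.0 / (1.0 - 1.0 / alpha_cr)

for a in (12.0, 6.0, 4.0):
    print(f"alpha_cr = {a}: amplify sway effects by {sway_amplifier(a):.3f}")
```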
In the second method, also known as 'modified first-order analysis', the applied actions are amplified to allow for second-order effects while using first-order calculations.

[top] Elastic analysis

Elastic analysis programs are the most widely used for structural analysis and are based on the assumption that the material being modelled is linear-elastic; the limiting stress is the value corresponding to a strain of 0.002, up to which steel behaves as a linear material. The appropriate value of the elastic modulus has to be provided in the analysis. BS EN 1993-1-1^[1] allows the plastic cross-sectional resistance to be used with the results of elastic analysis, provided the section is Class 1 or Class 2.

[top] Plastic analysis

Plastic analysis relies on the formation and rotation of hinges, and therefore demands Class 1 sections. Plastic hinge rotations occur at sections where the bending moment reaches the plastic moment of resistance of the cross-section, at load levels below the full ULS loading. Plastic analysis is generally used only for portal frame design, where it results in a more economical frame than an elastic analysis. This is because plastic analysis allows relatively large redistribution of bending moments throughout the frame, due to plastic hinge rotations. This redistribution 'relieves' the highly stressed regions and allows the capacity of under-utilised parts of the frame to be mobilised.

[top] References

1. ^ ^1.0 ^1.1 ^1.2 ^1.3 BS EN 1993-1-1: 2005, Eurocode 3: Design of steel structures. General rules and rules for buildings, BSI
2. ^ ^2.0 ^2.1 ^2.2 BS EN 1993-1-8: 2005, Eurocode 3: Design of steel structures. Design of joints, BSI
3. ^ ^3.0 ^3.1 ^3.2 UK NA to BS EN 1993-1-8: 2005: 2008, UK National Annex to Eurocode 3: Design of steel structures. Design of joints, BSI

[top] Further reading

• Steel Designers' Manual, 7th Edition. Editors B Davison & G W Owens.
The Steel Construction Institute, 2012
• Guidelines for the use of computers for engineering calculations, IStructE, 2002.
Posts by Total # Posts: 17,499 Science (?) What are your choices? Please only post your questions once. Repeating posts will not get a quicker response. In addition, it wastes our time looking over reposts that have already been answered in another post. Thank you. How about blackjack? What is the legal voting age? P = probability that rejecting the null hypothesis is due solely to chance factors. What is your null hypothesis? If Ho = no difference, then I would agree with you. experimental psychology Have questionnaires for subjects to explore these phenomena or have behavioral criteria that allow one to infer these phenomena. Media/Social Science Since this is not my area of expertise, I searched Google under the key words "theory of Habermas" to get these possible sources: http://www.google.com/search?client=safari&rls=en&q= theory+of+Habermas&ie=UTF-8&oe=UTF-8 In the future, you can find the information you ... Whenever x = 0, value is on Y-axis. Since this is not my area of expertise, I searched Google under the key words "Transcription and translation" to get these possible sources: http://www.google.com/search?client=safari&rls=en&q= transcription+and+translation&ie=UTF-8&oe=UTF-8 In the future, you can fin... Probability of picking pepperoni on the first pick = 1/13. Assuming that customer does not want to "double up" on any topping, 12/13 for first pick for some other topping and 1/12 for second for pepperoni. If the events are independent, the probability of both/all ev... math-statistics (?) With your limited data, it is hard to tell. I don't know what the numbers represent. They need to be labeled better. Z = (mean1 - mean2)/standard error (SE) of difference between means SEdiff = √(SEmean1^2 + SEmean2^2) SEm = SD/√n If only one SD is provided, you can use just that to determine SEdiff. Find table in the back of your statistics text labeled something like "area... maths (?) Can you be more specific? Area? Perimeter? Width? Length? 
Consumer Math P = 2L + 2W Room size is irrelevant. Cerebral cortex makes rational decisions usually. Hint: Where is the x? Have you tried the conditioning theories of Skinner or Pavlov? Just substitute values and calculate. L = 50+11(14)/7+10^2/4(25) A. Let x = years ago 6(15-x) = 40-x Solve for x. B. B = 3J (B+7) +( J+7) = 68 Substitute 3J for B in second equation and solve for J. Insert that value into the first equation and solve for B. Check by inserting both values into the second equation. If the events are independent, the probability of both/all events occurring is determined by multiplying the probabilities of the individual events. O = 1/4 A = 1/3 If x=y, then it could be 15x^2 or 15y^2. a. mean ± 2 SD = ? b. Z = (score-mean)/SD Find table in the back of your statistics text labeled something like "areas under normal distribution" to find the proportion/probability related to the Z score. c. Your decision. astronomy needs help Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. A. Let x = odd, then x+1 = even 4x + 14 = 3(x+1) Solve for x. B. (x + x+1 + x+2 +x+3)/4 = 16 C. (x + x+1 + X+2)/9 = 7 I'll leave you to use a similar process to solve the last one. college stats (incomplete) Lacking needed data. Stats () You can answer 1-3 just by citing the data. EVave = sample mean SEave = SEm = SD/√n Z = (score-mean)/SEm Find table in the back of your statistics text labeled something like "areas under normal distribution" to find the proportion/probability related to the Z ... If you don't have choices, try online or hard copy dictionary. We do not do your homework for you. Although it might take more effort to do the work on your own, you will profit more from your effort. We will be happy to evaluate your work though. We do not do your homework for you. Although it might take more effort to do the work on your own, you will profit more from your effort. 
We will be happy to evaluate your work though. Your data is only for the maximum temperatures. There is no data on the minimum. college stats 2. 95% = mean ± 1.96 SEm SEm = SD/√n 3. 90% = mean ± 1.645 SEm 5. Z = (score-mean)/SEm Find table in the back of your statistics text labeled something like "areas under normal distribution" to find the proportion/probability related to the Z scor... stats college Which one has the highest percentage? We do not do your homework for you. Although it might take more effort to do the work on your own, you will profit more from your effort. We will be happy to evaluate your work though. (10+5)/(20+5) = ? (A) 2/2 = 1 + 24 = ? (B) 4/2 = 2 + 7 = ? C) 5/3 = 1 2/3 + 2 = ? Add them together. Use the same process with the remaining problems. Since this is not my area of expertise, I searched Google under the key words "fennec fox" to get these possible sources: http://www.google.com/search?client=safari&rls=en&q=fennec+fox&ie=UTF-8&oe= UTF-8 In the future, you can find the information you desire more quic... 1 - .3 - .15 = ? Psychology HELP ! It is asking for what you think. Please only post your questions once. Repeating posts will not get a quicker response. In addition, it wastes our time looking over reposts that have already been answered in another post. Thank you. See later post. 1. Right 2. http://www.google.com/search?client=safari&rls=en&q=metabolism&ie=UTF-8&oe=UTF-8 3. Right 1. The experimental group receives the independent variable. The control group is similar to experimental, except it does not receive the independent variable. Extraneous variables are balanced between experimental and control groups. 2. http://www.google.com/search?client=saf... statistics (incomplete) More data needed. Z = (mean1 - mean2)/standard error (SE) of difference between means SEdiff = √(SEmean1^2 + SEmean2^2) SEm = SD/√n If only one SD is provided, you can use just that to determine SEdiff. 
Find table in the back of your statistics text labeled something like "area... Have followed the instructions on page 349? 1. let x = single rooms x + (3x+8) = 120 2. Let x = helper, then x+5 = mechanic 6(x + x+5) = 114 Solve for x. You're welcome. 2x1 + 4x1/10 + 9x 1/1,000 = 2x + .4x + .009x Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. Since this is not my area of expertise, I searched Google under the key words "acceleration due to gravity formula" to get these po... statistics (?) What are your choices? Ho: women's level > men's Z = (mean1 - mean2)/standard error (SE) of difference between means SEdiff = √(SEmean1^2 + SEmean2^2) SEm = SD/√n If only one SD is provided, you can use just that to determine SEdiff. Find table in the back of your statistics t... Algebra 1A Although these suggestions are not specifically related to math, I hope they will be helpful. http://drdavespsychologypage.intuitwebsites.com/Learning_Requirements.pdf http:// drdavespsychologypage.intuitwebsites.com/Learning_Hints.pdf http://drdavespsychologypage.intuitwebsite... business communcation If those are your choices, A. Statistics (?) I don't know how you define "simulations." If it is the number of intervals, A would be the answer. Whatever year the percent was less than the maximum of the confidence interval of 20%, 2000 and 2003. Physics needs help Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. Agree with 1 and 2. However, once aware of symptoms and concerns, I would suggest ruling out organic disorders. a) 13/52 = ? b) 13/52 = ? c) If the events are independent, the probability of both/all events occurring is determined by multiplying the probabilities of the individual events. d) one card not a diamond = (52-13)/52. Take it from here. 
physics needs help Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. Medical law needs help Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. algebra (incomplete) Lacking needed data. Either-or probabilities are found by adding the individual probabilities. 1/2 + 1/2 = 1 About 68% of scores will be between 67-83 (±1 SD), 95% between 59-91 (± 2 SD), 99% between 51-99 (±3 SD). .8(2300) - 250 = ? .2(2300) + 250 = ? For any normal distribution, about 68% lie between ± 1 SD. I searched Google under the key words "comparison distribution hypothesis testing" to get these possible sources: http://www.google.com/search?client=safari&rls=en&q= Comparison+distribution+hypothesis+testing&ie=UTF-8&oe=UTF-8 In the future, you can find the informat... What is the context for each of these words? If the person being described is feminine, the o endings would change to a. If the license tags are not included as taxes: 860/11595 = ? If the tags are included as taxes: (860+95)/11595 = ? Statistics is a branch of math used to summarize data and test hypotheses. If you are testing for significant differences: Z = (mean1 - mean2)/standard error (SE) of difference between means SEdiff = √(SEmean1^2 + SEmean2^2) SEm = SD/√n If only one SD is provided, you can use just that to determine SEdiff. Find table in the back of your s... It depends on how you define normal behavior. I searched Google under the key words "definitions of normal behavior" to get these possible sources: http://www.google.com/search?client=safari&rls=en&q =definition+of+normal+behavior+in+psychology&ie=UTF-8&oe=UTF-8 In th... 99% = mean ± 2.575 SEm SEm = SD/√n Within each month, there is more variation in terms of the daily temps. 
If the events are independent, the probability of both/all events occurring is determined by multiplying the probabilities of the individual events. Assuming equal amount of odd and even numbers, the probability of getting one odd number is .5. Please only post your questions once. Repeating posts will not get a quicker response. In addition, it wastes our time looking over reposts that have already been answered in another post. Thank you. See your previous post. Please only post your questions once. Repeating posts will not get a quicker response. In addition, it wastes our time looking over reposts that have already been answered in another post. Thank you. See your previous post. How would you like us to help? Standardization would allow more effective interpretation of the information given. Since these are only nominal scale data, a bar diagram with content on X-axis and frequency on Y-axis would be good. Please only post your questions once. Repeating posts will not get a quicker response. In addition, it wastes our time looking over reposts that have already been answered in another post. Thank you. Please only post your questions once. Repeating posts will not get a quicker response. In addition, it wastes our time looking over reposts that have already been answered in another post. Thank you. Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. Use the Pythagorean theorem: x^2 + x^2 = 8^2 Solve for x. graduate statistics Z = (score-mean)/SD Find table in the back of your statistics text labeled something like "areas under normal distribution" to find the proportion/probability related to the Z score. graduate statistics Use same process as indicated in your following post. 
graduate statistics Z = (score-mean)/SEm SEm = SD/√n Find table in the back of your statistics text labeled something like "areas under normal distribution" to find the proportion/probability related to the Z score. For median (b), arrange scores in order of value. Median = 50th percentile = point where half the scores are valued above and half below. (a) Mean = sum of scores/number of scores Subtract each of the scores from the mean and square each difference. Find the sum of these squar... Geology (?) What type of rock are you seeking? Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. The prefrontal lobe of the cortex is a decision maker. It has my vote. 10º - 2ºh = -8º Solve for h. Using your formula: $6000 * .07(8) = ? However, after the first year, the principal is 6000 + .07(6000) = $6420, so the second year's interest = $6420 * .07(6420) = ? You need to revise your formula. Z = (score-mean)/SEm SEm = SD/√n Find table in the back of your statistics text labeled something like "areas under normal distribution" to find the proportion/probability (.95) and its Z score. Insert Z score into top equation to calculate score. .4 * 50 = 20 20 + 20^2 + 20^3 + 20^4 = ? If one above a certain number is 3 under another, the difference is 4. Arrange scores in order of value and tally the frequency of each score for the frequency table. 0 = 5 1 = ? 2 = ? 3 = ? 4 = ? 5 = ? 6 = ? 9 = ? We cannot reproduce a histogram here. However, even the frequency table should give you an idea of the relative shape (normal, positi... 7! = "seven factorial" = 7*6*5*4*3*2*1 = ? SEm = SD/√n The larger the sample n, the smaller the SEm. Z = (score-mean)/SD Find table in the back of your statistics text labeled something like "areas under normal distribution" to find the proportion/probability related to the Z score. Do you consider that probability extreme? 
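The recipe repeated throughout these answers - Z = (score − mean)/SEm with SEm = SD/√n, then a normal-table lookup - can be checked quickly in code. The numbers below are made up for illustration:

```python
# Sketch of the repeated recipe: Z = (score - mean) / SEm, SEm = SD / sqrt(n),
# then the normal table gives the probability. Example values are illustrative.
from math import sqrt
from statistics import NormalDist

def z_for_sample_mean(score, mean, sd, n):
    sem = sd / sqrt(n)  # standard error of the mean
    return (score - mean) / sem

z = z_for_sample_mean(score=83, mean=75, sd=8, n=16)  # SEm = 2, so Z = 4.0
p_below = NormalDist().cdf(z)  # area under the normal curve below Z
print(f"Z = {z:.2f}, P(below) = {p_below:.4f}")
```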
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=PsyDAG&page=18","timestamp":"2014-04-21T08:36:16Z","content_type":null,"content_length":"27229","record_id":"<urn:uuid:97617f00-f0e8-4f37-beeb-19dbcd77c3fb>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
Countryside, IL Algebra Tutor Find a Countryside, IL Algebra Tutor ...Having both an Engineering and Architecture background, I am able to explain difficult concepts to either a left or right-brained student, verbally or with visual representations. I am also great at getting students excited about the subject they are learning by relating it to something relevant... 34 Subjects: including algebra 1, algebra 2, reading, physics ...I am currently a fourth grade teacher in Lockport. Prior to that I was a second grade teacher for two years. I have also tutored students from first grade through college level. 13 Subjects: including algebra 1, reading, dyslexia, elementary (k-6th) ...I have two full years of chemistry in college. I have helped 4 daughters do trigonometry and precalculus, which also includes complicated trigonometric identities algebraically. I have taught anatomy and physiology at a career college in New York. 17 Subjects: including algebra 1, algebra 2, chemistry, reading ...I have been successful in the past tutoring high school students in math and science, but enjoy all ages and skill levels. As a current dental student, my passion for learning is proven every day, and I hope to inspire your child to achieve their academic goals. I played soccer from the time I was four until I was eighteen. 17 Subjects: including algebra 1, algebra 2, chemistry, biology I am a certified math teacher. Currently, I work as a substitute teacher at Elmwood Park School District and Morton High Schools in Cicero. I have been tutoring students since 2008 and preparing them for the ACT. I have a BA in Mathematics and Secondary Education from Northeastern Illinois University.
12 Subjects: including algebra 1, algebra 2, calculus, geometry
{"url":"http://www.purplemath.com/Countryside_IL_Algebra_tutors.php","timestamp":"2014-04-21T14:48:43Z","content_type":null,"content_length":"24027","record_id":"<urn:uuid:5f66f7c6-a0f1-49bd-a5e1-0afabc4d8b33>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic Electronics and Circuits Components considered “passive” do not rely on power; such components are the resistor, capacitor, and inductor. There are other types of passive components, but the focus of this discussion is resistors. Circuit designing is a complicated task where you are puzzled over defining the right value of resistors to be used. Resistors are available on the market at a low price, and there are thousands of them with different characteristics and specifications. The function of a resistor in a circuit is to provide resistance, limiting the flow of electrons depending on the specification of the component. For this reason, it can be used in different applications. Other than its ability to provide resistance, it can also limit the flow of current, adjust voltage levels, and so on. The following are the properties of resistors: tolerance, voltage rating, power rating, temperature rating, frequency response, and temperature coefficient. Let's explain the properties of a resistor one at a time, starting with tolerance. This is the range over which the value of the resistor can vary. The common parameters are 1%, 5% and 10%. You may find other resistors whose tolerance falls below the one percent value; they are categorized as precision resistors. The voltage rating describes how much voltage a circuit can safely adjust or drop through the application of a resistance. Power rating is simply the amount of power consumed by the resistor. It is highly suggested to use a resistor with a power rating value higher than what is exactly required. Temperature rating is the range over which the device can operate normally. Exceeding the limit will cause the resistor to be damaged or burnt. Finally, the frequency response is simply the change in impedance. The change in the value of impedance depends on the function of the circuit.
Buying a resistor on the market will require you to provide the exact values discussed in the previous paragraphs. However, before even trying to purchase them, you need to make sure that they are actually needed in your project. Summary of the Characteristics of Resistors Here is a list of the basic characteristics of resistors: 1. Resistance 2. Power 3. Temperature Coefficient 4. Noise 5. Inductance
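As a rough illustration of two of the properties above, tolerance and power rating, here is a small sketch; the function names and example values are invented for illustration, not taken from the post:

```python
def tolerance_range(nominal_ohms, tolerance_pct):
    """Min/max resistance implied by a tolerance band, e.g. 1000 ohms +/-5%."""
    delta = nominal_ohms * tolerance_pct / 100.0
    return nominal_ohms - delta, nominal_ohms + delta

def power_dissipated(voltage, resistance_ohms):
    """P = V^2 / R; pick a resistor whose power rating exceeds this value."""
    return voltage ** 2 / resistance_ohms

print(tolerance_range(1000, 5))  # (950.0, 1050.0)
print(power_dissipated(5, 100))  # 0.25 -> choose, say, a 0.5 W part
```

This matches the advice above: a 5% part may measure anywhere in its band, and the chosen power rating should exceed the computed dissipation.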
{"url":"http://basicselectronicsandcircuits.blogspot.com/2010/10/characteristics-of-resistors.html","timestamp":"2014-04-18T03:12:21Z","content_type":null,"content_length":"55016","record_id":"<urn:uuid:e2672394-127c-49c9-8d7e-d4dc54646be2>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
chemical potential-fictitious atoms Now, according to the formula of chemical potential you have kindly mentioned and the information we have from the pseudopotential of the fictitious "H", is there a way to calculate the derivative of "F" and then calculate the chemical potential of the "H"? Well, that'd be the usual methods for determining a functional derivative (the WP article conveniently includes some exact values of [tex]\frac{\delta E[\rho]}{\delta\rho}[/tex] for some of the simpler approximate density-functionals.) The exact density functional is not known, though. Also, if we calculated the chemical potential, could it be used for calculating the formation energy? Or are real atoms considered in the formation energy? Well, no, not really. And using the formula given requires that you already know the energy (E[rho]). But no quantum-chemical method I know of requires Z to be an integer.
{"url":"http://www.physicsforums.com/showpost.php?p=2193878&postcount=4","timestamp":"2014-04-18T23:25:37Z","content_type":null,"content_length":"8930","record_id":"<urn:uuid:e42c7e83-f9b3-4091-a842-2eb253ed8f02>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
find number of points on the straight line How do I find the number of points on the straight line joining (-4,11) and (16,-1) whose coordinates are positive integers? The slope of the line segment is: $\frac{12}{-20}=-\frac{3}{5}$ and so the lattice points are found by taking the given leftmost point and successively adding 5 to the x-coordinate and subtracting 3 from the y-coordinate to get: (-4,11), (1,8), (6,5), (11,2), (16,-1)
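The stepping argument above can be sketched programmatically: the gcd of the coordinate differences gives the number of equal integer steps along the segment. This is a hypothetical helper, not code from the thread:

```python
from math import gcd

def positive_lattice_points(p, q):
    """Lattice points on the segment from p to q whose coordinates are
    both positive integers (endpoints included if they qualify)."""
    (x1, y1), (x2, y2) = p, q
    g = gcd(abs(x2 - x1), abs(y2 - y1))      # number of equal steps
    dx, dy = (x2 - x1) // g, (y2 - y1) // g  # smallest integer step
    pts = [(x1 + k * dx, y1 + k * dy) for k in range(g + 1)]
    return [(x, y) for (x, y) in pts if x > 0 and y > 0]

print(positive_lattice_points((-4, 11), (16, -1)))
# [(1, 8), (6, 5), (11, 2)]
```

So of the five lattice points on the segment, three have both coordinates positive.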
{"url":"http://mathhelpforum.com/geometry/210159-find-number-points-straight-line.html","timestamp":"2014-04-18T22:27:41Z","content_type":null,"content_length":"33308","record_id":"<urn:uuid:4bab770d-573d-4242-a477-8237ab336b18>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Polarization Ellipse If you take the polarization vector and rotate it 180 degrees, you are describing the same ellipse. That's where the factor of 2 comes from, to make those functions periodic over 180 degrees instead of 360 degrees. Not sure if that's exactly what you were talking about when you said rotate both of your polarizers 90 deg., but it's the same idea.
{"url":"http://www.physicsforums.com/showthread.php?t=186197","timestamp":"2014-04-20T18:29:47Z","content_type":null,"content_length":"22251","record_id":"<urn:uuid:62452647-7def-4d2e-bcbf-87ee19adabae>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
Is Cartesian Product associative? December 1st 2010, 11:49 AM Matt Westwood Is Cartesian Product associative? Let $A, B, C$ be sets. Is it the case that $A \times (B \times C) = (A \times B) \times C$ where $(A \times B)$ denotes Cartesian Product? That is, $A \times B = \{(a, b): a \in A, b \in B\}$. I have J.A. Green's "Sets and Groups" (1965, Routledge & Kegan Paul) saying: "Show that the sets $A \times (B \times C)$ and $(A \times B) \times C$ are never the same" (Chapter One Exercise 8), on the one hand. On the other hand, I have W.E. Deskins: Abstract Algebra (1964, Dover) saying: "Theorem 1.9. $(a, b, c) = (d, e, f)$ if and only if $a=d, b=e, c=f$." The definition of $(a, b, c)$ in Deskins is given as $(a, (b, c))$, i.e. as a construction of ordered pairs. This latter sort of seems to imply that $A \times (B \times C) = (A \times B) \times C$, but no mention of it is made in that book. T.S. Blyth's "Set Theory and Abstract Algebra" (Longman, 1975) gives: $E \times F \times G = \{(x, y, z): x \in E, y \in F, z \in G\}$ which sort of seems to imply that in this context associativity does hold. I've seen this set as an exercise in several places, but never have I seen a definitive proof one way or another. It hinges on whether $(a, (b, c)) = (a, b, c) = ((a, b), c)$ which you instinctively feel ought to be true, but I can't come up with a convincing argument either way, unless you take literally the expression of $(a, b)$ as equal to $\{\{a\}, \{a, b\}\}$ (according to Wiener and Kuratowski) which is in the final instance a convenient way of obtaining an ordered pair in axiomatic set theory (e.g. ZF). And Green's "Sets and Groups" (mentioned above) is pretty assertive when it comes to exercise 1.9, and no mention has been made in that work of the Wiener/Kuratowski definition. What's the current thinking on this result?
December 1st 2010, 12:30 PM From a set theoretic point of view, every object is a set which means that any true definition of an ordered pair should be equivalent to the last definition you have in terms of sets. In this case there are no issues. If associativity doesn't hold, then it still almost holds in the sense that there is a natural bijection between the cartesian products. I'm not sure if anyone currently uses the nonassociative notion, but I certainly wouldn't. December 1st 2010, 12:37 PM When one talks about associativity, one assumes that the product is a binary, not ternary, operation. Therefore, if Blyth's definition $E \times F \times G = \{(x, y, z): x \in E, y \in F, z \in G\}$ involves ordered triples, it probably defines a ternary operation and is irrelevant to the question of associativity. $(A\times B)\times C$ consists of elements of the form $((a, b), c)$, while $A\times (B\times C)$ consists of elements of the form $(a, (b, c))$. At least when A, B, C themselves don't contain pairs, two elements from $(A\times B)\times C$ and $A\times (B\times C)$ cannot be equal. As was pointed out, there is a natural bijection that maps ((a, b), c) to (a, (b, c)). December 1st 2010, 12:55 PM Matt Westwood *smacks forehead* D'oh. Of course. Thanks bro's.
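The "not equal as sets, but naturally bijective" point from the replies can be made concrete with small finite sets (the sets here are chosen arbitrarily for illustration):

```python
from itertools import product

A, B, C = {1, 2}, {"x"}, {True, False}

left = set(product(product(A, B), C))   # elements of the form ((a, b), c)
right = set(product(A, product(B, C)))  # elements of the form (a, (b, c))

# The two products are not equal as sets of tuples...
print(left == right)  # False

# ...but the natural bijection ((a, b), c) -> (a, (b, c)) matches them up.
rearranged = {(a, (b, c)) for ((a, b), c) in left}
print(rearranged == right)  # True
```

This mirrors the conclusion of the thread: associativity fails literally, yet the bijection is so natural that most authors identify the two products.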
{"url":"http://mathhelpforum.com/discrete-math/164983-cartesian-product-associative-print.html","timestamp":"2014-04-21T05:32:48Z","content_type":null,"content_length":"11011","record_id":"<urn:uuid:8d3fb325-a662-4aa2-8bb5-d395967d9907>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
Gold River, CA Algebra 2 Tutor Find a Gold River, CA Algebra 2 Tutor ...The answer is simple. Unfortunately, most textbooks and math teachers go through lengthy discussions of all the theory behind the math, which leads to boredom and frustration. My solution to this problem is simple. 12 Subjects: including algebra 2, chemistry, biology, algebra 1 ...Then I allow them to recreate it and explain back to me how things work to make sure they have a clear understanding. Mistakes during learning are ok! We learn from mistakes! 30 Subjects: including algebra 2, reading, physics, geometry ...I've worked with computers for numerous projects and assignments since I was in the 5th grade. I help my mother, who usually has inquiries about specific functions of programs and internet browsers. I have plenty of experience with general computer programs ranging from Microsoft Word to Adobe Photoshop. 17 Subjects: including algebra 2, English, geometry, chemistry ...As a student I worked as a tutor and mentor for students in my department. My skills are in math and English, as well as history and economics. I'm good at explaining what I know in a simple, easy-to-understand way. 27 Subjects: including algebra 2, reading, English, Microsoft Excel ...Since then, I've taught adults at a technical college and a local software company. For the past several years I have had the opportunity to be a stay-at-home parent and be involved in raising my two boys (11 and 7). I've been able to coach them on their various sports teams (soccer, basketball,...
15 Subjects: including algebra 2, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/Gold_River_CA_Algebra_2_tutors.php","timestamp":"2014-04-17T01:04:38Z","content_type":null,"content_length":"24108","record_id":"<urn:uuid:e7ea30aa-c42a-40cd-b02e-6c2a8574ef65>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
In this section, we review the definitions and applications of variables and expressions. In case this is not review, we will attempt to go over the ideas carefully enough so that this section may serve as a first introduction. Types of Numbers In mathematics there are names for many different types of numbers; you've encountered lots of these types already, and some of these types contain the others. For instance we can start with the whole numbers such as 0, 1, 2, 3, etc. Using subtraction we can build negative numbers by subtracting a bigger number from a smaller, giving us an answer in the set {..., -3, -2, -1}. Using division we can identify fractions between 0 and 1 by dividing a smaller number by a bigger, e.g. {1/2, 2/3, 3/4, ...} or {-1/-2, -2/-3, -3/-4, ...}. We can also identify negative fractions between -1 and 0 by dividing a negative number by a positive or a positive number by a negative: {-1/2, -2/3, -3/4, ...} or {1/-2, 2/-3, 3/-4, ...}. Every whole number can be written as a fraction, such as $\textstyle 2 = \frac{2}{1}$. The rational numbers are exactly those numbers which can be written as fractions. Rational numbers are a subset of numbers we call real numbers. Some calculators allow you to differentiate between rational numbers and real numbers by representing the rational number as a fraction. If you use decimal notation, the decimals in your rational number may go on forever, for example $\textstyle \frac{1}{3}=0.333\ldots$. The real numbers include all of the types of numbers mentioned before (whole numbers, negative numbers, fractions, etc.) and others that require special operations such as roots to represent. These other numbers may not have any recognizable pattern to their digits, such as $\sqrt{2}=1.41421356237\ldots$. But, at the end of the day, the real numbers act just like the rational numbers that you're already familiar with.
For those readers that are geometrically inclined, one may think of the real numbers as a line (or ruler), where every point on the line corresponds to exactly one number, as in the picture below. Variables and Constants When working with mathematics or science problems it is frequently necessary to talk about numbers whose value you do not know immediately. For example, suppose you are asked how many 1 × 1 squares fit inside a rectangle that measures 9 squares by 10 squares? In this case the number we do not know is "the number of squares." Some people answer correctly right away by saying "The number of squares that fit inside the rectangle is 90." But how did they find this? They could say "The number of squares in the rectangle is equal to the number of squares of its length times the number of squares of its width. The rectangle is 9 squares long and 10 squares wide, so it contains 90 squares." This is a lot of words. We could express ourselves mathematically by writing: \begin{align} \text{Area} & = \text{Length} * \text{Width} \\ \text{Length} & = 9 \text{ and } \text{Width} = 10 \\ \text{Area} & = 9 * 10 \\ \text{Area} & = 90 \end{align} For mathematicians, this is still a lot of writing. Mathematicians usually choose to name each unknown with just a single letter, mostly out of a need for conciseness (that is, shortness), since writing one letter is easier than writing a word. A variable is the letter (or symbol) we are using to represent an unknown number. We could rewrite the calculations above as: \begin{align} \text{A} &= L * W \\ \text{L} & = 9 \text{ and } W = 10 \\ \text{A} & = 9 * 10 \\ \text{A} & = 90 \end{align} Evaluating expressions An expression is a collection of symbols you could put into a calculator and get a number. To be precise, an expression is a well-formed formula, but don't worry about the formal details. Expressions are things we can evaluate and I like to call them "numbers in fancy clothing."
Consider the following examples of expressions and non-expressions: \begin{align} 4 + 2 \\ \end{align} This is an expression, since we can simply add $4 + 2$ and get $6$ \begin{align} (4+10)/8 \\ \end{align} This is also an expression, since it can be evaluated to $14/8 = 7/4$ \begin{align} (*3 + 2 \\ \end{align} This is not an expression. Note that $3$ is not being multiplied by anything, and there is no right parenthesis to match the left one. \begin{align} \sqrt{-4} \end{align} While $\sqrt{-4}$ is not a real number, it is still an expression since it follows the rules of the order of operations. Now that you're familiar with what expressions look like, let's consider a practical example. A cup of coffee at my favorite café costs $2.00. I drink 4 cups of coffee a day. I want to calculate how much money I spent on coffee this week, and in all of this year. Let's start with figuring out how much I spend on coffee in a week. • Let's call the total number of cups of coffee I drink in a week T. We can give a name to how much money I spend, let's call it C (for cost). C is an expression, but we need a more useful one involving T. Every cup of coffee costs $2.00, so: • 0 cups of coffee costs $0, as 0×2 is 0. • 1 cup of coffee costs $2, as 1×2 is 2. • 2 cups of coffee costs $4, as 2×2 is 4. • T cups of coffee costs ... $2T. So, 2T is an expression representing the cost of T cups of coffee. This is the first step in figuring out how much I spend on coffee in a week. To get further we need to know the value of T: how many cups of coffee I drink in a week. If I drink 4 cups a day, and there are 7 days in a week, this must mean that T = 4 × 7 = 28. By replacing T with the number we calculated we can see that C is the same as 2.00 × 28 = 56.00. So I spend $56.00 a week on coffee! Maybe I should cut back! What about how much I spend in a year? Well, we figure this out in the same way.
Let's first think about a year. If we keep our names the same we still have that C is the same as 2.00 × T. But now there are 365 days in a year, so the total number of cups of coffee (that is, the number T) changes. We can calculate it just as we did before: drinking 4 cups a day leads to T = 365 × 4 = 1,460. Now that we know the value of T we can see that C is the same as 2.00 × 1,460 = 2,920.00. Now I can see that I spend $2,920.00 a year on coffee. Yes, I need to cut back!! Notice that the solution of these two problems is basically the same; the only things that change are the values of C and T. This is why they are called variables, because the exact numbers varied. Some of the numbers we used in this calculation do not change. For example, the number of days in a week is always 7. We may still use a letter to represent the number of days in a week if we want, but since the number is not going to change, we simply leave it as 7. Letters or symbols that represent specific unchanging numbers are called constants. In practical situations what you think of as a constant sometimes depends on how you think about the problem. A careful reader may point out that there are not always 365 days in a year: in leap years there are 366. Since I was working out how much I spent for the year 2010, the number of days in the year is constant and equal to 365. If I wanted to make a table of how much I spent on coffee in the last 10 years, then it might be better to use a variable to represent the number of days in a year. Another example, which might be familiar from physics, is the amount of acceleration of a falling object due to gravity. For most problems this acceleration is treated as a constant g = 9.8 m/s^2. However, for problems involving objects that aren't on Earth this may be a bad approximation, and the value we used for g may not be appropriate in equations concerning objects on other planets.
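The weekly and yearly calculations above follow exactly the same pattern, which a short script makes explicit; the function name is made up for illustration:

```python
def coffee_cost(price_per_cup, cups_per_day, days):
    """C = price * T, where T = cups_per_day * days."""
    total_cups = cups_per_day * days   # this is T
    return price_per_cup * total_cups  # this is C

print(coffee_cost(2.00, 4, 7))    # 56.0   -> $56.00 a week
print(coffee_cost(2.00, 4, 365))  # 2920.0 -> $2,920.00 a year
```

Only the value of `days` varies between the two questions; the price per cup and cups per day play the role of constants here.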
The problems here are not going to ask us to worry about when it is OK to treat the acceleration due to gravity as a constant, as that is much more appropriate for a physics course. For us, constants are going to be those numbers fixed in the problem that do not change, like the price of coffee in the example above, or in some cases some very well known fixed numbers may come up (such as the number of cards in a deck, the number of days in a week, etc.). Variables are typically written using letters, such as x, t, or C. For cultural reasons x is an extremely common choice for the name of a variable. But when naming a variable or constant yourself it is best to choose something connected to the problem such as C for cost, T for total, etc. This makes it much easier to make sense of the equations you end up looking at. Constants are typically written as the numbers themselves, such as 2, -5, and 0.75, or in some cases may be represented by letters, such as g (from above) and π. As you can imagine, equations that come up are often more complicated than the equation in the preceding example. We are going to need some vocabulary for the different parts of the equations we encounter. For example, if I let G be the amount of gas I spend driving to the café each day, then the expression for how much I spend C might look like: C = 2.00×T + G×D (with D the number of days). Now C is the sum of two things called terms. In this case there are two terms in the expression, namely 2.00×T and G×D. A term of a sum is just one of the pieces we are adding together. In the expression 2.00×T+G×D−7 there are three terms. Two of the terms are 2.00×T and G×D. There are two possibilities for handling −7. The first possibility is to consider subtraction the same as addition and consider 7 to be the third term. The second possibility is that we think of subtracting 7 as adding −7. In this case we might say that the terms are 2.00×T, G×D, and −7.
To be honest, it doesn't really matter much how we choose to think of things, but we should try to be consistent. Since we used the word "sum" in the definition of term above, we shall try to consistently use the second possibility to describe terms that are subtracted instead of added. Implied Multiplication with Expressions There are many ways to indicate that two numbers should be multiplied. You are probably most familiar with using the symbol ×, in equations such as 2 × 2 = 4. But because mathematics developed in many places, there are other symbols that sometimes get used and this becomes particularly important in algebra. Why have more than one notation? Believe it or not, for convenience! For cultural reasons the most common letter to choose for a variable is x, but now things can look confusing when we try to write out expressions like x × 2. It is just an unfortunate fact of life that our variable and our multiplication symbol look a lot alike. Add poor penmanship to this and you're asking for trouble! There are two common ways to deal with this. The first is to introduce another symbol for multiplication, namely a dot written in the middle of the line. For example instead of writing 2 × 2 = 4 one can write 2 · 2 = 4. Another even more common strategy is to do away with writing anything at all! Suppose I want to multiply x by 2. Since it can lead to confusion I do not want to write 2 × x. I could write, with our new dot notation 2 · x, or I could just be concise and decide you'll know what it means if I write 2x. That's right, I skipped writing any symbol for multiplication at all! This is known as implied multiplication, because I never really said I was multiplying, I just implied it. This is by far the most common way to express multiplying in algebra. At first this may seem a crazy way to do things, but it works particularly well with our intuition about units.
We are taught from elementary school that if I have 1 apple and someone gives me 1 apple then I have 2 apples. In the same way it seems very natural to write 1x + 1x = 2x. If I have 1 x and someone gives me 1 x then I will just have 2 x's. Because variables have such a strong similarity to our units in simple examples like this, it has been the custom never to place the variable before the number in implied multiplication. While it may be in some technical sense correct to write x2 for the product of 2 and x, people may not understand what it means. So always place explicit numbers before the variables when using implied multiplication. Implied multiplication is also frequently used between two different variables, or even two whole expressions (provided you use parentheses). So we may encounter expressions like xy for the product of x and y, or x(a + b) for the product of x and a + b. Because implied multiplication is so common it gives us even more reason to use single letters for variables. While it might have been nice to use a variable YC for the yearly cost for my coffee, if I were not very careful to explain what I meant some readers may think this represents some variable Y multiplied by another variable C. On the other hand in complicated situations with lots of variables sometimes it is worth risking confusion to choose variable names that make sense. Finally, you can use implied multiplication between two numbers. Just try it: if we wanted to write 2 × 2 = 4 using implied multiplication we would end up writing 22 = 4, but twenty-two is not four! Instead, we write one or both constants in parentheses: 2(2) = 4 or (2)(2) = 4. Both forms are correct. Follow what your teacher says when in doubt. Evaluating expressions
When we know what the numbers associated with a variable are we can figure out what some expression we have written before equals. Suppose we are asked to find the value of x − 5 when x = 7. To do this we substitute 7 for x. This means we rewrite the expression, except everywhere we would write x we write 7. So we get 7 − 5, and now we can use simple arithmetic to figure out that the expression equals 2. If you look back to the discussion analyzing my coffee drinking habits, you'll see we substituted a few times already. Let's look at some more examples. Problem. Find the value of x · y − 9 when x = 2 and y = 3. Solution. We will do this in two steps. First we will substitute 2 for x to get: 2 · y − 9. Now we will substitute 3 for y to get: 2 · 3 − 9 = 6 − 9 = -3. In the last line, since we had gotten to a problem of arithmetic we simply used the rules of arithmetic we are already familiar with, using our precedence rules to work the simple arithmetic in the right order. One time to be a little bit careful is when implied multiplication is involved. Consider the following example. Problem. Evaluate 2x + 2 when x = 4. Solution. Substituting 4 for x we get: 2 · 4 + 2 = 8 + 2 = 10. Notice a very subtle change when I rewrote the expression. Specifically I inserted a multiplication symbol where there was an implied multiplication. Imagine that I hadn't: the last line would have started 24 + 2, which is not what we meant! We wanted to multiply x by 2; since x = 4, that means we wanted to multiply 4 by 2. The number 24 shouldn't be part of our calculation. It is very important when evaluating expressions to follow the correct order of operations (Don't forget "Please excuse my dear aunt Sally"). For example
Writing first by substituting for x, we have 3 · 2^2 · (2z+k) Now substituting for z 3 · 2^2 · (2 · 1/2 + k) Finally we substitute for k to get: 3 · 2^2 · (2 · 1/2 + 1). First we need to figure out the value inside the parentheses, that is we need to calculate 2 · 1/2 + 1. The correct order to do this in is to first multiply, then add. That is 2 · 1/2 + 1 = 1 + 1 = 2 Now that we have figured out what is inside the parentheses the problem is now to calculate: 3 · 2^2 · (2). We do the exponentiation first to get 3 · 4 · (2). Now we simply multiply, and our answer is 24. Let's look at when it is OK to add or remove parentheses. The purpose of parentheses is to establish precedence. Precedence tells you which operation goes first. The operation rules for precedence say to evaluate the parentheses first (PEMDAS!). But, what can you do with $\frac{x + 1}{2} = \frac{1}{2}$ ? $x + 1$ doesn't have just one value; it has as many values as we choose to assign to $x$. This is where the distributive property shows its power. It allows us to rearrange the operations while maintaining precedence. $\frac{x + 1}{2} = \frac{1}{2} * (x + 1) = \frac{1}{2}x + \frac{1}{2}$ so we can change our equation to $\frac{1}{2}x + \frac{1}{2} = \frac{1}{2}$ Add $\frac{-1}{2}$ to both sides $\frac{1}{2}x + \frac{1}{2} - \frac{1}{2} = \frac{1}{2} - \frac{1}{2}$ $\frac{1}{2}x = 0$ And multiply both sides by the inverse of $\frac{1}{2}$ $\frac{1}{2}x * 2 = 0 * 2$ $x = 0$
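The substitution and order-of-operations examples above can be checked with a short script. Python follows the same precedence rules (parentheses, then exponents, then multiplication, then addition and subtraction), so the worked answers can be reproduced directly; the function names below are made up for illustration:

```python
def x_minus_5(x):
    return x - 5          # the expression x - 5

def xy_minus_9(x, y):
    return x * y - 9      # multiplication happens before subtraction

def two_x_plus_2(x):
    return 2 * x + 2      # implied multiplication 2x written out as 2 * x

print(x_minus_5(7))       # 2
print(xy_minus_9(2, 3))   # -3
print(two_x_plus_2(4))    # 10, not 24 + 2

# 3x^2(2z + k) at x = 2, z = 1/2, k = 1: parentheses, then powers, then multiply
print(3 * 2**2 * (2 * 0.5 + 1))  # 24.0

# And x = 0 really does satisfy (x + 1)/2 = 1/2:
print((0 + 1) / 2 == 1 / 2)      # True
```

Writing the implied multiplication explicitly as `2 * x` avoids exactly the "24 + 2" mistake described above.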
We don't know what the value of $(x + 1)$ is, but it will always be the same thing on both sides of the equation, so it doesn't change the notion of equality.

$(x + 1) * \frac{2}{x+ 1} = 3 * (x+ 1)$

We use the inverse property to re-write $\frac{2}{x+1}$ as multiplication.

$(x + 1) * 2 * \frac{1}{x+ 1} = 3 * (x+ 1)$

And the associative property to re-write the multiplication.

$2* (x + 1) * \frac{1}{x+ 1} = 3 * (x + 1)$

And the inverse property to re-write $(x + 1) * \frac{1}{x+ 1}$ as 1:

$2 * (1) = 3 * (x+ 1)$

Since 2 * (1) has no variables we can evaluate it. We use the distributive property to re-write 3 * (x+ 1):

$2 = 3*x+ 3*1$

$2 = 3*x+ 3$

We subtract 3 from both sides of the equation.

$2 - 3 = 3*x+ 3 -3$

$-1 = 3*x$

And multiply both sides by $\frac{1}{3}$.

$-1*\frac{1}{3}= 3*x*\frac{1}{3}$

$-\frac{1}{3}= x$

Using parentheses and the properties of real numbers and equality we were able to get x alone and determine the only number for which our initial statement is true.

In the problems below letters will be given to represent various numbers. Decide if the following quantities should be referred to as variables or constants. Explain your reasoning.

1. Y, the number of years since the last moon landing.
2. x, the number of donuts in an unopened box of a dozen.
3. C, the price of a cup of coffee at a particular café.
4. c, the number of measured cups in a liter of liquid.
5. w, the number of windows open on your computer screen.
6. p, the number of problems in this book you have attempted.

Evaluate the following expressions at the indicated values.

1. $x^2 + 2x+3$ when $x = 4$.
2. $xy$ when $x = 2$ and $y = 3$.
3. $x^3 - 1$ when $x = -1$.

Answers to Exercises

First set:

1. Variable; the number of years since the last moon landing changes
2. Constant; there are always 12 donuts in a box of a dozen
3. Constant; prices at the same place tend to stay the same
4. Constant; the number of cups in a liter never changes
5.
Variable; the number of windows open on your computer screen changes
6. Variable or constant, depending on if you ever attempt these problems.

Second set:

1. $4^2 + 2(4) + 3 = 16 + 8 + 3 = 27$
2. $(2)(3) = 6$
3. $(-1)^3 - 1 = -1 - 1 = -2$

Last modified on 16 March 2013, at 05:15
Probing polygons minimally is hard, 1993

A fundamental problem in model-based computer vision is that of identifying which of a given set of geometric models is present in an image. Considering a "probe" to be an oracle that tells us whether or not a model is present at a given point, we study the problem of computing efficient strategies ("decision trees") for probing an image, with the goal to minimize the number of probes necessary (in the worst case) to determine which single model is present. We show that a ⌈lg k⌉-height binary decision tree always exists for k polygonal models (in fixed position), provided (1) they are non-degenerate (do not share boundaries) and (2) they share a common point of intersection. Further, we give an efficient algorithm for constructing such decision trees when the models are given as a set of polygons in the plane. We show that constructing a minimum-height tree is NP-complete if either of the two assumptions is omitted. We provide an efficient greedy heuristic strategy and show ...

Cited by 31 (4 self)
Fortran and C/C++ mixed programming

Example-1: Calling routines and functions

The following sample shows how Fortran routines and functions can be called from a C++ program.

(1) The C++ file:

    // This illustrates how a Fortran routine and function may be
    // called from a main program in C++
    #include <iostream.h>
    extern "C" {
        void __stdcall FR1(int *, int *);
        int  __stdcall FF1(int *);
    }
    int main()
    {
        int n = 10, nSq, nCube;
        FR1(&n, &nSq);
        nCube = FF1(&n);
        cout << "The square is:" << nSq << endl;
        cout << "The Cube is:" << nCube << endl;
        return 0;
    }

(2) The Fortran file:

          SUBROUTINE FR1(N,M)
    C     COMPUTES THE SQUARE OF N, RETURNS IN M
          M = N*N
          RETURN
          END

          INTEGER FUNCTION FF1(N)
    C     COMPUTES THE CUBE OF N
          FF1 = N**3
          RETURN
          END

Example-2: Passing C char string to a Fortran routine

The following sample shows how a C char string may be passed from a C++ program to a Fortran routine.

(1) The C++ file:

    // This illustrates how a Fortran routine may be
    // called from a main program in C++, and a char[] string passed to it
    #include <iostream.h>
    #include <string.h>
    extern "C" void __stdcall FR1(int *, int *, char *);
    int main()
    {
        int n = 10, nSq;
        char szCtest[20];
        FR1(&n, &nSq, szCtest);
        cout << "The square is:" << nSq << endl;
        return 0;
    }

(2) The Fortran file:

          SUBROUTINE FR1(N,M,CSTR)
          INTEGER*4 CSTR(1)
    C     HERE WE RECEIVE THE C CHAR STRING IN AN INTEGER ARRAY
    C     COULD ALSO HAVE USED A BYTE ARRAY
          M = N*N
          WRITE(6,20) (CSTR(L),L=1,3)
    20    FORMAT(' CSTR=',3A4)
          WRITE(6,*) 'DONE'
          RETURN
          END

Example-3: Passing arrays to a Fortran routine

The following sample shows how arrays may be passed from a C++ program to a Fortran routine.

(1) The C++ file:

    // Illustrate passing integer and floating point arrays
    // from C++ to Fortran
    #include <iostream.h>
    extern "C" {
        int   __stdcall SUMIT(int *, int *);
        float __stdcall MEAN(float *, int *);
    }
    int main()
    {
        int iA[] = {3,5,6,7,2,3,4,5,11,7}, iN = 10, iSum;
        float fpA[] = {1.2f,3.f,44.f,2.5f,-1.3f,33.44f,5.f,0.3f,-3.6f,24.1f}, fpMean;
        iSum = SUMIT(iA, &iN);
        fpMean = MEAN(fpA, &iN);
        cout << "The Sum of iA is:" << iSum << endl;
        cout << "The Mean of fpA is:" << fpMean << endl;
        return 0;
    }

(2) The Fortran file:

          INTEGER FUNCTION SUMIT(IA,N)
          INTEGER IA(1)
          ISUM = 0
          DO 50 J=1,N
    50    ISUM = ISUM+IA(J)
          SUMIT = ISUM
          RETURN
          END

          REAL FUNCTION MEAN(RA,N)
          REAL RA(1)
          SUM = 0.
          DO 50 J=1,N
    50    SUM = SUM+RA(J)
          MEAN = 0.
          IF(N.GT.0) MEAN = SUM/FLOAT(N)
          RETURN
          END

Calling C++ routines from Fortran

The following examples work with Microsoft Visual C++ and Compaq Visual Fortran. Your mileage may vary on other systems.

Example-1: Calling routines and functions

The following sample shows how C++ routines and functions can be called from a Fortran program.

(1) The Fortran file:

          INTEGER CR2
          CALL CR1(N,M)
          WRITE(6,20) N,M
    20    FORMAT(' The square of',I3,' is',I4)
          K = CR2(N)
          WRITE(6,30) N,K
    30    FORMAT(' The cube of',I3,' is',I15)
          CALL EXIT
          END

(2) The C++ files:

    extern "C" {
        void __stdcall CR1(int *, int *);
        int  __stdcall CR2(int *);
    }

    void __stdcall CR1(int *n, int *m)
    {
        // Compute the square of n, return in m
        int k;
        k = *n;
        *m = k * k;
    }

    int __stdcall CR2(int *n)
    {
        // Compute the cube of n
        int m, k;
        k = *n;
        m = k * k * k;
        return m;
    }

Further Reading

These are some other sources of information.

1. Digital (now Compaq/HP) Visual Fortran Programmer's Guide, esp. the chapter titled "Programming with Mixed Languages". This online book is included with all recent versions of the compiler. The book is also available online by clicking here.
2. Mixed-Language Issues (from Microsoft)
3. Also see Microsoft's C Calls to Fortran page.
4. Mixed Language Programming using C++ and Fortran 77 by Carsten A. Arnholm has many examples.
5. Mixed Language Programming - Fortran and C by Allan, Chipperfield and Warren-Smith is another good source.
6. FTN95 Mixed Language Programming from the University of Salford.
7. Interfacing Fortran and C by Janne Saarela.
8. Mixed Language Programming from Pittsburgh Supercomputing Center.
9. Some examples from DEC.
10. Will C++ be faster than Fortran? by T.L.Veldhuizen and M.E.Jernigan.
11. Mixing ANSI-C with Fortran 77 or Fortran 90 by B. Einarsson.
12. Comparison of C++ and Fortran 90 for Object Oriented Scientific Programming by J.Cary, S.Shasharina, J.Cummings, J.Reynders and P.Hinker.
13. Fortran and C Programming from Iowa State University.
14. Win32 Fortran Compiler Comparisons by John Appleyard.
15. Calling Fortran Routines from C/C++ by J. Thornburg.

If you have questions about this document please send them by e-mail to kochhar@physiology.wisc.edu

This page last modified on : Nov. 18, 2008
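The same cross-language concerns the examples above deal with — declaring a routine's prototype before calling it, and agreeing on how arguments are passed — also come up when calling compiled code from a scripting language. As a supplementary sketch not taken from the sources above (and assuming a POSIX system, where `ctypes.CDLL(None)` exposes the C standard library's symbols), here is how Python declares a C function's signature before calling it, much as the `extern "C"` declarations describe FR1 and FF1 to the C++ compiler:

```python
import ctypes

# Load the symbols already linked into the running process.
# On POSIX systems this exposes the C standard library; this is an
# assumption of the sketch and does not hold on Windows.
libc = ctypes.CDLL(None)

# Declare the prototype, playing the role of the extern "C" declarations.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# abs takes and returns a plain int, passed by value.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.strlen(b"mixed language"))  # -> 14
print(libc.abs(-7))                    # -> 7
```

Getting the declared types wrong here corrupts arguments silently, which is exactly the failure mode a mismatched C/Fortran prototype produces.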
Here's the question you clicked on:

Simplify. 5 1/3 + (-3 9/18)
A. 1 5/6
B. 2 8/15
C. 3 8/13
D. 8 5/6

it is 18 or 8

18 sorry

b is ans

Thanks N why tho?

I mean why B sorry I kinda like to know why just so I understand ^ ^

The answer is NOT B. I'll explain in my next post.

Since any number divided by itself equals 1, you can rewrite 1 as a fraction, e.g. \[1 = \frac{2}{2} = \frac{3}{3} = \frac{6}{6}\] Any whole number can be rewritten as a fraction by multiplying it by 1 in the form of a fraction. 5 can be made into a fraction like so: \[5 * 1 = 5*\frac{3}{3} = \frac{15}{3}\] \[5\frac{1}{3} = \frac{15}{3} + \frac{1}{3} = \frac{15+1}{3} = \frac{16}{3}\]

Similarly, fractions can also be rewritten by multiplying them by 1 in the form of a fraction. In this case we'll need to multiply it by something to get the denominator to equal 18. What fraction do you think it should be @HammerG ?

yeh its C from what I got.

Nope. If you show your work I can help you figure out where you went off the rails, so to speak.

no its b

5 and one third minus three and nine eighteenths cannot possibly be B. Even Wolfram Alpha agrees with me.

\[5 + \frac{1}{3}-(3+\frac{9}{18})\]

8 5/6

@wired yo did a big flaw

its not (3+9/18)

that term value comes out to (16/3)-(63/18) = 1.833

\[\huge 5 \frac{1}{3} = \frac{16}{3}\] \[\huge -3\frac{9}{18} \implies -3\frac 12 = -\frac{7}{2}\] \[\huge \frac{16}{3} - \frac{7}{2} = \frac{32}{6} - \frac{21}{6}\]

\[\huge 1.833 \ne 2\frac{8}{15}\]

another big flaw @igbasallote.. it is not a fraction to cancel like that (3 9/18)... it is a mixed fraction

\[3\frac{9}{18} = 3\frac{1}{2}\]

try it

see for yourself

no yaar

\[\frac{9}{18} = \frac{1}{2}\] \[3\frac{1}{2} = \frac{6}{2}+\frac{1}{2}=\frac{7}{2}\] \[\frac{16}{3}+-\frac{7}{2} = \frac{16}{3}-\frac{7}{2} = \frac{32}{6}-\frac{21}{6} = \frac{11}{6}=1\frac{5}{6}\] @Raja99 What are you talking about? We KNOW it's a mixed fraction, hence why we're showing our work converting it to fractions, giving them equal denominators, and then adding them together and turning them back into a mixed fraction!

^exactly what i wrote

1 5/6 is the answer

@HammerG Yes.

okay don't want to get confused lol.

All I can tell you is that @lgbasallote and I got the same answer, Wolfram Alpha got the same answer, and we showed how we got it, whereas Raja99 hasn't shown any of his work.

@igbasallote is correct

Can one person write out the whole problem step by step so i can kno exatly how to it an whats the answer is

can one person show the whole work for this work please
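The disputed arithmetic above is easy to check mechanically. The following snippet is not part of the original thread; it just replays the accepted derivation (16/3 minus 7/2) using Python's exact rational arithmetic:

```python
from fractions import Fraction

five_and_third = Fraction(5) + Fraction(1, 3)    # 5 1/3 = 16/3
three_and_half = Fraction(3) + Fraction(9, 18)   # 9/18 reduces to 1/2, so 3 9/18 = 7/2
result = five_and_third - three_and_half

print(result)                                        # -> 11/6
print(divmod(result.numerator, result.denominator))  # -> (1, 5), i.e. 1 5/6
```

So answer A (1 5/6) is confirmed, matching the Wolfram Alpha check mentioned in the thread.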
Need help!!!! 2.68

01-30-2005, 07:02 PM
Is 2.68 a Real number, whole number, natural number, integer, or rational number???

01-30-2005, 07:03 PM
An integer is a whole number. But it's not a whole number, because it has a decimal. And what do you mean, is it a real number? It's not a mirage.

01-30-2005, 07:04 PM
So what is 2.68??

01-30-2005, 07:05 PM
It's my house number

01-30-2005, 07:05 PM
Less than 3. I don't know, a decimal.

01-30-2005, 07:05 PM
Nothing, you watched too much TV and they entered that in your head. Don't believe what they say!

01-30-2005, 07:07 PM
BTW - real numbers are all rational and irrational numbers; any number that can be written using decimals. So then 2.68 would be a real number correct??

01-30-2005, 07:07 PM
Uhh.. I suppose so. Why do you need to know, anyway?

01-30-2005, 07:10 PM
It was on an assignment and I wasn't sure of the question. Until i came here!

01-30-2005, 07:11 PM
Pay more attention in class!

01-30-2005, 07:17 PM
if refering to time its 2.6 seconds if in numbers its either money or just decimals

01-30-2005, 07:23 PM
Hey you didn't know it!

Trip Boy
01-30-2005, 07:31 PM
Try posting on some math site, they usual jump on the opportunity to solve nerd shit.

01-30-2005, 08:09 PM
Hey you didn't know it!
chill out Im part stupid I smoked weed in junior high thru senior year at high school

Trip Boy
01-30-2005, 08:10 PM
I believe you.

01-30-2005, 08:32 PM
Yes, it's a real number.

Unnatural Disaster
01-30-2005, 08:46 PM
2.68 is a real number

Trip Boy
01-30-2005, 08:49 PM
Let's see how many times people will reply with the answer that he found out on the first page.

01-30-2005, 09:16 PM
It's a rational number. Which is a sub-category of real numbers. How old are you?

01-30-2005, 10:11 PM
it's rational, it's real, it's surreal, it's.. um... ok, i guess that's all the common sets of numbers it's part of.

01-30-2005, 10:18 PM
Let's see how many times people will reply with the answer that he found out on the first page.
so far we have three

01-30-2005, 10:35 PM

Trip Boy
01-30-2005, 10:46 PM
I hate your avatar.

Noodles is gay
01-31-2005, 05:55 AM
Is 2.68 a Real number, whole number, natural number, integer, or rational number???
It's a real number. Process of elimination:
- Not whole because of decimal.
- Not natural because a natural number is any of the numbers 0,1,2,3,4, that can be used to count the members of a set; the nonnegative integers.
- Not integer because of decimal.
- not a rational number because a rational number is any real number of the form a/b, where a and b are integers and b is not zero.
Hope that helps. :cool:
EDIT: I know it's already been answered but i'm right and i just wanted to answer anyway.

The Talking Pie
01-31-2005, 08:03 AM
It's a good ol' floating point value http://e.deviantart.com/emoticons/l/lol.gif

01-31-2005, 10:28 AM
- not a rational number because a rational number is any real number of the form a/b, where a and b are integers and b is not zero.
umm... 268/100?

01-31-2005, 10:32 AM
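The last substantive reply has it right: 2.68 = 268/100, so it is rational (and therefore real), contrary to the "process of elimination" post. As an aside not in the original thread, Python's fractions module does the reduction directly:

```python
from fractions import Fraction

x = Fraction("2.68")            # parses the decimal string exactly
print(x)                        # -> 67/25
print(x == Fraction(268, 100))  # -> True: 268/100 reduces to 67/25
# So 2.68 is rational and real, but not a whole number, natural number, or integer.
```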
Los Altos Hills, CA Science Tutor

Find a Los Altos Hills, CA Science Tutor

...I also am talented at breaking down difficult material and explaining it in a way that is easy to understand, tailored to the level the student is at. As I've always said, "If you can't explain it to an intelligent 12 year old, then you don't really understand it." I explain to my students because I u...
24 Subjects: including organic chemistry, ACT Science, anatomy, philosophy

Hi, my name is Dean. I have been tutoring for the past eleven years in Physics, Math, and Chemistry. I started tutoring when I was an undergrad in Electrical Engineering at UC Berkeley.
11 Subjects: including chemistry, physics, geometry, calculus

...I have extensive teaching experience in college-level general chemistry, biochemistry and neuroscience from three different universities, as well as experience in tutoring individuals at both the high school and college levels. My goal is to help any student understand, summarize, and prepare fo...
18 Subjects: including physics, zoology, botany, genetics

...I have tutored several students to develop their problem solving skills, to improve their grades, and to establish a solid foundation for their future algebra study. I graduated from high school as the top student in my class, especially in the math subject. I have tutored several high school students...
15 Subjects: including organic chemistry, Chinese, calculus, chemistry

My name is Jim and I am a PhD student studying Organic Chemistry at Stanford. I love teaching and I have experience tutoring students involved in all levels of Organic Chemistry. I have also helped teach both lab and lecture for undergraduate Organic Chemistry classes.
2 Subjects: including chemistry, organic chemistry
Re: sample rate

> Although you might be correct for a frequency of f when the sampling
> frequency is 2f, the theorem correctly stated says that it will be good
> for frequencies UP TO f Hz, i.e. not including f. So while you're
> for one frequency, f, the theorem holds 100% true for all frequencies
> below f and no information is lost. The mathematics bear

Hi Bill, unfortunately, that's just not correct. Most of the information is lost even for an input frequency of 20KHz in a 44.1KHz digital system. The fact is, the ONLY information that is preserved is the frequency. The volume level is somewhat preserved. The phase information is completely lost. It's not until you get down closer to 1/4 of the sampling frequency that you start preserving the phase and volume rather faithfully.

> BTW, I dare anyone to tell me they can HEAR that 20kHz has a wrong phase
> relationship in a system sampled at 40kHz. Plus, in the real world,

I agree with you here Bill. I doubt that _I_ could hear the difference in phase. This is especially true when you consider the equipment we have to work with these days (amplifiers, speakers, etc). However, maybe we're going to have some kind of really incredible sound recreation available to the common man, and when that day comes I might want to listen to the stuff I've recorded right now with extremely high quality. That's why a lot of people favor 192KHz 24bit recording.

One more thing - there is so much confusion abounding about digital audio that I have decided to make an Excel spreadsheet which anyone can play with to "see" firsthand how a sine wave might get captured in a digital system. Just email me, and I'll send it to you - it's only 92Kb but I don't think I can attach this to Looper's Delight.
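The one case both posters agree on — a sine at exactly half the sampling rate — is easy to demonstrate numerically. This sketch is an added illustration, not the spreadsheet mentioned above: it samples sin(2*pi*f*t + phi) at fs = 2f and shows that the samples collapse to (-1)^n * sin(phi), so at that boundary frequency the captured amplitude depends entirely on where the samples happen to land relative to the wave's phase.

```python
import math

fs = 44100.0          # sample rate (Hz)
f = fs / 2.0          # tone at exactly the Nyquist frequency

def sample(phase, n_samples=8):
    """Sample sin(2*pi*f*t + phase) at times t = n/fs."""
    return [math.sin(2 * math.pi * f * n / fs + phase) for n in range(n_samples)]

# Phase 0: every sample is sin(pi*n) = 0 -- the tone vanishes entirely.
print([round(s, 9) for s in sample(0.0)])

# Phase pi/2: samples are cos(pi*n) = +1, -1, +1, ... -- full scale.
print([round(s, 9) for s in sample(math.pi / 2)])
```

For tones strictly below fs/2 the samples no longer repeat in this degenerate way, which is why the Nyquist statement is phrased as "up to, not including" half the sample rate.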
Smooth characters of Smooth characters of \(p\)-adic fields Let \(F\) be a finite extension of \(\QQ_p\). Then we may consider the group of smooth (i.e. locally constant) group homomorphisms \(F^\times \to L^\times\), for \(L\) any field. Such characters are important since they can be used to parametrise smooth representations of \(\mathrm{GL}_2(\QQ_p)\), which arise as the local components of modular forms. This module contains classes to represent such characters when \(F\) is \(\QQ_p\) or a quadratic extension. In the latter case, we choose a quadratic extension \(K\) of \(\QQ\) whose completion at \ (p\) is \(F\), and use Sage’s wrappers of the Pari idealstar and ideallog methods to work in the finite group \(\mathcal{O}_K / p^c\) for \(c \ge 0\). An example with characters of \(\QQ_7\): sage: from sage.modular.local_comp.smoothchar import SmoothCharacterGroupQp sage: K.<z> = CyclotomicField(42) sage: G = SmoothCharacterGroupQp(7, K) sage: G.unit_gens(2), G.exponents(2) ([3, 7], [42, 0]) The output of the last line means that the group \(\QQ_7^\times / (1 + 7^2 \ZZ_7)\) is isomorphic to \(C_{42} \times \ZZ\), with the two factors being generated by \(3\) and \(7\) respectively. We create a character by specifying the images of these generators: sage: chi = G.character(2, [z^5, 11 + z]); chi Character of Q_7*, of level 2, mapping 3 |--> z^5, 7 |--> z + 11 sage: chi(4) sage: chi(42) z^10 + 11*z^9 Characters are themselves group elements, and basic arithmetic on them works: sage: chi**3 Character of Q_7*, of level 2, mapping 3 |--> z^8 - z, 7 |--> z^3 + 33*z^2 + 363*z + 1331 sage: chi.multiplicative_order() class sage.modular.local_comp.smoothchar.SmoothCharacterGeneric(parent, c, values_on_gens) Bases: sage.structure.element.MultiplicativeGroupElement A smooth (i.e. locally constant) character of \(F^\times\), for \(F\) some finite extension of \(\QQ_p\). 
class sage.modular.local_comp.smoothchar.SmoothCharacterGroupGeneric(p, base_ring) Bases: sage.structure.parent_base.ParentWithBase The group of smooth (i.e. locally constant) characters of a \(p\)-adic field, with values in some ring \(R\). This is an abstract base class and should not be instantiated directly. alias of SmoothCharacterGeneric Return the character group of the same field, but with values in a new coefficient ring into which the old coefficient ring coerces. An error will be raised if there is no coercion map from the old coefficient ring to the new one. sage: from sage.modular.local_comp.smoothchar import SmoothCharacterGroupQp sage: G = SmoothCharacterGroupQp(3, QQ) sage: G.base_extend(QQbar) Group of smooth characters of Q_3* with values in Algebraic Field sage: G.base_extend(Zmod(3)) Traceback (most recent call last): TypeError: no canonical coercion from Rational Field to Ring of integers modulo 3 Return the character group of the same field, but with values in a different coefficient ring. To be implemented by all derived classes (since the generic base class can’t know the sage: from sage.modular.local_comp.smoothchar import SmoothCharacterGroupGeneric sage: SmoothCharacterGroupGeneric(3, QQ).change_ring(ZZ) Traceback (most recent call last): NotImplementedError: <abstract method change_ring at ...> character(level, values_on_gens) Return the unique character of the given level whose values on the generators returned by self.unit_gens(level) are values_on_gens. ☆ level (integer) an integer \(\ge 0\) ☆ values_on_gens (sequence) a sequence of elements of length equal to the length of self.unit_gens(level). The values should be convertible (that is, possibly noncanonically) into the base ring of self; they should all be units, and all but the last must be roots of unity (of the orders given by self.exponents(level). The character returned may have level less than level in general. 
sage: from sage.modular.local_comp.smoothchar import SmoothCharacterGroupQp sage: K.<z> = CyclotomicField(42) sage: G = SmoothCharacterGroupQp(7, K) sage: G.character(2, [z^6, 8]) Character of Q_7*, of level 2, mapping 3 |--> z^6, 7 |--> 8 sage: G.character(2, [z^7, 8]) Character of Q_7*, of level 1, mapping 3 |--> z^7, 7 |--> 8 sage: G.character(1, [z, 1]) Traceback (most recent call last): ValueError: value on generator 3 (=z) should be a root of unity of order 6 sage: G.character(1, [1, 0]) Traceback (most recent call last): ValueError: value on uniformiser 7 (=0) should be a unit An example with a funky coefficient ring: sage: G = SmoothCharacterGroupQp(7, Zmod(9)) sage: G.character(1, [2, 2]) Character of Q_7*, of level 1, mapping 3 |--> 2, 7 |--> 2 sage: G.character(1, [2, 3]) Traceback (most recent call last): ValueError: value on uniformiser 7 (=3) should be a unit sage: G.character(1, [2]) Traceback (most recent call last): AssertionError: 2 images must be given Calculate the character of \(K^\times\) given by \(\chi \circ \mathrm{Norm}_{K/\QQ_p}\). Here \(K\) should be a quadratic extension and \(\chi\) a character of \(\QQ_p^\times\). 
When \(K\) is the unramified quadratic extension, the level of the new character is the same as the old: sage: from sage.modular.local_comp.smoothchar import SmoothCharacterGroupQp, SmoothCharacterGroupRamifiedQuadratic, SmoothCharacterGroupUnramifiedQuadratic sage: K.<w> = CyclotomicField(6) sage: G = SmoothCharacterGroupQp(3, K) sage: chi = G.character(2, [w, 5]) sage: H = SmoothCharacterGroupUnramifiedQuadratic(3, K) sage: H.compose_with_norm(chi) Character of unramified extension Q_3(s)* (s^2 + 2*s + 2 = 0), of level 2, mapping -2*s |--> -1, 4 |--> -w, 3*s + 1 |--> w - 1, 3 |--> 25 In ramified cases, the level of the new character may be larger: sage: H = SmoothCharacterGroupRamifiedQuadratic(3, 0, K) sage: H.compose_with_norm(chi) Character of ramified extension Q_3(s)* (s^2 - 3 = 0), of level 3, mapping 2 |--> w - 1, s + 1 |--> -w, s |--> -5 On the other hand, since norm is not surjective, the result can even be trivial: sage: chi = G.character(1, [-1, -1]); chi Character of Q_3*, of level 1, mapping 2 |--> -1, 3 |--> -1 sage: H.compose_with_norm(chi) Character of ramified extension Q_3(s)* (s^2 - 3 = 0), of level 0, mapping s |--> 1 Given an element \(x \in F^\times\) (lying in the number field \(K\) of which \(F\) is a completion, see module docstring), express the class of \(x\) in terms of the generators of \(F^\times / (1 + \mathfrak{p}^c)^\times\) returned by unit_gens(). This should be overridden by all derived classes. The method should first attempt to canonically coerce \(x\) into self.number_field(), and check that the result is not zero. sage: from sage.modular.local_comp.smoothchar import SmoothCharacterGroupGeneric sage: SmoothCharacterGroupGeneric(3, QQ).discrete_log(3) Traceback (most recent call last): NotImplementedError: <abstract method discrete_log at ...> The orders \(n_1, \dots, n_d\) of the generators \(x_i\) of \(F^\times / (1 + \mathfrak{p}^c)^\times\) returned by unit_gens(). 
sage: from sage.modular.local_comp.smoothchar import SmoothCharacterGroupGeneric sage: SmoothCharacterGroupGeneric(3, QQ).exponents(3) Traceback (most recent call last): NotImplementedError: <abstract method exponents at ...> Return the level-th power of the maximal ideal of the ring of integers of the p-adic field. Since we approximate by using number field arithmetic, what is actually returned is an ideal in a number field. sage: from sage.modular.local_comp.smoothchar import SmoothCharacterGroupGeneric sage: SmoothCharacterGroupGeneric(3, QQ).ideal(3) Traceback (most recent call last): NotImplementedError: <abstract method ideal at ...> The residue characteristic of the underlying field. sage: from sage.modular.local_comp.smoothchar import SmoothCharacterGroupGeneric sage: SmoothCharacterGroupGeneric(3, QQ).prime() A set of elements of \((\mathcal{O}_F / \mathfrak{p}^c)^\times\) generating the kernel of the reduction map to \((\mathcal{O}_F / \mathfrak{p}^{c-1})^\times\). sage: from sage.modular.local_comp.smoothchar import SmoothCharacterGroupGeneric sage: SmoothCharacterGroupGeneric(3, QQ).subgroup_gens(3) Traceback (most recent call last): NotImplementedError: <abstract method subgroup_gens at ...> A list of generators \(x_1, \dots, x_d\) of the abelian group \(F^\times / (1 + \mathfrak{p}^c)^\times\), where \(c\) is the given level, satisfying no relations other than \(x_i^{n_i} = 1\) for each \(i\) (where the integers \(n_i\) are returned by exponents()). We adopt the convention that the final generator \(x_d\) is a uniformiser (and \(n_d = 0\)). 
sage: from sage.modular.local_comp.smoothchar import SmoothCharacterGroupGeneric sage: SmoothCharacterGroupGeneric(3, QQ).unit_gens(3) Traceback (most recent call last): NotImplementedError: <abstract method unit_gens at ...> class sage.modular.local_comp.smoothchar.SmoothCharacterGroupQp(p, base_ring) Bases: sage.modular.local_comp.smoothchar.SmoothCharacterGroupGeneric The group of smooth characters of \(\QQ_p^\times\), with values in some fixed base ring. sage: from sage.modular.local_comp.smoothchar import SmoothCharacterGroupQp sage: G = SmoothCharacterGroupQp(7, QQ); G Group of smooth characters of Q_7* with values in Rational Field sage: TestSuite(G).run() sage: G == loads(dumps(G)) class sage.modular.local_comp.smoothchar.SmoothCharacterGroupRamifiedQuadratic(prime, flag, base_ring, names='s') Bases: sage.modular.local_comp.smoothchar.SmoothCharacterGroupGeneric The group of smooth characters of \(K^\times\), where \(K\) is a ramified quadratic extension of \(\QQ_p\), and \(p \ne 2\). class sage.modular.local_comp.smoothchar.SmoothCharacterGroupUnramifiedQuadratic(prime, base_ring, names='s') Bases: sage.modular.local_comp.smoothchar.SmoothCharacterGroupGeneric The group of smooth characters of \(\QQ_{p^2}^\times\), where \(\QQ_{p^2}\) is the unique unramified quadratic extension of \(\QQ_p\). We represent \(\QQ_{p^2}^\times\) internally as the completion at the prime above \(p\) of a quadratic number field, defined by (the obvious lift to \(\ZZ\) of) the Conway polynomial modulo \(p\) of degree 2. sage: from sage.modular.local_comp.smoothchar import SmoothCharacterGroupUnramifiedQuadratic sage: G = SmoothCharacterGroupUnramifiedQuadratic(3, QQ); G Group of smooth characters of unramified extension Q_3(s)* (s^2 + 2*s + 2 = 0) with values in Rational Field sage: G.unit_gens(3) [-11*s, 4, 3*s + 1, 3] sage: TestSuite(G).run() sage: TestSuite(SmoothCharacterGroupUnramifiedQuadratic(2, QQ)).run()
Help with Quadratic Function Equations please

March 14th 2010, 04:06 PM #1 (Mar 2010)

If I am given the coordinates of the vertex of a parabola and the coordinates of a point that it passes through, how do I find the equation?

Reply:

$y = a(x-h)^2 + k$

$(h,k)$ is the vertex, $(x_1,y_1)$ is the given point ... sub in $x_1$ for $x$ and $y_1$ for $y$ and calculate the coefficient $a$.

Reply:

Thank you! That helped a lot
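The substitution described in the reply can be carried out mechanically. Here is a small Python sketch; the function name and the sample vertex and point are made up for illustration:

```python
def coefficient_from_vertex(vertex, point):
    """Solve y1 = a*(x1 - h)^2 + k for a, given the vertex (h, k)
    and a point (x1, y1) that the parabola passes through."""
    h, k = vertex
    x1, y1 = point
    return (y1 - k) / (x1 - h) ** 2

# Example: vertex (2, 3), passing through the point (4, 11)
a = coefficient_from_vertex((2, 3), (4, 11))
print(f"y = {a}*(x - 2)^2 + 3")
```

For that example, a = (11 - 3) / (4 - 2)^2 = 2, so the equation is y = 2(x - 2)^2 + 3; substituting x = 4 back in gives 2*4 + 3 = 11, as expected.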
Making Economic Sense
by Murray Rothbard

Chapter 6: Statistics: Destroyed From Within?

As improbable as this may seem now, I was at one time in college a statistics major. After taking all the undergraduate courses in statistics, I enrolled in a graduate course in mathematical statistics at Columbia with the eminent Harold Hotelling, one of the founders of modern mathematical economics. After listening to several lectures of Hotelling, I experienced an epiphany: the sudden realization that the entire "science" of statistical inference rests on one crucial assumption, and that that assumption is utterly groundless. I walked out of the Hotelling course, and out of the world of statistics, never to return.

Statistics, of course, is far more than the mere collection of data. Statistical inference is the conclusions one can draw from that data. In particular, since, apart from the decennial US census of population, we never know all the data, our conclusions must rest on very small samples drawn from the population. After taking our sample or samples, we have to find a way to make statements about the population as a whole.

For example, suppose we wish to conclude something about the average height of the American male population. Since there is no way that we can mobilize every male American and measure everyone's height, we take samples of a small number, say 500 people, selected in various ways, from which we presume to say what the average American's height may be.

In the science of statistics, the way we move from our known samples to the unknown population is to make one crucial assumption: that the samples will, in any and all cases, whether we are dealing with height or unemployment or who is going to vote for this or that candidate, be distributed around the population figure according to the so-called "normal curve." The normal curve is a symmetrical, bell-shaped curve familiar to all statistics textbooks.
Because all samples are assumed to fall around the population figure according to this curve, the statistician feels justified in asserting, from his one or more limited samples, that the height of the American population, or the unemployment rate, or whatever, is definitely XYZ within a "confidence level" of 90 or 95 %. In short, if, for example, a sample height for the average male is 5 feet 9 inches, 90 or 95 out of every 100 such samples will be within a certain definite range of 5 feet 9 inches. These precise figures are arrived at simply by assuming that all samples are distributed around the population according to this normal curve. It is because of the properties of the normal curve, for example, that the election pollsters could assert, with overwhelming confidence, that Bush was favored by a certain percentage of voters, and Dukakis by another percentage, all within "three percentage points" or "five percentage points" of "error." It is the normal curve that permits statisticians not to claim absolute knowledge of all population figures precisely but instead to claim such knowledge within a few percentage points. Well, what is the evidence for this vital assumption of distribution around a normal curve? None whatever. It is a purely mystical act of faith. In my old statistics text, the only "evidence" for the universal truth of the normal curve was the statement that if good riflemen shoot to hit a bullseye, the shots will tend to be distributed around the target in something like a normal curve. On this incredibly flimsy basis rests an assumption vital to the validity of all statistical inference. Unfortunately, the social sciences tend to follow the same law that the late Dr. Robert Mendelsohn has shown is adopted in medicine: never drop any procedure, no matter how faulty, until a better one is offered in its place. And now it seems that the entire fallacious structure of inference built on the normal curve has been rendered obsolete by high-tech. 
Ten years ago, Stanford statistician Bradley Efron used high-speed computers to generate "artificial data sets" based on an original sample, and to make the millions of numerical calculations necessary to arrive at a population estimate without using the normal curve, or any other arbitrary, mathematical assumption of how samples are distributed about the unknown population figure. After a decade of discussion and tinkering, statisticians have agreed on methods of practical use of this "bootstrap" method, and it is now beginning to take over the profession. Stanford statistician Jerome H. Friedman, one of the pioneers of the new method, calls it "the most important new idea in statistics in the last 20 years, and probably the last 50." At this point, statisticians are finally willing to let the cat out of the bag. Friedman now concedes that "data don't always follow bell-shaped curves, and when they don't, you make a mistake" with the standard methods. In fact, he added that "the data frequently are distributed quite differently than in bell-shaped curves." So that's it; now we find that the normal curve Emperor has no clothes after all. The old mystical faith can now be abandoned; the Normal Curve god is dead at long last.
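The resampling idea behind Efron's bootstrap is easy to sketch. The following Python toy is an illustration only, not Efron's code; the sample data and the 90% confidence level are invented. It computes a percentile bootstrap interval for a mean without invoking the normal curve anywhere:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.10, seed=0):
    """Percentile bootstrap: resample the data with replacement many times,
    recompute the statistic on each artificial data set, and read off the
    empirical quantiles. No bell-curve assumption is made anywhere."""
    rng = random.Random(seed)
    reps = sorted(stat(rng.choices(data, k=len(data))) for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot)]
    return lo, hi

heights = [69.1, 70.3, 68.5, 71.2, 69.8, 67.9, 70.6, 69.4]  # made-up sample
lo, hi = bootstrap_ci(heights)
print(f"90% bootstrap interval for the mean height: [{lo:.2f}, {hi:.2f}]")
```

The interval comes straight from the resampled distribution itself, which is exactly the point of the "bootstrap" Rothbard describes: the shape of the sampling distribution is estimated by brute computation rather than assumed.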
Select the inequality that corresponds to the given graph.

2x − 5y ≥ −10
x − y ≥ 3
2x − y < −4
x + 2y ≥ −8

[Image below]

Replies:

- none match up.
- if the line is solid, it should have a line underneath the inequality symbol.
- [drawing omitted] from the graph
- sorry about that, my answers got messed up: 2x − 5y greater than or equal to −10; x − y greater than or equal to 3; 2x − y < −4; x + 2y greater than or equal to −8
- sorry, it is your second choice (x − y ≥ 3).
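The graph itself did not survive, so the following Python sketch simply takes the accepted answer (the second choice, x − y ≥ 3) and shows a quick way to sanity-check that a shaded region matches an inequality: test a few points on and around the boundary line. The specific test points are made up.

```python
def second_choice(x, y):
    # the accepted answer from the thread: x - y >= 3
    return x - y >= 3

# The boundary line x - y = 3 crosses the axes at (3, 0) and (0, -3);
# a solid boundary means points on the line itself are included.
assert second_choice(3, 0)      # on the boundary
assert second_choice(0, -3)     # on the boundary
assert second_choice(6, 1)      # inside the shaded half-plane
assert not second_choice(0, 0)  # the origin is excluded
print("all checks passed")
```

Checking whether the origin satisfies the inequality is the usual shortcut for deciding which side of the line is shaded.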
Nonlinear Program
Results 1 - 10 of 35

1. Data Mining and Knowledge Discovery, 1998. Cited by 2272 (11 self).

The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.

2. 2004. Cited by 473 (2 self).

In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.

3. In IEEE Infocom, 2004. Cited by 80 (4 self).

Abstract: We study the problem of scheduling packet transmissions for data gathering in wireless sensor networks. The focus is to explore the energy-latency tradeoffs in wireless communication using techniques such as modulation scaling. The data aggregation tree, a multiple-source single-sink communication paradigm, is employed for abstracting the packet flow. We consider a real-time scenario where the data gathering must be performed within a specified latency constraint. We present algorithms to minimize the overall energy dissipation of the sensor nodes in the aggregation tree subject to the latency constraint. For the off-line problem, we propose (a) a numerical algorithm for the optimal solution, and (b) a pseudo-polynomial time approximation algorithm based on dynamic programming. We also discuss techniques for handling interference among the sensor nodes. Simulations have been conducted for both long-range communication and short-range communication. The simulation results show that compared with the classic shutdown technique, between 20% and 90% energy savings can be achieved by our techniques, under different settings of several key system parameters. We also develop an on-line distributed protocol that relies only on the local information available at each sensor node within the aggregation tree. Simulation results show that between 15% and 90% energy conservation can be achieved by the on-line protocol. The adaptability of the protocol with respect to variations in the packet size and latency constraint is also demonstrated through several run-time scenarios. Index terms: System design, Mathematical optimization.

4. Mathematical Programming, 2003. Cited by 51 (1 self).

This work addresses the development of an efficient solution strategy for obtaining global optima of continuous, integer, and mixed-integer nonlinear programs. Towards this end, we develop novel relaxation schemes, range reduction tests, and branching strategies which we incorporate into the prototypical branch-and-bound algorithm. In the theoretical...

5. 1997. Cited by 21 (8 self).

A model algorithm based on the successive quadratic programming method for solving the general nonlinear programming problem is presented. The objective function and the constraints of the problem are only required to be differentiable and their gradients to satisfy a Lipschitz condition. The strategy for obtaining global convergence is based on the trust region approach. The merit function is a type of augmented Lagrangian. A new updating scheme is introduced for the penalty parameter, by means of which monotone increase is not necessary. Global convergence results are proved and numerical experiments are presented. Key words: Nonlinear programming, successive quadratic programming, trust regions, augmented Lagrangians, Lipschitz conditions. (Department of Applied Mathematics, IMECC-UNICAMP, University of Campinas, CP 6065, 13081-970 Campinas SP, Brazil, chico@ime.unicamp.br. This author was supported by FAPESP (Grant 903724-6), FINEP and FAEP-UNICAMP.)

6. SIAM J. Optim, 1998. Cited by 15 (4 self).

The aim of this paper is to define a new class of minimization algorithms for solving large scale unconstrained problems. In particular we describe a stabilization framework, based on a curvilinear linesearch, which uses a combination of a Newton-type direction and a negative curvature direction. The motivation for using a negative curvature direction is that of taking into account local nonconvexity of the objective function. On the basis of this framework, we propose an algorithm which uses the Lanczos method for determining at each iteration both a Newton-type direction and an effective negative curvature direction. The results of an extensive numerical testing are reported together with a comparison with the LANCELOT package. These results show that the algorithm is very competitive, and this seems to indicate that the proposed approach is promising.

7. Cited by 12 (5 self).

We describe some first- and second-order optimality conditions for mathematical programs with equilibrium constraints (MPEC). Mathematical programs with parametric nonlinear complementarity constraints are the focus. Of interest is the result that under a linear independence assumption that is standard in nonlinear programming, the otherwise combinatorial problem of checking whether a point is stationary for an MPEC is reduced to checking stationarity of a single nonlinear program. We also present a piecewise sequential quadratic programming (PSQP) algorithm for solving MPEC. Local quadratic convergence is shown under the linear independence assumption and a second-order sufficient condition. Some computational results are given. Key words: MPEC, bilevel program, nonlinear complementarity problem, nonlinear program, first- and second-order optimality conditions, linear independence constraint qualification, sequential quadratic programming, quadratic convergence.

8. Computational Optimization and Applications, 1995. Cited by 12 (4 self).

We present a new algorithmic framework for solving unconstrained minimization problems that incorporates a curvilinear linesearch. The search direction used in our framework is a combination of an approximate Newton direction and a direction of negative curvature. Global convergence to a stationary point where the Hessian matrix is positive semidefinite is exhibited for this class of algorithms by means of a nonmonotone stabilization strategy. An implementation using the Bunch-Parlett decomposition is shown to outperform several other techniques on a large class of test problems.

9. In Proc. IEEE Global Telecommun. Conf, 2003. Cited by 8 (5 self).

Abstract: In this paper we compare cyclic prefix (CP) based single and multicarrier block transmission schemes in frequency-selective fading channels. Analytical comparison shows that at moderate-to-high signal-to-noise ratio (SNR), the uncoded error rate performance of multicarrier transmission is inferior to that of single carrier. We propose new minimum bit error rate (MBER) power loading algorithms for multicarrier transmission. It is also shown that a simpler approximate MBER (AMBER) power loading method has performance close to that of the optimum MBER scheme. Performance of a variety of methods is compared analytically and verified by simulations.

10. Mathematics of Operations Research, 1998. Cited by 8 (1 self).

We propose a new algorithm for the nonlinear inequality constrained minimization problem, and prove that it generates a sequence converging to points satisfying the KKT second order necessary conditions for optimality. The algorithm is a line search algorithm using directions of negative curvature, and it can be viewed as a non-trivial extension of corresponding known techniques from unconstrained to constrained problems. The main tools employed in the definition and in the analysis of the algorithm are a differentiable exact penalty function and results from the theory of LC^1 functions. Key words: Inequality constrained optimization, KKT second order necessary conditions, penalty function, LC^1 function, negative curvature direction.
Find a Manchester, NH Algebra 2 Tutor

...Not only does it lay the foundation for more advanced courses, it also teaches the student how to take real-life problems and translate them into the language of mathematics, where they can be solved by some simple manipulations. I tutor all secondary-level math subjects, and always seem to have ...
44 Subjects: including algebra 2, chemistry, writing, calculus

...As someone who has been teaching algebra for many years, I often see deficiencies in prealgebra skills that need to be remedied. I enjoy drilling those who study math on the concepts in precalculus that give them the most trouble. With practice, these calculations and simplifications become second nature.
55 Subjects: including algebra 2, English, reading, algebra 1

...I graduated from Nagoya University, Japan, in 1996, received a Master's degree in physics from Nagoya University in 1998, and received a PhD in science (physics) in 2001. I was a physics tutor in Japan for a high school student for one year, and a tutor for math and other sciences for a few years...
16 Subjects: including algebra 2, calculus, physics, geometry

...Most of what they really needed was a plan of study and to learn organizational skills. I have written a guide for students and parents on how to achieve success in school. Included in this booklet was how to study for tests, write papers, and how to organize classwork for study at a later time.
24 Subjects: including algebra 2, writing, geometry, GED

...Please contact me if you are interested in having someone like myself help your child, a person who is patient, understanding and who knows their stuff! - Jessica. I've taught math for almost a decade; I've taught everything from pre-algebra to calculus and statistics. The topics that are covered in the MTLE are ones that I have taught and am very familiar with.
6 Subjects: including algebra 2, geometry, algebra 1, precalculus
Kinematics Problem [Related to Velocity and Displacement]

October 8th 2009, 04:19 PM #1 (Junior Member, Oct 2007)

I have the following problem...

A person throws a ball upward into the air with an initial velocity of 15.0 m/s. Calculate: how high it goes, how long it is in the air before it comes back to his hand, the velocity of the ball when it returns to the thrower's hand, and the time when the ball passes a point 8.0 m above the person's hand.

With so little given [only the initial velocity is given, and I know that gravity plays a part, but I am still stumped], I'm not sure how to proceed with this. Help is appreciated. Thank you!

Reply: what kinematics equations are you familiar with?

Reply: All the basic ones...
$a$ = acceleration
$v_f$ = final velocity
$v_0$ = initial velocity
$d$ = displacement
$t$ = time

Reply: you'll need at a minimum these two (modified for vertical motion in a uniform gravitational field) to solve your problem ...
$\Delta y = v_o t - \frac{1}{2}gt^2$
$v_f = v_o - gt$

Reply: Just a question--what is the $y$ variable in your case? The displacement?
Okay, I'm just a *bit* confused, but I'll try. This was actually a bonus question that I'm trying to solve. :P
So if I want to find the time first, do I replace the second equation you gave me with the first equation I listed?
$g={{(v_o - gt)-v_o}\over{t}}$

Reply:
(1) how high it goes ...
at the top of its trajectory, $v = 0$
$(v_f)^2 = (v_o)^2 - 2g(\Delta y)$
$0 = 15^2 - 2g(\Delta y)$
solve for $\Delta y$

(2) how long the ball is in the air ...
when the ball returns to its starting position, $\Delta y = 0$
$\Delta y = v_o t - \frac{1}{2}gt^2$
$0 = 15t - \frac{1}{2}gt^2$
solve for $t$

(3) velocity of the ball when it returns ...
once you have $t$ from part (2)
$v_f = v_o - gt$
$v_f = 15 - gt$
evaluate $v_f$

(4) time when the ball is 8.0 m above ...
$\Delta y = v_o t - \frac{1}{2}gt^2$
$8 = 15t - \frac{1}{2}gt^2$
solve the quadratic for $t$ ... you'll get two solutions, one time on the way up and one time on the way down.

Reply: Wow, thanks! Now I feel so stupid.
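The four steps above can be carried out numerically. Here is a short Python sketch, taking g = 9.8 m/s^2, a value the thread leaves implicit:

```python
import math

g = 9.8     # m/s^2, assumed gravitational acceleration
v0 = 15.0   # m/s, initial upward velocity

# (1) maximum height: 0 = v0^2 - 2*g*dy  =>  dy = v0^2 / (2g)
h_max = v0 ** 2 / (2 * g)

# (2) time of flight: 0 = v0*t - (1/2)*g*t^2  =>  t = 2*v0 / g
t_total = 2 * v0 / g

# (3) return velocity: v_f = v0 - g*t_total = -v0 (same speed, downward)
v_return = v0 - g * t_total

# (4) times at 8.0 m: 8 = v0*t - (1/2)*g*t^2, a quadratic in t
disc = math.sqrt(v0 ** 2 - 2 * g * 8.0)
t_up, t_down = (v0 - disc) / g, (v0 + disc) / g

print(f"max height {h_max:.2f} m, flight time {t_total:.2f} s, "
      f"return velocity {v_return:.1f} m/s, "
      f"passes 8 m at t = {t_up:.2f} s and t = {t_down:.2f} s")
```

With g = 9.8 this gives roughly 11.5 m, 3.06 s, -15 m/s, and the two crossing times 0.69 s (on the way up) and 2.37 s (on the way down).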
_DATA STRUCTURES GALORE - Version 2 for PLT 3xx_

Jens Axel Søgaard

This library provides functional implementations of some commonly used
data structures. The data structures can be divided into:

  - collections             (_sets_, _bags_, _heaps_)
  - associative collections (_finite maps_, _priority queues_)
  - sequences               (_stacks_, _queues_, _deques_)

[Although associative collections and sequences are included in the
PLaneT distribution, their implementations, though in working order,
should not be considered finished.]

[Update: Finite maps are now in a usable state]

The data structures are implemented in a purely functional way.
Inserting an element in, say, a collection will return a new data
structure containing both the element and all the elements of the old
collection, *without* destroying the old collection. The collections can
thus be used *persistently*, as opposed to the normal *ephemeral* way,
where the old collection is destroyed. This implies that the collections
are safe to use in multi-threaded programs and web servlets.

Data structures can have multiple implementations, and each
implementation has its own module. A "no-fuss" module exists for each
data structure, containing a reasonable default implementation, for
those cases where it is unimportant to have specific time complexities.

The names of the individual operations have been carefully chosen to
have meaning for as many data structures as possible. Element insertion
is, for example, named "insert" in all data structures (some data
structures also provide alternative, more traditional names). This
choice makes it easy to replace an unfortunate choice of data structure.
Use (prefix ...) to avoid name conflicts when working with more than one
data structure.

Many of the data structure implementations require an ordering of the
elements.
The order is represented using compare functions from SRFI 67. All
exported functions are checked using contracts; the error messages are
therefore considerably better than in the previous version of this
library.

Most algorithms are carefully explained by Chris Okasaki in the
delightful book "Purely Functional Data Structures".

[Terminology: bags are also called multi-sets; a heap is a priority
queue where the priority is the element; in a priority queue the element
and priority are distinct; a deque is a double-ended queue]

EXAMPLE (Simple)

The operation empty makes a new heap with no elements.

  ;; A heap of integers
  > (require (planet "heap.scm" ("soegaard" "galore.plt" 2 1)))
  > (find-min (insert 42 (insert 3 (heap 2 8 1))))
  1

  ;; A leftist heap of strings
  > (require (planet "leftist-heap.scm" ("soegaard" "galore.plt" 2 1)))
  > (find-min (insert "foo" (insert "bar" (empty))))
  "bar"

  ;; Bags of integers
  > (require (planet "bag.scm" ("soegaard" "galore.plt" 2 1)))
  > (elements (difference (union (bag 1 3 2) (bag 1 2 4)) (bag 3)))
  (4 2 2 1 1)

  ;; Sets of integers
  > (require (planet "set.scm" ("soegaard" "galore.plt" 2 1)))
  > (elements (difference (union (set 1 3 2) (set 1 2 4)) (set 3)))
  (1 2 4)

EXAMPLE (Heap Sort)

A simple heap sort:

  (require (planet "heap.scm" ("soegaard" "galore.plt" 2 1)))

  (define (heap-sort l)
    (heap->list (list->heap l)))

  (define (list->heap l) ; a more efficient list->heap is provided by the library
    (foldl insert (empty) l))

  (define (heap->list H)
    (define (loop H l)
      (if (empty? H)
          (reverse l)
          (loop (delete-min H) (cons (find-min H) l))))
    (loop H '()))

  (heap-sort '(3 1 4 1 5 9))

Eager comprehensions from srfi-42 are supported, so an alternative to
the above is:
H) (reverse l) (loop (delete-min H) (cons (find-min H) l)))) (loop H '())) (heap-sort '(3 1 4 1 5 9)) Eager comprehensions from srfi-42 are supported, so an alternative to the above is: (require (planet "heap.scm" ("soegaard" "galore.plt" 2 1)) (lib "42.ss" "srfi") (lib "67.ss" "srfi")) (define (heap-sort l) (heap->list (list->heap l))) (define (list->heap l) (heap-ec default-compare (: x l) x)) (define (heap->list H) (list-ec (: x H) x)) (heap-sort '(3 1 4 1 5 9)) EXAMPLE (User defined element type) The above heaps used the compare function given by the parameter current-compare to order the elements (because empty was given no arguments). The start value of current-compare is default-compare, which handles lists, booleans, characters, strings, symbols, numbers and vectors. To create a heap of user defined structs, we need to pass empty a custom compare function. (require (planet "heap.scm" ("soegaard" "galore.plt" 2 1)) (lib "67.ss" "srfi")) (define-struct hiscore (name score) (make-inspector)) (define (hiscore-compare s1 s2) ; highest scores first (number-compare (hiscore-score s2) (hiscore-score s1)) ; ties are sorted alphabetically after name (string-compare (hiscore-name s1) (hiscore-name s2)))) (find-min (insert* (list (make-hiscore "Foo" 200) (make-hiscore "Bar" 100) (make-hiscore "Qux" 300)) (empty hiscore-compare))) ; => #(struct:hiscore "Qux" 300) As an alternative one could extend the compare function given by the current-compare parameter: (let ([cmp (current-compare)]) (lambda (x1 x2) (select-compare x1 x2 [hiscore? (hiscore-compare x1 x2)] [else (cmp x1 x2)])))) (find-min (insert* (list (make-hiscore "Foo" 200) (make-hiscore "Bar" 100) (make-hiscore "Qux" 300)) ; => #(struct:hiscore "Qux" 300) Collections include sets, bags, and heaps. 
The following is a summary of the operations collections have in common:

> empty : [cmp] -> col
  Returns an object representing an empty collection of elements, ordered by the compare function cmp, or by the value of the parameter current-compare if cmp is omitted.

> insert : elm col -> col
  (insert x C) = {x} U C

> insert* : (list elm ...) col -> col
  (insert* xs C) = C U {x1, ...}, where xs = (list x1 ...)

> list->set  : [cmp] (list elm) -> col
> list->bag  : [cmp] (list elm) -> col
> list->heap : [cmp] (list elm) -> col
  (list->set xs)     = (insert* xs (empty))
  (list->set cmp xs) = (insert* xs (empty cmp))
  and similarly for the others

> singleton : [cmp] elm -> col
  (singleton x)     = (insert x (empty))
  (singleton cmp x) = (insert x (empty cmp))

> union : col col -> col
  x in (union A B) <=> x in A or x in B

Furthermore, shorthand for (insert* (list x1 ...) (empty)) is provided in terms of:

> bag  : elm ... -> bag
> heap : elm ... -> heap
> set  : elm ... -> set

> elements : col -> (list elm)
  Returns a list of all occurrences of all elements in the collection.

> find-min : col -> elm
  The call (find-min C) returns an element x such that x <= y for all y in C, where <= is determined by C's compare function. It is an error to use find-min on an empty collection.

> get : elm col -> elm
  Returns an element from col equivalent to the given element, or #f if no such element is found.

> select : col -> elm
  (select C) = x  <=>  x in C
  Selects an element of the collection. It is an error to pass an empty collection to select.

> delete : elm col -> col
  Deletes one occurrence of the element from the collection.

> delete-all : elm col -> col
  Deletes all occurrences of the element from the collection.

> delete* : (list elm) col -> col
  For each element in the list, deletes one occurrence of that element from the collection.

> delete-min : col -> col
  Deletes one occurrence of the minimal element from the collection.

> count : elm col -> natural-number
  Counts the number of occurrences of the element in the collection.

> empty? : col -> boolean
  Returns #t if the collection is empty, otherwise #f.

> member? : elm col -> boolean
  Returns #t if the collection contains an occurrence of the element.

> size : col -> natural-number
  Returns the number of elements in the collection.

> fold : (elm alpha -> alpha) alpha col -> alpha
  Let the collection C contain the elements x1, x2, ..., xn; then
    (fold kons knil C) = (kons xn ... (kons x2 (kons x1 knil)) ... )

> set?  : object -> boolean
> bag?  : object -> boolean
> heap? : object -> boolean
  Returns #t if the object is a collection of the proper type.

- - - - - - -

> (set-ec cmp <qualifier>* <expression>)
> (bag-ec cmp <qualifier>* <expression>)
> (heap-ec cmp <qualifier>* <expression>)
  The collection of values obtained by evaluating <expression> once for each binding in the sequence defined by the qualifiers. If there are no qualifiers, the result is the collection containing just the value of <expression>. The first argument cmp is the compare function used to order the elements of the collection.

  One has the following identity for set-ec (the actual implementation is more efficient):

    (set-ec cmp <qualifier>* <expression>)
    = (list->set cmp (list-ec <qualifier>* <expression>))

  Similar identities hold for bag-ec and heap-ec.

- - - - -

The generators :set, :bag, and :heap are defined.

- - - -

  > (elements (set-ec default-compare (:range i -5 5) (* i i)))
  (0 1 4 9 16 25)
  > (list-ec (: i (set 1 2 2 3)) i)
  (1 2 3)

Heaps provide efficient access to the minimum element. Currently one heap implementation is provided, namely leftist heaps, which is therefore also the default heap implementation.

  (require (planet "heap.scm" ("soegaard" "galore.plt" 2 1)))
  (require (planet "leftist-heap.scm" ("soegaard" "galore.plt" 2 1)))

Besides the common collection operations, sets also provide the customary set operations. Most set implementations provide efficient access to all elements. There are two set implementations.
One represents sets using red/black trees, the other uses sorted lists. The default set implementation is based on red/black trees.

  (require (planet "set.scm" ("soegaard" "galore.plt" 2 1)))
  (require (planet "red-black-tree-set.scm" ("soegaard" "galore.plt" 2 1)))
  (require (planet "list-set.scm" ("soegaard" "galore.plt" 2 1)))

Set operations
- - - - - - - -

Besides the common operations, sets also support the following:

> difference : set set -> set
  (difference A B) = {x in A | x not-in B}

> equal=? : set set -> boolean
  The call (equal=? A B) returns true if all elements of A are members of B, and all elements of B are members of A.

> intersection : set set -> set
  The call (intersection A B) returns the set of elements that are members of both A and B. If x in A and y in B represent the same element with respect to the compare function in question, it is unspecified whether x or y is returned. (Use intersection/combiner if you need a specific choice.)

> set : elm ... -> set
  (set x1 x2 ... xn) = (insert* (list x1 x2 ... xn) (empty))

> subset? : set set -> boolean
  The call (subset? A B) returns #t if all elements of A are members of B, otherwise #f is returned.

Sets support variations of the normal operations that allow you to control which element to keep in the case of duplicates. A combiner is a function of two arguments that receives two equivalent elements (with respect to the compare function in question) and returns the element that should be kept. The supported combiner operations are:

> insert/combiner (insert with combiner)
> insert*/combiner
> intersection/combiner
> list->set/combiner
> union/combiner

E.g. (insert*/combiner (list 1 2 3 4) (empty compare-mod2) max) will result in the set {3, 4}. The combiner is called on the pair 1 and 3, and on the pair 2 and 4.

> intersection/combiner : set set combiner -> set
  Returns the intersection of two sets.
If x in A and y in B represent the same element, then the element returned by the call (c x y), where c is the combiner, is used as the representative.

Bags (or multi-sets) keep a count of the number of occurrences of each element. Some bag implementations hold each individual element inserted into the bag - the implementation provided by this library keeps only one representative, together with a count of how many times it was inserted.

There are two bag implementations. One represents bags using red/black trees, the other uses sorted lists. The default bag implementation is based on red/black trees.

  (require (planet "bag.scm" ("soegaard" "galore.plt" 2 1)))
  (require (planet "red-black-tree-bag.scm" ("soegaard" "galore.plt" 2 1)))
  (require (planet "list-bag.scm" ("soegaard" "galore.plt" 2 1)))

Bag Operations
- - - - - - - -

Besides the common operations, bags also support the following:

> bag : elm ... -> bag
  (bag x1 x2 ... xn) = (insert* (list x1 x2 ... xn) (empty))

> difference : bag bag -> bag
  (difference A B) = (fold delete A B)

> equal=? : bag bag -> boolean
  The call (equal=? A B) returns true if A is a subbag of B and B is a subbag of A (see subbag?).

> fold/no : (elm number alpha -> alpha) alpha bag -> alpha
  Like fold, but the combining function takes 3 arguments: an element, the number of occurrences of the element, and the value accumulated so far.

> intersection : bag bag -> bag
  The call (intersection A B) returns a bag of elements that are members of both A and B. The number of occurrences of an element is the least of the numbers of occurrences in A and B. If x in A and y in B represent the same element with respect to the compare function in question, it is unspecified whether x or y is returned. (Use intersection/combiner if you need a specific choice.)

> subbag? : bag bag -> boolean
  The call (subbag? A B) returns #t if all elements of A are members of B and the number of occurrences of each element in A is less than or equal to the number of occurrences of that element in B; otherwise #f is returned.
Bags support the following combiner operations:

> insert/combiner
> insert*/combiner
> intersection/combiner
> list->set/combiner
> union/combiner

The following table shows the worst case time complexity. The O( ) has been omitted due to space considerations.

                      RB-Set     RB-Bag     Leftist-Heap   List-Set   List-Bag
    find-min          1          1          1              1          1
    delete-min        1          1          1              1          1
    get, member?      log n      log n      n              n          n
    insert            log n      log n      log n          n          n
    union             n+m        n+m        log(n+m)       n+m        n+m
    elements          n          n          n              1          n
    delete            log n      log n      log n          n          n
    delete-all        log n      log n      log n          n          n
    size              n          n          n              n          n
    set, list->set    n log(n)                             n log(n)
    bag, list->bag               n log(n)                             n log(n)
    heap, list->heap                        n

The operations empty?, empty, singleton, and select are all O(1).

The only associative collection supported in this version is finite maps.

Finite maps are indexed collections. Each entry maps one key to one element. They provide the standard collection operations, but with different contracts to account for both keys and elements.

Finite maps support the following constructors:

> empty : [cmp] -> finite-map
  Produces an empty finite map in which keys are ordered by the optional compare argument.

> singleton : [cmp] key elm -> finite-map
  Produces a finite map containing a single binding.

> insert : key elm finite-map -> finite-map
  Produces a new finite map in which the given key maps to the given element, retaining all other mappings from the original.

> delete : key finite-map -> finite-map
> delete-all : key finite-map -> finite-map
  Produces a new finite map with no mapping for the given key (but retaining all other mappings from the original).

> delete* : (list key) finite-map -> finite-map
  Produces a new finite map with no mappings for the given keys (but retaining all other mappings from the original).

Finite maps support the following predicates:

> empty? : finite-map -> boolean
  Reports whether a finite map contains mappings for any keys.

> equal=? : finite-map finite-map -> boolean
  Reports whether two finite maps contain bindings for the same keys.
> member? : key finite-map -> boolean
  Reports whether a finite map has a mapping for the given key.

Finite maps support the following accessors:

> size : finite-map -> natural-number
  Produces the number of bindings present in the finite map.

> elements : finite-map -> (list elm)
  Produces a list of all elements to which keys are mapped in the given finite map.

> count : key finite-map -> natural-number
  Produces the number of entries (0 or 1) for the given key in the given finite map.

> get : key finite-map -> elm
  Produces the element corresponding to the given key in the given finite map.

> select : finite-map -> elm
  Produces an arbitrary element from the finite map.

Finite maps support the following traversals:

> fold : (elm alpha -> alpha) alpha finite-map -> alpha
  Builds up a result by combining each finite map element with the residual.

> fold/key : (key elm alpha -> alpha) alpha finite-map -> alpha
  Builds up a result by combining corresponding keys and elements from the finite map with the residual.

Finite maps support the following set operations:

> intersection : finite-map finite-map -> finite-map
  Produces a new finite map containing all key/element bindings whose keys were present in both arguments. Which binding of the two is kept is unspecified.

> union : finite-map finite-map -> finite-map
  Produces a new finite map containing all bindings present in either argument. It is unspecified which binding is kept for keys present in both finite maps.

> difference : finite-map finite-map -> finite-map
  Produces a finite map containing all bindings in the first argument for which no corresponding binding exists in the second argument.
In the folder 'examples' you will find the following examples:

  heap-sort.scm             -- Heap example
  queens.scm                -- Heap example
  knights-tour.scm          -- Priority queue example
  primes.scm                -- Priority queue example
  greedy-set-cover.scm      -- Set example
  matching-parenthesis.scm  -- Stack example

Argument order

The argument order was chosen to resemble the order used for lists:

  (cons x xs)      (insert x xs)
  (car xs)         (select xs)
  (cdr xs)         (delete-min xs)
  (append xs ys)   (insert* xs ys) or (union xs ys)
  (delete x xs)    (delete x xs)    ; see srfi-1

The above choice makes it easy to use e.g. insert and delete in the same way as the corresponding list operations.

- The red-black tree implementation started as a port of Jean-Christophe Filliatre's OCaml implementation.
- Guillaume Marceau

[Ada] Adams, Stephen: "Implementing Sets Efficiently in a Functional Language"
[Mar] Martinez, Conrado and Roura, Salvador: "Randomized Binary Search Trees"
[Oka] Okasaki, Chris: "Purely Functional Data Structures"
Expected Value of 1/X

October 5th 2012, 02:19 PM
Expected Value of 1/X

Hey there. I've been messing around with some of the statistics involved with dice rolls, particularly in the context of tabletop roleplaying games, but I've come to a snag, mainly due to my incredibly shaky knowledge of statistics.

An elementary method which players of a game use to 'guess' how many attacks it will take to defeat an enemy is to take the expected value of the dice roll involved with calculating damage and divide the total HP (health/stamina) of the enemy by it. For example, if the monster has 20 HP and the weapon deals an average of 5 damage (as in the case of two 4-sided dice), a decent guess for how many attacks it will take to defeat the enemy on average is 20/5 = 4.

But it doesn't work out this way in practice. Because the enemy HP is divided by the random variable (damage), low values of damage affect the final result more than high values of damage. The true average number of hits required involves a more sophisticated calculation, one which I can't seem to figure out.

So the question is: if X is a discrete random variable representing the sum of the face values when rolling m n-sided dice, then what is the expected value of $\frac{h}{X}$, where h is any positive integer (the HP of the enemy, in this case), in terms of m, n and h?

(If necessary, $E(X) = \frac{m(n + 1)}{2}$.)

October 5th 2012, 02:34 PM
Re: Expected Value of 1/X

Do you still need help with this question?

October 5th 2012, 02:39 PM
Re: Expected Value of 1/X

Yes, I do. :V

October 5th 2012, 02:49 PM
Re: Expected Value of 1/X

I will think about it; it is a nice question. The problem is that you can't do E(h/X) = h/E(X). I will post an answer tomorrow (if I have it).

October 5th 2012, 08:08 PM
Re: Expected Value of 1/X

Hi again, I did the problem for m = 2 and n = 4 (two 4-sided dice):

E(h/X) = h E(1/X) = h(0.226711)

Let me know.....
If I find a generic formula I will let you know.

October 6th 2012, 03:03 PM
Re: Expected Value of 1/X

I found a generic formula for 2 dice (2 dice, n-sided).

Let me know if that is useful....
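The 0.226711 quoted in the thread can be checked by direct enumeration: X takes each of the n^m equally likely dice outcomes, so E[1/X] is just the average of the reciprocals of the sums. A minimal sketch (not from the thread; the function name is my own):

```python
from fractions import Fraction
from itertools import product

def expected_reciprocal(m, n):
    """E[1/X], where X is the sum of m fair n-sided dice, by exact enumeration."""
    total = Fraction(0)
    for faces in product(range(1, n + 1), repeat=m):
        total += Fraction(1, sum(faces))
    # each of the n^m outcomes is equally likely
    return total / n**m

# Two 4-sided dice: E[1/X] = 0.22671..., matching the value quoted above,
# and strictly larger than 1/E[X] = 1/5 (Jensen's inequality).
print(float(expected_reciprocal(2, 4)))
```

The enumeration is exponential in m, so for many dice one would instead convolve the distribution of a single die m times; for small m and n the brute force is exact (it uses rationals throughout) and instant.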
The Edit Distance Function and Symmetrization

The edit distance between two graphs on the same labeled vertex set is the size of the symmetric difference of the edge sets. The distance between a graph, G, and a hereditary property, ℋ, is the minimum of the distances between G and each G' ∈ ℋ. The edit distance function of ℋ is a function of p ∈ [0,1], defined as the limit of the maximum normalized distance between a graph of density p and ℋ.

This paper utilizes a method due to Sidorenko [Combinatorica 13(1), pp. 109-120], called "symmetrization", for computing the edit distance function of various hereditary properties. For any graph H, Forb(H) denotes the property of not having an induced copy of H. This paper gives some results regarding estimation of the function for an arbitrary hereditary property. This paper also gives the edit distance function for Forb(H), where H is a cycle on 9 or fewer vertices.

Keywords: edit distance, hereditary properties, symmetrization, cycles, colored regularity graphs, quadratic programming
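The edit distance between two labeled graphs, as defined in the abstract's first sentence, is simple enough to state in code. An illustrative sketch (not from the paper; edges are given as unordered pairs over the same vertex set):

```python
def edit_distance(edges_g, edges_h):
    """Edit distance between two graphs on the same labeled vertex set:
    the size of the symmetric difference of their edge sets."""
    # Normalize each edge to a frozenset so (u, v) and (v, u) coincide.
    canon = lambda edges: {frozenset(e) for e in edges}
    return len(canon(edges_g) ^ canon(edges_h))

# Triangle vs. path 0-1-2 on vertices {0, 1, 2}: they differ only in
# the edge {0, 2}, so the edit distance is 1.
print(edit_distance([(0, 1), (1, 2), (0, 2)], [(0, 1), (1, 2)]))
```

The distance to a hereditary property ℋ is then the minimum of this quantity over all G' ∈ ℋ on the same vertex set, which is what the edit distance function normalizes and maximizes over graphs of density p.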
Lattices and Trajectories

Richard Brak, The University of Melbourne
Directed path models of certain polymer phase transitions

I will review a range of directed path models of certain polymer phase transition problems, in particular, polymer collapse using interacting partially directed paths, polymer adsorption onto a surface by Motzkin paths, and sensitized flocculation and steric stabilization using Dyck paths interacting with a pair of surfaces. I will also discuss some unexpected applications of these results to a non-equilibrium problem, that of the simple asymmetric exclusion process, and to an equilibrium model, that of directed compact percolation. This talk will have a strong combinatorial flavour.

Hugues Chaté, CEA
Collective properties of active polar and nematic particles

Active or self-propelled particles are in fashion today in models for the collective motion of animals, bacteria, cells, molecular motors, as well as driven granular matter or even for the swarming behavior of robots. I will review recent results obtained on minimal microscopic models of interacting polar and apolar (nematic) active particles, stressing the (probably) universal properties of the emerging collective dynamics. If time allows, recent proposals for mesoscopic descriptions of these systems will be discussed.

Giovanni Ciccotti, Università degli Studi di Roma - "La Sapienza"
Minimum free energy paths and "isocommittor" surfaces

A computational technique is proposed which combines the string method with a sampling technique to determine minimum free energy paths. The technique only requires computing the mean force and another conditional expectation locally along the string, and can therefore be applied even if the number of collective variables kept in the free energy calculation is large.
This is in contrast with other free energy sampling techniques, which aim at mapping the full free energy landscape and whose cost increases exponentially with the number of collective variables kept in the free energy. Provided that the number of collective variables is large enough, the new technique captures the mechanism of transition in that it allows one to determine the committor function for the reaction and, in particular, the transition state region. The new technique is illustrated on the example of alanine dipeptide, in which we compute the minimum free energy path for the isomerization transition using either two or four dihedral angles as collective variables. It is shown that the mechanism of transition can be captured using the four dihedral angles, but not using only two of them.

David Coker, Boston University
Modeling electronic and vibrational pure dephasing and dissipation dynamics in condensed phase systems

A path integral approach using a discrete representation for the quantum subsystem is implemented by linearizing in the difference between forward and backward paths for the continuous solvent degrees of freedom, and employing a full mapping Hamiltonian description for the discrete quantum subsystem states. The approach is employed to study electronic and vibrational dephasing in a realistic model of complex many-body systems that can be probed in experiments exploring, for example, the microscopic mechanism by which superposition states of the quantum subsystem undergo decoherence due to entanglement with their environment. Understanding decoherence mechanisms, the factors that affect these quantum processes, and how they might be controlled by molecular-level engineering is important if advances are to be made in applications of quantum information theory with molecular systems, for example.
The studies will address vibrational quantum decoherence effects in the presence of both weak and strong dissipation, and also explore how such phenomena are influenced by avoided crossings and conical intersections in realistic models, including decoherence of a vibrational superposition state of electronically excited halogen molecules in condensed rare gas environments, where recent detailed experiments are available.

Joern Davidsen, University of Calgary
Filament-induced surface spiral turbulence

Surface defect-mediated turbulence in bounded three-dimensional (3D) excitable media is investigated in the regime of negative line tension. In this regime turbulence arises due to unstable filaments associated with scroll waves and is purely a 3D phenomenon. In this talk, I will show that the statistical properties of the turbulent defect dynamics can be used to distinguish surface defect-mediated turbulence from its 2D analog. Mechanisms for the creation and annihilation of surface defects will be discussed, and generalizations of Markov rate equations are employed to model the results.

Frank den Hollander, Universiteit Leiden
Copolymers in solution

This talk describes a model for a random copolymer in a random emulsion that was introduced in a recent paper with Stu Whittington. The copolymer is a two-dimensional directed self-avoiding walk, carrying a random concatenation of monomers of two types, A and B, each occurring with density ½. The emulsion is a random mixture of liquids of two types, A and B, organized in large square blocks occurring with density p and 1-p, respectively, where p ∈ (0,1). The polymer in the emulsion has an energy that is minus α times the number of AA-matches minus β times the number of BB-matches, where α, β ∈ R are interaction parameters that may be restricted to the cone {(α, β) ∈ R² : α ≥ |β|}.
We consider the quenched free energy per monomer in the limit as the length n of the polymer tends to infinity and the blocks in the emulsion have size Ln such that Ln → ∞ and Ln/n → 0. To make the model mathematically tractable, we assume that the polymer can only enter and exit a pair of neighbouring blocks at diagonally opposite corners. Although this is an unphysical restriction, it turns out that the model exhibits rich and physically relevant behaviour. Let pc ≈ 0.64 be the critical probability for directed bond percolation on the square lattice. We show that for p ≥ pc there is a localization vs. delocalization phase transition along one (!) critical curve in the cone, which turns out to be independent of p, while for p < pc there are three (!) critical curves, all of which depend on p. We derive a number of qualitative and quantitative properties of these curves. (This is joint work with Nicolas Pétrélis and Stu Whittington.)

Rashmi Desai, University of Toronto
Epitaxial growth in coherent, strained, asymmetric alloy films

I shall report on our recent work on epitaxial growth and surface instabilities in coherent, strained, asymmetric alloy films. A nonequilibrium continuum model is used to explore the coupling of the alloy segregation instability and the morphological instability which arises from lattice mismatch and other elastic effects. Even though the model has interesting nonlinearities, interesting effects occur already in the linear approximation (valid for thin films). I shall also discuss applications to some real materials.

Leon Glass, McGill University
Predicting and Preventing Sudden Cardiac Death

Sudden cardiac death kills hundreds of thousands of North Americans each year. This number could be reduced significantly if a medical device -- the implantable cardiac defibrillator -- had been implanted prior to the sudden death.
However, since we do not have good ways of predicting who will suffer sudden cardiac death or when, physicians face a major problem in deciding in whom to implant a cardiac defibrillator. This problem is made more severe since implantable cardiac defibrillators are expensive, and complications, though rare, do add to the risk of using the devices in those who would not benefit. In this talk I will describe attempts to understand cardiac arrhythmias -- especially those responsible for sudden cardiac death. The methods include analysis of electrocardiographic records of patients who experienced sudden cardiac death, analysis of arrhythmias in German Shepherd dogs that experience sudden cardiac death, recording activity in tissue culture models of cardiac arrhythmias, and the formulation of mathematical models of cardiac arrhythmia employing a range of techniques from number theory to nonlinear dynamics.

Tony Guttmann, University of Melbourne
Role of conformational entropy in force-induced bio-polymer unfolding

A statistical mechanical description of flexible and semi-flexible polymer chains in a poor solvent is developed in the constant force and constant distance ensembles. The existence of many intermediate states at low temperatures, stabilized by the force, is found. A unified response to pulling and compressing forces has been obtained in the constant distance ensemble. We show the signature of a cross-over length which increases linearly with the chain length. Below this cross-over length the critical force of unfolding decreases with temperature, while above it the critical force increases with temperature. For stiff chains, we report for the first time a "saw-tooth"-like behavior in the force-extension curves, which has been seen earlier in the case of protein unfolding. (Joint work with Sanjay Kumar, Iwan Jensen and Jesper L. Jacobsen.)

James T. Hynes, University of Colorado and Ecole Normale Supérieure, Paris
Solvation and photochemical funnels: Environmental effects on conical intersection structure and dynamics

Excited electronic state processes at conical intersections (CIs) have received intense scrutiny in photochemical experiment and theory in recent years. CIs often provide a "funnel" for (often ultrafast) passage from a photochemically accessed S1 state to the ground state S0, governing nonadiabatic transition rates; they have been referred to as "transition states" for photochemical reactions. Recent experiments on, e.g., photoactive proteins highlight the pronounced influence of a solvent or protein nanospace environment on CI dynamics. S1-S0 population transfer can be substantially modified, suggesting major changes in the underlying CI topology and dynamics. A central theoretical challenge is to select and describe the relevant features governing the complex chromophore-environment supermolecular systems. The present contribution focuses on excited electronic state processes at CIs where a charge transfer is involved. We describe the key features of a theoretical formulation recently developed to describe the chromophore-environment interaction and its consequences. This generalizes considerably an early important treatment by Bonacic-Koutecky, Koutecky and Michl to include important molecular coordinates, e.g., isomerization twisting motions, and the polar/polarizable environment's influence. The environment's electrostatic effects are accounted for by a dielectric continuum model. Applications to a model for the S1-S0 CI in protonated Schiff bases provide a free energy surface description for the coupled system, represented by molecular coordinates (e.g., twisting and bond stretching/contracting) plus a solvent coordinate. The environment's significant impact on the CI is investigated, as are "reaction paths" and the dynamics leading to and through the CI.
Nonequilibrium "solvation" effects are shown to be critical. (This work has been carried out in collaboration with Irene Burghardt (ENS, Paris), Riccardo Spezia (ENS, Paris), Joao Malhado (ENS, Paris) and L. Cederbaum (Heidelberg).)

Jennifer Lee, University of Toronto
Collapse transition in the presence of an applied force

This talk will focus on a collapse transition of a linear homopolymer in dilute solution in the presence of an applied force. An interacting partially directed self-avoiding walk (IPDSAW) model was used to describe the system conditions, with energy and applied force variables associated with the near-neighbour contacts in the walk. Exact expressions were generated, and the analytic structure of such expressions will be presented. Theoretical results were then used as a comparison model for investigating a collapse transition in single molecule experiments. Force spectroscopy using AFM generated single molecule force-extension profiles, which will be presented. (This work is carried out in collaboration with S. G. Whittington, R. Brak, A. J. Guttmann and G. C. Walker.)

Neal Madras, York University
Polymers on hyperbolic lattices

This talk discusses traditional lattice models of polymers (self-avoiding walks, lattice trees, and lattice animals) on "non-Euclidean lattices", specifically graphs that correspond to regular tilings of the hyperbolic plane (or 3-space). One example is the infinite planar graph in which every face is a triangle and eight triangles meet at every vertex. On such lattices these models should exhibit mean field behaviour, as they would in high-dimensional Euclidean space or, more simply, on an infinite regular tree. We have made progress towards a rigorous understanding of these issues, as well as analogous ones for percolation, but some open questions remain. (This talk is based on joint work with C. Chris Wu.)

G. Nicolis and C. Nicolis, Université Libre de Bruxelles
Nonlinear dynamics and self-organization in the presence of metastable phases

There is increasing evidence that self-organization phenomena in a variety of nanosize materials occur in the presence of metastable phases. This switches on non-standard nucleation mechanisms with combined structural and density fluctuations, entailing that kinetic effects and nonequilibrium states play an important role. In this presentation the effect of metastable phases on the free energy landscape is determined for a class of materials in which the attractive part of the interparticle interactions is weak and short-ranged. The kinetics of the fluctuation-induced transitions between the different states, stable as well as metastable, is subsequently analyzed using a generic model involving two order parameters. Conditions are identified under which the transition rate towards the most stable state can be enhanced, and the relevance of the results to the crystallization of protein solutions is discussed.

Steven Nielsen, University of Texas at Dallas
Quantifying the surfactant coverage of nanoparticles by molecular dynamics simulation: The physisorbed versus chemisorbed cases

Potential energy terms are derived for the interactions between surfactants and solvent, and a spherical nanoparticle, which depend parametrically on the nanoparticle radius. The gradient of these potentials with respect to the nanoparticle radius allows the mean force of constraint on the radius to be calculated during a molecular dynamics simulation. This free energy method allows the optimal, or saturated, surfactant coverage to be found. The effects of curvature, surfactant geometry, and chemisorbed versus physisorbed conditions are explored.

Gian-Luca Oppo, University of Strathclyde
Spatio-temporal structures in photonics and chemistry

We compare nonlinear spatio-temporal structures such as patterns, spatial solitons, spirals, defect-mediated turbulence, etc.
in prototype models of photonic and chemical systems. Analogies and differences are drawn between systems driven by either diffraction (photonics) or diffusion (chemistry). It is found that while the investigated structures often have very similar nature, their names differ between these two research fields. Typical examples are chemical spirals and optical vortices, chemical spots and cavity solitons. As Gershwin said: "you like potato and I like potahto, you like tomato and I like tomahto".

Garnett Ord, Ryerson University
Counting oriented rectangles and the propagation of waves
We propose a simple counting problem involving chains of rectangles on a planar lattice. The boundaries of the chains form a type of random walk with a finite inner scale. With orientation neglected, the continuum limit of the walk densities obeys the Telegraph equation, a form of diffusion equation with a finite signal velocity. Taking into account the orientation of the rectangles, the same continuum limit yields the Dirac equation. This provides an interesting context in which the Dirac equation is phenomenological rather than fundamental.

E. Orlandini, Universita degli Studi di Padova
Directed walk models of polymers stretched by a force
In recent years, the mechanical properties of individual polymers and filaments have been thoroughly investigated experimentally, thanks to the rapid development of micromanipulation techniques such as optical tweezers and atomic force microscopy (AFM). Experiments such as the stretching of single DNA polymers or the force-induced desorption from an attractive surface enhance the possibility of understanding the physical properties of the single molecule. In order to interpret experiments quantitatively several theoretical models which allow one to calculate the response of a polymer to external forces have recently been introduced and studied by several authors.
In this respect, Stu Whittington has been a pioneer in the field through the introduction of simple directed-walk models of polymers, subjected to an elongational force, that are either adsorbed on the surface or localized between two different solvents. In this talk I will review some of these models, showing that, although they are simple enough to be solved analytically, they capture much of the underlying physics of the problem. Moreover, they can be extended to describe other interesting phenomena such as the mechanical unzipping of double-stranded DNA or the stretching of compact polymers.

Aleksander Owczarek, University of Melbourne
Polymers in a slab with attractive walls: Scaling and numerical results
We summarize the latest results concerning models of polymers in a slab with sticky walls. We present a conjectured scaling theory and numerical confirmation.

Antonio Politi, CNR
Chaos without exponential instability
It has been known for several years that coupled map models can exhibit pseudochaotic behaviour, even in the absence of a strictly positive maximum Lyapunov exponent. Quite recently some more realistic systems have been identified where this behaviour can be generated. In particular, I refer to a chain of hard-point particles and to a network of globally coupled leaky integrate-and-fire neurons. The peculiarity of this type of dynamical behaviour and the conditions for its generation will be discussed.

Andrew Rechnitzer, University of British Columbia
Mean unknotting times of random knots and embeddings
We study mean unknotting times of knots and knot embeddings by crossing reversals, in a problem motivated by DNA entanglement. Using self-avoiding polygons (SAPs) and self-avoiding polygon trails (SAPTs) we prove that the mean unknotting time grows exponentially in the length of the SAPT and at least exponentially with the length of the SAP.
The proof uses Kesten's pattern theorem, together with results for mean first-passage times in the two-parameter Ehrenfest urn model. We use the pivot algorithm to generate random SAPTs and calculate the corresponding unknotting times, and find that the mean unknotting time grows very slowly even at moderate lengths. Our methods are quite general -- for example the lower bound on the mean unknotting time applies also to Gaussian random polygons. (This is work together with Aleks Owczarek and Yao-ban Chan at the University of Melbourne, and Gord Slade at the University of British Columbia.)

Katrin Rohlf, Ryerson University
From excitable media on the large scale to reaction-diffusion mechanisms on the small scale
In honour of Raymond Kapral, this talk will highlight some of our past and current work concerning simulations for reactive media both on the large scale, as well as on the small scale. The first part of the talk will be devoted to recent results concerning the self-organizational properties of spiral waves in a FitzHugh-Nagumo system. Such systems have a wide range of physical, chemical and biological applications, and -- in particular -- have often been used to describe the electrical activity of heart tissue. Our results have important implications for the proper assessment of drug treatment options for cardiac arrhythmias. The talk will conclude with an overview of current work concerning the time evolution of a chemically reacting medium using a particle-based approach. In particular, some results will be presented for a Selkov reactive mechanism in a spatially extended system, and we will show its connection to a stochastic phase-space description for which the total number of particles in the system is not conserved.
Tom Shiokawa, National Center for Theoretical Sciences
Non-Markovian dynamics and quantum Brownian motion

Kenneth Showalter, West Virginia University
Spatiotemporal dynamics of networks of excitable nodes
A network of excitable nodes based on the photosensitive Belousov-Zhabotinsky reaction is studied in experiments and simulations. The addressable medium allows both local and nonlocal links between the nodes. The initial spread of excitation across the network as well as the asymptotic oscillatory behavior are described. Synchronization of the spatiotemporal dynamics occurs by entrainment to high-frequency network pacemakers formed by excitation loops. Analysis of the asymptotic behavior reveals that the dynamics of the network is governed by a subnetwork selected during the initial transient period. (In collaboration with Aaron J. Steele and Mark Tinsley.)

Christine Soteros, University of Saskatchewan
Random copolymer models
Self-avoiding walk models have been used for about 50 years to study linear polymers (long chain molecules) in dilute solution. For such models, the vertices of the walk represent the monomer units which compose the polymer and an edge of the walk joins two monomer units which are chemically bonded together in the polymer chain. Distinct self-avoiding walks on a lattice, such as the square or simple cubic lattice, represent distinct conformations of the polymer chain. Recently there has been much interest in extending the standard self-avoiding walk model of homopolymers (all monomer units considered identical) to the study of random copolymers. A random copolymer is a polymer composed of several types of comonomers where the specific distribution of these comonomers along the polymer chain is determined by a random process.
The comonomer sequence can be thought of as being determined in or by the polymerization process (assumed to involve a random process) but then once determined the sequence of comonomers is fixed; this is an example of what is known as quenched randomness. In the simplest self-avoiding walk model of a random copolymer, one assumes that there are two types of comonomers and that they are distributed independently along the polymer chain. Based on a series of seminal papers by Stu Whittington and others, I will review the progress that has been made using self-avoiding walk models to study phase transitions, such as the adsorption phase transition and the localization phase transition, in random copolymer systems. Special emphasis will be placed on recent progress made by us, in collaboration with Stu Whittington, studying bounds on the limiting quenched average free energy for directed walk models such as Dyck and Motzkin paths.

De Witt Sumners, Florida State University
Random thoughts about random knotting
At the interface between statistical mechanics and geometry/topology, one encounters the very interesting problem of length dependence of the spectrum of geometric/topological properties (writhing, knotting, linking, etc.) of randomly embedded graphs. Stu Whittington has been at the forefront of research in this area, and this talk will discuss the proof of the Frisch-Wasserman-Delbruck conjecture (the longer a random circle, the more likely it is to be knotted), with some of its generalizations and scientific applications.

E. J. Janse van Rensburg, York University
Knotted lattice polygons
Let p_n(K) be the number of lattice polygons of length n and knot type K. It is known that lim_{n→∞} [log p_n(∅)]/n = log κ_0 exists, where K = ∅ is the unknot, and κ_0 is the growth constant of unknotted polygons in the cubic lattice. In addition, κ_0 < κ, where κ is the growth constant of lattice polygons. This result implies that almost all lattice polygons are knotted in the large n limit.
In this talk I shall review the statistical and scaling properties of lattice polygons of fixed knot types in the lattice and also in a slab geometry by using rigorous and scaling arguments and by presenting numerical results from Monte Carlo simulations using the BFACF algorithm.

Xiao-Guang (Charles) Wu, Revionics, Inc.
Ion dynamics in non-perfect quadrupole traps
Ion dynamics in non-perfect quadrupole traps differ from those in a pure quadrupole field. We obtain an analytic expression for a quadrupole field superimposed with weak, higher-order multipole fields. Single ion dynamics in such trapping fields close to the instability point are investigated. We show that for an in-phase octopole field, oscillating envelopes of the axial displacement grow exponentially with the parameter deviation; whereas for an out-of-phase octopole field the growth of the oscillating envelopes follows a square-root law. A hard-sphere scattering model is assumed to incorporate collisions with buffer-gas molecules. The collision frequency and cross-section are defined. A simulation algorithm for many-ion dynamics is developed based on the Verlet algorithm and Monte Carlo techniques. We show how a weak octopole field affects the mass resolution in a significant way.

Royce Zia, Virginia Tech
Percolation of a collection of finite random walks: A model for gas permeation through thin polymeric membranes
Bond percolation on a square lattice is well known. What if the bonds are not randomly distributed, but are correlated somehow? In particular, consider placing a fixed density of (non-self-avoiding) random walks of l bonds on the lattice. How does the critical density depend on l? This problem is motivated by a model for gas transport through thin polymeric films.
Solving recurrences

In the last lecture we saw how to solve recurrences, with the example of a simple multiplication function using only additions, running in time O(log n). Today we will see more examples, by proving the complexity of mergesort as well as the complexity of a function calculating Fibonacci numbers.

Merge sort

Implementation of merge sort

let rec split (l: 'a list) : 'a list * 'a list =
  match l with
    [] -> [],[]
  | [x] -> [x],[]
  | x::y::t -> let l,r = split t in x::l, y::r

(* A simpler way to write split. Recall the definition of List.fold_right.
   What is the asymptotic performance of List.fold_right f lst acc0, where
   f is an O(1) function and lst is an n-element list? O(n). *)
let split' (l: 'a list) : 'a list * 'a list =
  List.fold_right (fun x (left,right) -> (x::right,left)) l ([],[])

let rec merge (left: 'a list) (right: 'a list) : 'a list =
  match (left, right) with
    ([],_) -> right
  | (_,[]) -> left
  | (x::rest_left, y::rest_right) ->
      if x > y then y::(merge left rest_right)
      else x::(merge rest_left right)

(* merge_sort l is a list containing the same elements as l but in
 * ascending (nondescending) sorted order. *)
let rec merge_sort (l: 'a list) : 'a list =
  (* Implementation: lists of size 0 or 1 are already sorted. Otherwise,
   * split the list into two lists of equal size, recursively sort
   * them, and then merge the two lists back together. *)
  match l with
    ([]|[_]) -> l
  | _ -> let (left, right) = split l in
         merge (merge_sort left) (merge_sort right)

Merge sort asymptotic timing analysis

Now let's show that merge_sort is not only a correct but also an efficient algorithm for sorting lists of numbers.
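As an aside for readers who want to experiment: below is a direct Python translation of the OCaml above (the translation is mine, not part of the original notes). It keeps the same structure: alternate elements into two halves, sort each half recursively, and merge.

```python
def split(l):
    """Alternate elements into two lists, mirroring the OCaml split."""
    left, right = [], []
    for i, x in enumerate(l):
        (left if i % 2 == 0 else right).append(x)
    return left, right

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def merge_sort(l):
    """Lists of size 0 or 1 are sorted; otherwise split, recurse, merge."""
    if len(l) <= 1:
        return l
    left, right = split(l)
    return merge(merge_sort(left), merge_sort(right))

print(merge_sort([5, 2, 8, 1, 9, 3]))  # -> [1, 2, 3, 5, 8, 9]
```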
We start by observing without proof that the performance of the split function is linear in the size of the input list. This can be shown by the same approach we will take for merge, so let's just look at merge instead. The merge function, too, is linear time, that is, O(n), in the total length of the two input lists.

We will first find a recurrence relation for the execution time. Suppose the total length of the input lists is zero or one. Then the function must execute one of the two O(1) arms of the case expression. These take at most some time c[0] to execute. So we have

T(0) = c[0]
T(1) = c[0]

Now consider lists of total length n. The recursive call is on lists of total length n−1, so we have

T(n) = T(n−1) + c[1]

where c[1] is a constant upper bound on the time required to execute the if statement and the operator :: (which takes constant time for the usual implementations of lists). This gives us a recurrence relation to solve for T. We can apply the iterative method by expanding out the recurrence relation for the first few steps:

T(0) = c[0]
T(1) = c[0]
T(2) = T(1) + c[1] = c[0] + c[1]
T(3) = T(2) + c[1] = c[0] + 2c[1]
T(4) = T(3) + c[1] = c[0] + 3c[1]
...
T(n) = T(n−1) + c[1] = c[0] + (n−1)c[1] = (c[0] − c[1]) + c[1]n

We notice a pattern, which the last line captures. This pattern can be proved more rigorously by induction: let us prove that for all n ≥ 0, T(n) = (c[0] − c[1]) + c[1]n. For n = 0 the result is true (proved above), and if it is true for n−1, it is true for n using the last line above. Recall that T(n) is O(n) if for all n greater than some n[0], we can find a constant k such that T(n) ≤ kn. For n at least 1, this is easily satisfied by setting k = c[0] + 2c[1]. Or we can just remember that any first-degree polynomial is O(n) and also Θ(n).
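The closed form can also be double-checked by evaluating the recurrence mechanically. A quick Python sketch (not part of the original OCaml notes; the constants c0 = 3 and c1 = 5 are arbitrary sample values):

```python
def T(n, c0=3, c1=5):
    """Cost recurrence for merge: T(0) = T(1) = c0, T(n) = T(n-1) + c1."""
    t = c0
    for _ in range(2, n + 1):
        t += c1
    return t

# Closed form from the iterative method: (c0 - c1) + c1*n for n >= 1.
for n in range(1, 100):
    assert T(n) == (3 - 5) + 5 * n
print(T(10))  # -> 48
```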
An even simpler way to find the right bound is to observe that the choice of constants c[0] and c[1] doesn't matter; if we plug in 1 for both of them we get T(1) = 1, T(2) = 2, T(3) = 3, etc., which is clearly O(n).

Now let's consider the merge_sort function itself. Again, for zero- and one-element lists we compute in constant time. For n-element lists we make two recursive calls, but to sublists that are about half the size, plus calls to split and merge that each take Θ(n) time. For simplicity we'll pretend that the sublists are exactly half the size. The recurrence relation we obtain has this form:

T(0) = c[0]
T(1) = c[0]
T(n) = 2 T(n/2) + c[1]n + c[2]n + c[3]

Let's use the iterative method to figure out the running time of merge_sort. We know that any solution must work for arbitrary constants c[0] through c[3], so again we replace them all with 1 to keep things simple. That leaves us with the following recurrence equations to work with:

T(1) = 1
T(n) = 2 T(n/2) + n

Starting with the iterative method, we can expand the time equation until we notice a pattern:

T(n) = 2T(n/2) + n
     = 2(2T(n/4) + n/2) + n = 4T(n/4) + n + n
     = 4(2T(n/8) + n/4) + n + n = 8T(n/8) + n + n + n
     ...
     = nT(n/n) + n + ... + n + n + n
     = n + n + ... + n + n + n

Counting the number of repetitions of n in the sum at the end, we see that there are lg n + 1 of them. Thus the running time is n(lg n + 1) = n lg n + n. We observe that n lg n + n ≤ n lg n + n lg n = 2n lg n for n ≥ 2, so the running time is O(n lg n). Now that we've done the analysis using the iterative method, let's use strong induction to verify that the bound is correct.

Merge sort analysis using strong induction

Property P(n) to prove: n ≥ 1 ⇒ T(n) = n lg n + n

Proof by strong (course-of-values) induction on n. For arbitrary n, show P(n) is true assuming the induction hypothesis T(m) = m lg m + m for all m < n.
Case n = 0: vacuously true.

Case n = 1: T(1) = 1 = 1 lg 1 + 1.

Case n > 1: by the induction hypothesis, T(n/2) = (n/2) lg (n/2) + (n/2), so

T(n) = 2 T(n/2) + n
     = 2((n/2) lg (n/2) + (n/2)) + n
     = n lg (n/2) + 2n
     = n (lg n − 1) + 2n
     = n lg n + n

Since n lg n + n is Θ(n lg n), we have shown that merge sort is Θ(n lg n).

The Fibonacci numbers

The Fibonacci numbers, written F(n), are defined by F(0) = 0, F(1) = 1, and for n > 1, F(n) = F(n−1) + F(n−2). The first few Fibonacci numbers are 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...

A first implementation and its complexity

We would like to write a function that calculates the n-th Fibonacci number. A first implementation would be:

(* requires n >= 0 *)
let rec fibo = function
    0 -> 0
  | 1 -> 1
  | n -> (fibo (n-1)) + (fibo (n-2))

This function follows directly from the definition of Fibonacci numbers, hence its correctness. Now, if we try running it in OCaml on, say, 100, it takes forever to complete. Let us see why by analyzing its asymptotic running time. The time taken for n = 0 and n = 1 is constant; call it c[0], so that T(0) = T(1) = c[0]. To calculate T(n) we make two recursive calls, so that T(n) = T(n−1) + T(n−2). In mathematics, it can be shown that a solution of this recurrence relation is of the form T(n) = a[1]*r[1]^n + a[2]*r[2]^n, where r[1] and r[2] are the solutions of the equation r^2 = r + 1. We get r[1] = (1+sqrt(5))/2 and r[2] = (1−sqrt(5))/2. Then with T(0) = T(1) = c[0], we get a[1] + a[2] = a[1]r[1] + a[2]r[2] = c[0], leading to a[1] = c[0]r[1]/sqrt(5) and a[2] = −c[0]r[2]/sqrt(5). Since |r[2]| < 1, the term r[2]^n is o(1). Therefore T(n) is Θ(r[1]^n), with r[1] = (1+sqrt(5))/2 ≈ 1.618. The algorithm thus takes exponential time to complete.

A better implementation

(* requires n >= 0 *)
let fibo' n =
  if n = 0 then 0
  else
    (* a is F(i-2) and b is F(i-1) *)
    let a = ref 0 and b = ref 1 and c = ref 0 in
    for i = 2 to n do
      c := !b;
      b := !a + !b;
      a := !c;
    done;
    !b

This implementation is clearly linear: each loop iteration takes constant time, and there are on the order of n iterations.
As before, correctness can be proved by induction on n (by showing that at the start of each loop iteration, a = F(i−2) and b = F(i−1)).
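Both analyses in these notes can be sanity-checked numerically. The Python sketch below (a translation for checking purposes, not part of the OCaml notes) evaluates the merge sort recurrence against n lg n + n on powers of two, and counts the calls made by the naive Fibonacci recursion to exhibit the exponential blow-up:

```python
def T(n):
    """Merge sort recurrence with constants set to 1 (n a power of 2)."""
    return 1 if n == 1 else 2 * T(n // 2) + n

def calls_naive(n):
    """Number of calls the naive recursive fibo makes on input n."""
    return 1 if n <= 1 else 1 + calls_naive(n - 1) + calls_naive(n - 2)

# Merge sort: T(n) = n lg n + n, checked on powers of two (k = lg n).
for k in range(12):
    assert T(2 ** k) == 2 ** k * k + 2 ** k

# Naive Fibonacci: call counts grow like r1^n with r1 ~ 1.618, so the
# count multiplies by roughly 11 (~ 1.618^5) every 5 steps.
print(calls_naive(20), calls_naive(25))  # -> 21891 242785
```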
“New” Valuation Method – Q&A and other Ramblings
by Benjamin Clark • 31 Comments

This post has been about 2 weeks in the making. A couple of weeks ago, we were asked in a comment if we could elaborate a little on our valuation process. We had been using a complex mathematical formula and system to determine value that took into account a number of different approaches. As the system was difficult to explain, our stance was always that we would not discuss the details of how we came about our intrinsic value numbers. However, when I was thinking about how to respond to the latest question, I also began to wonder if we could find something that was easier to explain. I was led back to Graham’s The Intelligent Investor, where I found a formula that I had originally passed over, as it is in a section that refers to “Growth” stocks. The first couple of times I read the book I thought of the formula as a way to look at high growth stocks, but this time, I saw it differently. If it worked for Graham with growth stocks, why couldn’t it be tweaked in order to work for all companies? I then set to work on “modernizing” Graham’s original formula. Here is the text from The Intelligent Investor that includes the formula (found on page 295 of the 2006 updated edition – chapter 11):

“Most of the writing of security analysts on formal appraisals relates to the valuation of growth stocks. Our study of the various methods has led us to suggest a foreshortened and quite simple formula for the valuation of growth stocks, which is intended to produce figures fairly close to those resulting from the more refined mathematical calculations.
Our formula is: Value = Current (Normal) Earnings X (8.5 plus twice the expected annual growth rate)

The growth figure should be that expected over the next seven to ten years.” (A footnote then explains that the formula does not give the “true value” but only approximates it.)

So Graham provides us with a base formula to approximate value. The next step was to find the correct figures to input into the formula. You’ll notice that Graham has specified that we should use “normal” earnings in this calculation. This is one of the most important things that Graham teaches – do not base your judgment on current period earnings that could be abnormally inflated or deflated. Instead, base your judgment on the earnings that the company has normally achieved and can be expected to achieve in the future. But how do you get normal earnings in a simple manner? One of the things we at ModernGraham have done since the beginning is use a weighted average of the last 5 years’ earnings per share. We use a weighted average because we do feel that some bearing must be placed on the current period even if it is an abnormal result. Accordingly, the weight our average places on a given year increases the more recent that year is. We refer to this weighted average earnings per share as the EPSmg. From there, the next question is where do you get a growth rate for the next seven to ten years? You can do that a number of ways. One, you can find an analyst forecast and plug their rate in. We don’t like to trust analysts because they are almost always wrong and they can hardly ever agree. Instead, we calculate our own expected growth rate using a weighted average on the previous 5 years again. Then we throw in one of our safety screens by multiplying the average by 0.75 (effectively decreasing it by 25%). Sometimes this still gives us an outrageous figure. As a result, we limit our growth rate to a level between -4% and 15% per year.
We feel that any company that is expected to shrink by more than 4% per year is unsuitable for investment anyway, and no company should be expected to achieve higher than 15% growth for seven to ten years. That’s not to say higher than 15% growth cannot be achieved – it can be, and it has been by a lot of high growth companies – but we do not want to put ourselves at risk of it not being done. If we put the cap at 15% and still find the company to be undervalued, then our results will be even greater when a higher rate is actually achieved. You’ll notice that there are a couple of implications that result from our cap of the growth rate. One, it means that we feel a zero-growth company would be worth at least 8.5 times normal earnings. This is the same as what Graham felt and could be a result of an approximation of the present value of a perpetuity of the normal earnings. Two, the cap means that we do not feel comfortable paying more than 38.5 times normal earnings. In fact, for the defensive investor we do not feel comfortable paying more than 20 times normal earnings. After “modernizing” Graham’s original formula, I then set about testing the results of following it. In order to do this easily, I made a couple of assumptions:

1. I looked only at the current 30 components of the Dow Jones Industrial Average.
2. I did not consider whether the companies passed the defensive and enterprising investor guidelines we require.

With the assumptions made, I came up with a valuation for each company on a quarterly basis going back to the first quarter of 1995. I then looked at the company each quarter to compare the price to the value. If the price was below 75% of the value determined by the formula, I considered the company a “buy.” If the price was above 110% of the value, I considered the company a “sell.” Next, I created a mock portfolio of $100,000.00 and set about following the recommendations of the formula.
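Before looking at the results, the mechanics described above can be put in one place. The Python sketch below is my reconstruction, not ModernGraham's actual code: the EPSmg weighting, the 25% safety screen, the -4%/15% clamp, and the 75%/110% thresholds follow the post, but exactly how the raw growth rate is annualized is an assumption on my part, and the 5% target weighting and cash handling of the backtest are omitted.

```python
def eps_mg(eps_newest_first):
    """Weighted average of 5 years of EPS, newest year first,
    with weights 5/15, 4/15, 3/15, 2/15, 1/15."""
    return sum(w * e for w, e in zip((5, 4, 3, 2, 1), eps_newest_first)) / 15

def mg_value(eps_mg_now, eps_mg_then):
    """Value = EPSmg * (8.5 + 2g). Growth g (in percent) is estimated from
    the 5-year change in EPSmg, cut by 25% (the safety screen) and clamped
    to [-4, 15]. The annualization step is an assumption, not from the post."""
    raw = ((eps_mg_now / eps_mg_then) ** (1 / 5) - 1) * 100
    g = min(15.0, max(-4.0, 0.75 * raw))
    return eps_mg_now * (8.5 + 2 * g)

def rating(price, value):
    """Backtest rule: buy under 75% of value, sell over 110%, else hold."""
    ratio = price / value
    return "buy" if ratio < 0.75 else ("sell" if ratio > 1.10 else "hold")

# A zero-growth company is valued at 8.5x normal earnings, as in the text:
v = mg_value(2.0, 2.0)
print(v, rating(12.0, v), rating(20.0, v))  # -> 17.0 buy sell
```

Note that capping g at 15 bounds the multiplier at 8.5 + 2(15) = 38.5 times EPSmg, matching the post's stated ceiling.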
When a rating changed to “buy,” I “bought” and when it changed to “sell,” I “sold.” I set a target weighting of 5% per company and left the remainder in cold hard cash when I did not achieve a full level of investment. Some very interesting things happened. First, the portfolio held strong at a level of between 70 and 90% invested until Q4, 1998. At that point, the cash level increased to 46% and stayed between 40 and 60% until the end of 2003. After that, the cash level slowly decreased until 2006 when the portfolio reached a full investment level. As a result, the portfolio missed out on the over inflated market of the very late 90s and the subsequent recession and drop in value. Then once the price dropped relative to the value again, the portfolio increased the investment levels to take advantage of the low prices. Second, the portfolio as a whole outperformed the index it was up against, the DJIA. On an annual basis, the Dow averaged an 11.5% gain over the period, compared to the portfolio’s 12.9%. In addition, the standard deviation of the Dow was 17.1% compared to 11.1%. So the portfolio achieved a higher rate of return with a lower level of risk. The overall result was that if you had invested $100,000 in the Dow and $100,000 in the portfolio I tested, you would have $323,000 through the Dow and $408,000 through the portfolio today. I’d say that’s a significant difference, and here at ModernGraham we will be commencing use of the “modernized” formula that Graham provided in The Intelligent Investor for the foreseeable future.

31 comments for ““New” Valuation Method – Q&A and other Ramblings”

Hi Ben, I have been working on something similar so was very interested to find your post! What tool do you use to do the backtesting of your formula please?
Thank you very much and best of luck with your research!

One question: What’s the basis for choosing the ‘8.5’ figure in your value calculation formula above? Is it related to the long term bond yield rates in some measure? Also, isn’t this value discovery formula just another way of getting at the PEG ratio of a company? How do you actually weight earnings and growth?

While I’m on the subject, I was wondering if you could/would provide a short glossary of terms that you use in your “Defensive”/“Enterprising” questions (e.g., “current ratio”). Of course, it wouldn’t hurt if I just went back and reread The Intelligent Investor a few more times. Many thanks … love the site…

This is very interesting and helpful. I have a few questions:
1. Did you record the number of “Buys”, “Sells”, etc. which occurred during the backtesting? If you had that quantity, it could easily be multiplied by a certain transaction price to come up with a net profit.
2. Are you using a spreadsheet for this analysis? If so, are you aware of the totally great tool called the smf_addin at Yahoo Groups? It provides functions that create new Excel “commands” that make it easy to pull all kinds of data from several different databases!
3. It seems that the site’s authors like to incorporate “Security Analysis” and “The Intelligent Investor” but I haven’t seen much consideration for “The Interpretation of Financial Statements”. Admittedly, some of the examples are probably out of date but I think that digging a little deeper in the financial statements may be a worthwhile endeavor. Coming up with true valuations may not be easy considering all of the liberty GAAP allows and what appears to be an SEC which doesn’t really seem to mind “creative accounting”.
4. Have the site’s authors read “It’s Earnings that Count” by Hewitt Heiserman, Jr.? It’s another book which looks closely at financial statements and valuations. Do the authors feel that any of those concepts are worthwhile trying to incorporate?
As you can probably tell, I’m very skittish about investing and I often suffer from “Paralysis by Analysis”. I would love to find a somewhat simple, Graham-based procedure with which I felt comfortable “pulling the trigger”. Thanks for any comments,

Can you give some insight into your backtesting procedure? I’m having a hard time finding some of the historical data you are referencing – can you list a source? In terms of educational material, I can’t find any good books on the subject and the only websites I can find which really deal with this are actually software programs. Did you use one of these programs or can you recommend a good book on backtesting?

Simple and useful

Glad you like it!

Dear all, I am wondering if there is not a mistake in the calculator’s formula:

var EPSmgA = (A*.33333)+(B*.26667)+(C*.2)+(D*.13333)+(E*.06667);
var EPSmgB = (E*.33333)+(F*.26667)+(G*.2)+(H*.13333)+(I*.06667);

It doesn’t take into consideration the 9th year (form J). Is that normal?

Thanks for the note. That year is actually not needed in the calculation of the value, and I’m not sure why it’s included in the form. I’ll try to take it out.

The method says you should take into account the past 10 years. So, the formula should be:

var EPSmgB = (F*.33333)+(G*.26667)+(H*.2)+(I*.13333)+(J*.06667);

and so take into account the 9th year -> J.
That I can easily change, but I’m having a little difficulty changing the coding of the form to remove the input box for that year of Hello all, After few research, I found a French company BIC (EPA:BB) which seams to be undervalued & Defensive.. Can anyone confirm my analysis? I don’t currently evaluate foreign companies as they don’t translate very well into the spreadsheet I use, but I’m working on finding a solution. I am still learning how to read your reports and understand your terms. I have a simple question that I did not see an answer for in your explanation of terms..Is the MG Value your forecasted price per share and what is the period that this guess good for? Thank you The MG Value is an estimate of the intrinsic value of the company. It can best be explained by imagining that you have an item that is worth $100. You know it is worth $100 because you have done research into the value. Consider $100 to be the intrinsic value. Meanwhile, every day someone comes to your door and offers to buy the item from you. Each day they offer a different price. Some days, the offer is to buy it from you for $75, other days they offer $125, and on a few occasions they offer $100. The offer they make in no way changes how much the item is worth, and sometimes it is downright astonishing that someone would make such an offer. That’s the same way the market works. You spend time researching a company and using its financial statements you determine its intrinsic value. That amount is not related to the stock price, but helps you decide whether the offer from the market is a good offer or not. So if a company’s intrinsic value is $100, you know that if the market price is $50, it may be a good time to buy. Alternatively, if the market price is $200, it may be a good time to sell. The problem is that the market is inherently unpredictable, so it is extremely difficult to predict when (or even if) the price will be close to the value. 
So to answer the question, the MG Value is not a forecast in the traditional sense of the word, but one would expect that over a long time it is better to invest in companies trading for less than their value than to invest in companies trading for more than their value.

Why is your formula P/E * P/B < 50 whereas Graham's original formula is P/E * P/B < 22? At first sight you seem to have been less cautious.

Great question. It is a little less conservative, and I must admit that it has been so long since I decided on this requirement that I cannot remember exactly why I changed it. However, I think part of the reason is that the truly defensive investor of today invests solely in index funds, so an investor using Graham's Defensive Investor requirements is a little bit less passive than Graham originally intended. As a result, the Defensive Investor of today may be willing to take on a slightly higher level of risk.

Although I understand you sold at 110% so that you had money to find more bargains, I don't think Graham would advocate selling a good company bought at a good price even though it is currently overvalued? I really dig your website and only ask this question because I think you know better than I.

Thanks Fran – I'm not sure if I know any better than the next person, but my view is that you should not hold an investment if you would not buy it today. I always want my money to be invested in the greatest opportunity for profit. If a company is priced well above its intrinsic value, I wouldn't buy it, so why should I continue to hold it, especially if there are other opportunities that may bring a greater return on the investment?

Hi, I am new to this site so please forgive my ignorance. I am looking at your report on PEP. Admittedly, it is from 2009, but I am just trying to learn how you determine the EPSmg. Using your eps figures from the report & applying the sum of the years digits weighting, for 2004 you have an EPSmg of $1.95.
My calculations are returning $2.15 as follows:

2000 – 1.42 x 1.0667 = 1.51
2001 – 1.33 x 1.133 = 1.50
2002 – 1.67 x 1.2 = 2
2003 – 2.04 x 1.2667 = 2.58
2004 – 2.41 x 1.333 = 3.20

I am assuming you add these figures and divide by 5 and that returns $2.15. Could you please explain what I am doing wrong? Thank you.

Thanks for the comment. Using the EPS figures you have listed, the calculation is as follows:

2004 – 2.41 x (5/15) = .8033
2003 – 2.04 x (4/15) = .544
2002 – 1.67 x (3/15) = .334
2001 – 1.33 x (2/15) = .1773
2000 – 1.42 x (1/15) = .0947

Then the sum of all of those is $1.95. I hope that helps!

Can you explain when you decide to use the 2014 estimates in figuring the EPSmg and the value & when you do not use the estimates? I ask because today, 4/15/14, in your e-mail you highlighted HCP & COV. On COV you used the 2014 estimates, but on HCP you did not. Your value on HCP was $59.34. When I entered the 2014 estimate of $2.98 the value calculated as $81.51. Quite a difference!! Could you please explain the theory behind this. Thank you for this website. It is very informative!

Great question! Here are my guidelines for using earnings (based on fiscal years):

If Q1 has been released, use the lowest available analyst estimate for the full year.
If Q2 has been released, use actuals for Q1 and Q2 and the lowest available analyst estimate for Q3 and Q4.
If Q3 has been released, use actuals for Q1, Q2, and Q3 and the lowest available analyst estimate for Q4.
If Q4 has been released, use the actual figure for the full year.

In the case of COV, the 2014 Q1 earnings are available, so I used the lowest available analyst estimate for 2014, which is $3.95. For HCP, the 2014 Q1 earnings are not yet available, so I did not use any estimate for 2014. I hope that helps!
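The sum-of-the-years-digits weighting walked through in the reply above is easy to script. A minimal Python sketch (the function name `epsmg` and its structure are mine, not the site's actual calculator code):

```python
def epsmg(eps_by_year):
    """Weighted average of the last 5 years of EPS using
    sum-of-the-years-digits weights 1/15, 2/15, ..., 5/15,
    with the most recent year weighted heaviest."""
    if len(eps_by_year) != 5:
        raise ValueError("expected exactly 5 years of EPS, oldest first")
    # oldest year gets weight 1/15, most recent gets 5/15; 1+2+3+4+5 = 15
    return sum(w * e for w, e in zip(range(1, 6), eps_by_year)) / 15

# PEP example from the thread: EPS for 2000..2004, oldest first
pep = [1.42, 1.33, 1.67, 2.04, 2.41]
print(round(epsmg(pep), 2))  # -> 1.95
```

The same weights (5/15 down to 1/15) are what the quoted JavaScript encodes as .33333, .26667, .2, .13333, and .06667.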
Extraneous Roots

Date: 06/13/2001 at 00:48:26
From: Kostya Tomashevsky
Subject: An equation with a square root

Recently I came across this equation: sqrt(x-3) = x-5
I squared both sides to solve the quadratic equation, and got the solutions 4 and 7. When I checked them I saw that 4 only worked when I calculated the square root as a negative, and 7 only worked when the root was assumed positive. My question is, are both of these solutions valid, or only 7? Or neither of them?
Thank you, Kostya Tomashevsky

Date: 06/13/2001 at 08:41:23
From: Doctor Peterson
Subject: Re: An equation with a square root

Hi, Kostya. What you have discovered is called "extraneous roots." You can read about them here: Why Multiple Roots?
What happens is that, by squaring both sides of the equation, you have made an equation whose roots include those of both
sqrt(x-3) = x-5
-sqrt(x-3) = x-5
because these two equations give the same result when you square them. If you treated the radical in the original problem as having two values, positive and negative, then both of the roots you found would be valid; but since we interpret a radical as a function that returns only the positive root, only one works. You can't be sure which root will be valid for the original equation until you check them. You always have to check each solution you find, and in this situation only those that work are valid solutions to the problem.
- Doctor Peterson, The Math Forum
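Doctor Peterson's "always check each solution" advice is easy to mechanize. A small, purely illustrative Python sketch that solves the squared equation and then filters out the extraneous root:

```python
import math

# Squaring sqrt(x-3) = x-5 gives x^2 - 11x + 28 = 0.
# Solve the quadratic, then keep only candidates that satisfy the
# ORIGINAL equation (the principal square root is non-negative).
a, b, c = 1, -11, 28
disc = b * b - 4 * a * c
candidates = [(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)]
valid = [x for x in candidates
         if x >= 3 and abs(math.sqrt(x - 3) - (x - 5)) < 1e-9]
print(candidates)  # [4.0, 7.0]
print(valid)       # [7.0] -- 4 is the extraneous root
```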
A Generalized Strategy Eliminability Criterion and Computational Methods for Applying It
Vincent Conitzer, Tuomas Sandholm

We define a generalized strategy eliminability criterion for bimatrix games that considers whether a given strategy is eliminable relative to given dominator & eliminee subsets of the players' strategies. We show that this definition spans a spectrum of eliminability criteria from strict dominance (when the sets are as small as possible) to Nash equilibrium (when the sets are as large as possible). We show that checking whether a strategy is eliminable according to this criterion is coNP-complete (both when all the sets are as large as possible and when the dominator sets each have size 1). We then give an alternative definition of the eliminability criterion and show that it is equivalent using the Minimax Theorem. We show how this alternative definition can be translated into a mixed integer program of polynomial size with a number of (binary) integer variables equal to the sum of the sizes of the eliminee sets, implying that checking whether a strategy is eliminable according to the criterion can be done in polynomial time, given that the eliminee sets are small. Finally, we study using the criterion for iterated elimination of strategies.

Content Area: 7. Game Theory and Economic Models
Subjects: 7.1 Multi-Agent Systems; 1.8 Game Playing
Submitted: May 6, 2005
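The "strict dominance" end of the spectrum mentioned in the abstract can be illustrated in a few lines. This sketch only removes pure strategies strictly dominated by other pure strategies; the paper's criterion is far more general (it allows mixed dominators and arbitrary eliminee sets), and the function name and example game are my own choices:

```python
from itertools import product

def iterated_strict_dominance(A, B):
    """Iteratively remove pure strategies strictly dominated by another
    PURE strategy. A[i][j] is the row player's payoff and B[i][j] the
    column player's payoff at row i, column j. Returns the surviving
    row and column strategy indices."""
    rows = list(range(len(A)))
    cols = list(range(len(A[0])))
    changed = True
    while changed:
        changed = False
        for r, s in product(rows, rows):  # does row s strictly dominate row r?
            if r != s and all(A[s][c] > A[r][c] for c in cols):
                rows.remove(r); changed = True; break
        for c, d in product(cols, cols):  # does column d strictly dominate column c?
            if c != d and all(B[r][d] > B[r][c] for r in rows):
                cols.remove(c); changed = True; break
    return rows, cols

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
A = [[3, 0], [5, 1]]  # row player's payoffs
B = [[3, 5], [0, 1]]  # column player's payoffs
print(iterated_strict_dominance(A, B))  # ([1], [1]) -- only (defect, defect) survives
```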
approximating error on taylor poly

May 2nd 2009, 10:22 PM #1
Junior Member, Apr 2009

Find the fifth order taylor polynomial for f(x) = 1/x^2 based at x_0 = 1, then find an interval centered at x_0 = 1 in which the approximation error |f(x) - P_5(x)| < 0.01

I found the taylor poly 1 - 2(x-1) + 3(x-1)^2 - 4(x-1)^3 ... - 6(x-1)^5

I don't know how to simply find the approximation. I could try solving for x but that would take way too long.

May 2nd 2009, 10:50 PM #2

The error in an alternating series is always less than the n+1 term
$7(x-1)^6<\frac{1}{100} \iff (x-1)^6<\frac{1}{700}$
$|x-1| < \left( \frac{1}{700} \right)^{\frac{1}{6}} \approx .336$

May 2nd 2009, 10:58 PM #3

You have an expression for the rest (the remainder):
$R(x)=\frac{1}{5!}\int_{1}^{x}f^{(6)}(t)(x-t)^5dt$
After calculating this integral we obtain:
Then you study this function on $[0,2]$ for example. And we find this interval: $[0.76, 1.24]$
Maybe an easier solution exists.
Edit: Yes, I didn't think to use series...
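The two answers above disagree slightly, and a quick numerical check shows why: the alternating-series bound is only valid to the right of x_0 = 1 (for x < 1 the tail terms all have the same sign), which is why the integral-remainder approach gives the narrower interval. A Python sketch of the check:

```python
def p5(x):
    # fifth-order Taylor polynomial of 1/x^2 about x0 = 1:
    # coefficients (-1)^k (k+1), i.e. 1, -2, 3, -4, 5, -6
    h = x - 1
    return sum((-1) ** k * (k + 1) * h ** k for k in range(6))

def f(x):
    return 1 / x ** 2

r = (1 / 700) ** (1 / 6)          # ~0.3356, from the bound 7|x-1|^6 < 1/100
print(abs(f(1 + r) - p5(1 + r)))  # ~0.0072, within the 0.01 tolerance
print(abs(f(1 - r) - p5(1 - r)))  # ~0.016, the bound fails on the left side
```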
Twenty third British Mathematical Colloquium

This was held at Kent: 30 March - 1 April 1971. The enrolment was 531. The chairman was M E Noble and the secretary was J P Earl.

Minutes of meetings, etc. are available by clicking on a link below:
General Meeting Minutes for 1971
Committee Meeting Minutes for 1971

The plenary speakers were:
Carleson, L: Recent results in harmonic analysis
Dade, E C: What is or isn't happening in the theory of representations of finite groups
Kuiper, N H: A generalisation of convexity

The morning speakers were:
Allan, G R: Analytic functions and Banach algebras
Birch, B J: Elliptic curves parametrised by modular functions
Cassels, J W S: Old and new conjectures and theorems on trigonometric sums
Edge, W L: Some special curves of small genus
Edmunds, D E: Quasi-linear partial differential equations
Halberstam, H: Prime number theory: Hypothesis H
Higgins, P J: Equational theories in universal algebra
Horrocks, G: Cohomology of algebraic varieties
Hubbuck, J: Finite dimensional H-spaces
Kegel, O H: Locally finite groups
Kovari, T: Faber expansions and the order of polynomial approximation
Kuran, U: Some problems in potential theory
Neumann, P M: Homage to Burnside: permutation groups of prime degree
Rose, J S: The Frattini subgroup in finite group theory
Sanderson, B J: A geometric view of algebraic topology
Segal, G B: Homotopy everything H-spaces
Slater, M: Alternative rings
West, T T: Locally compact semi-algebras, compact semigroups and spectral theory
Limits help

I just started studying calculus for next year and I'm stumped on two step-wise probs:

For what values of a does lim x->a [[x/2]] exist?

f(x) = sin pi/[[x]], a = 3
a) lim x->a- f(x)
b) lim x->a+ f(x)
c) lim x->a f(x)

I'm no good at these types and help would be greatly appreciated.

For the first one, I'm not too sure about it, but I think it is all real numbers except even numbers. If you try putting an even number in for 'a', the left and right hand limits for the greatest integer function are different, meaning that lim x->a does not exist.

For the second one:
a) lim x->3- f(x) will become sin pi/2. The [[x]] becomes 2 because if you examine values of x between 2 and 3, you'll see that [[x]] equals 2 for all of them. For example, [[2.1]]=2, [[2.2]]=2, [[2.3]]=2. Thus, the limit of [[x]] as x approaches 3 from the left is 2. Therefore, sin pi/2 = 1.
b) In this case, we have to see what the values of the function will be for values of x slightly greater than 3 (because x is approaching 3 from the right). [[3.3]]=3, [[3.2]]=3, [[3.1]]=3. Thus, the limit for the greatest integer part of the function will reduce down to 3. Therefore, f(x) = sin pi/3, which is root 3 over 2.
c) From parts a and b, we saw that the left and right hand limits are different. Therefore, the limit for the function as x approaches 3 does not exist.

I hope this is clear. Good luck with your calculus!

thanks a lot, it's a bit more clear now
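The left-hand/right-hand reasoning in parts (a) and (b) can be checked numerically. A small Python sketch, using math.floor for the greatest integer function [[x]]:

```python
import math

def f(x):
    # sin(pi / [[x]]), the function from the problem
    return math.sin(math.pi / math.floor(x))

eps = 1e-9
left = f(3 - eps)    # floor(3 - eps) = 2, so sin(pi/2) = 1
right = f(3 + eps)   # floor(3 + eps) = 3, so sin(pi/3) = sqrt(3)/2
print(round(left, 6), round(right, 6))  # 1.0 0.866025
print(math.isclose(left, right))        # False -> the two-sided limit does not exist
```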
Summary: Continuous Normalization for the Lambda-Calculus and Gödel's T
Klaus Aehlig and Felix Joachimski
Mathematisches Institut, Ludwig-Maximilians-Universität München, Theresienstrasse 39, 80333 München, Germany

Abstract. Building on previous work by Mints, Buchholz and Schwichtenberg, a simplified version of continuous normalization for the untyped λ-calculus and Gödel's T is presented and analyzed in the coalgebraic framework of non-wellfounded terms with so-called repetition constructors.
The primitive recursive normalization function is uniformly continuous w.r.t. the natural metric on non-wellfounded terms. Furthermore, the number of necessary repetition constructors is locally related to the number of reduction steps needed to reach the normal form and to the size of the normal form (as represented by the Böhm tree). It is also shown how continuous normal forms relate to derivations of strong normalizability in the typed λ-calculus and how this leads to new bounds for the sum of the height of the reduction tree and the size of the normal form.
American Mathematical Society Bulletin Notices AMS Sectional Meeting Program by Day Current as of Tuesday, April 12, 2005 15:09:32 Program | Deadlines | Inquiries: meet@ams.org 1996 Fall Central Sectional Meeting Columbia, MO, November 1-3, 1996 Meeting #916 Associate secretaries: Susan J Friedlander , AMS Saturday November 2, 1996 • Saturday November 2, 1996, 8:00 a.m.-10:50 a.m. Special Session on Commutative Algebra, II Room 105, General Classroom Building Steven Dale Cutkosky, University of Missouri, Columbia dale@cutkosky.math.missouri.edu Hema Srinivasan, University of Missouri, Columbia mathhs@mizzou1.missouri.edu • Saturday November 2, 1996, 8:30 a.m.-10:50 a.m. Special Session on Differential Equations and Dynamical Systems, II Room 204, General Classroom Building Carmen C. Chicone, University of Missouri, Columbia carmen@chicone.cs.missouri.edu Yuri D. Latushkin, University of Missouri, Columbia mathyl@mizzou1.bitnet • Saturday November 2, 1996, 8:30 a.m.-10:50 a.m. Special Session on Lie Groups and Physics, II Room 210, General Classroom Building Victor A. Ginzburg, University of Chicago ginzburg@math.uchicago.edu Yan Soibelman, Kansas State University soibelma@ihes.fr □ 8:30 a.m. A braid group action on invariant forms. Ben Cox*, Lulea University, Lulea, Sweden S-971 87 □ 9:20 a.m. Infinite wedge representations and combinatorics. Eugene Stern*, University of California, Berkeley □ 10:10 a.m. Canonical basis and cohomologies of local systems. Alexander Kirillov*, Massachusetts Institute of Technology • Saturday November 2, 1996, 8:30 a.m.-10:50 a.m. Special Session on Algebraic Geometry, II Room 223, General Classroom Building Dan Edidin, University of Missouri edidin@cantor.math.missouri.edu Qi Zhang, University of Missouri qi@zhang.math.missouri.edu • Saturday November 2, 1996, 9:00 a.m.-10:50 a.m. Special Session on Partial Differential Equations and Mathematical Physics, II Room 109, General Classroom Building Mark S. 
Ashbaugh, University of Missouri, Columbia mark@ashbaugh.math.missouri.edu • Saturday November 2, 1996, 9:00 a.m.-10:50 a.m. Special Session on Harmonic Analysis and Probability, II Room 221, General Classroom Building Nakhle Habib Asmar, University of Missouri, Columbia mathna@mizzou1.bitnet Stephen John Montgomery-Smith, University of Missouri, Columbia mathsms@mizzou1.missouri.edu • Saturday November 2, 1996, 9:00 a.m.-10:50 a.m. Special Session on Differential Geometry, II Room 114, General Classroom Building John Kelly Beem, University of Missouri, Columbia mathjkb@mizzou1.missouri.edu Adam D. Helfer, University of Missouri, Columbia dgsession@godel.math.missouri.edu • Saturday November 2, 1996, 9:00 a.m.-10:50 a.m. Special Session on Gauge Theory and Its Interaction With Holomorphic and Symplectic Geometry,II Room 117, General Classroom Building Stamatis A. Dostoglou, University of California, Santa Barbara Jan Segert, University of Missouri, Columbia jan@segert.cs.missouri.edu Shuguang Wang, University of Missouri, Columbia sw@wang.cs.missouri.edu • Saturday November 2, 1996, 9:00 a.m.-10:50 a.m. Special Session on Spectral Theory and Completely Integrable Systems, II Room 208, General Classroom Building Fritz Gesztesy, University of Missouri, Columbia mathfg@mizzou1.missouri.edu • Saturday November 2, 1996, 9:00 a.m.-10:50 a.m. Special Session on Classifying Spaces and Cohomology of Finite Groups, I Room 119, General Classroom Building Alejandro Adem, University of Wisconsin adem@math.wisc.edu Stewart B. Priddy, Northwestern University s_priddy@math.nwu.edu • Saturday November 2, 1996, 9:00 a.m.-10:20 a.m. Special Session on Banach Spaces and Related Topics,II Room 219, General Classroom Building Peter G. Casazza, University of Missouri, Columbia pete@casazza.cs.missouri.edu N. J. Kalton, University of Missouri, Columbia mathnjk@mizzou1.bitnet □ 9:00 a.m. Copies of $c_{0}$ and asymptotically isometric copies of $c_{0}$ in Banach spaces. 
Patrick N Dowling*, Miami University □ 9:30 a.m. On Banach spaces that contain $\ell_1$ S. J. Dilworth, University of South Carolina Maria Girardi*, University of South Carolina W. B. Johnson, Texas A&M University □ 10:00 a.m. Unconditional bases in $c_0-$products Nigel J. Kalton*, University of Missouri Peter G. Casazza, University of Missouri • Saturday November 2, 1996, 11:00 a.m.-11:50 a.m. Invited Address Recent Developments in the Cohomology of Finite groups Arts and Science Building, Allen Auditorium Alejandro Adem*, University of Wisconsin • Saturday November 2, 1996, 1:30 p.m.-2:20 p.m. Invited Address Function Theory and Geometry on Covering Spaces Arts and Science Building, Allen Auditorium David E. Barrett*, University of Michigan • Saturday November 2, 1996, 2:30 p.m.-5:50 p.m. Special Session on Partial Differential Equations and Mathematical Physics, III Room 109, General Classroom Building Mark S. Ashbaugh, University of Missouri, Columbia mark@ashbaugh.math.missouri.edu • Saturday November 2, 1996, 2:30 p.m.-5:50 p.m. Special Session on Commutative Algebra, III Room 105, General Classroom Building Steven Dale Cutkosky, University of Missouri, Columbia dale@cutkosky.math.missouri.edu Hema Srinivasan, University of Missouri, Columbia mathhs@mizzou1.missouri.edu • Saturday November 2, 1996, 2:30 p.m.-9:50 p.m. Special Session on Lie Groups and Physics, III Room 210, General Classroom Building Victor A. Ginzburg, University of Chicago ginzburg@math.uchicago.edu Yan Soibelman, Kansas State University soibelma@ihes.fr □ 2:30 p.m. Geometric singularities and enhanced Gauge symmetries. Michael Bershadsky*, Harvard University □ 3:20 p.m. Generating functional and effective action in CFT on higher genus Riemann surfaces. Leon Takhtajan*, SUNY, Stony Brook □ 4:10 p.m. On the proof of the mirror conjecture for flag manifolds and complete intersections. A. Givental*, University of California, Berkeley □ 5:00 p.m.
Quantum Schubert polynomials and Gromov-Witten invariants. Alex Postnikov*, Massachusetts Institute of Technology □ 5:50 p.m. □ 7:30 p.m. Vertex operator algebras, quantum field theory and geometry. Andrei Radul*, Howard University □ 8:20 p.m. Deformation quantization and $[Q,R] = 0$ theorem. Boris Tsygan*, □ 9:10 p.m. Elliptic quantum groups and Bethe ansatz. Aleksander Varchenko*, University of North Carolina • Saturday November 2, 1996, 3:00 p.m.-4:50 p.m. Special Session on Harmonic Analysis and Probability, III Room 221, General Classroom Building Nakhle Habib Asmar, University of Missouri, Columbia mathna@mizzou1.bitnet Stephen John Montgomery-Smith, University of Missouri, Columbia mathsms@mizzou1.missouri.edu • Saturday November 2, 1996, 3:00 p.m.-5:50 p.m. Special Session on Differential Geometry, III Room 114, General Classroom Building John Kelly Beem, University of Missouri, Columbia mathjkb@mizzou1.missouri.edu Adam D. Helfer, University of Missouri, Columbia dgsession@godel.math.missouri.edu • Saturday November 2, 1996, 3:00 p.m.-5:50 p.m. Special Session on Differential Equations and Dynamical Systems, III Room 204, General Classroom Building Carmen C. Chicone, University of Missouri, Columbia carmen@chicone.cs.missouri.edu Yuri D. Latushkin, University of Missouri, Columbia mathyl@mizzou1.bitnet • Saturday November 2, 1996, 3:00 p.m.-5:50 p.m. Special Session on Gauge Theory and Its Interaction With Holomorphic and Symplectic Geometry, III Room 117, General Classroom Building Stamatis A. Dostoglou, University of California, Santa Barbara Jan Segert, University of Missouri, Columbia jan@segert.cs.missouri.edu Shuguang Wang, University of Missouri, Columbia sw@wang.cs.missouri.edu □ 3:00 p.m. Essential surfaces in bounded 3-manifolds D. Cooper, University of California, Santa Barbara D D. Long*, University of California, Santa Barbara A W. Reid, University of Texas, Austin □ 3:30 p.m. Floer homology of connect sums of Brieskorn spheres. Christopher M.
Herald*, Swarthmore College □ 4:00 p.m. Symplectic Invariants and Floer Homology Darko Milinkovic*, University of Wisconsin, Madison Yong-Geun Oh, University of Wisconsin, Madison □ 4:30 p.m. Stability properties of monopoles David M. Stuart*, □ 5:00 p.m. Discussion-W. Li □ 5:30 p.m. Computation formulas for monopole invariants for 3-manifolds Rongguang Wang*, HKUST, Hong Kong • Saturday November 2, 1996, 3:00 p.m.-5:50 p.m. Special Session on Spectral Theory and Completely Integrable Systems, III Room 208, General Classroom Building Fritz Gesztesy, University of Missouri, Columbia mathfg@mizzou1.missouri.edu • Saturday November 2, 1996, 3:00 p.m.-4:50 p.m. Special Session on Classifying Spaces and Cohomology of Finite Groups, II Room 119, General Classroom Building Alejandro Adem, University of Wisconsin adem@math.wisc.edu Stewart B. Priddy, Northwestern University s_priddy@math.nwu.edu • Saturday November 2, 1996, 3:00 p.m.-6:45 p.m. Special Session on Algebraic Geometry, III Room 223, General Classroom Building Dan Edidin, University of Missouri edidin@cantor.math.missouri.edu Qi Zhang, University of Missouri qi@zhang.math.missouri.edu □ 3:00 p.m. Symmetries of Littlewood-Richardson coefficients for flag varieties and Schubert polynomials Frank Sottile*, MSRI and University of Toronto Nantel Bergeron, York University, Ontario, and UQAM, Montréal □ 3:30 p.m. On extensions of the infinitesimal invariant of normal functions Xian Wu*, University of South Carolina □ 3:30 p.m. Equivariant intersection theory Dan Edidin, University of Missouri William A. Graham*, Institute for Advanced Study □ 4:20 p.m. □ 4:45 p.m. Affine Lines Fibered by Affine Lines over the Projective Line David L. Wright*, Washington University in St. Louis □ 5:15 p.m. Dimension of the Hilbert scheme of space curves. Kyungho Oh, Prabhakar Rao*, UM-St.Louis □ 5:45 p.m. Hilbert functions of "fat points" Karen A. Chandler*, University of Notre Dame □ 6:15 p.m.
Homogeneous basis for an ideal Tie Luo*, University of Texas at Arlington Erol Yilmaz, University of Texas at Arlington • Saturday November 2, 1996, 3:00 p.m.-5:50 p.m. Special Session on Banach Spaces and Related Topics, III Room 219, General Classroom Building Peter G. Casazza, University of Missouri, Columbia pete@casazza.cs.missouri.edu N. J. Kalton, University of Missouri, Columbia mathnjk@mizzou1.bitnet • Saturday November 2, 1996, 3:00 p.m.-3:55 p.m. Contributed Papers Room 217, General Classroom Building
To solve this system using the addition method, you would need to multiply the first equation by what number in order for the y's to add out?
3x - y = 3
-2x + 2y = 6
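For the record, the answer is 2: multiplying 3x - y = 3 by 2 gives 6x - 2y = 6, and adding -2x + 2y = 6 cancels the y terms (4x = 12, so x = 3 and then y = 6). A small Python sketch of the same elimination step (the function name is mine):

```python
from fractions import Fraction

def eliminate_y(eq1, eq2):
    """eq = (a, b, c) meaning a*x + b*y = c.
    Return the multiplier m for eq1 that makes the y terms cancel
    when the scaled eq1 is added to eq2, plus the resulting solution."""
    a1, b1, c1 = map(Fraction, eq1)
    a2, b2, c2 = map(Fraction, eq2)
    m = -b2 / b1                       # chosen so m*b1 + b2 == 0
    x = (m * c1 + c2) / (m * a1 + a2)  # add the scaled equations, solve for x
    y = (c1 - a1 * x) / b1             # back-substitute into eq1
    return m, x, y

m, x, y = eliminate_y((3, -1, 3), (-2, 2, 6))
print(m, x, y)  # 2 3 6
```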
James Mercer Born: 15 January 1883 in Bootle, Liverpool, England Died: 21 February 1932 in London, England Click the picture above to see a larger version Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index James Mercer's father was Thomas Mercer (born at Upholland, Lancashire about 1852) who was a bank cashier. His mother was Sarah Alice Mercer (born in Liverpool about 1858). James had eight younger siblings: Mary (born about 1885), Alice (born 1887), Richard (born 1889) and Thomas (born about 1891), Frank (born about 1893), Eric (born 1894), Sarah (born about 1897) and Ernest (born about 1899). James Mercer was educated in Liverpool, attending University College there before entering Trinity College Cambridge to study the mathematical Tripos. Among his contemporaries there were G N Watson and J E Littlewood while he was taught by Whitehead, Whittaker and Hardy. Hardy had just come onto the staff at Trinity and he acted as a private tutor to Mercer. In [2] Littlewood describes the Mathematical Tripos examinations which he and Mercer sat. Describing Part I of the Tripos, Littlewood writes:- [The examination] consisted of 7 papers ('first four days') on comparatively elementary subjects, the riders, however, being quite stiff, followed a week later by another 7 ('second four days'). A pass on the first four days qualified for a degree, but the second four days carried double the marks, and since it was impossible to revise everything the leading candidates concentrated on the second four days ... On the problem paper [for the first four days] Mercer got 270 out of 760 for 18 questions (I got only 180). ... In the second four days (ignoring the problem paper) Mercer and I each got about 2050 out of 4500 (each about 330 out of 1340 in the 18 question problem paper). Mercer graduated in 1907 bracketed Senior Wrangler (first equal) with Littlewood. He was awarded a Smith's prize and, in 1909, he was elected a Fellow of Trinity. 
He returned to his home town of Liverpool for a while, being appointed as an Assistant Lecturer there, but soon after, when the opportunity came to return to Cambridge, he accepted a Fellowship and Lectureship in Christ's College. During the First World War Mercer served as a Naval Instructor. He saw active service at the Battle of Jutland, the only major encounter between the British and German fleets during the First World War. Fought off the coast of Denmark, it began on 31 May 1916. The battle was inconclusive but the British fleet suffered heavy losses. Mercer survived the battle and at the end of the war returned to his duties at Cambridge. Back in Cambridge he resumed his mathematical research, continuing his work on function theory. He was elected a Fellow of the Royal Society in 1922 but after this time his health, which had been poor for some time, began to fail. He took ill health retirement in 1926. Hobson [1] gives this overview of Mercer's mathematical achievements:- Mercer was a mathematical analyst of originality and skill; he made noteworthy advances in the theory of integral equations, and especially in the theory of the expansion of arbitrary functions in series of orthogonal functions. His output of original work would no doubt have been much larger had he not been continually hampered by bad health. Mercer's theorem about the uniform convergence of eigenfunction expansions for kernels of operators appears in his 1909 paper Functions of positive and negative types and their connection with the theory of integral equations published in the Philosophical Transactions. There have been many papers written since then generalising Mercer's theorem to various other settings. For example Mathematical Reviews contains at least 65 papers studying such generalisations which have been published since 1980. 
Article by: J J O'Connor and E F Robertson
List of References (2 books/articles)
JOC/EFR © May 2001, School of Mathematics and Statistics, University of St Andrews, Scotland
Model-Based Therapeutic Correction of Hypothalamic-Pituitary-Adrenal Axis Dysfunction

PLoS Comput Biol. Jan 2009; 5(1): e1000273.
Simon M. Lin, Editor

The hypothalamic-pituitary-adrenal (HPA) axis is a major system maintaining body homeostasis by regulating the neuroendocrine and sympathetic nervous systems as well as modulating immune function. Recent work has shown that the complex dynamics of this system accommodate several stable steady states, one of which corresponds to the hypocortisol state observed in patients with chronic fatigue syndrome (CFS). At present these dynamics are not formally considered in the development of treatment strategies. Here we use model-based predictive control (MPC) methodology to estimate robust treatment courses for displacing the HPA axis from an abnormal hypocortisol steady state back to a healthy cortisol level. This approach was applied to a recent model of HPA axis dynamics incorporating glucocorticoid receptor kinetics. A candidate treatment that displays robust properties in the face of significant biological variability and measurement uncertainty requires that cortisol be further suppressed for a short period until adrenocorticotropic hormone levels exceed 30% of baseline. Treatment may then be discontinued, and the HPA axis will naturally progress to a stable attractor defined by normal hormone levels. Suppression of biologically available cortisol may be achieved through the use of binding proteins such as CBG and certain metabolizing enzymes, thus offering possible avenues for deployment in a clinical setting. Treatment strategies can therefore be designed that maximally exploit system dynamics to provide a robust response to treatment and ensure a positive outcome over a wide range of conditions.
Perhaps most importantly, a treatment course involving further reduction in cortisol, even transient, is quite counterintuitive and challenges the conventional strategy of supplementing cortisol levels, an approach based on steady-state reasoning.

Author Summary

The hypothalamic-pituitary-adrenal (HPA) axis is one of the body's major control systems, helping to regulate functions ranging from digestion to immune response to metabolism. Dysregulation of the HPA axis is associated with a number of neuroimmune disorders including chronic fatigue syndrome (CFS), depression, Gulf War illness (GWI), and posttraumatic stress disorder (PTSD). Objective diagnosis and targeted treatments of these disorders have proven challenging because they present no obvious lesion. However, the body's various components do not work in isolation, and it is important to consider exactly how their interactions might be altered by disease. Using a relatively simple mathematical description of the HPA axis, we show how the complex dynamical behavior of this system will readily accommodate multiple stable resting states, some of which may correspond to chronic loss of function. We propose that a well-directed push given at the right moment may encourage the axis to reset under its own volition. We use model-based predictive control theory to compute such a push. The result is counterintuitive and challenges the conventional time-invariant approach to disease and therapy. Indeed we demonstrate that in some cases it might be possible to exploit the natural dynamics of these physiological systems to stimulate recovery.

Introduction

The hypothalamic-pituitary-adrenal (HPA) axis constitutes one of the major peripheral outflow systems of the brain, serving to maintain body homeostasis by adapting the organism to changes in the external and internal environments. It does this by regulating the neuroendocrine and sympathetic nervous systems as well as modulating immune function [1].
Through regulation of these systems, the HPA axis initiates and coordinates responses to physical stressors, such as infection, hemorrhage, dehydration and thermal exposure, and to neurogenic stressors, such as fear, anticipation and fight or flight. Many aspects of the organization and function of the HPA axis have been characterized in clinical and laboratory studies revealing a number of component feedback and feed-forward signaling processes. Stress activates the release of corticotropin-releasing hormone (CRH) from the paraventricular nucleus (PVN) of the hypothalamus. The release of CRH into the hypophysial-portal circulation in turn acts in conjunction with arginine vasopressin on CRH-R1 receptors of the anterior pituitary stimulating the rapid release of adrenocorticotropic hormone (ACTH). ACTH is then released into the peripheral circulation and stimulates the release of the glucocorticoid cortisol from the adrenal cortex by acting on the receptor MC2-R (type 2 melanocortin receptor). Cortisol enters the cell and binds to the glucocorticoid receptor present in the cytoplasm of every nucleated cell; hence the widespread effects of glucocorticoids on practically every system of the body including the endocrine, nervous, cardiovascular and immune systems. To keep HPA axis activity in check, glucocorticoids also exert negative feedback at the hypothalamus and pituitary glands to inhibit the synthesis and secretion of CRH and ACTH, respectively. Moreover, glucocorticoid negative feedback causes a reduction in corticotroph receptor expression leading to a desensitization of the pituitary to the stimulatory effects of CRH on ACTH release. This negative feedback is also felt in the hippocampus where it exerts a negative influence on the PVN. A detailed review of the physiology and biochemistry of the HPA axis as well as its known interactions with the immune system may be found in work by Silverman et al. [2].
A number of chronic diseases have been characterized by abnormalities in HPA axis regulation. These include major depression and its subtypes, anxiety disorders such as post-traumatic stress disorder and panic disorder, and cognitive disorders such as Alzheimer's disease and minimal cognitive impairment of aging [3]. Dysregulation of the HPA axis has also been linked to the pathophysiology of Gulf War illness [4], post-infective fatigue [5], and chronic fatigue syndrome (CFS) [6],[7]. It is not clear what causes this dysregulation, but it is manifested in many HPA axis disorders as a hypercortisol or hypocortisol state. The existence of these separate and stable states is not surprising when one considers the multiple feedforward and feedback mechanisms that regulate the HPA axis. Systems such as this often display complex dynamics that readily accommodate multiple stable steady states, which are known as attractors because the system is naturally drawn back to these resting states after perturbation. However, if the perturbation is of sufficient strength and duration, the system can be pushed away from a given resting state and into the basin of a new attractor. Though much is known about its components, one of the main difficulties in studying the behavior of the HPA axis has been in integrating the expansive body of published experimental information. Numerical models provide an ideal framework for such integration. Simple models of the HPA axis have been constructed using deterministic coupled ordinary differential equations [8],[9]. Though successful in reproducing some of the basic features of HPA axis dynamics, these early models neglected to include feedback and feed-forward immune effector molecules and associated mechanisms. Linear approximations of some components led to unrealistic predictions beyond a very narrow region of concentrations.
In addition, transport processes involved in the distribution of these chemical signals from the brain throughout the body were not modeled explicitly. This level of abstraction made direct comparison of simulation results to actual HPA axis chemistry and physiology highly tenuous. In a move towards increased fidelity, Gupta et al. [10] introduced a more detailed description of glucocorticoid receptor dynamics, enabling the model to demonstrate bistability in HPA axis dynamics. As mentioned previously, this theoretical proof of the existence of a second stable steady state is highly compatible with clinical observations. Moreover, the abnormally low cortisol levels characterizing this stable resting state or basin of attraction are consistent with documented observations of hypocortisolism in patients with CFS [11], Gulf War illness and other similar conditions. In this work we adopt the model proposed by Gupta et al. [10] as a recent and detailed representation of the HPA axis. On the basis of this model we propose a framework for estimating robust corrective measures for displacing the HPA axis from a chronic hypocortisol state back to a healthy state. Using model-based predictive control (MPC) methodology we demonstrate that it is possible to compute such treatment time courses while dealing with the inherently high level of uncertainty characteristic of biological systems. While this uncertainty might lead to compromises in efficiency, interventions can be computed that predict a positive outcome. Our analysis indicates that one such treatment could involve a pharmacologically induced reduction in cortisol forcing a build-up of ACTH. Upon reaching a specific threshold concentration of ACTH, the intervention is discontinued and the HPA axis will return to a healthy steady state under its own volition as this is now the closest attractor for the system.
The HPA Axis Model

A model of the HPA axis which includes the glucocorticoid receptor and the dynamics of glucocorticoid receptor-cortisol interactions has been proposed by Gupta et al. [10]. This model is described by the following differential equations as System H (Eq. 1). The system states are given as x = [x[1]; x[2]; x[3]; x[4]]^T and are described in Table 1. Note that the states in this model are scaled values, as described by Gupta et al. [10]. The system parameters are given by the vector p = [k[i1]; k[cd]; k[ad]; k[i2]; k[cr]; k[rd]; k]^T. Nominal values for the system parameters are listed in Table 2. The variable d in System H is the stress term which describes the effect of stress (both physical and psychological) on the hypothalamus. This variable is seen as a disturbance that perturbs System H from a steady-state value.

Table 1. Steady-state values for concentrations of CRH, ACTH, free GR and circulating cortisol.

Table 2. Parameter settings for the differential equation model of the HPA axis proposed by Gupta et al. [10].

Control of the HPA Axis System

In this first analysis the HPA axis system is considered under idealized conditions where all parameters are assumed constant and precisely known. In addition, the states x are assumed known as a function of time with no measurement error and the control action is implemented perfectly. The approach taken for choosing an optimal control is based on the Model Predictive Control (MPC) framework [15]–[17]. Under this framework, an objective function of the manipulated and measured variables is defined. Typically the objective function is a mathematical expression which corresponds to engineering objectives or underlying system constraints. The input computed under the MPC framework is the one in a class of permissible inputs that minimizes the chosen objective function. In this work it is assumed that the variable to be manipulated for treatment is the rate of addition or removal of cortisol from circulation.
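System H is a set of coupled first-order ODEs and can be explored numerically with even a very simple integrator. The sketch below is not the Gupta et al. model (Eq. 1 is not reproduced in this text); it is a two-state caricature with the same qualitative loop — an ACTH-like state x2 driven by stress d and inhibited by a cortisol-like state x4 — and every rate constant here is an assumption:

```python
# Forward-Euler integration of a toy two-state hormone loop.
# NOTE: illustrative stand-in, NOT the Gupta et al. System H;
# the feedback form and rates k_a, k_c are assumptions.

def step(x2, x4, dt, d=0.0, k_a=1.0, k_c=1.0):
    """One Euler step: x2 (ACTH-like) is driven by stress d and
    inhibited by x4 (cortisol-like); x4 is produced from x2."""
    dx2 = (1.0 + d) / (1.0 + x4) - k_a * x2   # drive with negative feedback
    dx4 = k_c * x2 - x4                        # cortisol tracks ACTH
    return x2 + dt * dx2, x4 + dt * dx4

def simulate(x2=0.1, x4=0.1, t_end=50.0, dt=0.01, d=0.0):
    """Integrate to t_end and return the final (x2, x4)."""
    for _ in range(int(t_end / dt)):
        x2, x4 = step(x2, x4, dt, d)
    return x2, x4
```

With these assumed rates the toy system relaxes to a single fixed point near x2 = x4 ≈ 0.618; a serious study of the four-state System H would use an adaptive stiff ODE solver rather than forward Euler.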
To model this control action, System H is augmented with a control term u in the equation for cortisol (x[4]) (Eq. 2). Note that System H[u] is affine with respect to the control action and the disturbance. To avoid dangerous destabilization of the HPA axis by the application of control action u(t), we define the following penalty function to enforce minimal departure from normal ACTH (x[2]) and cortisol (x[4]) levels even though we purposely manipulate circulating cortisol to perturb System H[u]. Here t[0] and t[f] are the start and end time of the optimization horizon, λ is a tuning parameter taking values from zero to one, and x[2]^* and x[4]^* are the healthy steady-state concentrations of ACTH and cortisol, respectively. R is a penalty assigned to the input and Q is the penalty assigned to the state variables. R was chosen as 0 because the cost for therapy was considered negligible compared to the cost of ongoing disability. Q was chosen to weight only x[2] and x[4] because these are the only measured states. The resulting cost function can be written as Equation 3. The parameter λ is used to penalize excessive imbalance of the other hormones (x[1], x[2], x[3]) in response to the control action applied to cortisol (x[4]). In this case, the objective of the controller is to bring the cortisol concentration to set point while minimizing the impact of the treatment on the other three states of the HPA axis. Any change in CRH (x[1]) or the glucocorticoid receptor (GR; x[3]) will be reflected in the concentration of ACTH (x[2]) by virtue of the coupled dynamics described by System H[u]. The tuning parameter λ can be selected to match the intensity of the desired treatment. A λ value near zero will lead to a more intense treatment while a value of λ near one will lead to a very conservative treatment. For proof of concept, a more direct treatment was favored in this work and a λ value of 0.01 was used throughout.
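A quadratic tracking cost of the kind described above can be evaluated numerically on a discretized trajectory by a rectangle-rule sum. The sketch below follows the paper's choices of R = 0 by default and weights on the measured states only; the specific λ-weighting of ACTH versus cortisol deviations is an assumption, since Equation 3 itself is not reproduced in this text:

```python
# Rectangle-rule evaluation of an Eq.-3-style quadratic tracking cost.
# lam trades the penalty on ACTH (x2) deviation against cortisol (x4)
# deviation; R penalizes the input (R = 0 in the paper). The exact
# weighting is an assumption, not the published Q matrix.

def cost(x2_traj, x4_traj, u_traj, dt, x2_star, x4_star, lam=0.01, R=0.0):
    J = 0.0
    for x2, x4, u in zip(x2_traj, x4_traj, u_traj):
        J += dt * (lam * (x2 - x2_star) ** 2
                   + (1.0 - lam) * (x4 - x4_star) ** 2
                   + R * u ** 2)
    return J
```

A trajectory that sits exactly at the healthy steady state with zero input accrues zero cost; any deviation or control effort increases J.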
Note that x[2]^* and x[4]^* correspond to the stable steady state of the unperturbed system (i.e., when u = 0). Typically, a treatment or control action is applied at discrete intervals. As a result, the objective function in Equation 3 was optimized with respect to a piecewise-constant input signal u(t). That is, the optimization procedure searched for an optimal input in the set U[c] of all piecewise-constant functions on [t[0], t[f]]. The initial condition, x(t[0]), is the steady state of the unperturbed system with d[0] = 0.

Steady-State Analysis

The steady-state solutions for the HPA axis model described above as System H can be computed by setting the right-hand side of H to zero, yielding a set of four algebraic equations in the four unknowns {x[1]; x[2]; x[3]; x[4]}. Under this framework, the disturbance variable, d, is assumed to take on a constant value d[0]. The above is a set of polynomials in x, with real coefficients, and maximum total degree of five. Equations 5 to 8 can be simplified using the theory of polynomial ideals [18]. Specifically, the latter can be reduced to the following set of equations (Eq. 9–12). Therein f[3] is a polynomial in x[3] of degree seven, and f[1], f[2] and f[4] are functions only of x[3] and d[0]. The functions f[1] to f[4] can be computed using a symbolic algebra package such as Maple. For the nominal parameter values proposed in Gupta et al. [10] there are at most three real-valued solutions for x[3] and these correspond to the roots of f[3]. Each root is a steady-state value for x[3] and can be used to generate the corresponding values of x[1], x[2] and x[4] given Equations 10 to 12. Note that at steady state x[2] is proportional to x[4] (Eq. 8). A plot of the steady-state values of x[1], x[2] and x[3] as a function of d[0] is shown in Figure 1.

Figure 1. Steady states of the HPA axis system.
In this model of HPA axis dynamics a chronically stressed individual would occupy the stable steady state associated with a depressed cortisol concentration (~0.05) at rest (d[0] = 0). Were an individual to sustain an elevated level of stress (d[0] > 0.168) for an extended period of time, their body would reach the only steady state available locally, that is, the one corresponding to chronic stress. In other words, for values of d[0] greater than 0.168, Equation (9) dictates that there is only one steady-state solution for free GR (x[3]) concentration as opposed to the 3 solutions available for 0 ≤ d[0] < 0.168. By virtue of Equation (12) this results in only one steady-state solution being available for cortisol (x[4]) for d[0] > 0.168. When the stress is removed (i.e., d[0] returns to zero), the system settles at the nearest stable steady state, the abnormal hypocortisol state, as illustrated in Figure 2 by the red dashed trajectory. According to this model the inability of the body to return to the healthy steady state is due to the fact that once the body establishes a new equilibrium it inherently seeks to stay near this point. In order to force the body to return to its original equilibrium its state must first be shifted to a point where the only stable condition in proximity is one corresponding to this original healthy state. Once this is done, the internal regulatory mechanisms of the body will ensure that this healthy stable point is achieved and maintained. This approach is illustrated in Figure 2 by the green dashed trajectory. The design of such a shift is presented in the following section.

Figure 2. Migration of cortisol concentration from one stable point to another.

Redirecting the HPA Axis Using an Idealized Disturbance

As one might expect, the assumptions of ideal control do not correspond to a physically realizable system. However, the analysis of the system under idealized conditions allows the study of possible treatments. Any practical treatment would then be a suboptimal solution as compared to the treatment under idealized conditions. This allows proposed treatments to be benchmarked and compared.
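The steady-state multiplicity just described — three solutions below a critical stress level, a single one beyond it — can be reproduced on a stand-in polynomial by counting sign changes on a grid. The cubic below is illustrative only, not the degree-seven f[3] of Equation 9, and its fold sits near d ≈ 0.385 rather than at the model's 0.168:

```python
# Count real roots of a stand-in steady-state polynomial as the
# constant stress level d is varied. Illustrative only: a cubic,
# not the degree-7 f3 of the HPA model.

def count_steady_states(d, lo=-3.0, hi=3.0, n=3000):
    """Count real roots of f(x) = -x**3 + x + d on [lo, hi]
    by detecting sign changes between grid points."""
    f = lambda x: -x**3 + x + d
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    signs = [f(x) > 0 for x in xs]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)
```

Below the fold the toy system is bistable (three roots: two stable, one unstable between them); above it only one steady state remains, mirroring the saddle-node structure of Figure 1.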
In addition, the solution obtained under idealized conditions can serve as a qualitative guideline for the creation of a practical, although suboptimal, treatment. In engineering terms the objective of treatment is to succeed in bringing the subject to the healthy steady-state target while exerting the smallest disturbance possible on the HPA axis. For example, even though we intend to manipulate circulating cortisol concentration, it should not be allowed to decrease excessively because of the important role cortisol plays in regulating a number of cellular and physiological functions. To avoid such excess perturbations the concentration of ACTH has been included in the objective function of Equation 3. This concentration is more readily measured than that of either CRH or GR, making ACTH a good candidate for monitoring the progress of a treatment. The optimal control solution that minimizes disruption of HPA axis function (Eq. 3) is shown in Figure 3 along with the system's overall trajectory. Note that the optimal input does indeed bring the system to the healthy steady-state point. This is done while maintaining a circulating cortisol concentration that is near the steady-state value with the exception of a rapid drop at the start of treatment. The optimal control solution as computed under the MPC framework has several key features. The cortisol concentration is rapidly dropped at the outset. Once this drop in cortisol concentration is achieved, the system requires little additional control action to come to steady state. This qualitative information can be used to formulate a suboptimal control strategy that will bring the system to the healthy steady state.

Figure 3. Idealized corrective control action.

A More Clinically Realistic Manipulation of the HPA Axis System

In this section a suboptimal control strategy is proposed for the HPA axis system. The goal of this strategy is to mimic the qualitative results of the MPC solution while being realizable in a clinical setting.
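The kind of optimization underlying the idealized solution can be caricatured by brute force: enumerate a small family of piecewise-constant inputs on toy dynamics and keep the one with the lowest Eq.-3-style cost. Everything here — the two-state dynamics, the candidate input grid, and all rates — is an assumed stand-in for the real System H[u] and for a proper MPC solver:

```python
# Brute-force caricature of the MPC search: try a one-switch
# piecewise-constant cortisol input (level held for a duration,
# then zero) and keep the cheapest under a quadratic tracking cost.
# Dynamics and parameters are illustrative assumptions.

def simulate_cost(u_level, u_duration, x2=0.20, x4=0.05,
                  x2_star=0.618, x4_star=0.618,
                  lam=0.01, dt=0.01, t_end=30.0):
    J, t = 0.0, 0.0
    while t < t_end:
        u = u_level if t < u_duration else 0.0
        dx2 = 1.0 / (1.0 + x4) - x2        # ACTH-like state, cortisol feedback
        dx4 = x2 - x4 + u                  # cortisol-like state, plus control
        x2 += dt * dx2
        x4 += dt * dx4
        J += dt * (lam * (x2 - x2_star) ** 2 + (1 - lam) * (x4 - x4_star) ** 2)
        t += dt
    return J

def best_treatment(levels=(-0.2, -0.1, 0.0, 0.1, 0.2),
                   durations=(0.0, 1.0, 2.0, 5.0)):
    """Return (cost, level, duration) of the cheapest candidate input."""
    return min((simulate_cost(l, d), l, d) for l in levels for d in durations)
```

Because the zero input is included in the grid, the selected treatment can never cost more than doing nothing; a real MPC implementation would replace this enumeration with a gradient- or QP-based solver over a receding horizon.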
The MPC solution suggests that manipulating cortisol concentration is a plausible strategy for redirecting the HPA axis to a healthy steady state. The key difficulty in applying this approach is determining when the cortisol concentration has been sufficiently lowered with regard to the other state variables to allow the system to return to a healthy equilibrium. That is, one must identify an observable event (corresponding to a measurable variable) which signals that the steady state of the system has shifted. In a clinical setting only ACTH and cortisol concentrations, corresponding to x[2] and x[4], respectively, can be readily measured. The availability of cortisol analogues makes it possible to manipulate x[4] directly. Therefore, as postulated previously (Eq. 3–4), ACTH (x[2]) can be used to determine when a change in available steady state or attractor has occurred. Under the MPC framework, most of the control action is expended near the initial time. In Figure 3 the external control action prescribed by MPC under ideal conditions and the response of ACTH (x[2]) are both plotted as a function of time. The value of x[2] increases by about 30% as the system moves from the cusp of multiple candidate steady states to the basin of a single steady state. The following treatment is therefore proposed. Treatment 1: externally suppress circulating cortisol until ACTH levels (x[2]) have increased by more than 30% relative to the initial condition. Once this signal is observed, the system's own natural feedback control action should restore cortisol levels to normal. Simulation results for Treatment 1 are shown in Figure 4. As indicated the system is brought to the healthy steady state via the suboptimal but more realistic treatment course. Furthermore, the drop in cortisol concentration is neither as severe nor as sharp as under naïve idealized MPC control. A positive outcome may also be obtained by applying even less severe levels of cortisol suppression and extending the duration of the treatment.
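The logic of a Treatment-1-style intervention — hold cortisol down, release once ACTH exceeds its initial value by 30% — reduces to an event-triggered loop. The sketch below uses an illustrative two-state caricature, not the Gupta equations; the clamp level, rates, and baseline are assumptions:

```python
# Event-triggered treatment loop: hold the cortisol-like state x4
# clamped low, integrate the ACTH-like state x2, and stop the clamp
# once x2 exceeds its baseline by 30%. Dynamics and the clamp level
# are illustrative assumptions, not the Gupta et al. model.

def run_treatment(x2_base=0.618, clamp=0.05, dt=0.001, t_max=10.0, d=0.0):
    x2 = x2_base
    threshold = 1.3 * x2_base          # the 30%-above-baseline trigger
    t = 0.0
    while t < t_max:
        dx2 = (1.0 + d) / (1.0 + clamp) - x2   # feedback weakened by the clamp
        x2 += dt * dx2
        t += dt
        if x2 >= threshold:
            return t, x2               # discontinue treatment here
    return None, x2                    # threshold never reached: treatment failed
```

In this toy setting the trigger fires in under a unit of (scaled) time; with a weaker clamp the ACTH-like state saturates below the threshold and the function reports failure, mirroring the minimum suppression level noted in Figure 5.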
Data presented in Figure 5 show that combinations of treatment duration and cortisol suppression may be varied successfully over a large range. Nonetheless there exists a minimum level of cortisol suppression below which the treatment fails regardless of how long conditions are maintained. Conversely there also exists a minimal treatment duration below which even severe levels of cortisol suppression will prove unsuccessful in restoring normal hormone levels.

Figure 4. A suboptimal but clinically realistic control treatment.

Figure 5. Balancing intensity and duration of treatment.

Robustness Analyses

The results for Treatment 1 shown in Figure 4 are computed under nominal conditions. For the proposed treatment to be clinically useful, it must be effective over a wide variety of conditions and parameter values. The robustness of the proposed approach to changes in the parameter values, initial conditions, and ambient stress level (i.e., value of d[0]) is examined in this section. A direct computational evaluation of robustness of Treatment 1 is difficult to implement. There are four initial conditions (x[1](0); x[2](0); x[3](0); x[4](0)), seven parameters, and one disturbance variable (d[0]). A simulation study where each variable (initial condition, parameter and disturbance) is evaluated at nominal, high and low values would require, at a minimum, 3^12 = 531,441 simulations. An asymptotic analysis of System H[u] is therefore performed instead. Let the concentration of cortisol (x[4]) be manipulated so that the product of cortisol and GR concentrations (x[3]x[4]) is constant. Under these conditions, the asymptotic value of glucocorticoid receptor concentration GR (x[3]) is obtained from Eq.
7. The asymptotic value of GR concentration, x[3]∞, has a minimum as a function of cortisol concentration (x[4]). At the steady-state point given by x[4]∞, with x[3]∞ = k[cr]/k[rd], the unique asymptotic solution for CRH (x[1]) and ACTH (x[2]) is given by Equations 14–15. It should be noted that the values for GR, CRH and ACTH identified in Equations 14–15 represent an asymptotic minimum for the externally controlled system (System H[u]). For the closed loop HPA axis system it represents the minimum achievable cortisol concentration. Note that this equilibrium point is only achievable under external input. This result is independent of the trajectory of the input u(t) and is a property of the HPA axis. Moreover the solution in Equation 15 is unique, indicating that only a single steady state exists at the minimum for x[4]→0 and that this state corresponds to a stable set of non-zero real-valued concentrations of CRH, ACTH and GR. This result confirms that reducing the cortisol concentration to a small enough positive value can indeed take the system to a single stable condition. This is regardless of the value of d[0], parameters, or initial conditions. This condition will correspond to a healthy equilibrium value when treatment is administered in the absence of elevated levels of external stress d[0]. At high levels of the stressor d[0] the success of the treatment would be short lived as we would simply be immediately re-administering the same insult originally responsible for the illness state. This is true regardless of whether the idealized or the suboptimal treatment approach is used.

Discussion

Patients with CFS have been found to exhibit decreased adrenal response to ACTH stimulation and lower daily cortisol levels in plasma, urine and saliva [11],[19]. This is a chronic state in these patients and a detailed model by Gupta et al. [10] suggests that this condition may correspond to a stable steady state resulting from the higher order dynamics of the HPA axis.
A robust treatment strategy was estimated using model-based predictive control methodology involving a controlled reduction of circulating cortisol concentration. This externally induced reduction in cortisol concentration is to be maintained until ACTH concentrations increase above a critical threshold. Though this treatment was derived through the use of a numerical model, it nonetheless provides an interesting conceptual strategy for treatment. Cortisol output of the HPA axis can in reality be manipulated either directly or indirectly through several interventions. The most direct approaches involve (1) inhibition of cortisol synthesis at the level of the adrenal gland or (2) inhibition of CRH induced ACTH synthesis by the pituitary. Inhibitors of cortisol synthesis include pharmaceutical agents such as ketoconazole that have been used in limited human trials [20]. These are generally used in the treatment of hypercortisolism in patients and have been known to cause side effects including decreased androgen and aldosterone synthesis, elevated pregnenolone, nausea, fever, vomiting and occasionally hypoadrenalism and liver toxicity [21]. Likewise CRH antagonists have demonstrated antidepressant and anxiolytic properties in animal models of depression [22]. However only one phase II study involving the treatment of depressed patients with the CRH antagonist R121919 [23] has been completed thus far. The inhibition of CRH would not be useful in the current context as the proposed treatment aims to artificially stimulate an increase in ACTH concentration. Indirect approaches to cortisol suppression focus on modulation of the biochemical feedback returning to the higher HPA axis from the immune system and the adrenal gland. Inflammatory events exert a positive immune system feedback to the HPA axis that is conducted via a number of pro-inflammatory cytokines for which several components of the HPA axis have receptors. 
Supported by immune, epidemiological and small-scale gene expression data [24], antagonists of the pro-inflammatory cytokine TNF-α have been used effectively in pilot clinical trials [25] to inhibit this positive feedback mechanism. The release of pro-inflammatory cytokines by the immune system can also be manipulated by altering the immune system's perception of circulating cortisol. Dexamethasone is a cortisol analogue that binds to the GRII glucocorticoid receptor with a significantly higher affinity than that of endogenous cortisol [26],[27]. This saturation of the long-term receptor GRII with dexamethasone promotes down-regulation of cortisol output by dampening the pro-inflammatory feedback signal. One possible explanation of hypocortisolism is an enhanced sensitivity to the negative feedback action of cortisol on the glucocorticoid receptors in what is termed dexamethasone hyper-suppression [12],[28]. Consistent with this mechanism, patients with CFS have shown a pronounced and prolonged suppression of salivary cortisol even after relatively low doses of dexamethasone [29]. Dexamethasone suppression has become a standard test procedure even though it has a significantly higher affinity for the GRII receptor over the GRI receptor, does not bind to corticosteroid binding globulin (CBG) and has a much longer half-life than endogenous cortisol. Recently Jerjes et al. [30] developed a similar protocol using prednisolone, a compound with physiological effects more similar to those of cortisol. Using 5 mg prednisolone, they achieved a 50% reduction in salivary cortisol in healthy subjects [30]. Similarly a 52% reduction in salivary cortisol and an 82% reduction in urinary cortisol were observed in CFS patients [31]. These relative levels of cortisol suppression are consistent with those required by this simulated treatment course and confirm that the system is indeed capable of accommodating such changes without ill effect.
Unfortunately as in strategies involving the direct inhibition of CRH, a reduction of positive feedback to the hypothalamus also leads to a reduction in ACTH synthesis by the pituitary. Recall that the proposed treatment requires the inhibition of the negative cortisol feedback without the removal of positive stimulation of ACTH production. This could be achieved by temporarily reducing the bioavailability of cortisol itself. Binding proteins and metabolizing enzymes have been identified for cortisol. Corticosteroid-binding globulin (CBG) regulates the concentration of free or active cortisol [32]. Oral oestrogen preparations have been shown to increase CBG levels [33]. In addition to CBG, the enzyme 11-β-hydroxysteroid dehydrogenase rapidly inactivates endogenous glucocorticoid hormones upon entry into the cell [34]. Similarly the multi-drug resistance (MDR) P-glycoprotein (Pgp) has been shown to control access of cortisol and corticosterone to the brain [35]. In all cases a reduction in the bioavailability of cortisol would limit the effect of negative feedback on ACTH synthesis without hampering the positive feedback from pro-inflammatory cytokines. ACTH would conceivably accumulate as a natural consequence of such an imbalance. ACTH could also be administered directly [36] under these conditions of reduced cortisol inhibitory feedback to accelerate the treatment course. Finally the treatment might also be administered at a time of day that corresponds to the natural circadian reduction in cortisol secretion. It should be noted that although the model of HPA axis dynamics used in this work is currently the most credible model, it remains in many ways incomplete. For example, there is mounting data including observations of moderate hypocortisolism in depressed patients undergoing IFN-α therapy suggesting that GR receptor function not only affects the release of cytokines but is itself affected by these same cytokines [37]. 
The effects of cytokines and their signaling pathways on hormone signaling in general, and GR signaling in particular, are an important area of investigation regarding both the pathophysiology and treatment of inflammatory and neuropsychiatric diseases. To support these important aspects of HPA-immune signaling, additional detail must be incorporated into the basic HPA axis model, in particular at the level of the glucocorticoid receptors. Animal studies have not been exploited in this work but could undoubtedly serve as a basis for the construction of much more detailed models incorporating elements that are not readily measured in human experiments [38]. This is especially true of measurements at the hypothalamus. By the same token animal studies could be conducted to assess the tolerance of the overall system to more aggressive treatment and to determine a practical value for the parameter λ as well as the time period for monitoring and intervention. Certainly chronic and acute stressors such as the tail suspension test, the forced swim test and others have been used to produce depression-like symptoms in mice and have served to study hyperactivity of hypothalamic CRH neurons [39]. This model has also been used to test the effects of various anti-depressant therapies [40]. It should be noted however that CFS is characterized by a hypoactive rather than a hyperactive HPA axis [41]. Hyperactivity of hypothalamic CRH neurons observed in major depression produces a blunted ACTH response to further CRH challenge, likely reflecting a resultant down-regulation of pituitary CRH receptors [26],[37]. In contrast, subjects with CFS produce less cortisol in response to ACTH challenge but exhibit exaggerated ACTH responses to CRH [42]. This suggests that CFS hypocortisolism may arise from adrenal gland adaptation to a sensitized response at the level of the pituitary and/or the hypothalamus.
While convincing murine models exist for the former condition [39],[40], we are not aware of an equivalent model that mimics the HPA axis hypoactivity observed in CFS. Models exist nonetheless that reproduce some facets of chronic fatigue. The most promising of these involve post-infectious fatigue induced in mice [43],[44]. No doubt as our understanding of the precise molecular signature of CFS improves so will the fidelity of our animal models enabling us to study CFS pathophysiology and treatment in earnest. It is important to note however that while the specific treatment solution identified using MPC is model-dependent the general MPC framework is not. Therefore as more detailed models become available these can easily be exploited to improve a treatment course. Putting aside issues of model fidelity and completeness, the proposed MPC framework could still be exploited in a two-step treatment approach. In a first step data obtained from a standard dexamethasone test could serve to calibrate a simple lumped-parameter model capturing the overall HPA dynamics for a given subject. The calibrated model could then be used within the proposed MPC framework to estimate the most appropriate combination of dosage and duration of treatment for that same patient. Ultimately even if a given model is not entirely correct our robustness analysis shows that the desired outcome may be obtained reliably over a wide range of parameter values. This will be true as long as the structure of the model is valid. In conclusion we have demonstrated in this work the use of model-based predictive control methodology in the estimation of robust treatment courses for displacing the HPA axis from an abnormal hypocortisol steady state back to a normal function. Using this approach on a numerical model of the HPA axis proposed by Gupta et al. 
[10] a candidate treatment that displays robust properties in the face of significant biological variability and measurement uncertainty requires that cortisol be suppressed for a short period until ACTH levels exceed 30% of baseline. At this point the treatment may be discontinued and the HPA axis will progress to a stable attractor defined by normal hormone profiles. The concentration of biologically available cortisol could in principle be altered by binding proteins or metabolizing enzymes to inhibit negative feedback to the HPA axis without affecting the synthesis and accumulation of ACTH. Our analysis shows that this treatment strategy is robust and that a positive outcome can be obtained reliably for a wide range of treatment efficiencies.

Special thanks to Dr. William C. Reeves and the staff of the Chronic Viral Diseases Branch at the Centers for Disease Control and Prevention for many helpful discussions. The authors have declared that no competing interests exist. This work was jointly funded by GB and by AB-Z through funds provided by the Faculty of Medicine and Dentistry at the University of Alberta and the Natural Sciences and Engineering Research Council of Canada, respectively.

Jacobson L. Hypothalamic-pituitary-adrenocortical axis regulation. Endocrinol Metab Clin North Am. 2005;34(2):271–292. [PubMed]
Silverman MN, Pearce BD, Biron CA, Miller AH. Immune modulation of the hypothalamic-pituitary-adrenal (HPA) axis during viral infection. Viral Immunol. 2005;18:41–78. [PMC free article] [PubMed]
McEwen BS. Gonadal and adrenal steroids regulate neurochemical and structural plasticity of the hippocampus via cellular mechanisms involving NMDA receptors. Cell Mol Neurobiol. 1996;16:103–116.
Golier JA, Schmeidler J, Legge J, Yehuda R. Twenty-four hour plasma cortisol and adrenocorticotropic hormone in Gulf War Veterans: relationships to posttraumatic stress disorder and health symptoms. Biol Psychiatry. 2007;62:1175–1178.
[PubMed]
Appel S, Chapman J, Shoenfeld Y. Infection and vaccination in chronic fatigue syndrome: myth or reality? Autoimmunity. 2007;40:48–53. [PubMed]
Crofford LJ, Young EA, Engleberg NC, Korszun A, Brucksch CB, et al. Basal circadian and pulsatile ACTH and cortisol secretion in patients with fibromyalgia and/or chronic fatigue syndrome. Brain Behav Immun. 2004;18:314–325. [PubMed]
Van Den Eede F, Moorkens G, Van Houdenhove B, Cosyns P, Claes SJ. Hypothalamic-pituitary-adrenal axis function in chronic fatigue syndrome. Neuropsychobiology. 2007;55:112–120. [PubMed]
Sharma C, Gabrilove JL. A study of the adrenocortical disorders related to the biosynthesis and regulation of steroid hormones and their computer simulation. Mt Sinai J Med. 1975;42:S2–S39. [PubMed]
Bing-Zheng L, Gou-Min D. An improved mathematical model of hormone secretion in the hypothalamo-pituitary-gonadal axis in man. J Theor Biol. 1991;150:51–58. [PubMed]
Gupta S, Aslakson E, Gurbaxani BM, Vernon SD. Inclusion of the glucocorticoid receptor in a hypothalamic pituitary adrenal axis model reveals bistability. Theor Biol Med Model. 2007;4:8. [PMC free article] [PubMed]
Cleare AJ. The neuroendocrinology of chronic fatigue syndrome. Endocr Rev. 2003;24:236–252. [PubMed]
Heim C, Ehlert U, Hellhammer DH. The potential role of hypocortisolism in the pathophysiology of stress-related bodily disorders. Psychoneuroendocrinology. 2000;25:1–35. [PubMed]
Glaser R, Kiecolt-Glaser JK. Stress-associated immune modulation: relevance to viral infections and chronic fatigue syndrome. Am J Med. 1998;105:35S–42S. [PubMed]
Clauw DJ, Engel CC, Jr, Aronowitz R, Jones E, Kipen HM, et al. Unexplained symptoms after terrorism and war: an expert consensus statement. J Occup Environ Med. 2003;45:1040–1048. [PubMed]
Clarke DW, Mohtadi C, Tuffs PS. Generalized predictive control—part I. The basic algorithm. Automatica. 1987;23:137–148.
Clarke DW, Mohtadi C, Tuffs PS. Generalized predictive control—part II.
Extensions and interpretations. Automatica. 1987;23:149–160.
Camacho EF, Bordons C. Model Predictive Control. London: Springer-Verlag; 1998.
Cox DA, Little JB, O'Shea D. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra. New York: Springer-Verlag; 1997.
Demitrack MA, Dale JK, Straus SE, Laue L, Listwak SJ, et al. Evidence for impaired activation of the hypothalamic-pituitary-adrenal axis in patients with chronic fatigue syndrome. J Clin Endocrinol Metab. 1991;73:1224–1234. [PubMed]
Van Houdenhove B, Neerinckx E, Lysens R, Vertommen H, Van Houdenhove L, et al. Victimization in chronic fatigue syndrome and fibromyalgia in tertiary care: a controlled study on prevalence and characteristics. Psychosomatics. 2001;42:21–28. [PubMed]
van Denderen JC, Boersma JW, Zeinstra P, Hollander AP, van Neerbos BR. Physiological effects of exhaustive physical exercise in primary fibromyalgia syndrome (PFS): is PFS a disorder of neuroendocrine reactivity? Scand J Rheumatol. 1992;21:35–37. [PubMed]
Zobel AW, Nickel T, Sonntag A, Uhr M, Holsboer F, et al. Cortisol response in the combined dexamethasone/CRH test as predictor of relapse in patients with remitted depression: a prospective study. J Psychiatr Res. 2001;35:83–94. [PubMed]
Modell S, Lauer CJ, Schreiber W, Huber J, Krieg JC, et al. Hormonal response pattern in the combined DEX-CRH test is stable over time in subjects at high familial risk for affective disorders. Neuropsychopharmacology. 1998;18:253–262. [PubMed]
Powell R, Ren J, Lewith G, Barclay W, Holgate S, et al. Identification of novel expressed sequences, upregulated in the leucocytes of chronic fatigue syndrome patients. Clin Exp Allergy. 2003;33:1450–1456. [PubMed]
Lamprecht K. Pilot study of etanercept treatment in patients with chronic fatigue syndrome. 2001. Meeting of the American Association of Chronic Fatigue Syndrome (AACFS), Seattle. Available: http://
Pariante CM, Miller AH.
Glucocorticoid receptors in major depression: relevance to pathophysiology and treatment. Biol Psychiatry. 2001;49:391–404. [PubMed]
De Kloet ER, Vreugdenhil E, Oitzl MS, Joels M. Brain corticosteroid receptor balance in health and disease. Endocr Rev. 1998;19:269–301. [PubMed]
Fries E, Hesse J, Hellhammer J, Hellhammer DH. A new view on hypocortisolism. Psychoneuroendocrinology. 2005;30:1010–1016. [PubMed]
Gaab J, Hustern D, Peisen R, Engert V, Schad T. Low-dose dexamethasone suppression test in chronic fatigue syndrome and health. Psychosom Med. 2002;64:311–318. [PubMed]
Jerjes WK, Cleare AJ, Wood PJ, Taylor NF. Assessment of subtle changes in glucocorticoid negative feedback using prednisolone: comparison of salivary free cortisol and urinary cortisol metabolites as endpoints. Clin Chim Acta. 2006;364:279–286. [PubMed]
Jerjes WK, Taylor NF, Wood PJ, Cleare AJ. Enhanced feedback sensitivity to prednisolone in chronic fatigue syndrome. Psychoneuroendocrinology. 2007;32:192–198. [PubMed]
Rosner W. Plasma steroid-binding proteins. Endocrinol Metab Clin North Am. 1991;20:697–720. [PubMed]
Qureshi AC, Bahri A, Breen LA, Barnes SC, Powrie JK, et al. The influence of the route of oestrogen administration on serum levels of cortisol-binding globulin and total cortisol. Clin Endocrinol (Oxf). 2007;66:632–635. [PubMed]
Seckl JR, Walker BR. Minireview: 11β-hydroxysteroid dehydrogenase type 1—a tissue-specific amplifier of glucocorticoid action. Endocrinology. 2001;142:1371–1376. [PubMed]
Karssen AM, Meijer OC, van der Sandt IC, Lucassen PJ, de Lange EC, et al. Multidrug resistance P-glycoprotein hampers the access of cortisol but not of corticosterone to mouse and human brain. Endocrinology. 2001;142:2686–2694. [PubMed]
Kirnap M, Colak R, Eser C, Ozsoy O, Tutus A, et al.
A comparison between low-dose (1 microg), standard-dose (250 microg) ACTH stimulation tests and insulin tolerance test in the evaluation of hypothalamo-pituitary-adrenal axis in primary fibromyalgia syndrome. Clin Endocrinol (Oxf). 2001;55:455–459. [PubMed]
Pace TW, Hu F, Miller AH. Cytokine-effects on glucocorticoid receptor function: relevance to glucocorticoid resistance and the pathophysiology and treatment of major depression. Brain Behav Immun. 2007;21:9–19. [PMC free article] [PubMed]
Dunn AJ, Swiergiel AH. The role of corticotropin-releasing factor and noradrenaline in stress-related responses, and the inter-relationships between the two systems. Eur J Pharmacol. 2008;583:186–193. [PMC free article] [PubMed]
Swiergiel AH, Leskov IL, Dunn AJ. Effects of chronic and acute stressors and CRF on depression-like behavior in mice. Behav Brain Res. 2008;186:32–40. [PubMed]
Dhir A, Kulkarni SK. Venlafaxine reverses chronic fatigue-induced behavioral, biochemical and neurochemical alterations in mice. Pharmacol Biochem Behav. 2008;89:563–571. [PubMed]
Tsigos C, Chrousos GP. Hypothalamic-pituitary-adrenal axis, neuroendocrine factors and stress. J Psychosom Res. 2002;53:865–871. [PubMed]
Raison CL, Miller AH. When not enough is too much: the role of insufficient glucocorticoid signaling in the pathophysiology of stress-related disorders. Am J Psychiatry. 2003;160:1554–1565. [PubMed]
Shi L, Smith SE, Malkova N, Tse D, Su Y, et al. Activation of the maternal immune system alters cerebellar development in the offspring. Brain Behav Immun. 2009;23:116–123. [PMC free article] [PubMed]
Chen R, Moriya J, Yamakawa J, Takahashi T, Li Q, et al. Brain atrophy in a murine model of chronic fatigue syndrome and beneficial effect of Hochu-ekki-to (TJ-41). Neurochem Res. 2008;33:1759–1767.
Articles from PLoS Computational Biology are provided here courtesy of Public Library of Science
Ab initio protein structure prediction methods generate numerous structural candidates, which are referred to as decoys. The decoy with the largest number of neighbors within a threshold distance is typically identified as the most representative decoy. However, the clustering of decoys needed for this criterion involves computations with runtimes that are at best quadratic in the number of decoys. As a result, there is currently no tool designed to exactly cluster very large numbers of decoys, which creates a bottleneck in the analysis. Using three strategies aimed at enhancing performance (proximate decoys organization, preliminary screening via lower and upper bounds, outliers filtering), we designed and implemented a software tool for clustering decoys called Calibur. We show empirical results indicating the effectiveness of each of the strategies employed. The strategies are further fine-tuned according to their effectiveness. Calibur demonstrated the ability to scale well with respect to increases in the number of decoys. For a sample size of approximately 30 thousand decoys, Calibur completed the analysis in one third of the time required when the strategies are not used. For practical use, Calibur is able to automatically discover from the input decoys a suitable threshold distance for clustering. Several methods for this discovery are implemented in Calibur, where by default a very fast one is used. Using the default method, Calibur reported relatively good decoys in our tests. Calibur's ability to handle very large protein decoy sets makes it a useful tool for clustering decoys in ab initio protein structure prediction. As the number of decoys generated in these methods increases, we believe Calibur will become increasingly important for progress in the field.
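As a concrete illustration of the criterion described above (not Calibur's actual algorithm), here is a naive quadratic-time sketch: score every decoy by its neighbor count within the threshold and keep the best. Plain Euclidean distance on coordinate tuples stands in for the structural distance (e.g. RMSD) a real tool would use; the function names are invented for this sketch. Calibur's three strategies exist precisely to avoid this all-pairs scan.

```python
import math

def euclidean(a, b):
    # Stand-in for the structural distance (e.g. RMSD) between two decoys.
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def most_representative(decoys, threshold):
    """Naive O(n^2) version of the criterion: return the decoy with the
    largest number of neighbors within `threshold`, plus that count."""
    best, best_count = None, -1
    for i, d in enumerate(decoys):
        count = sum(
            1
            for j, e in enumerate(decoys)
            if i != j and euclidean(d, e) <= threshold
        )
        if count > best_count:
            best, best_count = d, count
    return best, best_count

decoys = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
print(most_representative(decoys, 0.5))  # ((0.0, 0.0), 2)
```

Every distance is computed n(n-1) times here, which is exactly the quadratic bottleneck the abstract describes for large decoy sets.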
the definition of gravitational mass

gravitational mass, noun (Physics): the mass of a body as measured by its gravitational attraction for other bodies.

World English Dictionary: gravitational mass: the mass of a body determined by its response to the force of gravity. Compare inertial mass.

Collins English Dictionary - Complete & Unabridged 10th Edition 2009 © William Collins Sons & Co. Ltd. 1979, 1986 © HarperCollins Publishers 1998, 2000, 2003, 2005, 2006, 2007, 2009
Data Smoothing in Excel

Statisticians typically have to look at large masses of data and find hard-to-see patterns. Sometimes an overall trend suggests a particular analytic tool. And sometimes that tool, although statistically powerful, doesn’t help the statistician arrive at an explanation. The following figure is a chart of home runs hit in the American League from 1901 until 2008. The obvious overall trend is that as the years go by, more home runs are hit. Fitting a regression line confirms this idea. The equation Home Runs = 24.325*Year – 465395 is a terrific fit to the data. The equation gives an R-squared value of 0.91, indicating that a linear model nicely describes the relationship between home runs and years.

And so . . . what? Just fitting a regression line glosses over important things within baseball — things both great and small that make up a baseball season, an era, a history. And baseball has many of those things. The objective is to get them to reveal themselves. The other extreme from the regression line is to connect the dots. That would just give a bunch of zigzags that likely won’t illuminate a century of history. The problem is how to summarize without eliminating too much: get rid of the zigzags but keep the important peaks and valleys. How do you do this without knowing what’s important in advance?

Exploratory data analysis (EDA) helps point the way. One EDA technique is called three-median smoothing. For each data point in a series, replace that data point with the median of three numbers: the data point itself, the data point that precedes it, and the data point that follows. Why the median? Unlike the mean, the median is not sensitive to extreme values that occur once in a while — like a zig or a zag. The effect is to filter out the noise and leave meaningful ups and downs. Why three numbers? Like most everything in EDA, that’s not ironclad. For some sets of data, you might want the median to cover more numbers.
It’s up to the intuitions, experiences, and ideas of the analyst. Another technique, hanning, is a running weighted mean. You replace a data point with the sum of one-fourth the previous data point plus half the data point plus one-fourth the next data point. Still another technique is the skip mean. In EDA, you don’t just use one technique on a set of data. Often, you start with a median smooth, repeat it several times, and then try one or two others. For the data in the scatterplot, apply the three-median smooth, repeat it (that is, apply it to the newly smoothed data), han the smoothed data, and then apply the skip mean. Again, no technique (or order of techniques) is right or wrong. You apply what you think illuminates meaningful features of the data.

Following is part of a worksheet for all of this. Column A shows the year, and Column B shows the number of home runs hit that year in the American League. The remaining columns show successive smooths of the data. Column C applies the three-median smooth to Column B, and Column D applies the three-median smooth to Column C. A quick look at the numbers shows that the repetition didn’t make much difference. Column E applies hanning to Column D, and Column F applies the skip mean to Column E. In Columns C through F, the actual number of home runs is used for the first value (for the year 1901) and for the final value (for the year 2008).

You can easily watch the effect of each successive smoothing technique on the smoothed line. The key is to right-click on the plot area and choose Select Data from the pop-up menu. Click on the name of the data series that represents the smoothed line, edit the cell range of the series to reflect the column that holds the particular smoothing technique, and click OK to close the editing dialog box. And now the story begins to reveal itself. Instead of a regression line that just tells you that home runs increase as the years go by, the highs and lows stimulate thinking as to why they’re there.
Here’s a highly abridged version of baseball history consistent with the twists and turns of the smoothed line. The low flat segment from 1901 through 1920 signifies the dead-ball era, a time when the composition of a baseball inhibited batted balls from going far enough to become home runs. Exploring and visualizing the data stimulates thought about what’s producing the patterns the exploration uncovers. Speculation leads to testable hypotheses, which lead to analysis.
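The two smoothers the article defines translate directly into a few lines of Python (a sketch of the worksheet logic, not code from the article; the skip mean is left out because the article never defines it). As in Columns C through F, the endpoints are copied through unchanged.

```python
from statistics import median

def three_median_smooth(data):
    """Replace each interior point with the median of itself and its two
    neighbors; endpoints are kept as-is, matching the worksheet."""
    if len(data) < 3:
        return list(data)
    smoothed = [data[0]]
    for i in range(1, len(data) - 1):
        smoothed.append(median([data[i - 1], data[i], data[i + 1]]))
    smoothed.append(data[-1])
    return smoothed

def hanning(data):
    """Running weighted mean: 1/4 previous + 1/2 current + 1/4 next."""
    if len(data) < 3:
        return list(data)
    smoothed = [data[0]]
    for i in range(1, len(data) - 1):
        smoothed.append(0.25 * data[i - 1] + 0.5 * data[i] + 0.25 * data[i + 1])
    smoothed.append(data[-1])
    return smoothed

series = [3, 9, 4, 5, 20, 6, 7]          # toy data with a zig (9) and a zag (20)
once = three_median_smooth(series)       # [3, 4, 5, 5, 6, 7, 7] -- spikes gone
twice = three_median_smooth(once)        # repeating changes nothing further here
hanned = hanning(twice)                  # gentle weighted-mean polish
```

Note how the median smooth removes the two one-point spikes entirely, while hanning only rounds the corners: that difference is exactly why the article starts with medians before applying a running mean.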
Carnival of Mathematics #23: Haiku Edition Welcome to the 23rd Carnival of Mathematics: Haiku Edition! First, I must apologize for the delay: I usually have very little trouble with my hosting provider, but of course it went down just when the CoM was supposed to be posted. But it’s free, so I can’t complain! It’s back up now, and will hopefully stay that way. For this edition of the CoM, I decided to write a short seventeen-syllable haiku about each of the excellent seventeen submissions I received (along with additional commentary of the more prosaic variety). I’ve arranged the posts more or less in order of required mathematical background, but don’t stop halfway through because then you’ll miss the pretty pictures at the end. Enjoy! 1. English pols want to make math more interesting. It’s not already? From Naomi Stevens’s Diary From England: a government bid to make maths more interesting. 2. Neat, use perfect spheres to define the kilogram! Off by just atoms… Heather Lewis, of 360, writes about Australian scientists who are trying to make a perfect sphere. Pretty incredible stuff! 3. Freshmen work in groups, and answer their own questions. Effective? Discuss. JackieB of Continuities explains the pedagogical approach she takes with her freshman. Be sure to read (or contribute to!) the fascinating discussion that ensues in the comments section. 4. Multiple choice, now with bonus choice enhancement! Hard tests, nice to grade. Maria Andersen, at the Teaching College Math Technology Blog, shows off a new sort of multiple-choice test that’s easy to grade, but avoids many of the well-known problems with traditional multiple-choice tests. I wish I’d thought of this when I was teaching high school! 5. Are you learning two languages—math AND English? Great sites for you here. Larry Ferlazzo presents a list of the best math sites for english language learners. 6. Mathematics blogs are many; which are the best? Here’s one opinion. Denise of Let’s play math! 
writes about her favorite math blogs. 7. I have not yet read “Letters To A Young Mathster”. I’m not missing much. Andrée has written a (not-too-favorable) review of Ian Stewart’s book “Letters to a Young Mathematician”, over at her blog meeyauw. 8. Albatrosses fly in fractal patterns! Oh wait– experiment sucked. Julie Rehmeyer discusses how scientists are revisiting some research on fractal patterns in the flight patterns of albatross at MathTrek. Apparently, just because an albatross’s feet are dry doesn’t necessarily mean it’s flying. Who knew? 9. Eight ninety-eight, eight ninety-nine, nine hundred… sigh… infinity yet? Thad Guy has a funny comic about infinity. Check out some of his other comics, too—I’m a (new) fan! 10. Need socks in the dark? The pigeonhole principle comes to your rescue! Mary Pat Campbell (aka meep) presents a cute video explaining the pigeonhole principle. Did you know that at least two people in the US have the exact same number of hairs on their body? You can’t argue with math! 11. A counting problem: how many bracelets are there? Harder than it looks… MathMom came across an interesting MathCounts problem involving beaded bracelets, which generated some great discussion. How would you solve it? 12. List of rationals, both elegant and complete? Is it possible? Yours truly has posted the first in a planned multi-part series explaining a particularly elegant way to enumerate the positive rational numbers. 13. Koch snowflake fractal: Area? Perimeter? Fractals are so strange… Over at Reasonable Deviations, rod uses geometric series to calculate the area and perimeter of the Koch snowflake. The result is rather surprising! 14. Twelve Days of Christmas? How many presents is that? Let’s figure it out! Over at Wild About Math!, Sol Lederman presents a seasonally-appropriate exploration in counting presents. Fun! 15. A tricky puzzle: rectangles and angle sums. I solved it, can you? JD2718 shares a gem of a puzzle involving the sum of some angles. 
It’s tricky—are you up to the challenge? I would especially encourage would-be solvers to come up with a nice geometric solution (I couldn’t)! 16. Pascal’s Triangle: writing it out is a chore. How fast does it grow? Foxy, of FoxMaths! fame, presents an interesting two-part analysis of the asymptotic growth of the rows of Pascal’s triangle—not the growth of the actual values in the rows, but of the space needed to write them!—making use of some clever algebraic gymnastics and asymptotic analysis. 17. In how many ways can the Nauru graph be drawn? The answer: a lot! David Eppstein of 0xDE presents The many faces of the Nauru graph: a collection of diverse ways to visualize a particular graph which he dubs the “Nauru graph”, due to the similarity of one of its drawings to the flag of Nauru. Planar tesselation, hyperbolic tesselation, embedding on the surface of a torus… all that and much more, with, yes, pretty pictures for everything! Even those who don’t understand the article itself should still go take a look, solely for the sake of the pictures. =) Thanks to everyone for the great submissions, I had a fun time reading them and putting this together. The next CoM will be hosted at Ars Mathematica. As always, email Alon Levy (including “Carnival of Mathematics” in the subject line) if you’d like to host an edition. Wait! Before you go, in honor of the new year, here’s one last link from Mike Croucher at Walking Randomly, who wants to know: what is interesting about the number 2008? 17 Responses to Carnival of Mathematics #23: Haiku Edition 1. Pingback: The 23rd Carnival! « 360 2. Pingback: C o Math 23 « JD2718 3. The great Carnival Many flock to see it here We all read it now 4. Really great job. One detail: Andrée is a she. 5. Oops! My apologies! Thanks for the catch, Jonathan. 6. Hey, Brent, I’m impressed At your haiku-making skill. You made no mistakes. Then again, it’s not Often that “interesting” Has four syllables. 7. 
That word’s often said with three syllables, but it’s technically four. Yes, “technically” is another of those words. 8. Pingback: Larry Ferlazzo’s Websites Of The Day For Teaching ELL, ESL, & EFL » Blog Archive » Lots Of Good Blog Carnivals 9. Ars Mathematica is hosting the next Carnival. I reminded Alon, who updated the Carnival site with the information. I would appreciate it if you could also put something up to that effect. 10. On point 15, the angle puzzle, there is a nice geometric solution, though the analytic one with complex numbers might be easier to find. I don’t remember where I’ve read it, maybe it was an IMO task or something. Anyway, it’s this. As in the task, define the points A=(0,0), D=(3,0), E=(3,1), F=(2, 1), G=(1, 1). Also, take the point T=(2,-1). Then, segment AT is equal to segment ET and they are perpendicular, so the triangle ATE is an isosceles right triangle. Thus, the TAE angle is equal to the TEA angle. Also, obviously, the DAT angle is equal to the DAF angle, so the TAE angle is equal to the DAE angle plus the DAF angle. Finally, the DAG angle is obviously half a right angle. 11. My name is Kevin. History is my true love, Math is close second. 12. Draw a polygon. Any sided shape will do. Do not be a square.
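Comment 10's construction is easy to verify numerically. Assuming the puzzle in item 15 asks for the sum of the angles DAG, DAF, and DAE at the origin (which is what the comment's points suggest), a short check:

```python
import math

# Points from comment 10 (A at the origin, D on the positive x-axis)
A = (0, 0); D = (3, 0); E = (3, 1); F = (2, 1); G = (1, 1)

def angle_at_A(P):
    """Angle between ray AD (the positive x-axis) and ray AP."""
    return math.atan2(P[1], P[0])

total = angle_at_A(E) + angle_at_A(F) + angle_at_A(G)
print(math.degrees(total))  # very close to 90: the three angles sum to a right angle
```

This agrees with the comment's synthetic argument: angle DAG is 45 degrees, and the isosceles right triangle ATE forces angle DAE plus angle DAF to be another 45.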
Converting MM to inches

In my steel book it says 1 MM dec. equivalent is .0394. When I multiply that by 100 I come up with 3.94, but the book says 100 MM is actually 3.937. The decimal equivalent for 100 MM is 3.937, so when I divide that by 100 I get .03937. If I take .03937 x say 78 MM I come up with 3.07086, but the book says it's actually 3.0709. What am I missing?

The fact that they are rounding.

Senior Member • Oct 2009 • 305 • Miller syncrowave 200 runner with coolmate 4 and wp2025 weldcraft torch Miller 125c plasma cutter

Simply rounded the last few decimal places to keep it simple. Unless you are measuring precision machined parts there is no need to be exact down to the 4th decimal place.

They are rounding up. What are you doing that requires that level of accuracy?

That difference is well within the tolerances for cold rolled steel in my information. ---Meltedmetal

The chart is rounding to the nearest 10-thousandth inch. 1 MM = .03937, but it is rounded to .0394 for the chart. "If I take .03937 x say 78 MM I come up with 3.07086 but the book says its actually 3.0709" -- again, this is due to the chart being rounded to 4 decimal places: your 3.07086 rounds to 3.0709. The chart at http://mdmetric.com/tech/cvtcht.htm says for greater accuracy when converting MM to IN, divide MM by 25.4. So, in your example (78 MM to IN): 78 / 25.4 = 3.07086614173 (which would be rounded to 3.0709 in your chart).

Senior Member • Aug 2007 • 170 • Miller Syncrowave 200 Millermatic 252 Lincoln AC/DC "Tombstone"

Use .039371 for the decimal/millimeter equivalent. 100 X .039371 = 3.9371. 78 X .039371 = 3.070938.

Thanks guys. Yea, I finally realized that; I'm just not used to rounding and was surprised that Alro Steel did. When they were teaching us the metric system 30 plus years ago they should have stuck with it.

Inches to mm: multiply by 25.4. Mm to inches: divide by 25.4. For most shop work, that is close enough.

converting mm to in, etc.
Originally posted by Portable Welder: Thanks guys. Yea, I finally realized that; I'm just not used to rounding and was surprised that Alro Steel did. When they were teaching us the metric system 30 plus years ago they should have stuck with it.

To get an answer without any loss of precision, simply open up Google. In the search area type in, for example, 100 mm in inches. It has a built-in calculator and will display 3.93701. You can, of course, also go from inches to mm. For example, type in 3 inches in mm and you'll get 76.2. You can convert light years into yards, or fathoms into furlongs. Try it. It will do any conversion!

Buy a metric tape measure... ha

I have an app on my phone that is a unit converter. Very handy. I can convert any measure of a unit and transfer it to another with a button. Free too.

Or, as my instructor did when I was back in machinist school: refuse to deal with metric! Kind of a waste anyway; you can't hold those kind of tolerances with the distortion. Oops!
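To pull the thread's arithmetic together, here is a short Python sketch (not from the thread) showing why the chart values differ from the naive multiplication: the chart simply rounds the exact quotient mm/25.4 to four decimal places.

```python
MM_PER_INCH = 25.4  # exact by the international inch definition

def mm_to_inches(mm):
    """Exact conversion: divide by 25.4, as the mdmetric.com chart advises."""
    return mm / MM_PER_INCH

print(round(mm_to_inches(1), 4))    # 0.0394 -- the book's rounded 1 mm entry
print(round(mm_to_inches(100), 4))  # 3.937  -- the book's 100 mm entry
print(round(mm_to_inches(78), 4))   # 3.0709 -- the book's 78 mm entry
print(78 * 0.0394)                  # about 3.073: multiplying by the rounded
                                    # 4-place factor compounds the rounding error
```

In other words, every "discrepancy" in the thread is just a 4-decimal-place rounding of the exact value, not an error in the book.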
level curves

May 24th 2008, 10:11 PM

For f(x,y) = 1 - 2x^2 - 3y^2, just wondering if someone could please help me find the equation of the curve of intersection of the surface z = f(x,y) with the plane x = 1. I can sketch the level curves for values of c and sketch the cross sections of the surface in the xz and yz planes; will that help? I couldn't find an example of how to do this type of question in my notes or textbook, so any help would be much appreciated!

May 25th 2008, 12:30 AM

1. The graph of f is a paraboloid.
2. The intersection of the paraboloid with a plane parallel to the axis of the paraboloid must be a parabola.
3. Use parametric equations to get the intersection parabola: $\left|\begin{array}{l}x=1 \\ y=t \\z=-1-3t^2\end{array}\right.$
4. I've attached a drawing of f and the cutting plane. Because it is very difficult to detect the parabola in this drawing, I've sketched the parabola alone in a separate coordinate system.
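The substitution behind step 3 is just f(1, y) = 1 - 2 - 3y^2 = -1 - 3y^2. The snippet below (a minimal sketch, not from the thread) confirms numerically that the parametric curve (1, t, -1 - 3t^2) lies on the surface z = f(x, y).

```python
def f(x, y):
    # The surface from the thread
    return 1 - 2 * x**2 - 3 * y**2

def section(t):
    """z along the curve of intersection with the plane x = 1,
    parametrized as (x, y, z) = (1, t, -1 - 3t^2)."""
    return -1 - 3 * t**2

# The parametric formula agrees with f evaluated on the plane x = 1
for t in (-2, -0.5, 0, 1, 3):
    assert f(1, t) == section(t)
```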
Find the general solution to the equation 2u_x + u_xy = 0

November 19th 2011, 11:22 AM #1 Junior Member Jan 2010

Find the general solution to the equation 2u_x + u_xy = 0, where u_x is u(x,y) differentiated with respect to x and u_xy is u(x,y) differentiated first with respect to x and then with respect to y. How would I go about solving this? Thanks for any help.

Re: Find the general solution to the equation 2u_x + u_xy = 0
I'm pretty sure I've done it now actually. u = e^(ax-2y)

Re: Find the general solution to the equation 2u_x + u_xy = 0
What you present is just one solution. Here are two more:
(1) $u = e^{-2y} \sin x$
(2) $u = x e^{-2y} + y^2$
Why not let $v = u_x$ and see where that gets you.

Re: Find the general solution to the equation 2u_x + u_xy = 0
Separate and integrate, noting that you'll get a function of integration.

Re: Find the general solution to the equation 2u_x + u_xy = 0
Since only differentiation with respect to y is involved, you can solve that as the ODE $\frac{dv}{dy}= -2v$. Of course, the "constant" may be a function of x. What does that give you for v? What does that make the original equation? Remember that the general solution to a partial differential equation may involve unknown functions of the variables rather than unknown constants.
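The substitution the thread suggests leads to u(x,y) = F(x) e^{-2y} + g(y) for arbitrary F and g: then u_x = F'(x) e^{-2y} and u_xy = -2 F'(x) e^{-2y}, so 2u_x + u_xy vanishes identically (g drops out of every x-derivative). The sketch below, with a couple of arbitrary sample choices of F', checks this numerically at a few points; it is a sanity check, not a proof.

```python
import math

def u_x(Fp, x, y):
    # u_x for u = F(x) e^{-2y} + g(y); Fp is F'
    return Fp(x) * math.exp(-2 * y)

def u_xy(Fp, x, y):
    # differentiating u_x once in y brings down a factor of -2
    return -2 * Fp(x) * math.exp(-2 * y)

# Arbitrary choices of F'(x), checked at sample points
for Fp in (math.sin, lambda x: 3 * x**2 + 1):
    for (x, y) in [(0.0, 0.0), (1.5, -0.7), (-2.0, 0.3)]:
        assert abs(2 * u_x(Fp, x, y) + u_xy(Fp, x, y)) < 1e-12
```

Both special solutions quoted in the thread fit this form: e^{-2y} sin x takes F(x) = sin x with g = 0, and x e^{-2y} + y^2 takes F(x) = x with g(y) = y^2.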
{"url":"http://mathhelpforum.com/differential-equations/192271-find-general-solution-equation-2u_x-u_xy-0-a.html","timestamp":"2014-04-18T20:48:17Z","content_type":null,"content_length":"54628","record_id":"<urn:uuid:e916c219-edf2-4f29-a1c3-5a573800d7e3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
chinese remainder theorem to show a unique polynomial January 28th 2011, 11:30 AM #1 Senior Member Feb 2008 chinese remainder theorem to show a unique polynomial The problem statement: Let $n_0,\cdots,n_m\in\mathbb{Z}$. Prove that there is a unique $f(x)\in\mathbb{Q}[x]$ of degree $\leq m$ such that $f(i)=n_i$ for all $i=0,\cdots,m$. [Hint: Use the Chinese remainder theorem.] Notice that $n_0,\cdots,n_m$ need NOT be pairwise coprime. It seems to me that this problem is equivalent to showing that the linear system $Ax=b$, where $A=\left(\begin{array}{ccc}0^0&\cdots&0^m\\\vdots&\vdots&\vdots\\m^0&\cdots&m^m\end{array}\right)$, $x=\left(\begin{array}{c}a_0\\\vdots\\a_m\end{array}\right)$ and $b=\left(\begin{array}{c}n_0\\\vdots\\n_m\end{array}\right)$, has exactly one solution, which in turn is equivalent to showing that $A$ is invertible. However, the hint tells me to use the Chinese remainder theorem---and I don't see how that's relevant. Any help would be much appreciated. Thanks! January 28th 2011, 04:54 PM #2 Another way to look at this problem is to set up a system of linear congruences. We can look at a concrete example first - you should be able to generalize it from there. Suppose we want to find a polynomial $f$ such that $f(1)=2, f(2)=5, f(3)=13$. If we treat $f(x)$ as an unknown, we have a system of linear (polynomial) congruences: $f(x)\equiv 2 \pmod{x-1}$, $f(x)\equiv 5 \pmod{x-2}$, $f(x)\equiv 13 \pmod{x-3}$. If you run the Chinese Remainder Theorem algorithm on this, you find that the polynomial is $f(x)=4-\frac{9 x}{2}+\frac{5 x^2}{2}$. It is automatically guaranteed to be unique by the statement of the theorem.
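The concrete example above is easy to verify independently. The sketch below (my own illustration, not the CRT algorithm from the thread) computes the unique interpolating polynomial with exact rational arithmetic via Lagrange's formula:

```python
from fractions import Fraction

def interpolate(points):
    """Coefficients (lowest degree first) of the unique polynomial of
    degree < len(points) passing through the given (x, y) points,
    built from the Lagrange basis with exact Fractions."""
    n = len(points)
    coeffs = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(points):
        basis = [Fraction(1)]          # basis polynomial L_i, low degree first
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            denom *= xi - xj
            basis = [Fraction(0)] + basis          # multiply by x ...
            for k in range(len(basis) - 1):
                basis[k] -= xj * basis[k + 1]      # ... minus xj times old poly
        for k in range(n):
            coeffs[k] += yi * basis[k] / denom
    return coeffs

print(interpolate([(1, 2), (2, 5), (3, 13)]))
# -> [Fraction(4, 1), Fraction(-9, 2), Fraction(5, 2)], i.e. 4 - 9x/2 + 5x^2/2
```

This agrees with the polynomial reported in the thread, and trying any other three points will always yield exactly one answer, matching the uniqueness claim.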
{"url":"http://mathhelpforum.com/advanced-algebra/169594-chinese-remainder-theorem-show-unique-polynomial.html","timestamp":"2014-04-16T06:42:36Z","content_type":null,"content_length":"36594","record_id":"<urn:uuid:897da75b-3768-4128-9b9d-745b85f28283>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
World Cup Bracketology Someone at work suggested that word has gone around that I blog. This felt weird to hear at first, but on further reflection it's not much different from the fact that there are quite a few of us on Facebook offering up status updates - this is just a bit more verbose. If you're from where I work and reading this, hello! (Don't be a stalky stranger; please tell me that I'm wearing odd socks next time you see me at work.) The definitive version of my blog is at Dreamwidth , but if you're sufficiently bored then you might care to look through the silly things I've said over the years at LiveJournal . You've all known that I'm a raging geek for years so I'm more than happy to own and stand up for what I've written, though much of what I've written is now rather old and some things change over time. Not many, though. A big feature on the annual American sporting calendar is March Madness, a tournament between the best college (i.e., university) basketball teams. (Parallel contests exist for men and women.) Simplifying at the cost of a little accuracy, the US is arbitrarily split into four regions, each of which is nearly-spuriously associated with a geographic name. A committee determines and produces a ranked order of the 16 best college basketball teams in each region. A single-elimination (i.e., knockout) competition takes place in each region, with the draw predetermined by strict seeding order: the first round features the number 1 ranked team vs. the number 16 ranked team, the number 2 seed vs. the number 15 seed and so on, the second round may feature number 1 vs. number 8, 2 vs. 7 and so forth. The four winners of the regions then play each other in semi-finals and a final to determine a national champion. This takes place over about three weeks or so. It's a huge deal, akin to the FA Cup over the course of three weeks; compare with Wimbledon, except that all the teams are supported rather than "just the British players". 
Part of the paraphernalia of the event is the Bracket Contest, a form of competition in which participants attempt to predict the result of every game in the tournament. You know who the 64 initial teams are and you can work out who is going to be playing whom in later rounds, so you predict the results of 63 matches. Predicting all 63 results in advance, not least who is going to be playing at each point, is astronomically difficult. We're talking, very roughly, "winning a big lottery jackpot from a single ticket twice running" difficult here. Theoretically, picking each result at random requires you to get a fifty-fifty shot correct 63 times in a row, so the chances of getting it correct are 9,223,372,036,854,775,807 to 1 against, which is nearly as likely as winning the Euromillions jackpot at a single attempt two weeks running, with four numbers from a single National Lottery ticket on the Wednesday between. (I make no apology for catering to a British audience here in my references: my US readers likely know all about March Madness already. I wave respectfully to non-US, non-UK readers.) So that's a "least probable" bound for estimating the probability of a perfect bracket. The tremendous Sports Economist blog has some stats about seeds' progression records over the past 24 years. We can use this to estimate a "most probable" bound for estimating the probability of a perfect bracket; let us assume that the most probable bracket has each of the four regions going exactly to seeding. I estimate that the probability of a region of 16 going exactly to seeding is a little worse than 1 in 1200. (I multiplied the probability of a number 1 seed having 4+ wins by the probability of a number 2 seed having exactly 3 wins given that they have at least 3 wins, by the probability of a number 3 seed having exactly 2 wins given that they have at least 2 wins, and so on, down to the probability of each of the 9 to 16 seeds having exactly 0 wins.
Excel spreadsheet on request.) Then the probability of all four regions going exactly to seeding is (1 in 1200) to the power of four, and for your all-perfect bracket you have to get the last three matches correct as well. Accordingly, I reckon we're looking at a "most probable" bound of something like 16,605,026,108,360 to 1 against, which is similar to a National Lottery jackpot on one ticket followed by five balls plus the bonus on another. (Other estimates exist, but the tightest estimate is "1 in 150 million" - which, working backwards, implies they think the chance of getting a region perfect is better than 1 in 68. If that's so then I'll lay you a small amount at a generous 80-1 against getting an entire region correct any day. Look at the overlay on that...) Bracket Contests are common. Book of Odds cites a source suggesting 40 million brackets are filled out every year, and that ESPN had over 4.6 million submissions in 2009. (The best score was 58/63.) Now that's not going to be 40 million people with one bracket each, but that's still millions or maybe tens of millions of players. This is a sort of number, let alone a World of Warcraft sort of number, and probably less than an order of magnitude from a or a sort of number. Heck, the President plays (video), and the video starts with the Barack-et including a pick of #1 Kansas over #9 Northern Iowa, like just about everyone else. Which proved to be wrong.
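Two of the numbers above are easy to reproduce (a back-of-envelope sketch mirroring the post's reasoning; the "1 in 150 million" figure is the quoted external estimate, not mine):

```python
# 63 fifty-fifty games: odds against a uniformly random perfect bracket.
print(2**63 - 1)  # 9223372036854775807, i.e. ~9.2 quintillion to 1

# Working backwards from the quoted "1 in 150 million": if the last
# three games are coin flips, each of the four regions must be called
# perfectly with probability p, where p**4 * (1/2)**3 = 1 / 150e6.
p = (8 / 150e6) ** 0.25
print(1 / p)  # ~65.8, i.e. a region chance better than 1 in 68
```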
It is said that the existence of Bracket Contests make March Madness a rare sporting contest that becomes followed the closer it gets to the final, simply because people take less interest when they know their bracket is out of contention. In conclusion: Bracket Contests are big news and fun. They're also pretty exclusive to March Madness. Why shouldn't the rest of the world enjoy Bracket Contests at a sporting event they'll be following... like this year's (association football) World Cup? The World Cup has a pretty well-defined structure which makes it amenable to running a Bracket Contest, of sorts. The finals take place in two stages; the first stage sees eight groups of four teams compete in parallel round robins, to generate eight first-placed teams and eight second-placed teams. These teams then fit into a completely deterministic , from which it is possible to identify all the potential matches in the remainder of the competition. This makes it ideal fertile territory for a Bracket Contest, and I don't think there are many of those taking place. (I thought this was a genuinely original idea, but hats off to for getting there first - and, quite possibly, lots of other people of whom I am not aware.) We then ask, for values of "we" equal to "I", what the probability of a perfect World Cup bracket is. A World Cup bracket, as I define it, would consist of correctly identifying which team would come first and which team would come second, out of four, in each of eight groups, followed by correctly choosing the results of all 15 competitive games in the rest of the competition. (Nobody cares about the Bert Bell Benefit Third-Place Game.) There are 12 ways to determine a first place and a second place from a group of four, so there are 12^8 = 429,981,696 possible ways to fill the brackets even before picking a bracket match result . 
There are 2^15 = 32,768 ways to fill the match results based on the same initial 16 teams in the bracket, so there are 14,089,640,214,528 different brackets possible. This gives us a lower bound for the probability (or, I suppose, a higher bound for the odds) of a perfect bracket of 14,089,640,214,527 to 1 against - close to "National Lottery jackpot followed by five-and-the-bonus". Working out a higher bound for the probability of a perfect bracket is trickier, not least as we don't have the wonderfully convenient seeding structures of March Madness in the World Cup from which to draw inferences. Instead, I fear we must attempt to mine the wisdom of the bookmakers, with being as good a starting point as any. As we are trying to generate an upper bound, and thus want to estimate the probabilities of results being as high as possible, I will take the least generous (non-trivial) set of odds offered by any bookmaker on a particular outcome and convert that to a probability, noting that it is in the bookmaker's interest to overestimate that probability and thus provide the potential for an overround. An upper bound for the probability of picking the eight winners is given as follows, quoting each favourite with what we can be reasonably confident is an overestimate of their probability: France (5/9), Argentina (5/7), England (7/9), Germany (13/21), Netherlands (15/23), Italy (7/9), Brazil (9/13) and Spain (4/5). The chance of all eight winning is thus at most 100/1863, or about 5.4%. By extension, I think that, in general, picking all eight World Cup group winners is never going to be more than 10% likely, and that would take eight (3/4 probability, or 1/3 fair odds) shots. An upper bound for the probability of picking the eight winners and eight second places is given analogously: France/Mexico (3/13), Argentina/Nigeria (1/3), England/USA (1/2), Germany/Serbia (5/19), Netherlands/Cameroon (4/13), Italy/Paraguay (1/2), Brazil/Portugal (3/8), Spain/Chile (1/2).
The chance of all eight pairs placing as listed is thus at most 15/51376, or about 0.03%. By extension, I think that, in general, picking all eight World Cup group winners and all eight World Cup group second places is never going to be more than 0.5% likely, and that would take eight (11/21 probability, or 10/11 fair odds) shots. An upper bound for the probability of getting all fifteen knockout matches correct, even given an accurate bracket of 16, is hard to estimate, not least because we don't have any indication of seeding. For an upper bound, I will be stingy and wildly overestimate that there is, on average, a 75% chance of each match being correctly predictable, and thus the probability of getting 15 consecutive 75% shots correct is just greater than, well, 1 in 75. (If we go down to 72%, it's 1 in 138. 70%? 1 in 210. Two thirds? 1 in 437. 65%? 1 in 640. 60%? 1 in 2126. I don't know what the actual probability we should be looking at is.) Thus an upper bound for the probability of being able to fill out an entire World Cup bracket correctly is 0.5% * (1 / 74.8309), which we can fairly safely push out the tiniest smidgeon to a nice and memorable 15,000 to 1 against. Incidentally, if you have spotted any errors in either my arithmetic or my logic, or if you can suggest any ways to tighten my bounds, they would be gratefully accepted. I do think that bookmakers or newspapers could run quite engaging, and simple to understand, "fill out your World Cup bracket" contests: one point for each team filled in correctly, first in terms of which teams make it from the group stages to the correct place in the final sixteen, then in terms of which teams make it through each round of the knockout stages.
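All of the quoted fractions check out with exact rational arithmetic (a verification sketch; the individual probabilities are the bookmaker-derived figures from the preceding paragraphs):

```python
from fractions import Fraction as F

# 12 (winner, runner-up) orderings per group, 2 outcomes per knockout game
print(12**8 * 2**15)  # 14089640214528 possible brackets

def product(ps):
    out = F(1)
    for q in ps:
        out *= q
    return out

# all eight group winners correct
winners = product([F(5, 9), F(5, 7), F(7, 9), F(13, 21),
                   F(15, 23), F(7, 9), F(9, 13), F(4, 5)])
print(winners)          # 100/1863, about 5.4%

# all eight (winner, runner-up) pairs correct
pairs = product([F(3, 13), F(1, 3), F(1, 2), F(5, 19),
                 F(4, 13), F(1, 2), F(3, 8), F(1, 2)])
print(pairs)            # 15/51376, about 0.03%

# fifteen knockout games, each generously assumed 75% predictable
print(float(1 / F(3, 4)**15))  # ~74.83, "just greater than 1 in 75"
```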
Bearing in mind the probability figures I quote are all overestimates, one should be able to offer 10,000/1 against a perfect bracket with consolations of 100/1 for placing all 8 winners and all 8 second places correctly and 10/1 for placing only all 8 winners correctly, and still make money on it. Alternatively, a newspaper might be able to run the competition and offer bonus prizes safe in the knowledge that they are moderately unlikely to be paid out - or, at least, could probably be reinsured against fairly easily. Part of the reason why I'm in the mood to think about such things is that not so long ago I encountered The Wizard of Odds web site, written by the eponymous wizard (real name Michael Shackleford) himself. It has scads of information about gambling games, particularly their practical implementations found in Las Vegas. The author is the titular Wizard, who has a career path that I would have idolised twenty years ago: he qualified as an actuary, then put his mind towards analysing casino games and made a career out of it, both from consulting work and as a university professor passing on his knowledge. There's probably only room for one of him in the world; I'm glad he exists, for the commitment he has demonstrated to getting very large volumes of high-quality information out there at no charge to the reader. I am terribly favourably predisposed towards him because of his writing style, which (quite correctly) focuses on expected value, near to exclusion of anything else, sometimes even to the fifth significant digit. The style is generally very unemotional, but when there is emotion, it's generally very direct, delightfully earnest and un-self-conscious. (Case in point.) By chance in 2005 the site found itself top on Google for "Is my boyfriend cheating on me?"
and he has wound up answering relationship questions amid the gambling questions ever since, approaching them in a very similar fashion, though his advice is sometimes a bit on the, well, side. He keeps out of his own writing to such an extent that it's charming when he allows himself a very occasional self-deprecatory anecdote. I even give him points for being a Settlers of Catan player , though these points may just be paying off debt from his October 2004 assertion that "Risk is the greatest board game ever made". There's no accounting for taste, of course, and it may well just be what he grew up with. Lastly, if any of the game design people around here have ever thought of turning their hands to designing casino games (because, as far as I'm concerned, there isn't a terribly high bar to beat) then the good wizard has a list of articles about this. Seems to me that I would probably enjoy attending a games event devoted to home-brewed gambling games, were such a thing ever to exist, at least as much as I would enjoy playing existing games in a casino. Could there be a gap in the market there for people who want a little more variety with their gambling, and might be prepared to pay for the privilege? (no subject) Date: 2010-03-27 05:37 pm (UTC) From: flourish Part of what is nice about bracket contests is that they're open to people of all degrees of knowledge about a game. When you're filling out a bracket, you can do it with lots of strategy; or, if you know nothing, you can just pick and choose on gut feeling or "hey, I liked the name." Of course, this means that people who don't know anything about the game and the teams can use brackets as a way to learn. Filling out a bracket with someone who does know the teams is intensely interesting and enlightening, and can really help you then enjoy all the games that you go on to watch. 
It's a good way to get non-sports-fans into the experience of watching the games, because it provides that conversation starter and helps give you a larger concept of what's actually going on. (no subject) Date: 2010-03-28 08:47 am (UTC) From: bateleur.livejournal.com Bracket Contests are big news and fun I'm not all that interested in sports, but some fun brackety action has been going on for many years now in the form of GameFAQs Character Battle (http://www.gamefaqs.com/features/cb8.html). It has elements of both bracket games and voting games, with the skill mostly lying in a deep knowledge of video game culture - the trick being to estimate how the fanbase of each character will have varied as a result of the year's releases. I don't tend to enter, but lathany always does and typically does quite well. if any of the game design people around here have ever thought of turning their hands to designing casino games Not really, because I get the impression they're not about gameplay so much as the psychology of parting people from their money. Not my thing.
{"url":"http://chris.dreamwidth.org/8308.html","timestamp":"2014-04-16T22:40:42Z","content_type":null,"content_length":"45073","record_id":"<urn:uuid:2e71efb6-c30b-47e8-80db-1bbd5fe8ad6a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving the Partial Differential Problems Using Differentiation Term by Term Theorem Department of Management and Information, Nan Jeon University of Science and Technology, Tainan City, Taiwan This paper takes advantage of the mathematical software Maple as an auxiliary tool to study the partial differential problems of four types of two-variables functions. We can obtain the infinite series forms of any order partial derivatives of these two-variables functions by using the differentiation term by term theorem, and hence greatly reduce the difficulty of calculating their higher order partial derivative values. In addition, we propose some examples to illustrate the calculations in practice. The research methods adopted in this study involved finding solutions through manual calculations and verifying our answers by using Maple. Keywords: partial derivatives, infinite series forms, differentiation term by term theorem, Maple Journal of Automation and Control, 2014 2 (1), pp 8-14. DOI: 10.12691/automation-2-1-2 Received November 28, 2013; Revised December 19, 2013; Accepted January 03, 2014 © 2014 Science and Education Publishing. All Rights Reserved. Cite this article: Yu, Chii-Huei. "Solving the Partial Differential Problems Using Differentiation Term by Term Theorem." Journal of Automation and Control 2.1 (2014): 8-14. 1. Introduction As information technology advances, whether computers can become comparable with human brains to perform abstract tasks, such as abstract art similar to the paintings of Picasso and musical compositions similar to those of Beethoven, is a natural question.
Currently, this appears unattainable. In addition, whether computers can solve abstract and difficult mathematical problems and develop abstract mathematical theories such as those of mathematicians also appears unfeasible. Nevertheless, in seeking alternatives, we can study what assistance mathematical software can provide. This study introduces how to conduct mathematical research using the mathematical software Maple. The main reasons for using Maple in this study are its simple instructions and ease of use, which enable beginners to learn the operating techniques in a short period. By employing the powerful computing capabilities of Maple, difficult problems can be easily solved. Even when Maple cannot determine the solution, problem-solving hints can be identified and inferred from the approximate values calculated and solutions to similar problems, as determined by Maple. For this reason, Maple can provide insights into scientific research. In calculus and engineering mathematics curricula, the study of the Laplace equation, the wave equation, and other important physical equations involves partial differentiation, so the evaluation and numerical calculation of the partial derivatives of multivariable functions are important. On the other hand, calculating the q-th order partial derivative value of a multivariable function at some point, in general, requires two procedures: first determining the q-th order partial derivative of this function, and then substituting the point into the q-th order partial derivative. These two procedures lead to increasingly complex calculations for higher order partial derivative values (i.e. when q is large), and hence obtaining the answers by manual calculation is not easy. In this paper, we study the partial differential problem of the following four types of two-variables functions, where α, β are real numbers, β ≠ 0, and p is an integer.
We can obtain the infinite series forms of any order partial derivatives of these four types of two-variables functions using the differentiation term by term theorem; these are the major results of this study (i.e., Theorems 1-4), and they greatly reduce the difficulty of calculating higher order partial derivative values. The study of related partial differential problems can be found in [1-13]. In addition, we provide some examples and carry out the calculations in practice. The research methods adopted in this study involved finding solutions through manual calculations and verifying these solutions by using Maple. This type of research method not only allows the discovery of calculation errors, but also helps modify the original directions of thinking from manual and Maple calculations. Therefore, Maple provides insights and guidance regarding problem-solving methods. 2. Main Results Firstly, we introduce some notations, formulas and Maple commands used in this paper. 2.1. Notations 2.1.1. Let z = a + ib be a complex number, where a, b are real numbers. We denote a, the real part of z, by Re(z), and b, the imaginary part of z, by Im(z). 2.1.2. Suppose m, n are non-negative integers. For the two-variables function f (x, y), its n-times partial derivative with respect to x, and m-times partial derivative with respect to y, forms an (m+n)-th order partial derivative, denoted by 2.1.3. Suppose r is any real number, m is any positive integer. Define 2.1.4. evalf ( ); the Maple command for calculating a numerical approximation. 2.1.5. sum ( ); the command for evaluating a summation. 2.1.6. D [1$3, 2$4] (f) (1, 2); the command for finding the 7-th order partial derivative of f (x, y) at (1, 2), 2.1.7. Bernoulli (n); the command for evaluating the n-th Bernoulli number. 2.1.8. product (8-j, j = 0..6); the command for determining the product 2.2. Formulas 2.2.1. Euler's Formula $e^{i\theta} = \cos\theta + i\sin\theta$, where θ is any real number. 2.2.2. De Moivre's Formula $(\cos\theta + i\sin\theta)^p = \cos p\theta + i\sin p\theta$, where p is any integer and θ is any real number. 2.2.3.
([14]) a, b are real numbers. 2.2.4. ([14]) a, b are real numbers. 2.2.5. Taylor Series Expression of Complex Hyperbolic Tangent Function ([15]) z is a complex number, B[k] are the k-th Bernoulli number. 2.2.6. Taylor Series Expression of Complex Hyperbolic Cotangent Function ([15]) z is a complex number, In the following, we introduce an important theorem used in this study. 2.3. Differentiation Term by Term Theorem ([16]) For all non-negative integers k, if the functions g[k]: (a, b) → R satisfy the following three conditions: (i) there exists a point x[0] in (a, b) such that the series Σ g[k](x[0]) is convergent, (ii) all the functions g[k](x) are differentiable on the open interval (a, b), (iii) the series Σ (d/dx) g[k](x) is uniformly convergent on (a, b). Then the series Σ g[k](x) is uniformly convergent and differentiable on (a, b). Moreover, its derivative equals Σ (d/dx) g[k](x). Before deriving the major results in this study, we need three lemmas. 2.4. Lemma 1 Suppose a, b are real numbers, b > 0, and p is an integer. Then (By De Moivre's formula) (By Euler's formula) 2.5. Lemma 2 Suppose a, b are real numbers with (By Formulas 2.2.3 and 2.2.4) 2.6. Lemma 3 Suppose a, b are real numbers with Next, we determine the infinite series forms of any order partial derivatives of the two-variables function (1). 2.7. Theorem 1 Suppose α, β are real numbers, β ≠ 0, p is an integer, and m, n are non-negative integers. If the domain of the two-variables function is m+n-th order partial derivative of f (x, y),
Suppose the domain of the two-variables function Proof Let (By (5) and (8)) (By Formula 2.2.6) Also, by differentiation term by term theorem, we have Finally, we obtain the infinite series forms of any order partial derivatives of the two-variables function (4). 2.10. Theorem 4 Let the assumptions be the same as Theorem 1. If the domain of the two-variables function Proof By (15), we have Using differentiation term by term theorem, we obtain 3. Examples In the following, for the partial differential problem of the four types of two-variables functions discussed in this study, we proposed four examples and use Theorems 1-4 to obtain the infinite series forms of any order partial derivatives of these functions, and evaluate some of their higher order partial derivative values. On the other hand, we employed Maple to calculate the approximations of these higher order partial derivative values and their solutions for verifying our answers. 3.1. Example 1 Suppose the domain of the two-variables function we obtain any m+n-th order partial derivative of f (x, y), Therefore, we can evaluate the 7-th order partial derivative value of f (x, y) at Next, we use Maple to verify the correctness of (21). 3.2. Example 2 If the domain of the two-variables function we obtain 3.3. Example 3 Let the domain of the two-variables function we have 3.4. Example 4 Suppose the domain of the two-variables function we obtain 4. Conclusion In this article, we provided a new technique to evaluate any order partial derivatives of four types of two-variables functions. We will use this technique to solve another partial differential problems. On the other hand, the differentiation term by term theorem plays a significant role in the theoretical inferences of this study. In fact, the applications of this theorem are extensive, and can be used to easily solve many difficult problems; we endeavor to conduct further studies on related applications. 
In addition, Maple also plays a vital assistive role in problem-solving. In the future, we will extend the research topic to other calculus and engineering mathematics problems and solve these problems by using Maple. These results will be used as teaching materials for Maple on education and research to enhance the connotations of calculus and engineering mathematics.

References
[1] A. Griewank and A. Walther, Evaluating derivatives: principles and techniques of algorithmic differentiation, 2nd ed., SIAM, Philadelphia, 2008.
[2] C. H. Bischof, G. Corliss, and A. Griewank, "Structured second and higher-order derivatives through univariate Taylor series," Optimization Methods and Software, Vol. 2, pp. 211-232, 1993.
[3] D. N. Richard, "An efficient method for the numerical evaluation of partial derivatives of arbitrary order," ACM Transactions on Mathematical Software, Vol. 18, No. 2, pp. 159-173, 1992.
[4] T.-W. Ma, "Higher chain formula proved by combinatorics," The Electronic Journal of Combinatorics, Vol. 16, #N21, 2009.
[5] L. E. Fraenkel, "Formulae for high derivatives of composite functions," Mathematical Proceedings of the Cambridge Philosophical Society, Vol. 83, pp. 159-165, 1978.
[6] C.-H. Yu, "Application of Maple: taking the partial differential problem of two-variables functions as an example," Proceedings of 2013 Business Innovation and Development Symposium, Taiwan, B20130113001, 2013.
[7] C.-H. Yu, "Partial derivatives of some types of two-variables functions," Pure and Applied Mathematics Journal, Vol. 2, No. 2, pp. 56-61, 2013.
[8] C.-H. Yu, "Application of Maple on the partial differential problem of four types of two-variables functions," Proceedings of the International Conference on Advanced Information Technologies, Taiwan, No. 87, 2013.
[9] C.-H. Yu, "Using Maple to study the partial differential problems," Applied Mechanics and Materials, in press.
[10] C.-H. Yu, "Application of Maple: taking the partial differential problem of some types of two-variables functions as an example," Proceedings of the International Conference on e-Learning, Taiwan, pp. 337-345, 2013.
[11] C.-H. Yu, "Using Maple to evaluate the partial derivatives of two-variables functions," International Journal of Computer Science and Mobile Computing, Vol. 2, Issue 6, pp. 225-232, 2013.
[12] C.-H. Yu, "Evaluating partial derivatives of two-variables functions by using Maple," Proceedings of the 6th IEEE/International Conference on Advanced Infocomm Technology, Taiwan, pp. 23-27,
[13] C.-H. Yu and B.-H. Chen, "The partial differential problem," Computational Research, Vol. 1, No. 3, pp. 53-60, 2013.
[14] R. V. Churchill and J. W. Brown, Complex variables and applications, 4th ed., McGraw-Hill, New York, p. 65, 1984.
[15] Hyperbolic functions, online available from http://en.wikipedia.org/wiki/Hyperbolic_function.
[16] T. M. Apostol, Mathematical analysis, 2nd ed., Addison-Wesley, Boston, p. 230, 1975.
{"url":"http://pubs.sciepub.com/automation/2/1/2/index.html","timestamp":"2014-04-18T18:13:20Z","content_type":null,"content_length":"70519","record_id":"<urn:uuid:c1229c34-846a-4343-87bb-62bc29482f2d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
Büchi Store is an open repository of Büchi automata and other types of ω-automata. In particular, the repository contains a large collection of Büchi automata and their complements for common specification patterns and interesting temporal formulae. Different temporal formulae may specify the same ω-regular language. For a particular ω-regular language, only the three smallest Büchi automata known to the Store are kept. Some of these automata may be smaller than the corresponding ones that are generated by an automata-based model checker such as SPIN. These smaller automata could be adopted instead when using the model checker, as smaller specification automata often help in speeding up the model-checking process. Aside from the initial collection supplied by its maintenance team, the Store relies on its users to enrich the repository by uploading automata that define new languages or are smaller than existing equivalent ones. Such a repository of Büchi automata should also be useful as a source of benchmark cases for research on translation or complementation algorithms, and as a source of pedagogical examples for teaching and learning Büchi automata and temporal logic. These points apply analogously to other types of ω-automata, including deterministic Büchi and deterministic parity automata, which are also collected in the repository. For instance, the deterministic parity automata in the repository can be useful for reducing the complexity of automatic synthesis of reactive systems from temporal specifications. We hope that you will find the Store useful, and we would appreciate your contributions. To cite this work, please use one of the following: Y.-K. Tsay, M.-H. Tsai, J.-S. Chang, Y.-W. Chang, and C.-S. Liu. Büchi Store: An Open Repository of ω-Automata. International Journal on Software Tools for Technology Transfer, 15(2):109-123, 2013. Y.-K. Tsay, M.-H. Tsai, J.-S. Chang, and Y.-W. Chang. Büchi Store: An Open Repository of Büchi Automata.
In Proceedings of the 17th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS '11), LNCS 6605, 262-266, Springer, March/April, 2011. Thank you!

Historical Notes

Büchi automata are finite automata operating on infinite words. A Büchi automaton accepts an infinite word if it induces a run of the automaton that visits some accept state infinitely many times. The formalism was invented by Büchi around 1960 to prove the decidability of the Monadic Second Order Theory of Natural Numbers [1]. Nearly two decades later, they found practical applications in linear-time model checking. In an appendix of his seminal FOCS 1977 paper [2], which introduced temporal logic to computer science, Pnueli suggested two ways to prove the decidability of essentially the linear-time model-checking problem. (The result was originally stated as "the validity of an arbitrary tense formula over a finite state system is decidable." In the paper, only two tense (temporal) operators, F (eventually or sometime) and G (always), were considered.) One proof went via the translation of a temporal formula into a Büchi automaton and then invoked the decidability of language containment between two Büchi automata, one representing the system and the other the temporal formula whose validity over the system is to be checked. This proof was a precursor to the automata-theoretic approach to linear-time model checking eloquently advocated by Vardi and Wolper in [3].

1. J. R. Büchi. On a decision method in restricted second order arithmetic. In Proceedings of the International Congress on Logic, Method, and Philosophy of Science, pages 1-12. Stanford University Press, 1962.
2. A. Pnueli. The temporal logic of programs. In Proceedings of the 18th Annual Symposium on Foundations of Computer Science, pages 46-57, 1977.
3. M. Y. Vardi and P. Wolper. An automata-theoretic approach to automatic program verification.
In Proceedings of the First Annual IEEE Symposium on Logic in Computer Science, pages 322-331, Cambridge, June 1986.
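The acceptance condition described in the Historical Notes can be checked effectively on ultimately periodic ("lasso") words u·vω. Here is a minimal Python sketch for the deterministic case; the dict-based encoding and function name are illustrative only, not the Store's actual format or interface:

```python
# A minimal sketch (illustrative only): acceptance of the ultimately periodic
# word u . v^omega by a *deterministic* Buechi automaton.
def accepts_lasso(delta, q0, accepting, u, v):
    """delta: dict mapping (state, letter) -> state."""
    q = q0
    for a in u:                      # consume the finite prefix u
        q = delta[(q, a)]
    seen, block_hits = {}, []
    while q not in seen:             # read blocks of v until a state repeats
        seen[q] = len(block_hits)
        hit = q in accepting
        for a in v:
            q = delta[(q, a)]
            hit = hit or q in accepting
        block_hits.append(hit)
    # the blocks from index seen[q] onward repeat forever; the word is
    # accepted iff one of them visits an accept state
    return any(block_hits[seen[q]:])

# Example: automaton accepting the words with infinitely many a's over {a, b}.
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}
print(accepts_lasso(delta, 0, {1}, "", "ab"))   # True:  (ab)^omega
print(accepts_lasso(delta, 0, {1}, "a", "b"))   # False: a b^omega
```

Nondeterministic automata, as typically stored in the repository, would require tracking sets of runs instead of a single state; the deterministic version above is only meant to make the "accept state visited infinitely often" condition concrete.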
why not delay math? - Page 2 - General Education Discussion Board

The example of the baby is incorrect IMO, since by simply speaking and reading to our babies every day we are in a sense teaching them language and exposing them to language :) In fact, babies understand us long before they speak. IMO it is not a good idea to delay math, especially when there are plenty of age-appropriate and fun ways to learn math at early ages.

I agree with you; unfortunately, I think math is often taught in ways that are not developmentally appropriate. I think we should look more closely at the language model--find ways to expose children to math through daily life just as we expose them to language. What many children experience is, I think, rather like taking a 5 year old and trying to teach them a language to which they have had almost no exposure by having them conjugate lists of verbs--they can learn to put the right endings on the verbs, but they are not really learning to use and understand the language. And they can come to think that all there is to the language is conjugating verbs on a page. Children need to experience math as a living, breathing language, an integral part of living in and navigating and manipulating the world around them--THEN when you show them a verb conjugation (mathematical algorithm) on a page they will know what it is all about. It is possible to use a curriculum as a framework for age-appropriate math exploration; it is equally possible to do it in a more organic way without a curriculum.
Orissa Mathematical Society

There are three types of Olympiad Examinations conducted by the Orissa Mathematical Society: the Junior, Regional, and Senior Mathematical Olympiads.

The Indian National Mathematical Olympiad (INMO) is an annual contest organized by the National Board for Higher Mathematics (NBHM), a unit of the Department of Atomic Energy, Govt. of India, in order to spot and nurture mathematical talent among students. The Orissa Mathematical Society (OMS) conducts a Regional Mathematical Olympiad (RMO) every year to select and train 30 students who can take part in the INMO. In order to encourage students to develop mathematical interest at an early age and to continue it till the graduation level, we also conduct two more examinations along with the RMO, i.e., the JMO and SMO.

Gold and Silver medals will be awarded to RMO candidates who secure the top positions at the Regional level. Prizes and certificates will be given to the best thirty candidates at the Zonal level. There will be special prizes for girl candidates and candidates from rural areas. Merit Certificates will be awarded to all the candidates who come up to the level expected by the evaluation board. BHRATRU SMRUTI TRUST will award cash prizes of Rs. 2,000/- and Rs. 1,000/- respectively to the students securing first and second positions in the RMO. Prizes and merit certificates will also be given to some top successful candidates appearing in the JMO and SMO tests.

Students of 1st year and 2nd year of +2 / Class XI and Class XII with Mathematics, and bright students of Class IX and X, are eligible to appear in the RMO test. Students of B.A. / B.Sc. / B.E. / B.Tech. / BCA courses are eligible to appear in the SMO test. Students of Classes VI, VII and VIII are eligible to appear in the JMO test.

JMO : Sound knowledge of middle school Mathematics.
RMO : Sound knowledge of Mathematics up to Class X.
SMO : Sound knowledge of Graduate Level Mathematics on topics such as Sets, Relations, Functions, Inequalities, Complex Numbers, Permutation and Combination, Probability, Linear Algebra, Differentiation & Integration, Maxima-Minima, Taylor's Series & Power Series, Ordinary Differential Equations, Vector Calculus, and Multiple Integrals.

The questions will not be of a routine type. The test questions will be of a nature requiring insight, a critical approach, logical thinking, and the power to apply knowledge in solving problems.

Rural Mathematics Talent Search

To spot and nurture talent in rural students, from Class V onwards. A Mathematics aptitude test (Junior Rural Mathematical Olympiad) is held amongst Class VI rural students region-wise. There are broadly three regions: Western Orissa, Southern Orissa and Coastal Orissa.

• The thirty top successful students from each region (from the JRMO test) will be selected.
• We propose to organize a one-week training camp in the headquarters of each of these regions, along with some selected school teachers from the region, twice a year. For each batch such camps shall be held for 3 consecutive years.
• The training should be such that the students acquire the skills to feel confident sitting the all-Orissa RMO test alongside the urban candidates.
• Region-specific Mathematics literature shall be developed in the vernacular language and shall be supplied to the trainees and teachers.
[FOM] Formalization Thesis
messing messi001 at umn.edu
Sun Jan 6 09:33:32 EST 2008

Rodin wrote: The notion of functor allows for a bunch of specifications (including faithfulness in the technical sense of CT) which could be useful in this context. Even more importantly, properties of functors can also be specified "externally". One may require, for example, for each "informal" theory to have exactly one translation (of some specified type) into a fixed formal language. In CT terms this would mean that the chosen formal system is the terminal object in the corresponding category. The understanding of functors as "translations" or "transformations" is, of course, a matter of "informal" interpretation, but it is fairly standard.

I do not understand. A category which possesses a final object does not necessarily possess a unique final object. For example, any one-element set is a final object in the category of sets. Any inverse limit (or for that matter any direct limit), if it exists, is defined only up to canonical isomorphism. I continue to be perplexed by the idea of using category theory to formulate the Formalization Thesis. The fact that faithfulness of a functor is a concept of category theory is, it seems to me, a linguistic coincidence with the idea of seeking a "faithful translation" into ZFC (or any other formal theory). Also, since the notion of isomorphism between categories is (obviously) too strict for any serious mathematical use, are two formalizations, expressed in the language of category theory, to be regarded as "the same" if the categories are equivalent, and if so, is a specification of the equivalence between the two categories to be considered part of the given data? If C and D are categories and F:C ---> D is an equivalence, is the choice of a quasi-inverse G:D ---> C to be part of the given data?
If so, is the choice of a natural transformation t:GF ===> Id_C, which is such that, for every object c of C, t_c:GF(c) ---> c is an isomorphism, to be part of the given data? Such specifications are frequently necessary in actually using category-theoretic notions in other areas of mathematics. See, for example, my paper with Larry Breen, The Differential Geometry of Gerbes, Math ArXiv math.AG/0106083.

Rodin wrote: The principal prerequisite seems to be not formalization but some kind of "categorification" (making into a category). "Ordinary" CT, just like any other part of ordinary maths, is not formal. A known technique is considering formal calculi as internal languages of appropriate categories (in particular toposes). This provides an interesting view on relationships btw syntax and semantics but I admit it doesn't solve the

Can one be more explicit about the "interesting view on the relationships between syntax and semantics" provided by the use of category theory?

William Messing

More information about the FOM mailing list
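For readers following the terminal-object point raised above: although a terminal object need not be literally unique, any two terminal objects are isomorphic via a unique isomorphism. A standard one-paragraph argument (an editorial addition, not part of the archived message):

```latex
Let $t$ and $t'$ be terminal objects in a category $\mathcal{C}$. By
terminality there are unique arrows $f : t \to t'$ and $g : t' \to t$.
Then $g \circ f$ and $\mathrm{id}_t$ are both arrows $t \to t$, so
$g \circ f = \mathrm{id}_t$ by the uniqueness of arrows into $t$;
symmetrically, $f \circ g = \mathrm{id}_{t'}$. Hence $f$ is an
isomorphism, and it is the only arrow $t \to t'$.
```

This "unique up to unique isomorphism" pattern is the same one at work in Messing's examples of inverse and direct limits.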
rational points on algebraic curves over $Q^{ab}$

Let $Q_{\infty,p}$ be the field obtained by adjoining to $Q$ all $p$-power roots of unity for a prime $p$. The union of these fields for all primes is the maximal cyclotomic extension $Q^{cycl}$ of $Q$. By Kronecker-Weber, $Q^{cycl}$ is also the maximal abelian extension $Q^{ab}$ of $Q$. A well known conjecture due to Mazur (with known examples) asserts, for an elliptic curve $E$ with certain conditions, that $E(Q_{\infty,p})$ is finitely generated. This is the group of rational points of $E$ over $Q_{\infty,p}$ (not a number field!). A theorem due to Ribet asserts the finiteness of the torsion subgroup $E(Q^{ab})$ for certain elliptic curves.

(a) Can one expect to find elliptic curves (or abelian varieties) $A$ with $A(Q^{ab})$ finitely generated?
(c) Can one expect to find curves $C$ of genus $g > 1$ with $C(Q^{ab})$ finite?

Tags: nt.number-theory, elliptic-curves, iwasawa-theory, arithmetic-geometry

Comments on the question:

Re question (c), Pete Clark has shown there are curves of every genus $g\geq 4$ with no points over ${\mathbb Q}^{ab}$: math.uga.edu/~pete/plclarkarxiv8v2.pdf. Fingers crossed and he'll appear here soon... – dke Apr 17 '11 at 3:14

Well, here I am, but you've already said most of what I would have. Nevertheless I left an answer, the main point being that there is something to say in the genus one case as well... – Pete L. Clark Apr 17 '11 at 7:58

Thanks for all these interesting comments! – SGP Apr 17 '11 at 12:01

To answer (c), $\mathbb{Q}^{ab}$ is a large field (in the sense of Florian Pop), so any variety that has a $\mathbb{Q}^{ab}$-rational point has infinitely many. So the answer to (c) is that either, as people pointed out, there are no points -- or there are necessarily infinitely many. – Makhalan Duff Apr 17 '11 at 16:44

$\mathbb{Q}^{ab}$ is only conjectured to be large. Otherwise Shafarevich's conjecture would be known (see math.upenn.edu/~harbater/patch35.pdf, pages 55-56.) – H. Hasson Apr 17 '11 at 18:20

3 Answers

Answer 1 (accepted). Actually Ken Ribet proved that if $K$ is a number field and $K(\mu_{\infty})$ is its infinite cyclotomic extension generated by all roots of unity, then for every abelian variety $A$ over $K$ the torsion subgroup of $A(K(\mu_{\infty}))$ is finite: http://math.berkeley.edu/~ribet/Articles/kl.pdf . On the other hand, Alosha Parshin conjectured that if $K_{p}$ is the extension of $K$ generated by all $p$-power roots of unity (for a given prime $p$), then the set $C(K_{p})$ is finite for every $K$-curve $C$ of genus $>1$: http://arxiv.org/abs/0912.4325 (see also http://arxiv.org/abs/1001.3424 ).

Thanks a lot for these very interesting papers! – SGP Apr 17 '11 at 12:37
You are welcome. – Yuri Zarhin Apr 17 '11 at 14:54
I think I am misremembering things and this is right and what I wrote in my answer (which I am going to delete) is wrong. – Felipe Voloch Apr 17 '11 at 16:08

Answer 2. As dke mentioned, I have a paper in which I construct various kinds of varieties $X_{/\mathbb{Q}}$ without abelian points (i.e., with $X(\mathbb{Q}^{\operatorname{ab}}) = \varnothing$). Here is a brief summary: If $X$ admits a $2:1$ map to a variety $Y$ with infinitely many $\mathbb{Q}$-rational points, then $X$ itself has infinitely many quadratic points -- i.e., points defined over the union of all quadratic extensions of $\mathbb{Q}$. This certainly lives inside $\mathbb{Q}^{\operatorname{ab}}$, so gives infinitely many abelian points.

Now I call a curve $X$ hyperelliptic if it admits a $2:1$ map down to $\mathbb{P}^1$. (I say "I call" because I am not making any genus restrictions and requiring the map to be defined over $\mathbb{Q}$. Standard terminology is taking a little while to catch up to me here...)

Now any curve of genus $0$ or $2$ is hyperelliptic, as is any curve of genus one with a rational point. So they all have infinitely many abelian points. If $E$ is an elliptic curve over $\mathbb{Q}$, then what I'm saying is that if it is given as $y^2 = x^3 + Ax + b$, then take $x$ to be any rational number and extract the square root: that will give you an abelian point. One can see that only finitely many of these quadratic points are torsion points, so we are certainly getting positive rank this way. Do these quadratic points already give infinite rank? I'm not sure (but I feel like I am forgetting something here). [Added: I think I was forgetting what is in Dror Speiser's nice comment below!] Note that here I am -- anemically -- addressing your question a).

A genus one curve without rational points need not be hyperelliptic, and in my paper I construct lots of genus one curves over $\mathbb{Q}$ without elliptic points. This is the key part, actually, because using this I construct curves of every genus $g \geq 4$ over $\mathbb{Q}$ without abelian points. This leaves genus $3$, which I was frustratingly unable to deal with in the paper and still can't. (In an appendix, I show that there are genus $3$ curves over some field without points over the maximal abelian extension of that field, unlike in the hyperelliptic cases. So my guess is that this should be possible over $\mathbb{Q}$ as well.) I didn't think at all about the problem of infinitely versus finitely many abelian points, probably because it cannot be attacked using the methods of my paper. But of course it is interesting too.

"Do these quadratic points already give infinite rank?" - yes. Say we have $n$ linearly independent points over the union of $\mathbb{Q}(\sqrt{d_i})$. Take $p$ a large prime that doesn't divide any one of the $d_i$, and such that $x^3+Ax+b=0\pmod{p}$ has a solution $x_0$ (Chebotarev). Then $(x_0,\sqrt{f(x_0)})$ is a new point, and these $n+1$ points are again linearly independent: if a relation exists, just add to it its galois conjugate relation ($\sqrt{f(x_0)}\mapsto -\sqrt{f(x_0)}$), getting a relation on the $n$ points. The answer to a) (for jacobians of hyperelliptic curves) is "no". – Dror Speiser Apr 17 '11 at 9:14
Thank you! Your paper is a real eye-opener! – SGP Apr 17 '11 at 11:59
@Speiser: thanks for the explanation! – SGP Apr 17 '11 at 12:07

Answer 3. I admit that I haven't read it carefully, but in this paper E. Kobayashi conjectures that $E(\mathbb Q^{\rm ab})$ has infinite rank for all elliptic curves $E$ defined over $\mathbb Q^{\rm ab}$. In particular, assuming the "weak" Birch and Swinnerton-Dyer conjecture for $E$ and certain properties of twisted Hasse-Weil $L$-functions of $E$, she shows that if $E$ is defined over a number field $K$ of odd degree then $E(K\cdot\mathbb Q^{\rm ab})$ has infinite rank (Theorem 2 in Kobayashi's article). This result for elliptic curves seems to suggest that the answer to your question (a) might be "no".

As for a simple abelian variety $A$ over a number field $K$, it is perhaps worth pointing out that Zarhin proved here that the torsion subgroup of $A(K^{\rm ab})$ is finite if and only if $A$ is not of CM-type over $K$ (when $K=\mathbb Q$ this reduces to an earlier theorem of Ribet).

Thanks! I did not know any of these results! – SGP Apr 17 '11 at 12:00
You're welcome! – Stefano V. Apr 17 '11 at 12:35
``As for a simple abelian variety $A$ over a number field $K$, the torsion subgroup of $A(K^{ab})$ is finite if and only if $A$ is not of CM-type over'' $K$. (in other words, $\bar{K}$ on the last line of your comment should be replaced by $K$.) – Yuri Zarhin Apr 17 '11 at 14:54
@Yuri Zarhin: Actually, there is no $\bar{K}$ in the last line of my comment: it's just the underline to the word "here" in the second to last line. Or perhaps I'm misunderstanding what you mean, sorry... – Stefano V. Apr 17 '11 at 20:10
OOPS! Sorry. I've mistook the underline as bar over K. – Yuri Zarhin Apr 17 '11 at 20:27
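Pete Clark's recipe (pick a rational $x$ on $y^2 = x^3 + Ax + B$ and adjoin $\sqrt{f(x)}$) is easy to experiment with numerically. A minimal Python sketch follows; the helper names and the toy curve $y^2 = x^3 - 2$ are illustrative choices of mine, not taken from the thread. For each rational $x$ it reports the squarefree $d$ such that the point $(x, \sqrt{f(x)})$ is defined over $\mathbb{Q}(\sqrt{d})$, with $d = 1$ meaning the point is already rational:

```python
from fractions import Fraction

def squarefree_part(n):
    """Squarefree part of a nonzero integer (sign preserved), by trial division."""
    sign = -1 if n < 0 else 1
    n = abs(n)
    d, p = 1, 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e % 2 == 1:
            d *= p
        p += 1
    return sign * d * n  # n is now 1 or a leftover prime

def quadratic_field(A, B, x):
    """For y^2 = x^3 + A x + B and rational x, return squarefree d with
    (x, sqrt(f(x))) defined over Q(sqrt(d));  d = 1 means Q itself."""
    x = Fraction(x)
    f = x ** 3 + A * x + B
    if f == 0:
        return 1  # a 2-torsion point, already rational
    # sqrt(p/q) = sqrt(p*q)/q, so the field is determined by sqrt(p*q)
    return squarefree_part(f.numerator * f.denominator)

# toy curve y^2 = x^3 - 2
print([quadratic_field(0, -2, x) for x in range(1, 6)])  # [-1, 6, 1, 62, 123]
```

Varying $x$ lands the points in varying quadratic fields (for $x = 3$ we get $d = 1$, i.e. the rational point $(3, 5)$), which is consistent with Speiser's argument that infinitely many distinct $d_i$ occur.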
Functions on a Cartesian Plane

Suppose that you have a set of points given by their $x$- and $y$-values.

Functions as Graphs

Once a table has been created for a function, the next step is to visualize the relationship by graphing the coordinates (independent value, dependent value). In previous courses, you have learned how to plot ordered pairs on a coordinate plane. The first coordinate represents the horizontal distance from the origin (the point where the axes intersect). The second coordinate represents the vertical distance from the origin. To graph a coordinate point such as (4, 2) we start at the origin. Because the first coordinate is positive four, we move 4 units to the right. From this location, since the second coordinate is positive two, we move 2 units up.

Example A

Plot the following coordinate points on the Cartesian plane.
(a) (5, 3)
(b) (–2, 6)
(c) (3, –4)
(d) (–5, –7)

Solution: We show all the coordinate points on the same plot. Notice that:
For a positive $x$-value, we move to the right of the origin.
For a negative $x$-value, we move to the left.
For a positive $y$-value, we move up.
For a negative $y$-value, we move down.

When referring to a coordinate plane, also called a Cartesian plane, the four sections are called quadrants. The first quadrant is the upper right section, the second quadrant is the upper left, the third quadrant is the lower left and the fourth quadrant is the lower right.

Example B

Suppose we wanted to visualize Joseph's total cost of riding at the amusement park. Using the table generated in a previous Concept, the graph can be constructed as (number of rides, total cost).

  r   J(r) = 2r
  0   2(0) = 0
  1   2(1) = 2
  2   2(2) = 4
  3   2(3) = 6
  4   2(4) = 8
  5   2(5) = 10
  6   2(6) = 12

The green dots represent the combinations $(r, J(r))$. The dots are not connected, because you cannot ride $2 \frac{1}{2}$ rides; a graph made of separate points like this is called a scatter plot.

Writing a Function Rule Using a Graph

In this course, you will learn to recognize different kinds of functions. There will be specific methods that you can use for each type of function that will help you find the function rule.
For now, we will look at some basic examples and find patterns that will help us figure out the relationship between the dependent and independent variables.

Example C

The graph below shows the distance that an inchworm covers over time. Find the function rule that shows how distance and time are related to each other.

Solution: Make a table of values of several coordinate points to identify a pattern.

  Time      0   1    2   3    4   5    6
  Distance  0   1.5  3   4.5  6   7.5  9

We can see that for every minute the distance increases by 1.5 feet. We can write the function rule as:

  Distance = 1.5 × time

The equation of the function is $f(x) = 1.5x$.

In many cases, you are given a graph and asked to determine the relationship between the independent and dependent variables. From a graph, you can read pairs of coordinate points that are on the curve of the function. The coordinate points give values of dependent and independent variables. These variables are related to each other by a rule. It is important we make sure this rule works for all the points on the curve. Finding a function rule for real-world data allows you to make predictions about what may happen.

Analyze the Graph of a Real-World Situation

Graphs are used to represent data in all areas of life. You can find graphs in newspapers, political campaigns, science journals, and business presentations.

Example D

Here is an example of a graph you might see reported in the news. Most mainstream scientists believe that increased emissions of greenhouse gases, particularly carbon dioxide, are contributing to the warming of the planet. The graph below illustrates how carbon dioxide levels have increased as the world has industrialized. From this graph, we can find the concentration of carbon dioxide found in the atmosphere in different years.
1900 - 285 parts per million
1930 - 300 parts per million
1950 - 310 parts per million
1990 - 350 parts per million

In future lessons, you will learn how to approximate an equation to fit this data using a graphing calculator.

Guided Practice

Graph the function that has the following table of values. Find the function rule.

  x  0  1  2  3  4
  y  0  1  4  9  16

The table gives us five sets of coordinate points: (0, 0), (1, 1), (2, 4), (3, 9), (4, 16). To graph the function, we plot all the coordinate points. We observe that the pattern is that the dependent values are the squares of the independent values, so the function rule is $f(x) = x^2$. Because squaring a number always produces a non-negative output, the range of this function is all non-negative real numbers, or $y \ge 0$.

Sample explanations for some of the practice exercises below are available by viewing the following video. Note that there is not always a match between the number of the practice exercise in the video and the number of the practice exercise listed in the following exercise set. However, the practice exercise is the same in both.

CK-12 Basic Algebra: Functions as Graphs (9:34)

In 1 – 5, plot the coordinate points on the Cartesian plane.

1. (4, –4)
2. (2, 7)
3. (–3, –5)
4. (6, 3)
5. (–4, 3)

6. Using the coordinate plane below, give the coordinates for a – e.

In 7 – 9, graph the relation on a coordinate plane. According to the situation, determine whether to connect the ordered pairs with a smooth curve or leave the graph as a scatter plot.

7.
  X  –10  –5    0  5    10
  Y  –3   –0.5  2  4.5  7

8. Side of cube (in inches) versus volume of cube (in inches$^3$).

9. Time (in hours) versus distance (in miles), including the points (–2, –50) and (–1, 25).

In 10 – 12, graph the function.

10. Brandon is a member of a movie club. He pays a $50 annual membership and $8 per movie.
11. $f(x) = (x - 2)^2$
12. $f(x) = 3.2^x$

13. The students at a local high school took the Youth Risk Behavior Survey.
The graph below shows the percentage of high school students who reported that they were current smokers. A person qualifies as a current smoker if he/she has smoked one or more cigarettes in the past 30 days. What percentage of high school students were current smokers in the following years?
(a) 1991
(b) 1996
(c) 2004
(d) 2005

14. The graph below shows the average lifespan of people based on the year in which they were born. This information comes from the National Vital Statistics Report from the Center for Disease Control. What is the average lifespan of a person born in the following years?
(a) 1940
(b) 1955
(c) 1980
(d) 1995

15. The graph below shows the median income of an individual based on his/her number of years of education. The top curve shows the median income for males and the bottom curve shows the median income for females (Source: US Census, 2003). What is the median income of a male who has the following years of education?
(a) 10 years of education
(b) 17 years of education
What is the median income of a female who has the same years of education?
(c) 10 years of education
(d) 17 years of education
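The pattern-spotting used in Example C and the Guided Practice can be mirrored in code. This small Python sketch (the data is copied from the tables above; the checks themselves are an editorial addition, not part of the lesson) shows the constant-difference test that signals a linear rule, and the squares check from the Guided Practice:

```python
# Table from Example C: distance covered by the inchworm each minute.
time = [0, 1, 2, 3, 4, 5, 6]
distance = [0, 1.5, 3, 4.5, 6, 7.5, 9]

# A constant change per unit of time signals a linear function rule.
steps = {distance[i + 1] - distance[i] for i in range(len(time) - 1)}
rate = steps.pop() if len(steps) == 1 else None
print(rate)  # 1.5, i.e. Distance = 1.5 * time

# Guided Practice table: each output is the square of its input.
xs, ys = [0, 1, 2, 3, 4], [0, 1, 4, 9, 16]
print(all(y == x ** 2 for x, y in zip(xs, ys)))  # True, i.e. f(x) = x^2
```

These two checks correspond to reading the table row by row: equal first differences point to a linear rule, while matching each output against a candidate formula verifies a nonlinear one.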
How To Solve Fractions

How To Solve Fraction Equations

Fractions are a very important part of maths and they help young children to understand things better. They are very easy and simple to work with, and also great fun. In this video, I am going to teach you how to solve fraction equations. Okay, a fraction looks something like this, 3/4, which means three divided by four, or it is something like this, 2/3, which means two divided by three. Now, this is called the numerator, and this is called the denominator.

So, step one: How to add? Let us say we have 1/2 plus 1/3. Now, what does this actually mean? I like to think of this as a pie, half a pie plus one third. This is one half, this is one third. So, how much of a pie is this? To do this, you need to find a common denominator here; you need to find a common number with two as a factor of this number and three as a factor of this number. Now, I can see it is six. How do I find out this is six? I did this: two times three equals six. So, I need to find out this in terms of six and this in terms of six. To do that, I multiplied this number two by three, so it's six. So if I multiply the bottom of this fraction by three and I multiply the top by three, we have three, which is three times one, divided by six, so one half is the same as three sixths. And the same here: I am going to multiply three by two to make six, so two times one is two and two times three is six. Now I have three sixths and two sixths. This is three sixths and this is two sixths, so how many sixths is that? Well, that is five sixths. I have three plus two is five, and leave the denominator as it is.

Step two: How to subtract? Now, subtracting fractions is exactly the same as adding fractions except for a minus sign in the middle. So, you need to subtract the numerators. In this case, I am going to do three quarters minus two thirds. Once again, we need to find the common denominator. How do we do that?
Four times three is equal to twelve, so I need to multiply this fraction by three: three times three is nine and three times four is twelve. Now, I want a common denominator, so I am going to do exactly the same to this one: four times three is twelve, so I need to multiply this fraction by four. Four times two is eight, and four times three is obviously twelve. So, now we have our common denominator, and all we need to do now (the only difference between adding and subtracting) is to subtract the numerators. So, as we all know, nine minus eight is one; you keep the denominator the same, and we have one twelfth.

Step three: How to multiply fractions? This is the easiest of all. So, step three, multiply. We are going to take again three out of four, and this time we are going to multiply by two thirds. How do we do this? This is dead easy. You multiply the top numbers together and you multiply the bottom numbers together. Three times two is six, four times three is twelve. Now, six twelfths can actually be cancelled: six goes into six once and six goes into twelve twice. If you divide the top and bottom by six, we get one over two. So, six twelfths is exactly the same as one half.

Now, step four: How to divide fractions? This is as simple as multiplying, but there is one more step. So, step four, divide. This time, I am going to divide three quarters by two thirds. Okay, so we have three quarters, and I am going to divide by two thirds. This is very similar to multiplying, but you must do one step first. You take the second fraction only and flip it upside down. Okay, so I am going to do that: I am going to write three quarters, I am going to flip this one upside down, so now it looks like three over two, and all you do is change this divide to multiply. So, we know how to multiply fractions, we just did it.
Three times three is nine, and four times two is eight. So the answer is nine eighths, and that is how to solve fraction equations.
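The four steps in the transcript can be checked with Python's fractions module. This is a sketch added here for illustration; it is not part of the original video:

```python
from fractions import Fraction

# Step one: 1/2 + 1/3 over the common denominator 6
print(Fraction(1, 2) + Fraction(1, 3))   # 5/6

# Step two: 3/4 - 2/3 over the common denominator 12
print(Fraction(3, 4) - Fraction(2, 3))   # 1/12

# Step three: multiply tops together and bottoms together, then cancel 6/12
print(Fraction(3, 4) * Fraction(2, 3))   # 1/2

# Step four: flip the second fraction and multiply
print(Fraction(3, 4) / Fraction(2, 3))   # 9/8
```

Fraction reduces results automatically, which is why step three prints 1/2 rather than 6/12.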
Lake Success, NY Algebra Tutor
Find a Lake Success, NY Algebra Tutor

...For a year, I was a substitute para-professional with the NYC Dept. of Education, District 75, working with children on the severe end of the autistic spectrum. For 7 years, I worked as a case manager, outreach worker, and supervisor at the Hetrick-Martin Institute, providing services to runaw...
29 Subjects: including algebra 1, algebra 2, English, reading

...ACT math, as opposed to its SAT counterpart, favors students who like applied math and hinders those who aren't attuned to it or haven't been trained for it. Another significant factor in this test is about recognizing the patterns of missing information in their reasoning questions. That too can be learned.
55 Subjects: including algebra 2, algebra 1, Spanish, reading

...I have worked with students with learning disabilities as well as gifted students taking advanced classes or classes beyond their grade level. I enjoy helping students achieve their maximum potential! I have helped many math students raise their grades dramatically in short periods of time.
34 Subjects: including algebra 1, algebra 2, calculus, writing

I am currently at St. John's University as a pharmacy major. I have had plenty of experience in tutoring children of all ages, through several volunteer organizations.
32 Subjects: including algebra 2, biology, chemistry, elementary (k-6th)

...My specialties are reading, writing, elementary math and science, as well as high school-level math and science. In my free time I practice ballet and yoga, and I love going to museums and watching sports! I have a unique tutoring style, using standard books but also bringing my own creativity to fuel a student's learning environment.
31 Subjects: including algebra 1, algebra 2, English, elementary math
Homework Help

Posted by Amy on Monday, September 5, 2011 at 6:22pm.

invested 2000, gains 0.2 every odd numbered month, gains 0.15 every even numbered month. write function for n months?

i've tried solving this with a formula, but every formula i come up with is wrong. Could someone tell me how to solve this and plug in the right numbers into a formula? **i already posted this but without the % for even numbered months. should there be two different formulas for even and odd months?

• MATH - Amy, Monday, September 5, 2011 at 6:32pm

For the first five months algebraically i got:
1: 2400
2: 2040
3: 2448
4: 2080.8
5: 2496.96

i tried using this formula: when i plugged in 1 for n i got 2400. however, when i plugged in 2 for n i got 2880, which isn't right. so i changed the formula to 2040(1 + .2)^1. I got the right answer but is there a simple formula where i can just plug in the "n" and get the right amount?

• MATH please help - Amy, Monday, September 5, 2011 at 8:41pm

should i repost this?
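Amy's five values can be reproduced with a short loop. The sketch below is not part of the original thread; it assumes, from her numbers, that odd months multiply the balance by 1.2 and even months by 0.85 — her month-2 value 2040 is 2400 × 0.85, i.e. a 15% decrease, even though the post says "gains 0.15":

```python
def value_after(n, start=2000.0):
    """Balance after n months: x1.2 in odd months, x0.85 in even months."""
    v = start
    for month in range(1, n + 1):
        v *= 1.2 if month % 2 == 1 else 0.85
    return v

for n in range(1, 6):
    # Matches Amy's list: 2400, 2040, 2448, 2080.8, 2496.96
    print(n, round(value_after(n), 2))
```

Under that assumption a plug-in formula does exist: each odd/even pair of months multiplies the balance by 1.2 × 0.85 = 1.02, so after 2k months the value is 2000 × 1.02^k (for example, 2000 × 1.02² = 2080.8 at month 4), and for an odd month you multiply by one extra factor of 1.2.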
Summary: On the connectedness of self-affine attractors
Shigeki AKIYAMA and Nertila GJINI

Let T = T(A, D) be a self-affine attractor in R^n defined by an integral expanding matrix A and a digit set D. In the first part of this paper, in connection with canonical number systems, we study connectedness of T when D corresponds to the set of consecutive integers {0, 1, ..., |det(A)| − 1}. It is shown that in R^3 and R^4, for any integral expanding matrix A, T(A, D) is connected. In the second part, we study connectedness of Pisot dual tiles, which play an important role in the study of β-expansions, substitutions and symbolic dynamical systems. It is shown that each tile of the dual tiling generated by a Pisot unit of degree 3 is arcwise connected. This is naturally expected since the digit set consists of consecutive integers, as above. However, surprisingly, we found families of disconnected Pisot dual tiles of degree 4. We even give a simple necessary and sufficient condition for connectedness of the Pisot dual tiles of degree 4. Detailed proofs will be given in [4].

1 Introduction

In this paper, we shall give a brief summary of the paper [4]. Proofs given here are representative parts of the detailed ones in [4]. Let M_n(Z) be the set of n × n
matrix destructor

03-05-2010 #1
Registered User
Join Date: May 2009

My destructor works, but I wanted to check to see if it's correct to do it this way. Here's the code first for the default constructor:

    template<typename T, int n>
    Matrix<T, n>::Matrix()
    {
        T** arr = new T*[n];
        for (int i = 0; i < n; ++i)
            arr[i] = new T[n];
        for (int i = 0; i < n; ++i)          // rows
            for (int j = 0; j < n; ++j)      // columns
                arr[i][j] = 0;               // T must define 0 and 1
        content = arr;
    }

Now the destructor:

    // destructor
    template<typename T, int n>
    Matrix<T, n>::~Matrix()
    {
        // free memory on each row
        for (int i = 0; i < n; ++i)
        {
            delete [] content[i];
            content[i] = NULL;
        }
        // delete content entirely
        delete [] content;
        content = NULL;
    }

correct? it at least seems to me that just delete [] content wouldn't set the individual blocks pointed to back as available memory.

03-05-2010 #2
Yes, that's correct. Assigning pointers to NULL in this context isn't necessary. But it's a useful habit that could easily save weeks of confusion if you're re-using pointers and make the mistake of trying to use one after it has been freed.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.

03-05-2010 #3
Registered User
Join Date: May 2009
ok, cool, tx!

03-05-2010 #4
and the hat of sweating
Join Date: Aug 2007
Toronto, ON
I think your constructor should have a try-catch block to delete any memory you've allocated, just in case you get a std::bad_alloc exception in one of your calls to new.
"I am probably the laziest programmer on the planet, a fact with which anyone who has ever seen my code will agree." - esbo, 11/15/2008
"the internet is a scary place to be thats why i dont use it much." - billet, 03/17/2010
Xinbing Wang, Luoyi Fu, Xiaohua Tian, Yuanzhe Bei, Qiuyu Peng, Xiaoying Gan, Hui Yu, Jing Liu, "Converge Cast: On the Capacity and Delay Tradeoffs," IEEE Transactions on Mobile Computing, vol. 11, no. 6, pp. 970-982, June 2012, doi:10.1109/TMC.2011.110.

In this paper, we define an ad hoc network where multiple sources transmit packets to one destination as a Converge-Cast network. We will study the capacity-delay tradeoffs assuming that n wireless nodes are deployed in a unit square. For each session (a session is a data flow from k different source nodes to 1 destination node), k nodes are randomly selected as active sources and each transmits one packet to a particular destination node, which is also randomly selected. We first consider the stationary case, where capacity is mainly discussed and delay is entirely dependent on the average number of hops.
We find that the per-node capacity is Θ(1/√(n log n)), which is the same as that of unicast. (Given nonnegative functions f(n) and g(n): f(n) = O(g(n)) means there exist positive constants c and m such that f(n) ≤ c·g(n) for all n ≥ m; f(n) = Ω(g(n)) means there exist positive constants c and m such that f(n) ≥ c·g(n) for all n ≥ m; f(n) = Θ(g(n)) means that both f(n) = Ω(g(n)) and f(n) = O(g(n)) hold.) Then, node mobility is introduced to increase network capacity, for which our study is performed in two steps. The first step is to establish the delay in single-session transmission. We find that the delay is Θ(n log k) under the 1-hop strategy, and Θ(n log k / m) under the 2-hop redundant strategy, where m denotes the number of replicas for each packet. The second step is to find delay and capacity in multisession transmission. We reveal that the per-node capacity and delay for the 2-hop non-redundancy strategy are Θ(1) and Θ(n log k), respectively. The optimal delay is Θ(√(n log k) + k) with redundancy, corresponding to a capacity of Θ(√(1/(n log k)) + k/(n log k)). Therefore, we obtain that the capacity-delay tradeoff satisfies delay/rate ≥ Θ(n log k) for both strategies. [1] M.J. Neely and E. Modiano, “Capacity and Delay Tradeoffs for Ad Hoc Mobile Networks,” IEEE Trans. Information Theory, vol. 51, no. 6, pp. 1917-1937, June 2005. [2] X. Wang, W. Huang, S. Wang, J. Zhang, C. Hu, “Delay and Capacity Tradeoff Analysis for MotionCast,” IEEE/ACM Trans. Networking, vol. 19, no. 5, pp. 1354-1367, Oct. 2011, doi:10.1109/ [3] P. Gupta and P.R. Kumar, “The Capacity of Wireless Networks,” IEEE Trans. Information Theory, vol. 46, no. 2, pp. 388-404, Mar. 2000. [4] F. Xue and P. Kumar, Scaling Laws for Ad-Hoc Wireless Networks: An Information Theoretic Approach. Now Publishers Inc., 2006. [5] X.-Y. Li, S.-J. Tang, and O.
Frieder, “Multicast Capacity for Large Scale Wireless Ad Hoc Networks,” Proc. ACM MobiCom, Sept. 2007. [6] M. Grossglauser and D.N.C. Tse, “Mobility Increases the Capacity of Ad Hoc Wireless Networks,” IEEE/ACM Trans. Networking, vol. 10, no. 4, pp. 477-486, Aug. 2002. [7] X. Lin and N.B. Shroff, “The Fundamental Capacity-Delay Tradeoff in Large Mobile Wireless Networks,” technical report, http://cobweb.ecn.purdue.edu/linxpapers.html , 2004. [8] S. Shakkottai, X. Liu, and R. Srikant, “The Multicast Capacity of Large Multihop Wireless Networks,” Proc. ACM MobiHoc, Sept. 2007. [9] S. Zhou and L. Ying, “On Delay Constrained Multicast Capacity of Large-Scale Mobile Ad-Hoc Networks,” Proc. IEEE INFOCOM, 2010. [10] P. Li, Y. Fang, and J. Li, “Throughput, Delay, and Mobility in Wireless Ad Hoc Networks,” Proc. IEEE INFOCOM, 2010. [11] X.-Y. Li, S.-J. Tang, and O. Frieder, “Multicast Capacity for Large Scale Wireless Ad Hoc Networks,” Proc. ACM MobiCom, Sept. 2007. [12] L.-L. Xie and P.R. Kumar, “On the Path-Loss Attenuation Regime for Positive Cost and Linear Scaling of Transport Capacity in Wireless Networks,” IEEE Trans. Information Theory, vol. 52, no. 6, pp. 2313-2328, June 2006. [13] X. Lin, G. Sharma, R.R. Mazumdar, and N.B. Shroff, “Degenerate Delay-Capacity Trade-Offs in Ad Hoc Networks with Brownian Mobility,” Joint Special Issue of IEEE Trans. Information Theory and IEEE/ACM Trans. Networking on Networking and Information Theory, vol. 52, no. 6, pp. 2777-2784, June 2006. [14] G. Sharma and R. Mazumdar, “Scaling Laws for Capacity and Delay in Wireless Ad Hoc Networks with Random Mobility,” Proc. IEEE Int'l Conf. Comm., 2004. [15] J. Mammen and D. Shah, “Throughput and Delay in Random Wireless Networks with Restricted Mobility,” IEEE Trans. Information Theory, vol. 53, no. 3, pp. 1108-1116, Mar. 2007. [16] G. Zhang, Y. Xu, X. Wang, and M. Guizani, “Capacity of Hybrid Wireless Networks with Directional Antenna and Delay Constraint,” IEEE Trans. Comm., vol. 58, no. 
7, pp. 2097-2106, July 2010. [17] S. Shakkottai, X. Liu, and R. Srikant, “The Multicast Capacity of Large Multihop Wireless Networks,” Proc. ACM MobiHoc, Sept. 2007. [18] L. Ying, S. Yang, and R. Srikant, “Coding Achieves the Optimal Delay-Throughput Tradeoff in Mobile Ad Hoc Networks: A Hybrid Random Walk Model with Fast Mobiles,” Proc. Information Theory and Application Workshop (ITA), 2007. [19] U. Lee, S.-Y. Oh, K.-W. Lee, and M. Gerla, “RelayCast: Scalable Multicast Routing in Delay Tolerant Networks,” Proc. ACM Int'l Conf. Network Protocol (ICNP '08), Oct. 2008. [20] L. Ying, S. Yang, and R. Srikant, “Optimal Delay-Throughput Trade-Offs in Mobile Ad-Hoc Networks,” IEEE Trans. Information Theory, vol. 9, no. 54, pp. 4119-4143, Sept. 2008. [21] X. Wang, Y. Bei, Q. Peng, and L. Fu, “Speed Improves Delay-Capacity Tradeoff in MotionCast,” IEEE Trans. Parallel and Distributed Systems, vol. 22, no. 5, pp. 729-742, May 2011, doi: 10.1109/ [22] S. Toumpis and A.J. Goldsmith, “Large Wireless Networks under Fading, Mobility, and Delay Constraints,” Proc. IEEE INFOCOM, Mar. 2004. [23] A. El-Gamal, J. Mammen, B. Prabhakar, and D. Shah, “Optimal Throughput-Delay Scaling in Wireless Networks - Part I: The Fluid Model,” IEEE Trans. Information Theory, vol. 52, no. 6, pp. 2568-2592, June 2006. [24] A. El-Gamal, J. Mammen, B. Prabhakar, and D. Shah, “Optimal Throughput-Delay Scaling in Wireless Networks - Part II: Constant-Size Packets,” IEEE Trans. Information Theory, vol. 52, no. 11, pp. 5111-5116, Nov. 2006. [25] P. Zhang, C.M. Sadler, S.A. Lyon, and M. Martonosi, “Hardware Design Experiences in Zebranet,” Proc. ACM Int'l Conf. Embedded Networked Sensor Systems (SenSys), 2004. [26] M. Zhao, M. Ma, and Y. Yang, “Mobile Data Gathering with Space-Division Multiple Access in Wireless Sensor Networks,” Proc. IEEE INFOCOM, Apr. 2008. [27] K. Ota, M. Dong, and X. Li, “TinyBee: Mobile-Agent-Based Data Gathering System in Wireless Sensor Networks,” Proc. IEEE Int'l Conf. 
Network, Architecture, and Storage, 2009. [28] L. Ying, S. Yang, and R. Srikant, “Optimal Delay-Throughput Trade-Offs in Mobile Ad Hoc Networks,” IEEE Trans. Information Theory, vol. 54, no. 9, pp. 4119-4143, Sept. 2008. [29] M. Garetto, P. Giaccone, and E. Leonardi, “Capacity Scaling in Ad Hoc Networks with Heterogeneous Mobile Nodes: The Super-Critical Regime,” IEEE/ACM Trans. Networking, vol. 17, no. 5, pp. 1522-1535, Oct. 2009. [30] M. Garetto, P. Giaccone, E. Leonardi, “Capacity Scaling in Ad Hoc Networks with Heterogeneous Mobile Nodes: The Sub-Critical Regime,” ACM/IEEE Trans. Networking, vol. 17, no. 6, pp. 1888-1901, Dec. 2009. [31] D. Ciullo, V. Martina, M. Garetto, E. Leonardi, “Impact of Correlated Mobility on Delay-Throughput Performance in Mobile Ad-Hoc Networks,” Proc. IEEE INFOCOM, Mar. 2010. Index Terms: Converge cast, capacity, delay.
Relationship between density and deterministic complexity of NP-complete languages
Results 1 - 10 of 18

, 1993 — Cited by 39 (9 self)
We obtain several results that distinguish self-reducibility of a language L from the question of whether search reduces to decision for L. These include: (i) If NE ≠ E, then there exists a set L in NP − P such that search reduces to decision for L, search does not nonadaptively reduce to decision for L, and L is not self-reducible.

- IN PROCEEDINGS OF THE 43RD IEEE SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE, 2002 — Cited by 36 (15 self)
We show that sets consisting of strings of high Kolmogorov complexity provide examples of sets that are complete for several complexity classes under probabilistic and non-uniform reductions. These sets are provably not complete under the usual many-one reductions.

, 1990 — Cited by 29 (3 self)
This paper surveys investigations into how strong these commonalities are. More concretely, we are concerned with: What do NP-complete sets look like? To what extent are the properties of particular NP-complete sets, e.g., SAT, shared by all NP-complete sets? If there are structural differences between NP-complete sets, what are they and what explains the differences? We make these questions, and the analogous questions for other complexity classes, more precise below. We need first to formalize NP-completeness. There are a number of competing definitions of NP-completeness. (See [Har78a, p. 7] for a discussion.) The most common, and the one we use, is based on the notion of m-reduction, also known as polynomial-time many-one reduction and Karp reduction. A set A is m-reducible to B if and only if there is a (total) polynomial-time computable function f such that for all x, x ∈ A ⟺ f(x) ∈ B. (1)

, 1992
"... this paper was coauthored by K. Wagner ..."

- IN PROC. 12TH CONFERENCE ON THE FOUNDATIONS OF SOFTWARE TECHNOLOGY & THEORETICAL COMPUTER SCIENCE, 1992 — Cited by 17 (8 self)
In this paper we study the consequences of the existence of sparse hard sets for different complexity classes under certain types of deterministic, randomized and nondeterministic reductions. We show that if an NP-complete set is bounded-truth-table reducible to a set that conjunctively reduces to a sparse set then P = NP. Relatedly, we show that if an NP-complete set is bounded-truth-table reducible to a set that co-rp reduces to some set that conjunctively reduces to a sparse set then RP = NP. We also prove similar results under the (apparently) weaker assumption that some solution of the promise problem (1SAT, SAT) reduces via the mentioned reductions to a sparse set. Finally we consider nondeterministic polynomial time many-one reductions to sparse and co-sparse sets. We prove that if a coNP-complete set reduces via a nondeterministic polynomial time many-one reduction to a co-sparse set then PH = Θ^p_2. On the other hand, we show that nondeterministic polynomial ...

, 1991 — Cited by 13 (7 self)
A basic question about NP is whether or not search reduces in polynomial time to decision. We indicate that the answer is negative: under a complexity assumption (that deterministic and nondeterministic double-exponential time are unequal) we construct a language in NP for which search does not reduce to decision. These ideas extend in a natural way to interactive proofs and program checking. Under similar assumptions we present languages in NP for which it is harder to prove membership interactively than it is to decide this membership. Similarly we present languages where checking is harder than computing membership. Each of the following properties --- checkability, random-self-reducibility, reduction from search to decision, and interactive proofs in which the prover's power is limited to deciding membership in the language itself --- implies coherence, one of the weakest forms of self-reducibility. Under assumptions about triple-exponential time, we construct incoherent sets in NP. ...

- PROC. 10TH STRUCTURE IN COMPLEXITY THEORY CONFERENCE, IEEE, 1995 — Cited by 10 (2 self)
Over a decade ago, Schöning introduced the concept of lowness into structural complexity theory. Since then a large body of results has been obtained classifying various complexity classes according to their lowness properties. In this paper we highlight some of the more recent advances on selected topics in the area. Among the lowness properties we consider are polynomial-size circuit complexity, membership comparability, approximability, selectivity, and cheatability. Furthermore, we review some of the recent results concerning lowness for counting classes.

- in Proc. Int. Conf. Communications (ICC), 2001 — Cited by 10 (1 self)
Bipartite graphs of bit nodes and parity check nodes arise as Tanner graphs corresponding to low density parity check codes. Given graph parameters such as the number of check nodes, the maximum check-degree, the bit-degree, and the girth, we consider the problem of constructing bipartite graphs with the largest number of bit nodes, that is, the highest rate. We propose a simple-to-implement heuristic BIT-FILLING algorithm for this problem. As a benchmark, our algorithm yields codes better or comparable to those in MacKay [1].

, 1996 — Cited by 8 (2 self)
The study of sparse hard sets and sparse complete sets has been a central research area in complexity theory for nearly two decades. Recently new results using unexpected techniques have been obtained. They provide new and easier proofs of old theorems, proofs of new theorems that unify previously known results, resolutions of old conjectures, and connections to the fascinating world of randomization and derandomization. In this article we give an exposition of this vibrant research area. 1 Introduction. Complexity theory is concerned with the quantitative limitation and power of computation. During the past several decades computational complexity theory developed gradually from its initial awakening [Rab59, Yam62, HS65, Cob65] to the current edifice of a scientific discipline that is rich in beautiful results, powerful techniques, fascinating research topics and conjectures, deep connections to other mathematical subjects, and of critical importance to everyday computing. The buildin...

, 1994 — Cited by 5 (3 self)
Mahaney and others have shown that sparse self-reducible sets have time-efficient algorithms, and have concluded that it is unlikely that NP has sparse complete sets. Mahaney's work, intuition, and a 1978 conjecture of Hartmanis notwithstanding, nothing has been known about the density of complete sets for feasible classes until now. This paper shows that sparse self-reducible sets have space-efficient algorithms, and in many cases, even have time-space-efficient algorithms. We conclude that NL, NC^k, AC^k, LOG(DCFL), LOG(CFL), and P lack complete (or even Turing-hard) sets of low density unless implausible complexity class inclusions hold. In particular, if NL (respectively P, k, or NP) has a polylog-sparse logspace-hard set, then NL ⊆ SC (respectively P ⊆ SC, k ⊆ SC, or PH ⊆ SC), and if P has subpolynomially sparse logspace-hard sets, then P ≠ PSPACE. Subject classifications: 68Q15, 03D15. 1. Introduction. Complete sets are the quintessences of their complexity cla...
Rotating Rockets

1. The problem statement, all variables and given/known data

Far out in space, a mass1 = 146500.0 kg rocket and a mass2 = 209500.0 kg rocket are docked at opposite ends of a motionless 70.0 m long connecting tunnel. The tunnel is rigid and has a mass of 10500.0 kg. The rockets start their engines simultaneously, each generating 59200.0 N of thrust in opposite directions. What is the structure's angular velocity after 33.0 s?

2. Relevant equations

center of mass = (m1x1 + m2x2 + m3x3)/(m1 + m2 + m3), where m1 = mass of rocket 1, m2 = mass of tunnel, m3 = mass of rocket 2, and each x = distance to the origin (in this case, rocket one is used as the origin, set at 0)
Moment of inertia = m1r1^2 + m2r2^2 (m = mass, r = distance to center of mass)
Moment of inertia of a thin, long rod about its center = (1/12)ML^2 (M = mass, L = length of rod)
Torque = length × Force
angular acceleration = Torque/inertia
angular velocity = angular acceleration × time

3. The attempt at a solution

So, I found the center of mass, and I know it's correct because the previous question asked for it and was found to be correct. It was 41 m. Now, I think I may be calculating my moment of inertia wrong?

I = m1r1^2 + m3r3^2 + (1/12)ML^2
I = 447 938 063.8 kg*m^2

Am I supposed to include the moment of inertia of the tunnel (rod) in this scenario? Obviously I use the masses and their relative distances to the center of mass for the rockets to find their moments of inertia... but do I just add on the moment of inertia of the rod?

My answer was incorrect, so I am trying to figure out what is wrong about my process. I am assuming my moment of inertia is incorrect. The torque is 70 m × 59200 N = 4144000 N*m.

Please help :)
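For what it's worth, the setup in the thread can be checked numerically. The sketch below is not from the thread; it assumes the thrusts act perpendicular to the tunnel, so the two opposite forces form a couple with torque F·L about any point, and it applies the parallel-axis theorem to the tunnel, which is the step the post is asking about:

```python
# Check of the rocket problem, using the numbers from the post.
m1, m2, m_rod = 146500.0, 209500.0, 10500.0   # kg
L, F, t = 70.0, 59200.0, 33.0                 # m, N, s

# Center of mass, measured from rocket 1 (the rod's own center sits at L/2).
com = (m1 * 0.0 + m_rod * (L / 2) + m2 * L) / (m1 + m2 + m_rod)

# Moment of inertia about the center of mass: point masses for the rockets,
# plus the rod's (1/12)ML^2 shifted by the parallel-axis theorem, because
# the rod's center (35 m) is not at the system's center of mass (about 41 m).
I = (m1 * com**2
     + m2 * (L - com)**2
     + m_rod * L**2 / 12 + m_rod * (com - L / 2)**2)

# Opposite thrusts at the two ends form a couple: torque = F * L.
torque = F * L
alpha = torque / I    # rad/s^2
omega = alpha * t     # rad/s

print(round(com, 2), round(I), round(omega, 3))
```

With these inputs the center of mass lands near 41 m, as in the thread, and the answer to the poster's question is yes: the rod's moment of inertia must be included, as (1/12)ML² plus the parallel-axis term M·d², not (1/12)ML² alone.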
Non-Linear and Dynamic Programming
Results 1 - 10 of 29

1. ACTA NUMERICA, 2005 (Cited by 180, 30 self)
Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for solving this type of systems. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.

2. 1999 (Cited by 16, 3 self)
In contrast to the standard machine learning tasks of classification and metric regression we investigate the problem of predicting variables of ordinal scale, a setting referred to as ordinal regression. The task of ordinal regression arises frequently in the social sciences and in information retrieval where human preferences play a major role. Also many multi-class problems are really problems of ordinal regression due to an ordering of the classes. Although the problem is rather novel to the Machine Learning community it has been widely considered in statistics before. All the statistical methods rely on a probability model of a latent (unobserved) variable and on the condition of stochastic ordering. In this paper we develop a distribution independent formulation of the problem and give uniform bounds for our risk functional. The main difference to classification is the restriction that the mapping of objects to ranks must be transitive and asymmetric. Combining our theoretical framework with results from measurement theory we present an approach that is based on a mapping from objects to scalar utility values and thus guarantees transitivity and asymmetry. Applying the principle of Structural Risk Minimization as employed in Support Vector Machines we derive a new learning algorithm based on large margin rank boundaries for the task of ordinal regression. Our method is easily extended to nonlinear utility functions. We give experimental results for an Information Retrieval task of learning the order of documents with respect to an initial query. Moreover, we show that our algorithm outperforms more naive approaches to ordinal regression such as Support Vector Classification and Support Vector Regression in the case of more than two...

3. In Proceedings of the 1993 IEEE International Conference on Robotics and Automation, 1993 (Cited by 14, 2 self)
In industrial assembly, a registration mark can be placed on parts to aid a computer vision system in determining the position and orientation (pose) of parts. However, when sensor noise and limits on resolution introduce errors in the measured location of the registration mark, these errors can propagate into the measurement of part pose. In this paper we define the Registration Mark Problem: given an n-sided rigid planar polygonal part and a set of k poses for the part, locate a point on the surface of the part that maximizes the minimum distance between transformed points. A registration mark at this point will be maximally robust to sensor imperfections. We give an O(n log n + k^4 log k log k) time algorithm to solve this planar problem using a result from Schwartz and Sharir [22] and demonstrate the algorithm using a commercial vision system. Our results extend to classes of curved planar parts and polyhedral parts. 1 Introduction Determining the precise position and orient...

4. ACM Trans. on Information Systems, 1990 (Cited by 9, 5 self)
Text compression is of considerable theoretical and practical interest. It is, for example, becoming increasingly important for satisfying the requirements of fitting a large database onto a single CDROM. Many of the compression techniques discussed in the literature are model based. We here propose the notion of a formal grammar as a flexible model of text generation that encompasses most of the models offered before as well as, in principle, extending the possibility of compression to a much more general class of languages. Assuming a general model of text generation, a derivation is given of the well known Shannon entropy formula, making possible a theory of information based upon text representation rather than on communication. The ideas are shown to apply to a number of commonly used text models. Finally, we focus on a Markov model of text generation, suggest an information theoretic measure of similarity between two probability distributions, and develop a clustering algorith...

5. Journal of Artificial Intelligence Research (JAIR), 2005 (Cited by 7, 2 self)
In this paper we propose a crossover operator for evolutionary algorithms with real values that is based on the statistical theory of population distributions. The operator is based on the theoretical distribution of the values of the genes of the best individuals in the population. The proposed operator takes into account the localization and dispersion features of the best individuals of the population with the objective that these features would be inherited by the offspring. Our aim is the optimization of the balance between exploration and exploitation in the search process. In order to test the efficiency and robustness of this crossover, we have used a set of functions to be optimized with regard to different criteria, such as multimodality, separability, regularity and epistasis. With this set of functions we can extract conclusions in function of the problem at hand. We analyze the results using ANOVA and multiple comparison statistical tests. As an example of how our crossover can be used to solve artificial intelligence problems, we have applied the proposed model to the problem of obtaining the weight of each network in an ensemble of neural networks. The results obtained are above the performance of standard methods.

6. 2000 (Cited by 6, 4 self)
Networks, such as the electric grid, are operated by sets of agents that are heterogeneous, local and distributed. (By "heterogeneous" we mean that the agents can range from simple devices, like relays, to very intelligent entities, like committees of humans. By "local and distributed" we mean that each agent can sense only a few of the network's state variables and influence only a few of its control variables.) We are concerned with two issues: the quality and speed of decision-making by heterogeneous, local and distributed agents. For quality, our standard of comparison is an ideal, centralized agent, which senses the state of the entire network and makes globally optimal decisions. (Of course, such a centralized agent is impractical for large networks.)

7. (Cited by 5, 2 self)
This paper studies the following online replacement problem. There is a real function f(t), called the flow rate, defined over a finite time horizon [0, T]. It is known that m <= f(t) <= M for some reals 0 <= m < M. At time 0 an online player starts to pay money at the rate f(0). At each time 0 < t <= T the player may changeover and continue paying money at the rate f(t). The complication is that each such changeover incurs some fixed penalty. The player is called online as at each time t the player knows f only over the time interval [0, t]. The goal of the player is to minimize the total cost comprised of cumulative payment flow plus changeover costs. This formulation of the replacement problem has various interesting applications among which are: equipment replacement, supplier replacement, the menu cost problem and mortgage refinancing.

8. 2000 (Cited by 3, 2 self)
We consider techniques for solving general KKT systems. In particular, we address the situation of a singular (1,1) block, and focus on ways to eliminate the singularity, either by reducing the system size or by employing an augmented Lagrangian technique. The latter is a parameter-dependent approach. We provide some observations regarding the condition number, the spectrum, and a sensible choice of the parameter. The analysis demonstrates how the case of a singular (1,1) block is different from other cases. We also present a few general results regarding inversion of certain KKT matrices, the spectra of their associated Schur complements, and error estimates for regularized systems. Finally, we present a parameter-dependent preconditioning technique, and discuss its spectral...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=52917","timestamp":"2014-04-17T14:24:46Z","content_type":null,"content_length":"37532","record_id":"<urn:uuid:4c92e09f-1cd6-4d54-9af5-97c825b7ff82>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
Differential Equations: Change Your Euler Quiz
Think you’ve got your head wrapped around Differential Equations? Put your knowledge to the test. Good luck — the Stickman is counting on you!

Q1. The line below has slope -1.5. Determine the value of the missing number. [figure not shown in this copy]
- There is insufficient information to answer this question.

Q2. Determine the indicated value. [figure not shown in this copy]

Q3. If f(x) = x^2 + 3, use a tangent line to f at x = 1 to estimate f(0.75).

Q4. The picture below shows the tangent line to f at x = 2. [figure not shown in this copy] Use it to estimate f(2.1).

Q5. Let y = f(x) be a solution to the initial value problem [equation not shown in this copy]. Use a tangent line to approximate f(1.5).

Q6. Let y = f(x) be a solution to the initial value problem [equation not shown in this copy]. Use Euler's method with 2 steps to approximate f(3).

Q7. Let y = f(x) be a solution to the initial value problem [equation not shown in this copy]. Euler's method with more than one step is used to approximate f(2). Which of the following numbers is most likely to be the value found by the approximation?

Q8. Let f(x) be a differentiable function defined for all real numbers, and let a be a real number. Which of the following statements is true?
- If f is concave up and decreasing then the tangent line to f at x = a lies under the graph of f.
- If f is concave up and decreasing then the tangent line to f at x = a lies over the graph of f.
- If f is concave up and increasing then the tangent line to f at x = a lies over the graph of f.
- If f is concave down except at x = a and f '(a) = 0 then the tangent line to f at x = a lies under the graph of f.

Q9. Let y = f(x) be a solution to the initial value problem [equation not shown in this copy]. Euler's method produces
- an overestimate to the value f(1).
- an underestimate to the value f(1).
- the exact value f(1).
- a formula for the solution f(x).
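Several of the questions above come down to tangent-line approximation and Euler's method. A minimal sketch in Python: the function f(x) = x^2 + 3 is taken from the quiz itself, while the ODE y' = y is an assumed stand-in, since the quiz's actual initial value problems are not reproduced in this copy.

```python
def tangent_line_estimate(f, df, a, x):
    # Linear approximation near a: L(x) = f(a) + f'(a) * (x - a)
    return f(a) + df(a) * (x - a)

def euler(fprime, x0, y0, x_end, steps):
    # Euler's method for y' = fprime(x, y) with y(x0) = y0
    h = (x_end - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        y += h * fprime(x, y)
        x += h
    return y

# Quiz-style question: f(x) = x^2 + 3, tangent line at x = 1, estimate f(0.75).
est = tangent_line_estimate(lambda x: x * x + 3, lambda x: 2 * x, 1.0, 0.75)
# est = 4 + 2 * (0.75 - 1) = 3.5; the true value is 3.5625, so the tangent
# line underestimates here (f is concave up, so the tangent lies under f).

# Assumed demo ODE: y' = y, y(0) = 1, two Euler steps out to x = 1.
approx = euler(lambda x, y: y, 0.0, 1.0, 1.0, 2)
# h = 0.5 gives y: 1 -> 1.5 -> 2.25, an underestimate of e = 2.718...
```

The concavity questions in the quiz are exactly about the direction of these errors: for a concave-up solution, both the tangent line and Euler's method land below the true curve.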
{"url":"http://www.shmoop.com/differential-equations/quiz-3.html","timestamp":"2014-04-17T10:26:44Z","content_type":null,"content_length":"44590","record_id":"<urn:uuid:7da732c3-5bcd-47aa-8062-c1044c8fb9d2>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
February 6th 2013, 01:58 PM
The second span of the Bluewater Bridge in Sarnia, Ontario is supported by two parabolic arches. Each arch is set in concrete foundations that are on opposite sides of the St. Clair River. The feet of the arches are 281 m apart. The top of each arch rises 71 m above the river. Write a function to model the arch. Can someone please help on this?
February 6th 2013, 02:29 PM
Re: Functions!
The roots are 0 and 281, so the axis of symmetry of the parabola is x = 140.5. The vertex will be at (140.5, 71). Substitute the vertex into f(x) = a(x)(x - 281) and solve for a: a = -284/78961.
Therefore, the function is f(x) = (-284/78961)(x)(x - 281).
February 6th 2013, 02:34 PM
Re: Functions!
Thank you so much! :)
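As a quick numerical check of the reply above, a Python sketch using exact rationals (so no rounding obscures the result; note 281^2 = 78961, which is where the denominator comes from):

```python
from fractions import Fraction

# The model from the reply: f(x) = (-284/78961) * x * (x - 281)
a = Fraction(-284, 78961)

def f(x):
    return a * x * (x - 281)

# Feet of the arch at x = 0 and x = 281 (both roots), and the peak of
# exactly 71 m at the midpoint x = 281/2 = 140.5.
peak = f(Fraction(281, 2))
```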
{"url":"http://mathhelpforum.com/algebra/212686-functions-print.html","timestamp":"2014-04-16T10:47:55Z","content_type":null,"content_length":"4173","record_id":"<urn:uuid:efeefe93-1d6a-4a9c-a874-d923907288d1>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
414298141056 Quarto Draws Suffice!

In how many distinct ways can the integers 0 through 15 be arranged in a 4x4 array such that the bit-wise OR over each row, each column, and each diagonal is 15, and the bit-wise AND over each row, column, and diagonal is 0? For example, the following arrangement satisfies the requirement:

[the example 4x4 array is not reproduced in this copy]

This example can be trivially transformed into any of several other examples by means of rotation, reflection, and EXCLUSIVEly ORing each element with any single element. Assuming each of the results is distinct (?), this means that the above array represents a set of 128 arrays with the required property. But how many such sets are there? This led to Problem #5 on my Most Wanted list of unsolved problems, summarized as follows:

In how many distinct ways can the integers 0 through 15 be arranged in a 4x4 array such that the bitwise OR over each row, column, and diagonal is 15, and the bitwise AND over each row, column, and diagonal is 0?

This question was first posted on the Internet in early 1993, but was evidently just outside the range of a brute-force search on the typical personal computer at that time. In 1996, someone named Ian (ianm@tpower.com) sent the following comments on a restricted version of this problem. Ian wrote:

"Partial (very) solution. Using my trusty PC, if you look for solutions that only meet the row constraint... I come up with 778339*24^5 solutions. How I came up with that number: there are 30816 = 24*1284 valid combinations of 4 of 0..15 that meet the AND/OR constraint. Here is the program I used... (Pascal). I am trying to think of a reasonable way to tackle the entire problem."

This is a nice way of approaching the problem. In summary, Ian's method is to examine each of the 1820 4-element subsets of {0,1,2,...,15} to find the 1284 subsets with cumulative AND equal to 0 and cumulative OR equal to 15. Each of these subsets represents a valid row.
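Ian's first step is cheap to reproduce today; a short Python sketch, brute-forcing all C(16,4) = 1820 subsets:

```python
from itertools import combinations
from functools import reduce
from operator import and_, or_

# A "valid row" is a 4-element subset of {0,...,15} whose cumulative
# bitwise AND is 0 and cumulative bitwise OR is 15.
valid_rows = [s for s in combinations(range(16), 4)
              if reduce(and_, s) == 0 and reduce(or_, s) == 15]
# len(valid_rows) == 1284, matching Ian's count.
```

The count 1284 also follows by inclusion-exclusion: of the 1820 subsets, subtract the 8 x 70 = 560 subsets in which some bit is all-0 or all-1 across the four numbers, then add back the 24 pairwise intersections (each of size 1), giving 1820 - 560 + 24 = 1284.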
Then he finds all combinations of four subsets that are mutually exclusive (because no number can appear in more than one row). We find 778339 such combinations. Of course, within each row any of the 4! arrangements is equally valid, so there are (4!)^4 distinct ways of arranging the four numbers in the four rows. Furthermore, any of the 4! permutations of the rows themselves is equally valid, so we arrive at the result

778339*(24)^5 = 6,197,620,801,536

Ian's method of checking the combinations of valid rows is to express each of the 1284 valid four-number subsets as a 16-bit binary number containing four 1's (corresponding to the four numbers in the subset). He then applied a four-nested loop on the 1284 numbers to construct the four-combinations and check that the pair-wise ANDs are zero. Of course if we had to check all 10^11 sets of four out of 1284 it would take quite a while. However, it turns out the pairwise exclusivity conditions effectively truncate the loops with relatively little wasted running. (On my old 33 MHz 486 PC it takes about 13 minutes.) It's worth noting that if you arrange the 1284 valid combinations in lexicographical order, i.e., with the order given by the loops

for a = 0 to 15
  for b = a+1 to 15
    for c = b+1 to 15
      for d = c+1 to 15

it turns out that all the valid combinations of four have indices in the ranges shown below

             min    max
1st index      1    321
2nd index    322    980
3rd index    594   1240
4th index    808   1284

Also, for any given 1st index, the number of valid combinations is one of just 13 values, as summarized in the table below:
Another interesting aspect of Ian's result is the comparison with a "probabilistic" estimate for the solution of the complete quarto problem. I once wrote a little program to generate random permutations of the numbers 0 through 15 and check to see if they were quarto squares. Based on this analysis it appears that about 1 out of every 50.52 squares is a quarto square, so the total number of distinct quarto squares is about (16!)/50.52 = 4.14E+11. This is roughly 1/15 of 778339*(24)^5 which, as Ian has shown, is the number of squares that satisfy the quarto conditions in just one direction (e.g., for the rows). Anyway, toward the end of 1996 there was in sharp increase in activity on the unrestricted question, as several people sensed that a direct search was just feasible on a high-speed Pentium computer. Finally, Steve Zook wrote a counting program and determined that the number in question is Steve's program used a simple depth-first search algorithm to count by complete enumeration the quarto draws having a zero in the upper left cell, knowing that each of those corresponds to 16 distinct quarto draws by performing an exclusive-OR of each cell's contents with a constant from 0 to 15. To optimize pruning he assigned the cell contents in the order shown below This allows excluding non-draws as soon as possible. The result of his count was 25,893,633,816. Multiplying this by 16 gives the number 414298141056. I also recieved other purported solutions to this problem, but most of them were clearly wrong. To evaluate the few that were in the right ballpark (like Steve's) I tried to find a relatively efficient way of counting these "quarto draws" so that I could verify which (if any) of them was correct. The method I used was based on the observation that the set P of all permutations can be partitioned into 16 equally-sized subsets, each of which consists of the elements of P that are equivalent up to XORing by a constant. 
Furthermore, each of those 16 subsets can be partitioned into 24 equally-sized subsets, each consisting of the elements that are equivalent up to a permutation of the binary bits. Thus, we have a 384-to-1 mapping of the elements of P onto the elements of the set B, where B is the set of permutations with "0" in the upper left corner and with the numbers "1", "2", "4", and "8" appearing in ascending order. On this basis we need to examine only the 16!/384 = 54486432000 elements of B. If q denotes the number of quarto draws in B then Q = 384*q is the total number of quarto draws in P. We can reduce the problem further by considering the 1365 possible placements of the numbers 1,2,4,8 in ascending order (with 0 in the upper left). Notice that if we "flip" the square about the diagonal through 0, the number of quarto draws for the new arrangement will be identical to the original. Thus, if flipping the square yields a distinct arrangement (up to permutations of the binary bits), we need only count the solutions for one of the arrangements, and double it. In addition, notice that for any placement of 1,2,4,8 if we exchange the middle two rows and then exchange the middle two columns, the number of quarto draws for the new arrangement will be identical to the original. This is the only one of the 24 possible permutations of the rows/columns that leaves fixed the upper left corner and all the contents of the rows, columns, and diagonals. Therefore, my overall approach is to scan through the 1365 placements of 1,2,4,8 in ascending order, with 0 in the upper left, and look at all four of the equivalent arrangements given by flipping about the 0 diagonal and exchanging the two middle rows/columns. In most cases this gives four distinct placements, but in some cases it gives only two (because of symmetry), and in a few cases it gives only one. If any of these four equivalent arrangements has already been examined, I discard it and go on to the next. 
If none of them have been seen before, I check for quarto draws and multiply the result by the number of distinct but equivalent placements (usually four). Of course, some of the 1365 placements can be seen to contain zero quarto draws because the numbers 0,1,2,4,8 already constitute a row, column, or diagonal that violates the quarto draw condition. It turns out that 36 of the 1365 placements can be immediately discounted on that basis. (These 36 are composed of four sets of two equivalent placements, and seven sets of four equivalent placements.) Of the remaining placements we find that 312 give four equivalent but distinct placements, 38 give two, and 5 give only one. So, the totals are

(312)*4 = 1248
 (38)*2 =   76
  (5)*1 =    5
  nulls =   36 = (4)*2 + (7)*4
  Total   1365

This means we only need to check 355 placements, each of which represents 11! = 39916800 cases, so overall we need to check exactly 14170464000 (about 14 billion) individual 4x4 squares. With a straight-forward implementation on a 200 MHz Pentium computer this takes about 9 hours. The result is that the set B contains exactly 1078901409 quarto draws, which means the overall set P of all possible permutations contains 414298141056 quarto draws. This means that 1 out of 50.5017711 permutations is a quarto draw, in good agreement with Monte Carlo simulations that count the number of quarto draws in a large number of randomly constructed squares.

By the way, the five placements that are invariant under the flipping and middle row/column exchange operations are illustrated below

0 * * -   0 - - *   0 - - *   0 - - -   0 - - -
* - - -   - * - -   - - * -   - * * -   - - - *
* - - -   - - * -   - * - -   - * * -   - - - *
- - - -   * - - -   * - - -   - - - -   - * * -

The asterisks denote the locations of the numbers 1,2,4,8 (in ascending order) and the dashes represent the remaining 11 numbers.
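The bookkeeping in this symmetry reduction is easy to verify; a quick arithmetic check in Python, with every constant taken from the text above:

```python
from math import comb, factorial

assert comb(15, 4) == 1365                    # placements of 1,2,4,8 around the 0
assert 312 * 4 + 38 * 2 + 5 * 1 + 36 == 1365  # the orbit tally above
assert 355 * factorial(11) == 14170464000     # squares actually examined
assert factorial(16) // 384 == 54486432000    # size of the reduced set B
assert 384 * 1078901409 == 414298141056       # draws in B, times the 384-to-1 map
assert 16 * 25893633816 == 414298141056       # Steve Zook's count, times 16
```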
The numbers of quarto draws contained in these five placements are, respectively,

201599    prime
347496    (2^3)(3)(14479)
2344528   (2^4)(23^2)(277)
935024    (2^4)(58439)
2686796   (2^2)(7)(95957)

These five are special cases because of their symmetry. Most of the placements give four distinct equivalent placements under the flipping and row/column exchange operations. For example, the four placements shown below are equivalent

0 * * -   0 * - -   0 * * -   0 - * -
* * - -   * * - -   - - - -   * - - -
- - - -   * - - -   * - * -   * - * -
- - - -   - - - -   - - - -   - - - -

Each of these four placements contains 169560 = (2^3)(3^3)(5)(157) quarto draws.

The question about Quarto draws was based on the understanding that a "winning" configuration is one containing four pegs all with a common property in any row, column, or main diagonal. This is how I understood the rules of the game Quarto, having heard a brief description of the game in 1992. Alternatively, we could ask for the number of draws if ALL of the diagonals are considered, i.e., interpreting the square as a torus that wraps around the edges, or we could consider the case involving ONLY the rows and columns, with no diagonals at all. Out of curiosity I recently did a web search on the game Quarto to find out the precise definition of the rules. The only definition I can find is one that says a "win" involves any four cells IN A ROW OR A SQUARE. I suppose this means, for example, the four corner cells would count, as would the four middle cells, and so on. There are nine + four + one = 14 sets of four cells arranged in a square pattern orthogonal with the sides of the board. If we treat the array as a torus, the number of distinct orthogonal square arrangements is 20. As for the phrase "IN A ROW", I assume this also includes the (main?) diagonals along with the rows and columns. The following is a summary of some computed results on the number of distinct Quarto draws based on various interpretations of the rules.
The terms "maindiags" and "alldiags" refer to the sets of four diagonal cells based on a single bounded array or a torus, respectively. Likewise the terms "mainsqrs" and "allsqrs" refer to sets of four cells in a square pattern based on a single bounded array or a torus, respectively. OR=15, AND=0 number of number/384 --------------------------- ------------- ------------------ rows+cols 1329371360640 (3^2)(5)(76931213) rows+cols+maindiags 414298141056 (3)(359633803) rows+cols+alldiags 85842854016 (17)(197)(66751) rows+cols+mainsqrs 35347120896 (2)(19)(59)(41057) rows+cols+mainsqrs+maindiags 18596841216 (2)(173)(139969) rows+cols+mainsqrs+alldiags 7376330496 (2)(29)(227)(1459) rows+cols+allsqrs 7031789952 (11)(19)(41)(2137) rows+cols+allsqrs+maindiags 4031409024 (3)(499)(7013) rows+cols+allsqrs+alldiags 2522431872 (3)(383)(5717) With the exception of the rows+cols+maindiags, these results have not been checked. Also, I've not considered square arrangements that are oblique to the sides of the board. Return to MathPages Main Menu
{"url":"http://mathpages.com/home/kmath352.htm","timestamp":"2014-04-16T10:10:52Z","content_type":null,"content_length":"15452","record_id":"<urn:uuid:5a51ee33-5820-45b1-934d-891708bb3b2a>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
functions differentiate
August 23rd 2007, 05:34 PM
functions differentiate
im terrible @ math. please help me with this! given 2 functions: f(x) = 2x, g(x) = -2x²+5x, im supposed to determine the following:
- g(f)
- the inverse of g
- g(g inverse)
- the difference quotient for f
- the average rate of change for f
yeah... a lot of Q's. someone pls explain the process to me, cuz i fail to understand it...
August 23rd 2007, 07:42 PM
First part: $[g \circ f](x)=g(f(x))=-2(f(x))^2+5f(x)=-2(2x)^2+5(2x)=-8x^2+10x$
Second part: There is no function $g^{-1}$, as for most $y$ there are two $x$s such that $g(x)=y$.
August 23rd 2007, 07:47 PM
The difference quotient for $f$ is:
$DQ(f,h)=\frac{f(x+h)-f(x)}{h}=\frac{2(x+h)-2x}{h}=\frac{2h}{h}=2$
Thus we see that the average rate of change of $f$ over any interval $(x, x+h)$ is $2$.
August 24th 2007, 04:26 AM
CaptainBlack is absolutely correct about the inverse of g. It has no inverse. However we may informally get an inverse (that is to say, the process is correct even if the application is not) by:
$g(x) = -2x^2 + 5x$
Let $y = -2x^2 + 5x$
Now switch the roles of x and y: $x = -2y^2 + 5y$
Now solve for y:
$2y^2 - 5y + x = 0$
$y = \frac{5 \pm \sqrt{25 - 8x}}{4}$ <-- via the quadratic formula
$g^{-1}(x) = \frac{5 \pm \sqrt{25 - 8x}}{4}$
The problem, as CaptainBlack mentioned, is that for the graph y = g(x) there are two y values for every x (except at the vertex point), so we need to be very careful about defining a domain on which an inverse exists and just what that inverse is. (It will either be the "+" or the "-" of the inverse formula given above.)
August 24th 2007, 11:38 AM
thanks for your help !!
August 24th 2007, 01:28 PM
[quoting the post above on the informal inverse]
There is a way around the problem which is to extend $g$ from $\bold{R}$ to $\mathcal{P}(\bold{R})$, when: $g(g^{-1})=I$, the identity function $I(S)=S$ for all subsets of $\bold{R}$, but I doubt that this is what is wanted.
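The point about choosing a branch can be made concrete. A Python sketch (the branch function names are illustrative, not from the thread): the vertex of g is at x = 5/4, and each branch of the quadratic-formula inverse undoes g on one side of it.

```python
import math

def g(x):
    return -2 * x * x + 5 * x

def g_inv_upper(y):
    """Inverse of g restricted to the right branch x >= 5/4 (the '+' root)."""
    return (5 + math.sqrt(25 - 8 * y)) / 4

def g_inv_lower(y):
    """Inverse of g restricted to the left branch x <= 5/4 (the '-' root)."""
    return (5 - math.sqrt(25 - 8 * y)) / 4

# Away from the vertex, g is 2-to-1: both branches are genuine preimages.
# For y = 2: g_inv_upper(2) = 2.0 and g_inv_lower(2) = 0.5, and
# g(2.0) == g(0.5) == 2, so g(g_inv(y)) == y on either branch.
```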
{"url":"http://mathhelpforum.com/calculus/17992-functions-differenciate-print.html","timestamp":"2014-04-17T11:52:50Z","content_type":null,"content_length":"15832","record_id":"<urn:uuid:b33091c3-708d-4e8a-a465-e8d670dd12d5>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
Learning Standards for Mathematics, Science, and Technology at Three Levels

Standard 1: Students will use mathematical analysis, scientific inquiry, and engineering design, as appropriate, to pose questions, seek answers, and develop solutions.
Standard 2: Students will access, generate, process, and transfer information using appropriate technologies.
Standard 3: Students will understand mathematics and become mathematically confident by communicating and reasoning mathematically, by applying mathematics in real-world settings, and by solving problems through the integrated study of number systems, geometry, algebra, data analysis, probability, and trigonometry.
Standard 4: Students will understand and apply scientific concepts, principles, and theories pertaining to the physical setting and living environment and recognize the historical development of ideas in science.
Standard 5: Students will apply technological knowledge and skills to design, construct, use, and evaluate products and systems to satisfy human and environmental needs.
Standard 6: Students will understand the relationships and common themes that connect mathematics, science, and technology and apply the themes to these and other areas of learning.
Standard 7: Students will apply the knowledge and thinking skills of mathematics, science, and technology to address real-life problems and make informed decisions.

Standard 1-Analysis, Inquiry, and Design
Students will use mathematical analysis, scientific inquiry, and engineering design, as appropriate, to pose questions, seek answers, and develop solutions.

Mathematical Analysis
1. Abstraction and symbolic representation are used to communicate mathematically.
• use special mathematical notation and symbolism to communicate in mathematics and to compare and describe quantities, express relationships, and relate mathematics to their immediate environments.
This is evident, for example, when students:
□ describe their ages as an inequality such as 7 < x < 10.
2. Deductive and inductive reasoning are used to reach mathematical conclusions.
• use simple logical reasoning to develop conclusions, recognizing that patterns and relationships present in the environment assist them in reaching these conclusions. 3. Critical thinking skills are used in the solution of mathematical problems. • explore and solve problems generated from school, home, and community situations, using concrete objects or manipulative materials when possible. Scientific Inquiry 1. The central purpose of scientific inquiry is to develop explanations of natural phenomena in a continuing, creative process. • ask "why" questions in attempts to seek greater understanding concerning objects and events they have observed and heard about. • question the explanations they hear from others and read about, seeking clarification and comparing them with their own observations and understandings. • develop relationships among observations to construct descriptions of objects and events and to form their own tentative explanations of what they have observed. This is evident, for example, when students: □ observe a variety of objects that either sink or float when placed in a container of water. Working in groups, they propose an explanation of why objects sink or float. After sharing and discussing their proposed explanation, they refine it and submit it for assessment. The explanation is rated on clarity and plausibility. 2. Beyond the use of reasoning and consensus, scientific inquiry involves the testing of proposed explanations involving the use of conventional techniques and procedures and usually requiring considerable ingenuity. • develop written plans for exploring phenomena or for evaluating explanations guided by questions or proposed explanations they have helped formulate. • share their research plans with others and revise them based on their suggestions. 
• carry out their plans for exploring phenomena through direct observation and through the use of simple instruments that permit measurements of quantities (e.g., length, mass, volume, temperature, and time). This is evident, for example, when students: □ are asked to develop a way of testing their explanation of why objects sink or float when placed in a container of water. They tell what procedures and materials they will use and indicate what results will support their explanation. Their plan is critiqued by others, they revise it, and submit it for assessment. The plan is rated on clarity, soundness in addressing the issue, and feasibility. After the teacher suggests modifications, the plan is carried out. 3. The observations made while testing proposed explanations, when analyzed using conventional and invented methods, provide new insights into phenomena. • organize observations and measurements of objects and events through classification and the preparation of simple charts and tables. • interpret organized observations and measurements, recognizing simple patterns, sequences, and relationships. • share their findings with others and actively seek their interpretations and ideas. • adjust their explanations and understandings of objects and events based on their findings and new ideas. This is evident, for example, when students: □ prepare tables or other representations of their observations and look for evidence which supports or refutes their explanation of why objects sink or float when placed in a container of water. After sharing and discussing their results with other groups, they prepare a brief research report that includes methods, findings, and conclusions. The report is rated on its clarity, care in carrying out the plan, and presentation of evidence supporting the conclusions. Engineering Design 1. 
Engineering design is an iterative process involving modeling and optimization (finding the best solution within given constraints), which is used to develop technological solutions to problems. Students engage in the following steps in a design process: • describe objects, imaginary or real, that might be modeled or made differently and suggest ways in which the objects can be changed, fixed, or improved. • investigate prior solutions and ideas from books, magazines, family, friends, neighbors, and community members. • generate ideas for possible solutions, individually and through group activity; apply age-appropriate mathematics and science skills; evaluate the ideas and determine the best solution; and explain reasons for the choices. • plan and build, under supervision, a model of the solution using familiar materials, processes, and hand tools. • discuss how best to test the solution; perform the test under teacher supervision; record and portray results through numerical and graphic means; discuss orally why things worked or didn’t work; and summarize results in writing, suggesting ways to make the solution better. This is evident, for example, when students: □ read a story called Humpty’s Big Day wherein the readers visit the place where Humpty Dumpty had his accident, and are asked to design and model a way to get to the top of the wall and down again. □ generate, draw, and model ideas for a space station that includes a pleasant living and working environment. □ design and model footwear that they could use to walk on a cold, sandy surface. Standard 1-Analysis, Inquiry, and Design Students will use mathematical analysis, scientific inquiry, and engineering design, as appropriate, to pose questions, seek answers, and develop solutions. Mathematical Analysis 1. Abstraction and symbolic representation are used to communicate mathematically.
• extend mathematical notation and symbolism to include variables and algebraic expressions in order to describe and compare quantities and express mathematical relationships. 2. Deductive and inductive reasoning are used to reach mathematical conclusions. • use inductive reasoning to construct, evaluate, and validate conjectures and arguments, recognizing that patterns and relationships can assist in explaining and extending mathematical phenomena. This is evident, for example, when students: □ predict the next triangular number by examining the pattern 1, 3, 6, 10, . . . . 3. Critical thinking skills are used in the solution of mathematical problems. • apply mathematical knowledge to solve real-world problems and problems that arise from the investigation of mathematical ideas, using representations such as pictures, charts, and tables. Scientific Inquiry 1. The central purpose of scientific inquiry is to develop explanations of natural phenomena in a continuing, creative process. • formulate questions independently with the aid of references appropriate for guiding the search for explanations of everyday observations. • construct explanations independently for natural phenomena, especially by proposing preliminary visual models of phenomena. • represent, present, and defend their proposed explanations of everyday observations so that they can be understood and assessed by others. • seek to clarify, to assess critically, and to reconcile with their own thinking the ideas presented by others, including peers, teachers, authors, and scientists. This is evident, for example, when students: □ After being shown the disparity between the amount of solid waste which is recycled and which could be recycled, students working in small groups are asked to explain why this disparity exists. They develop a set of possible explanations and select one for intensive study. After their explanation is critiqued by other groups, it is refined and submitted for assessment.
The explanation is rated on clarity, plausibility, and appropriateness for intensive study using research methods. 2. Beyond the use of reasoning and consensus, scientific inquiry involves the testing of proposed explanations involving the use of conventional techniques and procedures and usually requiring considerable ingenuity. • use conventional techniques and those of their own design to make further observations and refine their explanations, guided by a need for more information. • develop, present, and defend formal research proposals for testing their own explanations of common phenomena, including ways of obtaining needed observations and ways of conducting simple controlled experiments. • carry out their research proposals, recording observations and measurements (e.g., lab notes, audio tape, computer disk, video tape) to help assess the explanation. This is evident, for example, when students: □ develop a research plan for studying the accuracy of their explanation of the disparity between the amount of solid waste that is recycled and that could be recycled. After their tentative plan is critiqued, they refine it and submit it for assessment. The research proposal is rated on clarity, feasibility, and soundness as a method of studying the explanation’s accuracy. They carry out the plan, with teacher-suggested modifications. This work is rated by the teacher while it is in progress. 3. The observations made while testing proposed explanations, when analyzed using conventional and invented methods, provide new insights into phenomena. • design charts, tables, graphs and other representations of observations in conventional and creative ways to help them address their research question or hypothesis. • interpret the organized data to answer the research question or hypothesis and to gain insight into the problem. • modify their personal understanding of phenomena based on evaluation of their hypothesis.
This is evident, for example, when students: □ carry out their plan making appropriate observations and measurements. They analyze the data, reach conclusions regarding their explanation of the disparity between the amount of solid waste which is recycled and which could be recycled, and prepare a tentative report which is critiqued by other groups, refined, and submitted for assessment. The report is rated on clarity, quality of presentation of data and analyses, and soundness of conclusions. Engineering Design 1. Engineering design is an iterative process involving modeling and optimization (finding the best solution within given constraints), which is used to develop technological solutions to problems. Students engage in the following steps in a design process: • identify needs and opportunities for technical solutions from an investigation of situations of general or social interest. • locate and utilize a range of printed, electronic, and human information resources to obtain ideas. • consider constraints and generate several ideas for alternative solutions, using group and individual ideation techniques (group discussion, brainstorming, forced connections, role play); defer judgment until a number of ideas have been generated; evaluate (critique) ideas; and explain why the chosen solution is optimal. • develop plans, including drawings with measurements and details of construction, and construct a model of the solution, exhibiting a degree of craftsmanship. • in a group setting, test their solution against design specifications, present and evaluate results, describe how the solution might have been modified for different or better results, and discuss tradeoffs that might have to be made. This is evident, for example, when students: □ reflect on the need for alternative growing systems in desert environments and design and model a hydroponic greenhouse for growing vegetables without soil.
□ brainstorm and evaluate alternative ideas for an adaptive device that will make life easier for a person with a disability, such as a device to pick up objects from the floor. □ design a model vehicle (with a safety belt restraint system and crush zones to absorb impact) to carry a raw egg as a passenger down a ramp and into a barrier without damage to the egg. □ assess the performance of a solution against various design criteria, enter the scores on a spreadsheet, and see how varying the solution might have affected total score. Standard 1-Analysis, Inquiry, and Design Students will use mathematical analysis, scientific inquiry, and engineering design, as appropriate, to pose questions, seek answers, and develop solutions. Mathematical Analysis 1. Abstraction and symbolic representation are used to communicate mathematically. • use algebraic and geometric representations to describe and compare data. 2. Deductive and inductive reasoning are used to reach mathematical conclusions. • use deductive reasoning to construct and evaluate conjectures and arguments, recognizing that patterns and relationships in mathematics assist them in arriving at these conjectures and arguments. 3. Critical thinking skills are used in the solution of mathematical problems. • apply algebraic and geometric concepts and skills to the solution of problems. Scientific Inquiry 1. The central purpose of scientific inquiry is to develop explanations of natural phenomena in a continuing, creative process. • elaborate on basic scientific and personal explanations of natural phenomena, and develop extended visual models and mathematical formulations to represent their thinking. • hone ideas through reasoning, library research, and discussion with others, including experts. • work toward reconciling competing explanations; clarifying points of agreement and disagreement. 
• coordinate explanations at different levels of scale, points of focus, and degrees of complexity and specificity and recognize the need for such alternative representations of the natural world. This is evident, for example, when students: □ in small groups, are asked to explain why a cactus plant requires much less water to survive than many other plants. They are asked to develop, through research, a set of explanations for the differences and to select at least one for study. After the proposed explanation is critiqued by others, they refine it by formulating a hypothesis which is rated on clarity, plausibility, and appropriateness for study. 2. Beyond the use of reasoning and consensus, scientific inquiry involves the testing of proposed explanations involving the use of conventional techniques and procedures and usually requiring considerable ingenuity. • devise ways of making observations to test proposed explanations. • refine their research ideas through library investigations, including electronic information retrieval and reviews of the literature, and through peer feedback obtained from review and discussion. • develop and present proposals including formal hypotheses to test their explanations, i.e., they predict what should be observed under specified conditions if the explanation is true. • carry out their research plan for testing explanations, including selecting and developing techniques, acquiring and building apparatus, and recording observations as necessary. This is evident, for example, when students: □ develop, through research, a proposal to test their hypothesis of why a cactus plant requires much less water to survive than many other plants. After their proposal is critiqued, it is refined and submitted for assessment by a panel of students. The proposal is rated on clarity, appropriateness, and feasibility. Upon approval, students complete the research. Progress is rated holistically by the teacher. 3.
The observations made while testing proposed explanations, when analyzed using conventional and invented methods, provide new insights into phenomena. • use various means of representing and organizing observations (e.g., diagrams, tables, charts, graphs, equations, matrices) and insightfully interpret the organized data. • apply statistical analysis techniques when appropriate to test if chance alone explains the result. • assess correspondence between the predicted result contained in the hypothesis and the actual result and reach a conclusion as to whether or not the explanation on which the prediction was based is supported. • based on the results of the test and through public discussion, revise the explanation and contemplate additional research. • develop a written report for public scrutiny that describes their proposed explanation, including a literature review, the research they carried out, its result, and suggestions for further research. This is evident, for example, when students: □ carry out a research plan, including keeping a lab book, to test their hypothesis of why a cactus plant requires much less water to survive than many other plants. After completion, a paper is presented describing the research. Based on the class critique, the paper is rewritten and submitted with the lab book for separate assessment or as part of a portfolio of their science work. It is rated for clarity, thoroughness, soundness of conclusions, and quality of integration with existing literature. Engineering Design 1. Engineering design is an iterative process involving modeling and optimization (finding the best solution within given constraints), which is used to develop technological solutions to problems. Students engage in the following steps in a design process: • initiate and carry out a thorough investigation of an unfamiliar situation and identify needs and opportunities for technological invention or innovation.
• identify, locate, and use a wide range of information resources, and document through notes and sketches how findings relate to the problem. • generate creative solutions, break ideas into significant functional elements, and explore possible refinements; predict possible outcomes using mathematical and functional modeling techniques; choose the optimal solution to the problem, clearly documenting ideas against design criteria and constraints; and explain how human understanding, economics, ergonomics, and environmental considerations have influenced the solution. • develop work schedules and working plans which include optimal use and cost of materials, processes, time, and expertise; construct a model of the solution, incorporating developmental modifications while working to a high degree of quality (craftsmanship). • devise a test of the solution according to the design criteria and perform the test; record, portray, and logically evaluate performance test results through quantitative, graphic, and verbal means. Use a variety of creative verbal and graphic techniques effectively and persuasively to present conclusions, predict impacts and new problems, and suggest and pursue modifications. This is evident, for example, when students: □ search the Internet for World Wide Web sites dealing with renewable energy and sustainable living and research the development and design of an energy-efficient home. □ develop plans, diagrams, and working drawings for the construction of a computer-controlled marble sorting system that simulates how parts on an assembly line are sorted by color. □ design and model a portable emergency shelter that could be heated by a person’s body to a life-sustaining temperature when the outside temperature is 20° F. Standard 2-Information Systems Students will access, generate, process, and transfer information using appropriate technologies. Information Systems 1.
Information technology is used to retrieve, process, and communicate information and as a tool to enhance learning. • use a variety of equipment and software packages to enter, process, display, and communicate information in different forms using text, tables, pictures, and sound. • telecommunicate a message to a distant location with teacher help. • access needed information from printed media, electronic data bases, and community resources. This is evident, for example, when students: □ use the newspaper or magazine index in a library to find information on a particular topic. □ invite local experts to the school to share their expertise. 2. Knowledge of the impacts and limitations of information systems is essential to its effective and ethical use. • describe the uses of information systems in homes, schools, and businesses. • understand that computers are used to store personal information. • demonstrate ability to evaluate information. This is evident, for example, when students: □ look for differences among species of bugs collected on the school grounds, and classify them according to preferred habitat. 3. Information technology can have positive and negative impacts on society, depending upon how it is used. • describe the uses of information systems in homes and schools. • demonstrate ability to evaluate information critically. Standard 2-Information Systems Students will access, generate, process, and transfer information using appropriate technologies. Information Systems 1. Information technology is used to retrieve, process, and communicate information and as a tool to enhance learning. • use a range of equipment and software to integrate several forms of information in order to create good quality audio, video, graphic, and text-based presentations. • use spreadsheets and data-base software to collect, process, display, and analyze information. 
• access needed information from electronic data bases and on-line telecommunication networks. • systematically obtain accurate and relevant information pertaining to a particular topic from a range of sources, including local and national media, libraries, museums, governmental agencies, industries, and individuals. • collect data from probes to measure events and phenomena. • use simple modeling programs to make predictions. This is evident, for example, when students: □ compose letters on a word processor and send them to representatives of industry, governmental agencies, museums, or laboratories seeking information pertaining to a student project. □ acquire data from weather stations. □ use a software package, such as Science Tool Kit, to monitor the acceleration of a model car traveling down a given distance on a ramp. □ use computer software to model how plants grow under different conditions. 2. Knowledge of the impacts and limitations of information systems is essential to its effective and ethical use. • understand the need to question the accuracy of information displayed on a computer because the results produced by a computer may be affected by incorrect data entry. • identify advantages and limitations of data-handling programs and graphics programs. • understand why electronically stored personal information has greater potential for misuse than records kept in conventional form. 3. Information technology can have positive and negative impacts on society, depending upon how it is used. • use graphical, statistical, and presentation software to present projects to fellow classmates. • describe applications of information technology in mathematics, science, and other technologies that address needs and solve problems in the community. • explain the impact of the use and abuse of electronically generated information on individuals and families.
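The intermediate-level indicators above call for using spreadsheets and data-handling software to collect, process, and analyze measurements such as probe data. The short sketch below is illustrative only (it is not part of the standard): it summarizes a hypothetical set of temperature-probe readings with plain Python, the way a spreadsheet's COUNT, AVERAGE, MIN, and MAX functions would.

```python
# Illustrative sketch, not part of the standard: summarizing probe-style
# measurements in plain Python. The readings below are hypothetical.

def summarize(readings):
    """Return count, mean, minimum, and maximum of a list of numbers."""
    count = len(readings)
    mean = sum(readings) / count
    return {"count": count, "mean": mean,
            "min": min(readings), "max": max(readings)}

# Hypothetical temperature samples from a classroom probe, in degrees C.
temperatures = [21.0, 22.5, 23.0, 21.5, 22.0]
print(summarize(temperatures))
# {'count': 5, 'mean': 22.0, 'min': 21.0, 'max': 23.0}
```

Students could compare the printed summary against the same statistics computed by a spreadsheet on identical data, reinforcing the idea that different tools implement the same underlying operations.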
Standard 2-Information Systems Students will access, generate, process, and transfer information using appropriate technologies. Information Systems 1. Information technology is used to retrieve, process, and communicate information and as a tool to enhance learning. • understand and use the more advanced features of word processing, spreadsheets, and data-base software. • prepare multimedia presentations demonstrating a clear sense of audience and purpose. • access, select, collate, and analyze information obtained from a wide range of sources such as research data bases, foundations, organizations, national libraries, and electronic communication networks, including the Internet. • utilize electronic networks to share information. • model solutions to a range of problems in mathematics, science, and technology using computer simulation software. This is evident, for example, when students: □ collect and amend quantitative and qualitative information for a particular purpose and enter it into a data-handling package for processing and analysis. □ visit businesses, laboratories, environmental areas, and universities to obtain on-site information. □ receive news reports from abroad, and work in groups to produce newspapers reflecting the perspectives of different countries. □ join a list serve and send electronic mail to other persons sharing mutual concerns and interests. □ use computer software to simulate and graph the motion of an object. □ study a system in a dangerous setting (e.g., a nuclear power plant). 2. Knowledge of the impacts and limitations of information systems is essential to its effective and ethical use. • explain the impact of the use and abuse of electronically generated information on individuals and families.
• evaluate software packages relative to their suitability to a particular application and their ease of use. • discuss the ethical and social issues raised by the use and abuse of information systems. This is evident, for example, when students: □ discuss how unauthorized people might gain access to information about their interests and way of life. 3. Information technology can have positive and negative impacts on society, depending upon how it is used. • work with a virtual community to conduct a project or solve a problem using the network. • discuss how applications of information technology can address some major global problems and issues. • discuss the environmental, ethical, moral, and social issues raised by the use and abuse of information technology. Standard 3-Mathematics Students will understand mathematics and become mathematically confident by communicating and reasoning mathematically, by applying mathematics in real-world settings, and by solving problems through the integrated study of number systems, geometry, algebra, data analysis, probability, and trigonometry. Mathematical Reasoning 1. Students use mathematical reasoning to analyze mathematical situations, make conjectures, gather evidence, and construct an argument. • use models, facts, and relationships to draw conclusions about mathematics and explain their thinking. • use patterns and relationships to analyze mathematical situations. • justify their answers and solution processes. • use logical reasoning to reach simple conclusions. This is evident, for example, when students: □ build geometric figures out of straws. □ find patterns in sequences of numbers, such as the triangular numbers 1, 3, 6, 10, . . . . □ explore number relationships with a calculator (e.g., 12 + 6 = 18, 11 + 7 = 18, etc.) and draw conclusions. Number and Numeration 2.
Students use number sense and numeration to develop an understanding of the multiple uses of numbers in the real world, the use of numbers to communicate mathematically, and the use of numbers in the development of mathematical ideas. • use whole numbers and fractions to identify locations, quantify groups of objects, and measure distances. • use concrete materials to model numbers and number relationships for whole numbers and common fractions, including decimal fractions. • relate counting to grouping and to place-value. • recognize the order of whole numbers and commonly used fractions and decimals. • demonstrate the concept of percent through problems related to actual situations. This is evident, for example, when students: □ count out 15 small cubes and exchange ten of the cubes for a rod ten cubes long. □ use the number line to show the position of 1/4. □ figure the tax on $4.00 knowing that taxes are 7 cents per $1.00. 3. Students use mathematical operations and relationships among them to understand mathematics. • add, subtract, multiply, and divide whole numbers. • develop strategies for selecting the appropriate computational and operational method in problem solving situations. • know single digit addition, subtraction, multiplication, and division facts. • understand the commutative and associative properties. This is evident, for example, when students: □ use the fact that multiplication is commutative (e.g., 2 x 7 = 7 x 2) to assist them with their memorizing of the basic facts. □ solve multiple-step problems that require at least two different operations. □ progress from base ten blocks to concrete models and then to paper and pencil algorithms. Modeling/Multiple Representation 4. Students use mathematical modeling/multiple representation to provide a means of presenting, interpreting, communicating, and connecting mathematical information and relationships. • use concrete materials to model spatial relationships.
• construct tables, charts, and graphs to display and analyze real-world data. • use multiple representations (simulations, manipulative materials, pictures, and diagrams) as tools to explain the operation of everyday procedures. • use variables such as height, weight, and hand size to predict changes over time. • use physical materials, pictures, and diagrams to explain mathematical ideas and processes and to demonstrate geometric concepts. This is evident, for example, when students: □ build a 3 x 3 x 3 cube out of blocks. □ use square tiles to model various rectangles with an area of 24 square units. □ read a bar graph of population trends and write an explanation of the information it contains. 5. Students use measurement in both metric and English measure to provide a major link between the abstractions of mathematics and the real world in order to describe and compare objects and data. • understand that measurement is approximate, never exact. • select appropriate standard and nonstandard measurement tools in measurement activities. • understand the attributes of area, length, capacity, weight, volume, time, temperature, and angle. • estimate and find measures such as length, perimeter, area, and volume using both nonstandard and standard units. • collect and display data. • use statistical methods such as graphs, tables, and charts to interpret data. This is evident, for example, when students: □ measure with paper clips or finger width. □ estimate, then calculate, how much paint would be needed to cover one wall. □ create a chart to display the results of a survey conducted among the classes in the school, or graph the amounts of survey responses by grade level. 6. Students use ideas of uncertainty to illustrate that mathematics involves more than exactness when dealing with everyday situations. • make estimates to compare to actual results of both formal and informal measurement. • make estimates to compare to actual results of computations. 
• recognize situations where only an estimate is required. • develop a wide variety of estimation skills and strategies. • determine the reasonableness of results. • predict experimental probabilities. • make predictions using unbiased random samples. • determine probabilities of simple events. This is evident, for example, when students: □ estimate the length of the room before measuring. □ predict the average number of red candies in a bag before opening a group of bags, counting the candies, and then averaging the number that were red. □ determine the probability of picking an even numbered slip from a hat containing slips of paper numbered 1, 2, 3, 4, 5, and 6. 7. Students use patterns and functions to develop mathematical power, appreciate the true beauty of mathematics, and construct generalizations that describe patterns simply and efficiently. • recognize, describe, extend, and create a wide variety of patterns. • represent and describe mathematical relationships. • explore and express relationships using variables and open sentences. • solve for an unknown using manipulative materials. • use a variety of manipulative materials and technologies to explore patterns. • interpret graphs. • explore and develop relationships among two- and three-dimensional geometric shapes. • discover patterns in nature, art, music, and literature. This is evident, for example, when students: □ represent three more than a number is equal to nine as n + 3 = 9. □ draw leaves, simple wallpaper patterns, or write number sequences to illustrate recurring patterns. □ write generalizations or conclusions from data displayed in charts or graphs. Standard 3-Mathematics Students will understand mathematics and become mathematically confident by communicating and reasoning mathematically, by applying mathematics in real-world settings, and by solving problems through the integrated study of number systems, geometry, algebra, data analysis, probability, and trigonometry.
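The elementary-level evidence examples above (the probability of drawing an even-numbered slip from a hat holding slips 1 through 6, and extending the triangular numbers 1, 3, 6, 10, . . .) can both be checked with a short sketch. The code below is illustrative only, not part of the standard; the closed form n(n+1)/2 for the nth triangular number is a standard identity, not something the document states.

```python
# Illustrative sketch, not part of the standard: checking two of the
# evidence examples above with plain Python.
from fractions import Fraction

# Probability of drawing an even-numbered slip from a hat with slips 1..6.
slips = range(1, 7)
p_even = Fraction(sum(1 for n in slips if n % 2 == 0), len(slips))
print(p_even)  # 1/2

# Extending the triangular numbers 1, 3, 6, 10, ... using the closed
# form n(n+1)/2 for the nth triangular number.
def triangular(n):
    return n * (n + 1) // 2

print([triangular(n) for n in range(1, 6)])  # [1, 3, 6, 10, 15]
```

Counting favorable outcomes over total outcomes mirrors how students would reason about the slips by hand, and the triangular-number list confirms that the next term after 10 is 15.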
Mathematical Reasoning 1. Students use mathematical reasoning to analyze mathematical situations, make conjectures, gather evidence, and construct an argument. • apply a variety of reasoning strategies. • make and evaluate conjectures and arguments using appropriate language. • make conclusions based on inductive reasoning. • justify conclusions involving simple and compound (i.e., and/or) statements. This is evident, for example, when students: □ use trial and error and work backwards to solve a problem. □ identify patterns in a number sequence. □ are asked to find numbers that satisfy two conditions, such as n > -4 and n < 6. Number and Numeration 2. Students use number sense and numeration to develop an understanding of the multiple uses of numbers in the real world, the use of numbers to communicate mathematically, and the use of numbers in the development of mathematical ideas. • understand, represent, and use numbers in a variety of equivalent forms (integer, fraction, decimal, percent, exponential, expanded and scientific notation). • understand and apply ratios, proportions, and percents through a wide variety of hands-on explorations. • develop an understanding of number theory (primes, factors, and multiples). • recognize order relations for decimals, integers, and rational numbers. This is evident, for example, when students: □ use prime factors of a group of denominators to determine the least common denominator. □ select two pairs from a number of ratios and prove that they are in proportion. □ demonstrate the concept that a number can be symbolized by many different numerals as in: 1/4 = 3/12 = 25/100 = 0.25 = 25% 3. Students use mathematical operations and relationships among them to understand mathematics. • add, subtract, multiply, and divide fractions, decimals, and integers. • explore and use the operations dealing with roots and powers. • use grouping symbols (parentheses) to clarify the intended order of operations.
• apply the associative, commutative, distributive, inverse, and identity properties. • demonstrate an understanding of operational algorithms (procedures for adding, subtracting, etc.). • develop appropriate proficiency with facts and algorithms. • apply concepts of ratio and proportion to solve problems. This is evident, for example, when students: □ create area models to help in understanding fractions, decimals, and percents. □ find the missing number in a proportion in which three of the numbers are known, and letters are used as place holders. □ arrange a set of fractions in order, from the smallest to the largest: 3/4 , 1/5 , 2/3 , 1/2 , 1/4 □ illustrate the distributive property for multiplication over addition, such as 2(a + 3) = 2a + 6. Modeling/Multiple Representation 4. Students use mathematical modeling/multiple representation to provide a means of presenting, interpreting, communicating, and connecting mathematical information and relationships. • visualize, represent, and transform two- and three-dimensional shapes. • use maps and scale drawings to represent real objects or places. • use the coordinate plane to explore geometric ideas. • represent numerical relationships in one- and two-dimensional graphs. • use variables to represent relationships. • use concrete materials and diagrams to describe the operation of real world processes and systems. • develop and explore models that do and do not rely on chance. • investigate both two- and three-dimensional transformations. • use appropriate tools to construct and verify geometric relationships. • develop procedures for basic geometric constructions. This is evident, for example, when students: □ build a city skyline to demonstrate skill in linear measurements, scale drawing, ratio, fractions, angles, and geometric shapes. □ bisect an angle using a straight edge and compass. □ draw a complex of geometric figures to illustrate that the intersection of a plane and a sphere is a circle or point. 5. 
Students use measurement in both metric and English measure to provide a major link between the abstractions of mathematics and the real world in order to describe and compare objects and data. • estimate, make, and use measurements in real-world situations. • select appropriate standard and nonstandard measurement units and tools to measure to a desired degree of accuracy. • develop measurement skills and informally derive and apply formulas in direct measurement activities. • use statistical methods and measures of central tendency to display, describe, and compare data. • explore and produce graphic representations of data using calculators/computers. • develop critical judgment for the reasonableness of measurement. This is evident, for example, when students: □ use box plots or stem and leaf graphs to display a set of test scores. □ estimate and measure the surface areas of a set of gift boxes in order to determine how much wrapping paper will be required. □ explain when to use mean, median, or mode for a group of data. 6. Students use ideas of uncertainty to illustrate that mathematics involves more than exactness when dealing with everyday situations. • use estimation to check the reasonableness of results obtained by computation, algorithms, or the use of technology. • use estimation to solve problems for which exact answers are inappropriate. • estimate the probability of events. • use simulation techniques to estimate probabilities. • determine probabilities of independent and mutually exclusive events. This is evident, for example, when students: □ construct spinners to represent random choice of four possible selections. □ perform probability experiments with independent events (e.g., the probability that the head of a coin will turn up, or that a 6 will appear on a die toss). □ estimate the number of students who might choose to eat hot dogs at a picnic. 7.
Students use patterns and functions to develop mathematical power, appreciate the true beauty of mathematics, and construct generalizations that describe patterns simply and efficiently. • recognize, describe, and generalize a wide variety of patterns and functions. • describe and represent patterns and functional relationships using tables, charts and graphs, algebraic expressions, rules, and verbal descriptions. • develop methods to solve basic linear and quadratic equations. • develop an understanding of functions and functional relationships: that a change in one quantity (variable) results in change in another. • verify results of substituting variables. • apply the concept of similarity in relevant situations. • use properties of polygons to classify them. • explore relationships involving points, lines, angles, and planes. • develop and apply the Pythagorean principle in the solution of problems. • explore and develop basic concepts of right triangle trigonometry. • use patterns and functions to represent and solve problems. This is evident, for example, when students: □ find the height of a building that a 20-foot ladder reaches when its base is 12 feet away from the structure. □ investigate number patterns through palindromes (pick a 2-digit number, reverse its digits and add the two; repeat the process until a palindrome appears): 42 + 24 = 66, a palindrome; 86 + 68 = 154, then 154 + 451 = 605, then 605 + 506 = 1111, a palindrome. □ solve linear equations, such as 2(x + 3) = x + 5, by several methods. Standard 3-Mathematics Students will understand mathematics and become mathematically confident by communicating and reasoning mathematically, by applying mathematics in real-world settings, and by solving problems through the integrated study of number systems, geometry, algebra, data analysis, probability, and trigonometry. Mathematical Reasoning 1. Students use mathematical reasoning to analyze mathematical situations, make conjectures, gather evidence, and construct an argument.
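The reverse-and-add palindrome investigation described above can be sketched as a short program. This is an illustrative sketch only; the function names are my own, not part of the standard.

```python
def is_palindrome(n):
    """A number is a palindrome if its digits read the same reversed."""
    return str(n) == str(n)[::-1]

def reverse_and_add(n):
    """Repeatedly reverse the digits and add, until a palindrome appears.

    Returns the full sequence of intermediate sums, e.g. 86 -> 154 -> 605 -> 1111.
    """
    steps = [n]
    while not is_palindrome(n):
        n += int(str(n)[::-1])
        steps.append(n)
    return steps

print(reverse_and_add(42))  # -> [42, 66]
print(reverse_and_add(86))  # -> [86, 154, 605, 1111]
```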
• construct simple logical arguments. • follow and judge the validity of logical arguments. • use symbolic logic in the construction of valid arguments. • construct proofs based on deductive reasoning. This is evident, for example, when students: □ prove that an altitude of an isosceles triangle, drawn to the base, is perpendicular to that base. □ determine whether or not a given logical sentence is a tautology. □ show that the triangle having vertex coordinates of (0,6), (0,0), and (5,0) is a right triangle. Number and Numeration 2. Students use number sense and numeration to develop an understanding of the multiple uses of numbers in the real world, the use of numbers to communicate mathematically, and the use of numbers in the development of mathematical ideas. • understand and use rational and irrational numbers. • recognize the order of the real numbers. • apply the properties of the real numbers to various subsets of numbers. This is evident, for example, when students: □ determine from the discriminant of a quadratic equation whether the roots are rational or irrational. □ give rational approximations of irrational numbers to a specific degree of accuracy. □ determine for which value of x the expression (2x + 6)/(x - 7) is undefined. 3. Students use mathematical operations and relationships among them to understand mathematics. • use addition, subtraction, multiplication, division, and exponentiation with real numbers and algebraic expressions. • develop an understanding of and use the composition of functions and transformations. • explore and use negative exponents on integers and algebraic expressions. • use field properties to justify mathematical procedures. • use transformations on figures and functions in the coordinate plane. This is evident, for example, when students: □ determine the coordinates of triangle A(2,5), B(9,8), and C(3,6) after a translation (x,y) --> (x + 3, y - 1).
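The discriminant example above can be checked in code: for integer coefficients, the roots of ax² + bx + c = 0 are rational exactly when b² - 4ac is a non-negative perfect square. This is an illustrative sketch; the function name is my own.

```python
from math import isqrt

def roots_are_rational(a, b, c):
    """For integer a, b, c, the roots of ax^2 + bx + c = 0 are rational
    iff the discriminant b^2 - 4ac is a non-negative perfect square."""
    d = b * b - 4 * a * c
    return d >= 0 and isqrt(d) ** 2 == d

print(roots_are_rational(1, -5, 6))  # x^2 - 5x + 6: d = 1, rational roots
print(roots_are_rational(1, 0, -2))  # x^2 - 2: d = 8, irrational roots
```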
□ evaluate the binary operation defined as x * y = x^2 + (y + x)^2 for 3 * 4. □ identify the field properties used in solving the equation 2(x - 5) + 3 = x + 7. Modeling/Multiple Representation 4. Students use mathematical modeling/multiple representation to provide a means of presenting, interpreting, communicating, and connecting mathematical information and relationships. • represent problem situations symbolically by using algebraic expressions, sequences, tree diagrams, geometric figures, and graphs. • manipulate symbolic representations to explore concepts at an abstract level. • choose appropriate representations to facilitate the solving of a problem. • use learning technologies to make and verify geometric conjectures. • justify the procedures for basic geometric constructions. • investigate transformations in the coordinate plane. • develop meaning for basic conic sections. • develop and apply the concept of basic loci to compound loci. • use graphing utilities to create and explore geometric and algebraic models. • model real-world problems with systems of equations and inequalities. This is evident, for example, when students: □ determine the locus of points equidistant from two parallel lines. □ explain why the basic construction of bisecting a line is valid. □ describe the various conics produced when the equation ax^2 + by^2 = c^2 is graphed for various values of a, b, and c. 5. Students use measurement in both metric and English measure to provide a major link between the abstractions of mathematics and the real world in order to describe and compare objects and data. • derive and apply formulas to find measures such as length, area, volume, weight, time, and angle in real-world contexts. • choose the appropriate tools for measurement. • use dimensional analysis techniques. • use statistical methods including measures of central tendency to describe and compare data. • use trigonometry as a method to measure indirectly.
• apply proportions to scale drawings, computer-assisted design blueprints, and direct variation in order to compute indirect measurements. • relate absolute value, distance between two points, and the slope of a line to the coordinate plane. • understand error in measurement and its consequence on subsequent calculations. • use geometric relationships in relevant measurement problems involving geometric concepts. This is evident, for example, when students: □ change mph to ft/sec. □ use the tangent ratio to determine the height of a tree. □ determine the distance between two points in the coordinate plane. 6. Students use ideas of uncertainty to illustrate that mathematics involves more than exactness when dealing with everyday situations. • judge the reasonableness of results obtained from applications in algebra, geometry, trigonometry, probability, and statistics. • judge the reasonableness of a graph produced by a calculator or computer. • use experimental or theoretical probability to represent and solve problems involving uncertainty. • use the concept of random variable in computing probabilities. • determine probabilities using permutations and combinations. This is evident, for example, when students: □ construct a tree diagram or sample space for a compound event. □ calculate the probability of winning the New York State Lottery. □ develop simulations for probability problems for which they do not have theoretical solutions. 7. Students use patterns and functions to develop mathematical power, appreciate the true beauty of mathematics, and construct generalizations that describe patterns simply and efficiently. • use function vocabulary and notation. • represent and analyze functions using verbal descriptions, tables, equations, and graphs. • translate among the verbal descriptions, tables, equations and graphic forms of functions. • analyze the effect of parametric changes on the graphs of functions. 
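The tangent-ratio evidence example above (determining the height of a tree by indirect measurement) reduces to height = distance × tan(angle of elevation). The sketch below is illustrative; the specific distance and angle are assumptions, not from the standard.

```python
from math import tan, radians

def height_from_angle(distance_to_base, elevation_degrees):
    """Indirect measurement: height = distance * tan(angle of elevation)."""
    return distance_to_base * tan(radians(elevation_degrees))

# Standing 50 ft from the tree and sighting the top at a 45-degree angle:
# tan(45 degrees) = 1, so the tree is about 50 ft tall.
print(round(height_from_angle(50, 45), 1))  # -> 50.0
```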
• apply linear, exponential, and quadratic functions in the solution of problems. • apply and interpret transformations to functions. • model real-world situations with the appropriate function. • apply axiomatic structure to algebra and geometry. • use computers and graphing calculators to analyze mathematical phenomena. This is evident, for example, when students: □ determine, in more than one way, whether or not a specific relation is a function. □ explain the relationship between the roots of a quadratic equation and the intercepts of its corresponding graph. □ use transformations to determine the inverse of a function. Standard 3-Mathematics Four-year sequence in mathematics Students will understand mathematics and become mathematically confident by communicating and reasoning mathematically, by applying mathematics in real-world settings, and by solving problems through the integrated study of number systems, geometry, algebra, data analysis, probability, and trigonometry. 1. Students use mathematical reasoning to analyze mathematical situations, make conjectures, gather evidence, and construct an argument. • construct indirect proofs or proofs using mathematical induction. • investigate and compare the axiomatic structures of various geometries. This is evident, for example, when students: □ prove indirectly that: if n^2 is even, n is even. □ prove using mathematical induction that: 1 + 3 + 5 + ... + (2n - 1) = n^2. □ explain the axiomatic differences between plane and spherical geometries. 2. Students use number sense and numeration to develop an understanding of the multiple uses of numbers in the real world, the use of numbers to communicate mathematically, and the use of numbers in the development of mathematical ideas. • understand the concept of infinity. • recognize the hierarchy of the complex number system. • model the structure of the complex number system. • recognize when to use and how to apply the field properties.
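The induction example above, 1 + 3 + 5 + ... + (2n - 1) = n², can be spot-checked numerically. A numeric check is not a proof, only a companion to the induction argument; the function name is my own.

```python
def sum_of_first_n_odds(n):
    """Sum 1 + 3 + 5 + ... + (2n - 1), which induction shows equals n^2."""
    return sum(2 * k - 1 for k in range(1, n + 1))

# The closed form n^2 matches the direct sum for every n checked here.
assert all(sum_of_first_n_odds(n) == n * n for n in range(1, 101))
print(sum_of_first_n_odds(10))  # -> 100
```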
This is evident, for example, when students: □ relate the concept of infinity when graphing the tangent function. □ show that the set of complex numbers forms a field under the operations of addition and multiplication. □ represent a complex number in polar form. 3. Students use mathematical operations and relationships among them to understand mathematics. • use appropriate techniques, including graphing utilities, to perform basic operations on matrices. • use rational exponents on real numbers and all operations on complex numbers. • combine functions using the basic operations and the composition of two functions. This is evident, for example, when students: □ relate specific matrices to certain types of transformations of points on the coordinate plane. □ evaluate expressions with fractional exponents, such as 8^(2/3) · 4^(-1/2). □ determine the value of compound functions such as (f o g)(x). Modeling/Multiple Representation 4. Students use mathematical modeling/multiple representation to provide a means of presenting, interpreting, communicating, and connecting mathematical information and relationships. • model vector quantities both algebraically and geometrically. • represent graphically the sum and difference of two complex numbers. • model and solve problems that involve absolute value, vectors, and matrices. • model quadratic inequalities both algebraically and graphically. • model the composition of transformations. • determine the effects of changing parameters of the graphs of functions. • use polynomial, rational, trigonometric, and exponential functions to model real-world relationships. • use algebraic relationships to analyze the conic sections. • use circular functions to study and model periodic real-world phenomena. • illustrate spatial relationships using perspective, projections, and maps.
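The fractional-exponent example above, 8^(2/3) · 4^(-1/2), can be evaluated directly: 8^(2/3) = (8^(1/3))² = 4 and 4^(-1/2) = 1/2, so the product is 2. A quick numeric confirmation (rounding absorbs floating-point error):

```python
# 8^(2/3) = (cube root of 8)^2 = 2^2 = 4;  4^(-1/2) = 1/2;  product = 2.
value = 8 ** (2 / 3) * 4 ** (-1 / 2)
print(round(value, 10))  # -> 2.0
```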
• represent problem situations using discrete structures such as finite graphs, matrices, sequences, and recurrence relations. • analyze spatial relationships using the Cartesian coordinate system in three dimensions. This is evident, for example, when students: □ determine coordinates which lie in the solution of the quadratic inequality, such as y < x^2 + 4x + 2. □ find the distance between two points in a three-dimensional coordinate system. □ describe what happens to the graph when b increases in the function y = x^2 + bx + c. 5. Students use measurement in both metric and English measure to provide a major link between the abstractions of mathematics and the real world in order to describe and compare objects and data. • derive and apply formulas relating angle measure and arc degree measure in a circle. • prove and apply theorems related to lengths of segments in a circle. • define the trigonometric functions in terms of the unit circle. • relate trigonometric relationships to the area of a triangle and to the general solutions of triangles. • apply the normal curve and its properties to familiar contexts. • design a statistical experiment to study a problem and communicate the outcomes, including dispersion. • use statistical methods, including scatter plots and lines of best fit, to make predictions. • apply the conceptual foundation of limits, infinite sequences and series, the area under a curve, rate of change, inverse variation, and the slope of a tangent line to authentic problems in mathematics and other disciplines. • determine optimization points on a graph. • use derivatives to find maximum, minimum, and inflection points of a function. This is evident, for example, when students: □ use a chi-square test to determine if one cola really tastes better than another cola. □ illustrate the various line segments which represent the sine, cosine, and tangent of a given angle on the unit circle.
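The distance example above (two points in a three-dimensional coordinate system) follows from extending the Pythagorean principle: d = √(Δx² + Δy² + Δz²). A sketch, with illustrative points of my own choosing:

```python
from math import dist  # Euclidean distance in any dimension (Python 3.8+)

# d = sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2)
# Here: sqrt(3^2 + 4^2 + 12^2) = sqrt(9 + 16 + 144) = sqrt(169) = 13.
print(dist((1, 2, 3), (4, 6, 15)))  # -> 13.0
```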
□ calculate the first derivative of a function using the limit definition. 6. Students use ideas of uncertainty to illustrate that mathematics involves more than exactness when dealing with everyday situations. • interpret probabilities in real-world situations. • use a Bernoulli experiment to determine probabilities for experiments with exactly two outcomes. • use curve fitting to predict from data. • apply the concept of random variable to generate and interpret probability distributions. • create and interpret applications of discrete and continuous probability distributions. • make predictions based on interpolations and extrapolations from data. • obtain confidence intervals and test hypotheses using appropriate statistical methods. • approximate the roots of polynomial equations. This is evident, for example, when students: □ verify the probabilities listed for the state lottery for second, third, and fourth prize. □ use graphing calculators to generate a curve of best fit for an array of data using linear regression. □ determine the probability of getting at least 3 heads on 6 flips of a fair coin. 7. Students use patterns and functions to develop mathematical power, appreciate the true beauty of mathematics, and construct generalizations that describe patterns simply and efficiently. • solve equations with complex roots using a variety of algebraic and graphical methods with appropriate tools. • understand and apply the relationship between the rectangular form and the polar form of a complex number. • evaluate and form the composition of functions. • use the definition of a derivative to examine the properties of a function. • solve equations involving fractions, absolute values, and radicals. • use basic transformations to demonstrate similarity and congruence of figures. • identify and differentiate between direct and indirect isometries. • analyze inverse functions using transformations. 
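The coin-flip evidence example above (at least 3 heads on 6 flips of a fair coin) can be computed with combinations: sum C(6, k)/2⁶ for k = 3..6. A sketch; the helper name is my own.

```python
from math import comb

def p_at_least(heads, flips):
    """P(at least `heads` heads in `flips` flips of a fair coin)."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2 ** flips

# C(6,3) + C(6,4) + C(6,5) + C(6,6) = 20 + 15 + 6 + 1 = 42 favorable
# outcomes out of 2^6 = 64, so the probability is 42/64 = 21/32.
print(p_at_least(3, 6))  # -> 0.65625
```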
• apply the ideas of symmetries in sketching and analyzing graphs of functions. • use the normal curve to answer questions about data. • develop methods to solve trigonometric equations and verify trigonometric functions. • describe patterns produced by processes of geometric change, formally connecting iteration, approximations, limits, and fractals. • extend patterns and compute the nth term in numerical and geometric sequences. • use the limiting process to analyze infinite sequences and series. • use algebraic and geometric iteration to explore patterns and solve problems. • solve optimization problems. • use linear programming and difference equations in the solution of problems. This is evident, for example, when students: □ transform polar coordinates into rectangular forms. □ find the maximum height of an object projected upward with a given initial velocity. □ find the limit of expressions like (n - 2)/(3n + 5) as n goes to infinity. Standard 4-Science Students will understand and apply scientific concepts, principles, and theories pertaining to the physical setting and living environment and recognize the historical development of ideas in science. Physical Setting 1. The Earth and celestial phenomena can be described by principles of relative motion and perspective. • describe patterns of daily, monthly, and seasonal changes in their environment. This is evident, for example, when students: □ conduct a long-term weather investigation, such as running a weather station or collecting weather data. □ keep a journal of the phases of the moon over a one-month period. This information is collected for several different one-month periods and compared. 2. Many of the phenomena that we observe on Earth involve interactions among components of air, water, and land. • describe the relationships among air, water, and land on Earth. This is evident, for example, when students: □ observe a puddle of water outdoors after a rainstorm.
On a return visit after the puddle has disappeared, students describe where the water came from and possible locations for it now. □ assemble rock and mineral collections based on characteristics such as erosional features or crystal size features. 3. Matter is made up of particles whose properties determine the observable characteristics of matter and its reactivity. • observe and describe properties of materials using appropriate tools. • describe chemical and physical changes, including changes in states of matter. This is evident, for example, when students: □ compare the appearance of materials when seen with and without the aid of a magnifying glass. □ investigate simple physical and chemical reactions and the chemistry of household products, e.g., freezing, melting, and evaporating; a comparison of new and rusty nails; the role of baking soda in cooking. 4. Energy exists in many forms, and when these forms change energy is conserved. • describe a variety of forms of energy (e.g., heat, chemical, light) and the changes that occur in objects when they interact with those forms of energy. • observe the way one form of energy can be transformed into another form of energy present in common situations (e.g., mechanical to heat energy, mechanical to electrical energy, chemical to heat energy). This is evident, for example, when students: □ investigate the interactions of liquids and powders that result in chemical reactions (e.g., vinegar and baking soda) compared to interactions that do not (e.g., water and sugar). □ in order to demonstrate the transformation of chemical to electrical energy, construct electrical cells from objects, such as lemons or potatoes, using pennies and aluminum foil inserted in slits at each end of fruits or vegetables; the penny and aluminum are attached by wires to a milliammeter. Students can compare the success of a variety of these electrical cells. 5. Energy and matter interact through forces that result in changes in motion.
• describe the effects of common forces (pushes and pulls) on objects, such as those caused by gravity, magnetism, and mechanical forces. • describe how forces can operate across distances. This is evident, for example, when students: □ investigate simple machines and use them to perform tasks. The Living Environment 1. Living things are both similar to and different from each other and nonliving things. • describe the characteristics of and variations between living and nonliving things. • describe the life processes common to all living things. This is evident, for example, when students: □ grow a plant or observe a pet, investigating what it requires to stay alive, including evaluating the relative importance and necessity of each item. □ investigate differences in personal body characteristics, such as temperature, pulse, heart rate, blood pressure, and reaction time. 2. Organisms inherit genetic information in a variety of ways that result in continuity of structure and function between parents and offspring. • recognize that traits of living things are both inherited and acquired or learned. • recognize that for humans and other living things there is genetic continuity between generations. This is evident, for example, when students: □ interact with a classroom pet, observe its behaviors, and record what they are able to teach the animal, such as navigation of a maze or performance of tricks, compared to that which remains constant, such as eye color, or number of digits on an appendage. □ use breeding records and photographs of racing horses or pedigreed animals to recognize that variations exist from generation to generation but "like begets like." 3. Individual organisms and species change over time. • describe how the structures of plants and animals complement the environment of the plant or animal. • observe that differences within a species may give individuals an advantage in surviving and reproducing. 
This is evident, for example, when students: □ relate physical characteristics of organisms to habitat characteristics (e.g., long hair and fur color change for mammals living in cold climates). □ visit a farm or a zoo and make a written or pictorial comparison of members of a litter and identify characteristics that may provide an advantage. 4. The continuity of life is sustained through reproduction and development. • describe the major stages in the life cycles of selected plants and animals. • describe evidence of growth, repair, and maintenance, such as nails, hair, and bone, and the healing of cuts and bruises. This is evident, for example, when students: □ grow bean plants or butterflies; record and describe stages of development. 5. Organisms maintain a dynamic equilibrium that sustains life. • describe basic life functions of common living specimens (guppy, mealworm, gerbil). • describe some survival behaviors of common living specimens. • describe the factors that help promote good health and growth in humans. This is evident, for example, when students: □ observe a single organism over a period of weeks and describe such life functions as moving, eating, resting, and eliminating. □ observe and demonstrate reflexes such as pupil dilation and contraction and relate such reflexes to improved survival. □ analyze the extent to which diet and exercise habits meet cardiovascular, energy, and nutrient requirements. 6. Plants and animals depend on each other and their physical environment. • describe how plants and animals, including humans, depend upon each other and the nonliving environment. • describe the relationship of the sun as an energy source for living and nonliving cycles. This is evident, for example, when students: □ investigate how humans depend on their environment (neighborhood), by observing, recording, and discussing the interactions that occur in carrying out their everyday lives. 
□ observe the effects of sunlight on growth for a garden vegetable. 7. Human decisions and activities have had a profound impact on the physical and living environment. • identify ways in which humans have changed their environment and the effects of those changes. This is evident, for example, when students: □ give examples of how inventions and innovations have changed the environment; describe benefits and burdens of those changes. Standard 4-Science Students will understand and apply scientific concepts, principles, and theories pertaining to the physical setting and living environment and recognize the historical development of ideas in science. Physical Setting 1. The Earth and celestial phenomena can be described by principles of relative motion and perspective. • explain daily, monthly, and seasonal changes on Earth. This is evident, for example, when students: □ create models, drawings, or demonstrations describing the arrangement, interaction, and movement of the Earth, moon, and sun. □ plan and conduct an investigation of the night sky to describe the arrangement, interaction, and movement of celestial bodies. 2. Many of the phenomena that we observe on Earth involve interactions among components of air, water, and land. • explain how the atmosphere (air), hydrosphere (water), and lithosphere (land) interact, evolve, and change. • describe volcano and earthquake patterns, the rock cycle, and weather and climate changes. This is evident, for example, when students: □ add heat to and subtract heat from water and graph the temperature changes, including the resulting phase changes. □ make a record of reported earthquakes and volcanoes and interpret the patterns formed worldwide. 3. Matter is made up of particles whose properties determine the observable characteristics of matter and its reactivity. • observe and describe properties of materials, such as density, conductivity, and solubility. • distinguish between chemical and physical changes.
• develop their own mental models to explain common chemical reactions and changes in states of matter. This is evident, for example, when students: □ test and compare the properties (hardness, shape, color, etc.) of an array of materials. □ observe an ice cube as it begins to melt at room temperature and construct an explanation for what happens, including sketches and written descriptions of their ideas. 4. Energy exists in many forms, and when these forms change energy is conserved. • describe the sources and identify the transformations of energy observed in everyday life. • observe and describe heating and cooling events. • observe and describe energy changes as related to chemical reactions. • observe and describe the properties of sound, light, magnetism, and electricity. • describe situations that support the principle of conservation of energy. This is evident, for example, when students: □ design and construct devices to transform/transfer energy. □ conduct supervised explorations of chemical reactions (not including ammonia and bleach products) for selected household products, such as hot and cold packs used to treat sport injuries. □ build an electromagnet and investigate the effects of using different types of core materials, varying thicknesses of wire, and different circuit types. 5. Energy and matter interact through forces that result in changes in motion. • describe different patterns of motion of objects. • observe, describe, and compare effects of forces (gravity, electric current, and magnetism) on the motion of objects. This is evident, for example, when students: □ investigate physics in everyday life, such as at an amusement park or a playground. □ use simple machines made of pulleys and levers to lift objects and describe how each machine transforms the force applied to it. □ build "Rube Goldberg" type devices and describe the energy transformations evident in them. The Living Environment 1.
Living things are both similar to and different from each other and nonliving things. • compare and contrast the parts of plants, animals, and one-celled organisms. • explain the functioning of the major human organ systems and their interactions. This is evident, for example, when students: □ conduct a survey of the school grounds and develop appropriate classification keys to group plants and animals by shared characteristics. □ use spring-type clothespins to investigate muscle fatigue or rulers to determine the effect of amount of sleep on hand-eye coordination. 2. Organisms inherit genetic information in a variety of ways that result in continuity of structure and function between parents and offspring. • describe sexual and asexual mechanisms for passing genetic materials from generation to generation. • describe simple mechanisms related to the inheritance of some physical traits in offspring. This is evident, for example, when students: □ contrast dominance and blending as models for explaining inheritance of traits. □ trace patterns of inheritance for selected human traits. 3. Individual organisms and species change over time. • describe sources of variation in organisms and their structures and relate the variations to survival. • describe factors responsible for competition within species and the significance of that competition. This is evident, for example, when students: □ conduct a long-term investigation of plant or animal communities. □ investigate the effects of industrialization on tree trunk color and those effects on different insect species. 4. The continuity of life is sustained through reproduction and development. • observe and describe the variations in reproductive patterns of organisms, including asexual and sexual reproduction. • explain the role of sperm and egg cells in sexual reproduction. • observe and describe developmental patterns in selected plants and animals (e.g., insects, frogs, humans, seed-bearing plants).
• observe and describe cell division at the microscopic level and its macroscopic effects. This is evident, for example, when students: □ apply a model of the genetic code as an analogue for the role of the genetic code in human populations. 5. Organisms maintain a dynamic equilibrium that sustains life. • compare the way a variety of living specimens carry out basic life functions and maintain dynamic equilibrium. • describe the importance of major nutrients, vitamins, and minerals in maintaining health and promoting growth and explain the need for a constant input of energy for living organisms. This is evident, for example, when students: □ record and compare the behaviors of animals in their natural habitats and relate how these behaviors are important to the animals. □ design and conduct a survey of personal nutrition and exercise habits, and analyze and critique the results of that survey. 6. Plants and animals depend on each other and their physical environment. • describe the flow of energy and matter through food chains and food webs. • provide evidence that green plants make food and explain the significance of this process to other organisms. This is evident, for example, when students: □ construct a food web for a community of organisms and explore how elimination of a particular part of a chain affects the rest of the chain and web. 7. Human decisions and activities have had a profound impact on the physical and living environment. • describe how living things, including humans, depend upon the living and nonliving environment for their survival. • describe the effects of environmental changes on humans and other populations. This is evident, for example, when students: □ conduct an extended investigation of a local environment affected by human actions, (e.g., a pond, stream, forest, empty lot). 
Standard 4-Science Students will understand and apply scientific concepts, principles, and theories pertaining to the physical setting and living environment and recognize the historical development of ideas in science. Physical Setting 1. The Earth and celestial phenomena can be described by principles of relative motion and perspective. • explain complex phenomena, such as tides, variations in day length, solar insolation, apparent motion of the planets, and annual traverse of the constellations. • describe current theories about the origin of the universe and solar system. This is evident, for example, when students: □ create models, drawings, or demonstrations to explain changes in day length, solar insolation, and the apparent motion of planets. 2. Many of the phenomena that we observe on Earth involve interactions among components of air, water, and land. • use the concepts of density and heat energy to explain observations of weather patterns, seasonal changes, and the movements of the Earth's plates. • explain how incoming solar radiation, ocean currents, and land masses affect weather and climate. This is evident, for example, when students: □ use diagrams of ocean currents at different latitudes to develop explanations for the patterns present. 3. Matter is made up of particles whose properties determine the observable characteristics of matter and its reactivity. • explain the properties of materials in terms of the arrangement and properties of the atoms that compose them. • use atomic and molecular models to explain common chemical reactions. • apply the principle of conservation of mass to chemical reactions. • use kinetic molecular theory to explain rates of reactions and the relationships among temperature, pressure, and volume of a substance. This is evident, for example, when students: □ use the atomic theory of elements to justify their choice of an element for use as a lighter-than-air gas for a launch vehicle. 
□ represent common chemical reactions using three-dimensional models of the molecules involved. □ discuss and explain a variety of everyday phenomena involving rates of chemical reactions, in terms of the kinetic molecular theory (e.g., use of refrigeration to keep food from spoiling, ripening of fruit in a bowl, use of kindling wood to start a fire, different types of flames that come from a Bunsen burner). 4. Energy exists in many forms, and when these forms change energy is conserved. • observe and describe transmission of various forms of energy. • explain heat in terms of kinetic molecular theory. • explain variations in wavelength and frequency in terms of the source of the vibrations that produce them, e.g., molecules, electrons, and nuclear particles. • explain the uses and hazards of radioactivity. This is evident, for example, when students: □ demonstrate through drawings, models, and diagrams how the potential energy that exists in the chemical bonds of fossil fuels can be converted to electrical energy in a power plant (potential energy → heat energy → mechanical energy → electrical energy). □ investigate the sources of radioactive emissions in their environment and the dangers and benefits they pose for humans. 5. Energy and matter interact through forces that result in changes in motion. • explain and predict different patterns of motion of objects (e.g., linear and angular motion, velocity and acceleration, momentum and inertia). • explain chemical bonding in terms of the motion of electrons. • compare energy relationships within an atom's nucleus to those outside the nucleus. This is evident, for example, when students: □ construct drawings, models, and diagrams representing several different types of chemical bonds to demonstrate the basis of the bond, the strength of the bond, and the type of electrical attraction that exists. The Living Environment 1. Living things are both similar to and different from each other and nonliving things. 
• explain how diversity of populations within ecosystems relates to the stability of ecosystems. • describe and explain the structures and functions of the human body at different organizational levels (e.g., systems, tissues, cells, organelles). • explain how a one-celled organism is able to function despite lacking the levels of organization present in more complex organisms. 2. Organisms inherit genetic information in a variety of ways that result in continuity of structure and function between parents and offspring. • explain how the structure and replication of genetic material result in offspring that resemble their parents. • explain how the technology of genetic engineering allows humans to alter the genetic makeup of organisms. This is evident, for example, when students: □ record outward characteristics of fruit flies and then breed them to determine patterns of inheritance. 3. Individual organisms and species change over time. • explain the mechanisms and patterns of evolution. This is evident, for example, when students: □ determine characteristics of the environment that affect a hypothetical organism and explore how different characteristics of the species give it a selective advantage. 4. The continuity of life is sustained through reproduction and development. • explain how organisms, including humans, reproduce their own kind. This is evident, for example, when students: □ observe the development of fruit flies or rapidly maturing plants, from fertilized egg to mature adult, relating embryological development and structural adaptations to the propagation of the species. 5. Organisms maintain a dynamic equilibrium that sustains life. • explain the basic biochemical processes in living organisms and their importance in maintaining dynamic equilibrium. • explain disease as a failure of homeostasis. • relate processes at the system level to the cellular level in order to explain dynamic equilibrium in multicelled organisms. 
This is evident, for example, when students: □ investigate the biochemical processes of the immune system, and its relationship to maintaining mental and physical health. 6. Plants and animals depend on each other and their physical environment. • explain factors that limit growth of individuals and populations. • explain the importance of preserving diversity of species and habitats. • explain how the living and nonliving environments change over time and respond to disturbances. This is evident, for example, when students: □ conduct a long-term investigation of a local ecosystem. 7. Human decisions and activities have had a profound impact on the physical and living environment. • describe the range of interrelationships of humans with the living and nonliving environment. • explain the impact of technological development and growth in the human population on the living and non-living environment. • explain how individual choices and societal actions can contribute to improving the environment. This is evident, for example, when students: □ compile a case study of a technological development that has had a significant impact on the environment. Standard 5-Technology Students will apply technological knowledge and skills to design, construct, use, and evaluate products and systems to satisfy human and environmental needs. Engineering Design 1. Engineering design is an iterative process involving modeling and optimization used to develop technological solutions to problems within given constraints. • describe objects, imaginary or real, that might be modeled or made differently and suggest ways in which the objects can be changed, fixed, or improved. • investigate prior solutions and ideas from books, magazines, family, friends, neighbors, and community members. • generate ideas for possible solutions, individually and through group activity; apply age-appropriate mathematics and science skills; evaluate the ideas and determine the best solution; and explain reasons for the choices. 
• plan and build, under supervision, a model of the solution using familiar materials, processes, and hand tools. • discuss how best to test the solution; perform the test under teacher supervision; record and portray results through numerical and graphic means; discuss orally why things worked or didn't work; and summarize results in writing, suggesting ways to make the solution better. This is evident, for example, when students: □ read a story called Humpty's Big Day wherein the readers visit the place where Humpty Dumpty had his accident, and are asked to design and model a way to get to the top of the wall and down again. □ generate and draw ideas for a space station that includes a pleasant living and working environment. □ design and model footwear that they could use to walk on a cold, sandy surface. Tools, Resources, and Technological Process 2. Technological tools, materials, and other resources should be selected on the basis of safety, cost, availability, appropriateness, and environmental impact; technological processes change energy, information, and material resources into more useful forms. • explore, use, and process a variety of materials and energy sources to design and construct things. • understand the importance of safety, cost, ease of use, and availability in selecting tools and resources for a specific purpose. • develop basic skill in the use of hand tools. • use simple manufacturing processes (e.g., assembly, multiple stages of production, quality control) to produce a product. • use appropriate graphic and electronic tools and techniques to process information. This is evident, for example, when students: □ explore and use materials, joining them with the use of adhesives and mechanical fasteners to make a cardboard marionette with moving parts. □ explore materials and use forming processes to heat and bend plastic into a shape that can hold napkins. 
□ explore energy sources by making a simple motor that uses electrical energy to produce continuous mechanical motion. □ develop skill with a variety of hand tools and use them to make or fix things. □ process information electronically such as using a video system to advertise a product or service. □ process information graphically such as taking photos and developing and printing the pictures. Computer Technology 3. Computers, as tools for design, modeling, information processing, communication, and system control, have greatly increased human productivity and knowledge. • identify and describe the function of the major components of a computer system. • use the computer as a tool for generating and drawing ideas. • control computerized devices and systems through programming. • model and simulate the design of a complex environment by giving direct commands. This is evident, for example, when students: □ control the operation of a toy or household appliance by programming it to perform a task. □ execute a computer program, such as SimCity, Theme Park, or The Factory to model and simulate an environment. □ model and simulate a system using construction modeling software, such as The Incredible Machine. Technological Systems 4. Technological systems are designed to achieve specific results and produce outputs, such as products, structures, services, energy, or other systems. • identify familiar examples of technological systems that are used to satisfy human needs and wants, and select them on the basis of safety, cost, and function. • assemble and operate simple technological systems, including those with interconnecting mechanisms to achieve different kinds of movement. • understand that larger systems are made up of smaller component subsystems. This is evident, for example, when students: □ assemble and operate a system made up from a battery, switch, and doorbell connected in a series circuit. 
□ assemble a system with interconnecting mechanisms, such as a jack-in-the-box that pops up from a box with a hinged lid. □ model a community-based transportation system which includes subsystems such as roadways, rails, vehicles, and traffic controls. History and Evolution of Technology 5. Technology has been the driving force in the evolution of society from an agricultural to an industrial to an information base. • identify technological developments that have significantly accelerated human progress. This is evident, for example, when students: □ construct a model of an historical or future-oriented technological device or system and describe how it has contributed or might contribute to human progress. □ make a technological timeline in the form of a hanging mobile of technological devices. □ model a variety of timekeeping devices that reflect historical and modern methods of keeping time. □ make a display contrasting early devices or tools with their modern counterparts. Impacts of Technology 6. Technology can have positive and negative impacts on individuals, society, and the environment and humans have the capability and responsibility to constrain or promote technological development. • describe how technology can have positive and negative effects on the environment and on the way people live and work. This is evident, for example, when students: □ hand make an item and then participate in a line production experience where a quantity of the item is mass produced; compare the benefits and disadvantages of mass production and craft production. □ describe through example how familiar technologies (including computers) can have positive and negative impacts on the environment and on the way people live and work. □ identify the pros and cons of several possible packaging materials for a student-made product. Management of Technology 7. 
Project management is essential to ensuring that technological endeavors are profitable and that products and systems are of high quality and built safely, on schedule, and within budget. • participate in small group projects and in structured group tasks requiring planning, financing, production, quality control, and follow-up. • speculate on and model possible technological solutions that can improve the safety and quality of the school or community environment. This is evident, for example, when students: □ help a group to plan and implement a school project or activity, such as a school picnic or a fund-raising event. □ plan, as a group, the division of tasks and construction steps needed to build a simple model of a structure or vehicle. □ redesign the work area in their classroom with an eye toward improving safety. Standard 5-Technology Students will apply technological knowledge and skills to design, construct, use, and evaluate products and systems to satisfy human and environmental needs. Engineering Design 1. Engineering design is an iterative process involving modeling and optimization used to develop technological solutions to problems within given constraints. Students engage in the following steps in a design process: • identify needs and opportunities for technical solutions from an investigation of situations of general or social interest. • locate and utilize a range of printed, electronic, and human information resources to obtain ideas. • consider constraints and generate several ideas for alternative solutions, using group and individual ideation techniques (group discussion, brainstorming, forced connections, role play); defer judgment until a number of ideas have been generated; evaluate (critique) ideas; and explain why the chosen solution is optimal. • develop plans, including drawings with measurements and details of construction, and construct a model of the solution, exhibiting a degree of craftsmanship. 
• in a group setting, test their solution against design specifications, present and evaluate results, describe how the solution might have been modified for different or better results, and discuss tradeoffs that might have to be made. This is evident, for example, when students: □ reflect on the need for alternative growing systems in desert environments and design and model a hydroponic greenhouse for growing vegetables without soil. □ brainstorm and evaluate alternative ideas for an adaptive device that will make life easier for a person with a disability, such as a device to pick up objects from the floor. □ design a model vehicle (with a safety belt restraint system and crush zones to absorb impact) to carry a raw egg as a passenger down a ramp and into a barrier without damage to the egg. □ assess the performance of a solution against various design criteria, enter the scores on a spreadsheet, and see how varying the solution might have affected total score. Tools, Resources, and Technological Process 2. Technological tools, materials, and other resources should be selected on the basis of safety, cost, availability, appropriateness, and environmental impact; technological processes change energy, information, and material resources into more useful forms. • choose and use resources for a particular purpose based upon an analysis and understanding of their properties, costs, availability, and environmental impact. • use a variety of hand tools and machines to change materials into new forms through forming, separating, and combining processes, and processes which cause internal change to occur. • combine manufacturing processes with other technological processes to produce, market, and distribute a product. • process energy into other forms and information into more meaningful information. 
This is evident, for example, when students: □ choose and use resources to make a model of a building and explain their choice of materials based upon physical properties such as tensile and compressive strength, hardness, and brittleness. □ choose materials based upon their acoustic properties to make a set of wind chimes. □ use a torch to heat a steel rod to a cherry red color and cool it slowly to demonstrate how the process of annealing changes the internal structure of the steel and removes its brittleness. □ change materials into new forms using separating processes such as drilling and sawing. □ process energy into other forms such as assembling a solar cooker using a parabolic reflector to convert light energy to heat energy. □ process information into more meaningful information such as adding a music track or sound effects to an audio tape. Computer Technology 3. Computers, as tools for design, modeling, information processing, communication, and system control, have greatly increased human productivity and knowledge. • assemble a computer system including keyboard, central processing unit and disc drives, mouse, modem, printer, and monitor. • use a computer system to connect to and access needed information from various Internet sites. • use computer hardware and software to draw and dimension prototypical designs. • use a computer as a modeling tool. • use a computer system to monitor and control external events and/or systems. This is evident, for example, when students: □ use computer hardware and a basic computer-aided design package to draw and dimension plans for a simple project. □ use a computer program, such as Car Builder, to model a vehicle to desired specifications. □ use temperature sensors to monitor and control the temperature of a model greenhouse. □ model a computer-controlled system, such as traffic lights, a merry-go-round, or a vehicle using Lego or other modeling hardware interfaced to a computer. Technological Systems 4. 
Technological systems are designed to achieve specific results and produce outputs, such as products, structures, services, energy, or other systems. • select appropriate technological systems on the basis of safety, function, cost, ease of operation, and quality of post purchase support. • assemble, operate, and explain the operation of simple open- and closed-loop electrical, electronic, mechanical, and pneumatic systems. • describe how subsystems and system elements (inputs, processes, outputs) interact within systems. • describe how system control requires sensing information, processing it, and making changes. This is evident, for example, when students: □ assemble an electronic kit that includes sensors and signaling devices and functions as an alarm system. □ use several open loop systems (without feedback control) such as a spray can, bubble gum machine, or wind-up toys, and compare them to closed-loop systems (with feedback control) such as an electric oven with a thermostat, or a line tracker robot. □ use a systems diagram to model a technological system, such as a model rocket, with the command inputs, resource inputs, processes, monitoring and control mechanisms, and system outputs labeled. □ provide examples of modern machines where microprocessors receive information from sensors and serve as controllers. History and Evolution of Technology 5. Technology has been the driving force in the evolution of society from an agricultural to an industrial to an information base. • describe how the evolution of technology led to the shift in society from an agricultural base to an industrial base to an information base. • understand the contributions of people of different genders, races, and ethnic groups to technological development. • describe how new technologies have evolved as a result of combining existing technologies (e.g., photography combined optics and chemistry; the airplane combined kite and glider technology with a lightweight gasoline engine). 
This is evident, for example, when students: □ construct models of technological devices (e.g., the plow, the printing press, the digital computer) that have significantly affected human progress and that illustrate how the evolution of technology has shifted the economic base of the country. □ develop a display of pictures or models of technological devices invented by people from various cultural backgrounds, along with photographs and short biographies of the inventors. □ make a poster with drawings and photographs showing how an existing technology is the result of combining various technologies. Impacts of Technology 6. Technology can have positive and negative impacts on individuals, society, and the environment and humans have the capability and responsibility to constrain or promote technological development. • describe how outputs of a technological system can be desired, undesired, expected, or unexpected. • describe through examples how modern technology reduces manufacturing and construction costs and produces more uniform products. This is evident, for example, when students: □ use the automobile, for example, to explain desired (easier travel), undesired (pollution), expected (new jobs created), unexpected (crowded highways and the growth of suburbs) impacts. □ provide an example of an assembly line that produces products with interchangeable parts. □ compare the costs involved in producing a prototype of a product to the per product cost of a batch of 100. Management of Technology 7. Project management is essential to ensuring that technological endeavors are profitable and that products and systems are of high quality and built safely, on schedule, and within budget. • manage time and financial resources in a technological project. 
• provide examples of products that are well (and poorly) designed and made, describe their positive and negative attributes, and suggest measures that can be implemented to monitor quality during production. • assume leadership responsibilities within a structured group activity. This is evident, for example, when students: □ make up and follow a project work plan, time schedule, budget, and a bill of materials. □ analyze a child's toy and describe how it might have been better made at a lower cost. □ assume leadership on a team to plan an audio or video communication system, and use it for an intended purpose (e.g., to inform, educate, persuade, entertain). Standard 5-Technology Students will apply technological knowledge and skills to design, construct, use, and evaluate products and systems to satisfy human and environmental needs. Engineering Design 1. Engineering design is an iterative process involving modeling and optimization used to develop technological solutions to problems within given constraints. Students engage in the following steps in a design process: • initiate and carry out a thorough investigation of an unfamiliar situation and identify needs and opportunities for technological invention or innovation. • identify, locate, and use a wide range of information resources including subject experts, library references, magazines, videotapes, films, electronic data bases and on-line services, and discuss and document through notes and sketches how findings relate to the problem. • generate creative solution ideas, break ideas into the significant functional elements, and explore possible refinements; predict possible outcomes using mathematical and functional modeling techniques; choose the optimal solution to the problem, clearly documenting ideas against design criteria and constraints; and explain how human values, economics, ergonomics, and environmental considerations have influenced the solution. 
• develop work schedules and plans which include optimal use and cost of materials, processes, time, and expertise; construct a model of the solution, incorporating developmental modifications while working to a high degree of quality (craftsmanship). • in a group setting, devise a test of the solution relative to the design criteria and perform the test; record, portray, and logically evaluate performance test results through quantitative, graphic, and verbal means; and use a variety of creative verbal and graphic techniques effectively and persuasively to present conclusions, predict impacts and new problems, and suggest and pursue modifications. This is evident, for example, when students: □ search the Internet for world wide web sites dealing with renewable energy and sustainable living and research the development and design of an energy efficient home. □ develop plans, diagrams, and working drawings for the construction of a computer-controlled marble sorting system that simulates how parts on an assembly line are sorted by color. □ design and model a portable emergency shelter for a homeless person that could be carried by one person and be heated by the body heat of that person to a life-sustaining temperature when the outside temperature is 20°F. Tools, Resources, and Technological Processes 2. Technological tools, materials, and other resources should be selected on the basis of safety, cost, availability, appropriateness, and environmental impact; technological processes change energy, information, and material resources into more useful forms. • test, use, and describe the attributes of a range of material (including synthetic and composite materials), information, and energy resources. • select appropriate tools, instruments, and equipment and use them correctly to process materials, energy, and information. 
• explain tradeoffs made in selecting alternative resources in terms of safety, cost, properties, availability, ease of processing, and disposability. • describe and model methods (including computer-based methods) to control system processes and monitor system outputs. This is evident, for example, when students: □ use a range of high-tech composite or synthetic materials to make a model of a product (e.g., a ski, an airplane, an earthquake-resistant building) and explain their choice of material. □ design a procedure to test the properties of synthetic and composite materials. □ select appropriate tools, materials, and processes to manufacture a product (chosen on the basis of market research) that appeals to high school students. □ select the appropriate instrument and use it to test voltage and continuity when repairing a household appliance. □ construct two forms of packaging (one from biodegradable materials, the other from any other materials) for a children's toy and explain the tradeoffs made when choosing one or the other. □ describe and model a method to design and evaluate a system that dispenses candy and counts the number dispensed using, for example, Fischertechnik, Capsela, or Lego. □ describe how the flow, processing, and monitoring of materials is controlled in a manufacturing plant and how information processing systems provide inventory, tracking, and quality control data. Computer Technology 3. Computers, as tools for design, modeling, information processing, communication, and system control, have greatly increased human productivity and knowledge. • understand basic computer architecture and describe the function of computer subsystems and peripheral devices. • select a computer system that meets personal needs. 
• attach a modem to a computer system and telephone line, set up and use communications software, connect to various on-line networks, including the Internet, and access needed information using e-mail, telnet, gopher, ftp, and web searches. • use computer-aided drawing and design (CADD) software to model realistic solutions to design problems. • develop an understanding of computer programming and attain some facility in writing computer programs. This is evident, for example, when students: □ choose a state-of-the-art computer system from computer magazines, price the system, and justify the choice of CPU, CD-ROM and floppy drives, amount of RAM, video and sound cards, modem, printer, and monitor; explain the cost-benefit tradeoffs they have made. □ use a computer-aided drawing and design package to design and draw a model of their own room. □ write a computer program that works in conjunction with a bar code reader and an optical sensor to distinguish between light and dark areas of the bar code. Technological Systems 4. Technological systems are designed to achieve specific results and produce outputs, such as products, structures, services, energy, or other systems. • explain why making tradeoffs among characteristics, such as safety, function, cost, ease of operation, quality of post purchase support, and environmental impact, is necessary when selecting systems for specific purposes. • model, explain, and analyze the performance of a feedback control system. • explain how complex technological systems involve the confluence of numerous other systems. This is evident, for example, when students: □ model, explain, and analyze how the float mechanism of a toilet tank senses water level, compares the actual level to the desired level, and controls the flow of water into the tank. □ draw a labeled system diagram which explains the performance of a system, and include several subsystems and multiple feedback loops. 
□ explain how the space shuttle involves communication, transportation, biotechnical, and manufacturing systems. History and Evolution of Technology 5. Technology has been the driving force in the evolution of society from an agricultural to an industrial to an information base. • explain how technological inventions and innovations have caused global growth and interdependence, stimulated economic competitiveness, created new jobs, and made other jobs obsolete. This is evident, for example, when students: □ compare qualitatively and quantitatively the performance of a contemporary manufactured product, such as a household appliance, to the comparable device or system of 50-100 years ago, and present results graphically, orally, and in writing. □ describe the process that an inventor must follow to obtain a patent for an invention. □ explain through examples how some inventions are not translated into products and services with marketplace demand, and therefore do not become commercial successes. Impacts of Technology 6. Technology can have positive and negative impacts on individuals, society, and the environment, and humans have the capability and responsibility to constrain or promote technological development. • explain that although technological effects are complex and difficult to predict accurately, humans can control the development and implementation of technology. • explain how computers and automation have changed the nature of work. • explain how national security is dependent upon both military and nonmilitary applications of technology. This is evident, for example, when students: □ develop and implement a technological device that might be used to assist a disabled person perform a task. □ identify a technology which impacts negatively on the environment and design and model a technological fix.
□ identify new or emerging technologies and use a futuring technique (e.g., futures wheel, cross-impact matrix, Delphi survey) to predict what might be the second- and third-order impacts. Management of Technology 7. Project management is essential to ensuring that technological endeavors are profitable and that products and systems are of high quality and built safely, on schedule, and within budget. • develop and use computer-based scheduling and project tracking tools, such as flow charts and graphs. • explain how statistical process control helps to assure high quality output. • discuss the role technology has played in the operation of successful U.S. businesses and under what circumstances they are competitive with other countries. • explain how technological inventions and innovations stimulate economic competitiveness and how, in order for an innovation to lead to commercial success, it must be translated into products and services with marketplace demand. • describe new management techniques (e.g., computer-aided engineering, computer-integrated manufacturing, total quality management, just-in-time manufacturing), incorporate some of these in a technological endeavor, and explain how they have reduced the length of design-to-manufacture cycles, resulted in more flexible factories, and improved quality and customer satisfaction. • help to manage a group engaged in planning, designing, implementation, and evaluation of a project to gain understanding of the management dynamics. This is evident, for example, when students: □ design and carry out a plan to create a computer-based information system that could be used to help manage a manufacturing system (e.g., monitoring inventory, measurement of production rate, development of a safety signal). □ identify several successful companies and explain the reasons for their commercial success.
□ organize and implement an innovative project, based on market research, that involves design, production, testing, marketing, and sales of a product or a service. Standard 6-Interconnectedness: Common Themes Students will understand the relationships and common themes that connect mathematics, science, and technology and apply the themes to these and other areas of learning. Systems Thinking 1. Through systems thinking, people can recognize the commonalities that exist among all systems and how parts of a system interrelate and combine to perform specific functions. • observe and describe interactions among components of simple systems. • identify common things that can be considered to be systems (e.g., a plant population, a subway system, human beings). 2. Models are simplified representations of objects, structures, or systems used in analysis, explanation, interpretation, or design. • analyze, construct, and operate models in order to discover attributes of the real thing. • discover that a model of something is different from the real thing but can be used to study the real thing. • use different types of models, such as graphs, sketches, diagrams, and maps, to represent various aspects of the real world. This is evident, for example, when students: □ compare toy cars with real automobiles in terms of size and function. □ model structures with building blocks. □ design and construct a working model of the human circulatory system to explore how varying pumping pressure might affect blood flow. □ describe the limitations of model cars, planes, or houses. □ use model vehicles or structures to illustrate how the real object functions. □ use a road map to determine distances between towns and cities. Magnitude and Scale 3. 
The grouping of magnitudes of size, time, frequency, and pressures or other units of measurement into a series of relative order provides a useful way to deal with the immense range and the changes in scale that affect the behavior and design of systems. • provide examples of natural and manufactured things that belong to the same category yet have very different sizes, weights, ages, speeds, and other measurements. • identify the biggest and the smallest values as well as the average value of a system when given information about its characteristics and behavior. This is evident, for example, when students: □ compare the weight of small and large animals. □ compare the speed of bicycles, cars, and planes. □ compare the life spans of insects and trees. □ collect and analyze data related to the height of the students in their class, identifying the tallest, the shortest, and the average height. □ compare the annual temperature range of their locality. Equilibrium and Stability 4. Equilibrium is a state of stability due either to a lack of changes (static equilibrium) or a balance between opposing forces (dynamic equilibrium). • cite examples of systems in which some features stay the same while other features change. • distinguish between reasons for stability from lack of changes to changes that counterbalance one another to changes within cycles. This is evident, for example, when students: □ record their body temperatures in different weather conditions and observe that the temperature of a healthy human being stays almost constant even though the external temperature changes. □ identify the reasons for the changing amount of fresh water in a reservoir and determine how a constant supply is maintained. Patterns of Change 5. Identifying patterns of change is necessary for making predictions about future behavior and conditions. • use simple instruments to measure such quantities as distance, size, and weight and look for patterns in the data. 
• analyze data by making tables and graphs and looking for patterns of change. This is evident, for example, when students: □ compare shoe size with the height of people to determine if there is a trend. □ collect data on the speed of balls rolling down ramps of different slopes and determine the relationship between speed and steepness of the ramp. □ take data they have collected and generate tables and graphs to begin the search for patterns of change. 6. In order to arrive at the best solution that meets criteria within constraints, it is often necessary to make trade-offs. • determine the criteria and constraints of a simple decision making problem. • use simple quantitative methods, such as ratios, to compare costs to benefits of a decision problem. This is evident, for example, when students: □ describe the criteria (e.g., size, color, model) and constraints (e.g., budget) used to select the best bicycle to buy. □ compare the cost of cereal to number of servings to figure out the best buy. Standard 6-Interconnectedness: Common Themes Students will understand the relationships and common themes that connect mathematics, science, and technology and apply the themes to these and other areas of learning. Systems Thinking 1. Through systems thinking, people can recognize the commonalities that exist among all systems and how parts of a system interrelate and combine to perform specific functions. • describe the differences between dynamic systems and organizational systems. • describe the differences and similarities between engineering systems, natural systems, and social systems. • describe the differences between open- and closed-loop systems. • describe how the output from one part of a system (which can include material, energy, or information) can become the input to other parts. 
This is evident, for example, when students: □ compare systems with internal control (e.g., homeostasis in organisms or an ecological system) to systems of related components without internal control (e.g., the Dewey decimal system, the solar system). 2. Models are simplified representations of objects, structures, or systems used in analysis, explanation, interpretation, or design. • select an appropriate model to begin the search for answers or solutions to a question or problem. • use models to study processes that cannot be studied directly (e.g., when the real process is too slow, too fast, or too dangerous for direct observation). • demonstrate the effectiveness of different models to represent the same thing and the same model to represent different things. This is evident, for example, when students: □ choose a mathematical model to predict the distance a car will travel at a given speed in a given time. □ use a computer simulation to observe the process of growing vegetables or to test the performance of cars. □ compare the relative merits of using a flat map or a globe to model where places are situated on Earth. □ use blueprints or scale models to represent room plans. Magnitude and Scale 3. The grouping of magnitudes of size, time, frequency, and pressures or other units of measurement into a series of relative order provides a useful way to deal with the immense range and the changes in scale that affect the behavior and design of systems. • cite examples of how different aspects of natural and designed systems change at different rates with changes in scale. • use powers of ten notation to represent very small and very large numbers. This is evident, for example, when students: □ demonstrate that a large container of hot water (more volume) cools off more slowly than a small container (less volume).
□ compare the very low frequencies (60 Hertz AC, or 6 x 10^1 Hertz) to the mid-range frequencies (10^8 Hertz, FM radio) to the higher frequencies (10^15 Hertz) of the electromagnetic spectrum. Equilibrium and Stability 4. Equilibrium is a state of stability due either to a lack of changes (static equilibrium) or a balance between opposing forces (dynamic equilibrium). • describe how feedback mechanisms are used in both designed and natural systems to keep changes within desired limits. • describe changes within equilibrium cycles in terms of frequency or cycle length and determine the highest and lowest values and when they occur. This is evident, for example, when students: □ compare the feedback mechanisms used to keep a house at a constant temperature to those used by the human body to maintain a constant temperature. □ analyze the data for the number of hours of sunlight from the shortest day to the longest day of the year. Standard 6-Interconnectedness: Common Themes Students will understand the relationships and common themes that connect mathematics, science, and technology and apply the themes to these and other areas of learning. Patterns of Change 5. Identifying patterns of change is necessary for making predictions about future behavior and conditions. • use simple linear equations to represent how a parameter changes with time. • observe patterns of change in trends or cycles and make predictions on what might happen in the future. This is evident, for example, when students: □ study how distance changes with time for a car traveling at a constant speed. □ use a graph of a population over time to predict future population levels. 6. In order to arrive at the best solution that meets criteria within constraints, it is often necessary to make trade-offs. • determine the criteria and constraints and make trade-offs to determine the best decision. • use graphs of information for a decision making problem to determine the optimum solution.
This is evident, for example, when students: □ choose components for a home stereo system. □ determine the best dimensions for fencing in the maximum area. Standard 6-Interconnectedness: Common Themes Students will understand the relationships and common themes that connect mathematics, science, and technology and apply the themes to these and other areas of learning. Systems Thinking 1. Through systems thinking, people can recognize the commonalities that exist among all systems and how parts of a system interrelate and combine to perform specific functions. • explain how positive feedback and negative feedback have opposite effects on system outputs. • use an input-process-output-feedback diagram to model and compare the behavior of natural and engineered systems. • define boundary conditions when doing systems analysis to determine what influences a system and how it behaves. This is evident, for example, when students: □ describe how negative feedback is used to control loudness automatically in a stereo system and how positive feedback from loudspeaker to microphone results in louder and louder squeals. 2. Models are simplified representations of objects, structures, or systems used in analysis, explanation, interpretation, or design. • revise a model to create a more complete or improved representation of the system. • collect information about the behavior of a system and use modeling tools to represent the operation of the system. • find and use mathematical models that behave in the same manner as the processes under investigation. • compare predictions to actual observations using test models. This is evident, for example, when students: □ add new parameters to an existing spreadsheet model. □ incorporate new design features in a CAD drawing. □ use computer simulation software to create a model of a system under stress, such as a city or an ecosystem. □ design and construct a prototype to test the performance of a temperature control system. 
□ use mathematical models for scientific laws, such as Hooke's Law or Newton's Laws, and relate them to the function of technological systems, such as an automotive suspension system. □ use sinusoidal functions to study systems that exhibit periodic behavior. □ compare actual populations of animals to the numbers predicted by predator/prey computer simulations. Magnitude and Scale 3. The grouping of magnitudes of size, time, frequency, and pressures or other units of measurement into a series of relative order provides a useful way to deal with the immense range and the changes in scale that affect the behavior and design of systems. • describe the effects of changes in scale on the functioning of physical, biological, or designed systems. • extend their use of powers of ten notation to understanding the exponential function and performing operations with exponential factors. This is evident, for example, when students: □ explain that an increase in the size of an animal or a structure requires larger supports (legs or columns) because of the greater volume or weight. □ use the relationship v = fλ to determine wavelength when given the frequency of an FM radio wave, such as 100.0 megahertz (1.0 x 10^8 Hertz), and the velocity of light or EM waves as 3 x 10^8 m/sec. Equilibrium and Stability 4. Equilibrium is a state of stability due either to a lack of changes (static equilibrium) or a balance between opposing forces (dynamic equilibrium). • describe specific instances of how disturbances might affect a system's equilibrium, from small disturbances that do not upset the equilibrium to larger disturbances (threshold level) that cause the system to become unstable. • cite specific examples of how dynamic equilibrium is achieved by equality of change in opposing directions. This is evident, for example, when students: □ use mathematical models to predict under what conditions the spread of a disease will become epidemic.
□ document the range of external temperatures in which warm-blooded animals can maintain a relatively constant internal temperature and identify the extremes of cold or heat that will cause death. □ experiment with chemical or biological processes in which the flow of materials in one direction is counterbalanced by the flow of materials in the opposite direction. Patterns of Change 5. Identifying patterns of change is necessary for making predictions about future behavior and conditions. • use sophisticated mathematical models, such as graphs and equations of various algebraic or trigonometric functions. • search for multiple trends when analyzing data for patterns, and identify data that do not fit the trends. This is evident, for example, when students: □ use a sine pattern to model the property of a sound or electromagnetic wave. □ use graphs or equations to model exponential growth of money or populations. □ explore historical data to determine whether the growth of a parameter is linear or exponential or both. 6. In order to arrive at the best solution that meets criteria within constraints, it is often necessary to make trade-offs. • use optimization techniques, such as linear programming, to determine optimum solutions to problems that can be solved using quantitative methods. • analyze subjective decision making problems to explain the trade-offs that can be made to arrive at the best solution. This is evident, for example, when students: □ use linear programming to figure the optimum diet for farm animals. □ evaluate alternative proposals for providing people with more access to mass transportation systems. Standard 7-Interdisciplinary Problem Solving Students will apply the knowledge and thinking skills of mathematics, science, and technology to address real-life problems and make informed decisions. 1.
The knowledge and skills of mathematics, science, and technology are used together to make informed decisions and solve problems, especially those relating to issues of science/technology/society, consumer decision making, design, and inquiry into phenomena. • analyze science/technology/society problems and issues that affect their home, school, or community, and carry out a remedial course of action. • make informed consumer decisions by applying knowledge about the attributes of particular products and making cost/benefit tradeoffs to arrive at an optimal choice. • design solutions to problems involving a familiar and real context, investigate related science concepts to inform the solution, and use mathematics to model, quantify, measure, and compute. • observe phenomena and evaluate them scientifically and mathematically by conducting a fair test of the effect of variables and using mathematical knowledge and technological tools to collect, analyze, and present data and conclusions. This is evident, for example, when students: □ develop and implement a plan to reduce water or energy consumption in their home. □ choose paper towels based on tests of absorption quality, strength, and cost per sheet. □ design a wheeled vehicle, sketch and develop plans, test different wheel and axle designs to reduce friction, chart results, and produce a working model with correct measurements. □ collect leaves of similar size from different varieties of trees, and compare the ratios of length to width in order to determine whether the ratios are the same for all species. 2. Solving interdisciplinary problems involves a variety of skills and strategies, including effective work habits; gathering and processing information; generating and analyzing ideas; realizing ideas; making connections among the common themes of mathematics, science, and technology; and presenting results. Students participate in an extended, culminating mathematics, science, and technology project. 
The project would require students to: • work effectively • gather and process information • generate and analyze ideas • observe common themes • realize ideas • present results This is evident, for example, when students, addressing the issue of solid waste at the school in an interdisciplinary science/technology/society project: □ use the newspaper index to find out how solid waste is handled in their community, interview the custodial staff to collect data about how much solid waste is generated in the school, and make and use tables and graphs to look for patterns of change. Students work together to reach consensus on the need for recycling and on choosing a material to recycle, in this case paper. □ investigate the types of paper that could be recycled, measure the amount (weight, volume) of this type of paper in their school during a one-week period, and calculate the cost. Students investigate the processes involved in changing used paper into a usable product and how and why those changes work as they do. □ using simple mixers, wire screens, and lint, leaves, rags, etc., students recycle used paper into usable sheets and evaluate the quality of the product. They present their results using charts, graphs, illustrations, and photographs to the principal and custodial staff. Skills and Strategies for Interdisciplinary Problem Solving Working Effectively: Contributing to the work of a brainstorming group, laboratory partnership, cooperative learning group, or project team; planning procedures; identifying and managing responsibilities of team members; and staying on task, whether working alone or as part of a group. Gathering and Processing Information: Accessing information from printed media, electronic data bases, and community resources and using the information to develop a definition of the problem and to research possible solutions.
Generating and Analyzing Ideas: Developing ideas for proposed solutions, investigating ideas, collecting data, and showing relationships and patterns in the data. Common Themes: Observing examples of common unifying themes, applying them to the problem, and using them to better understand the dimensions of the problem. Realizing Ideas: Constructing components or models, arriving at a solution, and evaluating the result. Presenting Results: Using a variety of media to present the solution and to communicate the results. Standard 7-Interdisciplinary Problem Solving Students will apply the knowledge and thinking skills of mathematics, science, and technology to address real-life problems and make informed decisions. 1. The knowledge and skills of mathematics, science, and technology are used together to make informed decisions and solve problems, especially those relating to issues of science/technology/society, consumer decision making, design, and inquiry into phenomena. • analyze science/technology/society problems and issues at the local level and plan and carry out a remedial course of action. • make informed consumer decisions by seeking answers to appropriate questions about products, services, and systems; determining the cost/benefit and risk/benefit tradeoffs; and applying this knowledge to a potential purchase. • design solutions to real-world problems of general social interest related to home, school, or community using scientific experimentation to inform the solution and applying mathematical concepts and reasoning to assist in developing a solution. • describe and explain phenomena by designing and conducting investigations involving systematic observations, accurate measurements, and the identification and control of variables; by inquiring into relevant mathematical ideas; and by using mathematical and technological tools and procedures to assist in the investigation. 
This is evident, for example, when students: □ improve a habitat for birds at a park or on school property. □ choose a telescope for home use based on diameter of the telescope, magnification, quality of optics and equatorial mount, cost, and ease of use. □ design and construct a working model of an air filtration device that filters out particles above a particular size. □ simulate population change using a simple model (e.g., different colors of paper clips to represent different species of birds). Timed removals of clips from plastic cups represent the action of predators, and varying the percentage of clips returned to the cups represents differences in reproductive rates. Students apply mathematical modeling techniques to graph population growth changes and make interpretations related to resource depletion. 2. Solving interdisciplinary problems involves a variety of skills and strategies, including effective work habits; gathering and processing information; generating and analyzing ideas; realizing ideas; making connections among the common themes of mathematics, science, and technology; and presenting results. Students participate in an extended, culminating mathematics, science, and technology project. The project would require students to: • work effectively • gather and process information • generate and analyze ideas • observe common themes • realize ideas • present results This is evident, for example, when students, addressing the issue of auto safety in an interdisciplinary science/technology/society project: □ use an electronic data base to obtain information on the causes of auto accidents and use e-mail to collect information from government agencies and auto safety organizations. Students gather, analyze, and chart information on the number and causes of auto accidents in their county and look for trends.
□ design and construct a model vehicle with a restraint system to hold a raw egg as the passenger and evaluate the effectiveness of the restraint system by rolling the vehicle down a ramp and into a barrier; the vehicle is designed with crush zones to absorb the impact. Students analyze forces and compute acceleration using F=ma calculations. They present their results, including a videotaped segment, to a driver education class. Skills and Strategies for Interdisciplinary Problem Solving Working Effectively: Contributing to the work of a brainstorming group, laboratory partnership, cooperative learning group, or project team; planning procedures; identifying and managing responsibilities of team members; and staying on task, whether working alone or as part of a group. Gathering and Processing Information: Accessing information from printed media, electronic data bases, and community resources and using the information to develop a definition of the problem and to research possible solutions. Generating and Analyzing Ideas: Developing ideas for proposed solutions, investigating ideas, collecting data, and showing relationships and patterns in the data. Common Themes: Observing examples of common unifying themes, applying them to the problem, and using them to better understand the dimensions of the problem. Realizing Ideas: Constructing components or models, arriving at a solution, and evaluating the result. Presenting Results: Using a variety of media to present the solution and to communicate the results. Standard 7-Interdisciplinary Problem Solving Students will apply the knowledge and thinking skills of mathematics, science, and technology to address real-life problems and make informed decisions. 1. The knowledge and skills of mathematics, science, and technology are used together to make informed decisions and solve problems, especially those relating to issues of science/technology/society, consumer decision making, design, and inquiry into phenomena.
• analyze science/technology/society problems and issues on a community, national, or global scale and plan and carry out a remedial course of action. • analyze and quantify consumer product data, understand environmental and economic impacts, develop a method for judging the value and efficacy of competing products, and discuss cost/benefit and risk/benefit tradeoffs made in arriving at the optimal choice. • design solutions to real-world problems on a community, national, or global scale using a technological design process that integrates scientific investigation and rigorous mathematical analysis of the problem and of the solution. • explain and evaluate phenomena mathematically and scientifically by formulating a testable hypothesis, demonstrating the logical connections between the scientific concepts guiding the hypothesis and the design of an experiment, applying and inquiring into the mathematical ideas relating to investigation of phenomena, and using (and if needed, designing) technological tools and procedures to assist in the investigation and in the communication of results. This is evident, for example, when students: □ analyze the issues related to local energy needs and develop a viable energy generation plan for the community. □ choose whether it is better to purchase a conventional or high definition television after analyzing the differences from quantitative and qualitative points of view, considering such particulars as the number of scanning lines, bandwidth requirements and impact on the frequency spectrum, costs, and existence of international standards. □ design and produce a prototypical device using an electronic voltage divider that can be used to power a portable cassette tape or CD player in a car by reducing the standard automotive accessory power source of approximately 14.8 volts to a lower voltage. □ investigate two similar fossils to determine if they represent a developmental change over time. 2.
Solving interdisciplinary problems involves a variety of skills and strategies, including effective work habits; gathering and processing information; generating and analyzing ideas; realizing ideas; making connections among the common themes of mathematics, science, and technology; and presenting results. Students participate in an extended, culminating mathematics, science, and technology project. The project would require students to: • work effectively • gather and process information • generate and analyze ideas • observe common themes • realize ideas • present results This is evident, for example, when students, addressing the issue of emergency preparedness in an interdisciplinary science/technology/society project: □ are given a scenario: survivors from a disaster are stranded on a mountaintop in the high peaks of the Adirondacks. They are challenged to design a portable shelter that could be heated by the body heat of five survivors to a life-sustaining temperature, given an outside temperature of -20° F. Since the shelter would be dropped to survivors by an aircraft, it must be capable of withstanding the impact. Students determine the kinds of data to be collected, for example, snowfall during certain months, average wind velocity, R value of insulating materials, etc. To conduct their research, students gather and analyze information from research data bases, national libraries, and electronic communication networks, including the Internet. □ design and construct scale models or full-sized shelters based on engineering design criteria including wind load, snow load, and insulating properties of materials. Heat flow calculations are done to determine how body heat could be used to heat the shelter. Students evaluate the trade-offs that they make to arrive at the best solution; for example, in order to keep the temperature at 20 degrees F, the shelter may have to be small, and survivors would be very uncomfortable.
Another component of the project is assembly instructions designed so that speakers of any language could quickly install the structure on site.
□ prepare a multimedia presentation about their project and present it to the school's ski club.

Skills and Strategies for Interdisciplinary Problem Solving

Working Effectively: Contributing to the work of a brainstorming group, laboratory partnership, cooperative learning group, or project team; planning procedures; identifying and managing responsibilities of team members; and staying on task, whether working alone or as part of a group.

Gathering and Processing Information: Accessing information from printed media, electronic data bases, and community resources and using the information to develop a definition of the problem and to research possible solutions.

Generating and Analyzing Ideas: Developing ideas for proposed solutions, investigating ideas, collecting data, and showing relationships and patterns in the data.

Common Themes: Observing examples of common unifying themes, applying them to the problem, and using them to better understand the dimensions of the problem.

Realizing Ideas: Constructing components or models, arriving at a solution, and evaluating the result.

Presenting Results: Using a variety of media to present the solution and to communicate the results.
[Image-SIG] perspective transform
Jeff Breidenbach jbreiden at parc.com
Wed Feb 16 22:39:41 CET 2005

PIL's Image module already has support for affine transformations. Is there any interest in adding support for the perspective transformation as well? Something like:

im.transform(size, PERSPECTIVE, data)

Applies a perspective transform to the image, and places the result in a new image with the given size. Data is an 8-tuple (a, b, c, d, e, f, g, h) which contains the coefficients for a perspective transform. For each pixel (x, y) in the output image, the new value is taken from a position ((a x + b y + c)/(g x + h y + 1), (d x + e y + f)/(g x + h y + 1)) in the input image, rounded to the nearest pixel. This function can be used to change the 2D perspective of the original image.
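The mapping described in the proposed docstring can be sketched in pure Python. This is only an illustration of the formula above, not PIL's actual implementation (which is in C); the `fill` value for pixels that map outside the source image is my own addition for the sketch.

```python
# Sketch of the proposed perspective mapping: for each output pixel (x, y),
# sample the input at ((a*x + b*y + c) / (g*x + h*y + 1),
#                      (d*x + e*y + f) / (g*x + h*y + 1)).
# Images are represented as plain lists of rows for illustration.

def perspective_transform(src, size, coeffs, fill=0):
    a, b, c, d, e, f, g, h = coeffs
    out_w, out_h = size
    src_h = len(src)
    src_w = len(src[0]) if src_h else 0
    out = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            w = g * x + h * y + 1.0             # projective divisor
            u = round((a * x + b * y + c) / w)  # source column
            v = round((d * x + e * y + f) / w)  # source row
            # Pixels that map outside the source get the fill value.
            row.append(src[v][u] if 0 <= u < src_w and 0 <= v < src_h else fill)
        out.append(row)
    return out

# With identity coefficients (1,0,0, 0,1,0, 0,0) the output equals the input.
img = [[1, 2], [3, 4]]
print(perspective_transform(img, (2, 2), (1, 0, 0, 0, 1, 0, 0, 0)))  # -> [[1, 2], [3, 4]]
```

Note that g and h are what make the transform projective: with g = h = 0 the divisor is 1 and the mapping reduces to the affine case PIL already supports.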
Thompson Navigated

Message to the Reader

I apologize for the considerable space in this volume of Northwest Journal which is devoted to early 19th century navigation, as many readers may have only a passing interest in this topic. However, please recognize that the vast majority of the information presented in these articles has never before appeared in print. Such a complete assessment of David Thompson's techniques and skill has never been done before. I hope that the information in these articles will spur new interest in assessing the skills and contributions made by all North American geographers of that time. —J.

See also Period Navigation

The Author lecturing on Thompson's techniques at Old Fort William. (Photo by A. Gottfred)

David Thompson is famous for his early exploration and mapping of western Canada and the northwestern United States. From 1790 to 1812, he traveled the Northwest using a sextant and compass to record valuable navigational information. He used this information to make some of the earliest detailed maps of the northwestern U.S. and western Canada. Paradoxically, although his navigational skills gave Thompson his claim to fame, they are poorly understood by both historians and geographers. How did he calculate his latitude and longitude, and how accurate was he? In this issue, I will use examples from Thompson's notes to illustrate and explain the navigational methods that he used.

There are a number of challenges to studying Thompson's navigational methods. The techniques of celestial navigation have changed significantly in the last two hundred years, but there are still a number of common denominators. Marine navigators still rely on sextants today, although there is increasing reliance on other navigational instruments such as GPS receivers. (Sextants are no longer used for land navigation and mapping.) Modern navigators use the Marcq St.
Hilaire method, which requires two sextant observations and a highly accurate knowledge of Greenwich Mean Time, to find their latitude and longitude simultaneously. A nautical almanac (ephemeris) provides the positions of the celestial bodies for the current year, in a format designed to be easily used with modern navigational methods. Electronic calculators or tables of pre-calculated solutions are used to simplify the calculations and ensure accuracy.

In Thompson's day, finding latitude and longitude were two separate problems, requiring a number of different sextant observations made at different times. A nautical almanac provided the positions of the celestial bodies for the current year, including data on lunar distances for use in determining longitude. A large array of tables such as those in Neville Maskelyne's Tables Requisite (e.g. proportional logarithms, log-trig tables, double altitude tables, etc.) were used to simplify calculations and ensure accuracy.

In this volume of Northwest Journal, I present a case study of David Thompson's navigational methods, skill, and accuracy. The case study examines Thompson's journey from Boggy Hall to the Whirlpool River, from October 19, 1810 until January 7, 1811. This is an important period because Thompson was near the end of his fur trade career (he retired in 1812) and had twenty years of navigational experience behind him. He was about to make two of his most important journeys: crossing the Athabasca Pass and descending the Columbia River to its mouth. So it is important to have a baseline appraisal of Thompson's skill, by examining his first few steps along the trail and evaluating his ability to accurately plot his position. The case study provides an example of each type of sextant observation made by Thompson during this period. I explain the purpose of the observation and show how he used it to help determine his latitude or longitude.
In addition, I demonstrate that his 'goods shed' (possibly the first 'Henry House' on the Athabasca) was not located at the south end of Brûlé Lake, contrary to the currently accepted position.

To my knowledge, most of the information presented in these articles has never before been published. Smyth (1981) and Sebert (1981) both discuss methods used by Thompson, but do not seem to be heavily based on Thompson's data; instead, both authors turn to an examination of methods used by marine navigators of Thompson's time. Stewart (1936) and Smith (1961) compared the accuracy of Thompson's maps to modern maps of the same areas.

In his journals, Thompson recorded all the information about his astronomical observations that he would need to re-do his calculations and double-check them at a later date. He also recorded key steps (intermediate solutions) in his calculations. Using Thompson's own data and recomputing his answers step by step, I will show:
—How he periodically recomputed the index error of his sextant, and how this information can be used to show that the accuracy of his instrument and his observational skill were excellent (Article IV, p. 15-16).
—How he found his latitude by observing meridian altitudes of the sun, and that the observation made at the 'goods hut' was made using his reflecting artificial horizon and was accurately computed (Article V, p. 16-17).
—How he found his latitude by double altitudes—observing two altitudes of the sun or other star, separated by a time interval measured on his watch—and how this data can be used to show that Thompson was capable of making highly accurate runs of sight while away from the conveniences of fort life (Article VI, p. 17-20).
—How he computed Greenwich Apparent Time by observing lunar distances, and how lunar distance longitudes can be made by only one person who observes only the lunar distance and the altitude of some other body.
I also show that this was the technique actually used by Thompson (Article VII, p. 21-28, & Article VIII, p. 28-31).
—How, with a knowledge of Greenwich Apparent Time, he computed longitude by observing the sun or another body, how he kept his watch set correctly to local time, and how he computed magnetic declination ('compass variation'). In addition I provide an estimate of the accuracy of his watch and how he used his watch in general (Article VIII, p. 28-31).

Along with these demonstrations, I also:
—Describe, in general, Thompson's navigational routine and discuss my assessment of his diligence (Article II, p. 4-7).
—Provide guidelines for assessing the accuracy of any position computed by Thompson (Article II).
—Re-examine the location of the 'goods shed' constructed by Thompson and his men December 5-29, 1810, in the light of the case study (Article III, p. 7-14).
—Compute a completely new latitude for the goods shed which, in conjunction with Thompson's latitude, demonstrates that the accepted location is incorrect (Article IX, p. 31-34).
—Provide a complete explanation of each kind of notation (including marginal notations) in Thompson's astronomical notes for the period of the case study (passim).
—Provide a glossary of navigational terms (Article X, p. 35-37).

It should also be noted that Articles V through IX assume a basic knowledge of modern celestial navigation as used on small boats today. For background information on navigational instruments and the general principles of celestial navigation during Thompson's time, see Gottfred, 'Period Navigation'. More details on Thompson's navigational instruments are in Gottfred, 'Life', p. 3-6.

By far the most accessible way for club members to see Thompson's data is through Columbia Journals, edited by Barbara Belyea. Columbia Journals is an excellent transcription of selections from David Thompson's daily notes from October 1800 to September 1811.
Belyea has done an outstanding job of deciphering Thompson's often faint and nearly illegible handwriting, and presenting this information in a clear form true to the original. With this volume, Belyea has made a large portion of Thompson's textual descriptions of where he is and what he is doing available to the reader.

Columbia Journals is not intended to be a complete transcription of all of Thompson's material for this period. Belyea has selected portions of the journals which focus on Thompson's efforts to cross the Rocky Mountains and establish fur posts and trade routes between the Rockies and the Pacific. She omits most of Thompson's navigational information. Consequently, for the period of the case study, the manuscript journals contain much additional useful information about all the celestial observations. For the data presented in this case study, I have used Thompson's course and distance information as it appears in Columbia Journals, and his celestial navigation information as it appears in the original manuscripts. A complete list of references and resources is provided at the end of Article X.

I have heard comments to the effect that Thompson was a sloppy surveyor. Is this so? In my opinion, for the period of the case study, the answer is yes. During the period of the case study, Thompson observed eleven lunar distances. He computed nine of these correctly, and in two he either made significant errors in computing the true distance, or he copied down the wrong data. He incorrectly writes 'N' instead of 'S' for solar declinations four times during this period, but he does not use them incorrectly in his calculations (Nov. 2, 3, 13, 21). He mis-identifies stars on two occasions and confuses himself to the point that he throws out good data (see Art. III). At one point he writes down 27º 49' 45" as 27º 29' 45", and ruins a day's work before he notices the error. However, he does go back and correct (almost) everything (Art. III).
In his lunar distance calculations he sometimes records right ascensions in hours (which they should be), and sometimes he records the value after he has converted hours to degrees. Confusingly, he usually fails to note which of these units he is using (p. 23, 24). Almost every page of computations has values scratched out and changed or written over. In this regard it is not a 'neat' copy.

Thompson does record everything required to go back and recompute his answers at a later date. Every observation has the temperature recorded, as well as each time and altitude pair taken during the run of sights. It must be remembered that Thompson likely suffered from a shortage of paper, and this is reflected by the fact that many intermediate steps in his calculations are missing from the pages, and the data is crammed into as small a space as possible. Overall, the notes strike one as being made by an individual for his own use. Certain items which were obvious to the note-keeper have not been recorded for posterity. Anyone viewing this material with an eye to information content for posterity would be justified in calling the work 'sloppy'.

Yet Reliable

Does this mean that Thompson's work is unreliable? In my opinion, no. The information is recorded in a fashion which made sense to the owner. Over the last ten years I have used my own sextant under many different conditions to replicate all of the techniques that Thompson used. I can honestly say that his notes make sense to me, and that his observational skill was probably better than mine. How he records information in his notebooks says nothing about his observational skill, and I have found much evidence that his skill was considerable. Broad-based comparisons made by looking at Thompson's positions compared to modern positions may fail to account for the varying reliability of different observational techniques, and the possibility of errors in calculation.
On the other hand, if one takes the time to carefully examine Thompson's observations, to recalculate them to check for errors, to assess the accuracy of the method used for each observation, and the conditions under which they were made, then error bars may be placed around each observation. In fact, it is even possible to find additional useful data. For example, in the case study I found two additional latitudes which are perfectly sound, and which were helpful in reconstructing his movements (Article IX). The error-bounded observations should then form the control points through which his course and distance positions must pass.

General assessments of the accuracies of Thompson's observations should be avoided; instead, each observation should be independently assessed. However, if Thompson's calculations are accurate, if a mercury reflecting artificial horizon was used for all observations (except measuring lunar distance), and if the apparent altitudes of the bodies weren't too close to the horizon, then I feel that it should be generally safe to assume that any latitude by meridian transit observation would be correct within 1½ nautical miles, and any latitude from a double altitude observation should be correct to within 2 nm. This is based on my own experience using Thompson's methods in the field.

Synopsis of Thompson's Navigational Routine

Thompson's general routine for determining his position using the sun and the moon (for example) was as follows: Upon arrival at a new camp, Thompson would try to obtain an accurate latitude. If possible, he would observe a meridian transit of the sun ('noon sight') (Art. V). If this was not possible, he would make two observations of the sun one hour apart, which he would then use to compute a latitude with the double altitude method (Art. VI). If the moon was in a convenient location, Thompson would observe the distance between the moon and the sun (Art. VII).
Then, within half an hour or so, he would observe the altitude of the sun—a 'time shot' (Art. VIII). When making these lunar distance observations, he checked the index error of his sextant to make sure that it had not changed (Art. IV).

Using the observed altitude of the sun, the latitude computed earlier, and the declination of the sun as determined from the nautical almanac for the approximate Greenwich time as based on his ded reckoning longitude of the observation, Thompson computed the local apparent time. He then reset his watch to the correct local apparent time. This helped to ensure that he did not miss the next day's noon sight due to the inaccuracy of his watch.

Sometimes, Thompson would note the compass bearing to the sun at the instant of the time shot. From his knowledge of his latitude, the sun's declination, and the observed altitude, he could compute the sun's true bearing (azimuth). The difference between the true bearing and the magnetic compass bearing was the magnetic variation (declination) at his position (Art. VIII).

From his knowledge of his latitude, the local apparent time, and the declination of the sun, he then computed a close approximation of the sun's altitude at the instant of the lunar distance observation. Then, from his knowledge of the local apparent time, his latitude, the declination of the moon based on his approximation of the time in Greenwich, and the difference in the right ascensions of the sun and the moon at the approximate Greenwich time of his observation, he computed a close approximation of the true altitude of the moon at the instant of the lunar distance observation (Art. VII). From the close approximations of the moon's and sun's altitudes, combined with a highly accurate observation of the lunar distance, he then 'cleared the distance' of the effects of refraction and lunar parallax to determine an accurate true lunar distance between the sun and the moon for the local apparent time of the observation.
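The geometry behind the 'time shot' and the final longitude step can be sketched numerically. This is not Thompson's actual worksheet (he worked with log-trig tables rather than a calculator), and the altitude, latitude, and declination figures below are illustrative, not taken from his journals: the navigational triangle gives the sun's local hour angle from its altitude, which yields local apparent time; the difference between that and the Greenwich Apparent Time obtained from the lunars converts to longitude at 15 degrees per hour.

```python
import math

# Solve the navigational triangle for the body's local hour angle:
# sin(alt) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(t)
def local_apparent_time(alt_deg, lat_deg, dec_deg, afternoon=True):
    h, lat, dec = (math.radians(x) for x in (alt_deg, lat_deg, dec_deg))
    cos_t = (math.sin(h) - math.sin(lat) * math.sin(dec)) / (
        math.cos(lat) * math.cos(dec))
    t_hours = math.degrees(math.acos(cos_t)) / 15.0  # hour angle in hours
    # Before noon the sun is east of the meridian, so time runs toward noon.
    return 12.0 + t_hours if afternoon else 12.0 - t_hours

def longitude_deg(local_apparent_hours, greenwich_apparent_hours):
    # West longitudes come out negative: local time lags Greenwich time.
    return (local_apparent_hours - greenwich_apparent_hours) * 15.0

# Afternoon sun at 10 deg true altitude, latitude N53.4, declination S22:
t_local = local_apparent_time(10.0, 53.4, -22.0)
# If the lunar distance gave a Greenwich Apparent Time 7.86 hours later,
# the longitude works out to roughly 118 degrees West.
print(round(longitude_deg(t_local, t_local + 7.86), 1))  # -> -117.9
```

The same hour-angle formula also underlies the azimuth computation mentioned above; with stars the extra step is converting the star's hour angle into apparent solar time, which is the 'slight complication' noted for star sights.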
He then used the nautical almanac to determine the apparent time in Greenwich at which the moon would be at the distance that he observed. The difference between his local apparent time and the apparent time in Greenwich, converted to degrees, resulted in his longitude (see Art. VII & VIII).

Thompson also used the stars to compute lunar distances and double altitudes. The techniques are generally the same, but with a slight complication for the computation of local apparent time.

The Location of David Thompson's 'Goods Shed'

In December 1810, while preparing for his epic journey over the Athabasca Pass, David Thompson built a 'goods shed' near the Athabasca River in the vicinity of Brûlé Lake (near today's Jasper National Park). Thompson explains why he built the shed in his memoirs:

'...our Guide told me it was of no use at this late season to think of going any further with Horses...but from this place prepare ourselves with Snow Shoes and Sleds to cross the Mountains: Accordingly the next day we began to make Log Huts to secure the Goods, and Provisions, and shelter ourselves from the cold and bad weather...' (Glover, 318)

Thompson and his men built at least two buildings here: a storehouse and a 'meat shed'. They spent December 5-29 at this spot. On December 30, Thompson left the goods shed, leaving behind North West Company clerk William Henry and half of his twenty-four men. It is generally accepted that this shed was located at the south end of Brûlé Lake (Glover, 318n; Belyea, 253), but the precise location has not been found. Using my knowledge of his navigational methods I have come to the conclusion that the goods shed was actually located approximately three miles up Solomon Creek at the north end of Brûlé Lake.

David Thompson's movements from late October to early December, 1810 took him from Boggy Hall to this 'goods shed'. While traveling, he used his compass to note courses and recorded distance measurements.
He also made frequent observations using his sextant. Before trying to follow his trail it is necessary to try to get a feel for the accuracy of these observations. The course observations can be tricky to interpret. Thompson generally uses a compass to determine his course. His compass readings are magnetic headings, so correcting them for plotting on a modern map requires some knowledge of the magnetic declination of the area as it was in 1810. (Magnetic declination changes slowly over time as the north magnetic pole wanders.) In this regard I must caution the reader, for Columbia Journals contains many declination values which seem to be magnetic declinations. In fact, these values are the declinations of celestial bodies (usually the sun) and therefore cannot be used to correct magnetic compass headings. This is a perfectly understandable error for anyone unfamiliar with nautical astronomy. Thompson confuses matters by occasionally noting solar declinations as north ; these declinations are south during the fall and winter months. He does not use them in his calculations as north declinations, and in other places (most notably in the lunar distance calculations) he indicates that they are south declinations. To add to the confusion on this point, the text in Columbia Journals has an 'N' after the declination values listed under November 26 and December 6 which do not appear in the manuscript, and an 'N' after the declination for November 28 which is actually an 'S' in the manuscript. (All other declinations in Columbia Journals for the period of the case study are correct.) Incidentally, in Thompson's time, magnetic declination was called 'variation' and that term is used by marine navigators today (Belyea, 183; Bowditch, 85). (For an example of how Thompson differentiates between magnetic and celestial declinations see Belyea, 78.) 
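Correcting one of Thompson's magnetic compass courses for true magnetic declination (variation) is a one-line calculation: an easterly variation is added to the magnetic bearing, a westerly one subtracted. The course value in the example is illustrative only; the variation is in the range Thompson recorded for this area.

```python
# Convert a magnetic compass bearing to a true bearing.
# variation_deg is the magnetic declination: positive for East, negative
# for West. (Celestial declinations, as the text warns, are a different
# quantity entirely and cannot be used here.)
def true_course(magnetic_deg, variation_deg):
    return (magnetic_deg + variation_deg) % 360.0

# A compass course of SW (225 deg magnetic) with 22.5 deg East variation,
# roughly the value Thompson recorded in this area:
print(true_course(225.0, 22.5))  # -> 247.5, about WSW true
```

The modulo keeps the result in the 0-360 range when the correction carries a northerly bearing across north.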
During the period of the case study, Thompson only records one variation: '22º or 23º East', measured while at the 'Goods Shed'.

Usually what Thompson is recording is a dead reckoning course: an estimated position allowing for deviations around obstructions, minor bends in the river, &c. What he is really saying is 'we went down this river through a bunch of twists and turns and around a mountain over a distance of maybe ten miles, but I figure that our new position is six miles south west of our last recorded position.' (Modern navigators call this kind of course ded ('deduced') reckoning.) Not only are these bearings his estimate as a navigator, but sometimes he estimates his bearings by the sun rather than using his compass. (Thompson's compass may not be entirely reliable (Bowditch, 9-10).) I generally assume that his courses are correct to within plus or minus twenty degrees.

Thompson estimates his distance traveled, and I usually assume that his distances are accurate to plus or minus ten percent. In the few journeys that I have plotted, I have been impressed by his ability to judge distances. I feel that his distance estimates are more reliable than his course bearings.

The sextant observations are more interesting. The general accuracy of celestial navigation 200 years ago was basically identical to that of today. The tables of astronomical data and physical phenomena required for navigation appear to have been just as accurate as today's. A commander in the Royal Navy in 1899 stated that 'the maximum error of lunar table[s] may now be considered to be 10 seconds' (Sebert, 412). My modern nautical almanac states that the error in the position of the moon alone may reach 18 seconds! (United States Naval Observatory (USNO), 261) The sextants were also just as accurate as modern ones. My modern Astra IIIB sextant reads to 12" of arc and has an accuracy of plus or minus 20". David Thompson's sextant could read to 15" of arc; I don't know its accuracy.
However, my instrument in conjunction with modern tables is barely accurate enough to compute longitudes by lunar distance, and my results are generally not as good as Thompson's. Therefore Thompson's instrument and tables must have been at least as good as mine, and probably better. (I have found good evidence to support this by looking at Thompson's index error calculations. See Art. IV.)

Thompson's most inaccurate instrument was his timepiece. Accurate chronometers were very expensive, and Thompson did not have one. Instead, he used two or more 'common watches' (pocket watches) with second hands. Today, navigators can compute both latitude and longitude from a pair of observations. In Thompson's time, due to the lack of accurate watches, latitude and longitude observations were done separately.

The accuracy of any given observation depends upon what type it was. Thompson uses three different techniques for observing latitudes. The first is called a double meridian altitude observation, usually of the sun. This means that he is observing the height of the sun when it is at high noon (crossing the meridian), and he is doing it using his parallel glasses, a reflecting artificial horizon made by placing a glass cover over a bowl of mercury. ('Double' refers to the fact that the altitude measured with an artificial horizon is twice what would have been measured using a sea horizon.) Using the same techniques today, I generally compute latitudes to an accuracy better than half of a nautical mile (1 nm = 1852 meters). On good days my accuracy is within 300 meters. Based on my comparisons with David Thompson's data from Rocky Mountain House, as well as the data in the case study, I feel that for any of Thompson's double meridian altitude observations it is reasonable to assume that his observation would place him within 1½ nautical miles of his true latitude.

Thompson sometimes mentions a second type of latitude observation which he calls a meridian altitude.
This is an ambiguous term. In most cases, it means that he made the observation using his artificial horizon, but in some cases it means that he has used a local body of water as an artificial horizon. This technique involves using the far shore of a lake or long river as a level, and estimating the distance to the far shore. The navigator then applies a correction to the observation based on this estimate. This technique is more properly known as the dip short method. It is far less reliable than using an artificial horizon because of the difficulty of accurately estimating the distance to the far shore. Thompson did not make any dip short observations during the case study.

The third method of observing for latitude is called a double altitude observation. This means that the navigator has made two observations of the same celestial body from the same position, separated by a time interval measured on a common watch. The idea is that the common watch, although not accurate enough to keep time over several days, is accurate enough to keep the time for about an hour. This means that two observations of the same star, separated by a known time interval, allow the navigator to compute his latitude directly using spherical trigonometry. The accuracy of this method depends upon a few variables. First, if both observations were made using an artificial horizon then the results can be very accurate. If one or both observations used the dip short method, then the result will be fairly suspect. For observations made using an artificial horizon and from one to two hours apart, it should be safe to say that the accuracy of the observation would be plus or minus two nautical miles.

Finding longitude involves making highly accurate observations of lunar distances. Thompson makes eleven observations for longitude at three locations during the period of the case study. The method which he used is fully explained in articles VI and VII.
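The double altitude problem described above can be sketched numerically. Thompson solved it with spherical trigonometry and log tables; the sketch below solves the same geometry by bisection instead, and all the input values are illustrative rather than taken from his journals. It assumes the simplest case: both sights of the same body, in the afternoon with the altitude falling, from a northern latitude.

```python
import math

# Hour angle of a body from its true altitude, the latitude, and the
# declination, via sin(alt) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(t).
def hour_angle_deg(alt_deg, lat_deg, dec_deg):
    h, lat, dec = (math.radians(x) for x in (alt_deg, lat_deg, dec_deg))
    cos_t = (math.sin(h) - math.sin(lat) * math.sin(dec)) / (
        math.cos(lat) * math.cos(dec))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def latitude_by_double_altitude(alt1_deg, alt2_deg, dec_deg, elapsed_hours):
    """Both sights in the afternoon (altitude falling), northern observer."""
    target = elapsed_hours * 15.0  # elapsed watch time as hour-angle change
    # Upper bound: the latitude at which the body only just culminates
    # at the first altitude; above that, the sight is impossible.
    lo, hi = 0.0, 90.0 - alt1_deg + dec_deg
    for _ in range(60):  # bisection: the hour-angle gap grows with latitude
        mid = (lo + hi) / 2.0
        gap = (hour_angle_deg(alt2_deg, mid, dec_deg)
               - hour_angle_deg(alt1_deg, mid, dec_deg))
        lo, hi = (mid, hi) if gap < target else (lo, mid)
    return (lo + hi) / 2.0

# Altitudes generated for latitude N53.4, declination S22, one hour apart:
print(round(latitude_by_double_altitude(13.489, 10.253, -22.0, 1.0), 1))  # -> 53.4
```

The sensitivity of the recovered latitude to small altitude errors is what makes the two-nautical-mile error bar quoted above reasonable for careful artificial-horizon sights.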
Unfortunately, the accuracy of lunar distance longitudes is poor at best. In general, any single observation will be no better than ±20' of longitude. If more observations from the same spot are averaged together, the result will be more reliable. More than a dozen observations must be averaged together to obtain accuracies within four or five minutes of longitude. During the period of the case study Thompson does not stay in one spot long enough to observe enough longitudes to pin his location down to better than ±20' of longitude.

Thompson's Trail

I began by tracing Thompson's route from a position near the Athabasca River on November 26, 1810 (Belyea, 125). On that day, Thompson and his party camp at 12:15 p.m., and Thompson soon makes a double altitude observation. He computes a latitude of N53º 30' 39". The next two days' travels take him northwest around a 'long lake' and along a brook to near the Athabasca River. On November 28, he observes a double meridian altitude of the sun and computes a latitude of N53º 37' 54". There is no reason to suspect that either latitude would be farther than 2 nm from his actual position.

In his November 27 journal entry, he draws a picture of the 'long lake'. It seems clear that this lake is Summit Lake, and the creek that they followed to the Athabasca River is Obed Creek.

Summit Lake—David Thompson's drawing of the lake they passed on November 27, 1810. It seems clear that this is Summit Lake. Obed Creek is shown emerging from the north west corner of the lake.

Thompson then begins to ascend the Athabasca, and on November 29 they camp near a large island in the river (¾ mile long). River islands are usually poor landmarks, since they erode rapidly, but this island would probably be big enough to persist. Such an island appears on my topographic map within one statute mile of where Thompson's course and distance information would place him on November 29. This camp would be on the Athabasca River at about N53º 31'.
The next day, Thompson continues up the river. A position for his camp based on his course and distance information on November 30 would be one mile north of the river at the junction of the Athabasca River and Maskuta Creek. This position tallies with his description: 'the river run around a large Point & is dist from our road thro' the Willow Plain & from the camp at abt 1M' (Belyea, 127). The Athabasca runs fairly straight up to this point, where it begins a series of curves into the mountains.

On the evening of November 30 he makes observations for latitude and longitude. He chooses the star Procyon for his longitude calculation. Unfortunately this star was still below the horizon when he made his observations! I suspect that he confused the star Pollux with Procyon. He notes in his journal that he has observed the 'wrong star.' On the evening of December 1, while at the same camp, he tries once more but again throws out the data noting 'wrong star'. (There is another possible explanation—perhaps he didn't observe the wrong star, perhaps he used the wrong star. Thompson's nautical almanac may not have included lunar distance tables for another star he observed that night, Algenib, making the observation useless.)

Two of the December 1 observations are of Aldebaran. I find it hard to imagine how Thompson could mis-identify this very bright (magnitude one) red star. The altitude that Thompson measures for the star is very close to what it should be for Aldebaran on that date and time. For these reasons, I think Thompson correctly identified Aldebaran. I see nothing else wrong with this Aldebaran observation. When I recompute it, I obtain the result N53º 25' 10". (Thompson calculated N53º 23' before throwing out all of his data for that night.) This latitude lies right on the spot that I obtain by plotting Thompson's course and distance information. I see no reason to think that it is not very close to the actual position.

On December 2 they set out again.
It is at this point that I feel Thompson's course diverges from the accepted one. Belyea states that he follows Maskuta Creek to the south end of Brulé Lake (253). However, Thompson says that they traveled south 1 mile to the bank, then southwest through plains and over brooks, and at the end of the day they had gone about 6 miles 'going to the right in curves' and the river was about one-third of a mile away. I feel that this indicates that they followed the course of the Athabasca River, not Maskuta Creek. They then set off towards the southwest, and met up with the banks of the river, which they followed. They continued southwest until they reached 'the entrance to the Flats which appear like a Lake' (Belyea, 129). I believe that this is the north end of Brulé Lake. Here they met some hunters, who took them to a Native hut on the lake. To get to this hut, they traveled southwest. I believe that they were traveling along the northwest shore of Brulé Lake. My estimate of their position would place them at modern-day Swan Landing. It may be significant that this spot is a hamlet today, as places where people meet tend to persist. Thompson says that the hunters' hut was small and dirty, and there was no grass for the horses, so they moved the next day. He says that they went north-northwest about 5 miles through aspen forest and camped 'near a small Fountain of Water amongst Pines & Aspins' (Belyea, 129). This is where they decided to build the 'goods shed'. If they were at Swan Landing on the northwest shore of Brûlé Lake the previous day, then Thompson's course and distance information would suggest a position for the hut somewhere up Solomon Creek at about N53º 23', Lo. 117º 53' W. (Solomon Creek flows southeast before emptying into the northwest end of Brûlé Lake.) I should note here that, although Thompson travels along Brûlé Lake, he doesn't say he is on or near a lake. 
This is likely because Brûlé Lake is very shallow, and would be at low water and frozen in December. Later, he does not mention the larger Jasper Lake. Both of these lakes are really just widenings in the river. On December 6, Thompson records a double meridian altitude of the sun, giving a latitude of N53º 23' 27", which is less than one nautical mile from the course and distance position. By examining Thompson's observations from the previous evening, I realized that I could use his observations of the stars Capella and Vega to compute a new latitude using the double meridian altitude method (see Art. IX for details). The position that I obtain is N 53º 21' 22", or two nautical miles from Thompson's December 6 position. Both of these two latitudes are about eight nautical miles from the south end of Brulé Lake, and effectively rule out that position as the location of the goods shed. This brings us to a possible error. Thompson compiled a table of observations which is reproduced in Belyea (314). Under December 1-6 he made the note 'Athabasca River, at the Shed Depot of Goods (Longitude of 4 observations)' and gives the position N53º 33' 33" Lo. 117º 36' 34". This latitude seems quite wrong. Where did it come from? On December 6, Thompson incorrectly copies the value 27° 49' 45" as 27° 29' 45". This causes him to compute a latitude for the goods shed of N53º 33' 33". He then seems to catch the error, and changes the copy to reflect the correct latitude. The incorrect value must have been copied to another journal or log and not corrected. The reader might be tempted to dismiss all of these calculations due to the seemingly large number of errors that I have described. To do so would be a mistake. I have recalculated Thompson's observations and, except as noted, they are correct. Also, an error of this sort is relatively easy for the navigator to catch, as Thompson did, because it gives a result that is markedly inconsistent with the other observations. 
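The effect of the copy error is easy to check: in a meridian altitude reduction the doubled sextant angle is halved, so a 20' slip in the copied value (27° 49' 45" read as 27° 29' 45") becomes exactly 10' of latitude, about ten nautical miles. A minimal sketch in Python, using the index, refraction, and semi-diameter corrections from the December 6 reduction discussed later (the `dms` helper is mine, not Thompson's notation):

```python
def dms(d, m=0, s=0):
    """Convert degrees-minutes-seconds to decimal degrees."""
    return d + m/60 + s/3600

def lat_from_doubled_alt(hs2):
    """Reduce a doubled meridian altitude of the sun (Dec. 6 corrections)."""
    ho = (hs2 - dms(0, 3)) / 2                   # remove index error, halve
    ho = ho - dms(0, 3, 48) + dms(0, 16, 18)     # refraction, semi-diameter
    return 90 - ho - dms(22, 30, 50)             # zenith distance less S declination

correct = lat_from_doubled_alt(dms(27, 49, 45))  # about 53°23'18" N
miscopy = lat_from_doubled_alt(dms(27, 29, 45))  # the miscopied value
shift_minutes = (miscopy - correct) * 60         # exactly 10' of latitude north
```

The 10' shift is close to the 10' 06" difference between Thompson's two tabulated latitudes; the remaining seconds presumably come from his table look-ups.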
I see nothing in the journals to indicate that the other values are not reliable. With the corroborating evidence of the newly computed latitude, I feel confident Thompson's value of N53º 23' 27" is within 1½ nm of his actual position. If we assume that the goods shed is on Solomon Creek, then is this consistent with an analysis of his journey to the Whirlpool River? I believe that it is. They departed the goods shed on December 30 and traveled southeast for five miles. Yet the Athabasca flows southwest from the south end of Brulé Lake. It would seem that they were actually traveling back to the north end of Brulé Lake. As mentioned earlier, Thompson does not describe this place as a lake, but as 'full of small Flats & Isles', in other words, a braided stream. His courses from here to the Whirlpool are not very accurate. This may be explained by the fact that on December 31 he gives an approximate course as judged by the sun. Again, I think taking his course directions too literally would be a mistake. However, there is a good way to judge whether or not a hut position on Solomon Creek is plausible, and that is to look at the distance traveled. Between December 30 and January 7 (arrival at the mouth of the Whirlpool), Thompson notes distances traveled, generally over good ground and in straight lines. They total 54½ miles. The distance from the proposed position of the hut on Solomon Creek to the mouth of the Whirlpool River is about 50 miles. This is about a 10% error on Thompson's part, which is consistent with his ability to judge distances over good ground. If the goods shed was located at the south end of Brulé Lake, then the actual distance from the hut to the mouth of Whirlpool River would be about 39 miles. Thompson's journal says he traveled 54½ miles. This is about a 30% error, which strikes me as being an unreasonably large error given the skill of the navigator and the type of terrain that he is covering. 
It is also inconsistent with the accuracy of the distances stated by Thompson on the first part of this journey. In summary, the latitude listed in Thompson's table for the 'goods shed' is erroneous, and should be discarded. In addition, I suggest that a position for the 'goods shed' approximately three miles up Solomon Creek is consistent with the navigational information supplied by Thompson.

The index error of a sextant is a correctable instrument error. Index error is caused by the sextant mirrors being not quite parallel, a normal condition for sextants even today. The navigator can easily determine this index error and correct for it in calculations by observing a distant star and noting what the instrument reads when the star images are properly aligned. In a perfectly tuned instrument, the angle should be zero, but most sextants will show a small positive or negative angle which must then be applied to every observation made with that instrument. The index error should also be monitored with every observation and periodically recomputed to ensure that the mirrors have not been bumped in transit &c. Failure to note the correct index error or to notice a change in the value over time is the most likely way for a systematic observational error to result in positions which are many nautical miles in error. On November 3, 21, and 26, David Thompson makes marginal notations in his calculations which clearly show that he is checking his index error on a regular basis, and modifying its value over time as the instrument reacts to the rigors of travel and changing climatic conditions.

David Thompson's journal entry for November 3, 1810 has an excellent example of an index error calculation which also demonstrates just how accurate his eye and his instrument are. In the corner of the page he makes the notes shown in the box at the right. Thompson is using the sun to compute the instrument's index error.
He does not do this by superimposing the two sun images as one does with a star. This is because the sun is not a point source of light, and judging when the two sun images are precisely overlapped is difficult. To obtain the maximum accuracy, Thompson first aligns the bottom of the sun image with the top and records a measurement of 35' 52". He then reverses the sun images and records 28' 45". Note that the second measurement is actually a negative measurement. He sums the two numbers and then takes their difference, which is 7' 7". He divides the difference by two in order to compute the index error for the center of the body, with the result of 3' 34". Noting that this is a positive value (on the arc), and therefore he must subtract this value from any observation that he makes, he records his instrument error correction as –3' 34", the value he actually uses.

By summing the two values he obtains 64' 37". Dividing by two will yield the sun's diameter. Dividing by two once again yields the sun's semi-diameter, 16' 9.4" (or 16.16'). Semi-diameters for the sun for each day are listed in the nautical almanac, because they change gradually throughout the year, as the earth revolves around the sun. The following table lists sun semi-diameters computed from Thompson's index error notes for the period of the case study (rounded to the nearest 0.1') compared to the actual values of the sun's semi-diameter for those dates, as listed in a 1996 nautical almanac.

Date            DT's S.D.   Actual S.D.
Nov. 3, 1810    16.2'       16.2'
Nov. 21, 1810   16.2'       16.2'
Nov. 26, 1810   16.3'       16.2'

Note that the almanac only gives values to the nearest 0.1' as this is the limit of the resolution of the eye. (This limit corresponds to a maximum theoretical accuracy of 185 meters on the ground.) This demonstrates that Thompson and his instrument can measure the semi-diameter of the sun to an accuracy of 0.1 of a minute.
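The two limb measurements are enough to recover both numbers Thompson derives. A short sketch using the November 3 readings (half the difference is the index error; a quarter of the sum is the semi-diameter):

```python
def dms(d, m=0, s=0):
    """Degrees (or hours), minutes, seconds to a decimal value."""
    return d + m/60 + s/3600

on_arc  = dms(0, 35, 52)   # limbs aligned, reading on the arc
off_arc = dms(0, 28, 45)   # images reversed, reading off the arc

# half the difference is the index error for the center of the body
index_error_min = (on_arc - off_arc) / 2 * 60    # about 3.56', i.e. 3' 34"
# the sum is twice the sun's diameter, so a quarter is the semi-diameter
semi_diameter_min = (on_arc + off_arc) / 4 * 60  # about 16.15'
```

Straight division of Thompson's figures gives 16' 9¼" rather than his noted 16' 9.4", but both round to the same 16.2' at almanac precision.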
Because this process is visually identical with actually making an altitude observation using a reflecting artificial horizon, this also demonstrates that both Thompson and his instrument were capable of measuring the height of the sun to an accuracy of 0.1'.

On December 6, 1810, while at the 'goods shed', Thompson records the following information : Thompson is saying that he has observed a meridian altitude of the sun's lower limb to determine his latitude, and that the sun's declination was 22º 30' 50" when he made the observation at 7 hours 48 minutes Greenwich Time by his watch. (The astronomical day started at noon in Thompson's time (Belyea, 274).) On December 6, local apparent noon at Brûlé Lake would be about 19:48 Greenwich Time. A meridian altitude of the sun ('noon shot') is still used by navigators at sea to determine latitude. When an artificial horizon is used, this type of observation may be called a double meridian altitude, because the angle measured by the sextant is twice what it would have been if a sea horizon had been used. We can use the data given by Thompson to recompute a latitude for his position (Thompson's data in italics) :

27º 49' 45"   Height of sun times 2 (artificial horizon),
     –3'      less index correction (see Art. IV),
27º 46' 45"   gives corrected height of the sun.
13º 53' 22"   Divide by 2 to get height of the sun,
     –3' 48"  less refraction of the air correction (ignore f),
13º 49' 34"   gives height of sun corrected for refraction.
    +16' 18"  Adding semi-diameter of sun for Dec. 6
14º 5' 52"    gives height of the sun as observed.
90º 0' 0"     Angle between the zenith and horizon,
14º 5' 52"–   less the height of sun,
75º 54' 8"    gives zenith distance.
22º 30' 50"–  Subtract south declination of sun for
53º 23' 18"   final latitude.

(Please note that 'height of the sun' is height of the sun above the horizon.) This latitude differs from Thompson's by only 9 seconds, or 278 meters on the ground.
Since I don't know what Thompson used as a refraction coefficient (f) to correct for local density altitude, the answers are essentially the same. This example clearly shows that Thompson used a reflecting artificial horizon for this observation and that he did not make any mathematical errors in calculating this latitude.

On November 3, 1810, David Thompson records some observations which were used to compute latitudes by double altitude. In this article, I present an example calculation for latitude using Thompson's data. Note that I will show a mathematical technique for computing the latitude which illustrates how it is done. Thompson would have used tables to simplify his calculations. Thompson's first observation is an upper limb observation of the sun (Thompson's data in italics) :

0h 33m 53s   43º 44' 0"
0h 34m 41s   43º 42' 0"
0h 35m 27s   43º 40' 0"
0h 34m 40s   43º 42' 0"   Average of values,
    –3' 34"  less index correction,
43º 38' 26"  gives Hs x 2 corrected.
21º 49' 13"  Divide by 2 to find Hs.
    –2' 30"  Less refraction correction.
21º 46' 43"  Corrected for refraction.
   –16' 12"  Less semi-diameter of sun.
21º 30' 31"  Height observed #1 (Ho1)

The reason for averaging the time and sextant readings is that the average values will fall on a line which is the best fit through all three observations. Thompson provides a second run of sights roughly an hour later :

1h 30m 54s   39º 25' 45"
1h 31m 37s   39º 21' 30"
1h 32m 21s   39º 16' 0"
1h 31m 37s   39º 21' 5"   Average of values,
    –3' 34"  less index correction,
39º 17' 31"  gives Hs x 2 corrected.
19º 38' 46"  Divide by 2 to find Hs.
    –2' 48"  Less refraction correction.
19º 35' 58"  Corrected for refraction.
   –16' 12"  Less semi-diameter of sun.
19º 19' 46"  Height observed #2 (Ho2)

Thompson notes the declination of the sun at the time of the first observation is 15º 4' 20" south.
I computed a declination at the time of the second observation based on the rate of change in the sun's declination for this time of the year as follows : The difference in time between the first and second observations is 56m 57s. The rate of change in the declination of the sun is 44" per hour on this day of the year, so the declination for the second observation would be 15º 5' 2" S. We now convert the time difference between the two observations (56m 57s) to arc in order to find the meridian hour angle t (see figure 1). By dividing the time interval by 4 minutes per degree of longitude, we can express angle t in degrees, and so we find that t is 14º 14' 15".

Find d

Now find d. For all of the following computations we will use the law of cosines for spherical triangles. It states that for any spherical triangle XYZ, consisting of sides of length x, y, z :

cos x = cos y cos z + sin y sin z cos X

(where angle X is the angle opposite side x). Therefore, substituting for the triangle Pn-Sun1-Sun2 :

cos d = cos PD1 cos PD2 + sin PD1 sin PD2 cos t

Figure 1 — The line segments connecting the points are all great circles on the surface of the earth. Pn— The north pole. Z— The observer's zenith. Sun1— the geographical position (GP) of the sun at the time of observation #1. Sun2— the geographical position of the sun at the time of observation #2. t— the meridian hour angle between Sun1 and Sun2. PD1— The polar distance of the sun at the time of observation #1. PD2— The polar distance of the sun at the time of observation #2. d— The distance between the GP's of Sun1 and Sun2. co-L— the observer's co-latitude (90º– latitude). co-Ho1— The co-height of the sun measured by observation #1. co-Ho2— The co-height of the sun measured by observation #2. A1— Angle Pn-Sun2-Sun1. A2— Angle Z-Sun2-Sun1. A3— Angle Pn-Sun2-Z.

We note that the polar distances PD1 and PD2 are 90º plus the declinations of the sun at observations #1 and #2 respectively. Solving, we find d = 13º 44' 44.6"

Find A1

The next step is to find the angle Pn-Sun2-Sun1.
Again, from the law of cosines for spherical triangles we can write :

cos PD1 = cos PD2 cos d + sin PD2 sin d cos A1

Solving, we find A1 = 91º 48' 45.0"

Find A2

Next, we find the angle Z-Sun2-Sun1 in spherical triangle Z-Sun2-Sun1. Once again we use the law of cosines for spherical triangles to write :

cos (co-Ho1) = cos (co-Ho2) cos d + sin (co-Ho2) sin d cos A2

Because sin x = cos (90º–x) and cos x = sin (90º–x) :

sin Ho1 = sin Ho2 cos d + cos Ho2 sin d cos A2

Solving, we find A2 = 78º 23' 26.2"

Find A3

Now we find the angle Pn-Sun2-Z in triangle Pn-Sun2-Sun1 by simply subtracting A2 from A1 : A3 = A1 – A2

Find L

Finally, we find the latitude, L. Once again from the law of cosines for spherical triangles we can write :

cos (co-L) = cos PD2 cos (co-Ho2) + sin PD2 sin (co-Ho2) cos A3

Using the same trigonometric rules earlier we can write :

sin L = cos PD2 sin Ho2 + sin PD2 cos Ho2 cos A3

Rearranging, we get :

L = arcsin (cos PD2 sin Ho2 + sin PD2 cos Ho2 cos A3)

Solving, we find L = 53º 8' 22". This value for L differs from Thompson's computation (53º 7' 57") by only 25". This deviation is easily explained by the fact that we did not use exactly the same declination as Thompson, nor did we apply the refraction correction coefficient f, nor did we account for Thompson's watch error rate. Even so, this nearly identical result clearly shows that Thompson was using his reflecting artificial horizon for these observations, and that his calculations are correct. Upon recalculating three possible latitude pairs for observations on that day which are separated by up to 2h 28m I found the mean value to be 53º 8' 36" ± 26". This is only ± 0.4 nm, which is excellent shooting for double altitude observations, and demonstrates that Thompson was able to accurately measure the sun's changing altitude over the course of three hours while away from the conveniences of fort life.

On November 21, 1810, David Thompson recorded three lunar distance observations in order to determine Greenwich Apparent Time (GAT). He needed to know GAT time, in conjunction with local time, to compute his longitude (see article VIII for details). Observing the motion of the moon to compute the time in Greenwich, England is based on the fact that the moon's proper motion relative to the stellar background is about 30' of arc per hour.
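The whole chain of the double altitude reduction can be reproduced numerically. A sketch, taking the observed heights, declinations, and time interval exactly as given above (small differences from the article's intermediate values come from rounding; the `dms` helper is mine):

```python
from math import radians, degrees, sin, cos, acos, asin

def dms(d, m=0, s=0):
    """Degrees-minutes-seconds to decimal degrees."""
    return d + m/60 + s/3600

Ho1, Ho2 = dms(21, 30, 31), dms(19, 19, 46)      # observed heights of the sun
dec1, dec2 = dms(15, 4, 20), dms(15, 5, 2)       # south declinations
PD1, PD2 = radians(90 + dec1), radians(90 + dec2)  # polar distances
h1, h2 = radians(Ho1), radians(Ho2)
t = radians((56 + 57/60) / 4)    # 56m 57s at 4 minutes per degree -> 14°14'15"

# side d: distance between the sun's two geographical positions
d = acos(cos(PD1)*cos(PD2) + sin(PD1)*sin(PD2)*cos(t))
# A1: angle Pn-Sun2-Sun1
A1 = acos((cos(PD1) - cos(PD2)*cos(d)) / (sin(PD2)*sin(d)))
# A2: angle Z-Sun2-Sun1 (co-altitudes replace the polar distances)
A2 = acos((sin(h1) - sin(h2)*cos(d)) / (cos(h2)*sin(d)))
A3 = A1 - A2                      # angle Pn-Sun2-Z
# latitude from triangle Pn-Sun2-Z
L = degrees(asin(cos(PD2)*sin(h2) + sin(PD2)*cos(h2)*cos(A3)))
```

Running this gives a latitude within a few arcseconds of the 53º 8' 22" found above.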
This means that in about 12 seconds of time the moon will move far enough, relative to another object on the ecliptic, for the distance it moved to be measured. Since the moon's motion is predicted with high accuracy in the nautical almanac, this means that in theory an observer could determine at what GAT time the moon would be seen at the observed distance to an accuracy within 12 seconds of time. This would allow the observer (in conjunction with another observation) to compute a longitude to a theoretical accuracy of about 3'. In practice, accuracies of ± 20' are all that can be achieved. However, several observations taken from the same place and averaged together can yield significantly higher accuracy. Although the basic idea is simple, it is complicated by the distorting effects of the refraction of the earth's atmosphere. The refraction and parallax corrections for an observation taken perpendicular to the earth's surface are easily obtained from tables. However, lunar distance observations cut across the sky at oblique angles, and computing the effects of refraction and lunar parallax are more challenging. Correcting for these effects is called 'clearing the distance', and an example of Thompson's method is given below. Contemporary books describing lunar distance observations for mariners recommended that four observers and three sextants be used to obtain the maximum accuracy. One observer measured the distance between the moon and another object on the ecliptic, another observed the altitude of the moon above the horizon, a third observed the height of the second body above the horizon, and the fourth called out the time so that all these observations could be made simultaneously. Thompson worked alone (he only had one sextant), so how he could have made these observations has been a puzzle. 
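The resolution figures quoted here follow directly from the moon's rate of motion, assuming the 0.1' reading limit discussed earlier. A quick check:

```python
moon_motion_per_hour = 30.0   # arcminutes the moon moves against the stars per hour
eye_resolution = 0.1          # arcminutes, practical limit of a sextant reading

# time for the moon to move one resolvable step
time_step_s = eye_resolution / moon_motion_per_hour * 3600   # 12 seconds of time
# a 12-second error in Greenwich time, at 15° of longitude per hour
lon_err_arcmin = time_step_s / 3600 * 15 * 60                # 3' of longitude
```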
In his article on Thompson's method for longitude, Sebert notes that : 'On smaller ships, and in Thompson's case, it was the custom for one observer to read all the angles, assisted only by a locally trained timekeeper. The readings, in the order taken, were the air temperature (for calculating refraction), the moon's elevation and time, the star's elevation and time, the lunar distance and time, the star's elevation and time, and the moon's elevation and time.' (Sebert, 408. cf. Garnett, 31-32.) Smyth also mentions this method (Smyth, 14-15). All of these observations should be made as quickly as possible. The idea is to determine the rate of change in the lunar and stellar altitudes, and compute what their altitudes would have been at the instant of the lunar distance observation. I have used this technique myself and it works admirably. However, David Thompson, although diligently recording all other observations, makes no such 'bracketing observations' of his lunar distances. In fact, for the period of the case study he never observes the moon's elevation at all. In short, Thompson's journals for the period of the case study do not show that he made his longitude observations in the manner Sebert & Smyth both suggest. The solution to this puzzle is found in Thompson's marginal notes for each lunar distance that he observes. For each distance observation, he notes the right ascension of the sun and moon, as well as their declinations. This information is not required for clearing a distance if the altitudes have been observed. If, however, the altitudes of the moon and second celestial body have not been observed, then they can be calculated from knowledge of the observer's known latitude and estimated longitude, using the right ascensions and declinations given in the nautical almanac for the approximate GAT of the observation. At first glance this may seem like a paradox— if Thompson knows where he is then why is he trying to figure out where he is? 
It is important to understand that Thompson knows where he is within some ever-widening circle of uncertainty. The object of all of these observations is to refine this estimate of his position and shrink the circle of uncertainty. After all, the art of navigation is the art of staying found. The mathematics of clearing lunar distances seem to be quite tolerant of errors in the computed altitudes of the moon and second body. What is critical is that the distance between the two be measured with high accuracy. Some preliminary calculations that I have done suggest that as long as his dead reckoning longitude was correct to within 30', it makes little or no difference to the cleared distance. However, more study is required before this figure can be accepted. Sebert notes that this method was used in Thompson's time :

'...in the more advanced navigational texts there is a method given for the case where the horizon cannot be seen and only the lunar distance is read. The solution involves first calculating the true altitude of the two bodies, and then applying the corrections for refraction and parallax in reverse. It is a very long problem.' (Sebert, 412)

This was exactly what Thompson was doing for every lunar distance I have examined, yet Sebert does not seem aware of Thompson's use of this method. (Readers of Sebert's paper should know that he makes a couple of minor errors in his calculations. On p. 409, he renders 51º 28' 35" as 51.4676º. The correct value is 51.4764º. The moon's refraction on p. 410 is 4' 20" which is 4.33', not 4.2'. These early errors throw off Sebert's calculations for both examples as well as his conclusion. The answer obtained using Borda's method should give the same result as using Young's formula given at the end of this article.)

On November 21, 1810 Thompson observes a lunar distance between the sun and the moon's near limbs. In the margin he notes the following information :

                                     H    '    "
¤'s AR [sun's right ascension]—     15 .. 46 .. 16
[sun's declination]—                19 .. 54½ S
[moon's right ascension]—          176 .. 15 .. 15
Dec [moon's declination]—           57 1/6' N
S.D. [moon's semi-diameter]—        15' 9½"
HP [moon's horizontal parallax]—    55 .. 37
¤'s TA [sun's true altitude]—       11 .. 44 .. 9
¤'s AA [sun's apparent altitude]—   11 .. 48 .. 50
[moon's true altitude]—             32 .. 20 .. 53
[moon's apparent altitude]—         31 .. 35 .. 10
TD [true lunar distance]—           62 .. 32 .. 36
+ 2-5 + 2-20 + 9"
117º 13' W

Thompson lists these values using little abbreviated symbols, the meaning of which should be clear if you know what the abbreviations are. All of the values are actually in degrees, minutes, and seconds of arc except for the sun's AR which is in hours, minutes and seconds of time. The moon's HP is in minutes and seconds of arc. Of these values, the AR, dec, and HP values come from the nautical almanac. The moon's S.D. may have come from the almanac, or may have been computed using the standard formula S.D. = 0.2724 * HP. The values for the true altitudes of the sun and moon would have been computed by Thompson in the following manner :

From his observation of the altitude of the sun taken fifteen minutes after the lunar distance observation, Thompson would have computed the local apparent time as described in Article VIII. The only information he needs to compute the local apparent time is his dead reckoning longitude and his latitude, which he determines using one of the latitude methods discussed previously. From his dead reckoning longitude he determines the approximate time in Greenwich. For example, if he assumed that he was at Lo. 116º 30' W, then at 15º per hour, he would be 7 hours 46 minutes behind Greenwich. Knowing the approximate time at Greenwich allows him to compute the declination of the sun from the information in his nautical almanac.
Using the declination of the sun (d) and his latitude (L = 53º 24' 52"), as well as the height of the sun above the horizon (Ho) that he observed, he computed a local apparent time of 22h 9m 31s for his 'time shot' (see Art. VIII). Using the difference in his watch time between the lunar distance observation and the time observation, he now knows the local apparent time of his lunar distance observation, in this case 21h 53m 15s. Remember, Thompson is using astronomical time, so this corresponds to 9h 53m 15s a.m., or 2h 6m 45s before noon, local time. At 15º per hour, this means that the sun's meridian angle (the angular distance from noon at Thompson's position) is 31.6875º. Thompson now knows his latitude (L), the polar distance of the sun (PD = 90º + declination), and the meridian hour angle (t). This allows him to compute the true altitude of the sun at his location at the instant of the lunar distance observation using the formula :

sin Ho = cos PD sin L + sin PD cos L cos t

In this case, I compute a true altitude for the sun of 11º 44' 17". The 8" difference from Thompson's value is probably explained by his use of tables to compute the result.

He then computes the true altitude of the moon. Again, he knows his latitude (L), and he can find the declination of the moon from the nautical almanac for his estimate of the time at Greenwich. To find the meridian hour angle (t) of the moon, he needs the right ascensions of both the sun and the moon. Again, he gets these from the nautical almanac for the approximate GAT time of his observation. The difference in the right ascensions of the sun and the moon tells him the meridian hour angle between the sun and the moon, in this case 4h 1m 15s. At 15º per hour, this is 60.3125º. Incidentally, the moon's AR is really 11h 45m 1s, but Thompson writes 176 .. 15 .. 15. This is because he has already converted it to degrees. The sun was not yet at Thompson's meridian, but the moon has already gone by.
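Thompson's marginal value for the sun can be reproduced from exactly these ingredients. A sketch, using PD = 90° + declination (so that cos PD = –sin dec for the south-declination sun); the `dms` helper is mine:

```python
from math import radians, degrees, sin, cos, asin

def dms(d, m=0, s=0):
    return d + m/60 + s/3600

L  = radians(dms(53, 24, 52))        # Thompson's latitude
PD = radians(90 + dms(19, 54, 30))   # polar distance; sun's declination 19°54½' S
t  = radians(31.6875)                # meridian hour angle: 2h 6m 45s at 15°/h

# altitude from the spherical law of cosines (see Art. VI)
Ho = degrees(asin(cos(PD)*sin(L) + sin(PD)*cos(L)*cos(t)))
```

This lands on 11° 44' 17", eight seconds of arc from Thompson's tabulated 11° 44' 09".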
This means that the moon's meridian hour angle will be 60.3125º minus the sun's meridian hour angle of 31.6875º. The answer I get is 28.625º. Plugging these values into the formula above yields a true lunar altitude of 32º 26' 33". This differs from Thompson's answer by 5' 39". (For some reason, all of my recomputed lunar true altitudes for this day differ from Thompson's by about 5' 40".) However, this is sufficiently close to Thompson's answer that it seems clear that this is the method that he was using.

Calculating Apparent Altitudes

Now that Thompson has calculated his true altitudes from his dead reckoning position, he calculates the apparent altitudes from the true altitudes. First he calculates the apparent altitude of the sun :

11º 44' 9"   S (computed by Thompson),
   +4' 31"   plus refraction correction,
11º 48' 40"  gives s— apparent altitude of sun.

This differs from Thompson's value by 10". Next, we want to find the moon's apparent altitude. However, before we proceed, we must compute the moon's parallax in altitude (PA), which is the cosine of the apparent altitude, times the horizontal parallax (HP). The value I obtain is 46' 59".

32º 20' 53"  M (computed by Thompson),
  –46' 59"   less PA,
   +1' 30"   plus refraction correction,
31º 35' 24"  gives m— apparent altitude of moon.

This differs from Thompson's value by 14". The small difference notwithstanding, these examples demonstrate how apparent altitudes can be computed from the true altitudes. For the rest of the example, I will use Thompson's apparent altitudes so that the final answer can be compared to his.

Finding Apparent Distance— d

The first step in clearing the distance is to find the apparent distance, d, between the moon and another celestial body (sun, star, or planet). (See fig. 1.) Thompson has measured the distance between the nearest limbs (edges) of the sun and moon to be 62º 0' 5", so this value must be corrected for the semi-diameters of both bodies to find the distance from center of body to center of body.
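The parallax-in-altitude step can be sketched the same way. One note of caution: the 46' 59" figure is reproduced by taking the cosine of the moon's true altitude, the only altitude available before the correction is applied (a reasonable first approximation to the apparent altitude); this is an assumption on my part about the exact value used:

```python
from math import cos, radians

def dms(d, m=0, s=0):
    return d + m/60 + s/3600

M    = dms(32, 20, 53)    # moon's true altitude (computed by Thompson)
HP   = dms(0, 55, 37)     # horizontal parallax, expressed in degrees
refr = dms(0, 1, 30)      # refraction correction at this altitude

PA = HP * cos(radians(M))     # parallax in altitude, about 46' 59"
m_apparent = M - PA + refr    # about 31°35'24"
```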
This is computed as follows (Thompson's data in italics) :

62º 0' 5"    Average observed in sight run,
   –3' 34"   less index error correction, gives
61º 56' 31"  actual angle measured to near limbs.
  +15' 9"    Add moon's semi-diameter (S.D.) and
  +16' 12"   add sun's S.D. for date, for
62º 27' 52"  d— the apparent distance.

Thompson does not record this value in his notes. Nor does he note any d values for any of his lunar distances.

Figure 1 — Clearing the distance. Z— observer's zenith. HO— The observer's horizon. M— the true altitude of the moon's center. m— The apparent altitude of the moon's center. S— The true altitude of the sun's center. s— The apparent altitude of the sun's center. D— Line SM, the true distance between the sun and moon's centers. d— Line sm, the apparent distance between the sun and moon's centers.

The Approximation Method

Thompson uses an approximation method to determine a solution to the distance d. The answers should not materially differ from computations using a more rigorous method, and in Thompson's day they were much easier to perform. Referring again to figure 1, perpendiculars (xM and yS) are drawn from the line connecting ms to the true positions M and S. The idea is that the distance xy is a close approximation to the distance MS. Angles y and x are 90º, and their sides are so small that they can be treated as plane triangles for the purposes of finding the lengths of xm and ys. We can now solve the spherical triangle sZm and compute the angles at m and s. (Note that Zsm and YsS are congruent.) Thompson provides the following data :

Finding Angles m and s

The first step is to find the value of angle m. For all of the following computations we will use the law of cosines for spherical triangles (see Art. VI). Using this formula for triangle sZm we can write : Solving, we find angle m = 92.8388º. Similarly we can write : Solving, we find s = 62.4644º

Finding Segments xm and ys

The line segments xm and ys can be found from plane trigonometry.
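The limb-to-center correction above is simple addition and subtraction, and is easy to verify (the `dms` helper is mine):

```python
def dms(d, m=0, s=0):
    return d + m/60 + s/3600

observed = dms(62, 0, 5)     # distance measured between near limbs
index    = dms(0, 3, 34)     # index error, on the arc, so subtracted
sd_moon  = dms(0, 15, 9)     # moon's semi-diameter
sd_sun   = dms(0, 16, 12)    # sun's semi-diameter for the date

d_apparent = observed - index + sd_moon + sd_sun   # 62°27'52", center to center
```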
xm = 2' 16"
ys = 2' 10"

From figure 1 it can be seen that if angle m is less than 90º (acute), then the length of line segment xm should be subtracted from d. If it is greater than 90º (obtuse), then it should be added to d. If angle s is less than 90° (acute), then the length of segment ys should be added to d, and if s is obtuse then the ys should be subtracted from d. In this case, m is obtuse, so we must add xm to d, and s is acute, so we must also add ys to d. This yields a true distance of 62º 32' 18". Thompson's value for D is 62º 32' 36", only an 18" difference. Interestingly enough, we could write our corrections as '+ 2-16 + 2-10'. This is clearly what Thompson is writing at the bottom of the page where he notes '+ 2-5 + 2-20 + 9"'. Thompson's values differ slightly from our results, but it must be kept in mind that there were several variations of this approximation method. When I clear this distance employing a rigorous mathematical method by Young (Cotter, 214) using the formula :

cos D = (cos d + cos (m + s)) (cos M cos S) / (cos m cos s) – cos (M + S)

I obtain the value 62º 32' 40" which is within 4" of Thompson's value.

Finding GMT

Thompson now has a true distance between the center of the moon and the center of the sun as measured when his watch said 21h 53m 15s (9:53:15 a.m.). He would now turn to his nautical almanac, where he would find true lunar distances for every three hours GAT for various bodies close to the ecliptic. Thompson would then use linear interpolation, assisted by proportional log tables, to compute the GAT time for the distance which he observed. Thompson now knows the time GAT that corresponds to his watch time of 21h 53m 15s. How he uses this time to compute longitude is the subject of the following article.

In addition to the various observations discussed in the previous articles, Thompson also computed longitudes from his knowledge of Greenwich and Local Apparent Times, set his watches to local apparent time by observing the sun or other stars, and computed the magnetic variation at his locale.
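For comparison, Young's rigorous formula (as given in Cotter) can be applied directly to Thompson's apparent and true altitudes and the apparent distance found above:

```python
from math import radians, degrees, cos, acos

def dms(d, m=0, s=0):
    return d + m/60 + s/3600

d = radians(dms(62, 27, 52))   # apparent distance, center to center
m = radians(dms(31, 35, 10))   # moon's apparent altitude
s = radians(dms(11, 48, 50))   # sun's apparent altitude
M = radians(dms(32, 20, 53))   # moon's true altitude
S = radians(dms(11, 44, 9))    # sun's true altitude

# Young's formula for clearing the lunar distance
cos_D = (cos(d) + cos(m + s)) * (cos(M)*cos(S)) / (cos(m)*cos(s)) - cos(M + S)
D = degrees(acos(cos_D))       # true distance, about 62°32'40"
```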
To demonstrate how these values were determined, I will use a hypothetical case (since Thompson leaves us no calculations) using the data from November 3, 1810. On this day he observed two lunar distances, and made four observations of the sun's altitude. From figure 1, and using the law of cosines for spherical triangles, it can be seen that cos(co-Ho) = cos(co-L) cos(PD) + sin(co-L) sin(PD) cos(t). This can be simplified to : cos(t) = [cos(co-Ho) – cos(co-L) cos(PD)] / [sin(co-L) sin(PD)]. Figure 1 — Pn— The north pole. Z— The observer's zenith. Sun— The geographical position of the sun. PD— The polar distance of the sun. co-L— Observer's co-latitude (90º – Latitude). co-Ho— The co-height of the sun (90º – Ho). t— The meridian hour angle. z— Azimuth angle. This observation can be used to compute the observer's local apparent time, even if the observer does not know what time it is in Greenwich. The meridian hour angle t can be converted into hours with the formula: time = t / 15. For an afternoon observation of the sun this converts directly into local apparent time p.m. If Thompson has done a lunar distance observation within an hour of this 'time shot' (to reduce the effects of an inaccurate watch), then he now knows the difference between the local apparent time in Greenwich, and the local apparent time at his position. This time difference is simply converted into degrees at 15º per hour to find his longitude west of Greenwich. Even if Thompson does not have a lunar distance to go along with his time shot, he will still make such time observations in order to keep his watch set to local apparent time. This allows him to know exactly when noon will occur (subject to watch error and how far he has moved in longitude since he last set his watch). This allows him to plan his day's events so that he does not miss a double meridian altitude observation, the most important daily observation for any navigator to make. Watch Rate Computed There is clear evidence that Thompson's watches were next to useless as navigational tools. First, he never computes a watch rate, nor uses one in his calculations.
Secondly, he always keeps time-critical observation pairs as close together in time as possible to ensure the maximum accuracy. Thirdly, his notes show that at nearly every opportunity he computed the local apparent time and reset his watch. An excellent example of this is provided in his observational notes for November 26, 1810. In these notes he lists the following values (Thompson's data in italics) :

       H  '  "           º  '            Line #
      55 .. 9         28 .. 21½            1
      55 .. 42             18¼             2
      56 .. 13             15¼             3
   1[6]7 .. 4              55              4
      55 .. 41        28 .. 18 .. 20       5
     +18 .. 58             –3              6
   1 .. 14 .. 39      28 .. 15 .. 20       7

Lines 1 to 3 are Thompson's observation pairs where he is recording his watch time (i.e. 0 hours (noon), 55 minutes, 9 seconds) and the height of the body as measured by the sextant (i.e. 28º 21.5'). On line 4 he writes down the sum of the values, and in line 5 the average values. This gives him a point on a line which is the best fit through all three values— a standard technique of the day for improving sight accuracy (Garnett, 31). The following step is not clear from Thompson's notes, for the values on line six are not filled in at the same time. What Thompson does next is to write down his sextant index error correction under the right-hand column on line six. He then finds the final sextant altitude which he records on line 7. At this point he then computes the local apparent time of the observation as discussed above. He then writes the corresponding local apparent time next to the sextant altitude to which it applies. This is the value in the left hand column on line 7. Thompson then computes the difference between what his watch said and the actual local apparent time at the instant of the observation. In this case, his watch is 18 minutes 58 seconds slow. He notes this value in the space on line 6 in order to conserve paper and to keep things neat.
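The sum and average that Thompson records on lines 4 and 5 can be reproduced directly. This is an illustrative sketch of the averaging technique, not Thompson's own arithmetic:

```python
# the three watch times (h, m, s) and sextant altitudes (deg, arcmin)
times = [(0, 55, 9), (0, 55, 42), (0, 56, 13)]
alts = [(28, 21.5), (28, 18.25), (28, 15.25)]

total_s = sum(h * 3600 + m * 60 + s for h, m, s in times)  # 167 m 4 s
avg_s = total_s / 3                                        # about 55 m 41 s
avg_alt = sum(d + m / 60 for d, m in alts) / 3             # 28 deg 18' 20"
```

The arcminute fractions alone sum to 55', which is exactly the "55" Thompson writes on line 4.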
Thompson would no doubt immediately set his watch forwards by 19 minutes so that on the next day, he would know to within a few minutes when local apparent noon is going to occur so that he can get set up to make a meridian altitude observation of the sun— the navigator's most important daily observation. Indeed, in the notes for this day he records that 'Examined watch moved 20' forward'. Normally he does not bother to record the fact that he has reset his watch. At first glance it appears that the time value on line 6 is some sort of correction which is applied to get the time on line 7, but this is not so. This example in his notes is a good one to examine, as it seems that the values in the left-hand column on lines 6 and 7 were written in later, as he is using a pen with a different width. This same pattern can be seen in many other entries where the time correction value is squeezed into too small a space or does not line up correctly with the other numbers in the column. Using two observations from November 3, I calculate that Thompson's watch was gaining about 3¼ seconds per hour on that day. When I look at his watch corrections overall for the period of the case study, correct them for changes in his longitude, and assume that he reset his watch each time he made a time observation, I find that his average watch rate was 4 seconds per hour fast, ±9 seconds. This is quite poor. In 1806, Garnett remarks that : '...Dr. Maskelyne observes, that a watch that can be depended upon within a minutes for 6 hours is absolutely necessary ; but I would recommend to have at least one pocket chronometer or time piece, for connecting the observation for finding the time, with that for the distance. They are made by Mr. Arnold in London as low as 25 guineas ; also by Mr. Earnshaw and Mr. Broeckbank ; and would be extremely useful for a variety of purposes both for the longitude and latitude, and in discovering currents.'
(Garnett, 30) Thompson's watches from Joseph Jolly were worth only 12 guineas each in 1794 (Smyth, 8). These would be better-than-average watches for the time, but apparently not in the 'pocket chronometer' league. It would also appear that his watches were never upgraded, and that 16 years later he still had not acquired a pocket chronometer. Thompson himself noted on August 2, 1811 that : 'All the Obsns made going to the Sea was with a com[mon] Watch that went very badly, losing time— on my return also with a com[mon] Watch that went tolerable well'. (Belyea, 163) It would seem clear that his watches were only useful for tracking the general time of day, the Greenwich time to the nearest half-hour, and the time separations between double altitude or lunar distance longitude pairs for intervals of less than an hour. Compass Variation If the observer records the bearing of the body along with its altitude, then it is possible to compute the magnetic declination (variation). From figure 1, it can be seen that: cos(PD) = cos(co-L) cos(co-Ho) + sin(co-L) sin(co-Ho) cos(z). This becomes : cos(z) = [cos(PD) – cos(co-L) cos(co-Ho)] / [sin(co-L) sin(co-Ho)]. Angle Z is the true bearing of the sun at the time of the observation (called the azimuth), in this case in degrees west of north. In this example this can be converted to a standard compass bearing by subtracting z from 360º. The difference between the calculated bearing and the bearing measured by the compass is the variation. On December 5, 1810, while at the location of his 'goods shed' on the Athabasca, David Thompson observed a series of lunar distances and two time shots to be used for the longitude component of the two lunar distances. On December 6, he observed a meridian altitude of the sun and computed a latitude of N 53º 23' 27" for the location of the goods shed. He then used this latitude to compute the longitudes observed on the previous evening. This latitude is quite different from the latitude of the accepted location of Thompson's goods shed.
In Article V, I examined Thompson's latitude observation from December 6 and showed how it was made with his reflecting artificial horizon, and that Thompson did not make any computational errors in performing the calculation. This suggests that the observation should be a very good one, and it should be close to the truth. Unfortunately, this single observation says nothing about how accurately he took the measurement. Any number of arguments may be made to suggest that this single value is not to be relied upon— perhaps he was having an 'off day' and was careless with the observation. Perhaps he misread the index vernier. Maybe his parallel glasses were not quite parallel and produced a significant distortion. Perhaps there was a strong temperature inversion caused by a nearby storm which strongly affected the refraction of the earth's atmosphere. It was partly cloudy on that day, so perhaps the cloud interfered with his ability to accurately see the edge of the sun's limb. All of these arguments can legitimately cast some doubt on the accuracy of any single observation. However, if another latitude for the site can be computed using different celestial objects observed on a different day under different conditions and at different azimuths, then, should the results agree to a reasonable extent, all of the above arguments can be shown to be unfounded. Thompson's time shots from December 5 allow a new latitude to be computed. This latitude was never computed by Thompson ; either he never realized that it could be done, or he didn't bother. If he did not realize that it could be done, then this suggests that he was well grounded in all of the standard techniques as outlined in the navigational texts, but that he really did not have a firm grasp of how he actually arrived at his answers— he just followed the instructions for 'standard sights'. 
I favor this interpretation because he computed all the other permutations of his sights on this journey, and since he did not have a latitude to use that evening he was forced to wait until noon the next day before he could do the calculations. Thompson's observations of the stars Vega ('Lyræ') and Capella allow me to compute a latitude using the double altitude method as described in article VI. Table XXX of Garnett's Tables Requisite... from 1806 lists the right ascensions and declinations of the principal navigational stars, as well as how much these values change per year. This information is summarized as follows :

  Star      RA            Var./yr    Dec.             Var./yr
  Vega      18h 30m 20s    2.03s     38º 36' 27" N     +2.6"
  Capella    5h  2m 18s    4.41s     45º 47' 16" N     +5.0"

From this information, the right ascensions and declinations of these two stars can be estimated for the time of Thompson's observations in 1810 :

  Star      RA             Dec.
  Vega      18h 30m 28s    38º 36' 37" N
  Capella    5h  2m 36s    45º 47' 36" N

We also have Thompson's observations and watch times which I provide as they appear in his journal (corrected for index error).

  Star      WT            Hs x2 corr.
  Vega      7h 14m 53s    77º 18' 30"
  Capella   7h 56m 39s    88º 24' 56"

From the values of right ascension (RA) we can compute the difference in RA between the two stars in 1810. Note that Capella was visible in the east, and Vega was visible in the west, therefore the time separation is 24h - 18h 30m 28s + 5h 2m 36s = 10h 32m 8s. Multiplying by 15º per hour results in an angle between the two stars of 158º 2' 0". Thompson observed Vega first, which you can visualize as 'pinning' its location to the heavens at the instant of the observation. He observed Capella 41m 46s later. During that time, as the heavens appear to rotate overhead, Capella moves closer to the 'pinned' position of Vega. Again, converting time to arc and subtracting this from the angle separating the two stars we get the meridian hour angle (t) which is 147º 35' 30".
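The time-to-arc arithmetic in this paragraph can be replayed in a few lines (my own sketch, not from the article):

```python
def hours(h, m, s):
    """Hours/minutes/seconds of time to decimal hours."""
    return h + m / 60 + s / 3600

ra_vega, ra_capella = hours(18, 30, 28), hours(5, 2, 36)

# Capella in the east, Vega in the west: the separation goes through 24h
sep_deg = (24 - ra_vega + ra_capella) * 15     # 158 deg 2' 0"

# Capella observed 41m 46s after Vega; convert the elapsed time to arc
t = sep_deg - hours(0, 41, 46) * 15            # 147 deg 35' 30"
```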
Thompson noted that the air temperature was –2ºF, and from a topographic map of his approximate position I feel that his elevation was just under 1100 meters. Assuming average atmospheric conditions this results in a refraction factor f of 0.98. True refraction = mean refraction * f. The altitudes of these two stars above the horizon can therefore be computed using modern refraction tables in the following manner :

                       Vega               Capella
  Hs x 2 corr.         77º 18' 30"        88º 24' 56"
  Divide by 2          38º 39' 15"        44º 12' 28"
  R                       –1' 10"            –59"
  Height observed      Hv = 38º 38' 5"    Hc = 44º 11' 29"

From figure 1, and using the data above as well as the formulae presented in the article on computing latitude from double altitudes, the following values can be computed : d = 90.72550º, A1 = 21.94522º, A2 = 25.50479º. Note that in this case Z is south of the great circle connecting Vega and Capella, so A3 = A1 + A2. The final latitude computed is N 53º 21' 27". Figure 1 — Pn— The north pole. Z— Thompson's Zenith. co-Dv— 90º – declination of Vega. co-Dc— 90º – declination of Capella. d— distance between Vega and Capella along a great circle on the celestial sphere. t— The meridian hour angle between Vega and Capella. co-Hv— 90º – observed altitude of Vega. co-Hc— 90º – the observed height of Capella. co-L— 90º – the latitude of the observer. This latitude is 2' (2 nm) south of the latitude that Thompson computes on December 6. This observation is probably not quite as accurate as the double meridian altitude observation of December 6, but if we assume for the sake of argument that they are equally valid, then the position of the 'goods shed' must lie at latitude N 53º 22' 27" ± 1'. The close correlation between the latitudes observed on December 5 and 6 effectively removes the possibility of significant systematic or random error in the execution of the observations. As the south end of Brûlé Lake is roughly 7.8' south of this position, this rules it out as a possible location for Thompson's goods shed.
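The double altitude reduction itself can be checked numerically. The sketch below is my own reconstruction from the quantities listed above (the formulas of article VI are not reproduced in this excerpt), using the spherical law of cosines throughout; it lands within a few arcseconds of the published values:

```python
from math import acos, cos, degrees, radians, sin

def dms(deg, minutes=0, seconds=0):
    return deg + minutes / 60 + seconds / 3600

dec_v, dec_c = dms(38, 36, 37), dms(45, 47, 36)  # declinations for 1810
hv, hc = dms(38, 38, 5), dms(44, 11, 29)         # refracted altitudes
t = dms(147, 35, 30)                             # meridian hour angle

def side(a, b, included):
    """Side opposite the included angle, given adjacent sides a and b."""
    a, b, included = map(radians, (a, b, included))
    return degrees(acos(cos(a) * cos(b) + sin(a) * sin(b) * cos(included)))

def angle(opp, a, b):
    """Angle between sides a and b, given the opposite side opp."""
    opp, a, b = map(radians, (opp, a, b))
    return degrees(acos((cos(opp) - cos(a) * cos(b)) / (sin(a) * sin(b))))

d = side(90 - dec_v, 90 - dec_c, t)          # star-to-star distance
a1 = angle(90 - dec_c, 90 - dec_v, d)        # angle at Vega, pole triangle
a2 = angle(90 - hc, 90 - hv, d)              # angle at Vega, zenith triangle
co_lat = side(90 - dec_v, 90 - hv, a1 + a2)  # Z south, so A3 = A1 + A2
latitude = 90 - co_lat                       # about N 53 deg 21'
```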
For those readers familiar with celestial navigation who are eager to try Thompson's methods, you may write the author care of Northwest Journal for additional information, assistance, etc., as well as to obtain copies of the author's Tables Useful for Celestial Navigation. This booklet contains : mean refraction tables circa 1781; refraction factor (f) tables circa 1781; modern mean refraction tables; modern refraction factor tables; conversion between mb, inches Hg and mm Hg; barometer corrections for altitude above sea level; a table of barometric pressure by boiling point of water (insert) which when used with the barometer correction table yields the observer's altitude; conversion of arc to time; temperature conversion; and miscellaneous data and formulae. A full explanation of each table is included along with information on how they were computed and the data source used. Modern plastic practice sextants are available which are accurate enough for computing latitudes. (The Davis Mk 15, $107 US, looks like a good bet.) The Nautical Almanac sells for $16.95 US. All the stuff you require (except for the land navigation tables) can be ordered through Celestaire at 1-800-727-9785. Apparent altitude (Ha) - The height of the body above the horizon as it appears to the observer once mechanical measuring errors have been eliminated. See also Observed Altitude (Ho), and Sextant Altitude (Hs). Azimuth angle (Z) - The angle between the sides co-latitude and co-calculated altitude of the spherical triangle connecting the north pole, the observer's zenith, and the geographical position of the observed body. Azimuth (Zn) - The angle between true north and the body, measured clockwise from true north. Azimuth is always positive, and between 0° and 360° . Zn is computed from azimuth angle (Z). Cleared lunar distance (D) - The angular distance between the moon's center and another body as measured from the center of the earth. Obtained from the lunar distance (d). 
Co-declination (co-d) - One side of a spherical triangle equal to 90º minus the declination of the body. Co-latitude (co-L) - One side of a spherical triangle equal to 90º minus the latitude of the observer. Declination (dec) - The position of a celestial body on the celestial sphere measured in degrees north or south of the celestial equator. It is exactly equivalent to latitude and is measured the same way. For example, if at some instant the declination of a body is S 15º 32' 4", then at that instant the geographical position of the body is at latitude S 15º 32' 4". Magnetic declination is the difference between magnetic north (the direction the compass needle points) and true north (roughly Polaris). Also called variation or compass variation. f - Correction factor applied to mean refraction to correct for non-standard atmospheric pressure and temperature. Geographical position (GP) - The intersection of a line connecting the center of the Earth to the center of a celestial body and the surface of the Earth. For any instant of time this spot is the position on the Earth's surface at which the body is at the zenith. The GP may be expressed in terms of declination and right ascension/hour angle. Great circle - The shortest distance between two points on the surface of a sphere. A great circle is described by the intersection of a plane cutting through the center of a sphere and the surface of the sphere. Greenwich Apparent Time (GAT) - This is the local apparent time at the Greenwich meridian, which is defined as being at zero degrees of longitude. GAT differs from Greenwich Mean Time (GMT) by the difference of the equation of time for that day. Thompson did not use GMT. Ha - See apparent altitude. Horizontal parallax (HP) - The parallax of the moon when it is observed at the horizon. Ho - See observed altitude. Hs - See sextant altitude. Index correction (IC) - The correction to be applied to sextant altitude (Hs) to correct for registration error of the instrument.
IC is opposite in sign to index error (IE). Index error (IE) - The registration error of the sextant caused by the horizon and index mirrors being non-parallel. IE is positive if the error is on the arc, and negative if the error is off the arc. See also index correction (IC). Latitude (L) - Imaginary parallel lines on the earth's surface at right angles to the earth's axis of rotation. The equator is 0° latitude, the north pole is 90° north latitude, and the south pole is 90° south latitude. See also declination. Longitude (Lo) - Imaginary lines on the earth's surface which are described by great circles passing through the north and south poles. The prime meridian is 0° longitude and is located in Greenwich, England. Longitude is measured east and west of Greenwich to 180º. Lunar distance (d) - The angular distance between the moon's limb and another celestial body, usually on or near the ecliptic, as measured with a sextant. Also, a longitude calculated using this measurement. See also cleared distance (D). Mean refraction - The refraction of the atmosphere at a standard temperature of +7° C and a pressure of 1010 mb. Meridian - The meridian is the line of longitude which passes through the zenith. When the sun is on the observer's meridian, it is local apparent noon. Meridian angle (t) - The smallest angular distance between the meridian at the observer's position (Z) and the meridian of the geographical position (GP) of a celestial body. Also the smallest angle between the meridians of any two positions or bodies. Meridian angle is measured east or west and is always positive. Nautical mile (nm) - 1 nm = 1' of latitude = 1852 meters. Observed altitude (Ho) - The altitude of the body above the horizon as measured by the observer, once all corrections have been applied to the observation. Parallax in altitude (PA) - The component of the moon's horizontal parallax which applies for altitudes greater than 0°.
PA is computed as : PA = HP × cos (Ha). Refraction correction (R) - A correction to the apparent altitude of a body which accounts for the bending of light from the body as it travels through the Earth's atmosphere. The refraction correction is computed as : R = mean refraction × f. Semi-diameter correction (SD) - A correction to the apparent altitude of a body which adjusts for observations on the limb of a body. For stars and planets, no SD correction is required. For the sun and moon, SD corrections are listed for each day in the nautical almanac. For the moon, SD can also be computed from the moon's horizontal parallax. The value of SD is positive for a lower limb observation, and negative for an upper limb observation. Sextant - A hand-held instrument for measuring the angle between two distant observed objects. Sextant altitude (Hs) - The height (altitude) of a body as measured by the sextant and prior to applying any instrument or artificial horizon corrections. See also apparent altitude. Spherical triangle - A triangle drawn on the surface of a sphere consisting of sides which are segments of great circles. The length of any side of a spherical triangle is the angle of arc described by that side as measured from the center of the earth. The angle between two sides of a spherical triangle is the angle as measured on the surface of the sphere. Z - See azimuth angle. Zenith - The point on the celestial sphere which is directly overhead. References & Bibliography Alberta Forestry, Lands & Wildlife. Edson 83F. [Topographic Map.] Provincial Mapping Section, Land Information Services Division. 1988. Bowditch, Nathaniel, LL.D., The American Practical Navigator : An Epitome of Navigation, Volumes I, II. Defense Mapping Agency Hydrographic/Topographic Center Pub. No. 9, 1984. Cotter, Charles H. A History of Nautical Astronomy. American Elsevier Publishing Company : New York, 1968. Garnett, John. Tables Requisite to be Used with the Nautical Almanac for the Finding of Latitude and Longitude at Sea.
John Garnett : New Brunswick, New Jersey, 1806. Gottfred, J. 'Period Navigation', in Northwest Journal, Vol. III, April to July 1995. pp. 11-18. Gottfred, J. Tables Useful for Celestial Navigation over Land, 2d ed. J. Gottfred : Calgary, 1995. Gottfred, A. & J. 'The Life of David Thompson', in Northwest Journal, Vol. V, November 1995 to January 1996, pp. 1-19. Her Majesty's Nautical Almanac Office. The Star Almanac for Land Surveyors. London, HMSO. Henry, Alexander (the Younger). New Light on the Early History of the Northwest : The Manuscript Journals of Alexander Henry... Elliot Coues (ed.) Reprint-Ross & Haines : Minneapolis, 1965. Originally published 1897. Marriott, C. A. Skymap V2.2.4. Planetarium Software for Windows. 1992, 1994. Sebert, L. M. 'David Thompson's Determination of Longitude in Western Canada', in Canadian Surveyor. Vol. 35, March 1981, no. 4 : 405-414 Smith, Allan H. 'An Ethnohistorical Analysis of David Thompson's 1809-1811 journeys in the Lower Pend Oreille Valley, Northwestern Washington', in Ethnohistory, Vol. 8, No. 4, Fall 1961. Smyth, David. 'David Thompson's Surveying Instruments and Methods in the Northwest 1790-1812.' Cartographica 18, no. 4 (1981), pp. 1-17. Stewart, W. M. 'David Thompson's Surveys in the North-West', in Canadian Historical Review, 1936. Thompson, David. Columbia Journals. Barbara Belyea (ed.) McGill-Queen's : Montreal, 1994. Thompson, David. Original manuscript journals, Archives of Ontario volume 25. Unpublished. Archives of Ontario. United States Naval Observatory (USNO). Nautical Almanac 1996. Paradise Cay Publications : Middletown, California, 1996.
I need to prove that any element in a ring is representable as a product of some element and some central idempotent

Let $R$ be an associative ring with identity and let $x$ be an arbitrary element from the ring $R$. Could you please help me to prove that $x=ye$, where $y$ is some element in $R$ and $e$ is some primitive central idempotent in $R$. In other words, I need to prove that any element in $R$ is representable as a product of some element and some primitive central idempotent in $R$. Thanks for the answers! Answers like "you are not right, for example ..." and "see /book/, p. /page number/" are also OK.

I was about to post a (completely trivial) answer but then realized this is probably homework. – Steven Landsburg Feb 6 '11 at 16:16

To Steven Landsburg: I'm 30 (probably too old for homework :)). I'm an installation developer in a small IT company; ring theory is just not my field. However, if you tell me the answer is simple, I can probably find it easily in a book. I'll try. Anyway, thanks for your answer :) – ingrem Feb 6 '11 at 16:30

@ingrem: Why is a 30-year-old installation developer in a small IT company interested in this? This is not a rhetorical question, I am genuinely curious. Related is mathoverflow.net/howtoask#motivation. – Did Feb 6 '11 at 16:35

Darij: Why not just take R = Z/6Z ? – Steven Landsburg Feb 6 '11 at 17:26

There is no kill like overkill. :) – darij grinberg Feb 6 '11 at 17:27

1 Answer (accepted)

Consider the direct product $k\times k$ of two copies of some field and the element $x=(1,1)$. In this simple example you can describe all primitive central idempotents.

Thanks! :) Your example is very clear. – ingrem Feb 6 '11 at 17:36
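The claim already fails in Z/6Z, the example Steven Landsburg hints at. A brute-force check, written here as an illustrative Python sketch:

```python
n = 6
R = range(n)

# in a commutative ring every idempotent is central
idempotents = [e for e in R if (e * e) % n == e]   # 0, 1, 3, 4

def is_primitive(e):
    """Primitive: nonzero and not a sum of two nonzero orthogonal idempotents."""
    if e == 0:
        return False
    for f in idempotents:
        g = (e - f) % n
        if f and g and g in idempotents and (f * g) % n == 0:
            return False
    return True

primitive = [e for e in idempotents if is_primitive(e)]   # 3 and 4
products = {(y * e) % n for y in R for e in primitive}
not_representable = sorted(set(R) - products)             # 1 and 5
```

So 1 and 5 cannot be written as $ye$ with $e$ a primitive central idempotent. This matches the accepted answer's $k\times k$ example, where multiplying by either primitive idempotent $(1,0)$ or $(0,1)$ kills one coordinate, so $x=(1,1)$ is never reached.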
Interest 3% yearly
Find balance at year 5
=(balance)(3%)=interest earned, then add to balance for next year
=(600.00)(.03)=18.00 first year interest, balance 618.00
=(618.00)(.03)=18.54 2nd year interest, balance 636.54
=(636.54)(.03)=19.10 3rd yr interest, balance 655.64
=(655.64)(.03)=19.67 4th yr interest, balance 675.31
=(675.31)(.03)=20.26 5th yr interest, balance 695.57
total balance end of 5th year=695.56 by the compound formula 600(1.03)^5 (carrying each year's interest rounded to the cent gives 695.57, a one-cent rounding difference)
CherieS | 356 days ago
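For reference, compound interest at 3% on $600 over 5 years can be computed directly (compounding first, rounding once at the end):

```python
principal, rate, years = 600.00, 0.03, 5

balance = principal
for _ in range(years):
    balance += balance * rate      # add this year's interest

balance = round(balance, 2)        # same as round(600 * 1.03**5, 2)
```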
Create A Code With Message Bits m = 4 And Redundant Bits r = 3 | Chegg.com Image text transcribed for accessibility: Create a code with message bits m = 4 and redundant bits r = 3 for a total n = 7 bits in each symbol. Your code should have Hamming distance H = 3. Your answer will consist of 16 (= 2^m = 2^4) datawords of 7 bits in length chosen from the 128 (= 2^n = 2^7) possible bit patterns that are 7 bits in length. The table below is an embedded Excel spreadsheet object. Double click on the object to open Excel. Fill in the binary data word cells with your datawords. The decimal data word cell should show the decimal equivalent of your data word if you have entered a valid binary number. If not it will indicate a #NUM error. Click elsewhere in this Word document to close Excel with your answers shown below. Electrical Engineering
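One standard solution is the Hamming(7,4) code. The sketch below (mine, not tied to the question's spreadsheet) builds the 16 seven-bit codewords from the 4 message bits using three even-parity checks, and verifies that the minimum Hamming distance between codewords is 3:

```python
from itertools import product

def encode(d1, d2, d3, d4):
    """Hamming(7,4): parity bits at positions 1, 2, 4 (classic layout)."""
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return (p1, p2, d1, p3, d2, d3, d4)

codewords = [encode(*bits) for bits in product((0, 1), repeat=4)]

def distance(a, b):
    return sum(x != y for x, y in zip(a, b))

min_distance = min(distance(a, b)
                   for i, a in enumerate(codewords)
                   for b in codewords[i + 1:])   # 3, as required
```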
The String Coffee Table Streetfest Workshop I Posted by Guest Greetings from Canberra! It is very pleasant here. There was a good frost this morning and some black swans with their chicks down by the lake. Most of the Streetfest participants moved down here yesterday in buses or cars or planes, mostly uneventfully, although Tim Porter found himself driving on a dirt road through the mountains after realising that the main highway south actually went to Melbourne. The workshop started this morning. We're so busy that there's almost no time to tell you what's going on. Kapranov was up first: the prequel to last week's talk on NC Fourier transforms. At the end there was a little discussion with some people wondering exactly how this connects to Connes' NCG. Panov spoke about model cats, homotopy colimits and toric topology. Toric topology is the study of torus actions on manifolds or complexes with a rich combinatorial structure in the orbit quotient. First he defined 'face rings', which are Stanley-Reisner algebras of simplicial complexes. Ross Street later highly recommended Stanley's revolutionising of combinatorics … something I must look into later. The Poincare series for $R[K]$ was defined, and Panov listed some problems that could be attacked with this machinery: the Charney-Davis conjecture, the question of when $\mathrm{Ext}_{k[K]}(k,k)$ has rational Poincare series, and something called the g-conjecture - whatever that is. Then he defined the Davis-Januszkiewicz space $DJ(K)$, which seems to be important because it turns up in a theorem (Panov, Ray, Vogt) giving a homotopy commutative diagram involving the loop functor $\Omega : \mathrm{Top} \to \mathrm{TMon}$ into the monoid category. Posted at July 18, 2005 4:08 AM UTC Re: Streetfest Workshop I Hi Marni, probably you do not have the time to check for, read and reply to my comments, but anyway.
You wrote At the end there was a little discussion with some people wondering exactly how this connects to Connes' NCG. Was there any consensus? What did Kapranov himself say? Posted by: Urs on July 18, 2005 11:23 AM

Re: Streetfest Workshop I Hi, Urs! No, there wasn't exactly any "consensus" regarding a relation between Kapranov's work and noncommutative differential geometry - mainly because the only people who seemed to understand this suggestion by Getzler were Kapranov and presumably Getzler himself! In particular, there was not any obvious relation between this comment and the sort of relation you like to envisage between noncommutative geometry and higher gauge theory. Getzler thought some of the homological algebra in Kapranov's talk reminded him of Hochschild cohomology, and wanted Kapranov to push it to include more ideas from cyclic cohomology - but the connection was over my head. The beautiful SIMPLE idea in Kapranov's talk was a noncommutative analog of the Fourier transform sending Laurent series in n noncommuting variables to measures on the space of paths in n-dimensional space. And, beautifully, the noncommutative Taylor series for $\exp(-(z_1^2+\dots+z_n^2))$ turns out to give Wiener measure on paths! In other words, a "Gaussian function of n noncommuting variables" gives a new way of thinking about path integrals! It's very cool and maybe I'll explain it in This Week's Finds someday. Best, John Posted by: John Baez on July 19, 2005 4:28 AM

Re: Streetfest Workshop I Hi, just a quick comment: measures on the space of paths in n-dimensional space Thanks for that piece of information. Do you see any relation to our ideas on surface holonomy? Can I find Kapranov's ideas written down in detail anywhere?
Posted by: Urs (from Aberystwyth/Wales) on July 22, 2005 5:29 PM

Maybe I should mention that I'll be on vacation from tomorrow on up to August 3. So if you don't hear anything from me the next days, that's why. Posted by: Urs on July 18, 2005 11:54 AM

Re: Streetfest Workshop I Look, I was searching the web and found my name in this theorem. I would like to know more if at all possible. Posted by: David Januszkiewicz on May 31, 2006 1:17 AM