Given the points Pn(n,2n+1), which is the equation of the line that passes through the points P1 and P2? - Homework Help - eNotes.com

To write the equation, we'll use the formula for the line that passes through 2 given points: (y2-y1)/(y-y1) = (x2-x1)/(x-x1). But first, let's find the coordinates of P1 and P2: P1(1,2*1+1) and P2(2,2*2+1), so P1(1,3) and P2(2,5). Now, we'll substitute the coordinates of P1, P2 into the formula: (5-3)/(y-3) = (2-1)/(x-1), i.e. 2/(y-3) = 1/(x-1). We'll cross multiply and we'll get: 2*(x-1) = (y-3). We'll open the brackets and we'll get: 2x-2 = y-3. We'll move all terms to one side and we'll get the general form of the equation of the line that passes through P1, P2: 2x - y + 1 = 0.

Second answer: Given Pn(n,2n+1), to find the equation of the line that passes through P1 and P2. P1 has the coordinates (1, 2*1+1) = (1, 3). P2 has the coordinates (2, 2*2+1) = (2, 5). The line passing through (x1,y1) and (x2,y2) is y-y1 = [(y2-y1)/(x2-x1)](x-x1). So the line passing through P1(1,3) and P2(2,5) is y-3 = [(5-3)/(2-1)](x-1) = 2(x-1). Or y-3 = 2(x-1), so 2x-y-2+3 = 0, i.e. 2x-y+1 = 0.
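Both answers land on 2x - y + 1 = 0. As a quick sanity check (my own sketch, not part of the eNotes page), note that every point Pn = (n, 2n+1) satisfies this equation, since 2n - (2n+1) + 1 = 0, not just P1 and P2:

```python
# Residual of the line 2x - y + 1 = 0 at a point (x, y); zero means
# the point lies on the line.
def line_residual(x, y):
    return 2 * x - y + 1

# P1 = (1, 3) and P2 = (2, 5), and indeed every P_n = (n, 2n + 1).
points = [(n, 2 * n + 1) for n in (1, 2)]
print([line_residual(x, y) for x, y in points])  # -> [0, 0]
```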
{"url":"http://www.enotes.com/homework-help/given-points-pn-n-2n-1-which-equation-line-that-180395","timestamp":"2014-04-24T18:48:14Z","content_type":null,"content_length":"27975","record_id":"<urn:uuid:8d24501a-c606-402e-b754-cea668f57bbd>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Unsteady-Flow Of A Viscous-Fluid Between 2 Parallel Disks With A Time-Varying Gap Width And A Magnetic-Field Kumari, M and Takhar, HS and Nath, G (1995) Unsteady-Flow Of A Viscous-Fluid Between 2 Parallel Disks With A Time-Varying Gap Width And A Magnetic-Field. In: International Journal Of Engineering Science, 33 (6). pp. 781-791. The unsteady incompressible viscous fluid flow between two parallel infinite disks located a distance h(t*) apart at time t* has been studied. The upper disk moves towards the lower disk with velocity h'(t*). The lower disk is porous and rotates with angular velocity Omega(t*). A magnetic field B(t*) is applied perpendicular to the two disks. It has been found that the governing Navier-Stokes equations reduce to a set of ordinary differential equations if h(t*), Omega(t*) and B(t*) vary with time t* in a particular manner, i.e. h(t*) = H(1 - alpha t*)^(1/2), Omega(t*) = Omega_0 (1 - alpha t*)^(-1), B(t*) = B_0 (1 - alpha t*)^(-1/2). These ordinary differential equations have been solved numerically using a shooting method. For small Reynolds numbers, analytical solutions have been obtained using a regular perturbation technique. The effects of squeeze Reynolds number, Hartmann number and rotation of the disk on the flow pattern, normal force or load and torque have been studied in detail.
{"url":"http://eprints.iisc.ernet.in/19300/","timestamp":"2014-04-18T18:53:32Z","content_type":null,"content_length":"22716","record_id":"<urn:uuid:6ecdf249-9a51-41e5-96de-00e32ceff81a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding Critical Points
February 14th 2007, 08:46 PM #1
Hi. I was wondering if anybody could help me out with this problem. F(x) equals the integral from x^2 to x^3 of ln(t) dt. We have to find all critical points of F(x). What I (think) I know: I think you have to use the fundamental theorem of calculus (parts 1 and 2?), but I'm unsure of how to apply it. A friend helped me a bit and so far we have gotten this: ln(x^3)3x - ln(x^2)2x = 0. From there though, I'm not sure what to do. I'm not sure (if that is the correct procedure) how you go about solving for those points. Any help is greatly appreciated. I'm awfully sorry that I forgot to attach the diagram of the function.

If I understand what you mean....
F(x) = INT.(x^2 ---> x^3)[ln(t)]dt
F(x) = [1/t]|(x^2 ---> x^3)
F(x) = 1/(x^3) - 1/(x^2) --------------(i)
Now to find the critical points of F(x), we get the 1st derivative of F(x) and equate it to zero.
F(x) = x^(-3) - x^(-2) ---------rewriting (i) in another form.
F'(x) = (-3)[x^(-4)] - (-2)[x^(-3)]
F'(x) = -3/(x^4) + 2/(x^3)
Set that to zero:
0 = -3/(x^4) + 2/(x^3)
Clear the fractions; multiply both sides by (x^4)(x^3):
0 = -3x^3 + 2x^4
0 = (-3 + 2x)x^3
x = 0 or 3/2
Hence, the critical points of F(x) are:
When x = 0, F(0) = 1/0 - 1/0 = indeterminate. Meaning, zero is not a root of F(x). So discard x = 0.
When x = 3/2, F(3/2) = 1/[(3/2)^3] - 1/[(3/2)^2]
F(3/2) = 8/27 - 4/9
F(3/2) = (8 - 12)/27 = -4/27
So the point (3/2, -4/27) is the only critical point.
:-), Heck, I made an honest mistake. INT.[ln x] dx = 1/x. Yeah, right. Because INT.[1/u]du = ln(u) + C. :-) I will not edit this wrong solution, nor will I delete it. Let me be wrong once in a while. Last edited by ticbol; February 15th 2007 at 12:32 AM.
I have found that seeing an instructor make a mistake can be as educational as seeing an instructor doing it correctly. Both expose problem solving techniques and can point out possible errors that students can make. No shame there!
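For the record (my own addition, not from the thread): by the Fundamental Theorem of Calculus combined with the chain rule, F'(x) = ln(x^3)*3x^2 - ln(x^2)*2x = 9x^2 ln(x) - 4x ln(x) = x ln(x)(9x - 4), so the "3x" in the first post should be 3x^2. A quick numeric check of the zeros for x > 0:

```python
import math

# F(x) = integral from x^2 to x^3 of ln(t) dt, so by the FTC and the
# chain rule: F'(x) = ln(x^3)*3x^2 - ln(x^2)*2x = 9x^2 ln(x) - 4x ln(x).
def F_prime(x):
    return 9 * x * x * math.log(x) - 4 * x * math.log(x)

# For x > 0 the zeros are x = 4/9 (from 9x - 4 = 0) and x = 1 (from ln x = 0).
for root in (4 / 9, 1.0):
    print(round(abs(F_prime(root)), 12))  # -> 0.0 at both roots
```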
{"url":"http://mathhelpforum.com/calculus/11612-finding-critical-points.html","timestamp":"2014-04-20T17:20:57Z","content_type":null,"content_length":"48508","record_id":"<urn:uuid:391a9d91-9014-4433-9714-cd436e7c9b2a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
South Colby Precalculus Tutor Find a South Colby Precalculus Tutor ...I have also had extensive classes with respect to the following subjects (all passed with B's or A's at an undergraduate and/or graduate institution): Genetics & Gene Regulation, Molecular & Cell Biology, Developmental Biology, Microbiology, Virology, Genetics, Plant Physiology, Animal Behavior ... 25 Subjects: including precalculus, Spanish, writing, chemistry ...I have used calculus throughout my career to develop numerical models of high power gasdynamic lasers, hydraulic borehole mining systems, Arctic sea ice mechanics, ferrofluids, and energy efficient technologies. Also, I have tutored physics, mathematics, chemistry, and English to three students ... 21 Subjects: including precalculus, chemistry, physics, English ...To give you an example of my creative methods of teaching - I once taught math in an inner city New York 2nd grade class room. I took a class of 15 students that didn't know how to multiply. I noticed they loved to compete and run around. 17 Subjects: including precalculus, calculus, statistics, geometry ...During that time, I worked in 3 schools in 3 different cities, all with a very different student population. So I have experience teaching Geometry to students at every level. Before becoming a middle school and high school teacher, I worked as a para-educator in elementary schools for 3 years. 16 Subjects: including precalculus, geometry, ASVAB, algebra 1 ...In my last two years in college I had many classes which utilized Matlab, both as part of my Engineering major and my Applied Math major. For Dynamics & Vibrations and Acoustics I used Matlab as a data analysis tool, importing CSV files and writing code to perform FFT's, parse data sets, etc... ... 25 Subjects: including precalculus, chemistry, physics, calculus
{"url":"http://www.purplemath.com/South_Colby_Precalculus_tutors.php","timestamp":"2014-04-17T10:53:10Z","content_type":null,"content_length":"24066","record_id":"<urn:uuid:4d77ed25-e7a1-4bd4-92ed-4fcfc46b12bc>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community.
Here's the question you clicked on:
In the equations 6x-12y=a and 3x-6y=b, a and b are constants. The two equations have many solutions. What is the relationship between a and b? • one year ago
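No answers survive on the page, but the relationship follows from the equations themselves: multiplying 3x - 6y = b by 2 gives 6x - 12y = 2b, so the two equations share infinitely many solutions exactly when a = 2b. A small check (my own sketch, not from the page):

```python
# The system 6x - 12y = a, 3x - 6y = b has infinitely many solutions
# exactly when a = 2b, since the first equation is then twice the second.
def satisfies_both(a, b, x, y):
    return abs(6 * x - 12 * y - a) < 1e-9 and abs(3 * x - 6 * y - b) < 1e-9

b = 5.0
a = 2 * b
# Points on the line 3x - 6y = b, i.e. x = (b + 6y) / 3 for any y.
solutions = [((b + 6 * y) / 3, y) for y in (-2.0, 0.0, 7.5)]
print(all(satisfies_both(a, b, x, y) for x, y in solutions))  # -> True
```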
{"url":"http://openstudy.com/updates/5120e58fe4b06821731cdc1f","timestamp":"2014-04-21T04:57:47Z","content_type":null,"content_length":"53598","record_id":"<urn:uuid:7e78cfdd-1947-42c9-bedf-89540b9bd1a9>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/metalpen/asked","timestamp":"2014-04-19T02:06:16Z","content_type":null,"content_length":"77671","record_id":"<urn:uuid:3f391681-ec9e-46f8-9394-5650288bedb4>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
petsc-dev 2014-04-19
PETSc matrices (Mat objects) are used to store Jacobians and other sparse matrices in PDE-based (or other) simulations.
ex1.c: Reads a PETSc matrix and vector from a file and reorders it
ex2.c: Tests SeqDense matrices with an LDA (leading dimension of the user-allocated array) larger than M
ex4.c: Reads U and V matrices from a file and performs y = V*U'*x
ex5.c: Each process opens the file and reads its part
ex8.c: Shows how to add a new MatOperation to AIJ MatType
ex9.c: Tests MatCreateComposite()
ex10.c: Reads a PETSc matrix and computes the 2-norm of the columns
ex11.c: Tests MatMeshToDual()
ex12.c: Reads a PETSc matrix and vector from a file and appends the vector to the matrix
ex15.c: Example of using graph partitioning to segment an image
ex16.c: Reads a matrix from a PETSc binary file
ex17.c: Example of using graph partitioning with a matrix in which some procs have empty ownership
{"url":"http://www.mcs.anl.gov/petsc/petsc-dev/src/mat/examples/tutorials/index.html","timestamp":"2014-04-20T11:43:40Z","content_type":null,"content_length":"2242","record_id":"<urn:uuid:4fbe59ff-ded3-450b-9b15-418594e6d38f>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: help log linear
From David Gottschlich <davidgottschlich@comcast.net>
To statalist@hsphsun2.harvard.edu
Subject Re: st: help log linear
Date Sun, 22 Feb 2004 18:40:06 -0500
I seem to have gotten this a long time after you sent it, so you may have the answer already (and please forgive me if I am giving the wrongly simple answer to a complex question), but....
predict logy
will give you the predicted result for each independent x; of course, this will be in log space. If you used the natural log, then
generate predicty=exp(logy)
will convert the predictions back to real space;
generate predicty=10^logy
works if you used the common log. For most (many?) applications, it's more common to use one of the forms
y = a ln(x) + b
ln(y) = a x + b
If you are using the second form, of course you cannot force y to equal zero unless you can reach negative infinity from your function.
cas2111@columbia.edu wrote:
I am having a bit of difficulty figuring out how to predict a y in a log linear model. I cannot determine the following items: 1) obtaining fitted values of logy(i) from the regression of logy on each independent x; 2) for each observation i, creating a variable equal to exp(logy(i)); 3) performing a regression through the origin. Any other information that could be provided to assist in running a log linear regression would be greatly appreciated. I could not find anything in my manual about it. Thank you,
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
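David's recipe - regress in log space, then exponentiate the fitted values - is easy to sketch outside Stata. The following Python version is illustrative only (the data and variable names are mine, not from the thread):

```python
import math

# Fit ln(y) = a*x + b by ordinary least squares, then back-transform,
# mirroring Stata's:  predict logy  /  generate predicty = exp(logy)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [math.exp(0.5 * x + 1.0) for x in xs]  # exact log-linear data

logys = [math.log(y) for y in ys]
n = len(xs)
xbar = sum(xs) / n
lbar = sum(logys) / n
a = sum((x - xbar) * (l - lbar) for x, l in zip(xs, logys)) / \
    sum((x - xbar) ** 2 for x in xs)
b = lbar - a * xbar

predicty = [math.exp(a * x + b) for x in xs]  # predictions back in real space
print(round(a, 6), round(b, 6))  # -> 0.5 1.0
```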
{"url":"http://www.stata.com/statalist/archive/2004-02/msg00595.html","timestamp":"2014-04-20T03:27:12Z","content_type":null,"content_length":"6680","record_id":"<urn:uuid:9770617c-752c-436b-a9d3-d80c7cbe6b36>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
TempLS correlation with other indices.
June 2011
I've been posting monthly global averages before the other surface indices appear. The purpose of this haste is partly to see how well TempLS performs in comparison, uninfluenced by "peeking". Here is a recent monthly comparison, with links to earlier months. I post the data here.
So it's now time for a review of how well TempLS tracks. Along the way, I found some interesting results on how the main indices track each other.
Data plot
The data sources are: HADCrut 4, Gistemp Land/Ocean, NOAA Global Land Ocean, RSS MSU Lower Troposphere, UAH Lower Troposphere, and TempLS. The data is tabulated here.
So here's a plot of the indices for those 17 months, set to a common anomaly base period of 1979-2000. Generally the surface-based (non-satellite) indices follow each other pretty closely.
Now to show more detail of the differences, I'll plot the monthly differences between TempLS and the others. I'll arbitrarily zero the plots in a staggered way to make a point.
Now it becomes clearer. TempLS tracks NOAA very well, HADCrut 4 a little less, GISS less again, and the lower troposphere indices rather poorly. There is, of course, a good reason for this. TempLS and NOAA use very similar datasets - GHCN land data, and ERSST. TempLS uses unadjusted GHCN, but there is very little adjustment in this period.
I wanted to see also how the other indices track each other, and to give a statistically testable measure. An obvious one is just the standard deviation of the scatter seen in the figure above. Here is a table of that measure for each pairing:
Standard deviations of differences (°C)
        Had     Gis     NOAA    RSS     UAH     TLS
Had     0       0.0579  0.0274  0.0836  0.0721  0.0248
Gis     0.0579  0       0.0692  0.0825  1.1017  0.0653
NOAA    0.0274  0.0692  0       0.0925  0.0756  0.0183
RSS     0.0836  0.0825  0.0925  0       0.0561  0.0867
UAH     0.0721  0.1017  0.0756  0.0561  0       0.0754
TLS     0.0248  0.0653  0.0183  0.0867  0.0754  0
The differences are marked - 0.0183°C for NOAA vs 0.0653°C for GISS, relative to TempLS.
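For concreteness, the scatter measure tabulated above is just the standard deviation of the pairwise differences between two anomaly series. A sketch with made-up numbers (not the post's data; the population form of the standard deviation is my assumption):

```python
import statistics

def sd_of_differences(a, b):
    """Standard deviation (population form, an assumption on my part)
    of the monthly differences between two anomaly series."""
    return statistics.pstdev(x - y for x, y in zip(a, b))

series_a = [0.10, 0.20, 0.30]
series_b = [0.12, 0.18, 0.33]
print(round(sd_of_differences(series_a, series_b), 4))  # -> 0.0216
```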
Another measure is the correlation coefficient ρ for the monthly changes. This has the advantage that it can be easily tested for significance, with the formula for the t-value:
t = ρ sqrt((n-2)/(1-ρ²))
where n is the number of months. As usual, t is significantly above zero at 95% confidence if it exceeds 1.96. Actually, the significance is diminished by autocorrelation etc. Still, in cases of interest it clears that level by a wide margin.
Correlation coefficients of monthly changes
        Had    Gis    NOAA   RSS    UAH    TLS
Had     0      0.594  0.845  0.601  0.678  0.923
Gis     0.594  0      0.423  0.404  0.148  0.505
NOAA    0.845  0.423  0      0.549  0.722  0.968
RSS     0.601  0.404  0.549  0      0.874  0.599
UAH     0.678  0.148  0.722  0.874  0      0.71
TLS     0.923  0.505  0.968  0.599  0.71   0
t-values of monthly changes
        Had    Gis    NOAA   RSS    UAH    TLS
Had     0      2.76   5.92   2.81   3.45   8.99
Gis     2.76   0      1.75   1.65   0.56   2.19
NOAA    5.92   1.75   0      2.46   3.9    14.43
RSS     2.81   1.65   2.46   0      6.74   2.8
UAH     3.45   0.56   3.9    6.74   0      3.77
TLS     8.99   2.19   14.43  2.8    3.77   0
The correlation of TempLS with all the indices is significantly positive, although with GISS only barely so, over this period.
Here's a graphical representation of the correlation. The circle areas are proportional to the t-value of the pairing. Big means close tracking. In fact, the area is proportional to ρ sqrt(1/(1-ρ²)); there's no difference for one plot, but it means that when I compare different periods, the circles do not inflate with the longer period.
The best correlations are in fact between TempLS and HADCrut and NOAA, which likely indicates the commonality of their data sources. There is also quite good tracking between the satellite indices. It seems that the different methods used have less effect than the different data sets.
Longer periods
I looked at the 17 months for which TempLS made predictions. But comparisons between other indices are valid beyond that period. As indeed are comparisons with TempLS, because in calculating the monthly values I actually didn't peek. The story is very similar.
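The significance test above is a one-liner; a sketch (the function name is mine, and taking n as the 16 month-to-month changes from 17 months is my assumption):

```python
import math

def t_value(rho, n):
    """t = rho * sqrt((n - 2) / (1 - rho^2)) for a correlation rho
    computed from n monthly changes."""
    return rho * math.sqrt((n - 2) / (1 - rho * rho))

# With 16 monthly changes, the NOAA/TempLS correlation of 0.968 gives
# a t-value of about 14.4, far above the 1.96 threshold for 95% confidence.
print(round(t_value(0.968, 16), 1))  # -> 14.4
```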
All the correlations are now highly significant. I'll just show below the circle plots for periods of five and ten years:
Correlations over 5 years
Correlations over 10 years
Correlation of TempLS and GISS seems better over the longer periods, and with NOAA not quite so good.
There are interesting patterns of correlation between the various temperature indices. Those using similar datasets correlate very well. GISS, which uses a more diverse set, behaves rather differently. TempLS fits very well into the NOAA/HADCrut grouping.
2 comments:
1. OK, I was surprised. I expected from your global coverage you'd be closer to GISTEMP. However, I have a thought. Are you including the sea ice readings in the ERSST data? (They're always set to -1.80.) GISTEMP assumes these are unobserved and extrapolates them from the nearest land station. That might be the principal difference. There's a philosophical question about what you want to measure there - air over sea ice is more like air over land ice than air over sea. On the other hand, if you treat it that way then the global 'land-cover' varies by season! GISTEMP finesses the question by always extrapolating land temps into unmeasured ocean cells. My version is a little more subtle, extrapolating both the land and ocean temps before optionally filling in either direction. But if we accept the ice-as-land proposition it would be more rigorous to actually do that rather than arbitrarily extending from land to ocean. I tried to tackle this problem with a clever joint-kriging algorithm in which land and ocean data were kriged separately in the same matrix with weaker cross terms between the land and ocean blocks. I eventually abandoned it because the hold-out stats weren't compelling, and also because the mathematical cleverness doesn't really address the physical question. Kevin C
2. Kevin, I also treat the -1.8 readings as NA - non-existent. I don't think they are a measure of climate. I don't extrapolate, which may be the difference.
Because I am in effect doing months in isolation, I don't think there is a seasonal issue. There is a bit of bias in that cells which sometimes freeze in a given month and sometimes not will return readings in warm years and not in cold ones. But using the -1.8 would make that worse.
{"url":"http://moyhu.blogspot.com/2012/12/templs-correlation-with-other-indices.html","timestamp":"2014-04-16T22:05:53Z","content_type":null,"content_length":"103948","record_id":"<urn:uuid:552e9d30-1438-4b93-b393-039ac55d8c45>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
New Rochelle ACT Tutor Find a New Rochelle ACT Tutor ...I have helped many math students raise their grades dramatically in short periods of time. I accomplish this by focusing on improving a student's problem solving ability, a skill that is not often taught well in school. I have worked with students with learning disabilities as well as gifted students taking advanced classes or classes beyond their grade level. 34 Subjects: including ACT Math, calculus, geometry, writing ...I've also worked as a stage manager and production assistant for both student and professional productions and have a deep understanding of the technical side of theater. I studied Computer Science in college (I received an engineering degree in Computer Science from Princeton University), so I ... 37 Subjects: including ACT Math, chemistry, physics, calculus ...Francis College and Berkeley College; overall I have been teaching for 15 years. I have also been tutoring for the past 5 years Elementary Math, Algebra, Precalculus and Calculus students, amongst others, at Hunter College's Dolciani Math Learning tutoring center. I have a Master of Arts and a Bachelor of Science in Pure Mathematics from City College of CUNY, where I also taught for 2 years. 21 Subjects: including ACT Math, calculus, economics, precalculus ...I am here to help you with the most beautiful language in the world. I would very much like to help you! I have a degree from SUNY Fredonia where I graduated at the top of my class. 45 Subjects: including ACT Math, English, Spanish, reading ...I also have extensive teaching experience -- both as a mathematics tutor and an adjunct professor. I have a Ph.D. in chemical engineering from the California Institute of Technology, and a minor concentration in applied mathematics. I have worked over 20 years in research in the oil, aerospace, and investment management industries. 11 Subjects: including ACT Math, calculus, algebra 2, algebra 1
{"url":"http://www.purplemath.com/New_Rochelle_ACT_tutors.php","timestamp":"2014-04-20T21:02:19Z","content_type":null,"content_length":"23863","record_id":"<urn:uuid:16b85efa-338a-4929-8984-45453c0b1186>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
Sammamish Geometry Tutor ...I've also programmed in VBA, most recently for an Excel update function. I was also responsible for our network at our satellite office working for GE, as well as web-based instruction on parts of the GE system. I taught spiral math at the high school level in the Peace Corps. 39 Subjects: including geometry, reading, English, algebra 1 ...I interact with the child to find "where they are", what interests them and how we can use this information to best help the child learn the math necessary to his/her age/grade level (or beyond). I have gained graduate credit in tutoring/teaching phonics reading, basic writing and elementary math... 22 Subjects: including geometry, physics, ASVAB, algebra 1 ...It is my belief that students are capable of understanding their weaker subjects as long as it is explained in a manner best suited to the student: whether that be in drawing, words, or otherwise. Schedule: My current schedule (as of March 2014) is fairly competitive and any scheduled sessions should be made a week in advance. Cancellations should be made 6 hours in advance. 17 Subjects: including geometry, chemistry, calculus, physics ...I have had several years of classical training in piano with two of those at University of Puget Sound and Oregon State. I am an excellent sight reader and have been paid as an accompanist and have training in music theory. I was employed for many years in the field of both local and wide area ... 43 Subjects: including geometry, chemistry, calculus, physics ...And then, I give the student sample problems to solve independently and coach them further as needed. My main goal is to make sure the student is self-sufficient, and capable of using the methods on quizzes or tests. With respect to my educational background and work experience, I'm a Physiology major, and I just graduated from the University of Washington. 26 Subjects: including geometry, chemistry, calculus, physics
{"url":"http://www.purplemath.com/sammamish_wa_geometry_tutors.php","timestamp":"2014-04-19T05:04:38Z","content_type":null,"content_length":"24089","record_id":"<urn:uuid:9c0a2b09-51d4-4a0f-8621-c4046913d5de>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Math 308 project 2005, by D M
However, these equations are impractical. The sun does not reside at the center of the orbit but at one of the foci. The equation has to be modified so that the origin is positioned at the sun's center. Another point is that instead of using the minor radius b to help describe the orbit, it is better to use the eccentricity e of the ellipse, where
e = (1 - b^2/a^2)^(1/2)
Eccentricity describes the shape of the ellipse. When the eccentricity is nearer to 1 the ellipse is flattened, more cigar shaped; when the eccentricity is nearer 0 it gives a more circular shape. To find the focal distance c, we have
c = ea = a(1 - b^2/a^2)^(1/2) = (a^2 - b^2)^(1/2)
Now using all the above equations, the polar coordinates that describe the ellipse with the origin at the focus are
r = a(1 - e^2)/(1 + e cos ø)
where r is the radius from the focus, and the parameterized version of this equation is
x = a(cos ø - e), y = a(1 - e^2)^(1/2) sin ø
picture 2
Starting with a 2-D scenario, it takes two points on the path to find the major radius and eccentricity of the path of the orbiting body: L(xl,yl) and M(xm,ym), where
xl = a(cos øl - e), øl = tan^-1(yl/xl) and xm = a(cos øm - e), øm = tan^-1(ym/xm)
From these four equations,
a = (xm - xl)/(cos øm - cos øl)
e = cos øm - xm(cos øm - cos øl)/(xm - xl)
picture 3
Now in 3-D, the ellipse can be inclined and/or tilted. Inclination, i, is the rotation about the y-axis and tilt, t, is the rotation about the x-axis. Again only two points are needed to find the major radius, eccentricity, inclination, and tilt of the path of the orbiting body, with one caveat: that both points are not at the minor radius, otherwise another point will need to be included. To find the inclination of the path of the orbiting body, the normal n of the plane that the ellipse resides on needs to be found. This can be done by using the cross product. Using the points L(xl,yl,zl) and M(xm,ym,zm) and the origin, two vectors, u and v, can be created.
u = (-xl,-yl,-zl), v = (xm,ym,zm)
The normal is the outcome of taking the cross product of these two vectors divided by the length of the cross product.
n = u X v = | x, y, z ; -xl, -yl, -zl ; xm, ym, zm |
n = ((-yl)(zm) - (-zl)(ym), (-zl)(xm) - (-xl)(zm), (-xl)(ym) - (-yl)(xm))
and the length of the normal is
p = (((-yl)(zm) - (-zl)(ym))^2 + ((-zl)(xm) - (-xl)(zm))^2 + ((-xl)(ym) - (-yl)(xm))^2)^(1/2)
i = cos^-1(((-xl)(ym) - (-yl)(xm))/p)
The tilt can be found by finding the angle between the projection of one of the vectors onto the y-z plane and the y-axis using the cosine law, just as long as that vector is not along the x-axis. The projection of L(xl, yl, zl) onto the y-z plane is (0, yl, zl), and the vector that describes the y-z plane is (0, -1, 0):
t = cos^-1((0, -1, 0) • (0, yl, zl)/(yl^2 + zl^2)^(1/2))
So only two points and the focus are needed to find
major radius, a = (xm - xl)/(cos øm - cos øl)
eccentricity, e = cos øm - xm(cos øm - cos øl)/(xm - xl)
inclination, i = cos^-1(((-xl)(ym) - (-yl)(xm))/p)
tilt, t = cos^-1((0, -1, 0) • (0, yl, zl)/(yl^2 + zl^2)^(1/2))
to describe the path of a celestial body orbiting the sun.
picture 5
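The plane-orientation step translates directly into code. This is my own illustrative sketch (the function names, and the use of |n_z| so that coplanar points give inclination 0 regardless of orientation, are my choices, not the project's):

```python
import math

def plane_normal(L, M):
    """Unit normal of the orbital plane through the origin (the focus),
    via the cross product u x v with u = -L and v = M, as in the text."""
    ux, uy, uz = (-L[0], -L[1], -L[2])
    vx, vy, vz = M
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    p = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)  # length of the normal
    return tuple(c / p for c in n)

def inclination(L, M):
    """Angle between the orbital plane and the x-y plane, read off the
    z-component of the unit normal."""
    nz = plane_normal(L, M)[2]
    return math.acos(abs(nz))

# Two points in the x-y plane give inclination 0; a pair spanning the
# x-z plane gives pi/2.
print(round(inclination((1.0, 0.5, 0.0), (-2.0, 1.0, 0.0)), 12))  # -> 0.0
print(round(inclination((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)), 12))
```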
{"url":"http://www.math.ubc.ca/~cass/courses/m308-05b/projects/moncado/deanm.htm","timestamp":"2014-04-20T13:36:17Z","content_type":null,"content_length":"13079","record_id":"<urn:uuid:e986fe5e-edec-407a-86b4-32d2c6156aa8>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
Search the Site
Results 1 - 20 of 134 matches
How Do We Estimate Magma Viscosity? part of Pedagogy in Action:Library:Teaching with SSAC:Examples SSAC Physical Volcanology module. Students build a spreadsheet to examine how magma viscosity varies with temperature, fraction of crystals, and water content using the non-Arrhenian VFT model. Bubbles in Magmas part of Pedagogy in Action:Library:Teaching with SSAC:Examples SSAC Physical Volcanology module. Students build a spreadsheet and apply the ideal gas law to model the velocity of a bubble rising in a viscous magma. Porosity and Permeability of Magmas part of Pedagogy in Action:Library:Teaching with SSAC:Examples SSAC Physical Volcanology module. Students build a spreadsheet for an iterative calculation to find volume of bubbles and hence porosity, permeability and gas escape as a function of depth. What is the Volume of the 1992 Eruption of Cerro Negro Volcano, Nicaragua? part of Pedagogy in Action:Library:Teaching with SSAC:Examples SSAC Physical Volcanology module. Students build a spreadsheet to calculate the volume a tephra deposit using an exponential-thinning model. How Does Surface Deformation at an Active Volcano Relate to Pressure and Volume Change in the Magma Chamber? part of Pedagogy in Action:Library:Teaching with SSAC:Examples SSAC Physical Volcanology module. Students build a spreadsheet to examine and apply the Mogi model for horizontal and vertical surface displacement vs. depth and pressure conditions in the magma Being P-Waves and S-Waves part of Pedagogy in Action:Library:Role Playing:Examples Teach students about P-waves and S-waves by having them model them with their own bodies.
Sun Spot Analysis part of Pedagogy in Action:Library:Teaching with Data:Examples Introductory students use Excel to graph monthly mean Greenwich sunspot numbers from 1749 to 2004 and perform a spectral analysis of the data using the free software program "Spectra". Waves Through Earth: Interactive Online Mac and PC part of Pedagogy in Action:Library:Mathematical and Statistical Models:Examples Students vary the seismic P and S wave velocity through each of four concentric regions of Earth and match "data" for travel times vs. angular distance around Earth's surface from the source to Mass Balance Model part of Pedagogy in Action:Library:Mathematical and Statistical Models:Examples Students are introduced to the concept of mass balance, flow rates, and equilibrium using an online interactive water bucket model. Slinky and Waves part of Pedagogy in Action:Library:Interactive Lecture Demonstrations:Examples Use a Slinky to show:P and S waves, Wave reflection, and Standing waves in interactive lecture demonstration. How Do We Estimate Melt Density? part of Pedagogy in Action:Library:Teaching with SSAC:Examples SSAC Physical Volcanology module. Students build spreadsheets to estimate melt density at high temperatures and pressures from the thermodynamic properties of silicates. How are Flow Conditions in Volcanic Conduits Estimated? part of Pedagogy in Action:Library:Teaching with SSAC:Examples SSAC Physical Volcanology module. Students build a spreadsheet to calculate velocity of rising magma in steady-state Plinian eruptions using conservation of mass and momentum. Lithospheric Density part of Pedagogy in Action:Library:Teaching with SSAC:Examples Students learn about the weighted mean by building spreadsheets that apply this concept to the average density of the oceanic lithosphere. 
The Transformer: Simulation Lecture Demo part of Pedagogy in Action:Library:Interactive Lectures:Examples The activity presents an interactive lecture demonstration of the operation of a transformer using a simulation. Work: pre, during and post class questions part of Pedagogy in Action:Library:Interactive Lectures:Examples This series of questions before instruction, in-class peer instruction, and post-instruction allow students to iterate and improve their understanding of work incrementally. Measuring voltage and current in a DC circuit part of Pedagogy in Action:Library:Interactive Lectures:Examples These exercises target student misconceptions about how to properly measure voltage and current in simple DC circuits by letting them investigate different meter arrangements without fear of damaging equipment. These activities also are designed to lead to other investigations about simple DC circuits.
{"url":"http://serc.carleton.edu/sp/search/index.html?q1=sercvocabs__43%3A6","timestamp":"2014-04-16T17:34:45Z","content_type":null,"content_length":"35027","record_id":"<urn:uuid:b24da0a3-a6ff-4bf0-a256-1ac8f55971e3>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Proper holomorphic map from unit disk to punctured unit disk

Question: It is easy to see that there cannot be a proper holomorphic map from the punctured unit disk to the unit disk in the complex plane. What about the other direction: does there exist a proper holomorphic map from the unit disk to the punctured unit disk?

Tags: complex-analysis, cv.complex-variables

Answer (9 votes): I don't think so. A map $\mathbb D \to \mathbb D^\ast$ would lift to a map $\mathbb D\to \mathbb H$, where the upper halfplane $\mathbb H$ is seen as the universal cover of $\mathbb D^\ast$. As $\mathbb D \to \mathbb D^\ast$ is proper, so is $\mathbb D\to \mathbb H$. In particular it has closed image, but by the open mapping theorem it also has open image and is hence equal to $\mathbb H$. That means that the inverse image under $\mathbb D \to \mathbb D^\ast$ of a point is the disjoint countable topological union of non-empty sets, which contradicts properness.

Answer (0 votes): The question is somewhat unclear: take $i\colon D^*\rightarrow D$ to be the identity map, which is holomorphic. Then you can take $f_n\colon D^{*}\rightarrow D^{*}$, $f_n(z)=z^n$, for $n\in \mathbb{Z}$, and compose with $i$ to get different holomorphic maps from the punctured unit disc to the unit disc.

Comment by Jaikrishnan (Feb 8 '11): In my question, proper means that the inverse image of a compact set is compact.
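The lifting argument in the first answer can be filled in step by step; the following expansion is my own gloss, not part of the original post:

```latex
Suppose $f\colon \mathbb{D}\to\mathbb{D}^{\ast}$ is proper and holomorphic. Since
$\mathbb{D}$ is simply connected, $f$ lifts through the universal covering
$\pi\colon \mathbb{H}\to\mathbb{D}^{\ast}$ to a holomorphic map
$F\colon \mathbb{D}\to\mathbb{H}$ with $\pi\circ F=f$. For any compact
$K\subset\mathbb{H}$ we have
\[
  F^{-1}(K)\subset f^{-1}\bigl(\pi(K)\bigr),
\]
and $F^{-1}(K)$ is a closed subset of the compact set $f^{-1}(\pi(K))$, so $F$ is
proper. A proper map into a locally compact Hausdorff space is closed, so
$F(\mathbb{D})$ is closed in $\mathbb{H}$; by the open mapping theorem it is also
open, hence $F(\mathbb{D})=\mathbb{H}$. Now fix $w\in\mathbb{D}^{\ast}$. The fiber
$\pi^{-1}(w)$ is countably infinite and $F$ is surjective, so
\[
  f^{-1}(w)=F^{-1}\bigl(\pi^{-1}(w)\bigr)
\]
contains at least one point over each of the infinitely many points of
$\pi^{-1}(w)$. But $f^{-1}(w)$ is compact (by properness) and discrete (the fiber
of a nonconstant holomorphic map), hence finite, a contradiction.
```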
{"url":"http://mathoverflow.net/questions/53787/proper-holomorphic-map-from-unit-disk-to-punctured-unit-disk","timestamp":"2014-04-17T15:37:40Z","content_type":null,"content_length":"54133","record_id":"<urn:uuid:c35467f9-71f1-46d6-929e-2aaa17d352aa>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: factor u^2 - 10u + 9
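The page shows no answer; the factorization itself is immediate (worked here, not taken from the page):

```latex
u^{2}-10u+9 = (u-1)(u-9), \qquad \text{since } 1+9=10 \ \text{and}\ 1\cdot 9 = 9.
```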
{"url":"http://openstudy.com/updates/4fa81449e4b059b524f3f21b","timestamp":"2014-04-19T15:13:36Z","content_type":null,"content_length":"44624","record_id":"<urn:uuid:e6ec3df1-50ff-4f96-badd-032153f8b2c8>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
Transition to turbulence and turbulent bifurcation in a von Karman flow

Seminar Room 1, Newton Institute

We study the transition from laminar flow to fully developed turbulence in a von Karman flow (Re from 50 to 10^6). The flow undergoes a classical succession of bifurcations driven by the destabilization of the azimuthal shear-layer. We observe that the transition to turbulence is globally supercritical: the kinetic energy of the velocity fluctuations can be used as a single order parameter to characterize the transition. We also measure the dissipation through the torque injected in the flow. For high Reynolds numbers, the mean flow presents multiple solutions: the canonical symmetric solution becomes marginally unstable towards a flow which breaks the basic Rπ-symmetry. The global bifurcation between these states is highly subcritical and the system thus keeps a memory of its history. The transition recalls low-dimensional dynamical system transitions and exhibits slow dynamics and peculiar statistics.

References:
F. Ravelet et al., J. Fluid Mech. 601, 339 (2008)
F. Ravelet et al., Phys. Rev. Lett. 93, 164501 (2004)
{"url":"http://www.newton.ac.uk/programmes/HRT/seminars/2008091110401.html","timestamp":"2014-04-18T23:17:09Z","content_type":null,"content_length":"7073","record_id":"<urn:uuid:99ecf541-d39a-46c3-a91f-9edfb9e35e45>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
Student Support Forum: 'Hard to describe...system of equations likes wh...' topic

I have a couple-page-long, nonlinear system of equations that includes within it many conditionals, which I've found work best when I use Implies[ ]. The mathematical equations are relatively simple algebraic equations--nothing fancy there. I can find out all of the answers by hand if I need to, and this system has only one solution set, yet I like to use FindInstance[], specify Reals, and only ask for one example. (I do this because I'm going to expand the system of equations later on--to a magnitude that will not allow me to figure out the solutions by hand, and there may be more than one possible solution set for some of the variables.)

So the problem is that there is one variable that I feel to be a sort of cornerstone of the equation. When I define it as equal to a whole number, the system evaluates in less than 10 seconds. If I define it as equal to a fraction made of whole numbers, the same thing: the system evaluates in less than 10 seconds. But when I define it as equal to a decimal, my computer revs and goes on for several minutes until I abort--even when it's a decimal identical to the successful fraction. Also, if I leave the variable blank and let FindInstance[] choose an appropriate value, it doesn't--it has a lot of trouble (in all but one useless instance); it just revs and revs, and slowly my computer's real memory gets eaten up by the operation (until I abort).

The big problem is that in the larger system of equations that I will make, I don't know what that cornerstone variable is going to be, and I need the computer to find it. The program is having terrible trouble doing something like that and I don't know why. It must have something to do with the way it accepts fractions but not decimals, mustn't it?
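A common workaround for exact solvers that stall on machine decimals is to rationalize the input first. This is a general illustration of mine, not advice from the forum: binary floats carry rounding noise, and converting them to exact rationals gives the solver the same clean input a hand-typed fraction would. Python's standard library shows the idea:

```python
from fractions import Fraction

x = 0.375                 # exactly representable in binary floating point
print(Fraction(x))        # 3/8, recovered exactly

y = 0.1                   # NOT exactly representable: the stored float is a nearby binary value
print(Fraction(y))        # an exact but enormous fraction with a huge denominator
print(Fraction(y).limit_denominator(10**6))  # 1/10, the rational the user meant
```

In Mathematica the analogous step is Rationalize[x] before handing values to FindInstance; that would be consistent with the poster's observation that exact fractions evaluate quickly while the "identical" decimal does not, since the decimal is not actually identical once stored as a machine float.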
{"url":"http://forums.wolfram.com/student-support/topics/29985","timestamp":"2014-04-19T09:34:03Z","content_type":null,"content_length":"28513","record_id":"<urn:uuid:2b3fbd37-1ef1-4c92-a037-3732801d701e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Phase Crossovers: What are the benefits? Who makes them?

post #1 of 48, 8/5/09 at 9:55am (Thread Starter)

so like the thread title indicates, 1. what are the benefits to a linear phase crossover? 2. what companies make linear phase crossovers? is there an alternative device that can "fix" (linearize) the phase of a system after it is in place? here is a dolby paper on linear phase crossovers: this post was inspired by reports from tom danley and others that the subjective level of "punch" of a system is, in part, related to having all the various sounds arrive at the listener at the same instant (minimal group delay/minimal phase changes) as well as subjective reports that when linear phase crossovers were employed at stag theater (skywalker ranch), they "cleaned up" the sound quite a bit.

Do a search on 'infinite slope'. Joseph Audio has been doing this passively for years, although I'd think it would be easier today in the digital domain. The bottom line: All the issues inherent with such extreme slopes are in such a narrow passband that they are relatively inaudible. Here's another article on fast/slow bass that suggests driver integration is the main causality. I guess this was an issue 10 years ago too... Add to that that the drivers are not linear, and it is a bit of a challenge. Linkwitz discusses it a bit in his Orion and Phoenix for the active domain. Gets a tad clumsy in the passive. Digital has its own issues.

Originally Posted by cc00541: [quoted in full above]
Curt, I only skimmed the patent but my understanding of the Joseph Audio XO is that it's a variation on the Cauer elliptical theme -- using a notch filter to increase the slope of a standard analog XO immediately above/below the XO frequency. Each of the filters is still minimum phase although the sum isn't. Linear-phase filters are different -- no phase shift as the magnitude changes. Bruno Putzeys and Siegfried Linkwitz have both done pieces showing why linear-phase crossovers can be 'perfect' on axis but ring when you move off axis. Bruno is a digital kind of guy (invented the UcD amps) but he favors low-slope analog or IIR (digital equivalent of analog) crossover filters.

personally, i am only interested in active crossovers, but if linear phase is possible passively some folks would surely be interested in that.

you got it backwards. Active crossovers can be implemented digitally using a DSP chip or other microprocessor. They either use digital approximations to traditional analog circuits, known as IIR filters (Bessel, Butterworth, Linkwitz-Riley etc.), or they use finite impulse response (FIR) filters. IIR filters have many similarities with analog filters and are relatively undemanding of CPU resources; FIR filters on the other hand usually have a higher order and therefore require more resources for similar characteristics. They can be designed and built so that they have a linear phase response, which is thought desirable by many involved in sound reproduction. There are drawbacks though - in order to achieve linear phase response, a longer delay time is incurred than would be necessary with an IIR filter. IIR filters, which are by nature recursive, have the drawback that if not carefully designed they may enter limit cycles resulting in non-linear distortion.

so it's IIR that we have in our Behringers. FIR is what they have in DEQX and the Dolby Lake processor.
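What "linear phase" means concretely: a symmetric FIR tap set has a phase response that falls as a straight line with frequency, i.e. a constant group delay of (N-1)/2 samples at every frequency. A toy sketch of mine, not from the thread:

```python
import cmath

def dtft(taps, w):
    """Frequency response of an FIR filter (taps) at radian frequency w."""
    return sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(taps))

h = [0.2] * 5  # symmetric taps -> linear phase by construction

# The measured delay, -phase(H)/w, comes out the same at every frequency:
for w in (0.1, 0.25, 0.5):
    delay = -cmath.phase(dtft(h, w)) / w
    print(f"w={w}: group delay = {delay:.3f} samples")  # always (5-1)/2 = 2

# That constant delay is the latency cost of linear phase: a 600-tap FIR
# at 44.1 kHz holds the whole signal back by (600-1)/2 samples, ~6.8 ms.
print((600 - 1) / 2 / 44100 * 1000, "ms")
```

For a minimum-phase IIR the same calculation gives a delay that varies with frequency, which is exactly the group-delay variation the thread is arguing about.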
also i recall from reading the writeup on EAW NT speakers ( digital prosound speakers ) that they had to develop a new kind of digital filter that had the benefits of FIR filters but didn't have the drawback of TIME DELAY.

now the time delay talked about here i believe is IRRELEVANT FOR HOME AUDIO. but in LIVE PERFORMANCE you would hope that the sound comes at the same time as the performer moves his lips or strikes the cymbals, so time delay is BAD there.

it seems like you can't get around time delay. you can only have it come evenly at all frequencies ( linear phase ) or have it all mixed into some sort of audio soup ( IIR and analog ). with analog and IIR there is a tradeoff between amount of attenuation ( 6db/oct, 12db/oct, 24db/oct etc ) and phase error ( 90 degrees, 180 degrees, 360 degrees etc ). some designers will say attenuation is more important and go with 48db/oct ( Alesis studio monitors ) and others will say phase is more important and use only 6db/oct ( Dynaudio home speakers ).

but with FIR you just use 300db/oct and no phase error ! problem solved. you just need to shell out a couple grand on the crossover ! ! !

as most digital crossovers are developed for prosound, it would explain why very few of them use FIR filters. for a live performance time apparently is more important than phase. even though you probably think it's the same thing, it's not ! after all they're not called "zero phase" but LINEAR phase.

i am certain such a device could be built. but i doubt it would be cheaper to use a regular crossover and then "fix" it than to just use a FIR crossover in the first place. a device to linearize phase might be worth it to correct for the INHERENT phase errors that arise due to the FINITE bandwidth of any physical speaker. so if such a device existed i would use it around 20hz and around 20khz to flatten the speaker's phase THERE, but for a crossover i would just do it right from the beginning.

Oops, brain fart, I meant IIR. I edited my post.
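Part of why steep FIR crossovers cost real DSP money is that the tap count scales with sample rate divided by transition width. A common back-of-envelope estimate is fred harris's rule of thumb, N ≈ (fs/Δf)·(attenuation_dB/22); this is a heuristic of mine to bring in here, not a formula anyone in the thread uses, and not an exact design equation:

```python
def fir_taps_estimate(fs_hz, transition_hz, atten_db):
    """fred harris rule of thumb for equiripple FIR length (heuristic)."""
    return round(fs_hz / transition_hz * atten_db / 22)

# A 100 Hz-wide transition at 44.1 kHz with a modest 28 dB of rejection
# already needs hundreds of taps...
print(fir_taps_estimate(44100, 100, 28))    # ~561

# ...while the same rejection with a 1 kHz-wide transition is cheap.
print(fir_taps_estimate(44100, 1000, 28))   # ~56
```

The first number lands near the 600-tap / -28 dB case worked out later in the thread, which is reassuring for a rule of thumb: narrow low-frequency transitions are what make FIR crossovers expensive.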
Minimum phase is a more accurate term because system amplitude response deviations from flat imply a phase shift and all audio systems have finite bandwidth. With added delay elsewhere they'll get you imaging between different driver configurations like a WMTW center channel and TM mains and they let marketing departments brag that a square wave going in looks like a square wave coming out on a scope. This disregards that people can't hear the phase distortion of second order all-pass filters up through LR4. Even the paper you cite says:

One area of considerable research interest is phase distortion in loudspeaker crossover networks. In previous work, including the references in this paper, the discussion has been focused upon the audibility of phase distortion within a single loudspeaker system. Through subjective and empirical tests, it has been determined that the phase distortion introduced by a conventional crossover network is insignificant.

Reading the paper farther says that this is good in pro-sound setups with different speaker configurations (main and auxiliary). I was thinking of imaging in a home setting; but summed amplitude response being flat would be good too.

The most common analog realization of "linear phase" is a first-order analog cross-over, which still allows excursion to double with each dropping octave, leads to output level limits and/or IM distortion, often precludes using pistonic drivers so the system is always distorting, etc. Those things are all bad. It also sounds different due to the broader but shallower power response dip about Fc compared to high-order filters, which is audible and preferred by some people. The driver choices, counts, cross-over points and resulting response will obviously be different too.

is there an alternative device that can "fix" (linearize) the phase of a system after it is in place?

Yes. At least one of the big room correction boxes (TaCT?) will do it.
here is a dolby paper on linear phase crossovers:

It's a paper on a specific steep-slope realization which in turn has a limited overlap region and therefore well-behaved polar response (which is audible) and good behavior when different speaker enclosures are summed together. That's good, especially in a pro-sound environment where early reflections are less of an issue. What's missing is how bad the cross-over rings off-axis in the time domain and whether that'd be audible in a home environment.

I've seen this mentioned twice here, and something important to remember is that this off-axis effect is entirely dependent on the spacing and directivity of the devices being integrated. At lower frequencies and tighter spacings, it won't be a concern. There are also matters of what we are after, where an ideally flat phase response vs. a significant minimization in phase rotation through crossover are very different tasks. The implementation is one part of the discussion; the value of doing so and the trade-offs involved will be the other half or more of the discussion.

I forgot the Pioneer receivers. Their 'full-band phase control' claims to do that when it's enabled. I have no idea how well it works and I haven't seen any independent measurements to show how well it unwraps the phase rotation and cleans up the impulse response. Quoting the manual:

The Full Band Phase Control feature calibrates the frequency-phase characteristics of the speakers. Standard speakers designed exclusively for audio use generally reproduce sound with the divided frequency bands output from a speaker system consisting of multiple speakers (in the case of typical 3-way speakers, for instance, the tweeter, the squawker (midrange), and the woofer output sound in the high-, middle-, and low-frequency ranges, respectively).
Though these speakers are designed to flatten the frequency-amplitude characteristics across wide ranges, there are cases where the group delay characteristics are not effectively flattened. This phase distortion of the speakers subsequently causes group delay (the delay of low-frequency sound against high-frequency sound) during audio signal playback. This receiver analyzes the frequency-phase characteristics of the speakers by calibrating test signals output from the speakers with the supplied microphone, therefore flattening the analyzed frequency-phase characteristics during audio signal playback; the same correction is made for a pair of left and right speakers. This correction minimizes group delay between the ranges of a speaker and improves the frequency-phase characteristics across all ranges. Furthermore, the enhanced frequency-phase characteristics between channels ensure better surround sound integration for multichannel setting.

Originally Posted by catapult: [the Pioneer full-band phase control post above]

what a pull catapult!

Originally Posted by Drew Eckhardt: [quoted in full above]

lots to chew on there drew, much thanks! nice pull on the tact box. if i find any good whitepapers by them, i'll link them up in this thread.

Originally Posted by Mark Seaton: [quoted above]

thanks for weighing in ms, can you elaborate a little bit or link up a couple good places to read about this?

Originally Posted by catapult: [quoted above]

again. nice pull. looks like the folks at pioneer are asking the same questions we are around here or vice versa.
;-) pioneer is a monster on the technology side but they don't know how to package it. for example my pioneer car head unit had a microphone and a multi band equalizer and was supposed to calibrate speaker's response. too bad i was using a system with big amplifiers and if i let it run its test tones it would probably blow all of my speakers up. so i never used it. i just set the equalizer by ear. that head unit also had an organic LED display - which obviously i didn't need. pioneer is all about putting state of the art technology into useless products ( how japanese of them ). this would be in contrast to say APPLE, which puts middle-of-the-road technology into products people actually want.

Originally Posted by catapult

this technology actually is important. while crossovers introduce MOST of the phase errors, the transducers themselves also introduce plenty. in my response to LTD i mentioned that this technology could be used to correct phase response at the fringes ( 20hz and 20khz ) but that's not all it should be used for. each driver has its own finite bandwidth ( aside from the overall finite bandwidth of the sound system ) which results in phase errors. these should be fixed ( driver by driver ) using technology such as this thuneau. after the phase response of each driver has been flattened, then the signal can be sent to a linear phase crossover, and that's the only way you can ensure overall linear phase performance of the sound system. i mean you must have a linear phase crossover AND linear phase drivers. whether driver phase can be linearized in a passive system as pioneer claims to do? i don't know. to me it seems unlikely that some DSP can take misaligned signals from several transducers and recombine them into a single signal. and even if this can be done theoretically i don't know if in practice it would do more harm than good.

It is marketing mainly. Anytime you see companies making a big deal out of linear phase you can almost smell the BS.
You have to recognize that transducers are not linear phase devices to begin with. You also have to recognize that most tweeter-midwoofers are acoustically out of alignment and that they can only be time aligned, even with a DSP, for one position in space since the drivers are acoustically separate. Even a single driver loudspeaker, without a tweeter, will have time alignment mismatch (the high frequencies arrive with slightly different timing than the midrange/bass). In most cases, this is completely inaudible.

Time alignment and linear phase may mean different things although you see a lot of mixing of the terminology. You can have a smooth phase response through the crossover even though the tweeter and midwoofer are out of time alignment. Often that is the case, as the typical dome tweeter on a baffle is 100-150 us out of alignment with the midwoofer because the voice coils are at two different locations on the Y-axis of the loudspeaker (the tweeter VC is closer to the listening position by an inch or two). The research shows this is pretty much inaudible. Correcting it buys you very little that is audible but it does give you another marketing checklist item.

Horn speakers are another matter. You can get path length differences of many inches, and in those cases DSP correction is the only method to bring them into some reasonable time alignment. You can still design passive networks that give reasonable FR behavior but there is nothing you can do with a passive network that will delay the signal. The exception is using allpass filters with 0-180 degree phase shifts but flat amplitude response to compensate for some of the time delay. These require picking the correct amount of phase shift for a given crossover point because they don't delay all of the signal equally. Also, they are only really suitable for modest amounts of delay compensation. They work for baffle mounted dome tweeters, but not the long path length differences you see in horns. The bottom-line.
Go with a good speaker designer and trust their work. You cannot just throw transducers together on a baffle, hook up an external crossover, linear phase or otherwise, and get good results.

Here's a post explaining Hypex's take on the situation, to be featured in their upcoming DSP modules: http://www.diyaudio.com/forums/showt...73#post1833373 There are also several threads on DIYaudio about using a PC as crossover and phase/amplitude correction.

Originally Posted by findbuddha: [quoted above]

Yeah, that's by Bruno Putzeys who I mentioned above. His slide show is an interesting read. Those new Hypex amp/DSP modules look sweet for DIYers. Hopefully they'll start building them with US power supplies sometime.

The last AES Journal had a paper on audibility of high-order constant-delay crossovers. It would be worth reading. The summary is that if you use too sharp a filter, you will get into trouble (only) moderately off axis. If you're lucky. Using constant-delay crossovers is not a panacea. I don't find this surprising.

The standalone DSP module will run off +/- 12 volts (or 15 V, Jan-Peter wasn't sure when I asked). I'm probably going to run it off the Aux out of a Red Rocks SMPS.

I read further in the thread you posted and Jan-Peter said you can order the plate amps with a 120V/60Hz power supply. Just specify it in the comments when you order.

can anybody explain what you people mean by running into problems off axis ? wouldn't you run into the same problems with a regular crossover too ?

It is possible to design, using a modern DSP, a crossover of almost any slope you want. The problem, off-axis (i.e.
up, down, left, right or whatever so that the two drivers being crossed over are no longer at the same distance, which of course depends on the driver arrangement) is that the "sum to 1" property does not hold when you add delays to the sum of the two filtered signals. Drivers aren't entirely zero-phase or linear, either, of course, and this can cause a problem even when you are on axis. The moral is that it's not smart to make a crossover TOO steep. When you use FIRs you can get really weird pre-echoes and ringing patterns resulting from ugly frequency response about the crossover point. When you use IIRs you can get similar problems, but they don't pre-echo, in general; they only have ringing after the main lobe of the resulting filter response. I've made some plots to explain:

A pretty outrageous lowpass (woofer) filter (note, this plot is a bit odd, I should have plotted the frequency response using more fft points but it is an equiripple filter when you plot it with enough points): http://s238.photobucket.com/albums/f...current=lp.jpg

The matching highpass filter:

The sum when they are time-aligned and there is no problem with drivers:

The sum when one speaker is .3" farther away than the other:

And the impulse response of that filter (.3" delay)

As you can see, this small amount of delay is bad.

Originally Posted by jj_0001: [quoted above]

I'm not an AES member but the abstract of that paper makes it sound like you can still go pretty darn steep with FIR filters before they start sounding bad.
(bold emphasis mine)

Perceptual Study and Auditory Analysis on Digital Crossover Filters
Authors: Korhola, Henri; Karjalainen, Matti
Affiliation: Helsinki University of Technology, Department of Signal Processing and Acoustics, Espoo, Finland

The extensive research on the perceptual attributes of analog filters used for loudspeaker crossover networks does not necessarily apply to digital filters. In this study finite-impulse response (FIR) and Linkwitz-Riley (LR) digital crossover filters were examined for their perceptual artifacts. Subjective tests with headphones and loudspeakers showed that for LR filters the audibility of phase distortion can be predicted by group delay errors. But FIR filters of high order produce audible artifacts because of time smear created by extensive ringing. LR filters of order 8 or less and FIR filters of about 600 were without problems. These safety limits should be respected.

Well, 600 taps isn't that steep. Don't forget, an FIR filter length is equivalent to the entire length of the IIR response in terms of total energy. My example used a 1K filter. People have proposed much sharper. I didn't even try to figure out what would happen with a much sharper filter; I think 'wrong' is the appropriate way to put it. Ok, I tried a 4097 length filter. The results for one sample delay (.3 inches at 44.1kHz) are just too weird to contemplate.

Thanks James, I misunderstood. I thought he meant 600th order not 600 taps. About how steep in orders or dB/octave does 600 taps turn out to be?

First, it's "jj". Please. Well, for FIRs, the order of the filter (in terms of zeros) is the number of taps minus 1, so a 600 tap filter is a 599th order filter. To answer your second question, it's not so easy. The answer is in terms of transition bandwidth, which is itself in terms of fs/2 being '1'.
This is why low-frequency crossovers are "interesting" digitally, and higher-frequency crossovers are "interesting" for different reasons. In order to match a 100Hz 3rd order Butterworth, one would need quite a long filter. To match a 3rd order Butterworth at 10kHz (for 44.1 sampling rate) would require a much shorter filter. Of course, "match" is not entirely fair either; you'd have to "match" "how fast did I get to -n dB" because the filter shapes are most likely to be extremely different.

To give you an idea, a 600 tap (599th order) FIR with a transition band starting at 100 Hz and a stop band starting at 200Hz has in-band ripple of about .3 dB, and rejection of around -28dB. Above 200Hz, the filter is equiripple, i.e. there are many peaks coming back up to -28dB. This is not a very good filter, frankly, for a crossover.

Let's try the same 600 tap filter for 1 octave at 1kHz now, cutting off at 2kHz... The in-band ripple is minuscule (smaller than double precision!) and the rejection of the filter is in excess of 180dB. (More importantly my optimization program gave up as "um, I hope this is good enough, sport, I'm not written in quadruple precision...")

So, the comparison of FIR to IIR is not so simple. An IIR in analog will have a fixed slope per octave or decade. An FIR will have an out-of-band rejection, an in-band ripple, and a transition band from one to the other. Combinations are possible, of course.

At 300Hz the ripple and rejection are reasonable, the ripple is about .004dB and the rejection about 70dB, which is reasonably good to avoid driver interactions. I was going to try a filter working from 10kHz to 20kHz, but I can't even do a 600 tap filter there.

The point is that if the transition band is 100Hz, 600 taps isn't enough at 44.1kHz sampling rate. If the transition band is 300Hz (it doesn't matter whether 300 Hz or 3kHz is the start of the stop band), it's just enough. If the transition band is 1kHz, it's too long. 10kHz? Fergetaboutit in this universe.
To explain, if I make a crossover at 1kHz, and for which the end of the HP filter transition band is 1300Hz, I get very much the same filter performance (in terms of ripple and rejection) as I do for the 300/600Hz filter. (checking my own assertion, the two filters have equal ripple and stop-band rejection to better than .1dB stop band and 1% difference in passband) So you can't treat FIR and IIR's the same way when you think about them. With IIR's it's dB/octave, with FIR's it's "absolute width of transition band divided by fs/2" Hope this helps.
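jj's point here, that the required tap count is set by the absolute width of the transition band rather than by where that band sits, can be sketched with a quick experiment. This is only a Hamming-windowed-sinc design (jj's numbers came from an equiripple optimizer, so the exact figures differ), but the effect is the same: with a fixed 601-tap budget, a stopband that starts 50 Hz past the cutoff is barely attenuated, while one 500 Hz past it sits on the window's rejection floor.

```python
import cmath
import math

FS = 44100.0  # sampling rate (Hz)
N = 601       # taps, i.e. a 600th-order budget

def lowpass(fc):
    """Hamming-windowed-sinc lowpass with N taps and cutoff fc (Hz)."""
    h = []
    for n in range(N):
        m = n - (N - 1) / 2
        x = 2 * fc / FS
        # sinc kernel (value at m == 0 is the limit, x)
        s = x if m == 0 else math.sin(math.pi * x * m) / (math.pi * m)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1))  # Hamming window
        h.append(s * w)
    return h

def gain_db(h, f):
    """Magnitude response at frequency f (Hz), in dB."""
    H = sum(c * cmath.exp(-2j * math.pi * f * n / FS) for n, c in enumerate(h))
    return 20 * math.log10(abs(H))

# Same tap budget, two transition widths (cutoffs chosen to mirror the
# 100->200 Hz and 1000->2000 Hz examples above):
narrow = gain_db(lowpass(150.0), 200.0)    # only 50 Hz past the cutoff
wide = gain_db(lowpass(1500.0), 2000.0)    # 500 Hz past the cutoff
print(f"rejection 50 Hz past cutoff:  {narrow:.1f} dB")
print(f"rejection 500 Hz past cutoff: {wide:.1f} dB")
```

The first number lands only partway down the transition slope; the second is at or below the Hamming window's roughly -53 dB stopband floor. That is exactly why a 100 Hz-wide transition needs far more than 600 taps while a 1 kHz-wide one wastes them.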
{"url":"http://www.avsforum.com/t/1168565/linear-phase-crossovers-what-are-the-benefits-who-makes-them","timestamp":"2014-04-21T14:05:16Z","content_type":null,"content_length":"224177","record_id":"<urn:uuid:d8f0b288-cf3c-43af-87bf-64c8b156dab6>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
Exponentiation ladder for cryptography Method and apparatus for data security using exponentiation. This is suitable for public key cryptography authentication and other data security applications using a one-way function. A type of exponentiation is disclosed here where the bits of an exponent value expressed in binary form correspond to a course (path) in a given graph defining the one-way function. This uses an approach called here F sequences. Each value is in a ladder of a sequence of values, as defined from its predecessor values. This ladder satisfies certain algebraic identities and is readily calculated by a computer program or logic circuitry. Inventors: Ciet; Mathieu (Paris, FR), Farrugia; Augustin J. (Cupertino, CA), Fasoli; Gianpaolo (Palo Alto, CA), Paun; Filip (Cupertino, CA) Assignee: Apple Inc. (Cupertino, CA) Appl. No.: 12/054,249 Filed: March 24, 2008
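The patent's particular F-sequence construction is not spelled out in the abstract, but the general shape of an exponentiation ladder is easy to illustrate: a small fixed set of values is updated once per exponent bit, with the bit selecting which update rule to apply. The classic example is the Montgomery ladder, shown here as a Python sketch (it is not the claimed F-sequence method); it performs the same pair of operations for every bit, a property valued in cryptography because it resists timing analysis.

```python
def ladder_pow(x, e, m):
    """Montgomery-ladder modular exponentiation.

    Maintains the invariant r1 == r0 * x (mod m); each exponent bit,
    scanned most-significant first, selects which register is squared
    and which one takes the product.
    """
    r0, r1 = 1, x % m
    for bit in format(e, "b"):
        if bit == "0":
            r0, r1 = (r0 * r0) % m, (r0 * r1) % m
        else:
            r0, r1 = (r0 * r1) % m, (r1 * r1) % m
    return r0

# agrees with Python's built-in modular exponentiation
assert ladder_pow(7, 1234, 1000003) == pow(7, 1234, 1000003)
```

Note that the two branches perform the same multiply-and-square work in a different order, so the sequence of operations does not leak the exponent bits the way a naive square-and-multiply does.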
{"url":"http://patents.com/us-8014520.html","timestamp":"2014-04-18T08:29:44Z","content_type":null,"content_length":"37673","record_id":"<urn:uuid:06bbd8be-bb73-44da-9c4d-1799d8cc07f2>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
23 Apr 15:22 2013 Instances for continuation-based FRP Hans Höglund <hans <at> hanshoglund.se> 2013-04-23 13:22:37 GMT Hi everyone, I am experimenting with various implementation styles for classical FRP. My current thoughts are on a continuation-style push implementation, which can be summarized as follows. > newtype EventT m r a = E { runE :: (a -> m r) -> m r -> m r } > newtype ReactiveT m r a = R { runR :: (m a -> m r) -> m r } > type Event = EventT IO () > type Reactive = ReactiveT IO () The idea is that events allow subscription of handlers, which are automatically unsubscribed after the continuation has finished, while reactives allow observation of a shared state until the continuation has finished. I managed to write the following Applicative instance > instance Applicative (ReactiveT m r) where > pure a = R $ \k -> k (pure a) > R f <*> R a = R $ \k -> f (\f' -> a (\a' -> k $ f' <*> a')) But I am stuck on finding a suitable Monad instance. I notice the similarity between my types and the ContT monad and have a feeling this similarity could be used to clean up my instance code, but am not sure how to proceed. Does anyone have an idea, or a pointer to suitable literature. Best regards, Haskell-Cafe mailing list Haskell-Cafe <at> haskell.org
{"url":"http://comments.gmane.org/gmane.comp.lang.haskell.cafe/104809","timestamp":"2014-04-21T12:28:38Z","content_type":null,"content_length":"41575","record_id":"<urn:uuid:6ad22c74-4ecc-4ac8-896f-21bf554a1dc3>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
SparkNotes: SAT Physics: Important Definitions

10.1 Important Definitions
10.2 Rotational Kinematics
10.3 Frequency and Period
10.4 Rotational Dynamics
10.5 Kinetic Energy
10.6 Angular Momentum
10.7 Key Formulas
10.8 Practice Questions
10.9 Explanations

Important Definitions

There are a few basic physical concepts that are fundamental to a proper understanding of rotational motion. With a steady grasp of these concepts, you should encounter no major difficulties in making the transition between the mechanics of translational motion and of rotational motion.

Rigid Bodies

The questions on rotational motion on SAT II Physics deal only with rigid bodies. A rigid body is an object that retains its overall shape, meaning that the particles that make up the rigid body stay in the same position relative to one another. A pool ball is one example of a rigid body since the shape of the ball is constant as it rolls and spins. A wheel, a record, and a top are other examples of rigid bodies that commonly appear in questions involving rotational motion. By contrast, a slinky is not a rigid body, because its coils expand, contract, and bend, so that its motion would be considerably more difficult to predict if you were to spin it about.

Center of Mass

The center of mass of an object, in case you have forgotten, is the point about which all the matter in the object is evenly distributed. A net force acting on the object will accelerate it in just the same way as if all the mass of the object were concentrated in its center of mass. We looked at the concept of center of mass in the previous chapter's discussion of linear momentum. The concept of center of mass will play an even more central role in this chapter, as rotational motion is essentially defined as the rotation of a body about its center of mass.
Axis of Rotation

The rotational motion of a rigid body occurs when every point in the body moves in a circular path around a line called the axis of rotation, which cuts through the center of mass. One familiar example of rotational motion is that of a spinning wheel. In the figure at right, we see a wheel rotating counterclockwise around an axis labeled O that is perpendicular to the page. As the wheel rotates, every point in the rigid body makes a circle around the axis of rotation, O.

Radians

We're all very used to measuring angles in degrees, and know perfectly well that there are 360º in a circle, 90º in a right angle, and so on. You've probably noticed that 360 is also a convenient number because so many other numbers divide into it. However, this is a totally arbitrary system that has its origins in the Ancient Egyptian calendar which was based on a 360-day year. It makes far more mathematical sense to measure angles in radians (rad). If we were to measure the arc of a circle that has the same length as the radius of that circle, then one radian would be the angle made by two radii drawn to either end of the arc.

Converting between Degrees and Radians

It is unlikely that SAT II Physics will specifically ask you to convert between degrees and radians, but it will save you time and headaches if you can make this conversion quickly and easily. Just remember this formula:

angle in radians = angle in degrees × (π / 180º)

You'll quickly get used to working in radians, but below is a conversion table for the more commonly occurring angles.

Value in degrees | Value in radians
30  | π/6
45  | π/4
60  | π/3
90  | π/2
180 | π
360 | 2π

Calculating the Length of an Arc

The advantage of using radians instead of degrees, as will quickly become apparent, is that the radian is based on the nature of angles and circles themselves, rather than on the arbitrary fact of how long it takes our Earth to circle the sun. For example, calculating the length of any arc in a circle is much easier with radians than with degrees.
We know that the circumference of a circle is given by P = 2πr, and we know that there are 2π radians in a circle. If we wanted to know the length, l, of the arc described by any angle θ, we could note that the arc is the same fraction of the circumference that θ is of 2π. Because P = 2πr, the length of the arc would be:

l = (θ / 2π) × 2πr = rθ
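The degree-to-radian conversion and the arc-length formula can be checked in a couple of lines of Python (the radius here is arbitrary):

```python
import math

r = 2.0                           # radius (arbitrary units)
theta_deg = 90.0                  # a right angle
theta = math.radians(theta_deg)   # same as theta_deg * math.pi / 180

arc = r * theta                   # l = r * theta

# a 90-degree arc is a quarter of the full circumference 2*pi*r
assert math.isclose(theta, math.pi / 2)
assert math.isclose(arc, 2 * math.pi * r / 4)
print(theta, arc)
```

Running it confirms that 90º is π/2 radians and that the quarter-circle arc on a radius-2 circle has length rθ = π.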
{"url":"http://www.sparknotes.com/testprep/books/sat2/physics/chapter10section1.rhtml","timestamp":"2014-04-20T23:45:32Z","content_type":null,"content_length":"51969","record_id":"<urn:uuid:cea47c44-b945-4f3b-9030-48cf43af2125>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
seeing whale songs
January 29, 2010 8:50 AM Subscribe

Visualizing Whale Songs. Mark Fischer, an expert in marine acoustics, has come up with another way to illustrate whale song. He uses a more obscure method, known as the wavelet transform, which represents the sound in terms of components known as wavelets: short, discrete waves that are better at capturing cetacean …
posted by dhruva (12 comments total) 8 users marked this as a favorite

Ah, wavelet transforms and color mapping. Is there anything you can't do?
posted by demiurge at 8:57 AM on January 29, 2010

These look really cool, and I see how they are different per species, but are they different per …
posted by OmieWise at 9:23 AM on January 29, 2010

From what I understand of whalesongs, not very. But they're different per …, and per …
posted by clarknova at 9:59 AM on January 29, 2010

These kind of look cool, but how is this really "visualizing" the whale songs? It doesn't really tell you much about what they sound like, unlike an FFT or something.
posted by delmoi at 11:35 AM on January 29, 2010

thanks to New Scientist for again leaving out most of the science and math and instead just showing some (admittedly beautiful) handwaving. wavelet transforms are indeed beautiful mathematically, and efficient computationally, but, what possible advantage do they have over the standard linguistic model: language as a sequence of phonemes (intervals that exhibit similar spectral traits) and their larger scale usage patterns (ie. syllables, words, phrases, sentences.... or, in music: melodic voices, harmonic chord changes, rhythmic patterns). this Fischer guy is more about (selling his) art than science.
posted by dongolier at 1:35 PM on January 29, 2010

Spectrograms are created using a mathematical process called the Fourier transform (FT), which can convert raw sound into a set of sinusoidal waves.
Fourier Series translates a signal into a set of sinusoids, not the Fourier Transform.
posted by dongolier at 1:48 PM on January 29, 2010

It doesn't really tell you much about what they sound like, unlike an FFT or something.
this Fischer guy is more about (selling his) art than science.
The article actually said why these are useful, they aid in identifying different species' songs: On a spectrogram it can be difficult to distinguish between similar-sounding species, particularly if the animal clicks very rapidly, because these get smeared out in an FT. With the wavelet method, the clicks show up as precise spikes.
posted by OmieWise at 2:25 PM on January 29, 2010

It's nice looking art.. but I'd be more impressed if it was animated.
posted by jmnugent at 2:57 PM on January 29, 2010

okay, okay.... i think i see the wavelet advantage. you dont have to fuss with picking the right time-interval window: with FFT if you pick too narrow you lose low freqs; too wide and the signal is nonstationary ("smeared out" might describe it). and from wikipedia it looks like everybody is dumping the FFT for wavelet methods---maybe we'll even get JPEG2000 browser support in ... wait for it... 2010! also interesting though: has the same error as the article, "the Fourier Transform... expresses a signal as a sum of sinusoids." someone at NewScientist was negligently lazy on the olde wikipedia-factcheck....
posted by dongolier at 4:18 PM on January 29, 2010

dongolier, the Fourier transform is an algorithm for switching from an arbitrary function to an equivalent Fourier series. What's wrong with the statement you quoted?
posted by fantabulous timewaster at 12:42 AM on January 30, 2010

Joseph Fourier is responsible for giving us both the Fourier Series expansion of a function as well as all of Fourier Analysis, which includes the Transform. Fourier Series is an expansion of a periodic function into sinusoidal components of multiples of the original function's periodic frequency (ie. at a 1 sec period you get a weighted sum of 0Hz, 1Hz, 2Hz, ... terms---which is different from a continuous "spectrum"). Fourier Transform (really, a special case of the Laplace Transform) transforms a periodic function of time (or data over an interval) into a function of frequency, where frequency is a variable from zero to infinity (more precisely: negative infinity to positive infinity). FFT is a Discrete Fourier Transform, an adaptation of the above for time. it is useful as it transforms arbitrary data samples equally spaced in time into a spectrum showing relative power at discrete frequencies---if the interval is 1 sec and there are 1024 samples in it, the FFT gives you a spectrum with values at 0Hz, 1Hz, 2Hz, ... 511Hz. the Fourier Series gives an approximation to an input function. the FFT gives you the spectrum of that function.
posted by dongolier at 3:25 AM on January 31, 2010

Well, the sum of the series gives an approximation to the input function. The terms in the series (that is, the weights associated with each frequency term in the sum) are what's usually meant by the … I really don't see the hair that you're splitting. The sentence you quoted is one of the least wrong things I've ever read in New Scientist.
posted by fantabulous timewaster at 3:52 PM on January 31, 2010
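The click-smearing point made in the thread (an impulse spreads across every Fourier bin, while a wavelet transform pins it down in time) can be demonstrated with a toy example. This sketch uses the simplest wavelet, the Haar transform, computed by hand in plain Python; it illustrates the general principle, not Fischer's actual method.

```python
import cmath
import math

N = 64
click = [0.0] * N
click[17] = 1.0  # a single one-sample "click"

# --- DFT: the impulse puts equal magnitude in every frequency bin ---
dft = [sum(click[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
       for k in range(N)]

# --- full Haar wavelet decomposition (orthonormal averages/differences) ---
coeffs = list(click)
size = N
while size > 1:
    half = size // 2
    avg = [(coeffs[2 * i] + coeffs[2 * i + 1]) / math.sqrt(2) for i in range(half)]
    det = [(coeffs[2 * i] - coeffs[2 * i + 1]) / math.sqrt(2) for i in range(half)]
    coeffs[:size] = avg + det   # averages recurse, differences are kept
    size = half

dft_nonzero = sum(1 for X in dft if abs(X) > 1e-12)
haar_nonzero = sum(1 for c in coeffs if abs(c) > 1e-12)
print(dft_nonzero, haar_nonzero)  # prints: 64 7
```

The energy is identical in both representations (both transforms are orthonormal); the difference is that the wavelet side concentrates it in about log2(N) coefficients, which is exactly why a rapid click shows up as a "precise spike" rather than smearing across the whole spectrum.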
{"url":"http://www.metafilter.com/88730/seeing-whale-songs","timestamp":"2014-04-19T23:52:29Z","content_type":null,"content_length":"26746","record_id":"<urn:uuid:e5d565dd-e483-4e64-baf7-88fcc1537383>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
Rutgers/Lucent ALLIES IN TEACHING MATHEMATICS AND TECHNOLOGY Grant 2000
Using technology not simply to do things better, but to do better things.

USING THE GEOMETER'S SKETCHPAD TO EXPLORE CIRCLES

As a review, show that you can:

1. create a segment, ray and line;
2. show or hide the labels of points;
3. construct the interior of the figure, color it green, and measure the area.

To illustrate how the Sketchpad can be used to explore geometric relationships in circles, follow the sequence of steps outlined below:

1. Begin with a new sketch. In Display >> Preferences, turn Autoshow Labels ON for points. Make sure the unit for Distance is inches, with Precision to the hundredth; make sure the unit for Angle Measure is degrees, with the precision of units.
2. Construct a circle. The center will be labeled A and the control point will be point B. Construct the radius, segment AB. Make this segment thick and red.
3. Place an additional point, C, on the circle. To construct a diameter, use the line tool to construct a line that goes from point A and passes through point C. Shift+select the line and the circle, and construct their point of intersection, point D. Hide the line, and connect points C and D with a segment. Make this segment thick and blue.
4. Measure the lengths of both the radius and the diameter. Also measure the circumference.
5. Use the calculator to calculate the ratio of (Circumference ÷ Radius).
6. What value do you get? Did you get the same value as others around you? What happens to the value as you grab-&-drag the center and control point to change the size of the circle?
7. How do you explain what's going on here? How does this relate to the formula for the circumference of a circle?
8. Select the circle; Construct >> Circle Interior. Color it yellow. Measure the area inside the circle.
9. What is the formula for the area of a circle? How could we use the calculator and the measures that are showing to calculate the area of the yellow circle?
10. Can you make a triangle that has the same perimeter as the circumference of the circle? Which is greater, the area of the circle or the area of the triangle?
11. Can you make a quadrilateral whose perimeter is the same as the circumference of the circle? Which is greater, the area of the circle or the area of the quadrilateral? What if we tried a pentagon, hexagon, 100-sided polygon, etc.? What general conclusion do you think this leads to? For example, try to describe how to enclose the greatest area you can within a given amount of fencing.

THE MATH FORUM: Creating community, developing resources, constructing knowledge...
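The conclusion that steps 10 and 11 point toward (among all shapes with a fixed perimeter, the circle encloses the most area, and regular polygons get closer to it as the number of sides grows) can also be checked numerically. A quick sketch, using an arbitrary perimeter of 12 units:

```python
import math

P = 12.0  # fixed perimeter / circumference (arbitrary units)

def circle_area(perimeter):
    r = perimeter / (2 * math.pi)      # C = 2*pi*r  =>  r = C / (2*pi)
    return math.pi * r ** 2

def regular_polygon_area(n, perimeter):
    side = perimeter / n
    apothem = side / (2 * math.tan(math.pi / n))
    return 0.5 * perimeter * apothem   # area = (1/2) * perimeter * apothem

for n in (3, 4, 5, 6, 12, 100):
    print(f"{n:>3}-gon: {regular_polygon_area(n, P):.4f}")
print(f"circle: {circle_area(P):.4f}")
```

The areas increase with n and approach, but never exceed, the circle's. This is the isoperimetric inequality the fencing question is hinting at.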
{"url":"http://mathforum.org/workshops/lucent/Exploring_Circles.html","timestamp":"2014-04-17T13:10:01Z","content_type":null,"content_length":"4518","record_id":"<urn:uuid:9104121f-635e-4fd9-ad9c-05d03814c843>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Marie Sophie Germain (1776 - 1831)

As a woman, Germain had many obstacles to overcome before her contributions to science were accepted, not only in the field of mathematics, but also in acoustics and the study of elasticity.

Germain was born in Paris, into a wealthy family. Her father was a merchant and later became a Director of the (national) Bank of France. They were also a family strongly interested in liberal reforms, a common theme during the French and American Revolutions taking place at the time. Her family home was a meeting place at which she was introduced to the topics of the day.

As was common at this time, women were not supposed to be interested, let alone active, in studying subjects such as mathematics. But Germain was an exception. Against her family's wishes, the young Germain would spend hours in her father's library at night studying, when the rest of the household was asleep. Her parents relented in the face of such stubbornness, and they accepted their daughter's wishes.

Germain was to face further obstacles still when trying to enrol in the École Polytechnique, Paris in 1794, as women were simply not permitted to attend. Not discouraged, she obtained the lecture notes from other students and continued to teach herself. During this period Germain became fascinated with the work of Lagrange, a professor there, and submitted a paper to him on Analysis, under a man's name. The paper gained the attention of Lagrange, who was so impressed with its originality that he began a search for its author. Once Lagrange had accepted her deception, he took it upon himself to become Germain's mentor.

The support that Lagrange provided was all that Sophie needed. With this newfound encouragement she began submitting papers to competitions. The most notable was sponsored by the French Academy of Science; the competition ran from 1808 to 1816 and was based on a previous study by a German physicist on the subject of elastic surface vibrations.
It took Germain three attempts before she was awarded this accolade!

But why was Sophie interested in primes? Simple: she thought that it might lead to a proof of the notorious Fermat's Last Theorem. Around 1825, Sophie Germain proved that the first case of Fermat's Last Theorem is true for her primes, i.e. if p is a Sophie Germain prime, then there do not exist integers x, y, and z different from 0 and not multiples of p such that x^p + y^p = z^p. This was a breakthrough, but little progress has subsequently happened as a consequence of this result or the idea of such primes.

Germain was the first woman to be allowed to attend the conferences set up by the Academy of Science, yet once again her research was overlooked for many years to come. It was not until the intervention of Gauss in 1831 that Germain was awarded an honorary degree from the University of Göttingen. Alas this came too late for her to receive it: Germain died of cancer, having battled the disease for many years, at the age of 55.

Germain's mathematics

A prime p is said to be a Sophie Germain prime if both p and 2p+1 are prime. The first few Sophie Germain primes are:

2, 3, 5, 11, 23, 29, 41, 53, 83, 89, 113, 131, ...

Large Sophie Germain primes include:

• 92305 × 2^16998 + 1, found in 1998. This number has 5117 digits;
• 109433307 × 2^66452 - 1, found in 2001. This number has 20013 digits.

It is still not known if there are an infinite number of Sophie Germain primes. However, if there are, then we now know that the number of such primes less than n, denoted by λ_G(n), is given by
{"url":"http://www.counton.org/timeline/test-mathinfo.php?m=marie-sophie-germain","timestamp":"2014-04-21T12:09:47Z","content_type":null,"content_length":"7199","record_id":"<urn:uuid:7846e047-2178-4458-aada-ee1b5ac6ae60>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
Discussion Papers All of the Department Discussion Papers are submitted to RePEc. The EconPapers or IDEAS sites allow you to search by author, title, keyword, JEL category and abstract contents. Discussion Papers 2014 Discussion Papers 2013 Discussion Papers 2012 Discussion Papers 2011 Discussion Papers 2010 Discussion Papers 2009 Discussion Papers 2008 Discussion Papers 2007 Discussion Papers 2006 Discussion Papers 2005 Discussion Papers 2004 Discussion Papers 2003 Discussion Papers 2002 Discussion Papers 2001 Discussion Papers 2000 Discussion Papers 1999 Discussion Papers 1998 Discussion Papers 1997 Discussion Papers 1996 Discussion Papers 1995 Papers from 1998 onwards are available on-line as .PDF files. 10 Most Recent Papers 14/07 Stephen Pollock A variety of filters that are commonly employed by econometricians are analysed with a view to determining their effectiveness in extracting well-defined components of economic data sequences. These components can be defined in terms of their spectral structures—i.e. their frequency content—and it is argued that the process of econometric signal extraction should be guided by a careful appraisal of the periodogram of the detrended data sequence. A preliminary estimate of the trend can often be obtained by fitting a polynomial function to the data. This can provide a firm benchmark against which the deviations of the business cycle and the fluctuations of seasonal activities can be measured. The trend-cycle component may be estimated by adding the business cycle estimate to the trend function. In cases where there are evident structural breaks in the data, other means are suggested for estimating the underlying trajectory of the data. Whereas it is true that many annual and quarterly economic data sequences are amenable to relatively unsophisticated filtering techniques, it is often the case that monthly data that exhibit strong seasonal fluctuations require a far more delicate approach. 
In such cases, it may be appropriate to use filters that work directly in the frequency domain by selecting or modifying the spectral ordinates of a Fourier decomposition of data that have been subject to a preliminary detrending. 14/06 Heather D. Gibson, Stephen G. Hall and George S. Tavlas Are All Sovereigns Equal? A Test of the Common Determination of Sovereign Spreads in the Euro Area With the outbreak of the Greek financial crisis in late 2009, spreads on Greek (and other) sovereigns reached unprecedented levels. Using a panel data of euro-area countries, we test whether the markets treated all euro-area countries in an equal manner over the period 1998:m1 to 2012:m6. In a F-test of the pooling assumptions suggests that Greece, Ireland and Portugal were not part of the overall pool. In a separate test on the individual coefficients we find that the coefficients on these three countries moved in a similar direction away from the pool, suggesting that markets treated these three countries more acutely than the rest of the pool. 14/05 Stephen Pollock Econometrics: An Historical Guide for the Uninitiated This essay was written to accompany a lecture to beginning students of the course of Economic Analytics, which is taught in the Institute of Econometrics of the University of Lodz in Poland. It provides, within a few pages, a broad historical account the development of econometrics. It begins by describing the origin of regression analysis and it concludes with an account of cointegration analysis. The purpose of the essay is to provide a context in which the students can locate various aspects of econometric analysis. A distinction must be made between the means by which new ideas were propagated and the manner and the circumstances in which they have originated. This account is concerned primarily with the propagation of the ideas. 
14/04 Stephen Pollock Trends Cycles and Seasons: Econometric Methods of Signal Extraction Alternative methods of trend extraction and of seasonal adjustment are described that operate in the time domain and in the frequency domain. The time-domain methods that are implemented in the TRAMO–SEATS and the STAMP programs are described and compared. An abbreviated time-domain method of seasonal adjustment that is implemented in the IDEOLOG program is also described. Finite-sample versions of the Wiener–Kolmogorov filter are described that can be used to implement the methods in a common way. The frequency-domain method, which is also implemented in the IDEOLOG program, employs a ideal frequency selective filter that depends on identifying the ordinates of the Fourier transform of a detrended data sequence that should lie in the pass band of the filter and those that should lie in its stop band. Filters of this nature can be used both for extracting a low-frequency cyclical component of the data and for extracting the seasonal component. 14/03 Stephen Pollock Cycles, Syllogisms and Semantics: Examining the Idea of Spurious Cycles The claim that linear filters are liable to induce spurious fluctuations has been repeated many times of late. However, there are good reasons for asserting that this cannot be the case for the filters that, nowadays, are commonly employed by econometricians. If these filters cannot have the effects that have been attributed to them, then one must ask what effects the filters do have that could have led to the aspersions that have been made against them. 14/02 Stephen Pollock On Kronecker Products, Tensor Products and Matrix Differential Calculus The algebra of the Kronecker products of matrices is recapitulated using a notation that reveals the tensor structures of the matrices. 
It is claimed that many of the difficulties that are encountered in working with the algebra can be alleviated by paying close attention to the indices that are concealed beneath the conventional matrix notation. The vectorisation operations and the commutation transformations that are common in multivariate statistical analysis alter the positional relationship of the matrix elements. These elements correspond to numbers that are liable to be stored in contiguous memory cells of a computer, which should remain undisturbed. It is suggested that, in the absence of an adequate index notation that enables the manipulations to be performed without disturbing the data, even the most clear-headed of computer programmers is liable to perform wholly unnecessary and time-wasting operations that shift data between memory cells. 14/01 Wojciech Charemza, Carlos Diaz and Svetlana Makarova Term Structure Of Inflation Forecast Uncertainties And Skew Normal Distributions Empirical evaluation of macroeconomic uncertainties and their use for probabilistic forecasting are investigated. A new weighted skew normal distribution which parameters are interpretable in relation to monetary policy outcomes and actions is proposed. This distribution is fitted to recursively obtained forecast errors of monthly and annual inflation for 38 countries. It is found that this distribution fits inflation forecasts errors better than the two-piece normal distribution, which is often used for inflation forecasting. The new type of ‘fan charts’ net of the epistemic (potentially predictable) element is proposed and applied for UK and Poland. 13/27 Ali al-Nowaihi and Sanjit Dhami Foundations and Properties of Time Discount Functions A critical element in all discounted utility models is the specification of a discount function. We extend the standard model to allow for reference points for both out- comes and time. 
We consider the axiomatic foundations and properties of two main classes of discount functions. The first, the Loewenstein-Prelec discount function, accounts for declining impatience but cannot account for the evidence on subadditivity. A second class of discount functions, the Read-Scholten discount function accounts for declining impatience and subadditivity. We derive restrictions on an individual’s preferences to expedite or to delay an outcome that give rise to the discount functions under consideration. As an application of our framework we consider the explanation of the common difference 13/26 Ali al-Nowaihi and Sanjit Dhami We consider a discounted utility model that has two components. (1) The instan- taneous utility is of the prospect theory form, thus, allowing for reference dependent outcomes. (2) The discount function embodies a ‘reference time’ to which all future outcomes are discounted back to, hence, the name, reference time theory. We allow the discount function to exhibit declining impatience, as in hyperbolic discounting models, subadditivity or both. We show that if the discount function is non-additive, then the presence of a reference time has important effects on intertemporal choices. For instance, this helps to explain apparently intransitive choices over time. We also show how several recent approaches to time discounting can be incorporated within our proposed framework; these include attribute models and models of uncertainty. 13/25 Sanjit Dhami and Ali al-Nowaihi Evidential equilibria: Heuristics and biases in static games Standard equilibrium concepts in game theory find it difficult to explain the empirical evidence in a large number of static games such as prisoners’ dilemma, voting, public goods, oligopoly, etc. 
Under uncertainty about what others will do in one-shot games of complete and incomplete information, evidence suggests that people often use evidential reasoning (ER), i.e., they assign diagnostic significance to their own actions in forming beliefs about the actions of other like- minded players. This is best viewed as a heuristic or bias relative to the standard approach. We provide a formal theoretical framework that incorporates ER into static games by proposing evidential games and the relevant solution concept- evidential equilibrium (EE). We derive the relation between a Nash equilibrium and an EE. We also apply EE to several common games including the prisoners’ dilemma and oligopoly games.
[erlang-questions] Strange arithmetic behaviour
Richard A. O'Keefe
Mon May 12 03:35:05 CEST 2008

I wonder if the original poster wasn't a bit smarter than some of us may have assumed. The existence of bignums in Erlang proves that a language need not be limited by the deficiencies of the hardware. ANSI Smalltalk provides "ScaledDecimal" numbers, which are numbers with a fixed number of decimal places after the point but an unbounded number before it. Maxima has bigfloats; source code can be found at http://www.cs.berkeley.edu/~fateman/mma1.6/bf.lisp and http://www.cs.berkeley.edu/~fateman/mma1.6/bfelem.lisp. Heck, XEmacs lisp comes with them built in (see
Mathematica also has bigfloats. I have a vague recollection of some Smalltalk having bigfloats, but cannot recall which (the ANSI ScaledDecimal is great for money, not for trig.) And of course Java has java.math.BigDecimal, described as "Immutable, arbitrary-precision signed decimal numbers. A BigDecimal consists of an arbitrary precision integer unscaled value and a 32-bit integer scale." The particularly interesting thing about BigDecimal is that it's base 10. (High speed base 10 floating point hardware is shipping now in the current generation of Power and zSeries machines...) Base 10 bigfloats are particularly interesting as being likely to give results non-programmers would expect rather more often than limited precision base 2 floats do. But not ALL the time...
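The point about base-10 arithmetic matching non-programmer expectations is easy to demonstrate. The thread's examples are Smalltalk, Lisp, Erlang and Java, but Python's stdlib `decimal` module behaves analogously to `java.math.BigDecimal` and serves here purely as a convenient stand-in:

```python
from decimal import Decimal, getcontext

# Binary floats: 0.1 has no finite base-2 representation.
print(0.1 + 0.1 + 0.1 == 0.3)  # False: accumulated base-2 rounding error

# Base-10 decimals give the result most people expect.
print(Decimal("0.1") + Decimal("0.1") + Decimal("0.1") == Decimal("0.3"))  # True

# Precision is a property of the arithmetic context, not of the hardware.
getcontext().prec = 50
print(Decimal(1) / Decimal(7))  # 50 significant decimal digits

# "But not ALL the time...": 1/3 is not finite in base 10 either.
print(Decimal(1) / Decimal(3) * 3 == Decimal(1))  # False
```

The last line is the sense in which base 10 is no cure-all: it removes the surprises for values people type in decimal, but any base leaves some rationals non-terminating.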
ACE/ADE with WLSMV and DIFFTEST

Michael J Zyphur posted on Wednesday, November 01, 2006 - 7:19 pm

Hi Bengt, As noted recently by Dominicus, Skrondal, Gjessing, Pedersen, & Palmgren (2006), LRT difference tests in ACE/ADE models are biased because they test variances at the boundaries of the parameters' spaces (i.e., at 0). These authors derive solutions for correcting p-values based on the mixture of chi-square distributions produced in ACE/ADE difference tests. I am running ACE/ADE models with categorical variables, meaning that WLSMV is employed by default. Thus, I am using the statistical theory underlying the DIFFTEST option in Mplus. My question is this: Because DIFFTEST in this case relies on a test of the parameter of interest at the boundary of its space, may I simply correct the provided DIFFTEST chi-square value in line with Dominicus et al. (2006), or is a correction required before DIFFTEST is performed? Thanks for your help!!

Dominicus, Skrondal, Gjessing, Pedersen, & Palmgren (2006). Likelihood ratio tests in behavioral genetics: Problems and solutions. Behavior Genetics, 36, 331-340.

P.S. I hope you're enjoying your retirement!

Linda K. Muthen posted on Thursday, November 02, 2006 - 1:09 pm

I think if any correction were made it should be during the process, not after. DIFFTEST should not be used when the parameter of interest is on the boundary.

Erika Wolf posted on Thursday, May 29, 2008 - 1:50 pm

I seem to have a similar issue: I'm trying to examine the chi-square difference test of nested models. The data are categorical and are used as indicators of two continuous latent factors. The parent model is the full ACE model; the nested model constrains the C path for one latent factor to 0 and the E path for the other latent factor to 0.
I'm trying to use the DIFFTEST option to compute the difference in chi-square between these models, but it won't compute the nested chi-square, and I think this is due to the parameter estimates being close to 0 in the parent model. Is there any way to evaluate the change in chi-square in this scenario? Thanks.

Linda K. Muthen posted on Thursday, May 29, 2008 - 2:12 pm

Please send the relevant files and your license number to support@statmodel.com. It is not possible to know the reason without further information.
Biological assessment of robust noise models in microarray data analysis

Bioinformatics. Mar 15, 2011; 27(6): 807–814.

Motivation: Although several recently proposed analysis packages for microarray data can cope with heavy-tailed noise, many applications rely on Gaussian assumptions. Gaussian noise models foster computational efficiency. This comes, however, at the expense of increased sensitivity to outlying observations. Assessing potential insufficiencies of Gaussian noise in microarray data analysis is thus important and of general interest.

Results: We propose to this end assessing different noise models on a large number of microarray experiments. The goodness of fit of noise models is quantified by a hierarchical Bayesian analysis of variance model, which predicts normalized expression values as a mixture of a Gaussian density and t-distributions with adjustable degrees of freedom. Inference of differentially expressed genes is taken into consideration at a second mixing level. For attaining far-reaching validity, our investigations cover a wide range of analysis platforms and experimental settings. As the most striking result, we find in all experiments, irrespective of the chosen preprocessing and normalization method, that a heavy-tailed noise model is a better fit than a simple Gaussian. Further investigations revealed that an appropriate choice of noise model has a considerable influence on biological interpretations drawn at the level of inferred genes and gene ontology terms. We conclude from our investigation that neglecting the overdispersed noise in microarray data can mislead scientific discovery and suggest that the convenience of Gaussian-based modelling should be replaced by non-parametric approaches or other methods that account for heavy-tailed noise.
Contact: peter.sykacek@boku.ac.at
Availability: http://bioinf.boku.ac.at/alexp/robmca.html

1 INTRODUCTION

The importance of microarray data for the biological sciences has generated a large number of sophisticated analysis methods. Approaches like t-tests (Baldi and Long, 2001; Tusher et al., 2001), linear models (Smyth, 2005) and many Bayesian methods (Bae and Mallick, 2004; Ibrahim et al., 2002; Ishwaran and Rao, 2003; Lewin et al., 2007; Zhao et al., 2008) consider data to be approximately Gaussian distributed. Recent investigations have, however, cast doubt on the correctness of the Gaussian assumption. By testing for Gaussianity, Hardin and Wilson (2009) find that microarray data does not follow a Gaussian distribution. The observed overdispersion leads to a large number of outlying values which can have a considerable influence on the inference results. The cost of measurements and the possibility that outlying data points are caused by biological processes rule out that such samples get removed. All samples must thus be taken into account carefully, as excluding outlying values or including them based on incorrect distribution assumptions would falsify the biological findings. The adverse effects of outliers in microarray data can be overcome with non-parametric approaches (cf. de Haan et al., 2009; Gao and Song, 2005; Lee et al., 2005; Troyanskaya et al., 2002; Tusher et al., 2001; Zhao and Pan, 2003). Non-parametric methods replace the restrictive assumptions linked with the Gaussian distribution with very general ones, however, at the expense of losing some power of tests (cf. Whitley and Ball, 2002). Alternatively, we can analyze overdispersed data with robust parametric noise models like Student's-t distributions (cf. Gottardo et al., 2006). The issue of appropriate noise models led to an ongoing discussion, with Giles and Kipling (2003) arguing that microarray data are Gaussian distributed.
Similar methods let Hardin and Wilson (2009) conclude that microarray data require heavy-tailed noise models. The conclusion of Novak et al. (2006) was that 5–15% of genes are non-Gaussian distributed, with the majority following Gaussian distributions. Finding such diverse conclusions about noise in microarray data suggests the need for an in-depth investigation of this issue. We propose to this end inferring the appropriate degree of overdispersion in microarray data with a hierarchical Bayesian model, which is inspired by the proposal of Gottardo et al. (2006). Built-in means for ranking genes according to differential expression enable investigations of the biological implications of deviating from the optimal noise model. The essential components of the proposed model are thus two indicator variables, one decoding whether a gene is differentially expressed, the other decoding the most appropriate noise model. These variables are built into a hierarchical Bayesian analysis of variance (ANOVA) model which can be used for analyzing a variety of experimental designs. Inferring the proposed model with uninformative prior settings provides reliable probability measures, which quantify the suitability of competing noise models. This mode of operation compares the goodness of fit of a Gaussian noise model with t-distributions of different degrees of freedom and infers the appropriate robustness level required for analyzing a microarray dataset. The ultimate goal of microarray data analysis is, however, obtaining sound biological conclusions about which transcripts are involved in a particular process. Judgements about different noise models should therefore be linked with their implications on biological findings. The proposed model provides for this purpose a second mode of operation, in which we fix the noise model either to a Gaussian density or to a t-distribution with optimal degrees of freedom as found in the adaptive mode of operation.
The biological implications of deviating from the optimal noise model can then be assessed from the noise model-dependent gene rankings. To warrant reliable conclusions, we calibrated the model on synthetic data and the golden-spike experiment from Choe et al. (2005), before analysing 14 microarray datasets. Independent of normalization and preprocessing, we found in every case that a t-distribution with small degrees of freedom provides a much better fit of the noise characteristics than a Gaussian density. The importance of robust inference is apparent from our observation that exchanging the optimal Student's-t density with a Gaussian leads to between 119 and 3561 differences in gene lists and to between 14 and 316 differences in Gene Ontology (GO) (cf. Ashburner et al., 2000) term lists. We thus have strong evidence that opting for Gaussian noise models in microarray data analysis may result in seriously misleading biological leads. Microarray data analysis should thus preferably use non-parametric approaches (cf. de Haan et al., 2009; Gao and Song, 2005; Lee et al., 2005; Troyanskaya et al., 2002; Tusher et al., 2001; Zhao and Pan, 2003) or approaches that allow for heavy-tailed noise models (cf. Gottardo et al., 2006).

2 METHODS

The methods in this article provide a framework for thoroughly investigating whether microarray data analysis requires robust approaches, or whether we may safely rely on Gaussian assumptions. The Bayesian ANOVA model shown in Figure 1 as directed acyclic graph (DAG) infers to this end optimal robustness levels and a measure of whether genes are differentially expressed. The proposed approach achieves robustness by using a parametric heavy-tailed noise model, with non-parametric methods (cf. de Haan et al., 2009; Gao and Song, 2005; Lee et al., 2005; Troyanskaya et al., 2002; Tusher et al., 2001; Zhao and Pan, 2003) being popular alternatives.
To put our investigation into the context of these tools, we include the two methods by Lee et al. (2005) and de Haan et al. (2009) in our assessment. Similar to the approach in Gottardo et al. (2006), we propose inferring differentially expressed genes, while at the same time inferring the most appropriate noise model from a set of Student's t-distributions, which include the Gaussian as a special non-robust case. Whereas Gottardo et al. (2006) allow for all possible ANOVA contrasts simultaneously and infer a per-gene posterior probability over all contrasts, our model follows the conventional strategy in microarray data analysis and infers differential expression with one common contrast.

Fig. 1. We represent the proposed model as a DAG with rectangular nodes denoting observed quantities and circular nodes denoting random variables. Hyperparameters associated with priors are shown in brackets. Sheets indicate replication. With n denoting the sample ...

An important aspect of our investigation is assessing the practical relevance of deciding on appropriate noise models. We propose to this end repeating inference of gene lists twice, once using the inferred noise characteristics and once using a Gaussian instead. When leaving all other settings identical, the differences in gene and GO term lists are indicative of the effect of using suboptimal noise models. Gaining far-reaching validity requires analysing a representative collection of microarray datasets covering important organisms and measurement platforms and repeating assessments with different normalization and preprocessing methods.

2.1 Bayesian ANOVA with flexible noise model

The Bayesian one-way ANOVA model shown in Figure 1 as DAG constitutes the core of our evaluation. ANOVA models are commonly used for analysing multi-level microarray experiments like time course data. The model is based on a linear relation between the gene expression y[n,g], measured for sample n and gene g, and the mean expression β[g].
The S-dimensional vector x[n] is an indicator for the biological state. If sample n belongs to state s, x[n] has a 1 at the s-th position and zeros everywhere else. Depending on whether gene g is differentially expressed or not, the latent indicator variable I[g] switches between two different dimensional representations of β[g] (cf. Holmes and Held, 2006; Sykacek et al., 2007). The case that gene g is not differentially expressed is coded by I[g] = 0 and corresponds to the null hypothesis of the classical ANOVA that all groups have the same mean. Vector β[g] contains in this case S identical entries of mean expression β[g,0], the latter being equipped with a Gaussian prior with mean μ and prior precision λ. The alternative hypothesis that gene g is differentially expressed is coded by I[g] = 1 with β[g] being multivariate with a Gaussian prior with mean μ and diagonal precision matrix λ. The indicator I[g] is a priori binomially distributed with probability p of differential expression. To reduce the sensitivity of the approach, the model is extended hierarchically by allowing for a beta prior over p with hyperparameters a and b. The observations y[n,g] follow a symmetric distribution centred around ŷ[n,g] = x[n]^T β[g] with precision ω[n,g]τ. Different robustness levels are achieved by selecting the noise model for the observations y[n,g] from a set containing K − 1 Student's t-distributions of different degrees of freedom, ν, and a Gaussian distribution (with ν = ∞). For obtaining computationally tractable representations of Student's t-distributions with arbitrary degrees of freedom, we introduce the auxiliary variables ω[n,g], represent p(y[n,g], ω[n,g]|β[g], τ, ν) as a certain Gaussian-Gamma density and integrate over ω[n,g] (cf. Bernardo and Smith, 1994). An essential aspect of robustness is adjusting the degrees of freedom ν to the level required by the data.
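The Gaussian-Gamma scale-mixture representation described here is easy to check numerically: drawing a per-observation precision-scaling variable from a Gamma distribution with both parameters ν/2 and then a conditionally Gaussian value with that scaled precision yields marginally t-distributed draws. A stdlib sketch (the symbol names and toy parameter values are ours, not the paper's):

```python
import random
import statistics

random.seed(0)
nu, n = 10.0, 100_000  # degrees of freedom and number of draws (toy values)

draws = []
for _ in range(n):
    # Per-observation precision scaling: omega ~ Gamma(shape=nu/2, scale=2/nu),
    # so E[omega] = 1.  (gammavariate's second argument is the scale.)
    omega = random.gammavariate(nu / 2, 2 / nu)
    # Conditionally Gaussian with precision omega; marginally the draws
    # follow a Student's-t distribution with nu degrees of freedom.
    draws.append(random.gauss(0.0, 1.0 / omega ** 0.5))

print(statistics.fmean(draws))      # close to 0
print(statistics.pvariance(draws))  # close to nu/(nu - 2) = 1.25, not 1
```

The inflated variance relative to the unit-precision Gaussian reflects the heavier tails: occasional wild observations are plausible under the marginal t-density, which is exactly what makes the mixture robust.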
We propose to this end selecting the best-fitting degrees of freedom from a finite set of possible choices (cf. Berger, 1994), which includes the Gaussian (ν = ∞) as the non-robust special case. The proposed model implements this selection via the multinomial-one distributed indicator variable J, which chooses a particular ν from the set ν := {ν[min] + j · c[grid] : j = 0, …, K − 2} ∪ {∞} with ν[min] ≥ 1. This formulation gives rise to K possible noise models. As we have no reason for preferring a particular choice, we use 1/K as uninformative prior probability for all J. The proposed model can be summarized by the joint density formulated in Equation (1), where I, β, Ω, X and Y are shortcuts for denoting all I[g], β[g], ω[n,g], x[n] and y[n,g], respectively, and p(p|a, b) denotes a Beta density, p(λ|c, d), p(τ|g, h) and p(ω[n,g]|ν/2, ν/2) denote Gamma densities, p(J|K) denotes a Multinomial-one density, p(I[g]|p) a Binomial-one density and p(β[g]|I[g], μ, λ) and p(y[n,g]|β[g], ω[n,g], I[g], τ, x[n]) denote Gaussian densities.

2.2 Algorithm

The complexity of the model requires approximate inference. Although closed-form approximations (Liu et al., 2006; Sykacek et al., 2007) have computational advantages, we prefer here an unbiased approximation and follow (Bae and Mallick, 2004; Gottardo et al., 2006; Huang et al., 2002; Lewin et al., 2007; Shahbaba and Neal, 2006; Tadesse et al., 2003) who, among many others, have previously used Markov chain Monte Carlo (MCMC) in a bioinformatics context. MCMC is an application of the Law of Large Numbers and allows approximating expectations by averages of random draws from a given distribution. The random samples are realizations of a Markov chain that behave under certain conditions like draws from a single stationary distribution (cf. Gilks et al., 1996; Robert and Casella, 2004).
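The Law-of-Large-Numbers averaging that MCMC rests on can be illustrated with a toy distribution that we can sample directly (the target here is an arbitrary Gaussian, not the model's posterior):

```python
import random
import statistics

random.seed(1)

# Estimate E[g(X)] for X ~ N(2, 1) with g(x) = x**2 by averaging draws.
# The exact value is Var(X) + E[X]**2 = 1 + 4 = 5.
n = 100_000
estimate = statistics.fmean(random.gauss(2.0, 1.0) ** 2 for _ in range(n))
print(estimate)  # close to 5
```

In the actual sampler the draws come from a Markov chain rather than independently, so consecutive samples are correlated and an initial burn-in phase is discarded, but the averaging step is the same.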
Denoting the sampling density as f and the random samples obtained from MCMC as β[g]^(i), MCMC allows us, for example, to approximate the expectation of the group-specific mean expression β[g] as E[β[g]|X, Y] ≈ (1/N[s]) Σ[i=1..N[s]] β[g]^(i), with N[s] denoting the number of retained samples.

Algorithm 1 illustrates MCMC sampling as pseudo-code. Inference requires a combination of Gibbs, Metropolis-Hastings and Reversible Jump steps. Gibbs steps are used for updating the prior probability of differential expression, p, the prior precision λ, the error precision, τ, and, when keeping the differential expression indicator I[g] fixed, for updating the group means β[g]. A Metropolis-Hastings step is used for updating J as long as we keep the Student's-t noise model. Updates of J that propose changing from a Student's-t to a Gaussian density and vice versa and updates of I[g] rely on the reversible jump approach introduced in Green (1995). Further details about the model, the algorithm and a MATLAB implementation are provided in http://bioinf.boku.ac.at/alexp/robmca.html.

2.3 Data collection

For reliably inferring the optimal noise characteristics and evaluating the implications of potentially oversimplified Gaussian assumptions, we have to consider two aspects. A reliable assessment of different noise models requires calibrating the proposed inference scheme. Calibration makes sure that MCMC converges rapidly and that inference results are insensitive to the chosen hyperparameters. These aspects are best assessed when knowing the expected outcome by using synthetically generated data and dedicated spike-in experiments. Warranting that our findings are generally applicable requires analysing carefully selected microarray datasets, which cover a wide range of model organisms, experimental settings and measurement platforms, and using several normalization and preprocessing methods. Artificial data were generated with Gaussian and Student's-t noise distributions, the latter with 4 and 10 degrees of freedom.
We simulated a two-way comparison of 500 hypothetical genes with each gene assigned to one of five groups, the latter defining the amount of hypothetical differential expression. The mean structure and fraction of occurrence of each group are reported in Table 1. Variances have been chosen in the range of 0.1–10, without altering the reported results. To mimic a realistic microarray scenario, we generated five replicates per group, resulting in 10 data points per gene. Some aspects of computer-generated data might deviate from real microarray measurements. We therefore corroborate our respective conclusions by including the spike-in experiment of Choe et al. (2005) in our analysis.

Table 1. Depending on sample type, which is either 1 or 2, genes from subset i are drawn from distributions with means equal to μ[i,1] and μ[i,2], respectively.

For warranting far-reaching validity of our results, we analysed 14 microarray experiments covering various organisms and measurement platforms. The data include investigations of plant soil responses, Drosophila sleep deprivation, primate dietary comparisons and animal liver metabolism. The experiments, which are summarized in Table 2, are identified by the Gene Expression Omnibus (GEO) reference number (cf. Edgar et al., 2002). Further details about each dataset can be found in the corresponding reference. The selection provided in Table 2 covers several different platforms and quantification algorithms (cf. column ‘Prep.’). We used all data as provided by the owner and applied the conservative normalization method vsn (cf. Huber et al., 2002).

Table 2. Overview of the biological datasets describing the organism (Org.), the GEO ID (CAMDA 08 refers to the Endothelial Apoptosis contest datasets of the meeting), the preprocessing method (Prep.), the overall number of arrays (N), the average degrees of freedom ...
2.4 Alternative normalization and analysis methods

It is well known that results from microarray data analysis may depend on the chosen normalization method (cf. Bolstad et al., 2003). To ensure that our conclusions hold in general, we repeated the analysis for a subset of the data in Table 2 with additional normalization methods. Guided by their popularity in applied microarray papers, we chose loess (cf. Yang et al., 2002) and quantile (cf. Bolstad et al., 2003) normalization. In the light of recent findings that intensities of highly expressed targets cross-talk to neighbouring probes due to scanner inadequacy (cf. Upton and Harrison, 2010), we may expect that Affymetrix probe sets contain outlying measurements. Being designed to alleviate the effect of artefacts contaminating individual probes, the mmgMOS approach (cf. Liu et al., 2005) and the PPLR method (cf. Liu et al., 2006) could help improve the Gaussianity of residuals. For testing whether such sophisticated representations of microarray expression can reduce the need for heavy-tailed noise models, we applied our algorithm to mmgMOS-normalized data and the posterior expression estimates obtained by the PPLR method. Our investigation has so far relied on achieving robustness by representing the noise in microarray data with a suitably chosen parametric density. A different strategy for achieving robustness in microarray data analysis is obtained by abolishing distributional assumptions and using non-parametric methods (cf. de Haan et al., 2009; Gao and Song, 2005; Lee et al., 2005; Tusher et al., 2001). To investigate whether non-parametric approaches are a viable alternative for robust analyses of microarray data, we compare gene rankings obtained with such approaches with gene rankings we obtain (i) with the Bayesian ANOVA when using the optimal (possibly heavy-tailed) noise distribution and (ii) with the proposed model when assuming Gaussian distributed noise.
Compatibility with our ANOVA model suggests applying a Kruskal–Wallis permutation test (cf. Lee et al., 2005) and a robust ANOVA (cf. de Haan et al., 2009). Unknown differences in scale, which we have to expect when comparing P-values and Bayesian probabilities, are overcome by using a P-value threshold of 0.01 for assigning differential expression in the statistical test and adjusting the probability threshold such that the number of differentially expressed genes matches.

2.5 Biological implications

An important aspect in our assessment of different noise models for microarray data analysis is evaluating the biological implications of deviations from the appropriate noise model. The implication of choosing Gaussian noise instead of the optimal noise model can be quantified by comparing the number of genes which are assessed as differentially expressed irrespective of the noise model with the number of genes which show a noise model-dependent assessment. For investigating the implications of unsuitable noise models at a higher level of biological abstraction, we propose inferring GO terms from the gene lists which we obtain with different noise models. We use to this end GO term-specific Fisher's exact tests (cf. Al-Shahrour et al., 2004; Dennis et al., 2003) on the gene lists obtained with different noise models and compare the number of significant GO terms which are found irrespective of the chosen noise model with the number of GO terms with noise model-dependent assessment.

2.6 Calibrating the algorithm

Calibration efforts are important for assuring unbiased and efficient inference with MCMC methods. Making sure that inference is unbiased requires considering the influence of all hyperparameters individually. We have a and b, which are prior counts and thus easy to grasp, with small values corresponding to small influence. A Jeffreys prior (cf. Jeffreys, 1961) is obtained when using a = b = 0.5.
The hyperparameters g and h of the Gamma prior over the noise precision τ also have no indirect consequences and can safely be set to 0 for obtaining the corresponding Jeffreys prior. Independent of whether we use a t-distribution or a Gaussian as noise model, the precision λ deserves more attention. Large values of λ indicate a strong preference for small β[g] values. By entering the Bayes factors of the models represented by I[g] = 0 and I[g] = 1, the precision λ influences, however, also P(I[g]|X, Y, a, b, c, d, e, h, K) (cf. MacKay, 1992), with smaller λ making identifying differentially expressed genes harder. This problem can be solved by regarding λ as a random variable and providing a conjugate Jeffreys hyper-prior, which is a Gamma density parameterized with c = 0, d = 0. Such hierarchical Bayesian models (cf. Lewin et al., 2007; Shahbaba and Neal, 2006) are preferably used, because an indirect prior specification minimizes the dependency of inference results on hyperparameter settings. Jeffreys priors are theoretically well motivated in single-variable cases; they can, however, exhibit strong indirect influence in multi-variable models (cf. Bernardo and Smith, 1994). Having an indirect influence on decisions about differential expression, the precision λ deserves particular attention. We therefore propose investigating the influence of the hyperparameters c and d on the posterior probabilities of differential expression P(I[g]|X, Y, a, b, c, d, e, h, K). By representing the precision λ in the Gaussian prior over β[g] as a random variable, the influence of c and d on λ is related to the prior variance (cf. Supplementary Material). We vary c and d and thereby change the prior variance. By displaying the ordered posterior probabilities, P(I[g]|X, Y, a, b, c, d, e, h, K), for several c, d combinations, the graphs in Figure 2 illustrate this sensitivity analysis for synthetic data that was generated according to the description in Section 2.3.
Choosing the Jeffreys prior c = 0, d = 0 is justified by observing that up to a prior variance of less than 1/100, the hyperparameters c and d have, independently of the noise model, little influence on the posterior probabilities of differential expression.

Fig. 2. The hyperparameters c and d in the prior p(λ|c, d) have to be chosen carefully to avoid side effects. The graphs show the ordered posterior probabilities of differential expression P(I[g]|X, Y, a, b, c, d, e, h, K) with the legend denoting the corresponding ...

Another important aspect of our inference scheme is providing accurate assessments of noise characteristics. This requires a clear distinction between Gaussian and Student's-t noise models and thus an appropriate choice for the upper limit for the degrees of freedom parameter ν, which marks the bound between Student's-t and Gaussian distributions. Simulations on synthetic data showed that taking ν[max] = 45 as upper limit is a good choice because larger degrees of freedom parameters render Student's-t densities indistinguishable from Gaussians and smaller values would unnecessarily misjudge Student's-t densities as Gaussians. Further calibration efforts were concerned with assuring fast convergence of the sampling algorithm to the stationary distribution. Our simulations showed that convergence speed can be dramatically improved by adjusting the grid size c[grid] between two burn-in phases. After starting with an initial value in the range of 1–5, we switch to a smaller value of about 0.05 which is then also used for sampling. A large initial grid size allows the algorithm to quickly determine the approximately correct error model, with the smaller grid size improving the convergence properties of the Markov chain and leading to better approximations of the true continuous degrees of freedom. Convergence towards the stationary distribution was assessed with the R package coda (cf. Plummer et al., 2006).
We found that 11 000 draws were a suitable overall simulation length and that the first 500 draws should be considered as burn-in phase (cf. Algorithm 1). After calibration, we could confirm that the resulting algorithm infers the correct noise model in synthetic data. Data generated with Student's-t distributed noise with 4 and 10 degrees of freedom led to little variation of the samples around the true value, whereas data generated with Gaussian distributed noise would assign all mass to the Gaussian density. We also tested whether the proposed algorithm infers differentially expressed genes reliably. We used for that purpose the golden-spike experiment from Choe et al. (2005). Resulting from a wet lab experiment, these data are both a realistic test case for microarray data and a gold standard with known ground truth. When using a cutoff probability threshold of 0.85, we find for Gaussian noise 72% and for the optimal Student's-t noise 78% of correctly assigned genes. These performance figures are in the top range of the results reported in Choe et al. (2005). The better performance of the Student's-t model is paired with by far larger evidence in favour of this noise model. This observation allows the conclusion that already the technical noise component in microarray data, which is the only remaining source of variation in the golden-spike data, requires considering robust models.

3 RESULTS

To highlight the importance of choosing valid noise models for microarray analysis, we applied the proposed inference scheme to 14 microarray datasets, which are summarized in Table 2. The arresting result of our evaluation is that a heavy-tailed Student's-t noise model is a better fit than a Gaussian noise model for every dataset we looked at (cf. Table 2): in every case a t-distribution with degrees of freedom between 3 and 5 got the highest posterior probability.
This indicates the need for robust noise models, which can handle outlying data points well, and suggests that Gaussian noise models are unsuitable for microarray data analysis, even if according to Novak et al. (2006) only about 5–15% of samples are non-Gaussian distributed. Our assessments also revealed that biological inference depends considerably on the chosen noise model. For obtaining a quantitative statement, we inferred the set of differentially expressed genes twice: once with a Gaussian noise model and once with the optimally inferred t-distribution. This approach provided for every dataset two gene lists with the intersect representing agreement and the symmetric difference representing different biological interpretations, which are solely caused by the different noise models (cf. Table 2, columns ‘Comm. genes’ and ‘Diff. genes’). Microarray data analysis often depends critically on the chosen preprocessing and normalization (cf. Bolstad et al., 2003). To rule out being misled by a particular choice, we repeated the assessment of optimal noise models using loess and quantile normalization and mmgMOS and PPLR preprocessing. The expected degrees of freedom reported in Table 3 for these data allow concluding that our observations are independent of normalization and that even sophisticated analysis methods do not remove the need for robust noise models. The robust model is in general less sensitive to outlying values. Models with t-distributed noise will therefore assign lower posterior probabilities of differential expression when differential expression is caused by one or a few outlying measurements. In situations where outliers lead to an increase of variance or a decrease in average differential expression, the Gaussian noise model will overlook differentially expressed genes, which would be captured by the more appropriate t-distributed noise model. A wrongly chosen noise model will therefore lead to false positives and false negatives.
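The sensitivity of Gaussian-based inference to single wild measurements can be seen in miniature with a textbook EM-style reweighting for the location of a Student's-t likelihood (fixed scale and degrees of freedom). This illustrates the robustness principle only; it is not the paper's sampler, and the data and parameter values are invented:

```python
import statistics

def t_location(xs, nu=4.0, sigma=1.0, iters=50):
    """EM / iteratively-reweighted mean under a Student's-t likelihood:
    outlying points are down-weighted by w_i = (nu + 1) / (nu + z_i**2)."""
    mu = statistics.fmean(xs)
    for _ in range(iters):
        w = [(nu + 1) / (nu + ((x - mu) / sigma) ** 2) for x in xs]
        mu = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
    return mu

clean = [0.1, -0.2, 0.05, 0.3, -0.15, 0.0, 0.2, -0.1]
contaminated = clean + [25.0]          # one wild measurement

print(statistics.fmean(contaminated))  # dragged toward the outlier (2.8)
print(t_location(contaminated))        # stays near the bulk of the data
```

The Gaussian maximum-likelihood location is the plain mean, so one outlier shifts it arbitrarily; under the t-likelihood the same point receives a near-zero weight and the estimate barely moves, which is the mechanism behind the differing gene calls above.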
Both error types are confirmed by the illustrations in Figure 3, which show many genes with noise model-dependent probabilities of differential expression.

[Figure 3 caption: Noise model dependencies of posterior probabilities. Subplot (A) illustrates the Arabidopsis data (GDS3216) ranked by the posterior probabilities of differential expression obtained with the most probable Student's-t distribution.]

[Table 3 caption: An assessment of robustness levels in dependence of normalization and preprocessing, showing the expected degrees of freedom parameters.]

Subgraph (A) in Figure 3 is ranked by the posterior probabilities obtained with the optimal t-distributed noise model (probabilities shown as a black line). A subset of posteriors obtained with Gaussian noise is shown as dots. Subgraph (B) is ranked by the posterior probabilities obtained with a Gaussian noise model (probabilities shown as a black line). A subset of the posteriors obtained with the optimal Student's-t noise is shown as dots. We find in both subgraphs for many genes a substantial influence of the noise model on the posterior probability of differential expression. Given that inference over the degrees of freedom ν clearly favoured the Student's-t model, we can consider all genes which receive a large posterior probability of differential expression only under t-distributed noise as potential false negatives under a standard Gaussian noise model. Genes that receive a large posterior probability of differential expression only under a Gaussian density are likely to be false positives. Table 2 shows that the number of genes with a noise model-dependent assessment of differential expression ranges from 119 to 3561. This is about one tenth to two times the number of genes which are assessed as differentially expressed independently of the noise model.
We can thus conclude that the choice of noise model can have a considerable influence on the inferred gene lists, with a wrongly chosen noise model introducing both false positives and false negatives. To investigate the biological significance of the noise model-dependent differences in gene lists, we applied a GO term inference (cf. Al-Shahrour et al., 2004) twice: once using the gene list which we obtained with the Gaussian noise model and a second time using the gene list which we obtained when the noise is fixed to the most probable t-distribution. Table 2 lists the number of GO terms which were found unambiguously and the number of GO terms with a noise model-dependent assessment (cf. columns ‘Comm. GO terms’ and ‘Diff. GO terms’). Observing that the noise model-dependent GO term lists contain between one fifth and 22 times as many differences as common entries suggests that an unsuitably chosen noise model is likely to have a profound implication on biological conclusions drawn from an analysis. Having gathered substantial evidence that microarray data should be analysed by considering heavy-tailed noise, the question arises whether non-parametric approaches can help solve this issue. To this end, we compare the agreement in gene lists obtained with two non-parametric tests with our robust Bayesian ANOVA, and compare this with the agreement we observe between the same tests and the Gaussian version of our Bayesian ANOVA. The results in Table 4 show a better agreement of rankings between the robust methods, which suggests that non-parametric methods should be considered for analysing microarray data. Our results do, however, in agreement with Whitley and Ball (2002), also reveal the loss in power inherent to non-parametric methods. In our analysis this manifests in finding no significant P-values with the robust ANOVA method for GEO ID GDS3216.
For GEO ID GDS3225, GEO ID GDS1555 and the CAMDA 08 data, both non-parametric methods fail in finding significant P-values (data omitted from table). From Table 4, it is also obvious that small sample sizes (cf. Table 2, column ‘N’) lead in general to poor agreement. If sample sizes permit their application, we can however recommend non-parametric methods for microarray data analysis.

[Table 4 caption: For comparing non-parametric robust methods with robust parametric methods, we provide the percentage agreement about differentially expressed genes.]

This article provides an in-depth assessment of two competing assumptions about the noise characteristics in microarray data. Assuming Gaussian noise has the benefit of leading to highly efficient analysis methods. A considerable sensitivity to outlying observations is, however, an unfortunate weakness of Gaussian noise-based data analysis. This weakness may be overcome with non-parametric methods or by methods which assume heavy-tailed noise distributions. Applying robust analysis methods to microarray data has the disadvantage of introducing more involved computations. The application of non-parametric methods is in addition limited to problems with sufficiently many samples. Comparing robust analysis methods with Gaussian-based microarray data analysis has to provide conclusions which are relevant for biological practice. Certain technical aspects can be tested by gold standards like the spike-in data from Choe et al. (2005). Other aspects, like biological variation, are only captured by data from real-world biology. Although certain facts about individual experiments are well known, complete knowledge of ground truth is not available for any biological microarray experiment. An assessment of biological implications has thus to resort to indirect strategies. The route chosen in this article first compares the technical suitability of Gaussian noise and heavy-tailed t-distributions.
This requires a mode of operation in data analysis, which allows comparing different noise models. Once we established which noise model is preferred for technical reasons, we can turn to investigating the biological implications caused by changing the noise model. This mode of operation relies in our analysis on counting the number of genes which show a noise model-dependent difference in differential expression. These gene counts are complemented by investigating which GO terms are significantly affected from the noise model-dependent gene lists. For providing conclusions of far-reaching validity, we analysed 14 carefully chosen microarray experiments, covering a wide range of model organisms and measurement platforms. To avoid reporting spurious results, our simulations included careful tuning of hyperparameters to minimize model sensitivity, steps for assessing convergence of the algorithm and applied different normalization and preprocessing methods. The arresting result of our assessment is that we find highly decisive evidence in favour of t-distributions with high kurtosis for every experiment we looked at. The significance of this finding is backed up by the observation that the choice of error model considerably influences the biological conclusions drawn from the analyses. Gene lists differ in dependence of the noise model by between 119 and 3561 genes. These differences have a substantial influence on the conclusions we draw on a higher level of biological abstraction. The number of differences in the GO term lists we find in dependence of the chosen noise model ranges from 14 to 316. For many datasets, the number of GO terms with noise model-dependent equivocal assessment is larger than the number of GO terms we can unambiguously assign to these experiments irrespective of the chosen noise model. We may thus conclude that a substantial number of outlying measurements is present in many microarray studies. 
Relying on implicit Gaussian assumptions means ignoring the heavy tails of the residuals, and that can have adverse effects on biological conclusions drawn from microarray data. Practitioners should thus apply robust approaches for microarray data analysis, which work reliably irrespective of whether noise is Gaussian or heavy tailed. We suggest for this purpose considering non-parametric approaches (cf. de Haan et al., 2009; Gao and Song, 2005; Lee et al., 2005; Troyanskaya et al., 2002; Tusher et al., 2001; Zhao and Pan, 2003), or, for small sample sizes, applying Bayesian approaches like Gottardo et al. (2006) or the MatLab implementation which accompanies this paper at http://bioinf.boku.ac.at/alexp/robmca.html. The authors would like to thank D. Kreil for his input on microarray normalization. We are also grateful to our anonymous reviewers who helped improve our article with their valuable comments. Funding: A. Posekany and P. Sykacek are grateful to the Vienna Science and Technology Fund for funding this work within the project WWTF-LSC-2005-#35. Conflict of Interest: none declared. ^1To simplify notation ν denotes the set and individual values of the degrees of freedom parameter. • Affara M, et al. Understanding endothelial cell apoptosis: what can the transcriptome, glycome and proteome reveal? Philos. Trans. R. Soc. B. 2007;362:1469–1487. [PMC free article] [PubMed] • Al-Shahrour F, et al. Fatigo: a web tool for finding significant association of gene ontology terms with groups of genes. Bioinformatics. 2004;20:578–580. [PubMed] • Ashburner M, et al. Gene ontology: tool for the unification of biology. the gene ontology consortium. Nat. Genet. 2000;25:25–29. [PMC free article] [PubMed] • Bae K, Mallick B. Gene selection using a two-level hierarchical bayesian model. Bioinformatics. 2004;20:3423–3430. [PubMed] • Baldi P, Long A. A bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes.
Bioinformatics. 2001;17:509–519. [PubMed] • Berger JO. An overview of robust Bayesian analysis. Test. 1994;3:5–124. • Bernardo J, Smith A. Bayesian Theory. Chichester: Wiley; 1994. • Blalock E, et al. Incipient alzheimer's disease: microarray correlation analyses reveal major transcriptional and tumor suppressor responses. Proc. Natl Acad. Sci. 2004;101:2173–2178. [PMC free article] [PubMed] • Bolstad BM, et al. A comparison of normalization methods for high density oligonucleotide array data based on bias and variance. Bioinformatics. 2003;19:185–193. [PubMed] • Cameron D, et al. Gene expression profiles of intact and regenerating zebrafish retina. Mol. Vis. 2005;11:775–791. [PubMed] • Choe S, et al. Preferred analysis methods for affymetrix genechips revealed by a wholly defined control dataset. Genome Biol. 2005;6:R16. [PMC free article] [PubMed] • de Haan J, et al. Robust anova for microarray data. Chemometr. Intell. Lab. Syst. 2009;98:38–44. • Dennis G, et al. DAVID: Database for Annotation, Visualization, and Integrated Discovery. Genome Biol. 2003;4:R60. [PMC free article] [PubMed] • Dinneny J, et al. Cell identity mediates the response of Arabidopsis roots to abiotic stress. Science. 2008;320:942–945. [PubMed] • Edgar R, et al. Gene expression omnibus: NCBI gene expression and hybridization array data repository. Nucleic Acid Res. 2002;30:207–210. [PMC free article] [PubMed] • Gao X, Song P. Nonparametric tests for differential gene expression and interaction effects in multi-factorial microarray experiments. BMC Bioinformatics. 2005;6:186. [PMC free article] [PubMed] • Giles P, Kipling D. Normality of oligonucleotide microarray data and implications for parametric statistical analyses. Bioinformatics. 2003;19:2254–2262. [PubMed] • Gilks W, et al. Markov Chain Monte Carlo in Practice. London: Chapman and Hall; 1996. • Gottardo R, et al. Bayesian robust inference for differential gene expression in microarrays with multiple samples. Biometrics. 
2006;62:10–18. [PubMed] • Green PJ. Reversible jump Markov Chain Monte Carlo computation and Bayesian model determination. Biometrika. 1995;82:711–732. • Hardin J, Wilson J. A note on oligonucleotide expression values not being normally distributed. Biostatistics. 2009;10:446–450. [PubMed] • Holmes C, Held L. Bayesian auxiliary variable models for binary and multinomial regression. Bayesian Anal. 2006;1:145–168. • Huang E, et al. Gene expression profiling for prediction of clinical characteristics of breast cancer. Hormone Res. 2002;58:55–73. [PubMed] • Huber W, et al. Variance stabilization applied to microarray data calibration and to the quantification of differential expression. Bioinformaics. 2002;18(Suppl. 1):S96–S104. [PubMed] • Ibrahim J, et al. Bayesian models for gene expression with dna microarray data. J. Am. Stat. Assoc. 2002;97:88–99. • Irizarry R, et al. Exploration, normalization, and summaries of high density oligonucleotide array probe level data. Biostatistics. 2003;31:249–264. [PubMed] • Ishwaran H, Rao J. Detecting differentially expressed gene in microarrays using Bayesian model selection. J. Am. Stat. Assoc. 2003;98:438–455. • Jeffreys H. Theory of Probability. 3rd. Oxford: Clarendon Press; 1961. • Jin J, et al. Modeling of corticosteroid pharmacogenomics in rat liver using gene microarrays. J. Pharmalcol. Exp. Ther. 2003;307:93–109. [PubMed] • Lee M, et al. Nonparametric methods for microarray data based on exchangeability and borrowed power. J. Biopharm. Stat. 2005;15:783–797. [PubMed] • Lewin A, et al. Fully Bayesian mixture model for differential gene expression: simulations and model checks. Stat. Appl. Genet. Mol. Biol. 2007;6 doi:10.2202/1544-6115.1314. [PubMed] • Li S, et al. Assessment of diet-induced obese rats as an obesity model by comparative functional genomics. Obesity. 2008;16:811–818. [PubMed] • Liu X, et al. A tractable probabilistic model for affymetrix probe-level analysis across multiple chips. Bioinformatics. 
2005;21:3637–3644. [PubMed] • Liu X, et al. Probe-level measurement error improves accuracy in detecting differential gene expression. Bioinformatics. 2006;22:2107–2113. [PubMed] • MacKay DJC. Bayesian interpolation. Neural Comput. 1992;4:415–447. • MacLennan N, et al. Targeted disruption of glycerol kinase gene in mice: expression analysis in liver shows alterations in network partners related to glycerol kinase activity. Hum. Mol. Genet. 2006;15:405–415. [PubMed] • Middleton F, et al. Application of genomic technologies: DNA microarrays and metabolic profiling of obesity in the hypothalamus and in subcutaneous fat. Nutrition. 2004;20:14–25. [PubMed] • Novak J, et al. Generalization of DNA microarray dispersion properties: microarray equivalent of t-distribution. Biol. Direct. 2006;1:27. [PMC free article] [PubMed] • Plummer M, et al. CODA: convergence diagnosis and output analysis for MCMC. R. News. 2006;6:7–11. • Robert CP, Casella R. Monte Carlo Statistical Methods. New York: Springer; 2004. • Shahbaba B, Neal RM. Gene function classification using Bayesian models with hierarchy-based priors. BMC Bioinformatics. 2006;7:448. [PMC free article] [PubMed] • Small C, et al. Profiling gene expression during the differentiation and development of the murine embryonic gonad. Biol. Reprod. 2005;72:492–501. [PMC free article] [PubMed] • Smyth GK. Bioinformatics and Computational Biology Solutions using R and BioConductor. New York: Springer; 2005. Limma: linear models for microarray data; pp. 397–420. • Somel M, et al. Human and chimpanzee gene expression differences replicated in mice fed different diets. PLoS One. 2008;3:e1504. [PMC free article] [PubMed] • Someya S, et al. The role of mtdna mutations in the pathogenesis of age-related hearing loss in mice carrying a mutator dna polymerase gamma. Neurobiol. Aging. 2008;29:1080–1092. [PubMed] • Sykacek P, et al. Bayesian modelling of shared gene function. Bioinformatics. 2007;23:1936–1944. [PubMed] • Tadesse M, et al. 
Identification of differentially expressed genes in high-density oligonucleotide arrays accounting for the quantification limits of the technology. Biometrics. 2003;59:542–554. • Talantov D, et al. Novel genes associated with malignant melanoma but not benign melanocytic lesions. Clin. Cancer Res. 2005;11:7234–7242. [PubMed] • Troyanskaya O, et al. Nonparametric methods for identifying differentially expressed genes in microarray data. Bioinformatics. 2002;18:1454–1461. [PubMed] • Tusher V, et al. Significance analysis of microarrays applied to the ionizing radiation response. Proc. Natl Acad. Sci. 2001;98:5116–5121. [PMC free article] [PubMed] • Upton GJG, Harrisson AP. The detection of blur in Affymetrix GeneChips. Stat. Appl. Genet. Mol. Biol. 2010;9 doi:10.2202/1544-6115.1590. [PubMed] • Van Hoewyk D, et al. Transcriptome analyses give insights into selenium-stress responses and selenium tolerance mechanisms in arabidopsis. Physiol. Plant. 2008;132:236–253. [PubMed] • Whitley E, Ball J. Statistics review 6: nonparametric methods. Crit. Care. 2002;6:509–513. [PMC free article] [PubMed] • Yang Y, et al. Normalization for cDNA microarray data: a robust composite method addressing single and multiple slide systematic variation. Nucleic Acid Res. 2002;30:e15. [PMC free article] [ • Yao Z, et al. A Marfan syndrome gene expression phenotype in cultured skin fibroblasts. BMC Genomics. 2007;8:319. [PMC free article] [PubMed] • Zhao H, et al. Multivariate hierarchical Bayesian model for differential gene expression analysis in microarray experiments. BMC Bioinformatics. 2008;9(Suppl. 1):S9. [PMC free article] [PubMed] • Zhao Y, Pan W. Modified nonparametric approaches to detecting differentially expressed genes in replicated microarray experiments. Bioinformatics. 2003;19:1046–1054. [PubMed] • Zimmerman J, et al. Multiple mechanisms limit the duration of wakefulness in Drosophila brain. Physiol. Genomics. 2006;27:337–350. 
[PubMed] Articles from Bioinformatics are provided here courtesy of Oxford University Press
ASA 127th Meeting, M.I.T., 1994 June 6-10

1aSA5. Merging of launching or detachment points of weakly damped leaky waves on S-shaped surfaces and complex launching or detachment points.

Philip L. Marston, Dept. of Phys., Washington State Univ., Pullman, WA 99164-2814

In previous work, Fermat's principle and properties of leaky waves on flat surfaces were combined to obtain a high-frequency approximation to scattering by cylinders of slowly varying curvature [P. L. Marston, J. Acoust. Soc. Am. 94, 1861 (A) (1993)]. The usual assumption was made that only the first derivative of optical-path length (with respect to path variation) vanishes. The present work considers situations where the second (or higher) derivatives vanish at launching or detachment points. The simplest example is for coupling to (or detachment from) a tilted S-shaped surface. Such surfaces may have pairs of launching or detachment points; the pairs can merge at an appropriate surface tilt that results in a k^(1/6) amplitude enhancement coefficient. The launching or detachment factor becomes proportional to an Airy function of complex argument, and launching or detachment in a shadow region of the Airy factor is described approximately by a complex ray. Higher-order launching or detachment points may also result from a sufficiently rapid dependence of the leaky-wave phase velocity on position. [Work supported by ONR.]
Topic: General correlation coefficient.
Replies: 2 — Last Post: Jul 10, 1996 10:46 PM

General correlation coefficient.
Posted: Jul 9, 1996 12:11 AM

Can anyone advise me of a general purpose algorithm for calculating a correlation coefficient?

For linear equations of the form y = ax + b, it's pretty well documented that you can calculate a correlation coefficient r = Sxy / sqrt (Sxx * Syy). I see this often labeled as Pearson's r.

Somewhere I came across a more general expression of r that worked for the non-linear equations I was working with by comparing y_hat to y_obs, the estimated and actual values of y. I coded this in C as shown in the attachment, and it always matched SAS, so I was happy till today:

Today, while fitting the equation y = d + ((a - d) / (1 + (x/c)^b)) to the data

    x        y
    0.0117   0.160503
    0.181    0.176018
    0.438    0.203961
    1.46     0.331981
    2.88     0.529194
    4.77     0.80795
    9.89     1.600164
    20       3.178673

After fitting the curve, d = 29.513744 (if your mileage varies, it should be close anyway) and I get a correlation coefficient of 1.0024, which is obviously bogus. I tried to find a rounding error, or lack of precision somewhere, but the residual sum of squares is really calculated as larger than the total sum of squares using that algorithm.

So I'm left with the conclusion that this is not the most general way to calculate a correlation coefficient for a generalized non-linear equation, not to mention inaccurate under at least some circumstances. What do you think would be better? Thank you for considering this puzzle.
// corrcoef.c
// Purpose:
//   Calculates the correlation coefficient, r.  R is derived from the
//   square root of the coefficient of determination, R^2.  The coefficient
//   of determination is calculated as the ratio of explained variation
//   divided by the total variation.  The explained variation is estimated
//   from the least squares fit of the equation to the data.

#include <math.h>    /* for sqrt */
#include "common.h"  /* for the function prototype */

double correlation_coefficient (
    double y_obs[],  // the observed data
    double y_hat[],  // the data predicted by the relationship
    double w[],      // the weighting factors
    short  num_obs)  // the number of observations
{
    double y_hat_ave = 0.0;  // average of the estimated y values
    double y_obs_ave = 0.0;  // average of the observed y values
    short  i;                // loop index
    double r;                // the correlation coefficient
    double w_sum = 0.0;      // the total weight
    double sst   = 0.0;      // total variation in the data
    double ssr   = 0.0;      // total variation explained by the
                             // relationship between x and y
    double temp1;            // temporary value used for efficiency
    double temp2;            // temporary value used for efficiency

    // First get the averages
    for (i = 0; i < num_obs; i++)
    {
        y_obs_ave += (w[i] * y_obs[i]);
        y_hat_ave += (w[i] * y_hat[i]);
        w_sum     += w[i];
    }
    y_obs_ave /= w_sum;
    y_hat_ave /= w_sum;

    // Then get the sums of the squares
    for (i = 0; i < num_obs; i++)
    {
        temp1 = y_hat[i] - y_hat_ave;
        ssr  += (w[i] * temp1 * temp1);
        temp2 = y_obs[i] - y_obs_ave;
        sst  += (w[i] * temp2 * temp2);
    }

    // Finally, take the ratio
    r = sqrt (ssr / sst);
    return (r);
}

Date     Subject                               Author
7/9/96   General correlation coefficient.      Steve Cable
7/10/96  Re: General correlation coefficient.  Bob Wheeler
7/10/96  Re: General correlation coefficient.  Jeff Brush
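For what it is worth, a standard way out of this puzzle (not taken from the thread itself) is to compute the coefficient of determination from the residuals, R^2 = 1 - SS_res/SS_tot, which cannot exceed 1 by construction; the SSR/SST form used in the C code above can exceed 1 for non-linear fits because the decomposition SST = SSR + SSE only holds for linear least squares. A Python sketch (function and variable names are mine):

```python
def r_squared(y_obs, y_hat, w):
    """Weighted coefficient of determination, 1 - SS_res/SS_tot.
    Bounded above by 1; can go negative if the fit is worse than
    predicting the weighted mean."""
    w_sum = sum(w)
    y_bar = sum(wi * yo for wi, yo in zip(w, y_obs)) / w_sum
    ss_res = sum(wi * (yo - yh) ** 2 for wi, yo, yh in zip(w, y_obs, y_hat))
    ss_tot = sum(wi * (yo - y_bar) ** 2 for wi, yo in zip(w, y_obs))
    return 1.0 - ss_res / ss_tot

# Perfect predictions give exactly 1.0; no fit can score higher.
y = [0.160503, 0.176018, 0.203961, 0.331981, 0.529194]
print(r_squared(y, y, [1.0] * len(y)))
```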
IIT JEE Physics Study Material

IIT JEE, being one of the most competitive exams, needs thorough preparation. There is no standard study material for IIT JEE. Although a number of IIT-JEE preparation books are available in the market and with coaching institutes, the correct recipe for success still baffles the aspirants. AskIITians' esteemed faculty have put tremendous effort and energy into bringing out a very well structured study material for IIT JEE, which is available free of cost to all aspirants on our website. It is our continuous effort to make all IIT JEE preparatory material as lucid as possible. The study material has been compiled from various sources which have always been the toppers' choice for IIT JEE. We have also kept in mind the requirements of AIEEE – so you can use this as AIEEE study material as well. Most of the topics are the same in IIT JEE and AIEEE. The topics which are unique to each of the exams have also been covered separately, so that it serves as a one-stop solution for all your needs. We have divided our study material into three broad categories as per the three subjects covered in IIT JEE and AIEEE, viz. Mathematics, Physics and Chemistry. Further, we have divided each subject into chapters, which in turn are broken down into the subtopics of every chapter. At the end of every chapter there are some solved examples which have been carefully chosen to make all the concepts clear to our readers. We are sure you will tremendously enjoy reading our free IIT JEE and AIEEE study material and we would also like you to contribute to this with your suggestions and feedback.
Mathematics:
Set Theory & Function | Complex Numbers | Quadratic Equations and Expressions | Progressions | Logarithms and their Properties | Permutations and Combinations | Binomial Theorem for a +ve Integral Index | Matrices and Determinants | Probability
Trigonometric Functions | Multiple and Sub-Multiple Angles | Trigonometric Equations | Properties and Solution of Triangles | Inverse Trigonometric Functions | Trigonometric Identities & Equations
Straight Lines | Circle | Parabola | Ellipse | Hyperbola | 3D Geometry
Limits, Continuity and Differentiability | Differentiation | Application of Derivatives | Tangents and Normals | Maxima and Minima
Indefinite Integral | Definite Integral | Area Under Curves | Differential Equations

Chemistry:
Basic Concepts | Stoichiometry | Gaseous and Liquid States | Atomic Structure | Chemical Bonding | Energetics | Chemical Equilibrium | Ionic Equilibrium | Electrochemistry | Chemical Kinetics | Solid State | States of Matter | Solutions | Redox Reactions | Surface Chemistry | Some Basic Concepts of Chemistry | State of Matter | Stoichiometry and Redox Reactions
Basic Concepts | Nomenclature | Isomerism | Reaction Mechanism | Saturated Hydrocarbons | Alkenes and Alkynes | Reactions of Benzene | Aromatic Compounds | Alkyl Halides | Alcohols and Ethers | Aldehydes and Ketones | Carboxylic Acids and their Derivatives | Amines and Nitrogen Containing Compounds | Carbohydrates | Amino Acids and Peptides | Uses of Some Important Polymers | Purification, Classification and Nomenclature of Organic Compounds | Carbonyl Compounds | Hydrocarbon | Environmental Chemistry | Hydrogen and Its Compounds | IUPAC and GOC
Basic Concepts | Extraction of Elements | Hydrogen | S and P Block Elements | D and F-Block Elements | Coordination Compounds & Organometallics | Qualitative Analysis | Extractive Metallurgy | Polymers and Biomolecules

Physics:
General Physics | Mechanics | Wave Motion | Thermal Physics | Electrostatics | Magnetism | Electric Current | Electromagnetic Induction | Ray Optics | Wave Optics | Modern Physics
Conditional probability + Poisson question
December 9th 2011, 11:36 AM — #1

Hi, I have the following problem... I have solved most of it, but I am stuck on this question: "B) Given that eight buses arrive, what is the probability that six of them are number 29's?" Thanks a lot for your help! The problem and what I have done so far is:

"Number 29 and number 42 buses arrive at a bus stop according to independent Poisson processes, each at a rate of four per hour. A) What is the probability that eight buses arrive between 10 and 11 am? B) Given that eight buses arrive, what is the probability that six of them are number 29's?"

My answer to A: $X$ (number 29 buses) has a Poisson distribution with parameter $\mu$ and $Y$ (number 42 buses) has a Poisson distribution with parameter $\lambda$. Let $Z=X+Y$; then $Z$ has a Poisson distribution with parameter $\mu+\lambda$. The solution to B is supposed to be $7/64$, but I don't know how to get it.

Re: Conditional probability + Poisson question — December 9th 2011, 01:25 PM — #2
You need to find $P(X=6\mid Z=8)$, which is equal to $\frac{P(X=6\cap Z=8)}{P(Z=8)}$. The event $X=6\cap Z=8$ is the same as $X=6\cap Y=2$, and, since $X$ and $Y$ are independent, $P(X=6\cap Y=2) = P(X=6)P(Y=2)$.

Re: Conditional probability + Poisson question — December 10th 2011, 11:45 AM — #3
No, X and Y are not independent, since we know X+Y = 8. Given that X+Y=8, X has a Binomial distribution with p = 1/2 and n = 8.
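The $7/64$ answer is easy to check numerically: conditioning the joint Poisson probabilities on $Z=8$ gives the same value as the Binomial($n=8$, $p=1/2$) route mentioned in the thread. A quick check (rates per hour as in the problem):

```python
import math

mu = lam = 4.0  # both bus routes arrive at 4 buses per hour

def pois(k, rate):
    # Poisson pmf: e^{-rate} * rate^k / k!
    return math.exp(-rate) * rate**k / math.factorial(k)

# Conditional route: P(X=6, Y=2) / P(Z=8), with Z ~ Poisson(mu + lam).
p_cond = pois(6, mu) * pois(2, lam) / pois(8, mu + lam)

# Binomial route: C(8,6) * (1/2)^8.
p_binom = math.comb(8, 6) * 0.5**8

print(p_cond, p_binom, 7 / 64)  # all three agree (7/64 = 0.109375)
```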
[SciPy-User] Matrix-free version of connected_components
Per Nielsen evilper@gmail....
Tue Aug 7 02:23:22 CDT 2012

Ah hehe. I have had a hard time optimizing my graph neighbor extraction code, so right now it is actually too slow to be of any use. I will try to find a different approach and I don't want you spending any of your time cleaning code, but thank you very much for the offer :)

On Tue, Aug 7, 2012 at 3:41 AM, Charles R Harris wrote:
> On Mon, Aug 6, 2012 at 3:33 AM, Per Nielsen <evilper@gmail.com> wrote:
>>> I'm not sure what your application is, but if you just need connected
>>> components and have an easy way to find neighbors, then unionfind will
>>> partition the set for you. Although the common version doesn't make it easy
>>> to extract them, I have an implementation that keeps the connected nodes in
>>> a circular list for just that application.
>> I would very much like to have a copy of your algorithm; it might be easier to
>> modify than the networkX code as Gael suggested.
> Well, it's in Cython ;) It is set up to use integer indices as node labels
> and all components are extracted as lists of integers. It's also pretty old
> and needs documentation, so I'll do that and clean it up a bit. I'll be on
> travel for the next few days, so unless matters are pressing, I won't get
> to it until the weekend.
> Chuck
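For readers who land on this thread: a minimal Python sketch of the idea Chuck describes — union-find with path compression plus a circular list per component, so members can be extracted without scanning all nodes (this is my reconstruction, not his Cython code):

```python
class UnionFind:
    """Union-find with path compression; `ring` keeps each component's
    members in a circular linked list so a whole component can be
    walked starting from any of its members."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.ring = list(range(n))  # ring[i]: next node in i's component

    def find(self, i):
        root = i
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[i] != root:  # path compression
            self.parent[i], i = root, self.parent[i]
        return root

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri != rj:
            self.parent[rj] = ri
            # Splice the two circular lists into one.
            self.ring[ri], self.ring[rj] = self.ring[rj], self.ring[ri]

    def component(self, i):
        members, j = [i], self.ring[i]
        while j != i:
            members.append(j)
            j = self.ring[j]
        return members

uf = UnionFind(6)
for a, b in [(0, 1), (1, 2), (4, 5)]:
    uf.union(a, b)
print(sorted(uf.component(0)))  # [0, 1, 2]
print(sorted(uf.component(4)))  # [4, 5]
```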
Bibliographic References

Arnaud Legrand, Hélène Renard, Yves Robert, Frédéric Vivien, "Mapping and Load-Balancing Iterative Computations," IEEE Transactions on Parallel and Distributed Systems, vol. 15, no. 6, pp. 546-558, June 2004, doi:10.1109/TPDS.2004.10.

Abstract—This paper is devoted to mapping iterative algorithms onto heterogeneous clusters. The application data is partitioned over the processors, which are arranged along a virtual ring. At each iteration, independent calculations are carried out in parallel, and some communications take place between consecutive processors in the ring. The question is to determine how to slice the application data into chunks, and to assign these chunks to the processors, so that the total execution time is minimized. One major difficulty is to embed a processor ring into a network that typically is not fully connected, so that some communication links have to be shared by several processor pairs.
We establish a complexity result that assesses the difficulty of this problem, and we design a practical heuristic that provides efficient mapping, routing, link-sharing, and data distribution schemes. [1] J. Barbosa, J. Tavares, and A.J. Padilha, Linear Algebra Algorithms in a Heterogeneous Cluster of Personal Computers Proc. Ninth Heterogeneous Computing Workshop, pp. 147-159, 2000. [2] O. Beaumont, V. Boudet, A. Petitet, F. Rastello, and Y. Robert, A Proposal for a Heterogeneous Cluster ScaLAPACK (Dense Linear Solvers) IEEE Trans. Computers, vol. 50, no. 10, pp. 1052-1070, [3] O. Beaumont, V. Boudet, F. Rastello, and Y. Robert, Matrix Multiplication on Heterogeneous Platforms IEEE Trans. Parallel and Distributed Systems, vol. 12, no. 10, pp. 1033-1051, Oct. 2001. [4] F. Berman, High-Performance Schedulers The Grid: Blueprint for a New Computing Infrastructure, I. Foster and C. Kesselman, eds., pp. 279-309, Morgan-Kaufmann, 1999. [5] D. Bertsekas and R. Gallager, Data Networks. Prentice Hall, 1987. [6] V. Bharadwaj, D. Ghose, V. Mani, and T.G. Robertazzi, Scheduling Divisible Loads in Parallel and Distributed Systems, IEEE CS Press, 1996. [7] V. Bharadwaj, D. Ghose, and T.G. Robertazzi, A New Paradigm for Load Scheduling in Distributed Systems Cluster Computing, vol. 6, no. 1, pp. 7-18, Jan. 2003. [8] R.P. Brent, The LINPACK Benchmark on the AP1000: Preliminary Report Proc. CAP Workshop, 1991. [9] R. Buyya, High Performance Cluster Computing. Volume 1: Architecture and Systems. Upper Saddle River, N.J.: Prentice Hall PTR, 1999. [10] K.L. Calvert, M.B. Doar, and E.W. Zegura, “Modeling Internet Topology,” IEEE Comm. Magazine, vol. 35, no. 6, pp. 160-163, June 1997. [11] M. Cierniak, M.J. Zaki, and W. Li, Compile-Time Scheduling Algorithms for Heterogeneous Network of Workstations The Computer J., vol. 40, no. 6, pp. 356-372, 1997. [12] M. Cierniak, M.J. Zaki, and W. Li, Customized Dynamic Load Balancing for a Network of Workstations J. 
Parallel and Distributed Computing, vol. 43, pp. 156-162, 1997. [13] T.H. Cormen, C.E. Leiserson, and R.L. Rivest, Introduction to Algorithms. MIT Press, 1990. [14] P.E. Crandall and M.J. Quinn, “Block Data Decomposition for Data-Parallel Programming on a Heterogeneous Workstation Network,” Proc. Second Int'l Symp. High Performance Distributed Computing, pp. 42-49, 1993. [15] E. Deelman and B.K. Szymanski, Dynamic Load Balancing in Parallel Discrete Event Simulation for Spatially Explicit Problems Proc. PADS'98 12th Workshop Parallel and Distributed Simulation, pp. 46-53, 1998. [16] M. Doar, A Better Model for Generating Test Networks Proc. Globecom '96, Nov. 1996. [17] A.B. Downey, Using Pathchar to Estimate Internet Link Characteristics Measurement and Modeling of Computer Systems, pp. 222-223, 1999. [18] J.E. Flaherty, R.M. Loy, C. Özturan, M.S. Shephard, B.K. Szymanski, J.D. Teresco, and L.H. Ziantz, Parallel Structures and Dynamic Load Balancing for Adaptive Finite Element Computation Applied Numerical Math., vol. 26, nos. 1-2, pp. 241-263, 1997. [19] J.E. Flaherty, R.M. Loy, M.S. Shephard, B.K. Szymanski, J.D. Teresco, and L.H. Ziantz, Adaptive Local Refinement with Octree Load Balancing for the Parallel Solution of Three-Dimensional Conservation Laws J. Parallel and Distributed Computing, vol. 47, no. 2, pp. 139-152, 1997. [20] The Grid: Blueprint for a New Computing Infrastructure. I. Foster and C. Kesselman, eds., Morgan-Kaufmann, 1999. [21] M.R. Garey and D.S. Johnson, Computers and Intractability, a Guide to the Theory of NP-Completeness. W.H. Freeman and Company, 1991. [22] Y.F. Hu and R.J. Blake, Load Balancing for Unstructured Mesh Applications Parallel and Distributed Computing Practices, vol. 2, no. 3, 1999. [23] S. Ichikawa and S. Yamashita, Static Load Balancing of Parallel PDE Solver for Distributed Computing Environment Proc. 13th Int'l Conf. Parallel and Distributed Computing Systems, pp. 399-405, [24] M. Kaddoura, S. Ranka, and A. 
Wang, Array Decomposition for Nonuniform Computational Environments J. Parallel and Distributed Computing, vol. 36, pp. 91-105, 1996. [25] A. Kalinov and A. Lastovetsky, Heterogeneous Distribution of Computations while Solving Linear Algebra Problems on Networks of Heterogeneous Computers Proc. Conf. High-Performance Computing and Networking (HPCN Europe), pp. 191-200, 1999. [26] D. Katabi, M. Handley, and C. Rohrs, Congestion Control for High Bandwidth-Delay Product Networks Proc. ACM 2002 Conf. Applications, Technologies, Architectures, and Protocols for Computer Comm. (SIGCOMM), pp. 89-102, 2002. [27] A. Legrand, H. Renard, Y. Robert, and F. Vivien, Load-Balancing Iterative Computations in Heterogeneous Clusters with Shared Communication Links Technical Report RR-2003-23, LIP, ENS Lyon, France, also available as INRIA Research Report 4800, Apr. 2003. [28] M. Nibhanupudi and B. Szymanski, BSP-Based Adaptive Parallel Processing High Performance Cluster Computing. Volume 1: Architecture and Systems, R. Buyya, ed., pp. 702-721, Prentice-Hall, 1999. [29] D. Nicol and P. Reynolds, “Optimal Dynamic Remapping of Data Parallel Computations,” IEEE Trans. Computers, vol. 39, no. 2, pp. 206-219, Feb. 1990. [30] D.M. Nicol and J.H. Saltz, "Dynamic Remapping of Parallel Computations with Varying Resource Demands," IEEE Trans. Computers., vol. 37, no. 9, pp. 1,073-1,087, Sept. 1988. [31] H. Renard, Y. Robert, and F. Vivien, Static Load-Balancing Techniques for Iterative Computations on Heterogeneous Clusters Proc. Euro-Par'03: Parallel Processing, pp. 148-159, 2003. [32] B.A. Shirazi, A.R. Hurson, and K.M. Kavi, Scheduling and Load Balancing in Parallel and Distributed Systems. IEEE Computer Science Press, 1995. [33] A.S. Tanenbaum, Computer Networks. Prentice Hall, 2003. [34] A.G. Taylor and A.C. Hindmarsh, User Documentation for KINSOL, a Nonlinear Solver for Sequential and Parallel Computers Technical Report UCRL-ID-131185, Lawrence Livermore Nat'l Laboratory, July [35] J. 
Watts and S. Taylor, “A Practical Approach to Dynamic Load Balancing,” IEEE Trans. Parallel and Distributed Systems, vol. 9, no. 3, pp. 235–248, Mar. 1998. [36] M.-Y. Wu, On Runtime Parallel Scheduling for Processor Load Balancing IEEE Trans. Parallel and Distributed Systems, vol. 8, no. 2, pp. 173-186, 1997. Index Terms: Scheduling, load-balancing, iterative computations, heterogeneous clusters. Arnaud Legrand, H?l?ne Renard, Yves Robert, Fr?d?ric Vivien, "Mapping and Load-Balancing Iterative Computations," IEEE Transactions on Parallel and Distributed Systems, vol. 15, no. 6, pp. 546-558, June 2004, doi:10.1109/TPDS.2004.10 Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/td/2004/06/l0546-abs.html","timestamp":"2014-04-20T19:31:51Z","content_type":null,"content_length":"59738","record_id":"<urn:uuid:69ffa2e3-5d8a-405e-afe7-42df97111a30>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
What can artificial life offer ecology? (abstract)
Noble, J., Clarke, D. and Mills, R. (2008) What can artificial life offer ecology? (abstract). In: Bullock, S., Noble, J., Watson, R. and Bedau, M. A. (eds.) Artificial Life XI: Proceedings of the Eleventh International Conference on the Simulation and Synthesis of Living Systems, MIT Press, Cambridge, MA, 790.

Artificial life is the simulation and synthesis of living systems, and ALife models show how interactions between simple entities give rise to complex effects. Ecology is the study of the distribution and abundance of organisms, and ecological modelling involves fitting a linear model to a large data set and using that model to identify key causal factors at work in a complex ecosystem. We are interested in whether the individual-based modelling approach of ALife can be usefully employed in ecology. ALife models are "opaque thought experiments" (Di Paolo et al., 2000, Proc. ALife VII, p.497). They show that a phenomenon can arise from a given set of assumptions in cases where the implication is not clear from intuition alone: e.g., that spatial structure in a population can lead to altruistic behaviour. This type of modelling can be useful to ecology by showing the plausibility of a novel concept or process, which in turn suggests new natural experiments and new forms of data to collect. However, we argue that ALife models can go beyond this "proof of concept" role and serve as a direct account of data in the same way that statistical models do. We focus on a typical problem from ecology: the effect of clearing powerline corridors through a forest on the local wildlife populations (Clarke et al., 2006, Wildlife Research, 33, p.615). The real data set in this case is complex and, of course, we don't know the true effects that underlie it.
We therefore generated a fictional data set that reflects aspects of the original problem while allowing complete control over the simulated environment. The idea is to construct a test case for looking at the relative success of different modelling approaches. We know the true picture because we generated the data, but which modelling approach will get closer to the truth? The fitting of generalized linear models as is conventional in ecology, or the use of individual-based simulations as in ALife? Statistical models are fitted using some variant of the method of maximum likelihood: given the data, which of the models in the family we're considering (e.g., a linear regression) makes the observed data most plausible? When dealing with simulations, however, it is difficult to establish that one model is a better fit to data than another. Simulations have many parameters, it may be difficult to determine a level of granularity at which the simulation output is supposed to "match" the data, and there will be no analytically tractable likelihood function. These problems are solved by the method of indirect inference (Gouriéroux et al., 1993, J. Applied Econometrics, 8, p.S85) in which an auxiliary model is fitted to both the real data and to the output from competing simulation models. The best simulation model is the one producing the closest match to the data in terms of fitted parameter values in the auxiliary model. Using indirect inference with our fictional data set we demonstrate that ALife simulation models can be fitted to realistic ecological data, that they can out-compete standard statistical approaches, and that they can thus be used in ecology for more than just conceptual exploration.
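The indirect-inference recipe described above can be sketched numerically. This is a toy illustration (not the authors' model, and the parameter values are invented): an auxiliary model — here just the sample mean and variance — is fitted to the observed data and to each candidate simulator's output, and the simulator whose fitted auxiliary parameters lie closest to the data's wins.

```python
import random
import statistics

random.seed(0)

def auxiliary(sample):
    """Auxiliary model: just the sample mean and variance."""
    return (statistics.mean(sample), statistics.variance(sample))

# "Observed" data -- secretly drawn from Normal(2, 1).
observed = [random.gauss(2.0, 1.0) for _ in range(2000)]
target = auxiliary(observed)

def simulator(mu):
    """A candidate simulation model (hypothetical parameterization)."""
    return [random.gauss(mu, 1.0) for _ in range(2000)]

def distance(mu):
    """Squared distance between fitted auxiliary parameters."""
    fitted = auxiliary(simulator(mu))
    return sum((a - b) ** 2 for a, b in zip(fitted, target))

best = min([0.0, 2.0], key=distance)   # candidate parameter values
print(best)  # -> 2.0: the simulator that matches the data wins
```

The point of the indirection is that the likelihood of the simulator is never needed — only the (tractable) fit of the auxiliary model.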
{"url":"http://eprints.soton.ac.uk/266738/","timestamp":"2014-04-18T10:42:26Z","content_type":null,"content_length":"30199","record_id":"<urn:uuid:18f3287f-baee-49a5-8f7b-f98bbe76fa01>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: nonlinear optimization problem with constraints
Replies: 7   Last Post: Sep 14, 2012 5:13 AM

nonlinear optimization problem with constraints
Posted: Sep 8, 2012 4:50 AM

Hello. I have a nonlinear optimization problem with constraints, and I really need help. I am using a software package to solve this problem. However, although the program finds a solution without an error (since the tolerance values are satisfied), when I check the constraint values, I see that some constraint values are not satisfactory (some constraints are really close to zero, which is satisfactory, but some are not close enough to zero). There are 5 unknowns and 4 constraints, so this system has infinitely many solutions. I just want the one that makes the objective function minimum. I don't have much of an optimization background. I just want to determine, among the solutions where the constraints are close enough to zero, which one makes the objective function minimum. As I mentioned, my priority is for the constraints to be satisfied satisfactorily. I don't know how to search for this subject. I studied some nonlinear optimization resources; however, they all terminate when the tolerance values are close to zero, not when each of the constraints is close to zero. Is there a way to control the closeness of the constraints to zero? If there is, I would be very glad for a recommendation of some resources. At least, if this subject has a special name in the literature, I want to learn it. Thanks a lot. Best regards.
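A keyword worth searching for is "constraint violation tolerance", and on the method side, "penalty" and "augmented Lagrangian" methods, which let you drive each constraint residual down explicitly by raising a penalty weight. A toy sketch on a made-up problem (minimize (x−2)² + (y−1)² subject to x + y = 1; not tied to any particular package):

```python
def solve_penalized(mu, iters=20000):
    """Minimize (x-2)^2 + (y-1)^2 + mu*(x+y-1)^2 by plain gradient descent."""
    x = y = 0.0
    lr = 1.0 / (2.0 + 4.0 * mu)        # safe step size for this quadratic
    for _ in range(iters):
        r = x + y - 1.0                # the equality-constraint residual
        gx = 2.0 * (x - 2.0) + 2.0 * mu * r
        gy = 2.0 * (y - 1.0) + 2.0 * mu * r
        x -= lr * gx
        y -= lr * gy
    return x, y

# Raising the penalty weight drives the constraint residual toward zero,
# at a rate you control -- unlike a solver's generic stopping tolerance.
for mu in (1.0, 10.0, 100.0):
    x, y = solve_penalized(mu)
    print(mu, abs(x + y - 1.0))        # residual shrinks as mu grows
```

The true constrained minimizer of this toy problem is (x, y) = (1, 0), and the penalized solutions approach it as the weight grows.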
Date     Subject                                              Author
9/8/12   nonlinear optimization problem with constraints      oercim@yahoo.com
9/8/12   Re: nonlinear optimization problem with constraints  Gordon Sande
9/8/12   Re: nonlinear optimization problem with constraints  oercim@yahoo.com
9/8/12   Re: nonlinear optimization problem with constraints  oercim@yahoo.com
9/10/12  Re: nonlinear optimization problem with constraints  Herman Rubin
9/8/12   Re: nonlinear optimization problem with constraints  Gordon Sande
9/14/12  Re: nonlinear optimization problem with constraints  oercim@yahoo.com
9/10/12  Re: nonlinear optimization problem with constraints  Robert H. Lewis
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2400998","timestamp":"2014-04-16T17:44:39Z","content_type":null,"content_length":"25540","record_id":"<urn:uuid:efbaa479-e100-4502-8e22-876a124bc164>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
“Hold Only That Pair of 2s?” Studying a Video Poker Hand with R

Whenever I tell people in my family that I study Statistics, one of the first questions I get from laypeople is “do you count cards?” A blank look comes over their face when I say “no.” Look, if I am at a casino, I am well aware that the odds are against me, so why even try to think that I can use statistics to make money in this way? Although I love numbers and math, the stuff flows through my brain all day long (and night long), every day. If the goal is to enjoy and have fun, I do not want to sit there crunching probability formulas in my head (yes that’s fun, but it is also work). So that leaves me at the video Poker machines enjoying the free drinks. Another positive about video Poker is that $20 can sometimes last a few hours. So it should be no surprise that I do not agree with using Poker to teach probability. Poker is an extremely superficial way to introduce such a powerful tool and gives the impression that probability is a way to make a quick buck, rather than an important tool in science and society. The only time that I have used Poker in teaching (besides when required) is to cover the hypergeometric distribution and sampling without replacement.

Since I took Intro Probability Theory, I have always wondered what to do in the following situation. Say a pair of cruddy low cards appears on the first draw. The game only awards money for pairs of jacks or better. If all I have in the hand is a pair of low cards and no face cards, my decision is easy: hold the pair of low cards. But what if there is at least one face card showing (no other pairs)? Pictorially, the dealt hand shows a low pair plus at least one face card (figure omitted).
This problem can be solved exactly by using combinatorics, conditional probability and expectation, but since a video poker game is basically a simulator (though likely biased), I wrote my own simulation. For the answer, scroll to the end! Data Structure In most card games, we would want to store the state of the game: the outstanding cards in the deck(s), and the hand(s) of each player. In standard video poker, there is one deck, and one player, so only the player hand needs to be recorded because every card in the deck is either in the hand, or it is not. One obvious way to represent the hand is as an array of denomination/suit tuples in an array. Unfortunately, this data structure requires other data structures to store the possible suits, and possible denominations. It is also more tedious to detect certain kinds of wins. For this simulation, I use a 13 x 4 matrix where each row is a different denomination, and each column is each of the four suits. This matrix allows us to easily see which cards are possible to be dealt. Additionally, this matrix, as well as vector-based languages such as R, make it easy to detect wins. Such a matrix looks like the following for the hand 2♠ 5♣ 8♥ 8♣ A♦ where Cij denotes a card, i is the denomination $i \in \{ 2, \ldots, 10\} \cup \{J, Q, K, A\}$ and j is the suit $j \in \{\heartsuit, \diamondsuit, \spadesuit, \clubsuit \}$ and H is the player’s hand in question. Poker wins are not disjoint. A three of a kind involving Jacks is also a pair of Jacks or better, etc. When checking wins, I start with the lowest paying win, and move up to Royal Flush, only keeping track of the highest win. Thus, this algorithm detects a four-of-a-kind involving Queens as Jacks or Better, two pairs of Queens, and a three-of-a-kind of Queens, but only counts it as the highest win, the four-of-a-kind. 1. Pair of Jacks or Better: a pair of Jacks, Queens, Kings or Aces. 
In A, this is simply the condition that at least one row in rows 10 through 13 has a row sum greater than 1. 2. Two pair: two pairs of anything. In A, this is the condition that at least two rows have a sum greater than 1. 3. Three of a kind: three of any card. In A, this is the condition that at least one row has a sum of at least 3. 4. Straight: all 5 cards can be permuted such that they form an ascending sequence: A, 2, 3, 4, 5, 6, 7, 8, 9, 10, J, Q, K, A. This case is interesting and will be discussed in a bit. 5. Flush: all 5 cards are of the same suit. In A, this is the condition that at least one column has a sum of at least 5. 6. Full House: one three-of-a-kind, and a pair of anything. In A, this is the condition that a row has sum 3, and another row has sum 2. 7. Four of a Kind: 4 of any card. In A, this is the condition that a row has sum 4. 8. Straight Flush: the 5 cards can be permuted to form an ascending sequence and are all of the same suit. In A, this is simply the condition that we have a straight and a flush in the same hand. 9. Royal Flush: a straight flush with the Ace as the high card. In A, this is simply the condition that we have a straight flush AND the sum of row 13 is 1. Of course, this “short circuit logic” only works for a game containing 5 cards. Also, note that under my scenario (a pair of low cards is dealt first), it is never possible to have a straight, flush, royal flush, or straight flush as the highest wins. Also, it is not possible to have Jacks or Better as the highest win because we already have one pair (low cards), and if we randomly are drawn a pair of Jacks or Better, we then have two pairs as the highest win. Detecting the Straight: In A, we have a straight when five successive rows have sum equal to 1. We can do this iteratively, but there is a better way. Note that if all of the row sums are 0 or 1, we can treat the vector of row sums as a binary number and convert it to its integer representation. 
Each binary number has 13 bits. If we let the 2-row be the zeroth power, then each straight corresponds to a run of five consecutive 1 bits, i.e., to an integer of the form $31 \cdot 2^k$.

Bug alert: It just occurred to me that there are many more wrap-around straights such as Q, K, A, 2, 3. This will be fixed this evening.

From basic computer science and number theory, every natural number can be written as the sum of distinct powers of 2, and the representation of such an integer is unique. Furthermore, the sum of n successive powers of 2 is divisible by $2^n - 1$. After some experimentation I came up with the following rule: if all of the row sums are 0/1 and the integer representation of this binary vector is divisible by $2^5 - 1 = 31$, then A is a straight. The only straight that does not fit this pattern is the wrap-around straight: J, Q, K, A, 2, which can be checked manually.

The Algorithm

1. Randomly generate a hand containing a pair of low cards (2-10) and at least one face card.
2. Hold the pair of low cards. Under strategy 2, hold one (and only one) of the face cards.
3. Discard the unheld cards from the deck and draw 2 or 3 cards at random from the same deck.
4. Check for wins.
5. Increment a win counter.
6. Repeat steps 1-5 tons of times, recording the percentage of the n games/hands played that yielded a win.

Results: Hold the Pair of Low Cards Only

My usual strategy is to always hold the low pair and take one face card along for the ride. That way, I hopefully match one of the two denominations I hold. My parents, on the other hand, always told me to hold the low pair only, because that gives one more card (degree of freedom) for a win. It turns out they were right. Each game consisted of 1,000 hands. A percentage of these hands yields a win. This percentage is a random variable, so I ran this simulation to play 1,000 games. The table below shows the distribution of the win percentages. Note that under strategy 1 (hold low pair only), all wins are more likely than under strategy 2!
Of course, the estimate in the last column is an average; the mean, in this case. The plot below shows the distribution of win percentages for both strategies.

The Code

The code for my simulation is below. Note that it can easily be modified for your own target hands of interest. In my simulation, certain functions were never used because certain winning hands were not possible.

DISCLAIMER: I did this for fun, and it is possible that there are bugs or problems with my code, algorithm or simulation. The results seem correct because empirically I seem to do about the same using either strategy, and from a gambling perspective, an 8% discrepancy is not likely to set off bells in the head.
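The original R source is not reproduced in this extract. As an illustrative substitute, here is a Python rendering of two core pieces described above — the 13×4 hand matrix and the binary straight test. One caveat worth noting: divisibility by 31 alone is necessary but not sufficient (rows {0,2,4,6,8} give 341 = 11·31 without being consecutive), so this sketch checks for an exact run of five ones via the lowest set bit.

```python
RANKS = ['2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K', 'A']
SUITS = ['hearts', 'diamonds', 'spades', 'clubs']

def hand_matrix(cards):
    """13x4 0/1 matrix: one row per denomination, one column per suit."""
    m = [[0] * 4 for _ in RANKS]
    for rank, suit in cards:
        m[RANKS.index(rank)][SUITS.index(suit)] = 1
    return m

def is_straight(row_sums):
    """Five consecutive rows with sum 1 (and nothing else in the hand).

    Treating the row sums as bits, a run of five ones is the integer
    31 * 2**k, i.e. the value divided by its lowest set bit is 31.
    """
    if any(s not in (0, 1) for s in row_sums) or sum(row_sums) != 5:
        return False
    v = sum(bit << k for k, bit in enumerate(row_sums))  # binary -> integer
    return v // (v & -v) == 31

hand = [('2', 'spades'), ('5', 'clubs'), ('8', 'hearts'),
        ('8', 'clubs'), ('A', 'diamonds')]
m = hand_matrix(hand)
row_sums = [sum(r) for r in m]        # the 8-row sums to 2: one pair
col_sums = [sum(c) for c in zip(*m)]  # a column sum of 5 would be a flush
print(row_sums, is_straight(row_sums))
```

As in the post, a wrap-around straight (the "bug alert" case) would still need a separate manual check.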
Holding the Jack as a 3rd card only decreases your chances, as your parents told you (and your Simulator discovered). But what about holding the Jack ONLY, compared to the pair of 2s? I know the win would be better with Two Pair, but I’m wondering about the chances for Any Win. So if you could recode your Simulator to answer that question, I would appreciate it. You state the question as: Hold the two low cards and deal, hoping for a three of a kind, or Hold the two low cards AND one of the face cards, hoping for a three of a kind, OR a pair of Jacks of Better. in these conditions, neither a straight, a flush, or a straight flush are possible. given that are no straights, flushes, straight flushes, or Royal Flushes, there is no difference between 2,2, 5 ,6 ,7, J and any other combination of <10, same <10, different <10, different<10, different 10 so no need to randomly select them. only need to look at all combination under that. actually =(46*45*44) + (46*45) would be all possible combinations, quite possible to get a complete number. Shouldn’t there be a 1 in row 13′s total in your matrix graphic? The numbers don’t add up to 5 without it. Now somebody needs to implement the algorithm for the optimal playing strategy as outlined here: http://wizardofodds.com/games/video-poker/strategy/jacks-or-better/9-6/optimal/
{"url":"http://www.bytemining.com/2012/01/hold-only-that-pair-of-2s-studying-a-video-poker-hand-with-r/","timestamp":"2014-04-16T22:52:20Z","content_type":null,"content_length":"91956","record_id":"<urn:uuid:e7cf84f0-fccd-4cde-859b-04b4a01b42ec>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
How tall is 56.9 inches?
You asked: How tall is 56.9 inches?
4 feet and 8.9 inches tall (assuming you meant the height 56.9 inches tall).
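The arithmetic behind the answer is just division by 12 inches per foot; a one-line check (Python, for illustration):

```python
inches = 56.9
feet, rem = divmod(inches, 12)   # divmod works on floats too
print(int(feet), "feet", round(rem, 1), "inches")  # -> 4 feet 8.9 inches
```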
{"url":"http://www.evi.com/q/how_tall_is_56.9inches","timestamp":"2014-04-18T23:19:16Z","content_type":null,"content_length":"66639","record_id":"<urn:uuid:c5aa98ba-64af-4fd0-8d02-ebbdc4a144de>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
Results for "Book Calculus of a Single Variable Ron Larson Bruce H Edwards" in Books
Featured Product ISBN13: 9780618879182. ISBN10: 0618879188. by Ron Larson, Robert Hostetler and Bruce H. Edwards. Published by Cengage Learning. Edition: 08 More at Textbooks.com FREE Shipping
Featured Product FREE Shipping This text combines the theoretical instruction of calculus with current best-practise strategies. ISBN13: 9781285338248. ISBN10: 1285338243. by Ron Larson and Bruce H. Edwards. Published by Cengage Learning. Edition: 10TH 14 More at Textbooks.com
This manual includes worked-out solutions to every odd-numbered exercise in the text.
{"url":"http://www.epinions.com/search/?search_string=Book+Calculus+of+a+Single+Variable+Ron+Larson+Bruce+H+Edwards&sb=1","timestamp":"2014-04-23T21:02:54Z","content_type":null,"content_length":"107830","record_id":"<urn:uuid:572eaea7-70e8-4f20-8820-ac604a1eff6a>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
A rectangle with height 8 and length 24 is wrapped around a cylinder with height 8. The rectangle perfectly covers the curved surface of the cylinder without overlapping itself at all. What is the volume of the cylinder?

A plane intersects a sphere, forming a circle that has area 24pi. If this plane is 5 units from the center of the sphere, then what is the surface area of the sphere?

A sphere is inscribed in a cylinder so that it is tangent to both bases of the cylinder, and tangent to the curved surface of the cylinder all the way around. If the volume of the cylinder is 54pi, then what is the volume of the sphere?
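For reference, all three problems resolve cleanly with the given numbers; a worked sketch (added here, not part of the original post):

```latex
% Problem 1: the rectangle's length becomes the circumference.
2\pi r = 24 \;\Rightarrow\; r = \tfrac{12}{\pi}, \qquad
V = \pi r^2 h = \pi\left(\tfrac{12}{\pi}\right)^{2} \cdot 8 = \tfrac{1152}{\pi}.

% Problem 2: for a plane at distance d from the center, R^2 = r_c^2 + d^2.
\pi r_c^2 = 24\pi \;\Rightarrow\; r_c^2 = 24, \qquad
R^2 = 24 + 5^2 = 49 \;\Rightarrow\; S = 4\pi R^2 = 196\pi.

% Problem 3: an inscribed sphere of radius r forces cylinder height 2r.
\pi r^2 (2r) = 54\pi \;\Rightarrow\; r^3 = 27 \;\Rightarrow\; r = 3, \qquad
V_{\text{sphere}} = \tfrac{4}{3}\pi r^3 = 36\pi.
```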
{"url":"http://www.mathisfunforum.com/post.php?tid=19749&qid=277645","timestamp":"2014-04-17T12:31:19Z","content_type":null,"content_length":"23231","record_id":"<urn:uuid:04661c91-05fb-4478-a8c8-a1b0ea87c69d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
trig identity question

August 12th 2010, 08:53 PM  #1  (Mar 2010)

need some help with this one please.. i need to prove that $\cot^2\theta - \cos^2\theta = \cot^2\theta\cos^2\theta$

i took the right hand side and tried to do a similar thing to my text book in proving the question... this is what I got:

$\cot^2\theta\cos^2\theta = \frac{\cos^2\theta - \cos^2\theta\sin^2\theta}{\sin^2\theta} = \frac{\cos^2\theta}{\sin^2\theta} - \frac{\cos^2\theta\sin^2\theta}{\sin^2\theta}$

$\Rightarrow$ RHS = LHS

first of all is this correct? and second of all what rule gives me my first line? many thanks

$\cot^2\theta - \cos^2\theta = \frac{\cos^2\theta}{\sin^2\theta} - \cos^2\theta$

When you simplify you get the first line.

$\cot^2\theta\cos^2\theta = \frac{\cos^2\theta\cos^2\theta}{\sin^2\theta} = \frac{\cos^2\theta(1 - \sin^2\theta)}{\sin^2\theta}$

Last edited by grgrsanjay; August 13th 2010 at 06:04 PM.

August 12th 2010, 09:18 PM  #2  Super Member Jun 2009
August 13th 2010, 05:30 PM  #3
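For the record, the whole identity in one clean chain — the "rule" behind the first line is just $\cot\theta = \cos\theta/\sin\theta$ together with $1 - \sin^2\theta = \cos^2\theta$:

```latex
\cot^2\theta\,\cos^2\theta
  = \frac{\cos^2\theta}{\sin^2\theta}\,\cos^2\theta
  = \frac{\cos^2\theta\,(1-\sin^2\theta)}{\sin^2\theta}
  = \frac{\cos^2\theta}{\sin^2\theta}
  - \frac{\cos^2\theta\,\sin^2\theta}{\sin^2\theta}
  = \cot^2\theta - \cos^2\theta
```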
{"url":"http://mathhelpforum.com/trigonometry/153566-trig-identity-question.html","timestamp":"2014-04-21T07:12:44Z","content_type":null,"content_length":"36299","record_id":"<urn:uuid:a264d0c6-fbff-42b5-97bb-8bef44ecc991>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Triple Vector Products

As in vector algebra there are two types of triple vector products, comparable with the scalar triple product (which gives the volume of a parallelepiped) and the vector triple product. There are differences due to the operator nature of ∇. Using operator rules and then including the vector rules, the divergence of a cross product becomes

∇ · (A × B) = B · (∇ × A) − A · (∇ × B),

but in the second term the operator must act on B. An important rule is similar to the vector triple product, namely

∇ × (∇ × A) = ∇(∇ · A) − ∇²A.

Prof. Alan Hood
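The ordinary vector triple product rule, a × (b × c) = b(a·c) − c(a·b), which the operator identity parallels, can be spot-checked numerically. This is a generic illustration of my own, not from the notes:

```python
# Numeric check of the ordinary ("BAC-CAB") vector triple product rule
# on one concrete triple of integer vectors (my own illustration).
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

a, b, c = [1, 2, 3], [-4, 0, 5], [2, -1, 1]
lhs = cross(a, cross(b, c))
rhs = [dot(a, c)*bi - dot(a, b)*ci for bi, ci in zip(b, c)]
assert lhs == rhs
```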
{"url":"http://www-solar.mcs.st-and.ac.uk/~alan/MT3601/Fundamentals/node10.html","timestamp":"2014-04-18T06:10:49Z","content_type":null,"content_length":"5737","record_id":"<urn:uuid:eea21819-25eb-4c68-ba3a-3b504a917ca1>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Difference between Continuity and Derivatives.

Hey. I am quite confused by continuity and derivatives. Both are finding the limits of a particular function as x approaches a. Then why is it that a graph that is continuous cannot be differentiable? If it is continuous, it means that the limit exists and so, it should be differentiable right?

"limit exists"? The limit you look at to determine if f(x) is continuous at x= a is [itex]\lim_{x\to a} f(x)[/itex], while the limit you look at to determine if f(x) is differentiable at x= a is [itex]\lim_{h\to 0} (f(a+h)- f(a))/h[/itex]. It is easy to show that if a function is differentiable at x= a, it must be continuous, but the other way is not true. For example, f(x)= |x| is continuous at x= 0 but not differentiable there.
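The standard counterexample can also be seen numerically; a small check of my own, not from the thread:

```python
# My own illustration: f(x) = |x| is continuous at 0, but the two
# one-sided difference quotients there disagree, so the derivative
# limit does not exist.
f = abs

# Continuity at 0: f(x) -> f(0) = 0 from both sides.
for h in (1e-3, 1e-6, 1e-9):
    assert abs(f(h) - f(0)) < 1e-2
    assert abs(f(-h) - f(0)) < 1e-2

# Differentiability fails: the difference quotient from the right is +1,
# while from the left it is -1.
h = 1e-9
right = (f(0 + h) - f(0)) / h
left = (f(0 - h) - f(0)) / (-h)
assert right == 1.0 and left == -1.0
```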
{"url":"http://www.physicsforums.com/showthread.php?t=342507","timestamp":"2014-04-16T16:15:53Z","content_type":null,"content_length":"29666","record_id":"<urn:uuid:0800bc40-f935-460c-9d43-20acae485d5a>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
Qingguo Wang, Dmitry Korkin, Yi Shang, "A Fast Multiple Longest Common Subsequence (MLCS) Algorithm," IEEE Transactions on Knowledge and Data Engineering, vol. 23, no. 3, pp. 321-334, March 2011. doi:10.1109/TKDE.2010.123

Finding the longest common subsequence (LCS) of multiple strings is an NP-hard problem, with many applications in the areas of bioinformatics and computational genomics. Although significant efforts have been made to address the problem and its special cases, the increasing complexity and size of biological data require more efficient methods applicable to an arbitrary number of strings. In this paper, we present a new algorithm for the general case of multiple LCS (or MLCS) problem, i.e., finding an LCS of any number of strings, and its parallel realization. The algorithm is based on the dominant point approach and employs a fast divide-and-conquer technique to compute the dominant points.
When applied to a case of three strings, our algorithm demonstrates the same performance as the fastest existing MLCS algorithm designed for that specific case. When applied to more than three strings, our algorithm is significantly faster than the best existing sequential methods, reaching up to 2-3 orders of magnitude faster speed on large-size problems. Finally, we present an efficient parallel implementation of the algorithm. Evaluating the parallel algorithm on a benchmark set of both random and biological sequences reveals a near-linear speedup with respect to the sequential algorithm.

Index Terms: Longest common subsequence (LCS), multiple longest common subsequence (MLCS), dynamic programming, dominant point method, divide and conquer, parallel processing, multithreading.
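For contrast with the dominant-point approach, here is the classical two-string dynamic-programming LCS, the quadratic baseline such algorithms improve on. This is a generic textbook sketch, not the paper's algorithm; for k strings the table becomes k-dimensional, which is what makes MLCS hard:

```python
# Classical O(m*n) dynamic-programming LCS for two strings -- the textbook
# baseline, NOT the dominant-point MLCS algorithm described in the paper.
def lcs(a: str, b: str) -> str:
    m, n = len(a), len(b)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # Backtrack to recover one longest common subsequence.
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i, j = i - 1, j - 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))
```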
{"url":"http://www.computer.org/csdl/trans/tk/2011/03/ttk2011030321-abs.html","timestamp":"2014-04-16T05:35:06Z","content_type":null,"content_length":"60652","record_id":"<urn:uuid:99aed7bf-7679-4a8c-a25c-b416728c31d4>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
Cyclic automorphism group not implies cyclic
From Groupprops

This article gives the statement and possibly, proof, of a non-implication relation between two group properties. That is, it states that every group satisfying the first group property (i.e., group whose automorphism group is cyclic) need not satisfy the second group property (i.e., cyclic group).

It is possible to have a group whose automorphism group is cyclic (i.e., the automorphism group is a cyclic group) but where the group itself is not a cyclic group.

Related facts

Further information: group of rational numbers with square-free denominators

Consider the group of rational numbers with square-free denominators, i.e., the group comprising those rational numbers that, when written in reduced form, have denominators that are square-free numbers: there is no prime number whose square divides the denominator.
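The defining property can be made concrete: square-free denominators survive addition (the lcm of square-free numbers is square-free), so the set really is closed under addition. A small numeric check of my own:

```python
from fractions import Fraction

# An integer n is square-free iff no prime square divides it.
def squarefree(n: int) -> bool:
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        if n % d == 0:
            n //= d
        d += 1
    return True

# Sample elements of the group (hypothetical choices for illustration)
# and check that every pairwise sum stays in the group.
xs = [Fraction(1, 2), Fraction(3, 10), Fraction(-5, 6), Fraction(7, 15)]
assert all(squarefree(x.denominator) for x in xs)
for a in xs:
    for b in xs:
        assert squarefree((a + b).denominator)
```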
{"url":"http://groupprops.subwiki.org/wiki/Aut-cyclic_not_implies_cyclic","timestamp":"2014-04-19T09:29:33Z","content_type":null,"content_length":"25361","record_id":"<urn:uuid:1f16a6a5-392f-4e60-8665-066019177ead>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
Discrete Distributions

The functions described here are among the most commonly used discrete univariate statistical distributions. You can compute their densities, means, variances, and other related properties. The distributions themselves are represented in symbolic form. Functions such as Mean, which give properties of statistical distributions, take the symbolic representation of the distribution as an argument. "Continuous Distributions" describes many continuous statistical distributions.

BernoulliDistribution[p]  Bernoulli distribution with mean p
BetaBinomialDistribution[α,β,n]  binomial distribution where the success probability is a BetaDistribution[α,β] random variable
BetaNegativeBinomialDistribution[α,β,n]  negative binomial distribution where the success probability is a BetaDistribution[α,β] random variable
BinomialDistribution[n,p]  binomial distribution for the number of successes that occur in n trials, where the probability of success in a trial is p
DiscreteUniformDistribution[{i[min],i[max]}]  discrete uniform distribution over the integers from i[min] to i[max]
GeometricDistribution[p]  geometric distribution for the number of trials before the first success, where the probability of success in a trial is p
HypergeometricDistribution[n,n[succ],n[tot]]  hypergeometric distribution for the number of successes out of a sample of size n, from a population of size n[tot] containing n[succ] successes
LogSeriesDistribution[θ]  logarithmic series distribution with parameter θ
NegativeBinomialDistribution[n,p]  negative binomial distribution with parameters n and p
PoissonDistribution[μ]  Poisson distribution with mean μ
ZipfDistribution[ρ]  Zipf distribution with parameter ρ

Discrete statistical distributions.

Most of the common discrete statistical distributions can be understood by considering a sequence of trials, each with two possible outcomes, for example, success and failure. The Bernoulli distribution BernoulliDistribution[p] is the probability distribution for a single trial in which success, corresponding to value 1, occurs with probability p, and failure, corresponding to value 0, occurs with probability 1-p.
The binomial distribution BinomialDistribution[n, p] is the distribution of the number of successes that occur in n independent trials, where the probability of success in each trial is p. The negative binomial distribution NegativeBinomialDistribution[n, p] for positive integer n is the distribution of the number of failures that occur in a sequence of trials before n successes have occurred, where the probability of success in each trial is p. The distribution is defined for any positive n, though the interpretation of n as the number of successes and p as the success probability no longer holds if n is not an integer. The beta binomial distribution BetaBinomialDistribution[α, β, n] is a mixture of binomial and beta distributions. A BetaBinomialDistribution[α, β, n] random variable follows a BinomialDistribution[n, p] distribution, where the success probability p is itself a random variable following the beta distribution BetaDistribution[α, β]. The beta negative binomial distribution BetaNegativeBinomialDistribution[α, β, n] is a similar mixture of the beta and negative binomial distributions. The geometric distribution GeometricDistribution[p] is the distribution of the total number of trials before the first success occurs, where the probability of success in each trial is p. The hypergeometric distribution HypergeometricDistribution[n, n[succ], n[tot]] is used in place of the binomial distribution for experiments in which the trials correspond to sampling without replacement from a population of size n[tot] with n[succ] potential successes. The discrete uniform distribution DiscreteUniformDistribution[{i[min], i[max]}] represents an experiment with multiple equally probable outcomes represented by integers i[min] through i[max]. The Poisson distribution PoissonDistribution[μ] describes the number of events that occur in a given time period where μ is the average number of events per period.
The terms in the series expansion of log(1-θ) about θ=0 are proportional to the probabilities of a discrete random variable following the logarithmic series distribution LogSeriesDistribution[θ]. The distribution of the number of items of a product purchased by a buyer in a specified interval is sometimes modeled by this distribution. The Zipf distribution ZipfDistribution[ρ], sometimes referred to as the zeta distribution, was first used in linguistics and its use has been extended to model rare events.

PDF[dist,x]  probability mass function at x
CDF[dist,x]  cumulative distribution function at x
InverseCDF[dist,q]  the largest integer x such that CDF[dist, x] is at most q
Quantile[dist,q]  the qth quantile
Mean[dist]  mean
Variance[dist]  variance
StandardDeviation[dist]  standard deviation
Skewness[dist]  coefficient of skewness
Kurtosis[dist]  coefficient of kurtosis
CharacteristicFunction[dist,t]  characteristic function
Expectation[f[x],x \[Distributed] dist]  expectation of f[x] for x distributed according to dist
Median[dist]  median
Quartiles[dist]  list of the quartiles of dist
InterquartileRange[dist]  difference between the first and third quartiles
QuartileDeviation[dist]  half the interquartile range
QuartileSkewness[dist]  quartile-based skewness measure
RandomVariate[dist]  pseudorandom number with specified distribution
RandomVariate[dist,dims]  pseudorandom array with dimensionality dims, and elements from the specified distribution

Some functions of statistical distributions.

Distributions are represented in symbolic form. PDF[dist, x] evaluates the mass function at x if x is a numerical value, and otherwise leaves the function in symbolic form whenever possible. Similarly, CDF[dist, x] gives the cumulative distribution and Mean[dist] gives the mean of the specified distribution. The table above gives a sampling of some of the more common functions available for distributions. For a more complete description of these functions, see the description of their continuous analogues in "Continuous Distributions".
Here is a symbolic representation of the binomial distribution for 34 trials, each having probability 0.3 of success.
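The same quantities can be computed outside Mathematica; a plain-Python sketch for BinomialDistribution[34, 0.3] (an analogy of my own, not Wolfram Language syntax):

```python
from math import comb

# Plain-Python analogues of PDF, CDF, Mean and Variance for
# BinomialDistribution[34, 0.3].  Illustration only; this is not
# Wolfram Language syntax.
def binom_pmf(n, p, k):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def binom_cdf(n, p, k):
    return sum(binom_pmf(n, p, j) for j in range(k + 1))

n, p = 34, 0.3
mean = n * p                 # Mean     -> 10.2
variance = n * p * (1 - p)   # Variance -> 7.14
# Sanity check: the probabilities over all outcomes sum to 1.
assert abs(sum(binom_pmf(n, p, k) for k in range(n + 1)) - 1) < 1e-12
```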
{"url":"http://reference.wolfram.com/mathematica/tutorial/DiscreteDistributions.html","timestamp":"2014-04-20T11:04:04Z","content_type":null,"content_length":"58441","record_id":"<urn:uuid:d8f9a14f-0228-44a9-a4e7-a126058db4ef>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
January 29: Ed Spiegel, Department of Astronomy, Columbia University Continuum Equations for Rarefied Gases Kirchhoff, who participated in the early development of the fluid dynamical equations, used them to study the propagation of sound waves. In the last century, it was found that his results for the phase speeds and the damping lengths of the waves were in disagreement with experiment when the mean free paths of the particles in the gas were longer than the acoustic wave lengths. One source of this problem is in the iterative character of the usual Chapman-Enskog development. For long mean free paths, the Navier-Stokes equations may be repaired by excluding the iterative steps from their derivation. The fluid equations that are then extracted from kinetic theory lead to phase speeds that agree with experimental results. But the damping lengths from the augmented N-S equations still do not agree with those from experiments in the long mean free path limit. Rational approximations of the leading terms of the fluid equations may repair this fault, provided attention is paid to the role of time in the dynamics, but they lack a certain inevitability. The acausal aspect of the standard fluid equations also is removed. Astrophysical topics that may relate to these issues will be mentioned as time permits.
{"url":"http://www.cims.nyu.edu/seminars/gsps/past_talks/Ed.Spiegel.html","timestamp":"2014-04-17T12:30:06Z","content_type":null,"content_length":"3059","record_id":"<urn:uuid:32f1910c-1be4-4097-b2fa-cc2ca2c8489d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Can the Quantum Zeno Effect be solely attributed to decoherence? Can the Quantum Zeno Effect be solely attributed to decoherence? In every single case? Is the concensus on this matter opinion, or rigorously tested fact in which every case can be attributed to decoherence? On a more well known note, can the supposed wavefunction collapse (which gives rise to the quantum zeno effect) be entirely attributed to decoherence? Can a wavefunction collapse without any decoherence? (Or maybe there's always some sort of decoherence if the particle exists in the universe?) Also, doesn't decoherence kind of disprove the many worlds interpretation (obviously it hasn't, or this interpretation wouldn't exist anymore..)? Are we supposed to believe that we have somehow ended up in this universe out of infinite potentials where decoherence exists and continues to exist?
{"url":"http://www.physicsforums.com/showpost.php?p=2421450&postcount=1","timestamp":"2014-04-21T04:42:33Z","content_type":null,"content_length":"9311","record_id":"<urn:uuid:b295c734-7cf2-484c-811a-3a33a117f4f9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
Elmwood Park, NJ Algebra 1 Tutor
Find an Elmwood Park, NJ Algebra 1 Tutor

...I have a bachelor's degree in physics. I have tutored trigonometry both privately and for the Princeton Review.
20 Subjects: including algebra 1, English, algebra 2, grammar

...Good luck with the studying! I studied Physics with Astronomy at undergraduate level, gaining a master's degree at upper 2nd class honors level (approx. 3.67 GPA equivalent). I then proceeded to complete a PhD in Astrophysics, writing a thesis on Massive star formation in the Milky Way Galaxy usin...
8 Subjects: including algebra 1, physics, geometry, algebra 2

...I am very familiar with how to structure essays, especially essays utilized on standardized tests. I am very familiar with statistics, and I passed the AP Statistics exam. I just completed AP Human Geography.
43 Subjects: including algebra 1, English, calculus, reading

My name is Alix and I'm an experienced tutor who is eager to work with you in studying for tests, schoolwork or extracurricular/adult learning! I've been tutoring for 15 years and can work with you on all levels of French or English, and Math up to pre-Calculus. I am also experienced with test prep (ISEEs, SSATs, SATs and SAT II) and have excellent references available.
35 Subjects: including algebra 1, English, reading, writing
21 Subjects: including algebra 1, reading, Spanish, writing Related Elmwood Park, NJ Tutors Elmwood Park, NJ Accounting Tutors Elmwood Park, NJ ACT Tutors Elmwood Park, NJ Algebra Tutors Elmwood Park, NJ Algebra 2 Tutors Elmwood Park, NJ Calculus Tutors Elmwood Park, NJ Geometry Tutors Elmwood Park, NJ Math Tutors Elmwood Park, NJ Prealgebra Tutors Elmwood Park, NJ Precalculus Tutors Elmwood Park, NJ SAT Tutors Elmwood Park, NJ SAT Math Tutors Elmwood Park, NJ Science Tutors Elmwood Park, NJ Statistics Tutors Elmwood Park, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/Elmwood_Park_NJ_algebra_1_tutors.php","timestamp":"2014-04-17T04:52:53Z","content_type":null,"content_length":"24159","record_id":"<urn:uuid:f16b2763-1ea4-4088-9432-81824c46bf30>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
N. L. Zhang and D. Poole (1996) "Exploiting Causal Independence in Bayesian Network Inference", Volume 5, pages 301-328. doi:10.1613/jair.305

A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The new formulation of causal independence lets us specify the conditional probability of a variable given its parents in terms of an associative and commutative operator, such as "or", "sum" or "max", on the contribution of each parent. We start with a simple algorithm VE for Bayesian network inference that, given evidence and a query variable, uses the factorization to find the posterior distribution of the query. We show how this algorithm can be extended to exploit causal independence. Empirical studies, based on the CPCS networks for medical diagnosis, show that this method is more efficient than previous methods and allows for inference in larger networks than previous algorithms.
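As background, the two factor operations at the heart of variable elimination, multiplying factors and summing a variable out, can be sketched for binary variables. This is a generic toy version of my own, not the paper's causal-independence-aware algorithm:

```python
from itertools import product

# Toy factor operations for variable elimination over binary variables
# (my own generic sketch, not the paper's VE extension).
# A factor maps assignment tuples (in a fixed variable order) to numbers.

def multiply(f1, vars1, f2, vars2):
    """Pointwise product of two factors."""
    out_vars = sorted(set(vars1) | set(vars2))
    out = {}
    for assign in product([0, 1], repeat=len(out_vars)):
        a = dict(zip(out_vars, assign))
        out[assign] = (f1[tuple(a[v] for v in vars1)]
                       * f2[tuple(a[v] for v in vars2)])
    return out, out_vars

def sum_out(f, vars_, var):
    """Eliminate var by summing it out of factor f."""
    keep = [v for v in vars_ if v != var]
    out = {}
    for assign, val in f.items():
        a = dict(zip(vars_, assign))
        key = tuple(a[v] for v in keep)
        out[key] = out.get(key, 0.0) + val
    return out, keep

# P(A) and P(B|A) for binary A, B; eliminate A to obtain P(B).
pA = {(0,): 0.6, (1,): 0.4}
pBgA = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}
joint, jv = multiply(pA, ['A'], pBgA, ['A', 'B'])
pB, _ = sum_out(joint, jv, 'A')
# P(B=0) = 0.6*0.9 + 0.4*0.2 = 0.62, and P(B=1) = 0.38.
```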
{"url":"http://jair.org/papers/paper305.html","timestamp":"2014-04-19T17:28:11Z","content_type":null,"content_length":"3856","record_id":"<urn:uuid:42d53a7e-c34a-4d06-82c9-ee3807fc88a8>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
Felix Hausdorff

Felix Hausdorff graduated from Leipzig in 1891, and then taught there until 1910 when he went to Bonn. Within a year of his appointment to Leipzig he was offered a post at Göttingen but he turned it down.

Hausdorff's main work was in topology and set theory. He introduced the concept of a partially ordered set, and from 1906 to 1909 he proved a series of results on ordered sets. In 1907, he introduced special types of ordinals in an attempt to prove Cantor's continuum hypothesis. He also posed a generalization of the continuum hypothesis. Hausdorff proved further results on the cardinality of Borel sets in 1916. Building on work by Fréchet and others, he created a theory of topological and metric spaces. Earlier results on topology fitted naturally into the framework set up by Hausdorff. In 1919, he introduced the notion of Hausdorff dimension, sometimes called fractal dimension. He also introduced the Hausdorff measure, and the term "metric space" is due to him.

Hausdorff worked at Bonn until 1935 when he was forced to retire by the Nazi regime. Although as early as 1932 he sensed the oncoming calamity of Nazism, he made no attempt to emigrate while it was still possible. As a Jew his position became more and more difficult. In 1941 he was scheduled to go to an internment camp but managed to avoid being sent. However by 1942 he could no longer avoid being sent to the internment camp and, together with his wife and his wife's sister, he committed suicide.
{"url":"http://www2.stetson.edu/~efriedma/periodictable/html/Hf.html","timestamp":"2014-04-18T05:30:52Z","content_type":null,"content_length":"2136","record_id":"<urn:uuid:ac459de7-98a8-4047-91e8-b0b3f01678c4>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Release of NumPy
Stéfan van der Walt stefan@sun.ac...
Wed Apr 16 04:45:02 CDT 2008

On 16/04/2008, Anne Archibald <peridot.faceted@gmail.com> wrote:
> I don't think of arrays as containers of anything but scalars, so I
> find this whole argument from intuition extremely strange.

I see now for the first time that Matrices can't have dims > 2. Grim.

I do think that ColumnVector and RowVector could be useful in general; some statements read more clearly, e.g. (for x an (N,)-array)

ColumnVector(x)

instead of

np.c_[x] # turn x into a column vector

And, while

np.dot(x, x)

is valid,

RowVector(x) * ColumnVector(x)

is clearer than

x = np.dot(np.r_[x], np.c_[x])

(which is a pattern I'd expect to find amongst linear algebra users)

The last expression also yields array([14]) instead of 14!

> My (draconian) suggestion would be to simply raise an exception when a
> matrix is indexed with a scalar. They're inherently two-dimensional;
> if you want a submatrix you should provide both indices (possibly
> including a ":"). If you actually want a subarray, as with an array,
> use ".A".

Your idea isn't that far out -- but I think we can provide the expected (ndarray-like) behaviour in this case.

> That said, I don't actually use matrices, so I don't get a vote.

Apparently, neither do I :) But I do get to modify

I also changed ProposedEnhancements into a Category page, that automatically accumulates all WikiPages tagged with

More information about the Numpy-discussion mailing list
Project 7 method 3 7. Third construction of the pentagram from a golden rectangle. Add a square onto the side of our original golden rectangle. The big rectangle is also a golden rectangle. Rotate GF about G and HC about H until they meet at a point which we will call I. At I draw a circle whose radius is AB and one whose radius is GH. Extend the line JK until it hits the large circle in two points which we will call N and O. Extend GM and HL until they meet this line also at N and O.
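For reference, the claim that the enlarged rectangle is again golden follows from the defining property of the golden ratio. A short check (added here; the original notes state the fact without proof), writing the original rectangle's sides as φ × 1:

```latex
% Let the original golden rectangle have sides \varphi \times 1,
% where \varphi = \tfrac{1+\sqrt{5}}{2} satisfies \varphi^2 = \varphi + 1.
% Attaching a square of side \varphi along the longer side yields a
% rectangle of sides (1+\varphi) \times \varphi, and
\[
\frac{1+\varphi}{\varphi} \;=\; \frac{\varphi^{2}}{\varphi} \;=\; \varphi ,
\]
% so the big rectangle has the same aspect ratio and is also golden.
```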
Numerology and nothing more

Numerology is a parascience that studies numbers. Numerology is also known as the Magic of Numbers. Its concept is very similar to that of an ancient science such as astrology. This parascience goes back to ancient times; even primitive tribes used numbers. People consciously or unconsciously obey numerology. A bright example of this is the number of superstitions such as: the number of flowers in a bouquet should be odd, you need to repeat something three times, the frightening number 13, and 666 as the number of the devil. There are many similar examples.

Basics of Numerology

In numerology all words, names and numbers can be reduced to single digits that correspond to various occult characteristics said to influence people's lives. This means that, according to numerology, each simple number corresponds to certain properties, images and concepts. Numerology is primarily used to determine character and natural abilities, to identify the strong and weak points of a personality, to forecast the future, to choose the best time for making serious decisions and actions, as well as to find a suitable profession, place of residence and many other factors. Sometimes people use this science to find friends, life and business partners that would satisfy their requirements.

History of Numerology

As we already mentioned, numerology has a very ancient origin. In ancient languages (such as Hebrew) letters had numeric values and were used to denote numbers. It is hard to say when exactly numerology began its existence, as in ancient times it was not distinguished as a separate area of knowledge. Scientists then did not divide science into areas; they studied numbers from both mathematical and philosophical points of view. That is why there was no reason to define the doctrine of numbers as a separate science.

Pythagorean Numerology

Despite its ancient origin, this science has become popular only recently.
The main provisions of Western numerology, in the form known to us today, were developed as early as the 6th century BC by the great mathematician and philosopher Pythagoras. It was he who integrated the mathematical systems of the Arabs, the Druids and the Egyptians with sciences studying human nature. Pythagoras was born in about 570 BC; he traveled around the world and, having returned, founded in Southern Italy a philosophical society, the Pythagorean School. There mostly natural sciences, such as arithmetic, geometry and astronomy, were taught. Besides, important discoveries were made there. Pythagoras discovered that the four musical intervals known at that time could be expressed in ratios of 1 to 4. He assumed that if music could be expressed through figures, everything material could also be expressed through numbers.

Kabbalah Numerology

Numerology as a science was of particular importance in the Kabbalah, where its most developed form was Gematria. Kabbalists expanded the Pythagorean concept; they used magic squares of numbers for a variety of purposes. In the 19th century scientists examined the nature of light, electricity and magnetism, and the occult values previously assigned to numbers came to be attributed to vibrations of energy. The numerology known to us today prefers the Pythagorean theory, namely a simplified numerical and alphabetical code.

Reduction of numbers to figures

There are many different systems used for reducing numbers to single digits (figures). The most basic, convenient and popular way is to add together all the digits of the number. If the sum is equal to or exceeds 10, the digits of that sum are added together in turn. This process is continued until you get a number from 1 to 9. Using this method you can reduce any number, such as a date of birth, a phone number and so on.
From the mathematical point of view this process replaces the original number by its digital root, which equals the remainder after division by 9 (with a remainder of 0 corresponding to 9 for a nonzero number). From the point of view of numerology, time moves in cycles from 1 to 9. Through the centuries and decades each new year brings a new number. Days and months within the year can also be divided into cycles.

Numerology on NumerologyLife.com

On our site we have provided basic information on numerology. You can see the basic numerology numbers and perform calculations for the Destiny Card, Partner Compatibility and the Pythagorean Square. These calculations will help you become more deeply acquainted with numerology and provide plenty of useful information, which every person needs to create a complete picture of their world.

Did you like it? Share it with friends! ;)
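The digit-sum reduction described above is the classical "digital root". A minimal sketch (added here for illustration; the function names are my own, not from the original site) showing both the repeated digit-sum and the remainder shortcut:

```python
def digital_root(n: int) -> int:
    """Repeatedly sum the decimal digits of a positive integer
    until a single digit from 1 to 9 remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

def digital_root_mod9(n: int) -> int:
    """The remainder shortcut: for n > 0 the digital root equals
    1 + (n - 1) % 9, i.e. n mod 9 with a remainder of 0 mapped to 9."""
    return 1 + (n - 1) % 9

print(digital_root(1984))  # 1+9+8+4 = 22, then 2+2 = 4
```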
Combinatorics Multiplication Principles
January 30th 2013, 08:40 AM
Combinatorics Multiplication Principles
Hi, in my book the multiplication principle is stated as follows. Suppose a procedure can be broken into m successive ordered stages, with r1 different outcomes in the first stage, r2 different outcomes in the second stage, ... and rm different outcomes in the m-th stage. If the number of outcomes at each stage is independent of the choices in the previous stages, and if the composite outcomes are all distinct, then the procedure has r1*r2*...*rm different composite outcomes.
But all the multiplication principle states theoretically is that |R1 x R2 x ... x Rm| = |R1|*|R2|*...*|Rm|, so when you use it don't you have to prove there is a bijection between the thing that you are counting and some cross product of sets? Why is this always skipped?
For example, if you want to count the number of ways to put r distinct balls in n distinct boxes, with no restrictions on the number of balls in a box, traditionally you break it up into choosing the box for each ball. There are n boxes for the first ball, n for the second, etc., so the answer is n^r. But technically don't you have to show there is a bijection between the possible arrangements of the balls in the boxes and the cross product of the set of boxes with itself r times? I guess the purpose of the first part of the principle is to set up a bijection, but should you always trust it? Can it be proven that it always creates a bijective pairing?
January 31st 2013, 02:23 PM
Re: Combinatorics Multiplication Principles
But the thing you are counting IS some cartesian (cross?) product of sets! E.g. the set of possible arrangements of the balls in the boxes IS the cartesian product of r sets of n positions.
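The bijection in the balls-and-boxes example can be made concrete: an arrangement of r distinct balls in n distinct boxes is exactly an r-tuple of box labels, so the set of arrangements is literally a Cartesian product. A small enumeration check (my own illustration, not from the thread):

```python
from itertools import product

n, r = 3, 2  # 3 distinct boxes, 2 distinct balls

# Each arrangement <-> the tuple (box of ball 1, box of ball 2, ...),
# i.e. an element of {0, ..., n-1}^r; the identity map is the bijection.
arrangements = list(product(range(n), repeat=r))

print(len(arrangements))  # n**r = 9
```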
Proceedings of the Southeastern Conference on Combinatorics, Graph Theory, and Computing, Volume 4
Utilitas Mathematica Pub - Combinatorial analysis
Partial contents: L. Lovász, p. 3; W. Mills, p. 23; J. S. Wallis, p. 53; 19 other sections not shown.
Department of Mathematics and Statistics
Associate Professor
Department of Mathematics and Statistics
Stephen F. Austin State University
Office: Math Building 316
Phone: 936-468-1704
E-mail: judsontw@sfasu.edu
Web Site: http://faculty.sfasu.edu/judsontw/
About Dr. Judson:
Dr. Judson is interested in high school and university mathematics education in the United States and Japan, the effects of lesson study on teaching practice, and how new teachers learn to understand their students. He has written an open-source textbook and is interested in the Sage mathematical software. He also studies complete filtered Lie algebras, the algebraic objects corresponding to pseudogroups and transitive differential geometries.
Degrees Earned:
Ph.D., University of Oregon, Eugene, Mathematics (1984)
M.A., University of Oregon, Eugene, Mathematics (1979)
B.S., University of Illinois, Urbana, Mathematics (1975)
Office Hours for Spring 2014:
Monday: 10-11am, 6-7pm (AARC)
Tuesday: 10-11am
Wednesday: 10-11am, 6-7pm (AARC)
Thursday: (none)
Friday: 8-9am, 10-11am
Last updated: 1/28/2014
An overlapping Schwarz preconditioner for a spectral element atmospheric model on the cubed-sphere
Stephen Thomas
National Center for Atmospheric Research
Spectral element formulations of the atmospheric 2-D shallow-water equations on the cubed-sphere are described. The equations are written in generalized curvilinear coordinates using contravariant/covariant components, following Rancic, Purser and Mesinger (1996). A semi-implicit time discretization results in a Helmholtz problem for the pressure. The Laplacian operator is approximated by the L_2 pseudo-Laplacian arising in the P_N/P_{N-2} spectral element formulation of the incompressible Stokes problem. The two-level overlapping Schwarz preconditioner of Fischer and Tufo (1998), based on the fast diagonalization method (FDM) and a scalable coarse grid solver, is extended to generalized curvilinear coordinates. To obtain a separable operator for the linear finite-element tensor-product approximation within each spectral element, the minimum of the inverse metric tensor and the maximum of its determinant are employed. Convergence rates and parallel CPU timings on an IBM SP are compared against a block-Jacobi preconditioner.
Safe Haskell: Safe-Inferred

class Monad solver => Solver solver where
  -- | add a constraint to the current state, and return whether the
  --   resulting state is consistent
  add :: Constraint solver -> solver Bool
  run :: solver a -> a
  -- | mark the current state, and return its label
  mark :: solver (Label solver)
  -- | mark the current state as discontinued, yet return a label that
  --   is usable n times
  markn :: Int -> solver (Label solver)
  -- | go to the state with given label
  goto :: Label solver -> solver ()

Instances:
  Solver OvertonFD
  FDSolver s => Solver (FDInstance s)
  (Monoid w, Solver s) => Solver (WriterT w s)
    -- WriterT decoration of a solver; useful for producing statistics
    -- during solving

class Solver solver => Term solver term where

Instances:
  Term OvertonFD FDVar
  FDSolver s => Term (FDInstance s) ModelCol
  FDSolver s => Term (FDInstance s) ModelBool
  FDSolver s => Term (FDInstance s) ModelInt
  (Monoid w, Term s t) => Term (WriterT w s) t
The multinomial distribution on rooted labeled forests - J. COMBINATORIAL THEORY A, 1998
"... Various enumerations of labeled trees and forests, including Cayley's formula n^(n-2) for the number of trees labeled by [n], and Cayley's multinomial expansion over trees, are derived from the following coalescent construction of a sequence of random forests (R_n, R_{n-1}, ..., R_1 ..."
Cited by 38 (18 self)
Various enumerations of labeled trees and forests, including Cayley's formula n^(n-2) for the number of trees labeled by [n], and Cayley's multinomial expansion over trees, are derived from the following coalescent construction of a sequence of random forests (R_n, R_{n-1}, ..., R_1) such that R_k has uniform distribution over the set of all forests of k rooted trees labeled by [n]. Let R_n be the trivial forest with n root vertices and no edges. For n >= k >= 2, given that R_n, ..., R_k have been defined so that R_k is a rooted forest of k trees, define R_{k-1} by addition to R_k of a single edge picked uniformly at random from the set of n(k - 1) edges which, when added to R_k, yield a rooted forest of k - 1 trees. This coalescent construction is related to a model for a physical process of clustering or coagulation, the additive coalescent in which a system of masses is subject to binary coalescent collisions, with each pair of masses of , 1998
"... Given an arbitrary distribution on a countable set S consider the number of independent samples required until the first repeated value is seen. Exact and asymptotic formulae are derived for the distribution of this time and of the times until subsequent repeats. Asymptotic properties of the repeat ..."
Cited by 26 (14 self)
Given an arbitrary distribution on a countable set S consider the number of independent samples required until the first repeated value is seen.
Exact and asymptotic formulae are derived for the distribution of this time and of the times until subsequent repeats. Asymptotic properties of the repeat times are derived by embedding in a Poisson process. In particular, necessary and sufficient conditions for convergence are given and the possible limits explicitly described. Under the same conditions the finite-dimensional distributions of the repeat times converge to the arrival times of suitably modified Poisson processes, and random trees derived from the sequence of independent trials converge in distribution to an inhomogeneous continuum random tree. (Research supported in part by N.S.F. Grants DMS 92-24857, 94-04345, 92-24868 and 97-03691.) 1 Introduction. Recall the classical birthday problem: given that each day of the year is equally likely as a possible birthday, and that birth... , 1998
"... Extensions of binomial and multinomial formulae due to Abel, Cayley and Hurwitz are related to the probability distributions of various random subsets, trees, forests, and mappings. For instance, an extension of Hurwitz's binomial formula is associated with the probability distribution of the random ..."
Cited by 13 (12 self)
Extensions of binomial and multinomial formulae due to Abel, Cayley and Hurwitz are related to the probability distributions of various random subsets, trees, forests, and mappings. For instance, an extension of Hurwitz's binomial formula is associated with the probability distribution of the random set of vertices of a fringe subtree in a random forest whose distribution is defined by terms of a multinomial expansion over rooted labeled forests which generalizes Cayley's expansion over unrooted labeled trees. Contents 1 Introduction 2 Research supported in part by N.S.F.
Grant DMS97-03961. 2 Probabilistic Interpretations 5; 3 Cayley's multinomial expansion 11; 4 Random Mappings 14; 4.1 Mappings from S to S 15; 4.2 The random set of cyclic points 18; 5 Random Forests 19; 5.1 Distribution of the roots of a p-forest 19; 5.2 Conditioning on the , 2001
"... Various random combinatorial objects, such as mappings, trees, forests, and subsets of a finite set, are constructed with probability distributions related to the binomial and multinomial expansions due to Abel, Cayley and Hurwitz. Relations between these combinatorial objects, such as Joyal's b ..."
Cited by 13 (9 self)
Various random combinatorial objects, such as mappings, trees, forests, and subsets of a finite set, are constructed with probability distributions related to the binomial and multinomial expansions due to Abel, Cayley and Hurwitz. Relations between these combinatorial objects, such as Joyal's bijection between mappings and marked rooted trees, have interesting probabilistic interpretations, and applications to the asymptotic structure of large random trees and mappings. An extension of Hurwitz's binomial formula is associated with the probability distribution of the random set of vertices of a fringe subtree in a random forest whose distribution is defined by terms of a multinomial expansion over rooted labeled forests. (Research supported in part by N.S.F. Grants DMS 97-03961 and DMS-0071448.) , 1999
"... We introduce a family of probability distributions on the space of trees with I labeled vertices and possibly extra unlabeled vertices of degree 3, whose edges have positive real lengths. Formulas for distributions of quantities such as degree sequence, shape, and total length are derived. An interpr ..."
Cited by 11 (9 self)
We introduce a family of probability distributions on the space of trees with I labeled vertices and possibly extra unlabeled vertices of degree 3, whose edges have positive real lengths. Formulas for distributions of quantities such as degree sequence, shape, and total length are derived. An interpretation is given in terms of sampling from the inhomogeneous continuum random tree of Aldous and Pitman (1998). Key words and phrases: continuum tree, enumeration, random tree, spanning tree, weighted tree, Cayley's multinomial expansion.
- Combinatorics, Probability and Computing, 1998
"... Hurwitz's extension of Abel's binomial theorem defines a probability distribution on the set of integers from 0 to n. This is the distribution of the number of non-root vertices of a fringe subtree of a suitably defined random tree with n + 2 vertices. The asymptotic behaviour of this distribution i ..."
Cited by 5 (5 self)
Hurwitz's extension of Abel's binomial theorem defines a probability distribution on the set of integers from 0 to n. This is the distribution of the number of non-root vertices of a fringe subtree of a suitably defined random tree with n + 2 vertices. The asymptotic behaviour of this distribution is described in a limiting regime where the distribution of the delabeled fringe subtree approaches that of a Galton-Watson tree with a mixed Poisson offspring distribution. 1 Introduction and statement of results. Hurwitz [10] discovered the following identity of polynomials in n + 2 variables x, y and z_s, s ∈ [n] := {1, ..., n}, which reduces to the binomial expansion of (x + y)^n when z_s ≡ 0 (research supported in part by N.S.F. Grant DMS97-03961):

  ∑_{A ⊆ [n]} x (x + z_A)^{|A| - 1} (y + z_Ā)^{|Ā|} = (x + y + z_[n])^n   (1)

where the sum is over all 2^n subsets A of [n], with the notations z_A := ∑_{s ∈ A} z_s, |A| for the number of elements of A, and Ā := [n] \... , 2001
"...
This paper presents a systematic approach to the discovery, interpretation and verification of various extensions of Hurwitz's multinomial identities, involving polynomials defined by sums over all subsets of a finite set. The identities are interpreted as decompositions of forest volumes define ..."
Cited by 2 (0 self)
This paper presents a systematic approach to the discovery, interpretation and verification of various extensions of Hurwitz's multinomial identities, involving polynomials defined by sums over all subsets of a finite set. The identities are interpreted as decompositions of forest volumes defined by the enumerator polynomials of sets of rooted labeled forests. These decompositions involve the following basic forest volume formula, which is a refinement of Cayley's multinomial expansion: for R ⊆ S the polynomial enumerating out-degrees of vertices of rooted forests labeled by S whose set of roots is R, with edges directed away from the roots, is (∑_{r ∈ R} x_r)(∑_{s ∈ S} x_s)^{|S| - |R| - 1}
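Cayley's formula n^(n-2), cited repeatedly in the entries above, is easy to check computationally via the Prüfer correspondence between labeled trees on n vertices and sequences in [n]^(n-2). The following sketch (my own addition; `prufer_to_tree` is a name of my choosing implementing the standard decoding algorithm) verifies the count for small n:

```python
from itertools import product

def prufer_to_tree(seq, n):
    """Decode a Prufer sequence over vertices 0..n-1 into the edge
    set of the corresponding labeled tree (standard algorithm)."""
    degree = [1] * n
    for v in seq:
        degree[v] += 1
    edges = []
    for v in seq:
        # attach v to the smallest remaining leaf
        u = min(i for i in range(n) if degree[i] == 1)
        edges.append((min(u, v), max(u, v)))
        degree[u] -= 1
        degree[v] -= 1
    u, w = [i for i in range(n) if degree[i] == 1]
    edges.append((u, w))
    return frozenset(edges)

n = 4
trees = {prufer_to_tree(s, n) for s in product(range(n), repeat=n - 2)}
print(len(trees))  # 16 = 4**(4-2), matching Cayley's formula
```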
Peoria, AZ Precalculus Tutor
Find a Peoria, AZ Precalculus Tutor
...I have completed various digital photography classes and training programs, and I own my own photography business. I have a degree in Mechanical Engineering and I have been working in the IT field for over 20 years. I have worked as a computer programmer using Visual.
7 Subjects: including precalculus, photography, computer programming, SQL
...I believe in figuring out (quickly!!) what a student's strong points are and then working with those points. Confidence is great for dealing with study skills! Many times, study skills problems stem from being overwhelmed by the task at hand rather than a problem with the task itself.
58 Subjects: including precalculus, reading, chemistry, English
...I find it much more productive to take the student out of the home environment, where there are multiple distractions, so we can focus and make the most of our time together. I received my B.A. from Yale University (double major in Economics and Japanese Studies) at 20 years old and a law degree (J.D.) from U.C. Berkeley, Boalt Hall, at 23.
21 Subjects: including precalculus, reading, calculus, geometry
...This is where you really start getting into core math topics needed in later math classes. You learn all about exponents, lines, parabolas, and a lot of the cartesian plane (graphing). Biology is so unique because we can observe this topic in our everyday lives. It is nature in its most basic form.
21 Subjects: including precalculus, chemistry, calculus, physics
...As a tutor with many years of math experience I find there are two common areas people struggle with when it comes to geometry. The first is visualizing the problem and understanding how to create an equation from the given problem, and the second is struggling with previous algebra skills that may...
20 Subjects: including precalculus, calculus, computer programming, C
Fermat's theorem (mathematics)

Fermat's theorem, also known as Fermat's little theorem and Fermat's primality test, in number theory, the statement, first given in 1640 by French mathematician Pierre de Fermat, that for any prime number p and any integer a such that p does not divide a (the pair are relatively prime), p divides exactly into a^p − a. Although a number n that does not divide exactly into a^n − a for some a must be a composite number, the converse is not necessarily true. For example, let a = 2 and n = 341; then a and n are relatively prime and 341 divides exactly into 2^341 − 2. However, 341 = 11 × 31, so it is a composite number (a special type of composite number known as a pseudoprime). Thus, Fermat's theorem gives a test that is necessary but not sufficient for primality.

As with many of Fermat's theorems, no proof by him is known to exist. The first known published proof of this theorem was by Swiss mathematician Leonhard Euler in 1736, though a proof in an unpublished manuscript dating to about 1683 was given by German mathematician Gottfried Wilhelm Leibniz. A special case of Fermat's theorem, known as the Chinese hypothesis, may be some 2,000 years old. The Chinese hypothesis, which replaces a with 2, states that a number n is prime if and only if it divides exactly into 2^n − 2. As proved later in the West, the Chinese hypothesis is only half right.
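The pseudoprime example above is easy to verify directly with modular exponentiation. A small sketch (an editorial addition; `passes_fermat_base` is a name of my choosing):

```python
def passes_fermat_base(a: int, n: int) -> bool:
    """Fermat's test in the form used above: does n divide a**n - a?"""
    return pow(a, n, n) == a % n

# 341 = 11 * 31 is composite, yet it passes the base-2 test,
# so the test is necessary but not sufficient for primality.
print(passes_fermat_base(2, 341), 341 == 11 * 31)
```

Most composites fail the test for some base (e.g. n = 9 fails for a = 2), which is what makes it useful as a quick necessary check.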
Third Weekly Report GSoC 2012 - Clash of Assumptions

After a weekend of debugging I can proudly say that I am able to run toy examples using my implementation of SO-SVM for the multiclass classification use case \o/

Just for the record, and for the fun of its discovery, I am going to write about the last bug I fixed in my code. As commented in the previous posts, part of the project deals with solving an optimization problem, a QP in particular. The fact is that I could run the code just fine: no more failing ASSERTs popping up, nor segmentation faults. However, something weird was going on with the solution. The weight vector $\vec{w}$ was zero for every problem instance. After double-checking with MATLAB's function quadprog that the correct solution for the problem was effectively non-trivial, I tried adding some lines here and there to ensure that the parameters given to MOSEK were the correct ones. All the matrices and vectors were ok. After some more lines to print out the bounds of the problem, I found it. The variables of the optimization problem were all fixed to zero!

At the beginning I just assumed that if no bound is given for a variable, then MOSEK would consider that variable free, i.e. that it may take values within $(-\infty, \infty)$. However, this is false. If no bound is given, then the reality is that MOSEK fixes the value of the variable to zero. I still don't get the point of including variables in the optimization vector whose values are fixed... but that's another story. For the next time I will try to remember to ensure that the assumptions I make are correct or, even better, to RTFM :)

My next objective is to try out the application with bigger examples. Unfortunately my MOSEK license file only allows me to solve problems with up to 300 variables or constraints, which currently prevents me from doing tests with training data bigger than 17 bidimensional training examples!
For questions involving single events, the formula for simple probability is sufficient. For questions involving multiple events, the answer combines the probabilities for each event in ways that may seem counter-intuitive. The following strategy is excellent for acquiring a better feel for probability questions involving multiple events or for making a quick guess if time is short. We will focus on questions involving two events.
• If two events have to occur together, generally an "and" is used. Take a look at Statement 1: "I will only be happy today if I get email and win the lottery." The "and" means that both events are expected to happen together.
• If both events do not necessarily have to occur together, an "or" may be used, as in Statement 2: "I will be happy today if I win the lottery or have email."
Consider Statement 1. Your chances of getting email may be relatively high compared to your chances of winning the lottery, but if you expect both to happen, your chances of being happy are slim. Like placing all your bets at a race on one horse, you've decreased your options, and therefore you've decreased your chances. The odds are better if you have more options, say if you choose horse 1 or horse 2 or horse 3 to win. In Statement 2, we have more options; in order to be happy we can either win the lottery or get email.
The issue here is that if a question states that event A and event B must occur, you should expect that the probability is smaller than the individual probabilities of either A or B. If the question states that event A or event B must occur, you should expect that the probability is greater than the individual probabilities of either A or B. This is an excellent strategy for eliminating certain answer choices. These two types of probability are formulated as follows:
Probability of A and B
P(A and B) = P(A) × P(B). In other words, the probability of A and B both occurring is the product of the probability of A and the probability of B (this assumes A and B are independent).
Probability of A or B

P(A or B) = P(A) + P(B). In other words, the probability of A or B occurring is the sum of the probability of A and the probability of B (this assumes A and B cannot both occur). If there is a probability of A and/or B occurring, then you must subtract the overlap. Look at the following examples.

Example 4

If a coin is tossed twice, what is the probability that on the first toss the coin lands heads and on the second toss the coin lands tails?

a) 1/6
b) 1/3
c) 1/4
d) 1/2
e) 1

First note the "and" in between event A (heads) and event B (tails). That means we expect both events to occur together, and that means fewer options, a less likely occurrence, and a lower probability. Expect the answer to be less than the individual probabilities of either event A or event B, so less than 1/2. Therefore, eliminate d and e.

Next we follow the rule P(A and B) = P(A) × P(B). If event A and event B have to happen together, we multiply individual probabilities: 1/2 × 1/2 = 1/4. Answer c is correct.

NOTE: Multiplying probabilities that are less than 1 (or fractions) always gives an answer that is smaller than the probabilities themselves.

Example 5

If a coin is tossed twice, what is the probability that it will land either heads both times or tails both times?

Note the "or" in between event A (heads both times) and event B (tails both times). That means more options, more choices, and a higher probability than either event A or event B individually.

To figure out the probability for event A or B, consider all the possible outcomes of tossing a coin twice: heads, heads; tails, tails; heads, tails; tails, heads. Since only one coin is being tossed, the order of heads and tails matters. Heads, tails and tails, heads are sequentially different and therefore distinguishable and countable events. We can see that the probability for event A is 1/4 and that the probability for event B is 1/4.
We expect a greater probability given more options, and therefore we can eliminate choices a, b and c, since these are all less than or equal to 1/4. Now we use the rule to get the exact answer: P(A or B) = P(A) + P(B). If either event A or event B can occur, the individual probabilities are added: 1/4 + 1/4 = 2/4 = 1/2. Answer d is correct.

NOTE: We could have used simple probability to answer this question. The total number of outcomes is 4: heads, heads; tails, tails; heads, tails; tails, heads, while the desired outcomes number 2. The probability is therefore 2/4 = 1/2.

The following chart summarizes the "and's" and "or's" of probability:

│ Probability │ Formula     │ Expectation              │
│ P(A and B)  │ P(A) × P(B) │ Lower than P(A) or P(B)  │
│ P(A or B)   │ P(A) + P(B) │ Higher than P(A) or P(B) │
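Both coin-toss examples can be verified by brute-force enumeration. A short Python sketch (not part of the original lesson; added for illustration) lists the four equally likely outcomes and counts the desired ones:

```python
from fractions import Fraction
from itertools import product

# All equally likely outcomes of two fair coin tosses: HH, HT, TH, TT.
outcomes = list(product("HT", repeat=2))

# Example 4: heads on toss 1 AND tails on toss 2 (independent events).
p_and = Fraction(sum(o == ("H", "T") for o in outcomes), len(outcomes))

# Example 5: heads both times OR tails both times (mutually exclusive events).
p_or = Fraction(sum(o[0] == o[1] for o in outcomes), len(outcomes))

print(p_and)  # 1/4, matching P(A) * P(B) = 1/2 * 1/2
print(p_or)   # 1/2, matching P(A) + P(B) = 1/4 + 1/4
```

Counting outcomes directly is the "simple probability" method from the NOTE above; the multiplication and addition rules give the same answers without enumerating.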
{"url":"http://www.800score.com/guidec8bview1b.html","timestamp":"2014-04-20T01:18:31Z","content_type":null,"content_length":"31347","record_id":"<urn:uuid:04d6acbc-3426-47bc-afb8-f122f475ce43>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
Given: In ∆ACB, c² = a² + b². Prove: ∠ACB is a right angle.

Complete the flow chart proof with missing reasons to prove that ∠ACB is a right angle. Which pair of reasons correctly completes this proof?

Answers

Reason #1 - Transitive Property of Equality
Reason #2 - SSS Postulate

Reason #1 - Reflexive Property of Equality
Reason #2 - SAS Postulate

Reason #1 - Transitive Property of Equality
Reason #2 - SAS Postulate

Reason #1 - Reflexive Property of Equality
Reason #2 - SSS Postulate
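The proof being completed is the converse of the Pythagorean theorem: construct a right triangle with legs a and b, equate its hypotenuse with c by the transitive property, then conclude by a triangle-congruence postulate. The law of cosines gives a quick numerical sanity check that c² = a² + b² forces the angle opposite c to be 90°. A sketch with hypothetical side lengths (none of these numbers come from the original figure):

```python
import math

def angle_at_C(a, b, c):
    """Angle opposite side c, in degrees, from the law of cosines:
    c^2 = a^2 + b^2 - 2ab*cos(C)  =>  C = arccos((a^2 + b^2 - c^2) / (2ab))."""
    return math.degrees(math.acos((a * a + b * b - c * c) / (2 * a * b)))

# Hypothetical sides satisfying c^2 = a^2 + b^2 (3-4-5 triangle):
print(angle_at_C(3, 4, 5))  # 90.0 -- the angle is right
# When c^2 != a^2 + b^2, the angle is not right:
print(round(angle_at_C(3, 4, 6), 2))  # obtuse, since 36 > 9 + 16
```

This only illustrates the geometric fact; the flow-chart proof itself must argue it synthetically, which is where the transitive property and a congruence postulate come in.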
{"url":"http://openstudy.com/updates/5051f486e4b02b4447c1518a","timestamp":"2014-04-20T06:16:52Z","content_type":null,"content_length":"77464","record_id":"<urn:uuid:072abb27-185b-4030-82c9-65750022a603>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
minimal Kan fibration

A kind of fibration in the context of homotopy theory. A Kan fibration $p : E \to B$ is called a minimal Kan fibration if for all cells $x, y : \Delta[n] \to E$ with $p(x) = p(y)$, the condition $\partial_i x = \partial_i y$ for all $i \neq k$ implies that $\partial_k x = \partial_k y$.

A useful (if old) survey article which contains a summary of early results on these is:

• E. Curtis, Simplicial Homotopy Theory, Advances in Math. 6 (1971), 107–209.

Revised on June 21, 2012 23:21:37 by Urs Schreiber
{"url":"http://www.ncatlab.org/nlab/show/minimal+Kan+fibration","timestamp":"2014-04-21T02:00:12Z","content_type":null,"content_length":"14341","record_id":"<urn:uuid:34352673-d86a-45b0-bb5b-603b6d015841>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: STATA help for GLM misleading?

From: Joseph Coveney <jcoveney@bigplanet.com>
To: Statalist <statalist@hsphsun2.harvard.edu>
Subject: Re: st: STATA help for GLM misleading?
Date: Mon, 14 Nov 2005 17:32:46 +0900

Rijo John wrote (excerpted):

The STATA help for GLM with the family(binomial) link(logit) option says "For family(binomial) link(logit) models, we recommend using the logistic command in preference to glm. Both produce the same answers, but logistic provides useful post-estimation commands". [Cut] This is actually misleading. When we have variables that are fractions which can take any values between 1 and 0, including 1 and zero, using family(binomial) link(logit) along with a robust option is certainly different from logistic regression. And the Stata help as written above sort of asserted that using family(binomial) link(logit) is going to give the same result as logistic, giving us the impression that Stata treats all the non-zero values in the dependent variable as 1, thus resulting in a (0,1) Bernoulli distribution. But for me family(binomial) link(logit) with a robust option gave a better result than the logistic command.

Stata's help for -glm- and -logistic- is not misleading: you'll see that you get identical results for fractional logistic regression in the example below. Cut and paste it into Stata's do-file editor to run it. Just be aware that -logistic- only recognizes zeros and nonzeroes for the response (as the help file for -logistic- states), so you need to set up your dataset to make sense to -logistic-. See the do-file below for how. This shouldn't be taken as an endorsement of fractional logistic regression for your data. Alternatives were suggested last month by other list members.

Joseph Coveney

set more off
set seed `=date("2005-11-14", "ymd")'
set obs 200
generate byte group = _n > _N / 2
generate float proportion = uniform() // No claim as to distribution
glm proportion group, family(binomial) link(logit) robust eform nolog
rename proportion proportion1
generate float proportion0 = 1 - proportion1
generate int row = _n
quietly reshape long proportion, i(row) j(positive)
logistic positive group [pweight = proportion], cluster(row)

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
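One way to see why the reshape trick in the do-file is legitimate: a fractional response y contributes the Bernoulli quasi-log-likelihood y·log(p) + (1 − y)·log(1 − p), which is exactly what the two expanded 0/1 rows contribute once they carry pweights y and 1 − y. A stdlib-Python check of that identity (the numbers are arbitrary illustrations, not Stata output):

```python
import math

def fractional_ll(y, p):
    # Bernoulli quasi-log-likelihood of one fractional response y in (0, 1)
    # at fitted probability p.
    return y * math.log(p) + (1 - y) * math.log(1 - p)

def expanded_ll(y, p):
    # Same observation after the "reshape long": a positive row with
    # pweight y and a negative row with pweight 1 - y.
    rows = [(1, y), (0, 1 - y)]
    return sum(w * (math.log(p) if pos else math.log(1 - p)) for pos, w in rows)

# The two contributions agree for any (y, p), so the weighted -logistic-
# fit maximizes the same objective as the fractional -glm- fit.
for y, p in [(0.37, 0.62), (0.9, 0.5), (0.05, 0.11)]:
    assert math.isclose(fractional_ll(y, p), expanded_ll(y, p))
```

This sketches the equivalence of the objective functions only; the clustered/robust variance estimators are what make the reported standard errors match as well.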
{"url":"http://www.stata.com/statalist/archive/2005-11/msg00504.html","timestamp":"2014-04-18T08:41:35Z","content_type":null,"content_length":"6767","record_id":"<urn:uuid:fa804697-e538-4a80-82f7-77049c7a603c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus-Help.com: Survive calculus class! - Problems

Problem of the Week

Dormant for almost a decade, the Problem of the Week is back with a vengeance for the 2011-2012 school year! Each Saturday morning, a new problem and its complete solution will be posted here. The problems range in difficulty, but they are tuned to the A.P. Calculus AB school year. In other words, the content should match topics you recently learned in class. Think of it as a way to keep your calculus skills fresh over the weekend.

2011-2012 School Year
{"url":"http://www.calculus-help.com/problems-of-the-week/","timestamp":"2014-04-20T00:49:22Z","content_type":null,"content_length":"23729","record_id":"<urn:uuid:774a7d80-9878-4624-81b8-ca81463091aa>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
Please help me with yet ANOTHER infinite series question

I have not seen hundreds of such sums, or hundreds of ANY infinite sums for that matter. I am attending a first-year calculus course in which we only just started sums last Friday. I would not post a question to Physics Forums unless I cannot figure it out.

It's a geometric series, which, in the US at least, is covered in high school math and probably in junior high. You've likely seen it before but have since forgotten about it.

With all due respect, you must be wrong here. There is a single answer field in which to input a response. Since the answer changes as x varies between -10 and -4, this is clearly not what is expected. This is backed by a few of my fellow engineering majors who have tried what you mentioned above and had the wrong answer.

The question is asking you for the sum of the series at values of x for which it converges. There's an infinite number of possible values, so obviously you're supposed to come up with an expression S(x) that depends on x. Your interpretation can't possibly be right because S(-10) and S(-4) don't exist.

But regardless of whether it is asking that, I am finding it difficult to find the sum when x = -10. I get the sum of (-1)^k from k = 0 to k = n as n approaches infinity. So I get the sum 1 - 1 + 1 - 1 + ... My initial thought was that s = 0, because all the terms would cancel out. Then I realized it could also equal 1, because as n approaches infinity the terms would always alternate between 1 and -1. So I couldn't decide between the two.

When you say an infinite series converges, it means the sequence of partial sums s_n converges. In this case, you find s_n flips back and forth between 1 and 0 (not -1), so the series doesn't converge.
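The behavior described in the thread can be made concrete. Assuming the series is geometric with ratio r = (x + 7)/3 — an assumption inferred from the stated interval of convergence (-10, -4), on which |r| < 1, with r = -1 at x = -10 as the poster describes — the partial sums at the endpoint oscillate and never settle, while inside the interval they approach 1/(1 - r):

```python
# At x = -10 the assumed ratio is r = (x + 7) / 3 = -1, so terms are (-1)**k
# and the partial sums s_n flip between 1 and 0: the sequence has no limit.
r = (-10 + 7) / 3
partial_sums, s = [], 0.0
for k in range(6):
    s += r ** k
    partial_sums.append(s)
print(partial_sums)  # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]

# Inside the interval of convergence, e.g. x = -6, r = 1/3 and the partial
# sums approach the geometric-series value 1 / (1 - r) = 3/2.
r = (-6 + 7) / 3
s = sum(r ** k for k in range(200))
print(abs(s - 1 / (1 - r)) < 1e-12)  # True
```

This is exactly the point of the last reply: convergence means the sequence of partial sums has a single limit, which fails at x = -10 even though two subsequences (the even and odd partial sums) individually converge.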
{"url":"http://www.physicsforums.com/showthread.php?t=577974","timestamp":"2014-04-20T11:31:07Z","content_type":null,"content_length":"56350","record_id":"<urn:uuid:1831e1d9-6701-4852-b4ff-ef5977e2f847>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
The Stresses Shown In The Figure Act At A Point ... | Chegg.com

Image text transcribed for accessibility: The stresses shown in the figure act at a point in a stressed body. Determine the normal and shear stresses at this point on the inclined plane shown. σ_n = -131.1 MPa; τ_nt = -19.15 MPa

Mechanical Engineering
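The figure itself is not reproduced in the transcription, so the quoted answers cannot be re-derived here. Problems of this type use the standard plane-stress transformation equations; a hedged sketch follows, with hypothetical input stresses that are NOT the values from the figure:

```python
import math

def transform_plane_stress(sx, sy, txy, theta_deg):
    """Normal and shear stress on a plane whose outward normal makes an
    angle theta with the x-axis (standard 2-D stress transformation)."""
    t2 = 2.0 * math.radians(theta_deg)
    sn = (sx + sy) / 2 + (sx - sy) / 2 * math.cos(t2) + txy * math.sin(t2)
    tnt = -(sx - sy) / 2 * math.sin(t2) + txy * math.cos(t2)
    return sn, tnt

# Hypothetical stress state in MPa (illustrative only):
sn, tnt = transform_plane_stress(sx=-100.0, sy=-50.0, txy=30.0, theta_deg=40.0)
print(round(sn, 1), round(tnt, 1))  # -49.8 29.8

# Sanity check: at theta = 0 the plane normal is the x-axis itself,
# so the routine must return (sx, txy) unchanged.
assert transform_plane_stress(-100.0, -50.0, 30.0, 0.0) == (-100.0, 30.0)
```

A useful invariant for checking hand work: the normal stresses on two perpendicular planes always sum to σ_x + σ_y.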
{"url":"http://www.chegg.com/homework-help/questions-and-answers/stresses-shown-figure-act-point-stressed-body-determine-normal-shear-stresses-point-inclin-q1299314","timestamp":"2014-04-24T10:25:11Z","content_type":null,"content_length":"21185","record_id":"<urn:uuid:1c5133ec-3da0-470d-a72a-1a12d3c997f2>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of Redundancy_(information_theory)

Redundancy in information theory is the number of bits used to transmit a message minus the number of bits of actual information in the message. Informally, it is the amount of wasted "space" used to transmit certain data. Data compression is a way to reduce or eliminate unwanted redundancy, while error-detecting codes (such as checksums) are a way of adding desired redundancy for purposes of error detection when communicating over a noisy channel of limited capacity.

Quantitative definition

In describing the redundancy of raw data, recall that the rate of a source of information is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the most general case of a stochastic process, it is

$r = \lim_{n \to \infty} \frac{1}{n} H(M_1, M_2, \dots, M_n),$

the limit, as n goes to infinity, of the joint entropy of the first n symbols divided by n. It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a memoryless source is simply $H(M)$, since by definition there is no interdependence of the successive messages of a memoryless source.

The absolute rate of a language or source is simply

$R = \log |\mathbb{M}|,$

the logarithm of the cardinality of the message space, or alphabet. (This formula is sometimes called the Hartley function.) This is the maximum possible rate of information that can be transmitted with that alphabet. (The logarithm should be taken to a base appropriate for the unit of measurement in use.) The absolute rate is equal to the actual rate if the source is memoryless and has a uniform distribution.

The absolute redundancy can then be defined as

$D = R - r,$

the difference between the absolute rate and the rate.
The quantity $\frac{D}{R}$ is called the relative redundancy and gives the maximum possible data compression ratio, when expressed as the percentage by which a file size can be decreased. (When expressed as a ratio of original file size to compressed file size, the quantity $R : r$ gives the maximum compression ratio that can be achieved.) Complementary to the concept of relative redundancy is efficiency, defined as $\frac{r}{R}$, so that $\frac{r}{R} + \frac{D}{R} = 1$. A memoryless source with a uniform distribution has zero redundancy (and thus 100% efficiency), and cannot be compressed.

Other notions of redundancy

A measure of redundancy between two variables is the mutual information or a normalized variant. A measure of redundancy among many variables is given by the total correlation.

Redundancy of compressed data refers to the difference between the expected compressed data length of $n$ messages $L(M^n)$ (or expected data rate $L(M^n)/n$) and the entropy $nr$ (or entropy rate $r$). (Here we assume the data is ergodic and stationary, e.g., a memoryless source.) Although the rate difference $L(M^n)/n - r$ can be made arbitrarily small as $n$ is increased, the actual difference $L(M^n) - nr$ cannot, although it can be theoretically upper-bounded by 1 in the case of finite-entropy memoryless sources.
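The quantities above are easy to compute for a concrete memoryless source. The sketch below (the distribution is chosen purely for illustration) computes the rate r = H(M), the absolute rate R, and the redundancy figures, with logarithms in base 2 so that all units are bits:

```python
import math

def entropy_bits(probs):
    # Shannon entropy H(M) in bits; this is the rate r of a memoryless source.
    return -sum(p * math.log2(p) for p in probs if p > 0)

probs = [0.5, 0.25, 0.125, 0.125]   # hypothetical 4-symbol source
r = entropy_bits(probs)             # rate: 1.75 bits/symbol
R = math.log2(len(probs))           # absolute rate: log |alphabet| = 2 bits
D = R - r                           # absolute redundancy: 0.25 bits

print(D / R)  # relative redundancy: 0.125
print(r / R)  # efficiency: 0.875, and D/R + r/R == 1 as stated above
```

A uniform distribution over the same four symbols would give r = R = 2 bits, hence zero redundancy, matching the remark that such a source cannot be compressed.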
{"url":"http://www.reference.com/browse/wiki/Redundancy_(information_theory)","timestamp":"2014-04-17T12:33:06Z","content_type":null,"content_length":"73251","record_id":"<urn:uuid:c640443a-b1e7-4a97-846a-32a52b040ccc>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Einstein's 'spooky action' common in large quantum systems, mathematicians find Entanglement is a property in quantum mechanics that seemed so unbelievable and so lacking in detail that, 66 years ago this spring, Einstein called it "spooky action at a distance." But a mathematician at Case Western Reserve University and two of his recent PhD graduates show entanglement is actually prevalent in large quantum systems and have identified the threshold at which it The finding holds promise for the ongoing push to understand and take advantage of the property. If harnessed, entanglement could yield super high-speed communications, hack-proof encryptions and quantum computers so fast and powerful they would make today's supercomputers look like adding machines in comparison. The mathematicians don't tell us how entanglement works, but were able to put parameters on the property by combining math concepts developed for a number of different applications during the last five decades. In a nutshell, the researchers connected the math to properties of quantum mechanics—the otherworldly rules that best apply to atomic and subatomic particles—to describe physical "There have been indications that large subgroups within quantum systems are entangled," said Stanislaw Szarek, mathematics professor at Case Western Reserve and an author of the study. "Our contribution is to find out exactly when entanglement becomes ubiquitous." Szarek worked with Guillaume Aubrun, assistant professor of mathematics at Université Claude Bernard Lyon 1, France, and Deping Ye, assistant professor of mathematics and statistics at Memorial University of Newfoundland, Canada. Their work is published online in the Early View section of Communications on Pure and Applied Mathematics. The behaviors of materials down at the level of atoms are often strange, but entanglement borders on our concepts of sorcery. 
For example, if two electrons spinning in opposite directions are entangled, when one changes direction, the other immediately changes, whether the electrons are side by side, across the room or at opposite ends of the universe. Other particles, such as photons, atoms and molecules, can also become entangled, but taking advantage of the property requires more than a pair or handful. Szarek, Aubrun and Ye focused on large quantum systems—large groups of particles that have the potential for use in our world. They found that, in systems in a random state, two subsystems that are each less than one-fifth of the whole are generally not entangled. Two subsystems that are each greater than one-fifth of the whole typically are entangled. In other words, in a system of 1,000 particles, two groups that are smaller than 200 each typically won't be entangled. Two groups larger than 200 each typically will. Further, the research shows, "the change is abrupt when you reach the threshold of about 200," Szarek said. The team also calculated the threshold for positive partial transpose, or PPT, a property related to entanglement. If the property is violated, entanglement is present. "From these two perspectives, the calculations are very precise." Szarek said. Harsh Mathur, a physics professor at Case Western Reserve whom Szarek consulted to better understand the science, said, "Their point is entanglement is hard to create from a small system, but much easier in a large system." "And the thing that Einstein thought was so weird is the rule rather than the exception," Mathur added. The researchers used mathematics where analysis, algebra and geometry meet, Szarek said. The math applies to hundreds, thousands or millions of dimensions. "We put together several things from different parts of mathematics, like a puzzle, and adapted them," he said. "These are mathematical tools developed largely for aesthetical reasons, like music." 
The ideas—concepts developed in the 1970s and 1980s and more recently— turned out to be relevant to the emerging quantum information science. "We have found there is a way of computing and quantifying the concept of quantum physics and related it to some calculable mathematical quantities," Szarek continued. "We were able to identify features and further refine the description, which reduces the questions about the system to calculable and familiar looking mathematical quantities." So, if entanglement is more common in large quantum systems, why aren't they being used already? "In the every day world, it's hard to access or create large quantum mechanical systems to do meaningful quantum computations or for communications or other uses," Mathur said. "You have to keep them isolated or they decohere and behave in a classical manner. But this study gives some parameters to build on." Szarek will continue to investigate mathematics and quantum information theory while attending the Isaac Newton Institute for Mathematical Sciences in Cambridge, England in the fall. He will work with computer scientists and quantum physicists during a semester-long program called Mathematical Challenges in Quantum Information. He received a $101,000 National Science Foundation grant to 4 / 5 (2) May 28, 2013 So it should be easier to entangle 1 Million qubits as opposed to 16 --- is the problem of creating quantum memory that we started too low and tried to build up. we should have tried stupidly huge and it might have worked -- HA thats rich -- i hope it works 2.4 / 5 (7) May 28, 2013 So, if entanglement is more common in large quantum systems, why aren't they being used already? Because the steps between energy levels gradually decrease with increasing number of objects in system and they become interchangeable with background quantum noise itself. From this moment the behavior of entangled quantum system isn't distinguishable from classical one anymore. 
I presume, everyone of you did see such a diagram before. This is for example the reason, why the common light bulb cannot serve as a source of individual photons: too many electrons at the metal surface are entangled together, so that the spectrum of light bulb is continuous. My feeling from the discussion above quoted is, that this mathematician (Harsh Mathur) doesn't understand, what he actually describes. 1.4 / 5 (9) May 28, 2013 The mathematicians don't tell us how entanglement works, but were able to put parameters on the property by combining math concepts developed for a number of different applications during the last five decades. In a nutshell, the researchers connected the math to properties of quantum mechanics—the otherworldly rules that best apply to atomic and subatomic particles—to describe physical reality. Quantum mechanics is 'physics' which was created by God (nature), while 'mathematic' is an abstract invent by man! Physics is the science of nature which works via some physical mechanism (not just mathematical formulas), something like the one as follow… 1 / 5 (4) May 28, 2013 "Our contribution is to find out exactly when entanglement becomes ubiquitous." Where the observation of life stops. Now what? vlaaing peerd 4.7 / 5 (3) May 29, 2013 Quantum mechanics is 'physics' which was created by God (nature), while 'mathematic' is an abstract invent by man! Everything in nature is as it is, both QM manifestations as well as mathematics, humanity just labeled it to describe it. 3 / 5 (2) May 29, 2013 The math applies to hundreds, thousands or millions of dimensions. The measurement applies to 3+1 dimensions. And the result is the article report above. Where the observation of life stops. Now what? - DW Easy. Redefine (your) the definition of life. 
3 / 5 (2) May 29, 2013 If harnessed, entanglement could yield super high-speed communications, hack-proof encryptions and quantum computers so fast and powerful they would make today's supercomputers look like adding machines in comparison. - hack-proof encryption: yes (with the proviso that any hardware in between is still subject to the usual forms of eavesdropping. The datastream and the key exchange can be made proof against undetected interception) - quantum computers so fast and powerful: yes (for certain kinds of problems) - super high-speed communications: no. Entanglement is not good for information transmission, as you can't encode anything on it. So it should be easier to entangle 1 Million qubits as opposed to 16 --- is the problem of creating quantum memory Qbits aren't just memory. You use them for the calculations also. It's not like in ordinary computers where you have memory and CPU as separate entities. Having many qbits is worthless without precise control. 1 / 5 (2) Jun 05, 2013 Is it possible that magnetic monopoles do exist but two opposite pole pairs are always entangled ??
{"url":"http://phys.org/news/2013-05-einstein-spooky-action-common-large.html","timestamp":"2014-04-16T17:27:23Z","content_type":null,"content_length":"82517","record_id":"<urn:uuid:abe295a0-10ac-45ae-b221-d626bda65161>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
Dearborn, MI Math Tutor

Find a Dearborn, MI Math Tutor

I have a passion for helping people understand mathematical principles. For the last 7 years I have successfully tutored students in math from 4th grade through college. I have helped students raise their ACT test scores.

8 Subjects: including geometry, algebra 1, algebra 2, SAT math

I have Bachelor’s and Master’s degrees in biology, a minor in chemistry and I worked in the industry as a chemist for over 5 years. I have tutored for about 2 years now, taught a supplemental genetics class in the past and love helping others to understand and love science the way I do. I am smart...

10 Subjects: including geometry, reading, chemistry, zoology

...I have been teaching special education for the past five years. I taught two years in a middle school setting and three in a high school setting. My masters is in special education with an endorsement in learning disabilities.

37 Subjects: including precalculus, English, reading, SAT math

...I am certified by the state of Michigan to teach grades 6-12. I enjoy helping people learn the language of mathematics. This is my first year as a teacher since I’m a career changer.

4 Subjects: including algebra 1, algebra 2, geometry, prealgebra

...I started teaching at Macomb Community College in 2010. I taught Introduction to Philosophy and Ethics at Macomb until 2012. I started teaching logic, ethics, and philosophy and humanities courses at Oakland Community College Southfield in 2011 and I still currently teach here.
15 Subjects: including logic, English, writing, grammar
{"url":"http://www.purplemath.com/dearborn_mi_math_tutors.php","timestamp":"2014-04-19T07:14:32Z","content_type":null,"content_length":"23641","record_id":"<urn:uuid:902a1f8a-4c38-43a8-94f7-d5c3e03d808e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the relationship between motivic cohomology and the theory of motives? up vote 35 down vote favorite I will begin by giving a rough sketch of my understanding of motives. In many expositions about motives (for example, www.jmilne.org/math/xnotes/MOT102.pdf), the category of motives is defined to be a category such that every Weil cohomology (viewed as a functor) factors through it. This does not define the category uniquely, nor does it imply that it exists. There are two concrete candidates that we can construct. The category of Chow motives, which is well-defined, is trivially a category of motives. However, it has some bad properties. For example, it is not Tannakian. The second candidate is the category of numerical motives. It too is well-defined, however it is only conjectured that it is category of motives (i.e., that every Weil cohomology factors through this category). This conjecture is closely related to (or perhaps even equivalent to?) Grothendieck's standard conjectures. That would be desirable, because the category of numerical motives is very well-behaved. Furthermore, the original motivation for motives is that Grothendieck has proven that if the category of numerical motives is indeed a category of motives, then the Weil conjectures are correct. So far, even though I a murky on many of the details, I follow the storyline. Where does "motivic cohomology" (in the sense of, for example, www.claymath.org/library/monographs/cmim02.pdf) fit into this story? I know that motivic cohomology has something to do with Milnor K-theory, but that is more or less where my understanding of the context of motivic cohomology ends. If motives are already an abstract object that generalizes cohomology, what does motivic cohomology signify? What is the motivation for defining it? What is the context in which it arose? 
motives motivic-cohomology ag.algebraic-geometry add comment 1 Answer active oldest votes Classically, Grothendieck's motives are only the pure motives, meaning abelian-ish things which capture the (Weil-cohomology-style) $H^i$ of smooth, projective varieties. To see the relationship with motivic cohomology, one should extend the notion of motive so that non-pure (i.e. "mixed") motives are allowed, these mixed motives being abelian-ish things which capture the $H^i$ of arbitrary varieties. The main novelty with mixed motives is that the (conjectural) abelian category of them is not semi-simple -- in fact every mixed motive should be a (generally non-trivial) iterated extension of pure motives, these extensions essentially coming from compactification and resolution of singularities, as in the story of mixed Hodge Then once one thinks of mixed motives, a natural direction of study (or speculation, as the case may be...) is that of determining all possible extensions (or iterated extensions) between two motives. And that's what motivic cohomology is, essentially: the study of these Ext groups. More formally, every variety $X$ should determine an object $C(X)$ in the bounded derived category of mixed motives, collecting together all the various mixed motives $H^i(X)$, and the $(i,j)^{th}$ motivic cohomology of $X$ is (up to twisting conventions) the abelian group of maps from the unit object to $C(X)$ \ $[i](j)$ (the $j^th$ Tate twist of the $i^th$ shift of $C(X)$) in the derived category of mixed motives. Now, there are a few points to make here. The first is that, though the above motivation and definition of motivic cohomology rely on an as-yet-conjectural abelian category of mixed motives, one can, independently of any conjectures, define a triangulated category which, as far as anyone can tell, behaves as if it were the bounded derived category of this conjectural abelian category. 
The most popular such definition, because of its simplicity and relative workability, is Voevodsky's. So the basic theory and many basic results on motivic cohomology are Another thing to say is that, as always, matters with motives are illuminated by considering realization functors. Let me single out the $\ell$-adic etale realization, since its extension from pure to mixed motives is straightforward (unlike for Hodge structures): any mixed motive, just as any pure motive, yields a finite-dimensional $\ell$-adic vector space with a up vote continuous action of the absolute Galois group of our base field. It then "follows" (in our conjectural framework... or actually follows, without quotation marks, in Voevodsky's framework) 41 down that the $(i,j)^{th}$ motivic cohomology of X maps to the abelian group of maps from the unit object to $C^{et}(X)$ \ $[i](j)$ in the bounded derived category of $\ell$-adic Galois vote representations. But this abelian group of maps is just the classical (continuous) $\ell$-adic etale cohomology $H^i(X(j))$ of the variety $X$, making this latter group the natural target accepted of an $\ell$-adic etale "realization" map from motivic cohomology. So here comes the third point: note that this is the etale cohomology of $X$ itself, not of the base change from $X$ to its algebraic closure. So this etale cohomology group mixes up arithmetic information and geometric information, and the same is true of motivic cohomology in general. (Think especially of the case $X=pt$: the motivic cohomology of a point admits a generally nontrivial realization map to the $\ell$-adic Galois cohomology of the base field.) For example, it is expected (e.g. 
by Grothendieck -- see http://www.math.jussieu.fr/~leila/grothendieckcircle/motives.pdf for this and more) that for an abelian variety $A$ over an ``arithmetic'' base field $k$, the most interesting part of the motivic cohomology $H^{2,1}(A)$ (again my twists may be off...), by which I mean the direct summand which classifies extensions of $H^1(A)$ by $H^1(G_m)$, should identify with the rationalization of the abelian group of $k$-rational points of the dual abelian variety of $A$, the map being given by associating to such a $k$-rational point the mixed motive given as $H^1$ of the total space of the corresponding $G_m$-torsor on $A$. And in this case, the above "realization" map to $\ell$-adic etale cohomology is closely related to the classical Kummer-style map used in the proof of the Mordell-Weil theorem.

So in a nutshell: motivic cohomology is very related to motives, since morally it classifies extensions of motives. But it is of a different nature, since it is an abelian group rather than an object of a more exotic abelian category; and it's also quite different from a human standpoint in that we know how to define it unconditionally. Finally, motivic cohomology realizes to Galois cohomology of a variety itself, rather than to the base change of such a variety to the algebraic closure. Hope this was helpful in some way.

1 It does indeed help! – Makhalan Duff Jul 22 '12 at 1:26
8 I loved the «from a human standpoint» :-) – Mariano Suárez-Alvarez♦ Jul 22 '12 at 1:57
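Compactly, the definition and the étale realization described above can be recorded as follows. Here $\mathrm{DM}(k)$ stands for a triangulated category of motives over the base field $k$ (e.g. Voevodsky's), $\mathbf{1}$ is its unit object, and the precise placement of the twist $(j)$ and shift $[i]$ is one common normalization, not necessarily the answer's:

```latex
% Motivic cohomology as Hom in a triangulated category of motives,
% together with the l-adic etale realization map (conventions vary).
H^{i,j}_{\mathrm{mot}}(X)
  \;:=\; \operatorname{Hom}_{\mathrm{DM}(k)}\!\bigl(\mathbf{1},\, C(X)(j)[i]\bigr)
  \;\longrightarrow\;
  H^{i}_{\mathrm{et}}\bigl(X,\,\mathbb{Q}_{\ell}(j)\bigr).
```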
{"url":"http://mathoverflow.net/questions/102839/what-is-the-relationship-between-motivic-cohomology-and-the-theory-of-motives","timestamp":"2014-04-18T16:19:31Z","content_type":null,"content_length":"60742","record_id":"<urn:uuid:f5e7b59c-1a54-4f35-8e4f-0778bf1aa2ac>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Fortran 90
Fortran 90 texts and programs, assembled by Michel Olagnon
Contents :
• Some documents :
□ The ``Support de cours Fortran 90 IDRIS'' (by P. Corde and H. Delouis, an excellent reference document of more than 200 pages), as well as a beginner-level Fortran 95 course and a first look at the 200x standard.
• Some of Michel's utilities : These are public domain, and may be freely used under the terms of the Free Software Foundation licence.
• Some of Michel's applications :
□ Stejoi, Fortran 90 statistical software for Sun 4.1.x, interactive analysis of jointly occurring events (conditional probabilities).
□ Example programs, from Michel's book (in French) ``Traitement de données numériques avec Fortran 90'', Masson, 1996, ISBN 2-225-85259-6.
{"url":"http://www.ifremer.fr/ditigo/molagnon/fortran90/","timestamp":"2014-04-16T16:22:18Z","content_type":null,"content_length":"3377","record_id":"<urn:uuid:d6fe3446-a613-46ed-82ec-cd8d247ffdde>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
goes on forever, and then some . As with Andy Kaufman 's death, few people believe it to be real. What is infinity? Infinity is generally thought of as the "endless number", despite such a number being logically unable to exist. Such fallacies are rife in its history, which began with a caveman wondering how long it would take for humanity to ever enjoy mornings. Some people believe that infinity is merely a hypothetical concept, used to represent that beyond mathematics's current understanding of numbers, although that is much less interesting. In general, infinity is a very confusing and pointless idea, and it isn't used for anything outside of extremely crude metaphors and numerical showboating. How big is infinity? Infinity is of an infinite size. To compare it to another number, one would most likely have to compare the whole of the universe to a single atom, which implies that the universe is spatially limitless, although it is certainly not a proven theory that such an amount of matter could ever exist in the conceptual space we have granted certain existence. Ergo, infinity would be omnipresent in its only tangible incarnation, giving theorists scope for great assumptions on the scale of infinity as an intangible number, and how such a broad concept's application is so finite to the field of mathematics only, when the number itself is infinite. That is to say, it's quite big. Who uses infinity? Infinity is greatly used in mathematics and similar fields, which is slightly confusing considering it cannot exist. However, its place on the confusion/comprehension scale, squarely between pi and i , has led it to become something of an asset in calculating useless applications purely for the sake of it. Although not directly related, the definition of infinity is connotated with such expressions as "[this activity] is taking forever!" and "Car trips to [location] never end!".
Such phrases are undoubtedly clichéd, but common nonetheless, and idealise infinity as a perfectly normal expression which is, quite appropriately, used to notify terrific dullness. What does infinity look like? The most common imagery of infinity is the number eight (8) tipped on its side, as seen above. This is relative for a number of reasons, the predominant one being the sloppy description of such a massive idea and it being parallel to having a tipped over eight its symbol. Other mental images are conjured at the mention of infinity. The most resounding is an infinite amount of monkeys chained to an infinite amount of typewriters, for although the image is almost totally random, monkeys are funny. There is a final visual aid for infinity, which, all things considered, makes the most sense: a pale, starving mathematician hunched over a series of never-ending calculations in a dim room which the Mafia probably perform executions in. Can I have some pi? No, you can't. See Also Glossary of mathematical terms Augustin-Louis Cauchy · Albert Einstein · Isaac Newton · Blaise Pascal · Bernhard Riemann Mathematicians · Mathemagicians · Nerds · Asians · Your Math Tutor Fundamentals Algorithm · Proof Tools Calculator · Flow chart · Graphs · Slide rule · Ti-83 · Texas Instruments Education Intelligent Mathematics · Extreme mathematics · Hex · Maxwell equations · Newmath · Nude math By Field: Numerology 0 · 1+1 · 9/11 · 0.999...
· Pi equals exactly three · Nillion · Oodles · Infinity Veggie maths Negative potato · Counting to potato Number Theory The largest number · Integer · Legend of Zelda Link theory · Negative Numbers · Odd · Prime number · Fibonacci Sequence · Rational numbers · Riemann Hypothesis · Imaginary Number · Complex numbers · Is one a number? · Fermat's Penultimate Theorem · Fermat's Last Theorem Arithmetic Arithmetic · Addition · % · How to Divide by Zero · The Quantity 2 plus 4 times y = Your Mom Algebra Pre-Algebra · Al Gebra · Linear Algebra · Equation · Polynomial · Fourier Transform · Hilbert's Hotel Geometry & Topology Geometry · Trigonometry · Fractal · Hairy ball theorem · Tangent · Paradox (Achilles and the Tortoise) · Transcendental curve · Soviet Integration (Mathematics) Calculus Calculus · Integral · Vector calculus · Differential Equations · Cauchy's theorem · ∫ · 2 Girls 1 Calculus Equation Probability & Statistics · Random Statistics · Soviet Union (Mathematics) Logic & Computer Logic · Recursive · Monty Hall problem · Turing test · Number Bases Physics & Mathy Laws of Physics · Principle of Least Action · Einstein's Malicious Theories · The Popularity of War
{"url":"http://uncyclopedia.wikia.com/wiki/Infinity?diff=prev&oldid=5644011","timestamp":"2014-04-17T14:18:45Z","content_type":null,"content_length":"685346","record_id":"<urn:uuid:19aaf496-3557-4a53-99fd-1fbe3d3b47f9>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
The Mathematical Biology Job Market

As many of the articles in this feature have indicated, biology is at a crossroads. Biologists have more data than they know what to do with, and mathematicians have the tools and expertise to begin to make sense of it. But mathematicians lack the fundamental knowledge of biology necessary to understand the results. Meanwhile, biologists grasp the systems they're studying, but they lack the tools needed to properly analyze the reams of data they produce. These days more than ever before, mathematics undergrads are matriculating with at least some exposure to biology. Yet a big gap remains between the worlds of traditional biology and mathematics. That divide is certain to narrow and, perhaps, eventually disappear, but in the meantime, representatives of both fields must wax interdisciplinary if the promise of the genetic revolution is to be fulfilled. And this--genetics--is just one of many fields of biology where mathematics has started to make its mark. For mathematicians who grasp biology, opportunities are plentiful. But which mathematicians? And which math? "People outside of the field see it as one uniform thing, but there are definitely different disciplines," says Steve Lincoln, who is vice president of Bioinformatics at Affymetrix Inc. in Santa Clara, California. "Pure math is the study of systems and equations, and the logical structure of things that are generally pretty abstract in nature. [On the other hand] if you are trained as a biostatistician, you're trained to do applied [work]. Some people wind up doing basic research for a living, but most apply [statistics] to a problem, be it [in the] life sciences, or health care, or modeling the stock market." Biostatistics is big, but such basic mathematical tools as differential equations--especially partial differential equations--are also useful.
These equations are handy for tracking quantities in time and space--a quality just right for investigating systems and mechanisms, biological or otherwise. Variables might include metabolites of a cell, the strength of a neuron's signal over time, or the number of patients infected by a virus as it spreads over a geographical area. Ordinary differential equations typically apply when several variables are a function of time, while partial differential equations get used when a variable is dependent on both time and space, says Michael Reed, a professor of mathematics at Duke University who applies mathematics to physiology and medicine. For example, a protein in a cell might start life in the nucleus and then move into the cytoplasm to take part in cell signaling. Hence the amount of protein in one area of the cell depends both on time and on the amount of protein somewhere else.

Academic boom

Academia and the National Institutes of Health (NIH) figure to be important employers of mathematicians that cross over into biology. "The experience in the human genome project was that 25% to 30% of every project's budget went into informatics. If you need to coordinate a lot of data, you need to devote significant resources to doing that. If some significant fraction of the NIH budget is going to large-scale projects, and a substantial fraction of each project's budget goes into informatics, that translates into a lot of jobs," says David States, who is a professor of bioinformatics at the University of Washington, Seattle. Mathematics departments are on the lookout for mathematicians well versed in biology. "You get mathematicians who don't really know much biology, or biologists who don't know much math. It's not so easy to find a mathematician who is trained well enough in biology to talk to biologists and be taken seriously.
I see a big opportunity there in the foreseeable future," says Reinhardt Laubenbacher, a research professor at the Virginia Bioinformatics Institute (VBI) and a mathematics professor at Virginia Polytechnic Institute and State University in Blacksburg. Laubenbacher should know; VBI just added one new research professor, bringing the number of permanent faculty to 15, as well as two visiting research scientists. If the outlook is bright in academia, will there be similar heady times in industrial sectors like pharmaceuticals and biotechnology? Newspaper and magazine articles often quote corporate managers describing grand visions of computer simulations of disease states, as well as "in silico" drug design that could, it is argued, replace the battery of compounds churned out today by synthetic chemists, as well as the expensive animal tests used to weed out poor performers. "It's a nice long-term goal, but we have a lot of work to do [to get there]," says States. "I'm not sure the pharmaceutical companies are investing in it right now. I think there's more of an empirical [frame of mind]: Don't show me a model, show me experimental data that I can show the Food and Drug Administration. And some of that may be appropriate. A lot of the modeling opportunities are probably more academic than commercial." Still, it isn't all bad news for employment at big pharma and big biotech. Biostatisticians are in demand to assist with analysis of clinical trial data. Genetic data and analysis is playing an increasing role in clinical trials, with companies beginning to track side effects and sometimes responses of patients based on genetic markers (pharmacogenomics). "Biostatistics is one of the biggest employment opportunities in the pharmaceutical industry," says Robert Jernigan, director of the Laurence H. Baker Center for Bioinformatics and Biological Statistics at Iowa State University. "Biologists have to pick up the mathematics" But the onus isn't all on mathematicians. 
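The compartment bookkeeping Reed describes earlier can be made concrete with a toy model. Everything here--the rate constant, the time span, and the forward-Euler scheme--is an illustrative choice of mine, not something from the article: a protein leaves the nucleus at a rate proportional to how much is still there, and accumulates in the cytoplasm.

```python
import math

def simulate(n0=1.0, k=0.5, dt=0.001, t_end=10.0):
    """Forward-Euler integration of dN/dt = -k*N, dC/dt = +k*N."""
    n, c = n0, 0.0
    steps = int(round(t_end / dt))
    for _ in range(steps):
        flux = k * n * dt   # amount leaving the nucleus during this step
        n -= flux
        c += flux
    return n, c

n, c = simulate()
# Total protein is conserved, and the nucleus empties roughly like exp(-k*t).
print(n + c, n)
```

With a small enough step size the numerical answer tracks the exact exponential decay closely; a production model would of course use a proper ODE solver rather than hand-rolled Euler steps.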
Biologists could use an infusion of mathematics as well, says Iya Khalil, vice president of R&D at Gene Network Sciences Inc. in Ithaca, New York. Biologists frequently run experiments that generate large amounts of data, but the usefulness of the data will likely depend on the design of the experiment. "In the realm of high-throughput experiments, often times [mathematicians] figure out that if the biologist had done the experiment in a particular way, it would have improved the statistics of the analysis by an order of magnitude," she says. By then it's too late. "Biologists have to pick up the mathematics," agrees Laubenbacher. "If you want to use your data to make a mathematical model, then you need to take the modeling method into account when you design new experiments. Different modeling methods will require different kinds of data." Mathematicians are following in the footsteps of physicists, who have crossed into biology in droves, in part because the work looks familiar to them, says Lincoln. "[Automated experiments] produce very large data sets, which tend to be multivariate in nature. In any biological experiment you can do on a large sale, you're bound to capture a variety of phenomena. One is the one you are interested in, and the other six or 600 are either noise or confounding factors. It's fairly analogous to the kind of work that physicists have been doing for years." There are no limits on opportunities for mathematicians. "The work may not look fancy to pure mathematicians ... you may be using 19th century math. The intellectual difficulty is in the biology, and how to use the mathematics to study it," says Reed. And much depends on mathematics departments embracing biology so that students can get proper training. "I think they will, but it's a slow Read the companion article Profile: The Scrutable and the Inscrutable, also part of this Next Wave feature.
{"url":"http://sciencecareers.sciencemag.org/career_magazine/previous_issues/articles/2004_02_27/nodoi.6305720559640560046","timestamp":"2014-04-24T20:01:35Z","content_type":null,"content_length":"46497","record_id":"<urn:uuid:71e4a7de-281d-4dd5-8c6d-a01fb43105cc>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
Puzzles.COM - Puzzle Projects - Puzzle Help Items 025-036 036 Twelve Crayons - MC6 035 Puzzle Definition 034 Bridge Puzzle - MC5 033 Bears & People - MC4 032 Walls & Lines 031 Grandfather's Breakfast - MC3 030 Making Your Own Puzzles 029 Alexander's Star 028 "Hasty Retreat" 027 How to Recognize a Puzzle, or What Puzzle is your Query About? 026 Send More Puzzles!! 025 Rubik's Cube 036 Twelve Crayons - MC6 Question: If you have 12 crayons arranged to make four perfect squares and take away three crayons to make three perfect squares can you answer that? David V. Mini-Contest 6 is Finished It was our sixth mini-contest. Now we have the first six correct answers and propose this contest's results and some details about the puzzle and its solution. With this Mini-Contest 6 is finished. Contest Results The winners are: 1. Nicole Takahashi. 2. Joao Paulo. 3. Jensen Lai. 4. Alex Packard. 5. Federico Bribiesca Argomedo. 6. Jeffrey Czarnowski. 035 Puzzle Definition Question: What is your definition of a visual puzzle? Ebrahim M. Answer: A visual puzzle is an image containing hidden information (a solution) that you need to reveal using just your eyes and interpretation of what you see. 034 Bridge Puzzle - MC5 Question: I have a math puzzle that I just can't get. You have 4 people (A, B, C and D) that need to cross a bridge under 13 min and only 1 flashlight. You can only go with the flashlight, no more than 2 persons at a time, and always as fast as the slowest person. To cross the bridge persons need: A=1min; B=2min; C=5min; D=6min. How do you get all 4 of them across in only 13 min? P.S. they can't go on each other's back! Mini-Contest 5 is Finished It was our fifth mini-contest. Because of some technical problem now we have the first six+one correct answers and propose this contest's results and some details about the puzzle and its solution. With this Mini-Contest 5 is finished. Contest Results The winners are: 1. Jensen Lai. 2. Hüsnü Sincar.
3. Nicole Takahashi. 4. Tanya Bulls. 5. Nigel Wilson. 6. Terry & Lisa. 7. Alex Packard. We got an almost correct solution from Jennifer Ormos, but in this solution its left (timing) and right (persons) parts don't correspond in some steps; persons A and B show another time than it's stated in the puzzle description. 033 Bears & People - MC4 Question: You have three bears and three humans. The bears and humans have to take a boat from one island to another island. One bear is a special bear who can drive the boat. The boat can hold no more than two - 2 bears, or 2 humans, or 1 bear plus 1 human. There can never be more bears than humans on either island or the human will be killed. How can this be done? Mini-Contest 4 is Finished It was our fourth mini-contest. This puzzle wasn't as easy as it might seem at first sight, but finally we have six+one (last minute) correct answers. Now we propose this contest's results and some details about the puzzle and its solution. With this Mini-Contest 4 is finished. We thank you for your active participation, and look forward to the next Mini-Contests. Happy Puzzling! Contest Results The winners are: 1. Husnu Sincar. 2. Nicole Takahashi. 3. Xiner. 4. Marie Sabbe. 5. Katherine Cardiff. 6. Adam Dutke. 7. Kaj Braek. 032 Walls & Lines Question: here is an attachment that i hope you will open and i hope you will tell me if this is unsolvable. ive been trying to solve this puzzle for 7 years and please just look at the setup for the puzzle. You have to touch every intersecting line with one continuous line without touching each intersecting line twice. here an example on paint. and please tell me if you understand or if you know about this puzzle. please give me some answers Arthur M. Question: I was wondering if anyone saw this one. I am told it is solvable. The object is to bisect each line without lifting the pencil, without crossing / touching any given line twice. Michael S. 
Question: I've been working on this one for a while and don't see the possibility. There is 2 rectangular boxes on top of 3 square boxes, you need to draw a line through each line of the boxes without overlapping any lines or going through them twice and the line needs to be continuous. Any ideas? Thanks Gary J. Question: Hello, Any idea where I might find the solution to this puzzle? You are supposed to draw a line through each square without crossing a line twice. Thanks! Rachel A. Question: hi i have a puzzle that i have been trying to figure out for about 4 months now and i have asked many people around me for their help but no answer. it is a long rectangle with a horizontal line going completely through the center of the triangle, on the top of the horizontal line is a vertical line going from the center of the top of the rectangle to the center of the horizontal line, on the bottom half of the rectangle are two more vertical lines going from the bottom line of the rectangle to the horizontal line they are on the side of the center vertical line on the top half of the rectangle. now the trick is you have to draw one line going through this rectangle so as to pass through each line (there are 16 lines) but you may not pass through the line being drawn and you cannot pass over a line twice. PLEASE HELP!!!! please help me with this puzzle it is driving me carzy. thank you so much send me a solution when possible. sincerely, Question: MY teacher showed the class this puzzle last month and still nobody has figured it out yet. It is this box with a line going strait through the center .Then on one side of the line is a line going to the wall of the box and on the other side is two lines going to the other wall. The object is to draw a line going through each line segment without crossing one line segment twice.Although,you can cross your own line twice. 
The line is also allowed to swerve or whatever you want it to do. My teacher says it's possible, but I'm not too sure. If you guys have any idea of what I'm talking about, please tell me what the answer is because it's driving me insane. Question: I am having trouble finding a puzzle that has 2 boxes on top of three boxes and you have to make a long line that goes through every line on the boxes and you cannot go through a line twice. Question: This puzzle was given to a friend of ours some years ago. He has never been able to solve it and he doesn't know if it has a solution. Does it? I am starting to think that this puzzle has no solution. I have been playing with it with no luck. The idea is to cross each line (segment) with a continuous line without crossing any segment twice or the continuous line itself. Rafael R. This solution is wrong, because the line crosses one segment twice!! Question: My teacher gave me this puzzle and she said no one in 15 yrs has ever figured it out. She told us that we would have to draw a continuous line through each wall without tracing over a wall and entering the wall twice*. Can you help me? The puzzle looks like this. * Walls that have to be passed are marked with a circle. Answer: Unfortunately, this, one of the most popular classic puzzles, has no solution. At least one wall will always be left unpassed. It was easily proved by Martin Gardner. The proof (adapted to our case with the walls and rooms) is as follows: <<A continuous line that enters and leaves one of the rectangular rooms must of necessity cross two walls. Since the three bigger rooms have each an odd number of walls to be crossed, it follows that an end of a line must be inside each if all the 16 walls are crossed. But a continuous line has only two ends, so the puzzle is insoluble.>> Answer: Just an observation you or your visitors may appreciate.
There is a logic problem that goes like this: draw a square, divide it in half horizontally, divide the top half into two equal parts with a vertical line, then divide the bottom portion into 3 equal portions with 2 vertical lines. The task is to draw a continuous line through all lines without ever crossing your own line or crossing any line two times. The problem is presented on this site. Now according to conventional logic this problem seems impossible, because a line always needs an entry and exit but there are an odd number of spaces and an odd number of segments in three of them. The real difficulty here is that an assumption is made, creating an unwritten rule. This unwritten rule, this self-imposed limitation forces the problem solver to focus on the problem, NOT THE SOLUTION. By recognizing the problem (NOT FOCUSING ON IT) - a long line cannot enter and leave each space enough times without making an illegal crossing - we can find the solution. The solution is this: use a very wide marker or brush and cross the entire box in one diagonal line. All stated conditions are met, the problem is circumvented and the solution is found. Clearly this is not the intended answer, but it is indisputable. Comment: Puzzle 32 walls and lines. The solution presented says one should use a large brush because it fits within the “rules.” However, the definition of a line says that it is only one (1) dimensional; a line therefore cannot have height and width, for it would then be a plane. So the “solution” presented, where this individual accuses people of not being able to “focus” on the solution, is in fact not focusing enough on the “rules.” The first time I was told the puzzle it was told to me to cross each line once and only once. So I crossed every line (picking up the pencil each time). However, when the rules say to make a line this does not work, since this makes several lines.
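Gardner's parity argument can also be checked mechanically: treat each room (plus the outside) as a node and each of the 16 walls as an edge; a continuous line crossing every wall exactly once is then an Eulerian path, which exists only if at most two nodes have odd degree. The room labels A-E and the wall list below are my own reconstruction of the standard five-room figure, not something stated on the page:

```python
from collections import Counter

# Nodes: rooms A, B (top), C, D, E (bottom) and the outside O.
# Edges: the 16 walls of the classic five-room puzzle (reconstructed).
walls = [
    ("A", "O"), ("B", "O"),              # top edge
    ("A", "O"), ("C", "O"),              # left edge
    ("B", "O"), ("E", "O"),              # right edge
    ("C", "O"), ("D", "O"), ("E", "O"),  # bottom edge
    ("A", "B"),                          # top divider
    ("A", "C"), ("A", "D"), ("B", "D"), ("B", "E"),  # horizontal divider
    ("C", "D"), ("D", "E"),              # bottom dividers
]

# Count how many walls border each region.
degree = Counter()
for a, b in walls:
    degree[a] += 1
    degree[b] += 1

odd = sorted(node for node, d in degree.items() if d % 2 == 1)
print(len(walls), odd)
# An Eulerian path needs 0 or 2 odd-degree nodes; here A, B, D and O
# are all odd, so no continuous line can cross every wall exactly once.
```

This matches Gardner's count: the three bigger rooms each have five walls, and the outside borders nine, giving four odd-degree regions.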
Modified: May 27, 2007 031 Grandfather's Breakfast - MC3 Question: I was wondering if you can help me solve this puzzle: Grandfather is a very hard-boiled customer. In fact, his eggs must be boiled for exactly 15 minutes, no more, no less. One day he asks you to prepare breakfast for him, and the only timepieces in the house are two hourglasses. The larger hourglass takes 11 minutes for all the sand to descend; the smaller takes 7 minutes. What do you do? ...Grandfather grows impatient... Mini-Contest 3 is Finished It was our third mini-contest. Now we have the first six correct answers and propose this contest's results and some details about the puzzle and its solution. With this Mini-Contest 3 is Contest Results The winners are: 1. Jensen Lai. 2. Hüsnü Sincar. 3. KT - "ktendall". 4. Tamie D. 5. Jane Average. 6. Nicole Takahashi. 7. Matt Kaspar. We got an almost correct solution from Sandra Payne, but she did say nothing about the moment when to put the eggs into the boiling water. See details at the solution page. Also we got a correct solution from Matt Kaspar before our Mini-Contest 3 was finished, but accidentally his message was put into a folder with another contest, so when we posted the results and the names of the first six correct solvers his name wasn't included. Our apology. We add his name to the winners list. 030 Making Your Own Puzzles Question: We were wondering if you might be able to help us. We are trying to complete a word search and find for our elementary school newspaper. We have come up with the topic and have a listing of words we'd like to use. Can you put them together for us? A teacher suggested that we contact you. Thank you, TJ and Ashton Question: Several years ago I was able to log on and input my choice of words and you would create a puzzle for me, do you still offer that service. 
Question: I want to create a puzzle of my own, a search-a-word for church. How do I go about it? Please let me know; I need it by tonight.

Question: How do I make my own puzzle?

Question: Do you know the name of a website that allows you to create wordsearches? Thanks

Question: Do you know where I can find a website where I can make my own crossword puzzles or wordsearches?

Question: How do you make a puzzle?

Question: I would like to know the address/website to create my own puzzles. Do you have that website? Jane T.

Question: I would like to know how to create my own word search, with my own words that I chose.

Question: How can I get help with making my wordsearch on here?

Question: I am looking for a word search puzzle on "nutrition". Can you help? Joycelle J.

Question: I have a report due on Friday the 12th and I have to do some puzzles for English. Would u be able to give me some if I tell u the headings? Well, write back soon please. Thank you

Question: I want to know how to create my own puzzle.

Question: We are having to make an activity sheet for our class and my teacher suggested this site, but I can't find a page where u make your own puzzles. With your help, can you please direct me to a page which will help me?

Question: I was under the assumption that I was able to make my own puzzle at your web site!? Upon pulling your web site up, I was unable to see how this was accomplished. Can you help me? Am I at the wrong site? Confused.

Answer: You may take a look at the site http://www.puzzlemaker.com/ which offers some great possibilities for making different puzzles with words.

Answer: Here is a message from one of our visitors with very useful information on the matter. Thank you very much for your help, Lisa! <<I happened to be reading where some students had questions about how to create their own puzzles. There is a site called edHelper (www.edhelper.com) where you can create your own puzzles. I hope this can be of some assistance.
Lisa Melvin>>

Answer: We suppose that there is something in the name of our new section "Puzzle Help" that gives the idea that we can help you make your own puzzles. If you mean customized puzzles, we can't do this, and that never was our intention. Actually, we created this section first of all to help you get oriented in the wonderful and vast Puzzle World. Our team knows much about puzzles, but nobody knows and can do everything. On the other hand, in our PuzzlePLAYGROUND sector we have a huge variety of cool puzzles you may choose from and print out, and so really have your own puzzles. In any case, every question of yours is considered very important, and for each of them we try to find as precise an answer as possible. See also Item 010 in this section.

Modified: November 29, 2005

029 Alexander's Star

Question: I bought an Alexander's Star puzzle at a thrift store and it is unsolved. I know this sounds lame, but do you have a picture of what a solved puzzle of this sort should look like? It would really help me get off to a better start. Thanks. Kathy D.

Answer: Congratulations on such a wonderful purchase! The Alexander's Star puzzle is one of the most beautiful sequential movement puzzles, similar to the famous Rubik's Cube puzzle. It's based not on a cube but on a great dodecahedron. The puzzle has ten pentagonal faces with ten stars which protrude from the faces. The object is to make every pentagonal side (or rather its visible part, which consists of five triangles) monochromatic. A picture of Alexander's Star, its description, notation, and how to solve it can be seen at Jaap's Puzzle Page for this puzzle. Also, a short solution to Alexander's Star can be found here.

028 "Hasty Retreat"

Question: I am in search of a picture puzzle I did as a child. It's a very exciting one, filled with the drama and romanticism of an old-time adventure story. Essentially there are 4 participants.
A North American Indian man and woman in traditional dress, a dog, and a grizzly bear. The dog is on the right side, bristled and snarling. The female, startled/terrified, is standing in a buckskin dress and moccasins with one arm raised in alarm, on the left. The man is bare-chested/muscular and reaching for his bow and arrows, located in their canoe, which is only partially shown in the foreground, both on the left and moving toward center. The forest, on either side, leading into the clearing, is thick and vibrant green. The grizzly is reared on hind legs, preparing for attack, mid center/mid back. The mountains in the background suggest late afternoon in their paleness of color. This picture story has a timeless nature and is frozen on the canvas of the artist as well as in my mind. I do not know the artist, manufacturer, or title, but by subject it should be easier for you to find than it was for me, if you keep a picture archive by subject. (I have had no luck.) Although I only want to give this to my sister as a 50th birthday present, I have at least 10 others I would love to present it to. If there is anything else I can do besides identify it, please let me know. I estimate that it was produced in the 1950's, and my mother bought it at a church auction in Barton, Vermont. Can you help me? If you cannot, perhaps you could make some helpful suggestions. Sincerely, Suzen A.

Answer: Thank you very much for your very interesting message and request. Our puzzle friends Anne Williams and Chris McCann helped us to identify this puzzle. We are grateful to them for their help. The puzzle you are searching for can be seen at the following URLs: http://oldpuzzles.com/416.htm or http://www.oldpuzzles.com/416.htm. The title is "Hasty Retreat", and the artist is Walter Haskell Hinton. The puzzle consists of 1203 pieces, with a lot of amazing, "non-standard" pieces. We hope this information will be helpful for you. Last but not least:
Your detailed description of this nice picture puzzle is as impressive as the puzzle itself. :-)

027 How to Recognize a Puzzle, or What Puzzle is your Query About?

Very often we receive your messages with requests to help with some puzzles. But sometimes they describe these puzzles so generally that we can't even understand which puzzles you mean (please see below). This, unfortunately, prevents us from helping you with your troublesome puzzles. In all such cases we can only recommend that you give us as many details on the puzzles you want us to help with as possible. Don't hesitate to send us the puzzles' names, producers, publishers, sources, their authentic texts or descriptions, or any other details that may be helpful when we try to recognize the puzzles and help you with your queries. Good samples of such detailed descriptions are Items 028 and 029.

Question: I need help with a Dell crossword puzzle called Cross Sums. Could you help me out, or tell me where I can find the answer to the puzzle? Gail B.

Question: I was given a 3D puzzle but it has no directions. I have tried putting them in numerical order and that doesn't work, and I have also tried putting them together by shape. I am very confused. I love puzzles and also a challenge, but this makes absolutely no sense to me. I appreciate any guidance you can provide. Thanks!

Question: I have a puzzle about cars, colors, states, and parking spaces. I can't figure it out. Where can I find the answers?

Question: I don't remember what this puzzle is called, but I saw it like a year ago and I've been trying to find it. I don't know if you could help me. It comes with a pink circle and three yellow lines; you've got to put the three lines together and the circle; it slides to the side three times, and the two rows from the bottom to the top, and they slide at the same time. Annel Q.

Question: Do u have any celestial puzzles? If not, can u tell me where I might be able to find some? Thanks

Question: I need help solving a Dell logic puzzle.
It is called Pass the Egg, by Glen Schoen. Please help me, because I don't understand it. This is due this Tuesday, so write back ASAP. Thank you.

Question: While visiting El Salvador I found a young man selling interesting wire puzzles that had degrees of difficulty. #4 was a bicycle, which I was able to do, but #5, consisting of a Y with rings and springs, I have been unable to do. He told me I could find the answer at www.puzzles.com, but I have been unable to locate it. Can you help me? This is driving me crazy! Thanks

026 Send More Puzzles!!

Many of you ask us to send different puzzles (please see below). We must say that at the moment the only way to deliver new and challenging puzzles and other cool things is through the sectors, sections, and pages of our Puzzles.COM site. We're preparing one more way for this, our E-PuzzLETTER for the members of our PuzzleCLUB, but these are our future plans.

Question: Can you send me all kinds of puzzles?

Question: Send puzzles to challenge the mind. Thanks

025 Rubik's Cube

Question: I have a rubix cube and I need help; I put one side together! Gary M.

Answer: To learn how to do one face of a Rubik's Cube (and much more) go to the site http://jeays.net/rubiks.htm. There you'll find everything about this phenomenal puzzle, including two complete solutions for solving the cube from any legal position. It also has a lot of links to other places about Rubik's Cube on the Web. Or visit another excellent site, "The Ultimate Solution to Rubik's Cube", at http://www.olympus.net/personal/prmhem/. There is also a comprehensive page about the Rubik's Cube 3x3x3, its notation, solutions, and very useful links at Jaap's Puzzle Page for this famous puzzle. And, last but not least, you can use the Rubik's Cube Solution, the newest solution guide, recommended for everyone, directly from the official Rubik's site, Rubiks.com. There you can also play the Virtual Rubik's Cube online and solve it from some scrambled position.
Additionally, the Virtual Rubik's Cube has a very clever and useful feature (accessible under some special conditions, though) that allows you to solve your real Rubik's Cube 3x3x3 from any position, which you simply paint onto the on-screen sample of the Rubik's Cube.

Last Updated: January 21, 2009
Number theory and Cryptography??

The role that number theory played is that it was used to solve the key distribution problem by providing a usable "one-way" function for the encryption algorithm. Other forms of cryptography (than RSA or PGP) do not rely on a number-theoretic approach, but they suffer from the difficulty (and the security needed) in transporting keys.

A key is something that allows the recipient to decipher a coded message. If the key is compromised, a whole series of communications may be intercepted. Furthermore, when sending a communication to several locations, the distribution of keys becomes cumbersome. And lastly, to ensure security, it may often be safer to keep changing the key periodically, and that just adds to the complexity.

Public key encryption avoids these difficulties. And that's what RSA is.

PS: This was just meant to supplement what you got from the link you posted... which seems quite limited in explanation. The "how" of RSA (and of Euclid's algorithm) has not been talked about... I'll pass the baton on to someone else (maybe matt will take it) for that. Or I'll come back to it later.
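To make the "public key avoids key transport" point concrete, here is a toy RSA round trip in Python. The primes and the message are my own tiny illustrative choices, not anything from the post (real RSA uses primes hundreds of digits long plus padding schemes); note that Python's three-argument `pow(e, -1, phi)` computes the modular inverse via the extended Euclidean algorithm, the same Euclid's algorithm deferred above:

```python
# Toy RSA key generation, encryption, and decryption with tiny primes.
# For illustration only: real RSA uses enormous primes and padding.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)   # Euler's totient of n, kept secret
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent via the extended Euclidean algorithm

message = 42                  # any integer smaller than n
cipher = pow(message, e, n)   # anyone can encrypt with the public pair (n, e)
plain = pow(cipher, d, n)     # only the holder of d can decrypt

print(plain)  # 42
```

The "one-way" character is visible even at this scale: encrypting with the public pair (n, e) is easy, but recovering d from (n, e) alone requires factoring n, which is believed to be hard for large n. That is why the public pair can be broadcast freely and no secret key ever has to be transported.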
Leonard J Savage: Foundations of Statistics

The International Congress of Mathematicians was held in Edinburgh from 14 August to 21 August 1958. The President was W V D Hodge, who was also Chairman of the Executive Committee. The Vice-Chairman of the Executive Committee was A C Aitken, and the Scientific Programme Committee was M H A Newman (Chairman), A C Aitken, M S Bartlett, M L Cartwright, J L B Cooper, H Davenport, P Hall, N Kemmer, M J Lighthill, E A Maxwell, D G Northcott, W W Rogosinski, A G Walker, J H C Whitehead, M V Wilkes and J A Green.

L J Savage gave a 30 minute invited address to Section VI of the Congress. We give the text of his lecture below:

Recent Tendencies in the Foundations of Statistics
By Leonard J Savage

1. Introduction

This is an expository talk directed mainly at any non-statisticians who may have wandered in. It is important to address the non-specialists at a congress like this, to help maintain the bonds between the diversifying branches of mathematics. In this particular talk, a restraint on technicalities will have the added advantage of helping experts and the speaker keep their feet on the ground.

The foundations of statistics are controversial, changing, and subtle. Therefore, try though I shall to be fair and clear, you must keep yourselves especially aware that you are hearing mainly the present views of a single person, imperfectly expressed.

The foundations of statistics are a part of the foundations of science in its widest sense. Their study is not mathematics in principle, and by no means all the important contributions to them have been made by mathematicians. But the use of some mathematical techniques is inevitable in the study of a quantitative subject. Still more, mathematical training and outlook have led, and will surely continue to lead, to important advances in the foundations of statistics.
The relation is reciprocal in that mathematics is sometimes stimulated by the foundations, as it is by the other theoretical aspects of statistics. The reference to recent tendencies in my title has a continuum of possible meanings, and in fact various parts of the talk will refer to tendencies of the present century, of the period since World War II, and of the last few years or even months.

2. Meanings of 'statistics'

I begin by outlining some meanings that have been given to the word 'statistics', not to enter into an argument that would be out of place here (and perhaps anywhere) as to what the word ought to mean, but to indicate the subject of this talk and to set the stage for it. Etymologically, 'statistics' refers to numerical data about the state. Even today there are many professional statisticians to whom 'statistics' means effectively demography in a more or less extensive sense: the compilation and interpretation of census data, economic statistics, or vital statistics (records of births, deaths, and illness). For many of us, however, the word has drifted far from its original meaning and come to refer to quantitative thinking about uncertainty as it affects scientific and other investigations. It is this meaning, suggested by 'inductive' or 'statistical inference', that 'statistics' has for us here. This subject goes back historically to at least the early eighteenth century, when Jacob Bernoulli, and a little later Thomas Bayes, made great contributions to it. It was pushed forward in the nineteenth century by Laplace, Gauss, and others, and it has been subject to a fervour of activity since the early twenties of this century, when it received great impetus from the work of R A Fisher. In physics, 'statistics' usually pertains to probability without special reference to the problem of inference but with emphasis on large aggregates.

3. Inductive inference and inductive behaviour

One of the most important trends of the past few decades is a shift in formulation of the central problem of statistics from the problem of what to say or believe in the face of uncertainty to the problem of what to do. It would be hard and unrewarding to seek out the very beginning of this economic movement, as I shall call it because of its emphasis on the costs of wrong decision. It goes back at least as far as Gauss [7], but Neyman brought it forward with particular explicitness in 1938 [8], coining the expression 'inductive behaviour' in contrast to 'inductive inference'. Wald took up the theme with energy and enthusiasm, exploring it in great detail and stimulating many others to do so during his own life and after his untimely death.

That many important and interesting problems concerned with uncertainty are economic in nature is clear and undisputed. Going much further, some of us believe that economic models are of great value throughout the whole of statistics. This is controversial, and it is maintained, especially by Barnard [1], [3] and Fisher [5], [6], that the methods and ideas appropriate to frankly economic problems are not appropriate to the problems of science, the problems of finding out the truth for the truth's sake. Fisher says in a particularly pungent way that science ought not to be confused with the sordid business of either the market place or the five-year planners' bureau [5]. Admittedly a close relation between frankly economic problems and more academic ones is not obvious, or even thoroughly demonstrable, but some case can be made for it. To illustrate, in practical problems of point estimation, there are certain systematic reasons why the penalty for mis-estimation is often nearly proportional to the square of the error. These same reasons are, to say the least, suggestive even for problems of pure science; a precedent for this idea can be seen in Gauss [7].
More generally, it should be kept in mind that science does have goals and that mistakes made in approaching them do entail costs, however subtle and abstract. There seems to me nothing at the present time to substitute for the hope that an economic theory of decision in the face of uncertainty will be a valuable guide for the whole problem of inference. If there is an important kind of inference problem that cannot properly be discussed in economic terms, no one yet seems able to state these problems with enough precision so that they can be analyzed and solved. In brief, the economic outlook seems to me of great promise for the whole of statistics, though it is not necessarily the last word. We should continue to explore and use it with hope and discretion and with an eye open for new ideas.

One thing that has been said about the putative distinction between scientific and economic problems is that the scientific inference to be drawn from given data is unique and universal, whereas the economic conclusions change with circumstances, such as values and opportunities [3]. I myself believe that the idea of a universal summary of data (that is, the likelihood-ratio function or some effective substitute) is valid and important, but the idea of such a summary does not for me rest on any distinction between science and business.

4. Objectivism and subjectivism

It was for a long time generally believed that all uncertainties could be measured by probabilities, and a few of us today believe that this view, which has recently been very unpopular, must soon again come into its own. It was part of the creed of the great renaissance of statistics in the second quarter of the century that only special uncertainties associated with gambling apparatus and the like were measurable by probabilities and that other uncertainties would have to be analyzed and dealt with in some other ways.
This renunciation swept away the classical framework for inference, built on Bayes's theorem, and thereby created many new problems. There was especially the problem of finding new meanings to important-sounding questions that had been rendered nonsensical by the renunciation. The situation was a fertile and stimulating one. Many new ideas directed at filling the gaps were introduced. Some of these ideas are apparently of lasting value, but some of them (such as confidence limits in their current formulation or tests of narrow hypotheses) may not be. In any event, the over-all program has not yet been even nearly successful, nor do I think it ever can be.

Statisticians have always recognized that subjective judgments of fact (as well as of value) necessarily play a role in statistical practice. First, much personal, that is subjective, judgment is obviously required to decide what kind of an experiment is the promising one to perform, and on what scale. There are, therefore, subjective aspects to the essential statistical activity of designing experiments and other investigations. Again, it has long been recognized that the user of statistics, in analyzing data, must make a subjective choice among available operating characteristic curves and the like. To be sure, the minimax theory can be seen as an attempt almost to eliminate all judgments but those of value from both design and analysis, but few if any would contend that there has been more than the formal appearance of success here.

A certain subjective theory of probability formulated by Ramsey [9] and later and more extensively by de Finetti [4] promises great advantages for statistics. Contrary to what the word 'subjective' seems to connote to many, the theory is not mysterious or particularly unoperational. It gives, a few of us believe, a consistent, workable, and unifying analysis for all problems about the interpretation of the theory of probability, a much contested subject.
It unifies the treatment of uncertainties, measuring them all by probabilities and emphasizing that they depend not only on patterns of information but on the opinions of individual people. Experience seems to me to show that this theory provides a better framework for understanding both the objective and the subjective aspects of statistics than we have heretofore had.

5. Does it matter?

As is often said, and with much truth, the explicit study of the foundations of a subject is usually of relatively little practical importance, for common sense and experience over the course of time develop a science more securely than it could possibly be built up by direct application of abstract principles. None the less, I believe that present-day discussions about inference and behaviour, about subjectivism and objectivism, are stimulating practical advances in statistics. The evidences of this are widely scattered, but I shall mention only two examples.

First, it is becoming increasingly accepted that, once an experiment has been done, any analysis or other reaction to the experiment ought to depend on the likelihood-ratio function and on it alone, without any further regard to how the experiment was actually planned or performed. I believe that this doctrine, which contradicts much that was recently most firmly established in statistical theory and practice, is basically correct and that it will soon greatly simplify and strengthen statistics. Let me not falsify history by intimating that appreciation of the likelihood-ratio function as much more than is ordinarily understood by a 'sufficient statistic' originated in the economic outlook and subjectivism. Actually, it was, so far as I know, begun by Barnard [2] and Fisher [6], and quite apart from these ideas. None the less, the economic outlook and the subjectivistic theory of probability lend strong support to the likelihood-ratio doctrine and promise to hasten its acceptance and exploitation.
Secondly, David Wallace has recently obtained a valuable new insight into the much vexed Behrens-Fisher problem by reconsidering it from the point of view of subjective probability.

1. G A Barnard, Sequential tests in industrial statistics, J. R. Statist. Soc. (Suppl.) 8 (1946), 1-26.
2. G A Barnard, A review of 'Sequential Analysis' by Abraham Wald, J. Amer. Statist. Ass. 42 (1947), 658-664.
3. G A Barnard, Simplified decision functions, Biometrika 41 (1954), 241-251.
4. Bruno de Finetti, La prévision: ses lois logiques, ses sources subjectives, Ann. Inst. Poincaré 7 (1937), 1-68.
5. Sir Ronald A Fisher, Statistical methods and scientific induction, J. R. Statist. Soc. (B) 17 (1955), 69-78.
6. Sir Ronald A Fisher, Statistical Methods and Scientific Inference (Oliver and Boyd, Edinburgh, 1956).
7. Carl Friedrich Gauss, Abhandlungen zur Methode der kleinsten Quadrate von Carl Friedrich Gauss (Berlin, 1887). (Translation from Latin by A Borsch and P Simon.)
8. Jerzy Neyman, L'estimation statistique, traitée comme un problème classique de probabilité, in Actualités scientifiques et industrielles no. 739 (Hermann et Cie., Paris, 1938), 25-57.
9. Frank P Ramsey, The Foundations of Mathematics and Other Logical Essays (Kegan Paul, London, 1931).
10. Leonard J Savage, The Foundations of Statistics (John Wiley and Sons, New York, 1954).

JOC/EFR March 2006
Annotated Articles

We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but we also address the question of arbitrarily close approximation.

Alan Turing pioneered semantics-to-syntax analysis of algorithms. You start with a large species of algorithms, and you finish up with a syntactic artifact that characterizes the species, typically a kind of machine that computes all and only the algorithms of the species. The task of analyzing a large species of algorithms seems daunting if not impossible. As in quicksand, one needs a rescue point, a fulcrum. In computation analysis, a fulcrum is a particular viewpoint on computation that clarifies and simplifies things to the point that analysis becomes possible. We review from that point of view Turing's analysis of human-executable computation, Kolmogorov's analysis of sequential bit-level computation, Gandy's analysis of a species of machine computation, and our own analysis of sequential computation.

Most modern applications are empowered by online services, so application developers frequently implement authentication and authorization. Major online providers, such as Facebook and Microsoft, provide SDKs for implementing authentication and authorization. This paper considers whether those SDKs enable typical developers to build secure apps.
Our work focuses on explicating implicit assumptions that are necessary for secure use of an SDK. Understanding these assumptions depends critically on not just the SDK itself, but on the underlying runtime systems with which the SDK interacts. Our work develops a systematic process for identifying critical implicit assumptions by building semantic models that capture both the logic of the SDK and the essential aspects of underlying systems. These semantic models provide the explicit basis for reasoning about the SDK's security. We use a formal analysis tool, along with the semantic models, to reason about all apps that can be built using the SDK. In particular, we formally check whether the SDK, along with the explicitly-captured assumptions, is sufficient to imply the desired security properties. We applied our approach to several widely used authentication/authorization SDKs. Our approach led to the discovery of several implicit assumptions in each SDK, including issues deemed serious enough to receive Facebook bug bounties and change the OAuth 2.0 specification. We verified that many apps constructed with these SDKs (indeed, the majority of apps in our study) are vulnerable to serious exploits because of these implicit assumptions, and we built a prototype testing tool that can detect several of the vulnerability patterns we identified.

We develop the first constructive algorithms for compiling single-qubit unitary gates into circuits over the universal V basis. The V basis is an alternative universal basis to the more commonly studied {H,T} basis. We propose two classical algorithms for quantum circuit compilation: the first algorithm has expected polynomial time (in precision log(1/ε)) and offers a depth/precision guarantee that improves upon state-of-the-art methods for compiling into the {H,T} basis by factors ranging from 1.86 to log2(5).
The second algorithm is analogous to direct search and yields circuits a factor of 3 to 4 times shorter than our first algorithm, and requires time exponential in log(1/ε); however, we show that in practice the runtime is reasonable for an important range of target precisions.

We describe the current version of the Distributed Knowledge Authorization Language (DKAL) and some proposed extensions. The changes, in comparison with previous versions of DKAL, include an extension of primal infon logic to include a weak form of disjunction, the explicit binding of variables in infons and in rules, the possibility of updating a principal's policy, a more detailed treatment of a principal's datasources, and the introduction of DKAL communities.

Primal infon logic (PIL) was introduced in 2009 in the framework of policy and trust management. In the meantime, some generalizations appeared, and there have been some changes in the syntax of the basic PIL. This paper is on the basic PIL, and one of our purposes is to "institutionalize" the changes. We prove a small-model theorem for the propositional fragment of basic primal infon logic (PPIL), give a simple proof of the PPIL locality theorem, and present a linear-time decision algorithm (announced earlier) for PPIL in a form convenient for generalizations. For the sake of completeness, we cover the universal fragment of basic primal infon logic. We wish that this paper becomes a standard reference on basic primal infon logic.

Our main goal is to put Datalog into a proper logic perspective. It may be too early to put Datalog into a proper perspective from the point of view of applications; nevertheless we discuss why Datalog pops up so often in applications.

The interview appeared in the section "News from New Zealand" curated by Cristian S. Calude.

Many prior trust management frameworks provide authorization logics for specifying policies based on distributed trust.
However, to implement a security protocol using these frameworks, one usually resorts to a general-purpose programming language. When reasoning about the security of the entire system, one must study not only policies in the authorization logic but also hard-to-analyze implementation code. This paper proposes DKAL*, a language for constructing executable specifications of authorization protocols. Protocol and policy designers can use the DKAL* authorization logic for expressing distributed trust relationships, and its small rule-based programming language to describe the message sequence of a protocol. Importantly, many low-level details of the protocol (e.g., marshaling formats or management of state consistency) are left abstract in DKAL*, but sufficient details must be provided in order for the protocol to be executable. We formalize the semantics of DKAL*, giving it both an operational semantics and a type system. We prove various properties of DKAL*, including type soundness and a decidability property for its underlying logic. We also present an interpreter for DKAL*, mechanically verified for correctness and security. We evaluate our work experimentally on several examples. Using our semantics, DKAL* programs can be analyzed for various protocol-specific properties of interest. Using our interpreter, programmers obtain an executable version of their protocol which can readily be tested and then deployed.

Primal infon logic was introduced in 2009 in connection with access control. In addition to traditional logic constructs, it contains unary connectives "p said" indispensable in the intended access control applications. Propositional primal infon logic is decidable in linear time, yet suffices for many common access control scenarios. The most obvious limitation on its expressivity is the failure of the transitivity law for implication: x → y and y → z do not necessarily yield x → z.
Here we introduce and investigate equiexpressive "transitive" extensions TPIL and TPIL* of propositional primal infon logic as well as their quote-free fragments TPIL0 and TPIL0* respectively. We prove the subformula property for TPIL0* and a similar property for TPIL*; we define Kripke models for the four logics and prove the corresponding soundness-and-completeness theorems; we show that, in all these logics, satisfiable formulas have small models; but our main result is a quadratic-time derivation algorithm for TPIL*.

How can one possibly analyze computation in general? The task seems daunting if not impossible. There are too many different kinds of computation, and the notion of general computation seems too amorphous. As in quicksand, one needs a rescue point, a fulcrum. In computation analysis, a fulcrum is a particular viewpoint on computation that clarifies and simplifies things to the point that analysis becomes possible. We review from that point of view the few foundational analyses of general computation in the literature: Turing's analysis of human computations, Gandy's analysis of mechanical computations, Kolmogorov's analysis of bit-level computation, and our own analysis of computation on the arbitrary abstraction level.

We attempt to put the title problem and the Church-Turing thesis into a proper perspective and to clarify some common misconceptions related to Turing's analysis of computation. We examine two approaches to the title problem, one well-known among philosophers and another among logicians.

The logic core of Distributed Knowledge Authorization Logic, DKAL, is constructive logic with a quotation construct "said". This logic is known as the logic of infons. The primal fragment of infon logic is amenable to linear time decision algorithms when policies and queries are ground. In the presence of policies with variables and implicit universal quantification, but no functions of positive arity, primal infon logic can be reduced to Datalog.
We here present a practical reduction of the entailment problem for primal infon logic with individual variables to the entailment problem of Datalog.

Propositional primal logic, as defined by Gurevich and Neeman, has two kinds of quotations: p said φ, and p implied φ. Note 1. The derivation problem for propositional primal logic with one kind of quotations is solvable in linear time. Note 2. In the Hilbertian calculus for propositional primal logic, the shortest derivation of a formula φ from hypotheses H may be exponential in the length of (H,φ).

John organized a state lottery and his wife won the main prize. You may feel that the event of her winning wasn't particularly random, but how would you argue that in a fair court of law? Traditional probability theory does not even have the notion of random events. Algorithmic information theory does, but it is not applicable to real-world scenarios like the lottery one. We attempt to rectify the situation.

Gurevich and Neeman introduced Distributed Knowledge Authorization Language (DKAL). The world of DKAL consists of communicating principals computing their own knowledge in their own states. DKAL is based on a new logic of information, the so-called infon logic, and its efficient subsystem called primal logic. In this paper we simplify Kripkean semantics of primal logic and study various extensions of it in search of a balance between expressivity and efficiency. On the proof-theoretic side we develop cut-free Gentzen-style sequent calculi for the original primal logic and its extensions.

In the first part of the paper, we discuss abstract Hilbertian deductive systems; these are systems defined by abstract notions of formula, axiom, and inference rule. We use these systems to develop a general method for converting derivability problems, from a broad range of deductive systems, into the derivability problem in a quite specific system, namely the Datalog fragment of universal Horn logic.
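As an illustration of the Datalog target, here is a generic naive bottom-up evaluator (written under assumed conventions of our own: variables are capitalized strings and rule heads only use variables bound in the body; this is not the paper's translation):

```python
# Hedged illustration: naive bottom-up evaluation of a Datalog program.
# A rule is (head, body); atoms are (predicate, args) with string arguments;
# an argument starting with an uppercase letter is a variable.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(atom, fact, subst):
    # Extend subst so that atom matches fact, or return None on failure.
    (pred, args), (fpred, fargs) = atom, fact
    if pred != fpred or len(args) != len(fargs):
        return None
    s = dict(subst)
    for a, f in zip(args, fargs):
        if is_var(a):
            if s.setdefault(a, f) != f:
                return None
        elif a != f:
            return None
    return s

def evaluate(rules, facts):
    # Repeat until no rule produces a new fact (a fixed point is reached).
    facts = set(facts)
    while True:
        new = set()
        for head, body in rules:
            substs = [{}]
            for atom in body:
                substs = [s2 for s in substs for f in facts
                          for s2 in [match(atom, f, s)] if s2 is not None]
            for s in substs:
                pred, args = head
                new.add((pred, tuple(s.get(a, a) for a in args)))
        if new <= facts:
            return facts
        facts |= new

# Transitive closure of an edge relation:
rules = [(("path", ("X", "Y")), [("edge", ("X", "Y"))]),
         (("path", ("X", "Z")), [("edge", ("X", "Y")), ("path", ("Y", "Z"))])]
facts = {("edge", ("a", "b")), ("edge", ("b", "c"))}
result = evaluate(rules, facts)
assert ("path", ("a", "c")) in result
```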
In this generality, the derivability problems may not be recursively solvable, let alone feasible; in particular, we may get Datalog ``programs'' with infinitely many rules. We then discuss what would be needed to obtain computationally useful results from this method. In the second part of the paper, we analyze a particular deductive system, primal infon logic with variables, which arose in the development of the authorization language DKAL. A consequence of our analysis of primal infon logic with variables is that its derivability problems can be translated into Datalog with only a quadratic increase of size.

Consider interaction of principals where each principal has its own policy and different principals may not trust each other. In one scenario the principals could be pharmaceutical companies, hospitals, biomedical labs and health related government institutions. In another scenario principals could be navy fleets of different and not necessarily friendly nations. In spite of the complexity of interaction, one may want to prove that certain properties remain invariant. For example, in the navy scenario, each fleet should have enough assurances from other fleets to avoid unfortunate incidents. Furthermore, one wants to use automated provers to prove invariance. A natural approach to this problem is to provide a high-level logic-based language for the principals to communicate. We do just that. Three years ago two of us presented the first incarnation of Distributed Knowledge Authorization Language (DKAL). Here we present a new and much different incarnation of DKAL that we call Evidential DKAL. Statements communicated in Evidential DKAL are supposed to be accompanied with sufficient justifications.

The tower-of-Babel problem is rather general: How to enable a collaboration among experts speaking different languages? A computer security version of the tower-of-Babel problem is rather important.
A recent Microsoft solution for that security problem, called Security Assessment Sharing, is based on this idea: A tiny common language goes a long way. We construct simple mathematical models showing that the idea is sound.

Recent analysis of sequential algorithms resulted in their axiomatization and in a representation theorem stating that, for any sequential algorithm, there is an abstract state machine (ASM) with the same states, initial states and state transitions. That analysis, however, abstracted from details of intra-step computation, and the ASM, produced in the proof of the representation theorem, may and often does explore parts of the state unexplored by the algorithm. We refine the analysis, the axiomatization and the representation theorem. Emulating a step of the given algorithm, the ASM, produced in the proof of the new representation theorem, explores exactly the part of the state explored by the algorithm. That frugality pays off when state exploration is costly. The algorithm may be a high-level specification, and a simple function call on the abstraction level of the algorithm may hide expensive interaction with the environment. Furthermore, the original analysis presumed that state functions are total. Now we allow state functions, including equality, to be partial so that a function call may cause the algorithm as well as the ASM to hang. Since the emulating ASM does not make any superfluous function calls, it hangs only if the algorithm does.

Knowledge and information are central notions in DKAL, a logic-based authorization language for decentralized systems, the most expressive among such languages in the literature. Pieces of information are called infons. Here we present DKAL 2, a surprisingly simpler version of the language that expresses new important scenarios (in addition to the old ones) and that is built around a natural logic of infons.
Trust became definable, and its properties, postulated earlier as DKAL house rules, are now proved. In fact, none of the house rules postulated earlier is now needed. We identify also a most practical fragment of DKAL where the query derivation problem is solved in linear time.

Abstract State Machines (ASMs) allow us to model system behaviors at any desired level of abstraction, including levels with rich data types, such as sets or sequences. The availability of high-level data types allows us to represent state elements abstractly and faithfully at the same time. AsmL is a rich ASM-based specification and programming language. In this paper we look at symbolic analysis of model programs written in AsmL with a background T of linear arithmetic, sets, tuples, and maps. We first provide a rigorous account of the update semantics of AsmL in terms of background T, and we formulate the problem of bounded path exploration of model programs, or the problem of Bounded Model Program Checking (BMPC), as a satisfiability modulo T problem. Then we investigate the boundaries of decidable and undecidable cases for BMPC. In a general setting, BMPC is shown to be highly undecidable (Σ^1_1-complete); restricted to finite sets, the problem remains RE-hard (Σ^0_1-hard). On the other hand, BMPC is shown to be decidable for a class of basic model programs that are common in practice. We apply Satisfiability Modulo Theories (SMT) tools to BMPC. The recent SMT advances allow us to directly analyze specifications using sets and maps with specialized decision procedures for expressive fragments of these theories. Our approach is extensible; background theories need in fact only be partially solved by the SMT solver; we use simulation of ASMs to support additional theories that are beyond the scope of available decision procedures.

Infons are statements viewed as containers of information (rather than representations of truth values).
In the context of access control, the logic of infons is a conservative extension of logic known as constructive or intuitionistic. Distributed Knowledge Authorization Language uses additional unary connectives "p said" and "p implied" where p ranges over principals. Here we investigate infon logic and a narrow but useful primal fragment of it. In both cases, we develop model theory and analyze the derivability problem: Does the given query follow from the given hypotheses? Our more involved technical results are on primal infon logic. We construct an algorithm for the multiple derivability problem: Which of the given queries follow from the given hypotheses? Given a bound on the quotation depth of the hypotheses, the algorithm works in linear time. We quickly discuss the significance of this result for access control.

DKAL is a new authorization language based on existential fixed-point logic and more expressive than existing authorization languages in the literature. We present some lessons learned during the first practical application of DKAL and some improvements that we made to DKAL as a result. We develop operational semantics for DKAL and present some complexity results related to the operational semantics.

We propose a syntax and semantics for interactive abstract state machines to deal with the following situation. A query is issued during a certain step, but the step ends before any reply is received. Later, a reply arrives, and later yet the algorithm makes use of this reply. By a persistent query, we mean a query for which a late reply might be used. Syntactically, our proposal involves issuing, along with a persistent query, a location where a late reply is to be stored. Semantically, it involves only a minor modification of the existing theory of interactive small-step abstract state machines.

In connection with machine arithmetic, we are interested in systems of constraints of the form x + k ≤ y + l.
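For illustration, over the integers each constraint x + k ≤ y + l can be read as the difference constraint x - y ≤ l - k, and a whole system can be checked by Bellman-Ford-style relaxation. This is the standard textbook reduction, sketched under our own encoding, not necessarily the paper's method:

```python
# Hedged sketch: a constraint x + k <= y + l becomes x - y <= l - k, i.e.
# an edge y -> x of weight l - k in the constraint graph. The system is
# satisfiable over the integers iff that graph has no negative cycle.

def satisfiable(constraints, variables):
    # constraints: list of (x, k, y, l) encoding x + k <= y + l.
    edges = [(y, x, l - k) for (x, k, y, l) in constraints]
    dist = {v: 0 for v in variables}
    for _ in range(len(variables)):
        for (u, v, w) in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # If any edge can still be relaxed, there is a negative cycle.
    return all(dist[u] + w >= dist[v] for (u, v, w) in edges)

# x + 1 <= y and y + 1 <= x force x < y < x: unsatisfiable.
assert not satisfiable([("x", 1, "y", 0), ("y", 1, "x", 0)], ["x", "y"])
# x <= y + 5 alone is satisfiable.
assert satisfiable([("x", 0, "y", 5)], ["x", "y"])
```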
Over integers, the satisfiability problem for such systems is polynomial time. The problem becomes NP-complete if we restrict attention to the residues for a fixed modulus N.

Existential fixed point logic (EFPL) is a natural fit for some applications, and the purpose of this talk is to attract attention to EFPL. The logic is also interesting in its own right as it has attractive properties. One of those properties is rather unusual: truth of formulas can be defined (given appropriate syntactic apparatus) in the logic. We mentioned that property elsewhere, and we use this opportunity to provide the proof.

A natural liberalization of Datalog is used in the Distributed Knowledge Authorization Language (DKAL). We show that the expressive power of this liberal Datalog is that of existential fixed-point logic. The exposition is self-contained.

People usually regard algorithms as more abstract than the programs that implement them. The natural way to formalize this idea is that algorithms are equivalence classes of programs with respect to a suitable equivalence relation. We argue that no such equivalence relation exists.

DKAL is a new declarative authorization language for distributed systems. It is based on existential fixed-point logic and is considerably more expressive than existing authorization languages in the literature. Yet its query algorithm is within the same bounds of computational complexity as e.g. that of SecPAL. DKAL's communication is targeted which is beneficial for security and for liability protection. DKAL enables flexible use of functions; in particular principals can quote (to other principals) whatever has been said to them. DKAL strengthens the trust delegation mechanism of SecPAL. A novel information order contributes to succinctness. DKAL introduces a semantic safety condition that guarantees the termination of the query algorithm.

DKAL is an expressive declarative authorization language based on existential fixed-point logic.
It is considerably more expressive than existing languages in the literature, and yet feasible. Our query algorithm is within the same bounds of computational complexity as e.g. that of SecPAL. DKAL's distinguishing features include information order that contributes to succinctness.

[190] Nikolaj Bjørner, Andreas Blass, and Yuri Gurevich
Content-Dependent Chunking for Differential Compression, the Local Maximum Approach
Journal of Computer and System Sciences, Volume 76, Issues 3-4, May-June 2010, Pages 154-203
Originally published as Microsoft Research technical report MSR-TR-2007-109, July 2007

When a file is to be transmitted from a sender to a recipient and when the latter already has a file somewhat similar to it, remote differential compression seeks to determine the similarities interactively so as to transmit only the part of the new file not already in the recipient's old file. Content-dependent chunking means that the sender and recipient chop their files into chunks, with the cutpoints determined by some internal features of the files, so that when segments of the two files agree (possibly in different locations within the files) the cutpoints in such segments tend to be in corresponding locations, and so the chunks agree. By exchanging hash values of the chunks, the sender and recipient can determine which chunks of the new file are absent from the old one and thus need to be transmitted. We propose two new algorithms for content-dependent chunking, and we compare their behavior, on random files, with each other and with previously used algorithms. One of our algorithms, the local maximum chunking method, has been implemented and found to work better in practice than previously used algorithms.
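The local-maximum idea admits a small sketch (the hash function, horizon h, and boundary handling below are illustrative placeholders, not the implemented algorithm's parameters): a position is a cutpoint when its hash value strictly exceeds the hash values of all positions within distance h on either side, so cutpoints depend only on local content.

```python
# Illustrative local-maximum chunking: position i is a cutpoint when its
# hash value is strictly greater than every hash value within distance h.
# The byte-wise hash below is a placeholder, not the actual hash used.

def cutpoints(data, h, hash_fn=lambda b: (b * 2654435761) % 2**32):
    values = [hash_fn(b) for b in data]
    cuts = []
    for i in range(len(values)):
        window = values[max(0, i - h):i] + values[i + 1:i + 1 + h]
        if window and all(values[i] > v for v in window):
            cuts.append(i)
    return cuts

# Because a cutpoint depends only on a window of 2h+1 positions, a common
# segment of two files yields the same cutpoints even at different offsets:
data = bytes(range(50)) * 2
shifted = b"\xff" * 7 + data  # same content behind a 7-byte prefix
h = 5
away_from_prefix = {i - 7 for i in cutpoints(shifted, h) if i - 7 >= h}
assert away_from_prefix == {j for j in cutpoints(data, h) if j >= h}
```

This offset-invariance is exactly what makes content-dependent cutpoints useful for differential compression: matching segments produce matching chunks.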
Theoretical comparisons between the various algorithms can be based on several criteria, most of which seek to formalize the idea that chunks should be neither too small (so that hashing and sending hash values become inefficient) nor too large (so that agreements of entire chunks become unlikely). We propose a new criterion, called the slack of a chunking method, which seeks to measure how much of an interval of agreement between two files is wasted because it lies in chunks that don't agree. Finally, we show how to efficiently find the cutpoints for local maximum chunking.

[189] Yuri Gurevich, Dirk Leinders and Jan Van den Bussche
A Theory of Stream Queries
11th International Symposium on Database Programming Languages (DBPL 2007)
Springer Lecture Notes in Computer Science 4797 (2007), 153-168

Data streams are modeled as infinite or finite sequences of data elements coming from an arbitrary but fixed universe. The universe can have various built-in functions and predicates. Stream queries are modeled as functions from streams to streams. Both timed and untimed settings are considered. Issues investigated include abstract definitions of computability of stream queries; the connection between abstract computability, continuity, monotonicity, and non-blocking operators; and bounded memory computability of stream queries using abstract state machines (ASMs).

[188a] Yuri Gurevich
Proving Church's Thesis
Computer Science --- Theory and Applications
CSR 2007, 2nd International Symposium on Computer Science in Russia
Springer Lecture Notes 4649 (2007), 1-3

This is an extended abstract of the opening talk of CSR 2007. It is based on 188.

[188] Nachum Dershowitz and Yuri Gurevich
A natural axiomatization of computability and proof of Church's Thesis
Bulletin of Symbolic Logic 14:3 (Sept. 2008), 299-350
An earlier version was published as Microsoft Research technical report MSR-TR-2007-85, July 2007

Church's Thesis asserts that the only numeric functions that can be calculated by effective means are the recursive ones, which are the same, extensionally, as the Turing-computable numeric functions. The Abstract State Machine Theorem states that every classical algorithm is behaviorally equivalent to an abstract state machine. This theorem presupposes three natural postulates about algorithmic computation. Here, we show that augmenting those postulates with an additional requirement regarding basic operations gives a natural axiomatization of computability and a proof of Church's Thesis, as Goedel and others suggested may be possible. In a similar way, but with a different set of basic operations, one can prove Turing's Thesis, characterizing the effective string functions, and---in particular---the effectively-computable functions on string representations of numbers.

[187] Robert H. Gilman, Yuri Gurevich and Alexei Miasnikov
A Geometric Zero-One Law
The Journal of Symbolic Logic 74:3, Sep. 2009

Each relational structure X has an associated Gaifman graph, which endows X with the properties of a graph. If x is an element of X, let B_n(x) be the ball of radius n around x. Suppose that X is infinite, connected and of bounded degree. A first-order sentence s in the language of X is almost surely true (resp. a.s. false) for finite substructures of X if for every x in X, the fraction of substructures of B_n(x) satisfying s approaches 1 (resp. 0) as n approaches infinity. Suppose further that, for every finite substructure, X has a disjoint isomorphic substructure. Then every s is a.s. true or a.s. false for finite substructures of X. This is one form of the geometric zero-one law. We formulate it also in a form that does not mention the ambient infinite structure. In addition, we investigate various questions related to the geometric zero-one law.
[186] Andreas Blass and Yuri Gurevich
Background of Computation
Bulletin of the European Association for Theoretical Computer Science, Number 92 (June 2007)

In a computational process, certain entities (for example sets or arrays) and operations on them may be automatically available, for example by being provided by the programming language. We define background classes to formalize this idea, and we study some of their basic properties. The present notion of background class is more general than the one we introduced in an earlier paper 143, and it thereby corrects one of the examples in that paper. The greater generality requires a non-trivial notion of equivalence of background classes, which we explain and use. Roughly speaking, a background class assigns to each set (of atoms) a structure (for example of sets or arrays or combinations of these and similar entities), and it assigns to each embedding of one set of atoms into another a standard embedding between the associated background structures. We discuss several, frequently useful, properties that background classes may have, for example that each element of a background structure depends (in some sense) on only finitely many atoms, or that there are explicit operations by which all elements of background structures can be produced from atoms.

[185] Andreas Blass and Yuri Gurevich
Zero-One Laws: Thesauri and Parametric Conditions
Bulletin of the European Association for Theoretical Computer Science, Number 91 (February 2007), 125-144
Reprinted in Logic at the Crossroads: An Interdisciplinary View, Amitabha Gupta, Rohit Parikh, Johan van Benthem, eds., Allied Publishers, New Delhi, 2007, pages 187-206
Reprinted in Proof, Computation and Agency: Logic at the Crossroads, Amitabha Gupta, Rohit Parikh, Johan van Benthem, eds., Springer 2011, pages 99-114

The 0-1 law for first-order properties of finite structures and its proof via extension axioms were first obtained in the context of arbitrary finite structures for a fixed finite vocabulary. But it was soon observed that the result and the proof continue to work for structures subject to certain restrictions. Examples include undirected graphs, tournaments, and pure simplicial complexes. We discuss two ways of formalizing these extensions, Oberschelp's parametric conditions (Springer Lecture Notes in Mathematics 969, 1982) and our thesauri of 149. We show that, if we restrict thesauri by requiring their probability distributions to be uniform, then they and parametric conditions are equivalent. Nevertheless, some situations admit more natural descriptions in terms of thesauri, and the thesaurus point of view suggests some possible extensions of the theory.

[184] Martin Grohe, Yuri Gurevich, Dirk Leinders, Nicole Schweikardt, Jerzy Tyszkiewicz, and Jan Van den Bussche
Database Query Processing Using Finite Cursor Machines
Theory of Computing Systems 44:4 (April 2009), pages 533-560
An earlier version appeared in ICDT 2007, International Conference on Database Theory, Springer Lecture Notes in Computer Science 4353 (2007), 284-298

We introduce a new abstract model of database query processing, finite cursor machines, that incorporates certain data streaming aspects. The model describes quite faithfully what happens in so-called ``one-pass'' and ``two-pass query processing''. Technically, the model is described in the framework of abstract state machines. Our main results are upper and lower bounds for processing relational algebra queries in this model, specifically, queries of the semijoin fragment of the relational algebra.
[183] Dan Teodosiu, Nikolaj Bjørner, Yuri Gurevich, Mark Manasse, Joe Porkka
Optimizing File Replication over Limited-Bandwidth Networks using Remote Differential Compression
Microsoft Research technical report MSR-TR-2006-157, November 2006

Remote Differential Compression (RDC) protocols can efficiently update files over a limited-bandwidth network when two sites have roughly similar files; no site needs to know the content of another's files a priori. We present a heuristic approach to identify and transfer the file differences that is based on finding similar files, subdividing the files into chunks, and comparing chunk signatures. Our work significantly improves upon previous protocols such as LBFS and RSYNC in three ways. Firstly, we present a novel algorithm to efficiently find the client files that are the most similar to a given server file. Our algorithm requires 96 bits of meta-data per file, independent of file size, and thus allows us to keep the metadata in memory and eliminate the need for expensive disk seeks. Secondly, we show that RDC can be applied recursively to signatures to reduce the transfer cost for large files. Thirdly, we describe new ways to subdivide files into chunks that identify file differences more accurately. We have implemented our approach in DFSR, a state-based multimaster file replication service shipping as part of Windows Server 2003 R2. Our experimental results show that similarity detection produces results comparable to LBFS while incurring a much smaller overhead for maintaining the metadata. Recursive signature transfer further increases replication efficiency by up to several orders of magnitude.

[182] Andreas Blass, Yuri Gurevich, Dean Rosenzweig and Benjamin Rossman
Interactive Small-Step Algorithms II: Abstract State Machines and the Characterization Theorem
Logical Methods in Computer Science 3:4 (2007), paper 4
Preliminary version: Microsoft Research technical report MSR-TR-2006-171, November 2006

In earlier work, the Abstract State Machine Thesis --- that arbitrary algorithms are behaviorally equivalent to abstract state machines --- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. In a companion paper 176 the axiomatisation was extended to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend here the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. We prove the characterization theorem for extended ASMs with respect to general algorithms as axiomatised in 176.

[181] Yuri Gurevich
ASMs in the Classroom: Personal Experience
in Logics of Specification Languages, editors Dines Bjørner and Martin C. Henson
Springer, 2008, pages 599-602

We share our experience of using abstract state machines for teaching computation theory at the University of Michigan.

[180] Andreas Blass and Yuri Gurevich
A Note on Nested Words
Tech report MSR-TR-2006-139, Microsoft Research, October 2006

For every regular language of nested words, the underlying strings form a context-free language, and every context-free language can be obtained in this way. Nested words and nested-word automata are generalized to motley words and motley-word automata. Every motley-word automaton is equivalent to a deterministic one.
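The context-free flavor of nested-word languages comes from the well-nesting of call and return positions; a stack-based membership check for well-nestedness (the tagging convention below is hypothetical and purely illustrative):

```python
# Illustrative: a tagged word is a sequence of ("call", a), ("ret", a), or
# ("int", a) symbols. It is well-nested when call and return positions match
# like parentheses; internal symbols and letters are unconstrained.

def well_nested(word):
    depth = 0
    for kind, _letter in word:
        if kind == "call":
            depth += 1
        elif kind == "ret":
            if depth == 0:
                return False  # an unmatched return
            depth -= 1
    return depth == 0  # no unmatched calls remain

assert well_nested([("call", "a"), ("int", "b"), ("call", "c"), ("ret", "c"), ("ret", "a")])
assert not well_nested([("call", "a"), ("ret", "a"), ("ret", "a")])
```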
For every regular language of motley words, the underlying strings form a finite intersection of context-free languages, and every finite intersection of context-free languages can be obtained in this way.

[179] Yuri Gurevich, Margus Veanes and Charles Wallace
Can Abstract State Machines Be Useful in Language Theory?
Theoretical Computer Science 376 (2007) 17-29
Extended Abstract in Proc. DLT 2006 (Developments in Language Theory), Eds. O.H. Ibarra and Z. Dang, Springer Lecture Notes in Computer Science 4036 (2006), pp. 14-19

The abstract state machine (ASM) is a modern computation model. ASMs and ASM based tools are used in academia and industry, albeit in a modest scale. They allow you to give high-level operational semantics to computer artifacts and to write executable specifications of software and hardware at the desired abstraction level. In connection to the 2006 conference on Developments in Language Theory, we point out several ways that we believe abstract state machines can be useful to the DLT community.

[178] Andreas Blass and Yuri Gurevich
Program Termination, and Well Partial Orderings
ACM Transactions on Computational Logic 9:3 (July 2008)

The following known observation may be useful in establishing program termination: if a transitive relation R is covered by finitely many well-founded relations U_1,...,U_n then R is well-founded. A question arises how to bound the ordinal height |R| of the relation R in terms of the ordinals α_i = |U_i|. We introduce the notion of the stature ||P|| of a well partial ordering P and show that |R| is less than or equal to the stature of the direct product α_1×...×α_n and that this bound is tight. The notion of stature is of considerable independent interest. We define ||P|| as the ordinal height of the forest of nonempty bad sequences of P, but it has many other natural and equivalent definitions. In particular, ||P|| is the supremum, and in fact the maximum, of the lengths of linearizations of P. And the stature of the direct product α_1×...×α_n is equal to the natural product of these ordinals.

[177] Yuri Gurevich and Tanya Yavorskaya
On Bounded Exploration and Bounded Nondeterminism
Tech report MSR-TR-2006-07, Microsoft Research, January 2006

This report consists of two separate parts, essentially two oversized footnotes to 141. In Chapter I, Yuri Gurevich and Tatiana Yavorskaya present and study a more abstract version of the bounded exploration postulate. In Chapter II, Tatiana Yavorskaya gives a complete form of the characterization, sketched in 141, of bounded-choice sequential algorithms.

[176] Andreas Blass, Yuri Gurevich, Dean Rosenzweig and Benjamin Rossman
Interactive Small-Step Algorithms I: Axiomatization
Logical Methods in Computer Science 3:4 (2007), paper 3
Preliminary version: Microsoft Research technical report MSR-TR-2006-170, November 2006

In earlier work, the Abstract State Machine Thesis --- that arbitrary algorithms are behaviorally equivalent to abstract state machines --- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received. This is essentially part one of Microsoft Research technical report MSR-TR-2005-113, August 2005. 182 is essentially the remainder of the 2005 technical report.

[175] Yuri Gurevich and Paul Schupp
Membership Problem for Modular Group
SIAM Journal on Computing 37:2 (2007), 425-459

The modular group plays an important role in many branches of mathematics.
We show that the membership problem for the modular group is polynomial time in the worst case. We also show that the membership problem for a free group remains polynomial time when elements are written in a normal form with exponents.

[174] Yuri Gurevich
Interactive Algorithms 2005 with Added Appendix
Originally published, without the appendix, in Proceedings of MFCS 2005, Math Foundations of Computer Science, 2005, Gdansk, Poland, Editors J. Jedrzejowicz and A. Szepietowski, Springer Lecture Notes in Computer Science 3618 (2005), 26-38
Reprinted, with the appendix, in Interactive Computation: The New Paradigm, eds. Dina Goldin, Scott A. Smolka, Peter Wegner, Springer-Verlag, 2006, pages 165-182

A sequential algorithm just follows its instructions and thus cannot make a nondeterministic choice all by itself, but it can be instructed to solicit outside help to make a choice. Similarly, an object-oriented program cannot create a new object all by itself; a create-a-new-object command solicits outside help. These are but two examples of intra-step interaction of an algorithm with its environment. Here we motivate and survey recent work on interactive algorithms within the Behavioral Computation Theory project.

[173] Andreas Blass, Yuri Gurevich, Lev Nachmanson, and Margus Veanes
Play to Test
Microsoft Research technical report MSR-TR-2005-04, January 2005
5th International Workshop on Formal Approaches to Testing of Software (FATES 2005), Edinburgh, July 2005

Testing tasks can be viewed (and organized!) as games against nature. We introduce and study reachability games. Such games are ubiquitous. A single industrial test suite may involve many instances of a reachability game. Hence the importance of optimal or near optimal strategies for reachability games. We find out when exactly optimal strategies exist for a given reachability game, and how to construct them.

[172] Andreas Blass and Yuri Gurevich
Why Sets?
Originally published in Bulletin of the European Association for Theoretical Computer Science Number 84 (October 2004) Revised and published as Microsoft Research technical report MSR-TR-2006-138, September 2006 Reprinted in "Pillars of Computer Science: Essays Dedicated to Boris (Boaz) Trakhtenbrot on the Occasion of His 85th Birthday" Editors Arnon Avron, Nachum Dershowitz, and Alexander Rabinovich Lecture Notes in Computer Science 4800 Springer-Verlag, Berlin, 2008. Sets play a key role in foundations of mathematics. Why? To what extent is it an accident of history? Imagine that you have a chance to talk to mathematicians from a faraway planet. Would their mathematics be set-based? What are the alternatives to the set-theoretic foundation of mathematics? Besides, set theory seems to play a significant role in computer science, in particular in database theory and formal methods. Is there a good justification for that? We discuss these and related issues. [171] Andreas Blass and Yuri Gurevich Ordinary Interactive Small-Step Algorithms, III ACM Transactions on Computation Logic 8:3 (July 2007), article 16 a preliminary version was published as a part of MSR-TR-2004-88 This is the third in a series of three papers extending the proof of the Abstract State Machine Thesis --- that arbitrary algorithms are behaviorally equivalent to abstract state machines --- to algorithms that can interact with their environments during a step rather than only between steps. The first two papers are 166 and 170. As in the first two papers of the series, we are concerned here with ordinary, small-step, interactive algorithms. This means that the algorithms (1) proceed in discrete, global steps, (2) perform only a bounded amount of work in each step, (3) use only such information from the environment as can be regarded as answers to queries, and (4) never complete a step until all queries from that step have been answered.
After reviewing the previous papers' definitions of such algorithms, of behavioral equivalence, and of abstract state machines (ASMs), we prove the main result: Every ordinary, interactive, small-step algorithm is behaviorally equivalent to an ASM. We also discuss some possible variations of and additions to the ASM semantics. [170] Andreas Blass and Yuri Gurevich Ordinary Interactive Small-Step Algorithms, II ACM Transactions on Computation Logic 8:3 (July 2007), article 15 a preliminary version was published as a part of MSR-TR-2004-88 This is the second in a series of three papers extending the proof of the Abstract State Machine Thesis --- that arbitrary algorithms are behaviorally equivalent to abstract state machines --- to algorithms that can interact with their environments during a step rather than only between steps. The first paper is 166. As in the first paper of the series, we are concerned here with ordinary, small-step, interactive algorithms. This means that the algorithms (1) proceed in discrete, global steps, (2) perform only a bounded amount of work in each step, (3) use only such information from the environment as can be regarded as answers to queries, and (4) never complete a step until all queries from that step have been answered. After reviewing the previous paper's formal description of such algorithms and the definition of behavioral equivalence, we define ordinary, interactive, small-step abstract state machines (ASMs). Except for very minor modifications, these are the machines commonly used in the ASM literature. We define their semantics in the framework of ordinary algorithms, and we show that they satisfy the postulates for these algorithms. This material lays the groundwork for the final paper in the series, in which we shall prove the Abstract State Machine Thesis for ordinary, interactive, small-step algorithms: All such algorithms are equivalent to ASMs.
[169] Yuri Gurevich, Benjamin Rossman and Wolfram Schulte Semantic Essence of AsmL Theoretical Computer Science Volume 343, issue 3, 17 October 2005, pages 370-412 Originally published as Microsoft Research TR-2004-27, March 2004 The Abstract State Machine Language, AsmL, is a novel executable specification language based on the theory of Abstract State Machines. AsmL is object-oriented, provides high-level mathematical data-structures, and is built around the notion of synchronous updates and finite choice. AsmL is fully integrated into the .NET framework and Microsoft development tools. In this paper, we explain the design rationale of AsmL and provide static and dynamic semantics for a kernel of the language. [169a] Yuri Gurevich, Benjamin Rossman and Wolfram Schulte Semantic Essence of AsmL: Extended Abstract In Formal Methods of Components and Objects, FMCO 2003 Frank S. de Boer, Marcello M. Bonsangue, Susanne Graf, Willem-Paul de Roever, Editors Springer Lecture Notes in Computer Science 3188 (2004), 240-259 This is an extended abstract of article 169. [168] Yuri Gurevich and Rostislav Yavorskiy Observations on the Decidability of Transitions Abstract State Machines 2004 W. Zimmerman and B. Thalheim, editors Springer Lecture Notes in Computer Science 3052 (2004), 161-168 Consider a multiple-agent transition system such that, for some basic types T[1],...,T[n], the state of any agent can be represented as an element of the Cartesian product T[1]×...×T[n]. The system evolves by means of global steps. During such a step, new agents may be created and some existing agents may be updated or removed, but the total number of created, updated and removed agents is uniformly bounded. We show that, under appropriate conditions, there is an algorithm for deciding assume-guarantee properties of one-step computations. 
The result can be used for automatic invariant verification as well as for finite state approximation of the system in the context of test-case generation from AsmL specifications. [167] Yuri Gurevich Intra-Step Interaction Abstract State Machines 2004 W. Zimmerman and B. Thalheim, editors Springer Lecture Notes in Computer Science 3052 (2004), 1-5 For a while it seemed possible to pretend that all interaction between an algorithm and its environment occurs inter-step, but not anymore. Andreas Blass, Benjamin Rossman and the speaker are extending the Small-Step Characterization Theorem (that asserts the validity of the sequential version of the ASM thesis) and the Wide-Step Characterization Theorem (that asserts the validity of the parallel version of the ASM thesis) to intra-step interacting algorithms. A later comment. This was my first talk on intra-step interactive algorithms. The intended audience was the ASM community. 174 is a later talk on this topic, and it is addressed to a general computer science audience. [166] Andreas Blass and Yuri Gurevich Ordinary Interactive Small-Step Algorithms, I ACM Transactions on Computation Logic Vol. 7, no. 2 (April 2006), pages 363 - 419 a preliminary version was published as MSR-TR-2004-16 This is the first in a series of papers extending the Abstract State Machine Thesis --- that arbitrary algorithms are behaviorally equivalent to abstract state machines --- to algorithms that can interact with their environments during a step rather than only between steps. In the present paper, we describe, by means of suitable postulates, those interactive algorithms that (1) proceed in discrete, global steps, (2) perform only a bounded amount of work in each step, (3) use only such information from the environment as can be regarded as answers to queries, and (4) never complete a step until all queries from that step have been answered. We indicate how a great many sorts of interaction meet these requirements. 
We also discuss in detail the structure of queries and replies and the appropriate definition of equivalence of algorithms. Finally, motivated by our considerations concerning queries, we discuss a generalization of first-order logic in which the arguments of function and relation symbols are not merely tuples of elements but orbits of such tuples under groups of permutations of the argument places. [165] Yuri Gurevich Abstract State Machines: An Overview of the Project in "Foundations of Information and Knowledge Systems" editors Dietmar Seipel and Jose Maria Turull-Torres Springer Lecture Notes in Computer Science 2942 (2004), pages 6-13 We quickly survey the ASM project, from its foundational roots to industrial applications. [164] Andreas Blass and Yuri Gurevich Algorithms: A Quest for Absolute Definitions Originally in Bulletin of the European Association for Theoretical Computer Science Number 81 (October 2003), pages 195-225 Reprinted in 2004 World Scientific book Current Trends in Theoretical Computer Science pages 283-311 Reprinted in Church's Thesis After 70 Years eds. Adam Olszewski, Jan Wolenski, Robert Janusz Ontos Verlag, 2006, pages 24-57 What is an algorithm? The interest in this foundational problem is not only theoretical; applications include specification, validation and verification of software and hardware systems. We describe the quest to understand and define the notion of algorithm. We start with the Church-Turing thesis and contrast Church's and Turing's approaches, and we finish with some recent investigations. [163] Mike Barnett, Wolfgang Grieskamp, Yuri Gurevich, Wolfram Schulte, Nikolai Tillmann and Margus Veanes Scenario-oriented Modeling in AsmL and Its Instrumentation for Testing Proc. 
of 2nd International Workshop on Scenarios and State Machines: Models, Algorithms, and Tools (pages 8-14) held at ICSE 2003, International Conference on Software Engineering 2003 We present an approach for modeling use cases and scenarios in the Abstract State Machine Language and discuss how to use such models for validation and verification purposes. [162] Yuri Gurevich and Saharon Shelah Spectra of Monadic Second-Order Formulas with One Unary Function 18th Annual IEEE Symposium on Logic in Computer Science IEEE Computer Society, 2003, pages 291-300. We prove that the spectrum of any monadic second-order formula F with one unary function symbol (and no other function symbols) is eventually periodic, so that there exist natural numbers p>0 (a period) and t (a p-threshold) such that if F has a model of cardinality n>t then it has a model of cardinality n+p. (In the web version, some additional proof details are provided because some readers asked for them.) [161] Yuri Gurevich and Nikolai Tillmann Partial Updates Theoretical Computer Science Volume 336, Issues 2-3, 26 May 2005, Pages 311-342 (A preliminary version was published in "Abstract State Machines 2003" Springer Lecture Notes in Computer Science 2589 (2003), pages 57-86.) A data structure instance, e.g. a set or file or record, may be modified independently by different parts of a computer system. The modifications may be nested. Such hierarchies of modifications need to be efficiently checked for consistency and integrated. This is the problem of partial updates in a nutshell. In our first paper on the subject 156, we developed an algebraic framework which allowed us to solve the partial update problem for some useful data structures including counters, sets and maps. These solutions are used for the efficient implementation of concurrent data modifications in the specification language AsmL.
The two main contributions of this paper are (i) a more general algebraic framework for partial updates and (ii) a solution of the partial update problem for sequences and labeled ordered trees. [160] Andreas Blass and Yuri Gurevich Pairwise Testing Originally in Bulletin of the European Association for Theoretical Computer Science Number 78, October 2002, 100-132 Reprinted in 2004 World Scientific book Current Trends in Theoretical Computer Science pages 237-266 We discuss the following problem, which arises in software testing. Given some independent parameters (of a program to be tested), each having a certain finite set of possible values, we intend to test the program by running it several times. For each test, we give the parameters some (intelligently chosen) values. We want to ensure that for each pair of distinct parameters, every pair of possible values is used in at least one of the tests. And we want to do this with as few tests as possible. [159] Uwe Glaesser, Yuri Gurevich and Margus Veanes Abstract Communication Model for Distributed Systems IEEE Transactions on Software Engineering Vol. 30, no. 7, July 2004, pages 458-472. In some distributed and mobile communication models, a message disappears in one place and miraculously appears in another. In reality, of course, there are no miracles. A message goes from one network to another; it can be lost or corrupted in the process. Here we present a realistic but high-level communication model where abstract communicators represent various nets and subnets. The model was originally developed in the process of specifying a particular network architecture, namely the Universal Plug and Play architecture. But it is general. Our contention is that every message-based distributed system, properly abstracted, gives rise to a specialization of our abstract communication model.
The purpose of the abstract communication model is not to design a new kind of network; rather it is to discover the common part of all message-based communication networks. The generality of the model has been confirmed by its successful reuse for very different distributed architectures. The model is based on distributed abstract state machines. It is implemented in the specification language AsmL and is being used for testing distributed systems. [158] Andreas Blass and Yuri Gurevich Algorithms vs. Machines Originally in Bulletin of the European Association for Theoretical Computer Science Number 77, June 2002, 96-118 Reprinted in 2004 World Scientific book Current Trends in Theoretical Computer Science pages 215-236 In a recent paper, the logician Yiannis Moschovakis argues that no state machine describes mergesort on its natural level of abstraction. We do just that. Our state machine is a recursive ASM. [157-2] Andreas Blass and Yuri Gurevich Abstract State Machines Capture Parallel Algorithms: Correction and Extension ACM Transactions on Computation Logic Vol. 9, No. 3, Article 19, Publication date: June 2008 We consider parallel algorithms working in sequential global time, for example circuits or parallel random access machines (PRAMs). Parallel abstract state machines (parallel ASMs) are such parallel algorithms, and the parallel ASM thesis asserts that every parallel algorithm is behaviorally equivalent to a parallel ASM. In an earlier paper 157-1, we axiomatized parallel algorithms, proved the ASM thesis and proved that every parallel ASM satisfies the axioms. It turned out that we were too timid in formulating the axioms; they did not allow a parallel algorithm to create components on the fly. This restriction did not hinder us from proving that the usual parallel models, like circuits or PRAMs or even alternating Turing machines, satisfy the postulates. 
But it resulted in an error in our attempt to prove that parallel ASMs always satisfy the postulates. To correct the error, we liberalize our axioms and allow on-the-fly creation of new parallel components. We believe that the improved axioms accurately express what parallel algorithms ought to be. We prove the parallel thesis for the new, corrected notion of parallel algorithms, and we check that parallel ASMs satisfy the new axioms. [157-1] Andreas Blass and Yuri Gurevich Abstract State Machines Capture Parallel Algorithms Technical report MSR-TR-2001-117 ACM Transactions on Computation Logic Volume 4, Number 4 (October 2003), pages 578-651 We give an axiomatic description of parallel, synchronous algorithms. Our main result is that every such algorithm can be simulated, step for step, by an abstract state machine with a background that provides for multisets. See also 157-2. [156] Yuri Gurevich and Nikolai Tillmann Partial Updates: Exploration Springer J. of Universal Computer Science 7:11 (2001), 918-952. The partial update problem for parallel abstract state machines has manifested itself in the cases of counters, sets and maps. We propose a solution of the problem that lends itself to an efficient implementation and covers the three cases mentioned above. There are other cases of the problem that require a more general framework. [155] Yuri Gurevich, Wolfram Schulte and Margus Veanes Toward Industrial Strength Abstract State Machines Technical report MSR-TR-2001-98 Microsoft Research, October 2001 A powerful practical ASM language, called AsmL, is being developed in Microsoft Research by the group on Foundations of Software Engineering. AsmL extends the language of original ASMs in a number of directions. We describe some of these extensions. 
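The partial-update idea from entries [161] and [156] above can be illustrated with a minimal sketch. The function names and delta representation below are hypothetical illustrations, not the algebraic framework of the papers: independent parts of a system propose deltas against a shared value, and the deltas must be checked for consistency before they are integrated.

```python
# Hypothetical sketch of partial updates for two simple data structures.
# Counters: deltas are increments, which always commute, so integration is a sum.
# Sets: deltas are ('add', x) / ('remove', x) operations; a pair of deltas that
# adds and removes the same element is inconsistent and must be rejected.

def integrate_counter(value, deltas):
    """Integrate independent increments into a counter."""
    return value + sum(deltas)

def integrate_set(current, deltas):
    """Integrate independent add/remove operations into a set,
    raising ValueError on conflicting partial updates."""
    added = {x for op, x in deltas if op == 'add'}
    removed = {x for op, x in deltas if op == 'remove'}
    conflicts = added & removed
    if conflicts:
        raise ValueError(f"inconsistent partial updates on {conflicts}")
    return (current | added) - removed

counter = integrate_counter(10, [1, 2, -3])                  # -> 10
s = integrate_set({1, 2}, [('add', 3), ('remove', 1)])       # -> {2, 3}
```

The counter case shows why some data structures are easy (all deltas commute), while the set case already needs a consistency check; sequences and labeled ordered trees, treated in [161], require considerably more machinery.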
[154] Wolfgang Grieskamp, Yuri Gurevich, Wolfram Schulte and Margus Veanes Generating Finite State Machines from Abstract State Machines ISSTA 2002, International Symposium on Software Testing and Analysis ACM Software Engineering Notes 27:4 (2002), 112-122. We give an algorithm that derives a finite state machine (FSM) from a given abstract state machine (ASM) specification. This allows us to integrate ASM specs with the existing tools for test case generation from FSMs. ASM specs are executable but typically have too many, often infinitely many, states. We group ASM states into finitely many hyperstates which are the nodes of the FSM. The links of the FSM are induced by the ASM state transitions. [153] Uwe Glaesser, Yuri Gurevich and Margus Veanes Universal Plug and Play Machine Models Technical report MSR-TR-2001-59 Microsoft Research, June 2001 Recently, Microsoft took a lead in the development of a standard for peer-to-peer network connectivity of various intelligent appliances, wireless devices and PCs. It is called the Universal Plug and Play Device Architecture (UPnP). We construct a high-level Abstract State Machine (ASM) model for UPnP. The model is based on the ASM paradigm for distributed systems with real-time constraints and is executable in principle. For practical execution, we use AsmL, the Abstract state machine Language, developed at Microsoft Research and integrated with Visual Studio and COM. This gives us an AsmL model, a refined version of the ASM model. The third part of this project is a graphical user interface by means of which the runs of the AsmL model are controlled and inspected at various levels of detail as required for e.g. simulation and conformance testing. [152] Anuj Dawar and Yuri Gurevich Fixed Point Logics The Bulletin of Symbolic Logic 8:1 (2002), 65-88. Fixed point logics are extensions of first order predicate logic with fixed point operators.
A number of such logics arose in finite model theory but they are of interest to a much larger audience, e.g. AI, and there is no reason why they should be restricted to finite models. We review results established in finite model theory, and consider the expressive power of fixed point logics on infinite structures. [151] Yuri Gurevich Logician in the land of OS: Abstract State Machines at Microsoft Sixteenth Annual IEEE Symposium on Logic in Computer Science IEEE Computer Society, 2001, 129-136. Analysis of foundational problems like "What is computation?" leads to a sketch of the paradigm of abstract state machines (ASMs). This is followed by a brief discussion of ASM applications. Then we present some theoretical problems that bridge between the traditional LICS themes and abstract state machines. [150a] Andreas Blass and Yuri Gurevich A Quick Update on the Open Problems in article [150] December 2005. [150] Andreas Blass and Yuri Gurevich and Saharon Shelah On Polynomial Time Computation Over Unordered Structures Journal of Symbolic Logic 67:3 (2002), 1093-1125. This paper is motivated by the question whether there exists a logic capturing polynomial time computation over unordered structures. We consider several algorithmic problems near the border of the known, logically defined complexity classes contained in polynomial time. We show that fixpoint logic plus counting is stronger than might be expected, in that it can express the existence of a complete matching in a bipartite graph. We revisit the known examples that separate polynomial time from fixpoint plus counting. We show that the examples in a paper of Cai, Fürer, and Immerman, when suitably padded, are in choiceless polynomial time yet not in fixpoint plus counting. Without padding, they remain in polynomial time but appear not to be in choiceless polynomial time plus counting.
Similar results hold for the multipede examples of Gurevich and Shelah, except that their final version of multipedes is, in a sense, already suitably padded. Finally, we describe another plausible candidate, involving determinants, for the task of separating polynomial time from choiceless polynomial time plus counting. [149] Andreas Blass and Yuri Gurevich Strong Extension Axioms and Shelah's Zero-One Law for Choiceless Polynomial Time Journal of Symbolic Logic 68:1 (2003), 65-131. This paper developed from Shelah's proof of a zero-one law for the complexity class "choiceless polynomial time," defined by Shelah and the authors. We present a detailed proof of Shelah's result for graphs, and describe the extent of its generalizability to other sorts of structures. The extension axioms, which form the basis for earlier zero-one laws (for first-order logic, fixed-point logic, and finite-variable infinitary logic), are inadequate in the case of choiceless polynomial time; they must be replaced by what we call the strong extension axioms. We present an extensive discussion of these axioms and their role both in the zero-one law and in general. [144] is an abridged version of this paper, and [148] is a popular version of this paper. [148] Andreas Blass and Yuri Gurevich A New Zero-One Law and Strong Extension Axioms Originally in Bulletin of the European Association for Theoretical Computer Science Number 72 (October 2000), 103-122 Reprinted in 2004 World Scientific book Current Trends in Theoretical Computer Science pages 99-118 This article is a part of the continuing column on Logic in Computer Science. One of the previous articles in the column was devoted to the zero-one laws for a number of logics playing a prominent role in finite model theory: first-order logic FO, the extension FO+LFP of first-order logic with the least fixed-point operator, and the infinitary logic where every formula uses finitely many variables [95].
Recently Shelah proved a new, powerful, and surprising zero-one law. His proof uses so-called strong extension axioms. Here we formulate Shelah's zero-one law and prove a few facts about these axioms. In the process we give a simple proof for a "large deviation" inequality a la Chernoff. [147] Yuri Gurevich and Alex Rabinovich Definability in Rationals with Real Order in the Background Journal of Logic and Computation 12:1 (2002), pp. 1-11 The paper deals with logically definable families of sets of rational numbers. In particular we are interested in whether the families definable over the real line with a unary predicate for the rationals are definable over the rational order alone. Let φ(X,Y) and ψ(Y) range over formulas in the first-order monadic language of order. Let Q be the set of rationals and F be the family of subsets J of Q such that φ(Q,J) holds over the real line. The question arises whether, for every formula φ, the family F can be defined by means of a formula ψ(Y) interpreted over the rational order. We answer the question negatively. The answer remains negative if the first-order logic is strengthened to weak monadic second-order logic. The answer is positive for the restricted version of monadic second-order logic where set quantifiers range over open sets. The case of full monadic second-order logic remains open. [146] Andreas Blass and Yuri Gurevich Inadequacy of Computable Loop Invariants ACM Transactions on Computer Logic Volume 2, Number 1 (January 2001), 1-11 Hoare logic is a widely recommended verification tool. There is, however, a problem of finding easily-checkable loop invariants; it is known that decidable assertions do not suffice to verify WHILE programs, even when the pre- and post-conditions are decidable. We show here a stronger result: decidable invariants do not suffice to verify single-loop programs. We also show that this problem arises even in extremely simple contexts.
Let N be the structure consisting of the set of natural numbers together with the functions S(x)=x+1, D(x)=2x and the function H(x) that equals x/2 rounded down. There is a single-loop program P using only three variables x,y,z such that the asserted program x = y = z = 0 {P} false is partially correct on N but any loop invariant I(x,y,z) for this asserted program is undecidable. [145.5] Colin Campbell and Yuri Gurevich Table ASMs in "Formal Methods and Tools for Computer Science" (Proceedings of Eurocast 2001) eds. R. Moreno-Diaz and A. Quesada-Arencibia, Universidad de Las Palmas de Gran Canaria Canary Islands, Spain, February 2001, 286-290 Ideally, a good specification becomes the basis for implementing, testing and documenting the system it defines. In practice, producing a good specification is hard. Formal methods have been shown to be helpful in strengthening the meaning of specifications but despite their power, few development teams have successfully incorporated them into their software processes. This experience indicates that producing a usable formal method is also hard. This paper is the story of how a particular theoretical result, namely the normal forms of Abstract State Machines, motivated a genuinely usable form of specification that we call ASM Tables. We offer it for two reasons. The first is that the result is interesting in and of itself and --- it is to be hoped --- useful to the reader. The second is that our result serves as a case study of a more general principle, namely, that in bringing rigorous methods into everyday practice, one should not follow the example of Procrustes: we find that it is indeed better to adapt the bed to the person than the other way round. We also offer a demonstration that an extremely restricted syntactical form can still contain sufficient expressive power to describe all sequential machines.
[145] Mike Barnett, Egon Boerger, Yuri Gurevich, Wolfram Schulte and Margus Veanes Using Abstract State Machines at Microsoft: A Case Study Proceedings of ASM'2000 in "Abstract State Machines: Theory and Applications" Eds. Y. Gurevich, P. Kutter, M. Odersky, and L. Thiele Springer Lecture Notes in Computer Science 1912 (2000), 367-379 Our goal is to provide a rigorous method, clear notation and convenient tool support for high-level system design and analysis. For this purpose we use abstract state machines (ASMs). Here we describe a particular case study: modeling a debugger of a stack-based runtime environment. The study provides evidence for ASMs being a suitable tool for building executable models of software systems on various abstraction levels, with precise refinement relationships connecting the models. High-level ASM models of proposed or existing programs can be used throughout the software development cycle. In particular, ASMs can be used to model inter-component behavior on any desired level of detail. This allows one to specify application programming interfaces more precisely than is currently done. [144] Andreas Blass and Yuri Gurevich Choiceless Polynomial Time Computation and the Zero-One Law Proceedings of CSL'2000 Editors Peter Clote and Helmut Schwichtenberg Springer Lecture Notes in Computer Science 1862 (2000), 18-40. This paper is a sequel to [120], a commentary on [Saharon Shelah 634, "Choiceless polynomial time logic: inability to express", these proceedings], and an abridged version of [149] that contains complete proofs of all the results presented here. The BGS model of computation was defined in [120] with the intention of modeling computation with arbitrary finite relational structures as inputs, with essentially arbitrary data structures, with parallelism, but without arbitrary choices.
It was shown that choiceless polynomial time, the complexity class defined by BGS programs subject to a polynomial time bound, does not contain the parity problem. Subsequently, Shelah proved a zero-one law for choiceless-polynomial-time properties. A crucial difference from the earlier results is this: Almost all finite structures have no non-trivial automorphisms, so symmetry considerations cannot be applied to them. Shelah's proof therefore depends on a more subtle concept of partial symmetry. After struggling for a while with Shelah's proof, we worked out a presentation which we hope will be helpful for others interested in Shelah's ideas. We also added some related results, indicating the need for certain aspects of the proof and clarifying some of the concepts involved in it. Unfortunately, this material is not yet fully written up. The part already written, however, exceeds the space available to us in the present volume. We therefore present here an abridged version of that paper and promise to make the complete version available soon. [143] Andreas Blass and Yuri Gurevich Background, Reserve, and Gandy Machines Proceedings of CSL'2000 Editors Peter Clote and Helmut Schwichtenberg Springer Lecture Notes in Computer Science 1862 (2000), 1-17. Algorithms often need to increase their working space, and it may be convenient to pretend that the additional space was really there all along but was not previously used. In particular, abstract state machines have, by definition [103], an infinite reserve. Although the reserve is a naked set, it is often desirable to have some external structure over it. For example, in [120] every state was required to include all finite sets of its atoms, all finite sets of these, etc. In this connection, we define the notion of a background class of structures. Such a class specifies the constructions (like finite sets or lists) available as "background" for algorithms.
The importation of reserve elements must be non-deterministic, since an algorithm has no way to distinguish one reserve element from another. But this sort of non-determinism is much more benign than general non-determinism. We capture this intuition with the notion of inessential non-determinism. Alternatively, one could insist on specifying a particular one of the available reserve elements to be imported. This is the approach used in [Robin Gandy, "Church's thesis and principles for mechanisms" in: "The Kleene Symposium" (Ed. Jon Barwise et al.), North-Holland, 1980, 123-148.]. The price of this insistence is that the specification cannot be algorithmic. We show how to turn a Gandy-style deterministic, non-algorithmic process into a non-deterministic algorithm of the sort described above, and we prove that Gandy's notion of "structural" for his processes corresponds to our notion of "inessential non-determinism." [142] Andreas Blass and Yuri Gurevich The Underlying Logic of Hoare Logic Originally in Bulletin of the European Association for Theoretical Computer Science Number 70, February 2000, 82-110 Reprinted in 2001 World Scientific book Current Trends in Theoretical Computer Science pages 409-436 Formulas of Hoare logic are asserted programs φ {P} ψ where P is a program and φ, ψ are assertions. The language of programs varies; in the 1980 survey of K. Apt, one finds the language of while programs and various extensions of it. But the assertions are traditionally expressed in first-order logic (or extensions of it). In that sense, first-order logic is the underlying logic of Hoare logic. We question the tradition and demonstrate, on the simple example of while programs, that alternative assertion logics have some advantages. For some natural assertion logics, the expressivity hypothesis in Cook's completeness theorem is automatically satisfied.
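As a generic illustration of the asserted-program format discussed in [142] above (and of the loop invariants whose limits [146] establishes), here is a small runtime check of a Hoare-style triple for a while program. The program, invariant, and function name are hypothetical examples chosen for illustration, not taken from the papers.

```python
# Hoare-style asserted program, checked dynamically:
#   precondition  φ: n >= 0
#   program       P: compute s = 0 + 1 + ... + n by a while loop
#   postcondition ψ: s == n * (n + 1) // 2
#   loop invariant I: s == i * (i - 1) // 2
# In Hoare logic the invariant is proved once and for all;
# here it is merely executed on each iteration.

def summing_program(n):
    assert n >= 0                      # precondition φ
    s, i = 0, 1
    while i <= n:
        assert s == i * (i - 1) // 2   # invariant I holds on loop entry
        s += i
        i += 1
    assert s == i * (i - 1) // 2       # invariant I also holds on exit
    assert s == n * (n + 1) // 2       # postcondition ψ
    return s
```

For this toy program a decidable (indeed, trivially computable) invariant suffices; the point of [146] is that for some single-loop programs over a very simple structure, no decidable invariant can do the job.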
[141b] Jessica Millar Finite Work Tech report MSR-TR-2006-06, Microsoft Research, January 2006 In the appendix of 141, Gurevich defines the notion of finite exploration for small-step algorithms which do not intrastep interact with the environment. Although satisfying finite exploration is a seemingly weaker property than satisfying bounded exploration, he proves the two notions are equivalent for this class of algorithms. We investigate what happens in the case of ordinary small-step algorithms -- in particular, these algorithms do intrastep interact with the environment -- as described by Blass and Gurevich in 166. Our conclusion is that every algorithm satisfying the appropriate version of finite exploration is equivalent to an algorithm satisfying bounded exploration. We provide a counterexample to the stronger statement that every algorithm satisfying finite exploration satisfies bounded exploration. This statement becomes true if the definition of bounded exploration is modified slightly. The proposed modification is natural for algorithms operating in isolation, but not for algorithms belonging to larger systems of computation. We believe the results generalize to general interactive small-step algorithms. [141a] Yuri Gurevich Sequential Abstract State Machines capture Sequential Algorithms Russian translation Translated to Russian by P.G. Emelyanov In System Informatics, vol. 9, pages 7-50. Ed. A.G. Marchuk The Siberian Branch of the Russian Academy of Sciences (1940) [141] Yuri Gurevich Sequential Abstract State Machines capture Sequential Algorithms ACM Transactions on Computational Logic Volume 1, Number 1 (July 2000), pages 77-111 The paper also can be found at the journal's webpage. What are sequential algorithms exactly?
Our claim, known as the sequential ASM thesis, has been that, as far as behavior is concerned, sequential algorithms are exactly sequential abstract state machines: For every sequential algorithm A, there is a sequential abstract state machine B that is behaviorally identical to A. In particular B simulates A step for step. In this paper we prove the sequential ASM thesis, so that it becomes a theorem. But how can one possibly prove a thesis? Here is what we do. We formulate three postulates satisfied by all sequential algorithms (and in particular by sequential abstract state machines). This leads to the following definition: a sequential algorithm is any object that satisfies the three postulates. At this point the thesis becomes a precise statement. And we prove the statement. This is a non-dialog version of the dialog #136. An intermediate version was published as technical report MSR-TR-99-65, Microsoft Research, September 1999. [140] Yuri Gurevich, Wolfram Schulte and Charles Wallace Investigating Java Concurrency Using Abstract State Machines Proceedings of ASM'2000 in Abstract State Machines: Theory and Applications Eds. Y. Gurevich, P. Kutter, M. Odersky, and L. Thiele Springer Lecture Notes in Computer Science 1912 (2000), 151-176. We present a mathematically precise, platform-independent model of Java concurrency using the Abstract State Machine method. We cover all aspects of Java threads and synchronization, gradually adding details to the model in a series of steps. We motivate and explain each concurrency feature, and point out subtleties, inconsistencies and ambiguities in the official, informal Java specification. [139] Andreas Blass, Yuri Gurevich and Jan Van den Bussche Abstract state machines and computationally complete query languages Information and Computation 174:1 (2002), 20-36 An earlier version published in Proceedings of ASM'2000 "Abstract State Machines: Theory and Applications" Eds. Y. Gurevich, P. Kutter, M. Odersky, and L. 
Thiele Springer Lecture Notes in Computer Science 1912 (2000), 22-33 Abstract state machines (ASMs) form a relatively new computation model holding the promise that they can simulate any computational system in lock-step. In particular, an instance of the ASM model has recently been introduced for computing queries to relational databases [120]. This model, to which we refer as the BGS model, provides a powerful query language in which all computable queries can be expressed. In this paper, we show that when one is only interested in polynomial-time computations, BGS is strictly more powerful than both QL and WHILE_NEW, two well-known computationally complete query languages. We then show that when a language such as WHILE_NEW is extended with a duplicate elimination mechanism, polynomial-time simulations between the language and BGS become possible. [138] Yuri Gurevich and Dean Rosenzweig Partially Ordered Runs: a Case Study in "Abstract State Machines: Theory and Applications" Eds. Y. Gurevich, P. Kutter, M. Odersky, and L. Thiele Springer Lecture Notes in Computer Science 1912 (2000), 131-150 We look at some sources of insecurity and difficulty in reasoning about partially ordered runs of distributed abstract state machines, and propose some techniques to facilitate such reasoning. As a case study, we prove in detail correctness and deadlock-freedom for general partially ordered runs of distributed ASM models of Lamport's Bakery Algorithm. [137] Giuseppe Del Castillo, Yuri Gurevich and Karl Stroetmann Typed Abstract State Machines Unfinished manuscript, 1998. [This manuscript was never published. The work, done sporadically in 1996-98, was driven by the enthusiasm of Karl Strötmann of Siemens. Eventually he was reassigned away from ASM applications, and the work stopped. The item wasn't removed from the list because some of its explorations may be useful. An additional minor reason was to avoid changing the numbers of the subsequent items.]
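Entries [138] above and [107] below both concern Lamport's bakery algorithm. For readers unfamiliar with it, here is a minimal sequential sketch of its core "take a ticket" idea (my illustration, not the distributed ASM models of the papers): an agent entering the doorway draws a number larger than every number currently held, and agents enter the critical section in lexicographic (ticket, id) order, the id breaking ties between agents that drew the same number.

```python
# The "take a ticket" idea at the heart of Lamport's bakery algorithm,
# sketched sequentially (an illustration only; the papers treat the genuinely
# distributed setting with partially ordered runs).

class Bakery:
    def __init__(self):
        self.tickets = {}      # agent id -> ticket number

    def enter(self, agent):
        # doorway: draw a number larger than every number currently held
        self.tickets[agent] = max(self.tickets.values(), default=0) + 1

    def may_enter_critical_section(self, agent):
        # proceed only when holding the least (ticket, id) pair;
        # the id breaks ties between agents that drew the same number
        t = self.tickets[agent]
        return all((t, agent) <= (u, other) for other, u in self.tickets.items())

    def leave(self, agent):
        del self.tickets[agent]

b = Bakery()
b.enter("p1"); b.enter("p2")
assert b.may_enter_critical_section("p1")      # p1 holds the smaller ticket
assert not b.may_enter_critical_section("p2")  # p2 must wait
b.leave("p1")
assert b.may_enter_critical_section("p2")      # now p2 may proceed
```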
[136] Yuri Gurevich The Sequential ASM Thesis Originally in Bulletin of the European Association for Theoretical Computer Science Number 67, 93-124, February 1999 Reprinted in 2001 World Scientific book Current Trends in Theoretical Computer Science pages 363-392 #141 is a much revised and polished journal version. The thesis is that every sequential algorithm, on any level of abstraction, can be viewed as a sequential abstract state machine. Abstract state machines (ASMs) used to be called evolving algebras. The sequential ASM thesis and its extensions inspired diverse applications of ASMs. The early applications were driven, at least partially, by the desire to test the thesis. Different programming languages were the obvious challenges. (A programming language L can be viewed as an algorithm that runs a given L program on given data.) From there, applications of (not necessarily sequential) ASMs spread into many directions. So far, the accumulated experimental evidence seems to support the sequential thesis. There is also a speculative philosophical justification of the thesis. It was barely sketched in the literature, but it was discussed at much greater length in numerous lectures of mine. Here I attempt to write down some of those explanations. This article does not presuppose any familiarity with ASMs. [135] Andreas Blass and Yuri Gurevich The Logic of Choice Journal of Symbolic Logic Vol. 65, no. 3, September 2000, pages 1264-1310. We study extensions of first-order logic with the choice construct (choose x : phi(x)). We prove some results about Hilbert's epsilon operator, but in the main part of the paper we consider the case when all choices are independent. [134] Thomas Eiter, Georg Gottlob and Yuri Gurevich Existential Second-Order Logic over Strings Journal of the ACM, vol. 47, no. 1, Jan. 2000, 77-131. We study existential second-order logic over finite strings.
For every prefix class C, we determine the complexity of the model checking problem restricted to C. In particular, we prove that, in the case of the Ackermann class, for every formula φ, there is a finite automaton A that solves the model checking problem for φ. [133] Erich Graedel, Yuri Gurevich and Colin Hirsch The Complexity of Query Reliability 1998 ACM Symposium on Principles of Database Systems (PODS'98). We study the reliability of queries on databases with uncertain information. It turns out that FP^#P is the typical complexity class and that many results generalize to metafinite databases which allow one to use common SQL aggregate functions. [132] Andreas Blass, Yuri Gurevich, Vladik Kreinovich and Luc Longpré A Variation on the Zero-One Law Information Processing Letters 67 (1998) 29-30. Given a decision problem P and a probability distribution over binary strings, do this: for each n, draw independently an instance x(n) of P of length n. What is the probability that there is a polynomial time algorithm that solves all instances x(n)? The answer is: zero or one. [131] Yuri Gurevich From Invariants to Canonization Originally in Bulletin of the European Association for Theoretical Computer Science no. 63, October 1997 Reprinted in 2001 World Scientific book Current Trends in Theoretical Computer Science pages 327-331 We show that every polynomial-time full-invariant algorithm for graphs gives rise to a polynomial-time canonization algorithm for graphs. [130] Yuri Gurevich and Alex Rabinovich Definability and Undefinability with Real Order at the Background J. of Symbolic Logic, vol. 65, no. 2 (2000), 946-958 Let R be the real order, that is, the set of real numbers together with the standard order of reals. Let I be the set of integers, let X range over subsets of I, let P(I,X) be a monadic second-order formula about R, and let F be the collection of all subsets X of I such that P(I,X) holds in R.
Even though F is a collection of subsets of I, its definition may involve quantification over reals and over sets of reals. In that sense, F is defined with the background of real order. Is that background essential or not? Maybe there is a monadic second-order formula Q(X) about I that defines F (so that F is the collection of all subsets X of I such that Q(X) holds in I). We prove that this is indeed the case, for any monadic second-order formula P(I,X). The claim remains true if the set I of integers is replaced above with any closed subset of R. The claim fails for some open subsets. [129] Yuri Gurevich May 1997 Draft of the ASM Guide Tech Report CSE-TR-336-97, EECS Dept, University of Michigan, 1997 The draft improves upon the ASM syntax (and appears here because it is used by the ASM community and it is not going to be published). [128b] Yuri Gurevich and Andrei Voronkov Monadic Simultaneous Rigid E-Unification Theoretical Computer Science volume 222, number 1-2 (1999), 133-152. The journal version of [128a]. [128a] Yuri Gurevich and Andrei Voronkov Monadic Simultaneous Rigid E-Unification and Related Problems 24th Intern. Colloquium on Automata, Languages and Programming ICALP'97, Bologna, Italy, July 1997 Springer Lecture Notes in Computer Science 1256 (1997), 154-165 We study the monadic case of a decision problem known as simultaneous rigid E-unification. We show its equivalence to an extension of word equations. We prove decidability and complexity results for special cases of this problem. [127b] A. Degtyarev, Y. Gurevich, P. Narendran, M. Veanes and A. Voronkov Decidability and Complexity of Simultaneous Rigid E-Unification with One Variable and Related Results Theoretical Computer Science volume 243/1-2 (August 2000), 167-184. 
The journal version of [127a] containing also a decidability proof for the case of simultaneous rigid E-unification when each rigid equation either contains (at most) one variable or else has a ground left-hand side and the right-hand side of the form x=y where x and y are variables. [127a] A. Degtyarev, Y. Gurevich, P. Narendran, M. Veanes and A. Voronkov The Decidability of Simultaneous Rigid E-Unification with One Variable Tech. Rep. 139, March 1997 Computing Science Department, Uppsala University, Sweden RTA'98, 9th Conf. on Rewriting Techniques and Applications Tsukuba, Japan, March 30 --- April 1, 1998. The title problem is proved decidable and in fact EXPTime complete. Furthermore, the problem becomes PTime complete if the number of equations is bounded by any (positive) constant. It follows that the A*EA* fragment of intuitionistic logic with equality is decidable, which contrasts with the undecidability of the EE fragment [126]. Notice that simultaneous rigid E-unification with two variables and only three rigid equations is undecidable [126]. [126] Yuri Gurevich and Margus Veanes Logic with Equality: Partisan Corroboration and Shifted Pairing Information and Computation, vol. 152, no. 2, August 1999, 205-235. Herbrand's theorem plays a fundamental role in automated theorem proving methods based on tableaux. The crucial step in procedures based on such methods can be described as the corroboration (or Herbrand skeleton) problem: given a positive integer m and a quantifier-free formula, find a valid disjunction of m instantiations of the formula. In the presence of equality (which is the case in this paper), this problem was recently shown to be undecidable. The main contributions of this paper are two theorems. Partisan Corroboration Theorem relates corroboration problems with different multiplicities. 
Shifted Pairing Theorem is a finite tree-automata formalization of a technique for proving undecidability results through direct encodings of valid Turing machine computations. The theorems are used to explain and sharpen several recent undecidability results related to the corroboration problem, the simultaneous rigid E-unification problem and the prenex fragment of intuitionistic logic with equality. [125] Anatoli Degtyarev, Yuri Gurevich and Andrei Voronkov Herbrand's Theorem and Equational Reasoning: Problems and Solutions Originally in Bulletin of the European Association for Theoretical Computer Science Vol. 60, Oct 1996, 78-95 Reprinted in 2001 World Scientific book Current Trends in Theoretical Computer Science pages 303-326 The article (written in a popular form) explains that a number of different algorithmic problems related to Herbrand's theorem happen to be equivalent. Among these problems are the intuitionistic provability problem for the existential fragment of first-order logic with equality, the intuitionistic provability problem for the prenex fragment of first-order logic with equality, and the simultaneous rigid E-unification problem (SREU). The article explains an undecidability proof of SREU and decidability proofs for special cases. It contains an extensive bibliography on SREU. [124] Natasha Alechina and Yuri Gurevich Syntax vs. Semantics on Finite Structures in "Structures in Logic and Computer Science: A Selection of Essays in Honor of Andrzej Ehrenfeucht" Editors J. Mycielski, G. Rozenberg and A. Salomaa Lecture Notes in Computer Science 1261, 14-33 Springer-Verlag, Heidelberg, 1997. Logic preservation theorems often have the form of a syntax/semantics correspondence. For example, the Tarski-Łoś theorem asserts that a first-order sentence is preserved by extensions if and only if it is equivalent to an existential sentence. Many of these correspondences break when one restricts attention to finite models.
In such a case, one may attempt to find a new semantical characterization of the old syntactical property or a new syntactical characterization of the old semantical property. The goal of this paper is to provoke such a study. In particular, we give a simple semantical characterization of existential formulas on finite structures. [123] Yuri Gurevich Platonism, Constructivism, and Computer Proofs vs. Proofs by Hand Originally in Bulletin of the European Association for Theoretical Computer Science No. 57, Oct. 1995, 145-166. A slightly revised version is published in 2001 World Scientific book Current Trends in Theoretical Computer Science pages 281-302 In one of Krylov's fables, a small dog Moska barks at the elephant who pays no attention whatsoever to Moska. This image comes to my mind when I think of constructive mathematics versus "classical" (that is mainstream) mathematics. In this article, we put a few words into the elephant's mouth. The idea to write such an article came to me in the summer of 1995 when I came across a fascinating 1917 bet between the constructivist Hermann Weyl and George Polya, a classical mathematician. An English translation of the bet (from German) is found in the article. Our main objection to the historical constructivism is that it has not been sufficiently constructive. The constructivists have been obsessed with computability and have not paid sufficient attention to the feasibility of algorithms. However, the constructivists' criticism of classical mathematics has a point. Instead of dismissing constructivism offhand, it makes sense to come up with a positive alternative, an antithesis to historical constructivism. We believe that we have found such an alternative. In fact, it is well known and very popular in computer science: the principle of separating concerns. [Added in July 2006] The additional part on computer proofs vs.
proofs by hand was a result of frustration that many computer scientists would not trust informal mathematical proofs, while many mathematicians would not trust computer proofs. It seemed obvious to me that, on a large scale, proving is not only hard but also imperfect and has an engineering character. We need informal proofs and computer proofs and more, e.g. stratification, experimentation. [122] Charles Wallace, Yuri Gurevich and Nandit Soparkar A Formal Approach to Recovery in Transaction-Oriented Database Systems Springer J. of Universal Computer Science 3:4 (April 1997), 320-340. Failure resilience is an essential requirement for transaction-oriented database systems, yet there has been little effort to specify and verify techniques for failure recovery formally. The desire to improve performance has resulted in algorithms of considerable sophistication, understood by few and prone to errors. In this paper, we show how the formal methodology of Gurevich Abstract State Machines can elucidate recovery and provide formal rigor to the design of a recovery algorithm. In a series of refinements, we model recovery at several levels of abstraction, verifying the correctness of each model. This initial work indicates that our approach can be applied to more advanced recovery mechanisms. [121] Scott Dexter, Patrick Doyle and Yuri Gurevich Gurevich Abstract State Machines and Schoenhage Storage Modification Machines Springer J. of Universal Computer Science 3:4 (April 1997), 279-303. We show that, in a strong sense, Schoenhage's storage modification machines are equivalent to unary basic abstract state machines without external functions. The unary restriction can be removed if the storage modification machines are equipped with a pairing function in an appropriate way. [120] Andreas Blass, Yuri Gurevich and Saharon Shelah Choiceless Polynomial Time Annals of Pure and Applied Logic 100 (1999), 141-187.
The question "Is there a computation model whose machines do not distinguish between isomorphic structures and compute exactly polynomial time properties?" became a central question of finite model theory. One of us conjectured the negative answer [74]. A related question is what portion of Ptime can be naturally captured by a computation model. (Notice that we speak about computation whose inputs are arbitrary finite structures e.g. graphs. In a special case of ordered structures, the desired computation model is that of Ptime-bounded Turing machines.) Our idea is to capture the portion of Ptime where algorithms are not allowed arbitrary choice but parallelism is allowed and, in some cases, implements choice. Our computation model is a Ptime version of abstract state machines (formerly called evolving algebras). Our machines are able to Ptime simulate all other Ptime machines in the literature, and they are more programmer-friendly. A more difficult theorem shows that the computation model does not capture all Ptime. [119] Yuri Gurevich and Marc Spielmann Recursive Abstract State Machines Springer J. of Universal Computer Science 3:4 (April 1997) 233-246. The abstract state machine (ASM) thesis, supported by numerous applications, asserts that ASMs express algorithms on their natural abstraction levels directly and essentially coding-free. The only objection raised to date has been that ASMs are iterative in their nature, whereas many algorithms are naturally recursive. There seems to be an inherent contradiction between (i) the ASM idea of explicit and comprehensive states, and (ii) higher level recursion with its hiding of the stack. But consider recursion more closely. When an algorithm A calls an algorithm B, a clone of B is created and this clone becomes a slave of A. This raises the idea of treating recursion as an implicitly multi-agent computation. Slave agents come and go, and the master/slave hierarchy serves as the stack. 
Building upon this idea, we suggest a definition of recursive ASMs. The implicit use of distributed computing has an important side benefit: it leads naturally to concurrent recursion. In addition, we reduce recursive ASMs to distributed ASMs. If desired, one can view recursive notation as mere abbreviation. [118] Andreas Blass and Yuri Gurevich The Linear Time Hierarchy Theorems for RAMs and Abstract State Machines Springer J. of Universal Computer Science 3:4 (April 1997) 247-278 Contrary to polynomial time, linear time depends heavily on the computation model. In 1992, Neil Jones designed a couple of computation models where the linear-speed-up theorem fails and linear-time computable functions form a proper hierarchy. However, the linear time of Jones' models is too restrictive. We prove linear-time hierarchy theorems for random access machines and Gurevich abstract state machines (formerly evolving algebras). The latter generalization is harder and more important because of the greater flexibility of the ASM model. One long-term goal of this line of research is to prove linear lower bounds for linear time problems. [117] Yuri Gurevich and James K. Huggins Equivalence is in the Eye of the Beholder Theoretical Computer Science 179, 1-2 (1997), 353-380. In a provocative paper "Processes are in the Eye of the Beholder" in the same issue of TCS (pages 333-351), Lamport points out "the insubstantiality of processes" by proving the equivalence of two different decompositions of the same intuitive algorithm. More exactly, each of the two distributed algorithms is described by a formula in Lamport's favorite temporal logic and then the two formulas are proved equivalent. We point out that the equivalence of algorithms is itself in the eye of the beholder. In this connection, we analyse in what sense the two distributed algorithms are and are not equivalent. Our equivalence proof is direct and does not require formalizing algorithms as logic formulas.
[116] Yuri Gurevich and James K. Huggins The Railroad Crossing Problem: An Experiment with Instantaneous Actions and Immediate Reactions in "Computer Science Logics, Selected papers from CSL'95" ed. H. Kleine Büning Springer Lecture Notes in Computer Science 1092 (1996) 266-290 We give an evolving algebra (= abstract state machine) solution for the well-known railroad crossing problem, and we use the occasion to experiment with computations where agents perform instantaneous actions in continuous time and some agents fire at the moment they are enabled. [115] Thomas Eiter, Georg Gottlob and Yuri Gurevich Normal Forms for Second-Order Logic over Finite Structures and Classification of NP Optimization Problems Annals of Pure and Applied Logic, 78 (1996), 111-125. We prove a new normal form for second-order formulas on finite structures and simplify the Kolaitis-Thakur hierarchy of NP optimization problems. [114] Yuri Gurevich The Value, if any, of Decidability Originally in Bulletin of the European Association for Theoretical Computer Science No. 55, Feb. 1995, 129-135 Reprinted in 2001 World Scientific book Current Trends in Theoretical Computer Science pages 274-280 A decidable problem can be as hard as an undecidable one for all practical purposes. So what is the value of a mere decidability result? That is the topic discussed in the paper. [113] Yuri Gurevich and Saharon Shelah On Rigid Structures Journal of Symbolic Logic, vol. 61, no. 2, June 1996, 549-562. This is related to the problem of defining linear order on finite structures. If a linear order is definable on a finite structure A then A is rigid (which means that its only automorphism is the identity). There had been a suspicion that if K is the collection of all finite structures of a finitely axiomatizable class and if every K structure is rigid, then K permits a relatively simple uniform definition of linear order. That happens to be not the case.
The main result of the paper is a probabilistic construction of finite rigid graphs. Using that construction, we exhibit a finitely axiomatizable class of finite rigid structures (called multipedes) such that no L^ω_{∞ω} sentence φ with counting quantifiers defines a linear order in all the structures. Furthermore, φ does not distinguish between a sufficiently large multipede M and a multipede M' obtained from M by moving a "shoe" to another foot of the same segment. [112] Yuri Gurevich Evolving Algebras in "IFIP 1994 World Computer Congress, Volume I: Technology and Foundations" Eds. B. Pehrson and I. Simon, Elsevier, Amsterdam, 423-427. The opening talk at the first workshop on evolving algebras. Sections: Introduction, The EA Thesis, Remarks, Future Work. (Evolving algebras have been later renamed abstract state machines.) [111] Yuri Gurevich and James K. Huggins Evolving Algebras and Partial Evaluation in "IFIP 1994 World Computing Congress, Volume 1: Technology and Foundations" Eds. B. Pehrson and I. Simon, Elsevier, Amsterdam, 587-592. The authors present an automated (and implemented) partial evaluator for sequential evolving algebras. (Evolving algebras have been later renamed abstract state machines.) [110] Andreas Blass and Yuri Gurevich Evolving Algebras and Linear Time Hierarchy in "IFIP 1994 World Computer Congress, Volume I: Technology and Foundations" Eds. B. Pehrson and I. Simon, North-Holland, Amsterdam, 383-390. A precursor of [118] [109] Erich Graedel and Yuri Gurevich Metafinite Model Theory Information and Computation 140:1 (1998), 26-81. Preliminary version in Logic and Computational Complexity, Selected Papers, ed. D. Leivant Springer Lecture Notes in Computer Science 960 (1995) 313-366 Earlier the second author criticized database theorists for admitting arbitrary structures as databases: databases are finite structures [60]. However, a closer investigation reveals that databases are not necessarily finite.
For example, a query may manipulate numbers that do not even appear in the database, which shows that a numerical structure is somehow involved. It is true nevertheless that database structures are special. The phenomenon is not restricted to databases; for example think about the natural structure to formalize the traveling salesman problem. To this end, we define metafinite structures. Typically such a structure consists of (i) a primary part, which is a finite structure, (ii) a secondary part, which is a (usually infinite) structure e.g. arithmetic or the real line, and (iii) a set of "weight" functions from the first part into the second. Our logics do not allow quantification over the secondary part. We study definability issues and their relation to complexity. We discuss model-theoretic properties of metafinite structures, present results on descriptive complexity, and sketch some potential applications. [108] Yuri Gurevich, Neil Immerman and Saharon Shelah McColm's Conjecture Symposium on Logic in Computer Science, IEEE Computer Society Press, 1994, 10-19. Gregory McColm conjectured that, over any class K of finite structures, all positive elementary inductions are bounded if every FOL + LFP formula is equivalent to a first-order formula over K. Here FOL + LFP is the extension of first-order logic with the least fixed point operator. Our main results are two model-theoretic constructions --- one deterministic and one probabilistic --- each of which refutes McColm's conjecture. [107] Egon Boerger, Dean Rosenzweig and Yuri Gurevich The Bakery Algorithm: Yet Another Specification and Verification in the 1995 Oxford University Press book Specification and Validation Methods pages 231-243. The so-called bakery algorithm of Lamport is an ingenious and sophisticated distributed mutual-exclusion algorithm. First we construct a mathematical model A1 which reflects the algorithm very closely. 
Then we construct a more abstract model A2 where the agents do not interact and the information is provided by two oracles. We check that A2 is safe and fair provided that the oracles satisfy certain conditions. Finally we check that the implementation A1 of A2 satisfies the conditions and thus A1 is safe and fair. [106] Yuri Gurevich and Raghu Mani Group Membership Protocol: Specification and Verification in the 1995 Oxford University Press book Specification and Validation Methods pages 295-328. An interesting and useful group membership protocol of Flaviu Cristian involves timing constraints, and its correctness is not obvious. We construct a mathematical model of the protocol and verify the protocol (and notice that the assumptions about the environment may be somewhat weakened). [105] Yuri Gurevich Logic Activities in Europe ACM SIGACT NEWS, vol. 25, no. 2 (June 1994), 11-24. This is a critical analysis of European logic activities in computer science based on a Fall 1992 European tour sponsored by the Office of Naval Research. [104] Erich Grädel and Yuri Gurevich Tailoring Recursion for Complexity J. Symbolic Logic, vol. 60, no. 3, Sept. 1995, 952-969. Complexity classes are easily generalized to the case when inputs of an algorithm are finite ordered structures of a fixed vocabulary rather than strings. A logic L is said to capture (or to be tailored to) a complexity class C if a class of finite ordered structures of a fixed vocabulary belongs to C if and only if it is definable in L. Traditionally, complexity tailored logics are logics of relations. In his FOCS'83 paper, the second author showed that, on finite structures, the class of Logspace computable functions is captured by the primitive recursive calculus, and the class of Ptime computable functions is captured by the classical calculus of partially recursive functions.
Here we continue that line of investigation and construct recursive calculi for various complexity classes of functions, in particular for (more challenging) nondeterministic classes NLogspace and NPtime. [103] Yuri Gurevich Evolving Algebra 1993: Lipari Guide in the 1995 Oxford University Press book Specification and Validation Methods pages 9-36 Computation models and specification methods seem to be worlds apart. The project on abstract state machines (a.k.a. evolving algebras) started as an attempt to bridge the gap by improving on Turing's thesis [92]. We sought more versatile machines which would be able to simulate arbitrary algorithms, on their natural abstraction levels, in a direct and essentially coding-free way. The ASM thesis asserts that ASMs are such versatile machines. The guide provided the definition of sequential and -- for the first time -- parallel and distributed ASMs. The denotational semantics of sequential and parallel ASMs is addressed in the Michigan guide 129. [102] Yuri Gurevich The AMAST Phenomenon Originally in Bulletin of the European Association for Theoretical Computer Science no. 51, October 1993, 295-299 Reprinted in 2001 World Scientific book Current Trends in Theoretical Computer Science pages 247-253 This humorous article incorporates a bit of serious criticism of algebraic and logic approaches to software problems. [101] Yuri Gurevich Logic in Computer Science Chapter in "Current Trends in Theoretical Computer Science" Eds. G. Rozenberg and A. Salomaa, World Scientific Series in Computer Science, Volume 40, 1993, 223-394. [100] Yuri Gurevich Feasible Functions London Mathematical Society Newsletter, No. 206, June 1993, 6-7. Some computer scientists, notably Steve Cook, identify feasibility with polynomial time computability. We argue against this point of view. [99] Thomas Eiter, Georg Gottlob and Yuri Gurevich Curb Your Theory! A Circumscriptive Approach for Inclusive Interpretation of Disjunctive Information Proc. 
13th Intern. Joint Conf. on AI (IJCAI'93) ed. R. Bajcsy, Morgan Kaufman, 1993, 634-639. We introduce, study and analyze the complexity of a new nonmonotonic technique of common sense reasoning called curbing. Like circumscription, curbing is based on model minimality but, unlike circumscription, it treats disjunction inclusively.

[98] Yuri Gurevich and Jim Huggins The Semantics of the C Programming Language CSL'92 (Computer Science Logics), Eds. E. Boerger et al. Springer Lecture Notes in Computer Science 702 (1993) 274-308. The method of successive refinement is used. The observation that C expressions do not contain statements gives rise to the first evolving algebra (ealgebra), which captures the command part of C; expressions are evaluated by an oracle. The second ealgebra implements the oracle under the assumptions that all the necessary declarations have been provided and user-defined functions are evaluated by another oracle. The third ealgebra handles declarations. Finally, the fourth ealgebra revises the combination of the first three by incorporating the stack discipline; it reflects all of C.

[97] Andreas Blass and Yuri Gurevich Matrix Transformation is Complete for the Average Case SIAM J. on Computing 24:1, 1995, 3-29. This is a full paper corresponding to the extended abstract [88] by the second author. We present the first algebraic problem complete for the average case under a natural probability distribution. The problem is this: Given a unimodular matrix X of integers, a set S of linear transformations of such unimodular matrices and a natural number n, decide if there is a product of at most n (not necessarily different) members of S that takes X to the identity matrix. (A revised and extended version of [88].)

[96] Andreas Blass and Yuri Gurevich Randomizing Reductions of Search Problems SIAM J. on Computing 22 (1993), no. 5, 949-975.
The journal version of an invited talk at FST&TCS'91, 11th Conference on Foundations of Software Technology and Theoretical Computer Science, New Delhi, India; see Springer Lecture Notes in Computer Science 560 (1991), 10-24. First, we clarify the notion of a (feasible) solution for a search problem and prove its robustness. Second, we give a general and usable notion of many-one randomizing reductions of search problems and prove that it has desirable properties. All reductions of search problems to search problems in the literature on average case complexity can be viewed as such many-one randomizing reductions. This includes those reductions in the literature that use iterations and therefore do not look many-one.

[95] Yuri Gurevich Zero-One Laws Originally in Bulletin of the European Association for Theoretical Computer Science No. 51, Feb. 1991, 90-106 Reprinted in 1993 World Scientific book Current Trends in Theoretical Computer Science pages 293-309

[94] Yuri Gurevich Average Case Complexity ICALP'91, International Colloquium on Automata, Languages and Programming Madrid, Springer Lecture Notes in Computer Science 510 (1991) 615-628. We motivate, justify and survey the average case reduction theory.

[93] Andreas Blass and Yuri Gurevich On the Reduction Theory for Average-Case Complexity CSL'90, 4th Workshop on Computer Science Logic Springer Lecture Notes in Computer Science 533 (1991) 17-30. A function from instances of one problem to instances of another problem is a reduction if, together with any admissible algorithm for the second problem, it gives an admissible algorithm for the first problem. This is an example of a descriptive definition of reductions. We simplify slightly Levin's usable definition of deterministic average-case reductions and thus make it equivalent to the appropriate descriptive definition. Then we generalize this to randomized average-case reductions.
[92] Yuri Gurevich Evolving Algebras: An Introductory Tutorial Originally in Bulletin of the European Association for Theoretical Computer Science No. 43, February 1991, 264-284 This slightly revised version appeared in 1993 World Scientific book Current Trends in Theoretical Computer Science pages 266-292 Computation models and specification methods seem to be worlds apart. The evolving algebra project is an attempt to bridge the gap by improving on Turing's thesis. We seek more versatile machines able to simulate arbitrary algorithms, on their natural abstraction levels, in a direct and essentially coding-free way. The evolving algebra thesis asserts that evolving algebras are such versatile machines. Here sequential evolving algebras are defined and motivated. In addition, we sketch a speculative "proof" of the sequential evolving algebra thesis: Every sequential algorithm can be lock-step simulated by an appropriate sequential evolving algebra on the natural abstraction level of the algorithm.

[91] Yuri Gurevich On the Classical Decision Problem Originally in Bulletin of the European Association for Theoretical Computer Science October 1990, 140-150 Reprinted in 1993 World Scientific book Current Trends in Theoretical Computer Science pages 254-265

[90] Yuri Gurevich On Finite Model Theory in "Feasible Mathematics" Ed. Samuel R. Buss and Philip J. Scott, Birkhäuser, Boston, 1990, 211-219. This is a little essay on finite model theory. Section 1 gives some counterexamples to classical theorems in the finite case. Section 2 gives a finite version of the classical compactness theorem. Section 3 announces two Gurevich-Shelah results.

• A new preservation theorem, Theorem 3.1. One of the consequences is Theorem 3.2: a first-order formula p preserved by any homomorphism from a finite structure into another finite structure is equivalent to a positive existential formula q.
• A lower bound result, Theorem 3.3, according to which a shortest q may be non-elementarily longer than p.

Unfortunately, the proof of Theorem 3.1 fell through -- a unique such case in the history of the Gurevich-Shelah collaboration. Theorem 3.1 was later proved by Benjamin Rossman; see Proceedings of LICS 2005.

[89] Yuri Gurevich and L. A. Moss Algebraic Operational Semantics and Occam CSL'89, 3rd Workshop on Computer Science Logic Springer Lecture Notes in Computer Science 440 (1990) 176-192. We give evolving algebra semantics to the Occam programming language, generalizing, in the process, evolving algebras to the case of distributed concurrent computations. Later note: the first example of a distributed abstract state machine.

[88] Yuri Gurevich Matrix Decomposition Problem is Complete for the Average Case FOCS'90, 31st Annual Symposium on Foundations of Computer Science IEEE Computer Society Press, 1990, 802-811. The first algebraic average-case complete problem is presented. See [97] in this connection.

[87] Yuri Gurevich and Saharon Shelah Nondeterministic linear-time tasks may require substantially nonlinear deterministic time in the case of sublinear work space JACM 37:3 (1990), 674-687. We develop a technique to prove time-space trade-offs and exhibit natural search problems (e.g. the Log-size Clique Problem) that are solvable in linear time on a polylog-space (and sometimes even log-space) nondeterministic Turing machine, but no deterministic machine (in a very general sense of this term) with a sequential-access read-only input tape and work space n^σ solves the problem within time n^(1+τ) if σ + 2τ < 1/2.

[86] Yuri Gurevich Games people play in "Collected Works of J. Richard Büchi" ed. Saunders Mac Lane and Dirk Siefkes Springer-Verlag, 1990, 517-524.
[85] Yuri Gurevich The Challenger-Solver game: Variations on the Theme of P=?NP Originally in Bulletin of the European Association for Theoretical Computer Science October 1989, 112-121 Reprinted in 1993 World Scientific book Current Trends in Theoretical Computer Science pages 245-253 The question P=?NP is the focal point of much research in theoretical computer science. But is it the right question? We find it biased toward the positive answer. It is conceivable that the negative answer is established without providing much evidence for the difficulty of NP problems in practical terms. We argue in favor of an alternative to P=?NP based on the average-case complexity.

[84] Yuri Gurevich Infinite Games Originally in Bulletin of the European Association for Theoretical Computer Science June 1989, 93-100 Reprinted in 1993 World Scientific book Current Trends in Theoretical Computer Science pages 235-244 Infinite games are widely used in mathematical logic. Recently infinite games were used in connection with concurrent computational processes that do not necessarily terminate. For example, an operating system may be seen as playing a game "against" the disruptive forces of users. The classical question of the existence of winning strategies turns out to be of importance to practice. We explain a relevant part of the infinite game theory.

[83] Miklos Ajtai and Yuri Gurevich Datalog vs First-Order Logic J. of Computer and System Sciences, Vol. 49, No. 3, Dec. 1994, 562-588 (Extended abstract in FOCS'89, 142-147.) Our main result is that every datalog query expressible in first-order logic is bounded; in terms of classical model theory this is a kind of compactness theorem for finite structures. In addition, we give some counter-examples delimiting the main result. In the infinite case, that is if structures may be infinite, the main result is a simple consequence of the compactness theorem. The finite case is much harder.
It turned out, as Bruno Courcelle pointed out to us, that we reinvented the notion of finite width to establish the main result.

[82] Yuri Gurevich and Saharon Shelah Nearly linear time Symposium on Logical Foundations of Computer Science in Pereslavl-Zalessky, USSR Springer Lecture Notes in Computer Science 363 (1989) 108-118 The notion of linear time is very sensitive to the machine model. In this connection we introduce and study the class NLT of functions computable in nearly linear time n(log n)^O(1) on random access computers or any other "reasonable" machine model (with the standard multitape Turing machine model being "unreasonable" for that low complexity class). This gives a very robust approximation to the notion of linear time. In particular, we give a machine-independent definition of NLT and a natural problem complete for NLT.

[81] Andreas Blass and Yuri Gurevich On Matiyasevich's non-traditional approach to search problems Information Processing Letters 32 (1989), 41-45. Yuri Matiyasevich, famous for completing the solution of Hilbert's tenth problem, suggested using differential equations, inspired by real phenomena in nature, to solve the satisfiability problem for boolean formulas. The initial conditions are chosen at random and it is expected that, in the case of a satisfiable formula, the process, described by differential equations, converges quickly to an equilibrium which yields a satisfying assignment. A success of the program would establish NP=R. Attracted by the approach, we discover serious complications with it.

[80] Yuri Gurevich and Saharon Shelah Time polynomial in input or output J. Symbolic Logic 54:3 (1989), 1083-1088. There are simple algorithms with large outputs; it is misleading to measure the time complexity of such algorithms in terms of inputs only. In this connection, we introduce the class PIO of functions computable in time polynomial in the maximum of the size of input and the size of output, and some other similar classes.
We observe that there is no notation system for any extension of the class of total functions computable on Turing machines in time linear in output and give a machine-independent definition of partial PIO functions.

[79] Yuri Gurevich and Saharon Shelah On the strength of the interpretation method Journal of Symbolic Logic 54:2 (1989), 305-323. The interpretation method is the main tool in proving negative results related to logical theories. We examine the strength of the interpretation method and find a serious limitation. In one of our previous papers [57], we were able to reduce true arithmetic to the monadic theory of the real line. Here we show that true arithmetic cannot be interpreted in the monadic theory of the real line.

[78] Yuri Gurevich On Kolmogorov machines and related issues Originally in Bulletin of the European Association for Theoretical Computer Science Number 35, June 1988, 71-82 Reprinted in 1993 World Scientific book Current Trends in Theoretical Computer Science pages 225-234 One contribution of the article was to formulate the Kolmogorov-Uspensky thesis. In "To the Definition of an Algorithm" [Uspekhi Mat. Nauk 13:4 (1958), 3-28 (Russian)] Kolmogorov and Uspensky wrote that they just wanted to comprehend the notions of computable functions and algorithms, and to convince themselves that there is no way to extend the notion of computable function. In fact, they did more than that. It seems that their thesis was this: every computation, performing only one restricted local action at a time, can be viewed as (not only being simulated by, but actually being) the computation of an appropriate KU machine (in the more general form). Uspensky agreed [J. Symb. Logic 57 (1992), page 396]. Another contribution of the paper was a popularization of a beautiful theorem of Leonid Levin:
For every computable function F(w) = x from binary strings to binary strings, there exists a KU algorithm A such that A conclusively inverts F and (Time of A on x) = O(Time of B on x) for every KU algorithm B that conclusively inverts F. The theorem had been virtually unknown, partially because it appeared (without a proof) in his article "Universal Search Problems" [Problems of Information Transmission 9:3 (1973), 265-266] which is hard to read.

[77] Yuri Gurevich and Jim Morris Algebraic operational semantics and Modula-2 CSL'87, 1st Workshop on Computer Science Logic Springer Lecture Notes in Computer Science 329 (1988) 81-101 Jim Morris was a PhD student of Yuri Gurevich at the Electrical Engineering and Computer Science Department of the University of Michigan, the first PhD student working on the abstract state machine project. This is an extended abstract of Jim Morris's 1988 PhD thesis (with the same title) and the first example of the ASM semantics of a whole programming language.

[76] Yuri Gurevich Average case completeness J. Computer and System Sciences 42:3, June 1991, 346-398 (a special issue with selected papers of FOCS'87) We explain and advance Levin's theory of average case complexity. In particular, we exhibit the second natural average-case-complete problem and prove that deterministic reductions are inadequate.

[75] Yuri Gurevich Algorithms in the world of bounded resources In "The universal Turing machine - a half-century story" (ed. R. Herken), Oxford University Press, 1988, 407-416. In the classical theory of algorithms, one addresses a computing agent with unbounded resources. We argue in favor of a more realistic theory of multiple addressees with limited resources.

[74] Yuri Gurevich Logic and the Challenge of Computer Science A chapter in the book Current Trends in Theoretical Computer Science ed. Egon Boerger, Computer Science Press, 1988, pages 1-57. The chapter consists of two quite different parts.
The first part is a survey (including some new results) on finite model theory. One particular point deserves special attention. In computer science, the standard computation model is the Turing machine whose inputs are strings; other algorithm inputs are supposed to be encoded with strings. However, in combinatorics, database theory, etc., one usually does not distinguish between isomorphic structures (graphs, databases, etc.). For example, a database query should provide information about the database rather than its implementation. In such cases, there is a problem with string presentation of input objects: there is no known, easily computable string encoding of isomorphism classes of structures. Is there a computation model whose machines do not distinguish between isomorphic structures and compute exactly PTime properties? The question is intimately related to a question by Chandra and Harel in "Structure and complexity of relational queries", J. Comput. and System Sciences 25 (1982), 99-128. We formalize the question as the question of whether there exists a logic that captures polynomial time (without presuming the presence of a linear order) and conjecture the negative answer.

The first part is based on lectures given at the 1984 Udine Summer School on Computation Theory and summarized in the technical report "Logic and the Challenge of Computer Science", CRL-TR-10-85, Sep. 1985, Computing Research Lab, University of Michigan, Ann Arbor, Michigan.

In the second part, we introduce a new computation model: evolving algebras (later renamed abstract state machines). This new approach to semantics of computations, and in particular to semantics of programming languages, emphasizes dynamic and resource-bounded aspects of computation. It is illustrated with the example of Pascal. The technical report mentioned above contained an earlier version of part 2. The final version was written in 1986.
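The isomorphism-invariance point can be made concrete with a toy example (ours, not the chapter's): an invariant query, such as the sorted degree sequence of a graph, returns the same answer on any two presentations of the same abstract graph, while a query that mentions a particular vertex label depends on the encoding.

```python
# Toy illustration (not from the chapter) of isomorphism-invariant
# queries: two presentations of the same abstract graph, a path on
# three vertices, with different vertex labelings.
g1 = {(0, 1), (1, 2)}
g2 = {(2, 0), (0, 1)}

def vertices(g):
    return {v for e in g for v in e}

def degree(g, v):
    return sum(1 for e in g if v in e)

# Isomorphism-invariant query: the sorted degree sequence.
def degree_sequence(g):
    return sorted(degree(g, v) for v in vertices(g))

assert degree_sequence(g1) == degree_sequence(g2) == [1, 1, 2]

# Label-dependent "query": it distinguishes the two presentations,
# so it is not a property of the abstract graph.
print(degree(g1, 0), degree(g2, 0))  # prints: 1 2
```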
[73] Andreas Blass and Yuri Gurevich Existential fixed-point logic In "Logic and complexity" (ed. E. Boerger) Springer Lecture Notes in Computer Science 270 (1987) 20-36 The purpose of this paper is to draw attention to existential fixed-point logic (EFPL). Among other things, we show the following.

• If a structure A satisfies an EFPL formula φ then A has a finite subset F such that every structure that coincides with A on F satisfies φ.

• Using EFPL instead of first-order logic removes the expressivity hypothesis in Cook's completeness theorem for Hoare logic.

• In the presence of a successor relation, EFPL captures polynomial time.

[72] Miklos Ajtai and Yuri Gurevich Monotone versus positive J. of ACM, 34, 1987, 1004-1015 A number of famous theorems about first-order logic were disproved in [60] in the case of finite structures, but Lyndon's theorem on monotone vs. positive resisted the attack. It is defeated here. The counter-example gives a uniform sequence of constant-depth polynomial-size (functionally) monotone boolean circuits not equivalent to any (however nonuniform) sequence of constant-depth polynomial-size positive boolean circuits.

[71] Yuri Gurevich and Saharon Shelah Expected computation time for Hamiltonian Path Problem SIAM J. on Computing 16:3 (1987) 486-502 Let G(n,p) be a random graph with n vertices and the edge probability p. We give an algorithm for the Hamiltonian Path Problem whose expected run-time on G(n,p) is cn/p + o(n) for any fixed p. This is the best possible result for the case of fixed p. The expected run-time of a slightly modified version of the algorithm remains polynomial if p = p(n) > n^(-c) where c is positive and small. The paper is based on a 1984 technical report.
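For orientation only, the sketch below is a naive backtracking search for a Hamiltonian path in G(n,p). It is emphatically not the algorithm of the paper (whose point is the cn/p + o(n) expected run-time); it merely illustrates the problem setting, and on dense random graphs it typically succeeds with little backtracking.

```python
import random

# Build a random graph G(n, p): each of the n*(n-1)/2 possible edges
# is present independently with probability p.
def random_graph(n, p, seed=0):
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

# Naive backtracking: try to extend a simple path one vertex at a
# time; exponential in the worst case, but usually fast when p is
# large because almost every greedy extension succeeds.
def hamiltonian_path(adj):
    n = len(adj)
    def extend(path, used):
        if len(path) == n:
            return path
        for v in adj[path[-1]]:
            if v not in used:
                result = extend(path + [v], used | {v})
                if result:
                    return result
        return None
    for start in range(n):
        path = extend([start], {start})
        if path:
            return path
    return None

adj = random_graph(25, 0.5)
path = hamiltonian_path(adj)
if path:  # verify: all vertices once, consecutive ones adjacent
    assert len(set(path)) == len(adj)
    assert all(b in adj[a] for a, b in zip(path, path[1:]))
```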
[70] Yuri Gurevich and Saharon Shelah Fixed-point extensions of first-order logic Annals of Pure and Applied Logic 32 (1986), 265-280 We prove that the three extensions of first-order logic by means of positive, monotone and inflationary inductions have the same expressive power in the case of finite structures. An extended abstract of the above in Proc. 26th Annual Symposium on Foundations of Computer Science, IEEE Computer Society Press, 1985, 346-353 contains some additions.

[69] Yuri Gurevich What does O(n) mean? SIGACT NEWS 17 (1986), Number 4, 61-63

[68] Amnon Barak, Zvi Drezner and Yuri Gurevich On the number of active nodes in a multicomputer system Networks 16 (1986), 275-282 Simple probabilistic algorithms enable each active node to find estimates of the fraction of active nodes in the system of n nodes (with a direct communication link between any two nodes) in time o

[67] L. Denenberg, Y. Gurevich and S. Shelah Definability by constant-depth polynomial-size circuits Information and Control 70 (1986), 216-240 We investigate the expressive power of constant-depth polynomial-size circuit models. In particular, we construct a circuit model whose expressive power is precisely that of first-order logic.

[66] Andreas Blass and Yuri Gurevich Henkin quantifiers and complete problems Annals of Pure and Applied Logic 32 (1986), 1-16 We show that almost any non-linear quantifier, applied to quantifier-free first-order formulas, suffices to express an NP-complete predicate; the remaining non-linear quantifiers express exactly co-NL predicates (NL is Nondeterministic Log-space).

[65] Andreas Blass, Yuri Gurevich and D. Kozen A zero-one law for logic with a fixed-point operator Information and Control 67 (1985), 70-90 The zero-one law, known to hold for first-order logic but not for monadic or even existential monadic second-order logic, is generalized to the extension of first-order logic by the least (or iterative) fixed-point operator.
We also show that the problem of deciding, for any sentence π, whether it is almost sure is complete for exponential time, if we consider only π's with a fixed finite vocabulary (or vocabularies of bounded arity) and complete for double-exponential time if π is unrestricted.

[64.5] Yuri Gurevich A New Thesis Abstracts, American Mathematical Society Vol. 6, no. 4 (August 1985), page 317, abstract 85T-68-203. The first announcement of the "new thesis", later known as the Abstract State Machine thesis.

[64] Yuri Gurevich Monadic second-order theories Chapter in book Model-Theoretical Logics eds. Jon Barwise and Sol Feferman Springer-Verlag, Perspective in Mathematical Logic, 1985, 479-506 In this chapter we make a case for monadic second-order logic (that is to say, for the extension of first-order logic allowing quantification over monadic predicates) as a good source of theories that are both expressive and manageable. We illustrate two powerful decidability techniques here. One makes use of automata and games. The other is an offshoot of a composition theory where one composes models as well as their theories. Monadic second-order logic appears to be the most natural match for the composition theory. Undecidability proofs must be thought out anew in this area; for, whereas true first-order arithmetic is reducible to the monadic theory of the real line R, it is nevertheless not interpretable in the monadic theory of R. A quite unusual undecidability method is another subject of this chapter. In the last section we briefly review the history of the methods thus far developed and mention numerous results obtained by the methods.

[63] Yuri Gurevich and Saharon Shelah The decision problem for branching time logic Journal of Symbolic Logic, 50 (1985), 668-681 Define a tree to be any partial order satisfying the following requirement: if (y < x and z < x) then (y < z or y = z or y > z).
The main result of the two papers [62, 63] is the decidability of the theory of trees with additional unary predicates and quantification over nodes and branches. This gives the richest decidable temporal logic.

[62] Yuri Gurevich and Saharon Shelah To the decision problem for branching time logic In "Foundations of Logic and Linguistics: Problems and their Solutions" (ed. P. Weingartner and G. Dold), Plenum, 1985, 181-198

[61] J. P. Burgess and Yuri Gurevich The decision problem for linear temporal logic Notre Dame Journal of Symbolic Logic 26 (1985), 115-128 The main result is the decidability of the temporal theory of the real order.

[60.5] Yuri Gurevich Reconsidering Turing's thesis (toward more realistic semantics of programs) Technical report CRL-TR-36-84 University of Michigan, September 1984 The earliest publication on the abstract state machine project.

[60] Yuri Gurevich Toward logic tailored for computational complexity In "Computation and Proof Theory" (Ed. M. Richter et al.) Springer Lecture Notes in Math. 1104 (1984), 175-216 The pathos of this paper is that classical logic, developed to confront the infinite, is ill-prepared to deal with finite structures, whereas finite structures, e.g. databases, are of so much importance in computer science. We show that famous theorems about first-order logic fail in the finite case, and discuss various alternatives to classical logic. The message has been heard.

[59] Yuri Gurevich and H. R. Lewis A logic for constant depth circuits Information and Control 61 (1984), 65-74 We present an extension of first-order logic that captures precisely the computational complexity of (the uniform sequences of) constant-depth polynomial-size circuits.

[58] W. D.
Goldfarb, Yuri Gurevich and Saharon Shelah A decidable subclass of the minimal Goedel case with identity Journal of Symbolic Logic 49 (1984), 1253-1261

[57] Yuri Gurevich and Saharon Shelah The monadic theory and the `next world' Israel Journal of Mathematics 49 (1984), 55-68 Let r be a Cohen real over a model V of ZFC; then the second-order V[r]-theory of the integers (even the reals if V satisfies CH) is interpretable in the monadic V-theory of the real line. Contrast this with the result of [79].

[56] Andreas Blass and Yuri Gurevich Equivalence relations, invariants, and normal forms, II Springer Lecture Notes in Computer Science 171 (1984), 24-42 We consider the questions of whether polynomial time solutions for the easier problems of the list in [55] yield NP solutions for the harder ones, or vice versa. We show that affirmative answers to several of these questions are equivalent to natural principles like NP = co-NP, (NP ∩ co-NP) = P, and the shrinking principle for NP sets. We supplement known oracles with enough new ones to show that all questions considered have negative answers relative to some oracles. In other words, these questions cannot be answered affirmatively by means of relativizable polynomial-time Turing reductions. Finally, we show that the analogous questions in the framework where Borel sets play the role of polynomial time decidable sets have negative answers.
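A toy instance (ours, not from these papers) helps fix the terminology used in [56] and in the next entry: for anagram equivalence on strings, sorting the letters solves the recognition, invariant, normal form and first member problems all at once; the papers show that in general these problems can separate.

```python
# Toy equivalence relation (our illustration, not from the papers):
# two strings are equivalent iff they are anagrams of each other.
def recognize(x, y):
    # Recognition problem: decide whether x and y are equivalent.
    return sorted(x) == sorted(y)

def invariant(x):
    # Invariant problem: a function constant exactly on the classes.
    return "".join(sorted(x))

# For anagrams the invariant is itself a member of the class, and in
# fact the lexicographically first member; so the same function also
# solves the normal form and first member problems.
assert recognize("listen", "silent")
assert not recognize("cat", "dog")
assert invariant("listen") == invariant("silent") == "eilnst"
```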
[55] Andreas Blass and Yuri Gurevich Equivalence relations, invariants, and normal forms SIAM Journal on Computing 13 (1984), 682-689 For an equivalence relation E on the words in some finite alphabet, we consider the recognition problem (decide whether two words are equivalent), the invariant problem (calculate a function constant on precisely the equivalence classes), the normal form problem (calculate a particular member of an equivalence class, given an arbitrary member) and the first member problem (calculate the first member of an equivalence class, given an arbitrary member). A solution for any of these problems yields solutions for all earlier ones in the list. We show that, for polynomial time recognizable E, the first member problem is always in the class Δ^P_2 (solvable in polynomial time with an oracle for an NP set) and can be complete for this class even when the normal form problem is solvable in polynomial time. To distinguish between the other problems in the list, we construct an E whose invariant problem is not solvable in polynomial time with an oracle for E (although the first member problem is in NP^E ∩ co-NP^E), and we construct an E whose normal form problem is not solvable in polynomial time with an oracle for a certain solution of its invariant problem.

[54] Yuri Gurevich, L. J. Stockmeyer and Uzi Vishkin Solving NP-hard problems on graphs that are almost trees, and an application to facility location problems Journal of ACM 31 (1984), 459-473 Imagine that you need to put service stations (or McDonald's restaurants) on roads in such a way that every resident is within, say, 10 miles of the nearest station. What is the minimal number of stations, and how does one find an optimal placement? In general, the problem is NP-hard; however, in important special cases there are feasible solutions.

[53] Yuri Gurevich and H. R.
Lewis The word problem for cancellation semigroups with zero Journal of Symbolic Logic 49 (1984), 184-191 In 1947, Post showed the word problem for semigroups to be undecidable. In 1950, Turing strengthened this result to cancellation semigroups, i.e. semigroups satisfying the cancellation property

(1) if xy = xz or yx = zx then y = z.

No semigroup with zero satisfies (1). The cancellation property for semigroups with zero and identity is

(2) if xy = xz ≠ 0 or yx = zx ≠ 0 then y = z.

The cancellation property for semigroups with zero but without identity is the conjunction of (2) and

(3) if xy = x or yx = x then x = 0.

Whether or not a semigroup with zero has an identity, we refer to it as a cancellation semigroup with zero if it satisfies the appropriate cancellation property. It is shown in [8] that the word problem for finite semigroups is undecidable. Here we show that the word problem is undecidable for finite cancellation semigroups with zero; this holds for semigroups with identity and also for semigroups without identity. (In fact, we prove a stronger effective inseparability result.) This provides the necessary mathematical foundation for [41].

[52] Yuri Gurevich and P. H. Schmitt The theory of ordered abelian groups does not have the independence property Transactions of American Math. Society 284 (1984), 171-182

[51] Yuri Gurevich Algebras of feasible functions 24th Annual Symposium on Foundations of Computer Science IEEE Computer Society Press, 1983, 210-214 We prove that, under a natural interpretation over finite domains, (i) a function is primitive recursive if and only if it is logspace computable, and (ii) a function is general recursive if and only if it is polynomial time computable.

[50] Yuri Gurevich Critiquing a critique of Hoare's programming logics Communications of ACM, May 1983, p. 385 (Tech. communication)

[49] A. M. W. Glass and Yuri Gurevich The word problem for lattice-ordered groups Transactions of American Math.
Society 280 (1983), 127-138 The problem is proven to be undecidable.

[48] Yuri Gurevich and Saharon Shelah Random models and the Goedel case of the decision problem Journal of Symbolic Logic 48 (1983), 1120-1124 We replace Goedel's sophisticated combinatorial argument with a simple probabilistic one.

[47] Yuri Gurevich and Saharon Shelah Rabin's Uniformization Problem Journal of Symbolic Logic 48 (1983), 1105-1119 The negative solution is given.

[46] Yuri Gurevich and Saharon Shelah Interpreting second-order logic in the monadic theory of order Journal of Symbolic Logic 48 (1983), 816-828 Under a weak set-theoretic assumption, we interpret full second-order logic in the monadic theory of order.

[45] Yuri Gurevich, Menachem Magidor and Saharon Shelah The monadic theory of ω_2 Journal of Symbolic Logic 48 (1983), 387-398 In a series of papers, Büchi proved the decidability of the monadic (second-order) theory of ω_0, of all countable ordinals, of ω_1, and finally of all ordinals < ω_2. Here, assuming the consistency of a weakly compact cardinal, we prove that, in different set-theoretic worlds, the monadic theory of ω_2 may be arbitrarily difficult (or easy).

[44] Yuri Gurevich Decision problem for separated distributive lattices Journal of Symbolic Logic 48 (1983), 193-196 It is well known that for all recursively enumerable sets X_1, X_2 there are disjoint recursively enumerable sets Y_1, Y_2 such that Y_i is a subset of X_i and (Y_1 ∪ Y_2) = (X_1 ∪ X_2). Alistair Lachlan called distributive lattices satisfying this property separated. He proved that the first-order theory of finite separated distributive lattices is decidable. We prove here that the first-order theory of all separated distributive lattices is undecidable.

[43] E. M. Clarke, N. Francez, Y. Gurevich and P. Sistla Can message buffers be characterized in linear temporal logic?
Symposium on Principles of Distributed Computing, ACM, 1982, 148-156 In the case of unbounded buffers, the negative answer follows from a result in [28].

[42] Andreas Blass and Yuri Gurevich On the unique satisfiability problem Information and Control 55 (1982), 80-88 Papadimitriou and Yannakakis were interested in whether Unique Sat is hard for {L - L' : L, L' are in NP} when NP differs from co-NP (otherwise the answer is obvious). We show that this is true under one oracle and false under another.

[41] Yuri Gurevich and H. R. Lewis The inference problem for template dependencies Information and Control 55 (1982), 69-79 Answering a question of Jeffrey Ullman, we prove that the problem in the title is undecidable.

[40] Yuri Gurevich and Leo Harrington Automata, trees, and games 14th Annual Symposium on Theory of Computing, ACM, 1982, 60-65 We prove a forgetful determinacy theorem saying that, for a wide class of infinitary games, one of the players has a winning strategy that is virtually memoryless: the player has to remember only boundedly many bits of information. We use forgetful determinacy to give a transparent proof of Rabin's celebrated result that the monadic second-order theory of the infinite tree is decidable.

[39] Yuri Gurevich A review of two books on the decision problem Bulletin of the American Mathematical Society 7 (1982), 273-277

[38] I. Gertsbakh and Y. Gurevich Homogeneous optimal fleet Transportation Research 16B (1982), 459-470

[37] Yuri Gurevich and Saharon Shelah Monadic theory of order and topology in ZFC Annals of Mathematical Logic 23 (1982), 179-198 In a 1975 Annals of Mathematics paper, Shelah interpreted true first-order arithmetic in the monadic theory of order under the assumption of the continuum hypothesis. The assumption is removed here.

[36] Yuri Gurevich Existential interpretation, II Archiv für Math. Logik und Grundlagenforschung 22 (1982), 103-120

[35] S. O. Aanderaa, E.
Boerger and Yuri Gurevich Prefix classes of Krom formulas with identity Archiv für Math. Logik und Grundlagenforschung 22 (1982), 43-49 [34] Yuri Gurevich Crumbly spaces Sixth International Congress for Logic, Methodology and Philosophy of Science (1979) North-Holland, 1982, 179-191 Answering a question of Henson, Jockusch, Rubel and Takeuti, we prove that the rationals, the irrationals and the Cantor set are all elementarily equivalent as topological spaces. "Unfortunately, Gurevich's proof ... contains a small gap, which we take the opportunity to fill. The oversight does not affect the core of the argument, but occurs at an 'obvious' place," kindly wrote Lutz Heindorf in his article "Moderate Families in Boolean Algebras" in Annals of Pure and Applied Logic 57 (1992), 217-250. [33] A. M. W. Glass, Y. Gurevich, W. C. Holland and M. Jambu-Giraudet Elementary theory of automorphism groups of doubly homogeneous chains Springer Lecture Notes in Mathematics 859 (1981), 67-82 [32] A. M. W. Glass, Yuri Gurevich, W. C. Holland and Saharon Shelah Rigid homogeneous chains Math. Proceedings of Cambridge Phil. Society 89 (1981), 7-17 [31] Yuri Gurevich and W. C. Holland Recognizing the real line Transactions of American Math. Society 265 (1981), 527-534 We exhibit a first-order statement about the automorphism group of the real line that characterizes the real line among all homogeneous chains. [30] Yuri Gurevich Two notes on formalized topology Fundamenta Mathematicae 57 (1980), 145-148 [29] Yuri Gurevich and Saharon Shelah Modest theory of short chains, II Journal of Symbolic Logic 44 (1979), 491-502 We analyze the monadic theory of the rational line and the theory of the real line with quantification over "small" subsets. The results are in some sense the best possible. [28] Yuri Gurevich Modest theory of short chains, I Journal of Symbolic Logic 44 (1979), 481-490 The composition (or decomposition) method of Feferman-Vaught is generalized and made much more applicable.
[27] Yuri Gurevich Monadic theory of order and topology, II Israel Journal of Mathematics 34 (1979), 45-71 Assuming the Continuum Hypothesis, we interpret the theory of (the cardinal of) continuum with quantification over constructible (monadic, dyadic, etc.) predicates in the monadic (second-order) theory of the real line, in the monadic theory of any other short non-modest chain, in the monadic topology of Cantor's Discontinuum and some other monadic theories. We exhibit monadic sentences defining the real line up to isomorphism under some set-theoretic assumptions. There are some other results. [26] Yuri Gurevich Monadic theory of order and topology, I Israel Journal of Mathematics 27 (1977), 299-319 We disprove two of Shelah's conjectures and prove some more results on the monadic theory of linear orderings and topological spaces. In particular, if the Continuum Hypothesis holds then there exist monadic formulae expressing the predicates "X is countable" and "X is meager" over the real line and over Cantor's Discontinuum. [25] Yuri Gurevich Expanded theory of ordered abelian groups Annals of Mathematical Logic 12 (1977), 193-228. The first-order theory of ordered abelian groups was analyzed in [3]. However, algebraic results on ordered abelian groups in the literature usually cannot be stated in first-order logic. Typically they involve so-called convex subgroups. Here we introduce an expanded theory of ordered abelian groups that allows quantification over convex subgroups and expresses almost all relevant algebra. We classify ordered abelian groups by the properties expressible in the expanded theory, and we prove that the expanded theory of ordered abelian groups is decidable. Curiously, the decidability proof is simpler than that in [3]. Furthermore, the decision algorithm is primitive recursive.
[24] Yuri Gurevich Intuitionistic logic with strong negation Studia Logica 36 (1977), 49-59 Classical logic is symmetric with respect to True and False but intuitionistic logic is not. We introduce and study a conservative extension of first-order intuitionistic logic that is symmetric with respect to True and False. [23] I. Gertsbakh and Y. Gurevich Constructing an optimal fleet for a transportation schedule Transportation Science 11 (1977), 20-36 A general method for constructing all optimal fleets is described. [22] Yuri Gurevich Semi-conservative reduction Archiv für Math. Logik und Grundlagenforschung 18 (1976), 23-25 [21] Yuri Gurevich The decision problem for standard classes Journal of Symbolic Logic 41 (1976), 460-464 The classification of prefix-signature fragments of (first-order) predicate logic with equality, completed in [7], is extended to first-order logic with equality and functions. One case was solved (confirming a conjecture of this author) by Saharon Shelah. [20] Yuri Gurevich The decision problem for first-order logic Manuscript, 1971, 124 pages (Russian) This was supposed to be a book but the publication was aborted when the author left the USSR. A German translation can be found in Universitaetsbibliothek Dortmund (Ostsprachen-Übersetzungsdienst) und TIB Hannover. [19] Yuri Gurevich The decision problem for the expanded theory of ordered abelian groups Soviet Institute of Scientific and Technical Information (VINITI) number 6708-73, 1974, 1-31 (Russian) [18] Yuri Gurevich Formulas with one universal quantifier pages 97-110 in book "Selected Questions of Algebra and Logic" Volume dedicated to the memory of A. I. Malcev Publishing house "Nauka", Siberian Branch, Novosibirsk, 1973 (Russian) The main result, announced in [9], is that the E*AE* class of first-order logic with functions but without equality has the finite model property (and therefore is decidable for satisfiability and finite satisfiability).
This result completes the solution in [9] for the classical decision problem for first-order logic with functions but without equality. [17] Yuri Gurevich and Tristan Turashvili Strengthening a result of Suranyi Bulletin of the Georgian Academy of Sciences Volume 70 (1973), 289-292 (Russian) [16a] Yuri Gurevich and Igor O. Koriakov A remark on Berger's paper on the domino problem Siberian Mathematical Journal, 13 (1972), 319-321 (English) This is an English translation of [16]. [16] Yuri Gurevich and Igor O. Koriakov A remark on Berger's paper on the domino problem Siberian Mathematical Journal, 13 (1972), 459-463 (Russian) Berger proved that the decision problem for the unrestricted tiling problem (a.k.a. the unrestricted domino problem) is undecidable. We strengthen Berger's result. The following two collections of domino sets are recursively inseparable: (1) those that can tile the plane periodically (equivalently, can tile a torus), and (2) those that cannot tile the plane at all. It follows that the collection of domino sets that can tile a torus is undecidable. [15] Yuri Gurevich Minsky machines and the AEA&E* case of the decision problem Trans. of Ural University 7:3 (1970), 77-83 (Russian) An observation that Minsky machines may be more convenient than Turing machines for reduction purposes is illustrated by simplifying the proof from [7] that some [AEA&E*,(k,1)] is a reduction class. [14] Yuri Gurevich The decision problem for decision problems Algebra and Logic 8 (1969) Pages 640-642 of the Russian original Pages 362-363 of the English translation Consider the collection D of first-order formulas α such that the first-order theory with axiom α is decidable. It is proven that D is neither r.e. nor co-r.e. (The second part has been known.) [13] Yuri Gurevich The decision problem for logic of predicates and operations Algebra and Logic 8 (1969) Pages 284-308 of the Russian original Pages 160-174 of the English translation The article consists of two chapters.
In the first part of the first chapter, the author rediscovers well-partial-orderings and well-quasi-orderings, which he calls tight partial orders and tight quasi orders, and develops a theory of such orderings. (In this connection, it may be appropriate to point out Joseph B. Kruskal's article "The theory of well-quasi-ordering: A frequently discovered concept" in J. Comb. Theory A, vol. 13 (1972), 297-305.) To understand the idea behind the term "tight", think of a boot: you cannot move your foot far down or sidewise -- only up. This is similar to tight partial orders where infinite sequences have no infinite descending subsequences, no infinite antichains, but always have infinite ascending subsequences. In the second part of the first chapter, the author applies the theory of tight orders to prove a classifiability theorem for prefix-vocabulary classes of first-order logic. The main part of the classifiability theorem is that the partial order of prefix-vocabulary classes (ordered by inclusion) is tight. But there is an additional useful part of the classifiability theorem, about the form of the minimal classes outside a downward closed collection, e.g. the minimal classes that are undecidable in one way or another. In the second chapter, the author completes the decision problem for (the prefix-vocabulary fragments of) pure logic of predicates and functions, though the treatment of the most difficult decidable class is deferred to [18]. In particular, the classes [∀^2,(0,1),(1)] and [∀^2,(1),(0,1)] are proved to be conservative reduction classes. (This abstract is written in January 2006.)
[12] Yuri Gurevich The decision problem for some algebraic theories Doctor of Physico-Mathematical Sciences Thesis Sverdlovsk, 1968 (Russian) [11] Yuri Gurevich A new decision procedure for the theory of ordered abelian groups Algebra and Logic 6:5 (1967), 5-6 (Russian) [10a] Yuri Gurevich Lattice-ordered abelian groups and K-lineals Soviet Mathematics 8 (1967), 987-989 This is an English translation of [10]. [10] Yuri Gurevich Lattice-ordered abelian groups and K-lineals Doklady 175 (1967), 1213-1215 (Russian) [9] Yuri Gurevich Hereditary undecidability of the theory of lattice-ordered abelian groups Algebra and Logic 6:1 (1967), 45-62 (Russian) Delimiting the decidability result of [3] for linearly ordered abelian groups and answering Malcev's question, we prove the theorem in the title. [8] Yuri Gurevich The word problem for some classes of semigroups Algebra and Logic 5:2 (1966), 25-35 (Russian) The word problem for finite semigroups is the following decision problem: given some number n of word pairs (u[1],v[1]), ..., (u[n],v[n]) and an additional word pair (u[0],v[0]), decide whether the n equations u[1]=v[1],..., u[n]=v[n] imply the additional equation u[0]=v[0] in all finite semigroups. We prove that the word problem for finite semigroups is undecidable. In fact, the undecidability result holds for a particular premise E = (u[1]=v[1] and ... and u[n]=v[n]). Furthermore, this particular E can be chosen so that the following classes K1 and K2 of word pairs are recursively inseparable: K1 is the class of word pairs (u[0],v[0]) such that E implies u[0]=v[0] in every periodic semigroup. K2 is the class of word pairs (u[0],v[0]) such that E fails to imply u[0]=v[0] in some finite semigroup. The paper contains some additional undecidability results. [7] Yuri Gurevich Recognizing satisfiability of predicate formulas Algebra and Logic 5:2 (1966), 25-35 (Russian) This is a detailed exposition of the results announced in [6].
[6a] Yuri Gurevich The decision problem for predicate logic Soviet Mathematics 7 (1966), 669-670 This is an English translation of [6]. [6] Yuri Gurevich The decision problem for predicate logic Doklady 168 (1966), 510-511 (Russian) The ∀∃∀∃* fragment of pure predicate logic with one binary and no unary predicates is a conservative reduction class and therefore undecidable for satisfiability and for finite satisfiability. This completes the solution of the classical decision problem for pure predicate logic: the prefix-vocabulary classes of pure predicate logic are fully classified into decidable and undecidable. See a more complete exposition in [7]. [5a] Yuri Gurevich On the decision problem for pure predicate logic Soviet Mathematics 7 (1966), 217-219 This is an English translation of [5]. [5] Yuri Gurevich On the decision problem for pure predicate logic Doklady 166 (1966), 1032-1034 (Russian) The AEAE* fragment of pure predicate logic with one binary and some number k of unary predicates is proven to be a conservative reduction class. Superseded by [6]. [4] Yuri Gurevich Existential interpretation Algebra and Logic 4:4 (1965), 71-85 (Russian) We introduce a method of existential interpretation, and we use the method to prove the undecidability of fragments of the form E^rA* of various popular first-order theories. [3a] Yuri Gurevich Elementary properties of ordered abelian groups AMS Translations 46 (1965), 165-192 This is an English translation of [3]. [3] Yuri Gurevich Elementary properties of ordered abelian groups Algebra and Logic 3:1 (1964), 5-39 (Russian, Ph.D. Thesis) We classify ordered abelian groups by first-order properties. Using that classification, we prove that the first-order theory of ordered abelian groups is decidable; this answers a question of Alfred Tarski. [2] Yuri Gurevich and Ali I.
Kokorin Universal equivalence of ordered abelian groups Algebra and Logic 2:1 (1963), 37-39 (Russian) We prove that no universal first-order property distinguishes between any two ordered abelian groups. [1] Yuri Gurevich Groups covered by proper characteristic subgroups Trans. of Ural University 4:1 (1963), 32-39 (Russian, Master Thesis)
Contemporary Mathematics 1994; 225 pp; softcover Volume: 163 ISBN-10: 0-8218-5163-2 ISBN-13: 978-0-8218-5163-0 List Price: US$53 Member Price: US$42.40 Order Code: CONM/163 This volume describes the most significant contributions made by Chinese mathematicians over the past decades in various areas of computational mathematics. Some of the results are quite important and complement Western developments in the field. The contributors to the volume range from noted senior mathematicians to promising young researchers. The topics include finite element methods, computational fluid mechanics, numerical solutions of differential equations, computational methods in dynamical systems, numerical algebra, approximation, and optimization. Containing a number of survey articles, the book provides an excellent way for Western readers to gain an understanding of the status and trends of computational mathematics in China. Graduate students and applied and computational mathematicians. • K. Feng and D. Wang -- Dynamical systems and geometric construction of algorithms • B. Guo -- Generalized stability of discretization and its applications to numerical solutions of nonlinear differential equations • H. Han -- The boundary element method for solving variational inequalities • E. Jiang -- The study of numerical analysis of matrix eigenvalue problem in China • Y. Kwok, H. Huang, and R. H. Chan -- Numerical analysis and scientific computation in Hong Kong • Q. Lin -- Interpolated finite elements and global error recovery • Z. Shi and M. Wang -- Mathematical theory of some non-standard finite element methods • J. Sun -- Some results on the field of spline theory and its applications • R. Wang -- Approximation theory and spline function • X. Wang -- A summary on continuous complexity theory • L. Ying -- Viscosity splitting schemes • D. Yu -- Natural boundary element method and adaptive boundary element method • Y. Yuan -- Trust region algorithms for nonlinear programming
Re: [sl4] Convergence of Expected Utilities with Algorithmic Probability Distributions - uh? From: Eliezer Yudkowsky (sentience@pobox.com) Date: Sun Dec 07 2008 - 16:12:52 MST On Sun, Dec 7, 2008 at 11:50 AM, Peter de Blanc <peter@spaceandgames.com> wrote: > So to consider a slightly simpler case than in the paper, suppose p and U > were both computable functions. p is a function of an _index_ of a program, > whereas U is a function of the _output_ of the program. Since the map from > programs to outputs is not a total computable function, it should seem > conceivable that U(program_n(0)) could grow more quickly than 1/p(n), > because the former is not a total function in n but the latter is. Ah, I had a similar question when reading the paper. I haven't gone through it in depth, but need to do so at some point. From this I suspect that my reply may be some variant of, "If you know the output, you can take the output into account as information in assessing the probability that you exert unique control over that universe rather than being within that universe; if you don't know the output, your utility function can't be run over it either." Eliezer Yudkowsky Research Fellow, Singularity Institute for Artificial Intelligence This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT
moving particles in a conical spiral manner

Hi there, I'm trying to move particles in a conical spiral path from top to bottom, something that looks like a tornado… in C++, so kindly help me with a solution.

You have two problems. One, how to calculate a spiral path along a cone, and two, learning steering/flocking algorithms. The first is to simply write a function f(x,y,z) that behaves like an Archimedes spiral or some such. See Wikipedia for the math behind different spirals. Then, the right algorithm would allow you to move particles along the path. See http://www.red3d.com/cwr/boids/ as an example for the flocking of birds. Of course, one should mention that tornadoes do not follow a single 3D spiral…

1. Make the particles rotate around a circle (x = cos(theta), y = sin(theta), where theta is an angle that increases at a certain speed).
2. Make the circle get larger over time by scaling x and y.
3. Make the circle move up along the Z axis over time.

Here I am writing a lengthy reply about how to construct great-looking tornadoes, but then looking at the OP question I realize Reed pretty much answered it… /me back to work.

Are you making rpg spell effects or what? ;)

No rouncer, it's a tornado effect.

I was doing some more WebGL work and thought it would be interesting to do a tornado effect, so I came up with this. I believe the effect here closely mimics how things were done in Sacrifice and Giants: Citizen Kabuto. I'm using a Bezier curve to construct and animate the funnel. I didn't have any decent textures on hand to emit a particle cloud around the funnel (which would give it a nice puffy look), so you could say it's a naked twister ;) I used 4 control points, which allows the tornado to bend in interesting ways. The screenshot below is an example I created in Blender. This curve has only 3 control points. You can use the middle and ground points to control the twist and curve of the tornado, simulating ground wind.
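For what it's worth, Reed's three steps earlier in the thread map directly to a few lines of C++. This is just an illustrative sketch; the names (`Vec3`, `spiralPosition`) and parameters are my own, not from any particular engine:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Position of one particle on a conical spiral at time t.
//   angularSpeed - how fast the particle circles (radians per second)
//   radiusGrowth - how fast the circle widens over time
//   riseSpeed    - how fast the particle climbs the Z axis
Vec3 spiralPosition(float t, float angularSpeed,
                    float radiusGrowth, float riseSpeed) {
    float theta  = angularSpeed * t;   // step 1: rotate around a circle
    float radius = radiusGrowth * t;   // step 2: scale the circle over time
    return { radius * std::cos(theta), // x = r * cos(theta)
             radius * std::sin(theta), // y = r * sin(theta)
             riseSpeed * t };          // step 3: move up along Z
}
```

Giving each particle a random phase offset on theta (and a random start time) spreads the particles around the funnel instead of leaving them all on a single spiral line.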
You calculate the mesh by using a tubing algorithm (which primarily uses Reed's circular formula). The tubing algorithm is exactly how you would construct a cylinder. You construct a circle at each level in the curve. You can add as many levels as you want for more resolution, and each circle can have any number of points. Using the direction vector of the curve at each level, you construct a circle around that point and then connect its edges with the next circle along the curve. Each control point contains a radius, which defines the size of the tube at that point. You linearly interpolate that radius as you move along the curve. So in the example above, the ground point has a radius of 2, the middle has a radius of 5, and the top has a radius of 10. The wireframe should look something like this.

Now all you have to do is write logic to move the tornado around. Due to the nature of Bezier curves and how the 3D meshes are constructed, you can also animate the birth of a tornado. You can display the top portion of the funnel and slowly build on it until it connects with the ground. Freebie effect.

On another note, this same algorithm can be used to generate trees.
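As a rough C++ sketch of the two core pieces (assuming a simplified 3-control-point quadratic Bezier spine, and with each ring laid flat in the XY plane rather than oriented along the curve's direction vector as a full implementation would do). All names here are illustrative:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Quadratic Bezier spine through control points p0, p1, p2, at u in [0,1].
Vec3 bezier(const Vec3& p0, const Vec3& p1, const Vec3& p2, float u) {
    float a = (1 - u) * (1 - u), b = 2 * (1 - u) * u, c = u * u;
    return { a * p0.x + b * p1.x + c * p2.x,
             a * p0.y + b * p1.y + c * p2.y,
             a * p0.z + b * p1.z + c * p2.z };
}

// One ring of the tube: `segments` points on a circle around `center`.
// The radius at each level would be linearly interpolated between the
// control points' radii, e.g. radius = r0 + u * (r1 - r0).
std::vector<Vec3> ring(const Vec3& center, float radius, int segments) {
    std::vector<Vec3> pts;
    pts.reserve(segments);
    for (int i = 0; i < segments; ++i) {
        float theta = 2.0f * 3.14159265f * static_cast<float>(i) / segments;
        pts.push_back({ center.x + radius * std::cos(theta),
                        center.y + radius * std::sin(theta),
                        center.z });
    }
    return pts;
}
```

Stitching consecutive rings into triangle pairs, exactly as for a cylinder, gives the funnel mesh; animating the control points then bends and moves the whole tornado.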
Plainsboro Math Tutor Find a Plainsboro Math Tutor ...Dan Discrete Math is often called "finite mathematics". It does not deal with the real numbers and their continuity. I have studied discrete math as I obtained my BS in mathematics from Ohio 14 Subjects: including algebra 1, algebra 2, calculus, geometry ...In the last few years, I have assisted various professors as a teaching assistant, which has helped me sharpen my teaching skills. I am willing to tutor a range of subjects - from history, to writing skills, to social studies, to languages (Arabic, German, ESL and Spanish). I am looking for enga... 24 Subjects: including SAT math, English, reading, writing I am an experienced high school teacher who has spent her entire educational career in one place. And I love it. Teaching is a second career for me. 9 Subjects: including algebra 2, prealgebra, algebra 1, chemistry ...I programmed in FORTRAN, which is an algebra-like language. I have a BS in teaching of HS chemistry. I published in gas-phase kinetics of NOx reduction. 6 Subjects: including algebra 2, algebra 1, trigonometry, chemistry ...I hold my students to the same standards that I set for myself, and appreciate all feedback and referrals! Time is important to all of us, so I try to be flexible with my schedule, as well as the locations in which we work. I ask that cancellations be made 24 hours ahead of time, and offer makeup classes. 9 Subjects: including trigonometry, algebra 1, algebra 2, calculus
MATH 1A: Single-Variable Calculus and Analytic Geometry Prerequisite: Mathematics 10 or Mathematics 8B with a grade of 'C' or better. Transferable: CSU; UC; CSU-GE: B4; IGETC: 2A; GAV-GE: B4; CAN: MATH 18, MATH SEQ. B Limits and continuity, analyzing the behavior and graphs of functions, derivatives, implicit differentiation, higher order derivatives, related rates and optimization word problems, Newton's Method, Fundamental Theorem of Calculus, and definite and indefinite integrals. 0572 LEC PB13 WAGMAN K 4.0 F 1250P - 0140P 1 WAGMAN K 4.0 MW 1250P - 0210P MATH 1B: Single-Variable Calculus and Analytic Geometry Prerequisite: Mathematics 1A with a grade of 'C' or better. Transferable: CSU; UC; CSU-GE: B4; IGETC: 2A; GAV-GE: B4; CAN: MATH 20, MATH SEQ. B This course is a standard second semester Calculus course covering methods of integration, applications of the integral, differential equations, parametric and polar equations, and sequences and 2132 LEC PB13 MAISCH F 4.0 TuTh 0600P - 0805P MATH 5: Introduction to Statistics Prerequisite: Mathematics 233 with a grade of 'C' or better. Transferable: CSU; UC; CSU-GE: B4; IGETC: 2A; GAV-GE: B4; CAN: STAT 2 Descriptive analysis and presentation of either single-variable data or bivariate data, probability, probability distributions, normal probability distributions, sample variability, statistical inferences involving one and two populations, analysis of variance, linear correlation and regression analysis. Statistical computer software will be extensively integrated as a tool in the description and analysis of data. 0574 LEC SS214 JUKL H 3.0 MW 0945A - 1105A 1 PB14 JUKL H 3.0 F 0900A - 0950A 0575 LEC PB3 BUTTERWORTH 3.0 TuTh 1120A - 1240P 1 PB14 BUTTERWORTH 3.0 F 1100A - 1150A 2133 LEC PB12 LITTIG A 3.0 TuTh 0600P - 0805P 1 MATH 7: Finite Mathematics Prerequisite: Mathematics 233 with a grade of 'C' or better. 
Transferable: CSU; UC; CSU-GE: B4; IGETC: 2A; GAV-GE: B4; CAN: MATH 12 Systems of linear equations and matrices, introduction to linear programming, finance, counting techniques and probability, properties of probability and applications of probability. 0576 LEC PB8 LOCKHART L 3.0 TuTh 0810A - 0930A 1 MATH 8A: First Half of Precalculus Prerequisite: Mathematics 233 with a grade of 'C' or better. Transferable: CSU; UC; CSU-GE: B4; IGETC: 2A; GAV-GE: B4 Math 8A prepares the student for the study of calculus by providing important skills in algebraic manipulation, interpretation, and problem solving at the college level. Topics will include basic algebraic concepts, complex numbers, equations and inequalities of the first and second degree, functions, and graphs, linear and quadratic equations, polynomial functions, exponential and logarithmic functions, systems of equations, matrices and determinants, right triangle trigonometry, and the Law of Sines and Cosines. 0577 LEC SS206 DRESCH M 4.0 TuTh 0945A - 1105A 1 DRESCH M 4.0 F 0920A - 1010A 0578 LEC PB3 DWYER M 4.0 F 0100P - 0150P 1 SS206 DWYER M 4.0 MW 1250P - 0210P MATH 8B: Second Half of Precalculus Prerequisite: Mathematics 8A with a grade of 'C' or better. Advisory: Math 208 Survey of Practical Geometry. Transferable: CSU; UC; CSU-GE: B4; IGETC: 2A; GAV-GE: B4 Math 8B prepares students for the study of calculus by providing important skills in algebraic manipulation, interpretation, and problem solving at the college level. Topics will include trigonometric functions, identities, inverse trigonometric functions, and equations; applications of trigonometry, vectors, complex numbers, polar and parametric equations; conic sections; sequences, series, counting principles, permutations, mathematical induction; analytic geometry, and an introduction to limits. 
2135 LEC PB13 JUKL H 4.0 M W 0600P - 0805P MATH 201A: Math for Science and Engineering Transferable: No This course will provide a combination of math study skills, introduction to scientific equipment and technology for mathematics, analysis of data from various branches of science, one or more field trips, investigation of educational plans and program choices at the transfer level. 0579 LEC PB4 DRESCH M 1.0 F 1200P - 1250P 2E MATH 201B: Math for Science and Engineering Transferable: No This course will provide a combination of math study skills, introduction to scientific equipment and technology for mathematics, collection and analysis of data from various branches of science, one or more field trips, investigation of science careers and program choices at the transfer level. 0580 LEC SS206 DWYER M 1.0 F 1200P - 1250P 2G MATH 205: Elementary Algebra Prerequisite: MATH 402 with a grade of 'C' or better or assessment test recommendation. Transferable: GAV-GE: B4 This course is a standard beginning algebra course, including algebraic expressions, linear equations and inequalities in one variable, graphing, equations and inequalities in two variables, integer exponents, polynomials, rational expressions and equations, radicals and rational exponents, and quadratic equations. Mathematics 205, 205A and 205B, and 206 have similar course content. This course may not be taken by students who have completed Mathematics 205B or 206 with a grade of "C" or better. This course may be taken for Mathematics 205B credit (2.5 units) by those students who have successfully completed Mathematics 205A with a grade of "C" or better. 
0581 LEC PB3 DWYER M 5.0 MWF 0810A - 0910A 1 DWYER M 5.0 TuTh 0583 LEC PB3 WAGMAN K 5.0 MTuWTh 0945A - 1050A 1 0584 LEC PB13 WASHBURN A 5.0 F 0920A - 1125A 1 WASHBURN A 5.0 MW 0945A - 1105A 0586 LEC PB4 DRESCH M 5.0 MTuWTh 1120A - 1225P 1 2F 0587 LEC PB12 JUKL H 5.0 MW 1250P - 0155P 1 PB13 JUKL H 5.0 TuTh 0588 LEC PB4 DACHKOVA E 5.0 MTuWTh 0400P - 0505P MATH 205A: First Half of Elementary Algebra Prerequisite: Effective Fall 2005: MATH 402 with a grade of 'C' or better or assessment test recommendation. Advisory: Concurrent enrollment in Guidance 563A is advised. Transferable: GAV-GE: B4 This course is the first half of the Elementary Algebra course. It will cover signed numbers, evaluation of expressions, ratios and proportions, solving linear equations, and applications. Graphing of lines, the slope of a line, graphing linear equations, solving systems of equations, basic rules of exponents, and operations on polynomials will be covered. 0589 LEC PB4 WASHBURN A 2.5 F 0810A - 0900A 1 WASHBURN A 2.5 MW 0810A - 0930A 0590 LEC SS206 KIM D 2.5 F 1250P - 0140P 1 KIM D 2.5 TuTh 1250P - 0210P 0825 LEC PB5 LOCKHART L 2.5 MW 0230P - 0435P Class meets 09/04/07 - 10/25/07 PB3 LOCKHART L 2.5 TuTh 2139 LEC PB4 GROVER MJ 2.5 M W 0600P - 0805P 1 MATH 205B: Second Half of Elementary Algebra Prerequisite: Math 205A with a grade of 'C' or better. Advisory: Concurrent enrollment in Guidance 563B is advised. Transferable: GAV-GE: B4 This course contains the material covered in the second half of the Elementary Algebra Course. It will cover factoring, polynomials, solving quadratic equations by factoring, rational expressions and equations, complex fractions, radicals and radical equations, solving quadratic equations by completing the square and the quadratic formula. 
Application problems are integrated throughout the course. 0591 LEC PB3 DACHKOVA E 2.5 MW 1120A - 1240P DACHKOVA E 2.5 F 1140A - 1230P 0826 LEC PB5 LOCKHART L 2.5 MW 0230P - 0455P Class meets 10/29/07 - 12/19/07 PB3 LOCKHART L 2.5 TuTh MATH 233: Intermediate Algebra Prerequisite: Mathematics 205 or Mathematics 205A and 205B or Mathematics 206 with a grade of 'C' or better. Transferable: GAV-GE: B4 Review of basic concepts, linear equations and inequalities, graphs and functions, systems of linear equations, polynomials and polynomial functions, factoring, rational expressions and equations, roots, radicals, and complex numbers, solving quadratic equations, exponential and logarithmic functions, and problem solving strategies. Mathematics 233, 233A, and 233B have similar course content. This course may not be taken by students who have completed Mathematics 233B with a grade of 'C' or better. This course may be taken for Mathematics 233B credit (2.5 units) by those students who have successfully completed Mathematics 233A with a grade of 'C' or better. 0592 LEC PB5 DRESCH M 5.0 MWF 0810A - 0910A 1 DRESCH M 5.0 TuTh 0594 LEC SS206 NARI J 5.0 MW 0945A - 1050A 1 PB4 NARI J 5.0 TuTh 0596 LEC SS206 DWYER M 5.0 MTuWTh 1120A - 1225P 1 2H 0597 LEC PB3 LEE R 5.0 MTuWTh 1250P - 0155P 1 0598 LEC SS206 VIARENGO A 5.0 MTuWTh 0400P - 0505P 0911 LEC CJ500 MALOKAS J 5.0 MTuWTh 0810A - 0915A 2140 LEC PB3 KIM D 5.0 TuTh 0600P - 0825P MATH 233A: First Half of Intermediate Algebra Prerequisite: Completion of Mathematics 205 or the equivalent with a grade of 'C' or better.
Transferable: No
The course will start with a review of basic concepts and then cover the following topics with an emphasis on applications and problem solving strategies: solving linear equations; solving linear, compound, and absolute value inequalities; equations and graphs of lines; functions and function notation including composition of functions; solving systems of linear equations and inequalities; an introduction to matrices and Cramer's rule; operations with polynomials; factoring polynomials; and solving polynomial equations.

0599 LEC HU102 WAGMAN K 2.5 MW 1120A - 1240P
     PB5 WAGMAN K 2.5 F 1140A - 1230P
2141 LEC PB4 LOCKHART L 2.5 TuTh 0600P - 0805P

MATH 400: Elements of Arithmetic
Transferable: No
Essential arithmetic operations, whole numbers, integers, fractions, decimals, ratio, proportion, percent, applications of arithmetic, and critical thinking, as well as math-specific study skills. Units earned in this course do not count toward the associate degree and/or certain other certificate requirements.

0600 L/L SS206 FULLER G 3.0 MTuWTh 0810A - 0900A 1
0601 L/L PB9 DACHKOVA E 3.0 MW 0945A - 1105A 1
     DACHKOVA E 3.0 F 1030A - 1120A
2142 L/L PB5 ARID A 3.0 TuTh 0600P - 0805P

MATH 402: Pre-Algebra
Prerequisite: Completion of Math 400 with a 'C' or better, or assessment test recommendation.
Transferable: No
This course covers operations with integers, fractions and decimals and associated applications, percentages, ratio, geometry and measurement, and critical thinking and applications. Elementary algebra topics such as variables, expressions, and solving equations are introduced.

0602 L/L PB13 NARI J 3.0 MTuWTh 0810A - 0910A 1
0603 L/L SS210 WAGMAN K 3.0 TuTh 1120A - 1240P 1
     PB3 WAGMAN K 3.0 F 1030A - 1120A
0604 L/L PB13 JUKL H 3.0 MTh 0230P - 0340P
     PB3 JUKL H 3.0 W
2143 L/L PB3 ARID A 3.0 M W 0600P - 0805P 1

MATH 404A: Self-Paced Basic Math
Transferable: No
This course is a remedial, modular, self-paced course.
Applications and critical thinking skills are developed in each module. Module A covers operations with whole numbers, equivalent fractions, multiplying and dividing fractions. Module B covers adding and subtracting fractions, and operations with decimals. Module C covers ratio and proportion, percent, and units of measurement. Module D reviews fractions, decimals, percentages, and covers operations with integers, and working with variables. Module E covers real numbers, fractions, exponents, scientific notation, and order of operations. Module F covers expressions, polynomials, and equations. Module G covers geometric figures, perimeter and area, surface area and volume, triangles and parallelograms, and similar figures. This course has the option of a letter grade or credit/no credit.

0605 L/L PB14 DACHKOVA E 1.0 MW 1255P - 0325P 1 79

MATH 404B: Self-Paced Basic Math
Transferable: No
This course is a remedial, modular, self-paced course. Applications and critical thinking skills are developed in each module. Module A covers operations with whole numbers, equivalent fractions, multiplying and dividing fractions. Module B covers adding and subtracting fractions, and operations with decimals. Module C covers ratio and proportion, percent, and units of measurement. Module D reviews fractions, decimals, percentages, and covers operations with integers, and working with variables. Module E covers real numbers, fractions, exponents, scientific notation, and order of operations. Module F covers expressions, polynomials, and equations. Module G covers geometric figures, perimeter and area, surface area and volume, triangles and parallelograms, and similar figures. This course has the option of a letter grade or credit/no credit.

0606 L/L PB14 DACHKOVA E 1.0 MW 1255P - 0325P 1 79

MATH 404C: Self-Paced Basic Math
Transferable: No
This is a remedial, modular, self-paced course. Applications and critical thinking skills are developed in each module.
Module A covers operations with whole numbers, equivalent fractions, multiplying and dividing fractions. Module B covers adding and subtracting fractions, and operations with decimals. Module C covers ratio and proportion, percent, and units of measurement. Module D reviews fractions, decimals, percentages, and covers operations with integers, and working with variables. Module E covers real numbers, fractions, exponents, scientific notation, and order of operations. Module F covers expressions, polynomials, and equations. Module G covers geometric figures, perimeter and area, surface area and volume, triangles and parallelograms, and similar figures. This course has the option of a letter grade or credit/no credit.

0607 L/L PB14 DACHKOVA E 1.0 MW 1255P - 0325P 1 79

MATH 404D: Self-Paced Basic Math
Transferable: No
This course is a remedial, modular, self-paced course. Applications and critical thinking skills are developed in each module. Module A covers operations with whole numbers, equivalent fractions, multiplying and dividing fractions. Module B covers adding and subtracting fractions, and operations with decimals. Module C covers ratio and proportion, percent, and units of measurement. Module D reviews fractions, decimals, percentages, and covers operations with integers, and working with variables. Module E covers real numbers, fractions, exponents, scientific notation, and order of operations. Module F covers expressions, polynomials, and equations. Module G covers geometric figures, perimeter and area, surface area and volume, triangles and parallelograms, and similar figures. This course has the option of a letter grade or credit/no credit.

0608 L/L PB14 DACHKOVA E 1.0 MW 1255P - 0325P 1 79

MATH 404E: Self-Paced Basic Math
Transferable: No
This course is a remedial, modular, self-paced course. Applications and critical thinking skills are developed in each module.
Module A covers operations with whole numbers, equivalent fractions, multiplying and dividing fractions. Module B covers adding and subtracting fractions, and operations with decimals. Module C covers ratio and proportion, percent, and units of measurement. Module D reviews fractions, decimals, percentages, and covers operations with integers, and working with variables. Module E covers real numbers, fractions, exponents, scientific notation, and order of operations. Module F covers expressions, polynomials, and equations. Module G covers geometric figures, perimeter and area, surface area and volume, triangles and parallelograms, and similar figures. This course has the option of a letter grade or credit/no credit.

0609 L/L PB14 DACHKOVA E 1.0 MW 1255P - 0325P 1 79

MATH 404F: Self-Paced Basic Math
Transferable: No
This course is a remedial, modular, self-paced course. Applications and critical thinking skills are developed in each module. Module A covers operations with whole numbers, equivalent fractions, multiplying and dividing fractions. Module B covers adding and subtracting fractions, and operations with decimals. Module C covers ratio and proportion, percent, and units of measurement. Module D reviews fractions, decimals, percentages, and covers operations with integers, and working with variables. Module E covers real numbers, fractions, exponents, scientific notation, and order of operations. Module F covers expressions, polynomials, and equations. Module G covers geometric figures, perimeter and area, surface area and volume, triangles and parallelograms, and similar figures. This course has the option of a letter grade or credit/no credit.

0610 L/L PB14 DACHKOVA E 1.0 MW 1255P - 0325P 1 79

MATH 404G: Self-Paced Basic Math
Transferable: No
This course is a remedial, modular, self-paced course. Applications and critical thinking skills are developed in each module.
Module A covers operations with whole numbers, equivalent fractions, multiplying and dividing fractions. Module B covers adding and subtracting fractions, and operations with decimals. Module C covers ratio and proportion, percent, and units of measurement. Module D reviews fractions, decimals, percentages, and covers operations with integers, and working with variables. Module E covers real numbers, fractions, exponents, scientific notation, and order of operations. Module F covers expressions, polynomials, and equations. Module G covers geometric figures, perimeter and area, surface area and volume, triangles and parallelograms, and similar figures. This course has the option of a letter grade or credit/no credit.

0611 L/L PB14 DACHKOVA E 1.0 MW 1255P - 0325P 1 79
ti 84 plus algebra 2 tips

Author: TemanVab
Posted: Thursday 28th of Dec 07:50

Hello friends, I misplaced my algebra textbook yesterday. It's out of stock and so I can't get it in any of the shops near my place. I have an option of hiring a private instructor but then I live in a very far off place so any tutor would charge a very high hourly rate to come over. Now the problem is that I have my assessment next week and I am not able to study since I lost my textbook. I couldn't read the chapters on ti 84 plus + algebra 2 tips and ti 84 plus + algebra 2 tips. A few more topics such as adding matrices, inverse matrices, system of equations and mixed numbers are still not so clear to me. I need some help guys!

Author: AllejHat
From: United States
Posted: Saturday 30th of Dec 11:01

If you can be explicit about ti 84 plus + algebra 2 tips, I could possibly help to solve the algebra problem. If you don't want to pay for a math tutor, the next best option would be an accurate software program which can help you to solve the problems. Algebrator is the best I have come upon which will explain every step to any algebra problem that you may write from your book. You can simply write it down as your homework. This Algebrator should be used to learn math rather than for copying answers for assignments.

Author: Hiinidam
From: Odense
Posted: Sunday 31st of Dec 08:23

Hey there! I used Algebrator last year when I was having issues with my college math. This program made solving equations so easy. Since then, I always keep a copy of it on my computer.

Author: Jot
From: Greeley, CO
Posted: Tuesday 02nd of Jan 11:47

I remember having problems with point-slope, graphing circles and adding matrices. Algebrator is a really great piece of algebra software. I have used it through several math classes - Basic Math, Remedial Algebra and Algebra 2. I would simply type in the problem from a workbook and by clicking on Solve, a step by step solution would appear.
The program is highly recommended.

Author: Jevxri
From: Ubik
Posted: Thursday 04th of Jan 11:30

Ok, after hearing so much about Algebrator, I think it definitely is worth a try. How do I get hold of it? Thanks!

Author: SjberAliem
From: Macintosh HD
Posted: Saturday 06th of Jan 10:22

You can get it at http://www.mathscitutor.com/laws-of-exponents-and-multiplying-monomials.html. Please do post your feedback here. It may help a lot of other novices as well.
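As an editorial aside (none of the posters mention this), the first two topics in TemanVab's list — systems of equations and inverse matrices — can also be worked through step by step with a few lines of free code. Here is a Cramer's-rule sketch in Python with made-up numbers; the specific system is invented for illustration:

```python
from fractions import Fraction

def solve2x2(a, b, c, d, e, f):
    """Solve a*x + b*y = e, c*x + d*y = f by Cramer's rule,
    using exact fractions so every step can be checked by hand."""
    det = a * d - b * c          # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("no unique solution")
    x = Fraction(e * d - b * f, det)
    y = Fraction(a * f - e * c, det)
    return x, y

# A made-up system of the kind mentioned in the thread:
# 2x + y = 5 and x - y = 1.
x, y = solve2x2(2, 1, 1, -1, 5, 1)
print(x, y)  # 2 1
```

The same `det` also tells you whether the coefficient matrix is invertible, which connects the two topics.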
NAG Toolbox
D03 – Partial Differential Equations

• D03 Introduction
• d03ea – Elliptic PDE, Laplace's equation, two-dimensional arbitrary domain
  • nag_pde_2d_laplace – d03ea
• d03eb – Elliptic PDE, solution of finite difference equations by SIP, five-point two-dimensional molecule, iterate to convergence
  • nag_pde_2d_ellip_fd – d03eb
• d03ec – Elliptic PDE, solution of finite difference equations by SIP for seven-point three-dimensional molecule, iterate to convergence
  • nag_pde_3d_ellip_fd – d03ec
• d03ed – Elliptic PDE, solution of finite difference equations by a multigrid technique
  • nag_pde_2d_ellip_mgrid – d03ed
• d03ee – Discretize a second-order elliptic PDE on a rectangle
  • nag_pde_2d_ellip_discret – d03ee
• d03fa – Elliptic PDE, Helmholtz equation, three-dimensional Cartesian coordinates
  • nag_pde_3d_ellip_helmholtz – d03fa
• d03ma – Triangulation of plane region
  • nag_pde_2d_triangulate – d03ma
• d03nc – Finite difference solution of the Black–Scholes equations
  • nag_pde_1d_blackscholes_fd – d03nc
• d03nd – Analytic solution of the Black–Scholes equations
  • nag_pde_1d_blackscholes_closed – d03nd
• d03ne – Compute average values for d03nd
  • nag_pde_1d_blackscholes_means – d03ne
• d03pc – General system of parabolic PDEs, method of lines, finite differences, one space variable
  • nag_pde_1d_parab_fd – d03pc
• d03pd – General system of parabolic PDEs, method of lines, Chebyshev C^0 collocation, one space variable
  • nag_pde_1d_parab_coll – d03pd
• d03pe – General system of first-order PDEs, method of lines, Keller box discretization, one space variable
  • nag_pde_1d_parab_keller – d03pe
• d03pf – General system of convection-diffusion PDEs with source terms in conservative form, method of lines, upwind scheme using numerical flux function based on Riemann solver, one space variable
  • nag_pde_1d_parab_convdiff – d03pf
• d03ph – General system of parabolic PDEs, coupled DAEs, method of lines, finite differences, one space variable
  • nag_pde_1d_parab_dae_fd – d03ph
• d03pj – General system of parabolic PDEs, coupled DAEs, method of lines, Chebyshev C^0 collocation, one space variable
  • nag_pde_1d_parab_dae_coll – d03pj
• d03pk – General system of first-order PDEs, coupled DAEs, method of lines, Keller box discretization, one space variable
  • nag_pde_1d_parab_dae_keller – d03pk
• d03pl – General system of convection-diffusion PDEs with source terms in conservative form, coupled DAEs, method of lines, upwind scheme using numerical flux function based on Riemann solver, one space variable
  • nag_pde_1d_parab_convdiff_dae – d03pl
• d03pp – General system of parabolic PDEs, coupled DAEs, method of lines, finite differences, remeshing, one space variable
  • nag_pde_1d_parab_remesh_fd – d03pp
• d03pr – General system of first-order PDEs, coupled DAEs, method of lines, Keller box discretization, remeshing, one space variable
  • nag_pde_1d_parab_remesh_keller – d03pr
• d03ps – General system of convection-diffusion PDEs, coupled DAEs, method of lines, upwind scheme, remeshing, one space variable
  • nag_pde_1d_parab_convdiff_remesh – d03ps
• d03pu – Roe's approximate Riemann solver for Euler equations in conservative form, for use with d03pf, d03pl and d03ps
  • nag_pde_1d_parab_euler_roe – d03pu
• d03pv – Osher's approximate Riemann solver for Euler equations in conservative form, for use with d03pf, d03pl and d03ps
  • nag_pde_1d_parab_euler_osher – d03pv
• d03pw – Modified HLL Riemann solver for Euler equations in conservative form, for use with d03pf, d03pl and d03ps
  • nag_pde_1d_parab_euler_hll – d03pw
• d03px – Exact Riemann solver for Euler equations in conservative form, for use with d03pf, d03pl and d03ps
  • nag_pde_1d_parab_euler_exact – d03px
• d03py – PDEs, spatial interpolation with d03pd or d03pj
  • nag_pde_1d_parab_coll_interp – d03py
• d03pz – PDEs, spatial interpolation with d03pc, d03pe, d03pf, d03ph, d03pk, d03pl, d03pp, d03pr or d03ps
  • nag_pde_1d_parab_fd_interp – d03pz
• d03ra – General system of second-order PDEs, method of lines, finite differences, remeshing, two space variables, rectangular region
  • nag_pde_2d_gen_order2_rectangle – d03ra
• d03rb – General system of second-order PDEs, method of lines, finite differences, remeshing, two space variables, rectilinear region
  • nag_pde_2d_gen_order2_rectilinear – d03rb
• d03ry – Check initial grid data in d03rb
  • nag_pde_2d_gen_order2_checkgrid – d03ry
• d03rz – Extract grid data from d03rb
  • nag_pde_2d_gen_order2_rectilinear_extractgrid – d03rz
• d03ua – Elliptic PDE, solution of finite difference equations by SIP, five-point two-dimensional molecule, one iteration
  • nag_pde_2d_ellip_fd_iter – d03ua
• d03ub – Elliptic PDE, solution of finite difference equations by SIP, seven-point three-dimensional molecule, one iteration
  • nag_pde_3d_ellip_fd_iter – d03ub
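The index lists d03nd as the analytic (closed-form) solution of the Black–Scholes equations. The toolbox calling sequence is not reproduced here, but as a rough orientation to what that routine evaluates, the standard closed form for a European call can be sketched in plain Python — this is the textbook formula, not NAG's API:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, r, sigma, t):
    """Closed-form Black-Scholes price of a European call:
    C = S*Phi(d1) - K*exp(-r*t)*Phi(d2),
    d1 = (ln(S/K) + (r + sigma^2/2)*t) / (sigma*sqrt(t)),
    d2 = d1 - sigma*sqrt(t)."""
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

# A standard textbook check: S=100, K=100, r=5%, sigma=20%, T=1 year.
print(bs_call(100, 100, 0.05, 0.2, 1))  # about 10.45
```

d03nc, by contrast, solves the same PDE by finite differences, so a closed-form value like this is a natural cross-check between the two routines.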
Geogebra Lessons

Acknowledgements GeoGebra is dynamic mathematics open source (free) software for learning and teaching mathematics in schools.

Lesson Plan – Slope with GeoGebra Prepared by: Laura Weakland, Fall 2008 Grade Level: 7th grade Objectives • Student will access the Internet and the Web Poster created for this activity with the

Lesson Plan for Triangle Properties and Similarities GeoGebra Wikipedia Dynamic Worksheets Lesson Plan Version 0.0 11 November 2006 Norm Ebsary

GeoGebra – Lesson 1 Using GeoGebra to draw a parallelogram Author: Linda Fahlberg-Stojanovska With thanks to Robert Fant Produced with: Camtasia Studio

Constructing 3D graph of function with GeoGebra(2D) Jeong-Eun Park [email protected] Gyeonggi-Buk Science High School Young-Hyun Son [email protected]

Geogebra Workshop http://www.geogebra.org/cms/ Workshop run by Priscilla Allan http://2011geogebralessons.wikispaces.com/Calc+Day+2011 To structure our learning, and to make learning visible, I have an "I can do" Checklist for you

Using Geogebra for Online College Algebra & Trigonometry Courses 2 Geogebra is a free dynamic mathematics software and online program that many teachers

Lesson 17 – Riemann Sums Using GGB; Definite Integrals 3 Example 3: Use GeoGebra to find the upper sum and the lower sum for () 1 2 2 f x e= +x on the

Activities with Geogebra in a preservice program Andreas Philippou and Costantinos Christou University of Cyprus Abstract ... lessons with GeoGebra improved not only teachers' mathematical content

The GeoGebra lessons in the laboratory guide focus on promoting the understanding of concepts and procedures. Figures 1 and 2 provide an example of a student activity sheet and accompanying GeoGebra sketch from the

An Analysis of Students' Research on Model Lessons That Integrate GeoGebra into School Mathematics Jack A. Carter [email protected]

Title: Learning to Develop Mathematics Lessons with GeoGebra….MSOR Connections May 2009 Vol 9 No 2 Author: Erhan Selcuk Haciomeroglu Subject: This paper describes how prospective secondary mathematics teachers' Technological Pedagogical Content Knowledge (TPCK) and perspectives about teaching ... lessons with GeoGebra

Reflections and group discussions on open lessons with GeoGebra 11:30 - 12:15 Lunch 12:15 - 13:45 Workshop Talk and workshop Talk Learn how to use GeoGebra's programming features to create self-correcting exercises. Troels ...

GeoGebra in 10 Lessons 2 GeoGebra is dynamic and open source (free) mathematics software for the learning and teaching of mathematics in schools.

Teacher facilitation: Using the Geogebra applet, the teacher will demonstrate how to graph the quadratic function. ... Suggested Subsequent Lessons Solving Quadratic Equations by Graphing. Title: Lesson 1. Graphing Quadratic Functions Author:

Teacher facilitation: Using the Geogebra applet, the teacher will demonstrate how to graph the quadratic function. ... Suggested Subsequent Lessons Solving Quadratic Equations by Factoring. Title: Lesson 2. Lesson Plan - Solving Quadratic Equations by Graphing 1

Professional development through lesson study: teaching the derivative using GeoGebra Nellie C. Verhoef, Fer Coenders, Jules M. Pieters, Daan van Smaalen, and David O. Tall

The GeoGebra Circle you have on your computer is a "Unit Circle". What makes it so special? The radius is equal to one, so the circumference is 2π. An angle has been created that lies in standard position (vertex at origin, one vector on the x-axis oriented

Geogebra, a useful tool for achieving school progress Adriana Bînzar ... These lessons in Geogebra have raised the interest of many students as evidenced by their active participation in the lessons, the highest scores obtained, their ...

Lesson 3 – Regressions 1 Math 1314 Lesson 3: Regressions Using GeoGebra In this course, you will frequently be given raw data about a quantity, rather than a function.

First participants learned basic commands about GeoGebra. During lessons pre service teachers of mathematics used dynamic worksheets. Data were collected by participants' works and opinions on dynamic mathematics software.

GeoGebra: Another way of looking at Mathematics ... GeoGebra, lessons becoming more attractive for the audience. Also it should be mentioned that such lessons allow the presentation and explanation of a larger number of particular cases. By this ...

Effectiveness of Using Geogebra on Students' Understanding in Learning Circles Praveen SHADAAN[1] ... Such information is crucial in planning lessons for large classes and where learners are of varied abilities. The study ...

experience in designing lesson plans with GeoGebra, and this experience positively influenced prospective teachers' perspectives about the use of technology in the teaching and learning of mathematics. However, some prospective teachers, ...

GeoGebra, "who is dynamic mathematics software for all levels of education that joins arithmetic, geometry, algebra and calculus. ... avoided, such as: the students' low interest and the very theoretical character of the lessons. References

Integrating GeoGebra into IWB-equipped teaching environments: preliminary results ... lessons by participating teachers are currently being video-recorded and further interviews are to be conducted with teachers and students to better understand the range of materials that could be useful to teachers in preparing GeoGebra lessons.

There are two GeoGebra institutes in Serbia and they offer different activities in order to increase GeoGebra use in classrooms". A teacher from seminar in Novi Sad:

lessons are an excellent tool for learning and teaching mathematics. • In this presentation, we propose the use of GeoGebra ...

The GeoGebra's facilities allow the teacher to: • give high quality, attractive presentations linking to real

GeoGebra is free algebra, geometry and calculus software [1], [2], [3] developed in the University of ... and online lessons are available in these languages [2].

GeoGebra NA2010 July 27-28 2010 Ithaca College, Ithaca, NY, USA GeoGebra as e-Learning Resource for Teaching and Learning Statistical Concepts Dijana Capeska Bogatinoska, Aleksandar Karadimce, ... The probability and statistics lessons should provide to the students the ability to collect, organize and analyze numerical data, ...

Lessons 1. physics 1. visualizing projectile motion and its dependence on all variables ... Use GeoGebra's built-in vector math to find the net force. Does your answer always make sense? If not, how is what we've built in GeoGebra wrong?

organize the work during lessons but also the use of GeoGebra together with other modern instructional equipments, e.g. other computer programs, SmartBoards, course management systems etc. The Institute will publish information and news of its work as well as GeoGebra worksheets in

GeoGebra in a geography class 63 Volume 6 Number 1, 2013 Picture 2. The motion of the Sun 2.3. Orientation in a city The third example is a bit more complex.

(GeoGebra) in their math lessons and some don't. (See project description.) About myself: I am a textbook writer and math teacher at an upper secondary school in Maaloy, at the west coast of Norway. Last year I got a scholarship from the Norwegian University of

GeoGebra in the Context of the IT Surrounding Environment and Curriculum, 2010 ... GeoGebra software in lessons. We agree a new level of competences, teacher trainer for GeoGebra which have expertise in pedagogy, psychology and didactic science.

Classical geometry with GeoGebra ... In my lessons I use computer software for visualization, for the proving of geometric problems in the plane and in the space or for the demonstration of the application of geometry in practice.

Math 804T Experimentation, Conjecture, Reasoning Geogebra Sample Problem Steve Dunbar November 5, 2007 Sample Problem In rectangle ABCD, AB = 5, and BC = 3, Points F and G are on CD so

Calculus Animations with GeoGebra GeoGebra is a free, web-based software that does dynamic geometry and graphing. ... lessons, questioning strategies, and activities and watching clips of the lessons, I will lead the participants in a discussion about ways in which

lessons, activity-only lessons, Unit Activities, and Online Discussions. ... GeoGebra, helping learners to explore properties of geometric shapes and to test conjectures.

Constructing a Steiner Tree Using GeoGebra The following is a script of the construction of a Steiner Point and Steiner Tree using GeoGebra. The flash

Use Geogebra's "Exterior Segments in Circles". (example on the left) Using explicit instruction, have students practice calculating the missing segment length. Repeat the exercise in Geogebra with several different measurements.

For these lessons offered, GeoGebra can be considered as an important choice. GeoGebra provides important opportunities in the classroom, with its interface that has been translated into 48 languages, its help menu, its alge-

This unit is comprised of lessons in which students will be given various information and data to use to investigate different parent functions. ... Students can create a GeoGebra file to graph their equations from

[3] L. Stojanovska, Z. Trifunov. Constructing and Exploring Triangles with GeoGebra. XA2010 "European Con on Computer Sciences & Applications", 3rd Edition, September

Also open source software GeoGebra is used for teaching geometry and algebra concepts. Through this paper I would highlight my explorations, experiments and ... (Moodle, eXe), Photo story lessons, using GeoGebra for teaching geometry and algebra.

GeoGebra for planning lessons is summed up by Amanda Ladbury's comment that "I wish GeoGebra had been around when I was at school". Further ideas An opportunity arose a couple of weeks after our introduction to the software to share GeoGebra

Geogebra, a Tool for Mediating Knowledge in the Teaching and Learning of Transformation of Functions in Mathematics by RAZACK SHERIFF UDDIN DISSERTATION ... in subsequent lessons, that the learner could not recall it or talk about it.

All lessons are discussed in the context of a real world application. VIII. REFERENCE/RESOURCE MATERIALS: Graphing calculators will be required. Student Exploration Worksheets and Exit Slip Assessments will be needed for all three lessons. Computers to access GeoGebra would
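One of the snippets above ("Lesson 17 – Riemann Sums Using GGB") asks for the upper and lower sum of a function in GeoGebra; the function itself is garbled in this copy, so here is a sketch of the same computation in Python with a stand-in function (f(x) = x² on [0, 1] is my own choice, not the one from the lesson):

```python
def riemann_sums(f, a, b, n):
    """Upper and lower Riemann sums of f on [a, b] with n equal
    subintervals, sampling each subinterval's endpoints.  Endpoint
    sampling gives the true sup/inf only for monotonic f."""
    dx = (b - a) / n
    lower = upper = 0.0
    for i in range(n):
        left = f(a + i * dx)
        right = f(a + (i + 1) * dx)
        lower += min(left, right) * dx
        upper += max(left, right) * dx
    return lower, upper

# Stand-in example: f(x) = x^2 on [0, 1] with 4 subintervals.
lo, hi = riemann_sums(lambda x: x * x, 0.0, 1.0, 4)
print(lo, hi)  # 0.21875 0.46875
```

As n grows, both sums squeeze toward the definite integral (1/3 for this stand-in), which is the point the GeoGebra lesson makes visually.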
[Makeover] Bedroom Carpet
July 1st, 2013 by Dan Meyer

The Task

This task comes from MathWorks, which, as I understand it, is intended for Canada's vocational track math students. I purchased PDFs of the curriculum in Saskatchewan because it featured a lot of interesting applications of secondary math, even if the print medium did those applications no favors.

What I Did

• Not a lot. The last makeover took it out of me and it's summer. Let's do something a little simpler.
• Put students in the shoes of the person who might actually experience this problem. Perhaps that person is a homeowner. The homeowner either doesn't have a carpet or has a carpet in need of replacement. She knows only one thing at this point: "I want carpet." She wonders several things at this point: "How much will it cost me?" and "How much time will it take me?" and "How will I do it?" are probably high on the list. What she doesn't have yet are all these facts, figures, and dimensions the problem includes.
• Add intuition. Our homeowner might try to ballpark the cost of the installation before she does anything else. Let's ask students to do that.
• Raise the ceiling on the task. We need to extend this task at the top end for students who need the challenge.

Let me run this by you. Shoot some quick video of a room in a house that has a similar design – composite rectangles. If it's emptied of furniture so much the better. (Anybody moving this summer? Get at me.) Tell students, "We need new carpet in this room. Can you give me a guess how much it'll cost?" Some of them won't have a clue, but we'd like them to take their intuition as far as it'll go, even if that's just to say, "It's definitely going to cost less than $10,000." Then ask them to brainstorm in groups: "What information will be important here? What skills will you need?" Because that's the question our homeowner is likely asking herself and we're trying to put our students in her shoes.
(Also because the first task in "modeling" according to the Common Core is to "identify essential variables.") I have no trouble imagining the student response here because my own knowledge base for home handiwork is pretty much comparable.

• What kind of carpet are we buying?
• How much does it cost?
• How much does it cost to install?
• How do you get carpet?
• Are there any other costs we're forgetting?

I'm sure I'd be (pleasantly) surprised by what students ask for. At this point, offer them information they want. Teach them about carpet installation. Show a YouTube video. (Or have them research all of the above online, though I'm not inclined to sacrifice the time myself.) Basically give them the same information given in the task, only after they've had a minute to think about why they'd need it and how they'd get it. I'd probably pass out a floor plan of the room without dimensions. An interesting observation the original task glides past is that you don't have to measure every single side of the room. You can measure some and use them to find the others. So ask them what sides they'd want or what's the fewest sides they'd need?

As students work, some will need more of your help and others will finish quickly. My first attempt at an extension problem for the latter group is to switch the knowns and unknowns of the original problem. So previously we gave students dimensions and we asked for cost. Now give them cost and ask for dimensions. "Tell me about a scenario where the total bill for the carpeting job was $1,000,000." They can change anything they want.

What You Did

Over on the blogs:

Over on Twitter:

Call for Submissions

You should play along. I post Monday's task on Twitter the previous Thursday and collect your thoughts. (Follow me on Twitter.) If you have a textbook task you'd like us to consider, you can feel free to e-mail it. Include the name of the textbook it came from. Or, if you have a blog, post your own makeover and send me a link.
I'll feature it in my own weekly installment. I'm at dan@mrmeyer.com.

2013 Jul 2. Jennifer Orr sends in two pictures we can all use.

10 Responses to "[Makeover] Bedroom Carpet"

1. on 01 Jul 2013 at 11:37 am

My suggestion: In Canada many trades people receive contractors' deep discounts on materials. I might find a local carpet layer and ask for job bids on say 3 types of carpet, and then ask the students to work out the DIY costs (after a trip to the local carpet warehouse store where they can compare types of carpet). The problem could be to decide if the DIY route is worthwhile, and that could lead to conversations about other 'costs' that might be factored into the DIY price or about how trades people figure out job bids.

2.

I like the idea of taking a video of a room needing new carpeting and asking the students what it would cost. This allows students to pick out the important details needed to solve the problem. I'd probably already have the carpeting picked out. That decision can take weeks for people actually going through the process. So, with an eye to the clock, that's a decision I would make for them. When they ask how much the carpeting costs, I would show them a picture and say the homeowner decided on this, which is $_.__ per square foot. We could add an extra step of complexity by comparing modular carpeting sold by the square (19.7″ by 19.7″) rather than the roll. I think students will be surprised at how much carpeting a room can cost (close to tiling it with dollar bills). Getting this done in one 45 minute class period – priceless.

3. l hodge, on 01 Jul 2013 at 12:23 pm

The most intriguing part of this to me is that you will have to buy more carpet than the area of the room (because carpet only comes in 12 foot width). How much more? Somewhere I read that the rule of thumb is to buy 10% more than the area of the room. This is my attempt. Minimizing the number of cuts might be interesting for larger rooms that would require several sections of carpet.
on 01 Jul 2013 at 1:06 pm

(1) Re vocabulary: ‘bolt’ can easily be switched to ‘roll’ but ‘nap’ is important so the carpet layer doesn’t rotate left-over pieces to fit them in. Bringing a couple of samples in would make this point. Pattern matching is part of the reason for adding 10%.

(2) Re units: Canadian trades people have to be able to transpose between metric and imperial. There’s some realism to someone measuring their room in metres and then finding to their surprise that carpet is sold by the sq. yd. Lesson to learn? Go home and measure again or know that a meter is 3″ longer than a yd.

(3) Re complexity: To me those are authentic complications of unravelling the problem, but has anyone mentioned that the sketch badly misrepresents the room’s proportions? I think this might be an exercise in giving students practise in sorting through the information to reconstruct the problem for themselves into manageable chunks. They might not even have to solve it, just find and deal with all the red herrings.

5. on 01 Jul 2013 at 4:23 pm Patti

As I am currently in the process of getting new carpet for a house that desperately needs it, I’ll tell you what some of my students answered when I posed a similar problem to them using my own: “Don’t the Lowe’s* guys take care of all the math for you?” And, they’d be right. The Lowe’s guys WILL take care of all the math for you. Of course, that takes the fun out of it. You’ve inspired me to try to get pictures or video of my house once we get the furniture moved. We have tons of stuff, so there may be a lot of shifting while the installers are here. If I get anything usable, I’ll share it. I’m guessing it will be too much of a mad scramble for me to do a decent job, but we’ll see.

*Lowe’s is a large home improvement chain, for those who don’t have them.

6. on 01 Jul 2013 at 8:21 pm Roger Gemberling

Carpet is sold in a variety of widths. Twelve feet is the most common width. Fifteen feet is the next most popular width.
Finally, 13 feet 6 inches is the third most common width sold. A carpet company in Georgia (Georgia Carpet Industries) has a variety of widths available. They are 6 feet, 8 feet, 12 feet, 13 feet 2 inches, 13 feet 6 inches, 15 feet, and 15 feet 4 inches. My students have completed a similar problem. I would provide carpet with widths of 12 feet and 15 feet.

7. on 01 Jul 2013 at 8:23 pm Scott

Don’t the guys from Lowes….. I know of two math teachers and a very astute science teacher (Jeff) who caught mistakes by these people who *will do* the math for you… Even the sneaky kids (sorry, maybe I know my kids better than you) perk up when it comes to money…. I want to call 3 contractors to do something myself tomorrow, now…

8. on 02 Jul 2013 at 8:03 am Sarah Miller

I really like Dan’s addition of watching ‘how to lay carpet’ videos. I know something like that would do a lot for my kids and would be worth the time. It builds on my relationship with the class when we can talk about non-math stuff, even something “boring” like laying carpet. (well, non-math to them, anyway). Plus, I like the message of “You could take on a big project like this if you wanted to. There are free resources (YouTube) to help you figure out how to do it. And you can figure it out”

9. Sue Hellman: I think this might be an exercise in giving students practise in sorting through the information to reconstruct the problem for themselves into manageable chunks. They might not even have to solve it, just find and deal with all the red herrings.

I’m not convinced that practice has a lot of application outside of math textbooks and math tests. In life, we tend to start with a goal state (eg. the room gets fully carpeted) and then we work towards that state by deciding what information is relevant. It’s unusual to start with the goal state and a pile of information someone has given you, some of which may be relevant, some of which may not be. Patti and Scott are onto something useful here.
What do you do when students realize other people will do the math for you? One option is to make those people incompetent or liars. “Right. The carpet guys came by and said they’ll do it for ten thousand dollars. They insist it won’t cost any less than that. Can you prove they’re liars?” 10. [...] is my first attempt at a #MakeoverMonday problem that Dan Meyer has been leading. However, I want to put my own spin on it and try to use the concepts from Stephen Brown & [...]
1. Use A 20μF Capacitor To Design A Low-pass ... | Chegg.com

1. Use a 20μF capacitor to design a low-pass passive filter circuit having a cutoff frequency of 20kHz. Draw a schematic of your design, write the transfer function, and sketch the Bode plot of the filter.

2. Use a 20mH inductor to design a high-pass passive filter circuit having a cutoff frequency of 2kHz. Draw a schematic of your design, write the transfer function, and sketch the Bode plot of the filter.

3. (a) Given an amplifier below, find the voltage gain and power gain in dB for the following conditions: Vin = 10mV, Pin = 5mW, Vout = 600mV, Pout = 100mW.

Electrical Engineering
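For a quick numerical check of problems 1-3 (a sketch, not an official solution, assuming the standard first-order topologies: an RC low-pass with the output taken across the capacitor, and an RL high-pass with the output taken across the inductor):

```python
import math

def rc_lowpass_R(C, fc):
    """Resistance for an RC low-pass (output across C): fc = 1/(2*pi*R*C)."""
    return 1.0 / (2 * math.pi * fc * C)

def rl_highpass_R(L, fc):
    """Resistance for an RL high-pass (output across L): fc = R/(2*pi*L)."""
    return 2 * math.pi * fc * L

def db_voltage(vout, vin):
    """Voltage gain in dB: 20*log10(Vout/Vin)."""
    return 20 * math.log10(vout / vin)

def db_power(pout, pin):
    """Power gain in dB: 10*log10(Pout/Pin)."""
    return 10 * math.log10(pout / pin)

R1 = rc_lowpass_R(20e-6, 20e3)    # problem 1: about 0.4 ohm
R2 = rl_highpass_R(20e-3, 2e3)    # problem 2: about 251 ohm
Av = db_voltage(600e-3, 10e-3)    # problem 3: voltage gain in dB
Ap = db_power(100e-3, 5e-3)       # problem 3: power gain in dB
```

Note that the very small resistance in problem 1 is a consequence of the large capacitor and high cutoff frequency; in practice one would pick a smaller capacitor for that cutoff.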
Fisher's Exact Test

This page can be used to test statistically whether there is any relation between two categorical variables (with two levels). Fill in the table and press COMPUTE. The output consists of three p-values:

• Left: Use this when the alternative to independence is that there is negative association between the variables. That is, the observations tend to lie in lower left and upper right.
• Right: Use this when the alternative to independence is that there is positive association between the variables. That is, the observations tend to lie in upper left and lower right.
• 2-Tail: Use this when there is no prior alternative.

NOTE: Decide to use Left, Right or 2-Tail before collecting (or looking at) the data.

An example of such data (not real) are 100 persons classified after sex and e-mail address:

                      female   male
e-mail address           3      15
no e-mail address       37      45

I.e. 3 of 40 women have an e-mail address and 15 of 60 men have an e-mail address. The data could have been collected in different ways:

1. We have asked 60 men and 40 women. I.e. the total number of men and women is fixed.
2. We have asked 100 persons about their sex and whether they have an e-mail address. I.e. only the total number of persons is fixed in advance.
3. We have asked all persons in Norway born 24/11-68. I.e. the total number of persons is not fixed in advance.

The 2-Tail p-value is calculated as defined in Agresti (1992), Sec. 2.1 (b).

REFERENCE: Agresti, A. (1992), A Survey of Exact Inference for Contingency Tables, Statistical Science, 7, 131-153.
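The left, right and two-tailed p-values described above can be computed directly from the hypergeometric distribution; a self-contained sketch in standard-library Python, using the two-tailed rule of Agresti (1992), Sec. 2.1 (b):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Returns (p_left, p_right, p_two_tail). The two-tailed p-value sums the
    probabilities of all tables with the same margins that are no more
    probable than the observed one (Agresti 1992, Sec. 2.1 (b)).
    """
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, col1)
    # Probability of each possible table, indexed by its top-left cell k.
    k_min = max(0, col1 - (n - row1))
    k_max = min(row1, col1)
    probs = {k: comb(row1, k) * comb(n - row1, col1 - k) / denom
             for k in range(k_min, k_max + 1)}
    p_left = sum(p for k, p in probs.items() if k <= a)
    p_right = sum(p for k, p in probs.items() if k >= a)
    p_two = sum(p for p in probs.values() if p <= probs[a] * (1 + 1e-12))
    return p_left, p_right, p_two

# Worked example from the page: 3 of 40 women and 15 of 60 men have e-mail.
pl, pr, p2 = fisher_exact_2x2(3, 15, 37, 45)
```

For this table the left-tail p-value is the relevant one if negative association (women less likely to have e-mail) was the pre-specified alternative.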
Computational & Applied Mathematics
On-line version ISSN 1807-0302
Comput. Appl. Math. vol.31 no.2 São Carlos 2012

The sodium pump controls the frequency of action-potential-induced calcium oscillations

Shivendra G. Tewari^*

Systems Science and Informatics Unit, Indian Statistical Institute, 8^th Mile, Mysore Road, Bangalore 560059, India. E-mail: tewarisg@gmail.com

Calcium plays a significant role in a number of cellular processes, like muscle contraction, gene expression, synaptic plasticity and signal transduction, but the significance of calcium oscillations (CaOs) is not yet completely understood in most cell types. It is a widely accepted fact that CaOs are a frequency-encoded signal that allows a cell to use calcium as a second messenger while avoiding its toxic effects. These intracellular CaOs are primarily driven by agonist-dependent pathways or by fluctuations in membrane potential. The present mathematical model is of the latter type. The model incorporates expressions for all major intracellular ionic species and membrane proteins. In particular, it integrates the coupling effect of the sodium pump and the Na^+/Ca^2+ exchanger on CaOs. By varying the sodium pump current, it is found that the sodium pump is a key player in modulating intracellular CaOs. The model predicts that the sodium pump can play a decisive role in regulating the intercellular cell-signaling process. The present study forms the basis for a sodium-pump-controlled intercellular signaling process and requires further experimental verification.

Mathematical subject classification: 34M10, 92C20.

Key words: Na^+/Ca^2+ exchanger, sodium pump, calcium oscillations, membrane potential.

1 Introduction

To date, the function of cytosolic calcium oscillations (CaOs) has not been completely understood in most cell types.
CaOs are known to play a key role in a number of mechanisms like activation of extracellular signal-regulated kinase (ERK) [4, 5], the contraction of smooth muscle [6], increase in the frequency of synaptic currents [7] and maturation of the Xenopus laevis oocyte [8]. These CaOs are supposed to contain frequency-encoded signals that help in using Ca^2+ as a second messenger while avoiding its high intracellular concentrations [9]. Also, in the process of signal transduction, intracellular Ca^2+ behaves like a switch and decides whether a particular signal needs to be further propagated or not. The increase in intracellular concentration is facilitated by the opening of transmembrane Ca^2+ channels, which leads to the opening of channels at the intracellular stores. These are mainly Ryanodine Receptors (RyRs) or Inositol Trisphosphate Receptors (IP[3]Rs), located at the membrane of the endoplasmic reticulum (in neurons) or sarcoplasmic reticulum (in myocytes), which cause an efflux of Ca^2+ from the intracellular stores. The release of Ca^2+ through IP[3]Rs is the result of some agonist or neurotransmitter binding to its receptor, which can cause, via a G-protein link to phospholipase C (PLC), the cleavage of phosphatidylinositol (4,5)-bisphosphate (PIP[2]) to inositol trisphosphate (IP[3]) and diacylglycerol (DAG). This released IP[3] is free to diffuse through the cytosol and binds to IP[3]Rs, leading to the subsequent opening of these receptors and release of Ca^2+ from the intracellular stores. CaOs can be classified into mainly two types: 1) those induced by changing membrane potential, as in the case of an action potential; and 2) those that occur in the presence of a voltage clamp. The latter can be further categorized according to whether the oscillatory Ca^2+ flux is from RyRs or IP[3]Rs, but our focus is on the first type. Jafri et al. [6] showed CaOs for changing membrane potential of the endoplasmic reticulum (ER), Atri et al.
[10] showed CaOs for Ca^2+ flux through the IP[3]Rs in the Xenopus laevis oocyte and determined the intermediate range of IP[3] for CaOs. Wagner and Keizer [11] showed the effect of rapid buffering on CaOs. Later on, Kusters et al. [12] proposed an integrated model which combines an excitable membrane with an IP[3]-mediated Ca^2+ oscillator for normal rat kidney (NRK) fibroblasts. Recently, Silva et al. [13] proposed a mathematical model for endothelial cells which incorporated nearly all the important biophysical parameters but was unable to exhibit CaOs. Thus, none of the investigations on CaOs carried out by researchers so far has tried to incorporate the effect of changing cytosolic Na^+ and K^+ ions on CaOs. Thus, in this article, we have proposed a mathematical model governing CaOs for changing membrane potential in the absence of fluxes from the intracellular stores. Holmgren et al. [14] determined the three distinct steps of Na^+ ion release from the sodium pump with the help of high-speed voltage jumps. Here, we have tried to incorporate the impact of these distinct steps of the sodium pump on cytosolic CaOs, in the case of an action potential. Thus, we have incorporated the L-type Ca^2+ channel, Na^+ channel, K^+ channel, Plasma-Membrane (PM) Ca^2+ ATPase, Na^+/Ca^2+ exchanger (NCX), Na^+/K^+ ATPase (sodium pump), inward rectifier potassium channel (K[ir]), Ca^2+-dependent intermediate potassium channel (IK[Ca]), Ca^2+-dependent small potassium channel (SK[Ca]) and a dynamic membrane potential. The gating mechanism of the transmembrane channels emulates the gating mechanism of the famous Hodgkin and Huxley model [15]. Further, the pumps and proteins are modeled to have realistic gating mechanisms such that they are in agreement with biological facts. The proposed mathematical model leads to a system of non-linear ordinary differential equations.
We have used Euler's method for the simulation of the proposed model, for which a MATLAB script has been written.

2 Mathematical formulation

Our cell model assumes that the cell is cylindrical in shape. The diameter of the cell is assumed to be 20 µm and its length to be 100 µm (see Fig. 1). Its specific membrane capacitance is taken to be 1 µF/cm^2. As mentioned in the literature [16, 17, 18], we have taken the actual surface area (A[cap]) of the cell to be larger than its geometrical surface area. Further, the channels, proteins and pumps are supposed to be homogeneously distributed throughout the membrane. The formulation of the proposed mathematical model comprises different components, which are elaborated in the following subsections.

2.1 The Ca^2+ channel

The L-type Ca^2+ channel is supposed to have a permeability ratio of 36000:1:18 for Ca^2+, Na^+ and K^+ ions, respectively [19]. The voltage-gated Ca^2+ channel was modeled using the non-linear Goldman-Hodgkin-Katz (GHK) current equation [9, 20], which can be stated as

I[S] = P[S] z[S]^2 (F^2 V[m] / RT) (γ[i][S][i] − γ[o][S][o] exp(−z[S]FV[m]/RT)) / (1 − exp(−z[S]FV[m]/RT))     (1)

where S is any of the ions, [S][i] and [S][o] are the intracellular and extracellular concentrations of the S ion, respectively (in mM), P[S] is the permeability (in cm/s) of the S ion, z[S] is its valence, γ[i], γ[o] are the activity coefficients (a.c.) of the S ion, F is Faraday's constant (in Coulombs/mole), V[m] is the membrane potential (in volts), R is the universal gas constant (in J/(K mole)) and T is the absolute temperature (in K). The total current through the L-type Ca^2+ channel is I[Ca,t] = I[Ca,L] + I[Na,L] + I[K,L]. Further, equation (1) is converted into fluxes (in mM/second), before being used in the expressions for the individual ionic concentrations, by using Faraday's constant, the volume of the cytosol (V[cyt]) and the fact that 1 L = 10^-3 m^3:

J[S,L] = −I[S,L] / (z[S] F V[cyt])     (2)

(after the appropriate unit conversion), where S is any of the ions and L signifies the L-type Ca^2+ channel. In equation (2) there is a negative sign because, by convention, inward current is taken to be negative.

2.2 PM Ca^2+ ATPase

PM Ca^2+ ATPase (PMCA) is a P-type ATPase.
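To make the GHK current of equation (1) concrete, here is a small numerical sketch (in Python rather than the paper's MATLAB; the parameter values are illustrative and are not taken from the paper's Tables 1-4):

```python
import math

F = 96485.0   # Faraday's constant, C/mol
R = 8.314     # gas constant, J/(K*mol)
T = 310.0     # absolute temperature, K

def ghk_current(P, z, S_in, S_out, Vm, g_in=1.0, g_out=1.0):
    """GHK current for ion S; inward current is negative by convention.

    P: permeability, z: valence, S_in/S_out: concentrations,
    g_in/g_out: activity coefficients, Vm: membrane potential in volts.
    """
    u = z * F * Vm / (R * T)
    if abs(u) < 1e-9:
        # limit of the GHK expression as Vm -> 0
        return P * z * F * (g_in * S_in - g_out * S_out)
    return (P * z * F * u
            * (g_in * S_in - g_out * S_out * math.exp(-u))
            / (1.0 - math.exp(-u)))

# With equal activity coefficients, the current reverses exactly at the
# Nernst potential of the ion (here Ca2+; illustrative concentrations in mM):
Ca_in, Ca_out = 1e-4, 1.8
V_rev = (R * T / (2 * F)) * math.log(Ca_out / Ca_in)
```

At hyperpolarized potentials the Ca^2+ current is inward (negative), consistent with the sign convention stated after equation (2).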
The energy required to extrude Ca^2+ out of the cytosol is met by ATP. The kinetics of this pump follows the enzyme-substrate formalism and hence, using Michaelis-Menten type kinetics [21], one can formulate the net efflux of Ca^2+ ions out of the cytosol in terms of the pump current Î[pump], given by

Î[pump] = Î[max] [Ca^2+]^H_pump / ([Ca^2+]^H_pump + K[pump]^H_pump)

where [Ca^2+] is the intracellular Ca^2+ concentration (in mM), H_pump is the Hill's coefficient for PMCA, Î[max] is the maximum pump current and K[pump] is the Ca^2+ concentration at which the maximum pump current is halved (in mM).

2.3 Na^+ / Ca^2+ exchanger

This protein is known to play an important role in excitation-contraction coupling in cardiac myocytes [22]. In neurons this protein helps in the extrusion of cytosolic Ca^2+ and hence in the modulation of neurotransmitter release [23]. It is known that the cardiac-type 3Na^+ / 1Ca^2+ exchanger is dominant in brain [23]. Thus, we have used the same exchanger in our model. We know that the amount of energy required to extrude an ion against its electrochemical gradient is given by [21, 20, 9]

ΔG[S] = RT ln([S][o]/[S][i]) + z[S] F V[m]

where S is the extruded ion. Introducing the energy barrier η and using the 3Na^+:1Ca^2+ stoichiometry (ΔG[Ca] = 3 ΔG[Na] at reversal), we can write the NCX current equation with an allosteric dependence on [Ca^2+], where [Ca^2+][o] is the extracellular Ca^2+ concentration (in mM), [Na^+] is the intracellular Na^+ concentration (in mM), [Na^+][o] is the extracellular Na^+ concentration (in mM), K[NCX] is the Ca^2+ concentration at which I[NCX] is halved, H_NCX is the Hill's coefficient of NCX, d[NCX] is a constant for the saturability of I[NCX] and g[NCX] is the conductance of NCX (in nS).

2.4 Na^+ / K^+ ATPase

Na^+ / K^+ ATPase (NaK) is also a P-type ATPase, which is also known as the sodium pump and is a 147 kDa membrane protein [24]. It is known for the extrusion of Na^+ ions, at the expense of ATP, and the inflow of K^+ ions. Its formulation is based on the steps given by Holmgren et al.
[14], who used high-speed voltage jumps to determine three distinct steps of Na^+ ion deocclusion from the pump. The current through the sodium pump has the following form, where I[NaK] is the scaling factor of the NaK current (in µA/cm^2), k[f] (in ms) is the forward (deocclusion) rate constant, k[b] (in ms) is the backward (occlusion) rate constant, K[0.5](0) is the half-activating [Na^+][o] concentration at 0 mV, H_NaK is the Hill's coefficient for the half-activating NaK current, λ is the fraction of the electrical field dropped along the access channel and τ[NaK] (in ms) is some constant.

2.5 Cytosolic Ca^2+ buffers

It is assumed that a single buffer species is present inside the cytosol and follows the bimolecular reaction

Ca^2+ + B ⇌ CaB

which can be formulated in terms of a pair of differential equations for the free and bound buffer. If we also assume that there are no sources and no sinks present for the buffer, then, letting B[T] represent the total buffer concentration, the above equations can be written in the reduced form

d[CaB]/dt = k^+ [Ca^2+] (B[T] − [CaB]) − k^− [CaB]

where k^+ is the buffer association rate, k^− is the buffer dissociation rate and [CaB] represents the bound buffer concentration.

2.6 Na^+ and K^+ channels

To generate action potentials, the Na^+ and K^+ channel currents are taken as modeled by Hodgkin and Huxley [15]. The transmembrane current due to the Na^+ and K^+ channels has been modeled using the linear current-voltage relationship derived from Ohm's law,

I[S] = g[S] (V[m] − V[S])

where S is either the Na^+ or K^+ ion, g[S] is the conductance of the given ion and V[S] is the reversal potential of the given ion determined by the Nernst equilibrium potential equation (or simply the Nernst equation),

V[S] = 1000 (RT / (z[S]F)) ln([S][o]/[S][i])

where 1000 is used to convert volts into millivolts. All other symbols have their usual meanings. Here, and in all other instances, the individual ionic reversal potentials have been determined using the Nernst equation at each integration step during runtime.
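The Nernst reversal potential of Section 2.6 and the linear current-voltage channels used throughout the model can be sketched as follows (Python rather than the paper's MATLAB; the K^+ concentrations and conductances are illustrative, not the paper's):

```python
import math

R, F, T = 8.314, 96485.0, 310.0   # J/(K*mol), C/mol, K

def nernst_mV(z, S_in, S_out):
    """Nernst reversal potential in mV; the factor 1000 converts V to mV."""
    return 1000.0 * (R * T / (z * F)) * math.log(S_out / S_in)

def ohmic_current(g, Po, Vm, Vrev):
    """Linear current-voltage channel, e.g. I_SKCa = g * Po * (Vm - V_K)."""
    return g * Po * (Vm - Vrev)

V_K = nernst_mV(1, 140.0, 5.4)               # illustrative [K+]_i, [K+]_o in mM
I_SK = ohmic_current(0.1, 0.5, -70.0, V_K)   # mS/cm^2 * mV -> uA/cm^2
```

Since the resting potential (−70 mV here) sits above V_K, the resulting K^+ current is outward (positive), as expected for a repolarizing channel.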
2.7 Ca^2+-activated small and intermediate K^+ channels

The current through SK[Ca] is modeled using a linear current-voltage relation as follows:

I[SK[Ca]] = g[SK[Ca]] P[o,SK[Ca]] (V[m] − V[K])

Here, g[SK[Ca]] is the SK[Ca] channel conductance per unit area (in mS/cm^2), P[o,SK[Ca]] is its Ca^2+-dependent open probability and V[K] is the reversal potential of K^+ ions. Similarly, the Ca^2+-activated intermediate K^+ current is modeled using a linear current-voltage relation as follows:

I[IK[Ca]] = g[IK[Ca]] P[o,IK[Ca]] (V[m] − V[K])

Here, g[IK[Ca]] is the IK[Ca] channel conductance per unit area (in mS/cm^2), P[o,IK[Ca]] is its Ca^2+-dependent open probability and V[K] is the reversal potential of K^+ ions.

2.8 Inward rectifier K^+ channel

The inward rectifier K^+ current is known to contribute to the resting membrane potential. It is also modeled using the linear current-voltage relationship, where g[K[ir]] is the K[ir] conductance (in nS), V[K] is the reversal potential for K^+ given by the Nernst equation and [K^+][o] is the extracellular K^+ ion concentration. Here, g[K[ir]] is converted into mS/cm^2 using A[cap] before being used in the equation governing the membrane potential.

2.9 Ca^2+ and Na^+ leak currents

To balance the net effect of I[NCX] and I[pump], there is supposed to be a Ca^2+ leak current given by

I[Ca,b] = g[Ca,b] (V[m] − V[Ca])

where g[Ca,b] is the Ca^2+ leak conductance per unit area (in mS/cm^2) and V[Ca] is the reversal potential (in mV) for Ca^2+ given by the Nernst equation. Similarly, we can formulate a Na^+ leak current to balance the net effect of I[NCX] and Î[NaK]:

I[Na,b] = g[Na,b] (V[m] − V[Na])

where g[Na,b] is the Na^+ leak conductance per unit area (in mS/cm^2) and V[Na] is the reversal potential (in mV) for Na^+ given by the Nernst equation. The current due to all other ions is considered as leak and is incorporated as

I[L] = g[L] (V[m] − V[L])

where g[L] is the leak conductance (in mS/cm^2) and V[L] is the leak reversal potential, assumed to be constant.

2.10 Membrane potential

Like the formulation of Hodgkin and Huxley [15], we have divided the total membrane current into capacitive and ionic currents.
Thus, for the capacitive current we have

C[m] dV[m]/dt = I[app] − Σ I[i]

where I[app] is the applied membrane current density (in µA/cm^2), V[m] is the membrane potential (in mV), C[m] is the specific membrane capacitance (in µF/cm^2), I[i] accounts for all the transmembrane currents discussed earlier and t is time (in ms). The gating mechanism of the transmembrane currents follows Hodgkin and Huxley [15]. Combining equations (1)-(13) we can write the mathematical model governing CaOs with relevance to Na^+ and K^+ ions, as in the case of an action potential, as equation (14). In equation (14), α[i], β[i] (i = m, n, h, m[c], h[c]) are rate constants which vary with membrane potential but not with time (ms^-1), and m, n, h, m[c], h[c] are dimensionless gating variables with values lying between 0 and 1. In this model we assume that fluxes from IP[3]Rs are absent. This can be achieved by blocking the IP[3]R channel using an IP[3]R antagonist like heparin [25]. This assumption has been taken to exclude the effect of the intracellular stores on CaOs. The initial conditions of the system are specified with all ionic concentrations in units of mM. The ordinary differential equations governing the gating variables (m, n, h, m[c], h[c]) are of the form

dm/dt = α[m] (1 − m) − β[m] m

Here, m and (1 − m) represent the on and off states of the variable m, respectively. The variables n, h, m[c] and h[c] follow likewise. The voltage-dependent rate constants in equation (14) are functions of V[m] only. For the solution of equations (14)-(16), we have used Euler's method and written a script in MATLAB that has been simulated on an AMD Turion 64 × 2 machine with 1.6 GHz processing speed and 2.5 GB memory. The time taken per simulation is ~9 sec when simulating for 30 ms using 4000 time steps, i.e. Δt = 0.0075 ms. The numerical results obtained are used to study the effect of varying transmembrane currents on CaOs, which are discussed in the following section.
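The Euler scheme used for equations (14)-(16) can be illustrated on a stripped-down version of the membrane equation: a single ohmic leak current plus one first-order gating variable with constant rates (a Python sketch rather than the paper's MATLAB; all parameter values are illustrative):

```python
# C_m dV/dt = I_app - g_L (V - V_L);   dm/dt = alpha (1 - m) - beta m
Cm, gL, VL = 1.0, 0.3, -59.4   # uF/cm^2, mS/cm^2, mV (illustrative)
I_app = 10.0                   # uA/cm^2
alpha, beta = 0.5, 0.1         # ms^-1, held constant for this sketch

def euler_step(V, m, dt):
    """One forward-Euler step of the reduced membrane + gating system."""
    dV = (I_app - gL * (V - VL)) / Cm
    dm = alpha * (1.0 - m) - beta * m
    return V + dt * dV, m + dt * dm

V, m = -60.0, 0.0
dt = 0.0075                    # ms, the step size used in the paper
for _ in range(40000):         # 300 ms of simulated time
    V, m = euler_step(V, m, dt)

# Analytic steady states the Euler solution should converge to:
V_ss = VL + I_app / gL         # depolarized plateau under constant I_app
m_ss = alpha / (alpha + beta)  # gating steady state alpha/(alpha + beta)
```

With this step size the explicit scheme is comfortably stable (dt is far below the membrane and gating time constants), which is consistent with the paper's choice of Δt = 0.0075 ms.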
3 Results and discussion

In all the figures, it is assumed that the cytosolic Ca^2+ is buffered with 50 µM Ethylene Glycol Tetraacetic Acid (EGTA). The standard biophysical parameters used for simulation of the model are listed in Tables 1-4 unless stated along with the figures. Since our main objective is to study CaOs, we have shown results that are pertinent to CaOs only. In Figure 2 we observe the effect of a 10 µA/cm^2 current impulse on the membrane potential. Such an effect has been studied in great detail by Hodgkin and Huxley [15] and Luo and Rudy [16, 17, 18]; thus, we need not give much emphasis to it here. In Figure 3, we have shown the different current densities. All these current densities result in the shown action potential. The rest of the results shown are relevant to CaOs and have been studied in detail in the following figures. In Figure 4 we have shown the Ca^2+ oscillation and buffered Ca^2+ curves with respect to the standard parameters listed in Table 1. Initially it was assumed that 2.3 µM of Ca^2+ is buffered. Further, it is apparent from Figure 4 that the amplitude of the first Ca^2+ spike is greater than that of the second spike. This is because of the slow dissociation rate constant of EGTA, which results in higher buffered Ca^2+ and lower cytosolic Ca^2+ concentration. Comparing Figure 4 with Figure 2, it is clear that the two are positively correlated. As the membrane potential rises the Ca^2+ concentration rises, and when the membrane potential drops the Ca^2+ concentration drops. In Figure 5 the effect of increasing the total buffer concentration of EGTA is shown. The results are shown for B[T] = 50 µM (dark line) and B[T] = 200 µM (broken line). As expected, increasing the buffer concentration results in a lower amplitude of Ca^2+ oscillation, which is also evident from Figure 5. In Figure 6 we observe the effect of increasing the NCX conductance. Our simulation is in support of the biological fact that at negative potentials NCX works in the reverse direction, i.e.
outflow of Ca^2+ ions and inflow of 3Na^+ ions. Increasing the NCX conductance results in a higher amplitude of Ca^2+ oscillation, while there is no change in the resting Ca^2+ concentration at more positive membrane potentials, as we have used a leak to neutralize the effect of the NCX and pump currents. In Figure 7 we observe the results for which the mathematical model was proposed. It is widely believed that cells encode information in the frequency of Ca^2+ oscillations rather than in their amplitude. A number of authors have shown different roles of this ubiquitous sodium pump [26, 27]. Matchkov et al. [27] also experimentally demonstrated that the sodium pump plays a significant role in regulating CaOs via the regulation of cytosolic Na^+ ions. A similar conclusion is suggested by our present simulations. We increased the pumping rate of the sodium pump, I[NaK] = 1.5 µA/cm^2 (dark line), 3 µA/cm^2 (broken line), 4.5 µA/cm^2 (dotted line), and observed changes in CaOs. It is seen that an increase in I[NaK] results in an increase in the period of Ca^2+ oscillations. The changes were quite apparent and are reflected in Figure 7. In Figures 8-10, the effects of different extracellular concentrations of Ca^2+, K^+ and Na^+ on CaOs are shown. The findings of Figure 8 are quite obvious but should be mentioned to show accordance with biological facts. The curves are shown for [Ca^2+][o] = 1.8 mM (dark line) and [Ca^2+][o] = 1 mM (broken line). It is apparent from Figure 8 that lowering [Ca^2+][o] results in a lower amplitude of Ca^2+ oscillation, which is because of a corresponding decrease in the Ca^2+ gradient. In Figure 9, the findings are worth mentioning, as an increase in the [K^+][o] concentration resulted in an increase in the frequency of Ca^2+ oscillation. These findings are also expected, as increasing [K^+][o] leads to an increase in the reversal potential of K^+ ions and hence increases the frequency of the action potential, which in turn increases the frequency of CaOs.
In Figure 10, the effect of decreasing the [Na^+][o] concentration is shown. The curves are shown for [Na^+][o] = 145 mM (dark line) and [Na^+][o] = 140 mM (broken line); it is apparent from the figure that decreasing [Na^+][o] results in an increase in the amplitude of CaOs. As in the previous case, changing the [Na^+][o] concentration changes the reversal potential of the Na^+ ion. But the observed change in V[Na] is minimal and obviously does not affect the amplitude of the action potential. The reason behind the increase in amplitude and latency of the Ca^2+ oscillation is the change in the Na^+ ion gradient. This gradient regulates the pumping rate of the NCX exchanger: decreasing the gradient means decreasing the pumping rate of the NCX exchanger, hence affecting the net extrusion of Ca^2+ ions via the NCX exchanger and resulting in an increase in the amplitude and latency of CaOs. The results obtained in this paper are new and specific to CaOs. The intent behind the present study was to investigate the effect of the Na^+/K^+ ATPase on Ca^2+ oscillation, influenced by the experimental results obtained by Matchkov et al. [27]. The results obtained by our simulations are quite convincing with respect to biological facts. The obtained results also confirmed the hypothesis of Matchkov et al. [27] that the interaction between NCX and the Na^+/K^+ ATPase modulates intercellular communication. It was observed that increasing the NaK current decreases the frequency of CaOs. The results obtained by previous investigators regarding CaOs have been mainly concerned with membrane potential, inositol trisphosphate (IP[3]) or the ryanodine receptor [6, 28, 29, 10, 11, 12, 13]. None of the earlier investigators gave much emphasis to this interaction of NCX and the sodium pump, which in turn affects CaOs. Thus, in this article, we have looked into and demonstrated a novel mechanism which modulates the frequency of Ca^2+ oscillation. Here, we have proposed a mathematical model which can be used for problems related to similar cell processes.
The results obtained in this paper give new and useful insight for neurologists to look into the paradigm of CaOs from a different perspective. Also, the results obtained are relevant to biomedical scientists for developing protocols for the diagnosis and treatment of neurological disorders.

Acknowledgments. The author acknowledges fruitful discussions with Dr. Ronald J. Clarke, School of Chemistry, The University of Sydney, Australia, for giving useful insights into the kinetics of the Na^+/K^+ ATPase.

References

[1] A.C. Charles, C.C.G. Naus, D. Zhu, G.M. Kidder, E.R. Dirksen and M.J. Sanderson, Intercellular Calcium Signaling via Gap Junctions in Glioma Cells. The Journal of Cell Biology, 118 (1992), 195-201.
[2] A. Peskoff and G.A. Langer, Calcium Concentration and Movement in the Ventricular Cardiac Cell during an Excitation-Contraction Cycle. Biophys. J., 74 (1998), 153-174.
[3] J. Shuai, J.E. Pearson and I. Parker, Modeling Ca^2+ Feedback on a Single Inositol 1,4,5-Trisphosphate Receptor and Its Modulation by Ca^2+ Buffers. Biophys. J., 95 (2008), 3738-3752.
[4] O. Melien, L.S. Nilssen, O.F. Dajani, K.L. Sand, J-G Iversen, D.L. Sandnes and T. Christoffersen, Ca^2+-mediated activation of ERK in hepatocytes by norepinephrine and prostaglandin F2: role of calmodulin and src kinases. BMC Cell Biol., 3 (2002).
[5] C.J. Dixon, J.F. Hall, T.E. Webb and M.R. Boarder, Regulation of Rat Hepatocyte Function by P2Y Receptors: Focus on Control of Glycogen Phosphorylase and Cyclic AMP by 2-Methylthioadenosine 5-Diphosphate. The Journal of Pharmacology and Experimental Therapeutics, 311 (2004), 334-341.
[6] M.S. Jafri, S.P. Vajda, S. Pasik and B. Gillo, A membrane model for cytosolic calcium oscillations: A study using Xenopus oocytes. Biophys. J., 63 (1992), 235-246.
[7] T.A. Fiacco and K.D. McCarthy, Intracellular Astrocyte Calcium Waves In Situ Increase the Frequency of Spontaneous AMPA Receptor Currents in CA1 Pyramidal Neurons. The Journal of Neuroscience, 24 (2004), 722-732.
[8] L. Sun, R. Hodeify, S. Haun, A. Charlesworth, A.M. MacNicol, S. Ponnappan, U. Ponnappan, C. Prigent and K. Machaca, Ca^2+ Homeostasis Regulates Xenopus Oocyte Maturation. Biology of Reproduction, 78 (2008), 726-735.
[9] J. Keener and J. Sneyd, Mathematical Physiology. Springer, 8 (1998).
[10] A. Atri, J. Amundson, D. Clapham and J. Sneyd, A Single-Pool Model for Intracellular Calcium Oscillations and Waves in the Xenopus laevis Oocyte. Biophys. J., 65 (1993), 1727-1739.
[11] J. Wagner and J. Keizer, Effects of Rapid Buffers on Ca^2+ Diffusion and Ca^2+ Oscillations. Biophys. J., 67 (1994), 447-456.
[12] J.M.A.M. Kusters, M.M. Dernison, W.P.M. van Meerwijk, D.L. Ypey, A.P.R. Theuvenet and C.C.A.M. Gielen, Stabilizing Role of Calcium Store-Dependent Plasma Membrane Calcium Channels in Action-Potential Firing and Intracellular Calcium Oscillations. Biophys. J., 89 (2005), 3741-3756.
[13] H.S. Silva, A. Kapela and N.M. Tsoukias, A mathematical model of plasma membrane electrophysiology and calcium dynamics in vascular endothelial cells. Am. J. Physiol. Cell. Physiol., 293 (2007), C277-C293.
[14] M. Holmgren, J. Wagg, F. Bezanilla, R.F. Rakowski, P. De Weer and D.C. Gadsby, Three distinct and sequential steps in the release of sodium ions by the Na+/K+ ATPase. Nature, 403 (2000), 898-901.
[15] A.L. Hodgkin and A.F. Huxley, A Quantitative Description of Membrane Current and its Application to Conduction and Excitation in Nerve. J. Physiol., 117 (1952), 500-544.
[16] C.H. Luo and Y. Rudy, A model of the ventricular cardiac action potential. Depolarization, repolarization, and their interaction. Circ. Res., 68 (1991), 1501-1526.
[17] C.H. Luo and Y. Rudy, A dynamic model of the cardiac ventricular action potential. I. Simulations of ionic currents and concentration changes. Circ. Res., 74 (1994), 1071-1096.
[18] C.H. Luo and Y. Rudy, A dynamic model of the cardiac ventricular action potential. II. Afterdepolarizations, triggered activity, and potentiation. Circ. Res., 74 (1994), 1097-1113.
[19] T.R. Shannon, F. Wang, J. Puglisi, C. Weber and D.M. Bers, A Mathematical Treatment of Integrated Ca^2+ Dynamics Within the Ventricular Myocyte. Biophys. J., 87 (2004), 3351-3371.
[20] G.L. Fain, Molecular and cellular physiology of neurons. Harvard University Press (1999).
[21] D.L. Nelson and M.M. Cox, Lehninger Principles of Biochemistry. W.H. Freeman (2005).
[22] Y. Fujioka, K. Hiroe and S. Matsuoka, Regulation kinetics of Na+-Ca^2+ exchange current in guinea-pig ventricular myocytes. J. Physiol., 529 (2000), 611-623.
[23] M.P. Blaustein and W.J. Lederer, Sodium/Calcium exchange: its physiological implications. Physiol. Rev., 79 (1999), 763-854.
[24] R.J. Clarke and D.J. Kane, Two Gears of Pumping by the Sodium Pump. Biophys. J., 93 (2007), 4187-4196.
[25] L.Y. Bourguignon, N. Iida, L. Sobrin and G.J. Bourguignon, Identification of an IP[3] receptor in endothelial cells. J. Cell. Physiol., 159 (1994), 29-34.
[26] A. Miyakawa-Naito, P. Uhlen, M. Lal, O. Aizman, K. Mikoshiba, H. Brismar, S. Zelenin and A. Aperia, Cell Signaling Microdomain with Na,K-ATPase and Inositol 1,4,5-Trisphosphate Receptor Generates Calcium Oscillations. J. Bio. Chem., 278 (2003), 50355-50361.
[27] V.V. Matchkov, H. Gustafsson, A. Rahman, D.M. Boedtkjer, S. Gorintin, A.K. Hansen, E.V. Bouzinova, H.A. Praetorius, C. Aalkjaer and H. Nilsson, Interaction Between Na+/K+-Pump and Na+/Ca^2+-Exchanger Modulates Intercellular Communication. Circ. Res., 100 (2007), 1026-1035.
[28] G.W. De Young and J. Keizer, A single-pool inositol 1,4,5-trisphosphate-receptor-based model for agonist-stimulated oscillations in Ca^2+ concentration. Proc. Natl. Acad. Sci. USA, 89 (1992), 9895-9899.
[29] J. Sneyd, S.
Girard and D. Clapham, Calcium wave propagation by calcium-induced calcium release: an unusual excitable system. Bull. Math. Biol., 55 (1993), 315-344. [ Links ] Received: 02/V/11. Accepted: 11/IX/11. *Present address: Biotechnology & Bioengineering Center, and Department of Physiology, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI 53226, USA.
st: Re: rolling regression calculation speed and large data sets

From: Malcolm Wardlaw <malcolm@mail.utexas.edu>
To: statalist@hsphsun2.harvard.edu
Subject: st: Re: rolling regression calculation speed and large data sets
Date: Fri, 03 Oct 2008 12:47:06 -0500

Thanks for the reply. It's not really the question I was asking, but I really like what you did. I think it solves my immediate problem in an oblique way, so thank you. That's pretty slick, I must say. I think I often get so used to typing in "reg y x" that I forget how easy it is just to calculate simple linear regression output manually. I think I'll expand a program like this for my permanent files.

I haven't heard from anyone who knows about how the memory schema works, but I was wondering if there are any written resources on this. Does NC152 handle this at all?

"Austin Nichols" <austinnichols@gmail.com> wrote:
>The described phenomenon seems odd to me, and worth some further
>investigation, but have you considered generating those variables
>using lag operators (h tsvarlist) and a by: prefix instead of looping
>over obs and running regressions? That approach would have the added
>advantage of ensuring you are not tripped up by any missing time
>periods, assuming you have -tsset- properly (e.g. if some company 36
>obs in a row are not for 36 consecutive trading days but for 50, say,
>because of missing obs).

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
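The "calculate simple linear regression output manually" idea is worth spelling out. This is not Stata, but a minimal Python sketch (the function name and interface are my own) of the same closed-form arithmetic, keeping running sums so each rolling window costs O(1) to update instead of refitting a regression per observation:

```python
def rolling_ols(y, x, w):
    """Rolling slope/intercept of y on x over windows of length w,
    computed from the closed-form sums -- no regression routine needed."""
    # Sums over the first window.
    sx, sy = sum(x[:w]), sum(y[:w])
    sxx = sum(v * v for v in x[:w])
    sxy = sum(a * b for a, b in zip(x[:w], y[:w]))
    out = []
    for t in range(w - 1, len(y)):
        if t >= w:  # slide the window: drop observation t-w, add observation t
            sx += x[t] - x[t - w]
            sy += y[t] - y[t - w]
            sxx += x[t] ** 2 - x[t - w] ** 2
            sxy += x[t] * y[t] - x[t - w] * y[t - w]
        slope = (w * sxy - sx * sy) / (w * sxx - sx * sx)
        out.append((slope, (sy - slope * sx) / w))
    return out
```

A 36-observation window over daily data, as in the thread, would then be one pass over the series; note that missing trading days still need explicit handling, which is exactly Austin's point about -tsset-.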
Proof Annotations For Bubble Sort

I did the following proof annotations for bubblesort today morning in the underground (yes, I admit, I worked around with it a bit when I keyed it in). The following bubble sort method is given in a hypothetical Java-like language.

 // [ predicate ] is a proof annotation about the variables at the spot where this "comment" is located.
 // The predicate must hold (evaluate to true) every time the program can reach that point.
 // These predicates state simple but relevant facts about the state of the algorithm, like what part is already sorted.
 // The predicate language must be closed, i.e. not depend on unlimited execution (not Turing complete), but only predicate logic.
 // Example: "forall 0 < num i < n : a[i]>0" means that all elements of a with an index of 1..n-1 are positive.
 // Strictly this forall is syntactic sugar for "forall i elem num : (0 < i && i < n) => ( a(i) > 0 )"

 void bubblesort(number[] a)
   unsigned n = a.length;
   for (unsigned i = 0; i + 1 < n; i++)
     // 1 (partially sorted at the end, trivially true for i=0)
     //[ forall n-i <= num l && l < n :
     //    forall 0 <= num k && k <= l :
     //      a[k] <= a[l] ]
     for (unsigned j = 0; j + 1 < n-i; j++)
       // 2 (higher than all earlier elements; trivially true for j=0)
       //[ forall 0 <= num k && k < j :
       //    a[k] <= a[j] ]
       if (a[j] > a[j+1])
         //[ a[j] > a[j+1] ]    // 3a (trivial from if condition)
         a[j] <=> a[j+1];
         //[ a[j] < a[j+1] ]    // 3b (trivial, swap)
         //[ a[j] <= a[j+1] ]   // 3c (trivial, relaxation of 3b)
       else
         //[ a[j] <= a[j+1] ]   // 3d (trivial, from negated if condition)
       //[ a[j] <= a[j+1] ]     // 3 (two elements ordered)
       //[ forall 0 <= num k && k < j+1 : a[k] <= a[j+1] ]      // 4 (higher than all earlier elements+1)
     //[ forall 0 <= num k && k < n-i-1 : a[k] <= a[n-i-1] ]    // 5 (last one is higher than all earlier elements)
     //[ forall n-i-1 <= num l && l < n : forall 0 <= num k && k <= l : a[k] <= a[l] ]  // 6 (partially sorted at the end+1)
   //[ forall 0 < num l && l < n : forall 0 <= num k && k < l : a[k] <= a[l] ]          // 7 (all sorted)
   // 7 might be shortened to [ sorted a ] if "sorted" were a predefined predicate

I started with the result of the "swap" (annotation 3); later I added 3a to 3d, because that would guarantee that 3 can be proven by simply joining over both if-branches. I remembered that the inner loop will move the highest element to the end, so I wrote it down, and to prove it I added annotation 2 to the beginning of the loop. Any simple reasoning system will now be able to prove 4 from 2 and 3, as well as 2 from 4 (next iteration). Then I wrote down 6, that the last elements are ordered. To prove this, I added 1 (trivially true at start) and 5 (just the annotation for the last nested loop iteration). From this follows 7 the same way as 5. We do not even need mathematical induction in the prover to prove 5 or 7; we can just plug in the instance of the annotation for the last element of the nested loop.

I believe that these annotations can be proven fully automatically, because they are

• trivial facts about statements in the program
• simple logical joins over the branches of the control flow graph.

I suppose that most of these annotations could be dropped and inferred automatically (especially 3a-3d, but possibly also 1, 2, 4 and 6). But every dropped annotation will drive the search space for the prover up (with all the given annotations it will not have to do any searching, but only to test). I think the difficulty is not the proof itself, but selecting the few relevant annotations (invariants) that together lead to the conclusion (here 7). The programmer, who has an understanding of the algorithm, will be able to select these easily, and the prover will easily prove them from the expressions (or not) and combine them together to the surrounding annotations (or not). An automatic prover cannot guess these invariants, because there are just too many of them. To repeat: the difficulty lies in the selection of the annotations, not in their complexity.
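The numbered invariants are at least easy to exercise at runtime. Here is an illustrative Python translation (my own sketch, not part of the original page), with each assert mirroring one annotation, stated in the ascending-order form; running it tests the invariants on concrete inputs rather than proving them:

```python
def bubblesort_checked(a):
    """Bubble sort with the numbered proof annotations as runtime asserts."""
    n = len(a)
    for i in range(n - 1):
        # 1: the tail a[n-i:] is sorted and dominates everything before it
        assert all(a[k] <= a[l] for l in range(n - i, n) for k in range(l + 1))
        for j in range(n - i - 1):
            # 2: a[j] is at least as large as every earlier element
            assert all(a[k] <= a[j] for k in range(j))
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]   # the <=> swap
            assert a[j] <= a[j + 1]                              # 3
            assert all(a[k] <= a[j + 1] for k in range(j + 1))   # 4
        # 5: the largest unsorted element has bubbled to position n-i-1
        assert all(a[k] <= a[n - i - 1] for k in range(n - i - 1))
    # 7: the whole array is sorted
    assert all(a[k] <= a[k + 1] for k in range(n - 1))
    return a
```

Assertion checking like this only exercises the annotations on the inputs you feed it; the point of the page stands, in that a checker would have to prove them for all inputs.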
Of course, I can't test the above program, because these proof annotations are not yet (:-) understood by any system. But they might be added easily, say to Java as code attributes, that may (!) be evaluated by the compiler to, e.g., improve the code (remove unneeded bounds checks or assertions), or by an automatic verification system. The latter could, e.g., run in the IDE and tell us beforehand that, e.g., the last annotation ([sorted a]) cannot be proved from the others (I probably made fence post errors and could then correct these until the proof completes successfully). That way there will be no errors in the proof, provided the sorted predicate is defined correctly, because the primitive assertions must follow from the code itself.

-- Shouldn't you also prove that n is the correct number of elements to sort?

• I don't know what you mean. n is not a parameter; it is identical to the length of the array.

• You should prove it is correct because it is not a parameter: you generated its value inside your function. A hypothetical automatic proof-checker should tell you that you can't use n as an input to an annotation until you either prove it is correct, or assume it is correct as an axiom. (I'm thinking it should work kind of like Perl's taint feature: if I haven't proven or assumed something, it and everything dependent on it is unproven.) If the checker had access to the definition of a.length, it could check that it returned an unsigned, and therefore that n doesn't lose any data. It would also have to have already proven that a.length is correct, or you would have to tell it to assume that.
What if n is the maximum unsigned value? Then your calculations i+1 and j+1 can overflow!

• You are right here. I glossed over the problem of value ranges. But I'd prefer a language that distinguishes between numeric types without overflow and ring types with overflow (for the record: I'd like "<" and ">" to behave for these overflow types like water-mark comparisons, which can be implemented in assembly with one branch instruction, e.g. bpl/bne). For numeric types I'd assume that arithmetic overflow is either not possible (due to proof checking) or raises an exception (which can be modeled as a failure or bottom type).

• I would expect it to sort any input it said it could sort, i.e. any legal array of numbers, without raising an overflow exception (an out of memory exception or similar, I could see). Otherwise, the proof of any function depending on bubblesort would have to include a proof that the length of the array it passes to bubblesort <= max(unsigned) - 1.

• See comment above.

• By the way, a number of languages (including PerlLanguage) support these two types of integers with a "big int" package. The standard int of the language can overflow, and the "big int" is correct until you run out of memory.

• No, I didn't mean a "big int", but an int which is either guaranteed by the checker not to overflow, or throws some kind of OverflowException if it does.

  ◦ The "big int" package from PerlLanguage does work this way. The number of bits is not just a bigger constant; it allocates more memory for a given int as needed. It will fail due to lack of memory before overflowing. However, I don't think you can use one to index an array...

• I'd assume array.length to be of this kind of type, thus overflow is not possible, but boundary conditions still are. I will prepare an example on ProofAnnotationsForArithmetic shortly.

Why do the annotations have int types for indices and the code has unsigned? Are the ints in the annotations supposed to be of infinite range?
• Yes, more inaccuracies on my side - remember, this is just a sketch. But yes, the ints in the annotations are supposed to be the real - err - integer types.

• When you attempt to prove something, inaccuracies are not allowable. I am pointing out what I believe to be holes in your proof to see if they can be plugged, and if so, whether it can still be simple. (I would like it to be simple, but I'm currently not sure that's possible.)

• You are right: 'When you attempt to prove something, inaccuracies are not allowable'. My focus was not on proving the correctness of bubblesort, but on showing the way of doing it. Your points indicate that I have overlooked some relevant points, but to me it seems as if these could be ameliorated easily (compared to the proof annotations and the checker itself).

• As for the int/unsigned: I should have used "num" instead of "int" in the proof annotations, indicating 'unlimited' size (corrected in text). Because the proof holds for this more general case, it follows immediately that it holds for the restriction to ranges. One could add a step 8:

  //[ forall 0 < unsigned l && l < n : forall 0 <= unsigned k && k < l : a[k] <= a[l] ]  // 8 (all sorted in range)

• But I think that such restrictions are usual business for a checker anyway.

Critique: I cannot read the annotations, even for an example as simple as this.

• May I ask why? What is the problem? The "forall" predicates? These are fairly simple PredicateLogic statements. I added a short explanation at the top. PredicateLogic is required for such kinds of proofs in any case. Or is it unclear how they combine or are derived?

• Interleaving declaration and definition thoroughly derails me when I read the forall statements. I think it'd be substantially easier to read if things were written as [ given l, k unsigned: forall l. 0 < l < n in forall k. l < k < n in a[k] <= a[l] ].
However, trying to understand how the pieces fit together also impedes my willingness to use the technique, much less offer encouragement for others to use it. When I find I need formality in verifying that a piece of code works, I find it far easier to rewrite it in a functional language (e.g., HaskellLanguage), where I can prove things work inductively, turn the inductive proofs into unit tests for the imperative language, and re-code the desired implementation in the imperative language. It works better for me for two reasons: I have unit tests that codify my proofs of correctness, and the functional representation is kept concise. I don't have to worry about imperative flow and invariant checking interleaved.

• I understand that the interleaving is difficult to follow. But I don't really understand your method. Could you please give a short example of your method, preferably using the bubble sort?

• Please see QuasiFormalMethods.

Possible implementation strategy

This probably would not generate a universal checker, but it seems like it would lead to something damn useful: Add an "assume" annotation to patch over what you can't get the checker to do automatically (yet). Then start with a sample piece of code (probably much simpler than a sort), an annotation that assumes the final result, and one to check the final result:

  if (a >= 10)
    a = 0;
  //[ assume [a < 10] ]
  //[ a < 10 ]

At each step, find an assumption that you are unsure about, then set about proving it, and remove the assumption. Proving the assumption may require only changing the annotations, or it may require extending the checker.

I take it the checker should reject any program with assumptions it cannot prove (yet) and list exactly these remaining assumptions.

Actually, I was thinking that the checker should accept an assumption as a fact, unless a previously existing fact contradicts it - i.e. it is an axiom.
Also, the programming language definition (and maybe the implementation) would be treated as a set of axioms. The source code could also be treated as a set of axioms, and the non-assumption annotations as hypotheses. So, maybe there are 3 states for any given annotation:

• known false (an unresolvable contradiction arose while attempting to prove it)
• known true (follows from axioms and/or other known true hypotheses and/or assumptions)
• unknown -- can't prove or disprove it based on the axioms

And an attribute "depends_on_assumptions" that is true if

• the annotation is an assumption, or
• the annotation depends on any annotation that depends_on_assumptions.

The really tricky bit here is with rules like: X is true if either A or B are true. If A is true and does not depend on assumptions, then X is also true and does not depend on assumptions, regardless of B. It would, of course, be able to list the assumptions that any hypothesis depends on.

Maybe I should add that I have done little actual formal verification of programs myself. I know that it is done, but not whether the annotations sketched here are a viable way.

This sounds similar to the Z Notation (http://en.wikipedia.org/wiki/Z_notation). From what I have seen, Z is not about annotating a program (like shown in the comments above), but about the separate specification of a program. Further, it has quite a different syntax and use. Not what I expected. It looks like does at least 90% of this.
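That bookkeeping is small enough to sketch directly. The following Python fragment (names and representation invented here, not taken from any existing checker) models the three states plus the depends_on_assumptions flag, including the tricky disjunction rule: a disjunction proved via an assumption-free branch is itself assumption-free.

```python
TRUE, FALSE, UNKNOWN = "known true", "known false", "unknown"

class Fact:
    def __init__(self, status, depends_on_assumptions=False):
        self.status = status
        self.depends_on_assumptions = depends_on_assumptions

def disjoin(a, b):
    """Status of X under the rule 'X is true if either A or B is true'."""
    # If either branch is true without assumptions, so is X -- regardless
    # of the other branch.
    for f in (a, b):
        if f.status == TRUE and not f.depends_on_assumptions:
            return Fact(TRUE, False)
    if TRUE in (a.status, b.status):
        return Fact(TRUE, True)   # true, but only via some assumption
    tainted = a.depends_on_assumptions or b.depends_on_assumptions
    if a.status == FALSE and b.status == FALSE:
        return Fact(FALSE, tainted)
    return Fact(UNKNOWN, tainted)
```

Conjunction would be the mirror image (false dominates), and listing the assumptions a hypothesis depends on amounts to carrying a set instead of a boolean flag.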
Help Prove/Disprove Paradox April 25th 2008, 10:53 AM Help Prove/Disprove Paradox Hi, I'm completely stuck on this hw problem. It's the last one, and I'm completely.... lost. Any help would be much appreciated Prove or disprove: There exists a book that refers to all those books and only those books that do not refer to themselves April 25th 2008, 11:20 AM One does not prove or disprove a paradox. A paradox has these properties. If A is a paradox then A being true implies that A is false. As well as, A being false implies that A is true. If R(A,B) on the set of books means that A refers to B if and only if B does not refer to B then does A refer to itself? April 25th 2008, 11:59 AM Funky Deal I'm more lost now that before... But thanks a lot! April 25th 2008, 12:40 PM I think I got it!
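For the record, the argument the replies point at can be written out in two lines (a standard Russell-style sketch, not taken from the thread itself; R(A, B) is the relation from the second reply, "A refers to B"):

```latex
% Suppose such a book B exists. Its defining property says, for every book X:
\[
R(B, X) \iff \lnot\, R(X, X).
\]
% Instantiating X := B gives
\[
R(B, B) \iff \lnot\, R(B, B),
\]
% a contradiction. Hence no such book can exist, and the statement is disproved.
```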
Hot Air Balloon

3.2: Hot Air Balloon

Created by: CK-12

This activity is intended to be used with Algebra I, Chapter 2, Lesson 3.

ID: 8613

Time required: 30 minutes

Activity Overview

In this activity, students use a dynamic, electronic manipulative to perform integer addition and subtraction. The goals of the activity are to (1) provide students with a visual for adding and subtracting integers and (2) help students understand that subtraction can be thought of as "adding the opposite" or "adding the inverse." The model given in this activity shows a hot air balloon that begins at ground level with a certain number of helium bags (providing lift) and the same number of sand bags (providing weight). The vertical position of the hot air balloon is determined by adding or removing a number of helium bags (representing positive integers) and sand bags (representing negative integers).

Topic: From Arithmetic to Algebra

• Use technology to verify that adding the number $-x$ is the same as subtracting the number $x$.
• Solve one-step linear equations of the form $x + a = b$, where $a$ and $b$ are integers.

Teacher Preparation and Notes

• This activity may be used to introduce or review integer addition and subtraction. You may choose to use the activity in its entirety or break it up into separate activities by sets.
• It is very important that you thoroughly describe the model to students prior to them exploring the model on their own. Use a projector or a real balloon to demonstrate the model in a whole class, teacher-led setting. Some students will catch on very quickly and wean themselves from using the model. Others will prefer and/or need to stay a longer time with the model.
• To download the calculator file, go to http://www.education.ti.com/calculators/downloads/US/Activities/Detail?id=8613 and select "HOTAIR.8xp" to download.
Associated Materials

In this activity, students will explore:

• adding and subtracting integers using a model of a hot air balloon
• the relationship between addition and subtraction

Explain how the model of the hot air balloon works. Students will need to download the HOTAIR program to their handheld and work with a partner to complete the activity.

Things to Remember…

This model is similar to a hot air balloon and provides a way to visualize adding and subtracting integers.

• Positive integers are represented by helium bags. They raise the balloon. Negative integers are represented by sand bags. They lower the balloon.
• Addition is the operation of putting bags on the balloon. Subtraction is the operation of taking bags off of the balloon.
• Always reset the balloon's vertical position to $0$.

To begin, press PRGM and find the HOTAIR program in the list. Press ENTER to load the program.

Problem 1 – Integer addition

Students will now use the model to find the sum $2 + (-6)$. Press ENTER to choose option 1, Integer Addition. The program displays instructions. The right arrow adds a helium bag to the balloon, and the left arrow adds a sand bag. Students can see the balloon at the bottom of the screen. The horizontal line represents ground level. The balloon's vertical position is displayed at the bottom of the screen. Right now, the position is $-12$.

To view the addition $a + b$ with the model, start with $a$ and then put on the bags for $b$. To find the sum $2 + (-6)$, first add $2$ helium bags; the balloon rises $2$ units to position $2$. Adding $-6$ means adding $6$ sand bags, so press the left arrow $6$ times; the balloon moves down $6$ units.

Together with a partner, students need to translate the following expressions into "balloon language" and use the model to find the sum. Remind them to reset the balloon to ground level each time. When they are finished, they can press CLEAR to return to the menu.

1. $-4 + 7 = \underline{3}$
2. $7 + 3 = \underline{10}$
3. $5 + (-7) = \underline{-2}$
4.
$-5 + (-3) = \underline{-8}$

Problem 2 – Missing addend

In this problem, students are given the value of $a$ and the sum $a + b$, and they must find the missing addend $b$.

Students are to arrow down to option 2, Missing Addend, and press ENTER. Suppose you want to find the value of $b$ such that $-4 + b = 3$: enter $-4$ by adding $4$ sand bags, and the sum $3$ places the target balloon at $3$.

They are to input the value for $A$ and press ENTER. Then input the sum and press ENTER. The program displays instructions. Remind students that the right arrow adds a helium bag to the balloon, and the left arrow adds a sand bag, as with Integer Addition. They can see the balloon at the bottom of the screen. Now there is also a target balloon to the right. The target balloon is positioned at the value of the sum. The task for students is now to find the value of $b$.

To find $b$, start with the balloon at $0$. Add $4$ sand bags by pressing the left arrow $4$ times.

The target balloon is above the balloon's position, so students need to add helium bags. They should press the right arrow to add helium bags until the balloons line up. Instruct them to count as they press to find how many helium bags they added. For this example, $b = 7$.

Together with a partner, students are to translate the following equations into "balloon language" and use the model to find the missing addend. When they are finished, they can press ENTER to return to the menu.

1. $2 + b = -3$; $b = \underline{\;-5\;}$
2. $-6 + b = -1$; $b = \underline{\;5\;}$
3. $5 + b = 1$; $b = \underline{\;-4\;}$
4. $-2 + b = 4$; $b = \underline{\;6\;}$

Problem 3 – Integer subtraction

The model also provides a way of visualizing the subtraction of integers. As with addition, positive integers are represented by helium bags and negative integers by sand bags. However, subtraction is the operation of taking off a bag. For example, the expression $-2 - 5$ says to start with $2$ sand bags and then take off $5$ helium bags.

Students are to now arrow down to option 3, Integer Subtraction, and press ENTER. Use the model in the same manner as in Problem 1, with one difference: In this model, the right arrow removes a helium bag, and the left arrow removes a sand bag.
The model is the same as before. The balloon is at the top of the screen. Students need to reset the balloon at ground level by pressing the right arrow. Now they can move the balloon to its starting point, $-2$.

To remove $5$ helium bags, press the right arrow $5$ times; the balloon moves down $5$ units to $-7$. So $-2 - 5 = -7$.

Together with a partner, students are to translate the following expressions into "balloon language" and use the model to find the difference. When you are finished, press CLEAR to return to the menu.

1. $2 - 7 = \underline{-5}$
2. $-3 - 1 = \underline{-4}$
3. $5 - (-2) = \underline{7}$
4. $-4 - (-7) = \underline{3}$

Problem 4 – Missing subtrahend

This model shows two balloons side by side—like the model from Problem 2, except that it is used to find a missing subtrahend rather than a missing addend. Students can now arrow down to option 4, Missing Subtrahend, and press ENTER.

For example, find the value of $b$ such that $-3 - b = 8$. The difference $8$ places the target balloon at $8$; enter $-3$ with $3$ sand bags and then figure out which bags to remove to have a resulting position of $8$.

Students need to input the value for $A$ and press ENTER. Then input the difference and press ENTER. Use the model in the same manner as in Problem 2, with one difference: In this model, the right arrow removes a helium bag, and the left arrow removes a sand bag. The balloon is at the top of the screen, and the target balloon to the right represents the difference. Students need to first reset the balloon at ground level by pressing the right arrow. Then they can move the balloon to its starting position, $-3$.

The target balloon is above their balloon's position, so they need to remove sand bags. Press the left arrow to remove sand bags until the balloons line up. Remind students to count as they press to find how many sand bags they removed. For this example, students should find that $b = -11$.

Together with a partner, students are to translate the following equations into "balloon language" and use the model to find the missing subtrahend. When they are finished, they can press ENTER to return to the menu.

1. $6 - b = 9$; $b = \underline{\;-3\;}$
2.
$5 - b = -3$; $b = \underline{\;8\;}$
3. $-4 - b = -1$; $b = \underline{\;-3\;}$
4. $-2 - b = 6$; $b = \underline{\;-8\;}$

Problem 5 (Extension) – Addition and subtraction exploration

Students are to arrow down to option 5, Addition and Subtraction, and press ENTER. The balloon on the left is for addition and the one on the right is for subtraction. Students can use the up and down arrows to move between the two models.

For each of the following expressions, students are to use what they've learned from Problems 1 and 3 to translate into "balloon language" and then find each sum or difference.

1. $-2 - 4 = \underline{\;\;\;\;\;\;}$
2. $-2 + (-4) = \underline{\;\;\;\;\;\;}$
3. $5 - (-6) = \underline{\;\;\;\;\;\;}$
4. $5 + 6 = \underline{\;\;\;\;\;\;}$

For each of the following equations, students use what they've learned from Problems 2 and 4 to translate into "balloon language" and then find each missing addend or subtrahend.

5. $-1 - b = 5$; $b = \underline{\;\;\;\;\;\;}$
6. $-1 + b = 5$; $b = \underline{\;\;\;\;\;\;}$
7. $3 - b = -4$; $b = \underline{\;\;\;\;\;\;}$
8. $3 + b = -4$; $b = \underline{\;\;\;\;\;\;}$

Now students are to complete the following statements.

9. Taking off $8$ sand bags is the same as adding $8$ $\underline{\;\;\;\;\;\;}$ bags.
10. Taking off $5$ helium bags is the same as adding $5$ $\underline{\;\;\;\;\;\;}$ bags.
11. If $a$ and $b$ are numbers, then $a - b = a + \underline{\;\;\;\;\;\;}$.

To exit the program, students can arrow down to option $6$ and press ENTER.

Solutions

1. $-6$
2. $-6$
3. $11$
4. $11$
5. $-6$
6. $6$
7. $7$
8. $-7$
9. helium
10. sand
11. $-b$
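The bag model itself is tiny arithmetic. A Python sketch (hypothetical, written for this activity rather than taken from it) where each helium bag counts $+1$ and each sand bag $-1$ shows how statement 11 — removing bags is the same as adding the opposite bags — falls out of the model:

```python
class Balloon:
    """Hot-air-balloon integer model: helium bag = +1, sand bag = -1."""

    def __init__(self):
        self.position = 0          # start at ground level

    def add_bags(self, n):
        """Addition: put bags on (n > 0 helium bags, n < 0 sand bags)."""
        self.position += n
        return self.position

    def remove_bags(self, n):
        """Subtraction: take bags off, which is adding the opposite bags."""
        self.position -= n
        return self.position
```

For Problem 1, add_bags(2) followed by add_bags(-6) leaves the balloon at -4; for Problem 3, starting at -2, remove_bags(5) lands at -7.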
Help again...Math!!
Number of results: 223,030

geometry again again again again
I know you've helped me alot but is there anyway that you can help me understand two more problems. Geometry andGeometry again again again again
Sunday, September 27, 2009 at 2:40pm by Monique N.

geometry again again again again
this is 10th grade math. Im not very smart just to let you know. But thank you anyways
Sunday, September 27, 2009 at 2:40pm by Monique N.

geometry again again again again
Thank you but I just don't understand the whole equation thing tho.
Sunday, September 27, 2009 at 2:40pm by Monique N.

geometry again again again again
If angle AFB= 8x-6 and angle BFC= 14x +8, find the value of x so that angle AFC is a right angle.
Sunday, September 27, 2009 at 2:40pm by Monique N.

geometry again again again
please go to this website called angles
Sunday, September 27, 2009 at 2:36pm by anna

(58.7 * 10^16)/(9.3 * 10^4) = ? seconds. To convert to hours, divide by 3600. For days, divide again by 24. For years, divide again by 365. For centuries, divide again by 100.
Tuesday, November 19, 2013 at 11:21pm by PsyDAG

Who is correct? (chemistry issues here)
Okay then. Apparently from your statement, it seems as if you feel I will bring it up again and again and again. I will just mute myself.
Tuesday, May 6, 2008 at 10:00pm by ~christina~

geometry again again again again
What grade level is this geometry ? Doing these kind of questions without knowing how to solve basic equations would be like learning how to play hockey without knowing how to skate.
Sunday, September 27, 2009 at 2:40pm by Reiny

Hello. Please help me with a few questions. 1) Which is the correct position of "again" in the sentence: "They have (again)gathered (again)at the square (again)"? 2)Does the phrase "the allegations doubt his integrity" have the same meaning as "the allegations question his ...
Sunday, May 15, 2011 at 4:11am by Ilma

Please don't post the same question again and again. Check to see if it has been answered first. It has!
Tuesday, December 13, 2011 at 8:15pm by Reiny

geometry again again again again
right angle means 90º so 8x-6 + 14x +8 = 90, 22x = 90 + 6 - 8, 22x = 88, x = 4
Sunday, September 27, 2009 at 2:40pm by Reiny

geometry again again again
what is the website called.
Sunday, September 27, 2009 at 2:36pm by Monique N.

Then read it again ... and again and again, if you need to. It's not 4.
Wednesday, April 24, 2013 at 6:54pm by Writeacher

math $$ again
sorry did it again
Thursday, February 21, 2013 at 5:52pm by Angie

ms.sue 5 grade math
2-A 4-B 5-B 7-C KNOW YOU CAN CHECK MY SON ANSWER AGAIN WHICH YOU SAID WRONG HE FIX IF AGAIN I HOPE THIS TIME IT WILL ALL RIGHT THANK YOU
Tuesday, March 27, 2012 at 7:28pm by dw

Once again, it is true again and again: Momentum before = momentum after if no EXTERNAL force on system, like here: 0 = 5 v + 3 * 3, v = -9/5 = -1.8
Monday, February 3, 2014 at 1:58pm by Damon

I am very sorry but I was wrong with the problem. The math problem is: 9 / (3.6-2.1) Thank you Anonymus for your answer but I want the problem that I wrote above. If you help me again pleaseeeeeeeee. I don't know how to resolve this. Thanks again.
Thursday, August 25, 2011 at 12:22am by Veronica

geometry again again again
do you happen to know how to do geometry again again???
Sunday, September 27, 2009 at 2:36pm by Monique N.

geometry again again again
Thank You So Much!!!!!!!!!!!!!!!!!!!!!
Sunday, September 27, 2009 at 2:36pm by Monique N.

Summer School Calculus
Another one for you :P, add these two vectors using trigonometry (again)... 9N[S2W] and 11N[N31W]...Again, I am confused about the angles, I am not sure what value I should use for the cosine and sine law. THANKS AGAIN!!
Sunday, June 29, 2008 at 7:03pm by Derek

Ok. Thank you again! :D
Saturday, September 12, 2009 at 12:34pm by Cecilia

(Again) 1. Can we meet again one day? 2. Can we meet again oneday? 3. Can we meet again some day? 4. Can we meet again someday? (Which ones are grammatical? Which one is commonly used? Thank you. Have a good
Wednesday, December 14, 2011 at 4:35am by rfvv

How do I solve this equivalent equation? 5x +2 < 17 (again the <is underlined) Do I subtract 2 from each side so it would read 5x < 15 (again the < is underlined) than divide each side by 5 so the x < 3 (again the <is underlined) is the answer? Also, how do I...
Wednesday, April 9, 2008 at 10:17pm by Tyler

geometry again again again
Two angles are supplementary. one angle measures 12 degrees more than the other. Find the measures of the angles.
Sunday, September 27, 2009 at 2:36pm by Monique N.

6th grade math
if its a percent its always over 100 so 40/100 simp. again 4/10 simp. again 2/5
Tuesday, May 4, 2010 at 10:29am by Lilly

I snap my fingers once. I wait one second and snap them again. I wait two seconds and snap them a third time. I wait four seconds and snap them again. I wait 8 seconds and snap them again. If this pattern continues, how many times will I snap my fingers in a year? Thanks!
Wednesday, February 4, 2009 at 8:17pm by Meg

7th Grade Math Ms. Sue!
OMG THX SOOO MUCH REINY lol. However I will make sure to do it all myself so that I'm not cheating but will look over my whole lesson again THX AGAIN~!!!
Friday, April 19, 2013 at 9:02pm by Gabby

math again
please help me and thank you. the product of two whole numbers is 48 and their sum is less than or equal to 20.what are all the possiblities for the two whole number? again thank you for helping me
Thursday, November 13, 2008 at 10:34pm by i dont understand!

Math - please double check me
Only your second answer is correct. For the first one, you're asked to estimate. Hint: 4 * 3 = 12 Please try again, and we'll be glad to check them again.
Friday, March 28, 2008 at 12:13pm by Ms. Sue

I answered this question previously. However, here it is again. $500 + .04x = $400 + .05x. x will give you the amount she has to sell to get the same income. The better offer depends on how good a salesperson she is, how much she expects to sell. I hope this helps again. Thanks...
Monday, February 18, 2008 at 8:57pm by PsyDAG

Solve the system by graphing by any method of your choice. 3x-y=1 or y=3x-1, 3x-y=2 or y=3x-2. (0,-1),(2, 5)and(0,-2),(-2,4) Solution is Consistent, lines intersect at (-1/4, -5/4). Above is my answer but I keep coming back to this again and again. Am I missing something? I solved ...
Sunday, May 20, 2007 at 9:36pm by Amanda

Probably shouldn't tell you this, but there is a "magic" fraction button on your calculator that does all your operations in fraction form. Look for a key labeled: a b/c. To enter 57/18: 57 a b/c 18 = see what you get? Now press 2ndF a b/c and 2ndF a b/c again, WOW notice the ...
Friday, December 13, 2013 at 5:31pm by Reiny

Commas
... again and again. 2. watchfully 3. unwilling (no -ly on that word)
Wednesday, May 4, 2011 at 6:28pm by Writeacher

social studies
thanks again i'm definitely coming here again!
Thursday, September 13, 2007 at 9:04pm by robin

English
plz help me . Thanks thank thanks . U help me alot. I am working on it ,can I submit it again ? thanks again.
Thursday, August 19, 2010 at 9:12am by shan

English grammer
thank you again i answered them again and got 80%
Tuesday, July 5, 2011 at 4:44pm by Eddie

thanks
sorry i posted it again when i didnt get an answer for a long time thanks again
Wednesday, December 10, 2008 at 7:05pm by lyne

macbeth again
when is the quote where the witches say beware macduff...etc can you help?? thankyou
Sunday, February 1, 2009 at 11:02am by Lucy

again
Again, please try the posts above this before you again ask questions you can not answer.
Tuesday, June 3, 2008 at 9:35am by SraJMcGin

Computers/ Bussiness
Thank you so much,I knew I was going in the wrong direction. I will ask again for help, you are so quick. Thanks again
Friday, May 14, 2010 at 4:34pm by Mike

They are arguing about rules and regulations again. I have to identify the adverb.
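One of the snippets above (the finger-snapping question) is a doubling-sum exercise that is easy to check by brute force. A quick sketch (Python; assuming a 365-day year, as the question implies):

```python
def count_snaps(total_seconds):
    """Snap at t = 0, then wait 1, 2, 4, ... seconds between snaps.
    Return how many snaps fit within total_seconds."""
    snaps, elapsed, wait = 1, 0, 1
    while elapsed + wait <= total_seconds:
        elapsed += wait
        snaps += 1
        wait *= 2
    return snaps

SECONDS_PER_YEAR = 365 * 24 * 3600     # 31,536,000
print(count_snaps(SECONDS_PER_YEAR))   # 25 -- the waits sum to 2**24 - 1 seconds
```

The doubling waits mean the total elapsed time after n snaps is 2**(n-1) - 1 seconds, so the answer grows only by one snap even if the year were twice as long.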
Check to see if has been answered first. It has! Tuesday, December 13, 2011 at 8:15pm by Reiny geometry again again again again right angle means 90º so 8x-6 + 14x +8 = 90 22x = 90 + 6 - 8 22x = 88 x = 4 Sunday, September 27, 2009 at 2:40pm by Reiny geometry again again again what is the website called. Sunday, September 27, 2009 at 2:36pm by Monique N. Then read it again ... and again and again, if you need to. It's not 4. Wednesday, April 24, 2013 at 6:54pm by Writeacher math $$ again sorry did it again Thursday, February 21, 2013 at 5:52pm by Angie ms.sue 5 grade math 2-A 4-B 5-B 7-C KNOW YOU CAN CHECK MY SON ANSWER AGAIN WHICH YOU SAID WRONG HE FIX IF AGAIN I HOPE THIS TIME IT WILL ALL RIGHT THANK YOU Tuesday, March 27, 2012 at 7:28pm by dw Once again, it is true again and again: Momentum before = momentum after if no EXTERNAL force on system, like here: 0 = 5 v + 3 * 3 v = -9/5 = -1.8 Monday, February 3, 2014 at 1:58pm by Damon I am very sorry but I was wrong with the problem. The math problem is: 9 / (3.6-2.1) Thank you Anonymus for your answer but I want the problem that I wrote above. If you help me again pleaseeeeeeeee. I don't know how to resolve this. Thanks again. Thursday, August 25, 2011 at 12:22am by Veronica geometry again again again do you happen to know how to do geometry again again??? Sunday, September 27, 2009 at 2:36pm by Monique N. geometry again again again Thank You So Much!!!!!!!!!!!!!!!!!!!!! Sunday, September 27, 2009 at 2:36pm by Monique N. Summer School Calculus Another one for you :P, add these two vectors using trigonometry (again)... 9N[S2W] and 11N[N31W]...Again, I am confused about the angles, I am not sure what value I should use for the cosine and sine law. THANKS AGAIN!! Sunday, June 29, 2008 at 7:03pm by Derek Ok. Thank you again! :D Saturday, September 12, 2009 at 12:34pm by Cecilia (Again) 1. Can we meet again one day? 2. Can we meet again oneday? 3. Can we meet again some day? 4. Can we meet again someday? 
(Which ones are grammatical? Which one is commonly used? Thank you. Have a good Wednesday, December 14, 2011 at 4:35am by rfvv
How do I solve this equivalent equation? 5x + 2 < 17 (again, the < is underlined) Do I subtract 2 from each side so it would read 5x < 15 (again, the < is underlined) then divide each side by 5 so the x < 3 (again, the < is underlined) is the answer? Also, how do I... Wednesday, April 9, 2008 at 10:17pm by Tyler
geometry again again again Two angles are supplementary. One angle measures 12 degrees more than the other. Find the measures of the angles. Sunday, September 27, 2009 at 2:36pm by Monique N.
6th grade math if it's a percent it's always over 100 so 40/100 simp. again 4/10 simp. again 2/5 Tuesday, May 4, 2010 at 10:29am by Lilly
I snap my fingers once. I wait one second and snap them again. I wait two seconds and snap them a third time. I wait four seconds and snap them again. I wait 8 seconds and snap them again. If this pattern continues, how many times will I snap my fingers in a year? Thanks! Wednesday, February 4, 2009 at 8:17pm by Meg
7th Grade Math Ms. Sue! OMG THX SOOO MUCH REINY lol. However I will make sure to do it all myself so that I'm not cheating but will look over my whole lesson again THX AGAIN~!!! Friday, April 19, 2013 at 9:02pm by Gabby
math again please help me and thank you. the product of two whole numbers is 48 and their sum is less than or equal to 20. what are all the possibilities for the two whole numbers? again thank you for helping me Thursday, November 13, 2008 at 10:34pm by i dont understand!
Math - please double check me Only your second answer is correct. For the first one, you're asked to estimate. Hint: 4 * 3 = 12 Please try again, and we'll be glad to check them again. Friday, March 28, 2008 at 12:13pm by Ms. Sue
I answered this question previously. However, here it is again. $500 + .04x = $400 + .05x x will give you the amount she has to sell to get the same income.
The better offer depends on how good a salesperson she is, how much she expects to sell. I hope this helps again. Thanks... Monday, February 18, 2008 at 8:57pm by PsyDAG
Solve the system by graphing by any method of your choice. 3x-y=1 or y=3x-1 3x-y=2 y=3x-2 (0,-1), (2,5) and (0,-2), (-2,4) Solution is Consistent, lines intersect at (-1/4, -5/4). Above is my answer but I keep coming back to this again and again. Am I missing something? I solved ... Sunday, May 20, 2007 at 9:36pm by Amanda
Probably shouldn't tell you this, but there is a "magic" fraction button on your calculator that does all your operations in fraction form. Look for a key labeled: a b/c. To enter 57/18: 57 a b/c 18 = see what you get? Now press 2ndF a b/c and 2ndF a b/c again, WOW notice the ... Friday, December 13, 2013 at 5:31pm by Reiny
Commas ... again and again. 2. watchfully 3. unwilling (no -ly on that word) Wednesday, May 4, 2011 at 6:28pm by Writeacher
social studies thanks again i'm definitely coming here again! Thursday, September 13, 2007 at 9:04pm by robin
English plz help me. Thanks thank thanks. U help me alot. I am working on it, can I submit it again? thanks again. Thursday, August 19, 2010 at 9:12am by shan
English grammer thank you again i answered them again and got 80% Tuesday, July 5, 2011 at 4:44pm by Eddie
thanks sorry i posted it again when i didnt get an answer for a long time thanks again Wednesday, December 10, 2008 at 7:05pm by lyne
macbeth again when is the quote where the witches say beware macduff...etc can you help?? thank you Sunday, February 1, 2009 at 11:02am by Lucy
again Again, please try the posts above this before you again ask questions you cannot answer. Tuesday, June 3, 2008 at 9:35am by SraJMcGin
Computers/ Bussiness Thank you so much, I knew I was going in the wrong direction. I will ask again for help, you are so quick. Thanks again Friday, May 14, 2010 at 4:34pm by Mike
They are arguing about rules and regulations again. I have to identify the adverb.
again Wednesday, November 3, 2010 at 10:14pm by Jake
Re, meaning again, is the prefix. Literally, represent means to present again. Friday, January 21, 2011 at 4:06pm by Ms. Sue
Yes, there's a common denominator. Please look again and again until you find it. Thursday, February 28, 2013 at 4:44pm by Ms. Sue
yes...i copied it correctly..well i'll try again..thanks again Thursday, April 18, 2013 at 1:18pm by jessica
geometry again again again again exactly the same idea as http://www.jiskha.com/display.cgi?id=1254075681 Sunday, September 27, 2009 at 2:40pm by Reiny
Thank you for using the Jiskha Homework Help Forum again and again, I agree totally with you! Tuesday, October 23, 2007 at 4:05pm by SraJMcGin
geometry again again Thank You so much. You made it a little bit easier to understand. I really appreciate it. Sunday, September 27, 2009 at 2:35pm by Monique N.
i tried again and i got a again is there anyway you can help me out it says it means a straight line. thanks Wednesday, March 31, 2010 at 8:00pm by henry
social studies Read the entire chapter (or section) again and again until you find it. Tuesday, December 13, 2011 at 5:55pm by Ms. Sue
English 7 - Journal Entry Assignment Check Ok then I read it again revise and edit it then I'll post it again. Thursday, January 12, 2012 at 5:25pm by Laruen
Once again - You did not scroll down to see my full reply Here it comes again Sunday, September 23, 2012 at 7:00pm by Damon
So, yet again, I am confused! State whether or not the following statements are true. Justify your reasoning. a) a • (b + c) = a • b + a • c b) a × (b + c) = a × b + a × c c) a × (b • c) = a × b • a × c Thanks, again! Wednesday, July 2, 2008 at 10:58pm by Derek
calculus hs Here's the question again How do I find all lines through (6, -1) for which the product of the x- & y-intercepts is 3? Thanks again Sunday, September 21, 2008 at 11:03pm by kelly
Me again :) Why might a car manufacturer change the shape of the side mirrors on a particular model?
Thanks again. Monday, January 11, 2010 at 7:42pm by 1234
This is not my field -- but I think all except the last answer are wrong. Check your book again -- and again. Thursday, November 7, 2013 at 8:16pm by Ms. Sue
More Algebra On A I did 113 + x = 225 x = 225 - 113 x = 112 Thanks so much Ms. Sue. I would have failed this math assignment if it weren't for you! I will most likely come back here whenever I have issues again. Thanks again! Sunday, October 2, 2011 at 5:22pm by Jab
Grade 12 Math-Urgent! Looking at this again, I am sure that mine is right. Mine is the same as yours. The only difference is the variables, and I also didn't type out the last part of my solution where the 10 hours is placed over t...I was taught to use n where you use t. I am using B(t) because I ... Wednesday, January 9, 2008 at 9:34pm by Math genius! Urgent!
Oh thank you! It wasn't easy but once I figured it out life became fabulous again! :) Thanks again! Have a great night! Tuesday, January 20, 2009 at 8:25pm by Lauren
We might say that a person who reads the same newspaper again and again is demented or has Alzheimer's disease. Friday, May 28, 2010 at 9:25pm by Ms. Sue
i haveeee!! ive posted the right question again please look it up again if you have time :( Wednesday, February 15, 2012 at 5:01pm by beavis
For all tutors Thanks, I will try posting again. I understand that you are not here to do our homework - but will try to answer any questions that we have in trying to solve the problem. Thanks for the quick response to my question about being Canadian. I will try again if I come across any ... Thursday, October 18, 2007 at 8:50pm by Karen
vocab, again! sorry! Hi, again. I need help with this word decorum. the def is ADJ proper; seemly correct how do i use this in a sentence? Monday, October 6, 2008 at 5:39pm by Kristin
NO, NO, NO! Review the Subjunctive again = reason 1 (judgment makes it still imposition of will = It's better, important, etc.) Sra Choose again.
Friday, June 12, 2009 at 1:12pm by SraJMcGin
OK. Thank you. If the father reads the same newspaper again and again, what expressions do we have to use? If he has such a strange habit, what do we say? Friday, May 28, 2010 at 9:25pm by rfvv
A man comes up to play a game. You flip a coin: if heads, you win $1 and keep playing; if heads again, you get $2; if you get heads again, $4, and so on. If you flip tails you take your money and the game is over. How much should he be willing to pay? Tuesday, August 10, 2010 at 3:06pm by Tara
The Age Of Jefferson To The American Expansion You have only one honest and real solution. Go back and study your book again. Then study it again. That's the only way you are going to learn this material. After you've studied again, please post your answers. Do not post pure guesses. Then I'll be glad to check your answers. Friday, December 7, 2012 at 5:36pm by Ms. Sue
art - again! sorry ive used that bit but am struggling what else to write after this im not very good at this type of work. thanks for your help again Tuesday, December 30, 2008 at 10:08am by jasmine
Thank you so much. Sorry if you thought I was being impatient. I have no clue how I managed to do that. I will be careful to make sure it doesn't happen again. Thanks again. Friday, December 24, 2010 at 6:19pm by Kimmie
After you study the paragraphs carefully, go back and answer these questions again. If you repost them, I'll be glad to check them again. Sunday, April 3, 2011 at 5:45pm by Ms. Sue
die again and again Tuesday, May 31, 2011 at 9:12pm by hgtug
how many 5-digit palindromes exist? i get 9*10*10*10*9=81000 is this correct? my thinking being that the first and last digit must be 1-9 and the other 3 0-9. thanks again for the time and effort. oh, and if any of you fine folks have the time could you browse down to my other... Saturday, October 25, 2008 at 6:51pm by courtney's dad
Math(Please Check Again) (25-x^2)/12 * (6x^2)/(2x) *I got: 5-x/2 No. Try again.
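The 5-digit palindrome question just above can be settled by brute force. The correct count is 9 * 10 * 10 = 900 (the first digit can be 1-9, the second and third 0-9, and the fourth and fifth are then forced to mirror the first two), not 81,000. A short sketch confirming this:

```python
# Count 5-digit palindromes (numbers like 12321 that read the same
# forwards and backwards) by checking every 5-digit number.
count = sum(1 for n in range(10000, 100000) if str(n) == str(n)[::-1])
print(count)  # 900
```

The multiplication-principle answer 9 * 10 * 10 agrees: only the first three digits are free choices.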
Can you tell me what I did wrong please? You have to factor the numerator, you should have two factors for the first, and one for the second. In the denominator, you will have 24x. I think the x divides out, and ... Wednesday, February 21, 2007 at 7:17pm by Margie
Math - Trig - Double Angles Okay? But, I still don't get what you did in: 2cos^2(2x) - 1 Double-angle again. 2(cos2x * cos2x) - 1 Double-angle again. Saturday, November 17, 2007 at 6:17pm by Anonymous
Gr.11 Math If you call the five senior managers as the first level, then in the second level there would be 15 and in the third level there would be 45 so you want the 7th term in this sequence, not the sum of 7 terms (because you would be counting the same people again and again) so a=5... Monday, January 5, 2009 at 11:43pm by Reiny
Math again Thanks Ms Sue .. another question built on the last ... i figured 143 miles per year .. then next question, .4 miles per day ... now i have to figure out how many feet per day .. how do i calculate this? thank you again ... Thursday, November 29, 2007 at 9:42pm by Emily
Math - Trig - Double Angles For the following lines, 2cos^2(2x) - 1 Double-angle again. 2(cos2x * cos2x) - 1 Double-angle again. I don't get how you got the second line from the first line... Saturday, November 17, 2007 at 6:17pm by Anonymous
College Critical Reading Thank you so much Ms. Sue! I will definitely check my text again but these are some of the questions that I couldn't find information in my text about. Or if I did find info. it didn't really help me with the particular question. But I will definitely check again ;-) Thanks ... Tuesday, October 26, 2010 at 10:43am by Jessie
Suppose you toss a coin and you win a dollar if it comes up heads. If it comes up tails you toss the coin again. This time you get two dollars if it is heads. If it is tails you toss it again. This time you win four dollars if it is heads, but if it is tails you toss it ...
Friday, October 31, 2008 at 7:42pm by brianne
a) how about C(7,2) = 21 b) You mean only the prof shakes hands? And he does it again and again?? Silly question, anyway.... In one specific sequence there are 7 handshakes if the prof shakes hands with each of the 7 members. Now if we "arrange" that sequence in all its ... Sunday, October 6, 2013 at 11:50pm by Reiny
Another question states: "Solve: sin(35)" Again, same idea, how should I take this question? Should I just convert it into radians or should I just enter it into a calculator? The way they word these review questions is confusing to me. They should really be more specific... Tuesday, September 11, 2012 at 10:57pm by Mary
French for Anonymous to Sra its me again could u please tell me how to say by: to: due: I need it for my title page thanks again:) Sunday, January 11, 2009 at 6:53pm by Anonymous
whats that checking for answer again nd again nd no one has yet replied Tuesday, June 4, 2013 at 9:28am by keshav
Health (again) Hi! It's me again! Here is the question: How many teenagers smoke in ontario? (as recent year as you can find) 20.7 teenagers Thank you Sam!! :) Thursday, December 7, 2006 at 1:50pm by Caley
Ok, Thank You. I will revise and post again. One question though. Do you think that this is an 8th grade level paper? Thank You Again, Walker Tuesday, March 18, 2008 at 12:27pm by Walker
1. Wilma was able to walk again at age twelve. 2. Wilma was able to walk again at the age of twelve. 3. Wilma was able to walk again at twelve. 4. Wilma was able to walk again aged twelve. (Which ones are correct? Could we use other expressions?) Sunday, March 13, 2011 at 9:41pm by rfvv
Social studies Since you've posted this again after you received an answer, I assume that you are not able to demonstrate an understanding of movement and acculturation. Please go back and study these again. Monday, September 23, 2013 at 2:37pm by Ms.
Sue
hi again why is a mentally retarded person not like a normal person is what i meant thanks again Sunday, March 1, 2009 at 12:20pm by lila
please help me solve this riddle. "you've heard me before, yet you hear me again. Then I die, Til' you call me again." Who wrote this riddle? Monday, November 5, 2007 at 6:53pm by kali
v = 0 + a*t 98 = 12.3 * t solve for t v = 0 + a*t again x = X0 + Vo t + .5 a t^2 .8 = 0 + 0 + (.5)(6.8*10^4) t^2 t^2 = 2.35 * 10^-5 t = .00485 seconds in barrel then use v = 0 + a*t again Friday, January 4, 2008 at 6:57am by Damon
Physics-bobpursley can u check this again please Its wrong again. is the value of d same as x??? I keep getting the wrong answers for both part a and b. Thursday, October 28, 2010 at 10:54pm by Lyn
English 1 1. look at the last "their" 3. Look at the last answer again. 4,5,6,7,8,9,10. check again Read the sentences carefully to see what the pronoun actually refers to. Tuesday, January 4, 2011 at 11:49am by GuruBlue
Again and again ... you need to put proper punctuation at the ends of sentences -- usually periods, but sometimes question marks or exclamation marks. Wednesday, January 20, 2010 at 9:42am by Writeacher
try x = 1 yes that works so divide by (x-1) (x-1)(3x^3-13x^2+18x-8) try x=1 again yes that works so divide by (x-1) again (x-1)(x-1)(3x^2-10x+8) factor that quadratic (x-1)(x-1)(3x-4)(x-2) so x = 1, 1, 2, 4/3 Thursday, July 1, 2010 at 7:04pm by Damon
Substitute a word beginning with re- for each of the underlined phrases. 1) The patient was (admitted again) after.. 2) I would like you to (copy this assignment) in.... 3) When the CD player broke I asked for my money (back again) Thursday, March 1, 2012 at 10:45pm by flower
Algebra 1 Please bobpursley again Thanks to everyone for trying to help me. This is not really a question but an answer to what grade level this problem is for; my son is in ninth grade.
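The synthetic-division answer above factors a quartic as (x-1)(x-1)(3x-4)(x-2) with roots 1, 1, 2, 4/3. Expanding the first division step, the polynomial being factored works out to 3x^4 - 16x^3 + 31x^2 - 26x + 8 (my reconstruction from the steps shown). A quick check of the claimed roots, using exact rational arithmetic so 4/3 does not suffer float round-off:

```python
from fractions import Fraction

def p(x):
    # 3x^4 - 16x^3 + 31x^2 - 26x + 8, reconstructed by expanding
    # (x - 1)(3x^3 - 13x^2 + 18x - 8) from the division above
    return 3*x**4 - 16*x**3 + 31*x**2 - 26*x + 8

for root in (1, 2, Fraction(4, 3)):
    print(root, p(root))  # each evaluates to 0
```

All three distinct roots evaluate to zero, confirming the factoring (x = 1 is a double root).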
Thank You all again Saturday, September 12, 2009 at 11:27pm by Mary
3x=-12y+15 and x + 4y=5 These two equations are the same. There is a y for any x. Y=6x+2 and 3y-18x=12 3y = 18x + 12 is y = 6x + 4 These are parallel lines that never intersect, no solution x-2y=6 and 3x - 6y = 18 again divide equation 2 by 3 and get x - 2y = 6 the two ... Monday, January 13, 2014 at 6:51pm by Damon
Carlos has the slow to go hiccups. When they started, he hiccuped after 1 minute had elapsed, then again after 2 minutes, again after 4 minutes, next after 8 minutes, and so on. How many total hiccups did he hiccup in the month of April if they began 12 midnight, April 1st? Tuesday, April 8, 2008 at 5:24pm by Peter
Good story! In the sentence: my hands: when I fell = not a colon but a semicolon = my hands; when I fell Either what you have or what is in parentheses about falling and the snowy bush A morning = That morning... maybe for a hole, = because of a hole, etc. it drove off the ... Thursday, January 28, 2010 at 12:01pm by SraJMcGin
Ethos and Pathos Watch it again and again until you decide on details that you could add. https://www.facebook.com/video/video.php?v=176110229125188 Wednesday, March 20, 2013 at 11:06am by Ms. Sue
My assignment is about reading "Sociology as an individual pastime" (Peter L. Berger) and summarizing it. I have read it again and again many times, but I still don't understand it clearly. Please help me Wednesday, September 22, 2010 at 11:08pm by Kimberly
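The coin-doubling game asked about twice in the results above (win $1 on the first heads, with the prize doubling on each further heads, and walk away with your winnings on tails) is the classic St. Petersburg setup. A Monte Carlo sketch of the average payout; the function name, seed, and the reading that tails ends the game with the money accumulated so far are my assumptions, since the posts are slightly ambiguous:

```python
import random

def play(rng):
    # One round: each heads adds the current prize (1, 2, 4, ...)
    # and doubles the next one; on tails you walk away with the total.
    total, prize = 0, 1
    while rng.random() < 0.5:  # heads
        total += prize
        prize *= 2
    return total

rng = random.Random(0)
trials = 100_000
avg = sum(play(rng) for _ in range(trials)) / trials
print(f"average payout over {trials} rounds: ${avg:.2f}")
# The theoretical expectation diverges (the St. Petersburg paradox),
# so the sample average keeps creeping up as the trial count grows.
```

That divergence is why "how much should he be willing to pay?" has no single numeric answer without bringing in utility or a cap on the bankroll.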
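The "slow to go hiccups" question above is a doubling sequence: the gaps are 1, 2, 4, 8, ... minutes, so the k-th hiccup lands at cumulative minute 2^k - 1. Counting how many fit into April (30 days = 43,200 minutes, and assuming the first hiccup at minute 1 counts as hiccup #1) gives 15. A sketch:

```python
APRIL_MINUTES = 30 * 24 * 60  # 43,200 minutes in April

count, elapsed, gap = 0, 0, 1
while elapsed + gap <= APRIL_MINUTES:
    elapsed += gap  # the k-th hiccup occurs at minute 2^k - 1
    gap *= 2
    count += 1
print(count)  # 15 (the 16th hiccup would land at minute 65,535)
```

Since 2^15 - 1 = 32,767 is within the month but 2^16 - 1 = 65,535 is not, the count is 15.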