Daly City Algebra 1 Tutor Find a Daly City Algebra 1 Tutor My name is Henry; I graduated from Michigan State University with a major in Economics. I live in Daly City. I have a car, so I can drive to a student's place, or we could meet somewhere else, which is the option I would prefer. 8 Subjects: including algebra 1, geometry, algebra 2, economics ...I am very effective in helping students to not just get a better grade, but to really understand the subject matter and the reasons why things work the way they do. I do this in a way that is positive, supportive, and also fun. I explain difficult math and science concepts in simple English, and continue working with the students until they understand the concepts really well. 14 Subjects: including algebra 1, calculus, statistics, geometry ...That frustration can turn into a loss of confidence toward math. Simplifying expressions, binomials, powers, factoring, linear equations, and graphing don't have to be difficult topics, but if a student misses a key idea on a certain class day, it can quickly snowball. Many times, all it takes to get back on track is some focused review work. 11 Subjects: including algebra 1, calculus, algebra 2, geometry ...I have tutored many students, both privately and as a high school teacher in the San Francisco Unified School District for over 26 years, in this second-year algebra course. This course builds on algebraic and geometric concepts for success in higher-level math courses. In tutoring algebra, it is... 6 Subjects: including algebra 1, statistics, geometry, prealgebra I believe that the biggest hurdle to overcome with most struggling students is a fear of failure. Let me help your child build the confidence they need to be successful. I'm an Australian high school mathematics and science teacher, with seven years' experience, who has recently moved to the Bay Area because my husband found employment here. 11 Subjects: including algebra 1, chemistry, physics, calculus
{"url":"http://www.purplemath.com/Daly_City_algebra_1_tutors.php","timestamp":"2014-04-21T02:34:02Z","content_type":null,"content_length":"24201","record_id":"<urn:uuid:e9564719-8a37-49d7-8d1b-39b662de8920>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Survey articles written by students in graduate courses in Peking University Xiongwei Cai, Morita equivalence of Lie groupoids Xiaoqing Fan, Clifford Algebra Wenzhe Wei, Deformation Quantization of Symplectic Manifolds Haofei Fan, Clifford Algebra, K-theory and Riemann-Roch Theorem Shaofeng Wang, Topics in Clifford Algebra Xuwen Zhu, Lie Groupoids, Lie Algebroids and Some Examples
{"url":"http://www.math.psu.edu/ping/pku_teaching.html","timestamp":"2014-04-17T21:44:01Z","content_type":null,"content_length":"1445","record_id":"<urn:uuid:789a9292-c95c-4280-bcc2-cd2f9867fb57>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Speed of VectorPlot3D... What is Rendering? Replies: 1 Last Post: Sep 18, 2012 3:53 AM Re: Speed of VectorPlot3D... What is Rendering? Posted: Sep 18, 2012 3:53 AM ....I take Graphics Rendering to mean the process by which the Object is generated, either by means of hardware, software, or both,... ....that is the extent of my knowledge about it... here is the message that Yves Klet sent me..... > http://mathematica.stackexchange.com/a/7509 > and here > http://mathematica.stackexchange.com/a/190 > for starters. The first link shows you the Option Inspector under > Graphics Options -> Rendering Options -> Graphics3DRenderingEngine. Try > to play around with these (no clue if or what settings may apply to your > machine / problem). jerry blimbaum -----Original Message----- From: Alexei Boulbitch Sent: Sunday, September 16, 2012 11:22 PM Subject: Re: Speed of VectorPlot3D... What is Rendering? I asked a question several weeks ago about speeding up 3D Graphics using VectorPlot3D and Mouse Rotation....I never got a direct answer, however, several persons (Yves and Alexei) posted answers to another question that helped to solve mine......for my case and computer .......... I set MaxRecursion->3 and in the Option Inspector under Graphics Options I changed Graphics Rendering to HardwareDepthBuffer...... The increase in speed of rendering really surprised me....and the Graphics object rotated almost instantly........MaxRecursion especially made a difference....if I set it to ten it took forever....at 3 it worked like a charm... thanks to Yves and Alexei...... jerry blimbaum Dear Community, I wonder, what is Rendering? First, how should this word be translated in the Mathematica context? I am not a native English speaker, and looked into a dictionary. A good one. 
It returned me about 300 meanings of the word, none of which I could really correlate with the graphics operations. I looked into the help and found this: "Rendering opens a submenu to control rendering operations," and the details do not help me understand what it does. It is like in one of Lem's stories: "Sepulkys are necessary to make sepulation". "Sepulation is the process using sepulkys" (not sure that my translation is equal to the original one, but the sense is there). So, what does Rendering mean and what does it do? Best regards, Alexei Alexei BOULBITCH, Dr., habil. IEE S.A. Department for Material Development ZAE Weiergewan 11, rue Edmond Reuter L-5326 Contern, Luxembourg Tel. ++352-2454-2566 Fax.: ++352 424737201 mobile: +49 (0) 151 524 066 44 E-mail: alexei.boulbitch@iee.lu
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2403308&messageID=7892008","timestamp":"2014-04-20T13:31:54Z","content_type":null,"content_length":"16944","record_id":"<urn:uuid:7120c214-3a7c-430e-b287-6c2c9a81eeaf>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
material removal rate formula
Posted: Tuesday 29th of January 2013, 04:03:21 AM. Last edited or replied: Tuesday 29th of January 2013, 04:03:21. Request: "i want to calculate MRR and ..." Related search tags: material removal rate formula for wire edm using peak current; wire edm formula; material removal rate in wire edm calculations; how to calculate mrr in wire edm; mrr formula for edm; formula for material removal; direct formula to calculate material removal rate in wire edm; formula to calculate metal removal rate in wire electrical discharge machining; wire edm removal rate; wire edm mrr calculation; mrr formula in wedm; formula for mrr in wire edm; wedm material removal rate formulas; estimating wire edm time formula.
Posted by: suraj shetty. Created: Friday 05th of March 2010, 12:47:08 AM. Last edited or replied: Monday 08th of August 2011, 12:36:26. Request: "hi boss i wanted a report and ppt on abrasive jet machining ..." Related search tags: abrasive jet machining; abrasive process broaching ppt; report on abrasive jet machining; working of abrasive jet machining with diagram; abrasive jet machining pdf paper 2011; abrasive jet machining material removal rate; material removal mechanism in abrasive jet machining ppt; magneto abrasive jet machining; abrasive jet machine nozzle design; mechanism of jet cutting ppt.
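None of the threads above actually state the formula being asked for. For wire EDM, a common textbook approximation is MRR = kerf width × workpiece thickness × cutting speed, where the kerf width is the wire diameter plus twice the spark gap. A minimal sketch in Python (the function name and the example numbers are mine, not from these threads):

```python
def wire_edm_mrr(kerf_width_mm, thickness_mm, cutting_speed_mm_per_min):
    """Volumetric material removal rate in mm^3/min for wire EDM.

    Common textbook approximation:
    MRR = kerf width * workpiece thickness * cutting speed.
    """
    return kerf_width_mm * thickness_mm * cutting_speed_mm_per_min

# Example: 0.3 mm kerf, 20 mm thick plate, cutting at 2 mm/min
print(wire_edm_mrr(0.3, 20.0, 2.0))  # -> 12.0 (mm^3/min)
```

Formulas tied to peak current (as in the first thread's title) are machine- and material-specific calibrations; this geometric version is only the kinematic baseline.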
{"url":"http://seminarprojects.net/c/material-removal-rate-formula","timestamp":"2014-04-20T08:17:46Z","content_type":null,"content_length":"21813","record_id":"<urn:uuid:d6c0e8b1-bd15-46d5-9b6a-036de6a9c1cd>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
What you think of duels one on one? 02-20-2009, 08:41 PM joão paulo What you think of duels one on one? How about we make duels one on one? Each duel would last a week, and at the end there is a vote to see which of the two is best. 02-20-2009, 11:34 PM OK, how about me against you, JP?! :P Of course, that wouldn't be fair, who'd decide who duels whom? It could get complicated like a GIMP duel, in which I wouldn't participate, as I'm no GIMPer. You'd almost have to have multiple duels going on, and winning duelers would compete with other winning duelers. It's got possibilities, but could be complicated and could only work with lots of participants. I voted, "Yes", but I don't know how well this would work. Any other opinions on this? If there were enough "new blood" interest, it might work better as a lite challenge as well. 02-21-2009, 12:59 AM Some random quick thoughts: 1. Set up brackets, like in college hoops, but draw names by random for the match-ups (out of a pool of those who volunteer to join in). 2. Start with a base sketch that has to be done up proper. 3. Start with one idea, like a town, or a castle, or something and best map wins. 4. Set limits as to size...like what can you do with a 500 x 500 pixel map? 5. Lite challenges are probably the best place as not too many would want to get bogged down for a full month in a duel only to have to do it again next month. 6. Generate something random, split it down the middle, each person gets one half. 7. Use that rpg city generator to make a city, then let each person try to devise a siege of that town...most plausible strategy wins (by popular vote). Anyways, it's got some possibilities, as does the further idea of a co-operative map contest (like one person does one part and another person does another part) and teams are picked out of a 02-21-2009, 02:04 AM Some random quick thoughts: 1. 
Set up brackets, like in college hoops, but draw names by random for the match-ups (out of a pool of those who volunteer to join in). 2. Start with a base sketch that has to be done up proper. 3. Start with one idea, like a town, or a castle, or something and best map wins. 4. Set limits as to size...like what can you do with a 500 x 500 pixel map? 5. Lite challenges are probably the best place as not too many would want to get bogged down for a full month in a duel only to have to do it again next month. 6. Generate something random, split it down the middle, each person gets one half. 7. Use that rpg city generator to make a city, then let each person try to devise a siege of that town...most plausible strategy wins (by popular vote). Anyways, it's got some possibilities, as does the further idea of a co-operative map contest (like one person does one part and another person does another part) and teams are picked out of a I love the idea of two people starting with a base image, and especially if you give half to each contestant and see how the two join with different styles... Say for example, A would get 1 half of an island and B would get the other half to map. 02-21-2009, 08:13 AM joão paulo Thanks to all for the comments. And good information, Ascencion. Of course this is an idea that needs to be implemented, but the original idea is that one participant challenges another participant to a challenge. 02-21-2009, 09:51 AM Look at my opposing fortresses idea, it's kinda like this. 02-21-2009, 12:22 PM joão paulo Look at my opposing fortresses idea, it's kinda like this. Looks like a good idea too. Let's see what the community thinks, Hoel. 
02-21-2009, 07:56 PM I think the idea of a tree like a footie match league where you all start on the bottom rung and get paired up randomly with another and all duke it out on one mini challenge then winner goes on to next round with a different mini challenge (determined after the previous one had finished) until two remain and duke it out for the win / runner up would be a cool idea. The challenges would have to be quite short so maybe a week per round each say. If we got about 16 or 32 people to do it then it would be easy to set up and take 4 or 5 weeks. If the number is more uneven then we would have to have groups of three instead of two at some points in the tier system but we would know that before it started. Might have to hold off starting the challenge until enough people throw their gauntlets down - at very least need an even number of starters but probably a number divisible by 4 or something like that would be better. It would have to be a really random starting grid tho. And it would be cool if there was a way to set that up so that it was completely automated by some random event which determined the starting grid. Maybe we all choose a number and use random.org's numbers at a certain known point in time to determine the order. If people pull out for any reason then your challenger gets lucky and goes through with a bye. You could start with a list of say 24 people, take one random number, do a modulo 24 on it and select that person as first box. Take next random number modulo 23 and remove that person as box number 2 (i.e. challenger to number 1), keep doing that until all 24 are boxed up into starting pairs. Would be bad to get paired up with known multi challenge winners but at least if it were random you don't feel slighted by the luck of the draw. And at least in this Guild World Cup, if you get paired up with Brazil you might stand a chance to win - heh heh... 
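The random draw described in the post above (take a random number, reduce it modulo the remaining pool size, box that player up, repeat) is easy to automate. A minimal sketch in Python, with hypothetical player names, using the standard random module in place of random.org:

```python
import random

def draw_pairs(names, rng=None):
    """Box players into random starting pairs: repeatedly pick an index
    modulo the remaining pool size and remove that player, as described
    in the thread above."""
    rng = rng or random.Random()
    pool = list(names)
    pairs = []
    while len(pool) >= 2:
        a = pool.pop(rng.randrange(len(pool)))
        b = pool.pop(rng.randrange(len(pool)))
        pairs.append((a, b))
    return pairs, pool  # pool holds a leftover player (a bye) if the count was odd

players = [f"mapper{i}" for i in range(24)]
pairs, byes = draw_pairs(players, random.Random(42))
print(len(pairs), len(byes))  # 12 pairs, no byes for 24 players
```

Seeding the generator (here with 42) makes the grid reproducible, so everyone can verify the draw was not rigged, which is the point of using a public source like random.org.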
02-22-2009, 12:12 AM Equally it could be like a squash ladder - you can challenge anyone above you to a match, and if you win you swap places. Challenge laid down by the challenger, winner decided by popular vote. You can decline a challenge, but you move down one place. 02-22-2009, 08:03 AM I'd prefer an Elo-ladder to a squash ladder.
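The squash-ladder rules just described (you may only challenge someone above you, a win swaps the two places, declining drops you one place) can be sketched as follows; the function name and the exact handling of the decline penalty are my reading of the post, not a spec from the thread:

```python
def resolve_challenge(ladder, challenger, opponent, challenger_won, declined=False):
    """Apply squash-ladder rules to a ranked list (index 0 is the top).

    The challenger must sit below the opponent. A win swaps the two
    players; a declined challenge moves the challenged player down one
    place; a loss leaves the ladder unchanged."""
    ci, oi = ladder.index(challenger), ladder.index(opponent)
    if ci <= oi:
        raise ValueError("you can only challenge someone above you")
    if declined:
        # the player who declined drops one place
        ladder.insert(min(oi + 1, len(ladder) - 1), ladder.pop(oi))
    elif challenger_won:
        ladder[ci], ladder[oi] = ladder[oi], ladder[ci]
    return ladder

print(resolve_challenge(["A", "B", "C"], "C", "A", challenger_won=True))
# -> ['C', 'B', 'A']
```

An Elo ladder, as suggested in the last post, would instead keep a numeric rating per player and update it after every vote, so standings shift gradually rather than by whole-place swaps.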
{"url":"http://www.cartographersguild.com/mapping-challenge-suggestions/4448-what-you-think-duels-one-one-print.html","timestamp":"2014-04-17T09:19:35Z","content_type":null,"content_length":"19654","record_id":"<urn:uuid:d3ad16bb-0757-4ac1-b445-773003b67e4b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
In this course, we will be looking at the general solutions of cosine equations of the form cos θ = k. There is always a value α = cos⁻¹(k), with 0 ≤ α ≤ π, called the principal value. The general expression of these solutions is θ = 2nπ ± α, where n is any integer. This is called the general solution of cos θ = k. If the solution is required in degrees, then the general solution is θ = 360°n ± α. Scholar's Tip: cosine only takes values between −1 and 1, so if k < −1 or k > 1, there is no solution for cos θ = k. Find the general solutions to the following cosine equations. (The equations in the original worked examples appeared as images and are not recoverable; the methods were as follows.) a) We make use of supplementary angles: cos(180° − θ) = −cos θ, so a negative value of k can be rewritten as the cosine of a known angle before applying the general solution. b) When the right-hand side carries a ± sign, we have to consider the two different cases. Case 1 (taking the sign to be positive) and Case 2 (taking the sign to be negative) each give a family of solutions. We hope you have understood the general solutions of cosine. You may want to look at the sine and tangent solutions as well. There is also the law of sines and law of cosines.
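The general solution θ = 2nπ ± α can be checked numerically; a short Python sketch (the function name and the sample range of n are mine):

```python
import math

def cosine_solutions(k, n_values=range(-2, 3)):
    """General solutions of cos(theta) = k: theta = 2*n*pi +/- alpha,
    where alpha = acos(k) is the principal value. Returns [] when
    |k| > 1, since there is no solution in that case."""
    if not -1 <= k <= 1:
        return []
    alpha = math.acos(k)
    return [2 * n * math.pi + s * alpha for n in n_values for s in (+1, -1)]

# every generated angle satisfies the original equation
for theta in cosine_solutions(0.5):
    assert abs(math.cos(theta) - 0.5) < 1e-9
print(cosine_solutions(2))  # -> [] (no solution, since |cos| <= 1)
```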
{"url":"http://www.trigonometry-help.net/cosine.php","timestamp":"2014-04-18T10:33:47Z","content_type":null,"content_length":"9223","record_id":"<urn:uuid:0b395810-a05c-41aa-8b87-330c5ebbfee5>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
San Diego Geometry Tutor ...I have a B.A. from UCLA in Linguistics and Philosophy, and I also minored in Teaching English as a Second Language. I have taken classes in teaching Reading and Writing, Speaking and Listening, and Grammar, in addition to studying the methods of teaching and the sounds and structure of English. ... 16 Subjects: including geometry, reading, writing, English ...I love Spanish, but recognize that not everyone does; I aim to share my enthusiasm without alienating my students. At UCSD I was a tutor for an undergraduate poetry class. I met weekly with a group of eleven students, gave feedback on their work, and led discussions. 15 Subjects: including geometry, English, Spanish, reading ...Having said that, I have a great deal of experience in training people and developing programs in Excel, since I was one of the experts in the firm for that matter. If you need help with an Excel project, I am sure you can benefit from my experience in developing Excel programs. I have the needed knowledg... 9 Subjects: including geometry, Spanish, calculus, algebra 1 ...I am currently in the credential program at Cal State San Marcos working on getting my single subject credential in mathematics. I am currently student teaching at High Tech Middle School in San Marcos. I primarily help my students with math. 19 Subjects: including geometry, writing, algebra 1, ASVAB ...On the science side, I completed multiple levels of physics and chemistry during my college career. I have a natural ability to approach math problems and concepts in a simple and easy to understand manner (which is usually not the case in textbook explanations) and I am great at communicating t... 8 Subjects: including geometry, algebra 1, precalculus, elementary (k-6th)
{"url":"http://www.purplemath.com/San_Diego_geometry_tutors.php","timestamp":"2014-04-19T02:32:11Z","content_type":null,"content_length":"23882","record_id":"<urn:uuid:2a9d00d8-75b6-423e-a241-949cae55fe55>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
OCR for page 52 Modern Interdisciplinary University Statistics Education: Proceedings of a Symposium Discussion ROTHMAN: I would like to hear just a few things about the reward system. JOHN LEHOCZKY: You may not believe this, but in terms of value and cross-disciplinary research, at CMU we actually value applications papers, papers in journals that are not statistics journals, equally with — I would say more than — non-applications papers. And we value those just as much in our promotion, in our year-to-year performance appraisals, which have to do with year-to-year salaries, let alone promotion and tenure. We value those contributions in the same coin as contributions to JASA or the Annals of Statistics or whatever your favorite statistics journals may be. We have those applications papers reviewed. If we ourselves cannot review them, because we cannot always judge such contributions, we have them reviewed by substantive experts to assure ourselves that they are in fact good contributions. We go through that extra step. We do expect our faculty to have excellent credentials in some areas of the core discipline of statistics, whether it be Bayesian statistics or probability theory or time series, or whatever be the classic areas. The person has to have notoriety in some area or set of areas. The individual has to have credentials in statistics. 
And we want that person to be collaborating with other faculty members in the department, because that is a very important way our department works, and to be collaborating with subject matter experts in other departments in the university or outside the university. So we value those applications and non-applications aspects in the same coin. Molly Hahn wondered earlier about mathematics departments, and I do, too. I think that for statistics departments that are within mathematics departments and whose faculty members are evaluated along with their partners within the mathematics department, achieving this will be incredibly difficult to ever pull off. JEAN THIEBAUX: I did not hear John Lehoczky suggest that students with specific other disciplinary backgrounds be recruited to statistics graduate programs. Doing that is a different way of creating cross-disciplinary graduates, rather than retrofitting them. It does not take time from graduate concentration in statistics. Has CMU looked at that possibility? LEHOCZKY: Our original master's program had as a concept that students could have a disciplinary area of their own, whether it be biology or oceanography, whatever the field would happen to be, and would study statistics. Our faculty strongly endorsed that as a concept. There is a unity in feeling that we are very interested in such students. But I think it is simply a failure of the recruiting process that we are not seeing those students. We are just not getting the applicants to have the opportunity to bring them into the program. And I think the failure is ours; it is a marketing question. But I agree wholeheartedly with the spirit. EDDY: I am very interested in Joan Garfield's references to research in using collaborative work as a way to improve teaching. 
I am starting to teach a course in probability theory, and I want to try and use some collaborative learning methods that I have never tried before. JOAN GARFIELD: A paper of mine that just appeared in the new, first issue of the Journal of Statistics Education, an electronic journal available on gopher, is specifically on using cooperative learning in teaching statistics. Concerning the research that says this is a more effective way of learning, some of my colleagues at the University of Minnesota, David and Roger Johnson, have put together a huge literature review [see p. 46, above]; I believe they cite over 250 articles that have shown that students do tend to learn better, that is, achievement seems to be higher, when they work in groups. ANTONIAK: We technical types tend to be more impressed personally with these computer-generated animated demonstrations of principles. We tend to think that the right way to get a concept across is to find a new, good way of demonstrating something, by being colorful. What comes to my mind is Tom Apostol's work with Project Mathematics in which, for example, there are very neat demonstrations of the theorem of Pythagoras. Joan Garfield's presentation focused mainly on the methodology, the dynamics, the learning environment, the interpersonal dynamics between the students and the teacher. Have any studies been made on whether the cleverness of the demonstration is 30 percent, or 50 percent or whatever, of what is required? Or are you really saying that the biggest problem is in recognizing a different way to go about teaching any kind of material of this type? GARFIELD: I am not sure I understood the question. I first thought you were going to ask me if there was research on the use of computer graphics, demonstrations, and so on, and would that have an impact on student learning? Is that part of what you were asking me? 
ANTONIAK: Yes, basically. GARFIELD: I think that studies are starting to be done on the use of different kinds of software and ways that students interact with them, and what is the most effective way to use that kind of software to help students learn. And it seems to me that it is very encouraging. With technology offering such sophisticated ways to demonstrate things that we have never been able to present before, it seems to offer the potential to help students understand very complex concepts in better ways than they had previously. But I cannot say that there is a set of literature out there that supports that right now. JOHN TUCKER: Computer technology can be very effective in improving student learning, especially in reinforcing class presentations and for self-paced instruction, but its effectiveness strongly depends on how well or poorly the software is designed. GARFIELD: Right, and how well students are able to interact with it, whether it is just a demonstration or whether it permits them to manipulate variables. ROTHMAN: I would like to address this issue of grading and assessment and also encouraging cooperative learning at the same time. If in fact we put people in competition for a grade, do we not undermine the purpose of learning? GARFIELD: I think I have a different view of assessment than that. My idea of assessment is giving feedback to students on their learning, not just handing them an end-of-the-term grade. It is more an ongoing interaction with the student that says, here are some areas of weakness, here are things I think you need to work on. I view assessment as an ongoing process, and as a very complex process where ideally we would be giving feedback to students on their statistical knowledge, how well they apply it, how well they communicate it, and so on. 
I think that assessment is very much a part of collaborative activity because if a group works together and turns in a product, they need feedback on how well they did on that product. I know that most professors view assessment as grading, and I think that issues do come up when you are grading group work and students are worried about their grades. There have been different suggestions in the literature on different approaches to dealing with that. ROTHMAN: Specifically, how do you feel about comparison between students? If you base assessment on how well they present their work, you are making a comparison, relative to someone else rather than to what that student has already done. GARFIELD: I guess I do not see assessment as comparison to other students. I see it more as comparison to a standard: "Here is what we would like you to be able to do, and you are not there yet, but here are some suggestions for areas you should work on." I personally do not think of assessment as a way of comparing students to each other. I do not do rankings. I believe in more of a mastery approach, whereby if every student in the class masters things to the level I am looking for, they will all get the same grades. JAMES ROSENBERGER: The idea that statistics is at the hub of a hub-and-spoke paradigm is one of the themes that I have encountered here that Pennsylvania State University is probably not aware of. I wonder if we need to do a great selling job of the statistics discipline for the rest of the academic community. MORRIS: I am afraid so. But let us get started. FIENBERG: One of the problems that is going to come up repeatedly, and has been alluded to everywhere, is how to fit everything in. In reflecting, I have been associated in one form or another with at least five different departments over my career, and every department has had this problem. 
So it was not a problem that only I encountered; indeed, it existed at Harvard when I was associated with that university early in my career. John Lehoczky was correct in saying there is clear agreement on the goals and the importance of data analytic and cross-disciplinary training. But it is also very clear to me that there is not unanimity at CMU about the curricular details. Further, one could probably put any pair of people together who, when looked at from afar could seem to coincide, and find they think very differently about the curriculum. A number of years ago, when I was at Minnesota — before Joan was there — I observed that, when put together in a room, the faculty demanded the union of the knowledge of all of the people in the room rather than the intersection. A consequence is that you add course requirements and you never take them away. If allowed to go to its ultimate end, you have an infinite-year curriculum, a curriculum that cannot work. So there is a serious problem here. The other observation comes from my life as an administrator, which is now over; I am now languishing back into the field of being a faculty member. We in statistics are not alone. We talk about this as if it were unique to statistics, but in fact every field in every university faces these decisions. In fact, the pace of curricular reform and change is similar in other places, and indeed, I believe statistics in many respects is moving more rapidly ahead. In my most recent administrative role as a vice president, I was astonished at the slowness of some fields' willingness to embrace the notion that you had to reexamine what you were doing, let alone change it. What I would commend to everybody is to think not in terms of 2 or 4 or 5 or even 10 years as the increment for comparison, but to think in terms of generations and centuries. 
If you go back a century, statistics did not exist as a discipline. If you look at universities a hundred years ago, there was not an English department because it did not exist as a separate, identifiable field. And therefore, anybody who tells you that you cannot change the curriculum over that length of time is just talking from ignorance. If you use that long-term view, you know that change has to occur, and the question is how rapidly you make it happen, and how acceptable it is to be making changes regularly. Statistics as a field has actually been a good model for that. The notion of process control, where you do make regular changes and adapt, is something that we have been teaching others for years. Perhaps it is appropriate to take that and bring it back and use it ourselves as we adapt. MORRIS: It is always easier to make change when departments are being built. The first statistics departments in the United States were formed just before World War II. Changing is much harder once you become institutionalized. Statistics is going to be more like the classics department the next time. SACKS: Joan Garfield offered a set of tactics to go with the strategy that had been discussed before by Peter Bickel. Do you have, or do you know if there has been attempted, an assessment of the cost, in terms of time or resources, of implementing those sorts of tactics at a graduate level, or even at an undergraduate level, or whether it is cost-effective to do so? GARFIELD: I do not know how to answer that. I have lots of suggestions on how it can be done, but I am not directly involved in a statistics program, and so I cannot speak to what the cost would be. ROTHMAN: At the University of Michigan, we have one class in which we use masteries or portfolios rather than tests at the end of the period of time. 
Students demonstrate their understanding by writing something that indicates that they understand the facts and that they can apply the facts to situations that have not been described in class. They have to go to a newspaper, a scientific journal, and say, "Here is an application of this principle to some other situation." They get some feedback from the teaching fellow, and then they either have mastered the topic or have to revisit it. So the grade is "mastered" or "not yet." That simple change from assigning numbers as grades is very important because it focuses on learning as opposed to performance on tests. We are out there trying to encourage learning. We have a class of 250 students. Even being involved in this teaching college for more than one term in this section, using three plus two other graders, it is a full-time job just getting involved in that seemingly small change. We are trying to find new ways of doing it by putting more of a burden on the student, and getting some software that allows them to check their own work. We have the $4,000 to do that this summer and will see how that plays out. We are going to have to do a lot of work to get the cost down. The bottom line is that, from my understanding, it is going to be a very expensive policy.

EDDY: What we have been talking about seems to me to be the distinction between educating students and training them in something specific, and that the historical mode of lectures is to drum the information into them. What these various things that have been talked about this afternoon really focus on is educating students so they have the tools and savvy for these situations, and not worrying so much about training in the specifics. In thinking about this, I am still struggling with my earlier question of how we are going to teach them all of these things. The answer is that we are not, and we do not have to worry so much about it.
Also, John Lehoczky omitted mentioning one of the other mechanisms that we at CMU have incorporated in the last few years, namely, small groups of students and faculty members that get together. We now have six or seven of these groups that meet two or three nights a week, in which the students have to make presentations. It is so much smaller; it is not a course or anything of the like, it is just a get-together or workshop. In the ones in which I play a role, in the course of a month the students probably make one or two presentations. They get feedback on the communications part and on the technical part, and their fellow students get exposed to whatever ideas they are discussing. So there are other mechanisms that are imparting the knowledge that they did not get in course work, and in realizing this I actually feel much better about it than I did earlier.

JAMES LANDWEHR: I want to comment about this issue of how one covers more in the same amount of time, from the perspectives of having worked with the Quantitative Literacy Project and of trying to get high school and middle school teachers to teach more statistics. Of course, if you say to a mathematics teacher that, in addition to everything else, he or she now should be teaching statistics and the students should understand statistics by the time they graduate, the immediate response is, "Well, what can I throw out? You tell me what to throw out, what is not important, and I will consider it."
Eventually other teachers may say to them something like the following: If we think about it differently, if instead of spending a lot of time teaching them linear algebra — in which the straight line is presented so abstractly that the students do not get it anyway and more time is thus taken than should be — we give the students some real data and ask them what it means and talk about scatter plots and look at association and eventually ask the class, "Could a straight line help us in understanding this?", we may end up not only teaching some statistics, but also teaching the simple algebra better — and I am now repeating what teachers tell me — so that the kids end up learning the simple algebra about the straight line better than previously happened the traditional way. So if you can come up with a different method, sometimes you can kill two birds with one stone. That is not to say it is easy, but I think this is a perspective worth taking, as opposed to the "What-can-I-get-rid-of?" perspective.

BICKEL: I have to agree completely with Steve Fienberg that when you get the department together, the tendency, of course, is always toward the conclusion that one must offer the union of all things rather than the intersection. Furthermore, it is driven not only by the faculty interests that lead to that union, but also by what skills are, say, most relevant in the environment. For universities in the Washington, D.C., area, focusing on survey sampling and statistics policy may be most relevant; for the University of Michigan, focusing on Teach UM and its various aspects may be what is most important. On the other hand, it is clear that it is impossible to offer everything. So there has to be selection. Nothing prevents there being a large number of topics available as long as you can get the faculty to agree on the intersection, that is, on what every student should have some exposure to.
Then have the other things offered, or offered as time permits or as there are faculty members willing to teach them.

BODAPATI GANDHI: There has been discussion about cost. We try to extend the basic philosophy of quantitative literacy to undergraduate students who are taking undergraduate introductory courses. We ask them to do a project instead of a third partial examination. Each student has to collect primary and secondary data, prepare a proposal, a small one, and then analyze the data on a computer. It is done at no cost to the university. There have been about 10 sections with each section containing about 34 students, so that around 300 students total have experienced this for two semesters. In the first semester they focus on descriptive statistics, and in the second semester they do a regression analysis project. We offered it this year also and it was very successful, according to comments of the faculty.

SNELL: I was just going to comment that we at Dartmouth and many other people have experimented, and many have approaches that are very expensive, and also some other methods are very cheap. I heard a wonderful presentation the other day about a physics program at Harvard in which the person who teaches a group of a couple hundred people simply comes in each day with a very short question, gives the students a few minutes to think of the answer, and has them turn around and discuss the problem with a neighbor and come to a joint conclusion. This is done merely to exhibit by first-hand experience how much better they do after they have talked about a problem a little bit with each other.
He happened to have a lot of high-technology equipment that probably was expensive, because all the statistical results were displayed in front of the room almost instantly with automatic recording and such things, to also show the students how much they had learned from just one or two minutes of discussion with their neighbors. However, to do something such as that is very simple and does not cost a lot of money.
{"url":"http://www.nap.edu/openbook.php?record_id=2355&page=52","timestamp":"2014-04-19T17:32:04Z","content_type":null,"content_length":"60328","record_id":"<urn:uuid:7c7922d5-c4b2-4c5b-8d1a-b5f5f502ea7b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
Subject: Number theory

Books in this subject area deal with number theory: the branch of mathematics that studies the properties of whole numbers, prime numbers and their distribution, whole number solutions to equations, and number systems related to the whole numbers (usually called number fields). Number theory has found important applications in the fields of computer science, physics, and other branches of mathematics.

Last modified on 28 November 2011, at 07:31
{"url":"http://en.m.wikibooks.org/wiki/Subject:Number_theory","timestamp":"2014-04-17T12:38:38Z","content_type":null,"content_length":"19243","record_id":"<urn:uuid:32b320a5-e055-4309-8ff7-54917a23945a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
Weehawken Statistics Tutor Find a Weehawken Statistics Tutor Hi! Thanks for taking a look at my profile. My name is Laura. 27 Subjects: including statistics, English, reading, writing ...I can submit transcripts if necessary. I have a Master's degree in Applied Math from the University of Michigan and three years of tutoring experience (both private tutoring and with the University's learning center). I have taken multiple college-level courses, including symbolic logic and introduction to mathematical proof which involve logic. I received an A (4.00) in both 15 Subjects: including statistics, calculus, geometry, algebra 1 ...In addition, I provide exercises that build problem-solving and reasoning skills. My objective is to make sure the student fully understands the material presented. Test taking is another area some students find stressful, so I like to provide test-taking strategies that can be used regardless of the subject matter.I have over 20 years of business experience. 21 Subjects: including statistics, reading, English, Spanish ...Ideally, we could meet by Columbia and use a classroom on campus. I have taught courses in Calculus and Statistics while in graduate school so I am very detailed about teaching; I always try to make sure the student understands what I am trying to get across to them well. I have lots of experience and am a native English speaker. 25 Subjects: including statistics, calculus, geometry, algebra 1 ...I am aware of precisely what is required in terms of techniques to help students focus and succeed. I give frequent breaks to my students and keep them entertained with games relating to the subject material. I've been tutoring for 12 years and have helped hundreds of students master difficult subjects. 47 Subjects: including statistics, English, chemistry, calculus
{"url":"http://www.purplemath.com/weehawken_statistics_tutors.php","timestamp":"2014-04-16T13:35:08Z","content_type":null,"content_length":"24047","record_id":"<urn:uuid:22cfd0da-8980-433b-82f8-08dcbc1eb3a2>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
Material Results

An interactive multimedia tutorial for healthcare professionals wishing to refresh math skills and learn how to calculate... see more
Margaret Hansen. Date Added: Jul 19, 2005. Date Modified: Apr 03, 2014.

This subsite of Mathematics Tutorials and Problems (with applets) is divided into Interactive Tutorials, Calculus Problems,... see more
kader dendane. Date Added: Jun 28, 2008. Date Modified: Oct 22, 2013.

The CCP includes modules that combine the flexibility and connectivity of the Web with the power of computer algebra systems... see more
David Smith & Lawrence Moore, CCP Co-Directors. Date Added: Sep 25, 2005. Date Modified: Jul 23, 2007.

Finite Math for Windows is a software package that enables students to easily solve problems and/or check their work in... see more
Howard Weiss. Date Added: Jan 25, 2007. Date Modified: Sep 21, 2009.

Quoted from the site: [This site contains...] "Free mathematics tutorials to help you explore and gain deep understanding of... see more
kader dendane. Date Added: Jun 20, 2008. Date Modified: Jan 24, 2013.

OpenAlgebra.com is a free online algebra study guide and problem solver designed to supplement any algebra course. There are... see more
John Redden. Date Added: Jul 28, 2013. Date Modified: Apr 09, 2014.

The CCP includes modules that combine the flexibility and connectivity of the Web with the power of computer algebra systems... see more
David Smith & Lawrence Moore, CCP Co-Directors. Date Added: Jan 26, 2006. Date Modified: Oct 07, 2010.

No ads, just Math tutorials from arithmetic to differential equations. There is a whole collection of videos on YouTube as... see more
James L. Sousa. Date Added: Jul 16, 2013. Date Modified: Nov 05, 2013.

The site is dedicated to the Pascal Triangle and its connections to different areas of mathematics. It also contains a set of... see more
Michael Frame. Date Added: Jan 08, 2009. Date Modified: Dec 09, 2010.

The aim of these investigations is not to provide drill (although links to other resources on the web that do have been... see more
Leo Jonker. Date Added: Sep 28, 2004. Date Modified: Jun 29, 2011.
{"url":"http://www.merlot.org/merlot/materials.htm?materialType=Tutorial&keywords=mathematics","timestamp":"2014-04-23T20:20:50Z","content_type":null,"content_length":"190844","record_id":"<urn:uuid:f7cebd46-17a4-488b-ae23-7834ac03ede8>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
Equalizing terms by difference reduction techniques, 2000

"... not be interpreted as representing the official policies, either expressed or implied, of NSF or the U.S. Government. This thesis describes the design of a meta-logical framework that supports the representation and verification of deductive systems, its implementation as an automated theorem prover, a ..."

Cited by 81 (17 self) Add to MetaCart

not be interpreted as representing the official policies, either expressed or implied, of NSF or the U.S. Government. This thesis describes the design of a meta-logical framework that supports the representation and verification of deductive systems, its implementation as an automated theorem prover, and experimental results related to the areas of programming languages, type theory, and logics. Design: The meta-logical framework extends the logical framework LF [HHP93] by a meta-logic M + 2. This design is novel and unique since it allows higher-order encodings of deductive systems and induction principles to coexist. On the one hand, higher-order representation techniques lead to concise and direct encodings of programming languages and logic calculi. Inductive definitions on the other hand allow the formalization of properties about deductive systems, such as the proof that an operational semantics preserves types or the proof that a logic is consistent. M + is a proof calculus whose proof terms are recursive functions that may be

, 1999

"... In this paper we present a framework for the definition of generic and thus reusable tactics. We present an extension of the window inference technique which is the formal basis of a hierarchical, problem-reduction style of reasoning. The window inference technique is analyzed and general reasoni ..."

Add to MetaCart

In this paper we present a framework for the definition of generic and thus reusable tactics.
We present an extension of the window inference technique which is the formal basis of a hierarchical, problem-reduction style of reasoning. The window inference technique is analyzed and general reasoning rules are separated from logic specific rules. The separation between logic specific and general rules is used to define a framework offering generic window reasoning rules to allow for the definition of generic tactics, where logic specific parts are separated from the tactic level.

1 Introduction

Interactive theorem proving systems are used to support strict mathematical reasoning wrt. different logics, as for example classical first-order logic, higher-order logic or modal logics. These systems are either meta-logical systems allowing the declarative encoding of a large variety of logics or they are "multi-logical" systems, where the different logics are built into the system on t...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2528274","timestamp":"2014-04-21T07:29:37Z","content_type":null,"content_length":"15540","record_id":"<urn:uuid:2cf49aaa-883f-474f-9e38-540dd8107169>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
Two Masses Are Connected By A String Horizontally ... | Chegg.com Two masses are connected by a string horizontally on a frictionless surface. One hand applies a 5.4 N force to the right (with mass=5 kg), while another hand applies a -3.1 N force to the left (with mass=4kg). Calculate the acceleration of the system and the Tension (T) in the string between the two masses. I got 0.25 m/s^2 for the acceleration but I am confused on how to find the Tension.
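One standard way to get the tension (a sketch of the usual approach, not from the Chegg page; variable names are mine): apply Newton's second law to the two masses as one system to get the acceleration, then isolate either mass so the string tension appears as an external force.

```python
# Sketch of the standard two-step solution; taking rightward as positive,
# so the left hand's pull enters as -3.1 N (the sign the asker used).
m1, m2 = 5.0, 4.0            # kg: mass pushed with 5.4 N, mass pulled with 3.1 N
f_right, f_left = 5.4, -3.1  # N: signed external forces on the system

# Step 1 - whole system: a = F_net / (m1 + m2)
a = (f_right + f_left) / (m1 + m2)   # about 0.256 m/s^2 (0.25 when rounded)

# Step 2 - isolate m1: f_right - T = m1 * a  =>  T = f_right - m1 * a
T = f_right - m1 * a                 # about 4.12 N

# Cross-check by isolating m2 instead: T + f_left = m2 * a
T_check = m2 * a - f_left
assert abs(T - T_check) < 1e-9
```

Both free bodies give the same tension, which is a quick way to catch a sign mistake.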
{"url":"http://www.chegg.com/homework-help/questions-and-answers/two-masses-connected-string-horizontally-frictionless-surface-one-hand-applies-54-n-force--q1233880","timestamp":"2014-04-19T07:16:28Z","content_type":null,"content_length":"21078","record_id":"<urn:uuid:69f22517-d338-479d-90bc-2bb793cc87c8>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
Differences of Integers Using Absolute Value

4.8: Differences of Integers Using Absolute Value

Created by: CK-12

Practice Differences of Integers Using Absolute Value

Remember Cameron and the diving? While on the plane, Cameron looked at photos from his Dad's deep sea dive. On a shark dive, Cameron's Dad had gone down to a depth of 80 feet with hopes of seeing a shark. After ten minutes or so, he had spotted a beautiful shark swimming above him. Cameron's Dad went up about 20 feet to try to catch a picture of the shark. He did get a few good shots before the shark swam away.

"What depth did you see the shark at?" Cameron asked his Dad, showing him the picture.

Do you know? To figure this out, you will need to subtract integers. Subtracting integers is the focus of this Concept. By the end of it, you will know the depth of the shark.

Another strategy for subtracting integers involves using opposites. Remember, you can find the opposite of an integer by changing the sign of the integer. The opposite of any integer $b$ is $-b$.

For any two integers $a$ and $b$, the difference $a-b$ is equal to the sum $a+(-b)$.

Write this down in your notebook and then continue with the Concept.

Find the difference of $5-(-8)$.

The integer being subtracted is $-8$. The opposite of that integer is $8$, so add $8$ to $5$.

So, the difference of $5-(-8)$ is $13$. Our answer is 13.

Find the difference of $-12-(-2)$.

The integer being subtracted is $-2$. The opposite of that integer is $2$, so add $2$ to $-12$. Add as you would add any integers with different signs. Give that answer the same sign as the integer with the greater absolute value. Here $12>2$, so the answer is negative.

So, the difference of $-12-(-2)$ is $-10$. Our answer is -10.

Find the difference of $-20-3$.

The integer being subtracted is $3$. The opposite of that integer is $-3$, so add $-3$ to $-20$. Add as you would add any integers with the same sign, a negative sign. Give that answer the same sign as the two original integers, a negative sign.

So, the difference of $-20-3$ is $-23$. Our answer is -23.
Now take a few minutes to practice what you have learned. Find the differences using opposites.

Example A: $-5 - 7$

Example B: $8 - (-4)$

Example C: $-12 - (-8)$

Here is the original problem once again.

While on the plane, Cameron looked at photos from his Dad's deep sea dive. On a shark dive, Cameron's Dad had gone down to a depth of 80 feet with hopes of seeing a shark. After ten minutes or so, he had spotted a beautiful shark swimming above him. Cameron's Dad went up about 20 feet to try to catch a picture of the shark. He did get a few good shots before the shark swam away.

"What depth did you see the shark at?" Cameron asked his Dad, showing him the picture.

To find the depth at which Cameron's Dad saw the shark, we need to write a subtraction problem and solve it. Remember that depth has to do with distance below the surface, so we use negative integers to represent different depths. His starting depth was $-80$; then he went up, so we take away $-20$ feet:

$-80 - (-20) = -60 \text{ feet}$

Cameron's Dad saw the shark at 60 feet below the surface.

Here are the vocabulary words in this Concept.

Difference: the answer in a subtraction problem.

Integer: the set of whole numbers and their opposites.

Guided Practice

Here is one for you to try on your own.

The temperature inside a laboratory freezer was $-10^\circ C$. The temperature was then lowered by $5^\circ C$. What was the new temperature inside the freezer?

The problem says that the temperature was lowered. This means that the temperature decreased, so you should subtract. To find the new temperature, you can subtract the amount by which the temperature was lowered from the original temperature, using one of these equations:

$-10^\circ C - 5^\circ C = ?$ or $-10 - 5 = ?$

The integer being subtracted is $5$. The opposite of that integer is $-5$, so add $-5$ to $-10$. Add as you would add any integers with the same sign, a negative sign. Give that answer the same sign as the two original integers, a negative sign.
So, the difference of $-10-5$ is $-15$. This means that the new temperature inside the freezer must be $-15^\circ C$.

Video Review

Here is a video for review. This is a James Sousa video on subtracting integers.

Directions: Find each difference using opposites.

1. $15-7$
2. $-7-12$
3. $0-4$
4. $13-(-9)$
5. $-21-4$
6. $33-(-4)$
7. $-11-(-8)$
8. $18-28$
9. $28-(-8)$
10. $13-18$
11. $-21-(-8)$
12. $-9-(-38)$
13. $22-8$
14. $25-38$
15. $19-(-19)$
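The lesson's rule, that subtracting an integer is the same as adding its opposite, can be checked mechanically against every worked example above; a small sketch (mine, not part of the CK-12 page):

```python
def subtract_via_opposite(a, b):
    """Subtract b from a the way the Concept describes: add the opposite of b."""
    return a + (-b)

# (a, b, expected difference) from the worked examples in this Concept.
examples = [
    (5, -8, 13),      # 5 - (-8)
    (-12, -2, -10),   # -12 - (-2)
    (-20, 3, -23),    # -20 - 3
    (-80, -20, -60),  # the shark-depth problem
    (-10, 5, -15),    # the freezer problem
]
for a, b, expected in examples:
    assert subtract_via_opposite(a, b) == a - b == expected
```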
{"url":"http://www.ck12.org/book/CK-12-Middle-School-Math-Concepts---Grade-7/r4/section/4.8/","timestamp":"2014-04-24T18:25:58Z","content_type":null,"content_length":"136310","record_id":"<urn:uuid:3f3528d9-44a3-4c4d-90d1-897d93f9dc5d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
Inverses of disjointness preserving operators. (English) Zbl 0974.47032

Let $X$ and $Y$ be vector lattices. A linear map $T:X\to Y$ is disjointness preserving if $Tx\perp Ty$ whenever $x\perp y$ in $X$; if $T$ is bijective and $T^{-1}$ is also disjointness preserving then $T$ is said to be a $d$-isomorphism. The main results of this monograph include: (A) a characterization of those Dedekind-complete lattices $X$ having the property that every disjointness-preserving bijection with domain $X$ is a $d$-isomorphism, and (B) a theorem to the effect that for Dedekind-complete vector lattices, $d$-isomorphism implies order isomorphism. Partial results along these lines were announced in [the authors, “Functional analysis and economic theory”. Based on the special session of the conference on nonlinear analysis and its applications in engineering and economics, Samos, Greece, July 1996, Berlin: Springer, 1-8 (1998; Zbl 0916.47026)]. Besides being more definitive and complete, the present account also initiates a new perspective: rather than regarding disjointness preserving bijections which fail to be $d$-isomorphisms as aberrant curiosities, their existence is now taken as an opportunity to learn about the structures of domain and range spaces. This new perspective is reflected in the rich variety of counterexamples presented and in the full treatments of concepts like ‘determining/cofinal families of band projections’, ‘$d$-splitting numbers’, ‘essentially constant functions’, and ‘cofinal universal completeness’ leading up to (A).

MSC:
47B60 Operators on ordered spaces
46B40 Ordered normed spaces
46A40 Ordered topological linear spaces, vector lattices
47B65 Positive and order bounded operators
46B42 Banach lattices
54G05 Extremally disconnected spaces, $F$-spaces, etc.
47B38 Operators on function spaces (general)
{"url":"http://zbmath.org/?q=an:0974.47032","timestamp":"2014-04-17T06:58:08Z","content_type":null,"content_length":"24607","record_id":"<urn:uuid:c882e790-7ad4-49f6-9281-b12a3427df76>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
South San Francisco Algebra Tutor ...In addition to my experience teaching recitation sections of up to 40 students, I am very comfortable (and experienced with) explaining probability concepts one-on-one :) I have been writing code in MATLAB for the past 7 years. I'm currently a computer science PhD student at Cal, studying comput... 27 Subjects: including algebra 1, algebra 2, calculus, chemistry ...I will also emphasize the technique of concentrating on special parts of diagrams (one part at a time). I have many recent and present pre-Calculus students and review with them for tests and homework. I have a PhD in Math (from US) and have taught Calculus and other Math classes at San Jose St. U and other colleges. 15 Subjects: including algebra 2, algebra 1, calculus, GRE ...I teach Algebra with crystal clear methods for a subject often disliked and poorly taught. Please let me reverse your feelings toward it. I would welcome the opportunity and look forward to hearing from you soon. 3 Subjects: including algebra 1, Japanese, prealgebra ...In addition to the sporting fields, I was able to graduate from Carnegie Mellon in 3.5 years (Dec 2012) and began work as a Corporate Finance Analyst on the Energy Team for PNC Bank, where I underwrote a portfolio consisting of $1.4 Billion in direct hard exposure. I then decided to take on a bu... 33 Subjects: including algebra 2, algebra 1, calculus, geometry With a BA in Economics from the University of Chicago, and an MFA in Creative Writing from the University of Georgia, I can tutor a wide variety of subjects. I have worked with kids of all ages through 826 Valencia, and I currently teach undergraduate writing at the University of San Francisco. I was also a Research Fellow at Stanford Law School, where I did empirical economics research. 39 Subjects: including algebra 2, algebra 1, reading, English
{"url":"http://www.purplemath.com/South_San_Francisco_Algebra_tutors.php","timestamp":"2014-04-16T10:33:33Z","content_type":null,"content_length":"24291","record_id":"<urn:uuid:dbc6fb3c-caea-4b58-94a9-5e88d44820df>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
Shadow Mapping

11-07-2012, 11:41 PM #1

Dear friends, please clarify my doubt. I have a doubt about the light source's point of view:

[x, y, z, w] = [view matrix of light] * [projection matrix of light] * [object of world matrix] * [vertex (hitting point)]

I'm guessing you meant you have a "question" rather than a "doubt". What's your question?

BTW, your space transformations need the light VIEWING and the light PROJECTION matrices swapped. And even then that's going to leave you in light CLIP-SPACE. You still need a bias matrix to shift your projected coords from -0.5..0.5 to 0..1 needed for shadow map texture lookup.

Sorry for posting like that... In the light source's point of view, I have a doubt about the depth buffer:

(COLUMN MATRIX) [x,y,z,w] = [bias matrix] * [view matrix of light] * [projection matrix of light] * [object of world matrix] * [vertex]

In this calculation, how do I find the depth value?

As Dark Photon mentioned, you've got the wrong matrix order. It should be

(COLUMN MATRIX) [x,y,z,w] = [bias matrix] * [projection matrix of light] * [view matrix of light] * [object of world matrix] * [vertex]

The depth value will be in 'z'. Note, if you are using a perspective projection, then you also need to divide by 'w'. The bias matrix should be:

0.50000 0.00000 0.00000 0.50000
0.00000 0.50000 0.00000 0.50000
0.00000 0.00000 0.50000 0.50000
0.00000 0.00000 0.00000 1.00000

But you can also do this transformation in the shader with the more optimized (and easier to read?):

[x,y,z,w] = [projection matrix of light] * [view matrix of light] * [object of world matrix] * [vertex] / 2 + 0.5

Maybe the shader compiler will optimize it this way. It should be equivalent? Good enough for me to use it, and it is working.

Doing multiplication or division with a scalar on a vector is done piecewise. Whether the compiler can do the optimization anyway, I can't say.

"It should be equivalent? Good enough for me to use it, and it is working."
"Doing multiplication or division with a scalar on a vector is done piecewise."

That's an interesting puzzle.

Code:
[x, y, z, w] / 2 + 0.5 = [x*0.5+0.5, y*0.5+0.5, z*0.5+0.5, w*0.5+0.5]

Code:
[bias matrix] * [x,y,z,w] =
[ 0.5 0.00 0.00 0.50
  0.0 0.50 0.00 0.50
  0.0 0.00 0.50 0.50
  0.0 0.00 0.00 1.00 ] * [x,y,z,w]
= [ x*0.5+0.5*w, y*0.5+0.5*w, z*0.5+0.5*w, w ]

where in both cases [x,y,z,w] is a CLIP-SPACE position. If we do the perspective divide on the latter (the standard "bias matrix" approach), we get the nice [ x/w*0.5+0.5, y/w*0.5+0.5, z/w*0.5+0.5, 1 ], which is just the scale-and-shift we need to shift the NDC cube (-1..1, -1..1, -1..1) into (0..1, 0..1, 0..1).

But with the former, we end up with [x*0.5+0.5, y*0.5+0.5, z*0.5+0.5, w*0.5+0.5]. And if we do the perspective divide, we end up with something very different. I could be missing something (very possible!), but I don't see how to map this as equivalent to the latter. Any ideas?

Last edited by Dark Photon; 11-14-2012 at 07:27 PM.

Thanks, I see what you mean! And you are right of course. The reason it works for me is that I don't do the perspective divide (using orthographic projections). The simplification should still be valid if it is done on the xyz component only, shouldn't it?

edit: I did some performance testing (using queries), and could find no measurable differences between using a const mat4 bias.

Last edited by Kopelrativ; 11-14-2012 at 11:50 PM.
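Dark Photon's conclusion is easy to confirm numerically: after the perspective divide, the bias-matrix route and the xyz-only scale-and-shift agree, while scaling and shifting the whole clip-space vector (w included) does not. A small pure-Python sketch (mine, not from the thread):

```python
# Numeric check of the two shadow-map bias formulations discussed above.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

BIAS = [[0.5, 0.0, 0.0, 0.5],
        [0.0, 0.5, 0.0, 0.5],
        [0.0, 0.0, 0.5, 0.5],
        [0.0, 0.0, 0.0, 1.0]]

clip = [0.4, -0.6, 0.8, 2.0]  # a clip-space position with w != 1

# Standard route: bias matrix first, perspective divide second.
b = mat_vec(BIAS, clip)
standard = [b[i] / b[3] for i in range(3)]

# Divide first, then *0.5 + 0.5 on xyz only: agrees with the standard route.
xyz_only = [(clip[i] / clip[3]) * 0.5 + 0.5 for i in range(3)]
assert all(abs(standard[i] - xyz_only[i]) < 1e-12 for i in range(3))

# The whole-vector shortcut breaks once w is also scaled and shifted:
s = [c * 0.5 + 0.5 for c in clip]
shortcut = [s[i] / s[3] for i in range(3)]
assert any(abs(standard[i] - shortcut[i]) > 1e-6 for i in range(3))
```

With an orthographic projection w stays 1, the divide is a no-op, and the disagreement disappears, which matches the last post.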
{"url":"http://www.opengl.org/discussion_boards/showthread.php/179586-Shadow-Mapping?p=1244676&viewfull=1","timestamp":"2014-04-21T05:05:28Z","content_type":null,"content_length":"70826","record_id":"<urn:uuid:3fccc867-c275-414b-943f-fdff8803066d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
Equivariant cohomology of finite group actions and invariant cohomology classes

Let $W$ be a finite group acting on a space $X$. In what generality is it true that $H^*_W(X) = H^*(X)^W$?

We always have a map $H^*_W(X) \rightarrow H^*(X)^W$, but it is certainly not an isomorphism with $\mathbb{Z}$-coefficients (take $X = E_W$, the total space of a universal bundle). But is it true with $\mathbb{Q}$-coefficients? Perhaps if we invert the order of the group?

This is a follow-up to my prior question about equivariant cohomology. Again, I am referring to the notes http://www-fourier.ujf-grenoble.fr/~mbrion/notesmontreal.pdf on equivariant cohomology by Michel Brion. I am interested in understanding the proof of Proposition 1 on pages 6 and 7.

Let $G$ be a compact Lie group, let $T$ be a maximal torus, let $N$ be the normalizer of $T$ in $G$, and let $W = N/T$ denote the Weyl group. In part (i), we have the $W$-bundle $G/T \rightarrow G/N$, from which the author claims $H^*(G/N) = H^*(G/T)^W$ when using $\mathbb{Q}$-coefficients. A few sentences later, a similar statement is made for a more arbitrary $W$-bundle. So it seems like the above statement about $W$-invariants of cohomology is true in some generality. Could someone explain why this is true or give a reference? Or perhaps I am mistaken in interpreting this argument.

Tags: equivariant-cohomology, finite-groups, rational-homotopy-theory

1 Answer

These results follow from the Cartan-Leray spectral sequence, which for a regular covering map $X\to X/W$ and a commutative ring $k$ of coefficients has $$ E_2^{p,q}=H^p(W,H^q(X;k)) $$ (cohomology of the group $W$ with coefficients in the $kW$-module $H^\ast(X;k)$) and converges to a graded group associated to $H^\ast(X/W)$. A reference is Ken Brown's "Cohomology of groups", section VII.7.
In case the group $W$ is finite, if $|W|$ is invertible in $k$ then $H^p(W;H^q(X;k))=0$ for all $q$ and all $p>0$ (see Brown, Corollary III.10.2). In particular this is true if $k=\mathbb{Q}$. Thus the spectral sequence is concentrated in the $0$ column and therefore collapses, giving $H^\ast(X/W)\cong H^0(W;H^\ast(X;k))$. Since $H^0(W;M)=M^W$ for any group $W$ and any $W$-module $M$, this gives the stated results. So you were exactly right in your first paragraph!

More generally, the same collapse happens for the Serre spectral sequence of the fibration $X\to X_W\to BW$, which has $$ E_2^{p,q}=H^p(BW;H^q(X))\cong H^p(W;H^q(X)), $$ giving the isomorphism $H^\ast_W(X)\cong H^\ast(X)^W$ you mentioned.

Thanks for your answer and the reference! – Edgar Feb 1 '13 at 16:43
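For completeness, here is a sketch (my addition, not part of the original thread) of the standard transfer argument behind the vanishing cited from Brown, Corollary III.10.2:

```latex
% Transfer (corestriction) argument for a finite group W and a kW-module M.
% res: restriction to the trivial subgroup 1 <= W; cor: corestriction back up.
\begin{align*}
\mathrm{cor}^W_1 \circ \mathrm{res}^W_1 &= |W|\cdot \mathrm{id}
    \quad\text{on } H^p(W;M), \\
H^p(1;M) &= 0 \quad\text{for } p>0, \\
\text{hence}\quad |W|\cdot H^p(W;M) &= 0 \quad\text{for } p>0.
\end{align*}
```

So when $|W|$ is invertible in $k$, multiplication by $|W|$ is an isomorphism and $H^p(W;M)=0$ for all $p>0$, which is exactly the collapse used in the answer.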
Let f(x) = x + 2 and g(x) = x^2 - 6x + 3. Find g(f(x)).
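One way to sanity-check the composition (a sketch I'm adding, not from the page): expanding g(f(x)) = (x+2)^2 - 6(x+2) + 3 by hand gives x^2 - 2x - 5, and the two forms can be compared numerically.

```java
// Compose g with f and compare against the hand-expanded form
// g(f(x)) = (x+2)^2 - 6(x+2) + 3 = x^2 - 2x - 5.
public class Compose {
    static long f(long x) { return x + 2; }
    static long g(long x) { return x * x - 6 * x + 3; }

    // g(f(x)), computed by direct substitution
    static long gOfF(long x) { return g(f(x)); }

    // the same polynomial after expanding and collecting terms
    static long expanded(long x) { return x * x - 2 * x - 5; }
}
```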
Approximable Concepts, Chu spaces, and information systems

Guo-Qiang Zhang and Gongqin Shen

This paper serves to bring three independent but important areas of computer science to a common meeting point: Formal Concept Analysis (FCA), Chu Spaces, and Domain Theory (DT). Each area is given a perspective or reformulation that is conducive to the flow of ideas and to the exploration of cross-disciplinary connections. Among other results, we show that the notion of state in Scott's information system corresponds precisely to that of formal concepts in FCA with respect to all finite Chu spaces, and the entailment relation corresponds to "association rules". We introduce, moreover, the notion of approximable concept and show that approximable concepts represent algebraic lattices which are identical to Scott domains except the inclusion of a top element. This notion serves as a stepping stone in recent work in which a new notion of morphism on formal contexts results in a category equivalent to (a) the category of complete algebraic lattices and Scott continuous functions, and (b) a category of information systems and approximable mappings.

Keywords: Formal concept analysis, domain theory, Chu spaces
2000 MSC: 03B70, 06A15, 06B23, 08A70, 68P99, 68Q55

Theory and Applications of Categories, Vol. 17, 2006, No. 5, pp 79-102.
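As a toy illustration of the FCA side (my sketch, not the paper's construction): a finite formal context (equivalently, a finite Chu space over {0,1}) carries two derivation operators, and a formal concept is a pair (extent A, intent B) with A' = B and B' = A.

```java
import java.util.Set;
import java.util.TreeSet;

// Tiny formal context: context[g][m] == true iff object g has attribute m.
// This is an illustrative sketch only; the context below is made up.
public class ConceptDemo {
    static final boolean[][] context = {
        {true,  true,  false},   // object 0
        {true,  false, false},   // object 1
        {true,  true,  true }    // object 2
    };

    // A' : attributes common to all objects in A
    static Set<Integer> intent(Set<Integer> objects) {
        Set<Integer> out = new TreeSet<>();
        for (int m = 0; m < context[0].length; m++) {
            boolean all = true;
            for (int g : objects) all &= context[g][m];
            if (all) out.add(m);
        }
        return out;
    }

    // B' : objects possessing all attributes in B
    static Set<Integer> extent(Set<Integer> attrs) {
        Set<Integer> out = new TreeSet<>();
        for (int g = 0; g < context.length; g++) {
            boolean all = true;
            for (int m : attrs) all &= context[g][m];
            if (all) out.add(g);
        }
        return out;
    }

    // A formal concept is a fixed pair of the two derivation operators.
    static boolean isConcept(Set<Integer> a, Set<Integer> b) {
        return intent(a).equals(b) && extent(b).equals(a);
    }
}
```

For instance, starting from object 0 (attributes {0,1}) and closing up gives the concept with extent {0,2} and intent {0,1}.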
sieve of eratosthenes

False, all multiples are non-prime. You only use the sqrt trick when you're checking if a number is prime by running through the multiples. Ex: is 100 prime? You only have to check integers from 1 to sqrt(100) by using the "naive" primality checker. However, if you use the sieve, you must mark all multiples of 2 up to 100, all multiples of 3 up to 100, all multiples of 5 up to 100, etc. This could have been why you were getting 49 marked as prime: if your bound was 100, then it would mark 2, 3, 5, 7 as prime correctly, but everything else greater than 10 would be marked as prime (or, not marked as non-prime). Take a look at the animated picture on Wikipedia's page about the sieve of Eratosthenes, and you'll see they're marking up to bound, not just sqrt(bound).

A better way is to look for the next number that's not zero, and start marking off its multiples. Also, remember in a sieve you can't go to just the sqrt(bound); you have to go through the entire array, marking off all the multiples. Another efficiency modification you can make is to increase your inner for loop by the value you're marking off (that's the multiple you're marking off), rather than by 1. Also, it's not necessary to have an array that contains numbers, just true/false for prime/non-prime (a tricky optimization is to have prime=false and non-prime=true):

    // false = prime (not yet marked), true = non-prime
    for (int i = 2; i < primes.length; i++)
        if (primes[i] == false)
            for (int j = i + i; j < primes.length; j += i)
                primes[j] = true;

thanks, i think thats exactly what i was looking for. but also with this sieve i think actually once you progress through 1 -> sqrt(n) you cover all non-prime multiples. To find all the prime numbers less than or equal to a given integer n by Eratosthenes' method:

1. Create a list of consecutive integers from two to n: (2, 3, 4, ..., n).
2. Initially, let p equal 2, the first prime number.
3. Strike from the list all multiples of p greater than p.
4. Find the first number remaining on the list greater than p (this number is the next prime); let p equal this number.
5. Repeat steps 3 and 4 until p^2 is greater than n.
6. All the remaining numbers on the list are prime.

To check if a number is even, you can do this: if (num % 2 == 0)
The Java API: http://java.sun.com/javase/6/docs/api/

I'm new in programming and I was looking for a simple code for the Sieve of Eratosthenes, and all I found was optimized and a little difficult to understand, so I want to post this one for beginners like me.
public class Erathostenes {

    private static int MAX = 10000;

    public static void main(String[] args) {
        int[] numbers = new int[MAX];
        int[] primes = new int[MAX]; // 0 = not used; 1 = prime; -1 = erased

        // initialize the arrays
        for (int i = 0; i < MAX; i++) {
            numbers[i] = i;
        }
        primes[0] = primes[1] = -1; // crossing out 0 and 1

        for (int idx = 2; idx < MAX; idx++) {
            if (primes[idx] < 0) {
                continue; // erased, skip it
            }
            // the first not erased is prime
            primes[idx] = 1;
            // check and erase its multiples
            for (int i = idx + 1; i < MAX; i++) {
                if (primes[i] < 0) {
                    continue; // erased, skip it
                }
                if (numbers[i] % numbers[idx] == 0) {
                    primes[i] = -1; // erase
                }
            }
        }

        for (int i = 0; i < MAX; i++) {
            if (primes[i] > 0) {
                System.out.print(numbers[i] + ",");
            }
        }
    }
}
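For comparison, here is a compact boolean-array sieve following the numbered steps quoted earlier in the thread (my sketch; starting the inner loop at p*p rather than p+p is a standard extra optimization):

```java
import java.util.ArrayList;
import java.util.List;

// Sieve of Eratosthenes: composite[i] == true means i has been struck out.
public class SieveDemo {
    public static List<Integer> primesUpTo(int n) {
        boolean[] composite = new boolean[n + 1];
        for (int p = 2; p * p <= n; p++) {           // step 5: stop once p^2 > n
            if (!composite[p]) {                     // p survived, so p is prime
                for (int j = p * p; j <= n; j += p) {
                    composite[j] = true;             // step 3: strike multiples of p
                }
            }
        }
        List<Integer> primes = new ArrayList<>();
        for (int i = 2; i <= n; i++) {
            if (!composite[i]) primes.add(i);        // step 6: survivors are prime
        }
        return primes;
    }
}
```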
Hip-Hop Physics

Electrons dance to a quantum beat in the Hubbard model of solid-state physics

The Crowded Dance Floor

The stage setting for the Hubbard model is the same as that of the Ising model: a simple lattice with cubic symmetry—a cartoon of a crystalline solid. But the Hubbard dancers are more acrobatic. As noted above, Hubbard electrons can jump from one lattice site to another. (The range of motion is usually limited to nearest-neighbor sites.) The electrons also interact with one another, experiencing mutual repulsion whenever two electrons land on the same site. Finally, the choreography of Hubbard electrons is subject to a special rule, the Pauli exclusion principle, a definitive element of quantum mechanics.

Think of the Pauli principle (named for the Austrian physicist Wolfgang Pauli) as a generalization of the commonsense notion that two objects cannot be in the same place at the same time. The quantum version says that no two particles can occupy exactly the same quantum state. If two electrons have the same energy, for example, they must differ in angular momentum or some other property. On the Hubbard lattice, the exclusion principle implies that if two electrons occupy the same site, they must have opposite spins. An obvious corollary is that no site can ever accommodate more than two electrons, since at least two of them would have the same spin.

With these facts in hand, we can get a rough vision of the Hubbard model in action. Suppose the lattice is two-dimensional, like a sheet of graph paper. Some of the lattice points are occupied by electrons; some of those electrons are spin-up and the rest are spin-down. Thus a site can have any of four occupation states: no electrons, one up electron, one down electron or a pair of electrons with opposite spins. An electron can hop to any neighboring site, provided the move is allowed by the exclusion principle.
There’s one more essential element to introduce: the energy of the electrons. The exclusion principle requires that electrons with the same spin have distinct energies, which means there must be a ladder of available energy levels. If all the electrons have the same spin, they will necessarily fill all the rungs of the ladder from bottom to top. However, if half the electrons are spin-up and half are spin-down, they can be packed two to a rung, lowering the average energy level. This sharing of levels means that configurations with mixed spins can be energetically favorable. On the other hand, the presence of both up and down spins also allows pairs of electrons to occupy the same lattice site, which incurs an energy penalty because of their mutual repulsion. For each doubly occupied site, the overall energy of the system increases by an amount designated U. Thus there is a subtle competition between the cost of populating higher levels of the energy ladder and the cost of overcoming electromagnetic repulsion.

What happens when we push the Start button and let the electrons hop around on the lattice? In general, this is a very hard question, but a few “corner cases”—where some parameter is set to an extreme value—offer clues. One such parameter is the number of electrons. For a lattice of N sites, this number must lie between zero and 2N. Nothing much happens with zero electrons, of course, and it turns out the same is true with 2N electrons: All sites are filled with paired electrons, and none of the electrons can move. Another parameter is U, the energy of electrostatic repulsion for electrons at the same lattice site. If U is zero (no repulsion at all), the spin-up and the spin-down electrons form two independent populations, each of which drifts through the lattice oblivious of the other’s existence. At the opposite extreme, if U is infinite, the repulsion is so great that no site ever holds more than one electron. In this circumstance electrons can move only when there is an adjacent vacant site; if the lattice is half full (N electrons, with no vacancies), the configuration is frozen solid.

Still another parameter, whose role I have neglected so far, is temperature. At a temperature of absolute zero, the Hubbard model is compelled to adopt the configuration of lowest possible energy—the ground state. Thermal agitation at higher temperatures allows the system to escape this fate. With warming, higher-energy states come within reach. At infinite temperature all possible configurations are equally likely, and energy differences between states cease to have any influence on the behavior of the system.

As a practical matter, interest focuses not on the extreme cases but on realistic values of the parameters. Physicists would most like to know what happens when the number of electrons is at or near half-filling (one electron per site) and when the repulsion parameter U is greater than zero but far from infinite. As for temperature, it is important to identify the ground state, but we would also like to know how the behavior of the system changes as it warms up from absolute zero.
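As a toy illustration (my sketch, not from the article) of the bookkeeping described above: each site holds at most one electron per spin, a hop is blocked when the target site already has an electron of the same spin, and the interaction energy is U times the number of doubly occupied sites.

```java
// Toy bookkeeping for the Hubbard interaction term (illustrative only).
// up[i] / down[i] record whether site i holds a spin-up / spin-down electron;
// the Pauli principle is encoded by allowing at most one boolean per spin per site.
public class HubbardToy {
    // Interaction energy: U for each doubly occupied site.
    public static double interactionEnergy(boolean[] up, boolean[] down, double u) {
        double e = 0.0;
        for (int i = 0; i < up.length; i++) {
            if (up[i] && down[i]) e += u;   // double occupancy costs U
        }
        return e;
    }

    // An electron of a given spin may hop from one site to a neighbor only if
    // the neighbor does not already hold an electron of the same spin.
    public static boolean canHop(boolean[] spin, int from, int to) {
        return spin[from] && !spin[to];
    }
}
```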
Welcome to She Loves Math!

To go directly to the SITE MAP, click here (index of all topics I've written so far).

Finally! A free math site with a practical approach that happens to include more girls' examples. And, even better, a site that covers math topics from before kindergarten through high school.

OK, so I'm a girl, and I LOVE MATH! I mean really love math. I've written this web site since I believe I can be feminine and like pretty things, and still be a very technical person. It all started when, at the request of several of my math students (who happen to be girls), I was asked to "write down" several of the hints that I use when explaining math to them. I have to admit, I like to make things "look pretty" (and that includes interior decorating). And to be quite honest, the other reason I've written this web site is so I don't forget how to tutor the more difficult topics from year to year.

A little bit of history...

Early on, I found math to be my favorite subject since I wasn't crazy about memorizing, and I found that I could memorize a lot less in math. It really was like working on puzzles. What's more fun than that? I just didn't see what the big deal was with math. So, having been a math tutor for 20 years, I have found that if I relate the problems to stuff in my students' real life (being actively involved instead of passively involved), and direct the students on how much to memorize, the math becomes much easier and much more fun.

One thing I've noticed over the years is that the math books in schools tend to be directed towards boys' things: baseball, rocket ships, and throwing balls. I really don't think this is necessarily done on purpose, but the textbooks always struck me as being more masculine. Plus, math books are BORING! Thus I've also tried to make the pages look "prettier".

So, to sum up, my philosophy in teaching math is:

• Learning math should be an active experience and should relate to the world.
And always use "simpler numbers" if a problem's numbers are complicated. (For example, paying $4 for 2 oranges makes it more obvious that you need to divide than paying $5.88 for 3 oranges.)

• Learning math requires an understanding of what to "memorize" (for example, the tools), and what to "understand". We don't need to reinvent the wheel; it's already been invented.
• Learning more advanced math is no more than building on what is already known: if you can add 2 + 2, and build with math tools, you can be taught to solve a complicated Calculus problem.
• Sometimes there's just one little concept that isn't known or understood that makes a whole new math concept difficult. We need to find that and learn it!
• Math = Rules + Examples + Practice, Practice, Practice!!!

This SheLovesMath web site will have blog post entries (see Blog) but will mainly consist of web pages covering subjects from first learning numbers through Calculus and Geometry. The pages are meant to be primers, meaning I briefly cover topics starting with basic counting and working through high school math. I try to incorporate many of the hints and helpful tricks that I use in my day-to-day tutoring. If the books seem "babyish," that's because I mean it that way; I've been told that I explain things in "simple and plain" terms, which is usually not the case in "normal" math books.

You can go through these pages from the beginning, or use them to catch up, stay on course, or even get ahead of your peers, like during the summers (something I did as a student, since I was quite the nerd). Or, you can also just go to a specific page if you're having trouble with that particular topic as you're learning it in school. And remember, not unlike learning ballet, math requires practice to get better at it. So sit back and enjoy and I will show you how to make your most terrifying math topic easy to understand. I promise you!

HINT: Read the sections before you study those topics in class.
It will make class much more enjoyable. PLEASE NOTE: In no way am I insinuating with this web site that girls are worse in math than boys. I just believe that math examples need to be more geared towards girls, and math should be taught more simply for both girls and boys! I don’t have a problem with boys reading this site, since, for years and years, girls have been reading boys’ math textbooks. About me: I am passionate about mathematics, and most of my students are girls. I started to think about writing math books directed towards girls back in the early 1990′s, when I started tutoring. I have a B.A. degree in Mathematical Sciences from Rice University, and an M.S. degree in Operations Research from Stanford University. I worked over 26 years in technical positions at telecom companies (including several years as a Technical Writer, which I loved) before becoming a math tutor, and have also worked as an associate Math Professor at a local college. I would love your feedback! You can contact me here: lisa@shelovesmath.com
A cardiac monitor is used to measure the heart rate of a patient after surgery. It compiles the number of heartbeats after t minutes. When the data in the table are graphed, the slope of the tangent line represents the heart rate in beats per minute.

t (min)      36     38     40     42     44
Heartbeats   2509   2651   2784   2924   3059

The monitor estimates this value by calculating the slope of a secant line. Use the data to estimate the patient's heart rate after 42 minutes using the secant line between the points with the given values of t. (Round your answers to one decimal place.)

(a) t = 36 and t = 42
(b) t = 38 and t = 42
(c) t = 40 and t = 42
(d) t = 42 and t = 44
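A quick arithmetic check (my sketch, not part of the problem set): each secant slope is the change in heartbeats divided by the change in time, so the slopes over [36, 42], [38, 42], [40, 42] and [42, 44] come out to 415/6 ≈ 69.2, 68.25, 70.0 and 67.5 beats per minute.

```java
// Secant-line estimates of heart rate (beats per minute) from the table above.
public class HeartRate {
    static final int[] T = {36, 38, 40, 42, 44};
    static final int[] BEATS = {2509, 2651, 2784, 2924, 3059};

    // Slope of the secant line through the data points at times t1 and t2.
    public static double secantSlope(int t1, int t2) {
        int i = indexOf(t1), j = indexOf(t2);
        return (double) (BEATS[j] - BEATS[i]) / (T[j] - T[i]);
    }

    private static int indexOf(int t) {
        for (int k = 0; k < T.length; k++) {
            if (T[k] == t) return k;
        }
        throw new IllegalArgumentException("no sample at t = " + t);
    }
}
```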
Maximal Extension Field

Is the algebraic closure of a field the maximal extension field? It happens to be the maximal algebraic extension field, but is it also the maximal extension field? Note: such a field exists by partially ordering a set of extension fields and applying Zorn's lemma.

Take any set I and adjoin a set of transcendentals x_i indexed by I. Given any such extension there is always a strictly larger one obtained by adjoining one more transcendental not in the set { x_i : i in I }.

"Take any set I and adjoin a set of transcendentals x_i indexed by I. Given any such extension there is always a strictly larger one obtained by adjoining one more transcendental not in the set { x_i : i in I }."

But how can that be!?! Zorn's lemma guarantees the existence of a maximal element. While we are on this subject I encountered another simpler problem. Consider a group $\mathcal{G}$. Assume it has a maximal normal subgroup. Then create a set of all maximal normal subgroups $S$. Applying Zorn's lemma using ordering by inclusion, we can show that there is such a thing as a maximal maximal normal subgroup, meaning each group has a unique maximal normal subgroup. But that cannot be. For example, consider $\mathbb{Z}_{15}$: surely $\mathbb{Z}_3$ and $\mathbb{Z}_5$ are maximal normal subgroups, and they are different!!!

Hey Hacker, I am pretty sure the answer to your question is no. C(x) for instance, the field of all rational functions with complex coefficients, is a proper extension of C (and R). Note that we had to adjoin the transcendental indeterminate "x" to get beyond C. That is, the only proper extensions of C are necessarily infinite dimensional over C, since finite dimensional extensions are algebraic, and C has no proper algebraic extensions. So I think you can say that the algebraic closure K of a field F is a maximal finite dimensional extension (so long as K/F is finite dimensional, which certainly isn't always true, e.g. C/Q).

This raises the question though... given any field F, can't we just adjoin an indeterminate "x" to yield a proper extension F(x)? Seems like we can, so I was trying to figure out where the reasoning using Zorn's Lemma breaks down. I'm no set theorist, but I think the flaw is in considering "the set of all extension fields of F", which is just a stone's throw from invoking "the set of all sets". In other words, I think that the collection of all extension fields of F is a proper class, not a set, and so Zorn's Lemma doesn't apply. That's my best guess anyhow. Maybe you could prove some things by just considering "the set of all extension fields of F with cardinality less than ____ " or something like that.

"Seems like we can, so I was trying to figure out where the reasoning using Zorn's Lemma breaks down."

Okay, given a field F, let $S=\{E : F\leq E\}$ (we note S is non-empty trivially). Define a partial ordering on $S$ by $A\leq B \Leftrightarrow A\subseteq B$. We can easily verify that the relation $\leq$ on S is reflexive, transitive and antisymmetric. Now consider any chain $C\subseteq S$ and define a set $Z=\bigcup_{x\in C}x$. Clearly $\forall X\in C$, $X\leq Z$. Next we turn Z into a field by using the same binary operations as for the other fields. Thus, $Z\in S$. Therefore, every chain has an upper bound, and so there is a maximal element in $S$, a "maximal extension field". But according to you that set is not well-founded? Is that the right term to use? How about the confusion with my second problem, that every group that has a maximal normal subgroup has a unique maximal normal subgroup. Same idea? I find it interesting that my book, when it proved the existence of the algebraic closure, used Zorn's lemma in the same fashion as I did, but for some reason it worked for them.

On your group question... In the conclusion part of Zorn's Lemma, there is no reason to take the maximal element in the poset to be unique. Maximal for a poset just means "there is nothing bigger than it", not necessarily that "everything is smaller than it". In fact every element in the poset of all maximal normal subgroups is maximal, since the chains will consist merely of single element subsets.

Ahh! I see my mistake. When I used Zorn's lemma and got a maximal maximal normal subgroup I assumed it meant that every normal subgroup is a subset of it. But that was the mistake, because the initial ordering was $\leq$, not $\subseteq$. Stupid me... I hate it when you make such a mistake in algebra. If you are really curious I will show you the stupidest mistake I ever made. (I think the reason why I arrived at a faulty conclusion is because I was using a famous theorem that every ideal is contained in some maximal ideal. But the problem is that that set is ordered by the subset relation.)

Your use of Zorn's Lemma for fields is absolutely correct as far as I can tell, assuming that the set S exists, which is not guaranteed the way you've defined it. You wrote: S = {F <= E}. But from what universe is E being taken? I'm really getting out of my league here, but a principle that seems to come up often in axiomatic set theory is that you can't just invoke "the set of all 'things' that have property X". If I remember correctly, this is essentially what Frege tried to do, and Russell put the hammer down on him with his famous paradox. You have to, in one way or another, start with a previously shown (or assumed) to exist set, a universe, and snatch your elements from there.

In my book, Hungerford's "Algebra", his argument for algebraic closures is indeed very reminiscent of your argument. However, as he states before he goes into the proof, "The chief difficulty in proving that every field K has an algebraic closure is set-theoretic rather than algebraic. The basic idea is to apply Zorn's Lemma to a suitably chosen set (his italics) of algebraic extension fields of K." He then goes on to spend a great deal of energy showing that algebraic extensions of a field K only get so big... that their cardinality is bounded in fact by |K|*aleph[0]. This allows him to later construct an actual bona fide set in which the K-algebraic extensions can be embedded.

"the set of all 'things' that have property X"

This demonstrates the problem of not having a formal definition of a set. I once asked a question on this forum which I found interesting: the set of all finite sets. I was able to demonstrate that this set is not contained by any cardinal number. My attempt was to hopefully introduce a new concept in set theory. Just like there are some sets which cannot be contained by a finite number, so too there are sets which cannot be contained by cardinal numbers. The discussion ended with CaptainBlank explaining why I cannot do that (I did not really understand, because I never studied axiomatic set theory).

In my algebra book my author does the same thing. Half of the discussion is on showing the set can be contained in some other set. But back then I did not see the purpose in doing that and just ignored it (in fact I once posed a question on this forum of my version of the proof of algebraic closure asking whether it is correct. Now I realize that it was not, because I did not prove my set was contained).

"that their cardinality is bounded in fact by |K|*aleph[0]"

The only sets that can be used are those bounded by cardinal numbers?

Zorn's Lemma: Every non-empty partially ordered set in which every chain (i.e. totally ordered subset) has an upper bound contains at least one maximal element. The extensions of a field do not form a set in which every chain has an upper bound. In fact it's doubtful that they form a set at all, as opposed to a proper class.

I disagree rgep... my suspicion, like yours, is that the collection S of ALL extension fields of F is a proper class, and so ZL doesn't apply. But if we could take it to be an actual set, we would have that:

1) S is partially ordered by inclusion
2) for any chain of extension fields in S, their union is again an extension field of F, and is an inclusion-upper bound for that chain (not that the union of ANY collection of extension fields is an extension field, but the union over a CHAIN certainly is)

which are the hypotheses for ZL.

"The extensions of a field do not form a set in which every chain has an upper bound."

But if you take the union of all the fields in this chain they do form a field!! And since the union of sets always contains the sets, this union set is the upper bound. Thus, I believe you made a mistake.

You had some good questions earlier, wish I could say I could answer all of them off the top of my head. In any event I'd recommend "Classic Set Theory: for guided independent study" by Derek Goldrei. Not terribly dense, good for a first look at the subject, and definitely something you can get through on your own. And when you're done with it it's still good reference (for awkward little math help forum moments like these).
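As a concrete check of the $\mathbb{Z}_{15}$ example raised at the start of the thread (my addition): since $\mathbb{Z}_{15}$ is cyclic and abelian, every subgroup is normal and corresponds to a divisor of 15.

```latex
% Subgroup lattice of Z_15; subgroup orders are exactly the divisors 1, 3, 5, 15.
\[
\{0\} \;\subset\; \langle 5\rangle \cong \mathbb{Z}_3, \qquad
\{0\} \;\subset\; \langle 3\rangle \cong \mathbb{Z}_5, \qquad
\langle 5\rangle,\ \langle 3\rangle \;\subset\; \mathbb{Z}_{15}.
\]
```

Neither $\langle 3\rangle$ nor $\langle 5\rangle$ contains the other, so both are maximal elements of the poset of proper normal subgroups: maximality in a poset does not force uniqueness, exactly as the replies above explain.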
Participants at UK MMSG Reading 2011

The table below lists the participants at the Reading 2011 meeting in the UK MMSG series.

Name | Affiliation
Dr Ivan Argatov | Institute of Mathematics and Physics, Aberystwyth University
Prof Mike Baines | Department of Mathematics and Statistics, University of Reading
Mrs Michelle Baker | School of Mathematical Sciences, University of Nottingham
Dr Bonhi Bhattacharya | Department of Mathematics and Statistics, University of Reading
Dr Charlotte Billington* | Division of Therapeutics & Molecular Medicine, University of Nottingham
Mr Lloyd Chapman | Mathematical Institute, Oxford University
Mr Igor Chernyavsky | DAMTP, University of Cambridge
Prof Kevin Chipman* | Biosciences, University of Birmingham
Dr Huguette Croisier | School of Mathematical Sciences, University of Nottingham
Mrs Allison Davies | Mathematics, University of Birmingham
Professor Donna Davies | Clinical and Experimental Sciences, University of Southampton
Dr Yohan Davit | Mathematical Institute/OCCAM, University of Oxford
Dr Gianne Derks | Department of Mathematics, University of Surrey
Dr John Doe | Parker Doe Ltd
Dr Joe Dunster | Department of Mathematics and Statistics, University of Reading
Ms Louise Dyson | Centre for Mathematical Biology, University of Oxford
Dr Rosemary Dyson | Mathematics, University of Birmingham
Mr Matthew Edgington | Department of Mathematics and Statistics, University of Reading
Miss Laura Gallimore | Mathematical Institute (OCCAM), Oxford University
Ms Elnaz Gederi | Institute of Biomedical Engineering, Oxford University
Ms Kyriaki Giorgakoudi | Mathematical Biology / Mathematical Sciences, Institute for Animal Health / Loughborough Univ.
Professor Ian Hall* | Therapeutics and Molecular Medicine, University of Nottingham
Mr Jonathan Hiorns | School of Mathematical Sciences, University of Nottingham
Dr Anthony Holmes* | NC3Rs
Dr Huguette Huguette | School of Mathematical Sciences, University of Nottingham
Prof Oliver Jensen | School of Mathematical Sciences, University of Nottingham
Mr Alistair Johnson | Engineering Science, University of Oxford
Prof Simon Johnson | Division of Therapeutics & Molecular Medicine, University of Nottingham
Prof Rob Krams* | Department of Bioengineering, Imperial College London
Miss Georgina Lang | Mathematical Institute, University of Oxford
Rev Louis Mayaud | Department of Engineering Sciences, University of Oxford
Dr Tracy Melvin | Optoelectronics Research Centre, University of Southampton
Dr Pratibha Mistry* | Toxicology, Syngenta
Mr Sunny Modhara | Biosciences, University of Nottingham
Dr Benjamin Neuman* | School of Biological Sciences, University of Reading
Mr Aniayam Okrinya | Mathematical Sciences, Loughborough University
Dr Sevil Payvandi | Bioengineering, Imperial College London
Dr Joel Phillips | Mathematics, University College London
Prof Colin Please | School of Mathematics, University of Southampton
Miss Fran Pool | Department of Mathematics and Statistics, University of Reading
Mr Adrian Pratt | School of Mathematical Sciences, University of Nottingham
Prof Jon Preece* | Chemistry, University of Birmingham
Dr Joshua Rappoport* | School of Biosciences, University of Birmingham
Miss Aoife Roebuck | Engineering Science, University of Oxford
Dr Domingo Salazar* | Product Safety, Syngenta
Ms Teedah Saratoon | Department of Mathematics and Statistics, University of Reading
Dr David Schley | Mathematical Biology Group, Centre for Integrative Biology, BBSRC Institute for Animal Health
Dr Jennifer Siggers | Department of Bioengineering, Imperial College London
Miss Amy Smith | Mathematical Institute, University of Oxford
Mr Tom Snowden | Mathematics and Statistics, Department of Mathematics and Statistics
Dr Alexander Stevens* | Product Metabolism & Analytical Sciences, Syngenta
Dr Tom Sumner | Mathematical Biology, Institute for Animal Health
Dr Marcus Tindall | Department of Mathematics and Statistics, University of Reading
Mr Kim Travis* | Product Safety, Syngenta
Dr Russell Viner* | Chemistry, Syngenta
Dr John Ward | Mathematical Sciences, Loughborough University
Mr Tomasz Warzocha | Department of Theoretical Chemistry, Maria Curie-Sklodowska University
Dr Sarah Whalley | Product Metabolism, Syngenta
Dr Robert Whittaker | School of Mathematics, University of East Anglia
Dr Yinghui Zhou | Department of Mathematics and Statistics, University of Reading

An asterisk (*) denotes a problem presenter.
Helping with homework!

Please go easy on me as it's 35 yrs since I did this at school! I'm helping my daughter and the equation is x^2 - y^2 = 12 where y = x - 2. By substitution I've worked the answer out as x = 4, but none of the methodology (e.g. factorisation or the quadratic formula) in her notes or the textbooks gives me the right answer. Through factorisation, x = -2, which is wrong. Anyone able to help, as this is driving me nuts!!
Rgds, Mark

Re: Helping with homework!

R515 wrote: the equation is x^2 - y^2 = 12 where y = x - 2.

What are you supposed to do with this? Are you solving a system of equations?

R515 wrote: none of the methodology (e.g. factorisation or quadratic formula) in her notes or the text books give me the right answer.

Please show what you did so we can see what's going wrong.

R515 wrote: Through factorisation, x = -2 which is wrong

How did you get this by factoring?? Substitution is what they show here for doing this:

x^2 - (x - 2)^2 = 12
x^2 - (x^2 - 4x + 4) = 12

What did you get from this?

Re: Helping with homework!

Thanks for the reply. The question is to find the values of x and y. This is how it went, using factorisation as my daughter has been taught:

x^2 - (x - 2)^2 = 12
x^2 - (x - 2)(x - 2) = 12
x^2 - x^2 - 4x + 4 = 12
x^2 - x^2 - 4x = 8
so -4x = 8, which gives x = -2

But substituting -2 for x will not give the correct answer. Where are we going wrong?

Re: Helping with homework!

Try doing the grouping like I showed, so you don't forget the minus:

x^2 - (x^2 - 4x + 4) = 12
x^2 - x^2 + 4x - 4 = 12

When you do the signs right, the rest should come out right too.

Re: Helping with homework!

Thanks for taking the time to help; I think I've nearly got it. Is it the effect of removing the bracket in x^2 - (x^2 - 4x + 4) = 12 that turns -4x into +4x and +4 into -4, and is this what makes it work out right with x = 4?
Thanks, Mark

Re: Helping with homework!

R515 wrote: is it the effect of removing the bracket that makes it work out right with x = 4?

Yeah, it's the negatives. You can learn about working with negatives and brackets here. It's kind of important.

Re: Helping with homework!

Thanks for the help. It finally came to me yesterday evening how the (-) outside the bracket influences what happens to the numbers inside when the brackets are removed. Appreciate the help.
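For anyone who wants to double-check the arithmetic, here's a quick sketch (mine, not from the thread) of the substitution step in Python:

```python
# Solve x^2 - y^2 = 12 with y = x - 2 by substitution.
# x^2 - (x - 2)^2 = x^2 - (x^2 - 4x + 4) = 4x - 4, so 4x - 4 = 12.

def solve_by_substitution():
    x = (12 + 4) / 4   # from the linear equation 4x - 4 = 12
    y = x - 2
    return x, y

x, y = solve_by_substitution()
print(x, y)               # 4.0 2.0
assert x**2 - y**2 == 12  # sanity check against the original equation
```

Note that the x^2 terms cancel, which is exactly why the sign inside the bracket matters: getting it wrong leaves -4x instead of +4x and produces the incorrect x = -2.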
Here's the question you clicked on: how many atoms are in one level teaspoon of salt?

Best Response: Look up the density of salt, and convert from volume to mass. Then divide by the formula mass of NaCl to find the number of moles of NaCl formula units. Multiply by Avogadro's number to convert from moles to number of formula units. Multiply by 2 because there are two atoms (Na and Cl) in each formula unit.
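That recipe turns into a quick back-of-the-envelope calculation. The sketch below uses assumed round numbers — the teaspoon volume and the bulk density of granulated salt are my estimates, not figures from the thread:

```python
# Estimate the number of atoms in a level teaspoon of table salt,
# following the steps above: volume -> mass -> moles -> formula units -> atoms.
# TEASPOON_ML and BULK_DENSITY are assumed approximate values.

AVOGADRO = 6.022e23        # formula units per mole
TEASPOON_ML = 4.93         # one US level teaspoon, in cm^3
BULK_DENSITY = 1.2         # g/cm^3, rough bulk density of granulated salt
FORMULA_MASS_NACL = 58.44  # g/mol

mass_g = TEASPOON_ML * BULK_DENSITY       # volume -> mass
moles = mass_g / FORMULA_MASS_NACL        # mass -> moles of NaCl
formula_units = moles * AVOGADRO          # moles -> NaCl formula units
atoms = formula_units * 2                 # 2 atoms (Na + Cl) per formula unit

print(f"{atoms:.2e} atoms")  # roughly 1.2 x 10^23
```

With these assumptions the answer lands on the order of 10^23 atoms; a different density (e.g. the crystal density of 2.165 g/cm^3 rather than the bulk density) shifts the number by roughly a factor of two.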
How to Subtract Time in Excel

Before learning how to subtract time in Excel, it is important to first understand the way that Excel stores times. Times are actually stored as positive decimal values in Excel. It is only the formatting of an Excel cell that causes a numerical value to be displayed as a time, rather than a decimal. Therefore you can subtract time in Excel in the same way that you can subtract any other numbers.

The table below shows examples of numerical values and the times that they represent in Excel.

Decimal value | Equivalent time
0    | 00:00
0.25 | 06:00
0.5  | 12:00
0.75 | 18:00
1.0  | 24:00
1.25 | 30:00
1.5  | 36:00

If you have a cell containing any positive decimal value, this can be displayed as a time by formatting the cell with the time format [hh]:mm or [hh]:mm:ss (depending on whether you want to show hours, minutes and seconds, or just hours and minutes).

To format a cell with a time format:

- Right click on the cell to be formatted
- Select the option Format Cells...
- Ensure the Number tab is selected in the window that pops up
- Select the option Custom from the list of Categories and type the format style ([hh]:mm or [hh]:mm:ss) into the box on the right
- Click OK

Note that the square brackets surrounding the hour part of the time format definition (i.e. [hh]) tell Excel to display the total number of hours, even if this is greater than 24. If the square brackets were not included, Excel would break the result down into a date plus the remaining number of hours.

Excel Time Subtraction Examples

The following spreadsheet shows four examples in which Excel times are subtracted. The formulas used are shown in column C of the spreadsheet on the left and the results are shown in the spreadsheet on the right.
Formulas:

  A        | B        | C
1 Start Time | End Time | Time Difference
2 05:30    | 10:00    | =B2-A2
3 13:45    | 17:02    | =B3-A3
4 00:00:15 | 07:32:05 | =B4-A4
5 22:05:36 | 23:01:19 | =B5-A5

Results:

  A        | B        | C
1 Start Time | End Time | Time Difference
2 05:30    | 10:00    | 04:30
3 13:45    | 17:02    | 03:17
4 00:00:15 | 07:32:05 | 07:31:50
5 22:05:36 | 23:01:19 | 00:55:43

Note that, in the results spreadsheet of the above example, cells C2 and C3 are formatted with the time format hh:mm and cells C4 and C5 are formatted with the format [hh]:mm:ss.
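The same arithmetic works outside Excel too. As a quick illustration (in Python, just to mimic what Excel does internally — this is not part of the original article), a time held as a fraction of a day can be subtracted like any other number and then formatted back to an [hh]:mm:ss style:

```python
# Mimic Excel's time storage: a time is a fraction of a 24-hour day
# (86400 seconds), so subtraction is plain numeric subtraction.

def to_day_fraction(h, m, s=0):
    return (h * 3600 + m * 60 + s) / 86400.0

def format_hhmmss(fraction):
    total_seconds = round(fraction * 86400)
    h, rem = divmod(total_seconds, 3600)
    m, s = divmod(rem, 60)
    # [hh]:mm:ss style: total hours are shown even if greater than 24
    return f"{h:02d}:{m:02d}:{s:02d}"

diff = to_day_fraction(10, 0) - to_day_fraction(5, 30)
print(format_hhmmss(diff))  # 04:30:00, matching =B2-A2 above
```

This mirrors why 0.25 displays as 06:00 in the table above: 0.25 of a day is six hours.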
Quantopian - Simply Buy

I had a request for an algo that was dirt-simple. Buy a basket of stocks, hold them, and see how it does. Here it is!

If you want to try different stocks (these were chosen mostly at random) it's very easy. Click "clone algorithm" and edit the sids. When you start typing s-i-d- in the code editor, you'll get an auto-complete window that will help you find the stock you want. Just type the company name and let the auto-complete do the work.
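The posted algorithm itself isn't visible in this archived copy, but the buy-and-hold idea is simple enough to sketch in plain Python. This is illustrative only — it is not Quantopian API code, and the tickers and prices are invented:

```python
# Plain-Python sketch of buy-and-hold: split the starting capital equally
# across a basket, buy once at the entry prices, and value the same
# positions at later prices. Symbols and prices are made up.

def buy_and_hold(capital, entry_prices, later_prices):
    per_stock = capital / len(entry_prices)                  # equal weighting
    shares = {sym: per_stock / p for sym, p in entry_prices.items()}
    return sum(shares[sym] * later_prices[sym] for sym in shares)

entry = {"AAA": 50.0, "BBB": 20.0, "CCC": 100.0}
later = {"AAA": 60.0, "BBB": 18.0, "CCC": 130.0}
value = buy_and_hold(30000, entry, later)
print(round(value, 2))  # 34000.0
```

Since there is no rebalancing, the only decisions are the basket and the entry date — which is exactly why this makes a useful baseline to compare fancier strategies against.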
Finding the Equation of a Graphed Line - Concept

Sometimes we'll be given a graph of a line and told to find the equation. There are many methods of finding the equation of a line from only a graph, such as finding the slope and a point, or finding two points. In order to understand finding the equation of a line from its graph, one should understand the different forms of an equation of a line, especially point-slope and slope-intercept form.

Alright guys, this is one of the most important skills you're going to get out of your Algebra 1 class, and that is: when you're given a graph, you're asked to write the equation that created that graph. A lot of the time we do it backwards. Most people are pretty good at taking an equation and drawing the graph of it; here you have to do it backwards. Someone is going to give you a graph and you're going to be asked to write the equation.

So let's think about what that means. You know a bunch of different forms of equations for lines. One really commonly used form is the slope-intercept form. For me personally this is my favorite form; I think graphs are easiest to think of in terms of their slope and their y-intercepts. So if you wanted to write an equation in this form you need two things: you need to find the slope and you need to find the y-intercept. If you can find those two things you can write the equation really easily.

If you want to use this, the point-slope form, again you're going to need to find a point and a slope, hence the name. To use this equation you need the point and the slope. So no matter how you go about writing the equation, you're going to have to find the slope, and either the y-intercept or one of the other points.

Keep in mind what you guys know about finding a slope: if you know two points on the line, you can use this equation to find the slope, or, if the line is drawn on graph paper, a shortcut I like to use is just counting the vertical rise and putting that on top of the horizontal run to find the slope. It's like using a slope triangle.

So when you guys are given a graph and you're asked to write an equation, keep in the back of your mind that you have some options. You can either find the slope and the y-intercept, or you can find the slope and any point you want to, and then once you have either of those pairs of information you can go through and find the equation for the line.
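The two-points method described above is mechanical enough to write down as a short procedure. Here's a small sketch (my own illustration, not from the video):

```python
# Given two points on a line, find the slope m (rise over run) and the
# y-intercept b, then the line is y = mx + b.

def line_through(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)  # rise over run (assumes a non-vertical line)
    b = y1 - m * x1            # solve y = mx + b for b using one of the points
    return m, b

m, b = line_through((1, 3), (4, 9))
print(f"y = {m}x + {b}")  # y = 2.0x + 1.0
```

Reading two lattice points off graph paper and feeding them in is exactly the slope-triangle shortcut: the rise is y2 - y1 and the run is x2 - x1.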
[R] if ... problem with compound instructions

Bill.Venables at csiro.au
Mon Jan 1 07:09:24 CET 2007

Step 1:

quadrant <- 1 + (X[, 1] < 0) + 2*(X[, 2] > 0)

This is not the usual labelling of the quadrants, as '3' and '4' are interchanged. If you want to be picky about it:

quadrant <- ifelse(quadrant > 2, 7 - quadrant, quadrant)

Step 2:

angle <- atan2(X[,2], X[,1]) %% (2*pi)  # I think this is what you want

(why did you want to know the quadrant?)

Oh, then you might do

X[, 3:4] <- cbind(quadrant, angle)

Bill Venables
CMIS, CSIRO Laboratories, PO Box 120, Cleveland, Qld. 4163
Office Phone (email preferred): +61 7 3826 7251
Fax (if absolutely necessary): +61 7 3826 7304
Mobile (rarely used): +61 4 1963 4642
Home Phone: +61 7 3286 7700
mailto:Bill.Venables at csiro.au

-----Original Message-----
From: r-help-bounces at stat.math.ethz.ch [mailto:r-help-bounces at stat.math.ethz.ch] On Behalf Of Richard Rowe
Sent: Monday, 1 January 2007 12:36 PM
To: r-help at stat.math.ethz.ch
Subject: [R] if ... problem with compound instructions

I am having problems with the 'if' syntax. I have an n x 4 matrix, X say. The first two columns hold x, y values, and I am attempting to fill the second two columns with the quadrant in which the datapoint (x, y) lies and with the heading angle. So I have two problems: 1) how to do this elegantly (at which I've failed, as I can't seem to vectorize the problem) and 2) how to accomplish the task in a for loop ...

for (i in 1:length(X[,1])) (
if ((X[i,1] == 0) & (X[i,2] == 0)) (X[i,3] <- NA; X[i,4] <- NA) else (

removing the pathological case ... then a series of nested if statements assigning quadrant and calculating heading:

if ((X[i,1] < 0) & (X[i,2] >= 0)) (X[i,3] <- 4; X[i,4] <- atan(X[i,1]/X[i,2]) + 2*pi) else (

In the first instance the ';' seems to be the source of a syntax error. Removing the second elements of the compound statement solves the syntax problem and the code runs.

As the R syntax is supposed to be 'Algol-like' I had thought if <A> then <B> else <C> should work for a compound <B>, i.e. that the bracket (X[i,3] <- NA; X[i,4] <- NA) should be actioned.

1) Any elegant solutions to what must be a common task?
2) Any explanations for the ';' effect?

Richard Rowe

Dr Richard Rowe
Zoology & Tropical Ecology
School of Tropical Biology
James Cook University
Townsville 4811
ph +61 7 47 81 4851
fax +61 7 47 25 1570
JCU has CRICOS Provider Code 00117J
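For what it's worth, the whole task can be done without the loop. Here is a sketch along the lines of the vectorized answer above (the sample matrix is made up, and the quadrant formula here is my own variant that yields the usual anticlockwise 1..4 labels). Note the braces {} around the compound statement — parentheses () in that position are what caused the original syntax error:

```r
# Vectorized quadrant + heading angle for an n x 4 matrix X whose first
# two columns hold (x, y). Compound statements in R need braces {}.
X <- cbind(c(1, -1, -1, 1, 0), c(1, 1, -1, -1, 0), NA, NA)

quadrant <- 1 + (X[, 1] < 0) + 2 * (X[, 2] < 0)
quadrant <- ifelse(quadrant > 2, 7 - quadrant, quadrant)  # usual 1..4 labels
angle <- atan2(X[, 2], X[, 1]) %% (2 * pi)

bad <- X[, 1] == 0 & X[, 2] == 0   # the pathological origin case
if (any(bad)) {
  quadrant[bad] <- NA; angle[bad] <- NA   # ';' separates statements inside {}
}
X[, 3:4] <- cbind(quadrant, angle)
```

Inside braces, the semicolon behaves exactly as the original poster expected; it is only inside parentheses (which delimit a single expression) that it is a syntax error.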
Giveaway #9: Vanilla Beans It’s the 12 Days of Christmas Giveaways! On the 9th day of Christmas, there were vanilla beans. I haven’t bought vanilla extract in years. I can make a 750ml bottle of my own with about a quarter of an order of vanilla beans. The rest? Split, scrape, add to batter/ice cream custard/cookies, repeat. Every dessert tastes better with those pretty little flecks of vanilla beans. True story. To enter the giveaway, simply answer the following question in the comments below: What dessert are you serving/looking forward to eating with your big Christmas/holiday meal? Update: The random generator thingy has spoken: Jen L with comment #611 is the winner! Please check your email for instructions on claiming your prize. The elves are waiting! I absolutely love Mince pie! It gets the weirdest looks, mostly because it’s traditionally known as mincemeat pie – and nobody wants to touch a dessert with “meat” in the title so I typically have at least a couple of slices leftover. But really, there’s no meat. You can earn up to 4 extra entries by: – Tweet about the giveaway (it’s easy, just click here!) and leave the link to your Tweet in the comments. – Leaving a separate comment below that you like us on Facebook. – Leaving a separate comment below that you follow the blog’s Instagram feed. – Leaving a separate comment below that you follow my mad recipe pinning and house projects that I wish I could pay someone to do for me on Pinterest. (Extra entries can only be counted if left as a separate comment – your comment # is your entry.) Good luck! The fine print: - Maximum of five (5) entries per person. - Giveaway ends at 12:01am (Texas time!) on December 13th. - Winner will be selected by one of those cold, soulless, unfeeling random number generator thingies and announced on this post after selected. - Winner will receive a 1/2 lb of vanilla beans (maximum retail value of prize = $25). - Prize must be claimed within 7 days or it will be forfeited. 
- Winning “extra entry” comments can be subject to verification.
- Prize can only be shipped to a US address.
- All prizes are provided by Confections of a Foodie Bride, unless stated otherwise.
- Official giveaway rules can be found here.

{ 716 comments }
I love polvorones (russian tea cookies ) made with hazelnuts and filled with hersheys kisses, but I try to have a new dessert every christmas. An assortment of treats is our tradition…fudge, decorated sugar cookies, peppermint bark, and almond bark covered pretzels. And, pie, I’m not sure what kind yet. Ah baking gold, yes please!!! Sugar cookies and using vanilla sugar would be wonderful! My mom handles Christmas dinner and honestly it’s a bit of a disaster every year…it’s a tradition! It’s really beyond her to handle the dinner, but it’s also tricky to take it over. So…nothing for Christmas dessert, but my hubby wants Floating Island for his birthday dessert on Christmas Eve. I like you on FB. I tweeted. https://twitter.com/Mom24_4evermom/status/277935074001960960 I follow you on Pinterest. Cherry pie, cookies, and fudge! There’s a chocolate hazelnut roulade recipe from an Alice Medrich cookbook that my mom often makes for Christmas, and it’s delicious. I like you on Facebook. I’ll be making peppermint Oreo and chocolate chunk cookies! Very much looking forward to trying them. Great giveaway! I’d love to make my own vanilla extract. My girlfriend makes awesome cookies, and my family often has many pies. My aunt’s raspberry lemon pie is real good. I look forward to being done with finals and able to bake up some classic sugar cookies at home with my mamma. My family always has Baked Alaskas for dessert – this year instead of a pastry crust I’m going to use a chocolate raspberry fudge brownie as the base! I tweeted: https://twitter.com/floptimism/status/277945995051024384 I like your FB page! Decorating sugar cookies with my children is the dessert I’m looking forward too and we always save some for Santa! I tweeted about the contest. I follow you on Pinterest! I like you on Facebook. I can’t wait to eat the red velvet cheesecake we’re making as Jesus’ Birthday Cake! I follow you on pinterest! I like you on facebook! Apple Pie. 
I’ll be making a red velvet cake for our friends’ Christmas gathering this year. My mother in law’s recipe, but YOUR cream cheese frosting!! Can’t wait! Ginger cookies are a favorite! sugar cookies!! it’s the only time of the year we make them. I am looking forward to my old fashioned 16 layer cake with cooked fudge icing! So very YUMMY! like on FB I follow you on Facebook I follow you on Pinterest I love sitting down with a plate of cookies and a cocktail in front of the tree – keeping it simple! I follow you on Pinterest I tweeted this giveaway! https://twitter.com/keshakeke/status/277951314011496448 I Instagram follow you I can’t wait for some chocolate chip pecan pie! peppermint cheesecake! Leftover christmas cookies! Especially kolackys and rosettes! follow on pinterest I follow on Facebook. I follow on pinterest follow on fb My family always enjoys a yule log made with a chocolate sponge cake and chocolate whipped cream for dessert on Christmas day I always look forward to Italian rainbow cookies! I am looking forward to the latkes and sufganiyot! I follow you on facebook! I follow you on pinterest! I won’t be near family this year but my aunt used to make this french cherry pie that had cherry filling, cream cheese, cool whip, and a graham cracker crust. So good! Homemade eclairs!! Texas Chocolate Sheet Cake I follow on twitter! I follow on instagram too! and pinterest… Going to try some chocolate/caramel moneybread I like you on fb! Cheesecake…LOVE cheesecake!!! I’m an email subscriber. Chocolate lava cakes!! I like you on fb I follow you on Pinterest!! The American Beauty Cake from your blog! And I follow on Facebook! And twitter! I love having a variety of cookies, especially sugar cookies and peanut butter blossoms! Going to make pumpkin pie since my family loved it at Thanksgiving, and two pies were eaten lickety split. Will also make sugar cookie cutouts with my children, and one other cookie variety which is undecided at this point. Yum! 
I like you on Facebook! I always look forward to the large cookie assortment. I don’t know why but I don’t bake cookies very often outside of the holidays. I always make a Nutella Cream Pie for Christmas, it is the best! I follow you on Pinterest. My fav item to prepare for Christmas is a dish of assorted cookies. Thumbprints are my favorite. I follow you on Facebook. I like you on FB I follow you on Pinterest All of the cookies – spritz, gingersnaps, shortbread – impossible to pick just one! I follow you on Pinterest. I like you on Facebook. banana pudding I’m going to make a gingerbread trifle for our family gathering – but I can’t wait to eat my mom’s awesome chocolate dipped peanut butter cookies and holiday sugar cookies! Follower via Facebook! Hmm favorite dessert is probably the cookies we leave out for Santa (not his! just extras from the batch) I’m most looking forward to making, and of course, eating cheesecake. I like you on facbook I follow you on Pinterest. I liked you on FB. Pear crumble pie! Baked Alaska! We always enjoy a great Christmas cheesecake after our Christmas dinner…can’t wait! I follow you on Pinterest! Looking forward to pecan pie! Posted it on twitter! https://twitter.com/ashleyklaty I like you on Facebook! Following you on interest Homemade chocolate truffles!! We love Martha Washington balls! Every Christmas morning we make egg nog pancakes! I like you on facebook! minicream cheese cupcakes I like you on facebook I follow you on pinterest! I follow you on instagram! My favorite is a cranberry crisp. Or any crisp for that matter! Yum! I am a self admitted fruitcake lover Follow Confections of a Foodie Bride on Instagram Follow Confections of a Foodie Bride on Pintrest Looking forward to making Mrs. Mueller’s cookies. A recipe my mom fit when we were stationed in Italy. I like you on Facebook. I follow you on Pinterest I Tweeted! I’m looking forward to bourbon chocolate pecan pie! It’s incredible! I follow you on Pinterest! 
I like your page on Facebook! Creme brulee! A very small slice of maple syrup pie and gingerbread in just about any shape or form. I follow you on Pinterest I like you on fb! I’m looking forward to my mom’s almond cherry pound cake and homemade marshmallows! I like you on facebook. I’m looking forward to eating many of my chocolate chip pudding cookies. Simple and perfect! I like you on Facebook. My Grandma always has a big tray full of homemade candies and cookies and other sweets. That’s always my fave! My mom is going to make a sour cream cake my great-grandmother used to make. I can’t wait to try it! I tweeted the giveaway – https://twitter.com/tasteofhomecook/status/278134283829383168 I like you on Facebook My Cherry Pie is always a winner I am a fellow Pinner =) & follow u I follow you on Facebook I like you on Facebook I love the pecan balls/snowball cookies! Yummm! Pumpkin cheesecake! With a gingersnap crust, whipped topping, caramel and pecans. Hand me my fat pants….. I like you…on facebook. I follow you on pinterest. I’m looking forward to serving a triple chocolate cheesecake for Christmas dessert! I follow you on Facebook! (Coleen Hill) I tweeted the phrase! http://twitter.com/TheRedheadBaker/status/278144153089634305 I follow you on Instagram! (TheRedheadBaker) I follow you on Pinterest! (TheRedheadBaker) Pecan Pie! Awesome giveaway! I’ve been meaning to make my own vanilla extract for so long! I’m most looking forward to to eating cheesecake for Christmas. Already a fan on FB I love Cranberry pie! So festive and yummy! I am looking forward to a peppermint cake roll! gingerbread cake with homemade vanilla whipped cream. yum! I am looking forward to a Swedish specialty – Rice ala malta (a rice pudding of sorts) I like you on facebook. My sugar cookies. I’m excited to make them this year with my 21 month old. He loves to help me bake! I love my Mom’s Pecan pie, but now that my husband is making artisan ice cream…that will be at the top of the list too. 
i’m looking forward to cookies! lots & lots of cookies. tweeted https://twitter.com/thepajamachef/status/278152938457796608 i like you on facebook i follow you on instagram i follow you on pinterest! We don’t have a traditional Christmas dessert but this year, I think we’ll be having pecan pie! i follow you on instagram!! i follow you on pinterest!! I’m looking forward to my mothers flan! It’s amazing and definitely a Christmas tradition in our house. Can’t wait to taste some on Christmas day!! Following on FB Following on Instagram Following on Pinterest My DH loves pumpkin pie. I like baked French toast Xmas morning. I really look forward to the meal more. I am not much on the desserts that day. I think its due to all the parties and cookies exchanges prior to the Big Day. My mom’s Christmas cut-out cookies. Love them! And I can never make them like she does. I’m looking forward to homemade cinnamon rolls and pecan pie! I love my sweets!!! I like you on facebook! Fig tarts from my mom! I follow on Pinterest! Apple cider muffins. Yum! I love Yorks, for some reason our family always has those for dessert! I tweeted! https://twitter.com/rebeccanels/status/278181103582445569 I like you on facebook! I follow you on Pinterest! Key lime bars! Pecan pie! I like you on Facebook. Millionaire Salad – but cookies, fudge, Texas trash are all part of the dessert table! Cookies, cookies and Texas trash! Hopefully I’ll have the time to make them with the new babe. A pumpkin roll! I know it’s much more of a Thanksgiving dessert, but I’m always asked to make one at Christmas and I don’t argue! I like Foodie Bride on FB I follow you on Instagram My mom’s Grannydear’s Chocolate Chess Pie with homemade whipped cream- it’s always my favorite! I follow all your wonderful pins on Pinterest! I follow you on Pinterest Chocolate chip cookies I like your Facebook page Christmas fudge! …and maybe a pumpkin bread pudding!! Molten lava cake! I look forward to the cut-out cookies! 
I follow on Instagram The dessert I am most looking forward to serving is an orange/lavender tart. My first time this year and it sounded different! I liked you on FB Am a fan on FB I follow you on Pinterest we normally don’t have a big dessert after xmas dinner its just cookies cookies cookies….all day lon g!!!! i follow your “mad pinning” on pinterest : ) I tweeted! https://twitter.com/wishesNdishes/status/278203496564658176 My family is German/Scandinavian, so we are all about the cookie assortment! We generally have three or four large platters of cookies…and I have to try one of each. I can’t wait for chocolate chip cookies for Santa that are really for me Milk punch? Can alcohol & ice cream count as dessert!? I’m looking forward to Christmas cookies! Vanilla cheesecake. I follow on fb I follow on instagram I follow on pinterest Peanut Butter Chocolate fudge – sooo good I might trying making a cranberry-apple pie this year. I can’t wait to make (and eat!) white chocolate peppermint m&m cookies! I follow you on pinterest I like you on facebook I made Brandy Alexander Cheesecake topped with fresh strawberries last year and I think I’ll make it again because it was such a hit. My father-in-law had seconds! Cheesecake topped with chocolate mousse and chocolate ganache…..it’s divine! I like you on Facebook I follow all your boards on Pinterest i love a good shortbread cookie sugar cookies I like you on facebook I usually make cookie dough truffles -yum. Need to get my baking list together! Pumpkin roll with cream cheese filling. I’m making that Southern Comfort Caramel Apple Pie you posted last year. I like you on Facebook. I follow you on Pinterest. It seems like the desserts are always different, so anything chocolate will work! I like you on Facebook. I also follow you on Pinterest. i’m making truffles this year and am so excited!! I follow your mad recipe pinning and house projects that you wish you could pay someone to do for you on Pinterest. 
My favorite, and most fought over dessert in our family at Christmas time is my mother’s home-made nanaimo bars. It’s the most delicious thing that’s ever touched your tongue. I love mince meat tarts as well but my favorite has to be Christmas pudding I follow you on facebook And pintrest too~! Liked you on Facebook! Follow you on Pinterest! My dad’s pumpkin pie and his fruit cake I tweeted https://twitter.com/thespiffycookie/status/278251694461165568 I like you on Facebook I follow you on Pinterest Cheesecake and pumpkin pie!! Peppermint bark cookies. A dreamy 3-layer pumpkin cake with maple cream cheese frosting. *sigh* I follow on Pinterest. Italian Creme Cake I follow you on twitter as mm1411 574 December 10, 2012 at 4:31 pm …and tweeted https://twitter.com/mm1411/status/278265547999830016 I am a lover of vanilla beans also. It is hard to pin down what I am looking forward to eating, but my SIL makes these little mini hoagies that are to die for. I follow you on Facebook. I love looking at all the things you post on Pinterest, so of course I follow you. I am looking forward to my mom’s cookie dough truffles. I like you on fb. I follow you on pinterest. Gingerbread trifle or cheesecake… can’t decide! I like you on FB I look forward to making butter cookies each year. I follow you on FB! Christmas Cookies! ^_^ Follow you on FB. Follow you on instgram. Follow you on pinterest. cherry cheesecake and carrot cake! pinterest follower xoeskie1 twitter follower @cabbie413 I am looking forward to fudge….chocolate….peanut butter……any kind is great! Pie…any kind of Pie. I like on FB. I follow on Pinterest. Looking forward to making and eating dark chocolate brownie cookies, peanut butter balls, and possibly a delicious pumpkin crumb cake with a brown sugar glaze. Buttermilk pie–it’s kinda like a chess pie I love making and eating pecan pie after the Christmas meal! Left this tweet: https://twitter.com/confessrecipe/status/278312104988114944 I like your page on Facebook! 
I’m following your boards on Pinterest! I love any of the pies that my granny bakes, especially her pecan pie! I like you on Facebook. I follow you on Pinterest. my mom’s famous sugar cookies! Butternut Cake I Tweeted: https://twitter.com/jenron1/status/278336948425355264 I like you on Facebook I follow you on Instagram I follow you on Pinterest Looking forward to all those cookies! Especially chewy ginger cookies. I follow on Instagram. I follow on Pinterest. bread pudding! My husband and I usually bake a version – cinnamon raisin or croissant of challah I’m a FB fan. follow you on facebook! pumpkin banana mousse tart. Merry Christmas I follow you on twitter Creme Brullee french toast Looking forward to eating fudge, apple pie cookies, and pizalles!! I liked you on Facebook! I follow your Instagram feed… I also follow you on Pinterest peanut butter blossoms! My mom’s cut out cookies… yummmm… I am a FB fan! Not much of a pie person… I LOVE cookies, and the holidays are the best excuse to indulge. Those vanilla beans could be great in some cookies. We have mexican food on Christmas eve. I can hardly wait. I already follow you on pinterest! I did some of my Christmas baking and made thin mints – they are to die for! Can’t wait! I follow you on Pinterest I think I am making chocolate cheesecake for Christmas dinner. Follow you on Pinterest. http://pinterest.com/scrapdiva1/ Following you on Pinterest. Leann Lindeman I love making sugar cookies with royal icing. They turn out so lovely and so nummy. Small families make for small desserts! I follow you on FB I follow you on pinterest I want to follow you on instagram! I love chocolate pie. I look foward to indulging on Christmas cookies. I tweeted… https://twitter.com/swirledsweeties/status/278551698543366144 I like you on Facebook. I like you on Pinterest. I like you on Instagram. Looking forward to sticky toffee pudding, which I make for new year’s eve. 
my family always has apple pie on christmas i like you on fb i follow you on instagram I follow you on pinterest My mom’s chocolate covered cherry cookies. Delicious! I may have already had two. I like you on facebook. I follow you on pinterest. I follow you on Pinterest. Chinese food! I’m looking forward to the chocolate peppermint cake we’ll be making to celebrate Christ’s birth. I like you on facebook. I follow you on pinterest. I’m looking forward to eating apple pie this year for Christmas. I tweeted about the giveaway: https://twitter.com/kdelavega87/status/278636821099999232 I like you on Facebook I follow you on Pinterest Mac n chesse!!! I am really looking forward to eating some scottish christmas pudding! I like you on FB I like you on Instagram I like you on Pinterest chocolate chip cookies Mom’s streufuli (that I still can’t spell, 30 years later) Cheesecake! We normally have my sister bake a Bouche de Noel but she won’t be with us this year, so we are trying something new! I like you on facebook. Apple pie! I’ve been overseas for ages and I’ve missed pie so much. I always look forward to my mom’s rum balls for Christmas dessert! And, of course, some cutout cookies! Coconut Cream Bread Pudding All of the cookies! Especially gingerbread! I like you on Facebook I follow you on Pinterest! Seven layer cookie bars! I make them every year. I am a fan on facebook. i follow you on pinterest. cinnamon rolls… do those count?! love them! like you on facebook following you on pinterest following you on instagram Sugar cookies! Decorated of course. We make fondue every Christmas Eve. I love it and can’t wait! Although, those tiny forks are dangerous… I follow you on Pinterest and I like you on Facebook. Pecan Pie! I like you on FB Home made ice cream I follow you on Pinterest Clam chowder! I follow you on facebook I follow you on pinterest I like you on facebook I like you on FB. I will be making pecan pie. 
For me, a gingerbread pudding with hard sauce and the hubby can’t wait for his raspberry cheesecake! I’m looking forward to having some delicious pecan pie. I like you on Facebook. (Becky Jordan-Tracey) I follow you on Pinterest! idgiet Hot cross buns! Yum! I’m looking forward to eating gingerbread cookies! pecan pie! I follow on pinterest I follow on facebook I follow on instagram Apple pie or cobbler.
From version 4.0.0, mjograph can be called from Matlab, where you can use it as an alternative to the "plot" command. mjograph interacts with Matlab through the "mplot" and "mfigure" commands, which become available after a short setup is properly done. Their usage is easy and almost the same as the "plot" and "figure" commands in Matlab.

• mplot(y, 'overwrite') --- plots y; a previous plot from the same variable will be overwritten.
• mplot(x, y, 'overwrite') --- plots the pair of x and y; a previous plot from the same pair of variables will be overwritten.
• mplot(x, y, 'new') --- plots the pair of x and y as a new series.
• mfigure(index) --- activates a graph window (the same role as the "figure" command). The index must be between 0 and 24 and can be omitted.

Example session:

mplot(x, y); % the quadratic function will change to cubic
mplot(x, y, 'new'); % another quadratic function will be plotted as a new entry
mplot(randn(1, 1000)); % you will get a noise-like waveform in the new window
Ratio of areas

January 13th 2013, 06:46 AM
Ratio of areas
The ratio of similitude of two similar triangles is 3:5. What is the ratio of their areas? Please help a little; I don't understand the term "similitude" in the problem.

January 13th 2013, 07:28 AM
Re: Ratio of areas
"Ratio of similitude" refers to the ratio of the corresponding sides in the similar figures, in this case the triangles.

January 13th 2013, 08:44 PM
Prove It
Re: Ratio of areas
If you have two similar figures, then you have a scaling factor for the sides, call it k. The scaling factor for the areas is then k^2.

January 13th 2013, 09:20 PM
Re: Ratio of areas

January 14th 2013, 06:08 PM
Prove It
Re: Ratio of areas
Yes, well done :)
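The answer implied by the thread (the original poster's reply did not survive extraction) follows directly from Prove It's rule: with a side ratio of 3:5, the scaling factor is k = 3/5, so

```latex
\frac{A_1}{A_2} = k^2 = \left(\frac{3}{5}\right)^2 = \frac{9}{25},
```

that is, the areas are in the ratio 9:25.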
STATISTICAL PROGRAMS - PSYCHOLOGY PROGRAM LIBRARY (psylib)

These programs may be downloaded to your PC and will run under Microsoft Windows (just click on the program name and follow the instructions to download the program to your PC).

(Locate the program icon on your PC and click to start -- follow the instructions -- output, if created, has the extension ".txt")

agree.exe - Computes agreement indices from Likert data & tests significance
alpha.exe - Performs Ss x A ANOVA & Cronbach's alpha
canchi.exe - Canonical correlation of row & column variables of contingency tables
cancorr.exe - Canonical correlation from user supplied correlation matrix
chi.exe - Chi-square for Contingency Tables
cl.exe - Computes Clear Language Effect Size Indicators
corr.exe - Simple Correlation
cv.exe - Critical Values of z, t, chi-square, & F
dunn.exe - Dunnett's Test: Exp. Groups vs. Control
epsilon.exe - Computes critical values for Box's epsilon
fet.exe - Fisher's Exact Test
heter.exe - Simulates Heteroscedasticity
mort.exe - Computes payments for fixed rate mortgages
mp.exe - Calculates Exact Multinomial (or Binomial) Probabilities
oneway.exe - Computes One-way ANOVA
pa.exe - Parallel Analysis for Principal Components Factor Analysis
pca.exe - Principal Components Analysis & Varimax Rotation
power.exe - Computes Power for one-way ANOVA
powmr.exe - Computes Power for Multiple Regression
powr.exe - Computes Power for Simple Correlation
range.exe - Studentized Range Tests: N-K & Tukey's
ranper.exe - Random Permutations
Rinv.exe - Inverts Correlation matrix - Computes multiple Rs, betas, partials, etc.
rpb.exe - Point Biserial Correlations
rsig.exe - Significance of Simple Correlation Coefficients
rtgrp.exe - Randomization Test for Independent Groups
rtr.exe - Randomization Test for Correlation
rtrm.exe - Randomization Test for Repeated Measures Designs
runs.exe - Runs Tests
satt.exe - Pooled error for Ss/AxB designs (Satterthwaite)
stat.exe - Probabilities of Common Stats: z, t, chi-square, & F
steig.exe - Comparing Independent & Nonindependent r's
tetra.exe - Phi Coefficient and Tetrachoric Correlation
wilco.exe - Performs Wilcoxon's Rank Sum Test (or Kruskal-Wallis)
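As an illustration of the kind of computation these programs perform, here is a minimal Python sketch of Cronbach's alpha, the statistic computed by alpha.exe. The function name and data layout are assumptions for the example, not part of the psylib package.

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items matrix of scores.

    scores: list of rows, one per respondent; each row holds that
    respondent's score on each of the k items.
    """
    k = len(scores[0])
    # Population variance of each item across respondents,
    # and of each respondent's total score.
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Two perfectly correlated items give alpha = 1.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```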
How to Guess How Many Candies Are in a Jar Dealing with bored children? Entertain them with the children's games in this Howcast video series. You Will Need • A jar filled with spherical or oblate spheroid candies • Calculator • Tape measure • Vernier calipers (optional) • Computer with internet access (optional) 1. Step 1 Estimate jar capacity Ask for or estimate the total volume of the jar as best you can. Convert the volume to milliliters. 2. Convert units quickly by typing "convert (your original units) into milliliters" into a search engine. 3. Step 2 Determine whether the candies are spheres Determine whether the candies are spheres. If they are balls, like gumballs or jawbreakers, they're spheres. If the candies are round, but longer than they are wide, they are "oblate spheroids." 4. Step 3 Find the volume of one candy Find the volume of one candy, also in milliliters. First, find the radius of one candy, either by estimating, by using a tape measure, or by using vernier calipers, which will provide the most precise measurement. If your candy is spherical, use the formula V = (4/3)πr³, where r is the radius of one candy, in centimeters. Round pi to 3.142 if you don't have a scientific calculator. 5. If the candies are oblate spheroids, use the formula V = (4/3)πa²b, where a is the longer radius, and b is the shorter radius. 6. Step 4 Determine percentage of volume used Calculate the percentage of the total volume the candies take up in the jar. Calculate 64 percent of the jar's total volume if the candies are spheres, and calculate 66.5 percent of its volume if they are oblate spheroids. 7. Step 5 Figure it out For spherical candies, divide your estimate for the volume of one candy into 64 percent of the volume of the jar. For oblate spheroid candies, divide the volume of one candy into 66.5 percent of the jar's volume. You've got the answer; now amaze your friends with your guess! 8. Did you know? Bubble gum was invented by Walter Diemer in 1928.
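The steps above can be collapsed into a short Python function for the spherical case. The function name is just for illustration; for oblate spheroids you would swap in V = (4/3)πa²b for the candy volume and 0.665 for the packing fraction.

```python
import math

def estimate_candies(jar_volume_ml, candy_radius_cm, packing=0.64):
    """Estimate how many spherical candies fit in a jar.

    jar_volume_ml: jar capacity in milliliters (1 mL = 1 cubic centimeter)
    candy_radius_cm: radius of one candy, in centimeters
    packing: fraction of the jar actually occupied by candy
             (0.64 for spheres, 0.665 for oblate spheroids, per the article)
    """
    candy_volume = (4.0 / 3.0) * math.pi * candy_radius_cm ** 3
    return int(jar_volume_ml * packing / candy_volume)

# A 1-liter (1000 mL) jar of gumballs with a 1 cm radius:
print(estimate_candies(1000, 1.0))  # → 152
```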
Proof Builders of the Great Pyramid had access to a modern computer

reply to post by AthlonSavage

Wow, that's amazing... but you haven't seen anything yet! Take a look at this one! Yes, the Empire State Building... it is encoded with mathematical constants, just like the pyramid! All you have to do is take the known dimensions of the building, and then perform a few COMPLETELY OBVIOUS calculations on the numbers, like so...

The floor area is 208,879 square meters. If we take the square root of the floor area, we get 457.032822. If we then DIVIDE that by the height of the observatory, we get 1.22487268829. If we compute the tangent of 1.22487268829 radians, we get 2.77457304. Now, e is a mathematical constant that works out to roughly 2.7...

So there you have it... the builders of the Empire State Building encoded complex mathematical formulas into its very dimensions!!!! What sinister purpose was this done for? Is the Empire State Building a stargate that uses its precise location and geometric perfection to communicate with aliens? What sort of advanced technology did the builders of the Empire State Building have that we don't have today? Maybe we will never know!
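For anyone who wants to play along at home, the satirical arithmetic does check out; here is a quick Python sketch reproducing the quoted steps (the observatory height itself is not quoted in the post, so the quotient is taken as given):

```python
import math

# Reproduce the "completely obvious" numerology step by step,
# using only the figures quoted in the post.
floor_area = 208_879                 # square meters, as quoted
root = math.sqrt(floor_area)         # ~457.032822, as quoted
quotient = 1.22487268829             # root / observatory height, as quoted
tangent = math.tan(quotient)         # ~2.77457304, which the post likens to e

print(f"sqrt = {root:.6f}, tan = {tangent:.8f}, e = {math.e:.8f}")
```

Of course, tan(1.22487...) = 2.7746 only resembles e = 2.71828 if you round both to one decimal place, which is exactly the joke.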
Dover Publications Math and Science Discount Yes, Dover has a lot of nice titles at wonderfully low prices. Wow, browsing around their web site I'm surprised to see that science and mathematics are just the tip of the iceberg: Also, I discovered that they have a series of hardback mathematics titles that I wasn't aware of at all: I'm familiar mostly with their math titles. I'll list some of the ones I think are standouts among those I've read or skimmed. It would be great if others could list their favorites. I would be especially interested to see some recommendations for their best physics titles. - Jacobson, Basic Algebra I - Shilov, Linear Algebra - Gelfand, Lectures on Linear Algebra Fourier Analysis: - Tolstov, Fourier Series - Euclid's Elements - Heath, A History of Greek Mathematics Group Theory: -Scott, Group Theory Information Theory: - Ash, Information Theory Number Theory: - Dudley, Elementary Number Theory - Kolmogorov and Fomin, Introductory Real Analysis - Kolmogorov and Fomin, Elements of the Theory of Functions and Functional Analysis - Gelbaum, Counterexamples in Analysis - Flanigan, Complex Variables - Cartan, Elementary Theory of Analytic Functions of One or Several Complex Variables - Friedman, Foundations of Modern Analysis - Knopp, Infinite Sequences and Series - Edwards, The Riemann Zeta Function - Mendelson, Introduction to Topology - Willard, General Topology - Steen and Seebach, Counterexamples in Topology - Gemignani, Elementary Topology
Prime Numbers: Mersenne Primes Edition

Recall that a natural number greater than 1 is called prime if the only numbers which divide it evenly (i.e. when you divide you get a remainder of 0) are 1 and the number itself. The first few prime numbers are: 2, 3, 5, 7, 11, 13, ...

Mathematically, prime numbers are important because every natural number can be decomposed uniquely into a product of prime numbers:

$n = p_{1}^{d_1} p_{2}^{d_2} \dotsm p_{k}^{d_k},$

where $p_1, p_2, \dotsc, p_k$ are prime numbers. For example,

$67914 = 2 \cdot 3^{2} \cdot 7^{3} \cdot 11.$

So primes are the "atoms" of the natural numbers. They are real-life useful as well. The encryption used on webpages when you enter your credit card uses very, very large prime numbers. What it uses is the fact that it's easy to multiply large primes together, but it's hard to take a very large number and break it down into its prime factors.

Recently the Great Internet Mersenne Prime Search (GIMPS) announced that computers have verified that two newly discovered Mersenne numbers are both prime. They are the first primes large enough to qualify for the $100,000 prize from the Electronic Frontier Foundation. The prize is for the first prime with more than 10,000,000 digits and these primes have 11,185,272 and 12,978,189 digits, respectively. If you would like to see the largest prime known to humankind, click here. Warning! That link takes a long time to load, so only do it if you have a fast internet connection and lots of time on your hands!

More on Mersenne Primes below:

A Mersenne prime is a prime number of the form $2^{n}-1$ for some natural number n. They were considered way, way back by Euclid, but were first seriously studied by Marin Mersenne in the 17th century. He compiled a list of all the $n = 1, 2, 3, \dotsc, 257$ which give prime numbers. No one is quite sure how he did such a massive calculation ($2^{257}-1$ has 78 digits!). After 200 years mathematicians were able to verify his calculations and see that he was (mostly) correct!
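The unique-factorization claim above is easy to check by machine. A minimal trial-division sketch, applied to the post's 67914 example:

```python
def prime_factorization(n):
    """Decompose n > 1 into prime factors by trial division.
    Returns a dict mapping each prime to its exponent."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

# The example from the text: 67914 = 2 * 3^2 * 7^3 * 11
print(prime_factorization(67914))  # {2: 1, 3: 2, 7: 3, 11: 1}
```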
Not all numbers of the form $2^{n}-1$ are prime. In fact, it is easy to see that if n can be factored, then so can $2^{n}-1$. You just use the factorization: $2^{ab} - 1 = (2^{a} - 1)(1+2^{a} + 2^{2a} + 2^{3a} + \dotsc + 2^{(b-1)a}).$ However, just because n is a prime number doesn't mean that $2^{n}-1$ is prime. For example, $2^{11}-1$ is not a prime because it can be divided by numbers other than 1 and itself (Which ones? That's homework!). Lots of things are still unknown about Mersenne primes. For example, nobody knows if there are finitely many or infinitely many! So far (counting the ones above) 45 Mersenne primes have been found. If you're interested in finding the next Mersenne prime, GIMPS is always looking for people to volunteer their computers to work on this problem in their free time. If you're lucky enough to find the next one, you'll be famous and wealthy (the EFF is offering $150,000 to whoever discovers a prime number with more than 100,000,000 digits and $250,000 for the first prime with more than 1,000,000,000 digits). If you would like to buy a poster showing one Mersenne prime (only one can be squeezed onto a poster!), you can get them here. They are supposed to offer a poster with the newest primes in the near future. Addendum: Today Terence Tao, Fields Medalist and UCLA faculty member, has a post on his blog where he talks about the math used by the GIMPS project to test if a Mersenne number is actually prime. You can check it out here. Note: UCLA computers were used to find one of the new primes. 14 thoughts on "Prime Numbers: Mersenne Primes Edition" 1. Prime numbers are awesome. I clicked on the link for the largest prime number and it took over 7 minutes to load, and then Internet Explorer crashed. 2. I think it's amazing that these two primes have been found so close together in time. 
I've been a member of GIMPS for over a year now, and I've tested fewer than a score of exponents; they aren't kidding when they say you need patience to run the software. Way to go, GIMPS, for finding these numbers. The concept of distributed computing has always interested me, and the virtual supercomputers that these projects make are truly mind-blowing: GIMPS reports 28 trillion floating point operations per second (28 teraFLOPS). If you want to join a group like this, GIMPS is fairly simple, but I have recently discovered BOINC, a client that interfaces with a number of different organizations in a standardized way. With the help of BOINC I have joined Primenet, which searches for large (non-Mersenne) prime numbers, as well as LHC@home, which analyzes data from/for the LHC. As a side note, I'd like to point out that it must suck to be the owner of the computer that found the second prime: if it had been found just a few days earlier then he would have won the $50,000 prize from the EFF. Oh well, I guess it can't be that bad, having your name go down in history as one of the discoverers of one of the largest primes. Now the race is on for the first 100 million digit prime. 3. That is really a superhuman achievement. The sheer processing power behind such large numbers must be truly amazing to work with. I'm glad that there are people out there using immense amounts of RAM to fight mankind's battles of discovery. 4. Wow, I never knew prime numbers were so important, especially enough for someone to win $100,000. I would hate to see a prime number with more than 1,000,000,000 digits on a math test. 5. I vaguely remember my Dad running a program trying to find a large prime number. I wonder if this is what he was trying to do. I would have thought it would be easy to find a large prime number using a program, but I guess not, since they would give $100,000 to find one. 6. This is very interesting indeed. 
I may have to try outsourcing my computer in order to win $100k. I wonder what applications prime numbers are used for outside of encryption? 7. The idea of opening this software for anyone to have a stab at finding these numbers is very interesting. I remember reading, I think in Popular Mechanics, that PlayStation 3 consoles, which can connect to the internet via wifi, have been used remotely to compute something having to do with protein folding. I think that OU has also linked all of their computer lab computers together to do immense computations while not actively used by a student. I don't know for sure what the drawbacks to using a network of computers versus one very powerful computer are, but it seems like these principles could be used to compute even larger primes. Perhaps that is what Prof. Tao did at UCLA? 8. First, I am greatly amused that my internet almost crashed when I tried to load the largest known prime number. Secondly, why is our discrete math class learning about discrete math when we could be trying to win $250,000? 9. I can't even begin to conceive how big a number with 12,978,189 digits would be. Just think, one trillion has only 13 digits... This is like comparing the size of a human to the sun. It's amazing to think how far we've come with technology. To be able to accurately come up with numbers this big is astounding, let alone prime numbers. It'll be interesting to see where the next 20 years will take us. 10. Hi, I'm a contributor to the GIMPS project and I've made the official verification of the last 7 Mersenne primes found by the GIMPS project. I'm looking for math help about a conjecture we have about a primality test for Wagstaff numbers ( (2^q+1)/3 ). Sir Wagstaff has asked a student to search for such a proof, but AFAIK he has found nothing yet. In a few words, it is well known that the method (LLT = Lucas-Lehmer Test) used for proving that a Mersenne number is prime can also be used for Fermat numbers. 
There is also an LLR (Lucas-Lehmer-Riesel) test for numbers of the form k*2^n +/- 1. However, we think that a modified version of the LLT "could" be used (once proven) for proving that a Wagstaff number is prime. Up to now, we only have a sufficiency proof, providing a PRP test. We are looking for the necessity proof. This work is based on the properties of a digraph under x^2-2 modulo a prime, using the cycles rather than the tree. Why is this important? Because that would be the first fast primality test for a kind of numbers that are not of the N-1 or N+1 form, like Mersenne and Fermat numbers are. Look at: http://tony.reix.free.fr/Mersenne/SummaryOfThe3Conjectures.pdf and: http://trex58.wordpress.com/math2matiques/ . Tony Reix □ Tony, Thanks for the comment! We'll be sure to pass along your invitation to the number theorists in the department. Please keep us updated on your investigations! 11. Hi, Thanks for passing my invitation to OU number theorists. I'm an amateur, so my skills are limited to some basic methods that Lehmer, H.C. Williams, and P. Ribenboim used in their books. There are many other ways to prove the theorem (the LLT) that is used for searching for Mersenne primes. I hope that someone will be interested and will use his math skills to make progress in this area, exploring the cycles of the digraph under x^2-2. Anton Vrba and Robert Gerbicz used different techniques than mine. What I found surprising and so beautiful with the Lucas-Lehmer Test is that it is so... simple: start with S = 4, compute S^2-2 (modulo the Mersenne number 2^q-1 to be checked), and do it again, until q-2 squarings have been done. If the result is 0, then the number is prime. Astonishingly simple! (though it requires very fast FFT and computers to do it). These days, I'm using a 2x quad-core Nehalem machine for testing some Mersenne numbers with exponents of size 46M. It takes only about 5.5 days per number, using the Glucas program. 
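The Lucas-Lehmer recipe described above (start with S = 4, square and subtract 2 modulo the Mersenne number, q − 2 times, prime iff the result is 0) translates almost line for line into code. A sketch for small odd prime exponents:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer primality test for the Mersenne number 2^p - 1,
    where p is an odd prime. Start with S = 4, repeatedly compute
    S^2 - 2 modulo 2^p - 1, p - 2 times; prime iff S ends at 0."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Exponents of small Mersenne primes among the first odd prime exponents:
print([p for p in [3, 5, 7, 11, 13, 17, 19, 23, 31] if lucas_lehmer(p)])
# [3, 5, 7, 13, 17, 19, 31]
```

For the record-sized exponents GIMPS tests, the squaring step is done with FFT-based multiplication, exactly as the comment says; the logic is otherwise the same.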
The PRP test for Wagstaff numbers has been implemented by Jean Penné, based on the GIMPS mprime library. We hope to find a big Wagstaff PRP (PRobably Prime) one of these days... About my investigations, I'm afraid I've made no progress in months. I should summarize the ideas I had... Maybe that could help. Not sure... For people interested in this subject, I recommend the following books: "The Little Book of Bigger Primes" by Paulo Ribenboim, and "Édouard Lucas and Primality Testing" by H.C. Williams (who gave me some help). Join the GIMPS! 12. Hi. I have created articles on prime numbers here: You may want to check it out.
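Two claims from the post above are also easy to verify directly: the factorization identity for 2^(ab) − 1, and the fact that a prime exponent is not sufficient (2^11 − 1 is the homework example, so running this spoils the answer):

```python
# Identity from the post:
# 2^(ab) - 1 = (2^a - 1) * (1 + 2^a + 2^(2a) + ... + 2^((b-1)a))
def identity_holds(a, b):
    lhs = 2 ** (a * b) - 1
    rhs = (2 ** a - 1) * sum(2 ** (i * a) for i in range(b))
    return lhs == rhs

assert all(identity_holds(a, b) for a in range(1, 9) for b in range(1, 9))

# 11 is prime, yet 2^11 - 1 is composite (homework spoiler):
n = 2 ** 11 - 1
divisors = [d for d in range(2, n) if n % d == 0]
print(n, divisors)  # 2047 [23, 89]
```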
How to Solve for Reactions

The first step in many problems is solving for reactions; here is an example of a situation that you may encounter. Simplify the distributed load into a point load: since this distribution is constant, the load is applied at the midpoint, 2.5 ft to the left of B. (If the distributed load is not uniform, the point load should be applied wherever the "center of mass" of the distributed load is; the center of mass of a uniform distributed load is the midpoint.) Now you can continue on to finding the reactions:
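The original figure is not reproduced here, so the numbers below (span, load intensity, loaded length) are hypothetical. The procedure, though, is the one described above: replace the distributed load by its resultant at the centroid, then apply moment and force equilibrium:

```python
# Hypothetical beam for illustration (the original figure is not shown):
# simply supported beam with supports A (left) and B (right), span L,
# carrying a uniform load w over the 5 ft segment adjacent to B.
L = 10.0          # ft, span between A and B (assumed)
w = 2.0           # kip/ft, uniform load intensity (assumed)
load_len = 5.0    # ft, loaded length ending at B

# Step 1: replace the distributed load by its resultant at the centroid.
P = w * load_len               # total load = 10 kip
x_from_B = load_len / 2        # 2.5 ft to the left of B, as in the text
x_from_A = L - x_from_B        # 7.5 ft from A

# Step 2: moment equilibrium about A gives R_B; vertical equilibrium gives R_A.
R_B = P * x_from_A / L         # sum of moments about A = 0
R_A = P - R_B                  # sum of vertical forces = 0
print(R_A, R_B)                # 2.5 7.5 (kip)
```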
Linear Interpolation FP1 Formula

Re: Linear Interpolation FP1 Formula
You were wrong when you did not answer them in the first place. It is too late now though.
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.

Re: Linear Interpolation FP1 Formula
Yes, it is too late. Still, if PJ gives me nothing, I still have two potentials I can pursue, despite having developed no attraction to them.

Re: Linear Interpolation FP1 Formula
If you were sure of that I would say do not bother to pursue them.

Re: Linear Interpolation FP1 Formula
I am close to being sure. No real physical attraction and I don't know them at all.

Re: Linear Interpolation FP1 Formula
You also said you had no desire to talk with PJ ever again. So you did not answer her email. Then you went and talked to her and emailed her. It does not pay to burn your bridges when you are unsure.

Re: Linear Interpolation FP1 Formula
Yes. I did not think I would be in a situation where I have basically no prospects for dating in the following academic year, because I thought I could have IY in the bank, but clearly this is not the case.

Re: Linear Interpolation FP1 Formula
Let us establish the goal. You are determined to date? You are sure?
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. 
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.

Re: Linear Interpolation FP1 Formula
It is difficult to establish something like that, because I change my mind about it every day.

Re: Linear Interpolation FP1 Formula
Yep, and such vacillation can be fatal.

Re: Linear Interpolation FP1 Formula
But I cannot decide what I want...

Re: Linear Interpolation FP1 Formula
Oh boy. Then how can you achieve it?

Re: Linear Interpolation FP1 Formula
I do not know, I am afraid.

Re: Linear Interpolation FP1 Formula
That is okay. Everybody is afraid. Some people just do not show it or let it affect their decisions. When I was competing I was nervous every time. Fear of failure. It does not disappear even after thousands of times.

Re: Linear Interpolation FP1 Formula
Well, you keep saying that I am ready for a girlfriend, but I do not think so. It still feels like a major jump for me.

Re: Linear Interpolation FP1 Formula
I am not saying that if you do not want one. It is your choice, but do not react out of fear. It is a major jump for everyone. If you are not yet ready then wait until you are. If you are ready then go ahead.
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. 
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.

Re: Linear Interpolation FP1 Formula
The thought of having one seems nice, but it will take me a while before I have the attributes I would prefer to start a relationship with, e.g. independence and financial stability. The latter may not happen until my late 20s.

Re: Linear Interpolation FP1 Formula
If you can wait then do so. I have no problem with whatever you decide.

Re: Linear Interpolation FP1 Formula
I just find it a bit scary -- the thought of having someone know a lot of personal details about you, the intricacies of how your life works, etc... for example, I've never had anyone over at my house (from school) for about 5 years now. That's not because they don't want to -- they do -- but I don't feel comfortable with it. The thought of a girl meeting my parents is especially scary. These kinds of things make me think I am not ready.

Re: Linear Interpolation FP1 Formula
Hi zetafunc;
There are a lot of things in life worse than those. We all have to get through them every day. Conquering, or at least masking, fear is a vital part of the maturation process. The first thing to do is establish a pattern of success. This will build self-confidence.

Re: Linear Interpolation FP1 Formula
What would constitute a pattern of success?

Re: Linear Interpolation FP1 Formula
That is the start of this thread. A few successful dates. Nothing like a couple of trial runs before the main event.
In mathematics, you don't understand things. 
You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.

Re: Linear Interpolation FP1 Formula
To do that I must first master the art of talking to a woman for longer than 5 minutes.

Re: Linear Interpolation FP1 Formula
You do not have to master it. You just have to do it. Confidence will make the whole process go well.

Re: Linear Interpolation FP1 Formula
But when I try, it is difficult to find things that hold her interest.

Re: Linear Interpolation FP1 Formula
What do you talk about? How do you know they have lost interest?
Switched Reluctance Motor

Model the dynamics of a switched reluctance motor

The Switched Reluctance Motor (SRM) block represents the three most common switched reluctance motors: the three-phase 6/4 SRM, the four-phase 8/6 SRM, and the five-phase 10/8 SRM, as shown in the following figure. The electrical part of the motor is represented by a nonlinear model based on the magnetization characteristic, composed of several magnetizing curves, and on the torque characteristic computed from the magnetization curves. The mechanical part is represented by a state-space model based on the moment of inertia and the viscous friction coefficient.

For versatility, two models are implemented for the SRM block: a specific model and a generic model. In the specific SRM model, the magnetization characteristic of the motor is provided in a lookup table. The values are obtained by experimental measurement or calculated by finite-element analysis. In the generic model, the magnetization characteristic is calculated using nonlinear functions and readily available parameters.

Dialog Box and Parameters
Configuration Tab
Parameters Tab: Generic Model
Parameters Tab: Specific Model
Advanced Tab

Inputs and Outputs

The block input is the mechanical load torque TL (in N.m). TL is positive in motor operation and negative in generator operation. The block output m is a vector containing several signals. You can demultiplex these signals by using the Bus Selector block from the Simulink® library.

Signal | Definition             | Units
V      | Stator voltages        | V
flux   | Flux linkage           | V.s
I      | Stator currents        | A
Te     | Electromagnetic torque | N.m
w      | Rotor speed            | rad/s
teta   | Rotor position         | rad

The power_SwitchedReluctanceMotor example illustrates the simulation of the Switched Reluctance Motor. To develop positive torque, the currents in the phases of an SRM must be synchronized with the rotor position. The following figure shows the ideal waveforms (Phase A inductance and current) in a 6/4 SRM. 
Turn-on and turn-off angles refer to the rotor position where the converter's power switch is turned on and turned off, respectively. [1] T.J.E. Miller, Switched Reluctance Motors and Their Control, Clarendon Press, Oxford, 1993. [2] R. Krishnan, Switched Reluctance Motor Drives, CRC Press, 2001. [3] D.A. Torrey, X.M. Niu, E.J. Unkauf, "Analytical modelling of variable-reluctance machine magnetisation characteristics," IEE Proceedings - Electric Power Applications, Vol. 142, No. 1, January 1995, pp. 14-22. [4] H. Le-Huy, P. Brunelle, "Design and Implementation of a Switched Reluctance Motor Generic Model for Simulink SimPowerSystems," Electrimacs2005 Conference.
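The mechanical state-space part described above reduces to J·dw/dt = Te − F·w − TL together with d(theta)/dt = w. A minimal forward-Euler sketch of that part alone; all parameter values here are illustrative assumptions, not the block's defaults:

```python
# Minimal sketch of the SRM mechanical state-space:
#   J * dw/dt = Te - F*w - TL,   d(theta)/dt = w
# Parameter values are illustrative, not the block's defaults.
J = 0.05      # kg*m^2, moment of inertia (assumed)
F = 0.02      # N*m*s,  viscous friction coefficient (assumed)
Te = 5.0      # N*m, constant electromagnetic torque (assumed)
TL = 1.0      # N*m, load torque (positive in motor operation)

dt, w, theta = 1e-4, 0.0, 0.0
for _ in range(int(2.0 / dt)):          # simulate 2 s
    dw = (Te - F * w - TL) / J          # acceleration from torque balance
    w += dw * dt
    theta += w * dt

print(w)  # rises toward the steady state (Te - TL) / F = 200 rad/s
```

In the real block, Te is of course not constant; it comes from the magnetization curves as a function of current and rotor position.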
RE: st: RE: RE: comparing different means using ttest

Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

From: DE SOUZA Eric <eric.de_souza@coleurope.eu>
To: "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu>
Subject: RE: st: RE: RE: comparing different means using ttest
Date: Fri, 17 Dec 2010 10:06:33 +0100

"The regression still assumes independent error terms."

True. But GDP does often behave as a random walk (with structural breaks, maybe). Hence the error terms are very likely to be uncorrelated. One could also robustify against serial correlation in the error terms.

Eric de Souza
College of Europe
BE-8000 Brugge (Bruges)

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Nick Cox
Sent: 17 December 2010 09:57
To: 'statalist@hsphsun2.harvard.edu'
Subject: RE: st: RE: RE: comparing different means using ttest

The regression still assumes independent error terms. There is more scope for doing something about that in a regression framework than within -ttest-, but in terms of what Eric suggested it is still a matter of six on one side and half-a-dozen on the other.

DE SOUZA Eric

It does, because it simply avoids the starting point of David Lempert, which in my opinion is a false start: regressing GDP levels on a time trend will get you nowhere. If David is interested in testing the equality of GDP growth rates across two time periods, you pool the data, calculate the GDP growth rate and regress this variable on two dummy (binary) variables, one for each time period. In order to avoid perfect collinearity you drop one of the two dummies and test whether the coefficient on the other is equal to zero. 
Steven Samuels

But, Eric, I don't think that pooling will solve the dependence issues that Nick mentioned.

On Dec 16, 2010, at 1:26 PM, DE SOUZA Eric wrote:

Why not just pool your data and regress %GDP-growth on a dummy (binary) variable (and a constant, of course) which takes the value of one for one of the two sub-samples and zero for the other, and test whether the coefficient on the dummy is significantly different from zero (or examine its confidence interval)? You can robustify for heteroscedasticity.

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
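The pooled-dummy regression Eric describes can be sketched outside Stata as well. A minimal numpy version with simulated growth rates (the data here are made up purely for illustration); the dummy coefficient equals the difference in period means, and its t-statistic is the pooled two-sample t-test the thread is discussing:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated GDP growth rates (%) for two periods (illustrative data).
g1 = rng.normal(3.0, 1.0, 40)   # period 1
g2 = rng.normal(2.0, 1.0, 40)   # period 2

y = np.concatenate([g1, g2])
d = np.concatenate([np.zeros(40), np.ones(40)])   # dummy = 1 for period 2
X = np.column_stack([np.ones_like(y), d])         # constant + dummy

# OLS: beta = (X'X)^{-1} X'y ; beta[1] is the mean-growth difference.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (len(y) - 2)                 # residual variance
cov = s2 * np.linalg.inv(X.T @ X)                 # classical (non-robust) covariance
t_stat = beta[1] / np.sqrt(cov[1, 1])
print(beta[1], t_stat)  # dummy coefficient equals mean(g2) - mean(g1)
```

Robust (heteroscedasticity-consistent) standard errors would replace the classical covariance above, which is what "robustify" refers to.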
Summary: Economical toric spines via Cheeger's Inequality

Noga Alon, Bo'az Klartag

Let G = (C_m^d)_∞ denote the graph whose set of vertices is {0, ..., m-1}^d, where two distinct vertices are adjacent if and only if they are either equal or adjacent in the m-cycle C_m in each coordinate. Let G_1 = (C_m^d)_1 denote the graph on the same set of vertices in which two vertices are adjacent if and only if they are adjacent in one coordinate in C_m and equal in all others. Both graphs can be viewed as graphs of the d-dimensional torus. We prove that one can delete O( ) vertices of G_1 so that no topologically nontrivial cycles remain. This improves an O(d^{log_2(3/2)} ) estimate of Bollobás, Kindler, Leader and O'Donnell. We also give a short proof of a result implicit in a recent paper of Raz: one can delete an O(
What is the sum nC0 + 2(nC1) + ... + 2^n(nCn)? - Homework Help - eNotes.com

What is the sum nC0 + 2(nC1) + ... + 2^n(nCn)?

By the binomial theorem,

`(1+x)^n = {}^nC_0 x^0 + {}^nC_1 x^1 + \dots + {}^nC_n x^n = \sum_{r=0}^{n} {}^nC_r\, x^r`   (i)

Substituting x = 2 in (i) gives

`\sum_{r=0}^{n} {}^nC_r\, 2^r = (1+2)^n = 3^n,`

that is,

`{}^nC_0 + 2\,{}^nC_1 + \dots + 2^n\,{}^nC_n = 3^n.`

Thus the sum is `3^n`.
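The identity derived above checks out numerically; a quick sketch:

```python
from math import comb

def weighted_binomial_sum(n):
    """Compute nC0 + 2*nC1 + 2^2*nC2 + ... + 2^n*nCn."""
    return sum(2 ** r * comb(n, r) for r in range(n + 1))

# By the binomial theorem with x = 2, the sum is (1 + 2)^n = 3^n.
print(all(weighted_binomial_sum(n) == 3 ** n for n in range(10)))  # True
```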
Find the general term!!!

Hi guys, find the one-equality general term formula for the sequence: The general term formula must not be defined partially, but must contain only one equality.
Last edited by anonimnystefy (2012-03-31 23:29:15)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment

Re: Find the general term!!!
I really wonder if anyone will get that one.

Re: Find the general term!!!
OK OK, only just got in. I'm working on it. Would the next two terms be 156, 158?
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei

Re: Find the general term!!!
Yes, those are the next two terms. The terms are obtained by repeatedly multiplying by 2 and then adding 2.

Re: Find the general term!!!
Hi anonimnystefy,
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."

Re: Find the general term!!!
Hi gAr
Nice work. Congrats also for showing me that there is another formula.
The limit operator is just an excuse for doing something you know you can't. 
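The rule described in the thread (alternately multiply by 2 and add 2) is easy to play with in code. The opening post's sequence did not survive extraction, so the first term 3 below is a hypothetical choice, picked only because it reproduces the quoted terms 156 and 158:

```python
def alternating_sequence(first, n_terms):
    """Generate terms by alternately multiplying by 2 and adding 2,
    starting from `first` (the multiply-by-2 step comes first)."""
    terms = [first]
    for i in range(n_terms - 1):
        terms.append(terms[-1] * 2 if i % 2 == 0 else terms[-1] + 2)
    return terms

# With a hypothetical first term of 3, the stated terms 156 and 158 appear:
print(alternating_sequence(3, 11))
# [3, 6, 8, 16, 18, 36, 38, 76, 78, 156, 158]
```

A closed form then needs one expression covering both the "doubling" and "plus 2" steps, which is why the thread's formulas involve powers of sqrt(2) and parity tricks.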
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment

Re: Find the general term!!!
Thanks for the problem. What's your formula?

Re: Find the general term!!!
Hi gAr
You're welcome. I will post my formula in a few minutes, so stay put.

Re: Find the general term!!!

Re: Find the general term!!!
hi gAr
Last edited by anonimnystefy (2012-04-01 01:12:21)

Re: Find the general term!!!
I'm getting a(1) = 12 for your formula?

Re: Find the general term!!!
hi gAr
The limit operator is just an excuse for doing something you know you can't. 
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment

Re: Find the general term!!!
Oh, okay. Now I'm getting a(2) = 2*(5√2 - 2)*(1). Did I substitute for n correctly?
Last edited by gAr (2012-04-01 01:35:20)

Re: Find the general term!!!
hi gAr
if I remember correctly (1-1)/2=0

Re: Find the general term!!!
wait a sec! i typed it out wrong.

Re: Find the general term!!!
hi gAr
Last edited by anonimnystefy (2012-04-01 01:50:56)

Re: Find the general term!!!
I thought that was (2-1)/2 = 1/2 and hence 2*(5*2^(1/2)-2)*1?
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. 
"I can't make bricks without clay." Re: Find the general term!!! hi gAr look at #16. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Find the general term!!! Looking at it... Meanwhile, there's some problem with the LaTeX code, can you correct it? "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Find the general term!!! hi gAr I fixed the LaTeX code. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Find the general term!!! Hi anonimnystefy, The formula's correct now. LaTeX code's also fine, thanks. Last edited by gAr (2012-04-01 01:53:07) "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Find the general term!!! hi gAr Thank you as well for your formula.Your seems better because it doesn't use the {a} "function". Who knows,maybe bob finds a third one. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Find the general term!!! Hi anonimnystefy, You're welcome. 
I just combined two recurrences for odd and even terms, and the CAS derived the closed form for me. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Find the general term!!! cool.I wonder if anything can be done using G.F.'s? The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Find the general term!!! hi Stefy and gAr Haven't been looking at yours so who knows. Here is mine: Please don't ask me to simplify it. Sob and tears. You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
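The formulas themselves were posted as images and are lost here, but the (1-1)/2 and (2-1)/2 substitutions above suggest a parity selector was involved. A minimal, purely hypothetical Python sketch of merging odd- and even-index subsequences with such a selector (the example sequence is invented, not the thread's):

```python
# Hypothetical illustration: merge separate even- and odd-index formulas
# into one closed form using the parity selector (1 - (-1)**n) / 2.

def selector(n):
    # 0 for even n, 1 for odd n
    return (1 - (-1) ** n) // 2

def merged(n, even_term, odd_term):
    # Single expression covering both parities of an interleaved sequence.
    s = selector(n)
    return (1 - s) * even_term(n) + s * odd_term(n)

# Invented example: a(n) = 2**n for even n, 3**n for odd n.
a = [merged(n, lambda n: 2 ** n, lambda n: 3 ** n) for n in range(6)]
print(a)  # [1, 3, 4, 27, 16, 243]
```

This is the usual trick a CAS leans on when a single recurrence splits by parity.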
Re: Re: $a++ allowed but $a-- is not! why?

Modern algebra would say that although ++ is one-to-one, it is not onto.

why? it is a mapping from N to N (if i understand you correctly).

Given a function F from set S to set T, F is said to be one-to-one if distinct elements of S map to distinct elements of T, and F is said to be onto if for every t in T there exists at least one s in S such that F(s)=t. In our case both S and T are the same set, the set of all finite numbers and alphanumeric strings. (I believe (Inf)++ is also defined, but that can be considered a special case.)

++ as it is defined in Perl is one-to-one because each possible number or alphanumeric string has a unique successor, but it is not onto because it is not true that each number or alphanumeric string has a predecessor.

why do you think -- is a permutation? Because its domain and range are the same set (specifically, the set of all finite numbers; again, (Inf)-- is also defined but can be considered a special case, a polymorphism if you will) and it is both one-to-one and onto. You can also consider ++ to be a permutation over the set of all finite numbers, if you consider the string magic to be a polymorphism, but ++ is definitely not a permutation over the set of alphanumeric strings, because it is not onto.

split//,".rekcah lreP rehtona tsuJ";$\=$ ;->();print$/
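For illustration, here is a Python sketch (my own, not Perl itself) of the magic string increment restricted to lowercase strings; real Perl `++` also handles uppercase letters and digits. It shows the one-to-one-but-not-onto behavior described above:

```python
import itertools
import string

def magic_inc(s):
    """Perl-style magic increment, lowercase only:
    'a'->'b', 'z'->'aa', 'az'->'ba', 'zz'->'aaa'."""
    chars = list(s)
    i = len(chars) - 1
    while i >= 0:
        if chars[i] != 'z':
            chars[i] = chr(ord(chars[i]) + 1)
            return ''.join(chars)
        chars[i] = 'a'  # carry, like 9 -> 0 in decimal
        i -= 1
    return 'a' + ''.join(chars)  # all z's: the string grows

domain = [''.join(t) for n in (1, 2)
          for t in itertools.product(string.ascii_lowercase, repeat=n)]
image = {magic_inc(s) for s in domain}

# One-to-one: no two inputs share a successor.
assert len(image) == len(domain)
# Not onto: 'a' is nobody's successor, so it has no predecessor under a
# would-be string --.
assert 'a' not in image
```

The asymmetry is exactly the post's point: every string has a successor, but not every string has a predecessor, which is why a string `--` would be ill-defined.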
Making An English Foot

First we need to look at how to measure circles and time. We do this the way the ancients did: in degrees, minutes, seconds (not in things like radians). For time, a day will have 24 hours, each of 60 minutes, each of 60 seconds. For arc of a circle: the earth will have 360 degrees of circumference, each of 60 minutes of arc, each of 60 seconds of arc.

Why do we use these numbers? Because the ancients did a lot of their math in fractions and ratios, and 60 was discovered by the ancients to be highly divisible by a lot of other smaller numbers. That is, it has a lot of factors: 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60. And even things like dividing by 8 are not too bad, in that the fraction rapidly reduces to a fairly easy number to calculate: 60/8 = 15/2, or 7 1/2. So if you are going to be doing a lot of calculations in your head or scratched on the dirt, having things that “divide easily” is a big feature. Compare with the base 10 metric system, where your factors are 1, 2, 5 and 10. Not very useful for anything other than 10s letting you move a decimal point.

A Standard In Time

With our calculation system settled and our way of measuring time and the earth decided, we can proceed with making a time standard, and from that a length standard.

SideBar on Precision: I will be doing this demonstration using simple common definitions of “day”, “year” and related things. I don’t want to get tied up in the minutiae of sidereal vs. tropical vs. whatever year, precession of the equinoxes, etc. Yes, those are important out in the small digits of precision; and I expect the ancients put a lot of time into polishing the system for those details over the several thousand years it evolved. I just don’t expect folks to absorb all that in 10 minutes of modern impatience. So we keep it simple and direct. That “polish” can be added later by anyone who cares to go down that path.
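The factor claim above is easy to verify; a quick sketch:

```python
# Count the divisors of 60 (sexagesimal) versus 10 (decimal).
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(60))  # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
print(divisors(10))  # [1, 2, 5, 10]
```

Twelve divisors for 60 against four for 10 is exactly why sexagesimal fractions reduce so conveniently.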
We’re taking the “KISS path”: Keep It Simple…

The Solstice In Time: The Earth rotates. Not as consistently as the ancients believed, but far more consistently than most folks need for anything like day to day use. There are 360 degrees of rotation per day. That is 15 degrees per hour (360 / 24 = 30 / 2 = 15; you see, I told you those factors would come in handy…). So all we need is a way to mark off 15 degrees of arc of the sky and we have an hour of transit time for a star (or most other heavenly bodies; the sun, moon and planets move slightly differently, but not by enough to matter for casual use).

To draw a circle we need a rope and a pole. The pole is stuck in the ground and the rope is stretched out from it. Take the other end and swing it. That makes a circular arc. You don’t need to make the whole circle, just a bit more than 1/6 of a circle. But where to put that 1/6? While it doesn’t matter too much, pointing it at the point where the sun, moon, planets et al. rise from the horizon at the time of a major celestial turning point (the Solstice) has been the common way to do it. Many circular “monuments” from Stonehenge to Medicine Wheels have had an “alignment” to the summer solstice. It is also more likely that you will have a clear night sky in the summer, so this is a practical point too.

You can use a modern calendar to find the solstice date (much faster; about June 21st in the Northern Hemisphere) or you can do it the way the ancients did. Put a pole in the ground where you will observe, and another toward the horizon a ways away. Every morning from mid winter on, when the sun rises, it will rise a bit further to the left (north) of the pole, until one day it doesn’t. That is the Solstice, where the sun stops its migration and starts migrating back the other way. That is the longest day of the year.
The Arc and a Time Covenant: So now, from your observation pole (made plumb with a bit of string and a weight / plumb bob) to your Solstice pole (also made plumb) you can swing an arc off to the south. There is an interesting property of circles. A string (or rope) one radius long (like the one used to draw your arc) will divide the circumference of a circle exactly 6 times. Another way to think of this is that the 60 – 60 – 60 degree triangle, the equilateral triangle, will exactly fit in a circle 6 times. We use this to make a 360 / 6 = 60 degree arc. Use your “radius line” but now measure from your solstice pole and swing an arc to the south. Where it crosses your circumference arc is exactly 1/6 of a circumference, or 60 degrees of arc. Put a marker or pole at that point (plumb, if a pole). Now use your rope to draw a straight line from one of the circumference poles to the other. The “chord”. Now we need to cut the arc into smaller pieces. Pick a spot a little more than half way down your “radius line” rope. Swing one arc from the first circumference pole a bit past the midline of the chord. Make sure it crosses the circumference a bit more than 1/2 way from one pole to the other. Go to the other pole and do the same thing so that the two arc cross each other. Stretch your rope between the two points where these two arcs cross each other. Where it crosses the circumference is the halfway point, or 30 degrees. Put a marker there. Starting from the 30 degree point, do it all again to get a 15 degree arc. That is your 1 hour transit arc. If you watch a star disappear behind one pole, it will disappear behind the next pole in one hour. Now, you can repeat this division process as many times as you like to get a workable arc. You will be timing transits between the two poles a few times to “tune” your clock, so if each time takes an hour, that might be a bit much. Also stars rise as they rotate, so you need a fairly tall pole for the 1 hour marker! 
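The transit arithmetic behind these halved arcs can be tabulated in a few lines (ignoring, as the text does, the solar vs. sidereal distinction):

```python
# 360 degrees of sky turn past in 86400 seconds, so each degree of arc
# takes 240 seconds to transit. Halving the arc halves the transit time.
def transit_seconds(arc_degrees):
    return 86400 * arc_degrees / 360

halvings = [15 / 2 ** k for k in range(4)]   # 15, 7.5, 3.75, 1.875 degrees
times = [transit_seconds(a) for a in halvings]
print(times)  # [3600.0, 1800.0, 900.0, 450.0]
```

So the 15 degree arc is a one-hour clock, and each bisection gives a shorter, easier-to-count interval: 1800, 900, then 450 seconds.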
In my opinion, the 7 1/2 degree or 3 3/4 degree arcs are easier to time (so divide in half two more times…).

The Clock: We now have a celestial clock. There are 3600 seconds of time in 15 degrees of arc. For 7.5 degrees that would be 1800 seconds, for 3 3/4 degrees it would be 900 seconds (and for the 1 7/8 degree arc, 450 seconds, etc.). This is your universal time standard.

Build a Pendulum

Take a bit of string somewhat near 4 feet long and tie a weight on the end. A large metal washer or stone doughnut works well. You want to be able to find the exact middle of the weight, since that is the centre of mass / centre of oscillation of the pendulum. A large used wheel bearing set works well too, as does any large metal or stone ring with a small hole in the centre. For a formal standard, you could even go so far as to build a large grandfather clock mechanism with a long pendulum and room for a wide swing.

You want a pendulum that makes one “swing” from one side to the other in exactly one second. “Seconds clocks” with a one second pendulum were very popular at one time (I wonder why…)

For your washer on a string, you need to calibrate it. So start it swinging. It needs to swing 3600 times (or 1800 “periods” of out and back, from one side, to the other, and back) in an hour, or as a star transits 15 degrees from entering the left side of the first pole to entering the left side of the second pole. Now you know why I like the idea of a 3 3/4 or even a 1 7/8 degree arc! It is a lot easier to count 225 periods (225 swings out and 225 swings back, 450 swings in all) than 3600.

A longer pendulum swings slower. A shorter pendulum swings faster. If you get a number larger than 225 beats, your pendulum is swinging too fast; make your pendulum longer. If you get a smaller number, your pendulum is too slow; make the pendulum shorter. The “arc” that the pendulum swings in ought to be about 90 degrees when you start it swinging.
When your count is right, your string, from centre of the weight to pivot point at the top, is 1 yard long.

A Pendulum In Time

For small angles of swing, the time a pendulum takes does not change much. It is more or less driven by gravity and length. There is a small variation as the angle gets larger (growing roughly with the square of the amplitude). For an example of the amount of variation, you can visit this site and play with the numbers:

You will find that 42.445 degrees of swing each side of straight down (a bit more than 84 degrees in total) with a 3 foot long pendulum (I used a 1 pound setting for the weight, but that doesn’t change the time, just the energy equation) gives an exactly 2.00000 second period (or a 1 second “swing” from one side to the other, but not back). Play with the exact degrees of swing and you can see how many digits of precision you get from less accurate control of the arc of swing. Just for fun, change the length of the pendulum to 2 cubits. (Cue spooky music ;-)

As a pendulum’s swing becomes smaller, its period becomes more consistent, so modern pendulum clocks use a very short swing. The pendulum needed for a one-second swing also becomes a little longer. This is also why the commercial “seconds pendulum clocks” have a pendulum longer than a yard (a bit over 39 inches) and approaching a meter. To some extent, the move from a yard, to a seconds clock pendulum, to the meter can be seen as changing the length of the swing of the pendulum from an easy to observe by hand and eye rather large 84 degrees of arc, to something much much smaller, but more consistent and precise. Oddly enough, if you narrow the swing to 5 degrees (2.5 alpha in the calculator) you get 2.00665 seconds from a 1 meter pendulum. While you cannot get a time from a zero length swing, it does look like we are approaching a 1 meter pendulum at near zero swing, to about the 3rd decimal place.
Kind of makes me wonder if someone on the committee to make the meter standard knew something about the older yard and seconds pendulums and just looked at taking the pendulum to a limit point. Rampant speculation, yes, but there was that odd moment when the meter ended up being not quite 1/10,000,000 of the arc from pole to equator … “an error” by folks that didn’t make many errors. It still ended up a bit long. Then again, perhaps allowing for a non-point mass in a pendulum would tighten that up even further.

An Alternative Pendulum

Now 84 degrees is a very large arc, and is not very accurate nor precise and repeatable. Is there another way? Say we made our pendulum very long, rather than a yard, and say we made it swing very slowly in a very small arc. Then it would be quite precise and not very sensitive to exact swing length. How about if we make it swing in a 4.6 degree arc, and had it be one “rod” in length (a rod being 16.5 feet). What time would it measure then? 4 1/2 seconds to the period, exactly. Gee, that looks useful…

Our very small arc of 1 7/8 degrees took 450 seconds. So this pendulum would only take 100 periods to have a “match”, and it would be far less sensitive to the exact size of the swing, varying in the 1/1000 place if you increase the arc to 10 degrees. So if we ‘go large’ to increase our precision and accuracy, the “rod” works very well. I could easily see our paleo-astronomer counting 450 a few times, and with less than stellar repeatability, wondering if maybe counting 100 would be easier, then finding that (with the same distance of swing, but a far smaller arc) his repeatability and ease of determination became much easier. Then it would just be a matter of calibrating his yard to the rod. Below we will see how this might be done.

So now you have two very useful things. 1) An excellent time standard: the second. 2) A very good distance standard: the yard (or cubit). And a more precise and more easily timed / measured distance standard, the rod.
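Both pendulum claims can be checked against the exact finite-amplitude period formula, T = 4·sqrt(L/g)·K(k) with k = sin(θ₀/2), computing the complete elliptic integral K by the arithmetic-geometric mean. This sketch assumes an ideal point-mass pendulum and g = 9.80665 m/s²; the online calculator quoted above may use slightly different constants:

```python
import math

def pendulum_period(length_m, amplitude_deg, g=9.80665):
    """Exact period of an ideal point-mass pendulum at finite amplitude.
    K(k) is computed via the arithmetic-geometric mean, no SciPy needed."""
    k = math.sin(math.radians(amplitude_deg) / 2)
    a, b = 1.0, math.sqrt(1 - k * k)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, math.sqrt(a * b)
    K = math.pi / (2 * a)
    return 4 * math.sqrt(length_m / g) * K

FT = 0.3048  # feet to meters
t_yardish = pendulum_period(3 * FT, 42.445)   # the 84-degree yard pendulum
t_rod = pendulum_period(16.5 * FT, 4.6)       # the rod at a small arc
print(round(t_yardish, 4), round(t_rod, 4))
```

The 3-foot, 42.445 degree case lands within about 1% of the quoted 2.00000 s (the residual depends on the calculator's constants), and the rod at 4.6 degrees comes out at almost exactly 4.5 s, matching the text.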
Your precision is limited mostly by the care you put into the construction, the size you make things (which is probably why Stonehenge and Medicine Wheels are so large) and a bit by the “fiddly bits” we ignored about the minor variations in the day length from orbital mechanics, along with keeping the arc of your pendulum swing near 84 degrees (or using the “rod” to make it much easier to make more precise). If you need much more precision than that, you need to be taking a degree in astronomy…

For a set of cave man tools (a couple of straight poles, a rock, some cordage) and the sky to be able to make a decent time and distance standard that can be recreated by anyone anywhere is rather a neat trick!

That it matches the English Yard and Rod is no accident.

But you said “Foot” not Yard!

OK, we need to divide that yard long string into 1/3 parts. We could just fold it back on itself twice to get three lengths, but that is a bit imprecise due to the radius of curvature in the ‘turns’. Good enough to build a house, but not for fine precision. So let’s think about this for a minute (or second or degree ;-). A yard is 36 inches long… Golly, 36 inches is 360 tenths, where have I heard of 360 before…

We know how to divide a circle into 1/6 parts. We have several choices at this point. We could simply stake out our string in the shape of a hexagon (overlay it on the center of a circle, with diameter lines at each 60 degrees constructed as we did above for the arc of 60 degrees). Each segment is now 60 tenths of an inch long (or 1/2 foot). A line from one vertex to another through the centre is 1 foot long, or we could take a segment of line from 2 sides of the hexagon and that would be a foot too. We could lay the line out as three 60 degree arc chords of a half circle. Then each segment would be a foot long (as would the radius of the circle). Or we could form the string into a circle and figure out how to divide it into 360 degrees, each degree segment 1/10th
of an inch long. At this point, it is really just a matter of High School Geometry how you choose to divide up the string. I’m sure you can find many other interesting ways to make the same divisions… 2 rods are 11 yards or 33 feet long. A bit inconvenient to divide, but not impossible. I’ll leave dividing that one for another day… Now it suddenly makes a lot more sense why a yard is 36 inches ( 360 1/10ths) and why there are 3 feet in a yard. Maybe this system of time and distance standards is a bit more rational than some folks think… My inspiration for this exercise came from: Wherein they recreate the “megalithic yard” that is a unit of measure widely found in very old stone works over much of the ancient world. It is based on a 366 “degree” circle and the number of sunrises in one orbit of the sun from solstice to solstice, not of the celestial sphere that gives our 365.x day year. More detail on the kinds of “year” that are defined: For even more details on the “fiddly bits” see: And if you think nobody would do something this complicated, here is an interesting circles and hexagons construction done for far less inherent value, but beautiful in the construction. And what Mascheroni did because he felt using a straight edge and compass was too much technology for constructing geometric forms and wanted to prove all you really needed was a compass alone (our rope and pole for swinging arcs, though we treat the stretched rope as a straightedge of sorts when stretched to make a chord line). Past Tense and Past Time One can now see the change from the Megalithic Yard to the English Yard and some of the various “cubits” as the shift from a 366 degree circle standard to a 360 degree circle standard (for more factors and a bit easier fractional math) and a transition from a solstice sunrises year standard to a Tropical year. 
We can now see in the stones on the ground the discovery that the Solstice Sunrises Year was only one way of seeing the orbit of the earth (and in some ways a bit too sun centered and not in touch with the greater celestial sphere), along with a move from the “366” degrees, based on that Solstice Sunrises Year, to a circle divided for easier math (since 366 was no longer so “special” as to deserve preserving and 365.x is hard to use in dividing). In the units of measure we can see the step forward in understanding. Interestingly enough, a “rod” is 16.5 feet, or 5.5 yards, or 11 cubits, or 6 megalithic yards. In that context we can see the “rod” as a unit that unifies the various systems of measurement. I can even envisage an alternate path of history where the megalithic yard came first, then the Rod as 6 megalithic yards, but using the “new” 360 degree circle and divisible by the hexagon method above; and then the English yard coming to replace the megalithic yard (with its own 360 divisions for fine work). Not wanting to fully replace the megalithic yard and rod, but augmenting it, only replacing it later. The exact path through history we will likely never know. Far fetched? Look at the present change from the yard to the meter…

Since a “chain” is 4 “rods” or 22 yards, and an acre is 1 chain x 10 chains of area, we carry with us in our 1/8 or 1/6 acre urban home lot the history of both the Megalithic Yard, The English Yard and Foot, and The Greek and Minoan foot. All neatly interoperable. Just don’t try to convert these measurements into the less flexible metric system, nor to do fractional math in your head with metric to the same accuracy and precision…

7 Responses to Making An English Foot

1. I’ve just dug out my copy of Uriel’s Machine by Lomas and Knight to refresh my memory on this stuff. I remember looking into the material on the Book of Enoch. Very strange but it looks like something happened.

2. Greetings.
I reckon the 360 degree division of the circle came from musical ratios and division of a string. The ratios of 16/15, 9/8, 6/5, 5/4, 4/3, 3/2 and their inverses cover all but the flat fifth in a natural temperament chromatic scale. With a root of 360 Hz, every note in the scale would be a whole number of Hz. For the flat five (45/32), you need to start 2 octaves up, at 1440 Hz, the number of minutes in a day. The lunar tides return close to 25 hrs, so 24 is a useful division of a day for this reason, 1 day and 1 hour.

The first whole number ratio between feet and Megalithic yards is 25:68; five times this is the diameter of the inner stone circle rings at Avebury (125:340). A very handy number of Meg. yards to measure the circumference when done in tenths of Meg. yards. 3927/1250 = 3.1416. This gives an error of 0.03 inch on the circumference from the real value of Pi. 3927/17 = 231, a triangular number; 3927/7 = 561, also a triangular number; the ancients were known to divide the Earth and the Heavens by 7 divisions.

3. Past history, wisdom and science can be very fascinating. I thought our host and others may be interested in the following if he has never heard of it. Vedic Mathematics: ancient math from time untold… In 1958 the Sri Sankaracarya Bharati Krsna Tirtha of India paid a visit to the United States. He was the ecclesiastical head of the Govardhana monastery in Puri, India, and was the apostolic successor of the first Sankaracarya (ninth century; considered by many to be India’s greatest philosopher). It was the first time in the history of the thousand year old order that one of its leaders had visited the United States. He delivered many lectures at numerous universities, did radio shows, and at Washington and Lee University, Lexington, Virginia he engaged in a debate on religion and peace with distinguished historian Arnold Toynbee, insisting that peace with honor is something worth fighting for.
One of the most fascinating aspects of his visit was his lectures on mathematics from ancient India, at least 5,000 years old according to him. He has the academic credentials to at least lend some veracity to his claims. While still in his 16th year he was awarded the title of “Saraswati” for his mastery of Sanskrit. He appeared for the M.A. examinations of the American College of Sciences, Rochester, N.Y. from Bombay Center in 1903; and in 1904 at the age of twenty passed M.A. examinations in further subjects simultaneously, securing the highest honors in all. His subjects included Sanskrit, Philosophy, English, Mathematics, History and Science. Quite an achievement for anyone, let alone a twenty year old. The history of Vedic Mathematics he relates is a fascinating study. The entirely new twist of doing math with a completely different method, often easier and more efficient than the accepted common methods. And all of this derived from the Indian sutras. Additionally, his concept of unity between various religions, and between religion and science, you would I think find very stimulating. The title of the book is Vedic Metaphysics by DK Tirthaji; the publisher was Motilal Banarsidass. It is out of print. I have a copy. If you would like to borrow it I would be happy to send it to you. Having thoughts one has never thought before is such fun, and should occur daily.

REPLY: [I’d heard some of this before, but as different bits here and there; never so nicely packaged together. I read a bit about the Vedic math and saw that it would do what was claimed, but also realized that at 50 something years old I was not going to be learning a whole new math AND Sanskrit too (it helps to know Sanskrit to really understand what the Vedas are saying…); so I let it pass me by.
Were I a 20-something I’d probably have learned Sanskrit, then the rest of the Indo-European languages just become a subset with modernizations ;-) At any rate, the book sounds really interesting, and when I’ve caught up on the 3 or 4 years of things on my “must do” list, I’ll take you up on the offer of a loan 8-{ One of the things I found fascinating was the size of a Brahma year, roughly 3 trillion years: http://www.sanskritmantra.com/brahma.htm and when you read of the ‘cycle of creation’ and that much of our universe is supposed to exist in other indestructible universes “above”… it really starts to sound like a big bang cosmology with multidimensional phase space… Sometimes I wonder if we are not just rediscovering what was known 12,000 years ago, before the comet hit North America and melted all the ice, causing “the great flood” and washing it all away … Sigh. Cycle of creation… -E.M.Smith ]

4. You forgot that 2 is also a factor of 10. It’s quite a useful factor of ten, actually, especially when trying to multiply a long number by 5. Just to be devil’s advocate, I find that the most common *demand* for conversions in daily life is between small, medium, and large distances, followed by converting between small, medium, and large areas. Converting cm to m to km is somewhat easier than converting in to yds to miles, but converting cm^2 to m^2 to km^2 is a heckuva lot easier than converting in^2 to yd^2 to miles^2.

5. @slick: OK, it was implied by the 5, but yeah, I left it out. Fixed.

6. Knowing your interest in henges and the like: RTÉ.ie will be streaming the Winter Solstice at Newgrange from 8.55am on 21 December. This live feed will be available on RTÉ.ie/live

This entry was posted in Earth Sciences, Metrology, Stonehenge and tagged history, science. Bookmark the permalink.
Crossover Experts! Hole in 12 dB/octave (both sides) response? Help!! - diyAudio

Wizard of Kelts
diyAudio Moderator
Join Date: Sep 2001
Location: Connecticut, The Nutmeg State

Years ago, the loudspeaker books were telling us that if you have a 12 dB/octave slope on both sides, there will be a hole in the response right at the crossover frequency. Therefore, they recommended that the higher-frequency speaker (either the midrange or the tweeter) have its polarity reversed when hooked up to a 12 dB/octave crossover. I am told that this solution is little used today.

Two questions, then. A) Is there in fact a hole in the middle of the response at the crossover frequency in a 12 dB/octave (both sides) crossover? B) If so, and they don't reverse the polarity of one of the speakers, then what methods do they use to deal with it?
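Standard second-order filter algebra (not from this thread) gives a quick numeric answer, assuming ideal coincident drivers with flat response:

```python
# Summed on-axis response of a 2nd-order (12 dB/octave) low-pass plus
# high-pass pair at the crossover frequency. Normalized sections:
# LP = 1/(s^2 + s/Q + 1), HP = s^2/(s^2 + s/Q + 1), evaluated at s = j*w.

def sections(w, Q):
    s = 1j * w
    d = s * s + s / Q + 1
    return 1 / d, s * s / d  # (low-pass, high-pass)

# Butterworth alignment, Q = 0.707, at the crossover frequency w = 1:
bw_lp, bw_hp = sections(1.0, Q=2 ** -0.5)
print(abs(bw_lp + bw_hp))   # in phase: exact cancellation, the "hole"
print(abs(bw_lp - bw_hp))   # one driver inverted: sqrt(2), a +3 dB bump

# Linkwitz-Riley alignment (Q = 0.5):
lr_lp, lr_hp = sections(1.0, Q=0.5)
print(abs(lr_lp - lr_hp))   # inverted polarity sums flat: 1.0
```

So the answer to A) is yes for in-phase second-order Butterworth sections: the two outputs are 180 degrees apart at crossover and cancel. For B), common remedies are inverting one driver, choosing a Linkwitz-Riley (Q = 0.5) alignment so the inverted sum is flat rather than +3 dB, or moving to 24 dB/octave (LR4) slopes, where both drivers sum in phase with normal polarity.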
Iterative Simulated Quenching for Designing Irregular-Spot-Array Generators

We propose a novel, to our knowledge, algorithm of iterative simulated quenching with temperature rescaling for designing diffractive optical elements, based on an analogy between simulated annealing and statistical thermodynamics. The temperature is iteratively rescaled at the end of each quenching process according to ensemble statistics, to bring the system back from a frozen imperfect state with a local minimum of energy to a dynamic state in a Boltzmann heat bath in thermal equilibrium at the rescaled temperature. The new algorithm achieves a much lower cost function and reconstruction error and a higher diffraction efficiency than conventional simulated annealing with a fast exponential cooling schedule, and it is easy to program. The algorithm is used to design binary-phase generators of large irregular spot arrays. The diffractive phase elements have trapezoidal apertures of varying heights, which fit ideal arbitrary-shaped apertures better than do trapezoidal apertures of fixed height.

© 2000 Optical Society of America

OCIS Codes
(050.1950) Diffraction and gratings : Diffraction gratings
(050.1960) Diffraction and gratings : Diffraction theory
(050.1970) Diffraction and gratings : Diffractive optics
(090.1760) Holography : Computer holography
(090.1970) Holography : Diffractive optics

Citation
Jean-Numa Gillet and Yunlong Sheng, "Iterative Simulated Quenching for Designing Irregular-Spot-Array Generators," Appl. Opt. 39, 3456-3465 (2000)
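The abstract describes rescaling the temperature between successive quench runs so that a frozen state can escape a local minimum. A minimal, generic sketch of that idea follows; it is not the authors' actual algorithm — the toy cost function, step size, cooling rate, and rescaling rule here are all invented for illustration:

```python
import math
import random

def energy(x):
    # Toy multimodal cost function standing in for the DOE design cost.
    return x * x + 10.0 * math.sin(3.0 * x)

def quench(x, temp, cooling=0.95, steps_per_temp=50, t_min=1e-3):
    """One simulated-quenching run with a fast exponential cooling schedule."""
    best = x
    energies = []
    while temp > t_min:
        for _ in range(steps_per_temp):
            cand = x + random.uniform(-0.5, 0.5)
            delta = energy(cand) - energy(x)
            # Metropolis acceptance rule
            if delta < 0 or random.random() < math.exp(-delta / temp):
                x = cand
                if energy(x) < energy(best):
                    best = x
        energies.append(energy(x))
        temp *= cooling
    return best, energies

def iterative_quenching(n_runs=5, temp0=10.0, seed=1):
    random.seed(seed)
    x = random.uniform(-5, 5)
    temp = temp0
    for _ in range(n_runs):
        x, energies = quench(x, temp)
        # Rescale the temperature from run statistics so the frozen state can
        # re-enter a dynamic regime (one plausible rule; not the paper's).
        mean_e = sum(energies) / len(energies)
        temp = max(abs(mean_e - min(energies)), 1e-2)
    return x

x_best = iterative_quenching()
```

The key contrast with plain quenching is the outer loop: instead of accepting whatever frozen state one fast cooling run ends in, the system is reheated to a data-driven temperature and quenched again.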
{"url":"http://www.opticsinfobase.org/ao/abstract.cfm?uri=ao-39-20-3456","timestamp":"2014-04-16T08:49:10Z","content_type":null,"content_length":"120809","record_id":"<urn:uuid:2a296cee-9743-4a01-bcd2-05a64fd7adf6>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
Derivation of kinetic energy

Hi everyone,

There are 2 things I do not understand in the derivation of kinetic energy from work:

(1) $W = \int \vec{F}(t) \cdot d\vec{r}(t)$
(2) $= m \int \frac{d\vec{v}(t)}{dt} \cdot d\vec{r}(t)$
(3) $= m \int d\vec{v}(t) \cdot \frac{d\vec{r}(t)}{dt}$
(4) $= \frac{m}{2}\left(v(t_1) - v(t_0)\right)^{2}$

Question I: I don't understand why you can just move the division by $dt$ like that between (2) and (3). I know multiplication and division have the same precedence, but they are calculated from left to right. So why can you do that from (2) to (3)?

Question II: I don't understand why the change in kinetic energy between $t_1$ and $t_0$ is sometimes written as $\frac{1}{2}m\,v(t_1)^{2} - \frac{1}{2}m\,v(t_0)^{2}$. E.g. like in

After (4), it should be $\frac{m}{2}v(t_1)^{2} - m\,v(t_1)v(t_0) + \frac{m}{2}v(t_0)^{2}$, right?
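The discrepancy the poster notices can be checked numerically: integrating $m\,v\,dv$ from $v_0$ to $v_1$ gives $\frac{1}{2}m(v_1^2 - v_0^2)$, not $\frac{1}{2}m(v_1 - v_0)^2$. A quick sketch of that check (my own working, not from the thread):

```python
def work_from_integral(m, v0, v1, n=100_000):
    """Numerically evaluate W = m * integral of v dv from v0 to v1 (midpoint rule)."""
    dv = (v1 - v0) / n
    total = 0.0
    for i in range(n):
        v = v0 + (i + 0.5) * dv  # midpoint of the i-th subinterval
        total += m * v * dv
    return total

m, v0, v1 = 2.0, 3.0, 7.0
w_numeric = work_from_integral(m, v0, v1)
w_correct = 0.5 * m * (v1**2 - v0**2)   # 40.0: the standard KE difference
w_wrong = 0.5 * m * (v1 - v0)**2        # 16.0: what (4) as written would give
```

The numeric integral matches the difference of the two kinetic energies, which is what the definite integral $\int_{v_0}^{v_1} m v\, dv$ evaluates to.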
{"url":"http://www.physicsforums.com/showthread.php?t=580749","timestamp":"2014-04-18T03:08:14Z","content_type":null,"content_length":"41258","record_id":"<urn:uuid:2fe06740-ccb6-43ea-90cd-f423feec06e4>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometric Series

The first term of a geometric series is 27, the last term is 8 and the sum of the series is 65. What is the common ratio and how many terms are there in the series?

$a_n = 8 \Rightarrow a_1 q^{n-1} = 8 \Rightarrow q^{n-1} = \frac{8}{27}$

$S_n = 65 \Rightarrow a_1\frac{q^n - 1}{q - 1} = 65 \Rightarrow \frac{q^n - 1}{q - 1} = \frac{65}{27}$

$q^n = q^{n-1} \cdot q = \frac{8}{27}q$

Then, $\frac{\frac{8}{27}q - 1}{q - 1} = \frac{65}{27} \Rightarrow q = \frac{2}{3}$

$\left(\frac{2}{3}\right)^{n-1} = \frac{8}{27} \Rightarrow n = 4$
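The answer $q = \frac{2}{3}$, $n = 4$ checks out directly: the terms are 27, 18, 12, 8 and they sum to 65. A quick verification in exact rational arithmetic (my own check, not part of the thread):

```python
from fractions import Fraction

def geometric_terms(a1, q, n):
    """First n terms of a geometric sequence with first term a1 and ratio q."""
    return [a1 * q**k for k in range(n)]

q = Fraction(2, 3)          # the common ratio found above
terms = geometric_terms(27, q, 4)
total = sum(terms)          # should be 65
```

Using `Fraction` avoids floating-point round-off, so the equality with 65 is exact.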
{"url":"http://mathhelpforum.com/algebra/74470-geometric-series.html","timestamp":"2014-04-16T13:47:22Z","content_type":null,"content_length":"33346","record_id":"<urn:uuid:d9c25a7d-4826-4b30-b7d4-13a2d6c46d3a>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Natural selection. V. How to read the fundamental equations of evolutionary change in terms of information theory

Steven A. Frank
(Submitted on 16 Nov 2012)

The equations of evolutionary change by natural selection are commonly expressed in statistical terms. Fisher's fundamental theorem emphasizes the variance in fitness. Quantitative genetics expresses selection with covariances and regressions. Population genetic equations depend on genetic variances. How can we read those statistical expressions with respect to the meaning of natural selection? One possibility is to relate the statistical expressions to the amount of information that populations accumulate by selection. However, the connection between selection and information theory has never been compelling. Here, I show the correct relations between statistical expressions for selection and information theory expressions for selection. Those relations link selection to the fundamental concepts of entropy and information in the theories of physics, statistics, and communication. We can now read the equations of selection in terms of their natural meaning. Selection causes populations to accumulate information about the environment.

One thought on "Natural selection. V. How to read the fundamental equations of evolutionary change in terms of information theory"

1. Thanks. I was possibly in danger of sounding like a "kook" without being rescued by this paper.
{"url":"http://haldanessieve.org/2012/11/20/natural-selection-v-how-to-read-the-fundamental-equations-of-evolutionary-change-in-terms-of-information-theory/","timestamp":"2014-04-18T05:30:27Z","content_type":null,"content_length":"37243","record_id":"<urn:uuid:65acb4e4-3a45-41b0-8dff-10231ba61ef2>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
A can in your pantry has a 1.4 inch radius and is 5 inches tall. What is the surface area of the can to the nearest tenth?

Choose one answer.
a. 6.2 inches squared
b. 12.3 inches squared
c. 44 inches squared
d. 56.3 inches squared
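For reference, the total surface area of a closed cylinder is $2\pi r^2 + 2\pi r h$ (two caps plus the lateral side). A quick check of the arithmetic (my own working, not part of the original page):

```python
import math

def cylinder_surface_area(r, h):
    """Total surface area of a closed cylinder: two circular caps plus the side."""
    return 2 * math.pi * r * r + 2 * math.pi * r * h

area = round(cylinder_surface_area(1.4, 5), 1)  # matches choice d
caps = round(2 * math.pi * 1.4**2, 1)           # the two caps alone
side = round(2 * math.pi * 1.4 * 5, 1)          # the lateral surface alone
```

Note that the distractors correspond to partial areas: choice b is the two caps alone and choice c is the lateral surface alone.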
{"url":"http://openstudy.com/updates/50bd1f51e4b0de42629fa8f5","timestamp":"2014-04-16T16:27:56Z","content_type":null,"content_length":"38180","record_id":"<urn:uuid:1716d3b9-bad0-48c4-9eab-5de8d75f8497>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
a sequence does not converge to a

Remember: if a sequence $(x_n)$ does converge to $a$ then, for any $\epsilon > 0$, there is an integer $N$ such that, for all $n > N$, it is true that $x_n \in B(a, \epsilon)$.

With this in mind, if $(x_n)$ does **not** converge to $a$, it has to be true that, for some $\epsilon > 0$, there is no $N$ that satisfies the previous requirement. If saying $x_n \in B(a, \epsilon)$ from some point on is false for that $\epsilon$, it has to be true that $x_n \notin B(a, \epsilon)$ for infinitely many values of $n$.
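As a concrete illustration (my own example, not from the post): the sequence $x_n = (-1)^n$ does not converge to $a = 1$, and for $\epsilon = 1$ every odd-indexed term falls outside $B(1, \epsilon)$ — infinitely many terms, exactly as the negation requires:

```python
def outside_ball_indices(seq, a, eps):
    """Indices n where x_n lies outside the open ball B(a, eps)."""
    return [n for n, x in enumerate(seq) if abs(x - a) >= eps]

seq = [(-1) ** n for n in range(100)]   # 1, -1, 1, -1, ...
bad = outside_ball_indices(seq, a=1, eps=1)
# bad contains every odd index: the terms equal to -1 are at distance 2 from a = 1.
```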
{"url":"http://www.physicsforums.com/showthread.php?t=442090","timestamp":"2014-04-16T07:45:05Z","content_type":null,"content_length":"32435","record_id":"<urn:uuid:0af9ad77-68a7-4ed6-8b57-d158f10e27bc>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
PhysicsLAB Resource Lesson: Continuous Charge Distributions: Electric Potential

We have already derived in an earlier lesson that the electric potential at a distance r from a point charge is given by the formula

$V = \frac{kq}{r}$

We will now construct a method to calculate the electric potential resulting from a continuous charge distribution, in a similar fashion to how we developed expressions for the electric field due to similar distributions.

1. Construct a diagram with a coordinate set of axes
2. Locate the point at which we want to calculate the absolute potential and label appropriate distances
3. Divide the total charge distribution into small charge segments (Δq)
4. Develop an expression for one piece and then sum up the contributions for all of the charge segments
5. Replace the sum of the charge segments with an integral incorporating expressions for the charge density and an appropriate infinitesimal (ds, dA, dV)
6. Integrate and simplify

We will use this technique to calculate the electric potential along the axes of a thin ring of charge, a uniformly charged disk, and a concentric set of cylinders.

Uniformly Charged Ring

Suppose we have a ring of charge with a uniform charge distribution, λ, and radius a. We will now develop an expression for the electric potential at a position on the positive x-axis, at point P. Since the ring has a uniform distribution of charge, we know that the total charge equals Q = λ(2πa), allowing us to write an expression for the electric potential at point P as

$V = \frac{kQ}{\sqrt{x^2 + a^2}}$

If we let x = 0, then we find that the potential at the center of the ring equals $V = kQ/a$.

Refer to the following information for the next three questions.

Suppose you have a 5 µC uniformly-charged ring of radius 10 cm whose center is coincident with the origin. A small particle of mass 4 × 10⁻⁶ kg, having a charge of 2 µC, is placed on the x-axis (the axis of the ring) at a distance of 20 cm.
What is the voltage of the ring at the particle's position?

What is the particle's initial potential energy?

When the particle is released, what will be its final velocity when it is a "great distance" from the ring?

Uniformly Charged Disks

To find the potential of a thin, charged disk having a uniform surface charge σ and radius R, start by placing its axis along the x-axis and build the disk from a series of charged rings. Each ring will have a surface area of its circumference multiplied by its thickness, or

$dA = 2\pi a\, da$

where the radius of the selected ring is a and its radial thickness is da. The charge carried on this charge segment would equal

$dq = \sigma\, dA = \sigma (2\pi a\, da)$

With this initial setup, we can use our result for the potential of a thin ring to integrate from a radius of a = 0 to a = R to calculate the voltage due to the entire disk. To integrate, use the substitution $u = x^2 + a^2$, $du = 2a\, da$; this will now give us our conclusion:

$V = 2\pi k \sigma \left(\sqrt{x^2 + R^2} - x\right)$
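The three review questions about the charged ring can be worked directly from the on-axis ring-potential result $V = kQ/\sqrt{x^2 + a^2}$. A sketch of the arithmetic (my own working, using $k \approx 8.99 \times 10^9\ \mathrm{N\,m^2/C^2}$; energy conservation gives the final speed far from the ring, where the potential approaches zero):

```python
import math

K = 8.99e9  # Coulomb constant, N*m^2/C^2

def ring_potential(Q, a, x):
    """Potential on the axis of a uniformly charged ring: V = kQ/sqrt(x^2 + a^2)."""
    return K * Q / math.sqrt(x * x + a * a)

Q_ring, a, x = 5e-6, 0.10, 0.20   # 5 uC ring, 10 cm radius, particle at 20 cm
q, m = 2e-6, 4e-6                 # particle charge (C) and mass (kg)

V = ring_potential(Q_ring, a, x)  # voltage at the particle's position
U = q * V                         # initial potential energy, joules
v_final = math.sqrt(2 * U / m)    # all PE converts to KE far from the ring
```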
{"url":"http://dev.physicslab.org/Document.aspx?doctype=3&filename=Electrostatics_ContinuousChargedRingDisk.xml","timestamp":"2014-04-18T05:59:55Z","content_type":null,"content_length":"31607","record_id":"<urn:uuid:30ae2c43-d5b7-41ae-b0ae-c6213572f683>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
combining like terms worksheet

tOnh3l — Posted: Saturday 30th of Dec 07:57
Guys, I need some help with my math assignment. It's a really long one having almost 30 questions and it covers topics such as combining like terms worksheets. I've been trying to solve those questions since the past 4 days now and still haven't been able to solve even a single one of them. Our instructor gave us this homework and went for a vacation, so basically we are all on our own now. Can anyone help me get started? Can anyone solve some sample questions for me based on those topics; such solutions would help me solve my own questions as well.

oc_rana — Posted: Saturday 30th of Dec 09:18
It's true, there are programs that can assist you with studying. I think there are several ones that help you solve math problems, but I heard that Algebrator stands out amongst them. I used the software when I was a student in Pre Algebra for helping me with combining like terms worksheets, and it never failed me since then. In time I understood all the topics, and after a while I was able to solve the most challenging of the tasks without the program. Don't worry; you won't have any problem using it. It was meant for students, so it's simple to use. Basically you just have to type in the keyword and that's it. Of course you should use it to learn algebra, not just copy the answers, because you won't learn that way.

molbheus2matlih — Posted: Monday 01st of Jan 08:14
Algebrator indeed is a very good tool to help you learn math, sitting at home. You won't just get the answer to the question but the entire solution as well; that's how you can build a strong mathematical foundation. And to do well in math, it's important to have strong concepts. I would advise you to use this software if you want to finish your project on time.

xilnos16 — Posted: Tuesday 02nd of Jan 07:08
Hey! That sounds really great. So where did you get the software?

alhatec16 — Posted: Thursday 04th of Jan 10:43
Yes. Here is the link – http://www.algebra-equation.com/linearequations-2.html. There is a quick buy routine and I think they also give a cool money-back guarantee. They know the software is wonderful and you would never use it. Enjoy!

Flash Fnavfy Liom — Posted: Saturday 06th of Jan 10:53
I remember having problems with adding numerators, mixed numbers and rational equations. Algebrator is a really great piece of math software. I have used it through several algebra classes - Intermediate algebra, Intermediate algebra and Algebra 2. I would simply type in the problem and by clicking on Solve, a step by step solution would appear. The program is highly recommended.
{"url":"http://www.algebra-equation.com/solving-algebra-equation/graphing-inequalities/combining-like-terms-worksheet.html","timestamp":"2014-04-16T19:26:30Z","content_type":null,"content_length":"25871","record_id":"<urn:uuid:f6d4278b-8ed1-4fe5-90a7-3e8fd1d83fbb>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
The encyclopedic entry of distribution function

In probability theory, the cumulative distribution function (CDF), also called the probability distribution function or just distribution function, completely describes the probability distribution of a real-valued random variable X. For every real number x, the CDF of X is given by

$x \mapsto F_X(x) = \operatorname{P}(X \leq x),$

where the right-hand side represents the probability that the random variable X takes on a value less than or equal to x. The probability that X lies in the interval (a, b] is therefore $F_X(b) - F_X(a)$ if a < b. If treating several random variables X, Y, ... etc., the corresponding letters are used as subscripts while, if treating only one, the subscript is omitted. It is conventional to use a capital F for a cumulative distribution function, in contrast to the lower-case f used for probability density functions and probability mass functions. This applies when discussing general distributions: some specific distributions have their own conventional notation, for example the normal distribution.

The CDF of X can be defined in terms of the probability density function f as follows:

$F(x) = \int_{-\infty}^{x} f(t)\,dt$

Note that in the definition above, the "less than or equal" sign, '≤', is a convention, but it is a universally used one, and is important for discrete distributions. The proper use of tables of the binomial and Poisson distributions depends upon this convention. Moreover, important formulas like Lévy's inversion formula for the characteristic function also rely on the "less than or equal" formulation.

Every cumulative distribution function F is (not necessarily strictly) monotone increasing and right-continuous. Furthermore, we have

$\lim_{x\to -\infty} F(x) = 0, \quad \lim_{x\to +\infty} F(x) = 1.$

Every function with these four properties is a CDF.
The properties imply that all CDFs are càdlàg functions.

If X is a discrete random variable, then it attains values x₁, x₂, ... with probability $p_i = \operatorname{P}(X = x_i)$, and the CDF of X will be discontinuous at the points xᵢ and constant in between:

$F(x) = \operatorname{P}(X \leq x) = \sum_{x_i \leq x} \operatorname{P}(X = x_i) = \sum_{x_i \leq x} p(x_i)$

If the CDF F of X is continuous, then X is a continuous random variable; if furthermore F is absolutely continuous, then there exists a Lebesgue-integrable function f(x) such that

$F(b) - F(a) = \operatorname{P}(a \leq X \leq b) = \int_a^b f(x)\,dx$

for all real numbers a and b. (The first of the two equalities displayed above would not be correct in general if we had not said that the distribution is continuous. Continuity of the distribution implies that P(X = a) = P(X = b) = 0, so the difference between "<" and "≤" ceases to be important in this context.) The function f is equal to the derivative of F almost everywhere, and it is called the probability density function of the distribution of X.

Point probability

The "point probability" that X is exactly b can be found as

$\operatorname{P}(X = b) = F(b) - \lim_{x \to b^{-}} F(x)$

Kolmogorov-Smirnov and Kuiper's tests

The Kolmogorov-Smirnov test is based on cumulative distribution functions and can be used to test whether two empirical distributions are different or whether an empirical distribution is different from an ideal distribution. The closely related Kuiper's test is useful if the domain of the distribution is cyclic, as in day of the week. For instance, we might use Kuiper's test to see if the number of tornadoes varies during the year or if sales of a product vary by day of the week or day of the month.
Complementary cumulative distribution function

Sometimes, it is useful to study the opposite question and ask how often the random variable is above a particular level. This is called the complementary cumulative distribution function (ccdf), defined as

$F_c(x) = \operatorname{P}(X > x) = 1 - F(x).$

In survival analysis, $F_c(x)$ is called the survival function and denoted $S(x)$.

Folded cumulative distribution

While the plot of a cumulative distribution often has an S-like shape, an alternative illustration is the folded cumulative distribution or mountain plot, which folds the top half of the graph over, thus using two scales, one for the upslope and another for the downslope. This form of illustration emphasises the median and dispersion of the distribution or of the empirical results.

As an example, suppose X is uniformly distributed on the unit interval [0, 1]. Then the CDF of X is given by

$F(x) = \begin{cases} 0 &: x < 0 \\ x &: 0 \le x \le 1 \\ 1 &: 1 < x \end{cases}$

Take another example: suppose X takes only the discrete values 0 and 1, with equal probability. Then the CDF of X is given by

$F(x) = \begin{cases} 0 &: x < 0 \\ 1/2 &: 0 \le x < 1 \\ 1 &: 1 \le x \end{cases}$

If the CDF is strictly increasing and continuous then $F^{-1}(y)$, $y \in [0,1]$, is the unique real number x such that $F(x) = y$.

Unfortunately, the distribution does not, in general, have an inverse. One may define, for $y \in [0,1]$,

$F^{-1}(y) = \inf \{ x \in \mathbb{R} : F(x) \geq y \}.$

Example 1: The median is $F^{-1}(0.5)$.

Example 2: Put $\tau = F^{-1}(0.95)$. Then we call $\tau$ the 95th percentile.

The inverse of the CDF is called the quantile function. The inverse of the CDF can be used to translate results obtained for the uniform distribution to other distributions.
Some useful properties of the inverse CDF are:

1. $F^{-1}$ is nondecreasing
2. $F^{-1}(F(x)) \leq x$
3. $F(F^{-1}(y)) \geq y$
4. $F^{-1}(y) \leq x$ if and only if $y \leq F(x)$
5. If $Y$ has a $U[0, 1]$ distribution, then $F^{-1}(Y)$ is distributed as $F$
6. If $\{X_\alpha\}$ is a collection of independent $F$-distributed random variables defined on the same sample space, then there exist random variables $Y_\alpha$ such that $Y_\alpha$ is distributed as $U[0,1]$ and $F^{-1}(Y_\alpha) = X_\alpha$ with probability 1 for all $\alpha$.

Multivariate case

When dealing simultaneously with more than one random variable, the joint cumulative distribution function can also be defined. For example, for a pair of random variables X, Y, the joint CDF is given by

$(x, y) \mapsto F(x, y) = \operatorname{P}(X \leq x, Y \leq y),$

where the right-hand side represents the probability that the random variable X takes on a value less than or equal to x and that Y takes on a value less than or equal to y.
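Property 5 above is the basis of inverse transform sampling. A small sketch for the exponential distribution, where $F(x) = 1 - e^{-\lambda x}$ and $F^{-1}(y) = -\ln(1 - y)/\lambda$ (the distribution and parameter are my own choice for illustration):

```python
import math
import random

def exp_cdf(x, lam):
    """CDF of the exponential distribution: F(x) = 1 - exp(-lam*x) for x >= 0."""
    return 1.0 - math.exp(-lam * x) if x >= 0 else 0.0

def exp_inverse_cdf(y, lam):
    """Quantile function: F^{-1}(y) = -ln(1 - y)/lam."""
    return -math.log(1.0 - y) / lam

random.seed(0)
lam = 2.0
# Property 5: if Y ~ U[0, 1], then F^{-1}(Y) is distributed as F.
samples = [exp_inverse_cdf(random.random(), lam) for _ in range(100_000)]
mean = sum(samples) / len(samples)  # should approach the exponential mean 1/lam = 0.5
```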
{"url":"http://www.reference.com/browse/distribution%20function","timestamp":"2014-04-19T02:02:52Z","content_type":null,"content_length":"90167","record_id":"<urn:uuid:9f723061-3001-451e-aa4c-2e2f3744d483>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
Intersections of lines with parabolas

Consider the function y = ax² + bx + c, a > 0, and calculate for what values of a, b and c this function will be intersected in 4 points in quadrant one of the coordinate plane by the lines y = x and y = 2x.
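One way to explore the conditions numerically (a sketch; the triple (a, b, c) = (1, 0, 0.05) is my own candidate, not from the thread). Each line y = mx meets the parabola where ax² + (b − m)x + c = 0, so four first-quadrant intersections require both quadratics (m = 1 and m = 2) to have two positive roots: sum (m − b)/a > 0, product c/a > 0, and discriminant (b − m)² − 4ac > 0.

```python
import math

def line_parabola_intersections(a, b, c, m):
    """Solve a*x^2 + b*x + c = m*x, i.e. a*x^2 + (b - m)*x + c = 0."""
    A, B, C = a, b - m, c
    disc = B * B - 4 * A * C
    if disc < 0:
        return []
    r = math.sqrt(disc)
    xs = [(-B - r) / (2 * A), (-B + r) / (2 * A)]
    return [(x, m * x) for x in xs]

def quadrant_one_count(a, b, c):
    """Count intersections with y = x and y = 2x that lie in quadrant I."""
    pts = (line_parabola_intersections(a, b, c, 1)
           + line_parabola_intersections(a, b, c, 2))
    return sum(1 for (x, y) in pts if x > 0 and y > 0)

n = quadrant_one_count(1.0, 0.0, 0.05)  # candidate satisfying all six conditions
```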
{"url":"http://mathhelpforum.com/algebra/101558-intersections-lines-parabolas.html","timestamp":"2014-04-20T14:07:40Z","content_type":null,"content_length":"28911","record_id":"<urn:uuid:3761bb6d-804e-439a-8026-ff6af627374e>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Lift-to-drag ratio

The lift-to-drag ratio of an aircraft is the ratio of the lift force to the drag force. This number is dimensionless, and is commonly abbreviated to L/D, pronounced "L over D".

The L/D of an aircraft is one of its more important design variables. The value of L/D, in the case of commercial aircraft, is typically maximized, which results in the greatest aerodynamic efficiency. This means better fuel economy and thus lower costs.

Early in the design, L/D is estimated by using the coefficient of lift (C[L]) and coefficient of drag (C[D]) calculated from similar types of aircraft and the designer's personal experience. The ratio of these dimensionless coefficients is the same as the ratio of the actual forces, because each coefficient is scaled by the same reference quantity, namely (1/2)*ρ*V^2*S, where S is the planform area of the wing.

It can be shown that (L/D)[max] occurs when the parasite drag of the aircraft is equal to its induced drag. This makes sense because parasite drag increases (at an increasing rate) with velocity, while induced drag decreases (at a decreasing rate) with velocity; the speed at which the two curves cross is therefore the speed of minimum total drag, and hence of maximum L/D for a given lift.
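Under the standard parabolic drag polar C[D] = C[D0] + C[L]²/(πeAR) (a textbook model, not stated in this entry), the parasite-equals-induced condition fixes the optimum lift coefficient at C[L] = √(πeAR·C[D0]). A quick sketch with made-up light-aircraft numbers:

```python
import math

def max_lift_to_drag(cd0, e, ar):
    """(L/D)max for a parabolic drag polar CD = CD0 + CL^2 / (pi*e*AR)."""
    k = 1.0 / (math.pi * e * ar)       # induced-drag factor
    cl_opt = math.sqrt(cd0 / k)        # here induced drag k*CL^2 equals CD0
    ld_max = cl_opt / (cd0 + k * cl_opt ** 2)
    return cl_opt, ld_max

# Hypothetical values: CD0 = 0.025, Oswald efficiency e = 0.8, aspect ratio AR = 7.5
cl_opt, ld_max = max_lift_to_drag(0.025, 0.8, 7.5)
```

The closed-form result (L/D)max = 1/(2√(k·C[D0])) follows from the same condition, which gives a convenient cross-check.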
{"url":"http://everything2.com/title/Lift-to-drag+ratio?showwidget=showCs1330658","timestamp":"2014-04-18T19:06:16Z","content_type":null,"content_length":"21231","record_id":"<urn:uuid:414a2c7e-7540-48e3-8183-404b42acd788>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: st: RE: RE: estimation with a time trend. Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] RE: st: RE: RE: estimation with a time trend. From "Nick Cox" <n.j.cox@durham.ac.uk> To <statalist@hsphsun2.harvard.edu> Subject RE: st: RE: RE: estimation with a time trend. Date Mon, 5 Jul 2010 15:41:19 +0100 Good question, but clearly 1. Most postings in the thread, and the specific suggestions you received, were sent _before_ you made it clear that you were using -xtreg-. 2. If the question is, how best to estimate a time trend, with nothing else said, then translating time to a sensible origin remains good general advice. natasha agarwal On Mon, Jul 5, 2010 at 2:22 PM, Martin Weiss <martin.weiss1@gmx.de> wrote: > <> > The coeff for "t" will stay the same, then, no matter how you "center" it: > ************* > use http://www.stata-press.com/data/r11/nlswork, clear > egen t=group(year) > xtreg ln_wage wks_work tenure south nev_mar age t, fe vce(robust) > replace t=t-7 > xtreg ln_wage wks_work tenure south nev_mar age t, fe vce(robust) > replace t=t-3 > xtreg ln_wage wks_work tenure south nev_mar age t, fe vce(robust) > ************* I tried this and as you said the coefficient stays the same. I was wondering then what is the point in doing this then? natasha agarwal > On Mon, Jul 5, 2010 at 2:01 PM, Nick Cox <n.j.cox@durham.ac.uk> wrote: >> Your original question was >> "I was trying to estimate a production function with an unbalanced >> firm-year panel data and wanted to include a time trend. However I was >> not sure if the time trend was created correctly. >> egen t=group(year) >> I was wondering if anyone could please tell me if this was correct?" 
>> Who knows whether the commands you are using "mean center" the data if
>> you do not tell us what they are?

> I am sorry about this.
> I issue the following line of commands:
> egen t=group(year)
> xtreg lnrval lnk lnw lnvfdi t, fe vce(robust)

>> natasha agarwal
>> On Mon, Jul 5, 2010 at 1:14 PM, Maarten buis <maartenbuis@yahoo.co.uk> wrote:

>>> --- On Mon, 5/7/10, natasha agarwal wrote:
>>>> I am afraid I did not understand "-year- (or -year- minus midpoint) for interpretability?"

>>> Say you have a variable representing year of birth
>>> ranging between 1910 and 1980. I would then usually
>>> have a line like this in my do-file:
>>> gen c_year = year - 1910
>>> Afterwards I would use c_year instead of year. In
>>> this case "midpoint" is 1910, and makes sure that
>>> your new variable is shifted to the left such that
>>> it has the value zero in 1910. This makes sure
>>> that the baseline in your models refers to a point
>>> at the beginning of your observed period. This is
>>> particularly important if you include interactions
>>> or if you care about the constant (e.g. in a random
>>> effects model).

>> I am estimating the model with a within estimator. I thought that it
>> mean centers the data anyway?

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
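The point under discussion can be checked outside Stata as well. A minimal sketch in Python (hypothetical data, not the nlswork panel) shows that shifting ("centering") a trend variable changes only the intercept, never the slope:

```python
# OLS illustration (hypothetical data, not Stata's nlswork panel):
# shifting a trend variable changes only the intercept, not the slope.

def ols(x, y):
    """Return (intercept, slope) from a simple least-squares fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

t = list(range(1, 16))                 # time trend 1..15
y = [2.0 + 0.5 * ti for ti in t]       # exact linear relationship

b0, b1 = ols(t, y)                     # original trend
c0, c1 = ols([ti - 7 for ti in t], y)  # trend shifted by 7, as in the example

print(round(b1, 10) == round(c1, 10))            # True: slope unchanged
print(round(c0 - b0, 10) == round(7 * b1, 10))   # True: intercept absorbs the shift
```

Only the baseline (and hence the constant's interpretation) moves, which is exactly why centering matters for interpretability rather than for the trend coefficient itself.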
{"url":"http://www.stata.com/statalist/archive/2010-07/msg00217.html","timestamp":"2014-04-17T00:58:19Z","content_type":null,"content_length":"11711","record_id":"<urn:uuid:c6e63b69-4184-40b4-a3c5-cb33ab454ca0>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
This reference is for Processing 2.0+. If you have a previous version, use the reference included with your software. If you see any errors or have suggestions, please let us know. If you prefer a more technical reference, visit the Processing Javadoc.

Name

noise()

Examples

float xoff = 0.0;

void draw() {
  xoff = xoff + .01;
  float n = noise(xoff) * width;
  line(n, 0, n, height);
}

float noiseScale = 0.02;

void draw() {
  for (int x = 0; x < width; x++) {
    float noiseVal = noise((mouseX + x) * noiseScale, mouseY * noiseScale);
    line(x, mouseY + noiseVal * 80, x, height);
  }
}

Description

Returns the Perlin noise value at specified coordinates. Perlin noise is a random sequence generator producing a more natural, harmonic succession of numbers than that of the standard random() function. It was invented by Ken Perlin in the 1980s and has been used in graphical applications to generate procedural textures, shapes, terrains, and other seemingly organic forms.

In contrast to the random() function, Perlin noise is defined in an infinite n-dimensional space, in which each pair of coordinates corresponds to a fixed semi-random value (fixed only for the lifespan of the program). The resulting value will always be between 0.0 and 1.0. Processing can compute 1D, 2D and 3D noise, depending on the number of coordinates given. The noise value can be animated by moving through the noise space, as demonstrated in the first example above. The 2nd and 3rd dimensions can also be interpreted as time.

The actual noise structure is similar to that of an audio signal, in respect to the function's use of frequencies. Similar to the concept of harmonics in physics, Perlin noise is computed over several octaves which are added together for the final result. Another way to adjust the character of the resulting sequence is the scale of the input coordinates.
As the function works within an infinite space, the value of the coordinates doesn't matter as such; only the distance between successive coordinates is important (such as when using noise() within a loop). As a general rule, the smaller the difference between coordinates, the smoother the resulting noise sequence. Steps of 0.005-0.03 work best for most applications, but this will differ depending on use.

Syntax

noise(x)
noise(x, y)
noise(x, y, z)

Parameters

x    float: x-coordinate in noise space
y    float: y-coordinate in noise space
z    float: z-coordinate in noise space

Returns

float

Related

noiseDetail()

Updated on March 27, 2014 07:26:06pm EDT
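The step-size rule of thumb above can be demonstrated with a simple 1D value-noise sketch in Python. This is an illustrative stand-in, not Processing's gradient-based noise() algorithm, and all names here are made up for the demonstration:

```python
import random

def make_value_noise(seed=1, size=256):
    """1D value noise: random lattice values, smoothstep-interpolated.
    (A simplified stand-in for Perlin noise; the smoothness-vs-step
    behaviour described above is the same.)"""
    rng = random.Random(seed)
    lattice = [rng.random() for _ in range(size)]
    def noise(x):
        i = int(x) % size
        f = x - int(x)
        t = f * f * (3 - 2 * f)  # smoothstep fade between lattice points
        return lattice[i] * (1 - t) + lattice[(i + 1) % size] * t
    return noise

noise = make_value_noise()

def roughness(step, n=1000):
    """Mean absolute difference between successive samples."""
    vals = [noise(i * step) for i in range(n)]
    return sum(abs(b - a) for a, b in zip(vals, vals[1:])) / (n - 1)

print(roughness(0.01) < roughness(0.5))  # True: smaller steps, smoother output
```

Sampling with a small step keeps successive coordinates close together, so consecutive values differ little; a large step jumps across lattice cells and the sequence looks much rougher.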
{"url":"http://processing.org/reference/noise_.html","timestamp":"2014-04-16T10:09:56Z","content_type":null,"content_length":"9984","record_id":"<urn:uuid:c8997d75-6485-4dc6-90aa-acae03b07dac>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
Estimating the digits in a Mersenne prime — for dummies
August 28, 2008, 1:57 pm

At the end of this post, I made a totally naive guess that the recently discovered candidate for \(M_{45}\), the 45th Mersenne prime, would have 10.5 million digits. There was absolutely no systematic basis for that guess, but I did suggest having an office pool for the number of digits, so what I lack in mathematical sophistication is made up for by my instinct for good nerd party games. On the other hand, Isabel at God Plays Dice predicted 14.5 million digits based on a number-theoretic argument. Since I am merely a wannabe number theorist, I can’t compete with that sort of thing. But I can make up a mean Excel spreadsheet, so I figured I’d do a little data plotting and see what happened.

If you make a plot of the number of digits in \(M_n\), the nth Mersenne prime, going all the way back to antiquity, here’s what you get:

The horizontal axis is n and the vertical…

It's official: They're prime
September 17, 2008, 11:56 am

The numbers believed to be the 45th and 46th Mersenne primes have been proven to be prime. The 45th Mersenne prime is \(2^{37156667} - 1\) and the 46th is \(2^{43112609} - 1\). Full text of these numbers is here and here.

Of course what you are really wanting to know is how my spreadsheet models worked out for predicting the number of digits in these primes. First, the data:

• Number of digits actually in \(M_{45}\): 11,185,272
• Number of digits actually in \(M_{46}\): 12,978,189

My exponential model (\(d = 0.5867 e^{0.3897 n}\)) was, unsurprisingly, way off — predicting a digit count of over 24.2 million for \(M_{45}\) and over 35.8 million for \(M_{46}\).
But the sixth-degree polynomial — printed on the scatterplot at the post linked to above — was… well, see for yourself:

• Number of digits predicted by 6th-degree polynomial model for \(M_{45}\):…

The Chronicle Blog Network, a digital salon sponsored by The Chronicle of Higher Education, features leading bloggers from all corners of academe. Content is not edited, solicited, or necessarily endorsed by The Chronicle.
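The actual digit counts quoted above are easy to verify without computing the primes themselves: since \(2^p\) is never a power of 10, the Mersenne number \(2^p - 1\) has exactly \(\lfloor p \log_{10} 2 \rfloor + 1\) decimal digits. A few lines of Python confirm the counts:

```python
import math

def mersenne_digits(p: int) -> int:
    """Digits of 2**p - 1: same digit count as 2**p, since 2**p is
    never a power of 10, i.e. floor(p*log10(2)) + 1."""
    return math.floor(p * math.log10(2)) + 1

print(mersenne_digits(37156667))  # 11185272 digits in M45
print(mersenne_digits(43112609))  # 12978189 digits in M46
```

Double-precision log10(2) is more than accurate enough here; the fractional parts of p*log10(2) are nowhere near an integer boundary for these exponents.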
{"url":"http://chronicle.com/blognetwork/castingoutnines/tag/mersenne-prime/","timestamp":"2014-04-18T05:39:39Z","content_type":null,"content_length":"54646","record_id":"<urn:uuid:332b5370-e777-4999-9249-cdd7d447b3fa>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: June 2010

More about algebraic simplification

• To: mathgroup at smc.vnet.net
• Subject: [mg110494] More about algebraic simplification
• From: "S. B. Gray" <stevebg at ROADRUNNER.COM>
• Date: Mon, 21 Jun 2010 02:10:57 -0400 (EDT)
• Reply-to: stevebg at ROADRUNNER.COM

In response to someone who objected to my statement that replacement rules are not adequate for what I want to do, I will state the problem more completely without getting into irrelevant details.

I have n points in R3 in general position. Through all (n choose 3) sets of 3 points I construct a plane and a circle. The plane computation results in three parameters for each plane, a[i], b[i], and c[i], each a simple but not trivial function of x,y,z of the 3 points it goes through. The plane is now described by 3 parameters instead of the 9 for the 3 points. Each circle is described, after a moderately complex computation, as center, radius, and unit normal, each also a function of x,y,z of the 3 points. So the circle is now described by 7 parameters, each with a clear geometric meaning that can be verified using Graphics3D, rather than the original 9.

To avoid extreme complexity in the following steps, I do not let Mathematica reduce these parameters to the initial x,y,z's, but define new symbolic variables describing the plane and circle. Next, I want to find the line that is the intersection of two of the planes, z=x*zm + zd and y=x*ym + yd, where the four new parameters are functions of the 6 parameters describing the 2 planes. Finally I want to compute the two (or zero) intersections of that line and the circle on one of the 2 planes, in terms of ym,yd,zm,zd and the 7 circle parameters.
If I let Mathematica express these intersections in terms of the x,y,z's of the six points defining the two planes, the result is a horribly complicated expression that is not only incomprehensible but involves repeated evaluation of quite a few subexpressions. It makes much more sense to define new variables at each step and derive symbolic expressions for them in terms of the results of the previous step.

My questions are:

1. If I do everything symbolically, what is the best method of preventing Mathematica from expressing the final answer in terms of the initial parameters?

2. If substitution rules allow this to be done, how can it be done?

3. Having read a few responses to my first, crude, statement of this problem, I still do not see a clear answer.

4. It would be nice for Mathematica, first, to be able to prevent full evaluation of the final answer in terms of the initial parameters, and, second, to find suitable intermediate symbolic variables by itself, both to create comprehensible expressions and to reduce duplicate evaluations.

5. Obviously computations by many Mathematica users involve a similar number of separate steps where the same type of help would be useful, so I wonder if WR is planning to address this.

Steve Gray
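On point 4, this capability is usually called common-subexpression elimination. As a hedged sketch of the idea (in SymPy/Python rather than Mathematica, and with a toy expression rather than the planes and circles described above), SymPy's cse() introduces intermediate variables automatically:

```python
import sympy as sp

# Common-subexpression elimination: cse() introduces intermediate symbols
# (x0, x1, ...) for repeated subexpressions automatically.
# Illustrative only; the poster's plane/circle expressions are not reproduced.
x, y, z = sp.symbols('x y z')
expr = sp.sqrt(x**2 + y**2) + (x**2 + y**2) * z + sp.sin(x**2 + y**2)

replacements, reduced = sp.cse([expr])
for sym, sub in replacements:
    print(sym, '=', sub)   # e.g. x0 = x**2 + y**2
print(reduced[0])          # the original expression rewritten in terms of x0
```

The repeated subexpression is evaluated once and reused, which is exactly the "suitable intermediate symbolic variables" the poster asks for.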
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Jun/msg00383.html","timestamp":"2014-04-21T07:21:43Z","content_type":null,"content_length":"27709","record_id":"<urn:uuid:39c6a02f-3e5a-4dd2-977f-83014b21a28a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
in Schelog

The arguments of Schelog predicates can be any Scheme objects. In particular, composite structures such as lists, vectors and strings can be used, as can Scheme expressions using the full array of Scheme's construction and decomposition operators. For instance, consider the following goal:

(%member x '(1 2 3))

Here, %member is a predicate, x is a logic variable, and '(1 2 3) is a structure. Given a suitably intuitive definition for %member, the above goal succeeds for x = 1, 2, and 3.

Now to defining predicates like %member:

(define %member
  (%rel (x y xs)
    [(x (cons x xs))]
    [(x (cons y xs)) (%member x xs)]))

Ie, %member is defined with three local variables: x, y, xs. It has two clauses, identifying the two ways of determining membership.

The first clause of %member states a fact: For any x, x is a member of a list whose head is also x.

The second clause of %member is a rule: x is a member of a list if we can show that it is a member of the tail of that list. In other words, the original %member goal is translated into a subgoal, which is also a %member goal.

Note that the variable y in the definition of %member occurs only once in the second clause. As such, it doesn't need you to make the effort of naming it. (Names help only in matching a second occurrence to a first.) Schelog lets you use the expression (_) to denote an anonymous variable. (Ie, _ is a thunk that generates a fresh anonymous variable at each call.) The predicate %member can be rewritten as

(define %member
  (%rel (x xs)
    [(x (cons x (_)))]
    [(x (cons (_) xs)) (%member x xs)]))

We can use constructors — Scheme procedures for creating structures — to simulate data types in Schelog. For instance, let's define a natural-number data-type where 0 denotes zero, and (succ x) denotes the natural number whose immediate predecessor is x.
The constructor succ can be defined in Scheme as:

(define succ
  (lambda (x) (vector 'succ x)))

Addition and multiplication can be defined as:

(define %add
  (%rel (x y z)
    [(0 y y)]
    [((succ x) y (succ z)) (%add x y z)]))

(define %times
  (%rel (x y z z1)
    [(0 y 0)]
    [((succ x) y z) (%times x y z1) (%add y z1 z)]))

We can do a lot of arithmetic with this in place. For instance, the factorial predicate looks like:

(define %factorial
  (%rel (x y y1)
    [(0 (succ 0))]
    [((succ x) y) (%factorial x y1) (%times (succ x) y1 y)]))

The above is a very inefficient way to do arithmetic, especially when the underlying language Scheme offers excellent arithmetic facilities (including a comprehensive number "tower" and exact rational arithmetic). One problem with using Scheme calculations directly in Schelog clauses is that the expressions used may contain logic variables that need to be dereferenced. Schelog provides the predicate %is that takes care of this. The goal (%is X E) unifies X with the value of E considered as a Scheme expression. E can have logic variables, but usually they should at least be bound, as unbound variables may not be palatable values to the Scheme operators used in E.

We can now directly use the numbers of Scheme to write a more efficient %factorial predicate:

(define %factorial
  (%rel (x y x1 y1)
    [(0 1)]
    [(x y) (%is x1 (- x 1)) (%factorial x1 y1) (%is y (* y1 x))]))

A price that this efficiency comes with is that we can use %factorial only with its first argument already instantiated. In many cases, this is not an unreasonable constraint.
In fact, given this limitation, there is nothing to prevent us from using Scheme's factorial directly:

(define %factorial
  (%rel (x y)
    [(x y) (%is y (scheme-factorial x))]))

or better yet, "in-line" any calls to %factorial with %is-expressions calling scheme-factorial, where the latter is defined in the usual manner:

(define scheme-factorial
  (lambda (n)
    (if (= n 0) 1
        (* n (scheme-factorial (- n 1))))))

One can use Scheme's lexical scoping to enhance predicate definition. Here is a list-reversal predicate defined using a hidden auxiliary predicate:

(define %reverse
  (letrec
    ((revaux
       (%rel (x y z w)
         [('() y y)]
         [((cons x y) z w) (revaux y (cons x z) w)])))
    (%rel (x y)
      [(x y) (revaux x '() y)])))

(revaux X Y Z) uses Y as an accumulator for reversing X into Z. (Y starts out as (). Each head of X is consed on to Y. Finally, when X has wound down to (), Y contains the reversed list and can be returned as Z.) revaux is used purely as a helper predicate for %reverse, and so it can be concealed within a lexical contour. We use letrec instead of let because revaux is a recursive procedure.

Schelog provides a couple of predicates that let the user probe the type of objects. The goal (%constant X) succeeds if X is an atomic object, ie, not a list or vector. The predicate %compound, the negation of %constant, checks if its argument is indeed a list or a vector.

The above are merely the logic-programming equivalents of corresponding Scheme predicates. Users can use the predicate %is and Scheme predicates to write more type checks in Schelog. Thus, to test if X is a string, the following goal could be used:

(%is #t (string? X))

User-defined Scheme predicates, in addition to primitive Scheme predicates, can be thus imported.
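The accumulator idiom behind revaux translates directly into other languages. As a small sketch (function names hypothetical, chosen to mirror the Schelog definitions), the same reversal in Python:

```python
def revaux(xs, acc):
    """Mirror of Schelog's revaux: cons each head of xs onto acc;
    when xs is exhausted, acc holds the reversed list."""
    if not xs:
        return acc
    return revaux(xs[1:], [xs[0]] + acc)

def reverse(xs):
    """Mirror of %reverse: start the accumulator at the empty list."""
    return revaux(xs, [])

print(reverse([1, 2, 3]))  # [3, 2, 1]
```

As in the Schelog version, the helper does all the work and the public function merely supplies the initial empty accumulator.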
{"url":"http://www.ccs.neu.edu/home/dorai/schelog/schelog-Z-H-3.html","timestamp":"2014-04-18T11:28:15Z","content_type":null,"content_length":"17998","record_id":"<urn:uuid:9f4b2949-ae4e-4c1a-9996-4086d7a0a9d1>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
Palm Springs North, FL Math Tutor

Find a Palm Springs North, FL Math Tutor

...I have a yoga certification and have been teaching yoga since 2003. I have worked in churches, wellness centers, gyms and yoga studios. I have been helping students with SAT Math for the last two years.

16 Subjects: including SAT math, algebra 1, algebra 2, chemistry

...As the testing deadline approaches, we will meet less often, but each meeting will last longer. The emphasis in grades through 2nd is add, subtract, multiply, and divide whole numbers. Speed will be a factor in mastery of 1- and 2-column whole numbers.

30 Subjects: including SAT math, probability, linear algebra, discrete math

...I worked in a wet laboratory for six years doing medical research. This experience has expanded my biological background. I taught many graduate students basic skills in a wet laboratory.

18 Subjects: including geometry, study skills, biochemistry, cooking

I graduated from Florida International University with a master's degree in Accounting. I have had prior experience working as a substitute teacher in elementary and middle school, teaching Math and English. I am highly skilled in the subject matter of math and algebra and I will provide you with t...

3 Subjects: including algebra 2, algebra 1, prealgebra

...My love for teaching stems from my passion to learn. I was a Biology major in college and hope to one day go to medical school. I pride myself on being a tough yet fair educator, one who knows when to balance being strict and fun.
15 Subjects: including calculus, elementary (k-6th), physics, precalculus
{"url":"http://www.purplemath.com/Palm_Springs_North_FL_Math_tutors.php","timestamp":"2014-04-20T06:36:56Z","content_type":null,"content_length":"24203","record_id":"<urn:uuid:0ba2734e-b530-45a2-a281-41087d2bd447>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
Abstracts - Listed by Speaker Abstracts are displayed on the website for information only and are not to be considered a published document. Some inconsistencies in display of fonts may occur in some web browser configurations. Abstracts will appear on the website within 10 working days of the date of submission. A - B - C - D - E - F - G - H - I - J - K - L - M - N - O - P - Q - R - S - T - U - V - W - X - Y - Z • Adamus, Janusz - Vertical components and local geometry of analytic mappings • Agarwal, Mahesh - p-adic L-functions for GSp(4) ×GL(2) • Aguiar, Marcelo - Monoidal categories, Joyal's species, and combinatorial Hopf algebras • Alderson, Tim - Maximal Projective Codes • Alper, Jarod - Good moduli spaces for Artin stacks • Amleh, Amal - On Second-Order Rational Difference Equations • Anco, Stephen - Symmetry analysis of nonlinear wave equations in n > 1 dimensions • Archibald, Tom - Formulas, Concepts, and the "Jacobi Limit" in the 19th C. • Badescu, Alex - Risk neutral measures for GARCH option pricing with normal variance-mean mixture examples • Badzioch, Bernard - Categorical algebra of mapping spaces • Barr, Michael - A *-autonomous category of topological abelian groups • Bauer, Kristine - Spectral sequences of operad algebras • Baum, Paul - Geometric structure in the representation theory of p-adic groups • Bayer, Arend - Quantum cohomology of C^N/m[r] • Bell, John - The Axiom of Choice and the Law of Excluded Middle • Bell, John - On the Indecomposability of the Continuum • Bellhouse, David - Eighteenth Century English Life Annuities: Calculations and Applications • Beltran, Carlos - On the complexity of approximating solutions of systems of polynomial equations • Beny, Cedric - Decoherence, broadcasting and the emergence of phase-space • Bergner, Julia - Homotopy fiber products of homotopy theories • Berlekamp, Elwyn - History of Long Block Codes • Bierstone, Edward - Problems on resolution of singularities • Bilinski, Robert - Realism as a source 
of mathematical imagination • Blahut, Richard - Source Coding in Information Theory • Bolder, Dave - Simple Joint Macroeconomic and Term-Structure Models • Borba, Marcelo C. - Modeling, Projects and Internet: Alternatives to undergraduate basic mathematics courses • Borwein, Jon - Computationally discovered and proved generating functions • Brewster, Rick - The Restricted Homomorphism Problem • Brown, James Robert - Mathematical Explanation • Brudnyi, Alexander - On Local Behavior of Holomorphic Functions Along Complex Submanifolds of C^N • Bunge, Marta - Fundamental Pushout Toposes • Burchard, Almut - Solitary waves on an elastic curve • Burgess, Andrea - Closed trail decompositions of complete equipartite graphs • Butler, Steve - Eigenvalues of 2-edge-coverings • Butscher, Adrian - Collapsing Constant Mean Curvature Surfaces in Riemannian Manifolds • Cadenillas, Abel - Optimal Dividend Policy When There Are Business Cycles • Calculus: The Musical - Sadie Bowman & Mark Gutman • Campolieti, Joe - Closed-Form Spectral Expansions for First-Passage Time Densities, Lookback and Barrier Options under New Families of Diffusions • Can, Mahir - Bruhat Orders and Combinatorics on Reductive Monoids • Carette, Jacques - Algorithm families, or how to write less code that does more • Cavalieri, Renzo - G-Hodge Integrals, Gerby Localization and the GW Theory of [C^3/Z[3]] • Cavalieri, Renzo - Hyperelliptic Hodge integrals • Cavers, Michael - Reducible inertially arbitrary matrix patterns • Chakrabarti, Debraj - Holomorphic extension of CR functions from non-smooth hypersurfaces • Charette, Virginie - Affine deformations of the holonomy group of a three-holed sphere • Chau, Albert - The Kahler-Ricci flow on complete non-compact manifolds and uniformization • Chebolu, Sunil A - A new perspective on groups with periodic cohomology • Chebolu, Sunil B - Towards a refinement of the Bloch-Kato conjecture • Chekhov, Leonid - Teichmuller theory of bordered surfaces • Cheviakov, Alexei F. 
- Construction and Applications of Nonlocally Related Systems of Partial Differential Equations • Christensen, Dan - The generating hypothesis in the stable module category • Cioaba, Sebastian - Eigenvalues of Graphs • Cockett, Robin - On the semantics of reversible computation • Coffman, Adam - Unfolding CR singularities of real 4-manifolds in C5 • Consani, Katia - Motives and Noncommutative Geometry • Cordy, Michelle and Donna Kotsopoulos - Seeing And Squinting: Occasioning Imagination In Mathematics Learning • Craven, Stewart - Creating Sculptures to Explore Mathematics • Davison, Matt - A model of contagion in the P&C Insurance industry • Dawson, Robert - Your Name Here: The Scandalous Evolution of Bryce's Commercial Arithmetic • DeVidi, David - Pluralisms, Mathematical and Logical • Diaz-Espinosa, Oliver - Long wave expansions for water waves over random bottom • Drudge, Keldon - Existence and non-existence results for "Extremal" line sets in PG(3,q) • Drudge, Keldon - Measuring gap risk for CPPI strategies: implied and otherwise • Dukes, Peter - Directed complete bipartite graph decompositions and three-state sensor networks • Ebenfelt, Peter - Real hypersurfaces with constant Levi degeneracy • Eberly, Wayne - On the Reliability of Block Wiedemann and Lanczos Algorithms-Another Piece of the Puzzle • Elliott, George - A canonical form for the Pimsner-Voiculescu embedding • Elzinga, Randy - The Isotropic Bound on the Independence Number • Emerson, Heath - Equivariant correspondences and applications • Escobar-Anel, Marcos - Stochastic Correlation in the Valuation of Low-Dimensional Derivative Contracts • Fallat, Shaun - The combinatorics of totally positive minors and implications • Farmer, William M. 
- Formalizing the Context in Computational Mathematics • Feke, Jackie - Ptolemy's Mathematical Realism • Felikson, Anna - Combinatorics of Coxeter polytopes • Figalli, Alessio - A mass transportation approach to quantitative isoperimetric inequalities • Fomin, Sergei - Cluster algebras associated with bordered surfaces • Fountain, John - Reflection Monoids • Fried, Mike - Atomic Orbital-type cusps on Alternating Group Modular Towers • Funk, Jonathon - The universal locally constant covering of an inverse semigroup • Gallo, Clement - Transverse instability for the dark solitons of the cubic defocusing NLS equation • Gaudet, Vincent - Energy Efficient Decoding Using Analog VLSI Techniques • Ge, Yuxin - Regularity of optimal transportation maps on nearly spherical manifolds • Geiss, Christof - Cluster algebras arising from preinjective modules • Geng, Jiansheng - Invariant Tori of Full Dimension for a Nonlinear Schrödinger Equation • Giesbrecht, Mark - New Algorithms for Lacunary Polynomials • Godelle, Eddy - The braid rook monoid • Godin, Veronique - Recent/future advances in string topology • Gong, Xianghong - Regularity for the CR vector bundle problem • Gorokhovsky, Sasha - Algebraic index theorem for Fourier integral operators • Goulden, Ian - New combinatorial solutions to the KP equations • Grasselli, Matheus - Insurance products in markets with stochastic volatility • Graveline, Jeremy - Exchange Rate Volatility and the Forward Premium Anomaly • Grugric, Izak - Equivariant geometric bordism • Guo, Li - Differential Algebraic Birkhoff Decomposition and the renormalization of multiple zeta values • Guyenne, Philippe - Hamiltonian formulation and long wave models for internal waves • Hajac, Piotr - Yetter-Drinfeld Structures Revisited • Hassner, Martin - Self-Replicating Trellis Graphs • Heden, Olof - A tree of perfect codes • Henry, Marc - Mass transportation duality in econometrics • Higginson, William - The Mathematics of Paper Folding • Hohlweg, Christophe - 
Permutahedra and Generalized Associahedra • Howard, Ben - Intersection theory on Shimura surfaces • Hurd, Tom - First passage problems for jump diffusions arising in finance • Hyndman, Cody - Forward-backward stochastic differential equations and term structure derivatives • Iovita, Adrian - A Jacquet-Langlands correspondence for p-adic families of modular • Isaksen, Dan - Motivic Ext computations • Jackson, David - The f^4-and Penner models of 2d Quantum gravity, the moduli space of curves and properties of 2-cell embeddings of graphs in Riemann surfaces • Jacobson, Michael - Computing the Regulator of a Real Quadratic Field • Jaimungal, Sebastian - From Spot to Forward Stochastic Volatility Models for Commodities • Jardine, Rick - Parabolic groupoids • Jiang, Yufeng - The integral Chow ring of toric Deligne-Mumford stacks • Johnson, Brenda - Algebraic Goodwillie Calculus Revisited • Johnson, Keith - Abel Formal Group Laws and Cohomology • Johnson-Leung, Jennifer - The equivariant main conjecture of Iwasawa theory for imaginary quadratic fields • Jones, Alexander - The Crime of Vettius Valens • Kalimipalli, Madhu - Information Content of Equity Volatility for Default and Liquidity Risks in Corporate Bond Market • Kaltofen, Erich - Expressing a Fraction of Two Determinants as a Determinant • Kaminker, Jerry - Matrix integrals and von Neumann algebras of compact groups • Kapranov, Mikhail - Algebro-geometric models for the spaces of unparametrized paths • Kauffman, Louis - q-Deformed Spin Networks and Topological Quantum Computation • Kaveh, Kiumars - Convex bodies, isoperimetric inequality and degree of line bundles • Kempf, Achim - On an information theoretic ultraviolet cutoff in curved space • Kent, Deborah - Mathematicians in search of war work, 1917-1918 • Kezys, John - Geometry from the Renaissance Artist's Eye • Khanin, Konstantin - Localization and pinning for directed polymers • Kholodnyi, Valery - The Semilinear Evolution Equation for American Contingent 
Claims: Successive Approximations and Bounds • Kim, Byoung-du - Iwasawa theory for supersingular primes and the parity conjecture • Kim, Jaehong - A splitting theorem for holomorphic Banach bundles • Kim, Young-Heon - Curvature and continuity of optimal transport • Kinzebulatov, Damir - On uniform subalgebras of H^¥ generated by almost-periodic functions • Kirkland, Steve - Laplacian integral graphs of maximum degree 3 • Kolkiewicz, Adam - Estimation for Diffusion Processes Using Reverse-Time Specifications • Kotsireas, Ilias - Systems of Polynomial Equations in Combinatorial Design Theory • Kotsopoulos, Donna and Michelle Cordy - Seeing And Squinting: Occasioning Imagination In Mathematics Learning • Krashen, Daniel - Index reduction formulas for Brauer classes • Kumar, Manish - The fundamental group of smooth affine curves in positive characteristic • Labahn, George - Conditioning of the Generalized Hankel Eigenvalue Problem • Laca, Marcelo - Hecke algebras from groups acting on trees • Landi, Giovanni - Dirac operators on noncommutative manifolds • Lannes, David - The Camassa-Holm and Degasperis-Procesi equations and water waves • Lau, Lap Chi - On approximate min-max theorems for graph problems • Lawson, Blaine - A Projective Analogue of Wermer's Theorem • Lawson, Blaine - Projective Hulls, Projective Linking, and Boundaries of Analytic Varieties • Lebl, Jiri - Levi-flat hypersurfaces with real analytic boundary • Lempert, Laszlo - Holomorphic Banach bundles over compact manifolds • Li, Hanfeng - Smooth algebras • Li, Hua - Pricing and hedging European Options with uncertain parameters • Li, Zhenheng - Representations of the Symplectic Renner Monoid • Li, Zhuo - Poincaré Polynomials of Combinatorially Smooth Toric Varieties • Liang, Songxin - A New Maple Package for Solving Parametric Polynomial Systems • Lisonek, Peter - Steganography with linear codes • Lloyd, Seth - Quantize clocks, not gravity • Loten, Cynthia - Holes and Chordal Graphs • Lukacs, Gabor - 
Categorical Methods in Topological Groups • Lyaghfouri, Abdeslem - Hölder Continuity of Solutions to the A-Laplace Equation Involving Measures • Magyar, Peter - Fusion of affine Schubert varieties • Mahanta, Snigdhayan - DG categories in noncommutative geometry • Makarov, Roman - Analysis and Classification of Nonlinear Diffusion Financial Models • Malandro, Martin - A Fast Fourier Transform for the Rook Monoid • Manes, Ernie - Relational Models Revisited • Marcotte, Odile - On the relationship between graph colouring invariants and their fractional counterparts • Martin, Robert - Towards sampling theory on spacetime • Math Imagination Musical Performance - Daryn Bee, Jenna Bee, George Gadanidis, and friends • Math-e-Motion - Stewart Craven • May, John - A Symbolic-Numeric Approach to Computing Inertia of Products of Matrices of Rational Numbers • McQuillan, Dan - Magic Labeling of 2-regular graphs • McQuillan, Jim - Codes and Hyperovals • Meagher, Karen - Using algebraic graph theory to solve problems in design theory • Mendivil, Franklin - Annealing a GA for Constrained Optimization • Menni, Matias - Abstract properties of monads arising in combinatorics • Misiewicz, Zoë - Greek Geometers on Geometry • Moameni, Abbas - Existence and concentration of solitary waves for a class of quasilinear Schrödinger Equations • Mokler, Claus - The face monoid associated to a Kac-Moody group and its infinite Renner monoid • Morava, Jack - Motives of spaces • Moreno, Santiago - Risk Minimization and Optimal Derivative Design in a Principal Agent Game • Moreno-Maza, Marc - Triangular decomposition of polynomial systems: from practice to high performance • Mosca, Michele - Self-testing Quantum Apparatus • Murty, Kumar - Growth of Selmer ranks of Abelian varieties with complex multiplication • Myrvold, Wendy - Finding Independent Sets of a Graph • Niefield, Susan - Lax Presheaves and Exponentiability • Noohi, Behrang • Nowakowski, Richard - "Can you repeat that, sir?" 
• Offin, Daniel - Stability of minimizing solutions in the N-body problem • Open Discussion - Trends and Problems in Category Theory • Ordonez, Hugo Rodriguez - A counterexample to Ganea's conjecture with minimum dimension • Panel Discussion - Mathematical Imagination • Pare, Robert - Kan extensions for double categories • Park, Jeehoon - The Eisenstein-Siegel cocycles and p-adic zeta functions of real quadratic fields • Pearson, Paul - Homology of topological modular forms • Phillips, John - APS Boundary Conditions, KK-Theory and Cuntz-Krieger Systems • Ponge, Raphael - Noncommutative geometry and lower dimensional volumes in Riemannian geometry • Ponto, Kate - Fixed point theory and trace for bicategories • Potgieter, Paul - Large fluctuations of complex oscillations • Pronk, Dorette - Homotopy Theory for Double Categories • Pronk, Dorette - Orbifold Translation Groupoids • Quastel, Jeremy - Wiener meets Korteweg and de Vries • Rahbarnia, Freydoon - Knots, Projections and Graphs • Renner, Lex - Betti Numbers and H-polynomials • Rodrigo, Marianito - A new representation of the local volatility surface • Roe, John - The analytic surgery sequence and eta invariants • Rosebrugh, Bob - Database views, lenses and monads • Roy, Aidan - Highly nonlinear functions in terms of codes, graphs, and designs • Sagan, Bruce - Monomial Bases for NBC Complexes • Saliola, Franco - Left regular bands and Solomon's descent algebras • Saunders, Dave - Risk Contributions of Systematic Factors in Multi-Factor Credit Risk Models • Schlegel, Christian - Generalized Modulation and Iterative Demodulation • Schochet, Claude - C^*-algebras, gauge groups, and rational homotopy • Schost, Eric - Conversion algorithms for orthogonal polynomials • Scull, Laura - Orbifolds and Equivariant Homotopy • Seco, Luis • Shirazi, Hamed - Equiangular lines and abelian covers of complete graphs • Shnirelman, Alexander - Variational problem in a partially ordered set and the problems of fluid dynamics • 
Skandera, Mark - On the cluster basis of Z[x_{1,1},...,x_{3,3}] • Smith, Greg - Constructions of toric Deligne-Mumford stacks • Stancu, Alina - On some new characterizations of ellipsoids • Stanley, Don • Steinberg, Benjamin - On semigroups with basic complex algebra • Stembridge, John - Admissible W-graphs • Storjohann, Arne - Faster algorithms for the Frobenius canonical form • Swishchuk, Anatoliy - Pricing Variance Swaps for Stochastic Volatilities with Delay and Jumps • Tang, Xiang - Characteristic classes for trivialized flat U(n) bundles • Tardif, Claude - A dualistic approach to graph colouring • Tardif, Claude - Hedetniemi's conjecture, 40 years later • The, Dennis - The Principle of Symmetric Criticality in Gauge Theory • Thomas, Hugh - A new partial order on a finite reflection group • Tierney, Myles - The homotopy factorisation system of n-connected maps and n-covers • Torres, Enrique - Commuting Elements and Group Cohomology • Trokhimtchouk, Maxim - Everywhere regularity of certain nonlinear systems • Trukhachev, Dmitry - Analyzing Capacity of Ad-hoc Wireless Networks • Usefi, Hamid - Fox subgroups in modular group algebras • Vakil, Ravi - A natural smooth compactification of the space of elliptic curves in projective space • van Brummelen, Glen - A Different Sort of Sacred Geometry: The Medieval Analemma for Finding the Direction of Mecca • Vassiliadou, Sophia - L^2-cohomology of some complex spaces with singularities • Vatsal, Vinayak - Special values of L-functions modulo p • Veliche, Razvan - Maximally symmetric stable curves • Venjakob, Otmar - Are zeta-functions able to solve Diophantine equations? 
• Vetzal, Ken - Earnings Volatility and Corporate Bond Spreads • Wagner, Dave - Negative correlation inequalities for random-cluster models • Wang, Steven - Schur rings and planar functions • Wang, Xikui - Sequential optimization under uncertainty • Weiss, Al - Congruences between pseudomeasures and Iwasawa theory • Wood, Richard - Frobenius objects in general cartesian bicategories • Xie, Yuzhen - Solving Polynomial Systems Symbolically and in Parallel • Xing, Yongjun - On the Spread of Real Symmetric Matrices with Entries in an Interval • Yang, Tian - A BV structure on the Hochschild cohomology of truncated polynomials • Zhang, Bei - Nonvanishing mod p of Eisenstein series • Zvengrowski, Peter - Cohomology Rings of Finite Fundamental Groups of 3-Manifolds
{"url":"http://cms.math.ca/Events/winter07/abs/by_speaker","timestamp":"2014-04-18T15:46:03Z","content_type":null,"content_length":"30461","record_id":"<urn:uuid:5d5d36a4-c494-4efb-94fb-626907879e9b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Voorhees Township, NJ Algebra 1 Tutor Find a Voorhees Township, NJ Algebra 1 Tutor ...I was originally an engineer for a helicopter company for nearly 4 years and I resigned to start a career in education. I found little fulfillment in the business world especially because I didn't believe I was having a strong positive impact on society. This is not to say that I did not have a... 16 Subjects: including algebra 1, Spanish, calculus, physics ...I then started a homeschool cooperative which I headed for six years in Collingswood, NJ. It grew to over 70 families and 200 students. My oldest started taking college classes during his senior year of high school as a part of the high school option at Gloucester County College. 23 Subjects: including algebra 1, reading, writing, geometry I am currently a senior at Swarthmore College, graduating in June with Pennsylvania Teaching Certification in grades preK-8. I have experience working one-on-one and in groups with students from preK-7 in a variety of diverse settings (urban, suburban, international) and with a variety of diverse n... 32 Subjects: including algebra 1, reading, English, grammar ...I have the ability to present math concepts in a simple, step-by-step approach that those who are "math averse" can understand. I am patient with students and have tutored students ranging from elementary school age through PhD candidates. I am flexible and will work around the student's schedule. 22 Subjects: including algebra 1, geometry, statistics, ASVAB ...They will become more proficient in using ratios, proportions and solving algebraic equations. I have my students develop and expand problem solving skills (creatively and analytically) in order to solve real-world problems, using manipulatives and calculators. Successful completion of this course prepares students for success in Algebra 1. 
12 Subjects: including algebra 1, geometry, ASVAB, algebra 2
{"url":"http://www.purplemath.com/Voorhees_Township_NJ_algebra_1_tutors.php","timestamp":"2014-04-19T12:23:48Z","content_type":null,"content_length":"24728","record_id":"<urn:uuid:67e50e45-3895-42eb-a90a-fc4445d243cd>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus of variations

May 16th 2010, 04:25 AM

Write down the Euler-Lagrange equations for critical points of the functional $I(y)=\int_{x_1}^{x_2}y^2+y'y'''-(y'')^2~dx$

I'm finding this odd since the only functions I've looked at are of the form $g(x,y,y')$, not $f(y',y'',y''')$. I thought the Euler-Lagrange equations were only defined for the first type of function, not the second. Would I do it like this:

$\frac{\partial f}{\partial y'''}=y'$

$\frac{\partial f}{\partial y''}=2(y'')$

So the E-L equation is $2(y'')-\frac{d}{dy'}(y')=0$

Which becomes $2y''-1=0$

May 16th 2010, 05:55 AM

For higher order Lagrangians, the Euler-Lagrange equation is

$\frac{\partial L}{\partial y} - \frac{d}{dx}\left(\frac{\partial L}{\partial y'}\right) + \frac{d^2}{dx^2}\left(\frac{\partial L}{\partial y''}\right) - \cdots + (-1)^n \frac{d^n}{dx^n}\left(\frac{\partial L}{\partial y^{(n)}}\right) = 0.$
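As a check on the general higher-order formula, SymPy's `euler_equations` applies exactly this alternating-sign expansion. Here is a quick sketch in Python (not part of the original thread) applied to the functional in question:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
y = sp.Function('y')

# Integrand of the functional: f = y^2 + y' y''' - (y'')^2
f = y(x)**2 + y(x).diff(x) * y(x).diff(x, 3) - y(x).diff(x, 2)**2

# Applies dL/dy - d/dx(dL/dy') + d^2/dx^2(dL/dy'') - d^3/dx^3(dL/dy''') = 0
(eq,) = euler_equations(f, y(x), x)

# The left-hand side simplifies to 2*y - 4*y'''', i.e. y'''' = y/2
print(sp.simplify(eq.lhs))
```

Note that the $y'y'''$ and $-(y'')^2$ terms each contribute a fourth derivative after the repeated total differentiations, which is why the first-order formula from the opening post cannot give the right answer here.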
{"url":"http://mathhelpforum.com/calculus/144970-calculus-variations-print.html","timestamp":"2014-04-16T21:12:36Z","content_type":null,"content_length":"7992","record_id":"<urn:uuid:28a58946-c6fb-45fc-8f0b-e185b1dd31df>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Economic Order Quantity (EOQ)

Economic order quantity (EOQ) is the order quantity of inventory that minimizes the total cost of inventory management.

The two most important categories of inventory costs are ordering costs and carrying costs. Ordering costs are costs that are incurred on obtaining additional inventories. They include costs incurred on communicating the order, transportation cost, etc. Carrying costs represent the costs incurred on holding inventory in hand. They include the opportunity cost of money tied up in inventories, storage costs, spoilage costs, etc.

Ordering costs and carrying costs pull in opposite directions. If we want to minimize carrying costs, we have to place small orders, which increases the ordering costs. If we want to minimize ordering costs, we have to place few orders in a year, which requires placing large orders and in turn increases the total carrying costs for the period. We need to minimize the total inventory costs, and the EOQ model helps us do just that.

Total inventory costs = Ordering costs + Carrying costs

Taking the first derivative of the total cost function and setting it to zero gives the cost-minimizing order quantity:

EOQ = SQRT(2 × Annual Demand × Cost Per Order / Carrying Cost Per Unit)

ABC Ltd. is engaged in the sale of footballs. Its cost per order is $400 and its carrying cost is $10 per unit per annum. The company has a demand for 20,000 units per year. Calculate the order size, total orders required during a year, total carrying cost and total ordering cost for the year.

EOQ = SQRT(2 × 20,000 × 400/10) = 1,265 units

Annual demand is 20,000 units, so the company will have to place about 16 orders (= annual demand of 20,000 divided by order size of 1,265). Total ordering cost is hence $6,400 ($400 multiplied by 16). Average inventory held is 632.5 units (= (0 + 1,265)/2), which means total carrying costs of $6,325 (i.e. 632.5 × $10).

Written by Obaidullah Jan
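The example's arithmetic can be scripted directly from the formula; a minimal sketch in Python (function and variable names are my own, not from the article):

```python
from math import sqrt

def eoq(annual_demand, cost_per_order, carrying_cost_per_unit):
    """Order size minimizing total (ordering + carrying) inventory cost."""
    return sqrt(2 * annual_demand * cost_per_order / carrying_cost_per_unit)

q = eoq(20_000, 400, 10)                  # ~1,265 units per order
orders_per_year = 20_000 / q              # ~16 orders in a year
total_ordering_cost = round(orders_per_year) * 400   # 16 x $400 = $6,400
total_carrying_cost = (q / 2) * 10        # average inventory q/2 -> ~$6,325
```

At the optimum the two cost components are (approximately) equal, which is a quick sanity check on any EOQ computation.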
{"url":"http://accountingexplained.com/managerial/inventory-management/economic-order-quantity","timestamp":"2014-04-19T04:32:07Z","content_type":null,"content_length":"10160","record_id":"<urn:uuid:3ff52487-0b69-4549-a6fa-562f4c0af07a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
7.4 The Lagrangian for cosmological perturbations

In Section 7.1 we used the fact that the field which should be quantized corresponds to the action (6.1) expanded at second order in the perturbations [437]. We recall again that we are considering an effective single-field theory such as f(R) gravity and scalar-tensor theory with the coupling (6.2). At second order we find the action for the curvature perturbation, Eq. (7.80) [311], whose coefficients involve the quantity defined in Eq. (7.38). In fact, the variation of this action in terms of the field reproduces Eq. (7.37) in Fourier space. We note that there is another approach, the Hamiltonian formalism, which is also useful for the quantization of cosmological perturbations; see [237, 209, 208, 127] for this approach in the context of f(R) gravity and modified gravitational theories.

Introducing the quantities z_s and u, Eq. (7.80) can be written in the form (7.81), where a prime represents a derivative with respect to the conformal time; variation of (7.81) leads to Eq. (7.39) in Fourier space. The transformation of the action (7.80) to (7.81) gives rise to the effective mass term. We have seen in Eq. (7.42) that during inflation (7.81) reduces to the action for a canonical scalar field, as in Section 7.1. From the action (7.81) we understand a number of physical properties in f(R) theories and scalar-tensor theories with the coupling (6.2).

Having a standard d'Alembertian operator, the mode has a speed of propagation equal to the speed of light; this leads to the standard dispersion relation. The sign of the kinetic coefficient determines whether the perturbation is a ghost [100] (see also [145, 161]); the relevant field is defined in Eq. (7.82), and in f(R) gravity it can be written in terms of background quantities using the background equation (2.16). f(R) theories then need to satisfy a positivity condition, which is consistent with the result (5.2) derived by the linear analysis about the Minkowski background, together with the ghost condition (4.56).
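For orientation, the change of variables described here is the standard Mukhanov–Sasaki reduction. A sketch of its generic form, under the conventional identifications z_s = a√(Q_s) and u = z_s ℛ (these identifications are assumptions supplied for context, not taken from the displayed equations of this section):

```latex
S = \int d\eta\, d^3x \left[ \tfrac{1}{2}\,u'^2 - \tfrac{1}{2}(\partial u)^2
      + \tfrac{1}{2}\,\frac{z_s''}{z_s}\,u^2 \right],
\qquad
u_k'' + \left( k^2 - \frac{z_s''}{z_s} \right) u_k = 0 ,
```

where the second equation is the Fourier-space equation of motion obtained by varying the action; on small scales it reproduces the statement above that the mode propagates at the speed of light with the standard dispersion relation ω² = k².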
{"url":"http://relativity.livingreviews.org/Articles/lrr-2010-3/articlesu17.html","timestamp":"2014-04-18T23:46:44Z","content_type":null,"content_length":"21929","record_id":"<urn:uuid:c2ba1dcf-8205-43e4-864b-6217fb7f6570>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
2014; approx. 282 pp; softcover
ISBN-10: 0-8218-9420-X
ISBN-13: 978-0-8218-9420-0
List Price: US$39
Member Price: US$31.20
Order Code: MBK/83

The question "What am I doing?" haunts many creative people, researchers, and teachers. Mathematics, poetry, and philosophy can look from the outside sometimes as ballet en pointe, and at other times as the flight of the bumblebee. Reuben Hersh looks at mathematics from the inside; he collects his papers written over several decades, their edited versions, and new chapters in his book Experiencing Mathematics, which is practical, philosophical, and in some places as intensely personal as Swann's madeleine.

--Yuri Manin, Max Planck Institute, Bonn, Germany

What happens when mid-career a mathematician unexpectedly becomes philosophical? These lively and eloquent essays address the questions that arise from a crisis of reflectiveness: What is a mathematical proof and why does it come after, not before, mathematical revelation? Can mathematics be both real and a human artifact? Do mathematicians produce eternal truths, or are the judgments of the mathematical community quasi-empirical and historically framed? How can we be sure that an infinite series that seems to converge really does converge? This collection of essays by Reuben Hersh makes an important contribution. His lively and eloquent essays bring the reality of mathematical research to the page. He argues that the search for foundations is misleading, and that philosophers should shift from focusing narrowly on the deductive structure of proof, to tracing the broader forms of quasi-empirical reasoning that star the history of mathematics, as well as examining the nature of mathematical communities and how and why their collective judgments evolve from one generation to the next. If these questions keep you up at night, then you should read this book. And if they don't, then you should read this book anyway, because afterwards, they will! 
--Emily Grosholz, Department of Philosophy, Penn State, Pennsylvania, USA Most mathematicians, when asked about the nature and meaning of mathematics, vacillate between the two unrealistic poles of Platonism and formalism. By looking carefully at what mathematicians really do when they are doing mathematics, Reuben Hersh offers an escape from this trap. This book of selected articles and essays provides an honest, coherent, and clearly understandable account of mathematicians' proof as it really is, and of the existence and reality of mathematical entities. It follows in the footsteps of Poincaré, Hadamard, and Polya. The pragmatism of John Dewey is a better fit for mathematical practice than the dominant "analytic philosophy". Dialogue, satire, and fantasy enliven the philosophical and methodological analysis. Reuben Hersh has written extensively on mathematics, often from the point of view of a philosopher of science. His book with Philip Davis, The Mathematical Experience, won the National Book Award in science. Hersh is emeritus professor of mathematics at the University of New Mexico. The book is of interest to everyone who wonders what math really is, whether they are students, teachers, mathematicians, philosophers, or otherwise.
{"url":"http://ams.org/bookstore-getitem/item=mbk-83","timestamp":"2014-04-20T06:10:18Z","content_type":null,"content_length":"17972","record_id":"<urn:uuid:e205bff4-5bc9-4232-ae68-995e68f3891a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
South Bowie, MD Trigonometry Tutor Find a South Bowie, MD Trigonometry Tutor ...It is formula heavy and requires more memorization than the previous year of Algebra 2. Performing well on exams like the SAT is partially about the mathematics and partially about the strategies needed to do well. The SAT includes math equations that range all the way to Algebra 2. 24 Subjects: including trigonometry, reading, geometry, algebra 1 ...I have six years of experience teaching ACT for some of the largest exam prep companies in the country. I have over six years of experience teaching this section of the ACT for some of the largest exam prep companies in the country. I have over six years of experience helping students prepare f... 37 Subjects: including trigonometry, chemistry, English, biology ...Strong reading, writing, and math skills are the foundation for successful completion of the verbal and quantitative skills, reading, mathematics, and language portions of the HSPT. (Please let me know if the school requires any of the optional tests for science, mechanical aptitude, or religion.... 32 Subjects: including trigonometry, chemistry, reading, biology ...I have hundreds of hours of experience and am well-versed in explaining complicated concepts in a way that beginners can easily understand. I specialize in tutoring math (from pre-algebra to differential equations!) and statistics. I completed a B.S. degree in Applied Mathematics from GWU, grad... 16 Subjects: including trigonometry, calculus, geometry, statistics ...Right now I am a full-time SAS programmer, and I have finished the first SAS programming course provided by SAS Institute. I was born and raised in China. 
I lived in Qingdao, China for 17 13 Subjects: including trigonometry, calculus, geometry, Chinese
{"url":"http://www.purplemath.com/South_Bowie_MD_trigonometry_tutors.php","timestamp":"2014-04-21T14:47:36Z","content_type":null,"content_length":"24353","record_id":"<urn:uuid:965ae347-aebb-4057-88d7-c7a351bd3ac1>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
Due to a previously mentioned grant my school received, I basically have free rein for supplies/resources for the math classroom. I obviously have no idea what I need. So here's what I currently have:

Colored Pencils
Electric Pencil Sharpener
Graphing Calculators
Individual white boards
Document Camera
SMART Board
2 Printers (1 is color)
4 Student Computers (that we never use)
Kagan Timer
Student Response System (Clickers)
1 Flip Video Camera

Basically I want supplies that are durable, sustainable, and reusable. But then again, I want to take advantage of the money while we can. Here are the ideas I've found/heard so far.

Classroom Laptops (Don't know how to use them)
iPads (Have one, not really a fan, rather have laptops)
TI-Nspire and Navigator (scared of these!)
CBL/CBR data collection devices for the TI 83/84 (not sure how to use this)
Deluxe Probability Kit
Geometry Reproducibles
Folding Shapes: Solids and Nets
Geometric Solids (Are these the same as the above?) (Recommendations? What size do I need?)
Algebra Tiles (Recommendations?)
Easy Smartboard Teaching Templates

Any other ideas? Professional development is kind of iffy so for now I'm looking more for manipulatives, books, supplies, etc. What should every math classroom have?

I've never taught a unit on transformations before so I started from scratch. Wait, all my lessons are created from scratch. Just sometimes it's someone else's scratch. I digress. My unit only covered reflections, translations, and rotations and I discovered I suck at teaching rotations. But, my translation lesson went over really well and my reflection lesson was probably my best lesson idea ever! Here's what I did. In explicit detail. With bullets.

Download PowerPoint here first.
Also, a study was done using babies. Pictures were put up and babies tended to stare longer at the faces that were most symmetric, alluding to the fact that symmetric faces are more attractive to the eye. • [Slide 2] names the objective. [Slide 3] Have students guess which face is the real one. The real face is always the one on top (this is for you to know and them to find out!). The bottom left is the left side of the face reflected and the bottom right is the right side of the face reflected. As you go through [slide 3] through [slide 9] discuss the similarities and differences. Ask students which pictures look realistic and which don’t. Point out birth marks, shadowing, eye shape, mouth shape, etc. Basically, make the conversation as interesting as possible. • Now go to your internet browser. Ask the students to pick a celebrity famous for being attractive. Google their name to find a picture. The picture needs to be of them facing forward and preferably with both ears showing (which is harder if the person is female). Classroom management tip: You might want to do this ahead of time or where the students can’t see. You never know what type of picture might come up! Copy the url to the picture you’ve found. Then go to the website http://www.anaface.com. Paste the url into the box that says Enter Image URL. Click submit. Then place the dots as directed. Have the students help guide you. Then click next. The site will analyze your picture and talk about vertical and horizontal symmetry. This is a good place to introduce those as vocabulary terms as well as introducing a line of symmetry. • [Slide 10] Put up your picture. Your face. I inserted a 10 x 10 table with a red border and no fill over my picture. Now we start talking about where the lines of horizontal and vertical symmetry would be. Ask the students how we decide if the eyes are symmetrical. What about the ears? 
We want to lead students to measure the distance from each eye to the line of symmetry and compare the lengths. • Before advancing to [slide 11], I had students guess what rating the site gave me. I had previously analyzed my own face and took a screenshot of the website. I put it on the slide to save class time. We went to the next slide and talked about the different aspects of symmetry. I used my own face so that no one else would be offended by the negative comments. • Now pass out the notes worksheet. I picked celebrities that I knew my students liked. Please change any of these to pertain better to your class. Have students use a ruler to draw a straight vertical line. Then draw dots in the center of each eye. Use the centimeter side of the ruler to measure the distance from the left eye to the line. Then measure from the right eye to the line. Repeat for each celebrity. Students may get bored doing the same repeated action. If so, jump straight to the geomirror. Have students put the mirror part on the line of symmetry. Have them look at the left and the right side to see the difference in symmetry. • Have the students do the back of the worksheet on their own, using the geomirror. For left-handed students, they will need to turn the paper upside down. • To end beautifully (pun intended), have the students complete the exit slip [slide 12] on scrap paper. This brings us back to the beginning of our conversation. OMG if the students did not eat this up!! During this whole unit, I heard students talk about how math was actually fun now and they looked forward to this class and it went by so fast. It was encouraging to finally find something that they truly enjoyed. And for the record, their exit slip answers brought out really good comments on what their opinion of beauty was. I shared ALL of them the next day with the whole class. Students also loved the geomirrors and borrowed them throughout the day to use on their own pictures and yearbooks and so on. 
They wanted to use them every day!
{"url":"http://misscalculate.blogspot.com/2010_12_01_archive.html","timestamp":"2014-04-17T09:41:50Z","content_type":null,"content_length":"87817","record_id":"<urn:uuid:a8b7bbb0-eed7-433f-a90b-64b597d07fb9>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Itasca, IL Calculus Tutor Find a Itasca, IL Calculus Tutor ...I've been tutoring test prep for 15 years, and I have a lot of experience helping students get the score they need on the ACT. I've helped students push past the 30 mark, or just bring up one part of their score to push up their overall score. In the past 5 years, I've written proprietary guides on ACT strategy for local companies. 24 Subjects: including calculus, physics, geometry, GRE ...I provide information to students on test taking skills, time management, and simple approaches to complicated problems. Also I provide information on which problems should be performed using the calculator to obtain the correct answer in short amount of time. We work together to create Basic formula sheet to help them memorize the important topics needed for the test. 11 Subjects: including calculus, geometry, algebra 2, trigonometry ...My passion for education comes through in my teaching methods, as I believe that all students have the ability to learn a subject as long as it is presented to them in a way in which they are able to grasp. I use both analytical as well as graphical methods or a combination of the two as needed ... 34 Subjects: including calculus, reading, writing, statistics ...By my senior year, I was named captain of the Women's varsity team and the number 1 singles player. As the oldest member of the team, other girls looked to me as their leader and my coaches expected me to lead practices and team warm-ups. Although, I no longer play competitively, I am always looking for opportunities to practice, keep up my skills, and play a friendly match. 13 Subjects: including calculus, chemistry, geometry, biology ...I can be a little flexible about the timing if given prior notice. In my current role I work in a problem solving environment where I pick up issues and deliver solutions by working with different groups and communicating to management level. 
This type of work helped me to communicate better and to make people understand at all levels. 16 Subjects: including calculus, chemistry, physics, geometry
{"url":"http://www.purplemath.com/Itasca_IL_Calculus_tutors.php","timestamp":"2014-04-21T12:50:42Z","content_type":null,"content_length":"24254","record_id":"<urn:uuid:dffc0cb0-5c61-4b61-9aa3-b8d4dad7fb5d>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
Redwood Estates Find a Redwood Estates Precalculus Tutor ...I've been tutoring math at West Valley College for 4 years now, usually I tutor groups of 4-5 students at a time, but I do have a lot of one on one tutoring experience.I've been tutoring algebra students for about 3 years now. I've always tried to make math as relatable as possible for students ... 17 Subjects: including precalculus, reading, writing, English ...I have a Master's degree in Applied Economics from the University of Santa Clara. My specialty is in Microeconomics, but I am very familiar with all the major aspects of free-market economic theory, including Macroeconomics, Econometrics, Money & Banking and International Economics. I have stro... 22 Subjects: including precalculus, calculus, geometry, statistics I have been teaching science and math for about 30 years. After retiring from the Pajaro Valley Unifed Schools, where I taught secondary Chemistry, Physics, Algebra and Geometry in the Watsonville /Aptos area, I worked at UCSC to supervise the training of college graduates for secondary science teac... 13 Subjects: including precalculus, chemistry, physics, geometry ...I hold a B.S. in chemistry and M.S. in biochemistry. Using molecular cloning, I have also conducted research in biochemistry and neuroscience laboratories at UC Riverside and San Diego State University. I have extensive teaching experience in college-level general chemistry, biochemistry and ne... 18 Subjects: including precalculus, chemistry, calculus, physics I am an experienced, enthusiastic, and dedicated instructor who will help students understand physics and mathematics using various method of instruction. I have a M.S. degree in Condensed Matter Physics from Iowa State University with more than 5 years of teaching experience in physics. I was a r... 14 Subjects: including precalculus, calculus, physics, geometry
{"url":"http://www.purplemath.com/Redwood_Estates_precalculus_tutors.php","timestamp":"2014-04-18T13:56:45Z","content_type":null,"content_length":"24389","record_id":"<urn:uuid:7e8d1ca1-32b9-4a97-a67c-c04be399d0b1>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
st: How do I generate data for count models?

From: Matthijs De Zwaan <m.dezwaan@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: st: How do I generate data for count models?
Date: Sun, 16 May 2010 16:14:56 +0200

Dear Stata-listers,

To get a feel for the behavior of different count model estimators, I am trying to set up a small Monte Carlo experiment. I would like to compare the Poisson pseudo-ML estimator (-poisson, vce(robust)-) and the NegBin2 model (-nbreg-) for variance specifications that do not correspond to the Poisson or NegBin models, and see how they behave for different sample sizes.

Cameron and Trivedi (2009, Microeconometrics Using Stata, ch. 17) describe how to set up a Poisson model using the -rpoisson()- function or a NegBin1 model using -rgamma()- and then -rpoisson()- (see C&T, section 17.2.2). I am not sure if I understand them correctly. Below this text is some basic code that I use to make first a Poisson set up and then a NegBin1 set up. I have the following questions:

(1) Is the set up below correct for those models? If not, what am I doing wrong?
(2) How could I get a set up that corresponds to the NegBin2 model?
(3) How do I make a set up where the mean is still exp(xb), but the variance does not correspond nicely to either the Poisson or the NegBin2 model? C&T (p. 567) mention that variance could for example be lognormally distributed, but I have no preference for any distribution, as long as it does not follow the NegBin2 model.

I am using Stata 10.1 for Mac OS X.
Any help is greatly appreciated,

Matthijs de Zwaan

**** SET UP ***

* Poisson model
set obs 1000
gen x = .2*invnorm(uniform())
gen e = .2*invnorm(uniform())
gen ypois = exp(1 + x + e)        // E(y) = exp(xb)
replace ypois = rpoisson(ypois)   // y ~ Poisson(exp(xb))
summ ypois
poisson ypois x

* NegBin model
gen ynb = exp(1 + x + e)          // E(y) = exp(xb)
replace ynb = rgamma(ynb,1)
replace ynb = rpoisson(ynb)       // y ~ NegBin(mu = exp(xb), alpha = 1)
summ ynb
nbreg ynb x, d(c)

*** END ***

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
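On question (2): a NegBin2 setup (mean mu = exp(xb), variance mu + alpha*mu^2) is usually simulated as a gamma-Poisson mixture: draw a unit-mean gamma heterogeneity term nu with variance alpha, then draw a Poisson count with mean nu*mu. A minimal sketch of that construction in Python/NumPy, for checking the moments outside Stata (parameter values mirror the code above; this illustrates the mixture, it is not a tested Stata translation):

```python
import numpy as np

rng = np.random.default_rng(42)
n, alpha = 200_000, 1.0        # alpha: NegBin2 overdispersion parameter

def rnegbin2(mu, alpha, rng):
    """NegBin2 draw with mean mu and variance mu + alpha*mu^2,
    built as a gamma-Poisson mixture: nu ~ Gamma(shape=1/alpha, scale=alpha)
    has E(nu) = 1 and Var(nu) = alpha, so y | mu ~ Poisson(nu*mu)."""
    nu = rng.gamma(shape=1.0 / alpha, scale=alpha, size=np.shape(mu))
    return rng.poisson(nu * mu)

# Sanity check of the variance identity at a constant mean:
mu0 = 3.0
y0 = rnegbin2(np.full(n, mu0), alpha, rng)
print(y0.mean(), y0.var())     # ~3.0 and ~ mu0 + alpha*mu0^2 = 12.0

# Regression-style setup mirroring the Stata code: E(y|x) = exp(1 + x)
x = 0.2 * rng.standard_normal(n)
y = rnegbin2(np.exp(1 + x), alpha, rng)
```

In Stata the same idea would be something like -gen nu = rgamma(1/alpha, alpha)- followed by -replace y = rpoisson(nu*mu)-, though check -help rgamma()- for the shape/scale convention before relying on it.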
{"url":"http://www.stata.com/statalist/archive/2010-05/msg00850.html","timestamp":"2014-04-17T07:12:49Z","content_type":null,"content_length":"9131","record_id":"<urn:uuid:ceb57e16-83e3-449c-9e56-74904e38d023>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
A Concave Mirror Has A Radius Of Curvature Equal ... | Chegg.com A concave mirror has a radius of curvature equal to 24 cm. Use clearly drawn ray diagrams to locate the image, if it exists, for an object near the axis at distances of (a) 55 cm, (b) 24 cm, (c) 12 cm, and (d) 8.0 cm from the mirror. For each case state the orientation of the image (upright or inverted) and whether it’s real or virtual. (e) Calculate the image distance and the magnification for the above cases.
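For part (e), a quick numerical check using the standard mirror equation 1/d_o + 1/d_i = 1/f with f = R/2 = 12 cm and magnification m = -d_i/d_o (sign convention assumed here: positive d_i means a real image in front of the mirror):

```python
R = 24.0          # radius of curvature in cm
f = R / 2.0       # focal length of a concave mirror: f = R/2 = 12 cm

for d_o in (55.0, 24.0, 12.0, 8.0):
    if d_o == f:
        # object at the focal point: reflected rays emerge parallel, no image
        print(f"d_o = {d_o:4.1f} cm: image at infinity")
        continue
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)   # mirror equation
    m = -d_i / d_o                      # magnification
    kind = "real, inverted" if d_i > 0 else "virtual, upright"
    print(f"d_o = {d_o:4.1f} cm: d_i = {d_i:7.2f} cm, m = {m:+.2f} ({kind})")
```

This reproduces what the ray diagrams should show: a reduced inverted real image at 55 cm, a same-size inverted real image at 24 cm (object at the center of curvature), no image at 12 cm, and a magnified upright virtual image at 8 cm.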
{"url":"http://www.chegg.com/homework-help/questions-and-answers/concave-mirror-radius-curvature-equal-24cm-use-clearly-drawn-ray-diagrams-locate-image-ite-q538320","timestamp":"2014-04-18T10:03:48Z","content_type":null,"content_length":"20996","record_id":"<urn:uuid:3ea8ed6b-2254-4e15-a47d-fea7c76a9564>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
Wayland, MA Algebra Tutors

These private tutors in Wayland, MA are brought to you by WyzAnt.com, the best place to find local tutors on the web. When you use WyzAnt, you can search for Wayland, MA tutors, review profiles and qualifications, run background checks, and arrange for home lessons. Click on any of the results below to see the tutor's full profile. Your first hour with any tutor is protected by our Good Fit Guarantee: You don't pay for tutoring unless you find a good fit.

...I can also help prepare for the SAT and ACT exams. I have experience working with students with ADHD, autism, and specific learning disabilities. Helping students find success where they have only struggled in the past is what drives me.
29 Subjects: including algebra 1, reading, English, writing

...I am also the advisor for the high school math club and the advisor of the National Honor Society at a local high school. I have taught: SAT Prep, Pre-Calculus, Trigonometry, Algebra 2 honors, Algebra 2 standard course, Geometry honors & standard, Algebra 1, MCAS Prep, Pre-Algebra and 4-8th grade...
12 Subjects: including algebra 2, algebra 1, geometry, SAT math

...I worked with the program regularly to create slides for use when running meetings or when reporting to upper management. Algebra goes down much easier if there's a solid prealgebra foundation in graphing, ratios, and exponents. Also important to develop is the ability to read a word problem and translate it into mathematical terms.
44 Subjects: including algebra 1, algebra 2, chemistry, writing

...I work with students in the core academic subjects: math, science, social studies, and English. I also work on study skills, organization and time management with students as needed. I have extensive experience working with students with ADD/ADHD.
31 Subjects: including algebra 2, algebra 1, chemistry, writing

...I enjoy working with students who are motivated but need a little help to understand the subject at hand.
I'm very good at explaining hard concepts or problems using easy-to-understand, everyday examples. I'm patient with my students and experienced in helping them improve their grades in s...
11 Subjects: including algebra 1, algebra 2, calculus, geometry

Related Wayland, MA Tutors:
Wayland, MA act tutors | Wayland, MA act math tutors | Wayland, MA algebra tutors | Wayland, MA calculus tutors | Wayland, MA chemistry tutors | Wayland, MA excel tutors | Wayland, MA geometry tutors | Wayland, MA math tutors | Wayland, MA physics tutors | Wayland, MA prealgebra tutors | Wayland, MA precalculus tutors | Wayland, MA sat tutors | Wayland, MA sat math tutors | Wayland, MA statistics tutors | Wayland, MA trigonometry tutors

Nearby Cities With Tutors:
Ashland, MA algebra tutors | Auburndale, MA algebra tutors | Concord, MA algebra tutors | Holliston algebra tutors | Lincoln Center, MA algebra tutors | Lincoln, MA algebra tutors | Maynard, MA algebra tutors | Needham Jct, MA algebra tutors | Newtonville, MA algebra tutors | Southboro, MA algebra tutors | Southborough algebra tutors | Sudbury algebra tutors | Wellesley algebra tutors | Wellesley Hills algebra tutors | Weston, MA algebra tutors
{"url":"http://www.algebrahelp.com/Wayland_MA_algebra_tutors.jsp","timestamp":"2014-04-21T04:37:41Z","content_type":null,"content_length":"24928","record_id":"<urn:uuid:93dcac63-f08d-4896-9cc5-3c327aeb8c28>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/heisenberg/answered","timestamp":"2014-04-16T07:43:46Z","content_type":null,"content_length":"123429","record_id":"<urn:uuid:b83567d5-9ca2-4385-932b-c7a459340a48>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Ghastly Mathematics

A good number of very high profile philosophers and mathematicians have drawn attention to what they see as the intrinsic beauty in mathematical solutions. For example:

"It seems to me now that mathematics is capable of an artistic excellence as great as that of any music, perhaps greater; not because the pleasure it gives (although very pure) is comparable, either in intensity or in the number of people who feel it, to that of music, but because it gives in absolute perfection that combination, characteristic of great art, of godlike freedom, with the sense of inevitable destiny; because, in fact, it constructs an ideal world where everything is perfect but true." Bertrand Russell (1872-1970), Autobiography, George Allen and Unwin Ltd, 1967, v1, p158

Not so often mentioned, however, is that where there is beauty there can also be ugliness – or worse. For instance, take as an example the paper:

‘A GHASTLY GENERALIZED N-MANIFOLD’ - by Professor Robert J. Daverman and Dr. John J. Walsh (published in the ILLINOIS JOURNAL OF MATHEMATICS, Volume 25, Number 4, Winter 1981)

[Note: The full paper can be accessed by clicking 'Full-text: Open access PDF file' via the link above.]

Less mathematically gifted readers may not find the ghastliness immediately apparent, though; indeed, the word ‘ghastly’ appears only in the title of the paper. Thus, at the risk of irritating those who are familiar with 2-ghastly spaces in acyclic manifold cell-like decompositions, and who will no doubt find the inherent ghastliness to be self-evident, reprinted below is a concise explanation that Professor Daverman has kindly supplied.
“It is ghastly because it contains no cube of dimension 2, 3, …, or N-1, where N is the dimension of the ghastly object.”

Interesting, from the link provided above:

"J. H. Poincaré (1854-1912) (cited in H. E. Huntley, The Divine Proportion, Dover, 1970): The mathematician does not study pure mathematics because it is useful; he studies it because he delights in it and he delights in it because it is beautiful."

Which reflects the difference between the experimentalists and engineers on the one hand and the abstractionists and theoreticians on the other.

Damn, if mathematics were legally considered an "art" based in aesthetics rather than a pragmatic tool for our use, well, just think about it. Imagine the descendants of the person who first figured out that 2 + 2 = 4 and the royalties they would still be enjoying from ownership of the copyright on that intellectual "art." Sarcasm aside, I think that real beauty in mathematics occurs when seriously convoluted and complex mathematics can be reduced to extraordinarily simple equations, such as the case with Newton, Copernicus, Maxwell and, of course, Einstein. This type of "artful" beautification is sorely needed today. Remember that snake oil salesmanship is also an "art." (think anthropogenic global warming)

PhotoDady (not verified) | 12/19/13 | 10:17 AM
{"url":"http://www.science20.com/beachcombing_academia/ghastly_mathematics-126701","timestamp":"2014-04-17T23:31:52Z","content_type":null,"content_length":"46336","record_id":"<urn:uuid:d1dfa794-b065-4ab0-ad6d-6a8ab2a1cf00>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Nonlinear dynamics of localized structures in a hydrogen-bonded chain model including dipole interactions

"... Abstract. We fully characterize the small-parameter limit for a class of lattice models with two-particle long- or short-range interactions with no "exchange energy." One of the problems we consider is that of characterizing the continuum limit of the classical magnetostatic energy of a sequence of m ..." Cited by 1 (0 self) Add to MetaCart

Abstract. We fully characterize the small-parameter limit for a class of lattice models with two-particle long- or short-range interactions with no "exchange energy." One of the problems we consider is that of characterizing the continuum limit of the classical magnetostatic energy of a sequence of magnetic dipoles on a Bravais lattice (letting the lattice parameter tend to zero). In order to describe the small-parameter limit, we use discrete Wigner transforms to transform the stored energy, which is given by the double convolution of a sequence of (dipole) functions on a Bravais lattice with a kernel homogeneous of degree -γ, with γ ≥ N, satisfying the cancellation property, as the lattice parameter tends to zero. By rescaling and using Fourier methods, discrete Wigner transforms in particular, to transform the problem to one on the torus, we are able to characterize the small-parameter limit of the energy depending on whether the dipoles oscillate on the scale of the lattice, oscillate on a much longer length scale, or converge strongly. In the case where γ > N, the result is simple and can be characterized by an integral with respect to the Wigner measure limit on the torus. In the case where γ = N, oscillations essentially on the scale of the lattice must be separated from oscillations essentially on a much longer length scale in order to characterize the energy in terms of the Wigner measure limit on the torus, an H-measure limit, and the limiting magnetization. We show that the classical
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=12009328","timestamp":"2014-04-21T01:46:31Z","content_type":null,"content_length":"13466","record_id":"<urn:uuid:f2ad12fb-5620-42db-880b-97eb5c4c285c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
Hardness of computing the most significant bits of secret keys in Diffie-Hellman and related schemes, 2002

"... This paper takes the pairing-based tripartite key agreement protocol of Joux and develops it to produce three-party key agreement protocols offering additional security properties. We present a number of tripartite, one-round, authenticated protocols related to the MTI and MQV protocols. We also pre ..." Cited by 23 (2 self) Add to MetaCart

This paper takes the pairing-based tripartite key agreement protocol of Joux and develops it to produce three-party key agreement protocols offering additional security properties. We present a number of tripartite, one-round, authenticated protocols related to the MTI and MQV protocols. We also present pass-optimal authenticated and key-confirmed tripartite protocols that generalise the station-to-station protocol.

- Proc. Crypto’02, LNCS, 2002

"... Abstract. A common practice to encrypt with RSA is to first apply a padding scheme to the message and then to exponentiate the result with the public exponent; an example of this is OAEP. Similarly, the usual way of signing with RSA is to apply some padding scheme and then to exponentiate the result ..." Cited by 21 (1 self) Add to MetaCart

Abstract. A common practice to encrypt with RSA is to first apply a padding scheme to the message and then to exponentiate the result with the public exponent; an example of this is OAEP. Similarly, the usual way of signing with RSA is to apply some padding scheme and then to exponentiate the result with the private exponent, as for example in PSS. Usually, the RSA modulus used for encrypting is different from the one used for signing. The goal of this paper is to simplify this common setting. First, we show that PSS can also be used for encryption, and gives an encryption scheme semantically secure against adaptive chosen-ciphertext attacks, in the random oracle model. As a result, PSS can be used indifferently for encryption or signature.
Moreover, we show that PSS allows one to safely use the same RSA key pairs for both encryption and signature, in a concurrent manner. More generally, we show that using PSS the same set of keys can be used for both encryption and signature for any trapdoor partial-domain one-way permutation. The practical consequences of our result are important: PKIs and public-key implementations can be significantly simplified. Key-words: Probabilistic Signature Scheme, Provable Security. 1

"... Let E/F_p be an elliptic curve, and G ∈ E/F_p. Define the Diffie-Hellman function on E/F_p as DH_{E,G}(aG, bG) = abG. We show that if there is an efficient algorithm for predicting the LSB of the x or y coordinate of abG given <E, G, aG, bG> for a certain family of elliptic curves, then there is an algori ..." Cited by 13 (4 self) Add to MetaCart

Let E/F_p be an elliptic curve, and G ∈ E/F_p. Define the Diffie-Hellman function on E/F_p as DH_{E,G}(aG, bG) = abG. We show that if there is an efficient algorithm for predicting the LSB of the x or y coordinate of abG given <E, G, aG, bG> for a certain family of elliptic curves, then there is an algorithm for computing the Diffie-Hellman function on all curves in this family. This seems stronger than the best analogous results for the Diffie-Hellman function in F_p. Boneh and Venkatesan showed that in F_p computing approximately (log p)^(1/2) of the bits of the Diffie-Hellman secret is as hard as computing the entire secret. Our results show that just predicting one bit of the elliptic curve Diffie-Hellman secret in a family of curves is as hard as computing the entire secret. 1

- In ASIACRYPT 2001, volume 2248 of LNCS, 2001

"... Abstract. We study a class of problems called Modular Inverse Hidden Number Problems (MIHNPs). The basic problem in this class is the following: Given many pairs (x_i, msb_k((α + x_i)^(-1) mod p)) for random x_i ∈ Z_p, the problem is to find α ∈ Z_p (here msb_k(x) refers to the k most significant bits o ..."
Cited by 12 (1 self) Add to MetaCart

Abstract. We study a class of problems called Modular Inverse Hidden Number Problems (MIHNPs). The basic problem in this class is the following: Given many pairs (x_i, msb_k((α + x_i)^(-1) mod p)) for random x_i ∈ Z_p, the problem is to find α ∈ Z_p (here msb_k(x) refers to the k most significant bits of x). We describe an algorithm for this problem when k > (log_2 p)/3 and conjecture that the problem is hard whenever k < (log_2 p)/3. We show that assuming hardness of some variants of this MIHNP problem leads to very efficient algebraic PRNGs and MACs.

, 2002 "... The Weil and Tate pairings are a popular new gadget in cryptography and have found many applications, including identity-based cryptography. In particular, the pairings have been used for key exchange protocols. This paper studies the bit security of keys obtained using protocols based on pairings ( ..." Cited by 4 (1 self) Add to MetaCart

The Weil and Tate pairings are a popular new gadget in cryptography and have found many applications, including identity-based cryptography. In particular, the pairings have been used for key exchange protocols. This paper studies the bit security of keys obtained using protocols based on pairings (that is, we show that obtaining certain bits of the common key is as hard as computing the entire key). These results are valuable as they give insight into how many "hard-core" bits can be obtained from key exchange using pairings.

, 2001 "... Optimal Asymmetric Encryption Padding (OAEP) is a technique for converting the RSA trapdoor permutation into a chosen-ciphertext secure system in the random oracle model. OAEP padding can be viewed as two rounds of a Feistel network. We show that for the Rabin and RSA trapdoor functions a much simpl ..." Add to MetaCart

Optimal Asymmetric Encryption Padding (OAEP) is a technique for converting the RSA trapdoor permutation into a chosen ciphertext secure system in the random oracle model.
OAEP padding can be viewed as two rounds of a Feistel network. We show that for the Rabin and RSA trapdoor functions a much simpler padding scheme is sufficient for chosen ciphertext security in the random oracle model. We show that only one round of a Feistel network is sufficient. The proof of security for this simpler padding is more efficient than the proof for OAEP, resulting in much tighter security bounds. The proof of security uses the algebraic properties of the RSA and Rabin functions.
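The MIHNP problem statement quoted above is easy to instantiate. A toy generator in Python, purely to illustrate what a problem instance looks like (not a solver; the modulus and bit counts are arbitrary choices of mine, and three-argument `pow(x, -1, p)` needs Python 3.8+):

```python
import secrets

p = (1 << 127) - 1   # a prime modulus (2^127 - 1); any large prime works
k = 50               # number of most-significant bits revealed per sample

def msb_k(v, k, p):
    # The k most significant bits of v, viewed as a p.bit_length()-bit integer.
    return v >> (p.bit_length() - k)

def mihnp_instance(alpha, n, p, k):
    """n samples (x_i, msb_k((alpha + x_i)^(-1) mod p)) for random x_i in Z_p."""
    samples = []
    while len(samples) < n:
        x = secrets.randbelow(p)
        if (alpha + x) % p == 0:     # inverse undefined; skip (negligible case)
            continue
        inv = pow(alpha + x, -1, p)  # modular inverse
        samples.append((x, msb_k(inv, k, p)))
    return samples

alpha = secrets.randbelow(p)         # the hidden number
samples = mihnp_instance(alpha, 8, p, k)
```

Note that with these parameters k > (log_2 p)/3 ≈ 42, i.e. the regime the authors' algorithm handles; the conjectured-hard instances would use k below that threshold.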
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1365484","timestamp":"2014-04-17T23:50:16Z","content_type":null,"content_length":"26072","record_id":"<urn:uuid:1599c90a-9d6b-4fbd-9b6c-f9eab068b803>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Debris field for a broken meteor

I happened to catch two parts of two different episodes of Meteorite Men – a show about two guys who look for meteorites. In both of the snippets I saw, they were talking about a debris field for a meteor that breaks up. In these fields, the larger chunks of the meteorite are further down in the field. Why is this? Let me approach this first from a terminal velocity view. This requires a model for air resistance. I will use the following:

• rho is the density of air
• A is the cross-sectional area of the object
• C is a drag coefficient that depends on the shape of the object
• v is the speed of the object
• And this gives a force with a direction opposite of the velocity vector

Let me assume that all the pieces of a meteor have the same density and shape – for simplicity, I will assume a sphere. Here is a diagram for two different sized pieces falling (straight down) at the same speed. Meteor A (the big one) has a greater gravitational force because it has more mass. It also has a greater air resistance because its cross-sectional area is larger. I picked a speed so that meteor B would be at terminal velocity. This is when the air resistance has the same magnitude as the gravitational force. If I assume that meteor B has a radius of r[B] and a density of rho[m] then:

Where v[T] is the terminal velocity. If I solve for this value, I get:

Here you can see the key point. The terminal velocity depends on the size. This is because the air resistance is proportional to the area (r^2) and the weight is proportional to the volume (r^3). These two things do not cancel.

Modeling a debris field

I have created a python model for shooting bullets. I can simply modify this to calculate the trajectory of a dozen or so different sized (but same shape and density) meteor pieces. The following is a plot of the trajectory of a few pieces of a meteor.
I (for random reasons) started the model at 5,000 meters above the ground moving at 350 m/s aimed 30 degrees below the horizontal. Here is what I get: So, the bigger the piece, the farther it will go. My biggest piece was 1 meter.

1. #1 Rob (no, the other Rob) March 23, 2010
Hi Rhett, I think that in the paragraph under your bottom most equation (the one that gives v sub T) you meant that weight is proportional to volume, not area. I wonder what a realistic speed for a meteor is? I would imagine that they generally exceed the speed of sound so I’m not sure the naive equation for drag would apply. They typically burn, so both the cross-sectional area and volume would be decreasing as they descend.

2. #2 Rhett Allain March 23, 2010
@the other Rob, Thanks for catching my mistake – I fixed it. I really don’t know about the speed of a meteor – I guess I could estimate this by modeling this as a rock coming from Jupiter or something. But, for this case, I just need the speed and height when it breaks up to get the debris field.

3. #3 Grep Agni March 24, 2010
According to this site, a medium speed meteor has a speed of ~40 km/s. I remember reading (though I don’t remember where) that large bolides are moving so fast that air doesn’t have time to move around them and the huge pressure difference between the front side and the rear side is what can cause them to fragment. I don’t think your model of air resistance applies at all.

4. #4 Rhett Allain March 24, 2010
Yes – at 40 km/s, my model is fubared. But…the meteor will eventually slow down and then I can use the v^2 model of air resistance. The key is to consider when the thing breaks up. Either way, I assume the air resistance model for high speeds would still be proportional in some way to the cross-sectional area, and the weight is proportional to the volume.

5. #5 Mark Jackson March 26, 2010
Hello Rhett, I am a professional meteorite hunter (semi-retired) and I know Mr. Notkin and Mr.
Arnold (the Meteorite Men) very well. Your analysis is pretty much right on; I’m going to restate the facts in a way that may help crystallize the dynamic creation of strewnfield distribution. There are two basic levels of velocity that matter, each with their own set of physics: what we call the meteor’s cosmic velocity (starting @ 72 to 12 km/sec); this velocity in a very general sense operates independent of gravity and air resistance and is referred to as a vector of that velocity. As the body transitions to terminal velocity, gravity and air resistance (winds aloft) become “everything” to the movement behavior of the body(s). In 2003 when the Park Forest meteorite fell on Chicago, we learned just how much different the two modes of flight are and how counterintuitive a strewnfield distribution can seem w/o understanding them. We plotted the velocity vector expecting large stones farthest along the vector, which was exactly what we found … except the successively smaller bodies weren’t on or along the vector (SSW-NNE) but extended out perpendicular to the vector (W-E) from the heavy impacts. The smallest were several kilometers away from that velocity vector. Why on earth (pardon the pun) was this distribution so skewed from the vel vector? The answer we found in the upper level winds; the jet stream was over Chicago that night and its 150 mph west-to-east winds stretched the whole strewnfield to the east of the heavy stones. So there are two paradigms that follow the heaviest meteorite; they will be found furthest along the velocity vector AND closest to the velocity vector.

BTW all of this fluid and gravitational dynamics only helps when a body is observed to fall. If one happens to find a strewnfield that was not observed to fall, the only paradigms that help are the ellipse rule and the general weight distribution. Strewnfields of any type tend to fall in an elliptical pattern.
PS: I almost forgot the third phase of strewnfield distribution: TERRESTRIAL (erosional, wind, alluvial). Needless to say, this phase can obliterate any trace of its cosmic beginnings if left to the job for long enough. Take care and good physics!
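A minimal version of the trajectory model described in the post, using simple Euler integration of the quadratic drag force F = -(1/2)·rho·C·A·|v|·v on spherical pieces of equal density (the air density, rock density, and drag coefficient below are my assumed values, not the post's):

```python
import numpy as np

# Assumed constants for illustration
rho_air = 1.2        # kg/m^3, air density
rho_rock = 3000.0    # kg/m^3, meteoroid density
C = 0.47             # drag coefficient for a sphere
g = 9.8              # m/s^2

def range_for_radius(r, h0=5000.0, v0=350.0, angle_deg=-30.0, dt=0.01):
    """Horizontal distance at impact for a sphere of radius r,
    launched from height h0 at speed v0, angle_deg below horizontal."""
    A = np.pi * r**2                       # cross-sectional area
    m = rho_rock * (4.0 / 3.0) * np.pi * r**3
    pos = np.array([0.0, h0])
    th = np.radians(angle_deg)
    v = v0 * np.array([np.cos(th), np.sin(th)])
    while pos[1] > 0:
        speed = np.hypot(*v)
        a = np.array([0.0, -g]) - 0.5 * rho_air * C * A * speed * v / m
        v = v + a * dt                     # Euler step
        pos = pos + v * dt
    return pos[0]

for r in (0.01, 0.1, 1.0):
    print(f"r = {r:5.2f} m -> range = {range_for_radius(r):9.1f} m")
```

Because drag deceleration scales like 1/r while weight scales like r^3, the larger pieces hold their speed and land farther downrange, reproducing the strewnfield ordering the post (and the comments) describe.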
{"url":"http://scienceblogs.com/dotphysics/2010/03/23/debris-field-for-broken-meteor/","timestamp":"2014-04-21T09:44:41Z","content_type":null,"content_length":"65261","record_id":"<urn:uuid:f2d108e6-dedf-46eb-b0e6-d078f53ff0e0>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: computing this probability
Replies: 2   Last Post: Nov 6, 2012 3:06 PM

Re: computing this probability
Posted: Nov 6, 2012 1:20 PM

On 06/11/2012 12:20 PM, Anja wrote:
> Hi everyone,
> I am doing some discrete optimisation in my problem and I obtain some marginal probabilities as the following expression:
> P(x) = exp(-E(x)) / (exp(-E(x)) + exp(-E(y)) + exp(-E(z)) + ...)
> Where E(v) is the energy that the system takes for some configuration v.
> Now, my issue is that these energy values can take very large numbers and hence this P(x) expression effectively becomes 0. If I scale all the energy values by say E(x), so that the expression becomes
> P(x) = exp(-1) / (exp(-1) + exp(-E(y)/E(x)) + exp(-E(z)/E(x)) + ...)
> then usually these numbers get too close and the probability takes a value very close to 1 and does not say anything useful.

Hold it. Your algebra seems all messed up. You should scale by exp(E(x)), and then your expression turns into

P(x) = 1 / (1 + exp(E(x) - E(y)) + exp(E(x) - E(z)) + ...)

However, with your given numbers this is a value too close to 1 to distinguish it in any meaningful way. I suspect that you have other errors as well.

> Can someone suggest how I can scale this data in a way, so that it becomes easy to calculate and the probabilities are still something useful.
> As an example, in the last problem, the values were something like:
> E(x) = 17247
> E(y) = 20425
> E(z) = 26487
> What would be ideal is if I could somehow scale everything so that the probabilities also make sense.
> If I scale everything by E(x), I get probabilities of 0.4 for the most likely configuration but if I scale by 0.2*E(x), then the probability for the most likely configuration jumps to 0.68... So it is really tricky...
> Thanks,
> Anja

Date      Subject                          Author
11/6/12   computing this probability       anja.ende@googlemail.com
11/6/12   Re: computing this probability   rt servo
11/6/12   Re: computing this probability   RGVickson@shaw.ca
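The responder's rescaling is the standard numerically stable softmax (log-sum-exp) trick: subtract the smallest energy before exponentiating, which is algebraically the same as multiplying through by exp(E_min). With the thread's numbers the answer is still pinned at 1, because the energy gaps (~3000) are astronomically large; dividing the energies by a constant, as the poster tried, changes the effective temperature of the model rather than fixing the arithmetic. A sketch:

```python
import math

def boltzmann_probs(energies):
    """P(i) = exp(-E_i) / sum_j exp(-E_j), computed stably by
    subtracting the minimum energy (i.e. scaling by exp(E_min))."""
    e_min = min(energies)
    w = [math.exp(-(e - e_min)) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

probs = boltzmann_probs([17247.0, 20425.0, 26487.0])
print(probs)   # [1.0, 0.0, 0.0]: exp(-3178) underflows to exactly 0.0
```

The stable formula never overflows, and for moderate energy gaps (say, a few units) it returns the informative probabilities the poster was after.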
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2413575&messageID=7918779","timestamp":"2014-04-19T20:40:15Z","content_type":null,"content_length":"20064","record_id":"<urn:uuid:3f417aeb-04d9-43b5-8ad5-e0988bf7439d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
Please help!!! What is the lateral area of a rectangular prism if the base edges are 8 feet and 6 feet and the height is 10 feet?
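Assuming "lateral area" means the four side faces only (excluding the top and bottom bases, as is standard for a right prism), it is the base perimeter times the height: 2(8 + 6) × 10 = 280 square feet. As a one-liner:

```python
def lateral_area(a, b, h):
    """Lateral (side-wall) area of a right rectangular prism:
    base perimeter times height, excluding the two base faces."""
    return 2 * (a + b) * h

print(lateral_area(8, 6, 10))   # 280 (square feet)
```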
{"url":"http://openstudy.com/updates/51ed8d16e4b0c8f6bc3397c9","timestamp":"2014-04-20T16:01:56Z","content_type":null,"content_length":"42146","record_id":"<urn:uuid:d41b4961-016d-4794-9747-aa7d99c031d8>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
How close to Fermat's theorem? November 7th 2009, 08:37 AM I thought you were claiming that no multigrade of level>2 can be "dotted" with another vector, and create another multigrade of level>2. First I never brought up vectors in any of my posts on this thread. I'm talking about dotting with consecutive numbers 1,2,3,4... (if it's a 2-termed multigrade on both sides of the equal sign, a^n + b^n = c^n + d^n, then I would dot with 1 and 2 with the multigrade arranged from lowest to highest numbers on both sides of the equal sign; if it's a 3-termed multigrade on both sides of the equal sign, a^n + b^n + c^n = d^n + e^n + f^n, then I would dot with 1,2,3 with the multigrade going from lowest to highest numbers on both sides of the equal sign, the number of consecutive numbers I dot with depends on the number of terms on either side of the multigrade). I should point out that I would add a 0 to a multigrade to make it symmetric because the chances of my observation about equality to the second power improves when you do so. E.g. if you have a multigrade: a^n + b^n = c^n + d^n + e^n, then add a 0 to the left side to make it 0^n + a^n + b^n = c^n + d^n + e^n and dot multiply with 1,2,3 because you've made a three-termed multigrade on both sides of the equal sign. A demonstration: when you have this bigrade, 4^n + 9^n + 2^n = 8^n + 1^n + 6^n for n = 1,2, rearrange from lowest to highest: 2^n + 4^n + 9^n = 1^n + 6^n + 8^n and dot multiply with 1,2,3 to get: 1 x 2^n + 2 x 4^n + 3 x 9^n = 1 x 1^n + 2 x 6^n + 3 x 8^n, you will see that you get equality for only n = 1. Another demo: I mentioned earlier about the trigrade 2,9,15,8 and 3,5,14,12. Rearrange the numbers to 2,8,9,15 and 3,5,12,14 and dot multiply with 1,2,3,4 to get: 1 x 2^n + 2 x 8^n + 3 x 9^n + 4 x 15^n = 1 x 3^n + 2 x 5^n + 3 x 12^n + 4 x 14^n which is true for n = 1,2. 
I've checked a number of multigrades with dot multiplication on the consecutive numbers, and n never goes above 2 to have equality, just like Fermat's equation also never has equality when n goes above 2. (btw even if you do have symmetric multigrades with terms arranged from lowest to highest, that doesn't guarantee equality for n = 1 or n = 2, which also corresponds to FLT). Would like to mention to Media_Man that I find 16,4,4,1 interesting and I thank him for his input for using his computer. I would recommend reading Simon Singh's "Fermat's Enigma", particularly where it was proven that elliptic equations and modular forms are the same (and I feel that my observation relates to this).

Well, where it was proved that (semistable) elliptic curves and modular forms are the same is in the work of Wiles when he proved FLT: this is the modern form of the celebrated Taniyama-Shimura Conjecture, which was proved to be equivalent to FLT, mostly by Serre and Ribet, though already proposed by Frey. Perhaps Singh mentions this result (I've had the book for long months but haven't had the necessary will to read it, in spite of having worked some time ago with modular forms, elliptic curves and related stuff). Anyway, a clear, neat and easy-to-read presentation can work wonders to get people interested in your work.

November 7th 2009, 12:36 PM
mr fantastic

"I cannot enlighten you with my derivation of this result. I took my own advice and let a computer sift through thousands of candidates and that is one that popped up."

You have already enlightened me. I hope I have enlightened you. BTW I have come up with another method for making a multigrade, starting with another multigrade, which I'll demonstrate with the base numbers: 2,8,15,9 = 3,5,14,12 (a trigrade). Go from right to left and add numbers that are next to each other - the end numbers add the numbers on the far left: 10^n + 23^n + 24^n + 11^n = 8^n + 19^n + 26^n + 15^n, which is valid for n = 1,2,3.
Simple algebra will show why this method works. I want to thank you for helping me out with your computer. (Incidentally, I know about half a dozen methods for making multigrades. I refrain from going into this because I don't know if the purpose of this website is for recreational math or more serious math such as helping out with homework problems; if you read the book I mentioned, you'll see why I posted my findings here, as it seems to be of major importance to mathematics.) It can be a year before I get a home computer. In the meantime I look forward to your computer help, but the best thing would be to prove or disprove my conjecture. (Thank you too, Tonio.)

This thread has got to the point where it is now beyond the scope of the purpose of MHF. I'm therefore closing the thread. By all means continue developing your theories, but find an alternative avenue for doing so. As a sidenote, I will remind members not to PM to solicit help etc. This can make people feel uncomfortable and it's against forum rules.
Lula, GA Algebra 1 Tutor Find a Lula, GA Algebra 1 Tutor ...I have been tutoring for about three years, from elementary to high school students. As well as tutoring, I have volunteered in my local elementary school to help students with their homework for their homework club. I also mentor students from middle school to high school on behavior, studies, and other topics. 14 Subjects: including algebra 1, chemistry, biology, algebra 2 ...I am currently a Junior in college, studying Pre-Med. I am a well-rounded student with a 3.95 GPA. I am very learned in math and the sciences, and have the most experience in these two fields. 20 Subjects: including algebra 1, reading, chemistry, biology ...I am currently a Gainesville State College student studying Psychology. However, I have taken the initiative to take extra math classes, since math comes easy to me. I have taken up to 6 Subjects: including algebra 1, calculus, precalculus, algebra 2 ...I have a Bachelor degree in Economics (Business Administration) and am specialized in Accounting and foreign languages, especially in grammar. I speak fluent Spanish, too. However, most pupils have problems in maths, so this is what I tutored most. 29 Subjects: including algebra 1, Spanish, reading, English ...I enjoy working with children. I worked at a daycare all throughout high school. I'm available any day of the week. 10 Subjects: including algebra 1, reading, grammar, elementary (k-6th)
Posts about homotopy theory on Existential Type What’s the big deal with HoTT? June 22, 2013 Now that the Homotopy Type Theory book is out, a lot of people are asking “What’s the big deal?”. The full answer lies within the book itself (or, at any rate, the fullest answer to date), but I am sure that many of us who were involved in its creation will be fielding this question in our own ways to help explain why we are so excited by it. In fact what I think is really fascinating about HoTT is precisely that there are so many different ways to think about it, according to one’s interests and backgrounds. For example, one might say it’s a nice way to phrase arguments in homotopy theory that avoids some of the technicalities in the classical proofs by treating spaces and paths synthetically, rather than analytically. Or one might say that it’s a good language for mechanization of mathematics that provides for the concise formulation of proofs in a form that can be verified by a computer. Or one might say that it points the way towards a vast extension of the concept of computation that enables us to compute with abstract geometric objects such as spheres or toruses. Or one might say that it’s a new foundation for mathematics that subsumes set theory by generalizing types from mere sets to arbitrary infinity groupoids, sets being but particularly simple types (those with no non-trivial higher-dimensional structure). But what is it about HoTT that makes all these interpretations and applications possible? What is the key idea that separates HoTT from other approaches that seek to achieve similar ends? What makes HoTT so special? In a word the answer is constructivity. 
The distinctive feature of HoTT is that it is based on Per Martin-Löf's Intuitionistic Theory of Types, which was formulated as a foundation for intuitionistic mathematics as originally put forth by Brouwer in the 1930s, and further developed by Bishop, Gentzen, Heyting, Kolmogorov, Kleene, Lawvere, and Scott, among many others. Briefly put, the idea of type theory is to codify and systematize the concept of a mathematical construction by characterizing the abstract properties, rather than the concrete realizations, of the objects used in everyday mathematics. Brouwer's key insight, which lies at the heart of HoTT, is that proofs are a form of construction no different in kind or character from numbers, geometric figures, spaces, mappings, groups, algebras, or any other mathematical structure. Brouwer's dictum, which distinguished his approach from competing alternatives, is that logic is a part of mathematics, rather than mathematics an application of logic. Because for him the concept of a construction, including the concept of a proof, is prior to any other form of mathematical activity, including the study of proofs themselves (i.e., logic). So under Martin-Löf's influence HoTT starts with the notion of type as a classification of the notion of construction, and builds upwards from that foundation. Unlike competing approaches to foundations, proofs are mathematical objects that play a central role in the theory. This conception is central to the homotopy-theoretic interpretation of type theory, which enriches types to encompass spaces with higher-dimensional structure. Specifically, the type $\textsf{Id}_A(M,N)$ is the type of identifications of $M$ and $N$ within the space $A$. Identifications may be thought of as proofs that $M$ and $N$ are equal as elements of $A$, or, equivalently, as paths in the space $A$ between points $M$ and $N$.
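These ideas can be glimpsed in a proof assistant. A minimal Lean sketch of my own (not from the post; Lean's built-in `Eq` stands in for $\textsf{Id}_A(M,N)$, though Lean's propositional equality is proof-irrelevant, unlike the identification types of HoTT):

```lean
-- A proof of equality is itself a mathematical object: `h` inhabits
-- the identification type `2 + 2 = 4` (Lean's counterpart of Id_Nat).
theorem h : 2 + 2 = 4 := rfl

-- Identifications behave like paths: they can be inverted and composed.
example {A : Type} {M N P : A} (p : M = N) (q : N = P) : M = P := p.trans q
example {A : Type} {M N : A} (p : M = N) : N = M := p.symm

-- Every family of types respects identifications: transport along a path.
example (B : Nat → Type) {M N : Nat} (p : M = N) (x : B M) : B N := p ▸ x
```

The last example is the "all constructs respect these identifications" principle in miniature: a path `p` between indices induces a map between the fibers `B M` and `B N`.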
The fundamental principles of abstraction at the heart of type theory ensure that all constructs of the theory respect these identifications, so that we may treat them as proofs of equality of two elements. There are three main sources of identifications in HoTT:

1. Reflexivity, stating that everything is equal to itself.
2. Higher inductive types, defining a type by giving its points, paths, paths between paths, and so on to any dimension.
3. Univalence, which states that an equivalence between types determines a path between them.

I will not attempt here to explain each of these in any detail; everything you need to know is in the HoTT book. But I will say a few things about their consequences, just to give a flavor of what these new principles give us. Perhaps the most important conceptual point is that mathematics in HoTT emphasizes the structure of proofs rather than their mere existence. Rather than settle for a mere logical equivalence between two types (mappings back and forth stating that each implies the other), one instead tends to examine the entire space of proofs of a proposition and how it relates to others. For example, the univalence axiom itself does not merely state that every equivalence between types gives rise to a path between them, but rather that there is an equivalence between the type of equivalences between two types and the type of paths between them. Familiar patterns such as "$A$ iff $B$" tend to become "$A\simeq B$", stating that the proofs of $A$ and the proofs of $B$ are equivalent. Of course one may choose to neglect this additional information, stating only weaker forms of it using, say, truncation to suppress higher-dimensional information in a type, but the tendency is to embrace the structure and characterize the space of proofs as fully as possible. A close second in importance is the axiomatic freedom afforded by constructive foundations.
This point has been made many times by many authors in many different settings, but has particular bite in HoTT. The theory does not commit to (nor does it refute) the infamous Law of the Excluded Middle for arbitrary types: the type $A+(A\to \textbf{0})$ need not always be inhabited. This property of HoTT is absolutely essential to its expressive power. Not only does it admit a wider range of interpretations than are possible with the Law included, but it also allows for the selective imposition of the Law where it is needed to recover a classical argument, or where it is important to distinguish the implications of decidability in a given situation. (Here again I defer to the book itself for full details.) Similar considerations arise in connection with the many forms of Choice that can be expressed in HoTT, some of which are outright provable, others of which are independent as they are in axiomatic set theory. Thus, what makes HoTT so special is that it is a constructive theory of mathematics. Historically, this has meant that it has a computational interpretation, expressed most vividly by the propositions as types principle. And yet, for all of its promise, what HoTT currently lacks is a computational interpretation! What, exactly, does it mean to compute with higher-dimensional objects? At the moment it is difficult to say for sure, though there seem to be clear intuitions in at least some cases of how to “implement” such a rich type theory. Alternatively, one may ask whether the term “constructive”, when construed in such a general setting, must inevitably involve a notion of computation. While it seems obvious on computational grounds that the Law of the Excluded Middle should not be considered universally valid, it becomes less clear why it is so important to omit this Law (and, essentially, no other) in order to obtain the richness of HoTT when no computational interpretation is extant. 
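The status of the Excluded Middle here can be made concrete in Lean (a sketch of my own: constructively one cannot produce an inhabitant of $A+(A\to\textbf{0})$ for arbitrary $A$, yet the law is not refutable, and it can be assumed selectively where needed):

```lean
-- The double negation of LEM is constructively provable,
-- so omitting LEM can never lead to contradiction.
example (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))

-- Where a classical argument is needed, LEM can be imposed selectively:
example (P : Prop) : P ∨ ¬P := Classical.em P
```

The first proof exhibits the "axiomatic freedom" point: the constructive theory is agnostic about LEM, neither proving nor refuting it, while the second shows the selective imposition of the law.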
From my point of view understanding the computational meaning of higher-dimensional type theory is of paramount importance, because, for me, type theory is and always has been a theory of computation on which the entire edifice of mathematics ought to be built. The Homotopy Type Theory Book is out! June 20, 2013 By now many of you have heard of the development of Homotopy Type Theory (HoTT), an extension of intuitionistic type theory that provides a natural foundation for doing synthetic homotopy theory. Last year the Institute for Advanced Study at Princeton sponsored a program on the Univalent Foundations of Mathematics, which was concerned with developing these ideas. One important outcome of the year-long program is a full-scale book presenting the main ideas of Homotopy Type Theory itself and showing how to apply them to various branches of mathematics, including homotopy theory, category theory, set theory, and constructive analysis. The book is the product of a joint effort by dozens of participants in the program, and is intended to document the state of the art as it is known today, and to encourage its further development by the participation of others interested in the topic (i.e., you!). Among the many directions in which one may take these ideas, the most important (to me) is to develop a constructive (computational) interpretation of HoTT. Some partial results in this direction have already been obtained, including fascinating work by Thierry Coquand on developing a constructive version of Kan complexes in ITT, by Mike Shulman on proving homotopy canonicity for the natural numbers in a two-dimensional version of HoTT, and by Dan Licata and me on a weak definitional canonicity theorem for a similar two-dimensional theory. Much work remains to be done to arrive at a fully satisfactory constructive interpretation, which is essential for application of these ideas to computer science. 
Meanwhile, though, great progress has been made on using HoTT to formulate and formalize significant pieces of mathematics in a new, and strikingly beautiful, style, that are well-documented in the book. The book is freely available on the web in various formats, including a PDF version with active references, an ebook version suitable for your reading device, and may be purchased in hard- or soft-cover from Lulu. The book itself is open source, and is available at the HoTT Book GitHub. The book is under the Creative Commons CC BY-SA license, and will be freely available. Readers may also be interested in the posts on Homotopy Type Theory, the n-Category Cafe, and Mathematics and Computation which describe more about the book and the process of its creation.

Univalent Foundations at IAS December 3, 2012

As many of you may know, the Institute for Advanced Study is sponsoring a year-long program, called "Univalent Foundations for Mathematics" (UF), which is developing the theory and applications of Homotopy Type Theory (HTT). The UF program is organized by Steve Awodey (CMU), Thierry Coquand (Chalmers), and Vladimir Voevodsky (IAS). About two dozen people are in residence at the Institute to participate in the program, including Peter Aczel, Andrej Bauer, Peter Dybjer, Dan Licata, Per Martin-Löf, Peter Lumsdaine, Mike Shulman, and many others. I have been shuttling back and forth between the Institute and Carnegie Mellon, and will continue to do so next semester. The excitement surrounding the program is palpable. We all have the sense that we are doing something important that will change the world. A typical day consists of one or two lectures of one or two hours, with the rest of the day typically spent in smaller groups or individuals working at the blackboard. There are many strands of work going on simultaneously, including fundamental type theory, developing proof assistants, and formulating a body of informal type theory.
As visitors come and go we have lectures on many topics related to HTT and UF, and there is constant discussion going on over lunch, tea, and dinner each day. While there I work each day to the point of exhaustion, eager to pursue the many ideas that are floating around. So, why is homotopy type theory so exciting? For me, and I think for many of us, it is the most exciting development in type theory since its inception. It brings together two seemingly disparate topics, algebraic topology and type theory, and provides a gorgeous framework in which to develop both mathematics and computer science. Many people have asked me why it's so important. My best answer is that it's too beautiful to be ignored, and such a beautiful concept must be good for something! We'll be at this for years, but it's too soon to say yet where the best applications of HTT will arise. But I am sure in my bones that it's as important as type theory itself. Homotopy type theory is based on two closely related concepts:

1. Constructivity. Proofs of propositions are mathematical objects classified by their types.
2. Homotopy. Paths between objects of a type are proofs of their interchangeability in all contexts. Paths in a type form a type whose paths are homotopies (deformations of paths).

Homotopy type theory is organized so that maps and families respect homotopy, which, under the identification of paths with equality proofs, means that they respect equality. The force of this organization arises from axioms that specify what are the paths within a type. There are two major sources of non-trivial paths within a type, the univalence axiom, and higher inductive types. The univalence axiom specifies that there is an equivalence between equivalences and equalities of the objects of a universe.
Unravelling a bit, this means that for any two types inhabiting a universe, evidence for their equivalence (a pair of maps that are inverse up to higher homotopy, called weak equivalence) is evidence for their equality. Put another way, weak equivalences are paths in the universe. So, for example, a bijection between two elements of the universe $\textsf{Set}$ of sets constitutes a proof of the equality (universal interchangeability) of the two sets. Higher inductive types allow one to define types by specifying their elements, any paths between their elements, any paths between those paths, and so on to any level, or dimension. For example, the interval, $I$, has as elements the endpoints $0, 1 : I$, and a path $\textsf{seg}$ between $0$ and $1$ within $I$. The circle, $S^1$, has an element $\textsf{base}$ and a path $\textsf{loop}$ from $\textsf{base}$ to itself within $S^1$. Respect for homotopy means that, for example, a family $F$ of types indexed by the type $\textsf{Set}$ must be such that if $A$ and $B$ are isomorphic sets, then there must be an equivalence between the types $F(A)$ and $F(B)$ allowing us to transport objects from one "fiber" to the other. And any function with domain $\textsf{Set}$ must respect bijection—it could be the cardinality function, for example, but it cannot be a function that would distinguish $\{\,0,1\,\}$ from $\{\,\textsf{true},\textsf{false}\,\}$. Univalence allows us to formalize the informal convention of identifying things "up to isomorphism". In the presence of univalence, equivalent types (spaces) are, in fact, equal. So rather than rely on convention, we have a formal account of such identifications. Higher inductives generalize ordinary inductive definitions to higher dimensions. This means that we can now define maps (computable functions!) between, say, the 4-dimensional sphere and the 3-dimensional sphere, or between the interval and the torus.
HTT makes absolutely clear what this even means, thanks to higher inductive types. For example, a map out of $S^1$ is given by two pieces of data:

1. What to do with the base point. It must be mapped to a point in the target space.
2. What to do with the loop. It must be mapped to a loop in the target space based at the target point.

A map out of $I$ is given similarly by specifying:

1. What to do with the endpoints. These must be specified points in the target space.
2. What to do with the segment. It must be a path between the specified points in the target space.

It's all just good old functional programming! Or, rather, it would be, if we were to have a good computational semantics for HTT, a topic of intense interest at the IAS this year. It's all sort-of-obvious, yet also sort-of-non-obvious, for reasons that are difficult to explain briefly. (If I could, they would probably be considered obvious, and not in need of much explanation!) A game-changing aspect of all of this is that HTT provides a very nice foundation for mathematics in which types ($\infty$-groupoids) play a fundamental role as classifying all mathematical objects, including proofs of propositions, which are just types. Types may be classified according to the structure of their paths—and hence propositions may be classified according to the structure of their proofs. For example, any two proofs of equivalence of two natural numbers are themselves equivalent; there's only one way to say that $2+2=4$, for example. Of course there is no path between $2+2$ and $5$. And these two situations exhaust the possibilities: any two paths between natural numbers are equal (but there may not even be one). This unicity of paths property lifts to function spaces by extensionality, paths between functions being just paths between the range elements for each choice of domain element.
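The mapping-out data for $S^1$ described above can be written down schematically in Lean (my own axiomatization, not standard library code; Lean has no native higher inductive types, and its proof-irrelevant equality would collapse `loop`, so this only illustrates the shape of the interface):

```lean
-- Postulated circle: a base point and a loop at the base.
axiom Circle : Type
axiom base : Circle
axiom loop : base = base

-- A map out of S¹ is determined by exactly the two pieces of data
-- in the text: a target point `b` and a loop `l` at `b`.
axiom Circle.rec : {C : Type} → (b : C) → b = b → Circle → C
axiom Circle.rec_base {C : Type} (b : C) (l : b = b) :
  Circle.rec b l base = b

-- Unicity of paths in Nat: any two equality proofs coincide.
-- (In HoTT this is a theorem about sets; in Lean it is proof irrelevance.)
example (m n : Nat) (p q : m = n) : p = q := rfl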
But the universe of Sets is not like this: there are many paths between sets (one for each bijection), and these are by no means equivalent. However, there is at most one way to show that two bijections between sets are equivalent, so the structure “peters out” after dimension 2. The idea to apply this kind of analysis to proofs of propositions is a distinctive feature of HTT, arising from the combination of constructivity, which gives proofs status as mathematical objects, and homotopy, which provides a powerful theory of the equivalence of proofs. Conventional mathematics ignores proofs as objects of study, and is thus able to express certain ideas only indirectly. HTT brings out the latent structure of proofs, and provides an elegant framework for expressing these concepts directly. Update: edited clumsy prose and added concluding paragraph. Transformations as strict groupoids May 30, 2011 The distinguishing feature of higher-dimensional type theory is the concept of equivalence of the members of a type that must be respected by all families of types. To be sufficiently general it is essential to regard equivalence as a structure, rather than a property. This is expressed by the judgement $\displaystyle \Gamma\vdash \alpha::M\simeq N:A$ which states that $M$ and $N$ are equivalent members of type $A$, as evidenced by the transformation $\alpha$. Respect for equivalence is ensured by the rule $\displaystyle{{\Gamma,x:A\vdash B\,\textsf{type}\quad \Gamma\vdash \alpha :: M\simeq N:A \quad \Gamma\vdash P:B[M/x]}\over {\Gamma\vdash \textit{map}\{x:A.B\}[\alpha](P):B[N/x]}},$ which states that equivalent members determine equivalent instances of a family of types. The equivalence between instances is mediated by the operation $\textit{map}\{x:A.B\}[\alpha](-)$, which sends members of $B[M/x]$ to members of $B[N/x]$. We call this mapping the action of the family $x:A.B$ on the transformation $\alpha$. 
For reasons that will only become apparent as we go along, it is important that "equivalence" really be an equivalence: it must be, in an appropriate sense, reflexive, symmetric, and transitive. The "appropriate sense" is precisely that we require the existence of transformations $\displaystyle{\Gamma\vdash \textit{id}::M\simeq M:A}$ $\displaystyle{{\Gamma\vdash\alpha::M\simeq N:A}\over{\Gamma\vdash\alpha^{-1}::N\simeq M:A}}$ $\displaystyle{{\Gamma\vdash \beta::N\simeq P:A\quad \Gamma\vdash \alpha::M\simeq N:A}\over{\Gamma\vdash\beta\circ\alpha::M\simeq P:A}}$ Moreover, these transformations must be respected by the action of any family, in a sense that we shall make clear momentarily. Before doing so, let us observe that these transformations constitute the operations of a groupoid, which we may think of either as an equivalence relation equipped with evidence or a category in which every map is invertible (a generalized group). While the former interpretation may not suggest it, the latter formulation implies that we should impose some requirements on how these transformations interact, namely the axioms of a groupoid:

1. Composition (multiplication) is associative: $\gamma\circ(\beta\circ\alpha)\equiv (\gamma\circ\beta)\circ\alpha::M\simeq N:A$.
2. Identity is the unit of composition: $\textit{id}\circ\alpha\equiv\alpha::M\simeq N:A$ and $\alpha\circ\textit{id}\equiv\alpha::M\simeq N:A$.
3. Inverses cancel: $\alpha^{-1}\circ\alpha\equiv\textit{id}::M\simeq M:A$ and $\alpha\circ\alpha^{-1}\equiv\textit{id}::N\simeq N:A$.

These conditions, which impose equalities on transformations, demand that the second-dimensional structure of a type form a strict groupoid. I will come back to an important weakening of these requirements later. We further require that the action of a type family preserve the groupoid structure.
For this it is enough to require that it preserve identities and composition: $\displaystyle{\textit{map}\{x:A.B\}[\textit{id}](-) \equiv \textit{id}(-):B[M/x]}$ and, for $\alpha::M\simeq N:A$ and $\beta::N\simeq P:A$, $\displaystyle{\textit{map}\{x:A.B\}[\beta\circ\alpha](-) \equiv \textit{map}\{x:A.B\}[\beta](\textit{map}\{x:A.B\}[\alpha](-)):B[P/x]}$ Thinking of a groupoid as a category, these conditions state that the action of a type family be (strictly) functorial. (Here again we are imposing strong requirements in order to facilitate the exposition; eventually we will consider a relaxation of these conditions that will admit a richer range of applications.) (The alert reader will note that I have not formally introduced the concept of a transformation between types, nor the equality of these, into the theory. There are different ways to skin this cat; for now, I will be a bit loose about the axiomatics in order to focus attention on the main ideas. Rest assured that everything can be made precise!) By demanding that the groupoid axioms hold strictly (as equalities) and that the action of families be strictly functorial, we have simplified the theory considerably by restricting it to dimension 2. To relax these restrictions requires higher dimensions. For example, we may demand only that the groupoid conditions hold up to a transformation of transformations, but hold strictly from then on; this is the 3-dimensional case. Or we can relax all such conditions to hold only up to a higher transformation, resulting in finite dimensional type theory. Similar considerations will apply to other conditions that we shall impose on the action of families, in particular to specify the action of type constructors on transformations, which I will discuss next time. The presentation of finite-dimensional type theory will be aided by the introduction of identity types (also called path types). Identity types avoid the need for an ever-expanding nesting of transformations between transformations between …. More on that later!
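The strict groupoid laws can be stated in Lean for its identity type (a sketch of mine; because Lean's `Eq` is proof-irrelevant these laws hold trivially by `rfl`, whereas in higher-dimensional type theory they are genuine conditions to be imposed or proved):

```lean
variable {A : Type} {M N P Q : A}

-- Composition is associative.
example (p : M = N) (q : N = P) (r : P = Q) :
    (p.trans q).trans r = p.trans (q.trans r) := rfl

-- Identity is the unit of composition.
example (p : M = N) : Eq.trans rfl p = p := rfl
example (p : M = N) : p.trans rfl = p := rfl

-- Inverses cancel.
example (p : M = N) : p.symm.trans p = rfl := rfl
example (p : M = N) : p.trans p.symm = rfl := rfl
```

That every law closes by `rfl` is exactly the "strict" (dimension-2) regime of the post; weakening these to hold only up to higher transformations is what forces the climb into higher dimensions.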
Update (August 2012): Egbert Rijke has written lucidly on the topic of Yoneda's Lemma and its relation to homotopy type theory in his Master's Thesis, which I encourage readers to consult for a nice summary of higher-dimensional type theory. Higher-Dimensional Type Theory May 30, 2011 Ideas have their time, and it's not for us to choose when they arrive. But when they do, they almost always occur to many people at more or less the same time, often in a slightly disguised form whose underlying unity becomes apparent only later. This is perhaps not too surprising, the same seeds taking root in many a fertile mind. A bit harder to explain, though, is the moment in time when an idea comes to fruition. Often all of the ingredients are available, and yet no one thinks to put two-and-two together and draw what seems, in retrospect, to be the obvious inference. Until, suddenly, everyone does. Why didn't we think of that ages ago? Nothing was stopping us, we just didn't notice the opportunity! The recent development of higher-dimensional structure in type theory seems to be a good example. All of the ingredients have been present since the 1970s, yet as far as I know no one, until quite recently, quite put together all the pieces to expose the beautiful structure that has been sitting there all along. Like many good ideas, one can see clearly that the ideas were foreshadowed by many earlier developments whose implications are only now becoming understood. My plan is to explain higher type theory (HTT) to the well-informed non-expert, building on ideas developed by various researchers, including Thorsten Altenkirch, Steve Awodey, Richard Garner, Martin Hofmann, Dan Licata, Peter Lumsdaine, Per Martin-Löf, Mike Shulman, Thomas Streicher, Vladimir Voevodsky, and Michael Warren.
It will be useful in the sequel to be familiar with The Holy Trinity, at least superficially, and preferably well enough to be able to move back and forth between the three manifestations that I've previously outlined. One-dimensional dependent type theory is defined by derivation rules for these four fundamental forms of judgement (and, usually, some others that we suppress here for the sake of concision): $\displaystyle \Gamma\vdash A\,\mathsf{type}$ $\displaystyle \Gamma\vdash M : A$ $\displaystyle \Gamma\vdash M \equiv N : A$ $\displaystyle \Gamma\vdash A\equiv B$ A context, $\Gamma$, consists of a sequence of declarations of variables of the form $x_1:A_1,\dots,x_n:A_n$, where it is presupposed, for each $1\leq i\leq n$, that $x_1:A_1,\dots,x_{i-1}:A_{i-1}\vdash A_i\,\mathsf{type}$ is derivable. The key notion of dependent type theory is that of a family of types indexed by (zero or more) variables ranging over a type. The judgement $\Gamma\vdash A\,\mathsf{type}$ states that $A$ is a family of types indexed by the variables given by $\Gamma$. For example, we may have $\vdash\textit{Nat}\,\textsf{type}$, specifying that $\textit{Nat}$ is a closed type (a degenerate family of types), and $x{:}\textit{Nat}\vdash\textit{Seq}(x)\,\textsf{type}$, specifying that $\textit{Seq}(n)$ is a type (say, of sequences of naturals of length $n$) for each $\vdash n:\textit{Nat}$. The rules of type theory ensure, either directly or indirectly, that the structural properties of the hypothetical/general judgement are valid. In particular families of types respect equality of members: $\displaystyle{{\Gamma,x:A\vdash B\,\textsf{type}\quad \Gamma\vdash M\equiv N:A \quad \Gamma\vdash P:B[M/x]}\over {\Gamma\vdash P:B[N/x]}}.$ In words, if $B$ is a family of types indexed by $A$, and if $M$ and $N$ are equal members of type $A$, then every member of $B[M/x]$ is also a member of $B[N/x]$. The generalization to two- (and higher-) dimensional type theory can be motivated in several ways.
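The $\textit{Seq}$ family just mentioned can be realized concretely as an inductive family in Lean (a length-indexed vector of naturals; my rendering, not the post's):

```lean
-- Seq n: sequences of naturals of length n, a family of types
-- indexed by a variable of type Nat, as in the judgement
--   x : Nat ⊢ Seq(x) type.
inductive Seq : Nat → Type where
  | nil  : Seq 0
  | cons {n : Nat} : Nat → Seq n → Seq (n + 1)

-- A member of Seq 3.
def s : Seq 3 := .cons 1 (.cons 2 (.cons 3 .nil))

-- Families respect (definitional) equality of indices:
-- since 2 + 1 ≡ 3, the types Seq (2 + 1) and Seq 3 coincide.
example : Seq (2 + 1) = Seq 3 := rfl
```

The last example is the displayed respect-for-equality rule in action: equal indices yield interchangeable instances of the family.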
One natural source of higher-dimensional structure is a universe, a type whose elements correspond to types. For example, we may have a universe of sets given as follows: $\displaystyle \vdash \textit{Set}\,\textsf{type}$ $\displaystyle x:\textit{Set}\vdash \textit{Elt}(x)\,\textsf{type}$ $\displaystyle \vdash \textit{nat}:\textit{Set}$ $\displaystyle \vdash \textit{Elt}(\textit{nat})\equiv\textit{Nat}$ $\displaystyle a:\textit{Set},b:\textit{Set}\vdash a\times b : \textit{Set}$ $\displaystyle a:\textit{Set},b:\textit{Set}\vdash \textit{Elt}(a\times b)\equiv \textit{Elt}(a)\times\textit{Elt}(b)$ and so forth, ensuring that $\textit{Set}$ is closed under typical set-forming operations whose interpretations are given by $\textit{Elt}$ in terms of standard type-theoretic concepts. In many situations, including much of informal (yet entirely rigorous) mathematics, it is convenient to identify sets that are isomorphic, so that, for example, the sets $\textit{nat}\times\textit{nat}$ and $\textit{2}\to\textit{nat}$ would be interchangeable. In particular, these sets should have the "same" (type of) elements. But obviously these two sets do not have the same elements (one consists of pairs, the other of functions, under the natural interpretation of the sets as types), so we cannot hope to treat $\textit{Elt}(\textit{nat}\times\textit{nat})$ and $\textit{Elt}(\textit{2}\to\textit{nat})$ as equal, though we may wish to regard them as equivalent in some sense. Moreover, since two sets can be isomorphic in different ways, isomorphism must be considered a structure on sets, rather than a property of sets. For example, $\textit{2}$ is isomorphic to itself in two different ways, by the identity and by negation (swapping). Thus, equivalence of the elements of two isomorphic sets must take account of the isomorphism itself, and hence must have computational significance.
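This universe of sets can be modeled in Lean as a type of codes together with a decoding family, a universe à la Tarski (an illustrative sketch with my own names):

```lean
-- Codes for a small universe of sets.
inductive SetCode : Type where
  | nat  : SetCode
  | prod : SetCode → SetCode → SetCode
  | arr  : SetCode → SetCode → SetCode

-- Elt decodes each code to the type of its elements.
def Elt : SetCode → Type
  | .nat      => Nat
  | .prod a b => Elt a × Elt b
  | .arr  a b => Elt a → Elt b

-- The defining equations of the universe hold definitionally:
example : Elt .nat = Nat := rfl
example (a b : SetCode) : Elt (.prod a b) = (Elt a × Elt b) := rfl
```

Note that `Elt (.prod .nat .nat)` and `Elt (.arr ... .nat)` decode to genuinely different types (pairs versus functions), which is precisely why the post insists they can only be equivalent, not equal.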
It is precisely the desire to accommodate equivalences such as this that gives rise to higher dimensions in type theory. Specifically, we introduce two-dimensional structure by adding a new judgement to type theory stating that two members of a type are related by a specified transformation: $\displaystyle \Gamma\vdash \alpha :: M\simeq N : A$ Crucially, families of types must respect transformation: $\displaystyle{{\Gamma,x:A\vdash B\,\textsf{type}\quad \Gamma\vdash \alpha :: M\simeq N:A \quad \Gamma\vdash P:B[M/x]}\over {\Gamma\vdash \textit{map}\{x:A.B\}[\alpha](P):B[N/x]}}.$ A transformation should be thought of as evidence of interchangeability of the members of a type; the map operation puts the evidence to work. Returning to our example of the universe of sets, let us specify that a transformation from one set to another is a pair of functions constituting a bijection between the elements of the two sets: $\displaystyle{ {\begin{array}{c} \Gamma,x:\textit{Elt}(a)\vdash f(x):\textit{Elt}(b) \\ \Gamma,x:\textit{Elt}(b)\vdash g(x):\textit{Elt}(a) \\ \Gamma,x:\textit{Elt}(a)\vdash g(f(x))\equiv x:\textit{Elt}(a) \\ \Gamma,x:\textit{Elt}(b)\vdash f(g(x))\equiv x:\textit{Elt}(b) \end{array}} \over {\Gamma\vdash\textit{iso}(f,g)::a\simeq b:\textit{Set}}}$ (The equational conditions here are rather strong; I will return to this point in a future post. For now, let us just take this as the defining criterion of isomorphism between two sets.) Evidence for the isomorphism of two sets induces a transformation on types given by the following equation: $\displaystyle{ {\Gamma\vdash M:\textit{Elt}(a)}\over {\Gamma\vdash \textit{map}\{\textit{Elt}\}[\textit{iso}(f,g)](M)\equiv f(M) : \textit{Elt}(b)}}$ (suppressing the obvious typing premises for $f$ and $g$). In words, an isomorphism between sets $a$ and $b$ induces a transformation between their elements given by the isomorphism itself.
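As an illustration of the last two rules, here is a small sketch in Python (entirely my own encoding, not part of the type theory itself): evidence of an isomorphism is packaged as a pair of mutually inverse functions, and $\textit{map}$ over $\textit{Elt}$ transports an element by applying the forward function, as in the displayed equation. The two self-isomorphisms of the two-element set mentioned earlier show why the evidence itself matters:

```python
# Hypothetical encoding (names are mine): evidence of an isomorphism
# a ~ b is a pair of mutually inverse functions (f, g).
def iso(f, g):
    return {"fwd": f, "bwd": g}

# map{Elt}[iso(f, g)](M) == f(M): transport M : Elt(a) to Elt(b)
# by applying the forward half of the isomorphism.
def map_elt(alpha, m):
    return alpha["fwd"](m)

# The set 2 = {0, 1} is isomorphic to itself in two distinct ways,
# by the identity and by negation (swapping), so the two induced
# transformations on elements act differently:
identity = iso(lambda x: x, lambda x: x)
negation = iso(lambda x: 1 - x, lambda x: 1 - x)

print(map_elt(identity, 0))  # 0
print(map_elt(negation, 0))  # 1
```

Because the two isomorphisms transport $0$ differently, isomorphism must be carried as structure (the evidence $\textit{iso}(f,g)$), not treated as a mere property.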
This, then, is the basic structure of two-dimensional type theory, but there is much more to say! In future posts I intend to develop the ideas further, including a discussion of these topics:

1. The definition of $\textit{map}\{x:A.B\}$ is given by induction over the structure of $x:A.B$. The above equation covers only one case; there are more, corresponding to each way of forming a family of types $x:A.B$. The extension to function types will expose the role of the inverse of the isomorphism between sets.

2. The judgement $\alpha::M\simeq N:A$ may be internalized as a type, which will turn out to correspond to the identity type in Martin-Löf’s type theory, albeit with a different interpretation given by Altenkirch. The identity type plays an important role in the extension to all higher dimensions.

3. To ensure coherence and to allow for greater expressiveness we must also discuss equality and equivalence of transformations and how these influence the induced transformation of families of types. In particular, transformations admit a groupoid structure which expresses reflexivity, symmetry, and transitivity of transformation; these conditions can be considered to hold strongly or weakly, giving rise to different applications and interpretations.

4. Higher-dimensional type theory admits a fascinating interpretation in terms of homotopy theory, in which types are interpreted as spaces, members as points in those spaces, and transformations as paths, or homotopies. This, together with a generalization of the treatment of universes outlined above, is the basis for Voevodsky’s work on univalent foundations of mathematics.

5. One may consider relaxing the groupoid structure on transformations to a “monoidoid” (that is, category) structure by not requiring symmetry (inverses).
The structure of type theory changes significantly in the absence of symmetry, posing significant open problems, but admitting a wider range of applications of higher-dimensional structure in both CS and mathematics. To keep up to date with the latest developments in this area, please visit the Homotopy Type Theory blog!
Monday, March 3, 2003

I have been interested in puzzles for as long as I can remember. When I discovered Tsunami puzzles I was immediately addicted; the combination of logic and the artistic nature of the finished puzzle appealed greatly. This is a combination that a number of puzzle types share - logic and aesthetics, some physical and others mathematical. Physical puzzles - or mechanical puzzles as they are more often known - can have a great beauty about their appearance, or in the way they work. I started collecting mechanical puzzles a few years ago, and soon found that the range of puzzles is huge, but my favorites can be split into two groups: those I like because of their look, and those that have a clever twist to solve them.

Make your own dissected Checkerboard puzzle

First, download the full image of the pieces and the unsolved puzzle. Now either print out the pieces, or copy them carefully onto squared paper. You will need to paste these onto either thick cardboard (or even better - plywood), and then cut out the individual pieces. You could use normal paper, but these tend to move around too much when solving the puzzle. This leaves you with all the necessary parts to start on the puzzle.

One particular type of puzzle is a checkerboard dissection, where a standard checkerboard has been cut up, and you have to reassemble it to make the completed checkerboard. It occurred to me that Tsunami (Pic-a-Pix) puzzles could be treated in the same way as the checkerboard, with the added puzzle of finding the picture to be made. My first attempt took a 15 x 15 Tsunami puzzle, added an extra row and column - both with zero as the clue numbers. This meant that the puzzle was a multiple of a standard 8x8 checkerboard puzzle, with 4 Tsunami squares to one checkerboard square. I then cut the solution up into L shaped pieces, each one with part of the solution on it. This proved to be fairly easy to solve, once the Tsunami had been solved.
To make it more difficult I decided to remove some of the clues from the Tsunami and replace them with a question mark. This means that you have to complete as much of the Tsunami as possible, and then use the pieces to complete the puzzle. There are a number of other possibilities that move on from this idea. If the pieces are double sided then the assembling of the finished picture becomes more difficult, or if more of the puzzle were to be removed then the mechanical puzzle is more difficult, but this means that less of the Tsunami is required. What I am trying to make at the moment is a puzzle where you have to solve part of the Tsunami, and then use some of the pieces to add some squares to the solution. Alternating this way would use both your logic skills and visual skills to solve the puzzle.

Mechanical Tsunami

Another puzzle that I have been trying is the sliding block type - commonly with the numbers 1-15, which have to be mixed up and rearranged. Tsunami can very easily be printed on these and used as a two-part puzzle. And if you cut the pieces up into squares you can make it a three-part puzzle: solve the Tsunami, assemble the pieces, and finally use it as a sliding block puzzle. Obviously any mechanical puzzle could have a Tsunami printed on it. But what I am trying to do is to use the Tsunami puzzle to add an extra dimension to the puzzle. I hope you agree that the above puzzles do that.

About the author

Frank Potts is the editor of pottypuzzles.co.uk
efficient frontier Hi to all I have a problem with the frontier code. (It's in the Finance Toolbox.) There is nothing written in Help about frontier, I mean there is no example. And I can't find the right inputs. It always gives an error at line 154 ([Mean,Covar] = ecmnmle(SubData);) SubData can't pass ecmnmle's checks. please help me to find the right inputs, or I am going to think that code is wrong... Ceren ... Efficient Frontier Plot Hi, I am using the frontcon function to generate the Efficient Frontier and would like to plot the same. Can anyone guide me on this? I don't want to use the portopt function. Thanks in advance. ... Efficient Frontier Weights Dear all, i am analyzing financial time series data with the help of "Financial Toolbox - Efficient Frontier Demo". I wonder how can i know the weights on the efficient frontier line?..because i would like to get the weights on my portfolio holding before constructing GARCH Value at Risk..lets say if on the efficient frontier with return 5% and standard deviation 3% consist of 10% S&P 500 35% FTSE 100 and 55% IDX..any ideas how to do that?.. anyone please share with me, thank you. ... Combining efficient frontiers I am analyzing financial data with the help of the "Financial Toolbox". Using the function frontcon(returns, covariance, numberOfPortfolios) you get a plot/graph of the efficient frontier, which is basically a curve/line. It is just an x,y graph with y being return and x being risk/standard deviation. How can I plot multiple efficient frontiers into one graph? Could I also plot single points into the same graph? The single points being of course single assets. Well, I found out how, but it involves clicking and copying and pasting rather than programmatically. In the current figure,... Efficient Frontier #2 Hi, I'm a beginner using matlab. I would like to illustrate the efficient frontier for 2 portfolios. Each portfolio contains about 100 securities. I just calculated the returns, variances and the covariances (matrix) for the assets.
When I use the function "frontcon" for one portfolio, matlab shows the frontier just for the assets. But I would like to illustrate the location of the whole portfolio on the frontier. First, do you know how to calculate the portfolio variance? And second, how to display 2 portfolios on the efficient frontier? Thank you for your help, best regards Jörg ... Plotting efficient frontier the problem i am having is that i have a few hundred assets with a time series of 100 periods' length to plot for, but have so far been unable to do the basic commands for standard deviation and correlation and convert them into covariance as matlab responds with unmatched matrices COULD ANY ONE PLEASE OFFER AN EXAMPLE OF SOME KIND OR POINT ME TO A LINK THANKS "gro" <a_omar101@hotmail.co.uk> wrote in message news:ef4be62.-1@webcrossing.raydaftYaTP... > the problem i am have is that i have a few hundred assets with time > series is 100 period length to plot for, > but have so far... Efficient Frontier Hi, I would like to draw a rolling efficient frontier using the matlab syntax [PortWts, AllMean, AllCovariance] = frontier(Universe, Window, Offset, NumPorts, ActiveMap, ConSet, NumNonNan) Please find my preparation: [Sample,B]=xlsread('EfficientFrontier_3.xls'); dates=x2mdate(Sample(:,1)) ts = timeseries(Sample(:,2:7)) Efficient frontier is a 156:7 matrix in excel; first one is date vector. 2:7 are log-returns. Has anyone an idea how to use the following matlab syntax in this context [PortWts, AllMean, AllCovariance] = frontier(Universe, Window, Offset, NumPorts, Ac... generate three efficient frontiers Hello everybody, I need to generate three mean variance efficient frontiers in the same graph; can anybody help me? Thanks ... Pareto / Efficient Frontier / Multiobjective / Nondominated solutions Does anyone have code to generate the 'efficient frontier' for an array of points? This concept is known by a wide variety of names...
There is a short overview here: http://math.ut.ee/~toomas_l/optimization/main-text/Ch7/node3.html Any help will be greatly appreciated. -Chess For anyone else that's interested in this topic, I've found a basic solution, which I've posted below. Note also that the performance for this technique can be improved through the use of KD-trees or quadtrees, as discussed in http://tinyurl.com/caz3j % Generate a boolean array K that indica... Finance: Mean-Variance Efficient Frontier - Portfolio Optimization Hi, is there anyone that has some reliable tool, package or website that can help me to solve a financial portfolio optimization problem in Python? Many thanks in advance Davide Dalmasso On Thu, Nov 7, 2013 at 12:01 PM, Davide Dalmasso <davide.dalmasso@gmail.com> wrote: > Hi, > is there anyone that have some reliable tool, package or website that can help me to solve a financial portfolio optimization problem in Python? > Many thanks in advance > > Davide Dalmasso > -- > https://mail.python.org/mailman/listinfo/python-list google gives this: htt... about Efficiency I want to select one field from a table, but it should depend on some conditions which refer to 5 tables, such as A.FIELD1=B.FIELD1 AND B.FIELD2=C.FIELD3 AND .... Should I use "select sum(a.amount) from a,b,c,... where a.field1=b.field1 and b.field2=c.field2 and ..." or "select sum(a.amount) from select b.field1 from select c.field2 from...."? And which case is more efficient? thanks! ... i++ or ++i efficient HI all, Which is more efficient i++ or ++i and why is it?? cheers.. * Radde: > Which is more efficient i++ or ++i and why is it?? The FAQ is at <url: http://www.parashift.com/c++-faq-lite/>. -- A: Because it messes up the order in which people normally read text. Q: Why is it such a bad thing? A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail? Alf P. Steinbach wrote: > * Radde: > > Which is more efficient i++ or ++i and why is it?? > > The FAQ is at <url: http://www.parashift.com/c++-faq-lite/>. > and this is the section ... could be this more efficient? Dear Matlab users, please help me if possible to make this code more efficient. k =1 for i=1:end if B(i) <= 1 A(k)=B(i); k=k+1; end end The point is how to select data from matrix B when B(i) <=1 without using loop. Is it possible at all? Thank you very much for remarks. Irek Try: A=B(B<=1); Regards, Stefan irek wrote: > > > Dear Matlab users, please help me if possible to make this code > more > efficient. > > k =1 > for i=1:end > if B(i) <= 1 > A(k)=B(i); > k=k+1; > end > end > > The point is how to select da... What is more efficient? Let's say I have a class with few string properties and few integers, and a lot of methods defined for that class. Now if I have hundreds of thousands (or even more) of instances of that class - is it more efficient to remove those methods and make them separate functions, or it doesn't matter? Thanks... -- _______ Karlo Lozovina - Mosor | | |.-----.-----. web: http://www.mosor.net || ICQ#: 10667163 | || _ | _ | Parce mihi domine quia Dalmata sum. |__|_|__||_____|_____| On Feb 19, 2:17 pm, Karlo Lozovina &lt... When using slot-value and the same value is used more than once in a form, is it more efficient to use a LET to extract that value (and use the local variable repeatedly) or does it not make a difference? In fact, is using LET ineffient? Or is the difference so slight it doesn't really make a difference in most normal programming? WoodHacker <RamsayW@comcast.net> wrote: > When using slot-value and the same value is used more than once in a > form, is it more efficient to use a LET to extract that value (and use > the local variable repeatedly) or does it not make a... 
for a hex conversion, Is it more efficient to do this : Printit is the char whose low order bits represent the nibble to print cout << "0123456789ABCDEF" [ Printit ] ; or : const char hexdigits [17] = "0123456789ABCDEF" ; cout << hexdigits [ Printit ] ; Or doesn't it matter ? Thanks Joe "js" <jb.simon@lmco.com> wrote in message news:bf1rrm$m7n1@cui1.lmms.lmco.com... > for a hex conversion, Is it more efficient to do this : > > Printit is the char whose low order bits represent the nibble to print > > cout <<... Which is more efficient? Hi, For the following two snips, which one do you think is more efficient? snip1: BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(System.out)); bw.write("abc" + "def"); snip2: BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(System.out)); bw.write("abc"); bw.write("def"); -- Best regards, Qingjia Zhu Q@z@J metfan wrote: > Hi, > For the following to snips, which one do you think is more efficient? > > snip1: > BufferedWriter bw = new BufferedWriter(new > OutputStreamWriter(System.out)); ... Which is efficient? Hi, Which is efficient? Checking whether the given path for Directory/file using stat or Opening the path using opendir Thanks, Ganesh Subramanian <sgane2001@yahoo.co.in> wrote in message news:1107413299.147296.302240@g14g2000cwa.googlegroups.com... > Which is efficient? > Checking whether the given path for Directory/file using stat or > Openinf the path using opendir 1) There probably won't be much of a difference (same work to do internally). 2) It will depend on the platform you are developing on 3) This is not a C++ question 4) This question does not belong to this... more efficient? I have the following two implementation techniques in mind. def myfunc(mystring): check = "hello, there " + mystring + "!!!"
print check OR structure = ["hello, there ", "", "!!!"] def myfunc(mystring): structure[1] = mystring output = ''.join(structure) i heard that string concatenation is very slow in python; so should i go for the second approach? could someone tell me why? Would there be another 'best-practice-style'? Please help. Thankx in advance! cheers!!! Zubin On 12/22/09 7:13 AM, Zubin Mithra ... which is efficient Hi This is the code DO J1=1,SS DO I1=1,TT DO ISUMT3=1,MA SUMGMT(I1,J1,ISUMT3)=0.0 END DO END DO END DO i can write the following code in FORTRAN 90 as SUMGMT(1:TT,1:SS,1:ISUMT3) = 0.0 now my question is which one is the efficient way to do it? I think i know the answer, the second way is more efficient. Am i right ? And if yes how do i prove it? How can i prove this with this simple code that the second one is efficient ? regards aeroguy wrote: > This is the code > > DO J1=1,SS > DO I1=1,TT > DO ISUMT3=1,MA > ... Which is more efficient? float fMax=0, faArray[16], fValue; int i; for (i=0; i<16; ++i) if ((faArray[i]-fValue)>fMax) fMax=faArray[i]-fValue; OR float fMax=0, faArray[16], fValue, fTemp; int i; for (i=0; i<16; ++i) if ((fTemp=faArray[i]-fValue)>fMax) fMax=fTemp; In the former case I am doing a subtraction twice but only if the condition is true. In the latter case, I am writing the difference into a variable that may not be used, depending on the condition. Many thanks in advance, Peter. PeterOut wrote: ) <snip: which is more efficient> ) In the form... Hi anyone know where I can find out the efficiency of various addressing modes and the efficiency of the interrupt system. I'm a second year computing student and have been asked to compare a c515 and pic16C64, but i've got a little stuck on these two points Many Thanks Simon schrieb: > Hi > anyone know where I can find out the efficiency of various addressing modes > and the efficiecy of the interrupt system.
I'm a second year computing > student and have been asked to compare a c515 and pic16C64, but i've got a > little stuck on these two points > Hi, ... Being an efficient coder, can VIM make me more efficient? Hi, all, I'm a programmer (Windows) and now I'm very efficient on coding in Windows style editors like Delphi IDE or other editors. But sometimes I feel dull to do some repeating work such like add a continuous number before each line, remove some pattern text from each line. I think VIM maybe more effective in that area. I have been using VIM now and then for some days, but I've only known the very basically command such like i, dd, o, etc. I think it will take me long time to master VIM. My question is, is it worth for me to try to use VIM instead of other edi... Which is More Efficient? I have a program that uses up a lot of CPU and want to make it is efficient as possible with what I have to work with it. So which of the following would be more efficient, knowing that l is a list and size is a number? l=l[:size] del l[size:] If it makes a difference, everything in the list is mutable. Measure it and find out. Sounds like a little investment in your time learning how to measure performance may pay dividends for you. "Dustan" <DustanGroups@gmail.com> writes: > I have a program that uses up a lot of CPU and want to make it is > efficient as possibl...
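On the last question above (`l=l[:size]` versus `del l[size:]`): both leave `l` holding its first `size` elements, but slicing allocates a new list and rebinds the name, while `del` truncates the existing list in place, which matters when other names alias the same list. A quick sketch in plain Python:

```python
size = 2

# Slicing: builds a new list object and rebinds l; any alias still
# points at the old, untruncated list.
l = [1, 2, 3, 4]
alias = l
l = l[:size]
print(l, alias)   # [1, 2] [1, 2, 3, 4]

# del: truncates the same object in place; aliases see the change.
l = [1, 2, 3, 4]
alias = l
del l[size:]
print(l, alias)   # [1, 2] [1, 2]
```

Which spelling is faster depends on the implementation and the list size; `timeit` is the reliable way to settle it for a given workload.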
Why are the effects of Time Dilation permanent but Length Contraction is Not? Okay, imagine this scenario. An observer on Earth is going to measure the distance to Alpha Centauri in two ways. First, he will send a photon and calculate the distance as 1/2 ct. Then, he will send an odometer to Alpha Centauri traveling near the speed of light (say .9c) and have it return and will take 1/2 of the odometer reading. Which distance will be shorter? Assuming that Alpha Centauri is 4 light years away from Earth, the observer on Earth will measure 8 years for a signal to get there and back and so will calculate its distance to be 4 light years. He can send an odometer there and back at .9c and measure the distance that way. He can make an odometer by observing the spectrum of light coming from Alpha Centauri and from the Sun prior to sending away the odometer. Then, as the odometer is traveling, it continuously measures the Relativistic Doppler coming from both stars. One will be the reciprocal of the other. (Only one is required for the measurement but I'm showing that either one or both can be used.) Assuming that one of these ratios is R, the speed of the odometer is: β = |(1-R²)/(1+R²)| Integrating the speed over time yields distance traveled as a function of time. To make the calculation easier, we will assume that the speed is constant throughout the entire roundtrip which means that we only have to multiply the speed by the total time to get the total distance. Now let's work out the details for our example: At .9c, the values of R will be √[(1-β)/(1+β)] and its reciprocal. So R for Earth will be 0.2294 and for Alpha Centauri will be 4.359 during the outbound portion of the trip. For the inbound portion of the trip, these numbers are exchanged. The odometer will use the equation above to determine that β is indeed 0.9c. Now at 0.9c, the clock on the odometer will be running slow by a factor of 1/γ. We calculate γ as 1/√(1-β²), so 1/γ = √(1-β²) which equals 0.4359.
Now we need to calculate how long the trip will take. We do this first in the Earth frame as distance divided by speed which is 8 light years (round trip) divided by 0.9c which equals 8.8889 years. Now we multiply this by 1/γ to figure out what the time will be in the odometer's frame. This will be 8.8889 years times 0.4359 or 3.8747 years. This means that the total distance traveled is 3.8747 times 0.9 c or 3.4872 light years for the round trip or 1.7436 light years for the distance between Earth and Alpha Centauri. As a sanity check, this should be the distance in the Earth frame divided by γ or multiplied by 1/γ which we calculated as 0.4359. Indeed, 4 times 0.4359 is 1.7436 light years. So to answer your question, the distance measured by the odometer is shorter.
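The arithmetic above is easy to check with a few lines of code. A sketch in Python (variable names are mine; units are years and light-years with c = 1):

```python
import math

# Round-trip odometer calculation from the text.
beta = 0.9          # odometer speed as a fraction of c
one_way = 4.0       # Earth -> Alpha Centauri distance in the Earth frame (ly)

# Relativistic Doppler ratio looking back at Earth on the outbound leg,
# and the speed recovered from it via beta = |(1 - R^2)/(1 + R^2)|.
R = math.sqrt((1 - beta) / (1 + beta))        # ~0.2294
beta_from_R = abs((1 - R**2) / (1 + R**2))    # recovers 0.9

gamma = 1 / math.sqrt(1 - beta**2)            # ~2.294, so 1/gamma ~0.4359
earth_time = 2 * one_way / beta               # 8.8889 y round trip (Earth frame)
odometer_time = earth_time / gamma            # elapsed time on the odometer clock
odometer_distance = beta * odometer_time / 2  # one-way odometer reading

print(round(odometer_distance, 4))  # 1.7436 light-years, as in the text
```

At full precision the odometer's elapsed time comes out to about 3.8746 y (the text's 3.8747 reflects rounding 1/γ to 0.4359), and the one-way reading equals 4/γ = 1.7436 ly, exactly the length-contracted distance required by the sanity check.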
Eclipse Community Forums - RDF feed

OCL query to find sequence diagram methods

//1. an expression to compute all sequence diagram methods
context Interaction inv: self.message.signature->select(Operation)

//2. an expression to compute all class operations
context Class inv: self.ownedOperation

//3. an expression to compute the intersection of the above -- how should this expression be formed?
Rong Ge, Princeton CS

Computational Complexity and Information Asymmetry in Financial Products, with Sanjeev Arora, Boaz Barak and Markus Brunnermeier. In ICS 2010. A short version appeared in Communications of the ACM, 2011 Issue 5. Full text PDF.

See some blog comments on this paper: Intractability Center, Freedom to Tinker, Boing Boing, Daily Kos, In Theory, Healthy Algorithms, Lipton's blog

We put forward the issue of computational complexity in the analysis of financial derivatives, and show that there is a fundamental difficulty in pricing financial derivatives even in very simple models. This is still a working paper. The latest version (updated Jan. 2012) can be found here: Computational Complexity and Information Asymmetry in Financial Products

FAQ for Computational Complexity and Information Asymmetry in Financial Products

1. Why do you use a simple model? In real life people use more complex models while thinking about CDO pricing. This paper concerns a hardness result, not a pricing algorithm. As noted in the paper, our simple model can be embedded in industry-standard models like the Gaussian copula, and the hardness result therefore extends to these complex models. In general, when proving a hardness result one should use as simple a model as possible. When exhibiting a pricing algorithm, on the other hand, one should use as complex a model as possible.

2. Does the paper rely upon the P vs NP conjecture? The paper relies upon a stronger form of "P not equals NP", namely, that the planted dense subgraph problem does not have an efficient algorithm. (In fact it is conjectured that there is no algorithm to even compute approximate solutions to this problem. There is also believed to be no short certificate that a dense subgraph of the specified size does not exist in the graph.) For more details on computational complexity see the first few chapters of this book.

3. The hardness result applies in an asymptotic sense.
How should one interpret this? The planted dense subgraph problem (with parameters as discussed in the paper) appears to have no efficient algorithm for even moderate graph sizes. So the hardness result should apply even with reasonable parameter choices. For example, say the number of asset classes is 1000, and each contains 20 mortgages. The seller groups these into 200 CDOs, each with 100 mortgages. The threshold for the senior tranche is set to 35%. Then in a binary CDO the buyer's loss (= lemon cost) is as high as 12.5%, and in a tranched CDO the buyer's loss is as high as 1.875%. Strictly speaking we only need to assume that the specific pricing algorithm used by the buyers and rating agencies does not detect dense subgraphs. In real life (at least before the current crisis hit) many buyers simply trusted the rating agencies, which usually use Monte Carlo simulations. Those would certainly not detect dense subgraphs for the above parameter choices. 4. Are you suggesting that the phenomenon described in this paper was a major cause of the current global crisis? The current crisis had many causes, including poor modeling, regulatory failures, etc. This paper is focusing on an aspect (difficulty of pricing and how it worsens in the presence of asymmetric information) that will remain relevant even after the above causes are fixed. In general, financial experts suspect that the sheer volume of derivative transactions probably allows many kinds of manipulations to go undetected. We are highlighting one particular kind of manipulation where this undetectability as well as its financial cost can be made precise. 5. Does your paper imply that CDOs and related products should be banned? For now, that would be a hasty conclusion. One can imagine some free market solutions to the issues raised in the paper such as (a) turn over design of CDOs to a neutral 3rd party to avoid cherry-picking. (But, keep in mind the experience with rating agencies in recent years.)
(b) use CDOs with parameters where the planted densest subgraph problem is easy. (But, keep in mind that the market currently lacks full transparency, so it is unclear how to even obtain the graph on which to solve densest-subgraph!) Of course, one must then investigate if these new fixes allow other kinds of manipulation. Our paper is only a first cut at establishing a role for computational complexity in thinking about derivatives. Hitherto this aspect has been ignored.
standing-wave ratio (SWR, VSWR, ISWR)

Standing-wave ratio (SWR) is a mathematical expression of the non-uniformity of an electromagnetic field (EM field) on a transmission line such as coaxial cable. Usually, SWR is defined as the ratio of the maximum radio-frequency (RF) voltage to the minimum RF voltage along the line. This is also known as the voltage standing-wave ratio (VSWR). The SWR can also be defined as the ratio of the maximum RF current to the minimum RF current on the line (current standing-wave ratio or ISWR). For most practical purposes, ISWR is the same as VSWR. Under ideal conditions, the RF voltage on a signal transmission line is the same at all points on the line, neglecting power losses caused by electrical resistance in the line wires and imperfections in the dielectric material separating the line conductors. The ideal VSWR is therefore 1:1. (Often the SWR value is written simply in terms of the first number, or numerator, of the ratio because the second number, or denominator, is always 1.) When the VSWR is 1, the ISWR is also 1. This optimum condition can exist only when the load (such as an antenna or a wireless receiver), into which RF power is delivered, has an impedance identical to the impedance of the transmission line. This means that the load resistance must be the same as the characteristic impedance of the transmission line, and the load must contain no reactance (that is, the load must be free of inductance or capacitance). In any other situation, the voltage and current fluctuate at various points along the line, and the SWR is not 1.
If the impedance of the load is not identical to the impedance of the transmission line, the load does not absorb all the RF power (called forward power) that reaches it. Instead, some of the RF power is sent back toward the signal source when the signal reaches the point where the line is connected to the load. This is known as reflected power or reverse power. The presence of reflected power, along with the forward power, sets up a pattern of voltage maxima (loops) and minima (nodes) on the transmission line; the same thing happens with the distribution of current. The SWR is the ratio of the RF voltage at a loop to the RF voltage at a node, or the ratio of the RF current at a loop to the RF current at a node.

In theory, there is no limit to how high this ratio can get. The worst cases (highest SWR values) occur when no load is connected to the end of the line. This condition, known as an unterminated transmission line, arises when the end of the line is either short-circuited or left open. In theory, the SWR is infinite in either of these cases; in practice, it is limited by line losses, but it can exceed 100. This can give rise to extreme voltages and currents at certain points on the line.

The SWR on a transmission line is mathematically related to (but not the same as) the ratio of reflected power to forward power. In general, the higher the ratio of reflected power to forward power, the greater the SWR, and conversely. When the SWR on a transmission line is high, the power loss in the line is greater than the loss that occurs when the SWR is 1. This exaggerated loss, known as SWR loss, can be significant, especially when the SWR exceeds 2 and the transmission line has significant loss to begin with. For this reason, RF engineers strive to minimize the SWR on communications transmission lines.
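The mismatch arithmetic behind these statements can be made concrete. For a line of characteristic impedance Z0 terminated in a load impedance ZL, the reflection coefficient is Γ = (ZL − Z0)/(ZL + Z0), the VSWR is (1 + |Γ|)/(1 − |Γ|), and the reflected-to-forward power ratio is |Γ|². A minimal Python sketch (the impedance values below are illustrative, not taken from the article):

```python
def reflection_coefficient(z_load, z0):
    """Voltage reflection coefficient at the load (impedances may be complex)."""
    return (z_load - z0) / (z_load + z0)

def vswr(z_load, z0):
    """Voltage standing-wave ratio; infinite for a short or open (|gamma| = 1)."""
    gamma = abs(reflection_coefficient(z_load, z0))
    return float('inf') if gamma == 1.0 else (1 + gamma) / (1 - gamma)

def reflected_power_fraction(z_load, z0):
    """Fraction of the forward power reflected back toward the source."""
    return abs(reflection_coefficient(z_load, z0)) ** 2

# A matched 50-ohm load gives the ideal 1:1 SWR; a 100-ohm load does not.
print(vswr(50, 50))    # 1.0
print(vswr(100, 50))   # ~2.0 (|gamma| = 1/3)
print(vswr(0, 50))     # inf (short circuit: unterminated worst case)
```

A matched load (ZL = Z0) reproduces the ideal 1:1 SWR, while a short or open termination drives |Γ| to 1 and the SWR toward infinity, the unterminated worst case described above.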
A high SWR can have other undesirable effects, too, such as transmission-line overheating or breakdown of the dielectric material separating the line conductors. In some situations, such as those encountered at relatively low RF frequencies, low RF power levels, and short lengths of low-loss transmission line, a moderately high SWR does not produce significant SWR loss or line overheating, and can therefore be tolerated.

This was last updated in September 2005

Contributor(s): Olivier Cauvin
Majority Rules with Random Tie-Breaking in Boolean Gene Regulatory Networks

We consider threshold Boolean gene regulatory networks, where the update function of each gene is described as a majority rule evaluated among the regulators of that gene: the gene is turned ON when the sum of its regulator contributions is positive (activators contribute positively whereas repressors contribute negatively) and turned OFF when this sum is negative. In case of a tie (when contributions cancel each other out), it is often assumed that the gene keeps its current state. This framework has been successfully used to model cell cycle control in yeast. Moreover, several studies consider stochastic extensions to assess the robustness of such a model. Here, we introduce a novel, natural stochastic extension of the majority rule. It consists in randomly choosing the next value of a gene only in case of a tie. Hence, the resulting model includes deterministic and probabilistic updates. We present variants of the majority rule, including alternate treatments of the tie situation. The impact of these variants on the corresponding dynamical behaviours is discussed. After a thorough study of a class of two-node networks, we illustrate the interest of our stochastic extension using a published cell cycle model. In particular, we demonstrate that steady state analysis can be rigorously performed and can lead to effective predictions; these relate, for example, to the identification of interactions whose addition would ensure that a specific state is absorbing.

Citation: Chaouiya C, Ourrad O, Lima R (2013) Majority Rules with Random Tie-Breaking in Boolean Gene Regulatory Networks. PLoS ONE 8(7): e69626. doi:10.1371/journal.pone.0069626

Editor: Jesus Gomez-Gardenes, Universidad de Zaragoza, Spain

Received: March 2, 2013; Accepted: June 12, 2013; Published: July 26, 2013

Copyright: © 2013 Chaouiya et al.
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work was partially supported by the Fundação de Ciência e Tecnologia (PTDC/EIA-CCO/099229/2008). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. No additional external funding was received for this study.

Competing interests: The authors have declared that no competing interests exist.

Cellular processes are driven by large and heterogeneous interaction networks that are being uncovered thanks to tremendous technological advances. In this context, a range of modelling frameworks has been deployed to represent and analyse biological networks, aiming at a better understanding of these complex systems [1], [2]. Among these frameworks, Boolean Genetic Regulatory Networks (GRNs), introduced more than forty years ago, provide a convenient qualitative formalism [3], [4], which has since been the subject of numerous theoretical studies and extensions [5], [6]. Boolean GRNs, including their generalisation to account for multi-valued variables [7], have proved useful for modelling and analysing regulatory and signalling networks for which precise quantitative data are often scarce (see e.g. [8]–[13] for this framework applied to cell cycle modelling). Briefly, a Boolean GRN is defined by a signed, directed graph, where the nodes represent genes (or, more generally, regulatory components) and signed edges represent the regulatory interactions between these components: positive (resp. negative) edges denote activations (resp. inhibitions).
Each node is associated with a Boolean variable that accounts for the expression state (ON/OFF) of the corresponding gene, and a logical function specifies the evolution of this variable, depending on the variables associated with the regulators of the gene. More precisely, at each time step, gene values are updated according to the results returned by their logical functions. There is a variety of Boolean GRN models that differ in their classes of logical functions (e.g. additive, canalizing, unrestricted), in their structural properties (e.g. fixed, bounded or unrestricted indegrees), or in their updating scheme (e.g. synchronous, asynchronous, block-sequential).

To define a model, in addition to the already challenging problem of identifying the wiring of the (signed) regulatory network, one has to specify the logical functions associated with the nodes, that is, to specify how regulatory effects are combined. In this context, some authors choose to rely on functions uniquely defined from the regulatory structure [8], [10], [14]. In particular, in Boolean threshold networks, regulatory effects are assumed to be additive: each function is defined as a majority rule, where the decision to activate a gene follows from the comparison of the sum of the (possibly weighted) contributions of the regulators to a specific threshold. Boolean threshold networks have been successfully used to model the control of the cell cycle [8], [10]. Zañudo et al. have performed a thorough study of random Boolean threshold networks defined as a subset of the ensemble of Kauffman's random Boolean networks, where regulators and regulatory functions are randomly chosen [15]. Finally, it is worth noting that Boolean threshold networks originate from the McCulloch-Pitts neural model [16], which gave rise to countless studies and applications.

To account for the inherent stochasticity of regulation processes, stochastic versions of Boolean GRNs have been proposed in the literature [17]–[22].
Shmulevich and colleagues define Probabilistic Boolean Networks, where a set of regulatory functions is assigned to each gene and, at each time step, one function is randomly chosen within this set [17]. This setting results in dynamics that can be represented as a Markov chain. Other authors propose to update each gene according to its regulatory function with a given probability [18]–[21]. Garg et al. discuss this model, which they call Stochasticity In Nodes (SIN), indicating that it can lead to noise overrepresentation. They propose an alternate model, called Stochasticity In Functions (SIF), that accounts differently for the stochasticity of function failure: it associates different failure probabilities with different logical gates, and stochasticity also depends on the state of the regulators [22]. We finally refer to [23] for a seminal discussion of the complete probabilistic version of such models in the context of neural networks.

Here, focussing on threshold Boolean networks, we propose that the majority rule is particularly suitable to combine deterministic and probabilistic updates. Indeed, the combined contribution of the regulators at a given time does not always enable an unambiguous choice of the gene evolution. Hence, we propose a stochastic tie-breaking that associates a probability to the update value when positive effects countervail negative effects. Furthermore, various majority rule settings can be devised, which are specified and discussed in this paper. We extensively study a class of two-gene networks, considering different majority rule settings. We show that this simple motif gives rise to a wide variety of behaviours and that the regulatory structure plays a role in the degree of stochasticity exhibited by the dynamics. We further revisit Li et al.'s deterministic Boolean threshold model of the budding yeast cell cycle [8].
Interestingly, several studies have considered stochastic versions of this model with the intent to explore its robustness (e.g. [18], [24], [25]). Here, we illustrate the interest of our approach to tackle this question. In particular, we demonstrate that steady state analysis can be rigorously performed and can lead to effective predictions; these relate to the identification of interactions whose addition would ensure that a specific state is an absorbing state.

Boolean Gene Regulatory Networks (GRNs) are defined by a directed graph where the nodes represent the regulatory components (genes or their products) and the edges represent the regulatory interactions. We denote the nodes g_1, ..., g_n (n being the number of nodes). Each node g_i is associated with a level of expression (or of activity), referred to as x_i for simplicity. This level may change in time, taking the value +1 (ON) or -1 (OFF). An edge from g_j to g_i is denoted (g_j, g_i) and is associated with a sign s_{ji}, which is positive (s_{ji} = +1) for an activation or negative (s_{ji} = -1) for a repression. The source g_j of the edge is thus a regulator of gene g_i. If g_j does not regulate g_i, then s_{ji} = 0. The dynamics takes place in the configuration space {-1,+1}^n, and a configuration x is defined by the values of the n nodes: x = (x_1, ..., x_n). The evolution of each node is defined by an updating rule, which depends on the regulators of that node, and the time variable is discrete: t = 0, 1, 2, .... Note that there is an edge from g_j to g_i if, for some fixed values of the other regulators of g_i, changing the value of x_j has an effect on the value of x_i at the next time step: such regulatory interactions are said to be functional (e.g. [26]).

We first introduce the Majority Rule (MR) that, given the configuration x(t) of the system at time t, defines the configuration x(t+1) at the next time step (p_i in [0,1] is a tie-breaking parameter attached to node g_i):

  x_i(t+1) = +1 if Σ_j s_{ji} x_j(t) > 0,
  x_i(t+1) = -1 if Σ_j s_{ji} x_j(t) < 0,                                  (1)
  x_i(t+1) = +1 with probability p_i, and -1 with probability 1 - p_i, otherwise.

Hence, in Equation 1, an activator (resp. a repressor) has a positive contribution if it is present (resp. absent). When the sum of the contributions is zero (i.e. there are as many positive as negative contributions), rather than arbitrarily opting for a value, the MR sets x_i(t+1) = +1 with probability p_i and x_i(t+1) = -1 with probability 1 - p_i. A node is deterministic if its updating rule is deterministic for any configuration, and probabilistic if its updating rule is probabilistic for some configurations. Therefore, in the case of the MR, a node is deterministic if it has an odd in-degree (i.e. an odd number of regulators) and probabilistic if it has an even in-degree. If there is at least one probabilistic node, the dynamics of the model can be represented by a finite Markov chain on the configuration space {-1,+1}^n; otherwise, we have a deterministic dynamical system on this space. Extending the usual notion of absorbing chains [27], we say that the chain is absorbing if all ergodic sets are deterministic: either fixed points (i.e. configurations x such that x(t+1) = x(t) with probability one) or cycles (i.e. sets of configurations such that there exists a k for which x(t+k) = x(t) with probability one). Hence, with this definition, the set of absorbing states includes states that are members of deterministic cycles. It corresponds to the usual definition applied to a power of the transition matrix. Moreover, we will often refer to the terminology of the dynamical systems community by calling attractors the (minimal) ergodic sets of a chain, which are also defined as the terminal strongly connected components of the transition diagram.

For completeness, we also investigate two variants of the MR. The first variant, referred to as the Inertial Majority Rule (IMR), considers the current state of a probabilistic node to define its next value in the case of an equal number of positive and negative contributions:

  x_i(t+1) = +1 if Σ_j s_{ji} x_j(t) > 0,
  x_i(t+1) = -1 if Σ_j s_{ji} x_j(t) < 0,                                  (2)
  x_i(t+1) = x_i(t) with probability p_i, and -x_i(t) with probability 1 - p_i, otherwise.

We designate this rule inertial because its deterministic version (when p_i = 1) specifies that nodes keep their current values when activations and repressions cancel each other out.
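Equations 1 and 2 translate directly into code. The following Python sketch is our own illustration (the dict-based encoding, the per-node tie-breaking probabilities p, and the ±1 level convention are assumptions of the sketch, not the paper's notation); it performs one synchronous update under either the MR or the IMR:

```python
import random

def majority_update(x, signs, p, inertial=False, rng=random):
    """One synchronous majority-rule step over node levels in {-1, +1}.

    x        : dict node -> current level (+1 = ON/present, -1 = OFF/absent)
    signs    : dict (regulator, target) -> +1 (activation) or -1 (repression)
    p        : dict node -> tie-breaking probability
    inertial : False for the MR (tie -> +1 with probability p[i]),
               True for the IMR (tie -> keep current value with probability p[i])
    """
    nxt = {}
    for i in x:
        total = sum(s * x[j] for (j, t), s in signs.items() if t == i)
        if total != 0:
            nxt[i] = 1 if total > 0 else -1          # deterministic majority
        elif inertial:                               # IMR tie-break
            nxt[i] = x[i] if rng.random() < p[i] else -x[i]
        else:                                        # MR tie-break
            nxt[i] = 1 if rng.random() < p[i] else -1
    return nxt

# A node with one activator and one repressor (even in-degree) is probabilistic:
# when both regulators are ON, their contributions cancel.
x = {'a': 1, 'b': 1, 'c': -1}
signs = {('a', 'c'): 1, ('b', 'c'): -1}
p = {'a': 1.0, 'b': 1.0, 'c': 1.0}                   # extreme values: deterministic

mr  = majority_update(x, signs, p)                   # c -> +1 (tie broken toward ON)
imr = majority_update(x, signs, p, inertial=True)    # c -> -1 (keeps its value)
print(mr['c'], imr['c'])                             # 1 -1
```

With the tie-breaking probabilities fixed to 0 or 1, the update becomes fully deterministic, matching the deterministic versions of the rules discussed in the text.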
It is worth noting that this rule amounts to adding a functional self-activation on each node: when the sum of the contributions from all other regulators is zero, it is the value of the node itself that determines its next level.

In the next MR variant, referred to as the Null Majority Rule (NMR), the nodes take values 0 and 1. Hence the configuration space is {0,1}^n, and we denote y_i the level of the i-th node, to distinguish it from x_i, which takes values -1 and +1:

  y_i(t+1) = 1 if Σ_j s_{ji} y_j(t) > 0,
  y_i(t+1) = 0 if Σ_j s_{ji} y_j(t) < 0,                                   (3)
  y_i(t+1) = 1 with probability p_i, and 0 with probability 1 - p_i, otherwise.

Hence, under the NMR, when the level of a regulator is zero, it plays no role in the regulatory function. As a consequence, whatever the sign of the interaction (activation or inhibition), the absence of a regulator results in the same (lack of) contribution, in contrast to the MR, where e.g. the absence of a repressor has a positive contribution. Importantly, whatever their in-degree, all nodes are probabilistic. These two variants of the majority rule can be combined in an Inertial Null Majority Rule (INMR), as in the model of cell cycle control in yeast specified by Li et al. [8] (see below, the section devoted to the yeast cell cycle model). Because the evolution of any node only depends on its regulators, it will be convenient to focus on structures that we call modules, which are composed of one node and its incoming interactions. Finally, it is worth noting that the majority rules defined above are special cases of the regulatory functions considered in threshold Boolean networks, where the sums of contributions include interaction weights and are compared to activation thresholds [15]. Here, all interaction weights are set to ±1 (the interaction signs), and all thresholds are zero.

Two-node Gene Regulatory Networks

Here, we consider connected Gene Regulatory Networks (GRNs) encompassing two nodes g_1 and g_2. There are three classes of such two-node GRNs, which include respectively two, three and four interactions. The first class contains three elementary cross-regulatory circuits; two circuits are positive circuits (i.e.
the product of the interaction signs is positive) and one circuit is negative, with a node activating its repressor. There are indeed two such circuits, which are equivalent up to exchanging node labels: g_1 activates g_2, which inhibits g_1; or g_1 inhibits g_2, which activates g_1. In these models, both nodes are deterministic under the Majority Rule (MR). The second class encompasses the networks made of cross-interactions and a single self-interaction (six such networks, up to exchanging node labels). Under the MR, the self-regulated node is probabilistic, whereas the other node is deterministic. These models give rise to: 1) bi-stable dynamics (when both circuits are positive), 2) an absorbing period-2 cycle (when the cross-regulatory circuit is positive and the self-regulation is negative), and 3) combinations of cycles over the four configurations (when the cross-regulatory circuit is negative). We choose to thoroughly study the third class, for which both nodes are probabilistic. We thus consider all the GRNs defined by cross-interactions between nodes g_1 and g_2, which are both self-regulated (for convenience, we use free variables and such that and ). We start by considering the MR. Then, we point out the differences with the inertial and null MR variants (IMR and NMR). We denote by the module where is self-regulated (with sign ) and is regulated by the node (with sign ); there are four modules of this type. We are thus interested in the networks that result from the composition (denoted ) of two such modules. In what follows, the Markov transition matrices are 4×4 matrices with entries corresponding to configurations , , , (in this order, which facilitates the description of the rotation that transforms one model into another; see below). Figure 1 summarises the dynamical rules for the four modules, considering the MR as defined by Equation 1. There are 16 models corresponding to the different combinations of two modules.
Notice that a row rotation (modulo 4, from top to bottom) transforms each module (column) into the next one. Denoting by this transformation and arbitrarily denoting by the module, we refer to the remaining modules as indicated in Figure 1: , and (). Figure 1. The four modules and their evolutions for the majority rule (MR). The sign corresponds to the probabilistic choice: with probability and with probability . In each column, is the symbol associated with the module (sign of the self-regulation and sign of the cross-interaction). We first observe a node symmetry that relates and by exchanging and . Referring to the relation between the two modules that define a two-node GRN, we partition the set of models into two subsets: eight models are said to be in phase (IP), when , that is, when the probabilistic choices are located in the same row in Figure 1; the remaining eight models are out of phase (OP), when . In the former case (IP), the Markov matrix has two rows with four probabilistic entries, each combining the two parameters (), and two rows with a deterministic entry (i.e. with probability one); this defines 10 transitions in the corresponding dynamical diagrams. In the latter case (OP), each row has two probabilistic entries (either or ), giving rise to eight transitions in the dynamical diagrams. We search for other symmetries to reduce the case studies of our two-node models. From a mathematical standpoint, which does not always fit the functional perspective, two models are equivalent when their Markov matrices are the same up to a renaming of the state space and a bijective correspondence of the parameters. Clearly, a necessary condition for this equivalence is that the diagonal elements of the matrices are the same up to parameter exchanges. In particular, an IP model cannot be isomorphic to an OP model. By inspection of the diagonal entries of each model and elementary computations, we end up with a complete classification of all the models.
There are eight IP models grouped into three isomorphic classes, IP1, IP2 and IP3. They are characterised by the existence of two deterministic transitions whose specific locations govern the dynamics of the model. There are also eight OP models grouped into three isomorphic classes, OP1, OP2 and OP3. Contrary to the IP models, all the transitions are probabilistic and depend on only one of the parameters ( or ), allowing complete flexibility of the mean visit times associated with each connected component of the dynamical graph. Model class IP1. It includes the two models (i.e.) and 0 (i.e.). From the structural symmetry point of view, this class contains the models with self-activations and symmetrical cross-interactions (i.e. positive two-node circuits). The transition matrix of is:(4) The model, together with its dynamics depending on the values of the parameters, is depicted in Figure 2. The transition matrix of 0 can be deduced from the matrix of by permuting the entries and changing to and to :(5) Figure 2. The dynamics of the IP1 model . Therefore the dynamics of is isomorphic to that of 0. These models are self-symmetric by node symmetry. The two deterministic transitions (i.e. with probability one) are loops on single states (i.e. diagonal elements in the transition matrix). In other words, the corresponding Markov chains are absorbing with two fixed-point attractors. The fundamental matrix is [27]:(6) where the first entry is for and the second for . Recall that the fundamental matrix of an absorbing chain is defined as N = (I - Q)^{-1}, where Q is the sub-matrix of the transition matrix restricted to the set of transient states [27]. Entry (i, j) of the fundamental matrix has a nice probabilistic interpretation: it corresponds to the mean time spent by the process in configuration j if the process starts in configuration i. Note that this value is finite because N is defined on the transient states.
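These absorbing-chain quantities are easy to reproduce numerically. A small sketch (the 2×2 transient block Q and its entries are purely illustrative, not taken from the paper's transition matrices):

```python
import numpy as np

# Hypothetical 2x2 transient block Q of an absorbing chain (rows/columns
# index the two transient configurations); the values are illustrative.
Q = np.array([[0.25, 0.25],
              [0.25, 0.25]])

# Fundamental matrix N = (I - Q)^-1: entry (i, j) is the expected number
# of time steps spent in transient state j when starting from state i.
N = np.linalg.inv(np.eye(2) - Q)

# Expected total time before absorption from each transient state: row sums.
absorption_times = N.sum(axis=1)
print(N)                 # [[1.5, 0.5], [0.5, 1.5]] for this Q
print(absorption_times)  # [2.0, 2.0]
```

The finiteness mentioned in the text corresponds to I − Q being invertible, which holds precisely because Q is restricted to the transient states.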
Relying on our extended notion of absorbing chains, when ergodic sets are deterministic cycles, we can similarly define a fundamental matrix and use the same rationale by simply considering a power of instead of . Therefore, starting in the configuration (or ), for typical values of the parameters around (), the mean time spent by the process in one of the transient configurations is of order one (actually ). It diverges when the parameters tend to opposite extreme values (, ) or (, ), where, at the limit, a third fixed point appears. Instead, when both parameters are close to or , the dynamics still encompasses two absorbing configurations, while the expected times to reach these configurations tend to or . When and are fixed to their extreme values ( or ), the system is deterministic, and the rules governing the evolution of the nodes can be defined by means of logical connectors. Here,
• corresponds to an AND rule on both nodes (the presence of the two activators is required to reach level );
• corresponds to an OR rule on both nodes (the presence of at least one activator is required to reach level );
• , corresponds to an OR rule on node and an AND rule on node ;
• , corresponds to an AND rule on node and an OR rule on node .
A remarkable feature of this type of model is its ability to continuously exchange two logical connectors by weighting their respective probabilities of implementation. For instance, when , is the probability of activating the dynamical connection corresponding to an OR rule on node and is the probability corresponding to an AND rule. This is clearly illustrated in the dynamical graphs in Figure 2. In this sense, we can say that the border of the parameter domain constitutes a continuous family of Stochasticity In Functions (SIF) models, following the definition in [22].
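The collapse to pure AND/OR gates at the extreme parameter values can be checked directly. In this sketch, a target node with two activators (a setup chosen for illustration, with levels in {−1, +1}) behaves as an AND gate when ties are always broken toward −1 and as an OR gate when they are always broken toward +1:

```python
# For a node with two activators, the tie-break weight p interpolates between
# Boolean gates: p = 0 acts as AND (both activators needed to reach +1) and
# p = 1 acts as OR (one activator suffices). Levels are in {-1, +1}.
def mr_two_activators(xa, xb, p):
    total = xa + xb                      # both interaction signs are +1
    if total > 0:
        return 1
    if total < 0:
        return -1
    return 1 if p == 1.0 else -1         # deterministic extremes p in {0, 1}

AND = {(xa, xb): mr_two_activators(xa, xb, p=0.0) for xa in (-1, 1) for xb in (-1, 1)}
OR  = {(xa, xb): mr_two_activators(xa, xb, p=1.0) for xa in (-1, 1) for xb in (-1, 1)}
print(AND)   # +1 only when both activators are present
print(OR)    # -1 only when both activators are absent
```

Intermediate values of p then realise the continuous mixture of the two connectors described above, with p weighting the OR outcome and 1 − p the AND outcome on a tie.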
The whole parameter domain can thus be seen as a generalisation of these stochastic models, also corresponding to the probabilistic Boolean networks proposed by Shmulevich et al. [17]. In fact, by a theorem on random map realisations of Markov chains (see [28], chapter 1.2), our two-node models can be realised as random walks on the set of the dynamical graphs of the four extreme models (i.e. for which the parameters and equal or ). Let us denote these dynamical graphs by (for ), , and (see Figure 2). Notice that, in the dynamics of these deterministic models, any configuration has a unique outgoing transition. At each time step, one extreme model is randomly and independently selected, and the next configuration is chosen according to the (unique) transition leaving the current configuration in the corresponding dynamical graph. is taken with probability , with probability , is taken with probability and with probability . This random walk has exactly the same probabilistic transitions as the original IP1 model depicted in Figure 2. Model class IP2. It includes the two models and . From the structural symmetry point of view, this class contains the models with self-inhibitions and symmetrical cross-interactions (i.e. positive two-node circuits). The model is changed into by permuting the entries and changing to and to . The two models are also self-symmetric by node symmetry. Because the two deterministic arrows (i.e. with probability ) interchange two states, the corresponding Markov chains are absorbing with a unique attractor, a period-2 cycle (see Figure 3). Therefore, regardless of the initial configuration, all the realisations end up in this cycle with probability one. Figure 3. The dynamics of the IP2 model . Because , the transient dynamics of and of are identical, and the analysis of the parameter space follows along the same lines as for the previous class. Model class IP3. It includes four models: , , and their node-symmetric homologues and .
From the structural symmetry point of view, this class contains all the models asymmetrical with respect either to the self-interactions or to the cross-interactions. By permuting the entries: and by changing to , to , is changed into . Notice that an IP2 model cannot be isomorphic to an IP3 model, even if they share the same diagonal elements. This is because, in the IP2 class, the deterministic arrows involve two states, while in the IP3 class four states are concerned, and this property is invariant under isomorphism of the state space. IP3 models define regular chains (the four states constitute a unique ergodic set, unless the parameters take extreme values), but the presence of the two deterministic transitions puts extra weight on the corresponding target states. Figure 4 shows that there are many cycles, giving rise to oscillations that can visit any configuration in any order and with different return times. The mean return times to each configuration , a kind of mean period of the oscillations, can be computed from the invariant probability distribution and read: (7) We recall that the mean time taken by a regular chain that starts at a configuration to return to its starting point (the mean return time at that configuration) is given by the inverse of the corresponding component of the limiting probability vector (see [27], Theorem 4.4.5). It is also possible to compute this value using the fundamental matrix of the process ([27], Theorem 4.4.7). Note that, for a regular Markov chain, the definition of the fundamental matrix slightly differs from that of an absorbing chain (see [27], Definition 4.3.2). Figure 4. The dynamics of the IP3 model . In Figure 5, the values of the mean return times are depicted as functions of , for . Not surprisingly, due to the deterministic transitions, the mean return time to (resp. ) is always larger than that to (resp. ). When tends to an extreme value, the system turns into an absorbing chain and the return times of the transient configurations diverge.
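The mean return times of a regular chain follow from the stationary distribution, as stated above: the mean return time to a configuration is the inverse of its stationary probability. A numerical sketch with an illustrative 4-state transition matrix (the entries are made up for the example, not the paper's):

```python
import numpy as np

# Illustrative regular 4-state transition matrix (each row sums to 1).
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.5, 0.0, 0.0, 0.5],
              [1.0, 0.0, 0.0, 0.0]])

# Stationary distribution pi solves pi P = pi with sum(pi) = 1: take the
# left eigenvector of P associated with eigenvalue 1 and normalise it.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

# Mean return time to each configuration = inverse of its stationary mass.
mean_return = 1.0 / pi
print(pi)           # [1/3, 1/6, 1/3, 1/6] for this P
print(mean_return)  # [3, 6, 3, 6]
```

States carrying more stationary mass are revisited sooner, which is the mechanism behind the shorter return times to the targets of the deterministic transitions noted in the text.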
As for the other IP models, the extreme cases correspond to models where the rules are defined by means of logical connectors. Hence, Figure 5 is a further illustration of a continuous parameter swap between different logical rules. Figure 5. Mean return times of the IP3 model as a function of , with fixed to . In this plot, one can observe, for instance, that the mean return time to varies from (when ) to (when ). Model class OP1. It includes and its node-symmetric counterpart . From the structural symmetry point of view, the class contains all the models with self-activations and asymmetrical cross-interactions. OP1 models are the probabilistic counterpart of the negative circuits studied in [29]: the dynamics is built on a fundamental period-4 cycle combined with fluctuating sojourns in each configuration. The transition matrix is:(8) Figure 6 illustrates the relevant features of the dynamics of this model. Notice that, by changing the parameters, it is possible to modulate the time spent in each configuration and therefore the mean period of the oscillations. This observation is corroborated by the computation of the mean return times:(9) Figure 6. The dynamics of the OP1 model . For extreme values of the parameters, the system is bistable. Model class OP2. It includes the model and its node-symmetric counterpart . From the structural symmetry point of view, the class contains all the models with self-inhibitions and asymmetrical cross-interactions (negative circuits between the two nodes). Figure 7 shows the existence of synchronous transitions where both nodes simultaneously change their values, inducing various period-2, 3 and 4 cycles. Combinations of these cycles lead to oscillations of any order. The extreme cases display four deterministic periodic dynamics, each including one synchronous transition that involves simultaneous updates of the two nodes. The analytical expressions of the mean return times are:(10) Figure 7. The dynamics of the OP2 model .
The set of equations (10) fully supports the idea of continuous parametric transitions among these dynamics: while the probability of a period-3 cycle increases as the parameters tend to their extreme values ( or ), for intermediate parameter values, higher values of indicate that the period-4 orbits become prominent. Model class OP3. It includes four models, and and their node-symmetric counterparts. From the structural symmetry point of view, the class contains all the models with asymmetrical self-interactions and symmetrical cross-interactions. By permuting the entries: and changing to and to , model is changed into . The dynamics of these models alternates among period-1 to period-4 cycles; it may thus be viewed as a transition between the OP1 and OP2 models. Figure 8 exhibits the dynamical properties of this model. In particular, in the extreme cases, we observe the existence of deterministic fixed points, possibly combined with a period-2 cycle. Figure 8. The dynamics of the OP3 model . The existence of oscillations of any period is also shown in Figure 8, and Equation (11) points to a large variety of time scales of the oscillations when the parameters are changed:(11) Two Majority Rule variants Here, we briefly analyse the cases of the two variants previously introduced: the Inertial Majority Rule (IMR) and the Null Majority Rule (NMR). The Inertial Majority Rule. This rule specifies that, whenever activations and repressions cancel each other out, the next level of a node depends on its current level (Equation (2)). For our two-node models under the IMR, we can define the same isomorphism classes as those of the MR. From Figure 9, one can observe that the symmetry for the IMR is slightly different from that of the MR. There are two types of probabilistic choices, introducing a row reflection besides the rotation to relate the modules. For example, evolution in Figure 9 is obtained by rotating module rows (transforming into ).
As a consequence, the isomorphism between models under the IMR relies on a different parameter change when compared to the MR: is changed to and to . However, the IMR and the MR have exactly the same model classes and similar dynamics. Differences regarding transition probabilities only arise for the models combining an even and an odd module, i.e. an even and an odd column of Figure 1 (for the MR model) and Figure 9 (for the corresponding IMR model). For instance, in the case of the OP3 model , defined by the third and fourth columns of Figures 1 and 9, the two loop transition probabilities are different for the MR (namely and ), whereas they are identical for the IMR (namely ). The probabilities of the transitions connecting configurations and similarly differ between the MR and the IMR. The reason for this clearly appears in Figures 1 and 9, where the probabilistic choices are identical in both columns for the MR, whereas they are opposite for the IMR. Figure 9. The four modules and their evolutions for the Inertial Majority Rule (IMR). The sign corresponds to the probabilistic choice: with probability and with probability, whereas the sign corresponds to the opposite probabilistic choice: with probability and with probability . is a reflection, is a rotation as for the MR in Figure 1. The Null Majority Rule. The majority determined under the NMR is quite different from that of the MR and the IMR (see Equation (3)). Indeed, a node whose level is 0 makes no contribution to the updating decision of its targets. Still, one can define a bijection between both representations. In any configuration, let denote the (global) contribution of the regulators targeting node (i.e.). We have:(12) Note that the very same change of variables was defined by F. Robert, coming up with two equivalent formulations for threshold networks [30]. However, to ensure equal dynamics, the threshold functions and the thresholds were accordingly modified.
Here, our purpose is different and amounts to revising the semantics of repression contributions (therefore the zero threshold is maintained for all the nodes). The modules and are identical under the MR and NMR because, in these cases, (see Figure 10). As a consequence, the four NMR models built with these modules have the very same dynamics as their MR counterparts. Moreover, considering the NMR, if at a given time, , then and the sixteen models have the same probabilistic updating for this configuration. Finally, it is easy to check that starting at time from the remaining configurations , or , the updating process leads to in the module and in . From these observations, it turns out that NMR models have more deterministic transitions than their MR analogs. Not surprisingly, there are thus more absorbing models under the NMR than under the MR. This is a remarkable difference from the biological perspective since under the NMR, in eleven out of sixteen models, the dynamics converge to a fixed point or a small cycle. Hence the NMR displays robust, restricted behaviours. Moreover, changes in parameter values only affect the times of convergence to attractors, whose identities are conserved. In contrast, the MR is more flexible, leading to models with a larger variety of behaviours. Figure 10. The four modules and their evolutions for the null majority rule (NMR). The sign corresponds to a probabilistic choice: with probability and with probability . Node levels take values and . Finally, with the INMR that results from the combination of the inertial and null majority rules, the module evolutions are similar to those defined in Figure 10, except that for configuration , and are interchanged (i.e. for all the modules, value is chosen with probability and with probability ). The yeast cell cycle network revisited The original model. The eukaryotic cell cycle defines a series of phases undergone by cells that divide, giving rise to daughter cells. 
G1 is a growing phase, known as gap 1 phase, followed by the S phase of DNA synthesis and chromosome replication. Then, after the gap phase G2, the M phase proceeds with the separation of the chromosomes and culminates with cell division. In [8], Li et al. define a Boolean Gene Regulatory Network that encompasses the main regulators of the cell cycle progression in the budding yeast. The network supporting this model is depicted in Figure 11. The authors use a deterministic Inertial Null Majority Rule, hence the 11 variables take values or , and when with probability . Interestingly, Davidich and Bornholdt's Boolean model of the fission yeast cell cycle uses the very same rule [10]. Recently, Fauré and Thieffry described and compared logical models of the molecular networks controlling the cell cycle in different eukaryotic organisms [11]. Figure 11. The yeast cell cycle model as defined in [8]. Green arrows denote activations, whereas red T-ended edges denote inhibitions. In Li et al.'s model, self-degradations (dashed red loops) were added to the nodes that have no negative regulators. When considering the stochastic MR rule, these self-loops can be discarded (see text). With the addition of the two activatory edges in blue, becomes the unique attractor of the model when . The table on the right indicates molecular counterparts of nodes as well as their values in the configuration. Cyclin Cln3 is known to be crucial for the cell commitment to S phase, i.e. for the cell cycle progression. In this model, Cln3 () thus acts as an input of the network (possibly stimulated by a start signal). As a key feature, the model has a fixed point denoted , which corresponds to the G1 phase and attracts most of the trajectories, considering all possible initial conditions. There are six other fixed points in the model, but those have rather restricted basins of attraction and no meaningful biological counterparts. 
Moreover, starting from the state , and artificially switching Cln3 ON, the model follows a trajectory matching the cell cycle progression until coming back to the state . Li et al. considered the large size of the basin of attraction of the biological fixed point as a good indication of the robustness of the network to perform its function. This is confirmed by showing that the size of this basin of attraction is mostly preserved under perturbations that randomly remove or introduce a regulatory interaction. In [25], Stoll et al. propose other types of perturbations: 1) shuffling the wiring yet keeping the connectivity at each node or 2) removing one or several regulatory interactions. Using Li et al.'s model as a case study, they consider the size distribution of the basins of attraction and distance to a reference attractor as useful measures to assess the impact of these perturbations. Zhang and colleagues assess the effect of stochasticity on the Li et al. model by turning it into a probabilistic model where all transitions in the configuration space are made possible [18]. In the framework of the present work, it is natural to consider the model described above as an extreme case of its stochastic version and to study the robustness of the dynamical behaviour when faced with perturbations in the probability parameter space. Therefore, we consider the stochastic version of this model under the Inertial Null Majority Rule: when (the sum of the contributions is zero) we have with probability , otherwise with probability . As for the two-node models under the NMR, all the modules are probabilistic. In particular, when all the node values equal zero (see Figure 10). In the configuration , all genes are inactive but (which negatively regulates and ) and (which negatively regulates ). We have thus that , hence is stable in , similarly for . Consequently, is not absorbing, except if . 
When these parameters are close to , the system may be steady in long enough to match the biological situation, but it will eventually (after a finite time, with probability ) leave , following a trajectory different from the cycle described in [8]. In the deterministic case, the INMR favours the existence of steady states including those with active genes whose regulators are all inactive; as discussed in [15], the fact that a node keeps its current value when the sum of the contributions is zero leads to frozen nodes. As already mentioned, the inertial rule amounts to adding a self-activation to every node. It is worth mentioning that the self-inhibitions of the model (see Figure 11) are not functional (see [26]); they merely cancel out these self-activations, which are hidden in Li et al.'s model. In other words, for nodes that are only positively regulated, the NMR is applied. In contrast to the deterministic INMR, the stochastic model does not display such stability. The aforementioned property of the inertial deterministic rule that generates frozen nodes does not hold anymore. In particular, when regulators are absent, activations and inhibitions are not discriminated, giving rise to a large number of probabilistic configurations. This is the main reason why , together with the other steady configurations of the INMR model, are not robust to the stochastic extension and are not absorbing states. The model revised, considering the stochastic MR. We now consider Li et al.'s model under the stochastic MR as defined by Equation (1). Node values are thus set to or (and denoted by rather than ). We recall that when the sum of its input contributions equals zero (), takes the value with probability and with probability . In order to analyse the dynamical features of the model, in particular regarding its steady states, we take advantage of the combination of deterministic and probabilistic operation modes. 
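The stochastic MR update just recalled — a deterministic sign of the weighted input sum, with a probabilistic choice only when the contributions cancel — can be sketched as follows. The weight-matrix encoding, the function name, and the per-node probabilities `p[i]` are illustrative assumptions, not the paper's notation:

```python
import random

def mr_update(weights, state, p, rng=random):
    """One synchronous step of the stochastic Majority Rule (MR).

    weights[i][j] is the signed weight of the interaction j -> i
    (0 if absent); state lists node levels in {-1, +1}; p[i] is the
    probability of choosing +1 when the contributions to node i
    cancel out.  Names and shapes are illustrative.
    """
    nxt = []
    for i, row in enumerate(weights):
        total = sum(w * s for w, s in zip(row, state))
        if total > 0:
            nxt.append(+1)
        elif total < 0:
            nxt.append(-1)
        else:  # probabilistic choice only on a tie
            nxt.append(+1 if rng.random() < p[i] else -1)
    return nxt
```

Only tie configurations consume randomness; every other node updates deterministically, which is what keeps the probabilistic entries of the transition matrix sparse.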
As we shall see, the deterministic part of the dynamics imposes strict restrictions that are worth inspecting before pursuing the study. We describe the strategy in some detail because it can be easily generalised and thus used to study any model under the same rule. Recall that a configuration of the module includes the values of all the regulators of . Besides the input node , the yeast cell-cycle network has five deterministic modules, i.e. with odd in-degree, the remaining five being probabilistic. For a probabilistic module , only configurations such that have a probabilistic outcome. An absorbing configuration , i.e. for which , the element of the transition matrix equals , verifies: We first search for the steady configurations of the deterministic modules (they strongly restrict the number of candidates for absorbing configurations). Among the 32 configurations of the five deterministic nodes, we easily end up with only two candidates. All the other 30 configurations are discarded because they are not steady for at least one deterministic module. These two remaining configurations, steady for all the five deterministic modules, are = (, , , , ) and = (, , , , ). The former matches the biological fixed point for the five deterministic modules, and the latter corresponds to its mirror image. Notice that the existence of these two solutions is a consequence of the corresponding symmetry of the MR (versus). We then look for all the possible extensions to the remaining six probabilistic nodes of these two solutions. The number of such extensions may be reduced if the values of the deterministic regulators of a probabilistic module determine the value of the corresponding node. Because implies that , which is not compatible with , we conclude that has no steady extensions. Let us now explore the possible steady extensions of . Recall that in . Clearly, from the already known inputs of module (that are and ), it follows that . 
Looking now at the five known values for module (i.e. , , and itself), we conclude that , which in turn implies . It remains to investigate and . In order to have with non-zero probability (in fact ) we should have . For module , we have with probability . On the other hand, in order to be consistent with the values already fixed for module , we need to set , which is the case with probability . Therefore is steady with probability . Remarkably, this analysis shows that the only steady configuration is , even if it is not absorbing; no other configuration remains steady with a non-zero probability. This encouraging result naturally leads us to search for minimal changes in the interaction network that would turn into an absorbing configuration. The first simple modification consists in eliminating the self-inhibition of , making this module deterministic with the proper outcome. Note that, because the MR accounts for the absence of a regulator, we could safely clean up the model by discarding the self-inhibitions of , , , and . These were artificially added in the original model to ensure self-degradation of components that are not subject to other inhibition, under the INMR, and their elimination does not modify the results presented here. There remains the drawback of modules and . They can be fixed with probability one either by adding a positive interaction from a node whose value is in the configuration , or by adding a negative interaction from a node whose value is in . Interestingly, a modification that fulfils these constraints was mentioned by Fauré and Thieffry, who propose to account for biological data suggesting that Cln1/2 and Clb5/6 positively regulate their own transcription factors [11]. With these positive interactions added, from to (Cln1/2 to SBF) and from to (Clb5/6 to MBF), is the only steady configuration that turns out to be absorbing, that is to say, to have maximal robustness in the Markov chain context. 
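The search strategy just applied — find configurations that every deterministic module leaves unchanged, then check the probabilistic ones — can be brute-forced on small networks. This sketch enumerates all configurations and keeps those that are absorbing under the stochastic MR; the toy weight matrices are hypothetical examples, not Li et al.'s network:

```python
from itertools import product

def absorbing_states(weights):
    """All configurations (tuples over {-1, +1}) that the stochastic MR
    maps to themselves with probability 1: every node's input sum is
    non-zero and its sign agrees with the node's current level."""
    n = len(weights)
    found = []
    for state in product((-1, +1), repeat=n):
        ok = True
        for i, row in enumerate(weights):
            total = sum(w * s for w, s in zip(row, state))
            if total == 0 or (total > 0) != (state[i] > 0):
                ok = False
                break
        if ok:
            found.append(state)
    return found

# Toy examples: mutual activation has the two uniform states absorbing,
# mutual inhibition the two alternating ones.
print(absorbing_states([[0, 1], [1, 0]]))    # [(-1, -1), (1, 1)]
print(absorbing_states([[0, -1], [-1, 0]]))  # [(-1, 1), (1, -1)]
```

Exhaustive enumeration is exponential in the number of nodes, which is why the text first pins down the deterministic modules: they prune the candidate set before the probabilistic extensions are examined.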
A subsequent question arises that concerns the existence of other absorbing trajectories in this modified model. By generating the state transition diagram of the corresponding Markov chain, we could verify that, when , the state is the unique attractor and thus, as mentioned above, for , it is easy to deduce that the unique attractor is , the mirror state of . Hence, with probability , the system will reach either or , depending on the value of the input node . We thus have a full characterisation of the asymptotic behaviour of the model. In this section, the cell cycle model of Li et al. has been used to illustrate the interest of our stochastic majority rule. Detailed biological interpretation of the model properties and further study to assess transient behaviours go beyond the scope of this paper. In this work, we have presented a stochastic extension of threshold Boolean networks that includes both deterministic and probabilistic rules. In contrast to other studies where all transitions are made stochastic (e.g. [18]), a probabilistic choice is made only when the sum of the contributions equals the threshold (often set to ), otherwise, the update is deterministic. This is rather natural from the biological viewpoint. Indeed, it is reasonable to assign a probability to the update choice when regulatory effects cancel each other out. The originality of this model lies in the coexistence of deterministic and probabilistic nodes (or modules) in the same gene network; the former have a deterministic outcome for any input configuration, while the latter have a probabilistic choice in certain configurations. This natural ambivalence opens up interesting new dynamical characteristics, while avoiding a useless combinatorial explosion of trajectories. This allows a rigorous analysis of certain dynamical properties of the model. 
In particular, we have shown how all the steady configurations may be identified and their properties modified in agreement with biological observations. More specific features of the dynamics, such as the mean sojourn and return times, can be studied in this formalism, allowing an almost complete description of the dynamical properties of the models. We have introduced the majority rule (MR) as a convenient setting, compared to the null (inertial) majority rule: variables taking values and amounts to considering that the absence of a regulator has an effect opposite to that observed when the regulator is present. When variables take values and , the absence of a regulator is not accounted for in the rule. This has serious consequences: if a node is exclusively subject to inhibitions, there is no configuration for which its value is updated to , except under the inertial majority rule. The inertial majority rule introduces a self-activation on all the nodes and, for this reason, Li et al., as well as Davidich and Bornholdt, introduced self-inhibitions on genes that are not negatively regulated otherwise [8], [10]. By thoroughly exploring the properties of simple two-node motifs, we could demonstrate the variety of behaviours induced by our stochastic extension. Its application to Li et al.'s model indicates that it can be used to propose modifications of the model: here, we have shown that to turn the biological state into an absorbing state, one needs to add specific regulatory arcs to the network. As briefly demonstrated for the cell cycle model, a systematic, efficient method to search for steady (absorbing) states should be relatively easy to implement. Moreover, this method can provide useful indications for model revision in order to ensure that a given state is absorbing. To search for other complex steady behaviours of the revised model, we have generated the corresponding transition diagram. 
Notably, we have verified that and its mirror states are the sole ergodic states. Future work will focus on a more detailed analysis of the properties of the model such as the nature of the transient dynamics, e.g. providing measures of mean return times. Extensions of the present work also include the consideration of non-zero thresholds in the majority rule. Importantly, the stochastic extension presented here applies to integer thresholds (considering integer interaction weights); indeed, real-valued thresholds avoid the case of equality in the sum of the regulatory contributions [15]. Note however that, in this case, the probabilistic alternative may be considered as a consequence of uncertainty when gene expression is too close to the theoretical threshold, especially due to local inhomogeneities of protein concentrations. Author Contributions Conceived and designed the experiments: RL CC. Performed the experiments: RL OO. Analyzed the data: RL CC. Contributed reagents/materials/analysis tools: CC RL. Wrote the paper: CC RL.
Lecture 35: Doppler Effect and The Big Bang Today I want to talk with you about the Doppler effect, and I will start with the Doppler effect of sound which many of you perhaps remember from your high school physics. If a source of sound moves towards you or if you move towards a source of sound, you hear an increase in the pitch. And if you move away from each other you hear a decrease of the pitch. Let this be the transmitter of sound and this is the receiver of sound, it could be you, your ears. And suppose this is the velocity of the transmitter and this is the velocity of the receiver. And V should be larger than 0 if the velocity is in the direction. And in the equations that follow, if it is smaller than zero it is in this direction. The frequency that the receiver will experience, will hear if you like that word, that frequency I call F prime. And F is the frequency as it is transmitted by the transmitter. And that F prime is F times the speed of sound minus V receiver divided by the speed of sound minus V of the transmitter. So this is known as the Doppler shift equation. If you have volume one of Giancoli you can look it up there as well. Suppose you are not moving at all. You are sitting still. So V receiver is 0. But I move towards you with 1 meter per second. If I move towards you then F prime will be larger than F. If I move away from you with 1 meter per second then F prime will be smaller than F. The speed of sound is 340 meters per second. So if F, which is the frequency that I will produce, is 4000 hertz, then if I move to you with 1 meter per second, which I'm going to try to do, then the frequency that you will experience is about 4012 hertz. It's up by 0.3 percent. Which is that ratio one divided by 340. And if I move away from you with 1 meter per second, then the frequency that you will hear is about 12 hertz lower. So you hear a lower pitch. About 0.3 percent lower. I have here a tuning fork. Tuning fork is 4000 hertz. 
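The numbers in this demonstration can be checked directly against the Doppler shift equation; the sign convention below (velocities counted positive in the direction from transmitter toward receiver) is one reading of the convention gestured at above:

```python
def doppler_sound(f, v_receiver, v_transmitter, v_sound=340.0):
    """Frequency heard by the receiver: f' = f (v_s - v_r) / (v_s - v_t).

    Velocities are taken positive in the direction from the transmitter
    toward the receiver (an assumed sign convention).
    """
    return f * (v_sound - v_receiver) / (v_sound - v_transmitter)

# 4000 Hz tuning fork, listener at rest, fork moved at 1 m/s:
toward = doppler_sound(4000, 0, +1)   # ~4011.8 Hz, up ~0.3 percent
away = doppler_sound(4000, 0, -1)     # ~3988.3 Hz, down ~0.3 percent

# The asymmetry at the speed of sound discussed next in the lecture:
# a receiver fleeing at 340 m/s hears nothing, while a transmitter
# fleeing at 340 m/s is heard at half the frequency.
silent = doppler_sound(4000, 340, 0)    # 0 Hz
halved = doppler_sound(4000, 0, -340)   # 2000 Hz
```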
I will bang it and I will try to move my hand towards you one meter per second roughly. That's what I calculated it roughly is. Move it away from you, towards you, away from you, as long as the sound lasts. You will hear the pitch change from 4012 to 3988. Very noticeable. Have you heard it? Who has heard clearly the Doppler shift, raise your hands, please? Chee chee chee chee it's very clear. Increased frequency and then, when I move my hand away, a lower pitch. Now you may think that it makes no difference whether I move towards you or whether you move towards me. And that is indeed true if the speeds are very small compared to the speed of sound. But it is not true anymore when we approach the speed of sound. As an example, if you move away from me with the speed of sound, you will never hear me. Because the sound will never catch up with you, and so F prime is 0. And you can indeed confirm that with this equation. But if I moved away from you with the speed of sound, for sure the sound will reach you. And the frequency that you will hear is only half of the one that I produce. So there's a huge asymmetry. Big difference whether I move or whether you move. So I now want to turn towards electromagnetic radiation. There is also a Doppler shift in electromagnetic radiation. If you see a traffic light red and you approach it with high enough speed you will experience a higher frequency and then you will see the wavelengths shorter than red and you may even think it's green. You may even go through that traffic light. To calculate the proper relation between F prime and F requires special relativity. And so I will give you the final result. F prime is the one that you receive. F is the one that is emitted by the transmitter. And we get here then 1 - beta divided by 1 + beta to the power one-half. And beta is V over C, C being the speed of light, and V being the speed, the relative speed between the transmitter and you. 
If beta is larger than 0, you are receding from each other in this equation. If beta is smaller than 0, you are approaching each other. You may wonder why we don't make a distinction now between the transmitter on the one hand, the velocity, and the receiver on the other hand. There's only one beta. Well, that is typical for special relativity. What counts is only relative motion. There is no such thing as absolute motion. The question are you moving relative to me or I relative to you is an illegal question in special relativity. What counts is only relative motion. If we are in vacuum, then lambda = C / F and so lambda prime = C / F prime. Lambda prime is now the wavelength that you receive and lambda is the wavelength that was emitted by the -- by the source. So I can substitute in here, in this F, C / lambda which is more commonly done. So this Doppler shift equation for electromagnetic radiation is more common given in terms of lambda. But of course the two are identical. And then you get now 1+ beta upstairs divided by 1- beta to the power one-half. The velocity, there if I'm completely honest with you, is the radial velocity. If you are here and here is the source of emission and if the relative velocity between the two of you were this, then it is this component, this angle is theta, this component which is V cosine theta, which we call the radial velocity, that is really the velocity which is in that equation. Police cars measure your speed with radar. They reflect the radar off your car and they measure the change in frequency as the radar is reflected. That gives a Doppler shift because of your speed and that's the way they determine the speed of your car to a very high degree of accuracy. You can imagine that in astronomy Doppler shift plays a key role. Because we can measure the radial velocities of stars relative to us. 
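The relativistic Doppler formulas quoted above translate directly into code; beta = v/c is the radial velocity over the speed of light, positive when source and observer recede from each other:

```python
import math

def doppler_light_frequency(f, beta):
    """Observed frequency: f' = f * sqrt((1 - beta) / (1 + beta))."""
    return f * math.sqrt((1 - beta) / (1 + beta))

def doppler_light_wavelength(lam, beta):
    """Observed wavelength: lambda' = lambda * sqrt((1 + beta) / (1 - beta))."""
    return lam * math.sqrt((1 + beta) / (1 - beta))

# Receding (beta > 0): longer wavelength (redshift), lower frequency.
assert doppler_light_wavelength(5000.0, 0.1) > 5000.0
assert doppler_light_frequency(4000.0, 0.1) < 4000.0
```

Note that only one beta appears, with no separate transmitter and receiver velocities: as the lecture stresses, special relativity admits only relative motion.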
Most stellar spectra show discrete frequencies, discrete wavelengths, which result from atoms and molecules in the atmosphere of the stars. Last lecture I showed you with your own gratings a neon light source and I convinced you that there were discrete frequencies and discrete wavelengths emitted by the neon. If a particular discrete wavelength, for instance in our own laboratory, were 5000 Angstroms, I look at the star, and I see that that wavelength is longer, lambda prime is larger than lambda, then I conclude -- lambda prime is larger than lambda, that means the wavelength the way I observe it is shifted towards longer wavelength, is shifted in the direction of the red, and we call that redshift. It means that we are receding from each other. If however I measure lambda prime to be smaller than lambda, so lambda prime smaller than lambda, we call that blueshift in astronomy, and it means that we are approaching each other. And so we make reference to the direction in the spectrum where the lines are moving. I can give you a simple example. I looked up for the star Delta Leporis what the redshift is. There is a line that most stars show in their spectrum which is due to calcium, it even has a particular name, I think it's called the calcium K line, but that's not so important, the name. In our own laboratory, lambda is known to a high degree of accuracy; it is 3933.664 Angstroms. We look at the star and we recognize without a doubt that that's due to calcium in the atmosphere of the star and we find that lambda prime is 1.298 Angstroms higher than lambda. So lambda prime is larger than lambda. So there is redshift and so we are receding from each other. I go to that equation. I substitute lambda prime and lambda in there and I find that beta equals +3.3 times 10 to the -4. 
The + for beta indeed confirms that we are receding, that our relative velocity is away from each other, and I find therefore that the radial velocity -- I stress it is the radial component of our velocity -- is then beta times C and that turns out to be approximately 99 kilometers per second. So I have measured now the relative velocity, radial velocity, between the star and me, and the question whether the star is moving away from me or I move away from the star is an irrelevant question, it is always the relative velocity that matters. How can I measure the wavelength shifts so accurately that we can see the difference of 1.3 angstroms out of 4000? The way that it's done is that you observe the starlight and you make a spectrum and at the same time you make a spectrum of light sources in the laboratory with well-known and well-calibrated wavelengths. Suppose there were some neon in the atmosphere of a star. Then you could compare the neon light the way we looked at it last lecture. You could compare it with the wavelength that you see from the star and you can see very, very small shifts. You make a relative measurement. So you need spectrometers with very high spectral resolution. So there was a big industry in the early twentieth century to measure these relative velocities of stars. And their speeds were typically 100, 200 kilometers per second. Not unlike the star that I just calculated for you. Some of those stars relative to us are approaching. Other stars are receding in our galaxy. But it was Slipher in the 1920s who observed the redshift of some nebulae which were believed at the time to be in our own galaxy and he found that they were -- had a very high velocity of up to 1500 kilometers per second, and they were always moving away from us. And it was found shortly after that that these nebulae were not in our own galaxy but that they were galaxies in their own right. So they were collections of about 10 billion stars just like our own galaxy. 
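The Delta Leporis numbers can be reproduced by inverting the wavelength formula: from lambda'/lambda = sqrt((1 + beta)/(1 - beta)) it follows algebraically that beta = (r^2 - 1)/(r^2 + 1), with r = lambda'/lambda:

```python
C_KM_S = 299_792.458  # speed of light in km/s

def radial_velocity(lam_obs, lam_rest):
    """Radial velocity in km/s from an observed wavelength shift.

    Inverts lambda'/lambda = sqrt((1 + beta) / (1 - beta)):
    beta = (r**2 - 1) / (r**2 + 1), with r = lambda'/lambda.
    Positive means receding (redshift).
    """
    r2 = (lam_obs / lam_rest) ** 2
    return C_KM_S * (r2 - 1) / (r2 + 1)

# Calcium K line of Delta Leporis: 3933.664 A, observed 1.298 A higher.
v = radial_velocity(3933.664 + 1.298, 3933.664)   # ~99 km/s, receding
```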
And so when you take a spectrum of those galaxies, then of course you get the average of millions and millions of stars, but that still would allow you then to calculate the redshift, the average redshift, of the galaxy, and therefore its velocity. And Hubble, the famous astronomer after whom the Hubble space telescope is named, and Humason made a very courageous attempt to measure also the distance to these galaxies. They knew the velocities. That was easy because they knew the redshifts. Distance determination in astronomy is a can of worms. And I will spare you the details about the distance determinations. But Hubble made a spectacular discovery. He found a linear relation between the velocity and the distances. And we know this as Hubble's law. And Hubble's law is that the velocity is a constant which is now named after Hubble, capital H, times D. And the modern value for H is 72 kilometers per second per megaparsec. What is a megaparsec? A megaparsec is a distance. In astronomy we don't deal with inches, we don't deal with kilometers, that is just not big enough, we deal with parsecs and megaparsecs. And one megaparsec is 3.26 times 10 to the 6 light-years. And if you want that in kilometers, it's not an unreasonable question, it's about 3.1 times 10 to the 19 kilometers. So I could calculate for a specific galaxy that I have in mind, I can calculate the distance if I know the redshift. I have a particular galaxy in mind for which lambda prime -- for which lambda prime is 1.0033 times lambda. So notice again that the wavelength that I receive is indeed longer than lambda, so there is a redshift. I go to my Doppler shift equation which is this one. I calculate beta. One equation with one unknown, can solve for beta. And I find now that V is 5000 kilometers per second. Very straightforward, nothing special, very easy calculation. But now with Hubble's law I can calculate what D is. 
Because D now is the velocity which is 5000 kilometers per second divided by that 72 and that then is approximately 69 megaparsecs. Again we have the distance if we do it in these units in megaparsecs. That's about 225 million light-years. And so the object is about 225 million light-years away from us. So it took the light 225 million years to reach us. So when you see light from this object you're looking back in time. And if you have a galaxy which is twice as far away as this one, then the velocity would be twice as high. And they're always receding relative to us. I'd like to show you now some spectra of three galaxies. Can I have the first slide, John? All right, you see here a galaxy and here you see the spectrum of that galaxy. That may not be very impressive to you. The lines that are being recognized to be due to calcium K and calcium H are these two dark lines. Some of you may not even be able to see them. And these are the comparison spectra taken in the laboratory. These lines are seen as dark lines, not as bright lines. We call them absorption lines. They are formed in the atmosphere of the star. Why they show up as dark lines and not as bright lines is not important now. I don't want to go into that. That's too much astronomy. But they are lines and that's what counts. And these lines are shifted towards the red part of the spectrum by a teeny weeny little bit. You see here this little arrow. And the conclusion then is that in this case the velocity of that galaxy is 720 miles per second which translates into 1150 kilometers per second, and so that brings this object if you believe the modern value for the Hubble constant at about 16 megaparsecs. This galaxy is substantially farther away. No surprise that it therefore also looks smaller in size, and notice that here the lines have shifted. These lines have shifted substantially further. 
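The Hubble-law distance worked out above is a one-line computation:

```python
H0 = 72.0            # Hubble constant, km/s per megaparsec
LY_PER_MPC = 3.26e6  # light-years per megaparsec

def hubble_distance_mpc(v_km_s):
    """Distance in Mpc from Hubble's law v = H * D, so D = v / H."""
    return v_km_s / H0

d_mpc = hubble_distance_mpc(5000)   # ~69 Mpc
d_ly = d_mpc * LY_PER_MPC           # ~2.3e8, about 225 million light-years
```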
And if I did my homework, using the velocity that they claim, which they can do with a high degree of accuracy because you can calculate lambda prime divided by lambda, those measurements can be made with enormous accuracy, I find that this object is about 305 megaparsecs away from us, so that's about 20 times further away than this object. So the speed is also about 20 times higher of course because there's a linear relationship. And if you look at this one which is even further away, then notice that these lines have shifted even more. The next slide shows you what I would call a Hubble diagram. It was kindly sent to me by Wendy Freedman and her coworkers. Wendy is the leader of a large team of scientists who are making observations with the Hubble space telescope. You see here distance and you see here velocity in the units that we used in class, kilometers per second. Forget this part. That's not so important. But you see the incredible linear relationship. And Wendy concluded that Hubble's constant is around 72. It could be a little lower, it could be a little higher. She goes out all the way to 400 megaparsecs with associated velocities of about 26000 kilometers per second. That's about 9% of the speed of light. So beta is about one-tenth. So for this object lambda prime / lambda would be about 1.1. With a 10% shift in the wavelength. Hubble, who published his data in the twenties, his whole data set, when he concluded that there was a linear relation, had only objects with velocities less than 1100 kilometers per second. And 1100 kilometers per second is this point here. So Hubble had only points -- there are not even any in Wendy's diagram, which are here. And he concluded courageously that there was this linear relationship. And you see it has stood the acid test. We still believe it is linear. The only difference was that Hubble's distances were very different from what we believe today. They were about 7 times smaller. 
So the Hubble constant was different for him, but the linear relationship was there. OK, that's enough for this slide. So now comes the 64 dollar question: why do all galaxies which are far away, why do they move away from us? Well, I can suggest a very simple picture to you. We are at the center of the universe and there was a huge explosion a long time ago. We refer to that explosion as the Big Bang. And since we are at the center where the explosion occurred, the galaxies which obtained the largest speed in the explosion are now the farthest away from us. Now assume that this explosion is the correct idea. Assume that there was a Big Bang. Then I can ask the question, when did it occur? I can now turn the clock back and I can do the following. I can take two objects which are a distance D apart today but which were together when the universe was born at the Big Bang. And let's assume that they have always been moving away from each other with the same velocity. Let's assume that now for simplicity. So if they always moved away from each other with the same velocity, then the distance D between them today is their velocity times the time T, which is then the age of the universe. But we also know from Hubble's law that the velocity V is H times D. And we assume for simplicity that these velocities are the same. You multiply these two equations with each other and you find immediately that the age of the universe is one over H. And that indeed has the units of time. If you take H, the one that we believe in nowadays, and you calculate 1/H, and you work in MKS units, you'll find that T is about 14 billion years -- I'll first give it to you in seconds, it's about 4.3 times 10 to the 17 seconds. And that is about 14 billion years.
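The 1/H number is easy to reproduce. A quick sketch (the SI unit conversions are mine, not from the lecture):

```python
# Age estimate T = 1/H0, with H0 = 72 km/s/Mpc converted to 1/s.
KM_PER_MPC = 3.086e19       # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7  # seconds in one year

H0 = 72.0 / KM_PER_MPC      # Hubble constant in 1/s
T = 1.0 / H0                # age of the universe under this picture, seconds
T_gyr = T / SECONDS_PER_YEAR / 1e9
print(f"T = {T:.2e} s = {T_gyr:.1f} billion years")
```

This reproduces the lecture's 4.3 × 10^17 seconds, roughly 14 billion years.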
So with this picture in mind, the universe would be about 14 billion years old. But because of the gravitational attraction of these galaxies -- they attract each other -- you may expect that the speed of the galaxies was larger in the past, and therefore the assumption that the speed doesn't change is not quite accurate, and so maybe the universe is a little younger, maybe 12 billion years or so. We know from theoretical calculations that the oldest stars in our own galaxy are about 10 billion years old. Therefore the universe cannot be younger than 10 billion years. And there is general consensus in the community that our universe is probably 12 to 14 billion years old. Now the whole issue of this deceleration as the galaxies move away from each other is at the heart of research in cosmology. And in fact it is now believed that very early on in the universe there was first acceleration, followed by deceleration, and maybe again acceleration. That is quite mysterious. Frontier research is going on in this area. At MIT we have three world experts: Professor Alan Guth, who made major contributions to this concept, to cosmology; we have Ed Bertschinger; and we have Scott Burles. If we take Hubble's law at face value, I can calculate how far the edge of our visible universe is. We call that the horizon. I can calculate what the maximum distance is that we can look. D maximum can be found by making the velocity C. So that the galaxies are moving away from us -- we are moving away from the galaxies, the galaxies are moving away from us -- with the speed of light. And so you would find then that D max is C / H. That is a distance. And you will find then, no surprise, if you use the modern number, that that distance is 14 billion light-years. We can never see beyond that. Because if V = C then beta becomes 1, and if beta becomes 1, lambda prime becomes infinitely large, you have an infinite amount of redshift, and F prime becomes 0.
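The horizon distance follows from the same arithmetic. A minimal check (constants and conversion factor are my assumptions, consistent with the values quoted above):

```python
# Horizon distance D_max = c / H0: where the recession velocity reaches c.
C_KM_S = 3.0e5          # speed of light, km/s
H0 = 72.0               # Hubble constant, km/s per megaparsec
MPC_TO_LY = 3.262e6     # light-years per megaparsec

d_max_mpc = C_KM_S / H0                     # roughly 4200 Mpc
d_max_gly = d_max_mpc * MPC_TO_LY / 1e9     # roughly 14 billion light-years
print(f"{d_max_mpc:.0f} Mpc = {d_max_gly:.1f} billion light-years")
```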
So the electromagnetic radiation has no frequency anymore, and so there's no energy anymore in the photons. So that is then the edge of our universe, of our visible universe. You can never see beyond that. So now comes a reasonable question. How far have we been able to see into the universe? And to my knowledge the record holder is a galaxy for which lambda prime / lambda is 7.56. It was published only two months ago. Now at such very large values of redshift, general relativity becomes very important. And the equation that we derived here was derived for special relativity. And so with very high values of redshift, like lambda prime / lambda of 7.56, you cannot reliably calculate the velocities using that equation. And so you cannot use that velocity then and shove it into Hubble's law and find the distance. But there is no question that that object is probably at a distance of something like 13 billion light-years. Very, very far away from us, near the edge of our universe. I will show you an object that is also believed to be near the edge of the universe. It comes up in the next slide. The distance is roughly 12 billion light-years. So when you look at that object -- there it is -- it doesn't look very impressive, but what do you expect from an object that is 12 billion light-years away from us? It's a quasar, which is a very peculiar galaxy. Its spectrum does not show these dark lines that I showed you earlier, but actually has emission lines, and the light that you see here was emitted some 12 billion years ago. And now comes the spectrum from this object in the next slide. This was published last year by Scott Anderson and his coworkers at the University of Washington in Seattle. I have collaborated with Scott on many projects. So here you see the spectrum of that quasar that you just saw. And here you see a line, an emission line, at roughly 7800 Angstroms.
And there are good reasons to believe that this, in the frame of reference of that quasar, was the Lyman alpha line which is emitted by hydrogen, at 1216 Angstroms. Now we have here 5000, 4000, 3000, 2000, 1000, so here is roughly where the wavelength lambda is, and here is lambda prime. Lambda prime is 6.41 times larger than lambda. He mentions 5.41, but that is Z, which is what astronomers in general quote; Z is lambda prime / lambda - 1, so the ratio lambda prime over lambda is 6.41. Absolutely amazing that you can make such accurate measurements, such incredibly beautiful data, and this line is all the way in the infrared -- you cannot see this with your naked eye anymore; our eyes, I think, can only see up to about 6500. So the 1216 line was in the UV, and it shifts all the way into the infrared, and this allows astronomers then to measure the value lambda prime / lambda, and there is little doubt that this object is also near the edge of our visible universe. That's enough, John, thank you. I'd like to return to the Big Bang, to the explosion some 12 or 15 billion years ago. And I'd like to raise the question, are we at the center of that explosion? Are we really at the center of our universe? That cannot be, of course. It would be an incredible arrogance. It would be too egocentric. I know that we all think very highly of ourselves, but this cannot be. We are nothing in the framework of the total universe. We cannot possibly be at the center. So how do we reconcile this now with what we observe? Imagine that you were a raisin in a raisin bread. Quite a promotion, from a human being to a raisin in a raisin bread. And I put you in an oven. And the raisin bread dough is going to expand. All raisins will see the other raisins move away from them, and the larger the distance to another raisin, the larger its speed will be. And each raisin will think that it is very special. Suppose this is you here, one raisin, and here's another raisin, and here's another raisin.
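The quasar's wavelength ratio can be verified in one line (λ' ≈ 7800 Å is read off the published spectrum as described above; the rest wavelength of Lyman alpha is a standard value):

```python
# Redshift of the quasar: z = lambda'/lambda - 1, with Lyman alpha at rest.
LYMAN_ALPHA = 1216.0    # rest wavelength of hydrogen Lyman alpha, Angstroms
lambda_obs = 7800.0     # observed wavelength of the emission line, Angstroms

ratio = lambda_obs / LYMAN_ALPHA   # lambda' / lambda, about 6.41
z = ratio - 1.0                    # the value astronomers quote, about 5.41
print(f"lambda'/lambda = {ratio:.2f}, z = {z:.2f}")
```

This recovers both numbers quoted in the lecture: the ratio 6.41 and the published z of 5.41.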
After a certain amount of time all distances have doubled. So this one is here. And this one is here. So you can immediately see that when you look at this one, its velocity is substantially lower than that one's. This one is twice as far away, so you will see twice as high a speed. But this raisin will look at this one. And it will also conclude that this raisin relative to it has a higher velocity than this raisin has relative to it. So all of them will think that they are special, and you as a raisin would come up with Hubble's law. You would conclude that the velocities of the other raisins are linearly proportional to their distances. There is an analogy which is even nicer than raisin bread, and that is the analogy with Flatlanders. A Flatlander is someone who lives in a two-dimensional world. He happens to live on the surface of a balloon. And light travels only along the surface of the balloon. So the two-dimensional world is curved in the third dimension, but the Flatlanders cannot see in the third dimension. They can only see in two dimensions. So here you have such a world. So here are the galaxies. Flat world. And the universe is curved in the third dimension, which these Flatlanders cannot see. And when you blow this balloon up, the galaxies move away from each other, and the farther the galaxies are from each other, the higher the velocity. This model works actually quite well and I want to pursue it in my next calculations. Let me first try to bring this universe to a halt. Because I don't want the universe to collapse again. I succeeded. So you can pursue this idea very nicely and you can see that the Flatlanders would draw quite amazing conclusions. Here is that balloon. The balloon has a radius R. Here is one galaxy. And here is another galaxy. And they are a distance S apart. I will call that D later. But now I want to call it S. You will see why.
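The raisin-bread argument can be sketched numerically: place raisins on a line, double every position (uniform expansion), and check that each raisin sees every other raisin recede at a speed proportional to its distance. The specific positions and time step here are arbitrary choices of mine, purely for illustration:

```python
# Uniform expansion: all positions scale by 2 over a time interval dt.
positions = [0.0, 1.0, 2.5, 5.0, 11.0]   # arbitrary raisin positions
scale, dt = 2.0, 1.0
expanded = [scale * x for x in positions]

# From ANY raisin's point of view, velocity / distance is the same constant,
# so every raisin infers a Hubble-like law v = H * d with H = (scale - 1) / dt.
H = (scale - 1.0) / dt
for i in range(len(positions)):
    for j in range(len(positions)):
        if i == j:
            continue
        d = positions[j] - positions[i]            # separation before expansion
        v = (expanded[j] - expanded[i] - d) / dt   # apparent recession speed
        assert abs(v - H * d) < 1e-12              # holds for every raisin pair
print("every raisin measures v = H * d")
```

No raisin is special: the same proportionality constant comes out regardless of which raisin does the measuring.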
A little later in time, the universe has expanded, this galaxy is here and this galaxy is here. And this distance now is R + dR, and so the distance now between the two galaxies is S + dS. And it follows immediately from the geometry that (S + dS) / S = (R + dR) / R. Simple high school geometry. I can work this out. I get SR + R dS = SR + S dR. I lose this SR. I divide by dT. dS/dT is the velocity with which these two galaxies move away from each other. That's what they would measure in their universe. So there is a V here. It's clear that S is the distance between them. I will call that D again now. So that is D. And then I have 1/R times dR/dT. So now I have V = D times 1/R dR/dT. And look at this. I have V = D times something. And that something at a given moment in time has a unique value. R of the balloon has a unique value. And dR/dT, which is the expansion velocity, also has a unique value. And so it's immediately obvious that in this universe this is Hubble's constant. And this Hubble's constant is a function of time. It is changing with time. And it's obvious that it should change in time. No reason why it shouldn't do the same in our own universe. Because R in the past was much smaller. So even if you take an expansion velocity which is constant, if R is smaller in the past, then H was larger in the past. And that is the reason why, if you ever see a quote of H to be 72 kilometers per second per megaparsec, there's always a little 0 here. And the 0 means now. The 0 means not a billion years from now and not a billion years ago. We really don't know what it was a billion years ago. Now don't carry this analogy between the 2-D balloon and our own universe too far. But it gives some interesting insights. It is suggestive of the idea that our own three-dimensional space may be curved in a fourth dimension that we cannot see.
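Written out, the balloon derivation above reads (same symbols as in the lecture):

```latex
\frac{S+dS}{S}=\frac{R+dR}{R}
\;\Longrightarrow\; SR + R\,dS = SR + S\,dR
\;\Longrightarrow\; \frac{dS}{dt} = \frac{S}{R}\,\frac{dR}{dt}
\;\Longrightarrow\; v = D \cdot \underbrace{\frac{1}{R}\frac{dR}{dt}}_{H(t)} .
```

The bracketed factor depends only on the balloon's radius and its expansion rate at that moment, which is why it plays the role of a time-dependent Hubble constant.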
This is very fascinating, and I would advise you, if you are interested in this area, to take a course in cosmology. You should also take one in general relativity. It will open a whole new world for you. And Alan Guth, Ed Bertschinger and Scott Burles are all experts in this area, and they happen to be among our best teachers. So you can't lose there. Now comes a key question, and that is: will our universe expand forever? If the universe expands forever, we call that an open universe; that's just a name. It's also possible that our universe will come to a halt. That means that H, Hubble's constant, will become 0, that everything will stand still, no relative motion anymore, which then will be followed by collapse. And so all the redshifts will then go to 0 and will turn into blueshifts. It's the same idea, the same question as when you throw up an apple: will the apple come back or will the apple not come back? It depends on the speed of the apple and on the gravitational field of the earth, and we all know that if you throw it fast enough, about 11 kilometers per second, then in the absence of atmosphere the apple would never come back. Now if only gravity played the key role in our universe, then we can do a very simple calculation. And the answer to whether or not our universe is open or closed would then depend on the average density of the universe. And when I say average density, you have to think in terms of a big scale. You don't think in terms of Cambridge. That's not representative of the average density of the universe. Nor is our solar system. Nor is our galaxy. But you have to think probably on the scale of a few hundred million parsecs. Maybe 500 megaparsecs. And so I bring you out now into the universe. Here is the universe. And these are galaxies. And here is a sphere which has a radius R, and that's on a scale of about 500 megaparsecs. So rho, the average density, is representative of the universe.
And suppose you were here -- I can take any point in the universe, there's nothing special about it -- and you see here a galaxy, and that galaxy moves away from you with a velocity V. That galaxy has a mass little m. The mass capital M inside this sphere is 4/3 pi R cubed times rho. Rho is the average density, right? Now we know from Newton that the force that this galaxy will experience is only determined by the mass inside this sphere and not by the mass outside the sphere. And so if I want to calculate whether these two objects will forever move away from each other, or whether they will fall back toward each other, then all I have to do is make the total energy 0: the sum of the kinetic energy and the potential energy must be 0. So one-half m V squared of this object must equal m M G / R. That is when the total energy is 0: we will expand forever and ever and ever and it will never come back. Little m cancels out. Capital M I can write as 4/3 pi R cubed rho. Here comes my G and here comes R. Notice that the R cubed upstairs becomes R squared. And so if I have an R squared here and I have a V squared here, remember that V / R is Hubble's constant. Because R is D, the distance between us and the galaxy. And so V squared / R squared is the Hubble constant as we measure it today, squared. And so you'll find then from this simple result that rho as it should be today -- that's why I put a little 0 there -- is 3 H0 squared divided by 8 pi G. And so this tells me that if the density, the average density of our universe, is larger than this value, then our universe will come to a halt and will collapse. And we can calculate that value. Because we know H0 -- we think we know -- and we know G, and so you will find then -- I'll write it down here -- that rho 0 is about 10 to the -26 kilograms per cubic meter. And so if rho is smaller than this amount, then we will continue to expand forever; the universe would be open.
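The 10^-26 figure follows directly from the formula just derived. A quick check (SI constants and the km/s/Mpc conversion are my additions):

```python
# Critical density today: rho_0 = 3 * H0^2 / (8 * pi * G).
import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
H0 = 72.0e3 / 3.086e22        # 72 km/s/Mpc expressed in 1/s

rho_0 = 3.0 * H0**2 / (8.0 * math.pi * G)
print(f"rho_0 = {rho_0:.1e} kg/m^3")   # about 1e-26, as quoted in the lecture
```

The result is roughly 10^-26 kilograms per cubic meter, a handful of hydrogen atoms per cubic meter.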
If the mean density right now is larger than that amount, then the expansion will come to a halt, redshifts will become blueshifts, and we will collapse again. The matter here, this matter density, doesn't have to be tomatoes or potatoes. It could be electromagnetic radiation. Because according to Einstein, E = MC squared. So any form of energy represents mass. So don't think of it necessarily as just stars and galaxies and tomatoes. It is generally believed today that the expansion of our universe will not come to a halt and collapse. But our views could change. Enormous developments have been going on in the last 10 years, and you can read about them in the New York Times. Almost every month you will read something about the enormous progress that's being made in cosmology. And of course the question of whether or not the universe will expand forever, whether it's open or whether it is closed, is something that's emotionally an important issue for us. If the universe is open and it will expand forever, then the stars will all burn out and the universe will become a cold, dead and boring place. If however the universe is closed, the expansion will come to a halt, it will collapse, and it will end up with what we call the Big Crunch, as opposed to the Big Bang. And it will be hot, there will be fireworks, it will be like the early days of the Big Bang. Temperatures of billions of degrees. I'd like to read a poem by Robert Frost which he wrote in 1920. It's called Fire and Ice. "Some say the world will end in fire, some say in ice. From what I've tasted of desire, I hold with those who favor fire. But if it had to perish twice, I think I know enough of hate to know that for destruction ice is also great and would suffice." There are many people who want our universe to be closed, probably for emotional reasons, maybe for religious reasons, maybe it's more static, maybe it's more reassuring, maybe it's more romantic. I don't know.
But if it's open, the end is not very spectacular. T.S. Eliot wrote, "This is the way the world ends, not with a bang but a whimper." Now it is conceivable that the expansion of the universe will come to a halt and that the universe will ultimately collapse. We will have a Big Crunch. And it is even conceivable that a new universe will then be born afterwards. That there will be a new Big Bang. And if the evolution of that universe were a carbon copy, an exact carbon copy, of the present universe, a few thousand billion years from now we may have a great 8.02 reunion. Same place, same time, same people. Perhaps see you then.