content
stringlengths
86
994k
meta
stringlengths
288
619
Next: Basic model construction Up: Algorithms for Inductive Learning Previous: Algorithms for Inductive Learning We begin by specifying a large collection a priori that these features are actually relevant or useful. Instead, we let the pool be as large as practically possible. Only a small subset of this collection of features will eventually be employed in our final model. In short, we would like to include in the model only a subset active features. The choice of With each f is active ( (A Bayesian would prefer to put a distribution over the possible values of As a Bernoulli variable, f is subject to the law of large numbers, which asserts that for any That is, by increasing the number (N) of examples will exhibit Searching exhaustively for this subset is hopeless for all but the most restricted problems. Even a priori fixing its size 25 active features. Viewed as a pure search problem, finding the optimal set To find f is shorthand for requiring that the set of allowable models all satisfy the equality Thus, each time a candidate feature is adjoined to 1 by a series of intersecting lines (hyperplanes, in general) in a probability simplex. Perhaps more intuitively, we could represent it by a series of nested subsets of 2. As an aside, the intractability of the ``all at once'' optimization problem is not unique to or an indictment of the exponential approach. Decision trees, for example, are typically constructed recursively using a greedy algorithm. And in designing a neural network representation of an arbitrary distribution, one typically either fixes the network topology in advance or performs a restricted search within a neighborhood of possible topologies for the optimal configuration, because the complete search over parameters and topologies is intractable. Figure 2: A nested sequence of subsets Next: Basic model construction Up: Algorithms for Inductive Learning Previous: Algorithms for Inductive Learning Adam Berger Fri Jul 5 11:43:50 EDT 1996
{"url":"http://www.cs.cmu.edu/afs/cs/user/aberger/www/html/tutorial/node12.html","timestamp":"2014-04-17T05:35:01Z","content_type":null,"content_length":"11874","record_id":"<urn:uuid:7a7b176f-ec3e-4c65-bf5b-7a540ed3dcf1>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
Rumford, RI SAT Math Tutor Find a Rumford, RI SAT Math Tutor ...I am also experienced in writing proofs. I have taught physics in high school for 17 years. My main focus is helping students understand what the problem is about and finding the best solutions for the problems. 15 Subjects: including SAT math, chemistry, algebra 2, biology ...I have several years of experience working with students on all sections of the SAT as well as math subjects such as algebra, trigonometry, geometry, and calculus. I am located in Cranston, RI and am willing to work with students in grades 7-12 within a 30 minute drive. I have a flexible schedule and am available most evenings as well as weekends. 32 Subjects: including SAT math, calculus, statistics, geometry ...Mr. C. was a Special Needs teacher for ten years. He holds a Bachelor of Science degree in Business Administration and a Master of Education degree. 31 Subjects: including SAT math, reading, English, algebra 1 ...Since then I remember playing teacher with my school friends at the time. Of course, I was the teacher and they were the students! I have gratefully been able to attend the University of Rhode Island where I will be graduating with a bachelors in Elementary Education, Theatre, and with a certification in English as a Second Language. 63 Subjects: including SAT math, English, reading, Spanish ...When I tutor, I focus on revealing the logical framework surrounding a seemingly random collection of facts. I treat my students as intellectually mature beings who shouldn't be forced to complete wave upon wave of repetitive drills. And finally, I ensure that my students understand that studyi... 20 Subjects: including SAT math, chemistry, calculus, French Related Rumford, RI Tutors Rumford, RI Accounting Tutors Rumford, RI ACT Tutors Rumford, RI Algebra Tutors Rumford, RI Algebra 2 Tutors Rumford, RI Calculus Tutors Rumford, RI Geometry Tutors Rumford, RI Math Tutors Rumford, RI Prealgebra Tutors Rumford, RI Precalculus Tutors Rumford, RI SAT Tutors Rumford, RI SAT Math Tutors Rumford, RI Science Tutors Rumford, RI Statistics Tutors Rumford, RI Trigonometry Tutors
{"url":"http://www.purplemath.com/Rumford_RI_SAT_Math_tutors.php","timestamp":"2014-04-17T07:20:10Z","content_type":null,"content_length":"23887","record_id":"<urn:uuid:4eca9df3-c336-4738-affd-284167e50fb0>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
Media Centre ACER releases results of PISA 2009+ participant economies Posted on:Friday, 16th December 2011 ACER releases results of PISA 2009+ participant economies • Costa Rica, Georgia, India (Himachal Pradesh & Tamil Nadu), Malaysia, Malta, Mauritius, Venezuela (Miranda), Moldova, United Arab Emirates • Girls significantly outperform boys in reading 16 December 2011: The Australian Council for Educational Research (ACER) this morning released the OECD Programme for International Student Assessment (PISA) 2009+ results for ten economies. PISA is an international comparative survey of 15-year-olds’ knowledge and skills in reading, mathematical and scientific literacy, conducted by ACER. It measures how well young adults have acquired the knowledge and skills that are required to function as successful members of society. Sixty-four economies originally participated in PISA 2009. Ten additional partner participants, who were unable to participate within the PISA 2009 project timeframe, participated in the PISA 2009 study on a reduced and delayed timeline in 2010. This is known as the PISA 2009+ project. The PISA 2009+ economies are: Costa Rica, Georgia, India (Himachal Pradesh & Tamil Nadu), Malaysia, Malta, Mauritius, Venezuela (Miranda), Moldova, United Arab Emirates. PISA 2009+ involved testing just over 46 000 students across these ten economies, representing a total of about 1 377 000 15-year-olds. ACER CEO, Professor Geoff Masters, said the results found that in the PISA 2009+ economies, girls significantly outperformed boys in reading (reflecting the PISA 2009 results). “Girls not only tended to attain higher reading scores than boys, they were also more aware of strategies for understanding, remembering and summarising information,” Professor Masters said. “Students who are highly aware of effective strategies for learning who also regularly read a wide range of material, tend to demonstrate better reading proficiency than those who either have a lower awareness of effective strategies or read a narrower range of materials regularly.” Professor Masters said that while school level factors account for a considerable proportion of variation in reading performance between schools, much of this is associated with socioeconomic and demographic factors. “This suggests that policies around governance, accountability, the investment of educational resources and the overall learning environment are influenced by the social and demographic intake of the school,” Professor Masters said. “Schools containing students with higher socioeconomic backgrounds, tend to be more autonomous in their decision about curriculum, make more of assessments for accountability purposes, have better student-teacher relationships, and utilise more educational resources. Students attending these schools have better educational outcomes.” The results also showed both girls and boys from the PISA 2009+ nations had results in reading, mathematical and scientific literacy that were lower than the OECD average. The results reveal the following highlights for each PISA 2009+ participants: Costa Rica • Students in Costa Rica attained an average score on the PISA reading literacy scale the same as that observed for one OECD country, Chile, and was significantly higher than that for one other country, Mexico. The average reading score for Costa Rica was statistically the same as those for Bulgaria, Malta and Serbia. • Just over two-thirds of students in Costa Rica are estimated to have a proficiency in reading literacy that is at or above the baseline needed to participate effectively and productively in life. This compares to 81% in the OECD countries, on average. • While in Costa Rica girls outperformed boys in reading, the difference was among the lowest in magnitude of all PISA 2009 and PISA 2009+ participants. • Costa Rican students attained an average score on the mathematical literacy scale below the average attained in all OECD countries. 43% of students in Costa Rica are proficient in mathematics at least to the baseline level at which they begin to demonstrate the kind of skills that enable them to use mathematics in ways that are considered fundamental for their future development. This compares to 75% in the OECD countries, on average. • Costa Rican students were estimated to have an average score on the scientific literacy scale which was significantly higher than that estimated for the lowest scoring OECD country, Mexico. 61% of students are proficient in science at least to the baseline level at which they begin to demonstrate the science competencies that will enable them to participate actively in life situations related to science and technology. This compares to 82% in the OECD countries, on average. • Georgia’s students attained an average score on the reading literacy level below the average attained in all OECD countries. Georgia’s average score was below the average attained in all OECD countries. Georgia’s average score is the same as those of Qatar, Peru and Panama. 38% of students in Georgia are estimated to have a proficiency in reading literacy that is at or above the baseline needed to participate effectively and productively in life. The majority of students therefore perform below the baseline level of proficiency in reading. • Georgia’s students attained an average score in the mathematical literacy scale below the average of all OECD nations. In Georgia, 31% of students are proficient in mathematics at least to the baseline level at which they begin to demonstrate the kind of skills that enable them to use mathematics in ways that are considered fundamental for their future development. This compares to 75% in the OECD countries, on average. In Georgia, there was no statistically significant difference in the performance of boys and girls in mathematical literacy. • Georgian students were estimated to have an average score on the scientific literacy scale below the average of all OECD countries. In Georgia, 34% of students are proficient in science at least to the baseline level at which they begin to demonstrate the science competencies that will enable them to participate actively in life situations related to science and technology. This compares to 82% in the OECD countries, on average. In Georgia, there was a statistically significant gender difference in scientific literacy, favouring girls. Himachal Pradesh-India • The average reading literacy score for Himachal Pradesh-India was the lowest average reading score observed in PISA 2009 and PISA 2009+, along with that of Kyrgyzstan. • In Himachal Pradesh-India, 11% of students are estimated to have a proficiency in reading literacy that is at or above the baseline needed to participate effectively and productively in life. It follows that 89% of students in Himachal Pradesh-India are estimated to be below this baseline level. This compares to 81% of student performing at or above the baseline level in reading in the OECD countries, on average. • In Himachal Pradesh-India, students attained an average score on the mathematical literacy scale statistically the same as observed in Tamil Nadu-India and Kyrgyzstan. 12% of students are proficient in mathematics at least to the baseline level at which they begin to demonstrate the kind of skills that enable them to use mathematics in ways that are considered fundamental for their future development. This compares to 75% in the OECD countries, on average. • Himachal Pradesh-India’s students were estimated to have an average score on the scientific literacy scale which is below the means of all OECD countries. This was the lowest average science score observed in PISA 2009 and PISA 2009+, along with that of Kyrgyzstan. • Himachal Pradesh-India’s students were estimated to have an average score on the scientific literacy scale which is below the average of all OECD countries. 11% of students are proficient in science at least to the baseline level at which they begin to demonstrate the science competencies that will enable them to participate actively in life situations related to science and technology. This compares to 82% in the OECD countries, on average. • Students in Malaysia attained an average score on the PISA reading literacy scale that was below the average attained in all OECD countries and equivalent to the average scores estimated for Brazil, Colombia, Miranda-Venezuela, Montenegro, Thailand, Trinidad and Tobago. In Malaysia, 56% of students are estimated to have a proficiency in reading literacy that is at or above the baseline needed to participate effectively and productively in life. This compares to 81% in the OECD countries, on average. • Students in Malaysia attained an average score on the mathematical literacy scale below the average attained in all OECD countries. In Malaysia, 41% of students are proficient in mathematics at least to the baseline level at which they begin to demonstrate the kind of skills that enable them to use mathematics in ways that are considered fundamental for their future development. In Malaysia, there was no statistically significant difference in the performance of boys and girls in mathematical literacy. • Malaysian students were estimated to have an average score on the scientific literacy scale that was significantly higher than that estimated for the lowest scoring OECD country, Mexico. • In Malaysia, 57% of students are proficient in science at least to the baseline level at which they begin to demonstrate the science competencies that will enable them to participate actively in life situations related to science and technology. This compares to 82% in the OECD countries, on average. • In Malaysia, there was a statistically significant gender difference in scientific literacy, favouring girls. • Malta’s students were estimated to have an average score significantly higher than for the lowest performing OECD country, Mexico. The Maltese average was statistically the same as those for Serbia, Costa Rica and Bulgaria. • In Malta, girls significantly outperformed boys and have the largest gender gap in reading across all 74 PISA 2009 and PISA 2009+ participants. • 64% of students in Malta are estimated to have a proficiency in reading literacy that is at or above the baseline needed to participate effectively and productively in life. This compares to 81% in the OECD countries, on average. Malta is notable among PISA 2009+ participants in that it has a relatively large proportion of advanced readers but also a relatively large proportion of poor and very poor readers in the population. • The Maltese students’ estimated mathematical literacy average was the same as that estimated for students from Greece, and higher than those from the OECD countries Israel, Turkey, Chile and Mexico. In Malta, 66% of students are proficient in mathematics at least to the baseline level at which they begin to demonstrate the kind of skills that enable them to use mathematics in ways that are considered fundamental for their future development. This compares to 75% in the OECD countries, on average. • In Malta, there was a statistically significant gender difference in mathematical literacy, favouring girls. • Maltese students were estimated to have an average score on the scientific literacy scale that was statistically the same those observed in the OECD countries Turkey and Israel and significantly higher than those estimated for two other OECD countries, Chile and Mexico. • In Malta, two-thirds of students are proficient in science at least to the baseline level at which they begin to demonstrate the science competencies that will enable them to participate actively in life situations related to science and technology. • In Malta, there was a statistically significant gender difference in scientific literacy, favouring girls. This was the largest gender gap in scientific literacy among all PISA 2009 and PISA 2009+ participants, along with those observed in Jordan and the United Arab Emirates. • Students in Mauritius attained an average score on the PISA reading literacy scale below the average attained in all OECD countries and equivalent to the average scores estimated for Argentina, Brazil, Colombia, Indonesia, Jordan Montenegro and Tunisia. • In Mauritius, 53% of students are estimated to have a proficiency in reading literacy that is at or above the baseline needed to participate effectively and productively in life. This compares to 81% in the OECD countries, on average. • Students in Mauritius attained an average score on the mathematical literacy scale that was the same as those observed in the two lowest performing OECD countries, Chile and Mexico. • In Mauritius, 50% of students are proficient in mathematics at least to the baseline level at which they begin to demonstrate the kind of skills that enable them to use mathematics in ways that are considered fundamental for their future development. This compares to 75% in the OECD countries, on average. • There was no statistically significant difference in Mauritius in the performance of boys and girls in mathematical literacy. • Students in Mauritius were estimated to have an average score on the scientific literacy scale which is statistically the same as that observed in the lowest scoring OECD country, Mexico. • In Mauritius, 53% of students are proficient in science at least to the baseline level at which they begin to demonstrate the science competencies that will enable them to participate actively in life situations related to science and technology. This compares to 82% in the OECD countries, on average. • There was a statistically significant gender difference in scientific literacy, favouring girls. • Students within state funded public schools and private schools within the state of Miranda, Venezuela, achieved an average score on the PISA reading literacy scale as that observed in one OECD country, Mexico. It is also equivalent to those observed in Brazil, Bulgaria, Colombia, Malaysia, Romania, Thailand, Trinidad and Tobago, the United Arab Emirates and Uruguay. • In Miranda-Venezuela, girls significantly outperformed boys in reading, but the difference was among the lowest in magnitude of all PISA 2009 and PISA 2009+ participants. • 58% of students in Miranda-Venezuela are estimated to have a proficiency in reading literacy that is at or above the baseline needed to participate effectively and productively in life. This compares to 81% in the OECD countries, on average. • Students in Miranda-Venezuela attained an average score on the mathematical literacy scale that is below the average attained in all OECD countries. In Miranda-Venezuela, 40% of students are proficient in mathematics at least to the baseline level at which they begin to demonstrate the kind of skills that enable them to use mathematics in ways that are considered fundamental for their future development. • Students in Miranda-Venezuela were estimated to have an average score on the scientific literacy scale that is statistically the same as that observed in the lowest scoring OECD country, Mexico. • In Miranda, 57% of students are proficient in science at least to the baseline level at which they begin to demonstrate the science competencies that will enable them to participate actively in life situations related to science and technology. This compares to 82% in the OECD countries, on average. • In Miranda-Venezuela, there was no statistically significant difference in the performance of boys and girls in scientific literacy. • Students in Moldova attained an average score on the PISA reading literacy scale below the average attained in all OECD countries and equivalent to the mean scores estimated for Albania, Argentina and Kazakhstan. • In Moldova, 43% of students are estimated to have a proficiency in reading literacy that is at or above the baseline needed to participate effectively and productively in life. The majority of students do not perform at the baseline level of proficiency in reading. • Students in Moldova attained an average score on the mathematical literacy scale that is below the average attained in all OECD countries. In Moldova, 39% of students are proficient in mathematics at least to the baseline level at which they begin to demonstrate the kind of skills that enable them to use mathematics in ways that are considered fundamental for their future development. This compares to 75% in the OECD countries, on average. There was no statistically significant difference in the performance of boys and girls in mathematical literacy. • Students in Moldova were estimated to have an average score on the scientific literacy scale that is statistically the same as that observed in the lowest scoring OECD country, Mexico. In Moldova, 53% of students are proficient in science at least to the baseline level at which they begin to demonstrate the science competencies that will enable them to participate actively in life situations related to science and technology. This compares to 82% in the OECD countries, on average. There was a statistically significant gender difference in scientific literacy, favouring Tamil Nadu-India • Students in Tamil Nadu-India attained an average score on the PISA reading literacy scale that is significantly higher than those for Himachal Pradesh-India and Kyrgyzstan, but lower than all other participants in PISA 2009 and PISA 2009+. • In Tamil Nadu-India, 17% of students are estimated to have a proficiency in reading literacy that is at or above the baseline needed to participate effectively and productively in life. This means that 83% of students in Tamil Nadu-India are estimated to be below this baseline level. This compares to 81% of student performing at or above the baseline level in reading in the OECD countries, on average. • Students in the Tamil Nadu-India attained a mean score on the PISA mathematical literacy scale as the same observed in Himachal Pradesh-India, Panama and Peru. This was significantly higher than the mean observed in Kyrgyzstan but lower than those of other participants in PISA 2009 and PISA 2009+. • In Tamil Nadu-India, 15% of students are proficient in mathematics at least to the baseline level at which they begin to demonstrate the kind of skills that enable them to use mathematics in ways that are considered fundamental for their future development. This compares to 75% in the OECD countries, on average. In Tamil Nadu-India, there was no statistically significant difference in the performance of boys and girls in mathematical literacy. • Students in Tamil Nadu-India were estimated to have a mean score on the scientific literacy scale, which is below the means of all OECD countries, but significantly above the mean observed in the other Indian state, Himachal Pradesh. In Tamil Nadu-India, 16% of students are proficient in science at least to the baseline level at which they begin to demonstrate the science competencies that will enable them to participate actively in life situations related to science and technology. This compares to 82% in the OECD countries, on average. In Tamil Nadu-India, there was a statistically significant gender difference in scientific literacy, favouring girls. The United Arab Emirates • Dubai participated as a separate economy in PISA 2009. The remaining emirates of the United Arab Emirates participated in PISA 2009+. Dubai’s data were merged with that of the remaining emirates and they are reported as a single entity: the United Arab Emirates. • Students in the United Arab Emirates attained an average score on the PISA reading literacy scale, the same as that observed in one OECD country, Mexico. It is also equivalent to those observed in Bulgaria, Miranda-Venezuela, Romania and Uruguay. • 60% of students in the United Arab Emirates are estimated to have a proficiency in reading literacy that is at or above the baseline needed to participate effectively and productively in life. This compares to 81% in the OECD countries, on average. • Students in the United Arab Emirates attained an average score on the PISA mathematical literacy scale that is statistically the same as those observed in the two lowest performing OECD countries, Chile and Mexico. In the United Arab Emirates, 49% of students are proficient in mathematics at least to the baseline level at which they begin to demonstrate the kind of skills that enable them to use mathematics in ways that are considered fundamental for their future development. In the United Arab Emirates, there was a statistically significant gender difference in mathematical literacy, favouring girls. • Students in the United Arab Emirates were estimated to have an average score on the scientific literacy scale that was significantly higher than that estimated for the lowest scoring OECD country, Mexico. • In the United Arab Emirates, 61% of students are proficient in science at least to the baseline level at which they begin to demonstrate the science competencies that will enable them to participate actively in life situations related to science and technology. This compares to 82% in the OECD countries, on average. • In the United Arab Emirates, there was a statistically significant gender difference in scientific literacy, favouring girls. This gender gap was the largest observed in scientific literacy among all PISA 2009 and PISA 2009+ participants, along with those observed in Jordan and Malta. To download the PISA 2009+ report, go to: https://mypisa.acer.edu.au/ Media enquiries: Petros Kosmopoulos Phone: +61 3 9277 5582 Mobile : +61 417 754 570 Email: communications@acer.edu.au Related links You might also like to read: Higher education registration supports professional learning Unfinished business: PISA shows Indigenous youth are being left behind Act now for 2015 school scholarships Increased financial support leads to fewer deferrals Latest PISA results ‘cause for concern’, says ACER
{"url":"http://www.acer.edu.au/media/acer-releases-results-of-pisa-2009-participant-economies","timestamp":"2014-04-17T07:00:45Z","content_type":null,"content_length":"42378","record_id":"<urn:uuid:48b205ae-444b-4838-99fd-8a7dd10b7fa2>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic Electrical Engineering Interview Questions 1 What is Current? Current can be defined as the motion of charge through a conducting material. The unit of current is Ampere whilst charge is measured in Coulombs. Please define Ampere The quantity of total charge that passes through an arbitrary cross section of a conduct-ing material per unit second is defined as an Ampere.” I =Q/t or Q = It where Q is the symbol of charge measured in Coulombs (C), I is the current in amperes (A) and t is the time in seconds (s). Could you measure current in parallel? No,Current is always measured through(in series with) a circuit element. What Is the difference between Voltage or Potential Difference? And What are they? Voltage or potential difference between two points in an electric circuit is1V if 1J (Joule) of energy is expended in transferring 1 C of charge between those points. It is generally represented by the symbol V and measured in volts (V). Note that the symbol and the unit of voltage are both denoted by the same letter, however, it rarely causes any confusion. The symbol V also signifies a constant voltage (DC) whereas a time-varying (AC) voltage is represented by the symbol v or v(t) Could you measure Voltage in series? No, Voltage is always measured across(in parallel with) a circuit element How many Types of Circuit Loads are there in a Common Electrical Circuit? A load generally refers to a component or a piece of equipment connected to the output of an electric circuit. In its fundamental form, the load is represented by any one or a combination of the 1. Resistor (R) 2. Inductor (L) 3. Capacitor (C) A load can either be of resistive, inductive or capacitive nature or a blend of them. For example, a light bulb is a purely resistive load where as a transformer is both inductive and resistive. A circuit load can also be referred to as a sink since it dissipates energy whereas the voltage or current supply can be termed as a source. What are the different Sign Conventions used in electric circuits? It is common to think of current as the flow of electrons. However, the standard convention is to take the flow of protons to determine the direction of the current. In a given circuit, the current direction depends on the polarity of the source voltage. Current always flow from positive (high potential) side to the negative (low potential) side of the source as shown in the schematic diagram of Figure 2.4(a) where Vs is the source voltage, VL is the voltage across the load and I is the loop current flowing in the clockwise direction. In Source current leaves from the positive terminal In Load (Sink) current enters from the positive terminal What do you mean by Passive Circuit Elements and why these are called Passive? Passive Circuit Elements: Resistor, Capacitor, Inductor State and define Ohm’s Law? It is the most fundamental law used in circuit analysis. It provides a simple formula describing the voltage-current relationship in a conducting material. The current through a conducting material is directly proportional to the voltage or potential difference across the material. I ∝ V V = RI or I =V/R or R =V/I where the constant of proportionality R is called the resistance or electrical resistance, measured in ohms (Ω). Please Define Ohm’s Law for A.C(Alternating Current)? Everything else would remain same only the resistance will be replaced with Impedance, which is defined as the opposition to the flow of A.C. What is the function of Capacitor in Electrical Circuits? A capacitor is a passive circuit element that has the capacity to store charge in an electric field. It is widely used in electric circuits in the form of a filter. Why Inductors are installed in electrical Circuits? An inductor is a piece of conducting wire generally wrapped around a core of a ferromagnetic material. Like capacitors, they are employed as filters as well but the most well known application is their use in AC transformers or power supplies that converts AC voltage levels. 3 comments: 1. thank uuuu i m expecting more 2. lush push 3. The statement of ohms law is wrong. i is proportional to v
{"url":"http://electricalpowerinterview.blogspot.com/2011/09/basic-electrical-engineering-interview.html","timestamp":"2014-04-18T09:10:24Z","content_type":null,"content_length":"97866","record_id":"<urn:uuid:6955064e-ae6e-44f4-aabe-305a14faf334>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic Trig Ratios - Circles, Tangents and Triangles March 8th 2011, 02:58 PM #1 Mar 2011 --- A sphere of radius 8 cm rests inside a conical funnel whose axis is vertical. The highest point of the sphere is 44 cm above the vertex of the cone. Determine the angle of the cone, correct to one decimal. (Answer: 25.7 degrees) --- a.) Two tangents from a point A are drawn to a circle with centre C and radius 12cm. If the tangents make an angle of 43 degrees with each other at A, then find the length of each tangent, correct to one decimal. b.) Determine the the area of quadrilateral ABCD where B and D are the points of contact of the tangents with the circle. {This part is continuation to part (a) only.} I would really appreciate full solutions. --- A sphere of radius 8 cm rests inside a conical funnel whose axis is vertical. The highest point of the sphere is 44 cm above the vertex of the cone. Determine the angle of the cone, correct to one decimal. (Answer: 25.7 degrees) make a sketch and look for a right triangle (it's there) ... you'll be using the inverse sine function to find half the vertex angle of the cone. --- a.) Two tangents from a point A are drawn to a circle with centre C and radius 12cm. If the tangents make an angle of 43 degrees with each other at A, then find the length of each tangent, correct to one decimal. b.) Determine the the area of quadrilateral ABCD where B and D are the points of contact of the tangents with the circle. {This part is continuation to part (a) only.} make a sketch ... right triangles are involved again. I ACTUALLY don't get it... Can you please help me with a bit of the solution? I have been on this for an hour! I have a test 2 days from now. I need this solution. March 8th 2011, 03:40 PM #2 March 8th 2011, 04:07 PM #3 Mar 2011 March 8th 2011, 04:52 PM #4
{"url":"http://mathhelpforum.com/trigonometry/173897-basic-trig-ratios-circles-tangents-triangles.html","timestamp":"2014-04-20T16:42:54Z","content_type":null,"content_length":"41510","record_id":"<urn:uuid:18c34b3d-805e-4c9c-9293-f3c95abcada8>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
How to create an unfair coin and prove it with math Want to make sure you win the coin toss just a little more often than you should? I certainly do, so I made some unfair coins. We’ll use the beta distribution to see just how unfair they are. While this is just a toy example problem for using the beta distribution, machine learning algorithms rely on this distribution for learning just about everything. Math is an amazing thing that way. Making the coins We’ll make our unfair coins by bending them. Our hypothesis is that the concave side will have less area to land on, and so the coin should land on it less often. Let’s get started. It’s easy to bend the coins with your teeth: WAIT! That really hurts! Using pliers or wrenches works much better: I made seven coins this way, each with a different bending angle. I did 100 flips for each coin, making sure each flip went at least a foot in the air and spun real well. “Umm… only 100 flips?” you ask, “That can’t be enough!” Just you wait until the section on the math. Here’s the raw results: │Coin│Total Flips │Heads│Tails│ │0 │ 100 │53 │47 │ │1 │ 100 │55 │45 │ │2 │ 100 │49 │51 │ │3 │ 100 │41 │59 │ │4 │ 100 │39 │61 │ │5 │ 100 │27 │73 │ │6 │ 100 │0 │100 │ Now for the math Coin flipping is a Bernoulli process. This just means that all trials (flips) can have only two outcomes (heads or tails), and each trial is independent of every other trial. What we’re interested in calculating is the expected value of a coin flip for each of our coins. That is, what is the probability it will come up heads? The obvious way to calculate this probability is simply to divide the number of heads by the total number of trials. Unfortunately, this doesn’t give us a good idea about how accurate our estimate is. Enter the beta distribution. This is a distribution over the bias of a Bernoulli process. Intuitively, this means that CDF(x) equals the probability that the expectation of a coin flip is x. In other words, we’re finding the probability that a probability is what we think it should be. That’s a convoluted definition! Some examples should make it clearer. The beta distribution takes two parameters and . is the number of heads we have flipped plus one, and is the number of tails plus one. We’ll talk about why that plus one is there in a bit, but first let’s see what the distribution actually looks like with some example parameters. In both the above cases, the distribution is centered around 0.5 because and are equal—we’ve gotten the same number of heads as we have tails. As these parameters increase, the distribution gets tighter and tighter. This should makes sense. The more flips we do, the more confident we can be that the data we’ve collected actually match the characteristics of the coin. When the parameters are not equal to each other—for example, we’ve seen twice as many heads as we have tails—then the distribution is skewed to the left or right accordingly. The peak of the PDF occurs at: That’s exactly what we said the expectation of the next coin flip should be above. Awesome! So what happens when and are one? We get the flat distribution. Basically, we haven’t flipped the coin at all yet, so we have no data about how our coin is biased, so all biases are equally likely. This is why we must add one to the number of heads and tails we have flipped to get the appropriate and . If and are less than one, we get something like this: Essentially, this means that we know our coin is very biased in one way or the other, but we don’t know which way yet! As you can imagine, such perverse parameterizations are rarely used in Hopefully, this has given you an intuitive sense for what the beta distribution looks like. But for the pedantic, here’s how the beta distribution’s pdf is formally defined: Where is the gamma function—you can think of it as being a generalization of factorials to the real numbers. That is, . Excel, many calculators, and any scientific programming package will be able to calculate that for you easily. Most of these applications will even have the beta function already built in. Applying the beta distribution to our coins We’re finally ready to see just how biased our coins actually are! Coin 0 Heads: 53 Tails: 47 Coin 1 Heads: 55 Tails: 45 Coin 2 Heads: 49 Tails: 51 Coin 3 Heads: 41 Tails: 59 Coin 4 Heads: 39 Tails: 61 Coin 5 Heads: 27 Tails: 73 Coin 6 Heads: 0 Tails: 100 Amazingly, it takes some pretty big bends to make a biased coin. It is not until coin 3, which has an almost 90 degree bend that we can say with any confidence that the coin is biased at all. People might notice if you tried to flip that coin to settle a bet! 1. This is great. I really enjoyed this post. Here is my crack at the para describing the meaning of the beta function (“Enter the beta distribution. …. “): Enter the beta distribution. Given our observation of H heads and T tails, this distribution allows us to plot how likely a given fraction of heads (or tails) is going to be. If the beta distribution is narrow, which happens when we have many observations, we can be pretty sure of where the “real” fraction of heads lies. If the beta distribution is wide (when we have few observations), our margin of uncertainty gets larger. (As a side note, I think the CDF might detract from the expostion). Any how, once again, I really enjoyed your experiment! Best wishes. Yeah, that was by far the hardest paragraph to write in the whole thing. It probably only makes sense if you already know what I’m trying to say 3. Would have been nice to mention that the beta function there is not magic — the terms involving x are proportional to the probability of flipping that many heads/tails for a particular underlying rate x, and the gamma function terms are a normalization such that, when integrated over all x, you get a net probability of 1. 4. Your expression for the pdf is slightly wrong. The denominator should be \Gamma(\alpha) \times \Gamma(\beta), \Gamma(\alpha) \plus \Gamma(\beta). 5. You say that people concerned about whether 100 flips is enough should “wait until the section on the math”. Then you find that for the mildly bent coins you don’t have enough data to determine if they are biased. I would guess that with more trials you could find a bias in coins 1 and 2. That’s probably true, but the number of trials required would be WAY more than I was willing to do. For example, if you set alpha = 2000 and beta = 2100, you still couldn’t say with 95% confidence that the coin was biased. That’s over four thousand flips. So you’re right. If the coin is only slightly biased, then 100 flips is no where near enough. But with a large enough bias, it becomes sufficient. > Our hypothesis is that the concave side will have less area to land on, and so the coin should land on it less often. The result is correct, but (for a modestly bent coin) the reasoning is not. The reason a bent coin prefers to land on its convex side is because when it strikes the surface on its edge, it tends to fall toward the convex side for simple reasons of balance and mass distribution (the center of mass is biased toward the convex side compared to the mean of the circumference). Also, while in flight, a bent coin tends to align itself in the air with its convex side down just as a falling leaf does, and for the same reason — simple aerodynamics. If an experimenter flipped a coin from a great height, most of the coins would eventually stop flipping and stabilize convex side down. The result is correct, but the reasoning is not…. I believe your explanation and the hypothesis are actually equivalent statements, at least in the mathematical sense if not the physical sense. Also, while in flight, a bent coin tends to align itself in the air with its convex side down Wouldn’t it align edge side down? Unlike the leaf, it’s rather heavy. 7. I am curious about your flipping method. did you always start on heads/tails, alternate between tosses or flip from the resulting orientation of the previous toss? it would be interesting to see if different methods produced a bias That’s a great point. I made no special effort to control the starting position of the flips. Some of the latter coins were very awkward to flip with the concave side down, so I probably flipped concave side up most of the time for these. I doubt that starting on heads/tails would make a difference, but I do think the orientation of the bend axis relative to your thumb might. For example, I would guess that flipping so that the coin spins about the bend axis would enhance the coin’s bias relative to spinning perpendicular to the spin axis. 8. How were the coins landed? Bounced? Cushioned? I think how it settles is where the determination mostly occurs, rather than in flight. A coin bent like a cardioid has to settle always the same way (approximately like coin 6). Your coins are all degrees of cardioid. They landed on a wooden table covered by a table cloth. They bounced a little, but not too much. 9. Hi Mike, Talking about unfair coins. Supose we know a coin is unfair but we don’t know how biased it is. We perform the bernoulli experiment flipping the biased coin 100 times. We get +4 standard deviations for head hits. But, with this small sample we cannot certify that heads will hit +4 sd again. We only know it is biased but we do not know how much because fluctuations can fool you easily when you are not an expert. What we also know is that the mean is not 50/100 but a higher number more than 50 of 100. How can we know the real deviation from the real mean? The number of 100-toss experiments will depend on the strentgh of the bias. Is there a way to guess or calculate the bounderies of this coin when we already know it is unfair and we have performed several tests? How many? Best regards. Thanks in advance That’s exactly what the beta distribution is for. We can’t say with 100% certainty exactly what the coin’s bias is, but we can use the beta distribution to say it has e.g. a 56% chance of having a bias greater than 52%. You would have to do many more trials after getting only 54/100 heads. That’s not enough to indicate a reasonable chance that there even is a bias. 10. So, having more trials, we could be closer to a conclusion. In my question we are 100% sure the coin is biased. What we want to know is in what bounderis the bias is(+1% to +3% or +10 to 15%) in what i work I need to identify the degree of bias to decide what to do. can it be done? >In my question we are 100% sure the coin is biased. I highly doubt that you are that sure the coin is biased. But if you insist, what you would do is to “chop off” the part of the beta distribution that goes below 50% and then renormalize. 1. Then, the degree are from 51% to whatever(80%). My intencion is to cut out the 51 to 54% and take the over 55% chance. I mean +10% the normal distribution. Is there a way to know the strentgh of the bias? I’m sorry I don’t understand what you’re trying to do, and probably won’t be able to help you. Hello, the following side might be interesting for those who liked Mike’s experiment. “You can load a die, but you can not bias a coin” is, what the authors claim. Their statement is pretty in line with Mike’s observation “Amazingly, it takes some pretty big bends to make a biased coin.” By the way, you can “load” wooden dice very easily by watering them for 24 hours.
{"url":"http://izbicki.me/blog/how-to-create-an-unfair-coin-and-prove-it-with-math","timestamp":"2014-04-19T14:29:04Z","content_type":null,"content_length":"76950","record_id":"<urn:uuid:06b13249-3ae2-46ee-9417-7126accfdcfa>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
Appearance Model Evaluation Next: Generalisation Up: A Generic Method for Previous: The Correspondence Problem Appearance Model Evaluation Our approach to model evaluation is based on measuring, directly, key properties of the model. To be effective, a model needs the ability to generate a broad range of examples of the class of images that have been modelled. We refer to this as Generalisation ability. Although this property is necessary , it is not sufficient. We also require that the model can only generate examples that are consistent with the class of images modelled. We refer to this as Specificity. We define both of these measures by comparing the distribution of training images and the distribution of images generated using the model. An overview of the approach is given in Figure 2. Any image can be considered as a point in a high-dimensional space (defined by it's intensity values). The training set forms a cloud of points in such a space. If we sample from the model, we generate a second cloud of points in this space. For an ideal model, the two clouds are coincident. We define Generalisation and Specificity in terms of the distance from each training image to the nearest model-generated image, and the distance from each model-generated image to the nearest training image. We discuss the choice of an appropriate distance metric in section 3.3. Figure 2: Hyperspace representation of the model (metric) evaluation approach [width = 0.85 ]../Graphics/hyperspace_example.png Next: Generalisation Up: A Generic Method for Previous: The Correspondence Problem Roy Schestowitz 2005-11-17
{"url":"http://schestowitz.com/Research/Papers/2006/CVPR_2006/Revision1/HTML/node5.html","timestamp":"2014-04-20T23:41:14Z","content_type":null,"content_length":"5234","record_id":"<urn:uuid:f28116d4-28d0-4d6d-b850-a67ec747ce3a>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Clean output of mathematica Replies: 0 Clean output of mathematica Posted: Dec 11, 2012 7:54 PM Dear All Following code solves the 2nd order differential equation. s = DSolve[{y''[x] == - a^2 y[x]}, y, x] t = Reduce[y[0] == 0 && y[L] == 0 && C[1] == 0 && C[2] == 2 /. s, a] These give output {{y->({x}\[Function]Subscript[c, 2] sin(a x)+Subscript[c, 1] cos(a x))}} Subscript[c, 3]\[Element]\[DoubleStruckCapitalZ]\[And]((Subscript[c, 1]\[LongEqual]0\[And]Subscript[c, 2]\[LongEqual]2\[And](L\[LongEqual]0\[Or](L!=0\[And]a\[LongEqual](2 \[Pi] Subscript[c, 3])/L))) \[Or](L!=0\[And]Subscript[c, 1]\[LongEqual]0\[And]Subscript[c, 2]\[LongEqual]2\[And]a\[LongEqual](\[Pi] (2 Subscript[c, 3]+1))/L)) Now it is possible to extract the possible eigen values of a but that is not efficient automation. I wish to write a program that effectively plots eigenvalues solution without customizing the extract command each time. Is it possible to produce a clean output like all possible eigen-values enclosed by bracket defined by just{n Pi/L}. Sure it won't be so hard to overwrite some basic files.
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2420029","timestamp":"2014-04-16T14:07:38Z","content_type":null,"content_length":"14492","record_id":"<urn:uuid:0def699c-97fb-4579-afc6-11cd740ec901>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Comment on In fact, the multiplication symbol (the middle-dot, ·) was reused for the "and" operator, and the addition symbol (+) was reused for the "or" operator, IIRC (which I may not recall correctly...). You recall correctly. Some people find it counterintuitive that 'plus' (+) should be used for 'or' because, in natural language, we often use 'plus' as an informal conjunction meaning 'and'. But... it makes perfect sense when you compare the boolean algebra operators with their counterparts in arithmetic... │A│B│A and B │A times B │ │0│0│0 │0 │ │0│1│0 │0 │ │1│0│0 │0 │ │1│1│1 │1 │ │A│B│A or B │A plus B │ │0│0│0 │0 │ │0│1│1 │1 │ │1│0│1 │ 1 │ │1│1│1 │2 │ So, addition is essentially boolean disjunction with the added feature of counting how many clauses are true. Having more than two values comes in handy. (Note that xor is the same as addition modulus 2.) The OP's query can be answered using only standard boolean algebra operators, of course. $x xor $y xor $z and not ($x and $y) would be one way. "My two cents aren't worth a dime."; Posts are HTML formatted. Put <p> </p> tags around your paragraphs. Put <code> </code> tags around your code and data! Read Where should I post X? if you're not absolutely sure you're posting in the right place. Please read these before you post! — Posts may use any of the Perl Monks Approved HTML tags: a, abbr, b, big, blockquote, br, caption, center, col, colgroup, dd, del, div, dl, dt, em, font, h1, h2, h3, h4, h5, h6, hr, i, ins, li, ol, p, pre, readmore, small, span, spoiler, strike, strong, sub, sup, table, tbody, td, tfoot, th, thead, tr, tt, u, ul, wbr Outside of code tags, you may need to use entities for some characters: For: Use: & &amp; < &lt; > &gt; [ &#91; ] &#93; Link using PerlMonks shortcuts! What shortcuts can I use for linking? See Writeup Formatting Tips and other pages linked from there for more info. Log In^? How do I use this? | Other CB clients Other Users^? Others drinking their drinks and smoking their pipes about the Monastery: (5) As of 2014-04-20 00:26 GMT Find Nodes^? Voting Booth^? April first is: Results (485 votes), past polls
{"url":"http://www.perlmonks.org/?parent=502942;node_id=3333","timestamp":"2014-04-20T00:28:34Z","content_type":null,"content_length":"20618","record_id":"<urn:uuid:f4c8a93e-e2be-40df-87f4-f5a0820a6f9d>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Tukwila, WA Calculus Tutor Find a Tukwila, WA Calculus Tutor ...I specialize in identifying the roadblocks to your success and getting you to your goal. I can also help students understand their style of learning and assist with study skills. My philosophy in tutoring is to build confidence and academic proficiency so I can release the student. 46 Subjects: including calculus, reading, English, algebra 1 ...I have over four years of experience as a tutor, working with students from the elementary through the college level. When I work with students, I am aiming for more than just good test scores - I will build confidence so that my students know that they know the material. Math is my passion, not just what I majored in. 8 Subjects: including calculus, geometry, algebra 1, algebra 2 ...Differential equations are one of the multitude of basic tools that engineers, scientists use. I've had formal training in my formal education as well as practical experience in the many techniques that are employed to solve today's real-life problems. I've been using Macintosh computers for my engineering/science work for the past 25 years. 45 Subjects: including calculus, chemistry, physics, geometry ...I have helped students at University Tutoring Service and Central Test Prep in Seattle and at Boston Global Education in Westborough, MA. I am committed to helping students gain a deep understanding of the material they are studying, not just getting through their current homework assignment or ... 18 Subjects: including calculus, geometry, GRE, algebra 1 ...Students taking my lessons will learn the material for their course and study strategies that will help them in future classes. My approach is to teach the student how to identify the nature of the problem and to recognize the appropriate way to solve it. We work through the process several times together, identifying simple, logical steps for solving each type of problem. 26 Subjects: including calculus, chemistry, physics, geometry Related Tukwila, WA Tutors Tukwila, WA Accounting Tutors Tukwila, WA ACT Tutors Tukwila, WA Algebra Tutors Tukwila, WA Algebra 2 Tutors Tukwila, WA Calculus Tutors Tukwila, WA Geometry Tutors Tukwila, WA Math Tutors Tukwila, WA Prealgebra Tutors Tukwila, WA Precalculus Tutors Tukwila, WA SAT Tutors Tukwila, WA SAT Math Tutors Tukwila, WA Science Tutors Tukwila, WA Statistics Tutors Tukwila, WA Trigonometry Tutors Nearby Cities With calculus Tutor Auburn, WA calculus Tutors Bellevue, WA calculus Tutors Bremerton calculus Tutors Burien, WA calculus Tutors Des Moines, WA calculus Tutors Federal Way calculus Tutors Issaquah calculus Tutors Kent, WA calculus Tutors Kirkland, WA calculus Tutors Newcastle, WA calculus Tutors Normandy Park, WA calculus Tutors Redmond, WA calculus Tutors Renton calculus Tutors Seatac, WA calculus Tutors Seattle calculus Tutors
{"url":"http://www.purplemath.com/Tukwila_WA_Calculus_tutors.php","timestamp":"2014-04-21T12:36:23Z","content_type":null,"content_length":"24088","record_id":"<urn:uuid:0afeefab-71eb-44f3-8fc4-96455538e159>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
Prove n is divisible by 2^m March 23rd 2008, 05:51 PM #1 Feb 2008 Prove n is divisible by 2^m Prove that if n is a positive integer such that the integer which is made up from the last m digits of n in its decimal representation is divisible by 2^m then n is divisible by 2^m Decompose $n$ into a number $k_1$ made from its last $m$ digits and the number made by its remaining digits: $n=k_2 10^{m} +k_1$ Now we are told that $2^m|k_1$, so to complete this problem therefore it is sufficient to show that $2^m|k_210^m$ I know what the problem is asking and I don't dispute the result, but I don't like how its phrased. For example, 4|100, but "00" is divisible by anything. It probably should be mentioned as a special case for the problem to be stated correctly. (Yes, I'm in a picky mood this morning.) March 23rd 2008, 11:13 PM #2 Grand Panjandrum Nov 2005 March 24th 2008, 03:54 AM #3
{"url":"http://mathhelpforum.com/number-theory/31832-prove-n-divisible-2-m.html","timestamp":"2014-04-17T13:24:59Z","content_type":null,"content_length":"37618","record_id":"<urn:uuid:6382d93a-56c8-4a7a-b2e0-108dd45761d7>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
Magnitude and Direction Force Problem Hi Dr Meow! (just got up Dr Meow I'm sorry, I didn't understand what you really meant by the resultant in this case. When you said solve for the resultant. The "resultant" was in the question that asked … Dr Meow Use the cosine and sine rules to determine the magnitude and direction of the resultant of a force of 11 kN acting at an angle of 50 degrees to the horizontal and a force of 8 kN acting at an angle of -30 degrees to the horizontal. It means the (vector) sum of the two forces I thought you knew that, because you found its magnitude. Had you forgotten what you did? btw, I didn't say "solve for the resultant", I said … Find the angle between the resultant and the 11kN force … then subtract that angle from 50º (which is the angle between the 11kN force and the horizontal) to get the angle between the resultant and the horizontal.
{"url":"http://www.physicsforums.com/showthread.php?p=2529134","timestamp":"2014-04-20T01:04:08Z","content_type":null,"content_length":"62933","record_id":"<urn:uuid:af2145ac-c530-430b-902c-436133a092a4>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
What is analog to digital converter- ADC using LM324 IC - Circuits Gallery What is analog to digital converter- ADC using LM324 IC The process of converting an analog voltage into an equivalent digital signal is known as Analog to Digital Conversion , abbreviated as . An ADC is an electronic circuit which converts its input to corresponding value.The output depends up on the coding scheme followed in the ADC circuit. For example Analog value may convert to Gray code, excess 3 code and so on. Analog to Digital converter ICs are also available to do this operation. Which reduce the circuit complexity such that a single IC capable of doing Analog to Digital Conversion. The circuit below shows a 2 bit ADC circuit using LM324 comparator IC. A potential divider network and some combinational circuits are used for making this simple ADC. LM324 best suited for Analog to Digital Converters because it has four embedded op amps, it require Vcc (5V) and ground only. No need of -Vcc like 741 op amp. Components Required for ADC 1. Resistors (1Kx4) 2. IC LM324 3. IC 7404 4. IC 7432 5. IC 7409 Design steps of Analog to Digital Converter Truth Table of ADC K-Maps for the design of ADC Circuit Diagram of ADC using LM324 Analog to Digital Converter Block Diagram Block diagram of ADC explains the basic operation and signal flow, • Analog signal fed to the parallel combination of Comparators, it will produce encoded signal corresponding to input analog signal. • The encoded signal is then applied to Digital Code Converter (a combinational circuit), that will produce binary output. Working of ADC Circuit • This is a simultaneous ADC, Simultaneous ADC is also called flash ADC and the speed of conversion is very fast. • Comparators continuously compare reference voltage at inverting terminal and analog voltage at non inverting terminal. • The reference voltage of each comparator is derived from potential divider network. Reference voltage of lower comparator : Vcc (1/4) = Vcc/4 Reference voltage of middle comparator : Vcc (2/4) = Vcc/2 Reference voltage of upper comparator : Vcc (3/4) • If the analog input exceeds the reference voltage to any comparator, that comparator turns ON. • If all the comparators are OFF, the analog input signal will be between 0 and +Vcc/4. • When lower comparator ON and others are OFF, then input must be between Vcc/4 and Vcc/2. • For input voltage between +Vcc/2 and Vcc(3/4) , Lover and middle comparators are ON. • Above Vcc (3/4), all the three comparators will ON. • Thus the input analog voltage get converted in to encoded form with 3 output bits, but actually we need binary output like 00, 01, 10, and 11. • To represent 4 states in binary, only 2 bits are needed, so we are using a digital combinational code converter circuit with 3 logic gates. Thus it is possible to get binary outputs like 00, 01, 10, and 11. What is the Resolution of ADC? • The term Resolution is used to describe the accuracy of ADC, resolution means the number of distinct values that ADC can generate over the range of analog values. • The output values are always in binary form, hence the resolution is typically expressed in bits. Consequently these outputs are the power of 2. • For example, an ADC with a resolution of 4 bits can encode an analog input to 16 different levels, since 2^4 = 16. • In our circuit we have 2 number of output bits, so the resolution is 2^2=4. • However the complexity of the useful circuit increases as resolution is increased. Components Pin out The circuit which does the revers operation that is conversion from Digital to Analog called as DAC circuit.
{"url":"http://www.circuitsgallery.com/2011/12/what-is-analog-to-digital-converter-adc_25.html","timestamp":"2014-04-16T10:33:47Z","content_type":null,"content_length":"50598","record_id":"<urn:uuid:5a9029c0-4bff-402e-b13b-dec71b0ba19a>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Signs and functoriality of tensor products up vote 9 down vote favorite Let $C,C',D,D'$ be chain complexes of $R$-modules (let's say with upper indexing, so perhaps I should call them cochain complexes, though they're not duals of anything). Let $f\in Hom^\ast(C,C')$ and $g\in Hom^*(D,D')$. Then the standard convention is that $$(f\otimes g)(x\otimes y)=(-1)^{|g||x|}f(x)\otimes g(y),$$ where $|g|$ is the degree of $g$ and $|x|$ is the degree of $x$. As observed on page 171 of Dold, this is consistent with having a degree 0 chain map $$Hom^\ast(C,C')\otimes Hom^\ast(D,D')\to Hom^\ast(C\otimes C',D\otimes D').$$ What bothers me, though, is that this formula forces $$(h\otimes k)\circ(f\otimes g)=(-1)^{|k||f|}hf\otimes kg,$$ which seems to violate the definition of a bifunctor as given, for example, on page 17 of Kashiwara and Schapira's "Categories and Sheaves", which would seem to require (adapting the notation) $$(1_{C'}\otimes g)(f\otimes 1_D)=(f\otimes 1_{D'})(1_C\otimes g).$$ (Here I suppose we assume that the relevant categories are the category of chain complexes of $R$ modules with $Mor(X,Y)=Hom(X,Y)$ (certainly such things can be composed functorially and the identity behaves properly) and the products of this category with itself). If I'm reading it correctly, this requirement in Kashiwara-Schapira seems to be the same as what Mac Lane is asking for in Proposition II.3.1 of "Categories for the Working Mathematician". So are we to believe $\otimes$ is not a functor or is there a way to reformulate all of this to be consistent (or am I just getting something wrong)? Thanks in advance! at.algebraic-topology homological-algebra ct.category-theory add comment 1 Answer active oldest votes There are two options. If you just want an ordinary category of cochain complexes, then you have to take the morphisms to be cochain maps of degree zero. In that context we have $(-1)^{|k ||f|}=1$ so there is no problem. Alternatively, you can have an enriched category of cochain complexes. In more detail, for any symmetric monoidal category $(\mathcal{V},\otimes)$ there is a theory of $\mathcal{V} $-enriched categories. Such a thing has a class of objects, and for each pair of objects $X$ and $Y$, it has an object $\text{Hom}(X,Y)\in\mathcal{V}$. Given a third object $Z$ there is up vote 14 also a composition morphism $c:\text{Hom}(Y,Z)\otimes\text{Hom}(X,Y)\to\text{Hom}(X,Z)$, subject to some obvious axioms. The symmetric monoidal structure on $\mathcal{V}$ includes natural down vote twist isomorphisms $\tau_{PQ}:P\otimes Q\to Q\otimes P$ for all $P,Q\in\mathcal{V}$. When formulating the definition of a bifunctor in an enriched context, you find that you need to use accepted the morphisms $\tau_{PQ}$ in various places. In the case of interest, we can regard cochain complexes as a category enriched over graded abelian groups. The sign $(-1)^{|k||f|}$ is provided automatically by the relevant twist maps, so the tensor product becomes a bifunctor in the enriched sense. Thanks, that makes sense that the degrees would have to be part of some extra structure somewhere. – Greg Friedman Jul 8 '11 at 7:03 add comment Not the answer you're looking for? Browse other questions tagged at.algebraic-topology homological-algebra ct.category-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/69582/signs-and-functoriality-of-tensor-products?sort=newest","timestamp":"2014-04-20T08:53:13Z","content_type":null,"content_length":"53204","record_id":"<urn:uuid:851208dd-01c2-4a4f-9bb9-14f5d4b45b91>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
Rectangular arrangements I had a version of that ready to post, and then my internet broke temporarily. Grr. Your version's better, so it's probably for the best, but still... grr. Edit: I got thinking about it for a bit, and I believe that if a number splits into prime factors of the form a^b*c^d*e^f*..., then it will have in total (b+1)*(d+1)*(f+1)*... factors. As each of these factors has a partner that it can multiply with to produce the original, then you would just divide the result of the above formula by 2 to find the number of arrangements. The exception to this is when the original number is a square, because then one of its factors would group with itself to produce the original. Luckily, square numbers, and only square numbers, produce an odd number of factors, so when you try to find the amount of arrangements using the above method you will get a remainder and so when this happens you will know to just round up. Disclaimer: All of that stuff was just from me thinking and I didn't actually calculate anything, so I can't vouch for its accuracy. I think its right though.
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=1811","timestamp":"2014-04-20T18:29:40Z","content_type":null,"content_length":"12483","record_id":"<urn:uuid:3404b927-e691-4db2-8420-e0fee2dcea56>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
On the Exact Solution of a Generalized Polya Process Advances in Mathematical Physics Volume 2010 (2010), Article ID 504267, 12 pages Research Article On the Exact Solution of a Generalized Polya Process Department of Risk Engineering, Faculty of Systems and Information Engineering, University of Tsukuba, Tsukuba, Ibaraki 305-8573, Japan Received 1 August 2010; Accepted 10 October 2010 Academic Editor: Pierluigi Contucci Copyright © 2010 Hidetoshi Konno. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. There are two types of master equations in describing nonequilibrium phenomena with memory effect: (i) the memory function type and (ii) the nonstationary type. A generalized Polya process is studied within the framework of a non-stationary type master equation approach. For a transition-rate with an arbitrary time-dependent relaxation function, the exact solution of a generalized Polya process is obtained. The characteristic features of temporal variation of the solution are displayed for some typical time-dependent relaxation functions reflecting memory in the systems. 1. Introduction The generalized master equation of memory function type [1] is a useful basis for analyzing non-equilibrium phenomena in open systems as where the kernel is conventionally assumed to have the product of a memory function with a transition rate as . The transition rate has the constraint with . This generalized master equation approach corresponds to the generalized Langevin equation of the memory function type [2, 3]. One can see many successful applications with long memory along the line of traditional formulation [1]. Looking around recent studies in complex open systems, there is an alternative approach based on a generalized non-stationary master equation [4] as The master equation in this form corresponds to the generalized Langevin equation of the convolutionless type, which is derived with the aid of projection operator method by Tokuyama and Mori [5]. The time-dependent coefficient may be written in the following form: . It is expected from the projection operator method [5] that the time-dependent function reflects the memory effect from varying environment in a different way associated with the memory function (cf. also Hänggi and Talkner [6]). The memory function (MF) formalism has been utilized in anomalous diffusion like Lévy type diffusion in atmospheric pollution, diffusion impurities in amorphous materials, and so on. The alternative convolution-less, non-stationary (NS) formalism gives only a small number of applications. The paper intends to exhibit a potential ability of the NS formalism by taking an arbitrary time-dependent function which is representing memory effect. The paper is organized as follows. Section 2 reviews the non-stationary Poisson process. Section 3 shows a generalized Polya process wherein it is involved a generalized non-stationary transition rate with an arbitrary function of time. The exact solution and the expression mean and variance are displayed as a function of . Some important remarks are given for a generalized non-stationary Yule-Furrey process with . Section 4 discusses (i) the solvability condition of the generalized Polya model and (ii) the relation to the memory function approach. The last section is devoted to concluding remarks. 2. Nonstationary Poisson Process The simplest example of the generalized master equation in the form of (1.2) is a non-stationary Poisson process (an inhomogeneous Poisson process) described by where is the time-dependent rate of occurrence of an event . The function is an arbitrary function of time. The solution is readily obtained, with the aid of the generating function, in the following form: where . It is easy to show that the mean and the variance take the same value . Namely, the Fano factor takes 1 for any time-dependent function . The process gives rise to only the Poissonian (P) statistics at any time. Three typical examples of are shown in Table 1. All of them are relaxation functions as the time goes to infinity. It is shown in the same table that is the increasing function as the time goes to infinity. The temporal development of the probability density in (2.2) for these three examples is depicted in Figure 1. In seismology, (Ohmori formula) [7] is frequently used in analyzing and predicting aftershocks. Many applications are also found in environmental, insurance, and financial problems [8]. Further, various engineering problems involve many potential applications especially in the probabilistic risk analysis [9]. However, the applicability of the non-stationary Poisson process is quite limited since . 3. Generalized Polya Process 3.1. Model Equation Now let us consider a generalized Polya process within the class of generalized birth processes where takes into account the -dependence up to the first order and a memory effect with an arbitrary relaxation function as When and , the model reduces to a Polya process [10]. When , the model reduces to an extended Polya process [11]. 3.2. Exact Solution The method of characteristic curves is used to get the exact solution under the initial condition (cf., the recursion method with variable transformations [11]). The generating function is defined by . The equation for corresponding to (3.1) becomes From the initial condition, one obtains . To eliminate the second term in the right hand side of (3.4), let us assume that when , one obtains . Without the loss of generality, . So the equation for becomes Then, a variable transformation, leads (3.6) to the simple wave equation, The solution of the wave equation in (3.8) is given by where . From the initial condition , one obtains Therefore, is expressed as Thus, we have where and . When , the exact analytic expression of the probability density function is given by 3.3. Mean and Variance The probability density function in (3.15) is the Pascal distribution (the negative binomial distribution) with the parameters with and . Thus, the mean and the variance are obtained in the following form: The variance is generally greater than the mean, that is, the Fano factor is larger than 1 as follows: It is shown that the generalized Polya process with and is subjected to the super-Poissonian (SUPP) statistics (). Three examples of the relaxation function are given in Table 2. They are decreasing function (i.e., ) as the time goes infinity. In the case of an exponential relaxation (i), the Fano factor becomes a double exponential function as shown in the table. In the case of inverse power function (ii), the Fano factor takes the form in the time region : (a) subdiffusion for and (b) superdiffusion for . On the other hand, in the case of the power relaxation (iii), the Fano factor becomes the fractional power exponential function of time. To understand the feature of temporal variation, numerical examples are depicted in Figures 2(a) and 2(b) as well as Figures 3(a) and 3(b). 3.4. Nonstationary Yule-Furrey Process When , in (3.3). So one must omit (3.2) (i.e., one must redefine the range of variation) for the case of a generalized non-stationary Yule-Furrey process as follows: The solution of under the initial condition becomes The corresponding probability density is obtained as This is the geometric distribution with the parameter , which is a special case of the Pascal (the negative binomial) distribution in (3.16). The mean and the variance are obtained as The Fano factor becomes This means that the nature of statistics (Sub-Poissonian (SUBP, ), Poissonian (P, ), and Super-Poissonian (SUPP, )) changes depending on the functional form of and its parameter values involved. It is important to make notice of the fact that the variability of changes in the two cases for and . They are summarized in Table 3. In Table 3, , , and are defined by The temporal development of the probability density for the generalized Polya process in (3.15) and the generalized Yule-Furry process in (3.21) for these three examples is depicted in Figures 4 and 4. Discussions 4.1. Solvability Condition We have studied the generalized Polya process with the transition rate in (3.3). How is the solvability if the transition rate is a more general one than that of (3.3) as with and being arbitrary functions with time, how is the solvability. In this case, the exact analytic solution is not obtained. The solvability condition is equivalent to the fact that is written in the form of (3.5); . For the transition rate in (4.1), the time-independent function reduces to This means that and must have the same time-dependent scaling function with and (i.e., in (3.3)) to get the exact analytic 4.2. Master Equation in Memory Function Formalism An alternative master equation in the memory function (MF) formalism for the generalized Polya process in (3.1) and (3.2) may be written as where and are constants ( and ). The Laplace transform of the memory function is defined by . For , the recursion relation is obtained for the Laplace transform of as The general formal solution is given under the initial condition in terms of the inverse Laplace transform as When the memory function or the pausing time distribution is given (i.e., the Laplace transform of is related to as ), the probability density in (4.5) can be evaluated numerically. The explicit analytic expressions are obtained only for a few special cases [1] with . The two formalisms have different features complement with each other (cf. Montroll and Shlesinger [1], Tokuyama and Mori [5], and Hänggi and Talkner [6]). 5. Concluding Remarks In this paper, it is shown that there are two types of generalized master equation: (i) the memory function (MF) formalism in (1.1) and (ii) the convolution-less, non-stationary (NS) formalism in ( 1.2). Then, we propose a new model in the NS formalism: a generalized Polya process in (3.1) and (3.2) with the transition rate having an arbitrary time-varying function in (3.3). Further, we exhibit the exact analytic solutions of the probability density and the mean and variance for an arbitrary function of time. For some typical examples of , the temporal variations of the mean and the variance are numerically exhibited. There are many potential applications of the master equation in the NS formalism to non-equilibrium phenomena. In biological systems, the human EEG response to light flashes [12] (i.e., microscopic molecular transport associated with transient visual evoked potential (VEP)) and the transition phenomenon from spiral wave to spiral turbulence in human heart [13] can be formulated by the master equation in the NS formalism. In considering a stochastic model of infectious disease like a stochastic SIR model [14], the introduction of temporal variation of infection rate on account of various environmental changes leads to the master equation in the NS formalism. In auditory-nerve spike trains, there are interesting observations [15, 16] that (i) the Fano factor exhibits temporal variation in the intermediate time region and (ii) also shows fractional power dependence in the time region ( = noninteger number). In the generalized Polya process, time variation of the Fano factor (i.e., SUBP(), P (), and SUPP ()) changes depending on the choice of the relaxation function and the values of and (cf. Tables 2 and 3). The related discussions in detail will be reported elsewhere. This work is partially supported by the JSPS, no. 16500169 and no. 20500251. 1. “On the wonderful world of random walks,” in Nonequilibrium Phenomena II: From Stochastic to Hydrodynamics, E. W. Montroll and M. F. Shlesinger, Eds.J. L. Lebowitz and E. W. Montroll, Eds., chapter 1, pp. 1–121, North-Holland, Amsterdam, The Netherlands, 1984. View at Zentralblatt MATH 2. R. Kubo, “The fluctuation-dissipation theorem,” Reports on Progress in Physics, vol. 29, no. 1, pp. 255–284, 1966. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 3. H. Mori, “A continued-fraction representation of the time-correlation functions,” Progress of Theoretical Physics, vol. 34, pp. 399–416, 1965. View at Publisher · View at Google Scholar 4. R. Kubo, “Stochastic Liouville equations,” Journal of Mathematical Physics, vol. 4, pp. 174–183, 1963. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 5. M. Tokuyama and H. Mori, “Statistical-mechanical theory of random frequency modulations and generalized Brownian motions,” Progress of Theoretical Physics, vol. 55, no. 2, pp. 411–429, 1976. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 6. P. Hänggi and P. Talkner, “On the equivalence of time-convolutionless master equations and generalized Langevin equations,” Physics Letters A, vol. 68, no. 1, pp. 9–11, 1978. View at Publisher · View at Google Scholar 7. Y. Ogata, “Statistical models for earthquake occurrences and residual analysis for point processes,” Journal of the American Statistical Association, vol. 83, pp. 9–27, 1988. View at Publisher · View at Google Scholar 8. R. L. Smith, “Statistics of extremes, with applications in environment, insurance and finance, in extremevalue in finance,” in Telecommunications and the Environment, B. Finkenstadt and H. Rootzen, Eds., Chapman & Hall/CRC, London, UK, 2003. 9. T. Bedford and R. Cooke, Probabilistic Risk Analysis, Cambridge University Press, New York, NY, USA, 2001. 10. W. Feller, Introduction to Probability Theory and Its Applications, vol. 1&2, John Wiley & Sons, New York, NY, USA, 1967. 11. H. Konno, “The stochastic process of non-linear random vibration. Reactor-noise analysis of hump phenomena in a time domain,” Annals of Nuclear Energy, vol. 13, no. 4, pp. 185–201, 1986. View at Publisher · View at Google Scholar 12. D. Regan, Human Brain Electrophysiology, Elsevier, New York, NY, USA, 1989. 13. K. H. W. J. ten Tusscher and A. V. Panfilov, “Alternans and spiral breakup in a human ventricular tissue model,” American Journal of Physiology, vol. 291, no. 3, pp. H1088–H1100, 2006. View at Publisher · View at Google Scholar 14. O. Diekmann and J. A. P. Heesterbeek, Mathematical Epidemiology of Infectious Diseases, Wiley Series in Mathematical and Computational Biology, John Wiley & Sons, Chichester, UK, 2000. 15. S. B. Lowen and M. C. Teich, “The periodogram and Allan variance reveal fractal exponents greater than unity in auditory-nerve spike trains,” Journal of the Acoustical Society of America, vol. 99, no. 6, pp. 3585–3591, 1996. View at Publisher · View at Google Scholar 16. G. Buzsáki, Rhythms of the Brain, Oxford University Press, Oxford, UK, 2006. View at Publisher · View at Google Scholar
{"url":"http://www.hindawi.com/journals/amp/2010/504267/","timestamp":"2014-04-17T17:44:08Z","content_type":null,"content_length":"350334","record_id":"<urn:uuid:960629a7-745c-4708-b2ec-1492ab2a8d6e>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
Correct ground state electronic configuration chromium atom, Chemistry The correct ground state electronic configuration of chromium atom is: (1) [Ar] 3d^5 4s^1 (2)[Ar]3d^4 4s^2 (3) [AR]3d^6 4s^0 (4)[Ar]4d^5 4s^1 Ans: [Ar] 3d^5 4s^1 Posted Date: 5/20/2013 2:17:06 AM | Location : United States Your posts are moderated If magnetic quantum number of a given atom represented by -3, then what will be its principal quantum number: (1) 2 (2) 3 (3) 4 (4 What will happen if you mix nitrogen and hydrogen gas at room temperature? Explain your reasoning. Calculate the mole fraction of NH 3 that would form theoretically at 298.15 The valency of phosphorus in H3 PO 4 is: (1) 2 (2) 5 (3) 4 (4) 1 Ans: 5 Which of the following is ideal for Oxidation of propene to propan-1-ol a)Alkaline KMnO 4 b)B 2 H 6 /Alkaline KMnO 4 c)O 3 /Zn dust d)CrO 3 Ans) B 2 H 6 /Alkaline KMno4 is th It is important to differentiate the stoichiometric or empirical formula of a molecular substance from its molecular formula. The former expresses only the relative numbers of atom Q. Compare pot furnace and tank furnace used in manufacture of glass. Ans. S.NO. Pot furnace Tank furnace 1. Used f Q. How do we measure food texture? Food texture can be evaluated by mechanical test or instrumental methods. This is called as objective evaluation. When we use the human sens This is an example of pinacone pinacone rearrangemnt acetone with Mg and H2O forms 2,3 Dimethylbutane2.3 diol which on reacting with H+ from forms first a carbocation on carbon 2 Ionization energy of group: Why does Ionization energy of group IIA elements decrease from Be to Ra? A. As the atomic size increases from Be to Ra, the attraction between what are polyfunctional compounds?expalin the nomenclature of polyfunctional organic compounds with examples
{"url":"http://www.expertsmind.com/questions/correct-ground-state-electronic-configuration-chromium-atom-30168731.aspx","timestamp":"2014-04-18T05:29:54Z","content_type":null,"content_length":"29471","record_id":"<urn:uuid:3f5c8b48-a1e5-405c-a89f-1aa1372ccda3>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
Use this interactive chart to figure out how many alien worlds exist across the entire universe BBC Future, with the help of iib Studio, has put together a very cool interactive calculator that takes the Drake equation to the next level. Typically, the equation is used to estimate how many radio-transmitting civilizations could currently exist in the Milky Way Galaxy — but by adding just one extra variable, it lets you take the entire universe into account. The Drake equation is dependant on the entry of key variables — things like the number of stars in the galaxy, the number of habitable planets per solar system, the length of time a radio-emitting civilization could last, and so on. But because we don't know the answer to many (if not all) of these questions, the equation is used to generate a rough estimate based on our best guesses. The BBC calculator lets you add an extra variable to the equation — the estimated number of galaxies in the universe — so you can come up with a number of intelligent civilizations in the entire cosmos. Which also means that anyone can enter their own values to make a calculation. And because the BBC chart is completely interactive, you can determine just how many — or how painfully few — intelligent civilizations currently reside in the depths of space. So, how optimistic or pessimistic are you? Check out the calculator and let us know what you came up with in the comments. Images via BBC and Space-Wise. 1 61Reply
{"url":"http://io9.com/5936890/use-this-interactive-chart-to-figure-out-how-many-alien-worlds-exist-across-the-entire-universe?tag=seti","timestamp":"2014-04-18T21:24:49Z","content_type":null,"content_length":"85967","record_id":"<urn:uuid:7340eeaa-3251-4d19-a323-56acfb3b2e9b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
Hastings On Hudson Algebra 1 Tutor ...Building on the concepts from algebra 1 and the knowledge of shapes and solids, algebra 2 will teach your child how to apply the rules of the spatial world to the coordinate plane. When applying the idea of secants and tangents from circles to a graph of a function, students might need guidance ... 8 Subjects: including algebra 1, calculus, geometry, algebra 2 ...He is now tutoring Westchester and Connecticut students in math, chemistry and physics. Typically Ken explains the core principle or subject clearly, often with an example. He then lets the students talk to gauge their initial understanding before working through specific problems. 12 Subjects: including algebra 1, chemistry, physics, calculus ...I graduated from SUNY Oswego with a degree in childhood education (grades 1-6) with a concentration in mathematics. I am currently attending Lehman College for my master's in mathematics education (grades 5-9). After I finish my master's, I plan on getting grades 7-12 extension. I have been tutoring for about five years. 5 Subjects: including algebra 1, geometry, SAT math, elementary math ...My aim in life is to be a successful chemical engineer. Tutoring has always been an interesting but a little challenging task for me. But I always make sure that my students are fully confident with the subjects. 9 Subjects: including algebra 1, calculus, algebra 2, organic chemistry ...I am a very patient person with a positive attitude. I want to not only help students on their current math challenges but inspire them for the future. I want this to be a low-stress environment, but a successful one. 11 Subjects: including algebra 1, algebra 2, precalculus, grammar
{"url":"http://www.purplemath.com/hastings_on_hudson_ny_algebra_1_tutors.php","timestamp":"2014-04-20T06:41:17Z","content_type":null,"content_length":"24399","record_id":"<urn:uuid:fd244ac9-a7d1-4654-bce4-1299bbff37e7>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
matrix -- proving. Re: matrix -- proving need help!! Originally Posted by please give me some hints to solve this question. thanks so much Is this part of an assignment that counts towards your final grade? Re: matrix -- proving need help!! Originally Posted by mr fantastic Is this part of an assignment that counts towards your final grade? Looks like it to me: http://www.math.uwo.ca/~nlemire/2120/hw2.pdf PAGE 2 Due in a couple of days!
{"url":"http://mathhelpforum.com/advanced-algebra/188677-matrix-proving-print.html","timestamp":"2014-04-19T23:19:54Z","content_type":null,"content_length":"7053","record_id":"<urn:uuid:568d6da0-0503-4bad-a27d-e02bd0e2406e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic Joint Distribution Question June 5th 2012, 05:24 PM #1 May 2012 Basic Joint Distribution Question G'day! I have the answer to this question but I do NOT understand it. Any help much appreciated... Three Players play 10 independent rounds of a game. Each player has probability $\frac{1}{3}$ of winning each round. Find the joint distribution of the numbers of games won by each of the players. I am having immense difficulty with this joint probability stuff. if $f_{XYZ} (x,y,z) = P_{XYZ} (x,y,z)$ I assume then that, for example $P_{XYZ} (x,y,z) = P_{XYZ} (1,0,0)$ means the probability of player x winning 1 game and players y,z winning no games? This then could only account for one round? I am totally lost!!! Re: Basic Joint Distribution Question The question is asking you to find p(x,y,z), where x+y+z = 10 and x is the number of games won by the first player, y the number won by the second player, and z the number own by the third player. For any other values of x,y,z (not summing to 10), the probability is zero. June 9th 2012, 01:30 PM #2
{"url":"http://mathhelpforum.com/statistics/199689-basic-joint-distribution-question.html","timestamp":"2014-04-16T19:10:49Z","content_type":null,"content_length":"33366","record_id":"<urn:uuid:e66a9ca0-9a20-4715-9839-39a122130944>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
Expression evaluator : using RPN This article will demonstrate how to evaluate complex mathematical expressions by converting them from infix notation to postfix notation and evaluating the expression. In the process we will be using STL's stack and string classes. When finished, the program should be able to evaluate expressions such as: expr = "(12232+(43*43-(250/(3*8))*44)/12-311) * (5==5)" Most programming languages require that you enter expressions in infix notation that is: operators and operands are intermixed, example: "5*6-3*2*3". The postfix notation, introduced in 1950 by the Polish logician Jan Lukasiewicz, is a method of representing an expression without using parenthesis and still conserving the precedence rules of the original expression. For example, the previous expression could have been written like: "5 6 * 3 2 * 3 * -" Explaining the code Here's is how to convert from an infix notation to postfix notation: 1. Initialize an empty stack (string stack), prepare input infix expression and clear RPN string 2. Repeat until we reach end of infix expression I. Get token (operand or operator); skip white spaces II. If token is: a. Left parenthesis: Push it into stack b. Right parenthesis: Keep popping from the stack and appending to RPN string until we reach the left parenthesis. If stack becomes empty and we didn't reach the left parenthesis then break out with error "Unbalanced parenthesis" c. Operator: If stack is empty or operator has a higher precedence than the top of the stack then push operator into stack. Else if operator has lower precedence then we keep popping and appending to RPN string, this is repeated until operator in stack has lower precedence than the current operator. d. An operand: we simply append it to RPN string. III. When the infix expression is finished, we start popping off the stack and appending to RPN string till stack becomes empty. Now evaluating a postfix (RPN) expression is even easier: 1. Initialize stack (integer stack) for storing results, prepare input postfix (or RPN) expression. 2. Start scanning from left to right till we reach end of RPN expression 3. Get token, if token is: I. An operator: a. Get top of stack and store into variable op2; Pop the stack b. Get top of stack and store into variable op1; Pop the stack c. Do the operation expression in operator on both op1 and op2 d. Push the result into the stack II. An operand: stack its numerical representation into our numerical stack. 4. At the end of the RPN expression, the stack should only have one value and that should be the result and can be retrieved from the top of the stack. To use the code: #include <iostream.h> #include <string> #include "ExpressionEvaluator.h" using std::string; int main() long result; double resultdbl; int err; string s; s = "1+2*(1-2-3-4)"; err = ExpressionEvaluator::calculateLong(s, result); if (err != ExpressionEvaluator::eval_ok) cout << "Error while evaluating!" << endl; cout << "Evaluation of (int):" << s.c_str() << " yielded: " << result << endl; s = "1.1/5.5+99-(4.1*(2+1)-5)"; err = ExpressionEvaluator::calculateDouble(s, resultdbl); if (err != ExpressionEvaluator::eval_ok) cout << "Error while evaluating!" << endl; cout << "Evaluation of (double):" << s.c_str() << " yielded: " << resultdbl << endl; return 0; Extending the code This code can be extended to allow you perform other operations, however they must be binary operation (takes two operands). To extended the code simply add a new operator into the "operators" array along with its precedence value. If you introduce a new symbol make sure you add the symbol into the "operators[0]" string too. Precedence is important for generating a proper postfix expression. After adding a new operator, define its behaviour in the "evaluateRPN" function as: if (token == "PUT YOUR OPERATOR SYMBOL HERE") r = doMyOperation(op1, op2); Hope you find this code and article useful. • Sunday, November 2, 2003 • Monday, November 3, 2003 □ Fixed precedence rule of multiplication • Tuesday, November 4, 2003 □ Fixed a bug in isOperator() □ Added support for negative and positive numbers as: -1 or +1 (initially they were supported as: 0-1 or 0+1) □ Added exception handling and foolproof against malformed expression □ Added >=, <=, != operators
{"url":"http://www.codeproject.com/Articles/5346/Expression-evaluator-using-RPN?msg=917810","timestamp":"2014-04-23T21:44:49Z","content_type":null,"content_length":"123346","record_id":"<urn:uuid:5a8b9830-3137-4779-bd0a-ee592fd4989e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: i really need help with a project. all i need is 1 example for each of the following: •Greatest Common Factors (GCF) •Special Products (Difference of Squares and Perfect Square Trinomials) •Factoring Trinomials •Factoring by Grouping (four-term polynomials and trinomials) •Sum and Difference of Cubes • one year ago • one year ago Best Response You've already chosen the best response. here for GCF look at this pic |dw:1360964062476:dw| Best Response You've already chosen the best response. for the trinomials i found a tht pic for example Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/511ea90fe4b03d9dd0c4c24d","timestamp":"2014-04-18T00:33:23Z","content_type":null,"content_length":"41273","record_id":"<urn:uuid:d033495c-6440-4987-94aa-3705e755b6f6>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Convergence/divergence of an integral May 26th 2008, 12:06 PM [SOLVED] Convergence/divergence of an integral Hi again ! This time, I'm really struggling with a problem... Let $C_0(\mathbb{R}^+)$ be the set of functions defined over $\mathbb{R}^+$, continuous, with real values. $f \in C_0(\mathbb{R}^+)$. Let F be the antiderivative of f that annulates in 0. Let E be the subset of functions f in $C_0(\mathbb{R}^+)$ such that $I(f)=\int_0^\infty \frac{F(t)}{(1+t)^2} \ dt$ converges. Previous questions... 1/ Determine the positive functions f in E such that $I(f)=0$. It's ok, f=0. 2/ Let f be a positive function of $C_0(\mathbb{R}^+)$ Show : $\int_0^\infty \frac{f(t)}{1+t} \ dt \text{ converges } \Longleftrightarrow f \in E$ It's ok for this. Problem : Find an example for f (necessarily not of constant sign) in E and such that $\int_0^\infty \frac{f(t)}{1+t} \ dt$ diverges. Thanks for your help, we are really struggling with that :( (and I have no assignment, I'm asking several questions these days because I have an exam tomorrow :)) May 26th 2008, 05:02 PM Let $C_0(\mathbb{R}^+)$ be the set of functions defined over $\mathbb{R}^+$, continuous, with real values. $f \in C_0(\mathbb{R}^+)$. Let F be the antiderivative of f that annulates in 0. Let E be the subset of functions f in $C_0(\mathbb{R}^+)$ such that $I(f)=\int_0^\infty \frac{F(t)}{(1+t)^2} \ dt$ converges. Problem : Find an example for f (necessarily not of constant sign) in E and such that $\int_0^\infty \frac{f(t)}{1+t} \ dt$ diverges. this is a good question! an example is this: $f(t)=\sin t + (t+1)\cos t.$ then $F(t)=(t+1)\sin t.$ to see why this function satisfies the condition, we first prove that $J=\int_0^{\infty} \frac{\sin t}{t+1} \ dt$ is convergent. this is easy to prove, because using integration by parts we have $J=1 - \int_0^{\infty} \frac{\cos t}{(t+1)^2} \ dt,$ and $\int_0^{\infty} \frac{\cos t}{(t+1)^2} \ dt$ is (absolutely) convergent because $\left|\frac{\cos t}{(t+1)^2} \right| \ leq \frac{1}{(t+1)^2}.$ thus $\int_0^{\infty} \frac{F(t)}{(t+1)^2} \ dt = J$ is convergent and $\int_0^{\infty} \frac{f(t)}{t+1} \ dt = J + \int_0^{\infty} \cos t \ dt,$ which is clearly divergent. Q.E.D. May 27th 2008, 09:02 AM Thanks a bunch ! You're great (Bow)
{"url":"http://mathhelpforum.com/calculus/39674-solved-convergence-divergence-integral-print.html","timestamp":"2014-04-16T14:07:49Z","content_type":null,"content_length":"10880","record_id":"<urn:uuid:f635f39b-0424-4159-bf4c-3ebc8f0e3b9d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
etd AT Indian Institute of Science: Modular Kinematic Analysis Of Planar Linkages & Collections Thesis Guide Submitted Date Sign on to: Receive email Login / Register authorized users Edit Profile About DSpace etd AT Indian Institute of Science > Division of Mechanical Sciences > Mechanical Engineering (mecheng) > Please use this identifier to cite or link to this item: http://hdl.handle.net/2005/466 Title: Modular Kinematic Analysis Of Planar Linkages Authors: Chowdary, Sekhar V S C Advisors: Sen, Dibakar Kinematic Analysis Planar Linkages Modular Kinematics Keywords: Kinematic Linkage Modeling Pseudo Spatial Mechanism Multiphase Modular Kinematics Submitted Jul-2006 Series/ G20530 Report no.: This thesis has developed an efficient methodology for automatic kinematic analysis of planar linkages using the concept of modular kinematics. Unlike conventional general purpose kinematic analysis packages where each joint in the mechanism is represented using a set of non-linear constraint equations which need to be solved by some iterative numerical procedure, modular kinematics is based on the original observation by Assur that kinematic state of a mechanism involving large number of links can be constructed out of the kinematic states of patterns of sub chains called modules taken in a given sequence called module sequence which in turn emulates the step by step construction procedure of traditional graphical methods. The position, velocity and acceleration analysis of modules are available in closed form. Kinematic analysis of modules later in the sequence is enabled by those of the ones earlier in the sequence, hence, the kinematic analysis of a mechanism is accomplished without any iterative endeavor by doing the kinematics of the modules as given in the module sequence. [102] classified all modules into three fundamental types namely input, dyad and transformation and also introduced the concept of constraint module for analyzing graphically non-constructible mechanisms within the paradigm of modular kinematics where a small step of numerical search was needed in an over all closed form kinematic formulation. Module sequence for a mechanism using the modules is not unique. Choice of a later module in the sequence depends upon the selection of modules earlier in the sequence. This thesis has presented a systematic approach of identifying all such methods for all the inversions of the mechanism and represented in the form of a module hierarchy or a module tree where each path from root to the leaf node represents a valid module sequence for the kinematic chain in hand. The work also extended the set of modules by adding eight new modules to what has already been used in literature to make it complete in the sense that all planar mechanisms involving revolute, prismatic and pin-in-slot (including circular slots) can be handled. The computational effort involved for analyzing these mechanisms thus depend on the number of constraint modules occurring in succession in the module sequence. However, maximum possible number of constraint modules in any mechanism with up to twelve links is only two. The derivative analysis also uses the same module sequence, but they are always devoid of any iterative steps. During the process of generation of a module sequence, at every stage multitude of modules could be identified for their potential placement in the sequence. But for every module sequence the difference between Abstract: the number of input modules and that of constraint modules is constant and is equal to the kinematic degrees-of-freedom (d.o.f) of the mechanism. The algorithm presented in this thesis minimizes the number of generalized inputs (and hence extraneous constraints) and thus attempting to identify the simplest of the module sequences. In that sense the module sequences represented in the module tree are all optimal module sequences. The present work introduced the concept of multi phase modular kinematics which enables a large variety of mechanisms, conventionally identified as complex mechanisms, to be solved in closed form. This is achieved through the use of novel virtual link and virtual joints. Virtual link is slightly different from a normal rigid link in the sense that the joint locations on this are functions of some independent parameters. Since, the locations of joints are not fixed even in the local coordinate frame of the virtual link, the relative velocities between joints are not zero, they need to be appropriately accounted in kinematic analysis. The theory presented in the thesis is implemented in a computer program written in C++ on Windows platform and Graphics library (OpenGL) is used to display linkage configurations and simulations. The program takes the data of joints, input pairs, ground link in certain format through a file. Geometric models developed in any of the existing modeling softwares like ProE, Ideas, AutoCad etc. can be imported in VRML format to the links and in case of no geometric models a simple convex 2D geometry is created for each link for the purpose of visualization. Geometric import of links helps not only in understanding the simulations better but also in useful for dynamic analysis, dynamic motion analysis and interference analysis. A complete kinematic analysis (position, velocity and acceleration) is given for a four bar mechanism and illustrated the positional ( configuration) analysis using modular kinematics for several other examples like old-ham, quick-return mechanisms etc. in the current work. Multi-phase modular approach is illustrated using a five bar with floating input pairs, a back actor and a drafter mechanism and the Back actor configuration is shown with the imported link geometries. It is observed in practice that there are many apparently spatial Mechanisms, which are constructed out of symmetric dispositions of planar mechanisms in space. A pseudo spatial mechanism concept is proposed to solve this class of spatial mechanisms, which can actually be analyzed with the effort of solving only one such component. This concept is illustrated with Shaker and Umbrella mechanisms. Possible extensions of the concept for modeling and analysis of more general class of pseudo-spatial mechanisms are also indicated. URI: http://hdl.handle.net/2005/466 Appears in Mechanical Engineering (mecheng) Items in etd@IISc are protected by copyright, with all rights reserved, unless otherwise indicated.
{"url":"http://etd.ncsi.iisc.ernet.in/handle/2005/466","timestamp":"2014-04-23T14:41:35Z","content_type":null,"content_length":"24757","record_id":"<urn:uuid:6d3f0642-732f-4c83-9cea-1956c4e6def4>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
Union City, CA Algebra Tutor Find an Union City, CA Algebra Tutor ...I was then a high school teacher for 7 years. I taught physics, honors physics, general physical science and algebra. More recently I taught a full year of chemistry in a six week, all day summer session in Oakland. 9 Subjects: including algebra 1, algebra 2, chemistry, physics ...In fact, I enjoyed teaching so much that I kept on doing it to this day. Through the years, I have worked mostly with junior high and high schoolers, but I have also worked with kids as young as 4th graders and adults at university or community colleges. My number one goal is the academic success of my students. 11 Subjects: including algebra 2, algebra 1, chemistry, calculus ...I have ten years of practical, hands-on computer programming experience through my work as a scientist. Python is my primary programming language. I have also programmed in Pascal and C. 17 Subjects: including algebra 1, algebra 2, chemistry, statistics I tutored all lower division math classes at the Math Learning Center at Cabrillo Community College for 2 years. I assisted in the selection and training of tutors. I have taught algebra, trigonometry, precalculus, geometry, linear algebra, and business math at various community colleges and a state university for 4 years. 11 Subjects: including algebra 1, algebra 2, calculus, statistics Students are often taught the same curriculum, and may even be in the same class, but what they take away is vastly different. For one, people have notably disparate ways of learning. Whether it is through diagrams, examples, or step-by-step instructions, each method is as valid as the next. 16 Subjects: including algebra 1, algebra 2, calculus, reading
{"url":"http://www.purplemath.com/union_city_ca_algebra_tutors.php","timestamp":"2014-04-21T04:38:46Z","content_type":null,"content_length":"23953","record_id":"<urn:uuid:c7974faa-7e69-4486-b562-ee0f24a0ee2b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
The Drunkard's walk : how randomness rules our lives by Leonard Mlodinow The Drunkard's walk : how randomness rules our lives (original 2008; edition 2008) Member: mpultroon Title: The Drunkard's walk : how randomness rules our lives Authors: Leonard Mlodinow Info: New York : Pantheon Books, c2008. Collections: R's Tags: Mathematics, Statistics Work details The Drunkard's Walk : How Randomness Rules Our Lives by Leonard Mlodinow (2008) Recently added by Kurciska, pa5t0rd, jaroslawr, bridgitshearth, sanabo, ExpatTX, cohoek, Robertotcestari No current Talk conversations about this book. » See also 60 mentions An engaging review of probability and statistics, but I sometimes wished he would explain the mathematics behind things in more detail. Also, there was more historical information than I was expecting, which I wasn't that interested in. An interesting and very funny look at the history of statistics and how our tendency to see patterns in randomness can affect our lives and decisions. Essentially a powerful argument that winning isn't necessarily an argument for merit--when you set up a game, you will have winners and BIG winners, even when all the players are precisely the same. Important, life-altering outcomes are more random than we care to recognize. Agh, I love this book. The first time I read it, I hadn't yet encountered The Tipping Point: How Little Things Can Make a Big Difference or Freakonomics: A Rogue Economist Explores the Hidden Side of Everything. On a second read, I discovered just how snarky Mlodinow is about them and their tendency to infer patterns where they may or may not exist. I still think he goes too far, though. Just because randomness is everywhere doesn't mean that there isn't an underlying signal; randomness doesn't imply 50-50. Mlodinow is both knowledgeable and passionate about his subject, and puts together a large number of examples in which not fully taking randomness into account has been disastrous in analysing data. He does a great job of providing an intuitive and entertaining introduction to probability and statistics, and the book is absolutely chock-full of stories which are both entertaining and illuminating. I found the book a little problematic because I think Mlodinow goes too far in the other direction. In several cases, he seems to me to conflate randomness with the total lack of signal, and this seems to me to be both problematic and dangerous. One example he uses is our reverence for the one-in-a-million entrepreneurs which succeed. He points out that given a very large sample, the probability of having one success is actually very large and implies nothing about the individual who succeeds. However, to me, it is just as dangerous to assume that there is no underlying signal (that essentially everything is blind luck) as it is to assume that everything has a deeper meaning. Oftentimes, we treat something as (totally) random because we don't have the knowledge to make a better prediction. For example, we talk about a coin flip as if it were totally random, drawn from a uniform distribution. But if you are flipping a coin and I am able to observe the angle and initial velocity, then I can do a heck of a lot better than a 50-50 guess! The same is true of the entrepreneur example. Just because we can talk about the probability of success doesn't imply that some internal mechanism doesn't exist! Mlodinow seems to me to assume that if the current features don't do a good job predicting outcomes, the outcomes must be (totally) random. He doesn't seem to consider that we might just be using the wrong features for the predictions! All the same, I think this is a wonderful read and a great reality check for all of us who assume skill and meaning rule everything in our lives. The basic message of this book : Life is a series of random events, over which you have no control. Deal with it. Series (with Canonical title Original title publication date Important places Important events Related movies Awards and honors Dedication To my three miracles of randomness: Olivia, Nicolai, and Alexi ... and for Sabina Jakubowicz First words A few years ago, a man won the Spanish national lottery with a ticket that ended in the number 48. Quotations If psychics really existed, you'd see them in places like [Monte Carlo], hooting and dancing and pushing wheelbarrows of money down the street, and not on Web sites calling themselves Zelda Who Knows All and Sees All and offering twenty-four-hour free online love advice [...]. Last words Most of all it has taught me to appreciate the absence of bad luck, the absence of events that might have brought us down, and the absence of the disease, war, famine, and accident that have not - or have not yet - befallen us. (Click to show. Warning: May contain spoilers.) Blurbers Gilbert, Daniel Berlinski, David Hawking, Stephen Publisher series References to this work on external resources. Amazon.com Amazon.com Review (ISBN 0307275175, Paperback) Amazon Guest Review: Stephen Hawking Published in 1988, Stephen Hawking’s A Brief History of Time became perhaps one of the unlikeliest bestsellers in history: a not-so-dumbed-down exploration of physics and the universe that occupied the London Sunday Times bestseller list for 237 weeks. Later successes include 1995’s A Briefer History of Time The Universe in a Nutshell , and God Created the Integers: The Mathematical Breakthroughs that Changed History . Stephen Hawking is Lucasian Professor of Mathematics at the University of Cambridge. The Drunkard’s Walk Leonard Mlodinow provides readers with a wonderfully readable guide to how the mathematical laws of randomness affect our lives. With insight he shows how the hallmarks of chance are apparent in the course of events all around us. The understanding of randomness has brought about profound changes in the way we view our surroundings, and our universe. I am pleased that Leonard has skillfully explained this important branch of mathematics. --Stephen Hawking (retrieved from Amazon Mon, 30 Sep 2013 13:30:14 -0400) (see all 3 descriptions) An irreverent look at how randomness influences our lives, and how our successes and failures are far more dependent on chance events than we recognize. (summary from another edition) » see all 3 descriptions Swap Ebooks Audio 3 avail. 5 pay 6 pay 533 wanted Popular covers
{"url":"http://www.librarything.com/work/4850753/30453511","timestamp":"2014-04-17T02:57:53Z","content_type":null,"content_length":"93690","record_id":"<urn:uuid:bd2404d4-6d7e-4da1-91c0-071459591464>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
OpenMx - Advanced Structural Equation Modeling Thu, 06/07/2012 - 18:41 OK, a big problem seems to OK, a big problem seems to have been including covariances between the variables in each cluster. Once I fixed them to zero the model converges easily to the ML estimates (I used 100 sets of random However, I am still having the same problem with the confidence intervals. I've run many (non-mixture) models and have obtained confidence intervals. I've run the growth mixture model (modified to handle irregularly spaced time points for each subject) and computed confidence intervals. Not sure why I can't get the for this LCA. Wed, 06/06/2012 - 13:26 It's interesting that the It's interesting that the lbound and ubound are equal, both of which are higher than the estimate. Do any warnings or error messages come back on the code with the CIs? Can you share code/data that causes this problem? When you say that some of the variance and covariance parameters are NaN, do you mean that the standard errors/Hessian contains NaNs, or that you have free parameters in your model that are NaN? Wed, 06/06/2012 - 18:51 I can share the code but I I can share the code but I cannot include the data set. The standard errors listed for some model parameters (that represent variances and covariances among the observed variable cluster errors - that is, the variability of the cluster after accounting for the latent means) show as NaN. I modified the code from the growth mixture model example. I have 3 latent variables that represent the means for 3 observed vaiables in a cluster. The variances of these latent variables are fixed to zero (and hence there are no covariances among the latent variables). I should also note that I ran similar code on a well-behaved simulated data set and it zeroed in on the cluster means. Here is the code for class1: class1 <- mxModel("Class1", # residual variances and covariances values = 10, # latent variances and covariance # intercept loadings # manifest means # latent means to=c("m_tbut_1", "m_sch_1","m_tosm_1"), labels=c("mean_tbut_1", "mean_sch_1", "mean_tosm_1") # enable the likelihood vector mxRAMObjective(A = "A", S = "S", F = "F", M = "M", vector = TRUE) ) # close model Thu, 06/07/2012 - 16:47 I don't see any problems in I don't see any problems in the code you've provided. Without some type of data that replicates the error, I/we don't know where to start looking for a fix. Can you throw your data through fakeData ( http://openmx.psyc.virginia.edu/wiki/generating-simulated-data)? It'll take your private data and generate multivariate normal data with the same variable names, missingness patterns and a covariance matrix that's pretty close to yours. If you can do that or make up other data that also causes the error in some portion of code, we'll have a starting point to diagnosing this problem. I'm going to guess that the NaN standard errors are a clue. Those indicate that the Hessian for your model isn't positive definite; that is, at its "final" iteration, the asymptotic variance of a parameter was negative. OpenMx, NPSOL and other quasi-Newtownian methods find new values for future iterations by defining a "step size" that depends on the model's gradient and Hessian (first and second derivatives of the likelihood function). It's possible, though not likely, that the Hessian is of such a shape that it always tries to point positive for some of your parameters. Until I/we have a model to test it with, that's just a guess, though. As an aside, please submit future code examples in an attached file. When users submit code inline in forum posts, we have to strip out all of the added formatting prior to helping you. Fri, 06/08/2012 - 12:42 Numerical precision I think that the lower CI being slightly above the initial estimate is simply a numerical precision issue. Optimization failed because the gradient was too flat for it to get started, and it returned its best estimate which was basically equal (within 3-4 decimal places) to the starting value. Agreed it looks weird, and I think OpenMx should flag apparent failures of optimization when trying to find CI's. I suspect that increasing step size or decreasing numerical precision would improve the chances that optimization would get going to find the CI's. You said latent class analysis so I am thinking that your variables are binary or ordinal (the usual term for continuous variables would be latent profile analysis). If the variables are ordinal I'd try making function precision around 1.E-8 or 1.E-9. If continuous then I'd go with say 1.E-14 or 1.E-15 and see if it helps.
{"url":"http://openmx.psyc.virginia.edu/thread/1417","timestamp":"2014-04-19T17:36:31Z","content_type":null,"content_length":"40898","record_id":"<urn:uuid:83b05baa-0e87-4bd5-ab9d-993917f3178a>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
Revised GDP estimates support the model of inertial growth On July 29, the BEA revised real GDP estimates for the years after 2007. The most important news is: For 2007-2010, real GDP decreased at an average annual rate of 0.3 percent; in the previously published estimates, real GDP had increased at an average annual rate of less than 0.1 percent. From the fourth quarter of 2007 to the first quarter of 2011, real GDP decreased at an average annual rate of 0.2 percent; in the previously published estimates, real GDP had increased at an average annual rate of 0.2 percent. These new BEA data strongly support our model of real economic growth. Previously in this blog, we found that real GDP per capita in developed countries grows as a linear function of time. Similarly to classical mechanics, we interpret this linear growth as “inertial” growth. When the population pyramid does not change over time one can write the following relationship for real GDP per capita, G G(t) = At + C (1) Relationship (1) defines the linear trajectory of the GDP per capita, where C=G[i](t[0])=G(t[0]) and t[0] is the starting time. In the regime of inertial growth, the real GDP per capita increases by the constant value A per time unit. Figure 1 depicts the evolution of annual increment of real GDP per capita in the U.S. since 1950. The new GDP revision makes the slope of the linear regression line (trend) almost negligible (+$1.9 per year) and thus supports our concept. In 2011, the slope may become negative if the increment is below $432. After the two mediocre quarters in 2011, we would not expect real GDP per capita in 2011 to grow faster than in 2010. On June 5 we had a post on the current position of the U.S. economy relative to some long term trend. As a rule, economists consider real growth as an exponential process and see the U.S. economy far below its trend. We compared the trends in real GDP and GDP per capita. The latter should be a linear one. Figure 2 depicts the evolution of both variables between 1950 and 2010 with the new readings between 2007 and 2010. The real GDP curve has an exponential shape as related to the growth in total population. One can easily observe the current deviation from the exponential trend and blame poor economic conditions after 2007. With the decelerating rate of total population growth we would not expect the observed curve to return to the exponential trend (exponential extrapolation of the previous growth.) The real GDP per capita evolves along a straight line. After the revision, the curve falls below the linear trend. It touched the trend with the previous set of GDP estimates. All in all, during the past four years the observed curve returned to the long-term trend and may stay below the trend for a while. We also presented an exponential trend which has a small coefficient of 0.02. This coefficient effectively makes the line very close to a straight one between 1 and 60. However, the deviation from the (extrapolated) exponential trend will be growing and observations will contradict the hypothesis of exponential growth. Figure 1. Annual increment of real GDP per capita in the U.S. between 1950 and 2010. Figure 2. The evolution of real GDP and real GDP per capita between 1950 and 2010.
{"url":"http://mechonomic.blogspot.com/2011/08/revised-gdp-estimates-support-model-of.html","timestamp":"2014-04-18T02:58:30Z","content_type":null,"content_length":"119397","record_id":"<urn:uuid:fa103329-e3a8-4f33-b174-3a89ee313a8b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
type 'a t type 'a t is the type of objects, where each object is part of an equivalence class that is associated with a single value of type 'a. val create : 'a -> 'a t create v returns a new object in its own equivalence class that has value v. val get : 'a t -> 'a get t returns the value of the class of t. val set : 'a t -> 'a -> unit set t v sets the value of the class of t to v. val same_class : 'a t -> 'a t -> bool same_class t1 t2 returns true iff t1 and t2 are in the same equivalence class. val union : 'a t -> 'a t -> unit union t1 t2 makes the class of t1 and the class of t2 be the same (if they are already equal, then nothing changes). The value of the combined class is the value of t1 or t2; it is unspecified which. After union t1 t2, it will always be the case that same_class t1 t2.
{"url":"https://ocaml.janestreet.com/ocaml-core/latest/doc/core_kernel/Union_find.html","timestamp":"2014-04-17T13:59:17Z","content_type":null,"content_length":"3849","record_id":"<urn:uuid:f5c5900c-f586-4bd6-83b5-76e3e80a5e9b>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
The n-Category Café Posts by Simon Willerton Remind yourself about basics of the Legendre-Fenchel transform. Check out a new Spanish language maths magazine See Bruce tell of Ghys’ modular knots. Watch me try to understand fuzzy logic from an enriched category theory perspective Translate from category theory to order theory End your ignorance of ends! A novel teaching method involves figuring out what students do and don’t understand Find out what algebraic varieties, convex sets, linear subspaces, real numbers, logical theories and extension fields have in common with formal concepts. Have a peek at the notion of formal concept analysis Watch some linear algebra being categorified. Read about the October workshop. Read about a different take on torsors Read about recent talks by Kremnitzer and Corfield View the gallery online. Read about my student’s thesis. Find out what PERT graphs have to do with enriched categories Apply for a lectureship in Sheffield. See a new book for scientists about category theory that’s been put on the arXiv. See how these three things are related. Read how this notion of size for metric spaces has some interesting properties. Tell me the difference between these related ideas. Start to see how enriched profunctors can be viewed as categorifications of integral kernels. Discover how enriched category theory leads to the definition of some generalized metrics on the space of continuous functions on the unit interval. A three-day meeting during August in England on Music, Patterns and Mathematics. See the details of a new paper on the magnitude of metric spaces. Watch some videos to see how I’m trying to make 3d models of categorical surface diagrams. Read about how the volume of a tube around a surface in 3-space depends only on intrinsic invariants of the surface. See how the rational numbers 2 and 3/2 gave birth to the Western musical scale. Apply for a post-doc in Sheffield. Over in a discussion at Math Overflow I was reminded about Halmos’ great article on writing mathematics, which I highly recommend to all graduate students (or anyone else, for that matter). P. R. Halmos, How to write mathematics, L’Enseignement… Learn about the tenuous link between emperor penguins and the magnitude of metric spaces.
{"url":"http://golem.ph.utexas.edu/category/willerton.html","timestamp":"2014-04-20T00:41:20Z","content_type":null,"content_length":"9964","record_id":"<urn:uuid:ce98472f-f3c1-4241-9b8d-ca341f9d862a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
Besicovitch set Besicovitch sets A Besicovitch set is a set in the plane (or in higher dimensions) which contains at least one unit line segment in every direction. The triangle pictured below forms one quarter of a Besicovitch set, if its base and height are exactly one unit long: you can place a unit line segment in any direction between N-S and NW-SE (which forms a quarter of all possible orietnations), simply by placing one end at the top corner. Around the turn of the century, it was thought that there was a certain minimum size to Besicovitch sets; that you could never have a Besicovitch set which had an area smaller than pi/8. (In particular, you could never have a quarter-Besicovitch set whose area was smaller than pi/32). However, this conjecture was disproved by A.S. Besicovitch, who showed that you could have a Besicovitch set of arbitrarily small area (or even zero area)! The applet below gives one version of Besicovitch's construction. Basically the idea is to cut up the triangle you see below and shove the pieces together so that there is a lot of overlap. There are two parameters: n, which controls how many times you cut the triangle up, and alpha, which controls how much you shove things closer together. By choosing the two parameters carefully, you can make a quarter-Besicovitch set whose area is as small as you please. By putting four of these quarter-Besicovitch sets together, you can manufacture a genuine Besicovitch set with arbitrarily small area. In this demo, n can be set to any integer from 0 to 9, and alpha can be any number from 0.5 to 1. You have to click the "Redraw" button to actually redraw the Besicovitch set. A crude upper bound for the area of the set is also given at the bottom of the applet; it is off by a factor of two or so, but I'm too lazy to find the exact area. I suggest setting alpha to 0.8 and incrementing n from 0 to 9 to get an idea of the construction. Details of the construction (together with applications to summation of multiple Fourier series) can be found in • E.M. Stein, "Harmonic Analysis", Princeton University Press, 1993, Chapter X.
{"url":"http://www.math.ucla.edu/~tao/java/Besicovitch.html","timestamp":"2014-04-16T16:19:15Z","content_type":null,"content_length":"2761","record_id":"<urn:uuid:e3324ce1-88cc-41f3-bb19-fa20e256ee1a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
sequence, limit point February 20th 2009, 08:54 AM #1 Dec 2008 sequence, limit point If $(x_n)$ is a sequence in $\mathbb{R}$ that converges to $x \in \mathbb{R}$, prove that the set $\{ a_n | n \in \mathbb{N} \}$ has at most one limit point. Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/calculus/74692-sequence-limit-point.html","timestamp":"2014-04-19T08:26:08Z","content_type":null,"content_length":"29448","record_id":"<urn:uuid:23b69683-ed8a-4909-a7d0-eb4f67f2b93c>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Limits Problem December 31st 2008, 08:27 PM #1 Dec 2007 Limits Problem The graph of the even function y=f(x) consists of 4 line segments, as shown above. Which of the following statements about f is false? (A) $\lim_{x \to 0}(f(x) - f(0)) = 0.$ (B) $\lim_{x \to 0}\frac{f(x) - f(0)}{x}=0.$ (C) $\lim_{x \to 0}\frac{f(x) - f(-x)}{2x}=0.$ (D) $\lim_{x \to 2}\frac{f(x) - f(2)}{x - 2}=1.$ (E) $\lim_{x \to 3}\frac{f(x) - f(3)}{x - 3}$ does not exist. The graph of the even function y=f(x) consists of 4 line segments, as shown above. Which of the following statements about f is false? (A) $\lim_{x \to 0}(f(x) - f(0)) = 0.$ (B) $\lim_{x \to 0}\frac{f(x) - f(0)}{x}=0.$ (C) $\lim_{x \to 0}\frac{f(x) - f(-x)}{2x}=0.$ (D) $\lim_{x \to 2}\frac{f(x) - f(2)}{x - 2}=1.$ (E) $\lim_{x \to 3}\frac{f(x) - f(3)}{x - 3}$ does not exist. any initial results from your own work? Hmm I think I had eliminated answers D, and E. The slope of the tangent line at 2 equals 1, and at x = 3 where f(x) has a corner I don't think the graph is differentiable. A and C I'm completely puzzled on...I hadn't made any real attempt to figure out what they mean. B was my original guess for this question, because f(x) is not differentiable at x = 0. Hmm I think I had eliminated answers D, and E. The slope of the tangent line at 2 equals 1, and at x = 3 where f(x) has a corner I don't think the graph is differentiable. A and C I'm completely puzzled on...I hadn't made any real attempt to figure out what they mean. B was my original guess for this question, because f(x) is not differentiable at x = 0. Do you know the equation for f? $f(x) = \begin{cases} +x &, \mbox{ if }x \in [0,3] \\ -x &, \mbox{ if }x \in ]3, \infty[ \\ -x &, \mbox{ if }x \in [-3,0[ \\ +x &, \mbox{ if }x \in ]-\infty,-3[ \end{cases}$ This question confuses me because the graph is not differentiable at -3, 0, and 3. And these points are not differentiable solely because the left and right limits differ, it's similar to the problem with differentiating $y = |x|$ at x = 0. If this is indeed the case, then A through C do not exist, D results in an indeterminate form, but can be reduced using L'Hopital's rule, but even so the limit is still 0, which makes D false as well. E is the only true statement. Since f is not differentiable at x = 3, then there is no true limit at 3, which means that it does not exist in any context with reference to f. So E is true and the rest are false. i say that the only false here is (B). (A) is true.. first, $f(0)=0$ and thus $\lim_{x\rightarrow 0} f(x) = 0$ (C) is true.. take note that on the interval $[-3,3]$ the function can be defined as $f(x)=|x|$ and hence $f(-x)=f(x)$ on this interval. thus, the numerator is always $0$. and therefore the limit as $x$ approaches $0$ is $0$. (D) is true.. you can check that this is true by taking some values.. (E) is true.. i you check the left- and right- hand limit, they go on different values. and thus the limit does not exist. (B) is false.. you can do similar thing like (E) and you will see that the limit should not exist. i just avoided the concept of derivatives here. but in case you already know it, (B), (D) and (E) are definitions of derivatives at the points 0, 2 and 3 respectively.. How is D true? When you split it all up, you get: $\frac{\lim_{x \to 2} f(x) - \lim_{x \to 2} f(2)}{\lim_{x \to 2} x - \lim_{x \to 2} 2}$ It's easy to see that the limit as x approaches 2 is 2, and that the function value at 2 is also 2. The limit of x as it approaches 2 is also 2, and the limit of a constant is the constant $\frac{2 - 2}{2 -2} = \frac{0}{0}$ This is an indeterminate form, so we can do L'Hopital's rule (xxlvh, you may or may not be familiar with this): $\lim_{x \to 2} \frac{f'(x) - f'(2)}{1}$ The derivative of f'(x) at x = 2 is 1, so: $\lim_{x \to 2} (f'(x) - f'(2))$ $\lim_{x \to 2}f'(x) - \lim_{x \to 2} f'(2)$ $f'(2) - f'(2) = 0$ By the way, it is a definition of the derivative, but only when you need to find a c for the mean value theorem (Because Rolle's theorem usually assumes that f(a) = f(b) it is always a numerator of 0). Hello, xxlvh! The graph of the even function y=f(x) consists of 4 line segments, as shown above. .Which of the following statements about f is false? $(A)\;\lim_{x \to 0}(f(x) - f(0)) \:=\: 0 \qquad (B)\;\lim_{x \to 0}\frac{f(x) - f(0)}{x}\:=\:0$ . . $(C)\;\lim_{x \to 0}\frac{f(x) - f(\text{-}x)}{2x}\:=\:0$ $(D)\;\lim_{x \to 2}\frac{f(x) - f(2)}{x - 2}\:=\:1 \qquad (E)\;\lim_{x \to 3}\frac{f(x) - f(3)}{x - 3}\text{ does not exist.}$ Examine the statements. If we can interpret them, we can virtually "eyeball" the problem. $(A)\;\lim_{x\to0}\bigg[f(x) - f(0)\bigg] \:=\:0$ $f(x)-f(0)$ is the height of the function at $x.$ The statement becomes: . $\lim_{x\to0}(\text{height})$ It says: the height of a point, as the point approaches the origin, is zero. (A) is true. $(C)\;\lim_{x\to0}\frac{f(x) - f(-x)}{2x} \:=\:0$ Since $f(x)$ is an even function, $f(x) = f(\text{-}x)$ Hence, the numerator is always 0. (C) is true. $(D)\;\lim_{x\to0}\frac{f(x) - f(2)}{x-2} \:=\:1$ This says: the slope of $f(x)$ at $x = 2$ is 1. Looking at the graph, (D) is true. $(E)\;\lim_{x\to0}\frac{f(x) - f(3)}{x-3}\text{ does not exist.}$ This says: the slope at $x = 3$ does not exist. Looking at the graph, (E) is true. Therefore, (B) is false . . . why? We have: . $\lim_{x\to0}\frac{f(x)-f(0)}{x-0} \:=\:0$ This says: the slope at $x = 0$ is 0. And we can see that this slope does not exist. December 31st 2008, 11:18 PM #2 January 1st 2009, 08:38 PM #3 Dec 2007 January 1st 2009, 08:49 PM #4 Senior Member Nov 2008 January 1st 2009, 09:18 PM #5 January 1st 2009, 09:41 PM #6 January 2nd 2009, 06:14 AM #7 January 2nd 2009, 06:43 AM #8 January 2nd 2009, 07:39 AM #9 Super Member May 2006 Lexington, MA (USA)
{"url":"http://mathhelpforum.com/calculus/66460-limits-problem.html","timestamp":"2014-04-18T09:03:10Z","content_type":null,"content_length":"67702","record_id":"<urn:uuid:1f26ebc0-237f-4d4b-9ddf-f1471a4863b8>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Upcoming MIRI Research Workshops - Machine Intelligence Research Institute Upcoming MIRI Research Workshops From November 11-18, 2012, we held (what we now call) the 1st MIRI Workshop on Logic, Probability, and Reflection. This workshop had four participants: The participants worked on the foundations of probabilistic reflective reasoning. In particular, they showed that a careful formalization of probabilistic logic can circumvent many classical paradoxes of self-reference. Applied to metamathematics, this framework provides (what seems to be) the first definition of truth which is expressive enough for use in reflective reasoning. Applied to set theory, this framework provides an implementation of probabilistic set theory based on unrestricted comprehension which is nevertheless powerful enough to formalize ordinary mathematical reasoning (in contrast with similar fuzzy set theories, which were originally proposed for this purpose but later discovered to be incompatible with mathematical induction). These results suggest a similar approach may be used to work around Löb’s theorem, but this has not yet been explored. This work will be written up over the coming months. In the meantime, MIRI is preparing for the 2nd MIRI Workshop on Logic, Probability, and Reflection, to take place from April 3-24, 2013. This workshop will be broken into two sections. The first section (Apr 3-11) will bring together the 1st workshop’s participants and 8 additional participants: The second section (Apr 12-24) will consist solely of the 4 participants from the 1st workshop. Participants of this 2nd workshop will continue to work on the foundations of reflective reasoning, for example Gödelian obstacles to reflection, and decision algorithms for reflective agents (e.g. Additional MIRI research workshops are also tentatively planned for the summer and fall of 2013. Update: An early draft of the paper describing the first result from the 1st workshop is now available here. Exciting news! Can’t wait to hear about the results.
{"url":"http://intelligence.org/2013/03/07/upcoming-miri-research-workshops/","timestamp":"2014-04-20T18:23:01Z","content_type":null,"content_length":"33368","record_id":"<urn:uuid:d374c28f-fb4f-46d8-b766-ce3134fd4a99>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebraic topology question December 23rd 2010, 07:55 PM #1 Oct 2007 Algebraic topology question Hey, I need some help with part 2 of this question. i) compute $\Pi_1(\Re P^2)$ I got $Z_2$ for this one. here is part 2. ii) Describe a map $S^1 \rightarrow \Re P^2$ with i* not equal to 0. Assume $\Pi_1 (S^1)=Z$ and $A \rightarrow X$ is a retraction means $r \cdot i = I_{da}$. The i is suppose to be on top of the arrow, but I'm not sure how to do that with latex. if I understood your question correctly, you need to find a non-trivial loop in RP^2. Consider a curve c connecting the north pole pn=(0,0,1) and the south pole ps=(0,0,-1) on a sphere S^2, and let p(c) be the image of this curve by the projection p: S^2 -> RP^2 which indetifies all antipodal points. Then p(c) is a closed loop since p(pn)=p(ps). If p(c) is trivial, there is a homotopy, that is, a family of closed curves c_t with the common base point p0=p(pn), with c_0=p0 the constant map and c_1 = p(c). Lift this family of curves we get a homotopy on S^2 that continously deform the c to a pole, while keeping the ending points of c fixed. This is a contradiction. December 24th 2010, 02:46 AM #2 Senior Member Mar 2010 Beijing, China
{"url":"http://mathhelpforum.com/differential-geometry/166828-algebraic-topology-question.html","timestamp":"2014-04-20T07:18:45Z","content_type":null,"content_length":"33035","record_id":"<urn:uuid:4e02acef-b554-4c13-b4f7-ed558e9e9b47>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
Discrepancy Principle for Statistical Inverse Problems with Application to Conjugate Gradient Iteration Gilles Blanchard and Peter Mathé Preprint, University of Potsdam Number 2011.07, 2011. The authors discuss the use of the discrepancy principle for statis- tical inverse problems, when the underlying operator is of trace class. Under this assumption the discrepancy principle is well-dened, however a plain use of it may occasionally fail and it will yield sub-optimal rates. Therefore, a modi- cation of the discrepancy is introduced, which takes into account both of the above deciencies. For a variety of linear regularization schemes as well as for conjugate gradient iteration it is shown to yield order optimal a priori error bounds under general smoothness assumptions. A posteriori error control is also possible, however at a sub-optimal rate, in general. This study uses and complements previous results for bounded deterministic noise. PDF - Requires Adobe Acrobat Reader or other PDF viewer.
{"url":"http://eprints.pascal-network.org/archive/00008701/","timestamp":"2014-04-18T08:07:26Z","content_type":null,"content_length":"7385","record_id":"<urn:uuid:037109d8-8acf-4ac5-b928-a75fe602f290>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
Deflection of electron beam in oscilloscope 1. The problem statement, all variables and given/known data Calculate the deflection of an electron beam as it passes between the plates of a CRT tube. In the picture, the parallel plates create an electric field, with the positive plate on top and the negative plate on bottom, causing the electron's path to be deflected upwards as it travels to the 2. Relevant equations Kinetic energy of electrons: K = 3.2 x 10^-16 J Electric field between plates: E = 1.2 x 10^4 N/C Distance along plates = 15mm 3. The attempt at a solution I know the equation F=Eq is involved. I've been told that the force is centripetal, so that the equation for centripetal force is also involved, but im not entirely convinced. This is because centripetal force is always perpendicular to the velocity, but in this example that can't be the case. So im stuck and need help. [edit] and the answer is 0.34mm
{"url":"http://www.physicsforums.com/showthread.php?t=629678","timestamp":"2014-04-16T10:38:16Z","content_type":null,"content_length":"25169","record_id":"<urn:uuid:53b491ea-5089-4ce3-9c9a-2e63ba23aa8e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
Bernoulli Trials October 29th 2009, 04:58 PM #1 Oct 2009 Bernoulli Trials Hey guys I'm new to this forum and I am having some difficulty with a certain problem concerning Bernoulli Trials. Eight Bernoulli Trials are performed with probability p for success. Pr(no success) = Pr(2 successes). Calculate p and the Pr(1 success)(simplify to fraction in lowest term) Any help of this problem would be great. Thanks! The sum of N identical Bernoulli RV's is a binomial(N, p) [call the sum X], p being the parameter for the Bernoulli. The first step is to find p, which you can get by solving $P(X = 2) = P(X = 0) $ for the parameter p, noting that for the general binomial random variable that $P(X = x) = \binom{n}{x} p^x (1-p)^{n - x}$. October 29th 2009, 06:41 PM #2 Senior Member Oct 2009
{"url":"http://mathhelpforum.com/advanced-statistics/111256-bernoulli-trials.html","timestamp":"2014-04-19T05:36:59Z","content_type":null,"content_length":"31721","record_id":"<urn:uuid:14171b80-7cf0-49a5-98f0-1dc69f7f372e>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
Performance Assessment - Rigid Motions One of the tasks that we gave our students at the end of our unit on Rigid Motions came from Illustrative Mathematics: The triangle in the upper left of the figure below has been reflected across a line into the triangle in the lower right of the figure. Use a straightedge and compass to construct the line across which the triangle was reflected. I took the task and made it dynamic in a TI-Nspire document. Instead of creating the line of reflection for two static triangles, they had to construct it so that it still worked when we moved one of the vertices of the original triangle. They were also required to show some measurements on their screen that justified their work. On the next page, △ABC has been reflected across a line into the blue triangle. Construct the line across which the triangle was reflected. Justify your conclusion. Many students were successful on this task. Most of them constructed the perpendicular bisector of one of the segments with endpoints that were pre-image point and its image. Then next task that we asked them to do was the following, which I first heard about from an instructor at the University Lab School in Honolulu. On the next page, you are given segment AB. Construct a regular hexagon ABCDEF with segment AB as one of its sides. -You may not use any Shapes tools. -You many not use any Measurement tools. When you are finished, we will use Measurement tools to justify your construction. After proposing the task to the students, I made sure they knew what we meant by regular hexagon. Then I let them think and work for a little while before the class discussed a few questions. Someone wanted to know what one of the measures of the angles of the regular hexagon was. So I drew a triangle and asked for the sum of the measures. We ended up having a mini-lesson – and didn’t even get to the point of generalizing the sum of the measures of the triangle (that will come later) – just enough for them to have what they needed to make their hexagon using transformations. Most students rotated the sides all of the way around to create their hexagon. A few students rotated segment AB twice and then reflected the segments to get the rest of the hexagon. All of the students learned something about hexagons that they had not previously considered. And then one more… I love the angles of rotation that the students used to rotate the pentagon onto itself…angles that are easy to use because of technology. My students are entering into the practice of use appropriate tools strategically. And so the journey continues….
{"url":"http://easingthehurrysyndrome.wordpress.com/2012/09/26/performance-assessment-rigid-motions/","timestamp":"2014-04-16T18:56:08Z","content_type":null,"content_length":"56022","record_id":"<urn:uuid:02e56cd9-b164-4130-a570-d8ea5e2e4bee>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with proof If S is uncountable. Show it is impossible that P({i}) > 0 for every i∈S. The relevant constraint is that $\sum P(i) = 1$ cannot hold under these conditions. proposition: If S is uncountable. Show it is impossible that P({i}) > 0 for every i∈S. Equivalent statement: Min (P {i}) > 0 Denote this minimum = x Now note that $(P_1 + P_2 + P_3 + ...) \geq (x + x + x + x ....)$ So its sufficient to show that $(x + x + x + x ....) > 1$ This is true, since you are adding up a strictly positive number an uncountably large number of times (ie, more than 1/x times). Can we always talk of a minimum when the set under consideration is countable ? I think you mean uncountable, and I dont understand why that would be a problem. For example the subset of real numbers $[2,\infty)$ is uncountable but clearly has a minimum of 2. Also, it seems that your proof does not really use uncountability. it did use and require uncountability. The final step was to show that (x + x + x + x + x... ) > 1 This is true iff there are more than $\displaystyle \lceil \ 1/x \rceil$ terms in the summation. Every element of the set S creates 1 term in the summation. Since S is uncountable, it must have more than $\displaystyle \lceil \ 1/x \rceil$ elements. (note that $\displaystyle \lceil \ 1/x \rceil$ is well defined as the proposition was that x>0) Last edited by SpringFan25; September 27th 2010 at 04:17 AM. You haven't said that P is supposed to denote a probability on the set S, so that $\sum P(\{i\}) = 1$. As SpringFan25 pointed out, this is crucial to the proof. For each value of n=1,2,3,..., there are at most n elements i of S with P({i}) > 1/n (otherwise the sum of their probabilities would be greater than 1). Call the (finite) set of all such elements $A_n$. If P({i}) > 0 then it must be true that P({i}) > 1/n for some n and so $i\in A_n$, and therefore i is in the union of all the $A_n$. Thus the set of all elements i such that P({i}) > 0 is a countable union of finite sets and therefore countable. So if S is uncountable it cannot be true that P({i}) > 0 for all i in S.
{"url":"http://mathhelpforum.com/statistics/157497-help-proof.html","timestamp":"2014-04-20T04:48:58Z","content_type":null,"content_length":"45267","record_id":"<urn:uuid:05e402ad-7586-4647-a0b1-d16e195e57b8>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Getting the variable of a polynomial Jeroen Demeyer on Tue, 28 Sep 2004 16:48:10 +0200 [Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index] Getting the variable of a polynomial • To: pari-users <pari-users@list.cr.yp.to> • Subject: Getting the variable of a polynomial • From: Jeroen Demeyer <jdemeyer@cage.ugent.be> • Date: Tue, 28 Sep 2004 16:40:59 +0200 • Delivery-date: Tue, 28 Sep 2004 16:48:10 +0200 • Mailing-list: contact pari-users-help@list.cr.yp.to; run by ezmlm • User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.3) Gecko/20040923 Hi all, Is there a way in GP to get the variable of a polynomial? Something like polvar(t^2 + t - 5) which would return t. It would be very useful to make polynomials (say, using Pol()) having the same variable as a given polynomial. I checked the documentation, but could not find it.
{"url":"http://pari.math.u-bordeaux.fr/archives/pari-users-0409/msg00006.html","timestamp":"2014-04-21T02:38:25Z","content_type":null,"content_length":"4156","record_id":"<urn:uuid:b6c8fe19-a72e-4ce6-a734-bb89710c504d>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
Part 6 - Create a Variation Diagram In any step, click the 'Show me' link to reveal extra information. A sequence of 'Show me's indicates a series of steps. If you prefer a printout of the full set of instructions for this part, choose Print from the File menu. These instructions will walk you through the basics of X-Y plotting in Excel, using a plot of SiO vs. MgO as an example. If you are already familiar with how to do this, skip ahead to Part 7 Step 1 - Select Data and Open the Chart Wizard Step 2 - Select the Chart Type Select the graph type X-Y (Scatter) . Click Step 3 - Label the Crater Lake series Choose the page, and label the Crater Lake series as Crater Lake . Click Step 4 - Add the Yellowstone series to the graph and name the new series 'Yellowstone'. If you don't want to type in the cell references for the X and Y values, you can click on the up-arrows (at the right); then go to the Yellowstone worksheet, drag on the X values (but not the entire column this time--doing this creates an error!), and click on the shaded down-arrow to return to the Series page. Repeat for the Y values. Step 5 - Label the Axes Enter 'SiO (wt.%)' for the Value (X) axis and 'MgO (wt.%)' for the Value (Y) axis. Click Step 6 - Remove those unsightly gridlines , and remove the major gridlines along the Y-axis. Click Step 7- Modify the Graph for Easier Reading Re-size the graph to an appropriate size. Change the fonts for the axes and legend titles so that they are readable for the new graph size. Rescale the axes to fit the range of the data. 1. Drag any of the eight handles you see around the perimeter of the background to re-size both the background and the graph area. You can also select just the graph area and re-size it in the same way, so that the graph fills more of the background area. 2. Drag the Legend, which shows the symbols for the Crater Lake and Yellowstone series, onto the upper right corner of the graph. 3. Change the font sizes of the axes and legend titles so they are readable. 4. Consider reducing the size of the data symboles, especially if there are lots of them. Do this by clicking directly on a datapoint from one of the plotted series. 5. Finally, choose an appropriate scale for the axes. This is done by clicking on either the X or Y axis and modifying its minimum and/or maximum value. For this particular graph, an appropriate range for the X-axis would be 40-80 wt.% SiO Here's the finished product...
{"url":"http://serc.carleton.edu/research_education/cyberinfrastructure/Harkers/part_6.html","timestamp":"2014-04-17T18:33:47Z","content_type":null,"content_length":"30655","record_id":"<urn:uuid:830b0cd5-b134-476e-ae04-906de0e27921>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Absolute deviation This article needs additional citations for verification. (December 2013) In statistics, the absolute deviation of an element of a data set is the absolute difference between that element and a given point. Typically the deviation is reckoned from the central value, being construed as some type of average, most often the median or sometimes the mean of the data set. $D_i = |x_i-m(X)|$ D[i] is the absolute deviation, x[i] is the data element and m(X) is the chosen measure of central tendency of the data set—sometimes the mean ($\overline{x}$), but most often the median. Measures of dispersion[edit] Several measures of statistical dispersion are defined in terms of the absolute deviation. Average absolute deviation[edit] The average absolute deviation, (MAD) or simply average deviation of a data set is the average of the absolute deviations and is a summary statistic of statistical dispersion or variability. In its general form, the average used can be the mean, median, mode, or the result of another measure of central tendency. The average absolute deviation of a set {x[1], x[2], ..., x[n]} is $\frac{1}{n}\sum_{i=1}^n |x_i-m(X)|.$ The choice of measure of central tendency, $m(X)$, has a marked effect on the value of the average deviation. For example, for the data set {2, 2, 3, 4, 14}: Measure of central tendency $m(X)$ Average absolute deviation Mean = 5 $\frac{|2 - 5| + |2 - 5| + |3 - 5| + |4 - 5| + |14 - 5|}{5} = 3.6$ Median = 3 $\frac{|2 - 3| + |2 - 3| + |3 - 3| + |4 - 3| + |14 - 3|}{5} = 2.8$ Mode = 2 $\frac{|2 - 2| + |2 - 2| + |3 - 2| + |4 - 2| + |14 - 2|}{5} = 3.0$ The average absolute deviation from the median is less than or equal to the average absolute deviation from the mean. In fact, the average absolute deviation from the median is always less than or equal to the average absolute deviation from any other fixed number. The average absolute deviation from the mean is less than or equal to the standard deviation; one way of proving this relies on Jensen's inequality. For the normal distribution, the ratio of mean absolute deviation to standard deviation is $\scriptstyle \sqrt{2/\pi} = 0.79788456\dots$. Thus if X is a normally distributed random variable with expected value 0 then, see Geary (1935):^[1] $w=\frac{ E|X| }{ \sqrt{E(X^2)} } = \sqrt{\frac{2}{\pi}}.$ In other words, for a normal distribution, mean absolute deviation is about 0.8 times the standard deviation. However in-sample measurements deliver values of the ratio of mean average deviation / standard deviation for a given Gausiian sample n with the following bounds: $w_n \in [0,1]$, with a bias for small n.^[2] Mean absolute deviation (MAD)[edit] The mean absolute deviation (MAD), also referred to as the mean deviation (or sometimes average absolute deviation, though see above for a distinction), is the mean of the absolute deviations of a set of data about the data's mean. In other words, it is the average distance of the data set from its mean. Because the MAD is a simpler measure of variability than the standard deviation, it can be used as pedagogical tool to help motivate the standard deviation.^[3]^[4] This method forecast accuracy is very closely related to the mean squared error (MSE) method which is just the average squared error of the forecasts. Although these methods are very closely related MAD is more commonly used^[citation needed] because it does not require squaring. More recently, the mean absolute deviation about mean is expressed as a covariance between a random variable and its under/over indicator functions;^[5] as $D_m = E|X-\mu|=2Cov(X,I_O)$ D[m] is the expected value of the absolute deviation about mean, "Cov" is the covariance between the random variable X and the over indicator function ($I_{O}$). and the over indicator function is defined as $\mathbf{I}_O := \begin{cases} 1 &\text{if } x >\mu, \\ 0 &\text{else } \end{cases}$ Based on this representation new correlation coefficients are derived. These correlation coefficients ensure high stability of statistical inference when we deal with distributions that are not symmetric and for which the normal distribution is not an appropriate approximation. Moreover an easy and simple way for a semi decomposition of Pietra’s index of inequality is obtained. Average absolute deviation about median[edit] Mean absolute deviation about median (MAD median) offers a direct measure of the scale of a random variable about its median $D_{med} = E|X-median|$ For the normal distribution we have $D_{med} = \sigma \sqrt(2/\pi)$. Since the median minimizes the average absolute distance, we have $D_{med} <= D_{mean}$. By using the general dispersion function Habib (2011) defined MAD about median as $D_{med} = E|X-median|=2Cov(X,I_O)$ where the indicator function is $\mathbf{I}_O := \begin{cases} 1 &\text{if } x > median, \\ 0 &\text{else } \end{cases}$ This representation allows for obtaining MAD median correlation coefficients;^[6] Median absolute deviation (MAD)[edit] The median absolute deviation (also MAD) is the median of the absolute deviation from the median. It is a robust estimator of dispersion. For the example {2, 2, 3, 4, 14}: 3 is the median, so the absolute deviations from the median are {1, 1, 0, 1, 11} (reordered as {0, 1, 1, 1, 11}) with a median of 1, in this case unaffected by the value of the outlier 14, so the median absolute deviation (also called MAD) is 1. Maximum absolute deviation[edit] The maximum absolute deviation about a point is the maximum of the absolute deviations of a sample from that point. While not strictly a measure of central tendency, the maximum absolute deviation can be found using the formula for the average absolute deviation as above with $m(X)=\text{max}(X)$, where $\text{max}(X)$ is the sample maximum. The maximum absolute deviation cannot be less than half the range. The measures of statistical dispersion derived from absolute deviation characterize various measures of central tendency as minimizing dispersion: The median is the measure of central tendency most associated with the absolute deviation. Some location parameters can be compared as follows: This section requires expansion. (March 2009) The mean absolute deviation of a sample is a biased estimator of the mean absolute deviation of the population. In order for the absolute deviation to be an unbiased estimator, the expected value (average) of all the sample absolute deviations must equal the population absolute deviation. However, it does not. For the population 1,2,3 both the population absolute deviation about the median and the population absolute deviation about the mean are 2/3. The average of all the sample absolute deviations about the mean of size 3 that can be drawn from the population is 44/81, while the average of all the sample absolute deviations about the median is 4/9. Therefore the absolute deviation is a biased estimator. See also[edit] 1. ^ Geary, R. C. (1935). The ratio of the mean deviation to the standard deviation as a test of normality. Biometrika, 27(3/4), 310-332. 2. ^ See also Geary's 1936 and 1946 papers: Geary, R. C. (1936). Moments of the ratio of the mean deviation to the standard deviation for normal samples. Biometrika, 28(3/4), 295-307 and Geary, R. C. (1947). Testing for normality. Biometrika, 34(3/4), 209-242. 3. ^ Kader, Gary (March 1999). "Means and MADS". Mathematics Teaching in the Middle School 4 (6): 398–403. Retrieved 20 February 2013. 4. ^ Franklin, Christine, Gary Kader, Denise Mewborn, Jerry Moreno, Roxy Peck, Mike Perry, and Richard Scheaffer (2007). Guidelines for Assessment and Instruction in Statistics Education. American Statistical Association. ISBN 978-0-9791747-1-1. 5. ^ Elamir, Elsayed A.H. (2012). "On uses of mean absolute deviation: decomposition, skewness and correlation coefficients". Metron: International Journal of Statistics LXX (2-3). 6. ^ Habib, Elsayed A.E. (2011). "Correlation coefficients based on mean absolute deviation about median". International Journal of Statistics and Systems 6 (4): pp. 413–428. External links[edit]
{"url":"http://blekko.com/wiki/Absolute_deviation?source=672620ff","timestamp":"2014-04-18T13:26:51Z","content_type":null,"content_length":"68352","record_id":"<urn:uuid:d9a45fd6-4885-42b9-ac95-e6c23e02b7d3>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 11 Thank You So Much!!! hi there, i have to write a letter in French to my au-pair family(the family that i am going to work with) introducing myself, my experience with children, proficiency in French and say why i think i will be suitable for this position. do anyone have any similar example letter... how many excess electrons must be placed on each of two small metal spheres placed 3.00 cm apart in a vacuum if teh force of repulsion betwen the spheres is to be 0.000000008 N? how do i solve this problem??=( help please.thx. Use Coulomb's Law: F = k Q^2/R^2 The force is ... a company makes x toys daily at a cost of C(x)=125+30x+2(x to the power of 2/3) dollars. what daily production level will minimize the average cost? (note define average cost as the Total cost divided by the total number of items) thx According to your definition Average Cost ... Jack rides his 213kg motor cross bike 25.3m up a 30.0 degree slope at a constant speed. what is the energy his bike expends if the frictional force opposing its movement is one-tenth its weight? Are you sure the mass is 213 kg? Shouldn't it be 21.3 kg? Or does that include... eight points lie on the circumference of a circle. one of them is labelled P. chords join some or all of the pairs of these points so that the seven points other than P lie on different numbers of chords. what is the minimum number of chords on which P lies? If chords join &qu... the sum of N positive integers is 19. what is the maximum possible product of these N numbers? thx. Nice problem. Let's look at some cases 1. N=2 clearly our only logical choices are 9,10 for a product of 90 It should be obvious that the numbers should be "centrally&q... parrots,those life spance is no longer than 20 year.... is the use of the term "no longer" correct in this sentence? or is there any other mistakes in this sentence? thx. There are several spelling and word choice errors, an error of fact, AND it is not a sentence. s... hello! i have to write a short narrative in the genre of science fiction using a current political issues as a starting point. can someone please name and give examples of some the current political issues for me please? thank you for the help! thank you ! Current political is... Hello I have exams coming up and I was wondering if any of you guys know where I can get some practice exam papers or questions that I can do or download from the internet. I am in year 11, high school, and I have exam on geometry and trigonometry, intro calculus, English and ...
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Jarin","timestamp":"2014-04-20T08:46:51Z","content_type":null,"content_length":"9018","record_id":"<urn:uuid:0702d1d2-2c4f-482c-89b2-48921085b499>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
About MathBlog The goal of MathBlog.dk is to increase our own (and the readers) knowledge and get YOU interested in math and computer science while having fun at the same time. We know that this is a very difficult goal but I hope you do enjoy. If you have a fun website with some interesting science related recreational inspiration feel free to mention it to us. We are always looking for more inspiration. A little history This page was started by me Kristian Edlund in 2010 with a series of posts about solutions to Project Euler. I realized that solving the problems was not enough. I had to communicate them to others. Not because I want to brag about what I can do, since thousands of others have solved the same problems before me. No, the need for communication is that it forces me to delve into the theories behind the problem and understand them to a level where I can relay the information. Not long after the beginning Bjarki Ágúst Guðmundsson became a regular reader and active participant in the discussions. In 2012 we took the leap and he joined me as an author for the site contributing with his vast knowledge of problems out there to be solved as well as solutions to some of all the problems. Why do you post the solutions? This is an often asked question, since some feel that it is cheating to read solutions on the web. However , we do it since many of the problems can be solved the brute force, or it can be solved by smarter means. We seek these smarter means whenever we can. So we post the solutions both in order to learn even more our self, but also to inspire others to delve into more computer science and To be honest you can find the solutions for most of the problems online already, some of them without an explanation and some of them with. So if you want to cheat, you can easily do so. We hope that you use the solutions as an inspiration to improve your problem solving skills once you have conquered the problems yourself.
{"url":"http://www.mathblog.dk/about/","timestamp":"2014-04-20T03:10:43Z","content_type":null,"content_length":"28217","record_id":"<urn:uuid:72a50602-8ab9-42a5-81be-9e2391e743d7>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
sinx and cosx functions Maybe cristo just missed it, but Ry122 is actually right, cosine can also be expressed in that form of asin(bx+c) + d. This happens because sin (90degrees-x)= cos x. So in answer to your original question, the difference when graphed between sin x and cos x is that the cos graph is the same as sins, moved back 90 degrees to the left.
{"url":"http://www.physicsforums.com/showthread.php?t=163797","timestamp":"2014-04-18T10:54:41Z","content_type":null,"content_length":"30640","record_id":"<urn:uuid:d0c4cae4-d1ad-424e-ba5a-21555230d76a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Model Constrained Systems as DAEs Model the motion of a single pendulum. Derive the governing equations using Newton's second law of motion, and . Express the fixed length of the pendulum rod as an algebraic constraint. The pendulum is released from the horizontal position with a vertical velocity of 1. Specify the physical parameters for the pendulum system. Solve the high-index DAE and visualize the system.
{"url":"http://wolfram.com/mathematica/new-in-9/advanced-hybrid-and-differential-algebraic-equations/model-constrained-systems-as-daes.html","timestamp":"2014-04-20T10:49:09Z","content_type":null,"content_length":"11614","record_id":"<urn:uuid:3d270d7a-1621-45d4-901b-669c4db5ff6d>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
Multivariate search for differentially expressed gene combinations • We are sorry, but NCBI web applications do not support your browser and may not function properly. More information BMC Bioinformatics. 2004; 5: 164. Multivariate search for differentially expressed gene combinations To identify differentially expressed genes, it is standard practice to test a two-sample hypothesis for each gene with a proper adjustment for multiple testing. Such tests are essentially univariate and disregard the multidimensional structure of microarray data. A more general two-sample hypothesis is formulated in terms of the joint distribution of any sub-vector of expression signals. By building on an earlier proposed multivariate test statistic, we propose a new algorithm for identifying differentially expressed gene combinations. The algorithm includes an improved random search procedure designed to generate candidate gene combinations of a given size. Cross-validation is used to provide replication stability of the search procedure. A permutation two-sample test is used for significance testing. We design a multiple testing procedure to control the family-wise error rate (FWER) when selecting significant combinations of genes that result from a successive selection procedure. A target set of genes is composed of all significant combinations selected via random search. A new algorithm has been developed to identify differentially expressed gene combinations. The performance of the proposed search-and-testing procedure has been evaluated by computer simulations and analysis of replicated Affymetrix gene array data on age-related changes in gene expression in the inner ear of CBA mice. The set of microarray expression data on p distinct genes is represented by a random vector X = X[1],..., X[p ]with stochastically dependent components. The dimension of X is typically very high relative to the number of observations (replicates of experiment). The standard practice is to test the hypothesis of no differential expression for each gene. Formulated in terms of the marginal distributions of all components of X, this hypothesis means that the expression levels of a particular gene are identically distributed under two (or more) experimental conditions. It is commonly believed that the only challenging problem here is that of multiple statistical tests, because the corresponding test statistics computed for different genes are stochastically dependent. This problem is discussed in [2] in the context of microarray data analysis. Resampling techniques [3,4] provide a universal approach to the problem of multiple dependent tests inherent in the most typical study designs. However, there is another aspect of the standard approach that warrants special attention. Any test constructed solely in terms of marginal distributions of gene expression levels disregards the multidimensional (dependence) information hidden in gene interactions, which is its most obvious deficiency. In a recent paper, Szabo et al. [5] proposed to build a target set of interesting genes from non-overlapping subsets of genes of a given size (≥1) that have been declared differentially expressed in accordance with a pertinent statistical test. The size of each sought-for subset is naturally constrained by the available sample size. This approach strives to preserve the dependence structure at least within each of such building blocks, which is already a major step toward a more general methodology of microarray gene expression data analysis. No matter what specific statistical techniques are chosen to approach the problem of identifying differentially expressed gene combinations rather than individual genes, the hypothesis that the expression levels of a given set of genes are identically distributed across the conditions under study is the most meaningful hypothesis to be tested. However, this hypothesis is now formulated in terms of the joint distribution of expression levels. The issue of multiple testing is dramatically magnified with multivariate methodology, because the total number of tests to be carried out at all steps of multivariate selection may be many orders of magnitude larger than with univariate methods. A constructive idea is to design a random search procedure for identifying differentially expressed sets of genes followed by testing significance of a final set. Szabo et al. [5,6] proposed a search procedure based on maximization of a new distance between multivariate distributions of gene expression signals. They used permutation techniques for hypotheses testing. To adjust for multiple testing, the null-distribution was estimated from the test statistics generated by each optimal (in terms of the adopted distance) set of genes found in each permutation sample. The authors provided an illustrative example of clear advantages of multivariate methodology over univariate approaches. In the present paper, we improve the cross-validation and multiple testing components of the earlier proposed algorithm. This new combination of the search-and-testing procedures furnishes a sound statistical methodology for multivariate analysis of microarray data. Mathematical framework: measure of differential expression To compare gene expression signals in two different experimental conditions (states) one needs a pertinent distance between two random vectors. Such a distance is expected to satisfy the following requirements: (1) it should have a clear probabilistic meaning; (2) it should accommodate both continuous and categorical data; (3) its estimate should be stable to random fluctuations and numerical errors; (4) its computation should not be too time consuming. A distance that meets all the above requirements was proposed in [6]. Let X = X[1],..., X[d ]and Y = Y[1],..., Y[d], d ≤ p, be two random sub-vectors with probability measures μ and ν, respectively, defined on the Euclidean space R^d. Let K(x, y) be a strictly negative definite kernel, that is x[1],..., x[s ]from R^d and any real numbers h[1],..., h[s], h[i ]= 0. Introduce the following expression The quantity N(μ, ν) can be shown [7] to be a metric in the space of all probability measures R^d, so that the null hypothesis in two-sample comparisons can be formulated as H[0 ]: N(μ, ν) = 0. A normalized version of N can be derived as If K(x, y) = Ψ(x - y) and Ψ(·) is homogeneous of any order, then N[norm ]is both location and scale invariant. Consider two independent samples, consisting of n[1]and n[2 ]observations respectively, represented by the d-dimensional vectors N(μ, ν) as follows A very important advantage of the empirical counterpart N is that it does not involve numerically unstable high-dimensional components (such as covariance matrix or its inverse), thus it is expected to be numerically stable even for small sample sizes. This was corroborated by a computer simulation study [5], in which this distance demonstrated a much higher stability than the Mahalanobis distance and the nearest neighbor classifier. Another distinct advantage of the approach based on μ = ν. Let x and y denote observations in two samples on a particular set R^d. One natural choice is the Euclidean distance between points representing experimental measurements: When this kernel is applied to logarithms of gene expression signals the corresponding distance Yet another kernel based on the correlation coefficient tends to pick up sets of genes with separated means and differences in correlation in the two samples under comparison [6]. One can also use a convex combination of the above mentioned kernels with the weights chosen in such a way as to make the distance more sensitive to particular types of the alternative hypothesis. The search-and-testing algorithm Once a multivariate distance between expression signals has been selected, it can be employed in a search for differentially expressed genes with the target subset of genes being defined as a subset for which the distance between the two groups under comparison attains its maximum. Unlike univariate testing, an exhaustive multivariate search is computationally prohibitive because the number of possible subsets increases as the d-th power of the total number of genes. The issue of computational complexity can be resolved by applying random search methodology. Random search can be designed in a number of various ways. One simple algorithm was described in [6,8]. We used this algorithm, hereafter designated as Simple Random Search (SRS), with multiple random starts and long sequences of search steps in the application reported in the present paper. We also compared its performance with that of simulated annealing [1]. To reduce the selection bias associated with choosing a small number of variables from a large set [9], Szabo et al. [5,6] suggested to use cross-validation techniques with the search for a target subset of genes running in each cross-validation cycle. The basic structure of our cross-validation algorithm is as follows: Algorithm A1: Cross-validated search for differentially expressed genes 1. Randomly draw (without replacement) u[1 ]samples from one group of arrays and u[2 ]samples from the other group. 2. Leave out the selected arrays and find the optimal (in accordance with the chosen criterion) subset of genes using only the data from the remaining arrays. 3. Repeat steps 1 and 2 in succession v times to obtain v "optimal" sets of genes. The main problem here is that the algorithm results in many overlapping sub-optimal sets, and one needs to somehow combine them to report a single final set. Szabo et al. resorted to a somewhat unnatural way of forming a final set by selecting single genes with the highest frequencies of occurrence in sub-optimal sets. In our new algorithm, this is accomplished through designing a second-stage cross-validated search limited to the union of the previously selected sets. In the second-stage search procedure, cross-validation is carried out at each step of random search with the Algorithm A2: The second-stage cross-validated random search 1. Form the union of all sets resulted from Algorithm A1 to represent an initial target set. Drop the data on all other genes from the data set. 2. Initiate a random search algorithm. 3. At each step of the search algorithm, randomly draw (without replacement) l[1 ]samples from one group of arrays and l[2 ]samples from the other group. Leave out the selected arrays and compute the N-statistic using only the data from the remaining arrays. Perform this computation r times. 4. Compute the average (arithmetic mean) of the N-statistics resulted from step 3. Denote this average by 5. Move to the next step of random search using the statistic In the application discussed in the present paper, we used Algorithm A2 with 200 cross-validated samples in the second stage of the search algorithm. The two-stage search algorithm runs with multiple random starts and returns the most differentially expressed (in terms of the distance Once an optimal set has been found, all genes pertaining to this set are discarded and a search for the next set of differentially expressed genes is initiated. Szabo et al. [5] proposed a stopping rule based on a permutation significance test. In the improved version of our algorithm, instead of testing significance at each step of the successive selection of subsets of genes, the selection procedure runs (without testing) for a preset number of steps, thereby forming a reasonably long sequence of non-overlapping "maximal" subsets. The same cross-validated random search procedure is applied to each permutation sample, generated to model the complete null hypothesis for disjoint subsets of genes, and finally the step-down multiple testing resampling algorithm by Westfall and Young [4] is applied to the subsets thus selected. If all the null hypotheses happen to be rejected, the selection procedure goes on eliminating subsets of genes resulting from the search algorithm, otherwise the procedure stops. The heuristic procedure thus designed mimics its univariate multiple testing (marginal hypotheses testing) counterpart with known properties [4], thereby ensuring an approximate control of the family-wise error rate (FWER). Suppose that all tests are two-tailed and utilize the same test statistic Algorithm A3: Successive selection of differentially expressed gene combinations 1. Form m permutation samples of sizes n[1]and n[2], respectively, from n[1]+ n[2 ]replicated observations (arrays). For each of the m permutation samples, run (without testing) the successive selection algorithm to find a preset number I of disjoint sets. At each step of successive selection, an optimal k-element set is identified by the two-stage cross-validated search algorithm and the corresponding m sequences of 2. Returning to the original two-sample setting, find a sequence of I optimal sets of the same size k and compute the respective test statistics 3. Apply the step-down multiple testing resampling algorithm by Westfall and Young [4] to the N-statistics resulting from Steps 1 and 2. If the number of rejected hypotheses is less than I then stop and declare all the rejected sets of genes differentially expressed, otherwise return to Step 1 and continue successively selecting sets of genes. A faster version of Step 3 uses the single-step resampling adjustment [4]. The above algorithm can be reformulated in terms of p-values. The algorithm is computationally more expensive than its prototype presented in [5]. We used a SunFire V480 station to implement the algorithm. This "brute force" approach is needed to extract more information from multivariate gene expression profiles. With the above approach, no distributional assumptions are needed although the test statistic least favorable for rejecting an underlying composite null hypothesis. In other words, permutations provide an optimal choice of a null distribution. More precisely, this theoretical result is valid for the resampling (with replacement) analog of permutations, but regular (without replacement) permutations may be a good approximation to this resampling procedure if both samples under comparison are not too small. This concept and its mathematical framework is discussed at length in our previous report [10]. For efficient nonparametric estimation of adjusted p-values associated with sets of genes resulting from random search, it is also desirable that the test statistic be scale invariant for any sample size. A statistic that meets this requirement is an empirical counterpart of the normalized distance N[norm ]with a properly chosen kernel function, see formula (2) and the succeeding explanation. Yet another possibility is to use the kernel K[1 ]with log-intensities of gene expressions. We employed the latter pivoting structure of the N-statistic in the analysis of simulated and biological data presented in the subsequent sections. Simulation studies We first tested our methodology by computer simulations. To this end, we designed a simulation study as follows. Two sets of data on 1,000 genes were simulated. For convenience we will label them as "control" and "treatment" samples, respectively. The size of each sample was equal to 10. In the treatment group, the first 12 genes were set to be differentially expressed. To simulate these genes, logarithms of gene expression signals were generated from a multivariate normal distribution with an exchangeable correlation structure. The algorithm designed to simulate such data is presented in the Appendix. The correlation coefficient for all pairs of gene log-intensities was set equal to 0.6, while the standard deviation was chosen to be either σ = 0.5 or σ = 1 for all individual genes. The mean log-expression values τ for the genes assigned to the target set of genes were specified as follows: τ = 5 for the first 4 genes (Subset 1), τ = 4 for the second group of 4 genes (Subset 2), τ = 3 for the third group of 4 genes (Subset 3). The remainder of the genes (not differentially expressed) were simulated as log-normally distributed random variables with τ = 1 and the same standard deviation (either σ = 0.5 or σ = 1) and correlation coefficient. The 1,000 genes in the control group were simulated just like those that were not differentially expressed in the treatment group. Our search-and-testing procedure was applied to the data sets thus generated in order to see whether (and how frequently) it can find all subsets, as well as all individual genes, included in the target set of differentially expressed genes. In each experiment, the SRS algorithm was run with multiple random starts. At each step of the successive selection of genes, the algorithm sought for a subset of 4 genes. The parameter I in Algorithm A3 was set equal to 5. Since the sole purpose of our simulations was to check how well a given algorithm finds a maximum of the N-statistic over gene sets, no recourse to cross-validation was made in this study. The number of permutations was set at 200. Because such simulations are very time consuming the experiment was repeated only 100 times. Two samples (control and treatment) were generated in each of the 100 experiments. First we tested the SRS algorithm with 8 random starts and 2,500 search steps. When σ = 0.5 for the treatment group the algorithm was able to correctly recover Subset 1 in 82%, Subset 2 in 72%, and Subset 3 in 76% of simulation runs. The proportion of cases where all 12 genes were correctly recovered (irrespective of the order they entered the selected subsets) was 61%. The false discovery rate, defined as the mean proportion of falsely discovered genes among the true differentially expressed genes, was equal to 0.02. When σ = 1 the SRS algorithm recovers Subset 1 in 76%, Subset 2 in 56%, and Subset 3 in 39% of the simulation runs. The proportion of cases where all 12 genes were correctly recovered was 53%. The false discovery rate was equal to 0.04. As one would expect, the SRS algorithm performed better with 16 random starts and 3,600 search steps. For σ = 0.5, the rate of correct discovery becomes 100% for all three sets. For σ = 1 the algorithm correctly recovers Subset 1 in 81%, Subset 2 in 65%, and Subset 3 in 48% of simulation runs. The proportion of cases where all 12 genes are correctly recovered is 62%. However, the false discovery rate remains essentially the same as when running the SRS algorithm with 8 starts and 2,500 search steps. The results on individual simulated genes are presented in Table Table11. Proportions of correct discoveries for each gene in the target set. By way of comparison, we ran the Westfall and Young algorithm with a univariate counterpart of the test statistic N at the same level of FWER. While the results for σ = 0.5 were identical (100% correct recovery), the univariate method recovered less genes (45%) in the target set when we set σ = 1. In the latter case, the univariate algorithm had a uniformly lower correct discovery rate for genes #9 through #12 (69%, 71%, 70%, 71%, respectively) in comparison to the multivariate method (Table (Table1).1). One should not expect much discrepancy between the univariate and multivariate methods in these simulations because the alternative hypotheses were modeled in a univariate way. In another experiment we studied the simulated annealing optimization (SAO) with one random start and the same parameters of the simulation model. Although computationally expensive, the SAO algorithm is easier to handle when tuning its parameters in simulation experiments. Proceeding from the less favorable case of σ = 1, we determined parameters of the SAO algorithm that provide correct selection of all three sets of differentially expressed genes in all simulation runs. Another way of testing the two algorithms is to apply them in a situation where the true global maximum of the N-distance is known. We randomly selected 2000 genes from the data set discussed in the next section. All possible pairs were formed from the 2000 genes and the corresponding N-statistic between the two samples (young versus old mice) was computed for each pair. The data were normalized before the analysis (see Section "Results and Discussion"). Having determined a maximum value of the N-statistic over all pairs, we ran the SRS and SAO algorithms (with parameters suggested by our simulation experiments) to see whether they could find the actual maximum. Both algorithms hit the target. The biological purpose of our experimental study was to better understand age-related changes in gene expression that occur in mouse inner ear (including the organ of Corti and stria vascularis). Since we do not expect numerous genes to be involved in the process of aging of the auditory system, this experimental system seems to be especially promising for the use of multivariate methods. Hearing loss or deafness affects about 10% of the U.S. population, or about 30 million people, most of them over age 60. Presbycusis – age-related hearing loss – is a primary sensory problem in the elderly population, the number one communicative disorder, and one of the top three chronic medical conditions affecting the aged. It is often described as difficulty in understanding speech, especially in conditions of high ambient background noise. Most elderly persons have a reduction in hearing acuity. For example, cross-sectional and longitudinal studies have consistently demonstrated gradually decreasing pure tone thresholds by cohort groups of elderly [13,14]. The composite audiometric pattern is one of better hearing for low- and mid-speech frequencies than higher speech frequencies. The consequence of this pattern is difficulty in hearing and understanding, not only conversational speech, but in particular, speech that is softly spoken. In fact, a similar gradual reduction in speech recognition for words and phonemes in quiet has been shown to accompany the pure tone threshold decrease in cohort groups of the elderly [14-16]. Much progress had been made in the field of auditory aging research regarding sensitivity deficits and metabolic problems of the cochlea. As humans and animals age, they lose sensory hair cells, 8th cranial nerve (i.e., vestibulocochlear) fibers, and develop stria vascularis/potassium recycling metabolic problems that degrade audibility and spectral tuning [17-21]. In addition, the differing roles of the ear and brain in presbycusis, and aging deficits in speech understanding in background noise, and their respective neural bases are beginning to be understood. Age effects in these areas are distinguishable and age-related problems in the brain can be influenced by the peripheral etiologies of presbycusis [22-24]. Considering studies completed to date, presbycusis in humans, and corresponding age-related hearing loss in animal models such as the CBA mouse, have two major facets: 1) A peripheral hearing loss of cochlear origin, starting with sensitivity losses in the high pitches (high frequencies), involving loss of sensory hair cells, spiral ganglion neurons (8th nerve fibers) and metabolic malfunctions of the highly vascularized stria vascularis organ system that produces the potassium rich endolymph of the inner ear [25,26]; and 2) An inability to comprehend speech in background noise, that results from deficits in the inner ear and the central auditory nervous system [23,24]. For the animal model studies of presbycusis, the CBA mouse strain has been quite useful to date. The goal of the present study is to explore the underlying cochlear gene expression changes that may predispose or cause presbycusis. Common neurodegenerative diseases such as presbycusis are likely to be caused by several fundamental problems that interact with each other and with environmental factors, including genetic pre-dispositions to environmental insults, noise and ototoxic medications [27]. Although over a hundred genes have been identified that cause congenital deafness (e.g. [28-30]), no candidate genes have yet been identified that are involved in human presbycusis. The present report attempts to gain some initial insights into gene expression changes related to inner ear problems that may predispose or cause age-related neurosensory disorders, such as age-related hearing loss – presbycusis, utilizing the CBA mouse strain. The two groups of arrays under comparison included 9 and 12 arrays, respectively (see the next section). The data were normalized using the quantile normalization method [11,12] carried out at the probe feature level. Compared to our simulations, the number of permutations was increased to 400. Each search cycle in the SRS algorithm proceeded in 45,000 steps with 100 random starts. The algorithm was tuned to search for a set of 5 genes at each step of the successive selection procedure. We also changed parameters that control the efficiency of the SAO algorithm to account for an increased dimensionality of the problem. The latter algorithm also sought for sets consisting of 5 genes. We used the following parameter values in the combined two-stage cross-validated search algorithm: I = 5, u[1 ]= 4 (out of 9 arrays), u[2 ]= 6 (out of 12 arrays), v = 10, l[1 ]= 4 (out of 9 arrays), l[2 ]= 6 (out of 12 arrays), r = 200. Although the lists of genes produced by both algorithms are quite similar, there are still some discrepancies between them which may be attributed to the choice of parameters for each method. Since the SAO algorithm is less sensitive to the choice of the initial gene combination, we present only the results obtained with this algorithm. In the "young" versus "old" comparison, the procedure selected two sets of 5 genes with an adjusted p-value of less than 0.05. For comparison, we applied the Wesfall and Young step-down multiple testing procedure with a univariate counterpart of Of the 10 identified genes (from 2 sets) exhibiting major expression changes with age, there are 6 differentially expressed genes having to do with immune system function. This is important from an aging point of view for two reasons. First, immunoprecipitations or immunoproducts can be damaging to nerve cells, and have been implicated as being responsible for age-related neurodegeneration in the brain in general, and in Alzheimer's disease specifically, but this is a new finding for the cochlea and age-related hearing loss – presbycusis. Second, autoimmune problems, where the immune system starts attacking its own nerve cells, is another leading candidate for a causative factor in neurodegenerative aging conditions. These immune products are likely to come from the vascular supply to the cochlea, yet may be a causative component for age-related hearing loss due to the resultant damage to the cochlea sensory cells. There are 3 genes having to do with post-translational protein changes, including protein binding properties, with two of these genes involved in carbohydrate metabolism (sugar/glucose binding in mitochondria for cellular respiration). These genes are related to the production of reactive oxygen species (ROS), which damage nerve cells, and have been implicated in age-related neurodegenerative disorders, and in cases of cochlear sensorineural hearing loss. For example, problems in cellular respiration can lead to accumulation of toxic intracellular substances, causing damage to sensory cell structures and abnormal metabolic processing along with increased levels of ROS [31-33]. The last gene, involved in mammary gland functioning, showed a significant increase with age. A closer inspection of the expression levels for this gene have shown that the observed effect cannot be attributed to the presence of outliers in the data. Although not directly involved in sensory functioning, this gene may change its espression as part of general degenerative processes in inner ear. An error in this gene annotation cannot be ruled out as well. This observation is definitely worth another look. The above-described initial observations are quite provocative, in that we have several groupings of genes that have important functional significance for aging and hearing, including important aspects of cochlear, inner ear functioning. These animal-model gene-array investigations are quite useful for guiding human genetics experiments aimed at identifying candidate genes involved in the susceptibility and progression of human age-related hearing loss and other age-dependent neurosensory disorders. Regarding methodological aspects of this paper, we would like to note that a pertinent multivariate method for selection of differentially expressed genes should include two components: finding subsets of candidate genes that jointly separate the classes (states) under comparison and testing statistical significance of this separation; the latter does not necessarily refer to characteristics of a classification (allocation) rule such as classification error rates. We also would like to stress that the problem of significance testing in the multivariate formulation is not equivalent to the problem of statistical classification (supervised learning). While closely related, these problems are fundamentally different. For example, the use of the classification error rate as a criterion for selection of important variables is appropriate where the aim is to form a discriminant rule for the subsequent outright allocation of unclassified samples to one of the known classes. A very good separation between classes can sometimes be provided by looking at a single feature variable (gene) so that the classification error rate is difficult to reduce further by including other (probably quite significant) variables in the rule. However, one would like to keep the chance of missing other interesting variables to a minimum. The problem dealt with in this paper is not that of classification or prediction. Our method is designed to find gene combinations that change in concert (as a set) their expression due to some biological factors. The problem thus formulated reduces to that of significance testing. It must be emphasized that our method is designed not only to identify sets of genes whose interrelationships differ but also those genes with marginal effects. More importantly, the method seeks to provide an alternative way of making a specific FWER-based multiple testing procedure less conservative and, to some extent, less dependent on the subset pivotality requirement (see [4] for definition), by extracting more information from the data. In addition, this approach can be used for ranking and clustering those genes that have been declared differentially expressed by univariate A new algorithm for identifying differentially expressed gene combinations has been developed. This algorithm is built on the earlier proposed multivariate test statistic [6] and successive selection of differentially expressed sets of genes [5]. The algorithm includes an improved random search procedure designed to generate candidate gene combinations of a given size. Cross-validation is used to provide replication stability of the search procedure. A permutation two-sample test is used for significance testing. We design a multiple testing procedure to control the family-wise error rate when selecting significant combinations of genes that result from a successive selection procedure. A target set of genes is composed of all significant combinations selected via random search. The performance of the proposed search-and-testing procedure has been evaluated by computer simulations and analysis of replicated Affymetrix gene array data on age-related changes in gene expression in the inner ear of CBA mice. CBA mice from the University of Rochester vivarium served as subjects for this study who had similar environmental, non-ototoxic life histories. Subjects were mice of the following age groups: Young adult (N = 9, 3–4 months) and old (N = 12, 24–33 months). All animal procedures were approved the University of Rochester Committee on Animal Resources. Cochlear dissections Subject groups of the present report had extensive behavioral and neurophysiological hearing testing prior to sacrifice, verifying that the old mice had age-related hearing loss. Mice were sacrificed by cervical dislocation. Then both cochleae for each mouse were immediately dissected using a Zeiss stereomicroscope. The cochleae were placed in cold saline for micro dissection of the cochlear partition (basilar membrane, organ of Corti and spiral ligament), and were then placed in cold Trizol. A detailed protocol for Trizol can be found at http://www.fgc.urmc.rochester.edu. All samples were stored at -80°C for microarray gene expression processing. Gene expression microarrays The RNA quality was assessed by electrophoresis using the Agilent Bioanalyzer 2100. Between 200 ng and 2 ug of total RNA from each sample was used to generate a high fidelity cDNA, which was modified at the 3' end to contain an initiation site for T7 RNA polymerase, while 1 ug of cDNA was used in an in vitro transcription (IVT). 20 ug of full-length cRNA, from each mouse (age groups as described above), was fragmented. After fragmentation, the cDNA, full-length cRNA, and fragmented cRNA were analyzed by electrophoresis using the Agilent Bioanalyzer 2100 to assess the appropriate size distribution prior to microarray hybridization. Detailed protocols for sample preparation using the Ambion MessageAmp protocol can be found at http://www.ambion.com. Affymetrix M430A High density oligonucleotide array set (A) which queried 20,000 murine probe sets was used. Each gene on the subarray is represented by 11 pairs of 25 mer oligonucleotides that span the coding region for the 20,000 genes and ESTs represented (clear overlapping of genes is evident). Each probe pair consists of a perfect match (PM) sequence that is complementary to the cDNA target, and a miss-match (MM) sequence that has a single base pair mutation in a region critical for target hybridization; this sequence serves as a control for non-specific hybridization. Staining and washing of all arrays was performed in the Affymetrix fluidics module per manufacturer's protocol. Streptavidin phycroerythrin stain (SAPE, Molecular Probes) was the fluorescent conjugate used to detect hybridized target sequences. All arrays in this study were assessed for "array performance" prior to data analysis. Methods for data analysis and computer simulations The methodology of data analysis and design of computer simulations have been described at length in the preceding sections. The relevant software for data analysis and simulations is included in the Additional Material Files [see the folder "MultivariateSearch"]. Here we supplement this information with a description of the generator of multivariate exchangeable normal random vectors which we used in our simulations. Suppose we want to generate a normal random vector X in R^d with mean vector M R^d and covariance matrix Σ whose entries are σ^2 and ρσ^2 on and off diagonal, respectively. It is well-known that X can be represented in the form X = M + CZ, where Z is the standard normal vector with mean 0 in R^d and C is a d × d matrix with CC^T = Σ. (Here C^T denotes the transpose of C.) The matrix C may be chosen symmetric and can be computed using well-known algebraic procedures. However, our matrix Σ has a special structure: Σ = (1 - ρ)σ^2I[d ]+ ρσ^21[d × d], where I[d ]is a unit matrix of size d and 1[d × d ]is a square matrix with all the d^2 entries being equal to 1. Using this we look for C of the same form: C = αI[d ]+ β1[d × d]. From the relations C^2 = Σ and α^2 = σ^2(1 - ρ), 2αβ = ρσ^2, so that Authors' contributions YX is responsible for the computational component of this study. He also participated in the methodology development. LK, AG, and AY have equally contributed to various methodological aspects of the proposed multivariate analysis. RF provided experimental data and biological interpretation of the net results of data analysis. Supplementary Material Additional File 1: The additional folder "MultivariateSearch" includes the following three sub-folders: 1. SAO _Simulation 2. SRS_Simulation 3. TSSearch Each subfolder contains a Unix executable file. The executable file "SASearch" implements the algorithm based on simulated annealing optimization. The executable file "SRSearch" implement the version based on simple random search. The exectuable file "TSSearch" for the two-stage search is is located in the sub-folder "TSSearch". Each sub-folder also contains two input files. The file "simulation04_UI.txt" is an input file for data analysis. Suppose the data file is named xxxx.marr, then the input file should be named as xxxx_UI.txt. To analyze the data from the file xxxx.marr, type: [Executable file] xxxx or [Executable file] 0 xxxx. The input file "simulation04_ui.txt" is designed for simulation experiments. To conduct simulations, one has to prepare an input file with the name: XXX_simu_ui.txt, where XXX is a string that follows the naming convention of computer files. An input file for data analysis with the name XXX_ui.txt is also needed. To run simulations, type: [executable file] 1 xxxx. We thank Dr. Andrew Brooks, Dr. Mary D'Souza, Dr. Xiaoxia Zhu, Martha Erhardt, John Housel and Cristine Brower for technical assistance. Methodological discussions with Dr. Anthony Almudevar are greatly appreciated. We are grateful to anonymous reviewers whose comments have helped us improve the manuscript. The research is supported by NIH Grants P01 AG09524 from the National Institute on Aging, P30 DC05409 from the National Institute on Deafness & Communication Disorders, and the International Center for Hearing & Speech Research, Rochester, NY. • Almudevar A. A simulated annealing algorithm for maximum likelihood pedigree reconstruction. Theoretical Population Biology. 2003;63:63–75. doi: 10.1016/S0040-5809(02)00048-5. [PubMed] [Cross Ref • Dudoit S, Shaffer JP, Boldrick JC. Multiple hypothesis testing in microarray experiments. Statistical Science. 2003;18:71–103. doi: 10.1214/ss/1056397487. [Cross Ref] • Pesarin F. Multivariate Permutation Tests: With Applications in Biostatistics. Wiley, Chichester; 2001. • Westfall PH, Young S. Resampling-Based Multiple Testing. Wiley, New York; 1993. • Szabo A, Boucher K, Jones D, Klebanov L, Tsodikov A, Yakovlev A. Multivariate exploratory tools for microarray data analysis. Biostatistics. 2003;4:555–567. doi: 10.1093/biostatistics/4.4.555. [ PubMed] [Cross Ref] • Szabo A, Boucher K, Carroll W, Klebanov L, Tsodikov A, Yakovlev A. Variable selection and pattern recognition with gene expression data generated by the microarray technology. Mathematical Biosciences. 2002;176:71–98. doi: 10.1016/S0025-5564(01)00103-1. [PubMed] [Cross Ref] • Zinger AA, Klebanov LB, Kakosyan AV. Stability Problems for Stochastic Models. VNIISI Moscow; 1999. Characterization of distributions by mean values of statistics in connection with some probability metrics; pp. 47–55. • Chilingaryan A, Gevorgyan N, Vardanyan A, Jones D, Szabo A. Multivarite approach for selecting sets of differentially expressed genes. Mathematical Biosciences. 2002;176:59–69. doi: 10.1016/ S0025-5564(01)00105-5. [PubMed] [Cross Ref] • Ambroise C, McLachlan GJ. Selection bias in gene extraction on the basis of microarray gene-expression data. Proceedings of the National Academy of Sciences USA. 2002;99:6562–6566. doi: 10.1073/ pnas.102102699. [PMC free article] [PubMed] [Cross Ref] • Klebanov L, Gordon A, Xiao Y, Land H, Yakovlev A. Technical Report. Department of Biostatistics and Computational Biology University of Rochester; 2004. A new test statistic for testing two-sample hypotheses in microarray data analysis.http://www.urmc.rochester.edu/smd/biostat/people/faculty/andrei.htm • Bolstad BM, Irizarry RA, Astrand M, Speed TP. A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. Bioinformatics. 2003;19:185–193. doi: 10.1093/bioinformatics/19.2.185. [PubMed] [Cross Ref] • Irizarry RA, Gautier L, Cope LM. An R package for analyses of Affymetrix oligonucleotide arrays. In: Parmigiani G, Garrett ES, Irizarry RA, Zeger SL, editor. The Analysis of Gene Expression Data. Springer, New York; 2003. pp. 102–119. • Corso JF. Age correction factor in noise-induced hearing loss: a quantitative model. Audiology. 1980;19:221–232. [PubMed] • Gates GA, Caspary DM, Clark W, Pillsbury HC, 3rd, Brown SC, Dobie RA. Presbycusis. Otolaryngol Head Neck Surg. 1989;100:266–271. [PubMed] • Gelfand SA, Piper N, Silman S. Consonant recognition in quiet as a function of aging among normal hearing subjects. J Acoust Soc Am. 1985;78:1198–1206. [PubMed] • Gelfand SA, Piper N, Silman S. Consonant recognition in quiet and in noise with aging among normal hearing listeners. J Acoust Soc Am. 1986;80:1589–1598. [PubMed] • Lonsbury-Martin BL, Cutler WM, Martin GK. Evidence for the influence of aging on distortion product otoacoustic emissions in humans. J Acoust Soc Am. 1991;89:1749–1759. [PubMed] • Lonsbury-Martin BL, Martin GK, Probst R, Coats AC. Acoustic distortion products in rabbit ear canal. I. Basic features and physiological vulnerability. Hear Res. 1987;28:173–189. doi: 10.1016/ 0378-5955(87)90048-7. [PubMed] [Cross Ref] • Probst R, Lonsbury-Martin BL, Martin GK. A review of otoacoustic emissions. J Acoust Soc Am. 1991;89:2027–2067. [PubMed] • Willott JF. Effects of aging, hearing loss, and anatomical location on thresholds of inferior colliculus neurons in C57BL/6 and CBA mice. J Neurophysiol. 1986;56:391–408. [PubMed] • Willott JF. Aging and the auditory system: Anatomy, physiology, and psychophysics. Singular Publishing Group, San Diego; 1991. • Frisina DR, Frisina RD. Speech recognition in noise and presbycusis: relations to possible neural mechanisms. Hear Res. 1997;106:95–104. doi: 10.1016/S0378-5955(97)00006-3. [PubMed] [Cross Ref] • Frisina DR, Frisina RD, Snell KB, Burkard R, Walton JP, Ison JR. Auditory temporal processing during aging. In: Hof PR, Mobbs CV, editor. Functional Neurobiology of Aging. Academic Press, San Diego; 2001. pp. 565–579. • Frisina RD. Anatomical and neurochemical bases of presbycusis. In: Hof PR, Mobbs CV, editor. Functional Neurobiology of Aging. Academic Press, San Diego; 2001. pp. 531–547. • Jacobson M, Kim SH, Romney J, Zhu X, Frisina RD. Contralateral suppression of distortion-product otoacoustic emissions declines with age: A comparison of findings in CBA mice with human listeners. Laryngoscope. 2003;113:1707–1713. doi: 10.1097/00005537-200310000-00009. [PubMed] [Cross Ref] • Guimaraes P, Zhu X, Cannon T, Kim SH, Frisina RD. Sex differences in distortion product otoacoustic emissions as a function of age in CBA mice. Hear Res. 2004. [PubMed] • Gates GA, Couropmitree NN, Myers RH. Genetic associations in age-related hearing thresholds. Arch Otolaryngol Head Neck Surg. 1999;125:654–659. [PubMed] • Kelley PM, Harris DJ, Comer BC, Askew JW, Fowler T, Smith SD, Kimberling WJ. Novel mutations in the connexin 26 gene (GJB2) that cause autosomal recessive (DFNB1) hearing loss. Am J Hum Genet. 1998;62:792–799. doi: 10.1086/301807. [PMC free article] [PubMed] [Cross Ref] • Kelsell DP, Dunlop J, Stevens HP, Lench NJ, Liang JN, Parry G, Mueller RF, Leigh IM. Connexin 26 mutations in hereditary non-syndromic sensorineural deafness. Nature. 1997;387:80–83. doi: 10.1038 /387080a0. [PubMed] [Cross Ref] • Kikuchi T, Adams JC, Miyabe Y, So E, Kobayashi T. Potassium ion recycling pathway via gap junction systems in the mammalian cochlea and its interruption in hereditary nonsyndromic deafness. Med Electron Microsc. 2000;33:51–56. doi: 10.1007/s007950070001. [PubMed] [Cross Ref] • Manna SK, Zhang HJ, Yan T, Oberley LW, Aggarwal BB. Over-expression of manganese superoxide dismutase suppresses tumor necrosis factor-induced apoptosis and activation of NF- and activated protein-1. J Biol Chem. 1998;273:132–145. [PubMed] • Frisina ST, Mapes F, Kim SH, Frisina DR, Frisina RD. Comprehensive characterization of hearing loss in aged diabetics. Paper presented at Society for Neuroscience 33rd Annual Meeting New Orleans, LA. 2003. • Gries A, Herr A, Kirsch S, Gunther C, Weber S, Szabo G, Holzmann A, Bottiger BW, Martin E. Inhaled nitric oxide inhibits platelet-leukocyte interactions in patients with acute respiratory distress syndrome. Crit Care Med. 2003;31:1697–170. doi: 10.1097/01.CCM.0000063446.19696.D3. [PubMed] [Cross Ref] Articles from BMC Bioinformatics are provided here courtesy of BioMed Central • MedGen Related information in MedGen • PubMed PubMed citations for these articles Your browsing activity is empty. Activity recording is turned off. See more...
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC529250/?tool=pubmed","timestamp":"2014-04-19T06:15:30Z","content_type":null,"content_length":"117205","record_id":"<urn:uuid:a333358b-01cb-4d5e-a28e-a005254f28c9>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
searching for text for studying representation theory up vote 3 down vote favorite I'm a graduate student studying algebraic geometry. Recently, When I studying Hodge theory, I saw sl2-representation is used in Hodge theory. So I think that studying representation theory may be helpful for me. Can you recommand me some good text for studying representation theory, focused on materials helpful for algebraic geometry, and not so difficult to read. rt.representation-theory textbook-recommendation reference-request 1 Should be community wiki – Igor Rivin Jun 13 '12 at 14:40 mathoverflow.net/questions/13/learning-about-lie-groups related – Alexander Chervov Jun 13 '12 at 17:14 This question mathoverflow.net/questions/2755 is not exactly identical with yours, but the answers posted there could just as well be posted here. – darij grinberg Jun 13 '12 at 20:15 add comment 5 Answers active oldest votes I would recommend Introduction to Lie Algebras and Representation Theory by Humphreys. It covers all the basics of Lie algebras and their representations, though mostly in characteristic 0 and over an algebraically closed field. But then, going into any depth with the theory without these assumptions requires a lot of additional work anyway, and knowing up vote 2 down the theory for this nice case is a good start. vote accepted add comment Recently, Pavel Etingof published a book about his course on representation theory with his students. This is a description of the book: "The goal of this book is to give a "holistic" introduction to representation theory, presenting it as a unified subject which studies representations of associative algebras and treating the representation theories of groups, Lie algebras, and quivers as special cases. Using this approach, the book covers a number of standard topics in the representation theories of these structures. Theoretical material in the book is supplemented by many problems and exercises which touch upon a lot of additional topics; the more difficult exercises are provided with up vote 9 down vote The book is designed as a textbook for advanced undergraduate and beginning graduate students. It should be accessible to students with a strong background in linear algebra and a basic knowledge of abstract algebra." This is a very enjoyable book to read. A version can be found here: link text. You get a bonus buying the book, where some nice historical remarks are added. 1 The bonus also contains two short digression-style chapters on homological algebra (although I think they won't surprise an algebraic geometer). – darij grinberg Jun 13 '12 at 20:18 add comment A nice good book has been written by Fulton and Harris, of course there are many others up vote 4 down Indeed Fulton and Harris is very appropriate for someone focusing on algebraic geometry. – Claudio Gorodski Jun 13 '12 at 19:13 1 That's a good collection of examples and exercises, but a pretty bad text for reading. Flawed proofs, lots of missing details, lack of separation between results and proofs make learning much harder than it should be. – darij grinberg Jun 13 '12 at 20:13 add comment I'd also recommend up vote 3 down Erdmann, Karin; Wildon, Mark J. Introduction to Lie algebras. 1 I taught a 1-semester class for seniors/1st year grad students out of this book and they all really enjoyed it. It should be very suitable for self study as well. – daveh Jun 14 '12 at 12:38 add comment If you are particularly interested in sl2-representation theory, there is the book by Mazorchuk: Lectures on $\mathfrak{sl}_2(\mathbb{C})$ - modules. up vote 0 down vote add comment Not the answer you're looking for? Browse other questions tagged rt.representation-theory textbook-recommendation reference-request or ask your own question.
{"url":"http://mathoverflow.net/questions/99453/searching-for-text-for-studying-representation-theory/99608","timestamp":"2014-04-20T06:40:15Z","content_type":null,"content_length":"75176","record_id":"<urn:uuid:921eb9ac-9893-4022-ba0d-8e3d57e46626>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
Tossing a coin... 3 times ? Hmmm, not sure if this is right...and it's hard to say with words. George, with the HTH configuration. It first appears that the winner is simply decided by the last toss, since they both chose HT as the first two flips in their configurations. But since the first toss of the each configuration is an H, that H could be George's winning flip, since his last flip is H. Like I said, it's hard to say with words. English is my second language...an I don't have a first one
{"url":"http://www.physicsforums.com/showthread.php?t=246740","timestamp":"2014-04-18T00:33:43Z","content_type":null,"content_length":"73509","record_id":"<urn:uuid:7fe1a3c1-328c-4831-b305-d8c2e835df3a>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
Drag Coefficient, Surface Roughness, and Reference Wind Speed P.A. Hwang Oceanography Division Introduction: Drag coefficient (CD) and surface roughness (z0) are important parameters for quantifying air-sea momentum and energy exchanges. In almost all earlier analyses of CD and z0, the reference wind speed at 10-m elevation (U10) is used. Although the adoption of U10 in the analyses provides a consistent reference level of wind measurements (compared to earlier reports using "mast height" or "anemometer height"), the dynamical significance of the 10-m elevation in the marine boundary layer is not clear. From a heuristic point of view, surface waves are the ocean surface roughness, and the air-sea interaction processes are influenced by wave conditions. Because the influence of surface waves decays exponentially, with wavelength serving as the vertical length scale, the dynamically meaningful reference elevation should be λ, the characteristic wavelength of the peak component of surface wave spectrum. Wavelength Scaling: For a logarithmic wind profile, where U is the wind speed, u[*] the wind friction velocity, κ the von Kármán constant (0.4), z the vertical elevation, and z0 the dynamic roughness, the drag coefficient referenced to wind speed at the elevation equal to one-half of the surface wavelength, C[λ/2]= u[*]^2 / U^2[λ/2] is Equation (2) suggests that a natural expression of the dimensionless surface roughness is k[p]z[0], Analysis of several field experiments with wind-sea dominated wave conditions1 yields with Ac = 1.22 × 10-2, ac = 0.704, where ω** = u*ωp / g is the dimensionless frequency of the air-sea coupled system, g the gravitational acceleration, and ωp the frequency of peak spectral component (Fig. 1(a)). Although the data scatter is large, the result is a substantial improvement over C10 (Fig. 1(b-d)). Figure 1 illustrates convincingly that C[λ/2] is indeed superior to C10 for accounting for the surface wave effects on the wind stress over the ocean surface. Figure 2(a) shows the dimensionless roughness kpz0. The measurements from different sources collapse very nicely following the wavelength scaling function (Eq. (3)). The surface roughness is also frequently expressed in terms of the Charnock parameter, z[0*] = z[0] g / u[*]^2 which can be obtained from applying the dispersion relation of surface waves to Eq. (3),1 The Charnock parameter increases with ω[**] in the frequency range of the field data (Fig. 2(b)). The parameterization function (Eq. (5)) predicts that the dependence of z0*(ω**) reverses to a decreasing trend at ω** -0.25 and z0* approaches ω[**]^2 asymptotically at very large ω**. The inverse relationship of z0*(ω**) at high ω** range has been observed in laboratory measurements2 (Fig. 2 (c)). It is noted that the apparently different behavior of z0* with ω** shown in Fig. 2(c) had generated considerable controversies in the air-sea research community, as is evident from the large number of empirical (and contradicting) relations of z0*(ω**) that have been proposed in the literature.2 The present analysis predicts correctly the increasing trend of z0*(ω**) in the low-frequency region and the decreasing trend in the high-frequency region. Conclusions: Quantification of the drag coefficient and surface roughness is critically influenced by the choice of the reference wind speed. It is shown that when the scaling wind speed is referenced to the surface waves, the experimental data of drag coefficient and surface roughness can be collapsed into simple clusters. The analysis yields strong evidence that surface waves are the roughness element of the ocean surface and that they exert significant influences on air-sea momentum exchanges. The results also have important implications on ocean remote sensing applications. [Sponsored by ONR] 1P.A. Hwang, "Influence of Wavelength on the Parameterization of Drag Coefficient and Surface Roughness," J. Oceanogr. (in press). 2I.S.F. Jones and Y. Toba (eds.), Wind Stress Over the Ocean (Cambridge University Press, Cambridge, UK, 2001).
{"url":"http://www.nrl.navy.mil/research/nrl-review/2004/ocean-and-atmospheric-science-and-technology/hwang/","timestamp":"2014-04-21T05:00:23Z","content_type":null,"content_length":"31357","record_id":"<urn:uuid:48abbf3c-fcc5-4a74-a569-4e60afb08f8a>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
Constrained graph processes Constrained graph processes Let $\mathcal{Q}$ be a monotone decreasing property of graphs $G$ on $n$ vertices. Erdős, Suen and Winkler [5] introduced the following natural way of choosing a random maximal graph in $\mathcal{Q} $: start with $G$ the empty graph on $n$ vertices. Add edges to $G$ one at a time, each time choosing uniformly from all $e\in G^c$ such that $G+e\in \mathcal{Q}$. Stop when there are no such edges, so the graph $G_\infty$ reached is maximal in $\mathcal{Q}$. Erdős, Suen and Winkler asked how many edges the resulting graph typically has, giving good bounds for $\mathcal{Q}=\{$bipartite graphs$\} $ and $\mathcal{Q}=\{$triangle free graphs$\}$. We answer this question for $C_4$-free graphs and for $K_4$-free graphs, by considering a related question about standard random graphs $G_p\in \ The main technique we use is the 'step by step' approach of [3]. We wish to show that $G_p$ has a certain property with high probability. For example, for $K_4$ free graphs the property is that every 'large' set $V$ of vertices contains a triangle not sharing an edge with any $K_4$ in $G_p$. We would like to apply a standard Martingale inequality, but the complicated dependence involved is not of the right form. Instead we examine $G_p$ one step at a time in such a way that the dependence on what has gone before can be split into 'positive' and 'negative' parts, using the notions of up-sets and down-sets. The relatively simple positive part is then estimated directly. The much more complicated negative part can simply be ignored, as shown in [3]. Full Text:
{"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v7i1r18/0","timestamp":"2014-04-16T04:24:07Z","content_type":null,"content_length":"16465","record_id":"<urn:uuid:1782d2df-d758-41c8-91dd-fda89ab65779>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Potentially good, semi-stable reduction => good reduction ? up vote 7 down vote favorite Does a smooth proper variety having semi-stable reduction as well as potentially good reduction have good reduction ? Note that over a $p$-adic field, this is true for the Galois representations in the $p$-adic étale cohomology of $X$. (With a bit more details: fix a field $K$ complete for a discrete valuation, with ring of integers $\mathcal{O}_K$, and a smooth proper $K$-scheme $X$. We say that $X$ has good reduction if it is the generic fibre of a smooth proper $\mathcal{O}_K$-scheme $\mathcal{X}$. We say that $X$ has semi-stable reduction if it is the generic fibre of a flat proper $\mathcal{O}_K$-scheme $\mathcal{X}$ such that étale-locally on $\mathcal{X}$ there exists a smooth morphism $\mathcal{X}\to \text{Spec}(\mathcal{O}_K[T_1,\dots,T_r]/(T_1\dots T_r-\pi))$ where $\pi$ is a prime element of $\mathcal{O}_K$. We say that $X$ has potentially one of the two properties above if it has it after a finite extension $K'/K$.) ag.algebraic-geometry arithmetic-geometry Dear Matthieu, I think the answer should typically be yes, despite Will Savin's counterexample. One way to think about it is as follows: take your semistable model over $K$, base-change to $K'$ 1 (where it is no longer semistable: $T_1 \cdots T_r = \pi$ turns into $T_1 \cdots T = (\pi')^e$), blow-up to make it semi-stable again. Now your assumption is that there is also a good reduction model over $K$'. So when minimal models are unique, you will (I think) get a contradiction. (You have to think a bit about minimality of models.) Now models for curves over DVRs are like models for ... – Emerton Jun 17 '13 at 3:01 ... surfaces over a field, and minimal models are unique except in the case of rational and ruled surfaces. So e.g. for curves of genus $\geq 1$, I think that the answer to your question will be yes. In higher dimension, I'm not sure what is known; the theory of semistable models is less well-developed, because one doesn't have resolution of singularities and related tools in mixed characteristic. Anyway, it is not coincidence that Will Savin's counterexample has genus $0$. Regards, – Emerton Jun 17 '13 at 3:04 Dear Matthew, OK. In fact I wondered whether I'd add a comment to my question, to mention that the situation is somehow under control for curves and abelian varieties essentially due to the uniqueness of smooth models. I'm glad to hear that although not much is known in general, one expects the phenomenon to be quite typical. Thanks! – Matthieu Romagny Jun 17 '13 at 7:19 In response to Emerton: there are also examples of this behavior with degenerating families of K3 surfaces, e.g., a family over $\text{Spec} \mathbb{C}[[t]]$ whose total space has an $A_1$-singularity. After an etale base change, there are two small modifications (related to each other by a flop), so that the family has potentially good reduction. For the original family, you can blow up the singular point, resolving the singularity at the expense of adding an additional irreducible component to the central fiber. – Jason Starr Jun 17 '13 at 16:41 Dear Jason, Thanks for this example. Dear Matthieu, You should probably take my comments as just a broad outline of how to think about these kinds of question. The total space of a degenerating family of surfaces (as in Jason's example) is three-dimensional, and so I am guessing that his example is related to phenomena in the minimal model program for three-folds (something that I don't know much about, but that is a deeply researched area that others know a huge amount about!). The theory of reduction of $d$-dimensional varieties is closely related to the theory of minimal models for ... – Emerton Jun 19 '13 at 0:53 show 2 more comments 1 Answer active oldest votes No. Take $K=\mathbb Q_3$. Consider the projective genus $0$ curve $x^2+y^2+3z^2$. This has bad reduction, since it has no rational points. It is semistable, since after adjoining $i$ it has exactly that form. It has potentially good reduction, since all genus $0$ curves do. up vote 9 down vote accepted So you at least need to say it has good reduction etale locally. Maybe I'm being really dumb, but why does no rational points imply bad reduction? I know the standard argument for genus $1$, but it involves Lang's theorem. – Matt Jun 16 '13 at If it had good reduction you could lift a point over the residue field using Hensel's lemma, and all genus $0$ curves over finite fields have points. Alternately, by a well-known fact there are two genus $0$ curves over a local field, one with good reduction and rational points and one with bad reduction and without rational points. – Will Sawin Jun 16 '13 at 2:58 Oops. Of course. In the one-dimensional case the set of torsors is identified with the Brauer group of the field under the standard connecting homomorphism, which in this case is $C_1$. Sorry. The other part of the argument is the same. I was trying to remember why genus $0$ curves over finite fields always have points. – Matt Jun 16 '13 at 4:26 Thank you Will. Now I'm a bit confused with the Galois representation counterpart, e.g. like in Matthew Emerton's example answering this MO question. Namely, in that example Matt Emerton claims that $P$ and $E$ have the same étale cohomology but I would have thought that the Galois action on the cohomologies match only after restriction to a finite extension $K'/K$. Am I wrong? – Matthieu Romagny Jun 16 '13 at 15:31 What are P and E? – Will Sawin Jun 16 '13 at 17:04 show 3 more comments Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry arithmetic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/133840/potentially-good-semi-stable-reduction-good-reduction","timestamp":"2014-04-18T18:23:54Z","content_type":null,"content_length":"66033","record_id":"<urn:uuid:4a1956b3-98ec-4875-950a-2023317cc12e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
The Successes of String Theory String theory has gone through many transformations since its origins in 1968 when it was hoped to be a model of certain types of particle collisions. It initially failed at that goal, but in the 40 years since, string theory has developed into the primary candidate for a theory of quantum gravity. It has driven major developments in mathematics, and theorists have used insights from string theory to tackle other, unexpected problems in physics. In fact, the very presence of gravity within string theory is an unexpected outcome! Predicting gravity out of strings The first and foremost success of string theory is the unexpected discovery of objects within the theory that match the properties of the graviton. These objects are a specific type of closed strings that are also massless particles that have spin of 2, exactly like gravitons. To put it another way, gravitons are a spin-2 massless particle that, under string theory, can be formed by a certain type of vibrating closed string. String theory wasn’t created to have gravitons — they’re a natural and required consequence of the theory. One of the greatest problems in modern theoretical physics is that gravity seems to be disconnected from all the other forces of physics that are explained by the Standard Model of particle physics. String theory solves this problem because it not only includes gravity, but it makes gravity a necessary byproduct of the theory. Explaining what happens to a black hole (sort of) A major motivating factor for the search for a theory of quantum gravity is to explain the behavior of black holes, and string theory appears to be one of the best methods of achieving that goal. String theorists have created mathematical models of black holes that appear similar to predictions made by Stephen Hawking more than 30 years ago and may be at the heart of resolving a long-standing puzzle within theoretical physics: What happens to matter that falls into a black hole? Scientists’ understanding of black holes has always run into problems, because to study the quantum behavior of a black hole you need to somehow describe all the quantum states (possible configurations, as defined by quantum physics) of the black hole. Unfortunately, black holes are objects in general relativity, so it’s not clear how to define these quantum states. String theorists have created models that appear to be identical to black holes in certain simplified conditions, and they use that information to calculate the quantum states of the black holes. Their results have been shown to match Hawking’s predictions, which he made without any precise way to count the quantum states of the black hole. This is the closest that string theory has come to an experimental prediction. Unfortunately, there’s nothing experimental about it because scientists can’t directly observe black holes (yet). It’s a theoretical prediction that unexpectedly matches another (well-accepted) theoretical prediction about black holes. And, beyond that, the prediction only holds for certain types of black holes and has not yet been successfully extended to all black holes. Explaining quantum field theory using string theory One of the major successes of string theory is something called the Maldacena conjecture, or the AdS/CFT correspondence. Developed in 1997 and soon expanded on, this correspondence appears to give insights into gauge theories, such as those at the heart of quantum field theory. The original AdS/CFT correspondence, written by Juan Maldacena, proposes that a certain 3-dimensional (three space dimensions, like our universe) gauge theory, with the most supersymmetry allowed, describes the same physics as a string theory in a 4-dimensional (four space dimensions) world. This means that questions about string theory can be asked in the language of gauge theory, which is a quantum theory that physicists know how to work with! Like John Travolta, string theory keeps making a comeback String theory has suffered more setbacks than probably any other scientific theory in the history of the world, but those hiccups don’t seem to last that long. Every time it seems that some flaw comes along in the theory, the mathematical resiliency of string theory seems to not only save it, but to bring it back stronger than ever. When extra dimensions came into the theory in the 1970s, the theory was abandoned by many, but it had a comeback in the first superstring revolution. It then turned out there were five distinct versions of string theory, but a second superstring revolution was sparked by unifying them. When string theorists realized a vast number of solutions of string theories (each solution to string theory is called a vacuum, while many solutions are called vacua) were possible, they turned this into a virtue instead of a drawback. Unfortunately, even today, some scientists believe that string theory is failing at its goals.
{"url":"http://www.dummies.com/how-to/content/the-successes-of-string-theory.html","timestamp":"2014-04-19T04:42:12Z","content_type":null,"content_length":"54579","record_id":"<urn:uuid:36854408-a995-44a8-af5c-eea4f189ebf5>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
Differential Drag Jim Franconeri December 17, 2003 ASEN 5050 Use of Differential Drag as a Satellite Constellation Stationkeeping Strategy Introduction and Theory There is a growing trend in the commercial satellite industry to use constellations of satellites working together to accomplish complex mission objectives (Orbcomm, Iridium, Globalstar, Teledesic). Many of these new constellations are consist of several small satellites orbiting at LEO (Low-Earth-Orbit) altitudes in precise lattice formations. Building and launching a cluster of small satellites can be cheaper and can offer more versatility than a single large one. Additionally, a cluster can be more robust than a single large satellite; if one part fails, another can be maneuvered into its place and take over its function. One of the challenges of operating small satellites flying in formation is that they require periodic orbit maintenance, or "stationkeeping", to maximize their utility. Stationkeeping refers to the adjustment of a satellite’s orbital characteristics in an effort to maintain a certain optimal position relative to the other satellites in the constellation. This paper discusses the advantages and risks of this technology as they pertain to the orbit maintenance requirements of a constellation of low-earth-orbit satellites. A hypothetical LEO constellation will be postulated, along with an example of how differential drag calculations would be performed in support of the mission design and operations stages of the constellation. The Need for Stationkeeping It is in the interest of satellite constellation companies to maintain certain relative positions between the satellites in their constellations. Ideally, under two-body dynamics, a group of satellites orbiting a planet in the same orbital plane would experience no force variations, which would cause them to stray from their optimal relative positions within the constellation. But in fact, variations in Earth's geopotential forces (due to unique satellite groundtracks), variations in the density of the Earth's atmosphere, variations in solar radiation pressure, and variations in spacecraft attitude cause differing orbit perturbations, resulting in relative drift among satellites in a plane. If left uncorrected, these perturbations can accumulate and significantly change the relative position of satellites, destroying the overall structure of the constellation. The term "stationkeeping" refers to efforts undertaken to ensure that satellites are in their proper orbits, which maximizes the usefulness of the constellation as a whole. Typically, stationkeeping is performed by firing a satellite's thrusters to raise or lower an orbit. This paper offers an explanation of an alternative stationkeeping method, called differential drag. Proper stationkeeping provides optimal satellite coverage for customers, minimizing coverage gaps and providing satellite contacts at regular intervals. Some constellations require high-fidelity stationkeeping to maintain communication cross-links between satellites, maintaining the integrity of the satellite network. For satellites flying in very close proximity, stationkeeping is needed to ensure against collision. The two orbit characteristics that are managed with stationkeeping efforts are the Argument of Latitude and the Period. The Argument of Latitude, (or ArgLat,) is defined as the Argument of Perigee plus the True Anomaly, and is therefore a measure of the number of degrees subtended by the satellite since it last crossed the equator heading north. Period is defined as the time the satellite takes to complete one orbit (seconds per orbit). (Another useful representation of the period is as a number of degrees subtended per day, obtained by dividing the number of seconds in a day by the orbital period, and then multiplying by 360 degrees.) Comparing these two characteristics between satellites in a plane allows for a measurement of stationkeeping success. Perfect stationkeeping in a plane would mean that satellites would orbit at evenly spaced Arguments of Latitude, and would all have the same period. Two convenient terms used in describing the stationkeeping status of satellites in a plane are "phase error" and "relative drift rate." The phase error between two satellites is defined as the actual ArgLat difference between the two minus their desired ArgLat difference. The relative drift rate is defined as the difference between two satellites' periods, most conveniently measured in degrees of ArgLat per day. The Differential Drag Technique The concept of using differential drag as a stationkeeping strategy is a relatively new idea in the field of constellation satellite operations, and there are very few companies currently making use of it. One of these companies is Orbcomm, whose constellation of 35 LEO satellites uses differential drag as a supplement to thrust-based stationkeeping. Differential drag has been proposed as a means of maintaining spacecraft proximity to the International Space Station. Use of intentional differential drag has also been proposed as a perturbing mechanism to be used to in demonstration missions verifying the feasibility of propellant-based stationkeeping systems. The differential drag stationkeeping technique requires attitude or geometry changes to maximize or minimize the amount of atmospheric drag a satellite encounters, in order to maintain or speed up its orbital velocity. This is accomplished by varying the amount of drag area presented to the atmosphere between satellites of the same plane. The acceleration due to drag on a satellite is given by: C[d] indicates the coefficient of drag, a dimensionless quantity that represents the extent to which the satellite is susceptible to atmospheric drag. It depends upon the material out of which the satellite is made, and upon the aerodynamic shape of the satellite. M is the satellite mass, and A is the cross-sectional area presented in the velocity direction. r is the density of the atmosphere through which the satellite is flying, and is difficult to determine accurately. v[rel] is not the orbital velocity of the satellite, but rather the velocity of the satellite relative to the rotating atmosphere. The negative sign indicates that the acceleration of drag is always in the anti-velocity direction. For identical satellites flying in the same formation and in the same plane, all of these parameters will be approximately equal, except the atmospheric density. It is primarily the density fluctuations in the Earth's atmosphere that cause acceleration differences between satellites flying in formation. The density of the upper atmosphere is subject to variations caused by three main factors. Imperfect homogeneity of the molecules composing the atmosphere has a strong effect on its density. Radiation from the Sun heats up the Earth's atmosphere, causing small local density differences and large density differences between the day and night sides of the Earth. And finally, the Earth's own geomagnetic activity heats atmospheric particles by causing high-speed collisions with charged energetic particles from the Sun, so variations in the Earth's magnetic field translate to variations in the atmospheric density. Control of the varying drag accelerations could be performed by altering the drag coefficients, masses, velocities of the satellites, but changing these parameters is inefficient and impractical. The velocity differences between satellites in the same plane will be very small, and the resulting effects on drag acceleration are negligible compared to the effects of differing areas. Assuming that satellites in the constellation all are designed to have the same mass and coefficient of drag, the only option available for actively controlling drag accelerations is the presented area. The orbits of satellites with a larger presented area will experience greater drag acceleration, and thus decay more rapidly than those with a smaller presented area. As seen in the above equation, the drag acceleration is proportional to the presented area. The rate of period change is given by: where r[p] is the atmospheric density at perigee and s is the satellite's path. For circular or near-circular orbits, this equation shows the directly proportional relationship between the rate of change of the period and the presented area. This is the fundamental idea behind differential drag: differing the presented areas of two satellites to create an unequal rate of period change between Practical Implementation Achieving an unequal presented area requires variable geometry on the satellite. The area presented to the velocity direction must be adjustable, to provide a proportional change in the drag force. Possible options for altering area profiles include the angling of solar panels, yaw/pitch/roll angle changes, or deploying a specifically designed drag sail. (Higher altitude orbits for which atmospheric drag plays little role may use a solar sail in a similar way, taking advantage of solar radiation pressure.) Differential drag maneuvers must be planned so as to avoid impact on normal satellite operations. Spacecraft need not hold their differential drag orientations throughout an entire orbit. For instance, if solar panels are being used to achieve the change in presented area, the maneuvers could occur during eclipse, in order to minimize the effect of solar array pointing efficiency loss. The differential drag operations should be performed during as large a portion of the orbit as possible without adversely impacting operations. Differential Drag Planning for a Hypothetical Constellation This section will demonstrate how stationkeeping via differential drag is planned before the launch of a LEO satellite constellation. Satellite Orbital Characteristics Imagine that we are responsible for the stationkeeping of a satellite constellation planned to consist of 6 satellites orbiting in the same plane. The satellites' target altitude will be 600 kilometers, and their target eccentricity will be 0. To optimize coverage and minimize gaps, we would like these satellites to orbit at a separation of 60º in Argument of Latitude. Their inclination will be chosen such that the satellites will be in a sun-synchronous orbit. The Earth's sun-synchronous orbit RAAN rate is: [] We will be in a circular orbit, so [] The satellites must be launched at an inclination of 97.79º in order to achieve a sun-synchronous orbit. Satellite Physical Characteristics Assume that the main body of the satellite is cylindrical, and rectangular solar panels are attached at right angles to the body. The solar panels are free to rotate about their long axis (the x-axis). Also assume that there is a cylindrical antenna connected to the bottom of the main body, pointing towards the Earth (the negative z-axis). The satellite travels at an angle normal to the solar panel and antenna directions (along the y-axis). The satellite maintains the antenna pointing to nadir at all times, but the vehicle is able to yaw (rotate about the z-axis) freely and articulate the solar panels in order to maximize solar power collection. Solar panels: Rectangular, 1 meter x 4 meters, negligible thickness Main body: Cylindrical, 1-meter diameter, 5 meters long Antenna: Cylindrical, 0.1 meters diameter, 5 meters long Calculating presented area Since the vehicle is always nadir-pointing, and the main body and the antenna are cylindrical, their contribution to the satellite's presented area remains constant for all yaw angles. Their contribution can be calculated simply by multiplying their diameters by their lengths: The presented area contribution of the solar panels is dependent upon the solar panel rotation angle and the yaw angle. Let us define a 0º yaw angle to be the configuration shown in the diagram above, with the long axis of the solar panels being perpendicular to the velocity vector. A 90º yaw angle would mean that the long axis of the solar panels would lie parallel to the velocity vector. For a yaw angle of zero, if the solar panels are edge-on to the velocity direction (solar panel angle = 0º), they contribute nothing to the presented area because they are of negligible thickness. If the solar panels are face-on to the velocity direction (solar panel angle = 90º), their contribution to the presented area is calculated by multiplying their length by their width. The presented area calculation becomes more complicated at different yaw angles and solar panel angles. Keeping in mind that we must multiply by two since there are two panels, and designating the yaw angle as γ and the solar panel angle as θ, we have: The area drops off sinusoidally as the yaw angle approaches 90º and the solar panel angle approaches 0º. The chart above shows the calculated satellite presented area as a function of solar panel angle and yaw angle. At a yaw angle of 90º or a solar panel angle of 0º, the solar panels contribute nothing to the presented drag area. In that case, the presented drag area is simply the baseline sum of the body and antenna areas, 5.5m^2. But as the yaw angle decreases and the solar panel angle increases, the drag area contribution from the solar panels increases. The panel contribution (and thus the total presented area) reaches a maximum at a yaw angle of 0º and a solar panel angle of 90º. In this configuration, the total area that the satellite presents to atmospheric drag is 13.5m^2. These calculations show that articulating the solar panels allows us to drastically change the satellite's drag area. When flying at a yaw angle of 0º, changing from a solar panel angle of 0º to a solar panel angle of 90º allows us to increase the presented area by 245%. Of course, we are not free to move the solar panels and yaw to any angle we would like at any time. The satellites must use their solar panels for power collection, and so must fly with the specific yaw and solar panel angles that point the panels at the Sun to allow for optimal power collection. But during the segment of the orbit while the satellite is flying through the Earth's eclipse, power collection concerns do not apply. During the eclipse, we are free to maneuver the yaw angle and the solar panel angle to any configuration we choose without fear that we are missing out on power collection. Since we have chosen a circular sun-synchronous orbit , each satellite will experience the same eclipse duration each day. The following diagram illustrates the percentage of each orbit that falls in eclipse. The view is from a perspective perpendicular to the plane of the satellites' orbit, and is not to scale. To calculate the size of the eclipse region, determine the angle a: The satellite is in eclipse for 132.1º out of 360º, or 37% of each orbit. This time can be used to perform differential drag operations without impacting the satellites' power generation We must make some simplifying assumptions in order to calculate the effect that these maneuvers will have on the drag accelerations. We will assume that the satellites in the plane will be flying with similar yaw and solar panel profiles while they are in sunlight, meaning that the only differential drag they will experience in sunlight will be caused by the atmospheric density variations. We assume that all vehicles in the plane have the same drag coefficients, velocities, and masses. We will perform all of our differential yaw and solar panel maneuvers while in eclipse, and thus we can effect a change in the presented area of 245% for 37% of each orbit. Let us assume that in sunlight, the satellites will fly with an average of 10m^2 presented area. We may then compute the effectiveness of the maneuvers: For any two satellites in the plane, A and B, [] [] Since our objective is to equalize the accelerations on these two vehicles, we set these two equations equal to one another: If satellite 1 flies with a maximum drag area of 13.5m^2 in eclipse, and satellite 2 flies with a minimum drag area of 5.5m^2, then averaged over the whole orbit, Therefore, our solar panel and yaw maneuvers should be able to compensate for the differential accelerations caused by a 36% variation in the atmospheric density encountered by two satellites. An Operational Example of the Technique To demonstrate the differential drag technique operationally, consider the two satellites in our plane designed to be spaced at 180 degrees apart in Argument of Latitude from one another. Assume that perturbations have caused the two to have slightly different semi-major axes (and hence, slightly different orbital periods). The satellite with the larger semi-major axis (Satellite A) will have the larger orbital period (figure 1). Since Satellite A’s period is longer than that of Satellite B, Satellite A will start to lag behind Satellite B in the orbit (figure 2). The Argument of Latitude difference between the two will increase from the optimum of 180 degrees. Differential drag will be used to remedy the situation by placing Satellite A into a maximum drag configuration and Satellite B into a minimum drag configuration. This will have the effect that Satellite A’s semi-major axis will decay more rapidly than that of Satellite B. Lowering a satellite’s orbit decreases its period. Satellite A’s period will decrease more rapidly than Satellite B’s (figure 3). It is important to note that at some point as the satellites return to the optimal 180-degree phasing, the differential drag configurations must be reversed so that when they reach the correct phase, they also have the same period. The satellites will maintain differing area profiles until the two periods are matched and the phase angle is the desired 180 degrees, at which time the two satellites will be returned to their nominal attitude configurations (figure 4). Phase plots To visualize the relative drift situation in a satellite plane, it is useful to construct a phase plot on which each satellite is represented according to its phase angle and drift rate relative to some reference orbit. For convenience in discussing phase plots, it is useful to define the terms "flare" and "feather". The flare mode will refer to the maximum presented area configuration, and the feather mode will refer to the minimum presented area configuration. The reference orbit should be picked so as to be convenient for assessing which satellites should flare and which should feather. The orbit of a particular disabled satellite that cannot change attitude may be a convenient choice as a reference. If all satellites in a plane are equally healthy, a convenient reference vehicle could be selected by determining which selection would require the least amount of correction to achieve perfect stationkeeping. The reference need not even necessarily coincide with the orbit of one of the satellites, but may rather be some computed mean of all the satellites' orbits. The reference satellite (or the reference orbit) lies at the origin of the plot. The remaining satellites in the plane are then plotted according to their phase error and drift rate with respect to the reference satellite. The phase plot may then be used in making differential drag decisions. A sample phase plot is shown below, for our plane of six satellites flying in formation. The ideal situation for our plane would be a perfect 60 degrees of ArgLat between each satellite, and each satellite orbiting with the same identical period. In this example, Satellite 1 has been designated as the reference vehicle, so it lies at the origin. We desire all the satellites to have the same period as Satellite 1 – this means every satellite will ideally have a zero relative drift rate with respect to Satellite 1. Satellites with positive drift rates are in lower orbits than the reference and have shorter periods than the reference. Satellites with negative drift rates are in higher orbits than the reference and have longer periods than the reference. We also desire all the satellites to have perfect phase separation from one another – this means every satellite will ideally have zero ArgLat phase error with respect to Satellite 1. A non-zero drift rate means that the satellite’s phase angle with respect to the reference vehicle is changing. If the satellite lies in the 2^nd or 4^th quadrant, the phase angle is coming closer to ideal. However, if the satellite lies in the 1^st or 3^rd quadrant, the phase angle is growing further from ideal. As seen in the phase plot above, three satellites are drifting in the correct direction (Satellites 3, 4, and 5), and two are drifting in the wrong direction (Satellites 2 and 6). Satellite 2 is two degrees away from its ideal position, and that spacing is growing worse by 0.03 degrees every day. As time passes, Satellite 2 will move further and further left on this plot unless action is taken to return it to its ideal orbit. A decelerate thrust (a thrust in the anti-velocity direction) could accomplish the correction, lowering Satellite 2’s orbit and reversing the drift direction. However, since our constellation is utilizing differential drag, may be able to remedy the situation without expending valuable propellant. If the presented area of Satellite 2 is maximized so that it exceeds the presented area of the other satellites in the plane, it will experience a greater drag force, and its orbit will be lowered faster than the others. While it is true that this method will take an extended period of time compared to an instantaneous correction from a thrust, the propellant savings makes the differential drag method more efficient. Even if differential drag is not fully successful on its own, if it is employed until the point where stationkeeping limits are about to be violated, then the corrective thrust will be smaller in magnitude than it would have been if differential drag was not attempted. This still represents a propellant savings over the solely thrust-based correction. Advantages and Limitations of Differential Drag The major advantage of the differential drag technique is the spacecraft mass savings that it affords. A more massive satellite is more expensive to launch, therefore satellite manufacturers try to conserve mass wherever possible. If a satellite can perform all or part of its stationkeeping without propellant, the propellant mass savings is translated into launch cost savings. If a satellite can perform all its stationkeeping using differential drag, it not only saves propellant mass, but also thruster instrumentation mass (thruster valves and tubing, propellant tank, temperature and pressure sensors, etc.). For constellations using both differential drag and propulsive thrusting as stationkeeping mechanisms, prudent use of differential drag can help conserve propellant, which may be valuable over the life of the satellite to accomplish other larger-scale orbit changes, such as inter-plane re-spacing or changing ascending node drift rates. If designers do not want to rely on differential drag for stationkeeping, it may be used as a backup plan in the event that there is damage to the propulsion system. Using differential drag is less disruptive to the attitude control system than a propulsive thrust. Differential drag allows the orbit-changing impulse to be spread out over a long period of time, meaning that the attitude control system doesn't have to work as hard to maintain the proper satellite attitude. An instantaneous thrust may jar the satellite violently enough that attitude control may be temporarily lost. In this way, differential drag is gentler on the other satellite subsystems when compared to impulsive thrusting. Atmospheric drag eventually causes the orbits of LEO satellites to decay to the point that they de-orbit. The capability to minimize the presented area allows a satellite operator to delay this de-orbit for as long as possible. If a satellite were nearing its de-orbit altitude, the satellite could be switched to its minimum drag mode. This mode would postpone the de-orbit for as long as possible, maximizing the satellite's service-providing lifetime, and allowing for an increased measure of control over the location of the eventual de-orbit. Differential drag may be used to achieve and maintain orbit circularity. Satellites in elliptical orbits will experience greater atmospheric drag while near periapse than they will near apoapse. If a certain percentage of an orbit were allotted to differential drag operations, the segment of the orbit near periapse could be chosen to operate in a maximum drag configuration, resulting in a more rapid lowering of the apoapse and a circularization of the orbit. Differential drag is more effective when the density of the atmosphere is higher. Therefore, differential drag is more efficient for constellations operating at lower altitudes. Differential drag may not be feasible for constellations whose satellites orbit at high altitude. At altitudes around 800 kilometers, the perturbing effects of solar radiation pressure are approximately equivalent to those of atmospheric drag. For orbits operating above this approximate threshold, it would be more efficient to design a system that would make use of solar radiation pressure differentials rather than atmospheric drag differentials. The solar cycle should also be accounted for when planning a mission where differential drag will be used. The density of the atmosphere is greater during solar maximum, and therefore differential drag will be more effective at solar maximum. A thorough analysis of the atmospheric density that the satellites are expected to encounter over their designed lifetimes should be performed prior to adopting the differential drag strategy. Since prediction of atmospheric density is very difficult, a significant margin should be built in to the stationkeeping budget to ensure control even in unexpectedly low-density atmospheric conditions. If differential drag is to be the sole method of stationkeeping, then mission designers must be able to show that it will be sufficient even throughout the minimum atmospheric density conditions expected to be encountered during the mission. For satellite missions planned to operate primarily during years in which the solar activity is near a minimum, the resulting smaller benefits of differential drag might be counterbalanced by the increased operational complexity required to perform the maneuvers themselves, to the point that the strategy is not worthwhile. An important consideration of using differential drag is that orbits can never be raised using the technique. Differential drag merely gives a measure of control over how fast the orbits decay. Stationkeeping is maintained by creating different decay rates among satellites, rather than by impulsively raising or lowering orbits using thrust maneuvers. In fact, it results in increased drag effects and an increase in the altitude loss over time when compared to a satellite flying at a minimum drag configuration throughout the duration of its mission lifetime. Therefore differential drag would not be effective for a mission that requires satellites to remain at a fixed altitude. The differential drag operations may adversely impact other satellite subsystems. Changing the satellite's attitude may result in decreased power generation capability, or decreased attitude control. These risks may be mitigated by performing the maneuvers during times in the orbit that the satellite has surplus power, or when attitude changes will have no effect on power generation (such as eclipse periods). Another risk is malfunction of the area change actuator. If solar panels are designed to be used to aid in stationkeeping, but as a result of some anomaly become immobile, or are restricted as a result of unforeseen operational constraints, the stationkeeping strategy would be sabotaged. Of course, the malfunction risk always exists for any satellite subsystem, so it is not necessarily a strike against differential drag when comparing it to other methods. A satellite relying on a thruster system for stationkeeping faces risks that that system may malfunction. Whatever method is chosen, it must be designed to be as robust as possible. It is wise design practice to build in redundancy wherever possible, including the stationkeeping mechanism. Design teams must weigh each of these risks and design the spacecraft to function with an amount of risk/performance tradeoff they are comfortable with. The option of differential drag is an inexpensive stationkeeping aid for satellite companies. It should be considered in the planning of any low-earth-orbit constellation that has stationkeeping requirements. Its use may not be practical for every venture, due to its various limitations, but for missions where it is feasible, it can prove to be an efficient propellant-saving technique. Fundamentals of Astrodynamics and Applications (Second Edition) David A. Vallado Autonomous Constellation Maintenance James R. Wertz, John T. Collins, Simon Dawson, Hans J. Koenigsmann, Curtis W. Potterveld Autonomous Formation Flying Control for Multiple Satellite Constellations Low-Cost, Minimum-Size Satellites for Demonstration of Formation Flying Modes at Small, Kilometer-Size Distances Hans F. Meissinger, John Collins, Gwynne Gurevich, Simon Dawson MEMS Mega-pixel Micro-thruster Arrays for Small Satellite Stationkeeping Daniel W. Youngner, Son Thai Lu, Edgar Choueiri, Jamie B. Neidert, Robert E. Black III, Kenneth J. Graham, Dave Fahey, Rodney Lucus, Xiaoyang Zhu
{"url":"http://ccar.colorado.edu/asen5050/projects/projects_2003/franconeri/","timestamp":"2014-04-17T03:49:06Z","content_type":null,"content_length":"111977","record_id":"<urn:uuid:2b12f64a-2d79-4e36-b982-faa2053df4c5>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
Comparative social mobility revisited: models of convergence and divergence in 16 countries Results 1 - 10 of 12 , 1995 "... In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null ..." Cited by 981 (70 self) Add to MetaCart In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P -values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology and psychology. - SOCIOLOGICAL METHODOLOGY 1995, EDITED BY PETER V. MARSDEN, CAMBRIDGE,; MASS.: BLACKWELLS. , 1995 "... It is argued that P-values and the tests based upon them give unsatisfactory results, especially in large samples. It is shown that, in regression, when there are many candidate independent variables, standard variable selection procedures can give very misleading results. Also, by selecting a singl ..." Cited by 253 (19 self) Add to MetaCart It is argued that P-values and the tests based upon them give unsatisfactory results, especially in large samples. It is shown that, in regression, when there are many candidate independent variables, standard variable selection procedures can give very misleading results. Also, by selecting a single model, they ignore model uncertainty and so underestimate the uncertainty about quantities of interest. The Bayesian approach to hypothesis testing, model selection and accounting for model uncertainty is presented. Implementing this is straightforward using the simple and accurate BIC approximation, and can be done using the output from standard software. Specific results are presented for most of the types of model commonly used in sociology. It is shown that this approach overcomes the difficulties with P values and standard model selection procedures based on them. It also allows easy comparison of non-nested models, and permits the quantification of the evidence for a null hypothesis... - DEPARTMENT OF STATISTICS, UNIVERSITY OFWASHINGTON , 1993 "... In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null ..." Cited by 89 (6 self) Add to MetaCart In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are:- from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory;- Bayes factors offer a way of evaluating evidence in favor ofa null hypothesis;- Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis;- Bayes factors are very general, and do not require alternative models to be nested;- several techniques are available for computing Bayes factors, including asymptotic approximations which are easy to compute using the output from standard packages that maximize likelihoods;- in "non-standard " statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance - American Economic Review , 2012 "... The U.S. tolerates more inequality than Europe and believes its economic mobility is greater than Europe’s, though they had roughly equal rates of intergenerational occupational mobility in the late twentieth century. We extend this comparison into the nineteenth century using 23,000 nationally-repr ..." Cited by 4 (1 self) Add to MetaCart The U.S. tolerates more inequality than Europe and believes its economic mobility is greater than Europe’s, though they had roughly equal rates of intergenerational occupational mobility in the late twentieth century. We extend this comparison into the nineteenth century using 23,000 nationally-representative British and U.S. fathers and sons. The U.S. was more mobile than Britain through 1900, so in the experience of those who created the U.S. welfare state in the 1930s, the U.S. had indeed been “exceptional. ” The U.S. mobility lead over Britain was erased by the 1950s, as U.S. mobility fell from its nineteenth century levels. [W]e have really everything in common with America nowadays, except, of course, language. Oscar Wilde, The Canterville Ghost (1887). The economies of Britain and the U.S. have had much in common over the two centuries since the American Revolution: their legal traditions and property rights systems, sources of labor, capital, and technology, political ties and alliances in two world wars, and – Wilde’s quip notwithstanding – language and culture are the most obvious. One significant respect in which , 1998 "... Weakliem agrees that Bayes factors are useful for model selection and hypothesis testing. He reminds us that the simple and convenient BIC approximation corresponds most closely to one particular prior on the parameter space, the unit information prior, and points out that researchers may have diffe ..." Cited by 3 (0 self) Add to MetaCart Weakliem agrees that Bayes factors are useful for model selection and hypothesis testing. He reminds us that the simple and convenient BIC approximation corresponds most closely to one particular prior on the parameter space, the unit information prior, and points out that researchers may have different prior information or opinions. Clearly a prior that represents the available information should be used, although the unit information prior often seems reasonable in the absence of strong prior information. It seems that, among the Bayes factors likely to be used in practice, BIC is conservative in the sense of tending to provide less evidence for additional parameters or "effects". Thus if a Bayes factor based on additional prior information favors an effect, but BIC does not, the prior information is playing a crucial role and this should be made clear when the research is reported. BIC may well have a role as a baseline reference analysis to be provided in routine reporting of research results, perhaps along with Bayes factors based on other priors. In Weakliem's 2 x 2 table examples, BIC and Bayes factors based on Weakliem's preferred priors lead to similar substantive conclusions, but both differ from those based on P values. When there is additional prior information, the technology now exists to express it as , 2003 "... Which background factors matter more in intergenerational educational attainment: Social class, cultural capital or cognitive ability? A random effects approach ..." Add to MetaCart Which background factors matter more in intergenerational educational attainment: Social class, cultural capital or cognitive ability? A random effects approach , 1993 "... Event history analysis seems ideally suited for the analysis of World Fertility Survey (WFS) data, which consists of full birth histories and related information. However, it has not been much used for this purpose, and most analyses of WFS data have consisted of tabulations of standard fertility ra ..." Add to MetaCart Event history analysis seems ideally suited for the analysis of World Fertility Survey (WFS) data, which consists of full birth histories and related information. However, it has not been much used for this purpose, and most analyses of WFS data have consisted of tabulations of standard fertility rates, and regressions with children ever born as the dependent variable, both of which have disadvantages. We suggest that this is because event history analysis has practical drawbacks for WFS data, even though, in principle, it provides a superior analytic framework. These are the many partial dates, the computational burden of discrete-time event history analysis, the need to take account of five clocks at once (age, period, cohort, time since last event, and parity), and the difficulty of interpreting the coefficients. We propose a modeling strategy for the event history analysis of WFS data which aims to overcome these problems, and we apply it to the previously unanalyzed WFS data from... , 2003 "... Abstract. ..." "... This paper represents the views of the author and does not necessarily reflect the opinions of Statistics Canada. Data in many forms Statistics Canada disseminates data in a variety of forms. In addition to publications, both standard and special tabulations are offered. Data are available on the In ..." Add to MetaCart This paper represents the views of the author and does not necessarily reflect the opinions of Statistics Canada. Data in many forms Statistics Canada disseminates data in a variety of forms. In addition to publications, both standard and special tabulations are offered. Data are available on the Internet, compact disc, diskette, computer printouts, microfiche and microfilm, and magnetic tape. Maps and other geographic reference materials are available for some types of data. Direct online access to aggregated information is possible through CANSIM, Statistics Canada’s machine-readable database and retrieval system. How to obtain more information Inquiries about this product and related statistics or services should be directed to: Client Services, Income "... Research on social mobility, not just in Britain, has come to mean the increasingly sophisticated measurement of social fluidity, i.e. the continuing inequalities of relative mobility chances. This narrow focus has not only been at the expense of contributing to wider sociological debates, but has a ..." Add to MetaCart Research on social mobility, not just in Britain, has come to mean the increasingly sophisticated measurement of social fluidity, i.e. the continuing inequalities of relative mobility chances. This narrow focus has not only been at the expense of contributing to wider sociological debates, but has also led to the neglect of the individual and structural impact of actual mobility trends as well as their implications for general sociological theory. This could be remedied by closer attention to absolute rates of upward and downward mobility and their effects upon the homogeneity of social strata. It is argued that research should be directed to the cultural context of mobility as well as the economic and political. The Maturing of Mobility Studies More than any other topic in sociology the literature on intergenerational occupational mobility, over the past half century, has demonstrated the growing professionalism of sociological research. Unsubstantiated generalisation and critically unexamined data will no longer do. The methodological rigour and mathematical sophistication of
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1815260","timestamp":"2014-04-18T20:15:11Z","content_type":null,"content_length":"37156","record_id":"<urn:uuid:39101c17-8b03-44ed-bd9f-8e941229f99d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Fayetteville, GA Algebra 2 Tutor Find a Fayetteville, GA Algebra 2 Tutor ...We begin by identifying areas that students struggle with and identify methods that make sense and that the students can use on a regular basis. Once students begin to build up confidence we progress to more difficult questions. The highlight of the course is when students take another diagnost... 17 Subjects: including algebra 2, chemistry, physics, geometry ...I do enjoy tutoring or interacting one on one with students. I do strongly believe that each student is capable of succeeding, especially in Math. Working one on one with them does allow me to find their strengths as well as their weaknesses, and build a strategy that will fit in to make him/her succeed. 13 Subjects: including algebra 2, calculus, geometry, algebra 1 ...Have taken MANY college Chemistry courses. I am highly qualified, and have taught Physical Science for four years. I am a highly-qualified state certified teacher in Science grades 4-12. 11 Subjects: including algebra 2, chemistry, physics, biology ...During my time in high school and college, I did well in my Math (Calculus I-II), Chemistry, and Physics courses and have tutored in all of these subjects. Currently, I co-teach Math 1 and GPS Algebra 1. I also run an After-School Math Tutorial program where I tutor students who are in Math 1-3. 13 Subjects: including algebra 2, chemistry, physics, geometry I am a former high school English teacher with four years' experience teaching adult ESL. I am working as a tutor now because I enjoy working with individuals and small groups instead of large classes. I am willing to work with students of any age. 27 Subjects: including algebra 2, Spanish, reading, English Related Fayetteville, GA Tutors Fayetteville, GA Accounting Tutors Fayetteville, GA ACT Tutors Fayetteville, GA Algebra Tutors Fayetteville, GA Algebra 2 Tutors Fayetteville, GA Calculus Tutors Fayetteville, GA Geometry Tutors Fayetteville, GA Math Tutors Fayetteville, GA Prealgebra Tutors Fayetteville, GA Precalculus Tutors Fayetteville, GA SAT Tutors Fayetteville, GA SAT Math Tutors Fayetteville, GA Science Tutors Fayetteville, GA Statistics Tutors Fayetteville, GA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Fairburn, GA algebra 2 Tutors Forest Park, GA algebra 2 Tutors Griffin, GA algebra 2 Tutors Hampton, GA algebra 2 Tutors Jonesboro, GA algebra 2 Tutors Lake City, GA algebra 2 Tutors Mcdonough algebra 2 Tutors Morrow, GA algebra 2 Tutors Peachtree City algebra 2 Tutors Riverdale, GA algebra 2 Tutors Stockbridge, GA algebra 2 Tutors Tyrone, GA algebra 2 Tutors Union City, GA algebra 2 Tutors Villa Rica, PR algebra 2 Tutors Woolsey, GA algebra 2 Tutors
{"url":"http://www.purplemath.com/fayetteville_ga_algebra_2_tutors.php","timestamp":"2014-04-16T16:03:05Z","content_type":null,"content_length":"24189","record_id":"<urn:uuid:fcda38fe-d912-408e-9906-c4a992439f35>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
Having trouble with econ problem - banks and balance sheets April 4th 2012, 09:43 AM Having trouble with econ problem - banks and balance sheets I've been reading my textbook and looking through my class notes and I just can't seem to make sense out of this problem. Any help would be appreciated!! I'm totally lost! :/ Suppose a bank has $100 million dollars of assets to invest. It can either invest in risky or safe loans. Safe loans will be worth $105 million in one year with certainty. Risky loans will be worth either $70 M or $130 M in one year, each with equal probability. Notice that risky loans have an expected value next year of 0.5 * 70 + 0.5 * 130= $100 M, so that risky loans are socially inefficient relative to safe loans: safe loans have both a higher average return and lower uncertainty. a) Suppose the bank has $80 million in one-year time deposits. For simplicity, assume that they pay no interest, so that the bank's liability will still be $80 M in one year. Assume that the deposits are insured by the government, and for simplicity assume that the bank does not have to pay a premium for this insurance. If the bank's assets are worth less than $80 M in one year, the government will shut the bank down and pay the difference between $80 million and the value of assets. [HINT: SET UP THE BANK’S BALANCE SHEET] Compute the following if the bank invests its assets in safe loans: (a) the probability that the bank will fail; (b) the expected value of the bank's net worth; and (c) the expected size of the government's bailout in one year. Do the same assuming the bank invests in risky loans. Which investment strategy would the government prefer the bank to undertake? Which strategy will the bank choose, assuming that the bank's primary objective is to ensure its survival, and its secondary objective is to maximize its expected net worth.
{"url":"http://mathhelpforum.com/business-math/196810-having-trouble-econ-problem-banks-balance-sheets-print.html","timestamp":"2014-04-18T20:47:19Z","content_type":null,"content_length":"4866","record_id":"<urn:uuid:ccaaa055-7464-429e-835b-2de93dbc2248>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Renormalization Group for dummies Its not quite like that. You write out the power series and you calculate the terms term by term using perturbation theory. The first term is finite - no problem. Second and higher terms turn out to be infinite. It took people a long time to figure out why this happened but the answer turned out to be the thing you expand the power series in, the coupling constant, was infinite and not 1/137 like they thought. Substitute infinity in any power series and its infinite or undefined. To get around this problem you impose a cutoff (this can be looked on as taking the term you are expanding about as finite and later taking its limit to infinity) redo your perturbation procedure and you find it is all OK. The reason is the coupling constant secretly depends on the cutoff - which is rather trivial the way I explained it - but it took people a long time to realize this is what is going on. The value of 1/137 they used was the value measured at a certain energy scale which in effect was measuring the value with a cutoff. But the equations they used had no cutoff so you were really using the value as the cutoff goes to infinity ie infinity. As you take the cutoff to infinity the coupling constant goes from 1/137 to infinity which is why without the cutoff terms in the power series are infinite. Now what you do is assume the coupling constant is a function of what is called the renormalised coupling constant (which is the value from experiment ie 1/137) so you know it will not blow up. You assume it is a function of the un-renormalised parameter ie the value that does blow up to infinity, expand it in a power series, substitute into the original power series, collect terms so you now have a power series in the renormalised parameter. But you have chosen it so it is the value found from experiment so does not blow up. Carry out your calculations, take the cutoff to infinity and low and behold you find the answer is finite. The infinity minus infinity thing comes from when you analyse the behavior of the series when you use the renormalised value and take the limit - you find a term that is the original un-renormalised coupling constant and a term that is a function of the renormalised coupling constant - they in fact both blow up to infinity as you take the limit - but are subtracted from each other so the answer is finite. If you are at the level of Calculus For Dummies its probably going to be difficult to understand the paper I linked to. I have a degree in applied math and I found it tough going. So don't feel bad you are finding it tough - I congratulate you for trying. If you want to get your math up to the level you can understand that paper you will have to a study a more advanced textbook. The one I recommend is Boas - Mathematical Methods Unfortunately otherwise you will have to accept the hand-wavey arguments. As I said in my original post the jig is up with this one - you need to do the math. To give a specific answer to the questions you raised and how to relate it to renormalisation I will see what I can do. If you substitute infinity into any power series it will give either infinity or terms like infinity minus infinity that are undefined. An example of the first would be the power series e^x where each term is positive and an example of the second would be sine x which has positive and negative terms. Now one way to try and get around this is let x be finite and take the limit. Before you take the limit everything is fine - its finite and perfectly OK. Now what you do is assume the variable in the power series is a function of another variable (in this case called the re-normalized variable) that you hope does not blow up to infinity as you take the limit. You expand that out as a power series and you collect terms so you have a new power series in that variable. Now you take the limit and low and behold, for the case of what are called re-normalizable theories, everything is finite. You look deeper into why this occurred and you find changing to this new variable introduced another term in your equations that also blows up to infinity but is subtracted from the original variable that blows up to infinity - as you take the limit they cancel and you are left with finite answers. Normally when you calculate the terms in a power series using perturbation theory it does not blow up to infinity. That's because it is very unusual to chose a variable to expand the power series in that is infinity. The only reason it was done is they did not understand the physics well enough then - they did not understand the measurement of the constant they thought was small at 1/137, and was a good thing to expand in a power series about since as it is raised to a power it gets smaller and smaller, was a measurement made with a cutoff basically in effect. The equations they used had no cutoff and it all went pair shaped. When this happened it left some of the greatest physicists and mathematicians in the world totally flummoxed - these are guys like Dirac with awesome mathematical talent. It was a long hard struggle over many years to sort out what was going on. The thing that fooled them was the parameter you expanded about as a power series secretly depended on the regulator or cutoff and as you took its limit to infinity it went to infinity. When you expanded about a different one that didn't blow up to infinity everything worked OK. As I was penning this I remembered John Baez wrote an interesting article about re-normalisation that may be of help:
{"url":"http://www.physicsforums.com/showthread.php?p=3777061","timestamp":"2014-04-16T13:41:05Z","content_type":null,"content_length":"92885","record_id":"<urn:uuid:18b4c381-3875-410b-9882-34d0a3ba6bcd>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
About MathBlog The goal of MathBlog.dk is to increase our own (and the readers) knowledge and get YOU interested in math and computer science while having fun at the same time. We know that this is a very difficult goal but I hope you do enjoy. If you have a fun website with some interesting science related recreational inspiration feel free to mention it to us. We are always looking for more inspiration. A little history This page was started by me Kristian Edlund in 2010 with a series of posts about solutions to Project Euler. I realized that solving the problems was not enough. I had to communicate them to others. Not because I want to brag about what I can do, since thousands of others have solved the same problems before me. No, the need for communication is that it forces me to delve into the theories behind the problem and understand them to a level where I can relay the information. Not long after the beginning Bjarki Ágúst Guðmundsson became a regular reader and active participant in the discussions. In 2012 we took the leap and he joined me as an author for the site contributing with his vast knowledge of problems out there to be solved as well as solutions to some of all the problems. Why do you post the solutions? This is an often asked question, since some feel that it is cheating to read solutions on the web. However , we do it since many of the problems can be solved the brute force, or it can be solved by smarter means. We seek these smarter means whenever we can. So we post the solutions both in order to learn even more our self, but also to inspire others to delve into more computer science and To be honest you can find the solutions for most of the problems online already, some of them without an explanation and some of them with. So if you want to cheat, you can easily do so. We hope that you use the solutions as an inspiration to improve your problem solving skills once you have conquered the problems yourself.
{"url":"http://www.mathblog.dk/about/","timestamp":"2014-04-20T03:10:43Z","content_type":null,"content_length":"28217","record_id":"<urn:uuid:72a50602-8ab9-42a5-81be-9e2391e743d7>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Binary or Logical Selection : Given an array A and a binary or logical vector S, how do I select the rows of A when S = 1 or True ? Replies: 6 Last Post: Sep 24, 2012 5:42 AM Messages: [ Previous | Next ] Re: Binary or Logical Selection : Given an array A and a binary or logical vector S, how do I select the rows of A when S = 1 or True ? Posted: Sep 23, 2012 5:57 AM > If A is n by 3 and S is an n-vector, then define a selection matrix, > say T, as those rows of an n by n identity matrix for which the > corresponding element of S is 1. Then R = TA contains the > coordinates of the points you want. (Yes, I realize that might be > seen as cheating, because we haven't expressed T as a function of > S using standard operators. If we could, we wouldn't need T!) Ah, I think that I understand that. I have to start with S as I have spent various lines defining it, then, I think I can say : T ij = 0 Tii = Si R = TA Something like that ?
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2404231&messageID=7894823","timestamp":"2014-04-19T05:15:58Z","content_type":null,"content_length":"24704","record_id":"<urn:uuid:4e63d783-49b4-4041-8270-ee627eba04bf>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
Employment history 2013-present Central European University, Department of Mathematics and its Applications, adjunct professor 2012-present Alfréd Rényi Institute of Mathematics, Number Theory Divison, research advisor 2008-2012 Alfréd Rényi Institute of Mathematics, Number Theory Divison, senior research fellow 2006-2008 Alfréd Rényi Institute of Mathematics, Number Theory Divison, research fellow 2003-2006 The University of Texas at Austin, Department of Mathematics, instructor 1999-2003 Princeton University, Department of Mathematics, teaching assistant 1996-1997 University of Illinois at Urbana-Champaign, Department of Mathematics, teaching assistant 1993-1996 Eötvös Loránd University, Department of Algebra and Number Theory, teaching assistant 1992-1996 Budapest Technical University, Department of Mathematics, teaching assistant
{"url":"http://www.renyi.hu/~gharcos/employment.htm","timestamp":"2014-04-20T00:54:03Z","content_type":null,"content_length":"4110","record_id":"<urn:uuid:fd7d9764-fb97-49e2-9f12-a6bcd9c85552>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
SudoCue - Sudoku Glossary 45 The sum of all digits in a house. This sum is used in solving Killer Sudoku puzzles, by comparing it to the the sum of all cages that partially overlap a house. When several houses are involved, the technique uses multiples of 45. Aligned Pair Two cells in the same intersection. This term is only used for the Aligned Pair Exclusion solving technique. Aligned Pair An advanced solving technique that examines all possible combinations for an aligned pair. Each combination is tested for validity and excluded when it would cause a conflict. Exclusion When a candidate has no combinations left, it can be eliminated. A set of N unsolved cells with candidates for N+1 digits. The acronym ALS is more commonly used. N can be between 1 and 8. In solving strategies, the Almost Locked Set can be used Almost Locked Set as a strong inference between participating candidates. When all candidates of a single digit within the set are connected with a weak input link, the remaining candidates are locked in the set. The candidates for any of the remaining digits can therefore be used in a subsequent weak output link. ALS The acronym for Almost Locked Set Alternating A requirement for propagating inferences through a chain or loop. Two adjacent inferences in a chain or loop cannot both be strong or weak, but they must alternate APE The acronym for Aligned Pair Exclusion Ariadne’s Thread A popular metaphore for Backtracking Backdoor A candidate which, when placed, leads to the solution without the need for any advanced solving techniques. Every sudoku, no matter how difficult, has a few backdoors. They are the targets for guessing. The best backdoors are those that allow the puzzle to be completed with singles only. A brute force method to solve a sudoku puzzle, capable of finding all solutions in case there are more than one. This technique is not used by humans, but by computer programs, Backtracking such as SudoCue. A backtracking algorithm is designed to be fast, not helpful. They are built into sudoku programs so that they can verify that the puzzles entered into the program have a unique solution. Band Alias used for floor, also used as an alias for chute. Bar For a brief period, this term was used for the intersection between a row and a box. It is no longer used for this purpose. B/B Plot A graphical representation of all bivalue cells and bilocation units. This diagram helps advanced players to locate possible chains or loops in the grid. A solving strategy that is frowned upon by many people. When the usual solving techniques can no longer advance the puzzle, a bivalue cell or a bilocation unit is selected, and Bifurcation both candidates are tested. In a proper sudoku, one of the alternatives will eventually lead to a conflict, allowing the player to eliminate this candidate. The other candidate can then be placed. Big Number A collective term used for the given and solved values in cells, as opposed to small numbers for candidates. Bilocation (unit) A unit constraint with only two candidates left. This causes a strong link between these two candidates, making them very useful in various solving techniques. Bivalue (cell) A cell with only two candidates left. This causes a strong link between these two candidates, making them very useful in various solving techniques. Block Alias used for box Block-Block Alias used for Locked Candidates type 2. Box Group of 9 cells in a 3x3 square formation. There are 9 boxes in a standard sudoku, usually numbered 1 through 9 from left-top to right-bottom. A thicker or darker border is often used to show where the boxes are in a sudoku grid. Each box must contain all 9 different digits in the solution, thus acting as a constraint for the puzzle. Boxcol Group of 3 cells lined up in the intersection of a box and a column. Boxrow Group of 3 cells lined up in the intersection of a box and a row. Braid In braiding analysis, a situation in which 2 of the 3 digits in a segment travel in the same direction and the 3rd digit follows an opposite strand. See also: Rope. Braiding An advanced technique based on the observation that at least 2 of 3 digits in a segment must share 2 other segments in the same chute. The traveling paths are called strands. Buddy Alias used for peer Cage A group of cells for which the sum of all solutions is given. In Killer Sudoku, cages replace the given values. There are 2 methods to draw cages in a grid. The most common method is a dotted border around the cage. The second method uses separate colors for adjacent cages. A possible solution for an unsolved cell. Each candidate represents a digit. Solving a sudoku puzzle is mainly done by elimination of candidates. When a cell contains a value, the Candidate remaining values are no longer considered candidates for that cell. In addition, all peers of that cell lose their candidates for that digit, because each house can only contain one instance of each digit. Smallest element in a sudoku grid, capable of containing a single digit. A cell can have 3 distinct state:. It can be a given, a solved cell or an unsolved cell. The latter is Cell also known as an empty cell. Each cell is identified by its row and column coordinates. The exact notation system can vary. A cell is always a member of a single row, a single column and a single box. There are 81 cells in a standard sudoku grid. Cell Set Alias used by Gaby Vanhegan for house A series of candidates which are linked together. Each candidate is a node in the chain. When all candidates in the chain represent the same digit, the digit is often omitted from Chain the chain, causing the cells themselves to be seen as the nodes. The purpose of the chain is to provide evidence for a relationship between the first and last node in the chain. This relationship is then used in further deductions. Chute A part of the grid that is either a floor or a tower. Some solving techniques operate within a single chute. This term is listed in Wayne Gould’s basic terms. In specific cases, people prefer to use a descriptive term like “the top 3 rows”, or “the middle 3 columns”. Clue Alias used for given Cluster In coloring, a group of candidates which are all connected with strong links. A cluster has 2 possible solutions. An advanced solving technique which uses alternating colors to highlight parity of candidates in a cluster. Simple coloring only examines the effects of separate clusters. Coloring Multi-coloring also examines the interactions between clusters. Both of these techniques only take a single digit into consideration. Ultra-coloring extends the technique by examining the interactions between clusters for different digits. Column A group of 9 cells in a single vertical line. In some solving techniques, rows and columns are commonly referred to as “lines”. Each column must contain all 9 different digits in the solution, thus acting as a constraint for the puzzle. Conflict Alias used for contradiction Conjugate Pair The pair of remaining candidates in a bilocation unit. These two candidates have a strong link within that unit. One of these candidates must be true and the other one must be false. Conjugate pairs can be used to build chains and loops. Any pair of candidates for the same digit, for which we have proof that at least one of them will be true. Finding connected pairs is the purpose of coloring and chaining Connected Pair techniques, for it enables us to eliminate all candidates that can see both ends of the connected pair. A conjugate pair is always a connected pair, but not all connected pairs are conjugate pairs. There are connected pairs that can both be true at the same time. A group of candidates of which only one can be true. It is sometimes used as an alias for house, but that is an incorrect use of this term. A house encapsulates 9 unit Constraint constraints, one for each digit. Each cell also enforces a constraint, because it can only contain a single digit. In total, there are (27 x 9) + 81 = 324 constraints in a standard sudoku. Contradiction A situation that violates the rule, which can be any of the following: Contradictions play an important role in the proof of solving techniques. In advanced techniques, conflicts are often a part of the technique itself. Cross-Hatching Basic solving technique that helps locate hidden singles. The solver imagines lines coming from placed instances of a digit, helping to remember which candidates have been eliminated by these placements. Cycle An alias used for loop Dancing Links Dancing Links is a Backtracking algorithm published by Donald E. Knuth. Given the right parameters, it can solve any complex problem, including sudoku puzzles. In sudoku programs, the algorithm is often optimized by directly running on the sudoku constraint definitions. It can easily be adapted to solve sudoku variants. Diagonal Each sudoku has two diagonals, running from r1c1 to r9c9 and from r9c1 to r1c9. In the Sudoku-X variant, both diagonals must also contain digits 1 through 9, thus adding a total of 18 unit constraints. The diagonals are also used for symmetry. DIC The acronym for Double-Implication Chain A numerical value between 1 and 9, which must be placed in the cells in order to complete the puzzle. For each digit, there must be 9 instances in the solution to satisfy all Digit constraints. Some sudoku variants use other symbols than digits, like letters or pictures. Other variants, like killer sudoku, depend on the numerical value of the digits. In standard sudoku, you do not need to perform any calculations with the digits. In the solving guide, I use this term for any group of N digits and N cells that are isolated from the remaining digits and cells in a house. In other texts, this term is Disjoint Subset sometimes used as an alias for naked and hidden subsets. I see these two types only as unresolved disjoint subsets. Therefore, it is not a solving technique in its own right, but a status that can be achieved by applying these two solving techniques. DLX The acronym for Dancing Links Domain Alias used for house Double-Implication A chain that has implications in both directions, causing a strong or weak relationship between the two end nodes. Edge Alias used for link. This term is borrowed from graph theory, and is rarely used in the sudoku community. Elimination The act of removing a candidate from the grid, by means of logical deduction. Most advanced solving techniques result in one or more eliminations. Error Alias used for contradiction, also part of the term Trial & Error. Excluded Alias used by Paul Stephens for Hidden Subset False Possible state for a candidate. Used in logical reasoning. A candidate can be either true or false. When it is false, it is not part of the solution and subject to elimination. Fixed Digit Alias used for given Floor A part of the grid that contains 3 rows and 3 boxes. There are 27 cells in a floor. Some solving techniques operate within a single floor or tower. Forced Digit Alias used for Naked Single Full House Final placement that completes a house. Basic solving technique. A cell containing a digit in the initial puzzle. They cannot be changed by the player. The placements of the givens define the sudoku puzzle. They are the primary source of Given information to find its solution. Currently, a minimum of 17 givens are required for a sudoku with a unique solution. Also used in the combinations: given digit, given number, given value. A 2-dimensional graphical representation of a sudoku puzzle. It shows the 81 constituent cells, lined up in 9 rows and 9 columns, with a distinct border around the boxes. Some Grid claim the grid is the actual puzzle, but a more popular view is that the grid merely represents the puzzle. When all cells in the grid contain a digit, we speak of the solution Group Alias used for house A solving strategy that is frowned upon by many people. When the usual solving techniques can no longer advance the puzzle, a random candidate is placed in a cell. When it leads to the solution, the puzzle is solved, otherwise it will probably lead to a conflict, in which case the guess is retracted and another one is made. Guessing is often employed by Guess humans solvers, and occasionaly by computer programs. SudoCue does not have guessing implemented as a solving strategy. The opportunity to solve a puzzle with a simple guess must appeal to a lot of people and may have had an impact on the popularity of the game. Serious players want to avoid guessing at all costs and develop more advanced solving strategies that can be used as an alternative. The larger public has long since lost connection with this elite group. The advanced solving techniques are just too difficult to learn. On the other hand, a guess requires no prior education. Hatching Alias used for Cross-Hatching Hidden Pair A Hidden Subset of size 2, a medium solving technique. Hidden Quad A Hidden Subset of size 4, a medium solving technique. Hidden Single Single candidate left in a unit constraint. The placement of hidden singles is a basic solving technique. The term “hidden” has grown into the sudoku community, but these singles are not really hard to spot. Without pencilmarks, the term “hidden” is meaningless. Hidden Subset N digits with candidates in N cells in a house. Medium solving technique. The size N can be 2 for a pair, 3 for a triple and 4 for a quad. The name hidden subset is well chosen. It is not easy to find these subsets in a pencilmarked grid. The remaining candidates in the cells belonging to the set can be eliminated. Hidden Triple A Hidden Subset of size 3, a medium solving technique. House A group of 9 cells, which must each contain a different digit in the solution. In standard sudoku, a house can be a row, a column or a box. There are 27 houses in a standard sudoku grid. Additional houses may occur in sudoku variants, such as the diagonals in Sudoku-X or the window panes in Windoku. Improper Sudoku A sudoku which has multiple solutions. Inference Deductions that can be made between two linked candidates. A distinction is made between strong and weak inference. Initial Value Alias used for given. There are other combinations with “initial”: digit, clue, number. Intersection In general, an intersection defines the cells that any two houses have in common. Because there is only one cell in the intersection of a row and a column, these are usually ignored. Remains the intersection between a row and a box, or a column and a box. There are 3 cells in each of these 54 intersections. Intersection Alias used for Locked Candidates Jellyfish A Row-Column Subset of size 4, an advanced solving technique. Killer (Sudoku) A variant of Sudoku, where the given values have been replaced by cages. This variant has lead to the development of specialized solving techniques. Last Digit In SudoCue and the collateral documentation, this term is used for the last instance of a digit that needs to be placed in the grid. With 8 instances already placed, there is only one candidate left for this digit. This term is also used as an alias for Full House. Earlier versions of this glossary may have contributed to that confusion. Line Common name in some solving techniques where either a row or a column can be used. There are 18 lines in a standard sudoku grid. Half of them are rows and the other half are Line-Box Alias used for Locked Candidates Link A connection between two or more candidates. These candidates must share a constraint, so they must either belong to the same cell, or use the same digit in a single house. A distinction is made between strong and weak links. Locked This term applies to candidates that are confined to a limited group of cells within a house, often narrowed down to an intersection. This implies that these candidates can not be used in that same house outside this limited group of cells, causing the elimination of these remaining candidates. Locked Candidates Candidates locked in an intersection. Basic solving technique. A distinction is made between 2 types. Type 1 causes eliminations in the row or column, and type 2 causes eliminations in the box. Some aliases refer to only one of these 2 types. “Every puzzle can be solved by logic alone” is a claim made by many puzzle makers. To substantiate such a claim, we need to define what logic actually means within the context of Logic Sudoku. There are many subcategories of logic, and not all of them are equally useful for solving puzzles. I have recently started a discussion to clarify this issue. When the results come in, I will update this glossary and the relevant sections of the solving guide. Lone Number Alias used for Hidden Single Loop A series of candidates which are linked together to form a closed loop. Each candidate is a node in the loop. When all candidates in the loop represent the same digit, the digit is often omitted from the loop, causing the cells themselves to be seen as the nodes. The purpose of the loop depends on whether it’s continuous or discontinuous. Markup Alias used for Pencilmark Minigrid Alias used for box Mini-col Alias used for Boxcol Mini-row Alias used for Boxrow Naked Pair A Naked Subset of size 2, a basic solving technique. Naked Quad A Naked Subset of size 4, a medium solving technique. Naked Single Single candidate left in a cell. The placement of naked singles is a basic solving technique. Naked Subset N cells with candidates for N digits in a house. Basic to medium solving technique. The size N can be 2 for a pair, 3 for a triple and 4 for a quad. A naked pair is considered by many to be a basic technique, while triples and quads are medium techniques. Naked Triple A Naked Subset of size 3, a medium solving technique. Nice Loop A loop which follows a strict set of rules and notation system. Node A candidate which is part of a chain or loop. Nonet Alias used for box, mainly by the Killer Sudoku community. Number Alias used for digit Number Chain Alias used by Gaby Vanhegan for Naked Subset Number Claiming Alias used by Paul Stephens for Locked Candidates Number Pair Alias used by Gaby Vanhegan for Naked Pair Number Place The name originally used by Dell Magazines for what we now call Sudoku. Pair Alias used by Paul Stephens for Naked Pair. In a wider context, the term “pair” refers to any two cells that interact in some way. Pairs are used in several medium and advanced solving techniques. Parity One of 2 states for a candidate within a chain, loop or cluster. Parity can be shown by colors, upper or lower case letters or selected symbols, like the plus and minus sign. All candidates with the same parity are either true or false together. Peer cell in the same house as another cell. Each cell has 20 peers. When a cell contains a certain digit, none of its peers can contain that digit. In advanced solving techniques, this effect is known as weak inference. Peers have a weak or strong link. Pencilmark Visual representation of a candidate. Also known as the small numbers in the grid. Pincer A cell that is part of an XY-Wing. Each XY-Wing has 2 pincer cells, which also form a connected pair. Pinned Digit Alias used for Hidden Single Pivot A cell that is part of an XY-Wing. Each XY-Wing has a single pivot cell, with strong links to both pincers. Placement The act of setting the value of a cell to one of the digits, by means of logical deduction. Most basic solving techniques result in a single placement, and only a few advanced solving techniques can cause placements. PM The acronym for Pencilmark Pointing Pair Alias used for Locked Candidates type 1. Proper Sudoku A sudoku which has a unique solution. Quadrant Alias mistakingly used for box Reduction Alias used for elimination Region Alias used for box. Because this alias is sometimes also used for a house in general, it is avoided in the documentation on this site. Rinse The removal of pencilmarks after a placement is made. Rope In braiding analysis, a situation in which all 3 digits in a segment travel in the same direction. See also: Braid. Row A group of 9 cells in a single horizontal line. In some solving techniques, rows and columns are commonly referred to as “lines”. Each row must contain all 9 different digits in the solution, thus acting as a constraint for the puzzle. Row/Column-Block Alias used for Locked Candidates type 1. Row-Column Subset A medium solving technique where all candidates for N rows are locked in N columns or vice versa. This name is a little artificial, as these techniques have a different name, depending on the size N. Size 2 is an X-Wing, size 3 is a Swordfish and size 4 is a Jellyfish. Scope Alias used for house Seafood Alias used for Row-Column Subset techniques. Sector Alias used for house See In sudoku texts, two cells that can “see” each other are peers. Segment Alias used for intersection or house. Also used for fragments of a chain or loop. Segment is nominated to be removed from this site’s documentation, because it is used in too many different contexts. The term Triad is the suggested replacement. Use the comment form to respond to this suggestion. Set Alias used for house Single Collective name for naked or hidden single. Single Candidate Alias used for Naked Single Slicing & Dicing A basic solving technique, which is a form of cross-hatching. Where standard cross-hatching only looks at placed digits, this extended form also takes locked candidates into account. This technique is implemented in SudoCue as Unlocked Single. Small Number Visual representation of a candidate. Also known as a pencilmark. This term is used in two ways, depending on the context. The most common use is the solution for the entire sudoku puzzle, where every cell contains a digit without violating the Solution rules of the game. A proper sudoku has only one solution, so it is fair to speak of “the solution”. Within the context of a single cell, “solution” refers to the digit that particular cell contains in the solution to the puzzle. Square Alias often used for a cell, but sometimes also for a box. To avoid this confusion, this term is not used in the sudoku documentation on this site. A basic solving technique, which is a form of Cross-Hatching, limited to a single chute. When the chute has 2 placements of a digit, there are only 3 cells left where the Squeezing remaining digit can go. When 2 of those 3 cells already contain another digit, the third digit can be placed in the only empty cell. In easier puzzles, squeezing is a very fast solving method, because it has a very limited scope. Squirmbag A row-column subset of size 5. In standard sudoku, a squirmbag always has a smaller complementary row-column subset. Stack Alias used for tower. For a brief period, the term was also used for an intersection of a column and a box. It is no longer used in this context. Strand One of six diagonals of segments within a chute used in braiding analysis. Going left to right, Z-Strands ascend and N-Strands descend. Strong Inference Deductions that can be made from two linked candidates. For candidates A and B, strong inference implies that A and B cannot both be false at the same time. This leads to the following deductions: In chain notation, strong inference is represented by an equal sign: ‘=’ Strong Link A link between 2 candidates in a bivalue cell or bilocation unit. These are very important in advanced solving techniques. Because these candidates are the only two left for a constraint, one of them must be true and the other must be false. A strong link can be used for both strong and weak inference in a chain. Sub-Block Alias used for intersection Subgrid Alias used for box Swordfish A Row-Column Subset of size 3, a medium solving technique. Symbol Alias used for digit T&E The acronym for Trial & Error. Tier Alias used for floor Tower A part of the grid that contains 3 columns and 3 boxes. There are 27 cells in a tower. Some solving techniques operate within a single floor or tower. Traveling Pairs Original name for the observation which is the foundation for braiding analysis. Triad Suggested term for the 3 cells in an intersection. According to the Free Dictionary, the definition of this word is “A group of three”, which perfectly suits our needs. Trial & Error A solving method that is placed between bifurcation and guessing. It has a bad reputation amongst sudoku players, because it is so closely related to guessing. Nevertheless, it is a sound scientific method. Many chain and loop techniques show all the signs of Trial & Error, but are seriously in denial. Triplet Alias that has been used for intersection, also an alias for naked triple. True Possible state for a candidate. Used in logical reasoning. A candidate can be either true or false. When it is true, it is path of the solution and placed in the associated cell. Unit Alias used for house. In some cases, the unit refers to a single constraint in a house. This makes it possible to identify all candidates for a single digit within a single house. Because this use of the term is not very common, I have decided to clarify this use by calling it a Unit Constraint. Unit Constraint A constraint for a single digit within a house. When named, the unit constraint must show both the house and the digit to which it applies, e.g. R1D7. This term is strongly related to digit, but it is not an alias. Where “digit” refers to the numbers in general, the term “value” refers to the digit as a property of a specific Value cell. Thus, the value of a cell can be one of the available digits, or nothing, when the cell is unsolved. The phrase “Digit 6 is placed in R1C8” is equivalent to “The value of R1C8 is 6” Vertex Alias used for node. This term is borrowed from graph theory, and is rarely used in the sudoku community. Victim A candidate that can be eliminated by the application of a solving technique. I have introduced this term in the solving guide to make it easier to read. A solving technique may have multiple victims. It is common practice to write it like this: R1C1<>4. Weak Inference Deductions that can be made from two linked candidates. For candidates A and B, weak inference implies that A and B cannot both be true at the same time. This results in the following deductions: In chain notation, weak inference is represented by a dash: ‘-’ Weak Link A link between 2 candidates in a cell or unit constraint, which has more than 2 candidates left. Because there are other candidates in the constraint that could be true, this type of link is not as powerful as a strong link. It can only be used for weak inference in a chain. X-Wing A Row-Column Subset of size 2, a medium solving technique. XY-Wing A semi-advanced solving technique, using a chain of only 3 cells. Because it uses such a short chain, it is also classified as a pattern recognition technique. The cell in the middle is called the pivot, and both end cells are known as pincers. Y-Wing An alias used for XY-Wing
{"url":"http://www.sudocue.net/glossary.php","timestamp":"2014-04-17T06:47:36Z","content_type":null,"content_length":"74750","record_id":"<urn:uuid:f2547dc7-8fc7-4c6d-8e5f-37971351955f>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Thin plate theory... Submitted by ramdas chennamsetti on Thu, 2007-05-24 13:13. Hi all! I have a small doubt in the assumptions made in thin plate theory. We make some of the following assumptions in thin plate theory (Kirchoff's classical plate theory) (KCPT). [1] The normal stress (out of plane=> sigma(z)) is zero. and [2] The vertical deflection 'w' is not a function of 'z' => dw/dz = 0 Now there are three stress components sigma(x), sigma(y) and sigma(xy). The other three stress components sigma(z), sigma(xz) and sigma(yz). This is like a plane stress. But, from the second assumption ez=0 (strain in z-direction) and from the above exz=0 and eyz=0. Then, this leads to plane strain. From the constitutive equation for 'ez' => sigma(x)+sigma(y)=0 But, this doesn't happen.....I am looking for explanations ... Thanks in advance. Submitted by Ying Li on Thu, 2007-05-24 13:17. In the Kirchoff's classical plate theory , such a constitutive equation is not considered. So, you needn't to use it. R. Chennamsetti, Scientist, India Hi Lee, Thank you. But, constituvie equatations are to be satisfied. In this case they relate the stresses and strains. - R Chennamsetti This is just my idea and I cannot claim that it is exactly true. After these assumptions, we do not have any dependency on z (out-of plane direction) coordinate. So, we only end up with a problem on an in-plane surface. Therefore, we should only concentrate on x-y components of stress and strain. What this means, we should not take into account the relations related with the z-coordinate. So, we do not need to take into account the constituve relation of εzz and corresponding stress components. Erkan Oterkus. R. Chennamsetti, R&DE(E), INDIA Hi Erkan, Thank you. Here relations means 'Constitutive relation?' But, we can't violate the constitutive relation in any direction (I think). - Ramdas We estimated Kirchoff theory by solving spatial problem for small hight h=H/R (<0.1). It can be concluded that all normal stresses and deflection give asimptoticaly right values. But shear stress σrz must be approximated only by theories wich includes shear. For example Timoshenko-Reissner theory. More detailed information about useful boundaries in Kirchoff theory see Hi Ramdas, Thank you for this interesting topic. What I mean with the term relations is in general. Maybe I should even correct my phrase. Under the assumptions that we are making, we should eliminate the terms related with the z-coordinate. As a crude example, like eliminating applied boundary conditions from our global governing equation.So,, we should only have the in-plane related terms in any relation suh as equilibrium equations, constitutive relations,etc. You are right, we should not violate constitutive relation, but here we are making assumptions, so we need to sacrifice something which is OK to ignore under some particular conditions. Submitted by Wenbin Yu on Fri, 2008-09-19 14:10. The original post made a point. The violation of the 3D constitutive relations is happened because the second assumption is not correct. The first assumption is asymptotically correct for the first approximation of the original 3D model using a 2D model. However, the second assumption, transverse normal remains rigid, is clearly violating the first assumption, plane stress. Because by assuming plane stress, we assume that the plate is free to move in the thickness direction, which means the transverse normal is not rigid. Both assumptions can be valid only if the Poisson's ratio is zero. Recall the well-known and readily observed Poission's effect. The reason the second assumption is used is because it is convient to derive a 2D version kinematics (strain-displacement relations). It is noted that same conflicting assumptions also used to derive beam models dealing with tension and bending: section remain rigid in its own plane and the beam is in uniaxial stress state. As a final comment, both assumptions are not absolutely needed for one to derive a plate theory. One can use the variational asymptotic method to take advantage of the smallness of theory as the small parameter to reduce the 3D model to a 2D model with the first assumption comes out as a result of the first assumption and the transverse displacement will be a quadratic function of the thickness coordinate. Please refer to the following paper for more details. Yu, W.: "Mathematical Construction of a Reissner-Mindlin Plate Theory for Composite Laminates," International Journal of Solids and Structures, vol.42, no. 26, 2005, pp. 6680-6699. (pdf) Recent comments
{"url":"http://imechanica.org/node/1461","timestamp":"2014-04-17T00:53:41Z","content_type":null,"content_length":"31854","record_id":"<urn:uuid:bf949c09-1fb3-4623-bc60-0c96861ed28b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
College Algebra Tutors Arlington, VA 22203 Ivy League tutor for Exam Prep, Writing, and much more! ...Many students also have never been taught to translate these skills into a form that will allow them to succeed on standardized tests - which is, of course, the way most school systems test those skills. I can help your student improve those base skills, and take... Offering 10+ subjects including algebra 2
{"url":"http://www.wyzant.com/Mc_Lean_VA_College_Algebra_tutors.aspx","timestamp":"2014-04-18T18:44:21Z","content_type":null,"content_length":"59947","record_id":"<urn:uuid:72bf96d5-fb8c-439c-9072-ccf9222b0a1c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Page:Scientific Memoirs, Vol. 3 (1843).djvu/685 This page has been , but needs to be L. F. MENABREA ON BABBAGE'S ANALYTICAL ENGINE. the product of two binomials $(a + bx) (m + nx)$, the result will be represented by $am + (an + bm) x + bnx^2$, in which expression we must first calculate $am$, $an$, $bm$, $bn$; then take the sum of $am + b m$; and lastly, respectively distribute the coefficients thus obtained, amongst the powers of the variable. In order to reproduce these operations by means of a machine, the latter must therefore possess two distinct sets of powers: first, that of executing numerical calculations; secondly, that of rightly distributing the values so obtained. But if human intervention were necessary for directing each of these partial operations, nothing would be gained under the heads of correctness and œconomy of time; the machine must therefore have the additional requisite of executing by itself all the successive operations required for the solution of a problem proposed to it, when once the primitive numerical data for this same problem have been introduced. Therefore, since from the moment that the nature of the calculation to be executed or of the problem to be resolved have been indicated to it, the machine is, by its own intrinsic power, of itself to go through all the intermediate operations which lead to the proposed result, it must exclude all methods of trial and guess-work, and can only admit the direct processes of It is necessarily thus; for the machine is not a thinking being, but simply an automaton which acts according to the laws imposed upon it. This being fundamental, one of the earliest researches its author had to undertake, was that of finding means for effecting the division of one number by another without using the method of guessing indicated by the usual rules of arithmetic. The difficulties of effecting this combination were far from being among the least; but upon it depended the success of every other. Under the impossibility of my here explaining the process through which this end is attained, we must limit ourselves to admitting that the four first operations of arithmetic, that is addition, subtraction, multiplication and division, can be performed in a direct manner through the intervention of the machine. This granted, the machine is thence capable of performing every species of numerical calculation, for all such calculations ultimately resolve themselves into the four operations 1. ↑ This must not be understood in too unqualified a manner. The engine is capable, under certain circumstances, of feeling about to discover which of two or more possible contingencies has occurred, and of then shaping its future course accordingly.—Note by Translator.
{"url":"http://en.wikisource.org/wiki/Page:Scientific_Memoirs,_Vol._3_(1843).djvu/685","timestamp":"2014-04-19T08:35:20Z","content_type":null,"content_length":"26440","record_id":"<urn:uuid:3a7607c7-51d5-4025-8789-a435249ca007>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Ugh....need Big Help! July 27th 2005, 10:18 PM #1 Ugh....need Big Help! I need help with a word problem: One pipe can fill a hot tub in 9 minutes, while a second pipe can fill it in 15 minutes. If the tub is empty, how long will it take both pipes together to fill the hot tub? I have several like this, so can you show me the formula so I can figure out how to plug the others in? Let's see... By the data of the problem, I suppose the pipes supply water at a constant rate. For the first pipe, call this rate $A$. If the pipe supplies $\alpha(t)$ units of water in time t minutes, then we have $\frac{d\alpha}{dt}(t)=A$, which integrates to $\alpha(t)=At+\alpha(0),$ the term $\alpha(0)$ being the initial supply at time $t=0$. I guess $\alpha(0)=0$, as we have to set the timer and open the tap simultaneously ( $V$ units of water, then $\alpha(9)=9A=V$, and so $A=\displaystyle\frac{V}{9}$. We conclude that, for the first pipe, the supply is $\alpha(t)=\frac{V}{9}t$. Under the same suicidal considerations, we obtain that the supply of the second pipe is $\beta(t)= \frac{V}{15}t$. For both pipes, the supply is $\alpha(t)+\beta(t)=\frac{V}{9}t+\frac{V}{15}t= 0.17 77Vt$ (almost). When the tub fills, we will have $0.1777Vt=V$, from where $t=\frac{1}{0.1777}=5.625$ minutes... The tub is full. Now, lets give president Bush a bath. Last edited by Rebesques; July 28th 2005 at 01:26 AM. Here is one way. So you have several questions like that. That is because that is a popular question. And, I think there is a popular formula already developed for that type of question. I think it is 1/x +1/y = 1/t --------****** x = time the task can be done/finished alone by one performer. y = another time the task can be done/finished alone also by another performer. t = the time the task can be done/finished if the two performers perform together. We can derive that formula, why not. Let J = complete task to be done. and, performer A can finish it alone in x time. and, performer B can finish it also alone in y time. and, if A and B perform together, the J will be finished in t time. Like distance, task = rate*time So, rate = task/time ------*** Rate of A alone is J/x. Rate of B alone is J/y. If A and B perform together, their combined rate is (J/x +J/y). task = rate*time J = (J/x +J/y)*t Divide both sides by t, J/t = J/x +J/y Divide both sides by J, 1/t = 1/x +1/y 1/x +1/y = 1/t -------the formula. Now, per your example, x = 9 min. y = 15 min 1/x +1/y = 1/t 1/9 +1/15 = 1/t Clear the fractions, multiply both sides by 9*15*t, 1*15*t +1*9*t = 1*9*15 15t +9t = 135 24t = 135 t = 135/24 t = 5.625 min ---- the time the tub will fill up if the two pipes will do it together. Wait, there is a shorter formula, or a simplification of the derived formula above; 1/x +1/y = 1/t ty +tx = x*y t = (x*y) / (x+y) -----shorter formula. t = (9*15)/(9+15) = 135/24 = 5.625 min----same. One big problem if you rely only on formulas, once you forget the formula then you are lost. Last edited by ticbol; July 28th 2005 at 02:04 AM. July 28th 2005, 01:23 AM #2 July 28th 2005, 01:56 AM #3 MHF Contributor Apr 2005
{"url":"http://mathhelpforum.com/advanced-algebra/665-ugh-need-big-help.html","timestamp":"2014-04-17T12:02:37Z","content_type":null,"content_length":"35638","record_id":"<urn:uuid:73568f4b-9876-4727-afdc-7368e60b312f>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
* denotes an undergraduate coauthor; ** denotes a graduate student coauthor Refereed Research Papers [27] Strongly Jonsson and strongly HS modules, Journal of Pure and Applied Algebra 218 (2014), no. 8, 1385-1399. Abstract. Let R be commutative ring with identity and let M be an infinite unitary R-module. Then M is a Jónsson module provided every proper R-submodule of M has smaller cardinality than M. In this note, we strengthen this condition and call an R-module M (which may be finite) strongly Jónsson provided distinct R-submodules of M have distinct cardinalities. We present a classification of these modules, and then we study a sort of dual notion. Specifically, we consider modules M for which M/N and M/K have distinct cardinalities for distinct R-submodules N and K of M; we call such modules strongly HS. We conclude the paper with a classification of the strongly HS modules over an arbitrary commutative ring. [26] Commutative rings with infinitely many maximal subrings (with Alborz Azarang), Journal of Algebra and Its Applications 13 (2014), no. 6, 1450037, 29 pp. Abstract. Let R be an commutative ring with identity. A proper subring S of R is said to be a maximal subring of R provided there are no subrings of R properly between S and R. In this paper, we study rings with infinitely many maximal subrings. Our work builds on earlier work of several authors including Azarang and Dobbs. [25] Small and large ideals of an associative ring, Journal of Algebra and Its Applications 13 (2014), no. 5, 1350151, 20 pp. Abstract. Let R be an associative ring with identity, and let I be an (left, right, two-sided) ideal of R. Say that I is small if |I|<|R| and large if |R/I|<|R|. In this note, we present results on small and large ideals. In particular, we study their interdependence and how they influence the structure of R. Conversely, we investigate how the ideal structure of R determines the existence of small and large ideals. [24] Modules which are isomorphic to their factor modules (with Adam Salminen), Communications in Algebra 41 (2013), no. 4, 1300-1315. Abstract. Let R be commutative ring with identity and let M be an infinite unitary R-module. Call M homomorphically congruent (HC for short) provided M/N is isomorphic to M for every submodule N of M for which |M/N|=|M|. This definition generalizes more stringent definitions such as those given by the authors and Hirano & Mogami. In this paper, we study HC modules over commutative rings. After a fairly comprehensive review of the literature, several natural examples are presented to motivate our study. We then prove some general results on HC modules. To mention some of our results, we show that every HC module is either torsion or torsion-free. We obtain a complete description of the torsion-free HC modules. Further, we include HC module-theoretic characterizations of discrete valuation rings, almost Dedekind domains, and fields. We also provide a characterization of the HC modules over a Dedekind domain, extending W.R. Scott's classification over Z. Finally, we close with some open questions. [23] Jonsson posets and unary Jonsson algebras (with Keith Kearnes), Algebra Universalis 69 (2013), no. 3, 101-112. Abstract. We show that if P is an infinite poset whose proper order ideals have cardinality strictly less than |P|, and k is a cardinal number strictly less than |P|, then P has a principal order ideal of cardinality at least k. We apply this result to characterize the possible sizes of unary Jónsson algebras. [22] Rings whose multiplicative endomorphisms are power functons, Semigroup Forum 86 (2013), no. 2, 272-278. Abstract. Let R be a commutative ring. For any positive integer m, the power function f:R→R defined by f(x):=x^m is easily seen to be an endomorphism of the multiplicative semigroup (R,*). In this note, we characterize the commutative rings R with identity for which every multiplicative endomorphism of (R,*) is equal to a power function. Specifically, we show that every endomorphism of (R,*) is a power function if and only if R is a finite field. [21] Rings which admit faithful torsion modules (with Ryan Schwiebert**), Communications in Algebra 40 (2012), no. 6, 2184-2198. Abstract. Let R be an associative ring with identity and let M be a unitary R-module. Then M is torsion provided that for every m in M, there is some nonzero r in R such that rm=0, and M is faithful if rM={0} implies that r=0. In this note, we study the existence (and nonexistence) of faithful torsion modules. In particular, we say that a ring R is FT provided R admits a faithful torsion module (and non-FT otherwise). We define the FT rank of R, denoted FT(R), to be the smallest cardinality of a generating set of a faithful torsion module over R. The main objective of the paper is to study this rank function. We completely determine the FT rank function for simple rings, semisimple Artinian rings, quasi-Frobenius rings, commutative Noetherian domains, and valuation domains, among other classes. Further, we show that the FT rank of a (left or right) Artinian ring is finite, and the FT rank of a commutative Noetherian ring is countable. We also show that the FT rank of any ring is a regular cardinal. Moreover, we show that every regular cardinal can be realized as the FT rank of some ring R. [20] Rings which admit faithful torsion modules II (with Ryan Schwiebert**), Journal of Algebra and Its Applications 11 (2012), no. 3, 1250054, 12 pp. Abstract. Let R be an associative ring with identity. An (left) R-module M is said to be torsion if for every m in M, there exists a nonzero r in R such that rm=0, and faithful provided rM=0 implies that r=0 (r in R). We call R (left) FT if R admits a nontrivial (left) faithful torsion module. In this paper, we continue the study of FT rings initiated in [21] above. After presenting several examples, we consider the FT property within several well-studied classes of rings. In particular, we examine direct products of rings, Brown-McCoy semisimple rings, serial rings, and left nonsingular rings. Finally, we close the paper with a list of open problems. [19] On modules whose proper homomorphic images are of smaller cardinality (with Adam Salminen), Canadian Mathematical Bulletin 55 (2012), no. 2, 378-389. Abstract. Let R be a commutative ring with identity, and let M be a unitary module over R. We call M H-smaller (HS for short) iff M is infinite and |M/N|<|M| for every nonzero submodule N of M. After a brief introduction, we show that there exist nontrivial examples of HS modules of arbitrarily large cardinality over Noetherian and non-Noetherian domains. We then prove the following result: Suppose that M is faithful over R, R is a domain (we show that we can restrict to this case without loss of generality), and K is the quotient field of R. If M is HS over R, then R is HS as a module over itself, M is an R-submodule of K, and there exists a generating set S for M over R with |S|<|R|. We use this result to generalize a problem posed by Kaplansky, and conclude the paper by answering an open question on Jόnsson modules. [18] On elementarily k-homogeneous unary structures, Forum Mathematicum 23 (2011), no. 4, 791-802. Abstract. Let L be a first-order language with equality and let U be an L-structure of cardinality k. If ω≤λ≤k, then we say that U is elementarily λ-homogeneous iff any two substructures of cardinality λ are elementarily equivalent, and λ-homogeneous iff any two substructures of cardinality λ are isomorphic. In this note, we classify the elementarily λ-homogeneous structures (A,f) where f:A→A is a function and λ is a cardinal such that ω≤λ≤|A|. As a corollary, we obtain a complete description of the Jónsson algebras (A,f), where f:A→A. [17] The number of homomorphic images of an abelian group, International Journal of Algebra 5 (2011), no. 3, 107-115. Abstract. We study abelian groups with certain conditions imposed on their homomorphic images. We begin by classifying the abelian groups which have but finitely many homomorphic images. In particular, we show that an abelian group G has but finitely many homomorphic images (up to isomorphism) iff G is finitely cogenerated. We then determine the abelian groups G which have the maximum number of homomorphic images, in the sense that G/H and G/K are not isomorphic whenever H and K are distinct subgroups of G. [16] Cardinalities of residue fields of Noetherian integral domains (with Keith Kearnes), Communications in Algebra 38 (2010), no. 10, 3580-3588. Abstract. We determine the relationship between the cardinality of a Noetherian integral domain and the cardinality of a residue field. One consequence of the main result is that it is provable in ZFC that there exists a Noetherian domain of cardinality aleph_{1} with a finite residue field, but the statement “There is a Noetherian domain of cardinality aleph_{2} with a finite residue field” is equivalent to the negation of the continuum hypothesis. We apply our results to characterize the partially ordered set Spec R[x] when R is a one-dimensional semilocal domain. Our work corrects erroneous results in the literature. [15] Jónsson modules over Noetherian rings, Communications in Algebra 38 (2010), no. 9, 3489-3498. Abstract. Let R be a commutative ring with identity, and let M be an infinite unitary R-module. M is said to be a Jόnsson module provided every proper submodule of M has strictly smaller cardinality than M. Utilizing earlier results of the author as well as results of Gilmer/Heinzer, Weakley, and Heinzer/Lantz, we study Jόnsson modules over Noetherian rings. After a brief introduction, we classify the countable Jόnsson modules over an arbitrary ring. We then give a complete description of the Jόnsson modules over a one-dimensional Noetherian ring, extending W.R. Scott's classification over Z. We show that these results may be extended to Jόnsson modules over an arbitrary Noetherian ring if one assumes the generalized continuum hypothesis. Finally, we close with a list of open [14] On the axiom of union, Archive for Mathematical Logic 49 (2010), no. 3, 283-289. Abstract. In this paper, we study the union axiom of ZFC. After a brief introduction, we sketch a proof of the folklore result that union is independent of the other axioms of ZFC. In the third section, we prove some results in the theory T:=ZFC minus union. Among other results, we prove that funite unions of sets exist without appealing to the union axiom. We also show that the axiom of union is equivalent to every set of ordinals being bounded. Finally, we show that the consistency of T plus the existence of an inaccessible cardinal proves the consistency of ZFC. [13] More results on congruent modules, Journal of Pure and Applied Algebra 213 (2009), no. 11, 2147-2155. Abstract. W.R. Scott characterized the infinite abelian groups G for which H is isomorphic to G for every subgroup H of G of the same cardinality as G. In [8] (below), the author extends Scott's result to infinite modules over a Dedekind domain, calling such modules congruent, and in a subsequent paper ([12] below), the author obtains results on congruent modules over more general classes of rings. In this paper, we continue our study. Among many other results, we show that some statements on congruent modules are independent of ZFC. For example, the existence of a Noetherian non-Dedekind domain of cardinality aleph_{2} which admits a faithful congruent module is undecidable in ZFC. [12] On modules M for which N ~ M for every submodule N of size |M|, Journal of Commutative Algebra 1 (2009), no. 4, 679-699. Abstract. Let R be a commutative ring with identity and let M be an infinite unitary R-module. M is called a Jόnsson module provided every submodule of M of the same cardinality as M is equal to M. Such modules have been well-studied, most notably by Gilmer and Heinzer. We generalize this notion and call M congruent provided every submodule of M of the same cardinality as M is isomorphic to M (note that this class of modules contains the class of Jόnsson modules). These modules have been completely characterized by Scott when the operator domain is Z. In [8] (below), the author extended Scott's classification to modules over a Dedekind domain. In this paper, we study congruent modules over arbitrary commutative rings. We use the theory developed in this paper to prove new results about Jόnsson modules as well as characterize several classes of rings. [11] Ring semigroups whose subsemigroups intersect, Semigroup Forum 79 (2009), no. 2, 413-416. Abstract. Let (S,*) be a semigroup. Then (S,*) is called a ring semigroup provided there is an operation + on S such that (S,*,+) is a ring. In this note, we characterize the ring semigroups whose nonzero multiplicative subsemigroups intersect. In particular, we prove that a ring R (not assumed commutative or to possess an identity) has the property that any two nonzero subsemigroups interesect iff either R is a nilring or R is an absolutely algebraic field of prime characteristic. As a corollary, we show that a ring R is a finite field iff R is not a nilring and there exists a positive integer k such that x^k=y^k for all nonzero elements x and y of R. [10] Ring semigroups whose subsemigroups form a chain, Semigroup Forum 78 (2009), no. 2, 371-374. Abstract. Let (S,*) be a semigroup. Then (S,*) is called a ring semigroup provided there is an operation + on S such that (S,*,+) is a ring. Such semigroups have been well-studied in the literature. In this note, we use Mihailescu's Theorem (formerly Catalan's Conjecture) to characterize the ring semigroups whose subsemigroups containing 0 form a chain with respect to set inclusion. [9] Some results on Jonsson modules over a commutative ring, Houston Journal of Mathematics 35 (2009), no. 1, 1-12. Abstract. Let M be an infinite unitary module over a commutative ring R with identity. M is called Jόnsson over R provided every proper submodule of M has smaller cardinality than M; M is large if M has cardinality larger than R. Extending results of Gilmer and Heinzer, we prove that if M is Jόnsson over R, then either M is isomorphic to R and R is a field, or M is a torsion module. We show that there are no large Jόnsson modules of regular or singular strong limit cardinality. In particular, GCH implies that there are no large Jόnsson modules. Necessary and sufficient conditions are given for an infinitely generated Jόnsson module to be countable. As applications, we prove there are no large uniserial or Artinian modules. Under GCH, we derive a new characterization of the quasi-cyclic [8] On infinite modules M over a Dedekind domain for which N ∼ M for every submodule N of cardinality |M|, Rocky Mountain Journal of Mathematics 39 (2009), no.1, 259-270. Abstract. Let R be a commutative ring with identity, and let M be an infinite unitary R-module. Let us call M congruent iff every submodule of M of the same cardinality as M is isomorphic to M. Scott classified all congruent abelian groups. In this paper, we extend his results to classify all congruent modules over an arbitrary Dedekind domain. As a consequence, we get a complete description of the Jόnsson modules of a Dedekind domain. [7] A note on the n-generator property for commutative monoids, Semigroup Forum 74 (2007), no. 1, 155-158. Abstract. Let M be a cancellative commutative monoid with integral closure M*. Borrowing from ring theory, we say that M has the n-generator property iff every finitely generated ideal of M can be generated by n elements, and we say that M has rank n iff every ideal of M can be generated by n elements. We investigate the integral closure of such monoids. We show, in particular, that if M has the n-generator property, then M* is a valuation monoid, and if M has rank n, then M* is a principal ideal monoid. Miscellaneous Refereed Publications [6] The converse of The Intermediate Value Theorem: from Conway to Cantor to cosets and beyond, Missouri Journal of Mathematical Sciences, 16 pages (to appear) Abstract. The classical Intermediate Value Theorem (IVT) states that if f is a continuous real-valued function on an interval [a, b] ⊆ R and if y is a real number strictly between f(a) and f(b), then there exists a real number x ∈ (a, b) such that f(x) = y. The standard counterexample showing that the converse of the IVT is false is the function f defined on R by f(x):= sin(1/x) for x ̸= 0 and f(0 ):= 0. However, this counterexample is a bit weak as f is discontinuous only at 0. In this note, we study a class of strong counterexamples to the converse of the IVT. In particular, we present several constructions of functions f : R → R such that f[I] = R for every nonempty open interval I of R (f[I]:= {f(x) : x ∈ I}). Note that such an f clearly satisfies the conclusion of the IVT on every interval [a,b] (and then some), yet f is necessarily nowhere continuous. This leads us to a more general study of topological spaces X = (X, T) with the property that there exists a function f : X → X such that f[O] = X for every nonvoid open set O ∈ T . [5] Groups whose subgroups have distinct cardinalities, Pi Mu Epsilon Journal, 10 pages (to appear) Abstract. A standard undergraduate algebra exercise is to prove that distinct subgroups of a finite cyclic group G have distinct cardinalities. In this note, we study this property for groups in general (and we do not limit our focus to finite groups). In particular, we determine all groups G for which distinct subgroups of G have distinct cardinalities. Specifically, they are exactly the quasi-cyclic groups and the finite cyclic groups. These results follow easily from old results due to Baer and W.R. Scott. In this note, we present an elementary proof of the above classification [4] Group permutations which preserve subgroups (with Veronica Marth*), Pi Mu Epsilon Journal 13 (2012), no. 7, 407-414. Abstract. Let G be a group, and let f :G→G be a bijection. Say that f preserves subgroups (of G) provided that for any subset X ⊆ G, X is a subgroup of G if and only if f[X] (the image of X under f) is a subgroup of G. Let S(G) denote the set of all such functions f. It is easy to show that S(G) is a group under composition of functions. Further, if Aut(G) is the group of automorphisms of G (again, under composition), then Aut(G) is a subgroup of S(G). In this note, we study the structure and the size of S(G), relative to Aut(G), for various groups G. In particular, we show that the disparity in size can be minimal, moderate, or as large as possible (in a sense to be made precise). Finally, we determine all groups G up to isomorphism for which Aut(G) = S(G). [3] An independent axiom system for the real numbers, College Mathematics Journal 40 (2009), no. 2, 78-86. Abstract. We give an irredundant set of second-order axioms for the complete ordered field of real numbers. Upon adding the axiom that there is no least positive real number, we show that commutativity of addition, commutativity and associativity of multiplication, the existence of 1, and the existence of multiplicative inverses can be deduced as theorems. We also provide a complete proof that the axioms in our system are mutually independent. Book Chapter [2] Jónsson modules over commutative rings, Chapter 1 of “Commutative Rings: New Research,” Nova Science Publishers, New York (2009), 1-6. (invited chapter) Abstract. This chapter collects results from several of my published papers, as well as some unpublished results from my dissertation. The chapter ends with a discussion of several open problems. Refereed Survey Paper [1] Jónsson and HS modules over commutative rings, International Journal of Mathematics and Mathematical Sciences, Special Issue "Rings and Related Topics" (2014), 120907, 13 pages. Abstract. This paper gives a comprehensive survey of Jónsson and HS modules over commutative rings. Most of the main results on these modules are listed in the paper, and many proofs are given to help the reader become acquainted with the tools used to study these structures. The paper closes with a list of open problems. Problems Posed [25] Problem #???, College Mathematics Journal (to appear) [24] Problem #???, College Mathematics Journal (to appear) [23] Problem #11750, American Mathematical Monthly 121 (2014), no. 1, p. 83. [22] Problem #1934, Mathematics Magazine 86 (2013), no. 5, p. 382. [21] Problem #1011, College Mathematics Journal 44 (2013), no. 5, p. 437. [20] Problem #1006, College Mathematics Journal 44 (2013), no. 4, p. 325. [19] Problem #11702, American Mathematical Monthly 120 (2013), no. 4, p. 365. [18] Problem #11658, American Mathematical Monthly 119 (2012), no. 7, p. 608. [17] Problem #985, College Mathematics Journal 43 (2012), no. 4, p. 338. [16] Problem #1900, Mathematics Magazine 85 (2012), no. 3, p. 229. [15] Problem #977, College Mathematics Journal 43 (2012), no. 3, p. 257. [14] Problem #968, College Mathematics Journal 43 (2012), no. 1, p. 95. [13] Problem #11617, American Mathematical Monthly 119 (2012), no. 1, p. 68. [12] Problem #946, College Mathematics Journal 42 (2011), no. 2, p. 151. [11] Problem #940, College Mathematics Journal 41 (2010), no. 5, p. 410. [10] Problem #1211, Pi Mu Epsilon Journal, Fall 2009, p. 560. [9] Problem #11451, American Mathematical Monthly 116 (2009), no. 7, p. 648. [8] Problem #1825, Mathematics Magazine 82 (2009), no. 3, p. 228. [7] Problem #892, College Mathematics Journal 40 (2009), no. 1, p. 55. [6] Problem #1810, Mathematics Magazine 81 (2008), no. 5, p. 376. [5] Problem #871, College Mathematics Journal 39 (2008), no. 2, p. 153. [4] Problem #11284, American Mathematical Monthly 114 (2007), no. 4, p. 358. [3] Problem #203, Math Horizons, September 2006, p. 40. [2] Problem #820, College Mathematics Journal 37 (2006), no. 1, p. 60. [1] Problem #11166, American Mathematical Monthly 112 (2005), no. 7, p. 654.
{"url":"http://www.uccs.edu/goman/publications.html","timestamp":"2014-04-16T18:58:06Z","content_type":null,"content_length":"81489","record_id":"<urn:uuid:163d9e5e-a251-4963-b3b9-89b6b9a82042>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
E5 MATLAB Assignment 4 Upon completion of the exercise you should be able to use MATLAB to • manipulate strings, • define and access elements of a structure, • view and manipulate properties of an object, • define and manipulate simple graphics objects • perform element-by-element vector operations. As you go through this document, you should enter the commands at the MATLAB prompt, and try variations on the commands to make sure that you fully understand. Reading from text You may want to refer to chapters 1, 2 and 5 of your text. To this point all of the data we have used has been numeric, either a scalar, a vector or an array. MATLAB has several other types of data that it can use. In this exercise we will explore some of A "string" is a vector of "characters". We define a string by putting a sequence of characters between quotes. >> myString = 'This is this.' myString = This is this. We can now access parts of this vector as we did with vectors of numbers. >> myString(1) % The first character ans = >> myString(1:4) % The first four characters ans = >> myString(9:end) % From character 9 to the end ans = We can form a longer string by forming it from two (or more) smaller strings as we would create a long vector from several smaller vectors. >> word1 = 'Know'; >> word2 = 'fear'; >> [word1 ' ' word2 '!'] % Note addition of space between words and punctuation. ans = Know fear! There are also functions to convert a number to a string of characters (num2str). This can be useful for displaying text and numbers in a single line (instead of using multiple "disp" commands as we did in previous labs. >> num2str(pi) % Note: Result will be a string. ans = >> disp(['The number pi is approximated as ' num2str(pi) '.']) The number pi is approximated as 3.1416. We can also convert from a string to a number (str2num) >> x=str2num('1.2356') x = To get more information about strings enter "doc strings" at the command prompt. Sometimes it is useful to create a variable with that comprises more than one piece of information. For example, we can define a circle by its radius and center. We could do this we two separate variables, or one variable (call it "myCircle") with two "fields," one called "radius" and one called "center" (and "center" is a two element vector consisting of the "x" and "y" values of the circle's center. >> myCircle.center = [2 2]; >> myCircle.radius = 4; >> myCircle myCircle = center: [2 2] radius: 4 Now we can access each part of the structure by using both the variable name and the field >> myCircle.radius ans = Note that the data need not be numeric, we can use strings instead. >> myName.firstName='Erik'; >> myName.lastName='Cheever'; >> myName myName = firstName: 'Erik' lastName: 'Cheever' >> disp(['Fullname is ' myName.firstName ' ' myName.lastName '.']) Fullname is Erik Cheever. We can even mix types of data >> myName.age = 71 myName = firstName: 'Erik' lastName: 'Cheever' age: 71 >> disp(['My name is ' myName.firstName ' and I am ' num2str(myName.age) ' years old.']); My name is Erik and I am 71 years old. "Objects" are in some ways similar to structures, but in addition to storing multiple pieces of information in a variable, a variety of operations that can be performed on the object are also defined. For objects the different pieces of information are called properties instead of fields (the name used with structures). It is not important (now) to understand how objects are created, but you will need to know how they can be used. A predefined type of "object" that you will use in the next several weeks is the "patch." First, set four points that are the vertices of a square >> xVals = [0 1 1 0]; >> yVals = [0 0 1 1]; % Define vertices of a square >> plot(xVals,yVals,'ro'); % Plot the points >> axis(2*[-1 1 -1 1]); % Set axis limits The "patch" command defines a graphics shape and as you will use it has three arguments: patch(x,y,c). The variable "x" hold the x-values of the vertices, "y" holds the y-values and "c" is a color. The color is a three element vector "[r,g,b]" where "r," "g," and "b" are numbers from 0 to 1 that specify the intensity of the color. Define a square patch that is red (c=[1 0 0]). >> myPatch = patch(xVals, yVals, [1 0 0]) myPatch = >> axis(2*[-1 1 -1 1]); % Set axis limits Don't worry about the number that comes back; it is used by MATLAB to uniquely define the object. To see all of the properties of the object use the "get" command (only some of the output is shown below; an ellipsis (...) is used where output is not shown). >> get(myPatch) EdgeAlpha = [1] EdgeColor = [0 0 0] FaceAlpha = [1] FaceColor = [1 0 0] Faces = [1 2 3 4] LineStyle = - LineWidth = [0.5] XData = [ (4 by 1) double array] YData = [ (4 by 1) double array] ZData = [] Visible = on To change the color, you can alter the "FaceColor" property using the "set" command. >> set(myPatch,'FaceColor',[0 1 0]); % Make face color green [r g b]=[0 1 0] You can also change the edge color and thickness by adjusting the appropriate properties. >> set(myPatch,'EdgeColor',[0 0 1]); % Make edge color blue [r g b]=[0 0 1] >> set(myPatch,'LineWidth',2); % Make edge thicker You can even change the shape by altering "XData" property >> set(myPatch,'Xdata',[0 1.5 1.5 0]); % Change Xdata It is also possible to set multiple properties at once >> set(myPatch,'XData',[0 1 1 0],'YData',[0 0 -0.5 -0.5]); % Change Xdata and YData or to access a single property >> myColor = get(myPatch,'FaceColor') % Get color property myColor = Operations with Two Vectors Define two row vectors "p" and "q" where "p" is the unit price of three objects (e.g., apples, oranges and bananas) and "q" is the quantity of each fruit that is purchased. >> p = [0.45 0.50 .28]; % Unit price for apples, oranges and bananas >> q = [10, 5, 2]; % Quantities purchased of the 3 items In this examples, apples are 45¢ each (and 10 were purchased), oranges are 50¢ (5 purchased) and bananas are 28¢ (two purchased). To try to find how much is spent on each type of fruit, you might try multiplying the two vectors (but this results in an error, as shown below). >> cost = p*q ??? Error using ==> mtimes Inner matrix dimensions must agree. The error arises because multiplication is not defined for two row vectors. However, what you want to do is multiply each element of "p" by the corresponding element in "q". In MATLAB this is done with the "element-by-element" multiplication operator ".*" (a period followed by an asterisk). >> cost = p.*q cost = 4.5000 2.5000 0.5600 This tells us we spent $4.50 on apples, $2.50 on oranges and 56¢ on bananas. There are many other operations that aren't defined for two row vectors, but element-by-element operations are generally available for these operations. A common operation is element-by-element division, "./". A less obvious operation is exponentiation, or raising a vector to a power. Since raising a number to a power is really just multiplying the number by itself multiple times, it shouldn't be surprising that we need an element-by-element operator. In the example below the value of the square of the elements of "p" is calculated - first incorrectly, then correctly. >> p^2 % Incorrect, this is equivalent to p*p and multiplication of row vectors isn't defined. ??? Error using ==> mpower Inputs must be a scalar and a square matrix. >> p.^2 % Correct! Element-by-element exponentiation (equivalent to p.*p ans = 0.2025 0.2500 0.0784 Some Simple Shapes Define a few shapes. Make sure you understand the following code - you may need to review your trigonometry. Come see me, Ann Ruether, or a Wizard if you have trouble figuring out the various shapes. You will have to add comments later on to demonstrate you understand. Note that the newer shapes are on top of the older shapes. >> theta=linspace(0,2*pi); % Theta evenly space from 0 to 2*pi >> circleX = 0.5*cos(theta); % X values for circle, radius = 0.5. >> circleY = 0.5*sin(theta); % Y values. >> myCircle = patch(circleX, circleY, [1 1 0]); % Yellow circle >> theta4 = linspace(0,2*pi,5); % Theta spaced every pi/2 (90 degs). >> dX = 0.5*cos(theta4); % X values for diamond ("radius"=0.5). >> dY = 0.5*sin(theta4); % Y values >> myDiamond = patch(dX, dY, [0 1 1]); % Cyan diamond (defined geometrically) >> xS1 = [0 1 1 0]; % X values for square shape 1. >> yS1 = [1 1 0 0]; % Y values. >> mySquare1 = patch(xS1,yS1,[1 0 0]); % Red Square (defined by vertices) >> sX2 = 0.25*cos(theta4+pi/4); % X values for square shape 2 ("radius"=0.25). >> sY2 = 0.25*sin(theta4+pi/4); % Y values >> mySquare2 = patch(sX2,sY2,[0 0 1]); % Blue square (defined geometrically) >> axis([-2 2 -2 2],'square') % Make axes square. We can move the diamond by changing its "XData" property >> set(myDiamond,'XData',dX-0.5) % Move diamond to left by 0.5 units by altering 'XData'. To understand the next shape to be defined consider the parabola shown below. The parobola is symmetric about the vertical axis and goes through the y-axis (x=0) with a value y[0] (in this case y[0]=0.5). It also goes through the point (x[1],y[1]) and in this case (x[1],y[1])= (1.75,2.25). It can be shown that this parabola is defined by the equation For this particular parabola, c=0.5 and a=0.5714. If we use this information we can create a "smile" shape. Note that the code is poorly commented on purpose. Your job is to figure out what it does. x=linspace(-1,1,100); % x goes from -1 to 1 upperLip = 0.5714*x.^2+0.5; lowerLip = x.^2; xSmile = patch ([x -x], [upperLip lowerLip], [1 0 1]); % Magenta "smile" To do The task this week may take a bit longer than the others because there is no predefined script. Instead you have to start from scratch and create your own file. Read the directions carefully to make sure you complete the specified task. Please be sure you comment the code clearly and thoroughly! Your task is to 1. Create the shapes shown (and described) below. Make sure the creation of each shape is well documented in your code. 1. A large yellow circle (the face).; defined geometrically (i.e., with sines and cosines) 2. A red "smile"; defined with two parabolas (as above - but you must comment the code to explain it) 3. A cyan equilateral triangle (with a vertex at the top).; defined geometrically (i.e., with sines and cosines). The "radius" of the nose is 0.75. 4. A blue hexagon (left eye); defined geometrically (i.e., with sines and cosines). The "radius" of the hexagon is 0.5. 5. A green rectangle (right eye); defined by vertices. Dimensions are 0.25 high by 1 wide. 2. After you have defined the element, start a new cell (this will cause the figure to be published). The published document must have the image above as well as the image below. 3. In the new cell adjust the properties of the various patch objects to move them around to form a face, as shown below. Don't define new patches, just use "set(...)" to change the properties of the existing patches. Exact placement of shapes is not important 4. Create another cell and in it define a variable "h" that is the approximate number of hours this assignment took you to complete. An example follows h=2.25; % It took me 2 hours and 15 minutes to complete the assignment. 5. Use the "disp" command, the variable "h", and the "num2str" command to display a sentence that looks something like the one below. This assignment took about 2.25 hours to complete. Please try to be accurate here - I'd like to get an idea how long this assignment took. 6. Publish the script. You are to turn in both the script (.m file) and the published documents (.doc or .docx file). Hints/techniques to consider while debugging your script: • To display the current state of the output figure while publishing, start a new cell. • To execute a single cell select "" at the MATLAB menu at the top of the screen (or type "Ctrl-Enter"). • To execute a single cell and move on to the next cell select "" (or "Ctrl-Shift-Enter"). • To execute the entire script go to "" (or "" if the file is already saved). Hitting the "F5" key also works. • Before you can "publish" the file to a Word document, the publishing tool needs to be configured. From the editor window choose . Change the file type to "doc" and the directory to a convenient place to store the file (perhaps the desktop, or your "H:" drive) as shown below. You can also review "publishing" in the first lab. • If you have to go through the publish process more than once, be sure to close the Word document so MATLAB can overwrite it. To turn in (via moodle) Grading is as follows: • 5 pts for clear and thorough commenting of your code. • 5 pts for clearly written and concise MATLAB code • 3 pts for clear and appropriate formatting of the Word document • 3 pts for a correct result (if the code is clearly thought out and presented, the result should be correct) • 2 pts for turning in the MATLAB (.m) file • 2 pts for turning in the Word or pdf (.doc or .pdf) file.
{"url":"http://www.swarthmore.edu/NatSci/echeeve1/Class/e5/E5M4/E5M4.html","timestamp":"2014-04-18T16:44:49Z","content_type":null,"content_length":"20514","record_id":"<urn:uuid:99837632-a9a1-4e3b-ae7b-84dea21cf186>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
Image Formation by Mirrors Step 1. Examine the situation to determine that image formation by a mirror is involved. Step 2. Refer to the Problem-Solving Strategies for Lenses. The same strategies are valid for mirrors as for lenses with one qualification—use the ray tracing rules for mirrors listed earlier in this
{"url":"http://cnx.org/content/m42474/latest/?collection=col11406/1.6","timestamp":"2014-04-17T12:37:32Z","content_type":null,"content_length":"223471","record_id":"<urn:uuid:685d974e-6a7c-4aad-af4a-2b90c6d9437e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Groups of order p^{e}m contains subgroups of order p^r for every integer r<=e. March 2nd 2011, 03:57 PM #1 Junior Member Groups of order p^{e}m contains subgroups of order p^r for every integer r<=e. Yes $p$ should be prime. That said, this isn't a trivial trivial theorem per se. That said, unless you know the first Sylow theorem and the fact that $p$-groups have a subgroup of every order dividing the group. Yes, similarly you should assume that $p$ is prime here too. Also, not a trivial exercise. 6.4 - Yes, it must be p a prime in the first case: $A_4\,,\,\,|A_4|=6\cdot 2$ , is a counterexample (p=6) 6.5 - The second question is false, too: $A_5\,,\,\,|A_5|=30\cdot 2$ , is a counterexample (p=30) . Even the condition $|G|>p$ is very weird, given the data of the question and, of course, it should say "a proper non-trivial subgroup, otherwise {1} makes the question itself trivial. Or, of course, I'm missing something. Yes $p$ should be prime. That said, this isn't a trivial trivial theorem per se. That said, unless you know the first Sylow theorem and the fact that $p$-groups have a subgroup of every order dividing the group. Yes, similarly you should assume that $p$ is prime here too. Also, not a trivial exercise. Ok, so I've been banging my head against the wall here for a while. I can't figure out how to proceed. I know that there is an element in $G$ that has order $p$. And I know that there is a Sylow $p$-subgroup in $G$, say $H\subseteq G$ such that $|H|=p^e$. And I also know that the center of $H$ is not trivial. I also know that if $x$ is an element in $G$, that is of order $p$ then $x$ is a positive power of $p$. That is, if $x^k$ has order $p$ then $k=p^{e-1}$, since Ord $(x^k)=\frac{|H|} {\gcd\bigl[k,|H|\bigr]}=p$. But I just don't know how to proceed from here. Can anyone give me some hints? Ok, so I've been banging my head against the wall here for a while. I can't figure out how to proceed. I know that there is an element in $G$ that has order $p$. And I know that there is a Sylow $p$-subgroup in $G$, say $H\subseteq G$ such that $|H|=p^e$. And I also know that the center of $H$ is not trivial. I also know that if $x$ is an element in $G$, that is of order $p$ then $x$ is a positive power of $p$. That is, if $x^k$ has order $p$ then $k=p^{e-1}$, since Ord $(x^k)=\frac{|H|} {\gcd\bigl[k,|H|\bigr]}=p$. But I just don't know how to proceed from here. Can anyone give me some hints? Let $p^k\| p^e m$ then by the first Sylow theorem there is a Sylow $p$-subgroup, say $H$. But, since $|H|=p^k$ and $e\leqslant k$ one has that (by the fact I said) that $H$ and thus $G$ has a subgroup of every $\ell\leqslant k$ and thus every $\ell\leqslant e$. Are you asking why a $p$-group has a subgroup of every order dividing it? Try using the fact that the converse of Lagrange's theorem is true for abelian groups (or prove the result is true for abelian $p$-groups...this is easy), the center of a $p$-group is non-trivial and inducting on the power of $p$...this is one of many ways. Ask if you get stuck. I'm sorry, I think I left some ambiguity as to what assumption I can make. According to my text (Artin), a Sylow $p$-subgroup of a group $G$, where $|G|=p^{e}m$ is a subgroup of $G$ that has order $p^e$. That said, the way the first Sylow theorem is stated in my book (and the way my class is using it) is as follows: A finite group whose order is divisible by a prime $p$ contains a Sylow $p$-subgroup. So I feel like what you are suggesting depends on my knowing that there is a subgroup of order $p^k$ where $k\leq e$. Based on the way we (my class/professor) are defining the first Sylow theorem, I don't think I can assume this. I realize that some books state the first Sylow theorem differently, and if I could apply that statement, then this problem would be proven just as you have suggested. I'm sorry, I think I left some ambiguity as to what assumption I can make. According to my text (Artin), a Sylow $p$-subgroup of a group $G$, where $|G|=p^{e}m$ is a subgroup of $G$ that has order $p^e$. That said, the way the first Sylow theorem is stated in my book (and the way my class is using it) is as follows: A finite group whose order is divisible by a prime $p$ contains a Sylow $p$-subgroup. So I feel like what you are suggesting depends on my knowing that there is a subgroup of order $p^k$ where $k\leq e$. Based on the way we (my class/professor) are defining the first Sylow theorem, I don't think I can assume this. I realize that some books state the first Sylow theorem differently, and if I could apply that statement, then this problem would be proven just as you have suggested. If you look at my suggestion it's still applicable. What I suggest is using the fact (Sylow's first theorem mine and yours) that a finite group has a subgroup of the maximal power of any prime dividing the order. March 2nd 2011, 06:37 PM #2 March 2nd 2011, 06:47 PM #3 March 4th 2011, 10:10 AM #4 Junior Member March 4th 2011, 11:19 AM #5 March 4th 2011, 12:31 PM #6 Junior Member March 6th 2011, 10:08 AM #7
{"url":"http://mathhelpforum.com/advanced-algebra/173219-groups-order-p-e-m-contains-subgroups-order-p-r-every-integer-r-e.html","timestamp":"2014-04-17T18:42:33Z","content_type":null,"content_length":"70170","record_id":"<urn:uuid:7d979e39-e76b-4c97-8c53-6cc1d1e0a561>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved. 'Cube Net' printed from http://nrich.maths.org/ How many tours that visit each vertex once and only once can be traced along the edges of a cube? How many of these tours can return to the starting point thus completing a Hamiltonian Circuit? How many different ways can the subsets of the set $\{a, b, c\}$ be arranged in a sequence so that each subset differs from the one before it by having exactly one element inserted or deleted?
{"url":"http://nrich.maths.org/2368/index?nomenu=1","timestamp":"2014-04-18T18:26:53Z","content_type":null,"content_length":"3318","record_id":"<urn:uuid:74440d6f-7a49-459f-838d-1e438e0509af>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
November 16, 2012 By Andrew (This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.) Richard McElreath writes: I’ve been translating a few ongoing data analysis projects into Stan code, mostly with success. The most important for me right now has been a hierarchical zero-inflated gamma problem. This a “hurdle” model, in which a bernoulli GLM produces zeros/nonzeros, and then a gamma GLM produces the nonzero values, using varying effects correlated with those in the bernoulli process. The data are 20 years of human foraging returns from a subsistence hunting population in Paraguay (the Ache), comprising about 15k hunts in total (Hill & Kintigh. 2009. Current Anthropology 50:369-377). Observed values are kilograms of meat returned to camp. The more complex models contain a 147-by-9 matrix of varying effects (147 unique hunters), as well as imputation of missing Originally, I had written the sampler myself in raw R code. It was very slow, but I knew what it was doing at least. Just before Stan version 1.0 was released, I had managed to get JAGS to do it all quite reliably. But JAGS was taking a long time to converge and then producing highly autocorrelated output. Stan has been amazing, in comparison. I could hardly believe the traceplots. Stan produces the same inferences as my JAGS code does, but with 8 hour runs (no thinning needed) instead of 30 hour runs (with massive thinning). In the future, I should be getting similar data for about a dozen other foraging populations, so will want to scale this up to a meta-analytic level, with partial pooling across societies. So the improved efficiency from Stan will be a huge help going forward as well. On the horizon, I have a harder project I’d like to port into Stan, involving cumulative multi-normal likelihoods. I wrote my own sampler, using likelihoods from pmvnorm in the mvtnorm package, but it mixes very slowly, once all the varying effects are included. Is there a clever way to get the same likelihoods in Stan yet? If not, once you have a guide prepared for how to compile in new distributions, I can probably use that to hack mvtnorm’s pmvnorm into Stan. I’m pretty sure that with some vectorization and other steps, he can get his model to run in much less than 8 hours in Stan. But I’m happy to see that even an inefficient implementation is working And Lucas Leeman writes: I just wanted to say thank you for Stan! Thank you and your collaborators very much! I had this problem with a very slow mixing chain and I have finally managed to get Stan to do what I want. With the mock example I am playing Stan drastically outperforms the software I was using In announcing this progress, I am not trying in any way to disparage Bugs or Jags. The success of these earlier programs is what inspired us to develop Stan. A few years ago, I had the attitude that I could fit a model in Bugs, and if that didn’t work I could program it myself. Now there’s Stan. Fitting a model in Stan is essentially the same as programming it myself, except that the program has already been optimized and debugged, thus combining the convenience of Bugs with the efficiency of compiled code. Also, again we thank the Department of Energy, Institute for Education Sciences, and National Science Foundation for partial support of this project. Please comment on the article here: Statistical Modeling, Causal Inference, and Social Science
{"url":"http://www.statsblogs.com/2012/11/16/stantastic/","timestamp":"2014-04-18T00:45:39Z","content_type":null,"content_length":"37752","record_id":"<urn:uuid:db85070d-c0e9-4024-ba30-414952e7b839>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Interpolation FP1 Formula Re: Linear Interpolation FP1 Formula Suppose you had the integral So now we let: We have Adding these two equations yields: Re: Linear Interpolation FP1 Formula Yes, that is a very nice idea. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula Unfortunately it doesn't seem to work nearly as often as one would like! Re: Linear Interpolation FP1 Formula It is a trick and as such you use it when you can. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula That's the thing when you learn a new trick... you want to use it on everything but really it will only work on a few problems. Re: Linear Interpolation FP1 Formula I think it was Alan Turing along with Alonzo Church that proved a long time ago that for an infinite number of problems you will require an infinite number of methods. My favorite trick involved an integral and a recurrence. Not so difficult but when I first so it I was amazed. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula What trick is that? Re: Linear Interpolation FP1 Formula Supposing you want to evaluate this integral: you might first embed it into a whole family of integrals called y. you could now work it like this. Now you have solved for the family y in terms of a useful recurrence relation. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula That is impressive. A question like that came up in my STEP III exam -- except we had a trig version. Re: Linear Interpolation FP1 Formula When I first saw it I loved it and tried to find more of them. But they are few and far between. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula Do you use a CAS for almost every integration problem you encounter? Re: Linear Interpolation FP1 Formula If I am doing an integration problem where there is something I need to learn how to do then I do it by hand no matter how long it takes. If someone is asking a question I will do it by hand for them and then use the CAS to check. But if I am doing another type of problem and an integral or a sum or a limit crops up as a small part of the problem I do not let it take my concentration off the bigger problem. If you get caught up in the small details you can lose sight of what you originally were working on. I always use a CAS there. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula I understand what you mean. And there are also lots of problems that CAS's give odd-looking answers for, and many integrals that they cannot do (yet can be done on paper). A lot of integrals in STEP were like that. Re: Linear Interpolation FP1 Formula CAS's give odd-looking answers for, and many integrals that they cannot do A CAS does not mean you shut your brain off. Sometimes you have to help him along. With practice you will find that the number of integrals, sums, DE's , limits and simplifications you can do will increase by a factor of 10. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula I was just looking at my UCL course and it appears that you can't go down the computational route without sacrificing the pure maths route... I can either fit into one category or the other! Re: Linear Interpolation FP1 Formula Who ever said you needed UCL to teach you computational math or any other type. They are mostly just good for supplying you with credentials and reputation. Like all the other things in life you will have to teach yourself. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula OK, but the problem is that if I do not take some computational modules then I won't be allowed to take other ones the next year (they claim I would not have the pre-requisite knowledge). I like to self-teach, but often when I do that, there are lots of holes in my knowledge. Re: Linear Interpolation FP1 Formula They are a product of self understanding. If you are forced down that road and that just might be the case you will have the satisfaction of knowing you are unique. If a hundred of them come out of some university they may not be fragmented but they are also bereft of imagination. All one hundred will be mirror images, all exactly alike. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula Yes, I do not want to be like that. But, hopefully my passion will never dwindle. Re: Linear Interpolation FP1 Formula Then if they will not teach then you will still find a way to learn. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula Hopefully the quality of teaching won't affect it too much. Re: Linear Interpolation FP1 Formula If you can decide what you most likely want to be then choose the one that fits that best. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula What do you mean? You mean, as a career? Re: Linear Interpolation FP1 Formula Not necessarily, although that would certainly be a way to decide. Even as a lifelong hobby. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula Seems like life is just flying by...
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=17344&p=465","timestamp":"2014-04-18T13:16:55Z","content_type":null,"content_length":"40371","record_id":"<urn:uuid:de761bd6-4284-406e-a3cf-aa24a7a30fe4>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
Strategies For Planar Directional Couplers, Part 1 Designing or selecting a planar directional coupler configuration is a matter of comparing options and weighing a large number of tradeoffs in terms of performance, reliability, and cost. Directional couplers are an important part of analog signal processing in microwave systems, including as portions of power dividers and combiners, in directional filters, attenuators, phase shifters, mixers, amplifiers, modulators, and beam-forming networks for antenna arrays.^1-19 They are also essential in test applications allowing, for example, measurements of high-power signals with sensitive test equipment by coupling a small sample of the total power. To build on a report begun in this magazine in September 2008, the first part of this two-part article will examine methods for implementing planar directional couplers, which can be fabricated both in discrete forms on printed-circuit boards (PCBs) or as part of monolithic- microwave integrated circuits (MMICs). Next month, Part 2 of this article will explore a tradeoff analysis of different coupler design approaches and how to choose among them to meet a specific set of requirements. A directional coupler is a reciprocal four-port device. With a signal applied to its input port, it provides two amplitude outputs. It is characterized by a number of parameters, including frequency range, bandwidth, coupling, directivity, isolation, matching, insertion loss, relative phase difference between output signals, phase imbalance, and amplitude imbalance (Fig. 1). Coupling is calculated as the ratio in decibels of the incident power fed into the input port of the main line of the directional coupler to the coupled port power of the secondary line when all ports are terminated by reflectionless terminations. A 3-dB directional coupler (hybrid network) is a special class of directional couplers in which signals at the two output ports are equal. Its insertion loss is the ratio (in decibels) of input power and output power of the main line with reflectionless terminations connected to ports of the directional coupler. Insertion loss is a combination of coupling loss, conductor loss, dielectric loss, isolation loss, and mismatched loss. Directivity is calculated as the ratio (in decibels) of power of the couple port and the isolated port when all ports are terminated by reflectionless terminations. Isolation is the ratio in decibels of power at an isolated port to available power at the input port. The isolation is equal to the sum of the directivity and coupling. A directional coupler's relative phase difference can be quadrature (Δφ = 90 deg.) or in-phase/out-of-phase (Δφ = 0 deg. or 180 deg.). A coupler's bandwidth is the range of frequencies for which a parameter falls within a specified limit with respect to certain characteristics. Couplers can be generally separated into narrowband (less than 20 percent) and broadband (greater than 20 percent) Figure 2 shows a design flow for a planar directional coupler. Defining a system-level specification is the first step in the design flow. This involves system-level requirements applied directly to a directional coupler, as well as derived requirements that depend on the system requirements. Directional couplers specifications include electrical, cost, size, and other requirements. The major parameters that define RF and microwave planar directional couplers are bandwidth, type of directivity, relative output phases (Δf), phase imbalance, coupling (C), amplitude imbalance, insertion loss (IL), matching or return loss (RL), isolation (ISO), integration level, and cost. A coupler's RF specifications include margin for manufacturing tolerances, environmental conditions, and performance degradation over a system's life. For all requirements, a designer must choose consecutive integer values of weighting coefficients, ki, corresponding to each parameter (the second step of design flow in Fig. 2), from k = 1 for the most important parameter. The maximum value of k can be less than or equal to the number of parameters, depending on whether some parameters are considered to have the same importance or not. Selection of a directional coupler prototype depends on all requirements, and must take into account the corresponding weighting coefficients. Selecting a directional coupler can be accomplished by using the following procedure: 1. Compare a prototype's normalized parameters, P[pri]/P[reqi] with the normalized requirements, P[pri]/P[reqi] = 1 and determine the deviations, Δ[i] = 1 (P[pri]/P[reqi]) for each prototype, from 1 to n. 2. Choose the weighting coefficients, k[i], for each parameter as described above, using k = 1 for the most important parameter (such as insertion loss). 3. Normalize the parameter deviation with respect to the weighting coefficient for each prototype, by means of Δ[i]/k[i]. 4. Add all the deviation values for each prototype, 5. Compare the sum of the deviations from prototype 1 to prototype n and choose the one with the minimum value of The final selection of a directional coupler prototype can be made by analysis of a circle diagram.6 The optimum prototype should have the minimum area between real and goal performance. Synthesizing a planar directional coupler is based on both system requirements and derived requirements. Synthesis results are the physical dimensions of a directional coupler. The analysis of a printed-circuit directional coupler entails definition of the electrical performance resulting from given physical dimensions. An electromagnetic (EM) software simulation may be used to create an S-parameter model of a directional coupler. Four-port directional couplers symmetrical with respect to one or two planes are frequently implemented in RF and microwave devices. A mirrorreflection method,^4,13 is widely used for analyzing symmetrical networks. For RF/microwave couplers, it is popular to analyze directional couplers by means of matrix representations. For analyzing and calculating the dimensions of symmetrical directional couplers, the following approach can be used^4: 1. Determine the transfer matrices of the two-port networks (the symmetrical parts of the four-port coupler) with even- and odd-mode excitation. In case of a cascade connection of two-port networks, the transfer matrix is equal to the product of transfer matrices of the component fourport coupler. 2. Determine the most important scattering element of the four-port coupler, for example, coefficient S[11], which characterizes the input matching. Continue to page 2 Page Title 3. Determine relationship among admittances (or impedances) of line segments of the directional coupler under a condition of perfect matching: S[11] = 0. 4. Calculate the remaining elements of the scattering matrix, accounting for any discovered relationships among admittances. 5. Determine the characteristics of the four-port coupler. A directional coupler's parameters can be simulated using a computeraided- engineering (CAE) software program such as the Advanced Design System (ADS) from Agilent Technologies. With a simulation program such as ADS, a designer must set up variable parameters that can be used to optimize the directional coupler. Analysis of manufacturing tolerances should be considered to avoid excessive manufacturing cost. For high-frequency directional couplers this analysis is especially critical. Planar microwave directional couplers can be designed in a variety of types, including ring directional couplers, branchline directional couplers, and coupled-line directional couplers. In the classical ring or "rat race" coupler with length 3λ/2 (Fig. 3), the spacing between adjacent ports 3 and 4 is 3λ/4; the spacing between all other adjacent ports (3 and 1, 1 and 2, and 2 and 4) is and λ/4. The coupler shown in Fig. 3(b) includes meander quarterwavelength and three-quarter-wave segments to reduce the physical dimensions of the circuit. The 3λ/2 ring coupler has the disadvantage of narrow bandwidth due to the increased length of the 3λ/4 section. Figure 3(c) shows a modification of the classic coupler, with the 3λ/4 section replaced by a coupled λ/4 line section two diagonal grounded ends and a fixed 180-deg. phase shifter. This hybrid ring configuration provides a one-octave bandwidth compared to the 15-percent bandwidth of the 3λ/4 coupler design. The phase-reversal section can be implemented in different configurations.^8 Figure 3(d) illustrates a modified ring coupler using a defected-groundstructure (DGS) design.9 The DGS approach provides for a reduction of coupler size and improved harmonic suppression. The DGS pattern is etched on the ground plane (dashed line) of the microstrip line. This basic DGS is composed of two wide defected areas and a narrow connecting slot, which form the equivalent parallel inductive-capacitive (LC) circuit. The capacitance depends on the etched gap (g) below the conductor line. The length of the connecting slot is the same as the width of the ring. As the etched area of the DGS pattern is increased, the effective series inductance increases. The defected ground of the microstrip line maintains the characteristic impedance of a conventional microstrip line, with the DGS conductor being wider than a conventional microstrip conductor. This is equivalent to increasing the shunt capacitance of the transmission line impedance. Increasing of the equivalent series inductance and shunt capacitance leads to an increase in the phase constant and slow-wave effect. The DGS provides a rejection band in certain ranges due to the incremental increase in the effective inductance of the microstrip line. The ring coupler8 uses six DGS sections Fig. 3(d)> that are embedded in the ring, so that the structure size and the level of the third harmonic can be significantly reduced simultaneously. The classic branch-line directional coupler Fig. 4(a)> consists of main line 1-3 coupled to secondary line 2-4 by λ/4-long branches spaced by λ/4. Figure 4 shows the different modifications that can be made to a branch-line coupler. The design in Fig. 4(b) includes meander λ/4 segments to reduce the physical size of the coupler. Figure 4(c) shows a dual-band branch-line coupler. 10,11 The two bands are realized by stubs tapped to the center of each λ/4 segment. By changing the absolute and relative length of stubs, different frequency ratios can be realized. The bandwidth of a branch-line coupler can be increased by increasing the number of branches. For example, Fig. 4(d) shows a three-branch coupler with broad bandwidth. Although additional branches can increase the bandwidth further, couplers with more than four branches are difficult to implement in microstrip because the end branches require difficult-torealize impedances. Figure 4(e) shows a three-branch coupler with power split regulation. The two additional reactive stubs are connected to the center branch. These reactances can be realized as open or short-circuit stubs. If port 1 is the input, the power split between ports 2 and 4 depends on the length of the stubs. A three-branch directional coupler Fig. 4(d)> can be converted into a lumped-element p network Fig. 4(f)>. For a center frequency, f[0], the quarter wavelength segment with characteristic impedance z can be represented by a p-section lumped-element equivalent circuit with series inductance L and two shunt capacitors C with the following values^4: L = z/2pf[0] and C = 1/2pf[0]z. A coupled line directional coupler (Fig. 5) includes two or more coupled lines close enough to each other to be coupled by electrical and magnetic fields. A conventional directional coupler with two coupled lines Fig. 5(a)> is a completely symmetrical four-port network. Perfect matching of this coupler occurs4 when Z[0e] x Z[0o] = 1 where Z[0e] = z[0e]/z[0] and Z[0o] = z[0o]/z[0] are the normalized impedances for the even and odd modes, respectively; and z0e and z0o are the non-normalized impedances for the even and odd modes, respectively. A fully planar conventional coupled-line coupler Fig. 5(a)> has less than 10 dB coupling due to lower realizable limit of the slot width in print technology. For example, a planar 3-dB microstrip directional coupler has a gap of less than 0.5 mil. At VHF and UHF, a classic directional coupler Fig. 6(a)> has large dimensions. Figure 5(b) shows a miniature directional coupler^4,7 comprised of two coupled lines with short length (less than λ/4). The secondary line output is electrically connected with series inductor L and shunt resistor R. The inductance value depends on the coupling flatness, mid-band frequency, and coupling value. The value of shunt resistor R depends on the impedance of the secondary line and the inductance value. The level of integration of this coupler is approximately five times greater than in other wellknown coupled-line designs. An original design for a 3-dB coupler was presented by Lange^11: an interdigital coupler consisting of several segments of stripline or microstrip line connected to cross wires . The Lange coupler provides tight coupling values with substantially wider gaps than are required for the conventional two-line coupler. It features 3-dB coupling over an octave or more bandwidth. Figure 5(d) illustrates the unfolded Lange coupler with four strips of equal length for simplified circuit modeling. The bandwidth of a coupled-line directional coupler can be increased by increasing the number of quarter- wave sections. Figure 5(c) shows a three-section directional coupler. The condition for ideal matching is Z[0e1]Z[0o1] = Z[0e2]Z[0o2] = 1 Z[0e1] and Z[0o1] = the characteristic impedances of the edge sections for the even and odd modes, respectively; Z[0e2] and Z[0o2] = the characteristic impedances of the middle section for the even and odd modes, respectively. The large imbalance between the effective dielectric constant and the related phase velocity for the even and odd modes of the microstrip coupled lines can lead to some limitations in the application of these couplers. Compensation of the differences in phase velocities is achieved by adding lumped-element capacitors in the middle Fig. 5(e)> or at the ends of the coupled section Fig. 5(g)>.^4,15 These capacitors do not affect the even-mode signal, but do affect the odd-mode signal, reducing its phase velocity. Continue to page 3 Page Title Another directional coupler design uses a sawtooth shape of coupled lines Fig. 5(h)>.^14 The sawtooth shape increases the path of odd-mode currents, having a minimum effect on even-mode currents, thereby again leading to a closer matching of phase velocities. An analogous structure utilizes a periodic step shape Fig. 5(i)>. Modifications to conventional coupled- line directional couplers include asymmetric, tapered line, broadside coupled lines, with additional dielectric layers, etc. Fig. 4(a)>. The performance of the various planar directional couplers already described is compared in Table 1. Figure 6 shows an example of the design flow for the selection of a planar directional coupler prototype. In this example, the requirements include an L-band frequency range, with weighting coefficient of the highest importance at k[1] = 1; a 90-deg. relative phase difference, with k[2] = 1; 3-dB coupling, with weighting coefficient k[3] = 1; 20- percent bandwidth with that having a weighting coefficient k[4] = 2; 0.2-dB maximum insertion loss, with weighting coefficient k[5] = 3; high directivity, with weighting coefficient k[6] = 4; 15 dB isolation, with weighting coefficient k[7] = 5; 15 dB return loss, with weighting coefficient k[8] = 6; and minimal cost, with weighting coefficient k[9] = 7. The selection of a directional coupler prototype starts with satisfying the most critical requirements, with weighting coefficient k[1] = k[2] = k[3] = 1 (step 5.1, 5.2, 5.3), and then the less critical requirements with k[4] = 2 (step 5.4), etc. The design flow (Fig. 6) shows that the optimum directional coupler prototype for the above specifications is the three-branch hybrid Fig. 4(d)>. A design strategy for couplers with printed transmission lines was described in ref. 6. The type of optimal transmission line depends on many different factors including a technology process. According to the directional coupler design flow (Fig. 2), a directional coupler prototype is selected (step 5) after the selection of the transmission line (step 3) and the technology process (step 4). Sometimes the directional coupler prototype with the selected early transmission line does not satisfy requirements because there are some limitations for directional couplers based on different transmission lines (Table 2). In this case, the transmission line should be reselected to satisfy directional coupler requirements. The selection of a transmission-line technology for a coupled-line directional coupler is critical. As mentioned earlier, a microstrip coupled-line directional coupler has poor directivity due to the difference between the propagation constants for odd and even modes. The advantage of a stripline coupled-line coupler is that the even- and odd-mode phase velocities are equal. In this coupler configuration, stripline offers better directivity and isolation than microstrip. Next month, this two-part article will conclude with a tradeoff analysis of different coupler types based on required performance specifications. The criteria for analysis include cost versus manufacturing tolerances, cost versus thermal characteristics, cost versus reliabiliy, cost versus loss, bandwidth versus amplitude balance, and various other considerations. 1. B. M. Oliver, "Directional Electromagnetic Couplers," Proceedings of the IRE, Vol. 42, November 1954, pp. 1686-1692. 2. R. Levy, "Directional Couplers," in L. Young, Ed., Advances in Microwaves, Vol. 1, Academic Press, New York, 1966. 3. G. L. Matthaei, L. Young, and E. M. T. Jones, Microwave Filters, Impedance Matching Networks, and Coupling Structures, Artech House, Dedham, MA, 1980. 4. L. G. Maloratsky, Passive RF & Microwave Integrated Circuits, Elsevier, New York, 2003. 5. L. G. Maloratsky, "Understand the Basics of Microstrip Directional Couplers," Microwaves & RF, February, 2001, pp. 79-94. 6. L. G. Maloratsky, "Design Strategy of RF and Microwave Integrated Circuits," Microwaves & RF, September, 2008. 7. L. G. Maloratsky, "Couplers Shrink HF/VHF/UHF Designs," Microwaves and RF, June 2000, pp. 93, 94, 96. 8. T. Wang and K. Wu, "Size Reduction and Band-Broadening Design Technique of Uniplanar Hybrid Ring Coupler
{"url":"http://mwrf.com/components/strategies-planar-directional-couplers-part-1","timestamp":"2014-04-17T21:53:40Z","content_type":null,"content_length":"97105","record_id":"<urn:uuid:9293ca9e-43c6-425a-adfa-8317937ba454>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
equivalence relations November 23rd 2009, 05:43 PM #1 Nov 2009 equivalence relations say we have a set W. We also have a non-empty set Z such that each element of Z is an equivalence relation on W. Must show that the nZ is an ER in W. (where n is intersection) how would this be proven, like i know that u must prove that it is reflexive, transitive and symmetric but how exactly would u start it? say we have a set W. We also have a non-empty set Z such that each element of Z is an equivalence relation on W. Must show that the nZ is an ER in W. (where n is intersection) how would this be proven, like i know that u must prove that it is reflexive, transitive and symmetric but how exactly would u start it? I am sorry, I don't understand what you mean. my question is let W be a set. Let Z be a non empty set such that each element of Z is an equivalence relation on W i must show that the nZ is an equivalence relation on W. (where nZ is the intersection of Z) how would i go about proving this is true? i understand that i have to prove that the relation is reflexive, anti-symmetric and transitive, but how would i do that my question is let W be a set. Let Z be a non empty set such that each element of Z is an equivalence relation on W i must show that the nZ is an equivalence relation on W. (where nZ is the intersection of Z) how would i go about proving this is true? i understand that i have to prove that the relation is reflexive, anti-symmetric and transitive, but how would i do that I'm sorry. I don't mean to sound ignortant, but I was unsure of notation. Do you mean $nz=\bigcap_{x\in z}x$? dont worrry about it and yes i did mean the intersection as u stated Maybe some other member can help you better than I, but I don't really undertsand what this question could possibly mean? What is the intersection of two equivalence relations? Unless, you mean the intersection of all the partitions induced by the relations? well im not sure myself this is the question i was given for my hmwk. well im assuming its intersection it looks exactly like the symbol for intersection but has no limit. Ok, now that I think about it I believe that I understand what this is saying. if $\sim$ is an equivalence relation on $E$ then $\sim$ is charcterized by $R=\left\{(a,b)\in E\times E:a\sim b\ right\}$. Maybe that if we consider the different relations on $E$ to be charchertized by $R_1,R_2,\cdots$ (not necessarily countable..I just wrote it that way for clarity) then perhaps they mean to show that $R_1\cap R_2\cdots=R$ is an the characterization of an equivalence relation. Does that sound right? Another member may swoop in an answer this btw. umm it might be right im not sure i thought in order to prove it was an equivalence relation on something one would have to prove that the relation is reflexive, transitive and symmetric The gist of the problem is to prove the following. Let $W$ be a set and let $R_1$ and $R_2$ be relations (i.e., subsets of $W\times W$) on $W$. Moreover, suppose that $R_1$ and $R_2$ are equivalence relations. Show that $R_1\cap R_2$ is an equivalence relation a well. (This solves the problem when the set $Z$ of equivalence relations has size 2, or, in fact, any finite size. To be strict, one has to prove this also for infinite $Z$, but the proof is essentially the same.) This should be easy to show by definition. Yeah, i agree with emakarov! November 23rd 2009, 06:21 PM #2 November 23rd 2009, 06:29 PM #3 Nov 2009 November 23rd 2009, 06:33 PM #4 November 23rd 2009, 06:36 PM #5 Nov 2009 November 23rd 2009, 06:39 PM #6 November 23rd 2009, 06:44 PM #7 Nov 2009 November 23rd 2009, 06:48 PM #8 November 23rd 2009, 06:51 PM #9 Nov 2009 November 24th 2009, 01:18 AM #10 MHF Contributor Oct 2009 November 24th 2009, 01:47 AM #11
{"url":"http://mathhelpforum.com/discrete-math/116389-equivalence-relations.html","timestamp":"2014-04-17T02:48:32Z","content_type":null,"content_length":"65601","record_id":"<urn:uuid:f937df72-2351-4366-a190-c2990efb76ef>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
Isometric embedding of a neighbourhood of a totally real submanifold in a Kähler manifold up vote 1 down vote favorite Let $(M,J,\omega)$ be a real-analytic Kähler manifold. Let furthermore $A \subset M$ be a real analytic, totally real, Lagrangian submanifold and set $g := h|_{A}$. Where $h$ is the Kähler metric on $M$. $g$ is now a Riemannian metric on $A$. Let $U$ be an arbitrary small neigbourhood of $A$ in $M$. Is it possible to embedd $U$ in some $\mathbb{C}^{N}$ isometrically? I think its always possible to embed such an arbitrary small neighbourhood $U$ in $\mathbb{C}^{N}$ for some $N$. But can this be also done isometrically? complex-geometry dg.differential-geometry riemannian-geometry sg.symplectic-geometry Do you want the embedding of $U$ into $\mathbb{C}^N$ to be holomorphic as well as isometric? That requirement is not in your question, and the answer depends on whether you add it or not. – Robert Bryant Dec 7 '12 at 12:55 yes, is it possible if the embedding is holomorphic? – hapchiu Dec 7 '12 at 13:18 I thaught that one can do the following: since $A$ is real analytic we can use the real analytic version of Nash embedding theorem and consider $A$ as a real analytic Riemannian submanifold of some $\mathbb{R}^{N}$. Then locally $A$ is the zero set of some real analytic functions. Extend these functions holomorphically and then patch them together, since on the overlaps of some open sets in $A$ these real analytic functions are the same, and the extension of them would be the same holomorphic function. Is this possible? – hapchiu Dec 7 '12 at 13:29 add comment 1 Answer active oldest votes In general, there is no holomorphic isometric embedding of the desired kind. In fact, the Lagrangian $A$ is a bit of a red herring, because most real analytic Kähler metrics cannot, even locally, be holomorphically and isometrically embedded into $\mathbb{C}^N$ for any finite $N$. For example, see "The complex version of Nash's Theorem is not true" for a up vote 2 down discussion of why and some counterexamples. vote accepted add comment Not the answer you're looking for? Browse other questions tagged complex-geometry dg.differential-geometry riemannian-geometry sg.symplectic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/115692/isometric-embedding-of-a-neighbourhood-of-a-totally-real-submanifold-in-a-kahler","timestamp":"2014-04-20T08:52:26Z","content_type":null,"content_length":"56097","record_id":"<urn:uuid:573910e0-7d1e-48d4-abb8-964554fd27b1>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
Characterization of Kleisli adjunctions up vote 3 down vote favorite There's a well known theorem due to Beck that characterizes when an adjunction is monadic, that is, if $F$ is left adjoint to $G$, $G:D \to C$, $GF:=T$ is always a monad on $C$, and the adjunction is called monadic, essentially, when $D$ is the Eilenberg–Moore category $C^T$ of $T$-algebras and $G$ is the forgetful functor. (For the precise definition see http://ncatlab.org/nlab/show/ monadic+adjunction). I was wondering if there was a similar characterization to determine when $D$ is the Kleisli category of FREE $T$-algebras? ct.category-theory monads add comment 1 Answer active oldest votes There is a unique functor $\mathbf{Kl}(GF) \rightarrow \mathbf{D}$ commuting with the adjunctions from $\mathbf{C}$, since the Kleisli category is initial among adjunctions inducing the given monad; and this functor is always full and faithful, since $\mathbf{Kl}(GF)(A,B) \cong \mathbf{C}(A,GFB) \cong \mathbf{D}(FA,FB)$. So this functor will be an equivalence iff it is essentially surjective, and an isomorphism iff it is bijective on objects. But its object map is just the object map of $F$. up vote 7 down vote accepted So $\mathbf{Kl}(FG)$ is equivalent to $\mathbf{D}$ compatibly with the adjunctions from $\mathbf{C}$ precisely when $F$ is essentially surjective, and isomorphic just when $F$ is bijective on objects. Thanks very much! – David Carchedi May 27 '10 at 6:21 So, I guess this implies that if $F$ is left adjoint to $G$ and $G$ does not reflect isos, then $F$ cannot be essentially surjective? – David Carchedi May 27 '10 at 6:28 Yep, I think so! More generally, $G$ will always be full and faithful on the essential image of $F$, and hence reflect isomorphisms there. – Peter LeFanu Lumsdaine May 27 '10 at add comment Not the answer you're looking for? Browse other questions tagged ct.category-theory monads or ask your own question.
{"url":"http://mathoverflow.net/questions/26075/characterization-of-kleisli-adjunctions/26106","timestamp":"2014-04-18T14:14:15Z","content_type":null,"content_length":"54479","record_id":"<urn:uuid:e2a5b5bc-b65d-4d3d-96b2-888cf18b76d7>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
tranny swap? [Archive] - Scionlife.com was just been curious recently.... it would be nice to have that extra 6th gear for the higher freeway speeds was just wondering if possible at all or anyone ever think about switching the matrix xrs/celica gts 6spd tranny onto the xb2 im not to knowledgeable about this topic so was just wondering if its even possible? if so if it would be worth while?
{"url":"http://www.scionlife.com/forums/archive/index.php/t-130219.html","timestamp":"2014-04-17T18:45:37Z","content_type":null,"content_length":"9304","record_id":"<urn:uuid:3eec0f90-e9bb-4a11-857b-8ca1f5643ac0>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Conformal Mapping by Computationally Efficient Methods Last modified: 2010-07-03 Dimensionality reduction is the process by which a set of data points in a higher dimensional space are mapped to a lower dimension while maintaining certain properties of these points relative to each other. One important property is the preservation of the three angles formed by a triangle consisting of three neighboring points in the high dimensional space. If this property is maintained for those same points in the lower dimensional embedding then the result is a conformal map. However, many of the commonly used nonlinear dimensionality reduction techniques, such as Locally Linear Embedding (LLE) or Laplacian Eigenmaps (LEM), do not produce conformal maps. Post-processing techniques formulated as instances of semi-definite programming (SDP) problems can be applied to the output of either LLE or LEM to produce a conformal map. However, the effectiveness of this approach is limited by the computational complexity of SDP solvers. This paper will propose an alternative post-processing algorithm that produces a conformal map but does not require a solution to a SDP problem and so is more computationally efficient thus allowing it to be applied to a wider selection of datasets. Using this alternative solution, the paper will also propose a new algorithm for 3D object classification. An interesting feature of the 3D classification algorithm is that it is invariant to the scale and the orientation of the surface.
{"url":"http://www.aaai.org/ocs/index.php/AAAI/AAAI10/paper/viewPaper/1779","timestamp":"2014-04-16T07:31:09Z","content_type":null,"content_length":"13337","record_id":"<urn:uuid:7e94c847-b797-4e03-b6f7-08c45fb0a6aa>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
A nonlinear regression approach to the evaluation of reverberation ASA 126th Meeting Denver 1993 October 4-8 4pAA4. A nonlinear regression approach to the evaluation of reverberation times using Schroeder's integrated impulse response method. Ning Xiang HEAD Acoust., Kaiserstr. 100, D-5120 Herzogenrath 3, Germany W. Ahnert R. Feistel ADA Acoust. Design Ahnert, Berlin, Germany Reverberation decay curves can be obtained by backward integration of room impulse responses [M. R. Schroeder, J. Acoust. Soc. Am. 37, 409--412 (1965)]. The evaluation of reverberation times is often achieved by a regression line fitting the reverberation decay curves. However, the successful application of this method requires either a careful choice of the integration limit or estimation of the mean-square value of background noise where background noise is present in the room impulse responses to be evaluated. In the present paper, an alternative method for evaluating reverberation times from Schroeder's decay curves using a nonlinear iterative regression approach is proposed. The regression process is based on a nonlinear curve model using the generalized least-square error principle rather than a linear model as used in the linear regression. The present paper will describe the principle of this approach and discuss the advantages and disadvantages involved. Comparison of results obtained using this approach and alternative methods will also be presented.
{"url":"http://www.auditory.org/asamtgs/asa93dnv/4pAA/4pAA4.html","timestamp":"2014-04-18T01:50:13Z","content_type":null,"content_length":"1864","record_id":"<urn:uuid:b4e3f2fe-52b5-45b0-9d21-2beb70b33014>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
Quadratic Equation Calculator A free quadratic equation calculator that shows and explains each step in solving your quadratic equation. Welcome to Quadratic-Equation-Calculator.com, a free quadratic equation solver and tutorial. To solve your quadratic equation, and to get a step-by-step explanation of how that solution is reached, type your values for a, b, and c in the boxes above. Or, click here for a random example of how to solve a quadratic equation using the quadratic formula. A quadratic equation is any equation of the form ax^2+bx+c=0, where a, b, and c are constants, and x is an unknown. Specifically, it's the term x^2 that makes an equation quadratic. The term "quadratic" comes from the Latin word for "square." Quadratic equations are used in all branches of science. Real-world applications of quadratic equations are found in equations of motion, the pharmacokinetics of some drugs, and even business. The geometries of parabolic antennas and mirrors are defined by quadratic functions. There are several ways to solve a quadratic equation. Our quadratic equation calculator uses the quadratic formula, and shows you step-by-step how you can solve your quadratic equation using the quadratic formula, too. The quadratic formula is further described here. To solve a quadratic equation ax^2+bx+c=0, you need to solve for x. It turns out that x always has 2 values, called "roots." These 2 values for x may both be real numbers, or they both be complex numbers. Under very special circumstances, the 2 roots may be the same number, in which case there is only one solution. Give our quadratic equation solver a try. Plug in your values for a, b, and c into the spaces above, and click "SOLVE FOR X". Or, click here for a random example.
{"url":"http://quadratic-equation-calculator.com/","timestamp":"2014-04-19T14:37:09Z","content_type":null,"content_length":"10486","record_id":"<urn:uuid:7b612c7f-3daf-4a39-a426-5f20494bf4b7>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
The two trains Two trains 140 meters and 160 meters long run at the speeds of 60 km/hour and 40 km/hour respectively in the opposite directions on parallel tracks. Find the time in seconds which they take to cross each other. Let's solve this using geogebra. First we will have to represent the two trains. 1) Scale the x - axis from -100 to 300 and the y axis from 0 to 6. 2) Draw a slider called t with Min = 0 and Max = 11 and increment 0f .001, Repeat = Increasing (once). 3) Create points A,B,C,D and E anywhere on the graph in quadrant 1. 4) For A enter (11.1111111*t, 3) in its definition. For B enter (11.1111111*t + 160, 3). 5) Draw a line segment between A and B. 6) For C enter (-16.66667 t + 160, 1) in its definition. For D enter (-16.66667 t + 300, 1). 7) Draw a line segment between C and D. 8) For E enter (-16.66667 t + 300, 3). 9) Draw a vector colored red from D to E and Hide E. 10) Move the slider and you should see AB and CD moving in opposite directions and carrying the vector with them. These represent the two trains relative length and speeds. 11) Run the animation of the slider until the red arrow is directly under A and then press pause. Adjust using the shift arrow keys and eyeball the best answer. Read t or time from the slider. 12) Check the first drawing for how it should look before the run and the second one for after.What did you get? Do you agree with the algebra answer of 10.8? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://mathisfunforum.com/viewtopic.php?pid=279717","timestamp":"2014-04-19T14:49:41Z","content_type":null,"content_length":"15212","record_id":"<urn:uuid:7934e322-177e-456d-a9e5-c8685fdc1cc5>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
Describe a topic in one sentence. up vote 41 down vote favorite When you study a topic for the first time, it can be difficult to pick up the motivations and to understand where everything is going. Once you have some experience, however, you get that good high-level view (sometimes!) What I'm looking for are good one-sentence descriptions about a topic that deliver the (or one of the) main punchlines for that topic. For example, when I look back at linear algebra, the punchline I take away is "Any nice function you can come up with is linear." After all, multilinear functions, symmetric functions, and alternating functions are essentially just linear functions on a different vector space. Another big punchline is "Avoid bases whenever possible." What other punchlines can you deliver for various topics/fields? soft-question big-picture big-list 11 This is a very good question, but to be useful and not just fun one should look critically at many of the answers below. – Gil Kalai Nov 8 '09 at 7:54 7 Gil, I am very skeptical about the value of this question. I don't think many of the answers given are that useful, because one won't get the punchlines unless one has acquired experience in the subject (and then, why would you need the punchline?). – Todd Trimble♦ May 20 '11 at 13:27 1 @Todd: to get fodder for a cocktail party level conversation.... – Suvrit Aug 28 '12 at 14:32 3 @Suvrit: I guess it would be more of a "Big-Bang-Theory"-kind of party ;-) – vonjd Oct 7 '12 at 18:37 add comment closed as no longer relevant by Felipe Voloch, Todd Trimble♦, Andres Caicedo, Joël, Alex Bartel Oct 8 '12 at 8:48 This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center.If this question can be reworded to fit the rules in the help center , please edit the question. 50 Answers active oldest votes One punchline in algebraic geometry is that all commutative rings are actually the ring of functions on some space. up vote 35 down vote add comment QFT — every expression converges after a Wick rotation. up vote -2 down 7 Wick rotation isn't what leads to convergence. A better sentence might be "Large size asymptotics of the moments of regularized path integrals are independent of the choice of regularization." – userN Oct 24 '09 at 15:07 add comment Complex Analysis: Holomorphic functions are just rotations and dilations up to the first order. Hold on... up vote 37 down vote Calculus: Differentiation is approximation by a linear map. 3 I like your description of calculus -- I am teaching multivariable calculus this semester, and I think the students have a hard time accepting that the "right" definition of differentiability is that a good linear approximation exists, instead of the more natural-seeming idea that all of the first partials exist. – Gabe Cunningham Oct 22 '09 at 19:16 3 About that description of complex analysis, see Needham's Visual Complex Analysis. – lhf Nov 8 '09 at 23:39 2 @Gabe: I'm teaching multivariable calculus this semester too, but I defined the derivative to be the linear approximation first, and then introduced partial derivatives as a useful computational technique. – Jeff Strom Oct 27 '10 at 18:26 show 3 more comments Complex Analysis: Taylor series behave the way you want them to in real analysis. up vote 34 down 69 When I was taking complex analysis, I remember someone saying "Complex analysis is the Disneyland of mathematics" because so many incredible theorems turn out to be true. – John D. Cook Oct 25 '09 at 1:51 add comment Homological algebra - In an abelian category, the difference between what you wish was true and what IS true is measured by a homology group. up vote 50 down vote 2 @Colin: One wants certain functors to be exact, e.g., the Hom-functor gives Exts, tensoring with a module gives Tor. – J.C. Ottem Feb 28 '11 at 0:26 For example, once I was comparing $\overline{I\cap J}$ to $\overline{I}\cap \overline{J}$, where the bar denotes taking the associated graded module with respect to some filtration of 2 $R$-ideals $I$ and $J$. I suspected that there was some homology group which vanished exactly when those coincided, and I was correct (it was a rather complicated $Tor$). – Greg Muller Feb 28 '11 at 2:05 show 4 more comments Analytic combinatorics: generating functions are awesome. up vote 25 down ("generating functions are awesome" is actually the title of a talk I gave a couple weeks ago.) 2 There is also the book by Flajolet and Sedgewick, which is available at algo.inria.fr/flajolet/Publications/books.html – lhf Nov 10 '09 at 11:50 @Andrew L : While I didn't vote this answer up, it is at least (arguably) correct. Your answer, on the other hand, reveals a profound misunderstanding of probability theory. Though 10 probability theory uses many tools from real analysis (eg measure theory), the way it uses those tools and the intuition/philosophical explanation behind them is completely different from those of traditional real analysis. Not to mention that your answer pretends there doesn't exist a giant field of finitary probability that is much more closely connected with combinatorics than with real analysis. – Andy Putman Oct 28 '10 at 1:56 15 I can't believe someone would come along a year later and make this comment. – Michael Lugo Oct 28 '10 at 3:50 1 @Michael : Obviously, you have not been following the saga of Andrew L... – Andy Putman Oct 28 '10 at 4:18 2 @Andy Wise-ass comment to Micheal aside,you made a very fair objection above. Discrete probability is fully half the science.I could counter it by saying combinatorics is essentially analysis on finite sets,but that's a real stretch. – Andrew L Oct 28 '10 at 4:25 show 3 more comments Lie groups: Think locally, act globally. ;) up vote 36 down vote 12 This applies to many other areas as well. – Gil Kalai Nov 8 '09 at 7:48 1 @Gil I totally agree.In fact,this can be the slogan for topology in general with some slight modifications. – Andrew L Oct 27 '10 at 20:57 4 Less catchy, but: "think at the identity, act globally" is more specific to Lie theory. – Paul Siegel Jan 7 '12 at 18:08 show 1 more comment Sobolev spaces: H = W up vote 6 down (There are ostensibly two kinds of Sobolev spaces, denoted with H's and W's, plus some superscripts and subscripts. Someone wrote a paper showing that the two kinds were equivalent vote and entitled their paper "H=W.") 1 Just in case anyone is interested, the paper is ams.org/mathscinet-getitem?mr=164252 by Meyers and Serrin – Willie Wong May 3 '10 at 13:12 4 And the "H" is a cyrillic en, that stands for S.M.Nikolsky. – Pietro Majer Dec 10 '10 at 23:57 show 1 more comment Operator theory: all separable infinite-dimensional Hilbert spaces are isomorphic, but they aren't all the same and moving your problem between them works wonders. up vote 25 down vote add comment Linear algebra: everything can be explained by a linear system. up vote 2 down vote 4 explained, or approximated? – Colin Tan May 20 '11 at 10:11 add comment Numerical analysis: The purpose of computing is insight, not numbers. — Richard Hamming (1962) up vote 22 down vote 20 There's also: The purpose of computing numbers is not yet in sight. — Richard Hamming (1971) – lhf Nov 3 '09 at 0:52 add comment Algebraic geometry: CommRing behaves a lot like Set^op. up vote 18 down vote add comment Logic teaches us that (untrained) intuition is often wrong, but that when it's right, it's for the wrong reason. up vote 15 down vote 2 Deeper than it looks like at first sight, you shouldn't vote it down so easily! – Jose Brox Nov 8 '09 at 2:19 show 2 more comments up vote Noncommutative Ring Theory: If it is not modules, then it is idempotents. 10 down 2 This seems a bit too cryptic for me... – Yemon Choi Nov 8 '09 at 10:09 Well, when you try to prove some (non too-far-fetched) fact in Noncommutative Ring Theory, you have roughly two main families of techniques to resort to: 1) Techniques which involve 4 modules. Facts about one-sided ideals, the categorical viewpoint, K-theory over the monoid of finitely generated projective modules, homological tools... 2) Techniques which involve idempotents. Taking corners, rings with local units, rings with enough idempotents, the Peirce decomposition... That's what I tried to comprise by this sentence ;-) – Jose Brox Nov 11 '09 at 11:00 add comment Navier-Stokes Equations: Energy estimates and more energy estimates. up vote 2 down vote *I suppose this goes for most non-linear PDEs add comment Real Analysis: Get your hypotheses right, or suffer the counter-examples! up vote 33 down Measure Theory: "Every [measurable] set is nearly a finite union of intervals; every [measurable] function is nearly continuous; every convergent sequence of [measurable] functions vote is nearly uniformly convergent." -- J.E. Littlewood 13 It's a question of where you put the quantifiers. For almost every point, the value is almost the same as it is at almost every nearby point. – gowers Oct 28 '10 at 16:59 show 2 more comments The bonniest mot I can ever recall — from some graduate algebra course: up vote 17 down vote • "Free" is just another word for nothing to do on the left. 6 In algebra, "freedom's just another word for nothing left to lose". :-) – Todd Trimble♦ May 20 '11 at 13:23 add comment Another favorite of mine … up vote 1 down vote • Redundancy is the essence of information. add comment • Generating functions are the 19th Century analog of addressable memory. up vote 2 down vote add comment One of my favorites: up vote 26 down vote "Algebraic topology is the "art" of Not doing the integral" add comment Linear Algebra is the correct generalization of dimension. (This came from Hubbard) up vote -1 down vote 18 I thought $K$-theory was! – Mariano Suárez-Alvarez♦ Dec 13 '09 at 4:15 add comment "set theory is the study of well-foundedness" - A.R.D Mathias up vote 5 down vote add comment Geometric group theory: the large-scale geometry of a group is invariant under quasi-isometry. up vote 11 down vote add comment Configuration space integrals: Don't take limits- compactify! up vote 8 down vote Dror Bar-Natan explained this punchline to me when I was just starting grad school. add comment Statistics: every parameter is learnable by sampling. up vote 6 down vote show 1 more comment Representation theory of Lie groups: there is a whole world between $\mathrm{Sym}^n V$ and $\wedge^n V$. (Okay, this is an oversimplication - I am talking about the representations of $\mathrm{GL}\left(V\right)$ here, but this is the fundament of all other classical groups.) Constructive logic: if you can't compute it, shut up about it. (At least some forms of constructive logic. Brouwer seemed to have a different opinion iirc.) Homological algebra: How badly do modules fail to behave like vector spaces? Gröbner basis theory: polynomials in $n$ variables can be divided with rest (at least if you have some $O\left(N^{N^{N^{N}}}\right)$ of time) Finite group classification: what works for Lie groups will surely be even simpler for finite groups, right? ;) up vote 12 Algebraic group theory: In order to differentiate a function on a Lie group, we just have to consider the group over $\mathbb R\left[\varepsilon\right]$ for an infinitesimal $\ down vote varepsilon$ ($\varepsilon^2=0$). Semisimple algebras: The representations of a sufficiently nice algebra mirror a structure of the algebra itself, namely how it breaks into smaller algebras. $n$-category theory: all the obvious isomorphisms, homotopies, congruences you have always been silently sweeping under the rug are coming back to have their revenge. Modern algebraic geometry (schemes instead of varieties): let's have the beauty of geometry without its perversions. How many of these did I get totally wrong? 12 I'm sure at least some people would reverse the last one... – Ketil Tveiten Oct 28 '10 at 11:56 1 D Grinberg, surely you meant ‘Lie group’ rather than ‘Lie algebra’ in the finite-group classification? – L Spice May 21 '11 at 6:38 2 @n-category theory: I would definitively watch that movie! :-D – Johannes Hahn Oct 7 '12 at 18:50 show 2 more comments Algebraic geometry is the study of the intrinsic properties of any mathematical object which can be locally described by polynomial equations. up vote 1 down vote Algebraic geometry is not about solving systems of polynomial equations, rather it's about studying the intrinsic properties thereof. add comment Analytic Number Theory: log log log log log... up vote 14 down vote Did I see that quote in Havil's book Gamma? add comment Dirichlet forms: a symmetric Markov process is a self-adjoint operator is a closed symmetric form is a Markovian semigroup. up vote 1 down vote (I've left out a lot of hypotheses, but the essence is that all these are in correspondence, and the properties of any one appear in the others.) add comment Functional analysis: Everything you know from linear algebra is true, under the right conditions; otherwise it's false. up vote 32 down 13 Like MO points are the end-all and be-all of existence. – Ketil Tveiten Oct 28 '10 at 11:57 13 One difference is that whereas most linear algebra concepts generalize nicely to, say, Banach spaces, differentiation, perhaps the most basic concept of calculus, doesn't make sense in a topological space. – gowers Oct 28 '10 at 14:17 2 I like this one because despite its tautological flavor, it is not. – Pietro Majer Dec 30 '10 at 17:41 1 ... differentiation being just another linear operator.... under the right conditions. :) – paul garrett Oct 7 '12 at 17:31 show 2 more comments Not the answer you're looking for? Browse other questions tagged soft-question big-picture big-list or ask your own question.
{"url":"http://mathoverflow.net/questions/1890/describe-a-topic-in-one-sentence?answertab=oldest","timestamp":"2014-04-16T05:02:40Z","content_type":null,"content_length":"175655","record_id":"<urn:uuid:2688ffba-e582-4f59-b3b3-b48ecbedbe06>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
A projectile is shot straight up from the Earth's surface 1. The problem statement, all variables and given/known data A projectile is shot straight up from the earth's surface at a speed of 1.10×10^4 km/hr 2. Relevant equations 3. The attempt at a solution I converted the speed to m/s and 3055.5556 m/s masses cancel out so i get 9.8h=.5v^2 I plugged in v and solved for H and got 476347.95m, which was wrong. I'm not sure if this is how you are supposed to solve this problem, but i can't think of any other ways.
{"url":"http://www.physicsforums.com/showthread.php?t=685965","timestamp":"2014-04-16T10:38:08Z","content_type":null,"content_length":"48646","record_id":"<urn:uuid:2f671279-9876-4cc4-96b9-ec14b052b522>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
Maywood, IL Algebra 2 Tutor Find a Maywood, IL Algebra 2 Tutor ...Beyond this academic instruction I have worked with students since I was in college on the side helping them optimize their own study habits and techniques for both their classwork but also their approach to test prep. Often times to master a substantial amount of material in a limited amount of... 38 Subjects: including algebra 2, Spanish, reading, statistics ...It all boils down to some pretty straightforward concepts: - Numbers (Natural, Whole, Integers, Rational, Real, Imaginary, Complex) - Operations (Add, Multiply, Exponent) - Formatting (Money, Percents, Fractions, Decimals, Measurements, etc.) I keep no mystery from my students. I find most co... 14 Subjects: including algebra 2, geometry, GRE, ASVAB ...I speak, read, and write fluently in both English and Polish. I was first introduced to Microsoft Outlook about 2 years ago. I really enjoy it. 36 Subjects: including algebra 2, English, ACT English, ACT Reading ...I have worked with both young children and college students and always adjust my teaching style in order to appease this wide array of ages. Everyone has a different learning style and I try to distinguish my student's style as quickly as possible in order for my tutoring to be rendered effectiv... 21 Subjects: including algebra 2, reading, chemistry, English ...As a result I have come to know this exam very well. I have great success with my students getting accepted to some of the most prestigious independent schools. I also work with students preparing them for their placement exams. 24 Subjects: including algebra 2, calculus, geometry, algebra 1 Related Maywood, IL Tutors Maywood, IL Accounting Tutors Maywood, IL ACT Tutors Maywood, IL Algebra Tutors Maywood, IL Algebra 2 Tutors Maywood, IL Calculus Tutors Maywood, IL Geometry Tutors Maywood, IL Math Tutors Maywood, IL Prealgebra Tutors Maywood, IL Precalculus Tutors Maywood, IL SAT Tutors Maywood, IL SAT Math Tutors Maywood, IL Science Tutors Maywood, IL Statistics Tutors Maywood, IL Trigonometry Tutors Nearby Cities With algebra 2 Tutor Bellwood, IL algebra 2 Tutors Berwyn, IL algebra 2 Tutors Broadview, IL algebra 2 Tutors Brookfield, IL algebra 2 Tutors Elmwood Park, IL algebra 2 Tutors Forest Park, IL algebra 2 Tutors Forest View, IL algebra 2 Tutors Franklin Park, IL algebra 2 Tutors Melrose Park algebra 2 Tutors Oak Park, IL algebra 2 Tutors River Forest algebra 2 Tutors River Grove algebra 2 Tutors Stickney, IL algebra 2 Tutors Stone Park algebra 2 Tutors Westchester algebra 2 Tutors
{"url":"http://www.purplemath.com/maywood_il_algebra_2_tutors.php","timestamp":"2014-04-17T15:39:58Z","content_type":null,"content_length":"23979","record_id":"<urn:uuid:66b099a3-60b4-40e4-b4ef-c0b4d58acc6a>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Perfect Sphere's colliding using only vectors The problem I have is I need to find the force applied to a perfect sphere after colliding with another perfect sphere. For both spheres I have: position as <x,y> Velocity as <x,y> coefficient of restitution I am writing a program which simulates a bunch of balls bouncing around so I need to have an series of equations that when using the above values for two balls (that I know are colliding) it gives a force (as <x,y>) that is applied to the current sphere. The force being applied instantaneously. I have searched online and read up on (which only does one dimensional) and other sites like . Of the sites that do go into two dimensional they use angle and magnitude, or directly change the velocity of the sphere. I only want the force applied during the collision. I have a decent knowledge of basic physics but for whatever reason I just cannot get this to work properly. What I currently have is a Frankenstein of the wiki page that works relatively well, but it treats every collision as head-on so it isn't ideal.
{"url":"http://www.physicsforums.com/showthread.php?s=0d029e43e28ada4db794f4cd171e6d62&p=4610961","timestamp":"2014-04-18T18:18:45Z","content_type":null,"content_length":"39309","record_id":"<urn:uuid:2096b254-4b19-4958-a0e7-e7adb44a6c5c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
Lesson 6: Follow the Bouncing Ball This lesson introduces conditional logic and vector math by adding code to check whether the ball hits a wall and changing its velocity based on the collision. This lesson starts off with the code as it is at the end of lesson 5. At the end of this lesson, your code should look something like this: Now that the ball is moving, let's say we want to make it reverse direction when it hits the edges of the window. To do this, we will need to mimic the physics of objects bouncing off of a surface. Ignoring the energy lost as the ball bounces, a simple model for this is to keep the velocity parallel to the surface the same, and reverse the velocity perpendicular to the surface. Since our boundaries are parallel to the x and y axis, we simply need to reverse either the x velocity or the y velocity when the ball hits a wall. So we need to add code for the following behaviors: 1. Detect that a wall collision has taken place. 2. Reverse the x or y velocity appropriately. In lesson 5 the x and y velocity were constants. These need to be changed to variables, just like the x and y positions were made variables in the last lesson. Declare and initialize a Vector2 object at the top of the class just like spriteLocation was declared previously. The new line of code will look as follows: Vector2 spriteVelocity = new Vector2(1f, 1f); This sets the initial values of the x and y velocity to the same values as previously, just now they are variables and can be changed while the program is running. Now in the Game1.Update() method, change spriteLocation.X = spriteLocation.X + 1; spriteLocation.Y = spriteLocation.Y + 1; spriteLocation = spriteLocation + spriteVelocity; This may look a little strange, since we don't explicity refer to X and Y. Since spriteLocation and spriteVelocity are both of type Vector2, he can be added together and the Vector2 objects know that to add two Vector2 objects together, you need to add their X values together and their Y values together. Now immediately after those lines of code, we need to check for a collision with the walls. In order to do this, we need to use what is called a conditional statement. A conditional statement uses boolean logic (true or false) to determine whether a block of code should be executed. In C#, conditional statements use the if keyword. So what are the conditions we need to check for to see if we've hit a wall? Well there are 4 walls, so there are 4 conditions. For the purposes of this program, we will actually check to see if the ball has intersected with the wall, which makes the calculations a bit easier. So the conditions are as follows: 1. x < 0 2. y < 0 3. x > width of the window - width of the ball 4. y > height of the window - height of the ball For the third and fourth conditions, we need to take into account the size of the ball since x and y actually represent the top left corner of the bounding box that contains the ball. For each of the above conditions, we need to take the following actions respectively: 1. Reverse the x velocity and add the new x velocity to the x position to get the ball out from inside the wall 2. Reverse the y velocity and add the new y velocity to the y position to get the ball out from inside the wall 3. Same as 1 4. Same as 2 Since 1 and 3 require the same action, and 2 and 4 require the same action, we can group the 4 conditions into 2 conditional statements. Writing these as code looks as follows: if (spriteLocation.X < 0 || spriteLocation.X > graphics.GraphicsDevice.Viewport.Width - spriteTexture.Width) // If we get in here, we've hit a vertical wall spriteVelocity.X = -spriteVelocity.X; spriteLocation.X = spriteLocation.X + spriteVelocity.X; if (spriteLocation.Y < 0 || spriteLocation.Y > graphics.GraphicsDevice.Viewport.Height - spriteTexture.Height) // If we get in here, we've hit a horizontal wall spriteVelocity.Y = -spriteVelocity.Y; spriteLocation.Y = spriteLocation.Y + spriteVelocity.Y; The code in the block (between the curly braces) is only run if the conditional statement directly before it evaluates to true. the || characters mean "or" so in the case of the first "if" statement, it will be true if either spriteLocation.X is less than zero, or spriteLocation.X is greater than the width of the game surface minus the width of the ball. If the || characters were replaced with && characters, that would mean "and", and spriteLocation.X would have to be both less than zero and greater than the width of the game surface minus the width of the ball. Run the program and you will see the ball bouncing around the screen. Change the initialization of spriteVelocity changing the X and Y values and see what happens. I would suggest different combinations of 0, 5, and 10 for the x and y velocity values. Comments are closed
{"url":"http://www.bluerosegames.com/xna101/post/Lesson-6-Follow-the-Bouncing-Ball.aspx","timestamp":"2014-04-16T13:30:48Z","content_type":null,"content_length":"38263","record_id":"<urn:uuid:7218b0af-7b22-40f6-bd49-b93661d73e69>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
Generating permutation covers: part II In a prior entry , we reduced the problem of generating a minimal permutation cover of a set elements to that of constructing a minimal tuple cover of S[0] U U S[M] , where is the set of all subsets of with cardinality = ( − 1)/2 and is odd. We do this construction recursively on = 0,..., Suppose then that we have a minimal tuple cover T of S[0] U ... U S[m], i.e. a minimal set of m-tuples of different elements jointly covering all subsets of S with cardinality ≤ m, and we want to extend T to a minimal set T' of (m + 1)-tuples jointly covering S[0] U ... U S[m] U S[m][ + 1]. We state without proof the following Lemma. If m ≤ N/2, a miminal tuple cover of S[0] U ... U S[m] has cardinality C(N,m). So, |T| = C(N,m) and we want to extend T to a set T' by assigning to each tuple τ of T one or more tuples of the form τa, a in S − range(τ), in such a way that all subsets of S[m][ + 1] are covered and the final number of elements of the cover |T'| is C(N,m + 1). When extending a tuple τ of T we will disregard the order of its elements, so basically we are identifying τ with range(τ); this reduction still yields C(N,m) different elements, since every element of S[m] must be covered by some tuple of T. So the extension mapping T → T' relates elements of S[m] to elements of S[m][ + 1], and in fact can be regarded as a subset of the graph induced by inclusion between elements of these two sets. It is easy to see that each element of S[m] is the source of (N − m) arrows, whereas m + 1 arrows arrive at each element of S[m][ + 1]. Our job is then to select C(N,m + 1) arrows from the diagram above so that all the elements from the source and the destination set are met by at least one arrow. This selection process has to maintain some balance so that no source or destination element is left unattended: for instance, if we select for each element of S[m][ + 1] an arbitrary arrow arriving at it there is the possibility that some elements of S[m] are not visited by any of the selected arrows. The following is a balanced selection criterion: given a fixed injection f : S[m] → S[m][ + 1] and function g : S[m][ + 1] → S[m], both compatible with the inclusion relationship between S[m] and S[m][ + 1] (that is, X is a subset of f(X) and Y a superset of g(Y)), we select an arrow X → Y iff Y = f(X) or (X = g(Y) and Y is not in f(S[m])). It is obvious that this criterion does not leave any X or Y unvisited. The number of selected arrows coincides with the number of elements in S[m][ + 1], which is C(N,m + 1) as required. In the following we suppose, without loss of generality, that S is the set {0,...,N − 1}. Constructively defining an inclusion compatible injection from S[m] to S[m][ + 1] is not a trivial task, but fortunately for us a paper from Pudlák, Turzík and Poljak provides the definition of such an injection f along with an algorithm χ : S[m][ + 1] → {true,false} that checks whether a given Y belongs to f(S[m]). We adopt the following definition for g: g(Y) := Y − max(Y), which leads to this algorithm for generating Τ' from Τ: Τ' ← Ø for every τ in Τ ····X ← range(τ) ····a ← f(X) − X {the one element added to X by f} ····Τ' ← Τ' U {τ·a} ····for i = max(X) + 1 to N − 1 ········Y ← X U {i} ········if not χ(Y) then ············Τ' ← Τ' U {τ·i} ········end if ····end for end for Note that the double loop over (X,i) is designed in such a way that it only visits the X → Y arrows where X = g(Y), which is maximally efficient and saves us the need to explicity compute g. The complete Τ[0] = Ø → Τ[1] → ··· → Τ[M] process can be inlined to avoid generating the intermediate Τ[i] covers. tuple-cover(m,N,τ,Τ) {initial call: tuple-cover(0,N,Ø,Τ) with Τ empty} if m = (N − 1)/2 then ····Τ ← ΤU {τ} ····X ← range(τ) ····a ← f(X) − X ····tuple-cover(m + 1,N,τ·a,Τ) ····for i = max(X) + 1 to N − 1 ········Y ← X U {i} ········if not χ(Y) then ············cover(m + 1,N,τ·i,Τ) ········end if ····end for end if In order to leverage the tuple cover algorithm to construct a minimal permutation cover, the only missing piece is finding a bijection d : S[M] → S[M] such that X ∩ d(X) = Ø for all X in S[M], as we already saw. The aforementioned paper from Pudlák et al. provides also such a function (which, in fact, is used for the construction of f). Having all the necessary components, the following is the full algorithm for constructing a minimal permutation cover on S = {0,...,N − 1}. if N is even then ····N' ← N − 1 ····N' ← N end if Σ ← Ø Τ ← Ø for every τ in Τ ····find τ' in Τ with range(τ') = d(range(τ)) ····a ← S − range(τ) − range(τ') ····σ ← τ·a·reverse(τ') ····if N is even then ········Σ ← Σ U {N'·σ} ········Σ ← Σ U {σ·N'} ···· else ········Σ ← Σ U {σ} ····end if end for A C++ implementation of the algorithm is available (Boost used). The following are the different covers generated for |S| = 1,...,8. │|S|│permutation cover │ │1 │(a) │ │2 │(ab), (ba) │ │3 │(acb), (bac), (cba) │ │4 │(acbd),(dacb),(bacd),(dbac),(cbad),(dcba) │ │5 │(acedb),(baedc),(bdaec),(bedca),(cbaed),(cdbae),(cebad),(daceb),(decab),(eadbc) │ │6 │(acedbf),(facedb),(baedcf),(fbaedc),(bdaecf),(fbdaec),(bedcaf),(fbedca),(cbaedf),(fcbaed), │ │ │(cdbaef),(fcdbae),(cebadf),(fcebad),(dacebf),(fdaceb),(decabf),(fdecab),(eadbcf),(feadbc) │ │ │(acegfdb),(baegfdc),(bdagfec),(bdfagec),(bdgfeca),(bedagfc),(befdcag), │ │ │(begdcaf),(bfaegdc),(bfgecad),(bgafced),(cbagfed),(cdbagfe),(cdfbage), │ │7 │(cdgbafe),(cebagfd),(cefbagd),(cegbafd),(cfbaged),(cfgeadb),(cgbfdae), │ │ │(dacgfeb),(decbagf),(defcagb),(degcafb),(dfacgeb),(dfgceab),(dgafbec), │ │ │(eadcgfb),(efadbgc),(efgdabc),(egadbfc),(facegdb),(fgaebdc),(gacfdeb) │ │ │(acegfdbh),(hacegfdb),(baegfdch),(hbaegfdc),(bdagfech),(hbdagfec),(bdfagech), │ │ │(hbdfagec),(bdgfecah),(hbdgfeca),(bedagfch),(hbedagfc),(befdcagh),(hbefdcag), │ │ │(begdcafh),(hbegdcaf),(bfaegdch),(hbfaegdc),(bfgecadh),(hbfgecad),(bgafcedh), │ │ │(hbgafced),(cbagfedh),(hcbagfed),(cdbagfeh),(hcdbagfe),(cdfbageh),(hcdfbage), │ │8 │(cdgbafeh),(hcdgbafe),(cebagfdh),(hcebagfd),(cefbagdh),(hcefbagd),(cegbafdh), │ │ │(hcegbafd),(cfbagedh),(hcfbaged),(cfgeadbh),(hcfgeadb),(cgbfdaeh),(hcgbfdae), │ │ │(dacgfebh),(hdacgfeb),(decbagfh),(hdecbagf),(defcagbh),(hdefcagb),(degcafbh), │ │ │(hdegcafb),(dfacgebh),(hdfacgeb),(dfgceabh),(hdfgceab),(dgafbech),(hdgafbec), │ │ │(eadcgfbh),(headcgfb),(efadbgch),(hefadbgc),(efgdabch),(hefgdabc),(egadbfch), │ │ │(hegadbfc),(facegdbh),(hfacegdb),(fgaebdch),(hfgaebdc),(gacfdebh),(hgacfdeb) │ In a later entry we will see a practical application of permutation covers in the context of database querying.
{"url":"http://bannalia.blogspot.com/2008/10/generating-permutation-covers-part-ii.html","timestamp":"2014-04-18T08:43:56Z","content_type":null,"content_length":"94290","record_id":"<urn:uuid:a44f0876-6bfa-4df9-a523-d02514343983>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
Convert million gallons per day to cfs - Conversion of Measurement Units ›› Convert million gallon/day [US] to cubic foot/second Did you mean to convert million gallon/day [US] to cfs million gallon/day [UK] ›› More information from the unit converter How many million gallons per day in 1 cfs? The answer is 0.646316889697. We assume you are converting between million gallon/day [US] and cubic foot/second. You can view more details on each measurement unit: million gallons per day or cfs The SI derived unit for volume flow rate is the cubic meter/second. 1 cubic meter/second is equal to 22.8244652273 million gallons per day, or 35.3146662127 cfs. Note that rounding errors may occur, so always check the results. Use this page to learn how to convert between million gallons/day and cubic feet/second. Type in your own numbers in the form to convert the units! ›› Definition: Cubic foot/second A cubic foot per second (also cfs, cu ft/s, cusec and ft³/s) is an Imperial unit / U.S. customary unit volumetric flow rate, which is equivalent to a volume of 1 cubic foot flowing every second. ›› Metric conversions and more ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more! This page was loaded in 0.0029 seconds.
{"url":"http://www.convertunits.com/from/million+gallons+per+day/to/cfs","timestamp":"2014-04-18T05:32:23Z","content_type":null,"content_length":"20794","record_id":"<urn:uuid:51c85edd-b752-432a-b34f-0197737ffa39>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
need help optimization December 9th 2006, 03:15 AM #1 Sep 2005 need help optimization Suppose that body temperature 1 hour afer receiving x mg of drug is given by T(x)=102-(1/6) x^2(1-x/9) for 0<=x<=6.The absolute value of the derivative,|T'(x)| is defined as sensitivty of the body to the drug dosage.Find the dosage that maximize sensitivity Sketch y=T(x), you will find that it gas a maximum at x=0, and a minimum at x=6, and its derivative is negative on (0,6). So |T'(x)|=-T'(x) on the interval. So now you are looking for the maximum of -T'(x). So differentiate it and set that derivative to zero solve for x, and substitute back into -T'(x) to find the maximum. So you need to find the solutions of: December 9th 2006, 09:06 AM #2 Grand Panjandrum Nov 2005
{"url":"http://mathhelpforum.com/calculus/8635-need-help-optimization.html","timestamp":"2014-04-17T01:08:19Z","content_type":null,"content_length":"32913","record_id":"<urn:uuid:7424822c-1330-4efc-adbf-0bb6311a3252>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimal Model of Operation Parameters of Gathering Pipeline Network with Triple-Line Process Advances in Mechanical Engineering Volume 2013 (2013), Article ID 573542, 7 pages Research Article Optimal Model of Operation Parameters of Gathering Pipeline Network with Triple-Line Process ^1China University of Petroleum, Beijing 102200, China ^2Petroleum Engineering Technology Research Institute of East China Branch, SINOPEC, Nanjing, Jiangsu 210031, China Received 14 January 2013; Accepted 5 March 2013 Academic Editor: Bo Yu Copyright © 2013 Yongtu Liang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This is a mathematic model for the optimal operation of the gathering pipeline network and its solution provides this gathering pipeline network having a triple-line heat-tracing process, a method for reducing the operational costs and increasing economic benefit. The model consists of an objective function, minimum total operation cost, and 3 constraints including water temperature constraint at pipe nodes, inlet oil temperature constraints, and outlet water temperature constraint. By using a sequential quadratic programming algorithm, the model can be solved and a set of optimal mass rate and the desired temperature of tracing water are attained. The method is here specifically applied to the optimal operation analysis of a gathering pipeline network in North China Oilfield. The result shows its operation cost can be reduced by 2076RMB/d, which demonstrates that this method contributes to the production cost reduction of old oilfields in their high water-cut stage. 1. Introduction Triple-line process has been widely used in early oilfield development in China. With oilfields now entering the high water-cut stage, it has become more and more clear that the triple-line process has the disadvantages of high energy consumption and low efficiency. In the last twenty years, research has been mostly based on the optimal design of pipeline network [1–4] and the simulation of pipeline network [5–7], but seldom on optimal operation problems of triple-line process. On the condition that the triple-line process is not changed, research was carried out on optimizing the operation parameters of oilfield gathering and transportation system which had positive effects on cost reduction and economic benefit increases for those areas unsuited to the low-temperature gathering and transportation process. 2. Thermodynamic Calculation of Tracing Oil Pipelines The cross-section of the tracing oil pipelines is divided into 5 parts [8] (Figure 1). is the heat transfer surface between an oil pipeline and the soil, is the heating surface between the pipes’ interspace and the soil, is the heat transfer surface between a water pipeline and the soil, is the heat transfer surface between an oil pipeline and the pipes’ interspace, and is the heat transfer surface between a water pipeline and the pipes’ interspace. By solving the pipe element thermodynamic differential equation [9], Oil/water temperature at the end of the oil/water pipeline can be obtained as shown in Figure 1, where are the areas of 5 heat transfer surfaces of unit pipe length, m^2Consider. where is the liquid temperature at the end of the oil pipe, °C. is the average soil temperature at the depth of pipes,°C where is the water temperature at the end of a tracing pipe, °C. , , ,and are given by where is the liquid temperature at the beginning of an oil pipe, °C. is the water temperature at the beginning of a tracing pipe, °C, where is the specific heat of an oil pipe liquid, J/(°C·kg) Consider where is the specific heat of water, J/(°C·kg). is the mass rate of water, kg/s Consider where are the overall heat transfer coefficients of 5 heat transfer surfaces, , 3. Thermodynamic Calculation of Pipeline Network 3.1. Pipe Network Numbering Method The oil wells, the metering stations, the transfer station, and the pipelines connecting them are ranked and numbered with the following rules. The transfer station is level 0 node, with no subscript; the metering station is level 1 node, with subscript as its number; the oil well is level 2 node, with subscript as its number. A transfer station is the highest level, a metering station comes second, and an oil well is the lowest. The number of pipelines connecting two nodes follows the lower one. is the total number of metering stations, and is the total number of oil wells connected with the metering station . As shown in Figure 2, the transfer station has 2 metering stations, one of them has 3 wells and the other has 2. 3.2. Node Parameters Calculation of Heat Tracing Pipelines (1) Mass rate and specific heat of an oil pipe liquid at nodes: where is the mass rate of an oil pipe’s liquid , kg/s. is the water density, kg/m³. is the crude density, kg/m³. is the volumetric flow rate of an oil pipe liquid , m³/s. is the volumetric water cut of a well , where is the specific heat of an oil pipe’s liquid , J/(°C·kg). (2) Temperature of an oil pipe liquid at nodes: is the resulting oil pipes’ liquid temperature when mixed at their node , °C. is the liquid temperature at the end of an oil pipe , °C. (3) Temperature of a water pipe liquid at nodes where is the water temperature at the end of a tracing pipe , °C. is the resulting tracing pipes’ water temperature when mixed at their node , °C. is the mass rate of water distributed to a node , kg/s. 3.3. Node Parameters Calculation of Water Pipeline Network Mass Rate of Water at the Nodes The mass rate of water at a node is calculated serially from the lower level to the higher one: Water Temperature at Nodes Water temperature at a node is calculated by using the Sukhov Formula serially from the lower level to the higher one: where is the water temperature at the beginning of a water pipe , °C. is the water temperature at the end of a water pipe , °C. is the overall heat transfer coefficient of a water pipe , W/(m^2·°C). is the diameter of the water pipe , m. is the length of the water pipe , m. 4. Optimal Mathematic Model 4.1. Objective Function With a water mass rate and a water temperature as decision variables and the minimum total operation cost, including heating cost and power cost, as the target, assuming that is a known quantity, the objective function is given by where is the fuel price, RMB/kg. is the price of electricity RMB/J. is the water temperature before heated, °C. is the acceleration of gravity, N/kg. is the water head of the pump, m. is the lower heating value, J/kg. is the efficiency of heating furnace. is the pump’s efficiency. 4.2. Constraints Condition Water Temperature Constraint at the Nodes One node has multiple outlets. The water temperatures of these outlets are the same: where is the commencing temperature difference between a water pipe and a water pipe ,°C. the commencing temperature difference between a water pipe and a water pipe . Inlet Oil Temperature Constraint To ensure the safe operation of the pipeline, the minimum inlet temperature [10] is specified to be higher than the freezing point of crude oil: where is the transfer station’s inlet temperature of an oil pipe , °C. is the freezing point of crude oil, °C. is the temperature allowance, °C. Outlet Water Temperature Constraint The outlet water temperature of the transfer station is usually °C [11, 12]: where is the outlet water temperature of a transfer station, °C. 5. Model Solutions There are metering stations, oil wells in this model, and correspondingly, decision variables, equality constraints and inequality constraints are generated. The model is a highly nonlinear problem, of which the most common method used to solve is the sequential quadratic programming algorithm. The sequential quadratic programming algorithm is a fast and effective method, of which the convergence rate is proved to be superliner under certain conditions [13, 14]. 5.1. Sequential Quadratic Programming Algorithm The main idea of the algorithm is to build a simple series of approximate optimization problems, namely, quadratic programming problems, using the information from the original nonlinear program. By solving these new problems, current iteration can be updated and gradually approximate the solution of the original nonlinear programming problem [15]. At the kth step, the approximate programming problem is as follow: where is the difference between former and later iterations, named iteration direction; is the objective function of new programming; is the objective function/constraints of original programming; is the gradient of ; is Hessian matrix of ; is the set of subscripts of equality/inequality constraints. As shown in Figure 3, the algorithm mainly includes 3 steps: (a) solve the subproblem with active-set method to get and Lagrange multiplier ; (b) employ quadratic interpolation and linear search to get step length ; (c) update Hessian matrix with BFGS (Broyden-Fletcher-Goldfarb-Shanno) method, where is the number of iterations, is the difference between former and later iterations, and is the control error. 5.2. Active-Set Method Subproblem (21) is a standard quadratic programming problem. The active-set method is the key to solving such problems. By swapping in/out the inequality constraints according to some rules, A convex quadratic programming (22) only with equality constraints is obtained: where Problem (22) converts the solution of into the solution of the direction of , named . Figure 4 gives the steps of the solution of . 6. Example Analyses As shown in Figure 5, the transfer station of North China Oilfield has 3 metering stations and 18 oil wells. The well effluent of each well has a mass rate of 10.5~53.6 t/d, a water cut of higher than 80%, and a temperature of 30~40. The model has 36 decision variables, 17 equality constraints, and 4 inequality constraints. It takes 114 iterations, about 4.24 seconds to get the optimal results, as shown in Table 1. Table 2 sets out the comparison between costs before and after optimization. It shows that the optimized heating cost and power cost decrease by 1925 RMB/d and 151 RMB/d, respectively, which means the total cost can be reduced by 2076 RMB/d in sum. 7. Conclusions A mathematic model of the gathering pipeline network optimal operation and its solution are given in this paper, which can provide optimal operation parameters for triple-line process. Using a gathering pipeline network of North China Oilfield as an example, a mathematic model has been established. After comparing the optimal results with the actual operation data, it concludes that by using the new model, there is a considerable cost saving in accordance with the optimized parameters over that which exists at present. Set and Indices Set of meter stations : Set of wells connected to a meter station : Indices corresponding to a meter station and a well, and . : The specific heat of an oil pipe liquid, J/(°C·kg) : The specific heat of water, J/(°C·kg) : Areas of 5 heat transfer surfaces of unit pipe length, m^2 : The overall heat transfer coefficients of 5 heat transfer surfaces, W/(m^2·°C) : The average soil temperature at the depth of pipes, °C : The water density, kg/m³ : The crude density, kg/m³ : The volumetric flow rate of an oil pipe liquid , m³/s : The volumetric water cut of a well : The overall heat transfer coefficient of a water pipe , W/(m^2·°C) : The diameter of the water pipe , m : The length of the water pipe , m : The fuel price, RMB/kg : The water temperature before being heated, °C : The lower heating value, J/kg : The efficiency of heating furnace : The price of electricity RMB/J : The acceleration of gravity, N/kg : The water head of the pump, m : The pump’s efficiency : The freezing point of crude oil, °C : The temperature allowance, °C. : The mass rate of an oil pipe liquid, kg/s : The liquid temperature at the end of the oil pipe, °C : The mass rate of water, kg/s : The water temperature at the end of a tracing pipe, °C : The liquid temperature at the beginning of an oil pipe, °C : The water temperature at the beginning of a tracing pipe, °C : The mass rate of an oil pipe’s liquid , kg/s : The specific heat of an oil pipe’s liquid , J/(°C·kg) : The liquid temperature at the end of an oil pipe , °C : The resulting oil pipes’ liquid temperature when mixed at their node , °C : The transfer station’s inlet temperature of an oil pipe , °C : The water temperature at the end of a tracing pipe , °C : The resulting tracing pipes’ water temperature when mixed at their node , °C : The transfer station’s inlet temperature of a tracing pipe , °C : The mass rate of water distributed to a node , kg/s : the water temperature at the beginning of a water pipe , °C : The water temperature at the end of awater pipe , °C : The outlet water temperature of a transfer station, °C : The commencing temperature difference between a water pipe and a water pipe ,°C : The commencing temperature difference between a water pipe and awater pipe ,°C. 1. R. J. Barnes, A. Kokossis, and Z. Shang, “An integrated mathematical programming approach for the design and optimisation of offshore fields,” Computers and Chemical Engineering, vol. 31, no. 5-6, pp. 612–629, 2007. View at Publisher · View at Google Scholar · View at Scopus 2. D. A. Antonenko, V. A. Pavlov, V. N. Surtaew, and K. K. Sevastyanova, “Selecting an optimal field development strategy for the vankor oilfield using an integrated-asset-modeling approach,” in Proceedings of the SPE Europec/EAGE Conference and Exhibition, Rome, Italy, June 2008. View at Publisher · View at Google Scholar 3. Y. Liu and G. Chen, “Optimal parameters design of oilfield surface pipeline systems using fuzzy models,” Information Sciences, vol. 120, no. 1, pp. 13–21, 1999. View at Publisher · View at Google Scholar · View at Scopus 4. S. A. Van Den Heever and I. E. Grossmann, “An iterative aggregation/disaggregation approach for the solution of a mixed-integer nonlinear oilfield infrastructure planning model,” Industrial and Engineering Chemistry Research, vol. 39, no. 6, pp. 1955–1971, 2000. View at Scopus 5. J. M. Duan, W. Wang, Y. Zhang, et al., “Calculation on inner wall temperature in oil-gas pipe flow,” Journal of Central South University of Technology, vol. 19, pp. 1932–1937, 2012. 6. P. Floquet, X. Joulia, A. Vacher, M. Gainville, and M. Pons, “Numerical and computational strategy for pressure-driven steady-state simulation of oilfield production,” Computers and Chemical Engineering, vol. 33, no. 3, pp. 660–669, 2009. View at Publisher · View at Google Scholar · View at Scopus 7. C. J. Alvarez, M. H. Al-awwami, and S. Aranco, “Wet crude transport through a complex hilly terrain pipeline network,” in Proceedings of the SPE Annual Technical Conference and Exhibition, Houston, Tex, USA, October 1999. View at Publisher · View at Google Scholar 8. W. X. Wang, Operation optimization and project setting for heavy oil three pipe tracing gathering and transferring system [M.S. thesis], Daqing Petroleum Institute, Daqing, China, 2006. 9. Department of Mathematics at Tongji university, Advanced Mathematics[M], Higher Education Press, Beijing, China, 2007. 10. X. L. Gao and M. Kuang, “The determination on the minimum allowable inlet temperature of waxy crude oil pipeline,” Oil & Gas Storage and Transportation, vol. 21, no. 11, pp. 17–21, 2002. 11. The Writing Committee of Technical Manual of Oilfield Oil-gas Gathering and Transportation Design, Technical Manual of Oilfield Oil-Gas Gathering and Transportation Design, Petroleum Industry Press, Beijing, China, 1994. 12. GB 50350-2005, Code for Design of Oil-Gas Gathering and Transportation System, GB, Beijing, China, 2005. 13. G. P. He, Z. Y. Gao, and Y. G. Zheng, “An effective sequential quadratic programming algorithm for nonlinear optimization problems,” Numerical Mathematics a Journal of Chinese Universities (English Series), no. 1, pp. 34–51, 2002. 14. Q. Ni, “A new inexact sequential quadratic programming algorithm,” Numerical Mathematics a Journal of Chinese Universities (English Series), no. 1, pp. 1–12, 2002. 15. G. P. He, Z. Y. Gao, and Y. L. Lai, “New sequential quadratic programming algorithm with consistent subproblems,” Science in China Series A, vol. 40, no. 2, pp. 137–150, 1997.
{"url":"http://www.hindawi.com/journals/ame/2013/573542/","timestamp":"2014-04-16T06:30:37Z","content_type":null,"content_length":"257127","record_id":"<urn:uuid:ade445d8-0866-4fbc-8555-061fab48cd07>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
(620-297) Semester II 2011 Group Theory and Linear Algebra Lecturer: Arun Ram, 174 Richard Berry, phone: 8344 6953, email: aram@unimelb.edu.au Time and Location: Lecture Tuesday 10:00-11:00 Old Geology 1 Lecture Wednesday 12:00-1:00 Old Geology 1 Lecture Friday 2:15-3:15 Old Geology 1 Practical Tuesday 11:00-12:00 Richard Berry G10 Practical Wednesday 3:15-4:15 Richard Berry G10 Practical Wednesday 11:00-12:00 Asia Centre 120 Practical Thursday 2:15-3:15 Richard Berry G10 Practical Friday 12:00-1:00 Richard Berry G4 • No books, notes, calculators, ipods, ipads, phones, etc at the exam. • Tips to avoid freaking out: □ The assignments are designed to take "an average of 6 hours per week". This is an average. □ The assignments can be reformatted to reduce the freak factor: See Assignment 1 (pdf file) from 2009 Real Analysis and Applications as an example. □ The assignments and the course are designed to make you know exactly what is on the exam, practice what is on the exam and do well on the exam. □ The assignments are worth 20% of the total mark. If you skip a few questions it will affect your total mark very little. □ Thousands of students have made it through this course format with Professor Ram in the past (and are proud to tell the tale). You can do it too. • Tips for time management: □ It is much easier (and safer) to run 45 min per day to attain 6 hours in a week and 24 hours in 4 weeks, than to run for 24 hours solid every fourth week on Sunday. □ To actually run 45 min, it takes me at least 15 min to psyche myself up and convince myself that it is actually not raining and so therefore I should go running, and after a 45 min run I always walk for 5 min and I always go home and have a glass of milk and tell my wife (at length) how cool I am for running 45 min per day. All in all, I waste a good 40 min when I go running for 45 min. If I were more efficient (and every so often, but rarely, I am) then it would only takes me 50 min. □ Measurement of time is a tricky thing and requires real discipline. Teaching and research faculty at University of Melbourne recently had to complete a survey on distribution of their time on the various activities of the job: Do I count the 6 times I had to go check my email and the weather and my iPhone in the time that I spend preparing my Group Theory and Linear Algebra • Tips for exam preparation: □ The time that a 100m olympic runner (who wins a medal) is actually competing at the olympics is say (5 heats, 7sec each) 40 seconds. Successful performance during these 40 sec is impossible without adequate preparation. □ The time that a Group Theory and Linear Algebra student spends on the final exam is 3 hours. Sucessful performance during these 3hours is ..... • Consultation hours for Prof. Ram will be Mondays 3:45-5:45pm in Old Geology 1. • Prof. Ram reads email but generally does not respond. • The start of semester pack includes: Plagiarism (pdf file), Plagiarism declaration (pdf file), Academic Misconduct (pdf file), Beyond third year (pdf file), Vacation scholarships (pdf file), SSLC (pdf file). • It is University Policy that: “a further component of assessment, oral, written or practical, may be administered by the examiners in any subject at short notice and before the publication of results. Students must therefore ensure that they are able to be in Melbourne at short notice, at any time before the publication of results” (Source: Student Diary). Students who make arrangements that make them unavailable for examination or further assessment, as outlined above, are therefore not entitled to an alternative opportunity to present for the assessment concerned (i.e. a ‘make-up’ examination). • Students must use UNICARD to print documents. The UNICARD printer is located near the G70 computer lab. For more information about printing at the University and for locations of UNICARD uploaders direct students to Student IT Support: http://www.studentit.unimelb.edu.au/printingandscanning/printing.html Subject Outline The handbook entry for this course is at https://handbook.unimelb.edu.au/view/2011/MAST20022. The subject overview that one finds there: This subject introduces the theory of groups, which is at the core of modern algebra, and which has applications in many parts of mathematics, chemistry, computer science and theoretical physics. It also develops the theory of linear algebra, building on material in earlier subjects and providing both a basis for later mathematics studies and an introduction to topics that have important applications in science and technology. Topics include: modular arithmetic and RSA cryptography; abstract groups, homomorphisms, normal subgroups, quotient groups, group actions, symmetry groups, permutation groups and matrix groups; theory of general vector spaces, inner products, linear transformations, spectral theorem for normal matrices, Jordan normal form. Main Topics • (1) Greatest common divisors, Euclid’s algorithm, arithmetic modulo m. • (2) Definition and examples of fields, equations in fields. • (3) Vector spaces, bases and dimension, linear transformations. • (4) Matrices of linear transformations, direct sums, invariant subspaces, minimal polynomials • (5) Cayley-Hamilton theorem, Jordan normal form. • (6) Inner products, adjoints. • (7) Spectral theorem. Definition and examples of groups. • (8) Subgroups, cyclic groups, orders of groups & elements, products, isomorphisms. • (9) Lagrange’s theorem, cosets, normal subgroups, quotient groups, homomorphisms. • (10) group actions, orbit-stabilizer relation, conjugation. • (11) Some results on classification of finite groups, Euclidean isometries Assessment will be based on three written assignments due at regular intervals during semester amounting to a total of up to 50 pages (20%), and a 3-hour written examination in the examination period The plagiarism declaration is available here. It is STRONGLY suggested that you turn in problems from the Problem sheets weekly (in tutorial). The homework assignments will soon appear below: • Assignment 1: Due 23 August: Do problems from Problem sheets for weeks 1-4. Problems from sheet 1-4 will be accepted by your tutor anytime before 23 August. It is STRONGLY suggested that you turn in problems from the Problem sheets weekly (in tutorial). The marker will briefly look through your assignment and try to give you feedback on and a mark reflecting how you are progressing towards doing well on the final exam. • Assignment 2: Due 4 October: Do problems from Problem sheets for weeks 5-8. Problems from sheet 5-8 will be accepted by your tutor anytime between 23 August and 3 October. It is STRONGLY suggested that you turn in problems from the Problem sheets weekly (in tutorial). The marker will briefly look through your assignment and try to give you feedback on and a mark reflecting how well you are progressing towards doing well on the final exam. • Assignment 3: Due 1 November: Do problems from Problem sheets for weeks 9-12. Problems from sheet 9-12 will be accepted by your tutor anytime between 4 October and 28 October. It is STRONGLY suggested that you turn in problems from the Problem sheets weekly (in tutorial). The marker will briefly look through your assignment and try to give you feedback on how you are progressing towards doing well on the final exam. Resources part I: recommended Texts The following problems page may have helpful examples: Resources part II: Lectures and lecture notes • Lecture 1, 26 July 2011: The clock and invertible elememnts Math Grammar: Definitions, Theorems and How to do Proofs (pdf file) and handwritten lecture notes-pdf file Examples of proofs written in proof machine (pdf file) • Lecture 2, 27 July 2011: gcd and Euclid's algorithm - handwritten lecture notes - pdf file • Lecture 3, 29 July 2010: Equivalence relations - handwritten lecture notes - pdf file • Lecture 4, 2 August 2011: Functions - hand written lecture notes - pdf file • Lecture 5, 3 August 2011: Rings and Fields (pdf file) • Lecture 6, 5 August 2011: C[t], gcd and Euclid's algorithm (pdf file) • Lecture 7, 9 August 2011: Vector spaces and linear transformations (pdf file) • Lecture 8, 10 August 2011: Span and bases (pdf file) • Lecture 9, 12 August 2011: Change of basis (pdf file) • Lecture 10, 16 August 2010: Eigenvectors and annihilators (pdf file) • Lecture 11, 17 August 2011: Minimal and characteristic polynomials (pdf file). • Lecture 12, 19 August 2011: Jordan normal form (pdf file) • Lecture 13, 23 August 2011: Block decomposition (pdf file) • Lecture 14, 24 August 2011: Cayley-Hamilton theorem (pdf file) • Lecture 15, 26 August 2011: Inner products and Gram-Schmidt (pdf file) • Lecture 16, 30 August 2011: Orthogonal complements and adjoints (pdf file) • Lecture 17, 31 August 2011: The spectral theorem (pdf file) • Lecture 18, 2 September 2011: Groups and group homomorphisms (pdf file) • Lecture 19, 6 September 2011: The polar decomposition (pdf file) • Lecture 20, 7 September 2011: Symmetric groups and subgroups generated by a subset (pdf file) • Lecture 21, 9 September 2011: Cyclic groups and products (pdf file) • Lecture 22, 13 September 2011: Cosets and quotient groups (pdf file) • Lecture 23, 14 September 2011: Quotient groups (pdf file) • Lecture 24, 16 September 2011: $G/\mathrm{ker}f&sime;\mathrm{im}f$ (pdf file) • Lecture 25, 4 October 2011: Group actions, orbits, stabilizers (pdf file) • Lecture 26, 5 October 2011: Centres and p-groups (pdf file) • Lecture 27, 7 October 2011: Proof of the Orbit-Stabilizer theorem (pdf file) • Lecture 28, 11 October 2011: The affine orthogonal group and isometries (pdf file) • Lecture 29, 12 October 2011: Isometries of E^2 (pdf file) • Lecture 30, 14 October 2011: Matching the affine orthogonal group with isometries (pdf file) • Lecture 31, 18 October 2011: Revision: Analogies (pdf file) • Lecture 32, 19 October 2011: Revision: The Fundamental Theorem of Algebra (pdf file) • Lecture 33, 21 October 2011: Revision: Proof machine (pdf file) • Lecture 34, 25 October 2010: Revision: Working sample randomly chosen problems • Lecture 35, 26 October 2010: Revision: Maths, Music and the Weil conjectures • Lecture 36, 28 October 2010: Revision: C[t]-modules (pdf file) Resources part III: Other notes Various lecture notes from the past that will be useful and supplemented during the term. Every subject at the University of Melbourne uses a student questionnnaire to let teaching staff know what students think about the quality of teaching in that subject. This is now administered online near the end of the semster. As such, it is too late to affect the teaching for the cohort of students that answer the questionnaire. Feedback to students based on 2009 questionnaires for Real Analysis: • The student survey last year showed high student satisfaction with the course. Most elements of last year's course are being retained. • Exam performance demonstrated that students had learned concepts and the general framework well, but were weak on skill (they knew what a hammer is for but were unable to use it to hammer in a nail effectively). Skill level is an important goal for this course and this semester there will be a determined effort to get the skill level of all students to a high level: □ The problem sheets will be very directed towards the final exam.
{"url":"http://www.ms.unimelb.edu.au/~ram/Teaching/GpThyLinAlg2011/GpThyLinAlg2011.html","timestamp":"2014-04-17T18:29:00Z","content_type":null,"content_length":"26430","record_id":"<urn:uuid:e4039e94-484d-4172-ae00-a29bcde7d159>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
Mouse X/Y ray to Sphere Intersection to world space point 01-25-2013, 03:38 PM #1 Newbie Newbie Join Date Jan 2013 Hi all, I would like to do precise picking on a sphere surface, converting mouse X,Y to world space XYZ point so that I can draw object at that coord. Sounds simple enough and it should be, but I need some help I have built this simple test snippet that would simply create a some geo at the point on a globe where the mouse click ray intersection occurs and its nearly working! Its taking my xy casting that along the projection angle, taking into account the modelView inv (camera pos) etc. The only thing that I have an issue is that intersection with the Y coordinate is upside-down, also strangely I have to swap the Z coord of the mouse ( so ray_start = [x,y,1] and ray_dest = [x,y,0] otherwise its rejected as pointing backwards . So, that done, clicking around the equator of the sphere it plots the geo's nicely at the x and y moving orbit the camera left/right it works fine, but plotting above of the center screen creates geo points south or downward and clicking below center screen viceversa plots mirrored north . Simply swapping the final vector does not work as its the intersection thats matters. I know I am so close.. I looking at the computeWindow Matrix calculation as a potential culprit. I Just need a little more light I really appreciate some help if possible. Many thanks for any help! (This test code is in webGL and javascript) // window matrix function (where I think the problem could be) this.computeWindowMatrix = function(xstart, ystart, width, height) { var translate = osg.Matrix.makeTranslate(1.0, 1.0, 1.0); var scale = osg.Matrix.makeScale(0.5*width, 0.5*height, 0.5); var offset = osg.Matrix.makeTranslate(xstart,ystart,0.0); return osg.Matrix.preMult(offset, osg.Matrix.preMult(scale, translate)); //======PLOT GEO AT MOUSE X/Y CLICK ON CENTERED SPHERE var ray_start = [x,y,1]; //had to swap the Z so its cast forward var ray_dest = [x,y,0]; var matrix = osg.Matrix.makeIdentity(); var w = osg.Matrix.copy(viewer.view.viewport.computeWindow Matrix(), []); var p = viewer.view.getProjectionMatrix(); var m = viewer.view.getViewMatrix(); osg.Matrix.preMult(matrix, w); osg.Matrix.preMult(matrix, p); osg.Matrix.preMult(matrix, m); var inv = []; var valid = osg.Matrix.inverse(matrix, inv); var ns = osg.Matrix.transformVec3(inv, ray_start, new Array(3)); var ne = osg.Matrix.transformVec3(inv, ray_dest, new Array(3)); //======Ray-Sphere intersection======== var t0, t1; // parametric solutions for t if the ray intersects var radius2 = (6371* 6371); var ray_dir = osg.Vec3.normalize(osg.Vec3.sub(ns,ne, []) ,[]); var L = osg.Vec3.sub([0,0,0],ns,[]); var tca = osg.Vec3.dot(L,ray_dir); if (tca < 0) return false; //wrong direction var d2 = osg.Vec3.dot(L,L) - (tca * tca); //o2=h2-a2 if (d2 > radius2) return false; //overshoots var thc = Math.sqrt(radius2 - d2); //find the exit point t0 = tca - thc; t1 = tca + thc; //Draw a box at the world point var hitPoint = osg.Vec3.add(ns, osg.Vec3.mult(ray_dir,t0,[]),[]); var geometry = osg.createTexturedBox(0, 0, 0, 1000,1000,1000); var xform = new osg.MatrixTransform() Last edited by handsfellof; 01-26-2013 at 03:29 AM. Fixed it Yes as I thought it was the the viewport transformation to NDC (normalized device coordinate ) coordinates Code changed to this fixed it. ray_start[0] = (x / viewer.view.viewport.width()) * 2 - 1; ray_start[1] = -(y / viewer.view.viewport.height()) * 2 + 1; ray_start[2] = 1; ray_dest[0] = (x / viewer.view.viewport.width()) * 2 - 1; ray_dest[1] = -(y / viewer.view.viewport.height()) * 2 + 1; ray_dest[2] = 0; 01-26-2013, 04:04 AM #2 Newbie Newbie Join Date Jan 2013
{"url":"http://www.opengl.org/discussion_boards/showthread.php/180911-Mouse-X-Y-ray-to-Sphere-Intersection-to-world-space-point?s=0ce8475505768c379e4d39e4bd65c6d7&p=1247536","timestamp":"2014-04-24T04:09:17Z","content_type":null,"content_length":"43541","record_id":"<urn:uuid:d18fd1ce-6df8-4854-9228-a440611e1487>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
perlquestion phildeman Is there a way to compare the values of one hash (while looping through the hash) and determine how many times<br /> a value has occurred in the hash?<br /><br /> $hash = (<br /> &nbsp; &nbsp; &nbsp;key1 => 10,<br /> &nbsp; &nbsp; &nbsp;key2 => 10,<br /> &nbsp; &nbsp; &nbsp;key3 => 3,<br /> &nbsp; &nbsp; &nbsp;key4 => 5,<br /> &nbsp; &nbsp; &nbsp;key5 => 10<br /> );<br /> <br /> In the hash, above, there are 3 occurrences of the value 10. I need to do this in order display the value<br /> that has the highest frequency. Perhaps, there is a way to loop through the hash to compare the current<br /> value to the previous value, then increment a counter variable by 1?<br /><br /> Any suggestions?<br /><br /> Thanks.
{"url":"http://www.perlmonks.org/index.pl?displaytype=xml;node_id=1055552","timestamp":"2014-04-20T14:13:31Z","content_type":null,"content_length":"1380","record_id":"<urn:uuid:25523147-938a-4281-b085-6d5931ae39f5>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
American Mathematical Monthly -December 2009 December 2009 Rethinking the Lebesgue Integral By: Peter D. Lax This article describes a new approach to the description of the space of Lebesgue integrable functions L^1(K), where K is a ball in |^n. The traditional approach enlarges the concept of integration and the class of integrable functions. As an afterthought (the Riesz-Fischer theorem), it is shown that the space of integrable functions is complete in the L^1 norm. Today we know that the primary goal of the theory is to create a complete space, because most of the theorems of functional analysis require completeness. Accordingly we define L^1 as the abstract completion in the L^1 norm of the space C(K) of continuous functions. The elements of this completion are equivalence classes of Cauchy sequences of continuous functions; the remaining task is to assign to each element f of this abstract L^1 a function f(x) defined almost everywhere, that is, except on a set that can be enclosed in an open set of arbitrary small volume. We say that f(x) represents f if there is a Cauchy sequence in f that converges to f(x) ae. We prove that every f is represented by some f(x), and that f(x) and g(x) are equal ae if and only if they represent the same element of L^1. We then show how to derive the usual results of Lebesgue theory. In this approach measure is a derived concept; a set S is measurable if its characteristic function represents some element of L^1. The usual properties of measurable sets follow. I hope to see this approach adopted in the teaching of the Lebesgue theory. Computer Algebra in Systems Biology By: Reinhard Laubenbacher and Bernd Sturmfels reinhard@vbi.vt.edu, bernd@math.berkeley.edu Systems biology focuses on the study of entire biological systems rather than on their individual components. With the emergence of high-throughput data generation technologies for molecular biology and the development of advanced mathematical modeling techniques, this field promises to provide important new insights. At the same time, with the availability of increasingly powerful computers, computer algebra has developed into a useful tool for many applications. This article illustrates the use of computer algebra in systems biology by way of a well-known gene regulatory network, the Lac Operon in the bacterium E. coli. A Disorienting Look at Euler's Theorem on the Axis of a Rotation By: Bob Palais, Richard Palais, and Stephen Rodi palais@math.utah.edu, palais@math.uci.edu, srodi@austincc.edu In 1775 Euler showed that no matter how you rotate a sphere about its center, two points must end up where they began, so the result is equivalent to a rotation about the axis they determine. This was likely the first fixed point theorem, with many repercussions and generalizations. We give a determinant-free analysis of all 3x3 orthogonal matrices and obtain a similar result, then survey many other known proofs. We start by providing the first English translation of Euler's elegant geometric proof from the Latin, in which the essential hypothesis of orientation preservation is only mentioned implicitly. Expanding on this observation, we show how our analysis of general orthogonal transformations would look using Euler's geometric methods. Other proofs we examine are based on topology, determinants, Rodrigues' formula for quaternion multiplication (before Hamilton described quaternions!), Riemannian geometry, and Lie groups. How to Recognize a Parabola By: Bettina Richmond and Tom Richmond bettina.richmond@wku.edu, tom.richmond@wku.edu Parabolas have many interesting properties which were perhaps more well known in centuries past. Many of these properties hold only for parabolas, providing a characterization which can be used to recognize (theoretically, at least) a parabola. Here, we present a dozen characterizations of parabolas, involving tangent lines, areas, and the well-known reflective property. While some of these properties are widely known to hold for parabolas, the fact that they hold only for parabolas may be less well known. On Orders of Subgroups in Abelian Groups: An Elementary Solution of an Exercise of Herstein By: Robert Beals I. N. Herstein's Topics in Algebra contains an exercise for which Herstein acknowledges he doesn't know of a solution using only material developed to that point in the text. The problem is to show that, if an abelian group G contains subgroups of orders m and n, then it contains a subgroup whose order is the least common multiple of m and n. This appears as Exercise 26 in Section 2.5 of the second edition of Topics in Algebra (it is Exercise 11 in the first edition). We present a solution using material from Section 2.5 and earlier. A Nonmeasurable Set from Coin Flips By: Alexander E. Holroyd and Terry Soo holroyd@math.ubc.ca, tsoo@math.ubc.ca In this note we give an example of a nonmeasurable set in the probability space for an infinite sequence of coin flips. The example arises naturally from the notion of an equivariant function, and serves as a pedagogical illustration of the need for measure theory. A Note on Euler's Factoring Problem By: John Brillhart This note consists of a brief introduction to Euler's factoring problem and his results, as well as a complete and elegant solution to the problem given by Lucas and Matthews about a century later. π [p], the Value of π in By: Joseph B. Keller and Ravi Vakil The circumference of the "circle" in the plane under the [p]. A remarkable symmetry may be observed experimentally, π [p] = π [q] when 1/p+1/q=1, and Adler and Tanton asked why this was true. We explain why in an elementary manner. Behind our argument is geometric motivation, and in particular the notion of "polarity." The Associativity of the Pythagorean Law By: Lucio R. Berrone The composite-iterative functional equation arises from an iteration of the Pythagorean law expressing the length of the hypotenuse of a right-angled triangle as a function of the legs. An operation Music: A Mathematical Offering By: David J. Benson Musimathics: The Mathematical Foundations of Music By: Gareth Loy Reviewed by: Michael Henle
{"url":"http://www.maa.org/publications/periodicals/american-mathematical-monthly/american-mathematical-monthly-december-2009?device=mobile","timestamp":"2014-04-18T10:05:27Z","content_type":null,"content_length":"28652","record_id":"<urn:uuid:e29b73d4-676e-4aae-81c6-2254385d0bb1>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
stand for? What does NP stand for? What does NP mean? This page is about the various possible meanings of the acronym, abbreviation, shorthand or slang term: NP. No Password Computing » Security No Picture Academic & Science » Electronics The Number Of Points Miscellaneous » Unclassified Non Portable Governmental » Military Network Performance Miscellaneous » Unclassified Note Please Miscellaneous » Unclassified Non Partisan Miscellaneous » Unclassified No Publisher Miscellaneous » Unclassified Not Paginated Computing » General Naturopathic Physician Business » Positions Not Pertinent Governmental » FBI Files Network Processors Miscellaneous » Unclassified No Paper Miscellaneous » Unclassified Not Process Miscellaneous » Unclassified Non Polynomial Miscellaneous » Unclassified Normal Probability Miscellaneous » Unclassified Not Polynomial Miscellaneous » Unclassified Neck Pain Miscellaneous » Unclassified Nucleus Plugin Academic & Science » Chemistry Netscape Plugin Computing » Networking Near Pass Governmental » NASA Needle Punch Medical » Physiology Next Prime Miscellaneous » Unclassified Nicolas Pitre Community » Famous Not Portable Miscellaneous » Unclassified Know another definition for NP? Know what NP means? Don't keep it to yourself! Still can't find the acronym definition you were looking for? Use our Power Search technology to look for more unique definitions from across the web!
{"url":"http://www.abbreviations.com/serp.php?st=NP&p=2","timestamp":"2014-04-19T03:41:40Z","content_type":null,"content_length":"45148","record_id":"<urn:uuid:d67d8b45-85e6-47c9-a72e-76a08eadaaf9>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Compactifying locally Cohen-Macaulay projective curves Abstract (Summary) We define a moduli functor parametrizing finite maps from a projective (locally) Cohen-Macaulay curve to a fixed projective space. The definition of the functor includes a number of technical conditions, but the most important is that the map is almost everywhere an isomorphism onto its image. The motivation for this definition comes from trying to interpolate between the Hilbert scheme and the Kontsevich mapping space. The main result is that our functor is represented by a proper algebraic space. As applications we obtain a new proof of the existence of Macaulayfications for varieties, and secondly, interesting compactifications of the spaces of smooth curves in projective space. We illustrate this in the case of rational quartics, where the resulting space appears easier than the Hilbert scheme. Bibliographical Information: School:Kungliga Tekniska högskolan School Location:Sweden Source Type:Doctoral Dissertation Keywords:MATHEMATICS; Algebra, geometry and mathematical analysis; Algebra and geometry; Cohen-Macaulay compactification; curves; algebraic space Date of Publication:01/01/2005
{"url":"http://www.openthesis.org/documents/Compactifying-locally-Cohen-Macaulay-projective-424276.html","timestamp":"2014-04-16T19:04:17Z","content_type":null,"content_length":"8769","record_id":"<urn:uuid:6a0a0330-2400-422b-974a-4c48cfc141d9>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum: Alejandre: Exploration of Roots NCTM Standards: Use mathematical models to represent and understand quantitative relationships Understand patterns, relations, and functions Use graphs to analyze the nature of changes in quantitites Identify the roots of a quadratic equation from the graph and the factored equation. Understand connections between coefficients of the second and third terms of a quadratic equation and the roots of the equation. Students learn to factor equations but often they don't have the conceptual understanding to accompany what they do mechanically with the numbers. Given the equation of a parabola: x^2 - bx + a we start with an example with roots 2 and -3. Our equation is x^2 + x - 6. Concepts to be emphasized during the activity include: □ 2 + (-3) = -1 or the coefficient of the second term, a □ (2)(-3) = -6 or the coefficient of the third term, b □ The sum of the two numbers is -1 and the product of the two numbers is -6. □ When a quadratic equation is graphed, it is a parabola □ The roots satisfy the equation so that y equals zero and therefore, most importantly, the roots of the equation can be read from the graph where the lines of the parabola cross the y axis. The left grid is used to select a and b. Note what happens when (a, b) = (-1, -6) Students open the applet and get familiar with the controls. Open the Java Applet Note: It will open in a separate window. If you are displaying the page for students, arrange your browser windows so that the applet and the directions can be easily viewed. If students are working individually they should be encouraged to do this. As students work through the activity they should: • Realize that the black parabola is static but the location of (a, b) on the blue graph controls the location of the red parabola. • Try the various points and possibly others as they try to control the red parabola. • Recognize that the red (and black) parabola cross the x-axis at (2, 0) and also at (-3, 0). • 2 and -3 are the satisfy the equation y = x^2 + x - 6. • 2 and -3 are the roots of the quadratic equation y = x^2 + x - 6. • See that when a = 0 and b = -4, then the parabola crosses the x-axis at (2, 0) and (-2, 0). • Identify other values for a and b that determine the roots of the quadratic equation graphed on the green graph. Ask students to generalize how the applet can be used to find the roots of a quadratic equation.
{"url":"http://mathforum.org/te/alejandre/four/quadratic.html","timestamp":"2014-04-20T16:46:15Z","content_type":null,"content_length":"8601","record_id":"<urn:uuid:7578f43f-75a5-4e17-b1f4-e31fb8d11cba>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00048-ip-10-147-4-33.ec2.internal.warc.gz"}