Gauss' hemisphere Gauss' hemisphere is another compactification of the plane R^2. The Riemann sphere is much better known, and probably more useful, but Gauss' hemisphere is useful for studying some aspects of projective geometry. To perform the compactification, place a hemisphere on the plane, much like you'd place a grapefruit half on a plate. From every point x on the plane, draw a line to the centre of the hemisphere (the centre of the sphere it is half of, which sits above the plane). The line intersects the hemisphere at exactly one point -- that point is the image of x on the hemisphere. Lines on the plane are transformed to appropriate halves of great circles on the hemisphere. In particular, they remain geodesics! Even nicer is that angles are preserved. Points on the "rim" of the hemisphere correspond to no points of the plane (they are the points added to compactify the plane). Parallel lines in the plane indicate a "direction". On the hemisphere, their corresponding great circles all pass through the same 2 antipodal points on the rim. Great circles corresponding to parallel lines intersect at 2 points on the hemisphere; great circles corresponding to intersecting lines intersect at 1 point on the hemisphere. Since parallel lines indicate direction, and since points on the rim correspond to such sets of parallel lines, we can consider points on the rim (sometimes called "infinite points" or "points at infinity", but such terminology is often misleading, as the plane doesn't contain these points!) to indicate direction. It can be interesting to consider what happens when the hemisphere is tilted in some fashion prior to being set on the plane, or the hemisphere's relationship to the projective plane.
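To make the construction concrete, here is a minimal sketch of the projection (my own code, under the assumption that the hemisphere is the lower half of a sphere of radius r centred at height r above the plane, touching it at the origin; the function name and coordinate conventions are mine):

```python
import numpy as np

def gauss_hemisphere(x, y, r=1.0):
    # Sphere of radius r centred at C = (0, 0, r); its lower half rests on
    # the plane z = 0, touching it at the origin.  The segment from the
    # plane point P = (x, y, 0) to C meets the lower hemisphere exactly once.
    s = np.sqrt(x * x + y * y + r * r)   # distance |P - C|
    t = r / s                            # fraction of |P - C| at which the sphere is hit
    return np.array([x * t, y * t, r * (1.0 - t)])
```

The origin maps to the point of tangency, and as |x| grows the image climbs toward the rim z = r: for example, gauss_hemisphere(1e9, 0) is numerically (1, 0, 1), the rim point shared by every line in the direction (1, 0).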
{"url":"http://everything2.com/title/Gauss%2527+hemisphere","timestamp":"2014-04-16T16:20:12Z","content_type":null,"content_length":"21130","record_id":"<urn:uuid:2c76aa70-4eab-4c5e-8bf8-2545ff2ed2bb>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
PVHydro Hydrogen Storage Model

TYPE 74: GAS TANK

General Description
This subroutine models a gas tank after /1/. The calculations can be either for an ideal gas (mode=1) or a gas that follows the van der Waals equation for real gases (mode=2). The gas pressure should be given in [bar] if SI units are used.

Nomenclature
a[gas] - van der Waals constant if mode=2 (=25000 for hydrogen) [Nm4/kg2]
b[gas] - van der Waals constant if mode=2 (=0.0267 for hydrogen) [m3/kg]
dsfp - Warning is printed when storage is empty (capacity less than dsfp) or when storage is full (capacity greater than 1-dsfp)
f - Capacity of tank (0.0=empty, 1.0=full)
lud - Logical unit number for data file
m - Gas mass in the tank
m[gas] - Gas mass flow to the tank
mode - 1 = ideal gas, 2 = real gas
Mol - Molar mass of gas
nstor - Storage number in data file
p[gas] - Pressure in the tank
p[max] - Maximal allowed pressure in bar when SI units are used
V - Volume of tank
T[gas] - Absolute gas temperature in the tank

Mathematical Description
Ideal gas (mode 1):
    p[gas] = m R T[gas] / (Mol V)
Real gas (mode 2), the van der Waals equation with mass-based constants:
    p[gas] = m R T[gas] / (Mol (V - m b[gas])) - a[gas] m^2 / V^2
(a[gas] = 25000. for hydrogen)
(b[gas] = 0.0267 for hydrogen)

TRNSYS Component Configuration
Parameters:
1 p[max] - Maximal allowed pressure
2 V - Volume of tank
3 nstor - Storage number in data file
4 lud - Logical unit number for data file
Inputs:
1 m[gas] - Gas mass flow to the tank
2 T[gas] - Absolute gas temperature in the tank
Outputs:
1 m[gas] - Gas mass flow to the tank
2 m - Gas mass in the tank
3 p[gas] - Pressure in the tank
4 f - Capacity of tank (0.0=empty, 1.0=full)
Derivatives:
1 m - Gas mass in the tank

Informational Flow Diagrams

Data file input
The data for the gas has to be given in a separate file. The first line in this file gives the number of gases in the file. Then each gas has two lines: the first line gives the number of the gas in the file, the second line gives the data for the gas. The data file for the gas will look like this (the third line (P1-P5) should not be in the file, but is included here to show what each datum means):

2 2.016 0.005 25000 0.0267
P1 P2 P3 P4 P5

OUTPUT to file logical unit 6 will look like this:

Parameters from file to Storage tank, unit 30:
mode = 2
mol = 2.0160E+00
dfsp = 5.0000E-03
agas = 2.5000E+04
bgas = 2.6700E-02

/1/ F. Steinhardt, Photovoltaische Energieversorgung für ein Wohnhaus, Studienarbeit, FhG-ISE Freiburg, 1988
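The two pressure formulas render as images in the original page, so as a cross-check here is a small sketch (mine) of the reconstructed calculation; the function name, the kg/mol unit for the molar mass, and the Pa-to-bar conversion are my own assumptions, not part of the TYPE 74 source:

```python
R = 8.314  # universal gas constant [J/(mol K)]

def tank_pressure(m, V, T, mol, mode=1, a_gas=25000.0, b_gas=0.0267):
    """Pressure [bar] in a tank of volume V [m^3] holding mass m [kg] at
    temperature T [K]; mol is the molar mass [kg/mol].

    mode=1: ideal gas law.
    mode=2: van der Waals equation with mass-based constants
            a_gas [N m^4/kg^2] and b_gas [m^3/kg] (defaults: hydrogen).
    """
    if mode == 1:
        p = m * R * T / (mol * V)                                     # [Pa]
    else:
        p = m * R * T / (mol * (V - m * b_gas)) - a_gas * m**2 / V**2  # [Pa]
    return p / 1.0e5                                                   # Pa -> bar

# Example: 1 kg of hydrogen in a 1 m^3 tank at 300 K, real-gas mode
print(tank_pressure(1.0, 1.0, 300.0, 2.016e-3, mode=2))   # ~12.5 bar
```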
{"url":"http://sel.me.wisc.edu/trnsys/downloads/trnsedapps/demos/pvh-store.htm","timestamp":"2014-04-16T07:13:55Z","content_type":null,"content_length":"4990","record_id":"<urn:uuid:4bd35b37-1855-4b55-89e7-4db4bfee8d44>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
Oliver Kerr Oliver Kerr Reader in the Department of Mathematics at City University. Email: o.s.kerr@city.ac.uk Phone: (020) 7040 8465 Teaching material For material concerning third year projects go to my Projects Page. Research Interests I am interested in many areas of fluid mechanics. In particular Double Diffusive Convection (what happens when you heat salty water), Laminar Flames (what happens when you have nice simple flames, with emphasis on the simple), Periodic Vortices (some new solutions of the steady Navier-Stokes equation, with HUGE graphics files). Einstein talk The presentation used in the "E=mc^2 made simple" talk is available here. It is not intended to be self contained. This is in a big PDF file (about 10MB) that uses animations, some of which only work on Acrobat Reader 7 or later (possibly also 6) in Windows. It was created in LaTeX using PPower4, pdfanim and movie15.sty. Senate Working Group on Sabbaticals Material relating to the Senate Working Group (which I chaired) can be found here. Erdös Number 4 I have an Erdös number of 4 via the following publications: Other Activities I am also a father to 2 children.
{"url":"http://www.staff.city.ac.uk/o.s.kerr/","timestamp":"2014-04-16T13:02:00Z","content_type":null,"content_length":"6132","record_id":"<urn:uuid:57fb5601-d0cf-4ea4-b67c-516cc5c84b1b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum cryptography

From CryptoDox, The Online Encyclopedia on Cryptography and Information Security

Some ideas and tools from quantum physics are challenging some of the assumptions that cryptographers have taken for granted. Currently, the three main applications of quantum physics to cryptography are in the areas of random number generation, quantum key exchange, and quantum cryptanalysis. The roots are in a proposal by Stephen Wiesner called "Conjugate Coding" from the early 1970s [Quantum Cryptography Tutorial]. It was eventually published in 1983 in Sigact News, and by that time Bennett and Brassard, who were familiar with Wiesner's ideas, were ready to publish ideas of their own. They produced BB84, the first quantum cryptography protocol, in 1984, but it was not until 1991 that the first experimental prototype based on this protocol was made operable (over a distance of 32 centimeters).

Quantum random number generation

Several encryption algorithms (such as One Time Pads) require a sequence of "unguessable" numbers. Several commercial hardware random number generators are based on precisely timing radioactive decay events, which are (according to our current understanding of quantum physics) completely unpredictable. A few commercial hardware random number generators are based on other unguessable quantum events, such as sending individual photons toward a half-silvered mirror and detecting whether each passed through (and hit one detector) or was reflected (and hit a different detector).

Quantum key exchange

main article: BB84

A few prototypes have been built and at least one company is commercially selling hardware that implements quantum key exchange. Quantum key exchange protocols rely on certain kinds of measurements (typically measuring photon polarization) such that only 2 people can make the measurement -- if a 3rd eavesdropper attempts to make the same measurement, that measurement disrupts the system in ways that the 2 people can detect. Current implementations send photons either through the air or through fiber optic links from one person to another.

Quantum cryptanalysis

Quantum computers are expected to be able to do integer factorization (using Peter Shor's algorithm) and discrete logarithms much faster than digital computers. Many encryption algorithms (such as RSA) rely on those operations being very slow. Messages encrypted by those algorithms could be much easier to decrypt using a quantum computer. However, so far the largest number publicly factored by a (prototype) quantum computer is the number 15. Scaling up quantum computers to hold more "qubits" is a difficult technical challenge. It is widely believed that, if a specific key length is adequate against brute-force digital-computer attacks, then doubling the number of bits in the key will be adequate protection against quantum-computer attacks.
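The basis-comparison step described above is easy to simulate classically. Here is a toy sketch (my own; real BB84 adds error estimation and privacy amplification) showing how a matched-basis key emerges and how an intercept-resend eavesdropper introduces detectable errors:

```python
import random

def bb84(n_bits=64, eavesdrop=False):
    # Alice encodes random bits in random bases (0 = rectilinear, 1 = diagonal).
    alice_bits  = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.randint(0, 1) for _ in range(n_bits)]
    photons = list(zip(alice_bits, alice_bases))

    if eavesdrop:
        # Eve measures each photon in a random basis; a wrong basis
        # randomizes the bit carried by the photon she re-sends.
        resent = []
        for bit, basis in photons:
            eve_basis = random.randint(0, 1)
            resent.append((bit if eve_basis == basis else random.randint(0, 1),
                           eve_basis))
        photons = resent

    # Bob measures in his own random bases; a basis mismatch yields a random bit.
    bob_bases = [random.randint(0, 1) for _ in range(n_bits)]
    bob_bits = [bit if sent_basis == bb else random.randint(0, 1)
                for (bit, sent_basis), bb in zip(photons, bob_bases)]

    # Alice and Bob publicly compare bases and keep the matching positions.
    keep = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

a_key, b_key = bb84()
print(a_key == b_key)   # True without Eve; with eavesdrop=True ~25% of bits differ
```

Running bb84() gives identical keys; with eavesdrop=True, comparing a sample of the sifted bits reveals roughly 25% disagreement, which is exactly the detectable disruption the article describes.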
{"url":"http://cryptodox.com/Quantum_cryptography","timestamp":"2014-04-19T01:51:17Z","content_type":null,"content_length":"21895","record_id":"<urn:uuid:d22b9dfc-dea2-4b4c-8492-33e9019034f0>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of cubo-octahedron

An octahedron (plural: octahedra) is a polyhedron with eight faces. A regular octahedron is a Platonic solid composed of eight equilateral triangles, four of which meet at each vertex.

The octahedron's symmetry group is O[h], of order 48. This group's subgroups include D[3d] (order 12), the symmetry group of a triangular antiprism; D[4h] (order 16), the symmetry group of a square bipyramid; and T[d] (order 24), the symmetry group of a rectified tetrahedron. These symmetries can be emphasized by different decorations of the faces. It is a 3-dimensional cross polytope.

Cartesian coordinates
An octahedron can be placed with its center at the origin and its vertices on the coordinate axes; the Cartesian coordinates of the vertices are then (±1, 0, 0); (0, ±1, 0); (0, 0, ±1).

Area and volume
The area A and the volume V of a regular octahedron of edge length a are:
$A = 2\sqrt{3}\,a^2 \approx 3.46410162a^2$
$V = \frac{1}{3}\sqrt{2}\,a^3 \approx 0.471404521a^3$
Thus the volume is four times that of a regular tetrahedron with the same edge length, while the surface area is twice as large (because we have 8 vs. 4 triangles).

Geometric relations
The interior of the compound of two dual tetrahedra is an octahedron, and this compound, called the stella octangula, is its first and only stellation. Correspondingly, a regular octahedron is the result of cutting off from a regular tetrahedron four regular tetrahedra of half the linear size (i.e. rectifying the tetrahedron). The vertices of the octahedron lie at the midpoints of the edges of the tetrahedron, and in this sense it relates to the tetrahedron in the same way that the cuboctahedron and icosidodecahedron relate to the other Platonic solids. One can also divide the edges of an octahedron in the ratio of the golden mean to define the vertices of an icosahedron. This is done by first placing vectors along the octahedron's edges such that each face is bounded by a cycle, then similarly partitioning each edge into the golden mean along the direction of its vector. There are five octahedra that define any given icosahedron in this fashion, and together they define a regular compound.

Octahedra and tetrahedra can be alternated to form a vertex-, edge-, and face-uniform tessellation of space, called the octet truss by Buckminster Fuller. This is the only such tiling save the regular tessellation of cubes, and is one of the 28 convex uniform honeycombs. Another is a tessellation of octahedra and cuboctahedra.

The octahedron is unique among the Platonic solids in having an even number of faces meeting at each vertex. Consequently, it is the only member of that group to possess mirror planes that do not pass through any of the faces. Using the standard nomenclature for Johnson solids, an octahedron would be called a square bipyramid.

Related polyhedra
The octahedron can also be considered a rectified tetrahedron -- and can be called a tetratetrahedron. This can be shown by a 2-color face model. With this coloring, the octahedron has tetrahedral symmetry. Compare this truncation sequence between a tetrahedron and its dual: Tetrahedron, Truncated tetrahedron, Octahedron, Truncated tetrahedron, Tetrahedron.

Octahedra in the physical world

Octahedra in music
If you place notes on every vertex of an octahedron, you can get a six-note just intonation scale with remarkable properties -- it is highly symmetrical and has eight consonant triads and twelve consonant diads. See hexany.

Other octahedra
The regular octahedron has 6 vertices and 12 edges, the minimum for an octahedron; nonregular octahedra may have as many as 12 vertices and 18 edges.
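As a quick sanity check of the formulas above (my own snippet, not part of the entry): the octahedron with vertices (±1, 0, 0), (0, ±1, 0), (0, 0, ±1) has edge length a = √2, and both formulas agree with a direct computation treating it as two square pyramids:

```python
import math

a = math.sqrt(2)                      # edge length of the unit-coordinate octahedron

A = 2 * math.sqrt(3) * a**2           # surface area formula from the entry
V = math.sqrt(2) / 3 * a**3           # volume formula from the entry

# Direct check: two square pyramids, each with base side sqrt(2)
# (base area 2) and height 1, so V = 2 * (1/3 * 2 * 1) = 4/3.
print(A, 4 * math.sqrt(3))            # 6.928..., both ways
print(V, 4 / 3)                       # 1.333..., both ways
```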
{"url":"http://www.reference.com/browse/cubo-octahedron","timestamp":"2014-04-19T21:31:19Z","content_type":null,"content_length":"85615","record_id":"<urn:uuid:0224363c-d80e-4c11-8733-d94585fb768e>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: Why? Date: Oct 23, 2012 10:40 AM Author: Dave L. Renfro Subject: Re: Why? Robert Hansen wrote: > Why is the attached page in a 4th grade math text? > Teaching algebra in 4th grade IS NOT the path to algebra. > Teaching arithmetic in 4th grade IS the path to algebra. 4th grade seems a little early to me. I think 6th grade would be more appropriate for something like this. Also, I think I'd omit explicitly defining and using the word "variable", and I KNOW I'd omit defining and using the phrase "algebraic expression". Dave L. Renfro
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7911401","timestamp":"2014-04-19T00:15:01Z","content_type":null,"content_length":"1623","record_id":"<urn:uuid:6da85e73-8c09-4cf2-9e1b-694cfade92a1>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
Spring Design Optimization Problem

The specifications and modeling equations for a compression spring create a number of trade-offs that must be considered during design. We wish to determine the spring design that maximizes the force of the spring at its preload height, h[o], of 1.0 inches. The spring is to operate an indefinite number of times through a deflection, delta[o], of 0.4 inches, which is an additional deflection from h[o]. The stress at the solid height, h[s], must be less than S[y] to protect the spring from inadvertent damage.

Turn in a report with the following sections:

1. Title Page with Summary. The Summary should be short (less than 50 words), and give the main optimization results.
2. Procedure: Give a brief description of your model. You are welcome to refer to the assignment, which should be in the Appendix. Also include:
   1. A table with the analysis variables, design variables, analysis functions and design functions.
3. Results: Briefly describe the results of optimization (values). Also include:
   1. A table showing the optimum values of variables and functions, indicating binding constraints and/or variables at bounds (highlighted)
   2. A table giving the various starting points which were tried, along with the optimal objective values reached from each point.
4. Discussion of Results: Briefly discuss the optimum and design space around the optimum. Do you feel this is a global optimum? Also include and briefly discuss:
   1. A "zoomed out" contour plot showing the design space (both feasible and infeasible) for coil diameter vs. wire diameter, with the feasible region shaded and the optimum marked.
   2. A "zoomed in" contour plot of the design space (mostly feasible space) for coil diameter vs. wire diameter, with the feasible region shaded and the optimum marked.
5. Appendix:
   1. Listing of your model with all variables and equations
   2. Solver output with details of the convergence to the optimal values

Any output from the software is to be integrated into the report (either physically or electronically pasted) as given in the sections above. Tables and figures should all have explanatory captions. Do not just staple pages of output to your assignment: all raw output is to have notations made on it. For graphs, you are to shade the feasible region and mark the optimum point. For tables of design values, you are to indicate, with arrows and comments, any variables at bounds, any binding constraints, the objective, etc. (You need to show that you understand the meaning of the output you have included.)

Introduction to the Spring Problem
Help Session
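The assignment's model equations are not reproduced on this page, so the following is only a skeleton (mine) of how such a constrained maximization might be set up with SciPy; every function body, bound, and starting point below is a hypothetical placeholder to be replaced by the actual spring-rate, force, and stress equations from the course model:

```python
from scipy.optimize import minimize

# Design variables: wire diameter d and coil diameter D (inches).
def preload_force(x):
    d, D = x
    k = d**4 / D**3            # placeholder stand-in for the spring rate
    return k * 0.4             # placeholder force through the 0.4 in deflection

def stress_margin(x):
    d, D = x
    return 1.0 - d / D         # placeholder for S_y minus stress at solid height (>= 0 is feasible)

res = minimize(
    lambda x: -preload_force(x),           # maximize force -> minimize its negative
    x0=[0.05, 0.5],                        # one starting point; try several, per the report
    bounds=[(0.01, 0.2), (0.1, 2.0)],
    constraints=[{"type": "ineq", "fun": stress_margin}],
    method="SLSQP",
)
print(res.x, -res.fun)
```

Rerunning from several x0 values and tabulating res.x and -res.fun produces exactly the starting-point table the report asks for.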
{"url":"http://www.apmonitor.com/me575/index.php/Main/SpringDesign","timestamp":"2014-04-16T13:03:32Z","content_type":null,"content_length":"17265","record_id":"<urn:uuid:9013bf67-4d30-47d6-b4f0-b6f572a57ec0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
How do I Solve these two equations? As part of a question on axial load I need to simplify two equations in terms of P ([1] and [2]). In the above you can see how the two forces F_1 and F_2 were simplified to 0.8081P and 0.7143P respectively. I understand both the section prior to this and the one after, though I do not understand how this simplification was carried out. Any help would be greatly appreciated.
{"url":"http://www.physicsforums.com/showthread.php?t=581729","timestamp":"2014-04-18T10:44:03Z","content_type":null,"content_length":"23275","record_id":"<urn:uuid:0864dc30-6809-479a-8232-29aa6bd00635>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
Gambrills Geometry Tutor Find a Gambrills Geometry Tutor My love of teaching, especially on a one-to-one basis, began in high school, where the school guidance counselor enlisted me in tutoring struggling algebra and geometry students. Since then, I informally assisted fellow students in my college studies in engineering. I worked successfully for many ... 12 Subjects: including geometry, reading, calculus, algebra 1 ...Let me help you or your son/daughter obtain the foundation that will not only ensure good grades, but will be a valuable asset through life and in college. While in college, I tutored a student for 3 years of his high school program, and had a great understanding of the material. We worked thro... 10 Subjects: including geometry, calculus, physics, Microsoft Excel ...It is important students practice so that they have the formulas they need memorized and they have some strategies in place. See my blog about some basic strategies. Performing well on exams like the SAT, ACT and GRE is partially about the mathematics and partially about the strategies needed to do well. 24 Subjects: including geometry, reading, calculus, ASVAB I am currently teaching at a High School and tutoring at a Community College in Maryland. I have had more than 30 years of teaching mathematics and other science courses. As an instructor, I take pride in using the most effective instructional approach that suits the students' learning style and available resources. 3 Subjects: including geometry, algebra 1, trigonometry ...In general, I find teaching to be one of the most satisfying things a person can do. I enjoy challenging people, and giving them the tools to further themselves in their lives. My education philosophy is fairly straight forward: speak clearly, be excited about the material at hand, be supportive and encouraging, and give students the tools and opportunity to show what they can do. 15 Subjects: including geometry, calculus, statistics, piano
{"url":"http://www.purplemath.com/gambrills_md_geometry_tutors.php","timestamp":"2014-04-18T13:49:01Z","content_type":null,"content_length":"24046","record_id":"<urn:uuid:fd93101d-d14e-4d10-9662-b16e58043b8b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
MPs' (Partial) Expenses - July-December 2009 [via Guardian Datastore] - CSV Datastore Explorer

This is a quick demo of using the data published by the Guardian Datastore as a database that can be interrogated via the Google visualization API. The following are the column headings from the spreadsheet...

Go fish... Try out some visualisation queries here...

Select the columns from the list box (ctrl+click for multiple selections in IE, command+click on a Mac) or type the elements directly into the appropriate text box.

order by

Display as: | Table | Scatter chart | Line chart | Pie chart | Bar chart | Column chart |

Writing the Queries - A Few Hints...

Here are some quick tips on writing queries...

│ select input      │ where input                             │ Comments                                                                                                              │
│ *                 │                                         │ Display everything                                                                                                    │
│ B,C,I             │                                         │ Display columns B, C and I                                                                                           │
│ B,C,I             │ I=23083                                 │ Display columns B, C and I for MPs claiming exactly 23083 in column I                                                 │
│ count(I)          │ I=23083                                 │ Count how many people claimed exactly 23083 in column I                                                               │
│ B,C,I             │ I!=23083 order by I                     │ Display the people who did not claim exactly 23083 in column I, in increasing order of column I values                │
│ B,C,I             │ I!=23083 order by I desc                │ Display the people who did not claim exactly 23083 in column I, in decreasing order of column I values                │
│ B,C,D,E           │ (C contains 'Joan' or C matches 'John') │ Select by name (case sensitive); 'matches' must match exactly, 'contains' is a free text search                       │
│ *                 │ F<100000                                │ Full details of everyone who claimed less than 100000 in column F                                                     │
│ sum(I)            │                                         │ Total claimed within column I                                                                                         │
│ count(I)          │                                         │ Number of rows where there's a value in column I                                                                      │
│ count(I)          │ I>=0                                    │ Make sure we only count 'valid' rows                                                                                  │
│ sum(I)/count(I)   │                                         │ Calculate the average amount claimed in column I                                                                      │
│ D,sum(I)          │ I>=0 group by D                         │ Find out how much has been claimed by each party named in column D                                                    │
│ D,sum(I)/count(I) │ I>=0 group by D                         │ For the total claimed by each party (column D), how much on average does each member of that party claim              │
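As a concrete illustration, here is a small Python sketch (mine) that assembles one such query URL; the gqc/gqw/gqo/gql parameter names are the ones this page itself uses in its query strings, and the example asks for each MP's total claimed in column E:

```python
from urllib.parse import urlencode

base = "http://ouseful.open.ac.uk/datastore/mpsexpensesPartJulDec-09.php"
params = {
    "run": "true",
    "gqc": "A,B,C,sum(E)",        # select: identifying columns plus the total of column E
    "gqw": "E>0 group by A,B,C",  # where / group by: only rows with a positive claim
    "gqo": "sum(E)",              # order by the total claimed
    "gql": "10",                  # limit the result to ten rows
}
print(base + "?" + urlencode(params))
```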
{"url":"http://ouseful.open.ac.uk/datastore/mpsexpensesPartJulDec-09.php?run=true&gqc=A,B,C,sum(E)&gqw=E%3E0%20group%20by%20A,B,C&gqo=sum(E)&gql=10","timestamp":"2014-04-20T08:14:31Z","content_type":null,"content_length":"12930","record_id":"<urn:uuid:7a90e015-fc71-4061-86ab-e585c3c1696c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
Set up of applications involving uniform motion this is driving me bananas. i can do the distance=rate x time problems well enough but setting this one up has utterly baffled me. Hazel and Emily fly from atlanta to San diego. the flight to sandiego is against the wind and takes 4 hours. the return flight with the wind takes 3.5 hours. if the speed of the wind is 40 mph find the speed of the plane in still air. i know the table you do where you set it up but im just utterly baffled as how to set this one up to find the speed of the plane in still air. something to do with resistence...uuhhh...help? oh and im new here. Hi! ckowen wrote: i know the table you do where you set it up but im just utterly baffled as how to set this one up to find the speed of the plane in still air. The "in still air" part is a fancy way of saying "what the speedometer reads" (or whatever they call it in a passenger plane). The speedometer reading and the actual speed are not necessarily the same thing. For instance, if your car is stuck on ice, your speedometer might be reading "60", while the car is actually slowly drifting backwards. Or think of trying to row a boat up a waterfall: you can row like crazy, but you ain't goin' up! So the secret here is to account for the two input speeds which result in the one output speed: the speedometer reading for the plane, and the wind. On their trip out, the wind is against them, pushing them back, and thus subtracting from what the plane's engines are doing. On their trip back, the wind is with them, pushing them forward, and thus adding to what the plane's engines are doing. Once you set up the "rate" part of your table with this information, use the fact that, whatever the distance was, it was the same in each direction. Set the "rt" expressions equal, and solve for the reading of the plane's speedometer.
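Carrying the hint through to the end (this worked solution is mine, not part of the original thread): let p be the plane's still-air speed in mph. The distances out and back are equal, so

```latex
4(p - 40) = 3.5(p + 40)
\;\Longrightarrow\; 4p - 160 = 3.5p + 140
\;\Longrightarrow\; 0.5p = 300
\;\Longrightarrow\; p = 600 \text{ mph}.
```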
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=7&t=838&p=2745","timestamp":"2014-04-16T07:16:11Z","content_type":null,"content_length":"20544","record_id":"<urn:uuid:4b423077-1603-457c-b2a3-4dc42a62bcce>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
Electrical Sample

Question 1. The RMS value of the voltage u(t) = 3 + 4cos(3t) is
Options: A) √17 V   B) 5 V   C) 7 V   D) (3 + 2√2) V
Correct answer: A

Question 2. The open loop transfer function of a unity feedback control system is given as G(s) = (as + 1)/s². The value of 'a' to give a phase margin of 45° is equal to
Options: A) 0.141   B) 0.441   C) 0.841   D) 1.141
Correct answer: C

Question 3. The armature resistance of a permanent magnet dc motor is 0.8 Ω. At no load, the motor draws 1.5 A from a supply voltage of 25 V and runs at 1500 rpm. The efficiency of the motor while it is operating on load at 1500 rpm, drawing a current of 3.5 A from the same source, will be
Options: A) 48.0%   B) 57.1%   C) 59.2%   D) 88.8%
Correct answer: A

Question 4. The solution of the first order differential equation x'(t) = -3x(t), x(0) = x[0] is
Options: A) x(t) = x[0] e^(-3t)   B) x(t) = x[0] e^(-3)   C) x(t) = x[0] e^(-1/3)   D) x(t) = x[0] e^(-1)
Correct answer: A

Question 5. The unit impulse response of a second order under-damped system starting from rest is given by c(t) = 12.5 e^(-6t) sin 8t, t ≥ 0. The steady-state value of the unit step response of the system is equal to
Options: A) 0   B) 0.25   C) 0.5   D) 1.0
Correct answer: D

Question 6. A single-phase, 230 V, 50 Hz, 4 pole, capacitor-start induction motor has the following stand-still impedances:
Main winding Z[m] = 6.0 + j4.0 Ω
Auxiliary winding Z[a] = 8.0 + j6.0 Ω
The value of the starting capacitor required to produce 90° phase difference between the currents in the main and auxiliary windings will be
Options: A) 176.84 µF   B) 187.24 µF   C) 265.26 µF   D) 280.86 µF
Correct answer: A

Question 7. A single-phase half-controlled rectifier is driving a separately excited dc motor. The dc motor has a back emf constant of 0.5 V/rpm. The armature current is 5 A without any ripple. The armature resistance is 2 Ω. The converter is working from a 280 V, single phase ac source with a firing angle of 80°. Under this operating condition, the speed of the motor will be
Options: A) 339 rpm   B) 359 rpm   C) 366 rpm   D) 386 rpm
Correct answer: C

Question 8. The 8085 assembly language instruction that stores the content of the H and L registers into the memory locations 2050[H] and 2051[H], respectively, is
Options: A) SPHL 2050[H]   B) SPHL 2051[H]   C) SHLD 2050[H]   D) STAX 2050[H]
Correct answer: C

Question 9. A 50 Hz, 4-pole, 500 MVA, 22 kV turbo-generator is delivering rated megavolt-amperes at 0.8 power factor. Suddenly a fault occurs, reducing its electric power output by 40%. Neglect losses and assume constant power input to the shaft. The accelerating torque in the generator in MNm at the time of the fault will be
Options: A) 1.528   B) 1.018   C) 0.848   D) 0.509
Correct answer: A

Question 10. The Nyquist plot of the loop transfer function G(s)H(s) of a closed loop control system passes through the point (-1, j0) in the G(s)H(s) plane. The phase margin of the system is
Options: A) 0°   B) 45°   C) 90°   D) 180°
Correct answer: D

Question 11. A 50 kW dc shunt motor is loaded to draw rated armature current at any given speed. When driven (i) at half the rated speed by armature voltage control and (ii) at 1.5 times the rated speed by field control, the respective output powers delivered by the motor are approximately
Options: A) 25 kW in (i) and 75 kW in (ii)   B) 25 kW in (i) and 50 kW in (ii)   C) 50 kW in (i) and 75 kW in (ii)   D) 50 kW in (i) and 50 kW in (ii)
Correct answer: B

Question 12. A hydraulic turbine having a rated speed of 250 rpm is connected to a synchronous generator. In order to produce power at 50 Hz, the number of poles required in the generator are
Correct answer: D

Question 13. For the equation x''(t) + 3x'(t) + 2x(t) = 5, the solution x(t) approaches which of the following values as t → ∞?
Correct answer: B

Question 14. The following motor definitely has a permanent magnet rotor
Options: A) DC commutator motor   B) Brushless dc motor   C) Stepper motor   D) Reluctance motor
Correct answer: C

Question 15. A 110 kV, single core coaxial, XLPE insulated power cable delivering power at 50 Hz has a capacitance of 125 nF/km. If the dielectric loss tangent of XLPE is 2 × 10⁻⁴, the dielectric power loss in this cable in W/km is
Options: A) 5.0   B) 31.7   C) 37.8   D) 189.0
Correct answer: D

Question 16. The simultaneous application of signals x(t) and y(t) to the horizontal and vertical plates, respectively, of an oscilloscope produces a vertical figure-of-8 display. If P and Q are constants, and x(t) = P sin(4t + 30), then y(t) is equal to
Options: A) Q sin(4t - 30)   B) Q sin(2t + 15)   C) Q sin(8t + 60)   D) Q sin(4t + 30)
Correct answer: B

Question 17. A 500 MW 3-phase Y-connected synchronous generator has a rated voltage of 21.5 kV at 0.85 pf. The line current when operating at full load rated conditions will be
Options: A) 13.43 kA   B) 15.79 kA   C) 23.25 kA   D) 27.36 kA
Correct answer: B

Question 18. Total instantaneous power supplied by a 3-phase ac supply to a balanced R-L load is
Options: A) zero   B) constant   C) pulsating with zero average   D) pulsating with non-zero average
Correct answer: B

Question 19. The equivalent circuit of a transformer has leakage reactances X[1], X[2]' and magnetizing reactance X[m]. Their magnitudes satisfy
Options: A) X[1] >> X[2]' >> X[m]   B) X[1] << X[2]' << X[m]   C) X[1] ≈ X[2]' >> X[m]   D) X[1] ≈ X[2]' << X[m]
Correct answer: D

Question 20. If P and Q are two random events, then the following is TRUE
Options: A) Independence of P and Q implies that Probability(P ∩ Q) = 0   B) Probability(P ∪ Q) ≥ Probability(P) + Probability(Q)   C) If P and Q are mutually exclusive, then they must be independent   D) Probability(P ∩ Q) ≤ Probability(P)
Correct answer: D
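As a spot check of Question 1 (my own verification, not part of the sample paper): the RMS value of a DC offset plus a sinusoid combines in quadrature, √(3² + 4²/2) = √17 ≈ 4.123 V, which is option A. A quick numerical confirmation:

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 200001)        # covers three full periods of cos(3t)
u = 3.0 + 4.0 * np.cos(3.0 * t)
print(np.sqrt(np.mean(u ** 2)), np.sqrt(17.0))   # both ~= 4.1231
```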
{"url":"http://www.onestopgate.com/gate-preparation/sample-papers/eee-1.asp","timestamp":"2014-04-21T12:11:14Z","content_type":null,"content_length":"85216","record_id":"<urn:uuid:81ec45ce-1f82-4aad-98f2-7060b3a2c0db>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
Zentralblatt MATH

Publications of (and about) Paul Erdös

Zbl.No: 501.05043
Autor: Erdös, Paul; Howorka, E.
Title: An extremal problem in graph theory. (In English)
Source: Ars Comb. 9, 249-251 (1980).
Review: The distance d[G](u,v) between vertices u and v of a graph G is the least number of edges in any u-v path of G; d[G](u,v) = \infty if u and v lie in distinct components of G. A graph G = (V,E) is distance-critical if for each x in V there are vertices u, v (depending on x) such that d[G](u,v) < d[G-x](u,v). Let g(n) denote the largest integer such that |E| \leq \binom{n}{2} - g(n) for every distance-critical graph on n vertices. The authors show that g(n) is of the order of magnitude n^{3/2}.
Reviewer: D.Lick
Classif.: * 05C35 Extremal problems (graph theory)
Keywords: distance; distance-critical graph
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
{"url":"http://www.emis.de/classics/Erdos/cit/50105043.htm","timestamp":"2014-04-21T02:33:38Z","content_type":null,"content_length":"3732","record_id":"<urn:uuid:6c548691-bdb1-4015-a319-4b97392619e1>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Mediation test

Sun Kim posted on Saturday, April 07, 2012 - 1:42 pm
Dear Dr. Muthen, The output using MODEL IND in Mplus does not give the same results as the hand-calculated Sobel test -- I thought they should be similar. My N = 1,006 (fairly large), and my predictors and mediators are categorical (overall use of WLSMV estimator) but my outcomes continuous. Which test should I go with, and do you have any suggestions? Thank you.

Bengt O. Muthen posted on Saturday, April 07, 2012 - 2:13 pm
The more general method is the delta method used in Mplus, which can always be applied. In some settings it is the same as the Sobel test. We have a FAQ on our web site on this, saying: The MacKinnon (2008) book describes the Sobel method and the delta method for the indirect effect a*b in Section 4.14. See especially page 92. The delta method uses formula (4.27) with an added covariance term between the a and b estimates (see second line below 4.27). For some models, such as the mediation model for continuous observed variables, the covariance term is zero so that the delta method simplifies to the formula of (4.27). I believe (4.27) is what is referred to as the Sobel method. Mplus uses the delta method in Model Indirect and also in Model Constraint. Note that the zero covariance term is exactly zero when the ML estimator is used, but only approximately zero when MLF or MLR are used.

Sun Kim posted on Saturday, April 07, 2012 - 3:33 pm
Dear Dr. Muthen, Thank you so much for your kind response. Yes, I noticed that sometimes the Model Indirect gave the same results as the Sobel test and sometimes not. It seems though, in reading published articles (I am in the field of psychology), that it is rare to use the delta method (usually, the Sobel test is simply calculated and reported in published mediation model papers, with the corresponding citations). So (1) which method would you think is superior, and (2) do you have any examples of published articles that cite the delta method? Thank you so much for your time.

Bengt O. Muthen posted on Saturday, April 07, 2012 - 4:01 pm
As the quote from the FAQ says, Delta = Sobel in most cases - that's why you see it in your literature. So in those cases, when you are using Sobel, you are in fact also using Delta. When they differ, Delta should be used. In this way, you can never go wrong with Delta. You can cite the MacKinnon book pages that I gave for the Delta method.

Sun Kim posted on Saturday, April 07, 2012 - 6:06 pm
Dear Dr. Muthen, Thank you so much for your response. It is so helpful and informative. I will do exactly that. - Sun

Joseph posted on Monday, October 21, 2013 - 10:06 am
Dear Drs Muthen, I am running SEM to explore mediation. I am calculating the 95% CIs using the Delta method. When I compare the CIs calculated using the delta method to those obtained specifying the bootstrap option, the results are essentially equivalent. I suspect this may be due to my large sample size (approx. 5000 respondents). A reviewer has asked me to cite a reference which also confirms the two approaches can produce equivalent results; could you recommend one please? Many thanks

Linda K. Muthen posted on Monday, October 21, 2013 - 2:58 pm
This sounds like a good question for a general discussion forum like SEMNET.

RuoShui posted on Saturday, April 05, 2014 - 11:02 pm
Dear Drs. Muthen, 1) Regarding your above comments about the FAQ, is it correct that if the mediation is for continuous latent variables then the method used is delta instead of Sobel?
2) When comparing mediation models and direct models, usually the chi-square favours the mediated model but incremental fit indices favour the parsimonious direct model. I am wondering: if the test of mediation using "model indirect" is significant, will that give more confidence to confirm the mediation model is a better model? Thank you so much for your input.

Linda K. Muthen posted on Sunday, April 06, 2014 - 5:49 pm
1. Mplus always uses the Delta method.
2. Please send the outputs that illustrate your issue and your license number to support@statmodel.com.
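For readers who want to reproduce the two tests by hand, here is a small sketch (mine, following the MacKinnon formula quoted in the FAQ above) of the delta-method standard error for the indirect effect a*b; setting the covariance term to zero recovers the plain Sobel formula:

```python
import math

def indirect_se(a, b, var_a, var_b, cov_ab=0.0):
    """Delta-method SE of the indirect effect a*b.

    With cov_ab = 0 this is the Sobel formula; keeping the covariance
    term is what distinguishes the delta method used in Mplus for
    estimators where the a and b estimates are correlated.
    """
    return math.sqrt(b**2 * var_a + a**2 * var_b + 2 * a * b * cov_ab)

# Hypothetical illustration values (not from the thread):
a, b = 0.30, 0.25
se = indirect_se(a, b, var_a=0.05**2, var_b=0.04**2, cov_ab=0.0)
z = a * b / se          # Sobel/delta z statistic for the indirect effect
print(se, z)
```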
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=11&page=9348","timestamp":"2014-04-17T12:57:07Z","content_type":null,"content_length":"27875","record_id":"<urn:uuid:f9b5dac1-5a9e-4adb-bf5c-23bc432ad9f9>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
quartic diagonal as a sum of squares of quadratic forms

I would appreciate it if someone can point out the literature related to characterizing the set of all different ways to write the real quartic diagonal $\sum \limits_{k=1}^n x_k^4$, $x \in \mathbb{R}^n$, as a sum of squares of real quadratic forms. Murray Marshall, in his book "Positive polynomials and sums of squares", shows that the quartic diagonal is in the interior of the cone of sums of squares. Does anyone know details about such representations? In particular, suppose $\sum \limits_{k=1}^n x_k^4 = \sum_p (x^T A_p x)^2$ ($x$ is a column vector, and $x^T$ its transpose, $A_p \in \mathbb{R}^{n \times n}$); then what $Q \in \mathbb{R}^{n \times n}$ can be represented by the expression $\sum_p (x^T A_p x - {x^*}^T A_p x^*)^2 = \sum \limits_{k=1}^n x_k^4 - 2x^T Q x + \mathrm{const}$, where $x^*$ is a point in $\mathbb{R}^n$? More specifically, are such $Q$ dense around the identity matrix (in a small neighborhood)?

Tags: polynomials sums-of-squares ag.algebraic-geometry reference-request

1 Answer

I passed your question on to a friend who knows about these things, and he replied: The theorem that $\sum_{k=1}^m x_k^{2r}$ is interior to the sum of squares of appropriate degree can be found, with proof, in a paper of R. M. Robinson: Some definite polynomials which are not sums of squares of real polynomials, Izdat. "Nauka" Sibirsk. Otdel. Novosibirsk (1973), 264-282, Selected questions of algebra and logic (a collection dedicated to the memory of A. I. Mal'cev), Abstract in Notices Amer. Math. Soc. 16 (1969), p. 554. My friend also recommended the Memoir by Bruce Reznick, which can be found, scanned, on Bruce's webpage at UIUC.

Comment: Thanks a lot. Nevertheless this answer does not provide any specific information about the quartic polynomials of interest. As far as I know no one has studied them in detail. – mkatkov Oct 17 '11 at 7:52
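A concrete instance of the non-uniqueness being asked about (my example, not from the thread): already for $n = 2$ the quartic diagonal admits more than one representation as a sum of squares of real quadratic forms,

```latex
x_1^4 + x_2^4 = (x_1^2)^2 + (x_2^2)^2
             = (x_1^2 - x_2^2)^2 + (\sqrt{2}\, x_1 x_2)^2 .
```

Expanding the second right-hand side gives $x_1^4 - 2x_1^2x_2^2 + x_2^4 + 2x_1^2x_2^2$, confirming the identity, so the set of representations is not a single point.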
{"url":"http://mathoverflow.net/questions/71970/quartic-diagonal-as-a-sum-of-squares-of-quadratic-forms?sort=oldest","timestamp":"2014-04-19T02:38:01Z","content_type":null,"content_length":"52732","record_id":"<urn:uuid:0a8ce2d3-696b-4593-aa6c-83b70c059fd3>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
Palisade, NJ Science Tutor

Find a Palisade, NJ Science Tutor

...I've taken the MCATs myself twice scoring above 32 points combined. And I've completed two years of medical school before switching my focus to research and education. Over the past two years, I've been teaching Physics, Chemistry, Biology, Organic Chemistry, and Calculus at the College Level and preparing students for their MCATs, DATs, and GREs.
83 Subjects: including biology, astronomy, algebra 1, SAT math

...Let me be your guide and companion in your next academic journey and you will find the trip far easier and more pleasant than you imagined! I have taught algebra techniques not only as a topic on its own but also in conjunction with the physical sciences and biology since the 1980s. I am an econo...
50 Subjects: including organic chemistry, physics, chemistry, ACT Science

...You will be well prepared to earn a 5 on this AP exam. My sessions will test you with typical test questions you may be asked in class so that nothing will surprise you at exam time. I like to do 90 minute sessions each time in order to have sufficient time to cover a wealth of material. I have taught high school and college chemistry for 30 years.
2 Subjects: including chemistry, organic chemistry

...Further, I will help you understand key test-taking strategies to use on this test. The ACT Reading section includes four passages, each followed by ten questions, to be completed in 35 minutes. The passages are presented in a specific order: Prose Fiction, Social Science, Humanities, Natural Science.
9 Subjects: including ACT Science, SAT math, SAT writing, GMAT

...I have tutored students of all ages. This includes students in elementary school. I have tutored students with ADD, dyslexia and language barriers.
71 Subjects: including biology, calculus, elementary (k-6th), grammar
{"url":"http://www.purplemath.com/palisade_nj_science_tutors.php","timestamp":"2014-04-20T21:29:33Z","content_type":null,"content_length":"23935","record_id":"<urn:uuid:99a6e496-dbed-4acf-9330-07a9e07f2c24>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
NA Digest Friday, July 18, 1996 Volume 96 : Issue 27

Today's Editor: Cleve Moler, The MathWorks, Inc.

Submissions for NA Digest: Mail to na.digest@na-net.ornl.gov.
Information about NA-NET: Mail to na.help@na-net.ornl.gov.
URL for the World Wide Web: http://www.netlib.org/na-net/na_home.html

-------------------------------------------------------

From: Cleve Moler <moler@mathworks.com>
Date: Thu, 11 Jul 1996 17:07:37 -0400 (EDT)
Subject: Bauer Remembers Householder and the Gatlinburg Meetings

[At the Householder Symposium in Switzerland a couple of weeks ago, F. L. (Fritz) Bauer gave an after-banquet talk, remembering the Symposium's namesake, and the early history of the meetings. Here are his notes for the talk. -- Cleve]

Memories of Alston Householder (1904-1993)
F. L. Bauer
Householder Symposium XIII
June 17 - June 21, 1996
Pontresina, Switzerland

How the Gatlinburgs came about

The idea of a Symposium on Matrix Computations came up during the Ann Arbor Summer Session in 1960, when a group of people including Alston Householder, the Todds, Wallace Givens, George Forsythe, Dick Varga, Jim Wilkinson and I happened to be assembled at the Old German's Inn. In due course -- nine months later -- the first Gatlinburg Symposium was held in 1961, April 24-29. I was visiting Oak Ridge National Laboratory just at the time, and Alston asked me to help him with the local organization. I had not been in Gatlinburg before; on my first visit to Oak Ridge in 1957 I had only seen, from Knoxville Airport, the Smoky Mountains in the usual haze. Coming into the mountains, just at the time when blossoming started, made a deep impression on me -- in Mainz, in the Rhine valley, I was used to this, but the common picture a European has of America is so much dominated by wide plains, prairies and buffalos, terribly hot in the summer and freezingly cold in the winter, that Gatlinburg came as a complete surprise. My fascination was also influenced by the event. A group of Numerical Analysts -- or should I more properly say Numerical Algebraists -- from places all over the world came together in the Mountain View Hotel for a genuine Working Conference, very favorably contrasting with the mammoth congresses. This was Alston's big idea, and he convinced SIAM, NSF, AEC, and the Oak Ridge National Laboratory, which we usually called Mr. Carbide, that it was worth trying.

My first steps in the Golden West

Maybe I should report on how I came to know Alston. I met him for the first time in October 1955, in a national meeting on computer use organized by Professor Alwin Walther in Darmstadt, Germany. Alston gave a lecture with the title "Numerical Mathematics from the viewpoint of electronic digital computers". A reprint of this paper, which was published in an obscure German journal ("Nachrichtentechnische Fachberichte"), can be found in the appendix. It gives a short, remarkably clear listing of the essentials of Numerical Mathematics. This was Alston's admirable style. When in 1957 I had a chance to visit the important places in the U.S.A.,
I quite naturally included Oak Ridge in my wish list, next to UCLA-INA (where I met George Forsythe), RAND Corporation, Wayne State University Detroit (where Wallace Givens just held a famous Conference on Matrix Computations), Ann Arbor (where I met John Carr), Argonne National Laboratory (where I met Moll Flanders), University of Illinois Digital Computer Laboratory (where I met Abe Taub), National Bureau of Standards, Office of Naval Research, UNIVAC (where I met Grace Hopper and John Mauchly), IBM, Bell Labs (where I met Richard Hamming), MIT (where I met Howard Aiken). It was a tremendous seven weeks, from August 16 to October 5, 1957. I met many more people than I had planned, among others Richard Courant, Eugene Isaacson, John McPherson, H. F. Buckner, Ky Fan, Gertrude Blanch, Evelyn Frank. I learned to appreciate American hospitality. My transportation over the Atlantic had been arranged for by the Military Air Transport System, and since the Office of Naval Research was sponsoring it, I was even carried with the General's Machine. Quite fittingly, when I arrived on my flight back in Frankfurt, I was greeted by the news that the Russians had started the Sputnik.

I made a few more trips to the U.S.A. In April 1958 I was contacting an ACM group on behalf of our proposals that led to ALGOL 58. In September 1959 I stayed for a while with Alston Householder; in 1960 I met him and a few others at the Ann Arbor Summer Session. In turn, Alston and his wife Belle visited us in Mainz in August 1962 on their way to the Munich IFIP Congress.

The next Gatlinburgs

After the first Gatlinburg Symposium, I took part in several others. The second one was held October 21-26, 1963, shortly after I had accepted a professorship at Munich and had returned to my home town. While the second meeting dealt with approximations, the third and all the others coming dealt again with matrix calculations. Gatlinburg III took place April 13-17, 1964. A photograph showing Jim Wilkinson, Wallace Givens, George Forsythe, Alston Householder, Peter Henrici, and myself has been reprinted in the "Users Guide to MATLAB 4.2"; a copy can be found in the appendix. In May 1964, Alston visited us in Munich, where he received an Honorary Doctorate. He came again in the summer of 1965, when Richard Varga, invited on a Guest Professorship, also stayed for a quarter. This was the time when my scientific collaboration with Alston came to a peak.

There was a longer wait for Gatlinburg IV. In mid-August 1966, I was with Alston at the International Congress of Mathematicians in Moscow, where -- by the way -- we met Sobolew and Kantorovic. We went together with Jim Wilkinson to the home of Tychonoff, who treated us with mulberry liquor in an unforgettable way. Immediately following, there was a matrix computations symposium organized by Rigal in Besancon, a kind of alternative Gatlinburg III. In the first quarter of 1967, I was a guest professor at Stanford University and visited, together with my wife Irene, Alston and Belle on my way home. Gatlinburg IV then took place April 13-19, 1969. Mr. Carbide supported us again in a grand way. The cocktail parties were held at the swimming pool, so that the liquor could easily be dumped into the basin if the police raided the hotel (Tennessee was a dry state!). Then Alston retired from the Oak Ridge National Laboratory, and although he stayed on in the area, accepting a professorship at the University of Tennessee, a new location for a Gatlinburg V had to be found. Richard Varga succeeded in doing so.
The meeting took place June 4-10, 1972 at Los Alamos with the local support of Nick Metropolis. It was again a great success.

Gatlinburg goes Overseas

Once the meetings had moved away from Gatlinburg, it was time to think also of having a Gatlinburg somewhere in Europe. France, which was very attractive, was not a candidate because of the recent Besancon meeting, and England did not work out. But in Munich in 1973 the President of the Bavarian Academy of Sciences was very open-minded and had good contacts to the Stifterverband für die Deutsche Wissenschaft, a sort of German NSF. Thus, I was able to arrange for a Gatlinburg VI at the Kurhotel Enzensberg in Hopfen am See, a small resort place at the foothills of the Bavarian Alps, quite similar to Gatlinburg in Tennessee, but now with snow instead of Dogwoods and Mountain Laurels. The meeting was held December 15-22, 1974. I was remarried then, and my wife Hildegard, a mathematician, took part. It was a very happy time for me, and I was in such a good mood that I even played tricks on Olga Taussky-Todd, explaining to her that the "Kurzentrum" meant a short piece (a "Trum" in Bavarian dialect) of wood.

In 1977, Gene Golub took the meeting back to the U.S.A. Gatlinburg VII was held December 12-16, 1977 in Asilomar. It was a wonderful place and thanks to Gene gave a lasting impression. In 1981 Jim Wilkinson and Leslie Fox moved the meeting to Oxford, England. This was for a while the last Gatlinburg meeting I visited. Around the middle of the seventies I had reoriented the center of my activity towards programming languages and programming methodology, as a consequence of my building up a Computer Science department in Munich. Gatlinburg IX took place in 1984 at Waterloo, Canada, organized by J. A. George; Gatlinburg X in 1987 at Fairfield Glades, U.S.A., organized by Pete Stewart (Alston attended it); Gatlinburg XI in 1990 at Tylösand, Sweden, organized by Åke Björck. All this time, Alston was no longer active, but he was the soul of the Gatlinburgs, which had come to be established in a regular 3-year cycle. Whenever I came to the West Coast, and this was quite regularly the case in the eighties due to a cooperation I had with IBM at Santa Teresa Lab, I visited Alston Householder at Malibu, where he lived, near to his son John and his daughter Jackie, after his wife Belle died in 1975. His home became almost a second home to me. His son John wrote me once that his family considered me to be Alston's best friend. I was very pleased and very proud of this.

Alston gets remarried

Alston came to visit my wife Hildegard and me from time to time in our country house near Munich and in our apartment in town. On one of these occasions Alston met Heidi Vogg, Hildegard's sister. Shortly after, Heidi was run over by a car and severely injured; her recovery took more than a year. During this time, a romance started between Alston and Heidi, and they were married in spring 1984. Heidi was a great help to Alston, whose health was getting weaker, and Alston was a man who could give Heidi stability and warmth.

Alston dies

In June 1993, Alston and Heidi came to the Gatlinburg XII meeting at Lake Arrowhead, which was organized by Gene Golub and T. F. Chan. They enjoyed it tremendously. Three weeks later, on July 4, 1993, we received the terrible message that Alston had died of a heart attack. Although it was not completely unexpected, there was no special indication of an acute danger. Alston was 89 years old.
He had had a full life, with many friends and people who admired him. He was an American in the best sense of the word, liberal and socially conscious. Yet he was a cosmopolitan with a thorough knowledge of foreign languages and cultures. He was a mathematician of distinction. Above all he was a friendly human being. We miss him very much.

The Gatlinburgs go on

The Gatlinburg Symposia on Matrix Calculations have a prehistory that should not be forgotten. The "Conference on Matrix Computations" Wallace Givens organized in 1957 has sometimes been called Gatlinburg 0. But already in 1951 Olga Taussky-Todd (1906-1995) had organized on the UCLA campus a symposium on "Simultaneous Linear Equations and the Determination of Eigenvalues". In those days there were exactly two electronic computers of the modern generation in operation in the U.S.A., but quite a number were soon to follow. Matrix calculations have been a testbed for the development of the computer.

A unique feature of the Gatlinburgs is that there is no formal organization responsible for them; there is, as Alston once put it, not even a copyright on the name. In 1974, Alston, in SIAM Review, discussed and defended the character of the Gatlinburgs as "closed" meetings, limited in attendance -- similar to the Oberwolfach meetings in mathematics. Alston wrote "Admittedly, no committee, however constituted, can hope that its selections will be the best possible". But with a truly international organizing committee, it is possible to come close to this aim. The Gatlinburgs have shown this so far, and as long as they continue to bring the elite of Numerical Algebra together, they will continue.

Meanwhile, I may express the thanks of the people assembled here to the International Committee, chaired by Dianne O'Leary, and to the local organizers of Gatlinburg XIII, Walter Gander and Martin Gutknecht, for their excellent job.

From: Annie Cuyt <cuyt@uia.ua.ac.be>
Date: Wed, 17 Jul 1996 12:30:35 +0200
Subject: Numerical Analysis in the CS Curriculum

There has been a lot of discussion in the past few years on the teaching of Numerical Techniques in the CS curriculum. We refer for instance to the notes of M. Overton, the afternotes of G. Stewart and several NA-digest messages (among which volume 95 issue 46). Personally we have the experience that aspects and notions like `rounding error', `truncation error', `numerical instability' and `ill-conditioning' still do astonish a lot of students when they first encounter them, and not in the least CS students. Therefore we have developed a course on Computer Arithmetic and Numerical Techniques, aiming especially at CS students and training them in the proper use of numerical routines and the correct interpretation of the numerical output, rather than in the development of new numerical techniques (as is required from math students). It is also being taught with success in a one-year postgraduate program on computer science that is attended by students with mixed backgrounds (economists, electronic engineers, ...).

The course starts with an extensive introduction on computer arithmetic, covering the full IEEE standard on floating-point arithmetic as well as lots of material from D. Knuth's volume 2 discussing alternatives. The rest of the course is based on the paradigm described by T. Marchioro, in which the journey from physical problem to computational solution is stressed. Each topic or chapter in the course is structured around 5 basic components:
Each topic or chapter in the course is structured around 5 basic components: a motivating problem the mathematical model describing the problem to be solved a numerical technique developed for its solution the actual implementation or use of a numerical routine, be it in C, Fortran, Mathematica, Matlab or the like the evaluation or quality control of the numerical output. Let us for instance take the chapter on `approximation theory'. A motivating start is the problem of implementing an elementary function on a chip. Depending on the given function, apparently several mathematical techniques are available: Taylor series expansion and the use of Chebyshev polynomials, Pad\'e approximation and continued fraction representation, Fourier series etc. After a brief theoretical discussion several routines implementing the different techniques are looked up. In this respect the Guide to Available Mathematical Software (gams.nist.gov) is very helpful. Fully developed scientific environments like Matlab and Mathematica also offer a lot of ready-to-use software. The choice of quality software among the many routines found on the net is not an easy one for the students. Notions like stability of the algorithm and well-conditioning of the problem play a role here. Finally the correct use of the software, be it in exact rational arithmetic or in traditional floating-point arithmetic, and the evaluation of the quality of the returned numerical output top the chapter off. We plan to write down final course notes during the next academic year. The course will essentially consist of two main parts. One part on computer arithmetic which is important because it is underlying all numeric computations. A second part consisting for the moment of the following chapters: linear systems, nonlinear equation solving, polynomial and spline interpolation, least squares data smoothing, approximation of functions, Fourier series, Monte Carlo methods. The notes will be accompanied by a programming environment in which students can experiment with their numerical implementations in different floating-point sets (base 2, different precisions, different exponent ranges). Annie CUYT Dept Mathematics & Computer Science Tel (32)3/820.24.07 University of Antwerp (UIA) Fax (32)3/820.24.21 Universiteitsplein 1 Secr (32)3/820.24.01 B-2610 Wilrijk-Antwerp (Belgium) Email cuyt@wins.uia.ac.be Brigitte Verdonk Dept. of Math. and Comp. Sc. Tel. +32 3 820.24.03 University of Antwerp (UIA) Fax. +32 3 820.24.21 Universiteitsplein 1 Telex 33646 UIA B B2610 Wilrijk-Antwerp (Belgium) Email: verdonk@wins.uia.ac.be From: Bill Hager <hager@math.ufl.edu> Date: Sun, 14 Jul 1996 13:41:18 -0400 Subject: Applied Numerical Linear Algebra Book NA Instructors: Please remember that Applied Numerical Linear Algebra is no longer available from the original publisher Prentice-Hall, while used book dealers are charging quite a bit for the book. The author can provide new copies of the book at a very reasonable price. For orders or for examination copies: By email: hager@math.ufl.edu By fax: 352-392-6254 By telephone: 352-392-0281 x 244 By ordinary mail: William W. Hager, Department of Mathematics, University of Florida, Gainesville, FL 32611 USA From: Eric Grosse <ehg@research.bell-labs.com> Date: Tue, 16 Jul 1996 18:58:15 +0400 Subject: Email for Bell Labs' Scientific Computing In the recent split of AT&T, the scientific computing group stayed with Bell Labs, now part of Lucent Technologies. 
Our offices and phone numbers are unchanged, but email moved from research.att.com to:

wmc@research.bell-labs.com Bill Coughran
cowsar@research.bell-labs.com Lawrence Cowsar
freund@research.bell-labs.com Roland Freund
dmg@research.bell-labs.com David Gay
ehg@research.bell-labs.com Eric Grosse
lck@research.bell-labs.com Linda Kaufman
mhw@research.bell-labs.com Margaret Wright
netlib@research.bell-labs.com netlib

with web pages now at http://cm.bell-labs.com/who/

From: Chris Luchini <luchini@elroy.jpl.nasa.gov>
Date: Thu, 18 Jul 1996 14:16:56 -0700
Subject: Least Squares Fits of Spherical Harmonics

I need to find a highly efficient method to fit N'th order spherical harmonics to the sum of an initial spherical harmonic series, and a set of 3-vectors, probably about 500 or so. Anyone have a source for libraries that might contain code that would be useful for this problem?

From: Dan Katz and Tom Cwik <emstaff@emlib.jpl.nasa.gov>
Date: Mon, 15 Jul 96 10:16:13 PDT
Subject: Web Site for the Electromagnetics Community

Members of the electromagnetics community,

We are pleased to announce a new web site for the electromagnetics community. This site is an update of the original FTP library EMLIB, and is now located at

This site has been created for the free distribution of electromagnetics software and related information. This related information includes relevant conference information, a list of other EM sites, and a user-defined searchable directory of people working in the EM field. Feel free to explore the information, and be sure to add yourself to the database if you wish. The database currently has very few entries, as this is the first public announcement of this web site.

Dan Katz and Tom Cwik (emstaff@emlib.jpl.nasa.gov)

From: Anshul Gupta <anshul@watson.ibm.com>
Date: Mon, 15 Jul 1996 18:45:34 -0400
Subject: Sparse Matrix Ordering and Graph Partitioning Software

We are glad to announce the availability of "WGPP: Watson Graph Partitioning (and sparse matrix ordering) Package." WGPP is a suite of routines for fast generation of graph partitions with low edge-cuts and for generating robust fill-reducing orderings of sparse matrices arising in various applications ranging from finite-element analysis to linear programming. The manual, the software, and the related papers are available via anonymous FTP from the site

Graph partitioning is an important problem with extensive application in scientific computing, optimization, VLSI design, and task partitioning for parallel processing. The graph partitioning problem, in its most general form, requires dividing the set of nodes of a weighted graph into disjoint subsets or partitions such that the sum of weights of nodes in each subset is nearly the same (within a user supplied tolerance) and the total weight of all the edges connecting nodes in different partitions is minimized. WGPP contains heuristics that significantly improve partitioning speed and, for a small number of parts, also partitioning quality over state-of-the-art graph partitioning packages.

An important application of graph partitioning is in computing fill-reducing orderings of sparse matrices for solving large sparse systems of linear equations. WGPP ordering routines generate sparse matrix orderings that produce much less fill (on average) upon factorization than the conventional minimum-degree based ordering algorithms.
Although slower than minimum degree for finite-element matrices, WGPP is significantly faster and more consistent than minimum-degree based orderings for sparse matrices arising in linear programming problems solved using interior-point methods. The web site http://www.research.ibm.com/osl/wgppnews.html briefly describes the use of WGPP in conjunction with IBM Optimization Subroutine Library (OSL) and its run time advantage. WGPP can be seamlessly linked with OSL, but with the aid of the user interface described in the manual, it can be used with any interior-point code.

Anshul Gupta
Rm. 31-216, IBM Watson Research Center, PO Box 218, Yorktown Heights, NY 10598
Tel: 914-945-1450; Fax: 914-945-3434

From: Andrew Ilin <ilin@hpc.uh.edu>
Date: Thu, 11 Jul 1996 14:30:01 -0500 (CDT)
Subject: Proceedings of a Conference on Spectral Methods

The Proceedings of the Third International Conference on Spectral And High Order Methods (Houston, Texas, 1995) are available for order. The 620 page proceedings includes 55 articles on the following subjects: spectral methods, high order finite differences and finite elements, h-p version of finite elements, spectral elements, multigrid methods and parallel computations. All of the papers were subject to a rigorous refereeing process, which was a principal responsibility of the Scientific Committee: I. Babuska, C. Bernardi, C. Canuto, M. Deville, R. Glowinski, D. Gottlieb, H. O. Kreiss, Y. Maday, J. T. Oden, A. T. Patera, A. Quarteroni, L. R. Scott.

Information about the meeting is available at or from Susan Owens, ICOSAHOM '95 Coordinator, Department of Mathematics, University of Houston, Houston, Texas 77204-3476. FAX: 713-743-3505, Phone: 713-743-8688, E-mail: susan@hpc.uh.edu

From: David Brown <dlb@c3serve.c3.lanl.gov>
Date: Thu, 11 Jul 1996 15:27:04 -0600 (MDT)
Subject: Overset Grids Symposium

3rd Symposium on
Center for Nonlinear Studies
Los Alamos National Laboratory
Los Alamos, New Mexico
November 18-21, 1996
Symposium Web Page: http://www.c3.lanl.gov/OGS

The method of Overset grids (or overlapping grids) was developed in the early 1980s for the simulation of continuum mechanics problems involving complex geometry using finite difference or finite volume methods applied to classical partial differential equations. The method has been used successfully for problems such as the simulation of fluid flow around the U.S. space shuttle, in the simulation of flow around ships, and for internal combustion applications. For more information about this symposium, please see the Web site listed above, or contact us by email at ogSymposium@lanl.gov.

From: Siamak Amini <Siamak.Amini@math.tamu.edu>
Date: Thu, 11 Jul 1996 17:05:34 -0500 (CDT)
Subject: Boundary Integral Methods Conference

An IMA Conference on
Place: The University of SALFORD, Manchester, UK.
Time: 15th-18th September 1997.

ANNOUNCEMENT and CALL for PAPERS

The meeting will provide a forum for the exchange of ideas between academic and industrial researchers in different disciplines whose common interest is boundary integral equations. As well as discussing recent developments in the theory and numerical analysis of boundary integral methods, the conference will highlight many new applications, for example in direct and inverse scattering, moving boundaries, time dependent and nonlinear problems. Fast solution methods such as preconditioned iterative schemes, use of multipole and wavelet expansions and panel clustering techniques will also be discussed.
CONFIRMED INVITED SPEAKERS: J R Blake (Birmingham, UK), D B Ingham (Leeds, UK), W Hackbusch (Kiel, Germany), G C Hsiao (Newark, USA), R Kress (Gottingen, Germany), C Schawb (Zurich, Switzerland).

Organising Committee: Chair: Sia Amini (Salford), Simon Chandler-Wilde (Brunel), Ke Chen (Liverpool), Ivan Graham (Bath), Paul Martin

Interested researchers are invited to contact:
Mrs Pamela Bye, Conference Officer
Catherine Richards House
16 Nelson Street
Essex SS1 1EF
email: imarch@v-e.anglia.ac.uk
or email Sia Amini at: s.amini@mcs.salford.ac.uk

From: David Keyes <keyes@cs.odu.edu>
Date: Fri, 12 Jul 1996 08:39:48 -0400
Subject: ICASE/LaRC Industry Roundtable

ICASE/LaRC 2nd INDUSTRY ROUNDTABLE
Williamsburg Hospitality House, Williamsburg, Virginia
October 7-9, 1996

We are pleased to announce that the Institute for Computer Applications in Science and Engineering (ICASE) and NASA Langley Research Center (LaRC) will conduct a second Industry Roundtable at the Williamsburg Hospitality House, Williamsburg, Virginia, October 7-9, 1996. The objectives of the Roundtable are: to expose government and academic research scientists to industrial research agendas, and to acquaint industry with the capabilities and technology available at ICASE/LaRC and academic partners of ICASE. Industry participants will be invited to discuss their future research needs in semi-formal presentations and informal around-the-table discussions. These will be collected into a report by the session chairs, who may recommend workshops and specific projects for ICASE/LaRC-industry collaboration.

Nineteen sessions in three parallel tracks are scheduled. Each session will consist of four presentations of 30 minutes each and an open forum of approximately one hour. The technical sessions and session chairs are:

Computational Materials Science - N. Chandra, Florida State University
Computational Structures - Jerry Housner, NASA Langley Research Center
Technology Transfer Issues - Dimitri Mavriplis, ICASE
Computational Electromagnetics - R. Nicolaides, Carnegie Mellon University
Materials Modeling - Ivatury Raju, NASA Langley Research Center
Structural Acoustics - Richard Silcox, NASA Langley Research Center
Visualization - Thomas Crockett, ICASE
Workstation Cluster Computing - David Keyes, ICASE & Old Dominion University
Parallel Solvers in Industrial Applications - Alex Pothen, ICASE & Old Dominion
Hardware/Software Interaction - Arun Somani, University of Washington, Seattle
Software Reliability and Testability - Kishor Trivedi, Duke University
Automotive Research Issues - Roger Arndt, National Science Foundation
Aerothermodynamics - Gregory Buck, NASA Langley Research Center
Laminar Flow Control - Ronald Joslin, NASA Langley Research Center
Acoustics - Michele Macaraeg, NASA Langley Research Center
General Aviation - Mujeeb Malik, High Technology Corporation
Aircraft Integration - Len Sakell, Air Force Office of Scientific Research
Turbulence and Combustion: Industrial Applications and Technology Transfer - Munir Sindir, Rocketdyne
Active Flow Control - Richard Wlezien, NASA Langley Research Center

There will also be KEYNOTE TALKS on issues affecting science and engineering:

"Aeronautics Research and Technology at the Crossroads" - Robert E. Whitehead, Associate Administrator for Aeronautics, NASA Headquarters
"Change: Managing Your Way Through It!" - William Ballhaus, Vice President, Science and Engineering, Lockheed Martin Corporation

and a BANQUET TALK:

"The Frontiers of the Responsibly Imaginable in Aeronautics" - Dennis Bushnell, Senior Scientist, NASA LaRC

Attendance will be limited by space considerations. There is no registration fee. A pre-registration social will be held from 7:00 - 9:00 p.m. on Sunday, October 6, 1996 at the hotel. The banquet dinner will be held on Monday, October 7. Further information, electronic registration, and the proceedings of the first ICASE/LaRC Industry Roundtable are available under

The detailed agenda and hotel registration forms will be sent out in early August. For questions, please contact: Emily Todd, Conference Manager, ICASE, Mail Stop 132C, NASA Langley Research Center, Hampton, VA 23681-0001. Telephone: (804) 864-2175; FAX: (804) 864-6134; e-mail: emily@icase.edu.

From: Trini Flores <flores@siam.org>
Date: Tue, 16 Jul 96 09:18:59 EST
Subject: Conference on Applications of Dynamical Systems

4th SIAM Conference on Applications of Dynamical Systems
May 19-22, 1997
Snowbird Ski and Summer Resort, Snowbird, Utah

The Call for Papers for the conference is now available on the World Wide Web at

Co-organizers: Mary Silber, Northwestern University; Steven H. Strogatz, Cornell University

For additional information, contact SIAM at E-mail: meetings@siam.org, Tel. 215-382-9800, Fax: 215-386-7999

From: Chuck Gartland <gartland@mcs.kent.edu>
Date: Tue, 16 Jul 1996 23:34:53 -0400 (EDT)
Subject: Materials Studies Workshop

Applied Mathematics Workshop for Materials Studies and Industrial Applications
October 24-26, 1996
Penn State Scanticon Conference Center Hotel
Penn State University, University Park, Pennsylvania USA

An interdisciplinary conference designed to bring together research and applications focusing on
- liquid crystals
- ferroelectric ceramics
- piezocomposites

The conference will deal with experimental, modeling, computational, analytic, and industrial problems and with mathematical methods arising in the study of such solid and liquid materials.

Partial List of Speakers: Marco Avellaneda, Jack Kelly, Bruce Pitman, Gerhard Barsch, Armen Khachaturyan, Karin Rabe, John Board, David Kinderlehrer, E. Salje, Pat Cladis, Robert Kohn, A. Saxena, Pierre Deymier, H. Krakauer, Mike Shelley, Weinan E, Oleg Lavrentovich, G. Stanley, Takeshi Egami, Frank Leslie, Luc Tartar, Greg Forest, Fang-Hua Lin, Salvatore Torquato, Chuck Gartland, Mitchell Luskin, Lev Truskinovsky, Sharon Glotzer, Robert Meyer, Epifanio Virga, Ken Golden, David Muraki, Qi Wang, Dorian Hatch, Peter Palffy-Muhoray, Claudio Zannoni, Diane Henderson, George Papanicolaou, Richard James, Jay Patel

Organizing Committee: Leonid Berlyand, Greg Forest, Eugene Wayne, M. Carme Calderer, Chuck Gartland, Wenwu Cao, Peter Palffy-Muhoray

Important Dates:
August 20, 1996: Workshop pre-registration deadline for special fee
October 20, 1996: Deadline for cancellation refunds

WWW: http://www.math.psu.edu/mcc/imm.html
e-mail: imm-workshop@math.psu.edu

From: Karsten Urban <urban@igpm>
Date: Fri, 12 Jul 1996 08:27:41 +0100
Subject: Position at the RWTH Aachen

Position at the RWTH Aachen available

There is a position as 'Wissenschaftlicher Mitarbeiter' (BAT IIa/2) available at the Institut fuer Geometrie und Praktische Mathematik of the RWTH Aachen within the project 'Reduzierte Modellierung und Simulation von Vielstoffprozessen mit Multiskalen-Verfahren' supported by the Volkswagen-Stiftung. The possibility for preparing a Ph.D. thesis is given.
The project is supervised by Professors Marquardt (Lehrstuhl fuer Prozesstechnik, RWTH Aachen) and Dahmen (Institut fuer Geometrie und Praktische Mathematik, RWTH Aachen). The reduction of the model plays a crucial role for the efficient numerical simulation of complex mixtures arising in many economically significant processes in chemical engineering and petrochemistry. The part of the project at the Institut fuer Geometrie und Praktische Mathematik is concerned with the study of multiscale and wavelet techniques for the reduction of the system as well as for the numerical solution of the nonlinear differential-algebraic reaction equations. Further information on the project (in German) can be found under

Requirements include a diploma (masters) in mathematics and good knowledge of numerical analysis. Experience in programming (C++), basic knowledge in natural science and/or of wavelets are appreciated. The RWTH Aachen is interested in having a high quota of women in research and teaching. Qualified female scientists are particularly encouraged to apply. Applications of handicapped candidates with equal qualification will be given preference. Applications, containing a curriculum vitae, a copy of the master's thesis and copies of the relevant certificates, should be sent to:

Dr. Karsten Urban
Institut fuer Geometrie und Praktische Mathematik
RWTH Aachen
Templergraben 55
D-52056 Aachen, Germany
Phone: +49 / 241 / 80 63 38
Fax: +49 / 241 / 8888 317
e-Mail: urban@igpm.rwth-aachen.de

From: Jens Burmeister <jb@numerik.uni-kiel.de>
Date: Tue, 16 Jul 1996 12:42:01 +0200 (MET DST)
Subject: Position at University of Kiel

Technische Fakultaet der Christian-Albrechts-Universitaet zu Kiel

At the Faculty of Engineering (Technische Fakultaet) of the Christian-Albrechts-Universitaet zu Kiel, in cooperation with the Mathematisches Seminar, a C3 professorship in Discrete Optimization is to be filled immediately. Applicants should have a distinguished research record in one or more of the following areas: combinatorial optimization, graph theory, algorithms and efficient data structures, or coding theory. A specific orientation toward applications is expected. Besides representing Discrete Optimization in research and teaching, an appropriate share of the basic teaching of students of mathematics and other subjects (in particular computer science) and a commitment to building up the Technomathematics program are expected. Applicants must have completed a Habilitation or show equivalent achievements.

The university seeks to increase the proportion of women scientists and therefore encourages suitable women to apply. Severely disabled applicants with appropriate qualifications will be given preference. Women with equal aptitude, qualifications and professional achievements will be considered with priority.

Applications with the usual documents (including a brief statement of research plans) should be sent by 31 August 1996 to:

Dekan der Technischen Fakultaet
der Christian-Albrechts-Universitaet zu Kiel
Kaiserstrasse 2
24143 Kiel

From: Wayne Joubert <wdj@c3serve.c3.lanl.gov>
Date: Tue, 16 Jul 1996 14:28:22 -0600 (MDT)
Subject: Graduate Student Positions at Los Alamos

Graduate Student Research Assistants
Los Alamos National Laboratory

The Scientific Computing Group at Los Alamos National Laboratory is currently seeking highly motivated graduate students to participate in the Graduate Research Assistant program.
Students with experience in any or all of the following categories are encouraged to apply:

- Parallel Computer Programming
- Numerical Linear Algebra
- Parallel Applications Development
- Oil Reservoir Simulation and Geostatistics

Experience with Fortran 90, HPF, PVM, MPI and assembly languages on parallel machines such as the Cray T3D, Connection Machine CM-5, IBM SP-2, SGI Power Challenge and workstation clusters is desirable. Experience with iterative linear solver methods such as conjugate gradient methods and incomplete Cholesky preconditioners is also desirable. A minimum GPA of 2.5 is required. Appointments can range from 3 to 12 months in duration. Interested individuals are encouraged to contact Wayne Joubert for more information:

Wayne Joubert
Los Alamos National Laboratory
Group CIC-3, MS B-256
Los Alamos, NM 87545
EMAIL: wdj@lanl.gov
FAX: (505) 667-1126

Los Alamos is an equal-opportunity employer.

From: Vladik Kreinovich <Vladik.Kreinovich@laforia.ibp.fr>
Date: Fri, 12 Jul 1996 11:22:40 +0200
Subject: Contents, Reliable Computing

Reliable Computing. - 1996. - N 2 (3). - 108 p.

Preface, 211
Robust algorithms that locate local extrema of a function of one variable from interval measurement results: A remark (Christoph Eick and Karen Villaverde), 213
Fast error estimates for indirect measurements: applications to pavement engineering (Carlos Ferregut, Soheil Nazarian, Krishnamohan Vennalaganti, Ching-Chuan Chang, and Vladik Kreinovich), 219
Newton's constant of gravitation and verified numerical quadrature (Oliver Holzmann, Bruno Lang, and Holger Sch\"utt), 229
Two adaptive Gauss-Legendre type algorithms for the verified computation of definite integrals (Walter Kr\"amer and Stefan Wedner), 241
A quadratic-time algorithm for smoothing interval functions (Vladik Kreinovich and Karen Villaverde), 255
Optimal interval enclosures for fractionally-linear functions, and their application to intelligent control (Robert N. Lea, Vladik Kreinovich, Raul Trejo), 265
If we measure a number, we get an interval. What if we measure a function or an operator? (Joe Lorkowski and Vladik Kreinovich), 287
New slope methods for sharper interval functions and a note on Fischer's acceleration method (Jo\~ao B. Oliveira), 299
Ordering events: Intervals are sufficient, more general sets are usually not necessary (Alessandro Provetti), 321
Applications of Reliable Scientific Computing, 329
Addresses of the Editorial Board members, 332
Information for authors, 334
Contents, 335

From: Petr Prikryl <prikryl@beba.cesnet.cz>
Date: Fri, 12 Jul 1996 15:37:37 +0200 (MET DST)
Subject: Contents, Applications of Mathematics

Volume 41, Number 4

Ivan Hlavacek, Michal Krizek, and Vladislav Pistora: How to recover the gradient of linear elements on nonuniform triangulations
Hans-Goerg Roos and Martin Stynes: Necessary conditions for uniform convergence of finite difference schemes for convection-diffusion problems with exponential and parabolic layers
Ivan Hlavacek, Jan Chleboun: A recovered gradient method applied to smooth optimal shape problems
Jan Lovisek: Singular perturbations in optimal control problem with application to nonlinear structural analysis

End of NA Digest
{"url":"http://www.netlib.org/na-digest-html/96/v96n27.html","timestamp":"2014-04-21T02:01:43Z","content_type":null,"content_length":"47190","record_id":"<urn:uuid:50bc2f38-669a-403e-ba4e-c49dfa0a4080>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
Polynomial Curve Fitting, Matrices

February 4th 2010, 08:20 PM #1

Polynomial Curve Fitting, Matrices

I've done over 50 problems for a Linear Algebra class tonight and I'm sooo burnt out. I'm giving up on these ones.. if you can help me, that would be wonderful. Otherwise, I'm turning in what I have. Strangely enough, it's the odd problems that I already have solutions to that I don't understand. Got the even ones already.

11) In the "Polynomial Curve Fitting" section: The graph of a cubic polynomial function has horizontal tangents at (1, -2) and (-1, 2). Find an equation for the cubic and sketch its graph. Somehow the answer is p(x) = -3x + x^3. Just want to know the steps.

29) Use a system of equations to write the partial fraction decomposition of the rational expression. Then solve the system using matrices.

$\frac{4x^2}{(x+1)^2(x-1)} = \frac{A}{x-1}+\frac{B}{x+1}+\frac{C}{(x+1)^2}$

And the final answer should be:

$\frac{1}{1-x}+\frac{3}{1+x}-\frac{2}{(x+1)^2}$

47) Consider the matrix..

$A=\begin{bmatrix} 1 & k & 2 \\ -3 & 4 & 1 \end{bmatrix}$

If A is the augmented matrix of a system of linear equations, find the value(s) of k such that the system is consistent. (Answer is all real k not equal to -4/3. Just want to know how they got this so I understand it.)

58) True or false: Every matrix has a unique reduced row-echelon form.

Thank you in advance. I appreciate it.

Re: Polynomial Curve Fitting, Matrices

On 11): Any cubic polynomial can be written in the form $f(x)= ax^3+ bx^2+ cx+ d$ and then $f'(x)= 3ax^2+ 2bx+ c$. Saying that it has a horizontal tangent at (1, -2) tells you two things: its value at x = 1 is $f(1)= a(1)^3+ b(1)^2+ c(1)+ d= a+ b+ c+ d= -2$ and its derivative there is $f'(1)= 3a(1)^2+ 2b(1)+ c = 0$. Do the same at x = -1 to get four equations for a, b, c, and d.

On 29): Multiply both sides of the equation by $(x+1)^2(x-1)$ to get

$4x^2= A(x+1)^2+ B(x-1)(x+1)+ C(x-1)= Ax^2+ 2Ax+ A+ Bx^2- B+ Cx- C$
$4x^2= (A+ B)x^2+ (2A+ C)x+ (A- B- C)$

Equating coefficients, A + B = 4, 2A + C = 0, and A - B - C = 0. Those correspond to the matrix equation

$\begin{bmatrix}1 & 1 & 0 \\ 2 & 0 & 1 \\ 1 & -1 & -1\end{bmatrix}\begin{bmatrix}A \\ B \\ C\end{bmatrix}= \begin{bmatrix}4 \\ 0 \\ 0\end{bmatrix}$

On 47): Row reduce the matrix just as you would to solve it. Since there are only two rows, that is simple: add 3 times the first row to the second to get

$\begin{bmatrix} 1 & k & 2 \\ 0 & 4+3k & 7\end{bmatrix}$

That last row corresponds to (4+3k)y = 7. To solve that you must divide by 4+3k, which you cannot do if 4+3k = 0.

On 58): True, of course. You can find the reduced row-echelon form by following a specific formula which, if done correctly, will always give the same result for the same matrix.

February 5th 2010, 04:54 AM #2 Jan 2009
February 6th 2010, 05:13 AM #3 MHF Contributor Apr 2005
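For completeness, here is the four-equation system from problem 11 worked through to the book's answer (my own working, following the setup in the reply above; not part of the original thread):

$f(1) = a + b + c + d = -2$, $f'(1) = 3a + 2b + c = 0$, $f(-1) = -a + b - c + d = 2$, $f'(-1) = 3a - 2b + c = 0$.

Subtracting the derivative equations gives $4b = 0$, so $b = 0$; adding the value equations gives $2b + 2d = 0$, so $d = 0$; adding the derivative equations gives $6a + 2c = 0$, so $c = -3a$; then $f(1) = a + c = -2a = -2$ forces $a = 1$, $c = -3$. Hence $p(x) = x^3 - 3x = -3x + x^3$, as stated.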
{"url":"http://mathhelpforum.com/advanced-algebra/127252-polynomial-curve-fitting-matrices.html","timestamp":"2014-04-16T05:54:48Z","content_type":null,"content_length":"43862","record_id":"<urn:uuid:53c5bbd7-4be4-4b69-bb71-395bd4897fd6>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Parallel universe : Anything that can happen will happen...

According to parallel universe theory (which as far as I know is not any different from string theory/ies), anything that can happen will happen, if not here then in some other, parallel universe.

If "ANYTHING THAT CAN HAPPEN WILL HAPPEN IN ANY OF THE ALTERNATE UNIVERSES" were true, it would mean that either the number of alternate universes is determined by the probability of the occurrence of an event, or the probability of occurrence of an event is determined by the number of alternate universes, because each possibility "HAS TO OCCUR".

But neither of these can be true. If the number of alternate universes were determined by the number of possible outcomes of an event, then, since different events have different numbers of possible outcomes, the number of parallel universes would be different for each and every event, and that does not seem to be true.

If we take the second case, that the number of parallel universes is fixed (or, even if increasing, increasing according to some rule, no matter how complex), and the number of possible outcomes of different probability is determined by them, that would mean each and every event has the same (or at least closely related) probabilities; but for an infinite number of events (maybe hypothetical) to have the same or completely related probabilities for their possible outcomes is surely way out.

So don't all these things simply say that this idea of the "ALTERNATE (or parallel) UNIVERSE" should be ridiculed, or that the idea that "anything that can happen will happen ..." must be abandoned?
{"url":"http://www.physicsforums.com/showpost.php?p=2272526&postcount=1","timestamp":"2014-04-18T23:25:23Z","content_type":null,"content_length":"10445","record_id":"<urn:uuid:1299ee34-03e9-4c37-8e7c-10b989f8eb47>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
17.655 history of 20C mathematics From: Humanist Discussion Group (by way of Willard McCarty willard.mccarty@kcl.ac.uk) Date: Wed Feb 18 2004 - 03:32:37 EST Humanist Discussion Group, Vol. 17, No. 655. Centre for Computing in the Humanities, King's College London Submit to: humanist@princeton.edu Date: Wed, 18 Feb 2004 08:25:51 +0000 From: Anne Mahoney <amahoney@perseus.tufts.edu> Subject: Re: 17.647 history of 20C mathematics? Willard -- No one has yet mentioned "The Honors Class: Hilbert's Problems and their Solvers," by Ben Yandell (A. K. Peters: 2002). It addresses precisely your questions, about the influence of Hilbert's problems on 20th-c. mathematics. Another book on the subject came out at the same time, Jeremy Gray's "The Hilbert Challenge" (Oxford: 2000), but I have not read it. Both are reviewed together in Notices of the AMS for September 2002; see http://www.ams.org/notices/200208/rev-blank.pdf for the review. Hilbert's address was re-printed (in English) on its centenary in Bull. AMS 37 (2000), 407-436; it's available on line at http://www.ams.org. I'm not sure I'd agree that "Hilbert's project ran aground" -- it's just that the safe harbor wasn't where he thought it was. The goal was to systematize all of mathematics. In a very strict sense this isn't possible (that's Gödel's theorem), but this result in itself is already a form of systematization. Model theory, then, is about what you can do in the spaces opened up by the incompleteness theorem. Take a model of Euclid's first 4 postulates, but with a different version of the 5th; what do you get? Is it interesting? Does it correspond to anything in the real world? Yes, in fact -- and there's 20th c. geometry for you. (Well, roughly.) --Anne Mahoney Tufts University This archive was generated by hypermail 2b30 : Fri Mar 26 2004 - 11:19:42 EST
{"url":"http://lists.village.virginia.edu/lists_archive/Humanist/v17/0636.html","timestamp":"2014-04-17T18:24:16Z","content_type":null,"content_length":"6236","record_id":"<urn:uuid:ce8093ef-1b70-4c45-90b1-de4eac8025f4>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
power series representing ∫sinx/x

1. The problem statement, all variables and given/known data

Find the power series representing ∫ sin(x)/x dx.

2. Relevant equations

sin(x) = x - (x^3/3!) + (x^5/5!) - (x^7/7!) + ...

3. The attempt at a solution

I haven't attempted it yet, but was wondering: do you start with the Maclaurin series of sin(x), then divide everything by x, then integrate the entire summation?
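That plan works. As a sketch (my own, not from the thread): dividing the Maclaurin series by x and integrating term by term gives

[tex]\frac{\sin x}{x} = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n+1)!}, \qquad \int_0^x \frac{\sin t}{t}\,dt = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)\,(2n+1)!} = x - \frac{x^3}{18} + \frac{x^5}{600} - \cdots[/tex]

Term-by-term integration is justified here because the power series converges for all x.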
{"url":"http://www.physicsforums.com/showthread.php?p=3848729","timestamp":"2014-04-19T22:57:26Z","content_type":null,"content_length":"24145","record_id":"<urn:uuid:ad067b76-746d-48dc-809e-b503c2ac574b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
hidden mistake

Just to expand on what other people have said, rules like "the limit of the difference is the difference of the limits" only apply when both limits exist. So it is not true that

[tex]\lim_{h\to 0}\frac{f(x+h)g(x+h)-f(x)g(x)}{h}= \lim_{h\to 0}\frac{f(x+h)g(x+h)}{h} -\lim_{h\to 0}\frac{f(x)g(x)}{h} = \infty - \infty[/tex]

(the last equality is assuming neither f nor g is 0 or has a 0 limit at x). Likewise, splitting up limits like that only works for addition, multiplication and division when the limits each exist. The limit of the denominator also can't be 0 in the case of division.
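A concrete instance of the point above (my own example, not from the post): each piece diverges, yet the combined limit exists:

[tex]\lim_{h\to 0^+}\left(\frac{1+h}{h}-\frac{1}{h}\right)=\lim_{h\to 0^+}\frac{(1+h)-1}{h}=1, \qquad \text{although} \qquad \lim_{h\to 0^+}\frac{1+h}{h}=\lim_{h\to 0^+}\frac{1}{h}=\infty.[/tex]

This is exactly why the usual product-rule derivation regroups the difference quotient (adding and subtracting a term like $f(x+h)g(x)$) before any limits are taken.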
{"url":"http://www.physicsforums.com/showthread.php?p=3123604","timestamp":"2014-04-19T07:31:06Z","content_type":null,"content_length":"29913","record_id":"<urn:uuid:e4573771-a48a-4347-9210-2d5f0908336b>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
Talk:Om 2008 Sudoku

From Openmoko

Normally, a Sudoku puzzle guarantees that there is only one solution. It seems to me that this program does not make this guarantee.
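Not part of the wiki page, but for reference: uniqueness is cheap to test with a backtracking solver that counts completions and stops as soon as it finds a second one. A minimal C sketch (all names made up); a generated puzzle is proper exactly when count_solutions(grid, 0) returns 1:

/* grid[81]: 0 = empty cell, 1..9 = given digit. */
static int fits(const int g[81], int pos, int v)
{
    int r = pos / 9, c = pos % 9;
    int br = (r / 3) * 3, bc = (c / 3) * 3;
    for (int i = 0; i < 9; i++)
        if (g[r * 9 + i] == v || g[i * 9 + c] == v)
            return 0;                      /* row or column clash */
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            if (g[(br + i) * 9 + bc + j] == v)
                return 0;                  /* 3x3 box clash */
    return 1;
}

/* Returns 0, 1, or >=2 completions; stops searching once two are found. */
static int count_solutions(int g[81], int pos)
{
    while (pos < 81 && g[pos] != 0)
        pos++;                             /* skip cells already filled */
    if (pos == 81)
        return 1;                          /* a complete, valid grid */
    int n = 0;
    for (int v = 1; v <= 9 && n < 2; v++) {
        if (fits(g, pos, v)) {
            g[pos] = v;
            n += count_solutions(g, pos + 1);
            g[pos] = 0;                    /* backtrack */
        }
    }
    return n;
}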
{"url":"http://wiki.openmoko.org/index.php?title=Talk:Om_2008_Sudoku&diff=70691&oldid=50711","timestamp":"2014-04-20T04:15:20Z","content_type":null,"content_length":"14227","record_id":"<urn:uuid:93860804-adc4-4d88-a290-226f5e42aed7>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
Displaying Minimum and Maximum values from input

06-29-2008 #1 Registered User Join Date Jun 2008

Displaying Minimum and Maximum values from input

I am working on a program for a class. Background is that there is a sample of salmon taken, and each is weighed. I am supposed to ask the user to input the number of salmon in the sample, then enter each of their weights. I then need to display the total, average, minimum, and maximum weights. I have figured out the total and average, but can't quite get the minimum and maximum. I understand that I need to use an if statement to see if the value entered is less than what is currently the minimum, and if it is, change minimum to that new value. Also the same thing for maximum. I just can't quite figure out how to do that within my While loop. Here is my code:

#include <stdio.h>
#include <stdlib.h>

int main( void )
float weight; /*Define Variables*/
int counter;
int num_salmon;
float tot_weight;
float avg_weight;
float min_weight=50000;
float max_weight=-1;
{
printf( "How many salmon are in the sample?: " ); /*Get total from user*/
scanf( "%d", &num_salmon); /*Read the number*/
printf ("Number of salmon entered is: %d\n", num_salmon);
/*Get the weight of each fish and add them to the total.*/
tot_weight = 0;
counter = 0;
while (counter < num_salmon)
{
printf("Enter the weight of a fish:\n");
scanf("%f", &weight);
counter = counter + 1;
tot_weight= tot_weight + weight;
if (weight<min_weight);
{
min_weight=weight;
else if (weight>min_weight);
{
min_weight=min_weight;
if (weight>max_weight);
{
max_weight=weight;
else if (weight>max_weight);
{
max_weight=max_weight;
printf ("All %d fish entered.\n", counter );
printf ("Total weight of all salmon is %f pounds.\n", tot_weight);
printf ("The average weight of the sampled salmon is %f pounds.\n", avg_weight);
return EXIT_SUCCESS;

Can anyone help me straighten out the issue here? I would greatly appreciate it.

Quote: I am working on a program for a class. [...] Here is my code:

#include <stdio.h>
#include <stdlib.h>

int main( void )
float weight; /*Define Variables*/
int counter;
int num_salmon;
float tot_weight;
float avg_weight;
float min_weight=50000;
float max_weight=-1;
{ //WTH is this doing here?? Lose it!
printf( "How many salmon are in the sample?: " ); /*Get total from user*/
scanf( "%d", &num_salmon); /*Read the number*/
printf ("Number of salmon entered is: %d\n", num_salmon);
{ //Another curly brace to lose! We don't just stick in a curly brace, pell mell!
/*Get the weight of each fish and add them to the total.*/
tot_weight = 0;
counter = 0;
while (counter < num_salmon)
{
printf("Enter the weight of a fish:\n");
scanf("%f", &weight);
counter = counter + 1;
tot_weight= tot_weight + weight;
if (weight<min_weight); //this is right, keep it
{
min_weight=weight;
else if (weight>min_weight); //this is a frankenstein of logic - lose it
{
min_weight=min_weight; //C'mon!! You had to know this was crap
if (weight>max_weight);
{
max_weight=weight;
else if (weight>max_weight); //OMG, another Dr. Frankenstein! Lose this.
{
max_weight=max_weight;
printf ("All %d fish entered.\n", counter );
printf ("Total weight of all salmon is %f pounds.\n", tot_weight);
printf ("The average weight of the sampled salmon is %f pounds.\n", avg_weight);
return EXIT_SUCCESS;

Can anyone help me straighten out the issue here? I would greatly appreciate it.

Your indentation style is not as helpful as it should be. Consider the three french braces I've highlighted in blue, above. What does each of them signal to a reader of your code? What block of code is being ended, by each of them? You might remember because you just posted this up, but a year from now, you surely won't. Neither will any reader (like me), who you want to have look at your code. Contrast that with the clarity of the following:

//this is K&R style, which I prefer
while( i < num_fishes) {
   if(fish_weighs < min_weight) {
      min_weight = fish_weight;
   } //see how this brace lines up perfect with the "if" above it? It
   //literally guides your eye right to the block of code it ends.
} //ditto. You can't help but see the relationship between the start of
//of the while loop, and this closing french brace.

//this is Student or Uni style, which is also excellent. All the notes from above,
//apply here, as well.
for(i = 0; i < num_fishes; i++)
   if(fish_weighs < min_weight)
      min_weight = fish_weight;

The example above is a simplified one, but I believe it shows the point. I like your if statement style, in your code, but it is non-standard. Quite nice though. I don't like the style you used with the closing braces in blue, however. It's a style that will come around and bite you on the butt, before long. What I mean is, you won't see an error in logic or syntax, that you would have seen, had you used a better style of indentation in your code. In fact, I believe it bit you right in this program.

06-29-2008 #2 Registered User Join Date Sep 2006
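For anyone landing on this thread later, here is one minimal corrected version of the loop (my own sketch, not from either poster): the semicolons after the if conditions and the stray braces are gone, the redundant else branches are dropped, and the avg_weight computation, missing from the original, is added.

while (counter < num_salmon)
{
    printf("Enter the weight of a fish:\n");
    scanf("%f", &weight);
    counter = counter + 1;
    tot_weight = tot_weight + weight;
    if (weight < min_weight)      /* no semicolon after the condition */
    {
        min_weight = weight;      /* new lightest fish seen so far */
    }
    if (weight > max_weight)
    {
        max_weight = weight;      /* new heaviest fish seen so far */
    }
}
avg_weight = tot_weight / num_salmon;
printf("Minimum weight: %f pounds.\n", min_weight);
printf("Maximum weight: %f pounds.\n", max_weight);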
{"url":"http://cboard.cprogramming.com/c-programming/104613-displaying-minimum-maximum-values-input.html","timestamp":"2014-04-18T04:46:19Z","content_type":null,"content_length":"50174","record_id":"<urn:uuid:8500983a-797e-4a6a-ac29-5b555b3fd3cc>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
Light bulb

July 31st 2013, 03:35 AM #1 Jul 2012

Light bulb

A room has two lamps that use bulbs of type A and B, respectively. The lifetime, X, of any particular bulb of a particular type is a random variable, independent of everything else, with the following PDF:

for type-A bulbs: fX(x) = e^(-x), if x >= 0; 0, otherwise;
for type-B bulbs: fX(x) = 3e^(-3x), if x >= 0; 0, otherwise.

Both lamps are lit at time zero. Whenever a bulb is burned out it is immediately replaced by a new bulb.

(a) What is the expected value of the number of type-B bulb failures until time t? The time is infinity? how is this possible?
(b) What is the PDF of the time until the first failure of either bulb type? lamdatotal = lamda1 + lamda2; 4te^(-4t)?
(c) Find the expected value and variance of the time until the third failure of a type-B bulb.
(d) Suppose that a type-A bulb has just failed. How long do we expect to wait until a subsequent type-B bulb failure?

Re: Light bulb

(a) Expected number of lightbulb failures would be infinity as well if t is allowed to range to infinity (assuming you have an infinite number of bulbs). I suspect they mean a general "t", and this is just a Poisson process where you are expected to find E[N(t)], where N(t) is the number of bulbs busted by time "t".
(b) Not. . . quite. You are looking for P(A or B < t), which is simply 1-P(A and B > t). Using independence of the processes you should be able to go from here (by relating the CDF to the PDF).
(c) Use (a) with t=3.
(d) Use independence and memoryless property of an exponential rv.

Re: Light bulb

a) E[x]=3t
b) 1-(e^-x + 3e^-3x)?
c) This is for time? not number of arrivals? is it just E[x]=3/3 and V[x]=1/3
d) E[x]=1/3

Re: Light bulb

On (b), P(A and B > t) = P(A > t)P(B > t). There shouldn't be addition anywhere in there.
On (c), you are correct - I read that wrong. You can define a new random variable Z = X1+X2+X3, which will give the sum of the failure times of the first three bulbs (i.e. time until 3rd bulb fails). Using properties of the sum of exponentials (which is another distribution you should know) you should be able to answer this quite quickly (assuming you didn't do that to arrive at the answer above).

Re: Light bulb

b) 1-(e^-x * 3e^-3x) = 1-4e^-4x?? what's with the minus 1? 1-P(a and b) = p(no fail?)
c) I used the formula E[x] = k/lamda and V[x] = k/lamda^2, where k = #arrivals? is this okay to do that?
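Pulling the replies together (a summary sketch using standard Poisson-process facts; this was not posted in the original thread):

(a) Type-B failures form a Poisson process of rate 3, so E[N_B(t)] = 3t.
(b) P(min(T_A, T_B) <= t) = 1 - P(T_A > t)P(T_B > t) = 1 - e^(-t) e^(-3t) = 1 - e^(-4t), so the PDF is 4e^(-4t) for t >= 0: an exponential with rate 4, not 4te^(-4t).
(c) Z = X1 + X2 + X3 is Erlang with k = 3, lambda = 3, giving E[Z] = k/lambda = 1 and V[Z] = k/lambda^2 = 1/3; the last poster's formulas are right once the variance uses lambda squared.
(d) By memorylessness of the type-B exponential, the expected wait after a type-A failure is 1/3.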
{"url":"http://mathhelpforum.com/advanced-statistics/220936-light-bulb.html","timestamp":"2014-04-20T07:49:03Z","content_type":null,"content_length":"39485","record_id":"<urn:uuid:4dbac106-f48e-4dc9-9e21-47c8035f7fff>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Surveys and Monographs
2006; 316 pp; hardcover
Volume: 127
ISBN-10: 0-8218-4096-7
ISBN-13: 978-0-8218-4096-2
List Price: US$88
Member Price: US$70.40
Order Code: SURV/127

This book covers one of the most exciting but most difficult topics in the modern theory of dynamical systems: chaotic billiards. In physics, billiard models describe various mechanical processes, molecular dynamics, and optical phenomena. The theory of chaotic billiards has made remarkable progress in the past thirty-five years, but it remains notoriously difficult for the beginner, with main results scattered in hardly accessible research articles. This is the first and so far only book that covers all the fundamental facts about chaotic billiards in a complete and systematic manner. The book contains all the necessary definitions, full proofs of all the main theorems, and many examples and illustrations that help the reader to understand the material. Hundreds of carefully designed exercises allow the reader not only to become familiar with chaotic billiards but to master the subject. The book addresses graduate students and young researchers in physics and mathematics. Prerequisites include standard graduate courses in measure theory, probability, Riemannian geometry, topology, and complex analysis. Some of this material is summarized in the appendices to the book.

Graduate students and research mathematicians interested in mathematical physics, statistical mechanics, dynamical systems, and ergodic theory.

"Although there are many books covering general mathematical billiards there are no comprehensive introductory texts covering chaotic billiards. The book remedies this deficiency and presents the theory of chaotic billiards in a systematic way." -- Zentralblatt Math

"In contrast to many other works on billiards, this book does not hide any technicalities. It works out all the technical details, to the last epsilon. There are numerous exercises which are well chosen and useful. This way the reader can not only understand billiard theory in an active way, but can also develop the skills needed to address new questions. Thus it is useful not only for graduate students, but also for senior mathematicians who would like to start working on billiards. I strongly recommend this book." -- Mathematical Reviews
{"url":"http://ams.org/bookstore?fn=20&arg1=survseries&ikey=SURV-127","timestamp":"2014-04-18T22:40:53Z","content_type":null,"content_length":"16711","record_id":"<urn:uuid:7ecbd107-1748-4510-a742-cab84883aec9>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
My Tech Interviews

18 Feb 10

Clock Hands

Question: How many times a day do the minute and hour hands of a clock overlap?

Answer: Did you think the answer was 24 times? Well if you did, it's time you think again. Let's do some math.

In T hours, the minute hand completes T laps. In the same amount of time, the hour hand completes T/12 laps. The first time the minute and hour hands overlap, the minute hand would have completed 1 lap more than the hour hand. So we have T = T/12 + 1. This implies that the first overlap happens after T = 12/11 hours (~1:05 am). Similarly, the second time they overlap, the minute hand would have completed two more laps than the hour hand. So for N overlaps, we have T = T/12 + N. Since we have 24 hours in a day, we can solve the above equation for N:

24 = 24/12 + N
24 = 2 + N
N = 22

Thus, the hands of a clock overlap 22 times a day. Thus the hands of the clock overlap at 12:00, ~1:05, ~2:10, ~3:15, ~4:20, ~5:25, ~6:30, ~7:35, ~8:40, ~9:45, ~10:50. Note that there is no ~11:55. This becomes 12:00.

Have a better solution? Let us know through our comments section.

16 Responses

1. the answer is 2 because at only 12 am and 12 pm, the condition is true.. otherwise in case of 1:05 when the minute hand is at 1, the hour hand is not at 1 but is slightly ahead of 1, as for every 1 degree rotation of the minute hand, the hour hand rotates 1/12 degrees.. so practically it is never possible for both minute hand and hour hand to overlap each other completely except at 12:00 am and pm.

2. The hands of a clock coincide 11 times in every 12 hours (since between 11 and 1, they coincide only once, i.e., at 12 o'clock). The hands overlap about every 65 minutes, not every 60 minutes. The hands coincide 22 times in a day.

3. I see how important it is to wear a watch at an interview lol.

4. @John It is not 23. You have to think of it as 2 sets of 12 hours. If it is 12 hours, your logic is fine. You will get 12 - 1 (since 12am and 12pm will be the same). Now if you extend it to 2 sets of 12 hours, then you get 11*2=22. If this doesn't convince you, try writing down all the matches.

5. The error in the answer of 23 is that you are accounting for 12 o'clock three times in the day, instead of just twice. In a true day, it should only be accounted for in the very first overlap at 12am. The next time 12am (not 12pm) occurs, it is the next day, and thus does not count.

6. 22 is the correct answer, not 23. The time when t=0 has already been included in the 22 times. Look closer. Oh, as an aside, the t=0 one is the correct one to count because technically 12:00am belongs to the day that follows it, not the day that precedes it. Hence the term ante-meridian, or "before the middle". Likewise, noon is post-meridian and belongs to the second half of the day. Therefore, t=0 should be counted as the first overlap, and the following midnight (or t=24) should not be counted as belonging to this day.

7. I agree with abc, since at least the initial or the final overlap at t=24 or t=0 has to be considered in the day, so it's 23.

8. ABC is right!!!

9. uh each time the minute hand goes around the hour hand moves to a new spot, so they overlap once a minute, 60 times an hour, and 1440 times in a day

10. As it is to be counted from 0 to 24 hours, it starts at 12 o'clock and ends at 12 o'clock. So it's 23 times.

11. The site does not display my formula correctly~~ try it again: 24 * 60 / ((1 + 1/12) * 60) ~= 22

12. After the previous overlap, if the minute hand catches up to the hour hand again, it will move (1+1/12)*60 units. So in 24 hours, the number of overlaps will be calculated: 24 * 60 / ((1 + 1/12) * 60) ~= 22. Anyway, I agree it is 22.

13. The essence of the problem is contained in the word "overlap". As the word implies, it happens every time the minute hand laps the hour in their race around the clock. Since the minute hand completes 24 trips in the time that the hour hand completes only two, the minute hand laps the hour hand 22 times.

14. If we are counting the midnight as the second day, then shouldn't we count the midnight from the previous day as today? In the end we count one of the midnights (either the previous night or this night). So it ends up being 22.

15. Surely it's 21, since the 22nd will actually be midnight of the 2nd day?

16. Excellent analysis!
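A quick numerical cross-check (my own sketch, not from the post or its comments): since overlaps fall at t = 12k/11 hours, a few lines of C list all 22 instants in a day.

#include <stdio.h>

int main(void)
{
    /* Overlaps occur every 12/11 hours; 22 of them fit in 24 hours. */
    for (int k = 0; k < 22; k++) {
        double t = 12.0 * k / 11.0;          /* hours since 12:00 */
        int h = (int)t % 12;
        double m = (t - (int)t) * 60.0;      /* fractional hour -> minutes */
        printf("overlap %2d at %2d:%05.2f\n", k + 1, h == 0 ? 12 : h, m);
    }
    return 0;
}

Running it prints 12:00.00, 1:05.45, 2:10.91, ... and the 12th overlap lands exactly on the second 12:00, matching the list in the answer above.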
{"url":"http://www.mytechinterviews.com/clock-hands","timestamp":"2014-04-18T15:39:16Z","content_type":null,"content_length":"38658","record_id":"<urn:uuid:2d64b344-21bc-496e-b752-1f8d395c5bc6>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help with real world problem involving the calculation of relative weights.

July 4th 2012, 07:20 AM

Need help with real world problem involving the calculation of relative weights.

Hi guys, I have come in search of a solution to a programming task that has had me vexed for two days now. Firstly, let me apologise if I have posted this in the wrong place; I am new here and really a bit out of my depth - so be gentle.

The problem goes like this: I have a container which (in this case) has three objects in it. The container has a volume of 7.65 m³.

The objects are split by volume:
Object 1 has 30% of the volume (2.29 m³)
Object 2 has 30% of the volume (2.29 m³)
Object 3 has 40% of the volume (3.06 m³)

The objects are of different densities and this is represented by a conversion factor (based on the weight / tonne).
Object 1 has a conversion factor of 0.12
Object 2 has a conversion factor of 0.34
Object 3 has a conversion factor of 0.48

Object 1's weight can be calculated in the following way:
(Total Volume / 100) * percentage of volume = volume in container
Volume in container * conversion factor = weight of object 1
(7.65 / 100) * 30 = 2.29 m³
2.29 m³ * 0.12 = 0.27 tonnes

If you do this for all three you get:
Object 1 = 0.27 tonnes
Object 2 = 0.78 tonnes
Object 3 = 1.46 tonnes
Total tonnage = 2.51 tonnes

The weights generated are 'guestimates' based on the volume and conversion factor. Later the container that holds the three objects (or more) is weighed and its ACTUAL weight recorded as a whole. I need to work out what each object is now likely to weigh based on the new weight. I appreciate that this will not result in a 100% accurate result, but it may be more accurate than a figure based on volumes and conversion factors, and any result will only be as accurate as the conversion factor.

I have been programming for 20 years but this problem has got me stumped! (Headbang) Any help would be much appreciated.

July 5th 2012, 07:15 PM

Re: Need help with real world problem involving the calculation of relative weights.

If I understand correctly then:
Object 1 = 0.27/2.51 = 10.76% by weight
Object 2 = 0.78/2.51 = 31.08% by weight
Object 3 = 1.46/2.51 = 58.17% by weight

So I would measure the weight of the box to be ACTUAL and estimate the weight of the objects to be:
Object 1 = 10.76% of ACTUAL (tonnes)
Object 2 = 31.08% of ACTUAL (tonnes)
Object 3 = 58.17% of ACTUAL (tonnes)
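If it helps, the reply's rule fits in a few lines of C (a sketch with made-up names, not the poster's actual code): scale each estimated weight by the ratio of measured total to estimated total.

/* est[]: per-object weight estimates (volume * conversion factor).
 * Rescale them so they sum to the measured total, preserving proportions. */
void rescale_weights(const double est[], int n, double actual_total, double out[])
{
    double est_total = 0.0;
    for (int i = 0; i < n; i++)
        est_total += est[i];
    for (int i = 0; i < n; i++)
        out[i] = actual_total * est[i] / est_total;
}

With est = {0.27, 0.78, 1.46} this reproduces the 10.76% / 31.08% / 58.17% split quoted above.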
{"url":"http://mathhelpforum.com/advanced-applied-math/200626-need-help-real-world-problem-involving-calculation-relative-weights-print.html","timestamp":"2014-04-19T11:00:25Z","content_type":null,"content_length":"6078","record_id":"<urn:uuid:8681bc30-2d82-4ef1-80a1-b773a3a09012>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Periodic function

Not to be confused with periodic mapping, a mapping whose nth iterate is the identity (see periodic point).

In mathematics, a periodic function is a function that repeats its values in regular intervals or periods. The most important examples are the trigonometric functions, which repeat over intervals of 2π radians. Periodic functions are used throughout science to describe oscillations, waves, and other phenomena that exhibit periodicity. Any function which is not periodic is called aperiodic.

A function f is said to be periodic with period P (P being a nonzero constant) if we have

$f(x+P) = f(x)$

for all values of x in the domain. If there exists a least positive^[1] constant P with this property, it is called the fundamental period (also primitive period, basic period, or prime period). A function with period P will repeat on intervals of length P, and these intervals are referred to as periods.

Geometrically, a periodic function can be defined as a function whose graph exhibits translational symmetry. Specifically, a function f is periodic with period P if the graph of f is invariant under translation in the x-direction by a distance of P. This definition of periodicity can be extended to other geometric shapes and patterns, such as periodic tessellations of the plane. A function that is not periodic is called aperiodic.

For example, the sine function is periodic with period 2π, since

$\sin(x + 2\pi) = \sin x$

for all values of x. This function repeats on intervals of length 2π (see the graph to the right).

Everyday examples are seen when the variable is time; for instance the hands of a clock or the phases of the moon show periodic behaviour. Periodic motion is motion in which the position(s) of the system are expressible as periodic functions, all with the same period.

For a function on the real numbers or on the integers, that means that the entire graph can be formed from copies of one particular portion, repeated at regular intervals. A simple example of a periodic function is the function f that gives the "fractional part" of its argument. Its period is 1. In particular, f(0.5) = f(1.5) = f(2.5) = ... = 0.5. The graph of the function f is the sawtooth wave.

The trigonometric functions sine and cosine are common periodic functions, with period 2π (see the figure on the right). The subject of Fourier series investigates the idea that an 'arbitrary' periodic function is a sum of trigonometric functions with matching periods.

According to the definition above, some exotic functions, for example the Dirichlet function, are also periodic; in the case of the Dirichlet function, any nonzero rational number is a period.

If a function f is periodic with period P, then for all x in the domain of f and all integers n, f(x + nP) = f(x). If f(x) is a function with period P, then f(ax+b), where a is a positive constant, is periodic with period P/|a|. For example, f(x) = sin x has period 2π, therefore sin(5x) will have period 2π/5; indeed, sin(5(x + 2π/5)) = sin(5x + 2π) = sin(5x).

Double-periodic functions

A function whose domain is the complex numbers can have two incommensurate periods without being constant. The elliptic functions are such functions. ("Incommensurate" in this context means not real multiples of each other.)

Complex example

Using complex variables we have the common period function:

$e^{ikx} = \cos kx + i\,\sin kx$

As you can see, since the cosine and sine functions are periodic, and the complex exponential above is made up of cosine/sine waves, then the above (actually Euler's formula) has the following property. If L is the period of the function then:

$L = 2\pi/k$

Antiperiodic functions

One common generalization of periodic functions is that of antiperiodic functions. This is a function f such that f(x + P) = −f(x) for all x. (Thus, a P-antiperiodic function is a 2P-periodic function.)

Bloch-periodic functions

A further generalization appears in the context of Bloch waves and Floquet theory, which govern the solution of various periodic differential equations. In this context, the solution (in one dimension) is typically a function of the form:

$f(x+P) = e^{ikP} f(x)$

where k is a real or complex number (the Bloch wavevector or Floquet exponent). Functions of this form are sometimes called Bloch-periodic in this context. A periodic function is the special case k = 0, and an antiperiodic function is the special case k = π/P.

Quotient spaces as domain

In signal processing one encounters the problem that Fourier series represent periodic functions and that Fourier series satisfy convolution theorems (i.e. convolution of Fourier series corresponds to multiplication of the represented periodic functions and vice versa), but periodic functions cannot be convolved with the usual definition, since the involved integrals diverge. A possible way out is to define a periodic function on a bounded but periodic domain. To this end one can use the notion of a quotient space:

$\mathbb{R}/\mathbb{Z} = \{x+\mathbb{Z} : x\in\mathbb{R}\} = \{\{y : y\in\mathbb{R}\land y-x\in\mathbb{Z}\} : x\in\mathbb{R}\}$.

That is, each element in $\mathbb{R}/\mathbb{Z}$ is an equivalence class of real numbers that share the same fractional part. Thus a function like $f : \mathbb{R}/\mathbb{Z}\to\mathbb{R}$ is a representation of a 1-periodic function.

References

1. Ekeland, Ivar (1990). "One". Convexity methods in Hamiltonian mechanics. Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)] 19. Berlin: Springer-Verlag. pp. x+247. ISBN 3-540-50613-6. MR 1051888.
{"url":"http://blekko.com/wiki/Periodic_function?source=672620ff","timestamp":"2014-04-16T07:53:40Z","content_type":null,"content_length":"25362","record_id":"<urn:uuid:a139f069-e991-41de-b733-de320e07cf5d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of geometrical optics
As a mathematical study, geometrical optics emerges as a short-wavelength limit for solutions to hyperbolic partial differential equations. For a less mathematical introduction, please see the main article on geometrical optics.
In this short wavelength limit, it is possible to approximate the solution locally by
$u(t,x) \approx a(t,x)e^{i(k\cdot x - \omega t)}$
where $k, \omega$ satisfy a dispersion relation, and the amplitude $a(t,x)$ varies slowly. More precisely, the leading order solution takes the form
$a_0(t,x) e^{i\varphi(t,x)/\varepsilon}.$
The phase $\varphi(t,x)/\varepsilon$ can be linearized to recover large wavenumber $k := \nabla_x \varphi$ and frequency $\omega := -\partial_t \varphi$. The amplitude satisfies a transport equation. The small parameter $\varepsilon$ enters the scene due to highly oscillatory initial conditions. Thus, when initial conditions oscillate much faster than the coefficients of the differential equation, solutions will be highly oscillatory, and transported along rays. Assuming coefficients in the differential equation are smooth, the rays will be too. In other words, refraction does not take place.
The motivation for this technique comes from studying the typical scenario of light propagation where short wavelength light travels along rays that minimize (more or less) its travel time. Its full application requires tools from microlocal analysis.
A Simple Example
Starting with the wave equation for $(t,x) \in \mathbb{R}\times\mathbb{R}^n$
$L(\partial_t, \nabla_x)\, u := \left(\frac{\partial^2}{\partial t^2} - c(x)^2 \Delta \right)u(t,x) = 0, \;\; u(0,x) = u_0(x), \;\; u_t(0,x) = 0$
one looks for an asymptotic series solution of the form
$u(t,x) \sim a_\varepsilon(t,x)e^{i\varphi(t,x)/\varepsilon} = \sum_{j=0}^\infty i^j \varepsilon^j a_j(t,x)\, e^{i\varphi(t,x)/\varepsilon}$
One may check that
$L(\partial_t,\nabla_x)\left(e^{i\varphi(t,x)/\varepsilon}\, a_\varepsilon(t,x)\right) = e^{i\varphi(t,x)/\varepsilon} \left(\left(\frac{i}{\varepsilon}\right)^2 L(\varphi_t, \nabla_x\varphi)\, a_\varepsilon + \frac{2i}{\varepsilon} V(\partial_t,\nabla_x)\, a_\varepsilon + \frac{i}{\varepsilon}\, a_\varepsilon\, L(\partial_t,\nabla_x)\varphi + L(\partial_t,\nabla_x)\, a_\varepsilon \right)$
with
$V(\partial_t,\nabla_x) := \frac{\partial \varphi}{\partial t} \frac{\partial}{\partial t} - c^2(x)\sum_j \frac{\partial \varphi}{\partial x_j} \frac{\partial}{\partial x_j}$
Plugging the series into this equation, and equating powers of $\varepsilon$, we find that the most singular term $O(\varepsilon^{-2})$ satisfies the eikonal equation (in this case called a dispersion relation),
$0 = L(\varphi_t,\nabla_x\varphi) = (\varphi_t)^2 - c(x)^2(\nabla_x \varphi)^2.$
To order $\varepsilon^{-1}$ we find that the leading order amplitude must satisfy a transport equation
$2V a_0 + (L\varphi)a_0 = 0$
With the definition $k := \nabla_x \varphi$, $\omega := -\varphi_t$, the eikonal equation is precisely the dispersion relation one would get by plugging the plane wave solution $e^{i(k\cdot x - \omega t)}$ into the wave equation.
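As a quick consistency check of that last remark (a worked step added here, assuming constant wavespeed $c$): substituting the plane wave into the wave equation gives
$\left(\frac{\partial^2}{\partial t^2} - c^2\Delta\right)e^{i(k\cdot x - \omega t)} = \left(-\omega^2 + c^2|k|^2\right)e^{i(k\cdot x - \omega t)} = 0 \quad\Longrightarrow\quad \omega^2 = c^2|k|^2,$
which is exactly the eikonal equation $(\varphi_t)^2 = c^2(\nabla_x\varphi)^2$ with $\omega = -\varphi_t$ and $k = \nabla_x\varphi$.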
The value of this more complicated expansion is that plane waves cannot be solutions when the wavespeed $c$ is non-constant. However, one can show that the amplitude $a_0$ and phase $\varphi$ are smooth, so that on a local scale we have plane waves.
To justify this technique, one must show that the remaining terms are small in some sense. This can be done using energy estimates, and an assumption of rapidly oscillating initial conditions. It also must be shown that the series converges in some sense.
{"url":"http://www.reference.com/browse/geometrical+optics","timestamp":"2014-04-17T10:50:50Z","content_type":null,"content_length":"80681","record_id":"<urn:uuid:5fd8c1f7-2e07-40cc-86db-53144d14cc86>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
C++ Programming
Logical operators
The operators and (can also be written as &&) and or (can also be written as ||) allow two or more conditions to be chained together. The and operator checks whether all conditions are true and the or operator checks whether at least one of the conditions is true. Both operators can also be mixed together, in which case the order in which they appear from left to right determines how the checks are performed. Note that && and || are the traditional spellings of these operators; and and or are standard alternative tokens for them.
Both operators are said to short circuit. If a previous and condition is false, later conditions are not checked. If a previous or condition is true, later conditions are not checked. The not (can also be written as !) operator is used to return the inverse of one or more conditions.

condition1 and condition2
condition1 or condition2
not condition

When something should not be true. It is often combined with other conditions. If x > 5 but not x = 10, it would be written:
if ((x > 5) and not (x == 10)) // if (x greater than 5) and ( not (x equal to 10) )
When all conditions must be true. If x must be between 10 and 20:
if (x > 10 and x < 20) // if x greater than 10 and x less than 20
When at least one of the conditions must be true. If x must be equal to 5 or equal to 10 or less than 2:
if (x == 5 or x == 10 or x < 2) // if x equal to 5 or x equal to 10 or x less than 2
When at least one of a group of conditions must be true. If x must be between 10 and 20 or between 30 and 40:
if ((x >= 10 and x <= 20) or (x >= 30 and x <= 40)) // >= -> greater or equal etc...
Things get a bit more tricky with more conditions. The trick is to make sure the parentheses are in the right places to establish the order of evaluation intended. However, when things get this complex, it can often be easier to split up the logic into nested if statements, or put the pieces into bool variables, but it is still useful to be able to work with complex boolean logic.
Parentheses around x > 10 and around x < 20 are implied, as the < operator has a higher precedence than and. First x is compared to 10. If x is greater than 10, x is compared to 20, and if x is also less than 20, the code is executed.

│statement1 │statement2 │and│
│T │T │T │
│T │F │F │
│F │T │F │
│F │F │F │

The logical AND operator, and, compares the left value and the right value. If both statement1 and statement2 are true, then the expression returns TRUE. Otherwise, it returns FALSE.

if ((var1 > var2) and (var2 > var3))
    std::cout << var1 << " is bigger than " << var2 << " and " << var3 << std::endl;

In this snippet, the if statement checks to see if var1 is greater than var2. Then, it checks if var2 is greater than var3. If both hold, it proceeds by telling us that var1 is bigger than both var2 and var3.

│statement1 │statement2 │or│
│T │T │T │
│T │F │T │
│F │T │T │
│F │F │F │

The logical OR operator is represented with or. Like the logical AND operator, it compares statement1 and statement2. If either statement1 or statement2 is true, then the expression is true. The expression is also true if both of the statements are true.

if ((var1 > var2) or (var1 > var3))
    std::cout << var1 << " is either bigger than " << var2 << " or " << var3 << std::endl;

Let's take a look at the previous expression with an OR operator. If var1 is bigger than either var2 or var3 or both of them, the statement in the if expression is executed. Otherwise, the program proceeds with the rest of the code.
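The short-circuit behaviour described above is easy to observe directly. Here is a minimal, self-contained sketch (the variable and function names are illustrative only) showing that the right-hand side of and is skipped when the left side is already false, and the right-hand side of or is skipped when the left side is already true:

#include <iostream>

// Prints a message whenever it is evaluated, so we can observe short-circuiting.
bool noisy_check(int x) {
    std::cout << "noisy_check was called\n";
    return x > 0;
}

int main() {
    int x = -5;
    // Left side is false, so noisy_check is never called for the 'and'.
    if (x > 0 and noisy_check(x))
        std::cout << "both conditions true\n";
    // Left side is true, so noisy_check is never called for the 'or' either.
    if (x < 0 or noisy_check(x))
        std::cout << "at least one condition true\n";
    // Output: only "at least one condition true" -- noisy_check never ran.
}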
The logical NOT operator, not, returns TRUE if the statement being compared is not true. Be careful when you're using the NOT operator, as with any logical operator: not binds more tightly than comparison operators such as >. Therefore, in an expression like not x > 10, the compiler compares whether "not x" is greater than 10. That statement always returns false, no matter what "x" is, because logical expressions only return boolean values (1 and 0).
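To make the precedence pitfall above concrete, this small sketch (illustrative only) contrasts the two parenthesizations:

#include <iostream>

int main() {
    int x = 42;
    // Parsed as (not x) > 10: 'not 42' is false, i.e. 0, and 0 > 10 is always false.
    if (not x > 10)
        std::cout << "never printed, whatever the value of x\n";
    // With explicit parentheses the comparison happens first, as intended.
    if (not (x > 10))
        std::cout << "x is not greater than 10\n";
    else
        std::cout << "x is greater than 10\n"; // printed here, since x == 42
}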
{"url":"http://en.m.wikibooks.org/wiki/C%2B%2B_Programming/Programming_Languages/C%2B%2B/Code/Statements/Variables/Operators/Logical_Operators","timestamp":"2014-04-19T17:03:15Z","content_type":null,"content_length":"28488","record_id":"<urn:uuid:51f26be4-8504-45cd-971e-7e4663b5f2f5>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
Pythagoras' theorem
Pythagoras' theorem demonstrated in the case of the 3, 4, 5 triangle (one having sides in the ratio 3:4:5). It can be seen by inspection that in this case a^2 + b^2 = c^2. Other right-angled triangles, the ratios of whose sides can be expressed using only small integers, are the 5, 12, 13 and 8, 15, 17 triangles. Right: The ancient method of laying out a right angle using a knotted rope was known and used long before the time of Pythagoras.
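A quick arithmetic check of the three triples mentioned (a worked verification, added for illustration): 3^2 + 4^2 = 9 + 16 = 25 = 5^2; 5^2 + 12^2 = 25 + 144 = 169 = 13^2; 8^2 + 15^2 = 64 + 225 = 289 = 17^2.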
{"url":"http://www.daviddarling.info/encyclopedia/P/Pythagoras_theorem.html","timestamp":"2014-04-17T15:28:51Z","content_type":null,"content_length":"6438","record_id":"<urn:uuid:f38da92a-6045-442a-9c9d-218f4020733e>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
KUKA LWR arm actuator
This actuator reads a list of angles for the segments of the LWR arm and applies them as local rotations. It is a subclass of the armature_actuator. Angles are expected in radians.
To install additional components at the tip of the arm using the MORSE Builder API, it is necessary to make the additional component a child of the arm, and to place the component in the correct position with respect to the kuka arm. Example:

kuka_arm = KukaLWR()
kuka_arm.translate(x=0.1850, y=0.2000, z=0.9070)
kuka_arm.rotate(x=1.5708, y=1.5708)
gripper = Gripper()
# Position the gripper with translate()/rotate() as for the arm above, then
# attach it as a child of the arm (the append call is assumed here):
kuka_arm.append(gripper)

When the simulation is started any objects that are children of the KUKA arm will automatically be changed to be children of the last segment of the arm.
• Blender: $MORSE_ROOT/data/robots/kuka_lwr.blend
Unlike other actuators, this one also includes the mesh of the arm (composed of 8 segments) and an armature that controls its movement.
• Python: $MORSE_ROOT/src/morse/actuators/kuka_lwr.py
Local data
There are 7 floating point values, named after the bones in the armature:
• kuka_1: (float) rotation for the first segment. Around Z axis.
• kuka_2: (float) rotation for the second segment. Around Y axis.
• kuka_3: (float) rotation for the third segment. Around Z axis.
• kuka_4: (float) rotation for the fourth segment. Around Y axis.
• kuka_5: (float) rotation for the fifth segment. Around Z axis.
• kuka_6: (float) rotation for the sixth segment. Around Y axis.
• kuka_7: (float) rotation for the seventh segment. Around Z axis.
These names are generated dynamically, so that if there is more than one arm in the scene, there will not be any conflicts.
Configurable parameters
No configurable parameters
Applicable modifiers
No available modifiers
Available services
See the documentation for the armature_actuator. There are also two additional services specific to this armature:
• set_rotation_array: (service) Receives an array indicating the angle to give to each of the segments of the arm. Angles are expected in radians. The length of the array should be equal to 7 or less, where any values not specified will be considered as 0.0. If the parameters exceed the IK limits, the whole request is rejected.
  Parameters: (rotation_array)
    rotation_array: Array of floats
• set_rotation: (service) Makes the indicated segment rotate by the indicated angle. Receives the name of the segment to rotate, and the amount in radians. If the rotation exceeds the IK limits, the request is rejected.
  Parameters: (channel_name, rotation)
    channel_name: Name of the armature bone to rotate (see the list above)
    rotation: Array of 3 floats, with the angles to rotate around X, Y, Z. Note that given the restrictions imposed on the armature, only one of the rotation angles will be used.
A note for developers: The orientation of the bones in the 'kuka_armature' in the Blender file will determine the direction of the rotations. To be consistent with the joint rotations of the real arm, the bones must have the following roll values (in the Bone panel when in Edit Mode):
• kuka_1: 180
• kuka_2: 0
• kuka_3: 0
• kuka_4: 180
• kuka_5: 180
• kuka_6: 0
• kuka_7: 180
This is valid for Blender version 2.59
{"url":"http://www.openrobots.org/morse/doc/latest/user/actuators/kuka_lwr.html","timestamp":"2014-04-21T08:00:59Z","content_type":null,"content_length":"13809","record_id":"<urn:uuid:6a39df60-19ad-4489-a5c2-72a56bb99bf2>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra for College Students - Mark Dugopolski
Algebra for College Students, 5e is part of the latest offerings in the successful Dugopolski series in mathematics. The author’s goal is to explain mathematical concepts to students in a language they can understand. In this book, students and faculty will find short, precise explanations of terms and concepts written in understandable language. The author uses concrete analogies to relate math to everyday experiences. For example, when the author introduces the Commutative Property of Addition, he uses a concrete analogy that “the price of a hamburger plus a Coke is the same as a Coke plus a hamburger”. Given the importance of examples within a math book, the author has paid close attention to the most important details for solving the given topic. Dugopolski includes a double cross-referencing system between the examples and exercise sets, so no matter which one the students start with, they will see the connection to the other. Finally, the author finds it important to not only provide quality, but also a good quantity of exercises and applications. The Dugopolski series is known for providing students and faculty with the most quantity and quality of exercises as compared to any other developmental math series on the market. In completing this revision, Dugopolski feels he has developed the clearest and most concise developmental math series on the market, and he has done so without compromising the essential information every student needs to become successful in future mathematics courses. The book is accompanied by numerous useful supplements, including McGraw-Hill’s online homework management system, MathZone.
Key Features
• An emphasis on real-data applications that involve graphs is a focus of the text. Some exercises have been updated throughout the text to help demonstrate concepts, motivate students, and to give students practice using new skills. Many of the real data exercises contain data obtained from the Internet. An Index of Applications listing applications by subject matter is included at the front of the text.
• Geometry Review Exercises - Located in the appendix, this review section can be used to assist students to remediate their Geometry skills learned in earlier courses.
• Chapter Opener: Chapter openers discuss a real application of algebra corresponding to the topics within a given chapter. The discussion is accompanied by a photograph and, in most cases, by a real-data application graph that helps students visualize algebra and more fully understand the concepts discussed in the chapter. Each chapter opener has a corresponding real data exercise. In addition, each chapter contains a Math at Work feature, which profiles a real person and the mathematics that he or she uses on the job.
• In This Section: Located at the beginning of every section, this feature provides a list of topics that shows what will be covered in the given section. Because the topics correspond to the headings within each section, your students will find it easy to locate and study specific concepts. These topics are now referenced in the end of section exercises.
• Important ideas, such as definitions, rules, summaries, and strategies, are set apart in boxes for quick reference. Color is used to highlight these boxes as well as other important points in the text.
• Student assistance features located in the text: Calculator Close-Ups - Located in the margin, this feature gives your students an idea of how and when to use a graphing calculator.
Some Calculator Close-Ups simply introduce the features of a graphing calculator, while others enhance understanding of algebraic concepts. For this reason, many of the Calculator Close-Ups will benefit even those students who do not use a graphing calculator. Study Tips - Two study tips now precede each exercise set. Helpful Hints are short comments located in the margin that enhance the material in the text, provide another way of approaching a problem, or clear up misconceptions. Now Do Exercises: Linked to the end of section exercises, students are guided from the examples within a section to the end of section exercises where they can master the given topic being studied. Warm-up Exercises: Located at the end of every section, these exercises are a set of ten simple statements that are to be answered true or false. These exercises are designed to provide a smooth transition between the ideas and the exercise sets. They help your students understand that every statement in mathematics is either true or false. They are also good for discussion or group work. Simple Reading & Writing Exercises: Located in every section, these exercises appear in the exercise sets. The exercises are designed to get your students to review the definitions and rules of the section before doing more traditional exercises. For example, your student might be simply asked what properties of equality were discussed in this section. End-of-Section Exercises follow the same order as the textual material and contain exercises that are keyed to examples, as well as numerous exercises that are not keyed to examples. This organization allows the instructor to cover only part of a section if necessary and easily determine which exercises are appropriate to assign. The keyed exercises give your student a place to start practicing and building confidence, whereas the non-keyed exercises are designed to wean your student from following examples in a step-by-step manner. Getting More Involved exercises are designed to encourage writing, discussion, exploration, and cooperative learning. Graphing Calculator Exercises require a graphing calculator and are identified with a graphing calculator logo. Wrap-up: Located at the end of every chapter, the Wrap-Up includes the following - The Chapter Summary lists important concepts along with brief illustrative examples. Enriching Your Mathematical Word Power appears at the end of each chapter and consists of multiple choice questions in which the important terms are to be matched with their meanings. This feature emphasizes the importance of proper terminology. The Review Exercises contain problems that are keyed to the sections of the chapter as well as numerous miscellaneous exercises. The Chapter Test is designed to help your student assess his or her readiness for a test. The Chapter Test has no keyed exercises, thus enabling the student to work independently of the sections. Making Connections Exercises: Located at the end of each chapter, this feature is designed to help your students review and synthesize the new material with ideas from previous chapters, and in some cases, review material necessary for success in the upcoming chapter. Every Making Connections exercise set includes at least one applied exercise that requires ideas from one or more of the previous chapters. Subsection heads are now in the end of section exercise sets, and section heads are now in the Chapter Review Exercises.
References to page numbers on which Strategy Boxes are located have been inserted into the direction lines for the exercises when appropriate. Study tips have been removed from the margins to give the pages a better look. Two study tips now precede each exercise set. McGraw-Hill’s MathZone is a complete, online tutorial and course management system for mathematics and statistics, designed for greater ease of use than any other system available. Instructors can create and share courses and assignments with colleagues and adjuncts in a matter of a few clicks of a mouse. All instructor teaching resources are accessed online, as well as student assignments, questions, e-Professors, online tutoring and video lectures which are directly tied to text specific material. MathZone courses are customized to your textbook, but you can edit questions and algorithms, import your own content, create announcements and due dates for assignments. MathZone has automatic grading and reporting of easy-to-assign algorithmically generated homework, quizzing and testing. Student activity within MathZone is automatically recorded and available to you through a fully integrated grade book that can be downloaded to Excel. Go to www.mathzone.com to learn more.
Table of Contents
1 The Real Numbers: 1.1 Sets 1.2 The Real Numbers 1.3 Operations on the Set of Real Numbers 1.4 Evaluating Expressions 1.5 Properties of the Real Numbers 1.6 Using the Properties
Chapter 1 Wrap-Up • Summary • Enriching Your Mathematical Word Power • Review Exercises • Chapter 1 Test • Critical Thinking
2 Linear Equations and Inequalities in One Variable: 2.1 Linear Equations in One Variable 2.2 Formulas and Functions 2.3 Applications 2.4 Inequalities 2.5 Compound Inequalities 2.6 Absolute Value Equations and Inequalities
Chapter 2 Wrap-Up • Summary • Enriching Your Mathematical Word Power • Review Exercises • Chapter 2 Test • Making Connections: A review of Chapters 1-2 • Critical Thinking
3 Linear Equations and Inequalities in Two Variables: 3.1 Graphing Lines in the Coordinate Plane 3.2 Slope of a Line 3.3 Three Forms for the Equation of a Line 3.4 Linear Inequalities and Their Graphs 3.5 Functions and Relations
Chapter 3 Wrap-Up • Summary • Enriching Your Mathematical Word Power • Review Exercises • Chapter 3 Test • Making Connections: a review of Chapters 1-3 • Critical Thinking
4 Systems of Linear Equations: 4.1 Solving Systems by Graphing and Substitution 4.2 The Addition Method 4.3 Systems of Linear Equations in Three Variables 4.4 Solving Linear Systems Using Matrices 4.5 Determinants and Cramer’s Rule 4.6 Linear Programming
Chapter 4 Wrap-Up • Summary • Enriching Your Mathematical Word Power • Review Exercises • Chapter 4 Test • Making Connections: a review of Chapters 1-4 • Critical Thinking
5 Exponents and Polynomials: 5.1 Integral Exponents and Scientific Notation 5.2 The Power Rules 5.3 Polynomials and Polynomial Functions 5.4 Multiplying Binomials 5.5 Factoring Polynomials 5.6 Factoring ax² + bx + c 5.7 Factoring Strategy 5.8 Solving Equations by Factoring
Chapter 5 Wrap-Up • Summary • Enriching Your Mathematical Word Power • Review Exercises • Chapter 5 Test • Making Connections: a review of Chapters 1-5 • Critical Thinking
6 Rational Expressions and Functions: 6.1 Properties of Rational Expressions and Functions 6.2 Multiplication and Division 6.3 Addition and Subtraction 6.4 Complex Fractions 6.5 Division of Polynomials 6.6 Solving Equations Involving Rational Expressions 6.7 Applications
Chapter 6 Wrap-Up • Summary • Enriching Your Mathematical Word Power • Review Exercises • Chapter 6 Test • Making Connections: a review of Chapters 1-6 • Critical Thinking
7 Radicals and Rational Exponents: 7.1 Radicals 7.2 Rational Exponents 7.3 Adding, Subtracting, and Multiplying Radicals 7.4 Quotients, Powers, and Rationalizing Denominators 7.5 Solving Equations with Radicals and Exponents 7.6 Complex Numbers
Chapter 7 Wrap-Up • Summary • Enriching Your Mathematical Word Power • Review Exercises • Chapter 7 Test • Making Connections: a review of Chapters 1-7 • Critical Thinking
8 Quadratic Equations, Functions, and Inequalities: 8.1 Factoring and Completing the Square 8.2 The Quadratic Formula 8.3 More on Quadratic Equations 8.4 Quadratic Functions and Their Graphs 8.5 Quadratic and Rational Inequalities
Chapter 8 Wrap-Up • Summary • Enriching Your Mathematical Word Power • Review Exercises • Chapter 8 Test • Making Connections: a review of Chapters 1-8 • Critical Thinking
9 Additional Function Topics: 9.1 Graphs of Functions and Relations 9.2 Transformations of Graphs 9.3 Combining Functions 9.4 Inverse Functions 9.5 Variation
Chapter 9 Wrap-Up • Summary • Enriching Your Mathematical Word Power • Review Exercises • Chapter 9 Test • Making Connections: a review of Chapters 1-9 • Critical Thinking
10 Polynomial and Rational Functions: 10.1 The Factor Theorem 10.2 Zeros of a Polynomial Function 10.3 The Theory of Equations 10.4 Graphs of Polynomial Functions 10.5 Graphs of Rational Functions
Chapter 10 Wrap-Up • Summary • Enriching Your Mathematical Word Power • Review Exercises • Chapter 10 Test • Making Connections: a review of Chapters 1-10 • Critical Thinking
11 Exponential and Logarithmic Functions: 11.1 Exponential Functions and Their Applications 11.2 Logarithmic Functions and Their Applications 11.3 Properties of Logarithms 11.4 Solving Equations and Applications
Chapter 11 Wrap-Up • Summary • Enriching Your Mathematical Word Power • Review Exercises • Chapter 11 Test • Making Connections: a review of Chapters 1-11 • Critical Thinking
12 Nonlinear Systems and the Conic Sections: 12.1 Nonlinear Systems of Equations 12.2 The Parabola 12.3 The Circle 12.4 The Ellipse and Hyperbola 12.5 Second-Degree Inequalities
Chapter 12 Wrap-Up • Summary • Enriching Your Mathematical Word Power • Review Exercises • Chapter 12 Test • Making Connections: a review of Chapters 1-12 • Critical Thinking
13 Sequences and Series: 13.1 Sequences 13.2 Series 13.3 Arithmetic Sequences and Series 13.4 Geometric Sequences and Series 13.5 Binomial Expansions
Chapter 13 Wrap-Up • Summary • Enriching Your Mathematical Word Power • Review Exercises • Chapter 13 Test • Making Connections: a review of Chapters 1-13 • Critical Thinking
14 Counting and Probability: 14.1 Counting and Permutations 14.2 Combinations 14.3 Probability
Chapter 14 Wrap-Up • Summary • Enriching Your Mathematical Word Power • Review Exercises • Chapter 14 Test • Critical Thinking
Appendix A Answers to Selected Exercises
Supplements
Student's Solutions Manual for use with Algebra for College Students ISBN: 0073206237 Author(s): DUGOPOLSKI
Annotated Instructor's Edition t/a Algebra for College Students (Comp ISBN) ISBN: 0073206253 Author(s): DUGOPOLSKI
DVD Video Series to accompany Algebra for College Students ISBN: 0073206288 Author(s): DUGOPOLSKI
{"url":"http://mcgraw-hill.com.au/html/9780077224844.html","timestamp":"2014-04-20T15:55:04Z","content_type":null,"content_length":"57188","record_id":"<urn:uuid:f6b92dc8-16b3-4f99-a2f7-1c85ccad4dc2>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
constant number of objects on screen - frustum [Archive] - OpenGL Discussion and Help Forums
02-27-2009, 12:21 PM
I am trying to write a little program that displays a constant number of objects in the view area at all times (currently point objects). I am using a frustum class that gives me the important view coordinates (it is mostly from the Lighthouse View Frustum Culling Tutorial (http://www.lighthouse3d.com/opengl/viewfrustum/index.php)). It seems like it should be easy but the view remains splotchy with objects constantly blinking (being added to random positions) on screen. Does anyone have any ideas? Below is the function I use to redraw the objects. I can attach the rest of the application if it would help.

void cstarfield::weMove()
{
    camera.MoveForward(g_speed); // forward is -z
    camera.Render();             // needs to be here for frustum
    lftl = camera.ftl; lfbl = camera.fbl; lfbr = camera.fbr; lftr = camera.ftr;
    lntl = camera.ntl; lnbl = camera.nbl;
    // Bug in the archived post: the two assignments below copied the far-plane
    // corners (camera.fbr, camera.ftr) into the near-plane variables.
    lnbr = camera.nbr; lntr = camera.ntr;

    for (int i = 0; i < NUM_STARS; i++)
    {
        glPushMatrix(); // push/pop pair assumed; without it the translations accumulate
        glTranslatef(g_xyz[i][0], g_xyz[i][1], g_xyz[i][2]);
        if (g_color == GL_TRUE)
            glColor3fv(g_colors[i]);
        else
            glColor3f(255, 255, 255); // values clamp to 1.0, i.e. white
        glPointSize(g_pointsize);     // sets the pixel size of the points
        glBegin(GL_POINTS);           // render with points
        glVertex2i(0, 0);             // display a point at current x,y,z
        glEnd();
        glPopMatrix();

        GLfloat lowest, highest, range;
        GLfloat tmp;
        Vec3 a(g_xyz[i][0], g_xyz[i][1], g_xyz[i][2]);
        enum {TOP = 0, BOTTOM, LEFT, RIGHT, NEARP, FARP};

        if (camera.pointInFrustum(a) == CCamera::OUTSIDE) // if star is outside of frustum then recalc to somewhere inside ...
        {
            // Distances to the six frustum planes; their computation is not shown in the post.
            GLfloat dn, df, dl, dr, dt, db, td;
            if (dl < 0)
            {
                if (camera.ftl.x < camera.ftr.x)
                    g_xyz[i][0] = g_xyz[i][0] - dr;
                else
                    g_xyz[i][0] = g_xyz[i][0] + dr;
            }
            if (dr < 0)
            {
                if (camera.ftl.x < camera.ftr.x)
                    g_xyz[i][0] = g_xyz[i][0] + dl;
                else
                    g_xyz[i][0] = g_xyz[i][0] - dl;
            }
            if (dt < 0)
            {
                if (camera.ftl.y > camera.fbl.y)
                    g_xyz[i][1] = g_xyz[i][1] - db;
                else
                    g_xyz[i][1] = g_xyz[i][1] + db;
            }
            if (db < 0)
            {
                if (camera.ftl.y > camera.fbl.y)
                    g_xyz[i][1] = g_xyz[i][1] + dt;
                else
                    g_xyz[i][1] = g_xyz[i][1] - dt;
            }
            if (df < 0)
            {
                if (camera.ftl.z > camera.ntl.z)
                    g_xyz[i][2] = g_xyz[i][2] - dn;
                else
                    g_xyz[i][2] = g_xyz[i][2] + dn;
            }
            if (dn < 0)
            {
                if (camera.ftl.z > camera.ntl.z)
                    g_xyz[i][2] = g_xyz[i][2] - df;
                else
                    g_xyz[i][2] = g_xyz[i][2] + df;
            }

            Vec3 d(g_xyz[i][0], g_xyz[i][1], g_xyz[i][2]);
            if (camera.pointInFrustum(d) == CCamera::OUTSIDE)
            {
                lowest = dl; highest = dr;
                if (lowest > highest) { tmp = lowest; lowest = highest; highest = tmp; }
                range = highest - lowest; // assumed: range was never assigned in the archived post
                g_xyz[i][0] = lowest + int(range * rand() / (RAND_MAX + 1.0));
                lowest = db; highest = dt;
                if (lowest > highest) { tmp = lowest; lowest = highest; highest = tmp; }
                range = highest - lowest;
                g_xyz[i][1] = lowest + int(range * rand() / (RAND_MAX + 1.0));
                lowest = dn; highest = df;
                if (lowest > highest) { tmp = lowest; lowest = highest; highest = tmp; }
                range = highest - lowest;
                g_xyz[i][2] = lowest + int(range * rand() / (RAND_MAX + 1.0));
            }
        }
    }
}
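For anyone wanting to sanity-check the culling itself, here is a minimal, self-contained sketch (independent of the poster's classes; all names are illustrative) of the signed-distance plane test that a pointInFrustum implementation typically reduces to:

#include <array>

struct Plane {
    // Plane given as nx*x + ny*y + nz*z + d = 0, with the normal pointing inward.
    float nx, ny, nz, d;
    float signedDistance(float x, float y, float z) const {
        return nx * x + ny * y + nz * z + d;
    }
};

// A point is inside the frustum iff it lies on the inner side of all six planes.
bool pointInFrustum(const std::array<Plane, 6>& planes, float x, float y, float z) {
    for (const Plane& p : planes)
        if (p.signedDistance(x, y, z) < 0.0f)
            return false; // outside this plane, hence outside the frustum
    return true;
}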
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-166854.html","timestamp":"2014-04-24T03:02:17Z","content_type":null,"content_length":"11643","record_id":"<urn:uuid:621b75e6-b546-4bfb-b3f1-cb9b8dba7bf0>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Completely factor the following expression. `z^2 + 15z - 54` - Homework Help - eNotes.com
Completely factor the following expression. `z^2 + 15z - 54`
To factorize `z^2 + 15z - 54` you need to establish the factors that are relevant between the first and the third terms: `z^2 = z times z`, and the factor pairs of 54 are (1 x 54, 2 x 27, 3 x 18, 6 x 9). You must establish which of these combinations will render a middle term of +15z. The only pair that will work is 3 x 18, since with a product of -54 the two factors must differ by 15. Care must be taken not to use 6 x 9, which seems most obvious because 6 + 9 = 15, but that choice would render +54 as the third term.
`therefore z^2 + 15z - 54 = (z - 3)(z + 18)`
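Expanding the answer back out confirms it (a worked check): `(z - 3)(z + 18) = z^2 + 18z - 3z - 54 = z^2 + 15z - 54`.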
{"url":"http://www.enotes.com/homework-help/completely-factor-following-expression-z-2-15z-54-421266","timestamp":"2014-04-18T11:49:22Z","content_type":null,"content_length":"25598","record_id":"<urn:uuid:e03af6c7-bf5c-4100-865b-b3dd1fbe41df>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
A natural extension of natural deduction
Results 11 - 20 of 36
- Dept. of Informatics, Univ. of Oslo, 1997
"... This paper is an attempt at a systematizing study of the proof theory of the intuitionistic predicate μ,ν-logic (conventional intuitionistic predicate logic extended with logical constants μ and ν for the least and greatest fixpoint operators on positive predicate transformers). We identify eight pr ..."
Cited by 6 (5 self) Add to MetaCart
This paper is an attempt at a systematizing study of the proof theory of the intuitionistic predicate μ,ν-logic (conventional intuitionistic predicate logic extended with logical constants μ and ν for the least and greatest fixpoint operators on positive predicate transformers). We identify eight proof-theoretically interesting natural-deduction calculi for this logic and propose a classification of these into a cube on the basis of the embeddibility relationships between these. 1 Introduction μ,ν-logics, i.e. logics with logical constants μ and ν for the least and greatest fixpoint operators on positive predicate transformers, have turned out to be a useful formalism in a number of computer science areas. The classical 1st-order predicate μ,ν-logic can be used as a logic of (non-deterministic) imperative programs and as a database query language. It is also one of the relation description languages studied in descriptive complexity theory (finite model theory) (for a survey on this hi...
- Mathematical Methods in Program Development: Summer School Marktoberdorf 1996, NATO ASI Series F, 1996
"... Proof tools must be well designed if they... ..."
- Proof-Theoretic Semantics. Special issue of Synthese
"... Abstract. The standard approach to what I call “proof-theoretic semantics”, which is mainly due to Dummett and Prawitz, attempts to give a semantics of proofs by defining what counts as a valid proof. After a discussion of the general aims of proof-theoretic semantics, this paper investigates in det ..."
Cited by 5 (4 self) Add to MetaCart
Abstract. The standard approach to what I call “proof-theoretic semantics”, which is mainly due to Dummett and Prawitz, attempts to give a semantics of proofs by defining what counts as a valid proof. After a discussion of the general aims of proof-theoretic semantics, this paper investigates in detail various notions of proof-theoretic validity and offers certain improvements of the definitions given by Prawitz. Particular emphasis is placed on the relationship between semantic validity concepts and validity concepts used in normalization theory. It is argued that these two sorts of concepts must be kept strictly apart. 1. Introduction: Proof-theoretic
- MATHEMATICAL KNOWLEDGE MANAGEMENT (MKM 2006), LNAI, 2006
"... Isabelle/Isar is a generic framework for human-readable formal proof documents, based on higher-order natural deduction. The Isar proof language provides general principles that may be instantiated to particular object-logics and applications. We discuss specific Isar language elements that support ..."
Cited by 4 (1 self) Add to MetaCart
Isabelle/Isar is a generic framework for human-readable formal proof documents, based on higher-order natural deduction. The Isar proof language provides general principles that may be instantiated to particular object-logics and applications. We discuss specific Isar language elements that support complex induction patterns of practical importance.
Despite the additional bookkeeping required for induction with local facts and parameters, definitions, simultaneous goals and multiple rules, the resulting Isar proof texts turn out well-structured and readable. Our techniques can be applied to non-standard variants of induction as well, such as co-induction and nominal induction. This demonstrates that Isar provides a viable platform for building domain-specific tools that support fully-formal mathematical proof composition. , 2008 "... Abstract. It is well known how to use an intuitionistic meta-logic to specify natural deduction systems. It is also possible to use linear logic as a meta-logic for the specification of a variety of sequent calculus proof systems. Here, we show that if we adopt different focusing annotations for suc ..." Cited by 4 (4 self) Add to MetaCart Abstract. It is well known how to use an intuitionistic meta-logic to specify natural deduction systems. It is also possible to use linear logic as a meta-logic for the specification of a variety of sequent calculus proof systems. Here, we show that if we adopt different focusing annotations for such linear logic specifications, a range of other proof systems can also be specified. In particular, we show that natural deduction (normal and non-normal), sequent proofs (with and without cut), tableaux, and proof systems using general elimination and general introduction rules can all be derived from essentially the same linear logic specification by altering focusing annotations. By using elementary linear logic equivalences and the completeness of focused proofs, we are able to derive new and modular proofs of the soundness and completeness of these various proofs systems for intuitionistic and classical logics. 1 - Extensions of Logic Programming , 1990 "... This paper describes the logical and philosophical background of an extension of logic programming which uses a general schema for introducing assumptions and thus presents a new view of hypothetical reasoning. The detailed proof theory of this system is given in [7], matters of implementation and c ..." Cited by 3 (2 self) Add to MetaCart This paper describes the logical and philosophical background of an extension of logic programming which uses a general schema for introducing assumptions and thus presents a new view of hypothetical reasoning. The detailed proof theory of this system is given in [7], matters of implementation and control of the corresponding programming language GCLA with detailed examples can be found in [1, 2]. In Section 1 we consider the local rule-based approach to a notion of atomic consequence as opposed to the global logical approach. Section 2 describes our system and characterises the inference schema of definitional reflection which is central for our approach. In Section 3 we motivate the computational interpretation of this system. Finally, Section 4 relates our approach to the idea of logical frameworks and the way elimination inferences for logical constants are treated therein, and thus to the notions of logic and structure. It shows that from a certain perspective, logical reasoning is nothing but a special case of reasoning in our system. 1 Local and global consequence , 1996 "... We present a framework for machine implementation of both partial and complete fragments of large families of non-classical logics such as modal, relevance, and intuitionistic logics. 
We decompose a logic into two interacting parts, each a natural deduction system: a base logic of labelled formulae, ..." Cited by 2 (2 self) Add to MetaCart We present a framework for machine implementation of both partial and complete fragments of large families of non-classical logics such as modal, relevance, and intuitionistic logics. We decompose a logic into two interacting parts, each a natural deduction system: a base logic of labelled formulae, and a theory of labels characterizing the properties of the Kripke models. Our approach is modular and supports uniform proofs of correctness and proof normalization. We have implemented our work in the Isabelle Logical Framework. 1 INTRODUCTION The origins of natural deduction (ND) are both philosophical and practical. In philosophy, it arises from an analysis of deductive inference in an attempt to provide a theory of meaning for the logical connectives [24, 33]. Practically, it provides a language for building proofs, which can be seen as providing the deduction theorem directly, rather than as a derived result. Our interest is on this practical side, and a development of our work on ap... - In R "... When a logical system is specified and the notion of a derivation or formal proof is explained, we are told (i) which formulas can be used to start a derivation and (ii) which formulas can be derived given that certain other formulas have already been derived. Formulas of the sort (i) are either ass ..." Cited by 2 (2 self) Add to MetaCart When a logical system is specified and the notion of a derivation or formal proof is explained, we are told (i) which formulas can be used to start a derivation and (ii) which formulas can be derived given that certain other formulas have already been derived. Formulas of the sort (i) are either assumptions or axioms, formulas of the sort (ii) are conclusions of (proper) inference rules. Axioms may be viewed as conclusions of (improper) inference rules, viz. inference rules without premisses. In what follows I refer to conclusions of proper or improper inference rules as assertions. 1 In natural deduction systems, inference rules deal both with assumptions and assertions, as the assumptions on which the conclusion of an inference rule depends, are not necessarily given by the collection of all assumptions on which the premisses depend, in case the rule permits the discharging of assumptions. For example, the rule of implication introduction - Logica Universalis , 2007 "... Abstract. The term inversion principle goes back to Lorenzen who coined it in the early 1950s. It was later used by Prawitz and others to describe the symmetric relationship between introduction and elimination inferences in natural deduction, sometimes also called harmony. In dealing with the inver ..." Cited by 2 (2 self) Add to MetaCart Abstract. The term inversion principle goes back to Lorenzen who coined it in the early 1950s. It was later used by Prawitz and others to describe the symmetric relationship between introduction and elimination inferences in natural deduction, sometimes also called harmony. In dealing with the invertibility of rules of an arbitrary atomic production system, Lorenzen’s inversion principle has a much wider range than Prawitz’s adaptation to natural deduction,. It is closely related to definitional reflection, which is a principle for reasoning on the basis of rule-based atomic definitions, proposed by Hallnäs and Schroeder-Heister. 
After presenting definitional reflection and the inversion principle, it is shown that the inversion principle can be formally derived from definitional reflection, when the latter is viewed as a principle to establish admissibility. Furthermore, the relationship between definitional reflection and the inversion principle is investigated on the background of a universalization principle, called the ω-principle, which allows one to pass from the set of all defined substitution instances of a sequent to the sequent itself.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=774417&sort=cite&start=10","timestamp":"2014-04-16T11:29:22Z","content_type":null,"content_length":"37138","record_id":"<urn:uuid:325e9db2-5b20-4cc7-b5cc-9d534117b686>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Please Solve my integration Question
The first one asks you to compute $\int 3x^2y^3\,dy,$ so $3x^2$ behaves like a constant; you can take it out of the integral, and it remains to integrate $y^3.$ It looks like the second one has to do with a differential equation; note that $v^2+2v+1=(v+1)^2.$ Define a simple substitution on the LHS and you're done (of course, the RHS is a known integral.)
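Carrying the first hint through (a worked version; $C$ is the usual constant of integration):
$\int 3x^2y^3\,dy = 3x^2\int y^3\,dy = 3x^2\cdot\frac{y^4}{4} + C = \frac{3}{4}x^2y^4 + C.$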
{"url":"http://mathhelpforum.com/calculus/20822-please-solve-my-integration-question.html","timestamp":"2014-04-20T11:25:35Z","content_type":null,"content_length":"34139","record_id":"<urn:uuid:02660337-5063-48a9-a8dd-d42345ee9385>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Finite, countably infinite, or uncountable
March 23rd 2011, 03:24 PM #1
Feb 2011
Finite, countably infinite, or uncountable
Determine, with justification, whether each of the following sets is finite, countably infinite, or uncountable.
a) { $x \in Q | 1 < x < 2$ }
b) { $m/n | m, n \in N, m < 100, 5 < n < 105$ }
I am lost. Not sure how to approach these problems!
March 23rd 2011, 03:37 PM #2
part b) given that the possible values of m (let's call this set A) are finite in number, and likewise for the possible values of n (call this set B), can you see that your set is no bigger than AxB? what can you say about the size of AxB when A and B are finite? can you think of a way to map AxB onto your set, proving that your set MUST be no bigger?
March 25th 2011, 08:40 AM #3
MHF Contributor
Mar 2011
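For part a), which the thread never returns to, one standard line of reasoning (sketched here as a hint, not taken from the original posts) is: $\{x \in \mathbb{Q} \,|\, 1 < x < 2\} \subseteq \mathbb{Q}$, and $\mathbb{Q}$ is countable, so the set is at most countable; it is also infinite, since it contains the distinct elements $1 + \frac{1}{n}$ for every integer $n \ge 2$. An infinite subset of a countable set is countably infinite.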
{"url":"http://mathhelpforum.com/discrete-math/175581-finite-countably-infinite-uncountable.html","timestamp":"2014-04-21T13:40:42Z","content_type":null,"content_length":"38073","record_id":"<urn:uuid:31bc2536-e7b2-4334-8670-4590d4620da3>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
The Problem of Time leads to a Problem of Energy for the Universe

You know what I think? I think you made a mistake and $q_i$ is a position rather than a velocity. What do you think? To me, $f_i(q)$ looks like a functional notation. That is, $f$ is a function rather than a number. Could that be correct? So $\delta L$ has nothing to do with variational calculus? It's just a stock-standard change in L? From your expressions, it looks like A is a vector three-potential - especially given your integral of $d^3 x$ rather than $d^4 x$. Anyway, I plan to move on and discuss the first part of your post...

I will have to appeal to authority with two of these questions:

''You know what I think? I think you made a mistake and $q_i$ is a position rather than a velocity. What do you think?''

Susskind defines it as a velocity... I will watch the video again at some point to clarify this is indeed the case, but, essentially velocity turns up in the kinetic representation $T = \frac {1}{2} M \dot{q}$. Unless of course susskind has used the two where one is in fact the position given as q_i, in which case I apologize. He couldn't have defined that well.

''That is, $f$ is a function rather than a number. Could that be correct?''

I don't believe it is. Susskind clearly stated that $f=1$ in his lecture.

''So $\delta L$ has nothing to do with variational calculus? It's just a stock-standard change in L?''

''From your expressions, it looks like A is a vector three-potential - especially given your integral of $d^3 x$ rather than $d^4 x$.''

I could be wrong, but I think you can miss out writing it as d^3 because when you take the quantity $\dot{q}$ it would imply a time component. It can be important to remember that A is a four vector because when you write out a matrix for an equation, for instance, it swaps signs in the lower half entries.

I think I could have made a mistake in second light of q_i. I think q_i could have been a position. I will look more into it later and get back to you on that one.

Yeah... after a quick look, I am sure f=1... you can see this 16 mins in.

Looking back at what AN said: '' $q_{i}$ is taken to be the position of an object while the velocity is then $\dot{q}_{i}$. '' It seems q_i is a position then, but susskind never defined this, which is what probably led to my confusion. I think that is cleared up now.

So now you know you should go with logic on this. The universe is not timeless and devoid of energy.

Last edited by Cortex_Colossum; 01-14-12 at 01:19 PM.

Sorry, where did you get that from? I don't retract my statements at all and I never will until a satisfactory conclusion has been brought forward. What you quoted me saying has nothing to do with the timelessness found in relativity. That was in conjecture to an energy-saving universe. Anyway, the universe could very well be timeless in a global sense, and because of this, there is an existing problem with energy, since the two are conjugates of each other under Noether's theorem.
If the sum total of your knowledge on this stuff is from Susskind's video then how can you have been doing anything about the Dirac equation or gauge invariance or covariant properties of fields? All of them come after basic Lagrangian material and Noether's theorem because they make considerable use of it. It isn't 'confusion' you have, it's just not knowing. It's like me saying I'm confused about Japanese. No, I simply can't speak it. I think it's an excellent demonstration of your willingness to misrepresent what people say. Not to mention a demonstration of how poor your maths abilities were at a time you claimed to be studying Riemannian curvature in general relativity. Do you now admit you lied about that? And your reply doesn't retort any of my demonstrations of your misunderstandings and mistakes. You also ignored my question. It's simple enough, perhaps 2 sentences, no LaTeX, to answer if you're familiar with this stuff. But if the extent of your Lagrangian understanding is from a single YouTube video that would explain why you made the mistake and why you can't answer the I'm not asking you because I don't know. I'm asking you because I know and you claim to know but you have provided evidence to the contrary. I'm giving you the opportunity to step up and demonstrate your understanding goes beyond mangling a single YouTube video. Would you like me to repeat the question? Which I explained in detail and which you have yet to retort, despite being given many opportunities. Would you like me to repeat the question? Actually susskind did. I said to the physics subforum that in physics there is a noticable difference between perpendicular and orthogonal. If you wanna keep that one going up then fine... I find it interesting that cptbork didn't even know of this definition when I spoke about it, upon which he said I was decieving everyone. Firstly isn't a difference between perpendicular and orthogonal, they are synonyms. Secondly I was referring to the whole "matrix times matrix gives number" fiasco. Susskind said "equals one" and you took it to mean the number one, despite the fact matrix times matrix gives matrix and Susskind went on to explicitly state in the following 30 seconds or so he was referring to the identity matrix. Lagrangian methods, variational principles, integrals, functionals, partial differential equations, groups, transformations, these are all things most mathematicians cover in their undergrad. Remember, I did mathematics as my degree. I covered Lagrangian methods and variational principles in the first term of my second year. It was a required course so every mathematician covered it, not just those interested in mathematical physics. Similarly all the other things I just mentioned are considered essential learning and appeared in more than half a dozen required courses in the first and second years. As such even if Guest is a purest of the pure mathematician he'll know about this stuff. I picked two people who are definitely knowledgeable in mathematical methods in physics. It is a coincidence they happen to agree with me and disagree with you so much. In fact all of the people here educated to a high level in mathematics or physics are likewise. Of course the fact they all happen to agree with me and disagree with you so much isn't really a coincidence. They, like me, see through your nonsense and dishonesty and hence don't hold you in very high regard. 
So it's not that I am picking biased people from the pool of formally educated mathematicians/physicists who post here, but rather all of said people have concluded similar things about you. Why, if you were to use the scientific method you might even consider that evidence for the hypothesis that you spout nonsense a lot. After all, while correlation does not imply causation it certainly warrants investigation.

You make it sound like I've only pointed out one mistake. I've pointed out a plethora and all you can manage is a weak attempt at a single one; the rest you're just avoiding entirely, just as you did in the Dirac equation case.

Guess? Why would you need to guess? The form of kinetic energy and potential energy in Newtonian physics is known to school children! Besides, that expression for kinetic energy is not universally true. There are other systems with different kinetic expressions. You should have seen a number of them if you had worked through quantum mechanics and into quantum field theory. For example, the expression in the Dirac equation for kinetic energy isn't of that form. Can you tell me why it can't be of that form? It's to do with something I told you about the Dirac equation. Again, I'm giving you an opportunity to show you can do more than parrot back facts you've heard from YouTube, you can show you have an actual grasp of how all these things fit together. Here's a chance to show all my accusations are not as valid as they seem to be.

''Susskind defines it as a velocity... I will watch the video again at some point to clarify this is indeed the case, but, essentially velocity turns up in the kinetic representation $T = \frac {1}{2} M \dot{q}$. Unless of course susskind has used the two where one is in fact the position given as q_i in which case I apologize. He couldn't have defined that well.''

It's quite clearly position. Susskind explains it in the video, the context makes it obvious. Kinetic energy is written as $\frac{1}{2}m\dot{q}^{2}$ and obviously if q=x then $\dot{q} = v$ and so you get $\frac{1}{2}mv^{2}$.

You shouldn't have prompted me to watch the video because it really does become so obvious you're just parroting Susskind. For example, when James asked you what a Lagrangian density is, you just spewed out the equation Susskind wrote on the board early on. Except you introduced an error, because you wrote the second term in terms of $\dot{\phi}_{x}$, when it is actually $\dot{\phi}$. Anyone familiar with scalar field theory knows such a term doesn't arise in the case Susskind is talking about (if ever). You also spewed out the expression involving the photon field, the bit involving A, ie $\frac{1}{2}m\dot{x}^{2} + A \cdot v$. Susskind actually uses very poor notation there, because he writes the velocity in two different ways in the same expression, since $v = \dot{x}$. He makes this obvious when he does the partial differentiation and hits both terms in the same manner. You transcribed this mistake. And I can see why you can't answer my question about T+U; Susskind explicitly states the implication involves more work and it's something he doesn't want to get into at that point.

You have said people here can read and hear and so can watch that lecture for themselves and see what you're saying Susskind says. What you don't realise (and this is a mistake you made with the whole identity matrix issue from one of his other videos) is that Susskind makes a lot of short cuts and a number of slip ups, or doesn't explain things entirely accurately, and that's not accounting for you not understanding him. Since you have no other source of information on this stuff, ie you haven't actually learnt it elsewhere and are just citing Susskind, you're actually just parroting him; you don't spot any of these and you take what he says as gospel. It also means you can't tailor your answers to James's questions, you can only try to bend the question to some part of the video for you to reproduce, as was made clear by your answers to his first lot of questions. I wouldn't be surprised if that's James's intention, to get you to post more of your 'own answers' to highlight they aren't really your answers, they aren't a mix of different sources, they are just snippets of Susskind's video reproduced here. Since you'll reply to questions you think you know the answer to, maybe James has a much better way of having you dig your own hole.
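For readers following the physics rather than the argument: the distinction at issue is the standard textbook one. For a single particle with Lagrangian $L = \frac{1}{2}m\dot{q}^{2} - V(q)$ (a generic example, not a formula from the thread), $q$ is the position, $\dot{q}$ the velocity, and the Euler-Lagrange equation
$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0 \quad\Longrightarrow\quad m\ddot{q} = -V'(q)$
reproduces Newton's second law, which is why the kinetic term must be quadratic in $\dot{q}$, not linear.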
What you don't realise (and this is a mistake you made with the whole identity matrix issue from one of his other videos) is that Susskind makes a lot of short cuts, a number of slip ups or doesn't explain things entirely accurately and that's not accounting for you not understanding him. Since you have no other source of information on this stuff, ie you haven't actually learnt it elsewhere and just citing Susskind, you're actually just parroting him, you don't spot any of these and you take what he says as gospel. It also means you can't tailor your answers to James's questions, you can only try to bend the question to some part of the video for you to reproduce, as was made clear by your answers to his first lot of questions. I wouldn't be surprised if that's James's intention, to get you to post more of your 'own answers' to highlight they aren't really your answers, they aren't a mix of different sources, they are just snippets of Susskind's video reproduced here. Since you'll reply to questions you think you know the answer to maybe James has a much better way of having you dig your own hole. Actually no I haven't. I actually know much more on the side than I do at the heart of the topic. I only spoke of the conservation in the form $\delta L = 0$ in such a form and no expressive mathematics. Even you as a compatent scientist will know that this expression is true, that Jame's questions led to one ''critical mistake'' as you call it. The rest of the claims were bogus, since even you said that ''I understand what Susskind means,'' which is actually indirectly saying that ''Ok, that is what he said and what you said but I understand what he means,'' is not admitting you messed up, but passing the buck of interpretation. I would have actually much more appreciated the role James is taking, maybe one sympathetic to the fact I am actually trying here, but there you are, jumpimg in with a whole load, superfluous even set of arrogant accusations which have not summed up the situation once in entirity. Give it a break! F-in' *meow* baby!!! Reiku, if you have such a fascination with physics, why don't you go study it at a university? Thank you though for this. This is your stern stone cold way of saying ''yeh, he's telling the truth...'' ...but I won't let you forget just yet the pick and mix of accusations you came in with. just like Tach in my Black Hole essay thread I am sure you would agree. F-in' *meow* baby!!! I live in a remote part of Scotland, and studying physics alone means I need to pay 10,000 pounds maybe more for my whole three years, but the worst part is, is that to get a bursary you need to study a maximum of three years with three types of subjects. I did that at college first of all, but I could not hold down all the studies of biology and chemistry on top of the physics, which weighed me down most of all, so I went back and completed physics alone, but I am in a quandry because they are the only sciences avaliable here and I am not qualified to do them all, and I only spent 6 months on a primilinary math course. I defined before, just in case it is raised in your questions of the literature James, that I treat the mind as a subset of a larger system, a second set we call the universe. Let us denote consciousness (and everything related to) as a set $\mathcal{B}$. Let the universe then be the set $\mathcal{A}$. 
Thus if $\mathcal{B}$ is a subset of $\mathcal{A}$ then I can write $\mathcal{B} \subseteq \mathcal{A}$. It is taken as fact that, since $\mathcal{B}$ exists as a subset of $\mathcal{A}$, it cannot be an exact copy. No subsystem can model precisely the larger system it is made of. This is conjectured because I believe that $\mathcal{B}$ can never contain all the information contained in $\mathcal{A}$. So in case you raise any question on what I said, and perhaps forget how I really see all of this ''mind-stuff'', I will say the mind exists, but the universe doesn't depend on the mind whilst the mind does depend on the universe. Surely anyone will agree with that statement.

does'nt* depend on the mind... Jesus... totally wrote that wrong but fixed now.

The way my mind-model of physics describes the universe at all is through the low-energy phenomenon of geometry. Known as Geometrogenesis, it explains why matter is required for carbon-based life forms like us, because consciousness is an improbability of the equations concerning high-energy phenomena, which do not even allow for subsystems. The mind therefore arises from a stable spacetime, where matter is configured, often close to its ground-state energies, in what we call the brain.

. . . can someone define and give me a few proven examples of "negative energy" . . . humor me a bit here . . .

I have been thinking about alternative solutions to the time problem (which I have only just been considering, wondering about the clear-up provided by AN concerning q), which is written into the archives of problems concerning unanswered paradoxes of physics. I decided that maybe there was an absolute field, maybe one single matter field where the time derivative does not vanish under the Wheeler-DeWitt formalism [1]. I started to conjecture such a field and came up with

$\dot{\chi} = i\left(\frac{\partial \mathcal{L}}{\partial \dot{q}_i} \cdot d\right)\nabla^2$

The time dependence is inherent in the matter field $\chi$ itself. The time derivative appears from $\dot{q}_i$ as well. So long as there are real dynamics with real matter (not photon energy or any energy pertaining to nullified particle trajectories [2]), this tardyonic matter can act as Einsteinian clocks and measure time passing inside the universe, using a matter field as a real set of clocks, locally defining time. In fact, the electron has an internal clock. Not many know this. Hestenes wrote a brilliant paper on this.

[2] Time for bosons is stretched to a hypothetical infinity, suggesting that no time passes at all. It is a consequence of relativity itself (as AN knows; this is just for the benefit of others).

Normally you would not be studying a sole subject alone; you would be majoring in a subject (i.e. it's the one that gets the most attention and is the focus of a degree). Either way, 10,000 pounds for 3 years doesn't seem excessively expensive. That would make sense, as universities typically want their students to have a well-rounded education. Why rely on a bursary? Get a student loan instead.

''I did that at college first of all, but I could not hold down all the studies of biology and chemistry on top of the physics, which weighed me down most of all, so I went back and completed physics alone, but I am in a quandary because they are the only sciences available here and I am not qualified to do them all, and I only spent 6 months on a preliminary math course.''

This paragraph confused me a little.
Do you mean that you could not handle a biology, chemistry, and physics class at once, so you just completed a single physics class instead of all 3? I am also confused about your statement about being qualified for biology, chemistry, and physics. Does this mean you don't have the pre-requisite education to learn these subjects (as your statement about a preliminary math course implied)? Biology and chemistry typically won't require anything beyond algebra; however, physics will require a minimum of calculus. It's not uncommon for students to take calculus classes in tandem with physics classes, but if your math education is only preliminary then you will have some catching up to do. Typically the hierarchy of math courses up to calculus for your first few physics courses goes something like this: Basic Math, Algebra, Geometry, Trigonometry or Pre-Calculus, Calculus (derivatives), Calculus (integrals).

''This paragraph confused me a little. Do you mean that you could not handle a biology, chemistry, and physics class at once so you just completed a single physics class instead of all 3?''

''I am also confused about your statement about being qualified for biology, chemistry, and physics.''

To qualify for a university, I mean. I don't know what is required, but I doubt an insignificant understanding of math will help, especially when it comes to qualifications, I mean.
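For readers trying to follow the physics in the exchange above: the Lagrangian under dispute, a charged particle in a magnetic vector potential, has a standard textbook form. This block is editorial context, not a quote from either poster or from Susskind's lecture (lectures often absorb the charge and unit factors into $\mathbf{A}$):

$$L(\mathbf{x}, \dot{\mathbf{x}}) = \frac{1}{2}m\,\dot{\mathbf{x}}^{2} + \frac{e}{c}\,\mathbf{A}(\mathbf{x})\cdot\dot{\mathbf{x}}, \qquad \mathbf{p} = \frac{\partial L}{\partial \dot{\mathbf{x}}} = m\dot{\mathbf{x}} + \frac{e}{c}\,\mathbf{A}.$$

Writing the same quantity as $v$ in one term and $\dot{x}$ in the other, the notational slip discussed above, obscures the fact that $\partial/\partial\dot{\mathbf{x}}$ acts on both terms, which is exactly why the canonical momentum picks up the $\mathbf{A}$ contribution.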
{"url":"http://www.sciforums.com/showthread.php?111942-The-Problem-of-Time-leads-to-a-Problem-of-Energy-for-the-Universe/page5","timestamp":"2014-04-17T12:30:34Z","content_type":null,"content_length":"126689","record_id":"<urn:uuid:13548293-29d0-4fe7-b7df-e8ee7b4e1f90>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
Riemann: The Habilitation Dissertation July 25, 2011 • 1:21PM

A comprehension of the content of Riemann's habilitation dissertation is essential for substantial understanding and progress in all important fields of thought. This presentation by Riemann, often treated (wrongly) as a mathematical presentation, reflects his views on the physical universe and the human mind, and gives us useful insight into understanding social processes. In this video, we'll cover the content of Riemann's presentation, and demonstrate an important fact of mental life: human beings do not sit outside the universe, investigating it from a fixed, stable location – rather, creative mental activity is itself a universal power, and must itself be considered by anyone seeking a unified physical view of the world. Let's dive in.

Habilitation Dissertation: On the Hypotheses Which Underlie Geometry

Now, you've probably heard that if you add up the angles in a triangle, you get 180 degrees, or two right angles. You've also heard that two parallel lines never cross, even if you extend them infinitely. That's quite a claim. This triangle on a sphere has three right angles, and these lines, which seemed parallel, actually do cross when they are extended far enough. What would our geometry teacher say! Well, you'd probably hear the response: "The rules all work fine if the lines are straight." But, let me ask: what does it mean for a line to be straight? Can you think of a definition? Is it, perhaps, the shortest distance between two points? If so, the lines on our sphere are straight. "But the sphere is curved!" comes the objection, "make your lines in space." Fine – but how will we make them? Maybe we should use a ruler, although how would we know that our ruler is straight? Maybe we should use a beam of light, a physical process. But that won't work, because light bends. You see, the problem here is that without realizing it, we all have hypotheses about the nature of space itself, and we have preconceptions about constructions in space, such as making parallel lines. The now-famous faker Euclid didn't question whether his assumptions were true: he simply wrote out the geometry that corresponded to those assumptions (including the flatness of space), without showing that they were valid! More egregious than any specific wrong assumptions Euclid may have made was the fact that his axioms found their basis only in the a priori thinking of his imagination, not in real physical experiment. Up until Riemann's day, the hypotheses underlying space and geometry had not been examined in a general way, nor was it recognized that these foundations were actually hypotheses. To make his thoughts clear, Riemann had to offer a general concept of what he called "manifolds" of various numbers of dimensions and reveal the possible curvatures of these manifolds. Then, he could return to the shape of the actual space we inhabit, and decide how – and upon what basis – it would be distinguished from other possible imaginary spaces. This isn't something to be settled by logic; the answer can only be reached by a continued path of experiment. So, let's delve into the concept of manifolds in general.

Section I: Manifolds in General

Start with magnitude – what is it? Riemann says that a magnitude is a general concept that has multiple specific instances, or ways of being, or specializations.
For example, “length” is a single concept, that has multiple specializations, such as “two inches,” “three miles,” or “five kilometers.” They are all specific lengths. Taste is another example: “salty”, “sweet”, or “oregano-flavored” are specific instances of the general concept of “taste”. Shoe size, temperature, location – these are all magnitudes. Now we can distinguish two different kinds of magnitudes, those that change continuously and those which change in discrete jumps. As an example, the tones that can be played on a cello are continuous, while those on a piano are discrete. There is no key on the keyboard between a B and a C, but a cello can vary between them. We can also consider whether a magnitude's particular specification (or sometimes called “mode of determination”), whether this particular specification requires one or multiple values. One example is position on the earth. All lengths can be arranged by size, and you can always say which one is shorter and which longer, but the same is not true for position. A location has both a latitude and a longitude, and while positions could be arranged by latitude, or by longitude, they cannot be arranged by position. The position of New York isn't larger or smaller than the position of Houston. Such magnitudes (or “manifolds”) for which the fixing of position requires two specifications, are said to be doubly extended. For example, specifying the location on any of these different surfaces requires two values: here you see the sphere, the plane, the monkey saddle, and the catenoid. What's more, the location on a single surface can be specified in several different ways. Here are a variety of different coordinate systems being used to indicate the movement of a spot on a flat plane. While the coordinates are curved, the plane is not. Now, if we move beyond surfaces, a position in space is triply extended. There are many ways of identifying a location in space, but all require are least three specifications. For example, we could give the XYZ position – the latitude, longitude, and altitude – or we could use cylindrical coordinates – or catenoidal ones. Now, these are all different ways of describing a location in space, like the variety of ways of indicating the motion of the spot on the plane. They don't, as coordinate systems, indicate space as curved. Now, go up a level – just as the plane and the catenoid have different characteristics, not just different coordinate systems. For example, you can't put a plane on a catenoid – you'd have to bend it and stretch it. Can you think of a different space? That is, can you think of a three-dimensional space that isn't flat? It's difficult not to simply think of a curved object in space when you're pondering this question. We'll return to this question later, with some more ammunition for strengthening our imaginations of curved spaces. As another example, how many dimensions are there in color, as perceived by the human eye? If you are familiar with the way color is represented on televisions and computer monitors, you know that all colors are created by varying amounts of roughly red, green, and blue light, making it three-dimensional. There are many other ways of representing color, such as YUV, L*a*b* color, or Hue-Saturation-Lightness (seen here), but they all use three-dimensions. This triple extension is not a character of light itself, but it's a characteristic of the human eye, which contains three different types of color receptors. 
This can trick you – mixing red and yellow light together may look orange to us, but it does not become orange light; the red and yellow can be separated back out with a prism. Another quirk of vision is that the color magenta is not really in the color spectrum at all – the spectrum does not wrap around from violet back to red, except in the way our mind puts together the experience of the senses. While color is three-dimensional for us, for birds, which have four different color-receptor cells, color is four-dimensional! This leads us to an example of a magnitude with so many dimensions you can't even count them up! The example is light itself, which has an infinite number of specifications to indicate it exactly. This comes up when matching paint, where the three dimensions of color on a monitor just aren't enough – the paint may match under certain kinds of light, but not others. You can see how the match is better or worse, depending on the kind of light. Here you have the color curve for a certain color of professional light filter – for each possible color in the spectrum, the filter transmits a certain percentage of light – so each of the infinite number of colors in the rainbow has its own value of transmittance. Actual light (as opposed to perceived color) doesn't have three, but an infinite number of dimensions! Here, you see the perceived color of a changing color curve – there is so much in light that our eyes cannot distinguish! Armed with these general concepts of magnitudes, we can now join Riemann in investigating the different metric relations that manifolds are susceptible of, and how they can be determined. What makes a sphere different from a plane?

Section II: Possible Metric Relations

To start our investigation of the internal characteristics of manifolds, we'll start with two-dimensional curved surfaces, and for this, we'll use the approach of the great scientist Carl Gauss, who chose the topic for Riemann's lecture and was delighted to hear it. Gauss had developed a completely general method for investigating curved surfaces, with specific techniques. For example, the sphere is curved while the plane is flat. Here's another shape, known as a monkey saddle. Unlike the plane, it is curved, but it's not curved the same way as the sphere is curved. How is it different? Can we quantify its curvature? As a first technique for doing so, we'll introduce the normal – it's a direction at each point on the surface that points directly away from it, perpendicularly. We'll use the normals of a surface to measure how curved it is. To do this, Gauss mapped the normals onto an auxiliary sphere – he kept only the direction of the normal, but not its location. You can also imagine the monkey saddle as being incredibly tiny and at the center of the sphere, just like us on the earth pointing at distant stars. Look at the directions of the normals as we move around on the surface. Sometimes, when we move to the right on the surface, the direction the normal points moves to the left. The sphere serves for us as a kind of 3-dimensional compass, allowing us to indicate spatial directions, just like the rim of an ordinary compass tells us our direction on the two-dimensional surface of the earth. Gauss's first way of measuring the curvature of a surface was to take a region on the surface, and compare it to the size of the corresponding region on the auxiliary sphere. The larger the area on the sphere, the more curved the region on the surface.
Here, the red region is several times more curved than the blue region. To measure the curvature at a specific point, he would shrink the region until it was infinitesimally small. If we use this technique, we find that a cylinder has no curvature at all – it is called flat by Gauss's technique! As we cover this quadrilateral region on the cylinder, the region traced out by the normals is just a straight line, with no area: zero curvature. Gauss's next technique for measuring curvature uses what are called osculating circles. Just as any two points imply a direction by connecting them and drawing a line through them, any three points (not all on one line) determine a circle. So if we pass a plane through a surface, we form a curve, and there is a circle that best fits that curve at the given point. Here you can see the series of osculating circles for a given point on the surface. Gauss demonstrated that the most extreme osculating circles are always on planes perpendicular to each other, and showed that by multiplying the radii of the two circles and taking the inverse, you get the same measure of curvature that we got earlier with the normals. Again, we find that a cylinder has no curvature: one extreme osculating circle has the radius of the cylinder, while the other appears as a straight line, with an infinite radius. One divided by the product of these radii is zero. Now, before we get to Gauss's third method, which will be the most important for Riemann, let's take up a specific historical example: figuring out the size of the earth. To our knowledge, this was first discovered in the third century BC by the onetime librarian of Alexandria, Eratosthenes of Cyrene. He had noticed that on the day of the summer solstice, the sun appeared directly overhead in Aswan, Egypt. Then, he measured the shadows on the same day of the year in Alexandria. By assuming the sun was so far away as to make its rays parallel, and by combining the angle of the shadows with the distance between the two cities, he estimated the circumference of the entire earth as 250,000 stadia (the shadow angle, about 7.2 degrees, is one-fiftieth of a full circle, and the distance between the cities was reckoned at 5,000 stadia, giving 50 × 5,000 = 250,000), which was a remarkably accurate estimate. A characteristic of the entire planet was determined by making measurements in a small area. Now we're ready to distinguish two different categories of surface characteristics: extrinsic characteristics and intrinsic ones. All the examples given so far were extrinsic characteristics, which use external objects and positions as references. To contrast intrinsic characteristics, let's pose this: how could Eratosthenes have measured the size of the earth if the atmosphere was constantly cloudy, like on Venus? If he only had the surface of the earth, and no extrinsic sun to help him, what techniques would lie open to him to discover the characteristics of the earth's surface? Just to make it a little harder, let's say that we ourselves are two-dimensional creatures, rather than three-dimensional. A popular example of this is in the book Flatland by E. A. Abbott. This author writes of a world in which only two dimensions exist: the residents are lines, triangles, quadrilaterals, and other polygons, with the leaders having many sides and approaching circular form. But what if there was a mistake? What if Flatland were really Sphereland? Each shape, moving around on a vast sphere, wouldn't notice one spot to be different than another, yet they could still get clues that something wasn't quite flat. Two such clues are the application of the Pythagorean theorem and the displacement of directions.
So, the Pythagorean theorem: the Pythagorean theorem relates three squares, arranged to form a right triangle. Everyone has heard in school that square A plus square B equals square C, but how do you know that it's really true? Here, let's form two larger squares by adding a number of equal triangles. These two larger squares are the same size and area. Removing four equal triangles from both, what remains should also be equal – so the A and B squares together do have the same area as the C square. But as we saw earlier, this is certainly not true on a sphere. Remember our triangle with three right angles – which side is A, which B, and which C? The Pythagorean theorem certainly doesn't hold here. The way it must be modified is a clue for the Surfaceland polygons – it's a technique for discovering the shape of their world. But remember, like us trying to imagine curved space, how could they imagine a curved surface? The anomalies tell them what's happening, even though they can't visualize a sphere, since it is three-dimensional while they are two-dimensional. A second technique involves direction. You could walk through a town or a building while keeping a sense of which way north is by keeping track of all the turns you've made. Here is an example on a plane. Now let's do the same thing on a sphere. We're moving around on the sphere, while always keeping the pointer in the same direction when turning. Now, when we get back, we aren't pointing in the original direction anymore! Let's see that again – although the pointer wasn't twisted along the way, its direction still changed. So both of these techniques – the Pythagorean theorem and transported directions – are intrinsic to the surface: they do not make any reference to anything outside the surface, including even the concept of space outside it. In summary, anything the polygon creatures can do or learn is intrinsic. Gauss then makes two amazing points: one of them is that none of the intrinsic characteristics change if the surface is bent around, so long as you don't stretch it. That is, bending a plane into a cylinder and back doesn't change anything in the surface itself: the distances between points, the shortest line between two points, angles, etc. – everything remains the same. The Surfaceland residents would never notice a difference. Similarly, the catenoid and the helicoid have the same intrinsic characteristics, and one is formed by bending the other. Gauss's second breakthrough was to come up with a way to measure the curvature at every point intrinsically, like the flat polygons could, meaning that the surface could be its own complete world, and giving it a certain shape in space (such as the cylinder versus the plane) is unnecessary. We don't need normals or osculating circles. The surface can be understood from within. This breakthrough is key for Riemann's examination of curved spaces. Triply extended manifolds (such as space) can be curved! But now, we can only think of intrinsic curvature, since we can't step outside of space into a fourth dimension to look down on it, as we can look down on doubly extended surfaces. Instead, we are in the same shoes as our polygon friends for determining the shape of space around us. Riemann, in his paper, gives the most general possible way of determining curvature from within a manifold. It is the anomalous characteristics of action and motion that define the geometry. With all these possibilities, what is the shape of actual space, as opposed to a mathematical daydream?
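Both clues can be made quantitative. Neither formula appears in the article itself; they are standard results for a sphere of radius $R$, added here as editorial illustration of what the Surfaceland residents would actually measure. For a small right triangle with legs $a$ and $b$ and hypotenuse $c$ drawn on the sphere,

$$\cos\frac{c}{R} = \cos\frac{a}{R}\,\cos\frac{b}{R} \quad\Longrightarrow\quad c^{2} \approx a^{2} + b^{2} - \frac{a^{2}b^{2}}{3R^{2}} \qquad (a, b \ll R),$$

so the hypotenuse comes out measurably shorter than the flat Pythagorean prediction, and the size of the deficit reveals $R$ without ever leaving the surface. Likewise, a direction carried around a closed loop returns rotated by the enclosed area times the curvature, $\Delta\theta = \mathrm{Area}/R^{2}$: for the triangle with three right angles, which bounds one-eighth of the sphere (area $\pi R^{2}/2$), the pointer comes back turned by $\pi/2$, a quarter turn.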
Section III: Understanding Curved Spaces

As we apply these considerations to the actual shape of physical space, let's first contrast the two different concepts of unboundedness and infiniteness, which are often confused. Take a sphere as an example – motion on the sphere meets with no boundaries, finds no limits, and yet the sphere is not infinite – it has a total size that is measurable, while motion is unbounded. While space itself appears unbounded in every respect, we cannot conclude from this that it is infinite! Space might be finite. Now, as another consideration: let's ask whether objects are independent of position. Take as an example the difference between an orange and a watermelon, or some other kind of fruit. On the orange, the skin at one location can be moved to another without stretching – there's no difference at all, the parts are all the same. But this isn't the case with our other example. The almost-flat region has a certain relationship between circumference and area, but here, the same circumference has a greater area! So stretching would be involved in moving from one place to the other. In three dimensions, this would be like moving an orange in space, and having the insides get stretched bigger, even while the skin on the outside stays the same size. Maybe we want to avoid that. We could avoid it by hypothesizing that objects are independent of position – that would mean that every portion of space has the same curvature as any other. Now, if that curvature were even slightly positive, then space would be finite. This would show up in astronomy, where the same star could appear to be in two opposite directions due to the curved nature of the intervening space, as you see here. Setting out in any direction, you will return to your starting point. Even if we didn't go all the way around, the same star could appear on multiple different paths, appearing as a halo, rather than a point. But we don't see duplicated stars or such halos in astronomy, even when we look far away. So if space were uniform, it would have to be flat, or almost entirely so. But what if space isn't uniform? What if objects aren't independent of position? What if spatial relations change from place to place? Then we couldn't infer anything about relationships in the small from what we have discovered in the large from astronomy. In fact, in Riemann's day, breakthroughs were being made in chemistry and electromagnetism by hypotheses about the nature of activity at the very small scale. The metric relations in the small could have all sorts of characteristics, so long as on very large scales the curvature evened out to the near-zero curvature inferred from astronomy. The apparent flatness of geometry on the astronomical scale, as understood in Riemann's day, has no inherent truth on the micro-scale. So how can we figure out the small-scale metric relationships? In what way is space curved, and, more importantly, how can we discover why it is curved as it is? "In a discrete manifold, the basis of metric relations is contained in the concept of the manifold itself, while it must come from elsewhere in the case of a continuous manifold." To clarify that: whenever you name or conceptualize a discrete manifold, such as "the keys on a keyboard" or "the people in a room", you've already given the means of measurement with the conception of the manifold, because you can count its elements. But in a continuous manifold, such as length or position, you're given no idea of what the space is like, or how measurements ought to be made.
Riemann continues: "Either then, the reality underlying space must form a discrete manifold, or the basis of metric relations must be sought for outside it, in the binding forces that operate upon it." Yes, exactly! The basis for anything, its sufficient reason for being as it is rather than otherwise, does not lie in making many observations of it! It seems like Riemann's investigation won't be ending with a final conclusion! In fact, Riemann finishes his lecture with the limits of armchair mathematical theorizing. While studies such as his can remove unjustified presumptions, they cannot establish affirmative conclusions. Riemann closes: "This leads us into the domain of another science, the realm of physics, into which the nature of the present occasion does not permit us to enter." We'll now leave mathematics behind, and venture into the realm of physics, the realm of reality.

"The Realm of Physics": Beyond Mathematics

Consider Johannes Kepler and the birth of astrophysics. Kepler entered a field that had been studied, very explicitly, on the basis of understanding nature from the standpoint of the senses. His predecessors had put forward various models for the planetary system, based upon "saving appearances" – that is, their goal was to cause their models to present the same impression to the senses as the planets do. Kepler proved that even as they chased appearances, the models were always wrong, because of their method. And, furthermore, he knew that even a wrong hypothesis could look like the truth. He wrote about his own Vicarious Hypothesis: "Further the lack of any perceptible difference in effects between the as-yet unknown true hypothesis and the false one assumed by us does not make the effect identical. For there can be a small discrepancy which the senses do not perceive." There will always be things we have not measured. Even if our model matches observations perfectly, that does not mean it is true – not just because the observations will get better in the future, but because matching observations, although necessary, is never itself the standard of truth. Although Kepler made a working mathematical model for the planets, he was not content without a hypothesis of the purpose lying behind their motions! Why do they move the way they do? In his New Astronomy, he hypothesized that a power in the sun caused the motions. In his Harmonies of the World, he details the harmonic principles of composition that required the specific orbits displayed by the planets. In this case, cause is purpose – it is a why, rather than a what or a how. Newton's inverse-square generalization may describe how motion changes from moment to moment, but not why, as Newton himself admitted. Kepler demonstrated the failure of mathematics, and succeeded as a scientist! His simple, beautiful question, "why so, rather than otherwise?", moved him beyond inductive generalizations of a world according to sense-impression, and into a conception of the world as composed on principles of beauty, as rigorous as they are free. Pierre de Fermat demonstrated the value of purpose as a scientific concept. The refractive bending of light as it moves from one material to another had puzzled thinkers for centuries – why does light bend the way it does when it enters water, for example? Fermat discovered the principle of least time. Among all the paths light might have taken from its origin to its destination, the paths it follows are those that make the journey the quickest.
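To see the principle do real work, here is the classic derivation of the law of refraction from least time; it is standard textbook material, added here for illustration rather than taken from the article. Light runs from a point at height $a$ above the interface to a point at depth $b$ below it, crossing at horizontal position $x$ (the endpoints are a horizontal distance $d$ apart), with speeds $c/n_1$ and $c/n_2$ in the two media:

$$T(x) = \frac{n_1}{c}\sqrt{a^{2}+x^{2}} + \frac{n_2}{c}\sqrt{b^{2}+(d-x)^{2}},$$

$$\frac{dT}{dx} = 0 \;\Longrightarrow\; n_1\,\frac{x}{\sqrt{a^{2}+x^{2}}} = n_2\,\frac{d-x}{\sqrt{b^{2}+(d-x)^{2}}} \;\Longrightarrow\; n_1\sin\theta_1 = n_2\sin\theta_2.$$

The bending angle is fixed not by anything at the crossing point alone, but by a comparison among paths the light never took, which is what gives least time the character of a motive rather than an appearance.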
In this case, least time is nothing sense-perceptual. It has the character of a motive, not an appearance. Gottfried Leibniz demonstrated that the assumption of an a priori, independent space and time led to absurdities. When the Newtonian Samuel Clarke tried to demonstrate God's great power in the free choice that God had in deciding where to make everything at the moment of creation, Leibniz responded that that's no decision at all. Moving the universe two feet to the right is the same as moving space two feet to the left. The relation between objects wouldn't change at all; nothing would ever know. It has absolutely no meaning. The absurdity arises from the assumption of a space prior to, and independent of, things to be related in space. There is no absolute space. This is also seen when you compare Descartes's laws of motion with those of Leibniz. Descartes's biggest problem (well, one of his biggest problems) is that he believed in absolute motion and absolute rest, which drove him to conclusions that are really nuts. Leibniz knew that motions are only relative, while the cause of the motions can be real and absolute. Physics must be the foundation of geometry, and that was the basis of Leibniz's development of the infinitesimal calculus. Take Albert Einstein. His theory of relativity discards distinct geometric time and space, instead using the physical process of light propagation to give a physical meaning to space-time, and, in doing so, shows how space-times differ for different observers. They exist as action-spaces, not geometric spaces. Physical action is primary, and geometries are created to reflect our hypotheses of the true relations between unfolding actions in the universe. You've got to make geometry match the physics, not the other way around. Today, Einstein's general theory of relativity is the most commonly cited example of curved space-time (although not always cited correctly). Now move to biology, and beyond. Vladimir Vernadsky's passionate search for understanding the nature of life and cognition led him to hunt for geometries capable of expressing activities of life that he knew simply could not exist in a Euclidean space. One example is chirality, the handedness of living processes. Pasteur and Curie had demonstrated that, unlike abiotic processes, living processes show a preference between left- and right-handed versions of the same molecule, a preference which could not exist in simple Euclidean space. Vernadsky also wrote much about a kind of living time distinct from abiotic time. In evolutionary living time, for example, before and after are not merely distinguished chronologically, as before being not-after and after being the opposite of before; rather, after is fundamentally different from before, being a time in which higher developments of new life processes exist. This is seen much more strongly in human time. In our economic time, the power of the human species – and we are ourselves a physical force – changes categorically with new discoveries of principle. Economic times differ qualitatively, not quantitatively. And such human time doesn't just "happen" like the ticking of a clock; it has to be created through discovery and driven by passion! This is the spacetime of economic development. So Euclid wasn't just wrong in his specific axioms; any different geometry starting with geometric axioms, rather than the principles that shape real physical action, would err as well.
With Riemann, “geometry” itself completely changes its meaning – it isn't the stage upon which events unfold, it's the shape of action itself! What can we say about the process of gaining knowledge about this shape? Is our goal an ultimate understanding of nature, which we observe, hypothesize, and approach, as if asymptotically, although never reaching this ultimate knowledge? No: we are part of what we're studying! Consider the most powerful of physical forces: the human mind. Creative thought is a physical force: it has physical effects just like electromagnetism, plasma, biological processes. A true Riemannian geometry, based firmly on the principles that lie behind perceived appearances, must take creative mind into account. Our goal isn't a final geometry of the world out there: it must include the developing powers of reason – the kernel of economic development. There must be no separation between physics and the study of the mind. Physical science that rejects Mind can never find true principles, but will always be stuck in a bog of statistical induction and correlation based on the senses. Similarly, social philosophy or sociology which isn't informed by the study of fertile creative thought will become a study of neuroses or pedestrian irrelevancies, with a complete lack of any useful direction. There is only one world to discover and act on. Mind discovers, mind acts, mind creates. Riemann brings us into reality, and shows that the principles underlying reality cohere with the mind. While he concluded his lecture with the need to abandon mathematics for physics, to truly achieve Riemann's program, we must go beyond physics to economics; we must include the progressing development of the powers of the human mind. Thus, the importance of Riemann for economic science. Habilitation Dissertation:
{"url":"http://larouchepac.com/node/20005","timestamp":"2014-04-16T21:57:39Z","content_type":null,"content_length":"48451","record_id":"<urn:uuid:44908b7e-c61f-4111-bbf4-da65bf624691>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question: Using geometry (the interpretation of a double integral as a volume), evaluate the double integral ∫∫D sqrt(16−x^2−y^2) dA over the circular disk D: x^2+y^2 ≤ 16.

hmm you can convert this in polar coordinates right?

x = r cos(theta), y = r sin(theta), if i am not wrong.. the limit of r would be 0-4 and the theta limit 0-2pi

Yeah, so remember: \[dA = |\mathcal{J}(r,\phi)|\, dr\, d\phi = r\, dr\, d\phi\] \[x = r\cos\phi; \quad y = r\sin\phi\] \[D: \left\{ r^2 \le 16 \right\} \implies D: \left\{ r \le 4 \right\}\] A full circle implies \[0 \le \phi \le 2\pi\] So your integral would be: \[\int\limits_0^{2\pi} \int\limits_0^4 \sqrt{16-r^2}\, r\, dr\, d\phi\] as \[r^2 = x^2+y^2\]

what do i do with the square root? also the problem says to use geometry. but i'm not sure what the height would be

ok let z = ur integrand = sqrt(16-x^2-y^2). what's the geometric relationship between x, y, and z?

k one more hint: i said z = sqrt(16-x^2-y^2). let me rewrite this as: x^2+y^2+z^2 = 16 (with the understanding that z can't be negative). does this equation ring a bell?
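The thread stops at the hint, so, for completeness, here is the closing step (an editorial addition, not part of the archived exchange): since $z = \sqrt{16-x^2-y^2}$ traces the upper half of the sphere $x^2+y^2+z^2 = 16$, the double integral is just the volume of a hemisphere of radius 4,

$$\iint_D \sqrt{16-x^{2}-y^{2}}\,dA = \frac{1}{2}\cdot\frac{4}{3}\pi\,4^{3} = \frac{128\pi}{3},$$

and the polar computation agrees: $\int_0^{2\pi}\!\int_0^4 \sqrt{16-r^{2}}\,r\,dr\,d\phi = 2\pi\left[-\tfrac{1}{3}(16-r^{2})^{3/2}\right]_0^4 = 2\pi\cdot\tfrac{64}{3} = \tfrac{128\pi}{3}$.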
{"url":"http://openstudy.com/updates/50b109d8e4b0e906b4a60d2b","timestamp":"2014-04-18T18:50:22Z","content_type":null,"content_length":"40106","record_id":"<urn:uuid:0e97f916-2a1f-4588-85cf-0e3d8538351f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
Princeton University - Complex Fluids Group

Knut Drescher
Biological fluid mechanics. Hydrodynamic interactions of microorganisms and multicellularity.
Office: G02 E-Quad Email: knutd[at]

Janine Nunes (Post-doc)
I am interested in the controlled synthesis and fabrication of novel micro-objects, such as microfibers and core-shell/hollow microspheres, using multiphase microfluidics to template the precursor liquid phases.
Office: G02 E-Quad Email: nunes[at]

Eujin Um (Post-doc)
I am interested in finding new applications of droplet microfluidics, especially in biology, fulfilling the needs of scientists in the field and industry beyond conventional tools or methods. My previous work includes the development of devices for merging exact numbers of droplets, isolating single cells into droplets, and screening them with a multifunctional droplet array. My research investigates well-designed control of droplet movement based on the hydrodynamics of two-phase fluids in microchannels.
Office: G02 E-Quad Email: eum[at]

Hassan Masoud (Post-doc)
I employ theory and computer simulations to find solutions to challenging problems at the intersection of engineering, physics, and biology. My research interests include the mechanics of soft and active materials, fluid-structure interactions, small-scale fluid mechanics, and biomimetic design.
Office: G02 E-Quad Email: hmasoud[at]

Shashi Thutupalli (Post-doc)
Quite unexpected collective behavior is often observed in complex open systems when many similar non-equilibrium units couple with one another, resulting in synchronization, pattern formation, emergence, broken symmetries, and phase transitions. My research is focused on experimental studies of instances of such collective behavior, and the problems that I am interested in stem mainly from the domains of condensed matter physics, non-linear dynamics, biology, and fluid dynamics.
Office: G02 E-Quad Email: shashi[at]

Hyoungsoo Kim (Post-doc)
I specialize in three-dimensional velocimetry for microfluidic flows. I hold special interests in understanding hydrodynamic instabilities (thin-film instability) and microfluidic applications (electrokinetic flow). I am very open to collaborating on other topics, e.g. biology (thin-film flow in vivo, or the flow field surrounding a living organism). However, studying fundamental problems in fluid dynamics is still at the core of my research.
Office: G02 E-Quad Email: hskim[at]

Alban Sauret (Post-doc)
My research addresses various fundamental problems of fluid mechanics at different scales. I have previously studied the dynamics of microfluidic flows at low interfacial tension and its applications to all-aqueous emulsions. I am also interested in dense flows of granular materials as well as rotating and geophysical flows. Currently, I am mostly interested in problems involving the coupling of capillary-driven flows and elastic structures. All these works generally involve a combination of modeling, experiments, and numerical approaches.
Office: G02 E-Quad Email: asauret[at]

On Shun Pak (Post-doc)
I am interested in both fundamental problems in fluid mechanics, primarily flows at low Reynolds number, and the roles of fluid mechanics in biological phenomena, such as the locomotion of microorganisms.
Office: G02 E-Quad Email: opak[at]

Sangwoo Shin (Post-doc)
My research focuses on energy issues that are relevant to humans.
At large scale, I am interested in energy management technologies that are environmentally benign, such as refrigerant-free cooling, waste heat recovery, and clean-energy harvesting technologies. At small scale, I am interested in the energetic behavior of cell membranes.
Office: G02 E-Quad Email: sangwoos[at]

Francois Boulogne (Post-doc)
A part of my research focuses on phenomena involving capillarity and the elasticity of simple or complex fluids in both static and dynamic situations. I am also interested in drying processes, for which I study the relation between the geometry and the arising patterns (buckling, creases, cracks...).
Office: G02 E-Quad Email: boulogne[at]

Francois Ingremeau (Post-doc)
My research aims at understanding the relation between the macroscopic properties of complex systems, such as biofilms, and their microscopic structures. For example, the mechanical properties of biofilms reflect their microscopic state. Such properties are probably influenced by the fluidic environment they grow in. Through experiments, I am currently measuring the mechanical properties at different scales, from the whole biofilm to the matrix the bacteria are embedded in. Measuring these properties could help us understand how biofilms form and grow.
Office: G02 E-Quad Email: fi[at]

Talal Al-Housseiny (Graduate student)
I have a broad spectrum of interests in technical challenges that involve both energy research and transport phenomena. I am studying viscous and capillary instabilities that occur in fluid-fluid displacement in porous media, with applications to Enhanced Oil Recovery and Carbon Sequestration. I am also working on integrating Microbial Fuel Cells in microfluidic devices to (a) enhance fuel cell efficiency and (b) study bacterial biofilm growth and its effect on electron transport. I have a side interest in swimming organisms and their collective behavior (swarming).
Office: G02 E-Quad Email: talal[at]

Jie Feng (Graduate student)
My research focuses on the fabrication of nanoemulsions using an interface technique. In a system with a thin oil layer on top of water, the bursting of gas bubbles at the interface of water and air disperses nanoemulsions of oil in the water phase. I am currently looking at the influence of different parameters on the size of the nanoemulsions, such as bubble size, viscosity, and surfactants, to get more insight into the mechanism for better control. I am also interested in the near-surface flow characteristics of slippery liquid-infused porous surfaces and transport phenomena in porous soft matter.
Office: G02 E-Quad Email: jiefeng[at]

Naima Hammoud (Graduate student)
I am interested in the area of thin films, with a primary focus on stability. Currently, I am working on thin films interacting with boundary-layer flows, which come up in coating applications. I am also studying instabilities that occur due to intermolecular interactions, and I am specifically interested in how to inhibit dewetting.
Office: G02 E-Quad Email: nhammoud[at]

MinYoung Kevin Kim (Graduate student)
My areas of interest span interdisciplinary fields including fluid mechanics, chemistry, biology, and materials science. Currently, I am focused on biofilm streamers and the twitching ability of a specific bacterium. I want to try out hydrodynamic interactions of different types of cells, including neuron cells, cancer cells, and bacterial cells.
Office: G02 E-Quad Email: myk[at]

Josephine Lembong (Graduate student)
I am interested in biofluids problems, mainly those related to blood flow.
My current work is on red blood cell aggregates called rouleaux, i.e. how they form and break up during flow, and also the modulation of aggregation using long-chain dextran. I am also studying cell chemosensing in hydrogel and its dependence on cell mechanical properties.
Office: G02 E-Quad Email: lembong[at]

Jessica Shang (Graduate student, co-advised by Alexander Smits)
I am interested in a range of problems that involve flow over deformable boundaries. In one of my projects, I am interested in the fluid-structure interactions between a fluid environment and a highly flexible fiber, such as the flow around kelp, grass, whiskers, and other biological structures. More recently, I have been exploring the dynamic response of a thin film on a body subjected to external flows and its potential ability to reduce drag.
Office: D100 E-Quad Email: jshang[at]

Suin Shim (Graduate student)
I'm currently studying the effects of surfactants (SDS, ...) on CO2 gas bubble dissolution in microfluidic channels. I'm also interested in droplet behavior on patterned surfaces and the spontaneous deformation of elastic materials due to capillary forces.
Office: G02 E-Quad Email: sshim[at]

Jason Wexler (Graduate student)
I am involved in a range of research projects centered around studying the deformation of fluid interfaces and flexible objects in viscous flow. In one of my current projects I investigate the effects of flexibility on capillary adhesion between solid objects. In another I study a new type of drag-reducing surface. I have also done some work investigating the deformation of fibers in flow, and the coating of magnetic spheres in a microfluidic device.
Office: G02 E-Quad Email: jwexler[at]

Zhong Zheng (Graduate student, co-advised by Robert H. Socolow)
My research focuses on the fundamental understanding, control, and design capabilities of multi-phase flow dynamics in porous media. I combine theory, experiment, and numerical simulation to study basic flow patterns, such as viscous fingering, crack propagation, and gravity currents, and their application to control and design problems on energy, health, and art topics, such as oil and gas recovery, CO2 storage, and Chinese painting. I'm also interested in energy system and policy research from the theoretical modeling point of view.
Office: G02 E-Quad Email: zzheng[at]

Wen Zeng (Visiting student from China)
My research topic is mainly the droplet-based microfluidic closed-loop control system and its applications, including droplet generation in a T-junction microfluidic channel, the linearization between droplet length and the flow-rate ratio of two immiscible fluids, and the control algorithm of the closed-loop control system.
Office: G02 E-Quad Email: wenz[at]
{"url":"https://www.princeton.edu/~stonelab/people.html","timestamp":"2014-04-16T15:28:12Z","content_type":null,"content_length":"47894","record_id":"<urn:uuid:1ab98310-463c-453a-8efc-0a59e8d38c74>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Make a Ratio

A ratio is a mathematical expression that represents the relationship between two numbers, showing the number of times one value contains or is contained within the other. One example of a ratio is the ratio of apples to oranges in a fruit basket. Knowing how to make a ratio can help you understand many different concepts, such as how much to increase the measurements in a recipe if you're doubling the portion size, or how many snacks you'll need to provide for a certain number of guests. If you want to know how to make a ratio, just follow these steps.

Method 1 of 2: Make a Ratio

1. Use a symbol to denote the ratio. To indicate that you're using a ratio, you can use the division sign ( / ), a colon ( : ), or the word to. For example, if you wanted to say, "For every five men at the party, there are three women," then you could use any of the three symbols to state this. Here's how you would do it:
□ 5 men / 3 women
□ 5 men : 3 women
□ 5 men to 3 women
2. Write the first given quantity to the left of the symbol. Write down the quantity of the first item before the symbol of your choice. You should also remember to state the units, the kind of thing you're counting, whether it's men or women, chickens or goats, or miles or inches.
3. Write the second given quantity to the right of the symbol. After you've written the first given quantity followed by the symbol, you should write the second given quantity, along with its units.
□ Example: 20 g of flour / 8 g of sugar
4. Simplify your ratio (optional). You may want to simplify your ratio to do something like scale down a recipe. If you're using 20 g of flour for a recipe, then you know you'll need 8 g of sugar, and you're done. But if you'd like to scale down the ratio as much as possible, then you'll need to simplify it by writing the ratio in its lowest possible terms. You should use the same process as you would use to simplify a fraction. To do this, you have to find the GCF, or greatest common factor, of both quantities, and then see how many times that number fits into each given quantity.
□ To find the GCF of 20 and 8, write down all of the factors of both numbers (the numbers that can multiply to make those numbers and thus can be evenly divided into those numbers) and find the largest number that is evenly divisible into both. Here's how you do it:
☆ 20: 1, 2, 4, 5, 10, 20
☆ 8: 1, 2, 4, 8
□ 4 is the GCF of 20 and 8 -- it's the largest number that evenly divides into both numbers. To get your simplified ratio, simply divide both numbers by 4:
□ 20/4 = 5
□ 8/4 = 2
☆ Your new ratio is 5 g flour / 2 g sugar.
5. Turn the ratio into a percentage (optional). If you'd like to turn the ratio into a percentage, you just have to complete the following steps:
□ Divide the first number by the second number. Ex: 5/2 = 2.5.
□ Multiply the result by 100. Ex: 2.5 * 100 = 250.
□ Add a percentage sign. 250 + % = 250%.
□ This indicates that for every 1 unit of sugar, there are 2.5 units of flour; it also means that there is 250% as much flour as there is sugar.

Method 2 of 2: Additional Information About Ratios

1. The order of the quantities doesn't change the relationship, so long as the units stay attached. The ratio simply represents the relationship between two quantities: "5 apples to 3 pears" describes the same basket as "3 pears to 5 apples," so 5 apples / 3 pears says the same thing as 3 pears / 5 apples. (Without the unit labels, though, 5:3 and 3:5 are different ratios, so keep the labels with the numbers.)
2. A ratio can also be used to describe probability.
For example, the probability of rolling a 2 on a die is 1/6, or one out of six. Note: if you're using a ratio to denote probability, then the order of quantities does matter.
3. You can scale a ratio up as well as down. Though you may be used to simplifying numbers whenever you can, it can benefit you to scale a ratio up. For example, if you know that you'll need 2 cups of water for every 1 cup of pasta you boil (2 cups water / 1 cup pasta), but you want to boil 2 cups of pasta, then you'll need to scale up the ratio to know how much water to use. To scale up a ratio, simply multiply the top and bottom by the same number.
□ 2 cups water / 1 cup pasta * 2/2 = 4 cups water / 2 cups pasta. You'll need 4 cups of water to boil 2 cups of pasta.
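The arithmetic in the steps above (GCF simplification, percentage conversion, scaling) is mechanical enough to put in code. Here is a short Python sketch of the same three operations, offered as an illustration of the article's method rather than as part of the original article:

```python
from math import gcd

def simplify(a, b):
    """Reduce a ratio to lowest terms by dividing out the GCF."""
    g = gcd(a, b)
    return a // g, b // g

def as_percentage(a, b):
    """Express the first quantity as a percentage of the second."""
    return a / b * 100

def scale(a, b, factor):
    """Scale both parts of a ratio by the same number."""
    return a * factor, b * factor

print(simplify(20, 8))      # (5, 2): 20 g flour / 8 g sugar -> 5 : 2
print(as_percentage(5, 2))  # 250.0: 250% as much flour as sugar
print(scale(2, 1, 2))       # (4, 2): 4 cups water for 2 cups pasta
```

Run on the article's own numbers, it reproduces each worked example.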
{"url":"http://www.wikihow.com/Make-a-Ratio","timestamp":"2014-04-18T13:12:05Z","content_type":null,"content_length":"66837","record_id":"<urn:uuid:018dfb40-6423-4628-a3cf-9c86a735f09d>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Interpretation of interaction term in log linear (non linear) model
From: Suryadipta Roy <sroy2138@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Interpretation of interaction term in log linear (non linear) model
Date: Thu, 13 Jun 2013 15:23:35 -0400

Dear David,
Thank you very much for the wonderful suggestions! In the literature that I am working on, some of the important papers have already been applying the Poisson maximum likelihood estimator to explain bilateral imports (to deal with the problem of zero trade in the whole sample), and I am just following the tradition by applying it for my research question here (with some interesting differences in the results compared to the log-linear model). A couple of important works from the ones that I have come across are, e.g., "The log of gravity" by Santos Silva and Tenreyro (Review of Economics and Statistics, 2006), and "Estimating the gravity model with gravity using panel data" by Westerlund and Wilhelmsson (Applied Economics, 2011).
Best regards,
Suryadipta.

On Wed, Jun 12, 2013 at 9:21 PM, David Hoaglin <dchoaglin@gmail.com> wrote:
> Dear Suryadipta,
> A key idea underlying Bill Gould's blog post is that -poisson- enables you to use quasi-likelihood. That command does not require that the data follow a Poisson distribution.
> Please read carefully the part of the blog post that discusses zero values. I doubt that simply fitting your model by using -poisson- will adequately handle your "many zeros."
> Regards,
> David Hoaglin
> On Wed, Jun 12, 2013 at 12:06 PM, Suryadipta Roy <sroy2138@gmail.com> wrote:
>> Dear David,
>> I have used both the log linear model (that applies only to country pairs with nonzero bilateral imports) as well as fixed effects Poisson to include the zero observations (based on some wonderful suggestions from Bill Gould in the Stata Blog here http://blog.stata.com/2011/08/22/use-poisson-rather-than-regress-tell-a-friend/).
>> Earlier, Dimitriy had suggested an intuitive interpretation of the coefficient terms in terms of semi-elasticity, and I can probably persist with that interpretation for Poisson regression.
>> Best regards,
>> Suryadipta.
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/
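As background for the thread (my own gloss, not part of the original messages): the Poisson pseudo-maximum-likelihood approach of Santos Silva and Tenreyro only assumes that the conditional mean is correctly specified, which is why zero trade flows can remain in the estimation sample. In rough notation,

$$E[y_i \mid x_i] = \exp(x_i'\beta), \qquad \sum_i \bigl(y_i - \exp(x_i'\hat\beta)\bigr)\,x_i = 0,$$

so the estimator is consistent whether or not $y_i$ is actually Poisson-distributed, and $y_i = 0$ poses no problem because no logarithm of the dependent variable is taken.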
{"url":"http://www.stata.com/statalist/archive/2013-06/msg00629.html","timestamp":"2014-04-16T07:35:21Z","content_type":null,"content_length":"14383","record_id":"<urn:uuid:10c9b25e-3298-41d8-a12c-e4777415a14a>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
Port Washington, NY ACT Tutor

Find a Port Washington, NY ACT Tutor

...I believe that learning mathematics is about understanding important concepts, not memorizing a bunch of formulas, and that obtaining the right answer is less important than trying to understand a problem and devising a method to solve it, even if that method isn't what the teacher is looking for...
16 Subjects: including ACT Math, calculus, geometry, statistics

...I have been playing guitar for over 14 years and I compose and perform regularly. I have taught private guitar lessons to students of all ages. When I'm not thinking about music or math I love reading Philosophy and playing Halo 4. Algebra becomes second nature when you start doing upper-level mathematics.
22 Subjects: including ACT Math, calculus, geometry, algebra 2

...Additionally, I have taken both the ACT and SAT, scores of which can be provided upon request. I have tutored in a variety of subjects across the board for students ranging from middle school through college level. I would have to say my favorite subjects to teach are mathematics and biology.
45 Subjects: including ACT Math, English, reading, geometry

...My goal is to obtain results, and oftentimes that requires getting more creative and fun than can be had in a regular classroom. I work well with all types of students, including those with disabilities (given my experience in the Teaching Fellows), gifted students (as a lifelong product of gift...
50 Subjects: including ACT Math, English, reading, GRE

I am an SAT, ACT, SSAT, ISEE, SHSAT, LSAT, GMAT, and GRE expert and have tutored hundreds of students for a myriad of other standardized tests and academic subjects. My students have consistently achieved results at the apex of the learning and test curve, winning admission to highly competitive ac...
52 Subjects: including ACT Math, reading, English, writing
{"url":"http://www.purplemath.com/port_washington_ny_act_tutors.php","timestamp":"2014-04-16T13:40:09Z","content_type":null,"content_length":"24235","record_id":"<urn:uuid:17162e2c-e07a-440a-b475-6643d0b8e101>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
Sorting an array of unique random numbers at insertion

September 30th, 2011, 02:44 PM, #1, Junior Member (Join Date: Sep 2011)

I have a piece of code that works well to fill an array with unique random numbers. My problem now is that I want to sort these numbers, not after the array is full but as new numbers are being inserted. So as each new number is inserted into the array, it finds the position it is meant to be in. The code I have for creating unique random numbers is below; thank you in advance:

#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
using namespace std;

#define MAX 2000  // Values will be in the range (1 .. MAX)

static int seen[MAX];        // These are automatically initialised to zero
                             // by the compiler because they are static.
static int randomNum[1000];

int main(void)
{
    int i;
    srand(time(NULL));  // Seed the random number generator.
    for (i = 0; i < 1000; i++)
    {
        int r;
        do
        {
            r = rand() / (RAND_MAX / MAX + 1);
        } while (seen[r]);
        seen[r] = 1;
        randomNum[i] = r + 1;
    }
    for (i = 0; i < 1000; i++)
        cout << randomNum[i] << endl;
    return 0;
}

Re: Sorting an array of unique random numbers at insertion

If you're looking for efficiency, a std::set may be more suitable than an array.

Re: Sorting an array of unique random numbers at insertion

Quote: "If you're looking for efficiency, a std::set may be more suitable than an array."

Not just for efficiency reasons. Std::set has the two properties the OP is asking for - elements are unique and they're kept sorted at all times. So it's just to keep inserting random ints until the set reaches the wanted size. Whenever the set is iterated the ints will appear in sorted order (and they will be unique).
Last edited by nuzzle; September 30th, 2011 at 03:17 PM.

Re: Sorting an array of unique random numbers at insertion

Quote: "...I want to sort these numbers, but not after the array is full but as new numbers are being inserted."

What is the difference? Anyway, instead of assigning your new random number to randomNum[i], you could loop from randomNum[i] to randomNum[0] looking for the insertion point. If the new number is greater - assign; if it is less - move last element up and keep going.
Vlad - MS MVP [2007 - 2012] - www.FeinSoftware.com
Convenience and productivity tools for Microsoft Visual Studio:
FeinViewer - an integrated GDI objects viewer for Visual C++ Debugger, and more...
Re: Sorting an array of unique random numbers at insertion

Quote: "What is the difference?"

Well, running time of course. Assuming a naive insertion sort is used, keeping the data sorted after every insertion results in a running time of O(n*(n+log n)) at best. Sorting once after all insertions are done is O(n + n*log n), much faster. If a std::set is used, keeping the data sorted after every insertion is O(n*log n) but in some cases cache locality may not be as good. If an array is required, "library sort" (basically insertion sort where some empty slots are left between valid elements) can be done in O(n*log n) but in practice it's still slower.
Last edited by Lindley; September 30th, 2011 at 03:36 PM.

Re: Sorting an array of unique random numbers at insertion

Quote: "Well, running time of course. Assuming a naive insertion sort is used, keeping the data sorted after every insertion results in a running time of O(n*(n+log n)) at best. Sorting once after all insertions are done is O(n + n*log n), much faster. If a std::set is used, keeping the data sorted after every insertion is O(n*log n) but in some cases cache locality may not be as good."

That's not correct. O(n*(n+log n)) and O(n + n*log n) are not even proper complexity measures. It's O(n*n) and O(n*log n) respectively. In the first case, even if you use an O(log n) binary search to locate the insertion point you need to insert it, which is an O(n) operation. Repeating n insertions gives a complexity of O(n * n). In the second case you place the items in the array in arbitrary order, which is an O(n) operation. Afterwards you sort at O(n * log n). Taken together this gives O(n * log n) complexity. Inserting all items in an ordered set has O(n * log n) complexity, as you stated.

Re: Sorting an array of unique random numbers at insertion

Some would argue that when it comes to reads, a sorted vector is faster than a set, because of locality (I believe Meyers mentions it too in one of his "more effective"). This is an adaptor that transforms the vector's interface into that of a set. That said, it has methods specifically dedicated to NOT sort after every insertion, given its higher price.
Is your question related to IO? Read this C++ FAQ LITE article at parashift by Marshall Cline. In particular points 1-6. It will explain how to correctly deal with IO, how to validate input, and why you shouldn't count on "while(!in.eof())". And it always makes for excellent reading.

Re: Sorting an array of unique random numbers at insertion

Quote: "So as each new number is inserted into the array, it finds the position it is meant to be in."

Note that strictly the randomNum[] array isn't necessary, since seen[] already holds all information about which ints have been randomly selected. Also seen[] is inherently sorted. Each newly selected int will be at the "position it is meant to be in" because the position is the int.
Last edited by nuzzle; September 30th, 2011 at 04:35 PM.
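A minimal, compilable sketch of the std::set approach suggested in the thread (the range and count mirror the original post; everything else is illustrative and is not a quote of any poster's code):

#include <cstdlib>
#include <ctime>
#include <iostream>
#include <set>

int main()
{
    const std::size_t COUNT = 1000; // how many unique values we want
    const int MAX = 2000;           // values drawn from 1 .. MAX

    std::srand(static_cast<unsigned>(std::time(0)));

    std::set<int> nums; // stays unique and sorted after every insertion

    // Duplicates are simply rejected by the set, so we loop until
    // COUNT distinct values have been accepted.
    while (nums.size() < COUNT)
    {
        int r = std::rand() / (RAND_MAX / MAX + 1) + 1;
        nums.insert(r);
    }

    // Iteration visits the values in ascending order.
    for (std::set<int>::const_iterator it = nums.begin(); it != nums.end(); ++it)
        std::cout << *it << '\n';
    return 0;
}

Each insert is O(log n), so building the whole set is O(n log n), matching the complexity figures discussed above.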
{"url":"http://forums.codeguru.com/showthread.php?516860-Sorting-an-array-of-unique-random-numbers-at-insertion&mode=hybrid","timestamp":"2014-04-16T20:39:26Z","content_type":null,"content_length":"119333","record_id":"<urn:uuid:14d27aa7-6db6-46c0-9614-8a9df062dd14>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
Are extensions of profinite groups profinite?

[Question, score +2]

Assume $X$, $E$ and $G$ are topological groups and $1\to X\to E\to G\to 1$ a short exact sequence of continuous group homomorphisms. Under which of these conditions is $E$ a profinite group?

(i) $G$ profinite, $X$ finite
(ii) $G$ profinite, $X$ pro-$p$ for some prime $p$
(iii) $G$ profinite, $X$ profinite

In those cases where $E$ is profinite, is there a way to write down the inverse system for $E$ explicitly in terms of the inverse system of $G$ (and $X$)?

Thank you very much.

Tags: gr.group-theory, profinite-groups

2 Answers

[Answer 1, score +8]

The answer to all three questions is no in general. You need to assume in addition that $G$ carries the quotient topology from $E$. Otherwise, starting from any such exact sequence with $E$ compact and $G$ infinite, you can endow the compact group $E$ with the topology inherited from the embedding into $E\times G_d$, where $G_d$ is $G$ with the discrete topology; let $E'$ be the resulting group; then $1\to X\to E'\to G\to 1$ is an exact sequence of continuous group homomorphisms, $X$ and $G$ are compact but $E'$ is not. Here $X$ can even be the trivial group, so $E'=G_d$.

Comments:
- Dear Yves: In my answer I explicitly mention where I am assuming the OP meant for $G$ to have the quotient topology and $X$ the subspace topology (so our answers aren't incompatible!). I will edit to bring up those hypotheses at the outset. -- user29720, Jan 2 '13 at 18:10
- @Kreck I noticed this just after posting! this was the natural hypothesis to do indeed. -- Yves Cornulier, Jan 2 '13 at 18:50

[Answer 2, score +9]

[EDIT: I assume $X$ is given the subspace topology and $G$ the quotient topology.] The answer to (iii) (and (i) and (ii)) is "yes". Is this not treated in the book "Profinite groups" (which I've never looked at)? I'm less sure about the "explicit" request, since (even for finite $X$) it seems a bit hard to "see" an open subgroup of $E$ that omits a given nontrivial $x \in X$ (then we could shrink it a bit to also be normal and intersect with preimages in $E$ of open normal subgroups of $G$ to get what you want). Here is the proof for (iii) in the affirmative.

Step 1 (Hausdorff nonsense): Firstly, since $G$ is (I presume) given the quotient topology of $E$ and that is Hausdorff, its identity is closed and hence $X$ is closed in $E$. I assume $X$ is meant to have the subspace topology and so since its identity point is closed (as $X$ is Hausdorff) it follows that the identity of $E$ is a closed point. Thus, since $E$ is a topological group, it follows that $E$ is Hausdorff. That was pretty boring, and perhaps you were assuming $E$ to be Hausdorff at the outset. Quomodocumque.

Step 2 (reformulation): Now we recall that among Hausdorff topological groups, the profinite ones are precisely those that are compact and totally disconnected (i.e., only non-empty connected subsets are points). This is proved in Montgomery-Zippin and elsewhere I presume. So we just have to check that $E$ inherits each such property separately from $X$ and $G$, and these are elementary as follows. For the total disconnectedness it is equivalent to say that the only connected closed subgroup is the trivial one (since the connected component of the identity point is closed and visibly a subgroup). But such a subgroup of $E$ has trivial image in the totally disconnected $G$ and so lies in the totally disconnected $X$ and thus is trivial, so $E$ is totally disconnected.
Step 3 (properness): Next, we verify the compactness. There may be a clever way to see it using open covers or nets, but I don't see such an argument offhand (since I don't recall in what generality one knows that quotient maps between topological groups admit local continuous cross-sections), so here is a direct argument using topological properness (for Hausdorff spaces) in the sense of Bourbaki. Maybe the argument can be done more efficiently; I just give what comes to mind at the moment.

Recall that a separated continuous map between topological spaces is defined to be proper when it is universally closed (in the category of all topological spaces), and that this is equivalent to the map being closed with quasi-compact fibers. In particular, since properness is preserved under composition and $G$ is proper over a point, to prove the Hausdorff $E$ is compact it suffices to show that $f$ is proper. More specifically, since $f$ has compact fibers (translates of $X$), we just have to show that $f$ is closed. That is, if $C$ is a closed set in $E$ then we want to show that $f(C)$ is closed in $G$. Since $G$ has the quotient topology from $E$, this means that $f^{-1}(f(C))$ is closed in $E$. To prove this closedness, we'll use compactness of $X$ in another way.

The map $X \times E \rightarrow E \times_G E$ defined by $(x,e) \mapsto (xe, e)$ is a topological isomorphism (respecting 2nd projections), so if $C$ is a closed set in $E$ then $X \times C$ goes over to a closed set in $E \times_G E$, and this closed set is $S := f^{-1}(f(C)) \times_{f(C)} C$. Note that its image under the first projection to $E$ is $f^{-1}(f(C))$. But $E \times_G E$ is also identified with $E \times X$ respecting first projections (via $(e,x) \mapsto (e, ex)$, say), and this first projection is proper since $X$ is compact Hausdorff. In particular, this first projection is a closed map, so $f^{-1}(f(C))$ is closed in $E$ because of the closedness of $S$ in the fiber product. That completes the proof of compactness of $E$.
{"url":"http://mathoverflow.net/questions/117878/are-extensions-of-profinite-groups-profinite?sort=newest","timestamp":"2014-04-19T05:00:42Z","content_type":null,"content_length":"59727","record_id":"<urn:uuid:90481ea7-aaa0-4e6c-921a-9f4131120bd1>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
College Physics With MCAT Bind-In Card, 3rd Edition | 9780077263218 | eCampus.com

College Physics With MCAT Bind-In Card
by Giambattista, Alan
McGraw-Hill Science/Engineering/Math
List Price: $262.58 (only two copies in stock at this price)

Questions About This Book?

What version or edition is this? This is the 3rd edition, with a publication date of 1/14/2009.

What is included with this book?
• The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any CDs, lab manuals, study guides, etc.
• The Used copy of this book is not guaranteed to include any supplemental materials. Typically, only the book itself is included.
• The Rental copy of this book is not guaranteed to include any supplemental materials. You may receive a brand new copy, but typically, only the book itself.
Talking physics The use of mathematics Scientific notation and significant figures Dimensional analysis Problem-solving techniques Net force Inertia and Equlibrium: Newton''s first law of motion Vector addition using components Interaction pairs: Newton’s third law of motion Gravitational forces Contact forces Fundamental forces Acceleration and Newton’s Second Law of Motion Position and displacement Newton’s second law of motion Applying Newton’s second law Velocity is relative: reference frames Motion with a Changing Velocity Motion along a line due to a constant net force Visualizing motion along a line with constant acceleration Free fall Motion of projectiles Apparent weight Air resistance Circular Motion Description of uniform circular motion Centripetal acceleration Banked curves Circular orbits Nonuniform circular motion Angular acceleration Artificial gravity Conservation of Energy The law of conservation of energy Work done by a constant force Kinetic energy Gravitational potential energy (1) Gravitational potential energy (2) Work done by variable forces: Hooke’s Law Elastic potential energy Linear Momentum A vector conservation law The impulse-momentum theorem Conservation of momentum Center of mass Motion of the center of mass Collisions in one dimension Collisions in two dimensions Torque and Angular Momentum Rotational kinetic energy and rotational inertia Work done by a torque Equilibrium revisited Equilibrium in the human body Rotational form of Newton’s second law The dynamics of rolling objects Angular momentum The vector nature of angular momentum States of matter Pascal''s principle The effect of gravity on fluid pressure Measuring pressure Archimedes'' principle Fluid flow Bernoulli''s equation Viscous drag Surface tension Elasticity and Oscillations Elastic deformations of solids Hooke''s law for tensile and compressive forces Beyond Hooke''s law Shear and volume deformations Simple harmonic motion The period and frequency for SHM Graphical analysis of SHM The pendulum Damped oscillations Forced oscillations and resonance Waves and energy transport Transverse and longitudinal waves Speed of transverse waves on a string Periodic waves Mathematical description of a wave Graphing waves Principle of superposition Reflection and refraction Interference and diffraction Standing waves Sound waves The speed of sound waves Amplitude and intensity of sound waves Standing sound waves The human ear The Doppler effect Shock waves Echolocation and medical imaging Thermal physics Temperature and the Ideal Gas Temperature scales Thermal expansion of solids and liquids Molecular picture of a gas Absolute temperature and the ideal gas law Kinetic theory of the ideal gas Temperature and reaction rates Collisions between gas molecules Internal energy Heat capacity and specific heat Specific heat of ideal gases Phase transitions The first law of thermodynamics Thermodynamic processes Thermodynamic processes for an ideal gas Reversible and irreversible processes Heat engines Refrigerators and heat pumps Reversible engines and heat pumps Details of the Carnot cycle Statistical interpretation of entropy The third law of thermodynamics Electric Forces and Fields Electric charge Conductors and insulators Coulomb’s law The electric field Motion of a point charge in a uniform electric field Conductors in electrostatic equilibrium Gauss''s law for electric fields Electric Potential Electric potential energy Electric potential The relationship between electric field and potential Conservation of 
Conservation of energy for moving charges · Energy stored in a capacitor
Electric Current and Circuits: Electric current · Emf and circuits · Microscopic view of current in a metal · Resistance and resistivity · Kirchhoff's rules · Series and parallel circuits · Circuit analysis using Kirchhoff's rules · Power and energy in circuits · Measuring currents and voltages · RC circuits · Electrical safety
Magnetic Forces and Fields: Magnetic fields · Magnetic force on a point charge · Charged particle moving perpendicular to a uniform magnetic field · Motion of a charged particle in a uniform magnetic field: general · A charged particle in crossed E and B fields · Magnetic force on a current-carrying wire · Torque on a current loop · Magnetic field due to an electric current · Ampère's law · Magnetic materials
Electromagnetic Induction: Motional Emf · Electric generators · Faraday's law · Lenz's law · Back Emf in a motor · Eddy currents · Induced electric fields · Mutual and self-inductance · LR circuits
Alternating Current: Sinusoidal currents and voltages; resistors in AC circuits · Electricity in the home · Capacitors in AC circuits · Inductors in AC circuits · RLC series circuit · Resonance in an RLC circuit · Converting AC to DC; filters
Electromagnetic Waves and Optics
Electromagnetic Waves: Accelerating charges produce electromagnetic waves · Maxwell's equations · The electromagnetic spectrum · Speed of EM waves in vacuum and in matter · Characteristics of electromagnetic waves in vacuum · Energy transport by EM waves · The Doppler effect for EM waves
Reflection and Refraction of Light: Wavefronts, rays, and Huygens' principle · The reflection of light · The refraction of light: Snell's law · Total internal reflection · Brewster's angle · The formation of images through reflection or refraction · Plane mirrors · Spherical mirrors · Thin lenses
Optical Instruments: Lenses in combination · Cameras

Table of Contents provided by Publisher. All Rights Reserved.
{"url":"http://www.ecampus.com/college-physics-mcat-bind-card-3rd/bk/9780077263218","timestamp":"2014-04-18T18:38:47Z","content_type":null,"content_length":"68089","record_id":"<urn:uuid:490d6251-2e15-48ba-9fcf-cbe4c2e293d3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
When quoting this document, please refer to the following URN: urn:nbn:de:0030-drops-16971
URL: http://drops.dagstuhl.de/opus/volltexte/2008/1697/

Kaufman, Tali; Litsyn, Simon; Xie, Ning
Breaking the $\epsilon$-Soundness Bound of the Linearity Test over GF(2)

Abstract

For Boolean functions that are $\epsilon$-far from the set of linear functions, we study the lower bound on the rejection probability (denoted by $\textsc{rej}(\epsilon)$) of the linearity test suggested by Blum, Luby and Rubinfeld. This problem is arguably the most fundamental and extensively studied problem in property testing of Boolean functions. The previously best bounds for $\textsc{rej}(\epsilon)$ were obtained by Bellare, Coppersmith, Håstad, Kiwi and Sudan. They used Fourier analysis to show that $\textsc{rej}(\epsilon) \geq \epsilon$ for every $0 \leq \epsilon \leq \frac{1}{2}$. They also conjectured that this bound might not be tight for $\epsilon$'s which are close to $1/2$. In this paper we show that this indeed is the case. Specifically, we improve the lower bound of $\textsc{rej}(\epsilon) \geq \epsilon$ by an additive constant that depends only on $\epsilon$:
$$\textsc{rej}(\epsilon) \geq \epsilon + \min\left\{1376\,\epsilon^{3}(1-2\epsilon)^{12},\ \tfrac{1}{4}\,\epsilon\,(1-2\epsilon)^{4}\right\},$$
for every $0 \leq \epsilon \leq \frac{1}{2}$. Our analysis is based on a relationship between $\textsc{rej}(\epsilon)$ and the weight distribution of a coset of the Hadamard code. We use both Fourier analysis and coding theory tools to estimate this weight distribution.

BibTeX - Entry

author = {Tali Kaufman and Simon Litsyn and Ning Xie},
title = {Breaking the $\epsilon$-Soundness Bound of the Linearity Test over GF(2)},
booktitle = {Sublinear Algorithms},
year = {2008},
editor = {Artur Czumaj and S. Muthu Muthukrishnan and Ronitt Rubinfeld and Christian Sohler},
number = {08341},
series = {Dagstuhl Seminar Proceedings},
ISSN = {1862-4405},
publisher = {Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, Germany},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2008/1697},
annote = {Keywords: Linearity test, Fourier analysis, coding theory}

Keywords: Linearity test, Fourier analysis, coding theory
Seminar: 08341 - Sublinear Algorithms
Issue Date: 2008
Date of publication: 25.11.2008
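For context (a standard definition, not part of the abstract above): the Blum-Luby-Rubinfeld test referenced here picks $x, y \in \{0,1\}^n$ uniformly at random and rejects $f$ iff

$$f(x) \oplus f(y) \neq f(x \oplus y),$$

so that $\textsc{rej}(\epsilon)$ denotes the rejection probability of the test when $f$ is at relative Hamming distance $\epsilon$ from the nearest linear (parity) function.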
{"url":"http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=1697","timestamp":"2014-04-19T22:10:33Z","content_type":null,"content_length":"5662","record_id":"<urn:uuid:a6bfded4-9dc9-46c7-929b-5d5972653827>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
That is, the types and rates of reactions for the specific solutes and minerals in the particular ground water system of interest are rarely known and require an extensive amount of data to assess accurately. Mineralogic variability may be very significant and may affect the rate of reactions, and yet be essentially unknown. There are very few documented cases for which deterministic solute transport mod- els have been applied successfully to ground water contamination problems involving complex chemical reactions. Many contaminants of concern, particularly organic chemicals, are either immiscible or partly miscible with water. In such cases, processes in addition to those affecting a dissolved chemical may significantly affect the fate and movement of the contaminant, and the conventional solute transport equation may not be applicable. Rather, a multiphase modeling approach may be required to rep- resent phase composition, interphase mass transfer, and capillary forces, among other factors (see Pinder and Abriola, 1986~. This would concurrently impose more severe data requirements to de- scribe additional parameters, nonlinear processes, and more complex geochemical and biological reactions. Faust (1985) states, "Unfortu- nately, data such as relative permeabilities and capillary pressures OCR for page 113 TRANSPORT 115 for the types of fluids and porous materials present in hazardous waste sites are not readily available." Well-documented and efficient multiphase models applicable to contamination of ground water by immiscible and partly miscible organic chemicals are not yet gener- ally available. TRANSPORT OF CONSERVATIVE SOLUTES Much of the recently published research literature on solute transport has focused on the nature of dispersion phenomena in ground water systems and whether the conventional solute transport equation accurately and adequately represents the process causing changes in concentration in an aquifer. In discussing the development and derivation of the solute transport equation, Bear (1979, p. 232) states, "As a working hypothesis, we shall assume that the dispersive flux can be expressed as a Fickian type law." The dispersion process is thereby represented as one in which the concentration gradient is the driving force for the dispersive flux. This is a practical engineer- ing approximation for the dispersion process that proves adequate for some field problems. But, because it incorrectly represents the actual physical processes causing observed dispersion at the scale of many field problems, which is commonly called macrodispersion, it is inadequate for many other situations. The dispersion coefficient is considered to be a function both of the intrinsic properties of the aquifer (such as heterogeneity in hydraulic conductivity and porosity) and of the fluid flow (as rep- resented by the velocity). Scheidegger (1961) showed that the dis- persivity of a homogeneous, isotropic porous medium can be defined by two constants. These are the longitudinal dispersivity and the transverse dispersivity of the medium. Most applications of trans- port models to ground water contamination problems documented to date have been based on this conventional formulation, even when the porous medium is considered to be anisotropic with respect to flow. The consideration of solute transport in a porous medium that is anisotropic would require the estimation of more than two disper- sivity parameters. 
For example, in a transversely isotropic medium, as might occur in a horizontally layered sedimentary sequence, the dispersion coefficient would have to be characterized on the basis of six constants. In practice, it is rare that field values for even the two constants longitudinal and transverse dispersivity can be OCR for page 113 116 GROUND WATER MODELS defined uniquely. It appears to be impractical to measure as many as six constants in the field. If just single values of longitudinal and transverse dispersivity are used in predicting solute transport in an anisotropic medium when the flow direction ~ not always parallel to the principal directions of anmotropy, then dispersive fluxes will be either overestimated or underestimated for various parts of the flow system. This can sometimes lead to significant errors in predicted concentrations. Dispersion and advection are actually interrelated and are de- pendent on the scale of measurement and observation and on the scale of the model. Because dispersion is related to the variance of velocity, neglecting or ignoring the true velocity distribution must be compensated for in a mode! by a correspondingly higher value of dispersivity. Domenico and Robbins (1984) demonstrate that a scaling up of dispersivity will occur whenever an (n-l) dimensional mode} is calibrated or used to describe an e-dimensional system. Davis (1986) used numerical experiments to show that variations in hydraulic conductivity can cause an apparently large dispersion to occur even when relatively small values of dispersivity are assumed. Similarly, Goode and Konikow (1988) show that representing a tran- sient flow field by a mean steacly-state flow field, as is commonly done, inherently ignores some of the variability in velocity ancI must also be compensated for by increased values of dispersivity. The scale dependence of dispersivity coefficients (macrodisper- sion) is recognized as a limitation in the application of conventional solute transport models to field problems. Anderson (1984) and Gelhar (1986) show that most reported values of longitudinal dis- persivity fall in a range between 0.01 and 1.0 on the scale of the measurement (see Figure 4.~. Smith and Schwartz (1980) conclude that macrodispersion results from large-scale spatial variations in hydraulic conductivity and that the use of relatively large values of dispersivity with uniform hydraulic conductivity fields is an inapt propriate basis for describing transport in geologic systems. It must be recognized that geologic systems, by their very nature, are com- plex, three-dimensional, heterogeneous, and often anisotropic. The greater the degree to which a mode! approximates the true hetero- geneity as being uniform or homogeneous, the more must the true variability in velocity be incorporated into larger dispersion coeffi- cients. We will never have so much hydrogeologic data available that we can uniquely define all the variability in the hydraulic properties of a geologic system; therefore, assumptions and approximations are OCR for page 113 TRANSPORT 117 Sand, Gravel, Sandstone Limestone, Basalt · Granite, & Schist 100 Oh llJ In 10 z At o 1 . I` · / ~ / / / · ·~. By a/ · ~/ ~- . A_ ·, / in)/ · / · / . / - . 1 0 1 00 1 ,000 DISTANCE (m) FIGURE 4.1 Variation of dispersivity with distance (or scale of measurement). SOURCE: Modified from Anderson, 1984. always necessary. 
Clearly, the more accurately and precisely we can define spatial and temporal variations in velocity, the lower will be the apparent magnitude of dispersivity. The role of heterogeneities is not easy to quantify, and much research is in progress on this problem. An extreme but common example of heterogeneity is rocks that OCR for page 113 118 GROUND WATER MODELS exhibit a dominant secondary permeability, such as fractures or so- lution openings. In these types of materials, the secondary perme- ability channels may be orders of magnitude more transmissive than the porous matrix of the bulk of the rock unit. In these settings, the most difficult problems are identifying where the fractures or so- lution openings are located, how they are interconnected, and what their hydraulic properties are. These factors must be known in or- der to predict flow, and the flow must be calculated or identified in order to predict transport. Anderson (1984) indicates that where transport occurs through fractured rocks, diffusion of contaminants from fractures to the porous rock matrix can serve as a significant retardation mechanism, as illustrated in Figure 4.2. Modeling of flow and transport through fractured rocks is an area of active research, but not an area where practical and reliable approaches are readily available. Modeling the transport of contaminants in a secondary permeability terrain is like predicting the path of a hurricane with- out any knowledge of where land masses and oceans are located or which way the earth is rotating. Because there is not yet a consensus on how to describe, account for, or predict scale-dependent dispersion, it is important that any conventional solute transport mode} be applied to only one scale of a problem. That is, a single model, based on a single value of dispersivity, should not be used to predict both near-field (near the solute source) and far-field responses. For example, if the clispersivity value that is used in the mode} is representative of transport over distances on the order of hundreds of feet, it likely will not accurately predict dispersive transport on smaller scales of tens of feet or over D I f FU POROU S ROCK MATRIX FRACTURE F LOW FIGURE 4.2 Flow through fractures and diffusion of contaminants from frac- tures into the rock matrix of a dual-porosity medium. SOURCE: Anderson, 1984. OCR for page 113 TRANSPORT C nJection well Land surface A C {B+D 1- Flow ~ B 1 o 119 Fully penetrating observation well /A _ ~ - `. Point', samples C ~ C: D:Br FIGURE 4.3 Effect of sampling scale on estimation of dispersivity. SOURCE: L. F. Konikow, U.S. Geological Survey, Ralston, Va., written communication, 1989. larger scales of miles. Warning flags must be raised if measurements of parameters such as dispersivity are made or are representative of some scale that is different from that required by the mode! or by the solution to the problem of interest. Similarly, the sampling scale and manner of sampling or measur- ing dependent variables, such as solute concentration, may affect the interpretation of the data and the estimated values of physical pa- rameters. For example, Figure 4.3 illustrates a case in which a tracer or contaminant is injected into a confined and stratified aquifer sys- tem. It is assumed that the properties are uniform within each layer but that the properties of each layer differ significantly. 
Hence, for injection into a fully penetrating injection well, as shown at the left of Figure 4.3, the velocity will differ between the different layers. Arrival times will then vary at the sampling location. Samples col- lected from a fully penetrating observation well will yield a gentle breakthrough curve indicating a relatively high dispersivity. How- ever, breakthrough curves from point samples will be relatively steep, indicating low dispersivity in each layer. The finer scale of sampling OCR for page 113 120 GROUND WATER MODELS yields a more accurate conceptual mode} of what is really happening, and an analogous mode! should yield more reliable predictions. Because advective transport and hydrodynamic dispersion de- pend on the velocity of ground water flow, the mathematical simu- lation mode! must solve at least two simultaneous partial differential equations. One is the flow equation, from which velocities are calcu- lated, and the other ~ the solute transport equation, which describes the chemical concentration in ground water. If the range in concen- tration throughout the system is small enough that the density and viscosity of the water do not change significantly, then the two equa- tions can be decoupled (or solved separately). Otherwise, the flow equation must be formulated and solved in terms of intrinsic per- meability and fluid pressure rather than hydraulic conductivity and head, and iteration between the solutions to the flow and transport equations may be needed. Ground water transport equations, in general, are more diffi- cult to solve numerically than are the ground water flow equations, largely because the mathematical properties of the transport equa- tion vary depending upon which terms in the equations are dominant in a particular situation (Konikow and Mercer, 1988~. The transport equation has been characterized as "schizophrenic" in nature (Pin- der and Shapiro, 1979~. If the problem is advection dominated, as it is in most cases of ground water contamination, then the govern- ing partial differential equation becomes more hyperbolic in nature (similar to equations describing the propagation of a shock front or wave propagation). If ground] water velocities are relatively low, then changes in concentration for that particular problem may re- sult primarily from diffusion and dispersion processes. In such a case, the governing partial differential equation is more parabolic in nature. Standard finite-difference and finite-element methods work best with parabolic and elliptic partial differential equations (such as the transient and steady-state ground water flow equations). Other approaches (including method of characteristics, random walk, and related particle-tracking methods) are best for solving hyperbolic equations. Therefore no one numerical method or simulation mode! will be ideal for the entire spectrum of ground water contamination problems encountered in the field. Mode! users must take care to use the mode} most appropriate to their problem. Further compounding this clifficulty is the fact that the ground water flow velocity within a given multidimensional flow field will normally vary greatly, from near zero in low-permeability zones or OCR for page 113 TRANSPORT 121 near stagnation points, to several feet per day in high-permeability areas or near recharge or discharge points. 
Therefore, even for a single ground water system, the mathematical characteristics of the transport process may vary between hyperbolic and parabolic, so that no one numerical method may be optimal over the entire domain of a single problem. A comprehensive review of solute transport modeling is pre- sented by Naymik (1987~. The mode! survey of van der Heij~e et al. (1985) reviews a total of 84 numerical mass transport models. Currently, there is much research on mixed or adaptive methods that aim to minimize numerical errors and combine the best features of alternative standard numerical approaches because none of the standard numerical methods is ideal over a wide range of transport problems. In the development of a deterministic ground water transport mode! for a specific area and purpose, an appropriate level of com- plexity (or, rather, simplicity) must be selected (Konikow, 1988~. Finer resolution in a mode} should yield greater accuracy. However, there also exists the practical constraint that even when appropriate data are available, a finely subdivided three-dimensional numerical transport mode] may be too large or too expensive to run on available computers. This may also be true if the mode! incorporates nonlinear processes related to reactions or multiphase transport. The selection of the appropriate mode} and the appropriate level of complexity will remain subjective and dependent on the judgment and experience of the analysts, the objectives of the study, and the level of prior information on the system of interest. In general, it is more difficult to calibrate a solute transport mode! of an aquifer than it is to calibrate a ground water flow model. Fewer parameters need to be defined to compute the head distribu- tion with a flow mode} than are required to compute concentration changes using a solute transport model. A mode} of ground water flow is often calibrated before a solute transport mode} is developed because the ground water seepage velocity is determined by the head distribution and because advective transport ~ a function of the seepage velocity. In fact, in a field environment, perhaps the single most important key to understanding a solute transport problem is the development of an accurate definition (or model) of the flow system. This is particularly relevant to transport in fractured rocks where simulation is based on porous-media concepts. Although OCR for page 113 122 GROUND WATER MODELS head distribution can often be reproduced satisfactorily, the required velocity field may be greatly in error. It is often feasible to use a ground water flow mode} alone to analyze directions of flow and transport, as well as travel times, because contaminant transport in ground water ~ so strongly (if not predominantly) dependent on ground water flow. An illustrative example is the analysis of the Love Canal area, Niagara Fails, New York, described by Mercer et al. (1983~. Faced with inadequate and uncertain data to describe the system, Monte CarIo simulation and uncertainty analysis were used to estimate a range of travel times (and the associated probabilities) from the contaminant source area to the Niagara River. Similarly, it is possible and often useful to couple a particle-tracking routine to a flow mode! to represent advective forces in an aquifer and to demonstrate explicitly the travel paths and travel times of representative parcels of ground water. 
This ignores the effects of dispersion and reactions but may nevertheless lead to an improved understanding of the spreading of contaminants. Figure 4.4 illustrates in a general manner the role of models in providing input to the analysis of ground water contamination problems. The value of the modeling approach lies in its capability to integrate site-specific data with equations describing the relevant processes as a basis for predicting changes or responses in ground water quality. There is a major difference between evaluating existing contaminate sites and evaluating new or planned sites. For the former, if the contaminant source can be reasonably well defined, the history of contamination itself can, in effect, serve as a surrogate long-term tracer test that provides critical information on velocity and Aspersion at a regional scale. However, it is more common that when a contamination problem is recognized and investigated, the locations, tinning, and strengths of the contaminant sources are for the most part unknown, because the release to the ground water system occurred in the past when there may have been no monitoring. In such cases it is often desirable to use a mode} to determine the characteristics of the source on the basis of the present distribution of contaminants. That is, the requirement is to run the mode! backward in time to assess where the contaminants came from. Although this is theoretically possible, in practice there is usually so much uncertainty in the definition of the properties and boundaries of the ground water system that an unknown source cannot be uniquely identified. At new or planned sites, historical data are commonly not available to provide a basis for mode! calibration and to serve as a OCR for page 113 TRANSPORT P NN NO FOR FUTURE ASSESSMENTS OF EXISTING WASTE DISPOSAL 1 | CONTAM NATED S TES. · SITE SELECTION · SOURCES OF CONTAMINATION · OPERATIONAL DESIGN · FUTURE CONTAMINATION · MONITORING NETWORK · MANAGEMENT OPTIONS ~1- ~ 1 1 L 123 | MODEL PREDICTIONS| rid - | MODEL APPLICATION AND CALIBRATION ~ 1 NUMERICAL MODEL' I OF GROUND WATER 1 FLOW AND CONTAMINANT TRANSPORT I I COLLECTION AND INTERPRETATION OF SITE-SPECIFIC DATA CONCEPTUAL MODELS OF GOVERNING PHYSICAL, CHEMICAL, AND BIOLOGICAL PROCESSES ~1 FIGURE 4.4 Overview of the role of simulation models in evaluating ground water contamination problems. SOURCE: Konikow, 1981. On the accuracy of predictions. As indicated in Figure 4.4, there should be allowances for feedback from the stage of interpreting mode! output both to the data collection and analysis phase and to the conceptualization and mathematical definition of the relevant governing processes. NONCONSERVATIVE SOLUTES The following sections assess the state of the art for modeling abiotic transformations, transfers between phases, and biological pro- cesses in the subsurface. Descriptions of all of these processes are provided in Chapter 2. The focus for this assessment is an examina- tion of what reactions are important and to what extent they can be described by equilibrium and kinetic models. OCR for page 113 TRANSPORT V SEA 149 Accretion Fresh water ~ Ground Surface Water Table /~e ~Impervious Seawater =~ c ~Boundary "'a_ Toe FIGURE 4.9 Seawater intrusion in an unconfined aquifer. SOURCE: After Sa da Costa and Wilson, 1979. then the problem scale is such that an interface approximation is valid and saltwater and fresh water may be treated as immiscible. 
The analysis of aquifers containing both fresh water and salt- water may be based on a variety of conceptual models (doss, 1984~. The range of numerical models includes dispersed interface mod- els of either cross-sectional or fully three-dimensional fluid-density- dependent flow and solute transport simulation. Sharp interface models are also available for cross-sectional or areal applications. Of the sharp interface models, some account for the movement of both fresh water and saltwater, while others account only for fresh- water movement. The latter models are based on an assumption of instantaneous hydrostatic equilibrium in the saltwater environment. In the majority of cases involving seawater intrusion, water qual- ity is viewed as good or bad; either it is fresh water or it is not a resource. Therefore many studies seek to determine acceptable lev- els of pumping or appropriate remediation or protection strategies. These resource management questions are resolved through fluid flow simulation and do not require solute transport simulations. Organic FIllid Contamination The migration and fate of organic compounds in the subsurface are of significant interest because of the potential health effects of these compounds at relatively low concentrations. A significant body of work exists within the petroleum industry regarding the move- ment of organic compounds, e.g., of} and gas resources. However, this capability has been developed for estimating resource recovery OCR for page 113 150 GROUND WATER MODELS or production and not contaminant migration. To compound prob- lems, the petroleum industry's computational capability is largely proprietary and is oriented toward deep geologic systems, which typ- ically have higher temperatures and higher pressure environments than those encountered in shallow contamination problems. Within the past decade, a considerable effort has been made to establish a capability to simulate immiscible and miscible or- ganic compound contamination of ground water resources. Migration patterns associated with immiscible and miscible organic fluids are schematically described by Schwille (1984) and Abriola (1984~. Fig- ure 4.10 depicts one possible organic liquid contamination event. If not remediated, the migration of an immiscible organic liquid phase is of interest because it could represent an acute or chronic source of pollution. Movement of the organic liquid through the vadose zone is governed by the potential of the organic liquid, which in turn depends upon the fluid retention and relative permeability properties of the air/organic/water/solid system. As an organic liquid flows through a porous medium, some Is adsorbed to the medium or trapped within the pore space. Specific retention defines that fraction of the pore space that will be occupied by organic liquid after drainage of the bulk organic liquid from the soil column. This organic contamina- tion held within the soil column by capillary forces (at its residual saturation) represents a chronic source of pollution because it can be leached by percolating soil moisture and carried to the water table. If the organic liquid is lighter than water, it may migrate as a distinct immiscible contaminant (the acute source) within the cap- ilIary fringe overlying the water table. The soluble fraction of the organic liquid will also contaminate the water table aquifer and mi- grate as a miscible phase within ground water. This is the situation shown in Figure 4.10. 
If the organic liquid is heavier than water, it will migrate vertically through the vadose zone and water table to directly contaminate the ground water aquifer. It may also pene- trate water-confining strata that are permeable to the organic liquid and, consequently, contaminate underlying confined aquifers. The organic contaminant may form a pool on the bedrock of the aquifer and move in a direction defined by the bedrock relief rather than by the hydraulic gradient. Contamination of ground water occurs by dissolution of the soluble fraction into ground water contacting either the main body of the contaminant or the organic liquid held by specific retention within the porous medium. OCR for page 113 TRANSPORT Ground Surfacer Capillary Fringe Water Table 151 'my ,.::::.. i. :$...////// Oil Zone ~ l - - .' lo,!. . of ~ CUnsaturated Zone Gas Zone (evaporation envelope) ~ ", ~.~;-, Oil Core \ Diffusion Zone (soluble components) Figure 4.10 Organic liquid contamination of unsaturated and saturated porous media. SOURCE: After Abriola, 1984. Governing Equations for M~tiphase Flow The region of greatest interest in seawater intrusion problems is the front between fresh water and seawater. The problem of salinity as a miscible contaminant in ground water is addressed with standard solute transport models. In reality, seawater is miscible with fresh water, and the front between the two bodies of water Is really a transition zone. The density and salinity of water across the zone gradually vary from those of fresh water to those of seawater (Bear, 1979~. A sharp or abrupt interface Is assumed if the width of the transition zone is relatively small. Fresh water is buoyant and will float above seawater. The balance struck among fresh water (i.e., ground water) moving toward the sea, seawater contaminating the approaching fresh water by miscible displacement, and fresh water overlying seawater results in a nearly stationary saline wedge. Figure 4.9 illustrates the stationary saline wedge conceptual model. This wedge will change if influenced by pumping or changes in recharge. While one can pose and solve the seawater intrusion problem as a single fluid having variable density (e.g., Begot et al., 1975; Voss, 1984), the most common approach has been to simulate fresh water OCR for page 113 152 GROUND WATER MODELS and seawater as distinct liquids separated by an abrupt interface. Along the interface, the pressures of both liquids must be identical. Sharp interface methods are applied to both vertical cross sections (e.g., Volker and Rushton, 1982) and areal models (e.g., Sa da Costa and Wilson, 1979~. The equations used to formulate the problem are the same as those used for the standard ground water flow problem. The only differences are that two equations are used (i.e., freshwater and seawater versions) and that their joint solution is conditioned to the pressure along the interface. Assuming that the response of the seawater domain is instantaneous and hence that hydrostatic equi- librium exists in the seawater domain, one can mode} the intrusion problem with a stanciard transient ground water flow mode} (doss, 1984~. Pinder and Abriola (1986) provide a broad overview of the prob- lem of modeling multiphase organic compounds in the subsurface. Abriola (1984) grouped models of multiphase flow and transport into two categories, those that address the migration of a miscible contaminant in ground water and those that address two or more distinct liquid phases. 
The former category of models addresses the far-field problem of chronic miscible contamination. Standard ground water flow and solute transport codes can be applied to these organic compound contamination problems. However, standard codes may require modifications to address biodegradation or sorption characteristics of a specific organic compound.

As in the case of seawater intrusion, the region of greatest interest is the region exhibiting multiphase behavior. The problem of organic contamination is more complex for two reasons: (1) in general, a stationary interface will not exist, and (2) one is often interested in contamination of unsaturated soil deposits as a precursor to contamination of a ground water aquifer. Interest in the migration and fate of organic compounds has required that transient analysis methods be developed. Such methods enable one to simulate the movement of bulk contamination through the vadose zone and into a ground water aquifer. One is also able to estimate the mass of contamination held in the media by specific retention. Because the front is not stationary, one must model liquid/solid interactions that govern the movement of each fluid in the presence of others. The equations describing multiphase flow and transport are similar to those previously described for simulating water movement and solute transport in unsaturated soils. One fluid flow (e.g., fluid mass conservation) equation is required for each fluid phase simulated (e.g., gas, organic liquid, water). Rather than simulate distinct fluid regions separated by abrupt interfaces, one simulates a continuum shared by each of the fluids of interest. The equation set is coupled by the fluid retention and relative permeability relationships of the multiphase system.

Miscible displacement of trace quantities of an organic fluid can occur within the water and gas phases. This is a common occurrence; however, it greatly complicates the mass balance equation for the organic fluid. The statement of mass conservation must now account for organic mass entrained in the water and gas phases as well as the organic mass held in the immiscible fluid phase. Transport processes are introduced into the conservation equations, and the exchange of organic mass between fluid phases must be accounted for through partition coefficients.

Abriola (1988) and Allen (1985) review models available for the simulation of multiphase problems. A variety of solutions have been published for multiphase contamination problems. This is due to the complexity of the overall problem and the variety of approaches that can be taken to provide an approximate solution. A useful hierarchy of modeling approaches is as follows: sharp interface approximations, immiscible phase flow models incorporating capillarity, and compositional models incorporating interphase transfer.

Examples of models based on sharp interface approximations are those of Hochmuth and Sunada (1985), Schiegg (1986), and van Dam (1967). Immiscible phase models incorporating capillarity allow a more realistic simulation of the specific retention phenomena but do not address hysteresis in the fluid-soil interaction. Examples of these models are presented by Faust (1985), Kuppusamy et al. (1987), and Osborne and Sykes (1986). Compositional models incorporating interphase transfer are extremely complex and require the most data, many of which are not routinely available for contaminants of interest. Examples of these models are presented by Abriola and Pinder (1985a,b), Baehr and Corapcioglu (1987), and Corapcioglu and Baehr (1987).
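To make the statement "one conservation equation per phase" concrete, the per-phase balance in such models typically takes the following form (standard multiphase Darcy notation; not necessarily the exact formulation of any model cited above):

\[ \frac{\partial}{\partial t}\bigl(\phi\,\rho_\alpha S_\alpha\bigr) + \nabla\cdot\bigl(\rho_\alpha \mathbf{q}_\alpha\bigr) = Q_\alpha, \qquad \mathbf{q}_\alpha = -\frac{k_{r\alpha}(S_\alpha)}{\mu_\alpha}\,\mathbf{k}\,\bigl(\nabla p_\alpha - \rho_\alpha \mathbf{g}\bigr), \]

where, for each phase \(\alpha\) (gas, organic liquid, water), \(S_\alpha\) is saturation, \(\rho_\alpha\) density, \(\mu_\alpha\) viscosity, \(p_\alpha\) pressure, \(k_{r\alpha}\) relative permeability, \(\mathbf{k}\) the intrinsic permeability tensor, \(\phi\) porosity, and \(Q_\alpha\) a source/sink term. The equations are coupled through the constraint \(\sum_\alpha S_\alpha = 1\) and through capillary pressure relations among the phase pressures, which is exactly the role of the fluid retention and relative permeability relationships referred to above.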
Parameters and Initial and Boundary Conditions for Multiphase Flow

The physical complexity exhibited by multiphase flow models consumes all available computer resources. This strain on computer resources has precluded acknowledgment, in models, of the complexities of heterogeneous media that are spatially distributed in the real environment. At the present time, computational resources restrict fully three-dimensional problems to homogeneous, porous media. Realistically, currently available computational resources are best suited to address conceptual models.

Model parameters necessary for the simulation of seawater intrusion are basically identical to those required for the simulation of ground water flow; however, two-fluid models require duplicate parameters for fresh water and saltwater. A great many more model parameters are necessary for a complete analysis of immiscible organic contaminant migration in the subsurface. While the seawater intrusion problem is restricted to saturated porous media, organic fluid migration often occurs in the unsaturated zone. Consequently, fluid retention and relative permeability properties are required for the air/organic/water/solid system. Other standard data requirements for multiphase fluid flow simulation include porosity, compressibility of liquids and porous media (or storage coefficient), fluid densities and viscosities, and the intrinsic permeability tensor.

As in the case of the fluid flow simulations, model parameters for transport simulations are more detailed for the organic fluid migration problem than for the seawater intrusion problem. Model parameter requirements for solute migration within variable-density seawater intrusion are very similar to the requirements of any single-phase saturated zone model; however, duplicate data sets are required for freshwater and seawater domains. Parameters necessary for detailed analysis of organic liquid transport phenomena include macroscopic diffusion and dispersion coefficients for each fluid phase (e.g., gas, water, or organic liquid), partition coefficients for water-gas and water-organic phases, sorption model parameters for alternative sorption models, and degradation model parameters for the organic fluid.

Certainly, the more complex and complete models of multiphase contaminant problems require more data. If one considers only the immiscible flow problem in an attempt to estimate the migration of the bulk organic plume, then one will not require any of the miscible displacement (transport) parameters. If one assumes that the gas phase is static, one greatly reduces the data requirement in terms of both flow and transport phenomena. Key data for any analysis of multiphase migration are the fluid retention and relative permeability characteristics for the fluids and media of interest. The media porosity and intrinsic permeability, as well as fluid densities and viscosities, are also essential.

All comments made regarding boundary and initial conditions for flow and transport of a single-phase contamination analysis also apply to a multiphase analysis. Aspects of transient analysis can be important in seawater intrusion problems because of seasonal pumping stresses. Transient analyses are also essential for organic fluid migration simulation because of interest in the migration and fate of these potentially harmful substances.
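As an illustration of the fluid retention and relative permeability characteristics identified above as key data, one widely used closed-form parameterization for the water phase is van Genuchten's (the standard form from the literature, supplied here for reference rather than quoted from this chapter):

\[ S_e = \frac{\theta - \theta_r}{\theta_s - \theta_r} = \bigl[1 + (\alpha h)^n\bigr]^{-m}, \qquad m = 1 - \frac{1}{n}, \]

with the companion Mualem-based relative permeability

\[ k_r(S_e) = S_e^{1/2}\Bigl[1 - \bigl(1 - S_e^{1/m}\bigr)^{m}\Bigr]^2, \]

where \(h\) is suction head, \(\theta_r\) and \(\theta_s\) are the residual and saturated water contents, and \(\alpha\) and \(n\) are fitting parameters. Analogous retention and relative permeability functions must be supplied for each fluid pair in the air/organic/water/solid system.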
Spatial dimensionality of a multiphase analysis can influence results. For example, in the real, fully three-dimensional environment, a heavier-than-water organic fluid can move vertically through the soil profile and form a continuous distinct fluid phase from the water table to an underlying impermeable medium. Ground water will simply move around the immiscible organic fluid as though it were an impermeable object. Attempts to analyze such a situation in a vertical cross section with a two-dimensional multiphase model will fail because the organic fluid will act as a dam to laterally moving ground water. Thus only an intermittent source of immiscible organic fluid can be analyzed. Note that such an analysis will be flawed for most real-world applications because it will represent a laterally infinite intermittent source rather than a point source of pollution.

Problems Associated with Multiphase Flow

The problems associated with modeling multiphase flow include the following:

• magnitude of computational resources required to address complexities;
• data requirements of the multiphase problem that are independent of consideration of spatial variability, paucity of data specific to soils and organic contaminants of interest, and no way to address the problem of mixtures of organics;
• absence of hysteresis submodels needed to address retention capacity of porous media and to enable one to simulate purging of the environment;
• virtual omission of any realistic surface geochemistry or microbiology submodels necessary to more completely describe the assimilative or attenuative capacity of the subsurface environment; and
• viscous fingering and its relationship to spatial variability occurring in the natural environment.

REFERENCES

Abriola, L. M. 1984. Multiphase Migration of Organic Compounds in a Porous Medium: A Mathematical Model, Lecture Notes in Engineering, Vol. 8. Springer-Verlag, Berlin.
Abriola, L. M. 1988. Multiphase Flow and Transport Models for Organic Chemicals: A Review and Assessment. EA-5976, Electric Power Research Institute, Palo Alto, Calif.
Abriola, L. M., and G. F. Pinder. 1985a. A multiphase approach to the modeling of porous media contamination by organic compounds, 1. Equation development. Water Resources Research 21(1), 11-18.
Abriola, L. M., and G. F. Pinder. 1985b. A multiphase approach to the modeling of porous media contamination by organic compounds, 2. Numerical simulation. Water Resources Research 21(1), 19-26.
Allen, III, M. B. 1985. Numerical modeling of multiphase flow in porous media. In Proceedings, NATO Advanced Study Institute on Fundamentals of Transport Phenomena in Porous Media, July 14-23, J. Bear and M. Y. Corapcioglu, eds. Martinus Nijhoff, Newark, Del.
Anderson, M. P. 1984. Movement of contaminants in groundwater: Groundwater transport – advection and dispersion. Pp. 37-45 in Groundwater Contamination. National Academy Press, Washington, D.C.
Baehr, A. L., and M. Y. Corapcioglu. 1987. A compositional multiphase model for ground water contamination by petroleum products, 2. Numerical solution. Water Resources Research 23(1), 201-213.
Bear, J. 1979. Hydraulics of Groundwater. McGraw-Hill, New York, 567 pp.
Beven, K., and P. F. Germann. 1982. Macropores and water flows in soils. Water Resources Research 18, 1311-1325.
Cederberg, G. A., R. L. Street, and J. O. Leckie. 1985. A groundwater mass-transport and equilibrium chemistry model for multicomponent systems. Water Resources Research 21(8), 1095-1104.
Corapcioglu, M. Y., and A. L. Baehr. 1987. A compositional multiphase model for ground water contamination by petroleum products, 1. Theoretical considerations. Water Resources Research 23(1), 191-200.
Davis, A. D. 1986. Deterministic modeling of dispersion in heterogeneous permeable media. Ground Water 24(5), 609-615.
Delany, J. M., I. Puigdomenech, and T. J. Wolery. 1986. Precipitation kinetics option of the EQ6 Geochemical Reaction Path Code. Lawrence Livermore National Laboratory Report UCRL-53642, Livermore, Calif., 44 pp.
Domenico, P. A., and G. A. Robbins. 1984. A dispersion scale effect in model calibrations and field tracer experiments. Journal of Hydrology 70, 123-132.
Faust, C. R. 1985. Transport of immiscible fluids within and below the unsaturated zone: A numerical model. Water Resources Research 21(4), 587-596.
Felmy, A. R., S. M. Brown, Y. Onishi, S. B. Yabusaki, R. S. Argo, D. C. Girvin, and E. A. Jenne. 1984. Modeling the transport, speciation, and fate of heavy metals in aquatic systems. EPA Project Summary. EPA-600/53-84-033, U.S. Environmental Protection Agency, Athens, Ga., 4 pp.
Gelhar, L. W. 1986. Stochastic subsurface hydrology from theory to applications. Water Resources Research 22(9), 135S-145S.
Germann, P. F. 1989. Approaches to rapid and far-reaching hydrologic processes in the vadose zone. Journal of Contaminant Hydrology 3, 115-127.
Germann, P. F., and K. Beven. 1981. Water flow in soil macropores, 1, An experimental approach. Journal of Soil Science 32, 1-13.
Goode, D. J., and L. F. Konikow. 1988. Can transient flow cause apparent transverse dispersion? (abst.). Eos, Transactions of the American Geophysical Union 69(44), 1184-1185.
Hochmuth, D. P., and D. K. Sunada. 1985. Ground-water model of two-phase immiscible flow in coarse material. Ground Water 23(5), 617-626.
Hostetler, C. J., R. L. Erikson, J. S. Fruchter, and C. T. Kincaid. 1988. Overview of FASTCHEM™ Code Package: Application to Chemical Transport Problems, Report EA-5870-CCM, Vol. 1. Electric Power Research Institute, Palo Alto, Calif.
Jones, R. L., A. G. Hornsby, P. S. C. Rao, and M. P. Anderson. 1987. Movement and degradation of aldicarb residues in the saturated zone under citrus groves on the Florida ridge. Journal of Contaminant Hydrology 1, 265-285.
Konikow, L. F. 1981. Role of numerical simulation in analysis of groundwater quality problems. Pp. 299-312 in The Science of the Total Environment, Vol. 21. Elsevier Science Publishers, Amsterdam.
Konikow, L. F. 1988. Present limitations and perspectives on modeling pollution problems in aquifers. Pp. 643-664 in Groundwater Flow and Quality Modelling, E. Custodio, A. Gurgui, and J. P. Lobo Ferreira, eds. D. Reidel, Dordrecht, The Netherlands.
Konikow, L. F., and J. M. Mercer. 1988. Groundwater flow and transport modeling. Journal of Hydrology 100(2), 379-409.
Kuppusamy, T., J. Sheng, J. C. Parker, and R. J. Lenhard. 1987. Finite-element analysis of multiphase immiscible flow through soils. Water Resources Research 23(4), 625-631.
Lindberg, R. D., and D. D. Runnells. 1984. Groundwater redox reactions: An analysis of equilibrium state applied to Eh measurements and geochemical modeling. Science 225, 925-927.
Luxmoore, R. J. 1981. Micro-, meso- and macro-porosity of soil. Soil Science Society of America Journal 45, 671.
Mercer, J. M., L. R. Silka, and C. R. Faust. 1983. Modeling ground-water flow at Love Canal, New York. Journal of Environmental Engineering ASCE 109(4), 924-942.
Miller, D., and L. Benson. 1983. Simulation of solute transport in a chemically reactive heterogeneous system: Model development and application. Water Resources Research 19, 381-391.
Naymik, T. G. 1987. Mathematical modeling of solute transport in the subsurface. Critical Reviews in Environmental Control 17(3), 229-251.
Osborne, M., and J. Sykes. 1986. Numerical modeling of immiscible organic transport at the Hyde Park landfill. Water Resources Research 22(1), 25-33.
Parkhurst, D. L., D. C. Thorstenson, and L. N. Plummer. 1980. PHREEQE - A computer program for geochemical calculations. U.S. Geological Survey Water Resources Investigation 80-96, 210 pp.
Peterson, S. R., C. J. Hostetler, W. J. Deutsch, and C. E. Cowan. 1987. MINTEQ User's Manual. Report NUREG/CR-4808, PNL-6106, prepared by Battelle Pacific Northwest Laboratory for U.S. Nuclear Regulatory Commission, Washington, D.C., 148 pp. (available from National Technical Information Service, U.S. Department of Commerce, Springfield, VA 22161).
Pinder, G. F., and L. M. Abriola. 1986. On the simulation of nonaqueous phase organic compounds in the subsurface. Water Resources Research 22(9), 109S-119S.
Pinder, G. F., and A. Shapiro. 1979. A new collocation method for the solution of the convection-dominated transport equation. Water Resources Research 15(5), 1177-1182.
Plummer, L. N., B. F. Jones, and A. H. Truesdell. 1976. WATEQF - A FORTRAN IV version of WATEQ, a computer code for calculating chemical equilibria of natural waters. U.S. Geological Survey Water Resources Investigation 76-13, 61 pp.
Rittmann, B. E., and P. L. McCarty. 1980. Model of steady-state-biofilm kinetics. Biotechnology and Bioengineering 22, 2343-2357.
Rittmann, B. E., and P. L. McCarty. 1981. Substrate flux into biofilms of any thickness. Journal of Environmental Engineering 107, 831-849.
Rubin, J. 1983. Transport of reacting solutes in porous media: Relation between mathematical nature of problem formulation and chemical nature of reactions. Water Resources Research 19(5), 1231-1252.
Sa da Costa, A. A. G., and J. L. Wilson. 1979. A Numerical Model of Seawater Intrusion in Aquifers. Technical Report 247, Ralph M. Parsons Laboratory, Massachusetts Institute of Technology, Cambridge.
Saez, P. B., and B. E. Rittmann. 1988. An improved pseudo-analytical solution for steady-state-biofilm kinetics. Biotechnology and Bioengineering 32, 379-385.
Scheidegger, A. E. 1961. General theory of dispersion in porous media. Journal of Geophysical Research 66(10), 3273-3278.
Schiegg, H. O. 1986. 1.5 Ausbreitung von Mineralöl als Flüssigkeit (Methode zur Abschätzung). In Beurteilung und Behandlung von Mineralölschadensfällen im Hinblick auf den Grundwasserschutz, Teil 1, Die wissenschaftlichen Grundlagen zum Verständnis des Verhaltens von Mineralöl im Untergrund. LTwS-Nr. 20. Umweltbundesamt, Berlin. [Spreading of Oil as a Liquid (Estimation Method). Section 1.5 in Evaluation and Treatment of Cases of Oil Damage with Regard to Groundwater Protection, Part 1, Scientific Fundamental Principles for Understanding the Behavior of Oil in the Ground. LTwS-Nr. 20. Federal Office of the Environment, Berlin.]
Schwille, F. 1984. Migration of organic fluids immiscible with water in the unsaturated zone. Pp. 27-48 in Pollutants in Porous Media, The Unsaturated Zone Between Soil Surface and Groundwater, B. Yaron, G. Dagan, and J. Goldshmid, eds. Ecological Studies Vol. 47, Springer-Verlag, Berlin.
Segol, G., G. F. Pinder, and W. G. Gray. 1975. A Galerkin-finite element technique for calculating the transient position of the saltwater front. Water Resources Research 11(2), 343-347.
Smith, L., and F. W. Schwartz. 1980. Mass transport, 1, A stochastic analysis of macroscopic dispersion. Water Resources Research 16(2), 303-313.
Sposito, G., and S. V. Mattigod. 1980. GEOCHEM: A Computer Program for the Calculation of Chemical Equilibria in Soil Solutions and Other Natural Water Systems. Department of Soils and Environment Report, University of California, Riverside, 92 pp.
van Dam, J. 1967. The migration of hydrocarbons in a water-bearing stratum. Pp. 55-96 in The Joint Problems of the Oil and Water Industries, P. Hepple, ed. The Institute of Petroleum, 61 New Cavendish Street, London.
van der Heijde, P. K. M., Y. Bachmat, J. D. Bredehoeft, B. Andrews, D. Holtz, and S. Sebastian. 1985. Groundwater management: The use of numerical models. Water Resources Monograph 5, 2nd ed. American Geophysical Union, Washington, D.C., 180 pp.
van Genuchten, M. Th. 1987. Progress in unsaturated flow and transport modeling. U.S. National Report, International Union of Geodesy and Geophysics, Reviews of Geophysics 25(2), 135-140.
Volker, R. E., and K. R. Rushton. 1982. An assessment of the importance of some parameters for sea-water intrusion in aquifers and a comparison of dispersive and sharp-interface modeling approaches. Journal of Hydrology 56(3/4), 239-250.
Voss, C. I. 1984. AQUIFEM-SALT: A Finite-Element Model for Aquifers Containing a Seawater Interface. Water-Resources Investigations Report 84-4263, U.S. Geological Survey, Reston, Va.
White, R. E. 1985. The influence of macropores on the transport of dissolved and suspended matter through soil. Advances in Soil Science 3, 95-120.
Wolery, T. J., K. J. Jackson, W. L. Bourcier, C. J. Bruton, B. E. Viani, and J. M. Delany. 1988. The EQ3/6 software package for geochemical modeling: Current status. American Chemical Society, Division of Geochemistry, 196th ACS National Meeting, Los Angeles, Calif., Sept. 25-30 (abstract).
{"url":"http://www.nap.edu/openbook.php?record_id=1219&page=113","timestamp":"2014-04-20T01:22:58Z","content_type":null,"content_length":"81950","record_id":"<urn:uuid:a0ca3a2a-3009-4406-96ff-522b25283664>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
El Segundo Statistics Tutor
Find an El Segundo Statistics Tutor

I am a very patient, steadfast, and enthusiastic woman with a good sense of humor who simply enjoys teaching. I believe in addressing each student's individual needs and/or learning disabilities to develop an effective tutoring program, emphasizing positive reinforcement. I work very well with chi...
8 Subjects: including statistics, SPSS, Microsoft Excel, psychology

...My International Marketing career took me to attend several conferences around the world. Speaking engagements were very important for my event strategy, so I am used to speaking to audiences coming from three main vertical markets: Government (Education, Local, State and Federal Government), Financ...
20 Subjects: including statistics, Spanish, physics, calculus

...I excelled in geometry in high school, and am comfortable applying the principles of geometry to higher levels of math. I started tutoring pre-calculus for the 2012 academic year and have had great results with my students thus far. I took five years of Spanish culminating in my junior year of high school when I scored a 5 on the AP Spanish exam.
18 Subjects: including statistics, chemistry, Spanish, biology

...This works especially well in math and science-related courses, but it has never led me astray in humanities courses, either. I have previously tutored two individuals in math, one for a month and one for a full semester. Both performed exceptionally well in the concepts I tutored them on, and ...
27 Subjects: including statistics, reading, physics, writing

...I have learned the key is patience and practice. I have taken 5 years of Spanish throughout high school and college. I also spent 3 months in Costa Rica learning Spanish.
8 Subjects: including statistics, Spanish, ESL/ESOL, algebra 1
{"url":"http://www.purplemath.com/El_Segundo_Statistics_tutors.php","timestamp":"2014-04-19T17:16:19Z","content_type":null,"content_length":"24185","record_id":"<urn:uuid:65e1fcbf-d82b-4f43-a47e-807a3b8eac49>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
A mathematical approach to physical problems: An interview with Rupert Frank
November 19th, 2013 in Other Sciences / Mathematics

Rupert Frank, professor of mathematics. Credit: Lance Hayashida/Caltech Marketing and Communications

Rupert Frank joined the Caltech faculty this spring as a professor of mathematics. Originally from Munich, Germany, Frank graduated from the Ludwig Maximilian University in his hometown in 2003 and received his PhD from the Royal Institute of Technology in Stockholm, Sweden, in 2007. After completing a postdoctoral position at Princeton University, he was hired as an instructor there and quickly worked his way up to assistant professor. Frank recently answered a few questions about his work at the intersection of mathematics and physics.

What do you work on?

I work in this area called mathematical physics. It involves taking things that we see and observe in nature and trying to explain them mathematically from first principles. In mathematics, people often say that they're doing algebra or geometry or something, where they are talking about the methods they are using. However, for us it's more that we use whatever methods we need in order to understand a concrete problem. It's much more problem-specific. For example, one thing that we still cannot explain—that we are actually really far from being able to explain—is the emergence of periodic structures; that is, structures that repeat themselves. It's clear in nature that it does happen. We see crystals, for example. But we still have no idea why this happens. It's embarrassing, really.

So how do you approach a problem like that?

We like to start, for example, with the rules of quantum mechanics—some axioms, which describe the state and the energy of a system. From there, we would like to see that periodic structures can emerge on a macroscopic scale. Sometimes we work with smaller dimensions—one-dimensional or two-dimensional models, not three-dimensional, as nature is. Or we work with discrete models where you assume that all objects can only sit at discrete sites; they cannot move continuously through space. There is a hope that by working with such models, one can reveal more about the overall system.

What problems are you currently addressing?

An important aspect of my work is symmetry and symmetry breaking. Periodicity is a particular case of symmetry. A problem that I'm always working on is how to explain superconductivity. Superconductivity is a quantum phenomenon that happens on a macroscopic scale, meaning that I can observe it with my bare eyes. [The phenomenon involves the electrical resistance of certain metals and ceramics dropping to zero when cooled below a particular critical temperature. This means such materials can conduct electricity for longer periods, more efficiently. They also repel magnetic fields.] But I cannot explain it with ordinary classical mechanics; I need quantum mechanics. So again, the point is how do we come up with a theory for superconductivity on a macroscopic scale from a microscopic model using the laws of quantum mechanics? And that has been understood, I would say, on a physical level, and there are models that work numerically very well, but mathematically it has not been clarified.

How would you say the discipline of mathematical physics informs both mathematics and physics?
Well, mathematics and physics have always been interrelated, and a lot of mathematics has been developed while trying to solve physical problems. I think physics, from a mathematics perspective, leads to interesting mathematical problems. You are trying to prove something, and it's typically related to some optimization problem—where you want to minimize energy costs or something. So it gives you a way of thinking. In terms of the benefit to physics, I think we can sometimes provide a different perspective. Physicists typically speak about what they consider to be typical cases within a model, whereas in mathematics, one usually works on the negative side—trying to exclude the atypical. So from time to time, we come up with problems that really require physical explanation that has not been there before.

How did you originally become interested in mathematics and physics?

Actually, both my mother and my father are mathematicians, and one of my brothers is a mathematician; the other is a computer scientist. So it was around when I was growing up, that's for sure. By my third year of university studies, I knew which field of mathematics I wanted to focus on. It can be called functional analysis, operator theory, or mathematical physics. And I saw that all of this was intrinsically related to quantum mechanics. To a certain extent, this field of mathematics was created to explain quantum mechanics. So it was clear that I had to go into physics.

Why did you decide to come to Caltech?

Well, it's a very nice place, and it's a smaller place. That gives you a lot of opportunities because you're not only one of the many. Everybody expects you to do something, and they help you to do it. That's something that I really appreciate.

Provided by California Institute of Technology

"A mathematical approach to physical problems: An interview with Rupert Frank." November 19th, 2013. http://phys.org/news/2013-11-mathematical-approach-physical-problems-rupert.html
{"url":"http://phys.org/print304070816.html","timestamp":"2014-04-20T11:36:25Z","content_type":null,"content_length":"10407","record_id":"<urn:uuid:30b03e92-f101-4ddf-bf39-4983ad64c5ae>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Lisa Randall: dreams of warped space-time - CERN Courier

Lisa Randall and the unknown dimensions

Lisa Randall is one of the world's leading specialists in extra dimensions and quantum gravity. She is also among the rare physicists who have written successful popular-science books. In this interview with Antonella Del Rosso, the physicist discusses the intriguing universe of extra dimensions. In particular, she considers the possibility of discovering gravitons or mini black holes at the LHC, and describes the reactions of the press and the public to her book "Warped Passages", in which she takes up the challenge of explaining complicated concepts to a broad audience.

Lisa Randall is the first female tenured theoretical physicist at Harvard University. This alone would probably be enough to raise the interest of most science journalists who are all too often confronted with the endless search for a female face who would look good in their newspapers, and make science somehow more human to non-scientific readers. Search her name in Google and read articles about her, then read her most recent book, and you realize that she is also one of the small band of physicists who can write popular science books. Then meet her, as I did at CERN, and you discover a no-nonsense person who finds it "normal" to deal with extra dimensions and parallel universes, as well as hidden gravitons and quantum gravity.

Randall has visited CERN many times, staying for several months from 1992 to 1993, when she worked on B physics and also on ideas in supersymmetry and supersymmetry breaking. These ideas have since evolved, and she is now one of the world's experts in the theory of extra dimensions, one of the solutions proposed for the puzzling question of quantum gravity. According to these theories, our universe could have extra dimensions beyond the four that we experience – three of space and one of time.

The idea of an extra dimension is simple to state, but how can we picture extra dimensions in our three-dimensional minds? As Randall concedes, explaining the extra dimensions is possible primarily through analogies, such as Edwin Abbott's analogy of Flatland. If you lived on a two-dimensional surface and could see only two dimensions, what would a three-dimensional object become for you? "In order to answer, you would have to explore your object in your two-dimensional view," she explains. "The slice would be two dimensional but the object would still be three dimensional." This is to say that, although extra dimensions are difficult to imagine in our limited three-dimensional world, we can nevertheless explore them.

Warping in a universe with extra dimensions would be an amazing discovery, but does Randall expect to find any evidence? The LHC, she explains, could hold the key. "The LHC will allow us to explore an energy scale never reached before – the TeV scale. We know there are questions about this particular scale. We know the simple Higgs theory is incomplete, so there should be something else around. That's why people think it should be supersymmetry or extra dimensions, something just explaining why the Higgs boson is as light as it is," she explains. Randall works in particular on the idea of warped geometry. If this is true, experiments at the LHC should see particles that travel in extra dimensions, the mass of which is around the tera-electron-volt scale that the LHC is studying.

One fascinating area of modern physics linked to extra dimensions is that of quantum gravity.
Gravity is the best known among the forces that we experience every day, yet there is no theory that can describe it at the quantum level. Gravity also still holds secrets experimentally, because its force-carrying particle, the graviton, remains hidden from view, but Randall's theories of extra dimensions could shed light here, too. Could the graviton be found in the additional dimensions, and therefore in the proton–proton collisions at the LHC? "We don't know for sure," says Randall, "but the Kaluza–Klein partner of the graviton – the partner of the graviton that travels in extra dimensions – might be accessible."

It seems that even for the theorists leading the field, the theory is a little tricky to understand. "You have one graviton that doesn't have any mass," she explains, "and it acts just as a graviton is supposed to act in four dimensions. And you have another graviton that has momentum in the extra dimensions: it will look like a massive graviton according to four-dimensional physics. The particle will have momentum in the fifth dimension and this is the part that we will be able to see."

The quantum effects of gravity have also led theorists to talk of the possibility that black holes could be formed at the LHC, but Randall remains sceptical. "I don't really think we will find black holes at the LHC," she says. "I think you'd have to get to even higher energy." It is more likely in her opinion that experiments will see signs of quantum gravity emerging from a low-energy quantum gravity scale in higher dimensions. However, she admits: "If we really were able to have enough energy to see a black hole, it would be exciting."

A black hole that you could study would be very interesting, indeed, but also scary, because black holes have always been described as "matter-eaters". However, there is nothing to fear. Massive black holes can only be created in the universe by the collapse of massive stars. These contain enormous amounts of gravitational energy, which pull in surrounding matter. Given the collision energy at the LHC, only microscopic and rapidly evaporating black holes can be produced in the collisions. Even if this does occur, the black holes will not be harmful: cosmic rays with energies much higher than at the LHC would already have produced many more black holes in their collisions with Earth and other astrophysical objects. The state of our universe is therefore the most powerful proof that there will be no danger from these high-energy collisions, which occur continuously on Earth.

So much for black holes, but I am still full of curiosity about Randall. What, for example, originally sparked her interest in physics? "I actually liked math first more than physics," she says, "because when I was younger that is what you got introduced to first. I loved applying math a little bit more to the real world – at least what I hope is the real world."

Now, as a leading woman in a male-dominated research field, and as the author of a popular book, Warped Passages (CERN Courier December 2005 p51), she is the focus of media attention. She finds some of this surprising but notes that it's not just attention to her but to the field in general. One of the motivations she had for writing her book was that people are excited about the LHC. She saw the chance to give them the opportunity to find out more about what it will do. "These are difficult concepts to express. You could give an easy explanation or you could try to do it more carefully in a book."
"One of the very rewarding things is that a lot of people who have read my book have said they can't wait for the LHC; they can't wait to see what they are going to find. So it is exciting when you give a lecture and thousands of people are there – it's exciting because you know that so many people are interested."

On the other hand, she finds some of the specific types of reporting disturbing, because it shows how far society still has to go: "We haven't reached the point where it's usual for women to be in the field."

In addition to her work on black holes, gravity and so on, Randall is currently working on ideas of how to look for different models at the LHC, and how to look for heavier objects, such as the graviton, that might decay into energetic top quarks. She is also trying to explore alternative theories. "I'm not sure how far we'll go in things like supersymmetry," she says, "I'm playing around with models and ways to search for it at the LHC."

Yes, physics is about playing around with ideas – ideas that nobody has ever had before but that have to be tested experimentally. The LHC will shed light on some of the current mysteries, and Randall, who like many others has played around with ideas for years, can't wait for this machine to produce the experimental answers.

• For Lisa Randall's lectures at CERN in March 2008 on "Warped Extra-Dimensional Opportunities and Signatures", see http://indico.cern.ch/conferenceDisplay.py?confId=28978, http://indico.cern.ch/conferenceDisplay.py?confId=28979 and http://indico.cern.ch/conferenceDisplay.py?confId=28980.
{"url":"http://cerncourier.com/cws/article/cern/34938","timestamp":"2014-04-18T13:10:10Z","content_type":null,"content_length":"35398","record_id":"<urn:uuid:ae79e2cc-5d02-4c0f-a2a7-c87c8bf267e0>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
The Input To The Circuit In The Figure Below Is ... | Chegg.com

Image text transcribed for accessibility: The input to the circuit in the figure below is vs(t) = V_A[sin 10^5 t]u(t) V. Derive an expression for the output voltage when the OP AMP operates in its linear range. The OP AMP saturates at 12 V. What is the maximum value of V_A for linear operation? The input to the circuit in the figure below is vs(t) = 5[sin 10^6 t]u(t) V. Derive an expression for the output voltage when the OP AMP operates in its linear range. The OP AMP saturates at 15 V. What is the maximum value of the feedback resistor for linear operation?

Electrical Engineering
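The figure, and hence the resistor network, is not included in the transcription, so only the shape of the answer can be sketched. For any linear op-amp circuit driven by this input, the output is a scaled, phase-shifted sinusoid, and saturation bounds the input amplitude (K and phi below are placeholders determined by the missing circuit):

\[ v_o(t) = K\,V_A \sin(10^5 t + \varphi)\,u(t) \quad\Rightarrow\quad |v_o|_{\max} = |K|\,V_A \le V_{\mathrm{sat}} \quad\Rightarrow\quad V_{A,\max} = \frac{V_{\mathrm{sat}}}{|K|}. \]

If, for instance, the circuit were a plain inverting amplifier with \(K = -R_F/R_1\) (an assumption, not something the transcription states), the first part would give \(V_{A,\max} = 12\,R_1/R_F\), and the second part would require \(5\,R_F/R_1 \le 15\), i.e. \(R_F \le 3 R_1\).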
{"url":"http://www.chegg.com/homework-help/questions-and-answers/input-circuit-figure-vs-t-va-sin-105t-u-t-v-derive-expression-output-voltage-op-amp-operat-q2303817","timestamp":"2014-04-20T14:12:54Z","content_type":null,"content_length":"20954","record_id":"<urn:uuid:b4f16e55-50b2-4456-abc9-69c8f6d94597>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
Conal Elliott

I have another paper draft for submission to ICFP 2009. This one is called Beautiful differentiation. The paper is a culmination of the several posts I've written on derivatives and automatic differentiation (AD). I'm happy with how the derivation keeps getting simpler. Now I've boiled extremely general higher-order AD down to a Functor and Applicative morphism. I'd love to get some readings and feedback. I'm a bit over the page limit, so I'll have to do some trimming before submitting. The abstract:

Automatic differentiation (AD) is a precise, efficient, and convenient method for computing derivatives of functions. Its implementation can be quite simple even when extended to compute all of the higher-order derivatives as well. The higher-dimensional case has also been tackled, though with extra complexity. This paper develops an implementation of higher-dimensional, higher-order differentiation in the extremely general and elegant setting of calculus on manifolds and derives that implementation from a simple and precise specification. In order to motivate and discover the implementation, the paper poses the question "What does AD mean, independently of implementation?" An answer arises in the form of naturality of sampling a function and its derivative. Automatic differentiation flows out of this naturality condition, together with the chain rule. Graduating from first-order to higher-order AD corresponds to sampling all derivatives instead of just one. Next, the notion of a derivative is generalized via the notions of vector space and linear maps. The specification of AD adapts to this elegant and very general setting, which even simplifies the development.

You can get the paper and see current errata here. The submission deadline is March 2, so comments before then are most helpful to me. Enjoy, and thanks!

I just reread Jason Foutz's post Higher order multivariate automatic differentiation in Haskell, as I'm thinking about this topic again. I like his trick of using an IntMap to hold the partial derivatives and (recursively) the partials of those partials, etc. Some thoughts:

• I bet one can eliminate the constant (C) case in Jason's representation, and hence 3/4 of the cases to handle, without much loss in performance. He already has a fairly efficient representation of constants, which is a D with an empty IntMap.
• I imagine there's also a nice generalization of the code for combining two finite maps used in his third multiply case. The code's meaning and correctness follow from a model for those maps as total functions with missing elements denoting a default value (zero in this case).
• Jason's data type reminds me of a sparse matrix representation, but cooler in how it's infinitely nested. Perhaps depth n (starting with zero) is a sparse n-dimensional matrix.
• Finally, I suspect there's a close connection between Jason's IntMap-based implementation and my LinearMap-based implementation described in Higher-dimensional, higher-order derivatives, functionally and in Simpler, more efficient, functional linear maps. For the case of R^n, my formulation uses a trie with entries for n basis elements, while Jason's uses an IntMap (which is also a trie) with n entries (counting any implicit zeros). I suspect Jason's formulation is more efficient (since it optimizes the constant case), while mine is more statically typed and more flexible (since it handles more than R^n).

For optimizing constants, I think I'd prefer having a single constructor with a Maybe for the derivatives, to eliminate code duplication. I am still trying to understand the paper Lazy Multivariate Higher-Order Forward-Mode AD, with its management of various epsilons.

A final remark: I prefer the term "higher-dimensional" over the traditional "multivariate". I hear classic syntax/semantics confusion in the latter.
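For concreteness, here is a sketch of the kind of representation under discussion (my guess at its shape, not Jason's actual code), without the Maybe-based constant optimization:

import qualified Data.IntMap as IM

-- A value together with its partial derivatives, keyed by variable index.
-- Each partial is itself a D, so all higher-order partials come along free.
data D a = D a (IM.IntMap (D a))

-- A constant is a D with an empty map; a variable has derivative 1 in its slot.
constD :: a -> D a
constD x = D x IM.empty

varD :: Num a => Int -> a -> D a
varD i x = D x (IM.singleton i (constD 1))

instance Num a => Num (D a) where
  D x dx + D y dy = D (x + y) (IM.unionWith (+) dx dy)
  u@(D x dx) * v@(D y dy) =                        -- product rule, tower-style:
    D (x * y) (IM.unionWith (+) (fmap (* v) dx)    -- d(uv) = du*v + u*dv
                                (fmap (u *) dy))
  negate (D x dx) = D (negate x) (fmap negate dx)
  fromInteger n   = constD (fromInteger n)
  abs    = error "abs: omitted from the sketch"
  signum = error "signum: omitted from the sketch"

With this shape, applying a numeric function to varD 0 x and varD 1 y yields its value together with every mixed higher-order partial, computed lazily, and IM.unionWith (+) is exactly the finite-map combination with missing entries read as zero.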
For optimizing constants, I think I’d prefer having a single constructor with a Maybe for the derivatives, to eliminate code duplication. I am still trying to understand the paper Lazy Multivariate Higher-Order Forward-Mode AD, with its management of various epsilons. A final remark: I prefer the term “higher-dimensional” over the traditional “multivariate”. I hear classic syntax/semantics confusion in the latter. A previous post described a data type of functional linear maps. As Andy Gill pointed out, we had a heck of a time trying to get good performance. This note describes a new representation that is very simple and much more efficient. It’s terribly obvious in retrospect but took me a good while to stumble onto. The Haskell module described here is part of the vector-space library (version 0.5 or later) and requires ghc version 6.10 or better (for associated types). • 2008-11-09: Changed remarks about versions. The vector-space version 0.5 depends on ghc 6.10. • 2008-10-21: Fixed the vector-space library link in the teaser. Continue reading ‘Simpler, more efficient, functional linear maps’ » Two earlier posts described a simple and general notion of derivative that unifies the many concrete notions taught in traditional calculus courses. All of those variations turn out to be concrete representations of the single abstract notion of a linear map. Correspondingly, the various forms of mulitplication in chain rules all turn out to be implementations of composition of linear maps. For simplicity, I suggested a direct implementation of linear maps as functions. Unfortunately, that direct representation thwarts efficiency, since functions, unlike data structures, do not cache by This post presents a data representation of linear maps that makes crucial use of (a) linearity and (b) the recently added language feature indexed type families (“associated types”). For a while now, I’ve wondered if a library for linear maps could replace and generalize matrix libraries. After all, matrices represent of a restricted class of linear maps. Unlike conventional matrix libraries, however, the linear map library described in this post captures matrix/linear-map dimensions via static typing. The composition function defined below statically enforces the conformability property required of matrix multiplication (which implements linear map composition). Likewise, conformance for addition of linear maps is also enforced simply and statically. Moreover, with sufficiently sophisticated coaxing of the Haskell compiler, of the sort Don Stewart does, perhaps a library like this one could also have terrific performance. (It doesn’t yet.) You can read and try out the code for this post in the module Data.LinearMap in version 0.2.0 or later of the vector-space package. That module also contains an implementation of linear map composition, as well as Functor-like and Applicative-like operations. Andy Gill has been helping me get to the bottom of some some severe performance problems, apparently involving huge amounts of redundant dictionary creation. • 2008-06-04: Brief explanation of the associated data type declaration. The post Beautiful differentiation showed some lovely code that makes it easy to compute not just the values of user-written functions, but also all of its derivatives (infinitely many). This elegant technique is limited, however, to functions over a scalar (one-dimensional) domain. 
Next, we explored what it means to transcend that limitation, asking and answering the question What is a derivative, really? The answer to that question is that derivative values are linear maps saying how small input changes result in output changes. This answer allows us to unify several different notions of derivatives and their corresponding chain rules into a single simple and powerful form. This third post combines the ideas from the two previous posts, to easily compute infinitely many derivatives of functions over arbitrary-dimensional domains. The code shown here is part of a new Haskell library, which you can download and play with or peruse on the web. Continue reading ‘Higher-dimensional, higher-order derivatives, functionally’ » The post Beautiful differentiation showed how easily and beautifully one can construct an infinite tower of derivative values in Haskell programs, while computing plain old values. The trick (from Jerzy Karczmarczuk) was to overload numeric operators to operate on the following (co)recursive type: data Dif b = D b (Dif b) This representation, however, works only when differentiating functions from a scalar (one-dimensional) domain, i.e., functions of type a -> b for a scalar type a. The reason for this limitation is that only in those cases can the type of derivative values be identified with the type of regular values. Consider a function f :: (R,R) -> R, where R is, say, Double. The value of f at a domain value (x,y) has type R, but the derivative of f consists of two partial derivatives. Moreover, the second derivative consists of four partial second-order derivatives (or three, depending how you count). A function f :: (R,R) -> (R,R,R) also has two partial derivatives at each point (x,y), each of which is a triple. That pair of triples is commonly written as a two-by-three matrix. Each of these situations has its own derivative shape and its own chain rule (for the derivative of function compositions), using plain-old multiplication, scalar-times-vector, vector-dot-vector, matrix-times-vector, or matrix-times-matrix. Second derivatives are more complex and varied. How many forms of derivatives and chain rules are enough? Are we doomed to work with a plethora of increasingly complex types of derivatives, as well as the diverse chain rules needed to accommodate all compatible pairs of derivatives? Fortunately, not. There is a single, simple, unifying generalization. By reconsidering what we mean by a derivative value, we can see that these various forms are all representations of a single notion, and all the chain rules mean the same thing on the meanings of the representations. This blog post is about that unifying view of derivatives. • 2008-05-20: There are several comments about this post on reddit. • 2008-05-20: Renamed derivative operator from D to deriv to avoid confusion with the data constructor for derivative towers. • 2008-05-20: Renamed linear map type from (:->) to (:-*) to make it visually closer to a standard notation. Comparing formulations of higher-dimensional, higher-order derivatives Simpler, more efficient, functional linear maps Functional linear maps Higher-dimensional, higher-order derivatives, functionally What is a derivative, really?
{"url":"http://conal.net/blog/tag/linear-map","timestamp":"2014-04-17T19:13:51Z","content_type":null,"content_length":"83240","record_id":"<urn:uuid:291a6007-777a-45ca-bc65-2f06436e7b40>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Non-Linear Modelling Concrete Slab Problem

October 24th 2007, 07:52 PM  #1
I hope someone can help me because I'm stumped. This problem is in the maths applied level 3 & 4. The concrete slab in the shape of a rectangle shown on the right has an area of 63 m^2. Form a quadratic equation and solve for "x", and then determine the width and length of the rectangle in order for it to be paved. (You can't use trial and error or guessing.)
Hope someone can help me out. Cheers, OLLiE
Last edited by ollieman; October 24th 2007 at 10:13 PM. Reason: fix up the picture

October 24th 2007, 11:36 PM  #2
The slab is a rectangle, so,
Area, A = 3x(x + 4) = 63
3x^2 + 12x - 63 = 0
Divide both sides by 3,
x^2 + 4x - 21 = 0
(x + 7)(x - 3) = 0
x = -7 or 3 meters
Now I see what you mean by "in order for it to be paved". The x = -7 meters cannot be used because there are no negative measurements. So, x = 3 m.
width = 3x = 3*3 = 9 m.
length = (x + 4) = (3 + 4) = 7 m.
Umm, again, therefore, width = 7 meters, and length = 9 meters. ---answer.

October 25th 2007, 12:48 AM  #3
thanks ticbol
Thanks heaps, ticbol. I'm just not yet very good at written problems. I'm sure you didn't even have to think to blink to work it out. Cheers, OLLiE
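As a cross-check on the factoring step above (an addition for verification, not part of the original thread), the quadratic formula gives the same roots:

\[ x = \frac{-4 \pm \sqrt{4^2 - 4(1)(-21)}}{2} = \frac{-4 \pm \sqrt{100}}{2} = \frac{-4 \pm 10}{2} = 3 \ \text{or} \ -7, \]

and the positive root checks out, since \( 3x(x + 4) = 9 \times 7 = 63\ \mathrm{m^2} \).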
{"url":"http://mathhelpforum.com/algebra/21256-non-linear-modelling-concrete-slab-problem.html","timestamp":"2014-04-16T10:14:20Z","content_type":null,"content_length":"35272","record_id":"<urn:uuid:2a598387-42b1-4fb5-96f7-3ab83479863c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
ml/min into mm/s
paul stoodley P.Stoodley at exeter.ac.uk
Wed Jul 22 07:54:13 EST 1998

You have flow rate and want to convert it to a flow velocity, average flow velocity to be correct. The flow rate will be the same throughout the pipeline regardless of geometry, but the flow velocity will not. What you need to do is measure (if the geometry is simple) or guesstimate (if the geometry is complex) the cross-sectional area of the component in the system where you want the flow velocity, which I guess will be the flow cell. Then divide the flow rate by the cross-sectional area, making sure your units are compatible. 1 ml can be taken as 1 cm^3 for water (assuming a density of 1000 kg/m^3), which most people do for reasonably dilute media.

I am not sure what you mean by "0.2mm/sec as a usual speed". The flow velocity will be very different in different systems; for example, in water distribution systems and heat exchangers you might expect 0.2 - 3 m/s. Since the flow velocity and reactor geometry determine the shear and mixing in a flow cell, you should use a flow comparable to the system you are trying to model. To do this you can use the Reynolds number (although there are other parameters that may be more relevant). The Reynolds number is the ratio between the inertial forces (density (rho) * flow velocity (u)) and the viscous forces (viscosity (mu)). The inertia tends to push the water along while the viscosity tends to try and stop it.

Re = u * rho * l / mu

l is a characteristic length and is a scaling factor. For a tube, l = the diameter; for a rectangular tube, l = 4*CSA/WP, where CSA = cross-sectional area and WP = wetted perimeter = 2*(width + height). Re can be used as a comparative parameter over a wide range of flowing systems. Generally flow is laminar below Re 1000 and turbulent above Re 3000.

Hope this helps.

On 21 Jul 1998 08:13:51 -0700 Sarah Boyle <slb7 at ukc.ac.uk>
> Dear biofilmers
> could anyone provide me with an equation for converting ml/min into
> mm/s; when using a peristaltic pump and a flow cell.
> All literature seems to quote 0.2mm/sec as a usual speed, but I am
> uncertain as to how they arrive at this figure when pump speeds are in
> rpm and flow rates are in ml/min.
> Surely 0.2mm/s could be highly variable depending on the tube radius?
> Help
> Sarah Boyle
> Research School of Biosciences
> University of Kent
> CT2 7NJ
> 01227 764000 ext. 3023

Paul Stoodley
Environmental Microbiology Research
Exeter University, Biological Sciences
Hatherly Laboratories, Prince of Wales Road, Exeter EX4 4PS, UK
Tel: 01392 264348   Fax: 01392 263700
email: p.stoodley at exeter.ac.uk

To reply to the group as well as to the originator, make sure that the address biofilms at net.bio.net is included in the "To:" field. See the BIOFILMS homepage at http://www.im.dtu.dk/biofilms for info on how to (un)subscribe and post to the Biofilms newsgroup.
More information about the Biofilms mailing list
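To make the conversion concrete, a worked example with made-up flow-cell dimensions (hypothetical, not the dimensions of the system in the question): for a rectangular channel 10 mm wide by 1 mm deep,

\[ A = 10\ \mathrm{mm} \times 1\ \mathrm{mm} = 10\ \mathrm{mm^2}, \qquad u = \frac{Q}{A} = \frac{0.12\ \mathrm{ml/min}}{10\ \mathrm{mm^2}} = \frac{2\ \mathrm{mm^3/s}}{10\ \mathrm{mm^2}} = 0.2\ \mathrm{mm/s}. \]

So the often-quoted 0.2 mm/s corresponds to a pump flow rate of 0.12 ml/min in this particular geometry, and, as suspected in the question, the same ml/min gives a different velocity in any other cross-section.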
{"url":"http://www.bio.net/bionet/mm/biofilms/1998-July/000331.html","timestamp":"2014-04-16T21:19:31Z","content_type":null,"content_length":"5372","record_id":"<urn:uuid:c6d649f8-b83c-4282-8368-be9a0e186969>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
Irvington, NJ Trigonometry Tutor
Find an Irvington, NJ Trigonometry Tutor

...I am a motivated teacher who can teach to your understanding. I am an educator who motivates and educates in a fun, focused atmosphere. I look forward to helping and educating all students who are willing to learn.
6 Subjects: including trigonometry, calculus, algebra 1, geometry

...I will find presentations that unlock the mystery and fun of mathematics for you! I will come to your home or meet you at a mutually convenient location (such as the library). I am happy to work with individuals or groups. Group rates can be negotiated.
10 Subjects: including trigonometry, calculus, statistics, geometry

I believe that everyone can do well in mathematics and science. It does, however, require effort. I look forward to showing you the way, but the effort has to come from you.
11 Subjects: including trigonometry, calculus, GRE, GMAT

...In addition to having scored a 100 on the NYS regents exam in this subject, during the 2012-2013 school year I worked as a home school math teacher/tutor for a student who used the Saxon curriculum, so this material is extremely fresh in my mind. I am intimately familiar with the NYS regents cur...
11 Subjects: including trigonometry, Spanish, algebra 2, algebra 1

...I encourage the student to ask questions, and I will always answer them to the fullest extent of my ability. I believe that sort of individualized attention is what sets tutoring apart from classroom lessons. Being able to spend as much time as is needed on each topic is of vital importance to understanding math.
21 Subjects: including trigonometry, chemistry, physics, calculus

Related Irvington, NJ Tutors
Irvington, NJ Accounting Tutors
Irvington, NJ ACT Tutors
Irvington, NJ Algebra Tutors
Irvington, NJ Algebra 2 Tutors
Irvington, NJ Calculus Tutors
Irvington, NJ Geometry Tutors
Irvington, NJ Math Tutors
Irvington, NJ Prealgebra Tutors
Irvington, NJ Precalculus Tutors
Irvington, NJ SAT Tutors
Irvington, NJ SAT Math Tutors
Irvington, NJ Science Tutors
Irvington, NJ Statistics Tutors
Irvington, NJ Trigonometry Tutors

Nearby Cities With Trigonometry Tutors
Bayonne Trigonometry Tutors
Belleville, NJ Trigonometry Tutors
Bloomfield, NJ Trigonometry Tutors
East Orange Trigonometry Tutors
Elizabeth, NJ Trigonometry Tutors
Hillside, NJ Trigonometry Tutors
Kearny, NJ Trigonometry Tutors
Livingston, NJ Trigonometry Tutors
Maplewood, NJ Trigonometry Tutors
Newark, NJ Trigonometry Tutors
Orange, NJ Trigonometry Tutors
South Kearny, NJ Trigonometry Tutors
South Orange Trigonometry Tutors
Union Center, NJ Trigonometry Tutors
Union, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/Irvington_NJ_Trigonometry_tutors.php","timestamp":"2014-04-20T07:04:45Z","content_type":null,"content_length":"24128","record_id":"<urn:uuid:64affdf7-afa5-4dba-840b-0bfd915b99ed>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

Best Response:
Système international, as it is called in French: the "International System of Units". This is a standard system of measurement units. Did you ever think, "What is 1 kg of mass?" Scientists came up with a standard amount of a particular substance and said that this is 1 kg. Hence, when we say 5 kg of something, we mean 5 lots of that particular amount of that something. This was done to standardize measurement. Otherwise, people could have taken any amount of something and said this is 1 kg.

Best Response:
oh i see..thanks
{"url":"http://openstudy.com/updates/4dcb04a340ec8b0b945d1117","timestamp":"2014-04-20T03:12:52Z","content_type":null,"content_length":"30315","record_id":"<urn:uuid:a48f7634-81b3-4244-b80d-b1c2bbe217ad>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
A book for quantum physics
Sorry, but I'm about to widen your search still further: 'Quantum Mechanics' by F. Mandl. I like it, it explains things well, but it takes a lot of reading. The Eisberg and Resnick is really good, but I find it hard work, and it is expensive. I use it as a reference, whereas the Mandl I can actually read, albeit slowly and with a lot of thinking.
As to which book is best, it really depends on which style you actually prefer. It might be a good idea to try to see these books in a library if you can before deciding to buy one. Some styles can be fairly dry and hard to read for one person, but give another person exactly what they were looking for. Good luck!
{"url":"http://www.physicsforums.com/showthread.php?p=4157527","timestamp":"2014-04-20T05:46:46Z","content_type":null,"content_length":"66738","record_id":"<urn:uuid:e762e8de-f736-4126-9089-49dca9e2cb7a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Understanding the Definition of bits per second (bits/sec) from the show interfaces Command Output

This document answers the question "What is the definition of bits/sec in the output of the show interfaces command?"

There are no specific requirements for this document. This document is not restricted to specific software and hardware versions. The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command. For more information on document conventions, refer to the Cisco Technical Tips Conventions.

Bits per second includes all packet/frame overhead. It does not include stuffed zeros. The size of each frame is added to the total bytes of output. Take the difference every 5 seconds to calculate the rate.

The algorithm for the five-minute moving average is:

new average = ((average - interval) * exp(-t/C)) + interval

• t is five seconds, and C is five minutes. exp(-5/(60*5)) == .983.
• new average = the value we are trying to compute.
• average = the "new average" value calculated from the previous sample.
• interval = the value of the current sample.
• (.983) is the weighting factor.

Here, you take the average from the last sample, less what was gathered in this sample, and weight that down by a decay factor. This quantity is referred to as a "historical average". To the weighted (decayed) historical average, add the current sample, and come up with a new weighted (decayed) average. The interval is the value for some given variable in the five-second sample interval. The interval can be load, reliability, or packets per second. These are the three values to which we apply exponential decay.

The average value minus the current value is the deviation of the sample from the average. You must weight this by .983, and add it to the current value. If the current value is greater than the average, this results in a negative number, and causes the "average" value to rise less quickly on traffic spikes. Conversely, if the current value is less than the running average, it results in a positive number, and ensures that the "average" value falls less rapidly if there is a sudden stoppage of traffic.

Imagine that traffic is stopped altogether, after it has been 100% for an infinite period before such stoppage. In other words, the average rose slowly to 100%, and stayed there. The interval is always 0 for the "no traffic" scenario. Then, over five-second intervals, the exponentially weighted utilization goes from

1.0 to .983 to .983^2 to .983^3 ... to .983^n

that is, approximately 1.0, 0.983, 0.966, 0.950, and so on. In this example, utilization decays from 100% to 1% in about 276 intervals (since .983^276 is roughly 0.01), which is about 1380 seconds, or 23 minutes. Conversely, if you start from 0 load and apply 100% load, the exponentially decayed average takes the same 23 minutes or so to reach 99%. As n gets large (with time), the average slowly falls (asymptotically) to zero for no traffic, or climbs to 100% for maximum traffic.

This method prevents traffic spikes from skewing statistics about the "average". We are "damping" the wild fluctuations of the network traffic. In the real world, where things are not so black and white, the exponentially decayed average gives a picture of your average network utilization untainted by wild spikes.
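To make the update rule concrete, here is a small R sketch of the decayed average (an illustration of the formula above, not Cisco's actual code; the 100%-load scenario mirrors the example in the text):

```r
# Exponentially decayed five-minute average, sampled every 5 seconds
decay <- exp(-5 / (60 * 5))   # ~0.983, the weighting factor

update_average <- function(average, interval) {
  # decayed historical deviation plus the current sample
  (average - interval) * decay + interval
}

# Step response: load jumps from 0% to 100% and stays there
average <- 0
for (n in 1:276) {            # 276 five-second intervals, about 23 minutes
  average <- update_average(average, interval = 1.0)
}
average                        # ~0.99: the decayed average reaches 99%
```

Running the same loop with interval = 0 after a long period at 100% load shows the symmetric decay back toward zero.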
{"url":"http://www.cisco.com/c/en/us/support/docs/ios-nx-os-software/ios-software-releases-110/12816-3.html","timestamp":"2014-04-19T16:48:46Z","content_type":null,"content_length":"49281","record_id":"<urn:uuid:b7ebd32f-6eef-4781-a724-8f2c6de1af20>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
Non-Myopic Active Learning: A Reinforcement Learning Approach

Active learning considers the problem of actively choosing the training data. This is particularly useful in settings where the training data is limited or comes with a price, and therefore the learner needs to be "economical" in its data usage. Active learning can be particularly challenging in settings where the cost of the data varies, the learner has only partial control over the data it receives, and the value of each data point depends on the information captured by the training data already received. In such situations, non-myopic strategies that take into account the long-term effects of each data selection are desirable. In this talk, I will describe how non-myopic active learning can be naturally formulated as a reinforcement learning problem. This formulation is particularly useful for dealing with the exploration-exploitation dilemma that arises when the learner hesitates between selecting data that minimizes the immediate cost (exploitation) and selecting data that maximizes the long-term information gain (exploration). I will describe a Bayesian approach to optimally trade off exploitation and exploration. I will also show how to derive an analytic solution for discrete problems and an algorithm called BEETLE.

Presented by Pascal Poupart, University of Waterloo
Google Tech Talk, March 16, 2009
{"url":"http://www.bestechvideos.com/2009/03/31/non-myopic-active-learning-a-reinforcement-learning-approach","timestamp":"2014-04-20T23:51:21Z","content_type":null,"content_length":"21060","record_id":"<urn:uuid:e846e751-1ddd-4537-91fa-dcd470b36998>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
Video Tutorial on Robust Standard Errors
April 12, 2011 By Tony Cookson

Update: I have included a modified version of this summaryR() command as part of my package tonymisc, which extends mtable() to report robust standard errors. The tonymisc package is available on CRAN through the install.packages() command.

If you have the right R commands at your disposal, it is simple to correct for heteroskedasticity using the robust correction that is commonly used among economists. I recorded a video tutorial to describe the simplest (and most flexible) way I know to get R to compute robust standard errors.

The key is to use a "summary-style" command that has an option to correct for heteroskedasticity. The command I like to use is called summaryR(). Here is the script file with the summaryR() command. I found this function on an R-help discussion board where several people were answering someone's question about extending the summary.lm() command. I deserve none of the credit for writing this (credit goes to John Fox), but I consider it my duty to point out how nice this function is. I demonstrate how to use the function in this video.

Here is the script file I used in the video, and here's a link to the data set.
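For readers who want a quick start without the video, here is a minimal sketch of the same idea using the sandwich and lmtest packages (an alternative route, not the summaryR() function from the post itself; the toy data below are invented):

```r
library(sandwich)  # heteroskedasticity-consistent covariance estimators
library(lmtest)    # coeftest() accepts a user-supplied vcov matrix

# Toy data whose error variance grows with x
set.seed(1)
x <- runif(200)
y <- 1 + 2 * x + rnorm(200, sd = x)
fit <- lm(y ~ x)

# Conventional standard errors
summary(fit)$coefficients

# White/Huber robust standard errors (HC1 matches Stata's default "robust")
coeftest(fit, vcov. = vcovHC(fit, type = "HC1"))
```

The point estimates are unchanged; only the standard errors, t-statistics, and p-values are recomputed under the robust covariance estimate.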
{"url":"http://www.r-bloggers.com/video-tutorial-on-robust-standard-errors/","timestamp":"2014-04-19T14:42:17Z","content_type":null,"content_length":"35986","record_id":"<urn:uuid:a7af4567-ada7-413f-8d9a-46b485a9317e>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
6th order Polynomial

May 18th 2009, 01:43 PM, #1
I am looking for the roots of the following equation:
[The equation was posted as an image and did not survive extraction.]
I was told to use the substitution $x=s^2$ but I still can't figure out how to do it. This has to be solved in an exam as part of a bigger question and should not take longer than a minute, so I can't go for other approaches.

May 18th 2009, 02:43 PM, #2
Did the original equation have rational coefficients, possibly $-\frac{64}{27} x^3 - \frac{32}{9} x^2 - \frac{4}{3} x + 1 = 0$? If so then you can rescale by the substitution $x = \frac{3}{4} y$ so that the equation is transformed to $y^3 + 2 y^2 + y - 1 = 0$. Sadly, if this is the case then there are no easy roots to find via simple analytical methods (unless you happen to know the cumbersome general solution for a cubic), so the only approach available would be to use a numerical method such as the Newton-Raphson method.
So the bottom line is that this will most likely need a numerical method. To do this I recommend that you first differentiate to find the maximum and minimum coordinates. From these values you should have a rough idea of where the single real root is. You can then put this first good guess into your Newton-Raphson iteration to find the root. Finally, don't forget to work backwards to get back to $s$.
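As a concrete illustration of the numerical route suggested in the reply, here is a small R sketch applying Newton-Raphson to the rescaled cubic $y^3 + 2y^2 + y - 1 = 0$ (not from the original thread; the starting guess is my own, chosen because $f(0) = -1$ and $f(1) = 3$ bracket the single real root):

```r
f      <- function(y) y^3 + 2 * y^2 + y - 1
fprime <- function(y) 3 * y^2 + 4 * y + 1

y <- 0.5                         # initial guess: f changes sign on (0, 1)
for (i in 1:20) {
  step <- f(y) / fprime(y)       # Newton-Raphson update
  y <- y - step
  if (abs(step) < 1e-12) break   # converged
}
y              # ~0.46557, the real root of the cubic in y
x <- (3/4) * y # undo the rescaling x = (3/4) y
# and s = +/- sqrt(x) if the original substitution was x = s^2
```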
{"url":"http://mathhelpforum.com/pre-calculus/89531-6th-order-polynomial.html","timestamp":"2014-04-20T21:38:46Z","content_type":null,"content_length":"32926","record_id":"<urn:uuid:e795f5a4-7efa-46d2-8f09-fc44e3a9132c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
SUMMARY: Operational semantics and compiler generation.
Hermano Moura <moura@dcs.glasgow.ac.uk>
Mon, 15 Mar 1993 15:44:57 GMT
From comp.compilers | List of all articles for this month |
Newsgroups: comp.compilers
From: Hermano Moura <moura@dcs.glasgow.ac.uk>
Keywords: summary, semantics
Organization: Compilers Central
References: 93-02-154
Date: Mon, 15 Mar 1993 15:44:57 GMT

Many thanks to all the kind people who replied to my message about the use of operational semantics for semantics-directed compiler generation. Certainly all the messages will be of great help. As many people demonstrated interest, I include below all the received messages. -- Hermano Moura

Contribution from John Michael Ashley <jashley@olympus.cs.indiana.edu>: The following is the bibliography from the action semantics mailing list. The book "Action Semantics" has a fairly large bibliography (although probably a subset of this one). Lately, Gurevich has been looking at evolving algebras as an operational semantics, but I don't believe there has been any work in generating compilers from such specifications. The mentioned bibliography (action.bib) is available from ftp.daimi.aau.dk under /pub/action/

Contribution from Kurt Bischoff <bischoff@cs.iastate.edu>: I'm not quite sure what you're looking for ... I've built a tool that generates semantic analyzers/compilers from attribute grammars. It generalizes the syntaxes and functionalities of Yacc and Lex. I can send some information if you're interested.

Contribution from Arie van Deursen <Arie.van.Deursen@nl.cwi>: I know of some references which may be of some interest for you. In Edinburgh D. Berry had his Ph.D. thesis titled ``Generating Program Animators from Programming Language Semantics'' (1991). He describes semantics using SOS. Moreover, at INRIA Sophia-Antipolis the specification formalism Typol is in heavy use; officially, Typol is ``natural semantics'', but that is very close to SOS. Typol specifications can be executed using the Centaur system. Reference: G. Kahn, ``Natural Semantics'', TACS4, 1987, LNCS 247, pp 22-39. Finally, in our group at CWI - Amsterdam we have a lot of experience with algebraic specifications which we execute as term rewriting systems. In practice, an executable (functional) term rewriting specification is very similar to a definition using structural operational semantics. At the end of the mail, I appended our default "advertisement" folder. Perhaps it's useful.

The ASF+SDF Project
To allow for programming environments (integrated collections of tools) that are both easy to obtain and provably correct, it is investigated how tools can be generated from algebraic specifications of programming languages.

Algebraic Specification using ASF+SDF
All specifications are algebraic specifications, using the ASF+SDF formalism. The ASF+SDF formalism combines the elder "Algebraic Specification Formalism" with the "Syntax Definition Formalism". By viewing signatures also as grammars, concrete syntax can be used for terms (e.g., in the equations). The formalism supports modularization, conditional equations (both positive and negative), and built-in associative lists. The formalism is suited to provide specifications for arbitrary abstract data types (traditional algebraic specification), as well as definitions of any (formal) language (e.g. programming, query, text-processing, specification, etc.).

The ASF+SDF Meta-environment
A system, called the ASF+SDF Meta-environment, has been built around the ASF+SDF formalism.
It allows for rapid prototyping of ASF+SDF specifications. From the signature, parsers are generated, and from the equations, term rewriting systems are generated. Terms can be edited using syntax-directed editors. It is possible to attach specified functionality to user-interface events such as mouse-clicks, buttons, etc. The ASF+SDF Meta-environment has an incremental implementation; if the specification is changed the prototyped tools are adapted rather than regenerated from scratch. This supports interactive developing and testing of specifications. The system is still under development, but stable enough for external use. A tape with the system is available.

Availability of Reports
There is an annotated bibliography list of abstracts of papers and publications of the ASF+SDF group. This abstract list can be obtained by ftp to "ftp.cwi.nl", directory "pub/gipe". In this directory also electronic versions of several reports can be found.

Some References:

editor = {J.A. Bergstra and J. Heering and P. Klint}, title = {{A}lgebraic {S}pecification}, series = {ACM Press Frontier Series}, publisher = {The ACM Press in co-operation with Addison-Wesley}, year = {1989}}

author = {P. Klint}, booktitle = {Proceedings of the METEOR workshop on Methods Based on Formal Specification}, editor = {J.A. Bergstra and L.M.G. Feijs}, pages = {105-124}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {A meta-environment for generating programming environments}, volume = {490}, year = {1991}, institution = {Centrum voor Wiskunde en Informatica (CWI)}, type = {Report {CS}-{R}9064}}

author = {J. Heering and P. Klint and J. Rekers}, title = {{I}ncremental generation of parsers}, journal = {IEEE Transactions on Software Engineering}, volume = {16}, number = {12}, pages = {1344-1351}, year = {1990}, note = {Also in: {\it SIGPLAN Notices}, 24(7):179-191, 1989} }

key = {Deu92}, author = {Deursen, A. van}, title = {{S}pecification and {G}eneration of a $\lambda$-calculus institution = {Centrum voor Wiskunde en Informatica (CWI)}, type = {{R}eport {CS}-{R}9233}, address = {Amsterdam}, year = {1992}, note = {Available by {\em ftp} from ftp.cwi.nl: pub/gipe}

Contribution from Trejos-Zelaya <Ignacio.Trejos-Zelaya@uk.ac.oxford.prg>: I'm not directly into that, but some of the following might be useful:

Hannan and Miller. From operational semantics to abstract machines: preliminary results. Lisp and Functional Programming Conf. 1990.
Hannan and Miller. From operational semantics to abstract machines. Mathematical Structures in Computer Science 2(4), dec 1992. (seems to be an extended version of the above one)
Hannan. Staging transformations for abstract machines. Proc. symp. on partial evaluation and semantics-based program manipulation. jun 1991 (check Sigplan Notices).
Hannan. Making abstract machines less abstract. Funct. prog. langs. and computer arch. (5th. conf.) aug 1991. LNCS 523.

Though I don't know how "automatic" the process might be. I suggest that you also check the work done by Kahn and collaborators on the Centaur system. It's not compiler generation, but rather both high-level description of programming environments and tools for (executable) semantics definitions suitable for prototyping. An article illustrating the style (oldish): Kahn. Natural semantics. 4th ann. symp. theo. aspects of Comp. Sci. LNCS 247. 1987. They execute (Typol) programs through a translation to Prolog.
Also a thesis shows how to deal with a subclass of Typol programs, via attribute grammar evaluators: Attali. Compilation de programmes TYPOL par attributs sémantiques. U. Nice. 1989.

On "animation", see Dave Berry's thesis: Berry. Generating program animators from programming language semantics. U. Edinburgh. CST-79-91 (ECS-LFCS-91-163)

And on correctness of compilers and debuggers, your compatriot: da Silva. Correctness proofs of compilers and debuggers: an approach based on structural operational semantics. U. Edinburgh 1992. (Our friend Augusto Sampaio told me that da Silva is now at [...].)

At Oxford, Steve MacKeever (swm@ac.ox.prg) is precisely working on obtaining compilers automatically from operational semantics. And I am using a subclass of "inference systems" (by imposing restrictions on the form of rules) for _programming_, resulting in a simple extension of Standard ML's purely functional subset of the Core. My notation is directly executable, though I am not able (yet) to compile it. I strongly suggest that you contact Steve MacKeever, who is really into that and might give you useful pointers. His work is very interesting.

Contribution from Mitchell Wand <wand@dec5120z.ccs.northeastern.edu>: You might want to look at our paper author = "Mitchell Wand and Dino P. Oliva", title = "Proving the Correctness of Storage Representations", booktitle = "1992 ACM Conference on Lisp and Functional Programming", year = "1992", pages = "151--160", and at the references therein.

Contribution from another reader (name lost in the archive): I haven't published anything yet, however this is exactly what my PhD thesis is all about. The core idea is based on converting the SOS rules down to term rewriting machines on which Pass Separation is performed. This technique was first elaborated for a lambda calculus machine in the following papers: [1] ``From Operational Semantics to Abstract Machines,'' with Dale Miller. Invited to appear in a special issue of {\em Mathematical Structures in Computer Science} dedicated to 1990 ACM Conference on Lisp and Functional Programming. 56 pages. Accepted. [2] ``Staging Transformations for Abstract Machines.'' In {\em Proceedings of the ACM SIGPLAN Symposium on Partial Evaluation and Semantics Based Program Manipulation}, ACM SIGPLAN Notices, September 1991. [3] ``Making Abstract Machines Less Abstract.'' In {\em Proceedings of the ACM Conference on Functional Programming Languages and Computer Architecture}, Lecture Notes in Computer Science, Vol. 523, Springer-Verlag, 1991. It's in some ways the opposite of Action Semantics as we use the semantics to derive the actions rather than using actions, which have been formally defined, to define the semantics. This message is rather brief but if you would like any more assistance then please ask.

Contribution from David Bruce <"ISIS::dib"@uk.mod.hermes>: Peter Lee, "Realistic Compiler Generation", MIT Press, 1989, 0-262-12141-7. (a revision of his 1987 U. Michigan doctoral thesis)

Contribution from Fabio Queda Bueno da Silva <fabio@BR.UFPE.DI>: In my PhD thesis I address the problem of compiler correctness using a variant of Plotkin's Structural Operational Semantics, called Relational Semantics. In my work I don't address automatic compiler generation directly, but I believe it may provide a theoretical basis for such generations. One of the goals of my research was to define a general criterion for compiler correctness.
I used the concept of Observational Equivalence from algebraic specification to define an equivalence relation between two Relational Semantics which provides a suitable criterion for compiler correctness. The thesis gives extensive arguments on why I believe that is a suitable notion of compiler correctness. Furthermore, I also defined a powerful proof method to be used in compiler correctness proofs. This method also uses ideas from algebraic specification, namely the notion of Strong Correspondences developed by Oliver Schoett in his PhD thesis. Another issue in my thesis that might interest you is the evaluation of Relational Semantics. There the idea is to define an evaluation model for evaluating semantics which leads to interpreter generation. This idea is not new and appears in systems like CENTAUR and The Animator Generator. I studied this problem from a theoretical perspective, and defined an evaluation model that is proved sound and complete wrt the model-semantics of Relational Semantics. Finally, I also studied the problem of debugger specification and correctness using Relational Semantics. I gave an abstract definition of debuggers and then defined a language for specification of debuggers. I also defined an equivalence relation between debuggers based on the concept of a bisimulation. This provides both a criterion for debugger correctness and also a proof method for debugger correctness proofs. If you are interested in these issues my thesis is available from the department of computer science. There is a small charge to cover printing costs. There are two LFCS technical reports (free): one summarizes the PhD thesis and the other discusses compiler correctness in detail. The references are in BibTeX format:

author = "Fabio Q. B. da Silva", title = "Correctness Proofs of Compilers and Debuggers: an Approach Based on Structural Operational Semantics", school = LFCS, year = "1992", address = "Edinburgh, EH9 3JZ, Scotland", month = "September", note = "Available as LFCS Report Series ECS-LFCS-92-241 or

author = "Fabio Q. B. da Silva", title = "Observational Equivalence and Compiler Correctness", institution = LFCS, year = "1992", OPTtype = "", number = "ECS-LFCS-92-240", address = "Edinburgh, EH9 3JZ, Scotland", month = "September", OPTnote = "Available from Lorraine Edgar (\ml{lme@dcs.ed.ac.uk}) or in writing to the Department of Computer Science"
{"url":"http://compilers.iecc.com/comparch/article/93-03-052","timestamp":"2014-04-21T00:16:54Z","content_type":null,"content_length":"24027","record_id":"<urn:uuid:d3ca63f3-d97b-4fe1-9467-e340966929fe>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Millbury, MA Science Tutor
Find a Millbury, MA Science Tutor
...Most of all we'll have fun. Learning always takes place in a fun environment! I look forward to the opportunity to work with you!
1 Subject: nursing
...Focusing on the subject matter, I would strive to have students understand the fundamentals of the subject, with the inclusion of real world (and personal) experience and discussion to maximize their focus. I am enthusiastic and patient, and will let it show in all areas of my teaching. Thank you for your consideration.
19 Subjects: including geology, astronomy, physical science, physics
I have a B.S. in Chemistry/Physical Science and a PhD in Environmental Toxicology/Biochemistry and I am available for tutoring students in high school or college level math (algebra, trig, calculus, all levels including honors and A.P.) as well as Chemistry (Organic, Inorganic, all levels) Biochemis...
15 Subjects: including biochemistry, algebra 2, calculus, chemistry
...Most troubles with introductory calculus are traceable to an inadequate mastery of algebra and trigonometry. As noted above, trigonometry is usually encountered as a part of a pre-calculus course. In my view, much of the traditional material associated with trigonometry should be replaced by an...
7 Subjects: including physics, algebra 2, calculus, astronomy
...I have been tutoring for four years, beginning with the years when I was an NHS volunteer. I explain concepts being learned, and look for gaps that students have in learning Math facts and concepts in order to do work at their current level. As a Physics and Mathematics double major and Astrono...
17 Subjects: including physics, physical science, algebra 1, algebra 2
{"url":"http://www.purplemath.com/millbury_ma_science_tutors.php","timestamp":"2014-04-16T16:21:39Z","content_type":null,"content_length":"23628","record_id":"<urn:uuid:8716a1a7-c665-4755-a8e5-893a871bf6ea>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
Special infinitary relations and ultrafilters

(This problem appeared as I was trying to generalize my theory of (binary) funcoids to the theory of $n$-ary funcoids (I call them "multifuncoids") for arbitrary $n$.)

Let $I$ be some indexing set. By filters I will mean (not necessarily proper) filters on some fixed set $U$. I will call a multifuncoid an $I$-ary relation $f$ between subsets of $U$ such that

1. for every $k \in I$, subsets $A$ and $B$ of $U$, and family $L = L_{i \in I \setminus \{ k \}}$ of subsets of $U$ we have $$ f ( L \cup \{ (k ; A \cup B) \} ) \Leftrightarrow f ( L \cup \{ (k ; A) \} ) \vee f ( L \cup \{ (k ; B) \} ) ; $$
2. for every $k \in I$ and family $L = L_{i \in I}$ we have $L_k=\emptyset \Rightarrow \neg f (L)$.

Let $a = a_{i \in I}$ be some family of filters. I will call the funcoidal product $\prod a$ of a family $a = a_{i \in I}$ of filters the $I$-ary relation between subsets of $U$ such that for every family $R = R_{i \in I}$ of sets we have $$ \left( \prod a \right) R \Leftrightarrow \forall i \in I \, \forall A \in a_i : A \cap R_i \neq \emptyset . $$ It is simple to show that the funcoidal product is a multifuncoid.

Conjecture: For every non-empty multifuncoid $f$ there exists a family $a = a_{i \in I}$ of ultrafilters such that $f \supseteq \prod a$.

If this conjecture is false, under which additional conditions will it be true? (I know that it is true for a finite set $I$, but am interested also in the infinite case.)

Addition (later struck out): I think that the following condition may be necessary: $f(\{(i;A_i\cup B_i) | i\in I\}) \Leftrightarrow f(\{(i;A_i) | i\in I\}) \vee f(\{(i;B_i) | i\in I\})$ for each families $A=A_{i \in I}$ and $B=B_{i\in I}$ of subsets of $U$.

gn.general-topology ra.rings-and-algebras

Is the additional condition formulated correctly? It seems to me that it fails even for the two-element index set $I=\{0,1\}$ when $a_0$ and $a_1$ are ultrafilters. Take $A_0=B_1=U$ and $A_1=B_0=\emptyset$. – Andreas Blass Apr 7 '11 at 14:31

I am in awe of the kindness of the mathematicians who responded. The question makes no attempt to define the totally unfamiliar terms, save for a link to porton's web page, where the reader has to pick a paper, take a deep breath, and wade into porton's stuff. I don't think the way the question is worded really deserves such kindness. – Todd Trimble♦ Apr 7 '11 at 16:15

I found the question quite self-contained, I certainly didn't read any of porton's papers. – Emil Jeřábek Apr 7 '11 at 16:37

@Daniel, @Emil: perhaps you are right and I was being unfair to porton. I had some difficulty understanding what a multifuncoid was supposed to be at first reading, which is when I clicked on the link. On another more careful reading, I think it is indeed decipherable -- my apologies. – Todd Trimble♦ Apr 7 '11 at 16:52

But I still think Andreas and Emil are kind! :-) – Todd Trimble♦ Apr 7 '11 at 16:53

2 Answers

For $a$ an $I$-indexed family of filters and $S$ an $I$-indexed family of subsets of $U$ such that $U\smallsetminus S_i\notin a_i$ for every $i\in I$, define the restricted product $\prod^S a$ by $$\left(\prod\nolimits^Sa\right)R\Leftrightarrow\left(\prod a\right)R\land\{i\in I:R_i\ne S_i\}\text{ is finite.}$$ This is again a nonempty multifuncoid.

1. Every nonempty multifuncoid $f$ contains a restricted product of ultrafilters.

Fix $S$ such that $f(S)$.
For every $J\subseteq I$ finite, let $A_J$ be the set of sequences $a$ of ultrafilters such that $S_i\in a_i$ for every $i$, and $f(R)$ holds for every $R$ where $R_i\in a_i$ for $i\in J$, and $R_i=S_i$ for $i\notin J$. Then $A_J$ is closed in $(\beta U)^I$, $A_J\cap A_{J'}\supseteq A_{J\cup J'}$, and $A_J\ne\varnothing$ by the finite case, hence there exists $a\in\bigcap_JA_J$ by compactness of $(\beta U)^I$. Then $f\supseteq\prod^Sa$. 2. The restricted product of an infinite family of ultrafilters does not contain any product of a family of ultrafilters (assuming $U$ has more than one element), thus refuting the original wording of your conjecture. Indeed, if $f=\prod^Sa$ and $f(R)$, then $R_i=S_i$ for all but finitely many $i$, whereas if $g=\prod b$ is a product of a family of ultrafilters, we can for every $i\in I$ fix $R_i\in b_i$ such that $R_i\ne S_i$; then $g(R)$, but not $f(R)$, so $g\nsubseteq f$. Point 1 says that the intuition behind the conjecture is basically sound, but the notion of the product has to be modified to make it really work to take into account that the axioms of multifuncoids only concern local behaviour when a single (or finitely many, by iteration) coordinate is changed, they do not imply anything about what happens when infinitely many coordinates change. Since the proof above refers to the case of finitely many coordinates in a stronger form than what is claimed to hold in the question, I may as well give a self-contained proof of 1. up vote 18 As before, fix $S$ such that $f(S)$. By definition, $S_i\ne\varnothing$ for every $i$. If $a$ is a family of filters such that $S_i\in a_i$ for all $i\in I$, consider a modified product \ down vote left(\prod\nolimits_ma\right)R&\Leftrightarrow(\forall i\in I)\,R_i\in a_i,\\ \left(\prod\nolimits_m^Sa\right)R&\Leftrightarrow\left(\prod\nolimits_ma\right)R\land\{i\in I:R_i\ne S_i\}\ accepted text{ is finite.} Note that if all $a_i$ are ultrafilters, then $\prod_ma=\prod a$, and $\prod_m^Sa=\prod^Sa$. It thus suffices to find $a$ such that $\prod_m^Sa\subseteq f$, and all $a_i$ are ultrafilters. Let $P$ be the set of all families $a$ of proper filters such that $S_i\in a_i$ for all $i$, and $\prod_m^Sa\subseteq f$. We define a partial order on $P$ by $a\le b$ iff $a_i\subseteq b_i$ for all $i\in I$. It is easy to see from the definition of a multifuncoid that: (*) Whenever $f(R)$, $R_i\subseteq R'_i$ for every $i$, and $R_i=R'_i$ for all but finitely many $i$, then $f(R')$. It follows that $P$ is nonempty, since $a\in P$, where $a_i$ is the filter generated by $S_i$. Since the pointwise union of any chain in $P$ is an element of $P$, Zorn’s lemma implies that there exists a maximal element $a\in P$. I claim that every $a_j$ is an ultrafilter. Assume for contradiction that it is not, and let $X\subseteq U$ be such that $X,U\smallsetminus X\notin a_j$. Define $b$ by $b_i=a_i$ for $i\ne j$, and $b_j$ is the filter generated by $a_j\cup\{X\}$. Since $a< b$, we have $b\notin P$, thus there exists $R$ such that $\neg f(R)$, $R_i=S_i$ for all but finitely many $i$, $R_i\in a_i$ for all $i\ne j$, and $X\cap Y\subseteq R_j$ for some $Y\in a_j$. Symmetrically, there exists $R'$ and $Y'\in a_j$ such that $\neg f(R')$, $R'_i=S_i$ for all but finitely many $i$, $R'_i\in a_i$ for $i\ne j$, and $(U\smallsetminus X)\cap Y'\subseteq R'_j$. Using (*) and the closure of $a_i$ under intersections, we can replace $R_i$ with $R_i\cap R'_i$ for all $i\ne j$, and the same for $R'_i$. 
Thus, without loss of generality, $R_i=R'_i$ for all $i\ne j$. But then by the definition of a multifuncoid, $\neg f(R'')$, where $R''_i=R_i=R'_i$ for $i\ne j$, and $R''_j=R_j\cup R'_j$. However, $R''_j\supseteq Y\cap Y'\in a_j$, hence $R''\in\prod_m^Sa\subseteq f$, a contradiction. 1 I see I do not fully understand your counter-example and also that your result contradicts with my intuition (thus my intuition being wrong). By these reason I will stop my writing ( mathematics21.org/algebraic-general-topology.html) now and go to learn. I have purchased the book "The Theory Of Ultrafilters" by W. W. Comfort, S. Negrepontis but have not yet fully learned it. Now it's the time. – porton Apr 7 '11 at 17:13 11 @porton: Your conjecture seems to be a moving target. It couldn't hurt to work on it for yourself for a bit. – Daniel Litt Apr 7 '11 at 17:30 Some attempts on further clarification. In case it’s not obvious, I’m implicitly using the following observation: if $a_i$ is an ultrafilter, then the condition “$\forall A\in a_i\,A\ 1 cap R_i\ne\varnothing$” from the definition of $\prod a$ simplifies to “$R_i\in a_i$”. The point of the counterexample is that any $R$ such that $(\prod^Sa)R$ agrees with $S$ on all but finitely many coordinates, whereas for a product $\prod b$ of ultrafilters you can find $R$ such that $R_i\in b_i$ is distinct from $S_i$ for every $i$, so that $(\prod b)R$ but not $(\prod^Sa)R$. – Emil Jeřábek Apr 7 '11 at 17:36 1 @Emil Jeřábek: Oh, sorry, it's my error, I was uncareful. I will delete my wrong comment where I blame you for an error. – porton Apr 7 '11 at 17:39 @Emil Jeřábek: I made a modified version of my conjecture: mathoverflow.net/questions/61118/… – porton Apr 9 '11 at 6:18 add comment If I've correctly deciphered your definitions, then the following should be a counterexample to your conjecture (even with the condition that you added later). Take both $I$ and $U$ to be the set $N$ of natural numbers. Define $f$ to be true for an $N$-indexed sequence $(A_i)$ of subsets of $N$ if and only if there is no finite upper bound (independent of $i$) for the up vote 10 cardinalities of the sets $A_i\cap\{0,1,2,\dots,i\}$. down vote @Andreas Blass: The $f$ you defined is a multifuncoid. But it is not obvious for me why this is a counter-example for my conjecture that is why your $f$ is not above any product of ultrafilters. Could you explain? – porton Apr 7 '11 at 16:58 Consider a product of ultrafilters $a_i$. Define $A_i$ to be the unique singleton in $a_i$ if $a_i$ is principal and to be $N\setminus\{0,1,\dots,i\}$ if $a_i$ is non-principal. Then $A_i\in a_i$ for all $i$, but $f$ applied to the sequence $(A_i)$ is false because the cardinalities of the sets $A_i\cap\{0,1,\dots,i\}$ are bounded above by 1. – Andreas Blass Apr 8 '11 at 13:16 @Andreas Blass: I did a bad error, I forgot the second condition in the definition of multifuncoids (now added). This breaks your proof. I think we can fix it replacing "if there is no finite upper bound (independent of i) for the cardinalities of the sets $A_i\cap\{0,1,2,\dots,i\}$" with "if there is no finite upper bound (independent of i) for the cardinalities of the sets $A_i\cap\{0,1,2,\dots,i\}$ and every set $A_i$ is not empty", but I have not yet checked this. – porton Apr 8 '11 at 14:47 @Andreas Blass: In your comment you've proved only that the multifuncoid defined by you is not a superset of certain funcoidal product of filters. 
I however need to verify a different supposition, whether a funcoidal product of filters has other subsets except of the empty set. See also my previous comment about my error in the formulation of the question and a modified question: mathoverflow.net/questions/61118/… – porton Apr 15 '11 at 21:33 add comment Not the answer you're looking for? Browse other questions tagged gn.general-topology ra.rings-and-algebras or ask your own question.
{"url":"http://mathoverflow.net/questions/60925/special-infinitary-relations-and-ultrafilters/60947","timestamp":"2014-04-20T11:21:18Z","content_type":null,"content_length":"79440","record_id":"<urn:uuid:f1eb48dd-d52b-4d86-b963-80a61d2369de>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
CTRMODE, FRQx and PHSx [Archive] - Parallax Forums Jim C 10-07-2006, 05:53 PM The time has come to learn about frequency generation along with detecting and counting edges. There have been a few threads about this but none recently, and I am looking for a thread from a few months ago. The thread included a chart or worksheet about what each CTRMODE did, in sort of a spreadsheet format. It explained how all the NCO and PLL business of the counters worked. I've searched the forums every way I can think of, to no avail. Does anyone remember this thread? Jim C
{"url":"http://forums.parallax.com/archive/index.php/t-88752.html","timestamp":"2014-04-20T18:27:34Z","content_type":null,"content_length":"22524","record_id":"<urn:uuid:b33c2606-04dc-4bc4-b303-6302b8a2309f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
7.3 Approaches to Survival Modeling

Up to this point we have been concerned with a homogeneous population, where the lifetimes of all units are governed by the same survival function S(t). We now introduce the third distinguishing characteristic of survival models - the presence of a vector of covariates or explanatory variables that may affect survival time - and consider the general problem of modeling these effects.

7.3.1 Accelerated Life Models*

Let T[i] be a random variable representing the (possibly unobserved) survival time of the i-th unit. Since T[i] must be non-negative, we might consider modeling its logarithm using a conventional linear model, say

log T[i] = x[i]β + e[i],

where e[i] is a suitable error term, with a distribution to be specified. This model specifies the distribution of log-survival for the i-th unit as a simple shift of a standard or baseline distribution represented by the error term.

Exponentiating this equation, we obtain a model for the survival time itself

T[i] = exp{x[i]β} T[0i],

where we have written T[0i] for the exponentiated error term. It will also be convenient to use g as shorthand for the multiplicative effect exp{x[i]β} of the covariates.

Interpretation of the parameters follows along standard lines. Consider, for example, a model with a constant and a dummy variable x representing a factor with two levels, say groups one and zero. Suppose the corresponding multiplicative effect is g = 2, so the coefficient of x is β = log(2) = 0.6931. Then we would conclude that people in group one live twice as long as people in group zero.

There is an interesting alternative interpretation that explains the name 'accelerated life' used for this model. Let S[0](t) denote the survivor function in group zero, which will serve as a reference group, and let S[1](t) denote the survivor function in group one. Under this model,

S[1](t) = S[0](t/g).

In words, the probability that a member of group one will be alive at age t is exactly the same as the probability that a member of group zero will be alive at age t/g. For g = 2, this would be half the age, so the probability that a member of group one would be alive at age 40 (or 60) would be the same as the probability that a member of group zero would be alive at age 20 (or 30). Thus, we may think of g as affecting the passage of time. In our example, people in group zero age 'twice as fast'.

For the record, the corresponding hazard functions are related by

l[1](t) = l[0](t/g)/g,

so if g = 2, at any given age people in group one would be exposed to half the risk of people in group zero half their age.

The name 'accelerated life' stems from industrial applications where items are put to test under substantially worse conditions than they are likely to encounter in real life, so that tests can be completed in a shorter time.

Different kinds of parametric models are obtained by assuming different distributions for the error term. If the e[i] are normally distributed, then we obtain a log-normal model for the T[i]. Estimation of this model for censored data by maximum likelihood is known in the econometric literature as a Tobit model.

Alternatively, if the e[i] have an extreme value distribution with p.d.f.
f(e) = exp{e - exp{e}},

then T[i] has an exponential distribution, and we obtain the exponential regression model, where T[i] is exponential with hazard l[i] satisfying the log-linear model

log l[i] = x[i]β.

An example of a demographic model that belongs to the family of accelerated life models is the Coale-McNeil model of first marriage frequencies, where the proportion ever married at age a in a given population is written as

G(a) = c F((a - a[0])/k),

where F is a model schedule of proportions married by age, among women who will ever marry, based on historical data from Sweden; c is the proportion who will eventually marry, a[0] is the age at which marriage starts, and k is the pace at which marriage proceeds relative to the Swedish standard.

Accelerated life models are essentially standard regression models applied to the log of survival time, and except for the fact that observations are censored, pose no new estimation problems. Once the distribution of the error term is chosen, estimation proceeds by maximizing the log-likelihood for censored data described in the previous subsection. For further details, see Kalbfleisch and Prentice (1980).

7.3.2 Proportional Hazard Models

A large family of models introduced by Cox (1972) focuses directly on the hazard function. The simplest member of the family is the proportional hazards model, where the hazard at time t for an individual with covariates x[i] (not including a constant) is assumed to be

l[i](t|x[i]) = l[0](t) exp{x[i]β}. (7.10)

In this model l[0](t) is a baseline hazard function that describes the risk for individuals with x[i] = 0, who serve as a reference cell or pivot, and exp{x[i]β} is the relative risk, a proportionate increase or reduction in risk, associated with the set of characteristics x[i]. Note that the increase or reduction in risk is the same at all durations t.
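Before turning to examples, it may help to see how these two families are fit in practice. The following is a minimal R sketch using the survival package (an illustration added here, not part of the original notes; the data frame dat and the variables time, status and x are made up):

```r
library(survival)

# time: observed time; status: 1 = event observed, 0 = censored; x: covariate

# Accelerated life model with Weibull (extreme-value) errors:
# log T = x*beta + error, fit by maximum likelihood for censored data
afit <- survreg(Surv(time, status) ~ x, data = dat, dist = "weibull")

# Proportional hazards model (7.10) with the baseline hazard l[0](t)
# left unspecified, fit by Cox's partial likelihood
cfit <- coxph(Surv(time, status) ~ x, data = dat)

exp(coef(cfit))  # the relative risk exp{beta} for a unit change in x
```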
7.3.3 The Exponential and Weibull Models Different kinds of proportional hazard models may be obtained by making different assumptions about the baseline survival function, or equivalently, the baseline hazard function. For example if the baseline risk is constant over time, so l[0](t) = l[0], say, we obtain the exponential regression model, where Interestingly, the exponential regression model belongs to both the proportional hazards and the accelerated life families. If the baseline risk is a constant and you double or triple the risk, the new risk is still constant (just higher). Perhaps less obviously, if the baseline risk is constant and you imagine time flowing twice or three times as fast, the new risk is doubled or tripled but is still constant over time, so we remain in the exponential family. You may be wondering whether there are other cases where the two models coincide. The answer is yes, but not many. In fact, there is only one distribution where they do, and it includes the exponential as a special case. The one case where the two families coincide is the Weibull distribution, which has survival function and hazard function for parameters l > 0 and p > 0. If p = 1, this model reduces to the exponential and has constant risk over time. If p > 1, then the risk increases over time. If p < 1, then the risk decreases over time. In fact, taking logs in the expression for the hazard function, we see that the log of the Weibull risk is a linear function of log time with slope p-1. If we pick the Weibull as a baseline risk and then multiply the hazard by a constant g in a proportional hazards framework, the resulting distribution turns out to be still a Weibull, so the family is closed under proportionality of hazards. If we pick the Weibull as a baseline survival and then speed up the passage of time in an accelerated life framework, dividing time by a constant g, the resulting distribution is still a Weibull, so the family is closed under acceleration of time. For further details on this distribution see Cox and Oakes (1984) or Kalbfleish and Prentice (1980), who prove the equivalence of the two Weibull models. 7.3.4 Time-varying Covariates So far we have considered explicitly only covariates that are fixed over time. The local nature of the proportional hazards model, however, lends itself easily to extensions that allows for covariates that change over time. Let us consider a few examples. Suppose we are interested in the analysis of birth spacing, and study the interval from the birth of one child to the birth of the next. One of the possible predictors of interest is the mother's education, which in most cases can be taken to be fixed over time. Suppose, however, that we want to introduce breastfeeding status of the child that begins the interval. Assuming the child is breastfed, this variable would take the value one (`yes') from birth until the child is weaned, at which time it would take the value zero (`no'). This is a simple example of a predictor that can change value only once. A more elaborate analysis could rely on frequency of breastfeeding in a 24-hour period. This variable could change values from day to day. For example a sequence of values for one woman could be Let x[i](t) denote the value of a vector of covariates for individual I at time or duration t. Then the proportional hazards model may be generalized to l[i](t, x[i](t)) = l[0](t) exp{x[i](t)β}. 
(7.12)

The separation of duration and covariate effects is not so clear now, and on occasion it may be difficult to identify effects that are highly collinear with time. If all children were weaned when they are around six months old, for example, it would be difficult to identify effects of breastfeeding from general duration effects without additional information. In such cases one might still prefer a time-varying covariate, however, as a more meaningful predictor of risk than the mere passage of time.

Calculation of survival functions when we have time-varying covariates is a little bit more complicated, because we need to specify a path or trajectory for each variable. In the birth intervals example one could calculate a survival function for women who breastfeed for six months and then wean. This would be done by using the hazard corresponding to x(t) = 1 for months 0 to 6 and then the hazard corresponding to x(t) = 0 for months 6 onwards. Unfortunately, the simplicity of Equation 7.11 is lost; we can no longer simply raise the baseline survival function to a power.

Time-varying covariates can be introduced in the context of accelerated life models, but this is not so simple and has rarely been done in applications. See Cox and Oakes (1984, p. 66) for more details.

7.3.5 Time-dependent Effects

The model may also be generalized to allow for effects that vary over time, and therefore are no longer proportional. It is quite possible, for example, that certain social characteristics might have a large impact on the hazard for children shortly after birth, but may have a relatively small impact later in life. To accommodate such models we may write

l[i](t, x[i]) = l[0](t) exp{x[i]β(t)},

where the parameter β(t) is now a function of time. This model allows for great generality. In the two-sample case, for example, the model may be written as

l[i](t|x[i]) = l[0](t) if x[i] = 0, and l[i](t|x[i]) = l[0](t) exp{β(t)} if x[i] = 1,

which basically allows for two arbitrary hazard functions, one for each group. Thus, this is a form of saturated model.

Usually the form of time dependence of the effects must be specified parametrically in order to be able to identify the model and estimate the parameters. Obvious candidates are polynomials on duration, where β(t) is a linear or quadratic function of time. Cox and Oakes (1984, p. 76) show how one can use quick-dampening exponentials to model transient effects.

Note that we have lost again the simple separation of time and covariate effects. Calculation of the survival function in this model is again somewhat complicated by the fact that the coefficients are now functions of time, so they don't fall out of the integral. The simple Equation 7.11 does not apply.

7.3.6 The General Hazard Rate Model

The foregoing extensions to time-varying covariates and time-dependent effects may be combined to give the most general version of the hazard rate model, as

l[i](t, x[i](t)) = l[0](t) exp{x[i](t)β(t)},

where x[i](t) is a vector of time-varying covariates representing the characteristics of individual i at time t, and β(t) is a vector of time-dependent coefficients, representing the effect that those characteristics have at time or duration t.

The case of breastfeeding status and its effect on the length of birth intervals is a good example that combines the two effects. Breastfeeding status is itself a time-varying covariate x(t), which takes the value one if the woman is breastfeeding her child t months after birth.
The effect that breastfeeding may have in inhibiting ovulation and therefore reducing the risk of pregnancy is known to decline rapidly over time, so it should probably be modeled as a time-dependent effect β(t). Again, further progress would require specifying the form of this function of time.

7.3.7 Model Fitting

There are essentially three approaches to fitting survival models:

• The first and perhaps most straightforward is the parametric approach, where we assume a specific functional form for the baseline hazard l[0](t). Examples are models based on the exponential, Weibull, gamma and generalized F distributions.

• A second approach is what might be called a flexible or semi-parametric strategy, where we make mild assumptions about the baseline hazard l[0](t). Specifically, we may subdivide time into reasonably small intervals and assume that the baseline hazard is constant in each interval, leading to a piece-wise exponential model.

• The third approach is a non-parametric strategy that focuses on estimation of the regression coefficients β leaving the baseline hazard l[0](t) completely unspecified. This approach relies on a partial likelihood function proposed by Cox (1972) in his original paper.

A complete discussion of these approaches is well beyond the scope of these notes. We will focus on the intermediate or semi-parametric approach because (1) it is sufficiently flexible to provide a useful tool with wide applicability, and (2) it is closely related to Poisson regression analysis.
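Since the semi-parametric approach is closely related to Poisson regression, here is a minimal R sketch of that route (illustrative only and not part of the original notes; the cut points and the data frame dat with variables time, status and x are invented):

```r
library(survival)

# Split each subject's follow-up at the cut points 1, 2 and 5, so the
# baseline hazard can be treated as constant within each piece
pieces <- survSplit(Surv(time, status) ~ x, data = dat,
                    cut = c(1, 2, 5), episode = "interval")
pieces$exposure <- pieces$time - pieces$tstart

# Piece-wise exponential model as a Poisson GLM: event indicators with
# log exposure as an offset, one rate per interval, and a multiplicative
# covariate effect exp{beta}
fit <- glm(status ~ factor(interval) + x + offset(log(exposure)),
           family = poisson, data = pieces)
```

The factor(interval) terms estimate one constant rate per piece, which is exactly the piece-wise exponential assumption of the second approach above.

Continue with 7.4. The Piece-Wise Exponential Model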
{"url":"http://data.princeton.edu/wws509/notes/c7s3.html","timestamp":"2014-04-19T09:24:19Z","content_type":null,"content_length":"27717","record_id":"<urn:uuid:abb7c7f2-cca2-4d73-87f0-d63133351a0b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
convergence space

Convergence spaces

A convergence space is a generalisation of a topological space based on the concept of convergence of filters (or nets) as fundamental. The basic concepts of point-set topology (continuous functions, compact and Hausdorff topological spaces, etc.) make sense also for convergence spaces, although not all theorems hold. The category $Conv$ of convergence spaces is a quasitopos and may be thought of as a nice category of spaces that includes Top as a full subcategory.

A convergence space is a set $S$ together with a relation $\to$ from $\mathcal{F}S$ to $S$, where $\mathcal{F}S$ is the set of filters on $S$; if $F \to x$, we say that $F$ converges to $x$ or that $x$ is a limit of $F$. This must satisfy some axioms:

1. Centred: The principal ultrafilter $F_x = \{ A \;|\; x \in A \}$ at $x$ converges to $x$;
2. Isotone: If $F \subseteq G$ and $F \to x$, then $G \to x$;
3. Directed: If $F \to x$ and $G \to x$, then some filter contained in the intersection $F \cap G$ converges to $x$.

(Strictly speaking, the relation should not be called directed unless also every point is a limit of some filter, but this follows from 1.) In light of (2), it follows that $F \cap G \to x$ itself. It follows that $F \to x$ if and only if $F \cap F_x$ does. Given that, the convergence relation is defined precisely by specifying, for each point $x$, a filter of subfilters of the principal ultrafilter at $x$. (But that is sort of a tongue twister.)

A filter $F$ clusters at a point $x$ if there exists a proper filter $G$ such that $F \subseteq G$ and $G \to x$.

The definition can also be phrased in terms of nets; a net $u$ converges to $x$ if and only if its eventuality filter converges to $x$.

The morphisms of convergence spaces are the continuous functions; a function $f$ between convergence spaces is continuous if $F \to x$ implies that $f(F) \to f(x)$, where $f(F)$ is the filter generated by the filterbase $\{f(A) \;|\; A \in F\}$. In this way, convergence spaces form a concrete category $Conv$.

Note that the definition of 'convergence' varies in the literature; at the extreme end, one could define it as any relation whatsoever from $\mathcal{F}S$ (or even from the class of all nets on $S$) to $S$, but that is so little structure as to be not very useful. Here we follow the terminology of Lowen-Colebunders.

In measure theory, given a measure space $X$ and a measurable space $Y$, the space of almost-everywhere defined measurable functions from $X$ to $Y$ becomes a convergence space under convergence almost everywhere. In general, this convergence space does not fit into any of the examples below.

A pseudotopological space is a convergence space satisfying the star property:

• If $F$ is a filter such that every proper filter $G \supseteq F$ clusters at $x$, then $F$ converges to $x$.

Assuming the ultrafilter theorem (a weak version of the axiom of choice), it's enough to require that $F$ converges to $x$ whenever every ultrafilter that refines $F$ converges to $x$ (or clusters there, since these are equivalent for ultrafilters).
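To spell out the parenthetical claim that convergence and clustering agree for ultrafilters, here is a short verification (a sketch added for illustration; it uses only that ultrafilters are precisely the maximal proper filters):

```latex
% Claim: for an ultrafilter U, clustering at x and convergence to x coincide.
\begin{proof}[Sketch]
Let $U$ be an ultrafilter on $S$. If $U \to x$, then $U$ clusters at $x$:
take $G = U$ in the definition of clustering ($U$ is proper and refines itself).
Conversely, suppose $U$ clusters at $x$, so there is a proper filter $G$ with
$U \subseteq G$ and $G \to x$. Since an ultrafilter is a maximal proper filter,
$U \subseteq G$ forces $G = U$, and hence $U \to x$.
\end{proof}
```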
A pretopological space is a convergence space that is infinitely filtered:

• If $(F_\alpha)_\alpha$ is any family of filters each of which converges to $x$, then $\bigcap_\alpha F_\alpha$ converges to $x$.

In particular, the intersection of all of the filters converging to $x$ (the neighbourhood filter of $x$) also converges to $x$. Note that every pretopological space is pseudotopological.

Any topological space is a convergence space, and in fact a pretopological one: we define $F \to x$ if every neighbourhood of $x$ belongs to $F$. A convergence space is topological if it comes from a topology on $S$. The full subcategory of $Conv$ consisting of the topological convergence spaces is equivalent to the category Top of topological spaces. In this way, the definitions below are all suggested by theorems about topological spaces.

Every Cauchy space is a convergence space.

The improper filter (the power set of $S$) converges to every point. On the other hand, a convergence space $S$ is Hausdorff if every proper filter converges to at most one point; then we have a partial function $\lim$ from the proper filters on $S$ to $S$. A topological space is Hausdorff in the usual sense if and only if it is Hausdorff as a convergence space.

A convergence space $S$ is compact if every proper filter clusters at some point; that is, every proper filter is contained in a convergent proper filter. Equivalently (assuming the ultrafilter theorem), $S$ is compact iff every ultrafilter converges. A topological space is compact in the usual sense if and only if it is compact as a convergence space.

The topological convergence spaces can be characterized as the pseudotopological ones in which the convergence satisfies a certain “associativity” condition. In this way one can (assuming the ultrafilter theorem) think of a topological space as a “generalized multicategory” parametrized by ultrafilters. In particular, note that a compact Hausdorff pseudotopological space is defined by a single function $\mathcal{U}S \to S$, where $\mathcal{U}S$ is the set of ultrafilters on $S$, such that the composite $S \to \mathcal{U}S \to S$ is the identity. That is, it is an algebra for the pointed endofunctor $\mathcal{U}$. The compact Hausdorff topological spaces (the compacta) are precisely the algebras for $\mathcal{U}$ considered as a monad. If we treat $\mathcal{U}$ as a monad on Rel, then the lax algebras are the topological spaces in their guise as relational beta-modules.

Topological structure

Given a convergence space, a filter $F$ star-converges to a point $x$ if every proper filter that refines $F$ clusters at $x$. (Assuming the ultrafilter theorem, $F$ star-converges to $x$ iff every ultrafilter that refines $F$ converges to $x$.) The relation of star convergence makes any convergence space into a pseudotopological space with a weaker convergence. In this way, $Ps Top$ becomes a reflective subcategory of $Conv$ over $Set$.

Note: the term ‘star convergence’ is my own, formed from ‘star property’ above, which I got from HAF. Other possibilities that I can think of: ‘ultraconvergence’, ‘universal convergence’, ‘subconvergence’. —Toby

Given a convergence space, a set $U$ is a neighbourhood of a point $x$ if $U$ belongs to every filter that converges to $x$; it follows that $U$ belongs to every filter that star-converges to $x$. The relation of being a neighbourhood makes any convergence space into a pretopological space, although the pretopological convergence is weaker in general.
In this way, $Pre Top$ is a reflective subcategory of $Conv$ (and in fact of $Ps Top$) over $Set$.

Other pretopological notions: The preinterior of a set $A$ is the set of all points $x$ such that $A$ is a neighbourhood of $x$. The preclosure of $A$ is the set of all points $x$ such that every neighbourhood $U$ of $x$ meets (has inhabited intersection with) $A$. For more on these, see pretopological space.

Given a convergence space, a set $G$ is open if $G$ belongs to every filter that converges to any point in $G$, or equivalently if $G$ equals its preinterior. The class of open sets makes any convergence space into a topological space, although the topological convergence is weaker in general. In this way, $Top$ is a reflective subcategory of $Conv$ (and in fact of $Ps Top$ and $Pre Top$) over $Set$.

Other topological notions: A set $F$ is closed if every point $x$ such that every neighbourhood of $x$ meets $F$ itself belongs to $F$; equivalently, if $F$ equals its preclosure. The interior of $A$ is the union of all of the open sets contained in $A$; it is the largest open set contained in $A$. The closure of $A$ is the intersection of all of the closed sets that contain $A$; it is the smallest closed set that contains $A$. (For a topological convergence space, the interior and closure match the preinterior and preclosure.)

References

• Eva Lowen-Colebunders, Function Classes of Cauchy Continuous Maps, Dekker, New York, 1989.
{"url":"http://www.ncatlab.org/nlab/show/convergence+space","timestamp":"2014-04-19T11:57:50Z","content_type":null,"content_length":"60162","record_id":"<urn:uuid:4729883f-2065-4249-a461-7242ee36fba5>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
In calculus, solids of revolution are formed by rotating a curve around an axis, and integration is used to find the volume. Typically the integration is of slices cut perpendicular to the axis of rotation that formed the solid. Those slices are then integrated from one end of the solid to the other.

Consider the region in the first quadrant that has an upper bound of $y = \sqrt 2$ and a lower bound of $y = (\sec{x})\,(\tan{x})$, and bounded on the left side by the y-axis. Find the volume of the solid formed by rotating the region about the line $y = \sqrt 2$.

Calculus provides an elegant way to determine the volume of this solid. First, find where the curves intersect in order to ascertain the end-point of the integration. The boundaries intersect where

$y = \sqrt 2 = (\sec{x})(\tan{x})$

$\sqrt 2 = \frac{\sin{x}}{\cos^2{x}}$

This is best solved by trial-and-error. Since $\sin{\frac{\pi}{4}} = \cos{\frac{\pi}{4}} = \frac{\sqrt 2}{2}$, the solution is $x = \frac{\pi}{4}$.

The integral to find the volume of the solid must therefore be taken from $x = 0$ on one side to $x = \frac{\pi}{4}$ on the other side. We are now ready to find the volume. Note first that:

$dV = \pi r^2\,dx$

and thus

$V = \int_0^\frac{\pi}{4} {\pi}r^2\,dx$

The next insight is to express $r$ in terms of $x$. The variable $r$ is the distance of the boundary from the axis about which it is rotated:

$r = \sqrt 2 - \sec{x}\,\tan{x}$

The volume then becomes:

$V = \int_0^\frac{\pi}{4} {\pi}(\sqrt 2 - \sec{x}\,\tan{x})^2\,dx$

$V = \int_0^\frac{\pi}{4} {\pi}\left(2 - 2\sqrt2\,\sec{x}\,\tan{x} + (\sec{x}\,\tan{x})^2\right)dx$

The only challenging part of this integral is the last term, which must be integrated by parts:

$\int_0^\frac{\pi}{4} (\sec{x}\,\tan{x})^2\,dx = \int_0^\frac{\pi}{4}\sin{x}\,\left(\frac{\sin{x}}{\cos^4{x}}\right)dx$

$= \left.\frac{\sin{x}}{3\cos^3{x}}\right|_0^{\pi/4} - \int_0^\frac{\pi}{4}\frac{\sec^2{x}}{3}\,dx$

Recall that:

$\int\sec^2{x}\,dx = \tan{x}$

and the solution to the overall integral is easy to obtain.
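As a numerical sanity check (an added sketch; it assumes NumPy and SciPy are available), quadrature agrees with the closed form $V = \pi\left(\frac{\pi}{2} - \frac{11}{3} + 2\sqrt{2}\right) \approx 2.30$, which comes from the antiderivative $\pi\left(2x - 2\sqrt{2}\,\sec{x} + \frac{\tan^3{x}}{3}\right)$:

# Numerical sanity check of the volume computed above
# (an added sketch; assumes numpy and scipy are installed).
import numpy as np
from scipy.integrate import quad

def integrand(x):
    r = np.sqrt(2) - np.tan(x) / np.cos(x)  # r = sqrt(2) - sec(x) tan(x)
    return np.pi * r**2                     # dV = pi r^2 dx

V_numeric, _ = quad(integrand, 0, np.pi / 4)
V_exact = np.pi * (np.pi / 2 - 11 / 3 + 2 * np.sqrt(2))
print(V_numeric, V_exact)                   # both approximately 2.3012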
{"url":"http://conservapedia.com/Solids","timestamp":"2014-04-16T10:20:23Z","content_type":null,"content_length":"15875","record_id":"<urn:uuid:fa96bc01-413a-4e94-b419-bbd87560e682>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
American Mathematical Society
AMS Sectional Meeting Program by Day
Current as of Sunday, October 20, 2013 00:27:41

Fall Eastern Sectional Meeting
Temple University, Philadelphia, PA
October 12-13, 2013 (Saturday - Sunday)
Meeting #1093
Associate secretaries: Steven H. Weintraub, AMS shw2@lehigh.edu

Saturday October 12, 2013
• Saturday October 12, 2013, 7:30 a.m.-4:00 p.m.
Exhibit and Book Sale
Room 409, Tuttleman Learning Center
• Saturday October 12, 2013, 7:30 a.m.-4:00 p.m.
Meeting Registration
Lobby, Tuttleman Learning Center
• Saturday October 12, 2013, 8:00 a.m.-10:50 a.m.
Special Session on Contact and Symplectic Topology, I
Room 300, Tuttleman Learning Center
Joshua M. Sabloff, Haverford College jsabloff@haverford.edu
Lisa Traynor, Bryn Mawr College ltraynor@brynmawr.edu
• Saturday October 12, 2013, 8:00 a.m.-10:50 a.m.
Special Session on Difference Equations and Applications, I
Room 409, Barton B
Michael Radin, Rochester Institute of Technology marsma@rit.edu
Faina Berezovskaya, Howard University
• Saturday October 12, 2013, 8:00 a.m.-10:45 a.m.
Special Session on Geometric Aspects of Topology and Group Theory, I
Room 304, Tuttleman Learning Center
David Futer, Temple University dfuter@temple.edu
Ben McReynolds, Purdue University
• Saturday October 12, 2013, 8:00 a.m.-10:45 a.m.
Special Session on Geometric and Spectral Analysis, I
Room 305, Barton B
Thomas Krainer, Pennsylvania State Altoona krainer@psu.edu
Gerardo A. Mendoza, Temple University
• Saturday October 12, 2013, 8:00 a.m.-10:45 a.m.
Special Session on Higher Structures in Algebra, Geometry and Physics, I
Room 400, Tuttleman Learning Center
Jonathan Block, University of Pennsylvania
Vasily Dolgushev, Temple University vald@temple.edu
Tony Pantev, University of Pennsylvania
• Saturday October 12, 2013, 8:00 a.m.-10:50 a.m.
Special Session on Mathematical Biology, I
Room 308, Barton B
Isaac Klapper, Temple University klapper@temple.edu
Kathleen Hoffman, University of Maryland, Baltimore County
Robert Manning, Haverford College
• Saturday October 12, 2013, 8:00 a.m.-10:50 a.m.
Special Session on Modular Forms and Modular Integrals in Memory of Marvin Knopp, I
Room 402, Tuttleman Learning Center
Helen Grundman, Bryn Mawr College
Wladimir Pribitkin, College of Staten Island and the Graduate Center, City University of New York Wladimir.Pribitkin@csi.cuny.edu
• Saturday October 12, 2013, 8:00 a.m.-10:45 a.m.
Special Session on Nonlinear Elliptic and Wave Equations and Applications, I
Room 400, Barton B
Nsoki Mavinga, Swarthmore College nmaving1@swarthmore.edu
Doug Wright, Drexel University
• Saturday October 12, 2013, 8:00 a.m.-10:50 a.m.
Special Session on Partial Differential Equations, Stochastic Analysis, and Applications to Mathematical Finance, I
Room 407, Barton B
Paul Feehan, Rutgers University feehan@rci.rutgers.edu
Ruoting Gong, Rutgers University
Camelia Pop, University of Pennsylvania
• Saturday October 12, 2013, 8:00 a.m.-10:50 a.m.
Special Session on Recent Developments in Noncommutative Algebra, I
Room 302, Tuttleman Learning Center
Edward Letzter, Temple University
Martin Lorenz, Temple University lorenz@temple.edu
• Saturday October 12, 2013, 8:00 a.m.-10:50 a.m.
Special Session on Representation Theory, Combinatorics and Categorification, I
Room 303, Tuttleman Learning Center
Corina Calinescu, New York City College of Technology, City University of New York
Andrew Douglas, New York City College of Technology and Graduate Center, City University of New York
Joshua Sussan, Medgar Evers College, City University of New York joshua.sussan@gmail.com
Bart Van Steirteghem, Medgar Evers College, City University of New York
• Saturday October 12, 2013, 8:00 a.m.-10:50 a.m.
Special Session on Several Complex Variables and CR Geometry, I
Room 307, Barton B
Andrew Raich, University of Arkansas araich@uark.edu
Yuan Zhang, Indiana University-Purdue University Fort Wayne
• Saturday October 12, 2013, 8:00 a.m.-10:20 a.m.
Special Session on The Geometry of Algebraic Varieties, I
Room 401, Tuttleman Learning Center
Karl Schwede, Pennsylvania State University schwede@math.psu.edu
Zsolt Patakfalvi, Princeton University
• Saturday October 12, 2013, 8:15 a.m.-10:55 a.m.
Contributed Paper Session, I
Room 402, Barton B
• Saturday October 12, 2013, 8:30 a.m.-10:50 a.m.
Special Session on Analysis and Computing for Electromagnetic Waves, I
Room 309, Barton B
David Ambrose, Drexel University ambrose@math.drexel.edu
Shari Moskow, Drexel University
• Saturday October 12, 2013, 8:30 a.m.-10:50 a.m.
Special Session on Combinatorial Commutative Algebra, I
Room 404, Tuttleman Learning Center
Tái Huy Há, Tulane University
Fabrizio Zanello, Massachusetts Institute of Technology and Michigan Technological University zanello@math.mit.edu
• Saturday October 12, 2013, 8:30 a.m.-10:50 a.m.
Special Session on History of Mathematics in America, I
Room 306, Tuttleman Learning Center
Thomas L. Bartlow, Villanova University
Paul R. Wolfson, West Chester University
David E. Zitarelli, Temple University zit@temple.edu
• Saturday October 12, 2013, 8:30 a.m.-10:50 a.m.
Special Session on Meshfree, Particle, and Characteristic Methods for Partial Differential Equations, I
Room 403, Barton B
Toby Driscoll, University of Delaware
Louis Rossi, University of Delaware
Benjamin Seibold, Temple University seibold@temple.edu
• Saturday October 12, 2013, 8:30 a.m.-10:50 a.m.
Special Session on Multiple Analogues of Combinatorial Special Numbers and Associated Identities, I
Room 406, Tuttleman Learning Center
Hasan Coskun, Texas A&M University Commerce hasan.coskun@tamuc.edu
• Saturday October 12, 2013, 8:30 a.m.-10:20 a.m.
Special Session on Parabolic Evolution Equations of Geometric Type, I
Room 401, Barton B
Xiaodong Cao, Cornell University
Longzhi Lin, Rutgers University
Peng Wu, Cornell University wupenguin@math.cornell.edu
• Saturday October 12, 2013, 9:00 a.m.-10:50 a.m.
Special Session on Geometric Topology of Knots and 3-manifolds, I
Room 301, Tuttleman Learning Center
Abhijit Champanerkar, College of Staten Island and The Graduate Center, City University of New York
Ilya Kofman, College of Staten Island and The Graduate Center, City University of New York ikofman@math.csi.cuny.edu
Joseph Maher, College of Staten Island and The Graduate Center, City University of New York
□ 9:00 a.m. A discrete uniformization theorem for polyhedral surfaces. Feng Luo*, Rutgers University David Gu, State University of New York, Stony Brook Jian Sun, Tsinghua University, China Tianqi Wu, Tsinghua University, China
□ 10:00 a.m. Stable Commutator Length and Knot Complements. Tim Susse*, The Graduate Center, CUNY
□ 10:30 a.m. An Enhanced Prime Decomposition Theorem for Knots.
Matt Mastin*, Wake Forest University
• Saturday October 12, 2013, 9:00 a.m.-10:50 a.m.
Special Session on Recent Advances in Harmonic Analysis and Partial Differential Equations, I
Room 405, Barton B
Cristian Gutiérrez, Temple University
Irina Mitrea, Temple University imitrea@temple.edu
• Saturday October 12, 2013, 11:05 a.m.-12:05 p.m.
Invited Address
Welcome remarks.
Room 13, Gladfelter Hall
Normal functions.
Room 13, Gladfelter Hall
Patrick Gerald Brosnan*, University of Maryland
• Saturday October 12, 2013, 2:00 p.m.-4:50 p.m.
Special Session on Analysis and Computing for Electromagnetic Waves, II
Room 309, Barton B
David Ambrose, Drexel University ambrose@math.drexel.edu
Shari Moskow, Drexel University
□ 2:00 p.m. Propagation of Pulses in a Lossy Plasma. Natalie Cartwright*, State University of New York at New Paltz
□ 2:30 p.m. Evaluation of 2D-periodic 3D EM scattering at Wood-anomaly frequencies. Stephen Shipman*, Louisiana State University Oscar Bruno, Cal Tech Catalin Turc, NJIT Stephanos Venakides, Duke University
□ 3:00 p.m. Fast Algorithms for Computing the Effective Dielectric Properties of Cancellous Bone from Micro-CT scans. Miao-jung Yvonne Ou*, Department of Mathematical Sciences, University of Delaware Wai-Yip Chan, Department of Mathematics, The Chinese University of Hong Kong Yen-Hsi Richard Tsai, Department of Mathematics and Institute for Computational Engineering and Sciences, The University of Texas at Austin Seong Jun Kim, Department of Mathematics, The University of Texas at Austin Luis Cardoso, Department of Biomedical Engineering, The City College of The City University of New York
□ 3:30 p.m. Boundary Integral Formulation of the Transmission Eigenvalue Problem for Maxwell's Equations. Fioralba Cakoni*, University of Delaware
□ 4:00 p.m. Asymptotic Expansions for Transmission Eigenvalues for Media with Small Inhomogeneities. Shari Moskow*, Drexel University Fioralba Cakoni, University of Delaware
□ 4:30 p.m. Topological reduction of the inverse Born series. John C Schotland*, University of Michigan
• Saturday October 12, 2013, 2:00 p.m.-4:50 p.m.
Special Session on Combinatorial Commutative Algebra, II
Room 404, Tuttleman Learning Center
Tái Huy Há, Tulane University
Fabrizio Zanello, Massachusetts Institute of Technology and Michigan Technological University zanello@math.mit.edu
• Saturday October 12, 2013, 2:00 p.m.-4:50 p.m.
Special Session on Contact and Symplectic Topology, II
Room 300, Tuttleman Learning Center
Joshua M. Sabloff, Haverford College jsabloff@haverford.edu
Lisa Traynor, Bryn Mawr College ltraynor@brynmawr.edu
• Saturday October 12, 2013, 2:00 p.m.-4:50 p.m.
Special Session on Difference Equations and Applications, II
Room 409, Barton B
Michael Radin, Rochester Institute of Technology marsma@rit.edu
Faina Berezovskaya, Howard University
• Saturday October 12, 2013, 2:00 p.m.-4:45 p.m.
Special Session on Geometric Aspects of Topology and Group Theory, II
Room 304, Tuttleman Learning Center
David Futer, Temple University dfuter@temple.edu
Ben McReynolds, Purdue University
• Saturday October 12, 2013, 2:00 p.m.-4:50 p.m.
Special Session on Geometric Topology of Knots and 3-manifolds, II
Room 301, Tuttleman Learning Center
• Saturday October 12, 2013, 2:00 p.m.-4:45 p.m.
Special Session on Geometric and Spectral Analysis, II
Room 305, Barton B
Thomas Krainer, Pennsylvania State Altoona krainer@psu.edu
Gerardo A. Mendoza, Temple University
• Saturday October 12, 2013, 2:00 p.m.-4:45 p.m.
Special Session on Higher Structures in Algebra, Geometry and Physics, II Room 400, Tuttleman Learning Center Jonathan Block, University of Pennsylvania Vasily Dolgushev, Temple University vald@temple.edu Tony Pantev, University of Pennsylvania • Saturday October 12, 2013, 2:00 p.m.-4:20 p.m. Special Session on History of Mathematics in America, II Room 306, Tuttleman Learning Center Thomas L. Bartlow, Villanova University Paul R. Wolfson, West Chester University David E. Zitarelli, Temple University zit@temple.edu • Saturday October 12, 2013, 2:00 p.m.-4:50 p.m. Special Session on Mathematical Biology, II Room 308, Barton B Isaac Klapper, Temple University klapper@temple.edu Kathleen Hoffman, University of Maryland, Baltimore County Robert Manning, Haverford College • Saturday October 12, 2013, 2:00 p.m.-4:20 p.m. Special Session on Meshfree, Particle, and Characteristic Methods for Partial Differential Equations, II Room 403, Barton B Toby Driscoll, University of Delaware Louis Rossi, University of Delaware Benjamin Seibold, Temple University seibold@temple.edu • Saturday October 12, 2013, 2:00 p.m.-4:20 p.m. Special Session on Modular Forms and Modular Integrals in Memory of Marvin Knopp, II Room 402, Tuttleman Learning Center Helen Grundman, Bryn Mawr College Wladimir Pribitkin, College of Staten Island and the Graduate Center, City University of New York Wladimir.Pribitkin@csi.cuny.edu • Saturday October 12, 2013, 2:00 p.m.-4:20 p.m. Special Session on Multiple Analogues of Combinatorial Special Numbers and Associated Identities, II Room 406, Tuttleman Learning Center Hasan Coskun, Texas A&M University Commerce hasan.coskun@tamuc.edu • Saturday October 12, 2013, 2:00 p.m.-4:50 p.m. Special Session on Nonlinear Elliptic and Wave Equations and Applications, II Room 400, Barton B Nsoki Mavinga, Swarthmore College nmaving1@swarthmore.edu Doug Wright, Drexel University • Saturday October 12, 2013, 2:00 p.m.-4:50 p.m. Special Session on Parabolic Evolution Equations of Geometric Type, II Room 401, Barton B Xiaodong Cao, Cornell University Longzhi Lin, Rutgers University Peng Wu, Cornell University wupenguin@math.cornell.edu • Saturday October 12, 2013, 2:00 p.m.-4:50 p.m. Special Session on Partial Differential Equations, Stochastic Analysis, and Applications to Mathematical Finance, II Room 407, Barton B Paul Feehan, Rutgers University feehan@rci.rutgers.edu Ruoting Gong, Rutgers University Camelia Pop, University of Pennsylvania • Saturday October 12, 2013, 2:00 p.m.-4:50 p.m. Special Session on Recent Advances in Harmonic Analysis and Partial Differential Equations, II Room 405, Barton B Cristian Gutiérrez, Temple University Irina Mitrea, Temple University imitrea@temple.edu • Saturday October 12, 2013, 2:00 p.m.-4:50 p.m. Special Session on Recent Developments in Noncommutative Algebra, II Room 302, Tuttleman Learning Center Edward Letzter, Temple University Martin Lorenz, Temple University lorenz@temple.edu • Saturday October 12, 2013, 2:00 p.m.-4:50 p.m. Special Session on Representation Theory, Combinatorics and Categorification, II Room 303, Tuttleman Learning Center Corina Calinescu, New York City College of Technology, City University of New York Andrew Douglas, New York City College of Technology and Graduate Center, City University of New York Joshua Sussan, Medgar Evers College, City University of New York joshua.sussan@gmail.com Bart Van Steirteghem, Medgar Evers College, City University of New York □ 2:00 p.m. ADE classification and the triplet vertex algebra. 
Antun Milas*, SUNY-Albany
□ 2:30 p.m. Vertex tensor categorifications. Yi-Zhi Huang*, Rutgers University
□ 3:00 p.m. The vertex-algebraic structure of principal subspaces as categorification. James Lepowsky*, Rutgers University, Piscataway, NJ
□ 3:30 p.m. Vertex algebraic structure in integral forms of standard affine Lie algebra modules. Robert H McRae*, Rutgers, the State University of New Jersey, New Brunswick
□ 4:00 p.m. Tensor Product Decomposition of $\widehat{\mathfrak{sl}}(n)$ Modules and identities. Kailash C. Misra*, North Carolina State University
□ 4:30 p.m. Magic squares of Lie groups. Tevian Dray*, Department of Mathematics, Oregon State University John Huerta, Centro de Análise Matemática, Geometria e Sistemas Dinâmicos, Instituto Superior Técnico (Lisboa) Joshua Kincaid, Department of Physics, Oregon State University Corinne A. Manogue, Department of Physics, Oregon State University Aaron Wangberg, Department of Mathematics & Statistics Robert A. Wilson, School of Mathematical Sciences, Queen Mary University of London
• Saturday October 12, 2013, 2:00 p.m.-4:20 p.m.
Special Session on Several Complex Variables and CR Geometry, II
Room 307, Barton B
Andrew Raich, University of Arkansas araich@uark.edu
Yuan Zhang, Indiana University-Purdue University Fort Wayne
• Saturday October 12, 2013, 2:00 p.m.-4:45 p.m.
Special Session on The Geometry of Algebraic Varieties, II
Room 401, Tuttleman Learning Center
Karl Schwede, Pennsylvania State University schwede@math.psu.edu
Zsolt Patakfalvi, Princeton University
• Saturday October 12, 2013, 2:00 p.m.-4:25 p.m.
Contributed Paper Session, II
Room 402, Barton B
• Saturday October 12, 2013, 5:10 p.m.-6:00 p.m.
Erdős Memorial Lecture
Arithmetic statistics: Elliptic curves and other mathematical objects.
Room 13, Gladfelter Hall
Barry Mazur*, Harvard University
• Saturday October 12, 2013, 6:00 p.m.-8:00 p.m.
Erdős Memorial Lecture Reception
Diamond Club, Mitten Hall
Inquiries: meet@ams.org
{"url":"http://ams.org/meetings/sectional/2209_program_saturday.html","timestamp":"2014-04-17T20:22:06Z","content_type":null,"content_length":"130405","record_id":"<urn:uuid:f5bb1020-91d8-448c-9400-9aa5e130fa31>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Primes Between Consecutive Cubes: How many primes are there between n^3 and (n+1)^3?

Legendre's conjecture states that, for each positive integer n, there is at least one prime between n^2 and (n+1)^2. On this page, we will investigate a related question: How many primes are there between n^3 and (n+1)^3? Here are two hypotheses, and both of them appear to be true:

(A) For each integer n > 0, there are at least four primes between n^3 and (n+1)^3.
(B) For each integer n > 0, there are at least 2n + 1 primes between n^3 and (n+1)^3.

Note that if the above statement (B) is true, then statement (A) is also true. Indeed, for n = 1 both statements are easy to check and both are true, while for n ≥ 2 statement (A) follows from (B) because 2n + 1 > 4 for every n ≥ 2.

Statement (B) is suggested by these observations:

(1) For integer m > 1051, each interval [m^(3/2), (m+1)^(3/2)] contains a prime (generalized Legendre conjecture, case 3/2).
(2) For positive integers m and n, each interval [n^3, (n+1)^3] contains precisely 2n+1 intervals [m^(3/2), (m+1)^(3/2)], for example:
the interval [1^3, 2^3] contains three intervals [1^(3/2), 2^(3/2)], [2^(3/2), 3^(3/2)], [3^(3/2), 4^(3/2)];
the interval [2^3, 3^3] contains five intervals [4^(3/2), 5^(3/2)], [5^(3/2), 6^(3/2)], [6^(3/2), 7^(3/2)], [7^(3/2), 8^(3/2)], [8^(3/2), 9^(3/2)];
the interval [33^3, 34^3] contains 67 intervals [1089^(3/2), 1090^(3/2)], ... [1155^(3/2), 1156^(3/2)];
and so on.

Combining (1) and (2), we see that, since 1051^(3/2) < 1089^(3/2) = 33^3, statement (B) is true for n ≥ 33 provided that (1) is true. But we already tested statement (1) and, based on the knowledge of maximum prime gaps, (1) holds true for large numbers (from m = 1052 and up to 18-digit primes). However, when m and n are small, statement (1) does not help us establish (B). Therefore, now it is of particular interest to test statement (B) directly for small n.

The table below presents a computational check of statement (B) for a range of consecutive small cubes, and our computational experiment shows that (B) is apparently true. There are at least 2n + 1 primes between consecutive cubes n^3 and (n+1)^3. (We have to remember, though, that a computational check alone is not a proof.)

[Table: for each n, the primes between n^3 and (n+1)^3, with the expected count (2n + 1), the actual count, and an OK/fail flag.]

Copyright © 2011 Alexei Kourbatov, JavaScripter.net.
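The check is easy to reproduce; here is a minimal script (an added sketch, assuming SymPy is installed) that prints the same columns as the table:

# Check statement (B) for small n: at least 2n + 1 primes
# between n^3 and (n+1)^3.
from sympy import primerange

for n in range(1, 34):
    actual = sum(1 for _ in primerange(n**3, (n + 1)**3))
    expected = 2 * n + 1
    status = "OK" if actual >= expected else "fail"
    print(n, expected, actual, status)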
{"url":"http://www.javascripter.net/math/primes/primesbetweenconsecutivecubes.htm","timestamp":"2014-04-17T17:53:28Z","content_type":null,"content_length":"7671","record_id":"<urn:uuid:a623d24e-71d7-4803-8050-f1a35a0a5b46>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
Andrey (Andrei) Andreyevich Markov (Андрей Андреевич Марков) (June 14, 1856 N.S. – July 20, 1922) was a Russian mathematician. He is best known for his work on the theory of stochastic processes. His research later became known as Markov chains. He and his younger brother Vladimir Andreevich Markov (1871-1897) proved the Markov brothers' inequality. His son, another Andrey Andreevich Markov (1903-1979), was also a notable mathematician, making contributions to constructive mathematics and recursive function theory.

Andrey Andreevich Markov was born in Ryazan as the son of the secretary of the public forest management of Ryazan, Andrey Grigorevich Markov, and his first wife, Nadezhda Petrovna Markova. In the beginning of the 1860s Andrey Grigorevich moved to St Petersburg to become the asset manager of Princess Ekaterina Aleksandrovna Valvatyeva.

In 1866 Andrey Andreevich's school life began with his entrance into Saint Petersburg's fifth grammar school. Already during his school years Andrey was intensely engaged in higher mathematics. As a 17-year-old grammar school student he informed Bunyakovsky, Korkin and Yegor Zolotarev about an apparently new method to solve linear ordinary differential equations, and was invited to the so-called Korkin Saturdays, where Korkin's students regularly met. In 1874 he finished school and began his studies at the physico-mathematical faculty of St Petersburg University. Among his teachers were Yulian Sokhotski (differential calculus, higher algebra), Konstantin Posse (analytic geometry), Yegor Zolotarev (integral calculus), Pafnuty Chebyshev (number theory, probability theory), Aleksandr Korkin (ordinary and partial differential equations), Okatov (mechanism theory), Somov (mechanics) and Budaev (descriptive and higher geometry). In 1877 he was awarded the gold medal for his outstanding solution of the problem “About Integration of Differential Equations by Continuous Fractions with an Application to the Equation $\left(1+x^2\right)\frac{dy}{dx} = n\left(1+y^2\right)$.” In the following year he passed the candidate examinations and remained at the university to prepare for the lecturer's position.

In April 1880 Markov defended his master's thesis “About Binary Quadratic Forms with Positive Determinant”, which was encouraged by Aleksandr Korkin and Yegor Zolotarev. Five years later, in January 1885, there followed his doctoral thesis “About Some Applications of Algebraic Continuous Fractions”.

His pedagogical work began after the defense of his master's thesis in autumn 1880. As a privatdozent he lectured on differential and integral calculus. Later he lectured alternately on “introduction to analysis”, probability theory (succeeding Chebyshev, who had left the university in 1882) and the calculus of differences. From 1895/96 until 1905 he additionally lectured on differential calculus.

One year after the defense of his doctoral thesis, he was appointed extraordinary professor (1886), and in the same year he was elected adjunct to the Academy of Sciences. In 1890, after the death of Viktor Bunyakovsky, Markov became an extraordinary member of the academy. His promotion to an ordinary professor of St Petersburg University followed in autumn 1894. In 1896, he was elected an ordinary member of the academy as the successor of Chebyshev. In 1905 he was appointed merited professor and was granted the right to retire, which he immediately exercised. Till 1910, however, he continued to lecture on the calculus of differences.
In connection with student riots in 1908, professors and lecturers of Saint Petersburg University were ordered to observe their students. Markov initially refused to accept this decree and wrote an explanation in which he declined to be an “agent of the governance”. Markov was removed from further teaching duties at Saint Petersburg University, and he eventually decided to retire from the university.

In 1913 the council of Saint Petersburg University elected nine scientists honorary members of the university. Markov was among them, but his election was not affirmed by the minister of education. The affirmation came only four years later, after the February Revolution in 1917. Markov then resumed his teaching activities and lectured on probability theory and the calculus of differences until his death in 1922.

• А. А. Марков. "Распространение закона больших чисел на величины, зависящие друг от друга". "Известия Физико-математического общества при Казанском университете", 2-я серия, том 15, ст. 135-156.
• A. A. Markov. "Extension of the limit theorems of probability theory to a sum of variables connected in a chain". Reprinted in Appendix B of: R. Howard, Dynamic Probabilistic Systems, volume 1: Markov Chains, John Wiley and Sons, 1971.
{"url":"http://www.reference.com/browse/merited","timestamp":"2014-04-16T11:07:51Z","content_type":null,"content_length":"79460","record_id":"<urn:uuid:17bfee09-1914-4424-8eb8-e3e20d4b9fb1>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://www.emis.de/journals/EJP-ECP/article/view/1164/1477.html","timestamp":"2014-04-20T05:45:03Z","content_type":null,"content_length":"22380","record_id":"<urn:uuid:bc694fc7-df42-4773-987c-bdf3e5a2fa1b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
Free monads and their algebras

Today we are interested in the following theorem: The category of algebras of an endofunctor is isomorphic to the category of algebras of its free monad. It sounds complicated (and is rather imprecise), so let me explain:

Free monads (arising from algebras). Let $\mathcal{C}$ be a category, $\Sigma$ be an endofunctor on $\mathcal C$, and $\mathsf{Alg}(\Sigma)$ be the category of $\Sigma$-algebras. We can define a forgetful functor $U : \mathsf{Alg}(\Sigma) \rightarrow \mathcal C$ as $U (X, f) = X$ on objects, and $U \beta = \beta$ on morphisms. Assume that $U$ has a left adjoint, which we call $F$. As always in the case of adjunctions, a monad arises. We call this monad the free monad generated by $\Sigma$, and denote it by $\Sigma^*$.

Free monads are initial algebras. If $\mathcal{C}$ has finite coproducts, we can prove that $\Sigma^*X$ is equal to the initial algebra (if it exists) of the endofunctor $\Sigma(-) + X$. We name the action of this initial algebra $\xi$. So, the situation is: $\Sigma(\mu Y. \Sigma Y + X) + X \stackrel{\xi}{\rightarrow} \mu Y. \Sigma Y + X$. We split $\xi$ into two components:

$\eta : X \rightarrow \Sigma^*X$, $\eta = \xi \cdot inr$

$\alpha : \Sigma\Sigma^*X \rightarrow \Sigma^*X$, $\alpha = \xi \cdot inl$

One can prove that $\xi$ (and so $\eta$ and $\alpha$) is natural in $X$. We can provide another natural transformation, which, intuitively, embeds $\Sigma$ in its free monad:

$\psi : \Sigma X \rightarrow \Sigma^*X$, $\psi = \alpha \cdot \Sigma \eta$

Moreover, $\eta$ is the unit of the monad $\Sigma^*$, and $\mu = ( \hspace{-0.8mm} [ \alpha , id ] \hspace{-0.8mm} ) : \Sigma^*\Sigma^* \stackrel{\bullet}{\rightarrow} \Sigma^*$ is the multiplication of the monad.

Eilenberg-Moore algebras. Let $(T, \mu^T, \eta^T)$ be a monad on $\mathcal C$. We define an (Eilenberg-Moore) $T$-algebra (aka “for $T$ qua monad”) as an algebra $(X \in \mathsf{Obj}(\mathcal C), f : TX \rightarrow X)$, where

(1) $TTX \stackrel{\mu^T_X}{\longrightarrow} TX \stackrel{f}{\longrightarrow} X = TTX \stackrel{Tf}{\longrightarrow} TX \stackrel{f}{\longrightarrow} X$

(2) $X \stackrel{\eta^T_X}{\longrightarrow} TX \stackrel{f}{\longrightarrow} X = X \stackrel{\mathit{id}_X}{\longrightarrow} X$

By $\mathsf{EM}(T)$ we denote the category of Eilenberg-Moore $T$-algebras.

The theorem can now be stated as: $\mathsf{Alg}(\Sigma)$ is isomorphic to $\mathsf{EM}(\Sigma^*)$.

Is there any use of such a theorem? It allows us to automatically transfer some properties from the simpler level of $\Sigma$-algebras to the world of free monads and initial algebras. This theorem will appear at least once more in this blog, so don't forget about it too soon.

How to prove it? One way is to use Beck's monadicity theorem (the “evil” version from Mac Lane's book). It exactly fits the conditions about existence of adjoints, and the unintuitive condition about $U$ creating coequalizers has a lot to do with (1). But we are (or at least I am) interested in something that can be encoded in Haskell more directly. So, let's build an explicit isomorphism.
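Before diving in, for reference (a summary added here; the calculations below invoke these laws by name only), the three standard laws of catamorphisms for the initial algebra $(\mu Y. \Sigma Y + X, \xi)$, writing $F = \Sigma(-) + X$ and $( \hspace{-0.8mm} [ \varphi ] \hspace{-0.8mm} )$ for the unique algebra morphism induced by an algebra $\varphi : FA \rightarrow A$:

computation law: $( \hspace{-0.8mm} [ \varphi ] \hspace{-0.8mm} ) \cdot \xi = \varphi \cdot F ( \hspace{-0.8mm} [ \varphi ] \hspace{-0.8mm} )$

reflection law: $( \hspace{-0.8mm} [ \xi ] \hspace{-0.8mm} ) = id$

fusion law: if $h \cdot \varphi = \varphi' \cdot F h$, then $h \cdot ( \hspace{-0.8mm} [ \varphi ] \hspace{-0.8mm} ) = ( \hspace{-0.8mm} [ \varphi' ] \hspace{-0.8mm} )$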
We define two functors (we should actually prove that they are really functors, but you know…):

$J : \mathsf{EM}(\Sigma^*) \rightarrow \mathsf{Alg}(\Sigma)$
$J (X, f : \Sigma^*X \rightarrow X) = (X, f \cdot \psi)$
$J g = g$

$J^{-1} : \mathsf{Alg}(\Sigma) \rightarrow \mathsf{EM}(\Sigma^*)$
$J^{-1} (X, f : \Sigma X \rightarrow X) = (X, ( \hspace{-0.8mm} [ f, id ] \hspace{-0.8mm} ) )$
$J^{-1} g = g$

To prove the theorem, we show that (A) $J \cdot J^{-1} = Id_{\mathsf{Alg}(\Sigma)}$ and (B) $J^{-1} \cdot J = Id_{\mathsf{EM}(\Sigma^*)}$. We concentrate on objects; arrows are easy.

(A): $(J \cdot J^{-1}) (X, f) = (X, ( \hspace{-0.8mm} [ f, id ] \hspace{-0.8mm} ) \cdot \psi)$

Since the functors do not alter the carrier of the algebra, we focus on the action:

$( \hspace{-0.8mm} [ f, id ] \hspace{-0.8mm} ) \cdot \psi$
= (def. of $\psi$) $( \hspace{-0.8mm} [ f, id ] \hspace{-0.8mm} ) \cdot \alpha \cdot \Sigma \eta$
= (def. of $\alpha, \eta$) $( \hspace{-0.8mm} [ f, id ] \hspace{-0.8mm} ) \cdot \xi \cdot inl \cdot \Sigma \xi \cdot \Sigma inr$
= (computation law) $[f, id] \cdot (\Sigma ( \hspace{-0.8mm} [ f, id ] \hspace{-0.8mm} ) + id) \cdot inl \cdot \Sigma \xi \cdot \Sigma inr$
= (sum) $[f \cdot \Sigma ( \hspace{-0.8mm} [ f, id ] \hspace{-0.8mm} ) , id \cdot id] \cdot inl \cdot \Sigma \xi \cdot \Sigma inr$
= (inl) $f \cdot \Sigma ( \hspace{-0.8mm} [ f, id ] \hspace{-0.8mm} ) \cdot \Sigma \xi \cdot \Sigma inr$
= (comp. law) $f \cdot \Sigma [ f, id ] \cdot \Sigma(\Sigma ( \hspace{-0.8mm} [ f, id ] \hspace{-0.8mm} ) + id) \cdot \Sigma inr$
= (sum + inr) $f \cdot \Sigma id$
= (functor) $f$

(B): We first calculate:

$f \cdot \psi \cdot \Sigma f$
= (def. of $\psi$) $f \cdot \alpha \cdot \Sigma \eta \cdot \Sigma f$
= (naturality of $\eta$) $f \cdot \alpha \cdot \Sigma \Sigma^* f \cdot \Sigma \eta$
= (naturality of $\alpha$) $f \cdot \Sigma^* f \cdot \alpha \cdot \Sigma\eta$
= (1) $f \cdot \mu \cdot \alpha \cdot \Sigma\eta$
= (def. of $\mu$) $f \cdot ( \hspace{-0.8mm} [ \alpha, id ] \hspace{-0.8mm} ) \cdot \alpha \cdot \Sigma\eta$
= (def. of $\psi$) $f \cdot ( \hspace{-0.8mm} [ \alpha, id ] \hspace{-0.8mm} ) \cdot \psi$
= (similarly to A) $f \cdot \alpha$

We use this result in the following calculation:

$[f \cdot \psi, id] \cdot (\Sigma f + id)$
= (sum) $[f \cdot \psi \cdot \Sigma f, id \cdot id]$
= (prev. calculation) $[f \cdot \alpha, id \cdot id]$
= (2) $[f \cdot \alpha, f \cdot \eta]$
= (sum) $f \cdot [\alpha, \eta]$
= (def. of $\alpha, \eta$) $f \cdot \xi$

We use this result as a premise in the fusion law, hence:

$( \hspace{-0.8mm} [ f \cdot \psi, id ] \hspace{-0.8mm} )$
= (fusion) $f \cdot ( \hspace{-0.8mm} [ \xi ] \hspace{-0.8mm} )$
= (reflection law) $f$

We conclude: $(J^{-1} \cdot J) (X, f) = (X, ( \hspace{-0.8mm} [ f \cdot \psi, id ] \hspace{-0.8mm} )) = (X, f)$

3 Comments on “Free monads and their algebras”

February 15, 2013 at 6:56 pm
In every category which has coproducts and an initial algebra as described in the section “Free monads are initial algebras”
{"url":"http://maciejcs.wordpress.com/2012/04/17/free-monads-and-their-algebras/","timestamp":"2014-04-20T05:42:07Z","content_type":null,"content_length":"71955","record_id":"<urn:uuid:5a5d712f-d0ae-49c2-866d-2f451c579268>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
Checkerboard and Dominoes Problem

Suppose you have a checkerboard, and a set of dominoes. Each domino is twice the area of a square of the checkerboard. Clearly, you could cover the entire checkerboard with thirty-two dominoes. But here's the question: Suppose you chopped off two opposite corners of the checkerboard. Can you now completely cover the remainder of the board using thirty-one dominoes?

The answer to this question is: No, you cannot cover the checkerboard with 31 dominoes after two opposite corners have been removed. But how to prove it? That's the question. The answer is amazingly simple. If you are removing opposite corners, you are removing two squares of the same color. This leaves 32 squares of one color, and 30 squares of the other color. Since every domino must cover one square of each color, it is impossible to fully cover the checkerboard.

Isn't that slick?

1. Can you take two corners from the same side of a checkerboard and cover the remaining squares with dominos?
2. What if you take away any two adjacent squares?
3. What if you take away two diagonally-touching squares?
4. What if you took away all four corners?
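Here is a small computational companion (an added sketch in Python): the first function encodes the color-counting proof on the full 8x8 board, and the second does an exhaustive backtracking search for a tiling, run on a 6x6 board purely to keep the brute-force search fast; the same coloring logic applies at any even size.

def color_counts(n, removed):
    # Count squares of each color after removing the given squares.
    cells = {(r, c) for r in range(n) for c in range(n)} - set(removed)
    dark = sum((r + c) % 2 for r, c in cells)
    return dark, len(cells) - dark

print(color_counts(8, [(0, 0), (7, 7)]))    # (32, 30): unequal, so no tiling

def tileable(cells):
    # Exhaustive search: always cover the lowest remaining square; its
    # left and upper neighbors are already covered, so only the partner
    # to the right or below needs to be tried.
    if not cells:
        return True
    r, c = min(cells)
    for partner in ((r, c + 1), (r + 1, c)):
        if partner in cells and tileable(cells - {(r, c), partner}):
            return True
    return False

board = {(r, c) for r in range(6) for c in range(6)}
print(tileable(board - {(0, 0), (5, 5)}))   # False: opposite corners
print(tileable(board - {(0, 0), (0, 5)}))   # True: same-side corners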
{"url":"http://www.theproblemsite.com/slickmath/checkerboard_domino.asp","timestamp":"2014-04-16T07:16:35Z","content_type":null,"content_length":"35665","record_id":"<urn:uuid:c27f7d73-e76b-4ba4-8a73-61acd6c852e0>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Institute for Mathematics and its Applications (IMA) - Talk abstract: Financial Engineering and Risk Management by Iris Mack, Associated Technologies

Financial engineering is the application of mathematical models in the research, development and pricing of new financial instruments and services. Although the origins of financial engineering can be traced to the work of Bachelier in the early part of this century, a major breakthrough in the field was made in 1973 with the discovery of the option pricing formula. It is often said that the option pricing formula is to financial economics what the double helix was to molecular biology. In biology, the discovery of the structure of DNA gave birth to a new field of immense practical importance -- genetic engineering. Similarly, the discovery of the option pricing formula gave birth to the equally important field of financial engineering. Dr. Mack will discuss some applications of options analysis to risk management and investments analysis. Mathematical models in options analysis consist of a system of stochastic parabolic partial differential equations with fixed and/or moving boundary conditions. Dr. Mack will describe various known analytical solutions, as well as numerical approximations to the solution to these stochastic partial differential equations. Some industrial applications will also be described.
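The 1973 option pricing formula referred to above is the Black-Scholes formula; as a concrete illustration (an added sketch, not part of the abstract, with all parameter values chosen arbitrarily), here it is for a European call:

# Black-Scholes price of a European call (illustrative sketch).
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(S, K, T, r, sigma):
    # S: spot, K: strike, T: years to expiry, r: rate, sigma: volatility
    d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2))  # ~10.45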
{"url":"http://ima.umn.edu/hpc/wkshp_abstracts/mack1.html","timestamp":"2014-04-19T04:27:20Z","content_type":null,"content_length":"14630","record_id":"<urn:uuid:6a11b3e2-6597-432e-8ffe-8536d6c06bad>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
The L.A. Times’s Errors in Its Piece on DNA and Cold Hits I have sent the following e-mail to the authors of that L.A. Times piece on DNA and cold hits: Mr. Felch and Ms. Dolan, I believe your recent front-page article on DNA cold case statistics misstated the meaning of the math you discuss. Your article said: Jurors were not told, however, the statistic that leading scientists consider the most significant: the probability that the database search had hit upon an innocent person. In Puckett’s case, it was 1 in 3. The 1-in-3 number does not pertain to the probability that the database search had hit upon an innocent person. Rather, the 1-in-3 number pertains to the probability that a database search will result in a single match — whether that match is to an innocent person or a guilty one. If we ignore the existence of independent evidence of Puckett’s guilt, the statistical chance Puckett is innocent depends in part on the probability that the database contains the guilty party. Your article gives no information on what this probability is (although the fact that the database consists of California-based felons suggests that the chances are better than one would find in a purely random database). Without knowing the probability that the database contains the guilty party, you can’t conclude that the 1-in-3 figure accurately represents the chances Puckett is innocent. Your article confuses two distinct concepts and requires correction. You state: In every cold hit case, the panels advised, police and prosecutors should multiply the Random Match Probability (1 in 1.1 million in Puckett’s case) by the number of profiles in the database (338,000). That’s the same as dividing 1.1 million by 338,000. Actually, you have that upside down. Multiplying (1 in 1.1 million) by 338,000 is the same as dividing 338,000 by 1.1 million — not dividing 1.1 million by 338,000. Your article continues: For Puckett, the result was dramatic: a 1-in-3 chance that the search would link an innocent person to the crime. Again, this is wrong. There is a 1-in-3 chance that the search would link someone to the crime. Whether that person is innocent or not depends on the likelihood that the database contains the guilty party (as well as the quality of other evidence tying that defendant to the crime). I am not the only person saying this. A similar point was made by Eugene Volokh in this post. And I made the point in more detail in this blog post of mine. I think the paper owes readers at least two corrections — one of the 1-in-3 statistic, and one on the upside-down division. Given the prominence of the error on the 1-in-3 statistic, which appeared on the front page of the Sunday paper, I hope your paper will make an effort to give this correction the prominence it deserves. cc: Readers’ Representative I’ll let you know what I hear in response. P.S. When I say “Rather, the 1-in-3 number pertains to the probability that a database search will result in a single match — whether that match is to an innocent person or a guilty one.” I meant to express this concept: “Rather, the 1-in-3 number pertains to the probability that a database search will result in a single match, period. If we get a single match, we won’t know whether it was to an innocent person or a guilty person without learning more.” In other words, without prior knowledge of the likelihood that the database has the guilty person, all we know is the chance of a hit — not the chance that a single hit has come back to an innocent person. P.P.S. 
I just changed the last phrase from “not the chance of a hit to an innocent person” to “not the chance that a single hit has come back to an innocent person.” That more accurately expresses what I was trying to say. Expressing statistical concepts in accurate English is like walking a tightrope.

I think I’d go with that letter, too. (Though the paper may decide it’s too trivial a correction to deal with.)

Comment by Karl Lembke (521de2) — 5/8/2008 @ 11:31 pm

misstated the meaning of the math you discuss.

Great job – and easily correctable without requiring pages of figures.

Comment by Apogee (366e8b) — 5/9/2008 @ 12:26 am

This statement is demonstrably wrong:

The 1-in-3 number does not pertain to the probability that the database search had hit upon an innocent person. Rather, the 1-in-3 number pertains to the probability that a database search will result in a single match — whether that match is to an innocent person or a guilty one.

If the database contains the guilty person’s DNA sample then the probability of at least one match becomes unity. If the database does not contain the guilty person’s DNA then the probability is 1 in 3 of at least one match. Thus the actual probability of a match is somewhere between 1/3 and 1 depending on what the probability is that the offender’s DNA is in the database. The range of this can be measured by examining the statistics of like cases.

Should it turn out that for that type of crime there is a 70% chance the offender’s DNA is already in the database, then we can work out both the probability of a match and the probability that a given match is to the wrong person.

Take the two distinct cases:
For the offender in the database: probability of match is 1.
For the offender not in the database, the probability of a match is .333

Overall probability of a match: 1.0*0.7 + 0.333*(1.0-0.7) = .70 + .10 = .80
Probability that, given a match, the matched person is not the offender: .10/.80 = 1 in 8
Probability that, given a match, the matched person is the offender: .70/.80 = 7 in 8

Not bad. A lot more than the 2 in 3 indicated in the article. But wait. Let’s look at the case where the offender’s DNA is less likely to be in the database. Let’s say it is shown to be there only 10% of the time. Going through the same calculations:

Overall probability of a match: 1.0*0.1 + 0.333*(1.0-0.1) = .10 + .30 = .40
Probability that, given a match, the matched person is not the offender: .30/.40 = 3 in 4
Probability that, given a match, the matched person is the offender: .10/.40 = 1 in 4

So we can see how the probabilities shift depending on the database coverage.

A final note. There are probabilities that the database may match multiple people and, to be rigorous, multiple scenarios would have to be examined. Does the search terminate on one match or does it do an exhaustive search reporting all matches? Both alter the numbers. However, the purpose of the exercise was to demonstrate error and I think it sufficient for that purpose.

Of course the real question that remains is what percentage of people in the database, non-matching and selected at random, could be excluded by subsequent investigation. The probability of a false positive would then be reduced by this factor.

Comment by doug (fbba00) — 5/9/2008 @ 12:50 am

Your points are of mild interest.
Yes, the chance of hitting on a guilty party is more likely in this database than, say, the phone book, and certainly cannot possibly hit an “innocent”, except possibly of this particular crime. But again, so what? The real problem that the article exposed is a completely clueless use of statistics in a particular trial. The odds of the next 4 rolls of a pair of [fair] dice being snakeyes are 1 in 1.7 million. The odds someone did it last week in Las Vegas are almost certain. Presenting the one case for the other is what happened here. Comment by Kevin Murphy (0b2493) — 5/9/2008 @ 12:56 am Presenting the one case for the other is what happened here. If you were talking about the Times article, you would also be correct. Comment by Apogee (366e8b) — 5/9/2008 @ 1:22 am The 1-in-3 number does not pertain to the probability that the database search had hit upon an innocent person. Rather, the 1-in-3 number pertains to the probability that a database search will result in a single match — whether that match is to an innocent person or a guilty one. No, no no, a thousand times no. There is no little green man hiding in the database fixing the odds so that 1 in 3 searches will always result in a hit regardless of the probability that the killer is in there. The 1 in 3 number is an error rate, not a total hit rate. The latter is necessarily higher. How much higher, we don’t know without knowing the odds of a true hit. Call that value X. Prior to running the search, there was a 1 in 3 chance of matching an innocent person, and a 1 in X chance of matching the truly guilty party. Neither had any influence on the other. Unless the value of X was 1, there was a possibility that the search could turn up a true hit only, a false hit only, or both. After the fact, we know that this particular search turned up only one hit, therefore, we can state with certainty that this time around, either the killer was in the database or someone was nabbed randomly, but not both. To know which is more likely, we need to know the value of X. If it is 3 (one-third of all killers are in the DB), then one of two things happened, both of which had a 1 in 3 chance of occurring. In that case, the odds of Puckett’s innocence are 50-50, and the Times’s only sin was understatement. If the value of X is 2/3 (two-thirds of all killers are in the DB), then it is twice as likely that Puckett was guilty rather than innocent. But unless the value of X is excruciatingly close to 1, it’s nowhere near the one in a million figure cited by that disingenuous prosecutor, and the Times is guilty of, at worst, a technical foul. Comment by Xrlq (62cad4) — 5/9/2008 @ 4:26 am OK, you’re right about the slip in the fraction. But they did use the correct fraction to calculate the expected number of false positives. This is slightly greater than the probability of false positives occuring but close enough for the purposes of the article. As Doug has pointed out you made a slip in the letter. The article makes the mistake of confusing the false positive rate with the probabilty of innocence. You correctly point out that you need to know the probability of the culprit being in the data base before you can calculate the probability that the match is from the culprit. But the point of the article is that the prosecution used the random match rate in a very missleading way. They presented it as if it was equivalent to the false positive rate which under the circumstances it was not. This point is correct. 
You need to use caution in interpreting results from this sort of search. There is a temptation to believe in things which would make your efforts more successful, especially in these frustrating cases. I suppose that is why many people in law enforcement trust polygraphs. In a cold case such as this you want to believe that you have made a breakthrough. You have to accept the limitations of your techniques.

Comment by Lloyd Flack (ddd1ac) — 5/9/2008 @ 4:39 am

You don’t understand, the LATimes doesn’t believe anyone is guilty except for meat eating, gun owning REPUBLICANS.

Comment by PCD (5c49b0) — 5/9/2008 @ 4:56 am

I don’t think Doug is saying anything different from what I said.

If the database contains the guilty person’s DNA sample then the probability of at least one match becomes unity. If the database does not contain the guilty person’s DNA then the probability is 1 in 3 of at least one match. Thus the actual probability of a match is somewhere between 1/3 and 1 depending on what the probability is that the offender’s DNA is in the database. The range of this can be measured by examining the statistics of like cases.

Perhaps the way I expressed it was less than clear, but when I said

Rather, the 1-in-3 number pertains to the probability that a database search will result in a single match — whether that match is to an innocent person or a guilty one.

I meant to express this:

Rather, the 1-in-3 number pertains to the probability that a database search will result in a single match. If we get a single match, we won’t know whether it was to a guilty person or not because we don’t have prior knowledge whether the guilty person is in the database or not.

[To elaborate:] If we had prior knowledge that he was not in the database, we could say there is a 1 in 3 chance of a hit, and any hit would be to an innocent person. If we had prior knowledge that he was, we could say there is a certain chance of a hit, and that the hit is to a guilty person. With no prior knowledge either way, we know only that there is a 1 in 3 chance of a hit. If we get a hit, we don’t know whether the hit is to a guilty person or an innocent person.

So maybe I wasn’t entirely clear about *how* I said it, but I think the concept I was trying to express was correct. Using the numbers in the article, it doesn’t change the percentages significantly to remove one possible donor from the world’s pool of possible donors. Put another way, the rest of the database doesn’t know whether the guilty guy is there or not.

Comment by Patterico (4bda0b) — 5/9/2008 @ 6:31 am

See my P.S.

Comment by Patterico (4bda0b) — 5/9/2008 @ 6:35 am

Jurors were not told, however, the statistic that leading scientists consider the most significant: the probability that the database search had hit upon an innocent person. In Puckett’s case, it was 1 in 3.

The seminal problems with the above statements are manifold:

1) Leading scientists don’t consider the statistics the “most significant”. This is an intentional overstatement of facts. In point of fact, leading scientists would not agree with the conclusions in this article. Period.

2) The second problem with this statement is that it intends to convey “innocence” with “marker match” at a level of 1 out of 3. This is also a gross overstatement of the facts. You are NOT going to get a 33% hit on markers on innocent persons if the guilty party is in the database searched.
In fact, depending upon the specific markers, the database, and the crime… that number may fluctuate.

3) If we ignore the existence of independent evidence of Puckett’s guilt… we are completely distorting the statistics to slander the investigation and prosecution of this crime. One cannot, MUST not, eliminate the corroborating evidence when one discusses the “innocence” of a party. The existence of compounding pieces of evidence works to ameliorate the STATISTICAL chances that the “matched” person was a) not available to commit the crime; b) not physically able to commit the crime. If you have a database consisting of only those persons available and capable of committing the particular crime… AND then there is a match… the likelihood of 1 in 3 being a “random hit” is ridiculously overstated.

Comment by cfbleachers (4040c7) — 5/9/2008 @ 6:38 am

Journalists have never been known for their math skills. That’s why we’re mostly English majors.

Comment by Bradley J. Fikes (1c6fc4) — 5/9/2008 @ 6:47 am

leading scientists

The invocation of generic experts – a necessary ingredient for all weak articles.

Comment by Amphipolis (fdbc48) — 5/9/2008 @ 7:01 am

The article got it partially right. The false positive rate is far larger than the random match rate that the prosecutor quoted. Using the random match rate was misleading. Where the LA Times got it wrong was in confusing the false positive rate with the probability of innocence.

Your second point is wrong. See Patterico’s post above. The probability of getting a false positive has nothing to do with the probability of the suspect being in the database. Both can happen.

As for the third point, the discussion has been about how strong the evidence of guilt from a database match is, not how strong the other evidence is. The argument has been that in this case proof of guilt will have to depend primarily on the other evidence. Yes, the database hit means that he was one of a fairly small number of persons who might have committed the crime. It would not be conclusive evidence by itself.

Comment by Lloyd Flack (ddd1ac) — 5/9/2008 @ 7:02 am

I am fascinated by most of this discussion, but the real point of the LAT article was to criticize the technique of searching the database for matches. But this technique is only a high-tech version of all investigative work. You start with the population, and narrow the field of suspects by eliminating those who couldn’t have done it. Searching the database eliminated 337,999 potential suspects because their DNA did not match the crime scene. There has to be more evidence to convict than that, and in this case there was. I am just as convinced of his guilt because he has been convicted of crimes with the same MO, and can’t be eliminated as a suspect by other evidence.

Comment by Mike S (d3f5fd) — 5/9/2008 @ 7:45 am

P.S. When I say “Rather, the 1-in-3 number pertains to the probability that a database search will result in a single match — whether that match is to an innocent person or a guilty one,” I meant to express this concept: “Rather, the 1-in-3 number pertains to the probability that a database search will result in a single match, period.”

Still wrong. The 1-in-3 number pertains to the probability that the database will result in a random, i.e., false match. The probability that the database will result in one match total is unknown, but irrelevant. All that is relevant are the following:

1. What are the odds of a false hit? Answer: 1 in 3.

2. What are the odds of a true hit?
Answer: x, where x represents the probability that the killer was in the database to begin with.

3. Was this one hit (Puckett) more likely to have been a true hit or a false hit, and how much more likely? The answer depends entirely on the value of x. If x = 0 (no possibility of the killer being in the DB), then we know for certain that Puckett was a false hit. Similarly, if x = 1 (absolute certainty that the killer was in the DB), then we know for certain that Puckett was a true hit. If x = 1/3, then the odds of his guilt vs. innocence are 50-50, as we know one of two equally probable outcomes (each stood a 1 in 3 chance) has occurred.

Bottom line: your email should have focused entirely on the fact that we are comparing the odds of two types of hits occurring, rather than the odds that one or the other might have occurred in a

Comment by Xrlq (b71926) — 5/9/2008 @ 7:57 am

It’s a question of what is known when you do the search. In your example, if you don’t know the value of x, then the chance of any hit is 1 in 3. If you do know the value of x going in, it changes the equation. It also changes what you can say about the meaning of a single hit resulting from the search.

Comment by Patterico (a87c8f) — 5/9/2008 @ 8:14 am

1 of every 1.1 million coins was minted in 1972. There are 6000 coins in the world made in 1972. Only one is green. Here’s a room with 338,000 coins. What are the chances a coin will be found with a 1972 date? About 1 in 3. We have found a single 1972 coin in the room. What are the chances it’s the green one? Dunno. Depends on the chances the green coin was in the room. We can’t say it’s a 1 in 3 chance the coin is green without knowing more. If we had a scout go in the room and look for the green coin, determine it’s not there, and not look at dates, and we then did the search, we’d expect a roughly 1 in 3 chance of finding a 1972 coin. And once we had it, we would know it’s not green.

[UPDATE: Ah, but the chance of a "single hit" is not the same as a chance of a "hit." That part of my post was poorly worded and inaccurate. -- P]

Comment by Patterico (deb6c3) — 5/9/2008 @ 8:26 am

In your example, if you don’t know the value of x, then the chance of any hit is 1 in 3.

Wrong. The chance of a random (read: false) hit is 1 in 3. The chance of getting any hit is necessarily higher, as it includes both the 1 in 3 chance of getting a false hit, plus any chance x may give us of getting a true one.

Comment by Xrlq (b71926) — 5/9/2008 @ 8:35 am

Reaction of an average person to this thread: MEGO! We, of the great unwashed, rely on the professionals to sort this out, and to use their best judgment in coming to a consensus that can be applied to the law in a neutral manner. Now, I know that some of the preceding thoughts are asking a lot; but, when the freedom and property of real people are at risk, I think it is a fair request. Too many times, the “professionals” within the judicial system seem to be children playing at very sophisticated games, knowing there is no down-side risk for them personally. Whether or not that perception is accurate, it is out there and should be addressed – perhaps by arguments being carried out on a somewhat higher plane than what we see in the media, and in some televised trials.

Comment by Another Drew (f9dd2c) — 5/9/2008 @ 8:43 am

I like this letter better as well, Patterico. You are on much more solid ground here. I think you could have brought up the issue of 1-in-4 vs. 1-in-3, but YMMV as they say.
You are arguing about a conditional probability, whereas what Patterico is talking about is an unconditional probability. That is, Prob(Match|KIB) ≠ Prob(Match). As has been noted many, many times before by me, Karl, and Daryl (and I’ll also note we don’t all agree on certain aspects of this case, especially Daryl and I), we all agree we need to know P(Match). P(Match|KIB), assuming no false negatives, is going to be 1. You’ll also need that number to update your prior (probability) for guilt.

Prior to running the search, there was a 1 in 3 chance of matching an innocent person, and a 1 in X chance of matching the truly guilty party.

True, but I consider this a trivial observation. Prior to running the trawl we could argue that the probability that any given person in the database is guilty is 1/N, where N is the number of people in the database. Once we observe the results of the trawl, that probability is either going to remain the same (no hits), or for some members of the database their probability of guilt will go up while for others it goes down (by the theorem of total probability). This is kind of a “No duh” observation.

After the fact, we know that this particular search turned up only one hit, therefore, we can state with certainty that this time around, either the killer was in the database or someone was nabbed randomly, but not both.

So what? The point still remains that we need to know the probability of a DNA match that is not conditioned on anything else. It was in every formulation of Bayes’ theorem that was put forward in that thread (IIRC).

Calm down, dude; Patterico is the last person I’d accuse of ignoring corroborating evidence of guilt. The point he is making is: “If we set that data aside for the moment and look at this situation….” Also, please keep in mind we can do sequential updating with Bayes’ theorem.

Let me be explicit here. We are ultimately interested in P(G|DNA=1), where G = guilt and DNA=1 means that we have 1 DNA match. Initially the investigator had no reason to think any member of the database was any more likely to be guilty than the next. So P(G) could be set to 1/N. Now we get our DNA trawl results. Aha! One hit. Woohoo, a possible breakthrough. Now we want to update our prior via

P(G|DNA=1) = P(DNA=1|G)*P(G)/P(DNA=1).

Now, I think the 1-in-3 number is too high and favor 0.226, as was discussed in previous threads. Further, we agree that P(DNA=1|G) = 1. So we do the arithmetic and get P(G|DNA=1) = 0.000013091. Not very good, but it is much higher than our initial probability for Puckett of 0.000002959. Our revised probability is 4.42 times as large. So you go and investigate Puckett, because everyone else’s probability of being guilty just dropped like a rock.

Now, we could argue that the 1/338,000 is too high. Maybe some of the people in that database could not possibly have committed the crime in question in 1972. So you could remove these people and go with a lower prior probability. In the previous thread it was argued that the killer was in the database with probability 0.066. This is close to 1/15. Using this prior we’d get P(G|DNA=1) = 0.295. We approach unity for the above conditional probability as our prior of guilt (for Puckett) approaches 0.226. What this prior says is that prior to the trawl the investigators would have had to have reason to suspect Puckett; since they didn’t, clearly that prior is too high.

Once you do settle on a prior (or even a range of priors for sensitivity analysis), then you bring in the additional evidence.
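[Note: a quick way to check the arithmetic in the comment above is the following minimal Python sketch, using the numbers quoted there (the 0.226 trawl probability and the uniform 1/338,000 prior); the variable names are illustrative, not the commenter’s:

N = 338_000
prior = 1.0 / N                  # uniform prior of guilt over the database
p_hit_given_guilt = 1.0          # no false negatives assumed
p_one_hit = 0.226                # probability of exactly one trawl hit
posterior = p_hit_given_guilt * prior / p_one_hit
print(posterior)                 # ~1.3091e-05
print(posterior / prior)         # ~4.42, the updating factor discussed above

The same two lines with prior = 0.066 give roughly 0.29, matching the second calculation in the comment.]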
For example, suppose we have a case where we get two hits, and that we used the uniform distribution as our prior. Then the probability that each of these two is guilty, P(G(i)|DNA=2), i = 1, 2, is the same. So we investigate both individuals and find out that at the time in question individual 1 was incarcerated for another crime. We’d update our P(G|DNA=2) again using Bayes’ theorem, and it should be obvious that for individual 1 the probability is going to drop substantially. So, Patterico is just fine on that last point of yours.

Comment by Steve Verdon (4c0bd6) — 5/9/2008 @ 8:47 am

I incorrectly stated the probabilities in the low-percentage database coverage case. It is serious in that the correct numbers point to a higher probability of a false positive than the 1 in 3 the Times article indicates.

Overall probability of a match: 1.0*0.1 + 0.333*(1.0-0.1) = .10 + .30 = .40
Probability that, given a match, the matched person is not the offender: .10/.40 = 1 in 4
Probability that, given a match, the matched person is the offender: .30/.40 = 3 in 4

Oops. Got the order wrong on the 10% coverage case. It’s correct on the earlier 70% coverage example. The 10% case should read:

Overall probability of a match: 1.0*0.1 + 0.333*(1.0-0.1) = .10 + .30 = .40
Probability that, given a match, the matched person is not the offender: .30/.40 = 3 in 4
Probability that, given a match, the matched person is the offender: .10/.40 = 1 in 4

So here we have a different problem. When the a priori probability that the database contains the DNA of the guilty party is low, the probability that a match is a false positive becomes larger than 1 in 3. As stated earlier, this probability is an upper limit that must be reduced by investigative exclusion.

Comment by doug (fbba00) — 5/9/2008 @ 9:03 am

Pat, I like this letter much better as well. I think it conveys the fundamentals that potential jurors need to be aware of in a case like this.

#19 Xrlq: The chance of a random (read: false) hit is 1 in 3. The chance of getting any hit is necessarily higher, as it includes both the 1 in 3 chance of getting a false hit,

I have some heartburn with your wording here. I just want to make sure that you understand there is no such thing as a “false” hit here. Either the portion of the sample DNA fragment exactly matches the relevant portion of a recorded sample in the database, or it doesn’t. The random match probability remains 1:1.1 million — so the chances of a match are going to remain the database size multiplied by the random match probability, unless you’ve managed to skew the database by recording only the descendants of a single progenitor or something.

Comment by EW1(SG) (84e813) — 5/9/2008 @ 9:24 am

Pat, I think what you say is correct, and I was just about to post this on your “proposed email” thread:

Jurors were not told, however, the statistic that leading scientists consider the most significant: the probability that the database search had hit upon an innocent person. In Puckett’s case, it was 1 in 3.

This last statement is false, because the “1 in 3” instead only represents the probability that the one DNA match which should be found if you sampled 1.1 million members of the total population you are looking at will be in the database – which in fact here consists of only about 1/3 of the 1.1 million needed.
So the database correspondingly has a 1 in 3 chance of containing a match; that’s all. And you don’t know the probability that this one db match is a false positive unless you also know the total matches likely, based upon the total population number you are considering, which in Puckett’s case was much greater than 1.1 million people — meaning more likely matches than just Puckett’s, at a rate of 1 per 1.1 million of total population.

So, Pat, why don’t you just flat-out tell the LAT that the 1/3 number is simply and only the chance that there is a DNA match in the actual database, which itself is within a much larger population [tp] containing all of the possible matches = “total matches” [tm], a total which depends entirely upon the actual number tp represents, given the “random match probability” [rmp], which I assume is the frequency of expected matches within an even larger theoretical population – perhaps the U.S.’s or the World’s? Whatever, the rmp = 1/1.1 million people in the tp of interest.

If I recall correctly, the likelihood that the one match found in the database is a false positive is actually [tm−1]/tm, where tm = likely total matches in the tp. So the probability that any match found anywhere within the tp is “guilty” [Pg] = 1/tm, which has nothing to do with how many matches are actually found within the db, or who the matches are – that is, lacking any other consideration as to whether being in the db makes a “match” there more likely to have committed another particular crime compared to a match not in the db.

tm is found by multiplying tp by the frequency of the DNA match, rmp: tm = [tp][rmp]. For example, if tp = 18.7 million people, and rmp = 1 match/1.1 million people, then tm = 17 and Pg = 1/tm = 1/17 = the probability that any match found within the tp is the guilty one. The probability that the/any match is a false positive is [17−1]/17 = 16/17.

Also, maybe tell the LAT that the rmp, = 1/1.1 million, is not the chance that an innocent match will be “found” by Law Enforcement instead of a guilty one. LE would have a very difficult time of actually finding any of the matches at all by a random search or process, such as by randomly picking someone up off the street. Randomly finding the guilty match would be even less likely, because there is only one of this kind of match vs., say, 17 total. LE simply[?] had a match within their db, which at least made finding Puckett much easier. [They still haven't actually found the other matches yet, have they?]

Comment by J. Peden (4938ac) — 5/9/2008 @ 9:39 am

EW1(SG), I’m not going to get into a semantic debate. If you don’t like the words “true” and “false,” feel free to substitute other labels more to your liking. The point is that a distinction must be drawn between the hits I call true (those matched to the actual donor) and those I call false (those matched randomly to others). False (random) hits are the 1 in 3 factor. True (actual match to donor) hits are a function of how likely the donor is to be in the database. Neither factor influences the other.

Either the portion of the sample DNA fragment exactly matches the relevant portion of a recorded sample in the database, or it doesn’t. The random match probability remains 1:1.1 million

Right, but if the true donor is in the database, his match isn’t random. That’s why you can’t conflate the two. If some random schmoe is added to the database, the odds of a hit increase by 1/1.1 million. If the true donor is added, they increase to 1.
Think about the most obvious example, where we have a 100% chance that the true donor is in the database. We run the test on the entire database. Do you really think there is only a 1 in 3 chance that we will find a match?!

Comment by Xrlq (62cad4) — 5/9/2008 @ 10:40 am

There’s lots of ways to skew the “sample”/database. That’s why researchers calculate P-values. For example, here, what if the James-Younger gang were in the database? They were all siblings or cousins. But we are not looking at the efficacy of a drug or the predictability of a heart attack. We are looking for suspects of a crime, and the database is a useful place to start looking. Jesse was killed by Bob Ford and Cole is doing forty years in prison, so we check out Frank.

Comment by nk (8f20b5) — 5/9/2008 @ 11:00 am

This is why most journalists are unqualified to do anything BUT opinion crap pieces. They are generally incompetent in all fields. A communications degree is probably the #1 choice of students incapable of entering college without affirmative action, unless you count ethnic studies. It has undoubtedly dragged down an already lame and pointless degree.

Comment by martin (cd5d90) — 5/9/2008 @ 12:40 pm

There is a 1-in-3 chance that the search would link someone to the crime.

There is a 1-in-3 chance the search would link an innocent person to the crime. There is a P chance that the search would link the guilty party to the crime, where P is the likelihood that the killer would be in the DB. If P = 90%, then there is better than a 90% chance that the search would link someone to the crime.

Comment by Daryl Herbert (4ecd4c) — 5/9/2008 @ 12:56 pm

Daryl, but doesn’t that go only to the usefulness of the database? In other words, are the police wasting their time with it or not? Whether the blind hog found an acorn is an independent issue. Did the totality of the evidence prove the defendant guilty beyond a reasonable doubt?

Comment by nk (8f20b5) — 5/9/2008 @ 1:11 pm

How can we know? The jury was led to believe the odds were 1 in a million that the DNA would randomly match to Puckett, when in fact the odds were 1 in 3 that it would match to someone in a database full of sex offenders. If the jury had properly been instructed on that point, they may well have concluded that the DNA semi-match, in conjunction with other evidence, established proof beyond reasonable doubt. But since they were improperly instructed, they may well have convicted based on the DNA evidence alone.

Comment by Xrlq (b71926) — 5/9/2008 @ 2:11 pm

Yup. It’s an investigative tool. Not evidence. Its potential for prejudice outweighs its probative value.

Comment by nk (8f20b5) — 5/9/2008 @ 2:25 pm

The LAT’s reporters cannot even bother to confirm the gender of an interviewed man’s partner, and call that man an “open homosexual” when in fact the partner is female and the man heterosexual. You expect them to be able to pin down a complicated scientific story?

Comment by seaPea (3c8938) — 5/9/2008 @ 2:52 pm

Yup. It’s an investigative tool. Not evidence. Its potential for prejudice outweighs its probative value.

That depends on how it’s presented. If the prosecution had truthfully advised the jury that the odds of a false hit were roughly 1 in 3, that would still mean that the evidence is probative, just not probative enough to sustain a conviction on its own. I would imagine that most admissible non-DNA evidence falls into this category as well.
Every piece of evidence doesn’t have to be a smoking gun; it just can’t be presented as though it were a smoking gun if it’s not.

Comment by Xrlq (62cad4) — 5/9/2008 @ 4:10 pm

#25 Xrlq: I’m not meaning to debate semantics, and in truth have only been following certain parts of the discussion “with half an ear,” as much of it has little relevance to a criminalist, and even less to a jury. What concerned me in your earlier post was this statement:

The chance of getting any hit is necessarily higher, as it includes both the 1 in 3 chance of getting a false hit, plus any chance x may give us of getting a true one.

where I got the mistaken impression that you were conflating the two conditions. I actually think we are on the same page, but what little I know of the subject I learned from an extremely competent criminalist, so my viewpoint isn’t necessarily that of the man off the street.

As a working assumption, a criminalist isn’t going to worry about whether the criminal is in the database or not when running the sample against it. After all, to illustrate, with an RMP of 1:1.1 million, it’s possible that as many as 10 people in the LA County area alone match (and yes, I know the crime occurred in San Francisco). And it’s entirely possible there are no matches in the database. It’s only after a match is found that the likelihood of an uninvolved party is considered and accounted for by corroboration. So in that sense, all hits are “true” until eliminated.

Comment by EW1(SG) (84e813) — 5/9/2008 @ 4:32 pm

#26 nk: There’s lots of ways to skew the “sample”/database.

Actually, there aren’t, and I gave a very poor example above without thinking. The value of DNA testing as an identification tool comes precisely because of its measurement of independently heritable traits, so a set of quints would skew the database but anybody else wouldn’t.

Comment by EW1(SG) (84e813) — 5/9/2008 @ 4:51 pm

Er, and they would have to be identical quints, at that.

Comment by EW1(SG) (84e813) — 5/9/2008 @ 4:55 pm

This is so reminiscent of the disconnect between law and medicine regarding the insanity defense. Insanity is not a medical term. So the doctor is talking about “passive-aggressive personality disorder” and “cannabis- and alcohol-induced dissociative state”, and the jury is supposed to find whether “as a result of mental disease or mental defect, the defendant lacked substantial capacity to appreciate the criminality of his conduct”.

Comment by nk (8f20b5) — 5/9/2008 @ 5:18 pm

There is a ceiling on the probability that the culprit is in the database. It is the probability that the culprit survived long enough to be included in the database. Considering how long ago this case was, the chance that the culprit died before he could be included in the database is far from negligible.

Comment by Lloyd Flack (ddd1ac) — 5/9/2008 @ 8:04 pm

nk wrote: Daryl, but doesn’t that go only to the usefulness of the database? In other words, are the police wasting their time with it or not? Whether the blind hog found an acorn is an independent issue. Did the totality of the evidence prove the defendant guilty beyond a reasonable doubt?

No. It’s not an “independent issue.” The probability that a search will return one or more hits is not independent of the probability that the killer is in the DB.

Comment by Daryl Herbert (452002) — 5/9/2008 @ 9:17 pm

If the prosecution had truthfully advised the jury that the odds of a false hit were roughly 1 in 3

The prosecution could not “truthfully” do so, because it isn’t true.
Comment by Daryl Herbert (452002) — 5/9/2008 @ 10:24 pm

Steve Verdon: I replied to your last comment directed to me on the previous thread. If you want to keep discussing it, I would prefer to continue the conversation on whichever thread is the most recent, just so we only have to look in one place to talk to each other.

Comment by Daryl Herbert (452002) — 5/9/2008 @ 10:33 pm

And, actually, there are other problems with the Times and Patterico’s “analysis.” If the chance of a random comparison being a match is indeed 1 in 1.1 million, then the chance against such a match is 0.999999091. Since each comparison is independent, the probability of NO matches in N tries is (0.999999091)^N. If there are 338,000 samples to test against, the chance that NO MATCH occurs is 73.5%. And 26.5% is therefore the chance that SOME matches occur.

If you want the chance that exactly R matches occur in N samples, you need to use the formula C(N,R) * (p^R) * ((1−p)^(N−R)), where C(N,R) is the combination of N things taken R at a time, and p is the probability of a single trial. If R = 1, this simplifies to N*p*((1−p)^(N−1)), or with these numbers: (338000/1100000) * ((1−(1/1100000))^337999) = 0.226, so about 22.6% is the chance of exactly 1 match, and about 4% is the chance that more than one match occurs (26.5% − 22.6%).

So, the whole article is silly, as is the correction. The odds of only one match occurring are not 1/3rd, and further it has nothing to do with dividing 338,000 by 1.1 million or the other way around. It’s not a division problem at all. Lastly, the fact that only one match occurs is not even of interest, because it is 5 times more likely than more-than-one match. All this says is that the DNA testing in this case is hardly conclusive. Convicting someone on this basis is the equivalent of convicting on the basis of hair color or race. At best the DNA test corroborated other evidence. By the way, the defense attorney gets even lower marks on this test, because he had a slam-dunk rebuttal.

Comment by Kevin Murphy (0b2493) — 5/10/2008 @ 1:57 am

Lastly, the fact that only one match occurs is not even of interest, because it is 5 times more likely than more-than-one match.

It’s of interest because it allows us to exclude the possibility of both a true and a false hit.

Comment by Xrlq (62cad4) — 5/10/2008 @ 6:02 am

The whole problem with the article (and with much of the debate that has followed it) is the looseness with which some terms are used to convey concepts. “Innocence” is being used too broadly or inappropriately, and the attempt to convey the concept that DNA evidence will wrongly “convict” 1 out of every 3 defendants in a case where it is used… is the journalistic equivalent of jury tampering before the fact. Given the LA Times’s sordid history, I believe this has a 1 out of 1.25 chance of being the intention.

Without knowing which specific markers and also the specific composition of the database tested against those markers… we seem to be taking random markers vs. a random database. The chance of a 5.5-marker match on a RANDOM database of 350,000 RANDOM individuals… that would produce ONE individual… and that individual did not match all of the 13 markers… produces one statistical number. Let’s call that number “X”.

Change the database to include all Hopi tribal women. Or Italian-American babies. Or Scottish farmboys. Will that produce a number that is greater or lesser than “X”, because the database is composed of a different subset of individuals?
Now, let’s change those 5.5 markers… for an entirely different group of 5.5 markers. Apply it to our random database and give our “conclusive” number the designation “Y”. Will “Y” always equal “X”? How about against different databases? Here’s the thing… if the markers rule out and rule in different “things”… they rule out and rule in different people. If the database includes some commonality, that may rule out or rule in a higher or lower percentage of people… depending upon the markers.

Are KNOWN OFFENDERS more likely to commit crimes of this type than the “random general public”? Does that subset database already have built into it a higher likelihood of criminality than a random database of the same size? Would that fact give a higher return of GUILTY parties on a “hit” based upon 5.5 markers where only one person is returned… and that person was known to be available to commit the crime and in the area at the time of the violent act?

Comment by cfbleachers (4040c7) — 5/10/2008 @ 7:17 am

If the chance of a random comparison being a match is indeed 1 in 1.1 million, then the chance against such a match is 0.999999091. Since each comparison is independent, the probability of NO matches in N tries is (0.999999091)^N. If there are 338,000 samples to test against, the chance that NO MATCH occurs is 73.5%. And 26.5% is therefore the chance that SOME matches occur.

The 26.5% number was covered by Eugene Volokh in his post, which I linked in mine. In the longer version of my post, I included all the caveats so that people like yourself couldn’t come along later and claim that I was an idiot because the number was really 26.5% instead of 1 in 3. Everyone who has read all my posts and followed the links understands that we are talking about an approximation of an approximation. The 1 in 3 number does accurately represent the multiplication adjustment recommended by the scientific committees as a conservative and simplified way of expressing the chances of finding a match in a database of unrelated individuals who did not donate the crime scene DNA.

Comment by Patterico (4bda0b) — 5/10/2008 @ 10:33 am

#44 cfbleachers: Change the database to include all Hopi tribal women. Or Italian-American babies. Or Scottish farmboys. Will that produce a number that is greater or lesser than “X”, because the database is composed of a different subset of individuals?

No. As I mentioned above, one of the reasons the markers that are used are used is because they are independently heritable, i.e. not associated with any subset of the population.

Comment by EW1(SG) (84e813) — 5/10/2008 @ 11:02 am

Expressing statistical concepts in accurate English is like walking a tightrope. And that’s one part of one discipline. You can understand why professional scientists cringe over what laypeople do to scientific concepts when they try to express them in English.

Comment by Karl Lembke (7ae576) — 5/10/2008 @ 1:24 pm

Sorry, I didn’t follow all the links, and I saw no reference to the actual values in any of the posts or comments. The 1/3rd number AND the idea that the calculation was a simple matter of division were prominent, however. I confess to skimming. I think that my pique is directed more towards 1) the LA Times article’s incorrect use of statistics while correcting someone else’s use of statistics; 2) the trial’s reported use of meaningless pseudo-statistics to convict someone; and 3) your proposed correction focusing on minor errors and ignoring the main issues.
I must admit, though, that there are about 97 sides to this discussion by now, and perhaps the statistics themselves are no longer meaningful.

Comment by Kevin Murphy (805c5b) — 5/10/2008 @ 3:14 pm

your proposed correction focusing on minor errors and ignoring the main issues.

Kevin, I love ya, but that’s a load of horse hockey. The CENTRAL POINT made by the article was that the jurors weren’t told that the chances the search HAD HIT on an innocent person were 1 in 3. That statistic is NOT what the jury should have been told, even according to the scientific committees cited in the article. That is my focus, and it is hardly a focus on a side issue. It’s the MAIN issue — and even someone with a touch of Prosecution Derangement Syndrome should be able to see that.

Also, since you have been skimming this, maybe I should repeat something that I think most skimmers have been missing: THE JURY WAS NOT TOLD THERE HAD BEEN A HIT FROM A DATABASE. You do understand that, right?

Comment by Patterico (4bda0b) — 5/10/2008 @ 4:19 pm

“THE JURY WAS NOT TOLD THERE HAD BEEN A HIT FROM A DATABASE.”

I consider this sufficient to reverse (assuming the defense wanted them told).

Comment by James B. Shearer (fc887e) — 5/10/2008 @ 4:38 pm

What James said. That is by far the biggest problem here. If the prosecutor had said:

Ladies and gentlemen of the jury, there’s this thing called the ‘prosecutor’s fallacy’ that means blah blah blah blah blah. Don’t really understand that crap; all I know is that we had no clue who to suspect for this murder, so we went a-fishin’ in a huge database of known sex offenders. Each little experiment had a 1 in a million chance of randomly matching someone, but we ran the experiment about one-third of a million times. Do the math.

There would have been little or no risk of the jury convicting Puckett on the basis of the partial DNA match alone. They would have seen it as probative (which it is, unless the odds of the killer actually being in the DB are insanely low) but not dispositive (which it isn’t, unless the odds of the killer actually being in the DB are insanely high).

Comment by Xrlq (62cad4) — 5/11/2008 @ 12:19 pm
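[Note: for readers who want to reproduce the binomial arithmetic that runs through this thread (about 73.5% chance of no match, roughly 22.6% of exactly one match, and about 4% of more than one), here is a short Python sketch built from the figures quoted in the comments; variable names are illustrative:

from math import comb

p = 1 / 1_100_000                             # random match probability
N = 338_000                                   # database size
p_none = (1 - p) ** N                         # ~0.735, no match at all
p_one = comb(N, 1) * p * (1 - p) ** (N - 1)   # ~0.226, exactly one match
print(p_none, p_one, 1 - p_none, 1 - p_none - p_one)
# ~0.735, ~0.226, ~0.265 (some match), ~0.039 (more than one match)

None of this settles the separate question debated above: the prior probability that the true donor was in the database.]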
{"url":"http://patterico.com/2008/05/08/the-la-timess-errors-in-its-piece-on-dna-and-cold-hits/","timestamp":"2014-04-17T09:35:43Z","content_type":null,"content_length":"142114","record_id":"<urn:uuid:d59ce529-5cd7-47cb-b184-ff064a7be256>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Similar triangles

September 11th 2011, 04:41 AM, #1 (Senior Member, joined Jul 2008):
If I have two right-angled triangles and two unknown side lengths, which I have not yet, as a part of the question, been asked to calculate or find, but have been asked to show that the two triangles are similar, would somebody please advise what the question means by SHOW.

September 11th 2011, 05:11 AM, #2 — Re: Similar triangles:
Well, how do you know when two triangles are similar?

September 11th 2011, 05:12 AM, #3 (Senior Member, joined Jul 2008) — Re: Similar triangles

September 11th 2011, 05:14 AM, #4 — Re: Similar triangles:
Absolutely not. There are certain properties about the two triangles which will make them similar. I'm sure if you have been given homework on similar triangles you would have gone over this in class, so check your class notes.

September 11th 2011, 05:14 AM, #5 (Super Member, joined Nov 2007, Trumbull CT) — Re: Similar triangles:
Hi David Green,
Show that an acute angle of the first triangle is equal to one in the second triangle.

September 11th 2011, 05:24 AM, #6 (Senior Member, joined Jul 2008) — Re: Similar triangles:
I have not been given homework on it. Reading the course book does give advice about similar triangles: it explains about the angles being equal, which I understand, and it talks about the sides being in proportion to each other, which I understand; then it goes on to show some algebra to prove the lengths of the sides are in proportion, which I also understand. What I didn't understand was the explanation of "show". Show by drawing it, show by mathematically working it out, show by explanation? Sometimes questions are not very well worded, which is why I asked for professional second opinions.

September 11th 2011, 05:25 AM, #7 (Senior Member, joined Jul 2008) — Re: Similar triangles
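A common way to “show” two right-angled triangles are similar is the AA criterion (the right angle plus one equal acute angle) or, equivalently, checking that corresponding sides are in a common proportion. Here is a small illustrative Python check with made-up side lengths (the numbers are hypothetical, just to make the idea concrete):

import math

# Hypothetical right-angled triangles, each given by its two legs.
a1, b1 = 3.0, 4.0      # first triangle
a2, b2 = 6.0, 8.0      # second triangle: each leg doubled
# AA criterion: the acute angles are equal exactly when the leg ratios match.
print(math.degrees(math.atan2(b1, a1)))   # ~53.13 degrees
print(math.degrees(math.atan2(b2, a2)))   # ~53.13 degrees, same acute angle
# Equivalently, all three pairs of corresponding sides share one ratio:
print(a2 / a1, b2 / b1, math.hypot(a2, b2) / math.hypot(a1, b1))   # 2.0 2.0 2.0

In a written answer, “show” would mean setting out one of these two arguments in words and symbols rather than running code, but the logic is the same.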
{"url":"http://mathhelpforum.com/algebra/187760-similar-triangles.html","timestamp":"2014-04-19T14:40:13Z","content_type":null,"content_length":"49243","record_id":"<urn:uuid:16079bd6-b1a7-4137-a379-bb5cbe05dd56>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
Cramer's Rule - Concept

Sometimes using matrix algebra or inverse matrices to find the solution to a system of linear equations can be tedious. Sometimes it is more convenient to use Cramer's Rule and determinants to solve a system of equations. Finding determinants becomes much more difficult in higher dimensions, so Cramer's Rule is better for smaller systems of linear equations.

One thing you can do with determinants is solve systems of linear equations, and the method is called Cramer's Rule. Let's start with the system 9x + 3y = 12, 10x − 4y = 50: two equations, two unknowns. Cramer's Rule says the solution will be

x = |12 3; 50 −4| / |9 3; 10 −4|

where |a b; c d| denotes the determinant of the 2×2 matrix with rows a, b and c, d. Now let me explain where these determinants come from. The determinant in the denominator is the determinant of the coefficient matrix, 9, 3, 10, −4, taken from the coefficients on the left side. In the numerator you have the same determinant, only you've replaced the x coefficients with the constants, 12 and 50. That's how you get x.

You get y very similarly:

y = |9 12; 10 50| / |9 3; 10 −4|

Again the denominator is the determinant of the coefficient matrix, and in the numerator you've taken the coefficient matrix and replaced the y column with 12 and 50. It's calculated just like x; just remember to replace the appropriate column with the constants, in this case 12 and 50.

Let's actually calculate these and see what the solution is. We'll do x first. Observe that you can still use the simplification rules for determinants whenever possible. In the denominator I can pull a 3 out of the top row and a 2 out of the bottom row, which gives me 3 times 2 times the determinant |3 1; 5 −2|. In the numerator I can also pull a 3 out of the top row and a 2 out of the bottom row, leaving 3 times 2 times |4 1; 25 −2|. That's nice, because the factors of 3 and 2 cancel:

x = |4 1; 25 −2| / |3 1; 5 −2| = (−8 − 25) / (−6 − 5) = −33 / −11 = 3.

Now the same thing for y. It's always a little easier if you factor things out first, because factors sometimes cancel. Pulling a 3 and a 2 out of both the numerator and the denominator again:

y = |3 4; 5 25| / |3 1; 5 −2| = (75 − 20) / (−6 − 5) = 55 / −11 = −5.

So x = 3 and y = −5. This is Cramer's Rule: using determinants to solve a system of linear equations.

Tags: systems of linear equations, Cramer's Rule, determinant, coefficient matrix
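For anyone who wants to verify the arithmetic, here is a tiny Python sketch of the same computation; the 2×2 determinant is just ad − bc:

def det2(a, b, c, d):
    # Determinant of the 2x2 matrix [[a, b], [c, d]].
    return a * d - b * c

# System: 9x + 3y = 12, 10x - 4y = 50
D  = det2(9, 3, 10, -4)     # coefficient determinant: -66
Dx = det2(12, 3, 50, -4)    # x-column replaced by the constants: -198
Dy = det2(9, 12, 10, 50)    # y-column replaced by the constants: 330
print(Dx / D, Dy / D)       # 3.0 -5.0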
{"url":"https://www.brightstorm.com/math/precalculus/systems-of-linear-equations-and-matrices/cramers-rule/","timestamp":"2014-04-17T15:55:11Z","content_type":null,"content_length":"64828","record_id":"<urn:uuid:9eabf22a-9fc8-4c8d-ab83-41faab4e6aa9>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
Dynamics of the quorum sensing switch: stochastic and non-stationary effects

A wide range of bacterial species are known to communicate through the so-called quorum sensing (QS) mechanism, by means of which they produce a small molecule that can freely diffuse in the environment and in the cells. Upon reaching a threshold concentration, the signalling molecule activates the QS-controlled genes that promote phenotypic changes. This mechanism, owing to its simplicity, has become a model system for studying the emergence of a global response in prokaryotic cells. Yet, how cells precisely measure the signal concentration and act coordinately, despite the presence of fluctuations that unavoidably affect cell regulation and signalling, remains unclear. We propose a model for the QS signalling mechanism in Vibrio fischeri based on the synthetic strains lux01 and lux02. Our approach takes into account the key regulatory interactions between LuxR and LuxI, the autoinducer transport, the cellular growth, and the division dynamics. By using both deterministic and stochastic models, we analyze the response and dynamics at the single-cell level and compare them to the global response at the population level. Our results show how fluctuations interfere with the synchronization of cell activation and lead to a bimodal phenotypic distribution. In this context, we introduce the concept of precision in order to characterize the reliability of the QS communication process in the colony. We show that increasing the noise in the expression of LuxR helps cells to become activated at lower autoinducer concentrations but, at the same time, slows down the global response. The precision of the QS switch under non-stationary conditions decreases with noise, while at steady state it is independent of the noise value. Our in silico experiments show that the response of the LuxR/LuxI system depends on the interplay between non-stationary and stochastic effects and that the burst size of the transcription/translation noise at the level of LuxR controls the phenotypic variability of the population. These results, together with recent experimental evidence on LuxR regulation in wild-type species, suggest that bacteria have evolved mechanisms to regulate the intensity of those fluctuations.

Quorum sensing; Noise; Stochastic modeling; Vibrio fischeri; Autoinducer; Synchronization

Bacteria, long thought to have a solitary existence, were found to communicate with one another by sending and receiving chemical messages [1]. Their communication mechanism results in the ability to synchronize the activity of the colony as a whole. The latter leads to a coordinated behaviour that in some cases resembles that of multicellular organisms, e.g. the so-called community effect during development [2]. Thus, by means of the quorum sensing (QS) mechanism, cells produce, export, and import signalling molecules (autoinducer). As the colony grows, more cells produce and export autoinducer, leading to an increasing concentration of the signalling molecule in the environment and in the cells. Upon reaching a concentration threshold, the autoinducer activates the expression of QS-controlled genes, therefore coordinating the cells in a density-dependent manner. Importantly, QS controls a number of relevant phenotypic changes in bacteria, as for example the virulence in S. aureus [3]. In addition, it has become a model system for studying the emergence of coordinated behaviour in communicating cells.
All in all, QS has opened a research field with promising technological applications [4], as for example the environmentally controlled invasion of cancer cells [5]. The QS systems in gram-negative bacteria share a core network architecture. In this regard, a characteristic model system is the LuxR/LuxI regulatory network in Vibrio fischeri [6]. The LuxR protein is an autoinducer-dependent activator of the lux operon that drives the autocatalytic expression of luxR and of the autoinducer synthase, luxI, together with that of the genes responsible for the production of bioluminescence. The up-regulation of luxI increases the production of autoinducer molecules, which in turn activates further gene expression. The resulting positive feedback loop leads to a bistable, switch-like behaviour depending on the concentration of the autoinducer, as shown by in silico [7-9] and in vivo experiments [10,11]. Such switch-like behaviour has been observed at the population level by measuring the average gene expression level. However, how individual cells behave remains puzzling. In fact, as observed in Vibrio harveyi [12], Vibrio fischeri [13], Pseudomonas aeruginosa [14], and luxI/luxR-GFP strains of E. coli [15], the cellular response to QS signals seems to be highly heterogeneous at the level of the distribution of both the population phenotype and the response times of individual cells.

A number of studies have shown that noise plays an important role in bistable systems [16-18]. Therefore, the aforementioned heterogeneity may be caused by the random fluctuations that unavoidably affect cell regulation and signalling. This poses the intriguing question of how cells achieve a coordinated response in the presence of noise. Indeed, the QS mechanism may produce a robust and synchronized behaviour at the level of the population, both experimentally [19] and theoretically [20]. However, how this behaviour at the collective level arises from the stochastic dynamics of individual cells is still an open question. Ultimately, in the framework of QS, a collective response requires precise information exchange within the colony. Consequently, how can a bacterial population estimate its number of constituents precisely if such information is fuzzy at the single-cell level? Herein, we shed light on this problem and investigate how noise affects the QS transition both at the level of individual cells and at the level of the cell population.

In the context of QS modelling, most research has focused on the understanding of the intracellular circuit [7-11,21-24], i.e. single-cell studies, while only a few works have considered an ensemble of communicating cells [25-28]. Yet, so far no study has taken into account the coupling of the signalling mechanism at the single-cell and collective levels by stochastic means together with realistic dynamics of the proliferation process. In this work, we model the QS mechanism by using both deterministic and stochastic approaches, taking into account the key regulatory interactions between LuxR and LuxI, the autoinducer transport, the cellular growth, and the division dynamics. Our results indicate that the cell response is highly heterogeneous and that noise in the gene expression of luxR is the main factor that determines this variability.
Moreover, we show that the transition of the QS switch near the critical concentration of autoinducer is very slow compared to other characteristic temporal scales of the process and that, as a consequence, non-stationary effects are crucial for setting a precise switch. As we show further below, the dilution due to cell growth and division is a key element required for an in-depth understanding of the QS response dynamics. In addition, we demonstrate that noise, depending on the cell density, can either prevent or promote phenotypic changes, indicating a beneficial role played by stochasticity. Altogether, we find that the precision of the QS switch for determining the number of cells in the colony is highly dynamic and context-dependent, which in turn favors adaptability.

Modelling of the LuxI/LuxR gene regulatory network

The regulatory interactions that control the wild-type lux operon are more complex than first thought [29]. Those include both positive and negative regulation of the luxR gene depending on the concentration of the autoinducer [30]. Simplified synthetic constructs, such as lux01 and lux02 [10], retain the minimal luxI/luxR regulatory motif and lack the structural genes responsible for light emission that may also play a regulatory role, e.g. luxD [31]. Still, these constructs reproduce the main features of the wild-type operon, as revealed by GFP tags reporting the promoter activity [10]. In addition, the lux01 and lux02 constructs allow one to perform controlled experiments that have shed light on the wild-type dynamics and its regulatory interactions. Herein, we follow this approach and focus on the lux01 and lux02 constructs as well-characterized examples of the behaviour of the wild-type operon.

The lux01 operon lacks the luxI gene and only gfp is transcribed in that direction. On the other hand, the lux02 operon carries a luxI::gfp fusion. Accordingly, lux01 cells cannot produce their own autoinducer, and the induction in that case is driven by adding exogenous autoinducer to the medium. Figure 1 shows schematically the regulatory interactions we consider in our model. The autoinducer molecules (A) are produced due to the action of their synthetase, LuxI, and bind to the cytoplasmic protein LuxR (R), creating a complex (C[2]). The latter binds to the promoter region, activating both the transcription of luxI::gfp (only gfp in the case of lux01) and luxR. Signalling molecules can diffuse passively in and out of the cell and contribute to increase the external concentration of the autoinducer (A[ext]), which can be eventually modified by an external influx of molecules (A^∗) and a dilution protocol (see below). In our model we consider that signalling molecules degrade at the same rate whether they are cytoplasmic or not. Finally, we consider a DNA duplication process. Such a modelling scheme can be formally written as a set of chemical reactions (the reaction set (1) referred to below).

Figure 1. Scheme of the LuxI/LuxR regulatory network. The LuxR (R) protein activates the operon upon binding to autoinducer molecules (A). The lux01 operon lacks the luxI gene and therefore cells cannot produce their own autoinducer, and exogenous signalling molecules are needed to activate the expression of luxR and GFP [10]. On the other hand, the lux02 operon carries a luxI::gfp fusion and allows for the production of autoinducer and self-induction (see text for details).
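For concreteness, the following fragment illustrates how a scheme of this kind can be encoded for simulation. It is a minimal sketch in Python assuming mass-action kinetics; the species and rate-constant names are illustrative placeholders and do not reproduce the paper's actual equation set (1):

# Sketch: a plausible mass-action encoding of the interactions in Figure 1.
# Each entry is (reactants, products, rate constant); names are assumptions.
reactions = [
    (("R", "A"),     ("C",),           "k_plus"),        # LuxR + A -> LuxR.A
    (("C",),         ("R", "A"),       "k_minus"),
    (("C", "C"),     ("C2",),          "k_dim"),         # dimerization
    (("C2",),        ("C", "C"),       "k_undim"),
    (("DNA", "C2"),  ("DNA_C2",),      "k_on"),          # promoter binding
    (("DNA_C2",),    ("DNA", "C2"),    "k_off"),
    (("DNA",),       ("DNA", "R"),     "alpha_R*k_R"),   # basal luxR expression
    (("DNA_C2",),    ("DNA_C2", "R"),  "k_R"),           # activated luxR
    (("DNA",),       ("DNA", "I"),     "alpha_I*k_I"),   # basal luxI::gfp
    (("DNA_C2",),    ("DNA_C2", "I"),  "k_I"),           # activated luxI::gfp
    (("I",),         ("I", "A"),       "k_A"),           # A synthesis (0 in lux01)
    (("A",),         ("A_ext",),       "D_out"),         # passive diffusion out
    (("A_ext",),     ("A",),           "D_in"),          # passive diffusion in
    (("A",),         (),               "d_A"),           # degradation, same rate
    (("A_ext",),     (),               "d_A"),           #  inside and outside
    (("R",),         (),               "d_R"),
    (("I",),         (),               "d_I"),
]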
As revealed by the set of reactions (1), we assume that the regulatory complex (luxR·A)[2] activates the transcription of luxI and luxR in opposite directions upon binding to the DNA. These reactions account for the main regulatory interactions of both the lux01 and lux02 constructs. Since lux01 lacks the luxI gene, the autoinducer, A, cannot be synthesized, i.e. k[A]=0, and an exogenous supply of the signalling molecule is required to induce the system. The expression rates of luxI and luxR depend on the initiation rate of transcription, the speed of elongation, the length of the transcript, and the rate of translation and post-modification into functional proteins. We take into account the differences due to these intermediate processes in an effective manner by using different transcription/translation rates for the luxR and luxI::gfp genes. Note that we assume that there are basal transcription rates, α[R]k[R] and α[I]k[I], even when the regulatory complex (luxR·A)[2] is not bound to the promoter region of the DNA. Still, since α[R],α[I]≪1 (see parameter values below), the maximum transcription rates take place when the activator complex is bound.

Deterministic and stochastic approaches: cell growth and division

The equations (1) lead to a Master equation description that can be sampled exactly by means of the Gillespie algorithm [32]. This approach is suitable for the characterization of the system at the single-cell level. Complementary to this, if the number of molecules of the species is large enough such that the fluctuations can be neglected, a set of ordinary differential equations (ODEs) can be derived from Eqs. (1) (see Additional file 1: Text S1). The ODE formalism is then appropriate to account for the behaviour at the colony level, since noise averages out in that case. Herein we make use of both stochastic and deterministic descriptions, as follows.

As for the deterministic model, we consider that all cells share their cytoplasm in a single volume V[c,tot] (Figure 2). Chemical species X inside the cell are described by their concentration, c[X], in V[c,tot]. Therefore, this model can only be used to study the dynamics of species averaged over all the cells in the population. From an experimental point of view, the population average can be measured by determining the average bulk fluorescence of the GFP reporter of the cell culture by means of a fluorometer, or by averaging the fluorescence data obtained with a flow cytometer.

Additional file 2. Video S1. Movie of the stochastic simulation. Movie of the stochastic simulation for the lux02 operon, 10 h of induction at a given exogenous autoinducer concentration, burst size b[R]=b[I]=4. Cells are modelled as individual compartments containing a copy of the LuxR/LuxI regulatory network. The Gillespie algorithm (see text for details) is used to integrate the stochastic dynamics of the whole system of cells. Cell growth and division is explicitly taken into account, as well as a certain degree of stochasticity in the cell cycle duration. Cell movement is purely aesthetic, since we do not include any spatial effects in our model and consider a well-mixed environment. The number of cells (N=100) is maintained constant by removing one cell at random each time a cell divides. Format: AVI Size: 19.4MB

Figure 2. Scheme of the deterministic and stochastic modelling approaches.
A: In the deterministic model, the population of cells is described by a unique volume with average and continuous concentrations of all species, including the DNA carrying the QS network (small circles). Cellular growth is also taken into account in this approach. B: In the stochastic model, cells are modelled as individual compartments that can grow and divide, and all molecular species are represented as discrete entities. In both cases, A and B, we assume that all species are well-stirred inside the cells and in the medium. In order to maintain a constant cell density, as in the experiments we aim to model, we implement a dilution protocol. In the deterministic model the dilution continuously removes cytoplasmic material in order to compensate for the cell growth. In the stochastic model individual cells are removed every time a new cell is born (see Additional file 2: Video S1).

We notice that our in silico experiments span up to 100 hours of cell culture growth in some cases (simulated experimental time, not computational time). Thus, regardless of the description, and in addition to the dynamics of the regulatory network, we also need to take into account the effects of cell growth. If cells are maintained in the exponential phase with doubling time τ, then the dynamics of the total cellular volume is V[c,tot](t)=V[0,tot]·2^(t/τ), where V[0,tot]=N·V[0], N being the number of cells in the colony and V[0] the volume of a single cell at the beginning of the cell cycle. As a consequence, the cellular growth introduces dilution terms, −(ln2/τ)·c[X], in the r.h.s. of the ODEs of all species, with the exception of the autoinducer in the medium, A[ext]. On the other hand, cell division events lead to the duplication of the genetic material. The latter is taken into account by adding a compensating duplication term, +(ln2/τ)·c[DNA], to the ODE that describes the concentration of DNA. This term compensates exactly for the cell-growth dilution, such that the total concentration of DNA is kept constant.

In our simulations, as in the experiments we aim to reproduce, the cell density is kept constant. This can be achieved by means of an external dilution protocol (see below) that compensates for cell proliferation. We then keep the volume V[c,tot] constant and define the external volume, V[ext], such that the total volume of the cell culture reads V[tot]=V[ext]+V[c,tot]. Accordingly, the parameter r (see equations (1)) reads r=V[c,tot]/V[ext]. We assume that molecules are homogeneously distributed inside both the cytoplasm and the external volume (i.e. spatial effects are disregarded). Finally, the resulting ODEs are numerically integrated.

In order to study the role of noise in a population of cells communicating by QS, we also build a stochastic model of a population of bacteria. In this case, each bacterium is described as a single cell carrying a copy of the regulatory network. The ensemble of all the chemical reactions in all cells, including the diffusion reaction, is treated as one global system. We apply the Gillespie algorithm [32] to compute the time of the next reaction, choose the reaction channel from the list of all possible reactions, and update the number of molecules according to the reaction stoichiometry. We model the system of cells as a global stochastic system in order to simulate as exactly as possible the stochastic dynamics of all chemical species, in particular that of the autoinducer molecules.
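To make the global update loop concrete, here is a minimal sketch in Python of the direct Gillespie method over the pooled reaction list of all cells. The functions propensity and apply_stoichiometry stand in for the model-specific parts, and all names are illustrative; this is a sketch of the procedure described above, not the authors' code:

import math, random

def gillespie_step(state, n_channels, propensity, apply_stoichiometry):
    # One step of the direct method over all channels of all cells
    # (regulatory reactions plus the diffusion channels that couple cells).
    a = [propensity(mu, state) for mu in range(n_channels)]
    a0 = sum(a)
    if a0 == 0.0:
        return None                        # nothing can fire
    dt = -math.log(random.random()) / a0   # time until the next reaction
    # Pick channel mu with probability a[mu] / a0.
    threshold, acc = random.random() * a0, 0.0
    for mu, a_mu in enumerate(a):
        acc += a_mu
        if acc >= threshold:
            break
    apply_stoichiometry(mu, state)         # update molecule numbers in place
    return dt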
The noise in the signalling molecule originates from different sources: randomness in its synthesis by LuxI, fluctuations in the number of LuxI molecules, and randomness in the diffusion reaction of the autoinducer. The latter is particularly important since it leads to correlations between cells, as follows. An autoinducer molecule can diffuse out of the cytoplasm of one cell into the medium, thereby increasing the number of molecules in the external volume by one; this increase in the level of A[ext] changes the probabilities of an autoinducer molecule diffusing into any other cell. Thus, all the cells are coupled through the diffusion reaction. We note that while a possible optimization of the algorithm relies on parallelizing the code such that each cell evolves independently [25], this approximation is prone to introduce errors in the dynamics of the signalling molecule, because the aforementioned correlations are neglected.

As mentioned above, cell growth introduces a dilution of the molecules in a cell. We implement cell growth in our stochastic model by allowing the volume of cell i to change in time as V[c,i](t)=V[0]·2^(t/τ[i]), where V[0] is the volume of a cell at the beginning of the cell cycle (the same for all cells), τ[i] is the duration of the cell cycle of cell i, and t is measured from the preceding division event. When t=τ[i], cell i has doubled its volume and a new division takes place. At this time the internal clocks and volumes of the daughter cells are reset to zero and V[0], respectively. Moreover, when a cell divides, proteins, mRNAs, and signalling molecules are binomially distributed [33] between the daughter cells, and one copy of the DNA is given to each cell. We note that regulatory complexes bound to the DNA are detached prior to the distribution between daughter cells. As in the case of the deterministic model, we assume that the cell density is maintained constant during experiments due to a compensating external efflux that washes away cells from the culture (see below). In relation to the effect of the volume of individual cells on the diffusion rate of the autoinducer, we note that in this case the corresponding propensities also depend on the instantaneous cell volume (see below).

The duration of the cell cycle, τ[i], is different for each cell and is set independently after each division according to the stochastic rule [34] τ[i]=λτ+(1−λ)τ̃, where τ and τ̃ denote, respectively, the deterministic and stochastic components of the cell cycle duration, and λ∈[0,1] is a parameter that weights their relative importance. The stochastic component accounts for the period of time between events driven by a Poissonian process and satisfies an exponential distribution with mean τ. In this way, we allow variability from cell to cell in regard to the duration of the cell cycle, yet set a minimum cell cycle duration, λτ. According to these definitions, the average duration and standard deviation of the cell cycle are τ and (1−λ)τ, respectively.

Finally, we notice that in principle the Gillespie algorithm needs to be adapted in order to take into account the time-dependent cell volume. The propensity of a second-order reaction in cell i at time t scales as p[i](t)=p[0]·V[0]/V[c,i](t), where p[0] stands for the propensity of the reaction at division time, when V[c,i](0)=V[0]. The propensity p[0] is derived from the corresponding reaction rate, k, by dividing the latter by the initial cell volume, p[0]=k/V[0]. In addition to the change in the propensities of the reaction channels, the algorithm would also need to be adapted to compute the time till the next reaction [35].
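A sketch in Python of the cell-cycle bookkeeping just described: the stochastic cycle duration, the exponential volume growth, and the binomial partitioning of molecules at division. Function and variable names are illustrative assumptions, not the authors' implementation:

import random

def draw_cycle_duration(tau, lam):
    # tau_i = lam*tau + (1 - lam)*tau_s with tau_s exponential of mean tau,
    # giving mean tau, standard deviation (1 - lam)*tau, and minimum lam*tau.
    return lam * tau + (1.0 - lam) * random.expovariate(1.0 / tau)

def cell_volume(V0, t_since_division, tau_i):
    # Exponential growth that doubles the volume over one cell cycle.
    return V0 * 2.0 ** (t_since_division / tau_i)

def divide(counts):
    # Binomial (fair-coin) partitioning of proteins, mRNAs and autoinducer;
    # the DNA copies are handled separately (one per daughter).
    d1 = {s: sum(random.random() < 0.5 for _ in range(n))
          for s, n in counts.items()}
    d2 = {s: counts[s] - d1[s] for s in counts}
    return d1, d2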
Finally, we notice that in principle the Gillespie algorithm needs to be adapted in order to take into account the time-dependent cell volume. The propensity of a second-order reaction at cell i at time t scales as p[i](t)=p[0]·V[0]/V[c,i](t), where p[0] stands for the propensity of the reaction at division time, when V[c,i](0)=V[0]. The propensities p[0] are derived from the corresponding reaction rates, k, by dividing the latter by the initial cell volume, p[0]=k/V[0]. In addition to the change in the propensities of the reaction channels, the algorithm would also need to be adapted to compute the time till the next reaction [35]. However, in our case, since all reaction rates are faster than the rate of variation of the cell volume, ∼1/τ (see parameter values below), the volume increase is negligible during the time interval until the next reaction takes place. Consequently, we can adiabatically eliminate the volume growth dynamics and safely assume that the volume-dependent propensities remain constant until the next reaction occurs. Summarizing, at a given time t we compute, as described above, the time-dependent propensities based on the volume of the cell at that time and, according to those, we determine the time at which the next reaction takes place, t+Δt, following the standard Gillespie algorithm.

Gene expression noise: burst size

During translation, mRNA molecules are translated into proteins following a bursting dynamics [36-38]. The so-called burst size, b[X], is defined as the ratio between the protein X production rate and the mRNA X degradation rate. It has been shown that b[X] is directly related to the intensity of gene expression noise [36,39]. Thus, for the same average protein concentration, the larger b[X] is, the more fluctuating the expression dynamics displayed by protein X. In our stochastic simulations we use the burst size b[X] as a parameter to tune the noise intensity at the level of luxI and luxR and study its effects. Unless explicitly indicated otherwise, the burst size in the stochastic simulations is b[R]=b[I]=20.
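Since the burst size is a ratio of rates, tuning it while keeping the mean protein level fixed amounts to co-varying the transcription and translation rates. The helper below is a sketch of one such parameterization under the standard two-stage expression model (mean protein ≈ b·k_tx/δ_p); the function name and numbers are ours, not part of the original model:

```python
def rates_for_burst_size(b, mean_protein, delta_m, delta_p):
    """Choose transcription/translation rates realizing burst size b
    at a fixed mean protein level.

    burst size:   b   = k_tl / delta_m
    mean protein: <p> = (k_tx / delta_p) * b
    """
    k_tl = b * delta_m                  # translation rate per mRNA
    k_tx = mean_protein * delta_p / b   # transcription rate
    return k_tx, k_tl

# e.g. the same mean LuxR level with b = 20 (noisy) vs b = 0.01 (quiet)
for b in (20.0, 0.01):
    print(b, rates_for_burst_size(b, mean_protein=500.0, delta_m=0.2, delta_p=0.02))
```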
External dilution protocol

In controlled experimental setups it is advantageous to keep the cell density constant. This is carried out by means of an external dilution protocol that compensates for cell growth. Experimentally, this is usually achieved by periodic dilutions of the cell culture [10] or by a continuous flow of liquid medium in a chemostat or in a microfluidic device [40]. This procedure allows one to measure the stationary concentration of the signalling molecule at a given cell density and/or to estimate the threshold of the QS collective response of a cell culture. Moreover, the external dilution is also important in order to maintain cells in the exponential growth phase and prevent depletion of nutrients in the medium. Additionally, the levels of the autoinducer can be controlled by adding/removing exogenous signalling molecules in/from the culture buffer. We implement those in our simulations as follows.

In the deterministic model, as shown in Figure 2, we assume a unique cell with volume V[c,tot]. Cell density is controlled by a continuous efflux that removes cytoplasm and culture medium at a rate that compensates exactly for the cell growth, such that the volume V[c,tot] remains constant. Concurrently, a continuous influx of equal and opposite rate brings fresh medium to the cell culture. In our in silico stochastic experiments, the efflux is reproduced by removing molecules, A[ext], from the medium and washing away cells by "deleting" a cell picked at random in the population each time a new cell is born.

In our simulations, as in the experiments we aim to reproduce, the exogenous autoinducer concentration is the control parameter [10]. This means that the levels of autoinducer are controlled by varying the concentration of exogenous autoinducer in the dilution buffer (influx). Denoting the exogenous (control) autoinducer concentration by A[ext,0], the influx of exogenous autoinducer molecules, together with the efflux of culture medium, can be represented by the reaction pair ∅ → A[ext] (influx, at rate γ·A[ext,0]) and A[ext] → ∅ (efflux, at rate γ), where γ=ln(2)/τ. That is, an efflux removes autoinducer molecules from the external volume at a rate γ and an influx introduces signalling molecules in the external volume at a rate γ·A[ext,0]. In the deterministic description, this leads to an additional term, γ(A[ext,0]−A[ext]), at the r.h.s. of the ODE for the concentration of A[ext]. We notice that in our simulations, as in experiments, V[tot]/V[ext]≃1. In the absence of synthesis (e.g. lux01) and taking into account that the degradation is slower than the diffusion and the influx rate, it is easy to see that the concentration of autoinducer, both inside and outside the cell, tends to A[ext,0]: the desired control value of the autoinducer concentration (see Additional file 3: Figure S1).

Additional file 3. Figure S1. Intra- and extracellular autoinducer as a function of exogenous autoinducer concentration. Response curves to autoinducer induction for lux01 (A, C and E) and lux02 (B, D and F) operons. Total autoinducer concentration in the external volume and in the cells (A and B), intracellular concentration c[A] (C and D), and extracellular concentration (E and F), as a function of the exogenous autoinducer concentration, A[ext,0], in the deterministic model. All graphs represent the steady-state response for increasing (blue curve) and decreasing (red curve) autoinducer concentrations. The exogenous autoinducer concentration controls the autoinducer concentration in the medium by means of an influx and an efflux (see main text). Upon activation of the operon, LuxR is produced at high levels, thus sequestering autoinducer molecules inside the cells. The bound form of autoinducer cannot diffuse out of the cell and is therefore not subjected to the influx and efflux. This explains why the total concentration of autoinducer in the system is slightly larger than A[ext,0] when the operon is activated. For the same reason, the free form of autoinducer, both in the cell and in the medium, is slightly smaller.

The parameters used in our model are listed in Table 1. When possible, parameter values are fixed or estimated by using experimental measurements found in the literature. The rest of the parameters are fitted to the experimental data of [10] using the deterministic model to reproduce the main characteristics of the response curves of the lux01 operon: a difference of two orders of magnitude in the level of expression of GFP between the low and the high states, a hysteresis effect over a range of autoinducer concentrations, and a time to reach steady-state at full induction shorter than 6 hours. Regarding the cell density, based on an estimate of the CFU/mL for an average OD of 0.5 for E. coli cells, we take a typical value of c[N]=5·10^8 cells/mL. Moreover, in order to keep the computational time within reasonable limits, we choose a system size of N=100 cells. After fixing the number of cells and the cell density, the total and external volumes are then respectively derived from the relations c[N]=N/V[tot] and V[ext]=V[tot]−N·V[0], where V[tot]=2·10^−4 μL. Finally, for the case of the lux02 operon there is one additional parameter that needs to be calibrated: the synthesis rate of the autoinducer, k[A]. The latter is adjusted such that the lower bound of the hysteresis region extends to zero concentration of exogenous autoinducer, as experimentally reported.
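As a sanity check of the protocol, the following sketch integrates the deterministic influx/efflux term alone (no synthesis, degradation or diffusion, as in the lux01 limit discussed above); the numerical values are illustrative:

```python
import numpy as np

GAMMA = np.log(2) / 30.0  # dilution rate gamma = ln(2)/tau (illustrative tau)
A0 = 50.0                 # exogenous (control) autoinducer concentration [nM]

def step(a_ext, dt):
    # dA_ext/dt = gamma*(A0 - A_ext): influx at gamma*A0, efflux at gamma*A_ext
    return a_ext + dt * GAMMA * (A0 - a_ext)

a = 0.0
for _ in range(20000):
    a = step(a, dt=0.01)
print(a)  # relaxes to the control value A0 in the absence of synthesis
```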
Table 1. Parameters used in the deterministic and stochastic simulations.

First passage time analysis

The mean first passage time at a given autoinducer concentration quantifies the average time that a cell takes to get activated or deactivated. For computing the first passage time in transitions from the low (high) to the high (low) state, we take a single cell at the low (high) state and follow its dynamics until the GFP expression level reaches the high (low) state. We point out that the maximum GFP concentration refers to that of the deterministic simulations. In order to get enough statistics, we repeat this procedure, departing from the same initial condition, 10^3 times for each concentration of autoinducer.

The deterministic model reproduces the experimental observations at the population level

The chemical kinetics formalism leads to a set of ODEs that describes the population average dynamics in terms of the concentration of the different species considered in our model (see Additional file 1: Text S1). As in some experiments [10], we assume that the cell culture grows in an environment where the concentration of the external autoinducer in the medium, A[ext,0], is kept fixed and under well-stirred conditions. In addition, we implement a dilution protocol that compensates for cell growth and maintains the cell density constant (see Methods). We notice that in some experimental setups, e.g. [10], a periodic dilution protocol is applied for keeping the cell density constant; in our model, we keep the cell density constant by means of a continuous influx and efflux of culture medium, as in a chemostat or microfluidic device.

We use the deterministic simulations as a benchmark of the regulatory interactions included in our model and also to fit/estimate some parameters such that the experimental data are reproduced (see [10]). Thus, by integrating numerically the rate equations derived from the population-averaged model, we compute the steady-state concentration (induction time 100 hours) of GFP (lux01) and LuxI::GFP (lux02) as a function of A[ext,0]. The steady-state induction curves for increasing and decreasing autoinducer concentration of the lux01 and lux02 constructs are shown in Figure 3. We are able to reproduce the behaviour of the network at the steady-state, in particular a region of bistability in a range of autoinducer concentrations for both the lux01 and lux02 constructs. As shown by Williams et al., the luxR regulation of the lux01 operon alone (positive feedback loop) is enough to yield a bistable response. Moreover, expression of LuxI in the lux02 operon restores the autoinduction loop and extends the lower bound of the hysteresis range to zero concentration of exogenous autoinducer as seen experimentally, indicating that, once the operon is fully activated, cells produce their own autoinducer, which increases the stability of the high state.
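The first-passage-time procedure can be sketched as follows; toy_cell is a hypothetical stand-in for the actual stochastic single-cell simulator, included only so the snippet runs:

```python
import numpy as np

rng = np.random.default_rng(1)
GFP_MAX = 1.0  # maximum GFP level of the deterministic simulations (normalized)

def first_passage_time(simulate_cell, a_ext0, t_max=100.0):
    """Follow one cell from the low state until GFP crosses half-maximum."""
    for t, gfp in simulate_cell(a_ext0):  # yields (time, GFP) samples
        if gfp >= 0.5 * GFP_MAX:
            return t
        if t > t_max:
            return np.inf                 # no transition observed

def mean_fpt(simulate_cell, a_ext0, n_cells=1000):
    times = np.array([first_passage_time(simulate_cell, a_ext0)
                      for _ in range(n_cells)])
    return times[np.isfinite(times)].mean()

def toy_cell(a_ext0):
    """Toy stand-in for the stochastic single-cell simulator."""
    t, gfp = 0.0, 0.0
    while True:
        t += rng.exponential(0.1)
        gfp += rng.normal(0.001 * a_ext0, 0.01)  # drift grows with induction
        yield t, max(gfp, 0.0)

print(mean_fpt(toy_cell, a_ext0=50.0, n_cells=100))
```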
Figure 3. Response curves to autoinducer induction in the population-average model. lux01 (A) and lux02 (B) operons. The normalized GFP concentration is plotted as a function of the exogenous autoinducer concentration, A[ext,0]: steady-state response for increasing (arrow-free upper blue curve) and decreasing (arrow-free red curve) autoinducer concentration, response under 10 h induction time for increasing (blue curve with arrow) autoinducer concentration, transient response after 2 hours of induction (lower blue curve) from initially non-induced cells, decreasing-concentration trajectories (green curves) for cells weakly induced (2 hours) at several concentrations (among them 75 nM and 50 nM), and decreasing-concentration trajectories (red curve with arrow) for cells fully induced (10 hours) at a high autoinducer concentration. The decreasing-concentration trajectories reduce the value of A[ext,0] hourly by 25% (similar to the experiments in [10]). The gray-shaded region between the increasing and decreasing steady-state curves reveals bistability for both the lux01 and lux02 operons.

Further simulations to check whether the dynamics of our model is compatible with the experimental data refer to the behaviour of the system under non-stationary induction conditions and to the serial dilution protocol of the external medium [10]. As for the first, when cells are induced for 10 h, we observe that the bistability region increases (see Figure 3). As for the second, cells are partially induced at a fixed autoinducer concentration for 2 hours and afterwards the external medium is changed hourly to decrease the concentration of the autoinducer. In this case, the transient response of the cells (Figure 3, green curves) also reproduces the experimental observations. That is, from the point of view of the population average, the deterministic model is not only capable of reproducing the steady-state of the network but also its dynamics. Moreover, in agreement with experiments (see Figure S6 in [10]) our simulations reveal that the temporal scale for reaching a steady-state is much larger than the cell cycle duration. In order to clarify how noise and the induction time modify the timing of the transition at the single-cell level, we then perform stochastic simulations.

The stochastic simulations reveal the interplay between non-stationary effects and noise

Cells are subjected to intrinsic noise at the level of the mRNAs, the regulatory proteins, i.e. LuxR and LuxI, and the signalling molecules. In order to analyze the behaviour of individual cells and reveal how noise affects the QS switch, we perform stochastic simulations of a population of growing and dividing cells as described in the Methods section (see Additional file 2: Video S1). The transition of an individual cell from the low to the high state, and the other way around, is intrinsically random and depends, among others, on the levels of autoinducer. Thus, inside a population some cells will jump while others remain in their current state, leading to a bimodal phenotypic distribution. We compute the proportion of cells that are below and above a threshold of GFP equal to the half-maximum GFP concentration. We consider the distribution of cells to be bimodal when the proportion of cells in either the low or the high state is below 90%, and according to this we define the range of autoinducer concentration for which there is bimodality. For low concentrations of autoinducer the collective response of the cell population is inactive, while for high concentrations most of the cells become activated, leading to a global response of the colony.
On the other hand, within the bimodality range, the response is distributed between two subpopulations, thus failing to achieve a global coordination in the colony. In order to characterize this behaviour, we introduce the concept of precision of the QS switch as the inverse of the concentration range for which the distribution of cell responses (phenotypes), during an induction experiment, is bimodal. That is, the larger the bimodal range, the less precise the switch is in generating a global response in the colony. In this regard, we point out that the precision of the switch in a noise-free situation is infinite, since all cells achieve global coordination.

Figure 4 shows, by means of a color density plot, the probability of a cell to have a particular GFP expression level after either 10 or 100 hours of induction as a function of A[ext,0]. In order to gather enough statistics, we average our results over 10 different realizations (i.e. experiments). For a large range of autoinducer concentrations, both for the lux01 and for the lux02 operon, the distribution of GFP expression after 10 h of induction is bimodal. As shown, some cells of the colony are induced before the critical concentration of the deterministic model at the steady state (black line). Still, the concentration for which more than 90% of the cells are induced requires up to four times more autoinducer than under deterministic conditions. Thus, on the one hand noise can help cells to get induced at lower autoinducer concentrations but, on the other hand, it amplifies the non-stationary effects for achieving global coordination. In order to clarify this interplay between non-stationary and stochastic effects, we perform the same simulations with a larger induction time (100 h). As expected, the precision of the switch increases (10-fold change) and cells achieve global coordination at (lux01) or before (lux02) the critical deterministic concentration. Note that in all cases noise induces a significant variability in terms of the GFP expression levels in the high state compared to that of the low state (see also Figure 5). The variability introduced in the colony response by the fluctuations with respect to the deterministic approach can also be observed in experiments under weak inducing conditions where the autoinducer concentration is periodically decreased (see Additional file 4: Figure S2).
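The bimodality criterion and the precision defined above translate directly into the following sketch (the thresholds follow the text; helper names are ours):

```python
import numpy as np

GFP_MAX = 1.0

def bimodal(gfp_levels, cutoff=0.9):
    """Bimodality criterion described above: the population is bimodal when
    neither subpopulation (below/above half-maximum GFP) holds >= 90% of cells."""
    high = np.mean(np.asarray(gfp_levels) >= 0.5 * GFP_MAX)
    return max(high, 1.0 - high) < cutoff

def precision(concentrations, populations):
    """Inverse of the width of the concentration range with a bimodal response."""
    cs = [c for c, pop in zip(concentrations, populations) if bimodal(pop)]
    width = (max(cs) - min(cs)) if cs else 0.0
    return np.inf if width == 0.0 else 1.0 / width

pop = [0.02] * 60 + [0.95] * 40   # 40% of cells in the high state
print(bimodal(pop))               # True
```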
Additional file 4. Figure S2. Cell response distribution during decreasing-concentration trajectories. Cell response distribution for decreasing-concentration trajectories for lux01 (left) and lux02 (right) strains in the stochastic model. Cells are initially induced at a high autoinducer concentration for 2 hours. The concentration of exogenous autoinducer is then hourly decreased in order to simulate the experiments (see [10]). The cell distribution reveals the variety of cell trajectories in comparison to the deterministic population-average solution (green line). The cells jump to the high state over a wide range of times and autoinducer concentrations. Note also that fluctuations lead to a stabilization of the low state with respect to the deterministic solution.

Figure 4. Cell response distribution to autoinducer induction in the stochastic model. Cell response probability after 10 hours (top: A, B) and 100 hours (middle: C, D) of induction at different autoinducer concentrations for the lux01 (left: A, C) and lux02 (right: B, D) operons in the stochastic model. The distribution reveals the coexistence of two subpopulations with low and high GFP expression when the cells are induced at intermediate autoinducer concentrations. The region of bistability (precision) is defined by the range of A[ext,0] for which the response is bimodal according to the following criterion: the lower/upper limit of the bistable region (orange lines) is defined by the value of A[ext,0] for which 90% of the cells are in the low/high state. The black line stands for the concentration of GFP (normalized) as a function of A[ext,0] in the deterministic model at the steady state. After 10 hours of induction (top: A, B) most cells are still in a transient state at intermediate autoinducer concentrations. After 100 hours of induction (middle: C, D), the bimodality region shrinks and the precision increases. The population-average curves of the induction and dilution experiments in the stochastic model (bottom: E, F, dashed lines) show that the intrinsic noise allows cells to jump to the high state inside the deterministic bistable region. On the other hand, the transition from high to low follows the deterministic path, thus indicating that the switching rate in this case is close to zero.

Figure 5. Individual cell trajectories for autoinducer induction in the stochastic model. Individual cell trajectories (blue lines), cell population average (orange line) and deterministic solution (red dashed line) for an induction experiment at a fixed autoinducer concentration for the lux01 operon in the stochastic model. Individual cell trajectories show the heterogeneous distribution of cell jumping times. While some cells achieve full induction of the operon before the deterministic case, the global response of the population reaches steady-state at ∼30 hours, slower than the deterministic solution.

The heterogeneity in terms of the jumping statistics is revealed in Figure 5, where we plot individual trajectories for the lux01 operon as a function of time, at a fixed autoinducer concentration, over a period of 50 hours. Some cells become induced after 3 hours, while others need ∼10 times more induction time to reach the high state. At this concentration of autoinducer all cells have eventually reached the high state after ∼30 hours of induction. Importantly, we do not observe that cells jump back (see Discussion). That is, while there is variability over the colony with regard to the switching time, once the transition occurs the cell remains in the new state, which is sustained over generations as seen in Figure 6. Therefore, over the typical timescale of an experiment (10 to 50 hours), the behaviour of the QS switch is highly dynamic and the precision of the switch is a transient quantity that crucially depends on the duration of induction.

Figure 6. Lineage tree of an induced population of cells in the stochastic model. Lineage tree of a population of cells induced at a fixed autoinducer concentration for the lux01 operon (left) and the lux02 operon (right). Vertical lines represent individual cells and horizontal lines cell division events. The color of the lines is proportional to the normalized GFP expression. The initial number of cells is 100 and is kept constant during the experiment by "deleting" cells at random every time a cell divides (truncated vertical lines). The lineage tree shows how the state of the cell is transmitted over generations and reveals that once the operon is activated the transition is "irreversible".

As expected, the intrinsic noise decreases the precision of the QS switch with respect to the deterministic case.
Still, noise helps cells to become activated before the critical concentration of a fluctuations-free system under all induction conditions. Moreover, in steady-state conditions the high state is globally achieved before the critical deterministic concentration. This phenomenon is recapitulated in Figure 4 (bottom), where we plot the population-average response for the induction and dilution experiments at steady-state (100 h induction) for both the deterministic and stochastic models. Notice that the dilution curves of the stochastic model are similar to those of the deterministic model; however, the average transition to the high state occurs at a lower autoinducer concentration due to intrinsic fluctuations.

The features of the QS switch depend on the transcriptional noise of LuxR

For the same concentration of the external autoinducer, the stochastic dynamics of the regulatory network arises from the noise at the level of LuxI and LuxR. We now analyze the individual contribution of those by modulating the burst size of LuxR and LuxI, b[R] and b[I] respectively. We notice that the burst size modulates the stochasticity levels while maintaining the average protein copy numbers. Additional file 5: Figure S3 illustrates the effect of changing the burst size by showing individual trajectories of the chemical species obtained for large and small values of this quantity at low and high concentrations of the external autoinducer.

In this regard, insight about the activation process can be obtained by computing the mean first passage time (MFPT) for transitions between the low and the high state. Figure 7 shows this quantity as a function of A[ext,0] and for different values of the burst size of LuxR and LuxI. For the sake of comparison, we also compute the MFPT for the deterministic solution. We note that in that case the MFPT inside the bistable region is infinite, since the deterministic system cannot spontaneously jump from one stable state to the other. Our results indicate that changing the burst size of LuxI does not modify the mean first passage time, whereas changing the transcriptional noise at the level of LuxR modifies the jumping statistics. Moreover, our results reveal a non-trivial behaviour of the MFPT as a function of the concentration of the autoinducer. On the one hand, with respect to the activation dynamics, when A[ext,0] is below ∼25 nM, an increase in LuxR noise decreases the mean time of the activation. That is, LuxR noise helps cells to get the initial activation quicker. On the other hand, above ∼25 nM of autoinducer concentration, the effect is the opposite: an increase in LuxR noise increases the mean jumping time, thus slowing down the full cell activation.
Additional file 5. Figure S3. Trajectory of chemical species in individual cells. Trajectory of the chemical species LuxR mRNA (mR), LuxR, LuxI, intracellular autoinducer (AI), regulatory complex (LuxR·AI)[2] (AL2) and promoter bound to complex (P10), in an individual cell, for low and high values of the burst size at low and high concentrations of the external autoinducer (panels A-D).

Figure 7. Mean first passage time of cell activation for different burst size values. Mean first passage time of cell activation as a function of autoinducer concentration for different values of the burst size for LuxR (b[R]) and LuxI (b[I]) and for the deterministic solution: (A) low-to-high transition MFPT in the lux01 operon, (B) low-to-high transition MFPT in the lux02 operon. The lower (upper) limit of the shaded regions is the 10% (90%) quantile curve of the distribution of FPT for the cases b[R]=b[I]=20 (blue shaded region) and b[R]=b[I]=0.01 (green shaded region). The MFPT reveals a non-trivial behaviour: for low autoinducer concentration noise helps cells to jump quicker to the high state, while for high autoinducer concentration noise slows down the cells' activation (see text). Intersections of the quantile 10% and quantile 90% curves with a horizontal line at t=10 h indicate the autoinducer concentration for which 10% of cell trajectories have jumped to the high state (left arrow) and the concentration for which 90% of cell trajectories have been activated (right arrow). The precision after 10 h of induction (inversely proportional to the width of the region delimited by the arrows) increases when decreasing the noise in LuxR (see text). Note that in the case of the lux01 operon we only change the value of b[R], since GFP does not contribute to the activation process.

We observe these effects both for the lux01 and lux02 operons. Surprisingly, when the autoinducer concentration is above the critical concentration of the deterministic system, the stochastic system always takes more time to get activated than the deterministic case. By computing additional properties of the first passage time probability density we also clarify the behaviour of the precision depending on the induction time. In particular, we compute the times t[low] and t[high] for which, at a given concentration, the probabilities of finding a FPT<t[low] and a FPT>t[high] are 10%, i.e. the 10% and 90% quantiles respectively. The shadings in Figure 7 delimit these regions for the cases b[R]=b[I]=20 and b[R]=b[I]=0.01. The precision of the switch after n hours of induction is directly related to the width of the shaded region at 〈FPT〉=n h: at any given time, this width indicates the minimal concentration of autoinducer for getting 10% of cells already activated and also the concentration beyond which more than 90% of cells have been activated. Thus, in agreement with Figure 4, the induction time clearly modifies the precision: it increases (the width decreases) as the induction time becomes larger. Moreover, note that as the LuxR noise weakens the precision increases.

Figure 8 recapitulates some of these results. There we show the GFP expression probability for the lux02 operon after 10 hours of induction for different values of the burst sizes b[R] and b[I]. Notice that the region of bimodality does not vary when changing the burst size for LuxI. However, decreasing the burst size of LuxR reduces the region of bimodality, thus increasing the precision of the switch. Furthermore, the noise at the level of LuxR helps some cells to become activated at lower concentration levels of the autoinducer. Once more, this phenomenon does not depend on the levels of transcriptional noise of LuxI. That is, while the global coordination increases as the transcriptional noise of LuxR decreases, more autoinducer is required to start activating cells. Figure 7 also suggests that the sensitivity of the precision as a function of the induction time and/or the stochasticity levels diminishes after ∼30 hours, since the width of the shaded region barely varies. Figure 9 points in that direction: under long induction time conditions (100 h) the precision of the switch remains constant regardless of the value of the burst size. All together, these results indicate an interesting and counterintuitive role of the transcriptional noise of LuxR in terms of the biological function of the QS switch.
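A sketch of how the 10%/90% quantile construction just described yields the precision at a given induction time; fpt_samples_by_conc is a hypothetical mapping from autoinducer concentration to a set of FPT samples, and a monotonic response is assumed:

```python
import numpy as np

def quantile_width(fpt_samples_by_conc, t_induction=10.0):
    """Find the lowest concentration where >=10% of trajectories have switched
    by t_induction, and the lowest one where >=90% have switched."""
    c_low = c_high = None
    for c, fpts in sorted(fpt_samples_by_conc.items()):
        q10, q90 = np.quantile(fpts, [0.10, 0.90])
        if c_low is None and q10 <= t_induction:
            c_low = c    # 10% of cells activate within the induction time
        if c_high is None and q90 <= t_induction:
            c_high = c   # 90% of cells activate within the induction time
    ok = c_low is not None and c_high is not None
    width = (c_high - c_low) if ok else np.nan
    return c_low, c_high, width   # precision ~ 1/width
```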
Figure 8. Cell response distribution in the transient regime for different burst size values. Cell response distribution (jumping probability) after 10 hours of induction (transient state) at different autoinducer concentrations for the lux02 operon in the stochastic model and different burst sizes. Burst size values: (A) b[R]=b[I]=20, (B) b[R]=4, b[I]=20, (C) b[R]=20, b[I]=4, (D) b[R]=b[I]=4, (E) b[R]=b[I]=0.01. Width of the bistable region: (A) 60 nM, (B) 25 nM, (C) 70 nM, (D) 27.5 nM, (E) 25 nM. The black line stands for the concentration of GFP (normalized) as a function of A[ext,0] in the deterministic model at the steady state.

Figure 9. Cell response distribution at the steady-state for different burst size values. Cell response distribution at the steady-state (100 h induction), at different autoinducer concentrations, for the lux02 operon in the stochastic model for different burst size values: (A) b[R]=b[I]=20, (B) b[R]=b[I]=4, (C) b[R]=b[I]=0.01. The probability density of getting a particular GFP expression level is indicated by means of a density plot. The width of the bistable region barely depends on the stochasticity levels, ≈7 nM. The black line stands for the concentration of GFP (normalized) as a function of A[ext,0] in the deterministic model at the steady state.

Discussion

The response of bacterial colonies driven by the QS signalling mechanism under noisy conditions has been addressed, in a broad sense, by different authors. In particular, the characterization of the collective response as a synchronization phenomenon where the phenotypic variations can be generically predicted has been proposed [47]. However, this approach requires gene regulatory interactions controlling the QS switch that do not induce bistability and lead to a monostable behaviour, e.g. negative feedback loops [48]. Our study focuses on strains that display bistability, as the wild-type LuxI/LuxR system does, and, consequently, an alternative method to quantify the phenotypic variability induced by noise was needed, i.e. the precision concept. Moreover, previous works assume stationary conditions and disregard the role of the cell cycle duration. Herein, in agreement with experimental results, we have shown that the time for reaching a steady expression rate is much larger than the cell cycle duration (see [10]). As a result, we have revealed that the interplay between non-stationary and stochastic effects is key for understanding the global response of the colony and the phenotypic variability.

Finally, we have shown that the intrinsic noise is able to stabilize a particular phenotypic state. This effect, namely the fluctuations inducing a slowing down in the activation of the cells, emerges because noise extends the bistable region compared to the deterministic system. While such a noise-induced phenomenon has been characterized in population models [49] and, more recently, in theoretical studies on bistable switches [18], to the best of our knowledge this is the first time that it is reported in the context of QS systems. All in all, from the viewpoint of the comprehension of how noisy inputs may condition phenotypic variability in bacterial colonies, our study introduces a number of advances.
Herein, we have characterized how the precision of the QS switch depends on the stochasticity levels and, importantly, elucidated which noisy component of the LuxI/LuxR regulatory network drives the observed phenomenology. Thus, we have found that under non-stationary conditions LuxR controls the phenotypic variability and that changing the noise intensity at the level of LuxI has no effect on the precision of the switch. A plausible explanation for this reads as follows. The fluctuations at the level of LuxI are transmitted to the autoinducer. However, the diffusion mechanism rapidly averages out the stochasticity levels of the latter. This is not possible for LuxR, which is kept within the cell. As a consequence, the amount of activation complex, which is ultimately responsible for the activation, is driven by the fluctuations of LuxR but not by those of LuxI.

Recent experimental work has measured the bioluminescence levels of individual V. fischeri cells at fixed autoinducer concentration [13]. In agreement with our results, the authors observed that cells differed widely in terms of their activation time and luminescence distribution. Interestingly, other experiments have revealed the presence of additional regulatory interactions for controlling the LuxR noise levels. For example, C8HSL, a second QS signal in V. fischeri, has been suggested to reduce the noise in the bioluminescence output of the cells at low autoinducer concentrations [50]. In the same direction, in V. harveyi the number of LuxR dimers is tightly regulated, indicating a control over LuxR intrinsic noise [51]. In fact, wild-type V. harveyi strains have two negative feedback loops that repress the production of LuxR [52], and this kind of regulatory circuit is known to reduce noise levels [53]. In this context, our results provide a feasible explanation for the network structure in wild-type strains: since noise in LuxR controls the phenotypic variability of the LuxR/LuxI QS systems, bacteria have evolved mechanisms to control its noise levels.

An additional argument in this regard arises from our results about the deactivation of cells: once they are fully induced, we do not observe reversibility of the phenotype (FPT larger than 100 h). First, these results are in agreement with other switching systems, such as the galactose signalling network in yeast [54], and with theoretical results that explain the asymmetric switching dynamics due to stochastic effects [18]. Second, they reveal the importance of additional interactions that negatively regulate luxR in wild-type strains and indicate that synthetic strains such as lux01 and lux02 capture many features of the wild-type operon during the activation process but fail to capture some of the dynamical aspects of the deactivation phenomenon.

Finally, our simulations indicate that non-stationary effects are essential during the activation of the QS response. While speculative, these results can be extrapolated to growing colonies where the cell density is not kept constant. A good supply of nutrients implies short induction times, since the concentration of autoinducer will quickly grow (exponentially) as the population size does. According to our results, this fast-growing condition decreases the precision of the switch and, consequently, promotes variability at the population level (see Figure 10). In addition, the full collective activation of the system would require a large population size.
On the other hand, if the colony grows in a poor nutrient environment, the system will have time to reach a steady-state more easily and the precision would increase. Hence, the variability would be diminished, and full activation would require smaller colony sizes. Most phenotypic changes induced by the QS mechanism refer to bacterial strategies for survival and/or colonization. In this context, our results suggest that both the QS activation threshold and the phenotypic variability might depend on the growth rate of the colony and, as a consequence, on the environmental conditions. This is in agreement with recent studies showing that the collective response of a population of cells depends not only on the underlying genetic circuit and the environmental signals, but also on the speed of variation of these signals [55].

Figure 10. The growth rate conditions the phenotypic variability. In the context of a growing colony, the autoinducer concentration increases as the colony does: purple lines show schematically two exponential growth conditions for the autoinducer concentration as a function of time. Our results on the MFPT, valid at fixed autoinducer concentrations, can be extrapolated, qualitatively, to the case of increasing autoinducer levels. Fast growth results in a large cell variability and a large critical colony size for achieving a global response, while slow growth produces reduced cell variability and a smaller critical population size. Increasing fluctuations in LuxR have two opposite effects: in the slow growth case, increasing the noise (blue curves: b[R]=20; green curves: b[R]=0.01) decreases the critical population size while hardly changing the variability; in the fast growth case, increasing noise increases the critical population size and greatly increases the variability.

Conclusions

Herein we have introduced deterministic and stochastic modelling approaches for describing the core functionality of the LuxI/LuxR regulatory network in quorum sensing systems. We have focused on synthetic constructs, lux01 and lux02, that reproduce the behaviour of the wild-type system and allow for controlled experiments that have provided quantification of the activation process [10]. The deterministic approach has allowed us to estimate different parameters of the model and reproduce the switch-like behaviour of the QS network. Our simulations reveal that the interplay between non-stationary and stochastic effects is key and that, for an extended range of autoinducer concentrations, a bimodal phenotypic variability develops such that cells fail to produce a global response. In this context we have introduced the concept of precision of the QS switch, as the inverse of the width of the bimodal phenotypic region. By computing the statistics of the activation dynamics of cells, we have shown that the QS precision depends on the gene expression noise at the level of LuxR and is independent of that of LuxI. Our results, together with the experimental evidence on LuxR regulation in wild-type species, suggest that the noise at the level of LuxR controls the phenotypic variability of the LuxR/LuxI QS systems and that bacteria have evolved to control its intensity. In addition, the robust stabilization of the phenotype once it is fully induced indicates that, although synthetic strains such as lux01 and lux02 capture many features of the wild-type operon during the activation process, they fail to capture crucial aspects of the deactivation phenomenon.
Most insight regarding the effect of LuxR noise on the dynamics of cell activation is given by the study of the mean first passage time (MFPT). In terms of the timing of activation, we have observed two opposite effects depending on the control parameter, the exogenous autoinducer concentration: below ∼25 nM, the larger the noise in LuxR the quicker the cells become activated, while above ∼25 nM we observe the opposite effect and noise slows down cell activation. We suggest that this effect can be explained by the stochastic stabilization of the low state. Moreover, the calculation of additional properties of the statistics of the first passage time has allowed us to relate the concept of precision of the switch to the variability of the FPT by estimating the 10% and 90% quantiles.

In summary, our results indicate that in bacterial colonies driven by the QS mechanism there is a trade-off between the activation onset and a global response due to non-stationary and stochastic effects. On the one hand, large levels of noise at the level of LuxR imply that cells require smaller autoinducer levels for achieving an activation onset but, at the same time, a global response requires a substantial autoinducer concentration. On the other hand, if the LuxR noise levels are small, the activation onset is shifted toward larger values of the autoinducer concentration but the global response is achieved for smaller concentration values.

Our study could be useful for Synthetic Biology approaches that exploit the QS mechanism. The fact that some important features of the QS mechanism, e.g. precision, rely on the burst size of one component opens the door to modifications of the LuxI/LuxR operon for regulating the response depending on the problem under consideration. Finally, further research is needed on the general validity and applicability of the noise-induced stabilization phenomenon of particular phenotypic states in other gene regulatory systems beyond the QS mechanism. Work in that direction is in progress.

Authors' contributions

MW and JB designed the experiments. MW carried out the simulations. MW and JB analyzed the data. All authors read and approved the final manuscript.

Acknowledgements

We thank Oriol Canela Xandri and Nico Geisel for fruitful comments. Financial support was provided by MICINN under grant BFU2010-21847-C02-01/BMC, and by DURSI through project 2009-SGR/01055. We also acknowledge support from the European Science Foundation through the FuncDyn programme. M.W. acknowledges the support of the Spanish MICINN through a doctoral fellowship (FPU AP2008-03272).

References

1. Bassler BL, Losick R: Bacterially speaking. Cell 2006, 125(2):237-46.
2. Saka Y, Lhoussaine C, Kuttler C, Ullner E, Thiel M: Theoretical basis of the community effect in development. BMC Syst Biol 2011, 5:54.
3. Antunes LCM, Ferreira RBR, Buckner MMC, Finlay BB: Quorum sensing in bacterial virulence.
4. Pai A, Tanouchi Y, Collins CH, You L: Engineering multicellular systems by cell-cell communication. Curr Opin Biotechnol 2009, 20(4):461-70.
5. Anderson JC, Clarke EJ, Arkin AP, Voigt CA: Environmentally controlled invasion of cancer cells by engineered bacteria. J Mol Biol 2006, 355(4):619-27.
6. Ng WL, Bassler BL: Bacterial quorum-sensing network architectures. Annu Rev Genet 2009, 43:197-222.
7. Dockery JD, Keener JP: A mathematical model for quorum sensing in Pseudomonas aeruginosa. Bull Math Biol 2001, 63:95-116.
8. Cox C, Peterson G, Allen M, Lancaster J, McCollum J, Austin D, Yan L, Sayler G, Simpson M: Analysis of noise in quorum sensing. OMICS 2003, 7(3):317-334.
9. Goryachev AB, Toh DJ, Lee T: Systems analysis of a quorum sensing network: design constraints imposed by the functional requirements, network topology and kinetic constants.
10. Williams JW, Cui X, Levchenko A, Stevens AM: Robust and sensitive control of a quorum-sensing circuit by two interlocked feedback loops. Mol Syst Biol 2008, 4:234.
11. Haseltine EL, Arnold FH: Implications of rewiring bacterial quorum sensing. Appl Environ Microbiol 2008, 74(2):437-45.
12. Anetzberger C, Pirch T, Jung K: Heterogeneity in quorum sensing-regulated bioluminescence of Vibrio harveyi. Mol Microbiol 2009, 73(2):267-77.
13. Pérez PD, Hagen SJ: Heterogeneous response to a quorum-sensing signal in the luminescence of individual Vibrio fischeri. PLoS One 2010, 5(11):e15473.
14. Boedicker JQ, Vincent ME, Ismagilov RF: Microfluidic confinement of single cells of bacteria in small volumes initiates high-density behavior of quorum sensing and growth and reveals its variability. Angew Chem Int Ed Engl 2009, 48(32):5908-11.
15. Hagen SJ, Son M, Weiss JT, Young JH: Bacterium in a box: sensing of quorum and environment by the LuxI/LuxR gene regulatory circuit. J Biol Phys 2010, 36(3):317-327.
16. Tian T, Burrage K: Stochastic models for regulatory networks of the genetic toggle switch. Proc Natl Acad Sci U S A 2006, 103(22):8372-7.
17. Wang J, Zhang J, Yuan Z, Zhou T: Noise-induced switches in network systems of the genetic toggle switch. BMC Syst Biol 2007, 1:50.
18. Frigola D, Casanellas L, Sancho JM, Ibañes M: Asymmetric stochastic switching driven by intrinsic molecular noise. PLoS One 2012, 7(2):e31407.
19. Danino T, Mondragón-Palomino O, Tsimring L, Hasty J: A synchronized quorum of genetic clocks. Nature 2010, 463(7279):326-330.
20. García-Ojalvo J, Elowitz M, Strogatz S: Modeling a synthetic multicellular clock: Repressilators coupled by quorum sensing. Proc Natl Acad Sci U S A 2004, 101(30):10955.
21. Karlsson D, Karlsson S, Gustafsson E, Normark BH, Nilsson P: Modeling the regulation of the competence-evoking quorum sensing network in Streptococcus pneumoniae. BioSystems 2007, 90:211-23.
22. Kuttler C, Hense BA: Interplay of two quorum sensing regulation systems of Vibrio fischeri. J Theor Biol 2008, 251:167-80.
23. Tanouchi Y, Tu D, Kim J, You L: Noise reduction by diffusional dissipation in a minimal quorum sensing motif. PLoS Comput Biol 2008, 4(8):e1000167.
24. Pai A, You L: Optimal tuning of bacterial sensing potential. Mol Syst Biol 2009, 5:286.
25. Goryachev AB, Toh DJ, Wee KB, Zhang HB, Zhang LH, Lee T: Transition to quorum sensing in an Agrobacterium population: A stochastic model. PLoS Comput Biol 2005, 1(4):e37.
26. Romero-Campero FJ, Pérez-Jiménez MJ: A model of the quorum sensing system in Vibrio fischeri using P systems. Artif Life 2008, 14:95-109.
27. Koseska A, Zaikin A, Kurths J, García-Ojalvo J: Timing cellular decision making under noise via cell-cell communication. PLoS One 2009, 4(3):e4872.
28. Melke P, Sahlin P, Levchenko A, Jönsson H: A cell-based model for quorum sensing in heterogeneous bacterial colonies. PLoS Comput Biol 2010, 6(6):e1000819.
29. Lyell NL, Dunn AK, Bose JL, Stabb EV: Bright mutants of Vibrio fischeri ES114 reveal conditions and regulators that control bioluminescence and expression of the lux operon. J Bacteriol 2010, 192(19):5103-14.
30. Sitnikov D, Shadel G, Baldwin T: Autoinducer-independent mutants of the LuxR transcriptional activator exhibit differential effects on the two lux promoters of Vibrio fischeri.
31. Shadel G, Baldwin T: Identification of a distantly located regulatory element in the luxD gene required for negative autoregulation of the Vibrio fischeri luxR gene. J Biol Chem 1992, 267(11):7690.
32. Gillespie D: Exact stochastic simulation of coupled chemical reactions. J Phys Chem 1977, 81(25):2340-2361.
33. Rosenfeld N, Young JW, Alon U, Swain PS, Elowitz MB: Gene regulation at the single-cell level. Science 2005, 307(5717):1962-5.
34. Canela-Xandri O, Sagués F, Buceta J: Interplay between intrinsic noise and the stochasticity of the cell cycle in bacterial colonies. Biophys J 2010, 98(11):2459-68.
35. Lu T, Volfson D, Tsimring L, Hasty J: Cellular growth and division in the Gillespie algorithm. Syst Biol (Stevenage) 2004, 1:121.
36. Kærn M, Elston T, Blake W, Collins J: Stochasticity in gene expression: from theories to phenotypes. Nat Rev Genet 2005, 6(6):451-464.
37. Yu J, Xiao J, Ren X, Lao K, Xie XS: Probing gene expression in live cells, one protein molecule at a time. Science 2006, 311(5767):1600-3.
38. Cai L, Friedman N, Xie XS: Stochastic protein expression in individual cells at the single molecule level. Nature 2006, 440(7082):358-62.
39. Ozbudak EM, Thattai M, Kurtser I, Grossman AD, van Oudenaarden A: Regulation of noise in the expression of a single gene. Nat Genet 2002, 31:69-73.
40. Bennett MR, Hasty J: Microfluidic devices for measuring gene network dynamics in single cells. Nat Rev Genet 2009, 10(9):628-38.
41. Urbanowski M, Lostroh C, Greenberg E: Reversible acyl-homoserine lactone binding to purified Vibrio fischeri LuxR protein. J Bacteriol 2004, 186(3):631.
42. Kaufmann G, Sartorio R, Lee S, Rogers C, Meijler M, Moss J, Clapham B, Brogan A, Dickerson T, Janda K: Revisiting quorum sensing: discovery of additional chemical and biological functions for 3-oxo-N-acylhomoserine lactones. Proc Natl Acad Sci U S A 2005, 102(2):309.
43. Roberts C, Anderson KL, Murphy E, Projan SJ, Mounts W, Hurlburt B, Smeltzer M, Overbeek R, Disz T, Dunman PM: Characterizing the effect of the Staphylococcus aureus virulence factor regulator, SarA, on log-phase mRNA half-lives. J Bacteriol 2006, 188(7):2593-603.
44. Kaplan H, Greenberg E: Diffusion of autoinducer is involved in regulation of the Vibrio fischeri luminescence system. J Bacteriol 1985, 163(3):1210.
45. Reshes G, Vanounou S, Fishov I, Feingold M: Timing the start of division in E. coli: a single-cell study. Phys Biol 2008, 5(4):046001.
46. Trueba F, Koppes L: Exponential growth of Escherichia coli B/r during its division cycle is demonstrated by the size distribution in liquid culture. Arch Microbiol 1998, 169:491-496.
47. Hong D, Saidel WM, Man S, Martin JV: Extracellular noise-induced stochastic synchronization in heterogeneous quorum sensing network. J Theor Biol 2007, 245(4):726-736.
48. Zhou T, Chen L, Aihara K: Molecular communication through stochastic synchronization induced by extracellular fluctuations. Phys Rev Lett 2005, 95(17):178103.
49. Horsthemke W, Lefever R: Noise-induced transitions: theory and applications in physics, chemistry and biology. Springer-Verlag, Berlin Heidelberg; 1984.
50. Pérez PD, Weiss JT, Hagen SJ: Noise and crosstalk in two quorum-sensing inputs of Vibrio fischeri. BMC Syst Biol 2011, 5:153.
51. Teng SW, Wang Y, Tu KC, Long T, Mehta P, Wingreen NS, Bassler BL, Ong NP: Measurement of the copy number of the master quorum-sensing regulator of a bacterial cell. Biophys J 2010, 98(9):2024-31.
52. Tu KC, Long T, Svenningsen SL, Wingreen NS, Bassler BL: Negative feedback loops involving small regulatory RNAs precisely control the Vibrio harveyi quorum-sensing response. Mol Cell 2010, 37(4):567-79.
53. Balázsi G, van Oudenaarden A, Collins JJ: Cellular decision making and biological noise: from microbes to mammals. Cell 2011, 144(6):910-25.
54. Acar M, Becskei A, van Oudenaarden A: Enhancement of cellular memory by reducing stochastic transitions. Nature 2005, 435(7039):228-32.
55. Nené NR, García-Ojalvo J, Zaikin A: Speed-dependent cellular decision making in nonequilibrium genetic circuits. PLoS One 2012, 7(3):e32779.
{"url":"http://www.biomedcentral.com/1752-0509/7/6","timestamp":"2014-04-17T05:54:40Z","content_type":null,"content_length":"227326","record_id":"<urn:uuid:4e4df335-ea8a-4adc-a420-a71b9f40fe06>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
Analysis of potential parallel implementations of the unsymmetric-pattern multifrontal method for sparse LU factorization

Steven M. Hadfield and Timothy A. Davis
301 CSE, Computer and Information Sciences Dept.
University of Florida, Gainesville, FL 32611-2024 USA
(904) 392-1481, email: smh@cis.ufl.edu and davis@cis.ufl.edu

Technical Report TR-92-017 (1)
July 13, 1992

(1) Available via anonymous ftp to cis.ufl.edu as /cis/tech-reports/tr92/tr92-017.ps.Z. Supported in part by NSF ASC-911...

Abstract

The unsymmetric-pattern multifrontal method for the LU factorization of sparse matrices creates a series of smaller, dense frontal matrices that are partially factored and then combined into a fully factored result. These smaller frontal matrices can overlap in rows and/or columns, creating dependencies between themselves which are represented as an elimination directed acyclic graph (DAG). This elimination DAG can serve as a task graph to control the parallel factorization of frontal matrices. The potential for parallelism provided in the elimination DAGs produced by the unsymmetric-pattern multifrontal method is investigated via analysis and trace-driven simulations. Different levels of parallelism are explored using both unbounded and bounded processor models. Various scheduling and task allocation schemes are also investigated. Performance results are measured in terms of speed-ups and processor utilization.

Contents

1 Overview
2 Background
2.1 LU Factorization
2.2 Frontal Matrices (Elements)
2.3 Elimination Directed Acyclic Graph
2.4 Major Algorithmic Steps
2.5 Potentials for Parallelism
3 Unbounded Parallelism Models
3.1 Elimination DAG Analysis
3.2 Model Definitions
3.3 Matrices Analyzed
3.4 Results
3.5 Conclusions and Observations
4 Bounded Parallelism Models
4.1 Initial Models
4.1.1 Trace-Driven Simulation
4.1.2 Processor Allocations
4.1.3 Scheduling Methods
The first part, Chapter 2, provides some fundamental background on LU factorization and the unsymmetric-pattern multifrontal method. This is followed by the results of the unbounded parallelism models run against traces produced by the sequential version of the method. These results provide an initial assessment of the potential parallelism of the method assuming a sufficiently large number of available processors. The third and final part refines the unbounded models into bounded models and uses trace-driven simulation techniques to predict the actual paral- lelism achievable with various processor set sizes. Major concerns with these models are the allocation of processors to subproblems and the scheduling of tasks. The results of the analysis just outlined illustrate a strong potential for parallelism with practical and realizable processor set sizes. Chapter 2 This section briefly overviews some of the basic concepts of the unsymmetric-pattern multi- frontal method [4]. 2.1 LU Factorization LU factorization is a numerical algorithm based on Gaussian elimination for solving square, nonsingular systems of linear equations of the form: Ax = b, where A is an n by n real matrix and x, b are n dimensional real column vectors [7]. Specifically, the coefficient matrix, A, is factored into the product of two triangular matrices: A = LU. L is unit lower triangular and consists of the row multipliers used to reduce A to echelon form. U is the echelon form of A which is upper triangular. With the A = LU factorization complete, the solution to the system of linear equations, Ax = b, can be found by solving the two triangular systems: Ly = b and U = y. This is a much simpler task of O(n2) compared to O(n3) for the factorization. Typically, LU factorization is accompanied by row permutations to insure numerical stability and, particularly when dealing with sparse matrices, by column permutations to recover from zero pivots. Such permutations are incorporated into the unsymmetric-pattern multifrontal method. 2.2 Frontal Matrices (Elements) The unsymmetric-pattern multifrontal method uses LU factorization to solve very large sys- tems of linear equations (A of size 1000 x 1000 and larger). In order to perform this task efficiently, the method takes advantage of the sparse, yet unsymmetric pattern typical of many such large coefficient matrices. The method does this by identifying elemental subma- trices along the diagonal of the coefficient matrix. The non-zero elements of the first row and column of the right lower submatrix define the rows and columns of the elemental matrix. These elemental matrices are also referred to as frontal matrices. They are typically much smaller and very dense. Thus, they can be factored very efficiently by existing vectorized and parallel subroutines. 2.3 Elimination Directed Acyclic Graph A major complexity of this method arises from the fact that these elemental matrices can (and do) overlap in rows, columns, or both. As a result, most elemental matrices can be only partially factored. The unfactored portion of the submatrix has been modified by the partial factorization and will be passed and assembled into other elemental matrices for further factorizations. This "assembly" process creates data dependencies between the elemental submatrices. These dependencies are represented with a directed acyclic graph (DAG) where the elemental submatrices are represented by vertices and the dependencies with edges. This DAG is referred to as the elimination DAG. 
The unsymmetric pattern exploited by the method can result in parts of the unfactored submatrix of a frontal matrix being assembled into multiple later frontal matrices. In symmetric multifrontal methods [3] the assembly will be into only one later frontal matrix. This results in an elimination DAG for the unsymmetric-pattern method as opposed to an elimination tree for the symmetric-pattern methods.

2.4 Major Algorithmic Steps

The execution of the unsymmetric-pattern multifrontal method is thus controlled in large part by this elimination DAG. Two of the major steps in the method are the assembly of data passed between elemental matrices and the actual numerical factoring. These are the algorithmic steps that will be modeled in this effort. These two steps constitute the majority of the processing of the factor-only version of the unsymmetric-pattern multifrontal method, which is the version to be analyzed in this effort. The entire method is implemented in the analyze-factor version; within this larger version these steps typically account for over fifty percent of the method's execution time [4]. The limited scope of this effort allows the assumption of an existing and static elimination DAG. This is a simplification of what actually happens, since possible modifications of the elimination DAG and memory allocation for frontal matrices are ignored.

2.5 Potentials for Parallelism

There are at least three levels at which parallelism can be exploited in the unsymmetric-pattern multifrontal method. The highest level is present in the independent blocks that can naturally appear in the coefficient matrix once it has been permuted into an upper block triangular form. All entries to the left of and below these blocks are zeroes, and the operations on such a block are independent of the operations on the other blocks. These blocks manifest themselves as distinct connected components of the elimination DAG.

Within these blocks, parallelism is also possible when factoring independent elemental matrices. These typically manifest themselves as multiple vertices at the same topological level within a connected component of the elimination DAG (however, additional parallelism is possible at this level depending on the structure of the dependencies represented by the edges of the elimination DAG). Assembly and numerical factoring of such elemental matrices can be accomplished in parallel.

The lowest level of parallelism occurs within the actual numerical factoring of elemental matrices using factoring subroutines tailored for particular parallel architectures. The assembly process of data transferred along edges in the elimination DAG can also be done in parallel at this level. Exploitation of this level of parallelism is critical for achieving the most significant speed-ups.

Chapter 3
Unbounded Parallelism Models

The performance and complexity of a sparse matrix algorithm are closely tied to the sparsity, structure, and numerical characteristics of a particular matrix. As a result, performance of such algorithms is frequently measured by comparison to other algorithms on identical matrices. This same approach will be taken with this model. Both sequential and parallel versions of the unsymmetric-pattern multifrontal method will be modeled for specific matrices and then the results compared. An underlying multiple instruction, multiple data (MIMD) shared memory (SM) model of computation is assumed, although several of the models are also realizable on single instruction, multiple data (SIMD) models.
The shared memory is initially assumed to be concurrent read, concurrent write with concurrently written results summed (CRCW-SUM). Later models will require only a concurrent read, exclusive write (CREW) shared memory. Instruction counts are abstracted to only the floating point operations (flops) required for the assembly and numerical factoring processes.

The analysis of the unbounded parallelism models is based on the representations of elimination DAGs built from traces produced by running the sequential version of the unsymmetric-pattern multifrontal method on real matrices. These elimination DAGs are analyzed and then used as input for the five unbounded parallelism models. The resulting speed-up, processor set, and utilization estimates are then analyzed.

3.1 Elimination DAG Analysis

A sound characterization of the elimination DAG is the cornerstone of the unbounded models and will be fundamental to fully understanding the results of the bounded models. In order to obtain this characterization, the elimination DAGs are topologically sorted and partitioned into a data structure that identifies all the nodes in a given topological level. Node weights are assigned in accordance with the model definitions defined below. Using these node weights, the heaviest weighted path through the elimination DAG is determined for each model. The weight of this path provides the lower bound on the parallel execution time.

Other characteristics of the elimination DAGs are also determined. In particular, the number of nodes and edges, the depth, and the number of leaf nodes are determined. Since the elimination DAGs tend to be significantly wider at their leaf level (Level 0), the next widest level is also determined. The total weight of all nodes is divided by the number of nodes to obtain an average node weight. The depth is divided by the number of nodes to give a depth ratio; low values correspond to the DAGs that offer the most significant coarse-grain parallelism. Various methods of exploiting the parallelism are investigated with the different models defined next.

3.2 Model Definitions

A model of the sequential version of the algorithm is provided as a reference and will be used to develop the speed-up calculations. In this and all the subsequent models there are two levels. The first (higher) level consists of the outer sum that counts processing steps for an entire frontal matrix. This summation will either be across all nodes of the elimination DAG, which represents a sequential processing of the frontals, or along the heaviest path, which represents the critical path for the processing. The heaviest path summation is used to characterize parallelism exploited between the nodes in the elimination DAG. With the assumption of an unbounded number of processors, processing of nodes with dependencies not on the heaviest path (critical path) can be accomplished during the critical path processing and does not affect the completion time.

The second level of the models describes the processing and parallelism of factoring a particular frontal matrix (i.e., node in the elimination DAG). Each of these descriptions is presented in four terms. The first term describes the processing needed to complete the assembly of contribution blocks from sons in the elimination DAG. The second term characterizes the processing needed for the selection of a pivot for numerical stability. The third term describes the calculation of multipliers, which results in a column of the lower triangular matrix L.
Finally, the fourth term defines the updating of the active submatrix, which is essentially the carrying out of the necessary row operations needed to reduce the matrix. These models are very similar to an earlier effort by Duff and Johnsson [6] that focused on the elimination trees produced by symmetric-pattern multifrontal methods. Each of the models uses the following terms:

- A_j: number of matrix entries assembled into the frontal represented by node j
- S_j: number of sons assembling entries into node j
- P_j: number of pivots by which the frontal matrix at node j is reduced
- c_j: number of columns in the node j frontal
- r_j: number of rows in the node j frontal

With these definitions, the model for the sequential version of the algorithm is defined as

T_{seq} = \sum_{all nodes j} [ A_j + \sum_{i=1}^{P_j} (r_j - i + 1) + \sum_{i=1}^{P_j} (r_j - i) + 2 \sum_{i=1}^{P_j} (r_j - i)(c_j - i) ]

Model 0: Concurrency Between Elements Only

Model 0 depicts a version of the algorithm that only exploits parallelism across nodes in the elimination DAG. Each frontal matrix is factored sequentially. This model requires an underlying MIMD architecture. The formula describing this model is:

T_0 = \sum_{j \in heaviest path} [ A_j + \sum_{i=1}^{P_j} (r_j - i + 1) + \sum_{i=1}^{P_j} (r_j - i) + 2 \sum_{i=1}^{P_j} (r_j - i)(c_j - i) ]

Model 1: Block Concurrency Within Elements Only

Model 1 assumes that nodes of the elimination DAG are processed sequentially in an order that preserves the dependencies expressed by the elimination DAG. Within each node, the frontal matrix is factored with a block level of parallelism. Assembly, scaling, and updates to the active submatrix are done for all rows in parallel, moving sequentially through the columns. For the assembly of a row that includes contributions from more than one son in the elimination DAG, the assumption of a CRCW-SUM shared memory ensures correctness. In this model, selection of numerical pivots is assumed to be sequential. The method described by this model and the next lend themselves easily to a SIMD architecture.

T_1 = \sum_{all nodes j} [ c_j + \sum_{i=1}^{P_j} (r_j - i + 1) + P_j + 2 \sum_{i=1}^{P_j} (c_j - i) ]

Model 2: Full Concurrency Within Elements Only

Concurrency within a frontal matrix is extended by Model 2. In this model, more processors are used and a maximal amount of parallelism is exploited. Assembly is done for each element in parallel. However, in order to maintain consistency with the earlier work of Johnsson and Duff [6], a CREW memory is used by this model, and the assembly of contributions to an entry from multiple sons is done via a parallel prefix computation using the associative operator of addition. Numerical pivot selection is handled in this model with another parallel prefix computation, this time using the maximum function as the associative operator. Scaling and updating of the active matrix are done with parallelism at the entry level, which is easily realizable with the CREW memory model. This model is also oriented to a SIMD architecture.

T_2 = \sum_{all nodes j} [ \lceil \log_2 S_j \rceil + \sum_{i=1}^{P_j} \lceil \log_2 (r_j - i + 1) \rceil + P_j + 2 P_j ]

Model 3: Block Concurrency Within and Between Elements

The block-oriented, node level parallelism of Model 1 is augmented by parallelism across the nodes in Model 3. This is accomplished by changing the outer summation to be over the heaviest path instead of across all nodes. With this model, a MIMD architecture is required.

T_3 = \sum_{j \in heaviest path} [ c_j + \sum_{i=1}^{P_j} (r_j - i + 1) + P_j + 2 \sum_{i=1}^{P_j} (c_j - i) ]

Model 4: Full Concurrency Within and Between Elements

The suite of models is completed by extending the full node level concurrency of Model 2 to include parallelism across nodes in the elimination DAG. This is done in the same fashion as with Model 3, and a MIMD architecture is likewise assumed.

T_4 = \sum_{j \in heaviest path} [ \lceil \log_2 S_j \rceil + \sum_{i=1}^{P_j} \lceil \log_2 (r_j - i + 1) \rceil + P_j + 2 P_j ]
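As a concrete illustration, the Model 1 node cost and the Model 3 heaviest-path bound above could be evaluated as in the following sketch. The function names and the dynamic-programming pass are our own; the arithmetic is a direct transcription of the formulas above.

    def model1_node_cost(P, r, c):
        """Per-node time under Model 1 (block concurrency within a frontal):
        c_j + sum_{i=1..P_j}(r_j - i + 1) + P_j + 2 * sum_{i=1..P_j}(c_j - i)."""
        return (c
                + sum(r - i + 1 for i in range(1, P + 1))
                + P
                + 2 * sum(c - i for i in range(1, P + 1)))

    def heaviest_path_weight(nodes_in_topo_order, sons, cost):
        """Model 3 lower bound: weight of the heaviest path through the DAG,
        computed over a topological ordering (sons before fathers).
        sons[n] lists the nodes assembling into n; cost[n] is its node cost."""
        best = {}
        for n in nodes_in_topo_order:
            best[n] = cost[n] + max((best[s] for s in sons[n]), default=0)
        return max(best.values(), default=0)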
3.3 Matrices Analyzed

A set of five matrices was selected for this study. They represent a cross section of the types of elimination DAGs that are generated. The matrices are from the Harwell-Boeing set, and each matrix comes from a real problem [5]. The table below briefly describes these matrices:

    mahindasb   economic modelling
    gemat11     optimal power flow problem
    gre_1107    computer system simulation
    lns_3937    compressible fluid flow
    sherman5    oil reservoir, implicit black oil model

3.4 Results

Elimination DAG Analysis. The first objective of this part of the effort was to characterize the elimination DAGs of the test matrices. Table 3.1 below provides these characterizations, which were produced with the analysis program written for the effort. The DEPTH entries are the number of levels in the elimination DAG as determined by a topological sort. The width of the DAG is presented in two entries: LEVEL 0 refers to the leaf nodes of the elimination DAG, and WIDTH gives the widest level other than the leaf level (Level 0). It is important to remember that the elimination DAGs are typically NOT a single connected component and that the many small, distinct connected components show up on the lowest level of the DAG's topological order.

[Table 3.1: Elimination DAG Characterizations, listing for each test matrix the number of nodes, number of edges, depth, Level 0 width, widest other level, and average node weight; most numeric entries are illegible in this copy.]

The statistics presented above reveal that the elimination DAGs tend to be short and wide (which is the desired shape for parallelism). However, they do not provide the full picture. The true nature of the elimination DAGs was better revealed with an abstraction of the DAG provided by the analysis program. This abstraction illustrates the topologically ordered elimination DAG, with an example provided for the gemat11 matrix in Figure 3.1. For each level, the number of elements in that level is provided as well as the minimum, average, and maximum node weights. (The node weights are expressed as common logs to save space and since their magnitude is all that is of interest.) These abstractions revealed that the elimination DAGs were very bottom heavy in terms of the number of nodes but that the average node size tended to increase dramatically in the upper levels. The elimination DAG for the gemat11 matrix had the nicest shape for parallelism (as suggested by its depth ratio). The other elimination DAGs were taller and relatively more narrow at their base.

[Figure 3.1: Elimination DAG abstraction for gemat11; for each level L, the node count and the (min, avg, max) node weights.]

Speed Ups. The most interesting result of the unbounded models is the speed ups obtained via parallelism (as compared to the sequential version of the method). These results are presented in Table 3.2.

Table 3.2: Speed Up Results

              mahindasb   gemat11   gre_1107   lns_3937     sherman5
    MODEL 0        1.17      2.88       1.26       1.30         1.74
    MODEL 1       10.50      9.31      41.21     103.77        78.36
    MODEL 2      114.25     39.35     575.21    3490.76      1877.05
    MODEL 3       15.96     68.79      70.15     167.94         ~161
    MODEL 4      237.74    743.80   1,847.68    ~11,000  (illegible)

The speed ups for Model 0 are disappointing but not unexpected after what was seen in the previous section on the shape of the elimination DAGs. However, significant speed ups were seen as parallelism was introduced within the nodes (Models 1 and 2) and within and between nodes (Models 3 and 4). It is interesting to note that a significant synergism is taking place when parallelism is utilized BOTH within and between nodes.
One's intuition might suggest that both levels of parallelism would produce a resulting speed up that is close to the product of the speed ups achieved by using the two levels of parallelism separately. However, the speed ups obtained are actually much better than this expectation. This is consistent with the results of the earlier study by Duff and Johnsson [6].

Another interesting, but expected, observation is that the speed ups for Models 0, 3, and 4 correlate strongly with the depth ratio presented earlier. Likewise, Models 1, 2, 3, and 4 correlate strongly with average node weight. The underlying principle is that concurrency within the node is most exploitable by larger nodes and that concurrency across nodes is most exploitable with short and wide elimination DAG structures.

Processor Sets and Utilization. Also analyzed in the unbounded models were an upper bound on the number of processors that could be employed on a given matrix and the associated processor utilization. In particular, Model 0 assumed that any frontal matrix would have only one processor allocated to it. For the block level concurrency of Models 1 and 3, a frontal matrix would have a processor set equal to the number of rows in the frontal. The full concurrency of Models 2 and 4 would use processor sets equal to the row size times the column size of the frontal. The maximum processor usage would then be the largest processor set for any frontal in Models 1 and 2, and the largest sum of processors for any one level in the elimination DAG for Models 0, 3, and 4. This strategy results from the use of concurrency across nodes in Models 0, 3, and 4. Utilizations were then calculated by dividing total sequential time by the product of parallel time times the number of processors used. The results by matrix and model are provided in Table 3.3. The top value in each entry is the size of the processor set and the lower value is the utilization.

[Table 3.3: Processor Set and Utilization Results by matrix and model; entries largely illegible in this copy.]

In all cases the sizes of the processor sets seem quite high and the utilizations quite low, but these estimates are very crude upper bounds and it is likely that significantly better results are realizable.

3.5 Conclusions and Observations

A significant amount of parallelism is achievable with the unsymmetric-pattern multifrontal method. However, very little of the parallelism comes directly from the elimination DAG, as is evidenced by the Model 0 results. Yet a synergism occurs when the parallelism from the elimination DAG is combined with the concurrency available within the factoring of particular frontal matrices. The resulting speed ups, as seen in Models 3 and 4, are very promising.

Looking beyond the speed ups, the large processor sets and low utilizations are of both concern and interest. In particular, the very "bottom heavy" elimination DAGs suggest that better distributions of the processing may be possible. Such processing distributions could be achieved by appropriate task scheduling on bounded sets of processors. The next chapter explores these possibilities using trace-driven simulation techniques.

Chapter 4
Bounded Parallelism Models

The results of the unbounded parallelism models illustrate that significant speed-ups are possible with parallel implementations of the unsymmetric-pattern multifrontal method.
However, the efficient use of processor sets and the speed ups achievable on smaller processor sets are still open questions. This chapter addresses these issues using the results of trace-driven simulations run on the five models defined in the previous chapter. With each model, sixteen different processor sets and three different scheduling schemes were used. The processor sets were powers of 2 from 2^1 to 2^16. The scheduling schemes corresponded to orderings of the work queue and are described in a subsequent section.

The results of the bounded parallelism models will indicate that the speed ups seen in the unbounded models are achievable with reasonable size processor sets and with significant improvements in efficiency. However, the initial versions of these bounded models will also demonstrate the critical importance of processor allocation strategies and task definitions. As a result, several revisions will be made to the initial models.

4.1 Initial Models

The initial bounded parallelism models follow directly from the unbounded parallelism models. The critical difference is that a limited processor set is assumed, so tasks that are ready for execution may have to wait for available processors. This can be appropriately modelled via a trace-driven simulation run against the representation of the elimination DAGs obtained from the traces produced by the sequential version of the unsymmetric-pattern multifrontal method. Critical issues with the simulation will be how to allocate processors when the number available is less than that required and how to order the work that is ready to be done.

4.1.1 Trace-Driven Simulation

The trace-driven simulations used to implement the bounded parallelism models were accomplished with a custom Pascal program that was built as an extension to the analysis program used for the unbounded parallelism models. The simulation uses the topologically sorted representation of the elimination DAG as input. All the nodes (frontal matrices) on the leaf level (Level 0) are initially put into a work queue. Work is scheduled from this queue based on processor availability. The amount of work and required processors for each node is determined based on the specific model being simulated. These models are initially those defined for the unbounded parallelism models. When the model calls for a sequential ordering of the nodes, only one node may be executing at any particular time; otherwise multiple nodes may execute concurrently based on processor availability and the allocation strategy in use. Upon completion of a node's execution, the edges to dependent nodes are removed. When all such edges to a particular node have been removed, that node is put into the work queue, as it is available for execution. The simulation continues until all nodes have completed execution. Speed up and processor utilization are then calculated per the following formulas:

SpeedUp = Time_{sequential} / Time_{finished}

Utilization = \sum_{all nodes j} (Time_j \times Processors_j) / (Time_{finished} \times ProcessorSetSize)

The average number of searches through elements in the work queue is also determined for each simulation run. Time history reports can be produced that track the number of processors in use and the size of the work queue across time. Output from the simulation program can be in either a report format or a Matlab m-file format. The latter format is used to port the results to Matlab for plotting.
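A rough sketch of this simulation loop is given below. It is a reconstruction for illustration only, not the report's Pascal program; the event-heap discipline, the crude allocation placeholder, and all names are assumptions.

    import heapq

    def simulate(leaves, fathers, unmet_sons, work, procs_required, P):
        """Skeleton of the trace-driven simulation: a work queue of ready
        frontals, a pool of P processors, and a heap of completion events.
        work[n] is node n's sequential operation count; a node run on p
        processors is assumed to finish in work[n] / p time units."""
        clock, free = 0.0, P
        ready, running = list(leaves), []
        busy_area = 0.0                              # accumulated processor-time
        counter = 0                                  # tie-breaker for the heap
        while ready or running:
            i = 0
            while i < len(ready):                    # FIFO pass over the work queue
                n = ready[i]
                p = min(procs_required[n], free)     # crude allocation placeholder
                if p > 0:
                    counter += 1
                    heapq.heappush(running, (clock + work[n] / p, counter, n, p))
                    busy_area += work[n]             # p processors for work[n]/p time
                    free -= p
                    ready.pop(i)
                else:
                    i += 1
            finish, _, n, p = heapq.heappop(running) # next completion event
            clock, free = finish, free + p
            for f in fathers[n]:                     # dependents may become ready
                unmet_sons[f] -= 1
                if unmet_sons[f] == 0:
                    ready.append(f)
        t_seq = sum(work.values())
        return t_seq / clock, busy_area / (clock * P)   # speed up, utilization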
4.1.2 Processor Allocations

Allocation of processors to the assembly and factoring of a frontal matrix is a critical issue for the realization of the algorithm on a bounded set of processors. As will be shown in the results of the initial models, the allocation scheme can greatly affect the performance of the algorithm. Since the five unbounded models use three different approaches to factoring a particular frontal, there are three distinct allocation schemes.

The allocation scheme required by Model 0 is trivial, since each frontal matrix in this model is allocated only a single processor.

Models 1 and 3 use block concurrency within a frontal matrix. As this block concurrency is row-oriented, a processor set equal to the row size is required. The difficulty arises when processor sets are available but they are smaller than that required (in fact, the total available number of processors is frequently less than the row size). In this eventuality the total amount of work is evenly distributed across the available processors. This is a fairly crude allocation scheme but is not too unrealistic given the nature of the processing within a frontal. To formally define this allocation scheme, we define the work to be done within a frontal j as before and shown below (recall the following definitions: P_j pivots of node j, c_j columns of node j, and r_j rows of node j):

work_j = c_j + \sum_{i=1}^{P_j} (r_j - i + 1) + P_j + 2 \sum_{i=1}^{P_j} (c_j - i)

With this definition of work, we define the allocation scheme as:

    if processors_available >= row_size then
        Schedule work on row_size processors
    else
        Schedule (work * row_size) / processors_available
            on the processors_available

Models 2 and 4 use full concurrency within each frontal matrix and require a number of processors equal to the number of entries in the frontal (row_size times col_size). A more sophisticated allocation scheme is thus required. Prior to formally defining this scheme, we recall the definition of the time complexity for a frontal using this model:

work_j = \lceil \log_2 S_j \rceil + \sum_{i=1}^{P_j} \lceil \log_2 (r_j - i + 1) \rceil + P_j + 2 P_j

Using this definition and a similar assumption on the ability to distribute work on smaller processor sets, the allocation scheme is formally defined as follows:

    if processors_available >= (col_size * row_size) then
        Schedule work on (col_size * row_size) processors
    else if processors_available >= row_size then
        Schedule (work + 3 * pivots * col_size) on row_size processors
    else if processors_total >= row_size then
        Wait for more available processors
    else
        Schedule (work * col_size * row_size) / processors_available
            on the processors_available

Notice that the deepest nested "else" block provides a fairly crude over-approximation of the amount of work to do. However, this case is only used for the small total processor set sizes, which are not the target processor sets for the parallelism of Models 2 and 4. Thus, this over-estimate is reasonable.
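Written out in code, the decision ladder above might look like the following sketch (names invented; the work-stretching factors follow the crude even-distribution assumption described in the text).

    def allocate_full_concurrency(work, pivots, row_size, col_size,
                                  procs_available, procs_total):
        """Models 2 and 4 allocation: return (time, processors) for one
        frontal, or None to signal 'wait for more processors to free up'."""
        full = row_size * col_size
        if procs_available >= full:
            return work, full                       # ideal: one proc per entry
        if procs_available >= row_size:
            # fall back to block (row) concurrency; pivot/scale/update columns
            # are traversed sequentially, adding roughly 3 steps per pivot column
            return work + 3 * pivots * col_size, row_size
        if procs_total >= row_size:
            return None                             # wait: a better set will free up
        # tiny machines: spread all the work evenly over what is available
        return work * full / procs_available, procs_available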
4.1.3 Scheduling Methods

There are three scheduling methods employed by the work queue. These methods correspond to how the work ready to be executed is ordered within the work queue. The three corresponding orderings are first-in, first-out (FIFO); largest calculation first; and heaviest path first. The FIFO method is self-explanatory and is included because of its implementation simplicity and efficiency. The largest calculation first order uses the node weight estimate of the work to be done and schedules the largest such nodes first. This method is designed to approximate the next method, heaviest path first; it requires a priority queue implementation based on a set of priorities that is quite variable, so there would be some significant penalties to address in implementing it. The last method, heaviest path first, uses the heaviest path determined by the analysis portion of the software. Any node on this path is the next scheduled; other nodes are put in a FIFO ordering. Notice that since the heaviest path is essentially a path through the dependency graph, at most one node from the heaviest path will ever be in the work queue at any one time.

4.1.4 Results

The results of the simulations using the initial models were analyzed using both the report format listings and graphs produced from the output files using Matlab. The results of this analysis are summarized by model. These initial models will be referred to as the baseline models, from which further revisions will be made in the next section. Whenever the results for all five matrices are presented, the following line types are used to represent the different matrices: ------- (solid) mahindasb; -*-*-*- (starred) gemat11; -+-+-+- (plus signs) gre_1107; -0-0-0- (circles) lns_3937; - - (dashed) sherman5. All the results presented in this section are based on the heaviest path first scheduling unless otherwise mentioned.

Model 0: Concurrency Between Elements Only. Model 0 takes advantage of concurrency only between frontal matrices. The speed up and utilization results from Model 0 are shown in Figure 4.1. These indicate that the speed ups seen in the unbounded models are achievable with significantly fewer processors and correspondingly higher utilizations. Table 4.1 compares the number of processors used in the bounded versus the unbounded models to achieve maximum speed up. There were no significant differences produced by the scheduling methods for this model.

Model 1: Block Concurrency Within Elements Only. Model 1 only takes advantage of concurrency within frontal matrices and does so in a block-oriented fashion. The speed up and utilization results are shown in Figure 4.2. The method employed by this model seems well suited for processor sets up to about 256 processors. After that point there is no further advantage to additional processors for any of the test matrices. The various scheduling methods showed no significant differences for this model.

Model 2: Full Concurrency Within Elements Only. The block concurrency of Model 1 is extended to full concurrency in Model 2 but is still restricted to concurrency only within the frontal. The speed up and utilization results for Model 2 are shown in Figure 4.3. A very definite step behavior is evident in the speed up curves. This behavior directly correlates with the allocation scheme in use, which bases allocation on the row size and column size of the frontal matrices. While the speed ups are significant, the step behavior severely limits scalability. Later revisions to this model will resolve this problem. The different scheduling methods showed no significant differences for this model.

Model 3: Block Concurrency Within and Between Elements. Model 3 combines the block concurrency within a frontal of Model 1 with the concurrency between frontals used in Model 0. The speed up and utilization results for this model are shown in Figure 4.4. There is a definite irregular behavior evident in the speed up and utilization curves. Analysis of these irregularities determined that the allocation scheme employed was causing large frontal matrices to be scheduled on a relatively small number of processors.
This could occur if two frontals were in the work queue and the first one required 90% of the processors. Once the first frontal got its processors, the second would be scheduled on the remainder. This would significantly stretch out the time it would need to complete. The first frontal could complete and release its processors, but all subsequent frontals could not be scheduled since they are dependent on the completion of the second frontal. The result is that 90% of the processors go unused for a significant period of time. This scenario is illustrated with a time history of utilization for the sherman5 matrix using 128 processors, provided in Figure 4.5. The problems with processor allocation will be addressed in two subsequent revisions to this model in the next section. There were some significant differences in the scheduling methods used, but these were side effects of the allocation problems. Scheduling methods will be addressed again after the allocation problems are resolved with the subsequent revisions.

Model 4: Full Concurrency Within and Between Elements. The fifth and final model uses concurrency between nodes and full concurrency within nodes. The speed up and utilization results for this model are presented in Figure 4.6. A step-like speed up and corresponding utilization irregularities are evident and very similar to the results for Model 2. These results are also traced to the allocation method in use and will be addressed in a subsequent revision to the allocation scheme presented in the next section. Comparison of these results against the unbounded Model 4 reveals that the maximum speed up is not obtainable with the processor sets tested and the current definition of Model 4.

[Figure 4.1: Model 0 Baseline Results; panels (a) speed up and (b) utilization versus log2 of the number of processors.]
[Table 4.1: Processors Used Comparison, bounded versus unbounded at maximum speed up; entries illegible in this copy.]
[Figure 4.2: Model 1 Baseline Results; panels (a) speed up and (b) utilization versus log2 of the number of processors.]
[Figure 4.3: Model 2 Baseline Results; panels (a) speed up and (b) utilization versus log2 of the number of processors.]
[Figure 4.4: Model 3 Baseline Results; panels (a) speed up and (b) utilization versus log2 of the number of processors.]
[Figure 4.5: Utilization time history for sherman5 with P = 128.]
[Figure 4.6: Model 4 Baseline Results; panels (a) speed up and (b) utilization versus log2 of the number of processors.]

4.1.5 Conclusions and Observations

Several conclusions and observations are apparent from these initial, baseline bounded parallelism models:

- There is an excellent potential for parallelism under both SIMD and MIMD implementations.
- The speed ups indicated in the unbounded models are achievable with practical size processor sets.
- The benefits of the different models are realized within distinct ranges of the number of processors.
- Model 3 exhibits irregular behavior due to inefficient allocation of large critical path elements to small processor sets. Similar, but less severe, results were seen with Model 4.
- Scheduling method has no significant effect on performance for Models 0 and 1. Models 2 and 4 also exhibit consistent behavior for the different scheduling methods; however, a more detailed analysis will be done after these models have been refined. The effect of scheduling on performance for Model 3 is obscured by the irregular behavior of that model.
  Analysis of scheduling for this model will also be postponed until the model has been refined.
- The performance under Models 2 and 4 (those using full concurrency within the elements) displays a definite step-like behavior that correlates strongly with the allocation strategy in use.
- The speed ups obtained by Model 4 were well below the maximum speed ups predicted by the unbounded version of the model for the larger matrices.

4.2 Refined Models

While the baseline models provided some significant insights into the potential of the unsymmetric-pattern multifrontal method, they also illustrated several undesired effects that can result if care is not taken in the allocation of processors and the definitions of tasks. As a result, the allocation method used by Model 3 is revised in two distinct manners to produce more regular behavior. The allocation and task definitions for Models 2 and 4 are revised to address the step-like performance curves obtained from these models. Finally, the concept of pipelining is applied to the assembly process to further improve the performance of Model 4.

4.2.1 Model 3 Revisions

The first version of Model 3 (block concurrency within and between elements) experienced irregular behavior that was traced to inefficient allocation of processors. In particular, when most of the processors were already in use, the allocation algorithm would assign the next frontal to the smaller set of remaining processors. Since this next element could be a large frontal, it would take a long time on the small set of processors. Furthermore, all subsequent elements could (in fact, are very likely to) be dependent on this large frontal. Thus, when other processors are freed up, no other frontals can be factored since they are dependent on this large frontal. This behavior was verified with the time history utilization graphs presented earlier.

Model 3 Revision 1. The idea behind the first approach to this problem is to only assign processors to a task if such an assignment will result in the task being completed at least as soon as if it were postponed until more processors are available. This method requires an accurate prediction of the number of processors to next become available. Such predictions are possible for this algorithm given a centralized scheduler and the predictability of the workload associated with each frontal. The second revision to this model will not require such predictions and is thus well suited for more general application.

New Allocation Scheme. The new allocation scheme is formally defined in the following algorithm. Notice that the case of the full complement of processors being available is implicitly handled.

    Set pointer to beginning of the Work Queue
    while (processors are available) and (more Work Queue entries to check) do
        Calculate when the next entry on the Work Queue would finish if
            allocated processors from those currently available -> Scheduled_Now
        Calculate when the next entry on the Work Queue would finish if
            allocation is delayed until more processors are freed up -> Scheduled_Later
        if (Scheduled_Now <= Scheduled_Later) then
            Schedule the entry now using min(required, available) processors
        Advance Work Queue pointer
    end {while}

Speed Up and Utilization Results. The results of this new allocation scheme are presented in Figure 4.7. The new allocation has resolved the irregularities and provides a very nicely scalable method up through processor sets of 512.

[Figure 4.7: Model 3 Revision 1 Results; panels (a) speed up and (b) utilization versus log2 of the number of processors.]
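The schedule-now-versus-later test could be sketched as follows. The representation of the next processor release as a (time, processors) pair is an assumption; the simulator can supply it because every running task's finish time is known. Here "work" is taken to be the task's parallel time on its full required processor set, stretched proportionally on smaller sets, as in the baseline allocation.

    def schedule_now(work, procs_required, procs_free, clock, next_release):
        """Model 3 Revision 1 test: schedule a task on the currently free
        processors only if doing so finishes no later than waiting for the
        next processor release. next_release = (time, procs_freed)."""
        if procs_free >= procs_required:
            return True                                # full allocation, no penalty
        if procs_free == 0:
            return False
        finish_now = clock + work * procs_required / procs_free
        t_rel, p_rel = next_release
        later_procs = min(procs_required, procs_free + p_rel)
        finish_later = t_rel + work * procs_required / later_procs
        return finish_now <= finish_later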
Time History Utilization Results. Additional verification that the problems with the Model 3 baseline have been resolved is seen in the utilization time history of the sherman5 matrix using 128 processors. A comparison of the baseline and Revision 1 utilization time history results is provided in Figure 4.8. The large gaps of low utilization have been eliminated.

Required Work Queue Searches. A concern with the new allocation scheme is that the scheduling/allocation process may now take much longer, since multiple work queue entries (up to the entire work queue) may have to be checked on each scheduling opportunity. For this reason the average number of work queue entries checked was analyzed for each of the three scheduling methods. Figure 4.9 illustrates this comparison for the lns_3937 matrix. The solid line depicts the FIFO method, the starred (*) line shows the largest calculation first method, and the dashed line shows heaviest path first. While the largest calculation first method had the highest number of searches for all five matrices, its performance relative to the other methods varied with processor set size and by matrix. Thus, while the other two methods appeared to be better in general, their dominance was not consistent. An additional scheduling alternative that was not tested would be smallest calculation first. Since this method would put the entries most likely to be schedulable on limited processors at the front of the work queue, the number of work queue searches would likely drop.

Scheduling Differences. With the ill effects of the initial processor allocation scheme resolved, a serious look at the scheduling method influence can be taken. As a result of this look, I found the effect of scheduling on performance was minimal. The greatest difference was seen for the gemat11 matrix. These results are seen in Figure 4.10. The solid line represents the FIFO method, the starred (*) line is largest calculation first, and the dashed line is the heaviest path first method. The heaviest path first method did offer some minimally better performance for processor sets in the range 256 to 2048.

[Figure 4.8: Utilization time histories for sherman5, (a) original versus (b) Revision 1.]
[Figure 4.9: Average number of work queue searches versus log2 of the number of processors, lns_3937.]
[Figure 4.10: Model 3 Revision 1 scheduling comparison, gemat11.]

Model 3 Revision 2. The second revision to the Model 3 allocation scheme was developed as an alternative to the overhead and limitations imposed by the need to make predictions that was inherent in the first revision. In particular, this second scheme always allocates processors to the next task in the work queue if nothing is currently executing. If, however, only a subset of the total number of processors is available, the next task in the work queue will be scheduled only if its entire number of required processors is available. While this scheme should not produce results that are quite as good as the first revision, it also does not require the additional overhead. Furthermore, the method could use a work queue that is organized in some type of tree structure ordered by processor requirement. With such a work queue, searching for next tasks could be reduced from linear to logarithmic time complexity.
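As an illustration of that remark, a sorted list with binary search can stand in for such a tree-ordered work queue; the class below is our sketch, not part of the report's simulator.

    import bisect

    class ProcOrderedQueue:
        """Work queue keyed by processor requirement. Finding the largest
        entry that fits in the currently free processors is a binary search,
        so the scan drops from linear to logarithmic time (plus the cost of
        keeping the underlying list sorted on insert)."""
        def __init__(self):
            self._keys, self._tasks = [], []
        def push(self, procs_required, task):
            i = bisect.bisect(self._keys, procs_required)
            self._keys.insert(i, procs_required)
            self._tasks.insert(i, task)
        def pop_largest_fitting(self, procs_free):
            i = bisect.bisect_right(self._keys, procs_free) - 1
            if i < 0:
                return None                      # nothing currently schedulable
            self._keys.pop(i)
            return self._tasks.pop(i)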
Revision 2 Allocation Scheme. A more formal definition of this allocation scheme is presented in the following algorithm:

    Set Work Queue pointer to start of Work Queue
    if processors_available = processors_total then
        Schedule next Work Queue entry
        Advance Work Queue pointer
    while (more Work Queue entries to check) and (processors_available > 0) do
        if processors_available >= processors_required then
            Schedule next Work Queue entry
        Advance Work Queue pointer

Speed Up and Utilization Results. The speed up and utilization results produced by this second revision are provided in Figure 4.11. While not quite as good as the results of the first revision, they are very promising. Speed ups increase nicely with processor set size, and the utilization curves are very consistent, with only one minor irregularity. The greatest difference in speed ups was seen for the sherman5 matrix. This difference is shown in Figure 4.12, with the dashed line representing the second revision and the solid line the first revision.

Time History Utilization Results. A comparison of the utilization time histories of Model 3 Revisions 1 and 2 is provided for the sherman5 matrix in Figure 4.13. The time histories use a processor set of 128 (as before) and illustrate that the second revision maintains a very nice processor utilization.

Scheduling Differences. Scheduling method differences for this second revision were very comparable to those of the first revision.

[Figure 4.11: Model 3 Revision 2 Results; panels (a) speed up and (b) utilization versus log2 of the number of processors.]
[Figure 4.12: Model 3 speed ups, Revision 1 versus Revision 2, sherman5.]
[Figure 4.13: Utilization time histories, Revision 1 versus Revision 2, sherman5 (P = 128).]
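A direct, illustrative transcription of this second scheme (with invented names, and a Python list standing in for the work queue) might read:

    def schedule_pass_rev2(queue, procs_required, procs_free, procs_total):
        """Model 3 Revision 2: if the machine is fully idle, start the head
        task regardless of size; otherwise start only tasks whose entire
        requirement fits in the free processors. Returns the tasks started."""
        started = []
        if procs_free == procs_total and queue:
            n = queue.pop(0)
            procs_free -= min(procs_required[n], procs_total)
            started.append(n)
        i = 0
        while i < len(queue) and procs_free > 0:
            n = queue[i]
            if procs_required[n] <= procs_free:
                procs_free -= procs_required[n]
                started.append(queue.pop(i))
            else:
                i += 1
        return started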
4.2.2 Models 2 and 4: Vertical Partitioning

In order to smooth out the step-like performance curves of Models 2 and 4, an allocation scheme is needed that makes more efficient use of the parallelism available. The approach taken is to combine the allocation concepts of the first revision to Model 3 with a finer grain of task definition. I call the approach vertical partitioning since it partitions the work done for a frontal into sets of tasks done sequentially; each task in turn is composed of subtasks that can be accomplished in parallel. This approach should provide the flexibility needed to make more efficient use of the parallelism available. Specifically, the factoring of a frontal matrix is altered to process assemblies one son at a time and to divide up the numerical pivoting, scaling, and active matrix updates. The new assembly processing changes the model definition. The new definition requires more sequential time, since sons are not assembled concurrently, but does not require the additional memory of the earlier approach. The new definition of processing within a frontal for Models 2 and 4 is given by the following formula:

S_j + \sum_{i=1}^{P_j} \lceil \log_2 (r_j - i + 1) \rceil + P_j + 2 P_j

With vertical partitioning each son is assembled separately using a processor set equal to the number of entries contributed by the son. Each such assembly becomes a separate task for scheduling. Once all the sons are assembled, the factorization commences one pivot at a time. First the numerical pivoting is scheduled as a separate task requiring a number of processors equal to the row size of the active submatrix of the frontal. Numerical pivoting requires logarithmic time to complete the parallel prefix operation using the maximum associative operator. Upon determination of the numerical pivot, scaling and updating are scheduled sequentially, each as a set of parallel tasks. The scaling requires a processor set proportional to the row size of the active submatrix. The update uses a processor set equal to the entire size of the active submatrix. This sequence of numerical pivoting, scaling, and updating is repeated for each pivot by which the frontal is reduced. If ever there are insufficient processors available for a particular task group, a strategy similar to that used by Revision 1 of Model 3 is used. That is, the completion times of scheduling the work now on the available processors versus waiting for more processors to be freed up are compared. If immediate scheduling will not delay completion, the tasks are scheduled; otherwise the scheduling is postponed. The basic algorithm for vertical partitioning is shown below, without the logic for insufficient processor availability:

    for each son to be assembled do
        Assemble the contributions from this son in parallel,
            using one processor per entry
    for each pivot (i = 1 to p) for this frontal do
        Determine the maximum valued entry in the first column of the
            active submatrix using (row_size - i + 1) processors
        Do the scaling (multiplier calculation) using (row_size - i) processors
        Update the active submatrix using (row_size - i) * (col_size - i) processors
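The task decomposition just described could be generated as follows; each yielded triple is one schedulable unit, with times taken from the per-frontal formula above. The tuple shape and labels are our own assumptions.

    from math import ceil, log2

    def vertical_partition_tasks(sons_entries, pivots, row_size, col_size):
        """Yield the sequential task groups for one frontal under vertical
        partitioning: one assembly task per son, then for each pivot a
        (pivot search, scale, update) triple. Each yield is
        (label, processors_required, parallel_time)."""
        for k, entries in enumerate(sons_entries):   # one task per contributing son
            yield ("assemble son %d" % k, entries, 1)
        for i in range(1, pivots + 1):
            m = row_size - i + 1                     # active column length
            yield ("pivot search %d" % i, m, ceil(log2(m)) if m > 1 else 1)
            yield ("scale %d" % i, row_size - i, 1)
            yield ("update %d" % i, (row_size - i) * (col_size - i), 2)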
Model 2 Vertical Partitioning. The improvements due to vertical partitioning are quite dramatic. Figure 4.14 illustrates the speed up improvements of the original Model 2 versus the vertical partitioning version. Recall that Model 2 uses concurrency within frontals only. Not only have the curves smoothed out, but they offer much higher speed ups across the entire range of processor set sizes (notice the vertical scales). Likewise, the corresponding utilizations are significantly improved, as is illustrated by the comparison provided in Figure 4.15.

Model 4 Vertical Partitioning. The same vertical partitioning changes were applied to Model 4, which exploits parallelism both within and across nodes in the elimination DAG. Even more dramatic improvements were seen with this revision to Model 4. The speed up comparison is shown in Figure 4.16. These curves illustrate that the maximum speed ups are now achievable with the processor sets tested for all but one of the matrices (lns_3937 has a speed up of about 11,000 using 65,536 processors, where its unbounded Model 4 speed up was just over 11,000 using nearly 180,000 processors). The corresponding utilization improvements of using vertical partitioning with Model 4 are shown in Figure 4.17. Notice that the use of concurrency across nodes has significantly improved utilization also.

Average Work Queue Searches. The scheduling method (ordering of the work queue) with Model 4 vertical partitioning has no real effect on speed ups or utilizations but did affect the average number of work queue searches for the smaller processor sets. Figure 4.18 shows the average work queue searches for gre_1107. The largest calculation first method (starred line) required significantly more searches than FIFO (solid line) or heaviest path first (dashed line).

Utilization Time History. A utilization time history comparison reveals much of the dynamics of Model 4. Figure 4.19 compares utilization time histories for gre_1107 using 16,384 processors on both the original and vertical partitioning versions of Model 4. Subfigures (a) and (b) offer the comparison with a common horizontal scale. Subfigure (c) provides a more detailed look at processor usage with vertical partitioning. This figure reveals much about the nature of the elimination DAG, with the presence of the larger frontals quite evident in the large quadratically decreasing regions.

[Figure 4.14: Model 2 vertical partitioning speed up comparisons, (a) original versus (b) vertical partitioning.]
[Figure 4.15: Model 2 vertical partitioning utilization comparisons, (a) original versus (b) vertical partitioning.]
[Figure 4.16: Model 4 vertical partitioning speed up comparisons, (a) original versus (b) vertical partitioning.]
[Figure 4.17: Model 4 vertical partitioning utilization comparisons, (a) original versus (b) vertical partitioning.]
[Figure 4.18: Model 4 average work queue searches, gre_1107.]
[Figure 4.19: Vertical partitioning utilization time history, gre_1107; panels (a) original, (b) vertical partitioning, (c) vertical partitioning, spread out.]

4.2.3 Model 4: Assembly Pipelining

An additional modification was made to the vertical partitioning version of Model 4. This modification is based on a concept proposed by Professor Ravi Varadarajan to pipeline the assembly process. In particular, the assembly process is initiated by the sons (the contributors) instead of by the fathers (the receivers). As the factorization of a frontal completes, it immediately schedules the assembly of its contributions to other frontals. Once a subsequent frontal has received all the contributions from its sons, it begins its own factorization process. This approach to assembly pipelining is fairly conservative, since the assembly pipeline does not begin until factorization is complete. More aggressive assembly pipeline strategies are possible with closer interactions with the factorization process.

There is an oversight in this assembly pipeline model that limits the value of its comparison to vertical partitioning. Specifically, there are no synchronization provisions made to ensure that the assembly of multiple entries into a single frontal element does not occur concurrently. Thus a CRCW-SUM model must be implicitly assumed for the assembly pipeline, where a CREW model was used for the vertical partitioning version of Model 4. The effect of this oversight is not suspected to be too significant, but further revised runs are necessary to address the issue.

The results of the assembly pipelining runs are that the same nice speed up and utilization curves appeared as were seen with vertical partitioning. Figure 4.20 illustrates the speed up results.

Vertical Partitioning vs Assembly Pipelining. Furthermore, the assembly pipelining version of Model 4 also demonstrated a significant improvement in speed up over the vertical partitioning version. Figure 4.21 shows a comparison of the speed up curves for the gre_1107 matrix under the two versions. Such comparisons must, however, be tempered by the earlier comments on the underlying memory models. Figure 4.22 compares the average work queue searches for vertical partitioning and assembly pipelining, both using heaviest path first scheduling.
The number of searches is larger for assembly pipelining since multiple assemblies are concurrently scheduled as each frontal's factorization completes.

[Figure 4.20: Model 4 assembly pipeline speed up results.]
[Figure 4.21: Speed up comparisons, vertical partitioning versus assembly pipelining, gre_1107.]
[Figure 4.22: Average work queue searches, vertical partitioning versus assembly pipelining, gre_1107.]

4.3 Scalability Analysis

The results presented so far have indicated that the unsymmetric-pattern multifrontal method has some nice scalability features, especially when the fuller ranges of concurrency are employed (i.e., Model 4). Ideally, one would like to quantify this scalability with the use of iso-efficiency curves. However, there is a major obstacle to accomplishing this goal. In particular, it is very difficult to quantify problem size, since the complexity of a sparse matrix problem is a function of its order, sparsity, and degree of symmetry, as well as other factors. More specifically to the unsymmetric-pattern multifrontal method, the structure and node size of the elimination DAG combine to dictate the complexity of the factorization.

One approach to the scalability issue is to isolate a single factor that dominates problem size for a given limited model. In particular, I chose to look at Model 2, whose results were very strongly correlated to average node size. Thus, I defined problem size by average node size and developed a three-dimensional representation of the efficiency function (that is, efficiency equals speed up divided by processors used). Figure 4.23(a) shows this three-dimensional result. The upper left corner corresponds to the 2-processor set, and larger processor sets (4, 8, 16, 32, ..., 65536) are represented coming down the left diagonal. The right diagonal represents average node size, where the leftmost entry is the lns_3937 matrix with an average node size of about 67,800, then sherman5 with 42,093, gre_1107 with 9,998, mahindasb with 1,887, and, finally in the upper right, gemat11 with 1,195. Thus the scales on this figure are logarithmic on one axis and irregular and decreasing on the other. However, in spite of its difficulties, the figure does illustrate efficiencies due to larger node sizes for Model 2. The second subfigure is a contour map of the first subfigure and is presented as an approximation of the iso-efficiency curves. (The vertical axis represents the processor sets of the left diagonal and the horizontal axis indicates the various matrices.)

The concept for this type of scalability analysis was also proposed to me by Professor Ravi Varadarajan. While the difficult nature of this problem and the limited resources available produced less than what could be hoped for, the results do illustrate some positive effects.

[Figure 4.23: Model 2 efficiency curves; panels (a) processor sets versus matrix (average node weight, large to small) and (b) iso-efficiency curves.]

Chapter 5
Conclusions

The bottom line is that a factor-only version of the unsymmetric-pattern multifrontal method for the LU factorization of sparse matrices shows significant potential for parallel implementation. Furthermore, parallelism can be exploited in a variety of degrees and levels. The most significant results occur when parallelism is used both within and across nodes in the elimination DAG. The resulting synergism is very promising, as the refined Model 4 results indicate.
However, exploiting parallelism in the fullest sense also incurs costs, both in scheduling and in mapping the parallelism. Where possible, several of these costs have been addressed with analyses such as the average number of work queue searches and the various allocation and scheduling alternatives.

Another critical observation is that the different degrees of concurrency produce their best results for distinct processor set ranges. Concurrency only across nodes in the elimination DAG (Model 0) saw its best performance for small processor sets (typically no more than eight processors). Block level concurrency within frontals (Models 1 and 3) was best realized on processor sets of 32 to 512 processors. The full concurrency within a frontal models (Models 2 and 4) realized the best results and were dominant over most of the range of processor sets tested. Figure 5.1 compares Model 3 Revision 1 against Model 4 with vertical partitioning on the sherman5 matrix for a processor set range of 32 to 256. The figure illustrates that the higher degree of concurrency of Model 4 does outperform Model 3 in the range of processors where Model 3 has its best performance. Also encouraging is the observation that almost all of the speed ups predicted by the unbounded models are realizable with processor set sizes that are currently implemented.

Other conclusions from this effort are that the structures of the elimination DAGs tend to have dominant near-linear components that directly limit further parallelism. While these structures work well for the sequential version of the method, different construction techniques may be possible to enhance parallelism. Furthermore, the allocation and task definition schemes and methods have very pronounced effects on performance. Workable solutions to these issues have been developed and verified with the models, but care needs to be taken in implementation. Finally, throughout this effort I have approached the actual LU factorization in a very straightforward fashion. This was done for ease of description and ease of modelling. There are many other algorithmic alternatives for accomplishing the factorization that may lend themselves to easier and more efficient parallel implementation. Such alternatives would be a significant implementation issue.

[Figure 5.1: Model 3 (Revision 1) versus Model 4 (vertical partitioning) for P = 32..256, sherman5; speed up versus log2 of the number of processors.]

References

1. Aho, A. V., J. E. Hopcroft, and J. D. Ullman, Data Structures and Algorithms, Addison-Wesley, Reading, MA, 1974.
2. Akl, Selim G., The Design and Analysis of Parallel Algorithms, Prentice-Hall, Englewood Cliffs, NJ, 1989.
3. Amestoy, Patrick R., Factorization of Large Unsymmetric Sparse Matrices Based on a Multifrontal Approach in a Multiprocessor Environment, Doctoral Thesis, CERFACS Report Ref: TH/PA/91/2, Toulouse, France, 1991.
4. Davis, T. A. and I. S. Duff, Unsymmetric-Pattern Multifrontal Methods for Parallel Sparse LU Factorization, Technical Report TR-91-23, Computer and Information Sciences Department, Univ. of Florida, Gainesville, FL, 1991.
5. Duff, I. S., Grimes, R. G., and Lewis, J. G., Sparse Matrix Test Problems, ACM Trans. Math. Softw., 1989, 15, pp. 1-14.
6. Duff, I. S. and S. L. Johnsson, Node Orderings and Concurrency in Structurally-Symmetric Sparse Problems, in Parallel Supercomputing: Methods, Algorithms, and Applications, John Wiley & Sons Ltd, 1989.
7. Golub, G. H. and C. F. Van Loan, Matrix Computations, 2d Ed., Johns Hopkins University Press, Baltimore, MD, 1989.
8. Manber, Udi, Introduction to Algorithms: A Creative Approach, Addison-Wesley, Reading, MA, 1989.
{"url":"http://ufdc.ufl.edu/UF00095117/00001","timestamp":"2014-04-19T10:34:21Z","content_type":null,"content_length":"104816","record_id":"<urn:uuid:f9257e87-0092-42e4-a5f0-47e34caac909>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
G1272, Spray Mix Calculations
Revised March 1997

H. Willard Downs and William W. Casady, Department of Agricultural Engineering
Fred Fishel, Department of Agronomy

Liquid pesticide sprayers must apply the proper amount of a carefully mixed spray solution to be effective in controlling weed and insect pests. This publication describes procedures for determining how much pesticide to mix in the tank so the right amount of pesticide will be applied per acre. Pesticides formulated to be applied as sprays are sold both as liquids and as dry materials such as wettable powders. Calculations for mixing liquids are different from calculations for dry materials. This two-part guide provides specific instructions for mixing both liquid and dry pesticides.

Liquid pesticides

Step 1. Determine the recommended application rate.
Read the label. The recommended range of application rates for the specific formulation is given on the label. The selected pesticide rate should be based on soil, target pest and crop conditions. Normally, the label will list pesticide application rates per acre in quarts or pints. However, some labels may list the application rate in pounds of active ingredient. If so, continue with Step 2. If the label refers to quarts, pints, or other volume measurements, go directly to Step 4.

Step 2. Determine the concentration of active ingredient.
Read the label. The label will show the amount of active ingredient in each gallon of pesticide formulation. This amount is normally shown as pounds of active ingredient per gallon.

Example
acid equivalent = 4 pounds per gallon

Step 3. Calculate the volume of pesticide product to apply per acre.
If the label gives the pesticide application rate in volume units such as quarts or pints, then the amount was found in Step 1. However, if the rate is shown as pounds of active ingredient per acre, then it is necessary to calculate the volume of pesticide to apply per acre. This volume can be found by dividing the pesticide application rate (Step 1) by the number of pounds of active ingredient per gallon (Step 2).

Gallons of pesticide per acre = application rate (pounds per acre) ÷ concentration or acid equivalent (pounds per gallon)

Example
Suppose you want to apply 1.5 pounds of 2,4-D per acre and the 2,4-D contains 4 pounds of active ingredient per gallon.

Gallons of 2,4-D per acre = 1.5 pounds per acre ÷ 4 pounds per gallon = 0.375 gallon per acre, or 3/8 gallon per acre

You may find it useful to convert gallons per acre to pints per acre for measuring purposes.

Pints of 2,4-D per acre = 3/8 gallon per acre × 8 pints per gallon = 3 pints per acre

Step 4. Calculate the number of acres sprayed by a full tank of the spray mixture.
If you use a sprayer with two or more tanks, remember to consider the total volume of all tanks and to divide all ingredients proportionally among the tanks. All references to "tank" in the following material refer to the combined capacity of all tanks. The number of acres sprayed by a full tank is found by dividing the tank capacity by the sprayer application rate, which was found during calibration.

Acres per tank = total tank capacity (gallons per tank) ÷ application rate (gallons per acre)

Example
Your spray tank holds 400 gallons and your sprayer application rate is 20 gallons per acre.
Acres per tank = 400 gallons per tank ÷ 20 gallons per acre = 20 acres per tank

Small fields can be sprayed with partially filled tanks. The pesticide and carrier (water) are added to the tank until the tank is filled to the correct level. The correct volume of spray is the sprayer application rate multiplied by the number of acres. You want to spray a 12-acre field and your sprayer applies 20 gallons per acre.

Gallons of spray mixture = application rate (gallons per acre) × area to spray (acres)

Therefore, put (20 × 12 =) 240 gallons of pesticide and carrier in the tank.

Step 5. Calculate the volume of pesticide to mix in the tank.
The volume of pesticide added to the tank is the number of acres per tank (Step 4) multiplied by the volume of pesticide per acre (Step 3).

Volume of pesticide per tank = (acres per tank) × volume of pesticide per acre (gallons)

You want to spray a full 400-gallon tank.

Gallons of 2,4-D per tank = 20 acres per tank (Step 4) × 0.375 gallon per acre (Step 3) = 7.5 gallons per tank

If you want to spray the small 12-acre field, the amount of 2,4-D added to the tank before bringing the volume up to 240 gallons would be:

Gallons of 2,4-D = 12 acres × 0.375 gallon per acre = 4.5 gallons

Partially fill the spray tank with water before adding pesticides.

Dry pesticides

Some pesticides are formulated and sold as powders and water dispersible granules for mixing with water. These dry formulations are recommended in units of weight per acre. The amount of active ingredient in these products is shown in percent.

Step 1. Determine the recommended rate of application.
Read the label. The recommended range of application rates is given on the label. Be sure the rate you use is the right rate for your soil, target pest, and crop conditions. The rate can be shown in pounds of active ingredient or pounds of product. If the rate is shown as pounds of active ingredient, continue with Step 2. If the rate is shown as pounds of product, go directly to Step 4.

Step 2. Determine the concentration of active ingredient in the pesticide formulation.
Read the label. The label will list the percentage of active ingredient.

atrazine: 80 percent

Step 3. Calculate the weight of pesticide product to apply per acre.
The weight of pesticide product to apply per acre is the pesticide application rate (pounds per acre) divided by the percent active ingredient, multiplied by 100.

Pounds of pesticide product per acre = (application rate × 100) ÷ percent active ingredient

You want to apply 1.5 pounds of atrazine and the label shows atrazine: 80 percent.

Pounds of atrazine formulation per acre = (1.5 pounds of atrazine per acre (Step 1) × 100) ÷ 80 (Step 2) = 1.875 pounds per acre

Step 4. Calculate the number of acres sprayed by each full tank.
Follow the procedure provided in Step 4 for liquid pesticides.

Step 5. Calculate the weight of pesticide to mix in the tank.
The weight of pesticide added to the tank is the number of acres per tank (Step 4) multiplied by the weight of pesticide per acre (Step 3).

Weight of pesticide per tank = (acres per tank) × application rate (pounds of product per acre)

You want to spray a full 400-gallon tank of spray.
Pounds of atrazine product per tank = 20 acres (Step 4) × 1.875 pounds of product per acre (Step 3) = 37.5 pounds per tank

If you want to spray the small 12-acre field, the weight of atrazine product added to the tank before bringing the volume up to 240 gallons would be:

Pounds of atrazine product = 12 acres × 1.875 pounds of product per acre = 22.5 pounds

Partially fill the spray tank with water before adding pesticides.

G1272, revised March 1997
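The two procedures above are plain arithmetic and easy to script. Below is a minimal Python sketch of the liquid-pesticide steps, using the 2,4-D example figures from the text; the function names and structure are our own, not from the publication.

PINTS_PER_GALLON = 8

def gallons_of_pesticide_per_acre(rate_lb_per_acre, concentration_lb_per_gal):
    """Step 3: volume of product per acre from an active-ingredient rate."""
    return rate_lb_per_acre / concentration_lb_per_gal

def acres_per_tank(tank_capacity_gal, application_rate_gal_per_acre):
    """Step 4: acres covered by one full tank of spray mixture."""
    return tank_capacity_gal / application_rate_gal_per_acre

def pesticide_per_tank(acres, product_per_acre):
    """Step 5: amount of product (gallons or pounds) to add to the tank."""
    return acres * product_per_acre

# Example from the text: 1.5 lb/acre of 2,4-D at 4 lb active ingredient per
# gallon, a 400-gallon tank, and a sprayer calibrated at 20 gallons per acre.
vol_per_acre = gallons_of_pesticide_per_acre(1.5, 4.0)   # 0.375 gal/acre
print(vol_per_acre * PINTS_PER_GALLON)                   # 3.0 pints/acre
acres = acres_per_tank(400.0, 20.0)                      # 20 acres/tank
print(pesticide_per_tank(acres, vol_per_acre))           # 7.5 gallons/tank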
{"url":"http://extension.missouri.edu/publications/DisplayPub.aspx?P=G1272","timestamp":"2014-04-16T13:06:29Z","content_type":null,"content_length":"41850","record_id":"<urn:uuid:747d1cf1-c35c-4645-b10d-60c239199f17>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
JMU - positions

I. A tenure-track faculty position is available in mathematics and statistics beginning August 2014.

General information: The Department of Mathematics and Statistics at James Madison University invites applications for a tenure track position in pure mathematics, statistics, mathematics education, or applied mathematics starting August 2014. Applicants should have a doctoral degree in mathematics, mathematics education, or statistics. Compatibility of research with current department needs and interests will be a factor. Candidates should have demonstrated success in teaching and potential in scholarship. Interest in participation in the department's active undergraduate research program is a plus. (Note that a directed search in mathematics education with an earlier review date is also underway, but mathematics educators are also invited to apply for this position.)

To apply: Applicants should submit an online application through https://joblink.jmu.edu (Click on "apply for this posting." You will need to create a user name and password for this site. Note that a response is "required" for only some fields.) Review of applications begins November 25, 2013. Apply by that date to guarantee full consideration. Department representatives will be present at the Joint Mathematics Meetings.

James Madison University is a comprehensive, coeducational state university, located in the beautiful Shenandoah Valley of Virginia. Enrollment is approximately 20,000. The primary focus of the department is undergraduate teaching, with an active research program in that context.

James Madison University is committed to a diverse and inclusive community and to maintaining a work and educational environment that is free of all forms of discrimination. This institution does not tolerate discrimination or harassment on the basis of age, color, disability, genetic information, national origin, parental status, political affiliation, race, religion, sex, sexual orientation or veteran status.

For additional information, see the department homepage at http://www.jmu.edu/mathstat/

II. A tenure-track faculty position is available in mathematics education beginning August 2014.

General information: The Department of Mathematics and Statistics at James Madison University invites applications for a tenure-track position in mathematics education, starting August 2014. Preference is given to candidates invested in the teaching and improvement of mathematics content courses for prospective PreK-8 teachers. The department and mathematics education program value collaborative scholarship among colleagues, so preference will be given to candidates whose research interests allow for collaboration with current mathematics education faculty in the department and in the College of Education. Candidates must have a doctoral degree in mathematics education and a master's degree or equivalent in mathematics or statistics. The successful applicant will join a growing number of mathematics educators in the department.

To apply: Applicants should submit an online application through https://joblink.jmu.edu

Complete applications will include:
· letter of application,
· curriculum vitae,
· statement of teaching philosophy,
· research statement,
· transcript photocopies, and
· three letters of recommendation.

In addition to a mathjobs.org application, applicants must also complete a short online form at joblink.jmu.edu/applicants/Central?quickFind=60301 (Click on "apply for this posting."
You will need to create a user name and password for this site. Note that a response is "required" for only some fields.)

James Madison University, located in the beautiful Shenandoah Valley of Virginia, is a comprehensive, coeducational state university with an approximate enrollment of 19,000 students. The mathematics education group at JMU includes active scholars in mathematics education as well as mathematicians interested in PreK-12 mathematics education. The department teaches mathematics majors with secondary education minors as well as PreK-8 prospective teachers in an interdisciplinary major. The department also offers an M.Ed. in Mathematics for current secondary mathematics teachers. Teaching of mathematics methods courses may be arranged on request through exchanges with the JMU College of Education.

For additional information, see the department homepage at http://www.jmu.edu/mathstat/

Review of applications will begin October 18, 2013. To guarantee full consideration, submit materials by that date. Department representatives will be present at the Joint Mathematics Meetings.
{"url":"http://www.jmu.edu/mathstat/positions.shtml","timestamp":"2014-04-17T12:44:08Z","content_type":null,"content_length":"14726","record_id":"<urn:uuid:57cecb51-1646-4437-8bd2-c5663e0a9aa5>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
Studying the Heart Left Ventricular Motion with 4D Transformations
by Jérôme Declerck, Jacques Feldmar and Nicholas Ayache

The heart has a vital function in the body. A malfunction may have fatal consequences. Coronary artery disease remains one of the leading causes of death in developed nations. In a large number of instances, the first symptom is a myocardial infarction, and half of the myocardial infarctions cause death. Detecting and preventing this is one of the major goals of modern medicine. Cardiologists assume that analysis of the motion of the heart, especially the left ventricle (LV), can give precise information about the health of the myocardium. Modern techniques provide 3D images which describe either the anatomy of the heart (eg, MRI) or its functionality (eg, Nuclear Medicine SPECT imaging). It is possible to get sequences of such images over the whole cardiac cycle; such sequences are real 3D movies of the cardiac motion.

The cardiac motion, like the motion of any real object, must therefore be described as a 4D (3D + time) continuous and regular transformation of time and space. The purpose of this work is to define a class of 4D planispheric transformations which describes the LV motion, and to develop a method to estimate such a transformation from a sequence of 3D heart images. The transformation is defined in 3D-planispheric coordinates by a small number of parameters involved in a set of simple linear equations. It is continuous and regular in time and space, and periodicity in time can be imposed. The local motion can be easily decomposed into a few canonical motions (centripetal contraction, rotation around the long-axis, elevation) that give useful information to the physician for his diagnosis.

For our experiments, we use sequences of gated SPECT (Single Photon Emission Computed Tomography) images, covering the whole cardiac cycle in 8 images. First, points featuring the edges of the left ventricle (endocardium, epicardium) are automatically extracted from the images. We write a 4D least-squares criterion so that its minimisation yields an optimal solution expressed as a 4D planispheric transformation. This criterion involves pairs of matched points over the whole sequence: those pairs are determined and the criterion is minimised using an adaptation of the Iterative Closest Point algorithm.

Once the 4D transformation is obtained, a local analysis is performed: the motion is decomposed into three independent elementary motions featuring:
□ a radial motion describing the centripetal contraction that occurs during the systole (first shrinking part of the cardiac cycle),
□ a motion along the long axis of the LV describing the shortening, and
□ a motion describing the rotation around the long axis.

Figure: Fourier analysis of the centripetal contraction parameter. On the top, a normal heart; on the bottom, a pathological case. The surfaces of the LVs are shaded according to the values of the parameters.

The dependency in time and space allows us to study the evolution of the parameters (centripetal contraction, ...) over time and can potentially reveal pathologies: for instance, the figure illustrates the time analysis performed on the centripetal contraction motion. The amplitude (left) and the phase (right) of the first Fourier harmonic for the parameter values over time are displayed with colors on the surface of the LV. On the top, a normal case; on the bottom, a pathological case suffering from a septal ischemia (reduced blood flow).
The ischemia induces an akinesia of the area (the area does not move): the pathological area is located on the left of the images (the septum); the comparison to the normal case reveals its difference (it appears darker because the centripetal contraction is nearly absent) and the type of the pathology (akinesia).

A clinical validation of the protocol is currently under study on a series of gated SPECT images in collaboration with Prof. Michael Goris (Stanford University Hospital) and on tagged Magnetic Resonance images in collaboration with Dr. Elliot McVeigh (Johns Hopkins University). Our hope is that such software could be used by physicians to better assess myocardial health using non-invasive imaging technology.

Please contact:
Jérôme Declerck - INRIA
Tel: +33 4 93 65 76 63
E-mail: jdecler@sophia.inria.fr

Nicholas Ayache - INRIA
Tel: +33 4 93 65 76 61
E-mail: na@sophia.inria.fr
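The first-harmonic analysis described above is straightforward to reproduce for a sampled parameter: project each surface point's time series onto the first Fourier mode to get an amplitude and a phase. The Python sketch below uses made-up sample data and is only an illustration of that step, not the authors' code.

import numpy as np

# Hypothetical data: contraction parameter sampled at 8 time frames
# (one gated SPECT cycle) for each surface point of the LV mesh.
n_points, n_frames = 5, 8
rng = np.random.default_rng(0)
contraction = rng.random((n_points, n_frames))

# First Fourier harmonic over the cardiac cycle, per surface point.
coeffs = np.fft.rfft(contraction, axis=1)
first_harmonic = coeffs[:, 1]

amplitude = np.abs(first_harmonic) * 2 / n_frames  # strength of the motion
phase = np.angle(first_harmonic)                   # timing within the cycle

# An akinetic (non-moving) region would show up as near-zero amplitude,
# which is how the septal ischemia appears darker in the figure above.
print(amplitude, phase)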
{"url":"http://www.ercim.eu/publication/Ercim_News/enw31/declerck.html","timestamp":"2014-04-17T12:31:03Z","content_type":null,"content_length":"5752","record_id":"<urn:uuid:7c7e369f-6c48-44da-8581-de5601f7613d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: February 2009

Re: Problem with DSolve

• To: mathgroup at smc.vnet.net
• Subject: [mg96467] Re: [mg96410] Problem with DSolve
• From: Bob Hanlon <hanlonr at cox.net>
• Date: Sat, 14 Feb 2009 03:09:16 -0500 (EST)
• Reply-to: hanlonr at cox.net

Use exact numbers:

soln = DSolve[{y'[x] == 2/100*y[x] - y[x]^2, y[0] == a}, y[x], x][[1]]

{y(x) -> (a E^(x/50))/(50 a E^(x/50) - 50 a + 1)}

soln /. x -> 0

y'[x] == 2/100*y[x] - y[x]^2 /. NestList[D[#, x] &, soln[[1]], 1]

Bob Hanlon

---- Tony <aezajac at optonline.net> wrote:

can anyone help what is wrong? On version 7 I enter and get

During evaluation of In[58]:= Solve::ifun: Inverse functions are being used by Solve, so some solutions may not be found; use Reduce for complete solution information. >>
During evaluation of In[58]:= Solve::ifun: Inverse functions are being used by Solve, so some solutions may not be found; use Reduce for complete solution information. >>
During evaluation of In[58]:= DSolve::bvnul: For some branches of the general solution, the given boundary conditions lead to an empty solution. >>
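Not part of the original thread, but the same exact-number check can be reproduced with open-source tools; here is our own SymPy sketch of the calculation Bob Hanlon performs above:

import sympy as sp

x, a = sp.symbols('x a')
y = sp.Function('y')

# The same logistic-type ODE, with the exact rational 2/100 rather than 0.02.
ode = sp.Eq(y(x).diff(x), sp.Rational(2, 100) * y(x) - y(x)**2)
sol = sp.dsolve(ode, y(x), ics={y(0): a})
print(sol)  # algebraically equivalent to a*exp(x/50)/(50*a*exp(x/50) - 50*a + 1)

# Verify the solution satisfies the ODE.
print(sp.checkodesol(ode, sol))  # (True, 0)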
{"url":"http://forums.wolfram.com/mathgroup/archive/2009/Feb/msg00496.html","timestamp":"2014-04-18T13:19:36Z","content_type":null,"content_length":"25879","record_id":"<urn:uuid:669786e5-91b8-427c-ab55-f4e0458a7f82>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Which is greater: 355*356 or 354*357?

August 6th 2009, 02:01 PM - Voluntarius Disco

Well, I'm looking through a means of solving problems and I just want a little more explanation to this method of solving that question.

Represent 354=a, 357=b, 355=c, and 356=d. We now have that
1) a+b=c+d;
2) |b-a|=|d-c|
We want to prove that ab < dc.

This is an elementary question maybe but... how did the problem solving leap from multiplication to addition for the first step? Can anybody clear that up for me - on why it's a+b=c+d, because... to me... it doesn't. And I'm not seeing it. D:

August 6th 2009, 02:13 PM

$355\times 356= 355 \times (355+1)=355^2+355$
$354\times 357=(355-1)(355+2)=355^2+355-2$
so we conclude $355\times 356>354 \times 357$

August 6th 2009, 03:22 PM

You could always solve for when this identity works too by setting up an inequality. In our case, n is equal to 355, but let's expand the inequality to solve for n:

$n(n+1) \text{ vs. } (n-1)(n+2)$
$n^2+n \text{ vs. } n^2+n-2$

Now, unfortunately, we cannot solve for any n since that variable falls out of the equation. However, what this result does tell us is that the left side is greater by two, which is exactly what CaptainBlack showed with numbers.
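The algebra above is easy to sanity-check numerically; a quick, informal Python loop (not part of the original thread) confirming that n(n+1) exceeds (n-1)(n+2) by exactly 2 for every integer n:

# n(n+1) - (n-1)(n+2) = 2 for every integer n, so the "balanced" product wins.
for n in range(-1000, 1000):
    assert n * (n + 1) - (n - 1) * (n + 2) == 2

print(355 * 356, 354 * 357)  # 126380 126378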
{"url":"http://mathhelpforum.com/number-theory/97194-greater-355-356-354-357-a-print.html","timestamp":"2014-04-19T05:28:32Z","content_type":null,"content_length":"6839","record_id":"<urn:uuid:f6a0222e-12b8-489a-ac71-80c79ad42e01>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
On the Complex Plane

Indra's Pearls: The Vision of Felix Klein. David Mumford, Caroline Series and David Wright. xx + 396 pp. Cambridge University Press, 2002. $50.

Swirly polychrome pictures of the Mandelbrot set and other intricate fractal objects were much in vogue 25 years ago, when computer graphics was still a novelty. Many of those images now seem quaint and dated, like paisley neckties, and one can't help wondering if the mathematics behind them was also merely a passing fad. Indra's Pearls makes a strong claim to the contrary; the mathematics here is unquestionably genuine. The swirly fractal images are also pretty impressive. (Even the Mandelbrot set makes a cameo appearance, albeit in black and white.)

The particular fractal patterns explored by David Mumford, Caroline Series and David Wright have their roots in a simple geometric operation: inverting a circle. In this context "inverting" does not mean turning the circle upside down but rather turning it inside out, mapping every point of the interior onto a point outside the circle, and vice versa. Suppose you have several circles, like coins lying on a table, and you apply the same inverting transformation repeatedly to all of them. Then each circle appears reflected in all the others, and the reflections also get reflected, forming a series of nested disks. In Buddhist tradition this is the vision of Indra's net, which is studded with infinitely many shiny pearls. As the authors put it:

"The pearls in the net reflect each other, the reflections themselves containing not merely the other pearls but also the reflections of the other pearls. In fact the entire universe is to be found not only in each pearl, but also in each reflection in each pearl, and so ad infinitum."

The mathematical counterparts of Indra's pearls live on the plane of complex numbers, and the process of reflection is modeled by a transformation applied to this plane. The transformation, or mapping, is defined by a 2 x 2 matrix; multiplying the coordinates of each point by the matrix yields a new point. Depending on the nature of the matrix, the transformation might merely change the position or orientation of figures drawn in the plane, or it might also alter shapes and sizes. Of particular interest are a family of transformations called Möbius maps, after the German mathematician August Ferdinand Möbius (who also lends his name to the famous one-sided loop). A Möbius mapping distorts some shapes, but it has the special property that it always maps circles into circles—which is just what's needed to reproduce the reflections of Indra's pearls.

Mumford, Series and Wright are interested in the "limit set" of this family of transformations. When the mapping is applied repeatedly, the reflected circles become more numerous, but they also get steadily smaller; in the limiting case of infinitely many iterations, there are infinitely many circles, but they shrink away to dimensionless points, usually forming a disconnected "dust." Indra's Pearls records a 20-year collaborative effort to describe and explore the limit set of this process.

Among books of mathematics, it is unusual in two respects. First, it focuses on the journey rather than the destination. The reader is invited to tag along and watch the passing scenery—and maybe even help paddle the boat from time to time—but the guides can't say at the outset where the voyage will end. Second, theorems, pictures and computer programs are all equally important in this story.
I don't mean to suggest that the authors are practicing some sort of postmodern mathematics in which pictures or programs take the place of proofs. In their account, however, the process of discovery usually begins with "Let's write the program and see what happens"; the proof comes later. The mathematics presented is not difficult, but there is a lot of it: Group theory, topology, non-Euclidean geometry, linear algebra, limits. All of it is patiently explained, but the reader too must be patient. By the time you finish, you'll know your way around the complex plane.—Brian Hayes
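As an aside prompted by the review (not something from the book itself), the fact that a Möbius map z -> (az+b)/(cz+d) carries circles to circles is easy to probe numerically. This Python sketch uses an arbitrarily chosen matrix and checks that the images of circle points are equidistant from a common center:

import numpy as np

# A Mobius map z -> (a z + b) / (c z + d), here with an arbitrary matrix.
a, b, c, d = 1, 2j, 1, 3  # ad - bc != 0

def mobius(z):
    return (a * z + b) / (c * z + d)

# Points on the unit circle centered at 2 + 0j (chosen to avoid the pole at -3).
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
z = 2 + np.exp(1j * theta)
w = mobius(z)

def circumcenter(p, q, r):
    # Center of the circle through three points, via the standard formula.
    ax, ay = p.real, p.imag
    bx, by = q.real, q.imag
    cx, cy = r.real, r.imag
    D = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / D
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / D
    return ux + 1j * uy

center = circumcenter(w[0], w[70], w[140])
radii = np.abs(w - center)
print(radii.std())  # effectively zero: the image points are concyclic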
{"url":"http://www.americanscientist.org/bookshelf/pub/on-the-complex-plane","timestamp":"2014-04-16T08:26:24Z","content_type":null,"content_length":"105827","record_id":"<urn:uuid:82101730-ec2b-4a70-ad87-c120c544b4c8>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
OLH - Number of points required?

Hi, I'm currently looking at using an OLH to define a sample set of points in order to construct an MLS metamodel. However, I understand the number of sample points should be around 1.5 - 2 times the number of coefficients required to approximate a polynomial to my design variables. If I have 4 design variables, how many coefficients should I be expecting to need in order to approximate this?
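For what it's worth, a hedged back-of-the-envelope check, assuming the MLS metamodel uses a full quadratic polynomial basis (the count depends on the basis actually chosen): a full quadratic in d variables has C(d+2, 2) coefficients.

from math import comb

d = 4                      # number of design variables
n_coeffs = comb(d + 2, 2)  # constant + linear + quadratic (incl. cross) terms
print(n_coeffs)            # 15

# Rule of thumb from the post: 1.5x - 2x as many sample points.
print(1.5 * n_coeffs, 2 * n_coeffs)  # 22.5 30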
{"url":"http://www.physicsforums.com/showthread.php?p=4165498","timestamp":"2014-04-16T19:08:30Z","content_type":null,"content_length":"19895","record_id":"<urn:uuid:f121d819-a716-44dc-93d9-461c6354296d>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
Comsol Tutorial - PDF search results

- Comsol Tutorial: Laminar Flow Reactor (PDF, 1.46 MB, 181 pages): "Comsol Tutorial: Laminar Flow Reactor, Spring 2008. Objectives: 1) Describe to a graduate student the difference between the prediction of the outlet conversion ..."
- COMSOL Tutorial Bwaise III (PDF, 1.71 MB, 173 pages): "Original tutorial: Roger Thunvik (http://www.lwr.kth.se/Grundutbildning/1B1635/QH/CourseHome/index.htm). Updated by Emma Engström, 26-Jan-10. This is ..."
- COMSOL 4.2 Tutorial (PDF, 1.39 MB, 77 pages): "COMSOL Multiphysics (formerly FEMLAB) is a finite element analysis, solver and simulation software / FEA software package for various physics and ..."
- Tutorial 1: Heat conduction in a slab (PDF, 1.24 MB, 172 pages): "2007 Cornell University BEE453, Professor Ashim Datta. Authored by Vineet Rakesh and Frank Kung. Software: COMSOL 3.3 ..."
- Comsol Tutorial: Laminar Flow Reactor with Heat Transfer Effects (PDF, 1.51 MB, 72 pages): "Microsoft Word - Femlab Tutorial - Heat transfer Reactor from ECRE 2007.doc"
- Finite Element Models for Heat Transfer Modeling and Control (PDF, 1.32 MB, 111 pages): "This week you will experiment with COMSOL finite element modeling software for a variety of diffusion experiments. We will start with four tutorial case studies for practice ..."
- An Introduction to COMSOL Multiphysics (PDF, 1.32 MB, 179 pages): "Introduction. A solid undergoes thermal expansion due to the application of heat along with deformation due to ..."
- Finite Element Method (FEM) & Comsol Multiphysics (PDF, 2.21 MB, 104 pages): "Next lecture: FDTD & EMPIRE: The Basics. Next COMSOL tutorial on Tuesday 17.11.09, 5 pm in BA328: 'FEM simulations of electrodynamic fields'."
- COMSOL Multiphysics Instruction Manual (PDF, 1.72 MB, 176 pages): "Introduction to COMSOL Multiphysics ... 4. COMSOL video tutorial website (www.victordanilochkin.org/comsol). In case you still cannot obtain MATLAB scripts for some reason, there ..."
- How to contact COMSOL: Benelux (PDF, 5.14 MB, 182 pages): "Contents: Preface, 2; Tube in Current, 3; Introduction to the Lesson, 3; Key Instructive Elements ..."
- Benchmarking COMSOL Multiphysics 3.4 (PDF, 3.85 MB, 144 pages): "This test problem is taken from the ANSYS 11.0 Micro-Electromechanical System (MEMS) tutorial [2]. 3. Electromagnetic wave propagation (COMSOL Multiphysics 3.4 versus HFSS v10)."
- Parameterizing a Geometry using the COMSOL Moving Mesh Feature (PDF, 5.81 MB, 140 pages): "This example shows how to parameterize a CAD model by deforming the corresponding finite element mesh. The method enables geometry parameterization of non-parameterized CAD ..."
- CHEE 4367 (Required) Chemical Reaction Engineering (PDF, 1.29 MB, 75 pages): "11/3 Heterogeneous reactions, Fogler Ch. 10, recitation problem set; 11/6 Exam 2 (Saturday morning); 11/8 AICHE Week - Comsol Tutorial; 11/10 AICHE Week - COMSOL Tutorial; 11/15 Diffusion review ..."
- COMSOL Multiphysics (PDF, 2.29 MB, 120 pages): "Contents: Chapter 1: Introduction; Typographical Conventions, 2; Chapter 2: Application Mode ..."
- Specifying time varying Boundary Conditions (PDF, 1.24 MB, 183 pages): "... dependent variable vs. time (see Method 1 below), similar to how we specified properties varying with temperature for the CRYOSURGERY tutorial. The second option in COMSOL is to ..."
- MEMS reliability in shock environments (PDF, 1.24 MB, 196 pages): "Presented at IEEE International Reliability Physics Symposium in San Jose, CA, April 10-13, 2000, pp. 129-138. Danelle M. Tanner, Jeremy ..."
- COMSOL Multiphysics Quick Start and Quick Reference (PDF, 2.61 MB, 193 pages): "... functionality of COMSOL Multiphysics across its entire range from geometry modeling to postprocessing. It serves as a tutorial and a reference guide to using ..."
- FLOWS IN LIQUID FOAMS: A finite element approach (PDF, 2.32 MB, 131 pages): "... boundary conditions; moving meshes; implementation of the surface tension. Tutorial: Introduction to COMSOL Multiphysics ..."
- COMSOL Multiphysics Instruction Manual (PDF, 5.43 MB, 57 pages): "COMSOL video tutorial website (www.victordanilochkin.org/comsol) ..."
- ANSYS Sales Portal Tutorial and Usage Guidelines (PDF, 2.11 MB, 151 pages): "ANSYS Sales Portal Tutorial and Usage Guidelines, 3/25/2008. The ANSYS Sales Portal is a ... Competitive data: Abaqus, Algor, COMSOL, Cosmos, MSC, Plassotech 3G, Pro/Mechanica, UG ..."
- CoventorWare Tutorial (PDF, 1.28 MB, 59 pages): "To learn the most useful aspects of CoventorWare, you only need to complete the first two modal analyses. Can perform modal analyses similar to Comsol/ANSYS, but potentially ..."
- [Untitled] (PDF, 1.24 MB, 56 pages): "... many available commercial programs: Abaqus, FLUENT, Comsol Multiphysics, ... Tutorial 1: Truss problem. You will now use ANSYS to analyse your first problem."
- Heat and Moisture In Building Envelopes (PDF, 1.5 MB, 124 pages): "The Multiphysics software COMSOL is used in all exercises. 0.1 Install the Comsol Class Kit on your computer (see Appendix A for details). 0.2 Start the tutorial of Appendix B involving ..."
- Introduction to Computational Fluid Dynamics (PDF, 2.22 MB, 185 pages): "Available CFD software: ANSYS CFX (http://www.ansys.com, commercial); FLUENT (http://www.fluent.com, commercial); STAR-CD (http://www.cd-adapco.com, commercial); FEMLAB (http://www.comsol.com, commercial) ..."
- Numerical Methods in MATLAB (PDF, 1.33 MB, 162 pages): "... software packages such as MATLAB and COMSOL Multiphysics. In this tutorial, we will introduce some of the numerical methods available in MATLAB."
- Measurement of High Heat Flux Heat Transfer Coefficient (PDF, 1.24 MB, 161 pages): "... become familiar with FEHT; it is suggested that the reader stop and go through the tutorial. Widely used programs such as ANSYS, COSMOS, and COMSOL can handle 3-D problems and provide ..."
{"url":"http://tutorial6.com/c/comsol-tutorial","timestamp":"2014-04-16T13:23:48Z","content_type":null,"content_length":"32124","record_id":"<urn:uuid:dc8e5253-acc6-48f0-a8ed-f36c8bea6a50>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Modular Arithmetic

May 22nd 2013, 06:41 AM #1 (Super Member, Aug 2010)

Modular Arithmetic in Z:
Definition: a(modm) stands for the remainder when a is divided by m.
Definition: a(modm) + b(modm) is (a+b)(modm).
Definition: a(modm) x b(modm) is axb(modm).
Definition: a≡b(modm): a and b leave the same remainder when divided by m.
Theorem: If a≡b(modm) and c≡d(modm) then a+b≡c+d(modm)
Example: 3mod4 + 2mod4 ≡ 5mod4 ≡ 1mod4

May 22nd 2013, 11:06 AM #2 (Super Member, Aug 2010)

Re: Modular Arithmetic

By popular demand, I'll prove the theorem (and correct a typo).
Theorem: If a≡b(modm) and c≡d(modm) then a+c≡b+d(modm).
a≡b(modm) → a=q1m+r, b=q2m+r → (a-b)=qm
c≡d(modm) → (c-d)=q'm
→ (a+c)-(b+d) = (q+q')m → a+c≡b+d(modm)

* So I don't leave with a guilty conscience: a-b=qm, a=q1m+r1, b=q2m+r2 → r1=r2 → a≡b(modm).

The point of the theorem was to show it is not a definition of addition. Now that you understand congruences and modular arithmetic (definitions and operations), don't look back at any problems - remember Lot's wife.

EDIT: For a rainy day, if m is prime there is a cancellation law. Didn't want to clutter up the structure.

May 24th 2013, 09:37 AM #3 (Super Member, Aug 2010)

Re: Modular Arithmetic

The OP says (intentionally) that a(modm) STANDS for the remainder. Then a(modm)+b(modm) becomes the addition of symbols, which is a little unsettling.

If a(modm) IS the remainder, ie 0,1,2,..,m-1, then you can keep the arithmetic of remainders in Z if you define +' as:
a(modm) +' b(modm) = [a(modm) + b(modm)](modm)
Similarly for multiplication. Then,
Theorem: a(modm) +' b(modm) = (a+b)(modm)
Proof: a(modm)=r1 and b(modm)=r2, then (a+b)modm = (r1+r2)(modm).

In effect, you have replaced a relation (≡) among symbols with a different definition of + (+') among integers, in the manner of + for complex integers. (This is the Math (free discussion) forum, right?)

Still getting the initial MHF screen which says "Install Flash Player HD required," but now with "required" in Italian (to thwart anti-virus software I assume). Be nice to have a separate forum for this type of thing.

May 24th 2013, 10:48 AM #4 (MHF Contributor, Apr 2005)

Re: Modular Arithmetic

I'm not sure why you would call that a "definition". If a (mod m) = x and b (mod m) = y, then a = x + mp for some integer p and b = y + mq for some integer q. Then a + b = x + mp + y + mq = (x + y) + m(p + q).

Quote:
Definition: a(modm) x b(modm) is axb(modm).
Definition: a≡b(modm): a and b leave same remainder when divided by m
Theorem: If a≡b(modm) and c≡d(modm) then a+b≡c+d(modm)
Example: 3mod4 + 2mod4 ≡ 5mod4 ≡ 1mod4

What is your question? Are you trying to prove the theorem? The way you have stated it, it is NOT true. For example, 1 = 5 (mod 4) and 2 = 6 (mod 4), but 1 + 5 is NOT equal to 2 + 6 (mod 4). The first is equal to 2 (mod 4) and the second is equal to 0 (mod 4). But that is probably a typo. What is true is: "If a≡b(modm) and c≡d(modm) then a+c≡b+d(modm)."

If a ≡ b (mod m) then a = b + pm for some integer p, and if c ≡ d (mod m) then c = d + qm for some integer q. Then a + c = (b + pm) + (d + qm) = b + d + (p + q)m, which says that a + c ≡ b + d (mod m).

May 25th 2013, 07:00 AM #5 (Super Member, Aug 2010)

Re: Modular Arithmetic

You are correct, there is a typo in the theorem in post 1, noted in the first line of post 2 (noticed it too late), where the corrected theorem was stated and proved.

If 3mod4=3 and 2mod4=2, then 3mod4+2mod4 = 5, which is why you need a definition (≡ instead of =): 2mod4 + 3mod4 ≡ 5mod4.

I started this thread to clarify a situation, namely, that nowhere could I find a definition of addition in modular arithmetic other than descriptive. What I was looking for was something that said the elements X form a ring with addition and multiplication defined by ---. Something like the definition of addition of two matrices: A+B=C where cij=aij+bij.

In my search it appeared that addition in modular arithmetic was simply assumed to be a description (construction), without spelling out exactly what was being added, and how you write down the addition, a+b=c. I answered this in my first post.

In retrospect, post 3 was a little pedantic and unnecessary. Stick with the OP, which is clear, concise, and easily remembered, but has a typo (noted in post 2) in the Theorem. It should be, as stated in post 2:
Theorem: If a≡b(modm) and c≡d(modm) then a+c≡b+d(modm). RIGHT
Instead of
Theorem: If a≡b(modm) and c≡d(modm) then a+b≡c+d(modm). WRONG
Wish I had caught the edit.

May 27th 2013, 08:21 AM #6 (Super Member, Aug 2010)

Congruence Rings in Modular Arithmetic - Revised Thinking

Definition: a≡b modm if m|(b-a), or, if b and a leave the same remainder when divided by m. (a, b, m integers.)
c is called the modular sum of a & b if (a+b)≡c modm.
c is called the product of a & b if ab≡c modm.

For a fixed m=n, and with ≡ replacing =, the above defines a ring Zn for the integers 0,1,2,..,n-1, or a ring of residue classes if the integers are divided into classes with the same remainder 0,1,2,..,n-1.

Ring: addition, multiplication, associativity, commutativity, 0, 1, and distributivity.
Domain: ring plus cancellation law (m prime).
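The corrected theorem (posts #2 and #4) is also easy to spot-check numerically; an informal Python sketch, not part of the original thread:

import random

# Check: a = b (mod m) and c = d (mod m)  =>  a + c = b + d (mod m),
# and likewise for products.
for _ in range(10_000):
    m = random.randint(2, 50)
    b, d = random.randrange(10**6), random.randrange(10**6)
    a = b + m * random.randint(-100, 100)   # a is congruent to b mod m
    c = d + m * random.randint(-100, 100)   # c is congruent to d mod m
    assert (a + c) % m == (b + d) % m
    assert (a * c) % m == (b * d) % m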
{"url":"http://mathhelpforum.com/math/219189-modular-arithmetic.html","timestamp":"2014-04-20T01:29:45Z","content_type":null,"content_length":"48537","record_id":"<urn:uuid:7f200d22-8f6e-4f15-afe7-c7ddf27fe2a1>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
a simple algorithm and questions

02-18-2002 #1 (Registered User, Join Date: Feb 2002)

As a test of my programming ability, I tried to write a little program that inputs the diagonal of a monitor and calculates the viewable area. It then optionally inputs the diagonal of another monitor, calculates the viewable area, and compares the two areas. (I got this idea when I upgraded from a 15-inch to a 17-inch.) My algorithm is based on the Pythagorean Theorem and the fact that monitors seem to have a 3:4 height-to-width ratio. After a cin statement to get the float variable inputDiag, which is the monitor's diagonal, it goes like

diagSquared = inputDiag * inputDiag;
heightSquared = (diagSquared / 25) * 9;
widthSquared = diagSquared - heightSquared;
height = sqrt(heightSquared);
width = sqrt(widthSquared);
area = height * width;
return area;

My questions are as follows:
1. Is there a way to improve this algorithm?
2. If the values 25 and 9 above were changed to variables, would the algorithm, as a function, be a candidate for conversion to a header file I could later #include in any programs that might need it, just to save me having to type it in all over again?
Thanks for all replies.
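On question 1, a possible simplification (offered here as an editorial aside, not a reply from the original thread): since height² = (9/25)·d² and width² = (16/25)·d² for a 3:4 ratio, the area collapses algebraically to (3d/5)(4d/5) = 12·d²/25, so no square roots are needed. A sketch in Python for brevity, with the 3:4 ratio generalized as question 2 suggests:

def viewable_area(diag, ratio_h=3.0, ratio_w=4.0):
    """Area of a screen with the given diagonal and height:width ratio.

    For 3:4, height = 3d/5 and width = 4d/5, so area = 12*d*d/25 --
    no square roots required.
    """
    denom = ratio_h**2 + ratio_w**2
    return (ratio_h * ratio_w / denom) * diag * diag

print(viewable_area(15.0))  # ~108.0 sq. in.
print(viewable_area(17.0))  # ~138.7 sq. in.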
{"url":"http://cboard.cprogramming.com/cplusplus-programming/11242-simple-algorithm-questions.html","timestamp":"2014-04-23T10:34:49Z","content_type":null,"content_length":"39539","record_id":"<urn:uuid:80a9c682-dc27-4776-ae4a-d77785629ed8>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
Consider a circuit in which a battery of emf $V$ is connected in series with an inductor of inductance $L$ and a resistor of resistance $R$. In steady-state, the current $I$ flowing around the circuit takes the value $I = V/R$ specified by Ohm's law. Note that, in a steady-state, or DC, circuit, zero back-emf is generated by the inductor, according to Eq. (243), so the inductor effectively disappears from the circuit. In fact, inductors have no effect whatsoever in DC circuits. They just act like pieces of conducting wire.

Let us now slightly modify our circuit by adding a switch, as shown in Fig. 46. Suppose that the switch is initially open, but is suddenly closed at $t=0$. Thus, although the inductor does not affect the final steady-state value of the current flowing around the circuit, it certainly does affect how long after the switch is closed it takes for this final current to be established.

If the instantaneous current $I$ flowing around the circuit changes in time, then a back-emf $-L\,dI/dt$ is generated by the inductor [see Eq. (243)]. Applying Ohm's law around the circuit, we obtain

$V - L\,\frac{dI}{dt} = I\,R,$    (257)

which yields

$\frac{dI}{dt} = -\frac{R}{L}\left(I - \frac{V}{R}\right),$    (258)

where $V/R$ is the final steady-state current. Equation (257) can be rewritten

$\frac{d(I - V/R)}{I - V/R} = -\frac{R}{L}\,dt,$    (259)

since $V$, $R$, and $L$ are constants. Given that no current flows before the switch is closed, the initial condition is

$I(t=0) = 0.$    (260)

Integration of Eq. (259), subject to the initial condition (260), yields

$\ln\!\left(1 - \frac{I\,R}{V}\right) = -\frac{R\,t}{L}.$    (261)

Thus, it follows from Eq. (261) that

$I(t) = \frac{V}{R}\left[1 - e^{-R\,t/L}\right].$    (262)

The above expression specifies the current $I$ flowing around the circuit a time interval $t$ after the switch is closed; see Fig. 47. It can be seen that, when the switch is closed, the current rises smoothly from zero and attains its steady-state value $V/R$ asymptotically; the current has reached about 63% of its final value a time $L/R$ after the switch is closed (since $1 - e^{-1} \simeq 0.63$). The quantity $L/R$ is termed the time-constant, or, somewhat unimaginatively, the L over R time, of the circuit.

Suppose that the current flowing in the circuit discussed above has settled down to its steady-state value $V/R$, and that the battery is then suddenly switched out of the circuit. Applying Ohm's law around the circuit, in the absence of the battery, we obtain

$-L\,\frac{dI}{dt} = I\,R.$    (265)

The boundary condition is

$I(t=0) = \frac{V}{R}.$    (266)

Integration of Eq. (265), subject to the boundary condition (266), yields

$I(t) = \frac{V}{R}\,e^{-R\,t/L}.$    (267)

According to the above formula, once the battery is switched out of the circuit, the current decays smoothly to zero. After one L/R time (i.e., after $t = L/R$) the current has decayed to about 37% of its initial value (since $e^{-1} \simeq 0.37$).

We can now appreciate the significance of self inductance. The back-emf generated in an inductor, as the current flowing through it tries to change, effectively prevents the current from rising (or falling) much faster than the L/R time of the circuit. This effect is sometimes advantageous, but is often a great nuisance. All circuits possess some self inductance, as well as some resistance, so all have a finite L/R time. This means that, when a circuit is powered up, the currents do not rise instantaneously to their steady-state values; instead, the rise is spread out over the L/R time of the circuit (see Fig. 48). Clearly, there is little point in us having a fancy power supply unless we also possess a low inductance wire, so that the signal from the power supply can be transmitted to some load device without serious distortion.

Figure 48: Typical difference between the input wave-form (top) and the output wave-form (bottom) when a square-wave is sent down a line with finite L/R time.

Richard Fitzpatrick 2007-07-14
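A quick numerical illustration of the rise and decay laws (262) and (267) above; the component values below are arbitrary, chosen only for the demonstration:

import math

V, R, L = 12.0, 6.0, 0.3      # volts, ohms, henries (arbitrary values)
tau = L / R                   # the L/R time of the circuit

def i_rise(t):                # current rise after the switch closes at t = 0
    return (V / R) * (1.0 - math.exp(-t / tau))

def i_decay(t):               # current decay after the battery is removed at t = 0
    return (V / R) * math.exp(-t / tau)

print(i_rise(tau) / (V / R))   # ~0.632: 63% of final value after one L/R time
print(i_decay(tau) / (V / R))  # ~0.368: 37% of initial value after one L/R time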
{"url":"http://farside.ph.utexas.edu/teaching/302l/lectures/node104.html","timestamp":"2014-04-18T15:40:37Z","content_type":null,"content_length":"23379","record_id":"<urn:uuid:5ad85110-6e82-4f2a-a69f-1093036e444b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - which would have greater pressure at the end, an L shaped or Y shaped funnel?

bauer99999 (Feb 23-12, 09:58 PM):
Which would have greater pressure at the end, an L shaped or Y shaped funnel?

Re: pressure
Assuming a fairly slow flow velocity, and assuming the same height for each funnel, it would be approximately the same.

jim hardy (Feb 25-12, 04:27 AM):
Re: pressure
The taller one?

elliott (Feb 26-12, 12:52 AM):
Re: pressure
Correct me if I'm wrong (I'm taking the 2nd calculus-based physics course: fluids, thermo, etc.). The pressure of a fluid in a container at rest can be defined as

p = p0 + ρgh

where p is the absolute pressure, the pressure you're looking for at a certain depth; p0 is the pressure from the atmosphere, which bears down on the fluid; ρ is the density of the fluid; g of course is gravity; and h is the length from the surface to where you're trying to find the pressure.

So if we're dealing with containers undergoing the same pressure from the atmosphere, that contain the same density liquid, and have the same gravitational force applied to them, then the only factor is the height, or length from the surface to the point where you're looking for the pressure. In essence, the pressure at any given depth really only depends upon the depth, but not on any horizontal dimension.

So if we're to assume this Y shaped container and this L shaped container are both the same height, then the pressure at any point in those containers will be the same! And this would also apply for any kind of shape you can think of (of the same height).

Take this into consideration when constructing your next beer bong.
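elliott's point takes one line to compute; an illustrative Python sketch (the fluid and depth values are arbitrary):

rho, g, p0 = 1000.0, 9.81, 101_325.0  # water, SI units, 1 atm

def pressure(depth_m):
    # Hydrostatic pressure at a given depth: p = p0 + rho*g*h.
    return p0 + rho * g * depth_m

# Same depth in an L-shaped or a Y-shaped funnel => same pressure.
print(pressure(0.30))  # ~104268 Pa, regardless of the funnel's shape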
{"url":"http://www.physicsforums.com/printthread.php?t=580801","timestamp":"2014-04-21T09:57:01Z","content_type":null,"content_length":"6560","record_id":"<urn:uuid:2a98196c-a784-48a8-9325-15497abd401a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Conroe Geometry Tutor

Find a Conroe Geometry Tutor

...I'd love to talk more about tutoring for your specific situation and look forward to hearing from you. During my time at Texas A&M I learned many different study methods that I put into practice. Additionally, I have tutored (and currently tutor) a number of students & have been able to help them...
17 Subjects: including geometry, reading, calculus, algebra 1

...With over a dozen shows under my belt, I have served as cast member, stage crew, light design, sound tech, stage manager and director. I recently served as stage manager for a professional production in the Houston area and have accepted an offer to work in Conroe as Stage Manager for an upcoming...
73 Subjects: including geometry, reading, English, writing

I have been tutoring for seven years and teaching High School Mathematics for four years. My first year teaching, my classroom's TAKS scores increased by 40%. This last year I had a 97% pass rate on the Geometry EOC and my students still contact me for math help while in college. I know I can help...
8 Subjects: including geometry, physics, biology, algebra 1

...I have taught students of all ages from high school through graduate students, and I especially enjoy working with adults who want to learn new skills and are going back to school after being in the real world. I look forward to working with you and contributing to your success. A real advantage ...
20 Subjects: including geometry, writing, algebra 1, algebra 2

...And if my method of teaching isn't working I'll try another approach that I think will fit the student I'm teaching. I'm excited about tutoring and look forward to helping you learn! Took prealgebra growing up and made an A+ when I completed the course. This subject was never difficult for me.
9 Subjects: including geometry, calculus, physics, algebra 1
{"url":"http://www.purplemath.com/conroe_tx_geometry_tutors.php","timestamp":"2014-04-17T15:30:59Z","content_type":null,"content_length":"23771","record_id":"<urn:uuid:a458f6cc-3592-41aa-83c5-337afe86e86e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebraic and Geometric Topology 2 (2002), paper no. 27, pages 563-590.

Linking first occurrence polynomials over F_p by Steenrod operations
by Pham Anh Minh and Grant Walker

Abstract. This paper provides analogues of the results of [G. Walker and R.M.W. Wood, Linking first occurrence polynomials over F_2 by Steenrod operations, J. Algebra 246 (2001), 739--760] for odd primes p. It is proved that for certain irreducible representations L(\lambda) of the full matrix semigroup M_n(F_p), the first occurrence of L(\lambda) as a composition factor in the polynomial algebra P = F_p[x_1,...,x_n] is linked by a Steenrod operation to the first occurrence of L(\lambda) as a submodule in P. This operation is given explicitly as the image of an admissible monomial in the Steenrod algebra A_p under the canonical anti-automorphism \chi. The first occurrences of both kinds are also linked to higher degree occurrences of L(\lambda) by elements of the Milnor basis of A_p.

Keywords. Steenrod algebra, anti-automorphism, p-truncated polynomial algebra T, T-regular partition/representation
AMS subject classification. Primary: 55S10. Secondary: 20C20.
DOI: 10.2140/agt.2002.2.563
E-print: arXiv:math.AT/0207213
Submitted: 24 January 2002. Accepted: 10 July 2002. Published: 20 July 2002.

Pham Anh Minh, Department of Mathematics, College of Sciences, University of Hue, Dai hoc Khoa hoc, Hue, Vietnam
Grant Walker, Department of Mathematics, University of Manchester, Oxford Road, Manchester M13 9PL, U.K.
Email: paminh@dng.vnn.vn, grant@ma.man.ac.uk
{"url":"http://www.emis.de/journals/UW/agt/AGTVol2/agt-2-27.abs.html","timestamp":"2014-04-16T16:10:28Z","content_type":null,"content_length":"3455","record_id":"<urn:uuid:5c27040b-796d-4bf0-a9dc-098fd8fb53c4>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00647-ip-10-147-4-33.ec2.internal.warc.gz"}