The present invention relates to the determination of spin parameters of a sports ball while in flight, and in particular to the determination of the spin axis and/or a rotational velocity of the sports ball. Such parameters are highly interesting both for using and developing sports balls and other sports equipment, such as golf clubs, irons, rackets, bats or the like used for launching sports balls. For golf balls, such determinations normally have been made by adding to the golf balls strips or patterns of a radar reflecting material. This, however, can only be done for test purposes, in that this type of ball is highly standardized. Technologies of this type may be seen in U.S. Pat. No. 6,244,971 and US 2002/0107078. The present invention aims at being able to perform these determinations without altering the sports balls.

In a first aspect, the invention relates to a method of estimating a spin axis of a sports ball while in flight, the method comprising:
1. determining at least part of a 3D-trajectory of the flying sports ball,
2. estimating, from the trajectory, an acceleration, preferably a total acceleration, of the sports ball at a predetermined position along the trajectory,
3. estimating an acceleration of the sports ball caused by gravity at the predetermined position,
4. estimating an acceleration of the sports ball caused by air resistance/drag at the predetermined position, and
5. estimating the spin axis, at the predetermined position, on the basis of the estimated accelerations.

In general, it may be argued that for a rotationally symmetric sports ball in flight, only three forces act: gravity, the air resistance or drag, and the so-called lift of the ball caused by any spin thereof. Thus, estimating the individual accelerations will bring about information facilitating the determination of the lift, or the direction thereof, caused by a rotation of the ball. Thus, the deviation from a trajectory positioned in a single, vertical plane, in which the acceleration is caused by gravity and drag, may be caused by the spin. However, a lift and a spin may also act within this vertical plane.

It should be noted that knowledge is only required in a small area around the predetermined position, in that only the overall acceleration thereof is to be determined. This may e.g. be determined from two points along the trajectory where position and velocity are known. Preferably, the determination of the spin axis is performed at a number of positions along the trajectory of the ball. Thus, preferably, at least steps 2-4 are performed at each of a plurality of points in time. Then, step 5 may be performed once on the basis of the accelerations determined at a plurality of points in time (such as from an average thereof), or the spin axis may be determined for each of the points in time in order to determine a time variation of the spin axis.

Also, it is clear that the trajectory information may be derived in any suitable manner, such as the use of a RADAR, 3D imaging equipment, or the like. Naturally, the trajectory may be represented as the coordinates of the ball at one or more points in time. The coordinate system may be chosen in any manner. Preferably, step 5 comprises subtracting the accelerations estimated in steps 3 and 4 from that estimated in step 2, determining a residual acceleration, and estimating the spin axis on the basis of a direction of the residual acceleration. Thus, the spin axis may be determined using simple vector calculus.
In this situation, the spin axis of the ball will be perpendicular to the direction of the residual acceleration, in that the spin of the ball will act to turn the direction of the ball. Also, step 4 may comprise estimating a velocity of the ball at the predetermined position from the trajectory and estimating the acceleration on the basis of the estimated velocity, or rather a deviation in velocity between two points on the trajectory.

Another aspect of the invention relates to a system for estimating a spin axis of a sports ball while in flight, the system comprising:
1. means for determining at least part of a 3D-trajectory of the flying sports ball,
2. means for estimating, from the trajectory, an acceleration, preferably a total acceleration, of the sports ball at a predetermined position along the trajectory,
3. means for estimating an acceleration of the sports ball caused by gravity at the predetermined position,
4. means for estimating an acceleration of the sports ball caused by air resistance/drag at the predetermined position, and
5. means for estimating the spin axis, at the predetermined position, on the basis of the estimated accelerations.

Again, the means 2-4 may be adapted to perform the estimations at each of a plurality of predetermined positions, and the means 5 are preferably adapted to subtract the accelerations estimated in steps 3 and 4 from that estimated in step 2, determine a residual acceleration, and estimate the spin axis on the basis of a direction of the residual acceleration, in order to e.g. facilitate an easy determination of the axis. When the accelerations have been estimated at a plurality of positions, the spin axis may be determined (means 5) once for all these positions or for each position. Also, the means 4 may be adapted to estimate a velocity of the ball at the predetermined position from the trajectory and estimate the acceleration on the basis of the estimated velocity.

A third aspect of the invention relates to a method of estimating a rotational velocity or spin frequency of a rotating sports ball in flight, the method comprising:
1. at a number of points in time during the flight, receiving electromagnetic waves reflected from the rotating sports ball and providing a corresponding signal,
2. performing a frequency analysis of the signal, and identifying one, two or more discrete spectrum traces positioned at least substantially equidistantly in frequency and being continuous over time, and
3. estimating the velocity/frequency from a frequency distance between the discrete spectrum traces.

In the present context, any type of electromagnetic wave may be used, such as visible radiation, infrared radiation, ultrasound, radio waves, etc. In addition, any number of points in time may be used. It may be preferred to receive the radiation as long as a meaningful detection is possible or as long as the spectrum traces may be determined in the signal. Normally, the reception and subsequent signal analysis is performed at equidistant points in time. In order to ensure that the distance between the spectrum traces is correctly determined, preferably more than 2 equidistant spectrum traces are identified. Naturally, the frequency analysis may result in a spectrum of the signal. This, however, is not required, in that only the equidistant spectrum traces are required. In this context, a spectrum trace is a sequence of frequencies which is at least substantially continuous in time but which may vary over time.
In the present context, a trace normally is a slowly decaying function, but any shape is in principle acceptable and determinable.

Preferably, step 1 comprises receiving the reflected electromagnetic waves using a receiver, and step 2 comprises identifying, subsequent to the frequency analysis, a first frequency corresponding to a velocity of the ball in a direction toward or away from the receiver, wherein identification of the spectrum traces comprises identifying spectrum traces positioned symmetrically around the first frequency. In this manner, another frequency is determined which may aid in ensuring that the equidistant spectrum lines are correctly determined. In addition, also requiring symmetry around this frequency further helps to ensure a stable determination.

In a preferred embodiment, step 2 comprises, for each point in time and sequentially in time:
- performing the frequency analysis and an identification of equidistant candidate frequencies for a point in time,
- subsequently identifying those candidates which each have a frequency deviating at most a predetermined amount from a frequency of a candidate of one or more previous points in time,
- then identifying, as the frequency traces, traces of identified candidates,
and where step 3 comprises estimating the velocity/frequency on the basis of the identified spectrum traces.

This has the advantage that the determination may be made sequentially, such as in parallel with the receipt of the reflected radiation. Also, a noise cancellation is performed, in that what might resemble valid equidistant spectrum lines in one measurement may not have any counterparts in other, such as neighbouring, measurement(s), whereby it may be deleted as a candidate. In this context, the predetermined amount or uncertainty within which a candidate should lie may be a fixed amount, a fixed percentage, or a measure depending on e.g. an overall signal-to-noise ratio.

A fourth aspect of the invention relates to a system for estimating a rotational velocity or spin frequency of a rotating sports ball in flight, the system comprising:
1. a receiver adapted to, at a number of points in time during the flight, receive electromagnetic waves reflected from the rotating sports ball and provide a corresponding signal,
2. means for performing a frequency analysis of the signal, and identifying one, two or more discrete spectrum traces positioned at least substantially equidistantly in frequency and being continuous over time, and
3. means for estimating the velocity/frequency from a frequency distance between the discrete spectrum traces.

Naturally, the comments relating to the third aspect again are relevant. Thus, the means 2 may be adapted to identify, subsequent to the frequency analysis, a first frequency corresponding to a velocity of the ball in a direction toward or away from the receiver and to identify, as the spectrum traces, spectrum traces positioned symmetrically around the first frequency. A preferred manner of determining the velocity/frequency is one wherein the means 2
are adapted to, for each point in time and sequentially in time:
- perform the frequency analysis and the identification of equidistant candidate frequencies for a point in time,
- subsequently identify those candidates which have a frequency deviating at most a predetermined amount from a frequency of a candidate of one or more previous points in time,
- then identify, as the frequency traces, traces of identified candidates,
and where the means 3 are adapted to estimate the velocity/frequency on the basis of the identified spectrum traces.

A fifth aspect relates to a method of estimating a spin, comprising a spin axis and a spin frequency, of a sports ball while in flight, the method comprising estimating the spin axis as in the first aspect of the invention and estimating the spin frequency according to the third aspect. A sixth and final aspect of the invention relates to a system for estimating a spin, comprising a spin axis and a spin frequency, of a sports ball while in flight, the system comprising the system according to the second aspect of the invention, for determining the spin axis, and the system according to the fourth aspect, for determining the spin frequency.

In the following, a preferred embodiment of the invention will be described with reference to the drawing, wherein:
FIG. 1 is a schematic illustration of a rotating ball and a Doppler radar,
FIG. 2 illustrates a spectrum having equidistant spectrum lines,
FIG. 3 illustrates the determination of equidistant spectrum lines,
FIG. 4 illustrates a measured 3D trajectory of a golf ball,
FIG. 5 illustrates the final spin frequency chart over time,
FIG. 6 illustrates a spin vector relating to the trajectory of FIG. 4,
FIG. 7 is a flow chart over the detection of spin frequency,
FIG. 8 illustrates the determination of the orientation of the spin vector,
FIG. 9 is a flow chart of the determination of the orientation of the spin vector, and
FIG. 10 is a flow chart of the determination of the orientation of the spin vector when it can be assumed that the spin axis lies in a known plane.

Using a Doppler radar to measure the spin frequency of sports balls has been known for years; see U.S. Pat. No. 6,244,971 and US 2002/0107078 A1. However, all these inventions are based on modifying the reflection off some area of the ball, typically by adding conducting material either under or on the cover of the ball. The present embodiment also uses a Doppler radar, but does not require any modifications to the ball in order to extract the spin frequency. This aspect increases the commercial value of the present invention significantly. In the past, the orientation of the spin axis of a rotating ball has been measured by using cameras placed close to the launching area. These systems only provide the orientation of the spin axis at one point in space, right after launch. The present invention uses 3-dimensional trajectory measuring equipment to measure the spin axis orientation during flight. The present invention makes it possible to have a continuous measurement of the spin frequency and spin axis orientation during the entire flight of the ball.

Spin Frequency

Consider a Doppler radar 3 in FIG. 1. The Doppler radar comprises a transmitter 4 and a receiver 5. The transmitted wave 6 at frequency Ftx is reflected on the ball 1; the reflected wave 7 from the ball 1 has a different frequency Frx. The difference between the reflected frequency and the transmitted frequency is called the Doppler shift F[dopp].
F[dopp] is proportional to the relative speed Vrad of the reflecting point A on the ball 1 relative to the radar 3:

    F[dopp,A] = 2/λ * Vrad    [1]

where λ is the wavelength of the transmitted frequency. A coordinate system 2 is defined as having its origin in the center of the ball and its X-axis always pointing directly away from the radar; the Z-axis is in the horizontal plane. Vrad is the change in range from the Doppler radar 3 relative to time (Vrad = dR/dt). With the coordinate system 2 in FIG. 1, Vrad equals the X component of the velocity of the ball 1. The strongest reflection from the ball 1 will always be from the point A which is perpendicular to the line-of-sight from the radar. When the ball 1 is spinning, the point A with the strongest reflection will in fact be different physical locations on the ball over time. The output signal of the Doppler receiver 5 from the reflection of point A on the ball can be written as:

    x[A](t) = a(t) * exp(−j * F[dopp,A] * t)    [2]

where a(t) is the amplitude of the received signal. Consider now the situation of a spinning ball 1 with an angular velocity ω around the Z-axis. The reflection from a fixed point B on the ball 1, at a radius r, will have a Doppler shift relative to the radar 3 of:

    F[dopp,B] = 2/λ * (Vrad − r*ω*sin(ω*t))    [3]

The output signal of the receiver 5 from the reflection of point B on the ball can be written as:

    x[B](t) = a(t) * d(t) * exp(−j * F[dopp,B] * t)    [4]

where d(t) is the relative amplitude of the received signal from point B relative to point A on the ball 1. By substituting [2] and [3] in [4], one gets:

    x[B](t) = x[A](t) * d(t) * exp(j * 2/λ * r*ω*sin(ω*t) * t)    [5]

It is seen that the output signal from point B consists of the signal from point A modulated by a signal x[modB](t):

    x[modB](t) = d(t) * exp(j * 2/λ * r*ω*sin(ω*t) * t)    [6]

The exponential term of the modulating signal is recognized as a frequency modulation (FM) signal, with a modulation frequency of ω/2π and a frequency deviation of 2/λ * r*ω. From modulation theory it is well known that a sinusoidal frequency modulation gives a spectrum with discrete frequency lines at the modulation frequency ω/2π and harmonics thereof; the power of the spectrum line of the m'th harmonic is equal to J[m](4π*r/λ), where J[m]() is the Bessel function of the first kind of m'th order. The amplitude signal d(t) of the modulating signal in [6] will also have a time-dependent variation. Like the exponential term in [6], d(t) will also be periodic with the period T = 2π/ω. Consequently, the spectrum of d(t) will also have discrete spectrum lines equally spaced ω/2π. The relative strength of the individual harmonics of d(t) will depend on the reflection characteristics for the different aspect angles.

In summary, because of reflection from a physical point B on a spinning ball at positions other than where this point is closest to the radar (at point A), the received signal will have equally spaced sidebands symmetrical around the Doppler shift F[dopp,A] caused by the velocity of the ball. The sidebands will have multiple harmonics and will be spaced exactly the spin frequency of the ball, ω/2π. Only in the case of a perfectly spherical ball will there be no modulation sidebands. On a normal sports ball there will be several areas on the ball that are not perfectly spherical. Each of these points will give discrete sidebands spaced the spin frequency.
The total spectrum of all the scatterers on the ball will then add up to the resulting received signal, which of course also has discrete sidebands spaced the spin frequency.

In the above, the spin axis was assumed to be constant over time and parallel with the Z-axis. If the spin axis is rotated α around the Y-axis and then rotated β around the X-axis, it can easily be shown that the X component of the velocity of point B equals:

    Vx,B = cos(α) * r*ω*sin(ω*t)    [7]

Note that Vx,B is independent of the rotation β around the X-axis. Since Vx,B is also periodic with the period T = 2π/ω, except for the special case of the spin axis lying along the X-axis (α = 90 deg), the corresponding Doppler shift from point B with a rotated spin axis will also have discrete sidebands spaced exactly the spin frequency of the ball, ω/2π. This means that as long as the spin axis orientation changes slowly compared to the spin frequency, the spectrum of the received signal will contain discrete frequency sidebands spaced the spin frequency of the ball, ω/2π.

In FIG. 2 the received signal spectrum of a golf ball in flight is shown. It is clearly seen that the spectrum contains a strong frequency line that corresponds to the velocity of the ball, as well as symmetric sidebands around this velocity that are equally spaced with the spin frequency. First the ball velocity is tracked 8 using standard tracking methods. Then symmetrical frequency peaks around the ball velocity are detected 9. In FIG. 3 the frequency offsets of the symmetrical sidebands are shown relative to the ball velocity. The different harmonics of the spin sidebands are tracked over time using standard tracking methods 10. The different tracks are qualified 11, requiring the different harmonic tracks to be equally spaced in frequency. The different tracks are solved for their corresponding harmonic number 12. After this, the spin frequency can be determined from any of the qualified harmonic tracks 13, provided that the frequency is divided by the respective harmonic number. The final spin frequency chart over time is shown in FIG. 5, which contains all of the harmonic tracks. The step-by-step procedure for measuring the spin frequency is described in FIG. 7.

Spin Axis Orientation

The 3-dimensional trajectory of the ball flight is obtained by appropriate instruments. In the preferred embodiment of the present invention, the radar used for measuring the spin frequency is also used to provide a 3-dimensional trajectory of the ball flight, see FIG. 4. Assuming that the ball is rotationally symmetric (spherical) to a high degree, there will be three and only three forces acting on the ball. Referring to FIG. 8, the accelerations will be:
- gravity acceleration, G
- air resistance/drag acceleration, D
- lift acceleration, L

The total acceleration acting on a flying ball is consequently:

    A = G + D + L    [8]

Examples of balls that satisfy the rotational symmetry criteria are: golf balls, tennis balls, baseballs, cricket balls, soccer balls, etc. The drag is always at 180 deg relative to the airspeed vector Vair. The lift acceleration L is caused by the spinning of the ball and is always in the direction given by ω×Vair (× means vector cross product), i.e. 90 deg relative to the spin vector ω and 90 deg relative to the airspeed vector Vair. The spin vector ω describes the orientation of the spin axis, identified with the spin unity vector ωe, and the magnitude of the spin vector ω is the spin frequency ω found through the algorithm described in FIG. 7.
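Before turning to the airspeed and lift calculations, the Doppler sideband picture above can be made concrete with a small simulation. The following is my own illustrative Python sketch, not part of the patent: it builds a receiver signal containing a strong return from the specular point A plus a weaker return from one fixed scatterer B on a spinning ball (equations [2]-[5]), and then reads the spin frequency off the spacing of the sidebands around the velocity line, in the spirit of the FIG. 7 procedure. All numerical values (wavelength, speed, amplitudes) are assumptions chosen only to make the plot of the idea readable.

    import numpy as np

    np.random.seed(0)
    lam    = 0.0125   # radar wavelength [m], illustrative value only
    v_rad  = 30.0     # radial speed of the ball towards the radar [m/s] (assumed)
    f_spin = 50.0     # true spin frequency [Hz] (assumed)
    r      = 0.0015   # radius of the circle described by scatterer B [m] (assumed)
    fs, T  = 20000.0, 0.2
    t      = np.arange(0.0, T, 1.0/fs)
    omega  = 2*np.pi*f_spin

    # return from the specular point A: a plain Doppler line at 2*v_rad/lambda (eq [2])
    sig  = np.exp(-1j*4*np.pi/lam * (v_rad*t))
    # weaker return from a fixed scatterer B riding on the spinning ball (eqs [3]-[5])
    sig += 0.2*np.exp(-1j*4*np.pi/lam * (v_rad*t + r*np.cos(omega*t)))
    sig += 0.02*(np.random.randn(t.size) + 1j*np.random.randn(t.size))  # receiver noise

    mag   = np.abs(np.fft.fft(sig*np.hanning(t.size)))
    freqs = np.fft.fftfreq(t.size, 1.0/fs)
    df    = fs/t.size

    k0 = np.argmax(mag)   # strongest line: the ball-velocity Doppler shift (step 8)
    print("velocity line:", abs(freqs[k0]), "Hz, expected", 2*v_rad/lam)

    # the nearest strong sideband above the velocity line sits exactly one spin
    # frequency away; steps 9-13 of FIG. 7 track and qualify such sidebands over time
    offsets = np.arange(3, int(300/df))          # skip the velocity line's own mainlobe
    k1 = offsets[np.argmax(mag[k0 + offsets])]
    print("sideband spacing:", k1*df, "Hz, true spin frequency:", f_spin)

In a real measurement the sidebands are of course tracked and qualified over many consecutive spectra rather than read off a single FFT, which is exactly what the tracking, qualification and harmonic-numbering steps 10-12 add.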
The airspeed vector is related to the trajectory velocity vector V by:

    Vair = V − W    [9]

The procedure for calculating the orientation of the spin vector ω is described in FIG. 9. From the measured 3-dimensional trajectory, the trajectory velocity V and acceleration A are calculated by differentiation 14. The airspeed velocity is calculated 15 using equation [9], using a priori knowledge about the wind speed vector W. The gravity acceleration G is calculated 16 from a priori knowledge about latitude and altitude. Since the drag and lift accelerations are perpendicular to each other, the magnitude and orientation of the drag acceleration D can be calculated 17 using equation [10]:

    D = [(A − G) • Vair / |Vair|^2] * Vair    [10]

where • means vector dot product. Hereafter the magnitude and orientation of the lift acceleration L can easily be found 18 from [11]:

    L = A − G − D    [11]

As mentioned earlier, by definition the lift vector L is perpendicular to the spin vector ω, meaning that:

    L • ωe = 0    [12]

The spin unity vector ωe is normally assumed to be constant over time for rotationally symmetrical objects due to the gyroscopic effect. If the spin unity vector ωe can be assumed to be constant over a time interval [t1;tn], then equation [12] constructs a set of linear equations [13]:

    Lx(ti)*ωex + Ly(ti)*ωey + Lz(ti)*ωez = 0,  i = 1, ..., n    [13]

where L(t) = [Lx(t), Ly(t), Lz(t)] and ωe = [ωex, ωey, ωez]. The linear equations in [13] can be solved for [ωex, ωey, ωez] by many standard mathematical methods. Hereby the 3-dimensional orientation of the spin axis in the time interval [t1,tn] can be determined. The only assumption is that the spin axis is quasi-constant compared to the variation of the direction of the lift vector L. By combining the spin frequency ω found from the algorithm described in FIG. 7 with the spin unity vector ωe found from equation [13], the spin vector ω can be found 20 by using equation [14]:

    ω = ω * ωe    [14]

Partwise Known Orientation of Spin Axis

In many cases it is known a priori that the spin axis lies in a known plane at a certain point in time. Let this plane be characterized by a normal unity vector n. This means:

    n • ω = 0    [15]

An example of such a case is the spin axis orientation right after the launch of the ball. When a ball is put into movement by means of a collision, like a golf ball struck by a golf club or a soccer ball hit by a foot, the spin vector ω will, right after launch, to a very high degree be perpendicular to the initial ball velocity vector V. The normal unity vector n in [15] will in this case be given by equation [16]:

    n = V/|V|    [16]

The procedure for calculating the orientation of the spin vector ω at the point in time t0 where the spin vector lies in a known plane characterized by the normal unity vector n is described in FIG. 10. First, the exact same steps 14-18 as described in FIG. 9 are followed to obtain the lift acceleration at the time t0. Now determine 21 a rotation matrix R that converts the coordinates of the normal unity vector n in the base coordinate system to the x-axis unity vector [1,0,0], see equation [17]; the rotation matrix R can be found by standard algebraic methods from n:

    [1,0,0] = R*n    [17]

The coordinates of the lift acceleration L from equation [11] are now rotated 22 through R, giving the vector Lm, see equation [18]:

    Lm = [Lxm,Lym,Lzm] = R*L    [18]

A similar coordinate transformation is applied to the spin unity vector ωe, see equation [19]:

    ωem = [ωexm,ωeym,ωezm] = R*ωe    [19]

Since it is known from equation [15] that ωexm equals 0, equation [13] simplifies to equation [20].
    Lym*ωeym + Lzm*ωezm = 0    [20]

By using the fact that the length of ωem equals 1, the spin unity vector ωe can be found 23 from either equation [21] or [22]:

    ωe = R^−1 * [0, −Lzm/Lym, 1] / |[0, −Lzm/Lym, 1]|,  Lym ≠ 0    [21]

    ωe = R^−1 * [0, 1, −Lym/Lzm] / |[0, 1, −Lym/Lzm]|,  Lzm ≠ 0    [22]

By combining the spin frequency ω found from the algorithm described in FIG. 7 with the spin unity vector ωe found from equation [21] or [22], the spin vector ω can be found 20 by using equation [14].
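As a concrete illustration of the FIG. 9 procedure, here is a minimal numpy sketch of equations [9]-[14]. It is my own illustrative code, not part of the patent, and it makes several assumptions: the z-axis points up, the wind vector is known and constant, the trajectory is differentiated by finite differences, and the over-determined system [13] is resolved with a singular value decomposition. The sign of the returned axis is inherently ambiguous, since [13] only constrains its direction.

    import numpy as np

    def spin_axis_from_trajectory(t, pos, wind=np.zeros(3), g=9.81, spin_freq=None):
        """Sketch of the FIG. 9 procedure (eqs [9]-[14]).

        t    : (n,)  sample times [s]
        pos  : (n,3) measured ball positions, z-axis up (assumed convention)
        wind : (3,)  wind velocity vector W, assumed known and constant
        """
        V    = np.gradient(pos, t, axis=0)    # trajectory velocity            (step 14)
        A    = np.gradient(V, t, axis=0)      # total acceleration             (step 14)
        Vair = V - wind                       # airspeed, eq [9]               (step 15)
        G    = np.array([0.0, 0.0, -g])       # gravity                        (step 16)
        AG   = A - G
        # drag: component of (A - G) along the airspeed direction, eq [10]     (step 17)
        D = (np.sum(AG*Vair, axis=1) / np.sum(Vair*Vair, axis=1))[:, None] * Vair
        L = AG - D                            # lift, eq [11]                  (step 18)
        # eq [13]: L(ti) . we = 0 for all samples; the best unit-length solution
        # is the right singular vector with the smallest singular value         (step 19)
        we = np.linalg.svd(L)[2][-1]
        we = we / np.linalg.norm(we)
        return we if spin_freq is None else spin_freq * we     # eq [14]       (step 20)

Passing the spin frequency obtained from the Doppler analysis as spin_freq returns the full spin vector ω of equation [14]; otherwise only the (sign-ambiguous) unit axis ωe is returned.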
{"url":"http://www.freepatentsonline.com/y2009/0075744.html","timestamp":"2014-04-16T13:06:44Z","content_type":null,"content_length":"69867","record_id":"<urn:uuid:0e5fff50-b828-46f4-ac09-3e312011ad82>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
The Area of a Square Inscribed in a Circle

Date: 12/23/95 at 15:41:22
From: Anonymous
Subject: GRE General question...

I don't understand how this answer came to be! It comes from Cliff's GRE Prep Guide.

Q: What is the area of a square inscribed in a circle whose circumference is 16 (pi).
A: 128.

Response: Huh?! How'd they do that!?

Thank you very much,

Date: 12/23/95 at 17:48:58
From: Doctor Elise
Subject: Re: GRE General question...

The circumference of a circle is (pi) times the diameter, so we know the diameter of the circle is 16. Since the square is inscribed in the circle, the diagonal distance between opposite corners is 16.

a^2 + b^2 = c^2, where 'c' is the diagonal (which is 16) across the square, and forms the hypotenuse of a right triangle. Since this is a square, we know that a = b. So we know that a^2 + a^2 = (16)^2. And we can reduce this to a^2 = ((16)^2)/2 = 16 * 16/2 = 16 * 8 = 128. Since a^2 is the area of the square, we're done!

-Doctor Elise, The Geometry Forum
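A quick numeric check of the same arithmetic (a throwaway snippet of my own, not part of the original exchange):

    import math

    circumference = 16 * math.pi
    diagonal = circumference / math.pi   # C = pi * d, so the diagonal of the square is 16
    area = diagonal**2 / 2               # from a^2 + a^2 = d^2, with area = a^2
    print(area)                          # 128.0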
{"url":"http://mathforum.org/library/drmath/view/54793.html","timestamp":"2014-04-18T00:59:49Z","content_type":null,"content_length":"6037","record_id":"<urn:uuid:9e7c424a-9fa2-47b0-b314-e119738374b5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Science 122 About Lab 4

Lab 4 is about kinematics, the study and description of motion. It is Galileo's description, modernized to include the use of graphical analysis to interpret and understand uniform acceleration and the relationships between distance, time, velocity, and acceleration.

There are several lessons to be learned about motion from this lab:
• Velocity depends on both distance and time.
• Constant velocity plots as a straight line on a distance vs. time graph.
• The steepness or slope of the line on the distance vs. time graph is a measure of velocity.
• Acceleration is the addition of velocity with elapsed time.
• Uniform acceleration means that the velocity increases by the same amount in each time interval.
• Accelerated motion plots as a curved line on a distance vs. time graph, but is a straight line on a graph of distance vs. the square of time.
• Increasingly greater distances are traveled in equal time intervals in accelerated motion.
• Uniformly accelerated motion plots as a straight line on a velocity vs. time graph.

The slope of a graph of velocity vs. time represents acceleration, while the area of the graph represents the distance traveled.
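These graphical relationships are easy to verify numerically. The short sketch below (my own illustration, with an arbitrarily chosen acceleration) checks three of the key facts: distance is linear in t squared, the slope of the velocity-time line is the acceleration, and the area under the velocity-time curve is the distance traveled.

    import numpy as np

    a = 2.0                       # a made-up constant acceleration, m/s^2
    t = np.linspace(0, 10, 101)   # time, s
    v = a * t                     # velocity starting from rest
    d = 0.5 * a * t**2            # distance starting from rest

    print(np.polyfit(t**2, d, 1)[0])   # slope of d vs t^2: a/2 = 1.0 (a straight line)
    print(np.polyfit(t, v, 1)[0])      # slope of v vs t: the acceleration, 2.0
    print(np.trapz(v, t), d[-1])       # area under the v-t curve ~ 100.0 = total distance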
{"url":"http://www2.honolulu.hawaii.edu/instruct/natsci/science/brill/sci122/SciLab/L4/L4about.html","timestamp":"2014-04-16T18:59:14Z","content_type":null,"content_length":"3336","record_id":"<urn:uuid:1aee700c-ff94-47a4-934d-2fb0d5913201>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical and Quantitative Natural Science and Mathematical and Quantitative Reasoning (12 credits)

Students are required to take a core-area course in natural science with a laboratory component, a core-area course in mathematics, and a third core-area course in natural science (with a laboratory component), mathematics (MATH 114 or higher), quantitative reasoning or computer science.

Core-area courses in natural science focus on the natural world and develop students' abilities to evaluate scientific arguments critically, and enhance their quantitative and analytical reasoning skills. The laboratory component of these courses is an inquiry-based approach with opportunities for students to refine their observational skills through the acquisition and organization of data, analysis and interpretation of data, and the presentation of conclusions orally or in writing. (Normally, Web-based courses are not accepted as lab sciences that satisfy this lab science requirement. Any exceptions to this rule must be pre-approved by the Core Area Curriculum Review Committee in the Natural Sciences and Mathematics Division.)

Students select one of the following core-area natural science courses:
• BIOL 101 General Biology or BIOL 105 Human Biology or BIOL 106 Women, Medicine and Biology
• BIOL 102 Conservation Biology
• BIOL 207 Genetics, Evolution, and Ecology
• BIOL 208 Biological Communication & Energetics
• CHEM 100 Chemistry in Our World
• CHEM 101 Environmental Chemistry
• CHEM 109 General Chemistry for Engineers
• CHEM 111 General Chemistry I
• CHEM 112 General Chemistry II
• CHEM 115 Accelerated General Chemistry
• ENGR 123 Energy and the Environment
• GEOL 102 Origins and Methods
• GEOL 110 Geology of the National Parks or GEOL 111 Introductory Physical Geology or GEOL 114 The Science of Natural Disasters or GEOL 115 Environmental Geology
• GEOL 130 Earth History
• GEOL 161 Medical Geology
• GEOL 162 The Earth's Record of Climate
• GEOL 211 Earth Materials
• GEOL 220 Oceanography
• GEOL 252 Earth Surface Processes and Geomorphology
• GEOL 260 Regional Geology and Geological Field Methods
• GEOL 461 Medical Geology
• GEOL 462 The Earth's Record of Climate
• IDSC 150 Development of the Natural World
• PHYS 101 Physics as a Liberal Art I
• PHYS 104 Astronomy
• PHYS 105 Musical Acoustics
• PHYS 109 General Physics I
• PHYS 110 General Physics II
• PHYS 111 Introduction to Classical Physics I
• PHYS 112 Introduction to Classical Physics II

The core-area courses in mathematical reasoning include experience in the application of relevant knowledge to solve problems, promote the recognition and classification of numerical, geometrical, and relational patterns, enhance students' abilities to develop mathematical arguments, and to understand the connections between real-world data and mathematical models.

Students select one of the following core-area mathematics courses:
• MATH 100 Mathematical Sampler
• MATH 101 Finite Mathematics
• MATH 109 Calculus With Review II
• MATH 111 Calculus for Business and Social Science
• MATH 113 Calculus I
• MATH 114 Calculus II
• MATH 121 Structures of Elementary Mathematics
• MATH 128 Introduction to Discrete Mathematics

The third core-area courses allow students to broaden or deepen their exposure to natural science, mathematics, quantitative reasoning and/or computer science.
Students select a core-area course from the following list:
• CISC 120 Computers in Elementary Education
• STAT 220 Statistics I (previously IDTH 220)
• MATH 114 Calculus II
• MATH 121 Structures of Elementary Mathematics
• MATH 128 Introduction to Discrete Mathematics
• a second natural science course (with laboratory) from the first group (note the restrictions involving BIOL 101, 105, or 106, and GEOL 110, 111, 114 or 115).
{"url":"http://www.stthomas.edu/catalog/general/corecurriculum/naturalsciencemathematics/","timestamp":"2014-04-18T20:59:49Z","content_type":null,"content_length":"14194","record_id":"<urn:uuid:93497c4d-7e71-4b62-b304-cc6da2e34914>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
Compute the area covered by cards randomly put on a table

This is an interview question; the interview has been done. Given a deck of rectangular cards, put them randomly on a rectangular table whose size is much larger than the total sum of the cards' size. Some cards may overlap with each other randomly. Design an algorithm that can calculate the area of the table covered by all cards and also analyze the time complexity of the algorithm. All coordinates of each vertex of all cards are known. The cards can overlap in any patterns.

My idea: Sort the cards by their vertical coordinate in descending order. Scan the cards vertically from top to bottom; after reaching an edge or vertex of a card, go on scanning with another scan line until it reaches another edge, and find the area located between the two lines. Finally, sum all areas located between two lines to get the result. But how to compute the area located between two lines is a problem if the area is irregular. Any help is appreciated, thanks!

algorithm math data-structures computational-geometry

Comments:
- are they all orientated the same way (i.e. the cards will never be rotated at various angles etc.)? – Nim Mar 28 '12 at 15:12
- Compute triangles from card vertices and calculate the area occupied by these triangles, factoring for overlaps. – Justin Pearce Mar 28 '12 at 15:37
- Why are they triangles? They can be overlapped in any patterns. thanks! – user1002288 Mar 28 '12 at 16:09
- The question specifically says the table is "much larger than the total sum of the cards' size". So I think the problem is just asking you to efficiently find overlapping cards and then use whatever code you like to compute their intersection. (It is OK for the latter to be slow because there are very very few overlapping cards with very very high probability). – Nemo Mar 29 '12 at
- @user1002288 Because every polygon can be decomposed into triangles. That's only relevant if the answer to Nim's question is that they can be rotated - otherwise, there are much simpler solutions. – Nick Johnson Mar 29 '12 at 9:32

5 Answers

Answer (accepted, 9 votes):
This could be done easily using the union-intersection formula (size of A union B union C = A + B + C - AB - AC - BC + ABC, etc), but that would result in an O(n!) algorithm. There is another, more complicated way that results in O(n^2 (log n)^2): Store each card as a polygon + its area in a list. Compare each polygon in the list to every other polygon. If they intersect, remove them both from the list, and add their union to the list. Continue until no polygons intersect. Sum their areas to find the total area.

The polygons can be concave and have holes, so computing their intersection is not easy. However, there are algorithms (and libraries) available to compute it in O(k log k), where k is the number of vertices. Since the number of vertices can be on the order of n, this means computing the intersection is O(n log n). Comparing every polygon to every other polygon is O(n^2). However, we can use an O(n log n) sweeping algorithm to find nearest polygons instead, making the overall algorithm O((n log n)^2) = O(n^2 (log n)^2).

- To the first part of your answer--easily how? – Colonel Panic Mar 28 '12 at 18:11
- @Matt: Sum the area of all cards; subtract the area of the intersection of all pairs; add intersection of all triplets; subtract quadruplets etc.
The intersections will have no holes and be convex, so finding them is much easier than in the more complex case. – BlueRaja - Danny Pflughoeft Mar 28 '12 at 20:54
- Cards are convex. The intersection of convex shapes is convex. Got it, thanks. – Colonel Panic Mar 29 '12 at 10:15

Answer (5 votes):
This is almost certainly not what your interviewers were looking for, but I'd've proposed it just to see what they said in response:

I'm assuming that all cards are the same size and are strictly rectangular with no holes, but that they are placed randomly in an X,Y sense and also oriented randomly in a theta sense. Therefore, each card is characterized by a triple (x,y,theta) or of course you also have your quad of corner locations.

With this information, we can do a monte carlo analysis fairly simply. Simply generate a number of points at random on the surface of the table, and determine, by using the list, whether or not each point is covered by any card. If yes, keep it; if not, throw it out. Calculate the area of the cards by the ratio of kept points to total points.

Obviously, you can test each point in O(n) where n is the number of cards. However, there is a slick little technique that I think applies here, and I think will speed things up. You can grid out your table top with an appropriate grid size (related to the size of the cards) and pre-process the cards to figure out which grids they could possibly be in. (You can over-estimate by pre-processing the cards as though they were circular disks with a diameter going between opposite corners.) Now build up a hash table with the keys as grid locations and the contents of each being any possible card that could possibly overlap that grid. (Cards will appear in multiple grids.) Now every time you need to include or exclude a point, you don't need to check each card, but only the pre-processed cards that could possibly be in your point's grid location.

There's a lot to be said for this method:
• You can pretty easily change it up to work with non-rectangular cards, esp if they're convex
• You can probably change it up to work with differently sized or shaped cards, if you have to (and in that case, the geometry really gets annoying)
• If you're interviewing at a place that does scientific or engineering work, they'll love it
• It parallelizes really well
• It's so cool!!

On the other hand:
• It's an approximation technique (but you can run to any precision you like!)
• You're in the land of expected runtimes, not deterministic runtimes
• Someone might actually ask you detailed questions about Monte Carlo
• If they're not familiar with Monte Carlo, they might think you're making stuff up

I wish I could take credit for this idea, but alas, I picked it up from a paper calculating surface areas of proteins based on the position and sizes of the atoms in the proteins. (Same basic idea, except now we had a 3D grid in 3-space, and the cards really were disks. We'd go through and for each atom, generate a bunch of points on its surface and see if they were or were not interior to any other atoms.)

EDIT: It occurs to me that the original problem stipulates that the total table area is much larger than the total card area. In this case, an appropriate grid size means that a majority of the grids must be unoccupied. You can also pre-process grid locations, once your hash table is built up, and eliminate those entirely, only generating points inside possibly occupied grid locations.
(Basically, perform individual MC estimates on each potentially occluded grid location.)

- Very clever. I like the fact that it's a lot simpler to implement than solutions that require decomposing polygons, assuming you're willing to tolerate an approximation. – Nick Johnson Mar 29 '12 at 9:35

Answer (2 votes):
Here's an idea that is not perfect but is practically useful. You design an algorithm that depends on an accuracy measure epsilon (eps). Imagine you split the space into squares of size eps x eps. Now you want to count the number of squares lying inside the cards. Let the number of cards be n and let the sides of the cards be h and w. Here is a naive way to do it:

    S = {} // Hashset
    for every card:
        for x in [min x value of card, max x value of card] step eps:
            for y in [min y value of card, max y value of card] step eps:
                if (x, y) is in the card:
                    S.add((x, y))
    return size(S) * eps * eps

The algorithm runs in O(n * (S/eps)^2) and the error is strongly bounded by (2 * S * n * eps), therefore the relative error is at most (2 * eps * n / S). So for example, to guarantee an error of less than 1%, you have to choose eps less than S / (200 n) and the algorithm runs in about 200^2 * n^3 steps.

- Isn't this basic raycasting? – Stefano Borini Mar 28 '12 at 16:17
- @StefanoBorini I don't know what raycasting is, but I would be surprised if what I describe is not already being done (and maybe given a name too) :-). – aelguindy Mar 28 '12 at 16:20
- You divide the surface in cells, shoot an imaginary ray from each cell center, and check if there's an intersection with an object or not. It's a technique used to make ray tracing images (at least, the most basic ones). – Stefano Borini Mar 28 '12 at 16:31
- This isn't raycasting, it's 'numerical integration', or 'area-estimation by point-counting'. Then again, it might as well be called raycasting too. – High Performance Mark Mar 28 '12 at 17:01
- A lot of graphics techniques are closely related to numerical integration techniques. This is one of them. When you're using grid points in this fashion, I think it's usually called a quadrature technique. When you're using random points, it's Monte Carlo. Monte Carlo will have better convergence properties in higher dimensional spaces. – Novak Mar 28 '12 at 19:59

Answer (1 vote):
Suppose there are n cards of unit area. Let T be the area of the table. For the discretised problem, the expected area covered will be

    $ T(1-({{T-1}\over{T}})^n) $

- This is not a probability problem about expected value - he's asking how to efficiently calculate the actual area covered by a number of cards. – BlueRaja - Danny Pflughoeft Mar 28 '12 at 17:02
- I like this approach, it's much simpler than other proposals. OP can try Monte Carlo method as well. – Victor Sorokin Mar 28 '12 at 18:31

Answer (0 votes):
T = The total area of the table.
C = The total area that could be covered by cards (area of one card times number of cards).
V = The total area of overlapping cards (V = oVerlap)
Area to calculate = T - (C - V)

There should be (yep, those are danger words) some way to efficiently analyze the space occupied by the cards, to readily identify overlapping vs. non-overlapping situations. Identify these, factor out all overlapped areas, and you're done. Time complexity would be in considering each card in order, one by one, and comparing each with each remaining card (card 2 has already been checked against card 1), which makes it n!, not good... but this is where the "should" comes in.
There must be some efficient way to remove all cards that do not overlap from consideration, to order cards to make it obvious if they could not possibly overlap other/prior cards, and perhaps to identify or group potentially overlapping cards. Interesting problem.

- Calculating the "total area of overlapping cards" does not work, since more than two cards can overlap the same space. Imagine for example, all cards are in exactly the same place - then the "total area of overlapping cards" is equal to the area of one card! Also, your formula is clearly incorrect - imagine T is very large, then T - (C - V) could be larger than C!! – BlueRaja - Danny Pflughoeft Mar 28 '12 at 16:00
- +1: this is basically right except it calculates the area not covered rather than the area covered (a trivial issue). V here needs to be interpreted correctly, so that if two cards overlap, subtract the area of this overlap once, and then if a third card is put on to exactly cover this overlap, subtract the area of overlap one more time, etc. That is, if all cards are stacked, C = 52*A, and V = 51*A, or the area of one card, which is correct. – tom10 Mar 28 '12 at 16:26
- That's why you (painfully) go through card by card. The "first" (or lowest) card covers an area, while all subsequent (stacked) cards that overlap it do not--for the area in which they overlap. – Philip Kelley Mar 28 '12 at 16:38
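For concreteness, here is a small self-contained sketch of the Monte Carlo approach discussed above (my own illustrative code, not taken from any of the answers). Each card is represented as (center x, center y, width, height, rotation), a representation I chose for the example, and points are sampled uniformly over the table; Novak's grid-hash refinement would simply restrict the any(...) membership test to the cards registered in the sampled point's grid cell.

    import math, random

    def covered_area(cards, table_w, table_h, n_samples=200_000, seed=0):
        """Monte-Carlo estimate of the table area covered by (possibly rotated,
        possibly overlapping) rectangular cards.  cards: list of (cx, cy, w, h, theta)."""
        rng = random.Random(seed)

        def inside(px, py, card):
            cx, cy, w, h, theta = card
            # express the point in the card's own frame, then test the half-widths
            dx, dy = px - cx, py - cy
            u =  dx * math.cos(theta) + dy * math.sin(theta)
            v = -dx * math.sin(theta) + dy * math.cos(theta)
            return abs(u) <= w / 2 and abs(v) <= h / 2

        hits = 0
        for _ in range(n_samples):
            px, py = rng.uniform(0, table_w), rng.uniform(0, table_h)
            if any(inside(px, py, c) for c in cards):
                hits += 1
        return hits / n_samples * table_w * table_h

    # example: two unit cards, the second one rotated and overlapping the first;
    # the estimate lies between 1 and 2 because of the overlap
    print(covered_area([(1.0, 1.0, 1, 1, 0.0), (1.4, 1.0, 1, 1, 0.6)], 10, 10))

As with any Monte Carlo estimate, the accuracy improves roughly as one over the square root of the number of samples, so the sample count should be chosen from the precision you actually need.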
{"url":"http://stackoverflow.com/questions/9910459/compute-the-area-covered-by-cards-randomly-put-on-a-table","timestamp":"2014-04-16T09:04:44Z","content_type":null,"content_length":"110588","record_id":"<urn:uuid:682e9f0c-86bf-4485-b1f5-62600dc43961>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
When to use cross product and when to use dot product of vectors?

(Asker, later:) nobody to answer?

Reply 1:
It depends on what you want to compute.

The dot product between vectors v and w provides the angle between the vectors: v·w = |v| · |w| · cos(a). It is very useful when the vectors are orthogonal, because then the scalar product v·w = 0.

The cross product between vectors v and w provides the area of the parallelogram they span, which is 2 × (area of the triangle). The product of three vectors u, v and w (the scalar triple product) provides the volume of the parallelepiped, which is 6 × (volume of the tetrahedron). So the vector product is useful for calculating areas and volumes.

(I do not know if I understand your question. =/ )

Reply 2:
Do you have an example? A particular problem?

Reply 3:
Cross product: If you have two vectors in 3-space (or n-space), then if you cross those vectors you'll get a vector that is perpendicular. If you have two vectors in 2-space, you'll know if the orthogonal vector goes "in" to the page or sticks "out" of the page depending on which vector your right index finger hits.

Dot product: If you want to find the "projection" of a onto b, then you dot vector a with the unit vector b. You'll get a measurement from the tip of a down to b which forms a right-angle at b.
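A small numeric illustration of the two products (my own example values, added for clarity):

    import numpy as np

    v = np.array([3.0, 0.0, 0.0])
    w = np.array([1.0, 2.0, 0.0])

    # dot product: angles and projections
    cos_angle = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
    proj_of_w_on_v = np.dot(w, v / np.linalg.norm(v))   # scalar projection of w onto v

    # cross product: a vector perpendicular to both, whose length is the
    # area of the parallelogram spanned by v and w
    n = np.cross(v, w)
    parallelogram_area = np.linalg.norm(n)

    print(np.degrees(np.arccos(cos_angle)))   # ~63.4 degrees between v and w
    print(proj_of_w_on_v)                     # 1.0
    print(n, parallelogram_area)              # [0. 0. 6.]  6.0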
{"url":"http://openstudy.com/updates/4f768398e4b0ddcbb89d6cd8","timestamp":"2014-04-18T10:48:11Z","content_type":null,"content_length":"37957","record_id":"<urn:uuid:13ce8914-7b78-4577-b972-70b3c507b451>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
This Is The Problem. A Stone Is Thrown Upwards From ... | Chegg.com

This is the problem. A stone is thrown upwards from the edge of a cliff 18 m high. It just misses the cliff on the way down and hits the ground below with a speed of 18.8 m/s. With what velocity was it released? What is its maximum distance from the ground during its flight?

I'm not sure if the question is asking me to find the velocity or the speed in the first part. I started off using one of the kinematics equations, but I got stuck when trying to solve for t2. Also, I'm not sure how to solve for the maximum distance.
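One standard way to set this up is the time-free kinematic relation v^2 = v0^2 + 2*g*h, which works for the whole up-and-down flight, so no intermediate time t2 needs to be solved for. The sketch below is my own worked example with g = 9.8 m/s^2 assumed; note that the result is quite sensitive to the value used for g.

    import math

    g = 9.8          # assumed value for gravity, m/s^2
    h = 18.0         # cliff height, m
    v_ground = 18.8  # speed when it hits the ground, m/s

    # v_ground^2 = v0^2 + 2*g*h over the whole flight, so the launch speed is:
    v0 = math.sqrt(v_ground**2 - 2 * g * h)

    # maximum height above the ground: the stone rises v0^2 / (2*g) above the cliff edge
    h_max = h + v0**2 / (2 * g)

    print(v0, h_max)   # roughly 0.8 m/s upward, and about 18.03 m above the ground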
{"url":"http://www.chegg.com/homework-help/questions-and-answers/problem-stone-isthrown-upwards-edge-cliff-18-m-high-missesthe-cliff-way-hits-ground-speed--q783970","timestamp":"2014-04-18T09:19:46Z","content_type":null,"content_length":"18622","record_id":"<urn:uuid:eaeffcd7-51d3-4df1-be07-7d1eadcfbb9e>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to R

Installing R
R: Documentation
Graphical interface -- R for non-statisticians and non-programmers
R: Some elementary functions

In this part, we give a bird's eye view of the software: what is its position with respect to other software for numeric computations? What are its advantages, its drawbacks? What can we do with it? What are its limitations? What is its syntax?

Statistics vs Signal processing

It is a statistical software: contrary to other numerical computation software (Scilab, Octave), it already provides functions to perform non-trivial statistical operations, be they classic (regression, logistic regression, analysis of variance (anova), decision trees, principal component analysis, etc.) or more modern (neural networks, bootstrap, generalized additive models (GAM), mixed models, etc.). However, in statistics, we sometimes use a lot of signal processing algorithms (Fourier transform, wavelet transforms, etc.); if this is your case, you might find Scilab or Matlab more appropriate.

GUI -- or lack thereof

It is a real programming language, not point-and-click software (in French I have a good neologism to describe those "point-and-click" programs: "cliquodromes" -- if someone knows of a good English translation, he is welcome): we are not limited by the software designers' imagination, we can use it to any end. If you really need a simple and powerful GUI, have a look at R Commander, mentioned later in this document: it is a GUI that helps you learn the underlying language.

Speed issues

R is an interpreted language: the advantage is that we spend less time writing code; the drawback is that the computations are slower -- but unless you wish to do real-time computations on the National Insurance files, taking into account the whole British population, or real-time computations involving billions of financial records, it is sufficiently fast. If speed is really an issue, you can have a look at SAS (commercial, expensive, dating back to the mainframe era), DAP (free but far from complete: it was initially designed as a free replacement for SAS, but turned out to be a modest C library to perform statistical computations) or you can program everything yourself (in C or C++), with the help of a few libraries:

GSL (GNU Scientific Library), for special functions (a good replacement for the ageing Numerical Recipes, with a licence that actually allows you to use the software)
LibSVM (Support Vector Machines)
FFTW (Fast Fourier Transform)
djbFFT (Fast Fourier Transform)
IT++ (Signal processing)
CLN (arbitrary precision)
Core (Multi-precision library?)
Gaul (Genetic algorithms)
GALib (Genetic algorithms)
GTS (GNU Triangulated Surface library)

or (it is the approach I would advise) you can start to program in R, until you have a slow but correct program, profile your program (that is: find where it spends most of its time), try to change the algorithm so that it is faster and, if that fails, implement in C the most computation-intensive parts. If you try to optimize your program too early, you run the risk of either losing your time by optimizing parts of the implementation that have no real importance on the total time and getting a program with an awkward structure, very difficult to modify or extend -- or simply giving up your project before its completion.

Memory issues

There is another problem in R: it tends to load all the data into memory (this is slowly changing, starting with R version 2).
For very large data, R might not be the best choice: I recently spoke to a statistician working with genetic data: their data were too large for R, too large for SAS, and they had to resort to home-made programs. Yet, as I said above, R might be a platform of choice for prototyping, i.e., to write the first versions of an application, while we know neither which algorithms to choose, nor how long the computations will take. Furthermore, as R can access most databases (some, such as PostgreSQL, even allow you to write stored procedures in R), you can store your data there and only retrieve the bits you need, when you need them.

Personally, the first time I ran into memory problems, I was playing with a time series containing several minutes of music. If you have memory issues, you should also avoid Windows: it imposes further memory limitations -- indeed, if you read the R-help mailing list, all the people having memory issues are under Windows and their problems come from the operating system, not from R.

R can produce graphics, but it is not a presentation software: for informative pictures, for graphical data exploration, it is fine, but if you want to use them in an advert, to impress or deceive your customers, you might have to process them through other software -- for instance, the title image of this document was created with R, PovRay and The Gimp.

Finally, R is a free software ("free" as in "free speech" -- and also, incidentally, as in "free beer", but this is just a side effect). But beware: some (rare) libraries are not free but just "freely useable for non commercial purposes": be sure to read the licence.

Some people have started to set up R galleries:

Installing R

You should really consider Linux instead of Windows. People using R on Windows always have a lot of problems: especially memory problems (Windows specialists tell me that this operating system becomes unstable when a process wants to use more than 1Gb of memory -- and now that I use it at work, I can confirm the problems) and installation problems (if you want to install a package, you will have to find a binary version of it, for the very version of R you have installed -- alternatively, you can try to compile the packages from source, but this is very tricky, because Windows lacks all the tools for this: you end up installing Unix utilities on your Windows machine, but you will also run into version problems). Not convinced?

    > library(fortunes)
    > fortune("install")

    Benjamin Lloyd-Hughes: Has anyone had any joy getting the rgdal package to compile under windows?
    Roger Bivand: The closest anyone has got so far is Hisaji Ono, who used MSYS (http://www.mingw.org/) to build PROJ.4 and GDAL (GDAL depends on PROJ.4, PROJ.4 needs a PATH to metadata files for projection and transformation), and then hand-pasted the paths to the GDAL headers and library into src/Makevars, running Rcmd INSTALL rgdal at the Windows command prompt as usual. All of this can be repeated, but is not portable, and does not suit the very valuable standard binary package build system for Windows. Roughly: [points 1 to 5 etc omitted]
    Barry Rowlingson: At some point the complexity of installing things like this for Windows will cross the complexity of installing Linux... (PS excepting live-Linux installs like Knoppix)
    -- Benjamin Lloyd-Hughes, Roger Bivand and Barry Rowlingson
       R-help (August 2004)

Still not convinced? Well, good luck (you will need it).
You can download a Windows version of R from MacOS X I am not familiar with MacOS, but you should not have any problem. If you are new to Linux, you should first play around with a "live CD", i.e., a CD or DVD containing the whole operating system and useable without installing anything on your hard drive. If you want one targeted to numerical computations and containing R, I suggest Quantian. If you want a more general distribution, try Knoppix (I use it as a rescue disk, e.g., in case of password loss). If you want a lot of eye-candy, have a look at eLive. Once you are satisfied with one of those distributions, you can install Linux on your computer. Many people suggest Ubuntu (I have had a very bad experience with it, but you can assume I was unlucky); I was quite pleased when I last tried Suse; I used to use Mandriva but became dissatisfied with it. Many companies use Red Hat or its free, RedHat-sponsored equivalent, Fedora Core, mainly on their servers -- organizations using Linux on the desktop prefer Suse. To choose, check what people around you are using and take the same distribution: those people will be able to answer your I personnaly use Gentoo -- if you are not already an experienced Linux user, do not even think using it. Linux: Installing R on Mandriva Just type urpmi R-base R-bioconductor That is all. No need to roam the web to find where you can download the software, no need to answer lengthy questions about where to install the software, no need to do anything yourself. Linux: Installing R on Gentoo Just type emerge R Linux: installing R from the sources I proceed as follows: # Installing R, from source # A few required packages (when I currently use Gentoo Linux) emerge lapack graphviz wget http://cran.r-project.org/src/base/R-4/R-2.4.0.tar.gz tar zxvf R-*.tar.gz cd R-*/ make test make install # Installing the shared library libRmath (needed to # compile some third-party programs, such as JAGS) cd src/nmath/standalone make install echo /usr/local/lib >> /etc/ld.so.conf cd ../../../../ # Installing ALL the CRAN packages wget -r -l 1 -np -nc -x -k -E -U Mozilla -p -R zip www.stats.bris.ac.uk/R/src/contrib/ for i in www.stats.bris.ac.uk/R/src/contrib/*.tar.gz R CMD INSTALL $i # Bioconductor echo ' ' | R --vanilla echo /usr/lib/graphviz/ >> /etc/ld.so.conf ln -s /usr/lib/graphviz/*.so* /usr/local/lib/ # Should not be necessary, but... R: Documentation First steps The following document explains how to use R, progressively, at an elementary level (as long as the expressions "standard deviation" and "gaussian distribution" do not frighten you, it will be fine). Since I read it, this document became a book. Here is the official introduction to R -- it is garanteed to be up to date. Another elementary document, with exercises, for German-speaking people. The reference manual is available under R: just type the name of the command whose reference you want prefixed by an interrogation mark. For reserved words or non-alphanumeric commands, use quotes. If you do not know the command name (this happens very often), use "help.search" or "apropos". The first one, "help.search", looks everywhere, especially in all the packages that are not loaded, while the second one, "help.search", looks in the search path, i.e., in all the functions and variables that are currently available. The last one looks on the R web site and on the R-help mailing list; a new tab will open in your browser. 
Some of you might prefer to read the manuals in HTML (R launches your default web browser -- in my case, konqueror). There are also a few PDF files in /usr/lib/R/library/*/doc/: it is sometimes a PDF version of the reference manual (when the file name is that of the library), but sometimes different, more pedagogic explainations, called "vignettes". % cd /usr/local/lib/R/library/ % ls */doc/*pdf CRAN Task Views Most users will start to use R with a given application in mind, or at least, with a given domain in mind. This can be troublesome, because a given statistical procedure can be known under completely different names in different domains. Furthermore, some statistical procedures (or plots) will be used extremely often in a domain, to the point of being considered elementary, while it will be so rare in others that hardly anyone knows it. To avoid those problems and directly jump to the packages and functions you need for your study, have a look at the CRAN Task Views: these are commented lists of R packages tailored for some domains. The following document (very enlightening, even if it is written in a language I cannot read -- probably spanish) explains in details what kinds of graphics R can produce. At least, browse through See also P. Murrell's book: More technical documents Anova in psychology: Linear regression: Everything (rather complete, but very dense: with no previous knowledge of the subjects tackled, it is not understandable, but otherwise, it is fine -- there are even exercises): The pictures in the manual I said earlier that an HTML version of the manual was available (look in /usr/lib/R/), but it lacks the illustrations... (The development team is aware of the problem and it may be tackled in the near future.) The following web site has added them: You can create them yourself, as follows. #! perl -w use strict; my $n = 0; # Writing R files mkdir "Rdoc" || die "Cannot mkdir Rdoc/: $!"; my @libraries= `ls lib/R/library/`; foreach my $lib (@libraries) { print STDERR "Processing library \"$lib\"\n"; print STDERR `pwd`; my @pages = grep /\.R$/, `ls lib/R/library/$lib/R-ex/`; chdir "Rdoc" || die "Cannot chdir to Rdoc: $!"; mkdir "$lib" || die "Cannot mkdir $lib: $!"; chdir "$lib" || die "Cannot chdir to $lib: $!"; open(M, '>', "Makefile") || die "Cannot open Makefile for writing: $!"; print M "all:\n"; foreach my $page (@pages) { print STDERR " Processing man page \"$page\" in library \"$lib\"\n"; my $res = ""; $res .= "library($lib)\n"; $res .= "library(lattice)##VZ##\n"; $res .= "library(nlme)##VZ##\n"; $res .= "library(MASS)##VZ##\n"; $res .= "identify <- function (...) 
{}##VZ##\n"; $res .= "x11()##VZ##\n"; open(P, '<', "../../lib/R/library/$lib/R-ex/$page") || die "Cannot open lib/R/library/$lib/R-ex/$page for reading: $!"; # Tag the lines where we must copy the screen (between two commands) while(<P>) { s/^([^ #}])/try(invisible(dev.print(png,width=600,height=600,bg="white",filename="doc$n.png")))##VZ##\n$1/ && $n++; $res .= $_; $res .= "try(invisible(dev.print(png,width=600,height=600,bg=\"white\",filename=\"doc$n.png\")))##VZ##\n"; $n++; close P; # We discard the line in the following cases: # The previous line ends with a comma, an opening bracket, an "equal" sign $res =~ s/[,(=+]\s*\n.*##VZ##.*?\n/,\n/g; # The previous line is empty $res =~ s/^\s*\n.*##VZ##.*\n/\n/gm; # The previous line only contains a comment $res =~ s/^(\s*#.*)\n.*##VZ##.*\n/$1\n/gm; # The nest line starts with a { TODO: check (boot / abc.ci) $res =~ s/^.*##VZ##.*\n\s*\{/\{/gm; # We write the corresponding number $res =~ s/doc([0-9]).png/doc00000$1.png/g; $res =~ s/doc([0-9][0-9]).png/doc0000$1.png/g; $res =~ s/doc([0-9][0-9][0-9]).png/doc000$1.png/g; $res =~ s/doc([0-9][0-9][0-9][0-9]).png/doc00$1.png/g; $res =~ s/doc([0-9][0-9][0-9][0-9][0-9]).png/doc0$1.png/g; open(W, ">", "${lib}_${page}") || die "Cannot open ${lib}_${page} for writing: $!"; print W $res; close W; print M "\tR --vanilla <${lib}_${page} >${lib}_${page}.out\n"; my $p = $page; $p =~ s/\.R$//; system 'cp', "../../lib/R/library/$lib/html/$p.html", "${lib}_$p.html"; print M "\ttouch all\n"; chdir "../../" || die "Cannot chdir to ../../: $!"; we compile them (it should take a few hours: for some packages, it may crash). cd Rdoc/ for i in * cd $i I prefer to parallelize all this: \ls -d */ | perl -p -e 's/(.*)/cd $1; make/' | perl fork.pl 5 where fork.pl allows you to launch several processes at the same time, but not too many: #! /usr/bin/perl -w use strict; my $MAX_PROCESSES = shift || 10; use Parallel::ForkManager; my $pm = new Parallel::ForkManager($MAX_PROCESSES); my $pid = $pm->start and next; $pm->finish; # Terminates the child process We clean the PNG files thus generated and we write the HTML files, for i in */ cd $i perl ../do.it_2.pl Where do.it_2.pl contains: #! 
perl -w use strict; # Delete the empty or duplicated PNG files print STDERR "Computing checksums\n"; use Digest::MD5 qw(md5); my %checksum; foreach my $f (sort(<*.png>)) { if( -z $f ) { unlink $f; local $/; open(F, '<', $f) || warn "Cannot open $f for reading: $!"; my $m = md5(<F>); close F; if( exists $checksum{$m} ){ unlink $f; } else { $checksum{$m} = $f; # Turn all this into HTML print STDERR "Converting to HTML\n"; open(HTML, '>', "ALL.html") || warn "Cannot open ALL.html for writing: *!"; print "<html>\n"; print "<head><title>R</title></head>\n"; print "<body>\n"; foreach my $f (<*.R.out>) { my $page = $f; $page =~ s/\.R.out$//; # Read the initial HTML file if( open(H, '<', "$page.html") ){ my $a = join '', <H>; close H; $a =~ s#^.*?<body>##gsi; $a =~ s#<hr>.*?$##gsi; print $a; } else { warn "Cannot open $page.html for reading: $!"; open(F, '<', $f) || warn "Cannot open $f for reading: $!"; #print "<h1>$f</h1>\n"; print "<h2>Worked out examples</h2>\n"; print "<pre>\n"; my $header=1; while(<F>) { if($header) { $header=0 if m/to quit R/; if( m/(doc.*png).*##VZ##/ ){ my $png = $1; next unless -f $png; print "</pre>\n"; print "<img width=600 height=600 src=\"$png\">\n"; print "<pre>\n"; next if m/##VZ##/; next if m/^>\s*###---/; next if m/^>\s*##\s*___/; next if m/^>\s*##\s*(alias|keywords)/i; close F; print "</pre>\n"; print "<hr>\n"; print "</body>\n"; print "</html>\n"; close HTML; For an unknown reason, the PNG files have a transparent background: I turn it into a white background with ImageMagick (the white.png file is a white PNG file, of the same size, 600x600, created with The Gimp) -- here as well, it is pretty long... for i in */*png echo $i composite $i white.png $i I do not take into account the potential links in the HTML files (not very clean, but...). perl -p -i.bak -e 's#<a\s.*?>##gi; s#</a>##gi' **/ALL.html The HTML files should rather be called index.html: rename 's/ALL.html/index.html/' */ALL.html where "rename" is the program: #!/usr/bin/perl -w use strict; my $reg = shift @ARGV; foreach (@ARGV) { my $old = $_; eval "$reg;"; die $@ if $@; rename $old, $_; here is a very small piece of the result (2.7Mb, 268 pictures, while the whole documentation reaches 93Mb and 2078 images -- this was R 1.6 -- it is probably much more now). In particular, one might be interested in the packages whose documentation contains the highest number of graphics. % find -name "*.png" | perl -p -e 's#^\./##;s#/.*##' | sort | uniq -c | sort -n | tail -20 26 sm 27 cluster 29 ade4 29 nls 29 spatial 31 mgcv 34 car 38 cobs 41 MPV 44 mclust 45 MASS 45 vcd 49 grid 60 gregmisc 64 pastecs 70 qtl 77 splancs 88 strucchange 90 waveslim 113 spdep Graphical interface -- R for non-statisticians and non-programmers It is the statistical mode of Emacs (it may be automatically installed with (X)Emacs: under Mandriva Linux (formerly Mandrake), it is with XEmacs, it is not with Emacs). One may then edit code with automatic indentication and syntax highlighting (Emacs recognises the files from their extension). You can even run R under Emacs (M-x R). Windows-specific stuff Under windows, R has a graphical interface -- it is just a makeshift replacement to accomodate for the lack of a decent terminal under Windows: it is not a menu-driven interface. You might still need a text editor, though. Some people advise Tinn-R, an R-aware simple text editor. 
R Commander This really is a menu-oriented interface to R, that allows you to navigate through your data sets, to perform simple statistical analyses, If you do not need anything beyond regression (but if you need all the graphical diagnostics that should be performed after a regression) and simple tests, or simply, if you have to teach statistics to non-statisticians and non-programmers, it seems to me an excellent choice. Users who later want to unleash the full power of R as a real programming language will face a gentle transition: R Commander displays all the commands that are run in the background to perform the tests and regressions and to produce the plots. If you want to provide a GUI tailored to a certain domain, providing analyses specific to it, it is a good starting point: you can add you own menus to the interface. Several projects have started to do this, in specific areas, for instance fBrowser, for finance and econophysics, in Rmetrics, and GEAR, for introductory econometrics. One big problem with R, and one big difference with menu-driven software, is that you never know where the feature you need is -- and this is even more true if you do not know the feature in question (you cannot wander through the menus). The typical example is "logistic regression": the main function to perform logistic regression is "glm". This stands for "Generalized Linear Models" (do not worry, you are not supposed to know what it is) and the manual page does not even give an example of a logistic regression. If you know the theory behind logistic regression, if you know that it is a special case of GLM, you might be fine -- but most users do not know, or do not want to, and should not need to. The Zelilg library fulfils the needs of such people by providing a simplified and uniform interface to most statistical procedures. Basically, you only need to know five functions: "zelig" (to fit a model), "setx" to change the data (for predictions), "sim" to simulate new values (i.e., to predict the values), "summary" and TODO: Give an example... d <- data.frame( y = crabs$sp, x1 = crabs$FL, x2 = crabs$RW r <- zelig( y ~ x1 + x2, model="probit", data=d ) op <- par(mfrow=c(2,2), mar=c(4,4,3,2)) NOT RUN %G N <- dim(d)[1] new.x1 <- rnorm( N, mean(d$x1), sd(d$x1) ) new.x2 <- rnorm( N, mean(d$x2), sd(d$x2) ) ## BUG in Zelig? s <- sim(r, setx(r, data=data.frame(x1=new.x1, x2=new.x2))) You can choose the model among the following list: To predict a quantitative variable: ls Least Squares normal (almost the same) gamma Gamma regression To predict a binary variable: logit Logistic regression relogit Logistic regressioni for rare events probit Probit regression To predict two binary variables: blogit Bivariate logistic regression bprobit Bivariate probit regression To predict a qualitative variable mlogit Multinomial logistic To predict an ordered qualitative variable: ologit Ordinal logistic regression oprobit Ordinal probit regression To predict count data: poisson Poisson regression negbin Negative binomial regression (as Poisson regression, but with more dispersed observations) To predict survival data: exp Exponential model (the hazard rate is constant) weibull Weibull model (the hazard rate increases with time) lognorm Log-normal model (the hazard rate increases and then decreases) The unification of the various statistical models is a great thing, but this library does not put enough emphasis on the graphics to be looked at before, during and after the analysis -- as opposed to R Commander. 
Another user-friendly interface to R.

JGR (pronounce "jaguar")

Yet another GUI, in Java.

R: Some elementary functions

In this part, we present the simplest functions of R, to allow you to read data (or to simulate them -- it is so easy, you end up doing it all the time, to check if your algorithms are correct (i.e., if they behave as expected if the assumptions you made on your data are satisfied) before applying them to real data) and to numerically or graphically explore them. In the following, the ">" at the beginning of the lines is R's prompt -- you should not type it --, and the [1] at the beginning of some lines is part of R's answers.

Statistics, computations:

> # 20 numbers, between 0 and 20
> # rounded at 1e-1
> x <- round(runif(20,0,20), digits=1)
> x
 [1] 10.0  1.6  2.5 15.2  3.1 12.6 19.4  6.1  9.2 10.9  9.5 14.1 14.3 14.3 12.8
[16] 15.9  0.1 13.1  8.5  8.7
> min(x)
[1] 0.1
> max(x)
[1] 19.4
> median(x)   # median
[1] 10.45
> mean(x)     # mean
[1] 10.095
> var(x)      # variance
[1] 27.43734
> sd(x)       # standard deviation
[1] 5.238067
> sqrt(var(x))
[1] 5.238067
> rank(x)     # rank
 [1] 10.0  2.0  3.0 18.0  4.0 12.0 20.0  5.0  8.0 11.0  9.0 15.0 16.5 16.5 13.0
[16] 19.0  1.0 14.0  6.0  7.0
> sum(x)
[1] 201.9
> length(x)
[1] 20
> round(x)
 [1] 10  2  2 15  3 13 19  6  9 11 10 14 14 14 13 16  0 13  8  9
> fivenum(x)  # quantiles
[1]  0.10  7.30 10.45 14.20 19.40
> quantile(x) # quantiles (different convention)
   0%   25%   50%   75%  100%
 0.10  7.90 10.45 14.15 19.40
> quantile(x, c(0,.33,.66,1))
    0%    33%    66%   100%
 0.100  8.835 12.962 19.400
> mad(x)      # median absolute deviation (scaled)
[1] 5.55975
> cummax(x)
 [1] 10.0 10.0 10.0 15.2 15.2 15.2 19.4 19.4 19.4 19.4 19.4 19.4 19.4 19.4 19.4
[16] 19.4 19.4 19.4 19.4 19.4
> cummin(x)
 [1] 10.0  1.6  1.6  1.6  1.6  1.6  1.6  1.6  1.6  1.6  1.6  1.6  1.6  1.6  1.6
[16]  1.6  0.1  0.1  0.1  0.1
> cor(x,sin(x/20))  # correlation
[1] 0.997286

Plot a histogram

x <- rnorm(100)
hist(x, col = "light blue")

Display a scatter plot of two variables

N <- 100
x <- rnorm(N)
y <- x + rnorm(N)
plot(y ~ x)

Add a "regression line" to that scatter plot.

N <- 100
x <- rnorm(N)
y <- x + rnorm(N)
plot(y ~ x)
abline( lm(y ~ x), col = "red" )

Print a message or a variable

print("Hello World!")

Concatenate character strings

paste("min: ", min(x$DS1, na.rm=T))

Write in a file (or to the screen)

cat("\\end{document}\n", file="RESULT.tex", append=TRUE)

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 License.
Vincent Zoonekynd
latest modification on Sat Jan 6 10:28:14 GMT 2007
{"url":"http://zoonek2.free.fr/UNIX/48_R/01.html","timestamp":"2014-04-18T03:05:53Z","content_type":null,"content_length":"47898","record_id":"<urn:uuid:6581f941-a24b-4048-960b-9993da7a7d34>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
Sammamish Geometry Tutor ...I've also programming in VBA most recently for an Excel update function. I was also responsible for our network at our satellite office working for GE as well as web-based instruction on parts of the GE system. I taught spiral math at the high school level in the Peace Corps. 39 Subjects: including geometry, reading, English, algebra 1 ...These are skills and lessons that I apply to every lesson when I work with students. Feel free to contact me if you have any questions or want more information - I look forward to hearing from you!I have worked as an Algebra teacher in Chicago and I thoroughly enjoy teaching the subject. I have... 27 Subjects: including geometry, chemistry, reading, writing ...I have had several years of classical training in piano with two of those at University of Puget Sound and Oregon State. I am an excellent sight reader and have been paid as an accompanist and have training in music theory. I was employed for many years in the field of both local and wide area ... 43 Subjects: including geometry, chemistry, calculus, physics ...And then, I give the student sample problems to solve independently and coach them further as needed. My main goal is to make sure the student is self-sufficient, and capable of using the methods on quizzes or tests. With respect to my educational background and work experience, I'm a Physiology major, and I just graduated from the University of Washington. 26 Subjects: including geometry, chemistry, calculus, physics ...I hold a PhD in Aeronautical and Astronautical Engineering from the University of Washington, and I have more than 40 years of project experience in science and engineering. I am uniquely qualified to tutor precalculus, with a PhD in Aeronautical and Astronautical Engineering from the University... 21 Subjects: including geometry, chemistry, English, physics
{"url":"http://www.purplemath.com/Sammamish_geometry_tutors.php","timestamp":"2014-04-21T00:05:51Z","content_type":null,"content_length":"23988","record_id":"<urn:uuid:2d3b61f5-3ac6-4d8c-bfeb-70805895f03e>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
help with rays

Prove: Every ray is convex.

Here is what I think: Let's say we have a union of a triangle and the set of all points that lie inside that triangle, called A. If we place the points P, Q in there, then A becomes convex since the entire segment PQ lies in A. And since it is known that every segment PQ is a convex set, and that a set with only one point is convex, then a ray, say ray AB, is convex also because it contains two points, so it has the property of having two points. Is this right? If it isn't, can someone please help me with it? Thank you for your time and effort! I really appreciate it!

What is the definition of convex as you are applying it?

A set A is called convex if for every two points P, Q of A, the entire segment PQ lies in A. For example, every segment PQ is a convex set. In fact, a set with only one point is convex. (Since such a set does not contain any two points, it follows that every two points of it have any property we feel like mentioning.) This all comes straight out of my textbook. Thanks for the help!

Let the point of emanation of the ray be point O. Suppose for a point C in between A and B on the line segment AB that the ray does not pass through C. OA and OB are the same ray, but OC cannot be the same ray as OA, because then the ray would pass through C. This would put OC at a positive angle away from OA, and OCB and ACB would be triangles. But ACB cannot be a triangle because C is on AB. Hence OC is on OA, and for all points C between A and B, OC is the same ray as OA and C is on that ray. So the definition is satisfied.

Thanks so much for the help!! But do you mind telling me how you knew to use what you did? I still don't understand how to arrive at getting these answers. Thanks again!

If this is a question in axiomatic geometry, you must use the definitions. $\overrightarrow {AB} = \left\{ {X:X = A \vee A - X - B \vee A - B - X} \right\}$ $\mbox{ and } \overline {CD} = \left\{ {X:X = C \vee X = D \vee C - X - D} \right\}$. Prove: $\left\{ {C,D} \right\} \subseteq \overrightarrow {AB} \quad \Rightarrow \quad \overline {CD} \subseteq \overrightarrow {AB}$
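For anyone who wants to see how Plato's hint can be carried out, here is one possible write-up in LaTeX. It is only a sketch, not part of the thread: it uses a ruler-style coordinate function on the line AB, which may itself need to be justified in whatever axiom system the textbook uses, and it adopts the harmless convention that B itself counts as a point of the ray.

\begin{proof}[Sketch]
Let $C,D \in \overrightarrow{AB}$ and let $X \in \overline{CD}$, so $X$ lies on the line $AB$.
Choose a coordinate function $f$ on the line $AB$ with $f(A)=0$ and $f(B)>0$ (a ``ruler'' for the line).
With this choice, and the convention that $B$ lies on the ray,
\[
  \overrightarrow{AB} \;=\; \{\, Y \text{ on line } AB : f(Y) \ge 0 \,\},
\]
since $Y=A$, $A\text{-}Y\text{-}B$, $Y=B$ and $A\text{-}B\text{-}Y$ are exactly the cases
$f(Y)=0$, $0<f(Y)<f(B)$, $f(Y)=f(B)$ and $f(Y)>f(B)$.
Because $C,D$ lie on the ray, $f(C)\ge 0$ and $f(D)\ge 0$.
If $X=C$ or $X=D$ we are done; otherwise $C\text{-}X\text{-}D$, so $f(X)$ lies strictly between
$f(C)$ and $f(D)$, hence $f(X)\ge \min\{f(C),f(D)\}\ge 0$.
In every case $f(X)\ge 0$, so $X \in \overrightarrow{AB}$.
Therefore $\overline{CD}\subseteq\overrightarrow{AB}$, i.e.\ the ray is convex.
\end{proof}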
{"url":"http://mathhelpforum.com/geometry/50117-help-rays.html","timestamp":"2014-04-20T04:05:16Z","content_type":null,"content_length":"44757","record_id":"<urn:uuid:5a0f6ffe-926f-4500-a156-1a6d2fb0f744>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
The Semicircle Law, Free Random Variables and Entropy

Fumio Hiai, Tohoku University, Aoba-ku, Sendai, Japan, and Dénes Petz, Technical University of Budapest, Hungary

Mathematical Surveys and Monographs, Volume 77; 2000; 376 pp; softcover; ISBN-10: 0-8218-4135-1; ISBN-13: 978-0-8218-4135-8; List Price: US$98; Order Code: SURV/

The book treats free probability theory, which has been extensively developed since the early 1980s. The emphasis is put on entropy and the random matrix model approach. The volume is a unique presentation demonstrating the extensive interrelation between the topics. Wigner's theorem and its broad generalizations, such as asymptotic freeness of independent matrices, are explained in detail. Consistent throughout the book is the parallelism between the normal and semicircle laws. Voiculescu's multivariate free entropy theory is presented with full proofs and extends the results to unitary operators. Some applications to operator algebras are also given.

Based on lectures given by the authors in Hungary, Japan, and Italy, the book is a good reference for mathematicians interested in free probability theory and can serve as a text for an advanced graduate course.

Contents: Overview • Probability laws and noncommutative random variables • The free relation • Analytic function theory and infinitely divisible laws • Random matrices and asymptotically free relation • Large deviations for random matrices • Free entropy of noncommutative random variables • Relation to operator algebras • Bibliography • Index
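Wigner's theorem mentioned in the description is easy to see numerically. The following Python sketch (my own illustration, not material from the book or the AMS page) samples a large symmetric random matrix with Gaussian entries and compares the eigenvalue histogram with the semicircle density on [-2, 2].

import numpy as np
import matplotlib.pyplot as plt

# For an n-by-n real symmetric matrix with independent N(0, 1) entries (up to
# symmetry), the eigenvalues of H / sqrt(n) approach Wigner's semicircle law.
rng = np.random.default_rng(0)
n = 2000
a = rng.normal(size=(n, n))
h = (a + a.T) / np.sqrt(2)              # symmetric Gaussian (GOE-type) matrix
eigs = np.linalg.eigvalsh(h) / np.sqrt(n)

x = np.linspace(-2, 2, 400)
semicircle = np.sqrt(4 - x**2) / (2 * np.pi)   # semicircle density on [-2, 2]

plt.hist(eigs, bins=60, density=True, alpha=0.5, label="eigenvalues")
plt.plot(x, semicircle, "r", label="semicircle density")
plt.legend()
plt.show()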
{"url":"http://ams.org/bookstore?fn=20&arg1=survseries&ikey=SURV-77-S","timestamp":"2014-04-17T10:40:47Z","content_type":null,"content_length":"15185","record_id":"<urn:uuid:b212bb47-7ca9-42e4-95e9-532eec070031>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
Security prices, risk, and maximal gains from diversification Results 1 - 10 of 83 , 1976 "... This paper integrates elements from the theory of agency, the theory of property rights and the theory of finance to develop a theory of the ownership structure of the firm. We define the concept of agency costs, show its relationship to the ‘separation and control’ issue, investigate the nature of ..." Cited by 898 (5 self) Add to MetaCart This paper integrates elements from the theory of agency, the theory of property rights and the theory of finance to develop a theory of the ownership structure of the firm. We define the concept of agency costs, show its relationship to the ‘separation and control’ issue, investigate the nature of the agency costs generated by the existence of debt and outside equity, demonstrate who bears costs and why, and investigate the Pareto optimality of their existence. We also provide a new definition of the firm, and show how our analysis of the factors influencing the creation and issuance of debt and equity claims is a special case of the supply side of the completeness of markets problem. - Journal of Political Economy , 2001 "... This paper explores the ability of conditional versions of the CAPM and the consumption CAPM—jointly the (C)CAPM—to explain the cross section of average stock returns. Central to our approach is the use of the log consumption–wealth ratio as a conditioning variable. We demonstrate that such conditio ..." Cited by 139 (5 self) Add to MetaCart This paper explores the ability of conditional versions of the CAPM and the consumption CAPM—jointly the (C)CAPM—to explain the cross section of average stock returns. Central to our approach is the use of the log consumption–wealth ratio as a conditioning variable. We demonstrate that such conditional models perform far better than unconditional specifications and about as well as the Fama-French three-factor model on portfolios sorted by size and book-to-market characteristics. The conditional consumption CAPM can account for the difference in returns between low-book-to-market and high-bookto-market portfolios and exhibits little evidence of residual size or book-to-market effects. We are grateful to Eugene Fama and Kenneth French for graciously providing the - Journal of Finance , 2006 "... We especially thank an anonymous referee and Rob Stambaugh, the editor, for helpful suggestions that greatly improved the article. Andrew Ang and Bob Hodrick both acknowledge support from the NSF. ..." Cited by 82 (6 self) Add to MetaCart We especially thank an anonymous referee and Rob Stambaugh, the editor, for helpful suggestions that greatly improved the article. Andrew Ang and Bob Hodrick both acknowledge support from the NSF. , 2002 "... Hedge funds are known to exhibit non-linear option-like exposures to standard asset classes and therefore the traditional linear factor model provides limited help in capturing their risk-return tradeoffs. We address this problem by augmenting the traditional model with option-based risk factors. O ..." Cited by 81 (6 self) Add to MetaCart Hedge funds are known to exhibit non-linear option-like exposures to standard asset classes and therefore the traditional linear factor model provides limited help in capturing their risk-return tradeoffs. We address this problem by augmenting the traditional model with option-based risk factors. 
Our results show that a large number of equity-oriented hedge fund strategies exhibit payoffs resembling a short position in a put option on the market index, and therefore bear significant left-tail risk, risk that is ignored by the commonly used mean-variance framework. Using a mean-conditional Value-at-Risk framework, we demonstrate the extent to which the mean-variance framework underestimates the tail risk. Working with the underlying systematic , 2002 "... In a model with housing collateral, the ratio of housing wealth to human wealth shifts the conditional distribution of asset prices and consumption growth. A decrease in house prices reduces the collateral value of housing, increases household exposure to idiosyncratic risk, and increases the condit ..." Cited by 58 (2 self) Add to MetaCart In a model with housing collateral, the ratio of housing wealth to human wealth shifts the conditional distribution of asset prices and consumption growth. A decrease in house prices reduces the collateral value of housing, increases household exposure to idiosyncratic risk, and increases the conditional market price of risk. Using aggregate data for the US, we find that a decrease in the ratio of housing wealth to human wealth predicts higher returns on stocks. Conditional on this ratio, the covariance of returns with aggregate risk factors explains eighty percent of the cross-sectional variation in annual size and book-to-market portfolio returns. 1 , 2005 "... We propose a dynamic risk-based model that captures the value premium. Firms are modeled as long-lived assets distinguished by the timing of cash flows. The stochastic discount factor is specified so that shocks to aggregate dividends are priced, but shocks to the discount rate are not. The model im ..." Cited by 50 (10 self) Add to MetaCart We propose a dynamic risk-based model that captures the value premium. Firms are modeled as long-lived assets distinguished by the timing of cash flows. The stochastic discount factor is specified so that shocks to aggregate dividends are priced, but shocks to the discount rate are not. The model implies that growth firms covary more with the discount rate than do value firms, which covary more with cash flows. When calibrated to explain aggregate stock market behavior, the model accounts for the observed value premium, the high Sharpe ratios on value firms, and the poor performance of the CAPM. THIS PAPER PROPOSES A DYNAMIC RISK-BASED MODEL that captures both the high expected returns on value stocks relative to growth stocks, and the failure of the capital asset pricing model to explain these expected returns. The value premium, first noted by Graham and Dodd (1934), is the finding that assets with a high ratio of price to fundamentals (growth stocks) have low expected returns relative to assets with a low ratio of price to fundamentals (value stocks). This , 2007 "... Existing empirical literature on the risk-return relation uses a relatively small amount of conditioning information to model the conditional mean and conditional volatility of excess stock market returns. We use dynamic factor analysis for large datasets to summarize a large amount of economic info ..." Cited by 36 (6 self) Add to MetaCart Existing empirical literature on the risk-return relation uses a relatively small amount of conditioning information to model the conditional mean and conditional volatility of excess stock market returns. 
We use dynamic factor analysis for large datasets to summarize a large amount of economic information by few estimated factors, and find that three new factors- termed “volatility,” “risk premium,” and “real” factors- contain important information about one-quarter-ahead excess returns and volatility not contained in commonly used predictor variables. Our specifications predict 16-20 % of the one-quarter-ahead variation in excess stock market returns, and exhibit stable and statistically significant out-of-sample forecasting power. We also find a positive conditional risk-return - Journal of Finance , 2006 "... Records of over half a million participants in more than 600 401(k) plans indicate that participants tend to allocate their contributions evenly across the funds they use, with the tendency weakening with the number of funds used. The number of funds used, typically between three and four, is not se ..." Cited by 25 (0 self) Add to MetaCart Records of over half a million participants in more than 600 401(k) plans indicate that participants tend to allocate their contributions evenly across the funds they use, with the tendency weakening with the number of funds used. The number of funds used, typically between three and four, is not sensitive to the number of funds offered by the plans, which ranges from 4 to 59. A participant’s propensity to allocate contributions to equity funds is not very sensitive to the fraction of equity funds among offered funds. The paper also comments on limitations on inferences from experiments and aggregate-level data analysis. HOW MUCH AND HOW TO SAVE FOR RETIREMENT is one of the most important financial decisions made by most people. Defined contribution (DC) pension plans, such as the popular 401(k) plans, are important instruments of such savings. By 2001 year-end, about 45 million American employees held 401(k) plan accounts with a total of $1.75 trillion in assets (Holden and VanDerhei (2001)). An important characteristic of these plans is that the participant has responsibility over his savings among a plan’s various funds. How responsibly do the participants behave? In particular, how sensitive are participants ’ choices to possible framing effects associated with the menu of choices they are offered? To explore these questions, this paper analyzes a data set recently provided by the Vanguard Group consisting of records of more than half a million participants in about 640 DC plans. These plans offer between 4 and 59 funds in which participants can invest. All plans offer at least one stock fund, 635 plans - Journal of Finance , 2006 "... When utility is non-separable in nondurable and durable consumption and the elasticity of substitution between the goods is high, marginal utility rises when durable consumption falls. The model explains both the cross-sectional variation in expected stock returns and the time variation in the equit ..." Cited by 24 (2 self) Add to MetaCart When utility is non-separable in nondurable and durable consumption and the elasticity of substitution between the goods is high, marginal utility rises when durable consumption falls. The model explains both the cross-sectional variation in expected stock returns and the time variation in the equity premium. Small stocks and value stocks deliver relatively low returns during recessions when durable consumption falls, which explains their high average returns relative to big stocks and growth stocks. 
Stock returns are unexpectedly low at business-cycle troughs when durable consumption falls sharply, which explains the counter-cyclical variation in the equity premium. "... We introduce and discuss a general criterion for the derivative pricing in the general situation of incomplete markets, we refer to it as the No Almost Sure Arbitrage Principle. This approach is based on the theory of optimal strategy in repeated multiplicative games originally introduced by Kelly. ..." Cited by 8 (3 self) Add to MetaCart We introduce and discuss a general criterion for the derivative pricing in the general situation of incomplete markets, we refer to it as the No Almost Sure Arbitrage Principle. This approach is based on the theory of optimal strategy in repeated multiplicative games originally introduced by Kelly. As particular cases we obtain the Cox-Ross-Rubinstein and Black-Scholes in the complete markets case and the Schweizer and Bouchaud-Sornette as a quadratic approximation of our prescription. Technical and numerical aspects for the practical option pricing, as large deviation theory approximation and Monte Carlo computation are discussed in detail. 1.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1279495","timestamp":"2014-04-17T07:32:45Z","content_type":null,"content_length":"38841","record_id":"<urn:uuid:1482ae97-dc23-4e88-a3d8-490a0eb188a5>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
changing the order of integration

So here I have such an integral. How do I change the order of integration? I am not asking you for the answer, but what should be the first step?

$\int_{-1}^{1}dy \int_{{y^2 \over 2}}^{ \sqrt{3-y^2}} f(x,y)dx$

I assume this is actually $\int_{-1}^{1}{\int_{\frac{y^2}{2}}^{\sqrt{3 - y^2}}{f(x, y)\,dx}\,dy}$. It's important to realise that the terminals actually give you a boundary for $x$ and $y$. So $\frac{y^2}{2}\leq x \leq \sqrt{3 - y^2}$ and $-1 \leq y \leq 1$. You should be able to rearrange the inequalities so that you can change the order of integration.

So the 'lines' bounding this region will be $x= \sqrt{(3-y^2)}$?

It really helps to draw a picture for this one. If you change the order of integration, you will need two integrals instead of one.

Yep, but should I draw it according to these 'lines' above? That's ok, my teacher did it once only with integrals. Could you tell me what's the easiest way to draw $x=\sqrt{(3-y^2)}$ without a calculator?

Square both sides and you get $x^2 = 3 - y^2$, or $x^2 + y^2 = 3$, which is a circle centered at the origin with radius $\sqrt{3}$. Note that $x=\sqrt{(3-y^2)}$ is only part of this circle. The best way to draw it would be with pencil and compass.

Something like that. Is that the best parabola you can draw?

Hehe, no! I can draw much better, but this is just an auxiliary drawing, isn't it? So, I need to divide it into three parts - part of the circle on the right, a rectangle in the middle, and this parabola part on the left?

Aha, thanks! What about such an integral? $\int_{-1}^{1}dx \int_{0}^{ \sqrt{1-y^2}} f(y)$ How can I change it? Because it's only a rectangle...
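Spelling out the three-piece decomposition the poster arrives at (parabola on the left, rectangle in the middle, circular arc on the right): this is a worked sketch added here, not part of the original thread. The curve $x = y^2/2$ meets $y = \pm 1$ at $x = 1/2$, and $x = \sqrt{3 - y^2}$ meets $y = \pm 1$ at $x = \sqrt{2}$, so reversing the order gives

\[
\int_{-1}^{1}\!\int_{y^2/2}^{\sqrt{3-y^2}} f(x,y)\,dx\,dy
= \int_{0}^{1/2}\!\int_{-\sqrt{2x}}^{\sqrt{2x}} f(x,y)\,dy\,dx
+ \int_{1/2}^{\sqrt{2}}\!\int_{-1}^{1} f(x,y)\,dy\,dx
+ \int_{\sqrt{2}}^{\sqrt{3}}\!\int_{-\sqrt{3-x^2}}^{\sqrt{3-x^2}} f(x,y)\,dy\,dx .
\]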
{"url":"http://mathhelpforum.com/calculus/129236-changing-order-integration.html","timestamp":"2014-04-21T06:00:25Z","content_type":null,"content_length":"68478","record_id":"<urn:uuid:117ed248-360f-48b7-935c-6269e65f1230>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
Isotropic vs homogeneous

I can imagine a universe that would be isotropic but not homogeneous, although this seems to select a preferred frame (am I wrong on this?). But I don't understand how it would be possible to have a universe which would be homogeneous without being isotropic. It seems to me that homogeneous implies isotropic.

Wrong way around. Think about a scalar field in [itex]E^3[/itex]. If it's isotropic, its gradient (which you can think of as a vector) must not pick out any direction at any point. This can happen only if the gradient vanishes everywhere. Thus, the scalar field must be constant. Adding a time variable, the same argument shows that the scalar field must depend only on time, i.e. it must be homogeneous in surfaces of "constant time". In a cosmological model, these surfaces will be the hyperslices orthogonal to the world lines of the matter.

And yet, I must be wrong, because people would not insist on having both conditions satisfied.

They don't. There are plenty of cosmological models (dust solutions) which are homogeneous but not isotropic, and there are models which are not even homogeneous. In another thread I recently gave a simple explicit example of a homogeneous but anisotropic model, the Bianchi II dust, an exact dust solution which exhibits Kasner epochs similar to those exhibited by the Mixmaster model, aka the Bianchi IX dust. I've also given the line element defining the Stephani dust, a simple exact dust solution which is an inhomogeneous model (a perturbation of an FRW model). The Kantowski-Sachs dusts provide further popular examples of anisotropic but homogeneous cosmological models. See the website in my sig (happy b'day to John B, happy b'day to he!) for an excellent review of cosmological models, including the question of symmetry, and for many more citations to good review papers and textbooks.
{"url":"http://www.physicsforums.com/showthread.php?t=173731","timestamp":"2014-04-20T03:13:15Z","content_type":null,"content_length":"80345","record_id":"<urn:uuid:8130014f-2b26-485c-b796-244ff586a7a6>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
LMI-Based Stability Criteria for Discrete-Time Neural Networks with Multiple Delays Advances in Mathematical Physics Volume 2013 (2013), Article ID 732406, 6 pages Research Article LMI-Based Stability Criteria for Discrete-Time Neural Networks with Multiple Delays ^1School of Mathematics, Anhui University, Hefei 230039, China ^2Department of Public Teaching, Anhui Business Vocational College, Hefei 230041, China Received 17 March 2013; Accepted 26 May 2013 Academic Editor: Wen Xiu Ma Copyright © 2013 Hui Xu and Ranchao Wu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Discrete neural models are of great importance in numerical simulations and practical implementations. In the current paper, a discrete model of continuous-time neural networks with variable and distributed delays is investigated. By Lyapunov stability theory and techniques such as linear matrix inequalities, sufficient conditions guaranteeing the existence and global exponential stability of the unique equilibrium point are obtained. Introduction of LMIs enables one to take into consideration the sign of connection weights. To show the effectiveness of the method, an illustrative example, along with numerical simulation, is presented. 1. Introduction During the past decades, various types of neural networks have been proposed and investigated intensively, since they play important roles and have found successful applications in fields such as pattern recognition, signal and image processing, nonlinear optimization problems, and parallel computation. The dynamical behaviors in neural models, such as the existence and their asymptotic stability of equilibria, periodic solutions, bifurcations, and chaos, have been the most active areas of research and have been extensively explored over the past years [1–22]. Due to the finite transmission speed of signals among neurons, time delays in interactions between neurons frequently happen and will cause complex dynamics in neural networks [6]; so it is necessary to introduce time delays into the neural models. So far, discrete, time-varying, and distributed delays have been, respectively, introduced to describe the dynamics of neural networks, and various sufficient conditions ensuring the stability have been given. Note that in numerical simulation and practical implementations, discretization of continuous-time models is necessary and of great importance. On the other hand, the dynamics of discrete-time neural networks could be quite different from those of continuous versions and will display much more complicated behaviors. So it is of great theoretical and practical significance to study the dynamics of discrete neural models. For discrete models, such as discrete Hopfield, bidirectional associate memory, and Cohen-Grossberg neural networks, several authors [1, 7–22] have studied the existence and exponential stability of equilibria and periodic solutions. In this paper, a discrete model with both variable and distributed time delays is introduced. By Lyapunov stability theory and linear matrix inequality (LMI) technique, sufficient conditions ensuring the existence and globally exponential stability of a unique equilibrium point are obtained. To show the effectiveness of our results, an illustrative example along with numerical simulation is presented. 
To our best knowledge, such general models have been seldom touched upon in the existing literatures. As we see, the obtained conditions are easy to verify. Furthermore, introduction of LMIs enables us to take into consideration the sign of connection weights. In contrast, sufficient conditions, for instance, in [7–12], depend on the absolute values of connection weights. That will ignore the differences between neuronal excitatory and inhibitory effects. 2. Preliminaries Set to be the set of integers and the set of nonnegative integers; let represent the set of integers between and with , namely, . Consider the discrete-time neural networks with both variable and distributed delays: with initial values where are the states of the th neuron at time ; represents the rate with which the th neuron resets its potential when isolated from others; and weigh the strengths of the th unit on the th unit; and are the nonlinear activation functions of the neurons; denotes the transmission delay along the axon of the th unit; is the delay kernel; is the external input on the th neuron at time ; the initial value functions are bounded . To investigate stability of system (1), make further assumptions:(H1) suppose that , , for , and for , with being constant;(H2) suppose that , , and , for some , all ;(H3) assume that functions and are bounded and satisfy for any , where , and are some constants and can be positive, negative, or zero, . So they are less restrictive than sigmoid activation functions and Lipschitz-type ones. For any , a solution of systems (1) and (2) is a vector-valued function satisfying system (1) and initial conditions (2) for . In this paper, it is always assumed that neural model (1) admits a solution represented by or simply . Since the activation functions are bounded, it is not difficult to check that system (1) has at least one equilibrium point by Brouwer's fixed point theorem. So with loss of generality, assume that ; that is, is an equilibrium point. Throughout the paper, denote System (1) can be rewritten into the form Definition 1. The equilibrium point of system (1) is globally exponentially stable if there exist constants and such that for any solution of system (1) with initial conditions , it holds that 3. Exponential Stability of Equilibrium Points By Lyapunov stability theory and LMI technique, the global exponential stability of the equilibrium point is established. Clearly, if is exponentially stable, the equilibrium point is unique. Now we will investigate the exponential stability of the origin. Theorem 2. Suppose that (H1)–(H3) hold and further there exist a number , positive definite matrix , , and semipositive diagonal matrices , , and, such that where Then the origin of system (1) is exponentially stable. Proof. Define a Lyapunov functional as follows: where To investigate the exponential stability of the origin, it is necessary to calculate the difference along the trajectory of (6). From (6), we have where . Since , one obtains Therefore, we have From (H3), one has Then for , , and , one has where . From , it follows that . Note that where are the minimum and maximum eigenvalues of matrix , respectively, , , and , so one has where . This implies that the equilibrium solution of system (1) is globally exponentially stable. The proof is completed. Remark 3. By employing LMI (8), the signs of , , that is, the differences between neural excitatory and inhibitory interaction, are taken into consideration. Remark 4. 
If , the equilibrium point of system (1) is said to be globally stable.

4. Numerical Example

Next, an illustrative example is given to show the effectiveness of the obtained results. Consider the discrete-time neural model (6) with parameters: then it is not difficult to see that , , , , and . Take , then by solving LMI (8), it has feasible solutions that are and so from Theorem 2, this system admits a unique equilibrium , with all other solutions converging to it exponentially; see Figures 1 and 2.

5. Conclusions

In the current paper, a class of discrete-time neural networks with both variable and distributed delays has been studied. Using Lyapunov stability theory and the LMI technique, the existence and global exponential stability of the unique equilibrium point have been established. The obtained results are easy to verify, so they will be of practical use for applying discrete neural models.

Acknowledgments

This study is supported by the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20093401120001), the Natural Science Foundation of Anhui Province (no. 11040606M12), the Natural Science Foundation of Anhui Education Bureau (no. KJ2010A035), and the 211 project of Anhui University (no. KJJQ1102).

References

1. S. Mohamad and K. Gopalsamy, “Dynamics of a class of discrete-time neural networks and their continuous-time counterparts,” Mathematics and Computers in Simulation, vol. 53, no. 1-2, pp. 1–39, 2000.
2. S. Arik, “An improved global stability result for delayed cellular neural network,” IEEE Transactions on Circuits and Systems, vol. 49, no. 8, pp. 1211–1214, 2002.
3. Z. Wang, Y. Liu, and X. Liu, “On global asymptotic stability of neural networks with discrete and distributed delays,” Physics Letters, vol. 345, no. 4–6, pp. 299–308, 2005.
4. C.-Y. Cheng, K.-H. Lin, and C.-W. Shih, “Multistability and convergence in delayed neural networks,” Physica D, vol. 225, no. 1, pp. 61–74, 2007.
5. S. Mou, H. Gao, J. Lam, and W. Qiang, “A new criterion of delay-dependent asymptotic stability for Hopfield neural networks with time delay,” IEEE Transactions on Neural Networks, vol. 19, no. 3, pp. 532–535, 2008.
6. S. I. Niculescu, Delay Effects on Stability: A Robust Approach, Springer, Berlin, Germany, 2001.
7. J. Liang, J. Cao, and J. Lam, “Convergence of discrete-time recurrent neural networks with variable delay,” International Journal of Bifurcation and Chaos, vol. 15, no. 2, pp. 581–595, 2005.
8. J. Liang, J. Cao, and D. W. C. Ho, “Discrete-time bidirectional associative memory neural networks with variable delays,” Physics Letters Section A, vol. 335, no. 2-3, pp. 226–234, 2005.
9. S. Mohamad, “Global exponential stability in continuous-time and discrete-time delayed bidirectional neural networks,” Physica D, vol. 159, no. 3-4, pp. 233–251, 2001.
10. C. Sun and C.-B. Feng, “Discrete-time analogues of integrodifferential equations modeling neural networks,” Physics Letters A, vol. 334, no. 2-3, pp. 180–191, 2005.
11. X.-G. Liu, M.-L. Tang, R. Martin, and X.-B. Liu, “Discrete-time BAM neural networks with variable delays,” Physics Letters A, vol. 367, no. 4-5, pp. 322–330, 2007.
12. H. Zhao, L. S. Li Sun, and G. Wang, “Periodic oscillation of discrete-time bidirectional associative memory neural networks,” Neurocomputing, vol. 70, no. 16–18, pp. 2924–2930, 2007.
13. W. He and J. Cao, “Stability and bifurcation of a class of discrete-time neural networks,” Applied Mathematical Modelling, vol. 31, no. 10, pp. 2111–2122, 2007.
14. B. Cessac, “A discrete time neural network model with spiking neurons,” Journal of Mathematical Biology, vol. 56, no. 3, pp. 311–345, 2008.
15. Y. Liu, Z. Wang, and X. Liu, “Asymptotic stability for neural networks with mixed time-delays: the discrete-time case,” Neural Networks, vol. 22, no. 1, pp. 67–74, 2009.
16. J. Cao and Q. Song, “Global dissipativity on uncertain discrete-time neural networks with time-varying delays,” Discrete Dynamics in Nature and Society, vol. 2010, Article ID 810408, 19 pages, 2010.
17. T. Ensari and S. Arik, “New results for robust stability of dynamical neural networks with discrete time delays,” Expert Systems with Applications, vol. 37, no. 8, pp. 5925–5930, 2010.
18. A. Y. Alanis, E. N. Sanchez, A. G. Loukianov, and E. A. Hernandez, “Discrete-time recurrent high order neural networks for nonlinear identification,” Journal of the Franklin Institute, vol. 347, no. 7, pp. 1253–1265, 2010.
19. Z. Huang, X. Wang, and C. Feng, “Multiperiodicity of periodically oscillated discrete-time neural networks with transient excitatory self-connections and sigmoidal nonlinearities,” IEEE Transactions on Neural Networks, vol. 21, no. 10, pp. 1643–1655, 2010.
20. C. Li, S. Wu, G. G. Feng, and X. Liao, “Stabilizing effects of impulses in discrete-time delayed neural networks,” IEEE Transactions on Neural Networks, vol. 22, no. 2, pp. 323–329, 2011.
21. O. Faydasicok and S. Arik, “Robust stability analysis of a class of neural networks with discrete time delays,” Neural Networks, vol. 29-30, pp. 52–59, 2012.
22. Y. Li and C. Wang, “Existence and global exponential stability of equilibrium for discrete-time fuzzy BAM neural networks with variable delays and impulses,” Fuzzy Sets Systems, vol. 217, pp. 62–79, 2013.
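Relating back to the numerical example in Section 4: the matrices for that example are garbled in the extracted text above and cannot be reproduced here, so the following Python sketch only illustrates, with made-up two-neuron parameters, the kind of simulation used to produce figures like Figures 1 and 2 -- iterating a discrete-time network with a bounded activation and a delayed term and watching trajectories from different initial conditions settle to the same equilibrium. Every number in it is a placeholder, not a value from the paper.

import numpy as np

# Hypothetical two-neuron discrete-time delayed network (parameters are NOT
# from the paper):  x(k+1) = A x(k) + W f(x(k)) + W1 f(x(k - tau)) + J
A  = np.diag([0.3, 0.2])                    # self-feedback (decay) terms
W  = np.array([[0.1, -0.2], [0.15, 0.1]])   # instantaneous connection weights
W1 = np.array([[-0.1, 0.05], [0.1, -0.1]])  # delayed connection weights
J  = np.array([0.2, -0.1])                  # external inputs
tau = 3                                     # constant transmission delay
f = np.tanh                                 # bounded activation function

def simulate(x0, steps=60):
    # Iterate the network from a constant initial history equal to x0.
    hist = [np.asarray(x0, dtype=float)] * (tau + 1)   # x(-tau), ..., x(0)
    for _ in range(steps):
        x, x_delayed = hist[-1], hist[-1 - tau]
        hist.append(A @ x + W @ f(x) + W1 @ f(x_delayed) + J)
    return np.array(hist)

# Trajectories from different initial conditions approach the same point,
# which is the behaviour an exponential-stability result guarantees.
for x0 in ([2.0, -1.5], [-1.0, 1.0], [0.5, 0.5]):
    print(x0, "->", simulate(x0)[-1])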
{"url":"http://www.hindawi.com/journals/amp/2013/732406/","timestamp":"2014-04-21T03:16:21Z","content_type":null,"content_length":"391957","record_id":"<urn:uuid:490e6d5e-77c9-410b-9dc8-9f9deaa654ce>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
671: DAA --- Lecture 11

Backtracking and Branch-and-Bound

The solution to many problems can be viewed as making a sequence of decisions. For example, finding a coloring of a graph with vertices {1,2, ... ,n} can be viewed as making a sequence of decisions, where the ith decision is which color to use for the ith vertex, i = 1, 2, ..., n. Given a weighted graph, a solution to the Traveling Salesman problem can be viewed as making a sequence of decisions concerning which city to visit next in the tour. A solution to the 0/1 Knapsack problem can be viewed as making a sequence of decisions as to which objects to put in the knapsack. Given a set of positive integers A = {a[1], a[2], ... , a[n]} and a positive integer Sum, the Sum of Subsets problem is to find a subset S of A (if any) such that the sum of the elements in S equals Sum. Again, a solution to the Sum of Subsets problem can be viewed as making a sequence of decisions as to which elements to include in the set S.

Anytime the solution to a problem involves making a sequence of decisions, it can be modeled using a state space tree T. The root of T corresponds to not yet making the first decision. Then the children of the root consist of nodes corresponding to all possible choices for the first decision. The children of a first-decision node v consist of nodes corresponding to all possible choices for the second decision given that v was the first decision, and so forth. The total number of decision sequences is usually exponential in the input size n (i.e., where each solution consists of at most n decisions), since there are usually at least two children of every node not corresponding to the final (or next-to-final) decision.

Backtracking and Branch-and-Bound are problem-solving strategies that are guaranteed to find a solution to any problem modeled by a (finite) state space tree, since they ensure that every node in the tree T that can possibly be a solution node is examined. The two strategies differ in the way in which the tree T is searched. Backtracking follows a "depth-first" search strategy, whereas Branch-and-Bound follows a "breadth-first" search strategy. Backtracking has the advantage that there is no need to explicitly implement the state space tree, since we are only maintaining a path in this tree at any given point in the algorithm. Branch-and-Bound, on the other hand, needs to explicitly maintain the tree (at least that part of the tree generated), since all the children of a node are generated before proceeding further. Thus, we need to explicitly generate nodes in the state space tree, and keep track of the parents of these nodes. The question is, then, why consider Branch-and-Bound? The answer is that for some problems goal nodes might be found in a FIFO Branch-and-Bound that are at a shallow depth in the tree, whereas Backtracking might look at a large portion of the implicit state space tree before finding a shallow-depth goal.

We illustrate Backtracking with a simple example from games, namely, placing n queens on an n-by-n chessboard so that no two queens are attacking. A given queen can attack everything in its row or column, as well as anything on the two diagonals incident with its position. In particular, if there is a placement of the n queens where no two attack, then each column (or row) must contain exactly one queen. Thus, we can assume that the ith queen is placed in the ith column, and view the problem as making a sequence of n decisions, where the ith decision is the choice of row for the ith queen, i = 1, ..., n. Of course, the state-space tree for this problem is enormous, i.e., contains n^n nodes, but Backtracking will only visit a small portion of this tree due to bounding.
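To make this decision-sequence view concrete, here is a minimal backtracking sketch in Python (the code and names are mine, not part of the lecture notes). It places the queen for each column in turn, trying rows in order and backing up whenever no safe row exists -- exactly the search strategy described in the next paragraph, except that rows and columns are numbered from 0.

def solve_n_queens(n):
    # Return a list rows[0..n-1] with rows[i] = row of the queen in column i,
    # or None if there is no placement; simple backtracking search.
    rows = []

    def safe(row):
        col = len(rows)
        for c, r in enumerate(rows):
            if r == row or abs(r - row) == abs(c - col):   # same row or diagonal
                return False
        return True

    def place(col):
        if col == n:
            return True
        for row in range(n):                 # try each row for this column
            if safe(row):
                rows.append(row)             # decision: put the queen here
                if place(col + 1):
                    return True
                rows.pop()                   # backtrack: undo the decision
        return False

    return rows if place(0) else None

print(solve_n_queens(4))   # prints [1, 3, 0, 2]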
Thus, we can assume that the ith queen is placed in the ith column, and view the problem as making a sequence of n decisions, where the ith decision is the choice of row for the ith queen, i = 1, ..., n. Of course, the state-space tree for this problem is enormous, i.e., contains n^n nodes, but Backtracking will only visit a small portion of this tree due to bounding. In the Backtracking solution, we try to place the ith queen in the first row where it won't be attacked by the placement of the previous i - 1 queens. If we can't find such a row, then we must backtrack and change the position of the last of the i - 1 queens for which such a change is possible. If there is no such queen, then there is no solution to the n-queens problem. In the following figure, we show a sequence of board positions generated by Backtracking for n = 4, where we only show the two positions where a backtrack must be made before a solution is found. Of course, placing n queens is not a very practical problem, and moreover, there are solutions not based on Backtracking. However, there are a number of practical problems for which no solution is known other than variations of Backtracking. One such problem is CNF satisfiability. Recall that a formula involving boolean variables x[1], x[2], ..., x[n] and their negations is in Conjuctive Normal Form (CNF) if it is a conjunction of disjunctions. For example, the formula (x[1] .or. x[3]) .and. (-x[1] .or. x[2] .or. x[3]) .and. (-x[1] .or. -x[2]) .and. (x[2] .or. -x[3]) is in CNF. The fundamental question about CNF formulas is their satisfiability, i.e., is there a truth assignment to the boolean variables in the formula which makes the formula evalute to TRUE? Of course, given an assignment to the variables, it is easy to check whether or not it is an satsifying assignment by making a linear scan through the formula, checking whether each clause (= disjunction) contains a variable that evalutes to TRUE. The satisfiability problem is clearly suitable for Backtracking, since finding a satisfying assignment (or determining that none exists) can be viewed as making n decisions about the truth assignment to the n boolean variables. Note that the (static, see below) state-space tree modeling the CNF satisfiability problem has 2^n+1 - 1 nodes, making it infeasible for exhaustive search unless some bounding can be generated. The CNF satisfiability problem (also called the SAT problem) is one of the most famous in computer science, as it was the first decision problem to be proved NP-complete (Cook). Also, since the problem has many practical applications in such areas as verifying VLSI electronic circuits and in cryptology, it is a very active research area today. In fact, there is an annual international conference devoted to problems related to SAT and which includes an on-line contest for the best SAT solver on a given test suite of CNF formulas. This conference was held here in Cincinnati a couple of years ago, and some of our faculty (Franco, Schlipf) are internationally know researchers in this area. There are many heuristics that have been developed to help bound the backtracking search of the CNF state-space tree. Two simple ones amount to choosing the next variable to branch on depending on which variable satisfies the most clauses. This is basically a greedy strategy (see Chapter 7). 
Preprocessing the input CNF formula and branching on a fixed ordering of the variables chosen according to this greedy strategy (branch first on the variable that satisfies the most clauses) yields a state space tree that is static with respect to this ordering. Slightly more clever is to choose the variable to "branch on" next that satisfies the most clauses not yet satisfied (again, a type of greedy strategy, which agrees with the straightforward greedy strategy at the beginning). The state space tree in this case is dynamic (i.e., it varies with the problem as well as within the problem), as opposed to the static state-space tree associated with the n-queens problem.

The following figure illustrates a static state-space tree for any CNF formula involving 3 boolean variables. Nodes of the tree correspond to variables, and edges are labeled 0 or 1 according to whether we assign FALSE or TRUE to the variable in the node. Triangular leaf nodes indicate that all three variables have been assigned truth values. The second figure shows the path in the tree corresponding to the unique satisfying assignment for the above CNF formula. Here we visited 10 nodes in the tree before finding the satisfying assignment (visiting a node corresponds to assigning a truth value to the variable in the node). Of course, if we could have chosen more cleverly, it could have been found rather quickly. For example, if we had used the greedy heuristic mentioned above, we would only have visited 3 nodes before finding a solution. But nobody knows clever enough heuristics that can avoid exponential worst-case performance for CNF satisfiability.

(x[1] .or. x[3]) .and. (-x[1] .or. x[2] .or. x[3]) .and. (-x[1] .or. -x[2]) .and. (x[2] .or. -x[3])

In Branch-and-Bound, one generates all the children of a node in the state space tree before going on to expand any other node. This forces one to explicitly maintain the nodes in the tree using something like a queue or stack. Often the queue that is maintained is a priority queue, where nodes are given a priority based on some heuristic. In spite of the overhead incurred by the explicit maintenance of the nodes in the tree, Branch-and-Bound can often be more efficient than Backtracking when solutions might be found in relatively shallow portions of the tree. However, for CNF, when there is little or no choice for a satisfying assignment, Backtracking will outperform Branch-and-Bound.

In optimization problems, such as the 0/1 Knapsack problem (see Chapter 7), bounding is also done by maintaining the best solution found so far in the search. If you can determine that a subtree cannot possibly contain a solution that is better than one already found, then the node can be bounded (the subtree pruned from the search). For example, a given node in the tree can be bounded if, even allowing fractional portions of the remaining objects in the subtree rooted at the node to be placed in the knapsack, you still could not improve an already generated solution.

Our second illustration of backtracking is the Sum of Subsets problem. The input to the Sum of Subsets problem is a set A of positive integers together with a given Sum, and the output is a subset of A whose elements add up to Sum (or the message that no such subset exists). There are two natural (static) state space trees modeling this problem (and any problem that involves choosing elements from a given set, such as 0/1 Knapsack). Both models proceed by scanning the elements of A in the order a[1], a[2], ... .
Nodes in the variable tuple state space tree T are represented explicitly by indices of the chosen elements, whereas in the fixed tuple state space tree, nodes are represented by the 0/1 characteristic vector corresponding to the chosen elements. For example, if A = {a[1], a[2], a[3], a[4], a[5]}, then in the variable tuple state space tree, the subset {a[2], a[4], a[5]} is represented by the node {2,4,5}, whereas in the fixed tuple state space tree it is represented by (0,1,0,1,1). The following figures illustrate the portion of the two different state space trees generated for A = {1, 4, 5, 10, 4} and Sum = 9.

In the optimization version of Sum of Subsets, we want to find a subset having the fewest number of elements that sums to Sum. Here we keep track of the number of elements UB in the best solution found so far. When a node is examined which cannot do better than UB elements, it is bounded. We first illustrate this for Backtracking and the same set A and Sum as before. The following figure shows that portion of the state space tree generated by FIFO Branch-and-Bound. You will note that there is no need to maintain the variable UB or LoBd in this problem, since the first goal node found will automatically be optimal because of the breadth-first nature of FIFO Branch-and-Bound.

The fixed tuple state space tree is usually used for problems where the solutions (if any) are always leaf nodes in the tree. In other words, a solution sequence always involves exactly n decisions. Such examples include graph coloring, boolean formula satisfiability, Traveling Salesman, and so forth.

Sometimes, especially in optimization problems, the state space tree can be pruned from the outset. For example, suppose you want to determine the minimum number of colors needed to color a graph (so that no two adjacent vertices have the same color). If a graph has n vertices, then we need at most n colors. Considering the vertices in the order v[1], v[2], ..., v[n], it is clear that we can restrict the color of vertex v[1] to color 1. In the same way, we can restrict the color of vertex v[2] to colors 1 or 2, and so forth. In other words, you can limit the color choices for vertex v[i] to colors 1, 2, ..., i.
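As promised above, here is a minimal backtracking sketch for the n-queens example (an illustrative Python implementation written for these notes, not taken from the original lecture):

```python
def solve_n_queens(n):
    """Place one queen per column, trying rows top-to-bottom and backtracking
    when a column has no safe row.  Returns a list `rows` where rows[c] is the
    row of the queen in column c, or None if no placement exists."""
    rows = []

    def safe(row, col):
        # Compare against the queens already placed in columns 0 .. col-1.
        for c, r in enumerate(rows):
            if r == row or abs(r - row) == abs(c - col):   # same row or diagonal
                return False
        return True

    def place(col):
        if col == n:
            return True                      # all n queens placed
        for row in range(n):                 # first non-attacked row, as in the notes
            if safe(row, col):
                rows.append(row)
                if place(col + 1):
                    return True
                rows.pop()                   # undo the choice: backtrack
        return False                         # no row works; backtrack one column

    return rows if place(0) else None

print(solve_n_queens(4))    # [1, 3, 0, 2], i.e., the rows chosen for columns 1..4
```

The same recursive skeleton (choose, recurse, undo) also handles the Sum of Subsets and graph coloring examples; only the feasibility test in `safe` changes.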
{"url":"http://cs.uc.edu/~jpaul/671/chap10.html","timestamp":"2014-04-16T15:59:38Z","content_type":null,"content_length":"13925","record_id":"<urn:uuid:39457dfb-fcde-4c53-86fa-5ed6f6796004>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
Maximal domain and range

im very vERY confused.. first of all, can anyone confirm to me if 'maximal' domain and 'domain' are the same? i am trying to find the maximal domain & range of these on the real line:

F(x) = x² + 2x – 3
F(x) = √(x² + 2x – 3)
F(x) = 1/√(x² + 2x – 3)
F(x) = (x² + 2x – 3)^(–⅓)

aren't they all going to be the same? factorising you get (x+3)(x-1), so the domain is the whole real line except -3, 1 and the range will be the whole real line...! where am i going wrong.....

How does your book define "maximal domain"?

Thank you! my notes are very vague in the definition of maximal domain, which is why im confused. it says this: 'The maximal domain is the set of points at which the rule f could legitimately be used', which sounds like domain to me

That's sure what it sounds like to me, too. All I can find online are references to "maximal domain" being what people usually mean by "domain", with any difference maybe being between the mathematical domain (being anything allowable mathematically) and the practical domain (being, say, only non-negative values of the radius for an "area" function).

Thank you! so maximal is just an extra word to make it sound complicated. uggh! does that mean i am right?? so the domain for all of the functions is the whole real line

No, "domain" and "maximal domain" are NOT the same. For example, I can find $\frac{x-1}{x-2}$ for any value of x except 2, so its "maximal domain" is "all real numbers except 2". But if I specifically define $f(x)= \frac{x-1}{x-2}$ with domain $1 \le x < 2$, I now have a completely different function with "domain" [1, 2). The "maximal domain" for a formula is the set of all numbers for which that formula can be evaluated. The "domain" of a function can be any subset of the maximal domain for whatever formula is used in defining the function. It often is the maximal domain, but any subset can be given as the domain for that specific function.

And, no, while the maximal domain is the same for these functions, the maximal domain is NOT "all real numbers except -3 and 1" for any of those functions. Yes, $x^2 + 2x - 3 = (x+3)(x-1)$, but whatever x is, x + 3 and x - 1 are just numbers and you can multiply any two numbers together. The "maximal domain" for this formula is "all real numbers". Also, completing the square, $x^2 + 2x + 3 = x^2 + 2x + 1 - 1 + 3 = (x+1)^2 + 2$. Since a square is never negative, when x = -1 this gives a value of 2, and for any x other than -1 it gives numbers larger than 2. Its range is "all real numbers larger than or equal to 2". The only "difficulty" with $\sqrt{x^2 + 2x + 3}$ would be that you cannot take the square root of a negative number. Since $x^2 + 2x + 3$ is never smaller than 2, it is never negative. The maximal domain is still all real numbers and now the range is all real numbers larger than or equal to $\sqrt{2}$. $\frac{1}{\sqrt{x^2 + 2x + 3}}$ and $(x^2 + 2x + 3)^{-1/2}$ are exactly the same thing. Now the only possible "difficulty" is that we cannot take the square root of a negative number and we cannot divide by 0. But, again, $x^2 + 2x + 3$ is never less than 2, so it is never either negative or 0.
Again, the maximal domain is "all real numbers". And, now, the range is $0 < y \le \frac{1}{\sqrt{2}}$, since $\frac{1}{x}$ goes to 0 as x goes to infinity.
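For anyone who wants to check maximal domains mechanically, a computer algebra system can solve the relevant inequalities directly. The short sketch below is illustrative only (it assumes SymPy is available), and it also checks the "-3" version from the original question, where the radicand can be negative:

```python
from sympy import symbols, solveset, S

x = symbols('x', real=True)

# Maximal domain of sqrt(x^2 + 2x + 3): where is the radicand nonnegative?
print(solveset(x**2 + 2*x + 3 >= 0, x, S.Reals))   # all real numbers

# Maximal domain of sqrt(x^2 + 2x - 3): the radicand is negative between the roots.
print(solveset(x**2 + 2*x - 3 >= 0, x, S.Reals))   # x <= -3 or x >= 1

# Maximal domain of 1/sqrt(x^2 + 2x - 3): the radicand must be strictly positive.
print(solveset(x**2 + 2*x - 3 > 0, x, S.Reals))    # x < -3 or x > 1
```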
{"url":"http://mathhelpforum.com/algebra/138751-maximal-domain-range.html","timestamp":"2014-04-18T17:01:17Z","content_type":null,"content_length":"49293","record_id":"<urn:uuid:b46b4676-9c43-466b-9918-018151f435ca>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
Totowa Calculus Tutor Find a Totowa Calculus Tutor ...I was a math major at Washington University in St. Louis, and minored in German, economics, and writing. While there, I tutored students in everything from counting to calculus, and beyond. 26 Subjects: including calculus, physics, geometry, statistics ...I worked with students in a tutoring environment for two and a half years at Montclair State University. I have also done private tutoring for several years now in subjects from algebra to physics to calculus III and differential equations. My education includes a bachelor's in mathematics and physics, as well as a master's in pure and applied mathematics. 7 Subjects: including calculus, physics, algebra 1, astronomy ...I have had tutoring experience with SAT math previously. I scored 790 on the SAT math portion. I am comfortable and familiar with the concepts covered in the exam. 15 Subjects: including calculus, English, Chinese, GRE ...Currently pursuing my Doctor of Pharmacy (Pharm.D) at the Ernest Mario School of Pharmacy, I have worked at Kaplan Co. I hold a perfect 800 on my SAT and PSAT Math, perfect 5 on AP Calculus, 800 SAT Math IIC, and am the recipient of the College Board's AP Scholar Award. Ranked in 99th National Percentile. 35 Subjects: including calculus, English, chemistry, SAT math ...As a Biology major I understand the concepts of Physical Science. In addition, I have worked with students in middle school age Science courses and am familiar with the material in that sense. As a soon-to-be teacher, I will be helping my students prepare (however the curriculum allows) for the ACT Science. 24 Subjects: including calculus, chemistry, English, statistics
{"url":"http://www.purplemath.com/totowa_nj_calculus_tutors.php","timestamp":"2014-04-21T07:19:44Z","content_type":null,"content_length":"23751","record_id":"<urn:uuid:cde85330-0dd8-4cf8-a486-dd9ea5308a1c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
Search results (1 - 22 of 22)

1. CJM 2012 (vol 66 pp. 3) On Hilbert Covariants Let $F$ denote a binary form of order $d$ over the complex numbers. If $r$ is a divisor of $d$, then the Hilbert covariant $\mathcal{H}_{r,d}(F)$ vanishes exactly when $F$ is the perfect power of an order $r$ form. In geometric terms, the coefficients of $\mathcal{H}$ give defining equations for the image variety $X$ of an embedding $\mathbf{P}^r \hookrightarrow \mathbf{P}^d$. In this paper we describe a new construction of the Hilbert covariant; and simultaneously situate it into a wider class of covariants called the Göttingen covariants, all of which vanish on $X$. We prove that the ideal generated by the coefficients of $\mathcal{H}$ defines $X$ as a scheme. Finally, we exhibit a generalisation of the Göttingen covariants to $n$-ary forms using the classical Clebsch transfer principle. Keywords:binary forms, covariants, $SL_2$-representations Categories:14L30, 13A50 2. CJM 2012 (vol 64 pp. 1222) Normality of Maximal Orbit Closures for Euclidean Quivers Let $\Delta$ be a Euclidean quiver. We prove that the closures of the maximal orbits in the varieties of representations of $\Delta$ are normal and Cohen--Macaulay (even complete intersections). Moreover, we give a generalization of this result for the tame concealed-canonical algebras. Keywords:normal variety, complete intersection, Euclidean quiver, concealed-canonical algebra Categories:16G20, 14L30 3. CJM 2011 (vol 64 pp. 805) Quantum Random Walks and Minors of Hermitian Brownian Motion Considering quantum random walks, we construct discrete-time approximations of the eigenvalue processes of minors of Hermitian Brownian motion. It has been recently proved by Adler, Nordenstam, and van Moerbeke that the process of eigenvalues of two consecutive minors of a Hermitian Brownian motion is a Markov process; whereas, if one considers more than two consecutive minors, the Markov property fails. We show that there are analogous results in the noncommutative counterpart and establish the Markov property of eigenvalues of some particular submatrices of Hermitian Brownian motion. Keywords:quantum random walk, quantum Markov chain, generalized Casimir operators, Hermitian Brownian motion, diffusions, random matrices, minor process Categories:46L53, 60B20, 14L24 4. CJM 2011 (vol 64 pp. 409) Lifting Quasianalytic Mappings over Invariants Let $\rho \colon G \to \operatorname{GL}(V)$ be a rational finite dimensional complex representation of a reductive linear algebraic group $G$, and let $\sigma_1,\dots,\sigma_n$ be a system of generators of the algebra of invariant polynomials $\mathbb C[V]^G$. We study the problem of lifting mappings $f\colon \mathbb R^q \supseteq U \to \sigma(V) \subseteq \mathbb C^n$ over the mapping of invariants $\sigma=(\sigma_1,\dots,\sigma_n) \colon V \to \sigma(V)$. Note that $\sigma(V)$ can be identified with the categorical quotient $V /\!\!/ G$ and its points correspond bijectively to the closed orbits in $V$. We prove that if $f$ belongs to a quasianalytic subclass $\mathcal C \subseteq C^\infty$ satisfying some mild closedness properties that guarantee resolution of singularities in $\mathcal C$, e.g., the real analytic class, then $f$ admits a lift of the same class $\mathcal C$ after desingularization by local blow-ups and local power substitutions. As a consequence we show that $f$ itself allows for a lift that belongs to $\operatorname{SBV}_{\operatorname{loc}}$, i.e., special functions of bounded variation.
If $\rho$ is a real representation of a compact Lie group, we obtain stronger versions. Keywords:lifting over invariants, reductive group representation, quasianalytic mappings, desingularization, bounded variation Categories:14L24, 14L30, 20G20, 22E45 5. CJM 2011 (vol 63 pp. 1058) $S_3$-covers of Schemes We analyze flat $S_3$-covers of schemes, attempting to create structures parallel to those found in the abelian and triple cover theories. We use an initial local analysis as a guide in finding a global description. Keywords:nonabelian groups, permutation group, group covers, schemes 6. CJM 2011 (vol 63 pp. 878) The Toric Geometry of Triangulated Polygons in Euclidean Space Speyer and Sturmfels associated Gröbner toric degenerations $\mathrm{Gr}_2(\mathbb{C}^n)^{\mathcal{T}}$ of $\mathrm{Gr}_2(\mathbb{C}^n)$ with each trivalent tree $\mathcal{T}$ having $n$ leaves. These degenerations induce toric degenerations $M_{\mathbf{r}}^{\mathcal{T}}$ of $M_{\mathbf{r}}$, the space of $n$ ordered, weighted (by $\mathbf{r}$) points on the projective line. Our goal in this paper is to give a geometric (Euclidean polygon) description of the toric fibers and describe the action of the compact part of the torus as "bendings of polygons". We prove the conjecture of Foth and Hu that the toric fibers are homeomorphic to the spaces defined by Kamiyama and Yoshida. Categories:14L24, 53D20 7. CJM 2009 (vol 62 pp. 473) Goresky--MacPherson Calculus for the Affine Flag Varieties We use the fixed point arrangement technique developed by Goresky and MacPherson to calculate the part of the equivariant cohomology of the affine flag variety $\mathcal{F}\ell_G$ generated by degree 2. We use this result to show that the vertices of the moment map image of $\mathcal{F}\ell_G$ lie on a paraboloid. Categories:14L30, 55N91 8. CJM 2009 (vol 62 pp. 262) On the Spectrum of the Equivariant Cohomology Ring If an algebraic torus $T$ acts on a complex projective algebraic variety $X$, then the affine scheme $\operatorname{Spec} H^*_T(X;\mathbb C)$ associated with the equivariant cohomology is often an arrangement of linear subspaces of the vector space $H_2^T(X;\mathbb C).$ In many situations the ordinary cohomology ring of $X$ can be described in terms of this arrangement. Categories:14L30, 54H15 9. CJM 2009 (vol 61 pp. 1407) Traces, Cross-Ratios and 2-Generator Subgroups of $\SU(2,1)$ In this work, we investigate how to decompose a pair $(A,B)$ of loxodromic isometries of the complex hyperbolic plane $\mathbf H^{2}_{\mathbb C}$ under the form $A=I_1I_2$ and $B=I_3I_2$, where the $I_k$'s are involutions. The main result is a decomposability criterion, which is expressed in terms of traces of elements of the group $\langle A,B\rangle$. Categories:14L24, 22E40, 32M15, 51M10 10. CJM 2009 (vol 61 pp. 351) Multiplication of Polynomials on Hermitian Symmetric Spaces and Littlewood--Richardson Coefficients Let $K$ be a complex reductive algebraic group and $V$ a representation of $K$. Let $S$ denote the ring of polynomials on $V$. Assume that the action of $K$ on $S$ is multiplicity-free. If $\lambda$ denotes the isomorphism class of an irreducible representation of $K$, let $\rho_\lambda\colon K \rightarrow GL(V_{\lambda})$ denote the corresponding irreducible representation and $S_\lambda$ the $\lambda$-isotypic component of $S$. Write $S_\lambda \cdot S_\mu$ for the subspace of $S$ spanned by products of $S_\lambda$ and $S_\mu$.
If $V_\nu$ occurs as an irreducible constituent of $V_\lambda\otimes V_\mu$, is it true that $S_\nu\subseteq S_\lambda\cdot S_\mu$? In this paper, the authors investigate this question for representations arising in the context of Hermitian symmetric pairs. It is shown that the answer is yes in some cases and, using an earlier result of Ruitenburg, that in the remaining classical cases, the answer is yes provided that a conjecture of Stanley on the multiplication of Jack polynomials is true. It is also shown how the conjecture connects multiplication in the ring $S$ to the usual Littlewood--Richardson rule. Keywords:Hermitian symmetric spaces, multiplicity free actions, Littlewood--Richardson coefficients, Jack polynomials Categories:14L30, 22E46 11. CJM 2008 (vol 60 pp. 556) Polarization of Separating Invariants We prove a characteristic free version of Weyl's theorem on polarization. Our result is an exact analogue of Weyl's theorem, the difference being that our statement is about separating invariants rather than generating invariants. For the special case of finite group actions we introduce the concept of \emph{cheap polarization}, and show that it is enough to take cheap polarizations of invariants of just one copy of a representation to obtain separating vector invariants for any number of copies. This leads to upper bounds on the number and degrees of separating vector invariants of finite groups. Keywords:Jan Draisma, Gregor Kemper, David Wehlau Categories:13A50, 14L24 12. CJM 2008 (vol 60 pp. 109) Affine Lines on Affine Surfaces and the Makar--Limanov Invariant A smooth affine surface $X$ defined over the complex field $\C$ is an $\ML_0$ surface if the Makar--Limanov invariant $\ML(X)$ is trivial. In this paper we study the topology and geometry of $\ ML_0$ surfaces. Of particular interest is the question: Is every curve $C$ in $X$ which is isomorphic to the affine line a fiber component of an $\A^1$-fibration on $X$? We shall show that the answer is affirmative if the Picard number $\rho(X)=0$, but negative in case $\rho(X) \ge 1$. We shall also study the ascent and descent of the $\ML_0$ property under proper maps. Categories:14R20, 14L30 13. CJM 2006 (vol 58 pp. 1000) On the Cohomology of Moduli of Vector Bundles and the Tamagawa Number of $\operatorname{SL}_n$ We compute some Hodge and Betti numbers of the moduli space of stable rank $r$, degree $d$ vector bundles on a smooth projective curve. We do not assume $r$ and $d$ are coprime. In the process we equip the cohomology of an arbitrary algebraic stack with a functorial mixed Hodge structure. This Hodge structure is computed in the case of the moduli stack of rank $r$, degree $d$ vector bundles on a curve. Our methods also yield a formula for the Poincar\'e polynomial of the moduli stack that is valid over any ground field. In the last section we use the previous sections to give a proof that the Tamagawa number of $\sln$ is one. Categories:14H, 14L 14. CJM 2006 (vol 58 pp. 93) Motivic Haar Measure on Reductive Groups We define a motivic analogue of the Haar measure for groups of the form $G(k\llp t\rrp)$, where~$k$ is an algebraically closed field of characteristic zero, and $G$ is a reductive algebraic group defined over $k$. A classical Haar measure on such groups does not exist since they are not locally compact. We use the theory of motivic integration introduced by M.~Kontsevich to define an additive function on a certain natural Boolean algebra of subsets of $G(k\llp t\rrp)$. 
This function takes values in the so-called dimensional completion of the Grothendieck ring of the category of varieties over the base field. It is invariant under translations by all elements of $G(k\llp t\rrp)$, and therefore we call it a motivic analogue of Haar measure. We give an explicit construction of the motivic Haar measure, and then prove that the result is independent of all the choices that are made in the process. Keywords:motivic integration, reductive group Categories:14A15, 14L15 15. CJM 2003 (vol 55 pp. 693) Une formule de Riemann-Roch équivariante pour les courbes Let $G$ be a finite group acting on a smooth projective algebraic curve $X$ over an algebraically closed field $k$. In this article, we give a Riemann-Roch formula for the equivariant Euler characteristic of an invertible $G$-sheaf $\mathcal{L}$, with values in the ring $R_k (G)$ of characters of the group $G$. The formula behaves well functorially, in the sense that it lifts the classical formula along the morphism $\dim \colon R_k (G) \to \mathbb{Z}$, and it remains valid even for a wild action. As an application, we show how to compute explicitly the character of the space of global sections of a large class of invertible $G$-sheaves, dwelling on the delicate special case of the sheaf of differentials on the curve. Keywords:group actions on varieties or schemes, Riemann-Roch theorems Categories:14L30, 14C40 16. CJM 2002 (vol 54 pp. 595) Lie Algebras of Pro-Affine Algebraic Groups We extend the basic theory of Lie algebras of affine algebraic groups to the case of pro-affine algebraic groups over an algebraically closed field $K$ of characteristic 0. However, some modifications are needed in some extensions. So we introduce the pro-discrete topology on the Lie algebra $\mathcal{L}(G)$ of the pro-affine algebraic group $G$ over $K$, which is discrete in the finite-dimensional case and linearly compact in general. As an example, if $L$ is any sub Lie algebra of $\mathcal{L}(G)$, we show that the closure of $[L,L]$ in $\mathcal{L}(G)$ is algebraic in $\mathcal{L}(G)$. We also discuss the Hopf algebra of representative functions $H(L)$ of a residually finite dimensional Lie algebra $L$. As an example, we show that if $L$ is a sub Lie algebra of $\mathcal{L}(G)$ and $G$ is connected, then the canonical Hopf algebra morphism from $K[G]$ into $H(L)$ is injective if and only if $L$ is algebraically dense in $\mathcal{L}(G)$. Categories:14L, 16W, 17B45 17. CJM 2002 (vol 54 pp. 554) Equivariant Embeddings into Smooth Toric Varieties We characterize embeddability of algebraic varieties into smooth toric varieties and prevarieties. Our embedding results hold also in an equivariant context and thus generalize a well-known embedding theorem of Sumihiro on quasiprojective $G$-varieties. The main idea is to reduce the embedding problem to the affine case. This is done by constructing equivariant affine conoids, a tool which extends the concept of an equivariant affine cone over a projective $G$-variety to a more general framework. Categories:14E25, 14C20, 14L30, 14M25 18. CJM 2000 (vol 52 pp. 1018) Essential Dimensions of Algebraic Groups and a Resolution Theorem for $G$-Varieties Let $G$ be an algebraic group and let $X$ be a generically free $G$-variety.
We show that $X$ can be transformed, by a sequence of blowups with smooth $G$-equivariant centers, into a $G$-variety $X'$ with the following property the stabilizer of every point of $X'$ is isomorphic to a semidirect product $U \sdp A$ of a unipotent group $U$ and a diagonalizable group $A$. As an application of this result, we prove new lower bounds on essential dimensions of some algebraic groups. We also show that certain polynomials in one variable cannot be simplified by a Tschirnhaus transformation. Categories:14L30, 14E15, 14E05, 12E05, 20G10 19. CJM 1999 (vol 51 pp. 771) Stable Bi-Period Summation Formula and Transfer Factors This paper starts by introducing a bi-periodic summation formula for automorphic forms on a group $G(E)$, with periods by a subgroup $G(F)$, where $E/F$ is a quadratic extension of number fields. The split case, where $E = F \oplus F$, is that of the standard trace formula. Then it introduces a notion of stable bi-conjugacy, and stabilizes the geometric side of the bi-period summation formula. Thus weighted sums in the stable bi-conjugacy class are expressed in terms of stable bi-orbital integrals. These stable integrals are on the same endoscopic groups $H$ which occur in the case of standard conjugacy. The spectral side of the bi-period summation formula involves periods, namely integrals over the group of $F$-adele points of $G$, of cusp forms on the group of $E$-adele points on the group $G$. Our stabilization suggests that such cusp forms---with non vanishing periods---and the resulting bi-period distributions associated to ``periodic'' automorphic forms, are related to analogous bi-period distributions associated to ``periodic'' automorphic forms on the endoscopic symmetric spaces $H(E)/H(F)$. This offers a sharpening of the theory of liftings, where periods play a key role. The stabilization depends on the ``fundamental lemma'', which conjectures that the unit elements of the Hecke algebras on $G$ and $H$ have matching orbital integrals. Even in stating this conjecture, one needs to introduce a ``transfer factor''. A generalization of the standard transfer factor to the bi-periodic case is introduced. The generalization depends on a new definition of the factors even in the standard case. Finally, the fundamental lemma is verified for $\SL(2)$. Categories:11F72, 11F70, 14G27, 14L35 20. CJM 1999 (vol 51 pp. 616) Parabolic Subgroups with Abelian Unipotent Radical as a Testing Site for Invariant Theory Let $L$ be a simple algebraic group and $P$ a parabolic subgroup with Abelian unipotent radical $P^u$. Many familiar varieties (determinantal varieties, their symmetric and skew-symmetric analogues) arise as closures of $P$-orbits in $P^u$. We give a unified invariant-theoretic treatment of various properties of these orbit closures. We also describe the closures of the conormal bundles of these orbits as the irreducible components of some commuting variety and show that the polynomial algebra $k[P^u]$ is a free module over the algebra of covariants. Categories:14L30, 13A50 21. CJM 1998 (vol 50 pp. 929) Decomposition varieties in semisimple Lie algebras The notion of decompositon class in a semisimple Lie algebra is a common generalization of nilpotent orbits and the set of regular semisimple elements. We prove that the closure of a decomposition class has many properties in common with nilpotent varieties, \eg, its normalization has rational singularities. 
The famous Grothendieck simultaneous resolution is related to the decomposition class of regular semisimple elements. We study the properties of the analogous commutative diagrams associated to an arbitrary decomposition class. Categories:14L30, 14M17, 15A30, 17B45 22. CJM 1998 (vol 50 pp. 378) Equivariant polynomial automorphism of $\Theta$-representations We show that every equivariant polynomial automorphism of a $\Theta$-representation and of the reduction of an irreducible $\Theta$-representation is a multiple of the identity. Categories:14L30, 14L27
{"url":"http://cms.math.ca/cjm/msc/14L?fromjnl=cjm&jnl=CJM","timestamp":"2014-04-17T21:45:04Z","content_type":null,"content_length":"63081","record_id":"<urn:uuid:f3317daa-e974-46dc-9c9c-4fcea089ec1f>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
Consider a 10-ft × 10-ft × 10-ft cubical furnace whose top and side surfaces closely approximate black surfaces and whose base surface has an emissivity ε = 0.7. The base, top, and side surfaces of the furnace are maintained at uniform temperatures of 800 R, 1600 R, and 2400 R, respectively. Determine the net rate of radiation heat transfer between (a) the base and the side surfaces and (b) the base and the top surfaces. Also, determine the net rate of radiation heat transfer to the base surface.
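One standard way to set this problem up (a sketch only, not the textbook's worked solution) treats the black top and sides as nodes at their blackbody emissive powers and solves for the single unknown radiosity of the gray base. The view factor between directly opposed faces of a cube is taken as roughly 0.2 from the usual charts; that value, and the radiosity-network formulation itself, are assumptions of this sketch:

```python
SIGMA = 0.1714e-8                 # Stefan-Boltzmann constant, Btu/(h*ft^2*R^4)

A_base = 10.0 * 10.0              # base area, ft^2
eps = 0.7                         # emissivity of the (gray, diffuse) base
T_base, T_top, T_side = 800.0, 1600.0, 2400.0    # surface temperatures, R

F_base_top = 0.2                  # assumed view factor, base -> opposite face
F_base_sides = 1.0 - F_base_top   # remainder goes to the four side faces

def Eb(T):
    """Blackbody emissive power, Btu/(h*ft^2)."""
    return SIGMA * T**4

Eb1, Eb2, Eb3 = Eb(T_base), Eb(T_top), Eb(T_side)

# Black surfaces radiate at their emissive power, so only the base has an
# unknown radiosity J1.  Balancing the surface resistance of the base against
# its two space resistances:
#   (Eb1 - J1) * eps/(1 - eps) = F_base_top*(J1 - Eb2) + F_base_sides*(J1 - Eb3)
k = eps / (1.0 - eps)
J1 = (k * Eb1 + F_base_top * Eb2 + F_base_sides * Eb3) / (k + 1.0)

Q_base_top = A_base * F_base_top * (J1 - Eb2)        # (b) net rate, base <-> top
Q_base_sides = A_base * F_base_sides * (J1 - Eb3)    # (a) net rate, base <-> sides
Q_to_base = -(Q_base_top + Q_base_sides)             # net rate absorbed by the base

print(f"base <-> sides: {Q_base_sides:,.0f} Btu/h")
print(f"base <-> top:   {Q_base_top:,.0f} Btu/h")
print(f"net to base:    {Q_to_base:,.0f} Btu/h")
```

A negative value for either of the first two quantities simply means that the net radiation flows toward the base, which is the expected direction here since the base is the coolest surface.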
{"url":"http://www.chegg.com/homework-help/consider-10-ft-10-ft-10-ft-cubical-furnace-whose-top-side-su-chapter-21-problem-67p-solution-9780073327488-exc","timestamp":"2014-04-19T23:20:01Z","content_type":null,"content_length":"37319","record_id":"<urn:uuid:a982f0d2-7929-40b0-ae8a-73c3d691558d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
Negative Two Kids

Date: 09/27/2002 at 06:59:21
From: John Doe
Subject: Multiplying negatives

Let's say 3 x 2 = 6. Three is how many kids there are in a "group." Two is how many "groups" there are, and six is the answer. Now let's say 3 x (-2) = -6. How can you have negative two "groups" of three kids? How, Dr. Math?

Date: 09/27/2002 at 12:27:16
From: Doctor Ian
Subject: Re: Multiplying negatives

Hi John,

This is an _excellent_ question, and the answer is that in order to have a group of -2 kids, you have to decide what it means to have 2 kids. Then -2 kids is the opposite of that. It's easier to see this with things like directions. If I decide that '5 miles' means '5 miles east', then '-5 miles' means '5 miles west'. Similarly, if I decide that '5 feet' means '5 feet up', then '-5 feet' means '5 feet down'.

So in one context, '2 kids' might mean that you _collect_ enough money to let 2 kids do something (attend a baseball game, for example). Then '-2 kids' would just mean that you have to _pay_ that same amount of money, instead of collecting it. In that context, '-2 * 3 kids' would mean that you have to pay for 3 groups of -2 kids, or -2 groups of 3 kids, which are really the same.

Probably the most important thing when dealing with negative numbers is to learn not to try to assign 'meanings' to the numbers themselves, but to keep in mind that putting the '-' sign in front of numbers just lets us skip writing things like 'east' and 'west', or 'paid' and 'collected' after every number. There is absolutely _nothing_ that can be done with negative numbers that can't be done without them, so long as we're willing to tag every positive number with the appropriate units. The benefit of using 'negative' numbers is that we save time by not having to write the units over and over. But every benefit comes at a price, and the price of this benefit is that we have to be careful to remember which units we've made positive and negative... and that sometimes we end up with expressions like '-2 kids' that seem silly if looked at outside the context of a particular problem or situation.

Does that make sense?

- Doctor Ian, The Math Forum

Date: 09/28/2002 at 15:34:11
From: John Doe
Subject: Thank you (Multiplying negatives)

Thank you for clearing it up. Now I understand that there is no way to do that unless you think of a later point in time. As in a person is going to give money for 3 payments, but hasn't yet. (-5 x -3 = 15) So, I would later have +15 dollars. Thank you!
{"url":"http://mathforum.org/library/drmath/view/61297.html","timestamp":"2014-04-16T04:12:35Z","content_type":null,"content_length":"7684","record_id":"<urn:uuid:0855bd58-7ce3-4d3d-b679-76f1d746e7b6>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
Andrzej Tarlecki. The definition of extended ml: a gentle introduction Results 1 - 10 of 13 , 1999 "... Research in dependent type theories [M-L71a] has, in the past, concentrated on its use in the presentation of theorems and theorem-proving. This thesis is concerned mainly with the exploitation of the computational aspects of type theory for programming, in a context where the properties of programs ..." Cited by 70 (13 self) Add to MetaCart Research in dependent type theories [M-L71a] has, in the past, concentrated on its use in the presentation of theorems and theorem-proving. This thesis is concerned mainly with the exploitation of the computational aspects of type theory for programming, in a context where the properties of programs may readily be specified and established. In particular, it develops technology for programming with dependent inductive families of datatypes and proving those programs correct. It demonstrates the considerable advantage to be gained by indexing data structures with pertinent characteristic information whose soundness is ensured by typechecking, rather than human effort. Type theory traditionally presents safe and terminating computation on inductive datatypes by means of elimination rules which serve as induction principles and, via their associated reduction behaviour, recursion operators [Dyb91]. In the programming language arena, these appear somewhat cumbersome and give rise to unappealing code, complicated by the inevitable interaction between case analysis on dependent types and equational reasoning on their indices which must appear explicitly in the terms. Thierry Coquand’s proposal [Coq92] to equip type theory directly with the kind of , 1998 "... The programming language Standard ML is an amalgam of two, largely orthogonal, languages. The Core language expresses details of algorithms and data structures. The Modules language expresses the modular architecture of a software system. Both languages are statically typed, with their static and dy ..." Cited by 69 (9 self) Add to MetaCart The programming language Standard ML is an amalgam of two, largely orthogonal, languages. The Core language expresses details of algorithms and data structures. The Modules language expresses the modular architecture of a software system. Both languages are statically typed, with their static and dynamic semantics specified by a formal definition. - In European Symposium on Programming (ESOP , 2006 "... Abstract. Two of the most prominent features of ML are its expressive module system and its support for Damas-Milner type inference. However, while the foundations of both these features have been studied extensively, their interaction has never received a proper type-theoretic treatment. One conseq ..." Cited by 13 (12 self) Add to MetaCart Abstract. Two of the most prominent features of ML are its expressive module system and its support for Damas-Milner type inference. However, while the foundations of both these features have been studied extensively, their interaction has never received a proper type-theoretic treatment. One consequence is that both the official Definition and the alternative Harper-Stone semantics of Standard ML are difficult to implement correctly. To bolster this claim, we offer a series of short example programs on which no existing SML typechecker follows the behavior prescribed by either formal definition. It is unclear how to amend the implementations to match the definitions or vice versa. 
Instead, we propose a way of defining how type inference interacts with modules that is more liberal than any existing definition or implementation of SML and, moreover, admits a provably sound and complete typechecking algorithm via a straightforward generalization of Algorithm W. In addition to being conceptually simple, our solution exhibits a novel hybrid of the Definition and Harper-Stone semantics of SML, and demonstrates the broader relevance of some type-theoretic techniques developed recently in the study of recursive modules. 1 , 1998 "... We give a canonical program refinement calculus based on the lambda calculus and classical first-order predicate logic, and study its proof theory and semantics. The intention is to construct a metalanguage for refinement in which basic principles of program development can be studied. The idea is t ..." Cited by 6 (1 self) Add to MetaCart We give a canonical program refinement calculus based on the lambda calculus and classical first-order predicate logic, and study its proof theory and semantics. The intention is to construct a metalanguage for refinement in which basic principles of program development can be studied. The idea is that it should be possible to induce a refinement calculus in a generic manner from a programming language and a program logic. For concreteness, we adopt the simply-typed lambda calculus augmented with primitive recursion as a paradigmatic typed functional programming language, and use classical first-order logic as a simple program logic. A key feature is the construction of the refinement calculus in a modular fashion, as the combination of two orthogonal extensions to the underlying programming language (in this case, the simply-typed lambda calculus). The crucial observation is that a refinement calculus is given by extending a programming language to allow indeterminate expressions (or ‘stubs’) involving the construction ‘some program x such that P ’. Factoring this into ‘some x...’ , 1996 "... We study the problem of representing a modular specification language in a type-theory based theorem prover. Our goals are: to provide mechanical support for reasoning about specifications and about the specification language itself; to clarify the semantics of the specification language by formalis ..." Cited by 2 (1 self) Add to MetaCart We study the problem of representing a modular specification language in a type-theory based theorem prover. Our goals are: to provide mechanical support for reasoning about specifications and about the specification language itself; to clarify the semantics of the specification language by formalising them fully; to augment the specification language with a programming language in a setting where they are both part of the same formal environment, allowing us to define a formal implementation relationship between the two. Previous work on similar issues has given rise to a dichotomy between “shal-low ” and “deep ” embedding styles when representing one language within another. We show that the expressiveness of type theory, and the high degree of reflection that it permits, allow us to develop embedding techniques which lie between the “shallow ” and “deep ” extremes. We consider various possible embedding strategies and then choose one of them to explore more fully. As our object of study we choose a fragment of the Z specification language, which we encode in the type theory UTT, as implemented in the LEGO proof-checker. 
We use the encoding to study some of the operations on schemas provided by Z. One of our main concerns is whether it is possible to reason about Z specifications at the level of these operations. We prove some theorems about Z showing that, within certain constraints, this kind of reasoning is indeed possible. We then show how these metatheorems can be used to carry out formal reasoning about Z specifications. For this we make use of an example taken from the Z Reference Manual (ZRM). Finally, we exploit the fact that type theory provides a programming lan-guage as well as a logic to define a notion of implementation for Z specifications. We illustrate this by encoding some example programs taken from the ZRM. ii Declaration I declare that this thesis was composed by myself, and that the work con-tained in it is my own except where otherwise stated. Some of this work has been published previously [Mah94]. iii , 2006 "... Abstract. Refinement is a method to derive correct programs from specifications. A rich type language is another way to ensure program correctness. In this paper, we propose a wide-spectrum language mixing both approaches for the ML language. Mainly, base types are simply included into expressions, ..." Cited by 1 (0 self) Add to MetaCart Abstract. Refinement is a method to derive correct programs from specifications. A rich type language is another way to ensure program correctness. In this paper, we propose a wide-spectrum language mixing both approaches for the ML language. Mainly, base types are simply included into expressions, introducing underdeterminism and dependent types. We focus on the semantic aspects of such a language. We study three different semantics: a denotational, a deterministic operational and a nondeterministic operational semantics. We prove their equivalence. We show that this language is a conservative extension of ML. 1 "... Abstract. System F, the polymorphic lambda calculus, is well-known for its rich equational theory. In this paper, we study internalizing the equational theory of System F by extending it with a type of term-level equations. This results in a core calculus suitable for formalizing features such as Ha ..." Cited by 1 (1 self) Add to MetaCart Abstract. System F, the polymorphic lambda calculus, is well-known for its rich equational theory. In this paper, we study internalizing the equational theory of System F by extending it with a type of term-level equations. This results in a core calculus suitable for formalizing features such as Haskell’s rewriting rules mechanism or Extended ML signatures. 1 - In Eighth International Workshop on Computer Science Logic , 1995 "... We give syntax and a PER-model semantics for a typed h-calculus with subtypes and singleton types. The calculus may be seen as a minimal calculus of subtyping with a simple form of dependent types. The aim is to study singleton types and to take a canny step towards more complex dependent subtypi ..." Add to MetaCart We give syntax and a PER-model semantics for a typed h-calculus with subtypes and singleton types. The calculus may be seen as a minimal calculus of subtyping with a simple form of dependent types. The aim is to study singleton types and to take a canny step towards more complex dependent subtyping systems. Singleton types have applications in the use of type systems for specification and program extraction: given a program P we can form the very tight specification {P} which is met uniquely by P. 
Singletons integrate abbreviational definitions into a type system: the hypothesis x : {M} asserts x = M. The addition of singleton types is a non- conservative extension of familiar subtyping theories. In our system, more terms are typable and previously typable terms have more (non-dependent) types. , 1998 "... Pure functional languages are expressive tools for writing modular and reliable code. State in programming languages is a useful tool for programming dynamic systems. However, their combination yields programming languages that are difficult to model and to reason about. There have been ongoing atte ..." Add to MetaCart Pure functional languages are expressive tools for writing modular and reliable code. State in programming languages is a useful tool for programming dynamic systems. However, their combination yields programming languages that are difficult to model and to reason about. There have been ongoing attempts to find subsets of the whole languages which have good properties; in particular subsets where the programs are more modular and the side effects are controlled. The existing studies are: interference control, typing with side-effects information, and linear logic based languages. This thesis presents a new classification for a paradigm called constant program throughout a computational invariant. A program is called constant throughout an invariant R if its input-output behaviour is constant over any variations of state that satisfy the invariant R. Hence such a program behaves in an applicative way when it is executed in a context that satisfies the invariant R. The language of discussion is a pure ML fragment augmented with ref,:=, and!. Programs with side effects are modelled in terms of sets, functions, and the "... and for Mum. I whacked the back of the driver’s seat with my fist. “This is important, goddamnit! This is a true story! ” The car swerved sickeningly, thenstraightenedout....Thekidinthebacklookedlikehewasready to jump right out of the car and take his chances. Our vibrations were getting nasty—but w ..." Add to MetaCart and for Mum. I whacked the back of the driver’s seat with my fist. “This is important, goddamnit! This is a true story! ” The car swerved sickeningly, thenstraightenedout....Thekidinthebacklookedlikehewasready to jump right out of the car and take his chances. Our vibrations were getting nasty—but why? I was puzzled, frustrated. Was there no communication in this car? Had we deteriorated to the level of dumb beasts? Because my story was true. I was certain of that. And it was extremely important, I felt, for the meaning of our journey to be made absolutelyclear....Andwhenthecallcame, I wasready. One reason for studying and programming in functional programming languages is that they are easy to reason about, yet there is surprisingly little work on proving the correctness of large functional programs. In this dissertation I show
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=229260","timestamp":"2014-04-20T21:23:03Z","content_type":null,"content_length":"38380","record_id":"<urn:uuid:973e03a9-d7c3-41d1-966b-823ccd6cfe4a>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
Information on the University Assessment in English Composition, Mathematics & Reading | Examinations | Chicago State University

The University Placement Assessment Program in English Composition, Mathematics and Reading is part of a program aimed at ensuring that all Chicago State University students have achieved college-level competence in the basic skills. The results from these assessments are used to: 1. measure the student's level of proficiency, 2. identify academic deficiencies, and 3. determine eligibility for professional & major courses. New freshmen and transfer students alike must take the appropriate assessment either before or immediately after entering the University. The assessments should be taken before the end of a student's first term of attendance. You can find information about schedules and general advice at www.csu.edu/Examinations/ or call (773) 995-2481.

Every new or transfer undergraduate student must take the appropriate English assessment.* The English test consists of one writing prompt that defines an issue or problem and describes two points of view on that issue. Your score depends on your ability to write a logical, well-organized essay of several paragraphs that demonstrates control over English grammar. Students are evaluated in the following categories: 1. Focus—consistency and clarity in identifying and maintaining the main idea or point of view. 2. Content—extent to which the topic is addressed by the development of ideas and the specificity of details and examples. 3. Organization—unity and coherence achieved through logical sequence of ideas. 4. Style—how effectively the chosen language enhances the writer's purpose. 5. Conventions—control of mechanics in grammar, usage, spelling, and punctuation. Results will determine courses in which students are eligible to enroll.

CSU uses the ACT Compass computerized math placement assessment. The Math Placement Test is a multiple-choice test that evaluates students' ability levels, starting in basic algebra. Depending upon your placement assessment, you may be recommended into your first math course, receive proficiency credit, or you may be required to enroll in Math 0990 Level 1 and/or Level 2, depending upon your score. The algebra portion consists of the following skills: solving linear equations & inequalities; graphing linear systems; adding, subtracting, multiplying and dividing polynomials; factoring algebraic expressions; adding, subtracting, multiplying and dividing algebraic expressions; simplifying radical expressions; solving equations involving radicals; and using the quadratic formula. Any high school algebra I book can be used for review. A few sample problems are available at www.act.org/compass. There are also study links on the Examinations website at www.csu.edu/examinations/.

The Reading Assessment is a multiple-choice test required of all entering CSU students.* The assessment is the Compass Reading Placement Assessment, which measures two areas of reading: vocabulary and comprehension (understanding). Students who do not achieve a passing score are required to enroll in Read 1500. Additional information on the reading, math, and English (e-Write) Compass assessments may also be obtained at www.act.org/compass.

* Transfer students who have completed an associate's degree may be exempt from the assessment program as a graduation requirement.
{"url":"http://csu.edu/examinations/assessementenglishmathreading.htm","timestamp":"2014-04-20T10:46:11Z","content_type":null,"content_length":"16882","record_id":"<urn:uuid:7ca3bec8-6767-4ffc-b85f-b7c3c1692e16>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
The REAL Cost of College

This lesson plan will teach high school students the basics of budgeting, including understanding how revenue and expenses interact. The context will be budgeting for college expenses, and will yield information that the student will actually be able to use in real life. Students will put together some different budget scenarios of what it REALLY costs to go to college, using online resources that they research independently.

Grade Level: Grades 7-12
Time Allotment: 3 classes at 45 minutes per class
Subject Matter: Math, Finance, Economics

Learning Objectives
Students will:
• Understand the components of a budget
• Learn financial management
• Learn the nature of opportunity costs
• Understand the importance of self-regulation
• Understand the costs associated with going to college
• Research different college cost options
• Compute the cost of college

Standards
1. National Council of Teachers of Mathematics Principles and Standards for School Mathematics
Number and Operations
• Understand numbers, ways of representing numbers, relationships among numbers, and number systems;
• Understand meanings of operations and how they relate to one another;
• Compute fluently and make reasonable estimates.
Problem Solving
• Build new mathematical knowledge through problem solving;
• Solve problems that arise in mathematics and in other contexts;
• Apply and adapt a variety of appropriate strategies to solve problems;
• Monitor and reflect on the process of mathematical problem solving.
Representation
• Create and use representations to organize, record, and communicate mathematical ideas;
• Select, apply, and translate among mathematical representations to solve problems;
• Use representations to model and interpret physical, social, and mathematical phenomena.
2. JumpStart Coalition for Personal Financial Literacy
Money Management
Students will be able to:
1. Explain how limited personal financial resources affect the choices people make.
2. Identify the opportunity cost of financial decisions.
3. Discuss the importance of taking responsibility for personal financial decisions.
4. Apply a decision-making process to personal financial choices.
5. Design a plan for earning, spending, saving, and investing.
3. Mid-continent Research for Education and Learning (McREL) Benchmarks for Economics
Standard 1: Understands that scarcity of productive resources requires choices that generate opportunity costs

This lesson was prepared with the support of the Citigroup Foundation.
Written by: Melissa Donohue
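A scenario comparison like the one the lesson asks for can be as simple as adding expense lines and subtracting aid. The sketch below uses made-up, hypothetical figures purely to illustrate the arithmetic; students would substitute the real numbers they find in their own research:

```python
# Hypothetical figures for illustration only -- students would research real
# numbers (tuition, fees, room & board, books, transportation, aid) online.
scenarios = {
    "in-state public, living at home": {
        "tuition_and_fees": 9_800, "room_and_board": 3_500,
        "books_and_supplies": 1_200, "transportation": 1_500,
        "grants_and_scholarships": 4_000,
    },
    "private, living on campus": {
        "tuition_and_fees": 38_000, "room_and_board": 13_000,
        "books_and_supplies": 1_200, "transportation": 800,
        "grants_and_scholarships": 18_000,
    },
}

for name, items in scenarios.items():
    expenses = sum(v for k, v in items.items() if k != "grants_and_scholarships")
    net_cost = expenses - items["grants_and_scholarships"]
    print(f"{name}: total expenses ${expenses:,}, net cost after aid ${net_cost:,} per year")
```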
{"url":"http://www.thirteen.org/edonline/lessons/fe_collegecosts/index.html","timestamp":"2014-04-19T17:46:11Z","content_type":null,"content_length":"12292","record_id":"<urn:uuid:f686124e-960b-4d67-8662-5808c7ceadcc>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
Grand Unification of AGN and the Accretion and Spin Paradigms

3.1 Theoretical Arguments for the Spin Paradigm

There is significant theoretical basis for this paradigm as well. Several models of relativistic jet formation (Blandford & Znajek 1977, Punsly & Coroniti 1990) indicate that the jet power should increase as the square of the black hole angular momentum (L[jet] ∝ B[p]^2 j^2), where B[p] is the strength of the poloidal (vertical/radial) magnetic field threading the ergospheric and horizon region of the rotating hole. In this model rotational energy is extracted via a Penrose-like process: the frame-dragged accretion disk is coupled to plasma above and outside the ergosphere via the poloidal magnetic field; some plasma is pinched and accelerated upward while some disk material is diverted into negative energy (retrograde) orbits inside the ergosphere, removing some of the hole's rotational energy. The key parameter determining the efficiency of this process is the strength of the poloidal magnetic field. The standard approach (e.g., Moderski & Sikora 1996) to estimating B[p] is to set it equal to B[φ], the dominant azimuthal magnetic field component given by the disk structure equations, yielding the jet powers of equations (4) and (5) for Class B (radio galaxy/ADAF) and Class A (quasar/standard disk) objects, respectively. Note that, while the jet is not accretion-powered in this model, the efficiency of extraction is still essentially linear in the accretion rate.

Livio et al. (1999) have pointed out that taking B[p] ≈ B[φ] may greatly overestimate the jet power from this process. Using dynamo arguments they propose that a more realistic estimate for the equilibrium poloidal magnetic field is B[p] ≈ (H/R) B[φ] (equation 6), where (H/R) is the ratio of disk half-thickness to radius in the jet acceleration region. For thin disks this yields a jet power of only L[jet] = 4 x 10^44 erg s^-1 m[9]^1.1 ṁ^1.2 j^2 - less than the observed radio power of the strongest sources and much less than their inferred total jet power (see Bicknell, these proceedings, and Bicknell 1995).

However, there are several reasons for believing that even with equation (6) the field still can be quite large in many cases, and the jet power still comparable to equations (4) and (5). Firstly, for advective disks (both the accretion-starved kind and the radiation-trapped kind found at very high accretion rates) the flow is geometrically thick, with H/R ~ 1, yielding B[p] ≈ B[φ] even within the dynamo argument. Thick disks also can occur for an even broader range of accretion rate when the hole and disk spin axes are misaligned: because of the Lens-Thirring effect, the gas follows inclined orbits that do not close, creating shocks and dissipation that bloat the disk into a quasi-spherical, inhomogeneous inflow (Blandford 1994). Furthermore, even when H << R, inside the last stable orbit (or in any other region of the disk where the infall velocity suddenly approaches the free-fall speed) conservation of mass will cause a drop in density and pressure. The toroidal field may then be dynamically important, buckling upward out of the plunging accretion flow, resulting in B[p] being comparable to B[φ] (Krolik 1999).

Figure 1. Schematic representation of the four possible combinations of ṁ and j, drawn roughly to scale. The horizon interior to the hole is black, while the boundary of the ergosphere (the "static limit") is represented by an ellipse 0.5 x 1.0 Schwarzschild radius in size. Top panels depict non-rotating, Schwarzschild holes (j -> 0), bottom panels Kerr holes (j -> 1). Left panels show low accretion rate (ADAF) tori, right panels high accretion rate standard disk models.
However, in the lower right panel the region of the disk experiencing significant frame dragging is bloated (cf. Blandford 1994). Widths of poloidal magnetic field lines and jet arrows are proportional to the logarithm of their strength.

Figure 1 summarizes the main features of the accretion and spin paradigms and shows the four possible combinations of high and low accretion rate and black hole spin. It is proposed that these states correspond to different radio loud and quiet quasars and galaxies. In the figure poloidal magnetic field strengths are estimated from equation (6), but (H/R) is of order unity for the low accretion rate (ADAF) cases; for the high accretion rate cases (H/R) is calculated from the electron scattering/gas pressure disk model of Shakura & Sunyaev (1973), and disk field strengths are computed from that paper or from Narayan et al. (1998), as appropriate. The logarithms of the resulting poloidal field strengths, and corresponding jet powers, are represented as field line and jet arrow widths. In the Kerr cases, the inner disk magnetic field is significantly enhanced over the Schwarzschild cases, due in part to the smaller last stable orbit (flux conservation) and in part to the large (H/R) of the bloated disks. The high accretion rate, Schwarzschild case has the smallest field (and the weakest jet) because the disk is thin, the last stable orbit is relatively large, and the Keplerian rotation rate of the field there is much smaller than it would be in a Kerr hole ergosphere. Enhancement of the poloidal field due to the buoyancy process suggested by Krolik (1999) is ignored here because we find it not to be a factor in the simulations discussed below. If it were important, then the grand scheme proposed here would have to be re-evaluated, as the effect could produce strong jets (up to the accretion luminosity in power) even in the plunging region of Schwarzschild holes. Then even the latter would be expected to be radio loud as well (L_jet ~ 10^43-46 erg s^-1).
{"url":"http://ned.ipac.caltech.edu/level5/Meier/Meier3_1.html","timestamp":"2014-04-18T15:39:14Z","content_type":null,"content_length":"9261","record_id":"<urn:uuid:6f53df94-3f93-4b0c-99b3-6a84f8def10c>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Algebra help
July 13th 2008, 10:08 AM #1 Junior Member Apr 2008

Let V and W be vector spaces, and let S be a subset of V. Define $S^0 = \{ T \in L(V,W) : T(x) = 0 \text{ for all } x \in S \}$. Prove the following statements:
a) $S^0$ is a subspace of $L(V,W)$.
b) If $S_1$ and $S_2$ are subsets of V and $S_1 \subseteq S_2$, then $S_2^0 \subseteq S_1^0$.

What do you need to show that $S^0$ is a subspace? You need to show it is closed under vector addition and scalar multiplication. Can you show that? It will be helpful if you can show condition 2.

I will be able to show that 0 is in there and that it is closed under scalar multiplication. Mainly part b) is the one I don't get.

You want to prove $S^0_2 \subseteq S^0_1$. The way we show this is by showing if $T\in S_2^0$ then $T \in S_1^0$. By definition $T \in S_2^0$ means $T(\bold{x}) = \bold{0}$ for all $\bold{x}\in S_2$. But $S_1\subseteq S_2$ so $T(\bold{x}) = \bold{0}$ for all $\bold{x} \in S_1$. Thus, $T\in S_1^0$.
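The closure computation that the second post asks for is never actually written out in the thread; a sketch of that missing step, in the same notation and assuming the usual pointwise operations on $L(V,W)$, is the following. For $T_1, T_2 \in S^0$, any scalar $c$, and any $x \in S$:
$(T_1 + T_2)(x) = T_1(x) + T_2(x) = 0 + 0 = 0$, so $T_1 + T_2 \in S^0$;
$(c\,T_1)(x) = c\,T_1(x) = c \cdot 0 = 0$, so $c\,T_1 \in S^0$;
and the zero transformation sends every $x \in S$ to $0$, so $0 \in S^0$ and $S^0$ is nonempty. Together with the argument for part b) above, this settles both statements.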
{"url":"http://mathhelpforum.com/advanced-algebra/43588-linear-algebra-help.html","timestamp":"2014-04-20T11:30:39Z","content_type":null,"content_length":"43152","record_id":"<urn:uuid:88a34f2f-14ed-4ec8-b2d4-a4c95aec449d>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
L'Hospital's Rule November 16th 2008, 06:06 AM #1 Junior Member Sep 2008 L'Hospital's Rule I need some help solving the following problems using the L'Hospitals rule. I'd really appreciate it if someone could walk me through the steps for solving the problem. 1. limit(x (approaches) (infinity)) xsin((pi)/x) 2. limit (x (apporaches) ((pi)/4)) (1-tanx)secx I have more but I think I'll be able to solve the rest if I find out how to do these two. $\lim_{x\rightarrow\infty}\ {x \sin\ (\frac {\pi}{x}})$ When you 'substitute' $\infty$, you get: $\lim_{x\rightarrow\infty}\ {x \sin\ (\frac {\pi}{x}})\ =\ \infty \sin (\frac {\pi}{\infty}) = \ \infty \times \sin 0 = \infty \times 0 =$ undefined. So, we have to use L'Hopital's Rule. Differentiating, you get: $\lim_{x\rightarrow\infty}\ {x \cos\ \frac{\pi}{x} \cdot \frac {-\pi]{x^2}}$ Uggh, for some reason, there's a Syntax Error in my Latex Code, and I'm in a hurry, so I'll just type it out normally. Differentiating, you get: lim x ---> infinity (x . cos (pi/x) . (-pi/x^2) + sin (pi/x)) ---> (Product Rule) 'Plugging in' infinity now gives: (infinity. cos (pi/infinity) . (-pi/infinity^2) + sin (pi/infinity)) = (infinity. cos 0 . 0 + sin 0) = (infinity. 1. 0 + 0) = infinity. 0 + 0 = undefined. So, keep differentiating and 'plugging in' infinity into each derivative until you get a definite answer. For the second one, Plug in x = pi/4 first. You get (1 - tan (pi/4)) * sec (pi/4) = (1 - 1) * (1 / cos (pi/4)) = 0 * (1 / ((sqrt (2) / 2))) = 0 * sqrt 2 = 0. So the limit, as x approaches pi/4, of (1 - tan x) * sec x = 0. I hope that helps. Please forgive me if I've made any errors. Last edited by ILoveMaths07; November 16th 2008 at 07:45 AM. I thought you could only apply L'hospital's rule to fractions. Don't you have to convert it into one before you can calculate the answer? No, who told you that? My calculus professor. Also, I checked the back of my book and the answer for the first problem should be pil Sorry for that. I used L'Hopital's for the first one, but not for the second. The second one required just basic 'plugging and chugging'. It wasn't really L'Hopital's Rule because the function was defined for x = pi/4. The limit is 0 at x = pi/4. L'Hopital's is applied to functions which are not defined at a given point. You use it when you get 0/0 or inf/inf after plugging in the limit. And yes, you're right about L'Hopital's being applied to only fractions. For the first one ---> x sin (pi/x) 'Plugging in' infinity, we get... infinity * sin (pi/infinity) = infinity * sin 0 = infinity * 0 = undefined. So you can proceed with L'Hopital's... First, turn it into a fraction (as you mentioned): x sin (pi/x) = x / csc (pi/x) Differentiating numerator an denominator separately, you get: = 1 / (-csc (pi/x) . cot (pi/x) . -pi/x^2) Now 'plug in' infinity. You get: = 1 / (-csc (pi/infinity) . cot (pi/infinity) . -pi/infinity^2) = 1 / (-csc 0 . cot 0 . -0) = 1 / (undefined). For the first one : substitute $t=\frac \pi x$ hence $t \to 0$ when $x \to + \infty$ this gives : $\lim_{x \to \infty} x \sin \left(\tfrac \pi x\right)=\lim_{t \to 0} \frac \pi t \cdot \sin(t)=\pi \cdot \lim_{t \to 0} \frac{\sin(t)}{t}$ And now you can use L'Hospital's rule. Sorry for that, again! There was no way to proceed with L'Hopital's in the first place... I didn't get 0/0 or inf/inf. All I got was an undefined, even with csc, and that doesn't mean I can proceed with L'Hopital's. Sorry... I hope you got the second one, though! I hope that helps. 
Sorry, I think what my calculus professor meant was to change the problem into a fraction so that the result is either 0/0 or (infinity)/(infinity), so we can use L'Hospital's rule. Yes, I know what you meant.
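For completeness, the thread stops just short of the last step; finishing the substitution suggested in the earlier post gives the answer from the back of the book:
$\lim_{x \to \infty} x \sin\left(\tfrac{\pi}{x}\right) = \pi \cdot \lim_{t \to 0} \frac{\sin t}{t} = \pi \cdot \lim_{t \to 0} \frac{\cos t}{1} = \pi \cdot 1 = \pi,$
where L'Hospital's rule is applied to the $0/0$ form $\frac{\sin t}{t}$ (or one simply quotes the standard limit $\sin t / t \to 1$).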
{"url":"http://mathhelpforum.com/calculus/59819-l-hospital-s-rule.html","timestamp":"2014-04-19T15:00:56Z","content_type":null,"content_length":"55815","record_id":"<urn:uuid:905bdb16-7ec8-4702-b9a9-86a541da87ba>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Example of Pretest-Posttest Control Group Design Let's say, for the sake of argument, that you do not trust randomization to give you an experimental and control group that are initially equivalent (at least on average) in post-prandial blood sugar measured in the office. (I'm referring to the hypothetical situation I set up in the Posttest-Only Control Group Design discussion.) This would actually put you in good company, which is one reason why pretest-posttest approaches have been so popular. Anyway, you decide that you want to obtain initial measurements before you start your experiment. After you complete your experimental treatment, you again check everyone's blood sugar level. Then you subtract each person's post-experimental measurement from his or her pre-experimental measurement to see how much better the experimental group is doing. Even though you are looking for a drop, or loss, in the experimental group measurements, this approach has traditionally been called a gain score. As I mentioned on the previous page, many experimenters use a special type of t-test, called the "repeated measures" t-test. This test also has a number of other names in the literature. Some call it the "dependent measures" test, some the "correlated measures" test, some the "matched measures" test. Here is a chart of hypothetical pre and post measurements.
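The chart referred to above did not survive extraction, so the numbers below are made-up stand-ins; the sketch simply shows how the gain scores and the repeated-measures (paired) t-test described in the text would be computed.

import numpy as np
from scipy import stats

# Hypothetical post-prandial blood sugar readings (mg/dL) for one group of subjects;
# illustrative values only, not the page's original chart.
pre  = np.array([165, 172, 158, 181, 169, 175, 160, 178])   # before treatment
post = np.array([149, 160, 150, 170, 152, 168, 151, 165])   # after treatment

# Gain score: pre minus post, so a drop in blood sugar shows up as a positive "gain".
gain = pre - post
print("mean gain:", gain.mean())

# Repeated-measures t-test (also called dependent, correlated, or matched measures).
t_stat, p_value = stats.ttest_rel(pre, post)
print("t =", round(t_stat, 3), "p =", round(p_value, 4))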
{"url":"http://www.fammed.ouhsc.edu/tutor/ppex.htm","timestamp":"2014-04-21T09:45:24Z","content_type":null,"content_length":"1920","record_id":"<urn:uuid:08182187-1142-4207-88cc-d57a1425c53a>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
Elliptic Curve Group / Multiplication
Hiya. Can anyone shed any light on this? I'm trying to multiply a point in an elliptic curve group. The curve is [equation image] over [equation image]. As an example, I'll use [equation image]; this point is indeed in the elliptic group. I believe the rule for multiplication in this case is that [equation image] is given by: [equation image] [equation image] where [equation image]. So by my working out: [equation image]. I'm not sure how to get any further with this. The answer here is that [equation image], therefore [equation image]. But I have no idea why [equation image]. Can anyone help? Thanks for any suggestions!

Ah of course. It would be an inverse mod p then.. Thanks :)
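The formulas in this thread were rendered as images and cannot be recovered, but the resolution ("it would be an inverse mod p") is the usual point-doubling rule on a curve y^2 = x^3 + ax + b over Z_p: the slope's denominator is handled with a modular inverse rather than an ordinary division. A sketch with made-up curve parameters (not the thread's values) follows.

# Doubling a point P on y^2 = x^3 + a*x + b over Z_p.
# The curve (a, b, p) and the point P below are illustrative stand-ins.
def double_point(P, a, p):
    x, y = P
    # slope s = (3x^2 + a) / (2y) mod p; the "division" is a modular inverse.
    s = (3 * x * x + a) * pow(2 * y, -1, p) % p    # pow(n, -1, p) requires Python 3.8+
    xr = (s * s - 2 * x) % p
    yr = (s * (x - xr) - y) % p
    return (xr, yr)

a, b, p = 2, 3, 97          # hypothetical curve y^2 = x^3 + 2x + 3 over Z_97
P = (3, 6)                  # on the curve: 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97)
print(double_point(P, a, p))   # -> (80, 10), which also satisfies the curve equation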
{"url":"http://mathhelpforum.com/number-theory/188252-elliptic-curve-group-multiplication-print.html","timestamp":"2014-04-20T01:28:40Z","content_type":null,"content_length":"8209","record_id":"<urn:uuid:ebbb62f7-d80e-4b2c-af33-3e99b8daa912>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US6807310 - Transformation of image parts in different domains to obtain resultant image size different from initial image size This invention was made with Government support under Contract Number ONR: N00014-96-1-0502 UFAS No. 1-5-20764 awarded by Office of Naval Research. The Government may have certain rights in the This invention relates generally to image processing and more particularly to changing size of an image in a compressed domain. A typical system stores data such as multimedia content for video, in a compressed digital format. To perform processing on this data, the typical system usually must uncompress the data, process the data, and then recompress the data. One shortcoming of such a system is the relatively high expenditure of time required for uncompressing and recompressing the data to allow performance of the data Thus, a need exists for enhanced processing of data that is stored by a system in a compressed format. Pursuant to the present invention, shortcomings of the existing art are overcome and additional advantages are provided through the provision of transformation of image parts in different domains to obtain a resultant image size different from an initial image size. The invention in one embodiment encompasses a method. A first plurality of image parts in a first domain that comprise an initial size, is received. The first plurality of image parts is transformed to obtain a second plurality of image parts in a second domain different from the first domain. A plurality of image parts based on the second plurality of image parts is transformed to obtain a resultant plurality of image parts in the first domain that comprise a resultant size different from the initial size. Another embodiment of the invention encompasses a system. The system includes a transform component that receives a first plurality of image parts in a first domain that comprise an initial size. The system includes a transform component that transforms the first plurality of image parts to obtain a second plurality of image parts in a second domain different from the first domain. The system includes a transform component that transforms a plurality of image parts based on the second plurality of image parts to obtain a resultant plurality of image parts in the first domain that comprise a resultant size different from the initial size. A further embodiment of the invention encompasses an article. The article includes a computer-readable signal-bearing medium. The article includes means in the medium for receiving a first plurality of image parts in a first domain that comprise an initial size. The article includes means in the medium for transforming the first plurality of image parts to obtain a second plurality of image parts in a second domain different from the first domain. The article includes means in the medium for transforming a plurality of image parts based on the second plurality of image parts to obtain a resultant plurality of image parts in the first domain that comprise a resultant size different from the initial size. An additional embodiment of the invention encompasses a method. First and second image parts are received. Sparse matrix multiplication is applied to each of the first and second image parts. Yet another embodiment of the invention encompasses a system. The system includes a transform component that receives first and second image parts. 
The system includes a transform component that applies sparse matrix multiplication to each of the first and second image parts. A still further embodiment of the invention encompasses an article. The article includes a computer-readable signal-bearing medium. The article includes means in the medium for receiving first and second image parts. The article includes means in the medium for applying sparse matrix multiplication to each of the first and second image parts. FIG. 1 is a functional block diagram of one example of a system that includes a device in communication with a number of servers that include a transform component. FIG. 2 is, a functional block diagram of one example of an encoder and a decoder employed by an exemplary server that includes one example of the transform component of the system of FIG. 1. FIG. 3 depicts exemplary logic that is employed by one example of the transform component of the system of FIG. 1, for decreasing the size of a one-dimensional image by a preselected factor. FIG. 4 depicts exemplary logic that is employed by one example of the transform component of the system of FIG. 1, for decreasing the size of a one-dimensional image by a preselected factor. FIG. 5 depicts exemplary logic that is employed by one example of the transform component of the system of FIG. 1, for decreasing the size of a two-dimensional image by a preselected factor. FIG. 6 depicts exemplary logic that is employed by one example of the transform component of the system of FIG. 1, for increasing the size of a one-dimensional image by a preselected factor. FIG. 7 depicts exemplary logic that is employed by one example of the transform component of the system of FIG. 1, for increasing the size of a two-dimensional image by a preselected factor. FIG. 8 depicts exemplary logic that is employed by one example of the transform component of the system of FIG. 1, for decreasing the size of a one-dimensional image through employment of sparse matrix multiplication. FIG. 9 depicts exemplary logic that is employed by one example of the transform component of the system of FIG. 1, for decreasing the size of a two-dimensional image through employment of sparse matrix multiplication. FIG. 10 depicts exemplary logic that is employed by one example of the transform component of the system of FIG. 1, for increasing the size of a one-dimensional image through employment of sparse matrix multiplication. FIG. 11 depicts exemplary logic that is employed by one example of the transform component of the system of FIG. 1, for increasing the size of a two-dimensional image through employment of sparse matrix multiplication. FIG. 12 depicts exemplary logic that is employed by one example of the transform component of the system of FIG. 1, for decreasing the size of an image through employment of a plurality of preselected variables of a factor. FIG. 13 depicts exemplary logic that is employed by one example of the transform component of the system of FIG. 1, for increasing the size of an image through employment of a plurality of preselected variables of a factor. 
In accordance with the principles of the present invention, a first plurality of image parts in a first domain that comprise an initial size is transformed to obtain a second plurality of image parts in a second domain different from the first domain, and a plurality of image parts based on the second plurality of image parts is transformed to obtain a resultant plurality of image parts in the first domain that comprise a resultant size different from the initial size. For example, sparse matrix multiplication is applied to each of first and second image parts. A detailed discussion of one exemplary embodiment of the invention is presented herein, for illustrative purposes. Turning to FIG. 1, system 100, in one example, includes a plurality of components such as computer software and/or hardware components. For instance, a number of such components can be combined or divided. System 100 in one example employs at least one computer-readable signal-bearing medium; One example of a computer-readable signal-bearing medium for system 100 comprises a recordable data storage medium 102 such as a magnetic, optical, biological, and/or atomic data storage medium. In another example, a computer-readable signal-bearing medium for system 100 comprises a modulated carrier signal transmitted over a network comprising or coupled with system 100, for instance, a telephone network, a local area network (“LAN”), the Internet, and/or a wireless network. An exemplary component of system 100 employs and/or comprises a series of computer instructions written in or implemented with any of a number of programming languages, as will be appreciated by those skilled in the art. Referring to FIG. 1, one example of system 100 includes a number of servers 104 coupled with a client such as device 106. For instance, servers 104 include a number of transform components 107 such as transform component 109. Servers 104 in one example comprise servers 108 and 110. Server 110 in one example comprises a Web server. For instance, device 106 includes display 114. Device 106 in one example comprises a Palm™ handheld computing device offered by Palm, Inc. (Corporate Headquarters 5470 Great America Parkway, Santa Clara, Calif., U.S.A. 95052; In one example, server 110 stores Web page 112 that is to be displayed on display 114 of device 106. Again referring to FIG. 1, Web page 112 in one example comprises data 116 such as multimedia data that includes images and/or video in a compressed format. One example of a compressed format for data 116 includes a standard of the Joint Photographic Experts Group (“JPEG”) for images. Another example of a compressed format for data 116 includes a standard of the Moving Pictures Expert Group (“MPEG”) such as MPEG-1 or MPEG-2 for digital video. A further example of a compressed format for data 116 includes H.263. In one example, data 116 is unsuitable for direct display on display 114 of device 106, for instance, because data 116 is too large in size and/or device 106 is too limited in bandwidth. In one example in which data 116 is unsuitable for direct display on display 114 of device 106, server 108 advantageously provides data 122 as a differently-sized version of data 116 that is suitable for display on display 114 of device 106, as described herein. Referring further to FIG. 1, server 108 in STEP 118 fetches data 116 in a compressed format on server 110. 
Server 108, for instance, employs transform component 109 in STEP 120 to convert data 116 to data 122 in a compressed format and with a size suitable for display on display 114 of device 106. For instance, data 116 comprises one size of an image, and data 122 comprises a different size of the image, as described herein. At STEP 124 in one example, server 108 communicates data 122 to device 106, for display of data 122 on display 114 of device 106. Still referring to FIG. 1, server 108 and server 110, in one example, comprise a same server. In a further example, server 108 and server 110 comprise separate servers. In another example, server 108 resides on server 110. Turning to FIG. 2, server 108 in one example comprises encoder 202 and decoder 204. In one example, server 108 employs a spatially-scalable mode of a standard of the Moving Pictures Expert Group (“MPEG”) such as the MPEG-2 video compression standard. Encoder 202 and/or decoder 204 in one example includes spatial interpolator 206. For instance, system 200 comprises encoder 202 and decoder 204 . In one example, system 200 employs a spatially-scalable mode of a standard of the Moving Pictures Expert Group (“MPEG”) such as the MPEG-2 video compression standard. Now referring to FIGS. 2-3, spatial interpolator 206, for instance, employs exemplary logic 300 to change image 303 to a differently-size image 350. For example, logic 300 serves to change size 313 of image 315 to size 337 of image 338, as described herein. In a further example, spatial interpolator 206 comprises transform component 109 (FIG. 1). Spatial interpolator 206 of server 108 in FIG. 2, in one example, employs an upsizing scheme. Server 108 of FIG. 1, in another example, employs a downsizing scheme. For instance, referring to FIGS. 2, 7, and 11, spatial interpolator 206 employs exemplary logic 300 to change size 708 of image 706 to size 704 of image 702, or to change size 1108 of image 1106 to size 1104 of image 1102, as described herein. Referring to FIGS. 2, 7, and 11, in one example encoder 202 employs changed-size image 350 and a motion-compensated previously-coded frame, to obtain an estimation of the original (e.g., larger) image. In one example, employment of logic 300 in spatial interpolator 206 advantageously increases the Peak Signal-to-Noise-Ratio (“PSNR”). In a further example; employment of logic 300 serves to advantageously improve estimation of the original image. In another example, employment of logic 300 in 206 advantageously obviates the previous need to use the motion-compensated previously-coded frame for prediction in coding the enhancement layer, with desirably little or no loss in quality, for instance, while advantageously avoiding a certain complexity otherwise associated with implementation of motion compensation in hardware for encoder 202. Referring now to FIGS. 1-3, transform component 109 in one example of server 108 employs changed-size image 321 and a motion-compensated previously-coded frame, to obtain an estimation 331 of the original image 315. In one example, employment of logic 300 in spatial interpolator 206 advantageously increases the Peak Signal-to-Noise-Ratio (“PSNR”). In a further example, transform component 109 serves to advantageously improve estimation of the original image. 
In another example, transform component 109 advantageously obviates the previous need to use the motion-compensated previously-coded frame for prediction in coding the enhancement layer, with desirably little or no loss in quality, for instance, while advantageously avoiding a certain complexity otherwise associated with implementation of motion compensation in hardware for encoder 202. Referring to FIG. 3, logic 300 in one example accesses image 301 such as one-dimensional image 302. One-dimensional image 302, for instance, comprises a concatenation 305 of a plurality of sets 304 of samples 306. For instance, sets 304 of samples 306 comprise one or more preselected numbers 312 of samples 306. An exemplary instance of preselected numbers 312 of samples 306 comprises eight samples 306. Samples 306 are located in domain 308. Domain 308 in one example comprises a spatial domain. Again referring to FIG. 3, logic 300 in one example employs STEP 310 to receive samples 306. Logic 300 employs STEP 314 to transform samples 306 from domain 308 to domain 316. Domain 316 is different from domain 308. Examples of a transform that logic 300 employs include a discrete cosine transform (“DCT”), a discrete sine transform, a Fourier transform, and a Hadamard transform. Referring still to FIG. 3, in one example, one of domains 308 and 316 comprises a transform domain such as a discrete cosine transform (“DCT”) domain, a discrete sine transform domain, a Fourier transform domain, or a Hadamard transform domain, and the other of domains 308 and 316 comprises a spatial domain, as will be understood by those skilled in the art. Referring again to FIG. 3, logic 300 employs STEP 318 to perform, for instance, an inverse transform. STEP 318 in one example applies an inverse (e.g., four) point discrete cosine transform 320 in domain 316 to sets 319 of (e.g., four) low-frequency coefficients 322 of samples 317 obtained in STEP 314. For instance, STEP 318 yields sets 326 of (e.g., four) samples 324 in domain 308. In one example, low-frequency coefficients 322 comprise coefficients that correspond to k=0 through 3 in exemplary Equation (1.2), described further below. Further referring to FIG. 3, logic 300, in one example, employs STEP 328 in domain 308 to concatenate sets 326 of samples 324 obtained in STEP 318. STEP 328 in one example yields samples 330 in domain 308. STEP 332 applies transform 334 to samples 330, to obtain samples 336 in domain 316. In one example, logic 300 serves to advantageously decrease size 313 of image 315 to size 337 of image 338, for instance, by causing size 337 of image 338 and size 313 of image 315 to comprise a ratio therebetween of two. Those skilled in the art, from review of the discussion above of employment of logic 300 of FIG. 3 for exemplary decreasing size of a one-dimensional image and the remaining discussion herein taken in conjunction with FIGS. 3-13, will appreciate embodiments such as employment of logic 300 of any of FIGS. 4-5, 8-9, and 12 for exemplary decreasing of size of an image, employment of logic 300 of any of FIGS. 6-7, 10-11, and 13 for exemplary increasing of size of an image, employment of logic 300 of any of FIGS. 5, 7, 9, and 11-13 for exemplary changing of size of a two-dimensional image, employment of logic 300 of FIG. 12 for exemplary decreasing of size of an image by a non-integer factor and/or a factor other than a power of two, and employment of logic 300 of FIG. 
13 for exemplary increasing of size of an image by a non-integer factor and/or a factor other than a power of two. Turning to FIG. 4, logic 300 serves to decrease size 313 of image 315 by a preselected factor 402, for instance, a factor of two. For instance, INPUT in FIG. 4 comprises receipt of image 303 in (e.g., transform) domain 316. STEP 328 of logic 300 of FIG. 4 in one example employs a scaling factor of 1/2 through incorporation of the scaling factor of 1/2 in inverse transform 320 of STEP 318, as will be appreciated by those skilled in the art. Turning to FIG. 5, logic 300 decreases size 502 of image 504 in obtaining size 506 of image 508. Logic 300 in one example obtains a preselected ratio of size 506 of image 508 relative to size 502 of image 504. For instance, the ratio between size 506 and size 502 comprises a factor of two. Still referring to FIG. 5, logic 300 employs STEP 510. STEP 510 in one example comprises applying an N/2×N/2 inverse transform 512 to N/2×N/2 low-frequency sub-blocks 514 of each N×N block 516. N×N blocks 516 represent samples 518. In one example, low-frequency sub-block 514 comprises an N/2×N/2 sub-block of N×N block 516, for instance, that comprises transform coefficients in domain 316. STEP 510 in one example performs an inverse transform from domain 316 to domain 308. For instance, STEP 510 yields N/2×N/2 blocks 520 in domain 308. STEP 522 serves to transform blocks 520 to image 508 in domain 316 through employment of an N×N transform. Turning to FIG. 6, logic 300 serves to obtain size 602 of image 604 that is increased relative to size 606 of image 608. Logic 300 in one example operates on image 608 that comprises a one-dimensional image. Turning to FIG. 7, logic 300 in one example yields size 702 of two-dimensional image 704 that is increased relative to size 706 of two-dimensional image 708. STEPS 710, 712, and 714 cause the increased size 702 directly in domain 316. For instance, domain 316 comprises a transform domain. STEP 710 in one example comprises applying an inverse N×N transform. STEP 712 in one example comprises separation of image 716 in domain 308 into four N/2×N/2 blocks 718. STEP 714 in one example comprises applying an N/2×N/2 transform 720 to each block 718. Turning to FIG. 8, logic 300 outputs (e.g., in a transform domain) size 802 of image 804 that is decreased relative to size 806 of image 808. For example, image 804 comprises a one-dimensional image, and image 808 comprises a one-dimensional image. In one example, logic 300 employs matrices 810 and 812 that are (e.g., relatively) sparse. For instance, matrices 810 and 812 comprise sparse matrices in which a majority of entries 816 have a value of zero. In another example, logic 300 employs matrices 810 and 812 that are very sparse. For instance, matrices 810 and 812 comprise very sparse matrices in which nearly and/or approximately seventy-five percent of entries 816 have a value of zero. Such sparseness of matrices 810 and 812, in one example, serves to provide (e.g., tremendous) computational savings in multiplication involving matrices 810 and 812. Additional structure of matrices 810 and 812, in one example, serves to provide further computational savings in, for instance, an addition operation of STEP 814, as described herein. Turning to FIG. 9, logic 300 employs matrices 902, 904, 906, and 908 that are sparse in one example. Logic 300 serves to obtain size 914 of image 916 that is decreased relative to size 918 of image 920. 
For example, image 916 comprises a two-dimensional image, and image 920 comprises a two-dimensional image. Logic 300 in one example employs transpose 910 of a matrix and transpose 912 of a matrix. For instance, logic 300 employs matrices 902, 904, 906, and 908 for pre-multiplication, and logic 300 employs transpose 910 of a matrix and transpose 912 of a matrix for post-multiplication, as will be appreciated by those skilled in the art.

Referring to FIG. 10, logic 300 in one example results in size 1002 of image 1004 such as a one-dimensional image, that is increased relative to size 1006 of image 1008 such as a one-dimensional image. Logic 300 in one example employs transpose 1010 of a very sparse matrix and transpose 1012 of a very sparse matrix. Referring to FIG. 11, logic 300 obtains size 1102 of image 1104 such as a two-dimensional image, that is increased relative to size 1106 of image 1108 such as a two-dimensional image.

Referring now to FIGS. 12-13, logic 300 changes size 1202 of image 1204 in obtaining size 1206 of image 1208. Logic 300 in one example obtains a preselected ratio of size 1206 of image 1208 relative to size 1202 of image 1204. For instance, the ratio between size 1206 and size 1202 comprises a factor of two. In another example, the ratio between size 1206 and size 1202 excludes a factor of two. In a further example, the ratio between size 1206 and size 1202 comprises a non-integer ratio. Referring to FIG. 12, logic 300 in one example serves to obtain a relative decrease in size of an image by a factor of (M1×M2)/(N1×N2). For instance, logic 300 serves to decrease image height by a factor of M1/N1 and decrease image width by a factor of M2/N2. So, in one example, selection of values for variables M1, M2, N1, and N2 of the factor (M1×M2)/(N1×N2) provides increased control and/or tunability for system 100 (FIG. 1). Referring to FIG. 13, logic 300 in one example serves to increase size of an image by a factor of (M1×M2)/(N1×N2). For instance, logic 300 serves to increase image height by a factor of M1/N1 and increase image width by a factor of M2/N2. So, in one example, selection of values for variables M1, M2, N1, and N2 of the factor (M1×M2)/(N1×N2) provides increased control and/or tunability for system 100 (FIG. 1).

For explanatory purposes, a detailed technical description is now presented. T^(N) comprises an N×N discrete cosine transform matrix. In one example, T^(N)={t(k,n)}, where t(k,n) denotes the matrix entry in the k-th row and n-th column, according to the following exemplary Equation (1.1).

$t(k,n) = \sqrt{\tfrac{1}{N}}, \quad k = 0,\; 0 \le n \le N-1; \qquad t(k,n) = \sqrt{\tfrac{2}{N}}\,\cos\!\left(\frac{\pi(2n+1)k}{2N}\right), \quad 1 \le k \le N-1,\; 0 \le n \le N-1. \quad (1.1)$

In addition, the one-dimensional DCT of a sequence u(n), 0 ≤ n ≤ N−1, is given by v(k), 0 ≤ k ≤ N−1, which is defined in the following exemplary Equation (1.2).

$v(k) = \sum_{n=0}^{N-1} t(k,n)\,u(n), \quad 0 \le k \le N-1. \quad (1.2)$

One example employs vectors u and v to denote the sequences u(n) and v(k), and represents Equation (1.2) by the following exemplary Equation (1.3).

$v = T^{(N)}\, u. \quad (1.3)$

Assume that a matrix A is given. The transpose A′ of the matrix A in one example comprises another matrix whose i-th column is the same as the i-th row of A. For example, the (j,i)th entry of A′ comprises the (i,j)th entry of A. The DCT in one example is a unitary transform. So, taking the transpose of matrix T^(N) in one example results in an inverse DCT matrix T^(N)′.
Given vector v of Equation (1.3), one can, for instance, obtain vector u through the following exemplary Equation (1.4).

$u = T^{(N)\prime}\, v. \quad (1.4)$

As noted above with reference to Equation (1.1), T^(N) comprises the N×N DCT matrix. In another example, T^(N/2) comprises an N/2×N/2 DCT matrix. An N1×N2 matrix comprises N1 rows each of width N2. One example of logic 300 obtains the N1×N2 DCT of an N1×N2 two-dimensional signal as follows. Logic 300, for instance, first employs the matrix T^(N2) to take the N2-point DCT of each row of the signal to obtain data. Logic 300, for instance, second applies an N1-point DCT to each column of this data through employment of the matrix T^(N1).

An exemplary definition of the matrices T[L], T[R], T[s], C, D, E, and F, is now presented. In one example, "T[L]" denotes a matrix that comprises the N/2 leftmost columns of the matrix T^(N). In a further example, "T[R]" denotes a matrix that comprises the N/2 rightmost columns of the matrix T^(N). So, in one example, T^(N)=[T[L] T[R]]. Further, in one example, each of matrices T[L] and T[R] comprises a size of N×N/2. One example assumes that N is divisible by two. In a further example, "T[s]" denotes the matrix T^(N/2). So, in one example, the following exemplary Equations (1.5, 1.6, 1.7, and 1.8) result.

$C = \frac{1}{\sqrt{2}}\cdot\frac{1}{2}\left((T_L T_s^t) + (T_R T_s^t)\right) \quad (1.5)$
$D = \frac{1}{\sqrt{2}}\cdot\frac{1}{2}\left((T_L T_s^t) - (T_R T_s^t)\right) \quad (1.6)$
$E = \sqrt{2}\cdot\frac{1}{2}\left((T_L T_s^t) + (T_R T_s^t)\right) \quad (1.7)$
$F = \sqrt{2}\cdot\frac{1}{2}\left((T_L T_s^t) - (T_R T_s^t)\right) \quad (1.8)$

In one example, the matrices T[L]T[s]′ and T[R]T[s]′ are sparse, since the matrices T[L]T[s]′ and T[R]T[s]′ have nearly fifty percent of their entries as zeros. In a further example, the matrices C, D, E, and F are very sparse, since the matrices C, D, E, and F have nearly seventy-five percent of their entries as zeros. Similar conclusions hold for other transforms that satisfy certain orthogonality and symmetry properties like the DCT in one example when other of the matrices herein are defined in a like manner (e.g., by appropriately partitioning T^(N) into T[L] and T[R]). Examples of such transforms include the Fourier transform and the Hadamard transform. An illustrative example of the matrix C in Equation (1.5) for the case of N=8 follows below.

0.5000  0       0       0.0000
0.0000  0.2079  0.0000  0.0114
0       0       0       0
0.0000  0.3955  0.0000  0.0488
0       0       0.5000  0
0.0000  0.1762  0.0000  0.2452
0       0       0       0
0       0.1389  0       0.4329

In the above example, matrix C has only ten non-zero entries out of a total of 8*4=32 entries. So, nearly seventy-five percent of the elements of matrix C are zeros.

For illustrative purposes, now is presented a detailed discussion of decreasing of image size in the compressed domain. One example serves to decrease the size of an image in a situation where the "original" (e.g., input) image is given in a compressed domain with an expectation that the output image will be provided in the compressed domain. The compressed domain in one example corresponds to an N×N block-DCT domain. For instance, the Joint Photographic Experts Group ("JPEG") still image compression standard and the Moving Picture Experts Group ("MPEG") video compression standards MPEG-1 and MPEG-2, employ an 8×8 DCT. In one example, referring to FIGS. 3-4, an input to logic 300 comprises a one-dimensional image or a one-dimensional signal. For illustrative purposes, FIG. 3 specifically depicts a case of N=8, such that the signal is given in terms of 8×8 block DCT coefficients. Those skilled in the art will understand treatment of any general N, such as is represented in FIG.
4, from the discussion herein of the exemplary case of N=8 that is represented in FIG. 3. Referring to FIG. 3, an input to logic 300 at STEP 372 in one example comprises image 315. For instance, image 315 comprises an N×N DCT domain representation of a spatial domain signal that comprises image 301. As preparation in one example, STEP 370 splits the spatial domain signal into sets 304 of samples 306 of size. N. FIG. 3 depicts one example of two adjacent sets 304 of samples 306 each of size N. Logic 300 of FIG. 3 denotes these samples 306 of size N as b[1 ]and b[2]. Logic 300 of FIG. 3 in one example treats all vectors as column vectors. Logic 300 of FIG. 3 in one example denotes row vectors as transposes of column vectors. Logic 300 of FIG. 3 in one example applies the N×N DCT transform to each of b[1 ]and b[2 ]to obtain B[1 ]and B[2], respectively. So, in one example, B[1]= T^(N) b[1 ]and B[2]=T^(N) b[2]. The input image 315 in one example comprises N×N block DCT coefficients. In one example, B[1 ]and B[2 ]represent the N×N block DCT coefficients for two instances of the size N blocks of image 301. B[1 ]and B[2 ]in one example comprise image 315 in domain 316, for instance, the N×N DCT domain. For example, sets 319 of low-frequency coefficients 322 of samples 317 comprise N/2 low-frequency samples of B[1 ]and B[2]. Again referring to FIG. 3, the N/2 low-frequency samples of B[1 ]and B[2], in one example, comprise samples that correspond to k=0 through N/2−1 of Equation (1.2), above. In another example, the low-frequency samples correspond to a different set of values for the k indices, such as for an instance of logic 300 that employs a transform other than DCT. For explanatory purposes, this discussion refers to a hatted symbol of logic 300 represented in FIGS. 3-13 by employing an “h” with the symbol. For example, in this discussion “Bh[1]” corresponds to “B[1]” of logic 300 and “bh[1]” corresponds to “b[1]” of logic 300. Referring further to FIG. 3, logic 300 in one example employs the N/2 low-frequency samples of each of B[1 ]and B[2 ]to obtain Bh[1 ]and Bh[2], respectively. Still referring to FIG. 3, logic 300 in one example applies the N/2×N/2 inverse DCT transform using the transpose of matrix T^(N/2). Logic 300 employs, for instance, Equation (1.4), above. For example, logic 300 substitutes T^(N/2) for T of Equation (1.4). Logic 300 in one example applies the N/2×N/2 inverse DCT transform to each of Bh[1 ]and Bh[2 ]to obtain bh[1 ]and bh[2], respectively, as sets 326 of samples 324, STEP 318. In one example, bh[1 ]and bh[2 ]represent downsized versions of b[1 ]and b[2], respectively, in the spatial domain, as one example of domain 308. In a further example, logic 300 concatenates bh[1 ]and bh[2 ]to obtain bh=(bh[1 ]bh[2]), as one example of samples 330. For example, bh is in the spatial domain, as one example of domain 308. For instance, bh represents a downsized version of (b[1 ]b[2]) that comprises one example of image 301 as one-dimensional image 302. Logic 300 in one example applies an N×N transform (e.g., a “forward” transform) to bh to obtain a (e.g., final) output block Bh, as one example of N samples 336 in domain 316, for instance, the transform domain. So, Bh in one example represents a downsized version of the original signal in the transform domain. The discussion above leads to a number of exemplary equations, including, for example, the following exemplary Equation (2.1.1). 
$Bh = T^{(N)}(bh) = T^{(N)}\,[bh_1^t\ \ bh_2^t]^t = [T_L\ \ T_R]\,[bh_1^t\ \ bh_2^t]^t = [T_L\ \ T_R]\,[(T_s^t Bh_1)^t\ \ (T_s^t Bh_2)^t]^t = (T_L T_s^t)\,Bh_1 + (T_R T_s^t)\,Bh_2 \quad (2.1.1)$

From the properties of the DCT matrices, it can be shown that T[L]T[s]^t and T[R]T[s]^t are sparse, since, for example, nearly fifty percent of the entries of each of these matrices are zero. So, in one example, Equation (2.1.1) allows advantageously fast computations. Another example allows an advantageous further increase in computational speed. In one example, the matrices T[L]T[s]^t and T[R]T[s]^t have identical entries except for a possible sign change. Specifically, if i+j divided by two yields an integer result, then the (i,j)th element of T[L]T[s]^t is the same as the (i,j)th element of T[R]T[s]^t. Otherwise, if i+j divided by two yields a non-integer result, then the (i,j)th element of T[L]T[s]^t is opposite in sign to the (i,j)th element of T[R]T[s]^t. So, one example yields even greater sparseness of matrices C, D, E, and F of Equations (1.5)-(1.8), since about 75% of the entries of matrices C, D, E, and F in the example are zeros. The example employs a factor of 2 in defining the matrices C, D, E, and F to account for different sizes of the DCTs. Solving for T[L]T[s]^t and T[R]T[s]^t from Equations (1.5) and (1.6) and substituting the result in Equation (2.1.1) yields the following exemplary Equation (2.1.2).

Bh = C(Bh[1] + Bh[2]) + D(Bh[1] − Bh[2])   (2.1.2)

Logic 300 of FIG. 8 in one example implements Equation (2.1.2). For example, logic 300 outputs Bh as image 804 that has size 802. Logic 300 in one example serves to decrease size 802 of image 804 relative to size 806 of image 808 that comprises input to logic 300. In a further example, implementation of Equation (2.1.2) in logic 300 of FIG. 8 allows advantageously fast computation(s) in execution of logic 300. The following reasons in one example contribute to an increased computational speed for logic 300. As a first exemplary reason, referring to FIG. 8, matrices 810 and 812 are (e.g., very) sparse. So, employment of matrices 810 and 812 in logic 300 makes matrix multiplication of Equation (2.1.2) computationally inexpensive. Sparseness in one example of matrices 810 and 812 results from numerous occurrences of zero for entries of matrices 810 and 812. So, logic 300 desirably need not perform a significant number of multiplications that would otherwise be performed absent the numerous occurrences of zero in matrices 810 and 812, as will be appreciated by those skilled in the art. As a second exemplary reason, referring again to FIG. 8, matrices 810 and 812 in one example comprise a particular structure that guarantees that some of the operands of an addition operation in STEP 814 are zero. So, logic 300 in one example comprises an advantageously-decreased number of addition operations.

FIG. 5 illustrates one example of employment of logic 300 with two-dimensional images, such as images 504 and 508. For instance, image 504 comprises N×N transform coefficients. Logic 300 in one example receives image 504 as input and produces image 508 as output. In one example, logic 300 divides image 504 into N×N blocks 516. As depicted in FIG. 5 for illustrative purposes, logic 300 in one example divides image 504 into four (e.g., adjacent) instances of N×N block 516.
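The one-dimensional identity in Equation (2.1.2) is easy to check numerically; the two-dimensional case of FIG. 5 then continues below. The sketch builds the orthonormal 8-point and 4-point DCT matrices, forms C and D as read from Equations (1.5)-(1.6), and compares the sparse-matrix route against the spatial route of FIGS. 3-4. The 1/sqrt(2) factor applied in the spatial route is the brightness-preserving scale implied by those definitions (an interpretation of this text), and the helper function and variable names are this sketch's own, not the patent's.

import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix T^(n), per Equation (1.1).
    t = np.zeros((n, n))
    for k in range(n):
        scale = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
        for m in range(n):
            t[k, m] = scale * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    return t

N = 8
T8, T4 = dct_matrix(N), dct_matrix(N // 2)
TL, TR = T8[:, :N // 2], T8[:, N // 2:]           # leftmost / rightmost N/2 columns

C = (1 / np.sqrt(2)) * 0.5 * (TL @ T4.T + TR @ T4.T)   # Equation (1.5), as read here
D = (1 / np.sqrt(2)) * 0.5 * (TL @ T4.T - TR @ T4.T)   # Equation (1.6)
print("entries of C that are zero:", int(np.sum(np.isclose(C, 0))), "of", C.size)

# Spatial route (FIGS. 3-4): keep 4 low-frequency coefficients of each block,
# 4-point inverse DCT with the 1/sqrt(2) scale, concatenate, then 8-point DCT.
rng = np.random.default_rng(0)
b1, b2 = rng.normal(size=N), rng.normal(size=N)
B1, B2 = T8 @ b1, T8 @ b2
bh = np.concatenate([(1 / np.sqrt(2)) * T4.T @ B1[:4],
                     (1 / np.sqrt(2)) * T4.T @ B2[:4]])
Bh_direct = T8 @ bh

# Compressed-domain route, Equation (2.1.2).
Bh_fast = C @ (B1[:4] + B2[:4]) + D @ (B1[:4] - B2[:4])
print("routes agree:", np.allclose(Bh_direct, Bh_fast))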
In addition, logic 300 in one example employs the four instances of N×N block 516 to obtain a single instance of N×N block 530 as image 508, for example, as output Bh of logic 300. In one example, logic 300 changes the four instances of N×N block 516 to the single instance of N×N block 530 to obtain a reduction in size by a factor of two in each dimension. In one example, low-frequency sub-blocks 514 each comprise size N/2×N/2 that correspond to k=0 through N/2−1 in Equation (1.2), for instance, in vertical and horizontal directions. STEP 510 applies an inverse N/2×N/2 transform to each N×N block 516 to obtain four N/2×N/2 blocks 520, for example, bh[1], bh[2], bh[3], bh[4], in (e.g., spatial) domain 308. In one example, the four N/2×N/2 blocks 520 represent a downsized version of original image 504. STEP 521 in one example concatenates the four N/2×N/2 blocks 520 to obtain a single instance bh of N×N block 528. STEP 522 applies an N×N transform to the single instance bh of N×N block 528 to obtain the output instance Bh of N×N block 530 as image 508 in (e.g., transform) domain 316.

An exemplary determination of mathematical equations corresponding to the discussion above, plus a determination that the resultant equations can be manipulated to involve matrix multiplications with only sparse matrices, allows one to obtain a computationally efficient way to implement logic 300. In one example, application of the DCT to an N×N (e.g., two-dimensional) block corresponds to pre- and post-multiplication by the matrix T. So, one example describes the operations above by the following exemplary mathematical description.

$Bh = T^{(N)}\, bh\, T^{(N)t} = [T_L\ \ T_R]\begin{bmatrix} bh_1 & bh_2 \\ bh_3 & bh_4 \end{bmatrix}\begin{bmatrix} T_L^t \\ T_R^t \end{bmatrix} = [T_L\ \ T_R]\begin{bmatrix} T_s^t Bh_1 T_s & T_s^t Bh_2 T_s \\ T_s^t Bh_3 T_s & T_s^t Bh_4 T_s \end{bmatrix}\begin{bmatrix} T_L^t \\ T_R^t \end{bmatrix} = \left((T_L T_s^t)Bh_1 + (T_R T_s^t)Bh_3\right)(T_L T_s^t)^t + \left((T_L T_s^t)Bh_2 + (T_R T_s^t)Bh_4\right)(T_R T_s^t)^t \quad (2.1.3)$

In one example, (T[L]T[s]^t) and (T[R]T[s]^t) are sparse. So, exemplary Equation (2.1.3) presents computationally fast calculations for logic 300 that operates on, for instance, two-dimensional images. One example precomputes the matrices (T[L]T[s]^t) and (T[R]T[s]^t). One example replaces (T[L]T[s]^t) and (T[R]T[s]^t) by the matrices C and D defined in Equations (1.5) and (1.6) for substitution in Equation (2.1.3), to obtain a result. Through rearrangement of terms of this result, one example obtains the following exemplary Equations (2.1.4), (2.1.5), and (2.1.6).

Bh = (X + Y)C^t + (X − Y)D^t   (2.1.4)
X = C(Bh[1] + Bh[3]) + D(Bh[1] − Bh[3])   (2.1.5)
Y = C(Bh[2] + Bh[4]) + D(Bh[2] − Bh[4])   (2.1.6)

In one example, a striking similarity exists between Equation (2.1.2), for an illustrative one-dimensional case, and each of exemplary Equations (2.1.4), (2.1.5), and (2.1.6), for an illustrative two-dimensional case. Equations (2.1.4), (2.1.5), and (2.1.6) in one example represent a computationally (e.g., very) fast way to implement logic 300, such as in FIG. 9. Referring to FIG. 9, a number of (e.g., all) instances of (e.g., addition) operation component 950 have some of their operands guaranteed to be zeros, and therefore advantageously avoid a need to perform a number of addition operations. Again referring to FIG. 9, exemplary computations 952 and 954 correspond to exemplary logic subportions 956 and 958, respectively, of logic 300. In one example, N=8. In a further example, "32A" represents thirty-two addition operations and "64M" represents sixty-four multiplication operations.
For instance, if N=8 then an exemplary result of computations 952 and 954 comprises 320/(8*8)=1.25 addition operations and 1.25 multiplication operations, for example, per pixel of the original image (e.g., image 920). In one example, a similarity between Equation (2.1.2), for the one-dimensional case, and each of Equations (2.1.4), (2.1.5), and (2.1.6), for the two-dimensional case, advantageously allows performance of operations in a separable fashion. For instance, logic 300 that is employable with an exemplary two-dimensional case, performs “one-dimensional” computations along columns to get X and Y, and (e.g., subsequently) performs “one-dimensional” computations along rows of X and Y to get Bh, for example, as output image 916. Such performance of “one-dimensional” computations for a two-dimensional case, in one example, allows an advantageously-increased computational speed of logic 300. Those skilled in the art will appreciate that additional ways exist for writing Equation (2.1.3) in a form similar to Equations (2.1.4), (2.1.5), and (2.1.6), for instance, to comprise the same number of computations as Equations (2.1.4), (2.1.5), and (2.1.6). For instance, another example of Equation (2.1.5) employs Bh[1 ]and Bh[2], and a further example employs Bh[3 ]and Bh[4 ]in Equation (2.1.6), such as where logic 300 initially performs processing along rows and subsequently performs processing along columns. An exemplary description of logic 300 that implements Equations (2.1.3)-(2.1.6) comprises: Receiving a plurality of image parts in a transform domain. Applying simple linear processing on the low frequency parts of each image part. Multiplying (either pre or post) each image part thus obtained by sparse matrices. Applying simple linear processing to the parts thus obtained. Multiplying (either post or pre) each image part thus obtained by sparse matrices (one example excludes this multiplication for an illustrative one-dimensional case). Applying simple linear processing on the parts thus obtained to get a final set of image parts representing the final image in a transform domain. “Simple linear processing” in one example comprises an identity operation in which output equals input, such as for Equation (2.1.3), and in another example comprises additions and/or subtractions, such as for Equations (2.1.4)-(2.1.6). One or more other examples employ analogous definitions of matrices such as matrices T, T[L], C, D, for transforms other than the DCT. In one example, a sparseness of matrices such as matrices C and D, depends only on orthogonality and symmetry properties of the N×N DCT matrix in relation to the N/2×N/2 DCT matrix. So, in one example, the sparseness property holds for other transforms, for example, transforms having properties such as properties of the Fourier transform and the Hadamard transform. One example obtains computational savings with employment of other such transforms. One example employs the basis function of a selected transform that is employed with a particular instance of logic 300, to construct the matrices C and D for the particular instance of logic 300. Referring to FIG. 12, one example of logic 300 reduces the size of an image by any selected power of two. In one example, logic 300 changes size of an image by a power of two through employment of a direct implementation. 
In another example, repeated application of a certain instance of logic 300 that serves to change the size of an image by a factor of two, serves in another instance of logic 300 to reduce the size of the image by additional factors of two. Referring to FIG. 7, logic 300 in one example serves to increase the size of an image in the compressed domain. For example, one can consider an instance of logic 300 that serves to increase the size of an image, to comprise an inverse of an instance of logic 300 that serves to decrease the size of an image. One example of logic 300 comprises first and second independent implementations. The first implementation in one example performs the decreasing of the image size. The second implementation in one example performs the increasing of the image size. Again referring to FIG. 7, logic 300 in one example receives N×N transform coefficients as an input image 706, and outputs an enlarged image in the same format, as image 702. Referring still to FIG. 7, logic 300 in one example receives as input image 706 an N×N transform domain representation of a two-dimensional image. Instance Bh of image 706 comprises an exemplary N×N transform domain block. One example obtains four N×N transform domain blocks out of this N×N transform domain block, to effect a size change (e.g., increase) by a factor of two in each dimension. Further referring to FIG. 7, STEP 710 in one example applies an N×N inverse transform to Bh to obtain an N×N block as image 716 in (e.g., spatial) domain 308. STEP 712 in one example splits the N×N block into four N/2×N/2 blocks 718. STEP 714 in one example applies an N/2×N/2 (e.g., discrete cosine) transform to each block 718, to obtain four N/2×N/2 blocks Bh[1], Bh[2], Bh[3], and Bh[4]. Each of blocks Bh[1], Bh[2], Bh[3], and Bh[4 ]in one example comprises a low-frequency sub-block of a corresponding N×N block. STEP 714 further sets to zero the high-frequency coefficients of each of these N×N blocks. Image 704 in one example comprises four such blocks as the desired enlarged version of the block Bh in the compressed (e.g., transform) domain. For illustrative purposes, the following exemplary description is presented. Since T^t T^(N)=I (N×N identity matrix) and T^(N)=[T[L ]T[R]] one obtains: T [L] ^t T [L] =T [R] ^t T [R] =I [N/2 ]and T [L] ^t T [R] =T [R] ^t T [L] =O [N/2 ] where I[N/2 ]and O[N/2 ]denote the N/2×N/2 identity and zero matrix, respectively. So: I [N/2] =T [s](T [L] ^t T [L])T [s] ^t(T [L] T [s] ^t)^t(T [L] T [s] ^t) (T [L] T [s] ^t)^t(T [L] T [s] ^t)=I [N/2]=(T [R] T [s] ^t)^t(T [R] T [s] ^t)(2.1.7) (T [L] T [s] ^t)^t(T [R] T [s] ^t)=O [N/2]=(T [R] T [s] ^t)^t(T [L] T [s] ^t)(2.1.8) So, one particular example of logic 300 that increases the size of an image comprises exactly the reverse logic of a certain example of logic 300 that decreases the size of an image. Employment of Equation (2.1.3) in conjunction with orthogonality properties of the matrices (T[R ]T[s] ^t) and (T[L ]T[s] ^t) in exemplary Equations (2.1.7) and (2.1.8), allows a determination of the following exemplary Equations (2.1.9)-(2.1.12). Bh [1]=(T [L] T [s] ^t)^t Bh(T [L] T [s] ^t)(2.1.9) Bh [2]=(T [L] T [s] ^t)^t Bh(T [R] T [s] ^t)(2.1.10) Bh [3]=(T [R] T [s] ^t)^t Bh(T [L] T [s] ^t)(2.1.11) Bh [4]=(T [R] T [s] ^t)^t Bh(T [R] T [s] ^t)(2.1.12) Equations (2.1.9)-(2.1.12) in one example of logic 300 of FIG. 7, allow a determination of the N/2×N/2 low-frequency sub-blocks Bh[1], Bh[2], Bh[3], and Bh[4]. 
One example of logic 300 receives input in the form of image block Bh in the transform domain and obtains output in the form of the N/2×N/2 low-frequency sub-blocks Bh[1], Bh[2], Bh[3], and Bh[4 ]in the transform domain. One example of logic 300 implements Equations (2.1.9)-(2.1.12) to obtain advantageously fast computation through employment of (e.g., very) sparse embodiments of matrices (T[L ]T[s] ^t) and (T[R ]T [s] ^t). Another example of logic 300 employs (T[L ]T[s] ^t) and (T[R ]T[s] ^t) per the definition of matrices E and F in Equations (1.7)-(1.8) for substitution in Equations (2.1.9)-(2.1.12) to obtain the following exemplary Equations (2.1.13)-(2.1.17). Bh [1]=(P+Q)+(R+S)(2.1.13) Bh [2]=(P+Q)+(R−S)(2.1.14) Bh [3]=(P+Q)−(R+S)(2.1.15) Bh [4]=(P+Q)−(R−S)(2.1.16) P=(E ^t Bh)E;Q=(E ^t Bh)F;R=(F ^t Bh)E;S=(F ^t Bh)F.(2.1.17) Equations (2.1.13)-(2.1.16) account for a factor, for example, a factor of two due to the different sizes of the DCTs, in the definitions of E and F. In one example, exemplary bracketing in Equations (2.1.13)-(2.1.17) conveys an illustrative ordering of computations. Equations (2.1.13)-(2.1.17) in one example represent a (e.g., much) faster way to carry out the computations in Equations (2.1.9)-(2.1.12), since in one example nearly seventy-five percent of the entries in matrices E and F have the value of zero. Equations (2.1.13)-(2.1.17) in one example arrange the multiplications and additions in a hierarchical manner, to advantageously obtain a further increase in computational speed. For illustrative purposes, FIG. 11 depicts such an implementation of logic 300. Exemplary computations 1152 and 1154 correspond to exemplary logic subportions 1156 and 1158, respectively, of logic 300. In one example, N=8. In a further example, “96A” represents ninety-six addition operations and “160M” represents one hundred sixty multiplication operations. For instance, if N=8 then an exemplary result of computations 1152 and 1154 comprises 320/(8*8)=1.25 addition operations and 1.25 multiplication operations, for example, per pixel of the upsized image. In one example, a similarity between logic 300 of FIG. 10, for the one-dimensional case, and logic 300 of FIG. 11, for the two-dimensional case, advantageously allows an interpretation of Equations (2.1.13)-(2.1.17) as corresponding to carrying out one-dimensional operations along the columns of Bh and (e.g., subsequently) along the rows of Bh. One example employs ways to write Equations (2.1.9)-(2.1.12) in a form similar to Equations (2.1.14)-(2.1.17), for example, while involving the same number of computations. One example regroups portions of Equation (2.1.17) to carry out computations in the following exemplary Equation (2.1.18). P=E ^t(BhE);Q=E ^t(BhF);R=F ^t(BhE);S=F ^t(BhF).(2.1.18) In view of Equations (2.1.9)-(2.1.18), one example describes logic 300 for upsizing as follows. Receiving a plurality of image parts in a transform domain. Multiplying (pre or post) each image part by sparse matrices. Multiplying (post or pre) each of the image parts thus obtained by sparse matrices (one example excludes this multiplication for an illustrative one-dimensional case). Applying simple linear processing on the image parts thus obtained. The image parts thus obtained are made the low-frequency parts of the corresponding image parts in the final image which is also in the transform domain. 
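As with the downsizing case, the relationships behind this upsizing description can be checked numerically. The sketch below builds E and F as read from Equations (1.7)-(1.8), forms P, Q, R, and S per Equation (2.1.17), and confirms that Equation (2.1.13) reproduces the low-frequency sub-block of Equation (2.1.9) up to the factor of two that the text attributes to the differing DCT sizes. The helper function and variable names are this sketch's own, not the patent's.

import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix, per Equation (1.1).
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    t = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    t[0, :] = np.sqrt(1.0 / n)
    return t

N = 8
T8, T4 = dct_matrix(N), dct_matrix(N // 2)
TL, TR = T8[:, :N // 2], T8[:, N // 2:]

E = np.sqrt(2) * 0.5 * (TL @ T4.T + TR @ T4.T)   # Equation (1.7), as read here
F = np.sqrt(2) * 0.5 * (TL @ T4.T - TR @ T4.T)   # Equation (1.8)

rng = np.random.default_rng(1)
Bh = T8 @ rng.normal(size=(N, N)) @ T8.T          # an 8x8 block of DCT coefficients

# Equation (2.1.17): hierarchical products with the very sparse E and F.
P, Q = (E.T @ Bh) @ E, (E.T @ Bh) @ F
R, S = (F.T @ Bh) @ E, (F.T @ Bh) @ F

Bh1_fast = (P + Q) + (R + S)                      # Equation (2.1.13)
Bh1_ref  = (TL @ T4.T).T @ Bh @ (TL @ T4.T)       # Equation (2.1.9)
print("factor-of-two relation holds:", np.allclose(Bh1_fast, 2 * Bh1_ref))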
“Simple linear processing” in one example comprises an identity operation in which output equals input such as for Equations (2.1.9)-(2.1.12), and in another example comprises additions and/or subtractions of two or more image parts to obtain a new set of image parts. In a further example, “simple linear processing” comprises the following exemplary operation that corresponds to Equations (2.1.13)-(2.1.18). In one example, the input comprises P, Q, R, and S, and the output comprises Bh1, Bh2, Bh3, and Bh4. One or more other examples employ analogous definitions of matrices, such as matrices T, T[L], E, F, for transforms other than the DCT. In one example, the sparseness of matrices such as matrices E and F depends only on orthogonality and symmetry properties of the N×N DCT matrix in relation to the N/2×N/2 DCT matrix. So, in one example, the sparseness property holds for other transforms, for example, transforms having properties such as properties of the Fourier transform and the Hadamard transform. One example obtains computational savings with employment of such other transforms. One example employs the basis function of a selected transform that is employed with a particular instance of logic 300, to construct the matrices E and F for the particular instance of logic 300. Referring to FIG. 13, one example of logic 300 increases the size of an image by any selected power of two. In one example, logic 300 changes a size of an image by a power of two through employment of a direct implementation. In another example, repeated application of a certain instance of logic 300 that serves to change the size of an image by a factor of two, serves in another instance of logic 300 to increase the size of the image by additional factors of two. Logic 300 in one example implements an upsizing scheme irrespective of whether or not the originally input image, resulted from an implementation in logic 300 of a downsizing scheme. Exemplary additional details for one example of logic 300 are provided in Rakesh Dugad and Narendra Ahuja, “A Fast Scheme for Altering Resolution in the Compressed Domain” (Proceedings 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Cat. No PR00149, Jun. 23-25, 1999, Fort Collins, Colo., USA, IEEE Comput. Soc. Part Vol. 1, 1999, pp. 213-18 Vol. 1, Los Alamitos, Calif., USA). The flow diagrams depicted herein are just exemplary. There may be many variations to these diagrams or the steps (or operations) described therein without departing from the spirit of the invention. For instance, the steps may be performed in a differing order, or steps may be added, deleted or modified. All these variations are considered a part of the claimed invention. Although preferred embodiments have been depicted and described in detail herein, it will be apparent to those skilled in the relevant art that various modifications, additions, substitutions and the like can be made without departing from the spirit of the invention and these are therefore considered to be within the scope of the invention as defined in the following claims.
{"url":"http://www.google.com/patents/US6807310?dq=7,346,539","timestamp":"2014-04-16T05:27:05Z","content_type":null,"content_length":"172436","record_id":"<urn:uuid:62dd29d4-5840-48e5-bd15-3475f6ae98bd>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
Simplex Method Online Exercises
Simplex Method with Artificial Variable for Phase I

Below is a java applet that will present a set of linear programming problems for you to solve. Your job is to solve each problem (either by finding an optimal solution or demonstrating that the problem is infeasible or unbounded). After correctly solving a problem, the applet will immediately give you a new one. After you have solved all the problems in the set, make a screen print of the applet window and email that to your instructor.

Each problem must be solved using the two-phase Simplex Method with artificial variable x0. Any attempt to make an incorrect pivot will be rejected but your score will be increased by one. Low scores are better than high scores.

The objective has two rows now: the first one is for Phase II and the second one is the artificial variable objective in the auxiliary problem for Phase I. You must click on the appropriate row of the artificial variable column x0 to make the first pivot, generating a feasible dictionary for the auxiliary problem, eliminating all the purple colors in the left column and generating yellow colors in the second row, highlighting the possible entering variables for the auxiliary problem. If the original problem is feasible, you should be able to reduce x0 to zero and make the yellow colors all disappear, and then continue with Phase II as usual.

The set of linear programming problems to be solved is determined by specifying the number of rows, the number of columns, the seed, and the number of problems in the set. Your instructor will tell you what values to use for these fields. Make sure that you enter exactly what your instructor tells you, for otherwise you will be doing the wrong set of problems and it will be impossible to fairly evaluate your performance.

When you are ready to begin, press the Go Pivoting button and then, when the pivot window pops up, press the Start button. Don't forget that incorrectly pressing one of the termination buttons (Optimal, Infeasible, or Unbounded) counts as an extra pivot, so press these buttons only when you are confident that it is the correct thing to do.

Final note: if for any reason you wish to start over, you may press the Exit button to quit the exam. At that point you can start afresh. Of course, at some time before this online assignment is due you must go to the end and submit a score.

Bugs. This applet works fine when accessed via Netscape3 on most UNIX workstations and it works fine with either Netscape4 or InternetExplorer4 on WindowsNT. Most other browser/platform combinations do not work correctly. The problem seems to be due to bugs in the browser's JAVA interpreter as implemented on the given platform. If you have trouble getting the applet to behave as described above, try finding a different browser/platform combination.
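For readers who want to see what a single pivot does before trying the applet, here is a rough sketch — not the applet's code — of the Gauss-Jordan pivot operation the exercise is built around, applied to a small made-up problem whose origin is already feasible (so no Phase I / x0 pivot is needed for this particular example).

# A plain tableau pivot, illustrated on:
#   maximize  x1 + x2   subject to  x1 + 2*x2 <= 4,  3*x1 + x2 <= 6,  x >= 0.
import numpy as np

def pivot(tableau, row, col):
    """Scale `row` so the pivot entry becomes 1, then eliminate `col` from every other row."""
    t = tableau.astype(float).copy()
    t[row] /= t[row, col]
    for r in range(t.shape[0]):
        if r != row:
            t[r] -= t[r, col] * t[row]
    return t

# Columns: x1, x2, s1, s2, rhs.  Last row holds the objective (as -c) and its value.
t = np.array([[1.0, 2.0, 1.0, 0.0, 4.0],
              [3.0, 1.0, 0.0, 1.0, 6.0],
              [-1.0, -1.0, 0.0, 0.0, 0.0]])

t = pivot(t, row=1, col=0)   # x1 enters, second slack leaves (ratio 6/3 < 4/1)
t = pivot(t, row=0, col=1)   # x2 enters, first slack leaves

assert np.all(t[-1, :-1] >= 0)           # no negative reduced costs: optimal dictionary
print("optimal value:", t[-1, -1])       # 2.8, attained at x1 = 1.6, x2 = 1.2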
{"url":"http://www.princeton.edu/~rvdb/JAVA/pivot/primal_x0.html","timestamp":"2014-04-16T21:07:33Z","content_type":null,"content_length":"3903","record_id":"<urn:uuid:35b168e5-0f8c-4a3f-815f-3cd9d5d663b6>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
Applicability cost-benefit analysis

The opinions expressed in the studies are those of the consultant and do not necessarily represent the position of the Commission.

Do all road safety measures lend themselves equally well to cost-benefit analysis? No, such analyses are more readily done for some measures than for others. In the Handbook of Road Safety Measures [16], the following main groups of road safety measures are identified. General purpose policy instruments: this is a heterogeneous bag of measures that includes, among other things, motor vehicle taxation, regulation of commercial transport, urban and regional planning, and access to medical services. Most of the general purpose policy instruments are quite complex, and their effects on road safety are indirect and, for some of the measures, poorly known. Due to their great complexity and the comparatively poor state of knowledge regarding their effects, these measures do not lend themselves very well to cost-benefit analysis. This is not to say that it is impossible to do cost-benefit analyses of some of these measures. There have, for example, been several cost-benefit analyses of road pricing.

In general, to be amenable to cost-benefit analysis, a road safety measure should satisfy the following criteria. It should be known what category of accidents the measure affects (all accidents, accidents involving young drivers, accidents in the dark, etc.), preferably so that the number of "target" accidents can be estimated numerically. The effects of the measure on target accidents should be known, i.e. numerical estimates of these effects should be available. If possible, these estimates should state the severity of accidents or injuries they apply to. In short, cost-benefit analysis requires quite extensive knowledge of the impacts of a measure. This knowledge will not be available for all road safety measures.

In a recent road safety impact assessment for Norway [13], a survey was made of 139 road safety measures. Only 45 of them were included in a cost-benefit analysis. A total of 94 measures were omitted. Reasons for omitting measures included: Some of the measures that were included have so far not been used extensively, but were included because there is reason to believe they could improve road safety. This applies to ISA (Intelligent Speed Adaptation), for example, which favourably influences driving speed, a known risk factor for accidents and injuries.

To give a short example, consider the conversion of three-leg junctions to roundabouts. From the Norwegian road data bank, it was determined that 120 junctions with a mean daily traffic of 12,000 are candidates for conversion to roundabouts. Thus, the effect on fatalities can be estimated as follows:

120 x 12,000 x 365 x 0.091 x 10^-6 x 0.018 x 0.49 = 0.42

The first three terms (120, 12,000, 365) denote the total traffic volume in the 120 junctions during one year. This is the traffic that will be exposed to the conversion. The next term (0.091 x 10^-6) is the mean risk of injury per million vehicles entering a three-leg junction. A little less than 2 % of the injuries (0.018) are fatal. The rest are serious or slight. Thus, the overall injury rate is decomposed into a rate of fatal injury, a rate of serious injury and a rate of slight injury. Finally, roundabouts reduce the number of fatalities by 49 % (0.49). Hence, in the 120 junctions, 0.42 fatalities will be prevented.
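The roundabout arithmetic above, written out as a small function; the inputs are exactly the figures quoted in the text, not new data.

def fatalities_prevented(n_junctions, adt, injuries_per_vehicle, fatal_share, effect):
    exposure = n_junctions * adt * 365              # vehicles entering per year
    fatalities = exposure * injuries_per_vehicle * fatal_share
    return fatalities * effect

saved = fatalities_prevented(n_junctions=120, adt=12_000,
                             injuries_per_vehicle=0.091e-6,
                             fatal_share=0.018, effect=0.49)
print(round(saved, 2))   # 0.42 fatalities prevented per year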
The fatalities prevented can be converted to monetary terms as follows:

0.42 x 26.5 x 14.828 = 165 million NOK

Here, 0.42 is the number of fatalities prevented, 26.5 is the value, in million NOK, of preventing a fatality, and 14.828 is the accumulated present value factor for a 25-year time horizon using a discount rate of 4.5 % per year. In general, the present value of a benefit (or cost) is estimated as:

Present value = Σ (i = 0 to n) B_i / (1 + r)^i

In this formula, B_i denotes the benefit in year i and r is the discount rate. The summation is from year 0 to year n, the end of the time horizon considered. Thus, if the benefit in year 0 is 100, in year 3 it will be:

100/(1.045^3) = 100/1.1412 = 87.6

As the years pass, the present value of a constant stream of benefits gradually becomes smaller.
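A short script (not from the source) that reproduces the two discounting figures quoted above — the 87.6 present value and the 14.828 accumulated factor, the latter taken here as discounting years 1 through 25 — and then the 165 million NOK benefit of the roundabout example.

def present_value(benefits, r):
    """benefits[i] is the benefit in year i; year 0 is undiscounted."""
    return sum(b / (1 + r) ** i for i, b in enumerate(benefits))

r = 0.045
print(round(100 / (1 + r) ** 3, 1))                            # 87.6
print(round(sum(1 / (1 + r) ** i for i in range(1, 26)), 3))   # 14.828

# Monetary value of the roundabout example: 0.42 fatalities/year valued at
# 26.5 million NOK each, over the 25-year horizon.
print(round(0.42 * 26.5 * 14.828))                             # about 165 (million NOK)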
{"url":"http://ec.europa.eu/transport/road_safety/specialist/knowledge/measures/applicability_cost_benefit_analysis/index.htm","timestamp":"2014-04-16T10:32:52Z","content_type":null,"content_length":"20803","record_id":"<urn:uuid:ef9b428a-6478-489c-a4be-5973c7d26958>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
finding 2 zeros in an interval

February 8th 2009, 01:11 PM
The question reads: "Show that the function f(x) = -x^5 + 4x^4 - 16x^3 + 12x + 5 has two zeros in the interval (-2, 0)."
With the Intermediate Value Theorem I know that f(-2) is -59 and f(0) is 5, so we know there is one zero. But how on earth do you figure out if there are two????
Thanks in advance from this Calculus beginner

February 8th 2009, 01:30 PM
Are you sure you have the right function? This has only one real zero. When you have the right function, you would show it has another zero the same way: find another point in (-2, 0) at which the function changes sign, then apply the Intermediate Value Theorem again to show there's another zero.

February 8th 2009, 05:03 PM
Here is the function.
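One way to see where the Intermediate Value Theorem can be applied — not part of the thread — is to scan the interval for sign changes numerically. With the polynomial exactly as typed in the post, f(-2) evaluates to 205 rather than -59 and there is no sign change in (-2, 0); the only sign change turns up near x ≈ 1.1, which is consistent with the reply questioning the function.

def f(x):
    return -x**5 + 4*x**4 - 16*x**3 + 12*x + 5

def sign_changes(f, a, b, steps=1000):
    xs = [a + (b - a) * k / steps for k in range(steps + 1)]
    return [(xs[k], xs[k + 1]) for k in range(steps)
            if f(xs[k]) * f(xs[k + 1]) < 0]

print(f(-2), f(0))             # 205  5  (with the polynomial as typed)
print(sign_changes(f, -2, 0))  # []  -- no zero of this f in (-2, 0)
print(sign_changes(f, -2, 3))  # one bracket, near x = 1.1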
{"url":"http://mathhelpforum.com/calculus/72525-finding-2-zeros-interval-print.html","timestamp":"2014-04-18T14:23:33Z","content_type":null,"content_length":"5046","record_id":"<urn:uuid:6374811c-7ab2-4236-a660-03b47b71a964>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Mr. Mac's AP Physics Blog

A 2.0-kilogram laboratory cart is sliding across a horizontal frictionless surface at a constant velocity of 4.0 meters per second east. What will be the cart's velocity after a 6.0-newton westward force acts on it for 2.0 seconds?
(1) 2.0 m/s east (2) 2.0 m/s west (3) 10. m/s east (4) 10. m/s west

Base your answers to questions 2 and 3 on the diagram below, which shows a 1.0-newton metal disk resting on an index card that is balanced on top of a glass.

What is the net force acting on the disk?
(1) 1.0 N (2) 2.0 N (3) 0 N (4) 9.8 N

When the index card is quickly pulled away from the glass in a horizontal direction, the disk falls straight down into the glass. This action is a result of the disk's
(1) inertia (2) charge (3) shape (4) temperature

Compared to the force needed to start sliding a crate across a rough level floor, the force needed to keep it sliding once it is moving is
(1) less (2) greater (3) the same

A 400-newton girl standing on a dock exerts a force of 100 newtons on a 10 000-newton sailboat as she pushes it away from the dock. How much force does the sailboat exert on the girl?
(1) 25 N (2) 100 N (3) 400 N (4) 10 000 N

Which vector diagram best represents a cart slowing down as it travels to the right on a horizontal surface?

Equilibrium exists in a system where three forces are acting concurrently on an object. If the system includes a 5.0-newton force due north and a 2.0-newton force due south, the third force must be
(1) 7.0 N south (2) 7.0 N north (3) 3.0 N south (4) 3.0 N north

Two forces are applied to a 2.0-kilogram block on a frictionless horizontal surface, as shown in the diagram below. The acceleration of the block is
(1) 1.5 m/s2 to the right (2) 2.5 m/s2 to the left (3) 2.5 m/s2 to the right (4) 4.0 m/s2 to the left

A student applies a 20.-newton horizontal force to move a 30.0-newton crate at a constant speed of 4.0 meters per second across a rough floor. What is the value of μ?
(1) 1.0 (2) .200 (3) .666 (4) .133

The diagram below shows a block on a horizontal frictionless surface. A 100.-newton force acts on the block at an angle of 30.° above the horizontal. What is the magnitude of force F if it establishes equilibrium?
(1) 50.0 N (2) 86.6 N (3) 100. N (4) 187 N

The graph below represents the relationship between the forces applied to an object and the corresponding accelerations produced. What is the inertial mass of the object?
(1) 1.0 kg (2) 2.0 kg (3) 0.50 kg (4) 1.5 kg

The diagram below shows a sled and rider sliding down a snow-covered hill that makes an angle of 30.° with the horizontal. Which vector best represents the direction of the normal force, FN, exerted by the hill?

The diagram below represents a block sliding down an incline. Which vector best represents the frictional force acting on the block?
(1) A (2) B (3) C (4) D

A 50.-newton horizontal force is needed to keep an object weighing 500. newtons moving at a constant velocity of 2.0 meters per second across a horizontal surface. The magnitude of the frictional force acting on the object is
(1) 500. N (2) 450. N (3) 50. N (4) 0 N

A series of unbalanced forces was applied to each of two blocks, A and B. The graphs below show the relationship between unbalanced force and acceleration for each block. Compared to the mass of block A, the mass of block B is
(1) the same (2) twice as great (3) half as great (4) four times as great

A different force is applied to each of four 1-kilogram blocks to slide them across a uniform steel surface at constant speed, as shown below. In which diagram is the coefficient of friction between the block and steel smallest?

Which two graphs represent the motion of an object on which the net force is zero?

The coefficient of kinetic friction between a 780.-newton crate and a level warehouse floor is 0.200. Calculate the magnitude of the horizontal force required to move the crate across the floor at constant speed. [Show all work, including the equation and substitution with units.]

The sign below hangs outside the physics classroom, advertising the most important truth to be found inside. The sign is supported by a diagonal cable and a rigid horizontal bar. If the sign has a mass of 50 kg, then determine the tension in the diagonal cable which supports its weight.

A 20 kg weight that is on a ledge is attached with a string to another 20 kg weight that is hanging over the edge of the perfectly flat ledge. Assuming that the coefficient of friction is 0.2, what is the acceleration and what is the tension in the string? (Show all work.)
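A hedged numerical check of the last problem — not from the blog post — applying Newton's second law to the pair of 20 kg masses with µ = 0.2 and an assumed g = 9.8 m/s² (the post does not state a value for g).

g = 9.8                      # m/s^2 (assumed)
m_ledge, m_hang, mu = 20.0, 20.0, 0.2

a = (m_hang * g - mu * m_ledge * g) / (m_ledge + m_hang)
T = m_hang * (g - a)         # tension from the hanging mass's equation of motion

print(round(a, 2), "m/s^2")  # 3.92
print(round(T, 1), "N")      # 117.6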
{"url":"http://mrmacphysics.blogspot.com/2009/10/sneak-peak-at-tomorrows-test.html","timestamp":"2014-04-17T00:49:30Z","content_type":null,"content_length":"55767","record_id":"<urn:uuid:bdffc3cc-a5f6-4d2e-a10c-dca21c460389>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/xero1999/answered","timestamp":"2014-04-21T12:52:03Z","content_type":null,"content_length":"91869","record_id":"<urn:uuid:e12bd20a-5458-4f02-ba20-106d3e3ba0bf>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
The Super Bowl tells us so.
The Super Bowl Indicator
The championship of American football decides the direction of the US stock market for the year. If a "National" team wins, the market goes up; if an "American" team wins, the market goes down. Yesterday the Giants, a National team, beat the Patriots. The birth … Continue reading...

Truly random [again]
"The measurement outputs contain at the 99% confidence level 42 new random bits. This is a much stronger statement than passing or not passing statistical tests, which merely indicate that no obvious non-random patterns are present." arXiv:0911.3427 As often, I bought La Recherche in the station newsagent for the wrong reason! The cover of the

Random sudokus [p-values]
I reran the program checking the distribution of the digits over 9 "diagonals" (obtained by acceptable permutations of rows and columns) and this test again results in mostly small p-values. Over a million iterations, and the nine (dependent) diagonals, four p-values were below 0.01, three were below 0.1, and two were above (0.21 and 0.42).

The distribution of rho…
There was a post here about obtaining non-standard p-values for testing the correlation coefficient. The R library SuppDists deals with this problem efficiently.
library(SuppDists)
plot(function(x) dPearson(x,N=23,rho=0.7),-1,1,ylim=c(0,10),ylab="density")
plot(function(x) dPearson(x,N=23,rho=0),-1,1,add=TRUE,col="steelblue")
plot(function(x) dPearson(x,N=23,rho=-.2),-1,1,add=TRUE,col="green")
plot(function(x) dPearson(x,N=23,rho=.9),-1,1,add=TRUE,col="red");grid()
legend("topleft", col=c("black","steelblue","red","green"),lty=1, legend=c("rho=0.7","rho=0","rho=-.2","rho=.9"))
This is what it looks like. Now, let's construct a table of critical values for some arbitrary (or not) significance levels.
q=c(.025,.05,.075,.1,.15,.2)
xtabs(qPearson(p=q, N=23, rho

Contingency Tables – Fisher's Exact Test
A contingency table is used in statistics to provide a tabular summary of categorical data, and the cells in the table are the number of occasions that a particular combination of variables occurs together in a set of data. The relationship between variables in a contingency table is often investigated using Chi-squared tests. The simplest contingency
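For readers without R, here is a rough Python counterpart — not from the original posts — of the kind of contingency-table test mentioned in the last excerpt, using scipy's fisher_exact on a small 2×2 table; the table entries are purely illustrative.

from scipy.stats import fisher_exact

table = [[8, 2],    # e.g. treated: 8 successes, 2 failures (made-up numbers)
         [1, 5]]    #      control: 1 success,  5 failures
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(odds_ratio, round(p_value, 4))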
{"url":"http://www.r-bloggers.com/tag/p-value/","timestamp":"2014-04-17T21:40:35Z","content_type":null,"content_length":"31840","record_id":"<urn:uuid:677c4b8b-b396-42d7-9f7a-9a5c676f9750>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistical Methods for Psychology 8th Edition Chapter 9 Solutions | Chegg.com
(b) In the above scatter plot, we can observe two outliers that pull the regression line in their direction. So, we remove those observations to fit the best regression line for the data, as below. The line superimposed on this figure represents the straight line that best fits the data.
Looking at the scatter plot, we can observe that as income per capita increases, infant mortality decreases, and vice versa. The relationship between these two variables appears to be linear in nature, and the negative slope reveals that they are negatively correlated. We know that the correlation coefficient between Infmort and Income is -0.556. The sign of this correlation indicates that they are negatively related, and the square of the correlation coefficient indicates that nearly 31% of the variability in infant mortality can be accounted for by income per capita.
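The arithmetic behind the final sentence, for reference: squaring the reported correlation gives the share of variability accounted for.

r = -0.556
print(round(r ** 2, 3))   # 0.309, i.e. roughly 31% of the variability in infant
                          # mortality is accounted for by income per capita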
{"url":"http://www.chegg.com/homework-help/statistical-methods-for-psychology-8th-edition-chapter-9-solutions-9781111835484","timestamp":"2014-04-16T09:09:04Z","content_type":null,"content_length":"35750","record_id":"<urn:uuid:a9140696-abfb-479a-8261-7a3f1773ad9a>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
History of Trigonometry
The history of trigonometry dates back to the early ages of Egypt and Babylon. Angles were then measured in degrees. The history of trigonometry was then advanced by the Greek astronomer Hipparchus, who compiled a trigonometry table that measured the length of the chord subtending the various angles in a circle of a fixed radius r. This was done in increments of 7½ degrees. In the 2nd century, Ptolemy took this further by creating a table of chords in increments of half a degree. This, together with Menelaus's theorem, formed the foundation of trigonometry studies for the next centuries. Around the same period, Indian mathematicians created a trigonometry system based on the sine function instead of the chord. Note that this sine was not seen as a ratio, but rather as the length of the side opposite the angle in a right triangle of fixed hypotenuse. The history of trigonometry also included Muslim astronomers, who brought together the studies of both the Greeks and the Indians. In the 13th century, the Germans fathered modern trigonometry by defining trigonometric functions as ratios rather than lengths of lines. After the discovery of logarithms by the Scottish mathematician John Napier, the history of trigonometry took another bold step with Isaac Newton, who founded differential and integral calculus. Euler used complex numbers to explain trigonometric functions, as seen in Euler's formula. The history of trigonometry came about mainly due to the purposes of timekeeping and astronomy.
{"url":"http://www.trigonometry-help.net/history-of-trigonometry.php","timestamp":"2014-04-20T05:42:27Z","content_type":null,"content_length":"6290","record_id":"<urn:uuid:15465db9-9d8e-429b-b813-5d7e25aa31b2>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
Approximation methods in statistical learning theory Seminar Room 2, Newton Institute Gatehouse Spectral methods are of fundamental importance in statistical learning, as they underlie algorithms from classical principal components analysis to more recent approaches that exploit manifold structure. In most cases, the core technical problem can be reduced to computing a low-rank approximation to a positive-definite kernel. Using traditional methods, such an approximation can be obtained with computational complexity that scales as the cube of the number of training examples. For the growing number of applications dealing with very large or high-dimensional data sets, however, these techniques are too costly. A known alternative is the Nystrom extension from finite element methods. While its application to machine learning has previously been suggested in the literature, we introduce here what is, to the best of our knowledge, the first randomized algorithm of this type to yield a relative approximation error bound. Our results follow from a new class of algorithms for the approximation of matrix products, which reveal connections between classical linear algebraic quantities such as Schur complements and techniques from theoretical computer science such as the notion of volume sampling.
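As a rough illustration of the core technical problem described above — and not the speaker's algorithm — the following NumPy sketch computes a plain Nyström low-rank approximation of a Gaussian kernel matrix using uniformly sampled landmark columns. The randomized algorithm in the talk differs in how the columns are sampled, which is where the relative-error guarantee comes from; the data, kernel, and sample size here are arbitrary choices.

import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 5))                    # toy data
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)                                # Gaussian (RBF) kernel matrix, PSD

m = 50                                               # number of sampled "landmark" columns
idx = rng.choice(K.shape[0], size=m, replace=False)  # uniform sampling, for simplicity
C = K[:, idx]                                        # n x m block of sampled columns
W = K[np.ix_(idx, idx)]                              # m x m block on the sampled rows/columns

K_hat = C @ np.linalg.pinv(W) @ C.T                  # Nystrom approximation of K
err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
print("relative Frobenius error:", round(err, 4))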
{"url":"http://www.newton.ac.uk/programmes/SCH/seminars/2008020511002.html","timestamp":"2014-04-17T00:56:51Z","content_type":null,"content_length":"4954","record_id":"<urn:uuid:fe79b5de-043c-4dbe-af99-21f676135fff>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Initial value Problem

June 9th 2008, 01:46 PM  #1
Find an implicit and an explicit solution of the given initial value problems.
a) dy/dt + 2y = 1, when y(0) = 5/2
b) (1+x^4)dy + x(1+4y^2)dx = 0, when y(1) = 0
Thanks a lot!!!

Find an implicit and an explicit solution of the given initial value problems.
a) dy/dt + 2y = 1, when y(0) = 5/2
Mr F says: Many approaches are possible. Here are three: 1. Use the integrating factor method. 2. The DE is separable: dt/dy = 1/(1 - 2y). 3. First order linear with constant coefficients. Assume a solution of the form y = ${\color{red}A e^{\lambda t} + B}$. Hint: B = 1/2, A is an arbitrary constant. Now find ${\color{red}\lambda}$.
b) (1+x^4)dy + x(1+4y^2)dx = 0, when y(1) = 0
Mr F says: The DE is separable: ${\color{red}-\frac{dy}{1 + 4y^2} = \frac{x\,dx}{1 + x^4}}$. The y-integration is a standard form. For the x-integration, make the substitution u = x^2.
Thanks a lot!!!
Last edited by mr fantastic; June 10th 2008 at 03:52 AM. Reason: Fixed the latex - thanks moo.

June 9th 2008, 05:35 PM  #2
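A numerical cross-check of part (a), not from the thread: Mr F's third hint gives y(t) = 1/2 + A e^{-2t}, with A = 2 fixed by y(0) = 5/2, and integrating the ODE directly reproduces that closed form (scipy's solve_ivp is used here purely for the check).

import numpy as np
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, y: 1 - 2 * y, (0.0, 3.0), [2.5],
                rtol=1e-8, atol=1e-10, dense_output=True)
t = np.linspace(0.0, 3.0, 50)
closed_form = 0.5 + 2.0 * np.exp(-2.0 * t)
assert np.allclose(sol.sol(t)[0], closed_form, atol=1e-6)
print("y(t) = 1/2 + 2 e^{-2t} matches the numerical solution")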
{"url":"http://mathhelpforum.com/calculus/41122-initial-value-problem.html","timestamp":"2014-04-17T14:31:42Z","content_type":null,"content_length":"34368","record_id":"<urn:uuid:54a0f86b-5b0c-4091-a416-ba34ccbdbaa1>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
New QuickDraw 3D Geometries A number of new QuickDraw 3D geometric primitives can save you time as you create 3D objects -- from footballs to the onion domes of the Taj Mahal. Most of these new primitives are very versatile, but this versatility comes at the cost of some complexity. Here you'll find a discussion of their various features and uses, with special attention to the differences among the features and structural characteristics of the polyhedral primitives. Being aware of these differences will help you make the right choices when you're using these primitives in a particular application. When QuickDraw 3D version 1.0 made its debut, it came with 12 geometric primitives that you could use to model pretty much anything you wanted. With applied cleverness, you could make arbitrary shapes by combining and manipulating such primitives as polylines, polygons, parametric curves and surfaces, and polyhedra. Because some shapes are so commonly used, recent versions of QuickDraw 3D have added them as high-level primitives, including two new polyhedral primitives. This frees each developer from having to reinvent them and ensures that the new primitives are implemented in such a way as to fit nicely with the existing paradigm in QuickDraw 3D. We'll start by looking at how the new ellipse primitive was designed. A similar paradigm was used in creating most of the other new high-level primitives. Understanding their design will help you use them effectively. Later, we'll move on to the two new polyhedral primitives -- the polyhedron and the trimesh -- which you can use to model very complex objects. We'll also take a fresh look at the mesh and trigrid, which have been around for a while, and compare the usefulness of all four polyhedral primitives. Along the way, you'll find some relevant background information about the QuickDraw 3D team's design philosophy. I'm going to assume that you're already familiar with the capabilities of QuickDraw 3D, including how to use the original 12 geometric primitives. But if you want more basic information, see the articles "QuickDraw 3D: A New Dimension for Macintosh Graphics" in develop Issue 22 and "The Basics of QuickDraw 3D Geometries" in Issue 23. The book 3D Graphics Programming With QuickDraw 3D has complete documentation for the QuickDraw 3D programming interfaces for version 1.0. Version 1.5 of QuickDraw 3D, which supports these new primitives, is now available. To get you started using the new primitives, the code listings shown here accompany this article on this issue's CD and develop's Web site. typedef struct TQ3EllipseData { TQ3Point3D origin; TQ3Vector3D majorRadius; TQ3Vector3D minorRadius; float uMin, uMax; TQ3AttributeSet ellipseAttributeSet; } TQ3EllipseData; Of course, circles must have a size. Again, QuickDraw 3D could make all circles a unit size (that is, have a radius of 1) and then require us to scale them appropriately. But, for the same reason that the circle has an explicit center, it has an explicit size. Given an origin and size, we have to specify the plane in which the circle lies in 3D space. Though it would be possible for QuickDraw 3D to define a circle's plane by default -- say, the {x, z} plane -- and require us to rotate the circle into the desired plane, QuickDraw 3D lets us define the radius with a vector whose length is the radius. Then we similarly define a second radius perpendicular to the first radius. 
The cross product of these two vectors (majorRadius and minorRadius) defines the plane the ellipse lies in: For a full circle, we need to set uMin to 0 and uMax to 1 (more on this later): ellipseData.ellipseAttributeSet = Q3AttributeSet_New(); Q3ColorRGB_Set(&color, 1, 1, 0); Q3AttributeSet_Add(ellipseData.ellipseAttributeSet, kQ3AttributeTypeDiffuseColor, &color); Why all of this power just for a circle? This power gives us flexibility. Let's use some of the object-editing calls to make some more interesting shapes. We'll start by making an ellipse out of the circle we've just constructed. If you recall, we originally made the circle with majorRadius and minorRadius equal to 2. So to make an ellipse instead of a mere circle, all we have to do is make majorRadius and minorRadius different lengths. To get the first ellipse you see in Figure 2, we can use this: All we have left is how to define partial ellipses. We can do this by taking a parametric approach. Let's say that an ellipse starts at u = 0 and goes to u = 1. Then we have to define the starting point. Let's make it be the same point that's at the end of the vector defining majorRadius in the first circle in Figure 3. To make a partial ellipse (that is, an elliptical or circular arc), we specify the parametric values of the starting and ending points for the arc: This gives us an elliptical (or in this case, circular) arc, as shown in Figure 3. (The dotted line isn't actually rendered -- it's just there for diagrammatic reasons.) Though the starting and ending points must be between 0 and 1, inclusive, we can make the starting point have a greater value than the ending point: Note that the UV surface parameterization for the disk is different from the parametric limit values around the perimeter of the disk. The UV surface parameterization was chosen so that an image applied as a texture would appear on the disk or end cap as if it were applied as a decal. The values associated with positions around the perimeter are used for making partial disks, just as we used them to make partial ellipses. The distinct parametric limit values (uMin and uMax) are necessary so that the partial end caps on partial cones and cylinders will properly match. If the surface parameterization for the disk meant that the U direction went around the perimeter, you'd have a nearly impossible time applying decal-like textures. Figure 5 shows how an ellipsoid (a sphere), cone, cylinder, and torus are defined with respect to an origin and three vectors (the labels being fields in the corresponding data structures). Note that the torus requires one more piece of information to allow for elliptical cross sections: the ratio between the length of the orientation vector (which gives the radius of the "tube" of the torus in the orientation direction) and the radius of the tube of the torus in the majorRadius direction. With the resulting torus primitive, you can make a circular torus with an elliptical cross section, or an elliptical torus with a circular cross section, or an elliptical torus with an elliptical cross section. (Hmm...perhaps I was drinking too much coffee when I designed the torus.) You use the U and V parameters to map a texture onto a shape. In Figure 5, the U and V parameters have their origins and orientations relative to the surface in what should be the most intuitive layout. If you apply a texture to the object, the image appears as most people would expect. 
So to make an ellipsoid that's a sphere, you make the majorRadius, minorRadius, and orientation vectors the same length as well as mutually perpendicular. To make an elliptical cylinder, you can vary the lengths of the three vectors. Even more fun can be had by making the vectors nonperpendicular -- this makes skewed or sheared objects. This is easy to see with a cylinder (Figure 6). You can make partial disks, cones, cylinders, and tori in a fashion analogous to what we did with the ellipse (see Figure 7). Since these are surfaces, you can set a minimum and maximum for each One important thing to notice is that the "wraparound" effect I showed with the ellipse, by making uMin be greater than uMax, is possible with all the other primitives in this category, but the equivalent feature in the V direction is possible only with the torus. For example, the cone wraps around naturally in the U direction because the face itself is one continuous surface in that direction, but the surface doesn't wrap in the V direction. Some of you must be wondering what we can do with the ends of cones and cylinders. Do we want them left open so that the cones look like dunce caps and the cylinders look like tubes? Or do we want them to be closed so that they appear as if they were true solid objects? You may have already wondered about a similar issue when we used the uMax and uMin parameter values to cut away part of the object. Do we make a sphere look like a hollow ball, or like a solid ball that's been cut into? To take care of these issues, the ellipsoid, cone, cylinder, and torus have an extra field in their data structures that you can use to tell the system which of these end caps to draw: typedef enum TQ3EndCapMasks { kQ3EndCapNone = 0, kQ3EndCapMaskTop = 1 << 0, kQ3EndCapMaskBottom = 1 << 1, kQ3EndCapMaskInterior = 1 << 2 } TQ3EndCapMasks; typedef unsigned long TQ3EndCap; Figure 8. Cylinders with and without end caps or interior end caps This intention manifests itself in several ways. For example, in most cases the calls that create an object take the address of a public data structure as an argument, and this argument is exactly the same as that for the immediate-mode Submit call: TQ3GeometryObject polyhedron; TQ3PolyhedronData polyhedronData; ... /* Fill in poly data structure here. */ /* Create a polyhedron object... */ polyhedron = Q3Polyhedron_New(&polyhedronData); /* ...or use immed. mode with same struct. */ Q3Polyhedron_Submit(&polyhedronData, view); TQ3Vertex3D vertex; TQ3Vector3D normal = { 0, 1, 0 }; Q3Polyhedron_GetVertex(poly, 5, &vertex); Q3AttributeSet_Add(vertex.attributeSet, kQ3AttributeTypeNormal, &normal); Q3Object_Dispose There's also a rich set of editing calls for retained objects. While this makes the API rather large, it allows retained mode to have much of the flexibility of immediate mode. In some display-list or object-oriented graphics systems (or systems with both), such editing was often awkward to program and inefficient. The design of QuickDraw 3D, however, makes these operations easy, consistent, and convenient. Another design principle that pervades QuickDraw 3D is that you define properties by creating attribute set objects and locating them as close as possible (in the data structures) to the item to which they apply. 
For example, attribute sets for vertices are contained in a data structure along with the location (coordinates): typedef struct TQ3Vertex3D { TQ3Point3D point; TQ3AttributeSet attributeSet; } TQ3Vertex3D; Not least, a notable design criterion was consistency in naming, ordering, and structuring, because with an API so large it would be easy to get lost. For example, all of the primitives that have explicit vertices (like polyhedra, lines, triangles, polygons, and trigrids) use as the type of their vertices TQ3Vertex3D (with the exception of the trimesh). Also, the editing calls for all the objects are cut from the same cloth. The design philosophy for the geometry of type kQ3GeometryTypePolyhedron -- from now on, let's call it the polyhedron -- was to implement this idea in a way that was consistent with all the other QuickDraw 3D primitives. The basic entity for polygonal primitives (line, triangle, polygon, and so forth) is TQ3Vertex3D, which is an {x, y, z} location with an attribute set. For consistency with the rest of the geometric primitives, the polyhedron also uses this data structure for its vertices. The vertices of adjacent triangular faces are shared simply by using the same vertex indices. Also, sets of attributes may be shared like other objects in QuickDraw 3D: Figure 9. A cross section of a polyhedron with all vertices sharing locations but not attributes Another advantage to this approach is that values in an attribute set apply to all vertices sharing that attribute set, so operations on it simultaneously affect all the vertices to which it's attached. Of course, this applies to attributes on faces as well. For example, though you can texture an entire object by attaching the texture to the attribute set for the object, you can more naturally associate a single texture with a group of faces by simply having each face contain a shared reference to the texture-containing attribute set. But for a single texture to span a number of faces, you need to make sure their shared vertices share texture coordinates. You can do this by simply having shared vertices of faces that are spanned by a single texture use the same attribute set, which contains texture coordinates (see Figure 10). Figure 10. Applying textures that span a number of faces Rendering the edges. Since the geometric primitives are generally array-based, the polyhedron needs an array of faces -- and more information for a face. Besides an attribute set for the face, the three vertices defining a face are in an array (of size 3). The polyhedron also needs an enumerated type that tells us which edges are to be drawn, and which not: typedef enum TQ3PolyhedronEdgeMasks { kQ3PolyhedronEdgeNone = 0, kQ3PolyhedronEdge01 = 1 << 0, kQ3PolyhedronEdge12 = 1 << 1, kQ3PolyhedronEdge20 = 1 << 2, kQ3PolyhedronEdgeAll = kQ3PolyhedronEdge01 | kQ3PolyhedronEdge12 | kQ3PolyhedronEdge20 } TQ3PolyhedronEdgeMasks; typedef unsigned long TQ3PolyhedronEdge; typedef struct TQ3PolyhedronTriangleData { unsigned long vertexIndices[3]; TQ3PolyhedronEdge edgeFlag; TQ3AttributeSet triangleAttributeSet; } TQ3PolyhedronTriangleData; typedef struct TQ3PolyhedronEdgeData { unsigned long vertexIndices[2]; unsigned long triangleIndices[2]; TQ3AttributeSet edgeAttributeSet; } TQ3PolyhedronEdgeData; Figure 12. A schematic for filling out the polyhedron edge data structure The edgeAttributeSet field allows the application to specify the color and other attributes of the edges independently. 
If no attribute is set on an edge, the attributes are inherited from the geometry, or if that's not present, then from the view's state. Every edge must have two points, but edges may have one or two faces adjacent to them -- those with just one are on a boundary of the object. To represent this in an array-based representation, you use the identifier kQ3ArrayIndexNULL as a face index for the side of the edge that has no face attached to it. Note the relationship between the face indices and vertex indices in Figure 12. Relative to going from the vertex at index 0 to the vertex at index 1, the 0th face is to the left. If at all possible, fill out your data structures to conform to this schematic. For example, an application may want to traverse the edge list and be assured of knowing exactly which face is on which side of each edge. The polyhedron data structure. Whew! Finally we're ready for the entire data structure: typedef struct TQ3PolyhedronData { unsigned long numVertices; TQ3Vertex3D *vertices; unsigned long numEdges; TQ3PolyhedronEdgeData *edges; unsigned long numTriangles; TQ3PolyhedronTriangleData *triangles; TQ3AttributeSet polyhedronAttributeSet; } TQ3PolyhedronData; Creating a polyhedron. In Listing 1, you'll find the code that creates the four-faced polyhedron in Figure 11. TQ3ColorRGB polyhedronColor; TQ3PolyhedronData polyhedronData; TQ3GeometryObject polyhedron; TQ3Vector3D normal; static TQ3Vertex3D vertices[7] = { { { -1.0, 1.0, 0.0 }, NULL }, { { -1.0, -1.0, 0.0 }, NULL }, { { 0.0, 1.0, 1.0 }, NULL }, { { 0.0, -1.0, 1.0 }, NULL }, { { 2.0, 1.0, 1.0 }, NULL }, { { 2.0, -1.0, 0.0 }, NULL }, { { 0.0, -1.0, 1.0 }, NULL } }; TQ3PolyhedronTriangleData triangles[4] = { { /* Face 0 */ { 0, 1, 2 }, /* vertexIndices */ kQ3PolyhedronEdge01 | kQ3PolyhedronEdge20,/* edgeFlag */ NULL /* triangleAttributeSet */ }, { /* Face 1 */ { 1, 3, 2 }, kQ3PolyhedronEdge01 | kQ3PolyhedronEdge12, NULL }, { /* Face 2 */ { 2, 3, 4 }, kQ3PolyhedronEdgeAll, NULL }, { /* Face 3 */ { 6, 5, 4 }, kQ3PolyhedronEdgeAll, NULL } }; /* Set up vertices, edges, and triangular faces. */ polyhedronData.numVertices = 7; polyhedronData.vertices = vertices; polyhedronData.numEdges = 0; polyhedronData.edges = NULL; polyhedronData.numTriangles = 4; polyhedronData.triangles = triangles; /* Inherit the attribute set from the current state. */ polyhedronData.polyhedronAttributeSet = NULL; /* Put a normal on the first vertex. */ Q3Vector3D_Set(&normal, -1, 0, 1); Q3Vector3D_Normalize(& normal, &normal); vertices[0].attributeSet = Q3AttributeSet_New(); Q3AttributeSet_Add(vertices[0].attributeSet, kQ3AttributeTypeNormal, &normal); /* Same normal on the second. */ vertices [1].attributeSet = Q3Shared_GetReference(vertices[0].attributeSet); /* Different normal on the third. */ Q3Vector3D_Set(&normal, -0.5, 0.0, 1.0); Q3Vector3D_Normalize(&normal, &normal); vertices [2].attributeSet = Q3AttributeSet_New(); Q3AttributeSet_Add(vertices[2].attributeSet, kQ3AttributeTypeNormal, &normal); /* Same normal on the fourth. */ vertices[3].attributeSet = Q3Shared_GetReference(vertices[2].attributeSet); /* Put a color on the third triangle. */ triangles[3].triangleAttributeSet = Q3AttributeSet_New(); Q3ColorRGB_Set(&polyhedronColor, 0, 0, 1); Q3AttributeSet_Add(triangles[3].triangleAttributeSet, kQ3AttributeTypeDiffuseColor, &polyhedronColor); /* Create the polyhedron object. */ polyhedron = Q3Polyhedron_New(&polyhedronData); ... /* Dispose of attributes created and referenced. 
*/ Listing 2 shows the code that you'd use to specify the edges of the polyhedron in Figure 11, but this time with the optional edge list. You would add this code to the code in Listing 1, except that if you're using the edge list, you should set the edge flags in the triangle data to some legitimate value (like kQ3EdgeFlagAll), which will be ignored. Listing 2. Using an edge list to specify the edges of a polyhedron polyhedronData.numEdges = 8; polyhedronData.edges = malloc(8 * sizeof(TQ3PolyhedronEdgeData)); polyhedronData.edges[0].vertexIndices[0] = 0; polyhedronData.edges[0].vertexIndices[1] = 1; polyhedronData.edges[0].triangleIndices[0] = 0; polyhedronData.edges[0].triangleIndices[1] = kQ3ArrayIndexNULL; polyhedronData.edges[0].edgeAttributeSet = NULL; polyhedronData.edges[1].vertexIndices [0] = 2; polyhedronData.edges[1].vertexIndices[1] = 0; polyhedronData.edges[1].triangleIndices[0] = 0; polyhedronData.edges[1].triangleIndices[1] = kQ3ArrayIndexNULL; polyhedronData.edges [1].edgeAttributeSet = NULL; polyhedronData.edges[2].vertexIndices[0] = 1; polyhedronData.edges[2].vertexIndices[1] = 3; polyhedronData.edges[2].triangleIndices[0] = 1; polyhedronData.edges [2].triangleIndices[1] = kQ3ArrayIndexNULL; polyhedronData.edges[2].edgeAttributeSet = NULL; polyhedronData.edges[3].vertexIndices[0] = 3; polyhedronData.edges[3].vertexIndices[1] = 2; polyhedronData.edges[3].triangleIndices[0] = 1; polyhedronData.edges[3].triangleIndices[1] = 2; polyhedronData.edges[3].edgeAttributeSet = NULL; ... /* Specify the rest of the edges. */ Using the polyhedron to your best advantage. Before we leave the polyhedron, let's take a look at some of the characteristics that should make it the most widely used polyhedral primitive. Geometric editing operations, which change the positions of existing vertices, are easy and convenient. In immediate mode, you simply alter the point's position in the array in the data structure, and rerender. For retained mode, you'll find a number of function calls that allow you to change vertex locations, as well as the usual assortment of Get and Set calls for attributes, faces, face attributes, and so forth. Topological editing operations change the relationships between vertices, faces, edges, and the entire object. Though you can do these operations, the addition or deletion of vertices, faces, or edges may require reallocation of one or more of the arrays. Because the polyhedron has a public data structure, these operations are possible in both immediate mode and retained mode. So long as such operations aren't the primary ones required for using the polyhedron, it's not a problem; however, in the case where they are, you should use the mesh primitive. The polyhedron uses memory and disk space in a maximally efficient manner because shared locations and attributes are each stored only once and only those parts that logically require attributes need to have them. This results in generally excellent I/O characteristics (though, as is true of all geometric primitives, the addition of textures requires a great deal of space and can increase I/O time significantly). Finally, good to very good rendering speed is possible with the polyhedron, owing to the shared nature of the vertices. In short, you can easily use the polyhedron to represent almost any polyhedral Three features characterize the design of the trimesh data structures: Diverging from the design philosophy. 
The triangular face of a trimesh is simply a three-element array of indices into a location (TQ3Point3D) array, and an edge consists of indices into the location array and triangular face array: typedef struct TQ3TriMeshTriangleData { unsigned long pointIndices[3]; } TQ3TriMeshTriangleData; typedef struct TQ3TriMeshEdgeData { unsigned long pointIndices[2]; unsigned long triangleIndices[2]; } Note that this differs from the polyhedron, and most of the rest of the QuickDraw 3D primitives, in that the attributes associated with a part of the geometry are not closely attached to the geometric part. Instead, the normal -- say, for vertex number 17 -- is contained in the 17th element of an array of vertex normals, and it's the same for face and edge attributes. Of course, because you might have more than one type of attribute on a vertex, face, or edge, you might have an array of arrays of attributes. To keep things organized, the trimesh has a data structure that contains an identifier for the type of the attribute and a pointer to the array of values; so you actually will have an array of structures of the following type that contains arrays of data defining the typedef struct TQ3TriMeshAttributeData { TQ3AttributeType attributeType; void *data; char *attributeUseArray; } TQ3TriMeshAttributeData; For example, if a trimesh has 17 vertices with normals on them, you would create a data structure of this type, set attributeType to kQ3AttributeTypeNormal, allocate a 17-element array of TQ3Vector3D, and then fill it in appropriately. For all but custom attributes, the attributeUseArray pointer must be set to NULL. In the case of custom attributes, you can choose whether or not a particular vertex has that attribute by (in our example) allocating a 17-element array of 0/1 entries and setting to 1 the nth element if the nth vertex has a custom attribute on it (and 0 otherwise). You would use the same approach for vertex, face, and edge attributes. The trimesh data structure. The data structure for the trimesh consists of the attribute set for the entire geometry, plus pairs of (count, array) fields for points, edges, and faces, and the attributes that may be associated with each: typedef struct TQ3TriMeshData { TQ3AttributeSet triMeshAttributeSet; unsigned long numTriangles; TQ3TriMeshTriangleData *triangles; unsigned long numTriangleAttributeTypes; TQ3TriMeshAttributeData *triangleAttributeTypes; unsigned long numEdges; TQ3TriMeshEdgeData *edges; unsigned long numEdgeAttributeTypes; TQ3TriMeshAttributeData *edgeAttributeTypes; unsigned long numPoints; TQ3Point3D *points; unsigned long numVertexAttributeTypes; TQ3TriMeshAttributeData *vertexAttributeTypes; TQ3BoundingBox bBox; } TQ3TriMeshData; Trimesh characteristics. The uniform-attributes requirement and the use of arrays of explicit data for attributes -- as opposed to the attribute sets used throughout the rest of the system -- may be advantageous for some models and applications and make the trimesh relatively easy to use. The simplicity of this approach, however, makes it very hard to use this primitive to represent arbitrary, nonuniform polyhedra. (You'll learn more about this at the end of this article when I compare the characteristics of the four polyhedral primitives.) Geometric editing operations on the trimesh are similar to those on the polyhedron in immediate mode: you simply alter the point's position in the array in the data structure and rerender. 
There are no retained-mode part-editing API calls for the trimesh, as is befitting its design emphasis on immediate mode. Topological editing in immediate mode is also similar to that on the polyhedron. However, unlike the polyhedron, there are no retained-mode part-editing calls, so editing an object topologically is not possible. The uniform-attributes requirement for this primitive results in generally good I/O characteristics. However, the redundant-data problem that's inherent in this requirement may cause poor I/O speeds due to the repeated transfer of multiple copies of the same data (for example, the same color on every face). Rendering speed for the trimesh is generally good to very good. Like the polyhedron and trimesh, the mesh is intended for representing polyhedra. However, it was designed for a very specific type of use and has characteristics that make it quite different from the polyhedron and trimesh (and all the other QuickDraw 3D primitives). The mesh is intended for interactive topological creation and editing, so the architecture and API were designed to allow for iterative construction and topological modification. By iterative construction I mean that you can easily construct a mesh by building it up face-by-face, rather than using an all-at-once method of filling in a data structure and constructing the entire geometric object from that data structure (which you do with all the other geometric primitives). By topological modification I mean that you can easily add and delete vertices, faces, edges, and components. A particularly notable feature of this primitive is that it has no explicit (public) data structure, and so it has no immediate-mode capability, as do all the other geometric primitives. The mesh is specifically not intended to be used for representation of large-scale polyhedral models that have a lot of vertices and faces. If you use it this way, you get extremely poor I/O behavior, enormous memory usage, and less-than-ideal rendering speed. In particular, individuals or companies creating 3DMF files should immediately cease generating large models in the mesh format and instead use the polyhedron. Modeling, animation, and design applications should also cease using the mesh and begin using the polyhedron for most model creation and storage. The reason for this is that meshes consist of one or more components, which consist of one or more faces, which consist of one or more contours, which consist of a number of vertices. To enable the powerful topological editing and traversal functions, each of these entities must contain pointers not only to their constituent parts, but also to the entity of which they are parts. So a face must contain a reference to the component of which it is a part, and references to the contours that define the face itself. All of this connectivity information can significantly dwarf the actual geometrical information (the vertex locations), so a nontrivial model can take up an unexpectedly large amount of space in memory. The connectivity information (the pointers between parts) can take up from 50% to 75% of the space. Further, when reading a mesh, QuickDraw 3D must reconstruct all the connectivity information, which is computationally expensive. Writing is relatively slow as well, primarily because of the overhead incurred by the richly linked architecture. 
In addition, models distributed in the mesh format do a tremendous disservice to subsequent users of these models who might want to extract the data structure for immediate-mode use. This won't be possible because the mesh has no public data structure. These somewhat negative characteristics don't mean this isn't a useful primitive. For the purposes for which it was designed, it's clearly superior to any other available QuickDraw 3D geometric primitive. For example, if you have an application that uses a 3D sampling peripheral (for instance, a Polhemus device) for digitizing physical objects, the mesh would be ideal. You can easily use the mesh in such situations to construct the digitized model face-by-face, merge or split faces, add or delete vertices, and so forth. Doing this sort of thing with an array-based data structure would be awkward to program and inefficient becuase of the repeated reallocation you'd be forced to do. To give you an idea of the richness of the API and the powerful nature of this primitive, you can expect to find routines to create and destroy parts of meshes, retrieve the number of parts (and subparts of parts), get and set parts, and iterate over parts (and subparts). And because the iterators are so essential to the editing API, you'll find a large set of convenient macros for common iterative operations. Mesh characteristics. The mesh API richly supports both geometric and topological editing operations, but only for retained mode because the mesh has no immediate-mode public data structure -- an inconsistency with the design goals of the QuickDraw 3D API. (You should use the polyhedron primitive if immediate mode is desired.) In general, the rendering speed of meshes is relatively slow. In the case of the polyhedron and trimesh, faster rendering is facilitated by the use of arrays of points, which are presented to renderers in the form of a public data structure. The mesh, having neither an array-based representation nor a public version of the same, must be either traversed for rendering or decomposed into some other primitives that are more amenable to faster rendering. However, traversing usually results in retransformation and reshading of shared vertices (which tends to be extremely slow), while decomposition may involve tremendous use of space as well as complex and slow bookkeeping code. Faces of meshes (unlike those in the polyhedron and trimesh) may have more than three vertices, may be concave (though not self-intersecting), and may have holes by defining a face with more than one contour (list of vertices). Using the mesh. Listing 3 creates a mesh that's geometrically equivalent to the polyhedron created in Listing 1. static TQ3Vertex3D vertices[7] = { { { -1.0, 1.0, 0.0 }, NULL }, { { -1.0, -1.0, 0.0 }, NULL }, { { 0.0, 1.0, 1.0 }, NULL }, { { 0.0, -1.0, 1.0 }, NULL }, { { 2.0, 1.0, 1.0 }, NULL }, { { 2.0, -1.0, 0.0 }, NULL }, { { 0.0, -1.0, 1.0 }, NULL }, }; TQ3MeshVertex meshVertices[7], tmp[4]; TQ3GeometryObject mesh; TQ3MeshFace face01, face2, face3; TQ3AttributeSet faceAttributes; unsigned long i; TQ3ColorRGB color; TQ3Vector3D normal; /* Add normals to some of the vertices. 
vertices[0].attributeSet = Q3AttributeSet_New();
Q3Vector3D_Set(&normal, -1, 0, 1);
Q3Vector3D_Normalize(&normal, &normal);
Q3AttributeSet_Add(vertices[0].attributeSet, kQ3AttributeTypeNormal, &normal);
vertices[1].attributeSet = Q3Shared_GetReference(vertices[0].attributeSet);
vertices[2].attributeSet = Q3AttributeSet_New();
Q3Vector3D_Set(&normal, -0.5, 0.0, 1.0);
Q3Vector3D_Normalize(&normal, &normal);
Q3AttributeSet_Add(vertices[2].attributeSet, kQ3AttributeTypeNormal, &normal);
vertices[3].attributeSet = Q3Shared_GetReference(vertices[2].attributeSet);

/* Create the mesh. */
mesh = Q3Mesh_New();

/* Create the mesh vertices. */
for (i = 0; i < 7; i++) {
    meshVertices[i] = Q3Mesh_VertexNew(mesh, &vertices[i]);
}

/* Create a quad equal to the first two triangles in the polyhedron. */
tmp[0] = meshVertices[0];
tmp[1] = meshVertices[1];
tmp[2] = meshVertices[3];
tmp[3] = meshVertices[2];
face01 = Q3Mesh_FaceNew(mesh, 4, tmp, NULL);

/* Create other faces. */
tmp[0] = meshVertices[2];
tmp[1] = meshVertices[3];
tmp[2] = meshVertices[4];
face2 = Q3Mesh_FaceNew(mesh, 3, tmp, NULL);
tmp[0] = meshVertices[6];
tmp[1] = meshVertices[5];
tmp[2] = meshVertices[4];
face3 = Q3Mesh_FaceNew(mesh, 3, tmp, NULL);

/* Add an attribute set to the last face. */
faceAttributes = Q3AttributeSet_New();
Q3ColorRGB_Set(&color, 0, 0, 1);
Q3AttributeSet_Add(faceAttributes, kQ3AttributeTypeDiffuseColor, &color);
Q3Mesh_SetFaceAttributeSet(mesh, face3, faceAttributes);

Trigrid characteristics. The trigrid represents a topologically rectangular grid of triangles. The numbers of rows and columns are part of the data structure, and there is an optional array of type TQ3AttributeSet for face attributes. This primitive has a fixed topology, defined by the numbers of rows and columns, so space in memory and in files is used very efficiently. I/O is relatively fast because of the simplicity and efficiency of the primitive. Rendering can also be fast because of the shared nature of the points and the fixed topology. However, the fixed topology and the fact that shared locations must share attributes restrict the generality and flexibility of this primitive.

The table below summarizes the characteristics of the four polyhedral primitives.

Characteristic                                                  | Polyhedron         | Trimesh                     | Mesh                            | Trigrid
Memory usage                                                    | Very good          | Fair to very good           | Poor                            | Very good
File space usage                                                | Very good          | Fair to very good           | Very good                       | Very good
Rendering speed                                                 | Good to very good  | Good to very good           | Fair to good                    | Good to very good
Geometric object editing                                        | Very good          | Impossible (no API calls)   | Very good                       | Very good
Topological object editing                                      | Poor               | Impossible (no API calls)   | Very good                       | Impossible (fixed topology)
Geometric data structure editing                                | Very good          | Very good                   | Impossible (no data structure)  | Very good
Topological data structure editing                              | Fair               | Fair                        | Impossible (no data structure)  | Impossible (fixed topology)
I/O speed                                                       | Good to very good  | Fair to very good           | Fair                            | Good to very good
Flexibility / generality                                        | Good               | Poor                        | Very good                       | Poor (fixed topology)
Suitability for general model representation and distribution  | Very good          | Fair                        | Fair                            | Poor

The polyhedron primitive. This polyhedral primitive is the primitive of choice for the vast majority of programming situations and for the creation and distribution of model files if editing of models is desired. Companies and individuals whose businesses involve creation of, conversion to, distribution of, or sale of polyhedral models should produce them in polyhedron format, rather than mesh or trimesh.
User-level applications such as modelers and animation tools should generally use the polyhedron as well. Creators of plug-in renderers are required to support certain basic primitives (triangles, points, lines, and markers) and are also very strongly urged to support the polyhedron. Let's quickly recount some of the pluses for the polyhedron: it can easily represent arbitrarily shaped polyhedral models in a space-efficient fashion, it's amenable to fast rendering, it's highly consistent with the rest of the API, and attributes may be attached in whatever combination is appropriate for the model. The polyhedron has advantages over the mesh because of the mesh's profligate use of space and lack of immediate mode.

The mesh primitive. You should use the mesh primitive for interactive construction and topological editing. The rich set of geometric and topological object-editing calls, the ability to make nontriangular faces directly, the allowance of concave faces and faces with holes, and the consistent use of attribute sets make this primitive ideal for those purposes. In addition, the 3DMF representation of a mesh is quite space efficient. However, because the mesh lacks an immediate mode, it requires a large amount of memory and is generally "overkill" in terms of representation for other uses.

The trigrid primitive. Because of its fixed rectangular topology, the trigrid is a good choice for objects that are topologically rectangular -- for example, surfaces of revolution, swept surfaces, and terrain models -- and as an output primitive for applications that want to decompose their own parametric or implicit surfaces. If the situation matches one of these criteria and space is a serious issue, the trigrid is an especially good choice because it's more space efficient than the other primitives discussed here and it's very consistent with the rest of the QuickDraw 3D API.

The trimesh primitive. I promised earlier that I'd discuss the implications that the uniform-attributes requirement has on the suitability of this primitive for representing general polyhedral objects. Real objects have regions that are smoothly curved and regions that are intentionally flat or faceted, and they often have sharp edges, corners, and creases. The vertices in the curved regions need normals that approximate the surface normal at each vertex, but vertices at corners, along edges, or in a flat region need none. On a polyhedron, mesh, or trigrid, you need only take up storage (for the normal) on those vertices that actually require a normal, but on a trimesh you would be required to place vertex normals on all the vertices, resulting in a tremendous use of space.

This same problem can be seen for face attributes. Real objects often have regions that differ in color, transparency, or surface texture. For example, a soccer ball has black and white faces, and a wine bottle may have a label on the front, a different one on the back, and yet another around the neck. The other polyhedral primitives would, in the case of the soccer ball, simply create two attribute sets (one for each color) and attach a reference to the appropriate attribute set to each face, thus sharing the color information. In a trimesh, you would be required to create an array of colors, using quite a lot of space to represent the same data over and over. If you wanted to highlight one face, you couldn't simply attach a highlight switch attribute to that face (set to "on") -- you'd need to attach it to all the rest as well (set to "off").
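For the soccer-ball case just described, the shared-attribute-set approach takes only a few lines of code. The sketch below follows the conventions of the listings in this article (headers and error checking omitted); NUM_FACES and the alternating black/white assignment are stand-ins for whatever your model actually needs, and the resulting array would be handed to the polyhedron the same way the face attribute sets are in Listing 1.

#define NUM_FACES 32

static TQ3AttributeSet faceAttributes[NUM_FACES];

static void AssignSoccerBallColors(void)
{
    TQ3ColorRGB     black, white;
    TQ3AttributeSet blackSet, whiteSet;
    unsigned long   i;

    Q3ColorRGB_Set(&black, 0.0, 0.0, 0.0);
    Q3ColorRGB_Set(&white, 1.0, 1.0, 1.0);

    /* Exactly two attribute sets exist, no matter how many faces there are. */
    blackSet = Q3AttributeSet_New();
    Q3AttributeSet_Add(blackSet, kQ3AttributeTypeDiffuseColor, &black);
    whiteSet = Q3AttributeSet_New();
    Q3AttributeSet_Add(whiteSet, kQ3AttributeTypeDiffuseColor, &white);

    for (i = 0; i < NUM_FACES; i++) {
        /* Each face holds only a reference; the color data isn't copied. */
        faceAttributes[i] = Q3Shared_GetReference(
            (i % 2 == 0) ? blackSet : whiteSet);
    }
}

With a trimesh, by contrast, the same model needs a color entry for every face in its face attribute array, which is exactly the repeated-data problem described above.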
As for the wine bottle, you would want to attach the label textures to the appropriate faces on the bottle, which would require attaching texture parameters to the vertices of the faces to which you attached the label texture. With a trimesh, this extremely useful and powerful approach is simply not possible.

In using the trimesh for large polyhedral models, these problems can result in a rather startling explosion of space, both on disk and in memory. Consider a 10,000-face model whose faces are either red or green. The other polyhedral primitives would use references to just two color attribute sets, while the trimesh would use up 10,000 x 12 bytes = 120,000 bytes. Further, if the red faces were to be transparent, we would have to use up yet another 120,000 bytes. Highlighting just one face would require another 40,000 bytes. This same sort of data explosion can occur with vertex attributes as well. Note that these problems do not affect the other polyhedral primitives.

Thus, developers should carefully weigh the potentially negative consequences of the trimesh's characteristics when considering its use in applications. Its lack of object-editing calls renders it almost useless for an object-oriented approach, and this inconsistency with the rest of the QuickDraw 3D library may make its inclusion in a program awkward. In addition, because the trimesh doesn't use attribute sets (which are the foundation of the rest of the geometric primitives) for vertices, faces, and edges, it requires special-case handling in the application.

In spite of these features that limit the suitability of the trimesh for general-purpose polyhedral representation, the uniform-attributes requirement makes it ideal for models in which each vertex or face naturally has the same type of attributes as the other vertices (or faces), but with different values. For example, if your application uses Coons patches, it could subdivide the patch into a trimesh with normals on each vertex. Games often are written with objects such as walls, or even some stylized characters, that typically have just one texture for the entire thing and either no vertex attributes or, more often, normals on every vertex. Multimedia, some demo programs, and other "display-only" applications in which the user typically is unable to modify objects may find the trimesh useful, at least for those models that don't suffer from the size problems described earlier.

PHILIP J. SCHNEIDER (pjs@apple.com) is still the longest-surviving member of the QuickDraw 3D team (and in answer to an oft-posed question, no, not as in "surviving member of the Donner Party"). One current task is to find a name for his second son, which he and his wife expect in January, that won't eventually lead to a question like "Why did you give my older brother a cool name like Dakota, and then name me Bob?" He's given up trying to teach two-year-old Dakota to change his own diapers and has instead begun teaching him Monty Python's "The Lumberjack Song," which isn't nearly as useful a skill, but is one at which he has a better chance of succeeding.
Philip's original interest in geometry began early, when an elementary school teacher warned him that he could "put an eye out" with a protractor.

Thanks to our technical reviewers Rick Evans, Pablo Fernicola, Jim Mildrew, Klaus Strelau, and Nick Thompson.
{"url":"http://www.mactech.com/articles/develop/issue_28/schneider.html","timestamp":"2014-04-19T09:24:39Z","content_type":null,"content_length":"132152","record_id":"<urn:uuid:7efcf523-7c0f-4849-8025-5d34a657e476>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US7644342 - Semiconductor memory device

This is a continuation of application Ser. No. 10/292,397 filed Nov. 12, 2002, now U.S. Pat. No. 7,076,722, the entire contents of which are incorporated herein by reference. This application also claims benefit of priority under 35 U.S.C. § 119 to Japanese Patent Application No. 2001-356571 filed Nov. 21, 2001, the entire contents of which are incorporated herein by reference.

1. Field of the Invention

The present invention relates to a semiconductor memory device such as a NAND-type flash memory, and more particularly to a semiconductor memory device having an on-chip error correcting function.

2. Description of the Related Art

The NAND-type flash memory is known to deteriorate in cell properties through repeated rewrite operations, and its data may vary after it is left for a long time. In order to improve the reliability of the NAND-type flash memory, a semiconductor memory that contains an ECC (Error Correcting Code) circuit mounted on-chip for error detection and correction has been proposed in the art (for example, Japanese Patent Application Laid-Open Nos. 2000-348497 and 2001-14888).

FIG. 21 is a block diagram briefly showing an arrangement of the conventional NAND-type flash memory with ECC circuits mounted thereon. This memory comprises eight memory cell areas 1[0], 1[1], . . . , 1[7]. Each of the memory cell areas 1[0], 1[1], . . . , 1[7] includes a plurality of memory cells, not depicted, arrayed in a matrix. Data of 528 bits (= one page) can be written into and read out from 528 memory cells connected to a common word line through 528 bit lines at a time. Page buffers 2[0]-2[7] are connected to the memory cell areas 1[0]-1[7], respectively. Each page buffer can hold 528-bit write data and read data. Between the page buffers 2[0]-2[7] and I/O terminals 4[0]-4[7] located corresponding to the memory cell areas 1[0]-1[7], ECC circuits 3[0]-3[7] are provided for the memory cell areas 1[0]-1[7], respectively. Each ECC circuit 3[0]-3[7] has a coding function to add a certain number of check bits (ECC) to one page of information bits (528 bits) to be stored in each memory cell area 1[0]-1[7], and a decoding function to detect and correct a certain number of bit errors in the information bits with the check bits added thereto. BCH (Bose-Chaudhuri-Hocquenghem) code is employed as an error correcting code that can correct a plurality of bit errors with a relatively small circuit scale. Between the memory and the external circuitry, data is read and written on a basis of 8 bits, corresponding to the number of memory cell areas. Data is fed bit by bit into each ECC circuit 3[0]-3[7], and is circulated through and output from an internal cyclic shift register bit by bit to execute coding and decoding.

Operations of coding and decoding in the conventional ECC circuits 3[0]-3[7] using BCH code will be described next. The number of check bits in BCH code for correcting 2-bit errors and detecting 3-bit errors is 21 bits for 528 information bits. For convenience of description, a simpler error detection and correction system is described, which employs BCH code capable of correcting 2-bit errors and detecting 3-bit errors with the number of information bits k = 7, a code length n = 15, and the number of correctable bits t = 2.
In this case, the generating polynomial required for coding and decoding is given below, as is generally known:

Fundamental polynomial: F(X) = X^4 + X + 1
α minimal polynomial: M[1](x) = X^4 + X + 1
α^3 minimal polynomial: M[3](x) = X^4 + X^3 + X^2 + X + 1
Generating polynomial: G(x) = M[1]·M[3] = X^8 + X^7 + X^6 + X^4 + 1   (1)

(1) Coder

FIG. 22 is a block diagram showing a coder 10 functionally configured inside the conventional ECC circuit 3i (i = 0, 1, . . . , or 7). The coder 10 comprises a shift register 11 consisting of registers D[7], D[6], D[5], D[4], D[3], D[2], D[1], D[0], XOR circuits 12[1], 12[2], 12[3], 12[4] for modulo-2 operations, and circuit-changing switches SW1, SW2. An operation that moves the shift register 11 once corresponds to multiplying the value in the shift register 11 by X. The value of the data stored in the shift register 11 can be expressed by:

a[7]X^7 + a[6]X^6 + a[5]X^5 + a[4]X^4 + a[3]X^3 + a[2]X^2 + a[1]X + a[0]   (2)

where a[i] denotes the value stored in register D[i], and a[i] = 0 or 1 (i = 0-7). When this is shifted once, the following is obtained:

a[7]X^8 + a[6]X^7 + a[5]X^6 + a[4]X^5 + a[3]X^4 + a[2]X^3 + a[1]X^2 + a[0]X   (3)

From the generating polynomial G(x) given by Expression (1), the relation X^8 = X^7 + X^6 + X^4 + 1 is derived. Therefore, Expression (3) can be represented by:

(a[6]+a[7])X^7 + (a[5]+a[7])X^6 + a[4]X^5 + (a[3]+a[7])X^4 + a[2]X^3 + a[1]X^2 + a[0]X + a[7]   (4)

This corresponds to shifting each bit; storing the value a[7] of register D[7] into register D[0]; adding the values a[3], a[7] of registers D[3], D[7] at the XOR circuit 12[1] and storing the sum into register D[4]; adding the values a[5], a[7] of registers D[5], D[7] at the XOR circuit 12[2] and storing the sum into register D[6]; and adding the values a[6], a[7] of registers D[6], D[7] at the XOR circuit 12[3] and storing the sum into register D[7].

On coding, the switches SW1, SW2 are first connected to the ON sides to enter the input data (information bits) I[0], I[1], I[2], I[3], I[4], I[5], I[6] (I[0]-I[6] = 0 or 1) bit by bit from outside through the I/O terminal 4i. Every time one bit of the input data I[0]-I[6] enters, the shift register 11 operates once. As the switch SW1 is kept ON while the input data I[0]-I[6] enters, the data is output bit by bit to the page buffer 2i as it is. At the same time, the input data I[0]-I[6] is added to the value a[7] of the register D[7] at the XOR circuit 12[4], and the sum is stored in turn into the shift register 11. After completion of input of the data I[0]-I[6] into the page buffer 2i, check bits I[7], I[8], I[9], I[10], I[11], I[12], I[13], I[14] are stored in the registers D[7], D[6], D[5], D[4], D[3], D[2], D[1], D[0] of the shift register 11, respectively. The switches SW1, SW2 are then connected to the OFF sides and, every time the shift register 11 operates, the check bits I[7]-I[14] are output serially to the page buffer 2i through the switch SW1. The information bits and check bits stored in the page buffer 2i are written into the memory cell area 1i. At the same time, the value in the shift register 11 is reset.

(2) Decoder

A decoder is described next. The decoder comprises syndrome computational circuits and an error position detector. In the case of 2-bit error correction, two syndromes S[1], S[3] are required for decoding. These syndromes can be derived from the minimal polynomial M[1](x) = X^4 + X + 1, as is known. FIG. 23 specifically shows (A) a conventional S[1] syndrome computational circuit 20 and (B) a conventional S[3] syndrome computational circuit 30. Based on the minimal polynomial M[1](x), the S[1] syndrome computational circuit 20 in FIG.
23A comprises a shift register 21 consisting of registers D[3], D[2], D[1], D[0], and XOR circuits 22 [1], 22 [2]. An operation for moving the shift register 21 once corresponds to multiplying a value in the shift register 21 by X. The value stored in the shift register 21 can be expressed by: where a[i ]denotes a value stored in a register D[i], and a[i]=0 or 1 (i=0-3). When this is shifted once, the following is obtained: From the α minimal polynomial M[1](x), a relation of X^4=X+1 is derived. Accordingly: This corresponds to shifting each bit; storing the value a[3 ]of the register D[3 ]into the register D[0]; and adding the values a[0], a[3 ]of the registers D[0], D[3 ]at the XOR circuit 12 [2 ]and storing the sum into the register D[1]. The information bits I[0]-I[6 ]and check bits I[7]-I[14 ]are fed in this order into the S[1 ]syndrome computational circuit 20 bit by bit. The shift register 21 operates once every time one bit enters. After all bits I[0]-I[14 ]enter, the syndrome S[1 ]is generated in the shift register 21 (D[0]-D[3]). Similar to the S[1 ]syndrome computational circuit 20, the S[3 ]syndrome computational circuit 30 in FIG. 23B comprises a shift register 31 consisting of registers D[3], D[2], D[1], D[0], and XOR circuits 32 [1], 32 [2], 32 [3], 32 [4]. It is configured by the X^3 circuit of the minimal polynomial M[1](x). In the S[3 ]syndrome computational circuit 30, an operation for moving the shift register 31 once corresponds to multiplying a value in the shift register 31 by X^3. The value stored in the shift register 31 is expressed by Expression (5). When it is multiplied by X^3, the following is given: From the α minimal polynomial M[1](x), a relation of X^4=X+1 is derived. Accordingly: This corresponds to shifting each bit; storing the value a[1 ]of the register D[1 ]into the register D[0]; adding the values a[1], a[2 ]of the registers D[1], D[2 ]at the XOR circuit 32 [2 ]and storing the sum into the register D[1]; adding the values a[2], a[3 ]of the registers D[2], D[3 ]at the XOR circuit 32 [3 ]and storing the sum into the register D[2]; and adding the values a[0], a[3 ]of the registers D[0], D[3 ]at the XOR circuit 32 [4 ]and storing the sum into the register D[3]. The information bits I[0]-I[6 ]and check bits I[7]-I[14 ]stored in the memory cells are also fed in this order into the S[3 ]syndrome computational circuit 30 bit by bit. The shift register 31 operates once every time one bit enters. After all bits I[0]-I[14 ]enter, the syndrome S[3 ]is generated in the shift register 31 (D[0]-D[3]). FIG. 24 is a flowchart showing an algorithm for decoding. The S[1], S[3 ]syndrome computational circuits 20, 30 compute syndromes S[1], S[3 ]first based on the information bits and check bits read out from the memory cell area 1 i (step S1). If the syndromes S[1], S[3 ]are S1=S3=0, it is determined errorless, and the read-out information bits are output as they are (steps S2, S3, S4). If only one of the syndromes S[1], S[3 ]is equal to 0, it is determined uncorrectable, and the data is output as it is (steps S2, S3, S5, S6, S7). If S[1]≠0 and S[3]≠0, computations are executed to derive σ [1]=S[1] ^2 and σ[2]=S[1] ^3+S[3 ](steps S2, S6, S8). If σ[2]=0 (step S9), it can be found that a 1-bit error is present, and 1-bit corrected data is output (step S10). If σ[2]≠0 (step S9), it can be found that 2-bit errors are present, and 2-bit corrected data is output (step S11). 
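The decoding flow of FIG. 24, together with the bit-serial syndrome circuits of FIG. 23, can be summarized in a small software model. The C sketch below is only an illustration of the algorithm for the (15, 7) example code: GF(2^4) values are held in 4-bit integers with X^4 reduced to X + 1, the all-zero word stands in for a valid codeword, and the function names are invented rather than taken from the patent.

#include <stdio.h>

/* Multiply a GF(2^4) element by X, reducing by X^4 + X + 1. */
static unsigned mul_x(unsigned v)
{
    v <<= 1;
    if (v & 0x10u) v ^= 0x13u;      /* X^4 -> X + 1 */
    return v & 0xFu;
}

/* General GF(2^4) multiplication, built from mul_x(). */
static unsigned gf_mul(unsigned a, unsigned b)
{
    unsigned p = 0;
    while (b) {
        if (b & 1u) p ^= a;
        a = mul_x(a);
        b >>= 1;
    }
    return p & 0xFu;
}

/* Model of the circuits of FIG. 23: each incoming bit first multiplies the */
/* register by X (for S1) or by X^3 (for S3), then is added to the X^0 term. */
static void syndromes(const int bits[15], unsigned *s1, unsigned *s3)
{
    unsigned r1 = 0, r3 = 0;
    int j;
    for (j = 0; j < 15; j++) {
        r1 = mul_x(r1);
        r3 = mul_x(mul_x(mul_x(r3)));
        r1 ^= (unsigned)bits[j];
        r3 ^= (unsigned)bits[j];
    }
    *s1 = r1;
    *s3 = r3;
}

/* Decision steps S2-S11 of FIG. 24 (the error positions themselves are     */
/* found separately, with the error position polynomial described next).    */
static const char *classify(unsigned s1, unsigned s3)
{
    unsigned sigma2;
    if (s1 == 0 && s3 == 0) return "no error";
    if (s1 == 0 || s3 == 0) return "uncorrectable";
    sigma2 = gf_mul(gf_mul(s1, s1), s1) ^ s3;   /* sigma2 = S1^3 + S3 */
    return (sigma2 == 0) ? "1-bit error" : "2-bit errors";
}

int main(void)
{
    int word[15] = {0};              /* the all-zero word is a valid codeword */
    unsigned s1, s3;

    word[2] ^= 1;                    /* inject one bit error */
    syndromes(word, &s1, &s3);
    printf("one flip : S1=%X S3=%X -> %s\n", s1, s3, classify(s1, s3));

    word[6] ^= 1;                    /* inject a second bit error */
    syndromes(word, &s1, &s3);
    printf("two flips: S1=%X S3=%X -> %s\n", s1, s3, classify(s1, s3));
    return 0;
}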
The position of the error bit can be found by assigning Z=α^I (I=0, 1, 2, 3, 4, 5, 6) in turn to an error position polynomial σ(Z) represented by Expression (10) as it is known generally. The position of the error can be indicated by i that holds σ(α^I)=0. σ(Z)=S [1]+σ[1] ×Z+σ [2] ×Z ^2(10) An arrangement of the error position detector is shown in FIGS. 25 and 26, which is configured based on such the point. FIG. 25 shows a first arithmetic section 40 a that computes and stores S[1], σ and σ[2]. FIG. 26 shows a second arithmetic section 40 b that executes the operation of Expression (10) based on the operated result from the first arithmetic section 40 a and outputs a detection signal to indicate the error position in the data. As shown in FIG. 25, the first arithmetic section 40 a comprises a shift register 41, an X arithmetic circuit 42, and an X^2 arithmetic circuit 43. A shift register 41 a stores the syndrome S[1], and shift registers 42 a and 43 a store the operated results, σ[1]=S[1] ^2 and σ[2]=S[1] ^3+S[3]. It is assumed that the shift register 42 a has a value of: where a[i ]denotes a value stored in a register D[i], and a[i]=0 or 1 (i=0-3). As the X arithmetic circuit 42 multiplies it by X, the value of the shift register 42 a comes to: From the α minimal polynomial M[1](x), a relation of X^4=X+1 is present. Accordingly, Expression (12) yields: This corresponds to shifting each bit; storing the value a[3 ]of the register D[3 ]into the register D[0]; and adding the values a[0], a[3 ]of the registers D[0], D[3 ]at the XOR circuit 42 [2 ]and storing the sum into the register D[1]. The X^2 arithmetic circuit 43 multiplies the value of the shift register 43 a by X^2. Therefore, when the value indicated by Expression (11) is stored in the shift register 43 a, and it is multiplied by X^2, the value of the shift register 43 a comes to: From the α minimal polynomial M[1](x), a relation of X^4=X+1 is present. Accordingly, Expression (14) yields: This corresponds to shifting each bit; storing the value a[2 ]of the register E[2 ]into the register E[0]; storing the value a[1 ]of the register E[1 ]into the register E[3]; adding the values a[2], a[3 ]of the registers E[2], E[3 ]at the XOR circuit 43 b [1 ]and storing the sum into the register E[1]; and adding the values a[0], a[3 ]of the registers E[0], E[3 ]at the XOR circuit 43 b [2 ]and storing the sum into the register E[2]. When 1-bit data I[0]-I[6 ]is output, one shift operation of the shift registers 41 a, 42 a, 43 a multiplies the term of σ[1 ]by Z in the X arithmetic section 42 and the term of σ[2 ]by Z^2 in the X^2 arithmetic section 43. The NAND-type flash memory operates the shift registers 41 a, 42 a, 43 a in synchronization with the toggle signal that is employed to output the information bits stored in the memory cell to outside the chip. In the second arithmetic circuit 40 b, the result from the operation through an XOR circuit 44 and an NOR gate 45 exhibits ‘1’ at the error position. This output is employed to invert the corresponding data Ii to detect and correct the error. Thus, in the conventional ECC circuit that employs BCH code, one shift and computation per 1-bit input is the basic operation. The NAND-type flash memory receives parallel data input from external on a basis of 8-I/O or 16-I/O per address. Therefore, it is required to correct an error per I/O or compute 8 or 16 times during the one input. 
The 8 or 16-time computation during the one input needs a fast operation for this part, which can not be achieved practically because a special process is required, for example. Therefore, an ECC circuit 3 i is provided for each memory cell area 1 i (each I/O) in the art to correct errors on a basis of each memory cell area 1 i. The NAND-type flash memory reads and programs data per page (528 bytes). If it intends to correct 2-bit errors and detect 3-bit errors per I/O, it requires 21 check bits for 528 information bits, 21×8=168 extra check bits in total for the entire chip. This is an inhibit factor for improving the chip integration density. The present invention has been made in consideration of such the problem and accordingly has an object to provide a semiconductor memory device capable of reducing the number of check bits relative to the number of information bits to improve a chip integration density. According to an aspect of the invention, a semiconductor memory device comprises a plurality of memory cell areas, each of which includes a plurality of memory cells arrayed in a matrix and has a data I/O portion; a plurality of buffers, each of which is located on the data I/O portion at each memory cell area to temporarily store data to be written into the memory cell area and data read out from the memory cell area; a plurality of I/O terminals, each of which is configured to receive the data to be written into the memory cell area from external and output the data read out from the memory cell area to external; and an error correction circuit located between the plurality of I/O terminals and the plurality of buffers, the error correction circuit includes a coder configured to generate check bits for error correcting and to attach the check bits to the data to be written into the memory cell area and a decoder configured to process for error correcting the data read out from the memory cell area with the generated check bits, the error correction circuit operates to allocate a set of check bits to an information bit length of M×N (N denotes an integer of two or more) to execute at least one of coding and decoding by parallel processing N-bit data, where M denotes the number of bits in a unit of data to be written into and read out from the memory cell area. The present invention will be more fully understood from the following detailed description with reference to the accompanying drawings, in which: FIG. 1 is a block diagram showing an arrangement of a coder for use in an ECC circuit mounted on a flash memory according to a first embodiment of the present invention; FIG. 2 is a block diagram showing an arrangement of a shift register for use in the coder; FIG. 3 is a truth table of an XOR circuit for use in the coder; FIGS. 4A and 4B are block diagrams showing syndrome computational circuits in a decoder for use in the ECC circuit; FIG. 5 is a block diagram showing a first arithmetic section contained in an error position detector for use in the decoder; FIG. 6 is a block diagram showing a second arithmetic section contained in the error position detector; FIG. 7 is a block diagram showing a NAND-type flash memory according to a second embodiment of the present invention; FIG. 8 is a circuit diagram showing an arrangement of a memory cell area in the flash memory; FIG. 9 is a block diagram showing an ECC circuit in the flash memory; FIG. 10 shows registers contained in an arithmetic logic circuit on coding in the ECC circuit; FIG. 
11 is a flowchart showing an operation of coding in the coder; FIG. 12 is a timing chart on coding; FIG. 13 shows registers contained in an arithmetic logic circuit for decoding in the ECC circuit; FIG. 14 is a flowchart showing an operation of decoding; FIG. 15 is a block diagram of an error position detector in the ECC circuit; FIG. 16 is a flowchart showing an algorithm for computing each term in an error position polynomial in the error position detector; FIGS. 17A, 17B and 17C are block diagrams of a Galois arithmetic circuit in the ECC circuit; FIG. 18 shows a second arithmetic section in the error position detector; FIG. 19 is a block diagram of another error position detector in the ECC circuit; FIGS. 20A and 20B are timing charts on decoding in the ECC circuit; FIG. 21 is a block diagram showing an arrangement of the NAND-type flash memory with conventional ECC circuits mounted thereon; FIG. 22 is a block diagram showing a coder in the conventional ECC circuit; FIGS. 23A and 23B are block diagrams showing conventional syndrome computational circuits; FIG. 24 is a flowchart showing a decoding algorithm in the conventional ECC circuit; FIG. 25 is a block diagram showing a first arithmetic section contained in an error position detector in the conventional ECC circuit; and FIG. 26 is a block diagram showing a second arithmetic section contained in the error position detector in the conventional ECC circuit. Embodiments of the present invention will be described below with reference to the drawings. (1) First Embodiment In order to provide an understanding of the present invention, 2-bit error correction is exemplified as a first embodiment with the number of information bits, k=7, a code length, n=15, and the number of correction bits, t=2. (1-1) Coder When input data I[0 ]enters the conventional coder 11 shown in FIG. 22, the input data I[0 ]is added at the XOR circuit 12 [4 ]to the term of X^7 in the coder, then multiplied by X. Each register 11 in the coder 10 in the initial state has a value of 0, which is referred to as (0). Accordingly: When next input data I[1 ]enters the coder 10, the input data I[1 ]is added to the term of X^7 in the coder 10, then multiplied by X to yield: When next input data I[2 ]enters the coder 10, the input data I[2 ]is added to the term of X^7 in the coder 10, then multiplied by X to yield: Similarly, after input data, up to I[6], enters the coder 10, the following is given: This expression can be altered in: This means that the pieces of input data I[0], I[1 ]are added to the terms of X^7, X^6 in the coder 10, respectively, then multiplied by X^2. Thereafter, the pieces of input data I[2], I[3 ]are added to the terms of X^7, X^6 in the coder 10, respectively, then multiplied by X^2. Finally the pieces of input data I[4], I[5 ]are added to the terms of X^7, X^6 in the coder 10, respectively, then multiplied by X^2. In a word, one operation of the shift register 11 after two bits input can multiply the data by X^2. As for the last data I[6], however, one bit input multiplies it by X as is in the art. When the value of the shift register 11 represented by Expression (2) is multiplied by X^2, it comes to: From the generating polynomial G(x) given by Expression (1), a relation of X^8=X^7+X^6+X^4+1 is derived. Therefore, Expression (21) yields: FIG. 1 is a block diagram showing a circuit arrangement of a coder 50 according to the present embodiment that specifically configures Expression (23). 
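Before walking through the gates of FIG. 1, the same update can be checked in software. The C sketch below models both the conventional one-bit-per-shift coder of FIG. 22 and the two-bit-per-shift update of Expression (23) for G(x) = X^8 + X^7 + X^6 + X^4 + 1, with the register held in an integer whose bit i is the coefficient of X^i; the function names are illustrative only, and the final bit I[6] is handled serially exactly as described in the text.

#include <stdio.h>

#define GPOLY 0x1D1u            /* X^8 + X^7 + X^6 + X^4 + 1 */

static unsigned mul_x_mod_g(unsigned r)     /* one conventional shift        */
{
    r <<= 1;
    if (r & 0x100u) r ^= GPOLY;             /* X^8 -> X^7 + X^6 + X^4 + 1    */
    return r & 0xFFu;
}

/* Conventional coder 10: add the bit to the X^7 term, then multiply by X.  */
static unsigned step_1bit(unsigned r, unsigned bit)
{
    return mul_x_mod_g(r ^ (bit << 7));
}

/* Coder 50: add two bits to the X^7 and X^6 terms, then multiply by X^2 -- */
/* two conventional shifts folded into one register clock.                  */
static unsigned step_2bit(unsigned r, unsigned b0, unsigned b1)
{
    r ^= (b0 << 7) ^ (b1 << 6);
    return mul_x_mod_g(mul_x_mod_g(r));
}

int main(void)
{
    unsigned info[7] = {1, 0, 1, 1, 0, 0, 1};   /* I[0]..I[6], arbitrary      */
    unsigned serial = 0, parallel = 0;
    int i;

    for (i = 0; i < 7; i++)                     /* bit-serial reference       */
        serial = step_1bit(serial, info[i]);

    for (i = 0; i < 6; i += 2)                  /* three 2-bit-parallel steps */
        parallel = step_2bit(parallel, info[i], info[i + 1]);
    parallel = step_1bit(parallel, info[6]);    /* last bit handled serially  */

    /* Both registers now hold the check bits I[7]..I[14], I[7] in bit 7.    */
    printf("serial   check bits: 0x%02X\n", serial);
    printf("parallel check bits: 0x%02X\n", parallel);
    return 0;
}

Both loops leave identical contents in the register, which is just the equivalence that Expressions (16) through (20) establish algebraically.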
The coder 50 comprises a shift register 51 consisting of registers D[7], D[6], D[5], D[4], D[3], D[2], D[1], D[0], XOR circuits 52 [1], 52 [2], 52 [3], 52 [4], 52 [5], 52 [6], 52 [7], and four switches SW11, SW12, SW21, SW22 for changing input data and output data. The shift register 51 includes four-stage transfer gates 51 a and other necessary gate circuits 51 b as shown in FIG. 2. In the transfer gates 51 a, a reset signal RSTn is employed to reset the contents of data and a clock signal CLK to synchronously transfer 1-bit data from an input terminal IN to an output terminal OUT. An XOR circuit 52 applies a modulo-2 operation to data input from input terminals IN1, IN2, as shown in FIG. 3, and output the result from an output terminal OUT. Based on Expression (23), the coder 50 through one shift operation performs: adding the values a[6], a[7 ]of the registers D[6], D[7 ]at the XOR gate 52 [6 ]and storing the sum into the register D [0]; storing the value a[7 ]of the register D[7 ]into the register D[1]; storing the value a[0 ]of the register D[0 ]into the register D[2]; storing the value a[1 ]of the register D[1 ]into the register D[3]; adding the values a[2], a[6], a[7 ]of the registers D[2], D[6], D[7 ]at the XOR gates 52 [1], 52 [6 ]and storing the sum into the register D[4]; adding the values a[3], a[7 ]of the registers D[3], D[7 ]at the XOR gate 52 [2 ]and storing the sum into the register D[5]; adding the values a[4], a[6], a[7 ]of the registers D[4], D[6], D[7 ]at the XOR gates 52 [3], 52 [6 ]and storing the sum into the register D[6]; and adding the values a[5], a[6 ]of the registers D[5], D[6 ]at the XOR gate 52 [5 ]and storing the sum into the register D[7]. The pieces of input data (information bits) I[0], I[1], I[2], I[3], I[4], I[5], I[6], given from external to be written into the memory, are divided into two: input data I[0], I[2], I[4 ]and input data I[1], I[3], I[5]. The input data I[0], I[2], I[4 ]is fed to ON sides of the switches SW11, SW21. The input data I[1], I[3], I[5 ]is fed to ON sides of the switches SW12, SW22. The pieces of input data are fed by two bits in parallel in an order of (I[0], I[1]), (I[2], I[3]), (I[4], I[5]). After the input, the shift register 51 operates once. As the shift register 51 is connected to every other one, one shift operation multiplies the data by X^2. While the pieces of data (I[0], I[1]), (I[2], I[3]), (I[4], I[5]) enter, the switches SW11, SW12, SW21, SW22 are all kept ON to allow these pieces of data to output by two bits in parallel as they are. At the same time, the data I[0], I[2], I[4 ]is added to the value a[7 ]of the register D[7 ]at the XOR circuit 52 [7 ]and sequentially stored in the shift register 51. The data I[1], I[3], I[5 ]is added to the value a[7 ]of the register D[7 ]at the XOR circuit 52 [4 ]and sequentially stored in the shift register 51. As the last I[6 ]of the input data is 1-bit input, the connection is switched to the same as in the conventional coder 10 shown in FIG. 22. Such the switching is required because k=7 is selected as the number of information bits. After completion of input of the data I[0], I[1], I[2], I[3], I[4], I[5], I[6], check bits I[7], I[8], I[9], I[10], I[11], I[12], I[13], I[14 ]are stored inside the registers D[7], D[6], D[5], D[4], D[3], D[2], D[1], D[0 ]in the shift register 51, respectively. The switches SW11, SW12, SW21, SW22 are then all connected to OFF sides. 
Thus, every time the shift register 51 operates, the check bits I[7], I[9], I[11], I[13] are fed to the output of the switch SW11 and the check bits I[8], I[10], I[12], I[14] to the output of the switch SW12. At the same time, the value in the shift register 51 is reset. This allows the check bits to be generated through 2-bit input parallel processing.

(1-2) Decoder

{circle around (1)} S[1] Syndrome Computational Circuit

In the conventional S[1] syndrome computational circuit 20 of FIG. 23A, the value in the S[1] syndrome computational circuit 20 is first multiplied by X, then the input data I[0] is added to the term of X^0 at the XOR circuit 22[1]. The shift register 21 in the S[1] syndrome computational circuit 20 in the initial state has a value of 0, which is referred to as (0). Accordingly: After the value in the S[1] syndrome computational circuit 20 is multiplied by X, the input data I[1] is added to the term of X^0. Accordingly: Subsequently, after the value in the S[1] syndrome computational circuit 20 is multiplied by X, the input data I[2] is added to the term of X^0. Accordingly: When the input data, up to I[14], enters the S[1] syndrome computational circuit 20, the following is given: The expression can be altered into: This means that after the value in the S[1] syndrome computational circuit 20 is multiplied by X^2, the input data I[0] is added to the term of X^1, and the input data I[1] to the term of X^0. Then, after the value in the S[1] syndrome computational circuit 20 is multiplied by X^2, the input data I[2] is added to the term of X^1, and the input data I[3] to the term of X^0. Next, after the value in the S[1] syndrome computational circuit 20 is multiplied by X^2, the input data I[4] is added to the term of X^1, and the input data I[5] to the term of X^0. In a word, one operation of the shift register multiplies the data by X^2, and then 2-bit data enters. Finally, after the value in the S[1] syndrome computational circuit 20 is multiplied by X, the input data I[14] is added to the term of X^0 by 1-bit input. When the value of the shift register 21, expressed by Expression (5), is multiplied by X^2, the following is given: From the α minimal polynomial M[1](x), a relation of X^4 = X + 1 is derived. Accordingly:

FIG. 4A is a block diagram showing a circuit arrangement of an S[1] syndrome computational circuit 60 according to the present embodiment that specifically configures Expression (30). The S[1] syndrome computational circuit 60 comprises a shift register 61 consisting of registers D[0], D[1], D[2], D[3], and XOR circuits 62[1], 62[2], 62[3], 62[4]. Based on Expression (30), the S[1] syndrome computational circuit 60 through one shift operation performs: storing the value a[2] of the register D[2] into the register D[0]; adding the values a[2], a[3] of the registers D[2], D[3] at the XOR circuit 62[2] and storing the sum into the register D[1]; adding the values a[0], a[3] of the registers D[0], D[3] at the XOR circuit 62[4] and storing the sum into the register D[2]; and storing the value a[1] of the register D[1] into the register D[3]. The information bits I[0], I[1], I[2], I[3], I[4], I[5], I[6] and check bits I[7], I[8], I[9], I[10], I[11], I[12], I[13], I[14] read out from the memory cell area, not depicted, are divided into I[0], I[2], I[4], I[6], I[8], I[10], I[12], I[14] and I[1], I[3], I[5], I[7], I[9], I[11], I[13], and fed by two bits in parallel in an order of (I[0], I[1]), (I[2], I[3]), (I[4], I[5]), . . .
to the S [1 ]syndrome computational circuit 60. After the input, the shift register 61 operates once. As the shift register 61 is connected to every other one, one shift operation multiplies the data by X^2. The data I[0], I[2], I[4], . . . , I[14 ]is added at the XOR circuit 62 [3 ]to the output, a[2]+a[3], from the XOR circuit 62 [2 ]and the sum is stored in the register D[1]. The data I[1], I[3], I [5], . . . , I[13 ]is added at the XOR circuit 62 [1 ]to the value a[2 ]of the register D[2 ]and the sum is stored in the register D[0]. As the last I[6 ]of the information bits is 1-bit input, the connection is switched to the same as in the circuit of FIG. 23. Alternatively, it is possible to input I[15]=0 to the S[1 ]syndrome computational circuit 60 and, after a shift operation, multiply the shift register by X^−1. This allows 2-bit input parallel processing to be performed. {circle around (2)} S[3 ]Syndrome Computational Circuit A S[3 ]syndrome computational circuit 70 in FIG. 4B is described next. In the conventional S[3 ]syndrome computational circuit 30 in FIG. 23A, the value in the S[3 ]syndrome computational circuit 30 is first multiplied by X^3, then the input data I[0 ]is added to the term of X[0 ]at the XOR circuit 32 [1]. The shift register 31 in the S[3 ]syndrome computational circuit 30 in the initial state has a value of 0, which is referred to as (0). Accordingly: After the value in the S[3 ]syndrome computational circuit 30 is multiplied by X^3, the input data I[1 ]is added to the term of X[0]. Accordingly: Subsequently, after the value in the S[3 ]syndrome computational circuit 30 is multiplied by X^3, the input data I[2 ]is added to the term of X[0]. Accordingly: When the input data, up to I[14], enters the S[3 ]syndrome computational circuit 30, the following is given: The expression can be altered in: This means that after the value in the S[3 ]syndrome computational circuit 30 is multiplied by X^6, the input data I[0 ]is added to the term of X^3, and the input data I[1 ]to the term of X^0. Then, after the value in the S[3 ]syndrome computational circuit 30 is multiplied by X^6, the input data I[2 ]is added to the term of X^3, and the input data I[3 ]to the term of X^0. Next, after the value in the S[3 ]syndrome computational circuit 30 is multiplied by X^6, the input data I[4 ]is added to the term of X^3, and the input data I[5 ]to the term of X^0. In a word, one operation of the shift register multiplies the data by X^6, then 2-bit data is input. Finally, after the value in the S[3 ]syndrome computational circuit 30 is multiplied by X^3, the input data I[14 ]is added to the term of X[0 ]by 1-bit input. When the value of the shift register 31, expressed by Expression (5), is multiplied by X^6, the following is given: From the α minimal polynomial M[1](x), a relation of X^4=X+1 is derived. Accordingly: FIG. 4B is a block diagram showing a circuit arrangement of the S[3 ]syndrome computational circuit 70 according to the present embodiment that specifically configures Expression (37). The S[3 ]syndrome computational circuit 70 comprises a shift register 71 consisting of registers D[0], D[1], D[2], D[3], and XOR circuits 72 [1], 72 [2], 72 [3], 72 [4], 72 [5], 72 [6]. 
Based on Expression (37), the S[3] syndrome computational circuit 70 through one shift operation performs: adding the values a[1], a[2] of the registers D[1], D[2] at the XOR circuit 72[2] and storing the sum into the register D[0]; adding the values a[1], a[3] of the registers D[1], D[3] at the XOR circuit 72[6] and storing the sum into the register D[1]; adding the values a[0], a[2] of the registers D[0], D[2] at the XOR circuit 72[4] and storing the sum into the register D[2]; and adding the values a[0], a[1], a[3] of the registers D[0], D[1], D[3] at the XOR circuits 72[5], 72[6] and storing the sum into the register D[3]. The information bits I[0], I[1], I[2], I[3], I[4], I[5], I[6] and check bits I[7], I[8], I[9], I[10], I[11], I[12], I[13], I[14] read out from the memory cell area, not depicted, are divided into I[0], I[2], I[4], I[6], I[8], I[10], I[12], I[14] and I[1], I[3], I[5], I[7], I[9], I[11], I[13], and fed by two bits in parallel in an order of (I[0], I[1]), (I[2], I[3]), (I[4], I[5]), . . . to the S[3] syndrome computational circuit 70. After the input, the shift register 71 operates once. The data I[0], I[2], I[4], . . . , I[14] is added at the XOR circuit 72[3] to the output, a[1]+a[3], from the XOR circuit 72[6], and the sum is stored in the register D[1]. The data I[1], I[3], I[5], . . . , I[13] is added at the XOR circuit 72[2] to the output, a[1]+a[2], from the XOR circuit 72[1], and the sum is stored in the register D[0]. As the last bit I[6] of the information bits is a 1-bit input, the connection is switched to the same as in the S[3] syndrome computational circuit 30 of FIG. 23. Alternatively, it is possible to input I[15] = 0 to the S[3] syndrome computational circuit 70 and, after a shift operation, multiply the shift register by X^-3. This allows 2-bit input parallel processing to be performed.

{circle around (3)} Error Position Detector

An error position detector is described next. In the error position detector in the present embodiment, the S[1], S[3] syndrome computational circuits 60, 70 perform one shift operation corresponding to the conventional two shift operations. Therefore, the error position detector also performs an arithmetic operation corresponding to the conventional two shift operations. The error position polynomial (10) is then represented by:

σ(Z) = S[1] + σ[1]×Z^2 + σ[2]×Z^4   (38)

FIGS. 5 and 6 show an arrangement of the error position detector configured based on Expression (38). The error position detector 80 comprises a first arithmetic section 80a (FIG. 5) that computes and stores S[1], σ[1] and σ[2], and a second arithmetic section 80b that detects a data error position based on Expression (38) and outputs a detection signal. As shown in FIG. 5, the first arithmetic section 80a comprises a shift register 81, an X^2 arithmetic circuit 82, and an X^4 arithmetic circuit 83. A shift register 81a stores the syndrome S[1] as the initial state, and shift registers 82a, 83a store the operated results, σ[1] = S[1]^2 and σ[2] = S[1]^3 + S[3], as the initial states. The error position detector 80 executes error detection in synchronization with every other bit I[0], I[2], I[4], I[6] among the output data I[0], I[1], I[2], I[3], I[4], I[5], I[6]. It operates the shift registers 81a, 82a, 83a once to multiply the term of σ[1] by Z^2 in the X^2 arithmetic circuit 82, and the term of σ[2] by Z^4 in the X^4 arithmetic circuit 83. If any error is present, the corresponding detection output of the second arithmetic section 80b exhibits "1" at that bit position, as described below. The X^2 arithmetic circuit 82 has the same arrangement as the X^2 arithmetic circuit 43 in FIG.
25: the shift register 43 a corresponds to the shift register 82 a; and the XOR circuits 43 b [1], 43 b [2 ]to the XOR circuits 82 b [1], 82 b [2]. Therefore, detailed arrangement descriptions for those parts are omitted. The X^4 arithmetic circuit 83 multiplies the value expressed by Expression (11) of the shift register 83 a by X^4. Therefore, the shift register 83 a has a value expressed by: From the α minimal polynomial M[1](x), a relation of X^4=X+1 is derived. Accordingly: Based on Expression (40), the X^4 arithmetic section 83 through one shift operation performs: adding the values a[0], a[3 ]of the registers E[0], E[3 ]at the XOR circuit 83 b [1 ]and storing the sum into the register E[0]; adding the values a[0], a[1], a[3 ]of the registers E[0], E[1], E[3 ]at the XOR circuit 83 b [1], 83 b [2 ]and storing the sum into the register E[1]; adding the values a[1], a[2 ]of the registers E[1], E[2 ]at the XOR circuit 83 b [3 ]and storing the sum into the register E[2]; and adding the values a[2], a[3 ]of the registers E[2], E[3 ]at the XOR circuit 83 b [4 ]and storing the sum into the register E[3]. The second arithmetic section 80 b in FIG. 6 includes a first detector 84 to detect error positions in the output data I[0], I[2], I[4], I[6]; a second detector 85 to detect error positions in the output data I[1], I[3], I[5]; an X-arithmetic circuit 86 to multiply the term of σ[1 ]by Z regarding the data I[1], I[3], I[5]; and an X^2-arithmetic circuit 87 to multiply the term of σ[2 ]by Z^2 regarding the data I[1], I[3], I[5]. The output resulted from the operation at the XOR circuit 88 and the NOR gate 89 in each detector 84, 85 exhibits “1” at the error position. This output is employed to invert the corresponding data Ii to detect 2-bit error positions in parallel at the same time by one shift operation. The X arithmetic circuit 86 and the X^2 arithmetic circuit 87 have the same arrangements as the conventional circuits shown in FIGS. 25 and 26 though they are not required to have registers for storing data. (2) Second Embodiment FIG. 7 is a block diagram showing a NAND-type flash memory according to a second embodiment, which mounts an ECC circuit on a chip. The memory comprises eight memory cell areas 101 [0], 101 [1], 101 [2], . . . , 101 [7]. Eight page buffers 102 [0], 102 [1], 102 [2], . . . , 102 [7 ]are provided corresponding to the memory cell areas 101 [0], 101 [1], 101 [2], . . . , 101 [7 ]to temporarily store data to be written in and read out of the memory cell areas 101 [0], 101 [1], 101 [2], . . . , 101 [7]. Between the page buffers 102 [0]-102 [7 ]and I/O terminals 104 [0], 104 [1], . . . , 104 [7], an ECC circuit 103 is provided to generate check bits, ECC, for correcting errors in the write data and to correct errors in the read data using the check bits (ECC). Different from the conventional type, for error detection and correction, the ECC circuit 103 adds 40 check bits commonly to information bits consisting of 528 bits×8 I/O=4224 bits data (M=528, N=8) that can be read out of and written into all memory cell areas 101 [0]-101 [7 ]at a time. Addresses and control signals, input to an I/O terminal 105, are fed to a control signal operation circuit 106 and an address decoder 107, respectively. The control signal operation circuit 106 receives various control signals, ALE, CLE, CE, WE, RE, WP, generates control voltages supplied to various parts, and outputs a signal, READY/BUSY, to an external circuit. 
On receipt of an address from external through the I/O terminal 105, the address decoder 107 temporarily stores it and drives a column decoder 108 and a block selector 109. The column decoder 108 activates one column in each of the page buffers 102 [0]-102 [7]. The block selector 109 applies a voltage to a word line in the memory cell areas 101 [0]-101 [7 ]required for reading, writing and erasing. As shown in FIG. 8, each memory cell area 101 j (where j=0-7) includes electrically rewritable, nonvolatile memory cells MC arrayed in a matrix. In this example, 16 memory cells MC are serially connected in a unit. A drain of the memory cell MC at one end is connected to a bit line BL via a selection gate transistor SG1. A source of the memory cell MC at the other end is connected to a common source line SL via a selection gate transistor SG2. Control gates of the memory cells MC in the row direction are connected to a common word line WL. Gate electrodes of the selection gate transistors SG1, SG2 in the row direction are connected to a common selection gate line SGL1, SGL2. In this embodiment, data of 528 bits, stored in the memory cells arranged at odd or even numbers among 1056 memory cells MC along a control gate line, is treated as a page or a unit to be written or read at a time. In this example, data of 16 pages adjoining in the column direction is treated as a block or a unit to be erased at a time. In addition to 1056(528×2) memory cells MC arranged along a word line WL to store information bits, the memory cell area 101 [7 ]is further provided with memory cells MC to store 80(40×2) check bits for error correction. As shown in FIG. 8, each page buffer 102 j includes 528 data storage circuits 121. Each data storage circuit 121 is connected to two bit lines BLi, BLi+1. Data can be read out from a memory cell MC in the memory cell area 101 j via either bit line BL selected by the address. A state of a memory cell MC in the memory cell area 101 j can be detected via the bit line BL. Writing into a memory cell MC in the memory cell area 101 j can be performed when a write control voltage is applied to the memory cell MC via the bit line BL. Among 528 data storage circuits 121, either one is selected at the column decoder 108 and only the selected data storage circuit 121 is connected to the ECC circuit 103. Therefore, in the whole memory, the data storage circuits 121 of 8 bits (8-I/O) having the same column address are connected to the ECC circuit 103 by the column decoder 108. In a read operation, the memory cells MC of one page surrounded by a dashed line in FIG. 8 are selected, and data of 528×8 bits is stored in all data storage circuits 121 at a time. The column decoder 108 increments the column address by one in synchronization with the read enable (RE) signal input from external. As a result, one in each of the memory cell areas 101 [0]-101 [7], eight data storage circuits 121 in total are selected in turn and 8-bit (8-I/O) data is sequentially output to the ECC circuit 103. In a write operation, 8-bit (8-I/O) data is sequentially input to the ECC circuit 103 from external via the I/O terminal 104 [0]-104 [7], and the 8-bit data is sequentially output from the ECC circuit 103. The column decoder 108 increments the column address by one in synchronization with the write enable (WE) signal input from external. 
As a result, one in each of the memory cell areas 101[0]-101[7], eight data storage circuits 121 in total, are selected in turn, and the 8-bit (8-I/O) data from the ECC circuit 103 is sequentially input to the selected storage circuits 121.

The ECC circuit 103 is explained next. FIG. 9 is a block diagram showing the ECC circuit 103 in detail. The ECC circuit 103 includes an arithmetic logic circuit 131 containing multiple stages of registers, XOR circuits and switches; a Galois arithmetic circuit 132 for use in syndrome computation and so forth; and an error position detector 133 (mainly a second arithmetic section) and a data inverter 134 operative on decoding. The arithmetic logic circuit 131 configures a check bit generator when the ECC circuit 103 serves as a coder, and configures mainly the syndrome arithmetic circuits and a first arithmetic section of the error position detector when the ECC circuit 103 serves as a decoder.

(2-1) Coder

In the ECC circuit 103, data is input by 8 bits (D[0]-D[7]) to perform error detection and correction on a basis of data of 528×8 = 4224 bits. In the case of BCH code capable of correcting 3-bit errors and detecting 4-bit errors, the following condition can be considered: the number of information bits, k = 4224; a code length, n = 8191; the number of correction bits, t = 3; and m = 13. Therefore, the generating polynomial required for coding and decoding is given below:

Fundamental polynomial: F(X) = X^13 + X^4 + X^3 + X + 1
Parity polynomial: M[0](x) = X + 1
α minimal polynomial: M[1](x) = X^13 + X^4 + X^3 + X + 1
α^3 minimal polynomial: M[3](x) = X^13 + X^10 + X^9 + X^7 + X^5 + X^4 + 1
α^5 minimal polynomial: M[5](x) = X^13 + X^11 + X^8 + X^7 + X^4 + X + 1
Generating polynomial: G(x) = M[0]·M[1]·M[3]·M[5] = X^40 + X^39 + X^38 + X^35 + X^34 + X^33 + X^32 + X^28 + X^27 + X^26 + X^25 + X^23 + X^22 + X^20 + X^18 + X^17 + X^16 + X^15 + X^14 + X^10 + X^9 + X^5 + X^4 + X^2 + X + 1   (41)

Because G(x) is the product of one degree-1 polynomial and three degree-13 minimal polynomials, its degree is 1 + 13 + 13 + 13 = 40, which is why 40 check bits are attached to the 4224 information bits.

Similar to the first embodiment, Expression (42) can be altered into Expression (43):

( . . . (((0 + I[0]X^39)X + I[1]X^39)X + I[2]X^39)X + . . . )X + I[527]X^39)X   (42)

(( . . . ((0 + I[0]X^39 + I[1]X^38 + I[2]X^37 + . . . + I[7]X^32)X^8 + (I[8]X^39 + . . . + I[15]X^32))X^8 + . . . + (I[520]X^39 + I[521]X^38 + . . . + I[527]X^32))X^8   (43)

Expression (43) means the following. The data of 8 bits D[0]-D[7] = I[0], I[1], I[2], . . . , I[7], input by one clock of the WE signal, is multiplied on a bit basis by X^39, X^38, X^37, . . . , X^32, respectively, and each product is added into the internal register value, which is then multiplied by X^8. Subsequently, the data of 8 bits D[0]-D[7] = I[8], I[9], I[10], . . . , I[15], input by the next clock of the WE signal, is multiplied on a bit basis by X^39, X^38, X^37, . . . , X^32, respectively, and each product is added into the internal register value, which is then multiplied by X^8. The same operations are repeated 528 times, up to the data of the last 8 bits D[0]-D[7] = I[4216], I[4217], I[4218], . . . , I[4223].

FIG. 10 shows the 40-stage registers REG0, REG1, . . . , REG39 equipped in the arithmetic logic circuit 131. These registers configure a cyclic shift register in the coder. The registers REG0, REG1, . . . , REG39 have Inputs B0, B1, . . . , B39 and Outputs A0, A1, . . . , A39. Based on the above generating polynomial (41) and Expression (43), the arithmetic logic circuit 131 executes the XOR operations represented by the following Expressions (45) and (46) for one data input.
The XOR operations herein employed are represented by Expression (44). Prior to sending the Outputs A32-A39, the registers REG32-REG39 sends Outputs AA32-AA39, which are resulted from XOR operations as shown by Expression (45) to add 8-bit data D[0]-D[7 ]fed from external to register values. Outputs A0-31 and AA32-AA39 are led to XOR circuits. The results from the XOR operations, B0-B39, represented by Expression (46), are led to Inputs of the registers REG0-REG39 and fetched in synchronization with the shift register clock. When this operation is repeated 528 times, 40 check bits I[4224], I[4225], I[4226], . . . , I[4264 ]are generated in the registers REG0-REG39 of the arithmetic logic circuit 131. FIG. 11 is a flowchart showing an operation of coding in the ECC circuit 103 and FIG. 12 is a timing chart on coding in the same. When a data input command (80 h) enters from external (S21), the registers REG0-40 in the arithmetic logic circuit 131 are reset (S22), then an address (Add) is given. Subsequently, a WE (Write Enable) signal enters from external and, in synchronization with this signal, data is loaded by 8 bits into the page buffer 102 j (S23, S24, S25). At the same time, the data is sent to the arithmetic logic circuit 131 to compute check bits. When the column address reaches the last 528 (S25), the data loading is terminated. Subsequently, a program command (10 h) enters from external, and an operation of voltage boosting by a charge pump, not depicted, is started to write data into the memory cell MC. At the same time, prior to writing, check bits are output, using the internal oscillator and so forth, not depicted, from 40 bits REG0-REG39 by 5 bytes sequentially, and stored in the data storage circuit 121 of the page buffer 102 [7]. The data stored in the data storage circuit 121 is then written into the memory cells MC in the page (surrounded by the dashed line in FIG. 8) selected by the external address Add. (2-2) Decoder {circle around (1)} Syndrome Computational Circuits For 3-bit error correction and 4-bit error detection, four syndromes S[0], S[1], S[3], S[5 ]are required as it is known. The syndrome S[0 ]can be derived from the minimal polynomial M[1](X)=X^4+X+1. When X^10=X^3+1, derived from the minimal polynomial M[1](x)=X^10+X^3+1, is referred to as an a operator, the syndrome S[1 ]can be derived from the a operator, the syndrome S[3 ]from an α^3 operator, and the syndrome S[5 ]from an α^5 operator. Only one bit can enter by one clock of the WE signal in the conventional decoder. In contrast, 8-bit data can be fetched by one clock of the WE signal in this embodiment by altering Expressions similar to the first embodiment that alters Expression from (27) to (28), and Expression from (34) to (35). Accordingly, the syndrome S[1 ]can be derived from an α^8-operator, the syndrome S[3 ]from an α^24 operator, and the syndrome S[5 ]from an α^40 operator. FIG. 13 shows 40-stage registers REG0, REG1, REG39 equipped in the arithmetic logic circuit 131. The register REG0 configures a cyclic shift register in the S[0 ]syndrome computational circuit. The registers REG1-13 configure a cyclic shift register in the S[1 ]syndrome computational circuit. The registers REG14-26 configure a cyclic shift register in the S[3 ]syndrome computational circuit. The registers REG27-39 configure a cyclic shift register in the S[5 ]syndrome computational circuit. The register REG0 has an Input PP0 and an Output P0. The registers REG1-13 have Inputs AA0, AA1, . . . , AA12 and Outputs A0, A1, . . . 
, A12. The registers REG14-26 have Inputs BB0, BB1, BB12 and Outputs B0, B1, . . . , B12. The registers REG27-39 have Inputs CC0, CC1, . . . , CC12 and Outputs C0 , C1, . . . , C12. The arithmetic logic circuit 131 executes operations shown in Expressions (47), (48), (49) and (50) based on one data input. The 8-bit data D0-D7 read out of the data storage circuit 121 is added to the Outputs P0, A0-13, B0-13, C0-13 from the registers REG0-REG39 at XOR circuits. The Outputs PP0, AA0-13, BB0-13, CC0-13 from the XOR circuits are led to the inputs of the registers REG0-39 and fetched in synchronization with the shift register clock. The XOR circuits connected to the registers REG1-13 configure an α^8 arithmetic circuit, which receives the data D0-D7 input. The XOR circuits connected to the registers REG14-26 configure an α^24 arithmetic circuit, which receives the data D0-D7 input. The XOR circuits connected to the registers REG27-39 configure an α^40 arithmetic circuit, which receives the data D0-D7 input. In stead of the α^40 arithmetic circuit, because it has a large circuit scale, α^40 may be fed into one of inputs of the Galois arithmetic circuit 132 shown in FIG. 9, and the output thereof and the data D0-D7 are appropriately operated at XOR circuits. <Computation of Syndrome S[0]> <Computation of Syndrome S[1]> <Computation of Syndrome S[3]> <Computation of Syndrome S[5]> {circle around (1)} Error Position Detector (First Arithmetic Section) FIG. 14 is a flowchart showing an operation of decoding in the ECC circuit 103. A data read command (00 h) is input, then a read address (Add) from external to start reading (S31). The data of one page (528 bytes) selected by the address is read out from the memory cells MC into the page buffers 102 [0]-102 [7 ](S32). Thereafter, in synchronization with a signal oscillated from the internal oscillator, the data D0-D7 is input byte by byte to the ECC circuit 103 to compute the syndrome (S33). As shown in FIG. 27, after computations of the syndromes S[0], S[1], S[3], S[5], if S[1]=S[3]=S[5]=0 (S34) and if S[0]=0 (S35), it is determined errorless (Normal output: S36). If S[0]≠0 (S35), it is determined uncorrectable (S37). Unless S[1]=S[3]=S[5]=0 (S34), computations are made for σ[2]=S[1] ^2S[3]+S[5 ]and σ[0]=S[1] ^3+S[3 ](S38). If σ[0]=0 (S39) and if σ[2]=0 and S[0]= 0 (S40), it is determined 1-bit error, and the control goes to an algorithm for 1-bit error correction (S41). Unless σ[2]=0 and S[0]=0 (S40), it is determined uncorrectable (S42). If σ[0]≠0 (S39), computations are made for σ[1]=S[1](S[1] ^3+S[3]) and σ[3]=(S[1] ^3+S[3])^2+S[1](S[1] ^2S[3]+S[5]) (S43). If σ[3]=0 (S44) and if σ[2]≠0 and S[0]=0 (S45), it is determined 2-bit errors, and the control goes to an algorithm for 2-bit error correction (S46). Unless σ[2]≠0 and S[0]=0 (S45), it is determined uncorrectable (S47). If σ[3]≠0 (S44) and if S[0]=1 (S48), it is determined 3-bit errors, and the control goes to an algorithm for 3-bit error correction (S49). The algorithm for 2-bit error correction is same as that for 3-bit error correction. If S[0]≠1 (S48), it is determined uncorrectable (S50). FIG. 15 shows an error position detector that executes the above computations. This error position detector includes a first arithmetic section, consisting of four registers R, A, B, C of 13 bits each, and not-depicted XOR circuits, contained in the arithmetic logic circuit 131. 
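Stated in software terms, the decision flow of FIG. 14 (steps S34-S50) amounts to the following C sketch. The helper gf_mul() is only a generic stand-in for the Galois arithmetic circuit 132, using the reduction X^13 = X^4+X^3+X+1 from Expression (41); all names here are illustrative rather than taken from the patent.

    #include <stdint.h>

    /* Generic GF(2^13) product (software stand-in for circuit 132). */
    static uint16_t gf_mul(uint16_t a, uint16_t b)
    {
        uint32_t r = 0;
        for (int i = 0; i < 13; i++)                /* shift-and-add product        */
            if (b & (1u << i))
                r ^= (uint32_t)a << i;
        for (int i = 24; i >= 13; i--)              /* reduce mod X^13+X^4+X^3+X+1  */
            if (r & (1u << i))
                r ^= (uint32_t)0x201Bu << (i - 13);
        return (uint16_t)(r & 0x1FFF);
    }

    enum { ERRORLESS = 0, ONE_BIT = 1, TWO_BIT = 2, THREE_BIT = 3, UNCORRECTABLE = -1 };

    /* s0 is the parity syndrome from M0(X)=X+1; s1, s3, s5 are S[1], S[3], S[5]. */
    static int classify(unsigned s0, uint16_t s1, uint16_t s3, uint16_t s5)
    {
        if (s1 == 0 && s3 == 0 && s5 == 0)                          /* S34      */
            return (s0 == 0) ? ERRORLESS : UNCORRECTABLE;           /* S35-S37  */

        uint16_t s1sq   = gf_mul(s1, s1);
        uint16_t sigma2 = gf_mul(s1sq, s3) ^ s5;                    /* S38      */
        uint16_t sigma0 = gf_mul(s1sq, s1) ^ s3;
        if (sigma0 == 0)                                            /* S39      */
            return (sigma2 == 0 && s0 == 0) ? ONE_BIT : UNCORRECTABLE;   /* S40-S42 */

        uint16_t sigma3 = gf_mul(sigma0, sigma0) ^ gf_mul(s1, sigma2);   /* S43 */
        if (sigma3 == 0)                                            /* S44      */
            return (sigma2 != 0 && s0 == 0) ? TWO_BIT : UNCORRECTABLE;   /* S45-S47 */
        return (s0 == 1) ? THREE_BIT : UNCORRECTABLE;               /* S48-S50  */
    }

The returned value tells the data inverter how many error positions the locators must subsequently find.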
The error position detector also includes a Galois arithmetic circuit 132, and a second arithmetic section 133 consisting of eight locators 141 and arithmetic circuits 142 interposed between the locators 141 to operate ×α, ×α^2, ×α^3. 13-bit buses BUSR, BUSA, BUSB, BUSC are provided to connect them. The output from the Galois arithmetic circuit 132 is connected to the register R. FIG. 16 shows an algorithm to compute the terms of the error position polynomial, σ[0], σ[1], σ[2], σ[3]. The registers A, B, C store the syndromes S[1], S[3], S[5], respectively. If these syndromes are all zero, it is determined errorless and no operation is executed (S61). If not, an operation is made for σ[2]=S[1]^2·S[3]+S[5] and the operated result is sequentially stored in the register R. The operated result finally obtained is transferred from the register R to the register C (S62). Next, an operation is made for σ[0]=S[1]^3+S[3] and the operated result is sequentially stored in the register R. The operated result finally obtained is transferred from the register R to the register B (S63). If the operated results stored in the registers B, C are both zero, then it is determined 1-bit error (S64) and "1" is stored in the register R (S65). If not, computations are made for σ[1]=S[1](S[1]^3+S[3]) and σ[3]=(S[1]^3+S[3])^2+S[1](S[1]^2·S[3]+S[5]) (S66, S67, S68). In the present embodiment, of the code length of n=8191, the information bits of k=4224 (528×8 bits) are subjected to the error correction, while the information bits can have 8151 bits, excluding the 40 check bits, in a code having the code length of n=8191. As a result, the error position is shifted by 8151−4224+1=3928 bits. On reading from a column address of 0, computations are performed to multiply σ[1] by α^3928, σ[2] by α^7856 (=3928×2), and σ[3] by α^3593 (=3928×3−8191) (S69, S70, S71). Similarly, on reading from a column address of i, computations are performed to multiply σ[1] by α^(3928+i), σ[2] by α^((3928+i)×2), and σ[3] by α^((3928+i)×3−8191). Factors such as α^(3928+i) are written into a ROM, for example. The factor is stored in the vicinity of the column data storage or in the memory cell area 101, selected by the column selector 108 of FIG. 7, because it depends on the column address i. Alternatively, only the factor at the column address of 0 is stored and, when another address is accessed, a dummy operation of detecting an error position is performed to provide a matched factor.

FIG. 17 is a block diagram showing the Galois arithmetic circuit 132 in detail. The 13-bit inputs A and B shown in FIG. 17A are respectively represented by:

A = a[0]X^0 + a[1]X^1 + a[2]X^2 + . . . + a[12]X^12
B = b[0]X^0 + b[1]X^1 + b[2]X^2 + . . . + b[12]X^12    (51)

In this case, A×B can be represented by:

A×B = A(b[0]X^0 + b[1]X^1 + b[2]X^2 + . . . + b[12]X^12)
    = Ab[0] + X(Ab[1] + X(Ab[2] + X(Ab[3] + . . . + X(Ab[12]) . . . )))    (52)

This circuit can be configured as shown in FIG. 17B, in which A and b[i] are subjected to the AND operation at an AND circuit 151. The operated result is then multiplied by X at an X multiplier 152, and the product is subjected at an XOR circuit 153 to the XOR operation with the AND-operated result from the next A and b[i+1]. From the α minimal polynomial M1(x) in Expression (41), a relation of X^13=X^4+X^3+X+1 is present. Therefore, as shown in FIG.
17C, the X multiplier 152 operates shifting the term of X^12 into the term of X^0; adding it into the terms of X^3, X^1, X^0 by the XOR circuit 154; and storing it in the terms of X^4, X^3, X^1. As a result of the above operations, 13-bit registers A, B, C, D are given σ[1], σ[3], σ[2], σ[0 ]as initial values, respectively. {circle around (2)} Error Position Detector (Second Arithmetic Section) Error bit positions can be detected based on the following error position polynomial (53) in the cases of 3-bit correction and 4-bit correction as it is known. σ(Z)=S [1]+σ[1] ×Z+σ [2] ×Z ^2+σ[3] ×Z ^3(53) When Z=α^I (I=0, 1, 2, 3, . . . ) is assigned in turn to Expression (53), the position of the error can be indicated by i that holds σ(α^I)=0. In the present embodiment, as 8-bit data is output per WE clock, Expression (53) is altered to Expression (54), like Expression (10) is altered to Expression (38) in the first embodiment. σ(Z)=σ[0]+σ[1] ×Z ^8+σ[2] ×Z ^16+σ[3] ×Z ^24(54) As a result, the error detection can be performed by 8 bits simultaneously at every other 8 bits. In a word, of the output data of 8 I/O, the error detection is performed to the I/O 0. If an error is present, then σ=0. As a result of the computations in FIG. 16, the 13-bit registers A, B, C, D are given σ[1], σ[3], σ[2], σ[0 ]as initial values, respectively. The XOR circuits connected to the register A in the arithmetic logic circuit 131 configure an α^8 arithmetic circuit. The XOR circuits connected to the register B configure an α^24 arithmetic circuit. The XOR circuits connected to the register C configure an α^16 arithmetic circuit. The register A has Inputs AA0, AA1, . . . , AA12 and Outputs A0, A1, . . . , A12. The register B has Inputs BB0, BB1, . . . , BB12 and Outputs B0, B1, B12. The register C has Inputs CC0, CC1, . . . , CC12 and Outputs C0, C1, . . . , C12. In this case, the α^8, α^16, α^24 arithmetic circuits perform operations respectively represented by Expressions (55), (56) and (57): <α^16 Arithmetic Circuit> <α^24 Arithmetic Circuit> FIG. 18 is a circuit diagram showing a specific arrangement of the locator 141. The locator 141 includes XOR circuits 161 and NOR circuits 162 to compute σ(Z) and outputs “H” if an error is present (σ=0) at the I/O 0 (j=1-7). As a result, the data inverter 134 of FIG. 9 inverts the data from the data storage circuit 121 in the page buffer 102 [0 ]and outputs the inverted data. Alternatively, as indicated by a dashed arrow 135 in FIG. 9, error correction can be directly performed to the data at the error position in the page buffer 102. On the other hand, the data at the I/O 1 has values in σ(Z) with the term of σ[1 ]multiplied by Z, the term of σ[2 ]multiplied by Z^2, and the term of σ[3 ]multiplied by Z^3. Accordingly, as shown in FIG. 15, an arithmetic circuit 142 [1 ]is mounted to operate the term of σ[1]×X, the term of σ[2]×X^2, and the term of σ[2]×X^3, and supplies the output to the locator 141 [1 ]to solve the error position polynomial. If an error is detected (σ=0), the output comes to “H”. When these X, X^2, X^3 arithmetic circuits are assumed to have Inputs X0-X12 and Outputs Y0-Y12, the arithmetic circuits execute the following operations. The arithmetic circuits are not required to have registers to store data. <X Arithmetic Circuit> <X^2 Arithmetic Circuit> <X^3 Arithmetic Circuit> The data at the I/O 2 has values in σ(Z) with the term of σ[1 ]multiplied by Z^2, the term of σ[2 ]multiplied by Z^4, and the term of σ[3 ]multiplied by Z^6. 
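The serial structure of FIG. 17B/17C, an AND gate, an X multiplier with the reduction X^13 = X^4+X^3+X+1, and an XOR, corresponds to a Horner evaluation of Expression (52); the same multiply-by-X step also underlies the X, X^2, X^3 arithmetic circuits just described. A software model follows (illustration only; the function names are not from the patent).

    #include <stdint.h>

    #define GF_MASK   0x1FFFu             /* 13 result bits               */
    #define GF_REDUCE 0x001Bu             /* X^4 + X^3 + X + 1 = 0b11011  */

    /* The X multiplier 152: shift left and fold the X^13 term back in. */
    static uint16_t gf_mul_x(uint16_t a)
    {
        uint16_t r = (uint16_t)(a << 1);
        if (r & 0x2000u)                  /* X^13 term produced?          */
            r = (uint16_t)((r ^ 0x2000u) ^ GF_REDUCE);
        return (uint16_t)(r & GF_MASK);
    }

    /* Horner evaluation of Expression (52):
     * A*B = Ab0 + X(Ab1 + X(Ab2 + ... + X(Ab12)...)).                    */
    static uint16_t gf_mul_serial(uint16_t a, uint16_t b)
    {
        uint16_t acc = 0;
        for (int i = 12; i >= 0; i--) {   /* from b[12] down to b[0]      */
            acc = gf_mul_x(acc);          /* multiply running sum by X    */
            if (b & (1u << i))            /* AND circuit 151: A AND b[i]  */
                acc ^= a;                 /* XOR circuit 153              */
        }
        return (uint16_t)(acc & GF_MASK);
    }

In hardware the thirteen AND/XOR/X-multiplier stages run in series, exactly as the loop above does one pass per coefficient of B.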
If arithmetic circuits are mounted to operate the term of σ[1]×X^2, the term of σ[2]×X^4, and the term of σ[2]×X^6 on the basis of I/O 0, the arithmetic circuit for a large multiplication such as X^6 increases the circuit scale. Therefore, in this embodiment, an arithmetic circuit 141 [1 ]is provided to multiply the output from the arithmetic circuit 141 [2 ]by ×X, ×X^2, ×X^3 again. Similarly, arithmetic circuits are provided up to 141 [7 ]corresponding to the I/O 7. If there is a problem on a signal transmission time delay, the eight locators 141 configuring the error position detector (second arithmetic section) 133 may be divided in two groups of four locators, as shown in FIG. 19, which are arranged on both sides of the arithmetic logic circuit 131. This arrangement is effective to halve the signal transmission path to the locator 141. FIG. 20 is a timing chart on decoding in the ECC circuit 103. FIG. 20A shows data reading and error correcting after computations of all terms in the error position polynomial. When a data read command (00 h) is input from external, followed by a read address (Add), a READY/BUSY signal is activated to start reading. First, the data of one page (528 bytes) selected by the address is read out from the memory cells MC into the page buffers 102 [0]-102 [7]. Then, in synchronization with a signal oscillated from the internal oscillator, the data D0-D7 is input byte by byte to the ECC circuit 103 to compute the syndromes and operate the terms of the error position polynomial using the computed syndromes S[0], S[1], S[3], S[5]. Thereafter, the data is read out in synchronization with the write enable (RE) signal and the error correction is executed at the same time. In this case, compared to the absence of the ECC circuit 103, an additional busy time is derived from a computation time for syndromes plus a computation time for error correction operators in total. For example, if one syndrome computation requires 50 ns and an arithmetic time for an operator is equal to 3.6 μs, then 528×50 ns+3.6 μs=30 μs. FIG. 20B shows an example of computing the syndromes S[0], S[1], S[3], S[5 ]at the same time of data reading. After the reading is started similarly, the data of one page (528 bytes) is read out from the memory cells MC into the page buffers 102 [0]-102 [7]. Then, the data is output from the page buffers 102 [0]-102 [7 ]byte by byte in synchronization with the RE signal and the ECC circuit 103 computes the syndromes. As a result of the syndrome computation, if an error is detected, a status fail command (70 h) is activated. Accordingly, an operator for error correction is computed and the data is output again to correct the error. In this case, if no error is present, an additional busy time in total is equal to zero. As for 2-bit error correction and 3-bit error detection, the number of permissible random failures (the number of random failures at a device failure probability of 1 ppm) is naturally better in the case of 528 information bits than in the case of 4224 information bits. Table 1 shows an application to a 256 Mb NAND-type flash memory. From Table 1, the number of permissible random failures is 100 bits at 2-bit correction BCH code for 528 information bits, and only 30 bits for 4224 information bits. To the contrary, at 3-bit correction BCH code for 4224 information bits, the random failures can be permitted up to 300 bits with a necessary code as short as 40 bits. 
Further, at 4-bit correction BCH code for 4224 information bits, the random failures can be permitted up to 1000 bits with a necessary code as short as 53 bits effectively.

TABLE 1: Number of random failures in 256 Mb at a device failure probability of 1 ppm

                                                     Code length per page (528 B)   Number of failures
2-bit correction BCH code (528 information bits)     21 × 8 = 168 bits               100 bits
2-bit correction BCH code (4224 information bits)    27 bits                          30 bits
3-bit correction BCH code (4224 information bits)    40 bits                         300 bits
4-bit correction BCH code (4224 information bits)    53 bits                        1000 bits

Table 2 shows chip sizes of NAND-type flash memories of 128 M-bits and 512 M-bits when no ECC circuit is mounted, compared with those when the conventional 2-bit correction ECC circuit is mounted, and those when the 2-bit correction ECC circuit of the present embodiment is mounted.

TABLE 2: Chip size with and without an on-chip ECC circuit

                                             128M (0.16 μm)          512M (0.16 μm)
No ECC circuit                               41.88 mm^2 (100.0%)     136.99 mm^2 (100.0%)
Conventional ECC circuit mounted             44.72 mm^2 (106.8%)     143.96 mm^2 (105.1%)
ECC circuit of this embodiment mounted       43.21 mm^2 (103.2%)     140.42 mm^2 (102.5%)

Thus, the flash memory with the conventional ECC circuit mounted thereon has an increase in chip size of 6.8% (128M) and 5.1% (512M). To the contrary, the flash memory with the ECC circuit of the present embodiment mounted thereon has an increase in chip size of 3.2% (128M) and 2.5% (512M), which is half the conventional one. As is obvious from the foregoing, in the art a set of check bits is generated per M bits, M being the unit for accessing each memory area. To the contrary, according to the embodiments of the invention, N bits can be processed in parallel. Therefore, it is possible to allocate a set of check bits to M×N bits and reduce the number of check bits in total relative to the number of information bits. This is effective to improve the chip integration density while mounting an on-chip error correction circuit. Having described the embodiments consistent with the invention, other embodiments and variations consistent with the invention will be apparent to those skilled in the art. Therefore, the invention should not be viewed as limited to the disclosed embodiments but rather should be viewed as limited only by the spirit and scope of the appended claims.
Clever Algorithms: Nature-Inspired Programming Recipes Particle Swarm Optimization, PSO. Particle Swarm Optimization belongs to the field of Swarm Intelligence and Collective Intelligence and is a sub-field of Computational Intelligence. Particle Swarm Optimization is related to other Swarm Intelligence algorithms such as Ant Colony Optimization and it is a baseline algorithm for many variations, too numerous to list. Particle Swarm Optimization is inspired by the social foraging behavior of some animals such as flocking behavior of birds and the schooling behavior of fish. Particles in the swarm fly through an environment following the fitter members of the swarm and generally biasing their movement toward historically good areas of their environment. The goal of the algorithm is to have all the particles locate the optima in a multi-dimensional hyper-volume. This is achieved by assigning initially random positions to all particles in the space and small initial random velocities. The algorithm is executed like a simulation, advancing the position of each particle in turn based on its velocity, the best known global position in the problem space and the best position known to a particle. The objective function is sampled after each position update. Over time, through a combination of exploration and exploitation of known good positions in the search space, the particles cluster or converge together around an optima, or several optima. The Particle Swarm Optimization algorithm is comprised of a collection of particles that move around the search space influenced by their own best past location and the best past location of the whole swarm or a close neighbor. Each iteration a particle's velocity is updated using: $v_{i}(t+1) = v_{i}(t) + \big( c_1 \times rand() \times (p_{i}^{best} - p_{i}(t)) \big) + \big( c_2 \times rand() \times (p_{gbest} - p_{i}(t)) \big)$ where $v_{i}(t+1)$ is the new velocity for the $i^{th}$ particle, $c_1$ and $c_2$ are the weighting coefficients for the personal best and global best positions respectively, $p_{i}(t)$ is the $i^ {th}$ particle's position at time $t$, $p_{i}^{best}$ is the $i^{th}$ particle's best known position, and $p_{gbest}$ is the best position known to the swarm. The $rand()$ function generate a uniformly random variable $\in [0,1]$. Variants on this update equation consider best positions within a particles local neighborhood at time $t$. A particle's position is updated using: $p_{i}(t+1) = p_{i}(t) + v_{i}(t)$ Algorithm (below) provides a pseudocode listing of the Particle Swarm Optimization algorithm for minimizing a cost function. 
Input: ProblemSize, $Population_{size}$
Output: $P_{g\_best}$
Population $\leftarrow$ $\emptyset$
$P_{g\_best}$ $\leftarrow$ $\emptyset$
For ($i=1$ To $Population_{size}$)
    $P_{velocity}$ $\leftarrow$ RandomVelocity()
    $P_{position}$ $\leftarrow$ RandomPosition($Population_{size}$)
    $P_{p\_best}$ $\leftarrow$ $P_{position}$
    If (Cost($P_{p\_best}$) $\leq$ Cost($P_{g\_best}$))
        $P_{g\_best}$ $\leftarrow$ $P_{p\_best}$
While ($\neg$StopCondition())
    For ($P$ $\in$ Population)
        $P_{velocity}$ $\leftarrow$ UpdateVelocity($P_{velocity}$, $P_{g\_best}$, $P_{p\_best}$)
        $P_{position}$ $\leftarrow$ UpdatePosition($P_{position}$, $P_{velocity}$)
        If (Cost($P_{position}$) $\leq$ Cost($P_{p\_best}$))
            $P_{p\_best}$ $\leftarrow$ $P_{position}$
            If (Cost($P_{p\_best}$) $\leq$ Cost($P_{g\_best}$))
                $P_{g\_best}$ $\leftarrow$ $P_{p\_best}$
Return ($P_{g\_best}$)

• The number of particles should be low, around 20-40.
• The speed a particle can move (maximum change in its position per iteration) should be bounded, such as to a percentage of the size of the domain.
• The learning factors (biases towards global and personal best positions) should be between 0 and 4, typically 2.
• A local bias (local neighborhood) factor can be introduced where neighbors are determined based on Euclidean distance between particle positions.
• Particles may leave the boundary of the problem space and may be penalized, be reflected back into the domain or biased to return back toward a position in the problem domain. Alternatively, a wrapping strategy may be used at the edge of the domain creating a loop, toroid or related geometrical structures at the chosen dimensionality.
• An inertia or momentum coefficient can be introduced to limit the change in velocity.

Listing (below) provides an example of the Particle Swarm Optimization algorithm implemented in the Ruby Programming Language. The demonstration problem is an instance of a continuous function optimization that seeks $\min f(x)$ where $f=\sum_{i=1}^n x_{i}^2$, $-5.0\leq x_i \leq 5.0$ and $n=3$. The optimal solution for this basin function is $(v_0,\ldots,v_{n-1})=0.0$. The algorithm is a conservative version of Particle Swarm Optimization based on the seminal papers. The implementation limits the velocity at a pre-defined maximum, and bounds particles to the search space, reflecting their movement and velocity if the bounds of the space are exceeded. Particles are influenced by the best position found as well as their own personal best position. Natural extensions may consider limiting velocity with an inertia coefficient and including a neighborhood function for the particles.

def objective_function(vector)
  return vector.inject(0.0) {|sum, x| sum + (x ** 2.0)}
end

def random_vector(minmax)
  return Array.new(minmax.size) do |i|
    minmax[i][0] + ((minmax[i][1] - minmax[i][0]) * rand())
  end
end

def create_particle(search_space, vel_space)
  particle = {}
  particle[:position] = random_vector(search_space)
  particle[:cost] = objective_function(particle[:position])
  particle[:b_position] = Array.new(particle[:position])
  particle[:b_cost] = particle[:cost]
  particle[:velocity] = random_vector(vel_space)
  return particle
end

def get_global_best(population, current_best=nil)
  population.sort!{|x,y| x[:cost] <=> y[:cost]}
  best = population.first
  if current_best.nil? or best[:cost] <= current_best[:cost]
    current_best = {}
    current_best[:position] = Array.new(best[:position])
    current_best[:cost] = best[:cost]
  end
  return current_best
end

def update_velocity(particle, gbest, max_v, c1, c2)
  particle[:velocity].each_with_index do |v,i|
    v1 = c1 * rand() * (particle[:b_position][i] - particle[:position][i])
    v2 = c2 * rand() * (gbest[:position][i] - particle[:position][i])
    particle[:velocity][i] = v + v1 + v2
    particle[:velocity][i] = max_v if particle[:velocity][i] > max_v
    particle[:velocity][i] = -max_v if particle[:velocity][i] < -max_v
  end
end

def update_position(part, bounds)
  part[:position].each_with_index do |v,i|
    part[:position][i] = v + part[:velocity][i]
    if part[:position][i] > bounds[i][1]
      # reflect back into the domain and reverse the velocity component
      part[:position][i] = bounds[i][1] - (part[:position][i] - bounds[i][1]).abs
      part[:velocity][i] *= -1.0
    elsif part[:position][i] < bounds[i][0]
      part[:position][i] = bounds[i][0] + (part[:position][i] - bounds[i][0]).abs
      part[:velocity][i] *= -1.0
    end
  end
end

def update_best_position(particle)
  return if particle[:cost] > particle[:b_cost]
  particle[:b_cost] = particle[:cost]
  particle[:b_position] = Array.new(particle[:position])
end

def search(max_gens, search_space, vel_space, pop_size, max_vel, c1, c2)
  pop = Array.new(pop_size) {create_particle(search_space, vel_space)}
  gbest = get_global_best(pop)
  max_gens.times do |gen|
    pop.each do |particle|
      update_velocity(particle, gbest, max_vel, c1, c2)
      update_position(particle, search_space)
      particle[:cost] = objective_function(particle[:position])
      update_best_position(particle)
    end
    gbest = get_global_best(pop, gbest)
    puts " > gen #{gen+1}, fitness=#{gbest[:cost]}"
  end
  return gbest
end

if __FILE__ == $0
  # problem configuration
  problem_size = 2
  search_space = Array.new(problem_size) {|i| [-5, 5]}
  # algorithm configuration
  vel_space = Array.new(problem_size) {|i| [-1, 1]}
  max_gens = 100
  pop_size = 50
  max_vel = 100.0
  c1, c2 = 2.0, 2.0
  # execute the algorithm
  best = search(max_gens, search_space, vel_space, pop_size, max_vel, c1, c2)
  puts "done! Solution: f=#{best[:cost]}, s=#{best[:position].inspect}"
end

Particle Swarm Optimization was described as a stochastic global optimization method for continuous functions in 1995 by Eberhart and Kennedy [Eberhart1995] [Kennedy1995]. This work was motivated as an optimization method loosely based on the flocking behavioral models of Reynolds [Reynolds1987]. Early works included the introduction of inertia [Shi1998] and early study of social topologies in the swarm by Kennedy [Kennedy1999]. Poli, Kennedy, and Blackwell provide a modern overview of the field of PSO with detailed coverage of extensions to the baseline technique [Poli2007]. Poli provides a meta-analysis of PSO publications that focus on the application of the technique, providing a systematic breakdown on application areas [Poli2008a]. An excellent book on Swarm Intelligence in general with detailed coverage of Particle Swarm Optimization is "Swarm Intelligence" by Kennedy, Eberhart, and Shi [Kennedy2001].

[Eberhart1995] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory", in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, 1995.
[Kennedy1995] J. Kennedy and R. C. Eberhart, "Particle swarm optimization", in Proceedings of the IEEE International Conference on Neural Networks, 1995.
[Kennedy1999] J. Kennedy, "Small Worlds and Mega-Minds: Effects of Neighborhood Topology on Particle Swarm Performance", in Proceedings of the 1999 Congress on Evolutionary Computation, 1999.
[Kennedy2001] J. Kennedy, R. C. Eberhart and Y. Shi, "Swarm Intelligence", Morgan Kaufmann, 2001.
[Poli2007] R. Poli, J. Kennedy and T. Blackwell, "Particle swarm optimization: An overview", Swarm Intelligence, 2007.
[Poli2008a] R. Poli, "Analysis of the publications on the applications of particle swarm optimisation", Journal of Artificial Evolution and Applications, 2008.
[Reynolds1987] C. W. Reynolds, "Flocks, herds and schools: A distributed behavioral model", in Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, 1987.
[Shi1998] Y. Shi and R. C. Eberhart, "A Modified Particle Swarm Optimizer", in Proceedings of the IEEE International Conference on Evolutionary Computation, 1998.
Enumeration of magic squares B3 = 1: the Lo Shu (as it is known) is unique. B4 was found by the Frenchman Bernard Frénicle de Bessy in 1693. First analytical proof by Kathleen Ollerenshaw and Herman Bondi (1982). A4, C5 and E5 could be found on the former website of Mutsumi Suzuki. B5 was calculated in 1973 by Richard Schroeppel (computer program), published in Scientific American in January 1976 in. A5 was calculated by myself in March 2000 using a common PC. Suzuki published the result on his website. I was able to confirm the result by using other methods. D5 is equal to the number of regular panmagic squares. They can be generated using Latin squares, as Leonhard Euler pointed out in the 18th century. D6 and D10 were proved by A.H. Frost (1878) and more elegantly by C. Planck (1919). C6 and C10 are also equal to 0, because each associative (symmetrical) magic square of even order can be transformed into a pandiagonal magic square. B6 was first estimated (with proper result) by Karl Pinn and Christian Wieczerkowski and published in May 1998. They used a method called 'parallel tempering Monte Carlo' and got 1.7745(16)·10^19 They also estimated B7 with lower accuracy: (3.760 ±0.052)·10^34 My own estimates (see table) match these results. I used another method that could be called 'Monte Carlo backtracking'. All estimates in the columns B, C and D are found with the same method, that is more like the approach of Schroeppel than the one of Pinn and Wieczerkowski. There should be no systematic error, because the method was checked by Prof. Peter Loly (University of Manitoba, Canada) and all results could be confirmed by different programs. For higher orders see: Numbers of classic magic squares E7 was calculated by myself in May 2001. Special transformations made it possible to consider only two positions of the integers 1, 25 and 49. With advanced equations and a heuristic backtracking algorithm the calculation time could be reduced to a few days. All ultramagic squares of order 7 have been saved and are available for further research. For more details see: Ultramagic Squares of Order 7 D7 (November 2001) was a big surprise. There are 38,102,400 regular pandiagonal magic squares of order 7. Albert L. Candy found 640,120,320 irregular ones. From E7 can be created nearly 1000 million more. But who would have believed that there are more than 10^17 such squares? This estimate is very difficult, because the probability is only 1 : 3·10^17 that a normal order-7 square is D8 is greater than C8 because each associative magic square of order 8 can be transformed into a pandiagonal one and there are examples of additional pandiagonal squares that could not be derived from an associative square. E8 and E9 were estimated in March 2002. In the case of E8 I could find 64 transformations and several equations with only 4 variables. D9 is greater than 81·E9, because each ultramagic square of order 9 can be transformed by cyclic permutation of rows and columns into 80 other pandiagonal magic squares that are not associative.
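For the smallest case the count is easy to verify by brute force. The following C sketch (an illustration only, unrelated to the estimation methods mentioned above) enumerates all 9! arrangements of 1..9 and finds 8 magic squares, i.e. the single essentially different Lo Shu square counted together with its rotations and reflections.

    #include <stdio.h>

    /* All rows, columns and both diagonals must sum to 15. */
    static int is_magic(const int m[9])
    {
        const int s = 15;
        return m[0]+m[1]+m[2]==s && m[3]+m[4]+m[5]==s && m[6]+m[7]+m[8]==s &&
               m[0]+m[3]+m[6]==s && m[1]+m[4]+m[7]==s && m[2]+m[5]+m[8]==s &&
               m[0]+m[4]+m[8]==s && m[2]+m[4]+m[6]==s;
    }

    static int count;

    /* Recursively place the digits 1..9 (bitmask 'used' tracks placed values). */
    static void permute(int m[9], int used, int pos)
    {
        if (pos == 9) { count += is_magic(m); return; }
        for (int v = 1; v <= 9; v++) {
            if (used & (1 << v)) continue;
            m[pos] = v;
            permute(m, used | (1 << v), pos + 1);
        }
    }

    int main(void)
    {
        int m[9];
        permute(m, 0, 0);
        printf("order-3 magic squares incl. rotations/reflections: %d\n", count);
        return 0;   /* prints 8 */
    }

For higher orders the search space explodes, which is why the counts above rely on clever backtracking and Monte Carlo estimation rather than exhaustive enumeration.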
2.1 Isolated systems The conformal structure of space-times has found a wide range of interesting applications in general relativity with various motivations. Of particular importance for us is the emergence of conformal geometric ideas in connection with isolated systems (see also the related discussions in [66 , 56 ]). As an illustration, consider a gravitating system (e.g. a binary system or a star) somewhere in our universe, evolving according to its own gravitational interaction, and possibly reacting to gravitational radiation impinging on it from the outside. Thereby it will also emit gravitational radiation. We are interested in detecting and evaluating these waves because they provide us with important information about the physics governing the system. For several obvious reasons, it is desirable not only to have a description of such situations within the theoretical framework of the theory but, furthermore, to have the ability to simulate them numerically. Two problems arise: First, we need to idealize the physical situation in an appropriate way, since it is hopeless to try to analyze the behaviour of the system in its interaction with the rest of the universe. We are mainly interested in the behaviour of the system and not so much in other processes taking place at large distances from the system. Since we would like to ignore those regions, we need a way to isolate the system from their influence. We might want to do this by cutting away the uninteresting parts of the universe along a time-like cylinder T enclosing the system. Thereby, we effectively replace the outer part by data on T . The evolution of our system is determined by those data and initial data on some space-like hypersurface S . But now we are faced with the problem of interpreting the data. It is well known that initial data are obtained from some free data by solving elliptic equations. This is a global procedure. It is very difficult to give a physical meaning to initial data obtained in this way, and it is even more difficult, if not impossible, to specify a system, i.e. to determine initial data, exclusively from (local) physical properties of the constituents of the system like energy-momentum, spin, material properties, etc. In a similar spirit, the data on the time-like boundary T are complicated and only to a rather limited extent do they lend themselves to physical interpretation. For instance, it is not known how to extract from those data any piece which would unambiguously correspond to the radiation emitted by the system. Another problem is related to the arbitrariness in performing the cut. How can we be sure that we capture essentially the same behaviour independently of how we define T ? Thus, we are led to consider a different kind of ``isolation procedure''. We imagine the system as being ``alone in the universe'' in the sense that we assume it being embedded in a space-time manifold which is asymptotically flat. How to formulate this is a priori rather vague. Somehow we want to express the fact that the space-time ``looks like'' Minkowski space-time ``at large distances'' from the source. Certainly, fall-off conditions for the curvature have to be imposed as one recedes from the source and these conditions should be compatible with the Einstein equations. This means that there should exist solutions of the Einstein equations which exhibit these fall-off properties. 
We would then, on some initial space-like hypersurface S, prescribe initial data which should, on the one hand, satisfy the asymptotic conditions. On the other hand, the initial data should approximate in an appropriate sense the initial conditions which give rise to the real behaviour of the system. Our hope is that the evolution of these data provides a reasonable approximation of the real behaviour. As before, the asymptotic conditions which in a sense replace the influence of the rest of the universe on the system should not depend on the particular system under consideration. They should provide some universal structure against which we can gauge the information gained. Otherwise, we would not be able to compare different systems. Furthermore, we would hope that the conditions are such that there is a well defined way to allow for radiation to be easily extracted. It turns out that all these desiderata are in fact realized in the final formulation. These considerations lead us to focus on space-times which are asymptotically flat in the appropriate sense. However, how should this notion be defined? How can we locate ``infinity''? How can we express conditions ``at infinity''? This brings us to the second problem mentioned above. Even if we choose the idealization of our system as an asymptotically flat space-time manifold, we are still facing the task of adequately simulating the situation numerically. This is a formidable task, even when we ignore complications arising from difficult matter equations. The simulation of gravitational waves in an otherwise empty space-time coming in from infinity, interacting with themselves, and going out to infinity is a challenging problem. The reason is obvious: Asymptotically flat space-times necessarily have infinite extent while computing resources are finite. The conventional way to overcome this apparent contradiction is the introduction of an artificial boundary ``far away from the interesting regions''. During the simulation this boundary evolves in time thus defining a time-like hypersurface in space-time. There one imposes conditions which, it is hoped, approximate the asymptotic conditions. However, introducing the artificial boundary is nothing but the reintroduction of the time-like cylinder T on the numerical level with all its shortcomings. Instead of having a ``clean'' system which is asymptotically flat and allows well defined asymptotic quantities to be precisely determined, one is now dealing again with data on a time-like boundary whose meaning is unclear. Even if the numerical initial data have been arranged so that the asymptotic conditions are well approximated initially by the boundary conditions on T, there is no guarantee that this will remain so when the system is evolved. Furthermore, the numerical treatment of an initial-boundary value problem is much more complicated than an initial value problem because of instabilities which can easily be generated at the boundary. What is needed, therefore, is a definition of asymptotically flat space-times which allows to overcome both the problem of ``where infinity is'' and the problem of simulating an infinite system with finite resources. The key observation in this context is that ``infinity'' is far away with respect to the space-time metric . This means that one needs infinitely many ``metre sticks'' in succession in order to ``get to infinity''. But, what if we replaced these metre sticks by ones which grow in length the farther out we go? 
Then it might be possible that only a finite number of them suffices to cover an infinite range, provided the growth rate is just right. This somewhat naive picture can be made much more precise: instead of using the physical space-time metric g to measure distances, we use a rescaled metric, obtained from g by multiplication with a conformal factor which falls off at the appropriate rate. We can imagine attaching points to the space-time which are at finite distance with respect to the rescaled metric but which are at infinity with respect to the physical metric g. We arrived at this idea by considering the metric structure only ``up to arbitrary scaling'', i.e. by looking at metrics which differ only by a factor. This is the conformal structure of the space-time manifold in question. By considering the space-time only from the point of view of its conformal structure we obtain a picture of the space-time which is essentially finite but which leaves its causal properties, and hence the properties of wave propagation, unchanged. This is exactly what is needed for a rigorous treatment of radiation emitted by the system and also for the numerical simulation of such situations. The way we have presented the emergence of the conformal structure as the essence of asymptotically flat space-times is not how it happened historically. Since it is rather instructive to see how various approaches finally came together in the conformal picture, we will present in the following section a short overview of the history of the subject.
A few doubts about quantum field theory and high energy physics.

1. I read that the picture of gauge bosons as mediators of interaction originates in, and is valid in, perturbation theory. But how do we know that picture is correct? We do perturbation theory only because we do not know how to study a system in a fully non-perturbative way. If someday we discover a non-perturbative way of doing all such calculations, what will happen to this picture?

In order to do that you need some theory (= a collection of mathematical expressions and rules for how to use them). Then you can start to test perturbative and non-perturbative approaches. For QCD we have both (applicable in different regimes) and it seems that it works quite well. As long as we write down a Lagrangian which contains gauge bosons we will never find something else, and the final theory will always contain gauge bosons; the question whether we do perturbative or non-perturbative calculations is only a technical one. So the picture with gauge bosons will not break down when we do non-perturbative calculations using gauge bosons; it will break down when we write down something which does not contain gauge bosons.
DMTCS Proceedings
2005 European Conference on Combinatorics, Graph Theory and Applications (EuroComb '05)
Stefan Felsner (ed.)
DMTCS Conference Volume AE (2005), pp. 25-30

author: Oleg Pikhurko, Joel Spencer and Oleg Verbitsky
title: Decomposable graphs and definitions with no quantifier alternation
keywords: descriptive complexity of graphs, first order logic, Ehrenfeucht game on graphs, graph decompositions
abstract: Let be the minimum quantifier depth of a first order sentence that defines a graph up to isomorphism in terms of the adjacency and the equality relations. Let be a variant of where we do not allow quantifier alternations in . Using large graphs decomposable in complement-connected components by a short sequence of serial and parallel decompositions, we show examples of vertices with . On the other hand, we prove a lower bound for all . Here is equal to the minimum number of iterations of the binary logarithm needed to bring below 1. If your browser does not display the abstract correctly (because of the different mathematical symbols) you may look it up in the PostScript or PDF files.
reference: Oleg Pikhurko, Joel Spencer and Oleg Verbitsky (2005), Decomposable graphs and definitions with no quantifier alternation, in 2005 European Conference on Combinatorics, Graph Theory and Applications (EuroComb '05), Stefan Felsner (ed.), Discrete Mathematics and Theoretical Computer Science Proceedings AE, pp. 25-30.
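The bounds in the abstract are stated in terms of the iterated binary logarithm defined in its last sentence (the symbols themselves did not survive in this copy). Purely as an illustration of that definition, with names chosen here rather than taken from the paper:

    #include <math.h>
    #include <stdio.h>

    /* log*(n): the minimum number of times log2 must be iterated before the
     * value drops below 1, i.e. the quantity used in the abstract's bounds. */
    static int log_star(double n)
    {
        int k = 0;
        while (n >= 1.0) {
            n = log2(n);
            k++;
        }
        return k;
    }

    int main(void)
    {
        printf("log*(2) = %d, log*(16) = %d, log*(65536) = %d\n",
               log_star(2.0), log_star(16.0), log_star(65536.0));
        return 0;   /* prints 2, 4, 5 */
    }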
Re: st: Missing confidence intervals for median after using -bootstrap- or -bpmedian-

From: Nick Cox <njcoxstata@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Missing confidence intervals for median after using -bootstrap- or -bpmedian-
Date: Tue, 13 Nov 2012 00:49:30 +0000

I am not a statistician; in fact many, perhaps most, people on this list wouldn't call themselves statisticians. You are asked to make clear where user-written programs you refer to come from. -bpmedian- is from SSC or Roger Newson's website.

You don't tell us anything much about your data, either what it is (the name "var" is not revealing) or any descriptive statistics. But I see you have a large sample size. It seems likely therefore that the confidence interval for anything will be narrow at worst.

However, it seems likely also from your results that you have lots of ties. If so, the unusual result of a confidence interval of length 0 is likely to be an artefact of coarseness in data recording. If so, then reporting a confidence interval isn't really possible, as it should be more like .8 +/- smidgen where smidgen is less than the resolution of measurement. By resolution, I mean the minimum difference between reported measurements. If possible data are values like .7, .8, .9 the resolution is 0.1.

Conversely, if I were reviewing or examining this research, I would want a report on the fraction of values that were recorded as .8. In fact I would want a graph of the data. Of course, you may intend to do all that.

On Mon, Nov 12, 2012 at 9:32 PM, Vasyl Druchkiv <dvv1985@yahoo.de> wrote:
> Dear statisticians,
> I try to estimate CI's for the median with -bpmedian- or with -bootstrap- using
>
> *--------------------- begin example ------------------
> centile var
> bootstrap median=r(p50): sum var, detail
> *--------------------- end example --------------------
>
> The problem is that I get empty cells on standard error and confidence
> intervals either by implementing -bpmedian- or -bootstrap-.
>
> *--------------------- begin example ------------------
> Bonett-Price confidence interval for median of: var
> Number of observations: 16872
> ------------------------------------------------------------------------------
>          var |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
> -------------+----------------------------------------------------------------
>        _cons |        -.8          .        .       .            .          .
> *--------------------- end example --------------------
>
> I looked for the calculation method used in -bpmedian-. This method is
> described in:
> Bonett, D. G. and Price, R. M. 2002. Statistical inference for a linear
> function of medians: Confidence intervals, hypothesis testing, and sample
> size requirements. Psychological Methods 7(3): 370-383.
>
> Furthermore, I tried to estimate CI's with SPSS using bootstrap and got
> (-0.8; -0.8) for 95% CI's. It means that the problem occurs when both limits
> coincide with the median. However, the method described in Bonett-Price
> uses the formula:
>
> sum_j(c_j * eta_j) +/- z_(a/2) * ( sum_j(c_j^2 * var(eta_j)) )^(1/2)   (p. 372)
>
> So, even if the last term is equal to 0 due to the pointy distribution
> (var(eta_j) = 0), lower and upper limits must be displayed in Stata output
> and be equal to -0.8 in my example. Can I just assume that CI's are equal
> to the median?
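For intuition about why heavy ties make an interval degenerate, here is a small stand-alone C toy. It is not Stata and not the Bonett-Price method; the data are invented, with 70% of the observations exactly equal to -0.8. Nearly every resample then has median -0.8, so the percentile bootstrap interval collapses to a single point, which is the behaviour discussed above.

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_double(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    static double median(double *v, int n)
    {
        qsort(v, n, sizeof v[0], cmp_double);
        return (n % 2) ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
    }

    int main(void)
    {
        enum { N = 1000, B = 2000 };
        static double data[N], sample[N], meds[B];

        /* 70% of the values are exactly -0.8, the rest spread out a bit. */
        for (int i = 0; i < N; i++)
            data[i] = (i % 10 < 7) ? -0.8 : -1.0 + 0.4 * (i % 10) / 10.0;

        srand(12345);
        for (int b = 0; b < B; b++) {
            for (int i = 0; i < N; i++)
                sample[i] = data[rand() % N];     /* resample with replacement */
            meds[b] = median(sample, N);
        }
        qsort(meds, B, sizeof meds[0], cmp_double);
        printf("bootstrap 95%% percentile CI for median: [%g, %g]\n",
               meds[(int)(0.025 * B)], meds[(int)(0.975 * B)]);
        return 0;   /* prints [-0.8, -0.8] for these data */
    }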
The formal reconstruction and speedup of the linear time fragment of Willard's relational calculus subset, 1999
Cited by 8 (2 self)
Hind et al. ([5]) use a standard dataflow framework [15, 16] to formulate an intra-procedural may-alias computation. The intra-procedural aliasing information is computed by applying well-known iterative techniques to the Sparse Evaluation Graph (SEG) ([3]). The computation requires a transfer function for each node that causes a potential pointer assignment (relating the dataflow information flowing into and out of the node), and a set of aliases holding at the entry node of the SEG. The intra-procedural analysis assumes that precomputed information in the form of summary functions is available for all function-call sites in the procedure being analyzed. The time complexity of the intra-procedural may-alias computation for the algorithm presented by Hind et al. ([5]) is O(N^6) in the worst case (where N is the size of the SEG). In this paper we present a worst case O(N^3) time algorithm to compute the same may-alias information.

, 1999
Cited by 3 (0 self)
This note is a description of my research over the last few years as a doctoral student at the Courant Institute. A part of this research was collaborative work with my advisor Bob Paige. This statement is divided into three sections. The first section is introductory, the second describes the work that will go into my dissertation, and the third section describes some other work that I have done, and some possible future directions for my research. 1 Introduction. It is generally agreed that high level programming languages can not only substantially reduce the time it takes to produce code but also help increase reliability of software. Such languages allow programs to be specified more in terms of algorithmic concepts than implementation details. A more conceptual level of discourse not only makes programs easier to write but also easier to prove correct. Widely used low level languages such as Fortran, C, C++ were designed to allow a straightforward translation of programs into ...

We show how to efficiently evaluate generic map-filter-product queries, generalizations of select-project-join (SPJ) queries in relational algebra, based on a combination of two novel techniques: generic discrimination-based joins and lazy (formal) products. Discrimination-based joins are based on the notion of (equivalence) discriminator. A discriminator partitions a list of values according to a user-specified equivalence relation on keys the values are associated with. Equivalence relations can be specified in an expressive embedded language for denoting equivalence relations. We show that discriminators can be constructed generically (by structural recursion on equivalence expressions), purely functionally, and efficiently (worst-case linear time). The array-based basic multiset discrimination algorithm of Cai and Paige (1995) provides a base discriminator that is both asymptotically and practically
Math Help

September 14th 2008, 02:47 AM #1

Got kind of stuck on this question. Suppose $a$ is a non-zero constant and $x^{2/3}+y^{2/3}=a^{2/3}$. Show that the length of the portion of any tangent line to this astroid cut off by the x- and y-axes is constant.

I've differentiated implicitly to obtain the gradient of the tangent as $\frac{dy}{dx}=-\left(\frac{y}{x}\right)^{1/3}$, and have obtained the equation of the tangent line passing through $(q,0)$ and $(0,p)$ as $y=-\left(\frac{y}{x}\right)^{1/3}x+p$. But how do I show that the length of this line segment between $(q,0)$ and $(0,p)$ is constant? Thanks in advance.

September 14th 2008, 08:04 AM #2

There are many x's and y's in your equation of the line. Let $M(x_0,y_0)$ be a point of the astroid. The equation of the tangent line at $M$ is

$y=-\left( \frac{y_0}{x_0}\right)^{\frac13}(x-x_0)+y_0$

Let $P(0,p)$ and $Q(q,0)$ be the intersection points of the tangent line with the y-axis and the x-axis, respectively. Then $p$ satisfies $p=-\left( \frac{y_0}{x_0}\right)^{\frac13}(0-x_0)+y_0$ and $q$ is such that $0=-\left( \frac{y_0}{x_0}\right)^{\frac13}(q-x_0)+y_0$.

You can now solve these two equations for $p$ and $q$ and compute $PQ=\sqrt{p^2+q^2}$ to check that this length is constant. (Remember that we have $x_0^{\frac23}+y_0^{\frac23}=a^{\frac23}$.)

September 14th 2008, 11:51 AM #3

Hello, Hweengee!

Suppose $a$ is a non-zero constant, and $x^{\frac{2}{3}} + y^{\frac{2}{3}}=a^{\frac{2}{3}}$. Show that the length of the portion of any tangent line to this astroid cut off by the x- and y-axes is constant.

Your derivative is correct: $\frac{dy}{dx} =-\left(\frac{y}{x}\right)^{\frac{1}{3}} =-\frac{y^{\frac{1}{3}}}{x^{\frac{1}{3}}}$

Let $P(p,q)$ be any point on the astroid. The slope of the tangent at $P$ is $m =-\frac{q^{\frac{1}{3}}}{p^{\frac{1}{3}}}$.

The tangent through $P(p,q)$ with slope $m$ has the equation $y - q = m(x - p) \quad\Rightarrow\quad y = mx + q - mp$.

Its x-intercept is $A\left(\frac{mp-q}{m},\;0\right)$ and its y-intercept is $B\left(0,\;-(mp-q)\right)$.

The length of $AB$ is given by

$\overline{AB}^2 = \left(\frac{mp-q}{m}\right)^2 + (mp-q)^2 = (mp-q)^2\left(\frac{1}{m^2} + 1\right)$

Replace $m$ with $-\frac{q^{\frac{1}{3}}}{p^{\frac{1}{3}}}$:

$\overline{AB}^2 = \left(-\frac{q^{\frac{1}{3}}}{p^{\frac{1}{3}}}\cdot p - q\right)^2\left(\frac{p^{\frac{2}{3}}}{q^{\frac{2}{3}}} + 1\right) = \left(-p^{\frac{2}{3}}q^{\frac{1}{3}} - q\right)^2\left(\frac{p^{\frac{2}{3}} + q^{\frac{2}{3}}}{q^{\frac{2}{3}}}\right) = \left[-q^{\frac{1}{3}}\left(p^{\frac{2}{3}} + q^{\frac{2}{3}}\right)\right]^2\left(\frac{p^{\frac{2}{3}} + q^{\frac{2}{3}}}{q^{\frac{2}{3}}}\right) = q^{\frac{2}{3}}\left(p^{\frac{2}{3}} + q^{\frac{2}{3}}\right)^2\cdot\frac{p^{\frac{2}{3}} + q^{\frac{2}{3}}}{q^{\frac{2}{3}}}$

Hence $\overline{AB}^2 = \left(p^{\frac{2}{3}} + q^{\frac{2}{3}}\right)^3$, but $p^{\frac{2}{3}} + q^{\frac{2}{3}} = a^{\frac{2}{3}}$.

Therefore $\overline{AB}^2 = \left(a^{\frac{2}{3}}\right)^3 = a^2 \quad\Rightarrow\quad \overline{AB} = a \quad\ldots\text{ Q.E.D.}$
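A compact way to see the same result (not part of the original thread, just a cross-check) is the standard parametrization of the astroid, $x=a\cos^3\theta$, $y=a\sin^3\theta$:

$\frac{dy}{dx}=\frac{3a\sin^2\theta\cos\theta}{-3a\cos^2\theta\sin\theta}=-\tan\theta=-\left(\frac{y}{x}\right)^{1/3}$

so the tangent at the point with parameter $\theta$ is $y-a\sin^3\theta=-\tan\theta\,(x-a\cos^3\theta)$, whose intercepts are $(a\cos\theta,\,0)$ and $(0,\,a\sin\theta)$. The cut-off length is therefore $\sqrt{a^2\cos^2\theta+a^2\sin^2\theta}=|a|$, independent of $\theta$.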
{"url":"http://mathhelpforum.com/calculus/48990-astroid.html","timestamp":"2014-04-20T16:11:36Z","content_type":null,"content_length":"47120","record_id":"<urn:uuid:6ee6d216-d4b6-4c15-ab26-ed15577535b5>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: lovely proof problem
Date: May 6, 1999 1:04 AM
Author: Brian Harvey
Subject: Re: lovely proof problem

Anonymous writes:
>This is a problem I got for my computer science-discrete math class:
>Prove or disprove that the product of a nonzero rational number and an
>irrational number is irrational using one of the following: direct proof
>(of the form p --> q), indirect proof (of the form ~q --> ~p), proof by
>contradiction (so that ~p --> q is true, then ~p must be false, so p must be

What does this have to do with discrete math? (Perhaps it's just that nobody learns what a proof is in their non-discrete math classes any more?)

Anyway, which proof techniques can be used depends a lot on what theorems we already have available. Since the definition of an irrational number is based on a property it DOESN'T have, I'm betting that any proof will turn out to involve some use of proof by contradiction somewhere along the line, although that might be hidden in the proof of a previous theorem.
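For what it's worth, the contradiction the poster anticipates is short; here is one standard way to write it (my addition, not part of the original thread). Suppose $r \neq 0$ is rational and $x$ is irrational, and suppose toward a contradiction that $rx$ is rational, say $rx = q$ with $q$ rational. Since $r \neq 0$ we may divide: $x = q/r$, a quotient of rationals with nonzero denominator and hence rational, contradicting the irrationality of $x$. Therefore $rx$ is irrational. Read contrapositively, this is also an indirect proof of the form $\lnot q \Rightarrow \lnot p$: if $rx$ is rational (and $r$ is a nonzero rational), then $x$ is rational.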
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=1138545","timestamp":"2014-04-16T11:10:47Z","content_type":null,"content_length":"2247","record_id":"<urn:uuid:0c1fbe4a-c4ef-459e-9fb1-8955d81cb1cd>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
Using Binary numbers to create a matrix

OK, so I am new to this whole C programming thing and, quite frankly, it confuses the hell outta me! So I am hoping someone could help me out, with pretty detailed instructions explaining what each command does! Please bear with me, as I may not be very clear or concise when I explain this!

I want to create a matrix as follows. The matrix will be of size NxN, where N = 2^n and n is the length of the binary numbers we use. So the matrix will be M[i,j] where i,j = 0,...,N-1, and i,j correspond to the n-bit binary representations of those numbers.

If i != j then the M[i,j] entry is determined as follows:
- if (the binary form of) j has a 0 as its first digit, and i differs from j ONLY by having a 1 as its first digit, then M[i,j] = alpha;
- if j has a 1 as its final (nth) digit, and i differs from j ONLY by having a 0 as its final digit, then M[i,j] = beta;
- if j has a "10" within its binary representation, and i differs ONLY by having these two digits switched to "01", then M[i,j] = lambda;
- otherwise M[i,j] = 0.

And if i = j then M[i,i] = -sum_{j} M[i,j].

So, for example, if n = 3 we have an 8x8 matrix:
- if j = 1 = 001 and i = 5 = 101 then M[5,1] = alpha
- if j = 1 = 001 and i = 0 = 000 then M[0,1] = beta
- if j = 2 = 010 and i = 1 = 001 then M[1,2] = lambda
- if j = 3 = 011 and i = 5 = 101 then M[5,3] = 0

I am not sure if this makes sense, but if anyone knows how I could construct this it would be a huge help! Thanks.
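Not from the original thread: below is a rough sketch of how the rules could be coded in C, under the assumption that "first digit" means the most significant of the n bits and "final digit" the least significant, which is what the worked examples suggest. The values of alpha, beta and lambda are placeholders, and the diagonal is filled with the negative sum of the other entries in the same row, which is how I read the spec; double-check both assumptions against what you actually need.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int n = 3;               /* number of binary digits          */
    const int N = 1 << n;          /* matrix dimension, N = 2^n        */
    const double alpha = 1.0, beta = 2.0, lambda = 3.0;  /* placeholders */

    double *M = calloc((size_t)N * N, sizeof *M);
    if (!M) return 1;

    for (int j = 0; j < N; ++j) {
        for (int i = 0; i < N; ++i) {
            if (i == j) continue;
            double v = 0.0;
            /* rule 1: j has 0 as its first (most significant) digit and i is
             * j with that digit flipped to 1, i.e. i = j + 2^(n-1)          */
            if (((j >> (n - 1)) & 1) == 0 && i == j + (1 << (n - 1)))
                v = alpha;
            /* rule 2: j has 1 as its final (least significant) digit and i is
             * j with that digit flipped to 0, i.e. i = j - 1                 */
            else if ((j & 1) == 1 && i == j - 1)
                v = beta;
            else {
                /* rule 3: j has "10" at adjacent positions k+1,k and i is j
                 * with that pair swapped to "01", i.e. i = j - 2^k           */
                for (int k = 0; k + 1 < n; ++k) {
                    int hi = (j >> (k + 1)) & 1, lo = (j >> k) & 1;
                    if (hi == 1 && lo == 0 && i == j - (1 << k)) { v = lambda; break; }
                }
            }
            M[i * N + j] = v;
        }
    }
    /* diagonal: minus the sum of the other entries in row i (adjust if you
     * meant column sums instead)                                            */
    for (int i = 0; i < N; ++i) {
        double s = 0.0;
        for (int j = 0; j < N; ++j) if (j != i) s += M[i * N + j];
        M[i * N + i] = -s;
    }

    for (int i = 0; i < N; ++i) {        /* print the matrix */
        for (int j = 0; j < N; ++j) printf("%6.1f ", M[i * N + j]);
        printf("\n");
    }
    free(M);
    return 0;
}

Compile with something like cc -std=c99 -o mkmatrix mkmatrix.c (the file name is arbitrary) and change n, alpha, beta and lambda as needed.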
{"url":"http://www.velocityreviews.com/forums/t718282-using-binary-numbers-to-create-a-matrix.html","timestamp":"2014-04-18T03:51:27Z","content_type":null,"content_length":"26904","record_id":"<urn:uuid:39a69b74-c52b-4279-99b6-609b5d520866>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
Cache associativity - Observations from random ptr chase test.. Hi, I have been looking at the cache latencies of my Intel SB part. I build a ptr chase that takes a given size of memory and breaks it into so many granular chunks. I use the first array entry every granular element to point to the next. For instance.. in a size spanning 128KB and a granularity of 32KB I create a walk of: a[0] -> a[32768 * 2] a[32768 * 2] -> a[32768 * 1] a[32768 * 1] -> a[32768 * 3] a[32768 * 3] -> a[0] The cache size of the L1 is 32KB and has an associativity of 8, so one might expect that the index to a specific way is set via bits 14:12. I do a walk with granular steps of 4KB spanning (8 + n) * 4KB for various n, and I observe: * for n=0, I get 4 cycles of latency, as expected the L1 latency * for n=1, I am stressing the associativity of the L1, but I don't get the L2 latency, I'm inbetween, say 5-7 cycles sometimes and other times very close to the L2 latency * for n=2, I'm getting the L2 latency, makes sense (question: why for n=1 am I sometimes getting into a mode where I get numbers close the L1 latency, when for the given cachesize and associativity I should be in the L2, is there some buffering for victims to the L2 that my test is hitting in some state with this trivial test?) Then I thought I'd measure the transition between the L2 and L3. The L2 is reported as 256KB and non inclusive of the L1. The associativity is 8, so one might expect the index to the way of the L2 to be determined via bits 17-15. I do a walk with granular steps of 32KB spanning (8 + n) * 32KB for various n, and I observe: * for n=0, I get the L1 latency, which is expected, the L1 is 8 way associative and we found earlier it maps every one of the 8 steps of 32KB in this test to different ways.. and since there are 8 steps, we fit in the associativity of the L1. * for n=1, I get 5.2 cycles, which isn't the L2 latency, why? * for n=2 and greater, I observe the L3 latency, and not the L2 latency, why!? (question: It appears that there's some interesting behavior going on between the L1 and L2. * is the L1 a write back cache to the L2? Appears not. The behavior looks alot like the L2 doesn't exist. * if I do a walk of steps of 4096, once I do 64 of these (which maps the associativity of the L2, but doesn't include that of the L1, which would require 72 such steps) I observe the latency increase from the L2 for 64, which is 12, to 14 for 65 steps, 17 for 66 steps, and so on. I'm observing behavior like either the L1 doesn't exist or a way of the L2 isn't there. ) This test is using large pages, 2MB, so TLB misses are not occuring, and the start of the array is aligned to the start of the 2MB page. Lastly, the pointer chase is 8x unrolled. See below for an illustration of what the code looks like, maybe the HW pref is catching for walks with a few steps (say 8), that the address loaded via the rip is the same. For a walk with 16 steps, every 2 loop iterations you would get the same address touched via that rip: 1002420: 48 8b 3f mov (%rdi),%rdi 1002423: 48 8b 3f mov (%rdi),%rdi 1002426: 48 8b 3f mov (%rdi),%rdi 1002429: 48 8b 3f mov (%rdi),%rdi 100242c: 48 8b 3f mov (%rdi),%rdi 100242f: 48 8b 3f mov (%rdi),%rdi 1002432: 48 8b 3f mov (%rdi),%rdi 1002435: 48 8b 3f mov (%rdi),%rdi 1002438: 49 ff cd dec %r13 100243b: 75 e3 jne 1002420 I can play with removing the unrolling in this test.. but maybe the hw pref is getting in the way. Still.. for a walk spanning 64 * 4KB, you fit in the L2, for 65 * 4KB, you don't. 
Yet I'm observing that we are actually going to the L3. Is the hardware prefetcher speculating and evicting data from the L1 that I'm not requesting? Any help in understanding this strange behavior is very much appreciated. As stated before, the steps are randomized, but there is a periodicity that the HW pref might catch, depending upon its use of the rip and the address history for that rip. Perfwise

Tue, 27/03/2012 - 12:45

An update on my progress. I'm able to reconcile my observations if the L2 is inclusive of the L1D. I've measured the pointer chase after flushing it to memory and also without, thus making those lines Modified or Exclusively owned by the core in question. My observation is that when you do a pointer chase through 64 steps, where those steps reside in the first 8B of a 4KB page, I observe a latency of about 12 cycles. This is expected for an L2 of 256KB with 8-way associativity. However, if you now sweep through 65 distinct 4K blocks (chosen randomly to defeat the HW pref), I observe the latency increases to 14 cycles per step, which coincides with 9 accesses to the L3 and 56 to the L2. If you increase your number of random steps by one 4K block, you will see that you're unable to fit more than 8 distinct 4K indexes into a 32KB block; they're always evicted to the L3. So, does this imply the L2 is inclusive of data in the L1? It seems so. Can you confirm that's the case? perfwise

Tue, 27/03/2012 - 14:02

Pardon me for asking for clarification, but I think you're stating that your experiments indicate that you can't get past the L2 associativity rules by trying to put additional blocks of data in L1D. That isn't the same as demonstrating that the documents about exclusivity of L1 and L2 data are wrong, even if you demonstrated that exclusivity doesn't help you exceed the associativity limit.

Tue, 27/03/2012 - 16:19

My interest is not in determining if the documentation is wrong. Largely the cache behavior is undocumented. I'm attenuating the associativity of the L2 and, given the "cpuid" output of the L1, observing what happens. If you sweep through 256KB with a 32KB step, you get 4 cycles: OK, L1 hit. If you do 288KB, you observe that you get 8 L1 hits and 1 L2 hit; likely you're coming back to the evicted index in the L2 just fast enough to catch it before it's written to the L3. If you conversely do a 256KB sweep with 4KB steps, randomly, you observe L2 latency, 12 cycles. If you do 260KB with a 4KB step, you attenuate 1 index of the L2 with 9 accesses and don't come back in time to catch it before it's written to the L3. So your latency there is the following: ( 9 * 28 + 56 * 12 ) / 65 = 14. So I'm rather confident that the L2 is inclusive of the L1D, at least from these observations. I'm just wondering if that is something you can confirm. My tests say yes, and they're pretty easy to write. I just didn't find any documentation of this, and thought I'd ask, or at least bring this to your attention. Thanks for any help. perfwise

Sat, 31/03/2012 - 20:02

Hello perfwise, for "how to characterise the L2"... from one of our previous forum entries: http://software.intel.com/en-us/forums/showpost.php?p=158013 we see: On Sandy Bridge, the L2 can be characterized as 'non-inclusive, non-exclusive'. See Table 2-5 of the Optimization guide (URL in previous reply) for the cache policies and characteristics by cache level.
The cacheline can be in L1 and L2 and L3, or
the line can be in L1 and not in L2 but always in L3, or
the line can be in L2 and not in L1 but always in L3, or
the line can be only in L3 (not in L1 nor in L2).
A modified line in the L1 will be written back to the L2 if the L2 has a copy of the line or, if the line isn't in the L2, the line can be written back directly to the L3. If the modified line is written back to the L2, then the line won't be written back to the L3 unless the line is evicted from the L2 or the line is requested by another core.
As for your timings, these will probably require more time than I have right now to check the numbers. An added complication is that the pseudo least-recently-used (LRU) replacement algorithm is not perfect, so you will sometimes get L1 misses even when, technically, with a 4KB stride, you should be able to fit 8 cachelines into the L1D's 8 ways.

Fri, 06/04/2012 - 18:36

Yes, I remember seeing this documented, but what I'm specifically asking is: when a line is brought into the core, from a memory request or from the L3, is it installed in BOTH the L1D and L2? That's the point I don't see stated, and my test is telling me it is happening. The timings you don't need to look at; if the above is true, it explains everything.
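For readers who want to reproduce this kind of measurement, here is a bare-bones sketch of a randomized pointer chase (my own illustration, not perfwise's actual test): it places one pointer per 4 KB block, links the blocks in a random cycle, and times the dependent loads with the TSC. Unlike the test discussed above it uses ordinary malloc'd memory, so it does not guarantee 2 MB pages or any particular alignment beyond 4 KB, and rdtsc counts reference cycles rather than core clocks; treat the numbers as indicative only.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc(); gcc/clang on x86 */

#define BLOCK 4096

int main(int argc, char **argv)
{
    size_t nblocks = (argc > 1) ? strtoul(argv[1], NULL, 0) : 64;
    size_t iters   = 1u << 24;

    char *buf = malloc(nblocks * BLOCK + BLOCK);
    size_t *perm = malloc(nblocks * sizeof *perm);
    if (!buf || !perm) return 1;
    /* crude alignment of the region to a 4 KB boundary */
    char *base = (char *)(((uintptr_t)buf + BLOCK - 1) & ~(uintptr_t)(BLOCK - 1));

    /* random permutation of the blocks -> one cycle through all of them */
    for (size_t i = 0; i < nblocks; ++i) perm[i] = i;
    for (size_t i = nblocks - 1; i > 0; --i) {          /* Fisher-Yates */
        size_t k = (size_t)rand() % (i + 1);
        size_t t = perm[i]; perm[i] = perm[k]; perm[k] = t;
    }
    for (size_t i = 0; i < nblocks; ++i) {
        void **slot = (void **)(base + perm[i] * BLOCK);
        *slot = base + perm[(i + 1) % nblocks] * BLOCK;  /* close the cycle */
    }

    void **p = (void **)(base + perm[0] * BLOCK);
    uint64_t t0 = __rdtsc();
    for (size_t i = 0; i < iters; ++i)
        p = (void **)*p;                 /* the dependent-load chase */
    uint64_t t1 = __rdtsc();

    printf("%zu blocks: ~%.1f ref-cycles/load (ptr=%p)\n",
           nblocks, (double)(t1 - t0) / (double)iters, (void *)p);
    free(perm);
    free(buf);
    return 0;
}

Running it with 64 versus 65 blocks (./chase 64, ./chase 65) is the kind of sweep that exposes the associativity edge being discussed in the thread.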
{"url":"https://software.intel.com/ru-ru/forums/topic/279097","timestamp":"2014-04-18T13:40:14Z","content_type":null,"content_length":"64231","record_id":"<urn:uuid:c2b2aab8-41b7-4a29-94b1-df6e211c3182>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Is there a known algorithm for this?

In article <(E-Mail Removed)>, (E-Mail Removed) wrote:
> Gerald Rosenberg wrote:
> > Have not been able to Google very well for an answer, since I haven't a
> > usable name for the algorithm/type of problem.
> > In sum, I need to determine the least common denominator for the spacing
> > of a one dimensional array of integers where the integers have a noise
> > component.
> Could you state the problem more clearly? Do you need a single LCD for a set
> of integers? Do you need to find the most frequently occurring values in a
> set?

> > In practical terms, I have the Y-axis pixel locations of lines of text
> > on a page (which are approximations) and need to determine whether any
> > two adjacent text lines are single spaced, 1.5 spaced, or multiple
> > spaced.
> Create a histogram of all the values and examine them yourself for patterns,

Interesting. Will look into that. Thanks.

> then decide on an appropriate strategy to achieve what you are trying to
> accomplish, which you don't bother to say.

Did "need to determine whether any two adjacent text lines are single spaced, 1.5 spaced, or multiple spaced" not relate what I am trying to do?

> Another poster has recommended a Fourier transform, but I think this is
> overkill. A histogram approach will work for any case except many integers
> with little in common with each other. I don't think this is what you face.

> > Seems like there should be an analytic solution, but auto-correlation
> > doesn't seem right. Some kind of quantized best-fit?
> Why not state the problem to be solved before hypothesizing about a
> solution?

Sure: in practical terms, I have the Y-axis pixel locations of lines of text on a page (which are approximations) and need to determine whether any two adjacent text lines are single spaced, 1.5 spaced, or multiple spaced.

> > Rather than continuing to guess, does anyone know the name of the
> > algorithm for solving this type of problem.
> What type of problem is that? You have only discussed one aspect of the data
> set, and you haven't stated a problem to be solved at all.

OK. World peace through analysis of existing imaged documents. Now I need to figure out the document structure from an analysis of the PDF command and data stream. A big problem, much of it solved. Now I am just tackling a very specific aspect where I have the Y-axis pixel [baseline] locations of lines of text on a page (which are approximations, i.e., contain a noise component) and need to determine whether any two adjacent text lines are single spaced, 1.5 spaced, or multiple spaced.

No doubt in the realm of mathematics (at least I expect) people have investigated this class of problem and have proposed generalized algorithms to solve it. I could not guess the name or a functional description well enough to find it by Google. Thought that the good folk here at cljp, in their acknowledged wide-ranging knowledge of all things algorithmic, might know a name for this class of problem, or provide a pointer to suitable algorithms.
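One concrete way to act on the histogram suggestion (my own sketch, with made-up numbers, not something posted in the thread): take the gaps between consecutive baselines, treat the smallest typical gap as the single-space unit, and express each gap as a multiple of that unit rounded to the nearest half. In practice a more robust unit estimate (median of the smaller gaps, or the histogram mode) is advisable, since the raw minimum is sensitive to noise.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static int cmp_dbl(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

int main(void)
{
    /* hypothetical noisy baseline positions, top to bottom (pixels) */
    double y[] = { 100, 115.8, 131.2, 154.1, 177.3, 192.9, 208.4 };
    size_t n = sizeof y / sizeof y[0];
    qsort(y, n, sizeof y[0], cmp_dbl);

    double gap[16];                      /* sized for this small example */
    for (size_t i = 0; i + 1 < n; ++i) gap[i] = y[i + 1] - y[i];

    /* crude estimate of the single-space unit: the smallest gap */
    double unit = gap[0];
    for (size_t i = 1; i + 1 < n; ++i) if (gap[i] < unit) unit = gap[i];

    for (size_t i = 0; i + 1 < n; ++i) {
        double mult = round(gap[i] / unit * 2.0) / 2.0;  /* nearest 0.5 */
        printf("gap %zu: %.1f px = %.1f x unit (%s)\n", i, gap[i], mult,
               mult <= 1.0 ? "single" : mult <= 1.5 ? "1.5" : "multiple");
    }
    return 0;
}

(Compile with the math library, e.g. cc spacing.c -lm; file name is arbitrary.)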
{"url":"http://www.velocityreviews.com/forums/t136983-is-there-a-known-algorithm-for-this.html","timestamp":"2014-04-16T16:45:26Z","content_type":null,"content_length":"59545","record_id":"<urn:uuid:cdae04a5-ef1a-459b-acc3-66bec0e9ee6a>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
A difficult limit involving trig

August 31st 2006, 10:12 PM #1

Find $\lim_{x \to \pi/4} \frac{1-\tan x}{\sin x-\cos x}$.

I had been trying for 20 mins, but still cannot find a way to solve it, as both top and bottom go to 0... Thank you.

August 31st 2006, 10:41 PM #2

Since $\sin x - \cos x = -\cos x\,(1-\tan x)$,

$\frac{1-\tan(x)}{\sin(x)-\cos(x)} =\frac{1}{-\cos(x)}\cdot\frac{1-\tan(x)}{1-\tan(x)}=-\sec(x)$

so

$\lim_{x \to \pi/4} \frac{1-\tan(x)}{\sin(x)-\cos(x)} = \lim_{x \to \pi/4} \big(-\sec(x)\big) = -\sec\frac{\pi}{4} = -\sqrt{2}.$
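A quick independent check (my addition, not in the thread): both numerator and denominator vanish at $x=\pi/4$, so l'Hôpital's rule also applies:

$\lim_{x\to\pi/4}\frac{1-\tan x}{\sin x-\cos x}=\lim_{x\to\pi/4}\frac{-\sec^2 x}{\cos x+\sin x}=\frac{-2}{\sqrt{2}}=-\sqrt{2},$

in agreement with $-\sec(\pi/4)=-\sqrt{2}$.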
{"url":"http://mathhelpforum.com/calculus/5262-difficult-limit-involving-trig-print.html","timestamp":"2014-04-23T21:00:14Z","content_type":null,"content_length":"4763","record_id":"<urn:uuid:1a98b97d-7565-4910-bcc3-647814225745>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
F# for game development

Statically typed languages that support parametrized types (generics) and type hierarchies (inheritance) sometimes support covariance and contravariance, concepts which many find confusing. I think I have finally understood these, thanks to a blog post by Tomas Petricek on the subject. In this post I'll try to formulate my own understanding, and how I got there using functions instead of classes.

Step 1: values

Let us start easy, with simple values. Consider a base type, for instance IPrintable, and two derived types MyInt and MyString. I can use a MyString with any function that accepts a printable object.

type IPrintable =
    interface end

type MyInt() =
    interface IPrintable

type MyString() =
    interface IPrintable

// Step 1: simple value
let ``expects an IPrintable``(x : IPrintable) = ()

let n = MyInt()
let s = MyString()

``expects an IPrintable`` n // OK
``expects an IPrintable`` s // OK

Step 2: Parameterless functions

The next step deals with a function that takes another function which doesn't take any parameter and returns an IPrintable. If that's confusing, think of a generic function that creates a new random printable object, then prints it. This function would let the caller be responsible for providing a function which creates the random printable object. The code below illustrates this, but I've removed the random part. Note also that F# does not support covariance, which forces me to use flexible types.

// Step 2: a function without arguments
// No covariance in F#, see http://msdn.microsoft.com/en-us/library/dd233198.aspx
let ``expects a constructor``(f : unit -> #IPrintable) = ()

let mkInt() = MyInt()
let mkString() = MyString()

``expects a constructor`` mkInt // OK
``expects a constructor`` mkString // OK

Step 3: Functions of a single parameter

Let us now consider a variation of the function described above where the function responsible for creating a random object takes a MyInt (it could be the seed, for instance). Assume I have two functions that both take an IPrintable and return a printable object. Importantly, these two functions are safe to call with any instance of MyInt, since a MyInt is an IPrintable. One can imagine that such a function would use the printable representation to generate some number, used as the seed for the random generator. I can use either of them where a function with signature MyInt -> IPrintable is expected. Notice how the relationship on types for the parameter has been inverted, compared with the case of a value or a return type. This is an example of contravariance. No code here, but see below for a more complete example.

Step 4: Functions of multiple parameters

It's possible to keep adding more parameters, and currying helps understand which functions are safe to use. For this last example, I'll switch to another set of types: IScalar and IVector, with their respective implementations Float32 and Vector3. We can imagine there might be other implementations as well. A function which computes the product of a scalar and a vector must return a vector. Using currying, it can also be seen as a function that takes a scalar and returns a function which takes a vector and returns a vector.

If I have a function with signature Float32 -> Vector3 -> Vector3, where can I use it?

I can use it where the exact same signature is expected, obviously.

I can use it where a Float32 -> Vector3 -> IVector is expected:
• The final return types match, as shown in step 1.
• The other parameters obviously match.

I cannot use it where a Float32 -> IVector -> Vector3 or a Float32 -> IVector -> IVector is expected. Although the final return types are compatible in either case, the next step in the matching process fails.
A more general function with signature IScalar -> IVector -> Vector3 can be used where a Float32 -> Vector3 -> IVector is expected:
• Vector3 as a simple value can be used anywhere any IVector is expected.
• A function accepting any IVector accepts in particular Vector3, meaning such a function can be used where a function expecting a Vector3 is expected (!), provided their return types match (which was shown above).
• By the same reasoning applied on the first (and only) argument of the function with signature IScalar -> (IVector -> Vector3), we conclude that the more general function can be used.

The code below illustrates this example.

// Step 4: a function with arguments
type IScalar =
    interface end

type IVector =
    interface end

type Float32() =
    interface IScalar

type Vector3() =
    interface IVector

let prodGeneral (s : IScalar) (v : IVector) : Vector3 = failwith "..."

// Interesting: flexible types are needed for the return type (no covariance),
// but not for the parameters (contravariance)
let apply prod (s : Float32) (v : Vector3) : #IVector = prod s v // OK

let k = Float32()
let v = Vector3()
let u = apply prodGeneral k v

I hope that wasn't excessively complex. Other explanations on the subject which I have seen often use container classes instead of functions, which is confusing, as it brings read-only vs writable and reference-type vs value-type into the picture. Another problem is that using containers tends to mislead readers into thinking that T<B> can always be used where T<A> is expected if B inherits from A. Although that makes sense when T is IEnumerable, it doesn't work when T is Action.

Looking at the problem with a functional mindset really helped clarify the picture. I was a bit disappointed when I first saw F# did not support covariance, but flexible types do the job with very little additional syntax (a single # before the type). I was surprised to notice F# does support contravariance for function parameters, as it's often heard that F# supports neither covariance nor contravariance. That's not quite true, as it turns out.

If functions are powerful enough to model all other types, it may be interesting to see what kind of variance one should expect for unions, tuples, records and eventually classes. That's probably already been done, but it would be an interesting exercise.
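For reference, the rule the post converges on is the textbook function-subtyping rule (this summary is my addition, using "<:" to mean "is a subtype of, i.e. usable wherever the other is expected"): if T1' <: T1 and T2 <: T2', then (T1 -> T2) <: (T1' -> T2'). In words, a function type is contravariant in its argument and covariant in its result. Applying the rule twice through currying gives exactly the Step 4 conclusion: IScalar -> IVector -> Vector3 can stand in for Float32 -> Vector3 -> IVector.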
{"url":"http://sharp-gamedev.blogspot.com/","timestamp":"2014-04-18T16:10:56Z","content_type":null,"content_length":"58519","record_id":"<urn:uuid:6fe16582-d5c7-47ae-8d02-c320e2378370>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
The TOTAL function calculates the total of the values of an expression.

Return Value
The data type of the expression. It can be INTEGER, LONGINT, or DECIMAL.

Syntax
TOTAL(expression [dimension...])

Arguments

expression
The expression to be totalled.

dimension
The name of a dimension of the result; or, the name of a relation between one of the dimensions of expression and another dimension that you want as a dimension of the result. By default, TOTAL returns a single value. When you indicate one or more dimensions for the result, TOTAL calculates values along the dimensions that are specified and returns an array of values. Each dimension must be either a dimension of expression or related to one of its dimensions. When you specify a dimension that is not an actual dimension of expression, but, instead, is a dimension that is related to a dimension of expression, and when there is more than one relation between the two dimensions, Oracle OLAP uses the default relation between the dimensions to perform the calculation. (See the RELATION command for more information on default relations.) When you do not want Oracle OLAP to use this default relation, specify the related dimension by specifying the name of a specific relation.

Notes

TOTAL is affected by the NASKIP option. When NASKIP is set to YES (the default), TOTAL ignores NA values and returns the sum of the values that are not NA. When NASKIP is set to NO, TOTAL returns NA when any value in the calculation is NA. When all data values for a calculation are NA, TOTAL returns NA for either setting of NASKIP.

Totaling over a DWMQY Dimension
When expression is dimensioned by a dimension of type DAY, WEEK, MONTH, QUARTER, or YEAR, you can specify any other DAY, WEEK, MONTH, QUARTER, or YEAR dimension as a related dimension. Oracle OLAP uses the implicit relation between the dimensions. To control the mapping of one DAY, WEEK, MONTH, QUARTER, or YEAR dimension to another (for example, from weeks to months), you can define an explicit relation between the two dimensions and specify the name of the relation as the dimension argument to the TOTAL function. For each time period in the related dimension, Oracle OLAP totals the data for all the source time periods that end in the target time period. This method is used regardless of which dimension has the more aggregate time periods. To control the way in which data is aggregated or allocated between the periods of two time dimensions, you can use the TCONVERT function.

Multiple Relations in a TOTAL Function
When you break out the total by a related dimension, you are changing the dimensionality of the expression, so Oracle OLAP expects values based on this new dimensionality. It chooses the relation that holds values of that dimension. When there is more than one relation that holds values of the expected dimension, Oracle OLAP uses the one that was defined first. When there is no relation in which the related dimension is the one expected, Oracle OLAP looks for a relation that is dimensioned by the expected dimension. For example, assume that there are two relations between district and region: one holding the region each district belongs to, and one holding the primary district in each region. When a workspace had the two relations described above and you specified the following TOTAL function, Oracle OLAP would use the relation region.district by default, because it holds values of the specified dimension.

REPORT TOTAL(sales region)

Example 25-30 Totaling Sales over All Months
Suppose you would like to see the total sportswear sales for all months for each district. Use the TOTAL function to calculate the total sales. To see a total for each district, specify district as the dimension of the results.

LIMIT product TO 'Sportswear'
REPORT W 15 HEADING 'Total Sales' TOTAL(sales district)

The preceding statements produce the following output.

DISTRICT         Total Sales
--------------  ---------------
Boston            1,659,609.90
Atlanta           3,628,616.62
Chicago           2,296,631.81
Dallas            3,893,829.30
Denver            2,133,425.29
Seattle           1,298,215.59
{"url":"http://docs.oracle.com/cd/B19306_01/olap.102/b14346/dml_x_stddev024.htm","timestamp":"2014-04-17T00:26:58Z","content_type":null,"content_length":"13790","record_id":"<urn:uuid:7e7f40ed-2aca-4693-89f6-4e6456ea4c6f>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
Restricted set addition in groups, II. A generalization of the Erdős-Heilbronn conjecture

In 1980, Erdős and Heilbronn posed the problem of estimating (from below) the number of sums $a+b$ where $a\in A$ and $b\in B$ range over given sets $A,B\subseteq{\Bbb Z}/p{\Bbb Z}$ of residues modulo a prime $p$, so that $a\neq b$. A solution was given in 1994 by Dias da Silva and Hamidoune. In 1995, Alon, Nathanson and Ruzsa developed a polynomial method that allows one to handle restrictions of the type $f(a,b)\neq 0$, where $f$ is a polynomial in two variables over ${\Bbb Z}/p{\Bbb Z}$. In this paper we consider restricting conditions of general type and investigate groups, distinct from ${\Bbb Z}/p{\Bbb Z}$. In particular, for $A,B\subseteq{\Bbb Z}/p{\Bbb Z}$ and ${\cal R}\subseteq A\times B$ of given cardinalities we give a sharp estimate for the number of distinct sums $a+b$ with $(a,b)\notin{\cal R}$, and we obtain a partial generalization of this estimate for arbitrary Abelian groups.
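For context (my addition, stating the result referred to above): the Erdős-Heilbronn conjecture, proved by Dias da Silva and Hamidoune, asserts that for any $A\subseteq{\Bbb Z}/p{\Bbb Z}$ with $p$ prime,

$|\{a+b : a,b\in A,\ a\neq b\}| \;\ge\; \min\{p,\ 2|A|-3\}.$

The paper abstracted here generalizes this setting to sums $a+b$ with $(a,b)$ avoiding an arbitrary restricting set ${\cal R}\subseteq A\times B$.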
{"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v7i1r4/0","timestamp":"2014-04-19T22:54:33Z","content_type":null,"content_length":"15310","record_id":"<urn:uuid:037a8fa5-ec19-44df-a137-e8d38b4cdb8e>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: TRIANGULATING POINT SETS IN SPACE

David Avis*
School of Computer Science, McGill University, 805 Sherbrooke St. W., Montreal, Canada, H3A 2K6

Hossam ElGindy
Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104

A set P of n points in R^d is called simplicial if it has dimension d and contains exactly d+1 extreme points. We show that when P contains n' interior points, there is always one point, called a splitter, that partitions P into d+1 simplices, none of which contain more than dn'/(d+1) points. A splitter can be found in O(d^4 + nd^2) time. Using this result, we give an O(nd^4 log^{1+1/d} n) algorithm for triangulating simplicial point sets that are in general position. In R^3 we give an O(n log n + k) algorithm for triangulating arbitrary point sets, where k is the number of simplices produced. We exhibit sets of 2n+1 points in R^3 for
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/164/3746776.html","timestamp":"2014-04-19T17:45:15Z","content_type":null,"content_length":"8007","record_id":"<urn:uuid:c288fc9e-17c4-4cbb-9b2d-4245ae1cd12e>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
Modeling Numbers: Decimals Level Four > Number and Algebra Modeling Numbers: Decimals This unit uses one of the digital learning objects, Modelling Numbers: Decimals, to support students as they investigate the place value of numbers with three decimal places. The numbers are represented using a variety of place value equipment commonly used in classrooms. The knowledge section of the New Zealand Number Framework outlines the important items of knowledge that students should learn as they progress through the strategy stages. This unit of work and the associated learning object are useful for students at stage 7, Advanced Multiplicative Part Whole, of the Number Framework. Specific Learning Outcomes: represent numbers with three decimal places using place value equipment Description of mathematics: The learning object has two main functions. Firstly, the learning object allows students to make their choice of number using the place value equipment. They can listen to that number being read using the speaker function. The learning object also represents the number using place value equipment, written words, place value houses, standard form and an abacus. The learning object’s second function is to provide students with a number that they are asked represent using the place value equipment. Feedback is provided to the students to help them. This unit is suitable for students working at stage 7 of the Number Framework. It includes a sequence of problems and questions that can be used by the teacher when working with a group of students on the learning object, and ideas for independent student work. The Learning Objects The learning object, Modeling Numbers: decimals, can be accessed from the link below: Prior to using the Modeling Numbers: Decimals learning objects The learning objects Modeling Numbers: 3-digits and Modeling Numbers: 6-digits are very similar but only have whole numbers. It would be useful for students to try one of these learning objects first. It would also be helpful if students had some understanding of decimals . Working with the learning object with students (Choose your own Number) 1. Show students the learning object and explain that it provides a model for representing numbers using place value equipment. Zoom in by clicking on the magnifying glass icon to show the students the names of the columns for the places to the right of the decimal point. Use the magnifying glass to click back to the hundreds, tens, and ones place value equipment. Use the arrow keys to show students how to make a number. Start from the ones column and click through the numbers so the students can see the colour change for the 6th cube. Discuss how this makes it easier to immediately identify numbers between 6 and 9. 2. Ask the students to count as you click the arrows to make the numbers 6, 7, 8, 9, 10. Watch as the 10 cubes join to make a rod and slide into the tens column. Ask the students what they think will happen when you make 11. Ask the students what they think will happen if you count backwards from 11. Watch the place value equipment change as you click back using the arrows 3. Zoom in to the decimal section using the magnifying glass. Show the students that the place value equipment behaves in the same way for the thousandths, hundredths and tenths columns. Use the arrow keys to make a number in the thousandths column and click through the numbers so the students can see the colour change happen for the 6th cube as it did in the ones column. 4. 
Ask the students to count as you click to arrows to make the numbers 6 thousandths, 7 thousandths, 8 thousandths, 9 thousandths, 10 thousandths. Watch as the 10 cubes join to make a rod and slide into the hundredths column. Ask the students what they think will happen when you make the 11 thousandths. Ask the students what they think will happen if you count backwards from 11 thousandths. Watch the place value equipment change as you click back using the arrows. 5. Click the right arrow to see the number represented using words, a place value house, in standard form or represented on a three bar abacus. Show the students the words written in the form of three and eighty-four hundredths and in the form three point eight four. Usually decimals are read in the format three point zero eight four, but this learning object also provides it in the format of three and eighty-four hundredths to help develop students understandings of decimal place value. Again the colour of the place value equipment matches the colour of the columns in the text. Using the left and right arrows the students can choose how to represent the number. 6. You may wish to explain the other representations to the students. Using the magnifying glass zoom out to see the full number. Continue to make a number. Make sure students understand how a zero digit in the number is represented. Do enough examples together for students to see how the equipment shows the change between the columns. 7. If you have selected to show the number using written words below the place value equipment then a speaker icon is available to click. Click the speaker icon to hear the number being spoken. Ask the student if it is same as what they said. The speaker icon is available for both the format of zero point zero two, and zero and two hundredths. Working with the object with students (Model a Given Number) Click on the die at the bottom left of the screen. A number will appear in words in the box for the student to build using the place value equipment. The format of the wording is either in the form zero point three or zero and three tenths. The format follows the same format shown before the die is clicked on. The student can click on the speaker icon to hear the number being spoken. Ask a student to use the arrow keys to build the number. The learning object provides feedback to the student. Ensure that you try enough examples that students see that the second feedback provided by the computer indicates which column their error is in. Clicking the down arrow at the bottom of the screen will return you to modeling choosing your own number. Notes regarding working with place value equipment. There are a number of ways to explore place value concepts. The learning object provides a model to help students visualize the place value columns. Students will benefit from exploring place value with a range of equipment ranging from place value blocks, decimats, pipe decimals, a three bar abacus, and number flip charts. Students working independently with the learning object Because this learning object generates numbers for students to model, once they are familiar with how it works you could allow individual students or pairs of students to work with the learning object independently. They could be encouraged to complete a given number of examples. Students can also explore making their own number, saying it aloud and then checking using the speaker icon. 
Students working independently without the learning object Using place value equipment students can work in pairs to represent numbers. Working in pairs provides students with the opportunities to work together to practice saying and representing numbers with equipment. Home Link: Family and Whanau, This week we have been investigating decimal numbers up to three places. Please help your child find examples of 1, 2 and 3 place decimals in a newspaper. Discuss with them what the decimal means. For example in a sprint race time of 24.3 seconds, the .3 means 3 tenths of a second. Ask them to cut out good examples and paste them into their homework book to share with the class. Encourage them to find examples of decimal numbers around your home and explain to you what they mean. They can also record these examples in their book
{"url":"http://www.nzmaths.co.nz/resource/modeling-numbers-decimals","timestamp":"2014-04-19T10:00:01Z","content_type":null,"content_length":"32705","record_id":"<urn:uuid:8b3b85a1-0f41-4a83-a0f1-013e8ccfa49a>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
Schizophrenia as a Network Disease: Disruption of Emergent Brain Function in Patients with Auditory Hallucinations Schizophrenia is a psychiatric disorder that has eluded characterization in terms of local abnormalities of brain activity, and is hypothesized to affect the collective, “emergent” working of the brain. Indeed, several recent publications have demonstrated that functional networks in the schizophrenic brain display disrupted topological properties. However, is it possible to explain such abnormalities just by alteration of local activation patterns? This work suggests a negative answer to this question, demonstrating that significant disruption of the topological and spatial structure of functional MRI networks in schizophrenia (a) cannot be explained by a disruption to area-based task-dependent responses, i.e. indeed relates to the emergent properties, (b) is global in nature, affecting most dramatically long-distance correlations, and (c) can be leveraged to achieve high classification accuracy (93%) when discriminating between schizophrenic vs control subjects based just on a single fMRI experiment using a simple auditory task. While the prior work on schizophrenia networks has been primarily focused on discovering statistically significant differences in network properties, this work extends the prior art by exploring the generalization (prediction) ability of network models for schizophrenia, which is not necessarily captured by such significance Citation: Rish I, Cecchi G, Thyreau B, Thirion B, Plaze M, et al. (2013) Schizophrenia as a Network Disease: Disruption of Emergent Brain Function in Patients with Auditory Hallucinations. PLoS ONE 8 (1): e50625. doi:10.1371/journal.pone.0050625 Editor: Mariano Sigman, University of Buenos Aires, Argentina Received: July 25, 2012; Accepted: October 23, 2012; Published: January 21, 2013 Copyright: © 2013 Rish et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: The authors have no support or funding to report. Competing interests: The authors have declared that no competing interests exist. The concept of network disease, i.e. a dysfunction that affects the coordinated activity of a biological system, is receiving increased attention across all fields of biology and medicine. Though incomplete, current knowledge of protein-protein and gene-gene interaction networks provides a solid basis for assigning functional value to topological features such as connectivity, centrality, and so on [1], [2]. In neuroscience, the complexity of neural architecture and physiology precludes a similar detailed analysis. While Diffusion Tensor Imaging can reveal structural abnormalities associated with disease in large fiber tracts [3], [4], it is not immediately evident how these may affect the brain function. Schizophrenia is in this sense a paradigmatic case. Unlike some other brain disorders (e.g., stroke or Parkinson’s disease), schizophrenia appears to be “delocalized”, i.e. difficult to attribute to a dysfunction of some particular brain areas. 
The failure to identify specific areas, as well as the controversy over which localized mechanisms are responsible for the symptoms associated with schizophrenia, have led us amongst many others (see, for example, [5]–[7]) to hypothesize that this disease may be better understood as a disruption of the emergent, collective properties of normal brain states. These emergent properties can be better captured by functional networks, based on inter-voxel correlation strength, as opposed to individual voxel activations localized in specific, task-dependent areas. To test the hypothesis that schizophrenia, or any other psychiatric dysfunction, for that matter, is a network disease, we need first to clarify how to distinguish it from a non-network disease. In the first place, a network disease must have a measurable impact on one or several topological graph features of the associated functional brain networks in affected individuals, in comparison with control subjects. This has been the subject of several recent studies, reviewed later in the Discussion section, and needs no further discussion. However, while some disruption of topological features appears to be a necessary condition for a disease to be called a network dysfunction, it is not yet a sufficient one. Trivially, the topology of any sufficiently connected and structured graph can be significantly altered by the removal of a few nodes; this alteration would affect the network properties but its cause would still be localized. (As several studies seem to indicate, the brain behaves globally like a small-world and scale-free network [8], and as such it is prone to large disruptions if its hubs are affected [9]). On the other hand, disruptions of network links that cannot be explained just by local abnormalities (e.g., when nodes remain intact) better fits an intuitive notion of a network disease. A distinction between node disruptions versus connectivity issues can be also linked to different biological phenomena behind such abnormalities. For example, while stroke is associated with neuronal death in specific areas, and thus can be viewed primarily as a local disfunction, schizophrenia is known to be associated with abnormal functioning of neurotransmitters, such as dopamine and glutamate, that can dramatically change the functional connectivity of a brain, even though underlying anatomical/structural elements may still remain intact (e.g., temporary drug-induced psychosis in healthy individuals, based on altering neurotransmitters, closely mimics positive symptoms of schizophrenia). The following probabilistic model illustrates a situation where functional network connectivity disruptions occur independently from local (univariate) voxel activations. Let and denote BOLD signals recorded by fMRI for a given pair of voxels, and let S represent a task, or a stimulus (such as, for example, an auditory task described later in this paper). Figure 1a depicts a simple Markov network encoding the structure of dependencies among these three variables. (A Markov network [10] is an undirected probabilistic graphical model, i.e. a graph associated with a joint probability distribution over the nodes, where a missing edge between a pair of variables encodes their conditional independence given the rest of the variables in the network.) We now assume there are two groups of subjects, e.g. 
schizophrenics and controls; for each group , we can write the corresponding joint probability distribution in a factorized form as , where we also assume that same task or stimulus S is applied to both groups of subjects, so that is fixed. Next, we assume that the stimulus has same effect on the voxel activity across the groups, i.e. there are no group-dependent local changes; more formally, we assume that and . However, even though each marginal distribution, i.e. and , does not change across the subject group i, the conditional distribution , describing interactions among the pair of voxels, can vary across the groups, since the constraint does not uniquely determine . This illustrates how the voxel connectivity (described by their conditional distribution) can be altered across the two groups of subjects, without any change in the individual behavior of those voxels (described by their marginals). Figure 1. Graphical models of voxel interactions. Simple probabilistic graphical models capturing interactions among voxel-level BOLD signals and observed stimulus: (a) Markov network (undirected graph) over a pair of voxels and the task; (b) Bayesian network (directed graph) that includes an unobserved variable capturing other brain processes, besides the response to the observed stimulus, that can affect the BOLD signals. Note that directed links in Baysian networks are often (though not always) used to depict potential causal dependencies among the variables. Note that standard GLM approach focuses on univariate voxel activations, which are essentially just pairwise correlations, denoted herein as and , between the stimulus and each voxel’s signal and , respectively. However, even if the values of and are exactly the same (or, more realistically, their difference is not statistically significant between the two groups of subjects, e.g. controls vs. schizophrenic patients), the pairwise correlation between the two voxels can still vary, unless one of the voxels is perfectly correlated with the stimulus (i.e., either and is exactly one, an extremely unlikely situation in practice). A simple intuitive explanation behind varying inter-voxel functional connectivity in presence of fixed univariate stimulus-based activations is that there are multiple ongoing brain processes, besides the observed stimulus, that also affect the BOLD signal, and can be summarized as a hidden (unobserved) variable. Figure 1b depicts a directed probabilistic graphical model, or Bayesian network, demonstrating such situation. A naturally arising hypothesis is that some of those processes can be disrupted in schizophrenic patients, leading to disturbed interactions among voxels, even if the task-based voxel activations might be similar to those of the controls. We can gain further insight by analyzing more mechanistic models of brain activity. In particular, let us first consider two approaches that have been utilized frequently as models of interacting neuronal ensembles: coupled non-linear relaxation oscillators, defined by their phases and frequency of oscillation [11], [12], and Ising systems of coupled spins subject to an inhomogeneous external field [13]. It is possible to show that for the case of coupled oscillators, varying the coupling strength over a wide range of values leads to dramatic changes in the correlation between the units, without significantly affecting the individual rates (as measured by the frequency of oscillation). 
Similarly, for a fixed external field, it can be shown that the mean magnetization remains constant as the spin-spin correlation changes, as a function of a varying coupling strength. Moreover, even if a linear system driven by a multi-dimensional Gaussian process is not properly inferred (for instance, by assuming that the process is homogeneous over the nodes, when it is not), one may confound a change in the mean activity of a node by a change in the connectivity of the system. The detailed calculations for these three models are presented in Material S1. Thus, when analyzing a brain disorder associated with functional network abnormalities, one should first test the null-hypothesis assuming that the abnormalities can be fully explained by local disruptions; rejection of such null-hypothesis would provide a solid basis for classifying the observations as a truly network disorder. However, given the limited spatio-temporal resolution of current imaging techniques, a thorough analysis of this null hypothesis can be carried out only partially, and in the best cases requiring heavy computational resources or dramatic dimensionality reduction [14], [15]. We propose, however, an alternative approach suitable for the type of data provided by fMRI: if schizophrenia is a network disease, we would expect the multi-variate functional properties captured by topological graph features to carry more population-specific information than univariate and localized analysis approaches such as the General Linear Model. This is a sufficient condition, but in general not necessary; nevertheless, if satisfied, it is a strong indication that the dysfunction cannot be simply reduced to local functional disruptions. We would like to stress at this point that when network effects are discussed, this aspect is typically overlooked. Finally, in order to quantify the notion of information carried by network as opposed to localized features, we consider necessary to complement hypothesis-testing with predictive modeling/ classification statistical approaches. Various reasons justify the use of predictive modeling for brain imaging. In particular, the classification framework evaluates the generalization ability of models built using the features of interest, i.e. the ability to predict whether a previously unseen subject is schizophrenic or not, unlike standard statistical hypothesis testing that evaluates the differences between two groups of subjects (e.g., schizophrenic and control) on a fixed dataset. Moreover, predictive modeling is more robust to the presence of heavy-tailed feature distributions, which naturally arise in the topological analysis of complex networks [8]. Following the above rationale, in subsequent sections we will demonstrate that network features reveal highly statistically significant differences between the schizophrenic and control groups; moreover, statistically significant subsets of certain network features, such as voxel degrees (the number of voxel’s neighbors in a network), are quite stable over varying data subsets. In contrast, voxel activation show much weaker group differences as well as stability. 
Moreover, most of the network features, and especially pairwise voxel correlations (edge weights) and voxel degrees, allow for quite accurate classification, as opposed to voxel activation features: degree features achieve up to 86% classification accuracy (with 50% baseline) using Markov Random Field (MRF) classifier, and even more remarkable 93% accuracy is obtained by linear Support Vector Machines (SVM) using just a dozen of the most-discriminative correlation features. We will also show evidence that traditional approaches based on a direct comparison of the correlation at the level of relevant regions of interest (ROIs) or using a functional parcellation techniques [16], do not reveal any statistically significant differences between the groups. Indeed, a more data-driven approach that exploits properties of voxel-level networks appears to be necessary in order to achieve high discriminative power. The results presented in this paper unify and extend the approaches presented in our earlier work in [17], [18]. Materials and Methods We first describe the experimental paradigm and the groups of participating subjects, second the region of interest analysis, and then the network analysis and classification methods used to assess our capacity to predict which subject is schizophrenic. Ethics Statement Ethical approval was obtained from the Paris-Pitié-Salpétrière ethics committee. Participants were fully informed of the requirements of the behavioral task and all demonstrated that they understood the aims and demands of the experiment. All subjects gave written informed consent. The subjects’ ability to consent was established by clinical interviews, which demonstrated that this ability was not compromised by the subjects mental condition. Experimental Paradigm and Data Acquisition In our studies, we worked with a group of 15 schizophrenic subjects (9 women) fulfilling DSM-IV-R criteria for schizophrenia with daily auditory hallucinations for at least 3 months despite well-conducted treatment. Their mean ± S.D. age was years (i.e., 22–49 years range), and the duration of illness was years (3–28 years range). All schizophrenic patients were treated with antipsychotic drugs () chlorpromazine equivalent/day [19]. Four subjects were discarded because of acquisition issues, leaving us with 11 subjects, that were approximately matched for gender and age by the control group of 11 healthy subjects. Originally, the dataset also included a group of alcoholic patients; however, in this paper, we focused primarily discriminating between the schizophrenic and normal groups; the results including the alcoholics group together with controls, and testing against the schizophrenic group, were quite similar to those presented here, and are included in Material S1. All subjects were submitted to the same experimental paradigm involving language (see Figure 2), which was similar to the one introduced in [20]. The task is based on auditory stimuli; subjects listen to emotionally neutral sentences either in native (French) or foreign language. Average length (3.5 sec mean) or pitch of both kinds of sentences is normalized. In order to catch attention of subjects, each trial begins with a short (200 ms) auditory tone, followed by the actual sentence. 
The subject’s attention is asserted through a simple validation task: after each played sentences, a short pause of 750 ms is followed by a 500 ms two-syllable auditory cue, which belongs to the previous sentence or not, to which the subject must answer to by yes (the cue is part of the previous sentence) or no with push-buttons, when the language of the sentence was his own. A full fMRI run contains 96 trials, with 32 sentences in French (native), 32 sentences in foreign languages, and 32 silence interval controls. Figure 2. Experimental paradigm. Data were acquired on a 1.5 T Signa (General Electric) Scanner at Service Hospitalier Frédéric Joliot, Orsay, France. For each subject, two fMRI runs are acquired (T2-weighted EPI), each of which consisted of 420-scans (from which the first 4 are discarded to eliminate T1 effect), with a repetition time (TR) of 2.0 second, for a total length of 14 minutes per run. Data were spatially realigned and warped into the MNI template and smoothed (FWHM of 5 mm) using SPM5 (www.fil.ucl.ac.uk); also, standard SPM5 motion correction was performed with the SPM5 realignment pre-processing. For each volume of the time-series, the process estimates a 6 degree-of-freedom movement relative to the first volume. These estimated parameters are combined to warping parameters (obtained by nonlinear deformation on an EPI template) to get the final, spatially normalized and realigned time-series. Finally, a universal mask was computed as the minimal intersection of thesholded EPI mean volumes across the entire dataset. This mask was then applied to all subjects. Note that the schizophrenia patients studied here have been selected for their prominent, persistent, and pharmaco-resistant auditory hallucinations [20] which might have increased their clinical homogeneity, but they are not representative of all schizophrenia patients, only of a subgroup. In summary, our dataset contained the total of 44 samples (there were two samples per subject, corresponding to the two runs), where each sample corresponds to a subject/run combination, and is associated with roughly 50,000 voxels × 420 TRs × 2 runs, i.e. more than 40,000,000 voxels/variables. In the subsequent sections, among other methods, we discuss feature-extraction approaches that reduce the dimensionality of the data prior to learning a predictive model. We explored two different data analysis approaches aimed at discovery of discriminative patterns: (1) model-driven approaches based on prior knowledge about the regions of interest (ROI) that are believed to be relevant to schizophrenia, or model-based functional clustering, and (2) data-driven approaches based on various features extracted from the fMRI data, such as standard activation maps and a set of topological features derived from functional networks. Model-Driven Approach using ROI First, we decided to test whether the interactions between several known regions of interest (ROIs) would contain enough discriminative information about schizophrenic versus normal subjects. Ten regions of interests (ROI) were defined using previous literature [20] on schizophrenia and language studies, including inferior, middle and superior left temporal cortex, left inferior temporal cortex, left cuneus, left angular gyrus, right superior temporal, right angular gyrus, right posterior cingulum, and anterior cingular cortex (Figure 3). Each region was defined as a sphere of 12 mm diameter centered on the x,y,z coordinates of the corresponding ROI. Figure 3. Locations of ROIs. 
Regions of interest (ROIs) and their location on a brain normalized to the MNI template (Talairach coordinate system). Note that the region outside the brain has been defined for testing purposes.

Because predefined regions of interest may be based on too much a priori knowledge and miss important areas, we also ran a more exploratory analysis. A second set of 600 ROIs was defined automatically using a parcellation algorithm [16] that estimates, for each subject, a collection of regions based on functional signal similarity and position in the MNI space. Time series were extracted as the spatial mean over each ROI, leading to 10 time series per subject for the predefined ROIs and 600 for the parcellation technique. Drifts were removed from the time series by removing low frequencies below 1/128 Hz using a cosine basis. The connectivity measures were of two kinds. First, the correlation coefficients were computed between each pair of ROI time series without taking into account the experimental paradigm. Next, we computed a psycho-physiological interaction (PPI) by contrasting the correlation coefficient weighted by experimental conditions (i.e. correlation weighted by the "Language French" condition versus correlation weighted by the "Control" condition, after convolution with a standard hemodynamic response function). These connectivity measures were then tested for significance using standard non-parametric tests between groups (Wilcoxon signed-rank test) with p-values corrected for multiple comparisons.

Data-driven Approach: Feature Extraction

Activation maps. To find out whether local task-dependent linear activations alone could possibly explain the differences between the schizophrenic and normal brains, we used as a baseline a set of features based on the standard voxel activation maps, computed using the General Linear Model (GLM). The GLM analysis described here is a standard component of the Statistical Parametric Mapping (SPM) toolkit. Given the time series $s(t)$ for a stimulus (e.g., $s = 1$ if the stimulus/event is present, and $s = 0$ otherwise), and the BOLD signal intensity time series $y_i(t)$ for voxel $i$, the GLM is simply a linear regression $y_i(t) = \beta_i x(t) + \mu_i + \epsilon(t)$, where $x(t)$ is the regressor corresponding to the stimulus convolved with the hemodynamic response function (HRF) in order to account for the delay between the voxel activation and the change in the BOLD signal, $\epsilon(t)$ is noise, $\mu_i$ is the baseline (mean intensity), and the coefficient $\beta_i$ is the amplitude that serves as an activation score (note that $\beta_i$ is simply the correlation between $y_i$ and $x$ when both are normalized and centered prior to fitting the model). Given multiple trials, multiple estimates of $\beta_i$ are obtained and a statistical test (e.g., a t-test) is performed for their mean against the null hypothesis that it comes from a Gaussian noise distribution with zero mean and fixed noise level (the level of noise for the BOLD signal is assumed to be known here). In the case of multiple stimuli, the GLM uses a vector of regressors and obtains the vector of the corresponding coefficients. For example, in our studies the following stimuli/events were considered: "FrenchNative", "Foreign", and "Silence", together with several additional regressors, such as low-frequency trends and the movement parameters (an additional all-ones column is added to account for the mean of the signal, as above - a standard step in linear regression with unnormalized data). Once the GLM is fit, we focus on the coefficients obtained for the above three stimuli and the corresponding three activation maps.
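To make the voxelwise GLM concrete, here is a minimal sketch of one common way to obtain activation t-maps from a design matrix of HRF-convolved regressors. It is an illustration only, not the SPM implementation used in the study; the function and variable names are assumptions introduced for the example.

```python
import numpy as np

def glm_activation_tmaps(Y, regressors):
    """Per-voxel GLM. Y: (n_scans, n_voxels) BOLD data; regressors: list of
    HRF-convolved stimulus time courses, each of length n_scans.
    Returns betas and t-values, both of shape (n_regressors, n_voxels)."""
    n_scans, _ = Y.shape
    # Design matrix: stimulus regressors plus an all-ones column for the mean signal.
    X = np.column_stack(regressors + [np.ones(n_scans)])
    # Ordinary least-squares fit for all voxels at once.
    beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
    residuals = Y - X @ beta
    dof = n_scans - X.shape[1]
    sigma2 = (residuals ** 2).sum(axis=0) / dof           # per-voxel noise variance
    var_beta = np.outer(np.diag(np.linalg.inv(X.T @ X)), sigma2)
    t_values = beta / np.sqrt(var_beta)                   # t-statistic of each coefficient vs. 0
    return beta[:-1], t_values[:-1]                       # drop the constant column
```

A contrast such as "FrenchNative – Silence" would then be obtained from the same fit with a contrast vector $c$, using $t = c^{\top}\beta / \sqrt{\hat{\sigma}^2\, c^{\top}(X^{\top}X)^{-1}c}$ at each voxel.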
Next, we compute several activation contrast maps by subtracting some maps from the others (hoping that such differences, or contrasts, may provide additional information). The following activation contrast maps were computed: activation contrast 1: "FrenchNative – Silence", activation contrast 2: "FrenchNative – Foreign", activation contrast 3: "Silence – FrenchNative", activation contrast 4: "Foreign – FrenchNative" (note that maps 3 and 4 are just negations of maps 1 and 2, respectively), activation contrast 5: "Foreign – Silence"; also, the following three contrast maps are simply the difference of the corresponding coefficient (activation) and the mean ($\mu$): activation contrast 6: "FrenchNative", activation contrast 7: "Foreign", activation contrast 8: "Silence". For each of these maps, t-values are computed at each voxel (with a null hypothesis corresponding to a zero-mean Gaussian). In the analysis presented in this paper, we use the resulting t-value maps rather than just the "raw" activation maps (i.e., coefficient maps), and, to simplify the terminology, refer to them simply as "activation" or "activation contrast" maps. The above activation contrast maps (which we will further refer to simply as activation maps) were computed for each subject and for each run. The activation values of each voxel were subsequently used as features in the classification task. We also computed a global feature, mean activation (denoted mean-t-val), by taking the mean absolute value of the voxels' t-statistics.

Network features. In order to continue investigating possible disruptions of global brain functioning associated with schizophrenia, we decided to explore lower-level (as compared to ROI-level) functional brain networks [8] constructed at the voxel level, as follows: (1) pairwise Pearson correlation coefficients are computed among all pairs of voxel time series $y_i$, where $y_i$ corresponds to the BOLD signal of the $i$-th voxel; (2) an edge between a pair of voxels is included in the network if the correlation between $y_i$ and $y_j$ exceeds a specified threshold (herein, we used the same threshold of 0.7 (Pearson) for all voxel pairs; we tried a few other threshold levels, such as 0.8 and 0.9, and the results were similar; however, we did not perform an exhaustive evaluation of the full range of this parameter due to the high computational cost of such an experiment). For each subject and each run, a separate functional network was constructed. Next, we measured a number of its global topological features, including:

1. the mean degree, i.e. the number of links of each node (corresponding to a voxel), averaged over the entire network;
2. the mean geodesic distance, i.e. the minimal number of links needed to reach any node from any other node, averaged over the entire network;
3. the mean clustering coefficient, i.e. the fraction of triangles formed by a node with its first neighbors relative to all possible triangles, averaged over the entire network;
4. the giant component, i.e. the size (number of nodes) of the largest connected subgraph in the network;
5. the giant component ratio, i.e. the ratio of the giant component size to the size of the network;
6. the total number of links in the network.

Besides global topological features, we also computed a series of voxel-level network features, based on topological properties of an individual voxel in the functional network; the following types of features were used (a short code sketch of the network construction and degree-map computation is given at the end of this subsection):
1. (full) degree: the value assigned to each voxel is the total number of links of the corresponding network node;
2. long-distance degree: the number of links making non-local connections (i.e., links between the given voxel and voxels that are 5 or more voxels apart from it);
3. inter-hemispheric degree: only links reaching across the brain hemispheres are considered when computing each voxel's degree;
4. strength: node strength is the sum of the weights of the links connected to the node. In our study, the full correlation matrix was used as a weighted adjacency matrix, where each pairwise correlation corresponds to a link weight; thus, for each voxel, its strength is the sum of its correlations with the other voxels;
5. absolute strength: same as above, but the link weights are replaced by their absolute values;
6. positive strength: same as node strength, but only positive link weights are considered;
7. clustering coefficient: the clustering coefficient of a node is the fraction of triangles around the node, i.e. the fraction of the node's neighbors that are neighbors of each other; herein, we first computed a functional network by applying a threshold of 0.7 to the absolute values of the pairwise correlations, and then used the resulting graph to compute the clustering coefficient of each node/voxel;
8. local efficiency: the local efficiency is the global efficiency computed on node neighborhoods, and is related to the clustering coefficient; the global efficiency is the average inverse shortest path length in the network, that is $E = \frac{1}{N(N-1)}\sum_{i \neq j} d_{ij}^{-1}$, where $d_{ij}$ is the shortest path length between nodes $i$ and $j$, with $d_{ij}^{-1} = 0$ for disconnected nodes;
9. edge weights: finally, we simply used as features a randomly selected subset of 200,000 pairwise correlations out of the 53,000 × 53,000 entries of the correlation matrix (the locations of the pairs were randomly selected once, and then the same locations were used to derive features for all subjects); the rationale behind random sampling from the correlation matrix was to reduce the computational complexity of working with the full set of correlations, which would exceed 2800 million features. Nevertheless, the subsequent feature-ranking procedure was able to select a highly discriminative subset of correlation features, which would only improve if the feature ranking were allowed to continue running over the rest of the correlation matrix. (Note that we also tried other randomly selected sets of 200,000 voxel pairs and obtained results similar to those presented in this paper. Clearly, the results may vary if we keep selecting other random sets of pairs that may not include the most informative voxel pairs discovered in our analysis. However, the point of our analysis is to show that it is possible to find predictive features among pairwise correlations, and our results therefore demonstrate only a lower bound on a potentially even better predictive performance of correlation features.)

For each of the above feature types, except the edge weights, we call the corresponding feature set a "feature map", since each voxel is associated with its own feature value, e.g. (full) degree maps, strength maps, and so on. The set of global measures and spatial maps was utilized for further analysis of statistical significance of group differences, including t-tests and several classification approaches, described in the following sections.

Spatial normalization. Note that, for each sample, we also computed spatially normalized activation and degree maps, dividing the corresponding maps by their maximal value taken over all voxels in the given map.
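As referenced above, the following is a minimal sketch of the voxel-level network construction, the (full) degree map, and its spatial normalization. The 0.7 correlation threshold follows the text; the function name and the direct computation of the full correlation matrix are simplifying assumptions (at ~50,000 voxels the matrix would in practice be computed in blocks).

```python
import numpy as np

def degree_features(ts, threshold=0.7):
    """ts: (n_scans, n_voxels) BOLD time series for one subject/run.
    Returns per-voxel degrees, the mean degree (a global feature),
    and the max-normalized degree map used as voxel-level features."""
    corr = np.corrcoef(ts, rowvar=False)          # pairwise Pearson correlations (n_voxels x n_voxels)
    np.fill_diagonal(corr, 0.0)                   # ignore self-correlations
    adjacency = corr > threshold                  # edge whenever the correlation exceeds the threshold
    degrees = adjacency.sum(axis=1)               # (full) degree of each voxel
    mean_degree = degrees.mean()                  # one of the global network features
    normalized = degrees / max(degrees.max(), 1)  # divide the map by its maximum value
    return degrees, mean_degree, normalized
```

Strength maps would instead sum the (signed, absolute, or positive) correlations of each voxel, and long-distance or inter-hemispheric degrees would mask the adjacency matrix by voxel coordinates before summing.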
As it turned out, normalization affected both the statistical testing and the classification results presented below. We mainly focus on normalized activation and degree maps (full, long-distance and inter-hemispheric), since they yield better classification results. In the case of hypothesis testing, however, unnormalized (raw) activation maps, unlike the degree maps, happened to outperform their normalized counterparts, and thus both sets of results are presented.

Classification Approaches

Classification tasks. We first focused on discriminating between the schizophrenic and normal subjects only, which resulted in a well-balanced dataset containing 2×11 positive (schizophrenic) and 2×11 negative (healthy) samples (since there were two runs per subject), with a 50% baseline prediction accuracy. The results for the original dataset, merging alcoholics together with controls into one category and discriminating them from schizophrenic subjects, were quite similar to those presented here, i.e. we were able to accurately separate schizophrenics from the alcoholic subjects merged with the controls; these results are included in Material S1. First, standard off-the-shelf methods such as Gaussian Naive Bayes (GNB) and Support Vector Machines (SVM) were used in order to compare the discriminative power of the different sets of features described above. We used the standard SVM implementation with a linear kernel and default parameters, available from the LIBSVM library. For GNB, we used our own MATLAB implementation. Moreover, we decided to further investigate our hypothesis that interactions among voxels contain highly discriminative information, and to compare those linear classifiers against probabilistic graphical models that explicitly model such interactions. Specifically, we learn a classifier based on a sparse Gaussian Markov Random Field (MRF) model [21], which leads to a convex problem with a unique optimal solution and can be solved efficiently; herein, we used the COVSEL procedure [21]. The weight on the $\ell_1$-regularization penalty serves as a tuning parameter of the classifier, allowing one to control the sparsity of the model, as described below.

Sparse Gaussian MRF classifier. Let $X = \{X_1, \dots, X_p\}$ be a set of $p$ random variables (e.g., voxels), and let $G = (V, E)$ be an undirected graphical model (Markov network, or MRF) representing the conditional independence structure of the joint distribution $P(X)$. The set of vertices $V$ is in one-to-one correspondence with the set $X$. The set of edges $E$ contains the edge $(i, j)$ if and only if $X_i$ is conditionally dependent on $X_j$ given all remaining variables; the lack of an edge between $X_i$ and $X_j$ means that the two variables are conditionally independent given all remaining variables. Let $\mathbf{x}$ denote a random assignment to $X$. We will assume a multivariate Gaussian probability density $p(\mathbf{x}) \propto \det(C)^{1/2} \exp(-\tfrac{1}{2}\mathbf{x}^{\top} C \mathbf{x})$, where $C$ is the inverse covariance matrix (also called the precision matrix), and the variables are normalized to have zero mean. Let $D$ be a set of $n$ i.i.d. samples from this distribution, and let $S$ denote the empirical covariance matrix. Missing edges in the above graphical model correspond to zero entries in the inverse covariance matrix $C$, and thus the problem of learning the structure of the above probabilistic graphical model is equivalent to the problem of learning the zero-pattern of the inverse covariance matrix. Note that the inverse of the empirical covariance matrix, even if it exists, does not typically contain exact zeros. Therefore, an explicit sparsity constraint is usually added to the estimation process.
A popular approach is to use $\ell_1$-norm regularization, which is known to promote sparse solutions while still allowing (unlike non-convex $\ell_q$-norm regularization with $q < 1$) for efficient optimization. From the Bayesian point of view, this is equivalent to assuming that the parameters of the inverse covariance matrix are independent random variables following Laplace distributions with zero location parameters (means) and equal scale parameters, i.e. $p(C) \propto \exp(-\lambda \|C\|_1)$, where $\|C\|_1$ is the (vector) $\ell_1$-norm of $C$. Assuming a fixed parameter $\lambda$, our objective is to find the matrix $C$ maximizing $p(C \mid D)$, where $D$ is the data matrix, or equivalently, since $p(C \mid D) \propto p(D \mid C)\,p(C)$ and $p(D)$ does not include $C$, to maximize $p(D \mid C)\,p(C)$ over positive definite matrices $C$. This yields the following optimization problem, considered, for example, in [21]:

$$\max_{C \succ 0}\ \ln\det(C) - \operatorname{tr}(SC) - \lambda \|C\|_1, \qquad (1)$$

where $\det(A)$ and $\operatorname{tr}(A)$ denote the determinant and the trace (the sum of the diagonal elements) of a matrix $A$, respectively, and $S$ is the empirical covariance of the data. For the classification task, we estimate on the training data the Gaussian conditional density (i.e. the (inverse) covariance matrix parameter) for each class (schizophrenic vs control), and then choose the most likely class label for each unlabeled test sample $\mathbf{x}$.

Variable selection. Note that each subject is associated with roughly 50,000 voxels × 420 TRs × 2 runs, i.e. more than 40,000,000 voxel/variable values. Thus, some kind of dimensionality reduction and/or feature extraction appears to be necessary prior to learning a predictive model. Extracting degree maps and activation maps reduces dimensionality by collapsing the data along the time dimension. Moreover, we used variable selection as an additional preprocessing step before applying a particular classifier, in order to (1) further reduce the computational complexity of classification (especially for sparse MRF, which, unlike GNB and SVM, could not be directly applied to 50,000 variables), (2) reduce noise, and (3) identify relatively small predictive subsets of voxels. We applied a simple filter-based approach, selecting a subset of top-ranked voxels, where the ranking criterion used p-values resulting from the paired t-test, with the null hypothesis being that the voxel values corresponding to schizophrenic and non-schizophrenic subjects come from distributions with equal means. The variables were ranked in ascending order of their p-values (lower p-values correspond to higher confidence in between-group differences), and classification results on the top k voxels will be presented for a range of k values. Clearly, in order to avoid a biased estimate of the generalization error, variable selection was performed separately on each cross-validation training dataset; failure to do so, i.e. variable selection on the full dataset, would produce overly optimistic results with nearly perfect accuracy (e.g., 95% accuracy using GNB on just the 100 top t-test voxels).

Evaluation via cross-validation. Since there were two samples corresponding to the two runs per subject, another source of overly optimistic bias that we had to avoid was the possible inclusion of samples from the same subject in both the training and test datasets - for example, when using the standard leave-one-out cross-validation approach. Instead, we used leave-one-subject-out cross-validation, where each of the 22 folds on the 44-sample dataset (11 schizophrenic and 11 control subjects, 2 runs each) sets aside as a test set the two samples of a particular subject. A potential artifact that affects the computation of functional connectivity networks is the movement of subjects in the scanner.
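Before turning to movement artifacts, here is a minimal sketch of the sparse Gaussian MRF classifier built from problem (1). It is an illustration under assumptions: it uses scikit-learn's GraphicalLasso in place of the COVSEL solver used in the paper, assumes the top-ranked features have already been selected on the training fold only, and the function names are hypothetical.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def fit_sparse_gaussian_mrf(X_train, y_train, alpha=0.1):
    """Estimate one sparse precision matrix (and mean) per class.
    X_train: (n_samples, n_selected_features); alpha plays the role of the
    l1 penalty weight lambda in problem (1)."""
    models = {}
    for label in np.unique(y_train):
        Xc = X_train[y_train == label]
        mu = Xc.mean(axis=0)
        gl = GraphicalLasso(alpha=alpha).fit(Xc)   # l1-penalized estimate of the precision matrix
        models[label] = (mu, gl.precision_)
    return models

def predict_sparse_gaussian_mrf(models, X_test):
    """Assign each test sample to the class with the larger Gaussian log-likelihood."""
    labels = sorted(models)
    scores = []
    for label in labels:
        mu, P = models[label]
        _, logdet = np.linalg.slogdet(P)
        diff = X_test - mu
        quad = np.einsum('ij,jk,ik->i', diff, P, diff)   # (x - mu)^T C (x - mu) per sample
        scores.append(0.5 * logdet - 0.5 * quad)         # log-likelihood up to a shared constant
    return np.array(labels)[np.argmax(np.vstack(scores), axis=0)]
```

In the leave-one-subject-out loop described above, the t-test ranking and this fit would both be computed on the 21 training subjects of each fold, and the two held-out runs would then be scored by the resulting class models.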
While we implemented the standard procedure for movement correction, it is known that residual effects may still leave a trace in the functional images. We therefore developed an approach that addresses the issue directly in the context of predictive modeling; specifically, we computed pairwise correlations between the movement parameters and each network feature, for all feature types listed in the paper (e.g., for degree features, we computed the correlation between a motion parameter and the degree of each voxel). To see whether those correlations were significant, we used the False Discovery Rate (FDR) test with significance level 0.05 on the p-values corresponding to the correlations, and showed that no voxels pass the significance test (see Material S1). (FDR is a statistical method used in multiple hypothesis testing to correct for multiple comparisons, discussed in more detail in the subsequent sections.) Moreover, we applied the same classification approach to data from alcoholic patients, acquired following the same protocol as for the normal and schizophrenic patients. Alcoholic patients are known to show a significant degree of movement inside the scanner. As reported in Material S1, the negative classification results are a further indication that movement is not a factor.

Model-driven ROI Analysis

First, we observed that the correlations (blind to the experimental paradigm) between regions and within subjects were very strong and significant (p-value of 0.05, corrected for the number of comparisons) when tested against 0 for all subjects (mean correlation >0.8 for every group). However, these inter-region correlations do not seem to differ significantly between the groups. The parcellation technique led to some smaller p-values, but also to a stricter correction for multiple comparisons, and no correlation was close to the corrected threshold. Concerning the psycho-physiological interaction, the results were closer to significance, but did not survive multiple comparisons. In conclusion, we could not detect significant differences between the schizophrenic patient data and normal subjects in either the BOLD signal correlations or the interaction between the signal and the main experimental contrast (native language versus silence).

Data-driven Analysis: Topological vs Activation Features

The empirical results are consistent with our hypothesis that schizophrenia disrupts the normal structure of functional networks in a way that is not derived from alterations in the activation; moreover, they demonstrate that topological properties are highly predictive, consistently outperforming predictions based on activations.

Voxel-level statistical analysis. In order to find out whether the various features exhibit statistically significant differences across the two groups, we performed a two-sample t-test for each feature of the corresponding feature vector of a particular type (e.g., activations, degrees, etc.); herein, the number of tests equals the number of voxels for voxel-level features, and 200,000 for the weight features (pairwise correlations). Clearly, when the number of statistical tests is very large (here exceeding 50,000), a correction for multiple comparisons is necessary, since low p-values indicating statistically significant differences in a single test may occur purely by chance when many such tests are performed. The commonly used Bonferroni correction is overly conservative in brain-imaging analysis since it assumes test independence, while there are obviously strong correlations across the voxel-level features.
A more appropriate type of correction that is now frequently used in fMRI analysis is the False Discovery Rate (FDR) method, designed to control the expected proportion of incorrectly rejected null hypotheses, or "false discoveries". In general, FDR is less conservative than familywise error rate (FWER) methods (including the Bonferroni correction), since it does not guarantee that there are no false positives, but rather that there are only a few of them. For example, FDR at threshold 0.05 guarantees that, in expectation, no more than 5% of the rejected hypotheses are false positives. Herein, we include the results for both the FDR and Bonferroni corrections (see columns 5 and 6 of Table 1, respectively). However, our discussion is mainly based on the FDR results, while the Bonferroni results are mentioned purely for completeness' sake, to demonstrate that some of the statistical differences we observed are so strong that they survive even the overly strict Bonferroni correction.

Table 1. Detailed t-test results for all activation and network-based features: each column shows the number of voxels that satisfy a given constraint, such as having a p-value below the specified threshold or surviving the FDR or Bonferroni correction at significance level 0.05 (the number of voxels in common with the full degree maps is shown in parentheses for the unnormalized linear activation maps).

Our main observation is that the network features show much stronger statistical differences between the schizophrenic vs. non-schizophrenic groups than the activation features. Figure 4 shows the results of the two-sample t-test analysis for all voxel-level features, and the corresponding FDR threshold at level 0.05. Panel (a) shows a direct comparison between the best activation features (dashed lines) and the three (spatially normalized) degree maps: full, long-distance and inter-hemispheric. In all degree maps, on the order of $10^3$ voxels survive FDR correction (i.e., have their p-values below the black line corresponding to the FDR threshold), while only a handful (fewer than 10) of activation voxels do. The other measured graph features, including clustering and local efficiency, have less statistical power than degrees (i.e., have p-values closer to the FDR threshold), but still outperform activation maps by almost two orders of magnitude, as shown in Panel (b). A full list showing the number of surviving voxels for each map is given in Table 1. (Note that for the activation maps, the results for both normalized and unnormalized maps are shown, since the unnormalized ones performed better in hypothesis testing. In the classification study presented next, the situation was reversed, i.e. normalized activations predicted better than unnormalized ones; thus, we always included the best possible results achieved by activations. In the case of degree maps, we always used only their normalized versions, which performed best in both the hypothesis-testing and classification scenarios.) Finally, randomly selected pairwise correlations, as shown in Panel (c), behave similarly to degrees, with on the order of $10^4$ correlations surviving the FDR test, i.e. an order of magnitude more than for degrees. (Note, however, that the total number of correlation features (200,000) is also much larger than the number of degree features (about 50,000, i.e. the number of voxels); therefore, the results for correlations are not directly comparable to those for degrees and other voxel features, and they are thus plotted in a separate panel.)

Figure 4. Two-sample t-test results for different features: p-values vs. FDR threshold. (a) Activations vs.
normalized degrees; (b) clustering coefficients, strength, absolute strength, positive strength, and local efficiency of each voxel; (c) 200,000 randomly selected pairwise correlations. The null hypothesis for each feature assumes no difference between the schizophrenic vs normal groups. P-values of the features are sorted in ascending order and plotted vs the FDR baseline; the FDR test selects features with $p_{(k)} \le \alpha k / N$, where $\alpha$ is the false-positive rate, $k$ is the index of a p-value in the sorted sequence, and $N$ is the total number of voxels. Note that graph-based features yield a large number of highly significant (very low) p-values, staying far below the FDR cut-off line, while only a few voxels survive FDR in the case of (unnormalized) activation maps in panel (a): 7 and 2 voxels in activation maps 1 (contrast "FrenchNative – Silence") and 6 ("FrenchNative"), respectively, while the rest of the activation maps do not survive the FDR correction at all.

The spatial localization of the network maps is shown in Figure 5, representing the voxels surviving correction for (a) (normalized) degree maps, (b) strength (red-yellow), absolute strength (blue-light blue) and positive strength (black-white), (c) clustering coefficient and local efficiency maps. Normalized degrees (a) show the most spatially coherent organization, with contiguous bilateral clusters in auditory/temporal areas, prominently BA 22 and BA 21. Note also that the degree of the normal population is higher than that of the patient population. Strength-related features (b) have less bilateral symmetry and are also less spatially coherent, while clustering (c) is even more scattered.

Figure 5. Two-sample t-test results for different features: voxels surviving FDR correction. (a) Normalized degree maps; (b) strength (red-yellow), absolute strength (blue-light blue) and positive strength (black-white); (c) clustering coefficient and local efficiency maps. Here the null hypothesis at each voxel assumes no difference between the schizophrenic vs normal groups. Colored areas denote low p-values passing FDR correction at level 0.05 (i.e., a 5% false-positive rate). Note that the mean (normalized) degree at the highlighted voxels was always (significantly) higher for normals than for schizophrenics. Coordinates of the center of the image: (a) and (c) X = 26, Y = 30, Z = 16; (b) X = 26, Y = 30, Z = 18.

The network in Figure 6 visualizes the top 30 most significantly different edges selected out of the 200,000 edge features, or pairwise correlations (the total number of such features surviving FDR correction was 12240, as shown in Table 1 and visualized in Figure 4b). Figure 7 shows a stable subset of 9 edges common to all top-30 ranked edge sets over all cross-validation subsets, making it a highly robust representation. Note that unlike the degree maps, this network includes areas other than BA 22 and BA 21, prominently the left precentral gyrus BA 44 (Broca's area), right middle frontal gyrus BA 10, medial precuneus BA 7, and the declive of the cerebellum. A complete list of the nodes is presented in Table 2, while the area-to-area functional connections determined by the 9 most stable links are shown in Table 3. Note that most links span both hemispheres, and that there are no local, intra-area links, even though we introduced no voxel clustering.

Figure 6. Thirty top-ranked (lowest-p-value) edges (all surviving Bonferroni correction) out of 200,000 pairwise correlation features, computed on the full dataset. (a) All views and (b) enlarged sagittal view. Edge density is proportional to their absolute value.
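The FDR selection rule quoted in the Figure 4 caption (keep features with sorted p-values $p_{(k)} \le \alpha k / N$) is the Benjamini–Hochberg step-up procedure. A small sketch follows; the function name is illustrative, and real analyses would typically rely on an existing implementation.

```python
import numpy as np

def fdr_select(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up selection at level alpha.
    Returns a boolean mask of the features that survive FDR correction."""
    p = np.asarray(p_values)
    n = p.size
    order = np.argsort(p)                          # p-values in ascending order
    thresholds = alpha * np.arange(1, n + 1) / n   # the alpha*k/N baseline
    below = p[order] <= thresholds
    keep = np.zeros(n, dtype=bool)
    if below.any():
        k_max = np.nonzero(below)[0].max()         # largest rank k with p_(k) <= alpha*k/N
        keep[order[:k_max + 1]] = True             # reject all hypotheses up to that rank
    return keep
```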
Figure 7. The 9 stable edges common to all subsets of 30 top-ranked (lowest-p-value) edges that survived Bonferroni correction, over the 22 different cross-validation folds (leave-one-subject-out data subsets). (a) All views and (b) enlarged sagittal view. Edge density is proportional to their absolute value. The network includes several areas not picked up by the degree maps, i.e. other than BA 22 and BA 21, mainly the cerebellum (declive) and the occipital cortex (BA 19).

Table 2. Areas corresponding to the nodes on the 9 most stable links.

Table 3. Area-to-area functional connections determined by the 9 most stable links.

Our observations suggest that (a) the differences in the collective behavior cannot be explained by differences in the linear task-related response, and that (b) the topology of voxel-interaction networks is more informative than task-related activations, suggesting an abnormal degree distribution for schizophrenic patients, who appear to lack hubs in the auditory cortex, i.e., have significantly lower (normalized) voxel degrees in that area than the normal group, possibly due to a more even spread of degrees in schizophrenic vs. normal networks. Note that, as discussed earlier, ROI- and parcellation-level network topologies do not seem to retain the information present in voxel-level networks, apparently due to the averaging of the signal over ROIs or parcels. We also evaluated the stability of all features with respect to selecting a subset of top-ranked voxels over different subsets of the data. For each value of $k$, the stability of the top-$k$-ranked feature subset is defined as the fraction of features in common over all cross-validation data subsets (recall that there are 22 of them). Namely, given a fixed value of $k$, for each data subset we rank the features by their p-values computed on that particular subset, choose the top $k$ of them, and then compute the intersection over all 22 of those top-$k$ feature subsets. The number of features common to all subsets (i.e., the size of their intersection), divided by $k$, gives us a measure of feature stability. Interestingly, network-based features such as degrees (full, long-distance or inter-hemispheric) demonstrate much higher stability than activation features, as well as than the other network-based features. Figure 8a shows that degree maps have up to almost 70% of top-ranked voxels in common over different training data sets when using leave-one-subject-out cross-validation, while activation maps have below 50% of voxels in common between different selected subsets. This property of degree vs activation features is particularly important for the interpretability of predictive modeling. The stability of the other network-based features is shown in Figures 8b and 8c, where Figure 8c shows the same results as Figure 8b but using a logarithmic scale instead of a linear one, in order to focus on the regime where only a small number of features is selected. While the overall stability of the remaining network features does not reach the high values of the degree features, it is still interesting to note that the pairwise correlations appear to be the most stable of the remaining network features when the number of selected features is relatively small, e.g. below 100.

Figure 8. Stability of feature subset selection over cross-validation (CV) folds.
Stability is measured as the percentage of voxels in common among the subsets of $k$ top variables selected at all CV folds: (a) activations and degrees; (b,c) edge weights (correlations), clustering coefficients, strength, absolute strength, positive strength, and local efficiency: (b) linear scale on the x-axis, (c) log scale on the x-axis (focusing on small numbers of selected features).

Inter-hemispheric degree distributions. As suggested by the predominance of inter-hemispheric edges in the set of most significantly different pairwise correlations (Table 3), a closer look at the degree distributions reveals that a large percentage of the differential connectivity appears to be due to long-distance, inter-hemispheric links. Figure 9a compares the probability of finding a link in the networks as a function of the Euclidean distance between the nodes (in millimeters), for schizophrenic (red) versus control (blue) subjects. The bars correspond to one standard deviation, drawn on the top only to avoid clutter in the figure, and the lines correspond to power-law fits for the intermediate distances (i.e. between 10 and 150 mm); the fits yield different exponents for schizophrenics and for controls. We see that for this distance range, schizophrenics have reduced connectivity, i.e. lower link probabilities than controls. Figure 9b compares the fraction of inter-hemispheric connections over all connections for the schizophrenic (red) versus normal (blue) groups. For each subject, a unique value was computed by dividing the number of links spanning both hemispheres by the total number of links. The figure represents the normalized histogram of this inter-hemispheric link density for each group. The schizophrenic group shows a significant bias towards low relative inter-hemispheric connectivity. A t-test analysis of the distributions indicates that the differences are statistically significant. Moreover, it is evident that a major contributor to the high degree difference discussed before is the presence of a large number of inter-hemispheric connections in the normal group, which is absent in the schizophrenic group. Furthermore, we selected a bilateral region of interest (ROI) corresponding to the left and right Brodmann Area 22 (roughly, the clusters in Figure 4a), such that the linear activation for these ROIs was not significantly different between the groups, even in the uncorrected case. For each subject, the connection strength between the left and right ROIs was computed as the fraction of ROI-to-ROI links over all links. Figure 9c shows the normalized histogram over subjects for this connectivity measure. Clearly, the normal group displays higher ROI-to-ROI connectivity, which is significantly disrupted in the schizophrenic group. This provides a strong indication that the group differences in connectivity cannot be explained by differences in local activation.

Figure 9. Functional connectivity disruption in schizophrenic subjects vs controls. (a) Probability of finding a network link as a function of the Euclidean distance between the nodes (in millimeters): schizophrenics (red) show reduced connectivity compared with controls (blue) for distances in the middle range (10 to 150 mm). (b) Disruption of global inter-hemispheric connectivity. For each subject, we compute the fraction of links spanning both hemispheres over the total number of links, and plot a normalized histogram over all subjects in each group (normal - blue, schizophrenic - red).
(c) Disruption of task-dependent inter-hemispheric connectivity between specific ROIs (Brodmann Area 22, selected bilaterally). The ROIs were defined by a 9 mm radius ball centered at [x = −42, y = −24, z = 3] and [x = 42, y = −24, z = 3]. For each subject, we compute the fraction of links connecting the bilateral ROIs over all links, and show a histogram of this connectivity measure over all subjects in each group. The histograms are similarly normalized.

Global features. For each global feature (full list in Material S1) we computed its mean for each group and the p-value produced by the t-test, as well as the classification accuracies using our classifiers. While more details are presented in Material S1, we outline here the main observations: while mean activation (we used map 8, the best performer for SVM on the full set of voxels - see Table 4b) had a relatively low p-value, as compared to a less significant one for mean degree, the predictive power of the latter, alone or in combination with some other features, was the best among the global features in schizophrenic vs normal classification (Table 4a), while mean activation yielded a higher error with all classifiers. In general, low p-values do not necessarily imply low generalization error, as the results with the other global features show. This is not particularly surprising, especially when the data violate the Gaussian assumption of the t-test, as they do in our case.

Table 4. Classification errors using (a) global features and (b) activation and degree maps, using SVM on the complete set of voxels (i.e., without voxel subset selection).

Classification using activations vs. network features. While the mean degree indicates the presence of discriminative information in voxel degrees, its generalization ability, though the best among global features and their combinations, is relatively poor. However, voxel-level network features turned out to be very informative about schizophrenia, often outperforming activation features by far. Table 4b shows the results of classification by SVM using all voxel-level network features of each type. Herein, all voxels and their corresponding features were used, without any subset selection; for correlation features, defined on pairs of voxels, we used the same number of features as in all other cases, i.e. the top 53750 correlations out of 200000, since 53730 is the number of voxels used for the other features. Note that the top-performing network features are correlations (14% error) and (full) degree maps (16% error), greatly outperforming all activation maps, which yield above 30% error even for the best-performing activation map 8. Next, in Figure 10, we compare the predictive power of different features using all three classifiers: Support Vector Machines (SVM), Gaussian Naive Bayes (GNB) and sparse Gaussian Markov Random Field (MRF), on subsets of the k top-ranked voxels, for a variety of k values. For sparse MRF, we experimented with a variety of $\lambda$ values, ranging from 0.0001 to 10, and present the best results; while cross-validation could possibly identify even better-performing values of $\lambda$, it was omitted here due to its high computational cost (also, using the fixed values listed above we already achieved quite high predictive accuracy, as described later). We used the best-performing activation map 8 from the Table above, as well as maps 1 and 6 (which survived FDR); map 6 also outperformed the other activation maps in the low-voxel regime.
Also, to avoid clutter, we only plot the results for the three best-performing network features: full and long-distance degree maps, and pairwise correlations. Classification results for the rest of the network features can be found in the Appendix. We can see the following:

Figure 10. Classification results: degree vs. activation features. Three classifiers, Gaussian Naive Bayes (GNB) in panel (a), SVM in panel (b) and sparse MRF in panel (c), are compared on two types of features, degrees and activation contrasts; (d) all three classifiers compared on long-distance degree maps (best-performing for MRF).

• Network features outperform activation maps, for all classifiers we used and for practically any value of k, the number of features selected. The differences are particularly noticeable when the number of selected voxels is relatively low. The most significant differences are observed for SVM in the low-voxel regime: using just a dozen of the most predictive pairwise correlations achieves a remarkable 7% error, while the activation maps yield 30% and larger errors. Also, both pairwise correlations and degrees noticeably outperform activations on the full set of features (far right of the x-axis). Moreover, degree features demonstrate excellent performance with MRF classifiers: they achieve a quite low error of 14% with only the 100 most significant voxels, while even the best activation map 6 requires more than 200–300 voxels to get just below 30% error; the other activation maps perform much worse, often above 30–40% error, or even just at the chance level.
• Full and long-distance degree maps perform quite similarly, with the long-distance map achieving the best result (14% error) using MRFs.
• Among the activation maps, while map 8 ("Silence") outperforms the others on the full set of voxels using SVM, its behavior in the low-voxel regime is quite poor (always above 30–35% error); instead, map 6 ("FrenchNative") achieves the best performance among activation maps in this regime. We also observed that performing normalization really helped the activation maps, since otherwise their performance could get much worse, especially with MRFs - we provide those results in Material S1.
• MRF classifiers significantly outperform SVM and GNB with degree features, possibly due to their ability to capture inter-voxel relationships that are highly discriminative between the two classes (see Figure 10d). However, with the correlation features the situation is reversed, and the overall best result (7% error) is achieved using SVM with just a dozen of the top-ranked correlation features.

Attributing schizophrenia to abnormal interactions among different brain areas, rather than to local failures, has a long history in schizophrenia research, and is sometimes referred to as the "disconnection" hypothesis [22]. According to [23], this hypothesis was first proposed in 1906 by Wernicke [24], who postulated that an anatomical disruption of association fiber tracts is at the root of psychosis; in fact, the term "schizophrenia" was introduced by Bleuler [25] in 1911, and was meant to describe the separation ("splitting") of different mental domains. Recent advances in neuroimaging provided researchers with tools for studying not just anatomical, but also functional connectivity and its disruption in schizophrenia. The "disconnection syndrome" article by [22] was among the first to point out abnormalities in functional connectivity using PET imaging data (see also [26]).
(More recently, the term "dysconnection" was suggested [23] in order to better capture the fact that schizophrenia is associated with a broader range of network dysfunctions beyond just missing connections.) The article of [22] studied functional connectivity captured by temporal correlations among different brain areas during a linguistic task, using a principal component analysis (PCA) decomposition of the functional connectivity (covariance) matrix. Analysis of the spatial components ("eigenimages") revealed that "profound negative prefronto-superior temporal functional interactions associated with intrinsic word generation" were strongly present in healthy subjects, but practically absent in schizophrenic patients; vice versa, positive prefronto-left temporal correlations were present in the schizophrenic group but not in the normal group, suggesting a reversal of prefronto-temporal integrations, attributed to a "failure of prefrontal cortex to suppress activity in the temporal lobes (or vice versa)". More recently, several studies demonstrated altered patterns in the default-mode networks of schizophrenia patients, e.g. altered temporal frequency and spatial location of the default-mode networks [5], and other patterns of aberrant connectivity [27], [28]. Also, multiple recent studies [7], [29] focused on graph-theoretic analysis of functional connectivity networks [8] in schizophrenia, demonstrating, for example, that in schizophrenia patients "the small-world topological properties are significantly altered in many brain regions in the prefrontal, parietal and temporal lobes" [7]. There is also continuing work exploring abnormalities in anatomical networks in schizophrenia [6], [30], [31]. In general, the importance of modeling brain connectivity and interactions has become widely recognized in the recent neuroimaging literature beyond schizophrenia research ([32]–[34] give just a few examples). However, practical applications of approaches such as dynamic causal modeling [32], dynamic Bayes nets [33], or structural equations [34] are often limited to interactions among a relatively small number of known brain regions believed to be relevant to the task or phenomenon of interest. As discussed below, such an approach can sometimes be disadvantageous, while a more data-driven, voxel-level functional network analysis can achieve better results. In this paper, we proposed an approach to constructing predictive features based on functional network topology, and applied it to predictive modeling of schizophrenia. We demonstrated that (1) specific topological properties of functional networks yield highly accurate classifiers of schizophrenia and (2) functional network differences cannot be attributed to alterations of local activation patterns, a hypothesis that was not ruled out by the results of [6], [7] and similar work. In other words, our observations strongly support the hypothesis that schizophrenia is indeed a network disease, associated with the disruption of global, emergent brain properties. Specifically, we demonstrated that topological properties of (voxel-level) functional brain networks are highly informative about the disease, unlike localized, task-related voxel activations, which were greatly outperformed by network-based features in both hypothesis-testing and predictive settings.
We also showed that it is highly important to use functional networks at the proper level: in our study, discriminative information present in voxel-level networks was apparently lost (perhaps due to averaging over large groups of voxels) at both the region-of-interest (ROI) and functional parcellation levels; the latter did not reveal any statistically significant differences between the schizophrenic and control groups. Unlike most traditional studies of schizophrenia networks based solely on a hypothesis-testing approach (e.g., [6], [7], [31]), we also employed predictive modeling techniques in order to evaluate how well models built using network vs. local features would generalize to previously unseen subjects. Using generalization power, besides statistical significance, provides a complementary (and often more accurate) measure of the disease-related information contained in a particular type of feature, such as network properties or local activations. Moreover, predictive models have potential applications in a clinical setting, e.g. for early diagnosis of schizophrenia based on abnormal patterns in imaging data. (Note, however, that multiple studies on a variety of subjects and experimental conditions may be necessary to come up with a robust predictive model.) In summary, our observations suggest that voxel-level functional networks may contain significant amounts of information discriminative of schizophrenia, which may not be otherwise available in voxel activations or ROI-level networks. Note, however, that the schizophrenic population studied here has been selected for their prominent, persistent, and pharmaco-resistant auditory hallucinations [20], which might have increased its clinical homogeneity and reduced its value as representative of the full spectrum of the disease. The experimental protocol may also restrict the applicability of our approach to generic cases. The areas most evidently involved in the discriminative networks, BA 22 and BA 21, are involved in language processing and are known to alter their activity in schizophrenics [35], and to display genetic and anatomical anomalies [36]. The direct analysis of pairwise correlations (as opposed to the voxel-centric degree maps) identifies anomalies in functional connectivity with Broca's area, the cerebellum and, interestingly, the frontal lobe (BA 10), in loose agreement with previous findings regarding disrupted fronto-temporal connectivity associated with auditory hallucinations [37]. However, the analysis of correlations as a function of (Euclidean) distance provides a more nuanced perspective, as it shows weaker long-distance and stronger short-distance correlations for the patient population. This suggests a global re-organization of functional connections, and is further evidence of the emergent nature of the disruptions introduced by the disease. In the context of this finding, the identification of specifically affected areas, or area-to-area links, may be less relevant for the purpose of understanding the functional disruption. Note that the hypothesis of an emergent signature for schizophrenia does not necessarily reject the possibility of localized activation differences with respect to the normal population, for specific tasks or conditions. The finding that long-range functional connections are differentially affected, as demonstrated by the paucity of inter-hemispheric links and the weakness of long-distance correlations, may still be interpreted in terms of localized changes.
Our findings may follow from subtle changes (undetectable by fMRI, at least) in the local activation of a handful of areas, which get amplified by the effect of the large number of links that are pooled when network features are computed, and bear no relationship to disruptions in the effective connectivity of the network (determined, for instance, by the lack or excess of specific neuro-transmitters). The fact is, however, that there is no such thing as a completely "local" activation in the brain, since the driving input to most areas of the central nervous system is provided by the activity of other areas. In this sense, the hypothesis can be reformulated to imply that the disease is concomitant with a much stronger disruption of emergent than of local features. While our conclusions may not necessarily apply to the schizophrenic population in general, we believe that our approach transcends the specific details of the particular population and experimental protocol we studied, and can guide future investigations of schizophrenia and other complex psychiatric diseases that can be better understood as network dysfunctions. Directions for further research include the exploration of network abnormalities in other schizophrenia studies involving different groups of patients and different tasks, as well as a better characterization of the connections involved in the predictive discrimination.

Supporting Information

Demonstration of connectivity-based vs. locally-based changes in correlation for coupled oscillators. The upper panels show the effect of changing the coupling strength of the oscillators, leading to drastic changes in correlation that do not affect the rates. The lower panels show the effect of changing the intrinsic rate of one oscillator while keeping the connection strength fixed. The correlation also changes drastically, but the change is associated with a change in the rate.

Demonstration of connectivity-based vs. locally-based changes in correlation for Ising spins. By changing the local field h, it is possible to affect the correlation while keeping the coupling parameter J constant (black arrow).

Classification results comparing GNB, SVM and sparse MRF classifiers on unnormalized (raw) activation maps vs degree maps.

Classification results comparing (a) GNB, (b) SVM and (c) sparse MRF on correlations, clustering coefficient and strength features.

FDR-corrected two-sample t-test results showing p-values associated with correlations between different features and the movement parameter. The following features are presented: (a) pairwise voxel correlations (edge weights); (b) voxel-wise network features; (c) activations. The null hypothesis assumes no (significant) correlation between the feature and the movement parameter. P-values for each feature-movement correlation are sorted in ascending order and plotted vs the FDR baseline; the FDR test selects features with $p_{(k)} \le \alpha k / N$, where $\alpha$ is the false-positive rate, $k$ is the index of a p-value in the sorted sequence, and $N$ is the total number of tests. Note that practically no p-value survives the FDR correction, suggesting that the correlations between the features and the movement parameter are not statistically significant.

Results for schizophrenic vs (normal+alcoholic) classification.

Global features. Classification errors using global features for schizophrenics vs. normal+alcoholics; baseline error about 31%.

Supplemental materials.
We would like to thank Rahul Garg for his help with the data preprocessing and for many stimulating discussions that contributed to the ideas of this paper, and Drs. André Galinowski, Thierry Gallarda, and Frank Bellivier, who recruited and clinically rated the patients. We would also like to thank INSERM as promoter of the MR data acquisition (project RBM 01–26).

Author Contributions

Analyzed the data: GC IR B. Thyreau JBP B. Thirion. Contributed reagents/materials/analysis tools: GC IR B. Thyreau. Wrote the paper: GC IR B. Thyreau B. Thirion MP MLPM CM JLM JBP.
{"url":"http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0050625","timestamp":"2014-04-16T19:59:27Z","content_type":null,"content_length":"263132","record_id":"<urn:uuid:67c022f1-1a7c-4aa4-8db5-2cb894e1f87c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Tom Kempton

I'm a postdoc at the University of St Andrews working with Kenneth Falconer on various problems relating to self-affine sets. This position is funded by the EPSRC. Previously I worked with Karma Dajani at the University of Utrecht. The main focus of my research there was the ergodic theory of dynamical systems relating to expansions of real numbers. In particular, I looked at how properties of the random beta-transformation can be used in the discussion of Bernoulli convolutions. I completed my PhD at the University of Warwick in March 2011 under the supervision of Mark Pollicott. My thesis was on thermodynamic formalism for symbolic dynamical systems.

Publications and Preprints:

The following preprint versions of papers are not final; for finished versions follow the links to the websites of the relevant journals. Click here for my Google Scholar profile.

Factors of Gibbs Measures for Full Shifts (with M. Pollicott), appeared in the proceedings of the BIRS Workshop on Hidden Markov Processes, 2011.
Factors of Gibbs Measures for Subshifts of Finite Type, appeared here in the Bulletin of the London Mathematical Society, 2011.
Zero Temperature Limits of Gibbs Equilibrium States for Countable Markov Shifts, appeared here in the Journal of Statistical Physics, 2011.
Thermodynamic Formalism for Suspension Flows over Countable Markov Shifts, appeared here in Nonlinearity, 2011.
Counting Beta Expansions and the Absolute Continuity of Bernoulli Convolutions, appeared here in Monatshefte für Mathematik, 2013.
Digit Frequencies and Bernoulli Convolutions, submitted 2013.
On the Invariant Density of the Random Beta-Transformation, appeared here in Acta Mathematica Hungarica, 2013.
Sets of Beta-Expansions and the Hausdorff Measure of Slices through Fractals, to appear in the Journal of the European Mathematical Society.
Self-Affine Sets with Positive Lebesgue Measure (with K. Dajani and K. Jiang), submitted 2013.

Numbers in Ergodic Theory:

Together with Charlene Kalle I am organising a series of one-day meetings on ergodic theory. These are supported by grants from the NWO and from the STAR cluster.

Seminars & Teaching:

In 2013 I taught the course Ergodic Theory, which is part of the masters program at Utrecht and also part of the Stochastics and Financial Mathematics masters. For details, please see the course page. The Utrecht stochastics seminar is currently organised by Tobias Müller; the website can be found here. In the academic year 2011-2012 I organised the Stochastics and Numerics seminar together with Inan Ates; details of the seminar can be found here. I also gave a masters course on topics in dynamical systems; information about the course, including some material, can be found here.

Travel Plans:

Here is a list of upcoming conferences which I think sound interesting.

Tom Kempton
Mathematics Institute
Utrecht University
Budapestlaan 6
3584 CD Utrecht, The Netherlands
T.M.W.Kempton@uu.nl
Office Phone: 0031 302532303
{"url":"http://www.staff.science.uu.nl/~kempt001/","timestamp":"2014-04-18T10:34:26Z","content_type":null,"content_length":"6184","record_id":"<urn:uuid:5216ca8d-d9bb-4e8a-b95e-dd7683237970>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum: Math Library - Estimation

1. Estimation 180 - Andrew Stadel
A new estimation challenge every day of the school year, with accompanying video answers. Join the number sense challenge online by submitting your own estimates, explaining your reasoning ("What context clues did you use?") and rating your accuracy ("How confident are you?").

2. QAMA - QAMA, LLC
"The calculator that thinks only if you think, too," QAMA requires the operator to first enter a reasonable estimate. QAMA encourages rounding and rapid back-of-the-envelope reckoning, but does not tolerate errors of magnitude or sign, or of basic operations with single digits. Read user instructions, scope, and more. QAMA derives from "Quick Approximate Mental Arithmetic," but also happens to mean "How Much?" in Hebrew.

3. A View from the Back of the Envelope - Mitchell N. Charity
Pages about approximation. Counting by powers of ten includes the "what order of magnitude is ...?" game, illustrated instructions for counting to ten billion on your fingers, and lots of dots -- a million dots on one page, and a dot for every second in the day, with a real-time count-down graphic. The section on scaling the universe provides ways to visualize how big things are using analogies and examples, covering nanometers (nm), micrometers (um), millimeters (mm), meters (m), kilometers (km), megameters (Mm), gigameters (Gm), terameters (Tm), and petameters (Pm). How to simplify a number by rounding, rounding to an order of magnitude, sliding the decimal point, and using a number you can remember; this section also offers some calculations comparing the volume and surface area of a sphere and a box. Exponential notation discusses sliding the decimal point, order of magnitude and rounding to it, and scientific notation and how to write and speak it. Fermi questions examine rough quantitative estimates about the world: the "What order of magnitude is ...?" game, a Pinocchio estimation game, an order of magnitude investigation into the emulsification power of raw eggs in the making of mayonnaise, reflections on Why be approximate? and On Being Approximate, and a few notes on landmarks, bounding, and honesty. The scale of some things: people, people-seconds, volume, area, length, time, mass, energy, the ratio of surface to volume, density, speed, volume rate (how fast volume flows, shifts, accumulates, or drains), and power. The section on developing "deep" understanding addresses brevity as a measure of comprehension, and getting a feel for big numbers, with examples on picturing altitude above maps, teleportation, probing near space with a flashlight, and atomic bonding. Body ruler measuring covers length with your body: measuring angle and distance with your thumb. Additional resources include the 1957 book Cosmic View: The Universe in 40 Jumps laid out on web pages.
{"url":"http://mathforum.org/library/topics/estimation/?keyid=38191348&start_at=1&num_to_see=50","timestamp":"2014-04-18T00:24:28Z","content_type":null,"content_length":"12036","record_id":"<urn:uuid:14eb39a8-6413-45fd-8515-87d4e9ed63b8>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
Page:A Treatise on Electricity and Magnetism - Volume 1.djvu/443

Instead of the whole conductor being a uniform wire, we may make the part near $O$ of such a wire, and the parts on each side may be coils of any form, the resistance of which is accurately known.

We shall now use a different notation instead of the symmetrical notation with which we commenced. Let the whole resistance of $BAC$ be $R$. Let $c = mR$ and $b = (1-m) R$. Let the whole resistance of $BOC$ be $S$. Let $\beta = nS$ and $\gamma = (1-n) S$. The value of $n$ is read off directly, and that of $m$ is deduced from it when there is no sensible deviation of the galvanometer. Let the resistance of the battery and its connexions be $B$, and that of the galvanometer and its connexions $G$. We find as before

$D = G\{ BR + BS + RS \} + m(1-m)R^2(B+S) + n(1-n)S^2(B+R) + (m+n-2mn)BRS,$

and if $\xi$ is the current in the galvanometer wire

$\xi = \frac{ERS}{D}(n-m).$

In order to obtain the most accurate results we must make the deviation of the needle as great as possible compared with the value of $(n-m)$. This may be done by properly choosing the dimensions of the galvanometer and the standard resistance wire. It will be shewn, when we come to Galvanometry, Art. 716, that when the form of a galvanometer wire is changed while its mass remains constant, the deviation of the needle for unit current is proportional to the length, but the resistance increases as the square of the length. Hence the maximum deflexion is shewn to occur when the resistance of the galvanometer wire is equal to the constant resistance of the rest of the circuit.

In the present case, if $\delta$ is the deviation, where $C$ is some constant, and $G$ is the galvanometer resistance which varies as the square of the length of the wire. Hence we find that in the value of $D$, when $\delta$ is a maximum, the part involving $G$ must be made equal to the rest of the expression. If we also put $m = n$, as is the case if we have made a correct observation, we find the best value of $G$ to be
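A displayed equation between "if $\delta$ is the deviation" and "where $C$ is some constant" did not survive extraction. Given the stated proportionalities (deflexion per unit current proportional to the wire length, resistance proportional to the square of the length), it is presumably of the form $\delta = C\xi\sqrt{G}$; this is a reconstruction, not the original text, and the symbols $a$ and $b$ below are introduced only for the check. Writing $D = aG + b$, where $b$ collects the terms of $D$ not containing $G$, the maximisation step can be verified directly:

$\delta \propto \frac{\sqrt{G}}{aG + b}, \qquad \frac{d\delta}{dG} = 0 \;\Longrightarrow\; \frac{aG + b}{2\sqrt{G}} = a\sqrt{G} \;\Longrightarrow\; aG = b,$

i.e. the part of $D$ involving $G$ must equal the rest of the expression, as the text states.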
{"url":"http://en.wikisource.org/wiki/Page:A_Treatise_on_Electricity_and_Magnetism_-_Volume_1.djvu/443","timestamp":"2014-04-20T11:58:54Z","content_type":null,"content_length":"28692","record_id":"<urn:uuid:d31583a2-7cd7-48aa-8862-52ed2d36c1c9>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Applications of n-dimensional crystallographic groups

I would like to know what are the applications of the theory of $n$-dimensional crystallographic groups (aka space groups) 1) in mathematics, 2) outside of mathematics, besides the applications to $2$-dimensional and $3$-dimensional crystallography (or related fields like chemistry, or physics, of crystals). One possible application of $4$-dimensional space groups is already reported in the wikipedia article I linked to (see "Magnetic groups and time reversal").

Tags: gr.group-theory, lattices, euclidean-lattices, euclidean-geometry

Comments on the question:
It depends on what you mean by "application": I could argue that the theory of semisimple Lie groups and symmetric spaces is an application of affine Coxeter groups since these are equivalent to root systems. – Misha Mar 10 '13 at 22:38
Thanks Gerry M. for correcting the typos - it was late here and I typed too hastily. – Qfwfq Mar 11 '13 at 13:04
This should be community-wiki. – HJRW Mar 26 '13 at 10:53
This is more of an application of Bieberbach's theorems: in the proof of quasi-isometric rigidity of $\mathbb{Z}^n$ given in the paper arxiv.org/abs/math/0509527, it is proved that a group $G$ quasi-isometric to $\mathbb{Z}^n$ admits a proper isometric action on some (finite-dimensional) Euclidean space. By Bieberbach, the group $G$ is virtually abelian, so contains a $\mathbb{Z}^m$ of finite index. Finally $m=n$ by invariance of growth under quasi-isometry. – Alain Valette Mar 29 '13 at 21:13

4 Answers

One of the well-known applications of crystallographic groups is the classification of flat complete Riemannian manifolds by their fundamental group, which is a torsion-free crystallographic group (aka Bieberbach group). A very nice book about this is "Spaces of Constant Curvature" by Joseph A. Wolf. There are many interesting generalizations in this direction. One is due to John Milnor and Louis Auslander, the so-called affine crystallographic groups. Here the Bieberbach theorems for crystallographic groups have been generalized, at least conjecturally. Every (Euclidean) crystallographic group is virtually abelian (the translations forming an abelian normal subgroup of finite index). The generalization to affine crystallographic groups should be that such groups are virtually polycyclic. In other words, the fundamental group of a complete compact affine manifold should be virtually polycyclic. This is still an open conjecture, called Auslander's conjecture. It has received a lot of attention; see the work of Abels, Margulis and Soifer, ranging from 1995 until 2012.

They occur as cusp cross-sections of non-uniform hyperbolic lattices of one higher dimension. For example, they are useful in the classification of minimal volume lattices.

They are used in string theory to construct Conformal Field Theories which describe orbifold limits of Calabi-Yau spaces. See for example Dixon, Harvey, Vafa and Witten, "Strings on Orbifolds I, II", Nucl. Phys. B274 (1986) 285 and Nucl. Phys. B261 (1985) 678 for an early application in string theory, and Miles Reid in http://arxiv.org/pdf/math/9911165v1.pdf for a more mathematical take on related material.
Comment: Some references would make your answer more useful. – Ian Agol Mar 11 '13 at 17:56
Comment: I added a few early references. – Jeff Harvey Mar 11 '13 at 19:59

The following are applications in the theory of $p$-groups. Space groups have been used by
• Felsch, Neubüser, Plesken: Space groups and groups of prime-power order. IV: Counterexamples to the class-breadth conjecture. Journal London Math. Soc. (2), 24 (1981) 113-122
to construct counterexamples to the class-breadth conjecture for $p=2$. Recall that the conjecture claims $\text{class} \le \text{breadth} + 1$ for $p$-groups $P$, where the breadth $b$ is defined such that $p^b$ is the maximal size of the conjugacy classes of $P$. In their counterexamples $P=S/2^kT$, where $S$ is a space group, $T$ the translation subgroup and $k$ a carefully chosen integer. Space groups whose point groups are $p$-groups are also the core in proving the celebrated coclass conjectures of Leedham-Green and Newman (see the book Leedham-Green, McKay: The structure of groups of prime power order, 2002). I don't know enough to tell details, but it's striking that the series of papers that contain the proof are titled "Space groups and groups of prime-power order" (I-VIII).
{"url":"http://mathoverflow.net/questions/124182/applications-of-n-dimensional-crystallographic-groups/125612","timestamp":"2014-04-21T15:23:14Z","content_type":null,"content_length":"69933","record_id":"<urn:uuid:46e572af-dc30-4041-bb43-e466ba9e0e79>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
Hearne Math Tutor Find a Hearne Math Tutor ...All in all, I have a total of 17 years informal experience in speaking Spanish, 6 years of formalized Spanish education, and 2 semesters of providing formalized instruction to Spanish-speaking students. I graduated from Texas A&M with a degree in anthropology and a minor in linguistics. I love language and find great joy in teaching it to others. 55 Subjects: including linear algebra, geometry, violin, guitar ...I am told I explain hard concepts in some easier to understand terms. Math is easy if you make it easy! I am currently employed at my former high school, where I tutor 3 times a week, and have a specific section for helping students with College Calculus. 7 Subjects: including geometry, prealgebra, trigonometry, algebra 1 ...I am most comfortable with teaching Math and Science. I have taken the MCAT, and I am very familiarized with the content materials for any of the basic math and sciences. I am especially passionate about teaching art in different mediums: oil pastel, charcoal, acrylic, Prismacolor,sculpting, etc…As a student myself, I am very curious about different learning styles. 35 Subjects: including algebra 1, calculus, linear algebra, probability Howdy parents/students, my name is William, and I am currently an engineering PhD student at Texas A&M. I am very proficient at math and an excellent problem solver. In addition to improving your class performance, I will also help you develop your analytical reasoning. 13 Subjects: including algebra 1, algebra 2, calculus, geometry I am currently an engineering student at Texas A&M University. I have tutored in the past, primarily in English (ESL) and math from elementary to college calculus. I can also tutor for SAT/PSAT. 15 Subjects: including calculus, algebra 2, precalculus, trigonometry
{"url":"http://www.purplemath.com/hearne_tx_math_tutors.php","timestamp":"2014-04-19T19:55:22Z","content_type":null,"content_length":"23576","record_id":"<urn:uuid:f0c48507-8c1f-4e29-8922-d34b8fa5864e>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
finding the sum of a series

Find the sum of the series $\sum \frac{3k}{(k+1)!}$ from k=1 to infinity.

Compare to the series for e at x=1 and maybe you can see something. $\frac{k}{(k+1)!}=\frac{1}{k!}-\frac{1}{k!(k+1)}$

The constant 3 can be 'pulled out' from the summation, so that it is...
$3\cdot \sum_{k=1}^{\infty} \frac {k}{(k+1)!} = 3\cdot \sum_{n=2}^{\infty} \frac{n-1}{n!}= 3\cdot \left\{\sum_{n=1}^{\infty} \frac{1}{n!} - \sum_{n=2}^{\infty} \frac{1}{n!}\right\} = 3$
Kind regards
$\chi \sigma$
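As a quick numerical sanity check on the value 3 (this check is not part of the original thread), the partial sums can be computed directly; the script below is only illustrative.

    from math import factorial

    total = 0.0
    for k in range(1, 30):
        total += 3 * k / factorial(k + 1)   # add the k-th term of the series
    print(total)  # converges rapidly to 3.0

Already after a dozen terms the partial sum agrees with 3 to machine precision, consistent with the telescoping argument above.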
{"url":"http://mathhelpforum.com/calculus/116103-finding-sum-series.html","timestamp":"2014-04-16T14:18:32Z","content_type":null,"content_length":"36293","record_id":"<urn:uuid:84952c5b-4480-45bc-99e5-3603f579ba76>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
Middletown Twp, PA Science Tutor Find a Middletown Twp, PA Science Tutor I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun... 12 Subjects: including physics, calculus, geometry, algebra 2 ...I am currently a certified Elementary School Teacher in NJ with a K-8 Certification for all subjects. I also have experience as an Instructional Aide in Elementary School grades for 3 years. For six years I was an Elementary School teacher teaching grades from Kindergarten to 5th grade. 53 Subjects: including physical science, English, reading, algebra 1 ...I also taught the introductory Biology labs at Villanova University for two semesters. I have a BA in Anthropology/Management, an MSE in Technology Management and an MSc in Biomedical Engineering Technology, all from the University of Pennsylvania. Additionally, I have an MSc in Biology from Villanova University and an MSc in Biotechnology from the Johns Hopkins University. 8 Subjects: including biochemistry, biology, writing, physical science ...I have been a teacher for twenty years and I still respect and enjoy my students. I especially get pleasure when they finally master a particular skill. I am highly qualified and have ten years of experience as a tutor. 36 Subjects: including anthropology, biology, archaeology, ACT Science ...I have NJ and PA teaching certificates and a Master's Degree in special education, grades K-12. I have worked as a private tutor with ADD/ADHD students for over 20 years. I specialize in organization and study skills, and pride myself on simplifying the confusion and frustration often experienced by these students. 22 Subjects: including physical science, biology, geology, astronomy
{"url":"http://www.purplemath.com/Middletown_Twp_PA_Science_tutors.php","timestamp":"2014-04-21T04:33:33Z","content_type":null,"content_length":"24380","record_id":"<urn:uuid:6eb1fb46-ebb5-4b94-8037-bf0b6b68e603>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
Annex II - METHODS OF AGGREGATION

1. The present annex begins with the Geary-Khamis method of aggregation, explaining some of its advantages and disadvantages. It then compares the G-K results with several other methods of aggregation to illustrate some of the differences.

A. The Geary system

2. The valuation of a country's output in international prices can be written as:

$rgdp_j = \sum_{i=1}^{m} p_i \, q_{ij}$   (1)

where the p_i's are the international prices for each of the basic headings and rgdp_j is GDP of country j valued at those prices. The particular contribution of Geary was to define the international prices in such a way that they would produce an overall PPP for a country that was consistent with the prices. The definition of the PPP in the ICP is:

$PPP_j = \sum_{i=1}^{m} E_{ij} \Big/ rgdp_j = gdp_j / rgdp_j$   (2)

where E_ij is the expenditure in national currency on basic heading i by country j. That is, the purchasing-power parity over GDP is the ratio of the GDP of a country in national currency to its GDP in international prices.

3. For Geary there were actual quantities and prices associated with the agricultural output that he was concerned with valuing across countries. The international prices would be in a numeraire currency, such as the dollar, and the international prices would be so many dollars per unit quantity, say, ton of rice. In the ICP, there are basic heading parities, PP_ij's, that have been generated by EKS or CPD. These basic heading parities have the dimension of units of currency of country j to the numeraire currency for the basic heading.

4. This means that the interpretation of quantity and price at the basic heading level are not tons and rupees per ton. Rather, the quantity in the G-K technique as used in the ICP is what is termed a notional quantity. It is defined as:

$q_{ij} = E_{ij} / PP_{ij}$   (3)

Each country's expenditure for a basic heading is converted to the currency of the numeraire country; it is termed a notional quantity because it serves the function of a quantity with its values at numeraire country prices.

5. One might ask why one cannot simply add up the notional quantities for each basic heading for a country to get a GDP in a common currency. The answer is that the result would use the relative prices between each basic heading that prevailed in the numeraire country. This means that the total would depend on which country was chosen as numeraire, and the result would not be base country invariant.

6. In the G-K system, the international price for heading i is defined as:

$p_i = \sum_{j=1}^{n} \left( \frac{q_{ij}}{\sum_{k=1}^{n} q_{ik}} \right) \frac{PP_{ij}}{PPP_j}$   (4)

Equation (4) has been written as a weighted sum of the ratios of the heading parities to the aggregate PPP. The weights used to obtain the international prices typically are the notional quantities. Usually, the expenditures (E_ij's) entering into equation (3) are the total expenditures of a country, though alternative weights have been used. a/ For each country this is a ratio that will centre on 1.0 because in the Geary system the PPP_j is a weighted average of the basic heading parities, where the weights are the notional quantities.

7. An important feature of the G-K system is illustrated in equation (5), where the denominator of equation (4) is brought to the left-hand side:

$p_i \sum_{j=1}^{n} q_{ij} = \sum_{j=1}^{n} \frac{PP_{ij}}{PPP_j} \, q_{ij}$   (5)

Each side of equation (5) is a measure of the contribution of output of a basic heading to regional or world GDP. It is only in the G-K system that the valuation of quantities at international prices is consistent with their basic heading parities and expenditures, as well as the overall purchasing-power parity of each country.
8. Equations (1) and (4) represent the complete G-K system when PPP_j and q_ij are defined as in equations (2) and (3). When m is over 150 and n is over 60, this appears to be a large system to solve. However, it turns out that the easiest way to solve the system is by iteration; and it also turns out that the iterative procedure is itself instructive, as the following discussion is intended to show.

9. The basic data are the expenditures (E_ij's) and parities (PP_ij's) at the basic heading levels, and from these the q_ij's can be derived. Consider an iteration that begins by initially setting each PPP_j equal to the exchange rate. For example, if the United States were the base country, then its initial PPP would be 1.0 and the initial PPPs for the other countries would be their exchange rate relative to the dollar. Then a set of international prices can be estimated using equation (4). These p_i's can then be plugged into equation (1) and then equation (2) to estimate a set of PPP_j's. The process can then be repeated beginning with the new PPP_j's. The iteration will be complete when the difference between the initial set of PPP_j's and the end set is very small. Typically, in eight iterations the differences will only be observed at the fourth decimal place. It is unlikely that when the last iteration is complete the new PPP for the United States will equal 1.0. The system is then normalized so that each new PPP is adjusted so that the United States value will be 1.0, and the p_i's appropriately scaled so that, for the United States, gdp and rgdp as obtained from equation (1) are equal.

10. While one can begin the iteration with any set of values, there is another way to begin that is also instructive. Consider setting each of the initial international prices (p_i's) equal to 1.0. The same loop can then be followed, estimating the PPP_j's when the p_i's are all 1.0, and working back through the system to obtain a new set of international prices, and a new set of PPPs and so on. A normalization as described in paragraph 9 would also be carried out to make the PPP of the base country 1.0. Beginning with all international prices equal to 1.0 is equivalent to using the relative price structure of the numeraire country. The fact that the final set of international prices will differ substantially from 1.0, no matter which country is numeraire, again illustrates why one cannot simply sum up the notional quantities given in equation (3).

11. This discussion should also make clear that the international prices of the ICP centre around 1.0 and are used to value a quantity that has no natural dimension, such as a kilogram, but has a notional character depending on the numeraire currency. b/ The iteration procedure also illustrates how the Geary system achieves additivity across countries and basic headings to achieve matrix consistency.

12. As discussed in the text the major advantage of the Geary system is that the international prices are analogous to the prices used to generate the national accounts of an individual country. In the Geary formulation, large rich countries receive more weight in determining international prices used to value quantities in each country. This means that the structure of international prices will tend to be closer to those of rich countries. There is also usually an inverse relationship between price and quantity across countries, so that items that are expensive in poor countries, for example, will be consumed in relatively small quantities and vice versa.
The G-K price structure will tend to value the large quantities of relatively inexpensive items in poor countries, such as services, at higher prices. Conversely, those items that are relatively cheap in rich countries, such as transport equipment, will be valued at international prices closer to their national value. This effect is present in all of the aggregation systems since it is part of the world economic structure that the ICP is attempting to represent.

13. However, the international price systems that are explicit or implicit in other systems are usually closer to middle-income countries because the weights used are not in proportion to country GDP. As a consequence, the G-K system tends to lower the income of rich countries relative to poor countries more than the other aggregation methods. Some regard this as a desired result stemming from the national accounts basis of the G-K system, while others regard it as a drawback. c/

B. Other aggregation methods

1. Additive systems

14. One type of aggregation system, devised by D. Gerardi, that was used by EUROSTAT was based on international prices used to evaluate notional quantities, as in equation (1) above. The Gerardi system was compared with the G-K system by Hill (1982, pp. 51-59), and that discussion will not be repeated here. The objective of both the Gerardi system and other international price systems with which EUROSTAT has experimented has been to retain an additive system that does not use international prices close to those of larger countries. Another way of putting this is to say that there are those who want matrix-consistent comparisons, but do not want to use a set of prices that are quantity weighted as in national accounts. Gerardi's international prices, for example, were initially based on equal weights to the p·q's of each country.

15. Another motivating factor for those seeking alternatives to the G-K system that are inherently additive is that G-K is a simultaneous system that requires all information from all countries before it can be calculated. A price change in one basic heading can, in principle, change the estimates of other basic headings. d/ Also, results of the G-K system can change as the number of countries included in the aggregation changes, though this is also true for most other aggregation systems.

2. EKS and related systems

16. Erwin Diewert has made an extensive review of indexes that might be used in international comparisons, and has come up with a class of what he terms superlative indexes (Diewert, 1978). What he finds is that indexes built up from Fisher-type comparisons between two countries have a number of desirable properties that flow from the theory of consumer choice. From this it follows that a multilateral index based on Fisher binary indexes, such as the EKS system, appears to have more theoretical rationale than the G-K system.

17. While Diewert's arguments provide some support for EKS, the issue is not so easily resolved. First, 30 to 40 per cent of expenditures on GDP are typically not chosen on the basis of relative prices. That is, most government expenditures and much of investment is not allocated on the basis of the principles underlying consumer choice. It is not claimed that EKS, G-K or any other system is necessarily better for comparing these expenditures, but that the theory of consumer choice is applicable to a portion of GDP only.

18. The second point relates to additivity.
There are several systems that have been used that, like EKS, produce an overall comparison for all the basic headings entering the aggregation. One of these systems, the van Yzeren system, was proposed for the European Coal and Steel Community and another, the Walsh or expenditure weight system, has been used in Latin American comparisons. e/ The EKS system, as well as the Walsh and van Yzeren systems, provide a PPP over GDP or whatever aggregate for which they have been computed. However, they do not have an implicit system of international prices, so there is no explicit allocation of the expenditures within the aggregate and no inherent additivity. It is simple enough to impose additivity by, say, distributing the expenditures on GDP obtained by EKS across the categories according to the distribution of those expenditures in national currencies. The disadvantage of this is that the method is arbitrary and no information about the price structure in other countries is used in comparing the structure of expenditures in one country with those in another.

19. One further point is that for some purposes the only number sought is for an aggregate such as consumption. One might, for example, want to use the PPP for consumption to compare real wages across countries. In this case, an EKS aggregation may be preferred to the G-K method for two reasons. First, since additivity is not needed in this example, one drawback to using EKS is removed. Secondly, the implicit weighting involved in EKS is equal among countries so that for converting wages across countries it may make more sense to think of using a PPP that assigns the same importance to the market basket of each country. (The latter weighting system can also be achieved by G-K.)

20. To give some impression of what differences are involved in the various methods, results are given below from the phase III report for six countries, spanning the range of per capita incomes in the world. The entries give the per capita income of each country relative to the United States as 100 for each country.

Country per capita GDP, 1975 (US = 100)
│ Method           │ India │ Kenya │ Colombia │ Republic of Korea │ Japan │ France │
│ 1. Binary-Fisher │ 6.0   │ 5.8   │ 19.7     │ 17.2              │ 67.5  │ 80.2   │
│ 2. Geary-Khamis  │ 6.6   │ 6.5   │ 22.6     │ 19.9              │ 68.6  │ 81.9   │
│ 3. EKS           │ 5.7   │ 5.4   │ 19.9     │ 17.8              │ 65.3  │ 81.1   │
│ 4. Walsh         │ 6.4   │ 4.8   │ 19.5     │ 17.6              │ 66.1  │ 80.0   │
│ 5. Van Yzeren    │ 5.7   │ 5.4   │ 19.9     │ 17.7              │ 65.3  │ 81.0   │
│ 6. Gerardi       │ 5.7   │ 5.8   │ 20.4     │ 18.5              │ 66.6  │ 77.8   │
│ 7. Exchange rate │ 2.0   │ 3.4   │ 7.9      │ 8.1               │ 62.3  │ 89.6   │
Source: Kravis, Heston and Summers, 1982, pp. 96-97.

21. The differences between the first six rows for any one country are less than 5 per cent for Japan and France, and less than 15 per cent for the remaining countries. A seventh row is also provided for the exchange rate conversions, indicating that all of the other methods are much closer to one another than to use of the exchange rate, and for Japan and France, the deviations are over 10 per cent and in opposite directions. Thus, while results of the different methods can vary from one another, their general orders of magnitude for each country and variation across countries tell a fairly consistent story.

22. It would be nice if there were a simple conclusion to be drawn from this discussion, but that would imply that somehow the ICP had solved the index number problem, which it assuredly has not.
One longs for one measure because that would be simple to explain to users, especially users providing resources for the work. It is also tidier to have only one result. However, because there are a variety of uses for which the ICP results are desired, for the present, more than one result will be produced, though in official publications the differences will be minimized.

C. Some loose ends

23. Some expenditure categories can be negative, such as change in stocks or the net foreign balance. These categories do not make much sense in any method using international prices because the Geary system, for example, is based on positive quantities and prices. Therefore, in the G-K system, the actual solution is carried out over the non-negative basic headings. The parities assigned to the net foreign balance and the net expenditures of residents abroad is the exchange rate. (A different treatment is made for countries with a large amount of tourist expenditures, such as Austria, where net expenditures of residents abroad may be distributed among the important headings, and no expenditure is retained in that heading.) In phases I to III of the ICP, the international price for these two headings was defined as in equation (4), but has since been assumed to be 1.0.

24. For change in stocks, a parity is calculated from the G-K result based on those basic headings that are commodities. That parity is assigned to the change in stocks. The international price for the category is then calculated from equation (4) above. Any normalization to make the PPP of the base country 1.0 is then carried out involving the international prices of all basic headings. In the comparisons of methods given above, the actual comparison is over the non-negative categories, since this appeared to put all methods on the most comparable basis.

a/ For the world comparisons in phases I-IV, countries were assigned additional weight to reflect the importance of countries not included in the benchmark comparisons. The total expenditures of a country were termed its supercountry weight, and the sum of all supercountry expenditures would be world GDP. One reason for using supercountry weights was to estimate the international prices that were implicit in world GDP. Since the G-K result does depend on the number of countries in the calculation, the use of supercountry weights was designed to approximate the international prices if all countries in the world were participating in the ICP. This in turn should, in principle, make the results from earlier benchmark ICP comparisons, when relatively few countries participated, better approximate later comparisons involving more benchmark countries. In the Geary system, it is also possible to use per capita expenditure weights or other weighting systems. For example, one could assign equal weights to each country over all expenditures and in effect use the percentage expenditure for each basic heading as the country weight. The discussion in this annex assumes that the overall weight for each country is their GDP, or supercountry GDP.

b/ The international prices will depend on the numeraire country chosen. This point is discussed in Kravis, Heston and Summers (1982, pp. 94-95). Two other technical points may be noted. First, some regions have chosen to use a numeraire currency outside the region, as, for example, Africa. In the African comparisons, all prices and expenditures are initially converted into United States dollars at exchange rates.
In the African study, no single country is used as the base, but rather the average of all countries is used. While the results of the African study are presented in dollars, this does not make them comparable with other countries, such as the United States, because dollar conversions have only been carried out at exchange rates. A second point is that when an average of a group of countries is used, as in Africa or the European Communities, there will still be a set of international prices implicit in the calculation. In the African case, the system would be normalized to make the sum of expenditures of all headings and all countries converted at exchange rates equal to the sum of all notional quantities valued at international prices. For any particular basic heading, this equality would not hold, and the ratio of the sum across all countries of the basic heading notional quantities valued at international prices to their value at exchange rates would be the international price for that basic heading.

c/ Usually, the G-K results are criticized because they depart from Fisher binary results, being closer to the Laspeyres than the Paasche estimate for poor countries. However, the binary comparisons being used as a reference weight each country the same. The EKS system, which is an indirect least squares type of estimate from the binaries, naturally comes closer to the Fisher result than does G-K. However, Prasada Rao has shown that, if a binary is done using the GDP weights of the G-K system, then the multilateral G-K is a direct least squares estimate based on the binaries and, of course, comes much closer to the G-K binaries than does EKS. The point, then, is that it is really the weighting system that produces more difference between methods than other factors (see Prasada Rao (1972)).

d/ This can readily be seen by examining equation (5). A price change affects a PP_ij and that may affect the PPP_j's and work itself through the entire system. Any other system that was matrix consistent would also be affected. Systems like EKS would be affected in the aggregate but, because there are no explicit estimates of basic heading quantities in EKS, there is no visible effect at the detailed level.

e/ These systems are discussed in Kravis, Kenessey, Heston and Summers (1975), pp. 66-68.
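To make the iterative solution described in paragraphs 8-10 concrete, here is a minimal sketch in Python/NumPy. It is not part of the annex: the array names E (expenditures by basic heading and country) and pp (basic-heading parities relative to the numeraire, taken to be country 0) are hypothetical, and the loop simply alternates equation (4) with equations (1)-(2), normalising so that the numeraire country's PPP stays at 1.

    import numpy as np

    def geary_khamis(E, pp, tol=1e-10, max_iter=200):
        # E[i, j]: expenditure on basic heading i in country j (national currency)
        # pp[i, j]: basic-heading parity of country j relative to the numeraire (country 0)
        q = E / pp                                # notional quantities, eq. (3)
        ppp = np.ones(E.shape[1])                 # initial guess for the PPPs
        for _ in range(max_iter):
            # eq. (4): international price = notional-quantity-weighted mean of pp/PPP
            p = (q * (pp / ppp)).sum(axis=1) / q.sum(axis=1)
            rgdp = (p[:, None] * q).sum(axis=0)   # eq. (1): GDP at international prices
            new_ppp = E.sum(axis=0) / rgdp        # eq. (2): PPP over GDP
            new_ppp /= new_ppp[0]                 # normalise: numeraire country's PPP = 1
            if np.max(np.abs(new_ppp - ppp)) < tol:
                return p, new_ppp
            ppp = new_ppp
        return p, ppp

At the fixed point, GDP and real GDP coincide for the numeraire country, which is the scaling described in paragraph 9; in practice the loop settles in a handful of iterations, in line with the "eight iterations" remark in the text.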
{"url":"http://unstats.un.org/unsd/methods/icp/ipc7_htm.htm","timestamp":"2014-04-19T02:32:43Z","content_type":null,"content_length":"37375","record_id":"<urn:uuid:eae82041-fc4f-4dc0-a9b6-2e3c9df34d8d>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
The Forward model - the imaging process

``It is no use saying `We are doing our best.' You have got to succeed in doing what is necessary.'' - Winston Churchill

In order to make quantitative data analysis possible it is necessary to know the absolute scale of the instrument. For single station imaging it is sufficient to calculate the absolute sensitivity, i.e. the number of photons needed inside the field-of-view of the pixel on the front lens in order to create one count in the image. The primary measurable in imaging is raw counts. With knowledge of the number of photons per count, the effective area of the camera and the pixel field-of-view, it is possible to convert the raw counts to a surface brightness, defined as photons per solid angle per area per time. This corresponds to the total number of photons emitted from a column with unit area in the direction of the pixel line-of-sight. Assuming that the emission is isotropic, the total number of photons emitted in all directions is: In aeronomy the unit for column emission rate has been given the name Rayleigh (Hunten et al., 1956).

For an imaging system intended for tomographic inversion it is necessary to know what fraction of photons emitted in a voxel creates a count in a pixel in the image. Factors which need to be taken into account are:

• Voxel - pixel field-of-view intersection: For all inverse problems in this work there is a voxel representation of the distribution of emission. To calculate the contribution to the image intensity in one pixel from a voxel, the intersection volume must be determined (section 5.1).
• Pixel field-of-view: The pixel field-of-view must be determined (section 5.2).
• Atmospheric absorption: The light from the aurora and airglow is absorbed in the lower atmosphere, mainly in the stratosphere. This absorption depends on the zenith angle, among other factors (section 5.3).
• Effective collecting area: The effective collecting area of the optical system is essentially the size of the front lens as seen from the direction of the voxel. Further, the limiting aperture of the optics might change with the angle relative to the optical axis. This is described in section 5.4.
• Transmission of optics: Variation of the transmission of the optical system with angle relative to the optical axis should be accounted for, as described in section 5.5.
• Variation in exposure time: The exposure time varies slightly from pixel to pixel due to the working of the shutters; a first order correction for this is described in section 5.6.
• Point spread function: The point spread functions (PSF) must be determined. The PSF is the image of a point source. The procedure for determining the PSF is outlined in section 5.7.
• Pixel sensitivity: The sensitivity of individual pixels must be determined; the necessary requirements are outlined in section 5.8.

copyright Björn Gustavsson 2000-10-24
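The two displayed equations in this section (the definition of surface brightness and the total emission under the isotropy assumption) did not survive extraction. A plausible reconstruction, using $B$ as a shorthand of my own for the surface brightness (not necessarily the thesis's notation) together with the conventional definition of the Rayleigh, is

$B = \frac{\text{photons}}{\mathrm{m}^{2}\,\mathrm{sr}\,\mathrm{s}} \quad\text{along the line of sight}, \qquad I = 4\pi B \quad\text{for isotropic emission},$

where $I$ is the column emission rate; the Rayleigh is then $1\,\mathrm{R} = 10^{6}$ photons emitted per $\mathrm{cm}^{2}$ column per second, i.e. $10^{6}/4\pi$ photons $\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$.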
{"url":"http://www.irf.se/~bjorn/thesis/node19.html","timestamp":"2014-04-19T10:00:13Z","content_type":null,"content_length":"12907","record_id":"<urn:uuid:3d00d063-73b3-4d14-891b-ae0d68d81431>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
This EXTRACT_SLICE function returns a two-dimensional planar slice extracted from 3D volumetric data. The slicing plane can be oriented at any angle and pass through any desired location in the volume. This routine is written in the IDL language. Its source code can be found in the file extract_slice.pro in the lib subdirectory of the IDL distribution.

Calling Sequence
Result = EXTRACT_SLICE( Vol, Xsize, Ysize, Xcenter, Ycenter, Zcenter, Xrot, Yrot, Zrot )

Arguments

Vol
The volume of data to slice. This argument is a three-dimensional array of any type except string or structure. The planar slice returned by EXTRACT_SLICE has the same data type as Vol.

Xsize
The desired X size (dimension 0) of the returned slice. To preserve the correct aspect ratio of the data, Xsize should equal Ysize. For optimal results, set Xsize and Ysize to be greater than or equal to the largest of the three dimensions of Vol.

Ysize
The desired Y size (dimension 1) of the returned slice. To preserve the correct aspect ratio of the data, Ysize should equal Xsize. For optimal results, set Xsize and Ysize to be greater than or equal to the largest of the three dimensions of Vol.

Xcenter
The X coordinate (index) of the point within the volume that the slicing plane passes through. The center of the slicing plane passes through Vol at the coordinate ( Xcenter, YCenter, Zcenter ).

Ycenter
The Y coordinate (index) of the point within the volume that the slicing plane passes through. The center of the slicing plane passes through Vol at the coordinate ( Xcenter, YCenter, Zcenter ).

Zcenter
The Z coordinate (index) of the point within the volume that the slicing plane passes through. The center of the slicing plane passes through Vol at the coordinate ( Xcenter, YCenter, Zcenter ).

Xrot
The X-axis rotation of the slicing plane, in degrees. Before transformation, the slicing plane is parallel to the X-Y plane. The slicing plane transformations are performed in the following order:

Keywords

RADIANS
Set this keyword to indicate that Xrot, Yrot, and Zrot are in radians. The default is degrees.

OUT_VAL
Set this keyword to a value that will be assigned to elements of the returned slice that lie outside of the original volume.

Example
Display an oblique slice through volumetric data:
vol = RANDOMU(s, 40, 40, 40) ; Create some data.
FOR i=0, 10 DO vol = SMOOTH(vol, 3) ; Smooth the data.
vol = BYTSCL(vol(3:37, 3:37, 3:37)) ; Scale the smoothed part into the range of bytes.
slice = EXTRACT_SLICE(vol, 40, 40, 17, 17, 17, 30.0, 30.0, 0.0, $
  OUT_VAL=0B) ; Extract a slice.
TVSCL, REBIN(slice, 400, 400) ; Display the 2D slice as a magnified image.
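For readers without IDL, a rough Python analogue of the same operation is sketched below using scipy.ndimage.map_coordinates. It is not the IDL routine itself: in particular the rotation order (x, then y, then z) and the centring of the in-plane grid are assumptions, since the original list of transformation steps did not survive extraction.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def extract_slice(vol, xsize, ysize, xc, yc, zc, xrot, yrot, zrot, out_val=0.0):
        # Build the rotation matrix; the x-then-y-then-z order is an assumption.
        ax, ay, az = np.radians([xrot, yrot, zrot])
        rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
        ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
        rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
        rot = rz @ ry @ rx
        # In-plane sample grid, centred on (xc, yc, zc); before rotation the plane is parallel to x-y.
        u, v = np.meshgrid(np.arange(xsize) - xsize / 2.0,
                           np.arange(ysize) - ysize / 2.0, indexing='ij')
        pts = rot @ np.stack([u.ravel(), v.ravel(), np.zeros(u.size)])
        pts += np.array([[xc], [yc], [zc]])
        # Trilinear interpolation; points outside the volume get out_val.
        return map_coordinates(vol, pts, order=1, mode='constant',
                               cval=out_val).reshape(xsize, ysize)

Called as extract_slice(vol, 40, 40, 17, 17, 17, 30.0, 30.0, 0.0, out_val=0), this mirrors the spirit of the IDL example above, though pixel-for-pixel agreement is not guaranteed.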
{"url":"http://www.astro.virginia.edu/class/oconnell/astr511/IDLresources/idl_5.1_html/idl99.htm","timestamp":"2014-04-20T08:24:52Z","content_type":null,"content_length":"8879","record_id":"<urn:uuid:261f25ec-69cc-4814-b4c1-f04ca48c61e1>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics for Elementary Teachers: A Conceptual Approach, 5/e; Grids and Dot Paper

Grids and Dot Paper
Grids, dot paper, base ten pieces and geoboards, and the places where they are used in the text are shown below. Copies for making transparencies or printing these materials can be obtained by selecting from this list:

Centimeter, Two Centimeter, One-half Centimeter Grids, Inch and 1/4 Inch Grids:
• Sketching rectangular arrays for illustrating factors and prime and composite numbers in Section 4.1;
• Forming patterns for cubes and sketching projections of three dimensional figures in Exercises and Problems 9.3;
• Sketching symmetric figures in Section 9.4;
• Finding and approximating areas of plane figures in Section 10.2;
• Forming open-top boxes in Exercises and Problems 10.3; graphing functions in Section 2.2;
• Graphing data in Exercises and Problems 7.1 and 7.2.

Centimeter Dot Paper, Half-centimeter Dot Paper, 1/4 Inch Dot Paper:
• Illustrating irrational numbers in Section 6.4;
• Sketching plane figures in Section 9.2;
• Sketching images of plane figures for transformations in Sections 11.2 and 11.3.

Isometric Dot Paper
• Sketching three-dimensional figures and their images for transformations in Exercises and Problems 11.2

Base Ten Grid
• Illustrating multiplication and division of whole numbers in Sections 3.3 and 3.4 and decimals in Section 6.2 using the rectangular model.

Percent Grids
• Illustrating percents and operations with percents in Section 6.3

Rectangular Coordinate System
• Graphing functions in Section 2.2; and graphing plane figures and their images for congruence and similarity mappings in Sections 11.2 and 11.3

Regular Polygons
• Tessellating with polygons in Section 9.2; and forming Escher-type tessellations in Section 11.2.

Rectangular Geoboards
• Sketching polygons in Sections 9.1 and 9.2; and illustrating the slopes of line segments in Section 2.2.

Circular Geoboards
• Sketching figures in Exercises and Problems 9.4 satisfying conditions of symmetry.

Decimal Squares
• Squares for tenths, hundredths, and thousandths. Shade the squares as needed for Sections 6.1 and 6.2.
{"url":"http://www.mhhe.com/math/ltbmath/bennett_nelson/conceptual/instructor/grids.mhtml","timestamp":"2014-04-19T12:24:09Z","content_type":null,"content_length":"5907","record_id":"<urn:uuid:66a590ba-f6ee-43df-b85a-0748dc209d1a>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
Identifying quadric surfaces Hi there, I am learning about quadric surfaces in my second year multivariable calculus course. I would like to know how most people would identify (find the name of) a quadric surface if they had the equation. We only need to know 6 different quadric surfaces, so should I ... (a) memorize the equations to identify the surface, or (b) sketch out what the xy-, xz-, and yz-planes look like, sketch out the surface to see what it looks like, then identify it? What do you do? Thanks in advance.
{"url":"http://www.physicsforums.com/showthread.php?p=2912943","timestamp":"2014-04-19T02:11:48Z","content_type":null,"content_length":"19658","record_id":"<urn:uuid:f6c28a62-4d5b-4976-a0ba-30997d0ace25>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Another Stats question

a. 0.95 of the workers are innocent, and of those, 0.1 will be detected as guilty. This is 0.095 of the workers in total. 0.05 of the workers are guilty, and of those, 0.9 will be detected. This is 0.045 of the workers in total. Therefore, 0.095 + 0.045 = 0.14 of the workers are fired.

b. From a, we can see that the ratio of innocent to guilty is 0.095:0.045. So, the proportion of guilty people is 0.045/0.14 ≈ 0.32

c. 0.05 of the workers were guilty, and 0.1 of those were not fired, so the total non-fired guilty people is 0.005. 0.86 of the workers were not fired, so the proportion of those that are guilty is 0.005/0.86 ≈ 0.0058

d. This question is based on opinion, so you need to answer this one yourself.
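A quick numerical check of parts (a)-(c), not part of the original thread; the 0.95/0.05 split and the 90%/10% detection rates are taken from the answer above, since the question itself is not quoted here.

    innocent, guilty = 0.95, 0.05
    detect_if_guilty, detect_if_innocent = 0.90, 0.10

    fired = innocent * detect_if_innocent + guilty * detect_if_guilty
    print(fired)                                        # (a) 0.14
    print(guilty * detect_if_guilty / fired)            # (b) ~0.3214
    not_fired = 1 - fired                               # 0.86
    print(guilty * (1 - detect_if_guilty) / not_fired)  # (c) ~0.0058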
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=22293","timestamp":"2014-04-18T08:17:26Z","content_type":null,"content_length":"11278","record_id":"<urn:uuid:c48b487b-a232-4db7-8946-6487b5e9a1df>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
An Introduction to Noncommutative Differential Geometry and its Physical Applications Non-specialists may, I think, be forgiven for feeling confused by the title of J. Madore's An Introduction to Noncommutative Differential Geometry and its Physical Applications. It's not too easy to see in what sense the differential geometry we know and love is "commutative" and even harder to imagine what a "noncommutative" geometry might look like. The first words of the introduction help us out with the first question. They point out that if V is a set of points then the set of complex-valued functions on V is a (finite-dimensional) commutative (and associative) algebra. If V is a compact space, then we can restrict to continuous complex-valued functions on V and we get an algebra C0(V) which is in fact a "C*-algebra," and if V is a smooth manifold we can look at smooth functions, and so on. It turns out that much of classical differential geometry can be expressed in terms of such algebras, and the idea of "noncommutative geometry" is to generalize this version of differential geometry to the case of noncommutative algebras. Amazingly, this turns out to yield a theory that is not only interesting mathematically but also useful in understanding the mathematics of quantum field theory. This book, volume 257 in the traditional "London Mathematical Society Lecture Note Series", is intended as an accessible introduction to the subject for non-specialists. It looks to me that the author has done a good job of opening the way to understanding a difficult theory. This is the second edition of a book first published in 1995, and the very fact that a new edition has appeared so soon is an indication that the book has been successful. Not for the faint of heart, but worth a look. 1. Introduction; 2. Differential geometry; 3. Matrix geometry; 4. Non-commutative geometry; 5. Vector bundles; 6. Cyclic homology; 7. Modifications of space-time; 8. Extensions of space-time.
{"url":"http://www.maa.org/publications/maa-reviews/an-introduction-to-noncommutative-differential-geometry-and-its-physical-applications","timestamp":"2014-04-18T14:14:21Z","content_type":null,"content_length":"96804","record_id":"<urn:uuid:8e3fcc5f-1e81-41e3-b2a9-c62adede5fb6>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
Comments on The Geomblog: Proofs and Reputations

CAN ANYBODY HELP ME FINDING RECENT NEWS (TODAY IS 28 JULY 2005!!!) ON PROF. LOUIS DE BRANGES AND HIS DEMONSTRATION OF THE RIEMANN HYPOTHESIS? NO NEW INFORMATION CAN BE FOUND ON INTERNET... I AM DESPERATE FOR NEWS. BEST WISHES. PROF. MARIANELA M.
— Posted by MARTIGNONI MARIANELA (DIABOLIK21 at FREESURF dot CH), 2005-07-26

Hmmm.. the math joke you heard seem to be derived from Hardy's unique "travel insurance" - before embarking on a ship journey, he'd telegraph to let his hosts know that he'd solved the RH. His logic was that God wouldn't allow eternal posthumous glory for Hardy and would hence keep the journey safe :)
Saty Raghavachary
— Anonymous, 2004-09-21

The reason the mathematical community is slow to accept the de Branges proof is two-fold.
First, in 1998 there was a paper showing that the techniques being used by de Branges were flawed (found at http://arxiv.org/abs/math.NT/9812166). Of course, 1998 is a long time ago, so he may have found something new to modify his approach.
Second, he is considered a maverick. He did prove the Bieberbach Conjecture, however (and this is according to the mathematicians of the time) cried wolf on a number of occasions.
Finally, it should be remembered that he has not actually published his proof for the world to see. In fact, without the press-release, no one would even know he is close. Compare this to Perelman's work on Poincare, which was immediately made public.
All in all, this reminds me of a math joke I once heard. When an older professor was asked why the name of his talk was "On the proof of the Riemann Hypothesis" for an upcoming math conference, he admitted the following: He did not have a proof, nor was he even going to talk about the Riemann Hypothesis, but the title was in case something horrible happened on his way to the conference. Everyone would always wonder if he did...
— Anonymous, 2004-08-04

Yes. thanks for pointing that out.
— Suresh, 2004-08-03

Shouldn't that be "...but it cannot be _zero_", as opposed to 'nonzero'?
— Anonymous, 2004-08-03
{"url":"http://geomblog.blogspot.com/feeds/109145536040409908/comments/default","timestamp":"2014-04-16T07:14:47Z","content_type":null,"content_length":"11533","record_id":"<urn:uuid:858645a6-ca28-4c4c-8f8c-a6e704ac9b71>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
What else could the Higgs be? Scientists might need to go beyond the Standard Model to explain the mass of the Higgs-like boson observed at the Large Hadron Collider. On July 4, scientists around the world popped open champagne bottles and toasted the culmination of nearly five decades of research. They had discovered a new particle, one that looked awfully similar to the long-sought Higgs boson. The Higgs boson has for decades been the last missing piece of the Standard Model of particle physics. But even if the new particle completes the puzzle, some of its pieces still refuse to fit. “If it’s the Standard Model Higgs boson, the picture will be complete, but it won’t be satisfying,” says Eilam Gross, co-convener of the ATLAS Higgs physics group. “There are questions we can only begin to answer by going beyond the Standard Model.” As the theory tells it, particles gain mass by wading through a kind of force field. Imagine holding a handful of sand. Cup your hands together and apply some pressure. Open your hands. If the sand is dry, it shifts and dissipates. If it’s wet, it’ll hold the shape you pressed it into. The idea is the same for how particles gain mass. Just as the sand gains a new shape depending on its level of moisture, particles gain mass depending on their level of interaction with the Higgs field. In return, the Higgs boson gains mass through its interactions with other particles—and through its interactions with itself. Interactions with one group of particles add huge amounts of mass to the Higgs, and interactions with another, smaller group of particles subtract it. Although the Standard Model itself does not predict a specific mass for the Higgs, when other concepts and theories are considered, the mass of the Higgs can add up to an astronomical number. Considering this, the mass at which scientists found the possible Higgs, about 125 gigaelectronvolts, can seem surprisingly small. If the particle is indeed a Higgs boson, theories beyond the Standard Model can account for its confusing mass. Three popular explanations involve the ideas of supersymmetry, compositeness and extra The supersymmetric Higgs: More new particles One way to tweak the calculation of the Higgs mass is to add new variables into the equation. The theory of supersymmetry postulates that every elementary particle has a partner particle. Conveniently, each of these partner particles has an opposite effect on the Higgs mass. One partner adds to the mass; the other takes it away. But they don’t completely cancel one another out. The partner particles in theories of supersymmetry are super-partners, particles that mirror the particles in the original Standard Model set but have more mass. When you add super-partners to the mix, the math of the Higgs mass nearly balances out. “That’s one of the reasons everyone loves supersymmetry,” says Fermilab physicist Don Lincoln, who is part of the CMS experiment at the Large Hadron Collider and the DZero experiment at the Tevatron. Another reason is that, if scientists find that this new boson is a supersymmetric Higgs, they’ll know a whole new collection of partner particles is out there, waiting to be found. One of those super-partners, a massive particle called a neutralino, might even turn out to be dark matter, the mysterious substance thought to make up about 25 percent of our universe. The composite Higgs: Even smaller particles Some theorists have proposed a different solution for the Higgs’ mass. 
They’re exploring the idea of a composite Higgs particle—a particle that behaves like the Standard Model Higgs boson, but is made up of even smaller particles. Calculating the mass of a composite Higgs particle would be different from calculating the mass of a fundamental, unbreakable Higgs. If the Higgs were made up of tinier pieces, its mass would consist of the mass of those pieces plus the energy of the force holding them together. That could add up to 125 gigaelectronvolts. A composite Higgs would shake up the world of particle physics. If the Higgs were made up of smaller particles, other particles we currently view as the fundamental building blocks of the universe could be made up of smaller particles as well. Compositeness could birth a new layer of fundamental physics, from which completely new theories would spring. Extra dimensions: New worlds A third explanation for the Higgs' mass is that we simply are not seeing it all because we’re not studying it in all of its dimensions. “If I do a calculation in three-dimensional space, I get a different answer than if I do a calculation in five-dimensional space,” says Fermilab theorist Joe Lykken. “The effects you thought were going to be large [in three-dimensional space] now are not large.” Scientists explain the weakness of gravity in a similar fashion. Gravity may keep us on the ground, but compared to the other forces in our universe, it’s a total wimp. If gravity were as strong as the other forces, even a coffee mug would be too heavy for a person to lift. But it might be that the force of gravity actually exists in more than just three dimensions of space. It only seems weak because not all of its force resides in dimensions we can detect. In the same way, the calculations that predict the Higgs' mass could be tempered by the effect of dimensions beyond our own. Discovering extra dimensions could help scientists understand gravity and its relation to the other forces in the universe. It could also embolden string theorists, whose theories expand our universe into at least 11 dimensions of space. The excitement builds Even if the new particle is the Standard Model Higgs, “we will still have questions,” says CERN theorist Christophe Grojean. “Just adding one more fundamental particle does not bring us the fundamental answer.” That’s because the Standard Model is not the theory of everything. With the confirmation of the Standard Model Higgs boson, the theory would be whole—but not comprehensive. Scientists would still need to solve the mysteries of dark energy, dark matter and the weakness of gravity, among others. Finding out that the new particle is not a Standard Model Higgs—or finding out that it’s not a Higgs at all—could offer scientists a roadmap of where to search next. “If it’s the Standard Model Higgs boson, that’s great, but it doesn’t offer any experimental clues of where we should look next,” says Albert De Roeck, co-convener of the CMS Higgs physics group. The story of the new, Higgs-like particle is far from over, a fact that’s keeping the excitement at the LHC palpable. Physicists are eager to see where the data takes them next. Note: An earlier version of this article included a misleading statement: "According to the Standard Model, the mass of the Higgs boson should be enormous." In fact, the Standard Model itself does not predict the mass of the Higgs boson. It is only when other concepts, such as quantum mechanics, are also considered that a prediction can be made.
{"url":"http://www.symmetrymagazine.org/article/october-2012/what-else-could-the-higgs-be","timestamp":"2014-04-18T00:15:13Z","content_type":null,"content_length":"90684","record_id":"<urn:uuid:1d0d1f6d-8a3a-4f3d-bc07-2488e6a67cf7>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
Infinite Geometric Series
9.2: Infinite Geometric Series Created by: CK-12
This activity is intended to supplement Calculus, Chapter 8, Lesson 2. ID: 11065. Time required: 45 minutes.
Activity Overview
In this activity, students will explore infinite geometric series. They will consider the effect of the value of the common ratio and determine whether an infinite geometric series converges or diverges. They will also consider the derivation of the sum of a convergent infinite geometric series and use it to solve several problems.
Topic: Sequences & Series
• Explore geometric sequences
• Sum a geometric series
• Convergence of an infinite geometric series
Teacher Preparation and Notes
• This activity serves as an introduction to infinite geometric series. Students will need to have previously learned about finite geometric series.
• Before beginning the activity, students need to clear all lists and turn off all plots and equations. To clear all lists, press $2^{nd}$ LIST, scroll down to ClrAllLists and press ENTER.
Associated Materials
Problem 1 – Investigating Infinite Geometric Series
Students will explore what happens when the common ratio changes for an infinite geometric series. For each value of $r$ in the table below, students enter the sequence in $L2$ and examine the partial sums in $L3$ to decide whether the series converges. As an extension, students could change the initial value of the sequence by changing the number 200 in the formula for $L2$. If students determine that a series converges, then they are to create and view the scatter plot. The necessary settings for Plot1 are shown on the student worksheet. To change the window, students can select ZoomStat or manually adjust it by pressing ZOOM.
1.
$r$                   | -2       | -0.5              | -0.25           | 0.25               | 0.5            | 2
Converges or Diverges | Diverges | Converges, 133.33 | Converges, 160  | Converges, 266.667 | Converges, 400 | Diverges
2. $| r | < 1$
3. There is a horizontal asymptote at the point of convergence.
Problem 2 – Deriving a Formula for the Sum of a Convergent Infinite Geometric Series
Students are to use the Home screen to determine the values of $r^n$ for $r = 0.7$ as $n$ grows, observing that $r^n$ approaches zero when $| r | < 1$. Students are given the formula for the sum of a finite geometric series. With the information found on the worksheet, they can determine the formula for the sum of an infinite geometric series using 4.
4. Note that even though the calculator says $r^{1000}$ and $r^{10000}$ are 0, they are only approximately zero.
$n$            | 10       | 100       | 1000 | 10000
$r^n = 0.7^n$  | 0.028248 | 3.23 E-16 | 0    | 0
5. $s_n = \frac{a_1(1 - 0)}{1 - r} = \frac{a_1}{1 - r}$
Problem 3 – Apply what was learned
In this problem, students are given a scenario relating to drug prescriptions and dosages. Students need to use the formulas shown in the previous problem to answer the questions. They may get caught up on the first question. Explain to students that if 15% of the drug leaves the body every hour, then that means that 85 percent is still in the body.
6. a. $0.40 = 1 - 0.15(4)$
b. $240 + 240(0.4) = 336 \ mg$
c.
Hours              | 0 (1st dosage) | 4 ($2^{nd}$ dosage) | 8 ($3^{rd}$ dosage) | 12     | 16
Amount in the Body | 240            | 336                 | 374.4               | 389.76 | 395.904
d. This is the $7^{th}$ dosage, so $S_7 = \frac{240(1 - 0.4^7)}{1 - 0.4} = 399.34464$
e. This is the $19^{th}$ dosage, so $S_{19} = \frac{240(1 - 0.4^{19})}{1 - 0.4} = 399.999989$
f. $S_t = \frac{240(1 - 0.4^t)}{1 - 0.4}$
g. No, since $S = \frac{240}{1 - 0.4} = 400$
h. Yes, since he/she waits 2 hours, only 30% of the drug is out of his/her system, so 70% remains. This is the common ratio $r = 0.7$, so $S = \frac{240}{1 - 0.7} = 800$
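The keystrokes above are for a TI calculator, but the answers in Problems 1–3 are easy to cross-check in a few lines of Python. This snippet is an addition, not part of the CK-12 materials, and the function and variable names are arbitrary.

# Partial sums of a geometric series with first term a1 and ratio r:
# S_n = a1 * (1 - r**n) / (1 - r), and S_n -> a1 / (1 - r) when |r| < 1.
def partial_sum(a1, r, n):
    return a1 * (1 - r**n) / (1 - r)

# Problem 1: first term 200, various common ratios
for r in (-2, -0.5, -0.25, 0.25, 0.5, 2):
    if abs(r) < 1:
        print(f"r = {r:5}: converges to {200 / (1 - r):.2f}")
    else:
        print(f"r = {r:5}: diverges")

# Problem 3: 240 mg every 4 hours, 40% of the previous amount remains
print(partial_sum(240, 0.4, 7))    # 7th dosage  -> 399.34464
print(partial_sum(240, 0.4, 19))   # 19th dosage -> ~399.999989
print(240 / (1 - 0.4))             # long-run limit -> 400 mg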
{"url":"http://www.ck12.org/tebook/Texas-Instruments-Calculus-Teacher%2527s-Edition/r1/section/9.2/","timestamp":"2014-04-20T19:07:43Z","content_type":null,"content_length":"111808","record_id":"<urn:uuid:8dacf32f-9ee5-4432-a326-8b61b1d257d3>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
Equations for F(x) Help!!!
December 27th 2009, 05:30 PM #1
Dec 2009
Equations for F(x) Help!!!
Suppose you have a lemonade stand, and when you charge $2 per cup of lemonade you sell 120 cups. But when you raise your price to $3 you only sell 60 cups.
a. Write an equation for the number of cups you sell as a function of the price you charge.
b. Denote "C" for number of cups, and "P" for the price you charge.
c. Assume the function is linear.
Continuing our lemonade stand question:
a. We all know that total revenue (TR) is a function of the price we charged (P) multiplied by the item quantity sold (in our case – Cups), i.e., TR = Price * Cups
b. Please write the equation for your TR by inputting your answer from the function you have calculated in question #2.
c. What price would maximize your TR?
December 27th 2009, 06:02 PM #2
Suppose you have a lemonade stand, and when you charge $2 per cup of lemonade you sell 120 cups. But when you raise your price to $3 you only sell 60 cups.
a. Write an equation for the number of cups you sell as a function of the price you charge.
b. Denote "C" for number of cups, and "P" for the price you charge.
c. Assume the function is linear.
Continuing our lemonade stand question:
a. We all know that total revenue (TR) is a function of the price we charged (P) multiplied by the item quantity sold (in our case – Cups), i.e., TR = Price * Cups
b. Please write the equation for your TR by inputting your answer from the function you have calculated in question #2.
c. What price would maximize your TR?
You have two points that will lie on your function: $(P, C) = (2, 120)$ and $(P, C) = (3, 60)$. To have a linear function of the form $C = aP + b$, you will need to work out $a$ (the gradient) and $b$ (the C-intercept).
$a = \frac{C_2 - C_1}{P_2 - P_1} = \frac{60 - 120}{3 - 2} = \frac{-60}{1} = -60$.
So you have $C = -60P + b$. You know that $(P, C) = (2, 120)$ lies on the function, so
$120 = -60(2) + b$
$120 = -120 + b$
$b = 240$.
So the function is $C = -60P + 240$.
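The reply settles part (a), but the thread leaves the revenue parts open. Substituting C = 240 − 60P into TR = P·C gives TR = 240P − 60P², a downward-opening parabola whose maximum sits at P = 240/120 = 2. A quick numerical confirmation, a sketch of mine rather than part of the thread (the cent-sized price grid is an arbitrary choice):

# Demand from the two given points: C = 240 - 60P.
# Total revenue TR(P) = P * C = 240P - 60P^2; the vertex at P = 2 is the maximum.
def revenue(p):
    return p * (240 - 60 * p)

best = max((revenue(p / 100), p / 100) for p in range(0, 401))
print(best)   # (240.0, 2.0): charging $2 maximizes revenue at $240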
{"url":"http://mathhelpforum.com/algebra/121722-equations-f-x-help.html","timestamp":"2014-04-21T05:38:32Z","content_type":null,"content_length":"39488","record_id":"<urn:uuid:893dc2cb-40c1-4dbc-a299-40cd7190b3f1>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Dependent-type 'revolution' Just prior to Scala eXchange 2012 (which took place earlier this week), I came across a (quite accessible) blog post entitled, Unifying Programming and Math - The Dependent Type Revolution An intriguing title, referring to one of those subjects I hadn't quite got round to investigating yet - still on my to-do list was to listen to an interview with Miles Sabin talking about, amongst other things, dependent types. Although Miles was at the conference on the first day, he wasn't around on the second, so I missed my chance to talk to him. However, there was always the panel discussion , so I made do ;-) with addressing a question to Martin Odersky on the matter (22:20 mins into the video). Here's what he had to say, with my own, possibly wrong-end-of-the-stick, embellishments: • There's an interesting area between Haskell and maths, that's populated by programming languages [featuring dependent types] such as Agda, and theorem provers such as Coq. • According to the Curry-Howard Isomorphism, □ there's a strong correspondence between types and properties [of programs] (so we can encode the properties of programs in types); ☆ [By properties of programs is meant things like the property of associativity of the concatenation operator of a list.] □ if a type is a property [of a program], then the [existence of that working] program [involving that type] is a proof of that property. • Certain theorems in maths (e.g. Four-colour theorem) can only be proved (practically) by writing a program [in a language like Agda, expressing the theorem as dependent types]. • Applications of such proofs in computing are currently somewhat specialised/limited: □ proof of correctness wrt attacks on cryptographic systems. □ proof of correctness of whole programs, using theorem provers such as Z3 to do the constraints solving, is currently limited to programs in the 10s of lines. • Scaling of proof of whole-program correctness to much larger programs, or the obviation of unit tests, is unlikely to happen any time soon. • We are constrained by the need to reconcile that which it is possible to do with types, with that which humans can comprehend. • Having type systems which are easy to use is the more pressing challenge. To delve deeper into dependent types using Scala, it looks like Miles Sabin's project and are the places to go. 2 comments: 1. look at 1. So, details of an application, involving Haskell, to operating systems (to make them more secure), and the slides for a talk on 'Dependently-Typed Programming in GHC'. Looks interesting (particularly the slide on 'Why Dependent Types?') - thanks.
{"url":"http://robsscala.blogspot.com/2012/11/dependent-type-revolution.html","timestamp":"2014-04-20T05:42:30Z","content_type":null,"content_length":"66717","record_id":"<urn:uuid:a5e2e8ec-e637-45fb-8f49-34dc658f95ee>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
29 March 2000 Vol. 5, No. 13A THE MATH FORUM INTERNET NEWS - March 2000 DISCUSSIONS This special issue of the Math Forum's weekly newsletter highlights recent interesting conversations on Internet math discussion groups. For a full list of these groups with links to topics covered and information on how to subscribe, see: Replies to individual discussions should be addressed to the appropriate group rather than to the newsletter editor. If you are familiar with a site we don't yet catalog, please use our Web form to suggest the link. Your own brief annotation will be much appreciated. ______________________________ + ______________________________ MARCH SUGGESTIONS: CALC-REFORM - a mailing list hosted by e-MATH of the American Mathematical Society (AMS) and archived at - The value (or not) of trig identities in intro calc (20 March 2000) "Let's assume that _you_ are given the power to write the new AP Calc syllabus, as you see fit... [and] that every student doing the new course has a TI-89 or equivalent. Which functions would you expect students to be able to integrate "by hand," i.e. the traditional way?" - Rex Boggs ______________________________ + ______________________________ GEOMETRY-COLLEGE - a group for discussions of topics covered in college geometry, problems appropriate for that level, and geometry education, hosted and archived by the Math Forum at: - Geometry and construction? (19 March 2000) Walter Whiteley answers the question: "I am doing a project for my geometry class in high school. We have to build something and write a report on how it relates to geometry or how they used geometry to build it. I want to either build a plane or a big building like the Empire State Building or something like that." - Debbie ______________________________ + ______________________________ GEOMETRY-PUZZLES - for problems, discussions, and solutions that require only a knowledge of pre-college geometry, hosted and archived by the Math Forum at: - Triangle Area Paradox (18 March 2000) "I suppose most of you are familiar with the paradox of the 5x13 rectangle that, cut into parts and then rearranged, becomes an 8x8 square, and gains a unit of area. A student of mine came up with an even more surprising variation of this paradox...." - Floor van Lamoen ______________________________ + ______________________________ K12.ED.MATH - a moderated list on general math teaching questions, archived by the Math Forum at - Timed Math Tests- Good or Bad? (22 March 2000) "I am looking for some basic imput. There are several educators who are against timed drill-and-kill math tests. These tests have been given to increase memorization skills for addition, subtraction, and multiplication. What is your opinion? Are they really beneficial?" - Cathy ______________________________ + ______________________________ NUMERACY, for those interested in the discussion of educational issues around adult mathematical literacy, archived at: - curriculum support for use of calculators on GED 2002 (16 March 2000) "I am interested in references for curricula and classroom materials which support and enhance the use of the scientific-level calculators assigned to the GED 2002...." 
- Maggie Steinz ______________________________ + ______________________________ HISTORIA-MATEMATICA - a virtual forum for scholarly discussion of the history of mathematics in a broad sense, among professionals and non-professionals with a serious interest in the field, archived at - L'Hopital, Pythagoras, Ptolemy and Hilbert (16 March 2000) "I need some examples of results which are not named after the people who derived them... Is there a difference between Pythagorean theorem and Pythagoras' theorem?... Where on earth (!) can I find information about how man's conception of the size of universe has changed throughout history?... What progress has been made in solving the Riemann hypothesis since it was first stated, way back in antiquity? Also, apart from the fact that it is still not solved, what would you consider to be significant about the problem?" - Andrew Bowering And see: - Mathematics and Time (8 March 2000) - Kant and non-Euclidean geometry (10 March 2000) ______________________________ + ______________________________ We hope you will find these selections useful, and that you will browse and participate in the discussion group(s) of your choice. CHECK OUT OUR WEB SITE: The Math Forum http://mathforum.org/ Ask Dr. Math http://mathforum.org/dr.math/ Problems of the Week http://mathforum.org/pow/ Mathematics Library http://mathforum.org/library/ Teacher2Teacher http://mathforum.org/t2t/ Discussion Groups http://mathforum.org/discussions/ Join the Math Forum http://mathforum.org/join.forum.html Send comments to the Math Forum Internet Newsletter editors
{"url":"http://mathforum.org/electronic.newsletter/mf.intnews5.13A.html","timestamp":"2014-04-17T22:04:22Z","content_type":null,"content_length":"10020","record_id":"<urn:uuid:1a25f089-fc9e-435c-a503-aceff2f30b2c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
matlab solve simultaneous equations Author Message pxellyph Posted: Friday 03rd of Aug 21:45 To any student expert in matlab solve simultaneous equations: I seriously require your very meritorious knowledge. I have some preparatory worksheets for my Pre Algebra. I believe matlab solve simultaneous equations might be beyond my ability. I am at a absolute loss regarding where I might begin . I have thought about engaging an algebra private instructor or signing up with a learning center, but they are unquestionably not low-budget . Any and every alternate hint shall be hugely appreciated ! Registered: 15.02.2003 From: New Bern, North Vofj Timidrov Posted: Sunday 05th of Aug 09:10 The attitude you’ve adopted towards the matlab solve simultaneous equations is not the right one. I do understand that one can’t really think of anything else in such a situation. Its nice that you still want to try. My key to successful equation solving is Algebrator I would advise you to give it a try at least once. Registered: 06.07.2001 From: Bulgaria MoonBuggy Posted: Monday 06th of Aug 09:27 I too have learned Algebrator is a phenomenal assemblage of matlab solve simultaneous equations software programs. I just recollect my unfitness to understand the concepts of perpendicular lines, ratios or scientific notation because I have become so accomplished in assorted fields of matlab solve simultaneous equations. Algebrator has performed flawlessly for me in Pre Algebra, Basic Math and Algebra 2. I strongly advocate this software because I could not find any deficiency from Algebrator. Registered: 23.11.2001 From: Leeds, UK M@dN Posted: Tuesday 07th of Aug 20:31 You people have really caught my attention with what you just said. Can someone please provide the website URL where I can purchase this program? And what are the various payment options available? Registered: UK| From: /images/avatars/ Xane Posted: Thursday 09th of Aug 11:40 I remember having problems with simplifying fractions, like denominators and equivalent fractions. Algebrator is a truly great piece of algebra software. I have used it through several math classes - Algebra 1, Remedial Algebra and Algebra 1. I would simply type in the problem from a workbook and by clicking on Solve, step by step solution would appear. The program is highly recommended. Registered: 16.04.2003 From: the wastelands between insomnia and DoniilT Posted: Friday 10th of Aug 20:54 Its really easy, just click on the this link and you are good to go – http://www.mhsmath.com/fractions.html. And remember they even give a ‘no ifs and buts’ money back guarantee with their software , but I’m sure you’ll like it and won’t ever ask for your money back. Registered: 27.08.2002
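For what it's worth, none of the replies actually show how simultaneous equations are solved. A minimal sketch in Python/NumPy is below; the particular equations and numbers are invented for illustration, and in MATLAB (which the original poster asked about) the corresponding one-liner would be A\b.

import numpy as np

# Solve the simultaneous equations
#   2x + 3y = 8
#    x -  y = -1
# by writing them as the matrix system A @ [x, y] = b.
A = np.array([[2.0, 3.0],
              [1.0, -1.0]])
b = np.array([8.0, -1.0])
x, y = np.linalg.solve(A, b)
print(x, y)   # 1.0 2.0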
{"url":"http://www.mhsmath.com/math-facts-software/geometry/matlab-solve-simultaneous.html","timestamp":"2014-04-18T23:15:14Z","content_type":null,"content_length":"29492","record_id":"<urn:uuid:477df9c4-5de3-45b3-95be-379063215a96>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
In my previous post, I must have imagined that you had a plus sign in there, which you didn't. There's actually only one term, but it has 5 lengths, which is still wrong. If you square rooted the bit inside the square brackets then you would get 2 lengths, which is perfect. S.A. = π*r*√[r² + (2025/π)r²] That can be simplified more by combining the two terms inside the bracket to give [(2025/π+1)r²] The r² can then be taken out of the square root: S.A. = π*r²√(2025/π+1)
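A quick symbolic sanity check of that simplification — not part of the original thread, and the choice of r = 2 for the numeric comparison is arbitrary:

import sympy as sp

r = sp.symbols('r', positive=True)
original   = sp.pi * r * sp.sqrt(r**2 + (2025 / sp.pi) * r**2)
simplified = sp.pi * r**2 * sp.sqrt(2025 / sp.pi + 1)

# Both expressions are non-negative, so comparing their squares is enough.
print(sp.expand(original**2 - simplified**2))                      # 0 -> the forms agree
print(original.subs(r, 2).evalf(), simplified.subs(r, 2).evalf())  # same number twice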
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=2044","timestamp":"2014-04-20T06:07:14Z","content_type":null,"content_length":"17769","record_id":"<urn:uuid:3da79dd3-6f42-4ac1-aab5-4a9dcbf84f56>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Intentions: Logical and SubversiveThe Art of Marcel Duchamp, Concept Visualization, and Immersive Experience Symbolic Logic and Visualizing Concept in the Work of Marcel Duchamp The seemingly innate propensity of the human mind to binary, "this not that thinking" may be fruitfully illustrated by using symbolic logic in an interpretation of the intentions of Marcel Duchamp. With symbolic logic as a form of information or concept visualization one can demonstrate the difficulty and visual complexity of examining Duchamp's work using a deterministic system of First let us use the most oft sited interpretations of Duchamp's oeuvre and their corresponding significances and assign to each a variable. The following is by no means an exhaustive codification of the many interpretations of Marcel Duchamp's works it is essentially a categorization of a few of the major theories. A. The use of the "ready-made" or "found object" asserts that by altering the context of a commonplace object it can become art. D. The statement A allows that Duchamp in challenging the definition of the art object by exalting the primacy of the idea over the creative act he subverted the modernist convention of the artist/ object and viewer relationships B. The work was an exploration of the mathematics of uncertainty pioneered by Henri Poincaré and the study of the fourth dimensional space theorized by Élie Jouffret (Traité Élémentaire de Géometrie à Quatre Dimensions 1903). H. The statement B allows that Duchamp called into question the discipline boundaries between art and science and destroys the notion of the artist as creator of 'retinal art' or the aesthetic C. The work was engineered to be reassembled by the patron or viewer, who followed complex, often informed by chance, instructions. This process is evident in Duchamp's assemblage book works such as La Bôite Verte. P. The statement C allows that Duchamp transformed the boundaries between producer and consumer in the art market and engaged the artist/manufacturer and viewer in the process of creation. We will next make a formula that contextualizes more precisely the relationship between the upper level referent variables A, B and C (in this case those variables that refer to the condition of Duchamp's work rather than his intentions). Supplanting a corresponding lower case Greek letters, A becoming a(alpha), B becoming b(beta), and C becomes g(gamma) the following is the rule, universal quantifier or binding of variables governing their relationships. (Fig. 8) The above statement allows that there can only be a single interpretation of the work of Duchamp. In accordance with the current highly polarized arguments about his work the equation illustrates that, of the contemporary hypothesis, only one can be correct. The next equations maintain that there can only be a single derived intention from the overall statement of the condition of Duchamp's work. In other words the statement: "The work is an exploration of the mathematics of uncertainty pioneered by Henri Poincaré and the study of the fourth dimensional space theorized by Élie Jouffret" may only be linked to the intention, "Duchamp calls into question the discipline boundaries between art and science and destroys the notion of the artist as creator of 'retinal art' or the aesthetic object". We may express this with symbolic logic in the following: (Fig. 
9) The next section of this exploration of the work of Marcel Duchamp through symbolic logic will determine the consistency of each of the condition/intention hypotheses. First we will examine the consistency of the argument A, if and only if, D:(Fig. 10) The statement A if and only if D proves logically consistent. Next we have the hypothesis B, if and only if, H: (Fig. 11) Having proved the statement B if and only if H consistent we address the theory C if and only if P:(Fig. 12) Having proved the consistency of all three hypotheses given the a priori context that only one of them is correct where are we left in this complex visual analysis of Duchamp's intentions? His intimate knowledge of Henri Poincaré's theories and his use of chance in the construction of many of his key works would seem to indicate that Duchamp had been quietly challenging the notion of deterministic reasoning in both the interpretation of art and physical and experiential phenomena. As Poincaré suggests in 1895; Experiment has revealed a multitude of facts which can be summed up in the following statement: it is impossible to detect the absolute motion of matter, or rather the relative motion of ponderable matter with respect to the ether; all that one can exhibit is the motion of ponderable matter with respect to ponderable matter. In short, Poincaré asserts that all quantifiable and qualifiable information pertaining to any phenomena can only be measured relative to other qualified and quantified data. As the first to elucidate this "principle of relativity" Poincaré discerned that all explicit information about any physical phenomena in motions is best expressed in the form of a probability. Poincaré's critique of determinism extends to other disciplines as well as he states, "The science of history is built out of bricks; but an accumulation of historical facts is no more a science than a pile of bricks is a house." This kind of reasoning is the bedrock of semiotics (meaning in language is ascertain through the relationship between the symbol and its meaning relative to the culture that produced it). It has also been used to critique symbolic logic. The discipline itself relies on abstract patterns, its meaning determined not from the symbols themselves but from the relationship between the marks and other patterns and more significantly cultural meanings. Duchamp's connection to logic is most clearly noted in two of his most significant areas of concern: chance and chess. As a chess master Duchamp was, on several occasions, a member of the French championship chess team. For Duchamp chess was an organized, integrated and ordered whole, composed of rule based interactions wherein outcomes were as influenced by unquantifiable elements such as guile or desire as by systematic reasoning. This led Duchamp to assert that complexity in any system was inherently non-deterministic. We see this questioning of aggregation, perhaps more clearly, in his use of chance in aesthetic production. click to enlarge Figure 13 Marcel Duchamp, Three Standard Stoppages, 1913-14 Duchamp's subversion of deterministic systems through chance finds vent in his Trois Stoppages-Étalon [Three Standard Stoppages], 1913 (Fig. 13), He first measures three sections of thread each precisely one meter in length and drops them from a height of exactly one meter. He then uses the curves of the threads to produce three templates cut from a straight edge to produce the work. 
The templates are then enshrined in a box and become the piece Trois Stoppages-Étalon. This work and activity albeit unusual, is a subversion of the concept of immutable standards of measurement, thus questioning the validity of a system based on a platinum-iridium bar stored in a Parisian vault. His attention to chance not only posited an alternative to early twentieth century "laws" of science but also undermined early twentieth century conventions about aesthetic production. Le Penseur Multi-Dimensionnelle Duchamp's work Nude Descending a Staircase (1912) was first displayed at the Cubist Exhibition at the Damau Gallery in Barcelona and later at the Armory Show (New York, 1913). This painting took the observational cubist penchant of displaying an object from multiple spatial vantagepoints and added a temporal element by rendering a nude figure in motion. This work explored the conceptual possibility of 2d painting, which displayed and illustrates a 3 dimensional figure traversing time. The piece arrives at a visceral form of multi-dimensional cognition. Partial inspired by his interest in chronophotography and the mathematics of Henri Poincaré Nude Descending a Staircase is perhaps his last clear attempt to use a traditional modality of retinal art to express a conceptual or gray matter art. It is also his first widely exhibited work to express his interest in the merger of science and art. His continued interest in multiple dimensions, though I cannot prove this, is probably where we may find the solution or at least a map to a clear understanding of his work. Though we may never have a concise definition of "what his work was about" Duchamp may have left us clues as to how we may begin to "make sense" of his intentions. click to enlarge Figure 14 Front view of the postcard in the White Box, 1967 Rhonda Shearer and Stephen J. Gould In their article Boats and Deckchairs present the most profound example of Duchamp's trickery and play with the multi-dimensional mathematics of Henri Poincaré and Élie Jouffret. Inside Duchamp's 1967 piece White Box Francis Nauman discovered a "commercial" postcard (1914) (Fig. 14). The postcard displays on its front three boats floating on a placid lake or river and on the reverse some writing. Nauman categorized this discovery as a "random notation" written on a "found object" citing that "on the verso of a postcard, Duchamp notes 'a possible means by which the fourth dimension could be visually established through the optical illusion of two deck chairs'." This note was accompanied by an illustration of parallel lines bisected by a perpendicular. The true nature of this object has never been addressed by art historians as the work could be safely categorized as one of Duchamp's "ready-mades." The piece is in fact, an original painting not a commercial postcard. The curious parallel and perpendicular lines on the back are in fact obscure instructions. Duchamp's fascination with rotation and relative vantagepoints indicates that a new dimension may be experienced through altering ones position relative to an object. When the postcard is turned 90º to the right the boats become an orthogonal rendering of deckchairs viewed from a bird's eye vantagepoint. The mysterious "random note" on the verso is a plea to adjust your perspective when viewing the postcard but also, when correlated with the image from the front the piece becomes a profound statement about the relationship between the second, third and fourth dimensions. Like E. A. 
Abbot's famous book Flatland (1885) whose main character, a square, is shockingly introduced to the third dimension, Duchamp has demonstrated for us that one can examine from a three dimensional vantage point all sides of a two dimensional object. In turning the postcard we are taking a clearly two-dimensional image and viewing it from the third dimension wherein the objects in question become something entirely different. In postcard he begs the analogy: that when viewed from the fourth dimension a three-dimensional object may be seen from all sides. From years of singular interpretations of Duchamp's oeuvre art historians have safely ignored Duchamp's multiple interpretations: one obvious and the others subversive. When the work is proclaimed (by the artist himself) and interpreted as a "ready-made" the hidden intention with all of its possible significance is obscured. Duchamp disavowed models of reasoning, which relied on singular definitions. This kind of one or two-dimensional interpretation is inherently flawed when attempting to ascertain his intentions. With this in mind, Duchamp's work requires that any conceptual model of his intentions necessitates three-dimensional thinking and thusly is well suited to three-dimensional visualization. Immersive Experience and Concept Visualization As demonstrated, the use of symbolic logic as a means to visualizing concept in the work of Marcel Duchamp is extremely difficult. Though I have not examined the use of more advanced forms of symbolic logic (I am not a logician) it is apparent that the data, as envisaged, is not of the highest utility. click to enlarge Figure. 15 Screen still from the author's Immersive Duchamp Concept World an interactive virtual reality computer art piece. Figure. 16 Screen still from the author's Immersive Duchamp Concept World an interactive virtual reality computer art piece. Clearly, Duchamp had multiple intentions and the existence of seemingly inconsistent hypotheses about his work point more to the human propensity for dualistic thinking rather than to grasping a more pluralistic possibility. Engaging data that is not quantifiable and highly subjective is difficult to manage logically and exceedingly difficult to graph. However if we create a 3d cartographic form of the logic equations introduced earlier in this paper, we make the data more intuitive and thus cognitively manageable. (Figs. 15 & 16) Using interactive virtual reality software[the software we use in this example is the Glass Virtual Reality Engine, created by the author] one can have an immersive experience of the main theories about Duchamp's work. The virtual reality computer art piece Immersive Duchamp Concept World, presents the theories concerning the artist's work. At the center of the virtual space is the entrance point to the world. The immersant or viewer may follow the map which branches off to various nodal points. Each of these nodal points represents a single theory. From the vantagepoint of the theory the immersant sees the other possible theories through a fog and translucent sheets, they are barely visible, as the immersant/viewer has chosen an alternate path (Fig. 15). In Immersive Duchamp Concept World the immersant is also introduced to various interactive media; readings of Duchamp's Notes as well as to still images and animations of his work and to the writings of Henri Poincaré. If the immersant chooses to fly above the object it is from this vantagepoint the viewer sees all of the theories as a totality (Fig. 16). 
This totality, is essentially a relativistic rather than a fixed deterministic system as the viewer governs the experience. This model for information visualization does not stand in opposition to symbolic logic, however it does allow a form of concept visualization that merges reason quantification, qualification and the visceral. The body of work produced by Marcel Duchamp was a programmatic, if playful, undermining of deterministic thinking. He demolished arbitrary discipline boundaries between artist, scientist and mathematician. His clues to altering our perspective were equally pertinent to viewing and understanding his oeuvre as they were to viewing individual works of art. His implicit and explicit call for altering our vantagepoint relative to his intentions inherently calls into question modernist singular interpretations. Yet, through the use of concept visualization, we can create more exploratory modes of information visualization; modes which allow for simultaneous multiple dimensional thinking. In an immersive environment the viewer can experience a panorama of Duchamp's intentions, one that does not enforce strict rules of consistency, but nonetheless leads us to comprehension of a poly-dynamic yet visceral logic. Boxer, S. "Taking Jokes By Duchamp to Another Level of Art. "The New York Times 20 March 1999. Clair, J. Sur Marcel Duchamp et la fin de l'art. Paris: Éditions Gallimard, 2000. D'Harnoncourt, Anne & McShine, eds. Marcel Duchamp. New York: The Museum of Modern Art, 1973. Duchamp, M. Notes. Paris: Champs Flammarion- Centre d'art et de culture Georges Pompidou, 1999. Golding, J. Duchamp: The Bride Stripped Bare By Her Bachelors. New York: The Viking Press, 1973. Gould, S. J. & Shearer, R. "Boats and Deck Chairs." Tout Fait: The Marcel Duchamp Studies Online Journal. 1.1 (Dec. 1000): Articles <http://www.toutfait.com/duchamp.jsp?postid=757&keyword=> Hodges, W. Logic. New York: Penguin Books, 1985. Ifrah, G. The Universal History of Numbers. New York: John Wiley and Sons, Inc., 2000. Reichenbach, H. Elements of Symbolic Logic. New York: Free Press, 1966. Scribner, C. "Henri Poincare and the principle of relativity." American Journal of Physics 32 (1963): 673. Tomkins, C. The World of Marcel Duchamp 1887-1968. Alexandria, VA: Time Life Book, 1977. Williams, J. "Pata or Quantum: Duchamp and the end of Deterministic Physics", Tout Fait: The Marcel Duchamp Studies Online Journal. 2.3 (Dec. 2000): Articles <http://www.toutfait.com/duchamp.jsp? Figs. 1, 3~7, 13, 14 ©2003 Succession Marcel Duchamp, ARS, N.Y./ADAGP, Paris. All rights reserved.
{"url":"http://www.toutfait.com/online_journal_details.php?postid=1567&keyword=","timestamp":"2014-04-18T11:46:23Z","content_type":null,"content_length":"85860","record_id":"<urn:uuid:922cf7cf-570f-4a46-8095-6da6674759c4>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
27 Characteristics Of Authentic Assessment27 Characteristics Of Authentic Assessment 27 Characteristics Of Authentic Assessment by Grant Wiggins, Authentic Education What is “authentic assessment”? Almost 25 years ago, I wrote a widely-read and discussed paper that was entitled: “A True Test: Toward More Authentic and Equitable Assessment” that was published in the Phi Delta Kappan. I believe the phrase was my coining, made when I worked with Ted Sizer at the Coalition of Essential Schools, as a way of describing “true” tests as opposed to merely academic and unrealistic school tests. I first used the phrase in print in an article for Educational Leadership entitled “Teaching to the (Authentic) Test” in the April 1989 issue. (My colleague from the Advisory Board of the Coalition of Essential Schools, Fred Newmann, was the first to use the phrase in a book, a pamphlet for NASSP in 1988 entitled Beyond standardized testing: Assessing authentic academic achievement in secondary schools. His work in the Chicago public schools provided significant findings about the power of working this way.) So, it has been with some interest (and occasional eye-rolling, as befits an old guy who has been through this many times before) that I have followed a lengthy back and forth argument in social media recently as to the meaning of “authentic” and, especially, the idea of “authentic assessment” in mathematics. The debate – especially in math – has to do with a simple question: does “authentic” assessment mean the same thing as “hands-on” or “real-world” assessment? (I’ll speak to those terms momentarily). In other words, in math does the aim of so-called “authentic” assessment rule in or rule out the use of “pure” math problems in such assessments? A number of math teachers resist the idea of authentic assessment because to them it inherently excludes the idea of assessing pure mathematical ability. (Dan Meyer cheekily refers to “fake-world” math as a way of pushing the point Put the other way around, many people are defining “authentic” as “hands-on” and practical. In which case, pure math problems are ruled out. The Original Argument In the Kappan article I wrote as follows: Authentic tests are representative challenges within a given discipline. They are designed to emphasize realistic (but fair) complexity; they stress depth more than breadth. In doing so, they must necessarily involve somewhat ambiguous, ill structured tasks or problems. Notice that I implicitly addressed mathematics here by referring to “ill-structured tasks or problems”. More generally, I referred to “representative challenges within a discipline.” And notice that I do not say that it must be hands-on or real-world work. It certainly CAN be hands-on but it need not be. This line of argument was intentional on my part, given the issue discussed above. In short, I was writing already mindful of the critique I, too, had heard from teachers of mathematics, logic, language, cosmology and other “pure” as opposed to “applied” sciences in response to early drafts of my article. So, I crafted the definition deliberately to ensure that “authentic” was NOT conflated with “hands-on” or “real-world” tasks. My favorite example of a “pure” HS math assessment task involves the Pythagorean Theorem: We all know that A^2 + B^2 = C^2. But think about the literal meaning for a minute: The area of the square on side A + the area of the square on side B = the area of the square on side C. 
So here’s the question: does the figure we draw on each side have to be a square? Might a more generalizable version of the theorem hold true? For example: Is it true or not that the area of the rhombus on side A + the area of the rhombus on side B = the area of the rhombus on side C? Experiment with this and other figures. From your experiments, what can you generalize about a more general version of the theorem? This is “doing” real mathematics: looking for more general/powerful/concise relationships and patterns – and using imagination and rigorous argument to do so, not just plug and chug. (There are some interesting and surprising answers to this task, by the way.) The Definition Of Hands-On & Real-World While I don’t think there are universally-accepted definitions of “real-world and “hands-on” the similarities and differences seem straightforward enough to me. A “hands-on” task, as the phrase suggests, is to be distinguished from a merely paper-and-pencil exam-like task. You build stuff; you create works; you get your hands dirty; you perform. (Note therefore, that “performance assessment” is not quite the same as “authentic assessment”). In robotics, life-saving, and business courses we regularly see students create and use learning as a demonstration of (practical as well as theoretical) understanding. A “real-world” task is slightly different. There may or may not be mere writing or a hands-on task, but the assessment is meant to focus on the impact of one’s work in real or realistic contexts. A real-world task requires students to deal with the messiness of real or simulated settings, purposes, and audience (as opposed to a simplified and “clean” academic task to no audience but the So, a real-world task might ask the student to apply for a real or simulated job, perform for the local community, raise funds and grow a business as part of a business class, make simulated travel reservations in French to a native French speaker on the phone, etc. Here is the (slightly edited) chart from the Educational Leadership article describing all the criteria that might bear on authentic assessment. It now seems unwieldy and off in places to me, but I think readers might benefit from pondering each element I proposed 25 years ago: 27 Characteristics Of Authentic Assessment Authentic assessments – A. Structure & Logistics 1. Are more appropriately public; involve an audience, panel, etc. 2. Do not rely on unrealistic and arbitrary time constraints 3. Offer known, not secret, questions or tasks. 4. Are not one-shot – more like portfolios or a season of games 5. Involve some collaboration with others 6. Recur – and are worth retaking 7. Make feedback to students so central that school structures and policies are modified to support them B. Intellectual Design Features 1. Are “essential” – not contrived or arbitrary just to shake out a grade 2. Are enabling, pointing the student toward more sophisticated and important use of skills and knowledge 3. Are contextualized and complex, not atomized into isolated objectives 4. Involve the students’ own research 5. Assess student habits and repertories, not mere recall or plug-in. 6. Are representative challenges of a field or subject 7. Are engaging and educational 8. Involve somewhat ambiguous (ill-structures) tasks or problems C. Grading and Scoring 1. Involve criteria that assess essentials, not merely what is easily scores 2. Are not graded on a curve, but in reference to legitimate performance standards or benchmarks 3. 
Involve transparent, de-mystified expectations 4. Make self-assessment part of the assessment 5. Use a multi-faceted analytic trait scoring system instead of one holistic or aggregate grade 6. Reflect coherent and stable school standards D. Fairness 1. identify (perhaps hidden) strengths [not just reveal deficits] 2. Strike a balance between honoring achievement while mindful of fortunate prior experience or training [that can make the assessment invalid] 3. Minimize needless, unfair, and demoralizing comparisons of students to one another 4. Allow appropriate room for student styles and interests [ – some element of choice] 5. Can be attempted by all students via available scaffolding or prompting as needed [with such prompting reflected in the ultimate scoring] 6. Have perceived value to the students being assessed. I trust that this at least clarifies some of the ideas and resolves the current dispute, at least from my perspective. Happy to hear from those of you with questions, concerns, or counter-definitions and counter-examples. This article first appeared on Grant’s personal blog; follow Grant on twitter; 27 Characteristics Of Authentic Assessment; image attribution flickr user woodleywonderworks This is brilliant. Thank you for such a thorough explanation of authentic assessment. Jibes with what I teach parents in the homeschooling setting.
{"url":"http://www.teachthought.com/learning/27-characteristics-of-authentic-assessment/","timestamp":"2014-04-19T14:49:16Z","content_type":null,"content_length":"70075","record_id":"<urn:uuid:9867f9d5-0ef0-40c2-ba9e-3d7407e8439a>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-User] scipy.stats: Sampling from an arbitrary probability distribution [SciPy-User] scipy.stats: Sampling from an arbitrary probability distribution Christoph Deil Deil.Christoph@googlemail.... Tue Jun 5 04:05:39 CDT 2012 On Jun 4, 2012, at 3:21 PM, Sturla Molden wrote: > On 03.06.2012 13:20, Daniel Sabinasz wrote: >> Hi all, >> I need to sample a random number from a distribution whose probability >> density function I specify myself. Is that possible using scipy.stats? > Sampling a general distribution is typically an MCMC problem, that e.g. > can be solved with the Metropolis-Hastings sampler. > http://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm > Because of its recursive nature, a Markov chain like this is better > written in Cython, or you can use NumPy to run multiple chains in > parallel. (I depends on how many samples you need, of course, anything > below a million should be fast enough in Python.) > You might also take a look at PyMCMC: > https://github.com/rdenham/pymcmc > Sturla > _______________________________________________ > SciPy-User mailing list > SciPy-User@scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user If you are willing to look outside of scipy, there are nice methods to generate random numbers from arbitrary distributions in ROOT, a C++ physics data analysis package with python bindings: import ROOT # Define the function and limits you want: # TF1::TF1(const char* name, const char* formula, Double_t xmin = 0, Double_t xmax = 1) f = ROOT.TF1("my_pdf", "x * x / 10.", -1, 1) # Generate 100 random numbers from that distribution [f.GetRandom() for _ in range(100)] You can sample from an arbitrary 2D distribution as well: # TF2::TF2(const char* name, const char* formula, Double_t xmin = 0, Double_t xmax = 1, Double_t ymin = 0, Double_t ymax = 1) f2 = ROOT.TF2("my_pdf2", "x * x / 10. + pow(y, 4)", -1, 1, 3, 4) x, y = ROOT.Double(), ROOT.Double() f2.GetRandom2(x, y) If you only want a histogram of values, not the array, you can avoid the python call overhead: # TH1D::TH1D(const char* name, const char* title, Int_t nbinsx, Double_t xlow, Double_t xup) h = ROOT.TH1D("my_hist", "my_hist", 1000, -1, 1) # void TH1::FillRandom(const char* fname, Int_t ntimes = 5000) In [49]: %timeit h.FillRandom("my_pdf", int(1e6)) 10 loops, best of 3: 171 ms per loop In [48]: %timeit [f.GetRandom() for _ in range(int(1e6))] 1 loops, best of 3: 2.62 s per loop Here you can see the method used (parabolic approximations): Even if most users don't want to install ROOT, it might be worth comparing the accuracy / speed to the method in scipy. ROOT also contains the UNURAN package, which implements several methods to sample from arbitrary one- or multi-dimensional distributions. Unfortunately it's GPL and doesn't have python bindings itself as far as I know. More information about the SciPy-User mailing list
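For readers who would rather not install ROOT, the same kind of sampling can be done with NumPy alone via inverse-transform sampling from a tabulated CDF. The sketch below is mine and not from the thread; the function name, grid size, and the example density (the same x²/10 used in the ROOT snippet) are arbitrary choices, and the tabulation is deliberately quick-and-dirty.

import numpy as np

def sample_from_pdf(pdf, lo, hi, size, grid=4096):
    """Draw samples from an (unnormalized) 1-D density by tabulating its CDF
    on a grid and inverting it with interpolation."""
    x = np.linspace(lo, hi, grid)
    p = pdf(x)
    cdf = np.cumsum(p)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])   # normalize to [0, 1]
    u = np.random.uniform(size=size)
    return np.interp(u, cdf, x)                 # invert the CDF

# Example: f(x) = x^2 / 10 on [-1, 1], as in the ROOT snippet
samples = sample_from_pdf(lambda x: x**2 / 10.0, -1.0, 1.0, size=100_000)
print(samples.mean(), (samples**2).mean())      # ~0 and ~0.6 for this density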
{"url":"http://mail.scipy.org/pipermail/scipy-user/2012-June/032312.html","timestamp":"2014-04-19T23:12:47Z","content_type":null,"content_length":"6662","record_id":"<urn:uuid:3588c359-f547-4744-a1f2-3c984570885e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] [SOLVED] high precision bessel’s functions Fredrik Johansson fredrik.johansson@gmail.... Mon Dec 8 15:05:55 CST 2008 Leonid Chaichenets wrote: > Does anyone know an implementation of hankel2 with more precision? Can maybe > the scaled bessel functions (scipy.special.hankel2e) be used for that > (unfortunatly I couldnt find enough documentation on them)? Robert Kern wrote: > mpmath has Bessel functions. You should be able to construct the > Hankel functions from those. Hi, I'm the main author of mpmath and though this problem already has been solved (thanks Robert), I thought I'd drop a comment. Unfortunately mpmath only had the Bessel J function, and though you can compute the other Bessel functions from it, it requires some trickery when the order is an integer (though that shouldn't be a problem in this case since the order was explicitly stated to be a Since this isn't the first time someone asked for Bessel functions in mpmath (or rather, asked for Bessel functions in a more general setting and was pointed to mpmath) I've now implemented the Bessel I, Y, K and Hankel H1/H2 functions, and in a way that hopefully avoids the major numerical issues. You can get it by checking out the SVN version. Documentation is here: Leonid, I'd be interested to know if this implementation of the Hankel function works for your problem. If it's not too complicated, I'd like to add your calculation (or a simplified version thereof) to the test suite or as a documentation example. It's always nice with real-world More information about the SciPy-user mailing list
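Assuming a reasonably recent mpmath release that includes the Hankel functions described above, the original question can be answered directly. The order, argument, and the comparison against SciPy below are my own example, not part of the thread.

import mpmath
from scipy import special

mpmath.mp.dps = 50                  # work with 50 significant digits

nu, x = 2.5, 7.3                    # order and argument chosen arbitrarily
hp = mpmath.hankel2(nu, x)          # high-precision H^(2)_nu(x) from mpmath
lp = special.hankel2(nu, x)         # double-precision value from SciPy

print(hp)
print(lp)
print(abs(complex(hp) - lp))        # the difference should be tiny, around 1e-16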
{"url":"http://mail.scipy.org/pipermail/scipy-user/2008-December/019003.html","timestamp":"2014-04-18T15:40:08Z","content_type":null,"content_length":"4412","record_id":"<urn:uuid:15a509c7-b8cd-4c87-9fda-0053b9dd5e1d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
Reply to comment Submitted by west on August 3, 2012. With the drug testing, they now do A and B samples - so if sample A comes back positive, they then test the B sample. This is an added layer to prevent the innocent being found guilty, but I guess it comes down to what made them test positive in the first place. If it's a simple dice-roll random chance thing, then the B sample is 95% likely to clear a wrongly accused person - but if it's something else (perhaps the drug test looks for markers in their urine which usually signify drug taking but occur naturally in 5% of the population), then they're still in trouble! If it's the first case, then with a second test your 590 athletes who test positive in sample A (495 innocent, 95 guilty) become 115 (25 innocent, 90 guilty), which means about 78% of those who fail both tests really are guilty. A bit more palatable.
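The arithmetic behind those numbers can be written out explicitly. The population figures below (100 dopers among 10,000 athletes, a 95% detection rate and a 5% false-positive rate) are inferred from the 95/495 split quoted in the comment, so treat them as assumptions rather than given data.

# Figures implied by the comment
dopers, clean = 100, 9_900
hit, false_pos = 0.95, 0.05

# Sample A
guilty_a   = dopers * hit            # 95 true positives
innocent_a = clean * false_pos       # 495 false positives
print(guilty_a, innocent_a, guilty_a / (guilty_a + innocent_a))    # ~0.16

# Independent B-sample retest of everyone who failed sample A
guilty_b   = guilty_a * hit          # ~90
innocent_b = innocent_a * false_pos  # ~25
print(guilty_b, innocent_b, guilty_b / (guilty_b + innocent_b))    # ~0.78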
{"url":"http://plus.maths.org/content/comment/reply/5757/3505","timestamp":"2014-04-19T12:28:17Z","content_type":null,"content_length":"20547","record_id":"<urn:uuid:074ff01c-aeaa-4d45-98b9-a12bdf57791a>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
[R] Two envelopes problem Mario mdosrei at nimr.mrc.ac.uk Mon Aug 25 22:51:28 CEST 2008 No, no, no. I have solved the Monty Hall problem and the Girl's problem and this is quite different. Imagine this, I get the envelope and I open it and it has £A (A=10 or any other amount it doesn't matter), a third friend gets the other envelope, he opens it, it has £B, now £B could be either £2A or £A/2. He doesn't know what I have, he doesn't have any additional information. According to your logic, he should switch, as he has a 50% chance of having £2B and 50% chance of having £B/2. But the same logic applies to me. In conclusion, its advantageous for both of us to switch. But this is a paradox, if I'm expected to make a profit, then surely he's expected to make a loss! This is why this problem is so famous. If you look at the last lines of my simulation, I get, conditional on the first envelope having had £10, that the second envelope has £5 approximatedly 62.6% of the time and 37.4% for the second envelope. In fact, it doesn't matter what the original distribution of money in the envelopes is, conditional on the first having £10, you should exactly see 2/3 of the second envelopes having £5 and 1/3 having £20. But I'm getting a slight deviation from this ratio, which is consistent, and I don't know why. Greg Snow wrote: > You are simulating the answer to a different question. > Once you know that one envelope contains 10, then you know conditional on that information that either x=10 and the other envelope holds 20, or 2*x=10 and the other envelope holds 5. With no additional information and assuming random choice we can say that there is a 50% chance of each of those. A simple simulation (or the math) shows: >> tmp <- sample( c(5,20), 100000, replace=TRUE ) >> mean(tmp) > [1] 12.5123 > Which is pretty close to the math answer of 12.5. > If you have additional information (you believe it unlikely that there would be 20 in one of the envelopes, the envelope you opened has 15 in it and the other envelope can't have 7.5 (because you know there are no coins and there is no such thing as a .5 bill in the local currency), etc.) then that will change the probabilities, but the puzzle says you have no additional information. > Your friend is correct in that switching is the better strategy. > Another similar puzzle that a lot of people get confused over is: > "I have 2 children, one of them is a girl, what is the probability that the other is also a girl?" > Or even the classic Monty Hall problem (which has many answers depending on the motivation of Monty). > Hope this helps, > (p.s., the above children puzzle is how I heard the puzzle, I actually have 4 children (but the 1st 2 are girls, so it was accurate for me for a while). > -- > Gregory (Greg) L. Snow Ph.D. > Statistical Data Center > Intermountain Healthcare > greg.snow at imail.org > (801) 408-8111 >> -----Original Message----- >> From: r-help-bounces at r-project.org >> [mailto:r-help-bounces at r-project.org] On Behalf Of Mario >> Sent: Monday, August 25, 2008 1:41 PM >> To: r-help at r-project.org >> Subject: [R] Two envelopes problem >> A friend of mine came to me with the two envelopes problem, I >> hadn't heard of this problem before and it goes like this: >> someone puts an amount `x' in an envelope and an amount `2x' >> in another. You choose one envelope randomly, you open it, >> and there are inside, say £10. Now, should you keep the £10 >> or swap envelopes and keep whatever is inside the other >> envelope? 
>> I told my friend that swapping is irrelevant since your expected earnings are 1.5x whether you swap or not. He said that you should swap, since if you have £10 in your hands, then there's a 50% chance of the other envelope having £20 and a 50% chance of it having £5, so your expected earnings are £12.5, which is more than £10, justifying the swap. I told my friend that he was talking nonsense.
>> I then proceeded to write a simple R script (below) to simulate random money in the envelopes and it convinced me that the expected earnings are simply 1.5 * E(x), where E(x) is the expected value of x, a random variable whose distribution can be set arbitrarily. I later found out that this is quite an old and well understood problem, so I got back to my friend to explain to him why he was wrong, and then he insisted that in the definition of the problem he specifically said that you happened to have £10 and no other values, so it is still better to swap.
>> I thought that it would be simple to prove in my simulation that, for those instances in which £10 happened to be the value seen in the first envelope, the expected value in the second envelope would still be £10. I ran the simulation and, surprisingly, I'm getting a very slight edge when I swap, contrary to my intuition. I think something in my code might be wrong. I have attached it below for whoever wants to play with it. I'd be grateful for any feedback.
>>
>> # Envelopes simulation:
>> #
>> # There are two envelopes, one has a certain amount of money `x', and the other an
>> # amount `r*x', where `r' is a positive constant (usually r=2 or r=0.5). You are
>> # allowed to choose one of the envelopes and open it. After you know the amount
>> # of money inside the envelope you are given two options: keep the money from
>> # the current envelope or switch envelopes and keep the money from the second
>> # envelope. What's the best strategy? To switch or not to switch?
>> #
>> # Naive explanation: imagine r=2, then you should switch since there is a 50%
>> # chance of the other envelope having 2x and 50% of it having x/2, then your
>> # expected earnings are E = 0.5*2x + 0.5*x/2 = 1.25x; since 1.25x > x you
>> # should switch! But, is this explanation right?
>> #
>> # August 2008, Mario dos Reis
>>
>> # Function to generate the envelopes and their money
>> # r: constant, so that x is the amount of money in one envelope and r*x is the
>> #    amount of money in the second envelope
>> # rdist: a random distribution for the amount x
>> # n: number of envelope pairs to generate
>> # ...: additional parameters for the random distribution
>> # The function returns a 2xn matrix containing the (randomized) pairs
>> # of envelopes
>> generateenv <- function (r, rdist, n, ...) {
>>   env <- matrix(0, ncol=2, nrow=n)
>>   env[,1] <- rdist(n, ...)   # first envelope has `x'
>>   env[,2] <- r*env[,1]       # second envelope has `r*x'
>>   # randomize the envelopes, so we don't know which one from
>>   # the pair has `x' or `r*x'
>>   i <- as.logical(rbinom(n, 1, 0.5))
>>   renv <- env
>>   renv[i,1] <- env[i,2]
>>   renv[i,2] <- env[i,1]
>>   return(renv)   # return the randomized envelopes
>> }
>>
>> # example, `x' follows an exponential distribution with E(x) = 10
>> # we do one million simulations (n=1e6)
>> env <- generateenv(r=2, rexp, n=1e6, rate=1/10)
>> mean(env[,1])   # you keep the randomly assigned first envelope
>> mean(env[,2])   # you always switch and keep the second
>>
>> # example, `x' follows a gamma distribution, r=0.5
>> env <- generateenv(r=.5, rgamma, n=1e6, shape=1, rate=1/20)
>> mean(env[,1])   # you keep the randomly assigned first envelope
>> mean(env[,2])   # you always switch and keep the second
>>
>> # example, a positive 'normal' distribution
>> # First write your own function:
>> rposnorm <- function (n, ...)
>> {
>>   return(abs(rnorm(n, ...)))
>> }
>> env <- generateenv(r=2, rposnorm, n=1e6, mean=20, sd=10)
>> mean(env[,1])   # you keep the randomly assigned first envelope
>> mean(env[,2])   # you always switch and keep the second
>>
>> # example, exponential approximated as an integer
>> rintexp <- function(n, ...) return(ceiling(rexp(n, ...)))   # we use ceiling as we don't want zeroes
>> env <- generateenv(r=2, rintexp, n=1e6, rate=1/10)
>> mean(env[,1])     # you keep the randomly assigned first envelope
>> mean(env[,2])     # you always switch and keep the second
>> i10 <- which(env[,1]==10)
>> mean(env[i10,1])  # Exactly 10
>> mean(env[i10,2])  # ~ 10.58 - 10.69 after several trials
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
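An editorial aside, not part of the archived thread: the deviation Mario reports is not a bug in his code, because the conditional split between £5 and £20 depends on the prior distribution of x. For the integer-valued exponential prior used in the script (x is the ceiling of an Exp(rate = 1/10) draw), P(x = k) is proportional to exp(-(k-1)/10), so the odds that a first envelope showing £10 holds the larger amount (x = 5) rather than the smaller one (x = 10) are exp(0.5) : 1. A short check in R (written for this note, not taken from the thread):

p5  <- exp(-0.4) - exp(-0.5)         # P(x = 5)  when x = ceiling of an Exp(1/10) draw
p10 <- exp(-0.9) - exp(-1.0)         # P(x = 10)
p5 / (p5 + p10)                      # about 0.622, matching the simulated 62.6%
(5 * p5 + 20 * p10) / (p5 + p10)     # about 10.66, matching the observed 10.58 - 10.69

So under this particular prior the 2/3 : 1/3 split does not hold, and switching really is slightly favourable conditional on seeing £10.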
{"url":"https://stat.ethz.ch/pipermail/r-help/2008-August/171911.html","timestamp":"2014-04-17T15:49:23Z","content_type":null,"content_length":"13340","record_id":"<urn:uuid:199bb853-72fd-4c72-944c-680721fa3593>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
How do you calculate shear stress?

Measure the area of the side of the object that is in contact with the object applying the force, in square meters. For example, if your objects are two boxes sliding past one another, measure the area of the side of the box. To find the area, measure the length and width of the side of the object with your ruler and then multiply these two values together. This will be your "A" in the shear stress equation.
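The excerpt stops just short of the formula itself. For reference (standard textbook material, not quoted from the original page), the average shear stress is

τ = F / A

where F is the force applied parallel to the surface, in newtons, and A is the area found as described above, in square meters. For example, a 200 N force acting across a 0.5 m by 0.2 m face gives τ = 200 / (0.5 × 0.2) = 2000 Pa.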
{"url":"http://www.physicsforums.com/showthread.php?p=3874911","timestamp":"2014-04-20T05:46:45Z","content_type":null,"content_length":"29611","record_id":"<urn:uuid:54cce8b2-54c6-4850-b9db-143840332a77>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
Men in a bar - stoch. processes

Hello everyone, I'm trying to solve an applied stochastic processes problem and, even though the example is beautiful, I don't know how to approach it. Here is the problem:

10 men want to get out of a bar, and they do it in the following way. Initially they store all 10 umbrellas in a basket next to the exit from the pub. They go all together and each one picks an umbrella at random. Those who picked their own umbrellas leave, while those who picked a wrong umbrella put it back and return to the pub for another glass of beer. After that they return to the basket and try once again. And so on. Let T be the number of rounds needed for all men to leave, and let N be the total number of beers consumed during the procedure.

a) Compute E(T)
b) Compute E(N)

Can someone help me, and also suggest how to put this problem into an appropriate model? Thank you

stochastic-processes martingales

Exercise? Homework? – Yemon Choi Jan 18 '12 at 7:32
Rencontres numbers! – Timothy Foo Jan 18 '12 at 7:35
Imagine what happens if at some point somebody by mistake leaves with someone else's umbrella. What a nightmare. – Johan Wästlund Jan 18 '12 at 10:10
@Johan: Your comment made my day. – Liviu Nicolaescu Jan 18 '12 at 13:19
I read this as "Men in a bar - scotch processes". – Tom Church Jan 18 '12 at 19:13

4 Answers

Let $t_m$ be the value of $\mathbf{E}(T)$ for the same problem but with $m$ men instead of $10$, and let $p_t$ be the probability that a random element of $S_m$ fixes exactly $m-t$ elements. The following formula follows from the law of total expectation: $$ t_m = 1 + t_0 p_0 + t_1 p_1 + \cdots + t_m p_m. $$ Note, however, by linearity of expectation, $$ 0 p_0 + 1 p_1 + \cdots + m p_m = m-1. $$ Indeed, the LHS counts the expected number of points which are not fixed by the permutation, and each individual point is not fixed with probability $(m-1)/m$. In any case, it follows from the previous two equations that $t_m = m$ by induction: $$ t_m - t_m p_m = 1 + 0 p_0 + \cdots + (m-1)p_{m-1} = m - mp_m. $$ (Note $p_m<1$.) So for $m=10$, $\mathbf{E}(T) = 10$.

Dear Sean, thank you for this wonderful answer. It helps me very much. Unfortunately I don't understand how to get a precise derivation of the first formula. Could you help me there again? – Peter Jan 20 '12 at 8:38
It follows from en.wikipedia.org/wiki/Law_of_total_expectation. Suppose there are $m$ men remaining, and that they all go to fetch their umbrellas. This adds 1 to the rounds-count, hence the initial 1. After the round, with probability $p_k$, there are exactly $k$ men remaining, and the expected number of rounds to get rid of them all is $t_k$. – Sean Eberhard Jan 20 '12 at 18:06

Here is an argument that shows that the expected number of drinks is $m^2/2$ for $m>1$, as noted by Barry Cipra. This is equivalent to saying that the expected number of drinks that an arbitrary person $A$ takes is $m/2$. Let $X_m$ be the expected number of drinks per person with $m$ participants.
If $m\geq 2$, we get the equation $$X_m = \frac{m-1}m\left(1+a_2X_2+\cdots + a_mX_m\right),$$ where the factor $(m-1)/m$ is the probability that person $A$ picks somebody else's umbrella in the first round (otherwise he drinks nothing), the term 1 comes from the drink this causes him to take, and $a_k$ is the probability of exactly $k$ people picking someone else's umbrella in the first round, conditioning on $A$ doing so (trivially $a_0 = a_1 = 0$). Since $a_m(m-1)/m\neq 1$, this equation uniquely determines $X_m$ if we know $X_2,\dots,X_{m-1}$. Therefore we have the right to commit the cardinal sin of assuming that the induction step does go through, and just check that the resulting equation is consistent! Provided $X_k=k/2$ (linear!) for $k=2,\dots,m$, we can replace the terms $a_2X_2+\cdots + a_mX_m$ by $(m-1+1/(m-1))/2$, since the expected number of people who take their own umbrella, given that $A$ doesn't, is $1-1/(m-1)$ (it is 2 if he does). The equation then reads $$\frac{m}2 = \frac{m-1}m\left(1+\frac{m-1+\frac1{m-1}}{2}\right),$$ which simplifies to...I can't resist... $$0 = 0, \qquad \text{QED}.$$

Johan, very nice! There ought to be a slick argument along the lines of: if the process sends one man home per round on average and hence takes, on average, $m$ rounds to empty the bar, then the average tippler survives (to drink again) through $m/2$ rounds, for a total, on average, of $m \times m/2$ drinks. But that just feels like it's playing too fast and loose with the law(s) of averages. – Barry Cipra Jan 19 '12 at 15:24

This is essentially the same answer as Sean Eberhard's, without the rigor: When you parcel out $m$ umbrellas to $m$ men, each man has a $1/m$ chance of getting his own umbrella, so the expected number of correctly matched umbrellas is $m \times 1/m = 1$. Hence on average one man (and his own umbrella) goes home each round. So if you start with $m$ men, it'll take, on average, $m$ rounds to empty the bar. The expected number of drinks consumed appears to be $m^2/2$ (for $m>1$), but I don't have a clever derivation for this. Anybody?

This seems to involve Rencontres numbers $D_{n,r}$, the number of permutations in the symmetric group $S_n$ with $r$ fixed points; for example, see OEIS Rencontres numbers or this post, The number of cycles in a random permutation, in the blog of Professor Tao. Let $P_n$ be the set of partitions of $10$ into $n$ (not necessarily distinct and not necessarily non-zero) components, where order matters. I.e., $$ P_n = \{(a_1,a_2,\dots,a_n): \sum_{i}a_i = 10, 0 \leq a_i \leq 10\}. $$ Then the probability that $T=n$ is $$ P(T=n) = \sum_{\pi = (a_i)\in P_n}\left(\frac{D_{10,a_1}}{10!}\right)\left(\frac{D_{10-a_1,a_2}}{(10-a_1)!}\right)\dots\left(\frac{D_{10-a_1-\dots-a_{n-1},a_n}}{(10-\sum_{i=1}^{n-1} a_i)!}\right). $$
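A quick numerical sanity check on the closed-form answers above, E(T) = m and E(N) = m^2/2 (an editorial aside, not part of the MathOverflow thread; the helper name simulate_bar is made up for this sketch):

simulate_bar <- function(m = 10) {
  remaining <- 1:m
  rounds <- 0
  beers <- 0
  while (length(remaining) > 0) {
    rounds <- rounds + 1
    pick <- remaining[sample.int(length(remaining))]  # random assignment of the remaining umbrellas
    stay <- remaining[pick != remaining]              # men who picked a wrong umbrella
    beers <- beers + length(stay)                     # each of them orders another beer
    remaining <- stay
  }
  c(T = rounds, N = beers)
}
set.seed(1)
rowMeans(replicate(1e4, simulate_bar(10)))            # roughly T = 10, N = 50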
{"url":"http://mathoverflow.net/questions/85963/men-in-a-bar-stoch-processes/85967","timestamp":"2014-04-17T04:27:02Z","content_type":null,"content_length":"72461","record_id":"<urn:uuid:f98550e7-5a30-4b25-ae0a-045e044a095b>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
Ellicott, MD Algebra Tutor

Find an Ellicott, MD Algebra Tutor

...Whatever the case may be, everyone has their story and the potential for improvement. If you're interested in working with me, feel free to send me an email and inquire about my availability. Currently, I do all of my tutoring at local libraries within 5 miles of Rockville. I took a semester of Discrete Math in College and earned an A in the class.
9 Subjects: including algebra 1, algebra 2, physics, geometry

...Smith School of Business, University of Maryland with a concentration in Finance. With over three years' experience in Operations Research at United Parcel Service (UPS) along with my experience at Smith, I believe I am a strong candidate for tutoring. During my academic years at Smith, I enrolled in financial management courses to advance my skills in Finance and Mathematics.
10 Subjects: including algebra 2, trigonometry, statistics, SAT math

...An amateur artist most of my life, I have been using Adobe Photoshop for as long as it has existed. I am currently using CS5 and have used most of the previous versions to date. While I have mostly used it for photo editing and manipulation in recent years, I have experience in digital painting, fractal art, coloring methods and layer work.
43 Subjects: including algebra 1, reading, Spanish, English

...I have been a language analyst in the Air Force for the last 10 years and am beginning to work towards my Master's Degree in Teaching Chinese Mandarin. While learning Chinese I tutored Spanish to another military student who then was able to pass his required Spanish test. My listening and read...
4 Subjects: including algebra 1, Spanish, Chinese, prealgebra

...My methods prepare students for algebra by breaking down the language of numbers and feeding them the techniques for understanding the wonderful logic of algebraic reasoning. My students are already performing algebraic operations when we finish, and that good start engenders the confidence nece...
40 Subjects: including algebra 1, reading, English, biology
{"url":"http://www.purplemath.com/Ellicott_MD_Algebra_tutors.php","timestamp":"2014-04-17T15:41:50Z","content_type":null,"content_length":"24211","record_id":"<urn:uuid:132d8606-d0cc-4d25-be8e-777fac1f47f5>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
Quadratic equation - Problem

January 8th 2008, 01:46 AM

Find the factors of $(4x^2+13x-12)=0$ and use them to solve the quadratic equation.
Solve the quadratic equation $2x^2-1.6x-1.3=0$.
I've been reading Higher Engineering Mathematics by John Bird but I still don't get it or know where to start. Help is much appreciated. If you can break it into steps so I don't get lost again, that would help.

January 8th 2008, 02:53 AM
mr fantastic

Find the factors of $(4x^2+13x-12)=0$ and use them to solve the quadratic equation:
Mr F says: You really must (re-?)learn to factorise.
$(4x^2+13x-12)=0 \Rightarrow 4x^2 + 16x - 3x - 12 = 0$
$\Rightarrow 4x(x + 4) - 3(x + 4) = 0 \Rightarrow (4x - 3)(x + 4) = 0$.
Therefore either 4x - 3 = 0 or x + 4 = 0. Therefore x = .......

Solve the quadratic equation
Mr F says: Use the quadratic formula (there are probably more websites explaining the quadratic formula and how to use it than you've had hot breakfasts). Or factorise:
$2x^2-1.6x-1.3=0 \Rightarrow 2x^2 - 2.6x + x - 1.3 = 0$
$\Rightarrow 2x(x - 1.3) + 1(x - 1.3) = 0 \Rightarrow (2x + 1)(x - 1.3) = 0$.
Therefore either 2x + 1 = 0 or x - 1.3 = 0. Therefore x = ....

I've been reading Higher Engineering Mathematics by John Bird but I still don't get it or know where to start. Help is much appreciated.
Mr F says: Uh huh. The hard fact is that a textbook with that title is gonna be pitching way over your head, bud. You need a maths textbook pitched at high school level. Or google solving quadratic equations and find a website that floats your boat.

If you can break it into steps so I don't get lost again, that would help.
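An editorial cross-check, not part of the original forum thread: R's polyroot(), which takes coefficients from the constant term upward, confirms both factorisations numerically.

polyroot(c(-12, 13, 4))     # roots of 4x^2 + 13x - 12:  0.75 and -4
polyroot(c(-1.3, -1.6, 2))  # roots of 2x^2 - 1.6x - 1.3: 1.3 and -0.5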
{"url":"http://mathhelpforum.com/algebra/25754-quadratic-equation-problem-print.html","timestamp":"2014-04-19T21:19:02Z","content_type":null,"content_length":"7557","record_id":"<urn:uuid:1ce24519-54c1-4a75-8fc3-b8467e1f1a94>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
The magical maze

January 1998

The Royal Institution of Great Britain traditionally celebrates Christmas by inviting young students to a set of lectures given by a famous scientist. This year it was the turn of Ian Stewart, a professor of mathematics at the University of Warwick, to talk about "The Magical Maze: The Natural World and the Mathematical Mind". The lectures were broadcast during the Christmas holidays by the BBC.

The lectures set out to answer the question "What is mathematics?". According to Ian Stewart it's a maze. PASS Maths readers may already be familiar with some of the topics found in the maze. In the first lecture the audience was invited to count the number of petals on a sunflower; how many are there? (See "The life and numbers of Fibonacci" in Issue No. 3.)

Mathematics can help us find the answers to lots of questions - and can help us ask some that we never even thought of. What have cockroaches and horses got in common? What's the difference between random and chaotic? And why does Ian Stewart wear odd socks? Sneaky readers may even have found the answer to this issue's puzzle.

You can read more about the Royal Institution Christmas lectures from the following web sites:
{"url":"http://plus.maths.org/content/magical-maze","timestamp":"2014-04-21T02:12:04Z","content_type":null,"content_length":"19399","record_id":"<urn:uuid:faf9cc5a-6ce6-4363-b3aa-924967edbe14>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
Mountlake Terrace Algebra Tutor

...I have created many PowerPoint presentations with various transitions, templates, and embedded elements from other Office programs. Simplicity and consistency are key elements of a good PP presentation. I started teaching prealgebra in 1984, my very first teaching job.
39 Subjects: including algebra 1, algebra 2, reading, English

...I held an active Certified Information Systems Security Professional (CISSP) certification for about six years (until October 2010). I prepped a 10-year old recently in ISEE math. He took the test and, coupled with his above average verbal skills, was able to get on the waiting list at Overlake. ...
43 Subjects: including algebra 1, algebra 2, chemistry, geometry

I have worked as a classroom teaching assistant and tutor since 2007. In the classroom, I have helped teach introductory physics classes at the University of Washington and Washington University in St. Louis. I also have worked with these students individually on homework problems or test preparation.
17 Subjects: including algebra 2, algebra 1, reading, English

...I completed algebra-based physics at the University of Washington. I tutored peers in junior high and high school in algebra. I received exemplary marks in the entire introductory Biology series at the University of Washington.
16 Subjects: including algebra 1, algebra 2, chemistry, reading

...I later went on to teach Algebra 1, and found that there were many of my students who needed extra time and review with their Prealgebra learning. It was then that I started offering lunchtime tutoring for any and all students requiring extra help. I have an A.A. in Mathematics, and almost completed a B.S. in Mathematics as well.
11 Subjects: including algebra 1, algebra 2, reading, writing
{"url":"http://www.purplemath.com/mountlake_terrace_algebra_tutors.php","timestamp":"2014-04-17T07:28:47Z","content_type":null,"content_length":"24210","record_id":"<urn:uuid:6720fcda-5f30-4eac-9385-93372e61aa5d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
Integrable dynamical system - relation to elliptic curves

From a seminar on the KdV equation I know that for an integrable dynamical system the trajectory in phase space lies on tori. The Wikipedia article (http://en.wikipedia.org/wiki/Integrable_system) says:

When a finite dimensional Hamiltonian system is completely integrable in the Liouville sense, and the energy level sets are compact, the flows are complete, and the leaves of the invariant foliation are tori. There then exist, as mentioned above, special sets of canonical coordinates on the phase space known as action-angle variables, such that the invariant tori are the joint level sets of the action variables. These thus provide a complete set of invariants of the Hamiltonian flow (constants of motion), and the angle variables are the natural periodic coordinates on the torus. The motion on the invariant tori, expressed in terms of these canonical coordinates, is linear in the angle variables.

As I also know that an elliptic curve is in fact a kind of torus, the natural question arises: are the tori for quasi-periodic motion in action-angle variables of some dynamical systems related in any way to an algebraic structure like an elliptic curve? Maybe some small dynamical systems and some elliptic curves are related in some way?

The most interesting thing in this matter, for me, is the size of the space of elliptic functions: it is quite small, since every elliptic function is a rational function of the Weierstrass function and its derivative. Does this property have any analogy in the theory of integrable dynamical systems? As isomorphic elliptic curves share some invariants, it is also interesting whether these invariants have any "dynamical meaning".

big-picture elliptic-curves integrable-systems ds.dynamical-systems

This is a great question, but I do want to point out that KdV is an infinite-dimensional integrable system, so I am not sure if the tori interpretation makes sense in this setting. In particular, KdV admits infinitely many conservation laws! Drazin and Johnson's "Solitons: an introduction" expands on this. I realize this has no real bearing on the question on hand. – Justin Curry Feb 8 '10 at 17:19

You are right that KdV has infinitely many conservation laws, and it also gives quasi-periodic motion! So probably it is a case of "infinite dimensional" tori, which obviously is not related to elliptic curves. I am a physicist by education (but not working as a scientist; I treat math as fun and a hobby), so KdV is something which is an example of an integrable system with nontrivial equations of motion. – kakaz Feb 8 '10 at 19:00

2 Answers

If your system is algebraic, then you bet! More generally, you can get abelian varieties as the fibers for many interesting integrable systems. Google the following for more: algebraic complete integrable Hamiltonian system, Calogero-Moser System, Hitchin System. As for elliptic curves, they'll only pop out in low dimensional cases, because otherwise, the fibers have to have larger dimension.

As for the latter, it depends what you might want. I've seen the definition of integrable given by "can be solved by a sequence of quadratures", and in this terminology you can check that for an algebraic system you're always working with the global section of the theta function on the abelian variety, which is the unique (up to scaling) global section of the theta divisor on the abelian variety, which for an elliptic curve is just the Weierstrass function.
If, in the low dimensional case, elliptic curves appear in this question, is it true that isomorphic curves arise in equivalent algebraic dynamical systems? Is this a kind of homomorphism between structures? – kakaz Feb 8 '10 at 16:31

Well, really what's going on is that you have families of elliptic curves. I'm not an expert, really, but isomorphic families give equivalent integrable systems, and I'm not sure about what notion of equivalent systems you're using (and even if I did, I'm not sure I'd be able to say much). – Charles Siegel Feb 8 '10 at 17:30

I was thinking of the case when moving from one curve to another in the equivalence class gives, for example, a change of variables in the equation of motion, or something like that - some kind of reparametrization which is not merely formal but also has some kind of "physical meaning". – kakaz Feb 8 '10 at 19:03

Perhaps. I don't know enough to say for certain. – Charles Siegel Feb 8 '10 at 20:08

"The de-geometrisation of mathematical education and the divorce from physics sever these ties. For example, not only students but also modern algebro-geometers on the whole do not know about the Jacobi fact mentioned here: an elliptic integral of first kind expresses the time of motion along an elliptic phase curve in the corresponding Hamiltonian system."

From V.I. Arnold, here: http://pauli.uni-muenster.de/~munsteg/arnold.html

Definitely I should learn more in this area....
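A concrete low-dimensional instance of the Jacobi fact in the quotation (added here as an illustration; it is not taken from the thread): for the pendulum Hamiltonian $H(p,q) = p^2/2 - \cos q$, the level set $H = E$ is the phase curve $p^2 = 2(E + \cos q)$. Setting $w = \cos q$ turns the travel time into $$t = \int \frac{dq}{p} = -\int \frac{dw}{\sqrt{2(E+w)(1-w^2)}},$$ an elliptic integral of the first kind, and $y^2 = 2(E+w)(1-w^2)$ is, for generic $E$, exactly an elliptic curve; the time of motion along the phase curve is the elliptic integral on that curve.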
{"url":"http://mathoverflow.net/questions/14638/integrable-dynamical-system-relation-to-elliptic-curves/14657","timestamp":"2014-04-19T12:34:10Z","content_type":null,"content_length":"63808","record_id":"<urn:uuid:d0041510-322c-4d86-aaeb-a09809c0d978>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
David R. Brillinger
Time Series: Data Analysis and Theory
Classics in Applied Mathematics 36

Intended for students and researchers, this text employs basic techniques of univariate and multivariate statistics for the analysis of time series and signals. It provides a broad collection of theorems, placing the techniques on firm theoretical ground. The techniques, which are illustrated by data analyses, are discussed in both a heuristic and a formal manner, making the book useful for both the applied and the theoretical worker. An extensive set of original exercises is included.

Time Series: Data Analysis and Theory takes the Fourier transform of a stretch of time series data as the basic quantity to work with and shows the power of that approach. It considers second- and higher-order parameters and estimates them equally, thereby handling non-Gaussian series and nonlinear systems directly. The included proofs, which are generally short, are based on cumulants.

This book will be most useful to applied mathematicians, communication engineers, signal processors, statisticians, and time series researchers, both applied and theoretical. Readers should have some background in complex function theory and matrix algebra and should have successfully completed the equivalent of an upper division course in statistics.

Preface to the Classics Edition; Preface to the Expanded Edition; Preface to the First Edition; Chapter 1: The Nature of Time Series and Their Frequency Analysis; Chapter 2: Foundations; Chapter 3: Analytic Properties of Fourier Transforms and Complex Matrices; Chapter 4: Stochastic Properties of Finite Fourier Transforms; Chapter 5: The Estimation of Power Spectra; Chapter 6: Analysis of a Linear Time Invariant Relation Between a Stochastic Series and Several Deterministic Series; Chapter 7: Estimating the Second-Order Spectra of Vector-Valued Series; Chapter 8: Analysis of a Linear Time Invariant Relation Between Two Vector-Valued Stochastic Series; Chapter 9: Principal Components in the Frequency Domain; Chapter 10: The Canonical Analysis of Time Series; Proofs of Theorems; References; Notation Index; Author Index; Subject Index; Addendum: Fourier Analysis of Stationary Processes

2001 / xx + 540 pages / Softcover / ISBN-13: 978-0-898715-01-9 / ISBN-10: 0-89871-501-6 / List Price $82.50 / SIAM Member Price $57.75 / Order Code CL36
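The "Fourier transform of a stretch of data" mentioned in the description is easy to experiment with. The sketch below is illustrative only; it is not taken from the book, and it uses one of several common periodogram normalisations.

set.seed(1)
n <- 256
x <- arima.sim(model = list(ar = 0.6), n = n)   # a simulated AR(1) stretch of data
d <- fft(x - mean(x))                           # finite Fourier transform of the stretch
pgram <- Mod(d)^2 / (2 * pi * n)                # periodogram ordinates
freq <- (0:(n - 1)) / n                         # frequencies in cycles per observation
plot(freq[2:(n %/% 2)], pgram[2:(n %/% 2)], type = "l",
     xlab = "frequency", ylab = "periodogram")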
{"url":"http://www.ec-securehost.com/SIAM/CL36.html","timestamp":"2014-04-16T19:05:01Z","content_type":null,"content_length":"6268","record_id":"<urn:uuid:3636e977-d6a4-4ac2-9b23-c9e8cdd18229>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
Castelnuovo-Mumford regularity and degree of generator

Let $R$ be a polynomial ring over a field $k$, $R = k[x_{1},\dots,x_{n}]$, let $\mathfrak{m}=(x_1,\dots,x_{n})$, and let $M$ be a finitely generated $R$-module. In a paper of Kodiyalam, he defines the Castelnuovo-Mumford regularity of $M$ to be the least integer $m$ such that, for every $j$, the $j$-th syzygy of $M$ is generated in degrees less than or equal to $m+j$. Then he concludes that the Castelnuovo-Mumford regularity of $M/\mathfrak{m}M$ is equal to the maximal degree of a generator of $M$.

Could you please show me why the CM regularity of $M/\mathfrak{m}M$ is equal to the maximal degree of a generator of $M$?

If $I$ and $J$ are two finitely generated ideals of $R$ with $J\subseteq I$, do we then have that the CM regularity of $J$ is less than or equal to the CM regularity of $I$?

ac.commutative-algebra ag.algebraic-geometry

The second question has answer "no." – Charles Staats Oct 8 '12 at 13:35
@Charles Staats: Could you please post a counter-example here? – Knot Oct 8 '12 at 15:20
Try resolving $(x,y)$ and $(x^2,y^2)$ over $R=k[x,y]$. The intuition should in fact be the exact opposite: ideals that are "deeper" in the ring should have larger (i.e. worse) regularity. – Graham Leuschke Oct 8 '12 at 16:32

1 Answer

Notice that $M/\mathfrak{m}M$ is an $R$-module of finite length, so $H^i_{\mathfrak{m}}(M/\mathfrak{m}M) = 0$ for all $i>0$ and $H^0_{\mathfrak{m}}(M/\mathfrak{m}M) = M/\mathfrak{m}M$. Recalling that CM regularity can be computed via local cohomology modules (see Brodmann-Sharp, Local Cohomology), we have $$reg(M/\mathfrak{m}M) = \max \{\, end(H^i_{\mathfrak{m}}(M/\mathfrak{m}M)) + i \mid i = 0,\dots,n \,\}.$$ Here, for a graded $R$-module $N = \oplus_i N_i$, we denote $end(N) = \sup \{i \mid N_i \neq 0\}$. Therefore $reg(M/\mathfrak{m}M) = end(M/\mathfrak{m}M)$, and $end(M/\mathfrak{m}M)$ is equal to the maximal degree of a generator of $M$ by the graded Nakayama lemma.

What is $end$? – Piotr Achinger Oct 9 '12 at 4:17
What do you mean by $end(H^{i}_{\mathfrak{m}}(M/\mathfrak{m}M))$? – Knot Oct 9 '12 at 4:19
@Knot: you said that you read a paper of Kodiyalam about CM regularity. You should study the basic facts of CM regularity before reading this paper. – Pham Hung Quy Oct 9 '12 at 5:05
Thanks for including the definition! – Piotr Achinger Oct 9 '12 at 5:07
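Working out Graham Leuschke's suggested counterexample to the second question (an editorial addition, not part of the thread): over $R = k[x,y]$ the minimal free resolutions are $$0 \to R(-2) \to R(-1)^2 \to (x,y) \to 0, \qquad 0 \to R(-4) \to R(-2)^2 \to (x^2,y^2) \to 0,$$ so with the syzygy definition above $reg((x,y)) = \max\{1-0,\, 2-1\} = 1$ while $reg((x^2,y^2)) = \max\{2-0,\, 4-1\} = 3$. Hence $J = (x^2,y^2) \subseteq I = (x,y)$ but $reg(J) > reg(I)$: inclusion of ideals does not bound the regularity.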
{"url":"http://mathoverflow.net/questions/109139/castelnuovo-mumford-regularity-and-degree-of-generator","timestamp":"2014-04-18T11:12:25Z","content_type":null,"content_length":"59000","record_id":"<urn:uuid:0f493bbb-9ad8-4427-95df-e3f633711457>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
Gravitational Red Shift at the center of a mass... I understand we can calculate that gravitation is neutralized at the center of a mass - say, a planet or star (or anywhere within a spherical shell); I wondered if the GRS is also neutralized, or if being pervaded by gravity - however neutralized - the dilation of space-time persists therein.
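A standard weak-field sketch (an editorial addition, not part of the forum post): inside a uniform-density sphere of mass M and radius R the Newtonian potential is Phi(r) = -GM(3R^2 - r^2)/(2R^3), so Phi(0) = -3GM/(2R), while the force, being the gradient of Phi, vanishes at r = 0. Gravitational time dilation in this limit is d(tau)/dt ≈ 1 + Phi/c^2; it depends on the potential itself, not on its gradient. So a clock at the centre is not "neutralized": it runs slower than a distant clock, and in fact slower than one at the surface, where Phi = -GM/R, even though the net pull at the centre is zero.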
{"url":"http://www.physicsforums.com/showthread.php?t=496609","timestamp":"2014-04-17T04:08:16Z","content_type":null,"content_length":"22815","record_id":"<urn:uuid:67012a28-4709-42fa-ab5a-504c8fd58c1d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00186-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Algebra/Factoring and Complex Numbers: A Review

This subsection is a review only and we take the main results as known. For proofs, see (Birkhoff & MacLane 1965) or (Ebbinghaus 1990).

Just as integers have a division operation— e.g., "$4$ goes $5$ times into $21$ with remainder $1$"— so do polynomials.

Theorem 1.1 (Division Theorem for Polynomials) Let $c(x)$ be a polynomial. If $m(x)$ is a non-zero polynomial then there are quotient and remainder polynomials $q(x)$ and $r(x)$ such that $c(x)=m(x)\cdot q(x)+r(x)$ where the degree of $r(x)$ is strictly less than the degree of $m(x)$.

In this book constant polynomials, including the zero polynomial, are said to have degree $0$. (This is not the standard definition, but it is convenient here.) The point of the integer division statement "$4$ goes $5$ times into $21$ with remainder $1$" is that the remainder is less than $4$— while $4$ goes $5$ times, it does not go $6$ times. In the same way, the point of the polynomial division statement is its final clause.

Example 1.2 If $c(x)=2x^3-3x^2+4x$ and $m(x)=x^2+1$ then $q(x)=2x-3$ and $r(x)=2x+3$. Note that $r(x)$ has a lower degree than $m(x)$.

Corollary 1.3 The remainder when $c(x)$ is divided by $x-\lambda$ is the constant polynomial $r(x)=c(\lambda)$.

The remainder must be a constant polynomial because it is of degree less than the divisor $x-\lambda$. To determine the constant, take $m(x)$ from the theorem to be $x-\lambda$ and substitute $\lambda$ for $x$ to get $c(\lambda)=(\lambda-\lambda)\cdot q(\lambda)+r(\lambda)$.

If a divisor $m(x)$ goes into a dividend $c(x)$ evenly, meaning that $r(x)$ is the zero polynomial, then $m(x)$ is a factor of $c(x)$. Any root of the factor (any $\lambda\in\mathbb{R}$ such that $m(\lambda)=0$) is a root of $c(x)$ since $c(\lambda)=m(\lambda)\cdot q(\lambda)=0$. The prior corollary immediately yields the following converse.

Corollary 1.4 If $\lambda$ is a root of the polynomial $c(x)$ then $x-\lambda$ divides $c(x)$ evenly, that is, $x-\lambda$ is a factor of $c(x)$.

Finding the roots and factors of a high-degree polynomial can be hard. But for second-degree polynomials we have the quadratic formula: the roots of $ax^2+bx+c$ are $\lambda_1=\frac{-b+\sqrt{b^2-4ac}}{2a} \qquad \lambda_2=\frac{-b-\sqrt{b^2-4ac}}{2a}$ (if the discriminant $b^2-4ac$ is negative then the polynomial has no real number roots).

A polynomial that cannot be factored into two lower-degree polynomials with real number coefficients is irreducible over the reals.

Theorem 1.5 Any constant or linear polynomial is irreducible over the reals. A quadratic polynomial is irreducible over the reals if and only if its discriminant is negative. No cubic or higher-degree polynomial is irreducible over the reals.

Corollary 1.6 Any polynomial with real coefficients can be factored into linear and irreducible quadratic polynomials. That factorization is unique; any two factorizations have the same powers of the same factors.

Note the analogy with the prime factorization of integers. In both cases, the uniqueness clause is very useful.

Example 1.7 Because of uniqueness we know, without multiplying them out, that $(x+3)^2(x^2+1)^3$ does not equal $(x+3)^4(x^2+x+1)^2$.

Example 1.8 By uniqueness, if $c(x)=m(x)\cdot q(x)$ then where $c(x)=(x-3)^2(x+2)^3$ and $m(x)=(x-3)(x+2)^2$, we know that $q(x)=(x-3)(x+2)$.
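A concrete instance of Corollary 1.6, added here for illustration (it is not part of the original section): $x^4-1=(x-1)(x+1)(x^2+1)$, where the quadratic factor $x^2+1$ has discriminant $0^2-4\cdot 1\cdot 1=-4<0$ and so, by Theorem 1.5, is irreducible over the reals; by uniqueness, no other factorization into linear and irreducible quadratic factors exists.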
While $x^2+1$ has no real roots and so doesn't factor over the real numbers, if we imagine a root— traditionally denoted $i$ so that $i^2+1=0$— then $x^2+1$ factors into a product of linears $(x-i)(x+i)$.

So we adjoin this root $i$ to the reals and close the new system with respect to addition, multiplication, etc. (i.e., we also add $3+i$, and $2i$, and $3+2i$, etc., putting in all linear combinations of $1$ and $i$). We then get a new structure, the complex numbers, denoted $\mathbb{C}$.

In $\mathbb{C}$ we can factor (obviously, at least some) quadratics that would be irreducible if we were to stick to the real numbers. Surprisingly, in $\mathbb{C}$ we can not only factor $x^2+1$ and its close relatives, we can factor any quadratic. $ax^2+bx+c= a\cdot \big(x-\frac{-b+\sqrt{b^2-4ac}}{2a}\big) \cdot \big(x-\frac{-b-\sqrt{b^2-4ac}}{2a}\big)$

Example 1.9 The second degree polynomial $x^2+x+1$ factors over the complex numbers into the product of two first degree polynomials. $\big(x-\frac{-1+\sqrt{-3}}{2}\big) \big(x-\frac{-1-\sqrt{-3}}{2}\big) = \big(x-(-\frac{1}{2}+\frac{\sqrt{3}}{2}i)\big) \big(x-(-\frac{1}{2}-\frac{\sqrt{3}}{2}i)\big)$

Corollary 1.10 (Fundamental Theorem of Algebra) Polynomials with complex coefficients factor into linear polynomials with complex coefficients. The factorization is unique.

• Ebbinghaus, H. D. (1990), Numbers, Springer-Verlag.
• Birkhoff, Garrett; MacLane, Saunders (1965), Survey of Modern Algebra (Third ed.), Macmillan.
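As a closing numerical illustration of Example 1.9 and Corollary 1.10 (an editorial addition, not part of the wikibook section), R's polyroot function, given coefficients from the constant term upward, returns the two complex roots directly:

polyroot(c(1, 1, 1))   # roots of x^2 + x + 1: -0.5 + 0.866i and -0.5 - 0.866i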
{"url":"http://en.m.wikibooks.org/wiki/Linear_Algebra/Factoring_and_Complex_Numbers:_A_Review","timestamp":"2014-04-21T12:41:04Z","content_type":null,"content_length":"31149","record_id":"<urn:uuid:6b4cbc25-f309-4a5b-bd87-569656f94901>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00444-ip-10-147-4-33.ec2.internal.warc.gz"}