Archived: Optimized Collision detection in Flash Lite
Revision as of 09:16, 14 May 2013 by hamishwillee (Talk | contribs)

Archived: This article is archived because it is not considered relevant for third-party developers creating commercial solutions today. We do not recommend Flash Lite development on current Nokia devices, and all Flash Lite articles on this wiki have been archived. Flash Lite has been removed from all Nokia Asha and recent Series 40 devices and has limited support on Symbian. Specific information for Nokia Belle is available in Flash Lite on Nokia Browser for Symbian. Specific information for older Series 40 and Symbian devices is available in the Flash Lite Developers Library.

Article Metadata
Code Example
Created: manikantan (25 Apr 2009)
Last edited: hamishwillee (14 May 2013)

Collision detection plays an important role in game design. In most games, some action must be taken when two objects collide. In a space-shooter game, for example, an explosion is triggered when a bullet hits a spaceship. There are many ways to test for collisions; the simplest for a Flash Lite developer is the hitTest() function.

Simplest Way

The code below checks for a collision between two movie clips, mc and mc2. It is assumed that both are already on the stage and share the same Y coordinate (just for simplicity). Paste the following code on an actions layer:

onEnterFrame = function () {
    mc._x += 5; // move mc to the right each frame
    if (mc._x >= Stage.width) {
        mc._x = -mc._width; // wrap around the stage boundary
    }
    if (mc.hitTest(mc2)) {
        trace("collided");
    }
};

Here, we move one movie clip (mc) and wrap its motion around the boundaries of the Stage. Whenever mc and mc2 collide, the trace message indicates a collision. However, the hitTest() function in Flash involves heavy computation.
This is because hitTest() assumes a generalized algorithm to check collisions. In our case, however, with two circular objects, that generality is not needed: we can simplify the process and reduce the computation involved.

onEnterFrame = function () {
    mc._x += 5;
    if (mc._x >= Stage.width) {
        mc._x = -mc._width;
    }
    if (Math.ceil(mc2._x) == Math.ceil(mc._x + mc._width)) {
        trace("collided");
    }
};

Here, as both movie clips share the same Y coordinate, we detect the collision by checking only the X coordinate values. If you execute the code above, you will see that the first collision is detected by this snippet. We use Math.ceil() to round the X coordinate values to integers, because the clips' positions are fractional and an exact equality test would otherwise fail.

Optimized approach

However, this may be an artificial case, as the two bodies whose collision needs to be detected need not share the same Y coordinate. We therefore introduce a more general procedure to check for collisions. The underlying logic is to compute the distance between the two bodies and compare it with the sum of their radii. Keeping the same two movie clips on stage and moving mc2 slightly down (incrementing its Y coordinate), we demonstrate the general case. The center of each object is (mc._x + mc._width/2, mc._y + mc._height/2); this holds when the registration point of each clip is its top-left corner. If the distance between the centers is less than the sum of the radii (half the sum of the widths), a collision is said to have occurred. Recall that the distance between two points is

Dist = sqrt((x1 - x2)^2 + (y1 - y2)^2)

To check this, paste the following code snippet and try it out yourself.
var widthsum = (mc._width + mc2._width) / 2; // sum of the two radii
var Y_coord = mc._y + mc._height / 2;        // center Y of mc (constant)
var Mc2_x = mc2._x + mc2._width / 2;         // center of the static clip mc2
var Mc2_y = mc2._y + mc2._height / 2;

onEnterFrame = function () {
    mc._x += 5;
    if (mc._x >= Stage.width) {
        mc._x = -mc._width;
    }
    var distanceBetween = dist(mc._x + mc._width / 2, Y_coord, Mc2_x, Mc2_y);
    if (distanceBetween < widthsum) {
        trace("collided");
    }
};

function dist(x1:Number, y1:Number, x2:Number, y2:Number):Number {
    return Math.sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));
}

Note that since mc2 is static, its center is computed once, outside the onEnterFrame handler; the same applies to the static Y coordinate of mc and to the sum of the radii. It pays to move computation out of looping constructs and onEnterFrame functions. This check is the most fundamental way to detect collisions between circular objects.

Download Code

A sample code file supporting this article is available at Media:Collision.zip.

--manikantan 08:46, 25 April 2009 (EEST)
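The distance-versus-radii test above translates readily to other languages. Here is a minimal Python sketch of the same circle-versus-circle check (Python is used purely for illustration, since the article's own code is ActionScript 2, and the function name circles_collide is ours):

```python
import math

def circles_collide(x1, y1, r1, x2, y2, r2):
    """Two circles overlap when the distance between their
    centers is less than the sum of their radii."""
    return math.hypot(x2 - x1, y2 - y1) < r1 + r2

# Radius-10 circles with centers 15 units apart overlap;
# centers 25 units apart do not.
print(circles_collide(0, 0, 10, 15, 0, 10))  # True
print(circles_collide(0, 0, 10, 25, 0, 10))  # False
```

As in the ActionScript version, anything that does not change per frame (the radii sum, a static object's center) should be computed once, outside the per-frame loop.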
binomial theorem help
March 5th 2009, 10:14 AM

hey, I'm really struggling on this question, I just can't seem to find the right method to get to the answer, which is 0.4375.

Find and simplify the term independent of x in the expansion $\left({\frac{1}{2x}}+x^3\right)^8$

thanks xx

March 5th 2009, 11:38 AM
Binomial Coefficients

Hello alyson

$\left({\frac{1}{2x}}+x^3\right)^8 = \left(\frac{1+2x^4}{2x}\right)^8 = \left(\frac{1}{2x}\right)^8(1+2x^4)^8$

You now need to find the term in $x^8$ in the expansion of $(1+2x^4)^8$, to cancel the $\frac{1}{x^8}$ outside. This term is:

$\binom{8}{2}(2x^4)^2 = \frac{8\times 7}{2}\times 2^2\,x^8 = 112x^8$

So the coefficient is:

$\frac{1}{2^8}\times \frac{8\times 7}{2}\times 2^2 = \frac{112}{256} = \frac{7}{16} = 0.4375$
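The result can also be checked mechanically. A short Python sketch (not part of the original thread) scans the general term C(8,k) · (1/2)^(8-k) · x^(4k-8) for the exponent-zero case:

```python
from math import comb
from fractions import Fraction

def independent_term():
    # General term of (1/(2x) + x^3)^8:
    #   C(8,k) * (1/(2x))^(8-k) * (x^3)^k = C(8,k) * (1/2)^(8-k) * x^(4k-8)
    # The term is independent of x when 4k - 8 = 0, i.e. k = 2.
    for k in range(9):
        if 4 * k - 8 == 0:
            return comb(8, k) * Fraction(1, 2) ** (8 - k)

print(independent_term())  # 7/16, i.e. 0.4375
```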
Bridgewire Heating

Proceedings of the Sixteenth Symposium on Explosives and Pyrotechnics, Essington, PA, April 1997.

Barry T. Neyer
EG&G Optoelectronics/Star City
Miamisburg, OH 45343-0529

Contact Address:
Barry T. Neyer
PerkinElmer Optoelectronics
1100 Vanguard Blvd
Miamisburg, OH 45342
(937) 865-5586
(937) 865-5170 (Fax)

Electric currents are applied to an electro-explosive device for several reasons. One is to function the device. Another is to check the electrical resistance of the circuit to ensure that the device will work as expected. When checking the resistance, it is important to ensure that the device does not function and does not degrade. Thus, it is desirable to know the current-induced temperature rise in the bridgewire. This report describes calculations of that temperature rise. It uses a simplified physical model that nevertheless gives a good match to the physics of the device. Using this model, it is possible to construct an exact solution to the physical problem. This report gives the derivation and shows the temperature rise as a function of position along the bridgewire and of time. The temperature information allows design engineers to determine the allowable current when performing continuity tests, as well as to estimate the effects of various RF-induced currents on the temperature of the device.

Figure 1: Schematic Diagram of Electro-Explosive Device

Electro-explosive components are widely used to convert electrical energy into an explosive output. A schematic diagram of an electro-explosive device (EED) is shown in Figure 1. Current from an external source is applied via metallic pins to a resistive bridgewire. The current heats the bridgewire, which in turn heats the explosive, causing it to ignite. A similar type of device is an exploding bridgewire device (EBW).
A rapidly rising current from a capacitor discharge unit causes the wire to heat rapidly and burst; the bursting wire causes the explosive to either deflagrate or detonate.

The proper functioning of such a device requires that the bridgewire remain intact and not degrade before the unit is used. Because the bridgewire can be broken during the explosive loading process, or can degrade due to incompatibility with the explosive mixture, it is often desirable to verify that the bridgewire has not been damaged. The most commonly used method is to measure the resistance of the device before and after loading the explosive; the two values should be approximately the same. The resistance can also be measured as the device ages to ensure that no degradation occurs. Implicit in this measurement technique is the assumption that the measurement does not in any way damage the bridgewire-explosive interface. This requires that the current used to perform the resistance measurement be low enough to raise the temperature only minimally; thus, current-limiting resistors must be used.

This paper calculates the temperature rise in an electro-explosive device. The calculation is exact for an electro-explosive device of a certain simple geometry. While detonators are not usually made in this simple geometric pattern, the calculation still provides a realistic temperature profile for most electro-explosive devices.

Exact Solution

Figure 2: Current Flow

Consider a bridgewire of cross-sectional area a carrying a current I, as shown in Figure 2. The energy deposited by the current flow in the small volume of length Δz shown in Figure 2 during a time Δt is

  ΔE = (I²s / a) Δz Δt     (1)

where s is the bulk resistivity of the wire. Heat flows in the wire wherever there is a thermal gradient. The net heat flow into the small volume of wire is

  ΔQ = K a (∂²T/∂z²) Δz Δt     (2)

where K is the thermal conductivity of the wire.
The temperature change of the small volume is governed by

  ρ C_p (a Δz) ΔT = ΔE + ΔQ     (3)

where ρ is the bulk density and C_p the specific heat of the wire material. Combining the heat and temperature equations and dividing by a Δz Δt yields

  ρ C_p (ΔT/Δt) = I²s/a² + K (∂²T/∂z²)     (4)

Taking the limit as Δt → 0 yields

  ρ C_p (∂T/∂t) = K (∂²T/∂z²) + I²s/a²     (5)

Equation (5) is the well-known one-dimensional heat diffusion equation with a current source term. The general solution of this linear partial differential equation is found by finding a particular solution and adding it to the general solution of the homogeneous equation. The homogeneous solution can be written as a sum of terms of the form

  [A_n cos(b_n z) + B_n sin(b_n z)] exp(−D b_n² t)     (6)

where D = K/(ρ C_p) is the thermal diffusivity. The particular solution depends on the boundary conditions. Bridgewires used in explosive devices are usually thin resistive elements bonded to thicker, low-resistance pins. The pins are usually a factor of ten or more larger in diameter than the bridgewire, and are often embedded in a composition that can dissipate heat. A good approximation, therefore, is to assume that the pins remain at ambient temperature. (The calculations that follow will show that this assumption is very realistic in most cases.) If the wire is of length l, a particular solution of Equation (5) with these boundary conditions is

  T(z) = T_0 + (I²s / (2Ka²)) (l²/4 − z²)     (8)

where z measures the distance from the midpoint of the wire and T_0 is the ambient temperature. That Equation (8) satisfies Equation (5) can be readily verified by substituting it into the differential equation. The set of solutions to the homogeneous equation must allow for any initial condition. Assume that at time t = 0 the wire is at uniform temperature T_0, when a current I "turns on." Because the endpoints are assumed to remain at T_0 throughout, a convenient choice of homogeneous solutions is b_n l/2 = nπ/2. Because of the symmetry of the problem, the B_n terms in Equation (6) are identically zero, and the boundary conditions further require the even A_n terms to be zero.
The Fourier coefficients are determined by the initial condition: because the temperature is required to be T_0 at time t = 0, the coefficients are chosen to exactly cancel the parabolic term in Equation (8) at t = 0. Substituting the parabolic term (l²/4 − z²) for f(z) in the Fourier cosine expansion yields the general solution:

  T(z,t) = T_0 + (I²s / (2Ka²)) [ (l²/4 − z²) − Σ_{n odd} (8l²/(n³π³)) (−1)^((n−1)/2) cos(nπz/l) exp(−D(nπ/l)² t) ]     (10)

Inspection of Equation (10) shows that the solutions to the homogeneous equation all fall off exponentially. Thus, after a short time, the temperature profile becomes parabolic, determined by the first two terms in the bracket in Equation (10). The time constants for exponential decay are determined by the length and the diffusivity constant of the bridgewire. The thermal diffusivity constant D for the nickel wire used in many EEDs is 23.5 mm²/s. A one-millimeter-long bridgewire therefore has a thermal decay constant of D(π/l)² ≈ 232 inverse seconds, so all transient effects are gone within several tens of milliseconds.

Numerical Values for a Typical EED

Figure 3 shows a typical temperature rise above ambient as a function of time, assuming the above diffusivity constant. The actual temperature at any point is found by multiplying the curve by the coefficient in front of the bracket in Equation (10). A different diffusivity constant would produce similar curves, but on different time scales. The above discussion assumed a constant current; however, if the current changes slowly compared to the thermal decay time, the upper curve in Figure 3 will also represent the instantaneous temperature increase as a function of the current.

Figure 3: Temperature Rise in Constant Current Bridgewire

The curve and analysis show that the peak temperature is given by:

  T_peak = T_0 + I²R l² / (8 D m C_p)     (11)

where R = ls/a is the resistance and m = ρla is the mass of the bridgewire. Equation (11) gives the final temperature at the center of the wire. Inspection of Figure 3 shows that the temperature rise is a linear function of time for short times.
Taking the limit t → 0 in Equation (10) yields the short-time behavior of the temperature at the center of the wire:

  T(0, t) ≈ T_0 + I²R t / (m C_p)     (12)

Equation (12) is also the temperature that would be derived assuming no heat flow at all. (The limit t → 0 is the same as D → 0, because D and t appear only as a product.) Figure 3 shows that the temperature is essentially at its maximum after 40 ms of current flow. For a constant current of 0.3 A flowing for 40 ms in a 2 mil diameter bridgewire, the above calculations use the following parameters and yield the following temperature rises:

│ Parameter                       │ Symbol │ Value                                     │
│ Bridgewire length               │ l      │ 1 mm                                      │
│ Bridgewire cross section        │ a      │ 2.027 × 10⁻³ mm² (2 mil diameter)         │
│ Bridgewire density              │ ρ      │ 8.9 g/cm³                                 │
│ Bridgewire mass                 │ m      │ 1.804 × 10⁻⁵ g                            │
│ Bridgewire specific heat        │ C_p    │ 0.105 cal/(g·°C) = 0.4395 J/(g·°C)        │
│ Heat diffusion constant         │ D      │ 23.5 mm²/s                                │
│ Resistivity                     │ s      │ 6.32 μΩ·cm (≈ 38 Ω per circular-mil-foot) │
│ Resistance                      │ ls/a   │ 31 mΩ                                     │
│ Current                         │ I      │ 0.3 A                                     │
│ Temp peak                       │        │ 1.9 °C                                    │
│ Temp peak (no heat flow, 40 ms) │        │ 14 °C                                     │
│ Temp peak (no heat flow, 1 s)   │        │ 352 °C                                    │

The importance of heat flow in the calculation is clear if one considers longer-duration currents. Neglecting heat flow would predict a temperature rise of about 350 °C for a one-second 0.3 A current, and a 1000 °C rise for a three-second pulse, all from a 3 mW heat source! The calculation with heat flow shows the realistic 2 °C rise. If the current were on for an extremely long time, the temperature of the entire device would start to rise; however, since the mass of a typical device is 10 or more grams, a million or more times the mass of the bridgewire, that rise would be only a few thousandths of a degree. If the current turns off at a later time, the temperature decays at the same speed as it rose to the steady-state solution: approximately 8 ms after the current stops, the temperature is reduced to 36% of the peak value, to 14% after 16 ms, and so on. The temperature fall is shown in Figure 4.
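The tabulated values can be cross-checked with a short script. The closed forms below, a peak rise of I²Rl²/(8DmC_p) with heat flow and I²Rt/(mC_p) without, are our reading of Equations (11) and (12) from the derivation above, so treat this as an illustrative sketch rather than the paper's own code:

```python
# Parameters from the table (nickel bridgewire, 2 mil diameter)
I = 0.3            # current, A
R = 0.031          # resistance, ohm (31 mOhm)
l = 1.0            # bridgewire length, mm
D = 23.5           # thermal diffusivity, mm^2/s
m = 1.804e-5       # bridgewire mass, g
Cp = 0.4395        # specific heat, J/(g*C)

# Steady-state peak rise at the wire center, with heat flow to the pins
dT_peak = I**2 * R * l**2 / (8 * D * m * Cp)

def dT_no_heat_flow(t):
    # Adiabatic (no heat flow) rise after t seconds of current
    return I**2 * R * t / (m * Cp)

print(round(dT_peak, 1))              # 1.9 C
print(round(dT_no_heat_flow(0.040)))  # 14 C
print(round(dT_no_heat_flow(1.0)))    # 352 C
```

The 3 mW dissipation (I²R) heating only about 18 μg of wire is why the no-heat-flow estimate diverges so quickly from the true steady state.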
Figure 4: Temperature Fall After Removing Current

Repeated pulsing of such a system will not result in any appreciable temperature rise in the bridgewire compared to the rest of the component. Thus, there is no need to wait between current pulses to ensure that no large temperature rise builds up in the system.

The above calculations have assumed no heat transfer to the explosive powder surrounding the bridgewire. Because the thermal diffusion constant of explosive powder is usually many orders of magnitude lower than that of the bridgewire, the approximation is a good one. The three-dimensional heat diffusion equation can be solved for the full problem, but because of the geometry a full calculation is rather difficult. A reasonable approximation would be to calculate the radial heat flow at each point along the wire; such a calculation is beyond the scope of this paper. A full calculation would reduce the temperature of the bridgewire, but by a negligible amount.

The above calculations were also performed under the assumption that the pins remain at ambient temperature. Calculating the temperature rise in the pins is much more complicated than calculating the rise in the bridgewire, because the pins are in intimate contact with the header material, which generally conducts heat much more readily than the explosive powder. If heat conduction to the header material could be neglected, the solution would be given by Equation (10), with the 8 replaced by a 2 and with l representing the length of one pin. The pins used in EEDs are usually at least a factor of 10 larger in diameter than the bridgewire, which by itself makes the temperature rise a factor of 10⁴ smaller, while a length a factor of 10 or more greater makes it a factor of 100 larger. Combined, the two factors result in a temperature rise of at most a few percent of that in the bridgewire.
Thus, even in this unrealistic case, the temperature rise in the pins can safely be ignored. For the more realistic case of pins embedded in a glass or ceramic header, the temperature rise of the pins would be further reduced by several orders of magnitude.

The temperature rise in electro-explosive devices was calculated analytically. The formula allows EED designers to determine the temperature rise in such devices. Proper use of current-limiting resistors allows experimenters to test an EED without concern about generating too large a temperature rise.

This work was initiated to determine the temperature rise in the High Voltage Detonator (HVD) during continuity tests. The HVD is manufactured for Lockheed Martin Aerospace Corporation. This study was partially funded by Lockheed Martin.
Understanding Option Pricing

You might have had success beating the market by trading stocks using a disciplined process that anticipates a nice move either up or down. Many traders have also gained the confidence to make money in the stock market by identifying one or two good stocks that may make a big move soon. But if you don't know how to take advantage of that movement, you might be left in the dust. If this sounds like you, then maybe it's time to consider using options to play your next move. This article will explore some simple factors that you must consider if you plan to trade options to take advantage of stock movements.

Option Pricing

Before venturing into the world of trading options, investors should have a good understanding of the factors that determine the value of an option. These include the current stock price, the intrinsic value, the time to expiration (time value), volatility, interest rates and cash dividends paid. (If you don't know about these building blocks, check out our Option Basics and Options Pricing tutorials.)

There are several options pricing models that use these parameters to determine the fair market value of an option. Of these, the Black-Scholes model is the most widely used. In many ways, options are just like any other investment: you need to understand what determines their price in order to use them to take advantage of moves in the market.

Main Drivers of an Option's Price

Let's start with the primary drivers of the price of an option: current stock price, intrinsic value, time to expiration (time value), and volatility. The current stock price is fairly obvious: the movement of the stock price up or down has a direct - although not equal - effect on the price of the option. As the price of a stock rises, the price of a call option will most likely rise and the price of a put option will fall. If the stock price goes down, the reverse will most likely happen to the price of the calls and puts.
(For related reading, see ESOs: Using The Black-Scholes Model.)

Intrinsic Value

Intrinsic value is the value that any given option would have if it were exercised today. Basically, the intrinsic value is the amount by which the strike price of an option is in the money. It is the portion of an option's price that is not lost due to the passage of time. The following equations can be used to calculate the intrinsic value of a call or put option:

Call Option Intrinsic Value = Underlying Stock's Current Price – Call Strike Price
Put Option Intrinsic Value = Put Strike Price – Underlying Stock's Current Price

The intrinsic value of an option reflects the effective financial advantage that would result from the immediate exercise of that option. Basically, it is an option's minimum value. Options trading at the money or out of the money have no intrinsic value.

For example, let's say General Electric (GE) stock is selling at $34.80. The GE 30 call option would have an intrinsic value of $4.80 ($34.80 – $30 = $4.80) because the option holder can exercise his option to buy GE shares at $30 and then turn around and sell them in the market for $34.80 - a profit of $4.80. In a different example, the GE 35 call option would have an intrinsic value of zero ($34.80 – $35 = -$0.20) because the intrinsic value cannot be negative. Intrinsic value works in the same way for a put option. For example, a GE 30 put option would have an intrinsic value of zero ($30 – $34.80 = -$4.80) because the intrinsic value cannot be negative. On the other hand, a GE 35 put option would have an intrinsic value of $0.20 ($35 – $34.80 = $0.20).

Time Value

The time value of options is the amount by which the price of any option exceeds the intrinsic value. It is directly related to how much time an option has until it expires as well as the volatility of the stock.
The formula for calculating the time value of an option is:

Time Value = Option Price – Intrinsic Value

The more time an option has until it expires, the greater the chance it will end up in the money. The time component of an option decays exponentially. The actual derivation of the time value of an option is a fairly complex equation. As a general rule, an option will lose one-third of its value during the first half of its life and two-thirds during the second half of its life. This is an important concept for securities investors because the closer you get to expiration, the more of a move in the underlying security is needed to impact the price of the option. Time value is often referred to as extrinsic value. (To learn more, read The Importance Of Time Value.)

Time value is basically the risk premium that the option seller requires to provide the option buyer the right to buy/sell the stock up to the date the option expires. It is like an insurance premium on the option; the higher the risk, the higher the cost to buy the option.

Looking again at the example from above, if GE is trading at $34.80 and the one-month to expiration GE 30 call option is trading at $5, the time value of the option is $0.20 ($5.00 - $4.80 = $0.20). Meanwhile, with GE trading at $34.80, a GE 30 call option trading at $6.85 with nine months to expiration has a time value of $2.05 ($6.85 - $4.80 = $2.05). Notice that the intrinsic value is the same; all the difference in the price of the same strike price option is the time value.

An option's time value is also highly dependent on the volatility that the market expects the stock will display up to expiration. For stocks where the market does not expect the stock to move much, the option's time value will be relatively low. The opposite is true for more volatile stocks or those with a high beta, due primarily to the uncertainty of the price of the stock before the option expires.
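The two formulas above can be checked with a short script. This is only a sketch: the helper names are mine, and the prices are the article's GE illustration, not market data.

```python
# Intrinsic and time value of an option, following the GE example above.
# (Prices and strikes are the article's illustrative numbers.)

def intrinsic_value(spot, strike, kind="call"):
    """Intrinsic value is floored at zero: an option cannot be worth
    less than nothing at exercise."""
    if kind == "call":
        return max(spot - strike, 0.0)
    return max(strike - spot, 0.0)

def time_value(option_price, spot, strike, kind="call"):
    """Time (extrinsic) value = option price - intrinsic value."""
    return option_price - intrinsic_value(spot, strike, kind)

spot = 34.80
print(round(intrinsic_value(spot, 30, "call"), 2))   # 4.8
print(round(intrinsic_value(spot, 35, "call"), 2))   # 0.0 (cannot be negative)
print(round(intrinsic_value(spot, 35, "put"), 2))    # 0.2
print(round(time_value(5.00, spot, 30, "call"), 2))  # 0.2  (one-month option)
print(round(time_value(6.85, spot, 30, "call"), 2))  # 2.05 (nine-month option)
```

The two call options carry the same intrinsic value; the entire price gap between them is time value, exactly as the text describes.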
In the table below, you can see the GE example that has already been discussed. It shows the trading price of GE, several strike prices and the intrinsic and time values for the call and put options. General Electric is considered a stock with low volatility, with a beta of 0.49 for this example.

Figure 1: General Electric (GE) (Investopedia.com © 2009)

Amazon.com Inc. (AMZN) is a much more volatile stock with a beta of 3.47 (see Figure 2). Compare the GE 35 call option with nine months to expiration with the AMZN 40 call option with nine months to expiration. GE has only $0.20 to move up before it is at the money, while AMZN has $1.30 to move up before it is at the money. The time value of these options is $3.70 for GE and $7.50 for AMZN, indicating a significant premium on the AMZN option due to the volatile nature of the AMZN stock.

Figure 2: Amazon.com (AMZN) (Investopedia.com © 2009)

This makes sense - an option seller of GE will not expect to get a substantial premium because the buyers do not expect the price of the stock to move significantly. On the other hand, the seller of an AMZN option can expect to receive a higher premium due to the volatile nature of the AMZN stock. Basically, when the market believes a stock will be very volatile, the time value of the option rises. On the other hand, when the market believes a stock will be less volatile, the time value of the option falls. It is this expectation by the market of a stock's future volatility that is key to the price of options. (Keep reading on this subject in The ABCs Of Option Volatility.)

The effect of volatility is mostly subjective and difficult to quantify. Fortunately, there are several calculators that can be used to help estimate volatility. To make this even more interesting, there are also several types of volatility - with implied and historical being the most noted. When investors look at volatility in the past, it is called either historical volatility or statistical volatility.
Historical volatility helps you determine the possible magnitude of future moves of the underlying stock. Statistically, two-thirds of all occurrences of a stock price will happen within plus or minus one standard deviation of the stock's move over a set time period. Historical volatility looks back in time to show how volatile the market has been. This helps options investors to determine which exercise price is most appropriate to choose for the particular strategy they have in mind. (To read more about volatility, see Using Historical Volatility To Gauge Future Risk, and The Uses And Limits Of Volatility.)

Implied volatility is what is implied by the current market prices and is used with the theoretical models. It helps to set the current price of an existing option and assists option players to assess the potential of an option trade. Implied volatility measures what option traders expect future volatility will be. As such, implied volatility is an indicator of the current sentiment of the market. This sentiment will be reflected in the price of the options, helping options traders to assess the future volatility of the option and the stock based on current option prices.

The Bottom Line

A stock investor who is interested in using options to capture a potential move in a stock must understand how options are priced. Besides the underlying price of the stock, the key determinants of the price of an option are its intrinsic value - the amount by which the strike price of an option is in-the-money - and its time value. Time value is related to how much time an option has until it expires and the option's volatility. Volatility is of particular interest to a stock trader wishing to use options to gain an added advantage. Historical volatility provides the investor a relative perspective of how volatility impacts options prices, while current option pricing provides the implied volatility that the market currently expects in the future.
Knowing the current and expected volatility that is in the price of an option is essential for any investor who wants to take advantage of the movement of a stock's price.
Lines in a Plane 7

September 6th 2009, 08:43 AM #1
Senior Member, Nov 2008

If they exist, find the x-, y-, and z-intercepts of the line x = 24 + 7t, y = 4 + t, z = -20 - 5t.

So this question seems very straightforward, but for some reason I cannot come up with the correct answer as given in the back of the book. Their answer is: x-intercept is -4.

This is my work:
Symmetric equations: (x - 24)/7 = (y - 4)/1 = (z + 20)/(-5)
To find the x-intercept, let y and z = 0: (x - 24)/7 = -4

September 6th 2009, 10:25 AM #2

Setting y = 0 gives t = -4, and at t = -4 we also have z = -20 - 5(-4) = 0, so the line meets the x-axis at x = 24 + 7(-4) = -4. There are no z or y intercepts, and you don't need to use the symmetric equations.
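The parametric form makes this easy to verify directly: an axis intercept requires the other two coordinates to vanish at the same parameter value. A quick check (the helper name is illustrative):

```python
# Intercepts of the line x = 24+7t, y = 4+t, z = -20-5t from the thread.

def point(t):
    return (24 + 7*t, 4 + t, -20 - 5*t)

# x-intercept: need y = 0 and z = 0 simultaneously.
t = -4                 # from y = 4 + t = 0
x, y, z = point(t)
print(x, y, z)         # -4 0 0  -> x-intercept at x = -4

# A y-intercept would need x = 0 (t = -24/7) and z = 0 (t = -4) at the
# same t; those values differ, so no y-intercept exists. The same
# mismatch rules out a z-intercept.
```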
Pattern formation in coupled cell systems A coupled cell system is a collection of interacting dynamical systems. Coupled cell models assume that the output from each cell is important and that signals from two or more cells can be compared so that patterns of synchrony can emerge. We show that patterns of synchrony in planar lattice dynamical systems can be very complicated, but that they are all spatially doubly periodic if enough coupling is assumed.
Stack #48269 Hangman

Question | Answer

- A mathematical sentence that includes operations and numbers
- A statement using letters and numbers, i.e. 3x+1
- A sentence that is true
- A sentence that is neither true nor false
- A sentence that is false
- The study of the properties of relations and numbers, using four operations: addition, subtraction, multiplication, and division
- The branch of mathematics that uses both letters and numbers to show relations between quantities
- The number that multiplies the variable
- The mathematical processes of addition, subtraction, multiplication, and division
- A letter or symbol that stands for an unknown number
- A whole number or its opposite
- A whole number less than zero
- Numbers the same distance from zero but on different sides of zero on the number line
- A number on the number line
- A whole number greater than zero
- The distance a number on the number line is from zero
- The answer to an addition problem
- The arithmetic operation of combining numbers to find their sum or total
- A number that is added to one or more numbers
- The arithmetic operation of taking one number away from another to find the difference
- The answer to a subtraction problem
- The arithmetic operation of adding a number to itself many times
- A number that is multiplied in a multiplication problem
- The answer to a multiplication problem
- The arithmetic operation that finds how many times a number is contained in another number
- The number by which you are dividing
- The answer to a division problem
- A number that is divided
- Part of an expression separated by an addition or subtraction sign
- Terms that have the same variable
- Combine like terms
- Terms that have different variables
- Number that tells how many times another number is a factor
- The number being multiplied; a factor
- The product of multiplying any number by itself once or many times
- The distance around the outside of a shape
- A triangle with three equal sides
why doesn't the code work on right part of tree?

05-16-2007 #1
Registered User, Join Date: May 2007

The idea is to swap the binary tree's elements with its left node's. Here's the code I came up with:

    TREE swap_left(TREE T) {
        element_type tmp;
        tmp = T->element;
        T->element = T->left->element;
        T->left->element = tmp;
        T = T->left;
        return (swap_left(T->right) && swap_left(T->left));
        return T;
    }

When I print the elements, elements left of a node are swapped but not right... please help in fixing this.

05-16-2007 #2

Well . . . it seems like all your code does is swap the left. You never manipulate the right at all. I don't really understand how you could expect the right to be modified. If this is your idea:

    "Idea is to swap Binary tree's element's with it's left node."

then what does the right side of the tree have to do with it?

Seek and ye shall find. quaere et invenies.
"Simplicity does not precede complexity, but follows it." -- Alan Perlis
"Testing can only prove the presence of bugs, not their absence." -- Edsger Dijkstra
"The only real mistake is the one from which we learn nothing." -- John Powell
Other boards: DaniWeb, TPS
Unofficial Wiki FAQ: cpwiki.sf.net
My website: http://dwks.theprogrammingsite.com/
Projects: codeform, xuni, atlantis, nort, etc.
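For what it's worth, one likely bug in the posted function: after `T = T->left`, both recursive calls descend into the left child's subtrees, so the right subtree of the original node is never visited (and the second `return T` is unreachable). A sketch of one reading of the intended algorithm, in Python rather than the poster's C, with illustrative names:

```python
# Sketch of the intended recursion: at every node that has a left
# child, swap the node's element with the left child's, then recurse
# into BOTH subtrees of the current node (not of the left child).
# Node/element/left/right are illustrative names, not the poster's types.

class Node:
    def __init__(self, element, left=None, right=None):
        self.element = element
        self.left = left
        self.right = right

def swap_left(node):
    if node is None:
        return None
    if node.left is not None:
        node.element, node.left.element = node.left.element, node.element
    swap_left(node.left)
    swap_left(node.right)   # this call is what the original code skips
    return node
```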
A 20-foot ladder is leaning against the side of a building. The bottom of the ladder is 4 feet from the wall. How many feet above the ground does the ladder touch the wall?

Answer choices:
A. 16 ft
B. √384 ft
C. 12 ft
D. √416 ft
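This is a direct Pythagorean-theorem question: the ladder is the hypotenuse, so the height is √(20² − 4²) = √384 ≈ 19.6 ft, i.e. choice B. A quick check:

```python
import math

ladder = 20.0   # hypotenuse (ft)
base = 4.0      # distance of the foot of the ladder from the wall (ft)

# Pythagorean theorem: height^2 + base^2 = ladder^2
height = math.sqrt(ladder**2 - base**2)
print(round(height**2))   # 384
print(round(height, 2))   # 19.6
```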
and Adriana Compagnoni. Subtyping dependent types, 2000. Cited by 70 (6 self). The need for subtyping in type systems with dependent types has been realized for some years. But it is hard to prove that systems combining the two features have fundamental properties such as subject reduction. Here we investigate a subtyping extension of the system λP, which is an abstract version of the type system of the Edinburgh Logical Framework LF. By using an equivalent formulation, we establish some important properties of the new system λP≤, including subject reduction. Our analysis culminates in a complete and terminating algorithm which establishes the decidability of type-checking.

- In Foundations of Object Oriented Languages 3, 1996. Cited by 29 (8 self). In this paper we present an extension to basic type theory to allow a uniform construction of abstract data types (ADTs) having many of the properties of objects, including abstraction, subtyping, and inheritance. The extension relies on allowing type dependencies for function types to range over a well-founded domain. Using the propositions-as-types correspondence, abstract data types can be identified with logical theories, and proofs of the theories are the objects that inhabit the corresponding ADT.
1 Introduction. In the past decade, there has been considerable progress in developing a formal account of a theory of objects. One property of object-oriented languages that makes them popular is that they attack the problem of scale: all object-oriented languages provide mechanisms for software modularity and reuse. In addition, the mechanisms are intuitive enough to be followed easily by novice programmers. During the same decade, the body of formal mathematics has be...

, 2001. Cited by 12 (7 self). The language MSR has successfully been used in the past to prove undecidability results about security protocols modeled according to the Dolev-Yao abstraction. In this paper, we revise this formalism into a flexible specification framework for complex crypto-protocols. More specifically, we equip it with an extensible typing infrastructure based on dependent types with subsorting, which elegantly captures and enforces basic relations among objects, such as between a public key and its inverse. We also introduce the notion of memory predicate, where principals can store information that survives role termination. These predicates allow specifying complex protocols structured into a coordinated collection of subprotocols. Moreover, they permit describing different attacker models using the same syntax as any other role. We demonstrate this possibility and the precision of our type system by presenting two formalizations of the Dolev-Yao intruder. We discuss two execution models for this revised version of MSR, one sequential and one parallel, and prove that the latter can be simulated by the former.
- of Lecture Notes in Computer Science, 2000. Cited by 7 (0 self). This paper introduces a typed λ-calculus called λPower, a predicative reformulation of part of Cardelli's power type system. Power types integrate subtyping into the typing judgement, allowing bounded abstraction and bounded quantification over both types and terms. This gives a powerful and concise system of dependent types, but leads to difficulty in the meta-theory and semantics which has impeded the application of power types so far. Basic properties of λPower are proved here, and it is given a model definition using a form of applicative structures. A particular novelty is the auxiliary system for rough typing, which assigns simple types to terms in λPower. These "rough" types are used to prove strong normalization of the calculus and to structure models, allowing a novel form of containment semantics without a universal domain.

, 1996. Cited by 1 (0 self). A type may be a subtype of another type. The intuition about this should be clear: a type is a type of data; some data then may live in a given type as well as in a larger one, up to a simple "transformation". The advantage is that those data may be "seen" or used in different contexts. The formal treatment of this intuition, though, is not so obvious, in particular when data may be programs.
In Object Oriented Programming, where the issue of "reusing data" is crucial, there has been a long-lasting discussion on "inheritance" and ... little agreement. There are several ways to understand and formalize inheritance, which depend on the specific programming environment used. Since early work of Cardelli and Wegner, there has been a large amount of papers developing several possible functional approaches to inheritance, as subtyping. Indeed, functional subtyping captures only one point of view on inheritance, yet this notion largely motivated most of that work. Whethe

, 1996. Dependent type systems have been the basis of many proof development environments.
In [AC96], a system λP≤ is proposed as a subtyping extension of the dependent type system λP [Bar92] (also called λΠ [Dow95]). λP≤ has nice meta-theoretic properties including subject reduction and decidability, but transitivity elimination is restricted to the β2-normalized types. In this report, we propose a system λΠ≤, which is equivalent to λP≤ but has a type-level transitivity elimination property. This feature distinguishes our approach from the existing subtyping systems with reduction relations in types, e.g. λP≤ [AC96], Fω [SP94], Fω [Com94], where transitivity elimination only holds for normalized types. Meta-theoretic properties including subject reduction and decidability are established. The system is shown to be equivalent to λP≤ in typing, kinding and context formation. The type-checking algorithm is clearer and more efficient than that of λP≤. The technique is suitable for future extensions and...
Math Help

May 1st 2013, 04:37 AM #1
Junior Member, Apr 2013

It is given that OA = a and OB = b. Find OR in each of the following cases:
a) R is a point on the straight line AB such that AR : RB = 1:2
b) R lies on the extended straight line AB such that AR : RB = 4:3
c) R is the midpoint of the straight line AB

I solved (a), but I don't know what 'extended straight line' means.

May 1st 2013, 05:15 AM #2
Re: Vector..

It means you go out further than the end of the original vector...

May 1st 2013, 06:41 AM #3
MHF Contributor, Apr 2005
Re: Vector..

a) The distance from R to B is twice the distance from A to R, which means the distance from A to R is 1/3 the distance from A to B.
b) Here R lies beyond B on the extension of AB, with AR:RB = 4:3. Since AR = AB + BR, we get AB = AR - RB, so AR = 4·AB.
c) R is the midpoint, so AR = (1/2)·AB.

May 2nd 2013, 07:01 AM #4
Junior Member, Apr 2013
Re: Vector..

Still don't understand... how do I find OR?
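The last question has a uniform answer via the section formula: OR = OA + t·(OB − OA), where t = AR/AB is the signed ratio along AB (t = 1/3 for part (a), t = 4 for part (b) since R lies beyond B, t = 1/2 for the midpoint). A numeric check with made-up sample position vectors, using exact rational arithmetic:

```python
# Section-formula check: OR = OA + t*(OB - OA), with t = AR/AB
# (t > 1 means R lies beyond B on the extension of AB).
# The vectors a and b are arbitrary sample values.
from fractions import Fraction

def section_point(a, b, t):
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

a = (Fraction(6), Fraction(0))   # OA
b = (Fraction(0), Fraction(3))   # OB

r1 = section_point(a, b, Fraction(1, 3))  # (a) OR = (2a + b)/3
r2 = section_point(a, b, Fraction(4))     # (b) OR = 4b - 3a
r3 = section_point(a, b, Fraction(1, 2))  # (c) OR = (a + b)/2

print(r1)  # (Fraction(4, 1), Fraction(1, 1))
print(r2)  # (Fraction(-18, 1), Fraction(12, 1))
print(r3)  # (Fraction(3, 1), Fraction(3, 2))
```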
accumulation points

September 23rd 2010, 08:10 PM #1
Sep 2010

How can you prove this theorem from the book? "A finite set has no accumulation points." I know that it's true, but I fail to show it mathematically.

Are you working in the real numbers? What ideas have you had so far?

Well, if you construct an epsilon neighborhood around a proposed limit point x in your finite set, then the intersection of your set with the epsilon neighborhood of x must contain points in your finite set other than x for x to be a limit point (accumulation point). If you can show this doesn't hold, then you are done.

The definition of "accumulation point" is: p is an accumulation point of set A if and only if every neighborhood of p contains at least one point of A (other than p itself). Suppose A is finite. Then the set of all distances from p to points in A (other than p itself) is finite and so contains a smallest value. Take the radius of your neighborhood to be smaller than that value. (I notice now, that is pretty much what Danneedshelp said!)

I am not 100% sure this is correct, but I recall a lemma from my intro to real analysis book that stated something along the lines of: a point $x$ is a limit point (accumulation point) of a set $A$ iff for every $\epsilon>0$, the neighborhood $N_{\epsilon}(x)$ contains infinitely many points of the set $A$. I think you would have to prove this to use it, but it answers your original question rather quickly.
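HallsofIvy's argument can also be illustrated numerically: for a finite set A and any point p, a radius smaller than the minimum distance from p to the other points of A gives a punctured neighborhood containing no point of A at all. A small sketch (the sample set is arbitrary):

```python
# Illustration of the proof: around any point p, choosing eps smaller
# than the minimum distance to the OTHER points of the finite set A
# yields a punctured eps-neighborhood that misses A entirely.

A = {0.0, 1.0, 2.5, 7.0}   # an arbitrary finite subset of the reals

def punctured_hits(A, p, eps):
    """Points of A (other than p) within distance eps of p."""
    return {a for a in A if a != p and abs(a - p) < eps}

for p in list(A) + [3.3]:          # test points in A and one outside
    dists = [abs(a - p) for a in A if a != p]
    eps = min(dists) / 2           # strictly smaller than the minimum
    assert punctured_hits(A, p, eps) == set()
print("no point is an accumulation point of A")
```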
Re: st: Complex survey design

From: jpitblado@stata.com (Jeff Pitblado, StataCorp LP)
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Complex survey design
Date: Thu, 19 Oct 2006 13:41:05 -0500

sergio salis <ssalis22@yahoo.it> has a question about estimating multiple subpopulation means simultaneously:

> I have never used the complex survey design in stata
> and I would really appreciate if you could give me an
> advice. I have a firm-level dataset which includes 2
> subsets. The first subset includes data that I don't
> want to analyse, while the second is the one of
> interest.
> My aim is to draw statistics for each region in the
> subset 2.
> I suppose that I should proceed as follows:
> Create one identifier for each reg in subset 2
> gen reg1=1 if subsample==2 & region==1
> replace reg1=0 if reg1!=1
> (do the same for all the regions...)
> Now I would calculate the mean as follows:
> svy: mean productivity, sub(reg1)
> svy: mean productivity, sub(reg2)
> ....
> svy: mean productivity, sub(regN)
> I would do so because I realised that the new commands
> (stata 9) for svy do not include the option 'by'
> previously available. Is the way of proceeding
> described above correct to obtain the regional means?
> (I guess that first dropping the observations for
> which subset==2 and then creating the reg dummies is
> completely wrong.)
> I read on this webpage
> http://www.cpc.unc.edu/services/computer/presentations/statatutorial/example31.html
> that using 'if' to analyse subsets of the dataset
> (instead of the subpop option) is wrong since for the
> variance, standard error, and confidence intervals to
> be calculated properly svy must use all the available
> observations. Hence the only way to proceed seems to
> be as I described above...please let me know whether I
> am following the right procedure.
> Thanks in advance for your help

In Stata 9, the new -mean- command has an -over()- option which is synonymous with the -by()- option of the old -svymean- command. Thus Sergio can use the -subpop()- option of -svy- to identify the subpopulation of interest, and the -over()- option of -mean- to identify the groups over which to simultaneously estimate subpopulation means and their variance-covariance matrix. Given Sergio's Stata code above, this can be accomplished by the following single command:

. svy, subpop(if subsample == 2) : mean productivity, over(region)
Find the x- and y-intercept of the line x – 6y = 12. (1 point)
x-intercept is 12; y-intercept is –2
x-intercept is –6; y-intercept is 1
x-intercept is –2; y-intercept is 12
x-intercept is 1; y-intercept is –6

@kelly226 sorry I kept bothering you
I will give medal if you help with all questions I post; there are 5.

Best Response: x = 12, y = -2
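The posted answer checks out: setting y = 0 gives x = 12, and setting x = 0 gives y = −2, so the first choice is correct. A tiny check (the helper is illustrative):

```python
# Intercepts of the line a*x + b*y = c: set the other variable to zero.

def intercepts(a, b, c):
    """Return (x-intercept, y-intercept); assumes a and b are nonzero."""
    return c / a, c / b

x0, y0 = intercepts(1, -6, 12)   # the line x - 6y = 12
print(x0, y0)                    # 12.0 -2.0
```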
Mean Value Theorem

March 21st 2006, 06:06 AM #1 (joined Mar 2006)

I am having a problem with coming out with the right answer for the following problems, please help.

Using the Mean Value Theorem: if f(x) = sin(x) on the interval [1, 1.15], what is c? The Mean Value Theorem is f'(c) = (f(b) - f(a))/(b - a), so I have (sin(1.15) - sin(1))/0.15. Tell me if I'm on the right track; if not, please help.

Next problem: sin(x) + ln y equals a constant and 0 < x < pi/2. Using implicit differentiation, what is dy/dx? Where do I start?

March 21st 2006, 06:35 AM #2 (MHF Contributor, joined Oct 2005)

On the first one, you are on the right track, but one thing: the value you found is f'(c), the average rate of change, not c itself. They want to know the x-coordinate, c. Since f'(x) = cos(x), set cos(c) equal to the value you got and solve for c.

On the second one, you know that sin(x) + ln(y) must equal some constant, correct? So differentiate both sides with respect to x and set the result equal to 0. See how that works? I'm sure you can finish it from there.
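The first problem can be finished numerically: the Mean Value Theorem gives cos(c) = (sin(1.15) − sin(1))/0.15 for some c in (1, 1.15), so c is the arccosine of that slope. A quick check in Python (not part of the original thread):

```python
import math

a, b = 1.0, 1.15
f = math.sin

# Average rate of change of f over [a, b]
slope = (f(b) - f(a)) / (b - a)

# f'(x) = cos(x), so solve cos(c) = slope for c
c = math.acos(slope)
print(c)  # lies strictly inside (1, 1.15), as the theorem guarantees
```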
Understanding Rectangle Area and Perimeter

Date: 11/08/2002 at 16:50:32
From: Mark
Subject: Area

Dear Dr. Math,

I am a sophomore in high school and we are reviewing area and perimeter. My teacher gave us a problem and told us to prove it true or false. I know that the answer is false, but I need help understanding and explaining why it's wrong. Here is the problem:

We have a given square and we see that if we increase the perimeter, the area increases as well in our new rectangle. Is that always true? Give a thorough explanation and clear examples.

Given square: 3 ft by 3 ft, so P = 12 ft and A = 9 sq ft.
New rectangle: 4 ft by 3 ft, so P = 14 ft and A = 12 sq ft.

I do see that the theory that "as perimeter increases, the area increases as well" is not always true, even though it works for the given problem. My example is a new rectangle of 6 ft by 1 ft, with P = 14 ft and A = 6 sq ft: the perimeter increased but the area decreased, so the theory is wrong, or at least not always right.

I know how to do the math and figure out perimeter and area, but I don't know how to explain why it doesn't work if one side of the rectangle is 1, and why it did work for the given problem. Please help.

Date: 11/09/2002 at 17:22:49
From: Doctor Rick
Subject: Re: Area

Hi, Mark. You are correct that a rectangle with perimeter greater than 12 (the perimeter of the given square) does not necessarily have an area greater than 9 (the area of the given square). When you are told to disprove a statement, all you need to do is to provide a counterexample - a case that satisfies the premises (a rectangle with perimeter greater than 12) and does not fit the conclusion (area greater than 9). You have done this, so you have a thorough explanation with a clear example.

You don't need to explain "why" the statement is not true in any deeper sense than "because here is an example in which it is not true."
But I understand your desire to understand it on a deeper level. The fact is that if you require that the perimeter of a rectangle be THE SAME as that of the square, the area of the rectangle will ALWAYS be LESS than that of the square (unless the rectangle is the square itself). In other words, for a given perimeter, the rectangle that has the largest area is a square.

Let's consider a rectangle with length L and width W. Its perimeter is 2(L+W) and its area is LW. If we require that the perimeter be some particular value P, then we can find what the width must be when we know the length of the rectangle:

  2(L+W) = P
  L + W = P/2
  W = P/2 - L

Then the area will be

  A = LW = L(P/2 - L) = (P/2)L - L^2

Thus the area of the rectangle is a quadratic function of the length L. Its graph is a parabola that opens downward, and the greatest area occurs at the vertex of the parabola. You can prove that the value of L at the vertex is P/4, so that the width is also P/4 and the rectangle with greatest area is a square.

If you have further questions about this, please feel free to ask me. "Why" questions can lead in lots of different directions, and only you can tell me when we have hit on an explanation of the type that will satisfy you. Of course, a true mathematician will never be completely satisfied; there are always more directions to explore.

- Doctor Rick, The Math Forum
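Doctor Rick's quadratic A = (P/2)L − L² is easy to check numerically. For the P = 14 ft perimeter from the problem, the vertex at L = P/4 = 3.5 ft beats every other length, including Mark's 6 ft by 1 ft counterexample (a quick sketch in Python, not part of the original exchange):

```python
P = 14.0  # fixed perimeter, as in Mark's rectangles

def area(length):
    """Area of a rectangle with the given length and perimeter P."""
    width = P / 2 - length
    return length * width  # equals (P/2)*L - L**2

best = P / 4  # vertex of the downward-opening parabola: a 3.5 x 3.5 square
assert all(area(best) >= area(L) for L in [1, 2, 3, 4, 5, 6])
print(area(best), area(6))  # the square's 12.25 vs. the 6 x 1 rectangle's 6
```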
United States Patent Application 20030069512
Kind Code: A1
Kaiser, Willi; et al. — April 10, 2003

Method and system for measuring T-wave alternans by alignment of alternating median beats to a cubic spline

An electrocardiogram processing technique for measuring T-wave alternans by which the alternating electrocardiogram signals are aligned to a target cubic spline. The target cubic spline is calculated on the basis of three isoelectric points, namely a point before the P-wave, a point before the QRS-complex, and a point after the T-wave. The aligned signals may then be further analyzed for variations such as T-wave alternans which are only present in alternating beats and which have diagnostic significance.

Inventors: Kaiser, Willi (Emmendingen, DE); Findeis, Martin (Freiburg, DE)
Correspondence Address: FLETCHER, YODER & VAN SOMEREN, P.O. BOX 692289
Serial No.: 682698
Series Code: 09
Filed: October 5, 2001
Current U.S. Class: 600/516
Class at Publication: 600/516
International Class: A61B 005/0452

1.
A method of measuring T-Wave alternans by aligning alternating heartbeats from a series of electrocardiogram signals comprising: determining a first ECG data series comprising one or more beats; determining a second ECG data series comprising one or more beats such that the beats of the second ECG data series alternate with the one or more beats of the first ECG data series; deriving a reference function from a third ECG data series such that the third ECG data series comprises at least a portion of the first ECG data series and at least a portion of the second ECG data series; deriving a first ECG data series reference function from the first ECG data series; deriving a second ECG data series reference function from the second ECG series; calculating a first difference between the reference function and the first ECG data series reference function; calculating a second difference between the reference function and the second ECG data series reference function; determining an aligned first ECG data series by adjusting the first ECG data series by the first difference; determining an aligned second ECG data series by adjusting the second ECG data series by the second difference; determining a first median beat representation from the aligned first ECG data series and a second median beat representation from the aligned second ECG data series; and determining a maximum amplitude difference between the first median beat representation and the second median beat representation within a common interval. 2. The method as recited in claim 1, further comprising determining the first median beat representation as a weighted function of the aligned first ECG data series and determining the second median beat representation as the weighted function of the aligned second ECG data series. 3. The method recited in claim 1, further comprising deriving the reference function as a target cubic spline. 4. 
The method as recited in claim 1, further comprising calculating the target cubic spline using one or more averages of one or more pairs of successive heartbeats. 5. The method as recited in claim 1, further comprising calculating the first ECG data series reference function as a first cubic spline and the second ECG data series reference function as a second cubic spline. 6. The method as recited in claim 1, further comprising comparing the maximum amplitude difference to a diagnostic reference value. 7. The method as recited in claim 6, wherein the common interval comprises a ST-segment and a T-wave of the first median beat representation and of the second median beat representation and wherein the diagnostic reference value is a T-wave alternans threshold. 8. The method as recited in claim 7, further comprising the step of applying a nonlinear filter to the first median beat representation and to the second median beat representation. 9. A method of determining a reference function from a series of electrocardiogram signals comprising: determining a first ECG data series comprising one or more beats; determining a second ECG data series comprising one or more beats such that the second ECG data series comprises those one or more beats not comprising the first ECG data series; and determining a reference function derived from a third ECG data series such that the third ECG data series comprises at least a portion of the first ECG data series and at least a portion of the second ECG data series. 10. The method recited in claim 9, further comprising constituting the first ECG data series with beats alternating with the beats of the second ECG data series such that successive beats are excluded from the first ECG data series. 11. 
The method recited in claim 10, further comprising deriving a first median beat representation of the one or more beats of the first ECG data series and deriving a second median beat representation of the one or more beats of the second ECG data series. 12. The method as recited in claim 11, further comprising determining the first median beat representation as a weighted function of the one or more beats of the first ECG data series and determining the second median beat representation as the weighted function of the one or more beats of the second ECG data series. 13. The method recited in claim 12, further comprising adjusting the weighted function by an amount determined by the difference between the median beat sample and the corresponding sample of the current beat. 14. The method recited in claim 9, further comprising determining the reference function as a target cubic spline. 15. The method as recited in claim 14, further comprising calculating the target cubic spline using one or more averages of one or more pairs of successive heartbeats. 16. The method recited in claim 11, further comprising determining the reference function as a target cubic spline. 17. The method as recited in claim 16, further comprising calculating the target cubic spline using one or more averages of one or more pairs of successive heartbeats. 18. The method as recited in claim 16, further comprising determining a first median beat representation function and a second median beat representation function. 19. The method as recited in claim 18, further comprising calculating the first median representation function as a first median beat cubic spline and the second median representation function as a second median beat cubic spline. 20.
The method as recited in claim 19, further comprising subtracting the target cubic spline from the first median beat cubic spline to derive a first difference and subtracting the target cubic spline from the second median beat cubic spline to derive a second difference. 21. The method as recited in claim 20, further comprising correcting the one or more beats comprising the first ECG data series by the first difference and correcting the one or more beats comprising the second ECG data series by the second difference such that the first median beat representation is derived from the corrected one or more beats of the first ECG data series and the second median beat representation is derived from the corrected one or more beats of the second ECG data series. 22. The method as recited in claim 21, further comprising finding a maximum difference amount between the first median beat representation and the second median beat representation within a common interval. 23. The method as recited in claim 22, further comprising comparing the maximum difference amount to a diagnostic reference value. 24. The method as recited in claim 23, wherein the common interval comprises a ST-segment and a T-wave of the first median beat representation and of the second median beat representation and wherein the diagnostic reference value is a T-wave alternans threshold. 25.
A method of aligning alternating electrocardiogram signals based upon certain reference intervals comprising: determining a first span bounded by a first reference time and a second reference time; determining a second span bounded by the second reference time and a third reference time; locating the first span and the second span on a first median beat representation derived from a first ECG data series comprising one or more beats; locating the first span and the second span on a second median beat representation derived from a second ECG data series comprising one or more beats such that the second ECG data series comprises those one or more beats not comprising the first ECG data series; and applying a reference function to align the first span of the first median beat representation with the first span of the second median beat representation and to align the second span of the first median beat representation with the second span of the second median beat representation. 26. The method as recited in claim 25, further comprising locating the first reference time before a P-wave, locating the second reference time before a QRS-complex, and locating the third reference time after a T-wave. 27. The method as recited in claim 26, further comprising constituting the first ECG data series with beats alternating with the beats of the second ECG data series such that successive beats are excluded from the first ECG data series. 28. The method as recited in claim 27, further comprising determining the first median beat representation as a weighted function of the one or more beats of the first ECG data series and determining the second median beat representation as the weighted function of the one or more beats of the second ECG data series. 29. The method as recited in claim 28, further comprising formulating the reference function as a combination of a first span cubic spline used to align the first span and a second span cubic spline used to align the second span. 30.
The method as recited in claim 29, further comprising setting the value of the first span cubic spline at the second reference time equal to the value of the second cubic spline at the second reference time such that the transition between the first span cubic spline is continuous with the second span cubic spline. 31. The method as recited in claim 30, further comprising setting the value of the first span cubic spline at the second reference time equal to 0. 32. The method as recited in claim 25, further comprising determining a first median beat representation function and a second median beat representation function. 33. The method as recited in claim 32, further comprising subtracting the reference function from the first median beat representation function to derive a first difference and subtracting the reference function from the second median beat representation function to derive a second difference. 34. The method as recited in claim 33, further comprising correcting the one or more beats comprising the first ECG data series by the first difference and correcting the one or more beats comprising the second ECG data series by the second difference such that the first median beat representation is derived from the corrected one or more beats of the first ECG data series and the second median beat representation is derived from the corrected one or more beats of the second ECG data series. 35. The method as recited in claim 34, further comprising finding a maximum difference between the first median beat representation and the second median beat representation within a common interval. 36. The method as recited in claim 35, further comprising comparing the maximum difference to a diagnostic reference value. 37. 
The method as recited in claim 36, wherein the common interval comprises a ST-segment and a T-wave of the first median beat representation and of the second median beat representation and wherein the diagnostic reference value is a T-wave alternans threshold. 38. A method of aligning alternating electrocardiogram signals to a reference function comprising: determining a first median beat representation derived from a first ECG data series comprising one or more beats; determining a second median beat representation derived from a second ECG data series comprising one or more beats such that the second ECG data series comprises those one or more beats not comprising the first ECG data series; determining a first span bounded by a first reference time and a second reference time; determining a second span bounded by a second reference time and a third reference time; locating the first span and the second span on the first median beat representation; locating the first span and the second span on the second median beat representation; determining a reference function derived from a third ECG data series such that the third ECG data series comprises at least a portion of the first ECG data series and at least a portion of the second ECG data series; and applying the reference function to align the first span of the first median beat representation with the first span of the second median beat representation and to align the second span of the first median beat representation with the second span of the second median beat representation. 39. The method recited in claim 38, further comprising constituting the first ECG data series with beats alternating with the beats of the second ECG data series such that successive beats are excluded from the first ECG data series. 40. 
The method as recited in claim 39, further comprising locating the first reference time before a P-wave, locating the second reference time before a QRS-complex, and locating the third reference time after a T-wave. 41. The method as recited in claim 40, further comprising determining the first median beat representation as a weighted function of the one or more beats of the first ECG data series and determining the second median beat representation as the weighted function of the one or more beats of the second ECG data series. 42. The method as recited in claim 41, further comprising determining the reference function as a target cubic spline comprising a first span cubic spline used to align the first span and a second span cubic spline used to align the second span. 43. The method as recited in claim 42, further comprising calculating the target cubic spline using one or more averages of one or more pairs of successive heartbeats. 44. The method as recited in claim 43, further comprising setting the value of the first span cubic spline at the second reference time equal to the value of the second cubic spline at the second reference time such that the transition between the first span cubic spline is continuous with the second span cubic spline. 45. The method as recited in claim 44, further comprising setting the value of the first span cubic spline at the second reference time equal to 0. 46. The method as recited in claim 45, further comprising determining a first median beat representation function and a second median beat representation function. 47. The method as recited in claim 46, further comprising subtracting the reference function from the first median beat representation function to derive a first difference and subtracting the reference function from the second median beat representation function to derive a second difference. 48. 
The method as recited in claim 47, further comprising correcting the one or more beats comprising the first ECG data series by the first difference and correcting the one or more beats comprising the second ECG data series by the second difference such that the first median beat representation is derived from the corrected one or more beats of the first ECG data series and the second median beat representation is derived from the corrected one or more beats of the second ECG data series. 49. The method as recited in claim 48, further comprising finding a maximum difference amount between the first median beat representation and the second median beat representation within a common interval. 50. The method as recited in claim 49, further comprising comparing the maximum difference amount to a diagnostic reference value. 51. The method as recited in claim 50, wherein the common interval comprises a ST-segment and a T-wave of the first median beat representation and of the second median beat representation and wherein the diagnostic reference value is a T-wave alternans threshold. 52.
A system for aligning alternating electrocardiogram signals comprising: a monitoring module; a memory module; an output module; and a processing module comprising one or more processing circuits configured to determine a first median beat representation; determine a second median beat representation; determine a first span bounded by a first reference time and a second reference time; determine a second span bounded by a second reference time and a third reference time; locate the first span and the second span on the first median beat representation; locate the first span and the second span on the second median beat representation; determine a reference function; and apply the reference function to align the first span of the first median beat representation with the first span of the second median beat representation and to align the second span of the first median beat representation with the second span of the second median beat representation. 53. The system recited in claim 52, wherein the first median beat representation is derived from a first ECG data series comprising one or more beats, the second median beat representation is derived from a second ECG data series comprising one or more beats such that the second ECG data series comprises those one or more beats not comprising the first ECG data series, and the third median beat representation is derived from a third ECG data series such that the third ECG data series comprises the first ECG data series and the second ECG data series. 54. The system recited in claim 53, wherein the first ECG data series is comprised of beats alternating with the beats of the second ECG data series such that successive beats are excluded from the first ECG data series. 55. The system as recited in claim 54, wherein the first reference time is located before a P-wave, the second reference time is located before a QRS-complex, and the third reference time is located after a T-wave. 56.
The system as recited in claim 55, wherein the first median beat representation is a weighted function of the one or more beats of the first ECG data series and the second median beat representation is the weighted function of the one or more beats of the second ECG data series. 57. The system as recited in claim 56, wherein the reference function is a target cubic spline comprising a first span cubic spline used to align the first span and a second span cubic spline used to align the second span. 58. The system as recited in claim 57, wherein the target cubic spline is calculated using one or more averages of one or more pairs of successive heartbeats. 59. The system as recited in claim 58, wherein the value of the first span cubic spline at the second reference time is set equal to the value of the second cubic spline at the second reference time such that the transition between the first span cubic spline is continuous with the second span cubic spline. 60. The system as recited in claim 59, wherein the value of the first span cubic spline at the second reference time is set equal to 0. 61. The system as recited in claim 60, wherein a first median beat representation function and a second median beat representation function are calculated. 62. The system as recited in claim 61, wherein the reference function is subtracted from the first median beat representation function to derive a first difference and the reference function is subtracted from the second median beat representation function to derive a second difference. 63. 
The system as recited in claim 62, wherein the one or more beats comprising the first ECG data series are corrected by the first difference and the one or more beats comprising the second ECG data series are corrected by the second difference such that the first median beat representation is derived from the corrected one or more beats of the first ECG data series and the second median beat representation is derived from the corrected one or more beats of the second ECG data series. 64. The system as recited in claim 63, wherein a maximum difference amount between the first median beat representation and the second median beat representation within a common interval is determined. 65. The system as recited in claim 64, wherein the maximum difference amount is compared to a diagnostic reference value. 66. The system as recited in claim 65, wherein the common interval comprises a ST-segment and a T-wave of the first median beat representation and of the second median beat representation and wherein the diagnostic reference value is a T-wave alternans threshold. 67. A computer program for determining an electrocardiogram reference function, the computer program comprising: a machine readable medium for supporting machine readable code; and configuration code stored on the machine readable medium for determining a reference function derived from an ECG data series comprising at least a portion of a first ECG data series and at least a portion of a second ECG data series such that the first and second ECG data series are mutually exclusive of one another. 68. The computer program of claim 67, wherein the configuration code refers to additional configuration code stored on a second machine readable medium, additional configuration code comprising the ECG data series. 69. The computer program of claim 67, wherein the configuration code is installed on the machine readable medium over a configurable network connection. 70.
The computer program of claim 67, wherein the ECG data series comprises two or more successive heartbeats, and wherein the configuration code comprises code for determining a reference function calculated as a target cubic spline after at least partial processing of the ECG data series. 71. A computer program for aligning alternating electrocardiogram signals using certain reference intervals, the computer program comprising: a machine readable medium for supporting machine readable code; and configuration code stored on the machine readable medium for locating a first span on a first median beat representation and the corresponding first span on a second median beat representation, locating a second span on the first median beat representation and the corresponding second span on the second median beat representation, and applying a reference function to align the corresponding first spans and to align the corresponding second spans. 72. The computer program of claim 71, wherein the configuration code refers to additional configuration code stored on a second machine readable medium, additional configuration code comprising the first median beat representation and the second median beat representation. 73. The computer program of claim 71, wherein the configuration code is installed on the machine readable medium over a configurable network connection. 74. The computer program of claim 71, wherein the reference function comprises two or more cubic splines, and wherein the configuration code comprises code for correcting ECG data based upon the alignment to the reference function. [0001] The present invention relates generally to the field of cardiology and, more particularly, to a method and system of processing an electrocardiogram signal to detect T-wave alternans by aligning alternating beats to a cubic spline. More accurate detection and quantification of alternans within the ST-segment and T-wave of the signal is then possible using the aligned beats.
[0002] In the field of electrocardiography, electrical alternans are the differences in electrical potential at corresponding points between alternate heartbeats. T-wave alternans or alternation is a regular beat-to-beat variation of the ST-segment or T-wave of an ECG which repeats itself every two beats and has been linked to underlying cardiac instability. A patient's odd and even heartbeats may therefore exhibit different electrical properties of diagnostic significance which can be detected by an electrocardiogram (ECG). [0003] The presence of these electrical alternans is significant because patients at increased risk for ventricular arrhythmias commonly exhibit alternans in the ST-segment and the T-wave of their ECG. Clinicians may therefore use these electrical alternans as a noninvasive marker of vulnerability to ventricular tachyarrhythmias. The term T-wave alternans (TWA) is used to broadly denote these electrical alternans. It should be understood that the term encompasses both the alternans of the T-wave segment and the ST-segment of an ECG. [0004] It may, however, be both difficult to detect TWA and difficult to quantify the magnitude of TWA, since the magnitude of the phenomenon is typically less than one hundred microvolts. Differences of this magnitude between ECG signals are difficult to differentiate from baseline wander, white noise, or from other artifacts such as patient movement or other irregularities in the heartbeat. [0005] The current method of detecting TWA involves receiving an ECG signal and, from this data, calculating both an odd and an even median complex using the respective incoming odd and even signal data. The odd median complex is then compared with the even median complex to obtain an estimate of the amplitude of beat-to-beat alternation in the ECG data. The maximum alternation amplitude observed between the end of the QRS-complex and the end of the T-wave is defined as the T-wave alternans value.
A TWA is present if this value is greater than some threshold value determined by a clinician. [0006] In the prior art, baseline wander was removed by calculating a cubic spline based on points measured between the P-wave and the QRS-complex of three consecutive QRS complexes. The values generated by this spline curve were then subtracted from the corresponding values of the incoming beat data. Since points in the isoelectric area preceding the QRS complex are used to calculate the cubic spline, this method does not properly correct for baseline wander between the end of the QRS-complex and the end of the T-wave. [0007] To better correct baseline wander, it would be preferable to use an additional point after the T-wave in calculating the cubic spline correction. However, the amplitudes of the isoelectric areas before the QRS-complex and between the T- and P-waves differ. The isoelectric area before the QRS-complex is influenced by the atrial repolarisation. Other reasons for different amplitudes in both "isoelectric areas" could be a short PR-interval or a merging of P- and T-waves. Applying the cubic spline correction algorithm to points before the QRS-complex and also to points after the T-wave will cause the algorithm to produce artificial baseline wander, and therefore to produce incorrect T-wave alternans values. As a result, a more effective means of aligning odd and even heartbeats is needed in order to obtain more accurate TWA values. [0008] The invention offers a technique for detecting T-wave alternans by aligning alternating heartbeat data, i.e. odd and even beats. In a preferred embodiment of the invention, a digitized ECG signal is received for processing. The ECG data is used to calculate an odd and even median beat and a target cubic spline which is then used to align an odd and an even median beat complex. The odd median complex is then compared with the even median complex to obtain an estimate of the amplitude of beat-to-beat alternation in the ECG signal.
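The comparison described in [0005] reduces to taking the maximum absolute difference between the odd and even median complexes over the ST-T window and testing it against a threshold. A minimal sketch of that step (the sample values, window indices, and threshold below are invented for illustration, not taken from the patent):

```python
def twa_value(odd_median, even_median, window):
    """Maximum |odd - even| (in mV) over the ST-T window of sample indices."""
    lo, hi = window
    return max(abs(o - e) for o, e in zip(odd_median[lo:hi], even_median[lo:hi]))

# Synthetic median complexes in mV; samples 2..4 stand in for the ST-T window
odd  = [0.00, 1.00, 0.20, 0.10, 0.00]
even = [0.00, 1.00, 0.26, 0.11, 0.00]

alternans = twa_value(odd, even, (2, 5))  # 0.06 mV = 60 microvolts
twa_present = alternans > 0.05            # invented threshold: 50 microvolts
print(alternans, twa_present)
```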
[0009] The step of calculating a median complex may proceed as follows. A first array (representing the odd median complex) is initialized with the median of a plurality of odd complex values. A second array (representing the even median complex) is initialized with the median of a plurality of even complex values. The samples of a new odd beat of the ECG data are compared to corresponding values in the first array and, based on the comparison, the values of the first array are adjusted as follows. If a sample of the odd beat exceeds the corresponding value of the first array by a fixed amount, then the corresponding value is incremented by the fixed amount. Otherwise, the corresponding value is incremented by 1/32nd of the difference between the sample of the odd beat and the corresponding value of the first array. This process is repeated for the other odd beats desired to be included in the calculation. The same process is then followed for the second array using the even beats.

[0010] Once the odd and even median complexes have been calculated they are then aligned. This alignment is accomplished by calculating a target cubic spline, an odd median complex cubic spline, and an even median complex cubic spline. The differences between the target cubic spline and both the odd median complex cubic spline and the even median complex cubic spline are then calculated. These differences are then subtracted, respectively, from the odd and even median beat data, to correct (align) them.

[0011] The effect of this alignment step is to minimize any residual baseline wander between the odd and even beat data. More accurate comparisons of the odd and even beat data may then be made.

[0012] In accordance with one aspect of the present technique, there is provided a method of calculating a reference function (in the preferred embodiment a target cubic spline) derived from odd and even beat data and useful for aligning odd and even median beat complexes.
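The weighted update of paragraph [0009] can be sketched as follows. This is an illustrative reconstruction, not the patent's code: the size of the fixed step is not specified in the text, so the value below is an assumption, and the update is applied symmetrically to deviations in either direction.

```python
import numpy as np

def update_median_complex(median, beat, fixed_step=8.0):
    """Nudge a median beat toward a new beat, sample by sample.

    Large deviations move the median by at most `fixed_step` (the
    patent does not specify the value, so it is an assumption); small
    deviations move it by 1/32 of the difference, limiting the
    influence any single beat can have on the median complex.
    """
    diff = beat - median
    # Where the new beat differs by more than the fixed step, move by
    # the fixed step in the direction of the difference; otherwise
    # move by 1/32 of the difference.
    step = np.where(np.abs(diff) > fixed_step,
                    np.sign(diff) * fixed_step,
                    diff / 32.0)
    return median + step
```

In use, the function is applied once per odd beat to the odd array and once per even beat to the even array, so each median complex converges toward its beat class while single noisy beats are damped.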
[0013] In accordance with another aspect of the present technique, there is provided a method of processing ECG signals for alternating heartbeats and of aligning these alternating heartbeats using cubic splines. The method may be extended to incorporate the detection and quantification of differences, such as alternans, between the alternating ECG signals. [0014] In accordance with another aspect of the present technique, there is provided a system for processing ECG signals for alternating heartbeats whereby the ECG signals are analyzed by processing circuitry to derive a reference function, the alternating ECG signals are aligned by the processing circuitry, and the aligned ECG signals are saved by memory circuitry or displayed by display circuitry. In addition, the system may be expanded to include analysis circuitry capable of processing the aligned ECG signals to determine the presence and magnitude of variations between the alternating signals such as T-wave alternans. [0015] These and other features and advantages of the invention are described in detail below with reference to the figures in which like numbers indicate like elements. [0016] The foregoing and other advantages of the invention will become apparent upon reading the following detailed description and upon reference to the drawings in which: [0017] FIG. 1 illustrates a typical application of a patient undergoing an electrocardiogram procedure and the components of an idealized electrocardiogram system in relation thereto; [0018] FIG. 2 is a functional block diagram of a system representing a preferred embodiment of the present invention; [0019] FIG. 3 is a block diagram of a method of collecting and analyzing alternating ECG signals representing a preferred embodiment of the present invention; [0020] FIG. 4 is a flowchart illustrating the steps taken in collecting, analyzing and aligning alternating ECG data signals by the preferred embodiment of the present invention; [0021] FIG. 
5 is an ECG plot superimposing an odd and an even heartbeat which have undergone baseline wander removal filtering by the prior art method; and

[0022] FIG. 6 is an ECG plot superimposing an odd and an even heartbeat which have undergone alignment by the preferred embodiment of the present invention.

[0023] In the invention, a reference function is calculated and then applied to alternating heartbeat data to bring the odd and even heartbeats into alignment. The aligned heartbeats are then susceptible to comparative analysis to detect and quantify differences between the alternating heartbeats. In a preferred embodiment, the target cubic spline, the odd median cubic spline, and the even median cubic spline are calculated in the same manner, each of them comprising a pair of cubic splines which are determined as follows, where the following letter representations are used: T represents a reference time; t represents a variable time; y represents an amplitude in an ECG cycle, measured at some time, t; s(t) is the amplitude in a spline segment at time t calculated with a spline function.

[0024] An ECG cycle consists of samples. In a cycle, y is the amplitude of an ECG sample at point of time t. Similarly, y.sub.1 is the amplitude of an ECG cycle when t=T.sub.1.

[0025] Initially, three reference points are determined, (T.sub.1, y.sub.1), (T.sub.2, y.sub.2), (T.sub.3, y.sub.3), such that T.sub.1<T.sub.2<T.sub.3. Point (T.sub.1, y.sub.1) is the point before the P-wave at time T.sub.1. Point (T.sub.2, y.sub.2) is the point before the QRS-complex at time T.sub.2. Point (T.sub.3, y.sub.3) is the point after the T-wave at time T.sub.3. These three points are used to calculate two splines spanning the two regions defined by T.sub.1, T.sub.2, and T.sub.3. Spline s.sub.1(t) is between t=T.sub.1 and t=T.sub.2.
Spline s.sub.2(t) is between t=T.sub.2 and t=T.sub.3.

[0026] The following equations demonstrate the calculation of the two splines:

s.sub.1(t)=a.sub.1t.sup.3+b.sub.1t.sup.2+c.sub.1t+d.sub.1 (1)

s.sub.2(t)=a.sub.2t.sup.3+b.sub.2t.sup.2+c.sub.2t+d.sub.2. (2)

[0027] The coefficients a.sub.1, a.sub.2, b.sub.1, b.sub.2, c.sub.1, c.sub.2, d.sub.1 and d.sub.2 are calculated using the following derivatives of the two spline equations:

s.sub.1'(t)=3a.sub.1t.sup.2+2b.sub.1t+c.sub.1, s.sub.1'(0)=c.sub.1 (3)

s.sub.1"(t)=6a.sub.1t+2b.sub.1, s.sub.1"(0)=2b.sub.1 (4)

s.sub.1'"(t)=6a.sub.1, s.sub.1'"(0)=6a.sub.1 (5)

s.sub.2'(t)=3a.sub.2t.sup.2+2b.sub.2t+c.sub.2, s.sub.2'(0)=c.sub.2 (6)

s.sub.2"(t)=6a.sub.2t+2b.sub.2, s.sub.2"(0)=2b.sub.2 (7)

s.sub.2'"(t)=6a.sub.2, s.sub.2'"(0)=6a.sub.2. (8)

[0028] In order to create a smooth transition between the two splines the following conditions are stipulated:

s.sub.1(T.sub.2)=s.sub.2(T.sub.2) (9)

s.sub.1'(T.sub.2)=s.sub.2'(T.sub.2), and (10)

s.sub.1"(T.sub.2)=s.sub.2"(T.sub.2). (11)

[0029] In order to simplify this transition, (T.sub.2, y.sub.2) is set to (0,0) and:

s.sub.1(0)=s.sub.2(0)=0 (12)

s.sub.1'(0)=s.sub.2'(0) (13)

s.sub.1"(0)=s.sub.2"(0) (14)

[0030] The first spline starts with:

s.sub.1"(T.sub.1)=0. (15)

[0031] The second spline ends with:

s.sub.2"(T.sub.3)=0. (16)

[0032] With conditions (12) through (16) in place, the coefficients a.sub.1, a.sub.2, b.sub.1, b.sub.2, c.sub.1, c.sub.2, d.sub.1, and d.sub.2 are calculated and substituted into equations (1) and (2). The s.sub.1(t) values are then calculated with the t values between T.sub.1 and T.sub.2 and the s.sub.2(t) values are then calculated with the t values between T.sub.2 and T.sub.3.

[0033] In a typical embodiment, the above mentioned equations would be implemented and solved by one or more processing circuits either dedicated to those functions or programmable via machine readable code, such as software, to perform those functions as part of a programmable ECG system.

[0034] Reference will first be made to FIG. 5 to allow the introduction of ECG related terminology. In FIG. 5, an odd and an even heartbeat are shown superimposed on an ECG plot. FIG.
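Paragraphs [0026]-[0032] can be condensed into a short numerical sketch. This is an illustrative reconstruction, not the patent's code, written with the middle reference point already shifted to the origin as paragraph [0029] does: conditions (12)-(14) force d.sub.1=d.sub.2=0 and shared b and c coefficients, conditions (15)-(16) give a.sub.i=-b/(3T.sub.i), and fitting the two outer points then leaves a 2x2 linear system in b and c.

```python
import numpy as np

def two_segment_spline(T1, y1, T3, y3):
    """Coefficients (a, b, c, d) of the two cubic segments through
    (T1, y1), (0, 0), (T3, y3), with T1 < 0 < T3.

    Shift any real reference points by (T2, y2) before calling, as
    paragraph [0029] prescribes.
    """
    # Conditions (12)-(14): both segments pass through the origin with
    # equal slope and curvature there, so d1 = d2 = 0 and the b and c
    # coefficients are shared.
    # Conditions (15)-(16): zero curvature at the outer ends gives
    # a1 = -b/(3*T1) and a2 = -b/(3*T3). Requiring s1(T1) = y1 and
    # s2(T3) = y3 then leaves a 2x2 linear system in b and c.
    A = np.array([[2.0 / 3.0 * T1 ** 2, T1],
                  [2.0 / 3.0 * T3 ** 2, T3]])
    b, c = np.linalg.solve(A, np.array([y1, y3]))
    a1, a2 = -b / (3.0 * T1), -b / (3.0 * T3)
    return (a1, b, c, 0.0), (a2, b, c, 0.0)

def eval_spline(coeffs, t):
    """Evaluate equation (1)/(2): s(t) = a*t^3 + b*t^2 + c*t + d."""
    a, b, c, d = coeffs
    return a * t ** 3 + b * t ** 2 + c * t + d
```

Evaluating the first segment on [T1, 0] and the second on [0, T3] reproduces the piecewise curve of paragraph [0032].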
5 depicts superimposed odd and even heartbeats after undergoing alignment by the prior art method. A P-wave deflection 500 is depicted which is due to the depolarization of the atria. A QRS-complex 510 is depicted which is due to the depolarization of the ventricles and which is composed of an isoelectric line 512, an R-wave deflection 514, and an S-wave deflection 516. A T-wave deflection 520 is also depicted and is due to the repolarization of the ventricles. An ST-segment 522 is defined by the region between the end of S-wave 516 and the beginning of T-wave 520. Because the present technique is concerned with alternans in ST-segment 522 as well as in T-wave 520, the term "T-wave alternans" in this disclosure includes both T-wave 520 and ST-segment 522. Also depicted in FIG. 5 are three reference times, T.sub.1 (532), T.sub.2 (534), and T.sub.3 (536), which are discussed above in relation to the algorithm equations and which are also discussed in FIG. 4 in relation to the cubic alignment step of the present technique.

[0035] Referring now to a typical embodiment, as illustrated in FIG. 1, an ECG data series is collected from a patient 10 over a period of time. The ECG data series is collected in a manner common in the art and familiar to one skilled in the art. In particular, an ECG system 20 is connected by leads 22 and contact pads 21 to patient 10. Patient 10 may be monitored in Holter, resting ECG, electrophysical test or exercise test systems or in other types of systems known in the art. ECG system 20 is comprised of a signal receiver 24 connected to leads 22, processing circuitry 26 performing the TWA calculation and alignment functions later described, display circuitry 28 transmitting signals to an output device such as a printer 32 or display unit 34 through an output interface 30, and memory circuitry 36 which can be accessed by signal receiver 24, processing circuitry 26, or display circuitry 28.
The circuitry comprising ECG system 20 may be implemented in one or more computer systems or other processing systems and may be implemented using hardware, software, or a combination thereof. The circuitry comprising ECG system 20 is ideally interconnected along a communication infrastructure 38. In a preferred embodiment, the present technique is implemented via a program comprising configuration code for ECG system 20 and written in a machine readable code which can be accessed by the system off of a machine readable medium, such as a magnetic or optical disk, or over a configurable network connection. Additionally, components of the program or ECG data series may be accessed from a second machine readable medium if desired. [0036] FIG. 2 depicts a functional block diagram of the components of a preferred embodiment of ECG system 20 and their interrelationships. An operator 50 interacts with the ECG system 20 by means of a user interface module 55. User interface module 55 is connected to monitoring module 60 which is comprised of signal receiver 24 and leads 22. Monitoring module 60 and user interface module 55 are in turn connected to processing module 65 which is comprised of at least processing circuitry 26. Processing module 65 and user interface module 55 are in turn connected to output module 70 which is comprised of at least display circuitry 28 and further possibly comprising an output interface 30 to a device such as a printer 32 or display unit 34. [0037] The ECG system is further comprised of a memory module 75 comprising at least memory circuitry 36. Memory module 75 may include such components as standard RAM memory, magnetic storage media such as a hard disk drive, or optical storage media such as optical disks. Monitoring module 60 may be connected to memory module 75 such that signal data is passed from monitoring module 60 to memory module 75 for temporary or long term storage. 
Processing module 65 may also be connected to memory module 75 such that ECG signal data may be passed to or from processing module 65 to memory module 75. Finally, output module 70 may be connected to memory module 75 such that data may be passed from memory module 75 to output module 70. User interface module 55 may also be connected to memory module 75 but, in a preferred embodiment, instead interacts with memory module 75 via the other modules such as processing module 65 or monitoring module 60. [0038] Referring now to FIG. 3, the functions generally performed by processing circuitry 26 are displayed as a block diagram and are designated generally by reference numeral 90. ECG data is received into the processing circuitry from either signal receiver 24, from an external data source (e.g. internet) or from memory circuitry 36. In turn, the processed output, either a TWA value, an aligned ECG signal or both are sent to display circuitry 28 or to memory circuitry 36. [0039] The preferred embodiment encompasses a first and a second median beat complex. Ideally these two median beat representations separately, and exclusively, comprise the odd and the even beat data. As used herein, the term "median complex" refers to a median representation of one or more beats of the ECG data. While the median complex can represent only a single beat, it is preferred that a larger number of beats contribute to the median values of the complex. The resultant median complex represents an average of the samples of the beats (odd or even) which contribute to it. However in a preferred embodiment, the median complex is not a true average because the averaging is done so that the effect that any one beat can have on the median complex is limited. Instead a weighted function or average is used to determine the median complex. 
This weighted function increments the median complex by either a fixed increment or 1/32nd of the difference between the amplitude of the sample beat and the median complex, depending on the size of the difference between the contributing beat and the median complex.

[0040] The ECG data is used to calculate the odd median complex and the even median complex in steps 130 and 140. In step 130, a first array is initialized with the median of a plurality of odd complex values. For example, the samples of the first beat of the ECG data may be used as the initial values of the odd median complex. Alternatively, if a previous ECG segment has been processed, the odd median values resulting from that calculation may be used as the initial odd median values. The odd beats of the ECG data are compared sample by sample with the odd median complex. Each odd beat (i.e., beats 1, 3, 5, 7, 9 and so on) is identified based on the first slope of the QRS-complex. In a preferred embodiment, a beat is identified as the ECG data occurring between a point 450 msec before the first slope of the QRS-complex and a point 550 msec after that slope, for a total of 1.0 second of ECG data. At a sampling rate of 500 Hz, this provides 500 samples per beat. Thus, if the first beat is used to initialize the odd median complex, then the 500 samples representing the third beat (i.e., the next odd beat) will be compared to the corresponding 500 samples in the odd median complex.

[0041] For each sample of an odd beat that is compared to a corresponding sample in the odd median complex, the result of the comparison is tested. If the sample of the current beat exceeds the corresponding value of the median complex by a predetermined amount then the corresponding value of the median complex is incremented by the predetermined amount.
If the sample of the current odd beat does not exceed the corresponding value of the odd median complex by the predetermined amount then the corresponding value in the odd median complex is incremented by 1/32nd of the difference between the sample of the current odd beat and the corresponding value of the odd median complex.

[0042] These steps are repeated for each sample of each odd beat until the desired number of odd beats (i.e., at least one) are processed. The resultant odd median complex represents an average of the samples of the odd beats. The same process is then followed for the second array using the even beats.

[0043] Once the odd and even median complexes are computed they are aligned by cubic alignment step 150. Cubic alignment step 150 is depicted in greater detail in FIG. 4.

[0044] Referring now to FIG. 4, a flowchart of the implementation of cubic alignment step 150 is depicted. Cubic alignment step 150 is accomplished by a reference function which aligns both the even and odd median beat complexes. In a preferred embodiment, the reference function is a target cubic spline. This target cubic spline is calculated on the basis of three reference points located exactly between the odd and even median beats: before P-wave 500 at T.sub.1 (532), with y.sub.1=(y.sub.1(odd)+y.sub.1(even))/2; before the QRS-complex 510 at T.sub.2 (534), with y.sub.2=(y.sub.2(odd)+y.sub.2(even))/2; and after T-wave 520 at T.sub.3 (536), with y.sub.3=(y.sub.3(odd)+y.sub.3(even))/2.

[0045] These three reference times define two separate spans or regions for each beat processed, and define a common interval along the median complexes which can be aligned and compared. Selection of these three points is represented diagrammatically as point selection step 400.

[0046] After point selection step 400, target cubic spline (TCS) calculation step 405 is performed.
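The three target reference amplitudes of paragraph [0044] are simply per-point averages of the odd and even median beats. A minimal sketch, in which the sample indices of the three reference times are assumed inputs:

```python
def target_reference_points(odd, even, idx):
    """Average the odd and even median-beat amplitudes at the three
    reference sample indices (before the P-wave, before the QRS, and
    after the T-wave) to obtain the target cubic spline's reference
    points as (index, amplitude) pairs."""
    return [(i, (odd[i] + even[i]) / 2.0) for i in idx]
```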
The TCS calculation is comprised of equations (1) and (2), solved using equations (3) through (16) on the basis of the points (T.sub.1, y.sub.1), (T.sub.2, y.sub.2), and (T.sub.3, y.sub.3).

[0047] After the calculation of the TCS, an odd point selection step 408 and an odd median beat cubic spline calculation step 410 are performed. Odd point selection step 408 corresponds to point selection step 400. Three points on the odd median beat complex are selected which are located before P-wave 500 at T.sub.1 (532), before the QRS-complex 510 at T.sub.2 (534), and after T-wave 520 at T.sub.3 (536). Calculation of an odd median beat cubic spline is then done similarly to the calculation of the TCS, except that the amplitude corresponding to T.sub.1 of the odd median beat data is utilized, so that y.sub.1 equals y.sub.1(odd) at time T.sub.1 (532). The points y.sub.2 and y.sub.3 are similarly derived.

[0048] The algorithm next calculates the difference between the odd median beat cubic spline and the target cubic spline in odd difference calculation step 415. This difference is then subtracted from the odd median beat in odd alignment step 420. This subtraction (alignment) is done with every sample of the odd median beat so that every odd sample is corrected. The result is an aligned odd median beat.

[0049] Similarly, an even point selection step 422 and an even median beat cubic spline calculation step 425 are performed such that y.sub.1 equals y.sub.1(even) at time T.sub.1 (532) of the even median beat complex. After calculation of the even median beat cubic spline, even difference calculation step 430 and even alignment step 435 are performed, and correspond to their odd beat counterpart steps 415 and 420 respectively.

[0050] The cubic alignment is very tolerant to the location of the points.
It yields very good results even when atrial repolarization or a short PR interval hides the isoelectric line before QRS-complex 510, or when a merging of T-wave 520 and P-wave 500 hides the isoelectric T-P interval at the other two points.

[0051] By way of comparison, unaligned median beat complexes as processed by the prior art method are plotted in FIG. 5, while the same median beat complexes are plotted in FIG. 6 after alignment by the present invention. FIG. 6 also depicts the presence of a T-wave alternans 530 seen as the maximum difference between the superimposed portions of the T-waves of the successive beats. A comparison of FIGS. 5 and 6 demonstrates the elimination of false positive T-wave alternans by the alignment of the odd and even median complexes achieved by the present technique. This alignment of the odd and even median beat complexes as taught by the present technique is significant to both the determination and the quantification of T-wave alternans 530 events. While there is no current agreement upon a T-wave alternans threshold to determine the presence of a T-wave alternans event, one skilled in the art may decide upon an appropriate diagnostic reference value to apply in conjunction with the present invention as an appropriate T-wave alternans threshold value.

[0052] Returning now to FIG. 3, after cubic alignment step 150, a nonlinear filter step 170 and a TWA calculation step 160 are performed. In TWA calculation step 160 the aligned odd and even median beat complexes are compared to obtain an estimate of the amplitude of the beat-to-beat alternation in the ECG data. This estimate is the TWA value, which is compared to some diagnostic reference value to determine if TWA is present. In a preferred embodiment of the invention, the comparison involves determining the maximum difference in amplitude (|y.sub.(odd)-y.sub.(even)|)
between the corresponding values of the aligned odd and even median complexes in the region encompassing ST-segment 522 and T-wave 520.

[0053] When calculating the TWA value, high frequency noise can falsify the TWA value substantially. Therefore, a nonlinear filter step 170 is performed. The nonlinear filter has two 20 ms windows, one in the odd and the other in the even median beat, both starting at the end of QRS-complex 510. The minimal difference amount between all amplitudes of the windows is selected and stored, and then the windows are moved one step towards the end of T-wave 520. The minimal difference amount is selected again and stored, and the windows are moved once more. The procedure is repeated until the windows reach the end of T-wave 520. In the stored values the high frequency noise is filtered out. The TWA value is then calculated by searching for the maximal difference in the stored values.

[0054] It should be noted that, for the first measurement interval, the odd beats are beats 1, 3, 5, 7 and so forth to the end of the measurement interval. Similarly, the even beats for the first measurement interval are beats 2, 4, 6, 8 and so forth to the end of the measurement interval. For all subsequent measurement intervals, if the last beat of the immediately preceding measurement interval was even, the odd beats will be beats 1, 3, 5, 7, etc. and the even beats will be beats 2, 4, 6, 8, etc. However, if the last beat of the immediately preceding measurement interval was odd, then the odd beats will be beats 2, 4, 6, 8, etc. and the even beats will be beats 1, 3, 5, 7, etc. This latter rule preserves the relative groupings of odd and even beats throughout an ECG data sample if multiple measurement intervals are involved in data collection.

[0055] While the invention may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein.
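The sliding-window filter of paragraph [0053] and the subsequent maximum search can be sketched as below. This is an illustrative reconstruction under stated assumptions: the "minimal difference amount between all amplitudes of the windows" is read here as the minimum of |odd - even| within the window, and the inputs are taken to be the aligned median beats already restricted to the ST/T region; the 500 Hz rate and 20 ms window come from the text.

```python
import numpy as np

def twa_nonlinear_filter(odd_st_t, even_st_t, fs=500, win_ms=20):
    # Absolute beat-to-beat difference over the ST/T region.
    diff = np.abs(np.asarray(odd_st_t, float) - np.asarray(even_st_t, float))
    # 20 ms window at 500 Hz -> 10 samples.
    w = max(1, int(round(win_ms * fs / 1000.0)))
    # Slide the window to the end of the region, storing the minimal
    # difference at each position; brief spikes (noise) never fill a
    # whole window, so they are suppressed.
    stored = [diff[i:i + w].min() for i in range(len(diff) - w + 1)]
    # The TWA value is the maximal stored difference.
    return float(max(stored))
```

The minimum-then-maximum structure is what makes the filter nonlinear: a single-sample noise spike yields zero after the windowed minimum, while a sustained alternans difference survives to the final maximum.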
However, it should be understood that the invention is not intended to be limited to the particular forms disclosed. Rather, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the following appended claims. * * * * *
Galois and Separable, Characteristics

Last edited by dabien; December 7th 2009 at 07:56 PM.

that should be $BK \cong B \otimes K$ not $EK \cong E \otimes K.$ anyway, as i showed in this thread, $BK,$ the compositum of $B,K,$ is $Q.$ so we only need to show that $Q \cong B \otimes_F K$:

let $\{\alpha_1, \cdots, \alpha_m \}$ and $\{\beta_1, \cdots , \beta_n \}$ be an $F$-basis for $B$ and $K$ respectively. then $\{\alpha_i \otimes \beta_j: \ \ i \leq m, \ j \leq n \}$ is an $F$-basis for $B \otimes_F K.$ we define the map $\psi: B \otimes K \longrightarrow Q$ by $\psi(\sum_{i,j} c_{ij} \alpha_i \otimes \beta_j)=\sum_{i,j} c_{ij}\alpha_i \beta_j, \ \ c_{ij} \in F.$ the fact that $\psi$ is a well-defined algebra homomorphism is a trivial result of the universal property of the tensor product. it's also obvious that $\psi$ is surjective. so we only need to prove that $\psi$ is injective.

to prove this first show that separability of $B$ implies that $\{\alpha_1^{p^s}, \cdots , \alpha_m^{p^s} \}$ is an $F$-linearly independent set, for any integer $s \geq 0.$ now suppose that $\gamma = \sum_{i,j} c_{ij} \alpha_i \otimes \beta_j \in \ker \psi,$ which means that $\sum_{i,j} c_{ij}\alpha_i \beta_j=0.$ for any $i \leq m$ let $\gamma_i=\sum_{j=1}^n c_{ij} \beta_j \in K.$ then $\sum_{i=1}^m \alpha_i \gamma_i = 0.$ since $K$ is purely inseparable, there exists an integer $s \geq 0$ such that $d_i=\gamma_i^{p^s} \in F,$ for all $i \leq m.$ but then $0=\left(\sum_{i=1}^m \alpha_i \gamma_i \right)^{p^s}=\sum_{i=1}^m d_i\alpha_i^{p^s}.$ thus $d_i = 0, \ \forall i \leq m,$ because, as i asked you to prove, $\{\alpha_1^{p^s}, \cdots , \alpha_m^{p^s} \}$ is an $F$-linearly independent set. clearly $d_i = 0$ implies that $\gamma_i = 0$ and hence $c_{ij}=0,$ for all $i,j,$ because the set $\{\beta_1, \cdots , \beta_n \}$ is an $F$-basis for $K.$ finally $c_{ij} =0,$ for all $i,j,$ gives us $\gamma= 0. \ \Box$
Algebra Equation

- A student has some $1 bills and some $5 bills totaling $47. What 2 equations using substitutions can be used to solve? (latest answer by Vivian L., Middletown, CT)
- I do not understand how to simplify this equation. (latest answer by Evan L., Haddonfield, NJ)
- Find the domain of the rational function f(x) = 7x - 8/7. (asked by Theresa from Milltown, NJ; answered by Tim H., Overland Park, KS)
- There is a list of graphs and I thought I got my answer right but none of the answers match. I thought the answer was -3 > x > -6. Is that wrong or is there something wrong with the question... (top voted answer by Beth L., Deland, FL)
- Compare 5 feet to 32 inches. The ratio is? (asked by Tammy from Racine, WI; answered by Felice R., Oconomowoc, WI)
- The center of the circle is (?,?). The radius of the circle is ? (latest answer by Parviz F., Woodland Hills, CA)
- The center of the circle is (?,?). The radius of the circle is ? (latest answer by Parviz F., Woodland Hills, CA)
- 4-x/x-2=-2/x-2; volume of a cube V=s^3, the length of the cube is round: a. 800 cm^3, b. 500 cm^3; formula for wind chill w=33-(10... (latest answer by Parviz F., Woodland Hills, CA)
- How do you write 5 + p in Algebra, as opposed to 5 x p? Thanks :) (asked by Tahlia from Schenectady, NY; answered by Amanda P., Bend, OR)
- Given that y varies directly with x, use the specified values to write a direct variation equation that relates x and y. (latest answer by Sunil D., Ann Arbor, MI)
- How do I work this problem? a/-9 = 2 (top voted answer by Gene G., Sherman, TX)
- How would I simplify this equation, where do I start? (top voted answer by Mary Ann F., Poway, CA)
- Write the equation of a line parallel to the given line but passing through the given point. (top voted answer by George T., Ambler, PA)
- The equation is as follows: 6mn + 4m + 3n + 2 (latest answer by Kirill Z., Willowbrook, IL)
- Help please! I have substituted 25, 17, 13 and -25 from the answer bank - none of which balance out to a -4. Completely stumped. (latest answer by Brad M., Blacksburg, VA)
- Help me find the value of variable a. (top voted answer by Mike C., Sammamish, WA)
- y = 2x + 8. I know it's going to be above the line and it is going to be dashed, but I don't know how to find the origin. Does it include the origin or does not include the origin... (asked by Amy from Titusville, FL; answered by Kevin S., Seneca, SC)
- Solve the equation (y-1)(y-2)=0. (latest answer by Tamara J., Jacksonville, FL)
- I am not good at graphing. I need to find the slope if it exists of the line containing the pair of points (3, 9) and (-2, -3). (top voted answer by Ralph L., Chicago, IL)
- Example: the sum of x and y equals 3 more than z, expressed algebraically: x + y = z + 3. Help! :( 1. simple interest is the product of "p" dollars, "r"... (top voted answer by Katherine S., Jamaica, NY)
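One of the questions above, comparing 5 feet to 32 inches, can be worked directly: convert both quantities to a common unit and reduce. A small illustrative sketch:

```python
from fractions import Fraction

# Compare 5 feet to 32 inches: convert to a common unit first
# (1 foot = 12 inches), then reduce the ratio to lowest terms.
feet_in_inches = 5 * 12                 # 60 inches
ratio = Fraction(feet_in_inches, 32)    # Fraction reduces automatically
print(ratio)                            # 15/8, i.e. the ratio 15:8
```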
Re: IEEE arithmetic handling
eggert@twinsun.com (Paul Eggert)
Tue, 17 Nov 1992 18:58:37 GMT

Newsgroups: comp.compilers
From: eggert@twinsun.com (Paul Eggert)
Organization: Twin Sun, Inc
Date: Tue, 17 Nov 1992 18:58:37 GMT
References: 92-11-041 92-11-087
Keywords: arithmetic

bill@amber.csd.harris.com (Bill Leonard) writes:

>Fortran does not *conflict* with the IEEE vis-a-vis negative zero. It merely
>says that the processor must never *output* a negatively-signed zero.

But that conflicts with IEEE Std 754-1985, section 5.6, which requires that converting a number from binary to decimal and back be the identity if the proper precision and rounding is used. The Fortran standard says -0.0 must be output as 0.0; this loses information. One way to work around the problem is to supply IEEE-specific binary/decimal conversion routines to the Fortran programmer, but there's no standard for this, and most implementors don't bother. So in practice, I'm afraid that Fortran and IEEE are indeed in conflict here.
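The information loss Eggert describes is easy to demonstrate in any IEEE 754 environment. A hedged sketch in Python (not Fortran; the Fortran output rule is emulated by hand):

```python
import math

x = -0.0
# IEEE 754 keeps a sign bit even on zero.
assert math.copysign(1.0, x) == -1.0

# A faithful binary -> decimal -> binary round trip preserves it,
# as the post notes IEEE Std 754-1985 requires.
round_trip = float(repr(x))                    # repr(-0.0) == '-0.0'
assert math.copysign(1.0, round_trip) == -1.0

# Emulating the rule "never output a negatively-signed zero":
fortran_text = "0.0" if x == 0.0 else repr(x)  # -0.0 == 0.0 is true
recovered = float(fortran_text)
assert math.copysign(1.0, recovered) == 1.0    # the sign bit is gone
```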
How Much Do Math Teachers Make

Math Teacher Salary

The average math teacher salary is $55,000 for an educator who teaches the concepts of math to students. The grades taught vary greatly, as will the level of math being taught. At an elementary level, the requirements to teach are going to be much different than at a high school level. In high school, a math educator would specify whether he wants to instruct algebra, calculus, geometry, etc. There is a multitude of degrees to choose from, which is why the student should do some research before proceeding with his decision.

Math Teachers From These Universities Get Better Job Opportunities

Grand Canyon University - B.S - Secondary Education: Business Education, M.Ed in Secondary Education (Leads to initial teacher licensure) & M.Ed. in Elementary Education (Leads to initial teacher licensure). Grand Canyon University is a premier private university and has been helping their Teaching students to achieve their potential by landing them the career of their dreams. Grand Canyon University also encourages students to apply Christian values & ethics in their workplace.

Liberty University Online - M.Ed in Teaching & Learning - Elementary Education, B.S in Early Education - Interdisciplinary Studies & Master of Education - Teaching and Learning - Middle Grades. Training Champions for Christ since 1971, Liberty University is one of our top universities if you are interested in becoming a Teacher. Pioneering distance education since 1985, Liberty University now enrolls more than 90,000 students from around the world in its thriving online program. With more than 160 fully accredited programs of study, Liberty University Online offers degrees from the certificate to the postgraduate level.
Walden University - B.S. - Educational Studies (General), Education Specialist - College Teaching and Learning & M.S. in Education - Mathematics (Grades 5-8). Walden University teaching degree programs are customized to advance your knowledge and help students apply that knowledge to their careers as teachers. Walden University also provides excellent student support and guidance.

Kaplan University - M.A. - Teaching (for Aspiring Teachers Grades 5-12), M.S. - Education (for Existing Teachers Grades K-12) & M.S. in Higher Education - Online College Teaching. Kaplan University provides intensive and comprehensive education both onsite and online to help its students excel in academic achievement.

How Much Do Math Teachers Make In 2013

Math Teacher Salary by Position
• High School Math Teacher: $51,000
• Elementary Math Teacher: $55,000
• Math Teacher: $55,000
• SAT Math Teacher: $55,000
• Math Science Teacher: $46,000
• Middle School Math Teacher: $56,000

By Location
• Florida: $53,000
• Minnesota: $50,000
• Mississippi: $65,000
• Illinois: $58,000
• Vermont: $52,000
• Wyoming: $52,000
• South Carolina: $49,000
• Ohio: $53,000
• California: $61,000
• West Virginia: $55,000
• Louisiana: $45,000
• New Jersey: $58,000
• South Dakota: $43,000
• North Dakota: $53,000
• Oregon: $52,000
• Indiana: $56,000
• Oklahoma: $50,000
• Nevada: $52,000
• Hawaii: $40,000
• Maryland: $56,000

By Experience
• 1 to 4 years: $34,183 – $58,958
• 5 to 9 years: $32,500 – $67,407

By Industry
• College or University: $27,658 – $60,053
• Public K-12 Education: $28,536 – $61,130
• Education: $28,006 – $60,694

By Gender
• Female: $43,745 – $66,129
• Male: $35,800 – $58,983

By Certification
• Secondary Teacher Certification (Grade 9-12): $37,340 – $66,129

How to Become a Math Teacher

To become a math teacher, a Bachelor of Science in Mathematics or in Education with a focus in math is often required. There are also some specialized degree programs which will narrow down your area of specialization. This degree will have to be obtained from a four-year college; if the math teacher would like to work at a higher level, he or she will often obtain a master's degree or a doctorate. Some states even require that public school teachers obtain a license. Requirements vary from state to state, which is why each student should look them up; some may require more than others.
Beiträge zur Algebra und Geometrie / Contributions to Algebra and Geometry, Vol. 42, No. 1, pp. 219-233 (2001)

Generalized GCD Rings

Majid M. Ali; David J. Smith
Department of Mathematics, University of Auckland, Private Bag 92019, Auckland, New Zealand; e-mail: majid@math.auckland.ac.nz; smith@math.auckland.ac.nz

Abstract: All rings are assumed to be commutative with identity. A generalized GCD ring (G-GCD ring) is a ring (zero-divisors admitted) in which the intersection of every two finitely generated (f.g.) faithful multiplication ideals is a f.g. faithful multiplication ideal. Various properties of G-GCD rings are considered. We generalize some of Jäger's and Lüneburg's results to f.g. faithful multiplication ideals.

Keywords: multiplication ideal, Prüfer domain, greatest common divisor, least common multiple

Classification (MSC2000): 13A15; 13F05

© 2000 ELibM for the EMIS Electronic Edition
UNSW Handbook Course - Discrete Mathematics - MATH1081 Campus: Kensington Campus Career: Undergraduate Units of Credit: 6 Indicative Contact Hours per Week: 6 Enrolment Requirements: Corequisite: MATH1131 or MATH1141 or MATH1151. Excluded: MATH1090 Role of proof in mathematics, logical reasoning and implication, different types of proofs. Sets, algebra of sets, operations on sets. Mathematical logic, truth tables, syntax, induction. Graphs and directed graphs, basic graph algorithms. Counting, combinatorial identities, binomial and multinomial theorems. Binary operations and their properties, ordered structures. Recursion relations. Assumed knowledge: HSC Mathematics Extension 1. Students will be expected to have achieved a combined mark of at least 100 in Mathematics and Mathematics Extension 1.
Is 0 a whole number?

Date: 7/17/96 at 23:29:40
From: Lucas W Tolbert
Subject: Is 0 a whole number?

Is 0 a whole number?

Date: 7/29/96 at 13:32:52
From: Doctor Jerry
Subject: Re: Is 0 a whole number?

What are whole numbers? This term is rarely used by mathematicians. It is used in teaching mathematics to young children, to distinguish between whole and fractional numbers. The term whole number often is used to mean the counting numbers, 1, 2, 3, .... Some would argue, I suppose, that 0 is also a counting number. Others would disagree.

One starting point for the real number system is Peano's Postulates, which amount to axioms for the positive integers. No mention is made of 0. The number 0 emerges when, having defined addition for the set of positive integers, one asks for solutions to equations of the form m + x = n, where m and n are positive integers. So that such equations can be solved, the set of positive integers is enlarged: the negative numbers and 0 are created/constructed so that this equation can be solved. The rationals and reals can be created in a similar way.

-Doctor Jerry, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Math Forum Discussions

Topic: Countably Infinite Sets
Replies: 2   Last Post: Dec 28, 2012 12:04 AM

Re: Countably Infinite Sets
Posted: Dec 28, 2012 12:04 AM

On Dec 27, 10:37 pm, Aatu Koskensilta <aatu.koskensi...@uta.fi> wrote:
> Butch Malahide <fred.gal...@gmail.com> writes:
> > On Dec 27, 8:43 pm, netzweltler <reinhard_fisc...@arcor.de> wrote:
> >> Does the set {{1}, {1,2}, {1,2,3}, ...} contain all natural numbers?
> > No, because it does not contain the natural number 1; if it did, then
> > 1 would be an element of itself, contradicting the axiom of regularity.
> We don't need foundation here. It's a nice exercise in pointless
> pedantry to show there is no representation of the naturals (in ordinary
> set theory) on which the set {{1}, {1,2}, {1,2,3}, ...} contains all
> natural numbers.

Really? How do you prove that? Why can't you have n = {1,2,...,n} for each natural n?

Date: 12/27/12   Subject: Re: Countably Infinite Sets   Author: Aatu Koskensilta
Date: 12/28/12   Subject: Re: Countably Infinite Sets   Author: Butch Malahide
the encyclopedic entry of particular affirmative

Aristotle's logical system

Aristotle recognised four kinds of quantified sentences, each of which contain a subject and a predicate:
• Universal affirmative: Every S is a P.
• Universal negative: No S is a P.
• Particular affirmative: Some S are P.
• Particular negative: Not every S is a P.

These are abbreviated as follows:
• Univ. Affirm.: Written SaP (a comes from a-fir-mo, Latin for to affirm; we take the first vowel, since it's universal)
• Univ. Neg.: Written SeP (e comes from ne-go, Latin for to deny; we take the first vowel, since it's universal)
• Part. Affirm.: Written SiP (i comes from a-fir-mo, Latin for to affirm; we take the second vowel, since it's particular)
• Part. Neg.: Written SoP (o comes from ne-go, Latin for to deny; we take the second vowel, since it's particular)

There are various ways to combine such sentences into syllogisms, both valid and invalid. In Mediaeval times, students of Aristotelian logic classified every possibility and gave them names. For example, the Barbara syllogism is as follows:
• Every Y is a Z.
• Every X is a Y.
• Therefore, every X is a Z.

Barbara comes from the vowels of the three sentences used: all three are universal affirmatives, giving the pattern a-a-a. At first glance, this may seem the same as the syllogism obtained by exchanging the two premises. However, in Aristotelian logic this is not so. One logical law states that the predicate must be given by the first premise, the subject by the second.

A syllogism can, furthermore, fall into one of the following figures:

   I     II    III    IV
 M-P | P-M | M-P | P-M
 S-M | S-M | M-S | M-S

For each there are several valid modes. To check for validity, we see if the terms are distributed. To be distributed means to be either:
1) the subject of a universal premise (SaP; SeP), or
2) the predicate of a negative premise (SeP; SoP).

Lastly, a premise can be obverted or converted to fall into a specific valid case. Conversion is generally achieved by switching terms:
SaP = PiS
SiP = PiS
SeP = PoS
SoP = Ø (no valid conversion)

Obversion is generally achieved by negating the predicate (here ¬P stands for the negated term "non-P", traditionally written as P with a line over it):
SaP = Se¬P
SiP = So¬P
SeP = Sa¬P
SoP = Si¬P

Aristotle also recognised the various immediate entailments that each type of sentence has. For example, the truth of a universal affirmative entails the truth of the corresponding particular affirmative, as well as the falsity of the corresponding universal negative and particular negative. The square of opposition (or square of Boethius) lists all these logical entailments.

Famously, Aristotelian logic runs into trouble when one or more of the terms involved is empty (has no members). For example, under Aristotelian logic, "all trespassers will be prosecuted" implies the existence of at least one trespasser.
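The validity of a syllogism like Barbara can also be checked mechanically. The sketch below is not part of the original entry (the class and method names are made up): it reads "Every A is a B" as a subset relation and brute-forces every assignment of subsets of a three-element universe to X, Y and Z.

```java
import java.util.*;

public class Barbara {
    // "Every A is a B" read as: the set A is a subset of the set B.
    static boolean everyIs(Set<Integer> a, Set<Integer> b) {
        return b.containsAll(a);
    }

    // Enumerate all subsets of {0,1,2} for X, Y, Z and verify that
    // (Every Y is a Z) and (Every X is a Y) entail (Every X is a Z).
    static boolean barbaraValid() {
        List<Set<Integer>> subsets = new ArrayList<>();
        for (int mask = 0; mask < 8; mask++) {
            Set<Integer> s = new HashSet<>();
            for (int i = 0; i < 3; i++)
                if ((mask & (1 << i)) != 0) s.add(i);
            subsets.add(s);
        }
        for (Set<Integer> x : subsets)
            for (Set<Integer> y : subsets)
                for (Set<Integer> z : subsets)
                    if (everyIs(y, z) && everyIs(x, y) && !everyIs(x, z))
                        return false; // counterexample found
        return true;
    }

    public static void main(String[] args) {
        System.out.println("Barbara is valid: " + barbaraValid());
    }
}
```

Note that this modern set-theoretic reading makes "Every S is a P" vacuously true when S is empty, which is precisely the existential-import trouble mentioned at the end of the entry.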
The outer product

Next: Eigenvalues and eigenvectors Up: Fundamental concepts Previous: Operators

So far we have formed the inner product ⟨B|A⟩ of a bra and a ket. We can also multiply a ket |A⟩ by a bra ⟨B| in the reverse order, giving the object |A⟩⟨B|. Acting with it on a general ket |C⟩ produces |A⟩⟨B|C⟩, i.e. the ket |A⟩ multiplied by the number ⟨B|C⟩. This clearly depends linearly on the ket |C⟩, so |A⟩⟨B| is an operator. Mathematicians term the operator |A⟩⟨B| the outer product of |A⟩ and ⟨B|.

Richard Fitzpatrick 2006-02-16
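As a numeric illustration (not part of the original page; class and method names are made up), the outer product of two real vectors can be represented as a matrix, and one can check that applying |A⟩⟨B| to |C⟩ gives |A⟩ scaled by the inner product ⟨B|C⟩:

```java
public class OuterProduct {
    // Outer product |A><B| of two real vectors as a matrix: M[i][j] = a[i]*b[j].
    static double[][] outer(double[] a, double[] b) {
        double[][] m = new double[a.length][b.length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < b.length; j++)
                m[i][j] = a[i] * b[j];
        return m;
    }

    // Apply the matrix operator to a ket (column vector).
    static double[] apply(double[][] m, double[] c) {
        double[] r = new double[m.length];
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < c.length; j++)
                r[i] += m[i][j] * c[j];
        return r;
    }

    // Inner product <B|C> of two real vectors.
    static double inner(double[] b, double[] c) {
        double s = 0;
        for (int i = 0; i < b.length; i++) s += b[i] * c[i];
        return s;
    }

    public static void main(String[] args) {
        double[] a = {1, 2}, b = {3, 4}, c = {5, 6};
        double[] lhs = apply(outer(a, b), c);   // (|A><B|)|C>
        double scal = inner(b, c);              // <B|C> = 39
        System.out.println(lhs[0] + " vs " + a[0] * scal); // 39.0 vs 39.0
    }
}
```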
Wolfram Demonstrations Project

Disk Rolling inside a Rotating Ring

This Demonstration shows a disk rolling, without slipping, inside a rotating ring. The system has two degrees of freedom: the angle of rotation of the ring and the angle of the center of the disk within the ring. A brake shoe can stop the ring, reducing the system to only one degree of freedom: a disk rolling inside a stationary ring.

A steel ring is supported at the bottom by two frictionless Teflon rollers. This ring can be rotated with an initial angular speed. Inside the ring, a steel disk is released from a given angular position with a given angular speed. The friction between the ring and disk is sufficient to avoid sliding. Lagrangian mechanics is used to extract the equations of motion for this system with two degrees of freedom:

1. the angle between the vertical and the line connecting the center of the disk with the center of the ring;
2. the angle of rotation of the ring around its center.

Since no slipping is allowed between the disk and the ring, the angle of rotation of the disk around its center can be calculated from these two angles together with the inner radius of the ring and the radius of the disk.

The Lagrangian of the system is the difference between the kinetic and potential energies; the kinetic term involves the moments of inertia of the ring and disk. The equations of motion can then be derived from the Euler–Lagrange equations.

In order to better verify the rotations of ring and disk, you may have to slow down the animation to eliminate stroboscopic effects. Run the bookmark "sliding check" at different speeds to convince yourself that there is no sliding.
Quadratic Formula Question #3

May 6th 2013, 09:09 AM   #1
I am not good at creating equations off of word problems... I try but I always get it wrong; I'm not sure how to do any word problems like these.

275x^2 - 15x - 19600 = 0

That equation was my attempt at it, not sure if it is correct... Any help would be great, thanks!

May 6th 2013, 11:11 AM   #2
Re: Quadratic Formula Question #3
If R is the revenue, S is the number of sales and P is the price, then R = SP. When the price is decreased by $15 x times, the price is 275 - 15x and the sales are 90 + 5x. You want to find x when R > 19600.

May 6th 2013, 05:54 PM   #3
Re: Quadratic Formula Question #3
Shakarri has given the correct direction. From the equation for R you will get R = x^2 + 104x - 206. Now proceed to differentiate to get the min value of x.

May 6th 2013, 06:39 PM   #4
Re: Quadratic Formula Question #3
Since this question is posted in the Pre-Calculus forum, you may not know how to use differential calculus to optimize a function. In this case, since you have a quadratic, look for the axis of symmetry.
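Following the last reply's suggestion, the axis of symmetry can be found without calculus. The sketch below assumes the price and sales expressions quoted in the thread (price 275 - 15x, sales 90 + 5x); the class name is made up.

```java
public class RevenueVertex {
    // Revenue after x price reductions: R(x) = (275 - 15x)(90 + 5x).
    // Expanded: R(x) = -75x^2 + 25x + 24750, a downward-opening parabola.
    static double revenue(double x) {
        return (275 - 15 * x) * (90 + 5 * x);
    }

    // Axis of symmetry (vertex) of ax^2 + bx + c is at x = -b / (2a).
    static double vertexX(double a, double b) {
        return -b / (2 * a);
    }

    public static void main(String[] args) {
        double x = vertexX(-75, 25);   // x = 1/6: maximum of the parabola
        System.out.println("revenue peaks near x = " + x);
        System.out.println("R(x) = " + revenue(x));
    }
}
```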
Math Help

November 14th 2011, 01:41 AM   #1
I've got a question about the definition of plane curves as equivalence classes under the relation of reparametrization. It is said that whenever two plane curves have different trace, i.e. their graphical representation on the plane differs, the curves belong to distinct equivalence classes. The converse is not true, however. The two curves $c_1:[0,2\pi] \longrightarrow \mathbb{R}^2$ and $c_2:[0,4\pi] \longrightarrow \mathbb{R}^2$ defined by $c_i(t)=(\cos t, \sin t)$ are different in the above sense, in spite of the fact that they have the same trace. This is odd, since I can reparametrize one to the other by means of the map $\varphi(t)=2t$. Is it just because the curve is closed and periodic? How do we actually unambiguously distinguish closed curves/periodic curves?

November 14th 2011, 04:49 AM   #2
Re: Curves
I suspect what is meant by "plane curves as equivalence classes under the relation of reparameterization" is that all the curves are assumed to have domain [0,1], and reparameterizing is defined as precomposing with a homeomorphism of [0,1].

November 14th 2011, 05:15 AM   #3
Re: Curves
Aha... I mean we have defined a reparametrization map to be a diffeomorphism of intervals, but without the requirement that the intervals be the same. Although it is certainly reasonable to require that for periodic curves, since as in the example above the curves have different lengths, which is not what we actually wish if we consider two similar curves. OK, so this is done in order to incorporate the case of periodic curves, isn't it?

November 14th 2011, 08:53 AM   #4
Re: Curves
Once you have decided to "mod out by" diffeomorphisms of the domain, it's no longer meaningful to ask whether the intervals are the same, since any closed interval is diffeomorphic to any other. For convenience you may as well take them all to be [0,1]. The reason why the two you mentioned are not equivalent is that one goes once around a circle, while the other goes twice around. One is injective (okay, except at the endpoints) and the other is not. No diffeomorphism of the domain will change one into the other.

November 14th 2011, 09:18 AM   #5 (MHF Contributor, Mar 2011)
Re: Curves
One way to distinguish between two closed paths is to calculate their winding numbers. Provided there is a point on the region of the plane that is bounded by the curve that the curve does not pass through, we can remove that point from the plane. $\mathbb{R}^2 - \{x_0\}$ is homotopy-equivalent to $S^1$, so we can calculate the homotopy class of the deformation of c(t) to a path on $S^1$, which will give us an integer (since $\pi_1(S^1) \cong \mathbb{Z}$). In your example, the winding number of $c_1$ is 1, while the winding number of $c_2$ is 2, so they are not homotopic.

November 14th 2011, 10:55 AM   #6
Re: Curves
I don't know if this is quite what you're looking for, but it would answer the question for me if I were you. Suppose you have some trace of a curve in the plane and you wish to find out which of two periodic curves $c_1,c_2$ is the least "redundant". Of course, this amounts to asking what the smallest interval you can really take is. An interesting fact that lets you know this is always answerable (in the sense that there really is a smallest period) is that the set $G_\gamma$ of periods of some smooth curve $\gamma$ is, not surprisingly, a subgroup of $(\mathbb{R},+)$. In fact, it's not hard to prove that $G_\gamma$ is a discrete subgroup of $\mathbb{R}$ (i.e. a topological group whose inherited topology from $\mathbb{R}$ is discrete). Then, the basic theory of discrete subgroups allows us to conclude that there exists some $x\in\mathbb{R}$ such that $G_\gamma=x\mathbb{Z}$. Clearly then the curve you really want is the one whose domain is on an interval of length $x$, the fundamental period. Thus, in your example your curve has fundamental period $2\pi$ and moreover any other curve will have period $n2\pi$ for some $n\in\mathbb{Z}$. This allows you to differentiate between closed curves based on what multiplicity of the fundamental period they have. More information (including a self-contained proof of some of the discrete subgroup facts) can be found here on my blog. I hope this was of some interest to you.
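The winding-number argument can also be checked numerically. This is an illustrative sketch, not from the thread (the class name is made up): it samples the path, accumulates the change in the polar angle atan2(y, x) with branch-cut unwrapping, and divides by 2π.

```java
import java.util.function.DoubleUnaryOperator;

public class Winding {
    // Winding number about the origin of the closed path t -> (x(t), y(t)),
    // t in [0, tMax], estimated by summing small angle increments.
    static double windingNumber(DoubleUnaryOperator x, DoubleUnaryOperator y,
                                double tMax, int steps) {
        double total = 0;
        double prev = Math.atan2(y.applyAsDouble(0), x.applyAsDouble(0));
        for (int k = 1; k <= steps; k++) {
            double t = tMax * k / steps;
            double ang = Math.atan2(y.applyAsDouble(t), x.applyAsDouble(t));
            double d = ang - prev;
            // unwrap the jump across the branch cut of atan2 at +/- pi
            if (d > Math.PI)  d -= 2 * Math.PI;
            if (d < -Math.PI) d += 2 * Math.PI;
            total += d;
            prev = ang;
        }
        return total / (2 * Math.PI);
    }

    public static void main(String[] args) {
        // c1 on [0, 2*pi] winds once; c2 on [0, 4*pi] winds twice.
        double w1 = windingNumber(Math::cos, Math::sin, 2 * Math.PI, 1000);
        double w2 = windingNumber(Math::cos, Math::sin, 4 * Math.PI, 1000);
        System.out.printf("w1 = %.3f, w2 = %.3f%n", w1, w2);
    }
}
```

The step count just needs each increment to stay below π so the unwrapping is unambiguous.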
Simonton Science Tutor

Find a Simonton Science Tutor

...I love one-on-one tutoring and enjoy working with students of all levels. I am patient and have the flexibility to adapt my teaching style to match a student's needs. I am also confident that I can explain complex math and science problems in a way that is easy to understand.
13 Subjects: including genetics, organic chemistry, ACT Math, algebra 1

...Because I have personally dealt with test anxiety myself and helped students conquer theirs as well, I am well-equipped to help students overcome their fears through warm, gentle tutoring sessions where I will always expect good effort but not perfection. Excellence is a far better goal. In add...
47 Subjects: including biology, calculus, chemistry, microbiology

...I have also tutored high school students in Houston and Sugar Land in various subjects: math, physics, literature, Spanish, studying skills and SAT preparation. Currently, I am the translator for the Alief Independent School District. As a certified bilingual teacher, I am required to take 30 hours a year of professional development, which I cover by taking teaching workshops.
41 Subjects: including anthropology, reading, philosophy, English

...This is one of the subjects I love to tutor in because it's exciting and interesting to read the styles of each individual and to help them write and analyze a beautiful paper. I took Pre-AP English I and II from a top Fort Bend school and made A's in English for all four years of high school En...
39 Subjects: including pharmacology, microbiology, physical science, astronomy

...This is often a simple misunderstanding stemming from the basic concepts. While it might sound tedious and superfluous, many of the more complex topics are easily understood from the basics. Finally, it is important to inspire passion and a desire to learn in each student.
22 Subjects: including chemical engineering, physical science, physics, geometry
Optimal kinematics of supercoiled filaments

Seminar Room 1, Newton Institute

In this talk we propose kinematics of supercoiling of closed filaments as solutions of the elastic energy minimization [2]. The analysis is based on the thin rod approximation of linear elastic theory, under conservation of self-linking number, with elastic energy evaluated by means of bending and torsional influence. Time evolution functions are described by means of piecewise polynomial transformations based on cubic spline functions. In contrast with traditional interpolation, the parameters defining the cubic splines representing the evolution functions are considered as the unknowns in a non-linear optimization problem. We show how the coiling process is associated with conversion of mean twist energy into bending energy through the passage by an inflexional configuration [3] in relation to geometric characteristics of the filament evolution. Geometric models for chromatin fibre folding are finally presented, compared in terms of geometric quantities and tested on ChIP (Chromatin ImmunoPrecipitation) data generated from human pluripotent embryonic stem cells. These results provide new insights on the folding mechanism [1] and associated energy contents and may find useful applications in folding of macromolecules and DNA packing in cell biology [4].

References:
[1] F. Maggioni & R.L. Ricca (2006) Writhing and coiling of closed filaments. Proc. R. Soc. A 462, 3151-3166.
[2] F. Maggioni, F. Potra & M. Bertocchi, Optimal kinematics of a looped filament (under review).
[3] H.K. Moffatt & R.L. Ricca (1992) Helicity and the Călugăreanu invariant. Proc. R. Soc. Lond. A 439, 411-429.
[4] R.L. Ricca & F. Maggioni (2008) Multiple folding and packing in DNA. Computers & Mathematics with Applications 55, 1044-1053.
Kingstowne, VA Math Tutor

Find a Kingstowne, VA Math Tutor

...This approach does not promote conceptual understanding! Students need to be able to think critically and creatively when they face a new problem. They need to be challenged with rich problems, while being provided with the tools to tackle those problems with creativity and confidence.
16 Subjects: including algebra 1, algebra 2, calculus, geometry

...I was also a special education teacher for 14 years. Most recently I was employed by a high school as a special education teacher in self-contained Algebra and Geometry classes for ED/LD youth. In addition, I have taught or been a tutor for Pre-Algebra, US/VA History and Government, English, Bi...
15 Subjects: including geometry, reading, dyslexia, world history

...I had more than 3 years' intense training in programming, especially in C and Java, both of which have been widely used in my daily job. I also tutored C and Java courses as an undergraduate. I not only know how to program, but also know how to learn to program.
27 Subjects: including SAT math, geometry, precalculus, trigonometry

...I love history and try to bring it to life for students that I work with. I have a bachelor's degree in classical studies and graduated with honors. The studies included the history, religion, and architecture of both ancient Rome and Greece.
21 Subjects: including algebra 1, ACT Math, prealgebra, reading

...I have a Master's degree in Chemistry and I am extremely proficient in mathematics. I have taken many math classes and received a perfect score on my SAT Math test. I have tutored students for years in standardized tests and provide a customized tutoring plan for each individual student.
11 Subjects: including prealgebra, trigonometry, algebra 1, algebra 2
#include <boost/math/special_functions/cbrt.hpp>

namespace boost{ namespace math{

template <class T>
calculated-result-type cbrt(T x);

template <class T, class Policy>
calculated-result-type cbrt(T x, const Policy&);

}} // namespaces

Returns the cube root of x: x^(1/3).

The return type of this function is computed using the result type calculation rules: the return type is double when x is an integer type and T otherwise.

The final Policy argument is optional and can be used to control the behaviour of the function: how it handles errors, what level of precision to use, etc. Refer to the policy documentation for more details.

Implemented using Halley iteration.

The following graph illustrates the behaviour of cbrt.

For built-in floating-point types cbrt should have approximately 2 epsilon accuracy.

Tested with a mixture of spot test sanity checks and random high-precision test values calculated using NTL::RR at 1000-bit precision.
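The note "Implemented using Halley iteration" can be illustrated with a toy version of the idea. This is not Boost's actual implementation; it is a sketch for positive arguments only, with a made-up class name, applying Halley's method to f(x) = x^3 - a.

```java
public class HalleyCbrt {
    // Halley's iteration for the cube root of a > 0:
    //   x_{n+1} = x_n * (x_n^3 + 2a) / (2*x_n^3 + a)
    // which converges cubically to a^(1/3).
    static double cbrt(double a) {
        double x = a > 1 ? a / 3 : 1;    // crude starting guess
        for (int i = 0; i < 60; i++) {
            double x3 = x * x * x;
            double next = x * (x3 + 2 * a) / (2 * x3 + a);
            if (next == x) break;        // converged to machine precision
            x = next;
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(cbrt(27.0));  // ~3.0
        System.out.println(cbrt(2.0));   // ~1.2599...
    }
}
```

A production implementation would also handle zero, negatives, infinities and NaN, and would pick a much better starting guess from the floating-point exponent so only a couple of iterations are needed.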
Re: maths

chantelle wrote: maths is great

That is correct. Welcome to the forum!

Hi Agnishom, I see that you like going through old threads, like me.

The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Triangle Areas - Please help before I smash my laptop into pieces!

11-07-2010, 01:19 PM   Atia of the julii

    public class TriangleArea {
        int a, b, c;

        TriangleArea(int a, int b, int c) {
        }

        public static void main(String[] args) {
        }

        public double Sidea = 0.0, Sideb = 0.0, Sidec = 0.0, area = 0.0;

        public TriangleArea(double sa, double sb, double sc) {
        }

        public void calcArea() {
            double s = 0.0, Num = 0.0;
            s = 0.5 * (Sidea + Sideb + Sidec);
            Num = s * ((s - Sidea) * (s - Sideb) * (s - Sidec));
            area = Math.sqrt(Num);
        }

        public double getArea() {
            return area;
        }

        public static boolean isValid(double side1, double side2, double side3) {
            return ((side1 + side2) > side3) && ((side2 + side3) > side1) && ((side3 + side1) > side2);
        }

        // Heron's formula: all three (s - side) factors are needed.
        public static double area(double side1, double side2, double side3) {
            double s = (side1 + side2 + side3) / 2;
            double area = Math.sqrt(s * (s - side1) * (s - side2) * (s - side3));
            return area;
        }
    }

Moderator Edit: code tags added

11-07-2010, 01:39 PM
Can you tell us what has you "the most" stuck right now? Is it deciding how to create the triangle sides that satisfy the rules? The more specific the question, usually the more helpful the responses. Good luck and welcome to Java forums! Oh, I've added code tags in your post above.

11-07-2010, 01:51 PM
You could do it the stupid way, i.e. enumerate all the sides a, b and c up to a certain number and check the two constraints: the total length constraint: a+b+c <= 20, and the triangle constraint: a < b+c && b < a+c && c < a+b. If both constraints are satisfied, do the area calculation. If you keep a <= b <= c you don't have to check for those equivalent triangles. A cowardesque upperbound for those three sides would be 20.

kind regards,

11-07-2010, 02:10 PM   Atia of the julii
Thanks guys, I've only been teaching myself for 2 weeks and I'm still finding a lot of Java very confusing. I'm not too bad at logic, I just haven't had much practice actually writing it out into a coherent program.
I'm just getting used to things like while and if statements, and I can't even work out if I need, say, an if statement to work the constraints that JosAH mentioned above. I am a complete beginner as you can probably tell! xD Apparently this exercise should not take long at all; I've been working on it for over 6 hours now and just can't get my head around it :(

11-07-2010, 02:17 PM
But again to reiterate, if you could tell us what specifically has you stumped, we can give you better help.

11-07-2010, 02:18 PM
Quote: Thanks guys, I've only been teaching myself for 2 weeks and I'm still finding a lot of Java very confusing, I'm not too bad at logic I just haven't had much practice actually writing it out into a coherent program. I'm just getting used to things like while and if statements and I can't even work out if I need, say, an if statement to work the constraints that JosAH mentioned above. I am a complete beginner as you can probably tell! xD Apparently this exercise should not take long at all, I've been working on it for over 6 hours now and just can't get my head around it :(

Ok, I'll help you with the loop boundaries:

    for (int a = 1; a <= 20; a++) {
        for (int b = a; b <= 20; b++) {
            for (int c = b; c <= 20; c++) {
                if (!lengthConstraint(a, b, c)) break;
                // rest of the logic goes here
            }
        }
    }

Note that the way the loop boundaries are set up, it is guaranteed that a <= b <= c, so you don't have to check for equivalent triangles. I added one constraint check, the length constraint. It is a simple method:

    private boolean lengthConstraint(int a, int b, int c) {
        return a + b + c <= 20;
    }

i.e. it checks whether or not the sum of the lengths of the sides is less than or equal to 20, as the assignment states. If the constraint fails you can simply break out of the loop because only larger values of a, b or c are tested in a next pass through the loop.

kind regards,

11-07-2010, 02:33 PM
Atia of the julii
Thanks very much Jos, I appreciate it.
I'll keep working at it. I came across 'exceptions' but I haven't studied or learnt them yet, so I wasn't too keen to use them as I would only get more confused. So thanks for the 'if' statements, they are more at my level!

11-07-2010, 02:55 PM
Atia of the julii
I searched the forum for any examples and I found this.... I am unfamiliar with floats and I am not sure what the term 'precision' is used for, could anyone please enlighten me? I know I will be using the constraints Jos kindly laid out for me above, but the question seems to be similar to mine so I am wondering if I can build on this example I found?

    public class Heron {
        public static void main(String[] args) {
            // Sides for triangle in float
            float af, bf, cf;
            float sf, areaf;
            // Ditto in double
            double ad, bd, cd;
            double sd, aread;
            // Area of triangle in float
            af = 12345679.0f;
            bf = 12345678.0f;
            cf = 1.01233995f;
            sf = (af + bf + cf) / 2.0f;
            areaf = (float) Math.sqrt(sf * (sf - af) * (sf - bf) * (sf - cf));
            System.out.println("Single precision: " + areaf);
            // Area of triangle in double
            ad = 12345679.0;
            bd = 12345678.0;
            cd = 1.01233995;
            sd = (ad + bd + cd) / 2.0d;
            aread = Math.sqrt(sd * (sd - ad) * (sd - bd) * (sd - cd));
            System.out.println("Double precision: " + aread);
        }
    }

11-07-2010, 03:02 PM
Atia of the julii
Ah... I think I understand: a double is a longer byte form than a float, so it will show more decimal places. I will follow this code but stick only to double....

11-07-2010, 03:05 PM
Atia of the julii
So I am now working off this as a base...
    public class Area {
        public static void main(String[] args) {
            // Sides for a triangle in double
            double a, b, c;
            double s, aread;
            // Area of triangle in double
            a = 12345679.0;
            b = 12345678.0;
            c = 1.01233995;
            s = (a + b + c) / 2.0d;
            aread = Math.sqrt(s * (s - a) * (s - b) * (s - c));
            System.out.println("Area: " + aread);
            for (int a = 1; a <= 20; a++) {
                for (int b = a; a <= 20; b++) {
                    for (int c = b; c <= 20; c++) {
                        if (!lengthConStraint(a, b, c)) break;
                        // rest of the logic goes here

11-07-2010, 03:07 PM
Don't copy code from the forum. For this assignment you're much better off creating your code from scratch.

11-07-2010, 03:20 PM
I call this 'Frankenstein code': code that is glued together from scraps copied and pasted from the internet. Most of the time it turns out a complete failure or just an ugly monster. ;-)

kind regards,

11-07-2010, 03:23 PM
Atia of the julii
I just don't feel confident enough to do it all from scratch; my BA of several years ago was in English, and I haven't studied maths since my GCSE in the 90's lol. I wish I could find some similar tasks on the net to study but they're difficult to find, and typically much more advanced with coordinates etc. that derail me even more. I just got a Java for Dummies book from Amazon that isn't proving very useful either.... frustration....

11-07-2010, 03:27 PM
Atia of the julii

11-07-2010, 04:07 PM
Quote: I just don't feel confident enough to do it all from scratch; my BA of several years ago was in English, and I haven't studied maths since my GCSE in the 90's lol. I wish I could find some similar tasks on the net to study but they're difficult to find, and typically much more advanced with coordinates etc. that derail me even more. I just got a Java for Dummies book from Amazon that isn't proving very useful either.... frustration....

frustration == brain cells working, so don't let it dismay you, but rather embrace it.
Again, try to create it on your own, and if you run into a road block, post your code here and your errors, and if we can, we'll try to help you.

11-07-2010, 04:41 PM
Atia of the julii
Thanks Fubarable, I'm determined to get there in the end! I'm going from scratch atm so I'll post my results soon hopefully :)

11-07-2010, 04:45 PM
We can hardly wait to see the result, and our nipples will explode in delight if you didn't name your class 'Area'.

kind regards,
Jos ;-)

11-07-2010, 05:09 PM
Atia of the julii
Haha Jos, I had 'TriangleArea' and 'Area' because I wanted to have 2 classes to play around with in Eclipse. I just deleted all the crap I had in 'TriangleArea' and made this from scratch with the help of your 'if' statement. I have cleaned away all the errors apart from the syntax error I am getting on the 'return returnable' line... it says insert '}' but I do this and it remains an error. I'm sure I haven't accomplished anything here and it'll all be wrong yet again, but I just wanted to run it and see what this did, though I am aware I don't have a System.out.println in it yet...
    public class TriangleArea {
        public static boolean isValid(double sidea, double sideb, double sidec){
            boolean returnable = false;
            for (int a = 1; a <= 20; a++) {
                for (int b = a; a <= 20; b++) {
                    for (int c = b; c <= 20; c++) {
                        if (!lengthConStraint(a, b, c)) break;
            int s;
            if (sidea + sideb <= sidec){
                if (sideb + sidec <= sidea){
                    if (sidea + sidec <= sideb){
                        return false;
                        returnable = true;}}
            else {
                returnable = true;
            return returnable;

        private static boolean lengthConStraint(int sidea, int sideb, int sidec) {
            // TODO Auto-generated method stub
            return false;
        }

        public static double area(double sidea, double sideb, double sidec){
            double s = Math.sqrt( (sidea + sideb + sidec) / 2 );
            s = s * (s - sidea) * (s - sideb) * (s - sidec);
            return s;
        }

Moderator Edit: Code tags added

11-07-2010, 05:29 PM
I added code tags to your post to have the code retain its formatting and be readable, but you should edit it and fix your indentation because as posted, it's unreadable. I'm confused: what is the purpose of your isValid method? You call a lengthConstraint method, but I don't see code for it? Or a main method?

11-07-2010, 05:31 PM
Atia of the julii

    public class TriangleArea {
        public static boolean isValid(double a, double b, double c){
            boolean returnable = false;
            for ( a = 1; a <= 20; a++) {
                for ( b = a; a <= 20; b++) {
                    for ( c = b; c <= 20; c++) {
                        if (!lengthConStraint(a, b, c)) break;
            int s;
            if (a + b <= c){
                if (b + c <= a){
                    if (a + c <= b){
                        return false;
                        returnable = true;}}
            else {
                returnable = true;

        private static boolean lengthConStraint(double a, double b, double c) {
            // TODO Auto-generated method stub
            return false;
        }

        public static double area(double a, double b, double c){
            double s = Math.sqrt( (a + b + c) / 2 );
            s = s * (s - a) * (s - b) * (s - c);
            return s;
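For reference, here is a minimal working sketch of what this thread is building toward (illustrative, not the poster's final code): it combines JosAH's loop bounds and length constraint with the triangle inequality and Heron's formula.

```java
// A minimal working version of the assignment discussed in the thread
// (illustrative sketch, not the original poster's final code): enumerate
// integer sides a <= b <= c with perimeter <= 20, keep only valid
// triangles, and compute each area with Heron's formula.
public class TriangleAreas {
    static boolean lengthConstraint(int a, int b, int c) {
        return a + b + c <= 20;                        // perimeter bound from the assignment
    }

    static boolean isValid(int a, int b, int c) {
        return a + b > c && b + c > a && a + c > b;    // triangle inequality
    }

    static double area(int a, int b, int c) {
        double s = (a + b + c) / 2.0;                  // semi-perimeter
        return Math.sqrt(s * (s - a) * (s - b) * (s - c));  // Heron's formula
    }

    public static void main(String[] args) {
        for (int a = 1; a <= 20; a++)
            for (int b = a; b <= 20; b++)
                for (int c = b; c <= 20; c++) {
                    if (!lengthConstraint(a, b, c)) break;  // larger c only grows the sum
                    if (isValid(a, b, c))
                        System.out.println(a + " " + b + " " + c + " area = " + area(a, b, c));
                }
    }
}
```

Because the loops start each side at the previous one, every triple satisfies a <= b <= c, so equivalent triangles are never revisited, exactly as JosAH suggests above.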
Page:The Solar System - Six Lectures - Lowell.djvu/25

Which of the two orbits a body is pursuing may be determined either by actually finding the body's path or by finding the distance of the body from the Sun and its speed at the moment. For an interesting equation connects the speed with the distance, giving the major axis of the orbit, upon which alone the class of curve depends. This equation is $v^{2}=\mu\left(\frac{2}{r}\mp\frac{1}{a}\right)$, in which the $-$ sign betokens the ellipse, the $+$ sign the hyperbola. Suppose a body at $p$ moving along the curve whose tangent is $pt$ with acceleration $f$ always directed to $s$. Then $\dot v$, the resolved part of the acceleration along the tangent, is $f \cos \psi$.
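The test the passage describes can be sketched numerically: solving the equation above for $1/a$ gives $1/a = 2/r - v^2/\mu$, and its sign picks out the conic. A small illustration (symbols follow the text; class name and sample values are invented):

```java
// Sketch of the orbit test described above: from the vis-viva relation
// v^2 = mu * (2/r -/+ 1/a), the sign of 1/a = 2/r - v^2/mu tells the conic:
// positive -> ellipse (the "-" case), negative -> hyperbola (the "+" case),
// zero -> parabola. Names and units here are illustrative.
public class OrbitClass {
    public static String classify(double mu, double r, double v) {
        double invA = 2.0 / r - v * v / mu;   // 1/a solved from the vis-viva equation
        if (invA > 0) return "ellipse";
        if (invA < 0) return "hyperbola";
        return "parabola";
    }

    public static void main(String[] args) {
        double mu = 1.0;
        // circular speed at r = 1 is sqrt(mu/r) = 1: a bound orbit
        System.out.println(classify(mu, 1.0, 1.0));   // ellipse
        // escape speed is sqrt(2*mu/r); anything faster is unbound
        System.out.println(classify(mu, 1.0, 1.5));   // hyperbola
    }
}
```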
ALEX Lesson Plans

Subject: Mathematics (9 - 12)
Title: Trigonometric Art
Description: Students will investigate and analyze the effects of parameter changes on a trigonometric function using a graphing calculator. Students will be able to create their own trigonometric art using sine, cosine, and tangent functions.

Subject: Mathematics (9 - 12)
Title: Traveling Around The Unit Circle
Description: This lesson is intended to review the key ideas associated with trigonometric functions. The design of this lesson provides a collection of reproducible activities to be used in a classroom setting where students have varied backgrounds / experience with trigonometric functions.

Subject: Arts Education (7 - 12), or Mathematics (9 - 12)
Title: Graphing at all levels: It's a beautiful thing!
Description: This lesson addresses the societal issue of the arts being eliminated in many public schools by employing graphs (at any level) as an artistic medium. Review of all types of graphs is included through various interactive websites. This lesson plan was created as a result of the Girls Engaged in Math and Science, GEMS Project funded by the Malone Family Foundation.

Subject: Mathematics (9 - 12)
Title: "Woody Sine"
Description: The students will discover the shape of the sine function without creating a t-chart. The students will create a sine function using toothpicks. The students will then discover how the amplitude (height) and the period (number of complete waves) change when the basic graph created from the broken toothpicks is changed. This lesson plan was created as a result of the Girls Engaged in Math and Science, GEMS Project funded by the Malone Family Foundation.

Thinkfinity Lesson Plans

Subject: Mathematics, Science
Title: Do You Hear What I Hear?
Description: In this lesson, from Illuminations, students explore the dynamics of a sound wave. Students use an interactive Java applet to view the effects of changing the initial string displacement and the initial tension.
Thinkfinity Partner: Illuminations
Grade Span: 9, 10, 11, 12

Subject: Arts, Mathematics
Title: Hearing Music, Seeing Waves
Description: This reproducible pre-activity sheet, from an Illuminations lesson, presents summary questions about the mathematics of music, specifically focused on sine waves and the geometric sequences of notes that are an octave apart.
Thinkfinity Partner: Illuminations
Grade Span: 9, 10, 11, 12

Thinkfinity Learning Activities

Subject: Mathematics
Title: Trigonometric Graphing
Description: This student interactive from Illuminations allows students to graph trigonometric functions. Parameters that affect the amplitude, period, and phase shift of the function can be adjusted.
Thinkfinity Partner: Illuminations
Grade Span: 9, 10, 11, 12

Subject: Mathematics, Science
Title: Sound Wave
Description: This student interactive, from Illuminations, helps students understand the mathematical models used to represent sound. Students come to understand the origins of the terms pitch, tone, frequency, and intensity, as well as explore the dynamics of a sound wave.
Thinkfinity Partner: Illuminations
Grade Span: 9, 10, 11, 12
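The parameter effects these lessons explore can be checked numerically: for y = A sin(Bx), the amplitude is |A| and the period is 2π/B, so the function repeats after exactly 2π/B. A small sketch with invented sample values:

```java
// Numeric illustration of the parameter effects described in the lessons
// above: for y = A*sin(B*x) the amplitude is |A| and the period is 2*pi/B,
// so f(x + 2*pi/B) equals f(x). Names and sample values are illustrative.
public class TrigParams {
    static double f(double a, double b, double x) {
        return a * Math.sin(b * x);
    }

    public static void main(String[] args) {
        double a = 3.0, b = 2.0;                  // amplitude 3, period pi
        double period = 2.0 * Math.PI / b;
        double x = 0.4;
        // shifting the input by one full period leaves the value unchanged
        System.out.println(Math.abs(f(a, b, x) - f(a, b, x + period)) < 1e-9);
    }
}
```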
Math Forum Discussions
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.

Topic: Product formula for Hermite polynomials
Replies: 5   Last Post: Jan 24, 2013 3:50 PM

ksoileau
Re: Product formula for Hermite polynomials
Posted: Jan 20, 2013 3:01 PM
Posts: 83   From: Houston, TX   Registered: 3/9/08

On Saturday, January 19, 2013 3:38:03 PM UTC-6, ksoileau wrote:
> I'm looking for a formula which expresses the product of two Hermite polynomials as a linear combination of Hermite polynomials, i.e. $a_{m,n,i}$ verifying
> $$H_m(x)H_n(x)=\sum \limits_{i=0}^{m+n} a_{m,n,i} H_i(x)$$
> for all nonnegative $m,n$.
> If such a formula is known, I'd be most appreciative of a citation or link describing it.
> Thanks for any help!
> Kerry M. Soileau

I already did that and found no answer to my question at any of these links: If the answer was so easy to find using Google, why did you take the trouble to write a reply without providing a link? You have a rather eccentric concept of "helpfulness." Thanks anyway!

Date / Subject / Author
1/19/13   Product formula for Hermite polynomials   ksoileau
1/20/13   Re: Product formula for Hermite polynomials   Don Redmond
1/20/13   Re: Product formula for Hermite polynomials   ksoileau
1/20/13   Re: Product formula for Hermite polynomials   Don Redmond
1/20/13   Re: Product formula for Hermite polynomials   Roland Franzius
1/24/13   Re: Product formula for Hermite polynomials   AP
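For reference (this answer is not given in the thread itself), a classical solution to the question is the Hermite linearization formula, H_m(x) H_n(x) = sum over k from 0 to min(m,n) of C(m,k) C(n,k) 2^k k! H_{m+n-2k}(x), for the physicists' Hermite polynomials. A numerical spot-check, with illustrative names:

```java
// Numerical spot-check of the classical Hermite linearization formula
// (not from the thread): H_m(x) H_n(x) =
//   sum_{k=0}^{min(m,n)} C(m,k) C(n,k) 2^k k! H_{m+n-2k}(x)
// for the physicists' Hermite polynomials H_n.
public class HermiteProduct {
    // Three-term recurrence: H_0 = 1, H_1 = 2x, H_{k+1} = 2x H_k - 2k H_{k-1}.
    static double hermite(int n, double x) {
        double hPrev = 1.0, h = 2.0 * x;
        if (n == 0) return hPrev;
        for (int k = 1; k < n; k++) {
            double hNext = 2.0 * x * h - 2.0 * k * hPrev;
            hPrev = h;
            h = hNext;
        }
        return h;
    }

    static double binom(int n, int k) {
        double b = 1.0;
        for (int i = 1; i <= k; i++) b = b * (n - k + i) / i;
        return b;
    }

    // |H_m(x) H_n(x) - linearization sum|: should be ~0 if the formula holds.
    static double productIdentityError(int m, int n, double x) {
        double lhs = hermite(m, x) * hermite(n, x);
        double rhs = 0.0, factorial = 1.0, powerOf2 = 1.0;   // k! and 2^k for k = 0
        for (int k = 0; k <= Math.min(m, n); k++) {
            rhs += binom(m, k) * binom(n, k) * powerOf2 * factorial * hermite(m + n - 2 * k, x);
            factorial *= (k + 1);
            powerOf2 *= 2.0;
        }
        return Math.abs(lhs - rhs);
    }

    public static void main(String[] args) {
        System.out.println(productIdentityError(2, 3, 0.7) < 1e-9);   // true
    }
}
```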
First In Math® Glossary of Math Terms - Visitor Acute Angle: An angle with a measure less than 90º Addend: Any number that is being added. Adjacent angles: Two angles that share a common side and vertex, but don’t overlap. Algorithm: A step-by-step solution to a problem. Analog Time: Time displayed on a timepiece having hour and minute hands. Area: The measure, in square units, of the inside of a plane figure. Array: A rectangular arrangement of objects in equal rows or columns. Attribute: A property of an object. (Example: color or shape) back to top Bar Graph : A graph that uses bars to show data. Benchmark fractions: Commonly used fractions, often for estimation. back to top Capacity: The amount that something can hold. Chord: A line segment whose endpoints are on a circle. Circumference: The distance around a circle. Combination: A group of items. Placing these items in a different order does not create a new combination. Complementary Angles: Two angles whose measurements add to 90°. Compose (Composition): Put together, join. Composite number: A whole number having more than two factors. Cone: A solid figure that has a circular base and one vertex. Congruent: Having the same size and shape. EXAMPLE - Congruent angles have the same measure; congruent segments have the same length. Cube: (noun) A rectangular solid having six congruent, square faces. Cube: (verb) To raise a quantity or number to the third power. (x)(x)(x) Cylinder: A three-dimensional figure with two circular bases, which are parallel and congruent. back to top Decompose (Decomposition): Take apart, separate. Denominator: The bottom number of a fraction. Diameter: A line segment that has endpoints on a circle and passes through the center of the circle. Difference: The answer in a subtraction problem. EXAMPLE: 8 - 3 = 5; 5 is the difference. back to top Edge: The line segment where two faces of a solid figure meet. Equation: A statement that two mathematical expressions are equal. 
Equilateral triangle: A triangle with all three sides of equal length. The angles of an equilateral triangle are always 60° Endpoint: Either of two points marking the end of a line segment. Equivalent: Having the same value. Estimate: A close guess of the actual amount. Expanded form: Writing out a number to show the value of each digit. (Also called Expanded notation) Expanded notation: A way to write numbers that shows the value of each digit (EXAMPLE: 4372 = 4000+300+70+2). Exponent: Written as a small number to the right and above the base number. An exponent shows how many times a number is to be multiplied by itself. Expression: A variable, or any combination of numbers, variables, and symbols that represents a mathematical relationship (EXAMPLE: 24 x 2 + 5 or 4a – 9). Exterior Angles: The angle formed by any side of a polygon and the extension of its adjacent side. Even number: Any integer that can be divided evenly by 2. back to top Face: A plane figure that serves as one side of a solid figure. Fact family: A set of related addition and subtraction, or multiplication and division equations using the same numbers (EXAMPLE: 6+9=15, 15-9=6, 9+6=15, 15-6=9). Factor: A whole number that divides evenly into another whole number (EXAMPLE: 1, 3, 5, and 15 are factors of 15). Factor pairs: Any two numbers being multiplied together that give you a chosen number. (Example: Some factor pairs of 16 are: 8x2, 4x4, and 16x1) Function: A relation in which every input value has a unique output value. back to top Glide reflection: A combination of two transformations: a reflection over a line followed by a translation in the same direction as the line. Greatest common factor (GCF): The largest factor that 2 or more numbers have in common. back to top Heptagon/Septagon: A polygon with 7 sides. Each interior angle in a regular heptagon/septagon has a measure of 128.57°. Hexagon: A polygon with 6 sides. Each interior angle in a regular hexagon has a measure of 120°. 
Histogram: A bar graph in which the labels for the bars are numerical intervals. Hypotenuse: The longest side of a right triangle (which is also the side opposite the right angle). back to top Improper fraction: A fraction where the numerator (the top number) is greater than or equal to the denominator (the bottom number). Inequality: A mathematical sentence that contains a symbol that shows the terms on either side of the symbol are unequal (EXAMPLE: 3+4>6). Integer: Any of the natural numbers, the negatives of these numbers, or zero. A whole number (a number that is not a fraction). Interior Angles: An angle inside a polygon. Intersecting lines: Lines that cross. Interval: The distance between two points or events. Inverse operation: The operation that reverses the effect of another operation. (Example: Addition and subtraction are inverse operations) Irrational Number: A number that cannot be written as a simple fraction - its decimal goes on forever without repeating. It is called "irrational" because it cannot be written as a ratio (or fraction). Isosceles triangle: A triangle with two equal sides. back to top Least common denominator (LCD): The least common multiple of the denominators in two or more fractions. Least common multiple (LCM): The smallest number, other than zero, that is a common multiple of two or more numbers. Leg (of a right triangle): Either of the two sides that form the right angle in a right triangle. Line: A straight path extending in both directions with no endpoints. Line of symmetry: A line that divides a figure into two halves that are mirror images of each other. Line plot: A graph showing the frequency of data on a number line. Line segment: A part of a line with two endpoints. back to top Mass: A measure of how much matter is in an object. (Commonly measured as weight) Maximum: The largest value possible. Mean (average): The number found by dividing the sum of a set of numbers by the number of addends.
Median: The middle number in an ordered set of data, or the average of the two middle numbers when the set has two middle numbers. Minimum: The smallest value possible. Mixed number: A whole number and a fraction written as one number. Mode: The number(s) that occurs most often in a set of data. Multiples: The product of a given whole number and another whole number (EXAMPLE: multiples of 4 are 4, 8, 12, 16….). back to top Nonagon: A polygon with 9 sides. Each interior angle in a regular nonagon has a measure of 140°. Number line: A line with numbers placed in their correct position. Numerator: The top number of a fraction. Number sentence: An equation or inequality with numbers. back to top Obtuse angle: An angle with a measure more than 90º. Octagon: A polygon with 8 sides. Each interior angle in a regular octagon has a measure of 135°. Odd number: Any integer that CANNOT be divided evenly by 2. Ordered pair: A pair of numbers used to locate a point on a coordinate grid. The first number tells how far to move horizontally, and the second number tells how far to move vertically. back to top Parallel lines: Lines that never intersect and are always the same distance apart. Parallelogram: A quadrilateral whose opposite sides are parallel and congruent. Pentagon: A polygon with 5 sides. Each interior angle in a regular pentagon has a measure of 108 °. Perimeter: The distance around a figure. Permutation: The action of changing the arrangement, especially the linear order, of a set of items. Perpendicular lines: Two lines, segments or rays that intersect to form right angles. Pictograph: A graph that uses pictures to show and compare information. Plane: A flat surface that extends infinitely in all directions. Point: An exact location or position. It has no size, only position. Polygon: A closed plane shape made with three or more straight sides. Prime number: A whole number that has exactly two factors, 1 and itself. Probability: The chance an event will occur. 
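The Mean, Median, and Mode entries can be illustrated with a short computation (data set invented for illustration):

```java
import java.util.Arrays;

// Illustrates the Mean, Median, and Mode glossary entries on a small
// invented data set: mean = sum / count, median = middle value of the
// sorted list (or average of the two middle values), mode = most
// frequent value.
public class CenterMeasures {
    public static double mean(int[] data) {
        double sum = 0;
        for (int v : data) sum += v;
        return sum / data.length;
    }

    public static double median(int[] data) {
        int[] sorted = data.clone();
        Arrays.sort(sorted);
        int mid = sorted.length / 2;
        return (sorted.length % 2 == 1)
                ? sorted[mid]
                : (sorted[mid - 1] + sorted[mid]) / 2.0;   // average the two middle values
    }

    public static int mode(int[] data) {
        int best = data[0], bestCount = 0;
        for (int candidate : data) {
            int count = 0;
            for (int v : data) if (v == candidate) count++;
            if (count > bestCount) { bestCount = count; best = candidate; }
        }
        return best;
    }

    public static void main(String[] args) {
        int[] data = {3, 7, 7, 2, 6};
        System.out.println(mean(data));    // 5.0
        System.out.println(median(data));  // 6.0
        System.out.println(mode(data));    // 7
    }
}
```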
Product: The answer to a multiplication problem. EXAMPLE: 6 x 2 = 12; the product is 12. Properties of Operations: An operation works to change numbers. There are six operations: addition, subtraction, multiplication, division, raising to powers, and taking roots. Protractor: An instrument used in measuring or drawing angles. Pyramid: A solid figure with a polygon base and triangular sides that meet at a single point (vertex). back to top Quadrants: The four regions of a coordinate plane that are separated by the axes. Quadrilateral: A polygon with 4 sides. Quotient: The answer that is a result of dividing one number by another. back to top Radius: A line segment that has one endpoint on a circle and the other endpoint at the center of the circle. Range: The difference between the greatest and least numbers in a set of data. Rate: A ratio that compares two quantities having different units (EXAMPLE: 95 miles in 2 hours). Ratio: A comparison of two numbers using division. Ray: A part of a line that has one endpoint and continues without end in one direction. Rectangular prism: A solid figure in which all six faces are rectangles. Reflection (flip): A transformation that produces the mirror image of a figure. Reflex Angles: An angle greater than 180°. Regular polygon: A polygon that has all sides congruent and all angles congruent. Remainder: The amount left over when a number cannot be divided equally. Repeating decimal: A decimal that has a repeating sequence of numbers after the decimal point. Rhombus: A parallelogram with four equal sides. Right angle: An angle that measures exactly 90º. Right triangle: A triangle that has a 90º angle. Rotation (turn): A movement of a figure that turns that figure around a fixed point. Round: Changing a number to its nearest group of ten, hundred, thousand, etc. (e.g. 
631 rounded to the nearest ten = 630; 631 rounded to its nearest hundred = 600; 631 rounded to its nearest thousand = 1,000) back to top Scalene triangle: A triangle in which no sides are equal. Similar polygons: Polygons that have the same shape, but not necessarily the same size. Corresponding sides of similar polygons are proportional. Sphere: A solid figure that has all points the same distance from the center. Square: (noun) A 4-sided polygon where all sides have equal length and every angle is a right angle (90°). Square: (verb) To raise a number or quantity to the second power. (The number is multiplied by itself.) (x)(x) Stem and leaf plot: A way to show a list of numbers in a graph format, where each data value is split into a STEM and a LEAF (usually the digit in the ones place). Example: the numbers 22, 24 and 28 have a stem of "2" and "2", "4", "8" are the leaves. Straight angle: An angle with a measure of 180º. Sum: The answer to an addition problem. EXAMPLE: 12 + 7 = 19; the sum is 19. Supplementary Angles: Two angles whose measurements add to 180°. Symmetry: When a figure can be divided by one straight line resulting in two halves that are mirror images. back to top Tally chart: A table that uses tally marks to record data. Terminating decimal: A decimal that contains a finite number of digits. Tessellate: To combine plane figures so that they cover an area without any gaps or overlaps. Transformation: The moving of a figure by a translation (slide), rotation (turn) or reflection (flip). Translation (slide): A movement of a figure to a new position without turning or flipping it. Trapezoid: A quadrilateral with exactly one pair of parallel sides. back to top Unit price: The price of a single item or amount (EXAMPLE: $3.50 per pound). Unit rate: A rate with the second term being one unit (EXAMPLE: 50 mi/gal, 4.5 km/sec). back to top Variable: A letter or symbol that stands for a number or numbers.
Venn diagram: A diagram that shows relationships among sets of objects. Vertex: A point where lines, rays, sides of a polygon or edges of a polyhedron meet (corner). Vertical Angles: Angles opposite each other when two lines cross. Volume (capacity): The amount of space (in cubic units) that a solid figure can hold. back to top Whole number: Any of the numbers 0, 1, 2, 3, 4, 5, … (and so on). back to top X-axis: The horizontal number line on a coordinate plane. back to top Y-axis: The vertical number line on a coordinate plane. back to top
Zeros of principal L-functions and random matrix theory
Results 1 - 10 of 97

, 1999
Cited by 105 (2 self)
Hilbert and Polya suggested that there might be a natural spectral interpretation of the zeroes of the Riemann Zeta function. While at the time there was little evidence for this, today the evidence is quite convincing. Firstly, there are the "function field" analogues, that is zeta functions of curves over finite fields and their generalizations. For these a spectral interpretation for their zeroes exists in terms of eigenvalues of Frobenius on cohomology. Secondly, the developments, both theoretical and numerical, on the local spacing distributions between the high zeroes of the zeta function and its generalizations give striking evidence for such a spectral connection. Moreover, the low-lying zeroes of various families of zeta functions follow laws for the eigenvalue distributions of members of the classical groups. In this paper we review these developments. In order to present the material fluently, we do not proceed in chronological order of discovery. Also, in concentrating entirely on the subject matter of the title, we are ignoring the standard body of important work that has been done on the zeta function and L-functions.

, 2000
Cited by 85 (15 self)
We study the characteristic polynomials Z(U, θ) of matrices U in the Circular Unitary Ensemble (CUE) of Random Matrix Theory. Exact expressions for any matrix size N are derived for the moments of |Z| and Z/Z*, and from these we obtain the asymptotics of the value distributions and cumulants of the real and imaginary parts of log Z as N → ∞. In the …

Cited by 58 (13 self)
Contents: 1. Introduction 1; 2. Notations and Preliminaries 5; 3. Construction of ⊠ : A(GL(2)) × A(GL(2)) → A(GL(4)) 8; 3.1. Relevant objects and the strategy 9; 3.2. Weak to strong lifting, and the cuspidality criterion 13; 3.3. Triple product L-functions: local factors and holomorphy 15; 3.4. Boundedness in vertical strips 18; 3.5. Modularity in the good case 30; 3.6. A descent criterion 32; 3.7. Modularity in the general case 35; 4. Applications 37; 4.1. A multiplicity one theorem for SL(2) 37; 4.2. Some new functional equations 40; 4.3. Root numbers and representations of orthogonal type 42; 4.4. Triple product L-functions revisited 44; 4.5. The Tate conjecture for 4-fold products of modular curves 47; Bibliography 52. 1. Introduction: Let f, g be primitive cusp forms, holomorphic or otherwise, on …

- SIAM Rev, 1999
Cited by 42 (5 self)
Comparison between formulae for the counting functions of the heights t_n of the Riemann zeros and of semiclassical quantum eigenvalues E_n suggests that the t_n are eigenvalues of an (unknown) hermitean operator H, obtained by quantizing a classical dynamical system with hamiltonian H_cl. Many features of H_cl are provided by the analogy; for example, the "Riemann dynamics" should be chaotic and have periodic orbits whose periods are multiples of logarithms of prime numbers. Statistics of the t_n have a similar structure to those of the semiclassical E_n; in particular, they display random-matrix universality at short range, and nonuniversal behaviour over longer ranges. Very refined features of the statistics of the t_n can be computed accurately from formulae with quantum analogues. The Riemann-Siegel formula for the zeta function is described in detail. Its interpretation as a relation between long and short periodic orbits gives further insights into the quantum spectral fluctuations. We speculate that the Riemann dynamics is related to the trajectories generated by the classical hamiltonian H_cl = XP. Key words: spectral asymptotics, number theory. AMS subject classifications: 11M26, 11M06, 35P20, 35Q40, 41A60, 81Q10, 81Q50. PII: S0036144598347497.

, 2000
Cited by 34 (7 self)
Random matrix theory (RMT) is used to model the asymptotics of the discrete moments of the derivative of the Riemann zeta function, ζ′(s), evaluated at the complex zeros 1/2 + iγ_n, using the methods introduced by Keating and Snaith in [14]. We also discuss the probability distribution of ln |ζ′(1/2 + iγ_n)|, proving the central limit theorem for the corresponding random matrix distribution and analysing its large deviations.

, 1998
Cited by 33 (7 self)
By looking at the average behavior (n-level density) of the low lying zeros of certain families of L-functions, we find evidence, as predicted by function field analogs, in favor of a spectral interpretation of the non-trivial zeros in terms of the classical compact groups. This is further supported by numerical experiments for which an efficient algorithm to compute L-functions was developed and implemented. Acknowledgements: When Mike Rubinstein woke up one morning he was shocked to discover that he was writing the acknowledgements to his thesis. After two screenplays, a 40000 word manifesto, and many fruitless attempts at making sushi, something resembling a detailed academic work has emerged for which he has people to thank. Peter Sarnak - from Chebyshev's Bias to USp(1). For being a terrific advisor and teacher. For choosing problems suited to my talents and involving me in this great project to understand the zeros of L-functions. Zeev Rudnick and Andrew Odlyzko for many disc…

- COMMUN. MATH. PHYS, 2003
Cited by 32 (17 self)
We calculate the autocorrelation functions (or shifted moments) of the characteristic polynomials of matrices drawn uniformly with respect to Haar measure from the groups U(N), O(2N) and USp(2N). In each case the result can be expressed in three equivalent forms: as a determinant sum (and hence in terms of symmetric polynomials), as a combinatorial sum, and as a multiple contour integral. These formulae are analogous to those previously obtained for the Gaussian ensembles of Random Matrix Theory, but in this case are identities for any size of matrix, rather than large-matrix asymptotic approximations. They also mirror exactly the autocorrelation formulae conjectured to hold for L-functions in a companion paper. This then provides further evidence in support of the connection between Random Matrix Theory and the theory of L-functions.

- Duke Math. J, 2001
Cited by 31 (0 self)
By looking at the average behavior (n-level density) of the low-lying zeros of certain families of L-functions, we find evidence, as predicted by function field analogs, in favor of a spectral interpretation of the nontrivial zeros in terms of the classical compact groups.

, 2006
Cited by 21 (0 self)
Abstract. All physical systems in equilibrium obey the laws of thermodynamics. In other words, whatever the precise nature of the interaction between the atoms and molecules at the microscopic level, at the macroscopic level, physical systems exhibit universal behavior in the sense that they are all …
In other words, whatever the precise nature of the interaction between the atoms and molecules at the microscopic level, at the macroscopic level, physical systems exhibit universal behavior in the sense that they are all governed by the same laws and formulae of thermodynamics. In this paper we describe some recent history of universality ideas in physics starting with Wigner’s model for the scattering of neutrons off large nuclei and show how these ideas have led mathematicians to investigate universal behavior for a variety of mathematical systems. This is true not only for systems which have a physical origin, but also for systems which arise in a purely mathematical context such as the Riemann hypothesis, and a version of the card game solitaire called patience sorting. 1. - J. PHYS A MATH GEN , 2003 "... In recent years there has been a growing interest in connections between the statistical properties of number theoretical L-functions and random matrix theory. We review the history of these connections, some of the major achievements and a number of applications. ..." Cited by 19 (7 self) Add to MetaCart In recent years there has been a growing interest in connections between the statistical properties of number theoretical L-functions and random matrix theory. We review the history of these connections, some of the major achievements and a number of applications.
Algorithms Directory Lecture Notes

Data Structures and Algorithms (John Morris): Complete data structures and algorithms course, taught at the University of Western Australia. The course teaches the following topics: Objects and ADTs, Constructors and destructors, Data Structures, Methods, Pre- and post-conditions, C conventions, Error Handling, Arrays, Lists, Stacks, Stack Frames, Recursion, Sequential and Binary Searches, Trees, Complexity, Queues, Priority Queues, Heaps, Bubble Sort, Heap Sort, Quick Sort, Bin Sort, Radix Sort, Searching Revisited, Red-Black trees, AVL trees, General n-ary trees, Hash Tables, Dynamic Algorithms, Fibonacci Numbers, Binomial Coefficients, Optimal Binary Search Trees, Matrix Chain Multiplication, Longest Common Subsequence, Optimal Triangulation, Graphs, Minimum Spanning Tree, Dijkstra's Algorithm, Huffman Encoding, FFT, Hard or Intractable Problems, Eulerian or Hamiltonian Paths, Travelling Salesman's Problem. (No Frames)

Sorting and Searching Algorithms (Thomas Niemann): A small guide on the following topics: Arrays, Linked Lists, Timing Estimates, Insertion sort, Shell sort, Quicksort, Hash Tables, Binary Search Trees, Red-Black Trees, Skip Lists, External Sorts, B-Trees. (No Frames)

AVL Trees (Brad Appleton): A tutorial on AVL trees. The tutorial is part of a freely available public domain AVL tree library written in C++. The full C++ source code distribution may be found in the tutorial. (No Frames)

Animated Algorithms (Woi Ang, Chein Wei Tan, Mervyn): Collection of animated algorithms covering: Insertion Sort, QuickSort, Bin Sort, Radix Sort, Priority Queue Sorting, Hash Tables Searching, Optimal Binary Search Tree, Huffman Encoding, Dijkstra's Shortest Path, Minimum Spanning Tree (MST).

Algorithms CMSC 251 (Dave Mount): A book on the following topics: Course Introduction; Analyzing Algorithms: the 2-D Maxima Problem; Summations and Analyzing Programs with Loops; the 2-D Maxima revisited and Asymptotics; Asymptotics; Divide and Conquer and MergeSort; Recurrences; Medians and Selection; Long Integer Multiplication; Heaps and HeapSort; HeapSort Analysis and Partitioning; QuickSort; Lower Bounds for Sorting; Linear Time Sorting; Introduction to Graphs; Graphs; Graph Representation and BFS; All Pairs Shortest Paths; Floyd-Warshall Algorithm; Longest Common Subsequence; Chain Matrix Multiplication; NP-Completeness: General Introduction; NP-Completeness and Reductions.

Algorithms COMP 3001 (Antonios Symvonis): A collection of lecture notes covering the following topics: Administrivia; Introduction to algorithms (two parts); Graphs; Breadth First Search; Depth First Search; Topological Sorting; Divide-and-conquer (Intro); Matrix multiplication; The Greedy method: Activity selection problem; Huffman Codes; Single-source shortest path problems; Minimum Spanning Trees; Dynamic Programming: Matrix chain multiplication; Longest Common Subsequences; All-Pairs shortest paths; On lower bounds; Sorting in Linear time; Acyclicity testing; Strongly connected components; MST: Kruskal's Algorithm; Single source shortest paths: Bellman-Ford algorithm; Shortest paths in DAGs; Maximum flow: the Ford-Fulkerson method; Cuts on Network flows; Planar graphs; Colouring of Planar graphs; Approximation Algorithms (two parts); Heuristics; Pseudo-polynomial algorithms.

Data Structures CMSC 420 (Dave Mount): A document covering many data structures topics. Great for most university courses. The document has 117 pages.

Design and Analysis of Computer Algorithms CMSC 451 (Dave Mount): A book on the following subjects: Asymptotics and Summations; Divide and Conquer and Recurrences; Sorting; Dynamic Programming; Longest Common Subsequence; Chain Matrix Multiplication; Memoization and Triangulation; Greedy Algorithms; Huffman Coding; Activity Selection and Fractional Knapsack; Definitions and Representations; Breadth-First and Depth-First Search; Topological Sort and Strong Components; Minimum Spanning Trees and Kruskal's Algorithm; Prim's and Baruvka's Algorithms for MSTs; Dijkstra's Shortest Path Algorithms; Bellman-Ford and Floyd-Warshall Shortest Paths; NP-Completeness: Languages and NP; Reductions; Cook's Theorem and Other Reductions; Clique, Vertex Cover and Dominating Set; Hamiltonian Path; Approximation Algorithms: VC and TSP; Set Cover and Bin Packing; the k-center problem; Recurrences and Generating Functions; Articulation Points and Biconnected Components; Network Flows and Matching; The 0-1 Knapsack problem; Subset Sum and Approximation.

Graph Theory (Prof.): A book covering many graph theory topics.

Advanced Algorithms (Johan Hastad): A document on the following topics: notation and basics; primality testing; factoring integers; discrete logarithms; quantum computation; factoring polynomials in finite fields; factoring polynomials over integers; Lovasz's lattice basis reduction algorithm; multiplication of large integers; matrix multiplication; maximum flow; bipartite matching.

Complexity Theory (Johan Hastad): A document on the following topics: recursive functions; efficient computation; hierarchy theorems; the complexity classes L, P and PSPACE; nondeterministic computation; relations among complexity classes; complete problems; constructing more complexity classes; probabilistic computation; pseudorandom number generators; parallel computation; relativized computation; interactive proofs.

Linear Programming (Johan Hastad): A short document on linear programming.
device to measure the acceleration in three axes of the borehole imager, including the acceleration of gravity, which is used to find up-down of the tool from the ratio of x and y acceleration. The acceleration of the tool along the wellbore is used to calculate the exact z-position of measurement at a specific time. Usually of the piezoelectric type.
acoustic imager: a spiral-type borehole imager based on ultrasound reflectance of the wellbore surface. Commercial acronyms CBIL, UBI, CAST.
the statistically best fitting line of intersection of planes thought to be of common origin, such as cross-lamination of varying paleodip belonging to the same set, bedding variably rotated by cylindrical folding, or intersection of two conjugate fracture sets.
angle in a plane tangent to the earth, with reference to the projection on this plane of the direction to either the geographic or magnetic pole, or UTM north (grid north).
borehole image (or image, or BHI): an oriented 2D image of the interior surface of a borehole or a zone just behind it. The image may be constructed (if necessary by interpolation) from any physical quantity which can be measured with the necessary resolution and which has sufficient dynamic range.
borehole imager: a logging tool designed for acquiring measurements which can be used to compose a borehole image. There are two basic types: spiral imagers, acquiring data while the sensor is spinning around the tool axis while moving along the borehole, and linear imagers, where sensor arrays glide along the
compressive failure ("crumbling") of rock along the wellbore axis in the directions of maximum tangential stress. Equivalent to rock burst in tunnels.
diameter of borehole calculated from the extension of sensor arms on a logging tool. Diameters in several relative bearings may be calculated.
an axisymmetric fabric, for example 1) caused by a polar force such as gravity, arising from the parallel arrangement of platy minerals (lamination), in the absence of any horizontal traction; 2) caused by an axial force such as stress, forcing platy mineral grains to become parallel (foliation). A perfect cluster has
within a window of measured depth, the standard deviation of the distances between adjacent events, f.x. fracture intersections.
a function of 2 variables, for example the shift between windows of curves 1 and 3 and the shift between curves 2 and 4 in a 4-arm dipmeter. The function is the sum of the correlation coefficients of the 6 pairs of curves. Each point in the correlogram is an orientation in the tool coordinate system. The global maximum is the best fitting dip (2) for the window.
depth increment: the distance in the z direction between each sensor measurement. A characteristic of a logging tool.
depth of investigation: the depth behind the borehole wall where the rock outside it contributes the same as the rock inside it to the value read by the imaging sensor. To investigate the impact of different DOI values on dip data, Eriksfiord offers its Java-based DOI Sensitivity Analyzer tool: Eriksfiord DOI Sensitivity Analyzer. Note: Java 6 or higher is required on your computer to run this tool.
rotating dips back to their orientation relative to their continental plate at the time of deposition, by subtracting the PHZ. For paleoclimatology, a correction for plate rotation may be applied.
angle between vertical and a line tangent to the borehole
dip 1: the (smallest) angle between a horizontal plane and the tangent plane of a geological surface, measured in the plane orthogonal to their intersection line
dip 2: informal expression for a tangent plane to a geological surface, defined on an image by its dip (1) and dip azimuth
dip azimuth: the azimuth of the vector in the dip (2) orthogonal to the
precursor to the linear borehole imager, acquiring curves of electric conductivity by electrodes mounted on pads arranged in 3, 4 or 6 different directions (commercial acronyms HDT, SHDT, Diplog...)
eigenvalue: for a matrix A, the solutions λ to det(A - λI) = 0, where I is the identity matrix. An orientation matrix of dips has 3 eigenvalues, whereof the greatest indicates the tightness of a dip cluster.
eigenvector: for a matrix A, substituting an eigenvalue λ, the solutions x to Ax = λx. The eigenvector of the greatest eigenvalue is an estimator of the mean dip.
equatorial plane 1: the x-y plane of a logging tool containing the reference sensor
equatorial plane 2: the horizontal plane through the centre of the orientation sphere, the plane of the stereographic projection.
the pattern of poles to planes in a stereographic projection or an equal-area projection. The symmetry of the fabric is composed of the symmetries of all causes (deformational or depositional mechanisms) having acted on the rock.
facies (image): the appearance of a sedimentary facies on a borehole image. Obtained from certain diagnostic features. For example, the truncation of cross-laminae by the following cross-set proves that the image represents cross-bedded facies.
FLF (fault-like feature): a plane identified on a borehole image which may be the reflection of a fault plane intersected by the borehole. May be elevated to "fault plane" or "fault zone" if associated beyond doubt with evidence of shear motion, e.g. drag folding of adjacent lamination.
the space between two fracture surfaces once adjacent, separated by a fluid (tar, gas, oil, carbon dioxide...) or a solid, like clay smear, rock flour, glass, or crystals (quartz, calcite, anhydrite, pyrite...) precipitated in situ.
fracture surface: surface within a solid generated by rupture, either shear, torsion or tension.
fracture statistics: linear or spherical statistics apply. Linear statistics describe the frequency and clustering of fractures observed along a wellbore, whereas spherical statistics describe the shape and symmetry of the distribution of poles to fracture planes on the directional sphere, e.g. as a function of the
galvanic imager: a borehole imager working on the principle of measuring the amount of current injected by a given voltage pulse. Commercial acronyms FMI, FMS, EMI, XRMI.
girdle: an axisymmetric, orthorhombic or monoclinic fabric, caused by for example a) a combination of gravity and horizontal traction during sedimentation, b) folding of a foliated or laminated rock, c) the transposition of lamination/foliation by shear fractures or d) extrusion. A perfect girdle has eigenvalues of (0, .5, .5).
senior Eriksfiord employee
gyroscopic survey: a downhole gyroscope measures the orientation of the borehole relative to the universe. By fitting a line through successive measurements, e.g. by splining, the welltrace in 3 dimensions is reconstructed (survey).
high side (image high side, HS): the point on an image which would be at if the well were rotated back to vertical about an axis which would preserve direction of . HS is equal to on a north-referenced image.
hole azimuth (HAZI): the azimuth of the projection of a tangent to the welltrace on the horizontal plane.
induced fracture: a tensile fracture in the sector of the wellbore which has tensile effective stress greater than tensile strength, and caused by the drilling (replacement of a cylinder of solid by a liquid).
some trace fossils like are sufficiently large and characteristic to be recognized on galvanic borehole images.
inversion (of stress): from the evidence of stress-induced instability on a borehole image and knowledge of the mechanical properties of the rock, determination by iteration of the stress tensor which caused this
LWD imager: spiral-type imager based on electrical resistivity (RAB, ...) or density (ADN, ...) acquired while drilling. Data are either stored downhole or transmitted in a reduced form to surface by mudpulsing.
magnetometer: device used to measure the magnetic azimuth of the imaging tool. Usually a triaxial fluxgate type is used. Magnetometer readings are verified by comparison with the gyro survey.
magnetic declination: the azimuth of the magnetic vector
magnetic inclination: the angle between a tangent surface to the earth and the geomagnetic vector
measured depth (MD): the length of cable or drillstring between the reference point of a tool and the drillfloor.
normalisation 1: manipulating a timeseries so it gets a Gaussian distribution over the interval in question.
normalization 2: manipulating a piece of image so pixel values get a flat distribution. The piece may be sliding, leading to a "dynamic" normalization, or fixed, leading to a "static" normalization.
north (image north): the point which would be geographic or magnetic north on a borehole image if the well were rotated back to vertical about an axis which would preserve the direction of hole azimuth
oil base mud: drilling fluid which is an emulsion of water in oil. Electrically resistive, as the water is the discontinuous phase. Low viscosity and low density are an advantage to acoustic imagers. Chemical interaction with the host rock is limited compared to water base mud.
orientation matrix: for a set of directions, the matrix of the sums of the cross-products of direction cosines.
dip and azimuth of a plane, relative to the paleohorizontal
paleohorizontal (PHZ): the orientation of a geological surface considered to have been horizontal at the time of its deposition.
the stress state before deformation took place, e.g. the stress required to initiate a fold-and-thrust belt, annihilated or modified by the folding and thrusting.
viewing the BHI as a matrix, the elements of that matrix. May be originally measured values or a transform of them (e.g. by normalization 2)
relative bearing: angle from the intersection of a vertical plane with an equatorial (x-y) plane of a logging instrument, in the equatorial plane, to a reference point such as the extension of the reference sensor parallel to the z-axis to the plane
resistivity imager: a borehole imager which measures the potential drop over a short interval within a larger scale potential gradient. OBMI
equivalent to , logging tool, but referring more to the sensor equipment than the tool body.
power displayed as a function of frequency. Used f.x. on dipmeter curves to analyze the fineness of lamination or periodicity of bedding. Can be calculated quickly for 2^n samples by the Cooley-Tukey "fast Fourier transform" after tapering both ends.
a projection of normal vectors to dips (2) onto the equatorial plane of the directional sphere. Preserves angles. Also called Wulff diagram.
paper with a printed net, used for counting angles between poles and for rotation of poles in stereograms. Now superseded by computer graphics programs such as Interpret-2 in Recall.
sticking (tool sticking): the tool decelerates or stops completely its alonghole motion due to excessive friction against the wellbore, either forever, or until cable tension builds up to overcome it.
speed correction: calculation of the exact location of each measurement, permitting an image to be constructed. Involves integration of the z-axis accelerometer.
the relative incremental deformation of a solid due to stress, described by a second rank tensor with 6 independent components.
azimuth of the intersection between the horizontal and the dip-2.
oriented pressure in a solid, described by a second rank tensor with 6 independent components.
the appearance of the internal structure of a rock on a borehole image.
tool measure point offset: for a 'train' of logging tools with sensors at multiple z-distances, the end point of the tool combination or any other depth may be chosen as the reference point. The reference point is used to synchronize simultaneous measurements, or inversely, to correlate depths observed at different times by sensors placed at different levels (z-distances).
tool (logging tool): a physical measurement device, usually built into a tubular structure equipped with sensor arms, designed to acquire data while passing through a borehole, propelled either by gravity, a cable, a tractor or a drillstring.
tool coordinate system: the z-axis is the centerline of a cylindrical tool, pointing in the direction of decreasing measured depth; the x-axis is perpendicular to the z-axis and penetrates a reference point such as a reference sensor; the y-axis is perpendicular to both. x and y have their origin at the intersection with the z-axis; y increases to the right if looking towards the origin from the positive x direction.
true vertical depth (TVD): the vertical distance between a subsurface event and a surface reference, f.x. drill floor or mean sea level.
a computer system, developed and owned by Eriksfiord, for analyzing stress in wellbores, based on analytic solutions and linear elastic mechanics.
water base mud: drilling fluid which is a solution of salts and suspension of solids in water. High salinity is an advantage for galvanic imagers. Low density and viscosity are an advantage for acoustic imagers.
a cable which propels a logging tool along the wellbore, provides electric power to the tool, and conveys data from the tool to the surface computer during logging.
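The orientation matrix, eigenvalue and eigenvector entries above can be illustrated numerically. The sketch below is illustrative only: the pole vectors are made-up sample data, and power iteration stands in for a full eigensolver; it shows how the largest eigenvalue of the orientation matrix measures the tightness of a dip cluster and how its eigenvector estimates the mean dip.

```python
import math

# made-up direction cosines (unit pole vectors) of three nearly parallel dips
poles = [
    (0.10, 0.05, 0.99),
    (0.05, 0.10, 0.99),
    (0.00, 0.00, 1.00),
]

# orientation matrix T: sums of the cross-products of direction cosines
T = [[sum(p[i] * p[j] for p in poles) for j in range(3)] for i in range(3)]

# power iteration converges to the eigenvector of the greatest eigenvalue,
# which is an estimator of the mean dip direction
v = [1.0, 1.0, 1.0]
for _ in range(100):
    w = [sum(T[i][j] * v[j] for j in range(3)) for i in range(3)]
    norm = math.sqrt(sum(c * c for c in w))
    v = [c / norm for c in w]

# Rayleigh quotient gives the greatest eigenvalue; for a tight cluster of
# n unit poles it approaches n (here close to 3)
eigval = sum(v[i] * sum(T[i][j] * v[j] for j in range(3)) for i in range(3))
```

With well-spread poles the eigenvalues would instead approach the girdle signature (0, .5, .5) described above.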
How to Determine the Measure of an Angle whose Vertex Is Inside a Circle

An angle that intersects a circle can have its vertex inside, on, or outside the circle. This article covers angles that have their vertex inside a circle, so-called chord-chord angles. The measure of a chord-chord angle is one-half the sum of the measures of the arcs intercepted by the angle and its vertical angle. For example, check out the above figure, which shows you chord-chord angle SVT. You find the measure of the angle like this:

Look at the following figure:

Here's a problem to show how the formula plays out: To use the formula to find angle 1, you need the measures of arcs MJ and KL. You know the ratio of all four arcs is 1 : 3 : 4 : 2, so you can set their measures equal to 1x, 3x, 4x, and 2x. The four arcs make up an entire circle, so they must add up to 360°. Thus,

1x + 3x + 4x + 2x = 360
10x = 360
x = 36

Now use the formula:

That's a wrap.
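The arithmetic above can be checked with a short script. Note that which two arcs angle 1 actually intercepts depends on the figure, which is not reproduced here, so the pairing in the last line (the 1x and 4x arcs) is purely illustrative, and the function name is our own:

```python
def chord_chord_angle(arc1, arc2):
    """Measure of a chord-chord angle: half the sum of the two arcs
    intercepted by the angle and its vertical angle (in degrees)."""
    return (arc1 + arc2) / 2

# Solve the ratio problem: arcs in ratio 1:3:4:2 fill the whole circle
ratio = [1, 3, 4, 2]
x = 360 / sum(ratio)            # 10x = 360, so x = 36
arcs = [r * x for r in ratio]   # [36.0, 108.0, 144.0, 72.0]

print(x)                                     # 36.0
print(chord_chord_angle(arcs[0], arcs[2]))   # e.g. the 1x and 4x arcs: 90.0
```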
Formula for a function

August 15th 2009, 02:24 PM #1
"Find a formula that describes the following function, which is composed of straight lines and a semicircle"
So far all I know is the equation has (x-1)(x+1) and intersects at 1 and -1, but I don't get how the circle and straight lines kick in. Do I have to use some sort of hybrid between a straight-line equation and the circle equation, (x - h)^2 + (y - k)^2 = r^2?

August 15th 2009, 02:31 PM #2
It is a hybrid function. The equations of the two lines cannot be uniquely found from the given information. The equation of the circle can be found since you know the radius and the coordinates of the centre.

August 16th 2009, 08:33 AM #3
$\frac{1-sgn(x+1)}{2}(-x-1)+\frac{1-sgn(|x|-1)}{2}\sqrt{1-x^2}+\frac{1+sgn(x-1)}{2}(-x+1)$
Last edited by mr fantastic; September 18th 2009 at 09:01 AM. Reason: Restored original reply

August 16th 2009, 08:59 AM #4
$f(x) = \left\{ {\begin{array}{rl} {m\left( {\frac{{\left| x \right|}}{x} - x} \right),} & {\left| x \right| > 1\;\& \,m > 0} \\ {\sqrt {1 - x^2 } ,} & {\left| x \right| \leqslant 1} \\ \end{array} } \right.$
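The cases form in the last reply can be turned into a small numeric sketch. The slope m of the two lines is a free parameter (the thread notes the lines are not uniquely determined); here it is taken as 1, consistent with the m > 0 condition:

```python
import math

def f(x, m=1.0):
    """Hybrid function: straight lines for |x| > 1, upper unit
    semicircle (x^2 + y^2 = 1, y >= 0) for |x| <= 1."""
    if abs(x) > 1:
        # m * (|x|/x - x): a line of slope -m through (1, 0) or (-1, 0)
        return m * (abs(x) / x - x)
    return math.sqrt(1 - x * x)

print(f(0))    # top of the semicircle: 1.0
print(f(1))    # lines and semicircle meet at (1, 0): 0.0
print(f(3))    # right-hand line: 1 - 3 = -2.0
```

The sign trick |x|/x plays the same role as the sgn() weights in the other reply: it selects the intercept +1 or -1 depending on which side of the semicircle x lies.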
Standard operators as functions.

The operator module exports a set of functions implemented in C corresponding to the intrinsic operators of Python. For example, operator.add(x, y) is equivalent to the expression x+y. The function names are those used for special class methods; variants without leading and trailing "__" are also provided for convenience. The functions fall into categories that perform object comparisons, logical operations, mathematical operations, sequence operations, and abstract type tests.

The object comparison functions are useful for all objects, and are named after the rich comparison operators they support:

lt(a, b), le(a, b), eq(a, b), ne(a, b), ge(a, b), gt(a, b)
__lt__(a, b), __le__(a, b), __eq__(a, b), __ne__(a, b), __ge__(a, b), __gt__(a, b)
Perform ``rich comparisons'' between a and b. Specifically, lt(a, b) is equivalent to a < b, le(a, b) is equivalent to a <= b, eq(a, b) is equivalent to a == b, ne(a, b) is equivalent to a != b, gt(a, b) is equivalent to a > b and ge(a, b) is equivalent to a >= b. Note that unlike the built-in cmp(), these functions can return any value, which may or may not be interpretable as a Boolean value. See the Python Reference Manual for more information about rich comparisons. New in version 2.2.

The logical operations are also generally applicable to all objects, and support truth tests and Boolean operations:

not_(o), __not__(o)
Return the outcome of not o. (Note that there is no __not__() method for object instances; only the interpreter core defines this operation. The result is affected by the __nonzero__() and __len__() methods.)

truth(o)
Return 1 if o is true, and 0 otherwise.

The mathematical and bitwise operations are the most numerous:

abs(o), __abs__(o)
Return the absolute value of o.

add(a, b), __add__(a, b)
Return a + b, for a and b numbers.

and_(a, b), __and__(a, b)
Return the bitwise and of a and b.

div(a, b), __div__(a, b)
Return a / b when __future__.division is not in effect. This is also known as ``classic'' division.
floordiv(a, b), __floordiv__(a, b)
Return a // b. New in version 2.2.

inv(o), invert(o), __inv__(o), __invert__(o)
Return the bitwise inverse of the number o. This is equivalent to ~o. The names invert() and __invert__() were added in Python 2.0.

lshift(a, b), __lshift__(a, b)
Return a shifted left by b.

mod(a, b), __mod__(a, b)
Return a % b.

mul(a, b), __mul__(a, b)
Return a * b, for a and b numbers.

neg(o), __neg__(o)
Return o negated.

or_(a, b), __or__(a, b)
Return the bitwise or of a and b.

pos(o), __pos__(o)
Return o positive.

rshift(a, b), __rshift__(a, b)
Return a shifted right by b.

sub(a, b), __sub__(a, b)
Return a - b.

truediv(a, b), __truediv__(a, b)
Return a / b when __future__.division is in effect. This is also known as ``true'' division. New in version 2.2.

xor(a, b), __xor__(a, b)
Return the bitwise exclusive or of a and b.

Operations which work with sequences include:

concat(a, b), __concat__(a, b)
Return a + b for a and b sequences.

contains(a, b), __contains__(a, b)
Return the outcome of the test b in a. Note the reversed operands. The name __contains__() was added in Python 2.0.

countOf(a, b)
Return the number of occurrences of b in a.

delitem(a, b), __delitem__(a, b)
Remove the value of a at index b.

delslice(a, b, c), __delslice__(a, b, c)
Delete the slice of a from index b to index c-1.

getitem(a, b), __getitem__(a, b)
Return the value of a at index b.

getslice(a, b, c), __getslice__(a, b, c)
Return the slice of a from index b to index c-1.

indexOf(a, b)
Return the index of the first occurrence of b in a.

repeat(a, b), __repeat__(a, b)
Return a * b where a is a sequence and b is an integer.

sequenceIncludes(a, b)
Deprecated since release 2.0. Use contains() instead. Alias for contains().

setitem(a, b, c), __setitem__(a, b, c)
Set the value of a at index b to c.

setslice(a, b, c, v), __setslice__(a, b, c, v)
Set the slice of a from index b to index c-1 to the sequence v.
Note: Be careful not to misinterpret the results of these functions; only isCallable() has any measure of reliability with instance objects. For example:

>>> class C:
...     pass
...
>>> import operator
>>> o = C()
>>> operator.isMappingType(o)

Deprecated since release 2.0. Use the callable() built-in function instead. Returns true if the object o can be called like a function, otherwise it returns false. True is returned for functions, bound and unbound methods, class objects, and instance objects which support the __call__() method.

Returns true if the object o supports the mapping interface. This is true for dictionaries and all instance objects. Warning: There is no reliable way to test if an instance supports the complete mapping protocol since the interface itself is ill-defined. This makes this test less useful than it otherwise might be.

Returns true if the object o represents a number. This is true for all numeric types implemented in C, and for all instance objects. Warning: There is no reliable way to test if an instance supports the complete numeric interface since the interface itself is ill-defined. This makes this test less useful than it otherwise might be.

Returns true if the object o supports the sequence protocol. This returns true for all objects which define sequence methods in C, and for all instance objects. Warning: There is no reliable way to test if an instance supports the complete sequence interface since the interface itself is ill-defined. This makes this test less useful than it otherwise might be.

Example: Build a dictionary that maps the ordinals from 0 to 255 to their character equivalents.

>>> import operator
>>> d = {}
>>> keys = range(256)
>>> vals = map(chr, keys)
>>> map(operator.setitem, [d]*len(keys), keys, vals)
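The dictionary-building example above leans on a Python 2 idiom: map() eagerly applies operator.setitem and builds a throwaway list of its None results. In Python 3, map() is lazy, so the same construction needs an explicit loop. A hedged modern equivalent:

```python
import operator

d = {}
keys = range(256)            # ordinals 0..255
vals = map(chr, keys)
# map() is lazy in Python 3, so drive the side effect with a loop
# rather than discarding a list of None results.
for k, v in zip(keys, vals):
    operator.setitem(d, k, v)

assert len(d) == 256
assert d[65] == "A"
```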
San Mateo, CA Algebra 2 Tutor

Find a San Mateo, CA Algebra 2 Tutor

...I am familiar working with QuickBooks since 2008. Initially, I had the hands-on experience and then have completed the course in College of San Mateo. I love the accounting software and do know how it helps in portraying the financial statements in a clear and concise manner and in no time.
20 Subjects: including algebra 2, English, geometry, accounting

...I have tutored engineering students back in college. I can tutor any level Math. I am currently tutoring Math, Chemistry and Physics to high school students in the evenings and on Saturdays.
18 Subjects: including algebra 2, physics, calculus, statistics

...I think I have more appreciation now for the beauty and power of calculus than I did when I first learned it! It's so rewarding to explain these concepts to students, and watch their excitement when things all start to make sense. Calculus is so much easier for students when they understand the...
10 Subjects: including algebra 2, calculus, geometry, algebra 1

...I taught pre-calculus sections as a TA at UC Santa Cruz for two years. I have taken classes in teaching literacy at Mills College. I have a degree in Sociology which required me to read thousands of pages from close to 100 different authors.
15 Subjects: including algebra 2, reading, calculus, writing

...Please be advised that the writing section of the SAT is being eliminated in 2016 and the top SAT score will revert back to 1600, the way it was before the writing test was included in 2006. The multiple choice sections require you to: recognize sentence errors, choose the best version of a pie...
32 Subjects: including algebra 2, reading, English, calculus
LANS Publications

"Spectral-Element Discontinuous Galerkin Lattice Boltzmann Simulation of Flow Past Two Cylinders in Tandem with an Exponential Time Integrator"
K. C. Uga, M. Min, T. Lee, and P. F. Fischer
Computers and Mathematics with Applications. Also Preprint ANL/MCS-P1829-0111
Preprint Version: [pdf]

In this paper, a spectral-element discontinuous Galerkin (SEDG) lattice Boltzmann discretization and an exponential time-marching scheme are used to study the flow field past two circular cylinders in tandem arrangement. The basic idea is to discretize the streaming step of the lattice Boltzmann equation by using the SEDG method to get a system of ordinary differential equations (ODEs) whose exact solutions are expressed by using a large matrix exponential. The approximate solution of the resulting ODEs is obtained from a projection method based on a Krylov subspace approximation. This approach allows us to approximate the matrix exponential of a very large and sparse matrix by using a matrix of much smaller dimension. The exponential time integration scheme is useful especially when computations are carried out at high Courant-Friedrichs-Lewy (CFL) numbers, where most explicit time-marching schemes are inaccurate. Simulations of flow were carried out for a circular cylinder at Re=20 and for two circular cylinders in tandem at Re=40 and a spacing of 2.5D, where D is the diameter of the cylinders. We compare our results with those from a fourth-order Runge-Kutta scheme that is restricted by the CFL number. In addition, important flow parameters such as the drag coefficients of the two cylinders and the wake length behind the rear cylinder were calculated by using the exponential time integration scheme. These results are compared with results from our simulation using the RK scheme and with existing benchmark results.
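The projection idea in the abstract — approximate exp(A)v through a small Krylov subspace instead of ever forming exp(A) — can be sketched in a few lines. The following is an illustrative toy, not the authors' implementation: dense pure-Python linear algebra, an Arnoldi iteration, and a plain Taylor series for the small matrix exponential.

```python
import math

def expm_times_vec(A, v, m, terms=30):
    """Approximate exp(A) @ v by projecting A onto an m-dimensional
    Krylov subspace (Arnoldi iteration), exponentiating the small
    Hessenberg matrix, and lifting the result back."""
    n = len(v)

    def matvec(M, x):
        return [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]

    def dot(x, y):
        return sum(a * b for a, b in zip(x, y))

    beta = math.sqrt(dot(v, v))
    V = [[vi / beta for vi in v]]               # orthonormal Krylov basis
    H = [[0.0] * m for _ in range(m)]           # small Hessenberg projection of A
    for j in range(m):
        w = matvec(A, V[j])
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i][j] = dot(w, V[i])
            w = [wk - H[i][j] * vk for wk, vk in zip(w, V[i])]
        h = math.sqrt(dot(w, w))
        if j + 1 < m:
            if h < 1e-12:                       # happy breakdown: subspace is invariant
                break
            H[j + 1][j] = h
            V.append([wk / h for wk in w])

    k = len(V)
    # exp(H) @ e1 by a plain Taylor series -- fine because H is tiny.
    y = [1.0] + [0.0] * (k - 1)
    term = y[:]
    for p in range(1, terms):
        term = [sum(H[i][j] * term[j] for j in range(k)) / p for i in range(k)]
        y = [yi + ti for yi, ti in zip(y, term)]
    # Lift back to the full space: beta * V @ y.
    return [beta * sum(V[i][r] * y[i] for i in range(k)) for r in range(n)]

# A = diag(1, 2), so exp(A) @ [1, 1] should come out as [e, e^2].
print(expm_times_vec([[1.0, 0.0], [0.0, 2.0]], [1.0, 1.0], m=2))
```

In practice (and in the paper's setting) A is large and sparse, m is far smaller than n, and only matrix-vector products with A are ever needed, which is the point of the method.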
I’m sure you’re familiar with the Walmart growth map. A basic visualization looks like this: I wanted to play with the data too, but from an Excel user perspective there isn’t much to add, if any. Unless you take a different path. Perhaps a business perspective. For instance, the maps above can’t answer a simple question like this: “in 1986, approximately how many people were living within a 10-mile radius of the next Walmart store?”

The answer seems straightforward: geo-code population, geo-code stores, and calculate the distance between the two for each store. Then make an Excel dashboard like this, without VBA (click to enlarge):

Not so fast. Let’s start with population. In the first map, each gray dot represents a county and a certain amount of population. But think about it for a minute. Imagine that there are only two buildings in the county. One is the store itself; the other one is a skyscraper where everyone lives. When you measure the distance between the store and the building, the error is negligible. But that’s not what really happens, is it? So, how do we define an acceptable point in space as the point where all the population in a county lives (we can’t afford to calculate distances for each single address)?

Take a look at this diagram; it represents a county with a city on the top right corner: Usually, when you want/need to use a point to represent a region, the geographical center (A) is chosen. This is not the ideal solution when you need to calculate distances. Let’s assume that there is a county with only 16 inhabitants, and they all live in a city. When Walmart moves in, it will open the store where people actually live (C). The catchment area in a 5-mile radius is represented in red. As you can see, everyone lives within that radius. The problem is, if you assume that everyone lives at the geographic center, then when you calculate the distance between A and the store C, the population will fall outside the 5-mile radius.
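The difference between a geographic center and a mean center of population is easy to make concrete. A minimal sketch, using a flat-earth population-weighted average — an approximation that is fine at county scale, though the Census Bureau's actual method accounts for the curvature of the Earth:

```python
def mean_center(points):
    """points: iterable of (lat, lon, population) tuples for the
    populated places in a county. Returns the population-weighted
    mean center as (lat, lon)."""
    total = sum(pop for _, _, pop in points)
    lat = sum(la * pop for la, _, pop in points) / total
    lon = sum(lo * pop for _, lo, pop in points) / total
    return lat, lon

# Hypothetical county: all 16 inhabitants in one town, nobody elsewhere --
# the mean center sits on the town, not on the county's geographic center.
print(mean_center([(30.0, -95.0, 16), (29.0, -96.0, 0)]))  # -> (30.0, -95.0)
```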
This is especially important in rural areas, where counties are much larger than in urban areas. Clearly, you need a different center, like the mean center of population (B). That would be much more accurate. Luckily, the US Census Bureau calculates the center of population for the US as a whole but also for each county (you can get the data from here). This can help improve accuracy, but it isn’t perfect. In the diagram above, if there were a city at the bottom left corner with the same population, the geographic and population centers would overlap and the calculations would be wrong for the county. Please also note that population centers are dynamic: each year they move a bit, depending on local population dynamics. The good news is that population centers are just the starting point; you can manually define a different center. Assuming that not all the stores were open in a single year, you may want yearly population estimates. You can download the data from the National Cancer Institute.

The Stores

If you have a list of stores then you have a list of addresses, and you must geo-code those addresses. It’s impossible to list all the available options you can choose from. Here are a few ideas: if you have a small list of stores and you can easily identify them from above, Google Earth may be an option. Batch geo-coding is also an option, using a site like this (inspect the results: there will be a few misplaced addresses). If you don’t have/need a complete address but you have a list of zip codes, you can use the Excel lookup functions to find them in this database.

Calculating distances

Now that you have your population and stores geo-coded, it’s time to calculate distances between all possible store/county pairs. In Excel, you can do this in VBA using nested loops.
If you want to avoid VBA you have to make a table like this one: So, in each column you have, for each store, latitude and longitude, opening year, store id code and distances from the store to each county in the US. In each row, you have the counties and their latitude/longitude (the population center). To calculate these distances you can use the following formula (for cell E9):

=ACOS(COS(RADIANS(90-$A9)) *COS(RADIANS(90-E$4)) +SIN(RADIANS(90-$A9)) *SIN(RADIANS(90-E$4)) *COS(RADIANS($B9-E$5))) *3958.756

This is something that you have to do only once; then you can remove the formulas. I actually prefer to use Access to make a table with the pairs of coordinates I may be interested in. Assuming that you have a County table (county id and coordinates) and a Store table (store id, coordinates and opening year), create a new query without joining them. This will create a table with all possible pairs.

Now, since some trigonometric functions are missing in Access, I suggest that you add two calculated fields, DistanceBase and Distance:

DistanceBase: Cos((90-[CountyLat])*0.017453293)*Cos((90-[StoreLat])*0.017453293)+Sin((90-[CountyLat])*0.017453293)*Sin((90-[StoreLat])*0.017453293)*Cos(([CountyLong]-[StoreLon])*0.017453293)

Distance: Round((Atn(-[DistanceBase]/Sqr(-[DistanceBase]*[DistanceBase]+1))+2*Atn(1))*3958.756,2)

For a dataset like the Walmart stores you’ll get a table with more than 10 million records, but you only need a small fraction, because the catchment area for each store is relatively small. Let’s assume that the largest radius we are interested in is 25 miles. Use this query to make a new table. This new table contains StoreId, opening year, countyId and distance. Filter for Distance<=25. As you can see, only 10721 records remain. That’s manageable, even in Excel. All the data is stored in Access. Now it’s time to add a connection for each relevant table between Access and Excel, in table or pivot table format.
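The spreadsheet formula above is the spherical law of cosines, with 3958.756 miles as the Earth's radius. For reference, a hedged Python translation of the same calculation (the clamp before acos is an addition for floating-point safety; the Excel version omits it):

```python
import math

EARTH_RADIUS_MILES = 3958.756  # radius used in the post's Excel/Access formulas

def distance_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance via the spherical law of cosines, mirroring
    the =ACOS(...) spreadsheet formula above."""
    cos_angle = (
        math.cos(math.radians(90 - lat1)) * math.cos(math.radians(90 - lat2))
        + math.sin(math.radians(90 - lat1)) * math.sin(math.radians(90 - lat2))
        * math.cos(math.radians(lon1 - lon2))
    )
    # Clamp before acos so rounding error can't push the argument past +/-1.
    return math.acos(max(-1.0, min(1.0, cos_angle))) * EARTH_RADIUS_MILES

# One degree of longitude along the equator is roughly 69.09 miles.
print(round(distance_miles(0.0, 0.0, 0.0, 1.0), 2))
```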
Tables for the Excel dashboard

Let me show you all the tables I’m using. The first one contains population data, county by age group (“3034”, for example, means age group 30-34). Page fields allow you to select year, large age groups (young, adult or old) and sex. The next one is the original Walmart dataset, with a few more fields (county and state codes, latitude, longitude): And this is the county dataset, with codes, geographic and population centers: Take a look at the dashboard above. The map is just a scatter plot with five series, two for counties and three for stores:

• County dot: if there is no store in the county and the county is beyond the maximum distance (defined on the left), then it is a gray dot;
• County dot: if there is no store but the county falls within the radius, then it is an orange dot;
• Store dot: if it is an old store (not opened this year), then it is a dark blue dot;
• Store dot: if it is a new store, then it is a large light blue dot;
• Store dot: if it is the store selected from the list below, then it is a red dot.

Every time the user selects a different year there must be a table that dynamically calculates how many stores fall within the radius. If one or more, a formula gets data from the population table; if not, it returns zero. This is calculated in the next table: The table also creates the gray and orange series (that’s the meaning of the #N/A above; there is nothing wrong with them). The next table evaluates each store's state (old, new or selected) and creates the series for each state: Remember the distances table we created in Access? Here it is: And the last table calculates data at state level for the remaining charts: There are two remaining sheets, a sheet to store data for the controls and another one to calculate the data for the population pyramid.
This was done in Excel 2010 because I wanted to control the population pivot table with slicers, but it can also be made with Excel 2007, with a small macro to perform the same task. In this case I’m using the Walmart dataset, but once this structure is in place you can easily replace it with your own list and add more data to the dashboard. Catchment area analysis is much more complex than this, but can an Excel dashboard like this be useful as a starting point? How would you improve it? Please share your comments. By the way: the answer to the above question is 48 279 951, 20% of total population.

[Registered users of the Demographic Dashboard courses can freely download these files from the Lookup Resources module.]

2 Comments

Wow Jorge! This is one extensive exercise! Thanks for sharing.

Hi Jorge
What a great and detailed article with really clever ideas. This has inspired me to improve the dashboard I’m making which plots out my future trip in Europe… I was struggling with a useful way to measure distance by air – so your distance formula will be very useful. I love the distance information you’ve added. Did you consider using the google directions API (https://developers.google.com/maps/documentation/directions/) to do this work? It could add more info about the actual road distance and travel duration. I suppose the limitations with this are the API limit of 2500 queries per day and the fact that the population centre is an estimate. However, it wouldn’t take long to develop a database with this information (running some VBA for five days) and it might produce interesting results. For example, there may be similarities in the distance by car between population centres and walmarts, or perhaps average duration of travel. Really interesting work – looking forward to the next installments.
Equivalent functors

Let $R$ be a commutative Noetherian ring and $M$ a finitely generated $R$-module. If $F: Mod \to Mod$ is a left exact functor and $R^iF(E)=0$ where $E$ is an injective module, and we assume that $F(-) \cong Hom(M,-)$, can we infer that the $i$-th right derived functors satisfy $R^iF(-)\cong Ext^i(M,-)$?

ac.commutative-algebra homological-algebra modules

You can define $Ext$ by taking derived functors of $Hom$ in either variable. If you are using the second variable, it's tautologically true (see Andreas Blass' answer). So I suspect you might be defining it via a projective resolution of $M$ (or via Yoneda or...). Then there is something to prove, but this can be found in many homological algebra texts. I'm sure someone can elaborate, but it would be good to clarify this. – Donu Arapura Sep 2 '11 at 17:17

1 Answer

Yes. For example, if you compute right derived functors by injective resolutions, then naturality of the isomorphism between $F$ and $\text{Hom}(M,-)$ will ensure that you have an isomorphism between the two complexes whose cohomology groups give you the two derived functors.
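The injective-resolution argument in the answer can be written out as a short derivation (a sketch; the notation $\eta$ and $E^\bullet$ is introduced here, not taken from the thread):

```latex
Given a natural isomorphism $\eta \colon F \Rightarrow \operatorname{Hom}(M,-)$
and an injective resolution $0 \to N \to E^0 \to E^1 \to \cdots$ of a module $N$,
naturality makes the squares commute, so $\eta$ gives an isomorphism of complexes
\[
  F(E^\bullet) \xrightarrow{\ \eta_{E^\bullet}\ } \operatorname{Hom}(M, E^\bullet),
\]
and taking cohomology yields, for every $i$,
\[
  R^i F(N) \;=\; H^i\bigl(F(E^\bullet)\bigr)
  \;\cong\; H^i\bigl(\operatorname{Hom}(M, E^\bullet)\bigr)
  \;=\; \operatorname{Ext}^i(M, N).
\]
```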
Mathematical Tasks, Activities, and Tools

This summary provides an overview of chapter 5 of Effective pedagogy in Mathematics/Pangarau: Best evidence synthesis iteration [BES] (Anthony and Walshaw, 2007). This chapter discusses the forms of mathematical tasks used by teachers to support their students in learning mathematics. The full document can be downloaded from this page.

Big Idea: It is important to choose appropriate mathematical tasks and activities for students.

Key Points to be understood: The primary purpose of tasks in a mathematics lesson should be to engage students with a mathematical idea.

• While there is a place in a mathematics class for practice and consolidation, “tasks that require students to engage in complex and non-algorithmic thinking promote exploration of connections across mathematical concepts” (p. 97).
• Tasks that are open ended, rather than procedural, provide students with “something they need to think about, not simply a disguised way of practising already demonstrated algorithms” (p. 97). Some examples of open ended tasks:
□ Instead of finding the average of a list of numbers ask the students to suggest a possible list of numbers for a given average. (For example, the average number of family members of students in the class is 4. What might this look like for a class of 20 students?)
□ As an exercise to encourage addition and subtraction ask “If Jimmy and his 2 brothers took out a total of 10 books from the library, how many might they each have taken?”
□ Instead of learning lists of rules to identify quadrilaterals such as diamond, rectangle, kite, parallelogram, and rhombus, you could ask students to “Draw as many different 4 sided shapes as you can and describe their features.”
• Implementation of tasks should focus on students’ strategies not just the answer to problems. For this reason, tasks with multiple possible solution strategies are to be encouraged.
Asking students to explain their methods helps them develop their mathematical understandings. Students’ success on a task should be judged by their understanding rather than superficial indicators such as speed of completion or correctness of answers. • Productive mathematical tasks encourage the development of learning dispositions: reflection, generalisation, curiosity, conjecture and exploration. Enquiry based tasks, exploring students’ questions (for example, what shapes can be in a tessellation?) are more appealing to students than prescriptive tasks. However, keeping mathematics interesting and fun should not be at the expense of content. Use of manipulatives and individual choices of activity can in some instances allow students to avoid ‘real maths’ instruction. In selecting or developing mathematical tasks, consideration needs to be given to the ability of students and to their previous life experiences. Tasks should be appropriate for students’ level of ability: • Productive task engagement requires that tasks relate closely enough to current knowledge and skills to be understood but be different enough to extend students’ thinking. If tasks are too easy or too hard they are not motivating and are unlikely to engage students. “Tasks that are too easy or too hard have limited cognitive value” (p119). • Students do not all progress along a common developmental path at the same rate. Use your ongoing evaluation of students’ performance to help pitch the tasks at the right level. For this to be possible you need to know what the students are capable of. This is one particular strength of the Numeracy Development Project. Contexts used in tasks should relate to students’ life experiences where possible: • Use of real life contexts that are appropriate to your students’ experiences may make the mathematics more meaningful, accessible and appealing for students. 
Deciding which contexts are familiar for students is challenging (for example addition with money is not necessarily a familiar context as children usually buy things one at a time). • It is important that the context does not obscure the mathematics involved in the problem; overly complicated contexts can lead to a task being more about interpreting the question than the actual mathematics. • For the purposes of this discussion, tools include but are not restricted to: physical equipment, diagrams, graphs, examples, illustrations, problem contexts, equations, calculators and computers. Such resources can be used to provide a frame of reference to support the development of mathematical understanding. • Materials need to provide a scaffold to think about mathematical ideas rather than the focus for the learning. Using a range of materials to illustrate the same key idea helps students build conceptual mathematical ideas. • Getting students to record their own representations of learning can be valuable as it can help teachers make sense of students’ strategies and provide evidence of student learning. Students own representations are also often the most accessible to the student. When representations are chosen by the teacher the students may not see the link between the representation and the maths as clearly as when they use their own model. • Using both informal and formal recording is important. Students should be given opportunities to create their own representations before being introduced to conventional ones; this may make the effectiveness of conventional forms more apparent to students. • Artefacts, including pictures, diagrams and symbols, can be used to provide a ‘bridge’ between the context of the task and the mathematical ideas involved.
Simonton Science Tutor

Find a Simonton Science Tutor

...I love one-on-one tutoring and enjoy working with students of all levels. I am patient and have the flexibility to adapt my teaching style to match a student's needs. I am also confident that I can explain complex math and science problems in a way that is easy to understand.
13 Subjects: including genetics, organic chemistry, ACT Math, algebra 1

...Because I have personally dealt with test anxiety myself and helped students conquer theirs as well, I am well-equipped to help students overcome their fears through warm, gentle tutoring sessions where I will always expect good effort but not perfection. Excellence is a far better goal. In add...
47 Subjects: including biology, calculus, chemistry, microbiology

...I have also tutored high school students in Houston and Sugar Land in various subjects: math, physics, literature, Spanish, studying skills and SAT preparation. Currently, I am the translator for the Alief Independent School District. As a certified bilingual teacher, I am required to take 30 hours a year of professional development, which I cover by taking teaching workshops.
41 Subjects: including anthropology, reading, philosophy, English

...This is one of the subjects I love to tutor in because it's exciting and interesting to read the styles of each individual and to help them write and analyze a beautiful paper. I took Pre-AP English I and II from a top Fort Bend school and made A's in English for all four years of high school En...
39 Subjects: including pharmacology, microbiology, physical science, astronomy

...This is often a simple misunderstanding stemming from the basic concepts. While it might sound tedious and superfluous, many of the more complex topics are easily understood from the basics. Finally, it is important to inspire passion and a desire to learn in each student.
22 Subjects: including chemical engineering, physical science, physics, geometry
Landfalling Hurricane Probability Project United States Landfalling Hurricane Probability Project · Interactive Landfall Probability Display The user selects a county, and landfall probabilities based on CSU's Tropical Meteorology Project's 2014 tropical cyclone forecast are presented. The numbers in parentheses are the climatological averages based on landfalling tropical cyclones in HURDAT. · Landfall Probability Table A Microsoft Excel table displaying all landfall probability calculations · State Landfall Probability Table A Microsoft Excel table displaying all landfall probability calculations for each of the coastal states. · Region Map Map of the eleven regions for which landfall probabilities have been created · Methodology Documentation A Microsoft Word document describing how the landfall probabilities were calculated. Caribbean and Central America Landfalling Hurricane Probability Project · Landfall Probability Table A Microsoft Excel table displaying all landfall probability calculations · Methodology Documentation A Microsoft Word document describing how the landfall probabilities were calculated. The United States landfalling hurricane web project has been co-developed by William Gray's Tropical Meteorology Research Project at Colorado State University and the GeoGraphics Laboratory at Bridgewater State University.
- IN TENTH ANNUAL IEEE SYMPOSIUM ON LOGIC IN COMPUTER SCIENCE, IEEE COMPUTER , 1995 "... The Verilog hardware description language (HDL) is widely used to model the structure and behaviour of digital systems ranging from simple hardware building blocks to complete systems. Its semantics is based on the scheduling of events and the propagation of changes. Different Verilog models of the ..." Cited by 34 (1 self) Add to MetaCart The Verilog hardware description language (HDL) is widely used to model the structure and behaviour of digital systems ranging from simple hardware building blocks to complete systems. Its semantics is based on the scheduling of events and the propagation of changes. Different Verilog models of the same device are used during the design process and it is important that these be `equivalent'; formal methods for ensuring this could be commercially significant. Unfortunately, there is very little theory available to help. This self-contained tutorial paper explains the semantics of Verilog informally and poses a number of logical and semantic problems that are intended to provoke further research. Any theory developed to support Verilog is likely to be useful for the analysis of the similar (but more complex) language VHDL. - In Workshop on Formal Techniques for Hardware and Hardware-like Systems, Marstrand , 1998 "... Most hardware verification techniques tend to fall under one of two broad, yet separate caps: simulation or formal verification. This paper briefly presents a framework in which formal verification plays a crucial role within the standard approach currently used by the hardware industry. As a basis ..." Cited by 8 (2 self) Add to MetaCart Most hardware verification techniques tend to fall under one of two broad, yet separate caps: simulation or formal verification. This paper briefly presents a framework in which formal verification plays a crucial role within the standard approach currently used by the hardware industry. 
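The "scheduling of events and the propagation of changes" that these semantics formalise can be illustrated with a toy fixed-point evaluator for a combinational net. This is a sketch only — real Verilog simulators use timed event queues, and the data structures and names here are invented for illustration:

```python
def settle(inputs, gates, max_cycles=100):
    """Toy change-propagation evaluator for a combinational net.

    gates: list of (output_name, function, input_names) tuples. Each pass
    over the gates plays the role of one 'delta cycle': outputs are
    recomputed and the loop repeats until no signal changes value.
    """
    values = dict(inputs)
    for out, _, _ in gates:
        values.setdefault(out, 0)
    for _ in range(max_cycles):
        changed = False
        for out, fn, ins in gates:
            new = fn(*(values[name] for name in ins))
            if values[out] != new:
                values[out] = new
                changed = True
        if not changed:
            return values          # net has stabilised
    raise RuntimeError("net did not settle (possible combinational loop)")

# Half adder: sum = a XOR b, carry = a AND b.
net = [
    ("sum", lambda a, b: a ^ b, ("a", "b")),
    ("carry", lambda a, b: a & b, ("a", "b")),
]
print(settle({"a": 1, "b": 1}, net))  # -> {'a': 1, 'b': 1, 'sum': 0, 'carry': 1}
```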
As a basis for this, the formal semantics of Verilog HDL are defined, and properties about synchronization and mutual exclusion algorithms are proved. - in the proceedings of the Asian Paci Software Engineering Conference, APSEC , 1999 "... The actual behaviour of a hardware devices available for an implementation of a control system can be simulated by a program, and this allows a hardware device to be proved correct by standard software techniques. This report takes advantage of algebraic approaches in investigation of simulator algo ..." Cited by 6 (0 self) Add to MetaCart The actual behaviour of a hardware devices available for an implementation of a control system can be simulated by a program, and this allows a hardware device to be proved correct by standard software techniques. This report takes advantage of algebraic approaches in investigation of simulator algorithm. Here we formalise Event semantics of Hardware Description Language in the form of relations and use Relation calculus to prove properties of combinational programs, cycle behaviours of which are defined as a conditional loop of non-deterministic choices between generalised parallel assignments. This reports investigates the properties (including termination and stability and uniqueness of final state) of combinational programs, and explores the condition under which a combination program can be reduced to a sequential normal form by removing all the parallel operators. We examine three classes of programs which are widely used in design of well-behaved combinational circuits. Tran V... , 1998 "... Up to a few years ago, the approaches taken to check whether a hardware component works as expected could be classified under one of two styles: hardware engineers in the industry would tend to exclusively use simulation to (empirically) test their circuits, whereas computer scientists would tend to ..." 
Cited by 4 (2 self) Add to MetaCart Up to a few years ago, the approaches taken to check whether a hardware component works as expected could be classified under one of two styles: hardware engineers in the industry would tend to exclusively use simulation to (empirically) test their circuits, whereas computer scientists would tend to advocate an approach based almost exclusively on formal verification. This thesis proposes a unified approach to hardware design in which both simulation and formal verification can co-exist. Relational Duration Calculus (an extension of Duration Calculus) is developed and used to define the formal semantics of Verilog HDL (a standard industry hardware description language). Relational Duration Calculus is a temporal logic which can deal with certain issues raised by the behaviour of typical hardware description languages and which are hard to describe in a pure temporal logic. These semantics are then used to unify the simulation of Verilog programs, formal verification and the use of algebraic laws during the design stage. - In CHARME'95, volume 987 of Lecture Notes in Computer Science , 1995 "... . This paper gives operational semantics for a subset of VHDL in terms of abstract machines. Restrictions to the VHDL source code are the finiteness of data types, and the absence of quantitative timing informations. The abstract machine of a design unit is built by composition of the abstract machi ..." Cited by 4 (1 self) Add to MetaCart . This paper gives operational semantics for a subset of VHDL in terms of abstract machines. Restrictions to the VHDL source code are the finiteness of data types, and the absence of quantitative timing informations. The abstract machine of a design unit is built by composition of the abstract machines for its embedded processes and blocks. The kernel process in our model is distributed among the composed machines. One transition of the final abstract machine models a VHDL delta cycle. 
This model can be used for symbolic model checking and equivalence verification. 1 Introduction Giving a formal definition of the semantics of VHDL [7] is of highest importance for synthesis and formal verification. Many different approaches have been proposed to fulfill this need, see eg [1, 3, 4, 5, 8, 9, 10, 11]. A first conclusion can be drawn from a study of these works: one has to trade off the number of VHDL features modeled, and the practical usefulness of the semantics. VHDL is a very complex l... - In Conference on Hardware Description Languages (CHDL '95 , 1995 "... Besides a formal syntax definition, few formal semantic models for HDLs are ever constructed. This paper reports our efforts to construct formal models for the hardware description language VHDL. In particular, a static model for VHDL that addresses well-formedness, static equivalences, and static r ..." Cited by 3 (2 self) Add to MetaCart Besides a formal syntax definition, few formal semantic models for HDLs are ever constructed. This paper reports our efforts to construct formal models for the hardware description language VHDL. In particular, a static model for VHDL that addresses well-formedness, static equivalences, and static rewriting is presented. A rewriting algebra is presented that defines a set of transforms that allow the rewriting of VHDL descriptions into a reduced form. The dynamic semantics is under development and the reductions attained by the rewriting algebra have greatly simplified the language constructs that the dynamic semantics have to characterize. I. Introduction Hardware Description Languages (HDLs) and their affiliated design tools are widely used to aid the computer architect in the design and implementation of digital systems. In many cases such languages and tools are proprietary systems --- used locally at only one or two sites. However, advances in design automation have driven the co... , 1993 "... 
In this paper, we present a framework for defining the formal semantics of behavioral VHDL92 descriptions. We propose a complementary application of denotational and operational semantics. The static semantics is defined by denotational means. The definition of the dynamic semantics is based on an o ..." Add to MetaCart In this paper, we present a framework for defining the formal semantics of behavioral VHDL92 descriptions. We propose a complementary application of denotational and operational semantics. The static semantics is defined by denotational means. The definition of the dynamic semantics is based on an operational model using Interval Event Structures. 1 Introduction Approaching the definition of a formal semantics of the IEEE Std-1076 hardware description language VHDL [VHDL87] as well as of its successor VHDL92 [VHDL92] is of high interest for the synthesis and the formal verification of VHDL designs. First work in the formal definition of VHDL87 has been done by Borrione and Paillet [BoPa87] defining the formal semantics of a VHDL subset in terms of a functional model. Wilsey [Wilsey90] defines the semantics of a small VHDL87 subset based on interval temporal logic. More recently Davis [Davis93] has introduced the denotational semantics of the VHDL87 simulation cycle by the use of an in... , 1995 "... Present day hardware complexity mandates prior simulation of designs to achieve some level of confidence about their correctness. The hardware description language VHDL is widely used to describe and simulate hardware designs. However, the semantics of VHDL, presented in informal prose form in the L ..." Add to MetaCart Present day hardware complexity mandates prior simulation of designs to achieve some level of confidence about their correctness. The hardware description language VHDL is widely used to describe and simulate hardware designs. 
However, the semantics of VHDL, presented in informal prose form in the Language Reference Manual (LRM), is ambiguous and therefore needs to be formalized. Past efforts in characterizing VHDL are lacking in either that they handle only a small subset of VHDL or that they simply describe the simulation cycle as prescribed by the LRM (usually both). In this work, a comprehensive, declarative semantics for VHDL is presented which is independent of the simulation cycle and uses a temporal logic that captures the timing constraints crucial to the understanding of a discrete event based language like VHDL. The semantics is presented as a construction of sets of time intervals over which the values of signals, ports and variables are defined. It is argued that this sema...
Baltimore, MD SAT Math Tutor

Find a Baltimore, MD SAT Math Tutor

...I am a Regents scholar at Morgan State University required to maintain a 3.6 minimum GPA each semester. At school, I have been a tutor in both volunteer and paid positions, helping students on campus, as well as high school students, to achieve higher grade point averages. I have held quite a n...
24 Subjects: including SAT math, reading, writing, English

...I am also very good in general music. Although these are the only ones I play, I can help and tutor in anything, especially with reading music and general aspects of it. I can and have tutored all ages, from elementary age to high school.
19 Subjects: including SAT math, reading, piano, algebra 1

...Paul's. In the past, I have spent a great deal of time tutoring students in all things English -- writing papers, grammar skills, analyzing poetry & novels, etc. I have also tutored everything from public speaking skills to college French to algebra.
23 Subjects: including SAT math, reading, English, writing

...It is my goal to find a way to make the material relate to students, because if they understand the material then the exercises come as second nature. I have been attending Towson University since the fall of 2012 in the pursuit of a degree in Mathematics-Secondary Education. It is my goal to be a teacher in the Maryland school system and eventually become a professor at a community...
26 Subjects: including SAT math, English, writing, calculus

Hello, I'm a current faculty member at Johns Hopkins School of Public Health. I also work in the Division of Health Science Informatics at the School of Medicine. At DHSI, I mentor, advise and help place all our informatics students from the School of Medicine, School of Nursing, and School of Public Health, in their practicum/internships.
36 Subjects: including SAT math, reading, chemistry, writing
axiom of foundation

The axiom of foundation can be stated in several equivalent forms:

1. For any nonempty set $X$ there is some $y\in X$ such that $y\cap X=\emptyset$.
2. For any set $X$, there is no function $f$ from $\omega$ to the transitive closure of $X$ such that for every $n$, $f(n+1)\in f(n)$.
3. For any formula $\phi$, if there is any set $x$ such that $\phi(x)$ then there is some $X$ such that $\phi(X)$ but there is no $y\in X$ such that $\phi(y)$.

Sets which satisfy this axiom are called artinian. It is known that, if ZF without this axiom is consistent, then this axiom does not add any inconsistencies. One important consequence of this property is that no set can contain itself. For instance, if there were a set $X$ such that $X\in X$ then we could define a function $f(n)=X$ for all $n$, which would then have the property that $f(n+1)\in f(n)$ for all $n$.

Synonyms: artinian, artinian set, artinian sets, foundation, regularity, axiom of regularity
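The same consequence also follows directly from the first formulation, without constructing the function $f$; a short sketch:

```latex
% Direct argument that no set contains itself, via the first
% formulation of foundation applied to the singleton $\{X\}$.
Suppose $X \in X$. Apply foundation to the nonempty set $\{X\}$:
there must be some $y \in \{X\}$ with $y \cap \{X\} = \emptyset$.
The only element of $\{X\}$ is $X$ itself, so $y = X$. But then
$X \in X$ and $X \in \{X\}$ give $X \in X \cap \{X\}$, so
$X \cap \{X\} \neq \emptyset$, a contradiction.
```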
Science the GNU Way, Part I

In my past several articles, I've looked at various packages to do all kinds of science. Sometimes, however, there just isn't a tool to solve a particular problem. That's the great thing about science: there is always something new to discover and study. But this means it's up to you to develop the software tools you need to do your analysis. This article takes a look at the GNU Scientific Library, or GSL. This library is the Swiss Army knife of scientific routines that you will find useful in your work.

First, you need to get a copy of GSL and install it on your system. Because it is part of the GNU Project, it is hosted at http://www.gnu.org/s/gsl. You always can download and build from the source code, but all major distributions should have packages available. For example, on Debian-based systems, you need to install the package libgsl0-dev to develop your code and gsl-bin to run that code. GSL is meant for C and C++, so you also need a compiler. Most of you probably already are familiar with GCC, so I stick with that here.

The next step is actually the last step: compiling. I'm looking at compiling now so that I can focus on all the tools available in GSL afterward. All the header files for GSL are stored in a subdirectory named gsl. So, for example, if you wanted to include the math header file, you would use:

#include <gsl/gsl_math.h>

All the functions are stored in a single library file called libgsl.a or libgsl.so. You also need a library to handle basic linear algebra. GSL provides one (in the file libgslcblas.so), but you can use your own. Additionally, you need to link in the math library. So, the final compile and link command should look like this:

gcc -o hello_world hello_world.c -lgsl -lgslcblas -lm

There are optional inline versions for some of the performance-critical functions in GSL. To use these, you need to include -DHAVE_INLINE with your compile command.
To try to help with portability issues, GSL offers some functions that exist only on certain platforms (and not others). As an example, the BSD math library has a function called hypot. GSL offers its own version, called gsl_hypot, that you can use on non-BSD platforms. Some functions have both a general algorithm as well as optimized versions for specific platforms. This way, if you are running on a SPARC, for example, you can select a version optimized for SPARC if it exists.

One of the first things you likely will want to do is check whether you are getting correct results from your code or if there were errors. GSL has a number of functions and data structures available in the header file named gsl_errno.h. Functions return a value of zero if everything is fine. If there were any problems in trying to complete the requested action, a nonzero value is returned. This could be an actual error condition, like a wrong data type or memory error, or it could be a condition like not being able to converge to within the requested accuracy in the function call. This is why you always need to check the return value for all GSL function calls.

The actual values returned in an error condition are error codes, defined in the file gsl_errno.h. They are defined as macros that start with GSL_. Examples include the following:

• GSL_EDOM — domain error, used by functions when an argument doesn't fall into the domain over which the function is defined.

• GSL_ERANGE — range error, either an overflow or underflow.

• GSL_ENOMEM — no memory available.

The library will use values only up to 1024. Values above this are available for use in your own code. There also are string versions of these error codes available. You can translate an error code to its text value with the function gsl_strerror().

Now that you know how to compile your program and what to do with errors, let's start looking at what kind of work you can do with GSL.
Basic mathematical functions are defined in the file gsl_math.h. The set of mathematical constants from the BSD math library are provided by this part of GSL. All of the constants start with M_. Here are a few of them: • M_PI — pi. • M_SQRT2 — the square root of 2. • M_EULER — Euler's constant. There also are capabilities for dealing with infinities and non-numbers. Three macros define the values themselves: • GSL_POSINF — positive infinity. • GSL_NEGINF — negative infinity. • GSL_NAN — not a number. There also are functions to test variables: • gsl_isnan — is it not a number? • gsl_isinf — is it infinite? • gsl_finite — is it finite? There is a macro to find the sign of a number. GSL_SIGN(x) returns the sign of x: 1 if it is positive and –1 if it is negative. If you are interested in seeing whether a number is even or odd, two macros are defined: GSL_IS_ODD(x) and GSL_IS_EVEN(x). These return 1 if the condition is true and 0 if it is not. A series of elementary functions are part of the BSD math library. GSL provides versions of these for platforms that don't have native versions, including items like: • gsl_hypot — calculate hypotenuse. • gsl_asinh, gsl_acosh, gsl_atanh — the arc hyperbolic trig functions. If you are calculating the power of a number, you would use gsl_pow_int(x,n), which gives you x to the power of n. There are specific versions for powers less than 10. So if you wanted to find the cube of a number, you would use gsl_pow_3. These are very efficient and highly optimized. You even can inline these specialized functions when HAVE_INLINE is defined. Several macros are defined to help you find the maximum or minimum of numbers, based on data type. The basic GSL_MAX(a,b) and GSL_MIN(a,b) simply return either the maximum or minimum of the two numbers a and b. GSL_MAX_DBL and GSL_MIN_DBL find the maximum and minimum of two doubles using an inline function. GSL_MAX_INT and GSL_MIN_INT do the same for integer arguments. 
When you do any kind of numerical calculation on a computer, errors always are introduced by round-off and truncation. This is because you can't exactly reproduce numbers on a finite binary system. But, what if you want to compare two numbers and see whether they are approximately the same? GSL provides the function gsl_fcmp(x,y,epsilon). This function compares the two doubles x and y, and checks to see if they are within epsilon of each other. If they are within this range, the function returns 0. If x < y, it returns –1, and it returns 1 if x > y. Complex numbers are used in many scientific fields. Within GSL, complex data types are defined in the header file gsl_complex.h, and relevant functions are defined in gsl_complex_math.h. To store complex numbers, the data type gsl_complex is defined. This is a struct that stores the two portions. You can set the values with the functions gsl_complex_rect(x,y) or gsl_complex_polar(x,y). In the first, this represents x+iy; whereas in the second, x is the radius, and y is the angle in a polar representation. You can pull out the real and imaginary parts of a complex number with the macros GSL_REAL and GSL_IMAG. There is a function available to find the absolute value of a complex number, gsl_complex_abs(x), where x is of type gsl_complex. Because complex numbers actually are built up of two parts, even basic arithmetic is not simple. To do basic math, you can use the following: • gsl_complex_add(a,b) • gsl_complex_sub(a,b) • gsl_complex_mul(a,b) • gsl_complex_div(a,b) You can calculate the conjugate with gsl_complex_conjugate(a) and the inverse with gsl_complex_inverse(a). There are functions for basic mathematical functions. To calculate the square root, you would use gsl_complex_sqrt(x). To calculate the logarithm, you would use gsl_complex_log(x). Several others are available too. Trigonometric functions are provided, like gsl_complex_sin(x). 
There also are functions for hyperbolic trigonometric functions, along with the relevant inverse functions. Now that you have the basics down, my next article will explore all the actual scientific calculations you can do. I'll look at statistics, linear algebra, random numbers and many other topics.

Science image via Shutterstock.com.

Joey Bernard has a background in both physics and computer science. This serves him well in his day job as a computational research consultant at the University of New Brunswick. He also teaches computational physics and parallel programming.
Equation of a plane which intersects the two surfaces?

October 2nd 2009, 07:13 PM

Equation of a plane which intersects the two surfaces?

The question asks to find the equation for a plane which intersects these two surfaces. I was sure that I solved it correctly, but my professor says it is wrong and I don't know what I am doing wrong here. Here are the two surfaces...

1) $x^2+2y^2-z^2+3x=1$
2) $2x^2+4y^2-2z^2-5y=0$

Here's what I did... I found three points that are on both surfaces, hence points at which they intersect. I did this by multiplying the first surface equation by two and subtracting the equations from each other to end up with $6x+5y=2$. After that, I just picked some numbers for x and y and then plugged them back into the surface equations to find z. The three points I found were $(1/3,0,1/3), (1/3,0,-1/3)$, and $(2,-2,\sqrt{17})$. After finding these three points, I just used them to find the normal vector and then used the normal vector and the first point to find the equation. Here is the equation that I got... Can someone please tell me whether this is correct or not. If it's not correct, can someone tell me what I might be doing wrong? Thanks.

October 2nd 2009, 07:44 PM

The question asks to find the equation for a plane which intersects these two surfaces. I was sure that I solved it correctly, but my professor says it is wrong and I don't know what I am doing wrong here. Here are the two surfaces...

1) $x^2+2y^2-z^2+3x=1$
2) $2x^2+4y^2-2z^2-5y=0$

Here's what I did... I found three points that are on both surfaces, hence points at which they intersect. I did this by multiplying the first surface equation by two and subtracting the equations from each other to end up with $6x+5y=2$. After that, I just picked some numbers for x and y and then plugged them back into the surface equations to find z. The three points I found were $(1/3,0,1/3), (1/3,0,-1/3)$, and $(2,-2,\sqrt{17})$.
After finding these three points, I just used them to find the normal vector and then used the normal vector and the first point to find the equation. Here is the equation that I got... Can someone please tell me whether this is correct or not. If it's not correct, can someone tell me what I might be doing wrong? Thanks.

$\left\{ \begin{gathered} x^2 + 2y^2 - z^2 + 3x = 1, \hfill \\ 2x^2 + 4y^2 - 2z^2 - 5y = 0; \hfill \\ \end{gathered} \right. \Leftrightarrow \left\{ \begin{gathered} z^2 = x^2 + 2y^2 + 3x - 1, \hfill \\ z^2 = x^2 + 2y^2 - \frac{5}{2}y; \hfill \\ \end{gathered} \right. \Leftrightarrow$

$\Leftrightarrow x^2 + 2y^2 + 3x - 1 = x^2 + 2y^2 - \frac{5}{2}y \Leftrightarrow$

$\Leftrightarrow 3x + \frac{5}{2}y - 1 = 0 \Leftrightarrow 6x + 5y - 2 = 0.$
Seatac, WA ACT Tutor

Find a Seatac, WA ACT Tutor

...I have been working with middle school and high school students on specific study skills, like goal setting, organization, reading comprehension, listening skills, and more since 2010. These skills are essential to success now and into college and work. I am an accomplished vocalist who has been teaching voice lessons during the past four years.
46 Subjects: including ACT Math, English, reading, algebra 1

...I have been tutoring and teaching both professionally and through non-profit organizations for many years now in Seattle, Hawaii and Chicago. I have worked with students K-12 in the Seattle Public Schools, Chicago Public Schools, and Seattle area independent schools. I have also worked with students at the University level.
27 Subjects: including ACT Math, chemistry, reading, writing

...I have been successfully working as a tutor for the past 5 years with a variety of children in grade school with good feedback. I found my experience as a research scientist fascinated the kids, which created an environment which they wanted to be in. I have worked throughout the digital revolut...
45 Subjects: including ACT Math, chemistry, physics, calculus

...I currently volunteer with a group which helps introduce middle and high school age students into science by bringing them to biomedical research facilities. I have been a TA while in high school, college and graduate school (referred to as a GSI), providing one-on-one and small group study sess...
25 Subjects: including ACT Math, Spanish, chemistry, writing

...I have programmed professionally in several high level languages such as Basic, Fortran, C++ and developed websites using HTML, CSS, Flash, Perl and PHP. I have also worked in a couple of assembly languages. I am open to teach any language that also has value in the real world.
43 Subjects: including ACT Math, chemistry, physics, calculus
Ideals and Maximal Commutative Subrings of Graded Rings (2009). Doctoral Theses in Mathematical Sciences 2009:5.

This thesis is mainly concerned with the intersection between ideals and (maximal commutative) subrings of graded rings. The motivation for this investigation originates in the theory of C*-crossed product algebras associated to topological dynamical systems, where connections between intersection properties of ideals and maximal commutativity of certain subalgebras are well-known. In the last few years, algebraic analogues of these C*-algebra theorems have been proven by C. Svensson, S. Silvestrov and M. de Jeu for different kinds of skew group algebras arising from actions of the group Z. This raised the question whether or not this could be further generalized to other types of (strongly) graded rings. In this thesis we show that it can indeed be done for many other types of graded rings and actions! Given any (category) graded ring, there is a canonical subring which is referred to as the neutral component or the coefficient subring.
Through this thesis we successively show that for algebraic crossed products, crystalline graded rings, general strongly graded rings and (under some conditions) groupoid crossed products, each nonzero ideal of the ring has a nonzero intersection with the commutant of the center of the neutral component subring. In particular, if the neutral component subring is maximal commutative in the ring this yields that each nonzero ideal of the ring has a nonzero intersection with the neutral component subring. Not only are ideal intersection properties interesting in their own right, they also play a key role when investigating simplicity of the ring itself. For strongly group graded rings, there is a canonical action such that the grading group acts as automorphisms of certain subrings of the graded ring. By using the previously mentioned ideal intersection properties we are able to relate G-simplicity of these subrings to simplicity of the ring itself. It turns out that maximal commutativity of the subrings plays a key role here! Necessary and sufficient conditions for simplicity of a general skew group ring are not known. In this thesis we resolve this problem for skew group rings with commutative coefficient rings.

Opponent: Professor Søren Eilers, University of Copenhagen, Denmark
Type: Dissertation (Composite)
Series: Doctoral Theses in Mathematical Sciences, 2009:5
Pages: 183
Publisher: Centre for Mathematical Sciences
Defense location: Lecture hall MH:C, Centre for Mathematical Sciences, Sölvegatan 18, Lund University, Faculty of Engineering
Defense date: 2009-08-17 13:15
Date added to LUP: 2009-07-01 13:35:57
Date last changed: 2009-07-01 13:35:57

BibTeX record:
author = {\"{O}inert, Johan}, isbn = {978-91-628-7832-0}, issn = {1404-0034}, keyword = {ideals, simple rings, maximal commutativity, crossed products, graded rings}, language = {eng}, pages = {183}, publisher = {Centre for Mathematical Sciences}, school = {Lund University}, series = {Doctoral Theses in Mathematical Sciences}, title = {Ideals and Maximal Commutative Subrings of Graded Rings}, volume = {2009:5}, year = {2009},
Vivaldi Antenna

Microwave Engineering Europe's (MWEE) EM simulation benchmark has become quite a tradition over the last few years, enticing some of the best known software providers to put their diverse simulation methods to the test. The results are always eagerly awaited, as they represent the current status of simulation technology and permit revealing comparisons between individual methods and software.

Figure 1: Geometry of a Vivaldi antenna

For the first time in the MWEE benchmark's history, an antenna problem was set. The balanced Vivaldi antenna posed a worthy challenge to the benchmark participants due to its complex form and size (Fig. 1). The CAD benchmark was presented in the October 2000 edition of MWEE, and results from six contributors were published in the subsequent editions, with the measured results ending the series in February 2001. The comparison between the published measured and simulated results from all software producers revealed significant differences. No possible reason for this striking discrepancy was put forward by MWEE. CST calculated the Vivaldi structure with great diligence and, having carried out a detailed convergence study, consider our results to be highly accurate. We would therefore like to share our thoughts with you, commenting on some of the results. Over the next few pages you will find a discussion of performance and accuracy, commentary on the differences between measured and simulation results, remarks on model input time, and the benchmark results achieved with CST MICROWAVE STUDIO® illustrated with a wide range of plots and animations. In conclusion we can only recommend that interested readers accept our offer of a CST MICROWAVE STUDIO® test licence. See for yourself what today's 3D EM field simulation tools are capable of.

Figure 2: Benchmark results

Figure 2 illustrates the results published in the February 2001 edition of MWEE, submitted by six benchmark participants.
At first glance all curves - at least in the frequency range from 0-5 GHz - are similar. But when you take a closer look they demonstrate clear differences. The importance of these differences is shown by the following convergence study, performed with the help of CST MWS®'s automatic mesh adaptor.

Figure 3: Final solution in a rough mesh with 10,000 mesh nodes

Figure 3 illustrates large variations for the final solution on a rough mesh with 10,000 mesh nodes (Pass 1). After the fifth run (with 53,000 mesh nodes, 12 min. calculation time) hardly any deviation is present in the results and strong convergence can be seen. It has been mathematically proven that our method must always converge and so is absolutely reliable.

Figure 4: Frequency band section

We will now take a closer look at one section of the frequency band (Figure 4). The resonance peak around 3 GHz decreases in intensity and the resonance frequency shifts until a mesh density of 330,000 nodes is used, where an extremely high accuracy is attained (Pass 9). The difference between the absolute minimum of these peaks for runs 1 and 9 is, nevertheless, 25 dB, and the frequency shift is 330 MHz. A renewed look at the results submitted by the benchmark participants clearly reflects the time spent and the reliability of the various methods. The difference between the individual resonance peaks amounts to 12 dB / 250 MHz for this frequency range. This enormous difference impressively clarifies the importance of a carefully carried out convergence study. It should be noted that the 9 runs made here were only carried out in order to clearly illustrate the convergence process. In practice, substantially fewer runs are sufficient to reach a reliable result. As an optional extra, MWEE had challenged the participants to calculate results for the 10 to 20 GHz range. Unfortunately MWEE neither presented the measured results for this range, nor published a comparison between the submitted simulation results.
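The pass-to-pass convergence check described above, refining the mesh until the monitored result stops changing, can be sketched as a simple stopping rule. The pass values below are hypothetical and for illustration only; they are not CST's benchmark data.

```python
# Illustrative mesh-refinement stopping rule (hypothetical data, not
# CST's actual benchmark numbers): declare convergence once the monitored
# quantity changes by less than rel_tol between successive passes.

def first_converged_pass(results, rel_tol=0.01):
    """Return the 0-based index of the first pass whose result differs
    from the previous pass by less than rel_tol (relative), or None."""
    for i in range(1, len(results)):
        prev, curr = results[i - 1], results[i]
        if abs(curr - prev) / abs(prev) < rel_tol:
            return i
    return None

# Hypothetical |S11| minima (dB) near the 3 GHz resonance, passes 1..9
s11_min_db = [-10.0, -18.0, -25.0, -30.0, -33.0, -34.2, -34.8, -35.0, -35.05]
stop = first_converged_pass(s11_min_db)  # index 7, i.e. pass 8
```

In practice a field solver monitors an energy or S-parameter criterion rather than a single resonance minimum, but the stopping logic is the same.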
It is precisely in this frequency range that the widely differing abilities of the methods would have appeared most clearly, due to the problem size in wavelengths and the resulting increase in the number of mesh node points. A comparison of the data provided by the individual competitors revealed the clear superiority of the Time Domain method with regard to computation time and memory requirements. The Finite Element program, Ansoft's "HFSS", needed 143 minutes for the frequency range 0 to 10 GHz, whereas CST MICROWAVE STUDIO® only needed 15 minutes, with a memory requirement eight times smaller. No results from HFSS were published for the frequency range 10 - 20 GHz. CST MICROWAVE STUDIO® produced results for this frequency range within 15 minutes, but as it turned out, to achieve our desired high levels of accuracy at 20 GHz, a finer discretisation and therefore a longer computation time (up to 64 minutes) was necessary.

Figure 5: Measurement

Figure 5 illustrates the reflection for the same frequency range (0.5 - 10 GHz) as measured by BAE Systems, Great Baddow. The deviation from the common trend present in all simulation results is so large that it has to be asked whether the layout of the measured antenna completely corresponds with the structural information made available. It seems very unlikely that all published simulation results should be so full of errors. We would like you to bear in mind that the published simulation results represent state-of-the-art 3D EM simulation, and their accuracy and agreement with measurements have been verified over the years by thousands of users of these methods. A possible explanation for the deviation would be the confirmed presence of an SMA launcher in the real antenna model, which was additionally given as the explanation for the enormous ripple (see MWEE Feb. 2001 edition).
Figure 6: Geometry of a Vivaldi antenna

User friendliness, along with the accuracy and speed of calculations, is an important criterion of modern CAE packages. Often a large part of the time available for design and analysis will be used for the inputting of data. The input times submitted by the competitors vary greatly. They range from 10 to <120 minutes. Many customers have already certified that CST MICROWAVE STUDIO® has the easiest-to-use interface of all EM simulation programs. Ironically, the makers of this software submitted the longest input time. We leave it up to you and your experience of interpreting technical drawings, inputting structures, setting up simulation boundary conditions and parametrising models, to determine how realistic a complete input time of 15 minutes is for such a structure. CST believes that only realistic input times are of any use to readers of this benchmark. We therefore stand by our statement of <120 minutes, which implies that an experienced user, with a CAD interface as user-friendly as that present in CST MICROWAVE STUDIO®, could also achieve substantially shorter times.

Microwave Engineering Europe have unveiled their CAD Benchmark 2000 - a free-space electromagnetic problem based on a balanced antipodal Vivaldi antenna. The results achieved using CST MICROWAVE STUDIO can be viewed here or alternatively on the MEE website.

Model Geometry
The Vivaldi antenna was modelled with CST MICROWAVE STUDIO. The structure was built as a fully parametric model, and the real thickness of the metallic layers (17 µm) was taken into account.

Figure 7: Model geometry

S-Parameters and Input Impedance
Figure 8 shows the amplitude of the S-parameters over the range 0 - 20 GHz, at a reference impedance of 41.88 Ohm without normalisation.

Figure 8: S-parameters

Pattern Data Output at 10 GHz
This plot shows E-Plane co-polar and H-Plane cross-polar directivity at 10 GHz.
For this far field plot the coordinate system was oriented so that the z-axis was parallel to the main lobe.

Figure 9: E-Plane co-polar and H-Plane cross-polar plots

Pattern Data Output at 10 GHz
This plot shows the E-Plane cross-polar and H-Plane co-polar directivity at 10 GHz. For this far field plot the coordinate system was oriented so that the z-axis was parallel to the main lobe.

Figure 10: E-Plane cross-polar and H-Plane co-polar plots

Electric Near Field at 10 GHz
This plot shows a contour plot of the electric near field at 10 GHz.

Figure 11: Contour plot

Current Distribution at 10 GHz
This plot shows a vector plot of the current distribution on the antenna at 10 GHz.

Figure 12: Current distribution

Hardware information
A fast calculation of the S-parameters for the desired frequency range 0 - 10 GHz was performed within 15 minutes on an 800 MHz PIII. The results of this quick calculation are in excellent agreement with a second simulation made for the whole frequency band (0 - 20 GHz) using a refined mesh. This high-accuracy solution for the whole frequency range 0 - 20 GHz was obtained in 64 minutes on the same PC and required about 100 MB of memory. Both calculations were performed on a single-processor machine. Using a dual-processor machine would reduce this time further.

Software information
The benchmark simulation was performed with version 2.1 of CST MICROWAVE STUDIO®. This 3D solver is based on the FI method in combination with the Perfect Boundary Approximation™ (PBA™), which allows the partial filling of mesh cells.
CST Article "Vivaldi Antenna", last modified 19 Jan 2009, Article ID 15.
EconPort - This experiment is largely based upon "Classroom Games: Voluntary Provision of a Public Good" by Charles A. Holt and Susan K. Laury, Journal of Economic Perspectives 11(4), Fall 1997, pages 209-215. The professor distributes four cards to each student: two red cards (hearts and diamonds) and two black cards (clubs and spades). The professor comes to each student and collects two cards: two red cards, two black cards, or one of each. Red cards represent the student's endowment that may be kept or contributed to a group fund. Red cards kept generate earnings only to the person holding the card. Red cards turned in (contributed) generate earnings to everyone in the group. (Black cards generate no earnings and are used to keep decisions private; when a student turns in two cards others cannot tell whether the student is turning in two red cards, two black cards, or one of each.) After all cards have been turned in, the professor counts the number of red cards contributed and announces this total to the group. Cards are returned to the students so they can make another contribution decision. After several contribution rounds, the professor announces a change in the value of red cards kept. After several more contribution rounds, students are given the opportunity to engage in a non-binding communication period before the experiment continues. The topics that students discuss typically form the basis of the subsequent class discussion and lecture. (PDF file of instructions for printing) (These instructions are written assuming that the teacher reads them to students in class, but can easily be modified for reading outside of class prior to the lecture period. They can also be given in writing to students, but many professors simply read them to the students and highlight the key rules on the board. In this case, the students can create their own record sheet on their own paper. Cards can be handed out while instructions are being read.) 
This is a simple card game. Each of you will be given 4 cards; two of these cards are red (hearts or diamonds) and two of these cards are black (clubs or spades). All of your cards will be the same number. The exercise will consist of a number of rounds (you have 15 rounds listed on your record sheet - we will probably complete fewer than 15 rounds). When a round begins I will come to each of you in order, and you will play TWO of your four cards by placing these two cards face down on top of the stack in my hand. Your earnings in dollars are determined by what you do with your red cards. In the first several rounds, you will earn $4 for each red card that you keep and $0 for each black card that you keep. Red cards that are placed on the stack will affect everyone's earnings in the following manner. I will count up the number of red cards, and everyone will earn this number of dollars. Black cards placed on the stack have no effect on the count. When the cards are counted, I will not reveal who made which decisions, but I will return your cards to you at the end of the round (by returning to each of you in reverse order and giving you the top two cards from the stack in my hand). To summarize, your earnings for the period will be calculated as:

Earnings = $4 x (number of red cards you kept) + $1 x (total number of red cards I collect)

After several rounds, I will announce a change in the earnings for each red card you keep. Red cards placed on the stack will always earn $1 for each person. Use the space below to record your decisions, your (hypothetical) earnings, and your cumulative earnings.

1. You should conduct several rounds with the earnings specified in the students' instructions ($4 for each red card kept and $1 for each red card turned in), until the number of red cards contributed starts to decline. Typically there is a marked decrease in contributions after three or four rounds.
After this, change the value of a red card kept to $2, while keeping the value of a red card turned in fixed at $1. This does not change the Nash equilibrium, but because the opportunity cost of keeping a card declines, one typically observes an increase in contributions as a result of this change. This increase is particularly dramatic since it occurs after contributions had been declining for two or three rounds in the first treatment. After several rounds with this payment structure, announce that you will give students a chance to talk for several minutes. They can talk about anything they like, but tell them that you will not enforce any agreements that they make and that they will not be given another chance to talk with one another before the end of the experiment. Give students up to 5 minutes for discussion, then conduct several more rounds.

2. If you are running short of time, reduce the number of rounds conducted under the second treatment ($2 for each red card kept) and conduct only one or two rounds after the discussion period. It's best not to skip the discussion period since this is when many students start to figure out the conflict between what is best for the individual and what is best for the group. However, we suggest that you not allow students to talk to one another at any other time during the experiment.

3. Each student should start a round with two red cards and two black cards; therefore it is essential that you return to each student the same two cards that he or she played. An easy way to do this is to go back to each student in the reverse order that you used to collect the cards, then give the student the top two cards from the stack in your hand. This will work as long as (a) each student played two cards as instructed, and (b) you keep the cards in order when you count the number of red cards turned in.
Since students are originally given four cards with the same number (i.e., 4 of hearts, diamonds, clubs, and spades), if you make a mistake in returning cards it can be quickly corrected. 4. Students typically need some help in filling out their record sheet during the first round or two, so you should lead them through the process. Remind them that they earned $4 for each red card kept; so the column labeled "earnings for cards kept" will be $0 if they kept no red cards (they turned in both), $4 if they kept 1 red card, and $8 if they kept both red cards. Then announce the earnings that EVERYONE will receive from the red cards turned in (for example, if 18 red cards were turned in, announce that everyone should write down $18 in the column labeled "$1 x (total # of red cards in stack)"). Be sure to remind them that each person will receive this amount, even those who turned in no red cards. Then tell them to fill in their total earnings in the period: earnings from cards kept + earnings from cards in the stack. 5. After each round, you should write on the board the total number of red cards that were turned in. Be sure to indicate the value of the red card kept for each decision, since you will refer to this and the number of cards contributed during the class discussion.
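The record-sheet arithmetic in the instructions and in note 4 ($4, later $2, per red card kept, plus $1 to every student per red card in the stack) can be checked with a short sketch; this is an illustration, not part of the original handout.

```python
def round_earnings(red_kept, total_red_in_stack, keep_value=4, pool_value=1):
    """One student's earnings for a round: keep_value dollars per red card
    kept plus pool_value dollars per red card in the stack (everyone
    receives the pool payoff, including students who contributed nothing)."""
    return keep_value * red_kept + pool_value * total_red_in_stack

# Note 4's example: 18 red cards turned in, and this student kept both reds
print(round_earnings(2, 18))                # $4*2 + $1*18 = $26
# Second treatment: red cards kept are worth $2 instead of $4
print(round_earnings(1, 10, keep_value=2))  # $2*1 + $1*10 = $12
```

Because a kept red card pays its holder $4 (or $2) while a contributed one returns only $1 to that same student, keeping is individually dominant even though full contribution maximizes the group total - the tension the post-game discussion is meant to surface.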
The Math Forum Internet News

eInstruction Contest
The 5th Annual eInstruction Classroom Makeover Video Contest has begun. Submit your own music video for a chance to win a $75,000 classroom-technology makeover! In addition to the grand prize, participating K-12 students and teachers can also win a package valued at $2,840 from the Math Forum, consisting of a Problems of the Week School Membership and three registrations in our online courses:
• PoW Class Membership: Resources & Strategies for Effective Implementation
• Learning from Student Work: Make the Most of Your PoW Membership
• Mentor Your Own: Supporting Strong Development of Mathematical Practices
Deadline for submitting your music video about classroom technology use is Tuesday, October 25, 2011, at 12 PM ET. Finalists for each grade category will be announced on November 1; and winners, on or about December 16, 2011. So roll that camera and sing that song ... or parody!

PoW taking place: math problem-solving moment of the week
"The other common way to solve this problem was through guess and check. Most submitters only told me about their correct guess, which left me wondering again... How hard was it to get to that correct guess? Marion B. from Rosemont School of the Holy Child noticed that the first guess was too small and so tried doubling it for the next guess. As the guesses got close, Marion went up by twos because the length of the body had to be even." - Max, commenting on the Pre-Algebra PoW's Latest Solution

Apply for a Travel Grant
Don't forget ICME 12: apply now to meet the September 30 deadline for travel grants to attend the Twelfth International Congress on Mathematical Education, to be held in Seoul, Korea, from July 8-15. View the official call and the application on the NCTM website.

Now taking place: math education conversation of the hour
"As an 'unschooler' I 'liked' this because The Math Forum is unique in the world of, well, learning services.
All [other] learning services, schools, books, online... are based on the idea of a teacher who presumes what you need to know, when you need to know it, how you need to know it (whether a mass schooling standard or more free form styles of something like 'khanacademy' — I've used the latter for some things, btw). The Math Forum is effective because it is fashioned after the way the brain actually works, how living creatures learn — in unschooling terms: student led. I was just thinking about Dr. Math, today, and it finally struck me: 'This is student led, and I'm all about that! Why did I never notice that before!?'" - James, posted to the Forum Facebook Wall "Who Wants to Be a Mathematician" National Contest In the American Mathematical Society (AMS) game "Who Wants to Be a Mathematician," high school students answer multiple choice mathematics questions as they compete for cash and prizes. The winning contestant pockets $5,000, with the math department of that teen's school earning an additional $5,000. Request a qualifying test for your high school students by emailing the AMS Public Awareness Office, paoffice at ams dot org, with the subject line "National WWTBAM." In the body of the message, include your name, school, contact information, and courses taught this year.
Parosh Aziz Abdulla (Uppsala University), Language Inclusion for Timed Automata

We consider the language inclusion problem for timed automata. Alur and Dill showed in 1994 that the problem is undecidable if the automata are allowed to have at least two clocks. Recently, Ouaknine and Worrell showed that the problem becomes decidable in case the automata are restricted to have single clocks. In this presentation, we describe a zone-based algorithm for the one-clock language inclusion problem over finite words. We also show the following results:

1. Language inclusion is undecidable for single-clock timed automata over infinite words. This reveals a surprising divergence between the theory of timed automata over finite words and over infinite words.
2. If epsilon-transitions or non-singular postconditions are allowed, then the one-clock language inclusion problem is undecidable over both finite and infinite words.
3. Language inclusion has non-primitive recursive complexity for single-clock timed automata over finite words.
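As a concrete illustration of the single-clock model the talk concerns, the following is a minimal sketch of running a deterministic one-clock timed automaton over a finite timed word. The automaton, its guards, and its resets are invented for illustration and do not come from the presentation.

```python
def run_one_clock(transitions, init, finals, timed_word):
    """Run a deterministic one-clock timed automaton over a finite timed
    word given as (letter, absolute_time) pairs with nondecreasing times.
    transitions maps (state, letter) -> (guard, reset, next_state), where
    guard is a predicate on the current clock value and reset is a bool."""
    state, clock, last_t = init, 0.0, 0.0
    for letter, t in timed_word:
        clock += t - last_t      # let time elapse since the last event
        last_t = t
        step = transitions.get((state, letter))
        if step is None:
            return False
        guard, reset, nxt = step
        if not guard(clock):
            return False
        if reset:
            clock = 0.0          # the single clock may be reset
        state = nxt
    return state in finals

# Toy language: words "ab" where b occurs at most 1 time unit after a
trans = {
    ("q0", "a"): (lambda c: True, True, "q1"),       # reset clock on a
    ("q1", "b"): (lambda c: c <= 1.0, False, "q2"),  # guard: x <= 1
}
print(run_one_clock(trans, "q0", {"q2"}, [("a", 0.3), ("b", 1.0)]))  # True
print(run_one_clock(trans, "q0", {"q2"}, [("a", 0.3), ("b", 2.0)]))  # False
```

Running one automaton is easy; the hardness results above concern deciding whether every timed word accepted by one automaton is also accepted by another.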
Investigation of Swirling Flows in Mixing Chambers

Modelling and Simulation in Engineering, Volume 2011 (2011), Article ID 259401, 15 pages

Research Article

Jyh Jian Chen and Chun Huei Chen, Department of Biomechatronics Engineering, National Ping Tung University of Science and Technology, Pingtung 912, Taiwan

Received 9 November 2010; Revised 25 February 2011; Accepted 1 March 2011

Academic Editor: Guan Yeoh

Copyright © 2011 Jyh Jian Chen and Chun Huei Chen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This investigation analyzed the three-dimensional momentum and mass transfer characteristics arising from multiple inlets and a single outlet in a micromixing chamber. The chamber consists of a right square prism, an octagonal prism, or a cylinder. Numerical results, presented in terms of velocity vector plots and concentration distributions, indicated that the swirling flows inside the chamber dominate the mixing index. Particle trajectories were utilized to demonstrate the rotational and extensional local flows which produce steady stirring, and the configurations of colored particles at the outlet section, presented at different Re, represented the mixing performance qualitatively. The combination of the Taylor dispersion and the vorticity was first introduced and made the mixing successful. The effects of various geometric parameters and Reynolds numbers on the mixing characteristics were investigated. An optimal design of the cylindrical chamber with 4 inlets can be found. At larger Reynolds numbers, greater inertia caused powerful swirling flows in the chamber and diminished the damping effect on diffusion, which then increased the mixing performance.

1. Introduction

In the last decades, microfluidic devices have been widely utilized in microelectromechanical systems and employed as powerful instruments to perform biological experiments. Some products have been mass-produced, such as micropumps, micromixers, microvalves, and even microreactors. Microfluidic devices have been developed for a broad range of applications, covering genomic analysis [1, 2] and chemical engineering [3], as well as lab-on-a-chip systems [4]. Effective mixing of various fluids is required in many miniaturized multicomponent flow systems. Fast mixing can reduce the time required for analysis and improve reaction efficiency in industrial applications. A micromixer can be integrated in a microfluidic system or used as a single device. Furthermore, the investigation of micromixers is the basis for understanding the transport phenomena in microscale systems. Traditionally, both stirring and the creation of turbulent flow are exploited to improve the mixing characteristics in the macroscopic world. However, it is very difficult to create these conventional mixing mechanisms inside miniaturized systems, because the Reynolds number there is rarely large. In general, since the Reynolds number is very low in a microscopic system (the Reynolds number, Re, indicates the ratio of inertial to viscous forces and is expressed as Re = uD_h/ν, with u being the inlet flow velocity, D_h the hydraulic diameter of the microchannel, and ν the kinematic viscosity), liquid flows inside microchannels are generally observed as laminar flows. Viscous forces dominate the flow fields, and vortices cannot exist in such flow. As a result of the small geometric sizes and the laminar flow regimes in microfluidic analytical devices, fast mixing cannot be achieved by the conventional methods. In a typical microfluidic device, the mixing of two or more miscible fluid streams is dominated by molecular diffusion, which is driven by a concentration difference and is a rather slow process.
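As a back-of-the-envelope check of this flow regime, Re can be computed from the operating point used later in the paper (a 100 μm square inlet channel at 0.75 m/s); the water-like kinematic viscosity of 1×10⁻⁶ m²/s is an assumed property value:

```python
# Back-of-the-envelope check of the flow regime, assuming a water-like
# kinematic viscosity nu = 1e-6 m^2/s.  The 100 um square inlet channel
# described later gives a hydraulic diameter D_h = 4A/P = 100 um.
def reynolds(u, d_h, nu=1e-6):
    """Re = u * D_h / nu, the ratio of inertial to viscous forces."""
    return u * d_h / nu

w = h = 100e-6                     # inlet channel width and height (m)
d_h = 4 * (w * h) / (2 * (w + h))  # hydraulic diameter of the square duct
re = reynolds(u=0.75, d_h=d_h)     # inlet velocity of the base case
print(round(re))                   # -> 75, matching the Re quoted later
```

A value of order 10–100 confirms laminar flow: well below the transitional Reynolds numbers of macroscopic pipe flow, yet large enough for inertia to matter in the chamber.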
According to Fick’s first law of diffusion, the flux of the diffusing species is proportional to the diffusivity and the gradient of the concentration [5]. In addition, the mixing time increases in proportion to the square of the diffusing distance and furthermore depends on the diffusivity of the diffusing compound [6]. In order to speed up mixing, the essentials of a diffusion-based micromixer are, therefore, the maximization of the contact area of the different fluids and the minimization of the diffusing distance. A number of micromixers have been developed along these lines. These can be classified into two groups: active and passive micromixers. Though the time and channel length required for active mixing are less than those for passive mixing, active micromixers are difficult to fabricate, clean, and integrate into microfluidic systems. The pronounced advantage of passive micromixers is that they utilize no external power except the mechanism used to drive the fluids into the microfluidic systems. During the past ten years, studies on passive mixing in micromixers have been conducted. Nguyen and Wu [7] presented an elaborate review of micromixers and reported on the progress of their recent development. Since generating a turbulent liquid flow inside a microdevice is complicated, passive micromixers primarily rely on two types of mixing mechanisms, namely, chaotic advection and molecular diffusion. A chaotic advection micromixer relies on unit operations that stretch and fold fluid volumes over a channel cross-section [8, 9]. The challenge of chaotic micromixers is the microfabrication of complex structures. By considering the characteristics of molecular diffusion, some studies have dealt with injection mixing, where one liquid is injected into the other as microplumes [10]. Other studies proposed micromixers based on the increase of the boundary surface, with the concept of lamination [11].
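The quadratic scaling of mixing time with diffusion distance is easy to quantify. The numbers below are illustrative only: the diffusivity of 1×10⁻⁹ m²/s is a typical small-molecule value in water, an assumption rather than a value taken from this paper:

```python
# The mixing time of a purely diffusive mixer scales as t ~ L^2 / D, so
# splitting a stream into n lamellae cuts the time by a factor of n^2.
# Illustrative numbers only: D = 1e-9 m^2/s is a typical small-molecule
# diffusivity in water, an assumed value rather than one from this paper.
def mix_time(length, diffusivity=1e-9):
    """Characteristic diffusion time across a lamella of width `length` (s)."""
    return length ** 2 / diffusivity

full = mix_time(100e-6)         # one 100 um wide stream: ~10 s
split = mix_time(100e-6 / 10)   # split into 10 lamellae: ~0.1 s, 100x faster
```

This n² payoff is exactly why lamination designs split and rejoin the incoming streams.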
These designs split incoming streams into several narrower confluent streams and rejoin these narrower streams together. The contact surface between the two fluids is thereby increased, which speeds up diffusion. The thickness of each fluid layer is greatly reduced in comparison with the characteristic diffusion length, which reduces the mixing time. Parallel laminated mixers with simple two-dimensional structures were fabricated without difficulty, and mixing in such laminar flows can be easily enhanced. Among the aforementioned studies, two representative types of micromixer were discussed in detail. One design is a device with multiple intersecting channels. A micromixer based on the principle of distributive mixing was presented by Bessoth et al. [12], and the feasibility of the chip being used for mixing in the millisecond regime was demonstrated. Hinsmann et al. [13] also introduced a micromixing device. The fluid entering through the inlet was split and forced to flow under the separating layer. Then the fluid streams entered the main channel and were mixed. To improve the mixing by placing obstacles in the channel, shown in Figure 1(a), Maeng et al. [14] created a disruption to the flow field, which reduced the diffusion path. An in-plane micromixer using the Coanda effect was designed by Hong et al. [15], and a Poiseuille flow was produced in the perpendicular direction of the flow to achieve a great mixing performance. A passive micromixer using a recycle flow was devised by Jeon et al. [16], in which a recycle flow was introduced from side channels to the inlet flow. Cha et al. [17] proposed a mixer designed for chessboard mixing, shown in Figure 1(b), and the mixer achieved a high mixing performance in a very short channel length by enlarging the contact areas between two liquid fluids. Bhagat et al.
[18] reported a planar passive mixer, shown in Figure 1(c); this design incorporated diamond-shaped obstructions within the channel to break up and recombine the flows. The other design splits the main streams into several narrower streams and rejoins them together. A laminated micromixer was first reported by Koch et al. [19], who constructed two silicon-based mixers to achieve good mixing performances. Mixer geometries which share an interdigital arrangement of inlet streams were proposed by some researchers [20, 21]. Results suggested that geometric focusing of a large number of liquid streams is a powerful micromixing principle. In order to increase the mixing by reducing the diffusion distance, the main stream needs to be split into several substreams. In the case of a mixer utilizing distributive mixing, the pressure loss is higher because of the narrower intersecting channels. A circular vortex micromixer with sixteen tangential inlets was presented by Bohm et al. [22]. The liquids were injected into the micromixer to induce a swirling flow field, and the mixing could be performed on a shorter time-scale. Lin et al. [23] developed a three-layer glass vortex mixer for low Re. Two inlet channels divide into 8 individual channels tangent to a circular chamber, and a self-rotation effect is induced in the chamber when Re exceeds a value of 2.32. A swirl micromixer with two branch channels, a central chamber, and an exit tube was carried out by Jin et al. [24]. The dependence of the mixture viscosity and density on the mass fraction of solution was taken into consideration. Cost-effective mixing was obtained because of the generation of the swirling flow. Yang et al. [25] presented a vortex-type micromixer with four tangential inlets which utilized pneumatically driven membranes to generate a swirling flow. A mixing efficiency of 0.95 could be achieved in a time period of 0.6 s for the two-membrane layout.
A three-dimensional polystyrene vortex micromixer was demonstrated by Long et al. [26] and shown in Figure 1(d). The device employs one inlet, one vertical cylinder, and one outlet. Results were compared with the mixing performance of a two-dimensional square-wave channel. For the vortex micromixer, many substreams are needed to improve the mixing, since the mixing of two or more fluid streams occurs by virtue of molecular diffusion. To reduce the pressure loss and increase the contact area of the two fluids, a mixing chamber is employed inside the micromixer. In previous work [22, 23], eight or sixteen inlets were used to inject two different kinds of fluids into the mixing chamber. The more inlets utilized, the more significant the swirling effects that could be seen. However, the experimental apparatus would be more complicated in order to supply all these inlet flows. The effects of the number of inlet channels have not been studied in detail. In all of the aforementioned studies, the effects of mixer geometries on flow characteristics inside the vortex micromixers were not investigated. The geometry of the mixer affects the liquid flows, so the success of the mixing depends strongly on this effect. In fluid dynamics, vorticity is considered as the circulation per unit area at a point in a fluid flow field. Moreover, when two fluids are flowing in a flat channel, both convective and molecular diffusion contribute to mixing performance due to the Taylor dispersion effect. The combination of the radial dispersion and the vortex flow makes the mixing successful, and no previous studies examined the flow and mixing performances inside the vortex micromixer by utilizing the concepts of the vorticity and the Taylor dispersion. Characterization of the mixing performance of mixing devices using CFD simulations has been described in many publications.
Particle trajectories are used to numerically study the mixing and fluidic behaviors produced by chaotic advection inside the microchannel. Therefore, understanding the fluid flows in the vortex micromixer by utilizing particle trajectories is of marked importance in the related fields of microfluidics. The objective of the present study is to demonstrate a designed lamination micromixer with a right square prism, an octagonal prism, or a cylinder, plus several pairs of inlets and one outlet. A three-dimensional fluid field is used to describe the flow characteristics in the microfluidic system. The effects of swirling flows on mixing performances are expressed in terms of particle trajectories, vorticity profiles, concentration distributions, and mixing indexes. The effects of various geometric parameters and Re on the mixing characteristics were also investigated.

2. Mathematical Model

Figure 2 schematically depicts one of the basic configurations for the analysis. The physical problem is that the liquids to be mixed enter the mixing chamber at a specified speed from several tangential inlets and then flow out from the outlet channel at the central location of the chamber. The physical domain is a three-dimensional volume consisting of a right square prism, an octagonal prism, or a cylinder. W is the width of the inlet channel, and H is the height of the inlet channel. In addition, the outlet channel has its own height, with a specified length and width. The boundary surfaces of the domain are solid, rigid, and impermeable walls. It is assumed that the steady state is reached in the chamber and the channels; the variations of the concentration do not modify the viscosity and the density of the fluid; the channel walls are assumed to be smooth. The effects of surface tension on the interaction between two miscible liquids on Earth are masked by the effects of gravity and density.
On Earth, miscible liquids effectively combine into one relatively homogeneous (or equally distributed) solution [27]. There is no real interface when liquids are perfectly miscible. Surface tension forces in this study are neglected. The equations to be solved are the continuity equation, the Navier-Stokes equations, and the conservation of species equations, along with the appropriate boundary conditions. The following assumptions have been adopted to obtain a proper mathematical formulation of this problem.

(1) Fluid flow is Newtonian and laminar. As reasonably assumed herein, the fluid is incompressible, since the velocities encountered in this type of flow are expected to be markedly lower than the sonic velocity in the fluid.
(2) All thermophysical properties of the liquid are evaluated at the fixed reference temperature of 300 K.
(3) Gravity is negligible.
(4) The no-slip and no-penetration conditions usually encountered at solid surfaces are applied.

The conservation of mass and momentum equations are solved to determine the flow field of the liquids. In symbolic notation, the continuity equation can be expressed as follows:

∇·V = 0, (1)

where V is the fluid velocity vector. The momentum equation for a continuum is the analogue of Newton’s second law for a point mass. In symbolic notation the momentum equation is expressed as

ρ(V·∇)V = −∇p + μ∇²V, (2)

where ρ is the fluid density, p is the pressure, and μ is the fluid viscosity. Species transport by pressure-driven flows occurs as a result of convection and diffusion and can be described by the combined species convection-diffusion equation. In symbolic notation the diffusion-convection equation is

(V·∇)C = D∇²C, (3)

where D is the diffusivity and C is the mass concentration. This equation must be solved together with (1) and (2) in order to achieve computational coupling between the velocity field solution and the concentration distribution.
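How a velocity field transports a concentration field can be illustrated with a one-dimensional explicit finite-difference toy version of the convection-diffusion balance; all parameter values below are assumed for illustration, and this is a sketch only, not the three-dimensional finite-volume scheme used in the paper:

```python
import numpy as np

# 1-D explicit finite-difference toy of the convection-diffusion balance:
# march dC/dt + u dC/dx = D d2C/dx2 in time until it settles.  Assumed
# parameter values; not the paper's 3-D finite-volume scheme.
n, length = 200, 1e-3                     # grid points, domain length (m)
dx = length / (n - 1)
u, D = 0.01, 1e-9                         # velocity (m/s), diffusivity (m^2/s)
dt = 0.4 * min(dx / u, dx**2 / (2 * D))   # respect CFL and diffusion limits

c = np.zeros(n)
c[: n // 2] = 1.0                         # species A left, species B right

for _ in range(2000):
    adv = -u * (c[1:-1] - c[:-2]) / dx                 # first-order upwind (u > 0)
    dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2   # central diffusion
    c[1:-1] += dt * (adv + dif)
    c[0], c[-1] = 1.0, 0.0                # Dirichlet end values

assert 0.0 <= c.min() <= c.max() <= 1.0   # concentrations stay bounded
```

The upwind discretization keeps the scheme stable and the concentrations bounded, at the cost of some numerical diffusion, a point the paper returns to in the grid-quality discussion below.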
The liquids enter into the mixing chamber from several tangential inlets and then flow out from the outlet channel at the central location of the chamber. The rotation of a fluid about a vertical axis can be observed in the chamber. The Reynolds number in the physical domain, Re, indicates the ratio of inertial forces due to rotation of a fluid about a vertical axis to viscous forces. The effects of inertia on the elastic instabilities in Dean and Taylor-Couette flows were investigated through a linear stability analysis [28]. It was shown that, when rotation of the inner cylinder drives Taylor-Couette flow, the Reynolds stresses produce energy and thus are destabilizing, while for the flow driven by the rotation of the outer cylinder alone, the Reynolds stresses dissipate energy, thus stabilizing the flow. The stability of axially symmetric flow at nonzero Reynolds number was also analyzed [29]. The effect of inertia on axisymmetric disturbances was investigated, and it was shown that inertia tended to destabilize the flow. The physical system in our micromixer is such that the liquids enter the mixing chamber with a fixed wall from several tangential inlets and then flow out from the outlet channel at the central location of the chamber. So the inertial force tends to destabilize the system, and then the mixing performance can be enhanced. On the other hand, the viscous force tends to stabilize the system and damp out perturbations, such as the diffusion.

3. Numerical Analysis

For a better understanding of the fluid flow and the mixing characteristics in the mixer, a computational fluid dynamics package, CFD-ACE+, using a finite volume approach is utilized to simulate three-dimensional flow fields as well as two-fluid mixing. A nonlinear steady-state algorithm is used for hydrodynamic calculations, and a linear steady-state one is applied to solve the diffusion-convection equation.
The inlet boundary condition is set to have the same constant velocity for the two fluids, and the outlet is assumed to be at a constant pressure. The user scalar module enables us to compute the transport of species. In this study, passive transport in the fluid volumes is assumed, and the species do not affect the velocity or any other computed field. For the scalars, a zero-flux condition is imposed at the walls, while for the Dirichlet boundary conditions at the inlets the normalized concentration is set to 1 (for species A) or 0 (for species B). This study considered three different configurations of the flow system. The general configuration consists of a right square prism, an octagonal prism, and a cylinder (cf. Figure 3). Three-dimensional structured grids are employed. The grid systems of the computation domain are chosen to assure orthogonality, smoothness, and low aspect ratios to prevent numerical divergence. Poor grid systems can enhance numerical diffusion effects. Numerical errors due to discretization of the convective terms in the transport equation of the concentration fields introduce an additional, unphysical diffusion mechanism [20]. This so-called numerical diffusion is likely to dominate diffusive mass transfer on computational grids. Higher-order discretization schemes reduce the numerical errors. Numerical diffusion strongly depends on the relative orientation of the flow velocity and the grid cells, and it can be minimized by choosing grid cells with edges parallel to the local flow velocity. With regard to our grid systems shown in Figure 2, numerical diffusion is reduced considerably in the computational results. To minimize the effects of the meshing on mixing, the mesh density is increased until the mixing index variation is less than 5%. The SIMPLEC method is adopted for pressure-velocity coupling, and all spatial discretizations are performed using the second-order upwind scheme.
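The grid-independence procedure just described (refining the mesh until the mixing index changes by less than 5%) can be sketched as a simple loop; `solve_on_mesh` below is a hypothetical stand-in for the actual CFD solve:

```python
# Sketch of the grid-independence test: refine the mesh until the monitored
# mixing index changes by less than 5% between successive refinements.
# `solve_on_mesh` is a hypothetical stand-in for the actual CFD solve.
def solve_on_mesh(n_cells):
    # placeholder that converges toward a grid-independent value of 0.893
    return 0.893 * (1.0 - 1.0 / n_cells)

def refine_until_converged(start=10, tol=0.05, max_refinements=12):
    n, prev = start, solve_on_mesh(start)
    for _ in range(max_refinements):
        n *= 2
        cur = solve_on_mesh(n)
        if abs(cur - prev) / abs(prev) < tol:
            return n, cur             # mesh density and converged value
        prev = cur
    raise RuntimeError("no grid convergence within refinement budget")

n_cells, mi = refine_until_converged()
```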
The algebraic multigrid (AMG) solver is utilized for pressure correction, whereas the conjugate gradient squared (CGS) and preconditioned (Pre) solvers are employed for velocity and species corrections. The solution is considered to be converged when the relative errors of all independent variables are less than 10^-4 between successive sweeps. The cross-sectional images of the outlets obtained from the CFD software are converted to text formats. The uniformity of mixing at sampled sections is assessed by determining the mixing index of the solute concentration, which is defined as

mixing index = 1 − sqrt((1/A) ∫_A (I − Ī)² dA) / sqrt((1/A₀) ∫_{A₀} (I₀ − Ī)² dA₀),

where I is the concentration value (between 0 and 1) on the sampled section A, I₀ is the concentration value at the inlet plane A₀, and Ī is the averaged value of the concentration over the sampled section. The mixing index ranges from 0 for no mixing to 1 for complete mixing.

4. Conceptual Design

In this section, the design concept of a three-dimensional swirling micromixer is introduced. From the preliminary studies on the vortex micromixers, a strongly rotational motion of the fluids is generated. A designed lamination micromixer with a chamber, plus several pairs of inlets and one outlet, is demonstrated. The diameter of the cylindrical chamber is equal to 800 μm. The inlets are tangential to the chamber, and the outlet is located in the central region of the chamber. The width, W, and height, H, of the inlets are set at the value of 100 μm. The width, length, and height of the outlet are set at the value of 100 μm, too. All the inlets are symmetric with respect to the central axis of the mixing chamber. The fluid properties are set to the physical and thermodynamic properties of water: the diffusivity, D, the kinematic viscosity, ν, and the density, ρ, of 1×10^3 kg/m³. The fluid with red color stands for species A, and the fluid with blue color for species B. The concentration of species is normalized as one and zero for the inlets of species A and species B, respectively.
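A discrete counterpart of the mixing index defined in Section 3 can be sketched as follows; the half-zeros/half-ones inlet reference is an assumption standing in for the unmixed inlet plane:

```python
import numpy as np

# Discrete counterpart of the mixing index: 0 for no mixing, 1 for complete
# mixing.  `c` holds concentration samples on the outlet section; the
# unmixed inlet reference is taken as half zeros and half ones.
def mixing_index(c):
    c = np.asarray(c, dtype=float)
    inlet = np.where(np.arange(c.size) < c.size // 2, 0.0, 1.0)
    rms = np.sqrt(np.mean((c - c.mean()) ** 2))      # deviation at the outlet
    rms0 = np.sqrt(np.mean((inlet - inlet.mean()) ** 2))  # unmixed reference
    return 1.0 - rms / rms0

print(mixing_index([0.5, 0.5, 0.5, 0.5]))   # fully mixed   -> 1.0
print(mixing_index([0.0, 0.0, 1.0, 1.0]))   # fully unmixed -> 0.0
```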
The parameters of the above values are used unless noted otherwise. A streak line is defined as a line formed by the particles which pass through a given location in the flow field [30]. In a steady flow, the streak lines coincide with the stream lines and the particle trajectories. These streak lines can be seen as the trajectories of the particles released from specific locations of the inlets to the outlet and are used to describe the nature of the flow field inside the micromixer. Figure 4 depicts the streak lines as the fluids flow through the cylindrical chamber at an inlet velocity of 0.75 m/s and a chamber height of 100 μm. The corresponding Re is 75. The initial positions of the particles are shown as “+” markers in the figures. Fluids enter the chamber in the tangential directions, and the flow is almost unidirectional. As the fluid approaches the entrance of the other fluid, it is deflected and forms a clockwise vortex (Figure 4(a)). These streak lines are symmetric about the center of the chamber. The rotational flow field is observed and extends towards the outlet of the mixer (Figure 4(b)). As shown in Figures 4(c) and 4(d), the trajectories are quite regular. Thus the stretching of the interface can be presented, and the rotational and extensional local flows can produce steady stirring in the chamber. In order to perform a comprehensive analysis of the mass transfer mechanism in the vortex micromixer, the cross-sectional concentration distributions are utilized to demonstrate the mixing characteristics. The concentration distributions for mixing in the cylindrical chamber are shown in Figure 5. The predicted concentration distributions of the two fluids viewed from the top and at eight cross-sectional areas are presented in Figures 5(a) and 5(b), respectively. In order to improve the mixing, the two different fluids enter the four inlets alternately.
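Since streak lines in a steady flow coincide with particle trajectories, they can be traced numerically by integrating tracer positions through the velocity field. The analytic field below is a toy stand-in (clockwise swirl plus a sink toward the outlet), with assumed rates, not the simulated chamber flow itself:

```python
import numpy as np

# Tracer advection sketch: integrate particle positions through a steady
# 2-D velocity field with forward Euler.  The analytic swirl-plus-sink
# field below is a toy stand-in for the chamber flow, with assumed rates.
def velocity(p, omega=500.0, sink=50.0):
    x, y = p
    return np.array([omega * y - sink * x,     # clockwise rotation ...
                     -omega * x - sink * y])   # ... plus radial inflow

def trajectory(p0, dt=1e-5, steps=2000):
    """Forward-Euler integration of a passive tracer released at p0."""
    path = [np.array(p0, dtype=float)]
    for _ in range(steps):
        path.append(path[-1] + dt * velocity(path[-1]))
    return np.array(path)

path = trajectory([400e-6, 0.0])               # released near a tangential inlet
r0, r1 = np.hypot(*path[0]), np.hypot(*path[-1])
assert r1 < r0                                 # the tracer spirals toward the outlet
```

For production use a higher-order integrator (e.g. RK4) with interpolation of the discrete CFD velocity field would replace the analytic field and forward Euler.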
The inlet channels are tangential to the mixing chamber, and a rotating flow field is induced owing to the high inertial force, as shown in Figure 5(a). Fluid A meets fluid B near inlet B, and the interface between the two fluids becomes curved. The contact area of the two fluids is increased. Then the two fluids enter the outlet channel. Because the area of the outlet is much smaller than that of the chamber, the diffusion distance of the two fluids in the outlet channel becomes smaller, and the mixing can be enhanced. The two fluids flow tangentially into the chamber and flow clockwise inside the chamber. Since the outlet is located at the central region of the chamber, the fluids tend to flow out of the outlet channel, and the vortex flow is formed. Flows in the angular and radial directions are both significant. When two fluids flow in a capillary tube, both convective and molecular diffusion contribute to the mixing performance due to the axial dispersion effect. This is the so-called Taylor dispersion. In this micromixer, the height of the chamber is small, and the shear force establishes a radial concentration gradient enhancing the mass transport rate in the radial direction. The radial dispersion effect can be observed in Figure 5(b). This system shows mixing characteristics similar to Taylor dispersion, with contributions from both diffusion and convection. The combination of the radial dispersion and the vortex flow makes the mixing successful. In fluid dynamics, vorticity is the curl of the fluid velocity. It can also be considered as the circulation per unit area at a point in a fluid flow field. It is a vector quantity whose direction is along the axis of the fluid's rotation. So the value of the vorticity inside the chamber volume can be seen as an indicator of the vortex flow. Figure 6(a) shows the vorticity distributions at three cross-sectional areas of the cylindrical chamber, the same as the case illustrated in Figure 4.
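Vorticity as the curl of the velocity is straightforward to evaluate on a sampled horizontal slice, where the axial component is ω_z = ∂v/∂x − ∂u/∂y. The solid-body swirl below is a toy stand-in for a chamber cross-section, with an assumed rotation rate:

```python
import numpy as np

# Axial vorticity omega_z = dv/dx - du/dy on a horizontal slice.  The
# solid-body swirl is a toy stand-in for a sampled chamber cross-section,
# with an assumed angular rate.
n = 64
x = np.linspace(-400e-6, 400e-6, n)           # chamber radius ~400 um
X, Y = np.meshgrid(x, x, indexing="xy")
omega = 500.0                                 # assumed angular rate (1/s)
u, v = omega * Y, -omega * X                  # clockwise solid-body rotation

dx = x[1] - x[0]
vort_z = np.gradient(v, dx, axis=1) - np.gradient(u, dx, axis=0)
print(vort_z.mean())                          # approx -2*omega = -1000 1/s
```

For solid-body rotation the vorticity is uniform (−2ω); in the simulated chamber it instead peaks near the central region and the outlet channel, as described below.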
It can be seen that the vorticity is much larger near the central region of the chamber and the outlet channel than in other regions. So the rotation rate of the fluid is greatest at the center and decreases progressively with distance from the center. The vorticity depends greatly on the velocity gradient in the mixer. As the fluids enter the outlet channel, the gradient of the velocity near the wall of the channel becomes larger, and the vorticity increases further. As shown in Figure 6(b), the vorticity along the central axis from the bottom of the chamber increases, reaches its maximum value near the entrance of the outlet channel, and then decreases toward the outlet. The vortex flow is intensified near the entrance of the outlet channel, and the contact areas of the fluids can be further enlarged there. In an effort to understand the two-fluid mixing inside the mixer, velocity vector planes are utilized to demonstrate the rotation of the fluid flows. Figure 7 illustrates the mixing and flow characteristics at eight cross-sectional areas from the bottom of the chamber for the case presented in Figure 4. The cross-sectional area in Figure 7(a) is located 25 μm from the bottom of the chamber, and the following sections, Figures 7(b) to 7(h), are spaced 25 μm apart. The concentration distributions and the vector planes are shown. With respect to the mass transport mechanism in mixing, the two fluids flow tangentially into the chamber, and the radial component of velocity grows because the fluids tend to flow through the outlet channel. These two components result in an overall rotating flow field inside the chamber. The fluids flow clockwise, and the contact area between them increases, as shown in Figure 7(a). In the central region of the chamber, the mixing performance is great. The rotation effect grows as the fluids move away from the bottom of the chamber.
The mixing region spreads from the central region, and the mixing becomes much better, as presented in Figure 7(b). As the fluids approach the top of the chamber, the mixing region shrinks owing to the no-slip conditions at the wall of the chamber. However, the rotation effect increases, and, as shown in Figure 7(c), the mixing is enhanced even more. At the location of the upper wall of the chamber, the pattern of the mixing characteristics, as shown in Figure 7(d), is similar to the pattern in Figure 7(a). The circulation of the fluid flow in the chamber is clearly observed. The maximum velocity also appears near the central region of the chamber. Then the fluids enter the outlet channel. The distance of molecular diffusion is reduced much more as the fluids flow from the chamber into the channel. Thus the rotating flow achieves superior mixing. The increased interface area of the two fluids can promote mass transfer based on diffusion. The configurations of the interfacial areas between two different fluids play an important role in micromixers. As shown in Figure 8, the interfacial regions of the two fluids are demonstrated. The interfacial regions represent the area in which the concentration over the sampled section is equal to 0.5. Owing to the cylindrical chamber of the mixer, as in the case shown in Figure 4, the interfacial area between the two fluids is stretched. Four inlets and one outlet in the mixer are studied to show the influences of self-rotation effects; it is found that the interfacial layers are stretched spirally from the intersections of the inlet channels and the chamber to the central region of the chamber. The interfacial regions between the two fluids are enlarged, and a steady stirring flow is created, which are exploited to enhance the mixing performances.

5. Results and Discussion

This study solved the concentration distributions of liquid fluids in the micromixers, and the effects of various geometric parameters on momentum and mass transport in solution were examined. It also investigated the influences of various Reynolds numbers on the momentum and mass transfer characteristics of the fluids in terms of the mixing index and concentration distributions. In the following, micromixers with different chamber geometries, namely right square prism, octagonal prism, and cylinder, with four inlets and one outlet are studied to show the influences of self-rotation effects on mixing. Momentum and mass transfer in mixing chambers with different configurations are compared, and the influences of the various geometrical parameters on the mixing results are examined. The length of the right square prism is equal to 800 μm. The length of each side of the octagonal prism is about 200.63 μm, and the distance between two opposite sides of the octagonal prism is equal to 800 μm. The diameter of the cylinder is equal to 800 μm. Figure 9 summarizes the results of mixing with the various configurations. The velocity at all inlets is equal to 0.75 m/s, and the height of the mixing chamber is 100 μm, as is the height of the inlet channels. The predicted concentration profiles of the two fluids viewed from the top and at the cross-section are presented. Corresponding results are listed from left to right for the right square prism, octagonal prism, and cylinder chambers, respectively. The corresponding mixing indexes at the outlet are 0.881, 0.877, and 0.893, respectively. The radial dispersion in the chamber is affected by the configuration of the mixing chamber. In Figure 9(a), the interface of the two fluids can be clearly observed. The effect of the radial dispersion is the smallest for the case of the right square prism.
From the results shown in Figure 9(c), the mixed fluids are well dispersed near the central region of the chamber, and the effect of radial dispersion is largest for the cylinder. The vortex flow is intensified most in the cylindrical chamber, where the contact area of the fluids can be further enlarged. The results show that the self-rotation effect is enhanced in micromixers with a cylindrical chamber, which makes this mixer an effective vortex mixer. To achieve a higher mixing index, various numbers of inlets are used to investigate the mixing characteristics of micromixers with octagonal prism chambers, as shown in Figure 10. Results for 8, 4, and 2 inlets, with inlet velocities of 0.25m/s, 0.5m/s, and 1m/s, respectively, are presented in Figures 10(a), 10(b), and 10(c). For 8 inlets, the contact area of the two fluids increases dramatically. The cross-section of the outlet is square, and 8 inlets are used in the mixer. It is found that the pattern of the mixing characteristics is not symmetric with respect to the center of the chamber. As the red fluid (species A) approaches the entrance of the blue fluid (species B), it is deflected and forms a clockwise vortex, and then flows into the entrance of the outlet channel. The fluid of species B meets the fluid of species A around the junction of the inlet and the chamber, where it is also deflected into a clockwise vortex. However, because the inlet velocity is small (0.25m/s), it enters the outlet channel directly. The swirling effects are shown in Figure 10(a). Figure 10 also depicts the streak lines as the fluids flow through the octagonal prism chamber. Figure 10(a) shows that the trajectories of species B are quite similar to those in the cases of Figures 10(b) and 10(c), whereas the trajectories of species A differ from those of species B.
For the case of 4 inlets, the contact area of the two fluids in the chamber decreases. Owing to the larger inlet velocity (0.5m/s), the fluid swirls around the central region along a longer path, and as a result a large mixing region is created. For the case of 2 inlets, the contact area of the two fluids decreases significantly, and the interfacial layer only exists from the junction of the inlet of species A and the chamber to the opposite junction along the diagonal. The corresponding mixing indices for the chambers with 8, 4, and 2 inlets are 0.676, 0.785, and 0.681, respectively. The mixing chamber with 4 inlets shows the greatest mixing index among these micromixers, so an optimal design of the chamber with 4 inlets can be found. This study analyzes three different configurations of a microfluidic system via numerical simulations. The results are compared with previous works [23, 24]: lamination micromixers with a cylinder plus one or four pairs of inlets and one outlet. Results for various total inlet flow rates on the mixing performance are presented in Figure 10(d). Because the inertial force of the fluids is so small as to be negligible at flow rates less than 0.5mL/min, mixing there is dominated by pure molecular diffusion. The interface between streams is large for the circular vortex micromixer with 8 tangential inlets, and mixing in this micromixer is the highest. When the flow rate is greater than 0.5mL/min, the increased inertial force enlarges the rotational flow, and the mixing in the vortex micromixer with 2 tangential inlets is improved. Our designed mixer with 4 inlets performs well over the range of flow rates from 0 to 1.2mL/min. The effects of the chamber heights, , and the outlet channel lengths, , on mixing in the cylindrical chamber are presented in Figure 11. The predicted mixing index at various chamber heights (upper portion) and at various channel lengths (bottom portion) is shown.
The corresponding results are listed from (1) to (4) for chamber heights of 100μm, 200μm, 300μm, and 400μm, respectively, with the length of the outlet channel being 100μm, and from (5) to (8) for channel lengths of 100μm, 200μm, 300μm, and 400μm, respectively, with the height of the chamber being 100μm. The mixing index of the chamber decreases at first and then increases slowly, because the vortex flow is weak for the cases with a larger chamber height. Nevertheless, the contact areas in a chamber with a larger volume are augmented; the mixing can therefore be improved, and the mixing index starts to increase for cases (3) and (4). As the two fluids enter the outlet channel, the axial flow of the fluids is dominant and the rotation of the fluids is almost constant along the channel. The contact areas are enlarged with increasing channel length, and the mixing index improves as well. Figure 12 plots the effects of various inlet velocities on mixing for a mixing chamber height of 100μm, showing not only the mixing within the microchamber but also the vortex index of the mixer. The inertial force is very important when the inlet velocity is 1m/s and the corresponding Re is 100. Because the wall of the inlet channel is tangential to the wall of the chamber and the inertial force drives the liquid flow forward, each fluid meets the other with strong inertia. The large inertial force causes the fluid to proceed quickly and a large clockwise vortex to form. Inertial force tends to destabilize a system, whereas viscous force tends to stabilize a system and damp out perturbations such as diffusion. From the definition of Re, strong inertia causes a powerful vortex flow in the chamber, and the damping effect on diffusion is diminished, so the mixing performance can be enhanced. Re decreases with decreasing inlet velocity, and the viscous force becomes dominant.
The inertial effect is not very obvious, and the area of the mixing region on the top wall is decreased. The inertial force is negligible when the inlet velocity is 0.1m/s. The competition between inertia and viscous force permits the shape of the interface to be clearly observed, and the mixing index is clearly the lowest. Because the mixing at a position depends on the upstream history of the flow, the vorticity history determines the mixing performance at a cross-section of the channel. A vortex index, which represents the vorticity history of a flow [14], is introduced into our study. The influence of inlet velocity on the vortex index is plotted in Figure 12(d). A large inlet velocity represents a large Re. At low Re, , viscous forces in the fluid are larger than inertial forces, so inertia can be neglected. The streak lines from the center of the two inlets shown in the upper part of Figure 12(d) point almost directly towards the outlet; no swirling flow can be observed. This is the so-called no-swirling region. When Re is 15−100, both viscous and inertial forces are important. The streak lines shown in the upper right side of Figure 12(d) are deflected. Rotational and extensional flow can be created in the chamber, and the interface between the two fluids increases; therefore the mixing performance is also improved. The particle trajectories for the micromixer at various inlet velocities are investigated. The particle trajectories are the traces of particles released from specific locations at the inlets to the outlet. In Figure 13(a), particles are released from the four inlet sections, viewed from the inlet. Three lines, located at the central line of the cross section and at offsets of 25 and 35μm from it, are selected, with twenty-four particles per line.
The configuration of colored particles at the outlet section can represent the mixing performance qualitatively and is shown for Re of 100, 50, and 10 in Figures 13(b), 13(c), and 13(d), respectively. Particles are labeled with a specific color: red particles stand for species A and blue particles for species B. From the particle distributions, it is easily seen that the self-rotational effect is very strong at Re of 100 and the pattern of the particle distribution is very irregular. Particles of different colors are spread over the entire cross section, so the mixing is enhanced. When the inlet velocity decreases to 0.5m/s, corresponding to Re of 50, four spiral-like patterns are created at the outlet cross section. The stretching of the flow in the angular direction can clearly be seen, and the interface of the two fluids is enlarged. These clockwise patterns are more orderly than those at Re of 100, and a similar pattern can be seen at the outlet section in Figure 12(b). No rotational effect is observed at Re of 10 (shown in Figure 13(d)); the patterns of the particle distributions are very simple, resembling straight lines. The exchange of the two fluids is negligible, and the mixing is very poor. 6. Conclusions This study investigated the steady behavior of three-dimensional momentum and mass transfer in a microchamber consisting of a right square prism, an octagonal prism, or a cylinder. The problem was solved with computational fluid dynamics software using a finite element approach. The results, presented in terms of particle trajectories, vorticity profiles, and concentration distributions, indicated that the swirling flows inside the chamber dominate the mixing index.
Particle trajectories were utilized to demonstrate the rotational and extensional local flows that produce steady stirring; the combination of radial dispersion and vortex flow makes the mixing successful. The effects of various geometric parameters and inlet velocities on momentum and mass transfer mechanisms were discussed in detail. In addition, the number of inlets significantly affects the mixing characteristics: the mixing index decreases when fewer inlets are used, while with larger numbers of inlets significant swirling effects can be seen and the mixing index is higher. It is also shown that the force driven by inertia has a crucial impact on the mixing. At higher Reynolds number, , inertia causes more vortex flow in the chamber and diminishes the damping effect on diffusion, thereby leading to a higher mixing performance. Finally, the configurations of colored particles at the outlet section were applied to qualitatively predict the performance of micromixers. Particular emphasis was placed on a qualitative analysis of the physical phenomena, which will benefit the study of microfluidic components in industrial applications. The authors would like to thank the National Science Council of the Republic of China for financially supporting this research under Contract no. NSC 96-2221-E-020-021. We are also grateful to the National Center for High-performance Computing for computer time and facilities. 1. M. K. McQuain, K. Seale, J. Peek et al., “Chaotic mixer improves microarray hybridization,” Analytical Biochemistry, vol. 325, no. 2, pp. 215–226, 2004. 2. S. R. Nugen, P. J. Asiello, J. T. Connelly, and A. J. Baeumner, “PMMA biosensor for nucleic acids with integrated mixer and electrochemical detection,” Biosensors and Bioelectronics, vol. 24, no. 8, pp. 2428–2433, 2009. 3. C. Rosenfeld, C. Serra, C. Brochon, V. Hessel, and G.
Hadziioannou, “Use of micromixers to control the molecular weight distribution in continuous two-stage nitroxide-mediated copolymerizations,” Chemical Engineering Journal, vol. 135, no. 1, pp. S242–S246, 2008. 4. N. Gadish and J. Voldman, “High-throughput positive-dielectrophoretic bioparticle microconcentrator,” Analytical Chemistry, vol. 78, no. 22, pp. 7870–7876, 2006. 5. P. I. Frank, P. D. David, L. B. Theodore, and S. L. Adrienne, Fundamentals of Heat and Mass Transfer, John Wiley & Sons, New York, NY, USA, 2006. 6. E. L. Cussler, Diffusion Mass Transfer in Fluid Systems, Cambridge University Press, New York, NY, USA, 2009. 7. N. T. Nguyen and Z. Wu, “Micromixers—a review,” Journal of Micromechanics and Microengineering, vol. 15, no. 2, pp. R1–R16, 2005. 8. J. M. Ottino and S. Wiggins, “Introduction: mixing in microfluidics,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 362, no. 1818, pp. 923–935, 2004. 9. C. S. Lu, J. J. Chen, J. H. Liau, and T. Y. Hsieh, “Flow and concentration analysis inside a microchannel with lightning grooves at two floors,” Journal of Biomechatronics Engineering, vol. 2, pp. 13–32, 2009. 10. R. Miyake, T. S. J. Lammerink, M. Elwenspoek, and J. H. J. Fluitman, “Micro mixer with fast diffusion,” in Proceedings of the 6th IEEE International Workshop on Micro Electromechanical Systems (MEMS '93), pp. 248–253, IEEE Computer Society Press, San Diego, Calif, USA, February 1993. 11. H. Mobius, W. Ehrfeld, V. Hessel, and T. Richter, “Sensor controlled processes in chemical microreactors,” in Proceedings of the 8th International Conference on Solid-State Sensors and Actuators (Transducers '95), pp. 775–778, Stockholm, Sweden, June 1995. 12. F. G.
Bessoth, A. J. DeMello, and A. Manz, “Microstructure for efficient continuous flow mixing,” Analytical Communications, vol. 36, no. 6, pp. 213–215, 1999. 13. P. Hinsmann, J. Frank, P. Svasek, M. Harasek, and B. Lendl, “Design, simulation and application of a new micromixing device for time resolved infrared spectroscopy of chemical reactions in solution,” Lab Chip, vol. 1, no. 1, pp. 16–21, 2001. 14. J. S. Maeng, K. Yoo, S. Song, and S. Heu, “Modeling for fluid mixing in passive micromixers using the vortex index,” Journal of the Korean Physical Society, vol. 48, no. 5, pp. 902–907, 2006. 15. C. C. Hong, J. W. Choi, and C. H. Ahn, “A novel in-plane passive microfluidic mixer with modified Tesla structures,” Lab Chip, vol. 4, no. 2, pp. 109–113, 2004. 16. M. K. Jeon, J. H. Kim, J. Noh, S. H. Kim, H. G. Park, and S. I. Woo, “Design and characterization of a passive recycle micromixer,” Journal of Micromechanics and Microengineering, vol. 15, no. 2, pp. 346–350, 2005. 17. J. Cha, J. Kim, S. K. Ryu et al., “A highly efficient 3D micromixer using soft PDMS bonding,” Journal of Micromechanics and Microengineering, vol. 16, no. 9, pp. 1778–1782, 2006. 18. A. A. S. Bhagat, E. T. K. Peterson, and I. Papautsky, “A passive planar micromixer with obstructions for mixing at low Reynolds numbers,” Journal of Micromechanics and Microengineering, vol. 17, no. 5, pp. 1017–1024, 2007. 19. M. Koch, D. Chatelain, A. G. R. Evans, and A. Brunnschweiler, “Two simple micromixers based on silicon,” Journal of Micromechanics and Microengineering, vol. 8, no. 2, pp. 123–126, 1998. 20. S. Hardt and F. Schonfeld, “Laminar mixing in different interdigital micromixers: II. Numerical simulations,” AIChE Journal, vol. 49, no.
3, pp. 578–584, 2003. 21. P. Lob, H. Pennemann, V. Hessel, and Y. Men, “Impact of fluid path geometry and operating parameters on l/l-dispersion in interdigital micromixers,” Chemical Engineering Science, vol. 61, no. 9, pp. 2959–2967, 2006. 22. S. Bohm, K. Greiner, S. Schlautmann, S. de Vries, and A. van den Berg, “A rapid vortex micromixer for studying high-speed chemical reactions,” in Proceedings of the 5th International Conference on Micro Total Analysis Systems (micro-TAS '01), Monterey, Calif, USA, October 2001. 23. C. H. Lin, C. H. Tsai, and L. M. Fu, “A rapid three-dimensional vortex micromixer utilizing self-rotation effects under low Reynolds number conditions,” Journal of Micromechanics and Microengineering, vol. 15, no. 5, pp. 935–943, 2005. 24. S. Y. Jin, Y. Z. Liu, W. Z. Wang, Z. M. Cao, and H. Koyama, “Numerical evaluation of two-fluid mixing in a swirl micro-mixer,” Journal of Hydrodynamics, vol. 18, no. 5, pp. 542–546, 2006. 25. S. Y. Yang, J. L. Lin, and G. B. Lee, “A vortex-type micromixer utilizing pneumatically driven membranes,” Journal of Micromechanics and Microengineering, vol. 19, no. 3, Article ID 035022, 2009. 26. M. Long, M. A. Sprague, A. A. Grimes, B. D. Rich, and M. Khine, “A simple three-dimensional vortex micromixer,” Applied Physics Letters, vol. 94, no. 13, Article ID 133501, 2009. 27. J. A. Pojman, N. Bessonov, V. Volpert, and M. S. Paley, “Miscible fluids in microgravity (MFMG): a zero-upmass investigation on the international space station,” Microgravity Science and Technology, vol. 19, no. 1, pp. 33–41, 2007. 28. Y. L.
Joo and E. S. G. Shaqfeh, “The effects of inertia on the viscoelastic Dean and Taylor-Couette flow instabilities with application to coating flows,” Physics of Fluids A, vol. 4, no. 11, pp. 2415–2431, 1992. 29. D. O. Olagunju, “Inertial effect on the stability of viscoelastic cone-and-plate flow,” Journal of Fluid Mechanics, vol. 343, pp. 317–330, 1997. 30. H. Wang, P. Iovenitti, E. Harvey, and S. Masood, “Numerical investigation of mixing in microchannels with patterned grooves,” Journal of Micromechanics and Microengineering, vol. 13, no. 6, pp. 801–808, 2003.
Barrel shifter

Is it considered nerdy to have a favourite functional unit in a microprocessor? Well, you can call me whatever names you like, but for my money, I'd have to pick the barrel shifter as being my favourite.

Where other functional units calculate such dull and banal things as x+y, x-y, x&y and x|y, the barrel shifter gets the exciting task of calculating the results of shift operations like x<<y and x>>y (or, if you prefer, x·2^y and x/2^y), as well as the somewhat more exotic and seldom-seen (x>>y) + (x<<(8*sizeof(int)-y)) 'rotate' operation. These operations make it possible for us to do multiplication, division, cyclic redundancy checks, all manner of cryptography things, floating point arithmetic, and to store Boolean true/false values in single bits within words; and all of the myriad algorithms that depend on the ability to do these things. A barrel shifter can perform these operations in a stateless, combinatorial manner that can easily complete in a single cycle even on aggressively clocked modern microprocessors.

Of course, shifts can be calculated by a good old-fashioned shift register, taking one cycle for every bit position shifted. And for many algorithms (notably multiplication) this is perfectly fine, because the multiplier will be used at each shift position in succession, and with a little strength reduction, each successive position can be calculated using only a single-position shift from the previously shifted value of the multiplier. So for many applications, a barrel shifter doesn't make things much quicker; but barrel shifters are large structures to put onto silicon, far larger than a simple adder or shift register. Hence many early processors, for example all Intel x86 processors before the 386, had simple shift registers instead of barrel shifters.
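To make those expressions concrete, the shift and rotate behaviour can be sketched in a few lines of Python (the 32-bit word width is an assumption of the sketch; Python's unbounded integers need an explicit mask where C's fixed-width types truncate automatically):

```python
WORD = 32                  # assumed word width; the text writes 8*sizeof(int)
MASK = (1 << WORD) - 1

def rotate_right(x, y):
    """(x >> y) | (x << (WORD - y)): bits shifted off one end re-enter
    at the other, which is the 'rotate' operation described above."""
    y %= WORD
    return ((x >> y) | (x << (WORD - y))) & MASK

# A left shift by y multiplies by 2**y; a right shift divides by it:
assert (3 << 4) == 3 * 2**4
# Rotation wraps the displaced bits around:
assert rotate_right(0x00000001, 1) == 0x80000000
assert rotate_right(0x12345678, 8) == 0x78123456
```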
Almost all modern processors since the i386 have included barrel shifters, with the exception of some embedded processors, and rather more bizarrely the Willamette and Northwood generations of Pentium 4s. Barrel shifters are also to be found as components in the normalisation stages of FPUs. So, what does a barrel shifter actually look like, and how do we build one? Well, to implement a shifter, we must have the ability to move any bit in the input word to any position in the output word. So, if you have any grounding at all in electronics, you're probably already thinking to yourself "Well, we could implement it with a bundle of multiplexors, one for each output bit, which has an input for each input-bit, and arrange the connections between input and output so that the control input for each multiplexor can be the same, sharing the decode." If this is what you're thinking, you may well have followed that up with "Ah, but for an n-bit input/output word, this requires of the order of n^2 gates, which is a hell of a lot of gates for a 32-bit or, heaven help us, 64-bit input word" and abandoned your line of reasoning in the expectation that you're about to be told not to be so silly, and that a barrel shifter implementation is actually a lot smaller and more efficient than that. Except of course, you would have been absolutely right. A barrel shifter is essentially that: an array of multiplexors, with a size (in terms of gates and silicon area) related to the product of the number of bits in the input and output words. Additional logic on the inputs and outputs of the barrel shifter extends the basic functionality of the barrel shifter to implement the full range of shifts provided by a processor's instruction set by masking off low or high bits in the output, sign-extending the result, and selecting between left and right shifts. 
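The one-multiplexor-per-output-bit structure just described is easy to model behaviourally. The Python sketch below (8-bit word for brevity) is a software model of the wiring rather than a gate count, but it makes the O(n^2) character visible: each of the n output multiplexors must be able to select any of the n input bits, with all n multiplexors sharing the same decoded select lines.

```python
N = 8  # word width of the sketch

def bit(x, i):
    """Select a single input bit, standing in for one mux input."""
    return (x >> i) & 1

def barrel_rotate_left(x, shift):
    """One N-way multiplexor per output bit, all sharing the same
    select value: output bit i picks input bit (i - shift) mod N."""
    out = 0
    for i in range(N):            # one multiplexor per output bit
        src = (i - shift) % N     # which input bit this mux selects
        out |= bit(x, src) << i
    return out

assert barrel_rotate_left(0b00000001, 3) == 0b00001000
assert barrel_rotate_left(0b10000000, 1) == 0b00000001
```

Masking or sign-extending this rotated result, exactly as noted above, turns the one primitive into the full menu of left, right, logical and arithmetic shifts.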
We can see why barrel shifters are expensive in terms of silicon area, and why they were omitted from early microprocessors where silicon area was at a premium. It's also obvious why barrel shifters are fast: the longest signal path through the shifter only goes through a single multiplexor, albeit one with a rather high fan-in. And it's incredibly lucky that barrel shifters happen to be so quick, because it's not feasible to pipeline a barrel shifter. To split a barrel shifter across more than one pipeline stage, the internal state which would have to be registered between stages is of the order of n^2; for a 32-bit word width, that means something like a kilobit of register state. So, overall, a barrel shifter is O(n^2) in terms of silicon area, and approximately O(1) in terms of time. If you're thinking ahead, you're probably thinking that O(n^2) is rather awful, and that there must, surely, be a better way. If you're really on the ball, in fact, you've probably come up with the idea of a logarithmic shifter. A logarithmic shifter performs the same function as a barrel shifter, but in a slightly different manner, using fewer logic gates and less silicon area. A logarithmic shifter works by successively shifting (or not, depending on a bit of the index register) the input by powers of two, such that the result, the composition of these successive shifts, is shifted overall by the value of the shift index: the first stage shifts by 1 place if bit 0 of the index is set; the next by 2 places if bit 1 is set; the next by 4 places if bit 2 is set; and so on until the last stage, which shifts by n/2 places if bit (log[2]n)-1 of the index is set. This can be implemented very simply with multiplexors picking their inputs from the relevant bits of the input. Since there are log[2]n stages in this circuit, each proportional in size to the input word width, the number of gates is O(n log n), far smaller than a barrel shifter.
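The stage-by-stage operation of a logarithmic shifter can be sketched the same way: each stage conditionally shifts by a power of two according to one bit of the index, and the log[2]n stages compose into any overall shift amount (again a behavioural Python model, 32-bit word assumed):

```python
N = 32                      # word width; log2(N) = 5 stages
MASK = (1 << N) - 1

def log_shift_left(x, amount):
    """Stage k shifts by 2**k places iff bit k of `amount` is set,
    so the stages compose to a shift by `amount` overall."""
    for k in range(N.bit_length() - 1):   # 5 stages for N = 32
        if (amount >> k) & 1:
            x = (x << (1 << k)) & MASK    # shift by 1, 2, 4, 8 or 16 places
    return x

assert log_shift_left(1, 0) == 1
assert log_shift_left(1, 5) == 1 << 5     # stages 0 and 2 fire: 1 + 4 places
assert log_shift_left(3, 30) == (3 << 30) & MASK
```

Each `if` corresponds to one rank of 2-to-1 multiplexors, which is where the O(n log n) gate count comes from.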
However, because the stages operate successively, the propagation delay overall is O(log n). For a 32-bit shifter, this implies at least 5 gate delays before any additional logic is added, substantially slower than a barrel shifter, and possibly making the shifter slow enough overall that it might violate timing on a fast microprocessor. Logarithmic shifters are, however, easy to pipeline since the intermediate state at any given point in the shifter is only n bits. If a microprocessor's pipeline structure can support early completion of results, a simple zero-detection operating in the first pipeline stage of the shifter can indicate whether the shifter's result has completed or needs further activity; thus by performing the most commonly used shifts (or the components thereof) first, a logarithmic shifter can operate, in practice, almost as quickly as a barrel shifter. I'm only speculating, but I suspect it's an arrangement like this that was used in the early Intel Pentium 4 processors, where shifts could allegedly take a variable number of cycles to complete, depending on the index. It can be observed that using 4-bit multiplexors instead of single-bit multiplexors will create a shifter that can shift by two powers of two per stage, resulting in a faster but slightly larger circuit. Applying this inductively, we can observe that this forms a series of shifter implementations: at one end the log-2 shifter with 1-bit multiplexors on each of log[2]n stages, through to the full barrel shifter with log[2]n-bit multiplexors on each of one stage. These can be freely mixed and matched, too. For example, adding a single logarithmic shift stage can halve the number of connections required in a barrel shift stage, for only a comparatively small delay penalty. And finally... No, I don't know where the name "barrel shifter" comes from, actually.
Perhaps it's simply because, as a functional unit, it's more fun than a barrel of monkeys. My best guess is that it's related to the fact that on the surface of a barrel (or any cylinder), rotating around its axis, everything moves with the same rotational velocity at every point, simultaneously, and this property is more or less shared with a barrel shifter.
Non-negative Least Squares (NNLS) algorithm

A little while ago, I looked at this group and the web for an implementation of the NNLS algorithm, but all I found was a post in this group from 1997 from somebody also looking for an implementation, so I had to do it myself.

The problem: Given a matrix A (usually more rows than columns) and vector f, find vector x such that: 1) All elements of x are non-negative 2) subject to constraint (1), the Euclidean norm (2-norm) of vector (A.x-f) is minimized. (Note that the unconstrained problem - find x to minimize (A.x-f) - is a simple application of QR decomposition.)

The NNLS algorithm is published in chapter 23 of Lawson and Hanson, "Solving Least Squares Problems" (Prentice-Hall, 1974, republished SIAM, 1995).

Some preliminary comments on the code: 1) It hasn't been thoroughly tested. 2) I only have a few months experience programming Mathematica (but lots of programming experience in general) so there are probably style issues. I am happy to receive constructive criticism. 3) There is some ugliness where a chunk of code is repeated twice to be able to have a simple 'while' loop, rather than one that breaks out in the middle. Arguably breaking in the middle of the loop would be better than repeating the code. 4) I've left in a bunch of debugging statements. 5) I've cut-and-pasted this from my notebook, which is a bit ugly with the "\[LeftDoubleBracket]" instead of "[[" etc. 6) I haven't paid much attention to efficiency - e.g. I multiply by "Inverse[R]" rather than trying to do a backsubstitution (R is a triangular matrix.)

(* Coded by Michael Woodhams, from algorithm by Lawson and Hanson, *) (* "Solving Least Squares Problems", 1974 and 1995.
*) bitsToIndices[v_] := Select[Table[i, {i, Length[v]}], v[[#]] == 1 &]; NNLS[A_, f_] := Module[ {x, zeroed, w, t, Ap, z, q, \[Alpha], i, zeroedSet, positiveSet, toBeZeroed, compressedZ, Q, R}, (* Use delayed evaluation so that these are recalculated on the fly as \ needed : *) zeroedSet := bitsToIndices[zeroed]; positiveSet := bitsToIndices[1 - zeroed]; (* Init x to vector of zeros, same length as a row of A *) debug["A=", MatrixForm[A]]; x = 0 A\[LeftDoubleBracket]1\[RightDoubleBracket]; debug["x=", x]; (* Init zeroed to vector of ones, same length as x *) zeroed = 1 - x; debug["zeroed=", zeroed]; w = Transpose[A].(f - A.x); debug["w=", w]; While[zeroedSet != {} && Max[w\[LeftDoubleBracket]zeroedSet\[RightDoubleBracket]] > 0, debug["Outer loop starts."]; (* The index t of the largest element of w, *) (* subject to the constraint t is zeroed *) t = Position[w zeroed, Max[w zeroed], 1, debug["t=", t]; zeroed\[LeftDoubleBracket]t\[RightDoubleBracket] = 0; debug["zeroed=", zeroed]; (* Ap = the columns of A indexed by positiveSet *) Ap = debug["Ap=", MatrixForm[Ap]]; (* Minimize (Ap . compressedZ - f) by QR decomp *) {Q, R} = QRDecomposition[Ap]; compressedZ = Inverse[R].Q.f; Create vector z with 0 in zeroed indices and compressedZ entries \ elsewhere *) z = 0 x; z\[LeftDoubleBracket]positiveSet\[RightDoubleBracket] = debug["z=", z]; While[Min[z] < 0, (* There is a wart here : x can have zeros, giving infinities or indeterminates. They don't matter, as we ignore those elements (not in postitiveSet) but it will \ produce warnings. 
*) debug["Inner loop start"]; find smallest x\[LeftDoubleBracket] \[RightDoubleBracket] - z\[LeftDoubleBracket]q\[RightDoubleBracket]) (* such that : q is not zeroed, z\[LeftDoubleBracket]q\[RightDoubleBracket] < 0 *) \[Alpha] = Infinity; For[q = 1, q <= Length[x], q++, If[zeroed\[LeftDoubleBracket]q\[RightDoubleBracket] == 0 z\[LeftDoubleBracket]q\[RightDoubleBracket] < 0, \[Alpha] = \[LeftDoubleBracket]q\[RightDoubleBracket] - debug["After trying index q=", q, " \[Alpha]=", ]; (* if *) ]; (* for *) debug["\[Alpha]=", \[Alpha]]; x = x + \[Alpha](z - x); debug["x=", x]; toBeZeroed = Abs[x\[LeftDoubleBracket]#\[RightDoubleBracket]] < 10^-13 &]; debug["toBeZeroed=", toBeZeroed]; zeroed\[LeftDoubleBracket]toBeZeroed\[RightDoubleBracket] = x\[LeftDoubleBracket]toBeZeroed\[RightDoubleBracket] = 0; (* Duplicated from above *) (* Ap = the columns of A indexed by positiveSet *) Ap = Transpose[ debug["Ap=", MatrixForm[Ap]]; (* Minimize (Ap . compressedZ - f) by QR decomp *) {Q, R} = QRDecomposition[Ap]; compressedZ = Inverse[R].Q.f; Create vector z with 0 in zeroed indices and compressedZ entries \ elsewhere *) z = 0 x; z\[LeftDoubleBracket]positiveSet\[RightDoubleBracket] = debug["z=", z]; ]; (* end inner while loop *) x = z; debug["x=", x]; w = Transpose[A].(f - A.x); debug["w=", w]; ]; (* end outer while loop *) ]; (* end module *) And some final comments: Don't 'reply' to the e-mail address in the header of this post - it is old and defunct, and not updated because of spammers. I am at massey.ac.nz, and my account name is m.d.woodhams. (3 year post-doc appointment, so by the time you read this, that address might also be out of date.) I put this code in the public domain - but I'd appreciate it if you acknowledge my authorship if you use it. I will be writing a bioinformatics paper about "modified closest tree" algorithm that uses this, and I might (not decided about this) give a link to Mathematica code, which will include the above. 
If this happens, you could cite that paper. This software comes with no warranty etc etc. Your warranty is that you have every opportunity to test the code yourself before using it. If you are reading this long after it was posted, I may have an improved version by then. Feel free to enquire by e-mail.

Reply mdw1 (2) 10/3/2003 6:53:03 AM

mdw@ccu1.auckland.ac.nz (Michael Woodhams) wrote in message news:<blj6cf$1j6$1@smc.vnet.net>... > A little while ago, I looked at this group and the web for an > implementation of the NNLS algorithm, but all I found was a post in > this group from 1997 from somebody also looking for an implementation, > so I had to do it myself. > The problem: Given a matrix A (usually more rows than columns) and > vector f, find vector x such that: > 1) All elements of x are non-negative > 2) subject to contraint (1), the Euclidian norm (2-norm) of vector > (A.x-f) is minimized.

Congratulations for your NNLS code! I had a 10-liner naive solution (see below) using FindMinimum. It seems to work most of the time, but your solution is more accurate and... at least sixty times faster!

NNLSFindMinimum[A_, f_] := Module[{nbx = Length[First[A]], xi, x, axf, xinit}, xi = Array[x,nbx]; axf = A.xi^2 - f; xinit = PseudoInverse[A].f; If[And@@( #>=0& /@ xinit), fm = FindMinimum[Evaluate[axf.axf], Evaluate[Sequence@@Transpose[{xi, xinit}]]]; xi^2 /. fm[[2]] (* a small test : *) a = Table[Random[],{i,10},{j,20}]; f = Table[Random[],{i,10}]; x = NNLSFindMinimum[a,f]//Timing % // Chop[#, 10.^-5] & {1.001 Second,{0.0580067,0.0246345,0,0.0227949,0,0,0,0.386312,0,0,0,0,0,0,0,0, x = NNLS[a, f] // Timing {0.016 Second,{0.058114,0.02338271,0,0.0227961,0,0,0,0.386253,0,0,0,0,0,0,0,0,

Reply poujadej (31) 10/4/2003 6:15:38 AM

Non Negative Least Squares Hi Folks, does somebody have a working implementation of the NNLS and can kindly provide this?
Cheers, CR

A generalization of NNLS is available at http://www-astro.physics.ox.ac.uk/~mxc/idl/bvls.pro --Wayne

On Nov 3, 3:21 pm, chris <rog...@googlemail.com> wrote:
> Hi Folks,
> does somebody have a working implementation of the NNLS and can kindly
> provide this?

For simple problems, you might also consider running the MPFIT family of functions with parameter constraints that force the parameters to be non-negative. Craig ...

Large-Scale Non-Negative Linear Least Squares

All non-negative linear least squares algorithms that I've seen, including lsqnonneg, require an explicit representation of the matrix. I'm looking for a large scale version that allows me to specify function handles instead. Help!

"Gordon Wetzstein" <gordon.wetzstein@gmail.com> wrote in message <i7u6am$2ue$1@fred.mathworks.com>...
> All non-negative linear least squares algorithms that I've seen, including lsqnonneg, require an explicit representation of the matrix. I'm looking for a large scale version that allows me to specify function handles ins...

Non-negative least squares with large sparse matrix

Hi, I have a sparse matrix of size ~750,000x10,000, and I am looking to find the non-negative least-squares solution of the problem Ax = b. I have tried using lsqnonneg, but after about 30 minutes it spit out an error saying out of memory (I guess this is because the matrix is too large?). I don't have the optimization toolbox, but I would like to try lsqlin with only specifying the lower bound. I would consider buying it if I was sure it would solve my problem, so does anyone know if it will work on a matrix of this size with reasonable speed? If not, does anyone have any other solution in m...

least squares algorithm

Hello All! Anyone know where I can find some php code (or "php-able" code) implementing the least squares method? Thx in advance and pls excuse my poor eng

J.SanTanA schreef:
> Hello All!
> Anyone know where I can find some php code (or "php-able" code)
> implementing the least squares method?
>
> Thx in advance and pls excuse my poor eng

http://www.google.nl/search?hl=nl&q=php+least+squares
JW

On 13 Feb, 12:42, Janwillem Borleffs <j...@jwscripts.com> wrote:
> J.SanTanA schreef:
>
> > Hello All!
> > Anyone know where I can find s...

Does anybody know a nonlinearly inequality constrained nonlinear least squares solver called ENLSIP? I saw in some previous posts people mentioned it. I've looked into it from the decision tree; it is very old and there is no documentation, no tutorial, etc., only Fortran code. I don't know how to use it. If anybody knows where to find the user guide or documentation, it would be great! If there is a Matlab interface floating around, that's even better... Moreover, I would appreciate it if anybody can give me some pointers to nonlinearly inequality constrained nonlinear least squares solvers that are more modern? Thanks a lot! On Sep 2, 7:12 pm, "Gino

genetic algorithms and nonlinear least squares

Hi, I am working on genetic algorithms. I want to solve the (nonlinear) equation y = q(1)*exp(q(2)*t). In this equation, y is the data, q(1) and q(2) are the parameters that we want to estimate, and t is time. My purpose is to minimize the function S = sum[ydata - yestimate]^2. With this equation (nonlinear least squares), I want to estimate q(1) and q(2). How can I solve this problem with a genetic algorithm in MATLAB? Please help me. Thank you ...

non linear least square minimization....

Hi all, I need a non linear least square minimization routine for correlated data (PCR or partial least squares perhaps) but I don't know how I can download it (probably because I didn't know the name of my problem and its solution). Do you know where I can find it? Best regards, Dr. G. Pileio ...
least squares with non-constant variance

I'm trying to perform least squares regression on a simple x,y data set, and to get the regression statistics so that I can model the data set using Monte-Carlo techniques. The regression is non-linear (an ax^b power function). I've been successful doing this using nlinfit and related functions for the case where a constant variance is assumed (as is common for least squares). However, the data set would be better modeled using a non-constant variance, and I'm having some difficulty getting the regression statistics for this case. Specifically, I would like to perform weighted ...

Non Linear Least squares problem

Hi, I have been trying to use the 'lsqnonlin' function to solve a nonlinear least squares problem with a Matlab educational copy. But Matlab is generating an error of unknown function. Let me know how one uses this function to estimate parameters for a nonlinear function which bounds the estimated parameters between upper and lower bounds. Thanks in advance. Ganesh

Ganesh Lolge wrote:
> Hi
> I have been trying to use 'lsqnonlin' function to solve a nonlinear
> least squares problem with Matlab educational copy. But matlab is
> generating an error of unknown function.
> ...
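One of the threads above asks for an NNLS variant that touches the matrix only through function handles. As a rough illustration (a projected-gradient iteration, not the Lawson-Hanson active-set method implemented earlier in this thread), the solver below needs nothing but matvec and rmatvec callbacks, so the matrix never has to be stored explicitly. The step size and the tiny test problem are my own choices:

```python
def nnls_pg(matvec, rmatvec, f, n, step, iters=3000):
    """Projected-gradient NNLS sketch: minimise ||A.x - f||_2 subject to x >= 0,
    touching A only through matvec (v -> A.v) and rmatvec (v -> A^T.v)."""
    x = [0.0] * n
    for _ in range(iters):
        r = [a - b for a, b in zip(matvec(x), f)]   # residual A.x - f
        g = rmatvec(r)                              # gradient of 0.5*||A.x - f||^2
        # gradient step followed by projection onto the nonnegative orthant
        x = [max(0.0, xi - step * gi) for xi, gi in zip(x, g)]
    return x

# Tiny dense test problem; the callbacks could just as well wrap a sparse matrix.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
f = [1.0, -1.0, 0.0]
mv = lambda v: [sum(a * b for a, b in zip(row, v)) for row in A]
rmv = lambda v: [sum(A[i][j] * v[i] for i in range(3)) for j in range(2)]
x = nnls_pg(mv, rmv, f, n=2, step=0.4)
```

For this problem the unconstrained least-squares solution is (1, -1); with the nonnegativity constraint the second component is pinned to zero and the minimiser becomes (0.5, 0). The step must stay below 2 divided by the largest eigenvalue of A^T.A for convergence.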
{"url":"http://compgroups.net/comp.soft-sys.math.mathematica/non-negative-least-squares-nnl/899124","timestamp":"2014-04-23T13:32:45Z","content_type":null,"content_length":"32481","record_id":"<urn:uuid:20de7e11-f228-4584-8293-1aaa25536c7d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the distribution of the $L^\infty$ norm of minimal polynomials of numbers in a number field?

Given a number field $K$, I am interested in the distribution of the $L^\infty$ norm of minimal polynomials (over $\mathbb{Z}$) of numbers in $K$. Also, it is interesting to restrict to numbers $\alpha$ with $K=\mathbb{Q}(\alpha)$. All such polynomials' discriminants are a square factor away from the discriminant of $K$. So that must say something. And it makes it very interesting - the discriminant is a high-degree multivariate polynomial, so the fact that any discriminant can be achieved many times (even if we allow the distance of square factors) amazes me. To be precise, so no one is annoyed, the question is exactly: what is the asymptotic behavior and error term for $A(x) = |\{ \alpha \in K \mid \|m(\alpha)\|_\infty < x\}|$ and $B(x) = |\{ \alpha \in K \mid K=\mathbb{Q}(\alpha),\ \|m(\alpha)\|_\infty < x\}|$ as $x \rightarrow \infty$?

Have you tried a (possibly) simple case, like $K={\bf Q}(i)$? – Gerry Myerson Apr 8 '10 at 23:39

@Gerry: Both $A(x)$ and $B(x)$ are $\sim c_i x^2$ for some constants, and this will be true for any quadratic field, where the constant will depend on the discriminant, class number, and regulator. But, for a non-quadratic number field, the coefficients of the minimal polynomial of a generic number given in a $\mathbb{Q}$-basis will be of high degree, and I can't see why the same asymptotic behavior should remain - even though it probably will. – Dror Speiser Apr 9 '10 at 11:26
{"url":"http://mathoverflow.net/questions/20798/what-is-the-distribution-of-the-l-infty-norm-of-minimal-polynomials-of-number","timestamp":"2014-04-16T10:35:41Z","content_type":null,"content_length":"48727","record_id":"<urn:uuid:a601207b-6a6c-41c0-a611-25191b0d0971>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Researchers create first large-scale model of human mobility that incorporates human nature For more than half a century, many social scientists and urban geographers interested in modeling the movement of people and goods between cities, states or countries have relied on a statistical formula called the gravity law, which measures the “attraction” between two places. Introduced in its contemporary form by linguist George Zipf in 1946, the law is based on the assumption that the number of trips between two cities is dependent on population size and the distance between the cities. (The name comes from an analogy with Newton’s Law of Gravity, which describes the attraction of two objects based on mass and distance.) Though widely used in empirical studies, the gravity model isn’t very accurate in making predictions. Researchers must retrofit data to the model by including variables specific to each study in order to force the results to match reality. And with much more data now being generated by new technologies such as cellphones and the Internet, researchers in many fields are eyeing the study of human mobility with a desire to increase its scientific rigor. To this end, researchers from MIT, Northeastern University and Italy’s University of Padua have identified an underlying flaw in the gravity model: The distance between two cities is far less important than the population size in the area surrounding them. The team has now created a model that takes human motives into account rather than simply assuming that a larger city attracts commuters. They then tested their “radiation model” on five types of mobility studies and compared the results to existing data. In each case, the radiation model’s predictions were far more accurate than the gravity model’s, which are sometimes off by an order of magnitude. 
“Using a multidisciplinary approach, we came up with a simple formula that works better in all situations and shows that population distribution is the key factor in determining mobility fluxes, not distance,” says Marta González, the Gilbert Winslow Career Development Assistant Professor in MIT’s Department of Civil and Environmental Engineering and Engineering Systems Division, and co-author of a paper published Feb. 26 in the online edition of Nature. “I wanted to see if we could find a way to make the gravity model work more accurately without having to change it to fit each situation.”

Physics professor Albert-László Barabási of Northeastern is lead author and principal investigator on the project. Filippo Simini of Northeastern and Amos Maritan of the University of Padua are co-authors.

“I think this paper is a major advance in our understanding of human behavior,” says Dirk Brockmann, an associate professor of engineering sciences and applied mathematics at Northwestern University who was not involved in the research project. “The key value of the work is that they propose a real theory of mobility making a few basic assumptions, and this model is surprisingly consistent with empirical data.”

The gravity law states that the number of people in a city who will commute to a larger city is based on the population of the larger city. (The larger the population of the big city, the more trips the model predicts.) The number of trips will decrease as the distance between cities grows. One obvious problem with this model is that it will predict trips to a large city without taking into account that the population size of the smaller city places a finite limit on how many people can possibly travel.

The radiation model accounts for this and other limitations of the gravity model by focusing on the population of the surrounding area, which is defined by the circle whose center is the point of origin and whose radius is the distance to the point of attraction, usually a job.
It assumes that job availability is proportional to the population size of the entire area and rates a potential job’s attractiveness based on population density and travel distance. (People are willing to accept longer commutes in sparsely populated areas that have fewer job opportunities.)

To demonstrate the radiation model’s accuracy in predicting the number of commuters, the researchers selected two pairs of counties in Utah and Alabama — each with a set of cities with comparable population sizes and distances between them. In this instance, the gravity model predicts that one person will commute between each set of cities. But according to census data, 44 people commuted in Utah and six in the sparsely populated area of Alabama. The radiation model predicts 66 commuters in Utah and two in Alabama, a result well within the acceptable limit of statistical error, González says.

The co-authors also tested the model on other indices of connectedness, including hourly trips measured by phone data, commuting between U.S. counties, migration between U.S. cities, intercity telephone calls made by 10 million anonymous users in a European country, and the shipment of goods by any mode of transportation among U.S. states and major metropolitan areas. In all cases, the model’s results matched existing data.

“What differentiates the radiation model from other phenomenological models is that Simini et al. assume that an individual’s migration or move to a new location is determined by what ‘is offered’ at the location — e.g., job opportunities — and that this employment potential is a function of the size of a location,” Brockmann says. “Unlike the gravity model and other models of the same nature, the radiation model is thus based on a plausible human motive. Gravity models just assume that people move to large cities with high probability and that also this movement probability decreases with distance; they are not based on an underlying first principle.”
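For concreteness, the two models the article contrasts can be written down directly. The radiation-model expression below is the published formula from the Simini et al. Nature paper the article describes; the toy population numbers in the checks are made up:

```python
def gravity_flux(m_i, n_j, r_ij, k=1.0, alpha=2.0):
    # Zipf-style gravity law: attraction grows with both populations
    # and decays with distance; blind to what lies between the cities.
    return k * m_i * n_j / r_ij ** alpha

def radiation_flux(T_i, m_i, n_j, s_ij):
    # Radiation model (Simini, Gonzalez, Maritan, Barabasi, Nature 2012):
    #   T_i       = total trips leaving origin i
    #   m_i, n_j  = origin and destination populations
    #   s_ij      = population inside the circle of radius r_ij centred
    #               on i, excluding origin and destination themselves
    return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))
```

Note the parameter-free structure: distance enters the radiation model only through the intervening population s_ij, which is exactly the article's point that population distribution, not raw distance, drives the flux.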
{"url":"http://newsoffice.mit.edu/2012/mobility-migration-model-0227","timestamp":"2014-04-16T07:39:57Z","content_type":null,"content_length":"87511","record_id":"<urn:uuid:46b873a9-0a9d-4ce3-9e43-e9060404031f>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
normalizer of Sylow p-subgroup of simple groups A_n But my point is if you compute the order of the normalizer of an arbitrary Sylow p-subgroup, then you can count the number of Sylow p-subgroups (because the number of Sylows is the index of the normalizer of any one of them in the group), which is (as far as I know) very difficult.
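The counting identity in question (the number of Sylow p-subgroups equals the index of the normalizer of any one of them) can be checked by brute force in a small group. S_4 is not simple like the A_n in the thread title, but it is small enough to enumerate completely; here P is a dihedral Sylow 2-subgroup of order 8:

```python
from itertools import permutations

IDENT = tuple(range(4))

def compose(p, q):
    # (p o q)(i) = p[q[i]]: apply q first, then p
    return tuple(p[i] for i in q)

def inv(p):
    out = [0] * len(p)
    for i, v in enumerate(p):
        out[v] = i
    return tuple(out)

G = list(permutations(range(4)))          # the symmetric group S_4, order 24

def generated(gens):
    # closure under multiplication (inverses come for free in a finite group)
    elems = set(gens) | {IDENT}
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return frozenset(elems)
        elems |= new

# A Sylow 2-subgroup of S_4: the dihedral group of order 8 generated by
# the 4-cycle 0->1->2->3->0 and the transposition swapping 0 and 2.
P = generated([(1, 2, 3, 0), (2, 1, 0, 3)])

def conjugate(g, H):
    gi = inv(g)
    return frozenset(compose(compose(g, h), gi) for h in H)

sylow_2 = {conjugate(g, P) for g in G}    # all conjugates of P
normalizer = [g for g in G if conjugate(g, P) == P]
```

The check confirms |G| / |N_G(P)| = 24 / 8 = 3, matching the three distinct Sylow 2-subgroups found by conjugation.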
{"url":"http://www.physicsforums.com/showthread.php?t=257905","timestamp":"2014-04-17T00:57:02Z","content_type":null,"content_length":"27341","record_id":"<urn:uuid:53e24c29-ea19-48a0-ade6-58a70863153b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Re: handling of MV in summary stats
From: Kit Baum <baum@bc.edu>
To: statalist@hsphsun2.harvard.edu
Subject: st: Re: handling of MV in summary stats
Date: Wed, 17 Dec 2003 09:18:46 -0500

On Dec 17, 2003, at 2:33 AM, David wrote: Note that for commands that take several variables together, they generally work on the set of observations having no missing values on all the variables under consideration.

An important exception to this general rule involves the 'r...' egen functions, which perform spreadsheet-like operations across variables (and thus provide an alternative to doing something like

gen avg = (x1+x2+x3)/3

which will indeed be missing if any component is missing).

egen avg = rmean(x1 x2 x3)

will not behave the same way; it computes the mean from what is available, and the associated rmiss() will give you the effective divisors. The r... functions are very useful, but one must read the help carefully to know how each one will deal with MVs. (I once had a problem with egen rsd(), row standard deviation, which ignores MVs; it returned zero if _all_ arguments were missing. I agreed with Stata's developers that there was no variance across those variables, but argued successfully that the rule that a function of NaNs should return a NaN should trump the statistical logic!)
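A minimal sketch of the distinction Baum describes, with missing values modeled as None (this is plain Python standing in for Stata, not Stata itself):

```python
# Missing values are None, echoing Stata's "." in row operations.

def gen_avg(row):
    # like: gen avg = (x1+x2+x3)/3
    # any missing component makes the whole result missing
    if any(v is None for v in row):
        return None
    return sum(row) / len(row)

def rmean(row):
    # like: egen avg = rmean(x1 x2 x3)
    # averages whatever is available, ignoring missing values
    vals = [v for v in row if v is not None]
    return sum(vals) / len(vals) if vals else None

def rmiss(row):
    # companion count of missing values, so the effective divisor
    # for each row can be recovered as len(row) - rmiss(row)
    return sum(v is None for v in row)
```

Note that rmean here returns missing when every argument is missing, which is the behavior Baum argued for in the rsd() anecdote (a function of all-missing inputs should itself be missing).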
{"url":"http://www.stata.com/statalist/archive/2003-12/msg00507.html","timestamp":"2014-04-17T06:55:41Z","content_type":null,"content_length":"5940","record_id":"<urn:uuid:f8df339e-f7e7-4dac-abc2-a71b071d33ef>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Computational Logic Handbook Book Description: This book provides the definitive documentation for one of the most well-known and highly regarded theorem-proving programs ever written. The program described is one of the more significant, enduring, and prize-awarded accomplishments in the fields of artificial intelligence, formal methods, and applied logic. The book provides an exact statement of the logic for which the program is a prover, a complete description of the user's commands, installation instructions, and much tutorial information, including references to thousands of pages of examples. Among the examples is a formally verified microprocessor and a formally verified compiler targeting that microprocessor. The second edition of A Computational Logic Handbook provides all the information necessary for using the most recently released version of Nqthm, the freely available "Boyer-Moore" theorem-proving program. The second edition includes a precise description of all recent changes to the logic in the past nine years, including many enhanced syntactic features and rules of inference, which were added to support work on large scale projects in formal methods. Thousands of pages of fascinating, exemplary, mathematically-checked input are described, examples that deal with very difficult questions in formal methods and mathematics.
New material includes: description of the new syntax, including COND, CASE, LET, LIST*, and backquote; describes some higher order inference procedures, including "constrained functions" and "functional instantiation"; documents more sophisticated control machinery for manipulating very large theories; introduces a secure proof-checking environment; describes thousands of pages of fascinating example input dealing with very difficult questions in formal methods and mathematics; provides a formal parser for the syntax; compares the proof complexity of many interesting checked examples; includes much new tutorial help, especially for the many new features. A computational logic is a mathematical logic that is both oriented towards discussion of computation and mechanised so that proofs can be checked by computation. The computational logic discussed in the handbook is that developed by Boyer & Moore. The first edition, published in 1988, is an acknowledged classic in the field of formal methods and computational logic. However it no longer reflects existing technology. The second edition provides a complete overview of the Boyer/Moore theorem proving approach (Nqthm) and provides examples. It includes several significant new features that have been added to the Nqthm system since 1988. The book is structured in the following way: Part 1 discusses logic without regard for its mechanisation and answers the question: what are the axioms and rules of inference? Part 2 discusses its mechanisation and answers the question: how does one use the Boyer/Moore theorem prover to prove theorems?
{"url":"http://www.campusbooks.com/books/computers-internet/computer-science/artificial-intelligence/9780121229559_Boyer-Robert-S-Boyer-J-Strother-Moore-Hal-G-Moo.html","timestamp":"2014-04-16T07:31:17Z","content_type":null,"content_length":"48622","record_id":"<urn:uuid:1b7a02b8-54f4-42da-9f1e-dd0573242c72>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
Experimental implementation of direct-proportional length-based DNA computing for numerical optimization of the shortest path problem

Ibrahim, Zuwairie and Tsuboi, Yusei and Ono, Osamu and Khalid, Marzuki (2007) Experimental implementation of direct-proportional length-based DNA computing for numerical optimization of the shortest path problem. International Journal of Unconventional Computing, 3 (1). pp. 15-34. ISSN 1548-7199

Official URL: http://www.oldcitypublishing.com/IJUC/IJUC%203.1%2...

DNA computing has emerged as an interdisciplinary field that draws together molecular biology, chemistry, computer science, and mathematics. The fundamentals of this unconventional computation have been proven to solve weighted graph problems, such as the shortest path problem and the travelling salesman problem. One of the fundamental improvements in DNA computing is direct-proportional length-based DNA computing for the shortest path problem. Generally, in DNA computing, the DNA sequences used for the computation should be critically designed in order to reduce errors that could occur during computation. In this paper, a procedure to design the DNA sequences for direct-proportional length-based DNA computing is presented. The procedure includes DNASequenceGenerator, a graph-based approach for designing a set of good DNA sequences.
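The paper's DNASequenceGenerator itself is not reproduced here, but the kind of constraint checking such sequence-design tools perform can be sketched generically. The GC-content window and Hamming-distance threshold below are illustrative assumptions, not the paper's actual parameters:

```python
import random

def gc_content(seq):
    # fraction of G and C bases; moderate GC keeps melting temperatures similar
    return (seq.count("G") + seq.count("C")) / len(seq)

def hamming(a, b):
    # number of mismatched positions between two equal-length sequences
    return sum(x != y for x, y in zip(a, b))

def pick_sequences(candidates, k, min_dist, gc_lo=0.4, gc_hi=0.6):
    """Greedily keep sequences with moderate GC content that sit at least
    min_dist mismatches away from every sequence already chosen, so that
    distinct code words are unlikely to cross-hybridize."""
    chosen = []
    for s in candidates:
        if not gc_lo <= gc_content(s) <= gc_hi:
            continue
        if all(hamming(s, t) >= min_dist for t in chosen):
            chosen.append(s)
        if len(chosen) == k:
            break
    return chosen

rng = random.Random(42)
pool = ["".join(rng.choice("ACGT") for _ in range(8)) for _ in range(500)]
codes = pick_sequences(pool, k=5, min_dist=3)
```

Real tools also screen for secondary structure and self-complementarity; this sketch only shows the combinatorial core of "designing a set of good DNA sequences".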
{"url":"http://eprints.utm.my/8767/","timestamp":"2014-04-16T19:13:49Z","content_type":null,"content_length":"20149","record_id":"<urn:uuid:da33f982-181c-4d0e-91d5-f78ed722ac99>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
Derivation of BPSK BER in Rayleigh channel

This is a guest post by Jose Antonio Urigüen, who is an Electrical and Electronic Engineer currently studying an MSc in Communications and Signal Processing at Imperial College in London. This guest post has been created due to his own curiosity when reviewing some concepts of BER for BPSK in Rayleigh channel published on dsplog.com.

From the post on BER for BPSK in Rayleigh channel, it was shown that, in the presence of channel $h$, the effective bit energy to noise ratio is $\frac{|h|^2E_b}{N_0}$. The bit error probability for a given value of $h$ is $P_{b|h}=\frac{1}{2}erfc\left({\sqrt{\frac{|h|^2E_b}{N_0}}}\right)=\frac{1}{2}erfc\left(\sqrt{\gamma}\right)$, where $\gamma = \frac{|h|^2E_b}{N_0}$.

The resulting BER in a communications system in the presence of a channel $h$, for random values of $|h|^2$, must be calculated by averaging the conditional error probability $P_{b|h}$ over the probability density function of $\gamma$:

$P_{b}=\int_0^\infty\frac{1}{2}erfc\left(\sqrt{\gamma}\right)p\left(\gamma\right)d\gamma$

where the probability density function of $\gamma$ is $p\left(\gamma\right) = \frac{1}{\bar{\gamma}}e^{\frac{-\gamma}{\bar{\gamma}}},\ \gamma \ge 0$ and $\bar{\gamma} = \frac{E_b}{N_0}$.

First, we are going to derive the result for the definite integral using a different notation, and then we will apply the result to the concrete expression obtained for the BER.
Using the definitions of the 'erfc' and 'erf' functions, we can directly compute the first derivative:

$erfc(x) = \frac{2}{\sqrt{\pi}}\int_x^{\infty}e^{-t^2}dt$

$erf(x) = \frac{2}{\sqrt{\pi}}\int_0^{x}e^{-t^2}dt$

$\frac{d}{dx}erfc(x) = -\frac{2}{\sqrt{\pi}}e^{-x^2}$

And also the definite integral, simply calculating it by parts:

$\begin{eqnarray}\int erfc(x) dx & = & x\, erfc(x) + \int x \frac{2}{\sqrt{\pi}}e^{-x^2}dx \\ & = & x\, erfc(x) -\frac{1}{\sqrt{\pi}}e^{-x^2}\end{eqnarray}$

In the following steps we will use these further straightforward results:

$\frac{d}{dx} erfc (\sqrt{x}) = -\frac{1}{\sqrt{\pi}}e^{-x}x^{-1/2}$

$\frac{d}{dx} erf (\sqrt{x}) = \frac{1}{\sqrt{\pi}}e^{-x}x^{-1/2}$

In total, we want to derive the following integral, which can be solved by parts:

$\int erfc(\sqrt{x})e^{-\frac{x}{a}}dx = -a\, erfc({\sqrt{x}})e^{-\frac{x}{a}} - \int (-a)e^{-\frac{x}{a}} (-)\frac{1}{\sqrt{\pi}}e^{-x}x^{-\frac{1}{2}}dx$

Let's find the last term applying the previous result for the 'erf' function, and doing a change of variable:

$\begin{eqnarray}\int e^{-\frac{x}{a}}e^{-x}x^{-\frac{1}{2}}dx & = & \int e ^{-x\left(\frac{a+1}{a}\right)}x^{-\frac{1}{2}}dx\\ & = & \int e^{-u} \left(\frac{a}{a+1}\right)^{-\frac{1}{2}}u^{-\frac{1}{2}}\left(\frac{a}{a+1}\right)du\\ & = & \left(\frac{a}{a+1}\right)^{\frac{1}{2}} \int e^{-u}u^{-\frac{1}{2}}du\\ & = & \sqrt{\pi}\sqrt{\frac{a}{a+1}}\, erf(\sqrt{u})\\ & = & \sqrt{\pi}\sqrt{\frac{a}{a+1}}\, erf\left(\sqrt{\frac{a+1}{a}}\sqrt{x}\right)\end{eqnarray}$

Finally, we conclude the expected result:

$\begin{eqnarray}\int erfc (\sqrt{x}) e^{-\frac{x}{a}}dx & = & -a\, erfc(\sqrt{x})e^{-\frac{x}{a}}-\frac{a}{\sqrt{\pi}}\sqrt{\pi}\sqrt{\frac{a}{a+1}}\,erf\left(\sqrt{\frac{a+1}{a}}\sqrt{x}\right)\\ & = & -a\, erfc(\sqrt{x})e^{-\frac{x}{a}}-{a}\sqrt{\frac{a}{a+1}}\,erf\left(\sqrt{\frac{a+1}{a}}\sqrt{x}\right) \end{eqnarray}$

With the above demonstration, we can easily derive the BER for a Rayleigh channel using BPSK modulation:

$\begin{eqnarray}P_{b} & = & \frac{1}{2\bar{\gamma}}\int_0^{\infty} erfc (\sqrt{\gamma})
e^{-\frac{\gamma}{\bar{\gamma}}}d\gamma \\ & = & \frac{1}{2\bar{\gamma}}\left[\bar{\gamma}\,erfc(\sqrt{\gamma})e^{-\frac{\gamma}{\bar{\gamma}}} + \bar{\gamma}\sqrt{\frac{\bar{\gamma}}{\bar{\gamma}+1}}\,erf\left(\sqrt{\frac{\bar{\gamma}+1}{\bar{\gamma}}}\sqrt{\gamma}\right) \right]_\infty^0\\ & = & \frac{1}{2} \left(1-\sqrt{\frac{\bar{\gamma}}{\bar{\gamma}+1}}\right)\\ & = & \frac{1}{2} \left(1-\sqrt{\frac{(E_b/N_0)}{(E_b/N_0)+1}}\right)\end{eqnarray}$

This forms the proof for BER for BPSK modulation in Rayleigh channel.
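The closed form above is easy to sanity-check by Monte-Carlo simulation of BPSK over a flat Rayleigh channel with coherent detection; the sample size and the Eb/N0 operating point below are arbitrary choices:

```python
import math
import random

def theoretical_ber(ebn0_db):
    # closed form derived above: Pb = (1/2) * (1 - sqrt(g / (1 + g)))
    g = 10 ** (ebn0_db / 10)
    return 0.5 * (1 - math.sqrt(g / (1 + g)))

def simulated_ber(ebn0_db, nbits=200_000, seed=7):
    rng = random.Random(seed)
    g = 10 ** (ebn0_db / 10)
    n0 = 1.0 / g                       # Eb = 1 for +/-1 BPSK symbols
    sigma_n = math.sqrt(n0 / 2)        # AWGN std dev per dimension
    sigma_h = math.sqrt(0.5)           # gives E[|h|^2] = 1 (Rayleigh fading)
    errors = 0
    for _ in range(nbits):
        x = 1.0 if rng.random() < 0.5 else -1.0
        hr, hi = rng.gauss(0, sigma_h), rng.gauss(0, sigma_h)
        yr = hr * x + rng.gauss(0, sigma_n)    # y = h*x + n (real part)
        yi = hi * x + rng.gauss(0, sigma_n)    # (imaginary part)
        metric = hr * yr + hi * yi             # Re(conj(h) * y): coherent detection
        if (metric > 0) != (x > 0):
            errors += 1
    return errors / nbits

theory = theoretical_ber(10)   # about 2.33e-2 at 10 dB
sim = simulated_ber(10)
```

At 10 dB the simulated error rate should land within a few percent of the formula, which is a quick regression test for both the derivation and the simulator.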
{"url":"http://www.dsplog.com/2009/01/22/derivation-ber-rayleigh-channel/","timestamp":"2014-04-19T07:26:46Z","content_type":null,"content_length":"128830","record_id":"<urn:uuid:0e965a89-1ce4-4325-b23c-852698458a21>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
Formulating Differential equations for RLC circuit with switches.

April 28th 2012, 02:16 AM
For Question 2 (attached) I am trying to figure out how to go about answering part a, but I am not confident with the physics, so I am unsure how to begin formulating the required equations. Any help is much appreciated.

April 28th 2012, 10:50 PM
Re: Formulating Differential equations for RLC circuit with switches.
Switch 1 is closed for a long time, so the current in the inductor builds up to its steady state value. This inductor current (which is easily calculated) is then the initial value for the differential equation involving R2 and L1. For convenience I have assumed that S1 opens and S2 closes at time equals zero.

April 30th 2012, 10:24 PM
Re: Formulating Differential equations for RLC circuit with switches.
Thanks, I think I get it. I have a very similar question: A fully charged inductor is hooked up to be in series with a resistor and capacitor. How do I figure out the differential equation for the current change over time for the inductor and the voltage change over time for the capacitor? I would be able to figure it out if it was just a resistor and inductor, but I can't find anywhere that explains it in terms of a previously charged

May 1st 2012, 02:26 PM
Re: Formulating Differential equations for RLC circuit with switches.
I'll give you a few pointers. When your question talks about the capacitor being charged or the inductor carrying current (inductors cannot be "charged"), they are giving you a clue about how to work out the initial conditions. YOU DON'T NEED THIS INFORMATION until after you have formulated your differential equation and found the general form of the solution. You then use the initial conditions to determine some constants in the final solution. You can also FORGET that the question is asking for specific information UNTIL you have solved your differential equation. Once you have a solution for the circuit current (or capacitor charge), then finding the answers to their questions will be easy enough.

FIRST you must formulate the differential equation for an RLC circuit. To start with you don't need any other information from the question. To formulate the DE, calculate the voltage across each component in terms of current i or charge q. Add these voltages together and set them equal to the supply voltage (zero in your case). The last piece of information that you need is that the "charge" on a capacitor is the integral of the current that passes through it. You can formulate your DE in terms of current or capacitor charge, it is YOUR CHOICE. If you choose current then your DE will have a differential term, the term iR and an integral term. If you choose charge then your DE will have a second differential term, a differential term and the term q/C. I hope that this helps.

May 1st 2012, 07:54 PM
Re: Formulating Differential equations for RLC circuit with switches.
Thanks, that makes more sense. Just to clarify the situation I am referring to, attached is the question. So for the current through the inductor, is the equation I = (I initial)e^(-t/(L/R)), and the voltage across the capacitor V = (V initial)(1 - e^(-t/RC))? Then just put in the L, R and C values to get the actual equations? Thanks so much for your help.

May 3rd 2012, 09:34 PM
Re: Formulating Differential equations for RLC circuit with switches.
I'm a bit confused what they are asking for, but the key DE I would be working from is: $L\frac{di}{dt}+iR+\frac 1C \int i dt = 0$; this is simply the sum of the voltages around the loop. You also have $V_C=\frac qC =\frac 1C \int i dt$. Note that there is also a differential equations page on this site. These electrical problems are so common that you would probably get more responses there.
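Following the responder's recipe (sum the component voltages around the loop and set them to zero), the charge form of the series-RLC equation can be integrated numerically and compared against the standard underdamped closed form. The component values below are made up for illustration:

```python
import math

# Series RLC with no source:  L q'' + R q' + q/C = 0,
# started from a charged capacitor: q(0) = q0, i(0) = q'(0) = 0.
L, R, C, q0 = 1.0, 0.5, 1.0, 1.0
alpha = R / (2 * L)                         # damping rate
wd = math.sqrt(1 / (L * C) - alpha ** 2)    # damped frequency (underdamped: R < 2*sqrt(L/C))

def exact_q(t):
    # textbook underdamped solution matching q(0) = q0, i(0) = 0
    return math.exp(-alpha * t) * (q0 * math.cos(wd * t)
                                   + alpha * q0 / wd * math.sin(wd * t))

def deriv(q, i):
    # first-order system: q' = i, i' = -(R/L) i - q/(L C)
    return i, -(R / L) * i - q / (L * C)

# classic fixed-step RK4 on (q, i), integrating to t = 10
q, i, dt = q0, 0.0, 0.001
for _ in range(10_000):
    k1q, k1i = deriv(q, i)
    k2q, k2i = deriv(q + dt / 2 * k1q, i + dt / 2 * k1i)
    k3q, k3i = deriv(q + dt / 2 * k2q, i + dt / 2 * k2i)
    k4q, k4i = deriv(q + dt * k3q, i + dt * k3i)
    q += dt / 6 * (k1q + 2 * k2q + 2 * k3q + k4q)
    i += dt / 6 * (k1i + 2 * k2i + 2 * k3i + k4i)
```

The numerical charge tracks the analytic solution to well below a microcoulomb here, which is a useful way to check a hand-derived solution before plugging in real component values.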
{"url":"http://mathhelpforum.com/advanced-applied-math/198039-formulating-differential-equations-rlc-circuit-switches-print.html","timestamp":"2014-04-18T13:14:19Z","content_type":null,"content_length":"9234","record_id":"<urn:uuid:79123fe6-2955-49c3-a477-636e9cb79473>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
Oscillatory properties of fourth order self-adjoint differential equations

Simona Fisnarova
Department of Mathematics, Masaryk University, Janackovo nam. 2a, 662 95 Brno, Czech Republic
E-mail: simona@math.muni.cz

Oscillation and nonoscillation criteria for the self-adjoint linear differential equation $$ (t^\alpha y^{\prime\prime})''-\frac{\gamma_{2,\alpha}}{t^{4-\alpha}}y=q(t)y,\quad \alpha \not \in \{1, 3\} \,, $$ where $$ \gamma_{2,\alpha}=\frac{(\alpha-1)^2(\alpha-3)^2}{16}$$ and $q$ is a real and continuous function, are established. It is proved, using these criteria, that the equation $$\left(t^\alpha y''\right)''-\left(\frac{\gamma_{2,\alpha}}{t^{4-\alpha}} + \frac{\gamma}{t^{4-\alpha}\ln^2 t}\right)y = 0$$ is nonoscillatory if and only if $\gamma \leq \frac{\alpha^2-4\alpha+5}{8}$.

AMS classification. 34C10.
Keywords. Self-adjoint differential equation, oscillation and nonoscillation criteria, variational method, conditional oscillation.
{"url":"http://www.maths.soton.ac.uk/EMIS/journals/AM/04-4/am1156.htm","timestamp":"2014-04-21T02:17:07Z","content_type":null,"content_length":"1751","record_id":"<urn:uuid:5c9fb75d-650e-4bcc-84da-3569f9329016>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
Parameterizing a curve in R^2

December 14th 2013, 01:26 PM
A circle with radius a is centered at the origin O. T is a point on the circle such that OT makes angle t with the positive x-axis. The tangent to the circle at T meets the x-axis at X. The point P = (x,y) is at the intersection of the vertical line through X and the horizontal line through T. I need to parametrize the curve C traced out by P, with angle t as the independent variable. (Let O denote the origin.)

My approach was this: The x component of P is the same as the x component of X, and the y component of P is the same as the y component of T. So y = a*sin(t), which is correct according to the answer key. To find the vector OX, I decided to find vectors OT and TX, then OT + TX = OX. OT = [a*cos(t) , a*sin(t)] is easy to find. TX would be perpendicular to OT and have length equal to the distance from T to X. TX = [a*sin(t) , -a*cos(t)] does not work since it has a fixed length (it only works for multiples of t = pi/4). TX = [sec(t)/a , -csc(t)/a] is the only other vector I could find that satisfies OT.TX = 0, but OT + TX = OX = [a*cos(t) + sec(t)/a , a*sin(t) - csc(t)/a] has a y-component that is generally not 0.

My problem is I don't know how to find TX, which means I can't find OX, and hence the x-component of P in regard to t. The answer key says P = [a*sec(t) , a*sin(t)].
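A quick numeric check confirms the answer key: intersecting the tangent line at T with the x-axis lands at X = (a*sec(t), 0), so P = (a*sec(t), a*sin(t)). The values of a and t below are arbitrary:

```python
import math

a, t = 2.0, 0.7
# point of tangency T on the circle of radius a
Tx, Ty = a * math.cos(t), a * math.sin(t)
# tangent direction at T is perpendicular to the radius OT
dx, dy = -math.sin(t), math.cos(t)
# intersect the tangent line T + s*(dx, dy) with the x-axis (y = 0)
s = -Ty / dy
Xx = Tx + s * dx            # simplifies algebraically to a/cos(t) = a*sec(t)
# P sits on the vertical through X and the horizontal through T
Px, Py = Xx, Ty
```

This also shows where the poster's TX attempts went wrong: TX must be perpendicular to OT with length |s| = a*tan(t), not a fixed length, which forces the x-intercept out to a*sec(t).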
Salt Tank - Differential Equation

This problem has been perplexing me all week; it doesn't look hard, but somehow I can't get the right answer. The question is:

A salt tank of capacity 500 gallons contains 200 gallons of water and 100 lb of salt. Water containing 1 lb/gallon of salt is pumped into the tank at 3 gallons/min. The tank is uniformly mixed, and at the other end water leaves the tank at 2 gallons/min. Set up an equation that predicts the amount of salt in the tank at any time up to the point when the tank overflows.

What I wrote was

dy/dx = rate in - rate out
dy/dx = 3*1 - 2*Q(t)/(200+t)

where Q(t) is the amount of salt in the tank, and 200+t represents the increasing volume of water in the tank. This equation doesn't give the correct answer. I have tried tacking on the initial 100 lb of salt so that it becomes dy/dx = 3*1 - 2*(Q(t)+100)/(200+t), but nothing seems to work. Can someone help me set up the correct equation?

Reply:

Okay, good start. Although it would be better to state what y and x mean! I would have guessed, since the problem asks you to set up an "equation that predicts the amount of salt in the tank at any time," that y(x) was the amount of salt, in pounds, in the tank at time x in minutes, except that you then declared that Q(t) is the amount of salt in the tank! Perhaps explicitly writing "y(x) is the amount of salt in the tank after x minutes" would help you remember what you are doing.

If y(x) is the amount of salt in the tank, in pounds, after x minutes, then, yes, dy/dx = rate in - rate out. Since you know "water containing 1 lb/gallon of salt is pumped into the tank at 3 gallons/min," you know that salt is coming in at 3 gallons/min * 1 lb/gallon = 3 lb/min. If the amount of salt in the tank is y(x) and the amount of water is 200 + (3-2)x = 200 + x gallons, then the concentration of salt is y/(200+x) lb/gallon, and so the rate out is (y/(200+x)) * 2 gallons/min = 2y/(200+x) lb/min. So

dy/dx = 3 - 2y/(200+x)

As I said, you have confused yourself by using different letters to represent the same things. It is a really good idea to write out exactly what each letter represents so you won't do that. In order that dy/dx represent the rate at which the amount of salt changes, it is necessary that y = amount of salt in the tank and x = time the salt has been coming in. But then you have used "Q(t)" and "t" to represent the same things!
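A quick numerical check of the equation dy/dx = 3 - 2y/(200+x), with the initial condition y(0) = 100 lb, confirms the setup. This is a sketch (hand-rolled RK4 integrator, helper names mine); the closed form comes from the integrating factor (200+x)²:

```python
def rhs(x, y):
    # dy/dx = rate in - rate out = 3 - 2*y/(200 + x)
    return 3.0 - 2.0 * y / (200.0 + x)

def rk4(f, x0, y0, x_end, n=3000):
    """Classical Runge-Kutta integration of y' = f(x, y)."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

def exact(x):
    # integrating factor (200+x)^2:  y = (200+x) - 4_000_000/(200+x)^2,
    # where C = 4_000_000 comes from the initial condition y(0) = 100
    return (200.0 + x) - 4.0e6 / (200.0 + x) ** 2

# the tank overflows when 200 + x = 500, i.e. at x = 300 minutes
print(rk4(rhs, 0.0, 100.0, 300.0), exact(300.0))  # both ~= 484 lb of salt
```

Both agree: at overflow the tank holds 500 - 4,000,000/500² = 484 lb of salt.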
[FOM] 486: Naturalness Issues
Harvey Friedman friedman at math.ohio-state.edu
Wed Mar 14 10:50:57 EDT 2012

I have now presented the main Pi01 independent statements at the Harvard math dept (March 1, 2012), and at the 90th birthday celebration of Patrick Suppes (March 10, 2012) at Stanford. Notes for the two talks can be found at http://www.math.osu.edu/~friedman.8/manuscripts.html , Lecture Notes 57, 58.

I have now obtained a certain amount of feedback from core mathematicians and other noted non logicians, as well as from some noted logicians, concerning

PROPOSITION 1. EVERY ORDER INVARIANT SUBSET OF Q[0,16]^32 HAS A COMPLETELY Z+up INVARIANT MAXIMAL SQUARE.

Here I want to discuss issues concerning the naturalness or appropriateness of Proposition 1, with the hope that this will generate interesting feedback from FOM subscribers. The key issue I want to address is the use of the special function Z+up:Q* into Q*.

Let's start with the self contained background for the discussion. We start with

This abstract set theoretic statement is immediate from Zorn's Lemma, and is in fact equivalent to Zorn's Lemma. But the following has a very explicit greedy construction proof:

We are moving towards

My general setup for the invariance conditions involves using the master space Q* of all finite subsequences of Q.

Let S be a subset of an ambient space K. Let R be any binary relation. We say that S is R invariant if and only if for all x,y in K with R(x,y), we have x in S implies y in S. We say that S is completely R invariant if and only if for all x,y in K with R(x,y), we have x in S iff y in S. We use two important special cases: R is an equivalence relation, and R is a function.

Let Q be the set of all rationals, and Q* be the set of all finite sequences of rationals. We use the order equivalence relation on Q*, and the function Z+up:Q* into Q*.

x,y in Q* are order equivalent if and only if they have the same length, and for all 1 <= i,j <= lth(x), x_i < x_j iff y_i < y_j.
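Both notions are concrete enough to code directly. A Python sketch (function names mine) of order equivalence, and of Z+up read as "add 1 to every coordinate greater than all coordinates outside Z+," with the vacuous case (no coordinates outside Z+) shifting every coordinate:

```python
from fractions import Fraction

def order_equivalent(x, y):
    """x, y are tuples of rationals; same length and same order pattern."""
    if len(x) != len(y):
        return False
    return all((x[i] < x[j]) == (y[i] < y[j])
               for i in range(len(x)) for j in range(len(x)))

def in_Z_plus(q):
    """q lies in Z+ (the positive integers)."""
    return q == int(q) and q >= 1

def Z_up(x):
    """Add 1 to every coordinate greater than all coordinates outside Z+;
    if no coordinate lies outside Z+, the condition holds vacuously for all."""
    outside = [q for q in x if not in_Z_plus(q)]
    cut = max(outside) if outside else None
    return tuple(q + 1 if (cut is None or q > cut) else q for q in x)

x = (Fraction(1, 2), 3, Fraction(5, 2), 7)
# 3 and 7 exceed the largest coordinate outside Z+ (5/2), so they get +1
assert Z_up(x) == (Fraction(1, 2), 4, Fraction(5, 2), 8)
assert order_equivalent((1, 3, 2), (Fraction(1, 2), 10, Fraction(3, 4)))
```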
Z+up(x) results from adding 1 to all coordinates greater than all coordinates outside Z+.

Nobody has questioned the naturalness of order equivalence on Q*. Some have questioned the naturalness of Z+up on Q*. We will address this concern below.

We use Q[0,n] as an ambient space, where Q[0,n] is the set of all rationals in [0,n].

PROPOSITION 2. EVERY ORDER INVARIANT SUBSET OF Q[0,n]^2k HAS A COMPLETELY Z+up INVARIANT MAXIMAL SQUARE.

Note that Proposition 1 is the special case where n = 16 and k = 32.

THEOREM 3. Proposition 2 is provably equivalent to Con(SRP) over ACA'. In particular, it is provable in SRP+ but not in ZFC (assuming ZFC is consistent).

A proof of Theorem 3 has been submitted for publication.

THEOREM 4. Proposition 1 is provable in SRP+ but not in ZFC (assuming ZFC is consistent).

SRP+ = ZFC + (for all k)(there is an ordinal with the k-SRP). SRP = ZFC + {there is an ordinal with the k-SRP}_k. The k-SRP asserts that every partition of the unordered k-tuples from lambda into two pieces has a homogeneous set stationary in lambda.

I would like to declare VICTORY on the basis of Theorems 3,4, together with the observation that Propositions 1,2 can be seen to be "concrete" in various senses. E.g., they are constructively equivalent to the satisfiability of sentences in predicate calculus, and also constructively equivalent to asserting that obvious associated algorithms can be carried out for any given number of finite steps.

QUESTION: Exactly how should the VICTORY claim be stated?

As usual, I have been in contact with various kinds of scholars concerning Propositions 1,2. Generally speaking, discussion focuses on the function Z+up:Q* into Q*. Here are some reactions.

1. Outright acceptance of Z+up:Q* into Q* as entirely natural and appropriate, both intrinsically and for use in complete invariance. This includes at least one multiple prize winning icon in core mathematics - possibly more than one.

2.
Another prominent core mathematician said that Propositions 1,2 are "artificially created", meant as a criticism, and seems to base this feeling on Z+up:Q* into Q*. My own view is that they are of course artificially created (by me) - but, after the fact, we recognize that they are entirely natural and inevitable. They are now part of a new, fully credible, and deep mathematical theory. In this sense, I succeeded in greatly speeding up history.

Pat Suppes views this work as providing counterexamples to statements like "any intelligible transparent concrete mathematical question can be answered in ZFC". My own view is that Suppes is right, and the work goes further. Since proofs are given using well studied extensions of ZFC, the "counterexamples" also provide important new mathematical theories.

3. Noted logicians agreeing that the statement is transparent, but expressing doubts that the mathematicians will accept Propositions 1,2 as natural, because of Z+up:Q* into Q*.

I have decided to focus on the Z+up:Q* into Q* issue in the following way. I have isolated some simple and desirable abstract properties of Z+up. I prove that any T:Q* into Q* obeying these properties can be used in Propositions 1,2 - but using large cardinals in a necessary way.

There are a few "fundamental" equivalence relations on Q*. We expect to have a theory of "fundamental" equivalence relations on Q*, but here we just present and use one of them.

For x,y in Q*, we define x ~ y if and only if
i. x,y are order equivalent.
ii. if x_i is in Z+ then y_i is in Z+.
iii. if x_i is not in Z+ then x_i = y_i.

We say that T:Q* into Q* is a UNIFORM TRANSFORMATION (with respect to ~) if and only if for all x in Q*, (x,Tx) ~ (Tx,TTx).

Note that Z+up:Q* into Q* is a uniform transformation. We can show that all uniform transformations are very much like Z+up.

PROPOSITION 5. LET T:Q* into Q* BE A UNIFORM TRANSFORMATION. EVERY ORDER INVARIANT SUBSET OF Q[0,n]^2k HAS A COMPLETELY T INVARIANT MAXIMAL SQUARE.

PROPOSITION 6.
LET T:Q* into Q* BE A UNIFORM TRANSFORMATION. EVERY ORDER INVARIANT SUBSET OF Q[0,16]^32 HAS A COMPLETELY T INVARIANT MAXIMAL SQUARE.

THEOREM 7. Proposition 5 is provably equivalent to Con(SRP) over ACA'. In particular, it is provable in SRP+ but not in ZFC (assuming ZFC is consistent). Proposition 6 is provable in SRP+ but not in ZFC (assuming ZFC is consistent).

QUESTION. Are reservations about Propositions 1,2 with Z+up:Q* into Q* reasonable? How far do Propositions 5,6 go to defuse reservations about Propositions 1,2 with Z+up:Q* into Q*?

KEY PHRASE: Invariant Maximality.

I use http://www.math.ohio-state.edu/~friedman/ for downloadable manuscripts. This is the 486th in a series of self contained numbered postings to FOM covering a wide range of topics in f.o.m. The list of previous numbered postings #1-449 can be found in the FOM archives at http://www.cs.nyu.edu/pipermail/fom/2010-December/015186.html

450: Maximal Sets and Large Cardinals II 12/6/10 12:48PM
451: Rational Graphs and Large Cardinals I 12/18/10 10:56PM
452: Rational Graphs and Large Cardinals II 1/9/11 1:36AM
453: Rational Graphs and Large Cardinals III 1/20/11 2:33AM
454: Three Milestones in Incompleteness 2/7/11 12:05AM
455: The Quantifier "most" 2/22/11 4:47PM
456: The Quantifiers "majority/minority" 2/23/11 9:51AM
457: Maximal Cliques and Large Cardinals 5/3/11 3:40AM
458: Sequential Constructions for Large Cardinals 5/5/11 10:37AM
459: Greedy Clique Constructions in the Integers 5/8/11 1:18PM
460: Greedy Clique Constructions Simplified 5/8/11 7:39PM
461: Reflections on Vienna Meeting 5/12/11 10:41AM
462: Improvements/Pi01 Independence 5/14/11 11:53AM
463: Pi01 independence/comprehensive 5/21/11 11:31PM
464: Order Invariant Split Theorem 5/30/11 11:43AM
465: Patterns in Order Invariant Graphs 6/4/11 5:51PM
466: RETURN TO 463/Dominators 6/13/11 12:15AM
467: Comment on Minimal Dominators 6/14/11 11:58AM
468: Maximal Cliques/Incompleteness 7/26/11 4:11PM
469: Invariant Maximality/Incompleteness 11/13/11 11:47AM
470: Invariant Maximal Square Theorem 11/17/11 6:58PM
471: Shift Invariant Maximal Squares/Incompleteness 11/23/11 11:37PM
472: Shift Invariant Maximal Squares/Incompleteness 11/29/11 9:15PM
473: Invariant Maximal Powers/Incompleteness 1 12/7/11 5:13AM
474: Invariant Maximal Squares 01/12/12 9:46AM
475: Invariant Functions and Incompleteness 1/16/12 5:57PM
476: Maximality, Choice, and Incompleteness 1/23/12 11:52AM
477: TYPO 1/23/12 4:36PM
478: Maximality, Choice, and Incompleteness 2/2/12 5:45AM
479: Explicitly Pi01 Incompleteness 2/12/12 9:16AM
480: Order Equivalence and Incompleteness
481: Complementation and Incompleteness 2/15/12 8:40AM
482: Maximality, Choice, and Incompleteness 2 2/19/12 7:43AM
483: Invariance in Q[0,n]^k 2/19/12 7:34AM
484: Finite Choice and Incompleteness 2/20/12 6:37AM
485: Large Large Cardinals 2/26/12 5:55AM

Harvey Friedman
Power Analysis for Logistic Regression: Examples for Dissertation Students & Researchers

It is hoped that a desired sample size of at least 150 will be achieved for the study. A power analysis was conducted to determine the number of participants needed in this study (Cohen, 1988). The primary model will be examined using logistic regression. The model will test whether the independent variables (the Multidimensional Health Locus of Control subscales: Internal, Chance, Powerful-Others, and Doctors) predict the dependent/criterion variable (attendance in a cardiac support group, Yes/No). The alpha for the test of this model will be set at .05. To achieve power of .80 with a medium effect size, a sample size of 300 is required to detect a significant model.

For logistic regression of a binary dependent variable using several continuous, normally distributed independent variables, at 80% power and a 0.05 significance level, detecting a change in Prob(Y = 1) from 0.050 at the mean of X to 0.100 when X is increased to one standard deviation above the mean requires a sample size of 300. This change corresponds to an odds ratio of 2.11. You may want to cite this reference: Hsieh, F. Y., Bloch, D. A., and Larsen, M. D. (1998). A Simple Method of Sample Size Calculation for Linear and Logistic Regression. Statistics in Medicine, Volume 17, pages 1623-1634.
Power     N     P0      P1      Odds Ratio   R-Squared   Alpha     Beta
0.80530   300   0.050   0.100   2.111        0.000       0.05000   0.19470

a. Power is the probability of rejecting a false null hypothesis. It should be close to one.
b. N is the size of the sample drawn from the population.
c. P0 is the response probability at the mean of X.
d. P1 is the response probability when X is increased to one standard deviation above the mean.
e. Odds Ratio is the odds ratio when P1 is on top. That is, it is [P1/(1-P1)]/[P0/(1-P0)].
f. R-Squared is the R² achieved when X is regressed on the other independent variables in the regression.
g. Alpha is the probability of rejecting a true null hypothesis.
h. Beta is the probability of accepting a false null hypothesis.

Conducting a logistic regression with other binary independent variables will require you to have almost 1000 participants to achieve 80% power. I imagine it would be difficult to achieve this for your dissertation. For this reason, only mention the power analysis above, requiring 300 participants for your four primary independent variables of interest.
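For reference, the simple formula from Hsieh, Bloch & Larsen (1998) can be reproduced in a few lines of Python (function name mine; this assumes a two-sided test and a single continuous predictor). It yields about 296, in line with the N = 300 reported by the software, which may apply further adjustments:

```python
from math import ceil, log
from statistics import NormalDist

def hsieh_sample_size(p1, p2, alpha=0.05, power=0.80, r2=0.0):
    """Hsieh, Bloch & Larsen (1998): sample size for logistic regression
    with one continuous, normally distributed predictor.
    p1: event probability at the mean of X
    p2: event probability at one SD above the mean of X
    r2: squared multiple correlation of X with the other covariates"""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)                  # two-sided test
    z_beta = z.inv_cdf(power)
    beta_star = log((p2 / (1 - p2)) / (p1 / (1 - p1)))  # log odds ratio
    n = (z_alpha + z_beta) ** 2 / (p1 * (1 - p1) * beta_star ** 2)
    return ceil(n / (1 - r2))                           # variance inflation

print(hsieh_sample_size(0.05, 0.10))  # -> 296, close to the table's N = 300
```

The `r2` term is the variance inflation adjustment for correlated covariates; with r2 = 0.5 the requirement roughly doubles.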
Stock Market Pseudoscience

July 1997, by Jon Blumenfeld

Do big fund managers and Wall Street experts use pseudoscience to make trading decisions? If you have any mutual funds, insurance policies, or any other managed investment, chances are that the managers use a pseudoscientific principle known as technical analysis to decide when to buy and sell your investments. Technical analysis is the practice of applying statistical analysis to the historical chart of the price of some security in order to predict future prices - without reference to any other information. It is not necessary to know what is happening in the world, what the supply and demand situation is, or even what security is being traded. All the data that is needed is in the chart. It is used by almost every Wall Street trader from time to time, even though many profess to know that it is pseudoscientific. It is recognized by the US regulatory authorities as one of the two basic types of trading - the other being fundamental analysis, which tries to predict future prices by placing an actual value on a security.

Investment literature is rife with advertisements from companies trying to sell technical systems. Many of them read like get-rich-quick schemes, promising guaranteed returns with very little risk. Ads in a recent issue of Futures Magazine [1] have copy like "$2500 a Day. Quit your job w/my new S&P Trading System… 6 mo@$1995," and "The big moves generate the big profits. Risk $500 to make $5000 and win on 8 out of 10 trades." KeyPoint, a video workshop, promises that you will trade "profitably 88% of the time," for only $95. Recurrence V, a computer program put out by Avco Financial Corp. of Greenwich, CT, promises to win 70% of the time, and Avco will refund your purchase price (several thousand dollars) if you follow the system for a year and fail to show a profit. The question is, if these systems are so good, why are they for sale?
The fact is that in commodities trading, a highly leveraged activity in which most small participants are using technical systems like the ones advertised in Futures, between 70% and 90% of individuals lose money [2].

What is Technical Analysis?

There are basically three different types of market analysis: Fundamental Analysis, which uses information like supply and demand or economic figures; Technical Analysis, which uses only information found in historical prices; and Arbitrage (or relative value), which compares the value of different securities against each other.

The first technician was Richard Donchian, who started his Wall Street career in 1930 and started the first commodity fund in 1949. In 1957 he introduced his first trend following system. Trend following systems are supposed to tell traders when a market is going into a prolonged period of movement in one direction or the other. Donchian's first system used moving averages to identify trends, and to this day many technical systems (including some of the most successful) are "trend followers."

Today, technical systems are developed using complicated statistical analyses and such fashionable "new" concepts as fractal geometry and chaos theory. Some systems attempt to model the markets on a known physical system, or on the Fibonacci sequence (1, 1, 2, 3, 5, 8, etc., where f(n) = f(n-1) + f(n-2)). Some systems try to capture market psychology by identifying so-called "high volume" prices where lots of trading has taken place.

Why is it pseudoscience?

You might, at this point, be saying to yourself, "Well, okay, it all sounds a little funny, but why is it pseudoscience? Maybe these guys are right." Maybe. Maybe not. First of all, is there anything wrong with trying to formalize a system based only on data and not on any theoretical principles? This is, after all, what such systems are: a formalization of prices based only on data gathered in the market, with no external theorizing.
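To make the trend-following idea described earlier concrete, here is a minimal moving-average crossover sketch in Python. It is purely illustrative - a generic crossover rule, not Donchian's actual system:

```python
def moving_average(prices, n):
    """Simple n-period moving average; None until enough data has accrued."""
    return [None if i + 1 < n else sum(prices[i + 1 - n:i + 1]) / n
            for i in range(len(prices))]

def crossover_signals(prices, short=3, long=5):
    """+1 when the short MA crosses above the long MA, -1 on the cross down."""
    s, l = moving_average(prices, short), moving_average(prices, long)
    signals = []
    for i in range(1, len(prices)):
        if None in (s[i - 1], l[i - 1]):
            continue  # not enough history yet
        if s[i - 1] <= l[i - 1] and s[i] > l[i]:
            signals.append((i, +1))   # "trend up": go long
        elif s[i - 1] >= l[i - 1] and s[i] < l[i]:
            signals.append((i, -1))   # "trend down": go short
    return signals

prices = [10, 10, 10, 10, 10, 11, 12, 13, 12, 11, 10, 9]
print(crossover_signals(prices))  # -> [(5, 1), (10, -1)]
```

A buy fires as the rise begins and a sell as it reverses - which is exactly why such rules look compelling in hindsight on any chart that happens to trend.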
Are there any examples of scientific principles being established this way? Yes, there are, and one of the most famous is the development of the laws of planetary motion by Johannes Kepler (1571 - 1630). Using only data on planetary positions, Kepler developed his three laws of planetary motion. Only later did Sir Isaac Newton show that the theory of gravitation led directly to Kepler's laws.

If Kepler can do it, why not today's technical trader? There are some important reasons. The system Kepler was looking at was very simple and was periodic, repeating itself over and over, in a predictable way. Kepler also had very few variables - really only one independent variable, the diameter of the planet's orbit. Despite this, it still took a leap of imagination for him, because he had to give up the Copernican idea of circular orbits in favor of elliptical orbits. This reduced his error in predicting the positions of the planets from 5 degrees (which many might have accepted, considering the difficulty in making observations without telescopes) down to 10 minutes, an error of just 0.05%.

Imagine if a modern-day Kepler tried to analyze a market. Could he write an equation to describe the observed data? Yes, he could. There are several mathematical tools for creating equations out of a set of points, such as Fourier analysis. Unfortunately, for these tools to be useful in predicting the future, the functions they describe must be periodic, like the orbit of a planet around the sun. Establishing this periodicity for markets has never been done. In stark contrast to the regular motion of the planets, the stock market is a chaotic system. In addition, while Kepler knew what his variables were, the technical system designer doesn't even know how to quantify the variables underlying the market. Those who have tried to apply Fourier analysis to markets have found this out to their cost.
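The failure mode is easy to demonstrate: fit the unique interpolating polynomial through a coin-flip random walk and it reproduces every observed point exactly, yet its one-step extrapolation says nothing about the next flip. A pure-Python sketch using the Lagrange form (helper name mine):

```python
import random

def lagrange_eval(ys, x):
    """Evaluate the unique polynomial through (0, ys[0]), ..., (n-1, ys[n-1])."""
    n = len(ys)
    total = 0.0
    for i in range(n):
        term = ys[i]
        for j in range(n):
            if j != i:
                term *= (x - j) / (i - j)
        total += term
    return total

random.seed(1)
walk = [0]                       # a +/-1 coin-flip random walk
for _ in range(12):
    walk.append(walk[-1] + random.choice([1, -1]))

# the fit is perfect on the data it was built from...
assert all(abs(lagrange_eval(walk, k) - walk[k]) < 1e-6 for k in range(len(walk)))
# ...but its "forecast" is typically far outside the walk's range, while the
# true next value can only be walk[-1] + 1 or walk[-1] - 1
print(lagrange_eval(walk, len(walk)))
```

Any in-sample measure of fit is 100% here; the out-of-sample prediction is worthless, which is the curve-fitting trap in one picture.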
This type of analysis, commonly called curve-fitting, leads us to one of the main reasons why technical analysis is pseudoscience. One can always describe a curve through a set of points, even random points, to any arbitrary degree of accuracy. This, however, will not predict the future behavior of the curve unless it actually describes a physical system. All evidence points to the fact that the stock market is a truly chaotic and random system, with minor trends and cycles obscured in the chaos. Using curves to predict the future of the market is like trying to predict coin tosses from a previous sequence of heads and tails. Some analysts claim to have found the physical system underlying the market, but if this were true they could make millions in trading, and by selling the secret they make the knowledge worthless. Furthermore, if there are fifty different analysts claiming that the market behaves like fifty different physical systems, can they all be right? Can more than one of them be right?

Why does it work sometimes?

Why, then, do these systems seem to work sometimes, some of them so much so that just about every market professional uses them at least sporadically? How do fund managers who use these systems to the exclusion of all other forms of analysis stay in business?

The most important reason why these systems seem to succeed is that the best "technical" traders are really very good "fundamental" traders. There are at least two areas where fundamental decisions are hidden in technical trading systems. First, most technicians believe that no system works in all markets. The decision of when to switch from one system to another is "highly subjective," as famous technician Richard Dennis said at a recent conference (MFA Forum '97). Second, decisions about how much of an investor's resources should be committed to a trade, and when to cut losses, are often subjective as well.
These subjective decisions are made by feel, but are really based on the fundamental external situation surrounding the market.

Another important reason is the "misinterpretation of statistical significance" - also known as luck. It's like the old ESP test where a thousand researchers are sent out into the world, each to test the ability of one subject to control the flip of a coin. Subjects are told to concentrate on heads. After the first flip, five hundred potential subjects are eliminated. After the second flip, another two hundred fifty fail. After nine flips there are one or two subjects who have successfully tossed heads every time and are considered to have psychokinetic powers. There are thousands of technical systems out there, and they only have to be right a little better than 50% of the time to be successful. How many of them will succeed for years just by luck?

In addition, statistics on the success or failure of technical systems are biased toward winners. It may be true that nearly all technical traders with a three-year track record are winners, but this only means that systems that fail are abandoned early, leaving only winning systems for the statisticians.

Widely available technicals tend to become self-fulfilling prophecies, with thousands of traders at least aware of the same set of levels. Traders don't care whether the system makes realistic predictions or whether their own behavior validates pseudoscience. They just want the best price every time.

Finally, it must be grudgingly admitted that there are some cycles in the markets. Some commodities have "seasonals," which are cycles that reflect different patterns of consumption in different seasons of the year. These seasonals are so well understood, though, that they can no longer be of use to technical systems. The market simply prices that information in too quickly.

So how well do these technical systems work?
We’ve already seen that on a daily basis, with thousands of traders watching the same levels, they can be too powerful to ignore. The general rule of thumb seems to be, the shorter the time frame and the more widely disseminated the system, the better chance it has of being successful. In the case of a technical system which truly captures something that no one else understands, wide dissemination tends to destroy the system. For instance, if there was a seasonal effect in soybeans that only I knew about, I could make money out of it, but if the great mass of bean traders found out about it, the price would quickly move to reflect the previously unseen effect. Wall Street is full of traders who call technical analysis “tea-leaf reading” and “astrology.” That doesn’t stop them from checking the technicals every morning. There are plenty of traders who have made careers on technical analysis, but unfortunately many of them have made those careers by selling their worthless systems to individuals only too willing to take a chance on hitting the big jackpot. Some have succeeded through luck, many have failed, and the best of them have brought hidden fundamental analysis into their supposedly rigidly technical trading regimen. Maybe someday someone will discover the great system underlying the markets, but until then, keep a careful eye on your money. 1 Futures Magazine, June 1997 2 ‘Managed Futures Accounts’, Chicago Mercantile Exchange, 1996 3 ‘How to Spot Profitable Timing Signals’, Commodity Price Charts, 1990 The following is from the Letters To the Editor concerning this article Stock Market Pseudoscience The only thing that this article proved to be pseudoscientific was the article itself. Statistically how many errors are needed in an article before it is classified as being unworthy of publication. I will not pick on the errors regarding the technical analysis as it would just be claimed to be bias in favor of TA and would frankly take up most of the day. 
So let’s pick on Mr. Blumenfeld’s use of “misinterpretation of statistical significance.” Having taken 1000 subjects and tossed a coin with the object of obtaining heads by telekinetic power, we eliminate those who achieve tails. “After nine flips there are one or two subjects who have successfully tossed heads every time and are considered to have psychokinetic powers.” Who has considered them to have these powers, the author? No rational scientist would. The fact that mathematically it fits does not have any relevance to the conclusion. This was typical of the approach : 1. Have a theory 2. Find some scientific or statistical commentary which gives the impression of research regardless of its worth. 3. Apply that commentary as if to show that it proved the theory. Pseudoscience at its best. Virtually all the conclusions drawn in the article are incorrect. All that I will say about myself is that I do not sell systems or books – I trade using technical analysis and have done so for 25 years. The profits prove it works and they have been checked for - Unsigned Author’s Response: I congratulate the author on his 25 year track record of using technical analysis. It should be noted that the article never said that this could not be done, but that there are alternative explanations to the technical analysis hypothesis, and while randomness and luck is one of them, it is by no means the most important. In my opinion, it is the use of other, non-technical information, either consciously or unconsciously, that is responsible for most successful, long-term technical trading. The article is called pseudoscientific, and we are told that it would take up most of the day to enumerate the errors contained in it, but the only example cited is completely misinterpreted and turned on its head. No rational scientist would consider my coin flipper to have psychokinetic powers? 
This is exactly my point - the story was intended to show how pseudoscientists often misinterpret lucky events. Where is the three-step process described in the letter used in the article? If "virtually all the conclusions in the article are incorrect," then name one and refute it. Accusations fly in this letter, such as that we will dismiss it by claiming bias, but we are not given the chance, because not a single fact (other than the author's biographical information) is provided. Who's being pseudoscientific here?

- Jon Blumenfeld

Dear Editor,

The above (Stock Market Pseudoscience by Jon Blumenfeld) was well written, and I enjoyed reading it. What was written is true, but let me add one caveat. The writer has ignored one fact that has been too compelling for those of us who have delved into the matter, and that is the trading of W.D. Gann. I would encourage you to take an objective look at what I have written on my personal website, and entertain the idea that there really is an order that can be useful in trading, but that no man has been able to find and implement it other than Gann, and no man has been able to determine what Gann did well enough to be useful, except for the ways that you describe in your writing. When you look at the website, I encourage you to take what I have written objectively. Don't let my poor writing style hamper the point that I am trying to make. Don't dismiss what I am trying to say because I come across as being a kook. My website address is http://pages.prodigy.net/davebarry

David Barry

Author's Response:

Thank you very much for your response to my article. I have spent a little time on your website, but I must say that nothing you have said there convinces me that Gann was anything other than a standard technical analyst. I don't know how he traded, or how to explain his high rate of success, but I believe that if a close investigation were possible, we'd find the same sorts of things I wrote in my article.
I have seen many traders with high rates of success, but further investigation usually tells us that there was something more than pure technical trading going on. Of course, the possibility exists that Gann really found THE system to describe the market, but now you say that we don't know exactly what he did or how he did it. It seems almost like a mystical Holy Grail, with Gann providing us just not quite enough information to reach it. If you have more concrete information about Gann, or can direct me to something, I'd appreciate it. I certainly won't dismiss you out of hand, and I'm always interested in this fascinating subject.

- Jon Blumenfeld
st: how to generate groups based on some characteristics and obtain the mean/median value for each group
From "Yi, Bingsheng" <byi@coba.usf.edu>
To statalist@hsphsun2.harvard.edu
Subject st: how to generate groups based on some characteristics and obtain the mean/median value for each group
Date Tue, 18 Jun 2002 19:11:01 -0400
Dear Statalisters, I wonder whether you will help me figure out the code to solve the following problem: I have 12 years of panel data containing these four variables: Tobin's q, size, 4-digit industry code (ind4), and id. For each year, I want to make some adjustments in one variable (Tobin's q) based on the other two variables (industry and size). First I need to ensure that there are at least 10 firms within each industry. If the number of firms within a 4-digit industry code is less than 10, I use the 3-digit industry code generated by gen str4 ind3=substr(ind4,1,3), and see whether the number of firms with the same 3-digit industry code is greater than or equal to 10; if not, I generate and use the 2-digit industry code. So in the end there are at least 10 firms within an industry (classified by the 4-digit, 3-digit, 2-digit, or 1-digit industry code). The problem is how to get and record the number of firms in each industry. Then within each industry, I want to classify firms into three groups based on size: the smallest group contains the smallest 30% of firms in size within that industry, the middle group contains the middle 40% of firms in size (30% to 70%), and the large group contains firms whose size belongs to the largest 30% in that industry. Then I get the median or mean value of Tobin's q for each of these groups within an industry.
The industry-size adjusted Tobin's q = Tobin's q of a firm in an industry - the mean Tobin's q of the group to which the firm belongs within the same industry. For example, if IBM belongs to the largest size group in the computer industry, then the industry-size adjusted Tobin's q of IBM is equal to Tobin's q of IBM - the mean/median value of the largest size group within the computer industry. But I don't know how to generate these groups and obtain the industry-size adjusted Tobin's q for a sample firm. I will greatly appreciate your help!!!
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
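The question maps onto a small algorithm. Below is a hedged pure-Python sketch on made-up data (not Stata) of the three steps: coarsen industry codes until each has at least 10 firms, cut each industry into 30/40/30 size groups, and demean Tobin's q within each group:

```python
# Hedged sketch on toy data: every firm, code, size, and q value below is
# invented purely to pin down the grouping logic being asked about.
import random

random.seed(1)
# toy firms: (id, 4-digit industry code, size, Tobin's q)
firms = [(i, random.choice(["3571", "3572", "2834"]),
          random.uniform(1, 100), random.uniform(0.5, 3.0))
         for i in range(60)]

def coarsen(code):
    """Trim digits off an industry code until at least 10 firms share it."""
    for k in (4, 3, 2, 1):
        prefix = code[:k]
        if sum(f[1].startswith(prefix) for f in firms) >= 10:
            return prefix
    return code[:1]

groups = {}
for f in firms:
    groups.setdefault(coarsen(f[1]), []).append(f)

adjusted = {}
for ind, members in groups.items():
    members.sort(key=lambda f: f[2])            # sort by size
    n = len(members)
    c1, c2 = int(0.3 * n), int(0.7 * n)         # 30/40/30 cut points
    for grp in (members[:c1], members[c1:c2], members[c2:]):
        if not grp:
            continue
        qbar = sum(f[3] for f in grp) / len(grp)
        for f in grp:
            adjusted[f[0]] = f[3] - qbar        # industry-size adjusted q

# by construction, adjusted q averages to zero within every size group
```

In Stata itself these steps would typically go through bysort and egen (cell counts via _N, group means via egen), but the sketch above only pins down the logic, not the exact commands.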
Homework Help
Posted by Anonymous on Sunday, September 23, 2007 at 11:20pm.
This is a statistics problem. A laboratory test for the detection of a certain disease gives a positive result 5 percent of the time for people who do not have the disease. The test gives a negative result 0.3 percent of the time for people who have the disease. Large-scale studies have shown that the disease occurs in about 2 percent of the population. What is the probability that a person selected at random would test positive for this disease? For this one, I got 0.0194, so 1.94%, but I wasn't sure if the answer is correct. What is the probability that a person selected at random who tests positive for the disease does not have the disease? This one was confusing to me… I got 0.71637, or 71.637%, but I doubt this is right. Please help!
• math - drwls, Monday, September 24, 2007 at 9:20am
Consider the four possibilities and their probabilities.
has disease and tests positive: (0.02)x(0.997) = 0.0199
has disease but tests negative: (0.02)x(0.003) = 0.0001
has no disease and tests negative: (0.98)x(0.95) = 0.9310
has no disease but tests positive: (0.98)x(0.05) = 0.0490
Note that the probabilities add up to 1.000, as they should. The answer to the first question (positive test probability) is 0.0199 + 0.0490 = 0.0689. The answer to the second question is: 0.0490/(0.0199+0.0490) = 0.7112. This is a rather large ratio of "false positives" and comes about because the disease is rare (2% of population) and there are many more false positives than "true" positives.
• math - Anonymous, Monday, September 24, 2007 at 12:09pm
That makes perfect sense. Thank you very much.
• math - Anon Ymous, Friday, November 12, 2010 at 8:04pm
hey, we just did this problem in class today...this is from the ap stats 1197 free response prep...the first question's ans. is 0.06894 and the second ques.'s ans. is 0.710763
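The arithmetic in the accepted answer can be checked mechanically. The sketch below just re-derives the two requested probabilities from the numbers in the problem statement, using the law of total probability and Bayes' rule:

```python
# All three inputs come straight from the problem statement.
p_disease = 0.02          # prevalence
p_pos_given_no = 0.05     # false-positive rate
p_neg_given_dis = 0.003   # false-negative rate

# P(positive) = P(true positive) + P(false positive)
p_pos = (p_disease * (1 - p_neg_given_dis)
         + (1 - p_disease) * p_pos_given_no)

# Bayes: P(no disease | positive)
p_no_given_pos = ((1 - p_disease) * p_pos_given_no) / p_pos

print(round(p_pos, 5))           # 0.06894
print(round(p_no_given_pos, 4))  # 0.7108
```

This matches the unrounded answers given in the last reply (0.06894 and 0.710763); the first reply's 0.0689 and 0.7112 differ only by rounding inside the intermediate step.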
Solving an Exponential equation symbolically using Logarithms
August 31st 2010, 04:07 PM #1
I've been working on this problem and I just can't seem to figure it out. Solve $\displaystyle Pa^t = Vg^t$ for $\displaystyle t$. I just keep running into dead ends where I have t on both sides or both t's on one side.
August 31st 2010, 04:43 PM #2
What have you tried so far?
August 31st 2010, 04:54 PM #3
Is this ok?
$\displaystyle Pa^t = Vg^t$
$\displaystyle \frac{P}{V} = \frac{g^t}{a^t}$
$\displaystyle \frac{P}{V} = \left(\frac{g}{a}\right)^t$
$\displaystyle t = \log_{\frac{g}{a}}\frac{P}{V}$
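A quick numerical spot-check of the closed form above. The values of P, V, a, g are made up; any positive values with g not equal to a work:

```python
import math

# Arbitrary sample values (made up for the check).
P, V, a, g = 500.0, 100.0, 1.05, 1.08

# log base (g/a) of (P/V), via the change-of-base formula
t = math.log(P / V) / math.log(g / a)

# confirm that this t really satisfies P*a**t == V*g**t
assert math.isclose(P * a**t, V * g**t)
print(round(t, 2))   # about 57.13 for these sample values
```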
Developing Mathematical Ideas (DMI) is a professional development curriculum designed to help teachers think through the major ideas of elementary and middle-school mathematics and examine how students develop those ideas. At the heart of the materials are sets of classroom episodes (cases) illustrating student thinking as described by their teachers. In addition to case discussions, the curriculum offers teachers opportunities: to explore mathematics in lessons led by facilitators; to share and discuss the work of their own students; to view and discuss DVD clips of mathematics classrooms; to write their own classroom cases; to analyze lessons taken from innovative elementary mathematics curricula; and to read overviews of related research. DMI seminars are designed to bring together teachers from Kindergarten through middle school to: Learn mathematics content Learn to recognize key mathematical ideas with which their students are grappling Learn to support the power and complexity of student thinking Learn how core mathematical ideas develop across the grades Learn how to continue learning about children and mathematics DMI consists of seven modules, each designed for eight 3-hour sessions of professional development. Building a System of Tens: Calculation with Whole Numbers and Decimals Participants explore the base-ten structure of the number system, consider how that structure is exploited in multidigit computational procedures, and examine how basic concepts of whole numbers reappear when working with decimals. (New edition in November 2009.) Making Meaning for Operations: In the Domain of Whole Numbers and Fractions Participants examine the actions and situations modeled by the four basic operations. The seminar begins with a view of young children’s counting strategies as they encounter word problems, moves to an examination of the four basic operations on whole numbers, and revisits the operations in the context of rational numbers. 
(New edition in November 2009) Examining Features of Shape Participants examine aspects of 2D and 3D shapes, develop geometric vocabulary, and explore both definitions and properties of geometric objects. The seminar includes a study of angle, similarity, congruence, and the relationships between 3D objects and their 2D representations. Measuring Space in One, Two and Three Dimensions Participants examine different attributes of size, develop facility in composing and decomposing shapes, and apply these skills to make sense of formulas for area and volume. They also explore conceptual issues of length, area, and volume, as well as their complex inter-relationships. Working with Data Participants work with the collection, representation, description, and interpretation of data. They learn what various graphs and statistical measures show about features of the data, study how to summarize data when comparing groups, and consider whether the data provide insight into the questions that led to data collection. Reasoning Algebraically about Operations: In the Domain of Whole Numbers and Integers Participants examine generalizations at the heart of the study of operations in the elementary grades. They express these generalizations in common language and in algebraic notation, develop arguments based on representations of the operations, study what it means to prove a generalization, and extend their generalizations and arguments when the domain under consideration expands from whole numbers to integers. Patterns, Functions, and Change Participants discover how the study of repeating patterns and number sequences can lead to ideas of functions, learn how to read tables and graphs to interpret phenomena of change, and use algebraic notation to write function rules. With a particular emphasis on linear functions, participants also explore quadratic and exponential functions and examine how various features of a function are seen in graphs, tables, or rules. 
Materials for Participants: The Casebooks Each seminar is built around a casebook containing 25 to 30 cases, grouped into seven chapters, which track a particular mathematical theme from kindergarten into middle school. Casebooks begin with an overview of the seminar, and each chapter contains an introduction intended to orient the reader to the major theme of the cases in that chapter. They conclude with an essay summarizing the ideas explored in the seminar through the lens of educational research or from the perspective of a mathematician. Materials for Facilitators The DMI Facilitator’s Guides include detailed agendas for each session. Other components are intended to help facilitators: understand the major ideas to be explored in each session; identify particular strategies useful in leading case discussions and mathematics activities; plan seminar sessions; and think through issues of teacher change. The DMI DVD Cases show students in a wide variety of classroom settings with children and teachers of different races and ethnic groups. While written cases allow users to examine student thinking at their own pace, returning, if necessary, to ponder and analyze particular passages, the DVD clips offer users the opportunity to listen to real student voices in real time and provides rich images of classrooms organized around student thinking. Order materials from Pearson On line: www.pearsonschool.com/dmi By phone: 1-800-321-3106 By fax: 1-800-393-3156 By mail: 145 S. Mount Zion Road PO Box 2500 Lebanon, IN 46052 © 2009 – All Rights Reserved. Mathematics Leadership Programs(MLP)
Why does this blow up? I'm looking at the BVP: [tex]y'' + ay' + e^{ax}y = 1[/tex], with y(0) = 0 and y(10) = 0. The numerical solution blows up at certain values of [tex]a[/tex]. For example, a near 0.089 and a near 0.2302. Why does this happen and how do I predict it? Erm. I found the problem. Near those values of 'a', there exists a zero eigenvalue of the linear operator. That is, the homogeneous problem [tex]u'' + au' + e^{ax}u = 0[/tex] has a nontrivial solution [tex]u^*[/tex] satisfying both boundary conditions, so the inhomogeneous problem [tex]y'' + ay' + e^{ax}y = 1[/tex] is (nearly) singular there, and the numerical solution picks up an arbitrarily large multiple of [tex]u^*[/tex], which causes the blowup. Is this correct? It's been a while since I've done Sturm-Liouville stuff.
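A shooting-method check of this explanation (a hedged sketch, not from the thread): the singular values of a are exactly where the homogeneous solution with y(0) = 0 also vanishes at x = 10, so bracketing a sign change of y(10; a) locates one.

```python
import math

def shoot(a, n=4000):
    """Integrate the homogeneous equation y'' + a*y' + exp(a*x)*y = 0 with
    y(0) = 0, y'(0) = 1 over [0, 10] by classical RK4 and return y(10).
    The BVP operator has a zero eigenvalue exactly when y(10) = 0."""
    h = 10.0 / n
    x, y, v = 0.0, 0.0, 1.0
    f = lambda x, y, v: (v, -a * v - math.exp(a * x) * y)
    for _ in range(n):
        k1y, k1v = f(x, y, v)
        k2y, k2v = f(x + h/2, y + h/2 * k1y, v + h/2 * k1v)
        k3y, k3v = f(x + h/2, y + h/2 * k2y, v + h/2 * k2v)
        k4y, k4v = f(x + h, y + h * k3y, v + h * k3v)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        x += h
    return y

# y(10) changes sign as `a` crosses a singular value; bisection pins it down
lo, hi = 0.08, 0.10
flo = shoot(lo)
for _ in range(30):
    mid = 0.5 * (lo + hi)
    fmid = shoot(mid)
    if flo * fmid <= 0:
        hi = mid
    else:
        lo, flo = mid, fmid
print(round(0.5 * (lo + hi), 3))   # close to the 0.089 reported above
```

The same bracketing near a = 0.23 isolates the second reported value, and a coarse scan of a over a wider range may reveal further singular values in between.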
Student Success Stories - Solano Math Guide
Many students hear teachers talk about how to be successful in their math course, though often students don't really listen. Are you one of them? Would you pay more attention if a student from the course told you what they did to pass the course? Well, welcome! On this page you will find actual Solano students speaking about what they did to be successful in a few of our most popular math courses:
• Math Success in Intermediate Algebra
• General Math Success (Ruben)
• Math Success in Arithmetic through Intermediate Algebra
• Math Success in Algebra
• Success in Algebra (Brant Nelson)
• Elementary Algebra Success
• Online Math Success
The SIAM Journal on Computing contains research articles on the mathematical and formal aspects of computer science and nonnumerical computing. Topics include analysis and design of algorithms, data structures, computational complexity, computational algebra, computational aspects of combinatorics and graph theory, computational geometry, computational robotics, the mathematical aspects of programming languages, artificial intelligence, computational learning, databases, information retrieval, cryptography, networks, distributed computing, parallel algorithms, and computer
texm3x3spec - ps
Performs a 3x3 matrix multiply and uses the result to perform a texture lookup. This can be used for specular reflection and environment mapping. texm3x3spec must be used in conjunction with two texm3x3pad - ps instructions.
texm3x3spec dst, src0, src1
• dst is the destination register.
• src0 and src1 are source registers.
Pixel shader versions: supported in 1_1, 1_2, and 1_3; not available in 1_4, 2_0, 2_x, 2_sw, 3_0, or 3_sw.
This instruction performs the final row of a 3x3 matrix multiply, uses the resulting vector as a normal vector to reflect an eye-ray vector, and then uses the reflected vector to perform a texture lookup. The shader reads the eye-ray vector from a constant register. The 3x3 matrix multiply is typically useful for orienting a normal vector to the correct tangent space for the surface being rendered.
The 3x3 matrix is comprised of the texture coordinates of the third texture stage and the two preceding texture stages. The resulting post-reflection vector (u,v,w) is used to sample the texture on the final texture stage. Any texture assigned to the preceding two texture stages is ignored.
This instruction must be used with two texm3x3pad instructions. Texture registers must use the following sequence.
tex t(n)                      // Define tn as a standard 3-vector (tn must
                              // be defined in some way before it is used)
texm3x3pad t(m), t(n)         // where m > n
                              // Perform first row of matrix multiply
texm3x3pad t(m+1), t(n)       // Perform second row of matrix multiply
texm3x3spec t(m+2), t(n), c0  // Perform third row of matrix multiply
                              // Then do a texture lookup on the texture
                              // associated with texture stage m+2
The first texm3x3pad instruction performs the first row of the multiply to find u'.
u' = TextureCoordinates(stage m)UVW * t(n)RGB
The second texm3x3pad instruction performs the second row of the multiply to find v'.
v' = TextureCoordinates(stage m+1)UVW * t(n)RGB
The texm3x3spec instruction performs the third row of the multiply to find w'.
w' = TextureCoordinates(stage m+2)UVW * t(n)RGB
The texm3x3spec instruction then does a reflection calculation.
(u'', v'', w'') = 2*[(N*E)/(N*N)]*N - E
  // where the normal N is given by
  // N = (u', v', w')
  // and the eye-ray vector E is given by the constant register
  // E = c# (Any constant register--c0, c1, c2, etc.--can be used)
Lastly, the texm3x3spec instruction samples t(m+2) with (u'',v'',w'') and stores the result in t(m+2).
t(m+2)RGBA = TextureSample(stage m+2)RGBA using (u'', v'', w'') as coordinates
Here is an example shader with the texture maps and the texture stages identified.
tex t0                 // Bind texture in stage 0 to register t0 (tn must
                       // be defined in some way before it is used)
texm3x3pad t1, t0      // First row of matrix multiply
texm3x3pad t2, t0      // Second row of matrix multiply
texm3x3spec t3, t0, c# // Third row of matrix multiply to get a 3-vector
                       // Reflect 3-vector by the eye-ray vector in c#
                       // Use reflected vector to lookup texture in
                       // stage 3
mov r0, t3             // Output the result
This example requires the following texture stage setup.
• Stage 0 is assigned a texture map with normal data. This is often referred to as a bump map. The data is (XYZ) normals for each texel. Texture coordinates at stage n define where to sample this normal map.
• Texture coordinate set m is assigned to row 1 of the 3x3 matrix. Any texture assigned to stage m is ignored.
• Texture coordinate set m+1 is assigned to row 2 of the 3x3 matrix. Any texture assigned to stage m+1 is ignored.
• Texture coordinate set m+2 is assigned to row 3 of the 3x3 matrix. Stage m+2 is assigned a volume or cube texture map. The texture provides color data (RGBA).
• The eye-ray vector E is given by a constant register E = c#.
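The reflection step can be sanity-checked outside the shader. This is a hedged plain-Python rendition of the formula (u'', v'', w'') = 2*[(N*E)/(N*N)]*N - E with made-up sample vectors, not shader code:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reflect(n, e):
    """Reflect eye-ray e about normal n: R = 2*[(n.e)/(n.n)]*n - e."""
    s = 2.0 * dot(n, e) / dot(n, n)
    return tuple(s * nc - ec for nc, ec in zip(n, e))

N = (0.0, 0.0, 1.0)   # sample surface normal (z-up), made up
E = (1.0, 2.0, 3.0)   # sample eye-ray vector, made up
R = reflect(N, E)
print(R)              # (-1.0, -2.0, 3.0): tangential part flips, normal part kept

# reflection preserves length and is its own inverse:
assert abs(dot(R, R) - dot(E, E)) < 1e-12
assert all(abs(a - b) < 1e-12 for a, b in zip(reflect(N, R), E))
```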
Re: st: Getting stuck at outreg
From Ramani Gunatilaka <Ramani.Gunatilaka@buseco.monash.edu.au>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Getting stuck at outreg
Date Mon, 01 Dec 2003 12:56:48 +0000
Hi everybody, Sorry to come up with yet another query so soon after my last. But I am stuck. I am running regression-based decompositions of consumption for three survey years. I want to use outreg so the output is formatted nicely. Here is part of my code.
local fname3 "c:\data95\hhineqvar95"
local fname2 "c:\data90\hhineqvar90"
local fname1 "c:\data85\hhineqvar85"
local l=1
while `l'<=3 {
drop _all
use `fname`l'', clear
regress y var1 var2 var3 etc.
if `l'<2{
outreg using ineqdec.out, se bdec(2,5,2) bracket noaster ctitle("1985") title ("Decomposition of inequality by individual regressor") replace
if `l'>1 & `l'<3{
outreg using ineqdec.out, se bdec(2,5,2) bracket noaster ctitle("1990") append
if `l'>2 {
outreg using ineqdec.out, se bdec(2,5,2) bracket noaster ctitle("1995") append
Thereafter the code goes on to save the Beta coefficients of the regression, multiply the variables by the Betas, regress those variables on y, and save those Betas. I will spare you those details. My problem is that the loop runs once right through, producing the expected output, then it produces the regression output for the second run (1990), but gets stuck somewhere at the second outreg. I set trace on, too, but couldn't figure out what went wrong. The message I get when the whole thing stalls is, invalid 'and'
I suspect something is wrong with my if statement. I replaced the second with - if `l'=2{ - but that didn't work either. Would anybody have any ideas? Thanks so much,
Fountain Valley Math Tutor ...I hold a Bachelors of Arts in piano performance and composition from Santa Clara University, California and a Master of Music in composition from the College-Conservatory of Music at the University of Cincinnati. I enjoy working with young composition students and assisting them in building the ... 16 Subjects: including algebra 1, Spanish, English, piano ...I currently have a 3.85 GPA in my college courses at IVC. My high school GPA for grades 10-12 was 3.79 while I simultaneously participated in sports and other extracurricular activities. I have been babysitting since the age of 12, and have always been happy to help with homework. 11 Subjects: including algebra 1, prealgebra, reading, English ...Students in high school and college greatly benefit from learning to prioritize, starting with information which will provide the quickest benefit and ending with information that is the least vital. The goal of art history is the discerning appreciation and enjoyment of art. Originally I studi... 27 Subjects: including SAT math, algebra 1, algebra 2, ACT Math ...I am currently working on obtaining my California Teaching Credentials at National University. I am a third generation math teacher who has a passion for teaching younger generations. I'm very enthusiastic when it comes to Mathematics, and have a commitment to educating children. 3 Subjects: including algebra 1, prealgebra, trigonometry ...Aperture, time and ISO are three most basic knowledge of photography. Besides that, I know lots of tricky skills of using exposure compensation, lens filter, flash to create wonderful photos. I also learned the fundamental knowledge of lens. 7 Subjects: including SAT math, Chinese, photography, Microsoft Excel
Microeconomics: output levels with 2 plants?
February 7th 2009, 06:50 PM #1
A firm has the average cost equation AC = 10 - (1/3)q + (1/180)q^2. When output = 0, AC is 10. When output = 30, average cost is minimized at 5 dollars. Suppose the owner buys another similar plant, so the business now has two of the same plants. What is the average cost curve for the expanded operation? I think the new AC function will be AC = 20 - (1/3)q + (1/180)q^2. Correct? Also, prove that the output level at which it pays to switch from one plant to two plants is q = 40.
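A quick numeric check of both parts of the question (hedged: it assumes total output is split evenly across the two identical plants, the usual textbook treatment for identical U-shaped average cost curves):

```python
# Each plant produces q/2, so the two-plant average cost is AC1(q/2),
# not 20 - (1/3)q + (1/180)q^2.
def ac1(q):              # one plant
    return 10 - q / 3 + q**2 / 180

def ac2(q):              # two identical plants, q/2 each
    return ac1(q / 2)

assert abs(ac1(40) - ac2(40)) < 1e-9   # the two curves cross at q = 40
assert ac1(30) < ac2(30)               # below q = 40, one plant is cheaper
assert ac1(50) > ac2(50)               # above q = 40, two plants are cheaper
print(round(ac1(40), 4))               # 5.5556
```

Setting ac1(q) = ac2(q) by hand gives -q/3 + q^2/180 = -q/6 + q^2/720, i.e. 3q^2 = 120q, so q = 40, which is exactly what the assertions confirm.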
Generating Logical Expressions From Positive and Negative Examples via a Branch-and-Bound Approach - Mathematical Programming, 1992
Cited by 38 (2 self)
"In this paper we describe an interior point mathematical programming approach to inductive inference. We list several versions of this problem and study in detail the formulation based on hidden Boolean logic. We consider the problem of identifying a hidden Boolean function F : {0,1}^n -> {0,1} using outputs obtained by applying a limited number of random inputs to the hidden function. Given this input-output sample, we give a method to synthesize a Boolean function that describes the sample. We pose the Boolean Function Synthesis Problem as a particular type of Satisfiability Problem. The Satisfiability Problem is translated into an integer programming feasibility problem, that is solved with an interior point algorithm for integer programming. A similar integer programming implementation has been used in a previous study to solve randomly generated instances of the Satisfiability Problem. In this paper we introduce a new variant of this algorithm, where the Riemannian metric used..."

- INFORMS Journal on Computing, 2002
Cited by 13 (11 self)
"This paper describes a method for learning logic relationships that correctly classify a given data set. The method derives from given logic data certain minimum cost satisfiability problems, solves these problems, and deduces from the solutions the desired logic relationships. Uses of the method include data mining, learning logic in expert systems, and identification of critical characteristics for recognition systems. Computational tests have proved that the method is fast and effective."

- 1998
Cited by 7 (5 self)
"Two types of errors can be associated with classifying data into two categories A and B: misclassifying an item in A as one of B and, conversely, misclassifying an item in B as one of A. Tight control over these two errors is needed in many practical situations. For example, an error of one type may have far more serious consequences than one of the other type. Most previous work on two-class classifiers does not allow for such control. In this paper, we describe a general approach that supports tight error control for two-class classification and that can utilize almost any two-class classification method as part of the decision mechanism. The main idea is to construct from the given training data a family of classifiers and then, using the training data once more, to estimate two distributions of certain vote totals. The error control is achieved via the two estimated distributions. The control is effective if the estimates of the two distributions are close to the true distributions..."

- 2001
"In this paper we describe the application of a new learning tool for the diagnosis of hepatocellular carcinoma. The method adopted operates in the logic domain and presents several interesting features for the development of medical diagnostic systems. We consider a database of 128 patients, 64 of which suffered from hepatocellular carcinoma, while the others suffered from cirrhosis but not from hepatocellular carcinoma. Each patient is described by a number of attributes measured in a non-invasive way. The system, after the training, is able to correctly separate the 64 patients affected by cirrhosis from the other 64 affected by hepatocellular carcinoma and is now ready to produce automatic diagnoses for new patients. The hepatocellular carcinoma is one of the most widely spread malignant tumors in the world. The ability to detect the tumor in its early stages in a minimally invasive way is crucial to the treatment of patients with this disease."
AP Statistics Chapter 1 Terms
18 terms · Terms for AP Statistics
statistics - the science of collecting, analyzing and drawing conclusions from data.
population of interest - the entire collection of individuals or objects about which information is desired.
sample - a subset of the population, selected for study in some prescribed manner.
descriptive statistics - methods for organizing and summarizing data.
inferential statistics - generalizing from a sample to the population from which it was selected.
univariate data set - a data set consisting of observations on a single attribute.
categorical data (qualitative) - if the individual observations are categorical responses.
numerical data (quantitative) - if each observation is a number.
bivariate data set - a data set consisting of observations on two different attributes.
multivariate data set - a data set consisting of observations for each of two or more attributes.
numerical data: discrete - if the possible values are isolated points on the number line.
numerical data: continuous - if the set of possible values forms an entire interval on the number line.
frequency distribution for categorical data - a table that displays the possible categories along with the associated frequencies or relative frequencies.
frequency - for a particular category, it is the number of times the category appears in the data set.
relative frequency - for a particular category, it is the fraction or proportion of the time that the category appears in the data set; it is calculated as: relative frequency = frequency/number of observations in the data set.
relative frequency distribution - when the table includes relative frequencies.
bar chart - a graph of the frequency distribution of categorical data.
dotplot - a simple way to display numerical data when the data set is reasonably small.
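The frequency and relative-frequency definitions on the cards reduce to a few lines of code; the categorical data below is made up:

```python
from collections import Counter

data = ["dog", "cat", "dog", "bird", "dog", "cat", "cat", "dog"]

freq = Counter(data)                              # frequency distribution
rel = {k: v / len(data) for k, v in freq.items()} # relative frequencies

print(freq["dog"])   # 4
print(rel["dog"])    # 0.5, i.e. frequency / number of observations
```

As the cards note, the relative frequencies of all categories add up to 1 by construction.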
10.2: Representing Solid Figures
Created by: CK-12

The Box Project

On their second day at the mall, Candice and Trevor were taken to the back store room where there were tons of boxes. Only these boxes weren't put together; they were all flat.

"This is where we keep boxes for wrapping," Mrs. Scott told them.

"That is a lot of boxes," Trevor said, looking at all of the piles of boxes.

"Yes, and they all have gotten mixed up. We need you to sort them. Put all of the square boxes in one pile and all of the rectangular boxes in another pile. Then we will be able to get to the box that we need when the time comes," Mrs. Scott instructed.

"We can do that," Candice said. Trevor did not look as certain.

"Alright. I'll come back and check on you in a little while," Mrs. Scott said, walking away.

"What do you mean, we can do that?" Trevor asked Candice after Mrs. Scott had left.

"It's easy. The square ones look like the nets of cubes, you know, from math class. The other boxes look like rectangular prisms."

Trevor is puzzled. He can't remember what a net for a cube looks like. Do you know? What is the difference between a net for a cube and a net for a rectangular prism?

This lesson is all about nets and two-dimensional representations of three-dimensional figures. By the end of the lesson, you will know how to identify the net of a cube and the net of a rectangular prism.

What You Will Learn

In this lesson, you will learn the following skills.

• Draw two-dimensional representations of solid figures.
• Draw top, side and front views of solid figures.
• Identify and classify solid figures given top, side and front views.
• Use top, side and front views involving unit cubes to draw solid figures.

Teaching Time

I. Draw Two-Dimensional Representations of Solid Figures

Think about the solid figures that you learned about in the last lesson. Solid figures are three-dimensional figures that have height, width and length.
Here are some of the solid figures that you learned about in the last lesson. If you look at these figures, you can see that there are some parts that you can see and some parts that you can't see. We can assume that the other parts are there, but because each is a solid figure we can only see parts of it.

What if we wanted to see the whole figure? If we want to see the entire figure, then we need to draw a representation of the figure that is two-dimensional. You can think of this as unfolding or taking apart the solid so that you can see all of its parts.

Look back at the three figures that we just reviewed. The first one is a rectangular prism. If we wanted to draw this as a two-dimensional figure, then we would have to think about all of the parts of the rectangular prism. The rectangular base has four sides around it, plus a top and a bottom, so we know that there are six faces. But how do the faces look?

Let's look at this drawing of a rectangular prism. Look at how this box has been unfolded. You can see the pattern that it makes, and if you picture it in your mind, you can see how we could fold up the box again. The edges form the lines, and we could fold it along those lines to create a rectangular prism once again.

Let's look at the second figure. It is a square pyramid. To draw a square pyramid, think about its parts and how it might look if we unfolded it. The base is a square and there are four triangles on the sides. We can unfold it from the vertex down. Here is what it looks like. You can see that the sides could be rejoined at the vertex and we would have the square pyramid once again.

What about the last figure? It is a cylinder. We know that the two bases of a cylinder are circles. What about the middle? Think about that middle space as a rectangle that is wound around the edges of the circular ends. If you think about it this way, you can see what it would look like taken apart.

A two-dimensional representation of a solid like this one is called a net.
A net is a visual way of drawing a solid figure so that all of the parts of the figure can be seen and measured.

10C. Lesson Exercises

Draw a net of a pentagonal prism. Compare your drawing with a friend's drawing. Do you have all of the parts of the figure represented? To check, cut out your drawing and see if you can fold it up or tape it into a prism again.

II. Draw Top, Side and Front Views of Solid Figures

Nets, as we know, allow us to see the top, bottom, and sides of a solid figure all at once. We can also get a sense of a solid just by drawing its side, front, and top views. These are like pieces of a net, without having to draw the whole net. As with nets, we'll need to use our imagination to draw the side, top, and front views. Imagine you can hold the figure and can look down on it from above, or look at it from the side. This may be hard to imagine at first. If so, find some real solid objects, such as a soup can and a cereal box.

What is the top view of a cylinder? If you look down on a regular can, you would see the circle that is the base (or the top). What about if we looked at it from the side or front? Then you would see a rectangle of sorts. It would be long and represent the length of the can.

Look at this figure. It is a cereal box, but we can also think of it given its geometric properties. It is a rectangular prism. We know that a rectangular prism is made up of rectangles, but would we see a rectangle from each view? Yes, we would, but the position of the rectangle would change depending on the view. Here is a top view. Here is a side view. Here is a front view and a back view.

What about a cone? Cones can be tricky because of what you would see. If you look at the side, you would see a triangle. If you look from the bottom up, you would see a circle. What about from the top down? If you think about this, you would see a circle with a point in the middle. The point would represent the top of the cone.
The circle represents the bottom part. You can draw a view of any three-dimensional figure as long as you take the time to visualize it in your mind first. If you have trouble with this, try to use a real object to help you with the visualization and the drawing.

III. Identify and Classify Solid Figures Given Top, Side and Front Views

As we have seen, we can classify a solid figure by drawing a net of it. The net lets us see what shape the base is, how many sides the figure has, and whether or not it has any parallel faces. In the last section, we practiced drawing different views of three-dimensional figures. Now we can use those views as we work to identify different solids. This is how we can identify and/or classify solid figures by using their side and top views.

For example, we know from drawing pyramids that their sides are triangles. If we look at a pyramid from the side, then, all we can see is one triangular face. Pyramids always have triangular sides, so we would know just from this one side view that the figure must be a pyramid.

What about this figure? The square is a tricky one to work with. Any prism could have a square as one of its views. We could also have a square pyramid with a bottom-up view, although this would not be as common. Depending on the view, we can either be general or specific in our identification of the solid. You can see that the square is a more general view to work with.

Let's look at an example. What solid figure is represented below? We have been given a side view and a top view. What shape are the side faces of this solid figure? They are rectangles. What shape is the top face of the figure? It is a circle.

Now let's think about what solid figures we know. Many solid figures can have rectangular sides (any prism), but not very many have a circular top face. Cylinders, as we know, have a circular top face and a circular base. This could be a cylinder. But what about the side? The side of a cylinder is actually curved.
Remember, when we "unroll" it to make a net, the side looks like a rectangle. Imagine you could hold up a can of soup. It would look like a rectangle from the side! These two views must be showing a cylinder.

When given different views, you will need to think about each part of the different solids. Keep in mind that knowing that there are prisms, cylinders, cones and pyramids can help you in this task.

IV. Use Top, Side and Front Views Involving Unit Cubes to Draw Solid Figures

We can use cubes called "unit cubes" to help us draw different views of a solid figure. This is a way to "build" a solid figure. This is where we are going to begin to see some of the dimensions of different solid figures come into practice. This is the building block that we are going to use for our drawings. Here is how this is going to work.

Draw the top view of a rectangular prism. The dimensions are $2 \times 3$. To draw this figure, we are going to use cubes. We know that the top view is $2 \times 3$. This is what the top view of the rectangular prism would look like.

What if we were to know all of the dimensions of a rectangular prism? Could we build it then? Yes, we could. To know all of the dimensions, we would have the length, the width and the height. Then we would use cubes to "build the different views" of the figure.

Let's look at an example. Build a front, top and side view of a rectangular prism that is $2 \ \text{units wide} \times 3 \ \text{units long} \times 4 \ \text{units high}$.

The top view would show the width and the length. It would look just like the view that we just built. Next, we build the front. It is going to show the length and the height of the prism. Finally, we build the side view. It is going to show a width of two units and a height of four units.

Whew! We have really looked at solids inside and out! Now we can take them apart and put them together again, all by understanding the shapes of their faces and how the faces fit together.
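The relationship between a prism's dimensions and its three views can also be written down compactly. This is a small sketch we added for illustration; the function name is ours, not from the lesson:

```python
def prism_views(width, length, height):
    """Dimensions of the three views of a width x length x height
    rectangular prism: the top shows width and length, the front shows
    length and height, and the side shows width and height."""
    return {"top": (width, length),
            "front": (length, height),
            "side": (width, height)}
```

For the 2 x 3 x 4 prism above, this gives a 2 x 3 top view, a 3 x 4 front view, and a 2 x 4 side view, matching the cube drawings.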
In the next few lessons you will see how useful it is to understand solids.

Real-Life Example Completed

The Box Project

Here is the original problem once again. Reread it and then see how Candice explains the difference between the net of a cube and the net of a rectangular prism.

On their second day at the mall, Candice and Trevor were taken to the back store room where there were tons of boxes. Only these boxes weren't put together; they were all flat.

"This is where we keep boxes for wrapping," Mrs. Scott told them.

"That is a lot of boxes," Trevor said, looking at all of the piles of boxes.

"Yes, and they all have gotten mixed up. We need you to sort them. Put all of the square boxes in one pile and all of the rectangular boxes in another pile. Then we will be able to get to the box that we need when the time comes," Mrs. Scott instructed.

"We can do that," Candice said. Trevor did not look as certain.

"Alright. I'll come back and check on you in a little while," Mrs. Scott said, walking away.

"What do you mean, we can do that?" Trevor asked Candice after Mrs. Scott had left.

"It's easy. The square ones look like the nets of cubes, you know, from math class. The other boxes look like rectangular prisms."

Trevor is puzzled. He can't remember what a net for a cube looks like.

Candice looks at Trevor. She picks up one of the boxes.

"Okay, see this one, it is a square box, or you could think of it as the net of a cube. Look at the center of the box. All four sides are the same length. When we fold up the sides around this bottom, you can see that all of the sides match. They are the same length and so they will form a cube."

Trevor smiles. "Okay, now I get it. The rectangular box has a rectangle at the center. The sides will fold up around it."

"Yes," Candice adds. "We can see that there is a length and a width for the rectangular bottom and top. Also, notice that the sides match these lengths.
Therefore, when we fold up the sides, we will have a box that is a rectangular prism."

With this explanation complete, the students go to work sorting all of the boxes.

Here are the vocabulary words that are found in this lesson.

Solid Figure
a three-dimensional figure with height, width and length.

Net
a visual way to represent a three-dimensional figure in two dimensions.

Technology Integration

1. http://www.onlinemathlearning.com/3d-shapes-nets.html – This is a good basic video on how to draw the nets of different solid figures.

Time to Practice

Directions: Identify each solid by its net.

11 – 20. Directions: Draw an example of a new net for each of the figures in numbers 1 – 10.

Directions: Draw the top view, side view and front view of a rectangular prism with the following dimensions.

21. $4 \ \text{units wide} \times 6 \ \text{units long} \times 8 \ \text{units high}$
Big Red Bits

Here are three puzzles I got from 3 different people recently. The first I got from Charles, who got it from a problem set of the Brazilian Programming Competition.

Puzzle #1: Given $n$ and a vector of in-degrees and out-degrees $\text{in}_i$ and $\text{out}_i$ for $i=1..n$, find if there is a simple directed graph on $n$ nodes with those in- and out-degrees in time $O(n \log n)$. By a simple directed graph, I mean at most one edge between each pair $(i,j)$, allowing self-loops $(i,i)$.

Solving this problem in time $O(n^3)$ is easy using a max-flow computation: simply consider a bipartite graph with an edge $(i,j)$ between each pair with capacity $1$. Add a source and connect it to each node on the left side with capacity $\text{in}_i$, add a sink in the natural way, compute the max-flow, and check if it is $\sum_i \text{in}_i$. But it turns out we can do it in a much more efficient way. The solution I thought of works in time $O(n \log n)$, but maybe there is a linear solution out there. If you know of one, I am curious.

The second puzzle was given to me by Hyung-Chan An:

Puzzle #2: There is a grid of size $n \times m$ formed by rigid bars. Some cells of the grid have a rigid bar in the diagonal, making that whole square rigid. The question is to decide, given a grid and the location of the diagonal bars, if the entire structure is rigid or not. By rigid I mean not being able to be deformed.

We thought right away of a linear algebraic formulation: look at each node and create a variable for each of the 4 angles around it. Now, write linear equations saying that some variables sum to 360, since they are around one node; equations saying that some variables must be 90 (because they are in a rigid cell); and, for the variables internal to each square, equations saying that opposite angles must be equal (since all the edges are of equal length). Then you have a linear system of the type $Ax = b$ where $x$ are the variables (angles).
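Going back to Puzzle #1 for a moment: since self-loops are allowed, realizing the degree sequence as a simple digraph is the same as realizing a bipartite degree sequence (out-degrees on one side, in-degrees on the other), which the Gale–Ryser theorem characterizes; sorting plus prefix sums checks the inequalities in $O(n \log n)$. Here is a sketch of that check (our own reading of the problem, not necessarily the competition's intended solution):

```python
from bisect import bisect_left

def realizable(ins, outs):
    """Gale-Ryser check: does a simple digraph (self-loops allowed)
    with these in- and out-degree sequences exist?  Runs in O(n log n)."""
    n = len(ins)
    if len(outs) != n or sum(ins) != sum(outs):
        return False
    if any(not 0 <= d <= n for d in ins) or any(not 0 <= d <= n for d in outs):
        return False
    a = sorted(outs, reverse=True)        # out-degrees, descending
    b = sorted(ins)                       # in-degrees, ascending (for bisect)
    pref = [0]
    for d in b:
        pref.append(pref[-1] + d)
    lhs = 0
    for k in range(1, n + 1):
        lhs += a[k - 1]
        # rhs = sum_j min(b_j, k): entries below k count fully, the rest count k
        i = bisect_left(b, k)
        if lhs > pref[i] + (n - i) * k:
            return False
    return True
```

For example, in-degrees [1, 1] with out-degrees [2, 0] are realizable via the self-loop (1,1) and the edge (1,2), while in-degrees [2, 0] with out-degrees [0, 2] are not, since at most one of the two edges leaving node 2 can point at node 1.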
Now, we need to check if this system admits more than one solution. We know a trivial solution to it, in which every variable is 90. So, we just need to check if the matrix $A$ has full rank.

It turns out this problem has a much more beautiful and elegant solution, and it is totally combinatorial – it is based on verifying that a certain bipartite graph is connected. You can read more about this solution in Bracing rectangular frameworks. I by Bolker and Crapo (1979).

A cute idea is to use the following more general linear system (which works for rigidity in any number of dimensions). Consider a rigid bar from point $p_a \in \mathbb{R}^n$ to point $p_b \in \mathbb{R}^n$. If the structure is not rigid, then there is a movement it can make: let $v_a$ and $v_b$ be the instantaneous velocities of points $a$ and $b$. If $p_a(t), p_b(t)$ are the movements of points $a,b$, then it must hold that $\Vert p_a(t) - p_b(t) \Vert = \Vert p_a(0) - p_b(0) \Vert$, so taking derivatives we have:

$(p_a(0) - p_b(0)) \cdot (v_a - v_b) = 0$

This is a linear system in the velocities. Now, our job is to check if there are nonzero velocities, which again means checking whether the matrix of the linear system is full-rank. An interesting thing is that if we look at this question for the grid above, this matrix will be the matrix of a combinatorial problem! So we can simply check if it has full rank by solving the combinatorial problem. Look at the paper for more details.

The third puzzle I found on the amazing website called The Puzzle Toad, which is CMU's puzzle website:

Puzzle #3: There is a game played between Arthur and Merlin. There is a table with $n$ lamps arranged in a circle; initially some are on and some are off. In each timestep, Arthur writes down the positions of the lamps that are off. Then Merlin (in an adversarial way) rotates the table.
Arthur's servant then goes and flips (on –> off, off –> on) the lamps whose positions Arthur wrote down (notice that now he won't be flipping the correct lamps, since Merlin rotated the table: if Arthur wrote lamp 1 and Merlin rotated the table by 3 positions, the servant will actually be flipping lamp 4). The question is: given $n$ and an initial position of the table, is there a strategy for Merlin such that Arthur never manages to turn all the lamps on?

See here for a better description and a link to the solution. For $n = 2^k$, no matter what Merlin does, Arthur always manages to turn on all the lamps eventually, where eventually means in $O(n^2)$ time. The solution is a very pretty (and simple) algebraic argument. I found this problem really nice.

Probability Puzzles
February 16th, 2010

Today at a dinner with Thanh, Hu and Joel I heard about a paradox I hadn't heard before. Probability is full of cute problems that challenge our understanding of the basic concepts. The most famous of them is the Monty Hall Problem, which asks:

You are on a TV game show and there are ${3}$ doors – one of them contains a prize, say a car, and the other two doors contain things you don't care about, say goats. You choose a door. Then the TV host, who knows where the prize is, opens one door you haven't chosen and that he knows has a goat. Then he asks if you want to stick to the door you have chosen or if you want to change to the other door. What should you do?

Probably you've already come across this question at some moment of your life, and the answer is that changing doors would double your probability of getting the prize.
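The doubling claim is also easy to verify with a quick simulation (a sketch we added; the host's tie-breaking rule, when both unchosen doors hide goats, is arbitrary and does not affect the result):

```python
import random

def monty_hall(trials, switch, seed=0):
    """Estimate the probability of winning the prize when
    always switching (switch=True) or always sticking."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        choice = rng.randrange(3)
        # the host opens a door that has a goat and was not chosen
        opened = next(d for d in range(3) if d != prize and d != choice)
        if switch:
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials
```

Running it gives roughly 2/3 for switching and 1/3 for sticking.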
There are several ways of convincing your intuition:

• Do the math: when you chose the door, there were three options, so the prize is behind the door you chose with probability ${\frac{1}{3}}$ and behind the other door with probability ${\frac{2}{3}}$ (note that the presenter can always open some door with a goat, so conditioning on that event doesn't give you any new information).
• Do the actual experiment (computationally) as done here. One can always ask a friend to help, get some goats and perform the actual experiment.
• To convince yourself that "it doesn't matter" is not correct, think of ${100}$ doors. You choose one and the TV host opens ${98}$ of them and asks if you want to change or stick with your first choice. Wouldn't you change?

I've seen TV shows where this happened, and I acknowledge that other things may be involved: there might be behavioral and psychological issues associated with the Monty Hall problem – and possibly those would interest Dan Ariely, whose book I began reading today – and it looks quite fun.

But the problem they told me about today at dinner was another: the envelope problem:

There are two envelopes and you are told that in one of them there is twice the amount that there is in the other. You choose one of the envelopes at random and open it: it contains ${100}$ bucks. Now, you don't know if the other envelope has ${50}$ bucks or ${200}$ bucks. Then someone asks if you want to pay ${10}$ bucks and change to the other envelope. Should you change?

Now, consider two different solutions to this problem: the first is fallacious and the second is correct:

1. If I don't change, I get ${100}$ bucks; if I change, I pay a penalty of ${10}$ and I get either ${50}$ or ${200}$ with equal probability, so my expected prize if I change is ${\frac{200+50}{2}-10 = 115 > 100}$, so I should change.
2. I know there is one envelope with ${x}$ and one with ${2x}$, so my expected prize if I don't change is ${\frac{x + 2x}{2} = \frac{3}{2}x}$.
If I change, my expected prize is ${\frac{x + 2x}{2} - 10 < \frac{3}{2}x}$, so I should not change.

The fallacy in the first argument is perceiving a probability distribution where there is none. Either the other envelope contains ${50}$ bucks or it contains ${200}$ bucks – we just don't know, but there is no probability distribution there – it is a deterministic choice by the game designer. Most of those paradoxes are a result of either an ill-defined probability space, as in Bertrand's Paradox, or a wrong comprehension of the probability space, as in Monty Hall or in several paradoxes exploring the same idea: Three Prisoners, Sleeping Beauty, Boy or Girl Paradox, …

There was very recently a thrilling discussion about a variant of the envelope paradox in the xkcd blag – which is the blog accompanying that amazing webcomic. There was a recent blog post with a very intriguing problem. A better idea is to go there and read the discussion, but if you are not doing so, let me summarize it here. The problem is:

There are two envelopes, each containing a distinct real number. You pick one envelope at random, open it and see the number, then you are asked to guess if the number in the other envelope is larger or smaller than the previous one. Can you guess correctly with more than ${\frac{1}{2}}$ probability?

A related problem is: given that you are playing the envelope game and there are numbers ${A}$ and ${B}$ (with ${A < B}$), you pick one envelope at random, look at its content, and then decide to switch or not. Is there a strategy that gives you expected earnings greater than ${\frac{A+B}{2}}$?

The very unexpected answer is yes! The strategy that Randall presents in the blog (there is a link to the source here) is: let ${X}$ be a random variable on ${{\mathbb R}}$ such that for each ${a < b}$ we have ${P(a < X < b) > 0}$, for example, the normal distribution or the logistic distribution.
Sample ${X}$, then open the envelope and find a number ${S}$. Now, if ${X < S}$, say the other number is lower, and if ${X > S}$, say the other number is higher. You get it right with probability

$\displaystyle P(\text{picked }A) P(X > A) + P(\text{picked }B) P(X < B) = \frac{1}{2} (1 + P(A < X < B))$

which is impressive. If you follow your guess, your expected earning ${Y}$ is:

\displaystyle \begin{aligned} &P(\text{picked }A) \mathop{\mathbb E}[Y \vert \text{picked }A] + P(\text{picked }B) \mathop{\mathbb E}[Y \vert \text{picked }B] = \\ & = \frac{1}{2} [P(X<A) A + P(X>A) B] + \frac{1}{2} [P(X<B) B + P(X>B) A] \\ &= \frac{1}{2}[A [P(X<A) + P(X>B)] + B [P(X>A) + P(X<B)]] > \frac{A+B}{2} \\ \end{aligned}

The xkcd blag pointed to this cool archive of puzzles and riddles. I was also told that the xkcd puzzle forum is a source of excellent puzzles, such as this one:

You are the most eligible bachelor in the kingdom, and as such the King has invited you to his castle so that you may choose one of his three daughters to marry. The eldest princess is honest and always tells the truth. The youngest princess is dishonest and always lies. The middle princess is mischievous and tells the truth sometimes and lies the rest of the time. As you will be forever married to one of the princesses, you want to marry the eldest (truth-teller) or the youngest (liar) because at least you know where you stand with them. The problem is that you cannot tell which sister is which just by their appearance, and the King will only grant you ONE yes or no question which you may only address to ONE of the sisters. What yes or no question can you ask which will ensure you do not marry the middle sister?

(copied from here)

More about hats and auctions
October 29th, 2009

In my last post about hats, I said I would soon post another version with some more problems, which I ended up not doing, and that I would talk a bit more about those kinds of problems.
Here are a few nice problems:

Those ${n}$ people are again in a room, each with a hat which is either black or white (picked at random with probability ${\frac{1}{2}}$), and they can see the color of the other people's hats but they can't see their own color. They write on a piece of paper either "BLACK" or "WHITE". The whole team wins if all of them get their colors right. The whole team loses if at least one writes the wrong color. Before entering the room and getting the hats, they can strategize. What is a strategy that makes them win with probability ${\frac{1}{2}}$?

If they all choose their colors at random, the probability of winning is very small: ${\frac{1}{2^n}}$. So we should try to correlate them somehow. The solution is again related to error correcting codes. We can think of the hats as a string of bits. How do we correct one bit if it is lost? The simple engineering solution is to add a parity check. We append to the string ${x_0, x_1, \hdots, x_n}$ a bit ${y = \sum_i x_i \mod 2}$. So, if bit ${i}$ is lost, we know it is ${x_i = (y + \sum_{j \neq i} x_j) \mod 2}$.

We can use this idea to solve the puzzle above: if hats are placed with probability ${\frac{1}{2}}$, the parity check will be ${0}$ with probability ${\frac{1}{2}}$ and ${1}$ with probability ${\frac{1}{2}}$. They can decide beforehand that everyone will assume ${y = 0}$, and with probability ${\frac{1}{2}}$ they are right and everyone gets his hat color right.

Now, let's extend this problem in some ways:

The same problem, but there are ${k}$ hat colors, they are chosen independently with probability ${\frac{1}{k}}$, and they win if everyone gets his color right. Find a strategy that wins with probability ${\frac{1}{k}}$.

There are again ${k}$ hat colors, they are chosen independently with probability ${\frac{1}{k}}$, and they win if at least a fraction ${f}$ (${0 < f < 1}$) of the people guesses the right color. Find a strategy that wins with probability ${\frac{1}{fk}}$.
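The parity-check strategy for the black/white version is easy to simulate (a small sketch; the function names are ours): each player guesses the color that would make the total parity even, so the team wins exactly when the actual parity is even, which happens half the time.

```python
import random

def parity_strategy_round(hats):
    """One round: player i sees every hat but his own and guesses the
    color that would make the total number of 1s (black hats) even."""
    total = sum(hats)
    guesses = [(total - hats[i]) % 2 for i in range(len(hats))]
    return guesses == hats          # the team wins only if everyone is right

def win_rate(n, trials, seed=0):
    rng = random.Random(seed)
    wins = sum(parity_strategy_round([rng.randrange(2) for _ in range(n)])
               for _ in range(trials))
    return wins / trials
```

win_rate(n, trials) hovers around 1/2 for any n, matching the analysis above.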
Back to the problem where we just have BLACK and WHITE colors, chosen with probability ${\frac{1}{2}}$, and where everyone needs to find the right color to win: can you prove that ${\frac{1}{2}}$ is the best one can do? And what about the two other problems above?

The first two use variations of the parity check idea in the solution. As for proving that ${\frac{1}{2}}$ is optimal: given any strategy of the players, for each string ${x \in \{0,1\}^n}$ they have some probability ${p_x}$ of winning. Therefore the total probability of winning is ${\frac{1}{2^n}\sum_{x \in \{0,1\}^n} p_x}$. Let ${x' = (1-x_1, x_2, \hdots, x_n)}$, i.e., the same input but with bit ${1}$ flipped. Notice that the answer of player ${1}$ is the same (or at least has the same probabilities) on both ${x}$ and ${x'}$, since he can't distinguish between ${x}$ and ${x'}$. Therefore, ${p_{x} + p_{x'} \leq 1}$. So,

$\displaystyle 2 \cdot \frac{1}{2^n}\sum_{x \in \{0,1\}^n} p_x = \frac{1}{2^n}\sum_{x \in \{0,1\}^n} p_x + \frac{1}{2^n}\sum_{x \in \{0,1\}^n} p_{x'} \leq 1 .$

This way, no strategy can have more than ${\frac{1}{2}}$ probability of winning.

Another variation of it: suppose now we have two colors, BLACK and WHITE, and the hats are drawn from one distribution ${D}$, i.e., we have a probability distribution over ${x \in \{0,1\}^n}$ and we draw the colors from that distribution. Notice that now the hats are not necessarily uncorrelated. How do we win again with probability ${\frac{1}{2}}$ (to win, everyone needs the right answer)?

I like those hat problems a lot. A friend of mine just pointed out to me that there is a very nice paper by Bobby Kleinberg generalizing several aspects of hat problems, for example, when players have limited visibility of other players' hats. I became interested in this sort of problem after reading the Derandomization of Auctions paper. Hat guessing games are not just a good model for error correcting codes; they are also a good model for truthful auctions.
Consider an auction with a set ${N}$ of single-parameter agents, i.e., an auction where each player gives one bid ${b_i}$ indicating how much he is willing to pay to win. We have a set of constraints ${\mathcal{X} \subseteq 2^N}$ of all feasible allocations. Based on the bids ${(b_i)_{i \in N}}$ we choose an allocation ${S \in \mathcal{X}}$ and we charge payments to the bidders. An example of a problem like this is the Digital Goods Auction, where ${\mathcal{X} = 2^N}$.

In this blog post, I discussed the concept of a truthful auction. If an auction is randomized, a universally truthful auction is an auction that is truthful even if all the random bits in the mechanism are revealed to the bidders. Consider the Digital Goods Auction. We can characterize universally truthful digital goods auctions as bid-independent auctions. A bid-independent auction is given by functions ${f_i(b_{-i})}$, which associate to each ${b_{-i}}$ a random variable ${f_i(b_{-i})}$. In that auction, we offer the service to player ${i}$ at price ${f_i(b_{-i})}$. If ${b_i \geq f_i(b_{-i})}$, we allocate to ${i}$ and charge him ${f_i(b_{-i})}$. Otherwise, we don't allocate and we charge nothing.

It is not hard to see that all universally truthful mechanisms are like that: if ${x_i(b_i)}$ is the probability that player ${i}$ gets the item bidding ${b_i}$, let ${U}$ be a uniform random variable on ${[0,1]}$ and define ${f_i(b_{-i}) = x_i^{-1}(U)}$. Notice that here ${x_i(.) = x_i(., b_{-i})}$, but we are inverting with respect to ${b_i}$. It is a simple exercise to prove that.

With this characterization, universally truthful auctions suddenly look very much like hat guessing games: we need to design a function that looks at everyone else's bids but not at our own and, in some sense, "guesses" what we probably bid, and with that calculates the price we offer. It would be great to be able to design a function that returns ${f(b_{-i}) = b_i}$. That is unfortunately impossible.
But how can we approximate ${b_i}$ nicely? Some papers, like the Derandomization of Auctions and Competitiveness via Consensus, use this idea.
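To make the bid-independent template concrete, here is a sketch of one classical instantiation for digital goods: take ${f_i(b_{-i})}$ to be the revenue-maximizing single price computed on the other bids (the "deterministic optimal price" idea; this particular choice is ours for illustration and is not claimed to be the exact mechanism of those papers):

```python
def offer_price(other_bids):
    """f_i(b_{-i}): the single price maximizing revenue on the other bids."""
    best_price, best_revenue = 0.0, 0.0
    for p in sorted(set(other_bids)):
        revenue = p * sum(1 for b in other_bids if b >= p)
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
    return best_price

def run_auction(bids):
    """Offer each bidder a price that does not depend on his own bid;
    allocate and charge exactly when the bid meets the offer."""
    outcome = []
    for i, b in enumerate(bids):
        p = offer_price(bids[:i] + bids[i + 1:])
        outcome.append((b >= p, p if b >= p else 0.0))
    return outcome
```

Because each offer ignores ${b_i}$, reporting the true value is a dominant strategy, which is exactly the bid-independence property described above.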
Elliptic Curves

• Explores how elliptic curves are used in cryptography and number theory
• Requires no background in algebraic geometry
• Discusses both the Weil and Tate–Lichtenbaum pairings
• Covers elliptic curve cryptosystems, such as pairing-based encryption
• Introduces three popular packages (Pari, Magma, and Sage) for computing with elliptic curves

Like its bestselling predecessor, Elliptic Curves: Number Theory and Cryptography, Second Edition develops the theory of elliptic curves to provide a basis for both number theoretic and cryptographic applications. With additional exercises, this edition offers more comprehensive coverage of the fundamental theory, techniques, and applications of elliptic curves.

New to the Second Edition

• Chapters on isogenies and hyperelliptic curves
• A discussion of alternative coordinate systems, such as projective, Jacobian, and Edwards coordinates, along with related computational issues
• A more complete treatment of the Weil and Tate–Lichtenbaum pairings
• Doud's analytic method for computing torsion on elliptic curves over Q
• An explanation of how to perform calculations with elliptic curves in several popular computer algebra systems

Taking a basic approach to elliptic curves, this accessible book prepares readers to tackle more advanced problems in the field. It introduces elliptic curves over finite fields early in the text, before moving on to interesting applications, such as cryptography, factoring, and primality testing. The book also discusses the use of elliptic curves in Fermat's Last Theorem. Relevant abstract algebra material on group theory and fields can be found in the appendices.
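As a small taste of the subject, the chord-and-tangent group law that such a text develops fits in a few lines over a prime field. This is a sketch we added, not from the book; the curve $y^2 = x^3 + 2x + 2$ over $\mathbb{F}_{17}$ is a common textbook toy example:

```python
def ec_add(P, Q, a, p):
    """Add points on y^2 = x^3 + a*x + b over F_p (affine Weierstrass
    form, p an odd prime); None stands for the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p)    # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p)           # chord slope
    x3 = (m * m - x1 - x2) % p
    y3 = (m * (x1 - x3) - y1) % p
    return (x3, y3)
```

With P = (5, 1) on that curve, repeated addition walks through the cyclic group generated by P: P + P = (6, 3), then P + (6, 3) = (10, 6), and so on. (The modular inverse via `pow(x, -1, p)` requires Python 3.8+.)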
Table of Contents

Weierstrass Equations
The Group Law
Projective Space and the Point at Infinity
Proof of Associativity
Other Equations for Elliptic Curves
Other Coordinate Systems
The j-Invariant
Elliptic Curves in Characteristic 2
Singular Curves
Elliptic Curves mod n
Torsion Points
Division Polynomials
The Weil Pairing
The Tate–Lichtenbaum Pairing
Elliptic Curves over Finite Fields
The Frobenius Endomorphism
Determining the Group Order
A Family of Curves
Schoof's Algorithm
Supersingular Curves
The Discrete Logarithm Problem
The Index Calculus
General Attacks on Discrete Logs
Attacks with Pairings
Anomalous Curves
Other Attacks
Elliptic Curve Cryptography
The Basic Setup
Diffie–Hellman Key Exchange
Massey–Omura Encryption
ElGamal Public Key Encryption
ElGamal Digital Signatures
The Digital Signature Algorithm
A Public Key Scheme Based on Factoring
A Cryptosystem Based on the Weil Pairing
Other Applications
Factoring Using Elliptic Curves
Primality Testing
Elliptic Curves over Q
The Torsion Subgroup: The Lutz–Nagell Theorem
Descent and the Weak Mordell–Weil Theorem
Heights and the Mordell–Weil Theorem
The Height Pairing
Fermat's Infinite Descent
2-Selmer Groups; Shafarevich–Tate Groups
A Nontrivial Shafarevich–Tate Group
Galois Cohomology
Elliptic Curves over C
Doubly Periodic Functions
Tori Are Elliptic Curves
Elliptic Curves over C
Computing Periods
Division Polynomials
The Torsion Subgroup: Doud's Method
Complex Multiplication
Elliptic Curves over C
Elliptic Curves over Finite Fields
Integrality of j-Invariants
Numerical Examples
Kronecker's Jugendtraum
Definitions and Examples
The Weil Pairing
The Tate–Lichtenbaum Pairing
Computation of the Pairings
Genus One Curves and Elliptic Curves
Equivalence of the Definitions of the Pairings
Nondegeneracy of the Tate–Lichtenbaum Pairing
The Complex Theory
The Algebraic Theory
Vélu's Formulas
Point Counting
Hyperelliptic Curves
Basic Definitions
Cantor's Algorithm
The Discrete Logarithm Problem
Zeta Functions
Elliptic Curves over Finite Fields
Elliptic Curves over Q
Fermat's Last Theorem
Galois Representations
Sketch of Ribet's Proof
Sketch of Wiles's Proof
Appendix D: Computer Packages

Exercises appear at the end of each chapter.

Editorial Reviews

… the book is well structured and does not waste the reader's time in dividing cryptography from number theory-only information. This enables the reader just to pick the desired information. … a very comprehensive guide on the theory of elliptic curves. … I can recommend this book for both cryptographers and mathematicians doing either their Ph.D. or Master's … I enjoyed reading and studying this book and will be glad to have it as a future reference.
—IACR book reviews, April 2010

Praise for the First Edition

There are already a number of books about elliptic curves, but this new offering by Washington is definitely among the best of them. It gives a rigorous though relatively elementary development of the theory of elliptic curves, with emphasis on those aspects of the theory most relevant for an understanding of elliptic curve cryptography. … an excellent companion to the books of Silverman and Blake, Seroussi and Smart. It would be a fine asset to any library or collection.
—Mathematical Reviews, Issue 2004e

Washington … has found just the right level of abstraction for a first book … . Notably, he offers the most lucid and concrete account ever of the perpetually mysterious Shafarevich–Tate group. A pleasure to read! Summing Up: Highly recommended.
—CHOICE, March 2004

… a nice, relatively complete, elementary account of elliptic curves.
—Bulletin of the AMS
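One of the cryptographic topics the book covers, Diffie–Hellman key exchange on an elliptic curve, can be sketched in a few lines. The toy curve y^2 = x^3 + 2x + 2 over F_17 with base point G = (5, 1) of order 19, and the secret scalars, are illustrative assumptions far too small for real security; they are not parameters from the book.

```python
p, a = 17, 2   # toy curve y^2 = x^3 + 2x + 2 over F_17
G = (5, 1)     # base point of order 19

def ec_add(P, Q):
    """Group law on the curve; None is the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    """Scalar multiplication k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# Each party keeps a secret scalar and publishes a point; both
# then derive the same shared point from the other's public key.
alice_secret, bob_secret = 3, 7
A = ec_mul(alice_secret, G)            # Alice's public key
B = ec_mul(bob_secret, G)              # Bob's public key
shared_alice = ec_mul(alice_secret, B)
shared_bob = ec_mul(bob_secret, A)
assert shared_alice == shared_bob      # both hold alice*bob*G
```

The agreement works because scalar multiplication commutes: both sides compute (alice_secret * bob_secret) * G. An eavesdropper seeing only A, B, and G must solve the elliptic-curve discrete logarithm problem, the hardness assumption discussed in the book's attack chapters.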
Little Elm Algebra 2 Tutor

Find a Little Elm Algebra 2 Tutor

...Math can be easy and fun when you understand the key concepts. My teaching style is to ask my students focused questions that cause them to think, then determine the correct answers for themselves. I then like to show multiple ways to solve the same problem and how to verify the answer as being correct.
48 Subjects: including algebra 2, chemistry, physics, calculus

...Not all students learn in the same way, and that is why I make sure that I am able to adapt to the student's learning needs. After all, I am a visual learner and auditory learner as well as a hands-on learner. Please feel free to email me to get any additional information that may be necessary for an informed decision.
17 Subjects: including algebra 2, chemistry, geometry, biology

...I have always loved math and took a variety of math classes throughout high school and college. I taught statistics classes at BYU for over 2 years as a TA and also tutored on the side. I really enjoy tutoring because with every student there is a challenge of figuring out the best way to explain concepts so that they will understand.
7 Subjects: including algebra 2, statistics, geometry, SAT math

...Sometimes, this is all a student needs in order to achieve success in the classroom. I have a degree in elementary education from Texas Wesleyan University, as well as a grade 1 - 8 Texas teaching certificate. I have more than 10 years of public school teaching experience, a master's degree in gi...
39 Subjects: including algebra 2, reading, English, chemistry

...I have taken a number of Statistics courses, a list which includes elementary statistics, probability and statistics for engineers, mathematical statistics, and biostatistics, to name a few. I have also tutored many students in statistics ranging from high school to graduate level. It is therefore a subject in which I am very strong and very comfortable thanks to my extensive exposure to it.
41 Subjects: including algebra 2, chemistry, physics, statistics