content
stringlengths
86
994k
meta
stringlengths
288
619
Robust Output Model Predictive Control of an Unstable Rijke Tube Journal of Combustion Volume 2012 (2012), Article ID 927345, 11 pages Research Article Robust Output Model Predictive Control of an Unstable Rijke Tube Institute of Automatic Control, RWTH Aachen University, 52074 Aachen, Germany Received 6 January 2012; Accepted 12 March 2012 Academic Editor: Xue-Song Bai Copyright © 2012 Fabian Jarmolowitz et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This work investigates the active control of an unstable Rijke tube using robust output model predictive control (RMPC). As internal model a polytopic linear system with constraints is assumed to account for uncertainties. For guaranteed stability, a linear state feedback controller is designed using linear matrix inequalities and used within a feedback formulation of the model predictive controller. For state estimation a robust gain-scheduled observer is developed. It is shown that the proposed RMPC ensures robust stability under constraints over the considered operating range. 1. Introduction Modern gas turbines have to comply with increasingly stringent emission requirements for . One of the most effective ways reducing these emissions is the development of lean premixed () combustor systems. For gas turbines operated with natural gas thermal is the most relevant source, and its formation is highly temperature dependent. Both, a lean mixture and a premixing of fuel and oxidizer, which increases the mixture homogeneity and therefore avoids temperature peaks in the flame, reduce the combustion temperature and consequently the thermal emissions. Alongside the aforementioned advantages systems are more susceptible to combustion oscillations than conventional burner, because the heat release of lean-premixed flames is very sensitive to flow disturbances. Fluctuating heat release leads to fluctuating gas expansion. The gas expansion in the combustion zone acts as acoustic source and the emitted acoustic waves in turn influence the flame if reflected. Due to this coupling between acoustics of the combustion chamber and the heat release of the flame a feedback path is established which can give rise to thermoacoustic instabilities (see Figure 1). A well-known criterion for thermoacoustic instability is the Rayleigh-criterion. Originally, Rayleigh defined the criterion as [1], but for example in [2] it is shown, that the acoustic losses have to be considered for checking instability. This leads to the original Rayleigh-criterion determines only if acoustic energy is fed into the system by the combustion process as energy source: where is the acoustic pressure, the fluctuating rate of heat release, and the acoustic losses. If the integral over one period of oscillation is positive, the heat release and acoustic pressure interfere constructively meaning that the heat release is in phase (up to ) with the pressure waves. Thus, the flame feeds energy into the system. If this energy feed is higher than the acoustic losses , the system is unstable and moves after an exponential growth into a limit cycle. The limit cycle is clearly a phenomenon of a nonlinear system and possibly a result of saturation effects in the heat release. It represents a stable trajectory, where losses and energy feed are in balance. As a consequence high-pressure oscillations take place in the combustor which are detrimental for performance, emissions and durability of the combustor components. To avoid these drawbacks, two main directions are possible, namely, passive and active control, to stabilize the thermoacoustic system [3, 4]. Passive control techniques include geometry modifications and integration of additional acoustic dampers like Helmholtz resonators. However, the operational range at which stabilization is achieved is limited. To overcome this limitation, active control can be applied to enlarge the region of stable operations. In addition, existing burners can be equipped with active control systems as a retrofit. This can be necessary not only for older systems but also for systems, where computational fluid dynamics (CFD) design tools failed to predict instabilities during development. In this paper active control is investigated, that is, the closed-loop control of the thermoacoustic system. Closed-loop control uses a feedback path consisting of sensors, a control law and actuators to control a system, cf. Figure 1. The main goal for active control is to stabilize the system robustly possibly under constraints over a wide operating range. Robust stabilization means stabilization in the presence of uncertain system parameters. The increased demand for variable operations of gas turbines especially in power plants stems from the increased use of renewable energy sources. As a consequence, gas-fired power plants are operated more flexible to compensate the fluctuating input of renewable energy sources. The connected problem is two-fold. On the one hand, it may be the case that the optimal operating point in terms of efficiency, and emissions might be unstable itself and thus cannot be operated at. On the other hand, a change of the operating point can be delayed because unstable conditions have to be avoided during transition. Another demand for an active control system is to handle multiple inputs and multiple outputs (MIMO), since gas turbines are equipped with different types of actuators and sensors at different locations [3, 4], cf. Figure 1. Different sensors are presented in the literature like pressure transducers and chemiluminescence imaging techniques. For actuation different types of fuel modulation and acoustic forcing are applied. A MIMO controller has the advantage of using the combined information of all available sensors and the actuator operation of possibly different actuators is adapted to the actual machine state. In case multiple actuators are used, it is very likely that one actuator on its own has not the control authority to stabilize the system. Consequently, constraints on the control input have to be considered. In addition, the MIMO setup has another important benefit. The controller can be designed to be fault-tolerant, that is, if there exists some redundancy in the sensor and/or actuator setup the controller can maintain operation if some sensor and/or actuator fails. This fault-tolerance can be improved further by adapting the input constraints of the failed actuator. In summary, the following demands have to be fulfilled by an active control system: (i)robust stabilization under constraints; (ii)handling MIMO; (iii) fault-tolerance. To cope with these demands, robust output model predictive control (RMPC) is investigated in this work. A common physical modelling approach is applied to the Rijke tube, and from this a simplistic linear polytopic model is derived to model the system dynamics with (assumed) parameter uncertainties. The model is used to determine a robust linear state feedback controller by the use of linear matrix inequalities (LMI). This controller is incorporated in an approach presented in [6] to formulate a robust MPC and combined with a robust gain-scheduled observer. The proposed RMPC can have all the proposed characteristics needed for active control systems applied in combustion systems. In this work, however, the solution for a single input and single output (SISO) setup is shown, thus focusing on robust stabilization under constraints. The paper is organised as follows. Section 2 presents the Rijke tube setup, the corresponding physical modelling, and its validation. Section 3 introduces the proposed robust output MPC, and Section 4 gives some control results for, varied tube length to demonstrate the robustness. 2. Modelling of the Rijke Tube 2.1. Setup A schematic drawing of the Rijke tube setup is shown in Figure 2 with corresponding dimensions in Table 1. The test rig consists of a vertical glass tube that contains a smaller tube which can be drawn out vertically in order to vary the total length from the lower to upper end in a range of up to . This change of parameter is used to validate the robustness of the proposed active controller. The flame holder in the lower part of the tube is connected to a fuel feed line and holds a diffusion flame. A microphone is positioned a small distance beneath the flame holder. The test rig features both types of adjustment control, the variation of the fuel mass flow rate via valves at the feed line and antinoise. For the last mentioned purpose, a loudspeaker is positioned underneath the tube, which is used as actuator in this study. 2.2. Acoustics The acoustic model is a one-dimensional acoustic network consisting of simple geometric components, which are analytically tractable. The underlying assumption, which is common in thermoacoustic community [7, 8] is that the mean flow is homogeneous, the length to diameter ratio of the considered elements is sufficiently large such that only axial waves are relevant, the acoustic disturbances are linear, and that the flow is isentropic. Then, it is sufficient to consider only one-dimensional linear acoustics and the corresponding acoustic state vector (four-pole) is of order . These assumptions are fulfilled for a Rijke tube [9]. The state can be defined, for example, by acoustic velocity and acoustic pressure . The total pressure and velocity are defined as the of and acoustic value as , and . The solution of the governing conservation equations of mass, momentum, and energy [8] leads to the well-known wave equation whose solution under the aforementioned assumptions is with the speed of sound, the temperature and the density. The functions and are the Riemann invariants representing the up and down travelling waves. Instead of pressure and velocity, we use the Riemann invariants as acoustic states. The advantage is that the resulting transfer functions are always causal if incoming waves are taken as input and outgoing waves as output. As a consequence, every element in the one-dimensional network has to connect the Riemann invariants on both sides of the element. Straight duct elements can be represented by the time delays and which correspond to the travel times of the acoustic waves. As gas air under atmospheric conditions is assumed for the upstream part, for the downstream part with a mean temperature of due to the heat release of the flame. The mean flow speed is neglected because of its negligible influence on the acoustic travel times compared to the speed of sound. Therefore, and can be calculated with with the considered distance, and with the specific gas constant and the adiabatic exponent. At both ends of the tube, the model is closed by acoustic reflection coefficients and relating incoming with outgoing wave, cf. Figure 3. Acoustic reflection coefficients less than one represent the only acoustic losses (see (1)) in the model and can be interpreted as acoustic flux across the boundaries of the system. 2.3. Flame Since only low frequencies are of interest the flame zone is short compared to the acoustic wavelength. Thus, the flame zone represents a discontinuity for the acoustic waves. An approach derived in [7, 9] exploiting this simplification is used to model the flame zone. The flame zone can then be modelled with the conservation equations for mass, momentum, and energy: with the subscripts 1 and 2 for up- and down-stream of the flame zone (cf. Figure 2), the heat release, the specific heat capacity, and the area. Since , the area is omitted in the following. Linearisation leads to the following system of equations: with the matrices and . To close the feedback between acoustics and combustion, a dynamic relation between acoustic disturbances and fluctuating heat release is needed. Because a large number of physical pathways exist [10], one often chooses to subsume these effects in one flame transfer function (the term “transfer function” is used for the Laplace transform of an ordinary differential equation. Note that the term “nonlinear transfer function” is not compatible with this definition), relating one acoustic variable to the integral heat release. In the flame model, we relate the acoustic velocity upstream of the flame to the rate of heat release . In [7], it is shown that so that at a low Mach number pressure fluctuations remain small even, when . The same conclusions apply for density and temperature fluctuations. Thus, it is reasonable to regard acoustic velocity fluctuations as the main excitation source for heat release fluctuations at low Mach numbers [11] as is the case for the present setup. As flame transfer function the --model of Crocco and Cheng [12] is often used, where is the so-called interaction index or combustion efficiency and a time delay. The time delay can be explained by convective transportation and mixing time. This model can be combined with a low-pass filter. Several researchers reported a low-pass behaviour of the flame transfer function at least for premixed flames [11, 13]. These considerations lead to the following transfer function: with the static gain and the time constant of the low-pass element, the travel respectively dead time and the acoustic velocity upstream of the flame. Due to the resulting interaction between acoustics and combustion, the thermoacoustic instability reaches a limit cycle after a period of exponential growth which is a clear indicator for nonlinear effects in the system. Since only linear acoustics are expected in the operational range of a combustor, the nonlinear effects are supposed to be in the combustion itself. The main non-linear effect of the combustion is possibly a saturation in the heat release. Besides sophisticated models relating the flame surface to the heat release via the -equation [14, 15], there are empirical ways of predicting a limit cycle by using a parameter-dependent --model [11]. This non-linear behaviour, however, is not pursued further in this paper because the goal for active control is to keep the system near the steady state. 2.4. Model Validation For simulation purpose, the presented model is implemented in signal oriented form in Matlab/SIMULINK as shown in Figure 3. In order to validate the model and the chosen parameters, the frequency response of the analytic model and from measurements are compared. The used modelling parameters are shown in Table 2. For the derivation of the complete set of parameters with a discussion of the diffusion flame in the setup, see [16]. Figure 4 shows a comparison of the frequency response of the model and measurements from the test rig taken from closed-loop identification. The four illustrated positions represent measurements respectively simulation results at increasing length of the tube. Since a positive phase shift indicates instability, it can be seen that at positions 2–4 the system is unstable, whereas at position 1 it is stable. Since all parameters are kept constant over the presented operating range, the change of dynamical behaviour depends only on the value of the time delays in the upper part of the tube that are determined by its length. In the presented diagrams, the length was varied from to resulting in a maximal difference of for the time delays. This explains the high sensitivity of the dynamical behaviour of the model concerning small changes in the parameters of the flame. The diagrams show that a parameter configuration could be found, that predicts instability or stability over the chosen operating range correct. The simulated resonant frequencies are in good agreement with the measurement results. There are bigger differences, however, in the amplitudes in these points especially at the unstable positions 2–4. Obviously, the amplification of acoustic fluctuations at the flame is too high at these frequencies, however there is only little room for improvement via parameter variation in the given setup of the model, as a reduction of the influence of the flame stabilises the system, assuming constant energy losses. 3. Robust Output Model Predictive Control Model predictive control (MPC) is a control technique that utilizes modern optimization algorithms for control by using a predictive optimization formulation and algorithms for constrained optimization online. In a receding horizon policy the cost function: is minimized over the prediction horizon iteratively in every time step with updated state, under the input, state and terminal constraints , for and . In addition, the dynamic model of the system: with the state and the input acts as a state constraint for . The stage cost is used to define the control objective, for example, by penalizing the deviation of the models future trajectory of some reference trajectory. In the following is assumed with the positive definite matrices . The terminal penalty can be used to guarantee stability (see Section 3.3). The first entry of the minimizer of is used as the actual control input to the system. The advantage of the MPC approach over classical control approaches like PIDs is the capability of explicitly accounting for constraints in the actuating and/or state variables. 3.1. Polytopic System of the Rijke Tube In order to explicitly take into account model uncertainties within the MPC framework, we consider a linear polytopic model of the Rijke tube in the following form: with . In order to derive such a model, we follow a pragmatic approach for reduced modelling. We model the unstable first (fundamental) mode as an oscillatory second-order element and neglect higher-order dynamics: with the characteristic angular frequency and the damping coefficient . Of course this is only an approximation of the real system, but the unstable mode is by far the most dominant one in the system. Nevertheless, higher-order dynamics could destabilize the system if the controller excites them. Therefore, it is mandatory that the higher orders are filtered out. This can be accomplished using the reduced model within the state observer (see Section 3.5) with suitable tuning. Using the analytic model we found a parameter range for the damping of and for the characteristic angular frequency of due to a change in the length of the Rijke tube between . This range is chosen because it represents the unstable regime regarding the tube length in the setup. The static gain is constant with . As linear parameters and are chosen. The resulting discretized linear parameter varying system (LPV) is This LPV system can be transformed into a polytopic system (10) with vertices. As sampling time is chosen in order to be able to resolve the frequency range of the unstable modes. This is also the time slot in which each optimization problem for the MPC has to be solved. For the proposed robust observer in Section 3.5 the actual model and therefore the parameters and , respectively and , have to be known. Since we assume that the length of the tube can be measured online, the relation between the total tube length and these parameters is sought. For it is a reasonable assumption to consider only the acoustics for determining the resonant frequency since the flame has negligible influence on it in the current setup. Thus, the following relation for the fundamental mode can be used: with the index for the lower part of the tube below the flame holder and the downstream part of the tube. To relate the damping to the tube length , a data fit from the simulation is used. The LPV model (12) and its equivalent polytopic form (10) are used in the following for the robust state observer and for the determination of the constraints within the optimization problem. 3.2. Feedback MPC Because an open-loop prediction of the systems state trajectory, as it is standard in most MPC formulations, cannot account for the reduced sensitivity to disturbances or modelling errors due to a feedback controller, we use the approach of incorporating an internal state feedback controller in the prediction model. This technique is called closed-loop paradigm (CLP, [17]). In addition, the CLP has the advantage of a better numerical conditioning of the prediction matrices especially for an unstable system because the system with controller is stable. As a result the influence of uncertainties, in the present case represented by the polytopic system, is reduced over the prediction horizon due to the presence of the internal controller. Otherwise, the optimization problem would be far TOO conservative, because no controller action would be assumed over the prediction horizon to react on the uncertainties. The re-parametrisation starts with with the inputs and the terminal penalty Formulating the cost function in results in (see [17]) under the (virtual) input, state and terminal constraints. Since (18) is just a re-parametrisation of the original optimization problem, the optimum is identical. The optimization variable is the deviation of the nominal controller in (15) and is used for constrained handling only as is evident from (18). The minimizer is identical zero in case of no active constraints. A pragmatic view of designing a MPC in CLP formulation is to design a controller matrix and use in (18) for the weighting of over the prediction horizon for constraint handling. This approach is used in the following. 3.3. Robust Stability of MPC Formulations for guaranteed stability within the MPC framework can be considered as a mature topic nowadays. A standard approach is to utilize the cost function as a Lyapunov function by requiring that a minimizer for exists for all (for details see [18, 19]). Note that (19) implicitly renders positive invariant Positive invariant means , for all . In order to extend this requirement to the robust case, one has to consider the worst-case for guaranteed stability. This can be done by use of a min-max optimization: with the model uncertainty. For the terminal penalty this results in the following requirement for robust stability [19]: for all . Since as asymptotic stability of the origin can be established for the LPV model, and a solution to (21) exists in case of adequate constraints. Thus, the problem is to find a terminal penalty which fulfills this condition in the closed set . In the unconstrained case and for the system (10) this problem can be cast as the following linear matrix inequality (LMI). Having (21) in mind one searches for a linear state feedback and with , an upper bound of the worst-case costs, that fulfills:for all and with which makes a control-Lyapunov function for the complete model set. Inserting the linear state feedback and the model leads to the following matrix inequality:A linear state feedback fulfilling this condition is robustly stabilizing the unconstrained LPV system. According to [20], this matrix inequality can be transformed into an LMI by substituting and and applying a Schur complement:Since the model set and are convex, it is sufficient to check the vertices of the set . If one searches in addition for a controller that minimizes the worst-case cost:one has to solve the semi definite program:subject to the LMI (24). When comparing (21) and (22) it can easily be seen that by choosing the stage cost in (7) to be equal to , condition (21) is fulfilled, since holds for the complete model set in the unconstrained case (if the global minimum is guaranteed to be found by the optimizer, then this condition holds. In this work a quadratic program is solved, so this condition holds). Therefore, the terminal state has to reach the aforementioned terminal region , here a region where and for all holds, that is, no active constraints. Furthermore, has to be a robust positive invariant set. In the literature, this technique of guaranteed stability is sometimes called dual-mode control. The optimizer has to steer the system into the terminal region taking into account the constraints. Within the terminal region a virtual second controller becomes active with guaranteed stability. Because the terminal region is designed in such a way that this controller never violates the constraints in this set, linear theory can be used to design , in this work by the use of LMIs. As already mentioned, (21) guarantees robust stability only in case a min-max optimization is used in standard MPC formulation. A optimization is time consuming and can usually not be solved in real time, that is, within the sampling interval. In case of a re-parametrisation as feedback formulation using the derived though robust stability can be guaranteed for a minimization of (18) as quadratic program (QP) [6]. It can be shown that is Lyapunov, that is, strictly monotonically decreasing over 3.4. Robust Constraint Satisfaction As terminal region the maximal robust positive invariant set (MAS) for polytopic systems can be constructed using the algorithm presented in [21] if it holds that known as quadratic stability, for all . This is fulfilled because of (23). In order to reach the terminal set robustly, we follow the lines of [6] and use all permutations of model set vertices for a -step ahead prediction: to predict the future states. Applying the constraints to all possible future states leads to the constraint set: which can be checked for redundancy using the (reformulated) constraint set Restricting the future trajectory to (30) guarantees robust constraint satisfaction for the complete model set and arbitrary fast model changes within the prediction horizon. 3.5. Robust Observer Since the actual state is not measurable in the Rijke tube setup it has to be estimated by a state observer. In this work, the observer structure of a Luenberger observer is used where and are the estimated state and output, and is the observer gain. Defining the observer error as leads to Thus, the observer error converges to zero if (33) is asymptotically stable for all . To do so, one can make use of the duality between the control and observer problem. Since the eigenvalues, and therefore the stability properties of are identical with its transposed the observer system (32) can be cast into a control problem [22]: and one can use the semi-definite program shown in Section 3.3 to find a positive definite and the corresponding observer gain fulfilling the (quadratic stability) condition for all with . As a consequence, (33) is robustly asymptotically stable for the model set. Thus, the observer error converges to zero for the complete model set, if the actual model is known to the observer. Here, the model is known due to the online measurement of the tube length. 4. Results The proposed robust MPC with its internal polytopic model and the robust state observer is able to stabilize the Rijke tube robustly by solving a quadratic program (QP) online. Critical for the application are the number of constraints in the QP as a result of the feedback MPC formulation. For a prediction horizon of and vertices in the polytopic model under the input constraints the problem formulation leads to a QP with 180 constraints. This QP can be solved online within the sampling time of with modern hardware. In Figure 5 the control of the physical model is shown for four different lengths of the Rijke tube in simulation. The controller is activated after . This delay can be interpreted as a temporary loss of actuation or a disturbance deflecting the system from steady state. The controller has the task to drive the system back to the origin. It can be seen that the RMPC is capable of robustly stabilizing the different configurations while respecting the input constraints. Especially for the length , the RMPC uses the full input range. The reason for this extensive use of the actuator can be seen in the fact that the RMPC has to guarantee stability for any arbitrarily fast model change (change in tube length) within the prediction horizon. This conservatism could be reduced by using only the actual model for constraint satisfaction while sacrificing some robustness concerning model changes. Then, however, the guaranteed stability would be lost during a parameter variation, that is, during transition from one tube length to another. Another source of conservatism is the formulation as feedback MPC. The robust optimization problem, that is, finding an input trajectory which guarantees constraint satisfaction in the presence of (known) uncertainties, can be solved exactly for example by dynamic programming [19]. Unfortunately, dynamic programming is not applicable for most systems due to its complexity. The feedback MPC formulation is a good compromise between conservatism and optimization complexity. Figure 6 presents the control results for the RMPC of the Rijke tube for and in the experiment. The successful control proofs the real-time capability of the control. As input and output constraints and are used leading to 176 constraints for the QP. The maximum calculation time on the dSPACE rapid control prototyping hardware DS1006 equipped with a AMD Opteron CPU is below the sampling time of . As QP solver qpOASES was used [23]. The robust state observer uses the presented LPV model which was derived from the analytical model. Using the estimated state, the RMPC can stabilize the Rijke tube while respecting the given constraints. For comparison the output estimated by the robust state observer is plotted. It can be seen that the frequency is in excellent agreement with the measurement from the microphone, and the estimated amplitude is a little bit to conservative, when the controller becomes active. 5. Conclusions An analytical model for a Rijke tube has been applied which is able to reproduce the stability map of the thermoacoustic setup as well as the dynamic behaviour over different lengths of the tube. Therefore, it is an ideal test bench for the robust control of unstable thermoacoustic systems, especially the real-time capability of MPC algorithms. It is used to derive a simplistic linear parameter varying system which represents the unstable modes. Using this system, it is shown that a robust output MPC can be designed that is capable of steering the system robustly to the origin under constraints. The proposed RMPC is a good compromise between conservatism and computational load when considering the very fast system dynamics of a thermoacoustic system and as a consequence the short calculation times needed for the application online. Thus, RMPC is a promising approach in order to fulfill the demands for an active control system in modern gas turbines. The robust stabilization and constraint handling are shown in this paper. In addition, a MIMO setup and fault-tolerant control can be incorporated quiet naturally into the MPC framework. Finally, the importance of physical understanding and modelling for the estimation of unavoidable uncertainties is demonstrated for the robust control of thermoacoustic instabilities. The authors gratefully acknowledge the contribution of the Deutsche Forschungsgemeinschaft through the Collaborative Research Center 686 “Model-Based Control of Homogenized Low-Temperature 1. J. S. W. Rayleigh, The Theory of Sound, vol. 2, Courier Dover Publications, 1945. 2. F. Nicoud and T. Poinsot, “Thermoacoustic instabilities: should the Rayleigh criterion be extended to include entropy changes?” Combustion and Flame, vol. 142, no. 1-2, pp. 153–159, 2005. View at Publisher · View at Google Scholar · View at Scopus 3. S. Candel, “Combustion dynamics and control: progress and challenges,” in Proceedings of the 29th International Symposium on Combustion Hokkaido University Sapporo Japan, vol. 29, pp. 1–28, July 2002. View at Scopus 4. A. P. Dowling and A. S. Morgans, “Feedback control of combustion oscillations,” Annual Review of Fluid Mechanics, vol. 37, pp. 151–182, 2005. View at Publisher · View at Google Scholar · View at 5. B. T. Zinn and T. C. Lieuwen, “Combustion instabilities: basic concepts,” in Combustion Instabilities in Gas Turbine Engines: Operational Experience, Fundamental Mechanisms, and Modeling, T. C. Lieuwen and V. Yang, Eds., vol. 210, pp. 3–26, American Institute of Aeronautics and Astronautics, 2005. 6. B. Pluymers, J. A. Rossiter, J. Suykens, and B. De Moor, “A simple algorithm for robust MPC,” in Proceedings of the IFACWorld Congress, Prague, Czech Republic, 2005. 7. A. P. Dowling, “Nonlinear self-excited oscillations of a ducted flame,” Journal of Fluid Mechanics, vol. 346, pp. 271–290, 1997. View at Scopus 8. T. Poinsot and D. Veynante, Theoretical and Numerical Combustion, R. T. Edwards, 2nd edition, 2005. 9. A. S. Morgans and A. P. Dowling, “Model-based control of combustion instabilities,” Journal of Sound and Vibration, vol. 299, no. 1-2, pp. 261–282, 2007. View at Publisher · View at Google Scholar · View at Scopus 10. T. Lieuwen, “Modeling premixed combustion-acoustic wave interactions: a review,” Journal of Propulsion and Power, vol. 19, no. 5, pp. 765–781, 2003. View at Scopus 11. P. A. Hield, M. J. Brear, and S. H. Jin, “Thermoacoustic limit cycles in a premixed laboratory combustor with open and choked exits,” Combustion and Flame, vol. 156, no. 9, pp. 1683–1697, 2009. View at Publisher · View at Google Scholar · View at Scopus 12. L. Crocco and S. L. Cheng, “Theory of combustion instability in liquid propellant rocket motors,” AGARDOGRAPH 8, Butterworths Science Publication, 1956. 13. T. Schuller, D. Durox, and S. Candel, “A unified model for the prediction of laminar flame transfer functions: comparisons between conical and V-flame dynamics,” Combustion and Flame, vol. 134, no. 1-2, pp. 21–34, 2003. View at Publisher · View at Google Scholar · View at Scopus 14. A. P. Dowling, “A kinematic model of a ducted flame,” Journal of Fluid Mechanics, vol. 394, pp. 51–72, 1999. View at Scopus 15. X. Yuan, K. Glover, and A. P. Dowling, “Modeling investigation for thermoacoustic oscillation control,” in Proceedings of the American Control Conference, pp. 3323–3328, Baltimore, Md, USA, 2010. 16. F. Jarmolowitz, Ch. Gro-Weege, Th. Lammersen, and D. Abel, “Modelling and robust Model Predictive Control of an unstable thermoacoustic system with input constraints,” in Proceedings of the American Control Conference, Montreal, Canada, 2012. 17. J. A. Rossiter, Model-Based Predictive Control: A Practical Approach, CRC Press, 2003. 18. D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert, “Constrained model predictive control: stability and optimality,” Automatica, vol. 36, no. 6, pp. 789–814, 2000. View at Publisher · View at Google Scholar · View at Scopus 19. J. B. Rawlings and D. Q. Mayne, Model Predictive Control: Theory and Design, Nob Hill, 2009. 20. S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM Studies in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1994. 21. B. Pluymers, J. A. Rossiter, J. Suykens, and B. De Moor, “The effcient computation of polyhedral invariant sets for linear systems with polytopic uncertainty,” in Proceedings of the American Control Conference, pp. 804–809, Portland, Ore, USA, 2005. 22. J. Lunze, Regelungstechnik 2: Mehrgroensysteme, Digitale Regelung, Springer, 2010. 23. H. J. Ferreau, H. G. Bock, and M. Diehl, “An online active set strategy to overcome the limitations of explicit MPC,” International Journal of Robust and Nonlinear Control, vol. 18, no. 8, pp. 816–830, 2008. View at Publisher · View at Google Scholar · View at Scopus
{"url":"http://www.hindawi.com/journals/jc/2012/927345/","timestamp":"2014-04-17T16:55:57Z","content_type":null,"content_length":"330906","record_id":"<urn:uuid:79a0936a-0185-4a3d-9f7b-befed59c1459>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
Teaching Statement Anthony Bak Teaching Statement Anthony Bak I. Introduction Teaching is a rewarding and enriching experience. My experiences have given me first hand knowledge of the many sides of the learning process, from how students respond to lecture, to how to best help them outside of the classroom. I value teaching and my ideal position would maintain a balance between teaching and My goal is to help the learning unfold, watching students struggle and through their own hard work find the Eureka moment when the math clicks and they gain a new insight. I was once an undergraduate and I remember well what it was like to have an inspiring teacher who helped me see the “cool” in math. Years later I like serving that same role as mentor. II. Teaching Philosophy Although it is always a pleasure to get that impassioned and gifted student, we should not forget that for many students there are substantial barriers to learning. Students are busy with other classes, their personal lives, and many have had previous experiences leading them to believe that math is boring or worse. Teaching is an opportunity to help students overcome their inhibitions and develop their analytic abilities, find applications of math to their own interests and inspire a love for pure math as an end in itself. This is achieved in the following ways: Course Design. The syllabus needs to cover the core course material while being flexible enough to meet the particular needs of the students. Many students take math classes to fulfill requirements for other departments, and so their interests may be quite diverse. On the first day of class, I find out the backgrounds of the students and why they are in the course. From there I try to modify the lectures and examples to match the interests and skills the students need to develop in their coming careers. For example, one of my Calculus III classes was filled with concrete-thinking Bio-Engineering students who greatly appreciated the physical examples I gave them (see Projects). Of course, I keep in mind that I must serve the needs of all the students taking the While at Mt. Holyoke I developed a new differential equations course. Most of the students were taking the course because of a requirement for another department and are interested in mathematics primarily through applications. I contacted professors outside of the Math department (for example in Biology, Chemistry and Physics) to provide differential equations that come up in their own research thereby making the course more important and relevant to students from those departments. I also included a computer lab portion of the course so that they have tools to tackle unknown differential equations when they come across them in the future. Some of the faculty who suggested lab projects agreed to give “mini lectures” before the labs to explain how the equations arise in their research.. Lecture Style. There is also a subtle art to making lectures engaging. I try to keep the atmosphere fun and relaxed so the students feel free to ask questions (or point out mistakes). It is also important for lectures to go beyond what is presented in the textbook so that the students see real value in coming to class. In my lectures, I share with students my own interpretation of the material, give different pictures, more examples, and try to connect the material to previous topics. I wanted to briefly share with you an experience I had teaching at Mount Holyoke College of which I am quite proud. In my differential equations course we were studying eigenfunction solutions to differential equations. I assigned a homework problem that asked for the shape of a string under tension rotating on its axis (not like a jump rope - at low speeds the string in this example will be totally flat). When you setup and solve the differential equation eigenvalue problem you find that there are non-zero solutions for a discrete set of rotational speeds ωi . Imagine a string stretched between two points with a variable speed motor attached. When you start the motor the string is stretched flat (the zero solution). As you slowly speed up the string stays flat at first, but then when it hits ω0 it jumps out to the first eigenmode (one arc of a sin function). The really interesting (and I would argue counterintuitive!) part is that as you speed up and pass ω0 the string should snap back to flat again until it hits ω1 - since your differential equation tells you that only the zero solution exists in between the ωi . When you hit ω2 it pops back out and then back in as you speed up further. It occurred to me that it should be possible to build a physical demonstration of these eigenfunctions - to convince the students that they really do exist. I went to speak with the lab technician in the physics department. As I explained the idea he immediately realized that if it worked as I claimed it would be better demonstration of eigenfunctions then what they were currently using - exactly because the string going to zero in between critical rotational speeds is counter intuitive. Twenty minutes later he was in my office to tell me that the demonstration worked exactly as predicted. The following class I brought the students down to see it. They were just as excited as I. Some of the students commented that his was the first time they felt math really told them something (unexpected!) about the real world. News of the demonstration quickly spread to other physics and math faculty where it’s been used to demonstrate the ”reality” of eigenfunctions in both departments. With the encouragement of some of the physics faculty I am writing a small paper for the American Physics Society so that the demonstration becomes more widely known. Availability and Feedback. I want there to be as little barrier to learning as possible. As a teacher, I therefore schedule my time to be available to students outside of lectures in ways most convenient to them. The aim is to simply eliminate easy excuses for not mastering the material. I also use my office interactions with students to gain vital feedback on how well they understand the material and which topics need reinforcement. In Calculus III at UPenn, I handed out five index cards to students on the first day of class and throughout the semester would ask them to anonymously return the cards with a question or comment. Also, my Web page had a comment box that allows students to anonymously post. In both cases, I want to give my students an avenue to communicate their difficulties without intimidation or fear of retribution. All these techniques are useful ways for me to gauge student understanding so they are prepared for exams and no one finds unpleasant Being available is not only important for the struggling students but for the more talented students as well. In my Calculus III course at Mount Holyoke there was first year student who is both talented and motivated to go beyond the course material. I identified her at the beginning of the year and proposed that we work on some extensions to the material covered in the course (for example discussing differential forms in general and working towards the generalized Stoke’s theorem). We met weekly to discuss reading assignments and to take a look at extra problem sets. Outside projects are an important tool to help students own the material by using research to actively engage in the learning process. I see my role as teacher to assist students in finding that applicability whether it’s connecting the course to the wider mathematics curriculum or finding examples relevant to the student’s interests. For my Calculus III Bio-Engineering students, I supplied project ideas after canvassing recommenda- tions from graduate students and professors in Biology and Bio-Physics. For the non Bio-Engineering students I created some Math and Physics problems on my own. I presented the ideas as extra credit problems for interested students. Several of the students took on these projects and I worked with them outside of class. When I taught ”Ideas in Mathematics” most of the students were from the nursing school at the University of Pennsylvania. The course is designed as a terminal course in math and covered elementary topics in game theory, statistics, and computer science. Many of the students had previous negative experience with Math and the professor and I put effort to make the course fun, while at the same time giving them mathematics tools that they could apply to their interests. Rather than a final exam the students wrote term papers on topics of their choosing. We received papers on a wide variety of topics, from how to make the electoral college more “fair”, to a report on the classic book “On Growth and Form” by D’Arcy Wentworth Thompson. At the end of the class some of the students commented that this was their first interesting math course. Computers are an important part mathematics research and instruction. For the differential equations course at Mt. Holyoke I developed a series of computer labs the students complete in parallel with the regular homework assignments. The main purpose is to give the students tools to analyze differential equation encountered outside of the course. The secondary purpose I did not discover until the course was already underway. At the start of the course I planned to use Maple, but I found that students did not understand basic concepts of programming and could not think critically about limitations of computer solutions. Instead I chose to use Python, a general purpose programming language, so that students had to develop their own routines to numerically find integrals and solve differential equations. Most of the students had never programmed before and said that this was a rewarding experience, learning for the first time how a tool they use daily works. Project assignments allow students to push beyond the structure of the course into topics of individual interest. In doing so, the breadth of the overall coursework is complemented with a necessary depth, giving the students a well-rounded learning experience. Undergraduate Research. Involving students in active research projects is an important part of developing young mathematicians. As an undergraduate I took part in a number of research projects both at my home institution and at other institutions via the REU program. These experiences gave me confidence that I could go on to advanced study and a sense of participation in the larger scientific community. In collaboration with a researcher at the NIH I am currently working on a project in applied topology to identify new treatments for cancer. The basic topological ideas (persistent homology for instance) are very intuitive and the problem would benefit from more systematic exploration of the parameter space by an undergraduate student. My research in theoretical physics entails writing computer programs to explore possible solutions to Hermitian Yang Mills equations and construct explicit examples. While the physical concepts can be difficult the problems can be distilled into simple and concrete mathematical problems suitable for an undergraduate project such as finding integer lattice points in a region of space. I also see my work as being a fusion of mathematics and physics and I have interests in applications of math to real world problems. I would enjoy sponsoring and overseeing cross disciplinary research that involved collaborating with faculty in other departments particularly Physics, Chemistry, Computer Science, and Biology. Outside Help Resources. Sometimes we forget that teaching is not about teaching; it is about learning. And for learning to be effective for all students, a variety of options need to be available, including resources outside the classroom. I see my role as teacher to help students meet their diverse learning needs. Such resources include but are not limited to tutoring, math workshops, and math question centers. My experience working in outside support arenas has given me valuable insight into how they may help students who, for whatever reason, are not learning well in the classroom environment. As the manager of Penn’s math workshop program, I trained undergraduate students to run active learning problem solving sessions and provided continued supervision throughout the semester, helping to troubleshoot mathematical and pedagogical problems. As professional mathematicians we know that math is an active pursuit, that you never understand the material until you have engaged and wrestled with it. The math workshops provided a venue for the students to actively engage in the material as well as develop their abilities to work in groups and communicate mathematical concepts to others. A typical session would divide the students into groups, assigning a different problem to each group to solve. The students explained the concepts to each other making sure that all the students in the group understood the end result. After solving the problems, a student from each group explained their solution on the blackboard to the rest of the class. Perhaps most importantly, at no time are the students being lectured to; instead, the undergraduate workshop leaders are trained to guide the groups to the correct solutions without telling them directly how to find it. The process of discussing the problem, solving it in their group, and finally explaining the problem on the blackboard gives the student multiple chances to engage with the material, increasing their understanding each time. I believe this kind of program can be an invaluable way to increase student comprehension.
{"url":"http://www.docstoc.com/docs/114279755/Teaching-Statement-Anthony-Bak","timestamp":"2014-04-24T12:13:46Z","content_type":null,"content_length":"64716","record_id":"<urn:uuid:77e8d9d7-2c94-4c33-ab4d-fe2569209151>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved. 'Mystery Matrix' printed from http://nrich.maths.org/ Have a look at this table square or matrix: Can you see how it has been constructed? Why are some numbers in black and some in red? Can you explain why the red $6$ is in that particular square? Why is there a $45$ in the bottom right-hand corner? You will notice that the numbers $2$ - $9$ are used to generate the matrix and only one of these numbers is used twice (the $2$). Can you fill in the matrix (table square) below? The numbers $2$ -$12$ were used to generate it with, again, just one number used twice. Click here for a poster of this problem.
{"url":"http://nrich.maths.org/1070/solution?nomenu=1","timestamp":"2014-04-20T03:25:38Z","content_type":null,"content_length":"4039","record_id":"<urn:uuid:7c88a719-464d-462a-94d4-6608f86aee98>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
Differential-Algebraic Equations and Their Fifth SIAM Conference on Optimization SIAM Short Course 2 Differential-Algebraic Equations and Their Connections to Optimization Sunday, May 19, 1996 Victoria Conference Centre, Victoria, British Columbia, Canada Organizers: H. Georg Bock, Johannes P. Schloder and Volker H. Schulz, Interdisciplinary Center for Scientific Computing (IWR), University of Heidelberg, Germany In recent years Differential-Algebraic Equations(DAE) have attracted much interest - partially because of their importance as models for a large class of dynamical processes, e.g. in mechanics, robotics, chemical engineering but also because of their intrinsic numerical difficulties. DAE are connected to optimization in at least two ways. On the one hand variational principles are used to formulate DAE, e.g. in multibody dynamics or in boundary value problems associated with optimal control problems. This results in special structures of the differential algebraic equations. On the other hand many relevant practical problems, which are modelled by DAE, call for optimization rather than forward simulation alone. The special structures of the optimization problems arising from discretization of the DAE are investigated. It is shown how these structures can - and must - be exploited for the design of efficient optimization methods. Session 1 recalls why many relevant dynamical processes in science and industry are favourably modelled by DAE and explains the typical properties of such models (index, invariants, implicitness, discontinuities). Challenging classes of optimization problems parameter estimation, optimum experiment design and path constrained optimal control problems are formulated. Their demands on methods for discretization and optimization are described. Session 2 reviews onestep and multistep discretization methods for DAE. Emphasis is laid on advanced BDFmethods. It is shown how such methods can be designed to optimally cooperate with optimization algorithms.This includes solution of systems with relaxed algebraic constraints and efficient generation of derivatives for the optimization procedure even in the case of DAE with implicitly characterized discontinuities. Session 3 is devoted to the Optimization Boundary Value Problem Approach. A class of efficient numerical methods is presented, which solve problems with DAE boundary value problems as nonlinear constraint. Adequate numerical formulations of these problems are given, that help to transfer the inherent structures of DAE optimization problems so that they can be exploited in the largescale optimization methods. For the discretization of DAE boundary value problems multiple shooting and collocation methods are described and compared. These methods take advantage of invariants to improve the conditioning. For the treatment of the resulting large scale constrained nonlinear optimization problems structure exploiting SQPtype methods, that have proven to be very effective in practical applications, are discussed. These are GaussNewton Methods for discretized parameter estimation problems and structured SQPmethods using high rank updates or partially reduced SQPmethods for constrained optimal control problems and optimal design problems. The first part of session 4 concentrates on techniques for the evaluation and solution of quadratic subproblems, which are computationally most expensive. Techniques are described that allow to generate the functions and derivatives with an accuracy that just meets the demands of the optimization procedure in order to reduce the computational load. Structure exploiting recursive techniques for the solution of the quadratic subproblems are given. The methods offer a high potential of parallelism on several levels, which can be exploited for further acceleration. New parallel algorithms are described and performance results are reported. Finally, a thorough discussion of several practical applications with typical numerical challenges is given. These problems include identification of mechanical systems, optimum experiment design for estimation of dynamical parameters in models for industrial robots, optimal trajectories for satellite mounted robots and chemical engineering applications in combustion. All instructors are working at the Interdisciplinary Center for Scientific Computing(IWR) at the University of Heidelberg. Their work is devoted to the design, implementation and application of efficient optimization methods for large-scale optimization problems with emphasis on real-life dynamical processes descibed by nonlinear ODE, DAE or PDE. The instructors have strong experience in consulting and solution of problems in industry. H. Georg Bock holds a chair for Scientific Computing and Optimization. University of Heidelberg. In 1986 he received his Ph.D. in Mathematics from the University of Bonn. In 1988 he got a professorship at the University of Augsburg and moved to Heidelberg 1991. His main interests include the optimal combination of discretization and optimization methods and the solution of nonlinear optimal control and feedback problems. Johannes P. Schloder is a senior scientist at IWR. Since 1987 he holds a Ph.D. in Mathematics from the University of Bonn and had teaching and research positions at the Universities of Bonn and Augsburg. Dr. Schloder's current research aereas are parameter estimation and optimum experiment design. Volker H. Schulz completed his Ph.D. in Mathematics this year.His main interest is the development of partially reduced SQP-methods for the treatment of constrained optimal control problems in DAE and the combination of SQP-methods with multigrid procedures for the optimization of PDE. Who Should Attend? The course describes powerful methods and algorithms for the efficient solution of large-scale optimization problems in DAE. It is intended for scientists interested in optimization problems connected with DAE, large scale nonlinear programming and practical optimization of industrial dynamical processes. Not only mathematicians but also people from engineering, especially mechanical and chemical engineering should profit from it. Recommended Background Attendees should be familiar with basic discretization methods for ODE and preferably also DAE. A basic knowledge of numerical optimization methods, especially SQP methods, would be helpful. │ 8:00 │ Registration │ │ 9:00-10:00 │ Session 1: Introduction to Optimization Connections to DAE. │ │ │ - Features of DAE Models in Applications │ │ │ - Parameter Estimation, Optimum Experiment Design, Optimal Control Problems in DAE │ │ 10:00-10:30 │ Coffee │ │ 10:30-12:00 │ Session 2: Discretization Methods for DAE. │ │ │ - BDF Methods and Alternative Discretizations │ │ │ - Higher Index Problems and Invariants │ │ │ - Treatment of Inconsistent Initial Values and Discontinuities │ │ │ - Linear Algebra Techniques for the Reduction of the Overall Effort │ │ 12:00-1:30 │ Lunch (attendees are on their own for lunch) │ │ 1:30-3:00 │ Session 3: Efficient Treatment of Optimization Problems in DAE. │ │ │ - The Boundary Value Problem Approach │ │ │ - Multiple Shooting and Collocation Discretization │ │ │ - Generalized Gauss Newton Methods │ │ │ - Structured and Partially Reduced SQPMethods │ │ 3:00-3:30 │ Coffee │ │ 3:30-5:30 │ Session 4: Further Algorithmic Features and Practical Applications. │ │ │ - Parallel Evaluation and Solution of Quadratic Subproblems │ │ │ - Internal Differentiation for Efficient Derivative Evaluation in Adaptive Discretization Applications from, e.g., │ │ │ - Mechanical Engineering │ │ │ - Robotics │ │ │ - Chemical Engineering │ │ │ - Enviromental Physics │ │ 5:30 │ Short Course adjourns │ Registration Fees (for either Short Course) │ │ SIAG/Opt Member* │ SIAM Member │ Non-Member │ Student │ │ Preregistration (before 5/6/96) │ $110 │ $110 │ $125 │ $40 │ │ Registration (after 5/6/96) │ $125 │ $125 │ $140 │ $55 │ *Member of SIAM Activity Group on Optimization. Short Course fees include course notes and refreshment breaks. To register for either short course, the conference, or both, please fill-in and submit the preregistration form. On-site registration will start on Saturday, May 18 at 6:00 PM at the entrance, Lobby Level of the Conference Centre. MEM, 3/11/96
{"url":"http://www.siam.org/meetings/archives/op96/scourse2.htm","timestamp":"2014-04-16T18:58:26Z","content_type":null,"content_length":"10516","record_id":"<urn:uuid:69738769-5c1a-4095-950d-e46248fc43e5>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article Bibliographic References Add to: ASCII Text x Jan C. Bioch, Toshihide Ibaraki, "Generating and Approximating Nondominated Coteries," IEEE Transactions on Parallel and Distributed Systems, vol. 6, no. 9, pp. 905-914, September, 1995. BibTex x @article{ 10.1109/71.466629, author = {Jan C. Bioch and Toshihide Ibaraki}, title = {Generating and Approximating Nondominated Coteries}, journal ={IEEE Transactions on Parallel and Distributed Systems}, volume = {6}, number = {9}, issn = {1045-9219}, year = {1995}, pages = {905-914}, doi = {http://doi.ieeecomputersociety.org/10.1109/71.466629}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - JOUR JO - IEEE Transactions on Parallel and Distributed Systems TI - Generating and Approximating Nondominated Coteries IS - 9 SN - 1045-9219 EPD - 905-914 A1 - Jan C. Bioch, A1 - Toshihide Ibaraki, PY - 1995 KW - Almost-self-dual functions KW - coteries KW - dualization KW - monotone Boolean functions KW - mutual-exclusion KW - nondominated coteries KW - positive Boolean functions KW - self-dual functions. VL - 6 JA - IEEE Transactions on Parallel and Distributed Systems ER - Abstract—A coterie, which is used to realize mutual exclusion in a distributed system, is a family C of incomparable subsets such that every pair of subsets in C has at least one element in common. Associate with a family of subsets C a positive (i.e., monotone) Boolean function f[C] such that f[C](x) = 1 if the Boolean vector x is equal to or greater than the characteristic vector of some subset in C, and 0 otherwise. It is known that C is a coterie if and only if f[C] is dual-minor, and is a nondominated (ND) coterie if and only if f[C] is self-dual. In this paper, we introduce an operator ρ, which transforms a positive self-dual function into another positive self-dual function, and the concept of almost-self-duality, which is a close approximation to self-duality and can be checked in polynomial time (the complexity of checking positive self-duality is currently unknown). After proving several interesting properties of them, we propose a simple algorithm to check whether a given positive function is self-dual or not. Although this is not a polynomial algorithm, it is practically efficient in most cases. Finally, we present an incrementally polynomial algorithm that generates all positive self-dual functions (ND coteries) by repeatedly applying ρ operations. Based on this algorithm, all ND coteries of up to seven variables are computed. [1] D. Agrawal and A. El Abbadi, “An Efficient and Fault-Tolerant Algorithm for Distributed Mutual Exclusion,” Proc. Eighth Ann. ACM Symp. Principles of Distributed Computing, pp. 193-200, 1989. [2] D. Avis and K. Fukuda,“A pivoting algorithm for convex hulls and vertexenumeration of arrangements and polyhedra,” Discrete and Computational Geometry, vol. 8, pp. 295-313, 1992. [3] D. Avis and K. Fukuda,“Reverse search for enumeration,” Research Report 92-5, Graduate School of Systems Management,Univ. of Tsukuba, 1992. [4] D. Barbara and H. Garcia-Molina, "The Vulnerability of Vote Assignments," ACM Trans. Computer Systems, vol. 4, pp. 187-213, Aug. 1986. [5] D. Barbara and H. Garcia-Molina, "The Reliability of Voting Mechanisms," IEEE Trans. Computers, vol. 36, pp. 1,197-1,208, Oct. 1987. [6] L.J. Billera,“On the composition and decomposition of clutters,” J. Combinatorial Theory, vol. 11, pp. 234-245, 1971. [7] J.C. Bioch,“Decompositions of coteries and Boolean functions,” Tech. Rept., Dept. of Computer Science No. 2, Erasmus Univ., Rotterdam, The Netherlands, Jan. 1993. A revised version also appeared in Proc. of Computing Science in The Netherlands CSN-93, pp. 75-86, Nov. 1993. [8] J.C. Bioch and T. Ibaraki,“Decompositions of positive self-dual Boolean functions,” Discrete Math, vol. 140, pp. 23-46, 1995. [9] J.C. Bioch and T. Ibaraki,“Complexity of identification and dualization ofpositive Boolean functions,” Report RRR25-93, RUTCOR, Rutgers Univ., July 1993. To appear in Infomation and Computation. [10] J.C. Bioch and T. Ibaraki,“Generating and approximating ND coteries,” Report RRR42-94, RUTCOR, Rutgers Univ., Nov. 1994. [11] E. Boros,P.L. Hammer,T. Ibaraki,, and K. Kawakami,“Identifying 2-monotonic positive Boolean functions in polynomial time,” ISA’91 Algorithms, W.L. Hsu and R.C.T. Lee, eds., Springer Lecture Notes in Computer Science 557, pp. 104-115, 1991. [12] Y. Crama,“Dualization of regular Boolean functions,” Discrete Applied Math., vol. 16, pp. 79-85, 1987. [13] S.B. Davidson,“Replicated data and partition failures,” S. Mullender ed., Distributed Systems, ch.13, Addison-Wesley, 1989. [14] T. Eiter and G. Gottlob,“Identifying the minimal transversals of ahypergraph and related problems,” Tech. Report CD-TR 91/16, Christian Doppler Labor für Expertensysteme,Technische Universität Wien, Jan. 1991. To appear in SIAM J. Computing. [15] H. Garcia-Molina and D. Barbara, “How to Assign Votes in a Distributed System,” J. ACM, vol. 32, no. 4, pp. 841-860, Oct. 1985. [16] T. Ibaraki and T. Kameda, "A Theory of Coteries: Mutual Exclusion in Distributed Systems," IEEE Trans. Parallel and Distributed Systems, vol. 4, pp. 779-794, July 1993. [17] T. Ibaraki,H. Nagamochi,, and T. Kameda,“Optimal coteries for rings andrelated networks,” Distributed Computing, vol. 8,. pp. 791-201, 1995. [18] D.S. Johnson,M. Yannakakis,, and C.H. Papadimitriou,“On generating allmaximal independent sets,” Information Processing Letters, vol. 27, pp. 119-123, 1988. [19] L. Lamport, "Time, clocks and the ordering of events in a distributed system," Comm. ACM, vol. 21, no. 7, pp. 558-565, July 1978. [20] K. Makino and T. Ibaraki,“The maximum latency and identification ofpositive Boolean functions,” ISAAC 1994, Algorithms and Computation, D. Z. Du and X. S. Zhang, eds., Springer Lecture Notes in Computer Science, vol. 834, pp. 324-332, Aug. 1994. [21] S. Muroga,Threshold Logic and Its Applications. Wiley-Interscience, 1971. [22] S. Muroga,T. Tsuboi,, and C.R. Bauch,“Enumeration of threshold functionsof eight variables,” IEEE Trans. Computers, vol. 19, no. 9, pp. 818-825, 1970. [23] M.L. Nielsen and M. Mizuno,“Coterie join algorithm,” IEEE Trans. Parallel and Distributed Systems, vol. 3, no. 5, pp. 582-590, Sept. 1992. [24] U.N. Peled and B. Simeone,“Polynomial-time algorithm for regularset-covering and threshold synthesis,” Discrete Applied Math., vol. 12, pp. 57-69, 1985. [25] D. Peleg and A. Wool,“The availability of quorum systems,” Tech. Report CS93-17, The Weizmann Institute of Science, 1993. [26] K.G. Ramamurthy,Coherent Structures and Simple Games. Kluwer, 1990. [27] R.H. Thomas, “A Majority Consensus Approach to Concurrency Control,” ACM Trans. Database Systems, vol. 4, no. 2, pp. 180-209, June 1979. [28] Z. Tong and R. Kain, "Vote Assignments in Weighted Voting Mechanisms," Proc. Seventh Symp. Reliable Distributed Systems, IEEE, Oct. 1988. [29] I. Wegener,The Complexity of Boolean Functions. Wiley-Teubner, 1987. [30] R.O. Winder,“Enumeration of seven-argument threshold functions,” IEEE Trans. Electronic Computers, vol. 14, no. 3, pp. 315-325, 1965. Index Terms: Almost-self-dual functions, coteries, dualization, monotone Boolean functions, mutual-exclusion, nondominated coteries, positive Boolean functions, self-dual functions. Jan C. Bioch, Toshihide Ibaraki, "Generating and Approximating Nondominated Coteries," IEEE Transactions on Parallel and Distributed Systems, vol. 6, no. 9, pp. 905-914, Sept. 1995, doi:10.1109/ Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/td/1995/09/l0905-abs.html","timestamp":"2014-04-20T23:31:02Z","content_type":null,"content_length":"61592","record_id":"<urn:uuid:b845ccbe-7f7c-43b8-86a9-54d395411378>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Centering and Intercepts in Mplus Paraskevas Petrou posted on Wednesday, February 02, 2011 - 8:27 am Dear Mplus users, In the multilevel SEM model that I am testing I am encountering the following issues: 1. Is there any default centering method for the twolevel analysis in v6 of Mplus or do I still need to define the centering myself? 2. In my model, I have decided not to define within-level variables (but only between-level) because this is the only way I can repeat all my within-level paths at both levels. I need the within-level part of the model to be tested at both levels. This means I cannot use groupmean centering for my within-level predictors as Mplus only permits that for within-level variables, which I have not specified. Is this a problem? Should I use grandmean centering for all predictors? 3. To plot one interaction that is part of my model, I need the intercept of my dependent variable. The answering scale of this variable is 1-5, but the unstandardized intercept in the output is 25.86 and the standardized one is 50.15. How is that possible? I look at the "Intercepts" column in the between part of the output. Thank you in advance. Kind regards, Paris Petrou Bengt O. Muthen posted on Wednesday, February 02, 2011 - 5:04 pm 1. No centering is the default. 2. See top of page 243 of the Version 6 User's Guide, showing that when an x variable has both a latent within and a latent between-level part, there is an implicit latent group-mean centering of the latent within-level covariate. 3. The intercept is not the mean of the variable. Paraskevas Petrou posted on Thursday, February 03, 2011 - 12:14 am Thank you very much Bengt. 2. Does that mean that I can leave all my within-level variables (which repeat at both levels) uncentered and only apply centering to my between-level variables (which appear only at the between-level)? Or leave all my variables uncentered? 3. I need to report my interaction plot and the y axis will have a range between 25 and 35. That happens because of the high interecept. I wonder if that will look strange for the reader, because the y variable ranges through a scale of 1-5. Kind regards, Paris Petrou Bengt O. Muthen posted on Thursday, February 03, 2011 - 11:56 am 2) You can leave all your covariates uncentered, but you can also center the between-level covariates. 3) The estimated mean for your DV should not be 25-35 if your DV has a sample mean of 1-5. Either the model is set up wrong or is misinterpreted. Paraskevas Petrou posted on Friday, February 04, 2011 - 12:59 am 3. By "estimated mean" do you mean intercept? Can I trust an unstandardized estimate of 19.09 for the intercept of a DV which ranges through a scale of 1-5? When I ask for sampstat this DV has a mean of 0.00 at the within level and 2.02 at the between level. Thank you Paris Petrou Bengt O. Muthen posted on Friday, February 04, 2011 - 3:16 pm No, the intercept is not the estimated mean. Take for example a regression of y on x: y= a + b*x + e so that the mean of y, E(y), is E(y) = a + b E(x), where a is the intercept and b is the slope. Mplus estimates a and b, not E(y), but you can express and compute E(y) using a, b, and the sample mean of x. If this doesn't clear it up, please send input, output, data, and license number to support@statmodel.com. Paraskevas Petrou posted on Monday, February 07, 2011 - 2:40 am I managed to change my model in a way that I don't get these big intercepts any more. But now I get this warning: CONDITION NUMBER IS 0.422D-34. PROBLEM INVOLVING PARAMETER 1. Parameter 1 is the alpha for one of my within-level predictors. One way to stop getting this warning is not centering my predictors, which I do not find a good idea since this is multilevel analysis. I will send my data to the support email, thank you. Linda K. Muthen posted on Tuesday, February 08, 2011 - 11:37 am Please send your output and license number to support@statmodel.com. Paraskevas Petrou posted on Friday, March 18, 2011 - 7:23 am Dear Linda, Returning to point 3 of my first post (see above) I need to find some parameters in my multilevel SEM model (TECH1)and then get some values of the covariance matrix (TECH3) and use them so as to plot a significant interaction effect. In particular, I need the parameters that correspond to the intercept of one dependent variable of my model. Is this the ALPHA? In the Mplus guide you say that ALPHA stands for means and/or intercept of the latent variables. But I want the parameter for intercept (and not mean). I am puzzled because in my output I get ALPHAs also for variables which are only treated as predictors and not outcomes (but are correlated with other variables at the between level). Kind regards, Paris Petrou Linda K. Muthen posted on Friday, March 18, 2011 - 7:50 am All mean and intercept parameters are in either alpha or nu. It does not matter which matrix they are in. Back to top
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=12&page=6681","timestamp":"2014-04-18T03:02:23Z","content_type":null,"content_length":"30307","record_id":"<urn:uuid:18ceaeae-781a-4d18-9565-7188a8ab6b8e>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Westchester SAT Math Tutor Hi! Thank you for considering my tutoring services. I have a diverse background that makes me well suited to help you with your middle school through college level math classes, as well as physics, mechanical engineering, intro computer science and Microsoft Office products. 17 Subjects: including SAT math, physics, calculus, GRE ...I am passionate about learning and guiding students as they work to achieve their goals. I truly enjoy the process of finding the way to communicate most effectively with each individual student. Please contact me if you are interested in personal instruction for any of the fields listed below. 11 Subjects: including SAT math, writing, GMAT, ACT Math ...The more you know how it fits together, the easier it gets. I show a student a way to understand a math problem. I taught high school Latin over 30 years ago. 11 Subjects: including SAT math, calculus, algebra 1, geometry ...I am very good at algebra and can generally be helpful to those students who are highly motivated to improve their skills in this area. I have taken many math courses in my academic career, and I have helped many students get through algebra related topics. However, I am at my best with highly motivated students who seriously want to learn the subject matter. 13 Subjects: including SAT math, statistics, algebra 2, algebra 1 I am certified math teacher. Currently, I work as a substitute teacher at Elmwood Park School District and Morton High Schools in Cicero. I have been tutoring students since 2008 and preparing them for ACT. I have BA in Mathematics and Secondary Education from Northeastern Illinois University. 12 Subjects: including SAT math, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/westchester_sat_math_tutors.php","timestamp":"2014-04-20T21:18:34Z","content_type":null,"content_length":"23971","record_id":"<urn:uuid:683e48b1-040f-42f1-bd56-ec3e998d0af4>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
AS Level mechanics - 2 Particles moving together September 28th 2011, 08:36 AM AS Level mechanics - 2 Particles moving together Please help me with this question, i have no idea how to answer it. A motorcyclist M leaves a road junction at time t=0s. She accelerates at a rate of 3ms-2 for 8s and then maintains the speed she has reached. A car C leaves the same road junction as M at time t= os. The car accelerates from rest to 30ms-1 in 20s and then maintains the speed of 30ms-1. C passes M as they both pass a pedestrian. Find the distance of the pedestrian from the road junction September 28th 2011, 10:25 AM Re: AS Level mechanics - 2 Particles moving together Firstly, lay out UVAST for the acceleration parts of the motorcyclist and car and fill in any gaps using the UVAST equations. So you need to work out the final velocity and distance for the motorcyclist and the acceleration and distance for the car. As we have different times for both the car and motorcyclist, I would suggest working out how far the motorcyclist has travelled after 20 seconds so both are at the same time - makes working out much easier. Don't forget that for the remaining 12 seconds of the motorcyclist it is no longer accelerating but moving at a constant speed. Now you know how far they have both travelled after 20 seconds you can set up an equation like this: Total distance travelled by motorcyclist in 20 seconds + (Velocity x t) = Total distance travelled by car in 20 seconds + (Velocity x t) Don't forget that the velocity is now constant. T represents the number of seconds after 20 seconds that the car and motorcyclist meet. Solve the equation and find t. Add it to 20 seconds and that is the time where they pass the pedestrian. You need the distance, however, so you can form another UVAST list for either the motorcyclist or car to work this out (car should be easier!) for just the constant speed part. Use this to work out the distance but don't forget to add it to the distance for the accelerating part! I hope that that makes sense and that it doesn't sound like a load of waffle! Let me know if you need clarification on any of the points. Good luck! September 28th 2011, 02:54 PM Re: AS Level mechanics - 2 Particles moving together Is that the whole question? It's just that (having recently studied this sort of thing) it seems to me to be directing you towards a speed-time graph. I think you could draw an s-t graph and then work out the area underneath either of the lines, which would tell you the total distance travelled by the car/motorcycle.
{"url":"http://mathhelpforum.com/math-topics/189063-level-mechanics-2-particles-moving-together-print.html","timestamp":"2014-04-19T08:49:45Z","content_type":null,"content_length":"6068","record_id":"<urn:uuid:8f0b94da-59e2-410b-ad8a-dfa55b1b36f1>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Closing text files before quitting level 1 Fortran Mex function Replies: 0 Closing text files before quitting level 1 Fortran Mex function Posted: Jan 29, 2013 5:36 AM I open a text file in my level 1 fortran mex function to write the ouput of the my calculations at each instance it is called. However, I am having trouble closing the file after finishing the simulation in simulink. The sample code is as below SUBROUTINE MEXFUNCTION(NLHS, PLHS, NRHS, PRHS) ! Define all the variables call main(t,u,y) !Called at each instance of the simulation SUBROUTINE MAIN(T,U,Y) ! Perform calculations with t,u,y write(20,*) T !(say) Since I cannot close my input.txt, the next time I run a simulation the simulink crashes saying that it can't open input.txt (unit=20). If I knew where this mexfunction is being called from, I could just put call a terminate function at the end write a simple line saying Can anyone give me advice?
{"url":"http://mathforum.org/kb/thread.jspa?messageID=8184565&tstart=0","timestamp":"2014-04-17T10:41:54Z","content_type":null,"content_length":"14526","record_id":"<urn:uuid:91ba38d2-ed86-4efd-b3e7-f66ccc4873ad>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
You wait all day for a bus… Any and all didn’t appear in Python until version 2.5, released in 2006, when the language was already well into its teens. Why the delay in offering such fundamental functions? An oversight? Or simply that they’re so easy to implement they weren’t thought necessary. Either way, they’re here now. The functions are closely related and complementary. We can define any in terms of all and vice-versa. def any_(xs): return not all(map(operator.not_, xs)) def all_(xs): return not any(map(operator.not_, xs)) C++ reached its 30s before introducing its own versions of these logical algorithms, any_of and all_of, but made up for lost time by finding room for a third, none_of, which is not any_of. template <class Iter, class Pred> bool none_of_(Iter b, Iter e, Pred p) return std::find_if(b, e, p) == e; template <class Iter, class Pred> bool any_of_(Iter b, Iter e, Pred p) return !none_of_(b, e, p); template <class Iter, class Pred> bool all_of_(Iter b, Iter e, Pred p) return !any_of_(b, e, std::not1(p));
{"url":"http://wordaligned.org/articles/all-any-none","timestamp":"2014-04-21T12:09:12Z","content_type":null,"content_length":"5726","record_id":"<urn:uuid:ce723cce-03ae-4b39-a83c-db138e2a7f9d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
Ioannidis on effect size inflation, with guest appearance by Bozo the Clown Andrew Gelman posted a link on his blog today to a paper by John Ioannidis I hadn’t seen before. In many respects, it’s basically the same paper I wrote earlier this year as a commentary on the Vul et al “voodoo correlations” paper (the commentary was itself based largely on an earlier chapter I wrote with my PhD advisor, Todd Braver). Well, except that the Ioannidis paper came out a year earlier than mine, and is also much better in just about every respect (more on this below). What really surprises me is that I never came across Ioannidis’ paper when I was doing a lit search for my commentary. The basic point I made in the commentary–which can be summarized as the observation that low power coupled with selection bias almost invariably inflates significant effect sizes–is a pretty straightforward statistical point, so I figured that many people, and probably most statisticians, were well aware of it. But no amount of Google Scholar-ing helped me find an authoritative article that made the same point succinctly; I just kept coming across articles that made the point tangentially, in an off-hand “but of course we all know we shouldn’t trust these effect sizes, because…” kind of way. So I chalked it down as one of those statistical factoids (of which there are surprisingly many) that live in the unhappy land of too-obvious-for-statisticians-to-write-an-article-about-but-not-obvious-enough-for-most-psychologists-to-know-about. And so I just went ahead and wrote the commentary in a non-technical way that I hoped would get the point across intuitively. Anyway, after the commentary was accepted, I sent a copy to Andrew Gelman, who had written several posts about the Vul et al controversy. He promptly send me back a link to this paper of his, which basically makes the same point about sampling error, but with much more detail and much better examples than I did. His paper also cites an earlier article in American Scientist by Wainer, which I also recommend, and again expresses very similar ideas. So then I felt a bit like a fool for not stumbling across either Gelman’s paper or Wainer’s earlier. And now that I’ve read Ioannidis’ paper, I feel even dumber, seeing as I could have saved myself a lot of trouble by writing two or three paragraphs and then essentially pointing to Ioannidis’ work. Oh well. That all said, it wasn’t a complete loss; I still think the basic point is important enough that it’s worth repeating loudly and often, no matter how many times it’s been said before. And I’m skeptical that many fMRI researchers would have appreciated the point otherwise, given that none of the papers I’ve mentioned were published in venues fMRI researchers are likely to read regularly (which is presumably part of the reason I never came across them!). Of course, I don’t think that many people who do fMRI research actually bothered to read my commentary, so it’s questionable whether it had much impact anyway. At any rate, the Ioannidis paper makes a number of points that my paper didn’t, so I figured I’d talk about them a bit. I’ll start by revisiting what I said in my commentary, and then I’ll tell you why you should read Ioannidis’ paper instead of mine. The basic intuition can be captured as follows. Suppose you’re interested in the following question: Do clowns suffer depression at a higher rate than us non-comical folk do? You might think this is a contrived (to put it delicately) question, but I can assure you it has all sorts of important real-world implications. For instance, you wouldn’t be so quick to book a clown for your child’s next birthday party if you knew that The Great Mancini was going to be out in the parking lot half an hour later drinking cheap gin out of a top hat. If that example makes you feel guilty, congratulations: you’ve just discovered the translational value of basic science. Anyway, back to the question, and how we’re going to answer it. You can’t just throw a bunch of clowns and non-clowns in a room and give them a depression measure. There’s nothing comical about that. What you need to do, if you’re rigorous about it, is give them multiple measures of depression, because we all know how finicky individual questionnaires can be. So the clowns and non-clowns each get to fill out the Beck Depression Inventory (BDI), the Center for Epidemiologic Studies Depression Scale, the Depression Adjective Checklist, the Zung Self-Rating Depression Scale (ZSRDS), and, let’s say, six other measures. Ten measures in all. And let’s say we have 20 individuals in each group, because that’s all I personally a cash-strapped but enthusiastic investigator can afford. After collecting the data, we score the questionnaires and run a bunch of t-tests to determine whether clowns and non-clowns have different levels of depression. Being scrupulous researchers who care a lot about multiple comparisons correction, we decide to divide our critical p-value by 10 (the dreaded Bonferroni correction, for 10 tests in this case) and test at p < .005. That’s a conservative analysis, of course; but better safe than sorry! So we run our tests and get what look like mixed results. Meaning, we get statistically significant positive correlations between clown-dom status and depression for 2 measures–the BDI and Zung inventories–but not for the other 8 measures. So that’s admittedly not great; it would have been better if all 10 had come out right. Still, it at least partially supports our hypothesis: Clowns are fucking miserable! And because we’re already thinking ahead to how we’re going to present these results when they (inevitably) get published in Psychological Science, we go ahead and compute the effect sizes for the two significant correlations, because, after all, it’s important to know not only that there is a “real” effect, but also how big that effect is. When we do that, it turns out that the point-biserial correlation is huge! It’s .75 for the BDI and .68 for the ZSRDS. In other words, about half of the variance in clowndom can be explained by depression levels. And of course, because we’re well aware that correlation does not imply causation, we get to interpret the correlation both ways! So we quickly issue a press release claiming that we’ve discovered that it’s possible to conclusively diagnose depression just by knowing whether or not someone’s a clown! (We’re not going to worry about silly little things like base rates in a press release.) Now, this may all seem great. And it’s probably not an unrealistic depiction of how much of psychology works (well, minus the colorful scarves, big hair, and face paint). That is, very often people report interesting findings that were selectively reported from amongst a larger pool of potential findings on the basis of the fact that the former but not the latter surpassed some predetermined criterion for statistical significance. For example, in our hypothetical in press clown paper, we don’t bother to report results for the correlation between clownhood and the Center for Epidemiologic Studies Depression Scale (r = .12, p > .1). Why should we? It’d be silly to report a whole pile of additional correlations only to turn around and say “null effect, null effect, null effect, null effect, null effect, null effect, null effect, and null effect” (see how boring it was to read that?). Nobody cares about variables that don’t predict other variables; we care about variables that do predict other variables. And we’re not really doing anything wrong, we think; it’s not like the act of selective reporting is inflating our Type I error (i.e., the false positive rate), because we’ve already taken care of that up front by deliberately being overconservative in our analyses. Unfortunately, while it’s true that our Type I error doesn’t suffer, the act of choosing which findings to report based on the results of a statistical test does have another unwelcome consequence. Specifically, there’s a very good chance that the effect sizes we end up reporting for statistically significant results will be artificially inflated–perhaps dramatically so. Why would this happen? It’s actually entailed by the selection procedure. To see this, let’s take the classical measurement model, under which the variance in any measured variable reflects the sum of two components: the “true” scores (i.e., the scores we would get if our measurements were always completely accurate) and some random error. The error term can in turn be broken down into many more specific sources of error; but we’ll ignore that and just focus on one source of error–namely, sampling error. Sampling error refers to the fact that we can never select a perfectly representative group of subjects when we collect a sample; there’s always some (ideally small) way in which the sample characteristics differ from the population. This error term can artificially inflate an effect or artificially deflate it, and it can inflate or deflate it more or less, but it’s going to have an effect one way or the other. You can take that to the bank as sure as my name’s Bozo the Clown. To put this in context, let’s go back to our BDI scores. Recall that what we observed is that clowns have higher BDI scores than non-clowns. But what we’re now saying is that that difference in scores is going to be affected by sampling error. That is, just by chance, we may have selected a group of clowns that are particularly depressed, or a group of non-clowns who are particularly jolly. Maybe if we could measure depression in all clowns and all non-clowns, we would actually find no difference between groups. Now, if we allow that sampling error really is random, and that we’re not actively trying to pre-determine the outcome of our study by going out of our way to recruit The Great Depressed Mancini and his extended dysthymic clown family, then in theory we have no reason to think that sampling error is going to introduce any particular bias into our results. It’s true that the observed correlations in our sample may not be perfectly representative of the true correlations in the population; but that’s not a big deal so long as there’s no systematic bias (i.e., that we have no reason to think that our sample will systematically inflate correlations or deflate them). But here’s the problem: the act of choosing to report some correlations but not others on the basis of their statistical significance (or lack thereof) introduces precisely such a bias. The reason is that, when you go looking for correlations that are of a certain size or greater, you’re inevitably going to be more likely to select those correlations that happen to have been helped by chance than hurt by it. Here’s a series of figures that should make the point even clearer. Let’s pretend for a moment that the truth of the matter is that there is in fact a positive correlation between clown status and all 10 depression measures. Except, we’ll make it 100 measures, because it’ll be easier to illustrate the point that way. Moreover, let’s suppose that the correlation is exactly the same for all 100 measures, at .3. Here’s what that would look like if we just plotted the correlations for all 100 measures, 1 through 100: It’s just a horizontal red line, because all the individual correlations have the same value (0.3). So that’s not very exciting. But remember, these are the population correlations. They’re not what we’re going to observe in our sample of 20 clowns and 20 non-clowns, because depression scores in our sample aren’t a perfect representation of the population. There’s also error to worry about. And error–or at least, sampling error–is going to be greater for smaller samples than for bigger ones. (The reason for this can be expressed intuitively: other things being equal, the more observations you have, the more representative your sample must be of the population as a whole, because deviations in any given direction will tend to cancel each other out the more data you collect. And if you keep collecting, at the limit, your sample will constitute the whole population, and must therefore by definition be perfectly representative). With only 20 subjects in each group, our estimates of each group’s depression level are not going to be terrifically stable. You can see this in the following figure, which shows the results of a simulation on 100 different variables, assuming that all have an identical underlying correlation of .3: Notice how much variability there is in the correlations! The weakest correlation is actually negative, at -.18; the strongest is much larger than .3, at .63. (Caveat for more technical readers: this assumes that the above variables are completely independent, which in practice is unlikely to be true when dealing with 100 measures of the same construct.) So even though the true correlation is .3 in all cases, the magic of sampling will necessarily produce some values that are below .3, and some that are above .3. In some cases, the deviations will be substantial. By now you can probably see where this is going. Here we have a distribution of effect sizes that to some extent may reflect underlying variability in population effect sizes, but is also almost certainly influenced by sampling error. And now we come along and decide that, hey, it doesn’t really make sense to report all 100 of these correlations in a paper; that’s too messy. Really, for the sake of brevity and clarity, we should only report those correlations that are in some sense more important and “real”. And we do that by calculating p-values and only reporting the results of tests that are significant at some predetermined level (in our case, p < .005). Well, here’s what that would look like: This is exactly the same figure as the previous one, except we’ve now grayed out all the non-significant correlations. And in the process, we’ve made Bozo the Clown cry: Why? Because unfortunately, the criterion that we’ve chosen is an extremely conservative one. In order to detect a significant difference in means between two groups of 20 subjects at p < .005, the observed correlation (depicted as the horizontal black line above) needs to be .42 or greater! That’s substantially larger than the actual population effect size of .3. Effects of this magnitude don’t occur very frequently in our sample; in fact, they only occur 16 times. As a result, we’re going to end up failing to detect 84 of 100 correlations, and will walk away thinking they’re null results–even though the truth is that, in the population, they’re actually all pretty strong, at .3. This quantity–the proportion of “real” effects that we’re likely to end up calling statistically significant given the constraints of our sample–is formally called statistical power. If you do a power analysis for a two-sample t-test on a correlation of .3 at p < .005, it turns out that power is only .17 (which is essentially what we see above; the slight discrepancy is due to chance). In other words, even when there are real and relatively strong associations between depression and clownhood, our sample would only identify those associations 17% of the time, on average. That’s not good, obviously, but there’s more. Now the other shoe drops, because not only have we systematically missed out on most of the effects we’re interested in (in virtue of using small samples and overly conservative statistical thresholds), but notice what we’ve also done to the effect sizes of those correlations that we do end up identifying. What is in reality a .3 correlation spuriously appears, on average, as a .51 correlation in the 16 tests that surpass our threshold. So, through the combined magic of low power and selection bias, we’ve turned what may in reality be a relatively diffuse association between two variables (say, clownhood and depression) into a seemingly selective and extremely strong association. After all the excitement about getting a high-profile publication, it might ultimately turn out that clowns aren’t really so depressed after all–it’s all an illusion induced by the sampling apparatus. So you might say that the clowns get the last laugh. Or that the joke’s on us. Or maybe just that this whole clown example is no longer funny and it’s now time for it to go bury itself in a hole somewhere. Anyway, that, in a nutshell, was the point my commentary on the Vul et al paper made, and it’s the same point the Gelman and Wainer papers make too, in one way or another. While it’s a very general point that really applies in any domain where (a) power is less than 100% (which is just about always) and (b) there is some selection bias (which is also just about always), there were some considerations that were particularly applicable to fMRI research. The basic issue is that, in fMRI research, we often want to conduct analyses that span the entire brain, which means we’re usually faced with conducting many more statistical comparisons than researchers in other domains generally deal with (though not, say, molecular geneticists conducting genome-wide association studies). As a result, there is a very strong emphasis in imaging research on controlling Type I error rates by using very conservative statistical thresholds. You can agree or disagree with this general advice (for the record, I personally think there’s much too great an emphasis in imaging on Type I error, and not nearly enough emphasis on Type II error), but there’s no avoiding the fact that following it will tend to produce highly inflated significant effect sizes, because in the act of reducing p-value thresholds, we’re also driving down power dramatically, and making the selection bias more While it’d be nice if there was an easy fix for this problem, there really isn’t one. In behavioral domains, there’s often a relatively simple prescription: report all effect sizes, both significant and non-significant. This doesn’t entirely solve the problem, because people are still likely to overemphasize statistically significant results relative to non-significant ones; but at least at that point you can say you’ve done what you can. In the fMRI literature, this course of action isn’t really available, because most journal editors are not going to be very happy with you when you send them a 25-page table that reports effect sizes and p-values for each of the 100,000 voxels you tested. So we’re forced adopt other strategies. The one I’ve argued for most strongly is to increase sample size (which increases power and decreases the uncertainty of resulting estimates). But that’s understandably difficult in a field where scanning each additional subject can cost $1,000 or more. There are a number of other things you can do, but I won’t talk about them much here, partly because this is already much too long a post, but mostly because I’m currently working on a paper that discusses this problem, and potential solutions, in much more detail. So now finally I get to the Ioannidis article. As I said, the basic point is the same one made in my paper and Gelman’s and others, and the one I’ve described above in excruciating clownish detail. But there are a number of things about the Ioannidis that are particularly nice. One is that Ioannidis considers not only inflation due to selection of statistically significant results coupled with low power, but also inflation due to the use of flexible analyses (or, as he puts it, “vibration” of effects–also known as massaging the data). Another is that he considers cultural aspects of the phenomenon, e.g., the fact that investigators tend to be rewarded for reporting large effects, even if they subsequently fail to replicate. He also discusses conditions under which you might actually get deflation of effect sizes–something I didn’t touch on in my commentary, and hadn’t really thought about. Finally, he makes some interesting recommendations for minimizing effect size inflation. Whereas my commentary focused primarily on concrete steps researchers could take in individual studies to encourage clearer evaluation of results (e.g., reporting confidence intervals, including power calculations, etc.), Ioannidis focuses on longer-term solutions and the possibility that we’ll need to dramatically change the way we do science (at least in some fields). Anyway, this whole issue of inflated effect sizes is a critical one to appreciate if you do any kind of social or biomedical science research, because it almost certainly affects your findings on a regular basis, and has all sorts of implications for what kind of research we conduct and how we interpret our findings. (To give just one trivial example, if you’ve ever been tempted to attribute your failure to replicate a previous finding to some minute experimental difference between studies, you should seriously consider the possibility that the original effect size may have been grossly inflated, and that your own study consequently has insufficient power to replicate the effect.) If you only have time to read one article that deals with this issue, read the Ioannidis paper. And remember it when you write your next Discussion section. Bozo the Clown will thank you for it. Ioannidis, J. (2008). Why Most Discovered True Associations Are Inflated Epidemiology, 19 (5), 640-648 DOI: 10.1097/EDE.0b013e31818131e7 Yarkoni, T. (2009). Big Correlations in Little Studies: Inflated fMRI Correlations Reflect Low Statistical Power-Commentary on Vul et al. (2009) Perspectives on Psychological Science, 4 (3), 294-298 DOI: 10.1111/j.1745-6924.2009.01127.x 11 thoughts on “Ioannidis on effect size inflation, with guest appearance by Bozo the Clown” 1. Hi Tal, Great post. I agree this is a big issue in neuroscience research given the exploratory nature of the research and the small samples typically used. A few random thoughts that I had: 1. If the journal does not have space to report all effect sizes, report the effect sizes available in an online supplement. 2. Distinguish exploratory from confirmatory analyses. Limit the confirmatory analyses to a small set. Then acknowledge the tentative and biased nature of the larger set of exploratory analyses. 3. Make the inferential approach for dealing with multiplicities in the data clear. 4. Consider the use of a big-picture to detailed approach. For example, if you are interested in the difference between clown and non-clown depression and you have multiple depression measures, take one measure to be the main test. This could be a composite of the measures used or perhaps the most reliable or valid measure. In other contexts this might involve testing various alternative models of the relationship between a set of predictors and an outcome. One model could be an overall null hypothesis where all correlations are constrained to be zero. A second model could constrain all correlations to be equal, but non-zero. Additional models could test other particular patterns of interest. The fit of each model in absolute and relative terms could then be evaluated. 2. Don’t feel too bad – I’m an Ioannidis fan and I’d never heard of this paper before either! The problem of effect size inflation is pretty easy to understand but as you say, solving it is hard. Still I wonder whether, in the case of fMRI specifically, there isn’t a possible solution… “While it’d be nice if there was an easy fix for this problem, there really isn’t one. In behavioral domains, there’s often a relatively simple prescription: report all effect sizes, both significant and non-significant. This doesn’t entirely solve the problem, because people are still likely to overemphasize statistically significant results relative to non-significant ones; but at least at that point you can say you’ve done what you can. In the fMRI literature, this course of action isn’t really available, because most journal editors are not going to be very happy with you when you send them a 25-page table that reports effect sizes and p-values for each of the 100,000 voxels you tested.” It’s true that no journal is going to publish all of the data from an fMRI experiment but – do we need journals to report fMRI results? Why couldn’t we make all of our results available online? Suppose you look for a correlation between personality and neural activity in while looking at pictures of clowns. You take 10 personality measures and correlate each one with activity in every voxel. You apply a ridiculously conservative Bonferroni correction and you find some blobs where activity correlates very strongly with personality. Now if you publish this in a journal, you’re only going to report those blobs where the effect size is very high. Because you will be applying a very conservative threshold. That’s the problem that Ioannidis and you identified. But what you actually have in this example is 10 3D brain volumes where higher values mean more correlation. Why not make those 3D images your results, and leave it to the reader to threshold them? If the reader then wants to threshold them conservatively they can do so, knowing (hopefully) not to take the effect sizes too seriously, then if they want they can calculate the correlations in particular areas using different thresholds or no thresholds at all… all of which will be appropriate for certain purposes. It’s always struck me as odd that we publish fMRI data in journals in exactly the same way as 19th century scientists published their results. 1. Hi Neuroskeptic, Thanks for the comments and compliment! It’s true that no journal is going to publish all of the data from an fMRI experiment but – do we need journals to report fMRI results? Why couldn’t we make all of our results available I totally agree with this idea, and it’s actually something I talk about as really being the ultimate long-term solution when I give talks on this stuff (and also in the paper I’m working on). Ultimately, the only (or at least, best) way to solve the problem is to dump all data, significant or not, in a massive database somewhere. But the emphasis in my post was on “easy” fixes, which this certainly isn’t. I think the main problem isn’t a technical one–there already are online databases (e.g., BrainMap and SuMS DB) that could in theory be modified to handle complete maps instead of coordinates–it’s a sociological one. It’s going to be pretty difficult to convince researchers to (a) allow their raw data to be submitted to a repository and (b) actually convince them to do it themselves (since it would inevitably entail filling out a bunch of meta-data for each map). I’m not saying this isn’t the way to go, just that it’s not going to be an easy sell. The Journal of Cognitive Neuroscience already experimented with something like this back when they required authors to submit their raw data along with manuscripts. I think the general verdict is that it was a nice idea that didn’t really work in practice, because (a) it was a pain in the ass to process, (b) authors weren’t happy with it, and (c) hardly anyone actually requested or used the data. These aren’t insurmountable problems, and to some extent they’d be ameliorated by having an online database rather than an off-line one, but nonetheless, I think there are some big challenges involved… 3. Hi Tal, That’s a nice write-up, but I’m wondering why you think there isn’t a practical “solution”? A whole-sale solution seems unlikely — publication bias will always be there in one form or another — but the huge exacerbation of this inflation that arises in fMRI (due to the sheer dimensionality of the data) can be solved: Use one portion of the data to reduce the dimensionality to a much smaller set of ROIs, and the other part of the data to assess the effect size (either using independent localizers, or using cross-validation methods). Using independent localizers to reduce the dimensionality of fMRI data (by several orders of magnitude) will go a long way to minimizing the problem: It will still be the case that the reported significant effects in any given ROI will be inflated, like all significant effects are inflated, but much less so. Moreover, if the dimensionality of fMRI is reduced sufficiently, there will only be a few ROIs, and reporting confidence intervals on non-significant effects ceases to be wildly impractical… 1. Hi Ed, Thanks for the comment. I think cross-validation and independent localization are generally a good idea, but I’m not sure they help much with inflation caused by sampling error (as opposed to measurement error). Actually, there’s two issues. One is that power takes a hit any time you reduce the data. So in the case of individual differences in particular, I think it’s usually a bad idea to take a sample of, say, 20 subjects and split it in half, because (a) power in one half of the sample is going to be so low you won’t detect anything and (b) uncertainty in the other half is going to be so great you won’t be able to replicate anything anyway. But if you have a sample of, say, 500 people, and you’re not dealing with tiny effects, then sure, this is a good way to go (and is something I do when I have behavioral samples that big to work with). Doing cross-validation or independent localization within subjects is less of an issue from a power perspective. The problem here though is that you don’t necessarily reduce inflation that’s caused by sampling error. You can allow that you’re measuring each individual’s true score with perfect reliability and still have massive inflation caused by low power/selection bias. And cross-validation really won’t help you in that case. You could do even/odd runs or localizers or leave-one-out analyses, and the fundamental problem remains that people’s scores could be more or less the same across all permutations of the data and still grossly unrepresentative of the population distribution. That doesn’t mean we shouldn’t do these types of analyses, since they do nicely control for other types of error; but I’m not at all sure they do much to solve the particular problem I’m talking about. 4. “The Journal of Cognitive Neuroscience already experimented with something like this back when they required authors to submit their raw data along with manuscripts.” Ah, I wasn’t aware of that. But that’s not quite what I was suggesting. raw data is not very interesting especially when it’s fMRI data which will take you at a minimum hours to analyze. I’d want the final analyzed statistical parametric maps made available. And ideally all of the intermediate steps which the data went through on its journal from raw data to final results. My ultimate fantasy solution would be to have one massive supercomputer where everyone in the world uploads their data and does all their analysis, and everyone can see what everyone else is doing and everything they’ve ever done. This is not going to happen… but I think this is the ideal – total openness about what we do. Because frankly as scientists (and especially as scientists paid by the taxpayer!) we should be completely open about what we do, there shouldn’t be any “file drawers” at all.
{"url":"http://www.talyarkoni.org/blog/2009/11/21/ioannidis-on-effect-size-inflation-with-guest-appearance-by-bozo-the-clown/","timestamp":"2014-04-16T05:35:22Z","content_type":null,"content_length":"87051","record_id":"<urn:uuid:3452d0c6-415a-4493-bac1-82a36d3eda68>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
Inclusion and Neighborhood Properties for Certain Classes of Multivalently Analytic Functions Journal of Complex Analysis Volume 2013 (2013), Article ID 754598, 5 pages Research Article Inclusion and Neighborhood Properties for Certain Classes of Multivalently Analytic Functions Civil Aviation College, Kocaeli University, Arslanbey Campus, İzmit, 41285 Kocaeli, Turkey Received 10 May 2013; Accepted 5 October 2013 Academic Editor: Lianzhong Yang Copyright © 2013 Serap Bulut. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We introduce and investigate two new general subclasses of multivalently analytic functions of complex order by making use of the familiar convolution structure of analytic functions. Among the various results obtained here for each of these function classes, we derive the coefficient inequalities and other interesting properties and characteristics for functions belonging to the classes introduced here. 1. Introduction and Definitions Let be the set of real numbers, let be the set of complex numbers, let be the set of positive integers, and let . Let denote the class of functions of the form which are analytic and -valent in the open unit disk Denote by the Hadamard product (or convolution) of the functions and ; that is, if is given by (1) and is given by then Definition 1. Let the function . Then one says that is in the class if it satisfies the condition where is given by (3), and denotes the falling factorial defined as follows: Various special cases of the class were considered by many earlier researchers on this topic of Geometric Function Theory. For example, reduces to the function class(i) for , and , studied by Mostafa and Aouf [1];(ii)for and , studied by Srivastava et al. [2];(iii) for , and , studied by Prajapat et al. [3];(iv) for , and , studied by Srivastava and Bulut [4];(v)for , , , and , studied by Ali et al. [5]. Definition 2. Let the function . Then one says that is in the class if it satisfies the condition where and are defined by (3) and (6), respectively. Setting , in Definition 2, we have the special class (which generalizes the class defined by Prajapat et al. [3]) introduced by Srivastava et al. [2]. Following a recent investigation by Frasin and Darus [6], if and , then we define the -neighborhood of the function by It follows from the definition (9) that if then The main object of this paper is to investigate the various properties and characteristics of functions belonging to the above-defined classes Apart from deriving coefficient bounds and coefficient inequalities for each of these classes, we establish several inclusion relationships involving the -neighborhoods of functions belonging to the general classes which are introduced above. 2. Coefficient Bounds and Coefficient Inequalities We begin by proving a necessary and sufficient condition for the function to be in each of the classes Theorem 3. Let the function be given by (1). Then is in the class if and only if where Proof. We first suppose that the function given by (1) is in the class . Then, in view of (3)–(6), we have or equivalently If we choose to be real and let , we arrive easily at the inequality (14). Conversely, we suppose that the inequality (14) holds true and let Then, we find that Hence, by the Maximum Modulus Theorem, we have which evidently completes the proof of Theorem 3. Remark 4. If we set and in Theorem 3, then we have [2, Theorem 1]. Lemma 5. Let the function given by (1) be in the class . Then, for , one has where is defined by (15). Proof. Let . Then, in view of the assertion (14), we have Furthermore, by rewriting the assertion (14) as follows: we obtain Similar to Theorem 3, we can prove the following result. Theorem 6. Let the function be given by (1). Then is in the class if and only if where is defined by (15). Remark 7. If we set and in Theorem 6, then we have [2, Theorem 2]. Lemma 8. Let the function given by (1) be in the class . Then, for , one has where is defined by (15). Proof. Let . Then, in view of the assertion (26), we have Furthermore, we also have from the assertion (26) 3. A Set of Inclusion Relationships In this section, we determine inclusion relations for the classes involving -neighborhoods defined by (9) and (11). Theorem 9. If and then where and are given by (10) and (15), respectively. Proof. The inclusion relation (33) would follow readily from the definition (11) and the assertion (22). Remark 10. If we set and in Theorem 9, then we have [2, Theorem 3]. Theorem 11. If and then where and are given by (10) and (15), respectively. Proof. The inclusion relation (35) would follow readily from the definition (11) and the assertion (28). Remark 12. If we set and in Theorem 11, then we have [2, Theorem 4]. 4. Neighborhood Properties In this section, we determine the neighborhood properties for each of the function classes which are defined as follows. Definition 13. A function is said to be in the class if there exists a function such that Definition 14. A function is said to be in the class if there exists a function such that the inequality (37) holds true. Setting in Definitions 13 and 14, we have the special classes introduced by Srivastava et al. [2], respectively. Theorem 15. If and then where is defined by (15). Proof. Suppose that . Then we find from (9) that which readily implies that Since , we find from (21) that so that where is given by (39). Thus, by Definition 13, . This completes the proof of Theorem 15. Remark 16. If we set and in Theorem 15, then we have [2, Theorem 5]. The proof of Theorem 17 (based upon Definition 14) is similar to that of Theorem 15. Therefore we omit the details involved. Theorem 17. If and then where is defined by (15). Remark 18. If we set and in Theorem 17, then we have [2, Theorem 6]. The present investigation was supported by the Kocaeli University under Grant HD 2011/22.
{"url":"http://www.hindawi.com/journals/jca/2013/754598/","timestamp":"2014-04-18T17:10:32Z","content_type":null,"content_length":"424535","record_id":"<urn:uuid:a1c1f5b8-5968-407d-a3fd-f189cc5882c4>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
Paul Gustav Samuel Stäckel Born: 20 August 1862 in Berlin, Prussia (now Germany) Died: 12 December 1919 in Heidelberg, Germany Click the picture above to see a larger version Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index Paul Stäckel was the only son of Marie Elisabeth Ringe and her husband Ernst Gustav Stäckel. Marie Stäckel died early in her son's life. Ernst Stäckel was the head of a secondary school for girls in the city, as well as being a school inspector, so he clearly knew the value of education and wanted the best for his son. Consequently, in 1871, the young Stäckel was sent to Berlin's Joachimsthalsches Gymnasium [2]:- ... an educational establishment which enjoyed a good reputation ... established in 1607. Stäckel applied himself well at school, though he seems to have had no particular inclination for mathematics until his seventh year. Even when he was in the equivalent of the sixth form, he was still undecided about his future path in life, eventually narrowing it down to a choice between studying law or mathematics. That mathematics was the path he eventually chose was largely due to the influence of two of his teachers, Mr Schindler and Mr Seebeck, who succeeded in further arousing his interest in and enthusiasm for the subject. Stäckel's reference, written at the end of his schooling in 1880, talks of [3]:- ... a lively interest in mathematics, outstanding achievements [with] a clear understanding [of mathematical and physical concepts]. His exam results were described as superb. In 1866, when Stäckel was four years old, the political situation became somewhat volatile. The Prussian leader, Otto von Bismarck, provoked Austria into declaring war on Prussia. Against expectations, Prussia was victorious, winning what is now referred to as the Austro-Prussian War. Four years later, Bismarck engineered what is now known as the Franco-Prussia War which was won resoundingly by Prussia. Bismarck now moved to form a union of the states with the Prussian King Wilhelm I as Kaiser. Although the Kaiser was in overall control, the states largely ruled themselves and the political situation was reasonably stable. This was the background against which Stäckel lived, the balance only being disturbed late on in his life by the outbreak of World War I. However, this was far in the future as Stäckel left school and set off on his chosen career path. Having chosen to undertake further study, Stäckel entered the University of Berlin and attended lectures, primarily in mathematics and physics, but also some in philosophy, psychology, history and educational theory. Among his teachers were the mathematicians Kronecker, Kummer, Wangerin and Weierstrass, the latter seemingly having the biggest influence in the early years of his career. As a student, Stäckel did not go unnoticed, his tutors soon recognising both his talent and his diligence. Stäckel became a member of the university Mathematische Verein, a mathematical society for interested students to meet and discuss the latest developments in the subject. At one point he served as its president. Stäckel completed his degree in 1884 and gained his PhD a year later with a thesis entitled The Motion of a Particle Across a Surface which was inspired by the work of, among others, Euler, Lagrange and Jacobi. At this point, Stäckel's intention was to become a teacher, so in 1886 he took and passed his teaching exams, specialising in mathematics and physics. It was compulsory for young men to undertake one year of military service and Stäckel completed this in his home city from October 1886 to September 1887. The following twelve months were spent completing the necessary probationary teaching year at the Königlichen Wilhelms-Gymnasium, also in Berlin. After the probationary period was over, he remained at the school for a further two years as a science teaching assistant. We know from the official evaluation report that his time at the school was successful and he had a good rapport with the students. It is uncertain whether Stäckel [2]:- ... felt under-challenged as a teacher ... or whether there were other factors, however he chose to turn his back on teaching and instead opted for the uncertain path of academia. In 1891, Stäckel's habilitation thesis, Integration of Hamiltonian-Jacobian differential equations by means of separation of variables, was accepted by the University of Halle, near Leipzig, and he took up a lectureship there. The move to Halle in February was the first of three defining events in Stäckel's life in that year. He also became a member of the German Mathematical Society, the newly-formed society for German mathematicians, in which he would play an important role over the years, including serving as president. The third event was of a more personal nature. On 6 May 1891, Stäckel married his fiancée, (Alwine Eleonore) Elisabeth Lüdecke, in a ceremony in her home town of Wittenberg. During the time in Halle, the marriage produced a daughter, Hildegard, and a son, Walter, born in January 1893. A second son, Gerhard (Gerd), followed in 1898. Stäckel thrived during his time at Halle, publishing numerous papers, mainly on topics in analysis, mechanics and differential geometry. His efforts did not go unnoticed; Hilbert, who had had dealings with Stäckel, wrote in a letter to Klein that [5]:- ... among the younger lecturers, Stäckel distinguishes himself through his enthusiasm and activity. Working in the department at the time was Cantor and this undoubtedly had an influence on Stäckel and some of the directions his mathematical research took. Being so close to Leipzig, the University of Halle had strong links with the University of Leipzig, including a joint mathematics colloquium which met several times a year. It was here that Stäckel became acquainted with Engel, a professor at Leipzig. The two became close friends and exchanged a great many letters; about one thousand are preserved in the archives of the University of Giessen. Later, the two collaborated on the study of non-Euclidean geometry, as well as research into the history of mathematics, perhaps most notably collaborating on the publication of the complete works of Stäckel's stay in Halle lasted until 1895 when he was called to take up the post of associate professor at the University of Königsberg as a successor to Minkowski, who had himself recommended Stäckel for the post, having been impressed by his work at Halle. In fact, Stäckel's stay in Königsberg lasted little more than eighteen months, though he earned a great deal of respect during that time. In 1897, a full professorship became available at the University of Kiel, and Stäckel was invited to fill the post. He remained there for the next eight years, during which he produced about five works per year, including some on the teaching of mathematics, in which he still retained an interest. See an article on Stäckel's teaching for more information on his interest in mathematical education. He developed strong ties in Kiel, including having a house built for him and his family, and it took something special for him to break those ties. That something was the offer of the prestigious Chair of Mathematics in Hannover, whose previous holders included Hilbert and Cantor. Stäckel accepted the post and spent three years there, receiving various honours in that time. At the beginning of 1908, the Chair of Mathematics became available at Karlsruhe in the state of Baden in southern Germany. Stäckel had been offered the same post six years earlier, but had declined, citing the ties he had in Kiel. When the post became vacant again, Stäckel's name was once more top of the list. A report by the university academic appointments committee explains their choice of candidate for the post:- Stäckel possesses a great teaching ability, eloquence and agreeable freshness of manner. [...] He has held lectures on the most varying areas of mathematics, has shown a great interest for its application, is a very pleasant person and would be a credit to the university. Clearly he was rated very highly, but this was still a huge decision for him to move with his family such a long way from home. The deciding factor seems to have been his being granted the title Privy Councillor to the Grand Duke of Baden, conditional on accepting the post in Karlsruhe. Stäckel's time in Karlsruhe may be seen as the high point of his international work and he was certainly very active during the five years he spent there. Alongside his teaching and research duties, he was a member of numerous committees and councils, including the Baden Teaching Committee, the Baden Schools Examinations Council and the Commission for the Encyclopaedia of Mathematical Sciences. He was also rector of the university for the academic year 1910-11. By this time, Stäckel's renown had stretched not just to other German states, but also to France and Switzerland. He commanded great respect from fellow mathematicians and had many high-profile contacts abroad. We have seen that he was extremely industrious, as well as being very talented. He was also a real polyglot, able to write Latin, speak French and Spanish, and understand basic Italian, English and Russian. He showed a real enthusiasm for mathematics: when he spoke of these things, a particular radiance shone from him. Stäckel was also renowned among his students for delivering new sets of lectures every academic year. Stäckel's final posting was to the University of Heidelberg, also in Baden. In 1912, the Baden state parliament decided to create a second mathematics chair at the university. First on their list of candidates was Landau at Göttingen, but he turned down the offer, not wanting to leave the comfort of his current post in order to have the relatively major task of creating the new position in Heidelberg. The post was then offered to Stäckel, who relished such a challenge and threw himself into it wholeheartedly. In fact, during his negotiation with the academic committee prior to his appointment, he secured certain guarantees with regard to his plans for the department which he was able to call upon later. His energy and enthusiasm, as well as his organisational skills, meant he was able to completely transform the department of mathematics. The use of one of the university buildings was secured and this was turned into a separate Mathematical Institute, including two lecture theatres, seminar rooms and a separate library. Stäckel also secured the services of a mathematics assistant. World War I broke out in Europe in 1914 and, though he was in his fifties, Stäckel saw it as his duty to support his country and joined the army as an officer, whilst still continuing with his university commitments. In fact, he was able to secure more and more funding for the department from the university, despite the constraints the war had brought. The war affected Stäckel on a personal level with the death of his sixteen year old son, Gerd, who had been serving as an officer cadet. Gerd was killed in April 1915 near Arras, now in France. This was a hard blow for Stäckel; he wrote in reply to a letter of consolation from his friend Engel [5]: Many thanks for your warm condolences on the loss, which has now affected us, like so many, many families. Here in Heidelberg, the sons of 18 professors have fallen up to now, and who knows what more is to come. Certainly, my brave son would say: don't grieve, my fate has been a noble one. But when I think how he went out, in a few months matured from a boy into a solid, confident man, in seriousness and happiness and how all the blossoms of the great and good have now been destroyed, then the pain breaks through with renewed force ... Stäckel's thoughts on the loss of his son echo those of many thousands of parents, wives and children who lost loved ones during the Great War. Considering the circumstances, it is quite remarkable that Stäckel was able to remain mathematically so productive during the war years. Towards the end of the war, Stäckel's health deteriorated following an operation. After several months of treatment, his health improved for a short time, however his condition worsened again in the summer of 1919. He died of a brain tumour aged just 57. Stäckel left behind his wife, daughter and remaining son. Elisabeth Stäckel outlived her husband by nearly twenty years, passing away on 6 May 1938, on what would have been her 47 wedding anniversary. Walter Stäckel survived the war and went on to study chemistry, receiving his PhD in 1922. An extract from a letter written by Elisabeth Stäckel to Engel in September 1920 describes her husband's final months: ... Even a year ago, when we went to Plättig to help his recovery, my husband hoped to gain new strength there for the long winter semester, but even after six weeks at work, the fate which had threatened him for so long overtook him. It was lucky for him that he was able to work for so long and suffered little in his final weeks... Stäckel continued lecturing long after his illness had taken over and hid the signs even from his closest friends with heroic self-control. It seems he knew his life was nearing its end but chose to carry on as normal. An obituary of Stäckel written by Willy Hellpach described him as:- ... a scientific figure of quite extraordinary flexibility and versatility. Hellpach was a friend of Stäckel and had learnt much from him:- ... there were few people, from who one could learn so much in such a pleasant way. He describes how Stäckel's death has left a hole which would be felt for years to come. With reference to Stäckel's popularity, Hellpach comments that:- ... he was well-known, particularly abroad, and his unusually large number of international connections would have made him one of the most valuable representatives of an intellectual renaissance in the future. ... perhaps his untimely death has spared him the gradual fading away of his work. This comment seems to have two different interpretations. Either Hellpach meant that Stäckel was spared seeing his mathematical ability falter as he reached old age, or he may have meant that Stäckel dying earlier than he should have, would somehow make him better remembered in years to come. From the context it is far from clear. Certainly, the name Paul Stäckel, despite being a well-known one during his lifetime, has been forgotten by all but a very few mathematicians of the present day. Article by: Vicky Ryan (University of St Andrews) Click on this link to see a list of the Glossary entries for this page List of References (5 books/articles) Mathematicians born in the same country Additional Material in MacTutor Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index History Topics Societies, honours, etc. Famous curves Time lines Birthplace maps Chronology Search Form Glossary index Quotations index Poster index Mathematicians of the day Anniversaries for the year JOC/EFR © February 2005 School of Mathematics and Statistics Copyright information University of St Andrews, Scotland The URL of this page is:
{"url":"http://www-gap.dcs.st-and.ac.uk/~history/Biographies/Stackel.html","timestamp":"2014-04-18T10:35:44Z","content_type":null,"content_length":"27592","record_id":"<urn:uuid:0143ff91-7bb0-4c0f-aca8-0bedd0660c33>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
A curiosity From Futility Closet, a fun blog of random tidbits I enjoy reading, comes the following curious sequence of equations, attributed to J.A.H. Hunter: I managed to extend this pattern for a few more digits before I got bored. Does it continue forever or does it eventually stop? Is there any deeper mathematical explanation lurking behind this supposed “curiosity”? What’s so special about $f(x) = 2x^2 - 1$? Do patterns like this exist for other functions? 20 Responses to A curiosity 1. Taking f(x)=x^2-6 I find 264453123^2-6 = 69935454 264453123 and so on for truncations of this number, but there doesn’t appear to be such a pattern for f(x)=x^2-1 2. It starts here: http://oeis.org/A151752 Hint: look at the leading digits. □ Fascinating! However, the pattern seems not to continue: the 13th term in that sequence is 3773193359375, suggesting that $2 * 377319335937^2 - 1$ ought to end in 377319335937, but it is actually equal to 284739762543877319335937 instead! In fact, the right digit to use here is 8: $2 * 877319335937^2 - 1 = 1539378434417 877319335937$. What’s the connection to all-odd-digit number divisible by powers of 5, and why does the pattern stop? ☆ One (obvious) thing I see here is that the pattern basically says that a number of the form $2n^2-n-1$ is divisible by a large power of 10. Maybe I’ll have time to look at more details 3. It seems to stop at 88384389877319335937 4. Actually, my code had a bug that did not account for the first zero in the sequence! It certainly seems to go on forever: □ Cool, what language did you use to implement that? Can you share your code? ☆ I used Mathematica. Here is the code I used: digs = {7}; d = Select[Range[0, 9], t = digs~Prepend~#; FromDigits[t] == Mod[2 FromDigits[t]^2 – 1, 10^(Length[digs] + 1)]] &][[1]]; digs = digs~Prepend~d It assumes that there is always some digit that will work, and selects the first one that works in each iteration. It seems to work so far. ○ Your assumption is valid. There is always exactly one number that works to continue the sequence. What’s more, that number depends completely on the digit of 2n^2-1 appearing immediately before n appears. I haven’t found a better way to find that digit yet other than simply computing 2n^2-1 out to enough digits though. 5. It seems like there should something to say concerning 10-adic numbers, but I’m not sure what. □ I thought of that too, but didn’t follow it up because I thought the p-adic numbers are only defined when p is prime. But after your comment I dug a little deeper and realized that although the n-adic numbers for n composite do not form a *field* they do still form a *ring*, which is just fine for the purposes of this problem. I conjecture the following: $2x^2 - 1 = x$, considered as an equation in the ring of 10-adic numbers, has a unique solution, of which 7, 37, 937, … are all suffixes. ☆ Hmm, this is not quite right (re: unique solution), since the 10-adics admit the solutions 1 and (-1/2) = …99999.5, just like the reals. But it seems the 10-adics also admit one other, weirder solution…? □ An interesting link sent to me by Matt Gardner Spencer, which seems relevant: http://www.numericana.com/answer/p-adic.htm#decimal 6. I wonder if it is possible to come up with an algorithm for generating the infinite sequence without using brute-force guessing. One simple observation: any such algorithm will necessarily use an unbounded amount of memory; a bounded-memory algorithm would produce a periodic sequence, but periodic 10-adic numbers are equivalent to negative rationals, and we know there are no rational solutions other than 1 and -1/2. □ Let k_i denote the number just to the left of the gap (running down the pyramid) in the i-th row. Then d_{i+1} (the next digit to be prepended) is 7k_i mod 10. So getting the next digit from the previous line is straightforward. But finding k_{i+1} seems to require both k_i and the digit to the left of k_i, at least the way I’m approaching it. That means it’s really no better than simply computing k_{i+1} with brute force ((2x^2-x-1)/10^{i+1} mod 10) ☆ Hmm, well, but at least that gives a way of directly computing each successive digit (rather than having to guess and check). ☆ Using this method I have computed the first 20,000 digits (takes about 15 seconds on my computer). You can find them here. This entry was posted in arithmetic, modular arithmetic, number theory, pattern. Bookmark the permalink.
{"url":"http://mathlesstraveled.com/2011/09/14/a-curiosity/","timestamp":"2014-04-19T07:05:30Z","content_type":null,"content_length":"83611","record_id":"<urn:uuid:9e46d2c8-6e34-4de1-b92b-4ed2d9c6ebae>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: To assess Mendel’s law of segregation using tomatoes, a true-breeding tall variety (SS) is crossed with a true-breeding short variety (ss). The heterozygous F1 tall plants (Ss) were crossed to produce two sets of F2 data, as follows: • one year ago • one year ago Best Response You've already chosen the best response. Set I: Set II: 45 tall 450 tall 10 short 100 short Best Response You've already chosen the best response. I know I need to do a chi square test but I'm not sure how to set it up Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50ff45f9e4b00c5a3be64782","timestamp":"2014-04-17T22:08:01Z","content_type":null,"content_length":"30082","record_id":"<urn:uuid:b5bc983b-fcb7-4a88-b0e1-8053bb1ff120>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Chicago Heights Algebra 2 Tutor ...We'll have fun and you will succeed! Many students have difficulty with Algebra. Algebra need not be troublesome. 49 Subjects: including algebra 2, English, reading, writing ...I then pursued my graduate degree in Software Engineering at DePaul University which working professional for Lucent Technologies (Alcatel).After graduating I switch careers to the Hospitality Industry.I enjoyed Math in high school and college and use it during my every day work life I have over ... 16 Subjects: including algebra 2, calculus, piano, algebra 1 ...I recently ran a table at a local Boy Scout Jamboree for the Chess Merit Badge, where I had 3 chess boards set up and was playing 3 people at-a-time for most of the day. I had another Boy Scout leader recommend me as a Chess Merit Badge counselor. I have already bought and read the Boy Scout Chess Merit Badge book just for fun. 59 Subjects: including algebra 2, chemistry, Spanish, writing ...The course began with simple programming commands, progressed to logic and more complicated problem solving, and culminated with object oriented programming. During my masters degree I was a TA for the intro to computer science course. For three semesters I taught C++ and Matlab to freshmen and sophomore mechanical engineering students. 17 Subjects: including algebra 2, calculus, physics, geometry ...I completed a Discrete math course (included formal logic, graph theory, etc.) in college, and computer science courses that handled automata theory, finite state machine, etc. I completed a semester course on Ordinary Differential Equations (ODE's) at Caltech. My course textbook was Elementary... 21 Subjects: including algebra 2, chemistry, calculus, statistics
{"url":"http://www.purplemath.com/chicago_heights_algebra_2_tutors.php","timestamp":"2014-04-17T11:31:07Z","content_type":null,"content_length":"24267","record_id":"<urn:uuid:e06cbcab-cd6b-4c40-a33e-5e523b2e7e17>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Fairless Hills Science Tutor Find a Fairless Hills Science Tutor ...I believe the best results come from the combination of a dedicated student and a committed tutor, so I only accept assignments when I honestly believe I can make a difference.As an undergraduate, I received an A in general zoology at the College of William and Mary and an A in graduate level mam... 16 Subjects: including chemistry, grammar, writing, English ...I feel that getting experience teaching students one on one is the best way for me to have an immediate impact. This will especially help to personalize the teaching experience and is an effective way to create a trusting relationship. I am especially personable and I know I have the ability to... 16 Subjects: including mechanical engineering, Spanish, physical science, calculus ...I retired this last June from full time work and have been teaching part time. During the last 9 months I have taught an SAT II Physics prep course as well as a full Physics preview course at a specialized learning center and am currently teaching Lab Physics and Lab Chemistry at a very small pr... 1 Subject: physics ...Tutoring this subject is a mighty and noble challenge and is rewarding. I try to make the student's entrance into algebra one that flows from their knowledge of arithmetic. I constantly go back to the principles of arithmetic and show the student how algebra flows from the knowledge of arithmetic he or she already has in hand. 11 Subjects: including pharmacology, organic chemistry, reading, chemistry ...I'm the right tutor for you. You won't believe that I could merely say "You're a boy" and that's an apple" when I first arrived United States as a Ph.D student two decades ago. Now, I'm a recipient of Excellent teaching Award, an honoree of Who is who American, and a professor of University teaching the subjects of Philosophy, Religion and Humanities. 20 Subjects: including philosophy, reading, Chinese, English Nearby Cities With Science Tutor Bristol, PA Science Tutors Fallsington, PA Science Tutors Feasterville Trevose Science Tutors Feasterville, PA Science Tutors Fieldsboro, NJ Science Tutors Florence, NJ Science Tutors Hulmeville, PA Science Tutors Langhorne Science Tutors Levittown, PA Science Tutors Middletown Twp, PA Science Tutors Morrisville, PA Science Tutors Newtown, PA Science Tutors Penndel, PA Science Tutors Riverside, NJ Science Tutors Tullytown, PA Science Tutors
{"url":"http://www.purplemath.com/Fairless_Hills_Science_tutors.php","timestamp":"2014-04-16T13:26:46Z","content_type":null,"content_length":"24169","record_id":"<urn:uuid:c3725435-25ea-4f62-9d12-8486a73c59d1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
Reasoning: Mental Models Long, long ago, Richard asked how reasoning works. I don't think I've really answered that question in the first three posts responding to Richard's question, and this post isn't likely to either, but I hope I've provided some food for thought. In the first post , I talked about two types of reasoning errors that humans commonly make. In the , I discussed the domain-general vs. domain-specific debate raised by research on one type of reasoning error. In the , I talked about analogical reasoning and category-based induction. In this post, I'm going to describe mental models, and how they explain various types of reasoning. If you haven't guessed by now that my own view of reasoning is largely domain-general, this post should leave no doubt. Mental models treat domain-specific reasoning phenomena as products of domain-specific representations, rather than domain-specific reasoning mechanisms. This is, by and large, my own view as well. Mental models can be adapted to fit into traditional AI frameworks (though they differ from classical AI approaches in important ways, which will be described below) or more recent embodied cognition approaches (Lakoff's idealized cognitive models, for instance, are a form of mental models). Mental models comprise what may be one of the most controversial and powerful theories of human cognition yet formulated. The basic idea is quite simple: thinking consists of the construction and use of models in the mind/brain that are structurally isomorphic to the situations they represent. This very simple idea, it turns out, can be used to explain a wide range of cognitive tasks, including deductive, inducitive (probabilistic), and modal reasoning. There are several ways of conceptualizing mental models, such as scripts or schemas, perceptual symbols, or diagrams. Here I am only going to deal with what mental model theories have in common in their treatments of human reasoning. The idea that cognition uses mental models to think and reason is not a new one. "Picture theories" of thougth were common among the British empiricists of the 17th and 18th centuries, and were also held by many philosophers and psychologists in the first half of the 20th century (e.g., Wittgenstein's picture theory from the ). However, with the beginning of the cognitive revolution in the late 1950s and early 1960s, the computational metaphor of mind led to the prominence of propositional, or digital theories of representation and reasoning. Mental models returned to prominence in the 1980s because of their sheer predictive power. Study after study demonstrated that human reasoning exhibited certain features predicted by mental models, and not by propositional theories. Thus, for the last two decades, mental model theorists and propositional reasoning theorists have been locking horns and trading experimental arguments over how best to conceptualize human thought. As I said before, the mental models view of mind is quite simple. For instance, the statement "The red square is above of the green circle" might be represented by the following mental model: The model can be altered to fit any possible configuration. If the statement was more vague, we might construct several mental models to account for all of the possible interpretations. For instance, if the statement were "The red square is next to the green circle," we might construct multiple models with different "next-to" relationships, including one with the square to the right of the circle, one with it to the left, one with above, one with it below, and so on until all of the possible interpretations were represented. The difference between this representation and the propositional one are straightforward. The propositional represents the structure of the situation without preserving that structure in the form of the representation, while the mental model represents that structure through an isomorphic structure in the representation. One problem with the way I've described mental models thusfar is that it might be tempting to see mental models synonymous with perceptual (particularly visual) imagery. However, not all mental models are perceptual images, and research has shown that perceptual imagery itself may be built on top of mental models. In addition, mental models can represent abstract concepts that may not be visualizable, such as "justice" or "good." The important aspect of mental models is not that they are or are not perceptual, but that they preserve the structure of the situations they represent, which can be done with perceptual images or non-perceptual representations. To see how mental models work in reasoning, I'm going to describe mental model theories of three different types of reasoning: deductive/syllogistic, probabilistic, and modal. For each type of reasoning, mental model theories make unique predictions about the types of errors people will make. Hopefully by the end, even if you don't agree with the mental models perspective, the research on reasoning errors will have provided you with some new insights into the way the human mind works. Deductive Reasoning Recall the description of participants' performance in the Wason selection task from the first post. For the part of the solution that is identical with modus ponens participants answered successfully, but not for the part of the solution identical with modus tollens . According to propositional theories of reasoning, the best explanation for this is that people are in fact using modus ponens , or an equivalent rule, but not modus tollens . The mental models explanation differs in that it doesn't posit the use (or failure to use) either rule. Instead, people construct mental models of the situation, and use those to draw their conclusions. Here is how it works. Participants build a model of the premise(s) based on their background knowledge, and then come up with conclusions based on that model. To determine whether these conclusions are true, they check to see if they can construct any models in which it does not hold. If a conclusion holds in all of the models a person construct, then it is true. If not, then it is In the Wason selection task, the problem is to determine which models result in the rule being true, and which result in it being false, and use this knowledge to decide which cards to turn over. In the version of the Wason task from the first post, the rule was, "If there is a vowel on one side of a card, then there is an odd number on the other side," and we had these cards: There are various models in which this rule is true. They are the following: VOWEL ODD CONSONANT EVEN CONSONANT ODD ODD VOWEL ODD CONSONANT EVEN CONSONANT However, only the first of these explicitly represents the elements mentioned in the rule, and given working memory restrictions, we're likely to only use the elements in the first model when testing the rule. If there is a vowel (e.g., E) on the visible side of a card, then there are two possibilities for the other side. In a mental model, they would be represented with the following: In some models (the ones with ODD on the other side) the rule is true, and in some (the ones with EVEN) it is false. Thus, the person knows that he or she must turn the card with the vowel over to test the rule. However, due to working memory constraints, we tend to consider only a small number of alternative models, and thus we're only likely to represent models that contain elements from our mental model of the rule. Since there is no in our model of the rule, we're unlikely to think that we need to turn over the card with the 2 showing, and because there is an in the rule model, we are likely to mistakenly think that we need to turn over the 7. Things are actually a little bit more complex than this. It turns out that representing negations in situations like this (where the content of the models if fairly abstract or unfamiliar) is more difficult (because it requires the construction of more mental models) than representing positive situations, but for our purposes, it suffices to say that working memory makes it less likely to consider models that aren't easily derived from our model of the rule. To see better how this works, consider the following problems from Johnson-Laird et al. (1998) Only one of the following premises is true about a particular hand of cards: There is a king in the hand or there is an ace, or both. There is a queen in the hand or there is an ace, or both. There is a jack in the hand or there is a 10, or both. Is it possible that there is an ace in the hand? Johnson-Laird et al. give the following explantion of participants' performance on this sort of problem: For problem 1, the model theory postulates that individuals consider the true possibilities for each of the three premises. For the first premise, they consider three models, shown here on separate lines, which each correspond to a possibility given the truth of the premise: king ace Two of the models show that an ace is possible. Hence, reasoners should respond, "yes, it is possible for an ace to be in the hand". The second premise also supports the same conclusion. In fact, reasoners are failing to take into account that when, say, the first premise is true, the second premise: There is a queen in the hand or there is an ace, or both is false, and so there cannot be an ace in the hand. The conclusion is therefore a fallacy. Indeed, if there were an ace in the hand, then two of the premises would be true, contrary to the rubric that only one of them is true. The same strategy, however, will yield a correct response to a control problem in which only one premise refers to an ace. Problem 1 is an illusion of possibility: reasoners infer wrongly that a card is possible. A similar problem to which reasoners should respond "no" and thereby commit an illusion of impossibility can be created by replacing the two occurrences of "there is an ace" in problem 1 above with, "there is not an ace". (emphasis in the original) Here is another example from Johnson-Laird, et al. (1998): Suppose you know the following about a particular hand of cards: If there is a jack in the hand then there is a king in the hand, or else if there isn’t a jack in the hand then there is a king in the hand. There is a jack in the hand. What, if anything, follows? For which they give this explanation: Nearly everyone infers that there is a king in the hand, which is the conclusion predicted by the mental models of the premises. This problem tricked the first author in the output of his computer program implementing the model theory. He thought at first that there was a bug in the program when his conclusion -- that there is a king -- failed to tally with the one supported by the fully explicit models. The program was right. The conclusion is a fallacy granted a disjunction, exclusive or inclusive, between the two conditional assertions. The disjunction entails that one or other of the two conditionals could be false. If, say, the first conditional is false, then there need not be a king in the hand even though there is a jack. And so the inference that there is a king is invalid: the conclusion could be false. The take-home lesson from all of this is that in deductive problems, working memory constraints that cause us to consider a limited number of mental models lead us to make systematic errors in reasoning, including those found in the original Wason selection task. It also explains why, when we place the Wason task in a familiar context (e.g., alcohol drinkers and age), we perform better. For these scenarios, we already have more complex mental models in memory, and can use these, rather than models constructed on-line for the specific task, to reason about which cards to turn over. Probabilistic Reasoning Reasoning about probabilities occurs in a way that is very similar to deductive reasoning. The mental models view of probabilistic reasoning has three principles: 1.) people will construct models representing what is true about the situations, according to their knowledge of them 2.) all things being equal (i.e., unless we have some reason to believe otherwise), all mental models are equally prob2bly, and 3.) the probability of any situation is determined by the proportion of the mental models in which it occurs . If an event occurs in four out of six of the mental models i construct for a situation, then the probability of that event is 67%. However, because mental models are subject to working memory restrictions, they will lead to certain types of systematic reasoning errors in probabilistic reasoning, as they do in deductive reasoning. For instance, consider this problem ^ 4 On my blog, I will post about politics, or if I don't post about politics, I will post about both cognitive science and anthropology, but I won't post all on three topics. For this scenario, the following mental models will be constructed: COG SCI ANTHRO What is the probability that I will post about politics and anthropology? If all possibilities are equally likely, the answer is 1/4. However, researchers found that in problems like this, people tend to answer 0. The reason for this is that in the two mental models people construct for this situation, there is no combination of politics and anthropology. Another interesting and widely-studied example is the famous Monty Hall problem. For those of you who haven't heard of it, I'll briefly describe it, and then discuss it from a mental models perspective. On the popular game show Let's Make a Deal , hosted by Monty Hall, people were sometimes presented with a difficult decision. There were three doors, behind one of which was a great prize. The other two had lesser prizes behind them. After people made their first choice, Hall would open one of the doors with a lesser prize, and ask them if they wanted to stick with their choice, or choose the remaining door. The question is, what should participants do? Does it matter whether you switch or stay with your original choice? If you're not familiar with the problem, think about it for a minute, and come with an aswer before reading on. Do you have an answer? If you're not familiar with the problem, the chances are you decided that it doesn't matter whether you switch. However, if this is your answer, then you are wrong. The probability of selecting the door with the big prize, if you stick with your original choice, is 33%. The probability of selecting the door with the big prize if you switch, however, it 66.6%. To see why this is, imagine another situation, in which there are a million doors to choose from. You choose a door, and Monty then opens 999,998 doors, leaving one more, and asks you if you want to choose? In this situation, it's quite clear that while the chance of selecting the right door on the first choice was 1/1,000,000, the chance of the remaining door being the right one are 999,999/1,000,000. When participants have been given this problem in experiments, 80-90% of them say that there is no benefit to switching. They believe that the probability of the remaining door containing the big prize is no greater than the probability of the door they've already chosen containing it (the so-called "uniformity belief") . In general they either report that the number of original cases determines the probability of success (probability = 1/N), so that the probability in the first choice is 1/3, and in the second choice it is 1/2, or they believe that the probability of success remains constant for each choice even when one is eliminated . Recall the three principles of the mental models theory of probabilistic reasoning. According to these, people will first construct mental models of what they know about the situation. In the Monty Hall problem, they will construct these three models: DOOR 1 BIG PRIZE DOOR 2 BIG PRIZE DOOR 3 BIG PRIZE Each of these models is assumed to be equally probably, and the probability of any one of the models being true is 1 over the total number of models, or 1/3. After the first choice has been made, and one of the other doors eliminated, people either use this original model, in which case they assume that the probability of the first door chosen and the remaining door are both 1/3, or they construct a new mental model with the two doors, and thus reason that the probability for both doors is 50%. Their failure to represent the right number of equipossible models, and therefore reasoning correctly, is likely due to working memory straights, and research has shown that by manipulating the working memory load of the Monty Hall problem, you can get better or worse performance Modal Reasoning The mental models account of modal reasoning should be obvious, from the preceding. To determine whether a state of affairs is necessary, possible, or not possible, one simply has to construct all of the possible mental models, and determine whether the state of affairs occurs in all (necessary), some (possible) , or none (not possible) of the models. There hasn't been a lot of research on modal reasoning using mental models, because it is similar to probabilistic reasoning, but people's modal reasoning behaviors do seem to be consistent with the predictions of mental models theories.In particular, people seem to make assumptions of necessity or impossibility that are incorrect, due to the limited number of mental models that they construct (e.g., in Johnson-Laird et al. (1998)'s first problem described in the section on deductive reasoning). So, that's reasoning. There are all sorts of phenomena that I haven't talked about, and maybe I'll get to them some day, but for now, I'm going to leave the topic of reasoning. I hope Richard is at least partially satisfied, and maybe someone else has enjoyed these posts as well. If anyone has any further requests, let me know, and I'll post about them if I can. ^1 There are various ways of describing propositional representations. A common way of representing the sentence "The redsquare is above the green circle" is ABOVE(RED(SQUARE),GREEN(CIRCLE)). ^2 Johnson-Laird, P. N., Girotto, V., & Legrenzi, P. (1998). Mental models: a gentle guide for outsiders. ^3 Johnson-Laird, P.N., Legrenzi, P., Girotto, V., Legrenzi, M.S., & Caverni, J.P. (1999). Naive probability: a mental model theory of extensional reasoning. Psychological Review, 109(4), 722-728. ^4 Adapted from Johnson-Laird, et al (1999). ^5 See, e.g., Granberg, D., & Brown, T. A. (1995). The Monty Hall dilemma. Personality and Social Psychology Bulletin, 21, 711–723; Falk, R. (1992). A closer look at the probabilities of the notorious three prisoners. Cognition, 43, 197–223. ^6 Shimojo, S., & Ichikawa, S. (1989). Intuitive reasoning about probability: Theoretical and experimental analyses of the “problem of three prisoners.” Cognition, 32, 1–24. ^7 Ben-Zeev, T., Dennis, M, Stibel, J. M., & Sloman, S. A. (2000). Increasing working memory demands improves probabilistic choice but not judgment on the Monty Hall Dilemma. Paper submitted for 6 comments: Fascinating stuff! "If there is a jack in the hand then there is a king in the hand, or else if there isn’t a jack in the hand then there is a king in the hand." Maybe it's just me, but this seems poorly worded. Within that context I understood "or else if" to mean "but if", i.e. a conjunction rather than disjunction. If it had been clearly separated into "one of the following two conditionals is true", as with the Aces question, then I wouldn't have fallen for it. (So, in my case at least, mental models don't explain this mistake.) Frankly, I'm amazed anyone would get the Aces question wrong. But some of the other cases you relate are very interesting and convincing - especially the explanation of the Monty Hall problem (which fascinated me in high school). And the account of modal reasoning is very intuitive, I like it a lot. It strikes me as having some interesting philosophical implications also, which I hope to blog about at some point. One other aspect of reasoning I was wondering about is how it gets neurologically instantiated. (I'm guessing little is yet known, since there's even controversy as to whether simple beliefs have any clear neurological correlate; but it can't hurt to ask!) Posted by Richard Whew! I'm glad the part you found to be poorly worded was a quote, instead of something I wrote. Anyway, it may be that it is poorly worded. I understand it, but I've seen it so many times, and read so much about it, that it seems straightforward to me. However, one of the problems that many of us face when coming up with experimental materials like this arise from trying to get the wording just right. There are a lot of constraints, related to control and interpretation, and they often lead to awkwardly worded passages. I've had participants complain about several of the stories that I've written for some of my experiments, for instance. It amazes me that people get the aces problem wrong, as well, but then it amazes me that people perform so poorly on the Wason selection task, so I'm clearly not a good judge of how people are likely to perform on such reasoning tasks. As a philosophy major, and someone who's probably pretty good at solving logic problems, your intuitions are probably no good either. I do have to admit, however, that when I first heard the Monty Hall problem, I was convinced that there was no benefit to switching. I even went out and wrote a simulation (yes, that's how big of a nerd I am) to show that I was right. Much to my dismay, the simulation showed that I was wrong. I didn't feel so bad, however, when a graduate student from the math department got into a heated argument with the professor who had described the problem about how wrong the notion that you should switch really was. I figure if the math people can't get probability problems right, then I can't be faulted for getting them wrong. On the neuroscience, I'm afraid I don't know a whole lot about it. I know a bit about the neuroscience perceptual simulations, which I didn't get into, but which are discussed in the mental models literature.I've also read a little bit of the neuroscientific research on probabilistic reasoning, particularly with gamblers, because it relates to the role of emotion in reasoning, but I don't remember which brain areas, other than the amygdala and some other emotion-centers, were involved. I'll run through my notes and the papers I have around and see if I can scrounge up enough for a post. Posted by Chris One more note about the neuroscience. We know a whole hell of a lot about where certain types of information are represented (at least where in terms of brain regions). If you count representations as beliefs, then we know where certain types of beliefs are represented, and we even know the pathways by which these representations, or beliefs, become conscious. What we don't know is how, specifically, these beliefs are represented in those brain regions. Outside of the primary visual cortext (V1, and maybe V2 and V3), we don't know much of anything about how complex representations are instantiated at the level of neurons and collections of neurons. At best, we have models (neural nets), and for higher-order cognitive processes like reasoning, there is absolutely no data to connect the neural nets to actual operations of the brain. If you want really hard neuroscience like that, you pretty much have to restrict your inquiry to low-level vision. And we know a whole hell of a lot about the neuroscience of low-level vision, thanks in large part to the unsung heroes of visual neuroscience: monkeys with their brains exposed and electrodes placed directly on specific neurons or groups of neurons in their primary visual cortex.If only we could get monkeys to do the Wason selection task! Posted by Chris There must be some mistake in the way you have explained the Monty Hall problem. Once one of lesser prize doors has been removed, the situation is that there are now two doors, one of which has a large prize behind it and one which has a smaller prize behind it. Whichever of the two doors you choose, there is a 50% chance of the large prize being behind it. So it doesn't matter which one you pick, which one you picked before or whether you switch your choice now. The example with the million doors makes no difference. In general, I have found your articles interesting, but I'd be grateful if you could give a link to an established source that explains this problem correctly, because the way you have explained it, the conclusions are not supported by the premises. hehe - sorry, my bad.. :-) I guess I accept it now, having looked at the problem on wikipedia. I still can't grasp intuitively where my mistake is, but I'm getting persuaded that the example you gave is correct :-) Find and download what you need at Rapidshare Search Engine. Top Site List Free Proxy Site Free Download mp3 Michael Jackson song All Michael Jackson Lirics Oes Tsetnoc Mengembalikan Jati Diri Bangsa Download Mp3 Gratis
{"url":"http://mixingmemory.blogspot.com/2004/12/reasoning-mental-models.html","timestamp":"2014-04-21T09:36:21Z","content_type":null,"content_length":"104432","record_id":"<urn:uuid:50ea57d7-c662-4d5d-9c70-a0a59e6ce61e>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
Perpenso Calc You don't always have a calculator with you but you always have your phone. Perpenso^® Calc 4 is a calculator application for the Apple iPhone, iPad and iPod touch. It offers five calculators in one upgradable lite app: scientific, statistics, business, hex and bill/tip. Scientific mode is included and the other modes are optional. Perpenso Calc is a related fully paid app. All modes are included. Perpenso Calc is offered at a discounted price and may cost less than adding all your desired functionality to Perpenso Calc 4. Both versions are feature rich apps that are comparable to traditional handheld scientific, business and hex calculators. iOS 4.3 or later is required. Perpenso Calc 4 includes: A calculator tape (except in lite mode). A built-in manual. Nine color schemes (except in lite mode). In all calculators Perpenso Calc 4 offers: Copy and paste. Optional RPN entry (except in lite mode). Decimal based arithmetic (no binary rounding). 20 digit precision (enough for 64-bit arithmetic). 10 memory registers. Customizable display lines (iPad only, not available in lite mode) Scientific mode provides: 72 operations (43 complex number). Complex numbers, cartesian (rectangular) and polar. 2D rotation. Scientific and engineering notation. Unit and time conversions. Statistics mode (an optional In-App Purchase) includes: 42 operations. Optional worksheet for data entry/editing and viewing common stats. Importing data from the web. Single and two variable data. Descriptive Statistics. Linear regression. Business mode (an optional In-App Purchase) offers: 53 operations. Optional worksheet for data entry/editing and viewing common calculations. Breakeven point. Profit margin. Time value of money. Cash flows. Interest rate conversion. Running total. Hex mode (an optional In-App Purchase) provides: 43 operations. Optional worksheets for data entry/editing and performing common conversions. Decimal, hex, octal and binary. 64-bit and 32-bit modes. Unsigned and signed modes. Bitwise, shift and rotate operators. Bit set and clear. Byte swapping. UTF-8, UTF-16 and UTF-32 encoding, decoding and conversions. IEEE single and double precision floating point encoding and decoding. Dotted quad encoding. RGBA decoding with a preview of the color. In bill / tip mode (an optional In-App Purchase) Perpenso Calc 4 easily determines the portion of a bill that you are responsible for and an appropriate tip. It allows you to: Add items to a scrollable list. Set item quantity. Set the percentage of an item that you are responsible for. Set tax and tip percentage. Set tax and tip amount. Split the total. Send itemized bill to calculator tape. Enter/edit items and view some calculations on an optional worksheet All Modes Customizable display lines. On an iPad there are seven user definable display lines. These lines may show additional values on the stack, memory registers or special purpose registers for business and hex modes. Labels appear to the left of the numbers to identify the value currently displayed. Tapping on the label will allow you to change the value. Or use buttons that will redefine all the user definable display lines according to a particular theme. Stack, memory, break even point, profit margin, time value of money and hex, octal, binary and unicode conversions. Works with other apps. Copy and paste are convenient ways to transfer data to and from other applications such as Notes and Mail. The numeric value may be a decimal number, a fraction or a complex number in cartesian or polar form. Efficient and explicit calculations. Reverse Polish Notation (RPN), also known as postfix notation, is an optional entry method that gives you explicit control over calculations without using parenthesis. RPN is a favorite of many handheld calculator users. Not available in lite mode. 20 digit precision, enough for 64-bit arithmetic. Can your calculator calculate 2^64 (2 y^x 64) = 18446744073709551616? Very few can, they lose lower end digits and display a truncated value using exponential notation. Many calculator apps use the hardware floating point unit (FPU). While this is perfectly fine for many applications and games it is, in our opinion, insufficient for a calculator. The FPU is limited to around 16 decimal digits of precision, far fewer than the 20 needed to perform 64-bit arithmetic. Regardless of how many fractional digits Perpenso Calc 4 is configured to display it is always using 20 digits of precision Decimal arithmetic, no binary rounding. Can other calculator apps calculate 0.5 − 0.4 − 0.1 = 0? Surprisingly many cannot and display a very small number rather than zero. This is due to another problem with the FPU, it performs binary arithmetic rather than the decimal arithmetic that people use. As numbers are converted between binary and decimal small rounding errors can occur. Handheld calculators use decimal arithmetic, not binary, to avoid such rounding errors. So does Perpenso Calc 4. Scientific Mode 72 operations, 43 of the calculations support complex numbers (marked with ^c). Addition^ c, subtraction^ c, multiplication^ c and division^ c. Reciprocal^ c. Percent^ c. Negate^ c. Sign^ c. Truncate^ c, fractional component^ c, absolute value^ c, round^ c, floor^ c and ceiling^ c. Square^ c, square root^ c, power^ c and xth root^ c. Natural exponential^ c, natural logarithm^ c, common exponential^ c and common logarithm^ c. Degree, radian and gradian conversion^ c. Sine^ c, cosine^ c and tangent^ c. Hyperbolic^ c and inverse^ c. Real part^ c and imaginary part^ c. Magnitude^ c and argument (angle)^ c. Conjugate^ c. Rotate^ c. Cartesian (rectangular) and polar conversions^ c. Random number. Decimal / fraction conversions. Inch / cm conversions. Mile / kilometer conversions. Pound / kilogram conversions. Ounce / gram conversions. Gallon / liter conversions, US and Imperial. Fluid ounce / milliliter conversions, US and Imperial. Fahrenheit / Celsius conversions. Hours (or degrees), minutes and seconds / decimal conversions. Store and recall. Last result and last x. Drop X, swap XY, roll up and roll down. Seamless support for fractions. In Perpenso Calc 4 fractions are generally interchangeable with decimal numbers. Nearly all calculations accept fractions and will return a fraction when the operands are fractions. Fractions may be displayed as, and converted between, proper and improper forms. Better support for complex numbers than many handheld calculators. Most mathematical operations in Perpenso Calc 4 accept complex numbers, more operations than many handheld calculators. Complex numbers may be displayed as, and converted between, cartesian (rectangular) and polar forms. Points in the complex plane may be rotated about the origin. Statistics Mode 42 operations. Optional worksheet for data entry/editing and viewing common statistics. Importing data from the web. Single and two variable data modes. Swap XY data. Mean, weighted mean and geometric mean. Median and mode. Min and max. Standard deviations. Covariance and correlation. Predicted x/y. Geometric, hypergeometric, poisson and binomial probability functions. A statistics worksheet for more natural editing. While the button based interface of a traditional handheld calculator works well for many things it is awkward for some tasks. Editing data is one such task. Fortunately iOS provides a modern and powerful graphical interface. Worksheets offer a more natural way to enter, edit and view data and to perform some common calculations. In the statistics worksheet data may be entered and edited in a table. Above this table descriptive statistics such as mean and standard deviation are automatically displayed. In two variable mode a linear regression is also automatically displayed. Business Mode 53 operations. Optional worksheet for data entry/editing and viewing common calculations. Breakeven point. Profit margin. Time value of money. Cash flows. Interest rate conversion. Running total. Business worksheets provide a convenient alternative to the traditional calculator interface. In breakeven calculations you can set any four of the five variables and calculate the fifth. In profit margin calculations a particular pair of variables is set in order to calculate the desired value. Time Value of Money (TVM) calculations can solve a wide range of problems. Loans, mortgages, leases, savings, annuities, etc. There are five key variables. Set any four of these variables and the fifth may be calculated. Once a TVM calculation has been performed the amortization worksheet may be used to see how the remaining balance, principal payments and interest payments change over time. You can also see the amount of interest and principal paid over the life of the loan or during a specific period of time. In the cash flow worksheet cash flows may be entered and edited in a table. Traditional measurements such as net present value, internal rate of return, payback, etc are automatically calculated and Hex Mode 43 operations. Optional worksheets for data entry/editing and performing common conversions. Decimal, hexadecimal, octal and binary conversions. 32-bit and 64-bit conversions. Unsigned and signed conversions. Binary exponential and binary logarithm. Not, and, or, exclusive or, nand, nor, xnor. Bit set and clear. Byte swap. Logical shift left and right. Arithmetic shift right. Rotate left and right. Text to UTF-8. UTF-8, UTF-16 and UTF-32 to text. Value to IEEE 32 or IEEE 64. IEEE 32 or IEEE 64 to value. Value to dotted quad. Value to RGBA and color. Work with various Unicode character encodings. Perpenso Calc 4 allows you to convert to and from 8-, 16- and 32-bit Unicode character encodings. Enter a character at the keyboard and find its UTF-8, UTF-16 or UTF-32 encoding. Or enter a numeric value representing a character encoding and find out what character it is. You may also convert from one encoding to another, for example UTF-16 to UTF-8. Decode and visualize RGB values with alpha. Enter numeric values representing RGBA colors and find the intensity of the color and alpha channels. There is also a color preview. The background for this preview includes white and black panels so that the alpha effect may be more easily recognized. In the example above the same color is shown at 100% and 50% alpha. Encode and decode IEEE floating point values. IEEE single precision (32-bit) and double precision (64-bit) floating point values may be converted to and from their hex encodings. You may enter decimal floating point values or hex encodings. Support for dotted quad notation. You may convert numeric values to dotted quad (dot-decimal) notation. You may also enter numeric values using dotted quad notation. A hex worksheet for simple conversions. The hex worksheet provides various fields showing how a value may be represented. You may also reinterpret values as signed or unsigned or as 64-bit or 32-bit. In the unicode worksheet you may enter characters in the text field and the various UTF encodings will be automatically displayed. Hex values may be edited and the other fields are automatically Bill / Tip Mode Quickly determine a tip. Perpenso Calc 4 allows you to specify the details of your bill but does not require you to do so. Want to quickly determine a tip? Simply enter your total as an item and a tip will be calculated using your default tip rate. If you would like to change the tip you may enter a tip percentage or if you prefer you may specify the tip amount. Determine the portion of a bill that you are responsible for and a tip. Optionally specifying the percentage of an individual item that you are responsible for provides functionality that simply splitting the total does not. You may calculate the actual portion of a bill that you are responsible for. For example if you shared an appetizer with three friends, had an entree and two drinks simply enter the full price of the appetizer and your percentage of 25, the price of your entree and then the price of your drink and a quantity of 2. Convenient editing using a bill worksheet. The bill worksheet provides a convenient way to enter, edit and view the various bill amounts. At the top of the worksheet the calculated amounts are automatically displayed. Below are editable fields for the tax rate, tip rate, spit and item amounts. Tape, Manual and Settings On its flip side Perpenso Calc 4 offers a calculator tape (except in lite mode), worksheets, a built-in manual and settings controls. The tape allows you to position operations on the left or right. A double tap allows you to copy the tape. The built-in manual is designed for the iPhone display. A downloadable PDF-based manual with additional content is available.
{"url":"http://www.perpenso.com/calc/index.html","timestamp":"2014-04-19T20:11:25Z","content_type":null,"content_length":"30146","record_id":"<urn:uuid:d05e4576-798a-47e5-aabb-bd35f35ba3bd>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Equation For Levitating Magnets? If you are using electro-magnets it might be slightly easier because you can simply work to current values. More current = stronger magnetic field. So you can increase the lifting force by increasing the current through the coil and simply use current values to show the lifting force. I'd recommend starting here: Perhaps under the lift section.
{"url":"http://www.physicsforums.com/showthread.php?t=458671","timestamp":"2014-04-18T08:21:28Z","content_type":null,"content_length":"30273","record_id":"<urn:uuid:08cb2ef6-2093-4b15-b3b7-58789a1816c6>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Boyce E. Griffith Assistant Professor of Medicine and Mathematics New York University Faculty, Leon H. Charney Division of Cardiology, Department of Medicine, New York University School of Medicine Associated Faculty, Department of Mathematics, Courant Institute of Mathematical Sciences, New York University Affiliated Faculty, Sackler Institute of Graduate Biomedical Sciences, New York University School of Medicine Affiliated Faculty, Center for Health Informatics and Bioinformatics, New York University School of Medicine email: boyce.griffith@nyumc.org or griffith@cims.nyu.edu phone: 212.263.4131 (office), 212.263.4129 (fax) web: http://www.cims.nyu.edu/~griffith IBAMR: IBAMR is a distributed-memory parallel implementation of the immersed boundary (IB) method with support for Cartesian grid adaptive mesh refinement (AMR). Support for distributed-memory parallelism is via MPI, the Message Passing Interface. Support for spatial adaptivity is via SAMRAI, the Structured Adaptive Mesh Refinement Application Infrastructure, which is developed at the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory. This implementation of the IB method also makes extensive use of functionality provided by several high-quality third-party software libraries, including: IBAMR outputs visualization files that can be read by the VisIt Visualization Tool. Work is also underway to implement support for finite element mechanics models in IBAMR via the libMesh finite element library. IBAMR source code is hosted by Google Code at http://ibamr.googlecode.com. Multi-beat simulations of the fluid dynamics of the aortic heart valve with physiological driving and loading conditions using the immersed boundary method. A model aortic valve is mounted in a semi-rigid aortic root model with anatomically realistic aortic sinuses. This "valve tester" is immersed in a fluid box. Pressure boundary conditions are imposed at the inlet (bottom) of the vessel using a prescribed left ventricular pressure waveform, and at the outlet (top) of the vessel using a Windkessel model. (See schematic diagram below.) Schematic diagram showing how boundary conditions are imposed on the model vessel. At the upstream boundary, a time-dependent left-ventricular pressure waveform is prescribed. At the downstream boundary, the three-dimensional fluid-structure interaction model is coupled to a three-element Windkessel model fit to human data. Notice that the flow rate is not prescribed in this model, but rather emerges during the computation. The opening and closing dynamics of the model aortic valve. The lower inset shows the prescribed driving pressure (blue curve) and computed loading pressure (green curve). The upper inset shows the computed flow rate through the model valve (blue curve). Net flow through the model valve is approximately 65 ml per cardiac cycle, which is within the physiological range. Notice that the flow rate is not specified in the model; rather, it emerges during the course of the fluid-structure interaction simulation. (Click on the above images to view linked QuickTime movies.) The opening and closing dynamics of the model aortic valve along with the axial (streamwise) fluid velocity. The fluid velocity is shown on a plane that bisects the model vessel and one of the model valve leaflets. Forward flow is indicated in red, reverse flow is indicated in blue. Notice that, except for the first beat, the model valve permits essentially no regurgitation during closure. (Click on the above image to view linked QuickTime movie.) For further details, see: B.E. Griffith. Immersed boundary model of aortic heart valve dynamics with physiological driving and loading conditions. Int J Numer Meth Biomed Eng, 28:317-345, 2012. (DOI, Simulations of a prosthetic mitral heart valve using the immersed boundary method. A chorded prosthetic mitral valve (left panel) and the corresponding immersed boundary model (right panel). The model mitral valve is mounted in a rigid tube that is immersed in a fluid box. Time-dependent velocity boundary conditions are prescribed at the upstream boundary (located at the left of the figure) and zero-pressure boundary conditions are prescribed at the downstream boundary (located at the right of the figure). The opening and closing dynamics of the model prosthetic mitral valve viewed from the side (left panel) and top (right panel). (Click on the above images to view linked QuickTime movies.) Streamlines during the opening phase of the model mitral valve. For further details, see: B.E. Griffith, X.Y. Luo, D.M. McQueen, and C.S. Peskin. Simulating the fluid dynamics of natural and prosthetic heart valves using the immersed boundary method. Int J Appl Mech, 1:137-177, 2009. (DOI, PDF) Simulations of the electrical function of the heart by an immersed boundary approach to the bidomain equations. Click on the above images to view the corresponding animated GIFs. Simulations of cardiac fluid mechanics by an adaptive version of the immersed boundary method. Click on the above images to view the corresponding animated GIFs. Additional animations are available here, and an overview of the three-dimensional fiber structure of the heart and great vessels used in this work is available here. Curriculum Vitae (PDF) Research Interests Mathematical and computational methods in medicine and biology; computer simulation in physiology, especially cardiovascular mechanics and fluid-structure interaction, cardiac electrophysiology, and cardiac electro-mechanical coupling; adaptive numerical methods; high-performance computing All peer-reviewed publications (in reverse chronological order by publication date) 1. T.G. Fai, B.E. Griffith, Y. Mori, and C.S. Peskin. Immersed boundary method for variable viscosity and variable density problems using fast constant-coefficient linear solvers. II: Theory. SIAM J Sci Comput. To appear. 2. D.M. McQueen, T. O'Donnell, B.E. Griffith, and C.S. Peskin. Constructing a Patient-Specific Model Heart from CT Data. In N. Paragios, N. Ayache, and J. Duncan, editors, Handbook of Biomedical Imaging. Springer-Verlag, New York, NY, USA. To appear. 3. T. Skorczewski, B.E. Griffith, and A.L. Fogelson. Multi-bond models for platelet adhesion and cohesion. In S.D. Olson and A.T. Layton, editors, Biological Fluid Dynamics: Modeling, Computation, and Applications, Contemporary Mathematics, Providence, RI, USA. American Mathematical Society. To appear. 4. S. Delong, F. Balboa Usabiaga, R. Delgado-Buscalioni, B.E. Griffith, and A. Donev. Brownian dynamics without Green's functions. J Chem Phys, 140(13):134110 (23 pages), 2014. (DOI) 5. F. Balboa Usabiaga, R. Delgado-Buscalioni, B.E. Griffith, and A. Donev. Inertial Coupling Method for particles in an incompressible fluctuating fluid. Comput Meth Appl Mech Eng, 269:139-172, 2014. (PDF, DOI) 6. H.M. Wang, X.Y. Luo, H. Gao, R.W. Ogden, B.E. Griffith, C. Berry, and T.J. Wang. A modified Holzapfel-Ogden law for a residually stressed finite strain model of the human left ventricle in diastole. Biomech Model Mechanobiol, 3(1):99-113, 2014. (DOI) 7. A.P.S. Bhalla, R. Bale, B.E. Griffith, and N.A. Patankar. Fully resolved immersed electrohydrodynamics for particle motion, electrolocation, and self-propulsion. J Comput Phys, 256:88-108, 2014. (PDF, DOI) 8. V. Flamini, A. DeAnda, and B.E. Griffith. Simulating the effects of intersubject variability in aortic root compliance by the immersed boundary method. In P. Nithiarasu, R. Löhner, and K.M. Liew, editors, Proceedings of the Third International Conference on Computational & Mathematical Biomedical Engineering, 2013. 9. X.S. Ma, H. Gao, N. Qi, C. Berry, B.E. Griffith, and X.Y. Luo. Image-based immersed boundary/finite element model of the human mitral valve. In P. Nithiarasu, R. Löhner, and K.M. Liew, editors, Proceedings of the Third International Conference on Computational & Mathematical Biomedical Engineering, 2013. 10. A.P.S. Bhalla, B.E. Griffith, N.A. Patankar, and A. Donev. A minimally-resolved immersed boundary model for reaction-diffusion problems. J Chem Phys, 139(21):214112 (15 pages), 2013. (DOI) 11. B.E. Griffith, V. Flamini, A. DeAnda, and L. Scotten. Simulating the dynamics of an aortic valve prosthesis in a pulse duplicator: Numerical methods and initial experience. J Med Dev, 7(4):040912 (2 pages), 2013. (DOI) 12. B.E. Griffith and C.S. Peskin. Electrophysiology. Comm Pure Appl Math, 66(12):1837-1913, 2013. (DOI) 13. T.G. Fai, B.E. Griffith, Y. Mori, and C.S. Peskin. Immersed boundary method for variable viscosity and variable density problems using fast constant-coefficient linear solvers. I: Numerical method and results. SIAM J Sci Comput, 35(5):B1131-B1161, 2013. (DOI) 14. A.P.S. Bhalla, R. Bale, B.E. Griffith, and N.A. Patankar. A unified mathematical framework and an adaptive numerical method for fluid-structure interaction with rigid, deforming, and elastic bodies. J Comput Phys, 250:446-476, 2013. (DOI) 15. S.L. Maddalo, A. Ward, V. Flamini, B. Griffith, P. Ursomanno, and A. DeAnda. Antihypertensive strategies in the management of aortic disease. J Am Coll Surg, 217(3):S39, 2013. (DOI) 16. H. Gao, B.E. Griffith, D. Carrick, C. McComb, C. Berry, and X.Y. Luo. Initial experience with a dynamic imaging-derived immersed boundary model of human left ventricle. In S. Ourselin, D. Rueckert, and N. Smith, editors, Functional Imaging and Modeling of the Heart: 7th International Conference, FIMH 2013, London, UK, June 20-22, 2013, volume 7945 of Lecture Notes in Computer Science, pages 11-18, 2013. (DOI) 17. A.P.S. Bhalla, B.E. Griffith, and N.A. Patankar. A forced damped oscillation framework for undulatory swimming provides new insights into how propulsion arises in active and passive swimming. PLOS Comput Biol, 9(6):e100309 (16 pages), 2013. (DOI) 18. S. Delong, B.E. Griffith, E. Vanden-Eijnden, and A. Donev. Temporal integrators for fluctuating hydrodynamics. Phys Rev E, 87(3):033302 (22 pages), 2013. (DOI, PDF) 19. X.S. Ma, H. Gao, B.E. Griffith, C. Berry, and X.Y. Luo. Image-based fluid-structure interaction model of the human mitral valve. Comput Fluid, 71:417-425, 2013. (DOI, PDF) 20. H.M. Wang, H. Gao, X.Y. Luo, C. Berry, B.E. Griffith, R.W. Ogden, and T.J. Wang. Structure-based finite strain modelling of the human left ventricle in diastole. Int J Numer Meth Biomed Eng, 29 (1):83-103, 2013. (DOI, PDF) 21. F. Balboa Usabiaga, J.B. Bell, R. Delgado-Buscalioni, A. Donev, T.G. Fai, B.E. Griffith, and C.S. Peskin. Staggered schemes for fluctuating hydrodynamics. Multiscale Model Sim, 10(4):1369-1408, 2012. (DOI, PDF) 22. B.E. Griffith and S. Lim. Simulating an elastic ring with bend and twist by an adaptive generalized immersed boundary method. Commun Comput Phys, 12(2):433-461, 2012. (DOI, PDF) 23. B.E. Griffith. On the volume conservation of the immersed boundary method. Commun Comput Phys, 12(2):401-432, 2012. (DOI, PDF) 24. X.Y. Luo, B.E. Griffith, X.S. Ma, M. Yin, T.J. Wang, C.L. Liang, P.N. Watton, and G.M. Bernacca. Effect of bending rigidity in a dynamic model of a polyurethane prosthetic mitral valve. Biomechan Model Mechanobiol, 11(6):815-827, 2012. (DOI, PDF) 25. B.E. Griffith. Immersed boundary model of aortic heart valve dynamics with physiological driving and loading conditions. Int J Numer Meth Biomed Eng, 28(3):317-345, 2012. (DOI, PDF; the published version of this paper includes significant typographical errors that were introduced by the publisher following the proofing process; these errors do not appear in the linked PDF document) Erratum: B.E. Griffith. Immersed boundary model of aortic heart valve dynamics with physiological driving and loading conditions. Int J Numer Meth Biomed Eng, 29(5):698-700, 2013. (DOI) 26. P.E. Hand and B.E. Griffith. Empirical study of an adaptive multiscale model for simulating cardiac conduction. Bull Math Biol, 73(12):3071-3089, 2011. (DOI, PDF) 27. P.E. Hand and B.E. Griffith. Adaptive multiscale model for simulating cardiac conduction. Proc Natl Acad Sci U S A, 107(33):14603-14608, 2010. (DOI, PDF; Supporting Information: HTTP, PDF) 28. P. Lee, B.E. Griffith, and C.S. Peskin. The immersed boundary method for advection-electrodiffusion with implicit timestepping and local mesh refinement. J Comput Phys, 229(13):5208-5227, 2010. ( DOI, PDF) 29. B.E. Griffith, R.D. Hornung, D.M. McQueen, and C.S. Peskin. Parallel and Adaptive Simulation of Cardiac Fluid Dynamics. In M. Parashar and X. Li, editors, Advanced Computational Infrastructures for Parallel and Distributed Adaptive Applications. John Wiley and Sons, Hoboken, NJ, USA, 2009. (DOI, PDF) 30. B.E. Griffith. An accurate and efficient method for the incompressible Navier-Stokes equations using the projection method as a preconditioner. J Comput Phys, 228(20):7565-7595, 2009. (DOI, PDF) 31. P.E. Hand, B.E. Griffith, and C.S. Peskin. Deriving macroscopic myocardial conductivities by homogenization of microscopic models. Bull Math Biol, 71(7):1707-1726, 2009. (DOI, PDF) 32. B.E. Griffith, X.Y. Luo, D.M. McQueen, and C.S. Peskin. Simulating the fluid dynamics of natural and prosthetic heart valves using the immersed boundary method. Int J Appl Mech, 1(1):137-177, 2009. (DOI, PDF) 33. B.E. Griffith, R.D. Hornung, D.M. McQueen, and C.S. Peskin. An adaptive, formally second order accurate version of the immersed boundary method. J Comput Phys, 223(1):10-49, 2007. (DOI, PDF) 34. B.E. Griffith and C.S. Peskin. On the order of accuracy of the immersed boundary method: Higher order convergence rates for sufficiently smooth problems. J Comput Phys, 208(1):75-105, 2005. (DOI, 35. S.J. Cox and B.E. Griffith. Recovering quasi-active properties of dendritic neurons from dual potential recordings. J Comput Neurosci, 11(2):95-110, 2001. (DOI, PDF) 36. L.J. Gray and B.E. Griffith. A faster Galerkin boundary integral algorithm. Comm Numer Meth Eng, 14(12):1109-1117, 1998. (DOI, PDF) Submitted for publication (in alphabetical order by author) 1. M. Cai, A. Nonaka, J.B. Bell, B.E. Griffith, and A. Donev. Efficient variable-coefficient finite-volume Stokes solvers. Submitted. 2. D. Devendran and B.E. Griffith. Comparison of two approaches to using finite element methods for structural mechanics with the immersed boundary method. Submitted. 3. V. Flamini, A. DeAnda, and B.E. Griffith. Fluid-structure interaction model of the aortic root. Submitted. 4. H. Gao, D. Carrick, C. Berry, B.E. Griffith, and X.Y. Luo. Dynamic finite-strain modelling of the human left ventricle in health and disease using an immersed boundary-finite element method. 5. H. Gao, H.M. Wang C. Berry, X.Y. Luo, and B.E. Griffith. Quasi-static imaged-based immersed boundary-finite element model of human left ventricle in diastole. Submitted. 6. B.E. Griffith and X.Y. Luo. Hybrid finite difference/finite element version of the immersed boundary method. Submitted. (PDF) 7. R.D. Guy, B. Phillip, and B.E. Griffith. Geometric multigrid for an implicit-time immersed boundary method. Submitted. 1. B.E. Griffith. Simulating the blood-muscle-valve mechanics of the heart by an adaptive and parallel version of the immersed boundary method. PhD Thesis, Courant Institute of Mathematical Sciences, New York University, 2005. (PS, PDF)
{"url":"http://www.math.nyu.edu/~griffith/","timestamp":"2014-04-17T06:45:22Z","content_type":null,"content_length":"31142","record_id":"<urn:uuid:3369d76f-303d-4073-b3a1-bda30807ef4b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
Welcome to V-Maths • Quick answers in one-line ! • Answers lead to intuition !! • Intuition stimulates Brain !!! Vedic Mathematics is pure mathematics Mathematics itself is of Vedic origin. The decimal number system currently in use was perceived by a Vedic Rishi up to 13 places. They practised all fundamental operations at amazing speed with the help of some Sanskrit sutras. So also Geometry was very much advanced in those periods. Now this is being rejuvenated thanks to H.H. Swamy Bharati Krishna Thirthaji’s yeomen efforts. We have developed a system that is closely connected with the present syllabus. It is very much helpful for students and wipe away the scare towards maths. Girls and Boys in secondary schools learn v-maths with smile and are thrilled by its speed and precision. Courses Offered Basic Course □ Duration 40 Hrs □ For eighth ninth and tenth grade students (or any aspirant) □ Attains competency in eight fundamental operations upto 6 digits. □ Can solve problems in Arithmetic, Algebra, Trigonometry and Geometry with ease and precision. Advanced Course □ Duration 60 Hrs □ For those who have completed Basic Course [+1, +2 students] □ Attains competency in fundamental operations up to ten digits. □ Solve problems from Arithmetic, Algebra, Geometry, Analytical Geometry, Trigonometry and Calculus. □ They will be acquainted with Basic philosophy, Sanskrit sutras and its meaning, modus operandy etc. □ Original Mantras for those who are interested Crash Programme □ Duration 30 Hrs □ For all Entrance and Competitive Exams □ Saves time and score more
{"url":"http://vedic-maths.org/courses.html","timestamp":"2014-04-19T11:57:17Z","content_type":null,"content_length":"5748","record_id":"<urn:uuid:7d4e0ee6-9a89-459f-a339-38c81c83e97a>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
Similar-Case-Based Optimization of Beam Arrangements in Stereotactic Body Radiotherapy for Assisting Treatment Planners BioMed Research International Volume 2013 (2013), Article ID 309534, 10 pages Research Article Similar-Case-Based Optimization of Beam Arrangements in Stereotactic Body Radiotherapy for Assisting Treatment Planners ^1Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, Fukuoka 8128582, Japan ^2Department of Radiology, The University of Tokyo Hospital, Tokyo 1138655, Japan ^3Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, Fukuoka 8128582, Japan ^4Division of Quantum Radiation Science, Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan ^5Department of Heavy Particle Therapy and Radiation Oncology, Graduate School of Medical Sciences, Kyushu University, Fukuoka 8128582, Japan ^6Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, Fukuoka 8128582, Japan Received 20 July 2013; Accepted 21 September 2013 Academic Editor: Noriyoshi Sawabata Copyright © 2013 Taiki Magome et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Objective. To develop a similar-case-based optimization method for beam arrangements in lung stereotactic body radiotherapy (SBRT) to assist treatment planners. Methods. First, cases that are similar to an objective case were automatically selected based on geometrical features related to a planning target volume (PTV) location, PTV shape, lung size, and spinal cord position. Second, initial beam arrangements were determined by registration of similar cases with the objective case using a linear registration technique. Finally, beam directions of the objective case were locally optimized based on the cost function, which takes into account the radiation absorption in normal tissues and organs at risk. The proposed method was evaluated with 10 test cases and a treatment planning database including 81 cases, by using 11 planning evaluation indices such as tumor control probability and normal tissue complication probability (NTCP). Results. The procedure for the local optimization of beam arrangements improved the quality of treatment plans with significant differences () in the homogeneity index and conformity index for the PTV, V10, V20, mean dose, and NTCP for the lung. Conclusion. The proposed method could be usable as a computer-aided treatment planning tool for the determination of beam arrangements in SBRT. 1. Introduction Stereotactic body radiotherapy (SBRT) is a sophisticated technique to improve survival rates for early-stage lung cancers [1–4]. SBRT can be used to deliver highly conformal doses to tumors while minimizing radiation doses to surrounding organs at risk (OAR) and normal tissues with steep dose gradients. Radiotherapy treatment planning (RTP) is one of the most important procedures for SBRT, which is determined by treatment planners in time-consuming iterative manners. In particular, it is essential to determine an appropriate beam arrangement, which generally consists of a large number of coplanar and non-coplanar beams [5, 6]. The beam arrangement includes not only beam directions but also nominal beam energies, collimator angles, and beam weights. Many researchers have investigated automated methods for beam angle optimization (BAO). Li and Lei [7] developed a DNA-based genetic algorithm to solve the BAO problem in coplanar directions for intensity-modulated radiation therapy (IMRT) planning. De Pooter et al. [8] investigated an optimization method for non-coplanar beams based on the Cycle algorithm for SBRT of liver tumors. Meyer et al. [9] developed an automated method for the selection of non-coplanar beams by use of the cost function based on radiation absorption in normal tissue and OAR for three-dimensional conformal radiotherapy. Although the above-cited authors stated that the treatment planning time could be reduced by using their BAO algorithms, there is still room for improvement in the routine clinical use of BAO methods. The usefulness of similar cases in the field of radiation oncology has been shown in several studies. Commowick and Malandain [10] used a similar image in a database for the segmentation of critical structures. Chanyavanich et al. [11] developed new prostate IMRT plans based on a similar case. Mishra et al. [12] investigated the case-based reasoning approach to determine the most appropriate dose plan for prostate cancer patients. Schlaefer and Dieterich [13] showed the feasibility of case-based beam generation for robotic radiosurgery. These studies motivated us to adopt some novel strategies to determine the clinically usable beam arrangements for SBRT based on past similar cases. In our earlier research, we investigated the usefulness of similar cases in an RTP database for the determination of beam arrangements in SBRT [14, 15]. However, our previous work had a problem that the beam direction of the past similar case may not be optimal for a new case. Therefore, there is a potential to improve the accuracy and efficiency of the determination of beam arrangements based on the combination of similar-case-based beam arrangement and the BAO algorithm. Our purpose in the present study was to develop a similar-case-based optimization method for beam arrangement in lung SBRT for assisting treatment planners. We evaluated our method by comparing the method without and with an optimization step of beam arrangements and also by comparing the most usable beam arrangements based on the proposed method with original beam arrangements based on a manual method. 2. Materials and Methods Figure 1 presents the overall scheme of the proposed method, which consists mainly of three steps. First, cases that are similar to an objective case were automatically selected based on geometrical features related to structures such as the location, size, and shape of the planning target volume (PTV), lung, and spinal cord. Second, the initial beam arrangements of the objective case were determined by registering cases that are similar to the objective case in terms of lung regions, using a linear registration technique, that is, an affine transformation [16]. Finally, the beam directions of the objective case were locally optimized based on the cost function, which takes into account the radiation absorption in normal tissues and OAR. 2.1. Clinical Cases The institutional review board of our university hospital approved this retrospective study. We selected 96 patients (ages: 42–92 years; median: 76 years) with lung cancer (right lung: 52 cases, left lung: 44 cases) who were treated with SBRT from November 2003 to April 2010. Sixty patients were males and 36 patients were females. Their mean effective PTV dia. was cm. All patients were fixed with a body cast system composed of a thermoplastic body cast, a vacuum pillow, arm and leg supports, and a carbon plate [17]. Experienced radiation oncologists performed the treatment planning on an RTP system by using a pencil beam convolution algorithm (Eclipse versions 6.5 and 8.1; Varian Medical Systems Inc., Palo Alto, CA, USA). The contours of the gross tumor volumes of lung cancers were manually outlined on planning computed tomography (CT) images acquired from a four-channel detector CT scanner (Mx 8000; Philips, Amsterdam, The Netherlands). Each CT slice had a matrix size of 512 512, a slice thickness of 2.0–5.0mm, a pixel size of 0.78–0.98mm, and a stored bit of 12. The internal target volume (ITV) was determined individually according to the internal respiratory motion, which was measured with an X-ray simulator (Ximatron; Varian Medical Systems). The setup margins surrounding the ITV were 5mm in all directions. Seven to eight static beams—including beams in the coplanar and non-coplanar directions—were arranged depending on each patient. All patients received a dose of 48Gy in four fractions at the isocenter with 4, 6, or 10MV beams on linear accelerators (Clinac 21EX; Varian Medical Systems). All cases were randomly separated into three datasets: a dataset of the RTP database including 81 cases (right lung: 46 cases, left lung: 35 cases), a dataset of five training cases (right lung: 3 cases, left lung: 2 cases), and a dataset of 10 test cases (right lung: 3 cases, left lung: 7 cases). The five training cases were used to determine the parameters for the selection of similar cases, and the 10 test cases were used for the evaluation of our method. 2.2. Selection of Similar Cases Based on Geometrical Features Five cases that were similar to an objective case were automatically selected based on geometrical features from the treatment planning point of view. The geometrical features were the PTV location, the PTV shape, the PTV size, the lung volume, and the geometrical relationship between the PTV and spinal cord in making an SBRT treatment plan for lung cancer. The five most similar cases were defined as the cases who had first to fifth shortest distances to the objective case in a feature space. The RTP database was searched for the five cases most similar to the objective case by considering the weighted Euclidean distance of geometrical feature vectors between the objective case and each case in the RTP database. The weighted Euclidian distance was calculated by the following equation: where is the number of geometrical features, is the weight of the th geometrical feature, is the th geometrical feature for the objective case, and is the th geometrical feature for each case in the RTP database. Note that each geometrical feature was divided by the standard deviation of all 81 cases in the RTP database for normalizing the range of each feature value. Table 1 shows 10 geometrical features which were used for the selection of similar cases. The geometrical features were calculated by the following definitions. The PTV centroid was calculated in a fixed reference coordinate system. Each case in the RTP database was registered to a reference case based on the linear registration technique, that is, affine transformation [16]. Feature points for the registration were automatically set to the vertices of a circumscribed parallelepiped of a lung including left and right lung regions as follows. First, the minimum and maximum , , and coordinates, , , , , , and , were obtained in the original coordinate system of the planning CT image from the lung segmented by a treatment planner, and then the six planes , , , , and were determined as those of the circumscribed parallelepiped. Finally, two vertices of the circumscribed parallelepiped of the lung region () and () were used as feature points for the determination of parameters in the affine transformation matrix. In this study, we used a special case of an affine transformation including only the translation and scaling based on two feature points. The reason for this was to reduce the calculation time for finding the feature points of the lung regions. The effective diameter was defined as the diameter of a sphere with the same volume as the PTV. The sphericity was defined as a roundness of the PTV and was given by a ratio of the number of logical AND voxels between the PTV and its equivalent sphere with the same centroid and volume as the PTV to the number of PTV voxels. Lung dimensions were defined as three side-lengths of the circumscribed parallelepiped of the lung regions in the left-right (LR), anterior-posterior (AP), and superior-inferior (SI) directions. The distance between the PTV and spinal cord was measured between the centroid of the PTV and that of the spinal cord in the isocenter axial plane. The angle from the spinal cord to the PTV was defined in the two-dimensional coordinate system with the origin at the centroid of the spinal cord in the isocenter axial plane and ranged from (clockwise) to (counterclockwise) for a baseline of the posterior-anterior direction. The weights of the geometrical features were needed in order to determine the features’ importance. Therefore, each institute should determine the appropriate weights of the geometrical features based on their own philosophy or policy of treatment planning when applying the proposed method to their own databases. In the present study, the weights of the geometrical features were empirically set as follows: based on our institution’s treatment planning policy by using the five training cases with a trial-and-error procedure so that cases more similar to the objective case could be selected in terms of appearance relevant to the features. As a result, the geometrical feature weights were the PTV centroid = 0.3, effective dia. of the PTV = 0.1, sphericity of the PTV = 0.1, lung dimension = 0.3, distance between the PTV and spinal cord = 1.0, and angle from spinal cord to the PTV = 1.0. The weights for geometrical features were normalized as the sum of the weights was 1.0 when the similarity measure was calculated in our system. 2.3. Determination of Initial Beam Arrangements Based on the Linear Registration Figure 2 illustrates the determination of initial beam arrangements based on the linear registration technique. Five beam arrangements, each of which had seven or eight beam directions, for an objective case were automatically determined by the registration of five similar cases with the objective case in terms of lung regions using a linear registration technique, that is, affine transformation [16]. Please note that the beam directions are determined indirectly by the registration of the lung regions, because the linear registration maps straight lines, that is, beam angles, to straight lines. First, we calculated the affine transformation matrix to register the lung regions of each similar case with those of the objective case based on feature points, which were automatically selected for the registration in vertices of the circumscribed parallelepiped of the lung regions. Second, a beam direction, that is, beam direction vector, based on a gantry angle and couch angle was transformed from a spherical polar coordinate system to a Cartesian coordinate system as the unit direction vector as follows: Third, each beam direction vector of the similar case in the Cartesian coordinate system was modified by using the same affine transformation matrix as a registration in terms of lung regions. Finally, the resulting direction vector in the Cartesian coordinate system was converted into the spherical polar coordinate system as gantry angle and couch angle as follows: 2.4. Local Optimization of Beam Arrangements The beam directions of the objective case were locally optimized based on the cost function, which takes into account the radiation absorption in normal tissues and OAR [9]. Although Meyer et al. [9] developed the cost function for a global optimization of beam arrangements, we used the Meyer cost function for the local optimization of each beam direction. The cost function of the beam with gantry angle and couch angle was defined as follows: where represents a dose absorption in normal tissue until X-ray beams reach the PTV surface, is a term for the irradiation of th OAR, and is a weight for the th OAR. The first term is determined by the following equation: where is a linear attenuation coefficient in water, and is the mean depth in cm from the body surface to the PTV surface. In the present study, the values for the 4, 6, and 10MV beams were set to 0.05730, 0.05271, and 0.03859cm^−1, respectively. The second term for the th OAR is defined as follows: where is an irradiated fractional volume of the th OAR, is the mean depth from the body surface to the th OAR surface, and is a parameter to control the relative significance of the first and second terms. The term of exp() represents the number of incident photons in the th OAR. The beam directions were locally optimized in the ascending order of the cost functions of the initial angles. Each beam direction was locally optimized in the range of degree with the interval of degree. Here, the beam directions were constrained by following three conditions: (i) although the non-coplanar beams can be arranged with the change of gantry and couch angles, the coplanar beams can be shifted only in the direction of gantry rotation; (ii) the interval with the optimized beam directions was greater than or equal to degree, and (iii) the available beam direction space was limited to the space which was used in past cases including the RTP database (as shown in Figure 3) to avoid the collision of the patient with a gantry head. In this study, the lung and spinal cord were incorporated as OAR in the cost function, and both weights (for lung and spinal cord) were empirically set to 5 by using the training case dataset. The parameters for the local optimization of beam arrangement , , , and were empirically set to 0.6, 4°, 2°, and 40°, respectively, by using the training case dataset. Although the parameters for the local optimization of beam arrangements were empirically set based on the preferences of our institution, each institute can determine appropriate parameters based on their own philosophy or policy of treatment planning in the same way as the geometrical feature weights. Each optimal beam direction was defined as the direction of the beam which has the lowest cost value in the beam directions of the local range. 2.5. Evaluation of Beam Arrangements Using Planning Evaluation Indices The five patterns of beam arrangements determined by the proposed method were evaluated by manually making five plans based on the beam arrangements with other planning parameters (nominal beam energies, collimator angles, beam weight, etc.) derived from treatment plans of similar cases in a radiation treatment planning system. Table 2 shows 11 planning evaluation indices for the evaluation of our method. The most usable plan of the objective case was selected by sorting the five plans based on an RTP evaluation measure with these 11 planning evaluation indices, which was the Euclidean distance in a feature space between each plan and an ideal plan. In this study, the ideal plan was assumed to produce a uniform irradiation with a prescribed dose in the PTV and no irradiation in the surrounding OAR and normal tissues. The usefulness of each plan was estimated by the following Euclidean distance of the planning evaluation vector between the ideal plan and each plan determined by a similar case, which was considered as the RTP evaluation measure: where is the number of planning evaluation indices, is the th planning evaluation index for the ideal plan, and is the th planning evaluation index for the plan based on the five most similar cases. Each planning evaluation index was normalized by the standard deviation in the same manner as the geometrical features based on the RTP database of 81 cases with lung cancer. The planning evaluation indices for the ideal plan were defined as follows: D95=48Gy (prescribed dose), homogeneity index (HI) = 1.0, conformity index (CI) = 1.0, tumor control probability (TCP) = 100%, V5 = 0%, V10 = 0%, V20 = 0%, mean lung dose = 0Gy, normal tissue complication probability (NTCP) for the lung = 0%, spinal cord maximum dose = 0Gy, and the NTCP for the spinal cord = 0%. We evaluated similar-case-based beam arrangements suggested by the proposed method using (7) based on the Euclidean distance of the 11 planning evaluation indices. Although we can apply weights to planning evaluation indices based on planners’ preferences, we decided to give a constant weight to each planning evaluation index in this study. The planning evaluation indices for the PTV calculated in this study were the D95, HI, CI, and TCP. The D95 was defined as the minimum dose in the PTV that encompasses at least 95% of the PTV. The HI was calculated as the ratio of the maximum dose to the minimum dose in the PTV [18]. The CI was the ratio of the treated volume to the PTV. The treated volume was defined as the tissue volume that is intended to receive at least the selected dose and is specified by the radiation oncologist as being appropriate to achieve the purpose of the treatment [19]. In the present study, the treated volume was defined as the volume receiving the minimum PTV dose. The TCP was estimated based on a linear-quadratic (LQ) model according to a Poisson distribution by consideration of the radiosensitivity variation and nonuniform dose distribution [20, 21]. We used parameters for TCP calculation which were obtained from Kanai et al. [22] in patients with lung cancers. The details of the calculation of the TCP were described previously [15]. The planning evaluation indices for normal tissues, that is, the lung and spinal cord, were calculated as described below. For the lung volume, which was defined as total lung volume minus PTV, a V5, V10, V20, and mean dose were calculated. The V was defined as a percentage of the total lung minus PTV receiving ≥Gy. The maximum dose for the spinal cord was also calculated. Moreover, the NTCP values for the lung and spinal cord were calculated using the Lyman-Kutcher-Burman model [23, 24]. The fitting parameter values for NTCP calculation were obtained from Burman et al. [25] More details for the calculation of the NTCP can be found in our previous work [15]. 2.6. Assessment of the Proposed Method The proposed method was assessed with an RTP database including 81 cases with lung cancer (right lung: 46 cases, left lung: 35 cases) and 10 test cases (right lung: 3 cases, left lung: 7 cases), which were randomly chosen from all 96 cases. The 10 test cases were not included in the RTP database of 81 cases and were not used as five training cases for the determination of the weights of geometrical features. The similar cases were selected from cases that have ipsilateral lung cancers with the test case. The effectiveness of the combination method for the determination of the initial beam arrangement based on similar cases and the local optimization of the beam arrangement was evaluated by comparing the planning evaluation indices of 50 plans (5 plans 10 test cases) with and without the local optimization of the beam arrangement. In addition, the most usable beam arrangements determined based on the RTP evaluation measure were retrospectively compared with the original beam arrangements, which were used in clinical practice in 10 test cases. The same beam weights and wedges of the similar case were used for the plan with the beam arrangement determined by our method. The irradiation fields were adjusted to the tumor using a multileaf collimator (millennium 120 MLC; Varian Medical Systems) with an additional margin of 5 mm around the PTV. 3. Results Figure 4 shows an objective case (a) and the first to fifth most similar cases ((b)–(f)) to the objective case. The similar cases geometrically resemble the objective case (Figure 4(a)), especially in terms of the geometrical relationship between the tumor and the spinal cord, because we gave greater weights to the geometrical features related to the spinal cord. Figure 5 illustrates the similar-case-based beam arrangements based on a linear registration technique (a) and the optimized similar-case-based beam arrangement after the local optimization of each beam direction (b). These plans consisted of seven beams with gantry angle and couch angle of (214, 0), (146, 0), (90, 0), (40, 333), (35, 32), (325, 328), and (328, 90) in Figure 5(a) and (210, 0), (142, 0), (92, 0), (40, 333), (39, 28), (329, 324), and (328, 90) in Figure 5(b). Although the lateral beam passed the spinal cord in the beam arrangement before the local optimization step in Figure 5(a) (spinal cord max. dose: 3.13Gy), the optimized beam arrangement avoids the spinal cord in Figure 5(b) (spinal cord max. dose: 1.68Gy). Table 3 shows mean standard deviation (SD) of the planning evaluation indices in 50 treatment plans of 10 test cases obtained from the dose distributions of similar-case-based beam arrangements without and with the beam direction optimization. The procedure for the local optimization of beam arrangements improved the quality of the treatment plans with significant differences () in both the homogeneity index and conformity index for the PTV and V10, V20, mean dose, and NTCP for the lung. Figure 6 shows dose distributions of the original plan (Figure 6(a)) and the most usable plan (Figure 6(b)) determined by the RTP evaluation measure. Figure 7 provides dose-volume histograms (DVHs) for the case shown in Figure 6. In terms of the PTV, the similar-case-based plan has a DVH curve that is almost the same as that of the original plan. However, the similar-case-based plan resulted in better sparing of spinal cord and lung regions compared to the original plan. Table 4 shows mean SD of the planning evaluation indices in 10 test cases obtained from the dose distributions produced by original beam arrangements and similar-case-based beam arrangements of the most usable plans determined by the RTP evaluation measure. The proposed method may provide usable beam directions that are not significantly different from those obtained with the original beam arrangements () in terms of the 10 planning evaluation indices out of 11 indices. The mean value of D95 was significantly improved based on the proposed method compared to that of the original beam arrangements (). 4. Discussion We have shown the feasibility of our similar-case-based optimization method for the determination of beam arrangements in SBRT. In general, the appropriate beam arrangement in lung SBRT has varied with an institution’s situation. In terms of the number of beams, Takayama et al. [5] reported that they routinely used 5 to 10 beams with coplanar and non-coplanar directions for lung SBRT in order to make a homogenous target dose distribution, while avoiding high doses to normal tissues. Liu et al. [26] found that the optimal number of beams for lung SBRT was 13 to 15 with coplanar and non-coplanar directions. A large number of beams increase the required treatment time, which should be as short as possible to reduce the patient’s burden. Moreover, an available beam direction space is restricted by the size of the gantry and an immobilizer. In our method, the beam arrangement was automatically determined based on the past similar case, followed by the local optimization of each beam direction. Therefore, the proposed method could adjust an institution’s specific situation by replacing the RTP database. One of the most difficult problems in RTP is the patient-specific tradeoff between the benefit to a tumor and the risk to surrounding normal tissues. Therefore, treatment planners should select a plan that is most suitable for each individual patient, from a set of optimal plans. In our method, treatment planners can select a plan considered the best for the patient from among several plans based on similar cases in the RTP database with the knowledge of experienced treatment planners. Although the RTP is a time-consuming task—especially for the less experienced treatment planners—the combination of similar cases and the BAO algorithm may reduce both the workload for treatment planners and the interplanner variability of treatment plans. As a result of the local optimization of beam arrangements, the planning evaluation indices were improved with significant differences as shown in Table 3. Although the improvement of the planning evaluation indices may not have a great impact on a clinical outcome from dosimetric point of view (e.g., mean lung V20 reduced from 4.34% to 4.25%), the step of the local optimization of beam arrangements gave robustness to the proposed method. As shown in Figure 5, the optimized beam arrangement avoided OARs, even if the beam arrangement of the past similar case may not be suitable for a new case. Therefore, the treatment planning time for the manual corrections of beam arrangements could be reduced based on the proposed method with the BAO. It would be difficult to compare the planning evaluation indices between our results and other BAO literatures [7–9] due to a difference of clinical cases. Although Liu et al. applied the BAO algorithm to IMRT treatment plans for stage III lung cancers [27], to the best of our knowledge, there are no literatures which applied the BAO algorithm to the SBRT treatment plans for early stage lung cancers with 48Gy in four fractions. In addition, the commercialized BAO software (Eclipse version 8.1; Varian Medical Systems) was not used for the comparison with the proposed method, because the beam directions obtained by the BAO software might be located at the outer side of the available beam direction space as shown in Figure 3. It is important to investigate how long it takes for treatment planners, who are not experienced, to make SBRT plans with and without the proposed method. In addition, it is necessary to evaluate which type of case the proposed method is more effective with respect to the planning time, simple cases (frequent cases, whose tumors are located at far from the OARs), and the complex cases (rare cases, whose tumors are located at close to the OARs). Therefore, we should compare the treatment planning time with and without the proposed method and also between simple and complex cases in future work. There are some limitations of this method. First of all, the proposed method has a possibility to be trapped into local minima of the cost function in the local optimization step of beam arrangement, because each beam direction is locally optimized with the initial angle, which is derived from the similar case. In other words, our method depends on the quality of treatment plans in the RTP database. Although the RTP database used here consisted of treatment plans developed by senior experienced radiation oncologists at our institution, we should collect many more treatment plans with clinical outcomes to improve the quality of the RTP database in future work. In addition, there are several parameters to adjust in our method. In the present study, we empirically determined the parameters based on the institution’s policy of treatment planning by using the five training cases with a trial-and-error procedure. In the future, we will need to optimize the parameters by using a larger number of cases. 5. Conclusion We developed a similar-case-based optimization method for beam arrangements in lung SBRT for assisting treatment planners. The local BAO algorithm improved the quality of treatment plans with significant differences () in the homogeneity index and conformity index for the PTV, V10, V20, mean dose, and NTCP for the lung. Moreover, the proposed method may provide usable beam arrangements which are not significantly different from the original beam arrangements () in terms of the 10 planning evaluation indices. The mean value of D95 was significantly improved based on the proposed method compared to that of the original beam arrangements (). Therefore, our system will be useful for treatment planners, and thus, the quality and efficiency of radiotherapy would be improved. This research was supported by a Grant-in-aid from the Japan Society for the Promotion of Science (JSPS) Fellows and the Ministry of Education, Science, Sports and Culture, Grant-in-aid for Scientific Research (C), 22611011, 2010 to 2012. 1. R. Timmerman, R. Paulus, J. Galvin et al., “Stereotactic body radiation therapy for inoperable early stage lung cancer,” Journal of the American Medical Association, vol. 303, no. 11, pp. 1070–1076, 2010. View at Publisher · View at Google Scholar · View at Scopus 2. Y. Nagata, J. Wulf, I. Lax et al., “Stereotactic radiotherapy of primary lung cancer and other targets: results of consultant meeting of the international atomic energy agency,” International Journal of Radiation Oncology Biology Physics, vol. 79, no. 3, pp. 660–669, 2011. View at Publisher · View at Google Scholar · View at Scopus 3. H. Onishi, H. Shirato, Y. Nagata et al., “Stereotactic body radiotherapy (SBRT) for operable Stage i non-small-cell lung cancer: can SBRT be comparable to surgery?” International Journal of Radiation Oncology Biology Physics, vol. 81, no. 5, pp. 1352–1358, 2011. View at Publisher · View at Google Scholar · View at Scopus 4. M. Taremi, A. Hope, M. Dahele et al., “Stereotactic body radiotherapy for medically inoperable lung cancer: prospective, single-center study of 108 consecutive patients,” International Journal of Radiation Oncology Biology Physics, vol. 82, no. 2, pp. 967–973, 2012. View at Publisher · View at Google Scholar · View at Scopus 5. K. Takayama, Y. Nagata, Y. Negoro et al., “Treatment planning of stereotactic radiotherapy for solitary lung tumor,” International Journal of Radiation Oncology Biology Physics, vol. 61, no. 5, pp. 1565–1571, 2005. View at Publisher · View at Google Scholar · View at Scopus 6. D. H. Lim, B. Y. Yi, A. Mirmiran, A. Dhople, M. Suntharalingam, and W. D. D'Souza, “Optimal beam arrangement for stereotactic body radiation therapy delivery in lung tumors,” Acta Oncologica, vol. 49, no. 2, pp. 219–224, 2010. View at Publisher · View at Google Scholar · View at Scopus 7. Y. Li and J. Lei, “A feasible solution to the beam-angle-optimization problem in radiotherapy planning with a DNA-based genetic algorithm,” IEEE Transactions on Biomedical Engineering, vol. 57, no. 3, pp. 499–508, 2010. View at Publisher · View at Google Scholar · View at Scopus 8. J. A. de Pooter, A. Méndez Romero, W. P. A. Jansen et al., “Computer optimization of noncoplanar beam setups improves stereotactic treatment of liver tumors,” International Journal of Radiation Oncology Biology Physics, vol. 66, no. 3, pp. 913–922, 2006. View at Publisher · View at Google Scholar · View at Scopus 9. J. Meyer, S. M. Hummel, P. S. Cho, M. M. Austin-Seymour, and M. H. Phillips, “Automatic selection of non-coplanar beam directions for three-dimensional conformal radiotherapy,” British Journal of Radiology, vol. 78, no. 928, pp. 316–327, 2005. View at Publisher · View at Google Scholar · View at Scopus 10. O. Commowick and G. Malandain, “Efficient selection of the most similar image in a database for critical structures segmentation,” Medical Image Computing and Computer-Assisted Intervention, vol. 4792, no. 2, pp. 203–210, 2007. View at Publisher · View at Google Scholar · View at Scopus 11. V. Chanyavanich, S. K. Das, W. R. Lee, and J. Y. Lo, “Knowledge-based IMRT treatment planning for prostate cancer,” Medical Physics, vol. 38, no. 5, pp. 2515–2522, 2011. View at Publisher · View at Google Scholar · View at Scopus 12. N. Mishra, S. Petrovic, and S. Sundar, “A self-adaptive case-based reasoning system for dose planning in prostate cancer radiotherapy,” Medical Physics, vol. 38, no. 12, pp. 6528–6538, 2011. View at Publisher · View at Google Scholar · View at Scopus 13. A. Schlaefer and S. Dieterich, “Feasibility of case-based beam generation for robotic radiosurgery,” Artificial Intelligence in Medicine, vol. 52, no. 2, pp. 67–75, 2011. View at Publisher · View at Google Scholar · View at Scopus 14. T. Magome, H. Arimura, Y. Shioyama et al., “Computer-aided beam arrangement based on similar cases in radiation treatment-planning databases for stereotactic lung radiation therapy,” Journal of Radiation Research, vol. 54, no. 3, pp. 569–577, 2013. View at Publisher · View at Google Scholar 15. T. Magome, H. Arimura, Y. Shioyama, et al., “Computer-assisted radiation treatment planning system for determination of beam directions based on similar cases in a database for stereotactic body radiotherapy,” in Medical Imaging, vol. 8319 of Proceedings of SPIE, pp. 1–7, February 2012. View at Publisher · View at Google Scholar 16. W. Burger and M. J. Burge, Digital Image Processing: An Algorithmic Introduction Using Java, Springer, 1st edition, 2007. 17. Y. Shioyama, K. Nakamura, S. Anai et al., “Stereotactic radiotherapy for lung and liver tumors using a body cast system: setup accuracy and preliminary clinical outcome,” Radiation Medicine, vol. 23, no. 6, pp. 407–413, 2005. View at Scopus 18. N. Kadoya, Y. Obata, T. Kato et al., “Dose-volume comparison of proton radiotherapy and stereotactic body radiotherapy for non-small-cell lung cancer,” International Journal of Radiation Oncology Biology Physics, vol. 79, no. 4, pp. 1225–1231, 2011. View at Publisher · View at Google Scholar · View at Scopus 19. “Prescribing, recording and reporting photon beam therapy (supplement to ICRU report 50),” ICRU Report 62, International Commission on Radiation Units and Measurements, 1999. 20. S. Webb and A. E. Nahum, “A model for calculating tumour control probability in radiotherapy including the effects of inhomogeneous distributions of dose and clonogenic cell density,” Physics in Medicine and Biology, vol. 38, no. 6, pp. 653–666, 1993. View at Publisher · View at Google Scholar · View at Scopus 21. B. Sanchez-Nieto and A. E. Nahum, “The delta-TCP concept: a clinically useful measure of tumor control probability,” International Journal of Radiation Oncology Biology Physics, vol. 44, no. 2, pp. 369–380, 1999. View at Publisher · View at Google Scholar · View at Scopus 22. T. Kanai, N. Matsufuji, T. Miyamoto et al., “Examination of GyE system for HIMAC carbon therapy,” International Journal of Radiation Oncology Biology Physics, vol. 64, no. 2, pp. 650–656, 2006. View at Publisher · View at Google Scholar · View at Scopus 23. J. T. Lyman, “Complication probability as assessed from dose-volume histograms,” Radiation Research, vol. 8, pp. S13–S19, 1985. View at Scopus 24. G. J. Kutcher and C. Burman, “Calculation of complication probability factors for non-uniform normal tissue irradiation: the effective volume method,” International Journal of Radiation Oncology Biology Physics, vol. 16, no. 6, pp. 1623–1630, 1989. View at Scopus 25. C. Burman, G. J. Kutcher, B. Emami, and M. Goitein, “Fitting of normal tissue tolerance data to an analytic function,” International Journal of Radiation Oncology Biology Physics, vol. 21, no. 1, pp. 123–135, 1991. View at Scopus 26. R. Liu, J. M. Buatti, T. L. Howes, J. Dill, J. M. Modrick, and S. L. Meeks, “Optimal number of beams for stereotactic body radiotherapy of lung and liver lesions,” International Journal of Radiation Oncology Biology Physics, vol. 66, no. 3, pp. 906–912, 2006. View at Publisher · View at Google Scholar · View at Scopus 27. H. H. Liu, M. Jauregui, X. Zhang, X. Wang, L. Dong, and R. Mohan, “Beam angle optimization and reduction for intensity-modulated radiation therapy of non-small-cell lung cancers,” International Journal of Radiation Oncology Biology Physics, vol. 65, no. 2, pp. 561–572, 2006. View at Publisher · View at Google Scholar · View at Scopus
{"url":"http://www.hindawi.com/journals/bmri/2013/309534/","timestamp":"2014-04-18T02:59:21Z","content_type":null,"content_length":"166136","record_id":"<urn:uuid:f6182f62-8ad5-4599-afd8-77b692af8214>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
Decision Procedure for Inequalities between Homogeneous Polynomials up vote 3 down vote favorite Given two polynomials $p_1$ and $p2$ each of which is a multi-variate polynomial with positive integer coefficients, we want to decide if $p_1 \leq p_2$ over all integral values of the variables. The undecidability of Hilbert's tenth problem implies the undecidability of the above problem. Now we throw in the additional restriction that each of $p_1$ and $p_2$ is homogeneous. Is there any decision procedure known for this restricted version? ag.algebraic-geometry diophantine-equations 1 Isn't this equivalent to asking several instances of the original problem (for the dehomogenized versions of the inequality) over Q? I think this is open. – Qiaochu Yuan Nov 29 '10 at 1:24 1 @Qiaochu: the solvability of polynomial equations in rationals is a big open problem. But here we have an inequality. The statement $f\le 0$ for all rationals is equivalent to the same statement for all reals and is decidable by Tarski (I wrote it in my answer below). – Mark Sapir Nov 29 '10 at 1:52 Hi Mark & SJR, Thanks for your insightful responses. I forgot to add an additional restriction: the variables can only take non-negative integral values. I don't know if this restriction helps. Also in general, the degrees of the two polynomials can be different. Thanks, Raghav – user11202 Nov 30 '10 at 17:32 add comment 2 Answers active oldest votes It seems to me that for homogeneous polynomials it is decidable. First of all note that $p\le q$ is the same as $p-q\le 0$ and the condition of positivity of coefficients is extra. Let $f (x_1,...,x_n)$ be a polynomial. Then $f(x_1/y^2,...,x_n/y^2)=g(x_1,...,x_n,y)/y^m$ where $g$ is homogeneous and $m$ is even. Then $f\le 0$ is equivalent to $g\le 0$. So your problem (for up vote homogeneous polynomial $g$) is equivalent to inequality $f\le 0$ for all rational values of variables $x_1,...,x_n$. But that is equivalent to $f\le 0$ for all (real) values of 6 down $x_1,...,x_n$. That is decidable by Tarski (the elementary theory of the reals is decidable). Shouldn't the conclusion be: the problem (for homogeneous polynomial $g$) is equivalent to the inequality $f \leq 0$ for all rational values of variables $x_1,\ldots,x_n$ with the same denominator?> – alex Nov 29 '10 at 4:29 @alex: every two rational numbers have the same denominator: $3/5=6/10$. – Mark Sapir Nov 29 '10 at 10:51 1 Just to be clear, this procedure only gives decidability in the case that $p$ and $q$ are homogeneous of the same degree. – Noah Stein Nov 29 '10 at 17:44 @Noah: Yes, you are right. If $p$ and $q$ are homogeneous of different degrees, then the problem is undecidable in general as shown by SJR's answer below. I do not know which interpretation of the question is correct. – Mark Sapir Nov 29 '10 at 18:51 add comment I can prove that the following problem is undecidable: To determine, given two homogenous polynomials $p_1$ and $p_2$, whether or not the inequality $p_1\le p_2$ holds for all integer Indeed, suppose there was an algorithm $A$ to determine whether $p_1\le p_2$ always holds. Then we can use this algorithm to determine whether any polynomial $f$ has an integer zero. To see this, suppose $f=f(x_1,\ldots,,x_n)$ has total degree $d$. Let $$g(x_1,\ldots,,x_n,z)=z^d f(x_1/z,\ldots,x_n/z),$$ so $g$ is homogeneous of degree $d$. I claim that $f$ has no integer zero if and only if the inequality $$2z^d\le g(x_1,\ldots,,x_n,z)^2+z^{2d} $$ holds for all integer arguments. Note that the left and right hand sides are up vote 4 homogeneous polynomials, so if the claim is true then we can we can use algorithm $A$ to decide whether or not $f$ has an integer zero. down vote To verify the claim, suppose first that $f$ has no integer zero. If $z=1$ then the inequality reduces to $2\le g(x_1,\ldots,,x_n,1)^2+1$, i.e., $1\le f(x_1,\ldots,,x_n)^2$. If $z$ is different than 1, then already $2z^d\le z^{2d}$, therefore $2z^d\le g(x_1,\ldots,,x_n,z)^2+z^{2d}$. So if $f$ has no integer zero then the inequality holds for all integer arguments. Conversely, if the inequality holds for all integer arguments, then put $z=1$ to obtain $1\le f(x_1,\ldots,,x_n)^2$. What about the case that the coefficients of the $p_i$ are assumed to be positive? It would be interesting if in this case the problem was decidable. add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry diophantine-equations or ask your own question.
{"url":"https://mathoverflow.net/questions/47618/decision-procedure-for-inequalities-between-homogeneous-polynomials/47693","timestamp":"2014-04-16T20:04:59Z","content_type":null,"content_length":"63399","record_id":"<urn:uuid:d269633e-f21e-45a1-ba5b-917900b35e1c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
12315 search hits Ein Fall struktureller Korruption? Die Familienbürgschaft in der Kollision unverträglicher Handlungslogiken (2000) Gunther Teubner Das Recht hybrider Netzwerke (2001) Gunther Teubner Englische Fassung: Hybrid Laws: Constitutionalizing Private Governance Networks. In: Robert Kagan and Kenneth Winston (Hg.) Legality and Community: On the Intellectual Legacy of Philip Selznick. Berkeley, Berkeley Public Policy Press 2002, 311-331. Italienische Fassung: Diritti ibridi: la costituzionalizzazione delle reti private di governance. In: Gunther Teubner, Costituzionalismo societario. Armando, Roma 2004 (im Erscheinen). Vertragswelten: Das Recht in der Fragmentierung von private governance regimes (1998) Gunther Teubner Englische Fassung: Contracting Worlds: Invoking Discourse Rights in Private Governance Regimes (Annual Lecture Edinburgh 1997) Social and Legal Studies 9, 2000, 399-417. Italienische Fassung: Mondi contrattuali. Discourse rights nel diritto privato. In: Gunther Teubner, Diritto policontesturale: Prospettive giuridiche della pluralizzazione dei mondi sociali. La città del sole, Neapel 1999, 113-142. Portugiesische Fassung: Mundos contratuais: o direito na fragmentacao de regimes de private governance. In: Gunther Teubner, Direito, Sistema, Policontexturalidade, Editora Unimep, Piracicaba Sao Paolo, Brasil 2005, 269-298. Contracting Worlds: Invoking Discourse Rights in Private Governance Regimes (2000) Gunther Teubner Preservation of the photographic archive of the German Colonial Society and making it accessible at the Stadt-und Universitätsbibliothek Frankfurt a.M. (1997) Irmtraud Dietlinde Wolcke-Renk Sicherung und Erschließung des Bildbestandes der Deutschen Kolonialgesellschaft in der Stadt- und Universitätsbibliothek Frankfurt am Main (1996) Irmtraud Dietlinde Wolcke-Renk Die Deutsche Kolonialgesellschaft (DKG) war mit ihren bis zu 42.000 Mitgliedern der größte und einflußreichste Interessenverband der deutschen Kolonialbewegung. Sie bestand, wenn auch seit 1933 dem Reichskolonialbund eingegliedert, von 1882 bis 1943....Bisher wurde erst ein Teil unserer Sammlung verfilmt und im Detail gesichtet - die wissenschaftliche Erschließung ist Zukunftsmusik. Dank der Unterstützung durch die Deutsche Forschungs- gemeinschaft, die Marga- und Kurt-Möllgaard-Stiftung und die Adolf-Messer-Stiftung ließ sich jedoch inzwischen wenigstens ein solides Fundament legen. Die Stadt- und Universitätsbibliothek Frankfurt am Main wird bemüht sein, ihre wohl einzigartige kulturhistorische Bildsammlung der deutschen Kolonialaktivitäten, der ein hoher wissenschaftlicher Rang zukommen dürfte, in absehbarer Zeit wieder vollständig verfügbar machen zu können. Globale Bukowina: Zur Emergenz eines transnationalen Rechtspluralismus (1996) Gunther Teubner Englische Fassung: Global Bukowina: Legal Pluralism in the World-Society. in: Gunther Teubner (Hg.) Global Law Without A State. Dartsmouth: London 1996, 3-28. Italienische Fassung: La Bukowina globale: il pluralismo giuridico nella società mondiale. Sociologic a politiche sociali 2, 1999, 49-80. Portugiesische Fassung: Bukowina global sobre a emergência de um pluralismo jurídico transnacional. Impulso: Direito e Globalização 14, 2003. Georgische Fassung: Globaluri bukovina: samarTlebrivi pluralizmi msoflio sazogadoebaSi. Journal of the Institute of State and Law of the Georgian Academy of Sciences 2005 (im Erscheinen) Altera pars audiatur: Law in the Collision of Discourses (1996) Gunther Teubner s.a. Deutsche Fassung: Archiv für Rechts- und Sozialphilosophie. Beiheft 65, 1996, 199-220. Italienische Fassung: Altera pars audiatur: Il diritto nella collisione dei discorsi. In: Gunther Teubner, Diritto policontesturale: Prospettive giuridiche della pluralizzazione dei mondi sociali. La città del sole, Neapel 1999, 27-70. Französische Fassung: Altera pars audiatur: le droit dans la collision des discours. Droit et Société 35, 1997, 99-123. Portugiesische Fassung: Altera pars audiatur: o direito na colisao de disursos. In: J.A. Lindgren Alves, Gunther Teubner, Joaquim Leonel de Rezende Alvim, Dorothe Susanne Rüdiger, Direito e Cidadania na Pos-Modernidade. Editora Unimep, Piracicaba, Brasilia 2002; 93-129. Altera Pars Audiatur: Das Recht in der Kollision anderer Universalitätsansprüche (1996) Gunther Teubner Englische Fassung: Altera pars audiatur: Law in the Collision of Discourses. In: Richard Rawlings (Hg.) Law, Society and Economy. Oxford University Press, Oxford 1997, 150-176. Italienische Fassung: Altera pars audiatur: Il diritto nella collisione dei discorsi. In: Gunther Teubner, Diritto policontesturale: Prospettive giuridiche della pluralizzazione dei mondi sociali. La città del sole, Neapel 1999, 27-70. Französische Fassung: Altera pars audiatur: le droit dans la collision des discours. Droit et Société 35, 1997, 99-123. Portugiesische Fassung: Altera pars audiatur: o direito na colisao de disursos. In: J.A. Lindgren Alves, Gunther Teubner, Joaquim Leonel de Rezende Alvim, Dorothe Susanne Rüdiger, Direito e Cidadania na Pos-Modernidade. Editora Unimep, Piracicaba, Brasilia 2002; 93-129. Substantive and reflexive elements in modern law (1982) Gunther Teubner
{"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/simple/query/*%3A*/browsing/true/doctypefq/article/start/9020/rows/10","timestamp":"2014-04-20T04:07:26Z","content_type":null,"content_length":"40647","record_id":"<urn:uuid:42f947ae-8882-4895-8602-3394f66ad180>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2006/392 The Tate Pairing via Elliptic NetsKatherine E. StangeAbstract: We derive a new algorithm for computing the Tate pairing on an elliptic curve over a finite field. The algorithm uses a generalisation of elliptic divisibility sequences known as elliptic nets, which are maps from $\Z^n$ to a ring that satisfy a certain recurrence relation. We explain how an elliptic net is associated to an elliptic curve and reflects its group structure. Then we give a formula for the Tate pairing in terms of values of the net. Using the recurrence relation we can calculate these values in linear time. Computing the Tate pairing is the bottleneck to efficient pairing-based cryptography. The new algorithm has time complexity comparable to Miller's algorithm, and is likely to yield to further Category / Keywords: implementation / Tate pairing, elliptic curve cryptography, elliptic divisibility sequence, elliptic net, Miller's algorithm, pairing-based cryptography.Date: received 6 Nov 2006, last revised 12 Jun 2007Contact author: stange at math brown eduAvailable format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation Note: Minor corrections to publication version. Publication date June 2007: Pairing-Based Cryptography First International Conference, Pairing 2007, Tokyo, Japan, July 2-4, 2007, Proceedings Series: Lecture Notes in Computer Science, Vol. 4575 Version: 20070612:200319 (All versions of this report) Discussion forum: Show discussion | Start new discussion[ Cryptology ePrint archive ]
{"url":"http://eprint.iacr.org/2006/392","timestamp":"2014-04-18T05:32:12Z","content_type":null,"content_length":"2965","record_id":"<urn:uuid:c6428310-ca46-4f78-ae57-150635fb52a1>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
Patton Vlg, TX Math Tutor Find a Patton Vlg, TX Math Tutor ...I specialize in tutoring math (elementary math, geometry, prealgebra, algebra 1 & 2, trigonometry, precalculus, etc.), Microsoft Word, Excel, PowerPoint, and VBA programming. I'd love to talk more about tutoring for your specific situation and look forward to hearing from you.During my time at T... 17 Subjects: including geometry, elementary math, reading, ACT Math ...Quieres aprender Ingles? Necesitas ayuda actualizando tus habilidades con la computadora? No esperes mas! 39 Subjects: including geometry, prealgebra, reading, chemistry ...When one method is hard to grasp, you simply choose another one. I use this as an advantage with my students. If what I am saying isn’t getting across, I change the approach. 14 Subjects: including algebra 1, algebra 2, Microsoft Excel, geometry ...I love teaching others subjects that I have an in depth understanding in. I think that the best way to learn math is to do lots of example problems. Sometimes with help and sometimes without 9 Subjects: including algebra 1, algebra 2, calculus, geometry ...Second, we will dig to the root of the misunderstanding or challenging material in order to locate the source of confusion. This is often a simple misunderstanding stemming from the basic concepts. While it might sound tedious and superfluous, many of the more complex topics are easily understood from the basics. 22 Subjects: including trigonometry, SAT math, photography, public speaking Related Patton Vlg, TX Tutors Patton Vlg, TX Accounting Tutors Patton Vlg, TX ACT Tutors Patton Vlg, TX Algebra Tutors Patton Vlg, TX Algebra 2 Tutors Patton Vlg, TX Calculus Tutors Patton Vlg, TX Geometry Tutors Patton Vlg, TX Math Tutors Patton Vlg, TX Prealgebra Tutors Patton Vlg, TX Precalculus Tutors Patton Vlg, TX SAT Tutors Patton Vlg, TX SAT Math Tutors Patton Vlg, TX Science Tutors Patton Vlg, TX Statistics Tutors Patton Vlg, TX Trigonometry Tutors Nearby Cities With Math Tutor Hufsmith Math Tutors New Caney Math Tutors North Cleveland, TX Math Tutors North Houston Math Tutors Patton Village, TX Math Tutors Plum Grove, TX Math Tutors Porter, TX Math Tutors Roman Forest, TX Math Tutors Romayor Math Tutors Rye, TX Math Tutors Shep, TX Math Tutors Shepherd, TX Math Tutors Splendora Math Tutors Willis, TX Math Tutors Woodbranch, TX Math Tutors
{"url":"http://www.purplemath.com/Patton_Vlg_TX_Math_tutors.php","timestamp":"2014-04-19T02:12:45Z","content_type":null,"content_length":"23540","record_id":"<urn:uuid:590162a5-8fdf-4b3a-a9bd-28db7a1bb107>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
Berichte des Fraunhofer-Instituts für Techno- und Wirtschaftsmathematik (ITWM Report) 26 search hits Fluid structure interaction problems in deformable porous media: Toward permeability of deformable porous media (2004) O. Iliev A. Mikelic P. Popov In this work the problem of fluid flow in deformable porous media is studied. First, the stationary fluid-structure interaction (FSI) problem is formulated in terms of incompressible Newtonian fluid and a linearized elastic solid. The flow is assumed to be characterized by very low Reynolds number and is described by the Stokes equations. The strains in the solid are small allowing for the solid to be described by the Lame equations, but no restrictions are applied on the magnitude of the displacements leading to strongly coupled, nonlinear fluid-structure problem. The FSI problem is then solved numerically by an iterative procedure which solves sequentially fluid and solid subproblems. Each of the two subproblems is discretized by finite elements and the fluid-structure coupling is reduced to an interface boundary condition. Several numerical examples are presented and the results from the numerical computations are used to perform permeability computations for different geometries. On Modelling and Simulation of Different Regimes for Liquid Polymer Moulding (2004) R. Ciegis O. Iliev S. Rief K. Steiner In this paper we consider numerical algorithms for solving a system of nonlinear PDEs arising in modeling of liquid polymer injection. We investigate the particular case when a porous preform is located within the mould, so that the liquid polymer flows through a porous medium during the filling stage. The nonlinearity of the governing system of PDEs is due to the non-Newtonian behavior of the polymer, as well as due to the moving free boundary. The latter is related to the penetration front and a Stefan type problem is formulated to account for it. A finite-volume method is used to approximate the given differential problem. Results of numerical experiments are presented. We also solve an inverse problem and present algorithms for the determination of the absolute preform permeability coefficient in the case when the velocity of the penetration front is known from measurements. In both cases (direct and inverse problems) we emphasize on the specifics related to the non-Newtonian behavior of the polymer. For completeness, we discuss also the Newtonian case. Results of some experimental measurements are presented and discussed. On the Performance of Certain Iterative Solvers for Coupled Systems Arising in Discretization of Non-Newtonian Flow Equations (2004) O. Iliev J. Linn M. Moog D. Niedziela V. Starikovicius Iterative solution of large scale systems arising after discretization and linearization of the unsteady non-Newtonian Navier–Stokes equations is studied. cross WLF model is used to account for the non-Newtonian behavior of the fluid. Finite volume method is used to discretize the governing system of PDEs. Viscosity is treated explicitely (e.g., it is taken from the previous time step), while other terms are treated implicitly. Different preconditioners (block–diagonal, block–triangular, relaxed incomplete LU factorization, etc.) are used in conjunction with advanced iterative methods, namely, BiCGStab, CGS, GMRES. The action of the preconditioner in fact requires inverting different blocks. For this purpose, in addition to preconditioned BiCGStab, CGS, GMRES, we use also algebraic multigrid method (AMG). The performance of the iterative solvers is studied with respect to the number of unknowns, characteristic velocity in the basic flow, time step, deviation from Newtonian behavior, etc. Results from numerical experiments are presented and discussed. Multigrid – adaptive local refinement solver for incompressible flows (2003) O. Iliev D. Stoyanov A non-linear multigrid solver for incompressible Navier-Stokes equations, exploiting finite volume discretization of the equations, is extended by adaptive local refinement. The multigrid is the outer iterative cycle, while the SIMPLE algorithm is used as a smoothing procedure. Error indicators are used to define the refinement subdomain. A special implementation approach is used, which allows to perform unstructured local refinement in conjunction with the finite volume discretization. The multigrid - adaptive local refinement algorithm is tested on 2D Poisson equation and further is applied to a lid-driven flows in a cavity (2D and 3D case), comparing the results with bench-mark data. The software design principles of the solver are also discussed. On a Multigrid Adaptive Refinement Solver for Saturated Non-Newtonian Flow in Porous Media (2003) W. Dörfler O. Iliev D. Stoyanov D. Vassileva On a Multigrid Adaptive Refinement Solver for Saturated Non-Newtonian Flow in Porous Media A multigrid adaptive refinement algorithm for non-Newtonian flow in porous media is presented. The saturated flow of a non-Newtonian fluid is described by the continuity equation and the generalized Darcy law. The resulting second order nonlinear elliptic equation is discretized by a finite volume method on a cell-centered grid. A nonlinear full-multigrid, full-approximation-storage algorithm is implemented. As a smoother, a single grid solver based on Picard linearization and Gauss-Seidel relaxation is used. Further, a local refinement multigrid algorithm on a composite grid is developed. A residual based error indicator is used in the adaptive refinement criterion. A special implementation approach is used, which allows us to perform unstructured local refinement in conjunction with the finite volume discretization. Several results from numerical experiments are presented in order to examine the performance of the solver. On Numerical Simulation of Flow Through Oil Filters (2003) O. Iliev V. Laptev This paper concerns numerical simulation of flow through oil filters. Oil filters consist of filter housing (filter box), and a porous filtering medium, which completely separates the inlet from the outlet. We discuss mathematical models, describing coupled flows in the pure liquid subregions and in the porous filter media, as well as interface conditions between them. Further, we reformulate the problem in fictitious regions method manner, and discuss peculiarities of the numerical algorithm in solving the coupled system. Next, we show numerical results, validating the model and the algorithm. Finally, we present results from simulation of 3-D oil flow through a real car filter.
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/series/id/16165/start/20/rows/10/author_facetfq/O.+Iliev","timestamp":"2014-04-19T09:46:55Z","content_type":null,"content_length":"38931","record_id":"<urn:uuid:34fbce73-8774-4527-956b-453e0064a479>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
Recent Topology and its Applications Articles Recently published articles from Topology and its Applications. 15 April 2014 Tulsi Srinivasan We extend the theory of the Lusternik–Schnirelmann category to general metric spaces by means of covers by arbitrary subsets. We also generalize the definition of the strict category weight. We 15 April 2014 Ryszard Frankiewicz | Sławomir Szczepaniak It is proved that no non-meager subspace of the space [ω]ω equipped with the Ellentuck topology does admit a Kuratowski partition, that is such a subset cannot be covered by a family F of disjoint 15 April 2014 Jaka Smrekar Let f:Z→Z be a self-map on the topological space Z. Generalizing the well-known factorization of a map into the composite of a homotopy equivalence and a Hurewicz fibration we prove that f is 15 April 2014 Damian Sawicki We show that for a given metric space (X,d) of asymptotic dimension n there exists a coarsely and topologically equivalent hyperbolic metric d′ of the form d′=f∘d such that (X,d′) is of asymptotic 15 April 2014 M. Ghirati | A. Taherifar In this paper, we present a new characterization for the intersection of all essential (resp., free) maximal ideals of C(X). Thereafter topological spaces X for which CF(X) and CK(X) are equal to 15 April 2014 Jindřich Zapletal There is a proper Baire category preserving forcing which adds infinitely equal real but no Cohen real. This resolves a long-standing open problem of David Fremlin. The forcing has a natural 15 April 2014 Oliver Goertsches | Augustin-Liviu Mare We show that if G×M→M is a cohomogeneity one action of a compact connected Lie group G on a compact connected manifold M then HG⁎(M) is a Cohen–Macaulay module over H⁎(BG). Moreover, this module 15 April 2014 Louis Block | Dennis Ledis Let f denote a continuous map of the compact interval to itself. Suppose that f is topologically transitive. We show that if f exhibits a pattern and has the same topological entropy as the 15 April 2014 Fucai Lin | Kexiu Zhang | Changqing Li | Wei Chen In this short article, we give complete answers to four problems of Arhangel'skii on diagonal-flexible spaces or rotoids. We list the four problems as follows:... 15 April 2014 Strashimir G. Popvassilev | John E. Porter Gartside and Moody proved that a space is protometrizable if and only if it has a monotone star-refinement operator on open covers. They called this property monotone paracompactness but noted 15 April 2014 Claudia G. Domínguez-López Let C(X) be the hyperspace of subcontinua of a continuum X. Given a proper subcontinuum A of X, we study the boundary Bd(C(A)) of C(A) in C(X). We show that Bd(C(A)) is always arcwise connected, 1 April 2014 Naseer Shahzad | W.A. Kirk | Maryam A. Alghamdi It is shown that several theorems known to hold in complete geodesically bounded R-trees extend to arcwise connected Hausdorff topological spaces which have the property that every monotone 1 April 2014 Włodzimierz J. Charatonik | Robert P. Roe We are defining a new operator called Mahavier product. This operator generalizes the inverse limit operator with multivalued functions first introduced in [6] by William S. Mahavier in 2004. 1 April 2014 Olivier Olela Otafudu In [4] Kemajou et al. constructed the injective hull in the category of T0-quasi-metric spaces with nonexpansive maps that they called q-hyperconvex hull. In this paper, we study properties of 1 April 2014 Dominique Lecomte | Miroslav Zelený We give, for each countable ordinal ξ⩾1, an example of a Δ20 countable union of Borel rectangles that cannot be decomposed into countably many Πξ0 rectangles. In fact, we provide a graph of a 1 April 2014 S. Todorčević | C. Uzcátegui A topological space X is said to be maximal if its topology is maximal among all T1 topologies over X without isolated points. It is known that a space is maximal if, and only if, it is extremely 1 April 2014 James P. Kelly In this paper, we develop a definition for a class of set-valued functions which will be called irreducible functions. We show that these functions can be used to obtain an indecomposable 1 April 2014 Peter Ozsváth | András I. Stipsicz | Zoltán Szabó We provide an integral lift of the combinatorial definition of Heegaard Floer homology for nice diagrams, and show that the proof of independence using convenient diagrams adapts to this setting.... 1 April 2014 Z. Ercan Starting with the initial paper of Huang and Zhang [2] in 2007, more than six hundred papers dealing with cone metric spaces have been published so far. In this short note we present a different 1 April 2014 Sina Greenwood | Judy Kennedy We introduce a new tool which we call an Ingram–Mahavier product to aid in the study of inverse limits with set-valued functions, and with this tool, obtain some new results about the Available online 24 March 2014 Guillaume C.L. Brümmer This article gives a brief outline of the life of Sergio Salbany (1941–2005) and of his mathematical career, coupled with his social outreach and political activity in South Africa until 1980, in Available online 20 March 2014 Dikran Dikranjan | Anna Giordano Bruno For a totally disconnected locally compact abelian group, we prove that the topological entropy of a continuous endomorphism coincides with the algebraic entropy of the dual endomorphism with Available online 17 March 2014 Mitrofan M. Choban | Petar S. Kenderov | Warren B. Moors For a pseudocompact (strongly pseudocompact) space T we show that every strongly bounded (bounded) subset A of the space C(T) of all continuous functions on T has compact closure with respect to Available online 17 March 2014 Taras Banakh | Bogdan Bokalo | Nadiya Kolos Given a topological space X, we study the structure of ∞-convex subsets in the space SCp(X) of scatteredly continuous functions on X. Our main result says that for a topological space X with 15 March 2014 A. Arbieto | M. Tavares In this article we show that any conservative volume preserving homeomorphism of Rn can be approximated in the uniform topology by transitive one. We also prove some results about the 15 March 2014 Maddalena Bonanzinga | Filippo Cammaroto | Bruno Antonio Pansera | Boaz Tsaban We develop a unified framework for the study of classic and new properties involving diagonalizations of dense families in topological spaces. We provide complete classification of these 15 March 2014 Mohammad Akbari tootkaboni | Zeinab Eslami We introduce some notions of density in a locally compact topological group G which extend some notions in discrete semigroups in which density is based on nets of finite sets. The new notions are 15 March 2014 P. Szewczak Let P be the class of all spaces whose Cartesian product with every paracompact space is paracompact. We prove that if X is a paracompact, first-countable GO-space with σDC dense subset then X∈P 15 March 2014 R. Rojas-Hernández The notion of monotonically monolithic space was introduced by V.V. Tkachuk in 2009 [8]. In this paper we introduce the notion of monotone stability and show that a space Cp(X) is monotonically 15 March 2014 Garith Botha | Yevhen Zelenyuk | Yuliya Zelenyuk Let G be a countably infinite discrete group and let βG be the Stone–Čech compactification of G. Let I denote the finest decomposition of G⁎=βG∖G into closed left ideals of βG with the property 15 March 2014 Hans-Peter A. Künzi | Attila Losonczi For any positive integer n>1 we construct on an infinite set a maximal pairwise complementary family of partial orders that has n elements. The example is motivated by a question of J. Steprāns 15 March 2014 Fucai Lin In this paper, we mainly discuss the cardinal invariants on some class of paratopological groups. For each i∈{0,1,2,3,3.5}, we define the class of locally Ti-minimal paratopological groups by the 15 March 2014 A. Taherifar By a characterization of semiprime SA-rings by Birkenmeier, Ghirati and Taherifar in [4, Theorem 4.4], and by the topological characterization of C(X) as a Baer-ring by Stone and Nakano in [11, 15 March 2014 Jungsoo Kim In this article, we give a sufficient condition for a Heegaard splitting of non-splittable, tunnel number two, three component link exterior to be critical. Moreover, we prove that if F is a genus 15 March 2014 Hui Wang | Fengchun Lei | Lidong Wang It is known that in a compact dynamical system, the whole space can be a Li–Yorke scrambled set, but this does not hold for distributional chaos. In this paper we prove that the complement of a 15 March 2014 Alfonso Artigue We show that every positive expansive flow on a compact metric space consists of a finite number of periodic orbits and fixed points.... Available online 14 March 2014 S.I. Bogataya | S.A. Bogatyy We find the high commutants of the Jennings group J(Z2) of substitutions of formal power series with coefficients in the ring Z2. We explicitly provide the corresponding abelianizing Available online 14 March 2014 Strashimir G. Popvassilev | John E. Porter We define some monotone properties using stars of coverings. This relates to work of J. van Mill, V. Tkachuk, R. Wilson, O. Alas, L. Junqueira, M. Matveev and others who generalized the D-space Available online 14 March 2014 Dharmanand Baboolal N-star compactifications of frames are introduced as the analog of the known concept of N-point compactification of a topological space due to Magill [5]. We characterize such frames and we show Available online 14 March 2014 Dušan Repovš | Lyubomyr Zdomskyy We prove that under certain set-theoretic assumptions every productively Lindelöf space has the Hurewicz covering property, thus improving upon some earlier results of Aurichi and Tall.... Available online 13 March 2014 Jiling Cao | Heikki J.K. Junnila In this paper, we study normality and metrizability of Wijsman hyperspaces. We show that every hereditarily normal Wijsman hyperspace is metrizable. This provides a partial answer to a problem of Available online 13 March 2014 Taras Banakh | Igor Protasov | Olga Sipacheva Given a set X and a family G of self-maps of X, we study the problem of the existence of a non-discrete Hausdorff topology on X with respect to which all functions f∈G are continuous. A topology Available online 13 March 2014 Vitaly V. Fedorchuk We introduce and investigate transfinite dimensions tr-(m,n)-Ind, where m,n are positive integers, n⩽m. For n=1 these dimension functions were introduced in [5] and investigated in [6].... Available online 13 March 2014 A.V. Arhangel'skii | M.M. Choban | M.A. Al Shumrani Following a general idea in [6,7], we introduce and study in this paper the concept of a JPM-space. We call in this way topological spaces admitting a metric which metrizes every metrizable Available online 13 March 2014 Javier Gutiérrez García | Jorge Picado | María Ángeles de Prada Vicente Monotone normality is usually defined in the class of T1 spaces. In this paper we study it under the weaker condition of subfitness, a separation condition that originates in pointfree topology. Available online 12 March 2014 Dimitris N. Georgiou | Athanasios C. Megaritis | Seithuti P. Moshokoa Alexandroff spaces include finite spaces and have a wide range of applications in many areas such as computer graphics and image analysis. We give results on small inductive dimension of Available online 12 March 2014 Alexander Dranishnikov | Michael Zarichnyi The notion of the decomposition complexity was introduced in [14] using a game theoretic approach. We introduce a notion of straight decomposition complexity and compare it with the original as
{"url":"http://www.journals.elsevier.com/topology-and-its-applications/recent-articles/","timestamp":"2014-04-17T04:09:42Z","content_type":null,"content_length":"104867","record_id":"<urn:uuid:2c1c345d-7fba-4fc3-9cfd-7c249b238e67>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
On the optimality of conditional expectation as a bregman predictor Results 1 - 10 of 30 - JOURNAL OF MACHINE LEARNING RESEARCH , 2005 "... A wide variety of distortion functions are used for clustering, e.g., squared Euclidean distance, Mahalanobis distance and relative entropy. In this paper, we propose and analyze parametric hard and soft clustering algorithms based on a large class of distortion functions known as Bregman divergence ..." Cited by 310 (52 self) Add to MetaCart A wide variety of distortion functions are used for clustering, e.g., squared Euclidean distance, Mahalanobis distance and relative entropy. In this paper, we propose and analyze parametric hard and soft clustering algorithms based on a large class of distortion functions known as Bregman divergences. The proposed algorithms unify centroid-based parametric clustering approaches, such as classical kmeans and information-theoretic clustering, which arise by special choices of the Bregman divergence. The algorithms maintain the simplicity and scalability of the classical kmeans algorithm, while generalizing the basic idea to a very large class of clustering loss functions. There are two main contributions in this paper. First, we pose the hard clustering problem in terms of minimizing the loss in Bregman information, a quantity motivated by rate-distortion theory, and present an algorithm to minimize this loss. Secondly, we show an explicit bijection between Bregman divergences and exponential families. The bijection enables the development of an alternative interpretation of an ecient EM scheme for learning models involving mixtures of exponential distributions. This leads to a simple soft clustering algorithm for all Bregman divergences. - In KDD , 2004 "... Co-clustering is a powerful data mining technique with varied applications such as text clustering, microarray analysis and recommender systems. Recently, an informationtheoretic co-clustering approach applicable to empirical joint probability distributions was proposed. In many situations, co-clust ..." Cited by 97 (25 self) Add to MetaCart Co-clustering is a powerful data mining technique with varied applications such as text clustering, microarray analysis and recommender systems. Recently, an informationtheoretic co-clustering approach applicable to empirical joint probability distributions was proposed. In many situations, co-clustering of more general matrices is desired. In this paper, we present a substantially generalized co-clustering framework wherein any Bregman divergence can be used in the objective function, and various conditional expectation based constraints can be considered based on the statistics that need to be preserved. Analysis of the coclustering problem leads to the minimum Bregman information principle, which generalizes the maximum entropy principle, and yields an elegant meta algorithm that is guaranteed to achieve local optimality. Our methodology yields new algorithms and also encompasses several previously known clustering and co-clustering algorithms based on alternate minimization. - JOURNAL OF MACHINE LEARNING RESEARCH , 2009 "... We unify f-divergences, Bregman divergences, surrogate regret bounds, proper scoring rules, cost curves, ROC-curves and statistical information. We do this by systematically studying integral and variational representations of these various objects and in so doing identify their primitives which all ..." Cited by 17 (6 self) Add to MetaCart We unify f-divergences, Bregman divergences, surrogate regret bounds, proper scoring rules, cost curves, ROC-curves and statistical information. We do this by systematically studying integral and variational representations of these various objects and in so doing identify their primitives which all are related to cost-sensitive binary classification. As well as developing relationships between generative and discriminative views of learning, the new machinery leads to tight and more general surrogate regret bounds and generalised Pinsker inequalities relating f-divergences to variational divergence. The new viewpoint also illuminates existing algorithms: it provides a new derivation of Support Vector Machines in terms of divergences and relates Maximum Mean Discrepancy to Fisher Linear Discriminants. - IEEE Transactions on Information Theory , 2009 "... Abstract—In this paper, we generalize the notions of centroids (and barycenters) to the broad class of information-theoretic distortion measures called Bregman divergences. Bregman divergences form a rich and versatile family of distances that unifies quadratic Euclidean distances with various well- ..." Cited by 15 (7 self) Add to MetaCart Abstract—In this paper, we generalize the notions of centroids (and barycenters) to the broad class of information-theoretic distortion measures called Bregman divergences. Bregman divergences form a rich and versatile family of distances that unifies quadratic Euclidean distances with various well-known statistical entropic measures. Since besides the squared Euclidean distance, Bregman divergences are asymmetric, we consider the left-sided and right-sided centroids and the symmetrized centroids as minimizers of average Bregman distortions. We prove that all three centroids are unique and give closed-form solutions for the sided centroids that are generalized means. Furthermore, we design a provably fast and efficient arbitrary close approximation algorithm for the symmetrized centroid based on its exact geometric characterization. The geometric approximation algorithm requires only to walk on a geodesic linking the two left/right-sided centroids. We report on our implementation for computing entropic centers of image histogram clusters and entropic centers of multivariate normal distributions that are useful operations for processing multimedia information and retrieval. These experiments illustrate that our generic methods compare favorably with former limited ad hoc methods. Index Terms—Bregman divergence, Bregman information, Bregman power divergence, Burbea–Rao divergence, centroid, - Journal of Machine Learning Research , 2007 "... Quadratic discriminant analysis is a common tool for classification, but estimation of the Gaussian parameters can be ill-posed. This paper contains theoretical and algorithmic contributions to Bayesian estimation for quadratic discriminant analysis. A distribution-based Bayesian classifier is deriv ..." Cited by 9 (3 self) Add to MetaCart Quadratic discriminant analysis is a common tool for classification, but estimation of the Gaussian parameters can be ill-posed. This paper contains theoretical and algorithmic contributions to Bayesian estimation for quadratic discriminant analysis. A distribution-based Bayesian classifier is derived using information geometry. Using a calculus of variations approach to define a functional Bregman divergence for distributions, it is shown that the Bayesian distribution-based classifier that minimizes the expected Bregman divergence of each class conditional distribution also minimizes the expected misclassification cost. A series approximation is used to relate regularized discriminant analysis to Bayesian discriminant analysis. A new Bayesian quadratic discriminant analysis classifier is proposed where the prior is defined using a coarse estimate of the covariance based on the training data; this classifier is termed BDA7. Results on benchmark data sets and simulations show that BDA7 performance is competitive with, and in some cases significantly better than, regularized quadratic discriminant analysis and the cross-validated Bayesian quadratic discriminant analysis classifier Quadratic Bayes. - IEEE Trans. Pattern Anal. Mach. Intell., 2009 [Online]. Available: http://ieeexplore.ieee.org/xpl/preabsprintf.jsp?arnumber=4626960 "... Abstract—Bartlett et al. (2006) recently proved that a ground condition for surrogates, classification calibration, ties up their consistent minimization to that of the classification risk, and left as an important problem the algorithmic questions about their minimization. In this paper, we address ..." Cited by 8 (6 self) Add to MetaCart Abstract—Bartlett et al. (2006) recently proved that a ground condition for surrogates, classification calibration, ties up their consistent minimization to that of the classification risk, and left as an important problem the algorithmic questions about their minimization. In this paper, we address this problem for a wide set which lies at the intersection of classification calibrated surrogates and those of Murata et al. (2004). This set coincides with those satisfying three common assumptions about surrogates. Equivalent expressions for the members—sometimes well known—follow for convex and concave surrogates, frequently used in the induction of linear separators and decision trees. Most notably, they share remarkable algorithmic features: for each of these two types of classifiers, we give a minimization algorithm provably converging to the minimum of any such surrogate. While seemingly different, we show that these algorithms are offshoots of the same “master ” algorithm. This provides a new and broad unified account of different popular algorithms, including additive regression with the squared loss, the logistic loss, and the top-down induction performed in CART, C4.5. Moreover, we show that the induction enjoys the most popular boosting features, regardless of the surrogate. Experiments are provided on 40 readily available domains. - IEEE Transactions on Information Theory , 2009 "... We provide self-contained proof of a theorem relating probabilistic coherence of forecasts to their non-domination by rival forecasts with respect to any proper scoring rule. The theorem recapitulates insights achieved by other investigators, and clarifies the connection of coherence and proper scor ..." Cited by 8 (4 self) Add to MetaCart We provide self-contained proof of a theorem relating probabilistic coherence of forecasts to their non-domination by rival forecasts with respect to any proper scoring rule. The theorem recapitulates insights achieved by other investigators, and clarifies the connection of coherence and proper scoring rules to Bregman divergence. 1 - CoRR "... Abstract—A class of distortions termed functional Bregman divergences is defined, which includes squared error and relative entropy. A functional Bregman divergence acts on functions or distributions, and generalizes the standard Bregman divergence for vectors and a previous pointwise Bregman diverg ..." Cited by 7 (1 self) Add to MetaCart Abstract—A class of distortions termed functional Bregman divergences is defined, which includes squared error and relative entropy. A functional Bregman divergence acts on functions or distributions, and generalizes the standard Bregman divergence for vectors and a previous pointwise Bregman divergence that was defined for functions. A recent result showed that the mean minimizes the expected Bregman divergence. The new functional definition enables the extension of this result to the continuous case to show that the mean minimizes the expected functional Bregman divergence over a set of functions or distributions. It is shown how this theorem applies to the Bayesian estimation of distributions. Estimation of the uniform distribution from independent and identically drawn samples is presented as a case study. Index Terms—Bayesian estimation, Bregman divergence, convexity, Fréchet derivative, uniform distribution. - Int. Symp. Inf. Theory , 2008 "... Abstract — To characterize the differences between two positive functions or two distributions, a class of distortion functions has recently been defined termed the functional Bregman divergences. The class generalizes the standard Bregman divergence defined for vectors, and includes total squared d ..." Cited by 5 (0 self) Add to MetaCart Abstract — To characterize the differences between two positive functions or two distributions, a class of distortion functions has recently been defined termed the functional Bregman divergences. The class generalizes the standard Bregman divergence defined for vectors, and includes total squared difference and relative entropy. Recently a key property was discovered for the vector Bregman divergence: that the mean minimizes the average Bregman divergence for a finite set of vectors. In this paper the analog result is proven: that the mean function minimizes the average Bregman divergence for a set of positive functions that can be parameterized by a finite number of parameters. In addition, the relationship of the functional Bregman divergence to the vector Bregman divergence and pointwise Bregman divergence is stated, as well as some important properties. I. , 2010 "... We present the elements of a new approach to the foundations of quantum theory and information theory which is based on the algebraic approach to integration, information geometry, and maximum relative entropy methods. It enables us to deal with conceptual and mathematical problems of quantum theory ..." Cited by 4 (4 self) Add to MetaCart We present the elements of a new approach to the foundations of quantum theory and information theory which is based on the algebraic approach to integration, information geometry, and maximum relative entropy methods. It enables us to deal with conceptual and mathematical problems of quantum theory without any appeal to Hilbert space framework and without frequentist or subjective interpretation of probability. PACS: 89.70.Cf 02.50.Cw 03.67.-a 03.65.-w 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=345731","timestamp":"2014-04-19T17:46:44Z","content_type":null,"content_length":"40863","record_id":"<urn:uuid:aea480ca-f16d-4d55-be44-194e410c4f8a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
Branch Current Method,Loop Current Method,Equivalent Circuits,Thevenin’s and Norton’s Theorem,Determination of Thevenin and Norton Circuit Elements : Missing argument 2 for wpdb::prepare(), called in /home3/nithish/public_html/btechzone.com/wp-content/plugins/sharebar/sharebar.php on line 112 and defined in on line 992 Warning : Missing argument 2 for wpdb::prepare(), called in /home3/nithish/public_html/btechzone.com/wp-content/plugins/sharebar/sharebar.php on line 124 and defined in on line Use both of Kirchoff’s laws. But be aware that an arbitrary application of Kirchoff’s two equations will not always yield an independent set of equations. But the following approach will probably 1. Label the current in each branch (do not worry about the direction of the actual current). 2. Use only interior loops and all but one node. 3. Solve the system of algebraic equations. 4. This method is also referred to as the mesh loop method. The independent current variables are taken to be the circulating current in each of the interior loops. 1. Label interior loop currents on a diagram. 2. Obtain expressions for the voltage changes around each interior loop. 3. Solve the system of algebraic equations. Depending on the problem, it may ultimately be necessary to algebraically sum two loop currents in order to obtain the needed interior branch current for the final answer. Lets consider the example of the Wheatstone bridge circuit shown in figure 1.6. We wish to calculate the currents around the loops. The three currents are identified as: Figure 1.6: Loop method for the Wheatstone bridge circuit. Collecting terms containing the same current gives If the values for the parameters shown in the diagram are used, the current values can be found by solving the set of simultaneous equations to give Moreover, if we number the individual currents through each resistor using the same scheme as we have for each component (current through These are the same currents that would be found using only Kirchoff’s equations; however, here we had to handle only three simultaneous equations instead of six. Example: Use the loop current method to determine the voltage developed across the terminals AB in the circuit shown in figure 1.7. Figure 1.7: Example circuit for analysis using the loop current method. Consider the clockwise current loop Solving the above two equations for the unknown loop currents The voltage across AB is given simply by Equivalent circuits is often the hardest concept and most numerically intensive in the course. Learning them well could make a difference on your midterm exam. Look in several books until you find the explanation you understand best. Since Ohm’s law and Kirchoff’s equations are linear, we can replace any DC circuit by a simplified circuit. Just like a combination of resistors and Ohm’s law could give an equivalent resistor, a combination of circuit elements and Kirchoff’s laws can give an equivalent circuit. Two possibilities are shown in figure A Thevenin equivalent circuit contains an equivalent voltage source Figure 1.8: Thevenin and Norton equivalent circuits. One approach to determine the equivalent circuits is: 1. Thevenin – calculate the open-circuit voltage 2. Norton – calculate the short-circuit current between A and B; An alternative to step 2 is to short all voltage sources, open all current sources, and calculate the equivalent resistance remaining between A and B. We will use the latter approach whenever manageable. To see if you understand equivalent circuits so far, convince yourself that Solution: From Thevenin’s theorem According to Notron’s theorem Lets now return to our Wheatstone bridge example shown in figure 1.6. We will calculate the current through ☆ The evaluation of The result is Figure 1.9: Thevenin’s theorem applied to the Wheatstone bridge circuit. ☆ The voltage source is shorted out and 1.9): Note that when the source is shorted out, the resistors that were in series ( ☆ The network is assembled in series as shown in figure 1.9 and the current through Note that the numerical value of the current is the same as that in the preceding calculations, but the sign is opposite. This is simply due to the incorrect choice of polarity of Example: Find the Thevenin equivalent components 1.10. Figure 1.10: Example circuit for analysis using a Thevenin equivalent circuit. Shorting the V‘s to find The open circuit voltage gives
{"url":"http://btechzone.com/branch-current-methodloop-current-methodequivalent-circuitsthevenins-and-nortons-theoremdetermination-of-thevenin-and-norton-circuit-elements","timestamp":"2014-04-20T11:43:10Z","content_type":null,"content_length":"55832","record_id":"<urn:uuid:e0e023c2-75dd-483e-b1b9-a76ac540fbe9>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Impulse - definition of impulse Impulse is defined as a multiplied by the amount of time it acts over. In calculus terms, the impulse can be calculated as the integral of force with respect to time. Alternately, impulse can be calculated as the difference between two given instances. The SI units of impulse are N*s or kg*m/s.
{"url":"http://physics.about.com/od/glossary/g/Impulse.htm","timestamp":"2014-04-20T15:52:09Z","content_type":null,"content_length":"34690","record_id":"<urn:uuid:62ae8a92-614f-44b9-8b70-f0b01473e914>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
Recent Topology and its Applications Articles Recently published articles from Topology and its Applications. 15 April 2014 Tulsi Srinivasan We extend the theory of the Lusternik–Schnirelmann category to general metric spaces by means of covers by arbitrary subsets. We also generalize the definition of the strict category weight. We 15 April 2014 Ryszard Frankiewicz | Sławomir Szczepaniak It is proved that no non-meager subspace of the space [ω]ω equipped with the Ellentuck topology does admit a Kuratowski partition, that is such a subset cannot be covered by a family F of disjoint 15 April 2014 Jaka Smrekar Let f:Z→Z be a self-map on the topological space Z. Generalizing the well-known factorization of a map into the composite of a homotopy equivalence and a Hurewicz fibration we prove that f is 15 April 2014 Damian Sawicki We show that for a given metric space (X,d) of asymptotic dimension n there exists a coarsely and topologically equivalent hyperbolic metric d′ of the form d′=f∘d such that (X,d′) is of asymptotic 15 April 2014 M. Ghirati | A. Taherifar In this paper, we present a new characterization for the intersection of all essential (resp., free) maximal ideals of C(X). Thereafter topological spaces X for which CF(X) and CK(X) are equal to 15 April 2014 Jindřich Zapletal There is a proper Baire category preserving forcing which adds infinitely equal real but no Cohen real. This resolves a long-standing open problem of David Fremlin. The forcing has a natural 15 April 2014 Oliver Goertsches | Augustin-Liviu Mare We show that if G×M→M is a cohomogeneity one action of a compact connected Lie group G on a compact connected manifold M then HG⁎(M) is a Cohen–Macaulay module over H⁎(BG). Moreover, this module 15 April 2014 Louis Block | Dennis Ledis Let f denote a continuous map of the compact interval to itself. Suppose that f is topologically transitive. We show that if f exhibits a pattern and has the same topological entropy as the 15 April 2014 Fucai Lin | Kexiu Zhang | Changqing Li | Wei Chen In this short article, we give complete answers to four problems of Arhangel'skii on diagonal-flexible spaces or rotoids. We list the four problems as follows:... 15 April 2014 Strashimir G. Popvassilev | John E. Porter Gartside and Moody proved that a space is protometrizable if and only if it has a monotone star-refinement operator on open covers. They called this property monotone paracompactness but noted 15 April 2014 Claudia G. Domínguez-López Let C(X) be the hyperspace of subcontinua of a continuum X. Given a proper subcontinuum A of X, we study the boundary Bd(C(A)) of C(A) in C(X). We show that Bd(C(A)) is always arcwise connected, 1 April 2014 Naseer Shahzad | W.A. Kirk | Maryam A. Alghamdi It is shown that several theorems known to hold in complete geodesically bounded R-trees extend to arcwise connected Hausdorff topological spaces which have the property that every monotone 1 April 2014 Włodzimierz J. Charatonik | Robert P. Roe We are defining a new operator called Mahavier product. This operator generalizes the inverse limit operator with multivalued functions first introduced in [6] by William S. Mahavier in 2004. 1 April 2014 Olivier Olela Otafudu In [4] Kemajou et al. constructed the injective hull in the category of T0-quasi-metric spaces with nonexpansive maps that they called q-hyperconvex hull. In this paper, we study properties of 1 April 2014 Dominique Lecomte | Miroslav Zelený We give, for each countable ordinal ξ⩾1, an example of a Δ20 countable union of Borel rectangles that cannot be decomposed into countably many Πξ0 rectangles. In fact, we provide a graph of a 1 April 2014 S. Todorčević | C. Uzcátegui A topological space X is said to be maximal if its topology is maximal among all T1 topologies over X without isolated points. It is known that a space is maximal if, and only if, it is extremely 1 April 2014 James P. Kelly In this paper, we develop a definition for a class of set-valued functions which will be called irreducible functions. We show that these functions can be used to obtain an indecomposable 1 April 2014 Peter Ozsváth | András I. Stipsicz | Zoltán Szabó We provide an integral lift of the combinatorial definition of Heegaard Floer homology for nice diagrams, and show that the proof of independence using convenient diagrams adapts to this setting.... 1 April 2014 Z. Ercan Starting with the initial paper of Huang and Zhang [2] in 2007, more than six hundred papers dealing with cone metric spaces have been published so far. In this short note we present a different 1 April 2014 Sina Greenwood | Judy Kennedy We introduce a new tool which we call an Ingram–Mahavier product to aid in the study of inverse limits with set-valued functions, and with this tool, obtain some new results about the Available online 24 March 2014 Guillaume C.L. Brümmer This article gives a brief outline of the life of Sergio Salbany (1941–2005) and of his mathematical career, coupled with his social outreach and political activity in South Africa until 1980, in Available online 20 March 2014 Dikran Dikranjan | Anna Giordano Bruno For a totally disconnected locally compact abelian group, we prove that the topological entropy of a continuous endomorphism coincides with the algebraic entropy of the dual endomorphism with Available online 17 March 2014 Mitrofan M. Choban | Petar S. Kenderov | Warren B. Moors For a pseudocompact (strongly pseudocompact) space T we show that every strongly bounded (bounded) subset A of the space C(T) of all continuous functions on T has compact closure with respect to Available online 17 March 2014 Taras Banakh | Bogdan Bokalo | Nadiya Kolos Given a topological space X, we study the structure of ∞-convex subsets in the space SCp(X) of scatteredly continuous functions on X. Our main result says that for a topological space X with 15 March 2014 A. Arbieto | M. Tavares In this article we show that any conservative volume preserving homeomorphism of Rn can be approximated in the uniform topology by transitive one. We also prove some results about the 15 March 2014 Maddalena Bonanzinga | Filippo Cammaroto | Bruno Antonio Pansera | Boaz Tsaban We develop a unified framework for the study of classic and new properties involving diagonalizations of dense families in topological spaces. We provide complete classification of these 15 March 2014 Mohammad Akbari tootkaboni | Zeinab Eslami We introduce some notions of density in a locally compact topological group G which extend some notions in discrete semigroups in which density is based on nets of finite sets. The new notions are 15 March 2014 P. Szewczak Let P be the class of all spaces whose Cartesian product with every paracompact space is paracompact. We prove that if X is a paracompact, first-countable GO-space with σDC dense subset then X∈P 15 March 2014 R. Rojas-Hernández The notion of monotonically monolithic space was introduced by V.V. Tkachuk in 2009 [8]. In this paper we introduce the notion of monotone stability and show that a space Cp(X) is monotonically 15 March 2014 Garith Botha | Yevhen Zelenyuk | Yuliya Zelenyuk Let G be a countably infinite discrete group and let βG be the Stone–Čech compactification of G. Let I denote the finest decomposition of G⁎=βG∖G into closed left ideals of βG with the property 15 March 2014 Hans-Peter A. Künzi | Attila Losonczi For any positive integer n>1 we construct on an infinite set a maximal pairwise complementary family of partial orders that has n elements. The example is motivated by a question of J. Steprāns 15 March 2014 Fucai Lin In this paper, we mainly discuss the cardinal invariants on some class of paratopological groups. For each i∈{0,1,2,3,3.5}, we define the class of locally Ti-minimal paratopological groups by the 15 March 2014 A. Taherifar By a characterization of semiprime SA-rings by Birkenmeier, Ghirati and Taherifar in [4, Theorem 4.4], and by the topological characterization of C(X) as a Baer-ring by Stone and Nakano in [11, 15 March 2014 Jungsoo Kim In this article, we give a sufficient condition for a Heegaard splitting of non-splittable, tunnel number two, three component link exterior to be critical. Moreover, we prove that if F is a genus 15 March 2014 Hui Wang | Fengchun Lei | Lidong Wang It is known that in a compact dynamical system, the whole space can be a Li–Yorke scrambled set, but this does not hold for distributional chaos. In this paper we prove that the complement of a 15 March 2014 Alfonso Artigue We show that every positive expansive flow on a compact metric space consists of a finite number of periodic orbits and fixed points.... Available online 14 March 2014 S.I. Bogataya | S.A. Bogatyy We find the high commutants of the Jennings group J(Z2) of substitutions of formal power series with coefficients in the ring Z2. We explicitly provide the corresponding abelianizing Available online 14 March 2014 Strashimir G. Popvassilev | John E. Porter We define some monotone properties using stars of coverings. This relates to work of J. van Mill, V. Tkachuk, R. Wilson, O. Alas, L. Junqueira, M. Matveev and others who generalized the D-space Available online 14 March 2014 Dharmanand Baboolal N-star compactifications of frames are introduced as the analog of the known concept of N-point compactification of a topological space due to Magill [5]. We characterize such frames and we show Available online 14 March 2014 Dušan Repovš | Lyubomyr Zdomskyy We prove that under certain set-theoretic assumptions every productively Lindelöf space has the Hurewicz covering property, thus improving upon some earlier results of Aurichi and Tall.... Available online 13 March 2014 Jiling Cao | Heikki J.K. Junnila In this paper, we study normality and metrizability of Wijsman hyperspaces. We show that every hereditarily normal Wijsman hyperspace is metrizable. This provides a partial answer to a problem of Available online 13 March 2014 Taras Banakh | Igor Protasov | Olga Sipacheva Given a set X and a family G of self-maps of X, we study the problem of the existence of a non-discrete Hausdorff topology on X with respect to which all functions f∈G are continuous. A topology Available online 13 March 2014 Vitaly V. Fedorchuk We introduce and investigate transfinite dimensions tr-(m,n)-Ind, where m,n are positive integers, n⩽m. For n=1 these dimension functions were introduced in [5] and investigated in [6].... Available online 13 March 2014 A.V. Arhangel'skii | M.M. Choban | M.A. Al Shumrani Following a general idea in [6,7], we introduce and study in this paper the concept of a JPM-space. We call in this way topological spaces admitting a metric which metrizes every metrizable Available online 13 March 2014 Javier Gutiérrez García | Jorge Picado | María Ángeles de Prada Vicente Monotone normality is usually defined in the class of T1 spaces. In this paper we study it under the weaker condition of subfitness, a separation condition that originates in pointfree topology. Available online 12 March 2014 Dimitris N. Georgiou | Athanasios C. Megaritis | Seithuti P. Moshokoa Alexandroff spaces include finite spaces and have a wide range of applications in many areas such as computer graphics and image analysis. We give results on small inductive dimension of Available online 12 March 2014 Alexander Dranishnikov | Michael Zarichnyi The notion of the decomposition complexity was introduced in [14] using a game theoretic approach. We introduce a notion of straight decomposition complexity and compare it with the original as
{"url":"http://www.journals.elsevier.com/topology-and-its-applications/recent-articles/","timestamp":"2014-04-17T04:09:42Z","content_type":null,"content_length":"104867","record_id":"<urn:uuid:2c1c345d-7fba-4fc3-9cfd-7c249b238e67>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
Ridgewood, NJ Precalculus Tutor Find a Ridgewood, NJ Precalculus Tutor ...I scored 800 on the Math section of the SAT, and a 34 for my overall ACT score. I have also recently passed the Praxis II Exam in Math Content Knowledge while earning Recognition of Excellence for scoring in the top 15 percent over the last 5 years. My most valuable quality, however, is my abil... 22 Subjects: including precalculus, reading, English, chemistry ...I have tutored SAT prep (reading, writing, and math) both privately and for the Princeton Review. I earned a BA from the University of Pennsylvania and an MA from Georgetown University. I have tutored GRE prep both privately and for the Princeton Review. 20 Subjects: including precalculus, English, algebra 2, grammar ...Although I can tutor many different courses, I should mention that because I worked at a liberal arts college, I have a good deal of experience alongside students struggling to understand basic math and pre-algebra. I worked with people anywhere from teenagers to adults in their mid-to-late fift... 28 Subjects: including precalculus, chemistry, writing, physics ...I also always have the student model and explain what they've learned, showing their process for deriving an answer. In this way, I know that they fully understand and will hopefully accomplish learning other math concepts more easily. I am fulfilled the most about teaching math when students who do not like it, recognize their ability to learn it well. 16 Subjects: including precalculus, chemistry, calculus, geometry ...These office hours were for any math students, so I quickly became adept at answering questions about almost any math-related problem, be it a problem with an integral for a calculus student, or a misunderstanding about trigonometry. I try to guide students to understanding the material by tryin... 18 Subjects: including precalculus, physics, calculus, trigonometry Related Ridgewood, NJ Tutors Ridgewood, NJ Accounting Tutors Ridgewood, NJ ACT Tutors Ridgewood, NJ Algebra Tutors Ridgewood, NJ Algebra 2 Tutors Ridgewood, NJ Calculus Tutors Ridgewood, NJ Geometry Tutors Ridgewood, NJ Math Tutors Ridgewood, NJ Prealgebra Tutors Ridgewood, NJ Precalculus Tutors Ridgewood, NJ SAT Tutors Ridgewood, NJ SAT Math Tutors Ridgewood, NJ Science Tutors Ridgewood, NJ Statistics Tutors Ridgewood, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/ridgewood_nj_precalculus_tutors.php","timestamp":"2014-04-19T05:02:48Z","content_type":null,"content_length":"24300","record_id":"<urn:uuid:d3cf4c78-4aad-480e-9fc8-80008c3892c6>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
Maximal number of edges in a DAG when we bound the degree of the nodes up vote 2 down vote favorite Does anyone know how to build an acyclic directed graph with N nodes such that both: (1) the degree of the nodes is bounded (say less than k); and (2) the number of edges in the graph is maximal ? I know that the problem is trivial if I lift condition (1); the maximal number of edges in this case is N(N-1)/2. Could you point me to the right theory to apply if I am interested by graphs where the average degree is less than k, or where all but a fixed percentage of nodes has degree less than k ? Should this DAG be a subgraph of some given graph? – Chad Musick Oct 11 '11 at 11:45 add comment 1 Answer active oldest votes Well, this is embarrassing. I was able to answer the first half of my question after a quick discussion with a student. If I assume that $N \gg k$, the maximal number of edges in my graph is in $\Theta(k.N)$, as expected. The most precise answer, assuming that $N > k$ is $k.(N-k) + k.(k-1) / 2$. To build an example, I can start with a DAG that has k nodes and a "maximal number of edges". Then I recursively add a vertex that has k edges towards the graph obtained at the previous up vote 2 I am still interested by good references for solving the problem using random directed graphs with a bounded average out-degree. down vote To give some background to why I was asking this question. I am interested by the worst-case complexity of the following problem. I start with a directed graph and recursively remove all the leaves (nodes with out-degree zero) until I am blocked. When I remove a vertex, I also remove all the edges to and from this vertex. I know that the graph is a DAG if and only if I can remove all its vertices. In the context of my problem, the cost of each iteration is equal to the number of vertices in the graph (at that particular time), plus the number of transitions. You might find "DAG-width and parity games" (Berwanger, Dawar, Hunter, and Kreutzer) helpful: springerlink.com/content/x3316wx248373vvk – Chad Musick Oct 11 '11 at 13:43 add comment Not the answer you're looking for? Browse other questions tagged graph-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/77804/maximal-number-of-edges-in-a-dag-when-we-bound-the-degree-of-the-nodes","timestamp":"2014-04-19T02:39:00Z","content_type":null,"content_length":"53453","record_id":"<urn:uuid:3d84a0ae-869f-4d02-9fea-2d540d3cd764>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
How to convert string equation to number in javascript? up vote 0 down vote favorite How to convert string equation to number in javascript? Say I have "100*100" or "100x100" how do I evaluate that and convert to a number? javascript jquery html 3 Is this homework? What have you tried so far? – JohnFx Apr 23 '12 at 18:48 2 This is not homework, I'm attempting to convert lot sizes into the longforms from shorthand directly at the user input. I have tried eval() and parseInt and parseFloat. – Event_Horizon Apr 23 '12 at 18:49 What have you tried so far? – JohnFx Apr 23 '12 at 18:50 add comment 3 Answers active oldest votes This will give you the product whether the string is using * or x: var str = "100x100"; up vote 2 down vote accepted var tmp_arr = str.split(/[*x]/); var product = tmp_arr[0]*tmp_arr[1]; // 10000 1 This is just what I was looking for thanks. – Event_Horizon Apr 23 '12 at 19:03 jsfiddle.net/jlaceda/hcDra example in a function with tests. – jlaceda Apr 23 '12 at 19:12 @Event_Horizon You're welcome :) – Paulpro Apr 23 '12 at 20:01 add comment If you're sure that the string will always be something like "100*100" you could eval() it, although most people will tell you this isn't a good idea on account of the fact that people could pass in malicious code to be eval'd. >> 10000 Otherwise, you'll have to find or write a custom equation parser. In that case, you might want to take a look at the Shunting-yard algorithm, and read up on parsing. up vote 3 down vote Using split(): var myEquation = "100*100"; var num = myEquation.split("*")[0] * myEquation.split("*")[1]; >> 10000 Is there a way to split it by the x and the numbers,and multiply just the numbers? Would split work for that? – Event_Horizon Apr 23 '12 at 18:52 Yes, I'll update. – Elliot Bonneville Apr 23 '12 at 18:53 add comment use parseInt() ,The parseInt() function parses a string and returns an integer. up vote 0 down vote parseInt("100")*parseInt("100") //10000 add comment Not the answer you're looking for? Browse other questions tagged javascript jquery html or ask your own question.
{"url":"http://stackoverflow.com/questions/10286386/how-to-convert-string-equation-to-number-in-javascript","timestamp":"2014-04-18T23:33:01Z","content_type":null,"content_length":"78363","record_id":"<urn:uuid:e9a0ce9a-7cb6-446a-a2a6-dbaab625eb02>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
Clausal Function Definitions A function may bind more than one argument by using a pattern, rather than a variable, in the argument position. Function expressions may have the form fn pat => exp where pat is a pattern and exp is an expression. Application of such a function proceeds much as before, except that the argument value is matched against the parameter pattern to determine the bindings of zero or more variables, which are then used during the evaluation of the body of the function. For example, we may make the following definition of the Euclidean distance function: val dist : real * real -> real = fn (x:real, y:real) => sqrt (x*x + y*y) This function may then be applied to a pair (two-tuple!) of arguments to yield the distance between them. For example, dist (2.0,3.0) evaluates to (approximately) 4.0. Using fun notation, the distance function may be defined more concisely as follows: fun dist (x:real, y:real):real = sqrt (x*x + y*y) The meaning is the same as the more verbose val binding given earlier. Keyword parameter passing is supported through the use of record patterns. For example, we may define the distance function using keyword parameters as follows: fun dist' {x=x:real, y=y:real} = sqrt (x*x + y*y) The expression dist' {x=2.0,y=3.0} invokes this function with the indicated x and y values. Functions with multiple results may be thought of as functions yielding tuples (or records). For example, we might compute two different notions of distance between two points at once as follows: fun dist2 (x:real, y:real):real*real = (sqrt (x*x+y*y), abs(x-y)) Notice that the result type is a pair, which may be thought of as two results. These examples illustrate a pleasing regularity in the design of ML. Rather than introduce ad hoc notions such as multiple arguments, multiple results, or keyword parameters, we make use of the general mechanisms of tuples, records, and pattern matching. It is sometimes useful to have a function to select a particular component from a tuple or record (e.g., the third component or the component labeled url). Such functions may be easily defined using pattern matching. But since they arise so frequently, they are pre-defined in ML using sharp notation. For any record type typ[1]*...*typ[n], and each i between 1 and n, there is a function #i of type typ[1]*...*typ[n]->typ[i] defined as follows: fun #i (_, ..., _, x, _, ..., _) = x where x occurs in the ith position of the tuple (and there are underscores in the other n-1 positions). Thus we may refer to the second field of a three-tuple val by writing #2 val. It is bad style, however, to over-use the sharp notation; code is generally clearer and easier to maintain if you use patterns wherever possible. Compare, for example, the following definition of the Euclidean distance function written using sharp notation with the original. fun dist (p:real*real):real = sqrt((#1 p)*(#1 p)+(#2 p)*(#2 p)) You can easily see that this gets out of hand very quickly, leading to unreadable code. Use of the sharp notation is strongly discouraged! A similar notation is provided for record field selection. The following function #lab selects the component of a record with label lab. fun #lab {lab=x,...} = x Notice the use of ellipsis! Bear in mind the disambiguation requirement: any use of #lab must be in a context sufficient to determine the full record type of its argument. Tuple types have the property that all values of that type have the same form (n-tuples, for some n determined by the type); they are said to be homogeneous. For example, all values of type int*real are pairs whose first component is an integer and whose second component is a real. Any type-correct pattern will match any value of that type; there is no possibility of failure of pattern matching. The pattern (x:int,y:real) is of type int*real and hence will match any value of that type. On the other hand the pattern (x:int,y:real,z:string) is of type int*real*string and cannot be used to match against values of type int*real; attempting to do so fails at compile time. Other types have values of more than one form; they are said to be heterogeneous types. For example, a value of type int might be 0, 1, ~1, ... or a value of type char might be #"a" or #"z". (Other examples of heterogeneous types will arise later on.) Corresponding to each of the values of these types is a pattern that matches only that value. Attempting to match any other value against that pattern fails at execution time. For the time being we will think of match failure as a fatal run-time error, but later on we will see that the extent of the failure can be controlled. Here are some simple examples of pattern-matching against values of a heterogeneous type: val 0 = 1-1 val (0,x) = (1-1, 34) val (0, #"0") = (2-1, #"0") The first two bindings succeed, the third fails. In the case of the second, the variable x is bound to 34 after the match. No variables are bound in the first or third examples. The importance of constant patterns becomes clearer once we consider how to define functions over heterogeneous types. This is achieved in ML using a clausal function definition. The general form of a function is fn pat[1] => exp[1] | ... | pat[n] => exp[n] where each pat[i] is a pattern and each exp[i] is an expression involving the variables of the pattern pat[i]. Each component pat => exp is called a clause or rule; the entire assembly of rules is called a match. The typing rules for matches ensure consistency of the clauses. Specifically, 1. Each pattern in the match must have the same type typ. 2. Each expression in the match must have the same type typ', given the types of the variables in the patterns. The type of a function whose body is a match satisfying these requirements is typ->typ'. Note that there is no requirement that typ and typ' coincide! Application of functions with multiple clauses to a value val proceeds by considering each clause in the order written. At stage i the argument value val is matched against the pattern pat[i]; if the pattern match succeeds, evaluation continues with the evaluation of expression exp[i], with the variables replaced by the values determined by the pattern matching process. Otherwise we proceed to stage i+1. If no pattern matches (i.e., we reach stage n+1), then the application fails with an execution error. Here's an example. val recip : int -> int = fn 0 => 0 | n:int => 1 div n This defines an integer-valued reciprocal function on the integers, where the reciprocal of 0 is arbitrarily defined to be 0. The function has two clauses, one for the argument 0, the other for non-zero arguments n. (Note that n is guaranteed to be non-zero because the patterns are considered in order: we reach the pattern n:int only if the argument fails to match the pattern 0.) Using fun notation we may define recip as follows: fun recip 0 = 0 | recip (n:int) = 1 div n One annoying thing to watch out for is that the "fun" form uses an equal sign to separate the pattern from the expression in a clause, whereas the "fn" form uses an arrow. Heterogeneous types abound. Perhaps the most fundamental one is the type bool of booleans. Its values are true and false. Functions may be defined on booleans using clausal definitions that dispatch on true and false. For example, the negation function is defined clausally as follows: fun not true = false | not false = true In fact, this function is pre-defined in ML. Case analysis on the values of a heterogeneous type is performed by application of a clausally-defined function. The notation case exp of pat[1] => exp[1] | ... | pat[n] => exp[n] is short for the application (fn pat[1] => exp[1] | ... | pat[n] => exp[n]) exp. Evaluation proceeds by first evaluating exp, then matching its value successively against the patterns in the match until one succeeds, and continuing with evaluation of the corresponding expression. The case expression fails if no pattern succeeds to match the value. The conditional expression if exp then exp[1] else exp[2] is short-hand for the case analysis case exp of true => exp[1] | false => exp[2] which is itself short-hand for the application (fn true => exp[1] | false => exp[2]) exp. The "short-circuit" conjunction and disjunction operations are defined as follows. The expression exp[1] andalso exp[2] is short for if exp[1] then exp[2] else false and the expression exp[1] orelse exp[2] is short for if exp[1] then true else exp[2]. You should expand these into case expressions and check that they behave as expected. Pay particular attention to the evaluation order, and observe that the call-by-value principle is not violated by these expressions. Conceptually, equality and comparison operations on the types int, char, and string are defined by infinite (or, at any rate, enormously large) matches, but in practice they are built into the language as primitives. (The ordering on char and string are the lexicographic orderings.) Thus we may write fun is_alpha c:char = (#"a" <= c andalso c <= #"z") orelse (#"A" <= c andalso c <= #"Z") to test for alphabetic characters. All this talk of success and failure of pattern matching brings up the issue of exhaustiveness and redundancy in a match. A clause in a match is redundant if any value matching its pattern must have matched the pattern of a preceding clause in the match. A redundant rule can never be reached during execution. It is always an error to have a redundant clause in a match. Redundant clauses often arise accidentally. For example, the second clause of the following function definition is redundant for annoyingly subtle reasons: fun not True = false | not false = true The mistake is to have capitalized True so that it is no longer the boolean-typed constant pattern, but is rather a variable that matches any value of Boolean type. Hence the second clause is redundant. Reversing the order of clauses can also lead to redundancy, as in the following mistaken definition of recip: fun recip (n:int) = 1 div n | recip 0 = 0 The second clause is redundant because the first clause will always match any integer value, including 0. A match (as a whole) is exhaustive if every possible value of the domain type of the match must match some clause of that match. In other words, pattern matching against an exhaustive pattern cannot fail at run-time. The clauses in the (original) definition of recip are exhaustive because they cover every possible integer value. The match comprising the body of the following function is not fun is_numeric #"0" = true | is_numeric #"1" = true | is_numeric #"2" = true | is_numeric #"3" = true | is_numeric #"4" = true | is_numeric #"5" = true | is_numeric #"6" = true | is_numeric #"7" = true | is_numeric #"8" = true | is_numeric #"9" = true When applied to, say, #"a", this function fails. It is often, but not always, an error to have an inexhaustive match. The reason is that the type system cannot record many invariants (such as the property that is_numeric is only called with a character representing a decimal digit), and hence the compiler will issue a warning about inexhaustive matches. It is a good idea to check each such warning to ensure that you have not accidentally omitted a clause from the match. Any match can be made exhaustive by the inclusion of a catch-all clause of the form _ => exp where exp is an expression of the appropriate type. It is a bad idea to simply stick such a clause at the end of every match in order to eliminate "inexhaustive pattern" warnings. By doing so you give up the possibility that the compiler may warn you of a legitimate error (due to a forgotten case) in your program. The compiler is your friend! Use it to your advantage!
{"url":"http://www.cs.cmu.edu/~rwh/introsml/core/clauses.htm","timestamp":"2014-04-25T02:58:17Z","content_type":null,"content_length":"17611","record_id":"<urn:uuid:b951c63e-5b2f-46f6-acbb-de3f7af4f267>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
Rosemont, IL Statistics Tutor Find a Rosemont, IL Statistics Tutor ...Finally, I tutored math through college to stay fresh, and would be able to work with any student needing assistance at whatever point in their development they encounter geometry. I was an advanced math student, completing the equivalent of pre-algebra in 6th grade. I continued applying algebraic skills in high school, where I was a straight A student and completed Calculus as a 13 Subjects: including statistics, calculus, geometry, algebra 1 ...I am certified to teach English in the state of Illinois and have experience with Chicago Public Schools. I have additional tutoring experience in the subjects of Accounting, Algebra, Biology, Computer Skills, Geometry, History, and Humanities. I also have experience working as both a GED and E... 39 Subjects: including statistics, reading, English, calculus ...I have taken (and got the highest grade) medical school physiology. I am a medical doctor, who practiced for 23 years before leaving the practice of medicine in 2008. I was board certified in family medicine. 17 Subjects: including statistics, chemistry, geometry, reading ...My teaching style is one of: * LISTENING to see what your student is doing; to learn how he or she thinks. * ENCOURAGING students to push a little farther; to show them what they're really capable of. * EXPLAINING to teach what students need, in ways they can understand and remember! I have o... 21 Subjects: including statistics, chemistry, calculus, geometry ...He has been teaching professionally in the Chicago suburbs since 2009. Education: Bachelors of Science in Mathematics, University of Illinois; Associate Degree in Mechanical Engineering, India. Certifications: Certified Teacher (Sub: K-12, Illinois State Board of Education); Certified Paraprofe... 12 Subjects: including statistics, calculus, geometry, algebra 2 Related Rosemont, IL Tutors Rosemont, IL Accounting Tutors Rosemont, IL ACT Tutors Rosemont, IL Algebra Tutors Rosemont, IL Algebra 2 Tutors Rosemont, IL Calculus Tutors Rosemont, IL Geometry Tutors Rosemont, IL Math Tutors Rosemont, IL Prealgebra Tutors Rosemont, IL Precalculus Tutors Rosemont, IL SAT Tutors Rosemont, IL SAT Math Tutors Rosemont, IL Science Tutors Rosemont, IL Statistics Tutors Rosemont, IL Trigonometry Tutors
{"url":"http://www.purplemath.com/Rosemont_IL_Statistics_tutors.php","timestamp":"2014-04-20T01:56:08Z","content_type":null,"content_length":"24256","record_id":"<urn:uuid:58d9a712-d4cb-4587-bd44-13be47668887>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
positive and negative number calculator positive and negative number calculator Related topics: algebra-calculator-software Statistics Formula Cheat Sheet prentice hall mathematics algebra 1 workbook course outline for algebra ii to calculate e value with the help of p and q values in rsa algorithm using java sats paper maths yr 6 level 5 transformations of functions second order linear ti 89 trigonometric equations worksheets Math Transformation Worksheets Author Message Xoywaad Jahempni Posted: Wednesday 19th of Jun 10:19 I have trouble with positive and negative number calculator . I tried a lot to locate somebody who can help me out with this. I also looked out for a tutor to coach me and explain my problems on algebra formulas, ratios and roots. Though I found a few who could possibly crack my problem, I recognized that I cannot manage to pay them. I do not have much time too. My quiz is coming up shortly. I am distressed. Can somebody help me out of this situation? I would greatly value any help or any information. Registered: 23.10.2002 From: San Diego kfir Posted: Thursday 20th of Jun 07:08 Well, I cannot solve your assignment for you as that would mean cheating. However, I can give you an idea. Try using Algebrator . You can find detailed and well explained answers to all your queries in positive and negative number calculator . Registered: 07.05.2006 From: egypt Flash Fnavfy Liom Posted: Friday 21st of Jun 16:07 I am a student turned teacher; I give classes to high school children. Along with the traditional mode of explanation, I use Algebrator to solve examples practically in front of the learners. Registered: 15.12.2001 Verd Viirsdna Posted: Saturday 22nd of Jun 08:35 Amazing! This sounds extremely useful to me. I was looking for such tool only. Please let me know where I can buy this software from? Registered: 27.05.2002 From: Singaraja, Bali, malhus_pitruh Posted: Sunday 23rd of Jun 13:56 A extraordinary piece of math software is Algebrator . Even I faced similar difficulties while solving hypotenuse-leg similarity, angle suplements and factoring. Just by typing in the problem workbookand clicking on Solve – and step by step solution to my algebra homework would be ready. I have used it through several math classes - Algebra 2, Remedial Algebra and Remedial Algebra. I highly recommend the program. Registered: 23.04.2003 From: Girona, Catalunya Dolknankey Posted: Sunday 23rd of Jun 21:31 Here you go, click on this link – http://www.softmath.com/. I personally think it’s a really good software and the fact that they even offer an unconstrained money back guarantee makes it a deal, you can’t miss. Registered: 24.10.2003 From: Where the trout streams flow and the air is nice
{"url":"http://softmath.com/algebra-software/radical-equations/positive-and-negative-number.html","timestamp":"2014-04-19T22:48:11Z","content_type":null,"content_length":"37721","record_id":"<urn:uuid:85b47728-044d-4328-ad43-60a2375ae1cf>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
Is an infinite series of random numbers possible? As far as my limited understanding goes in physics, it is thought that since light can not escape from a black-hole that EM information under this paradigm ought not to either if this model is valid. I don't know about other kinds of information, but at least the implication (and please correct me if I am wrong about this) is that photons under the given conditions that have been observed are not able to escape a black-hole which is where I think all of these ideas stem from. Now from a general point of view we have to consider all the information. In physics we usually associate the information with particles of certain kinds and we have forces for these as well as fields which are accounting in using the modern field theories of physics. Now here's the kicker: what if we haven't found all the information yet? What if there is another particle or something similar with it's own force-carrier and field, or even if it doesn't have a force-carrier and just works completely non-locally? [Speculation]I understand that information which otherwise has the potential to "reach" infinity (the spin and mass effects of gravity, through gravitons, and the charge effects of E-M radiation, through photons) has the potential to escape the black hole's event horizon through Hawking radiation. The photons or gravitons which escape a black hole do so obeying a black body spectrum. What if such particles, according to their specific energies, together fulfill black body states so that such a spectrum is indistinguishable, part photonic and part gravitonic? That is, black body in energy yet anomalous in particle species. The Higgs seems a candidate for an entity of greater information See The Higgs boson is a hypothetical elementary particle predicted by the Standard Model (SM) of particle physics. It belongs to a class of particles known as bosons, characterized by an integer value of their spin quantum number. The Higgs field is a quantum field with a non-zero value that fills all of space, and explains why fundamental particles such as quarks and electrons have mass. The Higgs boson is an excitation of the Higgs field above its ground state. The existence of the Higgs boson is predicted by the Standard Model to explain how spontaneous breaking of electroweak symmetry (the Higgs mechanism) takes place in nature, which in turn explains why other elementary particles have mass. Its discovery would further validate the Standard Model as essentially correct, as it is the only elementary particle predicted by the Standard Model that has not yet been observed in particle physics experiments. The Standard Model completely fixes the properties of the Higgs boson, except for its mass. It is expected to have no spin and no electric or color charge, and it interacts with other particles through weak interaction and Yukawa interactions. Alternative sources of the Higgs mechanism that do not need the Higgs boson are also possible and would be considered if the existence of the Higgs boson were ruled out. They are known as Higgsless models. If you wanted to model this kind of non-local interaction, one way that I see visually is that you can model this kind of information exchange under a situation where the distance between any set of two points is zero. Mathematically, in any space if you have two points then all metrics need to positive when we deal with d(x,y) where x is not y, but consider for the moment that you have such a metric space with this property. What are the implications of this? So to answer the question specifically it will depend on whether all the known particles that we have are actually a representation of all the information in the system and also with regard to the interactions that are bound on these bits of information. If the only information is the information contained in electrons, photons, protons, neutrons and all that other jazz and the assumptions for the constraints we have are also right, then mathematically it seems sound. I'm skeptical though that we have discovered all the 'fields' as you would put it though. The real answer to this is currently unknown, but I imagine that if there are new information quantities and mechanisms to communicate the information, then they will be found in something like the LHC. However if you have to rely on mathematical arguments and existing data without having access to a particle accelerator with massive energies, you could look at any experimental situation where you get entropy anomalies. Also the thing is that we don't just have black-holes in the lab or nearby (at least to my knowledge :)) which means that we can't get the actual data, but then again if (and this is an IMO hypothesis) you can create a black-hole type scenario by inducing a situation of enough entropy so that this mechanism is created (using the ideas talked about earlier in this very thread), then what will happen is that you could create such an element and study what happens. In the RHIC experiment, they had what they called a 'fireball' when they smashed gold-ions together. If this was representative of 'entropy-control' or 'stability-enforcement', then it could give a bit of insight as to how a 'black-hole like mechanism' should act in an information theoretic context. Non-local interactions To dramatize what's happening in this EPR experiment, imagine that Green detector is on Earth, and Blue detector is on Betelgeuse (540 light-years away) while twin-state correlated light is coming from a spaceship parked halfway in between. Although in its laboratory versions the EPR experiment spans only a room-size distance, the immense dimensions of this thought experiment remind us that, in principle, photon correlations don't depend on distance. The spaceship acts as a kind of interstellar lighthouse directing a Green light beam to earth, a Blue light beam to Betelgeuse in the opposite direction. Forget for the moment that Green and Blue detectors are measuring something called "polarization" and regard their outputs as coded messages from the spaceship. Two synchronized binary message sequences composed of ups and downs emerge from calcite crystals 500 light-years apart. How these two messages are connected is the concern of Bell's proof. When both calcites are set at the same angle (say, twelve o'clock), then PC = 1. Green polarization matches perfectly with Blue. Two typical synchronized sequences of distant P measurements might look like this: If we construe these polarization measurements as binary message sequences, then whenever the calcites are lined up, the Blue observer on Betelgeuse gets the same message as the Green observer on Earth. Since PC varies from 1 to 0 as we change the relative calcite angle, there will be some angle α at which PC = 3/4. At this angle, for every four photon pairs, the number of matches (on the average), is three while the number of misses is one. At this particular calcite separation, a sequence of P measurements might look like this: BLUE: UD At angle α, the messages received by Green and Blue are not the same but contain "errors"—G's message differs from B's message by one miss in every four marks. Now we are ready to demonstrate Bell's proof. Watch closely; this proof is so short that it goes by fast. Align the calcites at twelve o'clock. Observe that the messages are identical. Move the Green calcite by α degrees. Note that the messages are no longer the same but contain "errors"—one miss out of every four marks. Move the Green calcite back to twelve and these errors disappear, the messages are the same again. Whenever Green moves his calcite by α degrees in either direction, we see the messages differ by one character out of four. Moving the Green calcite back to twelve noon restores the identity of the two messages. The same thing happens on Betelgeuse. With both calcites set at twelve noon, messages are identical. When Blue moves her calcite by α degrees in either direction, we see the messages differ by one part in four. Moving the Blue calcite back to twelve noon restores the identity of the two messages. Everything described so far concerns the results of certain correlation experiments which can be verified in the laboratory. Now we make an assumption about what might actually be going on—a supposition which cannot be directly verified: the locality assumption, which is the core of Bell's proof. We assume that turning the Blue calcite can change only the Blue message; likewise turning the Green calcite can change only the Green message. This is Bell's famous locality assumption. It is identical to the assumption Einstein made in his EPR paradox: that Blue observer's acts cannot affect Green observer's results. The locality assumption—that Blue's acts don't change Green's code—seems entirely reasonable: how could an action on Betelgeuse change what's happening right now on Earth? However, as we shall see, this "reasonable" assumption leads immediately to an experimental prediction which is contrary to fact. Let's see what this locality assumption forces us to conclude about the outcome of possible experiments. With both calcites originally set at twelve noon, turn Blue calcite by α degrees, and at the same time turn Green calcite in the apposite direction by α degrees. Now the calcites are misaligned by 2α degrees. What is the new error rate? Since turning Blue calcite α degrees puts one miss in the Blue sequence (for every four marks) and turning the Green calcite α degrees puts one miss in the Green sequence, we might naively guess that when we turn both calcites we will gel exactly two misses per four marks. However, this guess ignores the possibility that a "Blue error" might fall on the same mark as a "Green error"—a coincidence which produces an apparent match and restores character identity. Taking into account the possibility of such "error-correcting overlaps," we revise our error estimate and predict that whenever the calcites are misaligned by 2α degrees, the error rate will be two misses—or less. This prediction is an example of a Bell inequality. This Bell inequality says: If the error rate at angle α is 1/4, then the error rate at twice this angle cannot be greater than 2/4. This Bell inequality follows from the locality assumption and makes a definite prediction concerning the value of the PC attribute at a certain angle for photon pairs in the twin state. It predicts that when the calcites are misaligned by 2α degrees the difference between the Green and Blue polarization sequences will not exceed two misses out of four marks. The quantum facts, however, say otherwise. John Clauser and Stuart Freedman carried out this EPR experiment al Berkeley and showed that a calcite separation of 2α degrees gives three misses for every four marks - a quite substantial violation of the Bell inequality. Clauser's experiment conclusively violates the Bell inequality. Hence one of the assumptions that went into its derivation must be false. But Bell's argument uses mainly facts that can be verified - photon PCs at particular angles. The only assumption not experimentally accessible is the locality assumption. Since it leads to a prediction that strongly disagrees with experimental results, this locality assumption must be wrong. To save the appearances, we must deny Denying locality means accepting the conclusion that when Blue ob server turns her calcite on Betelgeuse she instantly changes some of Green's code on Earth. In other words, locations B and G some five hundred light years apart are linked somehow by a non-local interaction. This experimental refutation of the locality assumption is the factual basis of Bell's theorem: no local reality can underlie the results of the EPR Nick Herbert, Quantum Reality: Beyond the New Physics (Anchor, 1987, ISBN 0-385-23569-0) [Speculation]Does the violation of the probabilistic Bell inequality relate to a like second law of thermodynamics? Would black hole Hawking radiation obey a "Bell equality"? The best way to detect a black hole may be to seek its spectrum of annihilation. This may be relatively thermal at first but also discretized -- as the hole diminishes, so the number of constituent particles to radiate and fill out the Planck curve. The upper limit on such spectra may determine the upper limit on black hole density. Given that a "Planck datum" is the smallest unit of information, how many would be necessary to describe our physical universe? Maybe a myriad of identical cosmological, intersecting black holes would similarly suffice. Since the highly symmetric black hole requires high energy to create, we will gradually produce entities of closer and closer approximations in symmetric reactions. On the other hand, we may assemble a pocket watch.
{"url":"http://www.physicsforums.com/showthread.php?p=3884681","timestamp":"2014-04-17T19:06:06Z","content_type":null,"content_length":"187381","record_id":"<urn:uuid:74ff686d-432b-4f25-a843-c7c0aa3b1700>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Memory Error while constructing Compound Dictionary Alex Martelli aleaxit at yahoo.com Wed Sep 8 08:48:45 CEST 2004 Benjamin Scott <mynewjunkaccount at hotmail.com> wrote: > Thanks for the replies. > First I will make a minor correction to the code I originally posted > and then I will describe the original problem I am trying to solve, > per Alex's request. > Correction: > for s in Lst: > for t in nuerLst: > for r in nuestLst: > Dict[s][t][r]={} > ...should actually be... > for s in Lst: > for t in nuerLst: > for r in nuestLst: > Dict[s][t][r]=[] > That is, the object accessed by 3 keys is a list, not a 4th > dictionary. OK, unfortunately that doesn't change memory requirements, as 16 bytes is still a minimum allocation for an object. > The Original Problem: > The data set: 3 Columns and at least 100,000 rows. However, it can > be up to 1,000,000 rows. Aha -- a sparse 3D matrix, VERY sparse, no more than 1 million "true" entries out of 125 million slots, and all the rest just > For the purpose of illustration let's suppose that the first column > has the name of 1,000 "Factories", i.e. there are 1,000 unique symbols > in the first column. Likewise, suppose the second column contains a > "production date" or just a date; there are 250 unique dates in the > second column. Finally, suppose the third column contains a > description of a "widget type"; there are 500 unique widget > descriptions. Sure, quite clear. > *** i.e. each row contains the name of one factory which produced one > widget type on a particular date. If a factory produced more than one > widget on a given date it is reflected in the data as a new row. *** > The motivation to construct the mentioned compound dictionary comes > from the fact that I need quick access to the following data sets: > len(Lst[n])=3 > Lst[n][0]="Factory" > Lst[n][1]="date" > Lst[n][2]="WidgetType" > for s in Lst: > Dict[s[0]][s[1]][s[2]].append('1') > . > . > . > len(Dict["Factory"]["date"]["WidgetType"]) = #Widgets of some type > produced at a Factory on a given date. > The idea here is that I will be graphing a handful of the data sets at > a time; they will then be discarded and a new handful will be > graphed... etc. > What I might attempt next is to construct the required data in R (or > NumPy) since an array object seems better suited for the task. > However, I'm not sure this will avert the memory error. So, does When you represent a sparse matrix as if it was a dense one, you hit a typical wall of combinatorial explosion: you need memory proportional to the product of all the dimensions, and for a matrix of high dimensionality it's a horrid prospect. > anyone know how to increase the RAM limit for a process? Other With a 32-bit CPU you're SOL, IMHO. One option is to change machines: Apple has just announced a very reasonably priced iMac G5, a 64-bit machine intended for the home; or, you can do as I did, and look for a little-used, well-reconditioned, guaranteed PowerMac G5 -- the latter can use 8 GB of physical RAM and more importantly the address space is only bounded by the amount of disk available, so a few hundred GBs may be handled if you're in no hurry. While these are wonderful machines, however, I think you can do better. Consider...: > suggestions are also welcome. The Null Object Design Pattern is more likely to be what you want ( a fancy name for what in this case is quite simple, read on...): Start by placing in each slot of the compound dictionary the SAME object, which is just a placeholder. So you'll still have 125 million slots, but all initially will point at the same placeholder: so you're spending only 125 million times the size of a SLOT, about 4 bytes, for a total of 500 megabytes -- plus something because dictionaries being hash table are always "overdimensioned" a bit, but you should fit very comfortably in your 2GB anyway. Now, as the data come in, you ADD 1 instead of APPENDING a string of '1' to the appropriate slot. THEN and only then, for those relatively very few cells of the 3D matrix take up space for a new object, Moreover with the operations you appear to need you don't need to make a special null object, I think: just the integer 0 will do, and you will not call len() at the end since the integer is already stored in the cell. If you wanted to store more info in each cell or somehow keep track more directly of what cells are non-empty, etc etc, then you would go for a more complete Null Object DP. But for your problem as stated, the following might suffice: 1. initialize your dictionary with: for s in Lst: for t in nuerLst: for r in nuestLst: Dict[s][t][r] = 0 2. update it on each incoming datum with: for s in Lst: Dict[s[0]][s[1]][s[2]] += 1 3 consult it when done with: Dict["Factory"]["date"]["WidgetType"] = #Widgets of some type produced at a Factory on a given date. Hope this helps -- if you do need a bit more, write about it here and I'll happily show you a richer Null Object Design Pattern variant! More information about the Python-list mailing list
{"url":"https://mail.python.org/pipermail/python-list/2004-September/252855.html","timestamp":"2014-04-21T02:52:37Z","content_type":null,"content_length":"8620","record_id":"<urn:uuid:6166bb7a-e881-412b-ac3a-287eee367530>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Venn diagram Venn diagram, graphical method of representing categorical propositions and testing the validity of categorical syllogisms, devised by the English logician and philosopher John Venn (1834–1923). Long recognized for their pedagogical value, Venn diagrams have been a standard part of the curriculum of introductory logic since the mid-20th century. Venn introduced the diagrams that bear his name as a means of representing relations of inclusion and exclusion between classes, or sets. Venn diagrams consist of two or three intersecting circles, each representing a class and each labeled with an uppercase letter. Lowercase x’s and shading are used to indicate the existence and nonexistence, respectively, of some (at least one) member of a given class. Two-circle Venn diagrams are used to represent categorical propositions, whose logical relations were first studied systematically by Aristotle. Such propositions consist of two terms, or class nouns, called the subject (S) and the predicate (P); the quantifier all, no, or some; and the copula are or are not. The proposition “All S are P,” called the universal affirmative, is represented by shading the part of the circle labeled S that does not intersect the circle labeled P, indicating that there is nothing that is an S that is not also a P. “No S are P,” the universal negative, is represented by shading the intersection of S and P; “Some S are P,” the particular affirmative, is represented by placing an x in the intersection of S and P; and “Some S are not P,” the particular negative, is represented by placing an x in the part of S that does not intersect P. Three-circle diagrams, in which each circle intersects the other two, are used to represent categorical syllogisms, a form of deductive argument consisting of two categorical premises and a categorical conclusion. A common practice is to label the circles with capital (and, if necessary, also lowercase) letters corresponding to the subject term of the conclusion, the predicate term of the conclusion, and the middle term, which appears once in each premise. If, after both premises are diagrammed (the universal premise first, if both are not universal), the conclusion is also represented, the syllogism is valid; i.e., its conclusion follows necessarily from its premises. If not, it is invalid. Three examples of categorical syllogisms are the following. All Greeks are human. No humans are immortal. Therefore, no Greeks are immortal. Some mammals are carnivores. All mammals are animals. Therefore, some animals are carnivores. Some sages are not seers. No seers are soothsayers. Therefore, some sages are not soothsayers. To diagram the premises of the first syllogism, one shades the part of G (“Greeks”) that does not intersect H (“humans”) and the part of H that intersects I (“immortal”). Because the conclusion is represented by the shading in the intersection of G and I, the syllogism is valid. To diagram the second premise of the second example—which, because it is universal, must be diagrammed first—one shades the part of M (“mammals”) that does not intersect A (“animals”). To diagram the first premise, one places an x in the intersection of M and C. Importantly, the part of M that intersects C but does not intersect A is unavailable, because it was shaded in the diagramming of the first premise; thus, the x must be placed in the part of M that intersects both A and C. In the resulting diagram the conclusion is represented by the appearance of an x in the intersection of A and C, so the syllogism is valid. To diagram the universal premise in the third syllogism, one shades the part of Se (“seers”) that intersects So (“soothsayers”). To diagram the particular premise, one places an x in Sa (“sages”) on that part of the boundary of So that does not adjoin a shaded area, which by definition is empty. In this way one indicates that the Sa that is not an Se may or may not be an So (the sage that is not a seer may or may not be a soothsayer). Because there is no x that appears in Sa and not in So, the conclusion is not represented, and the syllogism is invalid. Venn’s Symbolic Logic (1866) contains his fullest development of the method of Venn diagrams. The bulk of that work, however, was devoted to defending the algebraic interpretation of propositional logic introduced by the English mathematician George Boole.
{"url":"http://www.britannica.com/print/topic/625448","timestamp":"2014-04-19T07:47:30Z","content_type":null,"content_length":"14049","record_id":"<urn:uuid:eb18f7f0-c83b-45fe-9d0b-57fd788ed7d7>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
Speeding tickets for R and Stata April 10, 2011 By Murtaza Haider How fast is R? Is it as fast in executing routines as the other off-the-shelf software, such as Stata? After some comparative experimentation, I found Stata to be 5 to 8 times faster than R. For me, speed has not been a concern in the past. I had used R with smaller datasets of roughly 5000 to 10,000 observations and found it to be as fast as other statistical software. More recently, I have been working with still a relatively small-sized data set of 63,122 observations. After realizing that R was very slow in executing the built-in routines for multinomial and ordinal logit models, I ran similar models in Stata with the same data set and found Stata to be much faster than R. Before I go any further, I must confess that I did not try to determine ways to improve speed in R by, for instance, choosing faster converging algorithms. I hope readers would send me comments on how to speed-up execution for the routines I tested in R. My data set comprised an ordinal dependant variable [5 categories] and categorical explanatory variables with 63,122 observations. I used a computer running Windows 7 on Intel Core 2 Quad CPU Q9300 @ 2.5 GHz with 8 GB of RAM. Further details about the test are listed in the following Table. │ Software Routine │ Stata 11 (duo core) │ R (2.12.0) │ │ Multinomial Logit │ mlogit, 9.06 seconds │ multinom, 50.59 seconds │ │ │ │ zelig (mlogit), 77.89 sec │ │ │ │ VGLM (multinomial), 64.4 sec │ │ Proportional odds model │ ologit, 1.69 sec │ VGLM (parallel = T), 16.26 sec │ │ │ │ polr, 22.62 seconds │ │ Generalized Logit │ gologit2, 18.67 sec │ VGLM (parallel = F), 64.71 sec │ I first estimated the standard multinomial logit model in R using the multinom routine. R took almost 51 seconds to return the results. The subsequent call to summarise the model took another 52.29 seconds, thus making the total execution time in R to be 103 seconds. Surprised at the slow speed, I tried other options in R to estimate the same model. I first tested mlogit option in Zelig. The execution time was even slower at 78 seconds. I followed up with VGAM package, which returned a slightly better result with 64.4 seconds. Other examples listed above suggest similar slower times for R in comparison with Stata. What could be the reason for such an order of magnitude difference in speed between R and Stata. I unfortunately don’t have the answer. I do know that Revolution Analytics offers similar performance benchmark comparisons between their version of souped-up R (Revolution R) and the generic R. Revolution R was found to be five to eight times faster than regular R. Other performance benchmarks revealed even greater speed differentials between Revolution R and the generic R. There must be ways to make routines execute faster in R. A few weeks earlier, Professor John Fox ( a long-time contributor to R and the programmer of the R GUI, R Commander) delivered a guest lecture at the Ted Rogers School of Management in Toronto at the GTA R Users’ Group meeting. His talk focussed on how to program using binary logit model as an example. His code for binary logit was found to be much faster than the one that comes bundled with the GLM in R. This makes me wonder: are there ways to make the generic R run faster? for the author, please follow the link and comment on his blog: daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/speeding-tickets-for-r-and-stata/","timestamp":"2014-04-18T18:13:49Z","content_type":null,"content_length":"43604","record_id":"<urn:uuid:cfe8e880-4efc-4267-9b74-b2f901aba172>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
Condensation for L[U] MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. Does anyone know where I can find a proof for the Condensation Lemma for the L[U] hierarchy in Set Theory? Thanks a lot. up vote 3 down vote favorite 1 set-theory lo.logic add comment Does anyone know where I can find a proof for the Condensation Lemma for the L[U] hierarchy in Set Theory? Eran, there is some ambiguity in your question, as there are several possible interpretations of what is meant by "the $L[U]$ hierarchy". The fine details of the argument depend on the version you choose. For the non-fine structural version, Kanamori's "The higher infinite" has a very general argument, see the proof of Lemma 20.2. This argument refers to Theorem 3.3.(b), which is not proved there, but references are provided, to Devlin's "Constructibility" and Moschovakis "Descriptive Set Theory". For a fine structural version, using the old fine structure of Jensen-Dodd, see "The Core Model", by A.J. Dodd. For a fine structural version, using the current (Mitchell style) fine structure, see "Fine structure and iteration trees" by Mitchell and Steel, and Steel's article in the Handbook of Set [Edit: Added Oct. 10, 2010.] Let me add a remark about the relevance of distinguishing between fine structural or 'coarse' approaches. In the usual, non-fine structural setting, we use a predicate $U$ to build $L[U]$ and $U$ is, in $L[U]$, a normal, fine measure on some cardinal $\kappa$. (There seems to be an issue with LaTeX, so let me on occasion write $s(A,\gamma)$ for $A_\gamma$.) Note that $s(L[U],\kappa)=L_\kappa$. (Because, as one can easily verify by induction, $U\cap s(L[U],\gamma)=\emptyset$ for all $\gamma\le\kappa$. Note that this does not happen when we form $L[A]$, for $A$ a set of ordinals. But $U$ is not a set of ordinals, but rather a set of sets of ordinals.) However, as soon as we see enough of the measure, we are actually able to define new subsets not just of $\kappa$ but even of $\omega$ (for example, $0^\sharp$). up vote Consider a countable $X\prec s(L[U],\lambda)$, where $\lambda$ is some sufficiently nice ordinal to ensure that $L[U]_\lambda$ is a model of a sufficiently decent fragment $T$ of ZFC. (That 7 down $\lambda$ exists is a consequence of the reflection theorem.) Then (by the non-fine structural version of condensation) the transitive collapse of $X$ is $\bar X=s(L[D],\gamma)$ for some vote countable $\gamma$ and $D$ a set that, in $\bar X$, seems to be a normal measure on some cardinal $\tau$. In particular, there is a real $x\in\bar X$ such that $\bar X\models x=0^\sharp$ (because we can assume $T$ strong enough that the existence of $0^\sharp$ is provable in $T$ from the existence of measurable cardinals). Since the collapse map is the identity on reals, it follows that actually $0^\sharp=x$. This means that $0^\sharp$ is ("quickly") definable from $D$. But then $D$ cannot be in $s(L[U],\ kappa)$ or else $0^\sharp$ would also be there, which contradicts that $L[U]_\kappa$ is just an initial segment of $L$. This means that, not only is $\bar X$ not an initial segment of the constructibility hierarchy of $L[U]$, but it is not even a subset of a very large initial segment of $L[U]$. Hence, if we want a strong version of condensation to hold, where the structures $\bar X$ not only ``have the right shape'' but are also initial segments, then we necessarily must use a different hierarchy, meaning we cannot simply form $L[U]$ by constructing from $U$. (Note that, as classes, $L[U]=L[A]$ for many sets $A\in L[U]$.) This suggests (almost forces on us) the approach that fine structure takes, of considering a more elaborate predicate than just $U$ but rather one of the form $(U_\alpha\mid\alpha<\tau)$ where each $U_\alpha$ is a measure (such as $U$) or a "small" measure (such as a sharp): In this sequence we would add $0^\sharp$, something like the set $D$ in the example above, and many The result is that in a sense it takes us longer to build the stage of the construction where we finally add $U$, since we will be adding more and more sets along the way. But the payoff is that we get back a strong version of condensation. This also has additional advantages, of course, although one needs to understand a bit of fine structure to appreciate them. For example, it is a popular question in the Qual exams in Set Theory at UC Berkeley, to ask for a proof of diamond in $L[U]$. Once one understands that $L[U]$ can be reorganized in the way hinted at above, one can then prove diamond rather easily, essentially by the same argument as in $L$ (using the now available strong version of condensation). show 3 more comments Eran, there is some ambiguity in your question, as there are several possible interpretations of what is meant by "the $L[U]$ hierarchy". The fine details of the argument depend on the version you For the non-fine structural version, Kanamori's "The higher infinite" has a very general argument, see the proof of Lemma 20.2. This argument refers to Theorem 3.3.(b), which is not proved there, but references are provided, to Devlin's "Constructibility" and Moschovakis "Descriptive Set Theory". For a fine structural version, using the old fine structure of Jensen-Dodd, see "The Core Model", by A.J. Dodd. For a fine structural version, using the current (Mitchell style) fine structure, see "Fine structure and iteration trees" by Mitchell and Steel, and Steel's article in the Handbook of Set theory. Let me add a remark about the relevance of distinguishing between fine structural or 'coarse' approaches. In the usual, non-fine structural setting, we use a predicate $U$ to build $L[U]$ and $U$ is, in $L[U]$, a normal, fine measure on some cardinal $\kappa$. (There seems to be an issue with LaTeX, so let me on occasion write $s(A,\gamma)$ for $A_\gamma$.) Note that $s(L[U],\kappa)=L_\kappa$. (Because, as one can easily verify by induction, $U\cap s(L[U],\gamma)=\emptyset$ for all $\gamma\le\kappa$. Note that this does not happen when we form $L[A]$, for $A$ a set of ordinals. But $U$ is not a set of ordinals, but rather a set of sets of ordinals.) However, as soon as we see enough of the measure, we are actually able to define new subsets not just of $\kappa$ but even of $\omega$ (for example, $0^\sharp$). Consider a countable $X\prec s(L[U],\lambda)$, where $\lambda$ is some sufficiently nice ordinal to ensure that $L[U]_\lambda$ is a model of a sufficiently decent fragment $T$ of ZFC. (That $\lambda$ exists is a consequence of the reflection theorem.) Then (by the non-fine structural version of condensation) the transitive collapse of $X$ is $\bar X=s(L[D],\gamma)$ for some countable $\gamma$ and $D$ a set that, in $\bar X$, seems to be a normal measure on some cardinal $\tau$. In particular, there is a real $x\in\bar X$ such that $\bar X\models x=0^\sharp$ (because we can assume $T$ strong enough that the existence of $0^\sharp$ is provable in $T$ from the existence of measurable cardinals). Since the collapse map is the identity on reals, it follows that actually $0^\sharp=x$. This means that $0^\sharp$ is ("quickly") definable from $D$. But then $D$ cannot be in $s(L[U],\kappa)$ or else $0^\sharp$ would also be there, which contradicts that $L[U]_\kappa$ is just an initial segment of $L$. This means that, not only is $\bar X$ not an initial segment of the constructibility hierarchy of $L[U]$, but it is not even a subset of a very large initial segment of $L[U]$. Hence, if we want a strong version of condensation to hold, where the structures $\bar X$ not only ``have the right shape'' but are also initial segments, then we necessarily must use a different hierarchy, meaning we cannot simply form $L[U]$ by constructing from $U$. (Note that, as classes, $L[U]=L[A]$ for many sets $A\in L[U]$.) This suggests (almost forces on us) the approach that fine structure takes, of considering a more elaborate predicate than just $U$ but rather one of the form $(U_\alpha\mid\alpha<\tau)$ where each $U_\alpha$ is a measure (such as $U$) or a "small" measure (such as a sharp): In this sequence we would add $0^\sharp$, something like the set $D$ in the example above, and many others. The result is that in a sense it takes us longer to build the stage of the construction where we finally add $U$, since we will be adding more and more sets along the way. But the payoff is that we get back a strong version of condensation. This also has additional advantages, of course, although one needs to understand a bit of fine structure to appreciate them. For example, it is a popular question in the Qual exams in Set Theory at UC Berkeley, to ask for a proof of diamond in $L[U]$. Once one understands that $L[U]$ can be reorganized in the way hinted at above, one can then prove diamond rather easily, essentially by the same argument as in $L$ (using the now available strong version of condensation).
{"url":"http://mathoverflow.net/questions/41530/condensation-for-lu/41532","timestamp":"2014-04-19T18:03:28Z","content_type":null,"content_length":"60912","record_id":"<urn:uuid:adce5e61-0758-4955-b6a6-afeb26472a42>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
The One Sided Balance Beam Problem A one sided balance beam actually has two sides but one side is reserved for fixed weights that come with the set, and the other side is reserved for the object to be weighed. There are 4 weights which will enable one to weigh any whole number of grams from 1 t0 15. What are they? We can proceed by taking the smallest possible weight and working up. The only way to measure a 1g weight would be if one of the weights were 1 gram. The smallest number of grams that you could not measure with this weight would be a 2 gram weight, so we need a 2 gram weight as well. With a 1g and 2g weight we can measure 1, 2, or 3 grams. The last weight is weighed by putting the 1g and 2g weights together. The smallest weight we cannot measure with these weights would be a 4 g weight. So we heed a 4 gram weight. With these weights, we can measure any number of grams up to 7. 7 grams is measured by putting all three weights together. The smallest number of grams that we cannot yet measure is 8 grams. Our fourth and final weight will be an 8 gram weight. With is and the 7 grams from the previous weights, we can measure up to 15 grams. Notice that the weights we used, 1, 2, 4, and 8 grams are all powers of 2. Moreover, the next weight we will need, 16 is also a power of 2. The powers of 2 are used in base 2. Very simply, if there is a 1 in a place in the base 2 representation of a number, put the weight that corresponds to that place on the scale. If there is a 0 in that place, do not put that weight on the scale. There is a combinatorial aspect to this problem. How many different configurations of weights are there? With each weight, there are 2 choices: to put it on or leave it off. So with 4 weights, there will be 24 = 16 configurations. That is enough to weigh any number of grams from 0 to 15. This illustrates a well known formula from algebra: 1 + 2 + 2^2 + 2^3 + . . . + 2^n-1 = 2^n - 1 The 2 Sided Balance Beam Problem
{"url":"http://www.sonoma.edu/users/w/wilsonst/Courses/Math_300/Groupwork/Nos23-/GWSp28.html","timestamp":"2014-04-16T16:59:52Z","content_type":null,"content_length":"3282","record_id":"<urn:uuid:cbe3b2e5-e012-4ba3-bd47-66f014035ddb>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Lesson Preview What You'll Learn 1. To test if ratios can form a proportion 2. To use cross products And Why To compare answers to a survey, as in Example 1 Testing Ratios You know that the ratios You can write the equation A proportion is an equation stating that two ratios are equal. You can test a pair of ratios to determine whether they can form a proportion. One method of testing ratios is to write both ratios in simplest form and see if they are equal. SurveysThere are 24 students in class A and 60 students in class B. Ten students in class A saw a movie this weekend. Twenty-five students in class B saw a movie this weekend. For each class, look at the ratio of the number of students who saw a movie to the total number of students. Can the ratios form a proportion? Since both ratios are equal to Another way to show that two ratios can form a proportion is to show that a common multiplier connects their numerators and denominators. Determine whether the ratios in each pair can form a proportion by finding a common multiplier.
{"url":"http://www.phschool.com/iText/mgmath_course2/Ch05/05-04/PH_MSM2_ch05-04_Obj1.html","timestamp":"2014-04-19T11:56:44Z","content_type":null,"content_length":"11405","record_id":"<urn:uuid:8a2f2c59-dd86-4279-a414-df93e7610379>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Doctor To: All Dr. Math Doctors From: Ian Underwood Subject: Ask Dr. Math in March Hi Math Doctors, On average, we received 263 questions per day, and answered 130 of them - just a skooch less than half. However, we answered 4044 questions, which is the first time we've broken the 4000 mark! The number of math doctors who contributed during March was 29. Thanks to all of you, with special thanks to the following: 20/day | | Peterson 15/day | 10/day | | Rick 5/day | Jeremiah, Twe | Anthony, Douglas, Jerry, Mitteldorf, Paul, Roy, Schwa | Fenton, Jubal, Shawn | Floor, Pete, Tim, Tom 1/day | Achilles, Greenie, Jodi, Wilkinson 1/wk | Rob, White Welcome to new math doctors Jean and Meyer. New archive format We're in the process of integrating the Dr. Math archives with the rest of the Math Forum's Internet Mathematics Library. When the changes are final, the current Browse-By-Level pages will be replaced with the following pages: Things to notice: 1. The topic categories are more descriptive, and less coarse. 2. When you browse a topic, the list of answers under that topic is returned as a series of pages, rather than as one long page. 3. From a browsing page, you now have the option of searching within a particular category, or across the entire archive. 4. Each browsing page contains the hierarchy for the corresponding level (e.g., Elementary) in the left-hand margin; and each individual answer page contains a search form at the bottom. Ideally, these should reduce the amount of backing-and-forthing required to find what you're looking for. 5. The individual answer pages now have multiple back-links, i.e., if an answer is categorized in more than one way, you can get to any of those categories by following a link at the bottom of the page. 6. The URLs of the individual items have changed, from the format to the format If you have a list of URLs in the old format that you like to paste into answers, don't worry; references to the old URLs will be forwarded to the library, so they'll still work. If you've got some extra time, I encourage you to take a look at these new pages. We're still tweaking them before making the final switch, so we welcome any feedback that you care to give us. That's it for March! Go forth, be fruitful, and teach kids to Dr. Ian Attending Physician Ask Dr. Math
{"url":"http://mathforum.org/dr.math/office_help/mathdoc.news/mathdocnews.mar02.html","timestamp":"2014-04-17T15:39:43Z","content_type":null,"content_length":"3893","record_id":"<urn:uuid:6d9036cd-5b07-4188-a467-c476c3a6e429>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
interval graph Results 1 - 10 of 22 - Journal of Algorithms , 1985 "... This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’ ’ W. H. Freeman & Co ..." Cited by 188 (0 self) Add to MetaCart This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’ ’ W. H. Freeman & Co., New York, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder) presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.) or open problems they would like publicized, should , 1997 "... . An independent set of three vertices such that each pair is joined by a path that avoids the neighborhood of the third is called an asteroidal triple. A graph is asteroidal triple-free (AT-free, for short) if it contains no asteroidal triples. The motivation for this investigation was provided, in ..." Cited by 55 (10 self) Add to MetaCart . An independent set of three vertices such that each pair is joined by a path that avoids the neighborhood of the third is called an asteroidal triple. A graph is asteroidal triple-free (AT-free, for short) if it contains no asteroidal triples. The motivation for this investigation was provided, in part, by the fact that the asteroidal triple-free graphs provide a common generalization of interval, permutation, trapezoid, and cocomparability graphs. The main contribution of this work is to investigate and reveal fundamental structural properties of AT-free graphs. Specifically, we show that every connected AT-free graph contains a dominating pair, that is, a pair of vertices such that every path joining them is a dominating set in the graph. We then provide characterizations of AT-free graphs in terms of dominating pairs and minimal triangulations. Subsequently, we state and prove a decomposition theorem for AT-free graphs. An assortment of other properties of AT-free graphs is also p... "... We give the first efficient parallel algorithms for recognizing chordal graphs, finding a maximum clique and a maximum independent set in a chordal graph, finding an optimal coloring of a chordal graph, finding a breadth-first search tree and a depth-first search tree of a chordal graph, recognizing ..." Cited by 26 (0 self) Add to MetaCart We give the first efficient parallel algorithms for recognizing chordal graphs, finding a maximum clique and a maximum independent set in a chordal graph, finding an optimal coloring of a chordal graph, finding a breadth-first search tree and a depth-first search tree of a chordal graph, recognizing interval graphs, and testing interval graphs for isomorphism. The key to our results is an efficient parallel algorithm for finding a perfect elimination ordering. - SIAM J. Comput , 1997 "... An independent set of three of vertices is called an asteroidal triple if between each pair in the triple there exists a path that avoids the neighbourhood of the third. A graph is asteroidal triple-free (AT-free, for short) if it contains no asteroidal triple. The motivation for this work is pro ..." Cited by 25 (7 self) Add to MetaCart An independent set of three of vertices is called an asteroidal triple if between each pair in the triple there exists a path that avoids the neighbourhood of the third. A graph is asteroidal triple-free (AT-free, for short) if it contains no asteroidal triple. The motivation for this work is provided, in part, by the fact that AT-free graphs offer a common generalization of interval, permutation, trapezoid, and cocomparability graphs. Previously, the authors have given an existential proof of the fact that every connected AT-free graph contains a dominating pair, that is, a pair of vertices such that every path joining them is a dominating set in the graph. The main contribution of this paper is a constructive proof of the existence of dominating pairs in connected AT-free graphs. The resulting simple algorithm, based on the well-known Lexicographic Breadth-First Search, can be implemented to run in time linear in the size of the input, whereas the best algorithm - Proc. 18th Int. Workshop (WG '92), Graph-Theoretic Concepts in Computer Science , 1992 "... An interval graph is the intersection graph of a collection of intervals. Interval graphs are a special class of chordal graphs. This class of graphs has a wide range of applications. Several linear time algorithms have been designed to recognize interval graphs. Booth & Lueker first used PQ-trees t ..." Cited by 19 (2 self) Add to MetaCart An interval graph is the intersection graph of a collection of intervals. Interval graphs are a special class of chordal graphs. This class of graphs has a wide range of applications. Several linear time algorithms have been designed to recognize interval graphs. Booth & Lueker first used PQ-trees to recognize interval graphs in linear time. However, the data manipulation of PQ-trees is rather involved and the complexity analysis is also quite tricky. Korte and Möhring simplified the operations on a PQ-tree using an incremental algorithm. Hsu and Ma gave a simpler decomposition algorithm without using PQ-trees. All of these algorithms rely on the following fact: a graph is an interval graph iff there exists a linear order of its maximal cliques such that for each vertex v, all maximal cliques containing v are consecutive. Thus, the precomputation of all maximal cliques is required for these algorithms. Based on graph decomposition, we give a much simpler recognition algorithm in this paper which directly places the intervals without precomputing all maximal cliques. A linear time isomorphism algorithm can be easily derived as a by-product. Another advantage of our approach is that it can be used to develop an O(nlog n) on-line recognition algorithm for interval graphs. 1. - Proc. of Graph Drawing 99, Lecture Notes in Computer Science 1731:276-285 , 1999 "... . We give a short introduction to an heuristic to find automorphisms in a graph such as axial, central or rotational symmetries. Using technics of factorial analysis, we embed the graph in an Euclidean space and try to detect and interpret the geometric symmetries of of the embedded graph. 1. Introd ..." Cited by 6 (1 self) Add to MetaCart . We give a short introduction to an heuristic to find automorphisms in a graph such as axial, central or rotational symmetries. Using technics of factorial analysis, we embed the graph in an Euclidean space and try to detect and interpret the geometric symmetries of of the embedded graph. 1. Introduction Testing whether a graph has any axial (rotational, central, respectively) symmetry is a NP-complete problem [9]. Some restrictions (central symmetry with exactly one fixed vertex and no fixed edge) are polynomialy equivalent to the graph isomorphism test. Notice that this latter problem is not known to be either polynomial or NP-complete in general. But several heuristics are known (e.g. [3]) and several restrictions leads to efficient algorithms: linear time isomorphism test for planar graphs [6] and interval graphs [8], polynomial time isomorphism test for fixed genus [10, 5], k-contractible graphs [12] and pairwise k-separable graphs [11], linear axial symmetry detection for plana... - in Proceedings of the 25th IEEE Symposium on Logic in Computer Science, 2010, this volume "... The present paper proves a characterization of all polynomial-time computable queries on the class of interval graphs by sentences of fixed-point logic with counting. The result is one of the first establishing the capturing of polynomial time on a graph class which is defined by forbidden induced s ..." Cited by 5 (1 self) Add to MetaCart The present paper proves a characterization of all polynomial-time computable queries on the class of interval graphs by sentences of fixed-point logic with counting. The result is one of the first establishing the capturing of polynomial time on a graph class which is defined by forbidden induced subgraphs. More precisely, it is shown that on the class of unordered interval graphs, any query is polynomial-time computable if and only if it is definable in fixed-point logic with counting. Furthermore, it is shown that fixed-point logic is not expressive enough to capture polynomial time on the classes of chordal graphs or incomparability graphs. 1 , 2003 "... We prove that every (claw, net)-free graph contains an induced doubly dominating cycle or a dominating pair. Moreover, using LexBFS we present alS[SE timealen##ES which, for a given (claw, net) -free graph, finds either a dominating pair or an induceddoubl dominatingcycln We show aln how one can uses ..." Cited by 4 (3 self) Add to MetaCart We prove that every (claw, net)-free graph contains an induced doubly dominating cycle or a dominating pair. Moreover, using LexBFS we present alS[SE timealen##ES which, for a given (claw, net)-free graph, finds either a dominating pair or an induceddoubl dominatingcycln We show aln how one can usestructural properties of (claw, net)-free graphs tosolI efficiently the domination, independent domination, and independent set problems on these graphs. - in Algorithms and Data Structures WADS '95, Lecture , 1998 "... An independent set of three of vertices is called an asteroidal triple if between each pair in the triple there exists a path that avoids the neighborhood of the third. A graph is asteroidal triple-free (AT-free, for short) if it contains no asteroidal triple. The motivation for this work is prov ..." Cited by 3 (2 self) Add to MetaCart An independent set of three of vertices is called an asteroidal triple if between each pair in the triple there exists a path that avoids the neighborhood of the third. A graph is asteroidal triple-free (AT-free, for short) if it contains no asteroidal triple. The motivation for this work is provided, in part, by the fact that AT-free graphs offer a common generalization of interval, permutation, trapezoid, and cocomparability graphs. Previously, the authors have given an existential proof of the fact that every connected AT-free graph contains a dominating pair, that is, a pair of vertices such that every path joining them is a dominating set in the graph. The main contribution of this paper is a constructive proof of the existence of dominating pairs in connected AT-free graphs. The resulting simple algorithm can be implemented to run in time linear in the size of the input, whereas the best algorithm previously known for this problem has complexity O(jV j 3 ) for - Discrete Appl. Math "... A number of problems in computational semantics, group-based collaboration, automated theorem proving, networking, scheduling, and cluster analysis suggested the study of graphs featuring certain "local density" characteristics. Typically, the notion of local density is equated with the absence of c ..." Cited by 3 (1 self) Add to MetaCart A number of problems in computational semantics, group-based collaboration, automated theorem proving, networking, scheduling, and cluster analysis suggested the study of graphs featuring certain "local density" characteristics. Typically, the notion of local density is equated with the absence of chordless paths of length three or more. Recently, a new metric for local density has been proposed, allowing a number of such induced paths to occur. More precisely, a graph G is called P4-sparse if no set of five vertices in G induces more than one chordless path of length three. P4-sparse graphs generalize the well-known class of cographs corresponding to a more stringent local density metric. One remarkable feature of P4-sparse graphs is that they admit a tree representation unique up to isomorphism. In this work we present a parallel algorithm to recognize P4-sparse graphs and show how the data structures returned by the recognition algorithm can be used to construct the corresponding tr...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=436681","timestamp":"2014-04-23T19:59:51Z","content_type":null,"content_length":"38881","record_id":"<urn:uuid:7236929a-cf24-4ec3-8988-3f8ebac39aec>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
The Black Vault Message Forums I've been thinking, for whatever reason, what ET math would be like. I bet a lot of people would think that it and our math will not only be comparable, they will likely be essentially identical. For example, the induction principle in arithmetic would likely also be a theorem in their math. The Pythagorean theorem is probably among their theorems, as another example though their name for it (if it has a name) might be just "Theorem #691,865,863,532,987". To that end, I was wondering how various ET mathematicians might define the word equation. A lot of us on earth are taught first off that an equation is just a formula or expression that contains the following symbol: =. But the ETs won't have that symbol in their language. So when fabricating the Rosetta Stone which will provide a means to translate their language to ours and vice versa, to give the proper meaning of our word equation, we have to capture the essence of what equality is. Equality is a "principle" of sorts that exists only in certain formal systems. Many formal systems have nothing like equality while other formal systems have an equality. If xEy (which intuitively means x=y) is a grammatically-correct utterance in the formal system and if E is a symbol in the formal system possessing the following properties, then I would say E represents equality: 1. What E points to is an equivalence relation, meaning 1a. for all x, xEx (reflexivity) 1b. for all x and y, if xEy then yEx (symmetry) 1c. for all x, y and z, if xEy and yEz then xEz (transivity) 2. every equivalence class defined by E has cardinality exactly 1. An equivalence class defined by E means that, given x, the equivalence class generated by x is the set of all things E-equivalent to x. Everything in one equivalence class is E-equivalent to everything else in that equivalence class. Saying it has cardinality one means that it has one element. If we did not have criteria #2 in the definition of "equation," then it might be the case that aEb although a and b are different. In that case, the equivalence class generated by a has at least one other element, b; so the cardinality of that equivalence class would not be 1, it would be at least 2. Looking at the three criteria under #1, it is clear that equality behaves so, and equivalence relations just generalize equality. In a class where the word equation is defined, soon to follow is the definition of the word solution, as in "a solution to an equation." I haven't thought much about what "solution to an equation" might mean to an ET mathematician. "it is easy to grow crazy" On the other hand, they are probably wondering why humans have no idea what equal means, when they can't even balance a check book and stay out of credit card debt. cuz it's all about the MONEY "it is easy to grow crazy" Check book balance is equal, greater, or less than cost of items to purchase? Cost of Items to purchase, equal, greater, or less than available credit on card? I would say simple math, but evidently not when you look at bankruptcy cases. More than simple math goes into bankruptcy, such as human psychology. I think the ETs already know all the factors that go into it. "it is easy to grow crazy"
{"url":"http://www.theblackvault.com/phpBB3/post118615.html","timestamp":"2014-04-20T08:33:45Z","content_type":null,"content_length":"53026","record_id":"<urn:uuid:ebbe8f3c-ee7c-4a1e-85fb-76dc3d6d005f>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
Second order freeness and fluctuations of random matrices: I. Gaussian and Wishart matrices and cyclic Fock spaces Results 1 - 10 of 29 , 2008 "... Abstract. Linear statistics of eigenvalues in many familiar classes of random matrices are known to obey gaussian central limit theorems. The proofs of such results are usually rather difficult, involving hard computations specific to the model in question. In this article we attempt to formulate a ..." Cited by 24 (3 self) Add to MetaCart Abstract. Linear statistics of eigenvalues in many familiar classes of random matrices are known to obey gaussian central limit theorems. The proofs of such results are usually rather difficult, involving hard computations specific to the model in question. In this article we attempt to formulate a unified technique for deriving such results via relatively soft arguments. In the process, we introduce a notion of ‘second order Poincaré inequalities’: just as ordinary Poincaré inequalities give variance bounds, second order Poincaré inequalities give central limit theorems. The proof of the main result employs Stein’s method of normal approximation. A number of examples are worked out; some of them are new. One of the new results is a CLT for the spectrum of gaussian Toeplitz matrices. 1. - Probab. Theory Relat. Fields , 2005 "... Abstract. A law of large numbers and a central limit theorem are derived for linear statistics of random symmetric matrices whose on-or-above diagonal entries are independent, but neither necessarily identically distributed, nor necessarily all of the same variance. The derivation is based on system ..." Cited by 15 (0 self) Add to MetaCart Abstract. A law of large numbers and a central limit theorem are derived for linear statistics of random symmetric matrices whose on-or-above diagonal entries are independent, but neither necessarily identically distributed, nor necessarily all of the same variance. The derivation is based on systematic combinatorial enumeration, study of generating functions, and concentration inequalities of the Poincaré type. Special cases treated, with an explicit evaluation of limiting variances, are generalized Wigner and Wishart matrices. 1. , 2004 "... Large random matrices appear in different fields of mathematics and physics such as combinatorics, probability theory, statistics, operator theory, number theory, quantum field theory, string theory etc... In the last ten years, they attracted lots of interests, in particular due to a serie of math ..." Cited by 13 (0 self) Add to MetaCart Large random matrices appear in different fields of mathematics and physics such as combinatorics, probability theory, statistics, operator theory, number theory, quantum field theory, string theory etc... In the last ten years, they attracted lots of interests, in particular due to a serie of mathematical breakthroughs allowing for instance a better understanding of local properties of their spectrum, answering universality questions, connecting these issues with growth processes etc. In this survey, we shall discuss the problem of the large deviations of the empirical measure of Gaussian random matrices, and more generally of the trace of words of independent Gaussian random matrices. We shall describe how such issues are motivated either in physics/combinatorics by the study of the so-called matrix models or in free probability by the definition of a non-commutative entropy. We shall show how classical large deviations techniques can be used in this context. These lecture notes are supposed to be accessible to non probabilists and non free-probabilists. - IEEE Trans. Signal Process , 2008 "... Abstract—In many channel measurement applications, one needs to estimate some characteristics of the channels based on a limited set of measurements. This is mainly due to the highly time varying characteristics of the channel. In this contribution, it will be shown how free probability can be used ..." Cited by 12 (9 self) Add to MetaCart Abstract—In many channel measurement applications, one needs to estimate some characteristics of the channels based on a limited set of measurements. This is mainly due to the highly time varying characteristics of the channel. In this contribution, it will be shown how free probability can be used for channel capacity estimation in MIMO systems. Free probability has already been applied in various application fields such as digital communications, nuclear physics and mathematical finance, and has been shown to be an invaluable tool for describing the asymptotic behaviour of many large-dimensional systems. In particular, using the concept of free deconvolution, we provide an asymptotically (w.r.t. the number of observations) unbiased capacity estimator for MIMO channels impaired with noise called the free probability based estimator. Another estimator, called the Gaussian matrix mean based estimator, is also introduced by slightly modifying the free probability based estimator. This estimator is shown to give unbiased estimation of the moments of the channel matrix for any number of observations. Also, the estimator has this property when we extend to MIMO channels with phase off-set and frequency drift, for which no estimator has been provided so far in the literature. It is also shown that both the free probability based and the Gaussian matrix mean based estimator are asymptotically unbiased capacity estimators as the number of transmit antennas go to infinity, regardless of whether phase off-set and frequency drift are present. The limitations in the two estimators are also explained. Simulations are run to assess the performance of the estimators for a low number of antennas and samples to confirm the usefulness of the asymptotic results. , 2007 "... Abstract. Consider a N × n random matrix Yn = (Y n ij) where the entries are given by Y n σij(n) ij = √ X n n ij, the Xn ij being centered, independent and identically distributed random variables with unit variance and (σij(n); 1 ≤ i ≤ N,1 ≤ j ≤ n) being an array of numbers we shall refer to as a ..." Cited by 11 (6 self) Add to MetaCart Abstract. Consider a N × n random matrix Yn = (Y n ij) where the entries are given by Y n σij(n) ij = √ X n n ij, the Xn ij being centered, independent and identically distributed random variables with unit variance and (σij(n); 1 ≤ i ≤ N,1 ≤ j ≤ n) being an array of numbers we shall refer to as a variance profile. We study in this article the fluctuations of the random variable log det (YnY ∗ n + ρIN) where Y ∗ is the Hermitian adjoint of Y and ρ> 0 is an additional parameter. We prove that when centered and properly rescaled, this random variable satisfies a Central Limit Theorem (CLT) and has a Gaussian limit whose parameters are identified. A complete description of the scaling parameter is given; in particular it is shown that an additional term appears in this parameter in the case where the 4 th moment of the Xij’s differs from the 4 th moment of a Gaussian random variable. Such a CLT is of interest in the field of wireless communications. Key words and phrases: Random Matrix, empirical distribution of the eigenvalues, Stieltjes - IEEE Trans. on Information Theory , 2008 "... Abstract—In this first part, analytical methods for finding moments of random Vandermonde matrices are developed. Vandermonde Matrices play an important role in signal processing and communication applications such as direction of arrival estimation, precoding or sparse sampling theory for example. ..." Cited by 8 (6 self) Add to MetaCart Abstract—In this first part, analytical methods for finding moments of random Vandermonde matrices are developed. Vandermonde Matrices play an important role in signal processing and communication applications such as direction of arrival estimation, precoding or sparse sampling theory for example. Within this framework, we extend classical freeness results on random matrices with i.i.d entries and show that Vandermonde structured matrices can be treated in the same vein with different tools. We focus on various types of Vandermonde matrices, namely Vandermonde matrices with or without uniformly distributed phases, as well as generalized Vandermonde matrices (with nonuniform distribution of powers). In each case, we provide explicit expressions of the moments of the associated Gram matrix, as well as more advanced models involving the Vandermonde matrix. Comparisons with classical i.i.d. random matrix theory are provided and free deconvolution results are also discussed. Index Terms—Vandermonde matrices, Random Matrices, deconvolution, limiting eigenvalue distribution, MIMO. , 2008 "... Abstract—Analytical methods for finding moments of random Vandermonde matrices with entries on the unit circle are developed. Vandermonde Matrices play an important role in signal processing and wireless applications such as direction of arrival estimation, precoding, or sparse sampling theory, just ..." Cited by 8 (4 self) Add to MetaCart Abstract—Analytical methods for finding moments of random Vandermonde matrices with entries on the unit circle are developed. Vandermonde Matrices play an important role in signal processing and wireless applications such as direction of arrival estimation, precoding, or sparse sampling theory, just to name a few. Within this framework, we extend classical freeness results on random matrices with i.i.d. entries and show that Vandermonde structured matrices can be treated in the same vein with different tools. We focus on various types of matrices, such as Vandermonde matrices with and without uniform phase distributions, as well as generalized Vandermonde matrices. In each case, we provide explicit expressions of the moments of the associated Gram matrix, as well as more advanced models involving the Vandermonde matrix. Comparisons with classical i.i.d. random matrix theory are provided, and deconvolution results are discussed. We review some applications of the results to the fields of signal processing and wireless communications. Index Terms—Vandermonde matrices, Random Matrices, deconvolution, limiting eigenvalue distribution, MIMO. - ALEA LAT. AM. J. PROBAB. MATH. STAT , 2005 "... We show that under reasonably general assumptions, the first order asymptotics of the free energy of matrix models are generating functions for colored planar maps. This is based on the fact that solutions of the differential Schwinger-Dyson equations are, by nature, generating functions for enumera ..." Cited by 7 (3 self) Add to MetaCart We show that under reasonably general assumptions, the first order asymptotics of the free energy of matrix models are generating functions for colored planar maps. This is based on the fact that solutions of the differential Schwinger-Dyson equations are, by nature, generating functions for enumerating planar maps, a remark which bypasses the use of Gaussian calculus. , 2007 "... We show that Connes’ embedding problem for II1–factors is equivalent to a statement about distributions of sums of self–adjoint operators with matrix coefficients. This is an application of a linearization result for finite von Neumann algebras, which is proved using asymptotic second order freeness ..." Cited by 4 (3 self) Add to MetaCart We show that Connes’ embedding problem for II1–factors is equivalent to a statement about distributions of sums of self–adjoint operators with matrix coefficients. This is an application of a linearization result for finite von Neumann algebras, which is proved using asymptotic second order freeness of Gaussian random matrices. , 909 "... Abstract. In this paper, we connect rectangular free probability theory and spherical integrals. In this way, we prove the analogue, for rectangular or square non symmetric real matrices, of a result that Guionnet and Maïda proved for symmetric matrices in [GM05]. More specifically, we study the lim ..." Cited by 3 (2 self) Add to MetaCart Abstract. In this paper, we connect rectangular free probability theory and spherical integrals. In this way, we prove the analogue, for rectangular or square non symmetric real matrices, of a result that Guionnet and Maïda proved for symmetric matrices in [GM05]. More specifically, we study the limit, as n, m tend to infinity, of 1 n log E{exp[ √ nmθXn]}, where Xn is an entry of UnMnVm, θ ∈ R, Mn is a certain n×m deterministic matrix and Un, Vm are independent uniform random orthogonal matrices with respective sizes n × n, m × m. We prove that when the operator norm of Mn is bounded and the singular law of Mn converges to a probability measure µ, for θ small enough, this limit actually exists and can be expressed with the rectangular R-transform of µ. This gives an interpretation of this transform, which linearizes the rectangular free convolution, as the limit of a sequence of logarithms of Laplace transforms.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4414288","timestamp":"2014-04-18T14:30:07Z","content_type":null,"content_length":"39427","record_id":"<urn:uuid:9db10b61-97aa-4850-899b-4be895d91891>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Interpolation question Andrea Gavana andrea.gavana@gmail.... Mon Mar 29 16:31:19 CDT 2010 Hi Brennan & All, On 29 March 2010 00:46, Brennan Williams wrote: > Andrea Gavana wrote: >> As for your question, the parameter are not spread completely >> randomly, as this is a collection of simulations done over the years, >> trying manually different scenarios, without having in mind a proper >> experimental design or any other technique. Nor the parameter values >> vary only on one axis in each simulation (few of them are like that). > I assume that there is a default "norm" that calculates the distance > between points irrespective of the order of the input coordinates? > So if that isn't working, leading to the spurious results, the next step > is to normalise all the inputs so they are in the same range, e.g > max-min=1.0 Scaling the input data using their standard deviation worked very well for my case. > On a related note, what approach would be best if one of the input > parameters wasn't continuous? e.g. I have three quite different > geological distributions called say A,B and C. > SO some of my simulations use distribution A, some use B and some use C. > I could assign them the numbers 1,2,3 but a value of 1.5 is meaningless. Not sure about this: I do have integer numbers too (the number of wells can not be a fractional one, obviously), but I don't care about it as it is an input parameter (i.e., the user choose how many o2/o3/injector wells he/she wants, and I get an interpolated production profiles). Are you saying that the geological realization is one of your output variables? > Andrea, if you have 1TB of data for 1,000 simulation runs, then, if I > assume you only mean the smspec/unsmry files, that means each of your > summary files is 1GB in size? It depends on the simulation, and also for how many years the forecast is run. Standard runs go up to 2038, but we have a bunch of them running up to 2120 (!) . As we do have really many wells in this field, the ECLIPSE summary file dimensions skyrocket pretty quickly. > Are those o2w,o3w and inw figures the number of new wells only or > existing+new? It's fun dealing with this amount of data isn't it? They're only new wells, with a range of 0 <= o2w <= 150 and 0 <= o3 <= 84 and 0 <= inw <= 37, and believe it or not, our set of simulations contains a lot of the possible combinations for these 2 variables (and the other 4 variables too)... "Imagination Is The Only Weapon In The War Against Reality." ==> Never *EVER* use RemovalGroup for your house removal. You'll regret it forever. http://thedoomedcity.blogspot.com/2010/03/removal-group-nightmare.html <== More information about the NumPy-Discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-March/049654.html","timestamp":"2014-04-18T20:43:58Z","content_type":null,"content_length":"5569","record_id":"<urn:uuid:56787c42-3242-4ea0-8069-0523f1431bba>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
22 projects tagged "Linux" DISLIN is a high-level, easy-to-use plotting library for displaying data as curves, bar graphs, pie charts, 3D-colour plots, surfaces, contours, and maps. Several output formats are supported, such as X11, VGA, PostScript, PDF, CGM, HPGL, TIFF, and PNG. Plotting extensions for the interpreter-based languages Perl, Python, and Java are also supported for most operating systems.
{"url":"http://freecode.com/tags/linux?page=1&with=2906%2C2776&without=","timestamp":"2014-04-19T18:11:34Z","content_type":null,"content_length":"76866","record_id":"<urn:uuid:d08a8509-dbb9-4504-b47c-e2e8af94cb97>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
pie chart Definition of Pie Chart ● A Pie Chart is a circular chart which is divided into sectors in which the area of each sector represents the size of the data. More about Pie Chart • It is also known as circle graph. • Pie charts are used to show data in proportion. • When all the sectors are combined together it forms a complete disk. Example of Pie Chart • The pie chart shown above represents the number of teachers for different subjects in a school. The number at each pie slice indicates the number of teachers allotted for the particular subject. As the pie chart shows, the number of teachers allotted for Mathematics is 24; the number of teachers allotted for Social Studies is 13; the number of teachers allotted for English is 15; the number of teachers allotted for Spanish is 10; and the number of teachers allotted for Science is 15. Solved Example on Pie Chart The pie chart shown below represents the percentage of dams built for different purposes. Find the total percentage of dams built for irrigation and hydroelectricity generation. A. 56% B. 38% C. 40% D. 45% Correct Answer: A Step 1: From the pie chart, the percentage of dams built for irrigation = 36 Step 2: Percentage of dams built for generating hydroelectricity = 20 Step 3: Percentage of dams built for irrigation and for generating hydroelectricity = 36 + 20 = 56 [Add the percentages.] Step 4: So, 56% of the dams were built for irrigation and for generating hydroelectricity. Related Terms for Pie Chart • Area • Circle Graph • Data • Graph • Sector
{"url":"http://www.icoachmath.com/math_dictionary/pie_chart.html","timestamp":"2014-04-18T08:18:40Z","content_type":null,"content_length":"8877","record_id":"<urn:uuid:c4178d73-70c6-40b5-9e59-9c16ada3cf91>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
What is an algebraic group over a noncommutative ring? up vote 11 down vote favorite Let $R$ be a (noncommutative) ring. (For me, the words "ring" and "algebra" are isomorphic, and all rings are associative with unit, and usually noncommutative.) Then I think I know what "linear algebra in characteristic $R$" should be: it should be the study of the category $R\text{-bimod}$ of $(R,R)$-bimodules. For example, an $R$-algebra on the one hand is a ring $S$ with a ring map $R \ to S$. But this is the same as a ring object in the $R\text{-bimod}$. When $R$ is a field, we recover the usual linear algebra over $R$; in particular, when $R = \mathbb Z/p$, we recover linear algebra in characteristic $p$. Suppose that $G$ is an algebraic group (or perhaps I mean "group scheme", and maybe I should say "over $\mathbb Z$"); then my understanding is that for any commutative ring $R$ we have a notion of $G (R)$, which is the group $G$ with coefficients in $R$. (Probably there are some subtleties and modifications to what I just said.) My question: What is the right notion of an algebraic group "in characteristic $R$"? It's certainly a bit funny. For example, it's reasonable to want $GL(1,R)$ to consist of all invertible elements in $R$. On the other hand, in $R\text{-bimod}$, the group $\text{Aut}(R,R)$ consists of invertible elements in the center $Z(R)$. Incidentally, I'm much more interested in how the definitions must be modified to accommodate noncommutativity than in how they must be modified to accommodate non-invertibility. So I'm happy to set $R = \mathbb H$, the skew field of quaternions. Or $R = \mathbb K[[x,y]]$, where $\mathbb K$ is a field and $x,y$ are noncommuting formal variables. qa.quantum-algebra ra.rings-and-algebras 5 I'm curious: why do you use the word "characteristic" here? I would expect terminology to make sense when you pass to a special case (e.g., commutative rings), but here it does not. – S. Carnahan♦ Jan 17 '10 at 19:06 add comment 5 Answers active oldest votes It seems that you want some notions on noncommutative group scheme,right? In fact, A.Rosenberg has introduced noncommutative group scheme in his work with Kontsevich "noncommutative grassmannian and related constructions" (2008). Actually, this work gave a systematically treatment to the noncommutative grassmannian type space introduced in their early paper noncommutative smooth space and the work of Rosenberg himself on noncommutative spaces and schemes. up vote 2 down More comments: It seems that you want to know the linear algebra over noncommutative ring. I think you need to look at the paper by Gelfand and Retakh on Quasideterminants, I. And the vote accepted main motivation for the "noncommutative grassmannian and related constructions" is to give a geometric explanation to the work of Gelfand and Retakh. All of these work are based on functor of point of view. I was unable to find a general definition of noncommutative group scheme in the paper you linked. They only defined a noncommutative version of GL(V). – S. Carnahan♦ Jan 17 '10 at 1 Sorry to response late. I just asked Kontsevich about this notion, he told me that in this paper I linked, they did not give the definition of general noncommutative group scheme. However, I remembered Rosenberg ever give a lecture course introducing general notions. I will reply when I got back to campus and found out what it is – Shizhuo Zhang Jan 19 '10 at 1 Maybe you can suggest to Kontsevich to check out mathoverflow :) – Kevin H. Lin Jan 19 '10 at 8:36 add comment I'd like to add that there is an interesting paper http://arxiv.org/abs/math/0701399 that discusses Lie algebras and groups over noncommutative rings. up vote 3 down vote The functoriality in this article is in the wrong direction; while it touches on some interesting algebra it is definitely not a canonical answer. – Zoran Skoda Mar 8 '10 at 15:16 I agree it is not a canonical answer, and I am not sure there is a canonical answer at all, but I did not understand what you mean by functoriality being in the wrong direction. Can you please specify what statement in the article you are referring to? – Pavel Etingof Mar 8 '10 at 22:21 add comment I'd say an affine algebraic group over $R$ is an $R$-Hopf algebra, that is, a Hopf algebra object in the category of R-R bimodules. Further than that, it's hard for me to say. up vote 2 [EDIT: This bit doesn't make any sense. Ignore it. I was up until 7am doing Mystery Hunt, so at least I have a good excuse.] I am pretty suspicious of a definition in terms of the functor down vote of points; the whole problem with non-commutative geometry is that the points don't capture nearly enough information. But what is a Hopf algebra object in the category of R-R-bimodules? I thought Hopf algebra objects only make sense in symmetric (or at least braided) monoidal categories, as one needs to permute the factors in order to write down the axiom of compatibility between the multiplication and the comultiplication. – Leonid Positselski Jan 17 '10 at 18:58 1 @Ben Webster, the functor of points is the description by the Yoneda embedding, of course. It has all of the information available in the original representation plus more given to us by the noncommutative generalization of a grothendieck topology. – Harry Gindi Jan 17 '10 at 19:23 @Leonid- My recollection is that the problem can be gotten around if one writes the axiom a bit differently, but we may need someone more expert than me in Hopfological algebra to be sure. – Ben Webster♦ Jan 17 '10 at 20:24 Given that Hopf algebras are Koszul dual to E_2 algebras, they shouldn't make sense in anything more than braided probably.. – David Ben-Zvi Jan 18 '10 at 20:55 add comment There is more than one category of noncommutative spaces of algebraic flavour, hence there is more than one notion of the algebraic group. In affine case, notice that the categorical product noncommutative affine schemes $NAff=Ass^{op}$ is opposite to the free product of the corresponding rings. There are extremely few such schemes, and they correspond to algebra which are very close to the free associative algebras (cf. I. Berstein, On cogroups in the category of graded algebras. Trans. Amer. Math. Soc. 115 (1965), 257–269)-- the example of $NGL_n$ like in Kontsevich-Rosenberg article mentioned by Zhang is just one of the few interesting examples. One can try not to work with categorical product, and work with tensor product like in some approaches to linear quantum groups (B. Parshall and J.Wang, Quantum linear groups. Mem. Amer. Math. Soc. 89 (1991), No. 439, vi+157 pp.), however then some categorical construction do not pass. However if we represent a space by the category of quasicoherent sheaves, then a group scheme is represented by a monoidal category, namely (up to various properness/finiteness conditions) the monoidal product is given by taking the external tensor product of sheaves on the group $G$ what gives a product on $G \times G$ and then one pushes doen this categorical product along the action to $G$. Similar pushdown along the action induces the action of this monoidal category of sheaves on the aprpopriate category of sheaves on the space the group acts on. Then in noncommutative case, we can replace Hopf algebra by its monoidal category of modules, and this category acts on the category of modules over any comodule algebra over that Hopf algebra in a canonical way. This way in the world of categories one indeed has actions of monoidal categories, which are in addition geometrically admissible in the sense explained in my vote 1 Zoran Škoda, Some equivariant constructions in noncommutative algebraic geometry, Georgian Mathematical Journal 16 (2009), No. 1, 183–202, arXiv:0811.4770. vote While Kontsevich-Rosenberg treatment of $NGL_n$ is nice functorialy (unfortunately the main part of the work from 1999 is still not a publically avaiulable article) and it was originallz motivated bz Gel'fand-Retakh quasideterminants this motivation is not fully justified by the results: namely various identities of quasideterminants were not explained as geometric statements about various maps of noncommutative schemes. There is another approach which I develop for a number of years and I hope to be able to finish and write down soon is by taking another version of $NGL_n$ namely the Manin's example of the Hopf envelope of the free matrix bialgebra on $n^2$-generators. This Hopf algebra has infinitely many generators and has interesting structure. There is a geometric quotient which I call a universal noncommutative flag variety. I succeeded to get some of the identities for quasideterminants as geometric statements on that variety. This variety is not a noncommutative scheme, but sort of noncommutative homotopy scheme as the descent is higher descent for *Cohn localization*s which do not have good flatness properties needed for the usual descent. On the other hand this variety is not represented by a group-valued functor on $NAff$ unlike the noncommutative flag variety of Kontsevich-Rosenberg (which is also glued using Cohn localizations as I was told). Tomasz Maszczyk has his own approach to noncommutative group schemes (mainly unpublished) which emphasises on the categories of bimodules. But you should talk to him. add comment You seem to be asking two different questions. The first is, "how do I define the notion of algebraic group over a noncommutative ring?" The second is, "given an algebraic group (viewed as a functor from commutative rings to groups), how do I evaluate it on noncommutative rings?" My answers are probably naive, but I don't understand noncommutative geometry. First question: It should be a functor from rings to groups that preserves finite limits. You may need more conditions, but this is essentially what you get from the definition of formal up vote 0 groups by removing the "commutative Artinian" condition. down vote Second question: [S:Evaluate the functor on the center of the ring. I can't think of a canonical alternative.:S] Edit: Based on the helpful comments, I'd recommend evaluating the functor on the quotient by the two-sided ideal generated by commutators. 3 Second answer: that sounds pretty terrible. Remember "center" is NOT a functor from noncommutative rings to commutative rings. – Ben Webster♦ Jan 17 '10 at 20:26 Are there any functors from rings to commutative rings that restrict to the identity on commutative rings? It seems unlikely, but I don't have a proof of nonexistence. – S. Carnahan♦ Jan 17 '10 at 21:12 What about associating to a ring R the new ring R/I, where I is the two-sided ideal generated by all elements of the form (ab-ba)? Since a ring homomorphism sends I of one ring to I of another, this gives a functor from rings to commutative rings, which is clearly the identity on commutative rings. Note that it sends most easy examples of non-commutative rings to the zero ring. – Chris Schommer-Pries Jan 18 '10 at 2:32 Yeah, that's what I meant with the Hochschild homology statement. I probably should have deleted the previous comment. – S. Carnahan♦ Jan 18 '10 at 2:49 1 I don't think Hochschild homology of a noncommutative ring is a ring in general.. it's a quotient by all commutators, not the two-sided ideal they generate – David Ben-Zvi Jan 18 '10 at show 2 more comments Not the answer you're looking for? Browse other questions tagged qa.quantum-algebra ra.rings-and-algebras or ask your own question.
{"url":"http://mathoverflow.net/questions/12118/what-is-an-algebraic-group-over-a-noncommutative-ring?sort=votes","timestamp":"2014-04-16T16:27:13Z","content_type":null,"content_length":"91240","record_id":"<urn:uuid:f5c006b8-2080-484b-9fa6-5e3c04024de4>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculating a Slope Topographic Maps Tutorial Topographic Maps Field Exercises April 8, 2008 Calculating a Slope Determining the average slope of a hill using a topographic map is fairly simple. Slope can be given in two different ways, a percent gradient or an angle of the slope. The initial steps to calculating slope either way are the same. • Decide on an area for which you want to calculate the slope (note, it should be an area where the slope direction does not change; do not cross the top of a hill or the bottom of a valley). • Decide on an area for which you want to calculate the slope (note, it should be an area where the slope direction does not change; do not cross the top of a hill or the bottom of a valley). • Once you have decided on an area of interest, draw a straight line perpendicular to the contours on the slope. For the most accuracy, start and end your line on, rather than between, contours on the map. • Measure the length of the line you drew and, using the scale of the map, convert that distance to feet. (insert image with the line drawn on it, conversion calculation) • Determine the total elevation change along the line you drew (subtract the elevation of the lowest contour used from the elevation of the highest contour used). You do not need to do any conversions on this measurement, as it is a real-world elevation change. To calculate a percent slope, simply divide the elevation change in feet by the distance of the line you drew (after converting it to feet). Multiply the resulting number by 100 to get a percentage value equal to the percent slope of the hill. If the value you calculate is, for example, 20, then what this means is that for every 100 feet you cover in a horizontal direction, you will gain (or lose) 20 feet in elevation. To calculate the angle of the slope, divide the elevation change in feet by the distance of the line you drew (after converting it to feet). This is the tangent value for the angle of the slope. Apply an arctangent function to this value to obtain the angle of the slope (hit the ‘inv’ button and then the ‘tan’ button on most scientific calculators to get the slope angle). The angle you calculated is the angle between a horizontal plane and the surface of the hill. Using the example above, (click here or on image for larger picture) a hill with a 20% slope is equivalent to an 11° slope.
{"url":"http://geology.isu.edu/geostac/Field_Exercise/topomaps/slope_calc.htm","timestamp":"2014-04-18T15:40:18Z","content_type":null,"content_length":"6308","record_id":"<urn:uuid:3c2eb004-cfc6-41b1-a86b-d65dc41b7ec9>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
Lynnfield Algebra 2 Tutor Find a Lynnfield Algebra 2 Tutor ...I also utilize my own experiences to challenge students to think like psychologists and bring those skills to tackle future endeavors. I have extensive coursework and research experience in the area of Physiology and completed my Master's degree in Physiology in May 2013. I was also the TA for a graduate Physiology course and tutored groups of healthcare and graduate students 10 Subjects: including algebra 2, chemistry, geometry, biology ...SAT Math preparation involves reviewing Algebra, Geometry, and Pre-Algebra topics learned mostly in Middle School. In addition, the questions require applying these topics in real world scenarios. I offer tutoring based on each student's level of need to review or learn the skills, then work on applying them in practice test problems. 15 Subjects: including algebra 2, geometry, algebra 1, precalculus ...I have helped others with all aspects of hardware, software, and networking during every stage of personal computer development from the very first personal computers released on the market (The Tandy TRS-80, Commodore-64, and Apple-IIe) through to our current mobile framework. In addition to my... 46 Subjects: including algebra 2, calculus, geometry, statistics I have extensive tutoring experience ranging from high school level to college level. I tutored college-level physics from my sophomore year till graduation. After graduation, I became a full-time math tutor with MATCH Education at Lawrence High School. 10 Subjects: including algebra 2, calculus, probability, algebra 1 I obtained my BS and PhD in Biomedical Engineering, focusing on applying mathematical and computational tools to solve biomedical problems. MATLAB is my main computer language. I have being tutoring undergraduate and graduate students in research labs on MATLAB programming. 16 Subjects: including algebra 2, calculus, geometry, algebra 1 Nearby Cities With algebra 2 Tutor Danvers, MA algebra 2 Tutors Hathorne algebra 2 Tutors Ipswich, MA algebra 2 Tutors Marblehead algebra 2 Tutors Melrose, MA algebra 2 Tutors Middleton, MA algebra 2 Tutors North Reading algebra 2 Tutors Reading, MA algebra 2 Tutors Saugus algebra 2 Tutors Stoneham, MA algebra 2 Tutors Swampscott algebra 2 Tutors Wakefield, MA algebra 2 Tutors Wilmington, MA algebra 2 Tutors Winchester, MA algebra 2 Tutors Winthrop, MA algebra 2 Tutors
{"url":"http://www.purplemath.com/lynnfield_ma_algebra_2_tutors.php","timestamp":"2014-04-21T10:57:49Z","content_type":null,"content_length":"24083","record_id":"<urn:uuid:7ed4aab9-ebb0-4243-b669-bcf594cc6942>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
NA Digest Sunday, July 31, 1988 Volume 88 : Issue 30 NA Digest Sunday, July 31, 1988 Volume 88 : Issue 30 Today's Editor: Cleve Moler Today's Topics: From: William LeVeque <LEV@MATH.AMS.COM> Date: Tue 26 Jul 88 10:49:23-EDT Subject: AMS Matrix Short Course Introductory Survey Lectures "Matrix Theory and Applications" January 10-11, 1989, Phoenix, Arizona in conjunction with the Society's ninety-fifth Annual Meeting Organizer: Charles R. Johnson, The College of William and Mary Speakers: Richard A. Brualdi, Persi Diaconis, I. C. Gohberg, Roger A. Horn, Arunava Mukherjea, Ingram Olkin The emphasis in the Short Course will be on concepts from matrix analysis that are important in areas of modern applied mathematics. The two-day program consists of seven lectures, representing areas of applied mathematics that are major users of and stimulus to matrix theory: operations research/economics applied algebra/combinatorics probability theory/random matrices systems and control/electrical engineering The Short Course will exhibit the interplay between applications and theoretical development of the subject and will provide a setting in which participants can become acquainted with newer methods and ideas of the subject. The lectures will be expository, not at the research level. Inquiries to: Monica Foulkes, AMS, P.O. Box 6248, Providence, RI 02940 Tel: 401-272-9500 Internet: mxf@math.ams.com From: Michael Mascagni <mascagni@ncifcrf.gov> Date: Mon, 25 Jul 88 08:31:02 EDT Subject: Finite Element Software Needed A friend in the lab is looking for a finite element package to run on an AT class machine (MS-DOS) which allows solution of Neumann type boundary value problems. This means that more than just piecewise linear elements must be available. My friend has little experience with FEMs, and I have offered to help him get started. If the code is public domain, that's great, but as long as we're not talking more than several hundreds of dollars, we can consider comercial products. Also, if you know of code that runs on Suns or Vaxes that might be fine too, but my partner in this is pretty much only a PC user, so the PC code would be best as he could get it running himself. Thanks in advance, Michael Mascagni mascagni@ncifcrf.gov (arpanet) na.mascagni@score.stanford.edu (nanet) ...!uunet!ncifcrf.gov!mascagni (uucp) From: David Hough <dgh@Sun.COM> Date: Tue, 26 Jul 88 20:21:38 PDT Subject: Floating-point Software Support Position at Sun Microsystems During the month of August we will be interviewing candidates for a new position in floating-point software support in Sun's Programming Languages department. The successful candidate will join K-C Ng, Shing Ma, and me at Sun's Mountain View site. The successful candidate will eventually take over my existing responsibilities for Sun-3 support in the following areas: libm development and maintenance libc development and maintenance validation suite enhancements release engineering contact floating-point lab machines floating-point hardware diagnostics and afterward will take on other tasks as appropriate to background and interests. Candidates should be acquainted with most of the issues raised in the floating-point indoctrination lecture series syllabus and knowledgeable about some of them. Familiarity with Unix, C, and Fortran in general, and SunOS in particular, is required eventually and so would be preferable initially. I would appreciate resumes from persons qualified and interested in taking over these tasks. Electronic troff or LaTeX source is preferred (to dhough@sun.com); paper mail may be sent to David Hough MS 12-40 Sun Microsystems 2550 Garcia Av Mountain View, CA 94043 From: Nestor Martinez <atina!mrecvax!nestor@uunet.uu.net> Date: 26 Jul 88 14:27:48 GMT Subject: Statistical Computing Group in Argentina We're planning (I and other statisticians and mathematicians who work in universities and research institutions) to form a statisticial computing group in Argentina. We'll agree all suggests and informations about how organize it and how contact with similar groups. For example publications to subscribe, stat packages for micros and main frames, events and meetings, etc. - Nestor Marcelo G. Martinez M.R.E. y C. Bs. As. Argentina Please, respond by e-mail at Postal address: Aguero 1440 - 1ro. "B" 1425 - Buenos Aires Republica Argentina From: Gustav Meglicki <munnari!mimir!wacsvax!gustav@uunet.uu.net> Date: 27 Jul 88 04:42:16 GMT Subject: Software for PDEs Wanted I am looking for either a library or full programs for solving systems of partial differential equations in 2 and 3 dimensions. Ideally software would be written for UNIX V, or in standard FORTRAN 77. Gustav Meglicki, Department of Electrical and Electronic Engineering, The University of Western Australia, Nedlands, W.A. 6009, ACSnet: gustav@wacsvax.oz End of NA Digest
{"url":"http://netlib.org/na-digest-html/88/v88n30.html","timestamp":"2014-04-19T15:05:33Z","content_type":null,"content_length":"7133","record_id":"<urn:uuid:e85f4a58-689e-4ba8-876b-bb76443a876e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding the derivative of an inverse function June 19th 2009, 07:19 PM #1 Finding the derivative of an inverse function Find $(f^{-1})(a)$ for the function $f$ and real number $a$. $f(x)=x^3+2x-1$, $a=2$ $f(x)=\frac{1}{27}(x^5+2x^3)$, $a=-11$ I'm having trouble with these. I've got a test on monday and I'm freaking out. The biggest problem that I'm having is finding the number that corresponds to a. I can't sub for $f(x)$ because the function can't be solved easily in most cases, and I can't determine the inverse function by switching variables and solving for y. What do I do? for first, f(1)=2....hit and trial for second, f(-3)=-11.......again hit and trial Inverse Function Theorem $(f^{-1})'(b) = \frac {1}{f'(a)}$ where b =f(a) $f(x) = x^{3}+2x-1$ $f'(x)= 3x^{2}+2$ $(f^{-1})'\big(f(x)\big) = \frac {1}{f'(x)}$ $(f^{-1})'(x^3+2x-1) = \frac {1}{3x^{2}+2}$ $(f^{-1})'(2) = (f^{-1})'(1^{3}+2(1)-1) = \frac {1}{3(1^{2})+2}$ so $(f^{-1})'(2) = \frac {1}{5}$ I know how to evaluate once I have the value. The question is: How do I find the value? Some things simply can't be solved. Do I just try inspection? Malaygoel seems to think that if a is an integer, then inspection is the easiest way. Is he right? in general, by inspection (trial and error), is the most efficient way to find the inverse value, and it is usually the way you are expected to do it. most of the time, professors will use functions where it is hard to compute the value, but it is easily seen. here is how it would be computed. we wish to find $f^{-1}(a)$, so call this value $x$ (this is a suggestive name). so we have that $f^{-1}(a) = x$ by taking $f$ of both sides, we obtain the equation $a = f(x)$ and so we see, we are looking for a value that when we plug it into our function, the result is $a$. this is what the inverse function dos after all, gives you the original input that gave you the current output, namely $a$. so for example, to deal with your first problem, we would find $f^{-1}(a)$ by solving the equation $x^3 + 2x - 1 = 2$, that is $x^3 + 2x - 3 = 0$ now this is not that hard to solve, however, it takes a lot less time and effort to just eye-ball the solution at the beginning as malaygoel did June 19th 2009, 07:25 PM #2 June 19th 2009, 07:27 PM #3 June 19th 2009, 07:36 PM #4 June 19th 2009, 07:48 PM #5 June 19th 2009, 07:57 PM #6 June 19th 2009, 08:09 PM #7 June 19th 2009, 08:23 PM #8
{"url":"http://mathhelpforum.com/calculus/93296-finding-derivative-inverse-function.html","timestamp":"2014-04-17T05:04:17Z","content_type":null,"content_length":"59570","record_id":"<urn:uuid:e22edb4e-ab96-488e-8ef3-ca84f87c1de1>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
- Handbook of Process Algebra, chapter 17 "... This chapter addresses the question how to verify distributed and communicating systems in an e#ective way from an explicit process algebraic standpoint. This means that all calculations are based on the axioms and principles of the process algebras. ..." Cited by 62 (16 self) Add to MetaCart This chapter addresses the question how to verify distributed and communicating systems in an e#ective way from an explicit process algebraic standpoint. This means that all calculations are based on the axioms and principles of the process algebras. - Theor. Comput. Sci , 2004 "... Abstract. This note addresses the history of process algebra as an area of research in concurrency theory, the theory of parallel and distributed systems in computer science. Origins are traced back to the early seventies of the twentieth century, and developments since that time are sketched. The a ..." Cited by 56 (1 self) Add to MetaCart Abstract. This note addresses the history of process algebra as an area of research in concurrency theory, the theory of parallel and distributed systems in computer science. Origins are traced back to the early seventies of the twentieth century, and developments since that time are sketched. The author gives his personal views on these matters. He also considers the present situation, and states some challenges for the future. - University of Sussex , 1997 "... We investigate the use of symbolic operational semantics for value-passing process languages. Symbolic semantics provide analytical tools for reasoning about particular infinite state systems where traditional methods fail. We eschew the use of Milner's encoding of value-passing agents into pure pro ..." Cited by 9 (2 self) Add to MetaCart We investigate the use of symbolic operational semantics for value-passing process languages. Symbolic semantics provide analytical tools for reasoning about particular infinite state systems where traditional methods fail. We eschew the use of Milner's encoding of value-passing agents into pure process algebra and advocate the treatment of value-passing terms as first-order processes proper. Such an approach enables us to build finitary proof systems for reasoning within a variety of value-passing calculi. All work carried out here is parametric with respect to the language of data expressions and, as such, reasoning about processes must be done relative to reasoning about data. Firstly, we consider...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=59944","timestamp":"2014-04-18T06:01:06Z","content_type":null,"content_length":"16669","record_id":"<urn:uuid:6fea93ff-49d8-4277-8bda-49bc3f60fa57>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
Trouble understanding the concepts behind the code 10-12-2012 #1 Registered User Join Date Oct 2012 For the assignment, I have to add upon the previous by including a function to detect that the characters inputted are appropriate. They can't be letters or symbols, can only be numbers, one decimal, and only two places from the decimal. Before I start writing code and make it really obvious I don't know what I'm doing, I'm trying to get the concepts through my head with pseudocode. I can't even get the pseudocode down because I have trouble getting the concepts. I know in order to check characters, there needs to be conversion going on-- but why? Is it to check values? To store data? I don't understand. This is where I'm stuck. A classmate told me the code has to check for fractions to make sure the decimal isn't over 100 but beyond that it went into one ear and out the other. I can't write pseudocode if I can't get the basic concepts. Can anyone help me out? My assignment involves entering values for pay and wages. After calculating deductions, a function returns the values as net pay. However, I need to understand the how and why behind checking each character entered. If you enter the characters "56" at the keyboard when prompted to enter an integer, the keyboard sends two characters with values 53 (the '5') and 54 (the '6'). (Assuming your operating system or compiler works with the ASCII character set, which most do but not all do). To interpret those two characters as meaning a value 56, it is necessary to interpret the meaning of both characters, and then interpret the pair of them as meaning 56. Your description is not entirely accurate, because there are many possible ways of interpreting the pair of consecutive characters '5' and '6' as the value 56. Some of the techniques for doing that involve conversion (which means translate a value of type X to a value of type Y) and some don't. Some methods of interpreting data may even involve multiple conversions. In the example I described, there are often I/O functions to interpret a set of characters as a value. For example, scanf()'s %d format tells scanf() to interpret coming data as if it represented an integral value. Right 98% of the time, and don't care about the other 3%. If you enter the characters "56" at the keyboard when prompted to enter an integer, the keyboard sends two characters with values 53 (the '5') and 54 (the '6'). (Assuming your operating system or compiler works with the ASCII character set, which most do but not all do). To interpret those two characters as meaning a value 56, it is necessary to interpret the meaning of both characters, and then interpret the pair of them as meaning 56. Your description is not entirely accurate, because there are many possible ways of interpreting the pair of consecutive characters '5' and '6' as the value 56. Some of the techniques for doing that involve conversion (which means translate a value of type X to a value of type Y) and some don't. Some methods of interpreting data may even involve multiple conversions. In the example I described, there are often I/O functions to interpret a set of characters as a value. For example, scanf()'s %d format tells scanf() to interpret coming data as if it represented an integral value. The classmate who had showed me his code explained it to me this way, but he used one method of doing so. Why do the characters get interpreted as 53 and 54? The compiler Bloodshed uses ASCII from what I recall. All characters need distinct values, in order to tell them apart. There are various specifications that assign values to particular characters (the letters in the alphabet, digits, punctuation characters, various types of whitespace, symbols, etc). ASCII (American National Code for Information Interchange) is just one of the specifications. It is one of the older ones, but still in common use. As to why digits don't map exactly to their value, it is quite common in several programming languages that a character with value zero represents "end of string" (where a string is a sequence of characters that collectively have some meaning, such as "hello") or "no data in this character". It would not be a good thing to give the digit '0' a value of zero .... For example, that would make it impossible to represent the value 100 in a string. The digit '0' therefore has a non-zero value in most specifications. That affects other digits as well since, with most character sets (usually digits map to consecutive values, just not a set starting with zero). There are other character sets too, such as Unicode, EBCDIC, and others. All of them are just a specification mapping particular characters (human readable or not) into numeric values. Right 98% of the time, and don't care about the other 3%. Okay, in a step by step process, how does this look exactly? I think if someone explained the process to me in steps I'd understand better. Say for example, what a user inputted was 44.75. Try writing down on paper how you would convert a string containing the characters "44.75" into the value (or values) you want. Imagine you are writing step-by-step instructions to do that, which will be followed by a pedantic ignoramus who will do what you tell him, no more no less, and will electrocute you if he gets You'll learn more by working out the steps for yourself, than you will if someone simply gives you the steps. Right 98% of the time, and don't care about the other 3%. Okay, the sarcasm is not needed considering all I was asking for was an example. Read the post again. There is absolutely no trace of sarcasm whatsoever. grumpy is describing how to come up with the algorithm yourself, attempting to teach you how to fish instead of simply cramming a trout down your gullet. The cost of software maintenance increases with the square of the programmer's creativity. - Robert D. Bliss One thing that I think you're not understanding - could be wrong, is this: If you say define a variable as char, and then assign it a number, like this: char num = 26; then if I'm not mistaken it is actually a letter because of the ascII table. I was not being sarcastic. You are better off formulating your own answer to your question, rather than having someone else do it for you. I gave a pointer to how you might do that. You are very much mistaken. ASCII 26 is a control character, known as the "substitute character" (I won't go into the reason for that name). It is the code generated by low level keyboard drivers when the user hits CTRL-Z (holds down the CTRL key and the Z key at the same time). It is also used under some older disk operating systems (like MS-DOS) to represent end of file. Right 98% of the time, and don't care about the other 3%. 10-12-2012 #2 Registered User Join Date Oct 2012 10-12-2012 #3 Registered User Join Date Jun 2005 10-12-2012 #4 Registered User Join Date Oct 2012 10-12-2012 #5 Registered User Join Date Jun 2005 10-13-2012 #6 Registered User Join Date Oct 2012 10-13-2012 #7 Registered User Join Date Jun 2005 10-14-2012 #8 Registered User Join Date Oct 2012 10-14-2012 #9 10-14-2012 #10 Registered User Join Date Sep 2012 10-14-2012 #11 Registered User Join Date Jun 2005
{"url":"http://cboard.cprogramming.com/c-programming/151348-trouble-understanding-concepts-behind-code.html","timestamp":"2014-04-16T22:56:25Z","content_type":null,"content_length":"88525","record_id":"<urn:uuid:15090b82-7f7d-4b36-a3b2-8b287b93f866>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Write 90% as a fraction in simplest form. its either 9/10,1, or 90/75 ...i think its 9/10.. • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/51016d30e4b03186c3f84255","timestamp":"2014-04-21T04:48:16Z","content_type":null,"content_length":"44186","record_id":"<urn:uuid:6d7c0569-357b-4f3d-ad1b-9ecd46fb8c47>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
Conductor/Image Charge, Volume Charge Distribution Hi tyj8i, welcome to PF! 2a. My understand is that the electric potential inside the shell is only affected by the enclosed q? Is that correct? Because if I was to place an image charge to solve the problem, the image charge wouldn't contribute anything and since the sphere is grounded, that doesn't contribute anything either. No. In the actual problem, you have a point charge and a conductor. The point charge will induce some unknown charge density onto the conductor, and the resulting potential will be the potential due to the point charge the induced charge on the conducting shell. the charge distribution was spherically symmetric, then you could use Gauss' Law to find the field and then integrate it to get the potential, and you would find that the potential is that of just the point charge. , since the point charge is off-center, there is no reason to assume spherical symmetry and a different method must be used. This is where the method of images comes into play. You forget about the conductor and instead add an image charge (not in the region you are trying to find the potential) outside the now forgotten shell (r>R) in such a place that the total potential due to the point charge an the image charge will be zero on the shell (r=R). When you do that, the uniqueness theorem can be used to show that the total potential due to these two charges (point and image) is the same as the potential of your original problem, inside the shell. (How? 2b. Total induced charge would be -q to cancel out the q's e-field. Correct? 2c. Total force on the conducting shell would be zero since the shell is not moving, other wise the shell would start moving around. I'm pretty sure that's wrong...but I can't think of anything else...Unless I have to use an image charge (read 2d). NO, just because an object isn't moving (has zero velocity) at a given instant, doesn't mean it can't have a non-zero acceleration (force acting on it) at that same instant. You'll want to find the electric field (inside and out) using MOI, and the induced surface charge density [itex]\sigma(\theta,\phi)[/itex] on the shell. From there, you can take a tiny piece of the shell [itex]dq=\sigma da[/itex] and calculate the force on it, by treating it as a point charge and using the Lorentz force law...to find the total force on the shell, you simply integrate this over the entire shell.
{"url":"http://www.physicsforums.com/showthread.php?t=344178","timestamp":"2014-04-17T07:24:33Z","content_type":null,"content_length":"27870","record_id":"<urn:uuid:2b8df4da-0a8c-4c30-9d10-0e5e13044aa7>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Re: Question on the Scope of Mathematics A.P. Hazen a.hazen at philosophy.unimelb.edu.au Sat Jul 31 01:09:15 EDT 2004 It has been pointed out (as a "sociological" fact) that mathematicians, even when finding and convincing themselves of the (probable) truth of propositions about undeniably mathematical subject matter SOMETIMES proceed by giving rigorous proofs, and sometimes... do other things. Just as another "sociological" observation, not all "mathematical" discourse consists of expounding (rigorous and non-rigorous) proofs. One thing that is certainly a significant part of teaching mathematics, and I think also of discussion among mathematical professionals, is explanation: particularly, explanation of why initially attractive proof strategies WON'T (or at least shouldn't be expected to) work. And I think this sort of explanation is a useful part of "mathematical life" (mathematical conversation?) even when it does not contain anything proof-like, rigorous OR heuristic. Trivial example: an elementary logic student, impressed by how usable the tableau method is for classroom examples, is surprised when told that First-Order Logic is undecidable: don't tableaux constitute a decision procedure? And we explain that, yes, the tableau method WOULD be a decision procedure IF we could set a bound on the number of new instantial terms we might have to use, but that if there is an AE formula on the branch, we may have to go on forever. We haven't proven that FOL is undecidable, we haven't proven that tableaux aren't a decision procedure, we haven't even shown that there isn't some-- unobvious-- way of calculating a bound on the number of instantial terms such that we can say a formula is satisfiable if the tableau hasn't closed after that many new terms are added. But I think we have said something useful, and something that it is the business of "mathematicians QUA mathematicians" to say. Generalizing wildly and irresponsibly from the example... I guess I'd like to make the philosophical claim that mathematics includes the giving of rigorous proofs, but it ALSO includes the question-asking and preliminary discussion which -- when we are lucky -- leads to something we CAN give a rigorous proof of. (I don't think this necessarily commits me to denying the centrality of proof to the whole family of activities making up mathematics, however.) Allen Hazen Philosophy Department University of Melbourne More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2004-July/008375.html","timestamp":"2014-04-20T13:21:46Z","content_type":null,"content_length":"4589","record_id":"<urn:uuid:1780094f-90db-45c2-9591-716648e7ac4e>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
Quartic curve - what is the genus ? up vote 1 down vote favorite I am studying the following quartic curve: $f(x,y) = c_1x^2 + c_2x^4 + c_3x^2y + c_4x^2y^2 + c_5y^2 + c_ 6y^3 + c_7y^4$ where $c_i$ are constant (in fact they are expressions in terms of other constants). Starting to learn a bit about curves, I found that a necessary condition for a point $(x_0, y_0)$ be singular (a double point) is that $$F(x_0, y_0) = 0,\qquad F_x (x_0, y_0) = 0,\qquad F_y (x_0, y_0) = 0$$ and that the second derivatives calculated at that point are not all equal to zero. Solving these three equations (trial and error) I got two solutions: $$(x_0, y_0) = (0,0),\qquad (x_0, y_0) = (0, -2 c_5/c_6)$$ The second solution is a solution due to the fact that the coefficients $c_i$ are interrelated. For both points the second derivatives are not equal to zero. Therefore, this curve has apparently has two double points, both with multiplicity equal to 2. Thus, this curve would have genus = 1, if there are no more singular points. My questions are: 1) What I said above accurate? 2) is there any simple way to test if there is more singular points? 3) if no more singular points, how to parameterize a quartic curve like that? (I tried to transform this curve in an elliptic one, making $x^2 = z$, but i'm not sure if this is correct.) Thank you very much 1 If the curve is singular, there is more than one notion of genus for it. – Mikhail Bondarko Nov 19 '10 at 12:32 1 In order to have an ordinary double point (= a node), you also need to have distinct tangents at the point, which means the Hessian matrix at that point is invertible. – François Brunault Nov 19 '10 at 13:01 1 If indeed your curve has genus 1, because it has two simple double points as you claim (I have not checked that), then you can transform it into a smooth cubic in the following way. Take a quadratic Cremona transformation based on three points on the curve, two of them being the singular points. The result must be a smooth cubic. – Chris Wuthrich Nov 19 '10 at 13:37 1 I think you should look also at the points "at infinity", by homogenizing your equation, to see if there are more singular points there, since this is an affine equation. – roy smith Nov 19 '10 at 1 I don't think your second "singular" point is on the curve for general values of the coefficients. It looks like it's generically a curve of genus two. It is clearly elliptic if $c_2=0$. – Felipe Voloch Nov 19 '10 at 16:49 show 5 more comments 1 Answer active oldest votes This is not a real answer, since the curve you are interested in is not the generic one of the type you describe (you say that there are relations between the coefficients). However, if you are starting to learn about curves maybe you will be interested in seeing how the generic such curve can be studied by hand. The proper setting for the question, as pointed out in a comment, is the projective plane $P^2$, so I'm going to add a variable $z$ and make everything homogeneous. Also, since you say nothing about the coefficients, I will work over the complex numbers. up vote 7 down Consider the linear system of plane quartics spanned by $z^2x^2$, $x^4$, $zx^2y$, $x^2y^2$, $z^2y^2$, $zy^3$ and $y^4$. The only base point of this system is the point $P=[1,0,0]$. It is vote easy to see that every curve of the system is singular at $P$ (it is true for all the generators) and that there is at least one curve (e.g. $z^2(x^2+y^2)=0$) that has an ordinary double point at $P$. Hence by Bertini's theorem the general curve of the system has an ordinary double point at $P$ and is smooth elsewhere. It is easy to show directly that such a curve cannot be reducible, so by the genus formula it has geometric genus 2. add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/46618/quartic-curve-what-is-the-genus?sort=newest","timestamp":"2014-04-20T03:51:20Z","content_type":null,"content_length":"58566","record_id":"<urn:uuid:3cf20924-ef5c-440a-b025-32a556ddfd53>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
General Principles of Data Analysis │General Principles of Data Analysis │1│ The choice of an appropriate statistical technique is a complex issue. Real-life data often contain mixtures of different types of data, which makes the choice of analysis technique somewhat arbitrary. It is quite possible that two statisticians confronted with the same data set may select different methods of data analysis, depending upon what assumptions they are willing to take into account while interpreting the results of analysis. Suppose, there is one dependent variable measured on the interval scale, and five independent variables, of which three are interval-scaled variables, one nominal variable and one ordinal variable with five modalities. In such a situation, some statisticians would use multiple regression analysis, treating one ordinal variable as interval-scale variable and use dummy variables for the nominal variable. Some statisticians may categorize all the interval scale variables and perform an analysis of variance. However, certain general principles for choosing a statistical technique can be discussed. Besides, certain extraneous factors like the availability of software and its limitations, and availability of time and financial resources, the choice of a statistical technique depends essentially upon the following factors: (i) Characteristics of the analysis question; (ii) Characteristics of the data; (iii) Characteristics of the sampling design.
{"url":"http://www.unesco.org/webworld/idams/advguide/Chapt1.htm","timestamp":"2014-04-24T05:22:30Z","content_type":null,"content_length":"8820","record_id":"<urn:uuid:29d8c5c9-8a7a-4b92-b0be-d8b6bcb34909>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Integral of (x^3.^root(4x-x^2) dx with integral limits 0,4 Integral of x^4/(1+x^2)^4 dx with integral limits 0,5 pls giv me solution for above 2 questns in detailled steps • one year ago • one year ago Best Response You've already chosen the best response. \[\int x^{3}\sqrt{x^{2}-4} dx\] Best Response You've already chosen the best response. Is that the integral? Best Response You've already chosen the best response. I mean: \[\large \int\limits x^{3}\sqrt{4x-x^{2}} dx\]? Best Response You've already chosen the best response. Yes limits given in questn Best Response You've already chosen the best response. Pls giv me slution in steps as i need to note it Best Response You've already chosen the best response. i think you're supposed to do completing the swuare here Best Response You've already chosen the best response. well, it's completing the square first then trig-sub. Why not you try it; then me typing the solution for you? Best Response You've already chosen the best response. than me* Best Response You've already chosen the best response. I dont know how to do,pls giv me steps Best Response You've already chosen the best response. \[4x - x^2 = -(x^2 - 4x)\]\[= -(x^2 - 4x) - 4 + 4\]\[= -(x^2 - 4x + 4) + 4\]\[= 4 - (x^2 - 4x + 4)\]\[= 4 - (x - 2)^2\]\[\int\limits x^3 \sqrt{4x - x^2}dx = \int\limits x^3 \sqrt{4 - (x - 2)^2} dx\]there i already completed the square for you, just do trig subs and you'll be fine Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50016af4e4b0848ddd65e890","timestamp":"2014-04-19T19:43:44Z","content_type":null,"content_length":"49375","record_id":"<urn:uuid:b5d33ca8-c50e-48e8-ba5c-3cce4ae477ce>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiple Imputations Applied to the DREAM3 Phosphoproteomics Challenge: A Winning Strategy DREAM is an initiative that allows researchers to assess how well their methods or approaches can describe and predict networks of interacting molecules [1]. Each year, recently acquired datasets are released to predictors ahead of publication. Researchers typically have about three months to predict the masked data or network of interactions, using any predictive method. Predictions are assessed prior to an annual conference where the best predictions are unveiled and discussed. Here we present the strategy we used to make a winning prediction for the DREAM3 phosphoproteomics challenge. We used Amelia II, a multiple imputation software method developed by Gary King, James Honaker and Matthew Blackwell[2] in the context of social sciences to predict the 476 out of 4624 measurements that had been masked for the challenge. To chose the best possible multiple imputation parameters to apply for the challenge, we evaluated how transforming the data and varying the imputation parameters affected the ability to predict additionally masked data. We discuss the accuracy of our findings and show that multiple imputations applied to this dataset is a powerful method to accurately estimate the missing data. We postulate that multiple imputations methods might become an integral part of experimental design as a mean to achieve cost savings in experimental design or to increase the quantity of samples that could be handled for a given cost. Citation: Guex N, Migliavacca E, Xenarios I (2010) Multiple Imputations Applied to the DREAM3 Phosphoproteomics Challenge: A Winning Strategy. PLoS ONE 5(1): e8012. doi:10.1371/journal.pone.0008012 Editor: Mark Isalan, Center for Genomic Regulation, Spain Received: July 30, 2009; Accepted: October 15, 2009; Published: January 18, 2010 Copyright: © 2010 GUEX et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: This work was supported in part by the Experimental Network for Functional INtegration (ENFIN; http://www.enfin.org), a Network of Excellence funded by the European Commission within its FP6 Programme, under the thematic area “Life sciences, genomics and biotechnology for health”, contract number LSHG-CT-2005-518254. The research was funded in part by the Integrated Computational Genomics Resources of the Swiss Institute of Bioinformatics (http://www.isb-sib.ch/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the Competing interests: The authors have declared that no competing interests exist. DREAM is an initiative that is quite essential in the field of methods development to critically evaluate current computational methodologies (http://wiki.c2b2.columbia.edu/dream/index.php/ The_DREAM_Project). In this respect, it follows the well-established Critical Assessment of methods of protein Structure Prediction (CASP) [3], [4], [5], [6], [7], [8], which has spurred innovation in this field. DREAM is now at its 4^th instance, and there is no doubt that it will become as beneficial for the Systems Biology world as CASP already is for the structural biology domain. We participated in the 3^rd instance of the DREAM challenge, in the phosphoproteomics section. Briefly, this challenge is based on a data set provided by Peter Sorger et al[9], where the authors measured the difference in signaling between normal and cancerous cells using phosphoproteomics assays. Predictors were given only 90% of the data and had to predict the value of the remaining measurements, which had been masked by the authors. This consisted in predicting the concentration of 17 phosphoproteins at two time points for 7 combinations of stimuli and inhibitors applied to normal and cancer hepatocytes (Figure 1). For each of the 17 phosphoproteins, 42 distinct combinations of stimuli and inhibitors measurements were given, in addition to un-stimulated and un-inhibited Figure 1. Description of the DREAM3 phosphoproteomics challenge. 17 phosphoproteins have been measured in Normal and Cancer Cells, following various combinations of Stimulus and Inhibitor at various time-points. A series of measurements (476 out of 4624) have been masked (diagonal). The challenge consisted in providing the most accurate prediction of those missing data. In this article, we describe the approach we took to analyze the data and make a winning prediction, and discuss the applicability of the process to other data sets. Given the complexity of the biological networks affected by the various stimuli and inhibitors, we decided to approach this challenge by imputing the missing data based solely on the exiting measured data. We took advantage of the Vital-IT high-performance computing center to run thousands of simulations to determine the best multiple imputation parameters to apply for our final prediction. This article will describe our approach in details. It is important to mention that, although our multiple imputations strategy resulted in a winning contribution, it does not provide any insights into the biomolecular system underlying the data. In other words, it does not infer nor uses the wiring structure of the signaling network. As a consequence, it would not be possible to infer the outcome of multiple simultaneous perturbations on the phosphoproteomics measurements using this approach. To this end, other methods that implicitly take advantage of the signaling network using kinetic modeling or logical modeling should be used [10]. These methods will likely be used in the 2009 DREAM challenges, as several groups are focusing their attention towards methodologies to infer and reconstruct regulatory networks and evaluate their dynamical behaviour. One interesting aspect of the DREAM challenge is that there is only about three months between the time the data are released and the due date for the analysis. This does not leave much time to develop and validate novel methods, and predictors typically apply methods they have been developing in their laboratory over time. We took a slightly different approach, which consisted in analyzing the problem, identifying a suitable tool to perform the analysis, tuning the parameters during the time allowed and performing our final prediction. The summary of the analysis workflow is described in Figure 2. Each step is described in more depth in the following sections. Figure 2. Analysis workflow summary. Description of the different steps applied to the DREAM challenge. Step 1: Understanding the Challenge We immediately recognized that the masked data could be assimilated to missing data. Missing data is a recurrent and very annoying problem, as most statistical tools do not tolerate missing data. Common ways to deal with this issue include ignoring samples as soon as one measurement is missing, which prunes the dataset. Although applicable in cases of large datasets with few missing values, this is far from ideal and inapplicable in our case, as it is indeed the objective of the challenge to predict the masked data. The other common approach is to replace the missing data either with random values, or by the mean or median of non-missing values. Both approaches can lead to biases and inefficiencies. Fortunately, solutions to impute the missing data have been developed, in particular in the field of social sciences, where multiple questions polls are usually only partially filled and where removing any sample partially filled would amount to discarding most of the dataset. We elected to use the Amelia II package[2] of R[11], a multiple imputation method described in depth in a report entitled “What to do About Missing Values in Time Series Cross-Section Data”, available at http://gking.harvard.edu/amelia/. Step 2: Performing Exploratory Data Analysis To get a “feel” for the data, we performed a principal component analysis (PCA) using the dudi.pca module of the ade4 package[12] of R (Figure 3). It is obviously apparent that there is a large difference between Cancer and Normal cells. Likewise, some grouping is also apparent for the various time points. Measurements at time zero and 180mn cluster in relatively tight neighboring regions of the PCA space. In contrast, there is a large dispersion of the measurements for time 30 mn. Moreover, those measurements tend to be further away from measurements at time zero than measurements at time 180 mn are. Those observations led us to try various parameters that would account for time effect and cell type effect (cross-section Normal vs Cancer) during the multiple imputation process. Figure 3. Inspection of the challenge data through Principal Component Analysis (PCA). All measurements classes were pooled together, irrespective of the CellType, Time, Stimulus and Inhibitor. Scatter plots with representation of the various classes were produced with the s.class command of the ade4 R package. The various classes are: Top left: CellType (Normal, Cancer). Top right: Time (0, 30, 180 mn). Bottom left: The seven stimuli. Bottom right: The seven inhibitors. Step 3: Optimizing the Multiple Imputation Parameters Although any additional prior data already present in the literature could be used to help solve the challenge, we decided to use only the rich dataset at our disposition to make our predictions, since the conditions, laboratories and experimentalists affect experimental readouts. Therefore we committed to two principles before starting the analysis: (1) let the data drive the prediction process and (2) do not correct our predictions based on any particular biological knowledge. Amelia II has several input parameters, and can apply various transformations to the input data. To determine the best combination of parameters to use to impute the missing data of the challenge dataset, we randomly chose three Stimuli/Inhibitors pairs among the 42 combinations of Stimuli/ Inhibitors for which we had data, with the restriction that a given Stimulus or Inhibitor could not be picked more than once. We then masked the 17 phosphoproteomics measurement data associated with those three pairs at time points 30 and 180 mn for Cancer and Normal cells. This corresponded to the masking of 204 (17×3×2×2) measurements. We then used Amelia II to impute the 204 masked data with various input parameters and assessed the performance of the prediction by computing the Pearson correlation coefficient between the median of multiple imputations and the actual measurement (Figure 4). The process was repeated 50 times, selecting different combinations of masked Stimulus/Inhibitors pairs. Thus, we collected 50 correlation coefficients for any set of multiple imputations parameters tested. To make our prediction for the challenge, we chose the set of parameters for which the median of the 50 Pearson Correlation coefficients was the highest. We then applied those parameters to the 476 masked data of the challenge. Figure 4. Identification of the best multiple imputation parameters. A. Selection of three Stimulus/Inhibitor pairs and masking (red) of the 17 associated phosphoproteomics measurements at 30 and 180 mn in both normal and cancer cells (17×3×2×2 = 204 masked measurements). B. Example of multiple imputations results with a given set of Amelia II parameters for the 17 masked phosphoproteomics measurements associated with an IFNγ stimulation and JNK inhibition at 30 mn in cancer cells. The boxplots show the spread of the 14 multiple imputations performed for each phophoprotein and the median of the prediction (black) can be compared to the actual measurement (red). C. The correlation between the median of each of the 204 predictions and the 204 actual measureents which have been masked is computed and provides an evaluation of the prediction performance for a given set of Amelia II input parameters. As can be seen in Figures S1 and S2, the variation of the multiple imputations parameters influenced the ability to predict the masked data. In particular, increasing the number of multiple imputations improved the correlation (Figure S1). Likewise, increasing the polynomial order used to model the time effect was beneficial (vectors are parallels in the PCA space; Figure S2). Actually, we determined that the best correlation was achieved using a second order polynomial (data not shown). This is consistent with the observation that time points zero and 180 mn were close in the PCA space, whereas measurements belonging to the 30 mn time point were more distant and scattered (Figure 3). We also observed that when the cell status (Cancer, Normal) was considered as a cross-section, it was absolutely necessary to allow modeling of the time effect differently for each cell type. On another hand, the method used to initialize the imputation process (listwise deletion or identity matrix) had no effect. The best overall correlation (0.94) was obtained with 50 multiple imputations, a cross-section on the cell type and the possibility to apply a different model of the time effect for each cell type using a second order polynomial on the raw (untransformed) data. When the number of imputations was large, we did not observe a statistical difference between imputing the missing data using untransformed or squared root transformed measurements, although we noticed a slightly tighter variance when untransformed data was used. Log transforming the data consistently gave inferior results (data not shown). However, we anticipated a beneficial effect of transforming the data, because during our initial data exploration phase, we observed that the measurements acquired for several of the 17 phosphoproteins were not normally distributed (data not shown). This violated the assumption made by the imputation model implemented in Amelia II, which optimally requires multivariate normally distributed data. During our search for optimal parameters, we either used the data as-is, or applied a squared root transformation on all measurements. As the various phosphoprotein measurements follow distinct distributions, we reasoned that the putative improvement obtained by transforming some measurements was compensated by the detrimental effect of transforming measurements that should have been left untransformed. Thus, we kept the multiple imputation parameters that gave us the best correlation with our own masked data and further evaluated the effect of transforming measurements for just some of the 17 phosphoproteins. We identified that a squared root transformation of Akt, IkBα, p38, p70S6 and HSP27 measurements modestly but significantly improved the overall correlation from 0.94 to 0.95 (unpaired t-test P-value 0.02). This is what we used for our final prediction. Overall, the median of the multiple imputation process produced an extremely accurate estimation of the actual measured data. Representative predictions examples are provided in Figure 5. The jury evaluated the predictions using a normalized square error by comparing the predictions with a null-model in which the missing values were sampled from the dataset to estimate a p-value. In our case, the chance to obtain such a prediction randomly was of 10^−22. The main advantage of using multiple imputations is that it naturally gives a prediction range for each missing value. We observed that the actual measurement fell out of this range for only 30 out of the 476 predictions, that is 6.3% of the time (Table S1). Interestingly, 14 of those “outliers” concern the combination of IL-1 stimulation with PI3K inhibition, and 10 (e.g. a third) are more specifically under-predicted for this specific combination of stimulus/inhibitor at 30 mn in cancer cells. The fact that a third of the “outliers” are found in this combination (out of the 28 distinct combinations of Stimulus/Inhibitor/CellType/Time for which the data had been masked) might reflect that PI3K inhibition can affect the apparent concentration of the IL-1 stimuli perceived by the cell. Indeed PI3K is linked in part with the rapid induction of IL-1R1[13]. The combination of TGFα stimulation with GSK3 inhibition also takes its share of outliers (4 out of 30), and there is evidence that both play an antagonizing role in the case of keratinocyte migration in HaCat cells, a cell type similar to the HepG2 cells used to produce the challenge data[14]. Figure 5. Evaluation of the quality of the DREAM3 challenge prediction. The multiple imputation process generated 50 predictions for each measurement, which are represented as boxplots. The median (black) was submitted as our prediction. In red, actual experimental measurement unveiled shortly before the DREAM3 conference. Top: example of high quality prediction. Bottom: Worse prediction. Interestingly, both IL-1 and TGFα stimuli clearly behave differently from the other stimuli in our preliminary PCA (Figure 2). Based on this observation, it was expected that it would be more challenging to accurately predict the missing values for those stimuli. To come with a more sensible prediction of IL-1, it might have been useful to benefit from results of other interleukin stimuli such as IL-8 or IL-6 to better cover the signaling space. The PCA (Figure 2) does not discriminate the various inhibitors, which appear superposed. This is consistent with the presence of biological cross-talks between those inhibitors, such as for example GSK3i and PI3Ki[15]. For the DREAM3 challenge, about 10.3% of the measurements had been masked. Once all of the actual measurements were made available, we masked 952 out of 4624 measurements (e.g. about 20.6% of the data) randomly drawn from time points other than zero. We then used the optimal prediction parameters determined earlier to predict the masked data. Here again, we observed that the multiple imputation process defined a range in which the actual measurement almost always fell. Indeed, the actual measurement fell out of this range for only 49 out of the 952 predictions, that is 5.1% of the time (Table S2). This time, no clear pattern of misprediction could be identified for the 49 “outliers”. This absence of clear pattern might be due to the fact that the masked data was missing completely at random in this case, which is the best situation for multiple imputations. After the DREAM conference, out of curiosity, we also tested the multiple imputation method on another challenge dataset: The gene expression prediction challenge, whose dataset was generously provided by Neil Clarke et al. Briefly, the challenge consisted in predicting the expression level of 50 genes in a gat1Δ yeast strain, for different time points following the addition of an histidine synthesis inhibitor. The expression level of these 50 genes as well as 9285 others was provided for wild type and 3 other mutant strains. We first back transformed the data to obtain raw measurements from the log transformed data supplied, and formatted the data to place genes in rows and mutants in columns. Contrarily to the phosphoproteomics challenge, we did not attempt to identify the optimal multiple imputations parameters by predicting the measurements of additionally masked genes. We directly imputed the missing data using just one set of (arbitrary) parameters: cross-section on the various genes, modeling the effect of time with a 2^nd order polynomial not varying across the cross-section, and 100 multiple imputations. We then evaluated what would have been our performance using the evaluation scripts used by the assessors, which are available from http://wiki.c2b2.columbia.edu/dream/index.php/D3c3. Although the prediction might probably have been improved by careful tuning of the parameters, it turns out that with this simple protocol, we would already have achieved the 3^rd best prediction (Table S3), with a score significantly better than several other predictors. Unfortunately, we cannot comment on the merit and pitfalls of the various methods used by the participating teams, because only anonymous rankings are provided by the organizers, so as to encourage submissions of experimental methods. However, a thorough comparative study of the different submissions is under preparation: Robert J. Prill, Daniel Marbach, Julio Saez-Rodriguez, Gregoire Altan-Bonnet, Peter Sorger, Neil Clarke, Gustavo Stolovitzky, Lessons from the DREAM3 challenges (this title may change), DREAM3 collection, PLoS One (to be published). From this work, we conclude that the multiple imputation method is a powerful technique that can be generally applied to many situations relevant to large-scale biological data acquisition where missing data are encountered, such as microarrays experiments [16]. This is also particularly relevant to longitudinal studies where patients might not come to every appointment, or where measurements might be missing for a variety of reasons. For example, in a longitudinal study examining 13 biomarkers as predictors of mortality, about 40% of the participants were missing information on one or more biomarker [17]. Although we applied multiple imputations to somewhat artificial conditions where known data are removed from a set, this work could be extended to influence the experimental design phase of new projects. Indeed, most of the current approaches rely on the use of checkerbox design (combinations of stimuli and inhibitor), which is very expensive both in time and in consumable price. Knowing that, for some datasets, as much as 20% of the data could be imputed could be used to reduce the amount of data to actually measure to reach a biological conclusion. This approach could also be used to plan a multi-step experiment approach in which the best combinations of stimuli and inhibitors worth measuring in the next experiment are “imputed” from the current experiment, reminiscent to the “pay as you go” strategy suggested for example in the protein-protein interactions field[18]. An other potential application could be to circumvent inherent limitations of some technologies. For example, flow cytometry cannot simultaneously quantify more than 10 cell surface markers. This is due to the difficulty to find fluorescently labeled antibodies whose emission spectra does not overlap, or to the lack of antibodies coupled to different fluorophores. It might be possible to design experiments where cells would be split in batches marked with near complete set of antibodies. For example, assuming that antibodies A and D cannot be used simultaneously, an experiment splitting cells into a first batch marked with antibodies A,B and C and a second batch marked with antibodies B,C and D, should make it possible to impute the missing measurements and thus obtain a prediction of markers A,B,C and D for each cell. To conclude, we believe that initiatives such as DREAM and ENFIN[19], which both provide a framework where the predictive power of computational methods can be rigorously benchmarked against experimental data should be encouraged. The structural biology community benefited strongly from CASP, and the systems biology and reverse-engineering fields will without doubt benefit from such Supporting Information Overall effect of varying the multiple imputation parameters. The process presented in Figure 4 has been repeated 50 times, masking different selections of 3 pairs of Stimuli/Inhibitors. In each case, 32 distinct combinations of parameters were tested, with 18 distinct number of multiple imputations (1–10, 15, 20, 25, 30, 35, 40, 45 and 50). For each of those 576 (32x18) parameters (x axis), the distribution of the 50 correlations computed as described in Figure 4C is presented as a boxplot. It is immediately apparent that for any of the 32 combinations of parameters tested, increasing the number of multiple imputations improves the prediction accurracy, but reaches a plateau after about 40 multiple imputations. (0.14 MB DOC) Principal Component Analysis of the multiple imputation parameters effect. #imputations: number of multiple imputations. Sqrt: Effect of applying a squared root transformation on all input data. Polytime: Effect of increasing the polynome order used to model the effect of time. Cross-section: indicates whether we should consider the cell status (Cancer, Normal) as a cross-section. Model cross-section time indicates whether the effect of the time should be modeled differently for Cancer and Normal cells. (0.06 MB DOC) List of the 30 combinations of Stimulus/Inhibition/timepoint/CellType measurements (out of 476) whose actual value falls outside of the min-max prediction range defined by the multiple imputation (0.09 MB DOC) List of the 49 combinations of Stimulus/Inhibition/timepoint/CellType measurements (out of 952 measurements masked completely at random) whose actual value falls outside of the min-max prediction range defined by the multiple imputations process. (0.13 MB DOC) Assessment of how the multiple imputation method would have performed on the DREAM3 Expression Challenge. Score: log-transformed “average” of the overall gene-profile p-value and the overall time-profile P-value, computed as -0.5 log10 (GeneProfile*TimeProfile); larger scores indicate greater statistical significance of the prediction. Overall Gene-Profile P-value: geometric mean of the 50 gene-profile P-values for a given time point. Overall Time-Profile P-value: geometric mean of the 8 time-profile p-values for a given gene. Assessment details can be found on the DREAM website at (0.04 MB DOC) We wish to acknowledge the organizers of the DREAM conference for providing new challenges year after year; the data providers, without whom no challenge could ever be released; and the groups that are willing to participate to the DREAM challenge. The computations were performed at the Vital-IT (http://www.vital-it.ch) Center for high-performance computing of the Swiss Institute of Author Contributions Conceived and designed the experiments: NG. Performed the experiments: NG. Analyzed the data: NG EM IX. Wrote the paper: NG EM IX.
{"url":"http://www.ploscollections.org/article/info:doi/10.1371/journal.pone.0008012","timestamp":"2014-04-16T07:53:32Z","content_type":null,"content_length":"109218","record_id":"<urn:uuid:32b24021-181b-4f00-8fcb-e11775eb5de5>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
Exponential Growth and Decay Geologic context: radioactive decay, population growth, changes in atmospheric CO[2] Teaching Exponential Growth and Decay ^by Jennifer M. Wenner, Geology Department, University of Wisconsin-Oshkosh Jump down to: Teaching strategies Materials & Exercises Student Resources Exponential growth and decay are rates; that is, they represent the change in some quantity through time. Exponential growth is any increase in a quantity (N) -- exponential decay is any decrease in N -- through time according to the equations: N(t) = N[0]e^kt (exponential growth) N(t) = N[0]e^-kt (exponential decay) • N[0] is the initial quantity • t is time • N(t) is the quantity after time t • k is a constant (analogous to the decay constant) and • e^x is the exponential function (e is the base of the natural logarithm) Teaching Strategies: Ideas from Math Education Put quantitative concepts in context There are a number of geologic contexts in which to introduce the concept of exponential growth and decay. Some of these include: • Population growth • Increases in atmospheric CO[2] Use multiple representations Because everyone has different ways of learning, mathematicians have defined a number of ways that quantitative concepts can be represented to individuals. Here are some ways that exponential growth and decay can be represented. Use technology appropriately Students have any number of technological tools that they can use to better understand quantitative concepts -- from the calculators in their backpacks to the computers in their dorm rooms. Exponential growth and decay can make use of these tools to help the students understand this often difficult concept. • Graphing calculators Graphing calculators are an easy way for all students to enter data and to see what a curve of that data looks like. All graphing calculators are slightly different and students may need help with their particular model. There are some helpful hints for some calculators at Prentice-Hall's Calculator help website (more info) • Computers Exponential growth and decay provide an excellent opening for an introduction to the use of spreadsheet programs. Students are likely to encounter spreadsheet programs in many of their classes and they are excellent tools for visualizing the shape of an equation. Work in groups to do multiple day, in-depth problems Mathematicians also indicate that students learn quantitative concepts better when they work in groups and revisit a concept on more than one day. Therefore, when discussing quantitative concepts in entry-level geoscience courses, have students discuss or practice the concepts together. Also, make sure that you either include problems that may be extended over more than one class period or revisit the concept on numerous occasions. Exponential growth and decay is a concept that comes up over and over in introductory geoscience: Radioactive decay, population growth, CO[2] increase, etc. When each new topic is introduced, make sure to point out that they have seen this type of function before and should recognize it. Teaching Materials and Exercises • Participants of the Quantitative Skills Workshop in 2002 developed a template for teaching mathematical functions. Included in this activity page are exercises for teaching Population Growth and Atmospheric CO[2] Increase -- excellent examples of exponential growth. Student resources Geomaths has a page explaining the math behind radioactive decay with a link to a very nice MathHelp tutorial on exponential functions. has some very good resources explaining:
{"url":"http://serc.carleton.edu/quantskills/methods/quantlit/expGandD.html","timestamp":"2014-04-16T17:33:32Z","content_type":null,"content_length":"29748","record_id":"<urn:uuid:9cbf04b2-d6ea-461f-8525-d92aec5f7af4>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
On undecidability of the weakened Kruskal’s Theorem - Journal of Symbolic Logic , 2003 "... For α less than ε0 let Nα be the number of occurrences of ω in the Cantor normal form of α. Further let |n | denote the binary length of a natural number n, let |n|h denote the h-times iterated binary length of n and let inv(n) be the least h such that |n|h ≤ 2. We show that for any natural number h ..." Cited by 12 (2 self) Add to MetaCart For α less than ε0 let Nα be the number of occurrences of ω in the Cantor normal form of α. Further let |n | denote the binary length of a natural number n, let |n|h denote the h-times iterated binary length of n and let inv(n) be the least h such that |n|h ≤ 2. We show that for any natural number h first order Peano arithmetic, PA, does not prove the following sentence: For all K there exists an M which bounds the lengths n of all strictly descending sequences 〈α0,..., αn 〉 of ordinals less than ε0 which satisfy the condition that the Norm Nαi of the i-th term αi is bounded by K + |i | · |i|h. As a supplement to this (refined Friedman style) independence result we further show that e.g. primitive recursive arithmetic, PRA, proves that for all K there is an M which bounds the length n of any strictly descending sequence 〈α0,..., αn 〉 of ordinals less than ε0 which satisfies the condition that the Norm Nαi of the i-th term αi is bounded by K +|i|· inv(i). The proofs are based on results from proof theory and techniques from asymptotic analysis of Polya-style enumerations. Using results from Otter and from Matouˇsek and Loebl we obtain similar characterizations for finite bad sequences of finite trees in terms of Otter’s tree constant 2.9557652856.... ∗ Research supported by a Heisenberg-Fellowship of the Deutsche Forschungsgemeinschaft. † The main results of this paper were obtained during the authors visit of T. Arai in
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=955210","timestamp":"2014-04-20T17:58:16Z","content_type":null,"content_length":"14795","record_id":"<urn:uuid:43f294bc-7505-4242-a25d-c9ddaed99277>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
Maintainer Ertugrul Soeylemez <es@ertes.de> Event system. None of these wires except event supports feedback, because they all can inhibit. Producing events after :: Monad m => Time -> Wire m a aSource Produce a signal once after the specified delay and never again. The event's value will be the input signal at that point. afterEach :: forall a b m. Monad m => [(Time, b)] -> Wire m a bSource Produce an event according to the given list of time deltas and event values. The time deltas are relative to each other, hence from the perspective of switching in [(1, a), (2, b), (3, c)] produces the event a after one second, b after three seconds and c after six seconds. edgeBy :: forall a b m. Monad m => (a -> Bool) -> (a -> b) -> Wire m a bSource Whenever the predicate in the first argument switches from False to True for the input signal, produce an event carrying the value given by applying the second argument function to the input signal. never :: Monad m => Wire m a bSource Never produce an event. This is equivalent to inhibit, but with a contextually more appropriate exception message. Event transformers Delaying events dam :: forall a m. Monad m => Wire m [a] aSource Event dam. Collects all values from the input list and emits one value at each instant. Note that this combinator can cause event congestion. If you feed values faster than it can produce, it will leak memory. delayEvents :: forall a m. Monad m => Wire m (Time, Maybe a) aSource Delay events by the time interval in the left signal. Note that this event transformer has to keep all delayed events in memory, which can cause event congestion. If events are fed in faster than they can be produced (for example when the framerate starts to drop), it will leak memory. Use delayEventSafe to prevent this. delayEventsSafe :: forall a m. Monad m => Wire m (Time, Int, Maybe a) aSource Delay events by the time interval in the left signal. The event queue is limited to the maximum number of events given by middle signal. If the current queue grows to this size, then temporarily no further events are queued. As suggested by the type, this maximum can change over time. However, if it's decreased below the number of currently queued events, the events are not deleted. Selecting events dropFor :: forall a m. Monad m => Wire m (Time, a) aSource Timed event gate for the right signal, which begins closed and opens after the time interval in the left signal has passed. takeFor :: forall a m. Monad m => Wire m (Time, a) aSource Timed event gate for the right signal, which starts open and slams shut after the left signal time interval passed. event :: Monad m => Wire m a b -> Wire m a (Maybe b)Source Variant of exhibit, which produces a Maybe instead of an Either. Never inhibits. Same feedback properties as argument wire.
{"url":"http://hackage.haskell.org/package/netwire-1.2.6/docs/FRP-NetWire-Event.html","timestamp":"2014-04-19T22:22:39Z","content_type":null,"content_length":"20329","record_id":"<urn:uuid:8dd8d47d-df6c-4fc1-b73b-e78881650813>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
Don't Lose Your Marbles - Game Rules The basic game of marbles is a simple one. The first decision to make is if you are playing for "keepsies" where the winner gets to keep your marbles if you lose -- or just for fun. Basic Rules Players: 2 to 6 players. Needed: Marbles (13 mibs and 1 shooter per player minimum) and a circle. Each player decides on how many marbles they are going to use in their game. Players begin by drawing a circle that is 3 to 10 feet in diameter. This is often determined by the skill of the players. The bigger the circle, the better the players. Players place 13 mibs (one of the thirteen smaller 5/8" marbles) in the center of the circle to form an "X" or a circle. The game begins by one player knuckling down at the edge of the circle and flicking their shooter. The object is to knock out one or more of the mibs, without the player's shooter leaving the circle. If, the player has been successful, then the player can shoot again from the place where the shooter rested. If, after the player has missed and his/her shooter end up outside the circle, then the player must leave the shooter inside the circle. The next player takes a turn. Each mib that was knocked out counts for one point. A player may also knock out any other player's shooter that remains in the circle. The game continues until all of the original mibs have been knocked out. The player with the most points wins. In some versions the marbles knocked out of the circle are kept by the shooter. This is sometimes called "keepsies". Next page > Marble Golf
{"url":"http://www.kidsturncentral.com/topics/sports/marbles3.htm","timestamp":"2014-04-19T19:57:05Z","content_type":null,"content_length":"5803","record_id":"<urn:uuid:7bfd4402-3c00-4de0-a5de-4c2ac16f132e>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project The Locker Problem Imagine a hallway with 100 lockers, all closed. 100 students are sent down the hall as follows: student 1 opens all the lockers; student 2 closes every other locker, beginning with the second; student 3 changes the state of every third locker, beginning with the third; and so on. After all the students have marched, which lockers remain open? This Demonstration illustrates the changing locker states as the students march. Black squares represent closed lockers and white squares represent open lockers. The first row in the graphic shows the initial hallway with locker 1 on the left and locker 100 on the right. Each subsequent row shows the hallway after the next student has marched, with the bottom row showing the final locker configuration. The user can select certain subsets of the students to send marching, using the convention that student will change the state of every locker, beginning with the . Can you see how the final locker state relates to the set of students sent marching? Further reading: B. Torrence, "Extending the Locker Problem," Mathematica in Education and Research (1), 2006 pp. 83–95. B. Torrence and S. Wagon, "The Locker Problem," Crux Mathematicorum (4), 2007, pp. 232–236.
{"url":"http://demonstrations.wolfram.com/TheLockerProblem/","timestamp":"2014-04-21T12:21:48Z","content_type":null,"content_length":"43547","record_id":"<urn:uuid:97a5a5d7-300e-4e20-887a-4d20e75d8f1f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
ALERT: Before you purchase, check with your instructor or review your course syllabus to ensure that you select the correct ISBN. Several versions of Pearson's MyLab & Mastering products exist for each title, including customized versions for individual schools, and registrations are not transferable. In addition, you may need a CourseID, provided by your instructor, to register for and use Pearson's MyLab & Mastering products. Access codes for Pearson's MyLab & Mastering products may not be included when purchasing or renting from companies other than Pearson; check with the seller before completing your purchase. Used or rental books If you rent or purchase a used book with an access code, the access code may have been redeemed previously and you may have to purchase a new access code. Access codes Access codes that are purchased from sellers other than Pearson carry a higher risk of being either the wrong ISBN or a previously redeemed code. Check with the seller prior to purchase. Precalculus, Fifth Edition , by Lial, Hornsby, Schneider, and Daniels, engages and supports students in the learning process by developing both the conceptual understanding and the analytical skills necessary for success in mathematics. With the Fifth Edition , the authors adapt to the new ways in which students are learning, as well as the ever-changing classroom environment. Table of Contents R. Review of Basic Concepts R.1 Sets R.2 Real Numbers and Their Properties R.3 Polynomials R.4 Factoring Polynomials R.5 Rational Expressions R.6 Rational Exponents R.7 Radical Expressions 1. Equations and Inequalities 1.1 Linear Equations 1.2 Applications and Modeling with Linear Equations 1.3 Complex Numbers 1.4 Quadratic Equations 1.5 Applications and Modeling with Quadratic Equations 1.6 Other Types of Equations and Applications 1.7 Inequalities 1.8 Absolute Value Equations and Inequalities 2. Graphs and Functions 2.1 Rectangular Coordinates and Graphs 2.2 Circles 2.3 Functions 2.4 Linear Functions 2.5 Equations of Lines and Linear Models 2.6 Graphs of Basic Functions 2.7 Graphing Techniques 2.8 Function Operations and Composition 3. Polynomial and Rational Functions 3.1 Quadratic Functions and Models 3.2 Synthetic Division 3.3 Zeros of Polynomial Functions 3.4 Polynomial Functions: Graphs, Applications, and Models 3.5 Rational Functions: Graphs, Applications, and Models 3.6 Variation 4. Inverse, Exponential, and Logarithmic Functions 4.1 Inverse Functions 4.2 Exponential Functions 4.3 Logarithmic Functions 4.4 Evaluating Logarithms and the Change-of-Base Theorem 4.5 Exponential and Logarithmic Equations 4.6 Applications and Models of Exponential Growth and Decay 5. Trigonometric Functions 5.1 Angles 5.2 Trigonometric Functions 5.3 Evaluating Trigonometric Functions 5.4 Solving Right Triangles 6. The Circular Functions and Their Graphs 6.1 Radian Measure 6.2 The Unit Circle and Circular Functions 6.3 Graphs of the Sine and Cosine Functions 6.4 Translations of the Graphs of the Sine and Cosine Functions 6.5 Graphs of the Tangent, Cotangent, Secant, and Cosecant 6.6 Harmonic Motion 7. Trigonometric Identities and Equations 7.1 Fundamental Identities 7.2 Verifying Trigonometric Identities 7.3 Sum and Difference Identities 7.4 Double-Angle and Half-Angle Identities 7.5 Inverse Circular Functions 7.6 Trigonometric Equations 7.7 Equations Involving Inverse Trigonometric Functions 8. Applications of Trigonometry 8.1 The Law of Sines 8.2 The Law of Cosines 8.3 Vectors, Operation, and the Dot Product 8.4 Applications of Vectors 8.5 Trigonometric (Polar) Form of Complex Numbers; Products and Quotients 8.6 De Moivre's Theorem; Powers and Roots of Complex Numbers 8.7 Polar Equations and Graphs 8.8 Parametic Equations, Graphs, and Applications 9. Systems and Matrices 9.1 Systems of Linear Equations 9.2 Matrix Solution of Linear Systems 9.3 Determinant Solution of Linear Systems 9.4 Partial Fractions 9.5 Nonlinear Systems of Equations 9.6 Systems of Inequalities and Linear Programming 9.7 Properties of Matrices 9.8 Matrix Inverses 10. Analytic Geometry 10.1 Parabolas 10.2 Ellipses 10.3 Hyperbolas 10.4 Summary of the Conic Sections 11. Further Topics in Algebra 11.1 Sequences and Series 11.2 Arithmetic Sequences and Series 11.3 Geometric Sequences and Series 11.4 The Binomial Theorem 11.5 Mathematical Induction 11.6 Counting Theory 11.7 Basics of Probability Appendix A. Polar Form of Conic Sections Appendix B. Rotation of Axes Appendix C. Geometry Formulas Solutions to Selected Exercises Answers to Selected Exercises Index of Applications Photo Credits Enhance your learning experience with text-specific study materials. Purchase Info ISBN-10: 0-321-82808-9 ISBN-13: 978-0-321-82808-8 Format: Book $238.67 | Free Ground Shipping. Add to Cart Digital Choices MyLab & Mastering ? MyLab & Mastering products deliver customizable content and highly personalized study paths, responsive learning tools, and real-time evaluation and diagnostics. MyLab & Mastering products help move students toward the moment that matters most—the moment of true understanding and learning. eTextbook ? With CourseSmart eTextbooks and eResources, you save up to 60% off the price of new print textbooks, and can switch between studying online or offline to suit your needs. Once you have purchased your eTextbooks and added them to your CourseSmart bookshelf, you can access them anytime, anywhere. Print Choices Alternative Options ? Click on the titles below to learn more about these options. Loose Leaf Version ? Books a la Carte are less-expensive, loose-leaf versions of the same textbook.
{"url":"http://www.mypearsonstore.com/bookstore/precalculus-plus-new-mymathlab-with-pearson-etext-access-9780321828088","timestamp":"2014-04-17T10:49:40Z","content_type":null,"content_length":"25716","record_id":"<urn:uuid:ad6c906f-1f40-4f14-adec-a11580abdc5e>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
marginal moment for random sums - question about proof January 31st 2011, 03:29 AM #1 Senior Member Nov 2010 Hong Kong marginal moment for random sums - question about proof Suppose that { $X_j$} is a sequence of intedependent identically distributed random variables and that N is a random variable taking non-negative integer values. If $Y=\Sigma_{j=1}^N$, then $M_Y(t)=M_N(lnM_X(t))$ The beginning and the end is from the textbook, the part in the middle with $E[e^{NXt}]=E[exp({N*ln(e^{xt})]$ is mine. I am not quite sure about the last transition I can see how $E[e^{N*something}]$ becomes $M_N(something)$. I am not sure why $ln(e^{xt})=lnM_X(t)$ if there is no expected value E anywhere around (since the definition of MGF is $M_X(t)=E(e^ Hello ! The beginning and the end is from the textbook, the part in the middle with $E[e^{NXt}]=E[exp({N*ln(e^{xt})]$ is mine. I can see how $E[e^{N*something}]$ becomes $M_N(something)$. I am not sure why $ln(e^{xt})=lnM_X(t)$ if there is no expected value E anywhere around (since the definition of MGF is $M_X(t)=E(e^ That's because it's $E[e^{NXt}]=E[\exp(N*\ln(e^{Xt}))]$ (capital X) P.S. : HK It's not so obvious to me. Instead, I would write $E[exp(N*ln(e^{Xt})]=M_N(ln(e^{Xt}))=M_N(Xt)$ ?? I suspect there is something here about the expected value operator that I don't know... Oh right sorry, I misunderstood your question. There is a big mistake for the third '=' sign : $E\left[M_X(t)^N\right]=E\left[\left(E\left[e^{Xt}\right]\right)^N\right]$, which is completely different from what you wrote. I would like to know what you mean by "the beginning" and "the end". Do you mean that the book only provides $M_Y(t)=M_N(\ln M_X(t))$ ? Oh right sorry, I misunderstood your question. There is a big mistake for the third '=' sign : $E\left[M_X(t)^N\right]=E\left[\left(E\left[e^{Xt}\right]\right)^N\right]$, which is completely different from what you wrote. I would like to know what you mean by "the beginning" and "the end". Do you mean that the book only provides $M_Y(t)=M_N(\ln M_X(t))$ ? This is from the book: The mistake you pointed out is mine ))) I now understand completely, from the double expected value expression you wrote. Thank you!! Last edited by Volga; February 1st 2011 at 12:25 AM. January 31st 2011, 11:59 AM #2 January 31st 2011, 03:43 PM #3 Senior Member Nov 2010 Hong Kong January 31st 2011, 10:09 PM #4 February 1st 2011, 12:12 AM #5 Senior Member Nov 2010 Hong Kong
{"url":"http://mathhelpforum.com/advanced-statistics/169794-marginal-moment-random-sums-question-about-proof.html","timestamp":"2014-04-20T20:37:09Z","content_type":null,"content_length":"49710","record_id":"<urn:uuid:6bd3b9b1-ebd0-4d83-a221-a34908f28b7b>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Interpolation FP1 Formula Re: Linear Interpolation FP1 Formula Yes, the trig identities are probably best -- especially since the integral has to be between 0 and π/2. Nothing from adriana yet tonight... Re: Linear Interpolation FP1 Formula When is that Birthday party scheduled for? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula It is this weekend, don't know which day though. Re: Linear Interpolation FP1 Formula Maybe she is already out partying. To party well you have to practice alot. So maybe she goes to practice parties. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula Maybe... the old adriana would have found a way to talk to me every day though, even using her phone to send them. Re: Linear Interpolation FP1 Formula Not if she was in the arms of her future husband/wife. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula Yes, that would be difficult. Re: Linear Interpolation FP1 Formula Or, she could hiding somewhere working on those integrals to surprise you. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula There could also be pigs flying outside. Re: Linear Interpolation FP1 Formula Yes there could. Mammym used to say, when you stop believing you stop receiving. Of course, she was talking about the tooth fairy. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula I have no expectation with her these days. She has already made it clear I am just her puppet who will happily boost her confidence. I don't mind though, as I haven't got many others to talk to. Re: Linear Interpolation FP1 Formula There must be something better than a puppetmaster. You do not have to look for a girl that is interested in math. I only had one like that and she was the worst of the bunch. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula The girls into maths seem to be the only ones who talk to me, apart from PJ, and even she just stops talking when I mention maths. I don't think they tend to find other elements of my character interesting; adriana and Holly for example lose interest when I stop talking maths. Re: Linear Interpolation FP1 Formula What? A female that loses interest when you are telling her how gorgeous she is? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula I never told any girl that apart from adriana, and it was only really as a return compliment. Re: Linear Interpolation FP1 Formula You should do more of that. They fall for that better than a left hook. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula But I am already being controlled by one puppetmaster. Re: Linear Interpolation FP1 Formula Control is impossible when you are detached. When you are not they will treat you like their puppydog. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula I think I already am a puppydog... Just 8 more weeks of school left before it's over forever... Re: Linear Interpolation FP1 Formula Why be one? Make them your puppydog. Over? It is just beginning! In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula I do not know how... Re: Linear Interpolation FP1 Formula You will figure it all out from what we have talked about. Last edited by bobbym (2013-03-16 12:44:17) In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula I know they like dominant guys but I don't feel like I'm in a position to do that. I am more aggressive sometimes, such as disagreeing sometimes or taking the lead in a conversation, but it seems to have no effect. For instance, I notice that it doesn't seem to have an effect on adriana, but she suddenly seems to find me more attractive when she sees me working on maths or when I talk about it. Says it reminds her of those maths movies. PJ appears to be a headcase, now if I try to be dominant she will hyperventilate and require medical attention. Re: Linear Interpolation FP1 Formula You have to know what it is that you want from each of these persons and get it! If she does not comply then I do not see any reason to continue with them. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula Because there is the possibility that there won't be another prospect for a very long time and I may regret not trying harder.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=257378","timestamp":"2014-04-20T18:46:50Z","content_type":null,"content_length":"34971","record_id":"<urn:uuid:2a9b1362-3552-4078-ac71-af963d6d1973>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with some questions... December 29th 2006, 09:32 AM #1 Dec 2006 Help with some questions... The first one - I have no idea... the second - I guess we can use Cauchy's mean value theorem, but how?? the third - Taylor's theorem? I do not really like the notation, $y=y(x)$ and $x=x(y)$. It makes me want to vomit. Let us assume, $y=f(x)$ has an inverse on some open interval $y^{-1}=f^{-1}(x)$ is the inverse function. Further, assume $y$ is twice differenciable on the interval. Then, $y^{-1}$ is twice differenciable on the interval. $f(f^{-1}(x))=x$ throughout the interval. Take derivative (chain rule), Take derivative again, use chain on left use quotient on right. Thus, (I assume, but am lazy to check that). In your notation, $\frac{d^2 y}{dx^2}=-\frac{ \frac{d^2 x}{dy^2} }{\left( \frac{ dx }{dy}\right)^2}$ *)Note $(f^{-1}(x))'ot = 0$ at some point. There are two possibilities. Either zero throught the interval in that case, $f^{-1}(x)=C$ but then $f(x)$ cannot exists, because it is one-to-one map. And it cannot happen that it is zero at some point but not all, that will lead to non-differenciability. Thus, we can divide. Yes! Use the Extended Mean Value theorem. Work on the interval, Both are differenciable on, Both are continous on, $f(x)=\ln (1+x)$ thus, $f'(x)=\frac{1}{1+x}$ $g(x)=\sin^{-1} (x)$ thus, $g'(x)=\frac{1}{ \sqrt{1-x^2} }$ Where, $g'(x)ot = 0 \forall x\in I$. Thus, $\exists c\in I$ $\frac{\sqrt{1-c^2}}{1+c}=\frac{\ln (1+x)}{\sin^{-1} x}$ But, $0<c<x$. $\sqrt{\frac{1-x}{1+x}}<\sqrt{\frac{1-c}{1+c}}=\frac{\ln (1+x)}{\sin^{-1} x}$ thank you very much! i didn't understand one thing... (in the image) can you explain? This is wrong, probably because you trusted the problem statement to be correct. Suppose y is a function of x and you want to switch the roles of y and x, i.e. make y the independent variable of the function x. It is safer to introduce new variables, let me show it for Phoebe83 in particular. Let: $<br /> \left\{ \begin{array}{l}<br /> x = u \\ <br /> y = t \\ <br /> \end{array} \right.<br />$ $<br /> y' = \frac{{dy}}{{dx}} = \frac{{\frac{{dy}}{{dt}}}}{{\frac{{dx}}{{dt}}}} = \frac{1}{{\frac{{dx}}{{dt}}}} = \frac{1}{{\frac{{dx}}{{dy}}}} = \frac{1}{{x'}}<br />$ Of course, y' means that y as a function of x is being differentiated with respect to x, while x' means that x as a function of y is being differentiated with respect to y. Taking the second derivative, which I'll take with respect to t again, using the chain rule: $<br /> y'' = \frac{{dy'}}{{dx}} = \frac{{\frac{{dy'}}{{dt}}}}{{\frac{{dx}}{{dt}}}} = \frac{1}{{\frac{{dx}}{{dt}}}}\frac{d}{{dt}}\left( {\frac{{dy}}{{dx}}} \right) = \frac{1}{{\frac{{dx}}{{dt}}}} \frac{d}{{dt}}\left( {\frac{1}{{\frac{{dx}}{{dt}}}}} \right)\frac{1}{{\frac{{dx}}{{dt}}}}\left( {\frac{{0 \cdot \frac{{dx}}{{dt}} - 1 \cdot \frac{{d^2 x}}{{dt^2 }}}}{{\left( {\frac{{dx}}{{dt}}} \ right)^2 }}} \right)<br />$ In the last step, I explicitly wrote the quotient rule. This simplifies to: $<br /> y'' = - \frac{{\frac{{d^2 x}}{{dt^2 }}}}{{\left( {\frac{{dx}}{{dt}}} \right)^3 }} = - \frac{{x''}}{{x'^3 }}<br />$ So there should be a cube instead of a square in the denominator. How does that happen then? If these expressions are not equal then the original problem is not true*) *)Unless $(f^{-1}(x))'f''(f^{-1}(x))-f''(x)=C$ They differ by some constant throught the interval. That's why I said: "This is wrong, probably because you trusted the problem statement to be correct." The intial statement isn't correct, at least I think it's incorrect. December 29th 2006, 10:30 AM #2 Global Moderator Nov 2005 New York City December 29th 2006, 10:49 AM #3 Global Moderator Nov 2005 New York City December 30th 2006, 03:39 AM #4 Dec 2006 December 30th 2006, 12:18 PM #5 Senior Member Jan 2006 Brussels, Belgium December 30th 2006, 02:06 PM #6 Global Moderator Nov 2005 New York City December 30th 2006, 02:34 PM #7 Senior Member Jan 2006 Brussels, Belgium
{"url":"http://mathhelpforum.com/calculus/9355-help-some-questions.html","timestamp":"2014-04-18T14:38:38Z","content_type":null,"content_length":"57532","record_id":"<urn:uuid:1732fd03-738d-486f-af46-58892dbd81bb>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
trying to write with Tex Let's see: The number of my house is [math\] {\frac{1}{\sum\limits_{n = 1}^{\infty}{\frac{1}{n^2)\ Hello, tonio! Well, of course it doesn't work . . . Let's see: The number of my house is [math\] {\frac{1}{\sum\limits_{n = 1}^{\infty}{\frac{1}{n^2)\ You must start with [tex] . . . and close with Also, the TeX compiler is very fussy about insisting that opening and closing brackets should match up. In the expression {\frac{1}{\sum\limits_{n = 1}^{\infty}{\frac{1}{n^2)\ there are 8 opening braces and only four closing ones. If you correct this imbalance then you should find that the input [tex]\frac{1}{\sum\limits_{n = 1}^{\infty}\frac{1}{n^2}}[/tex] will produce $\frac{1}{\sum\limits_ {n = 1}^{\infty}\frac{1}{n^2}}$. Thank you both very much for the input. I'm beginning to realize that writing in LaTex sucks big time! You've got to have hawk eyes to keep track of all those darn round, square, curly parentheses, the slashes and the whole thing...it's awful! Do you guys happen to know whether there's some program to write mathematics more or less like html or ASCII and the program then compiles it or translates into Tex? Thanx Tonio TeX/LaTeX is compiled from ASCII, since ASCII has such a restricted character set and is difficult to represent multiple line input is why LaTeX appears so difficult to you. But it takes only a few hours (using the tutorial and other resources) to become sufficiently proficient to use it on MHF. Last edited by tonio; November 16th 2009 at 05:42 AM.
{"url":"http://mathhelpforum.com/latex-help/108477-trying-write-tex.html","timestamp":"2014-04-19T07:13:54Z","content_type":null,"content_length":"60058","record_id":"<urn:uuid:cd100a4c-d091-4bcf-accc-f680fcef48bc>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
Our Place in Space: The Milky Way Galaxy Use your powers of estimation to figure out the answer! It's simple: start with a small amount and work your way out. Have you ever tried to estimate the number of jellybeans in a jar? If so, you may have looked at the size of a single jellybean and compared it to the size of the jar. Then, based on size, you may have tried to imagine how many jellybeans could fit in that jar. Or maybe you ve tried to estimate how many steps it takes to walk to school. You may not have counted every step. Instead, you may have thought about how many steps it takes you to walk one block; then multiplied that number by the number of blocks you have to walk to get to school. You get the idea. You can use this same technique to estimate even larger numbers!
{"url":"http://www.amnh.org/ology/features/milkyway/pages/MoreorLess_Hint.php","timestamp":"2014-04-18T18:21:24Z","content_type":null,"content_length":"5293","record_id":"<urn:uuid:a62d5867-ce48-497f-8a74-67e50e3195fa>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
Course Meeting Times Lectures: 2 sessions / week, 1.5 hours / session This course covers the concepts and physical pictures behind various phenomena that appear in interacting quantum many-body systems. Key ideas/techniques to be covered include broken symmetry, effective field theories, functional integral methods, and quantum phase transition theory. A rough syllabus is: • Second quantization; path integrals in quantum mechanics • Interacting bosons - superfluidity • Broken symmetry and its consequences • Low dimensional quantum magnetism Quantum Physics II (8.05), Statistical Physics II (8.08), Relativity (8.033) or Classical Mechanics II (8.21), Physics of Solids I (8.231) There will typically be one homework every week which will be due on the same day of the following week. Late homework is strongly discouraged!! There will be no final exam - instead there will be a short in-class presentation on a topic of general interest to the material in the course. Specific recommendations for presentation topics will be provided later. No single textbook will be used. What follows is a list of books that I find useful. When appropriate, lecture notes or references to review articles/research papers will be provided. Negele, John W., and Henri Orland. Quantum Many-Particle Systems. Boulder, CO: Westview Press, October 1998. ISBN: 9780738200521. Stone, Michael. The Physics of Quantum Fields. New York, NY: Springer-Verlag, February 1999. ISBN: 9780387989099. Auerbach, Assa. Interacting Electrons and Quantum Magnetism. New York, NY: Springer-Verlag, 1994. ISBN: 9780387942865. Wen, Xiao-Gang. Quantum Field Theory Of Many-body Systems: From The Origin Of Sound To An Origin Of Light And Electrons. Oxford, UK; New York, NY: Oxford University Press, 2004. ISBN: 9780198530947. Sachdev, Subir. Quantum Phase Transitions. Cambridge, UK; New York, NY: Cambridge University Press, 2000. ISBN: 9780521582544. Anderson, P. W. Basic Notions of Condensed Matter Physics. Boulder, CO: Westview Press, November 27, 1997. ISBN: 9780201328301. Fradkin, Eduardo. Field Theories of Condensed Matter Systems. Boulder, CO: Westview Press, August 26, 1998. ISBN: 9780201328592. Polyakov, A. M. Gauge Fields and Strings. New York, NY: Harwood Academic Publishers, Taylor & Francis Scientific, Technical and Medical, October 1, 1987. ISBN: 9783718603930.
{"url":"http://ocw.mit.edu/courses/physics/8-513-many-body-theory-for-condensed-matter-systems-fall-2004/syllabus/","timestamp":"2014-04-19T20:26:45Z","content_type":null,"content_length":"30861","record_id":"<urn:uuid:dd10f44d-3cbc-4e19-aece-5c34b7c376ad>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Are braid links proper links? Take the 2-minute tour × MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. Are braid links proper links? Or are the concepts involved unrelated? Could you clarify what you mean by "braid link" and "proper link"? Ryan Budney Jan 29 '11 at 18:40 Oops! Alexander's Thm says that all tame links are closed braids, so "braid links"="tame links." I'm studying a result of Murakami in "A recursive calculation of the Arf invariant of a link" (J. Math. Soc. Japan 38, #2 (1986). Murakami says "a link $L$ is proper if $lk(K,L−K)$ is even for every component of $K$ in $L$, where $lk$ means a linking number," and I don't have a very good visual picture for what that means. In that sense, are most tame links proper? Few? tuppsphd Jan 30 '11 at 3:36 For a picture of what "linking number" means, see: en.wikipedia.org/wiki/Linking_number As Paul mentions, Murakami's notion of "proper link" is fairly special and most links aren't of that sort. Just so you know "proper link" isn't a standard terminology. Ryan Budney Jan 30 '11 at 6:40 add comment According to the definitions in your comment, the closure of the 2 stranded braid with braid word $\sigma_1^6$ is not proper, since the closure is a 2 component link with linking number 3. It's hard to think of a more straightforward definition than what murakami says, but if you want examples, any link with all pairwise linking numbers even is proper. If you want odd linking number consider three fibers of the Hopf fibration. Thanks a lot. Nice, concise answer. tuppsphd Jan 30 '11 at 8:17 add comment
{"url":"http://mathoverflow.net/questions/53731/are-braid-links-proper-links?sort=oldest","timestamp":"2014-04-17T15:29:50Z","content_type":null,"content_length":"54308","record_id":"<urn:uuid:6c81055e-3041-4327-a67d-e2f5d8cf15b1>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
Net Positive Suction Head: NPSHR and NPSHA Written by: Joe Evans, Ph.D. In Pumps & Systems January 2007, I wrote an article about cavitation and how a collapsing water vapor bubble can damage an impeller. Since then, I have received a number of requests to address Net Positive Suction Head (NPSH) and its relationship to cavitation. Here it is in a very simple, Pump Ed 101 perspective. The process of boiling is not as simple as it may seem. We tend to think that it is all about temperature and often forget that pressure has an equal role in the process. The point at which water boils is proportional to both its temperature and the pressure acting upon its surface. As pressure decreases, so does the temperature required to initiate boiling. The onset of cavitation also follows this rule. When water-at some ambient temperature-travels through an area of low pressure, it can undergo a change of state from liquid to vapor (boiling). As it progresses into an area of higher pressure, it will return to the liquid state (cavitation). The bubbles that form and collapse during this process are those of water vapor-not air. Although dissolved or entrained air can affect pump performance, it produces a totally different kind of bubble than the one produced by boiling. The fact that boiling is proportional to both temperature and pressure is the reason cavitation is such a persistent problem. Simply stated, water can boil at virtually any temperature. At sea level, where atmospheric pressure is about 14.7-psi (34-ft), it takes 212-deg F. Increase that elevation to 6,000-ft and it drops to around 200-deg F because the corresponding atmospheric pressure decreases to 11.7-psi (27-ft). If we introduce a vacuum and continue to reduce pressure to about 0.2-ft, it will boil at its freezing point. Well, so what? We don't usually operate a pump in a vacuum, and even at the top of Mt. Everest we still have almost 5.2-psi (12-ft) of atmospheric pressure! Well, it turns out that all centrifugal pumps produce a partial vacuum. If they did not, they would be unable to pump water from a lower level. During normal operation, the area of lowest pressure occurs near the impeller vane entrances, and if the pressure in this area drops to about 1-ft, water will boil at 75-deg F! For a pump to operate cavitation free, an excess of pressure energy is required of the water entering this area. We typically refer to this requirement as NPSHR, or the NPSH required. Where does this pressure energy come from? It is a combination of several different forms of energy that exist, at various levels, on the suction side of the pumping system. We refer to this available pressure energy as NPSHA, or the NPSH available. The NPSH available to a centrifugal pump combines the effect of atmospheric pressure, water temperature, supply elevation and the dynamics of the suction piping. The following equation illustrates this relationship. All values are in feet of water, and the sum of these components represents the total pressure available at the pump suction. NPSHA = Ha +/- Hz - Hf + Hv - Hvp Ha is the atmospheric or absolute pressure Hz is the vertical distance from the surface of the water to the pump centerline Hf is the friction formed in the suction piping Hv is the velocity head at the pump's suction Hvp is the vapor pressure of the water at its ambient temperature Ha is the atmospheric or absolute pressure exerted on the surface of the water supply. Atmospheric pressure is the pressure due to the density of the earth's atmosphere at some elevation. It develops its greatest pressure (14.7-psi) at sea level (where it is most dense) and approaches zero at its upper boundary. We seldom think about this pressure because, out of the box or on the work bench, the typical pressure gauge reads 0-psi. These gauges are calibrated to something we call "gauge" scale (PSIG) and totally ignore atmospheric pressure. Gauges calibrated to the "absolute" scale (PSIA) include atmospheric pressure and will read 14.7-psi at sea level. The figure below compares these two pressure scales. On the absolute scale, 0-psi equates to a perfect vacuum, but on the gauge scale it equates to atmospheric pressure. If the water source is a reservoir or an open (or vented) tank, Ha is simply the measured atmospheric pressure. It takes on another dimension if the supply is an enclosed, unvented tank. In this case, Ha becomes the absolute pressure or the sum of the measured atmospheric pressure plus or minus the actual gauge pressure of the air in the tank. Hz takes into account the positive or negative pressure of the water source due to its elevation. If it is above the pump, Hz is a positive number and if it is below, Hz is negative. Hf is simply the friction generated due to flow in the suction piping and is always a negative number. It is a function of the pipe length and diameter plus the fittings and valves it incorporates. Hv and Hvp may be a little less familiar to some of us. Hv, or velocity head, is the kinetic energy of a mass of water moving at some velocity V. It is equivalent to the distance that water would have to fall in order to reach that velocity. It can be calculated by determining the velocity in the suction piping from a velocity table and substituting that value for V in the equation "h = V^2/ 2g" (where g is the universal gravitational constant, 32-ft/sec^2). It is usually small-at a velocity of 7-fps, Hv is just 0.765-ft-and is often ignored if Ha and Hz are sufficiently large. Related Stories
{"url":"http://www.pump-zone.com/topics/net-positive-suction-head-npshr-and-npsha","timestamp":"2014-04-16T10:16:30Z","content_type":null,"content_length":"53668","record_id":"<urn:uuid:ea8aa773-d3be-4dfb-940f-f81c7c72210d>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
Inverse hyperbolic cotangent: 3D plots over the complex plane 3D plots over the complex plane Entering the complex plane Upper picture: in the upper half of the near the real axis viewed from the lower half‐plane. Lower picture: in the lower half of the near the real axis viewed from the upper half‐plane. Here the complex variable is expressed as . The red surface is the real part of . The blue, semi‐transparent surface is the imaginary part of . The pink tube is the real part of the function along the real axis and the skyblue tube is the imaginary part of the function along the real axis. At , the function has logarithmic singularities. Along the real axis outside the interval , the imaginary part of vanishes identically; going away from the real axis into the gives a function that approaches 0. Along the real axis, the imaginary part of is piecewise constant; going away from the real axis into the is a function that approaches . The imaginary part is discontinuous along the branch cuts . The imaginary part has lower lip continuity in the interval and upper lip continuity in the interval . Branch cuts The real part and the imaginary part of over the . The left graphic shows and the right graphic shows . Along the intervals the function has branch cuts. The imaginary part has discontinuities along the branch cuts. At , the function has logarithmic branch points. The real part and the imaginary part of over the . The left graphic shows and the right graphic shows . The viewpoint is from the lower half‐plane. is a regular point of . The branch cuts of the real part and the imaginary part of over the . The left graphic shows and the right graphic shows . The red and blue vertical surfaces connect points from the immediate lower and upper neighborhoods of the branch cuts. The branch points at are logarithmic branch points. Only the imaginary part shows discontinuities due to the branch cuts. The viewpoint is from the upper The branch cuts of the real part and the imaginary part of over the . The left graphic shows and the right graphic shows . The red and blue vertical surfaces connect points from the immediate lower and upper neighborhoods of the branch cuts. is a regular point of . The viewpoint is from the lower half‐plane. Real part over the complex plane The real part of where . The surface is colored according to the imaginary part. The right graphic is a contour plot of the scaled real part, meaning the height values of the left graphic translate into color values in the right graphic. Red is smallest and violet is largest. The function has logarithmic singularities at ; going away from the real axis into the upper half of the gives a function that asymptotically approaches 0. The absolute value of the real part of where . The surface is colored according to the absolute value of the imaginary part. The right graphic is a contour plot of the scaled absolute value of the real part, meaning the height values of the left graphic translate into color values in the right graphic. Red is smallest and violet is largest. The function has logarithmic singularities at ; going away from the real axis into the upper half of the gives a function that asymptotically approaches 0. Imaginary part over the complex plane The imaginary part of where . The surface is colored according to the real part. The right graphic is a contour plot of the scaled imaginary part, meaning the height values of the left graphic translate into color values in the right graphic. Red is smallest and violet is largest. Along the real axis, the imaginary part of is piecewise constant; going away from the real axis into the gives a function that approaches . is a discontinuous function over the along the interval . The branch points at are logarithmic branch points. The absolute value of the imaginary part of where . The surface is colored according to the absolute value of the real part. The right graphic is a contour plot of the scaled absolute value of the imaginary part, meaning the height values of the left graphic translate into color values in the right graphic. Red is smallest and violet is largest. Along the real axis, the imaginary part of is piecewise constant; going away from the real axis into the gives function that approaches . is a discontinuous function over the along the interval . The branch points at are logarithmic branch Absolute value part over the complex plane The absolute value of where . The surface is colored according to the argument. The right graphic is a contour plot of the scaled absolute value, meaning the height values of the left graphic translate into color values in the right graphic. Red is smallest and violet is largest. The logarithmic singularities at are clearly visible. Argument over the complex plane The argument of where . The surface is colored according to the absolute value. The right graphic is a contour plot of the scaled argument, meaning the height values of the left graphic translate into color values in the right graphic. Red is smallest and violet is largest. has lines of discontinuities over the . The square of the sine of the argument of where . For dominantly real values, the function values are near 0, and for dominantly imaginary values, the function values are near 1. The surface is colored according to the absolute value. The right graphic is a cyclically colored contour plot of the argument. Red represents arguments near and light‐blue represents arguments near 0. Zero-pole plot The logarithm of the absolute value of where in the upper half‐plane. The surface is colored according to the square of the argument. In this plot zeros are easily visible as spikes extending downwards and poles and logarithmic singularities as spikes extending upwards.The logarithmic branch points at are visible. Real part over the complex plane near infinity The real part of where . The surface is colored according to the imaginary part. The right graphic is a contour plot of the scaled real part, meaning the height values of the left graphic translate into color values in the right graphic. Red is smallest and violet is largest. At , the function has no singularity. The absolute value of the real part of where . The surface is colored according to the absolute value of the imaginary part. The right graphic is a contour plot of the scaled absolute value of the real part, meaning the height values of the left graphic translate into color values in the right graphic. Red is smallest and violet is largest. Imaginary part over the complex plane near infinity The imaginary part of where . The surface is colored according to the real part. The right graphic is a contour plot of the scaled imaginary part, meaning the height values of the left graphic translate into color values in the right graphic. Red is smallest and violet is largest. The absolute value of the imaginary part of where . The surface is colored according to the absolute value of the real part. The right graphic is a contour plot of the scaled absolute value of the imaginary part, meaning the height values of the left graphic translate into color values in the right graphic. Red is smallest and violet is largest. Absolute value part over the complex plane near infinity The absolute value of where . The surface is colored according to the argument. The right graphic is a contour plot of the scaled absolute value, meaning the height values of the left graphic translate into color values in the right graphic. Red is smallest and violet is largest. Argument over the complex plane near infinity The argument of where . The surface is colored according to the absolute value. The right graphic is a contour plot of the scaled argument, meaning the height values of the left graphic translate into color values in the right graphic. Red is smallest and violet is largest. The square of the sine of the argument of where . For dominantly real values, the functions values are near 0, and for dominantly imaginary values, the function values are near 1. The surface is colored according to the absolute value. The right graphic is a cyclically colored contour plot of the argument. Red represents arguments near and light‐blue represents arguments near 0. Zero-pole plot near infinity The logarithm of the absolute value of where in the upper half‐plane. The surface is colored according to the square of the argument. In this plot zeros are easily visible as spikes extending downwards and poles and logarithmic singularities as spikes extending upwards.
{"url":"http://functions.wolfram.com/ElementaryFunctions/ArcCoth/visualizations/5/ShowAll.html","timestamp":"2014-04-19T17:13:24Z","content_type":null,"content_length":"83167","record_id":"<urn:uuid:b2ece7f6-572f-483f-8d9d-ce55f8cb6683>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
Where is this statement true? October 20th 2011, 02:07 PM Where is this statement true? So, my calculus teacher just said in class today that, fxy = ((δ^2)f)/(δxδy) is true. Where exactly is this true? October 20th 2011, 02:13 PM Re: Where is this statement true? Not sure exactly what you're saying, but it brings this to mind. Symmetry of second derivatives - Wikipedia, the free encyclopedia October 21st 2011, 09:29 AM Re: Where is this statement true? I think what was meant was that $f_{xy}= \frac{\partial^2 f}{\partial x\partial y}$. If that is what was meant, it is always true- those are different notations for the same thing. If those really are $\delta$ then I would guess that they refer to "small" changes in x, y, and f and so that is an approximation to $f_{xy}$ but is exact in the case that f is a linear function of x and y. A third possibility is that your teacher was saying that $f_{xy}= \frac{\partial^2 f}{\partial x\partial y}= f_{yx}$ which is the same as saying that $\frac{\partial^2 f}{\partial x\partial y}= \ frac{\partial^2 f}{\partial y\partial x}$- that is that the "mixed" second derivative are equal- independent of the order of differentiation. That is true as long as the second partial derivatives are continuous. October 21st 2011, 11:42 AM Re: Where is this statement true? Hello, bagels0! $\text{So my calculus teacher said: }\:f_{xy} \:=\: \frac{\partial^2\!f}{\partial x\,\partial y}$ $\text{Where exactly is this true?}$ What an UGLY way to write that identity! Note that: . $f_{xy} \:=\:\frac{\partial f}{\partial y}\left(\frac{\partial f}{\partial x}\right) \:=\:\frac{\partial^2\!f}{\partial y\,\partial x}$ And so the claim is that: . $\begin{Bmatrix} \dfrac{\partial^2\!f}{\partial y\,\partial x} \:=\: \dfrac{\partial^2\!f}{\partial x\,\partial y} \\ \text{or} \\ f_{xy} \:=\:f_{yx} \end{Bmatrix}$ This is true for all functions, $f(x,y).$
{"url":"http://mathhelpforum.com/calculus/190891-where-statement-true-print.html","timestamp":"2014-04-17T01:18:29Z","content_type":null,"content_length":"8906","record_id":"<urn:uuid:9555401e-afd0-4dc1-8cfe-3f7e1d06983d>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Two Sudoku Problems Here are two Sudoku meta-problems I have been thinging about for a while. The first is about the normal form of a sudoku. The rules of sudoku have a huge symmetry group of order (3!)^8 x 2 x 9! = 1.22E12. It is generated by permutations of rows in a group of three rows, permutations of these groups, same thing for columns, transposition of the grid and permutations of the numbers 1,...,9 which are just labels for nine different things. So for each grid, there are 1.22E12 grids which are basically the same. Is there an easy way to determine if two puzzles are related by the symmetry and even better, is there a normal form (a distinct element for each orbit)? There is a trivial solution to this problem: You just write out all 10^12 puzzles you obtain by acting with the symmetry group and then sort them according to some lexicographic ordering. This gives a normal form. What I am asking for is a more direct construction. One which sounds more like: Use the 9! permutations of the symbols to make the first row read 123456789. Then use row permutations to make the square below the 1 as small as possible. Then use row permutations to make the next squares under that square as small as possible... Unfortunately, starting to permute colums screws up what was achieved with this ordering prescription. So how would a better algorithm read? The other problem is how to rate the difficulty of a puzzle? Question one really applied after the puzzle is solved, this question is about puzzles to be done. Newepaper which publish these puzzles often give ratings such as "simple", "intermediate", "hard". But I found these ratings differ significantly between papers (what Zeit online considers hard is much much easier compared to what The Guardian calls a hard sudoku) but are also not consistent amongst themselves. Earlier, I have talked about the perl program I wrote to solve sudokus. It recursively figures out for all squares which numbers are still allowed and then takes the square with the least number and tries to put these numbers there. If there is an empty square with no allowed numbers remaining it backtracks. Thus the search can be represented by a tree where each node represents a square to be filled out and there are as many branches from that node as there are numbers which are not yet rules out. What I am looking for is a numerical rating for a puzzle which is a predictor of how hard I find to do the puzzle and for example correlates with the time it takes me to solve it. Even if I use a different strategy when doing these puzzles by hand I would expect the information could be obtained form the tree. Do you have any good idea for such a function from trees to the reals, say? Obviously the trees all have a depth given by the number of empty squares in the puzzle and each node can have at most nine branches but typically has much less (even for "hard" puzzles most of the nodes have only one branch). An easy guess is of course the number of nodes or the number of leaves but I found those at least not be proportional to my manual solution time. To give you an idea: Today's hard puzzle from Die has 52 nodes (four times the program encounters situtations with two possibilities, all others are unique or dead ends, manually it took me exactly 6:30) while has 2313 nodes and took me well over an hour some months ago. Of course, if early on you have several possibilities and learn only much later which ones do not work this is much worse than having many possibilities which are ruled out immediately. UPDATE: In case anybody is interested, I put up the decision tree for the difficult puzzle. 4 comments: Just another 17 clue puzzle I haven't thought about this for very long, but are you sure that for the Sudoku symmetry group the operations you named are actually all independent generators? I was wondering whether the operation of renaming digits 1-9 couldn't possibly always, or at least sometimes, be reformulated as a sequence of row and column permutations, given that the entries in each row, column and square are distinct. In that case the real symmetry group would be reduced by some relations between the different generators. As far as I can tell this is not a 17 but an 81 clue puzzle and thus even Die Zeit would not rate it as "hard". Georg raises an interesting point: I think what I wrote about the symmetry group of all sudokus is correct. However, not all orbits will be of this full size, some puzzles will not change under specific transformations. You could imagine a puzzle where a permutation of row blocks could be undone by a relabeling of the symbols for example. These transformations which leave a specific puzzle invariant form a subgroup of the full symmetry group called the stabiliser. This opens the possibility for a third sudoku problem: Find a puzzle with a large (the largest?) stabiliser! I've read that most 'zines use a group of basic, human tactics as a measure for difficulty of the puzzle. (there's actually a solid library of quite a _lot_ of tactics... a puzzle that requires, from any some set, a particular strategy, requires that tactic's level of abstraction: so it isn't entirely ill-founded.) As for your difficulty rating not scaling, just some thoughts (I'm no mathematician so forgive me): can you combine leaf-count with backtrackings required? Backtracking by itself may not show difficulty correctly, because it is required in places where, when working the puzzle yourself, you would just use an abstraction (for hard puzzles in the Washington Post, it is common-place to have to look at a 3x3 cell and mark possibilities, such that one square may just happen to only have one possibility: it's the pigeon hole principle!) So you need some further refinement..
{"url":"http://atdotde.blogspot.com/2006/11/two-sudoku-problems.html","timestamp":"2014-04-19T11:56:28Z","content_type":null,"content_length":"72807","record_id":"<urn:uuid:0f3fac6c-3bdc-422f-86d7-316602d407ef>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
Freckle Face This lessson initiates the development of skills in collecting and recording data. Students collect data about a partner's face and tally the data from the whole class. They learn the convention for displaying a set of five using tally marks. Then students create a pictograph and pose and answer questions about the data set. To assess prior knowledge, ask the students wearing sneakers to stand and form a column in a free space within the room. Ask the students not wearing sneakers to stand and form a column alongside the column of students wearing sneakers. (If all students are wearing sneakers, choose another classification system, such as glasses/no glasses.) Explain to the students that they have just collected and displayed data. Record the data on a line plot on the board. Have the students return to their seats. Ask questions to focus the students' attention on the information displayed on the line plot. This exercise will help you determine the students' experiences and knowledge with regard to collecting and recording data. If it is possible, read the book Freckle Juice by Judy Blume on the day of the lesson. If this book is not available, read or tell another story about a child with freckles. Call the students together and tell them that they are to select a partner and look at their partner to determine if he or she has freckles. Ask the students to describe where on the face freckles usually are found. Draw on the board or chart paper a copy of the tally chart below. Introduce the convention for grouping five tally marks for easier counting. Invite the students, one at a time, to place a tally mark in the correct row to describe their partner's face. When all the students have recorded the data, call on a volunteer to count the number of tallies in each row and record the number at the end of the row. Give each student a self-stick note. Ask the students to draw pictures of their faces, with freckles if they have them and without freckles if they do not have them. Now explain that they will show the data another way--using a pictograph rather than tally marks. (A pictograph is a graph that uses pictures to show data.) Draw two lines on the board and tell the students that the top line will show how many students in the room have freckles; label the far-left column "Have Freckles." Then ask the students what the second line should be labeled, and enter their suggestion in the far-left column in the second line of the grid. Have Freckles Do Not Have Freckles When the students are ready, invite them to place their drawings on the pictograph, being careful that each drawing abuts the one before it. Now ask a student to count the number of faces in each row and to write the total amount at the end of the line. Encourage the students to formulate questions that can be answered by looking at the pictograph. Collecting and keeping student work samples will allow you to review the students' growth over time, assess their understanding of mathematical concepts, and address any areas of misconception or lack of knowledge. Ask the students to attach their self-stick note to an index card and to write on the card two sentences that describe this lesson. This card might be a suitable first entry for a portfolio of work completed and assessed during this unit of study. If it is appropriate for their level, ask the students to include a subtraction sentence that describes the comparison of the two categories, "Freckles" and "No Freckles." • Paper • Chart paper • Crayons and markers • Self-stick notes • Index cards • Book: Freckle Juice by Judy Blume 1. To facilitate a rich class discussion, use the Questions for Students. You may wish to take anecdotal records on students' responses to these questions as you plan your instruction for the rest of this unit. 2. As you focus on individual accomplishments, you may wish to record these accomplishments on the Class Notes recording sheet. You may find this information useful when discussing the students' progress toward learning targets with parents, administrators, colleagues, and the students themselves. Questions for Students 1. Can you name the two categories we collected data about? How did we show what we found out? How else did we show it? [Freckles, no freckles; Using tally marks; We also used a pictograph.] 2. How did we make it easier to count the tally marks in the tally chart? Why did that step make it easier? [We used a diagonal line to show 5; we can count by 5s.] 3. What questions can you answer from looking at the tally chart? [Questions may include: How many more students had no freckles than freckles? What is the difference between the two groups?] 4. Look at the pictograph that we made. Which row contained more pictures? What information does that provide for you about students in our class and freckles? [Answers will depend upon the class data set.] 5. How is a tally chart like a pictograph? How is it different? [Both show categorical data; tally charts use tally marks, and pictographs use pictures to represent numbers.] 6. How would you describe making a pictograph to a friend? [Student responses may vary, but they should be able to explain the basic process of creating a pictograph.] Teacher Reflection • Which students were able to make single tallies? Which students understood how and why tallies are collected in groups of five? • Were all the students able to contribute to the creation of the tally chart? The pictograph? • Were all the students able to answer questions from the tally chart? From the pictograph? • Which students did not meet the objectives of this lesson? What instructional experiences do they need next? • Would I make any adjustments the next time that I teach this lesson? Students collect data about the eye color of class members. They create bar graphs with several classifications of data. They pose and answer questions about the data by looking at the graph, and they find the range and mode. In this lesson students generate bar graphs. Posing and answering questions using the graphs gives them an opportunity to apply their reasoning and communication skills. They also consider whether a given category is likely, certain, or impossible. In this lesson, students learn a powerful way to display data—using a glyph. They collect data and create pictures using the data. Students also interpet glyphs made by other students. Learning Objectives Students will: • Collect and tally real-world data • Classify data according to a given attribute • Create pictographs • Pose questions about the data set that can be answered from the representations Common Core State Standards – Mathematics Grade 1, Measurement & Data • CCSS.Math.Content.1.MD.C.4 Organize, represent, and interpret data with up to three categories; ask and answer questions about the total number of data points, how many in each category, and how many more or less are in one category than in another. Grade 2, Measurement & Data • CCSS.Math.Content.2.MD.D.10 Draw a picture graph and a bar graph (with single-unit scale) to represent a data set with up to four categories. Solve simple put-together, take-apart, and compare problems using information presented in a bar graph.
{"url":"http://illuminations.nctm.org/Lesson.aspx?id=545","timestamp":"2014-04-20T08:16:45Z","content_type":null,"content_length":"74478","record_id":"<urn:uuid:6e110d84-4136-414e-ba04-a945b1da6785>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
University Place Geometry Tutor Find an University Place Geometry Tutor ...Lastly, reading comprehension skills and vocabulary skills play a big part in successfully navigating the reading tasks. I have worked with new learners, students with dyslexic patterns and advanced students in reading skills. I have taught SAT prep classes for several years, and find that the SAT challenges students to use several types of Math knowledge in new and innovative ways. 12 Subjects: including geometry, chemistry, GRE, reading ...I have a PhD from UC Berkeley in Infectious Diseases & Immunity, where I completed a dissertation describing the role of coding-region RNA elements and their function in regulating the viral life cycle of the Dengue virus. I currently work on genetic manipulation and characterization of proteins... 25 Subjects: including geometry, Spanish, writing, chemistry ...I am pursuing my degree in Early Childhood Education, and in the mean time, I am working in a child care program through the YMCA. I work with children from K to 5th grade, and aside from providing a safe place for kids to stay after school while their parents work, I provide special one-on-one ... 16 Subjects: including geometry, reading, Spanish, algebra 1 ...I have taught English in Slovakia and Poland, and I have also taught an undergraduate psychology course. As a teacher I am patient and knowledgeable. I set myself high standards, and I know how to present complicated information in a simple and straightforward way. 56 Subjects: including geometry, English, chemistry, GED ...I can tutor any level of math from arithmetic to intermediate algebra, and even a little bit of precalculus. I also do beginning Spanish and some computer work (Word, Excel, PowerPoint). I'm fairly low-key, and students tend to feel at ease around me. I have a lot of patience and don't mind explaining a concept as many times as necessary. 8 Subjects: including geometry, algebra 1, Microsoft Excel, algebra 2
{"url":"http://www.purplemath.com/University_Place_geometry_tutors.php","timestamp":"2014-04-21T07:19:25Z","content_type":null,"content_length":"24288","record_id":"<urn:uuid:4c3ac6cb-e669-4497-8fa8-605ee0960e0a>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
Vector Derivatives April 4th 2010, 05:35 PM #1 Apr 2010 Vector Derivatives Given a vector field V(x,y,z) = (xi + yj + zk)/(r^3) where r = sqrt(x^2 + y^2 + z^2) a) what is the x,y,z components of V(x,y,z) b) partial derivatives of each component do i substitute the r^3 with the sqrt thing or i just leave it as r^3 when i find the x,y,z components and derivatives? Given a vector field V(x,y,z) = (xi + yj + zk)/(r^3) where r = sqrt(x^2 + y^2 + z^2) a) what is the x,y,z components of V(x,y,z) b) partial derivatives of each component do i substitute the r^3 with the sqrt thing or i just leave it as r^3 when i find the x,y,z components and derivatives? I think you can safely leave as $r^3$: as long as it is clear what it is who cares? Now, $r^3=\left(x^2+y^2+z^2\right)^{3\slash 2}\Longrightarrow \frac{d(r^3)}{dx}=3x\sqrt{x^2+y^2+z^2}$ , and the same with the other two variables but instead $3x$ we have $3y,\,3z$ , resp. , so: $V(x,y,z)=\left(\frac{x}{r^3},\frac{y}{r^3},\frac{z }{r^3}\right)$$\Longrightarrow \frac{dV}{dx}=\left(\frac{r^3-x\frac{d(r^3)}{dx}}{r^6},\frac{-y\frac{d(r^3)}{dy}}{r^6},\frac{-z\frac{d(r^3)} {dz}}{r^6}\right)$ . if i wanna find the curl and divergence of v(x y z)? leave it as r too? April 5th 2010, 07:23 AM #2 Oct 2009 April 8th 2010, 08:14 AM #3 Apr 2010
{"url":"http://mathhelpforum.com/calculus/137287-vector-derivatives.html","timestamp":"2014-04-21T12:14:15Z","content_type":null,"content_length":"36847","record_id":"<urn:uuid:84ebf305-d753-42f6-901a-5b40716fe9cd>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Marblehead Statistics Tutor Find a Marblehead Statistics Tutor ...I have a PhD in physics and was a mathematics olympiad winner in high school. I've always been extremely good at math. I've taught all the math subjects that are included within the GMAT, including algebra, geometry, and word problems. 47 Subjects: including statistics, chemistry, reading, calculus ...Finally you learn about the wide variety of real world situations that can be modeled to predict future outcomes from current data. Calculus is the study of rates of change, and has numerous and varied applications from business, to physics, to medicine. The complexity of the topics involved however, require that your grasp of mathematical concepts and function properties is strong. 23 Subjects: including statistics, physics, calculus, geometry ...I have several years part-time experience holding office hours and working in a tutorial office. I have a BA and an MA in mathematics. In both degrees the focus was on discrete math, logic, and 29 Subjects: including statistics, reading, English, writing ...In addition I have planned note-taking workshops, and am able to effectively teach study habits and strategies. Before I was at SSU I was a student at an engineering school, and have a strong math background. I am committed to helping my current students, often going above and beyond to make sure that they succeed in their class. 4 Subjects: including statistics, algebra 2, study skills, prealgebra I am a retired university math lecturer looking for students, who need experienced tutor. Relying on more than 30 years experience in teaching and tutoring, I strongly believe that my profile is a very good fit for tutoring and teaching positions. I have significant experience of teaching and ment... 14 Subjects: including statistics, calculus, ACT Math, algebra 1
{"url":"http://www.purplemath.com/marblehead_ma_statistics_tutors.php","timestamp":"2014-04-18T23:25:46Z","content_type":null,"content_length":"24074","record_id":"<urn:uuid:630fb87c-4714-4006-8da3-22d66810e52d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 1,940 Is this right?? Stats.... How many green elements are required to make this a legitimate probability distribution if there are a total of 50 elements in this sample? x red blue orange brown green P(x) 0.20 0.16 0.28 0.24 Would it be 0.12?? PLZ help!! Stats.... In testing a new drug, researchers found that 10% of all patients using it will have a mild side effect. A random sample of 14 patients using the drug is selected. Find the probability that: (A) exactly two will have this mild side effect (B) at least three will have this mild... How many green elements are required to make this a legitimate probability distribution if there are a total of 50 elements in this sample? x red blue orange brown green P(x) 0.20 0.16 0.28 0.24 Would it be 0.11?? In testing a new drug, researchers found that 10% of all patients using it will have a mild side effect. A random sample of 14 patients using the drug is selected. Find the probability that: (A) exactly two will have this mild side effect (B) at least three will have this mild... Classify the following as discrete or continuous random variables. (A) The time it takes to run a marathon (B) The number of fractions between 1 and 2 (C) A pair of dice is rolled, and the sum to appear on the dice is recorded (D) The length of a broad jump I got: (A) Continuo... A bag of colored blocks contains the following assortment of colors: red (18), blue (14), orange (20), purple (14), green (10), and yellow (4). Construct the probability distribution for x. I'm so PLZ HELP....STATS You are given the following data. # of Absences Final Grade 0 96 1 92 2 71 3 66 4 60 5 51 A. Find the correlation coefficient for the data. B. Find the equation for the regression line for the data, and predict the final grade of a student who misses 3.5 days. NEED HELP PLZ!!! STATS Three cards are selected, one at a time from a standard deck of 52 cards. Let x represent the number of tens drawn in a set of 3 cards. (A) If this experiment is completed without replacement, explain why x is not a binomial random variable. (B) If this experiment is completed... USA snapshot presented a bar graph depicting business travelers impression of wait times in airport security lines over the past 12 months. Statistics were derived from Travel Industry Association of American Business Traveler Survey of 2034 respondents. Is this a probab... You are given the following data. # of Absences Final Grade 0 96 1 92 2 71 3 66 4 60 5 51 A. Find the correlation coefficient for the data. B. Find the equation for the regression line for the data, and predict the final grade of a student who misses 3.5 days. Three cards are selected, one at a time from a standard deck of 52 cards. Let x represent the number of tens drawn in a set of 3 cards. (A) If this experiment is completed without replacement, explain why x is not a binomial random variable. (B) If this experiment is completed... Oh....I understand a little better now...thank you! The linear correlation coefficient is a numerical value that ranges from -1.00 to +1.00. Describe in a sentence or two the meaning of each of these correlation coefficients: a) -1.00 b) +1.00 So they are all probability distributions? USA snapshot presented a bar graph depicting business travelers impression of wait times in airport security lines over the past 12 months. Statistics were derived from Travel Industry Association of American Business Traveler Survey of 2034 respondents. Is this a probab... The following data lists the average monthly snowfall for January in 15 cities around the US: 29 26 6 46 17 34 15 45 19 43 37 6 9 33 37 Find the mean, variance, and standard deviation. Plz help me Sue!! oh....so i get x = 53?? Plz help me Sue!! Did i do that correctly?? Plz help me Sue!! 81 = (x + 352)/5 81 = 352x/5 81 = 70.4x x = 1.15?? Plz help me Sue!! Starting with the data values 70 and 100, add three data values to the sample so that the mean is 81, the median is 91, and the mode is 91. How do i do this?? Starting with the data values 70 and 100, add three data values to the sample so that the mean is 81, the median is 91, and the mode is 91. How do i do this?? Stats....plz help.... can u plz help me with the other problem i posted? i need help so bad! thank u!! Stats....plz help.... ok thank u!! Stats....plz help.... 5.04? but how did u come up with multiplying it by 9? because it's the highest number? Stats....plz help.... Stats....plz help.... Rank the following data in increasing order and find the position and value of the 56th percentile. Please show all of your work. 0 1 6 8 9 9 8 6 3 1 0 9 How do i find the position and value of the 56th percentile?? Math-confused...plz help I. Use Chebyshev s theorem to find what percent of the values will fall between 183 and 227 for a data set with a mean of 205 and standard deviation of 11. II. Use the Empirical Rule to find what two values 99.7% of the data will fall between for a data set with a mean of... Please tell me if i did this correct Given the following frequency distribution, find the mean, variance, and standard deviation. Please show all of your work. Errors Frequency 51-53 9 54-56 19 57-59 16 60-62 24 63-65 25 This is what I got: Mean = 59.19 Variance = 1.43 Standard Deviation = 2.05 still confused.... Start with x=75 and add four x values to make a smaple of five data such that the standard deviation of these data equals 0. so the SD = 30?? V = 30^2 = 900?? A set of 50 data values has a mean of 15 and a variance of 36. Find the standard score of a data value = 30 001136688999?? How do I find the position and value of the 56th percentile? Rank the following data in increasing order and find the position and value of the 56th percentile. Please show all of your work. 0 1 6 8 9 9 8 6 3 1 0 9 yay!! thank u!! Z = 64-51/5.0 = 13/5.0 = 2.6 Z = 47-51/5.0 = -4/5.0 = -0.8 Is this correct? A Math test has a mean of 51 and standard deviation of 5.0. Find the corresponding z scores for: I. a test score of 64 II. a test score of 47 okay so the first would be 0.75 correct? the second one would be 3 = -16.46 -3 = -16.46 is this correct? 1. Z = (s-205)/11 = 205/11 = 18.63??? This one confuses me 2. I have no idea how to do this.... Please help!! I. Use Chebyshev s theorem to find what percent of the values will fall between 183 and 227 for a data set with a mean of 205 and standard deviation of 11. II. Use the Empirical Rule to find what two values 99.7% of the data will fall between for a data set with a mean of... Stats.....plz help me..... Given the following frequency distribution, find the mean, variance, and standard deviation. Please show all of your work. Errors Frequency 51-53 9 54-56 19 57-59 16 60-62 24 63-65 25 How are z scores and the "empirical rule" related? Math Help looks correct to me Given the following frequency distribution, find the mean, variance, and standard deviation. Please show all of your work. Errors Frequency 51-53 9 54-56 19 57-59 16 60-62 24 63-65 25 Can someone help me please? Statistics confuses me....thank you for your help :) Can someone help me please? A set of 50 data values has a mean of 15 and a variance of 25. Find the standard score of a data value = 20 Help plz!! Math.... hmmmm....this is so confusing for me.... Help plz!! Math.... I apologize...thank you....so is it A data point of 20 is one standard deviation above the mean of 15, so the standard or "z" score is 1.0 misin ideas regular aerobicise an individual more endurance tha show cause and effect When a 5.50g sample of solid sodium hydroxide dissolves in 100.0 g of water in a coffee cup calorimeter, the temperature of rises from 21.6 degrees C to 37.8 degrees C. Calculate Change of heat for the reaction in kJ (also in kJ/mol NaOH) for the solution process. Assume the s... Find the length of the golden rectangle whose width is 14.72 Fibonacci numbers I have been working on this question all day and can not figure out what to do. The question says according to the Quadratic Formula, the solutions of x^2-x-1=0 I know the positive solution is the golden rule ratio and the negative solution is the conjugate of 0(with a slash t... algebra - reply to answer why is this a poor model for hitting a baseball? y = -0.002x^2 + 0.879x + 3.981 I know it gets hit at approx. 4 feet high. The -a gives the right shape like a hill. There's a -4.48 and 443.95 zero, which can be hit for distance. The max is (219.75, 100) so that seems doabl... algebra Please Help New question: A baseball was 2.74 feet above ground when it was hit. It reached a max. height of 116.3ft when it was approx 215.3 ft away from where he hit the ball. The ball lands after travelling a ground distance of approx 433.1 ft. Find an equation of form y = A(x-h)^2 + k... why is this a poor model for hitting a baseball? y = -0.002x^2 + 0.879x + 3.981 I know it gets hit at approx. 4 feet high. The -a gives the right shape like a hill. There's a -4.48 and 443.95 zero, which can be hit for distance. The max is (219.75, 100) so that seems doabl... Suppose the ball was 2.74 feet above ground when it was hit. It reached a max. height of 116.3ft when it was approx 215.3 ft away from where he hit the ball. The ball lands after travelling a ground distance of approx 433.1 ft. -- find an equation in form of y=C(x-z1)(x-z2) wh... Baseballs cost $5 and baseball gloves cost $20. Assume you have $100 total to spend on these items. Construct a table similar to the one on page 158. What is the point, based on the Equimarginal Rule, that has equal marginal benefit (or the closest) for the two purchases? Can DNA control how many teeth we will have? Fibonacci Numbers Thank you Jennifer! I just figured out where I went wrong in wording the problem. a= 2x8=16 b=2x15=30 c=9+25=34 16^2+30^2=34^2 256+900=1156 1156=1156 Thank you for helping to guide me in the right direction. One more problem to go by Friday...I just may be calling on you! Fibonacci Numbers I think it is the fibonacci numbers and the wording of the word problem. I thought I had to add 4 and 2 together to get the 6 and then square it. I am just confused. Thanks Fibonacci Numbers This is not a test question, but I did not know what the appropriate topic for my question is. If I used the terms 2, 3, 4, 8 consecutive terms of the Fibonacci sequence. "a" is the first and fourth terms and "b" is twice the product of the second and third... Hi there! I have this question for my bio/genetics homework, and I can't seem to figure it out. It reads, "A true-breeding red snapdragon is crossed to a true-breeding white snapdragon. The F1 progeny are all red. When the F1 is selfed, the following F2 progeny are ob... Fibonacci numbers I have three problems to answer dealing with Fibonacci numbers. I understand the first two, but would like for reassurrance that they are correct and lost on the last one and could use some help. Thank you. 1. Fibonacci numbers can not be used more than once. Find sums for num... For a distribution in which the mean is 100 and the standard deviation is 12, find the following: the percentile of a score of 106. start the shopper at (0,0) After pushing the cart 43 m south, he's now at (0, -43) After turning west and pushing the cart 15 m, he's now at (-15, -43) Now he has to turn 90 degrees again. . . he will then be going either north or south. . . south will result in the la... Looks right to me! A 73 kg student traveling in acar with a constant velocity has a kinetic energy of 1.3 x 10^4 J. What is the speedometer reading of the car in km/h? business law what is the gist of proximate cause? why do courts struggle with its concept? a ball is thrown up with an initial velocity 20m/s. using g=10m/s2, a) what's the maximum height it can reach. b) what is the magnitude and direction of the ball's velocity 1 second after it is thrown. c) what is the magnitude and direction of the ball's velocity 3... nursing evidence _ _ _ _ _ _ _ _ _ 1. Information used to conduct our lives _ _ _ _ _ _ _ _ _ 2. To solve a problem by collecting the necessary pieces of information and putting them together _ _ _ _ _ AND _ _ _ _ _ 3. Try several approaches until one works _ _ _ _ _ _ _ _ _ 4. Do what feels l... It still doesnt make aense because the answer comes out to 240 liters. I still do not understand. Calculate the amount of 6 M nitric acid you need to add to fully react with .5 grams of copper and add a 20 percent excess. In a right triangle ABC, C = 90°, B = 38° 35' and a = 6.434 mi. Solve for all the missing parts using the given information. (Round each answer to three decimal places for sides and to two decimal places for angle.) Never mind, I figured it out A 41 g piece of ice at 0.0 degree Celsius is added to a sample of water at 8.6 degree Celsius. All of the ice melts and the temperature of the water decreases to 0.0 degree Celsius. How many grams of water were in the sample? thought it asked for school, why does it matter anyways Suggest any limitations to the use of recrystallization as a purification method for a solid. I'm suppose to draw velocity time and acceleration time graph for a rocket that is launched vertically into the air and falls back down to its initial height. We are to assume that x axis is up. So would I draw an curved figure like a downward parabola or a straight vertic... The label on a 1-pint bottle of mineral water lists the following components. If the density is the same as pure water and you drink 3 bottles of water in one day, how many milligrams of each component will you obtain? calcium 28 ppm In the manufacturing of computer chips, cylinders of silicon are cut into thin wafers that are 2.80 inches in diameter and have a mass of 2.00g of silicon. How thick (mm) is each wafer if silicon has a density of 2.33g/cm^3? (The volume of a cylinder is V=(pi)r^{2}h.) nevermind, I just figured it out Suppose you have two 100-{\rm mL} graduated cylinders. In each cylinder there is 58.5mL of water. You also have two cubes: One is lead, and the other is aluminum. Each cube measures 2.0cm on each side. After you carefully lower each cube into the water of its own cylinder, wha... In the explosion of a hydrogen filled balloon, 0.50 g of hydrogen reacted with 4.0 g of oxygen to form how many grams of water vapor? (water vapor is the only product) A car can average 140 miles on 5 gallons of gasoline. Write an equation for the distance "d" in miles the car can tavel on "g" gallons of gas? 3/11 minutes keith scored 75,86,79,91 what score must he recieve on on the fifth test to have a mean of 85 ? You are on a boat heading out to sea. The boat is going 5m/s toward a buoy (the origin in your chosen frame of reference) that is 300 meters away. How long will it take to be 500 meters beyond the 3. Indicate whether each of the given statements could apply to a data set consisting of 1,000 values that are all different. a. The 29th percentile is greater than the 30th percentile. (True or False) b. The median is greater than the first quartile. (True or False) c. The th... how to do the problem? I'm lost . Application 3 to chapter 15 (page 340) suggests increased health care expenditures will crowd out other expenditures. What component of GDP do you think will suffer? Using that same argument, Nebraska has debated casino gambling several times. Proponents say the additional e... Find the gradient of the line joining the points (5,2) and (6,3) A theater purchases $500 worth of Sticky Bears and chocolate bombs. Each bag of Sticky Bears costs $1.50 and each bag of Chocolate Bombs costs $1.00. If a total of 400 bags of candy were purchased, how many bags of Chocolate Bombs did the theater buy? Twenty (20) students randomly assigned to an experimental group are studying for a test while listening to classical music. Thirty (30) students randomly assigned to a control group are studying for the same test in complete silence. Both groups take the test after studying fo... comp/155 week 4 assignment need help A parking lot has 5 spots remaining and 5 different cars in line waiting to be parked. If one of the cars is too big to fit in the two outermost spots, in how many different ways can the five cars be solve by the substitution method 5x+7y=6 -6x+y=21 Hum/205 college level OK, I fixed it. The history of the Western Art is a steeped in religious symbolism; but all art is in one way or another, an extension of the religious urge in humanity. However to some westerners it is often difficult to leave the comfort domain of a familiar religious iconog... Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Jennifer&page=8","timestamp":"2014-04-16T08:55:00Z","content_type":null,"content_length":"30094","record_id":"<urn:uuid:98e30d4a-1320-4d14-887a-3cc2b32e4dc3>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
Find measures of angles, massive error ! December 12th 2009, 11:39 AM Find measures of angles, massive error ! So this is the triangle I'm dealing with. I have to find the measures of the angles x & y. First thing I tried to do was find the measure of the angle @ A. a/SinA = c/SinC 66/SinA = 25/Sin10.5 and then I end up with 28 degrees for the angle @ A. Maybe I'm looking at it the wrong way but beyond that I have no idea how to proceed with this question. Any help ? December 12th 2009, 01:19 PM So this is the triangle I'm dealing with. I have to find the measures of the angles x & y. First thing I tried to do was find the measure of the angle @ A. a/SinA = c/SinC 66/SinA = 25/Sin10.5 and then I end up with 28 degrees for the angle @ A. Maybe I'm looking at it the wrong way but beyond that I have no idea how to proceed with this question. Any help ? (angle C + angle A) - 180 = angle B December 12th 2009, 01:56 PM December 12th 2009, 02:28 PM December 13th 2009, 10:17 AM The issue here is that I'm getting 28 degrees with the calculation 66/SinA = 25/Sin10.5 the measurement for angle A is obviously not 28 degrees, so basically I'm ending up with an error and I can't understand why because so far as I understand my procedure is correct. I want to solve for angle A, then because I already have angle C I can solve for angle B. From angle B I could find the angle next to (x), thereby finding x becasue the sum of the two would be 180. makes sense now ? December 13th 2009, 03:22 PM Yes I get 28 as well. The only thing I can think of is the 10.5 angle is wrong or one of the lengths is wrong. Doesn't matter, just keep on going. So $180 - (28.76+10.5) = 140.74$ So $y =140.74$ Now use cosine law to find the right hand side then you can work out x. December 14th 2009, 02:48 AM So this is the triangle I'm dealing with. I have to find the measures of the angles x & y. First thing I tried to do was find the measure of the angle @ A. a/SinA = c/SinC 66/SinA = 25/Sin10.5 and then I end up with 28 degrees for the angle @ A. Maybe I'm looking at it the wrong way but beyond that I have no idea how to proceed with this question. Any help ? Your thinking is correct: the angle at A is 28.7574degrees. (see my attached image) But it is NOT the angle CAB, it is the angle EAB or angle CEB $<br /> \angle CEB = \dfrac{66 \sin(10.5deg)}{25} = 28.7574 deg<br />$ $\angle CEB = \angle EAB$ $\angle CAB = 180 - 28.7574 = 151.2426 <br />$ $<br /> y = \angle ABD = 180 - (10.5 + 151.2426) = 180 - 161.7426 = 18.2574<br />$ $x = \angle ADC = 161.7426$ That is what is shown on the post by bigwave.
{"url":"http://mathhelpforum.com/trigonometry/120081-find-measures-angles-massive-error-print.html","timestamp":"2014-04-18T19:05:37Z","content_type":null,"content_length":"11286","record_id":"<urn:uuid:8b56b52a-ba92-4863-b1f9-0d8d47994eee>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
Viewing and Camera Transforms, and 8 Using Viewing and Camera Transforms, and gluLookAt() 8.010 How does the camera work in OpenGL? As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye space coordinate (0., 0., 0.). To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation. 8.020 How can I move my eye, or camera, in my scene? OpenGL doesn't provide an interface to do this using a camera model. However, the GLU library provides the gluLookAt() function, which takes an eye position, a position to look at, and an up vector, all in object space coordinates. This function computes the inverse camera transform according to its parameters and multiplies it onto the current matrix stack. 8.030 Where should my camera go, the ModelView or Projection matrix? The GL_PROJECTION matrix should contain only the projection transformation calls it needs to transform eye space coordinates into clip coordinates. The GL_MODELVIEW matrix, as its name implies, should contain modeling and viewing transformations, which transform object space coordinates into eye space coordinates. Remember to place the camera transformations on the GL_MODELVIEW matrix and never on the GL_PROJECTION matrix. Think of the projection matrix as describing the attributes of your camera, such as field of view, focal length, fish eye lens, etc. Think of the ModelView matrix as where you stand with the camera and the direction you point it. The game dev FAQ has good information on these two matrices. Read Steve Baker's article on projection abuse ( local mirror ). This article is highly recommended and well-written. It's helped several new OpenGL programmers. 8.040 How do I implement a zoom operation? A simple method for zooming is to use a uniform scale on the ModelView matrix. However, this often results in clipping by the zNear and zFar clipping planes if the model is scaled too large. A better method is to restrict the width and height of the view volume in the Projection matrix. For example, your program might maintain a zoom factor based on user input, which is a floating-point number. When set to a value of 1.0, no zooming takes place. Larger values result in greater zooming or a more restricted field of view, while smaller values cause the opposite to occur. Code to create this effect might look like: static float zoomFactor; /* Global, if you want. Modified by user input. Initially 1.0 */ /* A routine for setting the projection matrix. May be called from a resize event handler in a typical application. Takes integer width and height dimensions of the drawing area. Creates a projection matrix with correct aspect ratio and zoom factor. */ void setProjectionMatrix (int width, int height) { glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective (50.0*zoomFactor, (float)width/(float)height, zNear, zFar); /* ...Where 'zNear' and 'zFar' are up to you to fill in. */ } Instead of gluPerspective(), your application might use glFrustum(). This gets tricky, because the left, right, bottom, and top parameters, along with the zNear plane distance, also affect the field of view. Assuming you desire to keep a constant zNear plane distance (a reasonable assumption), glFrustum() code might look like this: glFrustum(left*zoomFactor, right*zoomFactor, bottom*zoomFactor, top*zoomFactor, zNear, zFar); glOrtho() is similar. 8.050 Given the current ModelView matrix, how can I determine the object-space location of the camera? The "camera" or viewpoint is at (0., 0., 0.) in eye space. When you turn this into a vector [0 0 0 1] and multiply it by the inverse of the ModelView matrix, the resulting vector is the object-space location of the camera. OpenGL doesn't let you inquire (through a glGet* routine) the inverse of the ModelView matrix. You'll need to compute the inverse with your own code. 8.060 How do I make the camera "orbit" around a point in my scene? You can simulate an orbit by translating/rotating the scene/object and leaving your camera in the same place. For example, to orbit an object placed somewhere on the Y axis, while continuously looking at the origin, you might do this: gluLookAt(camera[0], camera[1], camera[2], /* look from camera XYZ */ 0, 0, 0, /* look at the origin */ 0, 1, 0); /* positive Y up vector */ glRotatef(orbitDegrees, 0.f, 1.f, 0.f);/* orbit the Y axis */ /* ...where orbitDegrees is derived from mouse motion */ glCallList(SCENE); /* draw the scene */ If you insist on physically orbiting the camera position, you'll need to transform the current camera position vector before using it in your viewing transformations. In either event, I recommend you investigate gluLookAt() (if you aren't using this routine already). 8.070 How can I automatically calculate a view that displays my entire model? (I know the bounding sphere and up vector.) The following is from a posting by Dave Shreiner on setting up a basic viewing system: First, compute a bounding sphere for all objects in your scene. This should provide you with two bits of information: the center of the sphere (let ( c.x, c.y, c.z ) be that point) and its diameter (call it "diam"). Next, choose a value for the zNear clipping plane. General guidelines are to choose something larger than, but close to 1.0. So, let's say you set zNear = 1.0; zFar = zNear + diam; Structure your matrix calls in this order (for an Orthographic projection): GLdouble left = c.x - diam; GLdouble right = c.x + diam; GLdouble bottom c.y - diam; GLdouble top = c.y + diam; glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(left, right, bottom, top, zNear, zFar); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); This approach should center your objects in the middle of the window and stretch them to fit (i.e., its assuming that you're using a window with aspect ratio = 1.0). If your window isn't square, compute left, right, bottom, and top, as above, and put in the following logic before the call to glOrtho(): GLdouble aspect = (GLdouble) windowWidth / windowHeight; if ( aspect < 1.0 ) { // window taller than wide bottom /= aspect; top /= aspect; } else { left *= aspect; right *= aspect; } The above code should position the objects in your scene appropriately. If you intend to manipulate (i.e. rotate, etc.), you need to add a viewing transform to it. A typical viewing transform will go on the ModelView matrix and might look like this: GluLookAt (0., 0., 2.*diam, c.x, c.y, c.z, 0.0, 1.0, 0.0); 8.080 Why doesn't gluLookAt work? This is usually caused by incorrect transformations. Assuming you are using gluPerspective() on the Projection matrix stack with zNear and zFar as the third and fourth parameters, you need to set gluLookAt on the ModelView matrix stack, and pass parameters so your geometry falls between zNear and zFar. It's usually best to experiment with a simple piece of code when you're trying to understand viewing transformations. Let's say you are trying to look at a unit sphere centered on the origin. You'll want to set up your transformations as follows: glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(50.0, 1.0, 3.0, 7.0); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); gluLookAt(0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0); It's important to note how the Projection and ModelView transforms work together. In this example, the Projection transform sets up a 50.0-degree field of view, with an aspect ratio of 1.0. The zNear clipping plane is 3.0 units in front of the eye, and the zFar clipping plane is 7.0 units in front of the eye. This leaves a Z volume distance of 4.0 units, ample room for a unit sphere. The ModelView transform sets the eye position at (0.0, 0.0, 5.0), and the look-at point is the origin in the center of our unit sphere. Note that the eye position is 5.0 units away from the look at point. This is important, because a distance of 5.0 units in front of the eye is in the middle of the Z volume that the Projection transform defines. If the gluLookAt() call had placed the eye at (0.0, 0.0, 1.0), it would produce a distance of 1.0 to the origin. This isn't long enough to include the sphere in the view volume, and it would be clipped by the zNear clipping plane. Similarly, if you place the eye at (0.0, 0.0, 10.0), the distance of 10.0 to the look at point will result in the unit sphere being 10.0 units away from the eye and far behind the zFar clipping plane placed at 7.0 units. If this has confused you, read up on transformations in the OpenGL red book or OpenGL Specification. After you understand object coordinate space, eye coordinate space, and clip coordinate space, the above should become clear. Also, experiment with small test programs. If you're having trouble getting the correct transforms in your main application project, it can be educational to write a small piece of code that tries to reproduce the problem with simpler geometry. 8.090 How do I get a specified point (XYZ) to appear at the center of the scene? gluLookAt() is the easiest way to do this. Simply set the X, Y, and Z values of your point as the fourth, fifth, and sixth parameters to gluLookAt(). 8.100 I put my gluLookAt() call on my Projection matrix and now fog, lighting, and texture mapping don't work correctly. What happened? Look at question 8.030 for an explanation of this problem. 8.110 How can I create a stereo view? Paul Bourke has assembled information on stereo OpenGL viewing.
{"url":"http://www.opengl.org/archives/resources/faq/technical/viewing.htm","timestamp":"2014-04-20T13:51:21Z","content_type":null,"content_length":"15371","record_id":"<urn:uuid:c151d28f-7ecb-4171-929c-6b35d5a53b5b>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
Locomotive Boilers and Engines Cylinders. The formula commonly used in determining the thickness of boiler shells, circular tanks, and cylinders is t = thickness of cylinder wall in inches p = pressure in pounds per square inch d = diameter of cylinder in inches f = safe fiber stress which for cast iron is usually taken at 1500 pounds per square inch For cylinder heads, the following empirical formula may be used in calculating the thickness: T = the thickness of the cylinder head in inches p = boiler pressure in pounds per square inch d = diameter of stud bolt circle Cylinder specifications usually call for a close grain metal as hard as can be conveniently worked. The securing of the proper proportions of a cylinder for a locomotive is a matter of great importance in locomotive design. The cylinders must be large enough so that with a maximum steam pressure they can always turn the driving wheels when the locomotive is starting a train. They should not be much greater than this, however, otherwise the pressure on the piston would probably slip the wheels on the rails. The maximum force of the steam in the cylinders should therefore be equal to the adhesion of the wheels to the rails. This may be assumed to be equal to one-fourth of the total weight on the driving wheels. The maximum mean effective piston pressure in pounds per square inch may be taken to be 85 per cent of the boiler pressure. As the length of the stroke is usually fixed, by the convenience of arrangement and the diameter of the driving wheels, a determination of the size of the cylinder usually consists in the calculation of its diameter. In order to make this calculation, the diameter of the driving wheels and the weight on them, the boiler pressure, and the stroke of the piston must be known. With this data, the diameter of the cylinder can be calculated as follows: The relation between the weight on the drivers and the diameter of the cylinder may be expressed by the following equation: W = the weight in pounds on drivers d = diameter of cylinders in inches p = boiler pressure in pounds per square inch L = stroke of piston ininches D = diameter of drivers in inches C = the numerical coefficient of adhesion From the above equation, the value of d may be obtained since the coefficient of adhesion C may be taken as .25. The equation then becomes from which Example. What will be the diameter of the cylinders for a locomotive having 196,000 pounds on the drivers, a stroke of 24 inches, drivers 63 inches in diameter, and a working steam pressure of 200 pounds per square inch? The above formula gvies a method of calculating the size of cylinders to be used with a locomotive when the steam pressure, weight on drivers, diameter of drivers, and stroke are known. This formula is based upon the tractive force of a locomotive or the amount of pull which ibis capable of exerting. The tractive force of a locomotive may be defined as being the force exerted in turning its wheels and moving itself with or without a load along the rails. It depends upon the steam pressure, the diameter and stroke of the piston, and the ratio of the weight on the drivers to the total weight of the engine, not including the tender. The formula for the tractive force of a simple engine is T = the tractive force in pounds d = diameter of cylinders in inches L = stroke of the piston in inches D = diameter of the driving wheels in inches p = boiler pressure in pounds per square inch When indicator cards are available, the mean effective pressure on the piston in pounds per square inch may be accurately determined and its value p1, may be used instead of .85 p, in which case the formula becomes Some railroads make a practice of reducing the diameter of the drivers D by 2 inches in order to allow for worn tires. In the case of a two-cylinder compound locomotive, the formula for tractive force is D = the diameter of the drivers in inches d1, = diameter of low-pressure cylinder in inches d2, = diameter of high-pressure cylinder in inches Train Resistance. The resistance offered by a train per ton of weight varies with the speed, the kind of car hauled, the condition of the track, journals and bearings, and atmospheric conditions. Taking the average condition as found upon American railroads, the train resistance is probably best represented by the Engineering News formula in which R = the resistance in pounds per net ton (2000 pounds) of load S = speed in miles per hour The force for starting is, however, about 20 pounds per ton which falls to 5 pounds as soon as a low rate of speed is obtained. The resistance due to grades is expressed by the formula
{"url":"http://sdrm.org/faqs/boilers/page131.html","timestamp":"2014-04-20T13:20:16Z","content_type":null,"content_length":"6991","record_id":"<urn:uuid:c7e65562-03b9-4fea-8287-4aea96204a46>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Relativity by Albert Einstein & Nigel Calder The Special and the General Theory ISBN 9781440627125 | 208 pages | 25 Jul 2006 | Penguin Classics | 8.26 x 5.23in | 18 - AND UP Summary of Relativity Summary of Relativity Reviews for Relativity An Excerpt from Relativity The Nobel Prize-winning scientist’s presentation of his landmark theory According to Einstein himself, this book is intended “to give an exact insight into the theory of Relativity to those readers who, from a general scientific and philosophical point of view, are interested in the theory, but who are not conversant with the mathematical apparatus of theoretical physics.” When he wrote the book in 1916, Einstein’s name was scarcely known outside the physics institutes. Having just completed his masterpiece, The General Theory of Relativity—which provided a brand-new theory of gravity and promised a new perspective on the cosmos as a whole—he set out at once to share his excitement with as wide a public as possible in this popular and accessible book. Introduction by Nigel Calder Suggestions for Further Reading Preface by Albert Einstein Part I: The Special Theory of Relativity 1. Physical Meaning of Geometrical Propositions 2. The System of Co-ordinates 3. Space and Time in Classical Mechanics 4. The Galileian System of Co-ordinates 5. The Principle of Relativity (in the Restricted Sense) 6. The Theorem of the Addition of Velocities Employed in Classical Mechanics 7. The Apparent Incompatibility of the Law of Propagation of Light with the Principle of Relativity 8. On the Idea of Time in Physics 9. The Relativity of Simultaneity 10. On the Relativity of the Conception of Distance 11. The Lorentz Transformation 12. The Behaviour of Measuring-Rods and Clocks in Motion 13. Theorem of the Addition of the Velocities. The Experiment of Fizeau 14. The Heuristic Value of the Theory of Relativity 15. General Results of the Theory 16. Experience and the Special Theory of Relativity 17. Minkowski's Four-Dimensional Space Part II: The General Theory of Relativity 18. Special and General Principle of Relativity 19. The Gravitational Field 20. The Equality of Inertial and Gravitational Mass as an Argument for the General Postulate of Relativity 21. In What Respects Are the Foundations of Classical Mechanics and of the Special Theory of Relativity Unsatisfactory? 22. A Few Inferences from the Genral Principle of Relativity 23. Behaviour of Clocks and Measuring-Rods on a Rotating Body of Reference 24. Euclidean and Non-Euclidean Continuum 25. Gaussian Co-ordinates 26. The Space-Time Continuum of the Special Theory of Relativity Considered as a Euclidean Continuum 27. The Space-Time Continuum of the General Theory of Relativity Is Not a Euclidean Continuum 28. Exact Formulation of the General Principle of Relativity 29. The Solution of the Problem of Gravitation on the Basis of the General Principle of Relativity Part III: Considerations on the Universe as a Whole 30. Cosmological Difficulties of Newton's Theory 31. The Possibility of a "Finite" and Yet "Unbounded" Universe 32. The Structure of Space According to the General Theory of Relativity 1. Simple Derivation of the Lorentz Transformation 2. Minkowski's Four-Dimensional Space ("World") 3. The Experimental Confirmation of the General Theory of Relativity (a) Motion of the Perihelion of Mercury (b) Deflection of Light by a Gravitational Field (c) Displacement of Spectral Lines towards the Red To keep up-to-date, input your email address, and we will contact you on publication Please alert me via email when:
{"url":"http://www.us.penguingroup.com/nf/Book/BookDisplay/0,,9781440627125,00.html?Relativity_Albert_Einstein","timestamp":"2014-04-19T20:34:56Z","content_type":null,"content_length":"31201","record_id":"<urn:uuid:5eee9dce-910d-42c2-9a41-8bff906e8b89>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
LO 7-1.1: Identify equations, terms, factors, constants, variables, and coefficients An equation is a statement that two quantities are equal. A variable is a letter that represents an unknown value. A root or solution of an equation is the value of the variable that makes the equation a true statement. x = 4 + 3 is an equation. x is the variable. 7 is the root or solution. Factors are expressions of multiplication. 3x means 3 times x. 3 is a factor of 3x. x is a factor of 3x. Terms are algebraic expressions that are added or subtracted. In the expression 2m + 3n + 8, 2m is a term and 3n is a term. 8 is a term. Constants are terms that contain only numbers. In 2m + 3n + 8, the constant term is 8. Variable terms are terms that have at least one letter. In 5a + 3b + 7, 5a and 3b are variable terms. A coefficient is one factor as it relates to the remaining factors of a term. In 2m + 3n + 8, 2 is the coefficient of m and 3 is the coefficient of n. 8 has no coefficient. The coefficients 2 and 3 are also called the numerical coefficients. Practice Exercises
{"url":"http://wps.prenhall.com/chet_cleaves_cmupdate_7/93/23910/6121211.cw/content/index.html","timestamp":"2014-04-21T02:01:29Z","content_type":null,"content_length":"37939","record_id":"<urn:uuid:4634269d-bbbc-4106-9160-92cd51f7833f>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
[Haskell-cafe] How to implement digital filters using Arrows John Lask jvlask at hotmail.com Mon Oct 31 23:19:02 CET 2011 On 1/11/2011 1:35 AM, Captain Freako wrote: you need to study ArrowLoop and understand that. In the code rec (y,s')<- arr f -< (x,s) s<- delay s0 -< s' the state is 'captured' in the recursive binding. i.e. just like in real circuits the output state "s" is threaded back as an input. The recursive binding is just sugar for the application of the loop combinator. The signature of the loop combinator is loop :: arrow (input, feedback) (output, feedback) -> arrow input output with the loop combinator (with which recursive arrow bindings are defined) the function could have been defined as... liftAu f s0 = loop (second (delay s0) >>> arr f ) the delay is neccessary to break the recursion. i.e. to calculate the next output and state the previous state is used. > > liftAu :: ((x,FilterState s)->(y,FilterState s)) -> FilterState s -> > FilterAu x y > > liftAu f s0 = proc x -> do > > rec (y,s')<- arr f -< (x,s) > > s<- delay s0 -< s' > > returnA -< y > I think I understand how the `returnA' in the last line of your > `liftAu' function is getting translated by those instance definitions > into: > c where > c = Automaton ( arr id&&& arr (const c) ) > and, furthermore, how that is passing the supplied `y' into the first > element of the resulting couple. However, I don't understand how the > recursively defined `c' is capturing the modified filter state and > preserving it for the next call. It seems like the Automaton being > referred to by `c' is a newly constructed entity, which knows nothing > about the current state of the running Automaton. > Any help in understanding this would be greatly appreciated. > Thanks! > -db More information about the Haskell-Cafe mailing list
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2011-October/096479.html","timestamp":"2014-04-24T22:00:39Z","content_type":null,"content_length":"4883","record_id":"<urn:uuid:87ef5858-9329-4ef8-bd2f-42d3d49ea617>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
Predation and Parasitism Lotka-Volterra Equations Competition involved two species, each of which negatively affected each other. Each species reduced the carrying capacity of the environment for the other. Predation or parasitism, however, is an interaction where one species benefits (the predator or parasite) and the other is harmed (the prey or host). This system is often modeled by using exponential growth, not the logistic equation which we studied previously. Exponential growth means that the rate of increase (or decrease) of the population of each species depends on how many of that species there are. In the Lotka-Volterra model, we add a term to the rate of change of population for the predator. This term depends upon the number of prey. So, the more prey there are, the more positive the rate of change of predator population. The rate of change of prey has a term subtracted. This term depends upon the number of predators. So, the more predators there are, the more negative the rate of change of prey population becomes. Even these simple equations are difficult to solve but when we do so, we find the following, very interesting cyclic change in predator and prey populations. Notice that this Mathematica simulation done by your instructor shows that the cyclic changes in predator and prey numbers do not follow a nice sinusoidal curve. The changes are cyclic but are not a sin function as shown in many textbooks! (The drawing to the right is a more whimsical version.) Another way to show this is with what is called a phase plane plot. Here we plot predator numbers on the vertical axis and prey numbers on the horizontal axis: followthe peaks in prey population. The predator eats the prey and so population of predator follows closely after peaks in the prey population. Problems with Lotka-Volterra There are some serious problems with the Lotka-Volterra model. One of these is the following: Some solutions to problems with Lotka-Volterra The model which leads to the results to follow puts in a carrying capacity for the prey (not the predator). This is a realistic scenario since the prey may be eating some plant material which runs out at some point. Also, this model incorporates a more realistic "encounter function" which describes the interaction between the predator and prey. The results given here are only shown as a plot of predator and prey numbers vs. time. The phase plane plot is omitted for lack of time. However, note that in this model, the results do not show sustained oscillations - predator and prey numbers approach an equilibrium point. Thus, the trajectory on a phase plane plot will spiral inwards (not outwards as shown in the last example) towards a stable equilibrium point. But, how fast the oscillations die down depends upon the various parameters in the equations. In real life, the time for the oscillations to die down could be very long. By the time they were supposed to die down, something could get them going again. The Lynx and the Snowshoe Hare Between 1845 and 1935, the Hudson Bay Company of Canada kept records of the number of lynx (a cat) and snowshoe hare (a rabbit) pelts which were sold by them. Trappers would bring in pelts and the Hudson Bay Company would buy them and resell them to furriers. Of course, the lynx is a predator on the hare and so we expect to find cyclic changes in the populations. The Hudson Bay Company records do show a remarkable, long lived cyclic behavior. But, in the period between 1875 and 1905, the cyclic change goes in the wrong direction. Plotted on a phase plane, the cycle goes clockwise, instead of counterclockwise, as we would expect. This indicates that the hare is the predator of the lynx, a very unlikely possiblity. Many people have thought about this anomaly - One possible explanation is that the trappers are also a predator. Or that the plant material which the hares eat is a third species which itself is the "prey" for the hare. So, the lynx eats the hare and the hare eats the plants. Click on the picture below to go to a link which has an article on recent population changes in the lynx and showshoe hare populations: /www.bio.miami.edu/tom/graphics/line_for.gif >
{"url":"http://www.bio.miami.edu/tom/courses/bil358/preddiscuss.html","timestamp":"2014-04-17T16:13:12Z","content_type":null,"content_length":"7210","record_id":"<urn:uuid:aec60409-b2c1-4efc-815e-7af8e01017f2>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
Measure for Measure for measure Issue 17 Nov 2001 Measure for Measure, or, How to make a carpet out of nothing. "How long is a piece of string?" [1] As long as you want it to be, of course! But when someone has asked that, either as a joke or to make a point in a conversation, did you ever stop to think that you were taking something very important for granted? Was it a question that worried you? Are you sure? "It all depends on what you mean by..." [2] When you are asked about the length of something - your height, the diameter of the earth, the size of an atom, the distance across our galaxy - you probably never doubt that an answer exists. It might be almost impossible for you to find, or you might not know what it is, but you are sure that an answer exists. Aren't you? In the real world you're (probably) right, but can you imagine objects that you can't measure? Not ones that don't exist, but real things that have no length or area or volume? Sounds weird, but they're out there. Before we meet some of them, we'll have a look at some of the strange objects you can produce just by applying a bit of (un)common sense to a line and a square. "...and then he's gone - like that!" [3] Start off with a line 1m long (Fig. 1) and remove the middle third, leaving the end points of the removed piece where they were - this gives you two lines, each Next remove the middle third of each of these two lines (Fig. 3). That gives you four lines. Now remove the middle third of each of these and so on. Imagine that you could keep on doing this indefinitely, repeating the process an infinite number of times. What would you end up with? It seems obvious that it will be a whole lot of points - the endpoints of each of the intervals; remember that we are not removing these endpoints when we take away the line segments. It might even be obvious to you that as we have repeated our procedure an infinite number of times, we will be left with an infinite number of dots. Nothing to worry about so far! Now - how much of the line did you take away? We can work this out by noticing that after the first piece has been removed we have two-thirds of the original length left. After the second stage we have two-thirds of that left. That means that we Now there might be a problem - we've removed the whole of the interval (remember that we started with a line of length one, and we've taken away lots of bits that all add up to the length one), but our common sense tells us that we should have an infinite number of points left over. How can it be that we seem to have taken the whole line away and yet there are an infinite number of points left? Don't worry - there's nothing wrong with your common sense, or even with your summing of an infinite series. What's wrong is that we're missing quite a subtle point; a point is not a little bit of a line. A point doesn't have some very small length; it has no length at all. No matter how many of them you put next to each other they will never take up any space, will never have any length. Remember that no matter how many zeros you add together, your answer will still be zero. (Notice that this is just the same method as that used to create a Cantor Dust in How big is the Milky Way? in Issue 15 of Plus except that we are using lines instead of squares. In fact, the set of points we have left in our example is called the Cantor Set. If you were wondering why the title of this article mentioned carpets, it's because when I first met the Cantor Dust, I heard it called Sierpinski's Carpet, and it's made out of nothing because if you add up the areas you have taken away, you find that you have removed all of the area with which you started.) It's time to stop talking about length and get a bit more mathematically accurate. To avoid the confusion that we've seen is easy when thinking about this subject what we need is the mathematical idea of a measure. No, not one of those metal things that you get out and unreel to check if your new curtains will fit. In maths, a measure is a way of measuring how big something is, but it has a much more precise meaning than length or area or volume, and it can be applied in a much wider range of settings. Here (briefly) are some of the important facts about measures. "Do you always play by the rules?"[4] The rules for using a measure are pretty simple. You can make any way of recording a fact about something mathematical - its size, how many pieces it's in or (for a set) how many members it has - into a measure provided it follows some basic rules. Since most things in maths can be expressed in the language of sets (unions, intersections, empty sets etc.) it makes sense to use this language when we write down our rules. 1. The measure of any set is a real number. 2. The empty set has measure zero. 3. If A and B are two sets with no elements in common (disjoint) then the measure of I've simplified these rules a bit, but they should be enough to give you an idea of how a measure works. It should be even more useful to see what happens when we apply these rules to the Cantor Set and the sections of line which we removed to create it. Call the Cantor Set, A, and call the set made up of all the sections of line that were removed, B. To save us keep writing the phrase "the measure of" in every other line, we'll adopt a useful bit of notation and write 1. For any set, 2. If (If you've ever done any probability, don't you think that rule 3 looks like the addition law for probabilities? This isn't a coincidence, but that's another story...) Sets of zero measure are very useful in maths. We're going to look at one example of how they (and the idea of measure in general) let us extend calculus and allow us to integrate some very odd "It doesn't matter. Not really now not any more."[5] Integration is (in its simplest form) a way of finding the area enclosed by a graph, the x-axis and two lines drawn up from the x-axis at x=a and x=b (a and b are called the limits of integration). Before we go on to see how measures can help us, it will be useful to see how the process of integration works out this area. What you do is to take small sections of the x-axis and draw rectangles up to the graph (Fig. 4). The sum of the areas of these rectangles then gives you an approximate value for the area under the graph, and if we let the rectangles become infinitely thin (whatever that means!), then we have the exact value - in other words we look at the limiting value as the width of the rectangles tends to zero. Now for the clever bit. When we calculate the area of the rectangles, instead of using the length of a bit of the x-axis as the width, we can use the measure of the set made up of that bit of the axis. (For the more interested reader, what we are doing here is moving from Riemann integration to Lebesgue integration. These are two really big names in the world of pure maths - a university textbook or a good encyclopaedia or online search will tell you more. In fact, the type of measure we are using here is called Lebesgue Measure.) Impressed? No? Well, you should be. Using a measure means that anything that happens on a set of measure zero makes no difference to the value of the integral. That means we can integrate lots of functions that are not smooth curves, but jump about all over the place. An example should help. Suppose that we want to integrate a simple function. We should have no trouble with the following: Now suppose that instead of the nice smooth function Since the set of points where the function takes the value zero is just that - a set of isolated points - it has measure zero and so our theory tells us that we can ignore it, and integrating f will give us the same value as before. Similarly, we can make the function take any value we like on any set of measure zero and it will have no effect on the result of the integration. This means that we can ignore a lot of jumps in functions and integrate them as if they were nice, smooth, continuous functions. It's worth noticing at this point that we need to be talking about a set of isolated points for the set to have measure zero. By 'isolated' I mean that each of the points must be separate from the rest. If the points were allowed to be 'touching' (and there were an infinite number of them!) then we'd get a section of a line, and that would have a measure greater than zero. "Your mission, should you choose to accept it..."[6] So everything's fine. We have our idea of a measure, and we've seen at least one circumstance where it's useful. Problem solved. Case closed. Not quite. If we were to leave it at that, it would be straightforward but dull! Once the ideas of measures had been worked out, it occurred to someone to ask an awkward question: "Are all sets measurable?" What we mean by this question is, once we've defined what our measure is going to be, can we calculate the measure (which should be a number remember!) for every possible set? Again, I'm simplifying here, but we have captured the spirit of the question. Fortunately (to give this article a point!) the answer is no, not all sets can be measured. Without going into too many details, what happens is that a set can be so complicated that it is impossible to measure it. Imagine a three-dimensional shape that is so jagged and crinkled that it is actually impossible to measure the volume of it, and you have a good idea of what is going on. (Naturally, the actual maths needed to make this concept precise is a bit tricky!) However, the fact that sets that have no measure exist (we'll call them non- measurable sets, just to sound more mathematical!) mean that one of the most bizarre results in all of maths is true. Something so weird that many people in the last eighty years or so have called whole areas of pure maths rubbish just because they thought that this result couldn't possibly be right. This is the Banach-Tarski Paradox, and (translated from mathematical symbols into English) it says: It is possible to take a solid sphere, cut it up into pieces and reassemble them, without bending, stretching or distorting them, to give you two solid spheres, each of which has exactly the same volume as the original. Go back and read that last paragraph again. If your common sense didn't bring you screeching to a halt in disbelief, you didn't understand it properly. Think about what this means. Get yourself a lump of gold, volume Obviously this can't happen in real life (or I'd have been out getting some old gold jewellery and a knife instead of typing this article), but there's nothing wrong with our theory. Let me explain. What the Banach-Tarski Paradox tells you to do is to take your sphere (let's say it has a volume of 1, for convenience) and cut it up into pieces which are non-measurable. Now, because they are non-measurable, you've 'lost' the information of what volume you had to start with. This means that when you put them back together you can get any volume you want; 2, 3, 4, 97 etc. There is no volume in your pieces which has to be preserved when you reassemble them. That's why the Paradox works, but it doesn't tell you why you can't do it in practice. The reason is that you can't actually physically create a non- measurable three-dimensional shape. It has to be infinitely complicated and so, although we can imagine and describe it, we can't actually make it. "Your eyes can deceive you, Luke..."[7] It might help at this point if we look at a simpler example where we take something apart, perform an operation on the bits and then put them back together to form two copies of the original. In fact, the example we are going to look at is the Banach-Tarski Paradox in action, it just doesn't seem as remarkable because we are dealing with objects that are much less familiar to us. As we go through this example, I'll point out how it corresponds to the impossible sounding version of the Paradox given above. Suppose you have four objects, Now, let (Notes, for those who already knew what a group was: Now split (This splitting corresponds to cutting up the sphere in the previous example.) Now put an extra Similarly, we can get: (Putting the extra letter in front of these sets of words corresponds to moving around the pieces of the sphere in the first example. This is the least obvious point of comparison, but mathematically we are doing the same thing in each case - performing an operation on the bits that we have obtained by cutting up our original object.) Now we glue the pieces together by finding the union of pairs of sets. In each case when we take the union of the two sets we see that the words can start with any of the four symbols and so we must have the whole of "...the end of the beginning."[8] That's it. We started off with a problem - not understanding the Cantor Set - and introduced the idea of a measure to help us get to grips with it. It then turned out that the measure is a powerful tool in a mathematician's armoury, letting us do all sorts of useful things like integrate some nasty functions. Finally we saw that when we carry our ideas to their logical conclusion, they give us some very strange results, (hopefully) opening up new areas for maths to move in and making the subject richer and its tools yet more applicable to real problems. Hopefully at this point someone is asking "What real problems?" Are measures something apart from an intellectual challenge, a game for mathematicians? They certainly are. A quick search on the Internet for 'applications of Measure Theory' will produce references to probability (where probabilities can be calculated using something called the 'Probability Measure'), dynamical systems and ergodic theory (which have applications to imaging, number theory and communication theory), physics, chemistry and mathematical economics. If reading this has whetted your appetite and you long to know more (or fill in the gaps where I've missed things out!), take a look at "The Banach-Tarski Paradox" by Stan Wagon and prepare to be [1] Common saying. [2] C. E. M. Joad, start of answer to questions on the BBC television programme Brains Trust. [3] Roger 'Verbal' Kint (Kevin Spacey) in The Usual Suspects (1995). [4] Mrs. Emma Peel (Uma Thurman) in The Avengers (1998). [5] Closing line, Red Shift by Alan Garner. [6] From Mission: Impossible, various TV and movie incarnations (1966-date) [7] Obi-Wan Kenobi (Alec Guinness) in Star Wars (1977) [8] From a speech by Winston Churchill About the author Andrew Davies graduated from Leeds University with a First in Mathematics in 1992. He then trained as a teacher, and has been working as a Maths (and occasional ICT) teacher since 1993. He took a career break to study more maths from 1996-9, and was a speaker at the NRICH-organised IMECT2 conference in July 2000, which he claims to have enjoyed, despite being terrified at the time! From September 2001 he will be working as Numeracy Coordinator at Whitehaven School in Cumbria. Submitted by Anonymous on September 4, 2012. Good argumets.. So, give me an example of what the "inverse" of an apples is..
{"url":"http://plus.maths.org/content/measure-measure","timestamp":"2014-04-18T03:00:13Z","content_type":null,"content_length":"66324","record_id":"<urn:uuid:b9e19b2e-52a8-429c-babd-9585a248b1c0>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics in a Postmodern Age: A Christian Perspective The philosophy of mathematics is undergoing a significant revival, resulting in a recent spate of books representing the beginnings of new subfields of study. This book is an example of this: it is the first book I know of (certainly in the last couple of centuries) in the subfield of religious perspectives on mathematics and its philosophy. In this review, I will discuss both the aims and value of this subfield, as well as the aims and value of the book itself. Religious perspectives on (philosophical issues in) mathematics Religious perspectives on the philosophy of mathematics have two foci, as well as two quite different potential audiences. One focus is on the issues in the philosophy of mathematics from the perspective of the given religious viewpoint: how might the religion contribute to the general discussion of the philosophy of mathematics. When this is done well, it is if interest to both those interested in philosophical issues related to mathematics, and to those sharing the faith of the authors of the book. When the discussion reconceptualizes some questions or concepts in ways which can make sense in the absence of that shared faith, it is of interest to the wider community. When, however, the issues are resolved by appeal to a deus ex machina particular to that faith, the discussion loses its wider appeal. The second focus is how mathematics and its philosophy can help the faithful understand issues with which that particular religion and its believers are struggling: just as philosophers over the centuries have used mathematics as an archetypical example for assorted epistemological and ontological discussions, various religious traditions may also want to make use of mathematics. The audience for this second agenda, however, is primarily restricted to those sharing the particular religious faith, or at least, actively interested in its discussions. This book attempts to introduce both aspects of these issues from a Christian perspective. There are people interested in such explorations from other religious perspectives as well: for example, at the first POMSIGMAA Contributed Paper Session in the Philosophy of Mathematics in January, 2003, M. Anne Dow, from the Maharishi University of Management, discussed, from the perspective of Transcendental Meditation, "A Unifying Principle Describing How Mathematical Knowledge Unfolds." In reviewing this kind of work, there are two separate questions: of what interest is the discussion to the general mathematical or philosophical community, and of what interest is it to members of the given faith. As I am not a Christian, and am writing for a mathematical publication, I will not try to address the second question at all. This book's contribution to the discussion The aim of the book is to introduce the concept of examining mathematics from a Christian perspective. The book succeeds admirably in its main project: to demonstrate that, while Christians don't have a different mathematics than do atheists, Jews, or Hindus, there is a distinctive Christian perspective on mathematics. However, while the book's nearly 400 pages would seem to imply — contrary to the comment of a friend of one of the editors, quoted in the Conclusion, "That's going to be one short book!" — that there is a lot to be said about this topic, there is in fact not a large proportion of the book devoted to what Christianity has to say about mathematics. This is presumably partly due to a desire of the editors to keep the book largely self-contained. Therefore, in many chapters most of the space is devoted to giving the general background of the philosophical issues, or the relevant pieces of mathematics, or the historical background, and only the final 10% of the chapter is spent on the Christian perspective on what has been introduced. Many of these summaries are quite nicely done. I'm not an expert in world intellectual history, but having spent twenty years at a liberal arts college where mathematics faculty take their turn teaching discussion sections of "Cultures and Traditions," I have some familiarity with the subject. The summaries of mathematics and its relation to the development of science and culture which constitutes part II of the book, "The Influence of Mathematics," seem quite balanced and well presented, for the very brief summaries they are. There is little which is specifically Christian in what is presented. The author of chapters 5 and 6 suggests that the pre-modern and modern idea that mathematics represents a form of certainty independent of God may be somewhat problematic for Christianity. So is the vision of mathematics as the structure within which one frames questions of whether an investigation is scientific and thus entitled to serious consideration. In the chapter on the mathematization of culture, the danger (both for Christians and for the world) of mathematics presenting itself as a value-free, impersonal basis for decision making is discussed. The first chapter of the book describes the modern versus postmodern world views, taking Frege as an example of the former and Paul Ernest as an example of the latter. It is basically well-written, although it starts with a list of common answers to "Why are the theorems of mathematics true?" not one which is the one which a mathematical "platonist" would give: "because they correctly describe facts about and relationships between mathematical objects." The second chapter's comparative study of the role of mathematics in ancient Greece, medieval Islam, and pre-modern China gives a good background for considering the issue of justification in mathematics, as well as to what extent is mathematics culturally determined. Overall, the quality of the discussion in this book is very uneven. The book is a collection of articles, each written primarily by one or two of its ten different authors at nine different universities on a dozen different topics in one way or another involving the philosophy of mathematics — and a bit of the sociology, psychology, and pedagogy of mathematics — from a Christian perspective. However, the editors of the book have chosen to only indicate in the acknowledgements section who was the primary author of which chapter (and have not assigned the primary responsibility of chapter 12 — Teaching and Learning Mathematics: The Influence of Constructivism — to anyone). Had they exercised sufficient editorial oversight that the book read like a uniform whole, this might have been more understandable. As is, I found myself spending a lot of time speculating, as I was reading the book, why they chose to present the book this way. (Their explanation in the acknowledgements is not very satisfactory.) There are many places where their choice is extremely awkward. For example, chapter 10, "The Possibility of Detecting Intelligent Design," refers, especially in its first half, over and over again to the work of one William Dembski. As I found this chapter particularly irritating, I finally turned to the Acknowledgements and found that he was, in fact, the principal author of the chapter! Unfortunately, the two chapters which are largely devoted to philosophical issues are the least satisfactory. In "God and Mathematical Objects" (Chapter 3) Christopher Menzel sets out to establish mathematical objects as objective and independent of human minds. This is a view I'm sympathetic to, but there are many challenges confronting this viewpoint, such as, if mathematical objects are not physical objects, and yet they're not in our minds, where are they; how can people have knowledge of these objects, etc. Menzel sets out to establish numbers as properties situated in the mind of God. To do so, he gives a thorough exposition of how Russell's paradox can be applied not just to sets, but to properties and propositions. To resolve this paradox, God must continually reconstructing all the levels of the set-theoretic hierarchy (and equivalent ones for properties) and, in his beneficence, share an understanding of this with man. One of the dangers of religious philosophy is the "pulling the rabbit out of the hat" nature of many of the arguments — as soon as there is a potential contradiction or complexity, it's dealt with via God's omnipotence, omniscience, etc. Chapter 4, "The Pragmatic Nature of Mathematical Inquiry," has nothing particularly relevant (that I can find) to Christianity in it, but it is full of rash, unsupported statements, such as "from 'no contradiction has to date been derived from B' (the proscriptive support) mathematicians conclude that 'no contradiction is in fact derivable from B' (the proscriptive generalization)." (p. 108) The point of the chapter seems to be to demonstrate that there is no more certainty in mathematics than anywhere else, since we use unsupported hypotheses as much as any science. The third part of the book, "Faith Perspectives in Mathematics," is, except for its first chapter ("Mathematics and Values," which discusses intrinsic and extrinsic arguments for the value of mathematics) about issues other than philosophy on the border of mathematics which are relevant for Christians. It seems primarily aimed at the faithful rather than at the larger community. One chapter, "Creativity and Computer Reasoning," attempts to explain why it's unlikely that we can build a computer which can think like a human being. Another, "The Possibility of Detecting Intelligent Design," attempts to shore up proofs of the existence of God by delineating what would constitute proof that the world has been designed by some intelligence rather than arising randomly. "A Psychological Perspective on Mathematical Thinking and Learning" introduces only a small part of that subject and seems more out of place than most chapters in this book. The final chapter, "Teaching and Learning Mathematics: The Influence of Constructivism," explains what constructivism means for mathematical pedagogy and where it can come into conflict with Christian values. Overall, the book is an intriguing, though not unflawed, introduction to this subject. I look forward to more discussion of these topics by some of the authors in the future. Bonnie Gold (bgold@monmouth.edu) is professor of mathematics at Monmouth University. Her interests include alternative pedagogies in undergraduate mathematics education and the philosophy of
{"url":"http://www.maa.org/publications/maa-reviews/mathematics-in-a-postmodern-age-a-christian-perspective","timestamp":"2014-04-16T20:19:37Z","content_type":null,"content_length":"104771","record_id":"<urn:uuid:3a4c65b5-1c03-4566-a851-82d3a720dce4>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from May 2011 on Random Math The following exercise is taken from Atiyah-Macdonald, chapter 1. Let $A$ be a commutative ring. Show that the following statements are equivalent: (i) $X=\mbox{Spec }A$ is disconnected; (ii) $A \cong A_1 \oplus A_2$, where neither $A_1$ or $A_2$ is the zero ring; (iii) $A$ contains an idempotent $eq 0, 1$. Proof : first we do the easiest parts. It’s clear that $(ii) \Rightarrow (iii)$; for example, the elements $(1, 0)$ and $(0,1)$ are such idempotents. To show that $(iii) \Rightarrow (ii)$, let $r \in A$ be an idempotent which is not $0$ or $1$. Let $r'=1-r$. Then $r'^2 = 1-2r+r^2 = 1-2r+r = 1-r$, so $r'$ is also an idempotent which is not $0$ or $1$. Now since $1=r+r'$, we have $a=ra+r'a$ for every $a \in A$; so $A=(r)+(r')$. Since $rr'=r(1-r)=r-r^2=r-r=0$, the sum is actually direct (as a sum of $A$-modules); indeed, if $ra+r'b=0$, then, multiplying by $r$ we have $r^2a=ra=0$. Similarily $r'b=0$. So the sum is direct. Give $(r)$ an internal ring structure by defining its unit to be $r$; this works, since $r(ra)=ra$ for every $ra \in (r)$. Then $A \cong (r) \oplus (r')$ as a direct sum of rings. Now clearly $(ii) \Rightarrow (i)$; indeed, every prime ideal of $A$ contains one of $(0,1)$ or $(1,0)$, but none contains both. Now we show that $(i) \Rightarrow (iii)$. Let $V(R_1)$ and $V(R_2)$ be closed sets in $X$ such that $X = V(R_1) \sqcup V(R_2) = V(R_1 \cap R_2)$. Suppose without loss of generality that $R_1$ and $R_2$ are ideals. Since the union is disjoint, we must have $R_1+R_2=A$, so there exists $r_1 \in R_1$ and $r_2 \in R_2$ with $r_1+r_2=1$. We must show that we can take $r_1$ and $r_2$ so that $r_1r_2=0$; this will cause the sum $R_1+R_2=A$ to be direct. Since $V(R_1 \cap R_2) = X$, and since $r_1r_2 \in R_1\cap R_2$, we have $V(r_1r_2) \supset V(R_1R_2)=X$. Thus every prime ideal in $A$ contains $r_1r_2$. Thus $r_1r_2$ is contained in the nilradical of $A$, so it is nilpotent. Let $n$ be such that $r_1^nr_2^n=0$. Then, by the binomial theorem, $r_1^n+r_2^n = 1 +r_1r_2s$ for some $s \in A$. Since $r_1r_2$ is nilpotent, so is $r_1r_2s$; hence $r_1^n+r_2^n$ is a unit, say $\ alpha(r_1^n+r_2^n)=1$. Let $r_1'=\alpha r_1^n$ and $r_2'=\alpha r_2^n$. Then $r_1'+r_2'=1$ and $r_1'r_2'=0$, as promised. This shows that $A=r_1' A \oplus r_2' A$ as $A$-modules. We can, as before, define a ring structure on $r_i'A$ be letting $r_i'$ be the unit, since we have $r_1'(r_1'+r_2')=r_1'=r_1'^2$ since $r_1r_2=0$. Thus $A \cong r_1' A \oplus r_2' A$ as rings. A cute problem I came up with this little problem last night. It’s not very difficult to prove but still fun (I think). Here it is : let $A$ be a commutative ring, and let $h(u,v) \in A[u,v]$. Suppose that, for any polynomials $f(u), g(u) \in A[u]$, we have $h(f(u), g(u))=h(g(u), f(u))$. Then $h(u,v)=h(v,u)$. I’ll post my solution in a couple of days to see if anyone can come up with an alternative solution in the meantime. :) A nice problem in Galois theory This problem was given to me by my research supervisor. This is the problem and my solution: Let $k$ be a field of characteristic zero, and $k(x)$ the rational function field in one variable over $k$. Suppose $F_1$ and $F_2$ are subfields of $k(x)$ such that $[k(x):F_1]$ and $[k(x):F_1]$ are finite. Is it possible for $[k(x):F_1\cap F_2]$ to be infinite? Indeed, it is. Note that $\mbox{Gal }(k(x)/k) = \mbox{PSL}_2(k)$, the projective special general linear group over $k$ (we identify its elements with Möbius transformations). Let $S, T$ denote the generators of the modular group (or rather, their image in $\mbox{PSL}_2(k)$). Note that they are of finite order. Hence, taking $F_1=k (x)^{<S>}$ and $F_2=k(x)^{<T>}$, we have $[k(x):F_i]<\infty$ for $i=1,2$. However, $F_1 \cap F_2$ is fixed by the whole modular group, which is infinite since $k$ has characteristic zero. Hence $k(x) $ cannot be of finite index over $F_1 \cap F_2$. I don’t know whether such a construction is possible if $k$ has prime characteristic. I’d be interested to know if you find out!
{"url":"http://mathramble.wordpress.com/2011/05/","timestamp":"2014-04-21T15:02:57Z","content_type":null,"content_length":"40424","record_id":"<urn:uuid:7dde23d0-0461-42c1-b8d9-054099e2571b>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
# Polynomials with no zero April 9th 2009, 09:47 AM #1 MHF Contributor May 2008 # Polynomials with no zero Let $n \geq 0$ be an integer and $p$ a prime number. Find a formula for the number of monic polynomials of degree $n$ in $\mathbb{F}_p[x]$ which have no zero in $\mathbb{F}_p.$ Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/math-challenge-problems/83021-polynomials-no-zero.html","timestamp":"2014-04-18T08:08:30Z","content_type":null,"content_length":"31740","record_id":"<urn:uuid:7cfd0081-2841-4448-aa56-a821cfb0dbfe>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00573-ip-10-147-4-33.ec2.internal.warc.gz"}